| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
842607673 | Armstrong number using the recursive approach in Dart
🚀 Feature
Add code for Armstrong number using the recursive approach in Dart.
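For illustration, a minimal Dart sketch of the requested recursive check (illustrative only, not the contributor's actual submission):

```dart
// Check whether a number is an Armstrong number by recursively summing
// each digit raised to the power of the total digit count.
int digitPowerSum(int n, int digits) {
  if (n == 0) return 0;
  int d = n % 10;
  int p = 1;
  for (var i = 0; i < digits; i++) {
    p *= d;
  }
  return p + digitPowerSum(n ~/ 10, digits);
}

bool isArmstrong(int n) => digitPowerSum(n, n.toString().length) == n;

void main() {
  print(isArmstrong(153)); // true: 1^3 + 5^3 + 3^3 == 153
  print(isArmstrong(154)); // false
}
```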
Have you read the Contributing Guidelines on Pull Requests?
Yes
/assign
| gharchive/issue | 2021-03-27T21:41:33 | 2025-04-01T06:37:37.189453 | {
"authors": [
"ritvij14"
],
"repo": "TesseractCoding/NeoAlgo",
"url": "https://github.com/TesseractCoding/NeoAlgo/issues/4296",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
865164858 | Floyd's Triangle
💥 Proposal
Floyd's triangle program in Kotlin
1
2 3
4 5 6
7 8 9 10
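For illustration, a minimal Kotlin sketch that prints this pattern (not the contributor's actual submission):

```kotlin
// Print Floyd's triangle: consecutive integers laid out in rows of
// increasing length (row k holds k numbers).
fun floydsTriangle(rows: Int) {
    var n = 1
    for (row in 1..rows) {
        for (col in 1..row) {
            print("$n ")
            n++
        }
        println()
    }
}

fun main() {
    floydsTriangle(4) // prints the 4-row pattern shown above
}
```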
Have you read the Contributing Guidelines on Pull Requests?
yes
/assign
| gharchive/issue | 2021-04-22T16:38:14 | 2025-04-01T06:37:37.191455 | {
"authors": [
"hemant2705"
],
"repo": "TesseractCoding/NeoAlgo",
"url": "https://github.com/TesseractCoding/NeoAlgo/issues/6433",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
858973482 | Trapping rain water in python
Have you read the Contributing Guidelines on Pull Requests?
yes
Description
Added code for trapping rain water
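For context, a minimal sketch of the classic two-pointer approach the title refers to (the PR's actual implementation may differ):

```python
# Water trapped above each bar is bounded by the smaller of the tallest
# walls to its left and right; two pointers track those maxima in O(n).
def trap(height):
    left, right = 0, len(height) - 1
    left_max = right_max = trapped = 0
    while left < right:
        if height[left] < height[right]:
            left_max = max(left_max, height[left])
            trapped += left_max - height[left]
            left += 1
        else:
            right_max = max(right_max, height[right])
            trapped += right_max - height[right]
            right -= 1
    return trapped

print(trap([0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1]))  # 6
```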
Checklist
[ ] I've read the contribution guidelines.
[ ] I've checked the issue list before deciding what to submit.
[ ] I've edited the README.md and link to my code.
Related Issues or Pull Requests
Fixes: #5851
@HarshCasper please review
@ankitaggarwal23 please review
| gharchive/pull-request | 2021-04-15T15:02:22 | 2025-04-01T06:37:37.194420 | {
"authors": [
"Amit366"
],
"repo": "TesseractCoding/NeoAlgo",
"url": "https://github.com/TesseractCoding/NeoAlgo/pull/5913",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2301764741 | Restore capability to create draft release
This was temporarily blocked in issue #67. By upgrading GitReleaseManager to version 0.17, it can be restored.
:tada: This issue has been resolved in version 1.2.1 :tada:
The release is available on:
GitHub Release
NuGet Package
| gharchive/issue | 2024-05-17T04:00:14 | 2025-04-01T06:37:37.204167 | {
"authors": [
"CharliePoole"
],
"repo": "TestCentric/TestCentric.Cake.Recipe",
"url": "https://github.com/TestCentric/TestCentric.Cake.Recipe/issues/75",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1509914557 | 🛑 Proxmox is down
In 0595ded, Proxmox (https://pve.tetragg.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Proxmox is back up in cfedffd.
| gharchive/issue | 2022-12-24T02:49:44 | 2025-04-01T06:37:37.231509 | {
"authors": [
"TetraGG"
],
"repo": "TetraGG/Upptime",
"url": "https://github.com/TetraGG/Upptime/issues/1075",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1524012576 | 🛑 Proxmox is down
In 5c31c6d, Proxmox (https://pve.tetragg.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Proxmox is back up in 3b7c693.
| gharchive/issue | 2023-01-07T17:44:41 | 2025-04-01T06:37:37.233894 | {
"authors": [
"TetraGG"
],
"repo": "TetraGG/Upptime",
"url": "https://github.com/TetraGG/Upptime/issues/3427",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1524697114 | 🛑 Proxy is down
In a2426c3, Proxy (https://proxy.tetragg.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Proxy is back up in 1f08de9.
| gharchive/issue | 2023-01-08T21:58:15 | 2025-04-01T06:37:37.236244 | {
"authors": [
"TetraGG"
],
"repo": "TetraGG/Upptime",
"url": "https://github.com/TetraGG/Upptime/issues/3606",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1533825966 | 🛑 Jellyfin is down
In fd1e9f2, Jellyfin (https://jellyfin.tetragg.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Jellyfin is back up in db391f6.
| gharchive/issue | 2023-01-15T13:56:05 | 2025-04-01T06:37:37.238583 | {
"authors": [
"TetraGG"
],
"repo": "TetraGG/Upptime",
"url": "https://github.com/TetraGG/Upptime/issues/4545",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1572396169 | 🛑 Proxy is down
In 7f789b4, Proxy (https://proxy.tetragg.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Proxy is back up in 6e2baa4.
| gharchive/issue | 2023-02-06T11:26:05 | 2025-04-01T06:37:37.241155 | {
"authors": [
"TetraGG"
],
"repo": "TetraGG/Upptime",
"url": "https://github.com/TetraGG/Upptime/issues/7856",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
193829873 | Does amwiki support file paths containing whitespace?
Hi author, I ran into a problem while writing articles with amwiki. The situation is as follows:
Since amwiki generates the page navigation from the file structure under the Library directory, the .md file names naturally determine the text shown in the navigation. Now I want the navigation to show text like "XXXWindows SDK", which contains whitespace, so by rights I should create a corresponding .md file with a name like "001XXXWindows SDK". After creating the file, when I run the "open this document in the browser" function, it automatically lands on the amwiki home page and cannot reach the page for the current .md file. If I replace the whitespace with underscores or similar, it works fine.
That's roughly the situation. What I really want is for the navigation text not to contain underscores or other characters that hinder reading. Is there a way to solve this? Renaming the files doesn't count, of course 〒▽〒
Hmm, sometimes file names really do need to contain spaces :smiley:
This only needs a small change: adding .replace(/ /g, '%20') at line 197 of webServer.js allows file names with spaces.
This fix will be included in the next version, 0.7.6.
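For illustration, the effect of that one-line change (the surrounding webServer.js context is not shown; the file name here is just an example):

```javascript
// Percent-encode spaces so a path like "001XXXWindows SDK.md" resolves to
// the right page instead of falling back to the wiki home page.
const filePath = '001XXXWindows SDK.md';
const urlPath = filePath.replace(/ /g, '%20');
console.log(urlPath); // "001XXXWindows%20SDK.md"
```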
Thanks so much for the great support! O(∩_∩)O~~
This issue has been fixed in v0.7.6.
| gharchive/issue | 2016-12-06T17:09:38 | 2025-04-01T06:37:37.244521 | {
"authors": [
"TevinLi",
"YaoXuanZhi"
],
"repo": "TevinLi/amWiki",
"url": "https://github.com/TevinLi/amWiki/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
74184428 | Divide array in 3 columns
Can we divide "type":"array" into 2 or 3 columns and have a tabarray with horizontal tabs? Please suggest a workaround.
I don't think I understand, do you want the same array flowing over three columns?
Columns can be achieved by using standard bootstrap classes, see http://schemaform.io/examples/bootstrap-example.html#/4fa8967ae5596fe8b0c0
tabarray type has support for horizontal tabs, just set tabType to "top", here are the docs https://github.com/Textalk/angular-schema-form/blob/development/docs/index.md#tabarray
In my project I achieve columns by splitting the items array in half using a custom fieldset decorator like:
<fieldset ng-disabled="form.readonly" class="schema-form-fieldset schema-form-fieldset-columns {{form.htmlClass}}">
<legend ng-show="showTitle()">{{ form.title }}</legend>
<div ng-show="form.description" ng-bind-html="form.description"></div>
<div class="column">
<div class="row">
<sf-decorator ng-repeat="item in form.items|arrayHalf" form="item"></sf-decorator>
</div>
</div>
<div class="column">
<div class="row">
<sf-decorator ng-repeat="item in form.items|arrayHalf:1" form="item"></sf-decorator>
</div>
</div>
</fieldset>
The arrayHalf filter is really easy to implement...
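For illustration, one way that filter could be implemented ('myApp' and the exact semantics — no argument for the first half, a truthy argument for the second — are assumptions):

```javascript
// AngularJS filter that returns one half of an array, so two ng-repeats
// can render the halves side by side as columns.
angular.module('myApp').filter('arrayHalf', function () {
  return function (items, second) {
    if (!Array.isArray(items)) return items;
    var mid = Math.ceil(items.length / 2);
    return second ? items.slice(mid) : items.slice(0, mid);
  };
});
```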
Please re-open if you do not feel the question has been answered sufficiently.
| gharchive/issue | 2015-05-08T01:18:22 | 2025-04-01T06:37:37.249952 | {
"authors": [
"arteme",
"davidlgj",
"nicklasb",
"subhendupsingh"
],
"repo": "Textalk/angular-schema-form",
"url": "https://github.com/Textalk/angular-schema-form/issues/383",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1554299021 | chore: bump version and delay dependabot
related: #151 #184
This PR bumps the deps and tries to slow down dependabot to reduce the noise.
We will try to adopt renovate bot next week.
And I will manually update the dependencies temporarily.
cc. @rtritto
| gharchive/pull-request | 2023-01-24T04:29:00 | 2025-04-01T06:37:37.251593 | {
"authors": [
"pionxzh"
],
"repo": "TexteaInc/json-viewer",
"url": "https://github.com/TexteaInc/json-viewer/pull/216",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2347111128 | [REQUEST] Rich progress bar changes color based on the number steps/files elapsed.
Consider posting in https://github.com/textualize/rich/discussions for feedback before raising a feature request.
Have you checked the issues for similar suggestions?
Yes
How would you improve Rich?
My application submits a bunch of transformations to a server, and the rich bar gives the status of the transformations and eventual downloads. However, these transformations are prone to failures, and I would like to change the color of my progress bar to indicate that some failure has occurred while other transformations continue.
I would like to change the bar color to red because it stopped just before finishing.
The update function does provide ways to make changes but I couldn't find a way to change the color of the bar using BarColumn() when passed as a keyword argument to the update method.
e.g
progress.update(
progress_task,
progress_bar_title,
completed=self.current_status.files_completed,
bar = BarColumn(complete_style="rgb(0,0,255)",
style="rgb(255,0,0)")
)
What problem does it solve for you?
This will add more style to the progress bar when there are failures or if we can't reach 100%.
You can override Progress.get_renderables() to display the progress however you like.
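For illustration, a minimal sketch of that approach (the custom "failed" task field is an assumption, not part of Rich's API):

```python
from rich.progress import BarColumn, Progress, TextColumn

class ColorAwareProgress(Progress):
    def get_renderables(self):
        # Restyle the bar per task: red if our custom "failed" field is set.
        for task in self.tasks:
            style = "red" if task.fields.get("failed") else "green"
            self.columns = (
                TextColumn("[progress.description]{task.description}"),
                BarColumn(complete_style=style),
            )
            yield self.make_tasks_table([task])
```

A task would opt in via progress.add_task("download", failed=False) and flip the flag later with progress.update(task_id, failed=True).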
| gharchive/issue | 2024-06-11T19:13:24 | 2025-04-01T06:37:37.255604 | {
"authors": [
"ketan96-m",
"willmcgugan"
],
"repo": "Textualize/rich",
"url": "https://github.com/Textualize/rich/issues/3379",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
911604909 | AE2 Wireless Terminals (ae2wtlib) encountered an error during the load_registries event phase
AE2 Wireless Terminals (ae2wtlib) encountered an error during the load_registries event phase
java.lang.RuntimeException: Cannot find class appeng/container/implementations/MEMonitorableContainer
crash-2021-06-04_18.12.47-fml.txt
very undescriptive report, but this looks like your ae2 version is too new
Wouldn't that be more like, this mod is outdated? If it requires an older version of AE2
Came here with same issue btw, crash log here: https://pastebin.com/H3zFJNmK
Bump.
Running into the same issue now. Here is a link to my crash log as well: https://pastebin.com/HUKwVjKR
Same issue, looks like this addon hasn't been updated since May
same issue here
Repeating an already known issue won't fix it any faster.
I just updated my server and ran into this issue. The crash report led me here, so I've uploaded the crash report to pastebin; if that helps, let me know.
https://pastebin.com/KrVtgELd
I've got to figure out how to get my server to run. If you need anything, ask. I have access to all of the files for the server and pack.
| gharchive/issue | 2021-06-04T15:17:33 | 2025-04-01T06:37:37.264672 | {
"authors": [
"DrAkashic",
"MrTubzy456",
"Tfarcenim",
"aaronhowser1",
"epicyeeto",
"j4rw15",
"tankcr"
],
"repo": "Tfarcenim/AE2WirelessTerminalLibrary",
"url": "https://github.com/Tfarcenim/AE2WirelessTerminalLibrary/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1046391194 | Advantages of ezcolors over the de facto color library?
Hello,
I am in the market for a python color manipulation library for a new project. Any chance of a paragraph or two of why ezcolors was created and how it compares to the other libraries out there?
Many thanks,
Scott.
Sure thing. EzColors was created when I was trying to find a multi platform terminal colour solution that was easy to use but capable of a wide array of different colour functionalities like gradients, rainbows and various generateable colour palettes and I couldn't find anything that quite fit the bill of what I was looking for.
So I started work on a few functions and classes to make ANSI-escape coloured strings, in the form of an ordinary str or a ColorStr class with various methods to fiddle with the colours etc. I got it to where it was usable for making coloured text relatively easily with nothing but ColorStr("text", "foreground colour in RGB, integer or hex", "background colour") and sort of left it for a while, but then later returned and added some extra functionality like simple coloured splash screens and separators, a coloured yes/no/whatever-option-you-want prompt, and coloured exceptions for easier reading.
The whole thing is somewhat of a mess and in need of cleanup, but I use it in many of my projects and for quick prototyping. Overall though, I'm not sure I'd recommend it over some alternatives; I haven't really looked for too long for exactly what I needed.
If you wanna take a look and suggest any improvements I'd be happy to hear. There's a whole suite of eztools I've got in the works
EzFiles (contains a file class for much easier file manipulation)
EzConfig (fully featured configs from 1 line)
EzPack (dynamically generate, package and build python scripts, modules and packages)
EzColors
Ezcmd (decorator based system to very easily turn any script into a command line utility and a class to easily create a command prompt from a list of functions)
EzTest (easy decorator based unit tests with coloured output and reports with a Test class and a Case class that can be extended easily to automatically generate and cache testcases)
EzValid (a bunch of regex-powered validators)
With all being available either separately or as 1 package called eztools, with additional utilities such as a 1-line simple input/output tkinter GUI, and Timers
Sorry to just dump a huge wall of text.
In short I'm an intermediate-advanced solo python dev I wouldn't use my EzColors for anything super important but please do mess about with it and see how you like it. I will update the documentation soon as some stuff has probably slightly changed since I did it
Also to note I was high both when I wrote EzColors and when I wrote this comment. And both my comments and the entirety of EzColors development were written on a Samsung galaxy a6. So when I say it's multi platform it works everywhere. Mac, windows, Linux, Android
Also only just realised how long ago you commented this haha. Sorry for such a slow reply
| gharchive/issue | 2021-11-06T04:19:35 | 2025-04-01T06:37:37.270956 | {
"authors": [
"Th3M4ttman",
"scott91e1"
],
"repo": "Th3M4ttman/ezcolors",
"url": "https://github.com/Th3M4ttman/ezcolors/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1978049382 | Chrome android does not work anymore
Subtitles simply don't appear on chrome android anymore (could not tell from which version, it was already the case in 1.7.9 but maybe before too). This can be reproduced from this page, for example: https://thaunknown.github.io/jassub/jassub/simple/index.html
There aren't any errors, and debug messages in the console appear when the subtitle should be shown. It feels like everything is working as it should, but subtitles are not shown.
can't reproduce
Interesting, even with brave, it does not work for me.
this is likely due to offscreen render, it's a chromium bug
The issue for the sandbox was #32 (which was fixed) but my codesandbox does indeed show another issue (I have not faced in production yet)
as I said, disabling offscreen render fixes this on android, I don't know why this happens, I assume it occurs when the bitmap given to skia is too big?
closing as no reproduction is available
| gharchive/issue | 2023-11-06T00:23:56 | 2025-04-01T06:37:37.274777 | {
"authors": [
"ThaUnknown",
"zoriya"
],
"repo": "ThaUnknown/jassub",
"url": "https://github.com/ThaUnknown/jassub/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
954125005 | 🛑 Portfolio@OSS is down
In 611d57e, Portfolio@OSS (http://shekhar.aitoss.club/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Portfolio@OSS is back up in 0d988d2.
| gharchive/issue | 2021-07-27T17:54:24 | 2025-04-01T06:37:37.294899 | {
"authors": [
"The-Anton"
],
"repo": "The-Anton/upptime-test",
"url": "https://github.com/The-Anton/upptime-test/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1769507602 | Add server message to error object
When we make a request via the db or call object and an error occurs, the Frappe backend sends a friendly message to the client via the _server_messages key.
I cannot extract it in .catch((error) => { ... }); it seems that you did not include it in the error object.
Here is the server message as I see it in the network response.
Could you please help include this message in the .catch error object?
Thanks,
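For reference, a sketch of how such a payload can be unwrapped client-side (the response shape is assumed from Frappe's conventions; this is not the SDK's actual API):

```typescript
// Frappe encodes _server_messages as a JSON array of JSON-encoded
// objects, each carrying a user-facing "message" string.
function extractServerMessages(data: { _server_messages?: string }): string[] {
  if (!data._server_messages) return [];
  const raw: string[] = JSON.parse(data._server_messages);
  return raw.map((entry) => JSON.parse(entry).message as string);
}
```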
@frappeerpnext - yes this is something that needs to be added. I'll try adding it in.
Yes, I hope this will be added soon. This error information is very useful and will save us a lot of time customizing friendly messages for end users.
Thanks,
Hi Guys,
I have updated and tested it. Working perfectly.
Big Thanks
| gharchive/issue | 2023-06-22T11:43:32 | 2025-04-01T06:37:37.297869 | {
"authors": [
"frappeerpnext",
"nikkothari22"
],
"repo": "The-Commit-Company/frappe-js-sdk",
"url": "https://github.com/The-Commit-Company/frappe-js-sdk/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1766815269 | Implementation of ML model on Happiness index data.
💥 Proposal
I would like to apply a machine learning model to Happiness index data. By using the data I would like to predict the overall rank. Please assign this to me.
@karthikbhandary2 - please provide the dataset.
This is present in this repo only. Link: https://github.com/The-Data-Alchemists-Manipal/MindWave/tree/main/Data Analytics/Happines_index_Data_analysis_visualization
@karthikbhandary2 - you can go ahead! We are assigning you 21 days for this project, after which it will be assigned to someone else if not completed. All the best!
Name the file as: algorithm_dataset.ipynb and link it in the readme of the labeled directory as algorithm - dataset.
ok @khusheekapoor
| gharchive/issue | 2023-06-21T06:04:12 | 2025-04-01T06:37:37.304790 | {
"authors": [
"karthikbhandary2",
"khusheekapoor"
],
"repo": "The-Data-Alchemists-Manipal/MindWave",
"url": "https://github.com/The-Data-Alchemists-Manipal/MindWave/issues/454",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1357605921 | Flow consistently fails regardless of configurations
Description
When hardening a SHA1 design inside the caravel user project, flow fails with FP_SIZING=relative and FP_CORE_UTIL=30%. Other configurations were also tried but hardening failed for all. Configuration trials can be seen in openlane/sha1_top/config.tcl (https://github.com/WebKingdom/bitcoin_asic/blob/055aed80953a4996cf8b931a32cf2a102ebe4ca6/openlane/sha1_top/config.tcl).
Expected behavior
The design should be able to harden and go through layout and placement.
Environment
Kernel: Darwin v21.6.0
Distribution: macOS 10.16
Python: v3.9.0 (OK)
Container Engine: docker v20.10.17 (OK)
OpenLane Git Version: f9b5781f5ef0bbdf39ab1c2bbd78be8db11b27f2
pip: INSTALLED
pip:venv: INSTALLED
---
PDK Version Verification Status: OK
---
Git Log (Last 3 Commits)
f9b5781 2022-07-01T16:04:31+02:00 Fix a bug with `-overwrite` (#1171) - Anton Blanchard - (grafted, HEAD, tag: 2022.07.02_01.38.08)
Reproduction Material
issue_reproducible attached as zip.
issue_reproducible.zip
Logs
Console logs:
[STEP 18]
[INFO]: Running Detailed Placement...
[ERROR]: during executing openroad script /openlane/scripts/openroad/opendp.tcl
[ERROR]: Exit code: 1
...
[ERROR]: Last 10 lines:
[INFO DPL-0035] ANTENNA__14155__A1
[INFO DPL-0035] ANTENNA__14155__A1
[INFO DPL-0035] ANTENNA__20474__A2
[INFO DPL-0035] ANTENNA__20930__A
[INFO DPL-0035] ANTENNA__20930__A
[INFO DPL-0035] ANTENNA__16103__A0
[INFO DPL-0035] message limit reached, this message will no longer print
[ERROR DPL-0036] Detailed placement failed.
Error: opendp.tcl, 32 DPL-0036
@WebKingdom
Update your config.tcl with the following configuration to resolve the issue:
set ::env(CELL_PAD) 2
You can also use set ::env(PL_TARGET_DENSITY) 40 to utilize more core area.
I set those 2 variables but set PL_TARGET_DENSITY = 0.4 (not 40). I was able to get further to [STEP 39] with the following openlane.log:
[INFO]: Writing Verilog...
[INFO]: Running LEF LVS...
[INFO]: Running Magic DRC...
[INFO]: Converting Magic DRC Violations to Magic Readable Format...
[INFO]: Converting Magic DRC Violations to Klayout XML Database...
[ERROR]: There are violations in the design after Magic DRC.
[ERROR]: Total Number of violations is 5
[INFO]: Saving current set of views in '../Users/somasz/Documents/GitHub/mpw_6c/caravel_design/caravel_bitcoin_asic/openlane/sha1_top/runs/22_09_01_01_54/results/final'...
[INFO]: Generating final set of reports...
[INFO]: Created manufacturability report at '../Users/somasz/Documents/GitHub/mpw_6c/caravel_design/caravel_bitcoin_asic/openlane/sha1_top/runs/22_09_01_01_54/reports/manufacturability.rpt'.
[INFO]: Created metrics report at '../Users/somasz/Documents/GitHub/mpw_6c/caravel_design/caravel_bitcoin_asic/openlane/sha1_top/runs/22_09_01_01_54/reports/metrics.csv'.
[INFO]: Saving runtime environment...
[ERROR]: Flow failed.
The manufacturability.rpt contained:
Design Name: sha1_top
Run Directory: /Users/somasz/Documents/GitHub/mpw_6c/caravel_design/caravel_bitcoin_asic/openlane/sha1_top/runs/22_09_01_01_54
Magic DRC Summary:
Source: /Users/somasz/Documents/GitHub/mpw_6c/caravel_design/caravel_bitcoin_asic/openlane/sha1_top/runs/22_09_01_01_54/reports/signoff/drc.rpt
Violation Message "Min area of metal1 holes > 0.14um^2 (met1.7) "found 5 Times.
Total Magic DRC violations is 5
LVS Summary:
Source: /Users/somasz/Documents/GitHub/mpw_6c/caravel_design/caravel_bitcoin_asic/openlane/sha1_top/runs/22_09_01_01_54/logs/signoff/38-sha1_top.lvs.lef.log
LVS reports no net, device, pin, or property mismatches.
Total errors = 0
Antenna Summary:
No antenna report found.
If you update open_pdks to the latest version, it may resolve this error.
I do have the latest version of OpenLane and still get the error. Any suggestions? Thanks!
OpenLane Git Version: f9b5781f5ef0bbdf39ab1c2bbd78be8db11b27f2
While creating this issue, you shared the OpenLane version above.
Can you post the latest OpenLane version you tried?
I tested the sha1_top design at my end and am not seeing the Magic DRC error.
[INFO]: Running LEF LVS...
[STEP 43]
[INFO]: Running Magic DRC (log: logs/signoff/43-drc.log)...
[INFO]: Converting Magic DRC Violations to Magic Readable Format...
[INFO]: Converting Magic DRC Violations to Klayout XML Database...
[INFO]: No DRC violations after GDS streaming out.
[STEP 44]
[INFO]: Running OpenROAD Antenna Rule Checker (log: logs/signoff/44-antenna.log)...
[STEP 45]
[INFO]: Running CVC (log: logs/signoff/45-erc_screen.log)...
[INFO]: Saving current set of views in 'results/final'...
[INFO]: Saving runtime environment...
[INFO]: Generating final set of reports...
[INFO]: Created manufacturability report at 'reports/manufacturability.rpt'.
[INFO]: Created metrics report at 'reports/metrics.csv'.
[WARNING]: There are max fanout violations in the design at the typical corner. Please refer to 'reports/signoff/33-rcx_sta.slew.rpt'.
[INFO]: There are no hold violations in the design at the typical corner.
[INFO]: There are no setup violations in the design at the typical corner.
[SUCCESS]: Flow complete.
[INFO]: Note that the following warnings have been generated:
[WARNING]: There are max fanout violations in the design at the typical corner. Please refer to 'reports/signoff/33-rcx_sta.slew.rpt'.
I think this is fixed now?
| gharchive/issue | 2022-08-31T16:49:24 | 2025-04-01T06:37:37.322059 | {
"authors": [
"WebKingdom",
"donn",
"vijayank88"
],
"repo": "The-OpenROAD-Project/OpenLane",
"url": "https://github.com/The-OpenROAD-Project/OpenLane/issues/1299",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1810809629 | make write_lef and others write out files atomically
Description
A half-written .lef file can be lying around after a crash, which complicates e.g. deltaDebug.py.
Suggestion:
modify all write_* commands to write to a temp file
when the write completes, rename temp file to final name
This way there are no partial files lying around
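A minimal sketch of that pattern (not OpenROAD's actual code): std::rename replaces the target atomically on POSIX filesystems, so a crash mid-write leaves at most a stray .tmp file.

```cpp
#include <cstdio>
#include <fstream>
#include <string>

// Write contents to path via a temp file plus rename, so readers never
// observe a partially written file.
bool writeFileAtomically(const std::string& path, const std::string& contents)
{
  const std::string tmp = path + ".tmp";
  {
    std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
    if (!out) {
      return false;
    }
    out << contents;
    out.flush();
    if (!out) {
      return false;
    }
  }  // close the stream before renaming
  return std::rename(tmp.c_str(), path.c_str()) == 0;
}
```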
Suggested Solution
No response
Additional Context
No response
Hello @maliberty @oharboe , I'd like to work on this issue. Could you please assign it to me ?
@annapetrosyan26 You can simply create a pull request without being assigned this task; I don't have the role to assign GitHub issues in this project.
I haven't seen @maliberty assign GitHub issues to new contributors, but perhaps that's something that would be appropriate?
Thank you @oharboe for the clarification. Then I'll work on the issue and once the code is ready I will create a pull request.
@maliberty Fixed, no?
Fixed in https://github.com/The-OpenROAD-Project/OpenROAD/pull/5109
| gharchive/issue | 2023-07-18T22:25:40 | 2025-04-01T06:37:37.326790 | {
"authors": [
"annapetrosyan26",
"maliberty",
"oharboe"
],
"repo": "The-OpenROAD-Project/OpenROAD",
"url": "https://github.com/The-OpenROAD-Project/OpenROAD/issues/3658",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2551259012 | updated SDCs to define both read and write clocks
PR to update SRAM SDC file to define both clocks
Isn't this also used by BoomTile? In which case we need an additional .sdc file.
Also, the BUILD.bazel needs to be updated to use the file.
If I read the BUILD.bazel correctly, the current constraints.sdc is used by the SRAM, regfile, and L1MetadataArray abstract generation flows. It is not used by BoomTile (it uses constraints-boomtile.sdc).
But, yes, the proposed change wouldn't work for dataArrayB, tag_array*, or L1MetadataArray, since their clocks are named either RW0_clk, R*_clk, W*_clk or just clock.
So, it seems like we might be able to:
Modify the existing constraints.sdc to use *_clk to find the clock ports and use it for all SRAMs and regfiles
Have L1MetadataArray continue to use the existing constraints-boomtile.sdc since they both have just "clock"
Please correct me if I'm wrong, but I think that will hook up the appropriate clocks for the abstract generation flow.
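For illustration, a sketch of what that wildcard SDC could look like (the period value and the get_property lookup are placeholders, not megaboom's actual constraints):

```tcl
# Create one clock per port whose name ends in _clk, covering both the
# read (R*_clk) and write (W*_clk) clocks of the SRAM macros.
set period 2.0
foreach port [get_ports *_clk] {
  create_clock -name [get_property $port name] -period $period $port
}
```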
Yes, this should work.
lgtm.
should this have gone through jenkins CI?
Interesting, the Jenkins CI jobs got kicked off and passed:
https://jenkins.openroad.tools/job/megaboom-Public/view/change-requests/job/PR-118-head/
https://jenkins.openroad.tools/job/megaboom-Public/view/change-requests/job/PR-118-merge/
Maybe there needs to be some hook up in GitHub so that it shows up in the Rules section? @vvbandeira , how do we register the CI as a check?
| gharchive/pull-request | 2024-09-26T18:21:12 | 2025-04-01T06:37:37.332806 | {
"authors": [
"jeffng-or",
"oharboe"
],
"repo": "The-OpenROAD-Project/megaboom",
"url": "https://github.com/The-OpenROAD-Project/megaboom/pull/118",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
932972008 | Create n_bonacci.cpp
Description of Change
Checklist
[x] Added description of change
[x] Added file name matches File name guidelines
[x] Added tests and example, test must pass
[x] Added documentation so that the program is self-explanatory and educational - Doxygen guidelines
[x] Relevant documentation/comments is changed or added
[x] PR title follows semantic commit guidelines
[x] Search previous suggestions before making a new one, as yours may be a duplicate.
[x] I acknowledge that all my contributions will be made under the project's license.
Notes:
@Panquesito7 See now this PR, I think you will be able to see it clearly now as I have recreated it from the correct fork.
Thanks in advance 😄
| gharchive/pull-request | 2021-06-29T18:04:43 | 2025-04-01T06:37:37.338989 | {
"authors": [
"Swastyy"
],
"repo": "TheAlgorithms/C-Plus-Plus",
"url": "https://github.com/TheAlgorithms/C-Plus-Plus/pull/1518",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1359021652 | No property named GuiControl
Hi,
(Thank you for all your AHK libraries!)
Here is an error message with AutoHotKey 2.0 beta 7.
Oh crud, I may not have tested that as thoroughly as I should have. Thanks for the report.
fixed
| gharchive/issue | 2022-09-01T15:19:13 | 2025-04-01T06:37:37.369491 | {
"authors": [
"TheArkive",
"gcailly"
],
"repo": "TheArkive/GuiCtlExt_ahk2",
"url": "https://github.com/TheArkive/GuiCtlExt_ahk2/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
837743819 | Do less files/mkdir calls
Related to #28, currently we call files/mkdir for every single file updated/added. It'd speed up these operations a lot if we only did this once per directory.
For context, I just attempted to update 18k files totalling ~1.1 GiB; it's taking over 12 hours on a powerful system with an NVMe SSD. This is obviously not good enough.
Each call can take 1-3s!!
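A sketch of that batching idea with hypothetical names (not ipfs-sync's actual code): memoize which directories have been created so files/mkdir runs once per directory rather than once per file.

```go
package main

import "fmt"

// filesMkdir stands in for a call to the IPFS /api/v0/files/mkdir
// endpoint with parents=true (hypothetical wrapper).
func filesMkdir(dir string) error {
	fmt.Println("files/mkdir", dir)
	return nil
}

// ensureDir creates a directory at most once per run.
func ensureDir(created map[string]bool, dir string) error {
	if created[dir] {
		return nil
	}
	if err := filesMkdir(dir); err != nil {
		return err
	}
	created[dir] = true
	return nil
}

func main() {
	created := map[string]bool{}
	// Two files in the same directory trigger only one mkdir call.
	for _, dir := range []string{"/photos", "/photos", "/docs"} {
		if err := ensureDir(created, dir); err != nil {
			panic(err)
		}
	}
}
```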
Testing a possible patch for this right now. I think my set took 13 or 14 hours to process. I'll see how long this one takes.
Fixed in 02a37e96cf6e118647b0da899ca7573332f19fb8
| gharchive/issue | 2021-03-22T13:57:02 | 2025-04-01T06:37:37.412466 | {
"authors": [
"TheDiscordian"
],
"repo": "TheDiscordian/ipfs-sync",
"url": "https://github.com/TheDiscordian/ipfs-sync/issues/29",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1234715041 | Complex items and optionality
Many things in 5e contain options, whereby the AE to activate depends on a choice made. This leads to two options for data:
Split each option out into a different Foundry item (and therefore a different button on the sheet)
Use a macro to create a pop-up querying which option the player would like to trigger
The latter is neater and more intuitive imo, but it might be slow if a player never uses a certain option. We need to agree how to handle this—and whether we are happy to accept 'half-complete' items until a general solution is found—because there's probably a fucctonne of items that it applies to
Only one item please imo. You're only importing one thing from plutonium, you only expect one thing. So needs to be a macro. But also how would you implement that macro, is there a template you'd need to use for every item like that, or can you structure the data such that the module/plutonium generates a standard macro from an array of options?
is there a template you'd need to use for every item like that
Midi QoL has a (non-AE) option to trigger a macro when an item is activated (i.e. clicked). Using Item Macro, we can bundle this macro with the item itself rather than import it into the normal macros directory.
I agree it would be nice to have all the popups be 'consistent' with each other in styling, but the issue is that the macro will invariably have more programming than just the popup. That is, once the popup appears, you need to write code to handle what each button does, and that has to be included in the same macro! I suppose in principle the popup could be triggered by a core Plutonium function with reliable behaviour, so the PAD-only, item-specific macro only needs to call the initial 'create modal' function and provide instructions for each result.
await plutonium.optionsPopup.create(
    "Body text <b>with HTML</b> and whatnot like.",
    "Button 1", "function1",
    "Button 2", "function2"
);

function function1() {
    executeCommands();
}

function function2() {
    executeCommands();
}

function closePopup() {
    executeCommands();
}
I agree, one item/feature in the game equals one item on the character sheet, so please use macros. I personally do not see a reason to dictate code structure, format, or styling.
The reason to at least have a style guide is so other people can review the code easily, if it ever needs updates! This could be fixing a bug, an API change, Midi overhauling itself, etc.
The only requirements I'd like to 'enforce', to be clear, are:
Write comments (obvious)
No minified code unless the human-readable one is available somewhere else (see my post above)
Use the same pop-up formatting as everything else (which we can draft whether or not Giddy makes some Plutonium magic to make it easier; this is to provide a 'consistent' experience so users don't have to constantly reread every single pop-up because they're all phrased differently)
Whatever solution I end up cobbling together for jamming macros into proceedings, I'll make it eslint-able, so a minimum bar for code styling will be in place
Writing good code is a whole different kettle of fish, and (un)fortunately not machine-enforceable (yet 😓)
kinda made irrelevant by CPR/etc. managing it better than anything any of us clearly have the effort to maintain 😏
| gharchive/issue | 2022-05-13T04:10:04 | 2025-04-01T06:37:37.459292 | {
"authors": [
"Spappz",
"TheGiddyLimit",
"jbowensii",
"revilowaldow"
],
"repo": "TheGiddyLimit/plutonium-addon-automation",
"url": "https://github.com/TheGiddyLimit/plutonium-addon-automation/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
412195225 | Create an "About" page
Is your feature request related to a problem? Please describe.
We want an "about" page where we can include the why of the project and important links. It also lets us explain where we get our data and the collection process.
Describe the solution you'd like
Create an additional page at the URL /about
Describe alternatives you've considered
None
Additional context
We need this to be able to send a single link on Twitter ... 🙄
Links:
https://spectrum.chat/contratospr
| gharchive/issue | 2019-02-13T03:54:34 | 2025-04-01T06:37:37.467479 | {
"authors": [
"froi",
"jpadilla"
],
"repo": "TheIndexingProject/contratospr",
"url": "https://github.com/TheIndexingProject/contratospr/issues/2",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2723964282 | Generated layers do not follow OME standard
The generated layers, when saved as zarr files, do not follow the OME standard and cannot be opened back again correctly in napari.
Solved in PR #29
| gharchive/issue | 2024-12-06T21:32:17 | 2025-04-01T06:37:37.469558 | {
"authors": [
"fercer"
],
"repo": "TheJacksonLaboratory/activelearning",
"url": "https://github.com/TheJacksonLaboratory/activelearning/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1896417738 | Suggestion to define main project name in a separate file
The main project name (Greeter) is currently used across multiple files. So it requires modifications in all these files to use this template, which can be time-consuming and error-prone.
I suggest defining the main project name in a separate file. This way, any changes to the name would only need to be made in one place, reducing potential errors and increasing efficiency.
for example, we can create a new file info.cmake:
# Note: update this to your new project's name and version
set(MAIN_PROJECT_NAME Greeter)
And then, include this file in other files:
# ./CMakeLists.txt
cmake_minimum_required(VERSION 3.14...3.22)
# ---- Project ----
include(info.cmake)
project(
${MAIN_PROJECT_NAME}
VERSION 1.0
LANGUAGES CXX
)
Then one can start their own project by changing only one Greeter.
I wrote a demo in this branch, but I haven't updated the documentation. If this suggestion is accepted, I can submit a PR.
Hey thanks for the input! I've also been bugged by this, but decided to not bother as it would introduce extra complexity without adding long-term benefits. Also, in an actual project, you would expect the project name to be hardcoded in multiple places instead of being encoded in a generic variable name.
| gharchive/issue | 2023-09-14T11:59:25 | 2025-04-01T06:37:37.475736 | {
"authors": [
"TheLartians",
"ldeng-ustc"
],
"repo": "TheLartians/ModernCppStarter",
"url": "https://github.com/TheLartians/ModernCppStarter/issues/178",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
1615696830 | module not found error
after entering
Path_to_HuggingFace:XpucT/Deliberate
I'm encountering this error
ModuleNotFoundError Traceback (most recent call last)
in
3 from IPython.utils import capture
4 from IPython.display import clear_output
----> 5 import wget
6
7 #@markdown - Skip this cell if you are loading a previous session that contains a trained model.
ModuleNotFoundError: No module named 'wget'
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
Same here.
Me too but yesterday it worked.
Solution: put !pip install wget before import wget.
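In other words, the install has to run in the cell before the import executes:

```python
# In a Colab cell: install the missing package first, then import it.
!pip install wget
import wget
```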
Thank you! Still, later in the training cell an avalanche of errors happens, so I guess we wait for the updated colab.
Oh yes I see, hope they will fix it ASAP.
fixed
| gharchive/issue | 2023-03-08T18:08:10 | 2025-04-01T06:37:37.480357 | {
"authors": [
"The-Great-Nothing",
"TheLastBen",
"blazing",
"cxyzdroid90"
],
"repo": "TheLastBen/fast-stable-diffusion",
"url": "https://github.com/TheLastBen/fast-stable-diffusion/issues/1705",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1474454438 | "LayerNormKernelImpl" not implemented for 'Half'
No flames please, be gentle, it's my first time posting. :)
I got this error message after successfully (so I thought) installing and then running a request: "LayerNormKernelImpl" not implemented for 'Half'.
Any words of wisdom? What else can I tell you to help someone help me out?
Thanks so much!
Where exactly do you get this error ?
After putting in search terms and hitting the Generate button. I just went back to the main screen and saw the code output that had been generated:
https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb#scrollTo=PjzwxTkPSPHf
Did this work? I copied and pasted but it wasn't getting the entire output as for some reason the paste results were truncated. I can also try copying to a .txt file if you need.
That phrase shows up a few times in the text at the end of each block.
Thanks!
For clear instructions on how to use it, look for an AUTOMATIC1111 colab tutorial on YouTube.
Thanks much, I'll look into that!
| gharchive/issue | 2022-12-04T04:55:33 | 2025-04-01T06:37:37.483560 | {
"authors": [
"AlternativelyMaybe",
"TheLastBen"
],
"repo": "TheLastBen/fast-stable-diffusion",
"url": "https://github.com/TheLastBen/fast-stable-diffusion/issues/859",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
457890301 | /feed/reels_media/
Great repo.
Is there any way we could have story highlights added as well?
make sure to update to version 1.0.5
Example:
https://github.com/TheM4hd1/SwiftyInsta/blob/10db0b97b7508a5c492c9f178c35b63768b2811b/SwiftyInstaTests/SwiftyInstaTests.swift#L1330
Great!
Unfortunately, it does not seem to populate items for every Highlight, only the first one, and not reliably either.
Quickly googling it seems to suggest that to retrieve the actual items (i.e. stories) we need to pass the id to the "reels_media" endpoint. Any plans to add that as well?
Thanks @TheM4hd1
did you try passing ID like this? highlight:123882132324123
I see, I'll check it for you.
Thanks man
There is an endpoint: /feed/reels_media/
it accepts an array of user_ids like highlight:123882132324123
I've tried to implement it, but I'm receiving an error from the server
{"message": "Invalid reel id list", "status": "fail"}
try it yourself, see if you can fix it; maybe I'm doing something wrong somewhere.
{"message": "Invalid reel id list", "status": "fail"}
I have to admit that I've been trying to implement this for a while, without success. I was hoping you would fare better, but we seem to have stumbled on the same error unfortunately.
I'll let you know if I find anything; as far as I can see, similar libraries used this endpoint, but I didn't test them to see if they work or not.
Maybe we're missing something... idk
I tried with "supported_capabilities_new" but got the same response. Has anyone fixed this?
sample request
POST: https://i.instagram.com/api/v1/feed/reels_media/
DATA: signed_body=bca1cf35fe840d851d3d37f488665e955fa517e06a521843b619fd44183e28e3.{"supported_capabilities_new":"[{"name":"SUPPORTED_SDK_VERSIONS","value":"9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,19.0,20.0,21.0,22.0,23.0,24.0,25.0,26.0,27.0,28.0,29.0,30.0,31.0,32.0,33.0,34.0,35.0,36.0,37.0,38.0,39.0,40.0,41.0,42.0,43.0"},{"name":"FACE_TRACKER_VERSION","value":"10"},{"name":"segmentation","value":"segmentation_enabled"},{"name":"WORLD_TRACKER","value":"WORLD_TRACKER_ENABLED"}]","source":"feed_timeline","_csrftoken":"lQElejwdXHJToUUryVZWghOEN2X8GFn0","user_ids":["archiveDay:17960651299222776"],"_uid":"5889897609","_uuid":"1df4e0f8-fc98-4250-a6a4-56b455e75699"}&ig_sig_key_version=4
@TheM4hd1 how can we create a signed_body with an array inside? How do we know which part will be signed?
@canaksoy
Here is an example of how to sign body.
https://github.com/TheM4hd1/SwiftyInsta/blob/c2beaad164a49b84fe8254af25ac75e36e192f72/SwiftyInsta/API/Services/UserHandler.swift#L164
The sample request you attached above, is it a working version?
I tried with signed_body samples, but it's hard to handle [String] with the encoder. Yes, it's a working PHP sample. @TheM4hd1
@canaksoy
Okay, I'll check it.
Thanks.
I tested the function with 2 ids; if you see that some data is missing in the returned model, you can decode the returned data to access full info about the request.
| gharchive/issue | 2019-06-19T08:59:56 | 2025-04-01T06:37:37.494879 | {
"authors": [
"TheM4hd1",
"canaksoy",
"sbertix"
],
"repo": "TheM4hd1/SwiftyInsta",
"url": "https://github.com/TheM4hd1/SwiftyInsta/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2330674928 | crash
2024-06-03 17:43:58.274 27954-27954 AndroidRuntime com.example.presentationsample E FATAL EXCEPTION: main
Process: com.example.presentationsample, PID: 27954
java.lang.IllegalArgumentException: Window type mismatch. Window Context's window type is 2037, while LayoutParams' type is set to 2038. Please create another Window Context via createWindowContext(getDisplay(), 2038, null) to add window with type:2038
at android.view.WindowManagerImpl.assertWindowContextTypeMatches(WindowManagerImpl.java:204)
at android.view.WindowManagerImpl.applyTokens(WindowManagerImpl.java:182)
at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:147)
at android.app.Dialog.show(Dialog.java:352)
at android.app.Presentation.show(Presentation.java:279)
at com.example.presentationsample.MainActivity.showPresentation(MainActivity.java:97)
at com.example.presentationsample.MainActivity.access$000(MainActivity.java:18)
at com.example.presentationsample.MainActivity$1.onClick(MainActivity.java:32)
at android.view.View.performClick(View.java:7536)
at android.view.View.performClickInternal(View.java:7509)
at android.view.View.-$$Nest$mperformClickInternal(Unknown Source:0)
at android.view.View$PerformClick.run(View.java:29562)
at android.os.Handler.handleCallback(Handler.java:942)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loopOnce(Looper.java:201)
at android.os.Looper.loop(Looper.java:288)
at android.app.ActivityThread.main(ActivityThread.java:8191)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:573)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1019)
This error says the window types don't match, and it's caused by the Android version. The solution: in app/src/main/java/com/example/presentationsample/MyPresentation.java, find line 24, getWindow().setType(WindowManager.LayoutParams.TYPE_APPLICATION_OVERLAY);, and change TYPE_APPLICATION_OVERLAY to TYPE_PRESENTATION.
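For illustration, the change shown in reconstructed context (the real MyPresentation class has more setup than this):

```java
import android.app.Presentation;
import android.content.Context;
import android.view.Display;
import android.view.WindowManager;

public class MyPresentation extends Presentation {
    public MyPresentation(Context context, Display display) {
        super(context, display);
        // TYPE_PRESENTATION matches the window context Android creates for
        // the presentation display, avoiding the 2037-vs-2038 type mismatch.
        getWindow().setType(WindowManager.LayoutParams.TYPE_PRESENTATION);
    }
}
```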
| gharchive/issue | 2024-06-03T09:45:40 | 2025-04-01T06:37:37.520670 | {
"authors": [
"13120241790",
"TheOne-Xin"
],
"repo": "TheOne-Xin/presentation-sample",
"url": "https://github.com/TheOne-Xin/presentation-sample/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1501710358 | Update renovatebot/github-action action to v34.62.1
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| renovatebot/github-action | action | minor | v34.60.0 -> v34.62.1 |
Release Notes
renovatebot/github-action
v34.62.1
Compare Source
See the changelog for changes in all releases.
34.62.1 (2022-12-17)
Bug Fixes
deps: update renovate/renovate docker tag to v34.62.1 (afc5d3c)
v34.61.0
Compare Source
See the changelog for changes in all releases.
34.61.0 (2022-12-17)
Bug Fixes
deps: update renovate/renovate docker tag to v34.61.0 (57b8e85)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
Branch automerge failure
This PR was configured for branch automerge. However, this is not possible, so it has been raised as a PR instead.
| gharchive/pull-request | 2022-12-18T02:35:22 | 2025-04-01T06:37:37.545882 | {
"authors": [
"P4ranoidAndroid"
],
"repo": "TheRealArthurDent/renovate",
"url": "https://github.com/TheRealArthurDent/renovate/pull/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2403502904 | [Snyk] Security upgrade com.amazonaws:aws-java-sdk-core from 1.12.239 to 1.12.760
This PR was automatically created by Snyk using the credentials of a real user.
Snyk has created this PR to fix 61 vulnerabilities in the maven dependencies of this project.
Snyk changed the following file(s):
api/pacman-api-admin/pom.xml
Vulnerabilities that will be fixed with an upgrade:
| Issue | Score | Upgrade |
|---|---|---|
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-608664 | 780 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 Reachable Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-450917 | 705 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Mature |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-467015 | 675 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Mature |
| Denial of Service (DoS) SNYK-JAVA-COMFASTERXMLJACKSONCORE-3038426 | 670 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 Reachable Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1054588 | 630 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1056416 | 630 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1056418 | 630 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1056420 | 630 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1056421 | 630 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1056426 | 630 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1056427 | 630 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-174736 | 630 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-548451 | 630 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-559094 | 630 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-559106 | 630 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-560762 | 630 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-561585 | 630 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1009829 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1047324 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1056414 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1056417 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1056419 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1056424 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1056425 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-467016 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-560766 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-561362 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-561373 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-561586 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-561587 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-564887 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-564888 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-570625 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-572300 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-572314 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-572316 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-72448 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-72449 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-72450 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-72451 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-72882 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-72883 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-72884 | 563 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| XML External Entity (XXE) Injection SNYK-JAVA-COMFASTERXMLJACKSONCORE-1048302 | 560 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1052449 | 555 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1052450 | 555 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-1061931 | 555 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-455617 | 555 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-467014 | 555 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-469674 | 555 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-469676 | 555 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-471943 | 555 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-472980 | 555 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-540500 | 555 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-6056407 | 555 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Denial of Service (DoS) SNYK-JAVA-COMFASTERXMLJACKSONCORE-2421244 | 525 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Denial of Service (DoS) SNYK-JAVA-COMFASTERXMLJACKSONDATAFORMAT-1047329 | 525 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
| Denial of Service (DoS) SNYK-JAVA-COMFASTERXMLJACKSONCORE-3038424 | 520 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Deserialization of Untrusted Data SNYK-JAVA-COMFASTERXMLJACKSONCORE-450207 | 520 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found Proof of Concept |
| Information Exposure SNYK-JAVA-COMMONSCODEC-561518 | 485 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 Reachable No Known Exploit |
| Improper Input Validation SNYK-JAVA-ORGAPACHEHTTPCOMPONENTS-1048058 | 415 | com.amazonaws:aws-java-sdk-core: 1.12.239 -> 1.12.760 No Path Found No Known Exploit |
[!IMPORTANT]
Check the changes in this PR to ensure they won't cause issues with your project.
Max score is 1000. Note that the real score may have changed since the PR was raised.
Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.
For more information:
🧐 View latest project report
📜 Customise PR templates
🛠 Adjust project settings
📚 Read about Snyk's upgrade logic
Learn how to fix vulnerabilities with free interactive lessons:
🦉 Deserialization of Untrusted Data
🦉 XML External Entity (XXE) Injection
🦉 Denial of Service (DoS)
🦉 More lessons are available in Snyk Learn
Checkmarx One – Scan Summary & Details – f1ea51e7-8150-4369-b139-d654d1e8728c
New Issues
| Severity | Issue | Source File / Package | Checkmarx Insight |
|---|---|---|---|
|  | Cleartext_Submission_of_Sensitive_Information | /lambda-functions/notification-es-logging-service/src/main/java/com/paladincloud/notification_log/config/AuthManager.java: 44 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /lambda-functions/notification-es-logging-service/src/main/java/com/paladincloud/notification_log/config/AuthManager.java: 44 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/pacman-rule-engine-2.0/src/main/java/com/tmobile/pacman/commons/autofix/manager/AuthManager.java: 41 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/pacman-aqua-enricher/src/main/java/com/tmobile/cso/pacman/aqua/jobs/AquaDataImporter.java: 37 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/pacman-rule-engine-2.0/src/main/java/com/tmobile/pacman/commons/autofix/manager/AuthManager.java: 41 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/pacman-rule-engine-2.0/src/main/java/com/tmobile/pacman/commons/autofix/manager/AuthManager.java: 41 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /api/pacman-api-asset/src/main/java/com/tmobile/pacman/api/asset/service/AssetServiceImpl.java: 856 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/pacman-rule-engine-2.0/src/main/java/com/tmobile/pacman/executor/PolicyExecutor.java: 141 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/pacman-rule-engine-2.0/src/main/java/com/tmobile/pacman/executor/PolicyExecutor.java: 141 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/gcp-discovery/src/main/java/com/tmobile/pacbot/gcp/inventory/auth/GCPCredentialsProvider.java: 281 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/azure-discovery/src/main/java/com/tmobile/pacbot/azure/inventory/auth/AzureCredentialProvider.java: 72 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/azure-discovery/src/main/java/com/tmobile/pacbot/azure/inventory/auth/AzureCredentialProvider.java: 67 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/gcp-discovery/src/main/java/com/tmobile/pacbot/gcp/inventory/auth/GCPCredentialsProvider.java: 281 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/azure-discovery/src/main/java/com/tmobile/pacbot/azure/inventory/auth/AzureCredentialProvider.java: 72 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/azure-discovery/src/main/java/com/tmobile/pacbot/azure/inventory/auth/AzureCredentialProvider.java: 67 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/service/AmazonCognitoConnector.java: 121 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /api/pacman-api-vulnerability/src/main/java/com/tmobile/pacman/api/vulnerability/service/VulnerabilityService.java: 973 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/gcp-discovery/src/main/java/com/tmobile/pacbot/gcp/inventory/auth/GCPCredentialsProvider.java: 281 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/azure-discovery/src/main/java/com/tmobile/pacbot/azure/inventory/auth/AzureCredentialProvider.java: 72 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/azure-discovery/src/main/java/com/tmobile/pacbot/azure/inventory/auth/AzureCredentialProvider.java: 67 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/gcp-discovery/src/main/java/com/tmobile/pacbot/gcp/inventory/auth/GCPCredentialsProvider.java: 143 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/gcp-discovery/src/main/java/com/tmobile/pacbot/gcp/inventory/auth/GCPCredentialsProvider.java: 144 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/pacman-data-shipper/src/main/java/com/tmobile/cso/pacman/datashipper/util/AssetGroupUtil.java: 391 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/pacman-data-shipper/src/main/java/com/tmobile/cso/pacman/datashipper/util/AssetGroupUtil.java: 373 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/pacman-data-shipper/src/main/java/com/tmobile/cso/pacman/datashipper/util/AssetGroupUtil.java: 391 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/pacman-data-shipper/src/main/java/com/tmobile/cso/pacman/datashipper/util/AssetGroupUtil.java: 373 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/pacman-data-shipper/src/main/java/com/tmobile/cso/pacman/datashipper/util/AssetGroupUtil.java: 391 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/pacman-data-shipper/src/main/java/com/tmobile/cso/pacman/datashipper/util/AssetGroupUtil.java: 373 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/controller/AccountsController.java: 59 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/controller/AccountsController.java: 59 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/controller/AccountsController.java: 59 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/controller/AccountsController.java: 59 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/controller/AccountsController.java: 59 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /lambda-functions/notification-template-formatter-service/src/main/java/com/paladincloud/HttpUtil.java: 154 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /lambda-functions/notification-send-email-service/src/main/java/com/paladincloud/utils/HttpUtil.java: 152 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/controller/AccountsController.java: 59 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/QualysAccountServiceImpl.java: 132 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/QualysAccountServiceImpl.java: 125 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /jobs/pacman-data-shipper/src/main/java/com/tmobile/cso/pacman/datashipper/util/AuthManager.java: 22 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 125 | Attack Vector |
|  | Cleartext_Submission_of_Sensitive_Information | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/TenableAccountServiceImpl.java: 119 | Attack Vector |
|  | Client_Potential_XSS | /commons/pac-api-commons/src/main/resources/docs/v1/js/swagger-ui.js: 20540 | Attack Vector |
|  | Client_Potential_XSS | /commons/pac-api-commons/src/main/resources/docs/v1/js/swagger-ui.js: 20540 | Attack Vector |
|  | Client_Potential_XSS | /commons/pac-api-commons/src/main/resources/docs/v1/js/swagger-ui.js: 20540 | Attack Vector |
|  | Client_Potential_XSS | /commons/pac-api-commons/src/main/resources/docs/v1/js/swagger-ui.js: 20540 | Attack Vector |
|  | Client_Potential_XSS | /webapp/src/app/shared/searchable-dropdown/searchable-dropdown.component.ts: 142 | Attack Vector |
|  | Client_Potential_XSS | /commons/pac-api-commons/src/main/resources/docs/v1/js/swagger-ui.js: 21841 | Attack Vector |
|  | Client_Potential_XSS | /commons/pac-api-commons/src/main/resources/docs/v1/js/swagger-ui.js: 21838 | Attack Vector |
|  | Client_Potential_XSS | /commons/pac-api-commons/src/main/resources/docs/v1/js/swagger-ui.js: 21831 | Attack Vector |
|  | Client_Potential_XSS | /commons/pac-api-commons/src/main/resources/docs/v1/js/swagger-ui.js: 21828 | Attack Vector |
|  | Client_Potential_XSS | /commons/pac-api-commons/src/main/resources/docs/v1/js/swagger-ui.js: 21825 | Attack Vector |
|  | Client_Potential_XSS | /commons/pac-api-commons/src/main/resources/docs/v1/js/swagger-ui.js: 21822 | Attack Vector |
|  | Client_Potential_XSS | /commons/pac-api-commons/src/main/resources/docs/v1/js/swagger.js: 63 | Attack Vector |
|  | Client_Potential_XSS | /commons/pac-api-commons/src/main/resources/docs/v1/js/swagger-ui.js: 20550 | Attack Vector |
|  | Client_Potential_XSS | /commons/pac-api-commons/src/main/resources/docs/v1/js/swagger-ui.js: 20547 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AbstractAccountServiceImpl.java: 111 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AbstractAccountServiceImpl.java: 106 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AbstractAccountServiceImpl.java: 111 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AbstractAccountServiceImpl.java: 106 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 151 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 154 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 151 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 154 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 151 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 154 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 151 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 154 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AbstractAccountServiceImpl.java: 111 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AbstractAccountServiceImpl.java: 106 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AbstractAccountServiceImpl.java: 111 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AbstractAccountServiceImpl.java: 106 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/controller/AccountsController.java: 59 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AbstractAccountServiceImpl.java: 89 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AwsAccountServiceImpl.java: 213 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AwsAccountServiceImpl.java: 216 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/QualysAccountServiceImpl.java: 155 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/QualysAccountServiceImpl.java: 158 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 151 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 154 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 151 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 154 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 151 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 154 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 151 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AquaAccountServiceImpl.java: 154 | Attack Vector |
|  | Privacy_Violation | /api/pacman-api-admin/src/main/java/com/tmobile/pacman/api/admin/repository/service/AbstractAccountServiceImpl.java: 111 | Attack Vector |
More results are available on AST platform
| gharchive/pull-request | 2024-07-11T15:36:57 | 2025-04-01T06:37:37.851045 | {
"authors": [
"TheRedHatter"
],
"repo": "TheRedHatter/CE",
"url": "https://github.com/TheRedHatter/CE/pull/501",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2303935190 | [Snyk] Security upgrade php from 7.1-apache to 7.4.33-apache
This PR was automatically created by Snyk using the credentials of a real user.
Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.
Changes included in this PR
base/gitlist/0.6.0/Dockerfile
We recommend upgrading to php:7.4.33-apache, as this image has only 225 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.
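For reference, the change boils down to bumping the base image tag in that Dockerfile (the actual file may contain more):

# before
FROM php:7.1-apache
# after (this PR)
FROM php:7.4.33-apache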
Some of the most important vulnerabilities in your base image include:
| Severity | Priority Score / 1000 | Issue | Exploit Maturity |
|---|---|---|---|
|  | 929 | Server-Side Request Forgery (SSRF) SNYK-DEBIAN10-APACHE2-1585740 | Mature |
|  | 929 | Server-Side Request Forgery (SSRF) SNYK-DEBIAN10-APACHE2-1585740 | Mature |
|  | 929 | Server-Side Request Forgery (SSRF) SNYK-DEBIAN10-APACHE2-1585740 | Mature |
|  | 929 | Server-Side Request Forgery (SSRF) SNYK-DEBIAN10-APACHE2-1585740 | Mature |
|  | 886 | Out-of-bounds Write SNYK-DEBIAN10-APACHE2-2322058 | Mature |
Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.
For more information:
🧐 View latest project report
🛠 Adjust project settings
Note: This is a default PR template raised by Snyk. Find out more about how you can customise Snyk PRs in our documentation.
Learn how to fix vulnerabilities with free interactive lessons:
🦉 Server-Side Request Forgery (SSRF)
Checkmarx One – Scan Summary & Details – 7b6409b7-8ce8-417a-8cec-254bc00144e0
No New Or Fixed Issues Found
| gharchive/pull-request | 2024-05-18T08:44:00 | 2025-04-01T06:37:37.867481 | {
"authors": [
"TheRedHatter"
],
"repo": "TheRedHatter/vulhub",
"url": "https://github.com/TheRedHatter/vulhub/pull/61",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
159247251 | NPE@temportalist.esotericraft.emulation.common.EntityState.serializeNBT
Server crash on killing blow to EnderZoo Wither Witch while using Tconstruct broadsword.
fml-server-latest.log.txt
crash-2016-06-08_11.57.10-server.txt
Can you provide more ways and steps to reproduce?
I got a similar crash. It happened when I entered the menu to pause the game, for an unrelated issue of hostile mobs ignoring players. The last entity killed was the only actual hostile entity, ironically a bat (part of Rough Mobs).
forge: 1954
mc: 1.9.4
EsoTeriCraft-1.9.4-0.0.1.71
Origin-1.9.4-9.1.6
log.txt
| gharchive/issue | 2016-06-08T19:08:29 | 2025-04-01T06:37:37.905061 | {
"authors": [
"Cat-McCatface",
"Rumpelstiltskinny",
"TheTemportalist"
],
"repo": "TheTemportalist/EsoTeriCraft",
"url": "https://github.com/TheTemportalist/EsoTeriCraft/issues/8",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
260558830 | fix(report/handlers): accept more semver versions
The current problem we have, specifically, is that since v1.2.0 the
mobile apps emit full semver including also the build.
This requires several parts of our infrastructure to improve the
regexp being used so as to also validate this kind of input.
See also: measurement-kit/measurement-kit#1388
An earlier version of this diff was blessed by @hellais on Slack and
since then I just changed comments.
The build is failing like this:
function killitwithfire () {
trap - ALRM
kill -ALRM $prog 2>/dev/null
kill -9 $! 2>/dev/null && exit 0
}
function waitforit () {
trap "killitwithfire" ALRM
sleep $1& wait
kill -ALRM $$
}
waitforit $1& prog=$! ; shift ;
+prog=6047
+shift
+waitforit
trap "killitwithfire" ALRM INT
+trap killitwithfire ALRM
+trap killitwithfire ALRM INT
"$@"& wait $!
+wait
+sleep
+wait 6049
sleep: missing operand
Try 'sleep --help' for more information.
RET=$?
+RET=0
if [[ "$(ps -ef | awk -v pid=$prog '$2==pid{print}{}')" != "" ]]; then
kill -ALRM $prog
wait $prog
fi
+kill -ALRM 6046
ps -ef | awk -v pid=$prog '$2==pid{print}{}'
killitwithfire
++killitwithfire
++trap - ALRM
++kill -ALRM 6047
++kill -9 6049
++ps -ef
++awk -v pid=6047 '$2==pid{print}{}'
/home/travis/.travis/job_stages: line 57: 6046 Segmentation fault (core dumped) ./.travis.test.sh 30 ./bin/oonib
The command "chmod +x .travis.test.sh && ./.travis.test.sh 30 ./bin/oonib" exited with 139.
I don't understand very well what it could be, and it seems like this is something related to the way in which travis is handling the test. I am going to restart it to see if it's deterministic (I actually hope so).
I am experiencing more errors with the build, like that the keyserver doesn't know the key. 😡
Alright, it seems it's time to play golf.
Coverage increased (+0.04%) to 78.469% when pulling 9e0eca426cacf7ec4e301fd612ed25d9c824e22a on fix/version_regexp into 03d738b8daa86ed103e435a23adc16e9bac64127 on master.
Coverage increased (+0.09%) to 78.527% when pulling ec91e00d53ea57413c7cb5301fc2d747433ee7d9 on fix/version_regexp into 03d738b8daa86ed103e435a23adc16e9bac64127 on master.
Coverage increased (+0.04%) to 78.469% when pulling 9bb6db5e2a453911873e6fa069f12a85466473fe on fix/version_regexp into 03d738b8daa86ed103e435a23adc16e9bac64127 on master.
I really hate these annoying coveralls comments that don't serve any purpose.
Coverage increased (+0.09%) to 78.527% when pulling d09c1c59e09743a3986ef27ca4d31f6878125b80 on fix/version_regexp into 03d738b8daa86ed103e435a23adc16e9bac64127 on master.
In the end, I decided to rewrite the test that caused a segfault.
I believe the problem appeared now that Travis has upgraded its infrastructure to 14.04: the previous Travis build was for 03d738b and occurred on Apr 6, when Travis was still using 12.04.
The original script was brilliant: it did two nested waits to make sure the running process was either killed gracefully or with fire. But it was probably too brilliant and triggered some edge case.
I did not want to wrestle too much with travis. Also, reading the script, it seems to me it's fine to rewrite it such that, if we cannot kill the background process, the build will hang and then fail (on travis).
I guess having the local build hang and the travis build fail is good enough for our purposes.
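To illustrate the shape of that simpler approach, a minimal sketch (not the exact script that landed):

#!/bin/bash
# Sketch only. Usage: ./timeout.sh SECONDS COMMAND [ARGS...]
seconds="$1"; shift
"$@" &
pid=$!
( sleep "$seconds" && kill "$pid" 2>/dev/null ) &
wait "$pid"  # if the process cannot be killed, this hangs locally and fails on travis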
Alright, I have read the diff once more. I am going to self bless this as good, given that @hellais already blessed the diff improving the regexp on Slack and that the test changes "look good to me".
| gharchive/pull-request | 2017-09-26T09:42:49 | 2025-04-01T06:37:37.941625 | {
"authors": [
"bassosimone",
"coveralls"
],
"repo": "TheTorProject/ooni-backend",
"url": "https://github.com/TheTorProject/ooni-backend/pull/111",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2351745420 | 🛑 Trident API is down
In b7d830c, Trident API (https://thetrident.one/api) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Trident API is back up in 0509d6d after 20 minutes.
| gharchive/issue | 2024-06-13T17:55:37 | 2025-04-01T06:37:37.944555 | {
"authors": [
"an-lee"
],
"repo": "TheTridentOne/upptime",
"url": "https://github.com/TheTridentOne/upptime/issues/291",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1395664400 | [Addition] World Pregenerators
Checklist
[X] I've checked that my mod suggestion hasn't already been added to one of the lists.
[X] I've checked the latest issues to make sure that this mod hasn't already been suggested.
What is your mod's name?
Chunk Pregenerator
What is your mod's main version?
Other (Forge)
What is your mod's other versions?
1.8.X or earlier, 1.12.X, 1.15.X (Forge), 1.16.X (Forge), 1.18.X (Forge), 1.19.X (Forge)
What is your mod's type?
Enhancement
What side does your mod need to run on?
Both
Input a link to your suggested mod.
https://www.curseforge.com/minecraft/mc-mods/chunkpregenerator
What is your reasoning for including this mod?
Worldgen lag is a real issue, even in singleplayer, and world pregeneration is actually a really important thing you should do.
This can turn it from "unplayable" to "playable", especially if your core count (including threads) is stuck at 4 or less or the CPU isn't the newest anymore... And even then it's suggested to do so.
That's why I am suggesting Chunk Pregenerator.
Which supports 1.4.7-1.7.x-1.8.9-1.10.x-1.11.x-1.12.x-1.14.x-1.15x-1.16x-1.18x-1.19.x
Also another mod that I would suggest is Chunky
Which is another pregenerator that provides a lot of performance gains and covers
Forge, Fabric, and custom servers,
while Chunk Pregenerator is dedicated to Forge.
(OPTIONAL) Give some extra information about the mod.
Chunk Pregenerator is a tool for generating your world before you actually play it.
It also includes maintenance tools such as:
Chunk Deletion/Trimming
Performance Tracking (up to 1.12)
World Maintenance tools (up to 1.12)
Retro-generation that doesn't rely on the player itself.
Hard drive protection (1.14 or newer, where it becomes necessary)
Memory leak fixes that become apparent when pregenerating.
And a few other things.
I've added your mod to performance mods as I think that is a better fit than Enhancement. Love your mods, keep up the good work :D
@NordicGamerFE
Small extra note since I saw this:
known issues + fixes are actually tracked by myself.
Here you can find both lists.
1.12 or older: https://github.com/TinyModularThings/Chunk-Pregenerator-Issue-Tracker/issues/1
1.14 or newer: https://github.com/TinyModularThings/Chunk-Pregenerator-Issue-Tracker/issues/2
| gharchive/issue | 2022-10-04T04:52:48 | 2025-04-01T06:37:37.955958 | {
"authors": [
"NordicGamerFE",
"Speiger"
],
"repo": "TheUsefulLists/UsefulMods",
"url": "https://github.com/TheUsefulLists/UsefulMods/issues/137",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2705012586 | [BUG]: Mute/Volume 0.0 not working while screen recording
Version
6.8.2
What platforms are you having the problem on?
Android
System Version
All
On what device are you experiencing the issue?
Real device
Architecture
Old architecture
What happened?
I encountered an issue: when I screen record my application that uses react-native-video,
the screen-recorded video has the sound of the muted videos.
On some phones the file is corrupted or very laggy.
The video saved in the gallery has audio from all the components where the video is muted and volume is 0.
DO NOTE: the sound is not being played in the app itself; muting and unmuting work in the app,
BUT on screen recording it, the sound of all those unpaused and muted videos is there in the recorded video.
The code snippet is:
const videoConfig = Platform.select({
  ios: {
    automaticallyWaitsToMinimizeStalling: false,
    bufferConfig: {
      minBufferMs: 2000,
      maxBufferMs: 5000,
      bufferForPlaybackMs: 1000,
      bufferForPlaybackAfterRebufferMs: 2000,
    },
  },
  android: {
    bufferConfig: {
      minBufferMs: 2000,
      maxBufferMs: 5000,
      bufferForPlaybackMs: 1000,
      bufferForPlaybackAfterRebufferMs: 2000,
    },
  },
});

<Video
  source={{ uri: videoUri }}
  ref={ref => {
    this.player = ref;
  }}
  onBuffer={this.onBuffer}
  onError={this.videoError}
  style={styles.thumbnailImage}
  paused={isPaused}
  repeat={true}
  resizeMode="stretch"
  muted={true}
  {...videoConfig}
/>
I even tried adding volume: 0, but it did not work either.
Reproduction Link
Reproduction
Steps to reproduce this bug are:
Initially, when my video is paused, nothing happens.
But when the video changes from paused to playing, its sound appears in the screen recording of my app.
DO NOTE: the sound is not being played in the app itself, muting and unmuting is working in the app,
BUT on screen recording it, the sound of all those unpaused and muted video is there in the recorded video.
Looks strange, but I don't know how we can fix it...
I usually use another package to control device volume instead of player volume :/
Maybe switching to texture view can fix the issue. You can give it a try, I think.
Tried it by adding a viewType prop and changing it to texture view; I am still facing the same issue, and sometimes the video cannot be opened by the phone (it says the file is corrupted or the screen recording cannot be played). Even when showing it on Meet or any other screen-sharing call, it creates an issue.
Ok, another solution can be to unselect audio tracks I think
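For illustration, a sketch combining both suggestions; viewType and selectedAudioTrack support depends on the react-native-video version, so treat the exact prop names as assumptions to check against the docs:

// Sketch only: render via TextureView and fully deselect the audio track,
// so the muted player never opens an audio session (prop names assumed).
<Video
  source={{ uri: videoUri }}
  style={styles.thumbnailImage}
  paused={isPaused}
  muted={true}
  volume={0}
  viewType={ViewType.TEXTURE}                // assumed export of react-native-video
  selectedAudioTrack={{ type: 'disabled' }}  // disable audio track selection
/>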
| gharchive/issue | 2024-11-29T12:42:40 | 2025-04-01T06:37:37.962842 | {
"authors": [
"AakashJaiswal-beta",
"freeboub"
],
"repo": "TheWidlarzGroup/react-native-video",
"url": "https://github.com/TheWidlarzGroup/react-native-video/issues/4313",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1452008633 | 🛑 Jira is down
In 7f013b5, Jira (https://jira.internal.therapy-box.co.uk/) was down:
HTTP code: 503
Response time: 432 ms
Resolved: Jira is back up in 4ea34d0.
| gharchive/issue | 2022-11-16T17:34:28 | 2025-04-01T06:37:37.997335 | {
"authors": [
"TherapyBox"
],
"repo": "TherapyBox/upptime",
"url": "https://github.com/TherapyBox/upptime/issues/657",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1794891640 | Update Fadeless to 1.20.1
Currently, the mod is made for 1.20-Snapshot, and not 1.20.1.
Update to 1.20.1, when the update comes out.
Curse Forge Page
It works fine on 1.20.1, no update is required. Another mod it works well together with is remove reloading screen, which removes the resource pack reloading screen.
Great, I just noticed that it wasn't specifically for 1.20.1, so I figured that might cause issues.
Also yeah, that mod is also part of the pack :D
| gharchive/issue | 2023-07-08T11:00:43 | 2025-04-01T06:37:37.999878 | {
"authors": [
"DerpDerpling",
"Therkelsen"
],
"repo": "Therkelsen/echoes_of_the_wilderness",
"url": "https://github.com/Therkelsen/echoes_of_the_wilderness/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2514165546 | 🛑 Outlook is down
In c0bad11, Outlook (https://outlook.live.com) was down:
HTTP code: 417
Response time: 118 ms
Resolved: Outlook is back up in a5fad8d after 12 minutes.
| gharchive/issue | 2024-09-09T14:41:08 | 2025-04-01T06:37:38.002540 | {
"authors": [
"ptoone"
],
"repo": "Thexyz/Email-Monitoring",
"url": "https://github.com/Thexyz/Email-Monitoring/issues/709",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1161745137 | Password Change logout handling
Currently, the destroy session button will log you out of your current device. It should log you out of all devices instead.
Session handling is a bit weird anyway. After the session times out, a reload redirects to https://identity.eurofurence.org/auth/choose. Clicking "Login with existing account" gets me directly to the dashboard without having to actually log in.
Hey @mowny
Thanks for your comment. Yes, that is wanted, as the OpenID Connect server saves a cookie depending on whether you have set the remember me option.
The clients all have a limited session time, although that session time does not end the cookie session at the IdP.
E.g. the IdP may have a cookie lifetime of 180 days while the apps only get an hour.
The right solution should be to redirect when the session times out.
Fixed by implementing #11; sessions are not a concern of the IdP. So session management can only be done by the IdP for the IdP apps, not for the Reg, as an example. A fix for this could be backchannel logout.
| gharchive/issue | 2022-03-07T17:59:16 | 2025-04-01T06:37:38.039593 | {
"authors": [
"Thiritin",
"mowny"
],
"repo": "Thiritin/identity",
"url": "https://github.com/Thiritin/identity/issues/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
64672965 | [GTA:SA] IVRadarScaling
Would it be possible to add the IVRadarScaling option to the GTA: San Andreas fix like in the Vice City and GTA 3 fixes?
I guess, but maybe later.
| gharchive/issue | 2015-03-27T01:51:32 | 2025-04-01T06:37:38.041979 | {
"authors": [
"ThirteenAG",
"jm10087"
],
"repo": "ThirteenAG/Widescreen_Fixes_Pack",
"url": "https://github.com/ThirteenAG/Widescreen_Fixes_Pack/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2646267069 | 🛑 Qbit is down
In b8a2115, Qbit (https://qbit.cond.dk) was down:
HTTP code: 502
Response time: 481 ms
Resolved: Qbit is back up in 95a4800 after 3 hours, 52 minutes.
| gharchive/issue | 2024-11-09T16:14:09 | 2025-04-01T06:37:38.054355 | {
"authors": [
"ThomasConrad"
],
"repo": "ThomasConrad/uptime_monitor",
"url": "https://github.com/ThomasConrad/uptime_monitor/issues/397",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1827229336 | Update scalafmt-core to 3.7.11
About this PR
📦 Updates org.scalameta:scalafmt-core from 3.7.5 to 3.7.11
📜 GitHub Release Notes - Version Diff
Usage
✅ Please merge!
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
⚙ Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scalameta", artifactId = "scalafmt-core" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "org.scalameta", artifactId = "scalafmt-core" }
}]
labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1
Superseded by #128.
| gharchive/pull-request | 2023-07-29T00:09:54 | 2025-04-01T06:37:38.111505 | {
"authors": [
"scala-steward"
],
"repo": "ThoughtWorksInc/enableIf.scala",
"url": "https://github.com/ThoughtWorksInc/enableIf.scala/pull/127",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
954407753 | 'in' can't be used as an identifier because it's a keyword.
The ISO 3166 country code of India has a problem, as below:
Thanks for this issue. I will probably add a rename from in to indian
By the way, it is more common to use language codes. Or language code + country code. Using country code only is pretty rare
Fixed in 5.0.3.
By the way, it is more common to use language codes. Or language code + country code. Using country code only is pretty rare.
Also be aware that country codes must be in uppercase, e.g. CN, not cn. Otherwise it is interpreted as a language code, like the zh of zh-Hant-TW.
Thanks a lot for the quick response; we'll consider your suggestion. Thanks
| gharchive/issue | 2021-07-28T02:16:25 | 2025-04-01T06:37:38.126839 | {
"authors": [
"Tienisto",
"ffshy1214"
],
"repo": "Tienisto/flutter-fast-i18n",
"url": "https://github.com/Tienisto/flutter-fast-i18n/issues/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
363400142 | Api/sponsors implementation
I did it I think... I may have missed some good error handling and may have left a todo in there somewhere but I'll let you guys tell me if things still need to change
Unless I'm reading this wrong (which I definitely could be), the only Travis errors I'm getting are for naming enum cases with a lowercase letter, which I do to match the variable names in the structs and the names of the API fields for Decodable to work. Can we add a thing to ignore the rule for enums? I'm hoping that would be the easiest fix.
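If the failing rule is identifier_name (an assumption; check the Travis log for the actual rule name), a .swiftlint.yml tweak along these lines should be the easy route:

# .swiftlint.yml sketch: stop flagging lowercase enum case names
disabled_rules:
  - identifier_name
# alternatively, keep the rule and silence single cases in code with:
#   // swiftlint:disable:next identifier_name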
| gharchive/pull-request | 2018-09-25T03:52:26 | 2025-04-01T06:37:38.127981 | {
"authors": [
"eteters"
],
"repo": "TigerHacks/app-ios",
"url": "https://github.com/TigerHacks/app-ios/pull/49",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1176879889 | Allow raw installation outside of the context of PyPI.
If the package is installed outside of PyPI or an active checkout
(e.g. via pip install [some-path]), a version.py file might not be
created. Use a local version number in that case.
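A minimal sketch of that fallback (module path and version string are assumptions, not the actual code):

# setup.py sketch: fall back to a local version when version.py is absent
try:
    from tiledb.cloud.version import version  # generated file (path assumed)
except ImportError:
    version = "0.0.0.dev0+local"  # placeholder local version number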
@thetorpedodog I've used python setup.py develop --user as my go-to for a while. Are you trying to get it working from the top level repo folder without any install?
Figured out what spurred me to do this. It would cause an error if you pip-installed directly from git:
pip install --upgrade git+https://github.com/TileDB-Inc/TileDB-Cloud-Py.git@some-version-hash
| gharchive/pull-request | 2022-03-22T14:33:10 | 2025-04-01T06:37:38.141527 | {
"authors": [
"Shelnutt2",
"thetorpedodog"
],
"repo": "TileDB-Inc/TileDB-Cloud-Py",
"url": "https://github.com/TileDB-Inc/TileDB-Cloud-Py/pull/242",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2639644963 | Bump to 1.16.2 and update requirements
This PR does a few related things:
Bumps version to 1.16.2 (and resets build number to 0)
Builds for Python 3.12 (replaces #15 because 1.15.0 couldn't be built against 3.12)
Use the new conda-forge syntax to control and test the minimum supported Python version. Upstream supports >=3.10
Also note that this version 1.16.2 will not be able to be installed in the TileDB Cloud py39 environment since it requires py>=310. The goal is to install it in the updated py312 environment that is in progress
Confirmed that the version number is still working as expected (#12):
import: 'cellxgene_census'
+ python -c 'import cellxgene_census; print(cellxgene_census.__version__)'
1.16.2
+ pip check
No broken requirements found.
| gharchive/pull-request | 2024-11-07T02:06:46 | 2025-04-01T06:37:38.145059 | {
"authors": [
"jdblischak"
],
"repo": "TileDB-Inc/cellxgene-census-feedstock",
"url": "https://github.com/TileDB-Inc/cellxgene-census-feedstock/pull/17",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
391669454 | Fix security vulnerabilities
This PR fixes security vulnerabilities for the current version of send-transform. All tests pass.
If you're going to go the route of updating my fork, I'd rather keep it in sync with the original as much as possible.
That would mean applying my changes on top of the latest release of send. However, I notice you've updated some of the dependencies here to newer versions than those used by the latest release of send - does that mean the versions currently used by send have security vulnerabilities?
@TimBarham just a friendly ping on this.
| gharchive/pull-request | 2018-12-17T11:28:16 | 2025-04-01T06:37:38.147026 | {
"authors": [
"TimBarham",
"ruslan-bikkinin"
],
"repo": "TimBarham/send",
"url": "https://github.com/TimBarham/send/pull/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2391452582 | fix: Do not throw an exception when the solution or entity classes are interfaces
The superclass of an interface is null. This caused this line of code to throw an exception for solution classes that are interfaces:
var superclass = bottomClass.getSuperclass();
lineageClassList.addAll(getAllAnnotatedLineageClasses(superclass, annotation));
This threw because getAllAnnotatedLineageClasses expected superclass to not be null; getAllAnnotatedLineageClasses now returns an empty list for null arguments.
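A sketch of the resulting shape; the signature and surrounding logic are assumed from the description, not copied from the patch:

import java.lang.annotation.Annotation;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: returning an empty list for null ends the recursion cleanly
// when an interface's superclass (null) is passed in.
static List<Class<?>> getAllAnnotatedLineageClasses(Class<?> bottomClass,
        Class<? extends Annotation> annotation) {
    if (bottomClass == null) {
        return Collections.emptyList();
    }
    List<Class<?>> lineageClassList = new ArrayList<>();
    if (bottomClass.isAnnotationPresent(annotation)) {
        lineageClassList.add(bottomClass);
    }
    lineageClassList.addAll(getAllAnnotatedLineageClasses(bottomClass.getSuperclass(), annotation));
    return lineageClassList;
}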
I'm wondering if solutions as interfaces should be supported. What would be the use case? What would be the downsides?
Considering that in all the years we have not seen anyone ask us for this, maybe we don't need this.
| gharchive/pull-request | 2024-07-04T20:51:27 | 2025-04-01T06:37:38.149785 | {
"authors": [
"Christopher-Chianelli",
"triceo"
],
"repo": "TimefoldAI/timefold-solver",
"url": "https://github.com/TimefoldAI/timefold-solver/pull/934",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1556378592 | Replaced FocusTrap with TapRegion
I replaced FocusTrap with TapRegion because FocusTrap is no longer supported as of Flutter 3.7.0 (https://github.com/flutter/flutter/pull/107262)
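A minimal sketch of the migration (the actual widget wiring in Pinput may differ):

// Sketch: TapRegion provides the "tap outside to unfocus" behaviour
// that FocusTrap used to give (Flutter 3.7+).
TapRegion(
  onTapOutside: (PointerDownEvent event) => focusNode.unfocus(),
  child: pinputField, // placeholder for the widget previously wrapped in FocusTrap
)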
..
Hey @AurelVU, I'm sorry for closing your PR without merging it but I was already working on it.
The update is live on pub
Version 2.2.22
| gharchive/pull-request | 2023-01-25T10:08:54 | 2025-04-01T06:37:38.191346 | {
"authors": [
"AurelVU",
"Tkko",
"bugrevealingbme"
],
"repo": "Tkko/Flutter_Pinput",
"url": "https://github.com/Tkko/Flutter_Pinput/pull/118",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1596608685 | Update to 1.19.3
Update required libraries.
By the way, do you ever plan on uploading this to either or both Curseforge and Modrinth? Also, what about cutting off sharking from the mod if it keeps being unusable?
| gharchive/pull-request | 2023-02-23T10:38:02 | 2025-04-01T06:37:38.192257 | {
"authors": [
"StunninglyWrong"
],
"repo": "Tlesis/SquakePlusPlus",
"url": "https://github.com/Tlesis/SquakePlusPlus/pull/4",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
2397818486 | frame stepper doesn't work in Geode
annoying
frame stepper isn't even a feature in the mod yet.
https://github.com/TobyAdd/GDH/issues/198
the feature is back in 4.6.4
| gharchive/issue | 2024-07-09T10:34:07 | 2025-04-01T06:37:38.200529 | {
"authors": [
"Chaotixu",
"TobyAdd",
"exploitle"
],
"repo": "TobyAdd/GDH",
"url": "https://github.com/TobyAdd/GDH/issues/243",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
505491054 | SQLite Testing Environment
Does this package work w/ a sqlite testing environment?
I'm getting the following error when using 'TEST' as the query string:
+errorInfo: array:3 [
0 => "HY000"
1 => 1
2 => "near "'T%E%S%T%'": syntax error"
]
The query in my controller is as follows:
$query = Searchy::search('products')
    ->fields('title', 'producer')
    ->query($q)
    ->getQuery()
    ->having('relevance', '>', 20)
    ->limit(20)
    ->pluck('id')
    ->toArray();
Unfortunately, this package only works with a MySQL database. It uses MySQL specific features to calculate match relevance.
| gharchive/issue | 2019-10-10T20:23:34 | 2025-04-01T06:37:38.247602 | {
"authors": [
"FutureFutureTo",
"TomLingham"
],
"repo": "TomLingham/Laravel-Searchy",
"url": "https://github.com/TomLingham/Laravel-Searchy/issues/103",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1091938451 | 🛑 VTP.XYZ is down
In bef760f, VTP.XYZ (https://vtp.xyz/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: VTP.XYZ is back up in bc4f774.
| gharchive/issue | 2022-01-02T02:49:34 | 2025-04-01T06:37:38.265250 | {
"authors": [
"TomsProject"
],
"repo": "TomsProject/uptime",
"url": "https://github.com/TomsProject/uptime/issues/1215",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
349159039 | Questions on columnOrder API
Hi @TonyGermaneri,
Thanks for the quick feature release for columns sorting.
I've been playing around with it, and am able to retrieve the order using grid.columnOrder but cannot set the order. Looking over the docs, I assumed grid.columnOrder([1, 0 , 2, 3]) would work but got the error:
grid.columnOrder is not a function
May I get an example of how to set column order?
Thanks,
Louis
PS: There's a duplicate entry for columnOrder in the API documentation https://tonygermaneri.github.io/canvas-datagrid/docs/#canvasDatagrid.columnOrder
All fixed!
https://tonygermaneri.github.io/canvas-datagrid/docs/#canvasDatagrid.columnOrder
Thanks again for pointing this out.
| gharchive/issue | 2018-08-09T14:42:40 | 2025-04-01T06:37:38.281488 | {
"authors": [
"LabShareLouie",
"TonyGermaneri"
],
"repo": "TonyGermaneri/canvas-datagrid",
"url": "https://github.com/TonyGermaneri/canvas-datagrid/issues/154",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
37610798 | Adds support for custom GYP include
When developers have libraries in non-standard locations (or don't wish
to dump them into System32 on Windows), they need to run "node-gyp
configure", edit the generated files to add include paths, and then run
"node-gyp install"
This is doubly painful when you wanted to "npm install" something, but
it depends on a library being in a default path on your system.
I've added the ability for users to include an additional ".gyp" file in
all node-gyp builds by setting an environment variable
(NODE_GYP_ADDITIONAL_CONFIG), which they can use to configure
include/library paths. No longer must I manually install packages or
dump files into global include directories.
Has this been resolved in subsequent releases? I'm currently stuck trying to install packages that require libraries I've installed in custom locations.
The preferred way to do this is to add a common.gypi file to the root of your module. For example: https://github.com/TooTallNate/node-vorbis/blob/master/common.gypi
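For illustration, a common.gypi along these lines (gyp files use Python-dict syntax; the paths and the library_dirs key are assumptions, see the linked example for a real one):

# common.gypi sketch: extra include/library search paths for all targets
{
  'target_defaults': {
    'include_dirs': [
      '/opt/mylibs/include',   # placeholder path
    ],
    'link_settings': {
      'library_dirs': [
        '/opt/mylibs/lib',     # placeholder path
      ],
    },
  },
}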
| gharchive/pull-request | 2014-07-10T22:18:55 | 2025-04-01T06:37:38.292460 | {
"authors": [
"TooTallNate",
"anprogrammer",
"bdunlay"
],
"repo": "TooTallNate/node-gyp",
"url": "https://github.com/TooTallNate/node-gyp/pull/473",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1552080517 | Proper Arc support for schematic and footprint
The arcs for the schematic and footprint are rarely placed properly. They need to be handled correctly.
I pushed some work on the Fix_ARC branch, but I cannot figure out how to draw them, there are always components for which it does not work.
For now, I tried to use the GetCenterParam function, which is reversed from https://easyeda.com/editor/6.5.5/js/editorPCB.min.js
This function seems to return the coordinates of the center (or possibly the midpoint on some occasions?), and two angles, but when using these to calculate the start point, end point and center, the result is inconsistent: sometimes it works, sometimes it doesn't.
The schematic equivalent is more consistent and seems to have fewer issues.
The following components have an arc in their footprint and could be used to test:
C55684 C185659 C86002 C312983 C1341701 C307522 C689358 C403695 C602208 C152951 C688068 C163798 C661330
This concerns the h_ARC function in the footprint handler:
https://github.com/TousstNicolas/JLC2KiCad_lib/blob/b5c38c2beff6f710eb8ac427622384717539ce84/JLC2KiCadLib/footprint/footprint_handlers.py#L174-L278
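For reference while debugging, here is a minimal sketch of the conversion I would expect, assuming angles in degrees and KiCad's downward-pointing Y axis (either assumption may be exactly where it breaks):

import math

def arc_point(cx, cy, radius, angle_deg):
    """Point on a circle of given radius around (cx, cy) at angle_deg.
    KiCad footprints use a Y axis pointing down, hence the negated sine."""
    a = math.radians(angle_deg)
    return (cx + radius * math.cos(a), cy - radius * math.sin(a))

# Values as returned by GetCenterParam (illustrative only)
cx, cy, r, start_angle, end_angle = 0.0, 0.0, 1.5, 30.0, 120.0
start = arc_point(cx, cy, r, start_angle)
end = arc_point(cx, cy, r, end_angle)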
I made my version of the code and it seems to be working pretty well so far:
https://github.com/Xyntexx/JLC2KiCad_lib/tree/my_ARC_Fixes
Thanks a lot for your work, merged in 12c6860aecd43dd8b4262347f74a3bb06de85959
| gharchive/issue | 2023-01-22T11:12:17 | 2025-04-01T06:37:38.333590 | {
"authors": [
"TousstNicolas",
"Xyntexx"
],
"repo": "TousstNicolas/JLC2KiCad_lib",
"url": "https://github.com/TousstNicolas/JLC2KiCad_lib/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1329369236 | USB Mass Storage mode - eMMC unrecognized with pinephoneA64
Successful Tow-Boot Install
- Tow-Boot 2021.10-004 was successfully installed on a pinephoneA64.
- I first verified booting pOS from eMMC.
- I then verified the Mass Storage mode successfully using a Windows10 host.
- USB device was recognized as a "PinePhone (A64)", and the eMMC partitions are viewable.
Mass Storage mode issue
- After install; rebooted phone with USB plug still plugged in, but battery out.
- Entered into Mass Storage mode, (blue LED).
- Windows host now reporting, "Unknown USB Device (Device Descriptor Request Failed)".
Mass Storage mode issue reoccurring across phone reboots
- Tried again by disconnecting USB cable, inserting battery, and rebooting phone.
- Entered into Mass Storage mode, (blue LED).
- Plugged in USB cable.
- Windows host still reporting, "Unknown USB Device (Device Descriptor Request Failed)".
@BCoyler
I was having a very similar issue.
Turns out I connected my PinePhone to a USB 2.0 port of the host device.
Tried with a different port (USB 3.0) and the device was recognized.
If you haven't already found the solution, give the above method a try; maybe it will resolve your issue as well.
Had the same problem. For me, the problem came from the provided (red) cable. Using another USB-C cable worked on the first try.
| gharchive/issue | 2022-08-05T02:34:20 | 2025-04-01T06:37:38.338065 | {
"authors": [
"BCoyler",
"Disctanger",
"Narann"
],
"repo": "Tow-Boot/Tow-Boot",
"url": "https://github.com/Tow-Boot/Tow-Boot/issues/169",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1561754046 | How can I use this with React Native?
as title, thanks
thanks a lot
| gharchive/issue | 2023-01-30T05:05:23 | 2025-04-01T06:37:38.339105 | {
"authors": [
"fukemy"
],
"repo": "TowhidKashem/snapchat-clone",
"url": "https://github.com/TowhidKashem/snapchat-clone/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2012999629 | fix/docs/chunk
@dcvz
I was a bit late to review your chunk operation. It's very nice but I found something was wrong with the doc. I made a test that explains why I'm changing it. Is this the behaviour you want?
@dcvz now it should have the right behaviour
| gharchive/pull-request | 2023-11-27T20:00:02 | 2025-04-01T06:37:38.341605 | {
"authors": [
"louisfd"
],
"repo": "Tracel-AI/burn",
"url": "https://github.com/Tracel-AI/burn/pull/1006",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1871662043 | Using a standard .env file instead of config.yaml for docker deployment.
Refactoring the example docker deployments to take a standard .env file, with variables defined in the docker-compose itself, to simplify custom docker deployments (e.g. on Portainer).
Description
You will customize the installation via .env file instead of YAML.
Related Issues
No direct issue, just a Discord conversation.
Solution and Design
This setup helps deployments via docker for users who have a production-like environment. It's focused on the docker installation; it will not impact non-docker usage.
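An illustrative sketch of the pattern (service and variable names are placeholders, not the final compose file):

# docker-compose.yaml sketch: values come from .env instead of config.yaml
services:
  backend:
    image: superagi/backend
    env_file: .env          # e.g. OPENAI_API_KEY=..., DB_URL=...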
Test Plan
Type of change
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[x] Breaking change (fix or feature that would cause existing functionality to change)
[ ] Docs update
Checklist
[x] My pull request is atomic and focuses on a single change.
[x] I have read the contributing guide and my code conforms to the guidelines.
[x] I have documented my changes clearly and comprehensively.
[ ] I have added the required tests.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
With the latest release I'm having another issue here:
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 145, in __init__
self._dbapi_connection = engine.raw_connection()
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3288, in raw_connection
return self.pool.connect()
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 452, in connect
return _ConnectionFairy._checkout(self)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 1267, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 716, in checkout
rec = pool._do_get()
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 284, in _do_get
return self._create_connection()
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 393, in _create_connection
return _ConnectionRecord(self)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 678, in __init__
self.__connect()
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 902, in __connect
with util.safe_reraise():
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 898, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/engine/create.py", line 637, in connect
return dialect.connect(*cargs, **cparams)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 615, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
File "/opt/venv/lib/python3.10/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "super__postgres" to address: Name or service not known
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/venv/bin/alembic", line 8, in <module>
sys.exit(main())
File "/opt/venv/lib/python3.10/site-packages/alembic/config.py", line 632, in main
CommandLine(prog=prog).main(argv=argv)
File "/opt/venv/lib/python3.10/site-packages/alembic/config.py", line 626, in main
self.run_cmd(cfg, options)
File "/opt/venv/lib/python3.10/site-packages/alembic/config.py", line 603, in run_cmd
fn(
File "/opt/venv/lib/python3.10/site-packages/alembic/command.py", line 385, in upgrade
script.run_env()
File "/opt/venv/lib/python3.10/site-packages/alembic/script/base.py", line 582, in run_env
util.load_python_file(self.dir, "env.py")
File "/opt/venv/lib/python3.10/site-packages/alembic/util/pyfiles.py", line 94, in load_python_file
module = load_module_py(module_id, path)
File "/opt/venv/lib/python3.10/site-packages/alembic/util/pyfiles.py", line 110, in load_module_py
spec.loader.exec_module(module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/app/migrations/env.py", line 94, in <module>
run_migrations_online()
File "/app/migrations/env.py", line 82, in run_migrations_online
with connectable.connect() as connection:
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3264, in connect
return self._connection_cls(self)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 147, in __init__
Connection._handle_dbapi_exception_noconnection(
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2426, in _handle_dbapi_exception_noconnection
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 145, in __init__
self._dbapi_connection = engine.raw_connection()
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3288, in raw_connection
return self.pool.connect()
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 452, in connect
return _ConnectionFairy._checkout(self)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 1267, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 716, in checkout
rec = pool._do_get()
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 284, in _do_get
return self._create_connection()
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 393, in _create_connection
return _ConnectionRecord(self)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 678, in __init__
self.__connect()
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 902, in __connect
with util.safe_reraise():
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 898, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/engine/create.py", line 637, in connect
return dialect.connect(*cargs, **cparams)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 615, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
File "/opt/venv/lib/python3.10/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "super__postgres" to address: Name or service not known
Looks like alembic.ini has hardcoded db credentials.
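If so, a common workaround is to override the URL from the environment in migrations/env.py (the variable name here is assumed, not SuperAGI's actual one):

# migrations/env.py sketch: prefer an env var over the alembic.ini value
import os
from alembic import context

config = context.config
db_url = os.environ.get("DB_URL")  # hypothetical variable name
if db_url:
    config.set_main_option("sqlalchemy.url", db_url)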
We have a pr for this,
https://github.com/TransformerOptimus/SuperAGI/pull/1136
waiting on its review @andreacasarin
What if I make a new docker compose definition, something like docker-compose.prod.yaml, which has the preferred vector store and explicit env vars for production-like deployments? I can leave the example there and add a new one.
I think it would also be great to have a nginx image built so that you can actually deploy without pulling the whole repo.
Perfect then, I'll restore the example and go with docker-compose.prod.yaml, just a couple more questions:
which would be the preferred vector store?
I'd keep the env file minimal, just the required env vars to start the project; then, if I got this right, the rest can be configured via the GUI, right?
Also, we still need https://github.com/TransformerOptimus/SuperAGI/pull/1149#issuecomment-1698503784 done to make it viable.
which would be the preferred vector store?
By default we are using redis vector store
keep .env.dist the same, because we can set variables from the GUI, but not the vector stores
on Docker Hub we have set up autobuilds, so every push to main and dev gets built
which would be the preferred vector store?
By default we are using redis vector store
Ok, but does it work adding knowledge to that? Because in my installation it asks to configure a new vector store to add knowledge.
keep .env.dist the same, because we can set most variables from the GUI, but not the vector stores
I'm not sure I got it right, but I'll leave it like that if that is what you prefer. I'd only like to point out that it gets a bit overwhelming to have so many vars in there, especially since there are also 6 docker compose specifications.
on Docker Hub we have set up autobuilds, so every push to main and dev gets built
Ok!
Pushed, it requires nginx/DockerfileNginx to be added to the autobuild as superagi/superagi-proxy.
Pushed, it requires nginx/DockerfileNginx to be added to the autobuild as superagi/superagi-proxy.
i'll add it
keep .env.dist the same, because we can set most variables from the GUI, but not the vector stores
I'm not sure I got it right, but I'll leave it like that if that is what you prefer. I'd only like to point out that it gets a bit overwhelming to have so many vars in there, especially since there are also 6 docker compose specifications.
To explain a bit more my point-of-view:
I'd go with 1 base docker compose with the bare minimum and then maybe a couple more which will be used to override the base one for different applications (like local dev, local llm, prod) they could only override the base one in specific keys;
we'll discuss more towards this; there are some issues with GUI HMR
I'd add a .env.dist with the minimum variables needed to start the project and point out that you can actually configure all the rest from the gui.
The reason why I wanted to keep env.dist with all the keys is to let users know all the keys in the project in one place
we'll comment everything out other than the bare minimum
Just my 2 cents, I'll stop arguing :)
No issues
which would be the preferred vector store?
By default we are using redis vector store
Ok, but does it work adding knowledge to that? Because in my installation it asks to configure a new vector store to add knowledge.
Sorry, forgot about knowledge; asked @Tarraann (main contributor to knowledge).
To use knowledge, for now only Pinecone, Weaviate and Qdrant are supported,
so keeping the Weaviate config commented out should be good for users to quickly set it up.
keep .env.dist the same, because we can set most variables from the GUI, but not the vector stores
I'm not sure I got it right, but I'll leave it like that if that is what you prefer. I'd only like to point out that it gets a bit overwhelming to have so many vars in there, especially since there are also 6 docker compose specifications.
in docker hub we have setup autobuilds so every push to main and dev gets build
Ok!
keep .env.dist the same, because we can set most variables from the GUI, but not the vector stores
I'm not sure I got it right, but I'll leave it like that if that is what you prefer. I'd only like to point out that it gets a bit overwhelming to have so many vars in there, especially since there are also 6 docker compose specifications.
To explain a bit more my point-of-view:
I'd go with 1 base docker compose with the bare minimum and then maybe a couple more which will be used to override the base one for different applications (like local dev, local llm, prod) they could only override the base one in specific keys;
we'll discuss this more; there are some issues with GUI HMR
👍
I'd add a .env.dist with the minimum variables needed to start the project and point out that you can actually configure all the rest from the gui.
The reason why I wanted to keep env.dist with all the keys is to let users know all the keys in the project in one place; we'll comment everything out other than the bare minimum
I see, looks fine to me, I took the time to rearrange the variables there and put them under some comment headings, that might help this process.
| gharchive/pull-request | 2023-08-29T13:19:43 | 2025-04-01T06:37:38.383426 | {
"authors": [
"CLAassistant",
"Fluder-Paradyne",
"andreacasarin"
],
"repo": "TransformerOptimus/SuperAGI",
"url": "https://github.com/TransformerOptimus/SuperAGI/pull/1149",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1928795900 | 🛑 MeVuelo Packages Availability (ASU - Mobile) is down
In 9f20848, MeVuelo Packages Availability (ASU - Mobile) (https://api.packages.production.travelonux.com/v1/mevuelo/frontend/packages/healthcheck?departure=ASU&api_key=ebfc5bcd-c798-4a53-a45a-5118f8a1e377&resolution=1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MeVuelo Packages Availability (ASU - Mobile) is back up in 603fbac after 20 minutes.
| gharchive/issue | 2023-10-05T18:00:08 | 2025-04-01T06:37:38.402489 | {
"authors": [
"travelonux-dev"
],
"repo": "Travelonux/upptime",
"url": "https://github.com/Travelonux/upptime/issues/1235",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Pañol --> Salida Herramientas: remove the "No data available in this table" notice
Test environment - v2.0.15.3
Description
When the record with the entered data is added, the system shows the "no data available..." warning, when it should instead disappear and show the data.
Steps to reproduce
In Pañol
Click on Salida Herramientas
Fill in the required fields
Click the "Agregar" button
Fixed - v2.0.16
| gharchive/issue | 2022-06-22T14:34:56 | 2025-04-01T06:37:38.413132 | {
"authors": [
"floto-trazalog"
],
"repo": "Trazalog/traz-comp-pan",
"url": "https://github.com/Trazalog/traz-comp-pan/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
How do I set the background color to transparent?
Description
How do I set the background color to transparent?
Suggested solution
How do I set the background color to transparent?
Alternative
No response
Additional context
No response
Validations
[X] I agree to follow this project's Code of Conduct
[X] Read the Contributing Guidelines.
[X] Read the docs.
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
OK, it's <TresCanvas clearColor: ' '>
| gharchive/issue | 2024-03-21T09:02:21 | 2025-04-01T06:37:38.418044 | {
"authors": [
"oscorops"
],
"repo": "Tresjs/tres",
"url": "https://github.com/Tresjs/tres/issues/594",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1345159555 | Features/challenge4/userprofile
PR Template
Purpose
...
Does this introduce a breaking change?
[ ] Yes
[ ] No
Pull Request Type
What kind of change does this Pull Request introduce?
[ ] Bugfix
[ ] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[ ] Documentation content changes
[ ] Other... Please describe:
How to Test
Get the code
git clone [repo-address]
cd [repo-name]
git checkout [branch-name]
npm install
Test the code
What to Check
Verify that the following are valid
...
Other Information
NG
| gharchive/pull-request | 2022-08-20T12:59:02 | 2025-04-01T06:37:38.551890 | {
"authors": [
"akinaritsugo"
],
"repo": "TripInsurance/devopsoh78828",
"url": "https://github.com/TripInsurance/devopsoh78828/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1816594901 | Multiple Charge Controllers
Sorry for raising an issue for this, but I have multiple charge controllers. You can assign them numbers, but I don't see anywhere in the code where the unit number is referenced (it's 1 by default).
Is it possible to specify that in the protocol?
Not a problem!
The charge controller is referenced in the hex code string sent over UART by the ESP32. Look for this code block in your YAML file:
time:
  - platform: homeassistant
    id: esptime
    on_time:
      - seconds: 0,30
        then:
          - uart.write:
              id: uart_bus
              data: [ 0x01, 0xB3, 0x01, 0x00, 0x00, 0x00, 0x00, 0xB5 ]  # Reads only real-time data
I think the controller address is the first hex block, so you would need to change that and recalculate the checksum. That part is pretty easy. The bigger part of the work is that you would need to duplicate the ampinvt.h file to create separate sensors for each charge controller. If you don't need or want all the sensors, you could cut a lot of it out and keep just the sensors you do want. Essentially, you have two parts to this: 1) sending the command to each controller (easy and quick) and 2) processing the response from each controller (harder and more time-consuming).
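If it helps, here's a quick sketch of the checksum recalculation. The rule is an assumption inferred from the sample frame above rather than from Ampinvt documentation: the last byte appears to be the low byte of the sum of all preceding bytes (0x01 + 0xB3 + 0x01 = 0xB5 matches the example), so verify it against a second known-good frame before relying on it.

def with_checksum(frame):
    # Assumed rule: checksum = sum of all preceding bytes, truncated to one byte.
    return frame + [sum(frame) & 0xFF]

# Controller address 1 reproduces the frame from the YAML above:
print([hex(b) for b in with_checksum([0x01, 0xB3, 0x01, 0x00, 0x00, 0x00, 0x00])])
# -> ['0x1', '0xb3', '0x1', '0x0', '0x0', '0x0', '0x0', '0xb5']

# A hypothetical frame for controller address 2 would then be:
print([hex(b) for b in with_checksum([0x02, 0xB3, 0x01, 0x00, 0x00, 0x00, 0x00])])
# -> ['0x2', '0xb3', '0x1', '0x0', '0x0', '0x0', '0x0', '0xb6']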
Thanks, I'll go ahead and give it a shot. I currently have 6 of these, and was hoping not to have to purchase 6 ESP32s. Lots of cabling and mess.
Two questions
I'm a Network Engineer who only dabbles in Computer Science. How would I calculate the checksum?
If I were to duplicate only specific sensors, how would I differentiate between which charge controller it was coming from? I do not see a reference to the controller address in the ampinvt.h file.
Thanks,
PS. I'm ok with being told it's too far over my head, in that case I'll just purchase more ESP32's and do it that way. ESP32 I have setup connect to one of my charge controllers has been rock solid this entire time during testing.
One ESP32 will do the job and even 1 ampinvt.h file but what you will need is to setup the ampinvt.h to read 6 x sensor sets and replicate all those custom sensors in your YAML file.
If you go the 6 x ESP32 setup, you will still need to recode the sensors because they all come together in HA and HA will not know how to process 6 instances of the same sensors. When you figure out the renaming for those, just work backwards to the ampinvt sensor naming.
| gharchive/issue | 2023-07-22T03:42:16 | 2025-04-01T06:37:38.556185 | {
"authors": [
"TripitakaBC",
"spilegi"
],
"repo": "TripitakaBC/ampinvt_esphome",
"url": "https://github.com/TripitakaBC/ampinvt_esphome/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
115102875 | Team groups going off edge of page.
Most likely known of but here: http://i.imgur.com/c4aJxd0.png
It's a known bug <B
Would love this fixed: https://surl.im/i/8qvwj
| gharchive/issue | 2015-11-04T17:20:43 | 2025-04-01T06:37:38.567278 | {
"authors": [
"ItzDan",
"TheSpaceArmy"
],
"repo": "Tromino/PolyExtend",
"url": "https://github.com/Tromino/PolyExtend/issues/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
When editing a document, the image uploads successfully and the corresponding folder under uploads is created, but the image is not moved to the specified path, and there is no error in the logs
Please submit issues in the following format, thanks!
Which version of BookStack are you currently using?
What operating system are you using?
What did you do?
What result did you expect?
What result did you actually get?
I'm using local storage (not OSS) and ran into the same problem; I solved it.
The cause was that the working directory was not at the same level as the uploads folder.
For example, running it like this does not work:
[root@xxx~] /www/wwwroot/BookStack/BookStack
Instead, cd /www/wwwroot/BookStack and then run ./BookStack; otherwise the uploads folder cannot be found.
I was using my own docker-compose and Dockerfile; the code was stored at the wrong path, so the files could not be read, but the exception was not surfaced when reading the files. Fixed now.
@cai-ti I used the approach from https://github.com/willzhang/docker-bookstackcn and ran into the same problem. How should I solve it?
Please take a look for me, thanks!!!
FROM buildkite/puppeteer:10.0.0
ENV BOOKSTACK_VERSION=2.10 \
    LANG=C.UTF-8 \
    QTWEBENGINE_CHROMIUM_FLAGS="--no-sandbox"
WORKDIR /bookstack
RUN apt-get update -y \
    && apt-get install -y tzdata git \
    && ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
    && apt-get install -y ttf-wqy-zenhei fonts-wqy-microhei \
    && apt-get install -y python3 unzip xz-utils \
    && wget -qO /usr/local/bin/envsubst https://github.com/a8m/envsubst/releases/download/v1.2.0/envsubst-Linux-x86_64 \
    && chmod +x /usr/local/bin/envsubst \
    && wget -nv -O- https://download.calibre-ebook.com/linux-installer.sh | sh /dev/stdin \
    && wget https://github.com/TruthHun/BookStack/releases/download/v${BOOKSTACK_VERSION}/BookStack-V${BOOKSTACK_VERSION}_Linux_amd64.zip \
    && unzip BookStack-V${BOOKSTACK_VERSION}_Linux_amd64.zip \
    && chmod +x BookStack \
    && rm -rf BookStack-V${BOOKSTACK_VERSION}_Linux_amd64.zip \
    && rm -rf /var/lib/apt/lists/* /var/cache/apt/*
COPY conf/ /tmp/conf
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 8181
CMD ["/bookstack/BookStack"]
| gharchive/issue | 2021-05-10T06:37:56 | 2025-04-01T06:37:38.596554 | {
"authors": [
"cai-ti",
"kpeng2016",
"vogin"
],
"repo": "TruthHun/BookStack",
"url": "https://github.com/TruthHun/BookStack/issues/138",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
228047273 | [WIP] ✨ use markdown-it for markdown previews
refs https://github.com/TryGhost/Ghost/pull/8451
replaces SimpleMDE's default marked rendering with markdown-it
adds markdown-it plugins to more closely match legacy Showdown behaviour
footnotes
highlight/mark
named headers
don't require a space after the # for headers
adds ember-browserify so that markdown plugins that only provide CommonJS modules can be imported
Coverage decreased (-0.2%) to 71.556% when pulling f190198f49cb38b32133abd721f528564f89d1d1 on kevinansfield:markdown-it into fbb46dc72c4380a632e73db37c4ae8f2370b2087 on TryGhost:master.
Coverage decreased (-0.2%) to 71.556% when pulling 0f7dd8a26025fd7a05a389190746bebcc077cee1 on kevinansfield:markdown-it into 627a71e1a4ea95a9d0b61dd92b90c0d822c2f907 on TryGhost:master.
| gharchive/pull-request | 2017-05-11T16:17:47 | 2025-04-01T06:37:38.601633 | {
"authors": [
"coveralls",
"kevinansfield"
],
"repo": "TryGhost/Ghost-Admin",
"url": "https://github.com/TryGhost/Ghost-Admin/pull/690",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
597410275 | [API] Ordering by meta_title is not possible
Issue Summary
It's not possible to make an API request (internally or externally) that sorts by meta_title. Moreover, sorting by a related table (e.g. posts_meta) is allowed, but the request fails. I think this is a regression from 2.x, after the post metadata got split out into a new table.
Technical details:
Ghost Version: 3.13.1
Forum Ref: https://forum.ghost.org/t/order-by-meta-title-doesnt-seem-to-work/13349
Possible related issue:
The model base parseOrderOptions pulls in a list of permitted attributes from the child modal, including allowed relations. This allows you to set the order to one of those relations, which kind-of breaks the API request. For example, ordering by authors asc will give you a 400, but title asc works perfectly fine, and potato asc succeeds because you can’t sort by potatoes, and therefore the order is ignored.
Might be related to this issue - https://github.com/TryGhost/Ghost/issues/11572, should be checked when investigated deeper.
After digging deeper into the issue, I can confirm that the problem is in how we create the order query here, and in the way the Post model's permittedAttributes function returns only posts-table-specific fields, ignoring posts_meta fields.
The closest candidate for a solution is tinkering with the bookshelf-relations plugin, which should be able to abstract the table split so that permittedAttributes gives back the correct fields from posts_meta.
This issue needs appetite to dig into a possible solution through bookshelf-relations. Guesstimating 1 day.
Note: while loading up on context, I observed another place which will need an update when the ordering is implemented, the pagination plugin. Specifically, options.order should most likely come with a resolved tableName, because the pagination plugin should not know about relations and should not do the tableName + '.' + property concatenation.
Braindump after playing extensively with ordering
There are 2 separate problems that this issue raises:
Main one: cannot order the post resource by a legitimate field which comes from a 1:1 relation (posts:posts_meta tables).
Ordering behaves inconsistently for fields that are not meant to be ordered by: it ignores unknown properties but returns a 400 when trying to order by a property like authors, which comes from a relation name. For any field that can't be ordered by, it should consistently ignore that field (throwing a validation error would break API compatibility, so this is a no-go)
The key to both problems is how the parseOrderOption function relies on permittedAttributes to calculate all the fields that could be ordered upon.
(pt.1) Ordering by fields which come from 1:1 relation
The missing piece here is a mechanism in the model layer which would recognize a 1:1 relation and extract "orderable" fields out of the related table. Additionally, the fields would have to come in a format that includes the related table name so that the pagination plugin could correctly form the query, e.g. ['posts_meta.meta_title ASC'] instead of ['meta_title ASC'].
The solution I'm thinking of here is adding an orderAttributes method which would be used instead of permittedAttributes in parseOrderOption. orderAttributes would have the same values as permittedAttributes for tables with no additional relations logic, and would have special overrides on each model with "expanded" field names coming from the related table.
To make the above solution more maintainable, we would need a declarative way of describing which fields from the related model orderAttributes could be taken from. One possible direction to explore is expanding the hasOne relation in bookshelf-relations; I plan to timebox this direction to half a day tomorrow (cc @matthanley).
(pt. 2) Treating all non-orderable fields the same
The solution here somewhat relates to the solution from pt. 1. When parsing order options, the function should not rely on permittedAttributes, because those are not the same fields that could be "ordered" upon. Adding an orderAttributes method to the base model would possibly solve the problem.
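To make the proposal concrete, here's an illustrative sketch (plain Python, purely for exposition; not the actual Bookshelf/JS code) of how parsing against an explicit orderAttributes list, with table-qualified related fields baked in, would handle both problems: qualified ordering for posts_meta fields, and consistent silent ignoring of everything else.

# Hypothetical orderAttributes for the Post model: posts fields plus the
# expanded fields coming from the related posts_meta table.
ORDER_ATTRIBUTES = ["posts.title", "posts.published_at", "posts_meta.meta_title"]

def parse_order_option(order):
    rules = []
    for part in order.split(","):
        field, _, direction = part.strip().partition(" ")
        direction = direction.strip().upper() or "ASC"
        # Resolve bare names like "meta_title" to their table-qualified form.
        match = next((attr for attr in ORDER_ATTRIBUTES
                      if attr == field or attr.endswith("." + field)), None)
        if match and direction in ("ASC", "DESC"):
            rules.append(f"{match} {direction}")
        # Unknown fields ("potato") and relation names ("authors") fall through
        # and are ignored consistently, instead of sometimes returning a 400.
    return rules

print(parse_order_option("meta_title asc"))  # ['posts_meta.meta_title ASC']
print(parse_order_option("authors asc"))     # []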
Thoughts on holistically solving posts<>posts_meta ordering/filtering/change detection problems
I've dug through bookshelf-relations aiming to figure out an abstraction which would allow solving current ordering, related filtering, and change-detection(1) problems(2). I have not found a way that would solve all of them in a holistic way.
These are 3 distinct areas that have some or no relation to bookshelf-relations:
Ordering - has to do with findPage method from pagination plugin and depends on buggy/outdated handling in parseOrderOption function. Nothing to do with bookshelf-relations because the ordering fields should be calculated independently IMO not just for Post model but all other models that have relations.
Filtering - problems here stem from lack of explicit NQL configuration for posts_meta table in combination with mapping posts_meta filter fields to correct ones (using posts_meta. prefix). This kind of mapping should be done in serialization layer same way other field mappings are done there.
Change detection - to fix this problem on a deeper level (the fix that was done through override to wasChanged() was a "patch") we could look into fixing bookshelf-relations change tracking. This might come through some special parameter passed with hasOne relation that posts_meta is declared with or invent a whole new relation for this specific situation. New relation could also help with elimination of patchwork that has to be done in Post model. This needs to be researched.
Additional note, bookshelf-relations inherently deals with create/update/delete operations only and doesn't have anything to do with read operations that are needed in case of ordering and filtering.
Conclusion with regards to current issue
Two problems that were described in the comment above should be solved outside of bookshelf-relations. This proposed solution:
adding an orderAttributes method which would be used instead of permittedAttributes in parseOrderOption. orderAttributes would have the same values as permittedAttributes for tables with no additional relations logic, and would have special overrides on each model with "expanded" field names coming from the related table.
seems like the most viable solution for now and would solve both problems.
@matthanley would love to know if you have any feedback on this? I estimate implementing the proposed solution would take about a day.
adding an orderAttributes method which would be used instead of permittedAttributes in parseOrderOption. orderAttributes would have the same values as permittedAttributes for tables with no additional relations logic, and would have special overrides on each model with "expanded" field names coming from the related table
@naz this looks like a reasonable approach to me 👍
After implementing the ordering idea I had in mind, got faced with a Bookshelf limitation I completely forgot existed. Have my experimentation available on this spike branch - https://github.com/TryGhost/Ghost/compare/master...naz:ordering-for-posts-meta-fields.
To sum up the problem: in the pagination plugin, when self.fetchAll is done, it doesn't have the posts_meta table loaded into the query, so it fails with a "no such column" error (when trying to order by posts_meta.meta_title, for example). The problem of not having loaded relations when fetching records in Bookshelf is summed up with references here - https://github.com/bookshelf/bookshelf/issues/1707#issuecomment-351026830.
There are two possible ways to get around this problem which I've been thinking of:
Extend pagination with detection of relations and build in a join to related table when the ordering query is built. We extend the query in similar way in filtering plugin through NQL (it adds joins and filters records at the same stage). The downside for this method, is possibly keeping yet another configuration that might be similar to one in filter. This could end up hard to maintain longterm.
Explore fixing/extending Bookshelf itself and load up relations like posts_meta into the queryBuilder automatically. This potentially, would be a more maintainable approach. The downside is, I have no clue how hard this would be to achieve, would need to dive into bookshelf codebase to understand more. The problem seems to have been around in Bookshelf since 2014.
@matthanley the point Hannah made earlier about approaching the problem through bookshelf-relations is now clearer to me. Probably the limitation I've rediscovered was the reason. This issue could become scope creep, so I think the best approach forward is timeboxing and researching possible directions. I'm thinking of timeboxing the first approach above to half a day, seeing what comes out, and then, if we really end up needing configs, researching the second approach. Let me know what you think or if you have any questions about the issue itself!
Summary of the discussion around future plan for this issue
The short-term plan (to be done now) is implementing a solution which expands the query builder object inside the pagination (or ordering) plugin, based on additional configuration similar to what is done with NQL/filtering.
Long term, we're aiming to develop a maintainable solution which is not based on configuration but rather on a special relation type coming from Bookshelf, or some other fix which might also address https://github.com/bookshelf/bookshelf/issues/202. Will timebox 1 day of research to figure out the right questions and maybe possible solutions. A solution in mind right now is expanding the hasOne relation with an "alwaysFetched" parameter; this might be done through a plugin or from within Bookshelf itself (needs digging around).
Extend pagination with detection of relations and build in a join to related table when the ordering query is built. We extend the query in similar way in filtering plugin through NQL (it adds joins and filters records at the same stage). The downside for this method, is possibly keeping yet another configuration that might be similar to one in filter. This could end up hard to maintain longterm.
This solution has landed in master through https://github.com/TryGhost/Ghost/pull/12226.
Next up will be pushing pt. 2 and figuring out a more generic way to handle relation inclusion in the query builder object through Bookshelf or a special plugin. This should help solve filtering/ordering and attribute change tracking altogether.
| gharchive/issue | 2020-04-09T16:40:23 | 2025-04-01T06:37:38.630936 | {
"authors": [
"gargol",
"matthanley",
"naz",
"vikaspotluri123"
],
"repo": "TryGhost/Ghost",
"url": "https://github.com/TryGhost/Ghost/issues/11729",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1207504489 | Impersonation tokens do signup action if no account is found
Issue Summary
When a user is deleted it is expected that all tokens associated with that account are revoked. This is currently not the case.
Creating a token before deleting a user creates the account again when pasted into the browser.
Steps to Reproduce
Create an access token for an account.
Delete that account.
Paste that token into your browser.
User is created again.
The user should NOT be created again and all tokens should be revoked when a user is deleted.
Ghost Version
4.44.0
Node.js Version
16.14.2
How did you install Ghost?
OS - Debian 11 with MariaDB 10.5.15
Database type
MySQL 8
Browser & OS version
No response
Relevant log / error output
No response
Code of Conduct
[X] I agree to be friendly and polite to people in this repository
Hey there @guidefox. Ghost's magic links are based on JWTs, the tokens aren't stored and there's not really a concept of revocation here. What's happening is that the magic link has a fall back behaviour of creating a new account if no matching account is found.
I realise that's a little jarring, but it's a brand new account that is created, not an old one being restored.
I think it would make sense to pin the impersonation links to only be allowed to do signin, rather than falling back to signup, to make this a little less weird.
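For anyone unfamiliar with why revocation isn't possible here, a generic illustration (Python with PyJWT; the payload and names are made up, not Ghost's real token format): verifying a stateless signed token only checks the signature and expiry, so nothing about it fails when the account behind it is deleted.

import time
import jwt  # PyJWT

SECRET = "server-side-signing-key"

token = jwt.encode(
    {"email": "member@example.com", "exp": int(time.time()) + 3600},
    SECRET,
    algorithm="HS256",
)

# Deleting the account changes nothing below: no database is consulted,
# only the signature and the expiry claim are checked.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["email"])  # the app must then decide: sign in, sign up, or reject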
Ah, that makes more sense. I think that the current behavior can be improved because it is a little bit jarring right now.
Perhaps a dedicated button for having impersonation tokens re-create the account instead of doing it automatically would be a better solution.
And maybe make the tokens one-use only? Or at least provide the option to have them expire after one use.
This has cropped up in other forms recently, and is something we want to prioritise fixing.
| gharchive/issue | 2022-04-18T22:17:12 | 2025-04-01T06:37:38.637726 | {
"authors": [
"ErisDS",
"guidefox"
],
"repo": "TryGhost/Ghost",
"url": "https://github.com/TryGhost/Ghost/issues/14508",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1823498846 | Make this site private - not working in Chrome and Opera
Issue Summary
Change or delete row 58:
versions/5.54.4/core/frontend/apps/private-blogging/lib/middleware.js
return session({
    name: 'ghost-private',
    maxAge: constants.ONE_MONTH_MS,
    signed: false,
    sameSite: 'lax' // row 58: use 'lax' instead of 'none', or delete this line
})(req, res, next);
Or you can delete row 58, because sameSite: 'Lax' is the default value.
You can't code 'secure' within an object; secure: true will not work.
Works now in Chrome and Opera.
See https://web.dev/i18n/en/samesite-cookies-explained
Steps to Reproduce
See https://forum.ghost.org/t/make-this-site-private-not-working/39938/1
Ghost Version
5.54.4
Node.js Version
v18.15.0
How did you install Ghost?
local, macos
Database type
SQLite3
Browser & OS version
No response
Relevant log / error output
No response
Code of Conduct
[X] I agree to be friendly and polite to people in this repository
In core/frontend/apps/private-blogging/lib/middleware.js:
const privateBlogging = {
    ...
    return session({
        name: 'ghost-private',
        maxAge: constants.ONE_MONTH_MS,
        signed: false,
        // sameSite: 'none' <-- replace this with the 2 lines below
        sameSite: urlUtils.isSSL(config.get('url')) ? 'none' : 'lax',
        secure: urlUtils.isSSL(config.get('url'))
    })(req, res, next);
},
and all is fine!
Implement in the same way as you did in
core/server/services/auth/session/express-session.js
Hey there, thank you so much for the detailed bug report.
That does look like something that shouldn't happen! A PR to fix this issue would be very welcome 🙂
I have made the PR
Hi, I am facing the same issue in the latest version of Ghost, unable to login to the private site using Chrome based browsers.
It's nearly three months later ... and nothing has happened. But for me it's closed, because I'm working locally :-)
https://forum.ghost.org/t/make-this-site-private-not-working/39938
It's nearly three months later ... and nothing has happened. But for me it's closed, because I'm working locally :-) https://forum.ghost.org/t/make-this-site-private-not-working/39938
Yeah, they didn't care to merge it, but I appreciate your troubleshooting and the fix.
I think just a rerun would be required to pass the build; otherwise the PR is already approved:
https://github.com/TryGhost/Ghost/actions/runs/6057836235/job/16821076886?pr=17938
I have made my first PR.
I think the second one failed: Merge branch 'main' into joe-blocher-patch-1 https://github.com/TryGhost/Ghost/pull/17938/commits/ae0f64eb578f4aa43248da1b3e807a1f0c3b9bef
I don't really know what it is for or how I can delete this PR...
Maybe @daniellockyer can help
You didn't fix the error: "Make this site private" not working in Chrome and Opera.
SOLUTION: I told you in August 2023, and I have made the PR!
versions/5.82.2/core/frontend/apps/private-blogging/lib/middleware.js
const privateBlogging = {
    ....
    return session({
        name: 'ghost-private',
        maxAge: constants.ONE_MONTH_MS,
        signed: false,
        sameSite: urlUtils.isSSL(config.get('url')) ? 'none' : 'lax', // <-- insert this
        secure: urlUtils.isSSL(config.get('url'))                     // <-- insert this
        // sameSite: 'none'  // row 58: remove
    })(req, res, next);
},
The pull request is still not merged in version 5.82.2:
Fixed private mode cookie for local development #17938
What makes you say the PR wasn't merged? The commit shows that it's been in releases starting from 5.70.0.
I've downloaded the code:
versions/5.82.2/core/frontend/apps/private-blogging/lib/middleware.js
But the code is still the same:
return session({
    name: 'ghost-private',
    maxAge: constants.ONE_MONTH_MS,
    signed: false,
    sameSite: 'none' // <-- why this?
})(req, res, next);
},
The code being the same does not mean your PR was not merged. In this case it looks like this change ended up possibly breaking something else so it was reverted:
https://github.com/TryGhost/Ghost/pull/19298
The code being the same does not mean your PR was not merged. In this case it looks like this change ended up possibly breaking something else so it was reverted:
#19298
OK so that means it is still a problem. I am running 5.79.6 (released Feb 26) and cannot make the site private because of this bug. What's the ETA on solving this?
My solution:
I always change the code myself when I install an update. You only have to change 2 lines.
I first reported the solution in August 2023.
Maybe they will fix the bug sometime ...
Hey guys, any update on this one? I couldn't access the links explaining why the commit was reverted, so I'm not sure about the details or complexity of the bug. Is there any progress towards figuring it out? Thanks, and I'm a huge Ghost fan 😊
Downloaded version 5.89.1; this bug is still not fixed.
You only have to change 2 lines in your code.
I first reported the solution in August 2023.
Why is this impossible?
@daniellockyer it's disappointing that this is still an issue, especially with Docker involved.
I can confirm that I have no access via Chromium, Chrome and Edge. It works with Firefox. I can't tell my customers that.
Hey, I'm sorry that it wasn't made clear when the related PR was reverted. Unfortunately the fix broke the theme preview in Ghost admin for private sites, when the admin and site URLs are configured differently (the recommended configuration).
We are clearly missing some test coverage there, as the PR looked good to merge.
In the meantime, whilst trying to understand this issue I wasn't able to reproduce it in Arc, Chrome or Chromium.
There's something really janky going on here, because whilst there are clearly a couple of people here on this issue experiencing the problem, there's not a lot of wider noise despite private sites being used widely very successfully.
Meanwhile, when we merged the PR and broke the theme preview for private sites, we heard about it instantly from many people.
So there has to be a caveat that's not being covered here in the reproduction steps. I have a feeling that is something to do with SSL, which shouldn't be impacting production sites.
I'm going to close this bug as it stands. If anyone has the detailed reproduction case, feel free to open a new issue and we can work through what cases should and shouldn't work & making sure that fixing this issue doesn't cause a more widespread issue for private site users.
I wasn't able to reproduce it in Arc, Chrome or Chromium.
You're right!
I've updated Chrome and the bug disappeared.
It was a bug in Chrome with an error message:
OK, many thanks
| gharchive/issue | 2023-07-27T02:22:40 | 2025-04-01T06:37:38.665761 | {
"authors": [
"ErisDS",
"Grasume",
"TheLaurenBarger",
"daniellockyer",
"davedub",
"hussainb",
"joe-blocher",
"kilmarnock",
"vikaspotluri123"
],
"repo": "TryGhost/Ghost",
"url": "https://github.com/TryGhost/Ghost/issues/17514",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
345677414 | Ajax call from external website - Response 200 but no posts
Issue Summary
I’d like to fetch blog posts into my website client-side.
When I fetch posts, I get this response, and no response body:
ok: true
redirected: false
status: 200
statusText: "OK"
type: "cors"
url: "http://my_website/ghost/api/v0.1/posts/?limit=3&client_id=ghost-frontend&client_secret=MY_SECRET
To Reproduce
Follow this guide: https://api.ghost.org/docs/ajax-calls-from-an-external-website
Try to fetch posts from an external website.
Expected behaviour is to get a list of posts. When I curl it works fine.
Technical details:
Ghost Version: 1.25.1
Node Version: 6.14.3
Browser/OS: Chrome 67.0.3396.99 on the client, backend running on Ubuntu 16.04.5 (aws ec2 t2.micro)
Database: mysql
Thanks for looking into it :)
Hey @c0derabbit 👋 We ask that you please do not use GitHub for help or support, the default issue template pointed you to our forum for this type of question 😄 We use GitHub solely for bug-tracking and on-going feature development so we try to keep it noise free.
Many questions can be answered by reviewing our docs for self-hosters, our theme API, or our public API. If you can't find an answer then our forum is a great place to get community support, plus it helps create a central location for searching problems/solutions.
FYI: Many projects have their own support guidelines and GitHub will highlight them for you, or the project owners will use issue templates to point you in the right direction, please read them before opening issues
Hi @kevinansfield, I started there but got no response. Also, it seems to me more like a bug than a question: I did follow the instructions, and I suspect it's either not working as it should, or there's a missing step in the docs.
| gharchive/issue | 2018-07-30T09:22:11 | 2025-04-01T06:37:38.672269 | {
"authors": [
"c0derabbit",
"kevinansfield"
],
"repo": "TryGhost/Ghost",
"url": "https://github.com/TryGhost/Ghost/issues/9760",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1368994144 | Cannot install sqlite3 in electron project - linux ubuntu 22
Issue Summary
I can't install sqlite3 in an Electron project. Apparently it's not a problem with the Python or NAPI version; after searching a lot, I didn't find any problem like this.
The issue seems to happen at this moment:
ACTION deps_sqlite3_gyp_action_before_build_target_unpack_sqlite_dep Release/obj/gen/sqlite-autoconf-3390200/sqlite3.c
/bin/sh: 1: Syntax error: "(" unexpected
Relevant logs or output
electron-rebuild assuming is prebuildify powered: lzma-native +0ms
electron-rebuild Checking for prebuilds for "lzma-native" +0ms
electron-rebuild Found prebuilt Node-API module in /home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/lzma-native/prebuilds/linux-x64" +2ms
⠏ Building module: lzma-native, Completed: 1 electron-rebuild rebuilding sqlite3 with args [
'node',
'node-gyp',
'rebuild',
'--runtime=electron',
'--target=20.1.1',
'--arch=x64',
'--dist-url=https://www.electronjs.org/headers',
'--build-from-source',
'--verbose',
'--module_name=node_sqlite3',
'--module_path=/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/lib/binding/napi-v{napi_build_version}-linux-glibc-x64',
'--host=https://github.com/TryGhost/node-sqlite3/releases/download/',
'--remote_path=v5.0.11',
'--package_name=napi-v{napi_build_version}-linux-glibc-x64.tar.gz'
] +0ms
gyp verb command rebuild []
gyp verb command clean []
gyp verb clean removing "build" directory
⠋ Building module: sqlite3, Completed: 1gyp verb command configure []
gyp verb download using dist-url https://www.electronjs.org/headers
gyp verb find Python checking Python explicitly set from command line or npm configuration
gyp verb find Python - "--python=" or "npm config get python" is "/usr/bin/python3"
gyp verb find Python - executing "/usr/bin/python3" to get executable path
gyp verb find Python - executable path is "/usr/bin/python3"
gyp verb find Python - executing "/usr/bin/python3" to get version
⠙ Building module: sqlite3, Completed: 1gyp verb find Python - version is "3.10.4"
gyp info find Python using Python version 3.10.4 found at "/usr/bin/python3"
gyp verb get node dir compiling against --target node version: 20.1.1
gyp verb command install [ '20.1.1' ]
gyp verb download using dist-url https://www.electronjs.org/headers
gyp verb install input version string "20.1.1"
gyp verb install installing version: 20.1.1
gyp verb install --ensure was passed, so won't reinstall if already installed
⠹ Building module: sqlite3, Completed: 1gyp verb install version is already installed, need to check "installVersion"
gyp verb got "installVersion" 9
gyp verb needs "installVersion" 9
gyp verb install version is good
gyp verb get node dir target node version installed: 20.1.1
gyp verb build dir attempting to create "build" dir: /home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build
gyp verb build dir "build" dir needed to be created? Yes
gyp verb python symlink creating symlink to "/usr/bin/python3" at "/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build/node_gyp_bins/python3"
gyp verb build/config.gypi creating config file
gyp verb build/config.gypi writing out config file: /home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build/config.gypi
gyp verb config.gypi checking for gypi file: /home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/config.gypi
gyp verb common.gypi checking for gypi file: /home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/common.gypi
gyp verb gyp gyp format was not specified; forcing "make"
gyp info spawn /usr/bin/python3
gyp info spawn args [
gyp info spawn args '/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args 'binding.gyp',
gyp info spawn args '-f',
gyp info spawn args 'make',
gyp info spawn args '-I',
gyp info spawn args '/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build/config.gypi',
gyp info spawn args '-I',
gyp info spawn args '/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/node-gyp/addon.gypi',
gyp info spawn args '-I',
gyp info spawn args '/home/duende/.electron-gyp/20.1.1/include/node/common.gypi',
gyp info spawn args '-Dlibrary=shared_library',
gyp info spawn args '-Dvisibility=default',
gyp info spawn args '-Dnode_root_dir=/home/duende/.electron-gyp/20.1.1',
gyp info spawn args '-Dnode_gyp_dir=/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/node-gyp',
gyp info spawn args '-Dnode_lib_file=/home/duende/.electron-gyp/20.1.1/<(target_arch)/node.lib',
gyp info spawn args '-Dmodule_root_dir=/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3',
gyp info spawn args '-Dnode_engine=v8',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.'
gyp info spawn args ]
⠴ Building module: sqlite3, Completed: 1gyp verb command build []
gyp verb build type Release
gyp verb architecture x64
gyp verb node dev dir /home/duende/.electron-gyp/20.1.1
gyp verb which succeeded for make /usr/bin/make
gyp verb bin symlinks adding symlinks (such as Python), at "/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build/node_gyp_bins", to PATH
gyp info spawn make
gyp info spawn args [ 'V=1', 'BUILDTYPE=Release', '-C', 'build' ]
make: Entrando no diretório '/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build'
cc -o Release/obj.target/nothing/node_modules/node-addon-api/nothing.o ../node_modules/node-addon-api/nothing.c '-DNODE_GYP_MODULE_NAME=nothing' '-DUSING_UV_SHARED=1' '-DUSING_V8_SHARED=1' '-DV8_DEPRECATION_WARNINGS=1' '-DV8_DEPRECATION_WARNINGS' '-DV8_IMMINENT_DEPRECATION_WARNINGS' '-D_GLIBCXX_USE_CXX11_ABI=1' '-DELECTRON_ENSURE_CONFIG_GYPI' '-D_LARGEFILE_SOURCE' '-D_FILE_OFFSET_BITS=64' '-DUSING_ELECTRON_CONFIG_GYPI' '-DV8_COMPRESS_POINTERS' '-DV8_COMPRESS_POINTERS_IN_SHARED_CAGE' '-DV8_ENABLE_SANDBOX' '-DV8_SANDBOXED_POINTERS' '-DV8_31BIT_SMIS_ON_64BIT_ARCH' '-DV8_REVERSE_JSARGS' '-D__STDC_FORMAT_MACROS' '-DOPENSSL_NO_PINSHARED' '-DOPENSSL_THREADS' '-DOPENSSL_NO_ASM' -I/home/duende/.electron-gyp/20.1.1/include/node -I/home/duende/.electron-gyp/20.1.1/src -I/home/duende/.electron-gyp/20.1.1/deps/openssl/config -I/home/duende/.electron-gyp/20.1.1/deps/openssl/openssl/include -I/home/duende/.electron-gyp/20.1.1/deps/uv/include -I/home/duende/.electron-gyp/20.1.1/deps/zlib -I/home/duende/.electron-gyp/20.1.1/deps/v8/include -fPIC -pthread -Wall -Wextra -Wno-unused-parameter -m64 -O3 -fno-omit-frame-pointer -MMD -MF ./Release/.deps/Release/obj.target/nothing/node_modules/node-addon-api/nothing.o.d.raw -c
⠦ Building module: sqlite3, Completed: 1 rm -f Release/obj.target/node_modules/node-addon-api/nothing.a && ar crs Release/obj.target/node_modules/node-addon-api/nothing.a Release/obj.target/nothing/node_modules/node-addon-api/nothing.o
ln -f "Release/obj.target/node_modules/node-addon-api/nothing.a" "Release/nothing.a" 2>/dev/null || (rm -rf "Release/nothing.a" && cp -af "Release/obj.target/node_modules/node-addon-api/nothing.a" "Release/nothing.a")
LD_LIBRARY_PATH=/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build/Release/lib.host:/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build/Release/lib.target:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; cd ../deps; mkdir -p /home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build/Release/obj/gen/sqlite-autoconf-3390200; node ./extract.js ./sqlite-autoconf-3390200.tar.gz "/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build/Release/obj/gen"
/bin/sh: 1: Syntax error: "(" unexpected
make: *** [deps/action_before_build.target.mk:13: Release/obj/gen/sqlite-autoconf-3390200/sqlite3.c] Error 2
make: Exiting directory '/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build'
✖ Rebuild Failed
An unhandled error occurred inside electron-rebuild
node-gyp failed to rebuild '/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3'.
For more information, rerun with the DEBUG environment variable set to "electron-rebuild".
Error: make failed with exit code: 2
Error: node-gyp failed to rebuild '/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3'.
For more information, rerun with the DEBUG environment variable set to "electron-rebuild".
Error: make failed with exit code: 2
at NodeGyp.rebuildModule (/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/electron-rebuild/lib/src/module-type/node-gyp.js:120:19)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async ModuleRebuilder.rebuildNodeGypModule (/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/electron-rebuild/lib/src/module-rebuilder.js:98:9)
at async ModuleRebuilder.rebuild (/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/electron-rebuild/lib/src/module-rebuilder.js:128:14)
at async Rebuilder.rebuildModuleAt (/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/electron-rebuild/lib/src/rebuild.js:149:13)
at async Rebuilder.rebuild (/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/electron-rebuild/lib/src/rebuild.js:112:17)
at async /home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/electron-rebuild/lib/src/cli.js:158:9
Running "npm start" on project
npm start
my-new-app@1.0.0 start
cross-env NODE_ENV=development electron-forge start
✔ Checking your system
✔ Locating Application
⠸ Preparing native dependencies: 0 / 1make: Entrando no diretório '/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build'
CC(target) Release/obj.target/nothing/node_modules/node-addon-api/nothing.o
AR(target) Release/obj.target/node_modules/node-addon-api/nothing.a
⠼ Preparing native dependencies: 0 / 1 COPY Release/nothing.a
ACTION deps_sqlite3_gyp_action_before_build_target_unpack_sqlite_dep Release/obj/gen/sqlite-autoconf-3390200/sqlite3.c
/bin/sh: 1: Syntax error: "(" unexpected
make: *** [deps/action_before_build.target.mk:13: Release/obj/gen/sqlite-autoconf-3390200/sqlite3.c] Erro 2
make: Saindo do diretório '/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build'
✖ Preparing native dependencies: 0 / 1
An unhandled error has occurred inside Forge:
node-gyp failed to rebuild '/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3'.
For more information, rerun with the DEBUG environment variable set to "electron-rebuild".
Installing from --build-from-source
npm WARN deprecated core-js@2.6.12: core-js@<3.23.3 is no longer maintained and not recommended for usage due to the number of issues. Because of the V8 engine whims, feature detection in old core-js versions could cause a slowdown up to 100x even if nothing is polyfilled. Some versions have web compatibility issues. Please, upgrade your dependencies to the actual version of core-js.
npm ERR! code 1
npm ERR! path /home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3
npm ERR! command failed
npm ERR! command sh /tmp/install-ac1adf2c.sh
npm ERR! make: Entrando no diretório '/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build'
npm ERR! CC(target) Release/obj.target/nothing/node_modules/node-addon-api/nothing.o
npm ERR! AR(target) Release/obj.target/node_modules/node-addon-api/nothing.a
npm ERR! COPY Release/nothing.a
npm ERR! ACTION deps_sqlite3_gyp_action_before_build_target_unpack_sqlite_dep Release/obj/gen/sqlite-autoconf-3390200/sqlite3.c
npm ERR! make: Saindo do diretório '/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build'
npm ERR! Failed to execute '/usr/local/bin/node /usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/lib/binding/napi-v6-linux-glibc-x64/node_sqlite3.node --module_name=node_sqlite3 --module_path=/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/lib/binding/napi-v6-linux-glibc-x64 --napi_version=8 --node_abi_napi=napi --napi_build_version=6 --node_napi_label=napi-v6' (1)
npm ERR! node-pre-gyp info it worked if it ends with ok
npm ERR! node-pre-gyp info using node-pre-gyp@1.0.10
npm ERR! node-pre-gyp info using node@16.17.0 | linux | x64
npm ERR! node-pre-gyp info build requesting source compile
npm ERR! gyp info it worked if it ends with ok
npm ERR! gyp info using node-gyp@9.0.0
npm ERR! gyp info using node@16.17.0 | linux | x64
npm ERR! gyp info ok
npm ERR! gyp info it worked if it ends with ok
npm ERR! gyp info using node-gyp@9.0.0
npm ERR! gyp info using node@16.17.0 | linux | x64
npm ERR! gyp info find Python using Python version 3.10.4 found at "/usr/bin/python3"
npm ERR! gyp info spawn /usr/bin/python3
npm ERR! gyp info spawn args [
npm ERR! gyp info spawn args '/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/gyp_main.py',
npm ERR! gyp info spawn args 'binding.gyp',
npm ERR! gyp info spawn args '-f',
npm ERR! gyp info spawn args 'make',
npm ERR! gyp info spawn args '-I',
npm ERR! gyp info spawn args '/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/build/config.gypi',
npm ERR! gyp info spawn args '-I',
npm ERR! gyp info spawn args '/usr/local/lib/node_modules/npm/node_modules/node-gyp/addon.gypi',
npm ERR! gyp info spawn args '-I',
npm ERR! gyp info spawn args '/home/duende/.cache/node-gyp/16.17.0/include/node/common.gypi',
npm ERR! gyp info spawn args '-Dlibrary=shared_library',
npm ERR! gyp info spawn args '-Dvisibility=default',
npm ERR! gyp info spawn args '-Dnode_root_dir=/home/duende/.cache/node-gyp/16.17.0',
npm ERR! gyp info spawn args '-Dnode_gyp_dir=/usr/local/lib/node_modules/npm/node_modules/node-gyp',
npm ERR! gyp info spawn args '-Dnode_lib_file=/home/duende/.cache/node-gyp/16.17.0/<(target_arch)/node.lib',
npm ERR! gyp info spawn args '-Dmodule_root_dir=/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3',
npm ERR! gyp info spawn args '-Dnode_engine=v8',
npm ERR! gyp info spawn args '--depth=.',
npm ERR! gyp info spawn args '--no-parallel',
npm ERR! gyp info spawn args '--generator-output',
npm ERR! gyp info spawn args 'build',
npm ERR! gyp info spawn args '-Goutput_dir=.'
npm ERR! gyp info spawn args ]
npm ERR! gyp info ok
npm ERR! gyp info it worked if it ends with ok
npm ERR! gyp info using node-gyp@9.0.0
npm ERR! gyp info using node@16.17.0 | linux | x64
npm ERR! gyp info spawn make
npm ERR! gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
npm ERR! /bin/sh: 1: Syntax error: "(" unexpected
npm ERR! make: *** [deps/action_before_build.target.mk:13: Release/obj/gen/sqlite-autoconf-3390200/sqlite3.c] Erro 2
npm ERR! gyp ERR! build error
npm ERR! gyp ERR! stack Error: make failed with exit code: 2
npm ERR! gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
npm ERR! gyp ERR! stack at ChildProcess.emit (node:events:513:28)
npm ERR! gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)
npm ERR! gyp ERR! System Linux 5.15.0-47-generic
npm ERR! gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "build" "--fallback-to-build" "--module=/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/lib/binding/napi-v6-linux-glibc-x64/node_sqlite3.node" "--module_name=node_sqlite3" "--module_path=/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/lib/binding/napi-v6-linux-glibc-x64" "--napi_version=8" "--node_abi_napi=napi" "--napi_build_version=6" "--node_napi_label=napi-v6"
npm ERR! gyp ERR! cwd /home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3
npm ERR! gyp ERR! node -v v16.17.0
npm ERR! gyp ERR! node-gyp -v v9.0.0
npm ERR! gyp ERR! not ok
npm ERR! node-pre-gyp ERR! build error
npm ERR! node-pre-gyp ERR! stack Error: Failed to execute '/usr/local/bin/node /usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/lib/binding/napi-v6-linux-glibc-x64/node_sqlite3.node --module_name=node_sqlite3 --module_path=/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3/lib/binding/napi-v6-linux-glibc-x64 --napi_version=8 --node_abi_napi=napi --napi_build_version=6 --node_napi_label=napi-v6' (1)
npm ERR! node-pre-gyp ERR! stack at ChildProcess. (/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/@mapbox/node-pre-gyp/lib/util/compile.js:89:23)
npm ERR! node-pre-gyp ERR! stack at ChildProcess.emit (node:events:513:28)
npm ERR! node-pre-gyp ERR! stack at maybeClose (node:internal/child_process:1093:16)
npm ERR! node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:302:5)
npm ERR! node-pre-gyp ERR! System Linux 5.15.0-47-generic
npm ERR! node-pre-gyp ERR! command "/usr/local/bin/node" "/home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/.bin/node-pre-gyp" "install" "--fallback-to-build"
npm ERR! node-pre-gyp ERR! cwd /home/duende/Downloads/my-new-app(1)/my-new-app/node_modules/sqlite3
npm ERR! node-pre-gyp ERR! node -v v16.17.0
npm ERR! node-pre-gyp ERR! node-pre-gyp -v v1.0.10
npm ERR! node-pre-gyp ERR! not ok
Version
"sqlite3": "^5.0.11",
Node.js Version
v16.17.0
How did you install the library?
npm install sqlite3 --save
** I'm using typeorm and reflect-metadata
I think this is an issue with node-gyp and a folder in your path name: my-new-app(1)
Would you be able to try without the brackets?
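For anyone hitting this later: the generated Makefile runs its recipe lines through /bin/sh, and an unquoted path containing ( is a shell syntax error, which is exactly the /bin/sh: 1: Syntax error: "(" unexpected line in the logs above. A small, illustrative demonstration (any path with parentheses triggers it):

import subprocess

# Unquoted parentheses are a syntax error in /bin/sh, just like in the make recipe.
bad = subprocess.run(["/bin/sh", "-c", "cd /tmp/my-new-app(1)"],
                     capture_output=True, text=True)
print(bad.returncode, bad.stderr.strip())  # non-zero; Syntax error: "(" unexpected

# Quoting makes the same line parse; node-gyp's generated recipes don't quote,
# so the practical fix is renaming the folder to avoid shell metacharacters.
ok = subprocess.run(["/bin/sh", "-c", 'cd "/tmp/my-new-app(1)"'],
                    capture_output=True, text=True)
print(ok.returncode)  # parses fine; fails only if the directory doesn't exist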
I think this is an issue with node-gyp and a folder in your path name: my-new-app(1)
Would you be able to try without the brackets?
the problem really was the path name with ( )
thanks @daniellockyer
| gharchive/issue | 2022-09-11T16:20:17 | 2025-04-01T06:37:38.730375 | {
"authors": [
"daniellockyer",
"duendeee"
],
"repo": "TryGhost/node-sqlite3",
"url": "https://github.com/TryGhost/node-sqlite3/issues/1634",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
Sqlite3 doesn't install into dory-node android app
Hi.
I'm testing a project with Dory node (an app on Google Play from tempage.io).
I've installed the Gekko project from GitHub onto a Samsung Tab A 10.1 with Android 7.0 and kernel 3.18.14.
Then I used:
npm install --only=production
But there was an error installing sqlite3.
I tried to install it manually with
npm install sqlite3
But I got an error.
I've attached these logs:
npm-debug (npm bugs sqlite3).log
npm-debug (npm install --build-from-source).log
npm-debug (npm install sqlite3 --loglevel=info).log
Before trying to root my tablet, I would like to know if there is another solution...
Thanks
I'm trying to find a different solution to the problem... I also tried to root my tablet, but I had issues with that procedure too, because my tablet isn't supported yet.
I read the wiki carefully, but I'm not able to understand how to do it. I'm very much a newbie with GitHub and npm, so I can't understand some things.
I have a package to clone from GitHub (gekko from askmike);
Before running npm install --only=production in the terminal after cloning, I understand that I must:
download the sqlite3 package (I downloaded the zip file from GitHub: node-sqlite3-master.zip);
then, where should I put the unzipped sqlite3 files? Into what directory of the cloned package (there isn't a node_modules directory after the cloning process... the directory is created by the install process)?
After this, I have to open the file node.gyp, but in the cloned directory there isn't any *.gyp file...
Only after the install process does a node-pre-gyp file exist, and that isn't a node-gyp file like the wiki describes.
So I'm very confused!
Would you be able to try on the latest version v5.0.3? 🙂
| gharchive/issue | 2018-03-08T11:16:29 | 2025-04-01T06:37:38.738097 | {
"authors": [
"daniellockyer",
"scidran"
],
"repo": "TryGhost/node-sqlite3",
"url": "https://github.com/TryGhost/node-sqlite3/issues/949",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1836288865 | How to choose features from OpenCLIP?
Hello! Thanks for your great work.
I don't know how to use OpenCLIP to find correspondences. Could you please share this code?
Thanks again.
Hi, thanks for your interest in our work!
The original OpenCLIP codebase didn't support input image size larger than the training resolution (at least it was still the case when we wrote our paper), so we follow the common practice and manually interpolate position encoding to support larger input resolution. This pull request could be very helpful as a reference.
Feel free to let us know if you have more questions.
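In the meantime, here's a minimal sketch of the usual trick (illustrative only; not the authors' attached implementation): bicubically resize the learned patch position embeddings to the new token grid, keeping the class token untouched. It assumes pos_embed has shape [1 + grid*grid, dim] with the class token first, which is how OpenCLIP's ViTs lay it out; the linked pull request is the reference for wiring this into OpenCLIP's model loading.

import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed: torch.Tensor, new_grid: int) -> torch.Tensor:
    """Resize [1 + G*G, D] position embeddings to [1 + new_grid**2, D]."""
    cls_tok, patch_tok = pos_embed[:1], pos_embed[1:]   # class token stays as-is
    old_grid = int(patch_tok.shape[0] ** 0.5)
    dim = patch_tok.shape[1]
    # [G*G, D] -> [1, D, G, G] so we can treat it as a 2D feature map
    grid = patch_tok.reshape(old_grid, old_grid, dim).permute(2, 0, 1).unsqueeze(0)
    grid = F.interpolate(grid, size=(new_grid, new_grid),
                         mode="bicubic", align_corners=False)
    # back to [new_grid*new_grid, D]
    patch_tok = grid.squeeze(0).permute(1, 2, 0).reshape(new_grid * new_grid, dim)
    return torch.cat([cls_tok, patch_tok], dim=0)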
@ZY123-GOOD I attached a cleaned-up version of our implementation here. Hope this helps!
| gharchive/issue | 2023-08-04T08:17:07 | 2025-04-01T06:37:38.744587 | {
"authors": [
"Tsingularity",
"ZY123-GOOD"
],
"repo": "Tsingularity/dift",
"url": "https://github.com/Tsingularity/dift/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
751982563 | transforminfo command
Shows an overview of a card including the awakenings of both base & transformed version, as well as the cooldown it takes to transform
Overview tab
Description
Show types of transformed version
Show both pre-transform and transform (bc we wanna count total SB, SBR, etc)
Show latents but don't bother showing caption that mentions pre-xform cos we know that
Stats etc
Leave as-is for now
Skills
Active skill (2cd) (Base: 30 -> 30)
Leave description as-is for now
Leave leader skill as-is for now
Additional tabs
Overview tab of base
Overview tab of transformed card
(Similar to how ^ls works)
Now I see why you've asked me to comment: https://github.com/isaacs/github/issues/100
| gharchive/issue | 2020-11-27T05:01:49 | 2025-04-01T06:37:38.754745 | {
"authors": [
"RheingoldRiver",
"turtleworks"
],
"repo": "TsubakiBotPad/pad-cogs",
"url": "https://github.com/TsubakiBotPad/pad-cogs/issues/272",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2282163518 | How do I change the background color of a highlighted variable?
In the screenshot below, what is the name of the highlight group which is making "lunarvim/colorschemes" appear with a yellow background? I would like to change the color.
Figured it out: CurrentWord. Add this to your config to change the color:
-- Add specific highlight groups
on_highlights = function(highlights, colors)
highlights.CurrentWord.bg = colors.blue
end,
| gharchive/issue | 2024-05-07T02:52:05 | 2025-04-01T06:37:38.764707 | {
"authors": [
"joobus"
],
"repo": "Tsuzat/NeoSolarized.nvim",
"url": "https://github.com/Tsuzat/NeoSolarized.nvim/issues/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
nscrollbar is not in the component documentation
This function solves the problem
nscrollbar is not in the component documentation
Expected API
Sometimes a business use case needs the virtual scrollbar; it would be good to add documentation for this part.
Will add one in a later version.
2.19.5
| gharchive/issue | 2021-09-14T02:38:00 | 2025-04-01T06:37:38.766244 | {
"authors": [
"07akioni",
"jinmarcus"
],
"repo": "TuSimple/naive-ui",
"url": "https://github.com/TuSimple/naive-ui/issues/1174",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
923959718 | feat: space justify center space-
Space around isn't much use, because the margins actually make things misaligned.
space-between isn't useful either
If you use it, things actually end up misaligned.
center does work. If you want to support space-between and space-around, you need dedicated handling of the margins: for example, the first item gets no margin-left, and then each subsequent item's margin-left and margin-right are each half of the gap.
I'll look into how to handle that.
| gharchive/pull-request | 2021-06-17T13:58:45 | 2025-04-01T06:37:38.769178 | {
"authors": [
"07akioni",
"Innei"
],
"repo": "TuSimple/naive-ui",
"url": "https://github.com/TuSimple/naive-ui/pull/182",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1716625828 | 🛑 Content Delivery Network (S3 Bucket) is down
In efc5186, Content Delivery Network (S3 Bucket) (https://cdn.tubnet.gg/minecraft-resourcepack/TubPack-production.zip) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Content Delivery Network (S3 Bucket) is back up in 39a9826.
| gharchive/issue | 2023-05-19T05:15:58 | 2025-04-01T06:37:38.771920 | {
"authors": [
"PublicQualityAcc"
],
"repo": "Tubnom/tubnet-uptime",
"url": "https://github.com/Tubnom/tubnet-uptime/issues/103",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
147779523 | Update for Gnome 3.20
Please update the extension to be compatible with Gnome version 3.20
Thanks :)
Uploaded a new version.
| gharchive/issue | 2016-04-12T15:01:48 | 2025-04-01T06:37:38.772845 | {
"authors": [
"BoBeR182",
"Tudmotu"
],
"repo": "Tudmotu/gnome-shell-extension-bettervolume",
"url": "https://github.com/Tudmotu/gnome-shell-extension-bettervolume/issues/5",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1983262428 | Define the pipeline & decide ecosystem
Description
What do we need from Toolbox/ESDL and what's our pipeline going to be?
[x] What are all the pieces of the pipeline?
[ ] How much of the pipeline can SpineToolbox cover?
[ ] How much of the pipeline can ESDL/EDR cover?
[ ] What is left uncovered?
[ ] Do we need to change the input of the model to "match" SpineToolbox?
Related Issues
Blocking #94, #106, #105, #118, #88, #89, #115, #36
The more I look at this, the more I think we should use SpineToolbox to integrate everything and the EDR as just one of several data sources. And maybe also be able to put results back into ESDL to use the MapEditor etc for analysis (depending on its capability).
What I'm a bit concerned about is whether the whole system will still work in 5 years. Seems a bit complex.
Assigned myself although this is a group effort.
WHAT WE WANT
Build the network once (in a while)
Use draft networks to build new networks
Sufficient flexibility for ad-hoc code for experimentation
Definition of temporal stuff
Definition of solver specifications
Be able to mix data sources (ESDL + ENTSO-E for example)
| gharchive/issue | 2023-11-08T10:21:44 | 2025-04-01T06:37:38.785817 | {
"authors": [
"clizbe"
],
"repo": "TulipaEnergy/TulipaEnergyModel.jl",
"url": "https://github.com/TulipaEnergy/TulipaEnergyModel.jl/issues/237",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
128656772 | zerobin.py runserver: Unknown option '--host'
Using the command:
python zerobin.py --host 0.0.0.0 --port 80 --compressed-static
returns the error:
zerobin.py runserver: Unknown option '--host'
Changing it to:
python zerobin.py host 0.0.0.0 port 8001 compressed-static
returns:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/bottle-0.12.9-py2.7.egg/bottle.py", line 3099, in run
server = server(host=host, port=port, **kargs)
File "/usr/lib/python2.7/site-packages/bottle-0.12.9-py2.7.egg/bottle.py", line 2723, in __init__
self.port = int(port)
ValueError: invalid literal for int() with base 10: '0.0.0.0'
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/bottle-0.12.9-py2.7.egg/bottle.py", line 3099, in run
server = server(host=host, port=port, **kargs)
File "/usr/lib/python2.7/site-packages/bottle-0.12.9-py2.7.egg/bottle.py", line 2723, in __init__
...
Non-stop...
Either the docs are incorrect, there's a bug, or I'm just stupid?
I have the same issue. It looks like the way 0bin handles parameters changed.
I've found this digging a bit in the code:
def runserver(host='', port='', debug=None, user='', group='',
settings_file='', compressed_static=None,
version=False, paste_id_length=None, server="cherrypy"):
So I managed to run 0bin on a custom port with:
python zerobin.py 0.0.0.0 8006
But it doesn't really make any sense:
How am I supposed to enable compressed-static without setting the previous parameters?
Do I have to put dummy parameters just to set the settings file?
I'm probably missing something, but yes, the docs are outdated.
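For context, this behaviour matches how the clize CLI library maps a function signature (the sigtools import in a traceback below is a clize dependency): plain parameters become positional arguments, and only keyword-only parameters become --flags. A hedged sketch of that mapping, not 0bin's actual code:

from clize import run

def runserver(host='', port='', *, compressed_static=False):
    # host/port bind positionally; compressed_static becomes --compressed-static
    print(host, port, compressed_static)

if __name__ == '__main__':
    run(runserver)  # e.g. python zerobin.py 0.0.0.0 8006 --compressed-static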
All of this happened because I merged some PRs and trusted the content instead of reading it all. I'm guilty of laziness. I will have to roll back everything and fix all the bugs one by one.
I'm sorry for the mess, especially since I'm so slow at fixing it.
Any news on this issue?
BTW I also get issues with the command line suggested by @ArthurHoaro:
$ python zerobin.py --host 0.0.0.0 --port 80 --compressed-static
Traceback (most recent call last):
File "zerobin.py", line 4, in <module>
from zerobin.cmd import main
File "/home/zerobin-python/0bin/zerobin/cmd.py", line 12, in <module>
from sigtools.modifiers import annotate, autokwoargs
ImportError: No module named sigtools.modifiers
$ python zerobin.py 0.0.0.0 8006
Traceback (most recent call last):
File "zerobin.py", line 4, in <module>
from zerobin.cmd import main
File "/home/zerobin-python/0bin/zerobin/cmd.py", line 12, in <module>
from sigtools.modifiers import annotate, autokwoargs
ImportError: No module named sigtools.modifiers
Do I need to install sigtools.modifiers somehow?
This has been fixed in the V2 branch. It will be merged in master and pushed to pypi soon.
| gharchive/issue | 2016-01-25T22:20:03 | 2025-04-01T06:37:38.851811 | {
"authors": [
"ArthurHoaro",
"ksamuel",
"rugk",
"sametmax",
"wankbank"
],
"repo": "Tygs/0bin",
"url": "https://github.com/Tygs/0bin/issues/101",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
} |
521305936 | Transpiling Math.atan2 to math.atan2
Currently Math.atan2(y, x) is transpiled to math.atan(y / x).
math.atan2 is available in Lua 5.
math.atan2 (y, x)
Returns the arc tangent of y/x (in radians), but uses the signs of both parameters to find the quadrant of the result. (It also handles correctly the case of x being zero.)
Therefore Math.atan2(y, x) can be transpiled to math.atan2(y, x) for the extra functionality of quadrant checking and handling the case of x being zero.
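Python's math.atan2 has the same C-style semantics as Lua's, so it can illustrate what the extra functionality buys (an illustration only, not the transpiler's output):

import math

y, x = 1.0, -1.0
print(math.atan(y / x))      # -0.7853... wrong quadrant: atan can't distinguish (1, -1) from (-1, 1)
print(math.atan2(y, x))      #  2.3561... (3*pi/4), the correct second-quadrant angle
print(math.atan2(1.0, 0.0))  #  1.5707... (pi/2); computing y / x here would raise ZeroDivisionError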
It is a little more complicated: Lua 5.1, 5.2 and, I'm assuming, LuaJIT (should be checked) do support math.atan2. Lua 5.3 however does not have math.atan2; instead math.atan takes an optional second argument, which allows it to function as atan2 https://www.lua.org/manual/5.3/manual.html#pdf-math.atan
The math.atan2 function is labelled as deprecated but still available in Lua 5.3. See https://www.lua.org/manual/5.3/manual.html#8.2 and I have tested if the math.atan2 function exists in Lua 5.3.5. So until the function is removed in a future version of Lua, it could be just transpiled directly to math.atan2. I have also verified that math.atan2 is available in LuaJIT 2.1.0-beta3.
I have tested if the math.atan2 function exists in Lua 5.3.5
Are you sure about that? I don't have it in a standard 5.3.5 build, maybe you had some compatibility flags enabled?
| gharchive/issue | 2019-11-12T03:26:19 | 2025-04-01T06:37:38.868609 | {
"authors": [
"Perryvw",
"Sanjo",
"ark120202"
],
"repo": "TypeScriptToLua/TypeScriptToLua",
"url": "https://github.com/TypeScriptToLua/TypeScriptToLua/issues/746",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
547400000 | TypeDoc doesn't render comments with Handlebars 4.6.0
[x] I have checked issues with bug label and found no duplicates
Expected Behavior
Comments should be rendered when generating documents using globally installed typedoc. (i.e. npm install typedoc --global)
Note that it works as expected when installed locally.
# doesn't work - documentation rendered without comments
typedoc --out docs
# works - documentation rendered with comments
node_modules/.bin/typedoc --out docs
Expected:
Actual Behavior
Comments aren't included when run using typedoc global install.
The comment section is missing from the html output:
<section class="tsd-panel tsd-comment">
<div class="tsd-comment tsd-typography">
<div class="lead">
<p>Base class for animals</p>
</div>
</div>
</section>
Steps to reproduce the bug
Command:
npm install typedoc --global
git clone https://github.com/socsieng/typedoc-plugin-typescript-declaration.git
cd typedoc-plugin-typescript-declaration/example
typedoc --out docs
open docs/classes/_index_.example.animal.html
Environment
Typedoc version: 0.15.6
Node.js version: 8.16.0
npm version: 6.4.1
nvm version: 0.34.0
OS: macOS Catalina 10.15.1 (19B88)
I'm experiencing a similar issue, although it's the local installation that's not rendering the comments in the HTML output.
When I add the --json flag, I can see that my tags and shortText are picked up in the object's comment property.
+1
This is.... really weird. I can confirm the global/local issue, no idea what's causing it yet. Looking into it.
@jeremyrea could you provide a repo with a repro for the issue when run locally?
It seems like the global install has been broken for a long time... TypeDoc@0.12.0 also has this issue.
@Gerrit0 https://github.com/jeremyrea/typedoc-comment-repro
Looks like Handlebars is the cause of this break (global + local I'm guessing, I bet the local install that works has a lower version of handlebars pinned in package-lock.json) - https://github.com/wycats/handlebars.js/pull/1633...
I'm not exactly sure how we should go about fixing this... listing out all of the prototype methods that we expect a template to be able to access (as suggested in the handlebars PR) isn't feasible and is very likely to break in the future whenever a new method is added.
For now, I'll pin handlebars to a lower version and release a patch with that change.
Fixed in v0.15.7, thanks for the report @socsieng + @jeremyrea!
Leaving this open to track finding a better solution. I don't want to be stuck on an old version of handlebars forever.
Thanks @Gerrit0, can confirm that it works for me.
Handlebars 4.7.0 has been released with options to disable prototype restrictions:
https://handlebarsjs.com/api-reference/runtime-options.html#options-to-control-prototype-access
Thanks @nknapp!
I'll release 0.15.8 with a handlebars version bump to 4.7.0 later today :)
v0.15.8 is released
| gharchive/issue | 2020-01-09T10:48:21 | 2025-04-01T06:37:38.879747 | {
"authors": [
"Gerrit0",
"jeremyrea",
"kobezzza",
"nknapp",
"socsieng"
],
"repo": "TypeStrong/typedoc",
"url": "https://github.com/TypeStrong/typedoc/issues/1159",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1255373926 | Use typedoc as remote type
Search terms
typedoc, typescript, remote, micro frontends
Question
I love typedoc and its generated docs, but is it possible to use the output of typedoc, or even write some plugin, to be able to use typedoc output as a remote type (comment or import)? e.g.
/** typedoc:url website.com/typedoc/MyComponentType **/
const MyComponent = import('website.com/components/MyComponent');
- or -
import MyComponentType from `website.com/typedoc/MyComponentType`;
const MyComponent: MyComponentType = import('website.com/components/MyComponent');
This pattern would allow remote types when using patterns like micro frontends, where there is no direct importing between the different repos. This could also aid in the generation of better docs; where the type is remote, we could say something like:
import MyComponentType from `website.com/typedoc/MyComponentType`;
type Something {
MyComponent: MyComponentType
}
Let me know if this is possible; if not, maybe point me in the right direction.
You're probably after renderer.addUnknownSymbolResolver - https://github.com/TypeStrong/typedoc/blob/master/internal-docs/third-party-symbols.md
| gharchive/issue | 2022-06-01T08:40:25 | 2025-04-01T06:37:38.882644 | {
"authors": [
"Gerrit0",
"Kivylius"
],
"repo": "TypeStrong/typedoc",
"url": "https://github.com/TypeStrong/typedoc/issues/1947",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
937273081 | MP3 downloaded music missing background image
Version
1.8.2
Details
When I download a song in MP3 format I don't get the image when I play it with VLC Media Player; there is only a black screen instead of the background (which is the thumbnail of the video, I guess).
Is that a bug, a problem with my media player, or did you intend to remove this feature in this version?
Steps to reproduce
Download any video in MP3 format
Open the file with a media player
This only extracts the audio from the YouTube video, so there won't be any "tags" on the audio file; this is not a bug, I suppose, as you are getting the audio only
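For anyone who wants to add the artwork back after downloading, one possible workaround is to embed the thumbnail as an ID3 cover tag yourself. A hedged sketch using the mutagen library (file names are placeholders, and this is not YoutubeDownloader's code):

from mutagen.id3 import ID3, APIC

# assumes the file already has an ID3 header; create one first if it doesn't
tags = ID3("song.mp3")
with open("thumbnail.jpg", "rb") as img:
    # type=3 marks the image as the front cover; encoding=3 is UTF-8
    tags.add(APIC(encoding=3, mime="image/jpeg", type=3, desc="Cover", data=img.read()))
tags.save()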
@KavyaKinjalk yeah but with the previous version I could do it, so I don't know if they removed this feature
@EremitaDelle6Vie, I suffer from the same issue. I tried some older versions but it did not work there either; maybe it's caused by some changes on the server side.
Same problem ++ Please add thumbnail images to mp3 files.
I can add to my previous comments.
In the Audi entertainment system the thumbnails were shown, but not in Windows.
But I used an older version in the process of getting it to work again.
| gharchive/issue | 2021-07-05T17:39:27 | 2025-04-01T06:37:38.901348 | {
"authors": [
"EremitaDelle6Vie",
"KavyaKinjalk",
"M123-dev",
"berkayyildi"
],
"repo": "Tyrrrz/YoutubeDownloader",
"url": "https://github.com/Tyrrrz/YoutubeDownloader/issues/222",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2280374801 | 🛑 Mirrors OPL is down
In 14f997c, Mirrors OPL (https://mirrors.opl.uab.cat) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Mirrors OPL is back up in 6a5e777 after 4 minutes.
| gharchive/issue | 2024-05-06T08:53:13 | 2025-04-01T06:37:38.916593 | {
"authors": [
"JordiRoman"
],
"repo": "UAB-OPL/opl-uab-monitoring",
"url": "https://github.com/UAB-OPL/opl-uab-monitoring/issues/914",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
954999569 | Merge develop into master - Include Grouper 2.5 migration
This is the primary PR now for testing Grouper 2.5 and implementation.
Note that the original feature branch was so outdated that it required a git merge --no-ff master.
Before this is merged, here are a few things to do:
[ ] Bump the version from v1.0.0 to v1.0.1
[ ] setup.py
[ ] requiam/__init__.py
[ ] docs/source/conf.py
[ ] Update CHANGELOG.md
Note that we have two config files for v2.2 and v2.5 deployment. It's likely that when Grouper 2.5 goes public, grouper25.iam.arizona.edu will become grouper.iam.arizona.edu.
Closes #116
To test this PR, we do:
./scripts/script_run --config config/figshare_grouper25.ini --persistent_path /mnt/block3_sfo2/ --ldap_password $eds_pass --grouper_password $eds_pass --portal --quota --sync
A dry run tested 99% of the cases, except that it does not update the quota etc. So after the Grouper 2.5 upgrade, partial failures exist for the automatic run.
A manual run was carried out. A dry run was successful, but a "--sync" run failed with a timeout. Updating "timeout = 100" (originally timeout = 60 in figshare.ini) solved this issue. The first batch of 100 took 73s, which is why timeout = 60 failed.
17:41:48 - INFO: batch size = 100, batch timeout = 100 seconds, batch delay = 0 seconds
17:41:48 - INFO: processing drops:
17:41:48 - INFO: processing adds:
17:43:02 - INFO: added batch 1, 100 entries, 73.567426 seconds
17:43:12 - INFO: added batch 2, 100 entries, 10.436354 seconds
17:43:22 - INFO: added batch 3, 100 entries, 10.194059 seconds
17:43:32 - INFO: added batch 4, 100 entries, 9.949762 seconds
17:43:42 - INFO: added batch 5, 100 entries, 10.029692 seconds
17:43:45 - INFO: added batch 6, 33 entries, 2.717927 seconds
17:43:45 - INFO: QUOTA : Total time: 0 hours 7 minutes 17.11 seconds
17:43:45 - INFO: Total time: 0 hours 15 minutes 39.70 seconds
17:43:45 - INFO: ******************************
17:43:45 - INFO: SUMMARY DATA
                   num_EDS  num_Grouper  adds  drops  total
fine_arts              258          258     0      0      0
performing_arts        426          426     0      0      0
architecture           178          178     0      0      0
arts_design             41           41     0      0      0
business_econ          424          424     0      0      0
management             272          272     0      0      0
economics              110          110     0      0      0
education             1301         1301     0      0      0
english                296          296     0      0      0
lang_culture           392          392     0      0      0
humanities             271          271     0      0      0
law                    402          402     0      0      0
nursing                369          369     0      0      0
med_health            1949         1949     0      0      0
clinical               857          857     0      0      0
ped_reprod             197          197     0      0      0
neurology              104          104     0      0      0
oncology               280          280     0      0      0
pharmacology           723          723     0      0      0
physiology             134          134     0      0      0
public_health          492          492     0      0      0
astro                  646          646     0      0      0
cognitive_sci          360          360     0      0      0
life_sci              1846         1846     0      0      0
sci_math              1898         1898     0      0      0
earth_sci             1047         1047     0      0      0
physics                200          200     0      0      0
lpl                    310          310     0      0      0
agriculture            143          143     0      0      0
anthropology           207          207     0      0      0
social_sci             870          870     0      0      0
cultural_studies       169          169     0      0      0
history                 72           72     0      0      0
journalism             245          245     0      0      0
engineering           1268         1268     0      0      0
technology             144          144     0      0      0
libraries              277          277     0      0      0
536870912            15097        15097     0      0      0
2147483648           24393        23860   533      0    533
17:43:45 - INFO: ******************************
17:43:45 - INFO: Exit 0
| gharchive/pull-request | 2021-07-28T15:48:40 | 2025-04-01T06:37:38.952143 | {
"authors": [
"astrochun",
"yhan818"
],
"repo": "UAL-RE/ReQUIAM",
"url": "https://github.com/UAL-RE/ReQUIAM/pull/161",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
746322230 | create code-of-conduct and contribution file
I also created a .gitignore file which ignores unnecessary files.
Reviewed by Huan
| gharchive/pull-request | 2020-11-19T07:27:03 | 2025-04-01T06:37:38.966050 | {
"authors": [
"chuangw46",
"huan-ds"
],
"repo": "UBC-MDS/Abalone_Age_Prediction",
"url": "https://github.com/UBC-MDS/Abalone_Age_Prediction/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2110806386 | Peer Review Feedback
Tests
check test framework - potential compatibility issues with non-Mac users (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1915907137 - 1). Tests passed for Joey (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1916139577)
Look into Marco's experience with Windows: (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1917919802 - 3, 4) @phchen5
Flies to Delete
remove pyxplor.py (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1915907137 - 2) (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1916139577 - 1) (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1916277295 - 2) (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1917919802 - 1) @iris0614 (DELETE: pyxplor.py from src AND from tests)
Vignette @rbouwer
Break the main vignette into smaller ones: it could have a quick-look, basic-usage section, plus a longer, in-depth vignette going into the lengthy explorations that are possible with the package, separated under its own tab. Albeit the current breakdown is definitely helpful. (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1915907137 - 3)
It would be better if there was more context about the dataset, and a more detailed introduction explaining the importance and applications of EDA in data science would help users understand the relevance and application of the examples. (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1916139577 - 2)
If possible, include interactive elements or widgets in the tutorial for a hands-on experience. These interactive elements can make the learning process more engaging and effective, and help users better understand the capabilities and use of the package. (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1916139577 - 3)
Function Improvements
plot_categorical @rbouwer (Nice to have - OPTIONAL)
For the categorical bar plots, they could be ordered from largest to smallest to make it easier to visualise the categories. They are already ordered in the facetted plots; only the first plot in the docs for categorical is not (see the sketch after this group). (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1915907137 - 4)
It might be beneficial to reconsider the use of colour in the Distribution of Categorical Variables. The current approach assigns colours to bars based on their count ranking, which could potentially confuse users. For example, in your example.ipynb, pickup_borough is displayed in green for Bronx, while dropoff_borough is in red, solely due to count variations for the same variable. Given that each bar already has a clear label, the additional colour coding might not be necessary and could be removed. (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1916277295 - 3)
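One way to get both the descending order and a single neutral colour, sketched with pandas/matplotlib (a generic illustration, not PyXplor's internals; the column name is made up):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({"pickup_borough": ["Bronx", "Queens", "Bronx", "Manhattan", "Bronx"]})
counts = df["pickup_borough"].value_counts()  # value_counts already sorts largest to smallest
counts.plot.bar(color="steelblue")            # one colour: the labels already identify each bar
plt.tight_layout()
plt.show()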
plot_numeric @iris0614 (Nice to have - OPTIONAL)
You can enhance the EDA experience by offering scatterplots between the target variable (if numerical) and the numerical explanatory variables, catering to users who want to visualize the relationships before creating models for predictions. (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1916277295 - 4)
Ordinal? (For later)
Consider EDA for ordinal variables, I think "passengers" is visualized as an ordinal variable instead of categorical variable, as you've maintained the natural order of the number of passengers instead of ranking them as you did with other categorical variables. (see example.ipynb). (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1916277295 - 5)
Badges
The badges are not all there; consider adding the 2 missing ones, continuous integration and test coverage, and Python versions supported (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1915907137 - 5) (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1916139577 - 4) (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1916277295 - 1) @arturoboquin
Repo Improvements
Although there is a clickable button that leads you to the documentation, I would suggest adding a link in the "About" section of the repo so that less experienced users don't struggle as much when trying to find it. (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1917919802 - 2) @arturoboquin
README
As of today, the installation instructions' title is "Installation (developers)". As a regular user, that made me think that I was looking into the wrong section and I searched for the "Installation (mortals)" section. As I didn't find one, I assumed that you chose this title as the package is still in development. However, I would remove the "(developers)" in the final version or create a regular user section and move the developer's instructions to the ReadTheDocs full documentation website. (https://github.com/UBC-MDS/software-review-2024/issues/9#issuecomment-1917919802 - 5) @phchen5
any additional improvements (log in this issue)
COMMIT MESSAGE FORMATTING
fix: Feedback addressed by ...
| gharchive/issue | 2024-01-31T19:24:49 | 2025-04-01T06:37:38.984061 | {
"authors": [
"rbouwer"
],
"repo": "UBC-MDS/PyXplor",
"url": "https://github.com/UBC-MDS/PyXplor/issues/115",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2257290518 | 🛑 Student Media - Motley is down
In 5467b7d, Student Media - Motley (https://motley.ie) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Student Media - Motley is back up in 657aeab after 26 minutes.
| gharchive/issue | 2024-04-22T19:19:19 | 2025-04-01T06:37:39.010268 | {
"authors": [
"gal"
],
"repo": "UCCNetsoc/upptime",
"url": "https://github.com/UCCNetsoc/upptime/issues/1096",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
728394787 | Software Emulation / Onboard Running Stuck after Enqueued task
First of all, thank you for the awesome work.
I am trying to run the FlexCNN workflow and repeat the experiments in the paper. I have encountered some difficulties and I think I could use some help.
Environment
Ubuntu 16.04 LTS
Vivado Design Suite 2018.3
SDx 2018.3
Instruction generation and DSE are done the same as the documentation says, no changes made.
Description
I could not finish running the entire workflow on an AWS F1 instance. SDAccel hw_emu works, but sw_emu gets stuck in the same way as the on-board run.
$ ./pose_prj.exe binary_container_1.xclbin
Your working PATH is: /home/niansong.zhang/FlexCNN
cin_size: 15728384
bias_size: 27792
weight_size: 553952
Loading instructions...
Layer num: 95
Preparing data...
Loading input...
Loading weight...
Loading bias...
Loading output...
Found Platform
Platform Name: Xilinx
device number: 1
device name: xilinx_aws-vu9p-f1-04261818_dynamic_5_0
INFO: Importing binary_container_1.xclbin
Loading: 'binary_container_1.xclbin'
Kernel launched!
Enqueued task!
The workflow could not finish running after ~5 hours. However, hw_emu finishes running and passes the result check (which is done in the host program).
Makefile
My Makefile for sw_emu is as follows (same as the repo; only the tool paths changed):
#
# this file was created by a computer. trust it.
#
# compiler tools
XILINX_VIVADO_HLS ?= $(XILINX_SDX)/Vivado_HLS
SDX_CXX ?= $(XILINX_SDX)/bin/xcpp
XOCC ?= $(XILINX_SDX)/bin/xocc
RM = rm -f
RMDIR = rm -rf
SDX_PLATFORM = xilinx_aws-vu9p-f1-04261818_dynamic_5_0
# SDX_PLATFORM = xilinx_vcu1525_xdma_201830_1
XILINX_XRT = /opt/xilinx/xrt
# host compiler global settings
CXXFLAGS += -DSDX_PLATFORM=$(SDX_PLATFORM) -D__USE_XOPEN2K8 -I/home/Xilinx/SDx/2018.3/runtime/include/1_2/ -I/home/Xilinx/SDx//2018.3/include/ -O2 -Wall -c -fmessage-length=0 -std=c++14 -I/opt/xilinx/xrt/include/
LDFLAGS += -L/opt/xilinx/xrt/lib/ -lxilinxopencl -lpthread -lrt -lstdc++ -L/home/Xilinx/SDx/2018.3/runtime/lib/x86_64
# kernel compiler global settings
XOCC_OPTS = -t sw_emu --save-temps --report system --max_memory_ports top_kernel --platform $(SDX_PLATFORM) -O3 --sp top_kernel_1.m_axi_gmem1:bank0 --sp top_kernel_1.m_axi_gmem2:bank1 --sp top_kernel_1.m_axi_gcontrol:bank0
#
# OpenCL kernel files
#
BINARY_CONTAINERS += binary_container_1.xclbin
BUILD_SUBDIRS += binary_container_1
BINARY_CONTAINER_1_OBJS += binary_container_1/top_kernel.xo
ALL_KERNEL_OBJS += binary_container_1/top_kernel.xo
ALL_MESSAGE_FILES = $(subst .xo,.mdb,$(ALL_KERNEL_OBJS)) $(subst .xclbin,.mdb,$(BINARY_CONTAINERS))
#
# host files
#
HOST_OBJECTS += src/cnn_sw.o
HOST_OBJECTS += src/host.o
HOST_OBJECTS += src/xcl2.o
HOST_EXE = pose_prj.exe
BUILD_SUBDIRS += src/
#
# primary build targets
#
.PHONY: all clean
all: $(BINARY_CONTAINERS) $(HOST_EXE)
clean:
-$(RM) $(BINARY_CONTAINERS) $(ALL_KERNEL_OBJS) $(ALL_MESSAGE_FILES) $(HOST_EXE) $(HOST_OBJECTS)
-$(RM) *.xclbin.sh
-$(RMDIR) $(BUILD_SUBDIRS)
-$(RMDIR) _xocc*
-$(RMDIR) .Xil
.PHONY: incremental
incremental: all
nothing:
#
# binary container: binary_container_1.xclbin
#
binary_container_1/top_kernel.xo: ../src/hw_kernel.cpp ../src/pose.h /home/Xilinx/Vivado/2018.3/include/hls_stream.h /home/Xilinx/Vivado/2018.3/include/ap_int.h /home/Xilinx/Vivado/2018.3/include/ap_fixed.h
@mkdir -p $(@D)
-@$(RM) $@
$(XOCC) $(XOCC_OPTS) -c -k top_kernel --max_memory_ports top_kernel --messageDb $(subst .xo,.mdb,$@) -I"$(<D)" --xp misc:solution_name=_xocc_compile_binary_container_1_top_kernel -o"$@" "$<" --kernel_frequency 310
binary_container_1.xclbin: $(BINARY_CONTAINER_1_OBJS)
-@echo $(XOCC) $(XOCC_OPTS) -l --nk top_kernel:1 --messageDb $(subst .xclbin,.mdb,$@) --xp misc:solution_name=_xocc_link_binary_container_1 --remote_ip_cache /home/niansong.zhang/workspace/ipcache -o"$@" $(+) > binary_container_1.xclbin.sh
$(XOCC) $(XOCC_OPTS) -l --nk top_kernel:1 --messageDb $(subst .xclbin,.mdb,$@) --xp misc:solution_name=_xocc_link_binary_container_1 --remote_ip_cache /home/niansong.zhang/workspace/ip_cache -o"$@" $(+) --kernel_frequency 310
#
# host rules
#
src/cnn_sw.o: ../src/cnn_sw.cpp ../src/pose.h
@mkdir -p $(@D)
$(SDX_CXX) $(CXXFLAGS) -DSDX_PLATFORM=$(SDX_PLATFORM) -D__USE_XOPEN2K8 -I/home/Xilinx/SDx/2018.3/runtime/include/1_2/ -I/home/Xilinx/Vivado/2018.3/include/ -O2 -Wall -c -fmessage-length=0 -o "$@" "$<"
src/host.o: ../src/host.cpp ../src/xcl2.hpp ../src/pose.h
@mkdir -p $(@D)
$(SDX_CXX) $(CXXFLAGS) -DSDX_PLATFORM=$(SDX_PLATFORM) -D__USE_XOPEN2K8 -I/home/Xilinx/SDx/2018.3/runtime/include/1_2/ -I/home/Xilinx/Vivado/2018.3/include/ -O2 -Wall -c -fmessage-length=0 -o "$@" "$<"
src/xcl2.o: ../src/xcl2.cpp ../src/xcl2.hpp
@mkdir -p $(@D)
$(SDX_CXX) $(CXXFLAGS) -DSDX_PLATFORM=$(SDX_PLATFORM) -D__USE_XOPEN2K8 -I/home/Xilinx/SDx/2018.3/runtime/include/1_2/ -I/home/Xilinx/Vivado/2018.3/include/ -O2 -Wall -c -fmessage-length=0 -o "$@" "$<"
$(HOST_EXE): $(HOST_OBJECTS)
$(SDX_CXX) -o "$@" $(+) $(LDFLAGS) -lxilinxopencl -lpthread -lrt -lstdc++ -L/home/Xilinx/SDx/2018.3/runtime/lib/x86_64
I would really appreciate any help or insights. Thank you for looking into this problem.
TL;DR: sw_emu and the hardware test are stuck after the task is enqueued.
Could you please run the code with libsacc/config/openpose.insts so that we can find out whether the problem is with the code or with the instructions?
If that works, add "#define DEBUG_layer" to "util.h", run the software emulation, and see at which layer the code gets stuck. Note that when you add this, hardware emulation doesn't run; you should comment it out for hardware emulation.
Thank you for the quick response!
I changed the instructions to libsacc/config/openpose.insts and recompiled, but software emulation still gets stuck.
Using openpose instructions and enabling DEBUG_layer:
Passed85
Passed86
Passed87
Software emulation of compute unit(s) exited unexpectedly
It was running fine until layer 88. I'll check what layer it is and see if I can get a clue.
A random guess: could it have something to do with the pooling layer? I am using the SDx_project code, and just noticed that pooling is not connected in the engine module (SDx_project/src/hw_kernel.cpp).
The problem here is that there is no layer 88. There are instructions for 87 layers in openpose.insts. Change the LAYER_NUM in params.h to 87 and the problem should go away.
The OpenPose network does not use the pooling layer, which is why it is commented out in SDx_project/src/hw_kernel.cpp. If your network uses a pooling layer, just uncomment the pool module.
Thank you, that solved my problem. Thank you very much for the timely help!
| gharchive/issue | 2020-10-23T17:33:12 | 2025-04-01T06:37:39.028025 | {
"authors": [
"atefehsz",
"zzzDavid"
],
"repo": "UCLA-VAST/FlexCNN",
"url": "https://github.com/UCLA-VAST/FlexCNN/issues/14",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1701115184 | Name intermediate results of Future composition, and other clarifications
The primary aim of this PR is to make it easier to comprehend the Future composition paradigm that I have used extensively throughout the codebase, by giving names to things that were previously anonymous.
I've also corrected some typos and made minor no-op changes for concision (hopefully not at the expense of clarity).
I'm open to all feedback as far as what you all think works, or doesn't.
Also, I'd like to pretend the branch name is "naming things".
I guess the tests ran slowly enough on that particular PR build that they triggered a timeout error. The timeout value on those tests is 10 seconds. My rationale for keeping those was that they provide additional specification of how the component under test should behave, although I am not opposed to removing them if they're going to fail intermittently when run in GHA.
| gharchive/pull-request | 2023-05-09T00:21:04 | 2025-04-01T06:37:39.030599 | {
"authors": [
"markmatney"
],
"repo": "UCLALibrary/prl-harvester",
"url": "https://github.com/UCLALibrary/prl-harvester/pull/62",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |