id | text | source | created | added | metadata
---|---|---|---|---|---
1880949003 | Fix caret jumps for web composer
Fixes https://github.com/bluesky-social/social-app/issues/1300
Fixes caret jumping to the end when inserting an emoji in the middle of the post using our picker
Fixes caret jumping to the end when blurring out of the composer and focusing it back again (maybe controversial)
See https://github.com/bluesky-social/social-app/issues/1300#issuecomment-1705753534 for an explanation. TLDR:
We shouldn't be jumping the caret when the user picks an emoji. They might be in the middle of the post anyway.
We should not be reading mutable .length in order to "jump" the caret to the end — it appears that our text editing library does not guarantee whether the new value is flushed yet by the time onFocus fires.
Although we could have fixed that by using focus('end') instead of focus().setTextSelection(...), we probably shouldn't be jumping the caret to the end at all on focus anyway — since it's not what you actually want e.g. when inserting an emoji in the middle of the text. So I removed that line altogether unless we have some motivation to keep it. I haven't found a case where it's needed yet.
This might need a review from someone familiar with #1241 and #1254.
This needs more work. @ansh pointed out that we need to jump the cursor to the end when the user opens the composer on somebody else's profile. This is because we want to place it after the mention.
Pushed another commit. This ensures that if we press Compose on someone's profile, the cursor will appear after the handle.
| gharchive/pull-request | 2023-09-04T23:13:34 | 2025-04-01T06:38:04.576556 | {
"authors": [
"gaearon"
],
"repo": "bluesky-social/social-app",
"url": "https://github.com/bluesky-social/social-app/pull/1374",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2334132466 | add types for desktopFixedHeight to List
Let's stop requiring @ts-ignore for this prop in our List implementation
Oh, actually need to look closer. In Views.web.tsx it's typed as boolean | number
I never implemented it for web actually. I think it's maybe used for native?
I'm not sure what it's really supposed to be doing. I think feeds use it.
Yea, the FlatList_INTERNAL takes both, but that makes the prop name a little confusing. I'll have to look through later to see why it gets used on native or if it even does.
I see that desktopFixedHeightOffset is a prop for Feed, but it never gets used either as far as I can tell?
Will leave number as well for now, but I made a note to revisit this and remove number if possible. Seems fine, though I want to give it a more solid pass before assuming that's true.
| gharchive/pull-request | 2024-06-04T18:28:44 | 2025-04-01T06:38:04.580953 | {
"authors": [
"gaearon",
"haileyok"
],
"repo": "bluesky-social/social-app",
"url": "https://github.com/bluesky-social/social-app/pull/4356",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2364764052 | Factor out bluesky-tiled-plugins package
This creates a second Python package in the databroker repository, bluesky-tiled-plugins, with the "special client" objects BlueskyRun, BlueskyEventStream, and CatalogOfBlueskyRuns, as well as the custom query objects PartialUID, ScanID, and TimeRange. Quoting the README:
For a user wishing to connect to a running Tiled server and access Bluesky data,
this package, along with its dependency tiled[client], is all they need.
The databroker package is only required if the user wants to use the legacy
databroker.Broker API.
This means it is no longer necessary to install databroker in the client environment unless the user has legacy databroker.Broker code.
To be clear, the server environment still needs databroker.mongo_normalized, the Tiled Adapter for MongoDB with Bluesky document collections.
This is a backward-compatible change. Databroker now has a dependency on bluesky-tiled-plugins and has shim modules that expose the moved objects at the original locations within the databroker package.
Closes #812
I have registered the publish-pypi.yml workflow in this repository as a "pending" trusted publisher, such that merging this PR should create the bluesky-tiled-plugins package on PyPI.
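As a rough usage sketch of that client-only workflow (the server URL and run identifier below are hypothetical placeholders, not part of this PR):
# Sketch: a client environment with only tiled[client] and
# bluesky-tiled-plugins installed; the URL and run key are made up.
from tiled.client import from_uri

catalog = from_uri("https://tiled.example.com")  # a CatalogOfBlueskyRuns
run = catalog["<scan-uid>"]                      # a BlueskyRun
stream = run["primary"]                          # a BlueskyEventStream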
| gharchive/pull-request | 2024-06-20T15:46:22 | 2025-04-01T06:38:04.585266 | {
"authors": [
"danielballan"
],
"repo": "bluesky/databroker",
"url": "https://github.com/bluesky/databroker/pull/814",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2539766250 | TYPO should be lower case to be consistent with others in this module
Somehow, this missed testing. A long time ago.
# Reproduction (assumes the import: from ophyd import TetrAMM)
tetramm = TetrAMM("8idTetra:QUAD1:", name="tetramm")
for axis in "x y".split():
    for attr_name in "offset offset_calc scale".split():
        getattr(tetramm, f"position_{attr_name}_{axis}").kind = "config"
# Raises: AttributeError: position_scale_y
We can get by without this change (which might break existing usage) with:
for attr_name in tetramm.component_names:
    attr = getattr(tetramm, attr_name)
    if attr_name.startswith("current_"):
        for ch_name in attr.component_names:
            getattr(attr, ch_name).kind = "config"
    elif attr_name.startswith("position_"):
        attr.kind = "config"
But for sure, all these components need to be kind="config".
| gharchive/pull-request | 2024-09-20T22:21:34 | 2025-04-01T06:38:04.587082 | {
"authors": [
"prjemian"
],
"repo": "bluesky/ophyd",
"url": "https://github.com/bluesky/ophyd/pull/1211",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2444012200 | 🛑 Skytalks SSL Website (v6) is down
In 3a82c3e, Skytalks SSL Website (v6) (https://skytalks.info) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Skytalks SSL Website (v6) is back up in 4f8f558 after 15 minutes.
| gharchive/issue | 2024-08-02T04:30:29 | 2025-04-01T06:38:04.590549 | {
"authors": [
"bluknight"
],
"repo": "bluknight/skytalks-monitor",
"url": "https://github.com/bluknight/skytalks-monitor/issues/660",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
126839719 | Search
There seems to be a need to search articles.
Implementation ideas:
Use a LIKE search (slow, but easy)
Use a full-text search engine (fast, but operationally more complex)
Search across article title, body, and author info (?)
Showing the match position probably isn't necessary, but it would be nice to have
As for results, a list of matching article titles is enough
Static pages are out of scope
For the body, search the text extracted from the Markdown rendered as HTML
I thought the HTML conversion result could just as well be stored in the DB.
If the conversion logic still changes now and then, the current approach is fine, but once it settles it will rarely change, so I thought it would be more convenient to keep the conversion result in a column.
The basic functionality was handled in pull request #288.
The main changes are as follows:
[x] Search across article title, body, and contributors
[x] Display article titles
[x] Filter by contributor
[x] Keep articles that were unpublished after publication from showing up in search https://github.com/bm-sms/daimon-news-multi-tenant/pull/288/files#r54970392
To be handled in separate PRs:
[ ] Text extraction (avoid matching on Markdown syntax characters)
[ ] https://github.com/bm-sms/daimon-news-multi-tenant/pull/288#discussion_r54832043
[ ] Review follow-up for author search (improve ranking of author full names)
[ ] https://github.com/bm-sms/daimon-news-multi-tenant/commit/c15ca8bc8211a61f25b223838cb623390e6ebba2#commitcomment-16527154
[ ] Review follow-up for author search (improve weighting of author info)
[ ] https://github.com/bm-sms/daimon-news-multi-tenant/commit/c15ca8bc8211a61f25b223838cb623390e6ebba2#commitcomment-16557270
[ ] Reduce what is permitted via strong parameters? https://github.com/bm-sms/daimon-news-multi-tenant/pull/288#discussion_r55311282
[ ] Is pagination of search results unnecessary? https://github.com/bm-sms/daimon-news-multi-tenant/pull/288#issuecomment-193617934
[ ] On the results page, show the search box with the results instead of in the sidebar? https://github.com/bm-sms/daimon-news-multi-tenant/pull/288#issuecomment-193616616
[ ] The search box ends up empty after searching
[ ] Display thumbnails
[ ] Sort results by score (ordering is currently lost when mapping to AR models, so they are sorted by newest date for now)
Since the remaining tasks are easy to lose track of in this issue, they were split out into a separate issue.
| gharchive/issue | 2016-01-15T09:51:18 | 2025-04-01T06:38:04.606320 | {
"authors": [
"mtsmfm",
"myokoym",
"okkez",
"tricknotes"
],
"repo": "bm-sms/daimon-news-multi-tenant",
"url": "https://github.com/bm-sms/daimon-news-multi-tenant/issues/185",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
91775620 | Change status code description from 'warn' to 'warning'.
For compatibility with the alerta dashboard (http://alerta.io) this change is needed, as it treats the status 'warn' as 'unknown'.
http://nagios.sourceforge.net/docs/3_0/pluginapi.html also describes it as 'WARNING', not 'WARN'.
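For illustration, a minimal sketch (hypothetical, not the library's actual code) of mapping Nagios-style plugin exit codes to the spelled-out status strings:
# Hypothetical sketch: Nagios plugin exit codes mapped to status strings,
# using "warning" (not "warn") so dashboards such as alerta recognize the state.
NAGIOS_STATES = {0: "ok", 1: "warning", 2: "critical", 3: "unknown"}

def exit_code_to_state(code: int) -> str:
    return NAGIOS_STATES.get(code, "unknown")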
Thanks for doing this! I didn't realize I'd made this mistake :-)
| gharchive/pull-request | 2015-06-29T11:12:55 | 2025-04-01T06:38:04.631494 | {
"authors": [
"bmhatfield",
"ealekseev"
],
"repo": "bmhatfield/riemann-sumd",
"url": "https://github.com/bmhatfield/riemann-sumd/pull/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2066722477 | Adds kind field and ability to create Microshift ISO
PR is complete:
generates an iso using os-builder
uses the Redfish API to boot it on a system
@bn222 this script is currently working consistently. I have tested it out. After investigating, I figured out that the issue I was running into was a disk-space problem, which is why the build was failing. PTAL. I will clean up the style checks and address your comments to get this into CDA. I have also included a small microshift.yml which I used to test my configuration on CDA.
Please make sure that the linter checks pass while I take a look.
@bn222 @wizhaoredhat PTAL I have addressed all the comments
A lot better than your previous drafts, good job! Please keep in mind modular/readable/reusable code. I would add more commentary to explain things better.
Added some more comments to my functions
@bn222 PTAL
I've tested the PR. It works but it has two more issues.
The cleanup uses composer-cli, but it is installed later, so there is a mistake in the ordering.
Your code returns before the installation completes, making it difficult to use this in automation since there is no way to tell when it's really done.
https://github.com/bn222/cluster-deployment-automation/blob/main/clustersConfig.py#L103
This is where I expect the kubeconfig to be
@vrindle @bn222
This is a bit more nuanced, since it is possible to have multiple kubeconfigs. We might need to change it such that it is a folder of kubeconfigs, or we can concatenate them into one file.
wdym multiple kubeconfigs? 1 kubeconfig per cluster, I would assume.
Current schema:
clusters:
- name: "microshift-cluster"
kind: "microshift"
kubeconfig: "/root/kubeconfig.whichclusterdoibelong"`
masters:
- name: "m1"
type: "physical"
node: "m1"
ip: "10.19.128.15"
bmc_ip: "10.19.128.16"
bmc_user: "root"
bmc_password: "calvin"
- name: "m2"
type: "physical"
node: "m2"
ip: "10.19.128.17"
bmc_ip: "10.19.128.18"
bmc_user: "root"
bmc_password: "calvin"
I expect:
clusters:
- name: "microshift-cluster"
kind: "microshift"
kubeconfig: "/root/kubeconfig.1"`
masters:
- name: "m1"
type: "physical"
node: "m1"
ip: "10.19.128.15"
bmc_ip: "10.19.128.16"
bmc_user: "root"
bmc_password: "calvin"
- name: "microshift-cluster"
kind: "microshift"
kubeconfig: "/root/kubeconfig.2"`
masters:
- name: "m1"
type: "physical"
node: "m1"
ip: "10.19.128.17"
bmc_ip: "10.19.128.18"
bmc_user: "root"
bmc_password: "calvin"
assert len(masters) == 1 or not microshift
I expect:
clusters:
- name: "microshift-cluster1"
kind: "microshift"
kubeconfig: "/root/kubeconfig.1"`
masters:
- name: "m1"
type: "physical"
node: "m1"
ip: "10.19.128.15"
bmc_ip: "10.19.128.16"
bmc_user: "root"
bmc_password: "calvin"
- name: "microshift-cluster2"
kind: "microshift"
kubeconfig: "/root/kubeconfig.2"`
masters:
- name: "m1"
type: "physical"
node: "m1"
ip: "10.19.128.17"
bmc_ip: "10.19.128.18"
bmc_user: "root"
bmc_password: "calvin"
assert len(masters) == 1 or not microshift
Ah I see, ok with that scheme then it should be ok.
assert len(masters) == 1 or not microshift
@wizhaoredhat @bn222 With the way this code is currently implemented, we check that if kind is microshift then we want len(masters) == 1, so I believe this is in line with the schema that was presented.
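For clarity, a minimal sketch (hypothetical code, not the actual clustersConfig.py) of the per-cluster check implied by that assert:
# Hypothetical sketch of the per-cluster check discussed above; field names
# follow the YAML schema shown here, not the actual CDA implementation.
import yaml

def validate_clusters(config_text: str) -> None:
    for cluster in yaml.safe_load(config_text)["clusters"]:
        masters = cluster.get("masters", [])
        is_microshift = cluster.get("kind") == "microshift"
        # MicroShift deployments are single-node, so exactly one master each.
        assert len(masters) == 1 or not is_microshift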
| gharchive/pull-request | 2024-01-05T04:47:42 | 2025-04-01T06:38:04.646662 | {
"authors": [
"bn222",
"vrindle",
"wizhaoredhat"
],
"repo": "bn222/cluster-deployment-automation",
"url": "https://github.com/bn222/cluster-deployment-automation/pull/95",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
241856433 | Possible bug in start/stop routines
Played with "echo" demo, as found on "Getting started" page (using stable branch with debug symbols).
All runs smoothly, however when I run this demo under valgrind and ^C it, the following errors are reported:
==7444== Memcheck, a memory error detector
==7444== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==7444== Using Valgrind-3.12.0.SVN and LibVEX; rerun with -h for copyright info
==7444== Command: ./a.out
==7444==
* Listening on port 3000
Server is running 1 worker X 1 thread, press ^C to stop
* 7444 is running.
^C==7444== Invalid read of size 4
==7444== at 0x12115D: defer_perform_in_fork (defer.c:394)
==7444== by 0x112DD6: facil_run (facil.c:1069)
==7444== by 0x1105A8: main (e2.c:53)
==7444== Address 0x5404cc0 is 0 bytes after a block of size 0 alloc'd
==7444== at 0x4C2DBC5: calloc (vg_replace_malloc.c:711)
==7444== by 0x121022: defer_perform_in_fork (defer.c:363)
==7444== by 0x112DD6: facil_run (facil.c:1069)
==7444== by 0x1105A8: main (e2.c:53)
==7444==
==7444== Invalid read of size 4
==7444== at 0x121196: defer_perform_in_fork (defer.c:397)
==7444== by 0x112DD6: facil_run (facil.c:1069)
==7444== by 0x1105A8: main (e2.c:53)
==7444== Address 0x5404cc0 is 0 bytes after a block of size 0 alloc'd
==7444== at 0x4C2DBC5: calloc (vg_replace_malloc.c:711)
==7444== by 0x121022: defer_perform_in_fork (defer.c:363)
==7444== by 0x112DD6: facil_run (facil.c:1069)
==7444== by 0x1105A8: main (e2.c:53)
==7444==
* 7444 cleanning up.
* (7444) Stopped listening on port 3000
--- Completed Shutdown ---
Probably there is an error there and you should have a look.
I have looked over defer.c's defer_perform_in_fork, and it seems to me that the main culprit is defer.c:382:
pids_count++;
This makes the loop under 'finish:' run past the allocated memory; deleting this line fixes the valgrind error.
Hope this helps :)
Thanks 🙏🏻 🎉👍🏻👏🏻👏🏻
I re-read that piece of code and I have no idea what that line was doing there... 😂 I guess it was a leftover from a previous implementation.
Thank you very much for spotting it. 👍🏻
Thank you for your work :)
Out of curiosity, why do you use hand-written spinlocks, and not mutexes?
You're welcome :-)
why do you use hand-written spinlocks, and not mutexes?
It's a combination of performance testing and my own foolishness.
When I started the project, some of my design choices were a bit foolish and resulted in higher lock contention. Some of the lock contention issues couldn't be avoided (i.e. the defer library's queue access).
The original locking used mutexes and I noticed that mutexes were slower than I would like them to be. Something about the rescheduling mechanism was taking a long time to wake the threads up. Condition variables weren't any better...
I tested it against spinlocks using stdatomic.h. The spinlocks were noticeably faster for the defer queue (where contention can't be avoided).
But stdatomic.h wasn't available on Ubuntu 14.04 (I was deploying on Heroku), so I just ended up writing the thing myself.
| gharchive/issue | 2017-07-10T21:27:56 | 2025-04-01T06:38:04.670581 | {
"authors": [
"boazsegev",
"cdkrot"
],
"repo": "boazsegev/facil.io",
"url": "https://github.com/boazsegev/facil.io/issues/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2472060417 | Update README.md
Added a new section called Other Resources, which contains free SQL tutorial and interview-question resources...
Please let me know if any changes are required.
Thank you!
Thank you but I think that this is not needed at the moment!
| gharchive/pull-request | 2024-08-18T17:00:01 | 2025-04-01T06:38:04.673147 | {
"authors": [
"CodeForHunger",
"bobbyiliev"
],
"repo": "bobbyiliev/introduction-to-sql",
"url": "https://github.com/bobbyiliev/introduction-to-sql/pull/88",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
70790805 | Missing Getting Started with Johnny Five and JY MCU Bluetooth Serial Port Module
I was unable to locate this article https://github.com/rwaldron/johnny-five/wiki/Getting-Started-with-Johnny-Five-and-JY-MCU-Bluetooth-Serial-Port-Module or anything talking about the JY-MCU module. Not sure if this was by design. If not, how can I help to get it up there?
Also - Great job to everyone contributing to this project. This is an excellent resource and it's much appreciated.
It's here: http://johnny-five.io/api/
Currently, those links go back to the repo, but I plan on doing a formal migration in the very near future.
That works for me! Thank you for your help.
Awesome :)
| gharchive/issue | 2015-04-24T20:48:29 | 2025-04-01T06:38:04.704974 | {
"authors": [
"bmf",
"rwaldron"
],
"repo": "bocoup/johnny-five.io",
"url": "https://github.com/bocoup/johnny-five.io/issues/44",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2516867844 | Scope and time scheduling of the meeting for overview of activities performed during development of dynamic systems (issue #908 related)
The goal is to schedule the meeting in the Teams calendar on 2024.09.10 at 20:30.
Meeting scheduled.
| gharchive/issue | 2024-09-10T16:02:56 | 2025-04-01T06:38:04.743477 | {
"authors": [
"bogumilchilinski"
],
"repo": "bogumilchilinski/dynpy",
"url": "https://github.com/bogumilchilinski/dynpy/issues/909",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
505356074 | electron-prebuilt is deprecated
Following this blog: https://medium.com/developers-writing/building-a-desktop-application-with-electron-204203eeb658
$ npm install --save-dev electron-prebuilt
npm WARN deprecated electron-prebuilt@1.4.13: electron-prebuilt has been renamed to electron. For more details, see http://electron.atom.io/blog/2016/08/16/npm-install-electron
@bojzi I will work around errors as long as I can and report issues here. If you are not interested in further issues regarding the blog or you are not going to fix them anyway, just let me know to save me that work. Thanks.
Anyone alive around here?
| gharchive/issue | 2019-10-10T15:43:33 | 2025-04-01T06:38:04.745277 | {
"authors": [
"flaschbier"
],
"repo": "bojzi/sound-machine-electron-guide",
"url": "https://github.com/bojzi/sound-machine-electron-guide/issues/13",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
300483613 | Philippjfr/register kernel 2
fixes #10
supersedes #21
I'm pointing out that this PR causes us to expose the Jupyter kernel to viewers of a push_notebook plot. This should be revisited in the future.
| gharchive/pull-request | 2018-02-27T03:44:50 | 2025-04-01T06:38:04.791812 | {
"authors": [
"canavandl"
],
"repo": "bokeh/jupyterlab_bokeh",
"url": "https://github.com/bokeh/jupyterlab_bokeh/pull/22",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1513219532 | Ruby 3.x support
Summary
Ruby 3.x has been out for more than 2 years now, so I was surprised when I went to run jets new today and couldn't because there's no support for modern Ruby versions. 😞
I understand priorities need to be balanced when maintaining a project of this size, but it feels like it's time to bump this one up the list a little if it isn't already on top 😊 Thanks for helping build and maintain this awesome library. I'm excited to see it continue to push Ruby forward in the serverless age.
The main reason Ruby 3 is not supported yet is that AWS Lambda doesn't support it yet. It's a bummer.
RE: Thanks for helping build and maintain this awesome library. I'm excited to see it continue to push Ruby forward in the serverless age.
Thanks for the kind words.
Thanks for the quick response @tongueroo! Apologies, I did not realize lambda doesn't support 3.x yet 😅 Hopefully this closed issue can serve as an answer to future folks who have this same question at least. 🙏🏻
Seems like this could probably be re-opened with the recent news of container base image support of Ruby 3.2 in Lambda: https://github.com/aws/aws-lambda-base-images/issues/54#issuecomment-1486974882
@Jengah Got me excited. Sadly, it's only for the lambda container base image. It's not yet rolled out to AWS Lambda itself officially. The AWS console still shows Ruby 2.7 only.
I checked 2 regions: us-east-1 and us-west-2. Seems like AWS is getting close to ready. I'm guessing they updated the lambda container image in preparation for it. Will look at adding 3.2 support once it's officially released.
Also, I saw that AWS released Python 3.10 recently. So Ruby 3.2 hopefully is close 🤞
Sorry for the false excitement. As the comment mentions, the AWS managed framework runtime should be available within 90 days of the announcement, with 2.7 support lasting 6 months past the General Availability release of the Ruby 3.2 runtime, giving some runway to get migrated.
I was more pointing out that preliminary testing could be done using the supported base container as a custom runtime, but the official managed runtime should be out in the next 60 days or so, so it may not be worth the effort.
Awesome. Appreciate the heads up!
Done in https://github.com/boltops-tools/jets/pull/654 🎉 Blog Post: Jets v4 Release: Ruby 3.2 Support
Looks like AWS is ghost-testing Ruby 3.2 support for AWS Lambda. Ruby 3.2 does not show up in the AWS console, but you're able to deploy with the Ruby 3.2 runtime via CloudFormation. Was able to deploy a jets v4 app with ruby 3.2 successfully without the use of custom runtime. 🎉
| gharchive/issue | 2022-12-28T22:48:54 | 2025-04-01T06:38:04.840812 | {
"authors": [
"Jengah",
"jhunschejones",
"tongueroo"
],
"repo": "boltops-tools/jets",
"url": "https://github.com/boltops-tools/jets/issues/636",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2650241527 | Fix #100 bird identify or plugin unexpected error must be reported
Coverage after merging identify-500-must-be-reported into main will be 77.96%
Coverage Report for Changed Files:
File | Stmts | Branches | Funcs | Lines | Uncovered Lines
src/services/PluginsCommonService.js | 66.02% | 72% | 50% | 66.51% | 100–101, 110–123, 132–135, 154, 159, 161–164, 191, 196, 200–201, 207–209, 38–40, 46, 46–59, 63–66, 69–73, 76–81, 84–89, 92–99
| gharchive/pull-request | 2024-11-11T19:46:54 | 2025-04-01T06:38:04.845372 | {
"authors": [
"boly38"
],
"repo": "boly38/botEnSky",
"url": "https://github.com/boly38/botEnSky/pull/105",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1849854811 | Support annotations / properties for components / sboms
As you know, CDX and SPDX use different mechanisms to tag/label/annotate components.
SPDX uses Annotations to add additional data to an SPDX Element; SPDX 2.x has Document- and Package/snippet/file-level annotations, while SPDX 3 uses a flat Annotations list with a ref to an SPDX element.
CDX 1.x has Property, which is a narrowed-down version of annotations introduced in CDX 1.5.
I've looked over past work for conversions, which yielded:
https://github.com/spdx/cdx2spdx/blob/08ec34f11b15dd747410d13c1fc4d11645a20a4b/src/main/java/org/spdx/cdx2spdx/CycloneSpdxConverter.java#L565C60-L565C60
As I figured, they used the opinionated way to convert Properties to Annotations in SPDX (this is a CDX2SPDX tool), forcing Properties as Annotations related to the package/file/document/etc.
Since we have the luxury of having it both ways, we should add Annotations to Nodes and the Bom (document); we should also structure annotations so they can easily be converted from/to properties.
When unserializing SPDX: we do not have "Properties," only annotations, so we would need to support Node-level properties. The problem is that we won't have a way to know how to parse a "key" and "value", so we can default to "annotation1", "annotation2", etc., with the value taken from "statement."
When serializing SPDX: if we'd have a specific "key" [if we pulled it from CDX] we would encode it alongside the value into a statement ("key=value") and add protobom as the annotator.
When serializing CDX we have no issues when serializing to annotations (document level) and properties (from annotations with properties)
When we unserialize, we simply load document-level annotations as global annotations and properties as node-level annotations.
Wdyt?
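To make the proposed round-trip concrete, here is a sketch (written in Python purely for illustration; protobom itself is Go, and these helper names are hypothetical):
# Hypothetical helpers illustrating the proposed key=value round-trip between
# CDX properties and SPDX annotation statements described above.
def property_to_statement(key: str, value: str) -> str:
    return f"{key}={value}"

def statement_to_property(statement: str, index: int) -> tuple[str, str]:
    # With no "key" in SPDX, fall back to "annotationN" with the statement as value.
    if "=" in statement:
        key, _, value = statement.partition("=")
        return key, value
    return f"annotation{index}", statement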
I have opened an issue for SPDX3 at https://github.com/spdx/spdx-3-model/issues/477. It appears that name could be a suitable source for "key", and storing it would enable lossless conversion from SPDX to CDX and vice versa.
SPDX->CDX->SPDX
annotation -> document annotation (CDX 1.5 only) / node annotations as properties, rebuilding the entire Element as a properties list, using the Name prop value as "key" and the statement as "value"
"properties": [
{
"name": "annotator",
"value": "annotator value"
},
{
"name": "comment",
"value": "comment value"
},
{
"name": "annotationType",
"value": "other"
},
{
"name": "annotationDate",
"value": "...."
}
]
Going back to SPDX would include rebuilding the annotation object from those props.
There is an edge case where multiple annotations refer to the same key. To handle this, property names need to be scoped and prefixed either by index (in SPDX 2.x) or by SPDXID in SPDX 3.0.
Ah, this is a common hack that abuses both standards a little bit. Properties in CDX != Annotations in SPDX. There is really no equivalent between them; it was one of the points raised during the SPDX/CDX compatibility talks, and CDX offered to add annotations to their spec.
My understanding is that annotations were devised as a way to comment on the document while properties are a way to capture additional data. So, in the purest sense, I would not store them in the same place, but since it is a hack that people use frequently I'd support that as an option at the conversion layer.
Ah awesome, reviewing the 1.5 spec annotations are now in:
https://cyclonedx.org/docs/1.5/json/#annotations_items_bom-ref
Yup, the issue with SPDX is the lack of a key-value structure that can properly capture data. In the spdx-3 thread I referenced, I'm in contact with the maintainers about adding a new properties field to Annotation to avoid abusing the statement value as a kv store. That would make conversion easier going forward.
Basically, an SPDX annotation could be translated to a CDX 1.5 annotation with the exception of signature; and of course, since a CDX annotation does not contain properties and has a separate field for that, we could safely assume that ANY document-level annotation is either a CDX Annotation or CDX Properties, making all other annotations (element annotations) convert into CDX component properties.
Not sure why this is completed. Reopening.
| gharchive/issue | 2023-08-14T14:07:57 | 2025-04-01T06:38:04.854965 | {
"authors": [
"manifestori",
"puerco"
],
"repo": "bom-squad/protobom",
"url": "https://github.com/bom-squad/protobom/issues/71",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1242367156 | 🛑 Segunda Cia Bomberos is down
In edd0b79, Segunda Cia Bomberos (https://2da.cl) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Segunda Cia Bomberos is back up in aecad0a.
| gharchive/issue | 2022-05-19T21:51:27 | 2025-04-01T06:38:04.857888 | {
"authors": [
"bomberosalas"
],
"repo": "bomberosalas/status",
"url": "https://github.com/bomberosalas/status/issues/170",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2119069760 | 🛑 Segunda Cia Bomberos is down
In 7297278, Segunda Cia Bomberos (https://2da.cl) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Segunda Cia Bomberos is back up in c8aa505 after 6 minutes.
| gharchive/issue | 2024-02-05T17:20:16 | 2025-04-01T06:38:04.860601 | {
"authors": [
"bomberosalas"
],
"repo": "bomberosalas/status",
"url": "https://github.com/bomberosalas/status/issues/964",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1078595551 | Update the documentation with the changes required for "Redesign the eventing library to support multiple Streams and Consumers"
Hey @corcoja, can you please take a look and update the necessary documentation for the changes done in #255?
You can create a new branch from the useboomerang.io repo for this change.
Hey @morarucostel! Do you want me to update the boomerang-io/lib.eventing repo docs, the documentation on useboomerang.io website or both?
Hey @corcoja , I believe both of them should be updated to reflect the latest.
The higher-priority action, imo, is the useboomerang.io one.
| gharchive/issue | 2021-12-13T14:25:31 | 2025-04-01T06:38:04.899513 | {
"authors": [
"corcoja",
"morarucostel"
],
"repo": "boomerang-io/roadmap",
"url": "https://github.com/boomerang-io/roadmap/issues/300",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
479736746 | Sending very large strings over websockets using Beast
I am trying to send a very large string (strlen 50,000) over websockets using beast.
std::string ss1 = some_function(); net::mutable_buffer cb0(&ss1[0], ss1.size()); // ss1.size() > 45,000
I was not facing any issues when the size was less than 4096.
An accessing-unallocated-memory error arises while converting the string to a buffer.
I traced the error back to the boost/asio/buffer.hpp file: first to the memcpy call (line 2179), and then to:
template <typename TargetIterator, typename SourceIterator> std::size_t buffer_copy(one_buffer, multiple_buffers, TargetIterator target_begin, TargetIterator, SourceIterator source_begin, SourceIterator source_end, std::size_t max_bytes_to_copy = (std::numeric_limits<std::size_t>::max)()) ASIO_NOEXCEPT (line 2201)
I am adding the whole sample code that I used
//
// Copyright (c) 2016-2017 Vinnie Falco (vinnie dot falco at gmail dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//
// Official repository: https://github.com/boostorg/beast
//
//------------------------------------------------------------------------------
//
// Example: WebSocket client, asynchronous
//
//------------------------------------------------------------------------------
#include <boost/beast/core.hpp>
#include <boost/beast/websocket.hpp>
#include <boost/asio/connect.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <cstdlib>
#include <fstream>
#include <functional>
#include <iostream>
#include <memory>
#include <string>
using namespace std;
using tcp = boost::asio::ip::tcp; // from <boost/asio/ip/tcp.hpp>
namespace websocket = boost::beast::websocket; // from <boost/beast/websocket.hpp>
//------------------------------------------------------------------------------
std::string read_data(){
//read data from a file
string line;
string rline;
ifstream myfile ("examples.txt");
if (myfile.is_open())
{
while ( getline (myfile,line) )
{
cout << line << '\n';
rline = rline + line;
}
myfile.close();
}
else cout << "Unable to open file";
return rline;
}
// Report a failure
void
fail(boost::system::error_code ec, char const* what)
{
std::cerr << what << ": " << ec.message() << "\n";
}
// Sends a WebSocket message and prints the response
class session : public std::enable_shared_from_this<session>
{
tcp::resolver resolver_;
websocket::stream<tcp::socket> ws_;
boost::beast::multi_buffer buffer_;
std::string host_;
std::string text_;
public:
// Resolver and socket require an io_context
explicit
session(boost::asio::io_context& ioc)
: resolver_(ioc)
, ws_(ioc)
{
}
// Start the asynchronous operation
void
run(
char const* host,
char const* port,
char const* text)
{
// Save these for later
host_ = host;
text_ = text;
// Look up the domain name
resolver_.async_resolve(
host,
port,
std::bind(
&session::on_resolve,
shared_from_this(),
std::placeholders::_1,
std::placeholders::_2));
}
void
on_resolve(
boost::system::error_code ec,
tcp::resolver::results_type results)
{
if(ec)
return fail(ec, "resolve");
// Make the connection on the IP address we get from a lookup
boost::asio::async_connect(
ws_.next_layer(),
results.begin(),
results.end(),
std::bind(
&session::on_connect,
shared_from_this(),
std::placeholders::_1));
}
void
on_connect(boost::system::error_code ec)
{
if(ec)
return fail(ec, "connect");
// Perform the websocket handshake
ws_.async_handshake(host_, "/",
std::bind(
&session::on_handshake,
shared_from_this(),
std::placeholders::_1));
}
void
on_handshake(boost::system::error_code ec)
{
if(ec)
return fail(ec, "handshake");
// Send the message
ws_.async_write(
boost::asio::buffer(text_),
std::bind(
&session::on_write,
shared_from_this(),
std::placeholders::_1,
std::placeholders::_2));
}
void
on_write(
boost::system::error_code ec,
std::size_t bytes_transferred)
{
boost::ignore_unused(bytes_transferred);
if(ec)
return fail(ec, "write");
// Read a message into our buffer
ws_.async_read(
buffer_,
std::bind(
&session::on_read,
shared_from_this(),
std::placeholders::_1,
std::placeholders::_2));
}
void
on_read(
boost::system::error_code ec,
std::size_t bytes_transferred)
{
boost::ignore_unused(bytes_transferred);
if(ec)
return fail(ec, "read");
else{
std::string ss1 = read_data(); // note: ss1 is a local; its lifetime ends before the async write completes (the bug diagnosed below)
boost::asio::mutable_buffer cb0(&ss1[0], ss1.size());
ws_.async_write(
boost::asio::buffer(ss1),
std::bind(
&session::on_write,
shared_from_this(),
std::placeholders::_1,
std::placeholders::_2));
}
// Close the WebSocket connection
ws_.async_close(websocket::close_code::normal,
std::bind(
&session::on_close,
shared_from_this(),
std::placeholders::_1));
}
void
on_close(boost::system::error_code ec)
{
if(ec)
return fail(ec, "close");
// If we get here then the connection is closed gracefully
// The buffers() function helps print a ConstBufferSequence
std::cout << boost::beast::buffers(buffer_.data()) << std::endl;
}
};
//------------------------------------------------------------------------------
int main(int argc, char** argv)
{
// Check command line arguments.
if(argc != 4)
{
std::cerr <<
"Usage: websocket-client-async \n" <<
"Example:\n" <<
" websocket-client-async echo.websocket.org 80 "Hello, world!"\n";
return EXIT_FAILURE;
}
auto const host = argv[1];
auto const port = argv[2];
auto const text = argv[3];
// The io_context is required for all I/O
boost::asio::io_context ioc;
// Launch the asynchronous operation
std::make_shared<session>(ioc)->run(host, port, text);
// Run the I/O service. The call will return when
// the socket is closed.
ioc.run();
return EXIT_SUCCESS;
}
And attached is the sample data file
examples.txt
We need more information. Can you show the code that sends the string using the beast websocket?
I have attached the code and data file I was using. Please ignore any compilation errors. Could you advise on how to send very large data in one go?
Thank you
boost::asio::buffer(ss1)
ss1 gets destroyed when it goes out of scope, leading to an access violation. You must ensure that the lifetime of the string extends until at least the invocation of the completion handler.
Thanks a lot .Its working
| gharchive/issue | 2019-08-12T16:24:29 | 2025-04-01T06:38:04.926121 | {
"authors": [
"07sainishanth",
"vinniefalco"
],
"repo": "boostorg/beast",
"url": "https://github.com/boostorg/beast/issues/1676",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
329839372 | Add copyright to all sources
Add copyright headers to all source files in all bootique modules:
[ ] bootique
[ ] bootique-aws
[ ] bootique-bom
[ ] bootique-cayenne
[ ] bootique-curator
[ ] bootique-cxf
[ ] bootique-di
[ ] bootique-flyway
[ ] bootique-jcache
[ ] bootique-jdbc
[ ] bootique-jersey
[ ] bootique-jersey-client
[ ] bootique-jetty
[ ] bootique-job
[ ] bootique-jooq
[ ] bootique-kafka-client
[ ] bootique-kotlin
[ ] bootique-linkmove
[ ] bootique-linkrest
[ ] bootique-liquibase
[ ] bootique-logback
[ ] bootique-metrics
[ ] bootique-modules-parent
[ ] bootique-mvc
[ ] bootique-parent
[ ] bootique-rabbitmq-client
[ ] bootique-shiro
[ ] bootique-swagger
[ ] bootique-tapestry
[x] bootique-undertow
I applied the PR. A few minor points:
"the" should be removed from this text: "Licensed to the ObjectStyle LLC"
Remove paragraph indentation before "Licensed"
| gharchive/issue | 2018-06-06T12:07:43 | 2025-04-01T06:38:05.001184 | {
"authors": [
"andrus",
"stariy95"
],
"repo": "bootique/bootique",
"url": "https://github.com/bootique/bootique/issues/226",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1708686792 | Help Request - Having trouble overriding the 'top-nav-search' widget in child theme
Wondered if I could get a little help.
I'm trying to override the top-nav-search widget in the child theme to add the 'mt-2' class to the div. I'm attempting to do this with the following code in the child theme's functions.php:
function child_theme_register_sidebar() {
unregister_sidebar('top-nav-search');
register_sidebar(array(
'name' => esc_html__('Top Nav Search', 'bootscore'),
'id' => 'top-nav-search',
'description' => esc_html__('Add widgets here.', 'bootscore'),
'before_widget' => '<div class="top-nav-search mt-2">',
'after_widget' => '</div>',
'before_title' => '<div class="widget-title d-none">',
'after_title' => '</div>'
));
}
add_action('widgets_init', 'child_theme_register_sidebar');
I've also tried using the hook
add_action( 'after_setup_theme', 'child_theme_register_sidebar' );
But it's still using the parent theme's 'top-nav-search' widget.
Not sure if it's possible to unregister a single widget and register a new one with the same name. But of course, you can override the entire widget register function:
function bootscore_widgets_init() {
// Top Nav
register_sidebar(array(
'name' => esc_html__('Top Nav', 'bootscore'),
'id' => 'top-nav',
'description' => esc_html__('Add widgets here.', 'bootscore'),
'before_widget' => '<div class="top-nav-widget ms-2">',
'after_widget' => '</div>',
'before_title' => '<div class="widget-title d-none">',
'after_title' => '</div>'
));
// Top Nav 2
// Adds a widget next to the Top Nav position but moves to offcanvas on <lg breakpoint
register_sidebar(array(
'name' => esc_html__('Top Nav 2', 'bootscore'),
'id' => 'top-nav-2',
'description' => esc_html__('Add widgets here.', 'bootscore'),
'before_widget' => '<div class="top-nav-widget-2 d-lg-flex align-items-lg-center mt-2 mt-lg-0 ms-lg-2">',
'after_widget' => '</div>',
'before_title' => '<div class="widget-title d-none">',
'after_title' => '</div>'
));
// Top Nav Search
register_sidebar(array(
'name' => esc_html__('Top Nav Search', 'bootscore'),
'id' => 'top-nav-search',
'description' => esc_html__('Add widgets here.', 'bootscore'),
'before_widget' => '<div class="top-nav-search mt-2">',
'after_widget' => '</div>',
'before_title' => '<div class="widget-title d-none">',
'after_title' => '</div>'
));
// Sidebar
register_sidebar(array(
'name' => esc_html__('Sidebar', 'bootscore'),
'id' => 'sidebar-1',
'description' => esc_html__('Add widgets here.', 'bootscore'),
'before_widget' => '<section id="%1$s" class="widget %2$s card card-body mb-4">',
'after_widget' => '</section>',
'before_title' => '<h2 class="widget-title card-header h5">',
'after_title' => '</h2>',
));
// Top Footer
register_sidebar(array(
'name' => esc_html__('Top Footer', 'bootscore'),
'id' => 'top-footer',
'description' => esc_html__('Add widgets here.', 'bootscore'),
'before_widget' => '<div class="footer_widget mb-5">',
'after_widget' => '</div>',
'before_title' => '<h2 class="widget-title">',
'after_title' => '</h2>'
));
// Footer 1
register_sidebar(array(
'name' => esc_html__('Footer 1', 'bootscore'),
'id' => 'footer-1',
'description' => esc_html__('Add widgets here.', 'bootscore'),
'before_widget' => '<div class="footer_widget mb-4">',
'after_widget' => '</div>',
'before_title' => '<h2 class="widget-title h5">',
'after_title' => '</h2>'
));
// Footer 2
register_sidebar(array(
'name' => esc_html__('Footer 2', 'bootscore'),
'id' => 'footer-2',
'description' => esc_html__('Add widgets here.', 'bootscore'),
'before_widget' => '<div class="footer_widget mb-4">',
'after_widget' => '</div>',
'before_title' => '<h2 class="widget-title h5">',
'after_title' => '</h2>'
));
// Footer 3
register_sidebar(array(
'name' => esc_html__('Footer 3', 'bootscore'),
'id' => 'footer-3',
'description' => esc_html__('Add widgets here.', 'bootscore'),
'before_widget' => '<div class="footer_widget mb-4">',
'after_widget' => '</div>',
'before_title' => '<h2 class="widget-title h5">',
'after_title' => '</h2>'
));
// Footer 4
register_sidebar(array(
'name' => esc_html__('Footer 4', 'bootscore'),
'id' => 'footer-4',
'description' => esc_html__('Add widgets here.', 'bootscore'),
'before_widget' => '<div class="footer_widget mb-4">',
'after_widget' => '</div>',
'before_title' => '<h2 class="widget-title h5">',
'after_title' => '</h2>'
));
// Footer Info
register_sidebar(array(
'name' => esc_html__('Footer Info', 'bootscore'),
'id' => 'footer-info',
'description' => esc_html__('Add widgets here.', 'bootscore'),
'before_widget' => '<div class="footer_widget">',
'after_widget' => '</div>',
'before_title' => '<div class="widget-title d-none">',
'after_title' => '</div>'
));
// 404 Page
register_sidebar(array(
'name' => esc_html__('404 Page', 'bootscore'),
'id' => '404-page',
'description' => esc_html__('Add widgets here.', 'bootscore'),
'before_widget' => '<div class="mb-4">',
'after_widget' => '</div>',
'before_title' => '<h1 class="widget-title">',
'after_title' => '</h1>'
));
}
add_action('widgets_init', 'bootscore_widgets_init');
This makes sense if you want to edit entire widgets in more detail. But if you want to add only the mt-2 class, why not use SCSS?
.top-nav-search {
margin-top: $spacer * .5;
}
or use @extend
.top-nav-search {
@extend .mt-2;
}
The result is the same ;-)
Cheers @crftwrk, I thought there might be a way to just override a single widget. But if it's not obvious to you, then I'll definitely pass and use your suggestion of SCSS instead:
.top-nav-search {
@extend .mt-2;
}
Many thanks for the assistance!
You're welcome
| gharchive/issue | 2023-05-13T17:09:16 | 2025-04-01T06:38:05.016127 | {
"authors": [
"crftwrk",
"hsankala"
],
"repo": "bootscore/bootscore",
"url": "https://github.com/bootscore/bootscore/issues/474",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1608394308 | Switch from medium featured images to an own bootScore size
There seems to be a weakness in how Featured Images are handled. I've described the effect in two posts: https://fodina.de/image-d/ and https://fodina.de/image-e/. If you want to see the difference, use your desktop machine and try this:
Go to https://fodina.de/ and look at the post list. The square featured images are sharp.
Insert the word 'Picture' into the search box and look at that list (delivered by the unmodified archive.php). On large Screens, everything blurs.
I am going to describe the technical solution in a longer statement, posted in discussions.
Personally I feel like this change is more up to the developer, instead of in the theme by default.
Also what I usually do is call a bigger size, like large, and make it square using the Bootstrap ratios. That way I have more control over the size if it needs to change depending on screen size.
Possible. But the more large pictures you have to distribute over the network, the slower the site.
However, I think that such valuable work as bootScore should not show blurred and distorted images by default. There's never a second chance to make a first impression. Yes, the user could fix it himself, but why should we burden him with something that takes very little effort on our side: it costs only 1 extra line and 4 changed values to improve it.
As @justinkruit said, image sizes should be the developers job. Because we follow straight the WordPress standard and changing default WP image sizes will produce more questions than answers like here https://github.com/bootscore/bootscore/discussions/399.
However, I understand the thing with the super large screens. But:
Bootstrap is mobile first, means designed for smaller screens.
thumbnail or medium are the default WP preview image sizes.
Changing to larger sizes will load larger images on mobile as well, bad for SEO.
XXL screens have often a larger screen resolution. Means the image is larger than the original size is. Of course, it's not sharp anymore.
What you can do:
Use get_the_post_thumbnail(null, 'large') instead of medium.
Shorten the excerpt text or write a custom excerpt to lower card (and image) height.
Use a different loop template https://github.com/bootscore/bs-loop-templates which does not crop the featured preview image. For example this one https://bootscore.me/archives/equal-height-sidebar-right/.
Many thanks for your answers. Unfortunately, I cannot agree with you in this case. A tool creating bad optical effects by default will perhaps lose its advantage: again, you never get a second chance to make a first impression. If you have templates which can better deal with the default image sizes of WordPress, then bootScore should perhaps take one of them as 'index.php'.
However, my opinion is nothing more than my opinion. Hence, if you don't like my solution, there's no need to integrate this PR. And yes, I will test the other bs-loop-templates later on.
with best regards KR
We can generally rethink and improve list view in loop. But let's do this carefully and later.
I've just summarized our discussion: https://fodina.de/image-i/ Feel free to contact me if you think the post contains unfair remarks. I wanted to describe the case, not to blame anyone. Again and again and again: I appreciate your work very much! KR
| gharchive/pull-request | 2023-03-03T10:40:41 | 2025-04-01T06:38:05.025288 | {
"authors": [
"crftwrk",
"justinkruit",
"kreincke"
],
"repo": "bootscore/bootscore",
"url": "https://github.com/bootscore/bootscore/pull/414",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
215338496 | using helper inside the form-builder
undefined method `team?' for #BootstrapForm::FormBuilder:0x007fe9c2c91590
BootstrapForm::FormBuilder.class_eval do
if manager_namespace?
end
if team_namespace?
end
end
apparently those namespace_methods aren't known.
they actually are defined in the BaseController for that namespace which is pretty much the ApplicationController for our Backoffice
class Manager::BaseController < ActionController::Base
  def team_namespace?
    # ...
  end
  helper_method :team_namespace?
end
anybody know how to get our own helpers included?
OK, that was an easy fix: include NamespaceHelper and then define the methods inside that module.
| gharchive/issue | 2017-03-20T06:26:15 | 2025-04-01T06:38:05.027859 | {
"authors": [
"krtschmr"
],
"repo": "bootstrap-ruby/rails-bootstrap-forms",
"url": "https://github.com/bootstrap-ruby/rails-bootstrap-forms/issues/319",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1571640240 | 🛑 BORINGPLACE.NET main DC kerberos is down
In be3aba6, BORINGPLACE.NET main DC kerberos (hp.boringplace.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: BORINGPLACE.NET main DC kerberos is back up in 13d368a.
| gharchive/issue | 2023-02-05T22:28:23 | 2025-04-01T06:38:05.053488 | {
"authors": [
"vanaf"
],
"repo": "boringplace/status",
"url": "https://github.com/boringplace/status/issues/86",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1289513311 | System.InvalidOperationException: 'When 'UseTempDB' is set then BulkOperation has to be inside Transaction. Otherwise destination table gets dropped too early because transaction ends before operation is finished.'
Steps to reproduce:
try
{
using (TransactionScope scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
var core = new ZZZContext();
...
await core.AuvyBulkUpdateAsync(objects);
scope.Complete();
}
}
catch (Exception)
{
throw;
}
Take a look at Test:
https://github.com/borisdj/EFCore.BulkExtensions/blob/4bb9fd3d4b28ad2c43009d2469b7758557692fd5/EFCore.BulkExtensions.Tests/EFCoreBulkTest.cs#L325-L334
In my opinion, this is a bug.
The use of a System.Transaction will allow the database provider to create a transaction automatically, so the check that's built into TableInfo is not correct.
I've solved this by opening the database connection explicitly before using BulkUpdateAsync.
@DaveVdE Thank you! Our code was also using an ambient transaction in combination with UseTempDb and SetOutputIdentity, which confusingly kept throwing the "When 'UseTempDB' is set then BulkOperation has to be inside Transaction" exception. Opening the db connection before calling the bulk operation solved my issue.
@borisdj Looking at the exception condition, would it make sense to add || Transaction.Current != null to this expression? https://github.com/borisdj/EFCore.BulkExtensions/blob/03934e648db21b790f903b008d0deafb0a71cd1e/EFCore.BulkExtensions/TableInfo.cs#L126C21-L126C21
| gharchive/issue | 2022-06-30T02:56:43 | 2025-04-01T06:38:05.058089 | {
"authors": [
"DaveVdE",
"borisdj",
"proc01",
"tincann"
],
"repo": "borisdj/EFCore.BulkExtensions",
"url": "https://github.com/borisdj/EFCore.BulkExtensions/issues/867",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2381702150 | 🛑 clans.wowsgame.cn is down
In ee1bf66, clans.wowsgame.cn (https://clans.wowsgame.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: clans.wowsgame.cn is back up in 40048f6 after 22 minutes.
| gharchive/issue | 2024-06-29T11:56:53 | 2025-04-01T06:38:05.061329 | {
"authors": [
"boriskhodok"
],
"repo": "boriskhodok/wowsuptime",
"url": "https://github.com/boriskhodok/wowsuptime/issues/792",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
72486057 | node.d.ts: interface Global / NodeJS.Global breaks current Node+TypeScript
TypeScript complains of missing names in the new interface Global type, from 5235188 (5 days ago).
tsc --module commonjs --target ES5 --declaration my_incredible_new_mvc_framework.ts
node/node.d.ts(198,26): error TS2304: Cannot find name 'DataView'.
node/node.d.ts(212,21): error TS2304: Cannot find name 'Map'.
node/node.d.ts(221,21): error TS2304: Cannot find name 'Set'.
node/node.d.ts(231,25): error TS2304: Cannot find name 'WeakMap'.
Those are all ES6 types. The current Node.js / v8 version implements them, but they aren't native types, as far as the TypeScript compiler is concerned. Perhaps we are getting ahead of ourselves here?
My system:
$ node --version
v0.12.2
$ tsc --version
message TS6029: Version 1.5.0-beta
@chbrown please watch this PR: https://github.com/borisyankov/DefinitelyTyped/pull/4101
Is this fixed, or is there a temporary fix to implement?
I'm still getting those same errors:
node.d.ts(198,26): error TS2304: Cannot find name 'DataView'.
node.d.ts(212,21): error TS2304: Cannot find name 'Map'.
node.d.ts(221,21): error TS2304: Cannot find name 'Set'.
node.d.ts(231,25): error TS2304: Cannot find name 'WeakMap'.
@lucasmciruzzi I simply reverted the breaking commit in my DefinitelyTyped fork and it's been working great: https://github.com/chbrown/DefinitelyTyped/commit/17f99f1
Thanks for the reply @chbrown! I'm using DTSM so I don't get updates from forks, I only get them from the borisyankov repo. Shouldn't this fix be on the main repo?
@vvakame thanks for the headsup on https://github.com/borisyankov/DefinitelyTyped/pull/4101 .
With TypeScript 1.5.2, ES5 target output resolves the DataView error. Map, Set, and WeakMap errors are still present, but compilation appears to work fine.
+1 to merge changes from chbrown into the borisyankov repo.
@chbrown Thank you for the quick fix, this solved my issue and works for now!
I updated the SHA in my tsd.json file to f0aa5507070dc74859b636bd2dac37f3e8cab8d1 and ran tsd reinstall -o to revert back to the prior version.
Thanks @chbrown - I switched to your repo until this gets fixed.
Why is this fix not merged with the repo yet? @chbrown @vvakame
master/HEAD has solved this issue.
I'm still using the chbrown@17f99f1 version because the borisyankov version still throws errors with "Map", "Set" and "WeakMap" when the target is ES5...
@lucasmciruzzi no harm in that, but after @vvakame's comment, I tried out node.d.ts from DefinitelyTyped/master, which, combined with the typescript@1.5.3 that was released ~5 days ago, compiles just fine.
Are you getting those errors even with the latest tsc?
Yep, I'm running TypeScript v1.5.3
Fixed! ... it seems like it was a problem with the packages cache on my PC. Thanks a lot :)
I don't get any error when I run tsc in command line directly. However, I do get error TS2304: Cannot find name 'DataView'. when I use grunt. GruntFile.js:
...
typescript: {
base: {
src: ['src/**/*.ts'],
dest: 'build',
options: {
module: 'commonjs',
target: 'es5'
}
}
}
...
Using https://github.com/borisyankov/DefinitelyTyped/commit/f0aa5507070dc74859b636bd2dac37f3e8cab8d1 as stated by @aciccarello works.
TypeScript version is 1.5.3. Any ideas?
@Kelvin-Ng it should be an issue with the grunt plugin. It surely uses its own TypeScript instead of the globally installed one. Check the settings of the plugin you are using for an option to set the TypeScript instance to use.
PS: If you can, move away from grunt to gulp, the plugin is far better: https://www.npmjs.com/package/gulp-typescript and has a "typescript" option to set a custom version of it.
@lucasmciruzzi
Thank you! I have moved to gulp and gulp-typescript works great. Thanks for your recommendation and I like gulp more than grunt now.
As of yesterday this is now fixed in grunt-typescript.
https://github.com/k-maru/grunt-typescript/issues/105
| gharchive/issue | 2015-05-01T18:06:37 | 2025-04-01T06:38:05.074546 | {
"authors": [
"Kelvin-Ng",
"aciccarello",
"chbrown",
"drudru",
"hsrobmln",
"lucasmciruzzi",
"omencat",
"sethx",
"vote539",
"vvakame"
],
"repo": "borisyankov/DefinitelyTyped",
"url": "https://github.com/borisyankov/DefinitelyTyped/issues/4249",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
97176368 | lodash: added _.toPlainObject() method
https://lodash.com/docs#toPlainObject
@chrootsu thanks mate!
| gharchive/pull-request | 2015-07-25T00:51:56 | 2025-04-01T06:38:05.076012 | {
"authors": [
"chrootsu",
"vvakame"
],
"repo": "borisyankov/DefinitelyTyped",
"url": "https://github.com/borisyankov/DefinitelyTyped/pull/5071",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
108032268 | ng-dialog fix IDialogOpenResult.closePromise
Create an IDialogClosePromise type to be returned by IDialogOpenResult.closePromise.
ng-dialog/ng-dialog.d.ts
To the author (@stephenlautier): could you review this PR?
:+1: or :-1:?
Checklist
[ ] pass the Travis-CI test?
Looks good :+1:. Thanks @marknadig
@vvakame looks good.
thanks mate!
| gharchive/pull-request | 2015-09-24T00:17:04 | 2025-04-01T06:38:05.078328 | {
"authors": [
"marknadig",
"stephenlautier",
"vvakame"
],
"repo": "borisyankov/DefinitelyTyped",
"url": "https://github.com/borisyankov/DefinitelyTyped/pull/5983",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
395966645 | Stable Release Date
Not really an issue, more of a question, but any ideas of when a stable release might happen for the Craft 3 version?
We want to use this for a client of ours but aren't too keen on using software which is still in beta as we'd like to avoid as many bugs as possible. Looks like a great plugin though, good work!
@matt-adigital Thanks! We're doing some in-depth security tests in the coming week. Once those clear, I'll release it as stable. Shouldn't take more than two weeks.
| gharchive/issue | 2019-01-04T15:33:21 | 2025-04-01T06:38:05.082727 | {
"authors": [
"matt-adigital",
"roelvanhintum"
],
"repo": "born05/craft-twofactorauthentication",
"url": "https://github.com/born05/craft-twofactorauthentication/issues/22",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1881916967 | can't install in windows
On Windows it tries, for some reason, to locate a Python installation and then fails.
full error
yarn install cobertura-merge
yarn install v1.22.19
error `install` has been replaced with `add` to add new dependencies. Run "yarn add cobertura-merge" instead.
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
PS C:\Users\aa.risaac\repos\Dashboard> yarn add cobertura-merge -D
yarn add v1.22.19
[1/4] Resolving packages...
warning cobertura-merge > xml2json > joi@13.7.0: This version has been deprecated in accordance with the hapi support policy (hapi.im/support). Please upgrade to the latest version to get the best features, bug fixes, and securi
ty patches. If you are unable to upgrade at this time, paid support is available for older versions (hapi.im/commercial).
warning cobertura-merge > xml2json > hoek@4.2.1: This version has been deprecated in accordance with the hapi support policy (hapi.im/support). Please upgrade to the latest version to get the best features, bug fixes, and securi
ty patches. If you are unable to upgrade at this time, paid support is available for older versions (hapi.im/commercial).
warning cobertura-merge > xml2json > joi > hoek@5.0.4: This version has been deprecated in accordance with the hapi support policy (hapi.im/support). Please upgrade to the latest version to get the best features, bug fixes, and
security patches. If you are unable to upgrade at this time, paid support is available for older versions (hapi.im/commercial).
warning cobertura-merge > xml2json > joi > topo@3.0.3: This module has moved and is now available at @hapi/topo. Please update your dependencies as this version is no longer maintained an may contain bugs and security issues.
warning cobertura-merge > xml2json > joi > topo > hoek@6.1.3: This module has moved and is now available at @hapi/hoek. Please update your dependencies as this version is no longer maintained an may contain bugs and security iss
ues.
[2/4] Fetching packages...
warning Pattern ["object-assign@latest"] is trying to unpack in the same destination "C:\\Users\\aa.risaac\\AppData\\Local\\Yarn\\Cache\\v6\\npm-object-assign-4.1.1-2109adc7965887cfc05cbbd442cac8bfbb360863-integrity\\node_module
s\\object-assign" as pattern ["object-assign@^4","object-assign@^4.0.1","object-assign@^4.1.1","object-assign@^4.1.1"]. This could result in non-deterministic behavior, skipping.
[3/4] Linking dependencies...
warning " > ngx-device-detector@3.0.0" has incorrect peer dependency "@angular/common@>=7.0.0 <=13.0.0 || ^13.0.0".
warning " > ngx-device-detector@3.0.0" has incorrect peer dependency "@angular/core@>=7.0.0 <=13.0.0 || ^13.0.0".
warning "@angular-devkit/build-angular > source-map-loader@4.0.1" has incorrect peer dependency "webpack@^5.72.1".
warning " > @angular-eslint/schematics@13.1.0" has incorrect peer dependency "@angular/cli@>= 13.0.0 < 14.0.0".
warning " > @cypress/webpack-preprocessor@5.11.1" has unmet peer dependency "@babel/core@^7.0.1".
warning " > @cypress/webpack-preprocessor@5.11.1" has unmet peer dependency "@babel/preset-env@^7.0.0".
warning " > @cypress/webpack-preprocessor@5.11.1" has unmet peer dependency "babel-loader@^8.0.2".
warning " > cypress-multi-reporters@1.5.0" has unmet peer dependency "mocha@>=3.1.2".
warning " > mochawesome@7.0.1" has unmet peer dependency "mocha@>=7".
[4/4] Building fresh packages...
error C:\Users\aa.risaac\repos\Dashboard\node_modules\node-expat: Command failed.
Exit code: 1
Command: node-gyp rebuild
Arguments:
Directory: C:\Users\aa.risaac\repos\Dashboard\node_modules\node-expat
Output:
C:\Users\aa.risaac\repos\Dashboard\node_modules\node-expat>if not defined npm_config_node_gyp (node "C:\Program Files\nodejs\node_modules\npm\bin\node-gyp-bin\\..\..\node_modules\node-gyp\bin\node-gyp.js" rebuild ) else (node "
" rebuild )
gyp info it worked if it ends with ok
gyp info using node-gyp@9.3.1
gyp info using node@18.16.0 | win32 | x64
gyp ERR! find Python
gyp ERR! find Python Python is not set from command line or npm configuration
gyp ERR! find Python Python is not set from environment variable PYTHON
gyp ERR! find Python checking if "python3" can be used
gyp ERR! find Python - "python3" is not in PATH or produced an error
gyp ERR! find Python checking if "python" can be used
gyp ERR! find Python - "python" is not in PATH or produced an error
gyp ERR! find Python checking if Python is C:\Users\aa.risaac\AppData\Local\Programs\Python\Python39\python.exe
gyp ERR! find Python - "C:\Users\aa.risaac\AppData\Local\Programs\Python\Python39\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Program Files\Python39\python.exe
gyp ERR! find Python - "C:\Program Files\Python39\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Users\aa.risaac\AppData\Local\Programs\Python\Python39-32\python.exe
gyp ERR! find Python - "C:\Users\aa.risaac\AppData\Local\Programs\Python\Python39-32\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Program Files\Python39-32\python.exe
gyp ERR! find Python - "C:\Program Files\Python39-32\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Program Files (x86)\Python39-32\python.exe
gyp ERR! find Python - "C:\Program Files (x86)\Python39-32\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Users\aa.risaac\AppData\Local\Programs\Python\Python38\python.exe
gyp ERR! find Python - "C:\Users\aa.risaac\AppData\Local\Programs\Python\Python38\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Program Files\Python38\python.exe
gyp ERR! find Python - "C:\Program Files\Python38\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Users\aa.risaac\AppData\Local\Programs\Python\Python38-32\python.exe
gyp ERR! find Python - "C:\Users\aa.risaac\AppData\Local\Programs\Python\Python38-32\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Program Files\Python38-32\python.exe
gyp ERR! find Python - "C:\Program Files\Python38-32\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Program Files (x86)\Python38-32\python.exe
gyp ERR! find Python - "C:\Program Files (x86)\Python38-32\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Users\aa.risaac\AppData\Local\Programs\Python\Python37\python.exe
gyp ERR! find Python - "C:\Users\aa.risaac\AppData\Local\Programs\Python\Python37\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Program Files\Python37\python.exe
gyp ERR! find Python - "C:\Program Files\Python37\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Users\aa.risaac\AppData\Local\Programs\Python\Python37-32\python.exe
gyp ERR! find Python - "C:\Users\aa.risaac\AppData\Local\Programs\Python\Python37-32\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Program Files\Python37-32\python.exe
gyp ERR! find Python - "C:\Program Files\Python37-32\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Program Files (x86)\Python37-32\python.exe
gyp ERR! find Python - "C:\Program Files (x86)\Python37-32\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Users\aa.risaac\AppData\Local\Programs\Python\Python36\python.exe
gyp ERR! find Python - "C:\Users\aa.risaac\AppData\Local\Programs\Python\Python36\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Program Files\Python36\python.exe
gyp ERR! find Python - "C:\Program Files\Python36\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Users\aa.risaac\AppData\Local\Programs\Python\Python36-32\python.exe
gyp ERR! find Python - "C:\Users\aa.risaac\AppData\Local\Programs\Python\Python36-32\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Program Files\Python36-32\python.exe
gyp ERR! find Python - "C:\Program Files\Python36-32\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Program Files (x86)\Python36-32\python.exe
gyp ERR! find Python - "C:\Program Files (x86)\Python36-32\python.exe" could not be run
gyp ERR! find Python checking if the py launcher can be used to find Python 3
gyp ERR! find Python - "py.exe" is not in PATH or produced an error
gyp ERR! find Python
gyp ERR! find Python **********************************************************
gyp ERR! find Python You need to install the latest version of Python.
gyp ERR! find Python Node-gyp should be able to find and use Python. If not,
gyp ERR! find Python you can try one of the following options:
gyp ERR! find Python - Use the switch --python="C:\Path\To\python.exe"
gyp ERR! find Python (accepted by both node-gyp and npm)
gyp ERR! find Python - Set the environment variable PYTHON
gyp ERR! find Python - Set the npm configuration variable python:
gyp ERR! find Python npm config set python "C:\Path\To\python.exe"
gyp ERR! find Python For more information consult the documentation at:
gyp ERR! find Python https://github.com/nodejs/node-gyp#installation
gyp ERR! find Python **********************************************************
gyp ERR! find Python
gyp ERR! configure error
gyp ERR! stack Error: Could not find any Python installation to use
gyp ERR! stack at PythonFinder.fail (C:\Users\aa.risaac\AppData\Roaming\nvm\v18.16.0\node_modules\npm\node_modules\node-gyp\lib\find-python.js:330:47)
gyp ERR! stack at PythonFinder.runChecks (C:\Users\aa.risaac\AppData\Roaming\nvm\v18.16.0\node_modules\npm\node_modules\node-gyp\lib\find-python.js:159:21)
gyp ERR! stack at PythonFinder.<anonymous> (C:\Users\aa.risaac\AppData\Roaming\nvm\v18.16.0\node_modules\npm\node_modules\node-gyp\lib\find-python.js:228:18)
gyp ERR! stack at PythonFinder.execFileCallback (C:\Users\aa.risaac\AppData\Roaming\nvm\v18.16.0\node_modules\npm\node_modules\node-gyp\lib\find-python.js:294:16)
gyp ERR! stack at exithandler (node:child_process:427:5)
gyp ERR! stack at ChildProcess.errorhandler (node:child_process:439:5)
gyp ERR! stack at ChildProcess.emit (node:events:513:28)
gyp ERR! stack at ChildProcess._handle.onexit (node:internal/child_process:289:12)
gyp ERR! stack at onErrorNT (node:internal/child_process:476:16)
gyp ERR! stack at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
gyp ERR! System Windows_NT 10.0.19044
gyp ERR! command "C:\\Program Files\\nodejs\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild"
gyp ERR! cwd C:\Users\aa.risaac\repos\Dashboard\node_modules\node-expat
gyp ERR! node -v v18.16.0
gyp ERR! node-gyp -v v9.3.1
gyp ERR! not ok
info Visit https://yarnpkg.com/en/docs/cli/add for documentation about this command.
You need to read https://github.com/nodejs/node-gyp#on-windows
This is how nodejs handles building native binaries.
Can you publish binaries like other libraries usually do, so we don't have to?
Hi @robertIsaac, you need to install python and have it on your path to fix it :)
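For anyone hitting this later, the fix is one of the options node-gyp itself prints in the log above, e.g. (the path is a placeholder for wherever Python actually lives):
npm config set python "C:\Path\To\python.exe"
:: or, for the current shell only:
set PYTHON=C:\Path\To\python.exe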
Works after installing python
| gharchive/issue | 2023-09-05T12:50:26 | 2025-04-01T06:38:05.090731 | {
"authors": [
"MattiJarvinen",
"jorgealgaba",
"robertIsaac"
],
"repo": "borremosch/cobertura-merge",
"url": "https://github.com/borremosch/cobertura-merge/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
489501864 | Make the UTXO set re-entrant
This also needs to make sure it's memory-efficient. A simple (but naive) approach is to use a new Set for each time we return a utxoFinder delegate. But this would cause a lot of allocations.
Related to #194
Moving this out of Full Node and into Validator and removing it from the current kanban, because it's something we can think about doing at a later point. The re-entrancy might not be an issue right now since we use fibers, and we don't yield anywhere in the utxoFinder delegate. In theory it should just work.
This hasn't been needed for CoinNet as well, so moving out of the milestone.
| gharchive/issue | 2019-09-05T03:01:07 | 2025-04-01T06:38:05.101650 | {
"authors": [
"AndrejMitrovic",
"Geod24"
],
"repo": "bosagora/agora",
"url": "https://github.com/bosagora/agora/issues/300",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1552379118 | import boxing
Right now imports dump everything into the global space. Perhaps we should change this to create a box and then stuff everything in that box kind of like "use" does.
This would give the advantage of sectioning off everything imported.
Need to keep in mind that we don't import things multiple times right now. Need to determine a way to cache imported boxes so upon a second import we can just give a shared pointer to the importer with access to the box in their own env
We already have a caching mechanism ... it IS env.... just do this. it will make sense
Realistically we should merge use and import into one command. Using some FS stuff we can determine if we will import a file or a directory. Then we can ensure that the directory is a pkg or not. Single files would be loaded just like the source_files in pkgs.
This is a great idea.
After deep investigations I've decided that this is not behavior that I want to create at this point in time
| gharchive/issue | 2023-01-23T01:30:00 | 2025-04-01T06:38:05.108372 | {
"authors": [
"bosley"
],
"repo": "bosley/sauros",
"url": "https://github.com/bosley/sauros/issues/98",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
859549296 | Assignment 5
Forgot to upload to github and pull request it.
I did not have enough time to do the bonus.
100!
| gharchive/pull-request | 2021-04-16T07:32:37 | 2025-04-01T06:38:05.109312 | {
"authors": [
"TonyRahme",
"haehn"
],
"repo": "bostongfx/cs480student",
"url": "https://github.com/bostongfx/cs480student/pull/142",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
96017530 | "Shouldn't be here" error upon query
My metric linux.net.sockets.used for one of my hosts is shown fine for 3h-long data, but when I choose a 4h interval I get an error:
opentsdb: {"start":"4h-ago","queries":[{"aggregator":"sum","metric":"linux.net.sockets.used","rateOptions":{},"tags":{"host":"app3"}}]}: Shouldn't be here
What is the meaning of this?
Thank you.
Haven't seen this one in a long time. IIRC it is what OpenTSDB returns when you query something that has duplicated datapoints (same tagset, same timestamp, different values).
If this is the case, you have two options: Ensure there are no duplicates (you will only find out at query time, not write time) or enable tsd.storage.fix_duplicates in the OpenTSDB config.
If it is something else it is an error returned by OpenTSDB, so that would be the place for further troubleshooting / bug filing.
Thanks, closing for now then; will reopen if I see the error again and these don't help.
@kylebrandt I think 2 instances of scollector were running on a problematic host
| gharchive/issue | 2015-07-20T08:51:00 | 2025-04-01T06:38:05.112094 | {
"authors": [
"k-bx",
"kylebrandt"
],
"repo": "bosun-monitor/bosun",
"url": "https://github.com/bosun-monitor/bosun/issues/1180",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
70786909 | Update and simplify botocore loaders to new dir layout
This reworks and greatly simplifies the loaders used in botocore.
This is partly paying off technical debt as well as standardizing on a shared common format.
In terms of the new layout, the loaders docstring discusses what it looks like, but we now support the directory layout used by the other AWS SDKs.
In terms of the loaders themselves:
Remove any fuzzy API version matching: you can either specify an exact API version or get the latest one.
Move up the service type names ('service-2', 'paginators-1', etc)
out of the loader. The session/client objects should know about
those details, but not the loader.
Remove all file extensions from the loader class. This actually
allows pluggable loader types besides JSON. This wasn't
actually possible previously.
cc @kyleknap
Coverage decreased (-0.62%) to 93.84% when pulling 68be91d8480a1d83fe0a92db7692a86ff6ecd951 on jamesls:simple-loaders into 60e72b22bdebfa83e9b964aca232c7cad572212e on boto:develop.
There are actually a few changes I'm going to need to make based on updating boto3, specifically with list_available_services and determine_latest_version.
The problem is that both of these assume that the directory structure is sufficient to extract meaningful data about what is or isn't available (in fact, I even comment on this in this PR: https://github.com/boto/botocore/pull/527/files#diff-f8c1b99f0b4538cf77c2052b498c9630R219).
We actually need to be able to specify the type name (service-2, paginators-1, resources-1) when asking about the latest available version and the list of all services. For example, the latest available API version for service-2 may not necessarily mean that there's a resources-1 model for that same API version. Similarly, listing all the available services may differ if we only care about resources (i.e. boto3).
A concrete example of breakage is doing loader.load_service_model('ec2', 'resources-1') in boto3. It will try to find the latest API version for EC2 and assume it can load a resource for it, which it currently cannot, so it will raise an exception.
To fix this I'll be adding these changes (sketched below):
Modify determine_latest_version and list_available_services to require a type_name param, just like load_service_model.
Update the file loader interface (currently only JSONFileLoader is implemented) to add an exists() method that will return True/False if the resource exists.
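A rough sketch of how the resulting loader API gets called (method names follow this PR; the extra search path and exact keyword arguments are illustrative assumptions):
from botocore.loaders import Loader

loader = Loader()
loader.search_paths.append('/path/to/custom/models')  # e.g. where boto3 keeps its resources-1 files

# Listing and version resolution are now scoped to a model type:
resource_services = loader.list_available_services(type_name='resources-1')
latest = loader.determine_latest_version('ec2', 'service-2')

# Loading requires an explicit type name; no fuzzy API version matching:
model = loader.load_service_model('ec2', 'service-2', api_version=latest)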
@kyleknap I've updated with the necessary changes for boto3 as well as the latest model updates from develop. Should be ready now for another look. Thanks.
Coverage decreased (-0.06%) to 94.39% when pulling 12cc84bcc717b0d4cfd1302b99d8975526e30ccb on jamesls:simple-loaders into ad1d6e298fd46c9b5994d1a073d40bb1ca838726 on boto:develop.
https://github.com/boto/boto3/pull/104 is an example of how this new loader API can be used. Now boto3 simply needs to add its internal directory to the search path and then specify the type name when dealing with services that must contain a resource definition.
Coverage decreased (-1.06%) to 93.4% when pulling 56d4f48b718489cbea5ba89abd680067770bebe0 on jamesls:simple-loaders into ad1d6e298fd46c9b5994d1a073d40bb1ca838726 on boto:develop.
It looks good. Looks like you updated the code after my pep8 comment through the pr-check script. Had a couple of comments about tests and docstrings, but nothing too significant :ship:
Looks like you also need to rebase on develop and pull in the latest model changes, which should only be dynamodb (not sure if you rebased after my ec2 pr).
The PR should have been rebased on develop as of the latest release a few hours ago. I'll double check this is the case.
Hmm the merge button is grey right now. Usually means that I need to rebase.
Probably because I merged in the _retry bugfix PR.
@kyleknap pushed a commit to extract out all the test boilerplate. I think the tests are much easier to read now.
Also rebased against develop. Looks like it's because of the _retry PR I merged, which makes sense. Any changes to anything in botocore/data/ in develop will require a rebase to merge cleanly:
$ git rebase develop
First, rewinding head to replay your work on top of it...
Applying: Update and simplify botocore loaders to new dir layout
Applying: Move services to standard directory structure
Using index info to reconstruct a base tree...
M botocore/data/aws/_retry.json
Falling back to patching base and 3-way merge...
Applying: Fix pr-check issues
Applying: Fix docstring
Applying: Move out boiler plate for dir patching to helper method
Recent commits look good. Those tests look much cleaner. :ship: again.
Coverage increased (+0.06%) to 94.52% when pulling f5d2c33311fd99c613d2ee30c001501000c88ae1 on jamesls:simple-loaders into 5eb99b9e09702057ba5772d86af2d7751d75a459 on boto:develop.
| gharchive/pull-request | 2015-04-24T20:31:36 | 2025-04-01T06:38:05.129676 | {
"authors": [
"coveralls",
"jamesls",
"kyleknap"
],
"repo": "boto/botocore",
"url": "https://github.com/boto/botocore/pull/527",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
273190463 | LayerList layers prop behavior
Say I have two layers: an OSM basemap, and a geojson point layer. I've given each of these layers a metadata property called 'isBasemap', and set it to true for the OSM layer and false for the geojson.
I'm trying to exclude the basemap from the layer list component by passing an array of objects with the following shape:
[{id: 'layer_id'}]
and only populating that array with layers where isBasemap is false.
However, passing that array to the LayerList component as the layers prop doesn't seem to work, as I'm still seeing the OSM layer in the layer list.
I believe the issue is because the layers prop in the LayerList component is being mapped to the Redux store: state.map.layers. Unless I'm missing something, this prevents using a custom list of layers to show in the LayerList component.
Good point, we recently introduced a metadata prop for hiding layers from the layerlist, bnd:hide-layerlist, so in your case you could set it to true for the layers you do not want in the layerlist.
{
  id: 'my_layer_id',
  metadata: {
    'bnd:hide-layerlist': true
  }
}
would that work for your use case?
I don't think it is already in a release though.
thanks for the feedback, that is good to hear
@brambow we just released 2.2.0 (https://github.com/boundlessgeo/sdk/releases/tag/v2.2.0). Please check it out at your convenience and let us know if the new metadata works for you.
It works. Thanks!
| gharchive/issue | 2017-11-12T01:22:11 | 2025-04-01T06:38:05.239303 | {
"authors": [
"bartvde",
"brambow"
],
"repo": "boundlessgeo/sdk",
"url": "https://github.com/boundlessgeo/sdk/issues/734",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2108951694 | Callback for change
I read through the documentation and found there is no (simple) way to let the editor save user input automatically. Is it possible to have a callback function such as on_change which exists now in all streamlit official widgets?
I agree.
BTW is there a simple way to submit content by using an external button?
import streamlit as st
import code_editor
output = code_editor.code_editor("print('Hello world!')")
if st.button("Show output"):
st.write(output)
I have heard a few similar requests.
To be honest, I have been avoiding this issue simply because it seems like it all leads to the same solution which comes with an issue that I think will make the behavior of the editor feel more janky. However, Streamlit now supports partial reruns so maybe now is the time to add the debounced update feature.
Let me give some background that might illuminate why this feature is missing. One of the things I didn't like about the other ace editor component when I tried using it (after learning about it) is that it would rerun the app/script after almost every keypress. And I think you had to set a prop to avoid this. The frequent refreshes made the whole experience so bad that I came to see it as promoting bad UI experiences.
In order for the streamlit script to get data from the code component, the script had to be rerun with this new data. The same would be true with a callback function. But with fragments, the code editor and dependent elements can be the only thing that is rerun and that can perhaps minimize some of the jerkiness.
I think I will add a prop that will allow for a debounced response which will enable auto-updating the dictionary with current contents. I think I will also try and provide an option for updating when the editor loses focus which might prove to be a great compromise.
Thanks for the feedback!
I have some good news to share. I got this feature up and running in version 0.1.4
See here for more: https://discuss.streamlit.io/t/new-component-streamlit-code-editor-a-react-ace-code-editor-customized-to-fit-with-streamlit-with-some-extra-goodies-added-on-top/42868/16
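A minimal sketch of the new behavior (the response_mode values are taken from the linked docs for 0.1.4; treat the exact names as assumptions if you're on another version):
import streamlit as st
from code_editor import code_editor

response = code_editor(
    "print('Hello world!')",
    response_mode=["blur", "debounce"],  # update on focus loss and after typing pauses
)
st.write(response["text"])  # the returned dict now tracks the current contents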
Works great! Kudos for the great job and amazing response time.
You caught me in a moment where I had some time and I already knew how to implement the feature because I did it before, a while ago. So the stars aligned lol. Unfortunately, it's been a month-long wait for @calvinchai
Thank you for the great work and timely response!
| gharchive/issue | 2024-01-30T23:00:44 | 2025-04-01T06:38:05.271207 | {
"authors": [
"bouzidanas",
"calvinchai",
"marcinsoftem"
],
"repo": "bouzidanas/streamlit-code-editor",
"url": "https://github.com/bouzidanas/streamlit-code-editor/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1954810873 | use truth in hash function of Sentence
@bowen-xu I looked at the code and it seems the core of the issue is the hash of Task not taking truth value into account. This simple change achieves what you described in the issue but I wonder if it could have some unintended consequences? Can you think of the situation when elsewhere in the code we would want two sentences to compare same regardless of their truth values? Then we could try a different fix.
@maxeeem I found there's something "magic" about adding self.truth to the tuple.
If two input tasks have different truth-values then, for sure, the hash-values are different. E.g.
A. %1.0;0.9%
A. %1.0;0.8%
However, I thought that if the two truth-values are the same, for example when inputting the two sentences
A. %1.0;0.9%
A. %1.0;0.9%
the hash-values of the tasks should have been the same, and as a result the beliefs table would still have contained only one item. But in fact both are kept: the beliefs table contains the two statements with the same truth-value.
I didn't think of that before. I guess this is because the python object of the first truth-value %1.0; 0.9% and that of the second have different ids. When computing the hash-value, it takes the id of an object into account.
Can we record the premises of a task and use them when computing the hash value?
For example, we can store its (direct) premises into a list if it is derived via inference, so the length of the list is usually 2. If it is an input task, then the premises list is empty. The hash-value is influenced by the input_id of the task, as well as the hash-values of the premises.
Another solution might be adding all elements of the evidential base into that tuple when computing hash-value, though this approach might be more costly.
Huh, interesting. I'm so used to value types that I forget sometimes that everything is an object in python. What do you think of just implementing a hash function in Truth that will only take the values into account and disregard the id?
I pushed up a commit, see if this works. I tested on your examples and when truth values are the same it doesn't store a duplicate belief.
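For reference, the core of the idea is just a value-based hash (a minimal sketch, assuming a Truth type with frequency/confidence fields named f and c; the real class carries more than this):
class Truth:
    def __init__(self, f: float, c: float):
        self.f = f
        self.c = c

    def __eq__(self, other):
        return isinstance(other, Truth) and (self.f, self.c) == (other.f, other.c)

    def __hash__(self):
        # hash the values, not the object identity, so two instances
        # with equal truth-values always hash the same
        return hash((self.f, self.c))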
That looks better. Then there would be another issue: when inputting two tasks with the same truth-value, we actually want them both to be stored in the table, right?
We do? Wouldn't we only store one since they are both input tasks?
According to the technical report 3.1.0
Beliefs/desires in a concept-level table: multiple versions of a belief with the same content but different truth-values compete each time a belief is requested. Bag is not used here because each request is one-time. Multiple versions of beliefs are kept because overlapping evidence may prevent the version with the highest confidence from being used.
Well, it looks like statements in the beliefs table have different truth-values.
But I have doubts about this. If two input tasks have the same truth-value but different evidential bases, should they both be kept in the table?
What is the evidential base of an input task? Just a serial number? For derived tasks it is formed from the evidential bases of the parents, but for input... If we treat each input task with identical truth values as distinct then wouldn't the belief table quickly fill up with identical beliefs?
The evidential base of each input task is a set containing only a single element.
You may be right. The table may be filled up quickly.
The same problem exists in OpenNARS 3.0.4. So we can tentatively adopt this design, though it needs further discussion.
| gharchive/pull-request | 2023-10-20T18:14:22 | 2025-04-01T06:38:05.308752 | {
"authors": [
"bowen-xu",
"maxeeem"
],
"repo": "bowen-xu/PyNARS",
"url": "https://github.com/bowen-xu/PyNARS/pull/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
171698287 | 7-zip is not uninstalled correctly.
Currently, 7-Zip is installed as part of the Windows box provisioning logic, by the script named vmtool.bat.
It is supposed to be completely and correctly removed at a later stage by the dedicated script uninstall-7zip.bat; however, it does not correctly uninstall 7-Zip from the registry (and it is still present in "Programs and Features" in Control Panel).
This results in some unpredictable and hard-to-track behavior when 7-Zip is installed on the created box as part of integration tests, for example for Chef cookbooks.
Here is the corresponding message from the packer\boxcutter log:
virtualbox-iso: ==> Uninstalling 7zip
virtualbox-iso: ==> WARNING: Directory not found: "C:\Users\vagrant\AppData\Local\Temp\sevenzip"
This happens because the following piece of code does not get executed:
msiexec /qb /x "%SEVENZIP_PATH%"
because the MSI file and its parent folder have already been cleaned up by the previous script, clean.bat, which removes all folders from %TEMP%, including the 7-Zip download folder.
A possible solution would be to exclude the sevenzip folder from the cleanup, so that 7-Zip can be correctly uninstalled later.
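For example, clean.bat could skip that one directory instead of wiping everything under %TEMP% (a sketch; the folder name sevenzip is taken from the warning in the log above):
for /d %%D in ("%TEMP%\*") do (
    if /i not "%%~nxD"=="sevenzip" rd /s /q "%%D"
)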
I'd like to move away from using 7zip altogether. We can unzip the files directly via PowerShell, which would avoid the whole mess of installing / uninstalling 7zip multiple times. Look for that fix in the future once I get a nice PowerShell-based pattern.
I've built several hosts today and all of them correctly removed 7zip, so I'm going to close this issue for now. If you're still having issues with the latest code in master feel free to open it back up.
| gharchive/issue | 2016-08-17T16:11:32 | 2025-04-01T06:38:05.332128 | {
"authors": [
"GolubevV",
"tas50"
],
"repo": "boxcutter/windows",
"url": "https://github.com/boxcutter/windows/issues/80",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2080745838 | Update sbt-scalajs, scalajs-compiler, ... to 1.15.0
About this PR
📦 Updates
org.scala-js:sbt-scalajs
org.scala-js:scalajs-compiler
org.scala-js:scalajs-library
org.scala-js:scalajs-library_2.13
org.scala-js:scalajs-test-bridge
org.scala-js:scalajs-test-bridge_2.13
from 1.13.2 to 1.15.0
📜 GitHub Release Notes - Version Diff
Usage
✅ Please merge!
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
⚙ Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scala-js" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "org.scala-js" }
}]
labels: library-update, early-semver-minor, semver-spec-minor, commit-count:1
Superseded by #128.
| gharchive/pull-request | 2024-01-14T14:14:19 | 2025-04-01T06:38:05.355677 | {
"authors": [
"scala-steward"
],
"repo": "bpholt/java-time-literals",
"url": "https://github.com/bpholt/java-time-literals/pull/117",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
180504476 | Add stock prices chart
Connected to #44
Coverage remained the same at 89.495% when pulling 0bcfb725c50e362d63893744f1e94a3ffccdeb64 on add-js-stock-graphs into 49a6b702e1b31aa5dd74707f0f7cb1e05c36d0a8 on master.
| gharchive/pull-request | 2016-10-02T09:47:12 | 2025-04-01T06:38:05.357501 | {
"authors": [
"bpietraga",
"coveralls"
],
"repo": "bpietraga/dollar_tracker",
"url": "https://github.com/bpietraga/dollar_tracker/pull/56",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
[Enhancement] Look into refactoring features to use createFeature
Is your enhancement request related to a problem? Please describe.
With ngrx 13, there is a new createFeature function that can be used to cut down on boilerplate.
Describe the solution you'd like
Refactor existing ngrx utils to leverage this.
Additional context
see: https://ngrx.io/guide/store/feature-creators
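For context, a minimal sketch of what a createFeature-based slice looks like (names here are illustrative placeholders, not from this repo):
import { createAction, createFeature, createReducer, on, props } from '@ngrx/store';

const todosLoaded = createAction('[Todos] Loaded', props<{ items: string[] }>());

export const todosFeature = createFeature({
  name: 'todos',
  reducer: createReducer(
    { items: [] as string[] },
    on(todosLoaded, (state, { items }) => ({ ...state, items }))
  ),
});

// createFeature derives the feature selector and per-key selectors for us:
export const { selectTodosState, selectItems } = todosFeature;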
Not going to do this right now.
| gharchive/issue | 2021-12-24T18:13:17 | 2025-04-01T06:38:05.380825 | {
"authors": [
"bradtaniguchi"
],
"repo": "bradtaniguchi/nx-workspace-template",
"url": "https://github.com/bradtaniguchi/nx-workspace-template/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1747539391 | Can we use a UI for configuration?
For those new to the process, configuring settings directly through json can be quite complex and may require referring to documentation. Perhaps a more user-friendly approach would be to offer a visual configuration method similar to that of vscode.
Thank you for your advice. I will discuss with other developers how to implement a user-friendly GUI configuration editor.
I am a front-end engineer and a fan of yours on Zhihu. If you need any help with the development of this feature, I can contribute some code.
| gharchive/issue | 2023-06-08T10:03:19 | 2025-04-01T06:38:05.390908 | {
"authors": [
"alili",
"bramblex"
],
"repo": "bramblex/niva",
"url": "https://github.com/bramblex/niva/issues/67",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2229842892 | where is default_path variable?
I could not find it in config.py.
Hi Branislav. I'm a bit confused about that as well, and about what value we're supposed to put in there. I'd really like to use your app :) Thanks in advance!
Hello.
If you mean the default checkpoint path for SAM, it is referenced here:
https://github.com/branislavhesko/segment-anything-ui/blob/496bef9ac7571edd9080def9816a8efc1f882093/segment_anything_ui/config.py#L35
| gharchive/issue | 2024-04-07T16:31:58 | 2025-04-01T06:38:05.419256 | {
"authors": [
"Shigeto-Amatake",
"benibargera",
"branislavhesko"
],
"repo": "branislavhesko/segment-anything-ui",
"url": "https://github.com/branislavhesko/segment-anything-ui/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
316454229 | Upgrade to FFmpeg 4.0
https://ffmpeg.org/index.html#pr4.0
:tada:
I'm waiting for 4.0 too.
@AdityaAnand1 @liuatgit You guys can build it yourself, which I highly recommend. Most FFmpeg binaries you find online have modules enabled that you might never use, which will increase the size immensely.
@AdityaAnand1 @liuatgit
Good news! FFmpeg 4.0 is now supported by this library since version 1.1.4
dependencies {
implementation 'nl.bravobit:android-ffmpeg:1.1.4'
}
It is still 3.0.1
| gharchive/issue | 2018-04-21T02:31:06 | 2025-04-01T06:38:05.773502 | {
"authors": [
"AdityaAnand1",
"Brianvdb",
"NiekAkerboom",
"liuatgit",
"nzackoya"
],
"repo": "bravobit/FFmpeg-Android",
"url": "https://github.com/bravobit/FFmpeg-Android/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Feeling regret and sadness about recent events
Sigh.
I just saw this on Zhihu and was quite shocked. As a student you really shouldn't have to bear this much pressure. All I can say is that petty people are hard to please, so try to take it easy. I think everyone who has used SSR should respect your contribution; you gave hundreds of thousands of people something they wanted, and that should count as a bold, bright chapter of your life. Thank you.
Thank you for all your past efforts! I can't stand the way those people behave.
In remembrance.
The traffic carrying this very message runs over a dual-CN2 tunnel I built myself with SSR.
Some people simply believe that some pigs are more equal and more free than other animals,
because they neither understand the troublesome technology nor have any way to fight for their own freedom.
Thank you, author, for your selfless dedication. Whatever happens, you have done something great. Wishing you a bright future, peace, and good health.
Thank you!
Thank you, breakwa11, for your hard work.
Thank you, breakwa11, for what you've done. Others will carry on after you: vampires can kill every rooster that heralds the dawn, but they cannot stop the dawn from coming. Someone has to shoulder the burden of history, no matter how many vicious petty people there are in this world. Best wishes, breakwa11; I hope the day comes when you can cross the wall in person. You're welcome in the land of kangaroos, haha.
Thank you, breakwa11, for your contributions. I also hope that, through the backups, the SSR community can recover its life and vitality.
There's nothing much to say; if anything, just one thing: thank you!
Do you need help? Our team could take over development; we could start over under a completely different name.
Thank you, author, for your work.
Salute!
Originally I just noticed the project had suddenly been deleted, went looking for the latest version, and found that you had already added a feature I once suggested, so I came to leave a thank-you note. But I didn't expect things to have gone this far; the malice is sickening. Thank you for what you've done for ordinary people like us. (I was going to open an issue just to say thanks and then delete it, but since there's already a related one I'll just reply here; it's only because this site has nothing like a comment section.)
Thank you, author. Salute!
Leaving this message over traffic proxied through SSR. Thank you.
Thank you, author. Salute!
Everyone who cries out in the darkness deserves thanks.
Thank you, author. Salute!
At the same time, I think I roughly understand what happened.
Sigh. I hope things keep getting better.
| gharchive/issue | 2017-07-27T15:25:58 | 2025-04-01T06:38:05.781122 | {
"authors": [
"1688pc",
"3160479057qq",
"JHrockice",
"JimTang67",
"Jingyu-Yan",
"LexsionLee",
"Martinjon146001",
"TwilightHome",
"ZhengHui-Z",
"Zzz97siao",
"a17uk",
"akiraxiao",
"dlmgis",
"evolutionjay",
"kj415j45",
"sailosha",
"shanlinfeiniao",
"tphz",
"vajvip",
"xsaddata"
],
"repo": "breakwa11/gfw_whitelist",
"url": "https://github.com/breakwa11/gfw_whitelist/issues/57",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
334257681 | xlsxio fails in the presence of xmlnamespace qualification
The same proprietary file that I mentioned in #28 also includes namespace qualification on the elements in several of its xml files. For example
<?xml version="1.0" encoding="utf-8"?>
<x:sst xmlns:x="http://schemas.openxmlformats.org/spreadsheetml/2006/main">
<x:si>
<x:t>NPANXX</x:t></x:si><x:si>
<x:t>JUR</x:t></x:si><x:si>
<x:t>CARRIERGRP</x:t></x:si><x:si>
<x:t>DETAILED_CARRIERGRP</x:t></x:si><x:si>
<x:t>ST</x:t></x:si><x:si>
<x:t>LATA</x:t></x:si><x:si>
...
xlsxio assumes the author of an xlsx file will not do this, but it is valid for them to do so.
I hack-fixed this in my fork by changing every comparison like this:
if ((XML_Char_icmp(name, X("sst")) == 0) || (XML_Char_icmp(name, X("x:sst")) == 0)) {
But this is just a quick workaround; since any namespace prefix is possible, I just needed to solve it for a particular vendor that was using x.
A proper solution would be to compare only the part of the element name after the last colon, ignoring any namespace prefix, e.g. (a sketch, assuming a narrow-char XML_Char build):
const char *local = strrchr(name, ':');
if (XML_Char_icmp(local ? local + 1 : name, X("sst")) == 0) { /* matches with or without a prefix */ }
This way xlsxio could just ignore namespaces altogether without failing in their presence.
Could you send me a complete example .xlsx file?
Would a similar fix be required in the get_expat_attr_by_name() function?
Hi,
I was just looking back on this issue that was never closed.
XLSX I/O relies on Expat to process the XML, which is why namespaces aren't checked by the library itself, as Expat normally takes care of that.
But if you still have this issue and you can send an example .xlsx file I would be glad to take a look at the issue.
Regards
Brecht
I’m several projects away from where I was when I wrote this. Feel free to
close!
Thanks.
Matt
| gharchive/issue | 2018-06-20T21:33:13 | 2025-04-01T06:38:05.812293 | {
"authors": [
"brechtsanders",
"webern"
],
"repo": "brechtsanders/xlsxio",
"url": "https://github.com/brechtsanders/xlsxio/issues/29",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
595293512 | codespan-reporting and codespan-lsp compatibility
I've been using codespan and codespan-reporting for error reporting for my CLI. I'm now working on an LSP extension and want to use codespan-lsp to covert to the proper LSP types, but it seems like codespan-reporting doesn't play well.
For example, I have a custom file system abstraction that implements the Files trait but codespan-lsp utilities like byte-span-to-range expect a reference to codespan::Files. They also expect a Span but codespan-reporting's Label uses Range<usize>.
Are there plans to align the packages one way or the other, or should I just roll my own diagnostics object and have compatibility layers for codespan-reporting and codespan-lsp?
Thanks for your issue!
I've been trying hard to de-emphasise codespan, and emphasizing codespan-reporting, because I've found it rather challenging to create a one-size-fits all thing for handling files. This was part of the work I did in inverting the dependency between those two crates. Eventually I'd like to deprecate codespan, if I'm honest.
codespan-lsp currently depends on codespan though, so sadly you are tied to using codespan if you want these conversions. It would be cool if there would be some way to make those conversions easier to implement for custom file system abstractions though! I'm still not sure where they fit though…
@brendanzab would you be open to including some LSP utilities in codespan-reporting? It might make sense to have codespan_reporting::lsp to mirror the codespan_reporting::term backend. I'd be happy to help here if you're interested.
Ohhh, that is a good idea, yes! Now I have a better handle on the lsp-types version ranges this could also be helpful too - it was a constant pain to keep having to update them.
I'd definitely be open to collaborating on this! Feel free to chat on Gitter or the the #langdev channel on the rust community discord if you like!
| gharchive/issue | 2020-04-06T17:28:38 | 2025-04-01T06:38:05.821256 | {
"authors": [
"aweary",
"brendanzab"
],
"repo": "brendanzab/codespan",
"url": "https://github.com/brendanzab/codespan/issues/227",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2134942543 | implement remove-by-model-name. Make backups of configs
I had several hundred iBeacon devices to get rid of, all with long hex-string names, so I extended your code to handle model names as well. It's a minimal change, without restructuring completely. In the spirit of it being a power-user tool, but knowing even power users can sometimes shoot themselves in the foot, I provided one level of toe-armor: it makes a backup of the three config files.
Looks good, though I haven't had occasion to use it. Does it still work for basic removal?
| gharchive/pull-request | 2024-02-14T18:34:41 | 2025-04-01T06:38:05.840020 | {
"authors": [
"brettonw",
"reedstrm"
],
"repo": "brettonw/Remove-Home-Assistant-Device",
"url": "https://github.com/brettonw/Remove-Home-Assistant-Device/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
248192316 | PrometheusMetricsTrackerFactory should support custom registries
Prometheus collectors can typically be registered either in the default registry or in a user-supplied registry. As the default registry is basically a singleton maintained in a static field of Prometheus' CollectorRegistry, using it in non-trivial class loader scenarios (like in Java EE servers) can be problematic. Therefore, I think that providing support for custom registries is very valuable.
I'm aware of the problems discussed in #940 and #851 regarding the difficulties to maintain single instances of collectors as required by the Prometheus Client API, but I think there must be a way to fix it. If we force users to use a single MetricsTrackerFactory for each CollectorRegistry, this could actually be very simple.
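Concretely, the wiring I have in mind would look roughly like this (a sketch of the requested API; the PrometheusMetricsTrackerFactory constructor taking a CollectorRegistry is the part being proposed here, so treat it as an assumption):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.metrics.prometheus.PrometheusMetricsTrackerFactory;
import io.prometheus.client.CollectorRegistry;

class PoolWiring {
    static HikariDataSource create(CollectorRegistry registry) {
        // registry is app-managed, not the static default registry
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost/app"); // placeholder URL
        config.setMetricsTrackerFactory(new PrometheusMetricsTrackerFactory(registry));
        // closing the returned pool should also deregister its collectors
        return new HikariDataSource(config);
    }
}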
There should also be a way to deregister collectors in a clean way. Something like PrometheusMetricsTrackerFactory.close(). It is important to support this especially if you create and destroy pools at runtime.
I would love to work on this and provide a pull request if you are interested. But we should merge #940 first, as it already improves the overall structure of the code.
Hi @chkal !
Could you please have a look at this PR: https://github.com/brettwooldridge/HikariCP/pull/1331 ?
I've introduced the ability to deregister collectors when the connection pool is shutting down.
@apodkutin Thanks. I'll try to find some time in the next few days to have a deeper look at this. But I'm not very familiar with the details, so I may not be the best person for a review.
Now that #1331 has been merged and released, can this issue be closed?
Now that #1331 has been merged and released, can this issue be closed?
I second this. @brettwooldridge Please close this issue.
| gharchive/issue | 2017-08-05T15:51:09 | 2025-04-01T06:38:05.847139 | {
"authors": [
"apodkutin",
"chkal",
"djbehnke",
"edysli"
],
"repo": "brettwooldridge/HikariCP",
"url": "https://github.com/brettwooldridge/HikariCP/issues/950",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
100573408 | Fix #224 - aliased column in function
Fixed the aliased-column-in-function-call issue. When in a function call, columns must not have an alias. Added some tests as well.
Hi @brianc. More conflicts resolved. I think this one is good to merge too
:dancer:
| gharchive/pull-request | 2015-08-12T15:44:04 | 2025-04-01T06:38:05.872274 | {
"authors": [
"brianc",
"edudutra"
],
"repo": "brianc/node-sql",
"url": "https://github.com/brianc/node-sql/pull/255",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
163054266 | Jitter when animating via transformNode.scale
Hey!
I'm currently trying to put together a project using masking and we've run across an issue wherein setting scale on a transformNode causes a visible jitter.
This behavior seems to become worse the more nodes are added to the graph.
Essentially we are running an requestAnimationFrame loop which sets a new value using
transformNode.scale(value)
on every frame. What we observe is either stale values, or conflicting values within the transform node.
I've attached a video of the observed bug. This doesn't seem to be performance related as the page runs at a solid 60fps in the test.
I'd be happy to try to contribute a fix if you can point to me in the right direction in the source. Any help on this one is greatly appreciated as I've got a delivery coming up very soon.
jitter_capture.mp4.zip
Hey @tencircles. That's interesting. Do you have a reduced code sample you can share? Maybe a jsbin, codepen or something live like that?
Does it only happen with whatever effects you're using for masking?
Have you tried it in Firefox as well?
Does it make a difference if you change the resolution of the source image/video?
Hey @brianchirls, thanks for the response!
I've set up a test repo here
https://github.com/tencircles/seriously_mask_test
The javascript code is here:
https://github.com/tencircles/seriously_mask_test/blob/master/js/index.js
And you can find the code live here:
http://45.55.37.142/seriously_mask_test/
The jitter seems to be most prominent when using the translate method of the transform2d node.
Steps to reproduce:
Check do_translate
Set start/end_translate to any values
Click 'tween'
There seems to be an observable disparity between the numbers we send (viewable under 'translate' in the GUI), and the result on screen. You can see both the scale and translate input values on screen.
You can find a screen cap showing the disparity between input values on and on screen result here. If you step through frame by frame it's really easy to see.
http://45.55.37.142/seriously_mask_test/captures/chrome_counter.mp4
The behavior seems to be the same in firefox and in safari.
Tried changing images, didn't have any effect. Seems to be limited to the transform node, not sure if the behavior is present in other nodes.
Let me know if I can dig anything up!
Okay, first of all, this is a pretty cool-looking composition. I'm looking forward to seeing the finished product.
This is a very strange one. But there is definitely something going wrong here - you're not imagining things. ;-) I'm not able to replicate your problem on my machine, but I can see in your video that the "A" is moving backwards, which it obviously should not do. I've run CPU profiles, timeline recordings and screen captures; I even tried CPU throttling. Apart from a few janky frames, this does not appear to be a performance issue. That wouldn't explain backwards movement anyway.
I wonder if something weird might be going on with TweenMax. It's hard for me to tell for sure, because the code is minified and I'm not familiar with the API or behavior of that library. In theory, it might be possible that two requestAnimationFrame cycles are running and fighting over setting different values on the same target object. A later value might be smaller than the previous one, and the difference might be < 1/1000, in which case you would not see it show up in your scale value text. But the difference might be magnified enough to be visible in the canvas by rounding of pixel positions and/or floating point quirks. I know it sounds like a bit of a stretch, but it's all I got at the moment.
Are you sure you're cleaning up the timeline properly?
Is it possible that this only happens after you've gone through the animation at least once? Or does it happen on the first time you play through it?
Since I can't replicate, I can't do the debugging for you, but I can point you in the right direction. I suggest two things:
Set a conditional breakpoint at line 5933 of seriously.js with this code: translateX < 499 && x < translateX. If it breaks, you'll know that the incoming value is less than the one before it. (This will only work if you let the animation play all the way to the end.) You can see in the call stack where that happened. If it never breaks and you still see the A going backwards, that will tell us we need to look deeper into Seriously.
Try making a reduced test without TweenMax or even dat.gui. Write your own code in the callback passed to seriously.go() to determine the x value of the translation. If you still have the problem, then we'll know for sure the bug is in Seriously.js. If not, then it's more likely that something is going on with TweenMax.
Can you report the results back here?
Set a conditional breakpoint at line 5933 of seriously.js with this code: translateX < 499 && x < translateX. If it breaks, you'll know that the incoming value is less than the one before it. (This will only work if you let the animation play all the way to the end.) You can see in the call stack where that happened. If it never breaks and you still see the A going backwards, that will tell us we need to look deeper into Seriously.
Just tried this out, but the breakpoint doesn't trigger. You can also seek frame by frame in the video above to see the values being passed to seriously. You'll see the values on screen going up constantly, but the result rendered jumps back.
It's very difficult to see with the naked eye, so this might be occurring for you, just not quite as noticeably perhaps? For me most of the time it looks like slight performance jank, but in a 60fps screen cap you can actually see it's moving backwards.
Try making a reduced test without TweenMax or even dat.gui. Write your own code in the callback passed to seriously.go() to determine the x value of the translation. If you still have the problem, then we'll know for sure the bug is in Seriously.js. If not, then it's more likely that something is going on with TweenMax.
Rather than coding that manually, I just added all values sent to seriously to an array with timestamp + value. Since all values we send go through a single point (line 112 in the example above), we can be 100% sure that nothing else is setting values or calling methods within seriously. Checking the values sent both manually and programmatically, it looks like we are sending values which steadily increase over time. If you want to have a look at the numbers let me know.
Any ideas on where to look within seriously to try to catch this? Stale matrix values maybe?
Just pinging this. Any clue where we could start looking?
I was only just able to replicate this. Earlier efforts, even with screen recording, didn't show the problem.
It's not jank, because that wouldn't be moving backwards. Also seems unlikely that stale matrix values would cause it to go backwards. I checked most of the matrix values of the different nodes, and they all seem correct.
Another hunch I had is that something weird is happening with ping-ponging textures. Some effects do that, but not the ones you're using. I suppose it's possible that something weird is going on with buffering in the GPU driver, though unlikely. What machines have you tested this on?
I did notice that you're using a lot of "layer" effects that don't seem necessary, either because they only have a single source input or they have 2 sources but only one of them is in use. Maybe there's something going on there? Is there a reason for that? Maybe if you can eliminate those nodes and it fixes the problem, that might get you far enough to deliver your product and it'll give me/us a starting point to hunt down whatever bug.
Hello Brian,
We've run some stress tests to try and see which nodes specifically could be causing the issue.
For each of these tests, we animate the letter by calling the letter transform node's translate method within the callback passed to seriously.go().
All tests have jittering when animating. The jittering is sometimes limited, sometimes erratic, seemingly without taking into account the level of complexity. The video captures in the zip have all been made on Chrome OS X, the behavior is exactly the same in Safari. On Chrome Windows, the jittering is still present but much more subtle.
We haven't been able to identify what worsens or betters the jittering as the behavior seems to be random. To clear any possible external causes, we are not updating the translate method's x position with TweenMax anymore.
To answer your question, we are using layer nodes so that we can control each layer's opacity level independently.
You can download the zip and run the directory on a local server.
Here's the routing for each of these tests, in order of complexity.
simple_test_step1_noReformat:
letter source image => letter transform node
target canvas (source: letter transform node)
simple_test_step1:
letter source image => letter reformat node => letter transform node
target canvas (source: letter transform node)
simple_test_step2_noReformat:
letter source image => letter transform node
color node
layers node (source0: color node, source1: letter transform node)
target canvas (source: layers node)
simple_test_step2:
letter source image => letter reformat node => letter transform node
color node
layers node (source0: color node, source1: letter transform node)
target canvas (source: layers node)
simple_testBug:
letter source image => letter reformat node => letter transform node
color node
layers node (source0: color node, source1: letter transform node)
starry sky source image => starry sky reformat node
gradient wipe (source: starry sky reformat node, gradient: layers node)
target canvas (source: gradient wipe)
Do you have any suggestions as to what we could do?
Thank you very much
seriously_stressTests.zip
Okay, thanks for these reduced test cases. I can work with this. I have some ideas of where to start and will dive into it as soon as I can.
@brianchirls Any way I can help out with this one? I've poked around in the source, but it's really just guesswork.
Hi Brian! Have you had any chance to look into this? Can we be of any help?
| gharchive/issue | 2016-06-30T00:03:50 | 2025-04-01T06:38:06.446454 | {
"authors": [
"brianchirls",
"johanbelin",
"tencircles"
],
"repo": "brianchirls/Seriously.js",
"url": "https://github.com/brianchirls/Seriously.js/issues/126",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1111673246 | AAD SignIns Insights
I wonder if there would be an interest for such a module. It's essentially a concept similar to what we have for analysts in the entity page, made available for automation, and a bit of what we have in UEBA.
Takes a user and returns stats such as:
Last successful logon data (timestamp + other metadata)
Last failed logon data
Usual user-agent-string data
Usual countries/IPs
If there is a cloud-logon-session present in the entities (case of an AAD Protection alert), return all the info about this particular login.
That last one maybe could be added to the AAD Risk Module instead.
I like the idea; would this be a new module, or perhaps an extension to the capabilities of the AAD Risks module? It may make it too complex though if we add it... not sure offhand.
One issue: the cloud-logon-session is not passed from the incident trigger, so you have to go back into SecurityAlerts to get it... this is one of the reasons for #205, so you can use the KQL module to look up the incident easily and work from there.
Maybe also return a table of last successful access per app?
possibilities to include:
Conditional access failures (and the policies that failed)
Named locations the user has been seen from
Device join status of the signins
password resets or other interesting admin actions on the account
insights about sign-in hours
(although we would need to define what baseline could be used to determine out-of-character behaviors)
| gharchive/issue | 2022-01-22T20:01:59 | 2025-04-01T06:38:06.458523 | {
"authors": [
"briandelmsft",
"piaudonn"
],
"repo": "briandelmsft/SentinelAutomationModules",
"url": "https://github.com/briandelmsft/SentinelAutomationModules/issues/210",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
46562977 | publish to npm
so ppl can npm install selectize --save :+1:
Caved in and published it: https://www.npmjs.com/package/selectize
:+1: Thank you :)
| gharchive/issue | 2014-10-22T21:28:08 | 2025-04-01T06:38:06.471518 | {
"authors": [
"brianreavis",
"gdibble"
],
"repo": "brianreavis/selectize.js",
"url": "https://github.com/brianreavis/selectize.js/issues/604",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
255324908 | Fix removed task file include
Fix bug introduced in fb936053af1663abe4557befb4db36a845043c3a.
I tested this is working on a single node install (ubuntu 17.04)
Our fixes just crossed as I just fixed this as well and made a new release.
| gharchive/pull-request | 2017-09-05T15:51:14 | 2025-04-01T06:38:06.472563 | {
"authors": [
"brianshumate",
"vincent-legoll"
],
"repo": "brianshumate/ansible-consul",
"url": "https://github.com/brianshumate/ansible-consul/pull/115",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2154845812 | Proper save time handling
Finally resolve the TODO for reading save time
use chrono::DateTime for parsed save time
if save does not have a time, use current time when writing
Result of read_json example
{
"version":10,
"game_version":6781,
"map":"Plate",
"description":"",
"author":{
"name":"x",
"id":"3f5108a0-c929-4e77-a115-21f65096887b"
},
"host":{
"name":"x",
"id":"3f5108a0-c929-4e77-a115-21f65096887b"
},
"save_time":"2021-07-10T22:22:49.135Z",
...
}
I made the chrono and uuid imports pub use so that dependent crates can use them without adding whole crates to their Cargo.toml.
Otherwise, this is great. Thanks!
| gharchive/pull-request | 2024-02-26T18:33:46 | 2025-04-01T06:38:06.493637 | {
"authors": [
"Kmschr",
"voximity"
],
"repo": "brickadia-community/brickadia-rs",
"url": "https://github.com/brickadia-community/brickadia-rs/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1349017990 | Add depth check for brace and paren to fix #297
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
This patch added a depth check for brace and paren to fix incorrect tag value parsing.
Unfortunately I found that the gosec issue and failing tests also exist in the main branch. I'll try to fix them in this PR but cannot guarantee it.
I've passed the unit tests and integration tests on my machine, and I've fixed the gosec issue. Could we run CI for this PR please? Thanks.
Sign-off: I've passed all unit tests and integration tests on my machine, and I've fixed the gosec issue. Could we run CI for this PR please? Thanks.
Hi @nimrodkor, would you please give this PR a review? Thanks!
Hi, I've solved the conflicts, can anyone give this pr a review? Thanks.
| gharchive/pull-request | 2022-08-24T07:50:18 | 2025-04-01T06:38:06.502959 | {
"authors": [
"lonegunmanb"
],
"repo": "bridgecrewio/yor",
"url": "https://github.com/bridgecrewio/yor/pull/298",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1324871695 | Publish v6.0.3
Publishes v6.0.3
Publishes fixes for all v6.0.3 milestone issues
Ran across this when I toggled the theme... is it correct with the white-ish look?
@jeffvg for that section it's supposed to be a random image. It's just not rendering for you for some reason, so I think this is OK for now. There are no styles or anything doing that. I assume that's just what happens when an image doesn't render.
| gharchive/pull-request | 2022-08-01T19:26:08 | 2025-04-01T06:38:06.587415 | {
"authors": [
"daileytj",
"jeffvg"
],
"repo": "brightlayer-ui/react-native-component-library",
"url": "https://github.com/brightlayer-ui/react-native-component-library/pull/286",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1033097366 | TableSortLabel should not be hidden when mouse is not hovering on it
Describe the desired behavior
When a .MuiTableSortLabel is not hovered on, the arrow should have the ~disabled color~
(edit: should be a "disabledBackground" color of Gray500@12% for light theme, and a Black200@24% for dark theme).
When it is hovered or in-use, the label should be text.secondary
Describe the current behavior
The arrow has an opacity 0 on it when not in use.
Additional Context
This will be part of the Tables effort.
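For reference, one possible MUI v4 theme override along these lines (a sketch only; the rgba values are placeholders for the color tokens above, not final values):
import { createMuiTheme } from '@material-ui/core/styles';
const theme = createMuiTheme({
  overrides: {
    MuiTableSortLabel: {
      // keep the arrow visible with a subtle disabledBackground-style color
      icon: { opacity: 1, color: 'rgba(66, 78, 84, 0.12)' },
      // use a text.secondary-like color when the label is hovered or in use
      active: { color: 'rgba(66, 78, 84, 0.72)' },
    },
  },
});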
#2633
@huayunh Shall we close this, as the PR has been merged?
| gharchive/issue | 2021-10-22T01:49:16 | 2025-04-01T06:38:06.590073 | {
"authors": [
"bkarambe",
"huayunh"
],
"repo": "brightlayer-ui/react-themes",
"url": "https://github.com/brightlayer-ui/react-themes/issues/11",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1642017995 | Weatherflow Hourly based Forecast: Unknown
As visible in the following screenshot, I sometimes get 'unknown' ('Inconnu' in French) as the current weather condition:
I've been getting this for a few weeks/months now. I'm not sure if it was caused by a change in this plugin or if Weatherflow changed something in their API. One of the statuses is probably not mapped properly; I haven't managed to find which one yet. It also looks like there is no French translation in this plugin, so I guess the value "Inconnu" comes from HA directly.
Looking at the code, I guess you take the value from "icon" in "current_conditions" from /better_forecast. I'll see if I can find the value reported by the Weatherflow API when the problem occurs.
There is nothing in HA's logs by the way.
Duplicate. Sorry, GitHub had a hiccup.
| gharchive/issue | 2023-03-27T12:33:10 | 2025-04-01T06:38:06.592484 | {
"authors": [
"lostcontrol"
],
"repo": "briis/hass-weatherflow",
"url": "https://github.com/briis/hass-weatherflow/issues/66",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1697128985 | Discord Bot
Description
Create a new Discord Server for Vike, and use/create a Discord bot to teach users about https://github.com/brillout/vite-plugin-ssr/discussions/526.
Having a cozy firechat-like place to casually chit-chat about general topics could be lovely. Example use cases:
Show us what you're building (startup, dev tool, Vike extension, etc.)
Philosophical discussions about the future of programming and/or Vike (e.g. vertical integration vs do-one-thing-do-it-well)
News
Etc.
It's paramount that users don't ask for help in that space. Therefore I think a bot is necessary to prominently show the rules of https://github.com/brillout/vite-plugin-ssr/discussions/526 with maybe (if possible?) a checkbox "I've read the rules" that users have to check before being able to start chatting.
Contribution much welcome to point us to a Discord bot that does this, or maybe implement one? I'm unfamiliar with Discord's bot API.
I express interest in such a bot for inlang
Closing as we don't need this anymore.
| gharchive/issue | 2023-05-05T07:13:36 | 2025-04-01T06:38:06.598289 | {
"authors": [
"brillout",
"samuelstroschein"
],
"repo": "brillout/vite-plugin-ssr",
"url": "https://github.com/brillout/vite-plugin-ssr/issues/860",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1989014187 | Show max depth or prevent drop on folders
I've got a tree that has categories and within those there are child links. I render two different trees, one that shows just the top level categories (and I want it only to ever display the root level nodes) and one that when you select a category shows the children of the selected category. Imagine it's a bit like windows explorer with 2 panes, and the left pane only ever shows the top level items.
This is all working great apart from one small thing! :)
If you drag within the top level category tree and drop onto one of the other categories, it then opens the node and shows all the sub items.
Is there a way you can tell a tree to allow drag and drop, but not to automatically open folders when you drag something into them, OR to only ever show a max depth of children (so you could set it to one, for example)?
I'm using a controlled tree and have my own handlers for onMove etc.
I hope that rather convoluted explanation makes sense... Thanks for any help, it's a terrific package!
I managed to get a solution and tackled it in a different way. What I do is pass a modified version of the top level node data into the left hand tree where I strip all the children from the array. That makes it behave as leaf nodes, so no folder opening!
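In code, that could look something like this (a sketch; it assumes nodes shaped like { id, name, children }):
// Strip children so the left-hand tree only ever renders root categories as leaves
const rootOnlyData = treeData.map(({ children, ...category }) => category);
// then pass rootOnlyData to the left-hand <Tree data={rootOnlyData} />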
Great! Glad you found a solution. I love seeing screenshots of people using the package. Happy coding.
| gharchive/issue | 2023-11-11T15:58:12 | 2025-04-01T06:38:06.601515 | {
"authors": [
"jameskerr",
"theearlofsandwich"
],
"repo": "brimdata/react-arborist",
"url": "https://github.com/brimdata/react-arborist/issues/187",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1508193376 | Replace zio/parquetio with zio/arrowio and pqarrow
We can replace the Parquet reader and writer in zio/parquetio with a combination of zio/arrowio and github.com/apache/arrow/go/v11/parquet/pqarrow, and we probably should since
it'll let us remove about 750 lines from zio/parquetio,
the Arrow Parquet implementation will probably receive more attention than the one we're currently using, and
it fixes #764.
Verifications of this change are in https://github.com/brimdata/zed/issues/764#issuecomment-1526667106 and https://github.com/brimdata/zed/issues/4527#issuecomment-1526671541. Thanks @nwt!
| gharchive/issue | 2022-12-22T16:42:00 | 2025-04-01T06:38:06.604239 | {
"authors": [
"nwt",
"philrz"
],
"repo": "brimdata/zed",
"url": "https://github.com/brimdata/zed/issues/4278",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
137374964 | Filter based on Gaussianity
We filter targets with extreme variance, the idea being that tangent normalization did a poor job with these and our model is unreliable for them. However, issue #378 will likely go a long way toward making this less of a concern. One thing we don't filter on and perhaps should is not how much variance remains after tangent normalization but how Gaussian the coverage looks after tangent normalization. Since our model assumes Gaussian copy ratio this amounts to filtering out targets for which our model is unsuitable.
We could implement this, for example, by filtering on the Anderson-Darling test statistic for each target.
Obviated by new coverage model.
| gharchive/issue | 2016-02-29T21:06:06 | 2025-04-01T06:38:06.723382 | {
"authors": [
"davidbenjamin",
"samuelklee"
],
"repo": "broadinstitute/gatk-protected",
"url": "https://github.com/broadinstitute/gatk-protected/issues/380",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2611301659 | github video player having issues when playing video
Describe the bug
The video keeps restarting, or seizes the volume button and plays with no sound
Desktop (please complete the following information):
chrome, edge
Additional context
Suggest the video be moved to YouTube and embedded in the README
@ukpagrace This is regular GitHub behavior 🙏
| gharchive/issue | 2024-10-24T11:38:51 | 2025-04-01T06:38:06.760685 | {
"authors": [
"Shchepotin",
"ukpagrace"
],
"repo": "brocoders/nestjs-boilerplate",
"url": "https://github.com/brocoders/nestjs-boilerplate/issues/1772",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1686493301 | Honkai Star Rail Support?
With the recent launch of Honkai Star Rail, it seems adding it would be really easy.
https://act.hoyolab.com/bbs/event/signin/hkrpg/index.html?act_id=e202303301540311&bbs_auth_required=true&bbs_presentation_style=fullscreen&lang=en-us&utm_source=share&utm_medium=link&utm_campaign=web
yes please, add Star Rail too
| gharchive/issue | 2023-04-27T09:40:37 | 2025-04-01T06:38:06.762051 | {
"authors": [
"Seantourage",
"ShatteredGod"
],
"repo": "brokiem/auto-hoyolab-checkin",
"url": "https://github.com/brokiem/auto-hoyolab-checkin/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1620226847 | refactor gpkg_write()
[x] vector write not working properly anymore
[x] better handling of list input for naming feature / tile sets / data_null
[x] basic post-processing/validating of result
[x] tests
I am happy with the functioning of gpkg_write() and think it is now more robust and better documented. Will create a new issue for the idea of validating geopackages.
| gharchive/issue | 2023-03-12T04:04:08 | 2025-04-01T06:38:06.785024 | {
"authors": [
"brownag"
],
"repo": "brownag/gpkg",
"url": "https://github.com/brownag/gpkg/issues/1",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
292876140 | License / publication on hex
Hi brpandey,
thanks for this implementation. Is there any chance of publishing this on hex? And/or, could you put an explicit license on the code?
Thanks in advance
arnomi
https://hex.pm/packages/interval_tree
| gharchive/issue | 2018-01-30T17:47:15 | 2025-04-01T06:38:06.823047 | {
"authors": [
"arnomi",
"brpandey"
],
"repo": "brpandey/interval_tree",
"url": "https://github.com/brpandey/interval_tree/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
53504800 | Test assets folder copied in public folder on production env
In previous versions, too. Today's is 1.17.20
Change the assets convention in your config to be more specific, or rename the assets dir you don't want copied.
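For example, something along these lines in brunch-config (a sketch; adjust the regex to your own layout):
// brunch-config.js (sketch)
module.exports = {
  conventions: {
    // only treat app/assets as assets, so test fixtures elsewhere are not copied
    assets: /^app\/assets\//
  }
  // ...rest of your config
};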
| gharchive/issue | 2015-01-06T11:14:56 | 2025-04-01T06:38:06.850829 | {
"authors": [
"devel-pa",
"es128"
],
"repo": "brunch/brunch",
"url": "https://github.com/brunch/brunch/issues/904",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
746353127 | Installation problem
Hi there,
I hope someone in the know could help point me in the right direction. What could I possibly be missing? Thank you!
/usr/local/lib/node_modules/gatsby-cli/node_modules/yoga-layout-prebuilt/yoga-layout/build/Release/nbind.js:53
throw ex;
^
Error: listen EADDRINUSE: address already in use :::8000
at Server.setupListenHandle [as _listen2] (net.js:1280:14)
at listenInCluster (net.js:1328:12)
at Server.listen (net.js:1415:7)
at startDevelopProxy (/Users/adam/PROJECT/MAIONE/node_modules/gatsby/src/utils/develop-proxy.ts:86:10)
at module.exports (/Users/adam/PROJECT/MAIONE/node_modules/gatsby/src/commands/develop.ts:124:17)
at process._tickCallback (internal/process/next_tick.js:68:7)
at Function.Module.runMain (internal/modules/cjs/loader.js:834:11)
at startup (internal/bootstrap/node.js:283:19)
at bootstrapNodeJSCore (internal/bootstrap/node.js:623:3)
Oh, apparently we can't run another app on the same port :8000 (even though I have none running)
Been trying for hours, just couldn't get past
Cannot query field "allStripeSku" on type "Query".
I've created test data on Stripe and followed everything to a "T"
Hopefully someone is kind enough to point me in the right direction. What was supposedly a 5-min installation has turned into hours of head-scratching.
Thanks everyone.
@yansusanto Did you figure this out? You have to use the Stripe API to add SKU/quantity.
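For anyone landing here, creating a SKU through the API looks roughly like this with stripe-node (a sketch; the product ID, price, and quantity below are placeholders, not values from this thread):
const stripe = require('stripe')('sk_test_...');
stripe.skus.create({
  product: 'prod_123', // an existing Product ID (placeholder)
  price: 1500, // amount in cents
  currency: 'usd',
  inventory: { type: 'finite', quantity: 10 },
}).then(sku => console.log('created SKU', sku.id));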
The latest version of the starter comes with Stripe fixtures, which I hope will alleviate problems others are having with getting the proper data setup in Stripe. It also moves from the Orders API (Skus) to the Prices API.
Please let me know if you have any problems with the new version of the starter!
| gharchive/issue | 2020-11-19T08:18:35 | 2025-04-01T06:38:06.854741 | {
"authors": [
"brxck",
"thomasvaeth",
"yansusanto"
],
"repo": "brxck/gatsby-starter-stripe",
"url": "https://github.com/brxck/gatsby-starter-stripe/issues/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
363009858 | Quits with issue
Finally getting around to trying this again. (Thanks for #3)
I have:
[x] dayone2 cli installed
[x] iMessage journal created
[x] successfully run the app
When attempting to import (regardless of date range selected, and clearing the imported dates each time), I get an exception. Here's the full traceback
2018-09-23 20:58:30.669000-0700 iMessage Importer[20871:2537379] [default] Unable to load Info.plist exceptions (eGPUOverrides)
importing: 2017-09-23 15:00:00 +0000
=================================================================
Main Thread Checker: UI API called on a background thread: -[NSControl setStringValue:]
PID: 20871, TID: 2539520, Thread name: (none), Queue name: com.imessagesimport, QoS: 17
Backtrace:
4 iMessage Importer 0x000000010003acf6 $S17iMessage_Importer20ImportViewControllerC36importMessagesForAllNonImportedDatesyyF + 3334
5 iMessage Importer 0x000000010003bd64 $S17iMessage_Importer20ImportViewControllerC36importMessagesForAllNonImportedDatesyyFTo + 36
6 iMessage Importer 0x0000000100037ad0 $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU1_ + 32
7 iMessage Importer 0x0000000100037b2d $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU1_TA + 13
8 iMessage Importer 0x0000000100032aad $SIeg_IeyB_TR + 45
9 libdispatch.dylib 0x00000001010df7f3 _dispatch_client_callout + 8
10 libdispatch.dylib 0x00000001010e35f9 _dispatch_block_invoke_direct + 291
11 iMessage Importer 0x0000000100037c8f $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU2_ + 31
12 iMessage Importer 0x0000000100037cac $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU2_TA + 12
13 iMessage Importer 0x0000000100032aad $SIeg_IeyB_TR + 45
14 libdispatch.dylib 0x00000001010de71b _dispatch_call_block_and_release + 12
15 libdispatch.dylib 0x00000001010df7f3 _dispatch_client_callout + 8
16 libdispatch.dylib 0x00000001010e79f7 _dispatch_lane_serial_drain + 789
17 libdispatch.dylib 0x00000001010e88ae _dispatch_lane_invoke + 446
18 libdispatch.dylib 0x00000001010f3f6c _dispatch_workloop_worker_thread + 691
19 libsystem_pthread.dylib 0x000000010115a058 _pthread_wqthread + 409
20 libsystem_pthread.dylib 0x0000000101159e51 start_wqthread + 13
2018-09-23 20:58:51.078156-0700 iMessage Importer[20871:2539520] [reports] Main Thread Checker: UI API called on a background thread: -[NSControl setStringValue:]
PID: 20871, TID: 2539520, Thread name: (none), Queue name: com.imessagesimport, QoS: 17
Backtrace:
4 iMessage Importer 0x000000010003acf6 $S17iMessage_Importer20ImportViewControllerC36importMessagesForAllNonImportedDatesyyF + 3334
5 iMessage Importer 0x000000010003bd64 $S17iMessage_Importer20ImportViewControllerC36importMessagesForAllNonImportedDatesyyFTo + 36
6 iMessage Importer 0x0000000100037ad0 $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU1_ + 32
7 iMessage Importer 0x0000000100037b2d $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU1_TA + 13
8 iMessage Importer 0x0000000100032aad $SIeg_IeyB_TR + 45
9 libdispatch.dylib 0x00000001010df7f3 _dispatch_client_callout + 8
10 libdispatch.dylib 0x00000001010e35f9 _dispatch_block_invoke_direct + 291
11 iMessage Importer 0x0000000100037c8f $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU2_ + 31
12 iMessage Importer 0x0000000100037cac $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU2_TA + 12
13 iMessage Importer 0x0000000100032aad $SIeg_IeyB_TR + 45
14 libdispatch.dylib 0x00000001010de71b _dispatch_call_block_and_release + 12
15 libdispatch.dylib 0x00000001010df7f3 _dispatch_client_callout + 8
16 libdispatch.dylib 0x00000001010e79f7 _dispatch_lane_serial_drain + 789
17 libdispatch.dylib 0x00000001010e88ae _dispatch_lane_invoke + 446
18 libdispatch.dylib 0x00000001010f3f6c _dispatch_workloop_worker_thread + 691
19 libsystem_pthread.dylib 0x000000010115a058 _pthread_wqthread + 409
20 libsystem_pthread.dylib 0x0000000101159e51 start_wqthread + 13
2018-09-23 20:58:51.121123-0700 iMessage Importer[20871:2539520] [logging-persist] cannot open file at line 42249 of [95fbac39ba]
2018-09-23 20:58:51.121149-0700 iMessage Importer[20871:2539520] [logging-persist] os_unix.c:42249: (0) open(/Users/adambrault//Library/Messages/chat.db) - Undefined error: 0
unable to open database file (code: 14)
2018-09-23 20:58:51.128226-0700 iMessage Importer[20871:2539520] [General] An uncaught exception was raised
2018-09-23 20:58:51.128257-0700 iMessage Importer[20871:2539520] [General] NSWindow drag regions should only be invalidated on the Main Thread!
2018-09-23 20:58:51.128369-0700 iMessage Importer[20871:2539520] [General] (
0 CoreFoundation 0x00007fff340ba43d __exceptionPreprocess + 256
1 libobjc.A.dylib 0x00007fff5ffc7720 objc_exception_throw + 48
2 CoreFoundation 0x00007fff340d3ec1 -[NSException raise] + 9
3 AppKit 0x00007fff315d82a5 -[NSWindow(NSWindow_Theme) _postWindowNeedsToResetDragMarginsUnlessPostingDisabled] + 324
4 AppKit 0x00007fff315d568c -[NSWindow _initContent:styleMask:backing:defer:contentView:] + 1488
5 AppKit 0x00007fff31695edf -[NSPanel _initContent:styleMask:backing:defer:contentView:] + 50
6 AppKit 0x00007fff315d50b6 -[NSWindow initWithContentRect:styleMask:backing:defer:] + 45
7 AppKit 0x00007fff31695e94 -[NSPanel initWithContentRect:styleMask:backing:defer:] + 64
8 AppKit 0x00007fff315d35c7 -[NSWindowTemplate nibInstantiate] + 495
9 AppKit 0x00007fff3154e646 -[NSIBObjectData instantiateObject:] + 267
10 AppKit 0x00007fff3154d9b4 -[NSIBObjectData nibInstantiateWithOwner:options:topLevelObjects:] + 579
11 AppKit 0x00007fff3154bbe1 loadNib + 401
12 AppKit 0x00007fff3154b0a9 +[NSBundle(NSNibLoading) _loadNibFile:nameTable:options:withZone:ownerBundle:] + 696
13 AppKit 0x00007fff3154acee -[NSBundle(NSNibLoading) loadNibNamed:owner:topLevelObjects:] + 204
14 AppKit 0x00007fff31904cc9 -[NSAlert init] + 101
15 iMessage Importer 0x00000001000414a3 $SSo7NSAlertCABycfcTO + 19
16 iMessage Importer 0x00000001000381cf $SSo7NSAlertCABycfC + 31
17 iMessage Importer 0x0000000100044b87 $S17iMessage_Importer0aB0C11getMessagesyyF + 9719
18 iMessage Importer 0x000000010003be41 $S17iMessage_Importer20ImportViewControllerC14importMessages4datey10Foundation4DateV_tF + 193
19 iMessage Importer 0x000000010003bea1 $S17iMessage_Importer20ImportViewControllerC14importMessages4datey10Foundation4DateV_tFTo + 65
20 iMessage Importer 0x000000010003afc8 $S17iMessage_Importer20ImportViewControllerC36importMessagesForAllNonImportedDatesyyF + 4056
21 iMessage Importer 0x000000010003bd64 $S17iMessage_Importer20ImportViewControllerC36importMessagesForAllNonImportedDatesyyFTo + 36
22 iMessage Importer 0x0000000100037ad0 $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU1_ + 32
23 iMessage Importer 0x0000000100037b2d $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU1_TA + 13
24 iMessage Importer 0x0000000100032aad $SIeg_IeyB_TR + 45
25 libdispatch.dylib 0x00000001010df7f3 _dispatch_client_callout + 8
26 libdispatch.dylib 0x00000001010e35f9 _dispatch_block_invoke_direct + 291
27 iMessage Importer 0x0000000100037c8f $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU2_ + 31
28 iMessage Importer 0x0000000100037cac $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU2_TA + 12
29 iMessage Importer 0x0000000100032aad $SIeg_IeyB_TR + 45
30 libdispatch.dylib 0x00000001010de71b _dispatch_call_block_and_release + 12
31 libdispatch.dylib 0x00000001010df7f3 _dispatch_client_callout + 8
32 libdispatch.dylib 0x00000001010e79f7 _dispatch_lane_serial_drain + 789
33 libdispatch.dylib 0x00000001010e88ae _dispatch_lane_invoke + 446
34 libdispatch.dylib 0x00000001010f3f6c _dispatch_workloop_worker_thread + 691
35 libsystem_pthread.dylib 0x000000010115a058 _pthread_wqthread + 409
36 libsystem_pthread.dylib 0x0000000101159e51 start_wqthread + 13
)
2018-09-23 20:58:51.138791-0700 iMessage Importer[20871:2539520] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'NSWindow drag regions should only be invalidated on the Main Thread!'
*** First throw call stack:
(
0 CoreFoundation 0x00007fff340ba43d __exceptionPreprocess + 256
1 libobjc.A.dylib 0x00007fff5ffc7720 objc_exception_throw + 48
2 CoreFoundation 0x00007fff340d3ec1 -[NSException raise] + 9
3 AppKit 0x00007fff315d82a5 -[NSWindow(NSWindow_Theme) _postWindowNeedsToResetDragMarginsUnlessPostingDisabled] + 324
4 AppKit 0x00007fff315d568c -[NSWindow _initContent:styleMask:backing:defer:contentView:] + 1488
5 AppKit 0x00007fff31695edf -[NSPanel _initContent:styleMask:backing:defer:contentView:] + 50
6 AppKit 0x00007fff315d50b6 -[NSWindow initWithContentRect:styleMask:backing:defer:] + 45
7 AppKit 0x00007fff31695e94 -[NSPanel initWithContentRect:styleMask:backing:defer:] + 64
8 AppKit 0x00007fff315d35c7 -[NSWindowTemplate nibInstantiate] + 495
9 AppKit 0x00007fff3154e646 -[NSIBObjectData instantiateObject:] + 267
10 AppKit 0x00007fff3154d9b4 -[NSIBObjectData nibInstantiateWithOwner:options:topLevelObjects:] + 579
11 AppKit 0x00007fff3154bbe1 loadNib + 401
12 AppKit 0x00007fff3154b0a9 +[NSBundle(NSNibLoading) _loadNibFile:nameTable:options:withZone:ownerBundle:] + 696
13 AppKit 0x00007fff3154acee -[NSBundle(NSNibLoading) loadNibNamed:owner:topLevelObjects:] + 204
14 AppKit 0x00007fff31904cc9 -[NSAlert init] + 101
15 iMessage Importer 0x00000001000414a3 $SSo7NSAlertCABycfcTO + 19
16 iMessage Importer 0x00000001000381cf $SSo7NSAlertCABycfC + 31
17 iMessage Importer 0x0000000100044b87 $S17iMessage_Importer0aB0C11getMessagesyyF + 9719
18 iMessage Importer 0x000000010003be41 $S17iMessage_Importer20ImportViewControllerC14importMessages4datey10Foundation4DateV_tF + 193
19 iMessage Importer 0x000000010003bea1 $S17iMessage_Importer20ImportViewControllerC14importMessages4datey10Foundation4DateV_tFTo + 65
20 iMessage Importer 0x000000010003afc8 $S17iMessage_Importer20ImportViewControllerC36importMessagesForAllNonImportedDatesyyF + 4056
21 iMessage Importer 0x000000010003bd64 $S17iMessage_Importer20ImportViewControllerC36importMessagesForAllNonImportedDatesyyFTo + 36
22 iMessage Importer 0x0000000100037ad0 $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU1_ + 32
23 iMessage Importer 0x0000000100037b2d $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU1_TA + 13
24 iMessage Importer 0x0000000100032aad $SIeg_IeyB_TR + 45
25 libdispatch.dylib 0x00000001010df7f3 _dispatch_client_callout + 8
26 libdispatch.dylib 0x00000001010e35f9 _dispatch_block_invoke_direct + 291
27 iMessage Importer 0x0000000100037c8f $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU2_ + 31
28 iMessage Importer 0x0000000100037cac $S17iMessage_Importer20ImportViewControllerC25importAllNotImportedDatesyySo8NSButtonCFyycfU2_TA + 12
29 iMessage Importer 0x0000000100032aad $SIeg_IeyB_TR + 45
30 libdispatch.dylib 0x00000001010de71b _dispatch_call_block_and_release + 12
31 libdispatch.dylib 0x00000001010df7f3 _dispatch_client_callout + 8
32 libdispatch.dylib 0x00000001010e79f7 _dispatch_lane_serial_drain + 789
33 libdispatch.dylib 0x00000001010e88ae _dispatch_lane_invoke + 446
34 libdispatch.dylib 0x00000001010f3f6c _dispatch_workloop_worker_thread + 691
35 libsystem_pthread.dylib 0x000000010115a058 _pthread_wqthread + 409
36 libsystem_pthread.dylib 0x0000000101159e51 start_wqthread + 13
)
libc++abi.dylib: terminating with uncaught exception of type NSException
I'm running Mojave—so possibly that's what's going on? (Have you run this lately?)
Such a cool tool, I truly can't wait to use it.
I"ll look into this. Looks like a label is being set from a background thread
I've just tried it on what is actually a completely new computer, as I got a new laptop between the time I had that issue and now.
I remember seeing this in the readme before:
Make sure Don't Sign Code is the selected setting in Build Settings > Code Signing Identity. When code signing is enabled, it looks like some sandbox restrictions are disabling access to the iMessage DB.
But in the latest version of Xcode, at least, that option doesn't seem to exist:
If I have it sign for ad hoc local use, I get a new error:
If I reset everything and run it without setting anything in that spot, then I don't have the "fatal error" business, but in the console, I basically get the same message described above:
2018-11-04 21:45:24.812172-0800 iMessage Importer[12428:2288923] [default] Unable to load Info.plist exceptions (eGPUOverrides)
[file:///Users/adambrault/Library/]
Attempting to run the import in this condition gives me an error dialog box which says:
FATAL ERROR
unable to open database file (code: 14)
Which, combined with the console output below, indicates that it's probably not accessing that DB and a different setting is needed:
2018-11-04 21:45:24.812172-0800 iMessage Importer[12428:2288923] [default] Unable to load Info.plist exceptions (eGPUOverrides)
[file:///Users/adambrault/Library/]
importing: 2018-10-28 15:00:00 +0000
[file:///Users/adambrault/Library/]
/Users/adambrault//Library/Messages/chat.db
2018-11-04 21:48:03.519350-0800 iMessage Importer[12428:2292103] [logging-persist] cannot open file at line 42249 of [95fbac39ba]
2018-11-04 21:48:03.519385-0800 iMessage Importer[12428:2292103] [logging-persist] os_unix.c:42249: (0) open(/Users/adambrault//Library/Messages/chat.db) - Undefined error: 0
unable to open database file (code: 14)
importing: 2018-10-29 15:00:00 +0000
[file:///Users/adambrault/Library/]
/Users/adambrault//Library/Messages/chat.db
2018-11-04 21:48:03.523456-0800 iMessage Importer[12428:2292103] [logging-persist] cannot open file at line 42249 of [95fbac39ba]
2018-11-04 21:48:03.523468-0800 iMessage Importer[12428:2292103] [logging-persist] os_unix.c:42249: (0) open(/Users/adambrault//Library/Messages/chat.db) - Undefined error: 0
importing: 2018-10-30 15:00:00 +0000
[file:///Users/adambrault/Library/]
/Users/adambrault//Library/Messages/chat.db
2018-11-04 21:48:03.524766-0800 iMessage Importer[12428:2292103] [logging-persist] cannot open file at line 42249 of [95fbac39ba]
2018-11-04 21:48:03.524782-0800 iMessage Importer[12428:2292103] [logging-persist] os_unix.c:42249: (0) open(/Users/adambrault//Library/Messages/chat.db) - Undefined error: 0
importing: 2018-10-31 15:00:00 +0000
[file:///Users/adambrault/Library/]
/Users/adambrault//Library/Messages/chat.db
2018-11-04 21:48:03.525691-0800 iMessage Importer[12428:2292103] [logging-persist] cannot open file at line 42249 of [95fbac39ba]
2018-11-04 21:48:03.525707-0800 iMessage Importer[12428:2292103] [logging-persist] os_unix.c:42249: (0) open(/Users/adambrault//Library/Messages/chat.db) - Undefined error: 0
importing: 2018-11-01 15:00:00 +0000
[file:///Users/adambrault/Library/]
/Users/adambrault//Library/Messages/chat.db
2018-11-04 21:48:03.526637-0800 iMessage Importer[12428:2292103] [logging-persist] cannot open file at line 42249 of [95fbac39ba]
2018-11-04 21:48:03.526652-0800 iMessage Importer[12428:2292103] [logging-persist] os_unix.c:42249: (0) open(/Users/adambrault//Library/Messages/chat.db) - Undefined error: 0
importing: 2018-11-02 15:00:00 +0000
[file:///Users/adambrault/Library/]
/Users/adambrault//Library/Messages/chat.db
2018-11-04 21:48:03.527502-0800 iMessage Importer[12428:2292103] [logging-persist] cannot open file at line 42249 of [95fbac39ba]
2018-11-04 21:48:03.527515-0800 iMessage Importer[12428:2292103] [logging-persist] os_unix.c:42249: (0) open(/Users/adambrault//Library/Messages/chat.db) - Undefined error: 0
importing: 2018-11-03 15:00:00 +0000
[file:///Users/adambrault/Library/]
/Users/adambrault//Library/Messages/chat.db
2018-11-04 21:48:03.528183-0800 iMessage Importer[12428:2292103] [logging-persist] cannot open file at line 42249 of [95fbac39ba]
2018-11-04 21:48:03.528196-0800 iMessage Importer[12428:2292103] [logging-persist] os_unix.c:42249: (0) open(/Users/adambrault//Library/Messages/chat.db) - Undefined error: 0
importing: 2018-11-04 16:00:00 +0000
[file:///Users/adambrault/Library/]
/Users/adambrault//Library/Messages/chat.db
2018-11-04 21:48:03.528894-0800 iMessage Importer[12428:2292103] [logging-persist] cannot open file at line 42249 of [95fbac39ba]
2018-11-04 21:48:03.528906-0800 iMessage Importer[12428:2292103] [logging-persist] os_unix.c:42249: (0) open(/Users/adambrault//Library/Messages/chat.db) - Undefined error: 0
Thanks again for your help @bryan1anderson :)
@adambrault Can you try these steps?
Open System Preferences > Security & Privacy > Full Disk Access
Click +
Locate iMessage Importer and add it
@adambrault the only way I can duplicate these issues is by removing disk access. Would you mind sometime trying out the steps above?
@bryan1anderson I sure appreciate you looking into this. It's on my list to test it and I'll try to do that this weekend.
| gharchive/issue | 2018-09-24T04:05:47 | 2025-04-01T06:38:06.884286 | {
"authors": [
"adambrault",
"bryan1anderson"
],
"repo": "bryan1anderson/iMessage-to-Day-One-Importer",
"url": "https://github.com/bryan1anderson/iMessage-to-Day-One-Importer/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
98620899 | Allow the config_module key to be placed on any config block
Since there is now only one config item for Joken, config_module, it would probably be nice to make it so that the user can define which config block contains the key.
I propose changing the config_module key to joken_module, and adding a using macro that takes an otp_app option which defines which config block to look for the joken_module key in.
Something like below:
# The config block
config :my_app,
  joken_module: My.Config.Module
# Next, tell Joken where to find the config block
use Joken, otp_app: :my_app
# Then, to encode and decode
{:ok, token} = encode_token(%{username: "johndoe"})
{:ok, decoded_payload} = decode_token(jwt)
I made a branch with the change and the only thing I could come up with is having the functions to encode and decode added to the using macro and called directly in the module using it. Since a lot of libraries use encode and decode as function names, in this branch I renamed the functions encode_token and decode_token.
This is nice! Some things I think would be useful:
Perhaps it is good to have the property name as an optional parameter. This way it would be possible to have 2 config modules: one for user login and another for system-to-system authentication. I don't need this use case but I think it will be desired by people who are coding microservices.
We could use an @on_load to cache the generated JOSE header for each configuration. :smiley:
Good points. I think making the name of the config property optional makes sense. Or maybe add a property that points directly to the module instead?
I had to look up @on_load :smile:. It sounds like that could work.
@cs-victor-nascimento I added a PR for this, but I haven't added the bit to customize the property itself yet.
Also I'm not sure I completely understand what should be cached using @on_load. Could you give me an idea of what to do?
The cache idea is that for each config module the first part of a JWT (which describes the algorithm and optional parameters) will always be the same. Say you choose to use HS256. Then it will always be {"alg": "HS256", "typ": "jwt"} or something.
So we can generate the JSON and Base64 representation of those once we have that information. Not sure the on_load will work with that (not 100% sure we can ensure our dependencies are all loaded when the on_load function is executed) and probably it would be better to go all in and define an application module. I mentioned the on_load just to avoid adding another breaking change: people would need to add joken to their list of apps in mix.exs.
What do you think?
Actually I was thinking, and I believe we should have a performance benchmark suite set up before attempting any early optimization. I will open another issue for that.
The caching may work, but I will probably do a separate issue/PR for it. I can probably finish this by adding the ability to customize the config key tonight.
With all the changes going on, the next release will be a big one!
Closing this as it's no longer valid
| gharchive/issue | 2015-08-02T16:43:25 | 2025-04-01T06:38:06.892131 | {
"authors": [
"bryanjos",
"cs-victor-nascimento"
],
"repo": "bryanjos/joken",
"url": "https://github.com/bryanjos/joken/issues/54",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2388636511 | Evaluate to implement other use cases from langgraph tutorial
Original langgraph tutorial contains a lot of interesting examples, I would implements others to promote langgraph4j as first citizen in langchain4j eco-system:
RAG
Agentic RAG
Corrective RAG (CRAG)
Corrective RAG (CRAG) using local LLMs
Self-RAG
Self-RAG using local LLMs
SQL Agent
Agent Architectures
Multi-Agent Systems
Collaboration
Supervision
Hierarchical Teams
Planning Agents
Plan-and-Execute
Reasoning without Observation
LLMCompiler
Reflection & Critique
Basic Reflection
Reflexion
Language Agent Tree Search
Self-Discover Agent
That's really great. When will these RAG and Agent frameworks be combined with Langchain4j in an example? I have been following langchain4j and langgraph4j all along.
Hi @243006306, thanks for your interest.
However, Agent Executor and Adaptive RAG are already available.
This project looks interesting. Currently going through the code base; please let me know if you want me to contribute to anything. Thanks
| gharchive/issue | 2024-07-03T13:28:47 | 2025-04-01T06:38:06.914512 | {
"authors": [
"243006306",
"ArnWEB",
"bsorrentino"
],
"repo": "bsorrentino/langgraph4j",
"url": "https://github.com/bsorrentino/langgraph4j/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1094980527 | Can't load plugins into Ableton 11 or Bitwig 4.1.2
For bug reports
Operating system(s): Windows 10 64-bit 21H2
Version of Rack if using official binary, or commit hash and branch if compiling from source: VCVRack:v2.0.5
All hardware relevant to your issue (e.g. graphic card model, audio/MIDI device): Radeon Pro WX3100, Radeon RX550, SSL 2+ ASIO
Plugins don't show up during the scan, and both Ableton and Bitwig refuse to let me load the .dll files manually; I get a crossed-out circle icon for a mouse cursor when I try this
Both instrument and effect VCVR plugins load fine into VSThostx64, no problems
Tried restarting the machine and running both as admin; still won't load. Tried deleting the preferences files and cache; still won't load
Not sure what to try next. Any suggestions?
Thanks
Version of Rack if using official binary, or commit hash and branch if compiling from source: VCVRack:v2.0.5
for questions regarding VCVRack v2.0.5, please go to https://github.com/VCVRack
Sorry I'm confused, I have the standalone VCVR which works fine no problems there, I'm trying to get these plugins to work inside Bitwig and Ableton.
I think it's 0.6.1? I just downloaded it a few hours ago
your post mentioned v2.0.5, that's what got me confused.
if you are indeed using VCVR (0.6.x), make sure that the entire vst2_bin/ folder is in the host plugin path.
I vaguely recall that someone once resolved a similar issue by not placing the plugin in a deeply nested folder structure (cannot remember which host required that).
I've tested VCVR in both Ableton Live and Bitwig and it worked fine (just re-tested it with the latest Live version 11.0.12).
Well I got them to work inside Bitwig. Was never able to manually drag and drop the .dlls, but after deleting the Bitwig preferences file and Index folder, then running Bitwig as admin, they were both picked up during the plugin scan.
This is the exact same procedure I tried initially; not sure what I did differently this time that made it work, but not complaining!
| gharchive/issue | 2022-01-06T05:24:42 | 2025-04-01T06:38:06.920716 | {
"authors": [
"bsp2",
"loiteringsloth"
],
"repo": "bsp2/VeeSeeVSTRack",
"url": "https://github.com/bsp2/VeeSeeVSTRack/issues/33",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
786236497 | UX Improvements
"bootstrap native" breadcrumbs
"alert" wrapper for "Advanced Considerations"
tweak text related to "advanced" sections
use some "»" chars (for "next steps" prefix and before "Advanced Considerations" in titles)
tweak margins/padding in the base layout to give a little more breathing room
Love the new style, thanks!
| gharchive/pull-request | 2021-01-14T19:00:36 | 2025-04-01T06:38:06.929371 | {
"authors": [
"janoside",
"mflaxman"
],
"repo": "btcguide/btcguide.github.io",
"url": "https://github.com/btcguide/btcguide.github.io/pull/70",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
542407074 | bech32: Encode accepts invalid uppercase HRP
bech32.Encode will accept an uppercase HRP and generate an invalid bech32 encoding instead of normalizing it or returning an error.
BIP 0173 is clear that mixed case encodings are invalid, and that the lowercase encoding should be used for checksum purposes. This means there are two ways to handle an uppercase HRP in Encode:
treat it as lowercase, and ideally return a lowercase encoding
return an error
Instead, this library will use the uppercase values for the checksum and return a mixed case encoding. Decode will correctly reject the mixed case encoding, and will fail the checksum of the lowercase version.
https://play.golang.org/p/h0ekj8VmiPV
I'm no longer active on this project, but for reference for the new maintainers, you guys might want to go ahead and backport the updated version from Decred at https://github.com/decred/dcrd/tree/master/bech32. We originally based it on this implementation from the LL folks, but improved it in many ways, which includes handling what this issue raises properly.
The primary improvements made are:
Much improved efficiency in terms of memory allocations and overall encoding and decoding performance along with benchmarks
Corrected the issue being reported here by automatically converting the HRP to lowercase and improved error handling to catch other potential misuses
Added convenience functions for EncodeFromBase256 and DecodeToBase256 which automatically handles the typical case of converting to base32 with padding before encoding and back to base256 without padding when decoding
Improved the error handling to be more in line with the rest of the code base such that the errors are more descriptive and programmatically detectable
Fleshed out the test coverage to test more corner cases as well as ensure the actual errors that are produced are the expected errors versus just checking that some error happened
Updated the code and documentation to be more consistent
Created and tagged a separate Go module specifically for bech32 (github.com/decred/dcrd/bech32) and thus provide a tighter module with a smaller API surface which results in less notifications of new versions for consumers due to other things not related to bech32 changing
I've slightly modified the code provided by @FiloSottile to show that the improved version works as expected:
https://play.golang.org/p/VlNZprYObxE
package main
import (
"log"
"strings"
"github.com/decred/dcrd/bech32"
)
func main() {
s, err := bech32.EncodeFromBase256("UPPERCASE", []byte("xxx"))
if err != nil {
log.Fatal(err)
}
log.Print("encoded: ", s)
log.Print(bech32.Decode(s))
log.Print(bech32.Decode(strings.ToUpper(s)))
}
Output:
... encoded: uppercase10pu8sss7kmp
... uppercase[15 1 28 7 16] <nil>
... uppercase[15 1 28 7 16] <nil>
| gharchive/issue | 2019-12-26T02:02:44 | 2025-04-01T06:38:06.956828 | {
"authors": [
"FiloSottile",
"davecgh"
],
"repo": "btcsuite/btcutil",
"url": "https://github.com/btcsuite/btcutil/issues/152",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
Error handling for user input
When trying to convert the user's input to an integer there is no error handling, which can crash the program if the user enters a value that cannot be converted
https://github.com/btwhelena/Swift-Challenge/blob/a540f08104cd97cc8b14e196eb9360499bdf14c3/Sources/Hello/main.swift#L174-L177
The input reading could sit inside a while loop and only exit when the value is valid:
var isInputValid = false
var input: String?
while !isInputValid {
    input = readLine()
    // readLine() returns nil on EOF, so unwrap it safely before converting
    if let text = input, Int(text) != nil {
        isInputValid = true
    } else {
        print("Please enter an integer value")
    }
}
Thanks for the suggestion, Lais! I'll implement it as soon as possible ✨
| gharchive/issue | 2022-03-30T16:52:41 | 2025-04-01T06:38:06.975871 | {
"authors": [
"btwhelena",
"laisbastosbg"
],
"repo": "btwhelena/Swift-Challenge",
"url": "https://github.com/btwhelena/Swift-Challenge/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
How can I train to get 77 mAP?
I'd like to ask how to train to get the 77 mAP weights on VOC that you provided. Could you share the training strategy? In your code I commented out the loading of the pretrained file and trained the plain ResNet network for 100 epochs via train.py, but the results were very poor. How can I train to reproduce your VOC results?
With pretrain=True you can use the backbone's pretrained weights.
With pretrain=True you can use the backbone's pretrained weights.
Same question, have you solved it?
I'd like to ask how to train to get the 77 mAP weights on VOC that you provided. Could you share the training strategy? In your code I commented out the loading of the pretrained file and trained the plain ResNet network for 100 epochs via train.py, but the results were very poor. How can I train to reproduce your VOC results?
Same question, have you solved it?
Why is my answer in the middle being ignored…
Uh, mainly because I tried it: with pretrain=True I also only get around 30% mAP like this.
Post screenshots; screenshot the training strategy and the loss as well.
截图,训练策略也截图,loss也截图
Train
Train one epochs
Loss
Dataloader
DataLoader 和 Loss基本按照你的方式写的。训练策略稍有改动,但我感觉Loss和data都一样的情况,应该没啥问题。
你咋改了这么多…
是呀,跟着你的讲解自己又重新加了一点。
你这个改的有点多啊……我看l1_loss里面连permute都删了……我不知道具体原因。
找个我是在外面进行了permute,维度顺序是一致的,不然也没法计算。不过我想和你确认的是,你是基于resnet50 backbone的预训练权重,然后fine-tune就可以在VOC上达到77的MAP吗?那此时的val loss是多少呢?我也是采用resnet的backbone预训练权重,也没法达到这个77,而且val loss达到2.5的时候,就会反向上升(虽然我看了下原作者的repo说hm loss是正常的)。模型确实是学到东西了,直接测训练集Map是89,但验证集只有40多。很难受。。。
当然,上传前我已经测试过了,100epoch大概到76.多,之前是77多,因为之前学习率较大,现在增大epoch后也可以77。
你是我提供的数据集么, 我是freeze batch为32,unfreeze batch16
我是用voc2012的,训练集15000多张,验证集1700多张。我用同样的数据集,训练你最新的repo,上周五下载的。map只有28。。。
下次问问题说清楚情况。
VOC2012的数据集你用labelimg打开就知道,里面有些标注是不完全的,因为有一些不是拿来目标检测的。
可以,那我试试VOC2007
是07+12,我都提供了为啥不用呢
单07训练集就5000张,效果又不一样
是07+12,我都提供了为啥不用呢
你这个是自己改过标注的吗?还是整理过官方的呢
https://www.bilibili.com/read/cv10239076?spm_id_from=333.999.0.0
谢谢耐心解答,麻烦了。
你好,您的邮件我已收到。我将在近期查看,尽快给你回复。
| gharchive/issue | 2021-01-25T05:14:10 | 2025-04-01T06:38:06.989330 | {
"authors": [
"Runist",
"ZxSnow",
"bubbliiiing"
],
"repo": "bubbliiiing/centernet-pytorch",
"url": "https://github.com/bubbliiiing/centernet-pytorch/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
sortedlist: try it twice
{ "hash": "0x39c767e230f1cc4d8fa7baa4ef8c39bc2e4add8680d09bfe086e1efdaa0d6437", "score": 290 }, // 10
{ "hash": "0x39c767e230f1cc4d8fa7baa4ef8c39bc2e4add8680d09bfe086e1efdaa0d6438", "score": 290 }, // 10
See which one goes in first.
According to the current implementation, two items with the same score are sorted in order of insertion, from first to last. If the inserted item has the same score as the last one in the list, the insertion will be considered invalid.
I updated the test case to cover inserting two items that have the same score; they will be ranked by their insertion order.
| gharchive/issue | 2023-12-20T22:19:56 | 2025-04-01T06:38:06.993855 | {
"authors": [
"fulldecent",
"weiqiushi"
],
"repo": "buckyos/DCRM",
"url": "https://github.com/buckyos/DCRM/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
313724601 | code splitting and uploading all source maps
Hi,
In my react app I am using code splitting (react-loadable) to reduce the bundle size and serve them on demand whenever the respective route is loaded. Using this technique I am getting about 53 chunks and a main file (.js and .map).
How can I upload all these source maps to identify the error stack trace correctly.
regards
Aman
Hi @amanthegreatone. Are you using webpack? If so you should take a look at our webpack plugins module which can upload sourcemaps:
webpack-bugsnag-plugins (docs)
If you're not using webpack, you can make use of the underlying JS API/CLI tool:
bugsnag-sourcemaps (docs)
I see!
So create-react-app does use webpack for you but it hides the details away, and as far as I can tell doesn't let you add plugins.
Our webpack plugin would do exactly what you want – it iterates over all the chunks/maps and uploads them all. But it seems like your only option would be to "eject" from create-react-app in order to modify the webpack config yourself. But you probably don't want to do that!
Alternatively you can use bugsnag-sourcemaps, iterating over all of the generated source maps using bash or javascript. Here's a sketch of what I mean (in JS):
const { upload } = require('bugsnag-sourcemaps')
const glob = require('glob')
// find all of the map files in ./dist
glob('dist/**/*.map', (err, files) => {
if (err) throw err
// process each .map file
Promise.all(files.map(processMap))
})
// returns a Promise which uploads the source map with accompanying sources
function processMap (sourceMap) {
// remove .map from the file to get the js filename
const minifiedFile = sourceMap.replace('.map', '')
// remove the preceding absolute path to the static assets folder
const minifiedFileRelativePath = minifiedFile.replace(`${__dirname}/dist/`, '')
// call bugsnag-sourcemaps upload()
return upload({
apiKey: 'YOUR_API_KEY_HERE',
appVersion: '1.2.3',
minifiedUrl: `http*://your-domain.app/path/to/assets/${minifiedFileRelativePath}`,
sourceMap,
minifiedFile,
    projectRoot: __dirname,
uploadSources: true
})
}
I didn't run or test this so you will undoubtedly need to tweak paths/urls for your setup and play with the arguments to bugsnag-sourcemaps. This also rather crudely will fire off all the uploads concurrently. See webpack-bugsnag-plugins for an example of how to limit the concurrency of that.
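One crude way to avoid firing them all at once is to chain the uploads sequentially (a sketch, reusing the processMap function above):
// upload maps one at a time instead of all at once
function processSequentially (files) {
  return files.reduce(
    (prev, file) => prev.then(() => processMap(file)),
    Promise.resolve()
  )
}
// then inside the glob callback:
//   processSequentially(files).then(() => console.log('all maps uploaded'))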
Feel free to continue on this thread if you have any further questions, or alternatively email support@bugsnag.com where we can dig into your specific project and have a bit more context. We can also then share more detail than we would be able to on this public issue tracker.
Thanks!
Thanks @bengourley.
I will try your suggestions and update.
@bengourley as per your suggestion I followed the same instructions and uploaded the chunk.js.map files to bugsnag. Here is my code.
const upload = require('bugsnag-sourcemaps').upload;
const glob = require('glob');
const appVersion = require('./package.json').version;
const bugsnagKey = require('./src/config/env').bugsnagKey;
const path = require('path');
glob('source-maps/*.js.map', (err, files) => {
if (err) throw err;
Promise.all(files.map(processMap));
});
const processMap = sourceMap => {
const minifiedFileName = sourceMap.split('/')[1].split('.map')[0]; //just extracting the file name.
return upload({
apiKey: bugsnagKey,
appVersion: appVersion,
minifiedUrl: `http://myAppWebsite.com/static/js/${minifiedFileName}`,
sourceMap,
minifiedFile: `${__dirname}/build/static/js/${minifiedFileName}`,
projectRoot: __dirname,
uploadSources: true,
overwrite: true,
});
};
The files are all uploaded and everything works. But the problem is the maps and chunks are not being mapped: when an error is raised, in the stack trace I still get the chunk code rather than the original code.
Hoping you could point out where I am going wrong or what the error really is.
Thanks.
Hi @pavan-syook. I took a look at your account and it seems you are uploading everything correctly 👍
There is just one minor problem which you need to resolve. For your source maps, you are providing a value for appVersion (it's currently 0.1.0), but events coming in from your notifier have no appVersion. What our system does is look for an uploaded source map matching the url and app version, and since the app version is different (undefined vs. 0.1.0) it doesn't find your source map.
All you need to do is make sure the app version is set in your notifier and kept in sync with the version of your app. Then we'll be able to use the source maps you've uploaded to show the original sources. Hope this helps!
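For example, with bugsnag-js that's roughly (a sketch; reading the version from package.json is just one way to keep it in sync):
const bugsnag = require('@bugsnag/js')
const bugsnagClient = bugsnag({
  apiKey: 'YOUR_API_KEY_HERE',
  appVersion: require('./package.json').version // e.g. '0.1.0'
})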
@bengourley Thanks for the quick update. will do that and let you know if I face any issues.
Hi @bengourley. We are also using code splitting in our React web application (groww.in), and in our case the number of chunks is more than 80. All chunks' source map files are uploaded to Bugsnag for every app release. So my question is: how many source map files can be uploaded to Bugsnag? Is there any restriction on the number of files?
Nope, no restriction. As many as you need.
@bengourley Thanks for the reply.
| gharchive/issue | 2018-04-12T13:21:54 | 2025-04-01T06:38:07.030191 | {
"authors": [
"Shiv-Dangi",
"amanthegreatone",
"bengourley",
"pavan-syook"
],
"repo": "bugsnag/bugsnag-js",
"url": "https://github.com/bugsnag/bugsnag-js/issues/339",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
557429558 | RNCNetInfo.getCurrentState got 3 arguments, expected 2.
Environment info:
System:
  OS: Windows 10 10.0.17763
  CPU: (8) x64 Intel(R) Core(TM) i5-8350U CPU @ 1.70GHz
  Memory: 1.06 GB / 7.88 GB
Binaries:
  Node: 8.16.1 - C:\Program Files\nodejs\node.EXE
  npm: 6.4.1 - C:\Program Files\nodejs\npm.CMD
SDKs:
  Android SDK:
    API Levels: 21, 22, 23, 24, 25, 26, 27, 28, 29
    Build Tools: 26.0.2, 28.0.3, 29.0.2
    System Images: android-24 | Google Play Intel x86 Atom, android-28 | Google Play Intel x86 Atom
IDEs:
  Android Studio: Version 3.5.0.0 AI-191.8026.42.35.6010548
npmPackages:
  react: ~16.9.0 => 16.9.0
  react-native: https://github.com/expo/react-native/archive/sdk-36.0.0.tar.gz => 0.61.4
Facing this problem in an Expo project.
import { AppState } from 'react-native';
import NetInfo from '@react-native-community/netinfo';
Using these two libraries.
@mattdyoung: I have not used bugsnag-js in the current project. I am using it in an Expo project.
All the information is mentioned above, and the JS code is attached too.
@SanjanaTailor
You've raised this as an issue with bugsnag-js. If you're not using bugsnag-js in your project I don't think there's any issue for the maintainers of bugsnag-js to resolve here.
If you are using bugsnag-js can you provide a reproducible example of the issue bugsnag-js is causing using the latest version v6.5.1.
Please reopen: we need to know the correct version of @react-native-community/netinfo that can be used with SDK 35 too.
@react-native-community/netinfo - expected version range: ~3.2.1 - actual version installed: ^5.5.1
@react-native-community/netinfo - expected version range: ~3.2.1 - actual version installed: 4.6.2
Both crash with RNCNetInfo.getCurrentState got 3 arguments, expected 2.
Hi @lc3t35 - if you have a React Native project, we would recommend using our React Native notifier: https://github.com/bugsnag/bugsnag-react-native
Are you currently using bugsnag-js in your project? what version of bugsnag-js are you using?
I'm using expo sdk35 with "@bugsnag/expo": "6.4.1" for dev and "bugsnag-react-native": "^2.23.2" for production.
Hey @lc3t35, the bugsnag-react-native library should not be used with Expo apps. Additionally, a fix was released that fixes an issue that sounds exactly like this. This was released in v6.5.1 of our bugsnag-js library. Can you please try a version that is at least v6.5.1 to see if the issue persists?
Yes, as an ejected Expo app, I use bugsnag-react-native ("bugsnag-react-native": "^2.23.6").
But for staging/dev, I would like to use https://docs.bugsnag.com/platforms/react-native/expo/ too. What do you suggest ?
I had to remove bugsnag-react-native and @bugsnag/expo so I can keep working on my project; I can't stay stuck... QA is waiting for updates... so no error catching for now...
I expect to update to sdk36 soon, so I'll check again.
@lc3t35 Have you confirmed that you still see this issue with the latest @bugsnag/expo which includes the fix @xander-jones refers to?
https://github.com/bugsnag/bugsnag-js/releases
installing version 3.2.1 worked for me
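For reference, the corresponding package.json pin would look something like this (a sketch; match the range your Expo SDK actually expects):

"dependencies": {
  "@react-native-community/netinfo": "~3.2.1"
}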
| gharchive/issue | 2020-01-30T11:23:07 | 2025-04-01T06:38:07.041568 | {
"authors": [
"Ashwini-ap",
"SanjanaTailor",
"lc3t35",
"mattdyoung",
"phillipsam",
"xander-jones"
],
"repo": "bugsnag/bugsnag-js",
"url": "https://github.com/bugsnag/bugsnag-js/issues/719",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
58300908 | Rename req.host to req.hostname
Just upgraded to Bugsnag v1.6.0, but express v4.11.2 started logging this warning:
express deprecated req.host: Use req.hostname instead node_modules/bugsnag/lib/request_info.js:6:46
This should fix this!
Original pull request broke on old versions of express. Just fixed that.
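Presumably the compatibility fix amounts to preferring the new accessor with a fallback for older Express versions (a sketch, not the literal patch):

var host = request.hostname || request.host; // hostname on Express 4+, host on Express 3 and earlier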
Awesome, thanks!
| gharchive/pull-request | 2015-02-20T01:59:12 | 2025-04-01T06:38:07.043737 | {
"authors": [
"ConradIrwin",
"paton"
],
"repo": "bugsnag/bugsnag-node",
"url": "https://github.com/bugsnag/bugsnag-node/pull/51",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
672786287 | Sourcemap path in case of android flavor builds
I have created flavor builds in the android side of the React Native app. Earlier there was no flavor builds
So I was uploading the source map using CI like this
- run:
    name: Upload sourcemaps to Bugsnag
    command: |
      if [[ $BUGSNAG_KEY ]]; then
        yarn generate-source-maps-android upload \
          --api-key=$BUGSNAG_KEY \
          --app-version=$CIRCLE_BUILD_NUM \
          --minifiedFile=android/app/build/generated/assets/react/release/app.bundle \
          --source-map=android/app/build/generated/sourcemaps/react/release/app.bundle.map \
          --minified-url=app.bundle \
          --upload-sources
      fi
But now the app is divided into two flavor builds (play and foss) and Bugsnag is available only in play build.
I am getting this error as the source map path needs to be updated.
[error] Error uploading source maps: Error: Source map file does not exist (android/app/build/generated/sourcemaps/react/release/app.bundle.map)
at /home/********/repo/node_modules/bugsnag-sourcemaps/lib/options.js:141:17
Can anyone help me in determining what will be the updated path of source map in case of multiple flavors in the app? Thanks in advance.
PS: I have gone through the docs but didn't find anything related to this scenario.
Hi @GOVINDDIXIT
Are you using the Hermes JS engine on Android in your React Native app? If you're not using Hermes you can just follow our standard instructions here to upload via the API without bugsnag-sourcemaps:
https://docs.bugsnag.com/platforms/react-native/react-native/showing-full-stacktraces/#uploading-source-maps-to-bugsnag
If Hermes is enabled then using bugsnag-sourcemaps is currently the only upload method we support.
Where does your flavor build output the .bundle and .bundle.map files? Have you tried manually running the build and searching for the files? Is it just the path that's wrong in your upload command?
Hi @mattdyoung
Thanks for the quick response. Yes, I have got the correct path by building source maps locally and the issue is now solved. If possible I would suggest adding a specific section for the android flavor builds in official docs.
After adding flavors, the updated paths for the minified file and source map are
--minifiedFile=android/app/build/generated/assets/react/play/release/app.bundle
--source-map=android/app/build/generated/sourcemaps/react/play/release/app.bundle.map
In general
--minifiedFile=android/app/build/generated/assets/react/{flavor_name}/release/app.bundle
--source-map=android/app/build/generated/sourcemaps/react/{flavor_name}/release/app.bundle.map
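Putting that together, the CI step from above becomes something like this for the play flavor (a sketch based on the paths in this thread):

- run:
    name: Upload sourcemaps to Bugsnag
    command: |
      if [[ $BUGSNAG_KEY ]]; then
        yarn generate-source-maps-android upload \
          --api-key=$BUGSNAG_KEY \
          --app-version=$CIRCLE_BUILD_NUM \
          --minifiedFile=android/app/build/generated/assets/react/play/release/app.bundle \
          --source-map=android/app/build/generated/sourcemaps/react/play/release/app.bundle.map \
          --minified-url=app.bundle \
          --upload-sources
      fi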
@GOVINDDIXIT
Thanks for letting us know! We're looking into various docs improvements currently so we'll consider this clarification.
| gharchive/issue | 2020-08-04T13:17:48 | 2025-04-01T06:38:07.049707 | {
"authors": [
"GOVINDDIXIT",
"mattdyoung"
],
"repo": "bugsnag/bugsnag-react-native",
"url": "https://github.com/bugsnag/bugsnag-react-native/issues/473",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1514757688 | Use global default shell in release-tag.yml workflow
To remove the need to repeat the default shell declaration in our workflow, we should set a global default so that it is available across all jobs in https://github.com/build-trust/ockam/blob/cdf925aa2adb4439061449e5ea5f9ad774833e20/.github/workflows/release-tag.yml#L1
More here https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#defaults
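For illustration, a top-level defaults block like the following applies to every job in the workflow unless a job or step overrides it (the shell value here is an assumption; use whatever release-tag.yml currently repeats per job):

defaults:
  run:
    shell: bash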
Closing this now. Thanks @rghdrizzle
| gharchive/issue | 2022-12-30T19:28:28 | 2025-04-01T06:38:07.052955 | {
"authors": [
"metaclips"
],
"repo": "build-trust/ockam",
"url": "https://github.com/build-trust/ockam/issues/4005",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
137796424 | Bootstrap script doesn't fetch correctly with Git < v1.9
TL;DR: In version 2.1.5, the bootstrap script runs git fetch origin --tags to update all heads and tags. In some older versions of Git, that command will only update the tags, which can cause the subsequent checkout to fail.
We upgraded to version 2.1.5 on Ubuntu 12.04 LTS and the new bootstrap script failed. The problematic command is git fetch origin --tags. It seems to have been introduced in #243.
The relevant comment in the bootstrap script reads: "we fall back to fetching all heads and tags, hoping that the commit is included." With Git 1.9 and later, git fetch origin --tags will indeed fetch all heads and tags, assuming the remote is configured in the usual way. With Ubuntu 12.04 LTS's version of Git (v1.7.9.5), the command will run cleanly – that is, it will give a successful exit code and won't print error messages – but it will fetch only the tags. As a result, the git checkout command that follows it may fail and complain about "bad object".
We worked around this issue by adding a custom checkout hook that uses the command git fetch origin --tags +refs/heads/*:refs/remotes/origin/*.
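As a sketch, such a hook looks roughly like this (the hook location and the checkout step are assumptions; adapt to your agent configuration):

#!/bin/bash
set -euo pipefail
# explicit refspec so heads are updated even on Git < 1.9,
# where `git fetch origin --tags` only updates tags
git fetch origin --tags "+refs/heads/*:refs/remotes/origin/*"
git checkout -f "$BUILDKITE_COMMIT"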
#250 may have fixed this.
Yep, you're right, this is fixed in #250. Thanks for the report!
| gharchive/issue | 2016-03-02T07:27:58 | 2025-04-01T06:38:07.058434 | {
"authors": [
"ab9",
"toolmantim"
],
"repo": "buildkite/agent",
"url": "https://github.com/buildkite/agent/issues/249",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
321134627 | Docker images docs
I have few suggestions about docker images:
There should be docs on Docker Hub that explicitly state the base image for each tag, or at least a way to determine which base image was used.
Agent Docker images should have appropriate tag naming for Ubuntu, Alpine, etc. Take a look at the python repo on Docker Hub.
buildkite/agent:3.1.1-ubuntu
buildkite/agent:3.1.1-alpine
buildkite/agent:3.1.1-alpine3.6
etc..
You can still say that the default image is Alpine, so buildkite/agent:3.1.1 and buildkite/agent:3.1.1-alpine are just aliases for the same image.
IMO it's a much more convenient approach.
Why?
Sometimes it's useful to modify an existing image (in my case I need to add envsubst to the image). And it was quite confusing for me when the Dockerfile on the Docker Hub page used ubuntu:14.04 while agent:latest was based on Alpine.
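For what it's worth, once you know the base image, the extension is a two-line Dockerfile (tag chosen for illustration, assuming the Alpine base this thread mentions; on Alpine, envsubst ships in the gettext package):

FROM buildkite/agent:3.1.1
RUN apk add --no-cache gettext   # provides envsubst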
Yup, good idea @ksanderer, we're going through a bit of a transition with how we build and manage docker images, so still trying to figure out best practice and how to work around some of the clunky aspects of docker hub.
| gharchive/issue | 2018-05-08T10:44:48 | 2025-04-01T06:38:07.062896 | {
"authors": [
"ksanderer",
"lox"
],
"repo": "buildkite/agent",
"url": "https://github.com/buildkite/agent/issues/761",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
151009758 | Add support for other aws regions
It would be nice to launch these in e.g. us-west-2 in my case.
@mikekap @deoxxa this is now live; you can now use the stack in:
us-west-1
us-west-2
eu-west-1
eu-central-1
ap-northeast-1
ap-northeast-2
ap-southeast-1
ap-southeast-2
sa-east-1
Excellent! Thank you!
| gharchive/issue | 2016-04-25T23:59:00 | 2025-04-01T06:38:07.065677 | {
"authors": [
"deoxxa",
"mikekap",
"toolmantim"
],
"repo": "buildkite/buildkite-aws-stack",
"url": "https://github.com/buildkite/buildkite-aws-stack/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
603957345 | Make h3's linkable, and provide the ability for id and class overrides
Makes all the h3 elements linkable by adding an automatic id, and lets you customize the id and class.
For example, the following markdown:
## A long complicated section description
{: id="short-id"}
### Subsection 1
## Section 2
### Subsection 2
generates:
<h2 id="short-id">A long complicated section description</h2>
<h3 id="short-id-subsection-1">Subsection 1</h3>
<h2 id="section-2">Section 2</h2>
<h3 id="section-2-subsection-2">Subsection 2</h3>
This means we can now link to individual environment variables, and instead of #buildkite-environment-variables-buildkite-agent-pid we can have #bk-env-vars-buildkite-agent-pid for example.
Fixes #708
Everything here is up for grabs, so if you’ve got any preferred syntax changes or anything, let me know!
Ace! Yeah, I was hoping we could use the class/id for a bunch of things.
I'm happy with this as is, unless you fancy testing for ID clashes when just converting H3s without prepending the H2 to them? But it is probably not worth the time.
Yeah, I couldn’t think of a good algorithm. Would it be just adding -2, -3 etc to the end of the ids that clash? So as you progress down the page?
Also I didn’t add the hover link doohicky that h2’s have. Should we?
ooh, I didn't notice the lack of doohicky! I reckon add it, saves from having to "view source" :-p. Please and thank you.
On the clash front, if there are a few dozen or so site wide, I'd be happy to add manual overrides to all of them, and just put up with an ID clash breaking the build for new content. What do you think?
> ooh, I didn't notice the lack of doohicky! I reckon add it, saves me from having to "view source" :-p. Please and thank you.
All done!
> On the clash front, if there are a few dozen or so site wide, I'd be happy to add manual overrides to all of them, and just put up with an ID clash breaking the build for new content. What do you think?
I like the idea, but I think I'm keen to stick with the current method for now — 90% because the code is mostly ready, and 10% because of getting my head around dealing with the clashes. Let's stick with the nesting method we have now, and revisit if we're not feeling it's working well.
:100:
| gharchive/pull-request | 2020-04-21T12:26:04 | 2025-04-01T06:38:07.072317 | {
"authors": [
"plaindocs",
"toolmantim"
],
"repo": "buildkite/docs",
"url": "https://github.com/buildkite/docs/pull/710",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
410498119 | Set individual environment variables for build
pack allows setting build-time environment variables via a file, but not as individual values on the CLI. This PR adds the ability to set one (or more) environment variables directly as CLI switches.
The same semantics are preserved from --env-file, where a value can be forwarded from the current environment if a variable name is specified without a value. The new flags can be used to augment or override specific values from a file.
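In practice that allows invocations along these lines (illustrative only; names and values are placeholders):

pack build myapp --env FOO=bar --env HOME --env-file ./build.env

Here FOO is set explicitly, HOME is forwarded from the current environment, and build.env supplies the rest, with the --env flags taking precedence over the file.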
The BuildFlags struct now includes an Env field in addition to EnvFile. The BuildConfig struct EnvFile field has been renamed to Env for consistency as the field contains the parsed values and not a file.
Refs buildpack/roadmap#25
A little background context.
riff currently uses a riff.toml file to pass configuration to function buildpacks. This file is generated based on values provided to the riff cli. Since this config file is not actually part of the project, it's awkward to write a file to the filesystem, do some work and then remove that file. It would be nicer to specify this config as environment variables.
Merged master and fixed a conflict with build config changes.
cc @ekcasey
@scothis I was under the impression that riff was importing github.com/buildpack/pack as a library using the pack.Build(...) function to run builds. In which case, I would imagine the correct solution for this problem is to add a map of env vars as a param to pack.Build. In that case riff should be able to construct that map as desired (no file required).
Is riff shelling out to the pack cli? I am not opposed to supporting a --env flag but to better enable library consumers it's important to us to know where users are integrating. In the near future we plan to make everything except a deliberately exposed library internal and thus not importable.
@ekcasey yes, riff embeds pack rather than shelling out. On its own, this PR is not enough to fully solve riff's needs, but it provides a large step towards scratching that itch. I'll open another PR that takes a baby step towards creating a programmatic API that exposes the same capabilities.
This PR is updated to resolve conflicts.
@scothis Thanks for the explanation. If we think CLI users will value having a --env flag in addition to an --env-file flag, this is a good step. I think the only thing missing here is acceptance test coverage. If we are exposing a new flag we should add coverage in acceptance/acceptance_test.go, just like we do with --env-file.
@ekcasey added an acceptance test and manually verified that all the tests are actually passing
| gharchive/pull-request | 2019-02-14T21:22:48 | 2025-04-01T06:38:07.088690 | {
"authors": [
"ekcasey",
"scothis"
],
"repo": "buildpack/pack",
"url": "https://github.com/buildpack/pack/pull/99",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1274144466 | enable terse option for 'buildtest buildspec find invalid'
The two files that i edited was "tests/cli/test_buildspec.py" and "buildtest/cli/buildspec.py". I added a test to the test_buildspec.py and added some code to allow for the terse option in buildspec.py
It seems Black Formatter check is not passed.
@JeffreyFG please run buildtest style -a which will run all the style checks that should fix the black issue. Please see https://buildtest.readthedocs.io/en/devel/contributing/code_contribution_guide.html#running-stylechecks-via-buildtest-stylecheck. I would recommend you setup pre-commit with black and isort see https://buildtest.readthedocs.io/en/devel/contributing/code_contribution_guide.html#configuring-black-pre-commit-hook. This will run black and isort during git commit even if you forget to run buildtest style
@JeffreyFG I have tested this locally
This is the exception
(buildtest) ~/Documents/github/buildtest/ [JeffreyFG-Issue-1092] buildtest bc find --terse invalid --error
The --terse flag can not be used with the --error option
This is with terse option and without header
(buildtest) ~/Documents/github/buildtest/ [JeffreyFG-Issue-1092] buildtest bc find --terse invalid
buildspec
/Users/siddiq90/Documents/github/buildtest/tutorials/invalid_buildspec_section.yml
/Users/siddiq90/Documents/github/buildtest/tutorials/invalid_tags.yml
/Users/siddiq90/Documents/github/buildtest/tutorials/burstbuffer_datawarp_executors.yml
(buildtest) ~/Documents/github/buildtest/ [JeffreyFG-Issue-1092] buildtest bc find --terse -n invalid
/Users/siddiq90/Documents/github/buildtest/tutorials/invalid_buildspec_section.yml
/Users/siddiq90/Documents/github/buildtest/tutorials/invalid_tags.yml
/Users/siddiq90/Documents/github/buildtest/tutorials/burstbuffer_datawarp_executors.yml
This is the table format
(buildtest) ~/Documents/github/buildtest/ [JeffreyFG-Issue-1092] buildtest bc find invalid
Invalid Buildspecs
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Buildspec ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ /Users/siddiq90/Documents/github/buildtest/tutorials/invalid_buildspec_section.yml │
└────────────────────────────────────────────────────────────────────────────────────┘
This is the implementation of --error
(buildtest) ~/Documents/github/buildtest/ [JeffreyFG-Issue-1092] buildtest bc find invalid --error
─────────────────────────────────────────────────────────────────────────── /Users/siddiq90/Documents/github/buildtest/tutorials/invalid_buildspec_section.yml ────────────────────────────────────────────────────────────────────────────
"'[/Users/siddiq90/Documents/github/buildtest/tutorials/invalid_buildspec_section.yml]: type badscript is not known to buildtest.'"
────────────────────────────────────────────────────────────────────────────────── /Users/siddiq90/Documents/github/buildtest/tutorials/invalid_tags.yml ──────────────────────────────────────────────────────────────────────────────────
"['network', 'network'] is not valid under any of the given schemas\n\nFailed validating 'oneOf' in schema['properties']['tags']:\n {'oneOf': [{'type': 'string'},\n {'$ref': '#/definitions/list_of_strings'}]}\n\nOn instance['tags']:\n ['network', 'network']"
───────────────────────────────────────────────────────────────────────── /Users/siddiq90/Documents/github/buildtest/tutorials/burstbuffer_datawarp_executors.yml ─────────────────────────────────────────────────────────────────────────
"'create_burst_buffer_multiple_executors' is too long\n\nFailed validating 'maxLength' in schema['properties']['buildspecs']['propertyNames']:\n {'maxLength': 32, 'pattern': '^[A-Za-z_.-][A-Za-z0-9_.-]*$'}\n\nOn instance['buildspecs']:\n 'create_burst_buffer_multiple_executors'"
| gharchive/pull-request | 2022-06-16T22:28:28 | 2025-04-01T06:38:07.101727 | {
"authors": [
"JeffreyFG",
"Xiangs18",
"shahzebsiddiqui"
],
"repo": "buildtesters/buildtest",
"url": "https://github.com/buildtesters/buildtest/pull/1101",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1233844202 | 🛑 heartMap is down
In 6e1cf1e, heartMap (https://heartmap.buligadragos.work/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: heartMap is back up in 75f2a5e.
| gharchive/issue | 2022-05-12T11:29:42 | 2025-04-01T06:38:07.107941 | {
"authors": [
"buligadragos"
],
"repo": "buligadragos/UpTime",
"url": "https://github.com/buligadragos/UpTime/issues/186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1999519850 | Add CI job for testing against the starter repo
This makes it so that the GitHub Actions CI pipeline will:
Checkout the starter repo. We'll look for a branch on the starter repo that matches the name of the branch being tested. If we can't find one we'll test against the main branch.
Checkout the core repo, also looking for a matching branch.
Alter the Gemfile in the starter repo to point to the local branch of jbuilder-schema and the core gems (see the sketch after this list).
Run the Minitest suite of the starter repo
~Run the Super Scaffolding suite of the starter repo~ Edit: I'm not sure it's useful to run the Super Scaffolding tests, so I'm skipping it for now.
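A minimal sketch of that Gemfile tweak, using Bundler's path option (the relative paths and the core gem entry are placeholders for wherever CI checks the branches out):

# point the starter repo's Gemfile at the locally checked-out branches
gem "jbuilder-schema", path: "../jbuilder-schema"
gem "bullet_train", path: "../bullet_train-core/bullet_train"   # hypothetical path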
@newstler This should be good to go for running tests against the starter repo. There's one test failing on this PR related to https://github.com/bullet-train-co/jbuilder-schema/issues/74 but this doesn't change any shipped code so it should be safe to merge.
@jagthedrummer How urgent is this? I would wait until #74 is resolved if possible (we're looking for a solution with @kaspth) so we can release the next versions with green tests.
@newstler not super urgent. I just figured that we should get this going sooner rather than later since we kept finding issues after a release instead of before.
@newstler looks like #74 is now resolved, so I rebased this branch and now it looks like everything is good to go.
@jagthedrummer Released this as v2.6.7 so should be good to check in BT gems without updating the minor version number.
| gharchive/pull-request | 2023-11-17T16:39:12 | 2025-04-01T06:38:07.112489 | {
"authors": [
"jagthedrummer",
"newstler"
],
"repo": "bullet-train-co/jbuilder-schema",
"url": "https://github.com/bullet-train-co/jbuilder-schema/pull/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
685214884 | Menu with options
Is your feature request related to a problem? Please describe.
It isn't clear how to make a menu that contains selectable options, or whether there is a component for it yet.
Describe the solution you'd like
If it's already possible with the current components, we could add an example for it in the docs. If it's not possible with the current set of components, we could consider adding a component that handles this use-case.
I would say SelectMenu might be what you are looking for... However, it doesn't have the ability to group sections yet...
SelectMenu acts more like an input.
I was thinking more of a menu that works with a Button like the chakra-UI Menu
Yeah true. Reakit exports a MenuItemRadio & MenuItemCheckbox component. So we could definitely make use of them. https://reakit.io/docs/menu/#menu-bar
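Something along these lines, going by the Reakit docs (a sketch; Bumbag's eventual wrapper API may differ):

import React from 'react'
import { useMenuState, Menu, MenuButton, MenuItemCheckbox } from 'reakit/Menu'

function PreferencesMenu() {
  const menu = useMenuState()
  return (
    <>
      <MenuButton {...menu}>Preferences</MenuButton>
      <Menu {...menu} aria-label="Preferences">
        {/* checked state is tracked by the menu state under the given name */}
        <MenuItemCheckbox {...menu} name="accessibility">
          Accessibility
        </MenuItemCheckbox>
      </Menu>
    </>
  )
}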
Added in 1.2.0
| gharchive/issue | 2020-08-25T06:30:44 | 2025-04-01T06:38:07.126514 | {
"authors": [
"hazem3500",
"jxom"
],
"repo": "bumbag/bumbag-ui",
"url": "https://github.com/bumbag/bumbag-ui/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
902106862 | D3 Plugin Pattern
Use the "Let's Make a (D3) Plugin" pattern for consistent results.
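That pattern is essentially a closure with chainable getter/setter accessors; a generic sketch (not this chart's actual API):

function orgChart () {
  var width = 800 // default, overridable via the accessor below

  function chart (selection) {
    // render the chart into each selected container using the current settings
  }

  chart.width = function (value) {
    if (!arguments.length) return width // acts as a getter when called with no args
    width = value
    return chart // returning the chart enables chaining
  }

  return chart
}

// usage: d3.select('#root').call(orgChart().width(1024))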
Closing - #35 duplicate
| gharchive/issue | 2021-05-26T09:41:30 | 2025-04-01T06:38:07.127592 | {
"authors": [
"bumbeishvili"
],
"repo": "bumbeishvili/d3-organization-chart",
"url": "https://github.com/bumbeishvili/d3-organization-chart/issues/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Last commit 3 years ago
I find it a real shame that this repo is no longer maintained…
Superset of #54
If there are still sources on the web that are kept up to date, we could write a script to download them from there and possibly convert them to Markdown. @UgnilJoZ Do you have an idea?
@hhirsch In theory there is still the repo https://github.com/bundestag/gesetze-tools, which should provide everything needed to import the laws semi-automatically. It was also last changed a good two years ago, but it should still work. I'll give it a try.
Exactly, in principle everything should still work, but then again maybe not. Pull requests against that repo are also welcome. I'm happy to give write access to anyone who puts together a reasonable pull request.
Is anyone currently taking care of this? The conversation was a while ago now... Unfortunately I don't have the Python skills.
Does anyone know of more up-to-date sources than gesetze-im-internet.de?
For example, there doesn't seem to be a current version of the Bundesdatenschutzgesetz (BDSG) in its 2018 revision there...
@darkdragon-001 gesetze-im-internet.de is the official portal for federal laws. The Bundesgesetzblatt publishes amendment acts more promptly on its own site, but the changes then have to be transferred (apparently by hand) into the consolidated text of the law first.
How regrettable that this project has been abandoned.
A pity; if it's no longer maintained, the owner should, to be consistent, set the repository to archived.
There is now a site again that publishes all current law announcements online. The only question is whether this platform won't be sued into the ground by the Dumont publishing house.
https://offenegesetze.de/
wolterskluwer-online offers links to individual sections and a handy version comparison. At least these two features appear to be free.
As indicated in https://github.com/bundestag/gesetze/pull/58#issuecomment-1033097288 why not run this update automatically via GitHub actions?
I can help with workflow setup if we decide we want that.
> I can help with GitHub Actions workflow setup if we decide we want that.
We want that!
Please reference https://github.com/bundestag/gesetze-tools/issues/16
Consider implementing the workflow on the https://github.com/bundestag/gesetze-tools/ repo. ❤️
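For reference, the kind of scheduled workflow being discussed might look roughly like this (the script invocations are assumptions; check the gesetze-tools README for the real commands):

name: Update laws
on:
  schedule:
    - cron: '0 3 * * 1'   # hypothetical: weekly on Monday at 03:00 UTC
  workflow_dispatch: {}

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.x'
      - run: pip install -r requirements.txt   # assumes the tools repo's requirements file
      - run: python lawde.py loadall           # lawde.py comes from gesetze-tools
      - run: python lawdown.py                 # hypothetical conversion step, see the tools' README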
| gharchive/issue | 2015-10-24T22:13:07 | 2025-04-01T06:38:07.176075 | {
"authors": [
"ISSTJ",
"UgnilJoZ",
"darkdragon-001",
"hhirsch",
"keilw",
"laulens12",
"mdschweda",
"mk-pmb",
"rugk",
"stefanw",
"tom-richter",
"ulfgebhardt"
],
"repo": "bundestag/gesetze",
"url": "https://github.com/bundestag/gesetze/issues/55",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
198291201 | gem error
Error details
Errno::EACCES: Permission denied @ rb_file_s_rename - (/var/folders/vj/dkscdd2x3wbftywhjf9d8lx80000gn/T/bundler-compact-index-20170101-53146-7cinnb/versions, /Users/pallaviaggarwal/.bundle/cache/compact_index/rubygems.org.443.29b0360b937aa4d161703e6160654e47/versions)
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/fileutils.rb:528:in `rename'
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/fileutils.rb:528:in `block in mv'
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/fileutils.rb:1571:in `block in fu_each_src_dest'
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/fileutils.rb:1587:in `fu_each_src_dest0'
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/fileutils.rb:1569:in `fu_each_src_dest'
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/fileutils.rb:517:in `mv'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/compact_index_client/lib/compact_index_client/updater.rb:55:in `block in update'
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/tmpdir.rb:89:in `mktmpdir'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/compact_index_client/lib/compact_index_client/updater.rb:29:in `update'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/compact_index_client/lib/compact_index_client.rb:65:in `update'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/compact_index_client/lib/compact_index_client.rb:56:in `update_and_parse_checksums!'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/fetcher/compact_index.rb:67:in `available?'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/fetcher/compact_index.rb:15:in `call'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/fetcher/compact_index.rb:15:in `block in compact_index_request'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/fetcher.rb:157:in `use_api'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/source/rubygems.rb:332:in `block in api_fetchers'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/source/rubygems.rb:332:in `select'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/source/rubygems.rb:332:in `api_fetchers'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/source/rubygems.rb:337:in `block in remote_specs'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/index.rb:10:in `build'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/source/rubygems.rb:336:in `remote_specs'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/source/rubygems.rb:83:in `specs'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:261:in `block (2 levels) in index'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:259:in `each'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:259:in `block in index'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/index.rb:10:in `build'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:256:in `index'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:250:in `resolve'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:174:in `specs'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:162:in `resolve_remotely!'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/installer.rb:225:in `resolve_if_need'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/installer.rb:78:in `run'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/installer.rb:24:in `install'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/cli/install.rb:71:in `run'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/cli.rb:189:in `install'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/thor/lib/thor/invocation.rb:126:in `invoke_command'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/thor/lib/thor.rb:359:in `dispatch'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/cli.rb:20:in `dispatch'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/thor/lib/thor/base.rb:440:in `start'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/cli.rb:11:in `start'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/exe/bundle:34:in `block in <top (required)>'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/friendly_errors.rb:100:in `with_friendly_errors'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/exe/bundle:26:in `<top (required)>'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/bin/bundle:22:in `load'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/bin/bundle:22:in `<main>'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/bin/ruby_executable_hooks:15:in `eval'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/bin/ruby_executable_hooks:15:in `<main>'
Environment
Bundler 1.13.7
Rubygems 2.6.8
Ruby 2.3.3p222 (2016-11-21 revision 56859) [x86_64-darwin16]
GEM_HOME /Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3
GEM_PATH /Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3:/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3@global
RVM 1.28.0 (latest)
Git 2.11.0
rubygems-bundler (1.4.4)
See the other Permission denied issues. A suite of permission errors has been addressed by #5007, which will be released in Bundler 1.14.
Let us know if you're still having trouble and none of those solutions worked for you.
I'm closing this for now. If you're still experiencing your original issue don't be afraid to re-open this ticket.
| gharchive/issue | 2017-01-02T02:18:36 | 2025-04-01T06:38:07.181902 | {
"authors": [
"colby-swandale",
"hmistry",
"pallavi16"
],
"repo": "bundler/bundler",
"url": "https://github.com/bundler/bundler/issues/5299",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
425982322 | backport: bundle clean native extensions for gems with a git source
Cherry-pick https://github.com/bundler/bundler/pull/7059 to 1-17-stable since, as https://github.com/bundler/bundler/issues/7058 mentioned, the bug is present in 1.17.3.
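Mechanically the backport is just a cherry-pick of the merge commit onto the stable branch, something like this (the SHA is a placeholder):

git checkout 1-17-stable
git cherry-pick -m 1 <merge-commit-sha-of-pr-7059>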
This test error seems unrelated to this PR
ERROR: Error installing rubocop:
parallel requires Ruby version >= 2.2.
Thanks for the PR. I've marked the PR this references for backport; it will get cherry-picked when we organize the next Bundler 1 release.
| gharchive/pull-request | 2019-03-27T14:08:46 | 2025-04-01T06:38:07.184308 | {
"authors": [
"colby-swandale",
"dylanahsmith"
],
"repo": "bundler/bundler",
"url": "https://github.com/bundler/bundler/pull/7069",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |