id (string, 4 to 10 chars) | text (string, 4 to 2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---|
157367032
|
Issue with date Conversion
Hi there,
I think I discovered a serious issue with date conversion; here is an example:
console.log(new DateOnly(20160101).toDate())
//Mon Feb 01 2016 00:00:00 GMT+1300 (NZDT) - Correct
Now we change year to 2017
console.log(new DateOnly(20170101).toDate())
//Wed Mar 01 2017 00:00:00 GMT+1300 (NZDT) - Not correct
So I'm not sure if you're aware of this, but if you could shed some light on why it works this way, it would be much appreciated.
Thanks
Thanks, this is now fixed and published on npm @1.1.1.
|
gharchive/issue
| 2016-05-29T03:17:23 |
2025-04-01T04:56:11.445068
|
{
"authors": [
"boblauer",
"unkn0wn-kgb"
],
"repo": "boblauer/dateonly",
"url": "https://github.com/boblauer/dateonly/issues/2",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
116407562
|
Fix #6
I haven't actually tested this (sorry, one of those days), but it should prevent the unnecessary adding of a listener and it's really simple.
Ok, tested and works great!
Thanks!
That's strange... I don't see these changes in master. Master still has the delayed-binding issue, in that setTimeout will run on the next tick and leave a listener behind.
see my comment here - https://github.com/boblauer/react-onclickout/issues/6#issuecomment-158823487
|
gharchive/pull-request
| 2015-11-11T19:45:51 |
2025-04-01T04:56:11.447623
|
{
"authors": [
"boblauer",
"grassick"
],
"repo": "boblauer/react-onclickout",
"url": "https://github.com/boblauer/react-onclickout/pull/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
134196889
|
Craft\EntryModel.date is not defined.
Hello Sir.
I am using version 0.5.7 of the Craft export plugin.
When I tick the Post Date or Expiry Date fields of entries to export and then click export, it gives the error >> Craft\EntryModel.date is not defined.
Is this still an issue?
|
gharchive/issue
| 2016-02-17T06:54:52 |
2025-04-01T04:56:11.451804
|
{
"authors": [
"anish274",
"boboldehampsink"
],
"repo": "boboldehampsink/export",
"url": "https://github.com/boboldehampsink/export/issues/20",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2212231991
|
[p5.js KO] Create lerpColor.mdx
Changes: KO translation
[ ] npm run lint passes
[ ] [Inline documentation] is included / updated
@yinhwa @designerSejinOH I'm back on GitHub and have started working on this again. Is this the right way to proceed?
Hi @yunyoungJang yes this is the right place to PR! But kindly close this PR and come back instead with multiple commits ;) it should be 1 PR per contributor
@yinhwa I'll apply the changes again~!
|
gharchive/pull-request
| 2024-03-28T02:37:22 |
2025-04-01T04:56:11.462714
|
{
"authors": [
"yinhwa",
"yunyoungJang"
],
"repo": "bocoup/p5.js-website",
"url": "https://github.com/bocoup/p5.js-website/pull/81",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2733551873
|
Bug: save prompt when switching language without changes
I encountered an issue when using the field on a multilingual page. If I have a translated table field on the page and switch languages, the panel always prompts me to save the page, even when no changes have been made. I tested this on a clean Kirby starter kit with version 4.5.0 and table field version 2.4.0. Functionally, everything still works as expected.
@saltandbits Thanks for reporting this. I'm currently facing some reactivity issues since the introduction of the writer input, where it detects changes even when there are none. I don't have a solution at the moment but will investigate this further.
|
gharchive/issue
| 2024-12-11T17:22:56 |
2025-04-01T04:56:11.491888
|
{
"authors": [
"bogdancondorachi",
"saltandbits"
],
"repo": "bogdancondorachi/kirby-table-field",
"url": "https://github.com/bogdancondorachi/kirby-table-field/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
265330
|
./clsrv: symbol lookup error: ./clsrv: undefined symbol: free_mem_coffer
zeromq version: tags/v2.0.6^0=ae35a1 (installed locally in my home directory, so I had to add -I$(HOME)/include and -L$(HOME)/include to the Makefiles below)
gonewrong version: 754648
coffer version: 19c834
gozero version: 7b78b4
$ ./clsrv
./clsrv: symbol lookup error: ./clsrv: undefined symbol: free_mem_coffer
$ ldd -r clsrv
linux-vdso.so.1 => (0x00007fffcc7ff000)
cgo_zmq.so => /home/temoto/src/go/pkg/linux_amd64/cgo_zmq.so (0x00007f208f296000)
libcgo.so => /home/temoto/src/go/pkg/linux_amd64/libcgo.so (0x00007f208f093000)
cgo_unsafe_coffer.so => /home/temoto/src/go/pkg/linux_amd64/cgo_unsafe_coffer.so (0x00007f208ee90000)
libzmq.so.0 => /home/temoto/lib/libzmq.so.0 (0x00007f208ec52000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00007f208ea11000)
libm.so.6 => /lib/libm.so.6 (0x00007f208e78d000)
libc.so.6 => /lib/libc.so.6 (0x00007f208e40b000)
/lib64/ld-linux-x86-64.so.2 (0x00007f208f49b000)
libuuid.so.1 => /lib/libuuid.so.1 (0x00007f208e206000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007f208def1000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00007f208dcda000)
undefined symbol: free_mem_coffer (./clsrv)
What could be the reason for this?
Looking at the code, I suppose zmq.go:410 tries to use coffer.FreeMemCoffer by its "direct C name", so maybe the issue is that coffer exports that function under a different name on my Linux amd64?
likely irrelevant now
|
gharchive/issue
| 2010-07-31T15:25:04 |
2025-04-01T04:56:11.501214
|
{
"authors": [
"temoto"
],
"repo": "boggle/gozero",
"url": "https://github.com/boggle/gozero/issues/2",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1616176093
|
Create custom assets
This task is optional. In addition, it can be broken down into others, according to the amount of work chosen by the team:
[ ] Tiles;
[ ] Sprites;
[ ] Sound effects;
[ ] Music.
Depends on #16, #18, and #20.
This task is used by the teams to document asset creation. In this particular repository it is not carried out - it is only closed for the record.
|
gharchive/issue
| 2023-03-09T00:08:36 |
2025-04-01T04:56:11.503350
|
{
"authors": [
"boidacarapreta"
],
"repo": "boidacarapreta/adcipt20231",
"url": "https://github.com/boidacarapreta/adcipt20231/issues/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
135147379
|
datashader Anaconda demo code: 2 errors: numpy.core.multiarray failed to import, cannot import name config
Hi - Making progress getting the datashader demo code to run so far :) , except... ;(
Ref # https://github.com/bokeh/datashader/issues/72#issuecomment-185412152
Now at "10-million-point datashaded plots: auto-ranging, but not perceptually calibrated" step @
https://anaconda.org/jbednar/nyc_taxi/notebook
import datashader as ds
from datashader import transfer_functions as tf
from functools import partial
def create_image(x_range, y_range, w, h, color_fn=tf.interpolate):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
agg = cvs.points(df, 'dropoff_x', 'dropoff_y', ds.count('passenger_count'))
image = color_fn(agg)
return image
I am receiving these two errors. Any guidance on this please? I ran "pip install numpy" just in case and received the message: " Requirement already satisfied"
Many Thanks.
Error#1 of 2:
RuntimeError Traceback (most recent call last)
RuntimeError: module compiled against API version a but this version of numpy is 9
ImportError Traceback (most recent call last)
in ()
----> 1 import datashader as ds
2 from datashader import transfer_functions as tf
3 from functools import partial
4
5 def create_image(x_range, y_range, w, h, color_fn=tf.interpolate):
anaconda/lib/python2.7/site-packages/datashader/__init__.py in ()
3 __version__ = '0.1.0'
4
----> 5 from .core import Canvas
6 from .reductions import (count, sum, min, max, mean, std, var, count_cat,
7 summary)
anaconda/lib/python2.7/site-packages/datashader/core.py in ()
5 from odo import discover
6
----> 7 from .utils import Dispatcher, ngjit
8
9
anaconda/lib/python2.7/site-packages/datashader/utils.py in ()
3 from inspect import getmro
4
----> 5 import numba as nb
6 from datashape import Unit
7 from datashape.predicates import launder
anaconda/lib/python2.7/site-packages/numba/__init__.py in ()
5 import re
6
----> 7 from . import testing, decorators
8 from . import errors, special, types, config
9
anaconda/lib/python2.7/site-packages/numba/decorators.py in ()
8 from . import config, sigutils
9 from .errors import DeprecationError
---> 10 from .targets import registry
11 from . import cuda
12
anaconda/lib/python2.7/site-packages/numba/targets/registry.py in ()
1 from __future__ import print_function, division, absolute_import
2
----> 3 from . import cpu
4 from .descriptors import TargetDescriptor
5 from .. import dispatcher, utils, typing
anaconda/lib/python2.7/site-packages/numba/targets/cpu.py in ()
9 from numba import _dynfunc, config
10 from numba.callwrapper import PyCallWrapper
---> 11 from .base import BaseContext, PYOBJECT
12 from numba import utils, cgutils, types
13 from numba.utils import cached_property
anaconda/lib/python2.7/site-packages/numba/targets/base.py in ()
13
14 from numba import types, utils, cgutils, typing
---> 15 from numba import _dynfunc, _helperlib
16 from numba.pythonapi import PythonAPI
17 from numba.targets.imputils import (user_function, user_generator,
ImportError: numpy.core.multiarray failed to import
Error#2 of 2:
Then after running the same code/cell again in jupyter I receive this error message:
ImportError Traceback (most recent call last)
in ()
----> 1 import datashader as ds
2 from datashader import transfer_functions as tf
3 from functools import partial
4
5 def create_image(x_range, y_range, w, h, color_fn=tf.interpolate):
anaconda/lib/python2.7/site-packages/datashader/__init__.py in ()
3 __version__ = '0.1.0'
4
----> 5 from .core import Canvas
6 from .reductions import (count, sum, min, max, mean, std, var, count_cat,
7 summary)
anaconda/lib/python2.7/site-packages/datashader/core.py in ()
5 from odo import discover
6
----> 7 from .utils import Dispatcher, ngjit
8
9
anaconda/lib/python2.7/site-packages/datashader/utils.py in ()
3 from inspect import getmro
4
----> 5 import numba as nb
6 from datashape import Unit
7 from datashape.predicates import launder
anaconda/lib/python2.7/site-packages/numba/__init__.py in ()
5 import re
6
----> 7 from . import testing, decorators
8 from . import errors, special, types, config
9
anaconda/lib/python2.7/site-packages/numba/decorators.py in ()
6 import warnings
7
----> 8 from . import config, sigutils
9 from .errors import DeprecationError
10 from .targets import registry
ImportError: cannot import name config
I suspect you've got some different versions of numpy installed in different places. E.g. if you do python -c "import numpy; print numpy.__version__; print numpy.__file__", I think it needs to be at least 1.8 for multiarray to be available. Perhaps if you're combining pip and conda, one of them has gotten confused?
Thanks for your reply. I typed print numpy.version.version in jupyter, where I am running the code, and got back 1.9.2.
I also have 1.10 and 1.10.4 installed?
Also not sure I understand what you mean by pip and conda being confused?
By pip and conda being confused, I just meant that you mentioned installing something using pip, while the datashader instructions say to use conda, and I was theorizing (with no data! :-) that perhaps something got mixed up between those two package-management systems. I myself tend to stick to conda for everything that's available on conda (like numpy), which works very smoothly. I think multiarray should be available in both numpy 1.9 and 1.10, so I can't see what the problem might be. But it looks like a problem with numpy or numba, not datashader per se. If nothing else worked, I'd try a fresh conda install on a different machine or at least in a different environment.
Okay I'll give that a shot. Thanks.
I'll assume that worked and close this, but if it's still an issue, please re-open this and provide additional details.
I reinstalled OSX and am going to try it again. Hopefully there won't be any conflicts with my Python installs this time. Thx
Sent from my iPhone
On Mar 24, 2016, at 7:16 PM, James A. Bednar notifications@github.com wrote:
I'll assume that worked and close this, but if it's still an issue, please re-open this and provide additional details.
—
You are receiving this because you authored the thread.
Reply to this email directly or view it on GitHub
|
gharchive/issue
| 2016-02-21T02:17:01 |
2025-04-01T04:56:11.551698
|
{
"authors": [
"jbednar",
"tonyj4102"
],
"repo": "bokeh/datashader",
"url": "https://github.com/bokeh/datashader/issues/75",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
141371726
|
Add grid property
This makes the grid property from draggable available.
@calculon0 thanks!!
|
gharchive/pull-request
| 2016-03-16T19:05:48 |
2025-04-01T04:56:11.553339
|
{
"authors": [
"bokuweb",
"calculon0"
],
"repo": "bokuweb/react-resizable-and-movable",
"url": "https://github.com/bokuweb/react-resizable-and-movable/pull/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1641719908
|
🛑 Geomark is down
In 1f98de0, Geomark (https://apps.gov.bc.ca/pub/geomark/overview) was down:
HTTP code: 502
Response time: 420 ms
Resolved: Geomark is back up in 057a3b1.
|
gharchive/issue
| 2023-03-27T09:14:19 |
2025-04-01T04:56:11.577294
|
{
"authors": [
"bolyachevets"
],
"repo": "bolyachevets/upptime",
"url": "https://github.com/bolyachevets/upptime/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2247370614
|
Calendar Events
Contains the implementation of the Calendar events
Calendar Event implementation
|
gharchive/pull-request
| 2024-04-17T04:55:01 |
2025-04-01T04:56:11.580662
|
{
"authors": [
"Nataliazeus94"
],
"repo": "bombies/comp3161-final-project",
"url": "https://github.com/bombies/comp3161-final-project/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
865827144
|
Update documentation
Update links (see https://github.com/bonej-org/BoneJ2/issues/295)
Legacy/bonej/src/main/java/org/bonej/DebugLauncher.java was not updated because https://imagej.net/images/bat-cochlea-volume.zip might already be in the system.
Update links (see #295)
Legacy/bonej/src/main/java/org/bonej/DebugLauncher.java was not updated because https://imagej.net/images/bat-cochlea-volume.zip might already be in the system.
I uploaded bat-cochlea-volume.zip to https://imagej.github.io/images/bat-cochlea-volume.zip so you could update the link there.
|
gharchive/pull-request
| 2021-04-23T07:23:12 |
2025-04-01T04:56:11.584117
|
{
"authors": [
"mdoube",
"rgaiacs"
],
"repo": "bonej-org/BoneJ2",
"url": "https://github.com/bonej-org/BoneJ2/pull/299",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
319507473
|
Last layer's weights when changing the number of classes + expected labels
Thank you for sharing your code
In my case, I want to train on my own dataset. Should the labels (masks) be a 2-D image (with one channel) where each pixel's value is the class label (for example, a pixel belonging to a person has the value 15, etc., which will later require a colormap)? Or what other format should the labels (masks) have? May I also know what software you use to create your labels?
Again, thank you for sharing your effort.
For labels check original impl https://github.com/tensorflow/models/blob/master/research/deeplab/utils/input_generator.py#L133-L134
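For illustration only, a minimal sketch (an editor's hypothetical example, not taken from the linked implementation) of the single-channel label format described in the question, where each pixel stores a class index rather than a colour:
# A minimal sketch (hypothetical, using numpy and Pillow): one uint8 mask per image,
# same width/height as the input, where each pixel holds the class index
# (e.g. 15 for "person" in PASCAL VOC) instead of an RGB value.
import numpy as np
from PIL import Image
mask = np.zeros((512, 512), dtype=np.uint8)  # 0 = background
mask[100:200, 100:200] = 15                  # a region labelled with class 15
Image.fromarray(mask, mode="L").save("label.png")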
thank you, will check it
|
gharchive/issue
| 2018-05-02T11:03:13 |
2025-04-01T04:56:11.590670
|
{
"authors": [
"IhamdiS",
"bhack"
],
"repo": "bonlime/keras-deeplab-v3-plus",
"url": "https://github.com/bonlime/keras-deeplab-v3-plus/issues/16",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1227266159
|
🛑 Radarr4K is down
In fb249cb, Radarr4K ($URLS_RADARR4K) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Radarr4K is back up in 0effeea.
|
gharchive/issue
| 2022-05-05T23:18:29 |
2025-04-01T04:56:11.593003
|
{
"authors": [
"bonny1992"
],
"repo": "bonny1992/upptime",
"url": "https://github.com/bonny1992/upptime/issues/524",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1489285511
|
🛑 Prowlarr is down
In 90654bb, Prowlarr ($URLS_PROWLARR) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Prowlarr is back up in 2f57728.
|
gharchive/issue
| 2022-12-11T02:14:14 |
2025-04-01T04:56:11.595055
|
{
"authors": [
"bonny1992"
],
"repo": "bonny1992/upptime",
"url": "https://github.com/bonny1992/upptime/issues/680",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1630336485
|
🛑 Deluge is down
In 10cd635, Deluge ($URLS_DELUGE) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Deluge is back up in c927a64.
|
gharchive/issue
| 2023-03-18T13:58:10 |
2025-04-01T04:56:11.597068
|
{
"authors": [
"bonny1992"
],
"repo": "bonny1992/upptime",
"url": "https://github.com/bonny1992/upptime/issues/804",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1071438214
|
Command built-in controllers via events or another keymap agnostic way.
As implemented, the controls are hardcoded. This is not a practical design for any serious game.
The controllers do currently work based on events, but the plugins will add a default_input_map system that hardcodes the bindings.
As a workaround, you could make your own Plugin that uses all the same systems, except for the default_input_map.
But I could also try to make the plugins generic over an input bindings system.
Not urgent for my use-case, just something I noticed. I'm planning on implementing my own controller as of now, but if I make a PR with it, I would like there to be a standard for generic bindings.
Perhaps a solution could be to expose an events enum with more "input"-y variants -- something you could map one to one with a button press. As of now the events used aren't much more abstract than using the LookTransform itself.
Perhaps a solution could be to expose an events enum with more "input"-y variants -- something you could map one to one with a button press.
I assume you mean something like the amethyst Bindings type:
https://github.com/amethyst/amethyst/blob/a4d87713f873d59bf5bad37b4c5f9074dd978daf/amethyst_input/src/bindings.rs#L17-L56
Basically a mapping of bevy's Input and Axis types.
Consider the FPS controller event:
pub enum ControlEvent {
Rotate(Vec2),
TranslateEye(Vec3),
}
I don't think abstracting over Bindings works very well here because in some cases (mouse + keyboard) you want an Input to control TranslateEye and in other cases (gamepad) you want an Axis to control TranslateEye. The amethyst solution is to have "emulated" axes using buttons. But that seems like a feature best left for bevy proper.
So that's why I think it makes sense for controllers to define their own event type that is as abstract as possible. Another crate could provide the abstraction you want in order to generate the events from Bindings.
As of now the events used aren't much more abstract than using the LookTransform itself.
I disagree. The controller implementations require a fair bit of complexity that is hidden by the event interface. Many different input schemes could be implemented by "simply" generating those events.
So that's why I like my idea of making the plugins generic over an entire input system. And in Bevy, that's as simple as not implementing the input system at all. The API can be "you just need to run a system that generates ControlEvents." That gives an enormous amount of flexibility and control, while still offering a specific way of controlling the camera.
After looking at the current controllers in more detail, I think the current ControlEvents are sufficient, but making them generic over an input system is still a goal for sure.
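To illustrate the "just run a system that generates ControlEvents" idea, a minimal sketch (an editor's hypothetical example, assuming the FPS controller's ControlEvent is exported at smooth_bevy_cameras::controllers::fps and a Bevy version of that era with EventWriter and Input<KeyCode>):
use bevy::prelude::*;
use smooth_bevy_cameras::controllers::fps::ControlEvent;
// Hypothetical replacement for the plugin's default_input_map: any user system
// that emits ControlEvents can drive the controller, so a custom Plugin could
// register this system instead of the default one, as suggested above.
fn custom_input_map(mut events: EventWriter<ControlEvent>, keyboard: Res<Input<KeyCode>>) {
    let mut translation = Vec3::ZERO;
    if keyboard.pressed(KeyCode::W) {
        translation += Vec3::Z;
    }
    if keyboard.pressed(KeyCode::S) {
        translation -= Vec3::Z;
    }
    if translation != Vec3::ZERO {
        events.send(ControlEvent::TranslateEye(translation));
    }
}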
|
gharchive/issue
| 2021-12-05T11:45:00 |
2025-04-01T04:56:11.605582
|
{
"authors": [
"bonsairobo",
"scrblue"
],
"repo": "bonsairobo/smooth-bevy-cameras",
"url": "https://github.com/bonsairobo/smooth-bevy-cameras/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2240895168
|
[FEATURE] RGB COLORS
What is the feature request for?
Add RGB colors support
Relevant links
No response
Additional Information
like #C00000
Feature requests are not going to be implemented by booksaw, instead you will have to find another developer to make a pull request to implement the feature.
[X] I understand
Duplicate of https://github.com/booksaw/BetterTeams/issues/353
|
gharchive/issue
| 2024-04-12T20:51:32 |
2025-04-01T04:56:11.609037
|
{
"authors": [
"HonzasikCZ",
"booksaw"
],
"repo": "booksaw/BetterTeams",
"url": "https://github.com/booksaw/BetterTeams/issues/583",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2169299492
|
Update database to version 1.3.9
Integrate the new version of the boolder database.
Resolves #84
FYI I have an uncommitted file appearing in /Users/nico/Dev/boolder-android/app/schemas/com.boolder.boolder.data.database.BoolderAppDatabase/16.json
Content:
https://gist.github.com/nmondollot/d0f4c1dd138351202f82deb0e05104fe
First time I'm seeing this, is it normal? Should it be committed inside the repo?
@nmondollot Yes, this is normal. It is the database schema that is generated by Room upon compilation, in case we want to perform migrations across the different versions of the database. This is useful for the user's database (which contains the ticked problems and the projects, and you'll see that the generated .json schema has been committed), but not for the boulders one, as this database is always initialized from the .db file.
In short, the schemas under app/schemas/com.boolder.boolder.data.userdatabase.UserDatabase have to be versioned, but not the ones under app/schemas/com.boolder.boolder.data.database.BoolderAppDatabase (at least as long as we continue to initialize from a .db file)
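One way to express that rule, as a hypothetical .gitignore entry (an editor's sketch, not taken from the repo): keep the UserDatabase schemas versioned and ignore only the Room-generated schema for the boulders database.
# Hypothetical .gitignore line: the boulders database is always initialized from a .db
# file, so its generated Room schema does not need to be committed.
app/schemas/com.boolder.boolder.data.database.BoolderAppDatabase/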
|
gharchive/pull-request
| 2024-03-05T13:56:15 |
2025-04-01T04:56:11.612511
|
{
"authors": [
"nmondollot",
"wang-li"
],
"repo": "boolder-org/boolder-android",
"url": "https://github.com/boolder-org/boolder-android/pull/85",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
615925193
|
Avoid using deprecated header boost/detail/iterator.hpp
This header is deprecated in favor of <iterator> and will be removed in a future release. This silences deprecation warnings.
Please, merge to master.
Done.
|
gharchive/pull-request
| 2020-05-11T14:23:31 |
2025-04-01T04:56:11.629988
|
{
"authors": [
"Lastique",
"mclow"
],
"repo": "boostorg/algorithm",
"url": "https://github.com/boostorg/algorithm/pull/71",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
282711257
|
Building with g++ in C++03 mode fails
Building the library with f.ex. msvc-8.0, or 10.0, works; and so does building with g++ in C++11 mode. However, building with g++ in C++03 mode fails:
C:\Projects\boost-git\boost\libs\contract>b2 build toolset=gcc cxxstd=03
Performing configuration checks
- 32-bit : yes (cached)
- arm : no (cached)
- mips1 : no (cached)
- power : no (cached)
- sparc : no (cached)
- x86 : yes (cached)
- symlinks supported : no (cached)
- junctions supported : yes (cached)
- hardlinks supported : yes (cached)
...patience...
...patience...
...found 2483 targets...
...updating 8 targets...
gcc.compile.c++ ..\..\bin.v2\libs\contract\build\gcc-gnu-6.4.0\debug\cxxstd-03-iso\threadapi-win32\contract.o
In file included from ..\../boost/contract/detail/checking.hpp:11:0,
from ..\../boost/contract/old.hpp:17,
from ..\../boost/contract/detail/inlined/old.hpp:13,
from ..\../boost/contract/detail/inlined.hpp:10,
from ..\..\libs\contract\src\contract.cpp:11:
..\../boost/contract/detail/static_local_var.hpp:19:71: error: a cast to a type other than an integral or enumeration type cannot appear in a constant-expression
template<typename Tag, typename T, typename Init = none*, Init init = Init()>
^~~~~~
..\../boost/contract/detail/static_local_var.hpp:28:31: error: template argument 4 is invalid
struct static_local_var<Tag, T> {
^
and so on.
Yes, thanks for reporting this. I actually noted it too just the other day. GCC/Clang C++03 get confused by a template specialization that I recently introduced. I removed the specialization, fixing this on my local copy.
I'm running some other tests now. I'll be checking the fix in the next few days.
Fixed this in the last commit (split the template specialization into 2 separate templates so that it works on GCC/Clang C++03).
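For readers hitting the same error, a minimal sketch of that kind of split (an editor's hypothetical illustration, not the actual Boost.Contract code): replace the single template with the defaulted non-type parameter (the one GCC/Clang reject in C++03) by two independent templates, so no cast to a non-integral type is needed in a constant expression.
// Variant that takes an explicit initial value for the static local.
template<typename Tag, typename T, typename Init, Init init>
struct static_local_var_init {
    static T& ref() { static T data = init; return data; }
};
// Variant that default-initializes the static local (no non-type parameter needed).
template<typename Tag, typename T>
struct static_local_var {
    static T& ref() { static T data; return data; }
};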
|
gharchive/issue
| 2017-12-17T18:01:06 |
2025-04-01T04:56:11.633710
|
{
"authors": [
"lcaminiti",
"pdimov"
],
"repo": "boostorg/contract",
"url": "https://github.com/boostorg/contract/issues/6",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
526685986
|
std::vector<boost::dynamic_bitset<>>unable to free memory
Hi,
I am using Boost 1.66.0 with gcc/g++-7, and I found that when using std::vector<boost::dynamic_bitset<>>, the memory is not freed on destruction; even using swap(), resize(), or reset() doesn't work.
The test program is shown below.
int main()
{
cout << "[ memory start] " << getMemoryUsage();
{
std::vector<boost::dynamic_bitset<>> m_nodes;
m_nodes.resize(1000000 + 1);
uint64_t descriptor[4] = {1, 2, 3, 4};
for(size_t i = 0; i < m_nodes.size(); i++)
{
m_nodes[i] = boost::dynamic_bitset<>(descriptor, descriptor + 4);
}
}
cout << "[ memory end] " << getMemoryUsage() << endl;
}
result is:
[ memory start ] 13
[ memory end ] 59
As the result shows, the memory of boost::dynamic_bitset is not released, so that is the problem.
Could you please confirm it, and tell me how to solve it?
I'm not seeing a memory leak here. What is getMemoryUsage, and what system are you running this on?
My test program (attached) reports that every allocation is paired with a deallocation. (Boost trunk, on Mac OS 10.14)
#include <vector>
#include <iostream>
#include <cstdio>
#include <boost/dynamic_bitset.hpp>
void* operator new (std::size_t count)
{
void *ptr = malloc(count);
::printf ("Allocating %ld bytes at %p\n", count, ptr);
return ptr;
}
void* operator new[]( std::size_t count)
{
void *ptr = malloc(count);
::printf ("Allocating %ld bytes[] at %p\n", count, ptr);
return ptr;
}
void operator delete(void *ptr) throw()
{
::printf ("deleting %p\n", ptr);
free(ptr);
}
void operator delete[](void *ptr) throw()
{
::printf ("deleting[] %p\n", ptr);
free(ptr);
}
int main() {
{
std::vector<boost::dynamic_bitset<>> m_nodes;
// m_nodes.resize(1000000 + 1);
m_nodes.resize(10 + 1);
uint64_t descriptor[4] = {1, 2, 3, 4};
for(size_t i = 0; i < m_nodes.size(); i++)
{
m_nodes[i] = boost::dynamic_bitset<>(descriptor, descriptor + 4);
}
::printf("Exiting scope\n");
}
}
Hi, thanks for the reply.
System: Ubuntu 18.04, Boost 1.66.0 release version, gcc/g++-7.
getMemoryUsage() is code that gets the program's memory usage, similar to the command "busybox top".
here is the linux code of it:
unsigned long getMemoryUsage()
{
unsigned long MEM = 0;
FILE *f = ::fopen ("/proc/self/stat", "r");
if (!f) return 0;
if (!::fscanf(f,
"%*d %*s %*c %*d %*d %*d %*d %*d%*u %*u %*u %*u %*u %*u %*u%*d %*d %*d %*d %*d %*d %*u %lu", &MEM))
{
// Error parsing:
MEM=0;
}
::fclose (f);
return MEM/1024/1024;
}
For my test program, you can see that after the destruction of "std::vector<boost::dynamic_bitset<>>", the memory does not go back to 13M; that is why I say "std::vector<boost::dynamic_bitset<>>" has a memory leak.
Hi, I also tested std::bitset, and it has no memory leak. Here is the test program:
int main()
{
cout << "[ memory start ] " << getMemoryUsage() << std::endl;
{
std::vector<std::bitset <256> > m_nodes;
m_nodes.resize(1000000 + 1);
uint64_t descriptor[4] = {1, 2, 3, 4};
std::bitset <256> tmp(ULONG_MAX);
for(size_t i = 0; i < m_nodes.size(); i++)
{
m_nodes[i] = tmp;
}
}
cout << "[ memory end ] " << getMemoryUsage() << endl;
}
The result is:
[ memory start ] 13
[ memory end ] 13
AMDG
This is not a bug. It's the normal behavior of memory allocation.
Allocators do not return memory to the system when it is deallocated,
as that would be horribly inefficient for multiple reasons. Instead,
the freed memory will be reused on the next allocation.
In Christ,
Steven Watanabe
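To illustrate the point, a minimal sketch (an editor's addition, not from the thread) that runs the original test twice: the second round reuses the memory the allocator kept from the first, so the reported usage peaks once instead of growing round after round.
#include <cstdint>
#include <iostream>
#include <vector>
#include <boost/dynamic_bitset.hpp>
unsigned long getMemoryUsage(); // the /proc/self/stat helper defined earlier in this thread
int main()
{
    for (int round = 0; round < 2; ++round)
    {
        std::vector<boost::dynamic_bitset<>> m_nodes;
        m_nodes.resize(1000000 + 1);
        uint64_t descriptor[4] = {1, 2, 3, 4};
        for (std::size_t i = 0; i < m_nodes.size(); i++)
        {
            m_nodes[i] = boost::dynamic_bitset<>(descriptor, descriptor + 4);
        }
        // Memory released at the end of this scope goes back to the allocator,
        // not to the OS, and is reused by the next round.
        std::cout << "[ memory, end of round " << round << " ] " << getMemoryUsage() << std::endl;
    }
}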
Ok, got it, thank you very much.
|
gharchive/issue
| 2019-11-20T09:47:52 |
2025-04-01T04:56:11.645588
|
{
"authors": [
"mclow",
"robot-kimbo",
"swatanabe"
],
"repo": "boostorg/dynamic_bitset",
"url": "https://github.com/boostorg/dynamic_bitset/issues/51",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2728457901
|
docs: modify logo variable
When building the docs with Sphinx 8.1.3 there is an error about 'logo'.
Where is the variable originally defined?
The modification in this PR seems to solve the problem.
Where is the variable originally defined?
Possibly a bug!
Thank you!
Thanks. RemovedInSphinx90Warning warnings (not errors) also appeared. I have not researched them.
writing output... [ 2%] design/basics
/opt/venvboostdocs/lib/python3.12/site-packages/sphinx/builders/html/__init__.py:1076: RemovedInSphinx90Warning: The str interface for _JavaScript objects is deprecated. Use js.filename instead.
if resource and '://' in otheruri:
/opt/venvboostdocs/lib/python3.12/site-packages/sphinx/util/osutil.py:48: RemovedInSphinx90Warning: The str interface for _JavaScript objects is deprecated. Use js.filename instead.
if to.startswith(SEP):
/opt/venvboostdocs/lib/python3.12/site-packages/sphinx/util/osutil.py:51: RemovedInSphinx90Warning: The str interface for _JavaScript objects is deprecated. Use js.filename instead.
t2 = to.split('#')[0].split(SEP)
|
gharchive/pull-request
| 2024-12-09T22:53:04 |
2025-04-01T04:56:11.649790
|
{
"authors": [
"mloskot",
"sdarwin"
],
"repo": "boostorg/gil",
"url": "https://github.com/boostorg/gil/pull/759",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2013113722
|
Segmentation fault when a JSON string with whitespaces is passed to boost::json::parse() function
Description
As described in the title.
Version of Boost: 1.80.0
Operating system: Ubuntu 22.04.3 LTS (64-bit)
Compiler: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
CMake: 3.22.1
Link to the ready-to-build project with reproduction example (just run ./run.sh):
https://github.com/Elnee/minimal-reproduction-for-boost-issue
Minimal reproduction example:
#include <bits/stdc++.h>
#include <boost/json.hpp>
int main() {
const std::string json = R"({"example": "foo"})"; // NOTE: The whitespace here between colon and quotation mark.
const auto &object = boost::json::parse(json).as_object();
try {
// Crash appears on the next line while executing the `at()` function.
std::cout << object.at("example").as_string().c_str() << std::endl;
} catch (const std::exception &ex) {
std::cout << "Exception: " << ex.what() << std::endl;
}
return 0;
}
Output:
Segmentation fault (core dumped)
json::parse returns a json::value, which you don't store anywhere. You instead get a reference to its subobject of type json::object. At the end of that statement the value goes out of scope and destroys its internal object. After that you use a dangling reference. In order to fix this you should store either the value or the object:
const auto value = boost::json::parse(json);
const auto& object = value.as_object();
// or
const auto object = boost::json::parse(json).as_object();
@grisumbras Thank you, it works. But I don't understand why using a const-ref doesn't extend the lifetime of that object.
As far as I can see, boost::json::value has a union, and as_object() returns a member of it. Is it because its lifetime is bound to the instance of boost::json::value?
And also, isn't it a bit misleading that I should think about the as_object() method in such a way? I thought of it as getting the same object in a different representation (due to the "as" prefix). It turned out to return some internals to me.
Doesn't it force me to know some internal library implementation details aside from the interface as a user?
@Elnee Per [class.temporary] p4, temporary objects are destroyed after evaluating the full-expression in which they appear (which in this case is the initialization of object). When a temporary object is bound to a reference, it's lifetime will be extended to that of the reference if, and only if, one of the conditions specified by [class.temporary] p6 are met. Binding a reference to the result of calling a non-static member function of a temporary object does not satisfy any of these conditions, so the lifetime is not extended.
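A minimal sketch (an editor's addition, not from the thread) of the distinction being described:
#include <boost/json.hpp>
#include <iostream>
int main() {
    // OK: the reference binds directly to the temporary json::value, so the
    // temporary's lifetime is extended to that of the reference ([class.temporary] p6).
    const boost::json::value& v = boost::json::parse(R"({"example": "foo"})");
    std::cout << v.as_object().at("example").as_string().c_str() << std::endl;
    // Dangling: the reference binds to the result of a member function call on the
    // temporary, which is not a lifetime-extending context, so the temporary
    // json::value is destroyed at the end of the full-expression.
    const boost::json::object& o = boost::json::parse(R"({"example": "foo"})").as_object();
    (void)o; // using o here would be undefined behavior
}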
@sdkrystian Many thanks for the detailed explanation.
|
gharchive/issue
| 2023-11-27T21:14:07 |
2025-04-01T04:56:11.657505
|
{
"authors": [
"Elnee",
"grisumbras",
"sdkrystian"
],
"repo": "boostorg/json",
"url": "https://github.com/boostorg/json/issues/957",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
100052971
|
Access VPN from Docker containers
It seems that Docker containers are unable to access host machine VPN resources. Unfortunately I don't know much about networking, but I suspect it's because boot2docker VM does not use the same routing tables as the host (OS X) machine.
Is there perhaps an easy way to make boot2docker use the same network settings?
boot2docker:
docker@boot2docker:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 1 0 0 eth0
10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.59.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
host:
Routing tables
Internet:
Destination Gateway Flags Refs Use Netif Expire
default 192.168.1.1 UGSc 40 1 en1
default link#7 UCSI 0 0 utun0
10.1/16 172.27.249.211 UGSc 0 0 utun0
10.56/16 172.27.249.211 UGSc 0 0 utun0
10.66/16 172.27.249.211 UGSc 0 0 utun0
10.201.104.224/28 172.27.249.211 UGSc 0 0 utun0
10.202.104.224/28 172.27.249.211 UGSc 0 0 utun0
64.41.140.228/32 192.168.1.1 UGSc 1 0 en1
127 localhost UCS 0 0 lo0
localhost localhost UH 9 2473956 lo0
169.254 link#4 UCS 0 0 en1
172.16/24 172.27.249.211 UGSc 0 0 utun0
172.16.1/27 172.27.249.211 UGSc 0 0 utun0
172.16.1.32/32 172.27.249.211 UGSc 0 0 utun0
172.16.1.33/32 172.27.249.211 UGSc 0 0 utun0
172.16.1.34/32 172.27.249.211 UGSc 0 0 utun0
172.16.1.37/32 172.27.249.211 UGSc 0 0 utun0
172.16.1.38/32 172.27.249.211 UGSc 0 0 utun0
172.16.1.39/32 172.27.249.211 UGSc 0 0 utun0
172.16.1.40/30 172.27.249.211 UGSc 0 0 utun0
172.16.1.44/30 172.27.249.211 UGSc 0 0 utun0
172.16.1.48/28 172.27.249.211 UGSc 0 0 utun0
172.16.1.64/26 172.27.249.211 UGSc 0 0 utun0
172.16.1.128/25 172.27.249.211 UGSc 0 0 utun0
172.16.2/23 172.27.249.211 UGSc 0 0 utun0
172.16.4/24 172.27.249.211 UGSc 0 0 utun0
172.16.8/21 172.27.249.211 UGSc 0 0 utun0
172.17.36/24 172.27.249.211 UGSc 0 0 utun0
172.18 172.27.249.211 UGSc 0 0 utun0
172.19 172.27.249.211 UGSc 0 0 utun0
172.27 172.27.249.211 UGSc 16 54 utun0
172.27.240/20 link#7 UCS 0 0 utun0
172.27.249.211/32 localhost UGSc 26 0 lo0
172.31 172.27.249.211 UGSc 0 0 utun0
192.168.1 link#4 UCS 2 0 en1
192.168.1.1 4c:60:de:97:ce:2b UHLS 0 0 en1
192.168.1.1 4c:60:de:97:ce:2b UHLWIir 42 11 en1 1190
192.168.1.8/32 link#4 UCS 0 0 en1
192.168.1.255 ff:ff:ff:ff:ff:ff UHLWbI 0 3 en1
Can you tell us more about your VPN? So far, we've had very little success getting VMs and VPNs to play nice.
|
gharchive/issue
| 2015-08-10T12:36:38 |
2025-04-01T04:56:11.660194
|
{
"authors": [
"Elijen",
"SvenDowideit"
],
"repo": "boot2docker/boot2docker",
"url": "https://github.com/boot2docker/boot2docker/issues/1031",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
55573890
|
bootstrap_for_tag and collection_select giving bad name
Hello, I have a form that isn't associated with an object and therefore I am using bootstrap_form_tag. On this form I have a button where I can add different search filters, which will add an additional dropdown box when clicked. My code, abbreviated, looks like the following:
= bootstrap_form_tag(bsf_opts(url: "/recorded_calls", method: :post)) do |f|
...
%td= f.collection_select "durations_greater_less[]", @duration_options, :last, :first, hide_label: true
The html that the f.collection_select generates is found, cloned, and added to the page with jQuery when the user clicks a button to add a new filter. The problem is that the generated html has what I think is an incorrect name. Below is the html that comes out; the name is "[durations_greater_less[]]". This is a problem because, let's say a user makes 5 of these dropdowns, I expect to receive an array for durations_greater_less in the params, but since the name is enclosed in "[]" the param is not an array, it is a single value. I expect the name to be just durations_greater_less[].
<select class=form-control name=[durations_greater_less[]] id=_durations_greater_less[]>
<option value=gt>Greater Than</option>
<option value=lt>Less Than</option>
</select>
I tried to trace this out in the code, but I got stuck in the form_group method on form_builder.rb on the line
control = capture(&block).to_s
I see that capture is delegated to @template, but I do not know where @template comes from. Before the aforementioned line I did capture(1,2) just so it would raise an exception that I had the wrong number of args. In the backtrace I saw capture was going to code in another gem, so I'm not sure if it is the other gem where the issue lies or maybe there is just some data in bootstrap-forms I need to change to get this working.
In the meantime I can hack a solution using Rails' select_tag, and I'll set the classes manually to keep it Bootstrap-styled.
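For reference, a minimal sketch of that select_tag workaround (an editor's hypothetical example, assuming the same @duration_options collection), in Haml:
-# Hypothetical workaround: plain Rails select_tag with the Bootstrap class set
-# manually, which produces name="durations_greater_less[]" as expected.
%td= select_tag "durations_greater_less[]",
      options_from_collection_for_select(@duration_options, :last, :first),
      class: "form-control"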
If this was too complicated, allow me to clarify with an easier-to-understand example. Imagine I have a form with a button that says "Add Fruit". When clicked, it will insert a text_field named amounts[] and a dropdown of fruits named fruits[]. So you can do something like:
Hit "Add Fruit"
Enter "5" in the text field
Select "Banana "from the drop down
Hit "Add Fruit"
Enter "2" in the text field
Enter "Pineapple" from the drop down
Hit Submit
The params will be {amounts => ["5","2"], fruits => nil}
This might be coming from the Rails framework...
|
gharchive/issue
| 2015-01-27T04:10:34 |
2025-04-01T04:56:11.671765
|
{
"authors": [
"burnt43",
"parhs"
],
"repo": "bootstrap-ruby/rails-bootstrap-forms",
"url": "https://github.com/bootstrap-ruby/rails-bootstrap-forms/issues/193",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1555700245
|
Fixes as per GTSAM 4.2
Updates to get GTD to compile with GTSAM 4.2.
Looks like we need a PPA of the 4.2 release. @dellaert @ProfFan do we have that?
Looks like we need a PPA of the 4.2 release. @dellaert @ProfFan do we have that?
Not finalized yet. When I finalize (actually make a tag and verify the wrappers) I'll need help cutting a PPA. 4.2a8 is the latest "official" release at this point. The other is a release branch.
I'll add the const when updating the CI for the new PPA.
|
gharchive/pull-request
| 2023-01-24T21:30:51 |
2025-04-01T04:56:11.709668
|
{
"authors": [
"dellaert",
"varunagrawal"
],
"repo": "borglab/GTDynamics",
"url": "https://github.com/borglab/GTDynamics/pull/368",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2254398913
|
🛑 clans.wowsgame.cn is down
In dc353d3, clans.wowsgame.cn (https://clans.wowsgame.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: clans.wowsgame.cn is back up in e51be45 after 27 minutes.
|
gharchive/issue
| 2024-04-20T05:52:35 |
2025-04-01T04:56:11.712662
|
{
"authors": [
"boriskhodok"
],
"repo": "boriskhodok/wowsuptime",
"url": "https://github.com/boriskhodok/wowsuptime/issues/499",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
80067655
|
Add definitions for markerclustererplus
Hello, I would like to propose a definition file for the library MarkerClustererPlus for Google Maps V3.
Project home: mahnunchik/markerclustererplus
This *.d.ts file corresponds to version 2.1.1.
I'm not sure about how to provide a meaningful demo without the heavy JSON data I put in the -tests file, so please feel free to correct me if this is wrong.
Cheers!
markerclusterer/markerclusterer-2.1.1.d.ts
check list
[ ] is the naming convention correct?
https://www.npmjs.com/package/markerclusterer
http://bower.io/search/?q=markerclusterer
others?
[X] has a test file? (markerclusterer/markerclusterer-2.1.1-tests.ts or others)
[ ] passes the Travis-CI test?
@enanox thank you for your contributions.
please rename markerclusterer/markerclusterer-2.1.1.d.ts to markerclustererplus/markerclustererplus.d.ts
Hi @vvakame, thanks for your feedback! Please find the changes on fa3b6b5.
About the issue about the full dummy data on markerclustererplus-tests.ts, don't you think it will be fine to provide an external file named markerclustererplus-tests.json? Otherwise, please let me know your thoughts.
Thanks in advance!
@enanox don't mind! thanks mate.
(Y)
|
gharchive/pull-request
| 2015-05-24T06:04:01 |
2025-04-01T04:56:11.718648
|
{
"authors": [
"enanox",
"vvakame"
],
"repo": "borisyankov/DefinitelyTyped",
"url": "https://github.com/borisyankov/DefinitelyTyped/pull/4439",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1141131697
|
You might have unsuporrted firmware version None
Describe the bug
After the upgrade to HA Core 2022.2.8, I get an unsupported firmware error.
Version
HA version? 2022.2.8
HA Bosch component version? 0.17.3
** Debug SCAN **
** IMPORTANT **
Go to Developer tools in Home Assistant, choose Service tab and choose bosch.debug_scan
Download file to your computer and upload it somewhere eg. https://jsonblob.com/
bosch_scan.zip
Restarting HA fixed the issue.
|
gharchive/issue
| 2022-02-17T10:10:44 |
2025-04-01T04:56:11.731841
|
{
"authors": [
"damiano75"
],
"repo": "bosch-thermostat/home-assistant-bosch-custom-component",
"url": "https://github.com/bosch-thermostat/home-assistant-bosch-custom-component/issues/144",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
107744483
|
scollector: enabling json or toml config
After some conversation in stack chat, we decided we don't like toml so much. This allows toml or json files to be passed in.
At some point in the future we may deprecate the toml file format.
This also removes the totoml functionality. If that is needed, the 0.4.0 release of scollector should have it available.
Closing this after talking to Craig. The frustrations with toml are coming in when we do more complex data structures, in particular for SNMP definitions.
I think since this is the problem, we are better off keeping the SNMP defs out of the config and treating them as loadable object definitions. This also makes them more sharable (other people may have the same or similar PDUs, for example).
Those definitions could be in JSON; Craig would rather have them go in the scollector code. I'm okay with either as long as we keep the main conf clean and avoid switching conf formats yet again unless there is a strong need to.
|
gharchive/pull-request
| 2015-09-22T15:53:06 |
2025-04-01T04:56:11.790540
|
{
"authors": [
"captncraig",
"kylebrandt"
],
"repo": "bosun-monitor/bosun",
"url": "https://github.com/bosun-monitor/bosun/pull/1341",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1599190093
|
upgrade isort
Upgrade isort to resolve current action failures. See https://github.com/PyCQA/isort/issues/2083 for details
Codecov Report
Base: 93.55% // Head: 93.55% // No change to project coverage :thumbsup:
Coverage data is based on head (ff18ef4) compared to base (be50153).
Patch has no changes to coverable lines.
:mega: This organization is not using Codecov’s GitHub App Integration. We recommend you install it so Codecov can continue to function properly for your repositories. Learn more
Additional details and impacted files
@@ Coverage Diff @@
## develop #2877 +/- ##
========================================
Coverage 93.55% 93.55%
========================================
Files 63 63
Lines 13398 13398
========================================
Hits 12534 12534
Misses 864 864
Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here.
:umbrella: View full report at Codecov.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
|
gharchive/pull-request
| 2023-02-24T19:42:36 |
2025-04-01T04:56:11.817359
|
{
"authors": [
"codecov-commenter",
"nateprewitt"
],
"repo": "boto/botocore",
"url": "https://github.com/boto/botocore/pull/2877",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
202235570
|
Label has incorrect "for" tag
The label references the name of the input; this is incorrect, it should use the ID.
The larger issue is that the ID is being used for the wrapper; this should be changed so the ID is always used on the field, and the wrapper gets a different ID.
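A minimal sketch of the expected markup (an editor's hypothetical example, not from the library): the label's "for" attribute must match the input's "id", not its "name", and the wrapper gets its own distinct id.
<!-- label "for" matches the field's id; the wrapper has a separate id -->
<div id="example-field-wrapper">
  <label for="example-field">Example</label>
  <input id="example-field" name="example" type="text">
</div>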
|
gharchive/issue
| 2017-01-20T20:32:53 |
2025-04-01T04:56:11.859633
|
{
"authors": [
"nicksnell"
],
"repo": "boughtbymany/mutt-forms",
"url": "https://github.com/boughtbymany/mutt-forms/issues/46",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
302065567
|
training error
2018-03-04 13:19:20.290777: W tensorflow/core/framework/op_kernel.cc:1192] Resource exhausted: OOM when allocating tensor with shape[1,1024,51,38]
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.ResourceExhaustedError'>, OOM when allocating tensor with shape[1,1024,51,38]
[[Node: FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/FusedBatchNorm = FusedBatchNorm[T=DT_FLOAT, data_format="NHWC", epsilon=1.001e-05, is_training=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/Conv2D, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/gamma/read/_683, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/beta/read/_685, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/moving_mean/read/_687, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/moving_variance/read/_689)]]
[[Node: Reshape_24/_1255 = _HostRecv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_4073_Reshape_24", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Caused by op u'FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/FusedBatchNorm', defined at:
File "object_detection/train.py", line 198, in
tf.app.run()
File "~/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "object_detection/train.py", line 194, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "~/Documents/Custom-Object-Detection-master/object_detection/trainer.py", line 192, in train
clones = model_deploy.create_clones(deploy_config, model_fn, [input_queue])
File "~/Documents/Custom-Object-Detection-master/slim/deployment/model_deploy.py", line 193, in create_clones
outputs = model_fn(*args, **kwargs)
File "~/Documents/Custom-Object-Detection-master/object_detection/trainer.py", line 131, in _create_losses
prediction_dict = detection_model.predict(images)
File "~/Documents/Custom-Object-Detection-master/object_detection/meta_architectures/faster_rcnn_meta_arch.py", line 513, in predict
image_shape) = self._extract_rpn_feature_maps(preprocessed_inputs)
File "~/Documents/Custom-Object-Detection-master/object_detection/meta_architectures/faster_rcnn_meta_arch.py", line 652, in _extract_rpn_feature_maps
preprocessed_inputs, scope=self.first_stage_feature_extractor_scope)
File "~/Documents/Custom-Object-Detection-master/object_detection/meta_architectures/faster_rcnn_meta_arch.py", line 131, in extract_proposal_features
return self._extract_proposal_features(preprocessed_inputs, scope)
File "~/Documents/Custom-Object-Detection-master/object_detection/models/faster_rcnn_resnet_v1_feature_extractor.py", line 126, in _extract_proposal_features
scope=var_scope)
File "~/Documents/Custom-Object-Detection-master/slim/nets/resnet_v1.py", line 298, in resnet_v1_101
reuse=reuse, scope=scope)
File "~/Documents/Custom-Object-Detection-master/slim/nets/resnet_v1.py", line 216, in resnet_v1
net = resnet_utils.stack_blocks_dense(net, blocks, output_stride)
File "~/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "~/Documents/Custom-Object-Detection-master/slim/nets/resnet_utils.py", line 185, in stack_blocks_dense
net = block.unit_fn(net, rate=rate, **dict(unit, stride=1))
File "~/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "~/Documents/Custom-Object-Detection-master/slim/nets/resnet_v1.py", line 118, in bottleneck
activation_fn=None, scope='conv3')
File "~/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "~/lib/python2.7/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1042, in convolution
outputs = normalizer_fn(outputs, **normalizer_params)
File "~/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "~/lib/python2.7/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 643, in batch_norm
outputs = layer.apply(inputs, training=is_training)
File "~/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 671, in apply
return self.__call__(inputs, *args, **kwargs)
File "~/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 575, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "~/lib/python2.7/site-packages/tensorflow/python/layers/normalization.py", line 395, in call
return self._fused_batch_norm(inputs, training=training)
File "~/lib/python2.7/site-packages/tensorflow/python/layers/normalization.py", line 302, in _fused_batch_norm
training, _fused_batch_norm_training, _fused_batch_norm_inference)
File "~/lib/python2.7/site-packages/tensorflow/python/layers/utils.py", line 208, in smart_cond
return fn2()
File "~/lib/python2.7/site-packages/tensorflow/python/layers/normalization.py", line 299, in _fused_batch_norm_inference
data_format=self._data_format)
File "~/lib/python2.7/site-packages/tensorflow/python/ops/nn_impl.py", line 831, in fused_batch_norm
name=name)
File "~/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 2034, in _fused_batch_norm
is_training=is_training, name=name)
File "~/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "~/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "~/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1470, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[1,1024,51,38]
[[Node: FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/FusedBatchNorm = FusedBatchNorm[T=DT_FLOAT, data_format="NHWC", epsilon=1.001e-05, is_training=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/Conv2D, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/gamma/read/_683, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/beta/read/_685, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/moving_mean/read/_687, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/moving_variance/read/_689)]]
[[Node: Reshape_24/_1255 = _HostRecv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_4073_Reshape_24", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Traceback (most recent call last):
File "object_detection/train.py", line 198, in
tf.app.run()
File "~/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "object_detection/train.py", line 194, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "~/Documents/Custom-Object-Detection-master/object_detection/trainer.py", line 296, in train
saver=saver)
File "~/lib/python2.7/site-packages/tensorflow/contrib/slim/python/slim/learning.py", line 775, in train
sv.stop(threads, close_summary_writer=True)
File "~/lib/python2.7/contextlib.py", line 35, in exit
self.gen.throw(type, value, traceback)
File "~/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 964, in managed_session
self.stop(close_summary_writer=close_summary_writer)
File "~/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 792, in stop
stop_grace_period_secs=self._stop_grace_secs)
File "~/lib/python2.7/site-packages/tensorflow/python/training/coordinator.py", line 389, in join
six.reraise(*self._exc_info_to_raise)
File "~/lib/python2.7/site-packages/tensorflow/python/training/coordinator.py", line 296, in stop_on_exception
yield
File "~/lib/python2.7/site-packages/tensorflow/python/training/coordinator.py", line 494, in run
self.run_loop()
File "~/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 994, in run_loop
self._sv.global_step])
File "~/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 889, in run
run_metadata_ptr)
File "~/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "~/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
options, run_metadata)
File "~/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1,1024,51,38]
[[Node: FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/FusedBatchNorm = FusedBatchNorm[T=DT_FLOAT, data_format="NHWC", epsilon=1.001e-05, is_training=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/Conv2D, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/gamma/read/_683, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/beta/read/_685, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/moving_mean/read/_687, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/moving_variance/read/_689)]]
[[Node: Reshape_24/_1255 = _HostRecv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_4073_Reshape_24", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Caused by op u'FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/FusedBatchNorm', defined at:
File "object_detection/train.py", line 198, in
tf.app.run()
File "~/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "object_detection/train.py", line 194, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "~/Documents/Custom-Object-Detection-master/object_detection/trainer.py", line 192, in train
clones = model_deploy.create_clones(deploy_config, model_fn, [input_queue])
File "~/Documents/Custom-Object-Detection-master/slim/deployment/model_deploy.py", line 193, in create_clones
outputs = model_fn(*args, **kwargs)
File "~/Documents/Custom-Object-Detection-master/object_detection/trainer.py", line 131, in _create_losses
prediction_dict = detection_model.predict(images)
File "~/Documents/Custom-Object-Detection-master/object_detection/meta_architectures/faster_rcnn_meta_arch.py", line 513, in predict
image_shape) = self._extract_rpn_feature_maps(preprocessed_inputs)
File "~/Documents/Custom-Object-Detection-master/object_detection/meta_architectures/faster_rcnn_meta_arch.py", line 652, in _extract_rpn_feature_maps
preprocessed_inputs, scope=self.first_stage_feature_extractor_scope)
File "~/Documents/Custom-Object-Detection-master/object_detection/meta_architectures/faster_rcnn_meta_arch.py", line 131, in extract_proposal_features
return self._extract_proposal_features(preprocessed_inputs, scope)
File "~/Documents/Custom-Object-Detection-master/object_detection/models/faster_rcnn_resnet_v1_feature_extractor.py", line 126, in _extract_proposal_features
scope=var_scope)
File "~/Documents/Custom-Object-Detection-master/slim/nets/resnet_v1.py", line 298, in resnet_v1_101
reuse=reuse, scope=scope)
File "~/Documents/Custom-Object-Detection-master/slim/nets/resnet_v1.py", line 216, in resnet_v1
net = resnet_utils.stack_blocks_dense(net, blocks, output_stride)
File "~/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "~/Documents/Custom-Object-Detection-master/slim/nets/resnet_utils.py", line 185, in stack_blocks_dense
net = block.unit_fn(net, rate=rate, **dict(unit, stride=1))
File "~/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "~/Documents/Custom-Object-Detection-master/slim/nets/resnet_v1.py", line 118, in bottleneck
activation_fn=None, scope='conv3')
File "~/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "~/lib/python2.7/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1042, in convolution
outputs = normalizer_fn(outputs, **normalizer_params)
File "~/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "~/lib/python2.7/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 643, in batch_norm
outputs = layer.apply(inputs, training=is_training)
File "~/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 671, in apply
return self.call(inputs, *args, **kwargs)
File "~/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 575, in call
outputs = self.call(inputs, *args, **kwargs)
File "~/lib/python2.7/site-packages/tensorflow/python/layers/normalization.py", line 395, in call
return self._fused_batch_norm(inputs, training=training)
File "~/lib/python2.7/site-packages/tensorflow/python/layers/normalization.py", line 302, in _fused_batch_norm
training, _fused_batch_norm_training, _fused_batch_norm_inference)
File "~/lib/python2.7/site-packages/tensorflow/python/layers/utils.py", line 208, in smart_cond
return fn2()
File "~/lib/python2.7/site-packages/tensorflow/python/layers/normalization.py", line 299, in _fused_batch_norm_inference
data_format=self._data_format)
File "~/lib/python2.7/site-packages/tensorflow/python/ops/nn_impl.py", line 831, in fused_batch_norm
name=name)
File "~/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 2034, in _fused_batch_norm
is_training=is_training, name=name)
File "~/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "~/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "~/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1470, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[1,1024,51,38]
[[Node: FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/FusedBatchNorm = FusedBatchNorm[T=DT_FLOAT, data_format="NHWC", epsilon=1.001e-05, is_training=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/Conv2D, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/gamma/read/_683, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/beta/read/_685, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/moving_mean/read/_687, FirstStageFeatureExtractor/resnet_v1_101/block3/unit_12/bottleneck_v1/conv3/BatchNorm/moving_variance/read/_689)]]
[[Node: Reshape_24/_1255 = _HostRecvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_4073_Reshape_24", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
What might cause this error? Why does it happen?
Whenever I saw the "ResourceExhaustedError" I restarted my computer and it helped. Usually the memory is full and the computer cannot function anymore. Restarting helps free the memory. Try that.
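Restarting works because it frees GPU memory still held by stale processes. If the model itself simply does not fit, the usual knobs in this kind of setup are the batch size and the image resizer dimensions in the pipeline config. Below is a minimal sketch of asking TensorFlow 1.x to allocate GPU memory incrementally rather than grabbing it all up front; the ConfigProto options are standard TF 1.x API, but how the config gets wired into train.py (which builds its own session) is an assumption about your setup.
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow the GPU allocation on demand
# config.gpu_options.per_process_gpu_memory_fraction = 0.8  # or hard-cap the fraction used
with tf.Session(config=config) as sess:
    pass  # illustration only: pass a config like this wherever the training session is created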
I have the same error:
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framewo
rk.errors_impl.InvalidArgumentError'>, Incompatible shapes: [1,63,4] vs. [1,64,4
]
[[Node: gradients/Loss/BoxClassifierLoss/Loss/sub_grad/BroadcastGradien
tArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task
:0/device:CPU:0"](gradients/Loss/BoxClassifierLoss/Loss/sub_grad/Shape, gradient
s/Loss/BoxClassifierLoss/Loss/sub_1_grad/Shape)]]
Caused by op 'gradients/Loss/BoxClassifierLoss/Loss/sub_grad/BroadcastGradientAr
gs', defined at:
File "object_detection/train.py", line 198, in
tf.app.run()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\platform\app.py", line 126, in run
_sys.exit(main(argv))
File "object_detection/train.py", line 194, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "C:\Users\sukhovva\Conv1\object_detection\trainer.py", line 226, in train
clones, training_optimizer, regularization_losses=None)
File "C:\Users\sukhovva\Conv1\slim\deployment\model_deploy.py", line 297, in o
ptimize_clones
optimizer, clone, num_clones, regularization_losses, **kwargs)
File "C:\Users\sukhovva\Conv1\slim\deployment\model_deploy.py", line 261, in _
optimize_clone
clone_grad = optimizer.compute_gradients(sum_loss, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\training\optimizer.py", line 526, in compute_gradients
colocate_gradients_with_ops=colocate_gradients_with_ops)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\gradients_impl.py", line 494, in gradients
gate_gradients, aggregation_method, stop_gradients)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\gradients_impl.py", line 636, in _GradientsHelper
lambda: grad_fn(op, *out_grads))
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\gradients_impl.py", line 385, in _MaybeCompile
return grad_fn() # Exit early
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\gradients_impl.py", line 636, in
lambda: grad_fn(op, *out_grads))
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\math_grad.py", line 857, in _SubGrad
rx, ry = gen_array_ops.broadcast_gradient_args(sx, sy)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\gen_array_ops.py", line 812, in broadcast_gradient_args
"BroadcastGradientArgs", s0=s0, s1=s1, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\framework\ops.py", line 3392, in create_op
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\framework\ops.py", line 1718, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-
access
...which was originally created as op 'Loss/BoxClassifierLoss/Loss/sub', defined
at:
File "object_detection/train.py", line 198, in
tf.app.run()
[elided 1 identical lines from previous traceback]
File "object_detection/train.py", line 194, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "C:\Users\sukhovva\Conv1\object_detection\trainer.py", line 192, in train
clones = model_deploy.create_clones(deploy_config, model_fn, [input_queue])
File "C:\Users\sukhovva\Conv1\slim\deployment\model_deploy.py", line 193, in c
reate_clones
outputs = model_fn(*args, **kwargs)
File "C:\Users\sukhovva\Conv1\object_detection\trainer.py", line 133, in crea
te_losses
losses_dict = detection_model.loss(prediction_dict)
File "C:\Users\sukhovva\Conv1\object_detection\meta_architectures\faster_rcnn
meta_arch.py", line 1265, in loss
groundtruth_classes_with_background_list))
File "C:\Users\sukhovva\Conv1\object_detection\meta_architectures\faster_rcnn_
meta_arch.py", line 1421, in loss_box_classifier
batch_reg_targets, weights=batch_reg_weights) / normalizer
File "C:\Users\sukhovva\Conv1\object_detection\core\losses.py", line 71, in __
call_
return self._compute_loss(prediction_tensor, target_tensor, **params)
File "C:\Users\sukhovva\Conv1\object_detection\core\losses.py", line 157, in _
compute_loss
diff = prediction_tensor - target_tensor
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\math_ops.py", line 979, in binary_op_wrapper
return func(x, y, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\gen_math_ops.py", line 8582, in sub
"Sub", x=x, y=y, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\framework\ops.py", line 3392, in create_op
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\framework\ops.py", line 1718, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-
access
InvalidArgumentError (see above for traceback): Incompatible shapes: [1,63,4] vs
. [1,64,4]
[[Node: gradients/Loss/BoxClassifierLoss/Loss/sub_grad/BroadcastGradien
tArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task
:0/device:CPU:0"](gradients/Loss/BoxClassifierLoss/Loss/sub_grad/Shape, gradient
s/Loss/BoxClassifierLoss/Loss/sub_1_grad/Shape)]]
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\client\session.py", line 1322, in _do_call
return fn(*args)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\client\session.py", line 1307, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\client\session.py", line 1409, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shape
s: [1,63,4] vs. [1,64,4]
[[Node: gradients/Loss/BoxClassifierLoss/Loss/sub_grad/BroadcastGradien
tArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task
:0/device:CPU:0"](gradients/Loss/BoxClassifierLoss/Loss/sub_grad/Shape, gradient
s/Loss/BoxClassifierLoss/Loss/sub_1_grad/Shape)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "object_detection/train.py", line 198, in
tf.app.run()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\platform\app.py", line 126, in run
_sys.exit(main(argv))
File "object_detection/train.py", line 194, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "C:\Users\sukhovva\Conv1\object_detection\trainer.py", line 296, in train
saver=saver)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\co
ntrib\slim\python\slim\learning.py", line 769, in train
sess, train_op, global_step, train_step_kwargs)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\co
ntrib\slim\python\slim\learning.py", line 487, in train_step
run_metadata=run_metadata)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\client\session.py", line 900, in run
run_metadata_ptr)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\client\session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\client\session.py", line 1316, in _do_run
run_metadata)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\client\session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shape
s: [1,63,4] vs. [1,64,4]
[[Node: gradients/Loss/BoxClassifierLoss/Loss/sub_grad/BroadcastGradien
tArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task
:0/device:CPU:0"](gradients/Loss/BoxClassifierLoss/Loss/sub_grad/Shape, gradient
s/Loss/BoxClassifierLoss/Loss/sub_1_grad/Shape)]]
Caused by op 'gradients/Loss/BoxClassifierLoss/Loss/sub_grad/BroadcastGradientAr
gs', defined at:
File "object_detection/train.py", line 198, in
tf.app.run()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\platform\app.py", line 126, in run
_sys.exit(main(argv))
File "object_detection/train.py", line 194, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "C:\Users\sukhovva\Conv1\object_detection\trainer.py", line 226, in train
clones, training_optimizer, regularization_losses=None)
File "C:\Users\sukhovva\Conv1\slim\deployment\model_deploy.py", line 297, in o
ptimize_clones
optimizer, clone, num_clones, regularization_losses, **kwargs)
File "C:\Users\sukhovva\Conv1\slim\deployment\model_deploy.py", line 261, in _
optimize_clone
clone_grad = optimizer.compute_gradients(sum_loss, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\training\optimizer.py", line 526, in compute_gradients
colocate_gradients_with_ops=colocate_gradients_with_ops)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\gradients_impl.py", line 494, in gradients
gate_gradients, aggregation_method, stop_gradients)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\gradients_impl.py", line 636, in _GradientsHelper
lambda: grad_fn(op, *out_grads))
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\gradients_impl.py", line 385, in _MaybeCompile
return grad_fn() # Exit early
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\gradients_impl.py", line 636, in
lambda: grad_fn(op, *out_grads))
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\math_grad.py", line 857, in _SubGrad
rx, ry = gen_array_ops.broadcast_gradient_args(sx, sy)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\gen_array_ops.py", line 812, in broadcast_gradient_args
"BroadcastGradientArgs", s0=s0, s1=s1, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\framework\ops.py", line 3392, in create_op
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\framework\ops.py", line 1718, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-
access
...which was originally created as op 'Loss/BoxClassifierLoss/Loss/sub', defined
at:
File "object_detection/train.py", line 198, in
tf.app.run()
[elided 1 identical lines from previous traceback]
File "object_detection/train.py", line 194, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "C:\Users\sukhovva\Conv1\object_detection\trainer.py", line 192, in train
clones = model_deploy.create_clones(deploy_config, model_fn, [input_queue])
File "C:\Users\sukhovva\Conv1\slim\deployment\model_deploy.py", line 193, in c
reate_clones
outputs = model_fn(*args, **kwargs)
File "C:\Users\sukhovva\Conv1\object_detection\trainer.py", line 133, in crea
te_losses
losses_dict = detection_model.loss(prediction_dict)
File "C:\Users\sukhovva\Conv1\object_detection\meta_architectures\faster_rcnn
meta_arch.py", line 1265, in loss
groundtruth_classes_with_background_list))
File "C:\Users\sukhovva\Conv1\object_detection\meta_architectures\faster_rcnn_
meta_arch.py", line 1421, in loss_box_classifier
batch_reg_targets, weights=batch_reg_weights) / normalizer
File "C:\Users\sukhovva\Conv1\object_detection\core\losses.py", line 71, in __
call_
return self._compute_loss(prediction_tensor, target_tensor, **params)
File "C:\Users\sukhovva\Conv1\object_detection\core\losses.py", line 157, in _
compute_loss
diff = prediction_tensor - target_tensor
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\math_ops.py", line 979, in binary_op_wrapper
return func(x, y, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\ops\gen_math_ops.py", line 8582, in sub
"Sub", x=x, y=y, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\framework\ops.py", line 3392, in create_op
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\py
thon\framework\ops.py", line 1718, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-
access
InvalidArgumentError (see above for traceback): Incompatible shapes: [1,63,4] vs
. [1,64,4]
[[Node: gradients/Loss/BoxClassifierLoss/Loss/sub_grad/BroadcastGradien
tArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task
:0/device:CPU:0"](gradients/Loss/BoxClassifierLoss/Loss/sub_grad/Shape, gradient
s/Loss/BoxClassifierLoss/Loss/sub_1_grad/Shape)]]
|
gharchive/issue
| 2018-03-04T07:52:04 |
2025-04-01T04:56:11.983870
|
{
"authors": [
"BanuSelinTosun",
"Tejeshwarabm",
"Victorsoukhov"
],
"repo": "bourdakos1/Custom-Object-Detection",
"url": "https://github.com/bourdakos1/Custom-Object-Detection/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1935723093
|
formula to calculate accuracy
I saw the following formula to calculate accuracy in your scGPT paper:
Accuracy = tp / (tp + fp + tn + fn)
Should the numerator be: (tp + tn)? The correct formula should be:
Accuracy = (tp + tn) / (tp + fp + tn + fn)
Thank you so much! This indeed is a typo. The actual calculation should be correct since we used the sklearn interface. We will update this.
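For reference, a quick sanity check of the corrected formula against the sklearn interface mentioned above; the labels below are made up purely for illustration:
from sklearn.metrics import accuracy_score, confusion_matrix
y_true = [0, 1, 1, 0, 1, 0]  # hypothetical ground-truth labels
y_pred = [0, 1, 0, 0, 1, 1]  # hypothetical predictions
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
manual = (tp + tn) / (tp + fp + tn + fn)  # corrected formula
print(manual, accuracy_score(y_true, y_pred))  # both print 0.666..., as expected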
|
gharchive/issue
| 2023-10-10T16:10:30 |
2025-04-01T04:56:11.988297
|
{
"authors": [
"subercui",
"yueming-ding"
],
"repo": "bowang-lab/scGPT",
"url": "https://github.com/bowang-lab/scGPT/issues/91",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2350531061
|
FileNotFoundError
Dear author:
Thank you for sharing the project. During the replication process, I couldn't find two files (as shown in the figure below): two pre-trained models that are not in the checkpoints download section. I hope you can add a download link.
I would greatly appreciate it if you could reply to me.
Hi Yayalel. I am currently traveling for work and will be back in the office at the beginning of July. Sorry for the inconvenience.
Okay, thank you for your reply. If you have time to upload it after you come back, I would greatly appreciate it.
May I ask if your business trip has ended now? If it's convenient for you, could you please upload the file? I would greatly appreciate it.
Hello. May we know how urgently you need these two checkpoints? We find that these two checkpoints, which are relatively old, were not preserved during recent server updates, so we would need to retrain them to get their weights. However, these two models are only two of the intermediate ablation studies, rather than the final best-performing models we would recommend for either downstream applications or comparative studies in your own work.
Thank you for your reply. I understand what you mean. Currently, I am in the experimental stage and need to know the training results of the model with only hierarchical relationship classification. I am deeply sorry for the inconvenience caused to you and sincerely hope that you can upload the two pre-trained models after retraining. Regarding the urgency level, I would like to complete the entire experiment within a month, but everything will be based on your schedule.
I think the model with a flat classification head + commonsense validation has already been uploaded, and you can find it on README: https://drive.google.com/file/d/1nwN8ToqfcRfabtf5PcJLAzd0J-Ky-6s3/view?usp=sharing. The saved model name might be different, so please try with this one first and rename it if needed.
Currently, we are busy with several upcoming submissions and there is no free GPU resource left in our group, but you can follow our README to train your baseline model with a flat classification head and without the commonsense validation step.
Apologies for the inconvenience, and thank you so much for your understanding.
Okay, thank you very much for your patient reply.
|
gharchive/issue
| 2024-06-13T08:32:03 |
2025-04-01T04:56:11.993675
|
{
"authors": [
"Yayalel",
"bowen-upenn"
],
"repo": "bowen-upenn/scene_graph_commonsense",
"url": "https://github.com/bowen-upenn/scene_graph_commonsense/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
17345539
|
No AX.25 port data configured
Hi,
I'm not sure that bower is the problem, but each time I try to launch it, i get this message in the syslog.
for information :
npm -v
1.3.5
nodejs -v
v0.10.15
regards,
You're probably installing and running the distro's node package, which is an amateur-radio-related tool rather than Node.js - I had this issue installing node-red.
Try this to fix it:
sudo ln -s /usr/bin/nodejs /usr/bin/node
|
gharchive/issue
| 2013-07-29T15:36:25 |
2025-04-01T04:56:11.996303
|
{
"authors": [
"CoStiC",
"yaleman"
],
"repo": "bower/bower",
"url": "https://github.com/bower/bower/issues/677",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2728150081
|
SDK fails with "Module checksum failed: Could not find '/app/'." in a Linux container on .NET 8
[x] I have checked that the SDK documentation doesn't solve my issue.
[x] I have checked that the API documentation doesn't solve my issue.
[x] I have searched the Box Developer Forums and my issue isn't already reported (or if it has been reported, I have attached a link to it, for reference).
[x] I have searched Issues in this repo and my issue isn't already reported.
Description of the Issue
SDK fails with "Module checksum failed: Could not find '/app/'." when deployed in a Linux container on .NET 8
I'm in the process of upgrading some of my organization's .NET Framework apps to .NET Core. I updated the SDK from the deprecated one to Box.SDK.Gen 1.4.0.
My organization's C# architect notes that Bouncy Castle BC-FNA 1.0.2 is certified for use with .NET applications running on CLR 4.
I thought this issue might be similar to the AWS PublishReadyToRun issue, but I wanted to confirm they're related. It seems like in my case, Bouncy Castle is attempting to use Win32 calls in the Linux container.
Steps to Reproduce
I'm working with other teams at my organization to get a minimal example that reproduces the issue. These are the steps that are visible to me so far from our build process:
Base image OS Ubuntu 22.04 jammy-20240911.1
dotnet build --no-incremental -r linux-x64 --no-self-contained --no-restore --configuration Release AppService.sln
dotnet publish --no-build -c Release -r linux-x64 AppService/src/AppService.csproj -o ./Target --no-self-contained -p:PublishSingleFile=true
COPY Target/AppService app-bin
Call GetFolderItemsAsync from the Box client's FoldersManager
Expected Behavior
A list of folders is returned.
Error Message, Including Stack Trace
unable to read encrypted data: Module checksum failed: Could not find file '/app/<Unknown>'.
---> Org.BouncyCastle.Pkcs.PkcsException: unable to read encrypted data: Module checksum failed: Could not find file '/app/<Unknown>'.
---> Org.BouncyCastle.Crypto.CryptoOperationError: Module checksum failed: Could not find file '/app/<Unknown>'.
---> System.IO.FileNotFoundException: Could not find file '/app/<Unknown>'.
File name: '/app/<Unknown>'
at Interop.ThrowExceptionForIoErrno(ErrorInfo errorInfo, String path, Boolean isDirError)
at Microsoft.Win32.SafeHandles.SafeFileHandle.Open(String fullPath, FileMode mode, FileAccess access, FileShare share, FileOptions options, Int64 preallocationSize, UnixFileMode openPermissions, Int64& fileLength, UnixFileMode& filePermissions, Boolean failForSymlink, Boolean& wasSymlink, Func`4 createOpenException)
at Microsoft.Win32.SafeHandles.SafeFileHandle.Open(String fullPath, FileMode mode, FileAccess access, FileShare share, FileOptions options, Int64 preallocationSize, Nullable`1 unixCreateMode, Func`4 createOpenException)
at System.IO.File.OpenHandle(String path, FileMode mode, FileAccess access, FileShare share, FileOptions options, Int64 preallocationSize)
at System.IO.File.ReadAllBytes(String path)
at Org.BouncyCastle.Crypto.CryptoStatus.CalculateAssemblyHMac()
at Org.BouncyCastle.Crypto.CryptoStatus.ChecksumValidate()
--- End of inner exception stack trace ---
at Org.BouncyCastle.Crypto.CryptoStatus.MoveToErrorStatus(CryptoOperationError error)
at Org.BouncyCastle.Crypto.CryptoStatus.ChecksumValidate()
at Org.BouncyCastle.Crypto.CryptoStatus.IsReady()
at Org.BouncyCastle.Security.SecurityContext.CreateBuilder[A](IBuilderServiceType`1 type)
at Org.BouncyCastle.Crypto.CryptoServicesRegistrar.CreateService[A](IBuilderServiceType`1 type)
at Org.BouncyCastle.Operators.PkixPbeDecryptorProviderBuilder.MyDecryptorBuilderProvider.CreateDecryptorBuilder(AlgorithmIdentifier algorithmDetails)
at Org.BouncyCastle.Pkcs.Pkcs8EncryptedPrivateKeyInfo.DecryptPrivateKeyInfo(IDecryptorBuilderProvider`1 inputDecryptorProvider)
--- End of inner exception stack trace ---
at Org.BouncyCastle.Pkcs.Pkcs8EncryptedPrivateKeyInfo.DecryptPrivateKeyInfo(IDecryptorBuilderProvider`1 inputDecryptorProvider)
at Box.Sdk.Gen.Internal.JwtUtils.GetSigningCredentials(JwtKey key)
at Box.Sdk.Gen.Internal.JwtUtils.CreateJwtAssertion(Dictionary`2 claims, JwtKey key, JwtSignOptions options)
at Box.Sdk.Gen.BoxJwtAuth.RefreshTokenAsync(NetworkSession networkSession)
at Box.Sdk.Gen.BoxJwtAuth.RetrieveTokenAsync(NetworkSession networkSession)
at Box.Sdk.Gen.BoxJwtAuth.RetrieveAuthorizationHeaderAsync(NetworkSession networkSession)
at Box.Sdk.Gen.Internal.HttpClientAdapter.BuildHttpRequest(FetchOptions options, Stream stream)
at Box.Sdk.Gen.Internal.HttpClientAdapter.FetchAsync(FetchOptions options)
at Box.Sdk.Gen.Managers.FoldersManager.GetFolderItemsAsync(String folderId, GetFolderItemsQueryParams queryParams, GetFolderItemsHeaders headers, Nullable`1 cancellationToken)
System.IO.FileNotFoundException: Could not find file '/app/<Unknown>'.
File name: '/app/<Unknown>'
at Interop.ThrowExceptionForIoErrno(ErrorInfo errorInfo, String path, Boolean isDirError)
at Microsoft.Win32.SafeHandles.SafeFileHandle.Open(String fullPath, FileMode mode, FileAccess access, FileShare share, FileOptions options, Int64 preallocationSize, UnixFileMode openPermissions, Int64& fileLength, UnixFileMode& filePermissions, Boolean failForSymlink, Boolean& wasSymlink, Func`4 createOpenException)
at Microsoft.Win32.SafeHandles.SafeFileHandle.Open(String fullPath, FileMode mode, FileAccess access, FileShare share, FileOptions options, Int64 preallocationSize, Nullable`1 unixCreateMode, Func`4 createOpenException)
at System.IO.File.OpenHandle(String path, FileMode mode, FileAccess access, FileShare share, FileOptions options, Int64 preallocationSize)
at System.IO.File.ReadAllBytes(String path)
at Org.BouncyCastle.Crypto.CryptoStatus.CalculateAssemblyHMac()
at Org.BouncyCastle.Crypto.CryptoStatus.ChecksumValidate()
Versions Used
.NET SDK: 1.4.0
.NET: 8.0.10
Hi @tmoore8RJ
After investigation, I found out this issue can come from the PublishSingleFile feature during the publish process. From the MS docs, we can use ExcludeFromSingleFile, but it's not really working on my side.
I found a workaround for this issue from this comment, with some modifications from my side, so you can try this:
<Target Name="ExplicitRemoveFromFilesToBundle" BeforeTargets="GenerateSingleFileBundle" DependsOnTargets="PrepareForBundle">
<ItemGroup>
<FilesToRemoveFromBundle Include="@(FilesToBundle)" Condition="$([System.String]::new('%(Filename)').ToLower().Contains('bc-fips')) OR $([System.String]::new('%(Filename)').ToLower().Contains('bcpkix'))" />
</ItemGroup>
<Message Text="FilesToRemoveFromBundle '@(FilesToRemoveFromBundle)'" Importance="high" />
<ItemGroup>
<FilesToBundle Remove="@(FilesToRemoveFromBundle)" />
</ItemGroup>
</Target>
<Target Name="CopyFilesToRemoveFromBundle" AfterTargets="Publish">
<Copy SourceFiles="@(FilesToRemoveFromBundle)" DestinationFolder="$(PublishDir)" />
<Message Text="Copied files to remove from bundle to '$(PublishDir)'" Importance="high" />
</Target>
Please put this into the bottom of your .csproj file and let me know if it resolves the issue.
Bests,
Minh
Awesome, makes sense. I'll test these changes out and report back. Thanks!
I got it working. Thank you so much for your help!
In case others encounter something similar, in addition to the snippet provided that makes sure the BouncyCastle DLLs are in the Target folder after I restore, build, and publish, I also modified my Docker file to use the same pattern as the arguments to FilesToRemoveFromBundle to match the DLLs and copy them into the container in the same folder as the single file produced by the publish.
My csproj file with unrelated entries omitted
<Project Sdk="Microsoft.NET.Sdk.Web">
<PropertyGroup>
<AssemblyName>AppService</AssemblyName>
<RootNamespace>AppService</RootNamespace>
<TargetFramework>net8.0</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Box.Sdk.Gen" Version="1.4.0" />
</ItemGroup>
<Target Name="ExplicitRemoveFromFilesToBundle" BeforeTargets="GenerateSingleFileBundle" DependsOnTargets="PrepareForBundle">
<ItemGroup>
<FilesToRemoveFromBundle Include="@(FilesToBundle)" Condition="$([System.String]::new('%(Filename)').ToLower().Contains('bc-fips')) OR $([System.String]::new('%(Filename)').ToLower().Contains('bcpkix'))" />
</ItemGroup>
<Message Text="FilesToRemoveFromBundle '@(FilesToRemoveFromBundle)'" Importance="high" />
<ItemGroup>
<FilesToBundle Remove="@(FilesToRemoveFromBundle)" />
</ItemGroup>
</Target>
<Target Name="CopyFilesToRemoveFromBundle" AfterTargets="Publish">
<Copy SourceFiles="@(FilesToRemoveFromBundle)" DestinationFolder="$(PublishDir)" />
<Message Text="Copied files to remove from bundle to '$(PublishDir)'" Importance="high" />
</Target>
</Project>
The additional commands in my Docker file
COPY Target/*bc-fips* .
COPY Target/*bcpkix-fips* .
|
gharchive/issue
| 2024-12-09T20:04:55 |
2025-04-01T04:56:12.009181
|
{
"authors": [
"congminh1254",
"tmoore8RJ"
],
"repo": "box/box-dotnet-sdk-gen",
"url": "https://github.com/box/box-dotnet-sdk-gen/issues/342",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
387750175
|
Can't pull from repository
When I tried to pull from the repository, the "pull" process just said "Finished" and did nothing.
My command:
java -jar mojito-cli-0.94.jar pull -r REPONAME -t TARGET_DIR -ft MAC_STRING
Can you provide more information on how you created your repository, whether you see the string in mojito, and the directory layout you have? Also, was it working before, or did you just start?
@sentiurin I can't help without more information. Feel free to reopen the issue with more details
|
gharchive/issue
| 2018-12-05T13:11:27 |
2025-04-01T04:56:12.016600
|
{
"authors": [
"aurambaj",
"sentiurin"
],
"repo": "box/mojito",
"url": "https://github.com/box/mojito/issues/366",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
134962294
|
Can't build latest vbox image
Managed to build eval-win2012r2-standard-nocm-1.0.4.box a while ago, but nothing builds now.
Host is Ubuntu trusty, vagrant 1.8.1, packer 0.8.6
$ vagrant plugin list
vagrant-aws (0.7.0)
vagrant-digitalocean (0.7.10)
vagrant-lxc (1.2.1)
vagrant-scp (0.5.7)
vagrant-share (1.1.5, system)
vagrant-winrm (0.7.0)
$ time make virtualbox/eval-win81x64-enterprise
rm -rf output-virtualbox-iso
mkdir -p box/virtualbox
packer build -only=virtualbox-iso -var 'cm=nocm' -var 'version=1.0.4' -var 'update=false' -var 'headless=false' -var "shutdown_command=shutdown /s /t 10
/f /d p:4:1 /c Packer_Shutdown" -var "iso_url=http://download.microsoft.com/download/B/9/9/B999286E-0A47-406D-8B3D-5B5AD7373A4A/9600.16384.WINBLUE_RTM.130821-1623_X64FRE_ENTERPRISE_EVAL_EN-US-IRM_CENA_X64FREE_EN-US_DV5.ISO" -var "iso_checksum=73321fa912305e5a16096ef62380a91ee1f112da" eval-win81x64-enterprise.json
virtualbox-iso output will be in this color.
==> virtualbox-iso: Downloading or copying Guest additions
virtualbox-iso: Downloading or copying: file:///usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: Downloading or copying ISO
virtualbox-iso: Downloading or copying: http://download.microsoft.com/download/B/9/9/B999286E-0A47-406D-8B3D-5B5AD7373A4A/9600.16384.WINBLUE_RTM.130821-1623_X64FRE_ENTERPRISE_EVAL_EN-US-IRM_CENA_X64FREE_EN-US_DV5.ISO
==> virtualbox-iso: Creating floppy disk...
virtualbox-iso: Copying: floppy/00-run-all-scripts.cmd
virtualbox-iso: Copying: floppy/01-install-wget.cmd
virtualbox-iso: Copying: floppy/_download.cmd
virtualbox-iso: Copying: floppy/_packer_config.cmd
virtualbox-iso: Copying: floppy/disablewinupdate.bat
virtualbox-iso: Copying: floppy/eval-win81x64-enterprise/Autounattend.xml
virtualbox-iso: Copying: floppy/fixnetwork.ps1
virtualbox-iso: Copying: floppy/install-winrm.cmd
virtualbox-iso: Copying: floppy/oracle-cert.cer
virtualbox-iso: Copying: floppy/passwordchange.bat
virtualbox-iso: Copying: floppy/powerconfig.bat
virtualbox-iso: Copying: floppy/update.bat
virtualbox-iso: Copying: floppy/zz-start-sshd.cmd
==> virtualbox-iso: Creating virtual machine...
==> virtualbox-iso: Creating hard drive...
==> virtualbox-iso: Attaching floppy disk...
==> virtualbox-iso: Creating forwarded port mapping for SSH (host port 3653)
==> virtualbox-iso: Executing custom VBoxManage commands...
virtualbox-iso: Executing: modifyvm eval-win81x64-enterprise --memory 1536
virtualbox-iso: Executing: modifyvm eval-win81x64-enterprise --cpus 1
virtualbox-iso: Executing: setextradata eval-win81x64-enterprise VBoxInternal/CPUM/CMPXCHG16B 1
==> virtualbox-iso: Starting the virtual machine...
==> virtualbox-iso: Waiting 10s for boot...
==> virtualbox-iso: Typing the boot command...
==> virtualbox-iso: Waiting for WinRM to become available...
==> virtualbox-iso: Connected to WinRM!
==> virtualbox-iso: Uploading VirtualBox version info (5.0.14)
==> virtualbox-iso: Uploading VirtualBox guest additions ISO...
==> virtualbox-iso: Unregistering and deleting virtual machine...
==> virtualbox-iso: Deleting output directory...
Build 'virtualbox-iso' errored: Error uploading guest additions: Error uploading file to $env:TEMP\winrmcp-56c5c20b-fded-abe5-dcc3-e3256d4b3f85.tmp: http error: 401 -
==> Some builds didn't complete successfully and had errors:
--> virtualbox-iso: Error uploading guest additions: Error uploading file to $env:TEMP\winrmcp-56c5c20b-fded-abe5-dcc3-e3256d4b3f85.tmp: http error: 401 -
==> Builds finished but no artifacts were created.
make: *** [box/virtualbox/eval-win81x64-enterprise-nocm-1.0.4.box] Erreur 1
$ time make virtualbox/eval-win7x64-enterprise
rm -rf output-virtualbox-iso
mkdir -p box/virtualbox
packer build -only=virtualbox-iso -var 'cm=nocm' -var 'version=1.0.4' -var 'update=false' -var 'headless=false' -var "shutdown_command=shutdown /s /t 10 /f /d p:4:1 /c Packer_Shutdown" -var "iso_url=http://care.dlservice.microsoft.com/dl/download/evalx/win7/x64/EN/7600.16385.090713-1255_x64fre_enterprise_en-us_EVAL_Eval_Enterprise-GRMCENXEVAL_EN_DVD.iso" -var "iso_checksum=15ddabafa72071a06d5213b486a02d5b55cb7070" eval-win7x64-enterprise.json
virtualbox-iso output will be in this color.
==> virtualbox-iso: Downloading or copying Guest additions
virtualbox-iso: Downloading or copying: file:///usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: Downloading or copying ISO
virtualbox-iso: Downloading or copying: http://care.dlservice.microsoft.com/dl/download/evalx/win7/x64/EN/7600.16385.090713-1255_x64fre_enterprise_en-us_EVAL_Eval_Enterprise-GRMCENXEVAL_EN_DVD.iso
==> virtualbox-iso: Creating floppy disk...
virtualbox-iso: Copying: floppy/00-run-all-scripts.cmd
virtualbox-iso: Copying: floppy/01-install-wget.cmd
virtualbox-iso: Copying: floppy/_download.cmd
virtualbox-iso: Copying: floppy/_packer_config.cmd
virtualbox-iso: Copying: floppy/disablewinupdate.bat
virtualbox-iso: Copying: floppy/fixnetwork.ps1
virtualbox-iso: Copying: floppy/install-winrm.cmd
virtualbox-iso: Copying: floppy/networkprompt.bat
virtualbox-iso: Copying: floppy/oracle-cert.cer
virtualbox-iso: Copying: floppy/passwordchange.bat
virtualbox-iso: Copying: floppy/powerconfig.bat
virtualbox-iso: Copying: floppy/upgrade-wua.bat
virtualbox-iso: Copying: floppy/win7x64-enterprise/Autounattend.xml
virtualbox-iso: Copying: floppy/zz-start-sshd.cmd
==> virtualbox-iso: Creating virtual machine...
==> virtualbox-iso: Creating hard drive...
==> virtualbox-iso: Attaching floppy disk...
==> virtualbox-iso: Creating forwarded port mapping for SSH (host port 3861)
==> virtualbox-iso: Executing custom VBoxManage commands...
virtualbox-iso: Executing: modifyvm eval-win7x64-enterprise --memory 2048
virtualbox-iso: Executing: modifyvm eval-win7x64-enterprise --cpus 1
==> virtualbox-iso: Starting the virtual machine...
==> virtualbox-iso: Waiting 10s for boot...
==> virtualbox-iso: Typing the boot command...
==> virtualbox-iso: Waiting for WinRM to become available...
==> virtualbox-iso: Timeout waiting for WinRM.
==> virtualbox-iso: Unregistering and deleting virtual machine...
==> virtualbox-iso: Deleting output directory...
Build 'virtualbox-iso' errored: Timeout waiting for WinRM.
==> Some builds didn't complete successfully and had errors:
--> virtualbox-iso: Timeout waiting for WinRM.
==> Builds finished but no artifacts were created.
make: *** [box/virtualbox/eval-win7x64-enterprise-nocm-1.0.4.box] Erreur 1
real 167m5.185s
user 0m14.044s
$ time make virtualbox/eval-win81x64-enterprise
rm -rf output-virtualbox-iso
mkdir -p box/virtualbox
packer build -only=virtualbox-iso -var 'cm=nocm' -var 'version=1.0.4' -var 'update=false' -var 'headless=false' -var "shutdown_command=shutdown /s /t 10
/f /d p:4:1 /c Packer_Shutdown" -var "iso_url=http://download.microsoft.com/download/B/9/9/B999286E-0A47-406D-8B3D-5B5AD7373A4A/9600.16384.WINBLUE_RTM.130821-1623_X64FRE_ENTERPRISE_EVAL_EN-US-IRM_CENA_X64FREE_EN-US_DV5.ISO" -var "iso_checksum=73321fa912305e5a16096ef62380a91ee1f112da" eval-win81x64-enterprise.json
virtualbox-iso output will be in this color.
==> virtualbox-iso: Downloading or copying Guest additions
virtualbox-iso: Downloading or copying: file:///usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: Downloading or copying ISO
virtualbox-iso: Downloading or copying: http://download.microsoft.com/download/B/9/9/B999286E-0A47-406D-8B3D-5B5AD7373A4A/9600.16384.WINBLUE_RTM.130821-1623_X64FRE_ENTERPRISE_EVAL_EN-US-IRM_CENA_X64FREE_EN-US_DV5.ISO
==> virtualbox-iso: Creating floppy disk...
virtualbox-iso: Copying: floppy/00-run-all-scripts.cmd
virtualbox-iso: Copying: floppy/01-install-wget.cmd
virtualbox-iso: Copying: floppy/_download.cmd
virtualbox-iso: Copying: floppy/_packer_config.cmd
virtualbox-iso: Copying: floppy/disablewinupdate.bat
virtualbox-iso: Copying: floppy/eval-win81x64-enterprise/Autounattend.xml
virtualbox-iso: Copying: floppy/fixnetwork.ps1
virtualbox-iso: Copying: floppy/install-winrm.cmd
virtualbox-iso: Copying: floppy/oracle-cert.cer
virtualbox-iso: Copying: floppy/passwordchange.bat
virtualbox-iso: Copying: floppy/powerconfig.bat
virtualbox-iso: Copying: floppy/update.bat
virtualbox-iso: Copying: floppy/zz-start-sshd.cmd
==> virtualbox-iso: Creating virtual machine...
==> virtualbox-iso: Creating hard drive...
==> virtualbox-iso: Attaching floppy disk...
==> virtualbox-iso: Creating forwarded port mapping for SSH (host port 2494)
==> virtualbox-iso: Executing custom VBoxManage commands...
virtualbox-iso: Executing: modifyvm eval-win81x64-enterprise --memory 1536
virtualbox-iso: Executing: modifyvm eval-win81x64-enterprise --cpus 1
virtualbox-iso: Executing: setextradata eval-win81x64-enterprise VBoxInternal/CPUM/CMPXCHG16B 1
==> virtualbox-iso: Starting the virtual machine...
==> virtualbox-iso: Waiting 10s for boot...
==> virtualbox-iso: Typing the boot command...
==> virtualbox-iso: Waiting for WinRM to become available...
==> virtualbox-iso: Connected to WinRM!
==> virtualbox-iso: Uploading VirtualBox version info (5.0.14)
==> virtualbox-iso: Uploading VirtualBox guest additions ISO...
==> virtualbox-iso: Unregistering and deleting virtual machine...
==> virtualbox-iso: Deleting output directory...
Build 'virtualbox-iso' errored: Error uploading guest additions: Error uploading file to $env:TEMP\winrmcp-56c7258f-cc3b-ddf1-b373-eea683167461.tmp: http error: 401 -
==> Some builds didn't complete successfully and had errors:
--> virtualbox-iso: Error uploading guest additions: Error uploading file to $env:TEMP\winrmcp-56c7258f-cc3b-ddf1-b373-eea683167461.tmp: http error: 401 -
==> Builds finished but no artifacts were created.
make: *** [box/virtualbox/eval-win81x64-enterprise-nocm-1.0.4.box] Erreur 1
See https://github.com/packer-community/packer-windows-plugins/issues/25
I just built this image this morning without issues. Try the latest code in this repo with the new packer.
|
gharchive/issue
| 2016-02-19T20:15:02 |
2025-04-01T04:56:12.021389
|
{
"authors": [
"icnocop",
"juju4",
"tas50"
],
"repo": "boxcutter/windows",
"url": "https://github.com/boxcutter/windows/issues/61",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
121941215
|
Field moisture 4C 10/2 high CO2 variability
The single highest s.d. between CO2 measurements in the entire experiment was on 10/2, in the 4 C chamber, field moisture treatment:
Date Treatment Temperature CO2_ppm_s_sd CO2_ppm_s incday
1 2015-10-02 Field moisture 4 26.13742 26.08171 33
samplenums Num
1 843 844 845 846 847 848 849 850 851 852 853 854 1
This can be seen here (labeled as 1):
We use our new script to 'zoom in' on this high-variability observation. First, here are the observations making up that mean and sd:
And here are the raw data that generated those flux estimates:
So: the large variability we see in the first graph is due to the initial measurements made on two cores (AL 8 and AL 20), sample numbers 845 and 848 respectively, which are ~2x larger than any other 4 C field moisture measurement made across the whole experiment. How do we want to handle this? Run a formal outlier test on these puppies? @apeyton
Previous comment by @apeyton :
The variability in CO2 on 10/2 in the 4C chamber appears to be driven by cores AL8 and AL20. AL20 'appears' to be more concentrated in clay/clay-sized particles and has a slower filtration rate than other cores. When I wet it back to its field moist water content from the top of the core, water added pools and will often not percolate completely through the core for more than 24 hours. I say 'appear' only because we can not say if it is more clayey than other cores until end-of-incubation destructive analysis. As for why AL8 is so variable, I can't say. I have not observed any marked differences in AL 8 relative to other cores.
Still not sure what to do here. The issue is that we have two samplenums (845 and 848, cores AL 8 and AL 20 respectively) that have 3x higher fluxes than other cores on that date, and also 3x higher than themselves (i.e. they were measured twice on that date):
samplenum DATETIME Core CO2_ppm_s
<int> <time> <chr> <dbl>
1 843 2015-10-02 23:15:33 AL 25 4.618766
2 844 2015-10-02 23:17:33 AL 36 16.269834
3 845 2015-10-02 23:19:33 AL 8 74.869561
4 846 2015-10-02 23:21:33 AL 7 38.270162
5 847 2015-10-02 23:23:33 AL 29 24.725795
6 848 2015-10-02 23:25:32 AL 20 75.443966
7 849 2015-10-02 23:27:32 AL 25 0.000000
8 850 2015-10-02 23:29:32 AL 36 5.754149
9 851 2015-10-02 23:31:32 AL 8 34.345918
10 852 2015-10-02 23:33:31 AL 7 3.105452
11 853 2015-10-02 23:35:32 AL 29 8.733848
12 854 2015-10-02 23:37:31 AL 20 26.837440
They're not failing the outlier tests because group flux is 26 ± 22 (the m.a.d.), and we're allowing up to 5x the m.a.d. And in context they don't look so terrible.
@apeyton thoughts? If they're not failing our formal outlier test, I'm loath to remove these couple of high fluxes just because 'we know that can't be right'.
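For anyone reproducing the screening rule, a rough sketch of the criterion follows. The project code is in R, but the idea is language-agnostic; note this uses the raw median and m.a.d., whereas the numbers quoted above center on the group mean and use R's scaled mad(), so the exact values differ slightly. The flux values are the CO2_ppm_s column from the table above.
import numpy as np
flux = np.array([4.62, 16.27, 74.87, 38.27, 24.73, 75.44,
                 0.00, 5.75, 34.35, 3.11, 8.73, 26.84])  # CO2_ppm_s from the table above
med = np.median(flux)
mad = np.median(np.abs(flux - med))
is_outlier = np.abs(flux - med) > 5 * mad  # the 5x m.a.d. cutoff discussed above
print(med, mad, is_outlier.any())  # even the ~75 fluxes stay inside the cutoff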
I think this is now addressed (see issue #44). Closing.
|
gharchive/issue
| 2015-12-13T20:19:56 |
2025-04-01T04:56:12.057059
|
{
"authors": [
"bpbond"
],
"repo": "bpbond/cpcrw_incubation",
"url": "https://github.com/bpbond/cpcrw_incubation/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1143952251
|
Suggestion for better latex svg styling
Right now all latex fragments that are exported to svg are displayed centered on a new line:
It would be nice if they could be displayed inline like the emacs latex preview:
This can actually be achieved with a simple css snippet:
img.org-svg {
display: inline-block;
vertical-align: middle;
}
It will then be displayed more like the emacs preview (and is, in my opinion, more readable).
I don't know where among the .scss files under assets/ this snippet would belong, otherwise I would open a pull request.
Feel free to add this if you also like the inline latex.
👋 Ah, it looks nicer indeed!
As a workaround, can you try adding your custom SCSS under assets/scss/override/_custom.scss?
In the meantime, can you share your org-mode file?
I will try locally as I am concerned that generalizing for all org-svg may not fit everyone. Afaik, the theme is centering all img, which is something that I can always leave as an option. I believe it was an option in the hugo-book theme (the theme I forked from).
Out of curiosity, is it possible to generate the following .md file with KaTex inside: https://github.com/alex-shpak/hugo-book/edit/master/exampleSite/content/docs/shortcodes/katex.md
I am asking this because that .md file renders nicely: https://hugo-book-demo.netlify.app/docs/shortcodes/katex/
As a workaround, can you try adding your custom SCSS under assets/scss/override/_custom.scss?
Yes, that is where I put it for now for me.
Afaik, the theme is centering all img which is something that I can always leave as an option.
The provided css snippet would only inline the org-svg css class which (AFAIK) is only generated when latex is exported to svg. It does not affect other images; they are still centered.
In my case using KaTex is not really an option, because I rely on various kinds of latex packages which are not supported by mathjax or katex, so the best bet for me is to export the latex to svg.
In the meantime, can you share your org-mode file?
Sure!
:PROPERTIES:
:ID: 6a76068e-87cc-4751-91e0-0523a290f118
:END:
#+latex_header: \usepackage{qcircuit}
#+latex_header: \usepackage{braket}
#+latex_header: \usepackage{blochsphere}
#+options: tex:dvisvgm
#+filetags: :quantum-computing:
#+title: Hadamard Gate
$$\Qcircuit @C=1em @R=.7em { & \gate{H} & \qw}$$ where $$H = \frac 1 {\sqrt{2}} \begin{pmatrix}1 &1\\1 &-1\end{pmatrix}$$
\begin{equation*}\begin{tikzpicture}[line cap=round, line join=round, >=Triangle]
\clip(-2.5,-2.7) rectangle (2.66,2.58);
\draw(0,0) circle (2cm);
\draw [rotate around={0.:(0.,0.)},dash pattern=on 3pt off 3pt] (0,0) ellipse (2cm and 0.6cm);
\draw [->] (0,0) -- (0,2.5);
\draw [->] (0,0) -- (-1,-1);
\draw [->] (0,0) -- (2.5,0);
\draw (-1.01,-0.9) node[anchor=north west] {$x$};
\draw (2.2,0.5) node[anchor=north west] {$y$};
\draw (-0.0,2.6) node[anchor=north west] {$z$};
\draw (-0.5,-0.6) node[anchor=south east] {$\ket{+}$};
\draw (0.6, 0.6) node[anchor=south west] {$\ket{-}$};
\draw (-0.65,2.6) node[anchor=north west] {$\ket{0}$};
\draw (-0,-2.05) node[anchor=north] {$\ket{1}$};
\scriptsize
\draw [fill] (0, 2) circle (2pt);
\draw [fill] (0,-2) circle (2pt);
\draw [fill] (-0.58,-0.58) circle (2pt);
\draw [fill] ( 0.58, 0.58) circle (2pt);
\end{tikzpicture}\end{equation*}
* Properties
- Self inverse :: $H = H^\dagger$
- The Hadamard gate transforms the $\ket{0}$ and $\ket{1}$ basis states
into equal superpositions over them.
$$H \ket{0} = \frac 1 {\sqrt{2}} \ket{0} + \frac 1 {\sqrt{2}} \ket{1}
= \ket{+}$$
$$H \ket{1} = \frac 1 {\sqrt{2}} \ket{0} - \frac 1 {\sqrt{2}} \ket{1}
= \ket{-}$$
Feel free to add it at the end of explorer-hugo-theme/assets/scss/components/_markdown.scss.
IMO the img styling in this theme is too opinionated. Ideally it would be great to add custom css to exported img/svg (e.g., .align-center, .align-left) but unfortunately exporting org to markdown/html can be fairly limited.
Thank you for using the theme, let me know if you have more feedback!
which (AFAIK) is only generated when latex is exported to svg
I think it is also possible to export plantuml to svg but they will look fine as they will be aligned left.
Thank you for the answers. Feel free to close this if this is not something you want to add.
I also want to add:
If you also want to have centered latex equations you can wrap them in \begin{equation} or \begin{align} or similar; this will wrap them in a <div class="equation-container"> on export, which you can then style, for example:
.equation-container {
text-align: center;
}
Oops, mis-pressed the close button.
Thank you for the answers. Feel free to close this if this is not something you want to add.
I do intend to have a nicer experience out-of-the-box for org-mode. Can you check version v0.3.0?
This version also removes the opinionated 85% width on images, hopefully it doesn't break much 😅
Fixed and released under https://github.com/bphenriques/explorer-hugo-theme/releases/tag/v0.3.0
Feel free to re-open if you find any issues or if you have more feedback
|
gharchive/issue
| 2022-02-18T23:42:39 |
2025-04-01T04:56:12.069673
|
{
"authors": [
"FelixBrendel",
"bphenriques"
],
"repo": "bphenriques/explorer-hugo-theme",
"url": "https://github.com/bphenriques/explorer-hugo-theme/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
309562845
|
issue #3 correction
Correction of the following problem:
Cursor in tag will output class/id though search/cmd+p dialog has focus #3
To simulate: place the cursor within an html tag, then use cmd+p and type a dot (.); this will output class="" into the previously selected html tag even though it's not in focus.
I just checked, it does work! Thank you sir! Will release soon.
0.7.0 published!!! This is awesome
(Nevermind me I was trying to historically add you as a reviewer. Ignore that)
@bradleyflood hehe no worries! this is a good-one, all the dot files used to trigger the classes.. glad it's fixed
|
gharchive/pull-request
| 2018-03-28T23:02:10 |
2025-04-01T04:56:12.100681
|
{
"authors": [
"alexfabianoricioli",
"bradleyflood",
"revelt"
],
"repo": "bradleyflood/auto-id-class",
"url": "https://github.com/bradleyflood/auto-id-class/pull/13",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
218550875
|
Roadmap
Here is the official todolist that what is planned. For new ideas open new issues.
[ ] Translation to german
[ ] Timed Jobs - Transfer from server x, some files y at given time z
[ ] Goto beta phase when enough feedback is in
I know this is an ftp client, but does it (or could it) have sftp support?
It already has. I should mention that somewhere :)
Thanks!
|
gharchive/issue
| 2017-03-31T16:19:07 |
2025-04-01T04:56:12.112547
|
{
"authors": [
"AdrianKoshka",
"brainfoolong"
],
"repo": "brainfoolong/web-ftp-client",
"url": "https://github.com/brainfoolong/web-ftp-client/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
717604284
|
infinite plugin overrides fastSwipe behavior
Describe the bug
infinite plugin overrides fastSwipe behavior.
To Reproduce
Steps to reproduce the behavior:
Go to https://brainhubeu.github.io/react-carousel/docs/examples/swipingSlides
Swipe left/right, observe fastSwipe behavior not working as expected
Remove 'infinite' from example, observe fastSwipe behavior working
Expected behavior
fastSwipe + infinite should be compatible
👍
|
gharchive/issue
| 2020-10-08T19:27:28 |
2025-04-01T04:56:12.132238
|
{
"authors": [
"bmathews",
"haniotis"
],
"repo": "brainhubeu/react-carousel",
"url": "https://github.com/brainhubeu/react-carousel/issues/651",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
372359025
|
cross is flickering
Because the cross is drawn in an in-memory canvas at native resolution, it gets resampled to the screen on the fly.
This sometimes causes the cross to be blurry, or even disappear!
This is very uncommon, and impossible to fix. Closing.
|
gharchive/issue
| 2018-10-21T21:29:47 |
2025-04-01T04:56:12.133284
|
{
"authors": [
"pbellec"
],
"repo": "brainsprite/brainsprite",
"url": "https://github.com/brainsprite/brainsprite/issues/49",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
982728763
|
Autodiscovered light now failing even with fixed as mode
This entity used to work, but after a recent update it is failing to create the power entity.
- platform: powercalc
entity_id: light.color_temperature_light_1 # Pantry WW/CW LED Strip
standby_usage: 1.8
fixed:
power: 8.8
Logs:
2021-08-30 22:05:35 DEBUG (MainThread) [custom_components.powercalc.model_discovery] Auto discovered Hue model for entity light.color_temperature_light_1: (manufacturer=GLEDOPTO, model=GL-C-006)
2021-08-30 22:05:35 INFO (MainThread) [custom_components.powercalc.sensor] Model not found in library light.color_temperature_light_1: ('Model not supported', 'GL-C-006')
The autodiscovery is correct but the creation should not fail as I am specifying a fixed mode. Same thing if linear is used. The failure should only occur if LUT is specified or nothing is specified.
Thanks for reporting. You are correct, this should only fail in LUT mode. This must have broken during the many changes and cleanups I made in v0.5.0.
Will have a look into this tomorrow.
Should be resolved with 0.5.1
Yes fixed. Thanks!
|
gharchive/issue
| 2021-08-30T12:15:53 |
2025-04-01T04:56:12.170172
|
{
"authors": [
"OzGav",
"bramstroker"
],
"repo": "bramstroker/homeassistant-powercalc",
"url": "https://github.com/bramstroker/homeassistant-powercalc/issues/151",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
182355795
|
Can you do it with this thing?
So, from what I understood from reading the forum and digging around, I found out that you turn the USB flash drive with the 2303 into a development board, but what if you buy this thing:
http://www.ebay.co.uk/itm/BadUsb-Beetle-Bad-USB-ATMEGA32U4-Development-Board-Module-Arduino-Leonardo-R3-/272382496885?hash=item3f6b431075:g:s~MAAOSw4shX3-CQ
How can I make it into a badusb/usbdriveby/USB Rubber Ducky/Psychson pen drive? I don't know if this is the right place to ask, but any further information will be appreciated.
Watch this this https://www.youtube.com/watch?v=MIXeYL1iCDA
It uses the same ATMEGA32U4
|
gharchive/issue
| 2016-10-11T19:44:38 |
2025-04-01T04:56:12.195943
|
{
"authors": [
"HomeFork",
"christofersimbar"
],
"repo": "brandonlw/Psychson",
"url": "https://github.com/brandonlw/Psychson/issues/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
110293603
|
Script fails when there are no active users to pick
At least that's what I'm guessing this means at first glance:
Selecting user at random (queue length was 3)
Traceback (most recent call last):
File "slackbotExercise.py", line 282, in <module>
main()
File "slackbotExercise.py", line 277, in main
assignExercise(bot, exercise)
File "slackbotExercise.py", line 205, in assignExercise
winners = [selectUser(bot, exercise) for i in range(bot.num_people_per_callout)]
File "slackbotExercise.py", line 114, in selectUser
return active_users[random.randrange(0, len(active_users))]
File "/usr/lib/python2.6/random.py", line 204, in randrange
raise ValueError, "empty range for randrange() (%d,%d, %d)" % (istart, istop, width)
ValueError: empty range for randrange() (0,0, 0)
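A small guard along these lines would avoid the crash; this is only a sketch with assumed names, not the actual bot code:
import random

def select_user(active_users):
    """Sketch: pick a random active user, or None when nobody is active."""
    if not active_users:
        return None
    return random.choice(active_users)

# Caller sketch: skip the callout for this round when there is nobody to pick.
winner = select_user([])
if winner is None:
    print("No active users right now, skipping this exercise callout.")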
:+1:
|
gharchive/issue
| 2015-10-07T18:59:40 |
2025-04-01T04:56:12.197326
|
{
"authors": [
"StevenNunez",
"slinlee"
],
"repo": "brandonshin/slackbot-workout",
"url": "https://github.com/brandonshin/slackbot-workout/issues/25",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1251497733
|
A typo in Appendix F
I believe the third-to-last line here should read
target <-data(t+NK:t+(N+1)K)
It's easy to see this if we let N=1
thanks a lot for spotting this @MilkshakeForReal! you are absolutely right!
|
gharchive/issue
| 2022-05-28T06:44:47 |
2025-04-01T04:56:12.199163
|
{
"authors": [
"MilkshakeForReal",
"brandstetter-johannes"
],
"repo": "brandstetter-johannes/MP-Neural-PDE-Solvers",
"url": "https://github.com/brandstetter-johannes/MP-Neural-PDE-Solvers/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
361747518
|
Update scripts
Also snuck an update of the Sublime Text project file in there.
The build is broken on Travis, due to cortex-m-rt's new minimum requirement. I'm going to leave this as is for now, as it's going to unbreak itself with the next stable release.
|
gharchive/pull-request
| 2018-09-19T13:23:45 |
2025-04-01T04:56:12.201209
|
{
"authors": [
"hannobraun"
],
"repo": "braun-robotics/rust-lpc82x",
"url": "https://github.com/braun-robotics/rust-lpc82x/pull/52",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
}
|
2104309548
|
Ship first iteration of Breadboard doc site
Here are the sections that we'll need. For each section: complete, edit, check for accuracy:
[ ] Kits inputs/outputs documentation (stretch)
[ ] New syntax documentation (stretch)
Split out into a bunch of issues:
https://github.com/breadboard-ai/breadboard/issues/524
https://github.com/breadboard-ai/breadboard/issues/523
https://github.com/breadboard-ai/breadboard/issues/522
https://github.com/breadboard-ai/breadboard/issues/521
https://github.com/breadboard-ai/breadboard/issues/520
https://github.com/breadboard-ai/breadboard/issues/519
|
gharchive/issue
| 2024-01-28T19:15:05 |
2025-04-01T04:56:12.573243
|
{
"authors": [
"aomarks",
"dglazkov"
],
"repo": "breadboard-ai/breadboard",
"url": "https://github.com/breadboard-ai/breadboard/issues/510",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
702149744
|
Support LW on day component
Support Quartz syntax "LW" as "last weekday of the month"
Example:
0 14 LW * ?
Next 5 executions from today:
09/30/2020 14:00:00
10/30/2020 14:00:00
11/30/2020 14:00:00
12/31/2020 14:00:00
01/29/2021 14:00:00
Reference from quartz-scheduler docs:
The 'L' and 'W' characters can also be combined for the day-of-month expression to yield 'LW', which translates to "last weekday of the month".
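For reference, the intended semantics can be sketched in plain JavaScript; this only illustrates what "last weekday of the month" means and is not later's implementation:
// Step back from the last calendar day of the month until we hit Monday-Friday.
function lastWeekdayOfMonth(year, monthIndex) {
  const d = new Date(Date.UTC(year, monthIndex + 1, 0)); // last day of the month
  while (d.getUTCDay() === 0 || d.getUTCDay() === 6) {
    d.setUTCDate(d.getUTCDate() - 1);
  }
  return d;
}

console.log(lastWeekdayOfMonth(2020, 11)); // 2020-12-31 (Thursday)
console.log(lastWeekdayOfMonth(2021, 0));  // 2021-01-29 (Friday)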
Expressing support for this feature. "LW" currently does not work as expected.
|
gharchive/issue
| 2020-09-15T18:19:49 |
2025-04-01T04:56:12.595208
|
{
"authors": [
"chrisszeluga",
"thefat32"
],
"repo": "breejs/later",
"url": "https://github.com/breejs/later/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1643449224
|
Add calendar support for customer DOB
Try to use the native input field with minimum js code.
Implmented in 2.5.0
|
gharchive/issue
| 2023-03-28T08:24:39 |
2025-04-01T04:56:12.596320
|
{
"authors": [
"vovayatsyuk"
],
"repo": "breezefront/module-breeze",
"url": "https://github.com/breezefront/module-breeze/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2303373537
|
Add support for other intervals
Add support for day/hour/month interval instead of hardcoded day
Hi, will you be reviewing this pull request anytime soon? I'm just wondering if I should publish my own package or wait for the request to be accepted.
Thanks
I lost track of this one, sorry! Let me get back to it next week, does that work for you?
No problem at all! Take your time.
Thanks
Ok, I managed to take a look at it! So, in hindsight, I think this package should have been unaware of the time period, making this problem go away in full. It should just render a collection of points, regardless of the meaning of the y-axis (because the y-axis is never visually shown anyway, that's the nature of a sparkline).
I actually planned on making a new major version of this package with that change, but I never came to it. I might in the future though, especially if I know people are still using it.
So, I don't think I wanna take the approach of this PR, but I agree it should be improved. I'll try to look at it this week :)
I've tagged v2, which is totally agnostic about periods: https://github.com/brendt/php-sparkline/releases/tag/2.0.0
|
gharchive/pull-request
| 2024-05-17T18:45:15 |
2025-04-01T04:56:12.614237
|
{
"authors": [
"brendt",
"tito10047"
],
"repo": "brendt/php-sparkline",
"url": "https://github.com/brendt/php-sparkline/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1519503584
|
Request-Reply Microservices / Remote Procedure Calls
Is your feature request related to a problem? Please describe.
From the 2023 roadmap, the system doesn't currently support request-response interactions which limit its use in distributed systems. For example, we may want to centralize capabilities by deploying Substation as a node in a deployment where the node only performs one action (e.g., receive IP address, perform DNS resolution, return DNS results).
Describe the solution you'd like
Add support for the following:
AWS Lambda synchronous invocation
gRPC unary RPC
Describe alternatives you've considered
It's possible to achieve something like this by implementing the caching strategies described here, though that solution is much more complex than what is proposed here.
Additional context
N/A
I can take a stab at this one. I'll work on some protos for the transformers and a dummy implementation! I'll document my thoughts on the naming and the chosen functionality to expose in my PR!
|
gharchive/issue
| 2023-01-04T20:01:17 |
2025-04-01T04:56:12.637484
|
{
"authors": [
"dnelson27",
"jshlbrd"
],
"repo": "brexhq/substation",
"url": "https://github.com/brexhq/substation/issues/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1565188569
|
Portfolio: details popup window
Portfolio: details popup window and Milestone
[X] Implemented popups for both mobile and desktop screen sizes.
[X] Used a JavaScript array to store all information of the work section
[X] Linter and Eslin errors has been checked.
[X] Each project is associated with the required popup window details
Thank you very much
|
gharchive/pull-request
| 2023-02-01T00:12:44 |
2025-04-01T04:56:12.647104
|
{
"authors": [
"brhanuhailu"
],
"repo": "brhanuhailu/portifolio",
"url": "https://github.com/brhanuhailu/portifolio/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
941317349
|
Split out domain name and hosted zone in order to deploy ECS site to a subdomain of the hosted zone
Currently the site can only be deployed to the name of the hosted zone passed in to the stack. There should be (a rough sketch of the split follows this list):
One prop used to look up the hosted zone
One prop used to set the A Record that points to the ALB
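A rough sketch of how the two props could be split, shown with aws-cdk-lib v2 import paths for illustration; the prop and construct names here are assumptions, not the module's final API:
import { Construct } from 'constructs';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import * as route53 from 'aws-cdk-lib/aws-route53';
import * as targets from 'aws-cdk-lib/aws-route53-targets';

export interface SiteDnsProps {
  zoneDomainName: string;  // used only to look up the hosted zone, e.g. "example.com"
  siteDomainName: string;  // the A record pointing at the ALB, e.g. "app.example.com"
  loadBalancer: elbv2.IApplicationLoadBalancer;
}

export class SiteDns extends Construct {
  constructor(scope: Construct, id: string, props: SiteDnsProps) {
    super(scope, id);
    const zone = route53.HostedZone.fromLookup(this, 'Zone', {
      domainName: props.zoneDomainName,
    });
    new route53.ARecord(this, 'SiteAliasRecord', {
      zone,
      recordName: props.siteDomainName,
      target: route53.RecordTarget.fromAlias(
        new targets.LoadBalancerTarget(props.loadBalancer),
      ),
    });
  }
}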
This has been completed in v0.0.25
|
gharchive/issue
| 2021-07-10T20:17:00 |
2025-04-01T04:56:12.651434
|
{
"authors": [
"briancaffey"
],
"repo": "briancaffey/django-cdk",
"url": "https://github.com/briancaffey/django-cdk/issues/50",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
423129279
|
Testing HBase 2.0.0 (working on top of Hadoop 2.7.1)
Hello,
I am trying to test HBase 2.0.0 with YCSB 0.15. HBase can create tables and works fine. However, when I run the command "./bin/ycsb load hbase20 -P workloads/workloada -p columnfamily=family" I get the following warnings and errors (a bit long, excuse me please):
log4j:WARN No appenders could be found for logger (org.apache.htrace.core.Tracer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Starting test.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
com.yahoo.ycsb.DBException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=16, exceptions:
Wed Mar 20 10:57:29 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 10:58:04 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 10:58:38 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 10:59:12 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 10:59:46 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 11:00:20 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 11:00:55 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 11:01:33 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 11:02:17 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 11:03:01 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 11:03:45 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 11:04:29 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 11:05:23 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 11:06:17 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 11:07:11 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
Wed Mar 20 11:08:05 EET 2019, RpcRetryingCaller{globalStartTime=1553068615964, pause=100, maxAttempts=16}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: org.apache.hadoop.hbase.shaded.org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
at com.yahoo.ycsb.db.HBaseClient10.init(HBaseClient10.java:159)
at com.yahoo.ycsb.DBWrapper.init(DBWrapper.java:86)
at com.yahoo.ycsb.ClientThread.run(ClientThread.java:74)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=16, exceptions:
Could you please help? Thanks a lot.
Oylum
That looks like YCSB couldn't find you HBase instance. Did you include pointing it at your local hbase configs?
Closing as nonresponsive. Please reopen if you have more details.
|
gharchive/issue
| 2019-03-20T08:41:52 |
2025-04-01T04:56:13.293248
|
{
"authors": [
"busbey",
"oylumalatli"
],
"repo": "brianfrankcooper/YCSB",
"url": "https://github.com/brianfrankcooper/YCSB/issues/1284",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
136459382
|
Update SuitePath.php
I get this warning: preg_quote() expects parameter 1 to be string, array given in brianium/paratest/src/ParaTest/Runners/PHPUnit/SuitePath.php
This is due to suffix being an empty array in the constructor.
The build is failing on your PR. Not your fault but would you mind having a look at it? Maybe it's quick to fix. Probably one of the latest versions of PHPUnit is acting different on warnings.
@michaelbutler fixed the build, thanks for the contribution ;)
|
gharchive/pull-request
| 2016-02-25T17:42:45 |
2025-04-01T04:56:13.297249
|
{
"authors": [
"MarkVaughn",
"julianseeger"
],
"repo": "brianium/paratest",
"url": "https://github.com/brianium/paratest/pull/198",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
76388675
|
NodeDetail endpoint update operation fails with KeyError
The NodeDetail view (sample url http://127.0.0.1:5000/api/v2/nodes/<node_id> ) currently does not accept update/patch operations. Sending a patch request to an existing node returns a KeyError in Django.
First cause
Primarily, this fails because the update view expects a JSON key is_public , but the serialized field in the request body goes by the name public.
Further causes
A secondary cause of the update failure is that the validated_data dictionary passed to the serializer update method is empty. (an empty dictionary won't have a key named public regardless) The PATCH request would return 200 OK if the KeyError was fixed, but the instance would still not be changed upon submitting the "update".
The failure to update is currently under investigation.
Suggestions
Unit tests are needed that cover node create and delete operations via the API.
I've added tests for PUT and POST, and those work consistently. I'll try PATCH next.
Yup. Replicated the PATCH problem. Let's see what's happening there.
What was happening is that the update function in the serializer wasn't checking whether public was being set in a partial update before trying to validate, at least for the problems I was having with my tests (Fixed in 2ea2bd58a8dddd653565f2082fb97728b0c0127c). I haven't quite figured out how to properly set the description to something not descriptiony yet. Getting a JSON Parse error.
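For illustration, the general shape of such a guard in a DRF serializer; this is a generic sketch with assumed field names, not osf.io's actual serializer:
from rest_framework import serializers

class NodeSerializer(serializers.Serializer):
    # Only touch `public` when the PATCH body actually carries it, so
    # partial updates of other fields don't trip over a missing key.
    public = serializers.BooleanField(required=False)
    description = serializers.CharField(required=False, allow_blank=True)

    def update(self, instance, validated_data):
        if 'public' in validated_data:
            instance.is_public = validated_data['public']
        if 'description' in validated_data:
            instance.description = validated_data['description']
        instance.save()
        return instance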
Okay, to clear the description, you set it to "" rather than null. The JSON error is because the Django docs are a little persnickity about the format of the JSON. It wants to remove the whitespace inside the braces. As near as I can tell, this works properly.
This seems to work for clearing the description- but existing instances can be created via other means that will have a null description.
This leads to a small edge case where the API can fail in a patch request when trying to modify existing valid instances. Separate issue, perhaps...
Fixed in de3d869d8ebf502872143317d00087807e9e6c62
|
gharchive/issue
| 2015-05-14T15:10:40 |
2025-04-01T04:56:13.302378
|
{
"authors": [
"abought",
"brianjgeiger"
],
"repo": "brianjgeiger/osf.io",
"url": "https://github.com/brianjgeiger/osf.io/issues/61",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
29083737
|
Closes #129, allows duplicate values
I broke down and had to add this in! For duplicates to work, you must have the mode set to multi, otherwise it wouldn't make sense.
Documentation changes to follow.
I've included the compiled files too. Since I only added the duplicates option, I don't know why there are so many other changes? If you can figure this out, I can change the commits.
Closing due to conflicts and no time to make it work.
|
gharchive/pull-request
| 2014-03-10T10:12:09 |
2025-04-01T04:56:13.320522
|
{
"authors": [
"jbrooksuk"
],
"repo": "brianreavis/selectize.js",
"url": "https://github.com/brianreavis/selectize.js/pull/324",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
930720867
|
Add a lot of new types based on the API docs
I wanted to open this as a draft to get your thoughts, hopefully it's not too intrusive. I saw your Show HN, and dove right in!
This adds a couple of concepts to the library, as well as a bunch of new types. The biggest changes are the two Meta types, Meta, and ItemsMeta.
These structs contain a lot of the common fields that are a part of the API, as well as implement a simple Get(spotify.API, interface{}) error method that will fetch all current data for the given object. I've added an example of this here.
This also adds the Playlist, Album, Image, and Owner types, as well as fleshes out the Track type a little more
TODO before completing the draft:
[ ] write some docs
[ ] add some testing for the Meta types
There are a few breaking changes that'll require fixing https://github.com/brianstrauch/spotify-cli,
I've got a local copy I've been testing, I can submit that once this merges (since it won't build correctly beforehand, anyway)
This last changeset seems to have gotten a bit out of hand... Let me know how I should proceed here
|
gharchive/pull-request
| 2021-06-26T15:45:51 |
2025-04-01T04:56:13.331548
|
{
"authors": [
"twexler"
],
"repo": "brianstrauch/spotify",
"url": "https://github.com/brianstrauch/spotify/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2639993907
|
Add Open Telemetry Tracing Support
This enables BricksLLM to opt in to OpenTelemetry-instrumented HTTP requests.
add distributed tracing support via OTLP in internal/telemetry/otel_instrumentation.go.
add OTel middleware to trace all incoming requests to the proxy server.
add an OTel round tripper to trace all outgoing requests to third-party LLM providers.
configuration flags keep OTel disabled by default.
If you can help resolve the merge conflicts, this is good to go
|
gharchive/pull-request
| 2024-11-07T06:14:11 |
2025-04-01T04:56:13.338857
|
{
"authors": [
"galileilei",
"spikelu2016"
],
"repo": "bricks-cloud/BricksLLM",
"url": "https://github.com/bricks-cloud/BricksLLM/pull/91",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
281520829
|
Infinite recursion because of overriding Equals
When overriding Equals, if base.Equals is invoked in order to perform the default .NET Equals behavior, the generated JavaScript invokes the override itself instead. This leads to infinite recursion.
Steps To Reproduce
http://deck.net/ <-- Replace with a link to your Deck
using System;
public class Program
{
public static void Main()
{
A a1 = new A(10);
A a2 = new A(10);
A a3 = new A(-10);
Console.WriteLine(a1 == a2);
Console.WriteLine(a1.Equals(a2));
Console.WriteLine(a1.Equals(a3));
}
public class A
{
private int val;
public int Val
{
get { return val; }
}
public A(int v)
{
val = v;
}
public override bool Equals(object o)
{
A a = (A)o;
if (a.Val < 0)
return base.Equals(o);
else
return this.Val == a.Val;
}
}
}
Expected Result
False
True
False
Actual Result
False
True
System.Exception: RangeError: Maximum call stack size exceeded
at Object.is (https://deck.net/resources/js/bridge/bridge.min.js?16.5.0:7:8335)
at Object.cast (https://deck.net/resources/js/bridge/bridge.min.js?16.5.0:7:9857)
at ctor.equals (https://deck.net/RunHandler.ashx?h=1429931346:38:32)
at Object.equals (https://deck.net/resources/js/bridge/bridge.min.js?16.5.0:7:14701)
at ctor.equals (https://deck.net/RunHandler.ashx?h=1429931346:40:35)
at Object.equals (https://deck.net/resources/js/bridge/bridge.min.js?16.5.0:7:14701)
at ctor.equals (https://deck.net/RunHandler.ashx?h=1429931346:40:35)
at Object.equals (https://deck.net/resources/js/bridge/bridge.min.js?16.5.0:7:14701)
at ctor.equals (https://deck.net/RunHandler.ashx?h=1429931346:40:35)
at Object.equals (https://deck.net/resources/js/bridge/bridge.min.js?16.5.0:7:14701)
Hello @Equijano24! Thanks for the report! Please note in order to effectively share your deck example, you should click the "share" button and copy the URL displayed there (which becomes your browser url as soon as you click the button):
We've just merged the fix! It will make it to Bridge next release! The release that will contain this fix is reflected by the milestone bound here (currently 16.6.0).
|
gharchive/issue
| 2017-12-12T20:08:58 |
2025-04-01T04:56:13.345947
|
{
"authors": [
"Equijano24",
"fabriciomurta"
],
"repo": "bridgedotnet/Bridge",
"url": "https://github.com/bridgedotnet/Bridge/issues/3308",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
147969858
|
gem install fails
$ gem install haml-lint
ERROR: While executing gem ... (Gem::RemoteFetcher::FetchError)
bad response Not Found 404 (https://rubygems.global.ssl.fastly.net/quick/Marshal.4.8/haml-lint-0.13.0.gemspec.rz)
haml-lint might be affected by the latest rubygems hack: http://blog.rubygems.org/2016/04/06/gem-replacement-vulnerability-and-mitigation.html
Please push the gem to rubygems to fix this issue.
Ok. I see haml-lint is the old gem. I can switch to haml_lint instead. You may close this issue.
|
gharchive/issue
| 2016-04-13T07:25:25 |
2025-04-01T04:56:13.347694
|
{
"authors": [
"notalex"
],
"repo": "brigade/haml-lint",
"url": "https://github.com/brigade/haml-lint/issues/133",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
678232136
|
Feature Request: Override default units
Title says it all. Would be nice to be able to override the default units. While it did choose the right temperature degrees for me, pressure is using inHg instead of mb/hPa.
I will do this as soon as I have time. Plan is to let the user pick the Display unit for the different unit types when configuring the Integration. This might be a breaking change, requiring to delete and re-add.
Looks like it's working good, at least as far as using MB instead of in/Hg
|
gharchive/issue
| 2020-08-13T07:53:09 |
2025-04-01T04:56:13.350692
|
{
"authors": [
"briis",
"lightmaster"
],
"repo": "briis/meteobridge",
"url": "https://github.com/briis/meteobridge/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1198718877
|
loose -> lose
Thanks :-).
|
gharchive/pull-request
| 2022-04-09T17:18:42 |
2025-04-01T04:56:13.351683
|
{
"authors": [
"brillout",
"louwers"
],
"repo": "brillout/vite-plugin-ssr",
"url": "https://github.com/brillout/vite-plugin-ssr/pull/302",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
741780413
|
prevent seek index misses from consecutive timestamps
@alfred-landrum brought up a valid concern that records with the same timestamp could result in seek index misses: If the record chosen as an entry in the seek index has the same timestamp as the previous record, that previous record would be missed.
This ticket aims to prevent this scenario from happening.
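One way to prevent this is to only cut an index entry when the timestamp strictly increases past the last indexed record, so a run of equal timestamps can never straddle an entry. A small Go sketch of that guard, illustrative only and not zq's actual code:
package seekindex

// An index entry is only written when the timestamp strictly increases past
// the last indexed record, so records sharing a timestamp never get split
// across an entry boundary.
type entry struct {
	ts     int64
	offset int64
}

type builder struct {
	entries []entry
	lastTs  int64
	haveOne bool
}

func (b *builder) maybeEnter(ts, offset int64) {
	if b.haveOne && ts <= b.lastTs {
		return // same (or earlier) timestamp: skip, to avoid splitting the run
	}
	b.entries = append(b.entries, entry{ts, offset})
	b.lastTs = ts
	b.haveOne = true
}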
Note that this may be the cause of, or at least related to, the record count discrepancy in https://github.com/brimsec/zq/issues/1579 .
Note that this may be the cause of, or at least related to, the record count discrepancy in #1579 .
I can verify that this is not the case, pr arriving shortly.
|
gharchive/issue
| 2020-11-12T17:18:10 |
2025-04-01T04:56:13.353917
|
{
"authors": [
"alfred-landrum",
"mattnibs"
],
"repo": "brimsec/zq",
"url": "https://github.com/brimsec/zq/issues/1594",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
592123227
|
Optimize searches based on time using a time index
@mccanne has a branch that implements the scanning part of this. This depends on the work in https://github.com/brimsec/zq/issues/499.
For the scope of this issue, we want to speed up the single bzng file search that exists in Brim right now. In the future (possibly very near future), we're going to work through how this would expand for multi-file spaces, as we're prototyping with zar now.
Fixed in #579. Note that this isn't yet used by zqd, filed #595 as a follow-up to integrate it into zqd.
|
gharchive/issue
| 2020-04-01T18:38:02 |
2025-04-01T04:56:13.355793
|
{
"authors": [
"alfred-landrum",
"aswan",
"philrz"
],
"repo": "brimsec/zq",
"url": "https://github.com/brimsec/zq/issues/498",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
406494466
|
Very slow on Chrome with software WebGL (blacklisted GPU)
WebGL rendering is vveerryy ssllooww in Chrome with software WebGL due to a blacklisted GPU (mid-2010 MacBook Pro with NVidia 320M, running macOS 10.13 and Chrome 73)
Should detect this and switch to the 2d canvas path which is a lot faster.
It should work to add failIfMajorPerformanceCaveat: true back into the options. There may be some circumstances where it would still be faster to use the webgl path, but it doesn't scale up well.
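A sketch of that check using the standard WebGL context attribute; the fallback hook is an assumption:
// Request WebGL but refuse a major-performance-caveat (software) context,
// falling back to the 2d-canvas frame sink instead.
var canvas = document.createElement('canvas');
var options = { failIfMajorPerformanceCaveat: true };
var gl = canvas.getContext('webgl', options) ||
         canvas.getContext('experimental-webgl', options);
if (!gl) {
  // Blacklisted or software-only GPU: use the much faster 2d canvas path here.
  // fallBackTo2dCanvas(canvas); // hypothetical fallback hook
}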
|
gharchive/issue
| 2019-02-04T19:59:28 |
2025-04-01T04:56:13.358378
|
{
"authors": [
"brion"
],
"repo": "brion/yuv-canvas",
"url": "https://github.com/brion/yuv-canvas/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2206365
|
Squirrel!
I am a great tracker.
My pack sent me on a special mission, all by myself.
Have you seen a bird? I am going to find one, and I am on the scent.
I am a great tracker; did I mention that?
+1
|
gharchive/issue
| 2011-11-11T04:49:16 |
2025-04-01T04:56:13.369658
|
{
"authors": [
"nevir",
"rkh"
],
"repo": "brixen/labrador",
"url": "https://github.com/brixen/labrador/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2727572593
|
Issue with SpliceAI Website
For the past week, the SpliceAI website has not been generating results when I input a variant. I have attempted to use the tool with several different variants across multiple browsers (e.g., Chrome, Firefox, Safari), but the issue persists. Some of my colleagues have also tried accessing the site from different computers and networks, and they are encountering the same problem. Is this a known issue? Are there specific troubleshooting steps I can follow to resolve this?
Any guidance or updates would be greatly appreciated. Thank you for your help!
Hi @emmamiz , no there's no known issue currently. Can you provide a screenshot or description of what happens?
Hi Ben,
Thanks for your quick response. Screenshot below. The website looks normal, but this screenshot was taken after I pressed “submit.” As you can see, the bottom half of the screenshot is empty. Nothing loads.
Hmm.. the screenshot didn't go through, but if nothing loads, some error is occurring on the page that I can't reproduce here.
Can you please open the Developer Console in your browser by doing:
and share any error messages you see there?
In chrome, normal Console output looks like:
This is after I click Submit.
I see this issue with every variant I tried, and I tried upwards of 20 (including the example variants listed on the webpage). Example variant here: NM_007294.4:c.4096+1G>A . Screenshot of the page and console output attached.
Thanks .. for some reason your browser isn't able to load a library (TabixIndexedFile) that the latest version of SpliceAI-lookup relies on. I just switched it to a different url. Can you please try hard-reloading the SpliceAI-lookup page
via Command-Shift-R (or Ctrl-Shift-R on windows) and see if it works now?
If not, can you tell me what you see when you go to https://unpkg.com/@gmod/tabix/dist/tabix-bundle.js ?
Fantastic, it works now. Thank you so much!
Great! Thanks for reporting the issue
|
gharchive/issue
| 2024-12-09T16:09:21 |
2025-04-01T04:56:13.391685
|
{
"authors": [
"bw2",
"emmamiz"
],
"repo": "broadinstitute/SpliceAI-lookup",
"url": "https://github.com/broadinstitute/SpliceAI-lookup/issues/87",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
987150348
|
K8S beam runner improvements
Why
Relevant ticket
DAP invokes dataflow in a slightly different fashion than HCA.
This PR
Adds an optional command paramter that allows customization of the k8s container entrypoint
Checklist
[ ] Documentation has been updated as needed.
@quazi-broad yeah, the underlying issue is that the entrypoint in the published DAP docker images is not correct. There is some investigation to be done in ingest-utils to determine why this is the case, but given that K8S gives us the ability to specify a different entrypoint via the command parameter, this will at least get us moving.
|
gharchive/pull-request
| 2021-09-02T20:47:26 |
2025-04-01T04:56:13.395837
|
{
"authors": [
"aherbst-broad"
],
"repo": "broadinstitute/dagster-utils",
"url": "https://github.com/broadinstitute/dagster-utils/pull/26",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2056544273
|
[Feature request]: A separate accessory/switch for specific motion detections
Describe the solution you'd like
Hi Team,
The Homebase S380 is able to differentiate between motion detections (face, human, pet, vehicle).
I would like to have a separate switch/accessory for these specific detected motions to be able to trigger separate Homekit automations for each specific motion.
Is this possible? Can this be added?
Best regards,
Are already available.
@bropat In Homebridge I only see a generic Motion accessory in the Accessories overview. How can I make the specific motion accessory available/usable? I'm looking for a Human motion detection accessory and a pet motion detection accessory. I assume that both would then become visible in the Homebridge accessories overview.
I can't find a way to make them visible in Homebridge (and therefore also not in HomeKit).
I'm using version v2.3.3 of the homebridge-eufy-security plugin.
You must open an issue in the homebridge-eufy-security asking for implementation. This library already offers what you ask
|
gharchive/issue
| 2023-12-26T16:25:46 |
2025-04-01T04:56:13.508198
|
{
"authors": [
"JaFe1968",
"bropat"
],
"repo": "bropat/eufy-security-client",
"url": "https://github.com/bropat/eufy-security-client/issues/434",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1551791113
|
RDR 2 Vulkan more issues
Can't reopen original issue. It renders the menu now but I can't interact with it in any way. Do I need to implement something on my side to make it work? We can move back to the original issue if you reopen it.
I guess demo menu just has no functionality, I'm stupid, closing the issue.
Edit: or maybe it was hooked too early, the point is, my custom menu works as expected, not sure what's the issue with demo was.
hmm usually when games do this (hides cursor or whatever) i'll open a menu or console and then use the menu (ghetto workaround)
Cursor was on the screen, hover effects were there but clicks just didn't come through.
i don't experience it but good for you if its working
|
gharchive/issue
| 2023-01-21T12:44:43 |
2025-04-01T04:56:13.535853
|
{
"authors": [
"JIStream",
"bruhmoment21"
],
"repo": "bruhmoment21/UniversalHookX",
"url": "https://github.com/bruhmoment21/UniversalHookX/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
651299785
|
[IDEA] Collaboration with ScorchCrafter ?
Hi there, your project looks very promising !
I strongly suggest you to establish some kind of collaboration with ScorchCrafter Guitar FX DAW, another great open source VST plugin project.
I do also suggest you to take a look at those other interesting projects that may inspire you:
https://github.com/apohl79/GuitarAmp
https://github.com/kaktus3000/HighGain
https://github.com/andrepxx/go-dsp-guitar
https://github.com/forart/ayemux
Hope that helps !
BTW we've collected guitard DSP projects (yours too) here:
https://github.com/forart/HyMPS/blob/main/GuitarDSPs.md
Hope that inspires !
|
gharchive/issue
| 2020-07-06T07:02:06 |
2025-04-01T04:56:13.539553
|
{
"authors": [
"forart"
],
"repo": "brummer10/FatFrog.lv2",
"url": "https://github.com/brummer10/FatFrog.lv2/issues/1",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
}
|
163876460
|
Update README.md
Turn lib/geo/postgis.ex into a clickable link.
Thanks!
|
gharchive/pull-request
| 2016-07-05T15:18:42 |
2025-04-01T04:56:13.544411
|
{
"authors": [
"bartolsthoorn",
"bryanjos"
],
"repo": "bryanjos/geo",
"url": "https://github.com/bryanjos/geo/pull/42",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
249848362
|
Cleanup code
Cleanup code, refactor, pylint, add testing and code coverage, trim notebooks, convert to scripts where applicable
Clean-up Resume Tailor repo
|
gharchive/issue
| 2017-08-13T02:51:44 |
2025-04-01T04:56:13.548593
|
{
"authors": [
"bryantbiggs"
],
"repo": "bryantbiggs/resume_tailor",
"url": "https://github.com/bryantbiggs/resume_tailor/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1974694416
|
Cannot use Google Provider > 5.0
There's not much we can currently do, because terraform-google-modules/container-vm/google is the one that imposes < 5.0
Just raising it here, so we can track https://github.com/terraform-google-modules/terraform-google-container-vm/issues/116 and update when the update is out.
Thanks for raising this @kpocius
|
gharchive/issue
| 2023-11-02T17:10:10 |
2025-04-01T04:56:13.559299
|
{
"authors": [
"bschaatsbergen",
"kpocius"
],
"repo": "bschaatsbergen/terraform-gce-atlantis",
"url": "https://github.com/bschaatsbergen/terraform-gce-atlantis/issues/128",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
187107707
|
added global FullPath = 1
Hi, thanks for the patch.
This is a known issue, yet I'm still looking for a better solution for it.
|
gharchive/pull-request
| 2016-11-03T16:01:35 |
2025-04-01T04:56:13.565127
|
{
"authors": [
"bsdelf",
"roberbnd"
],
"repo": "bsdelf/bufferhint",
"url": "https://github.com/bsdelf/bufferhint/pull/5",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
496881973
|
BSV hashrate dropped by 60%
After setting up btcpool and connecting a single miner, it only processes one share every 30 seconds and the hashrate has dropped by 60%. Why is this, and is there something I haven't configured correctly?
# how many seconds between two share submit
share_avg_seconds = 10;
# default difficulty (hex)
default_difficulty = "4000";
# max difficulty (hex)
max_difficulty = "4000000000000000";
# min difficulty (hex)
min_difficulty = "40";
# Adjust difficulty once every N second
diff_adjust_period = 900;
Take this configuration as a reference:
Lower the default difficulty by a factor of 10; that will get you a share roughly every 3 seconds. The hashrate will not increase 10x though, because ten low-difficulty shares are equivalent to one high-difficulty share.
default_difficulty = "400";
Hashrate is the sum of the share difficulties over a period of time, multiplied by the number of hash operations needed to reach difficulty 1, divided by the number of seconds in that period. It has no direct relationship with the number of shares: as long as the miner's operating conditions don't change, a drop in the number of submissions means a rise in the submitted difficulty, so the total hashrate stays the same.
https://github.com/btccom/btcpool/wiki/How-to-compute-a-worker's-hashrate-|-如何计算矿工的哈希速率
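Put into a few lines of code, assuming a sha256d coin where difficulty 1 corresponds to roughly 2^32 hashes (the function name is illustrative):
def estimated_hashrate(share_difficulties, window_seconds):
    # hashrate ≈ (sum of accepted share difficulties) * 2**32 / elapsed seconds
    return sum(share_difficulties) * 2**32 / window_seconds

# Ten shares of difficulty 400 in 30 s and one share of difficulty 4000 in 30 s
# give the same estimate:
print(estimated_hashrate([400] * 10, 30) == estimated_hashrate([4000], 30))  # True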
There has been no new response for a long time.
|
gharchive/issue
| 2019-09-23T03:48:58 |
2025-04-01T04:56:13.598634
|
{
"authors": [
"YihaoPeng",
"ajspider",
"rocqina"
],
"repo": "btccom/btcpool",
"url": "https://github.com/btccom/btcpool/issues/377",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
712732945
|
add a gallery with svg_utils examples
What
It would be nice to have more examples that can be downloaded and adapted by the user. These examples should be shown in an easy-to-browse gallery like the one for matplotlib:
https://matplotlib.org/gallery/index.html
How
implement the gallery using the sphinx-gallery addon: https://sphinx-gallery.github.io/stable/index.html (a minimal configuration sketch follows this list)
add more examples to the examples directory
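A minimal sphinx-gallery configuration sketch for docs/conf.py, assuming the example scripts stay in the existing examples directory:
# docs/conf.py (sketch)
extensions = [
    "sphinx_gallery.gen_gallery",
]

sphinx_gallery_conf = {
    "examples_dirs": "../examples",    # source example scripts
    "gallery_dirs": "auto_examples",   # generated gallery pages
}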
Can I work on this? And would it be counted towards hacktoberfest-2021 label?
|
gharchive/issue
| 2020-10-01T10:59:00 |
2025-04-01T04:56:13.619159
|
{
"authors": [
"Aniket-508",
"btel"
],
"repo": "btel/svg_utils",
"url": "https://github.com/btel/svg_utils/issues/46",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
509383489
|
update CONTRIBUTING.md file so each sentence is on it's own line
Addressing issue#35
https://github.com/bthayer2365/git-gud/issues/35
Looks good! Thank you @christinaworkman
|
gharchive/pull-request
| 2019-10-19T03:04:34 |
2025-04-01T04:56:13.620545
|
{
"authors": [
"bthayer2365",
"christinaworkman"
],
"repo": "bthayer2365/git-gud",
"url": "https://github.com/bthayer2365/git-gud/pull/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1881613087
|
What is the scope of data generated by HALO?
According to the content of the article, I believe that the HALO model generates not only disease diagnosis codes but also complete Electronic Health Records (EHR). However, why does this code only import diagnosis files and admission records?
Hello. Yes, the architecture can support any type of medical codes, and we have since experimented with other types of codes to similar effect. However, the experiments in the paper used only diagnosis codes (for simplicity, because our downstream experiments were diagnosis-based, and to match what was present in our other outpatient dataset) and so that is what is included here. We are happy to provide a dataset building script that includes medications and procedures as well if you would like.
Thank you for your response, and I appreciate your willingness to provide the script. If it's convenient for you, could you please send it to my email (htao36@163.com) or upload it to your repository?
Additionally, based on your response, it seems you have attempted generating various types of content independently, such as diagnoses, medications, programs, and more. Have you ever tried combining them all together for generation (diagnosis + medication + program + other)? I'm curious about the results of such an approach.
Also, the paper mentioned about time-series data generation. Is there any seperate script on that?
@huangtao36 Hi. Did the authors send you the script? I would be glad if you can share it.
Hello. Yes, the architecture can support any type of medical codes, and we have since experimented with other types of codes to similar effect. However, the experiments in the paper used only diagnosis codes (for simplicity, because our downstream experiments were diagnosis-based, and to match what was present in our other outpatient dataset) and so that is what is included here. We are happy to provide a dataset building script that includes medications and procedures as well if you would like.
Hello, I am very interested in your work! May I ask for a processing code for medications and procedures? If you are willing to share, please send it to my email: shuijing@stu.xidian.edu.cn .
Thank you very much!
|
gharchive/issue
| 2023-09-05T09:47:52 |
2025-04-01T04:56:13.624893
|
{
"authors": [
"WoodPecker1111",
"btheodorou99",
"huangtao36",
"mirzafarhan7"
],
"repo": "btheodorou99/HALO_Inpatient",
"url": "https://github.com/btheodorou99/HALO_Inpatient/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
664126434
|
Inconsistent result when CAS exists from proxy but not locally.
I have CI cache backed by S3 and here is an HTTP cache log:
2020/07/22 23:54:39 S3 DOWNLOAD envoy-ci-build-cache-us-east-2 public-x64/ac/a559cd7910cbbe821525f48e6efef10964c4243aeff3b6747baffd1e2863c096 OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/d35d521228f07e30b28984179d3435236ace2eddaadaa1526aaea58e39a389ba OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/615324b6d4553bf401f3e20e1d1e7fbb02f99f4d6a672f069157871547b0b4cb OK
2020/07/22 23:54:39 S3 DOWNLOAD envoy-ci-build-cache-us-east-2 public-x64/cas/0e5166cbbaee2f9bcb621398ab8f0c75deb169766d88b9f7b4047f39e2dcef31 OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/f8e5351bad6c3c961fbc247790047c031508b98ae083f8d72483505c8c039931 OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/6a36cd36e694976de00dd7aa55a8731a7fa9e06cde1a855d6409530bcfbd4e40 OK
2020/07/22 23:54:39 S3 DOWNLOAD envoy-ci-build-cache-us-east-2 public-x64/cas/f5ea44e96bcbec7c85633f0ebaecb16cc374ed1b1e2fb29513b76d0f1896289e OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/f8e5351bad6c3c961fbc247790047c031508b98ae083f8d72483505c8c039931 OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/6a36cd36e694976de00dd7aa55a8731a7fa9e06cde1a855d6409530bcfbd4e40 OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/3a56dc577ede5d09612ee47874719a67709bede1d54c7337709ba0f1b34e7eae OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/d35d521228f07e30b28984179d3435236ace2eddaadaa1526aaea58e39a389ba OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/615324b6d4553bf401f3e20e1d1e7fbb02f99f4d6a672f069157871547b0b4cb OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/4069577f6b77e0d8e22150e405f13faf31cc3973735a6613034cfbf88e9f3f09 OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/39cda18dd362ba07607ecb7007f92a4270408b2eefd3a0b6ca4f7a06998f7de2 OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/74a306028b6a3fe5f6635cf35c7b0844e816d6e689f8c9af1e5aebe0193ef4a1 OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/bd62d11533ba6d63328b92369ec905eaa18380badbfc394603feef887388e062 OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/a5bb7efae52fb008c3d3db5aaec798a74815d73bc786efacefc56a13b8f0336c OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/5c458a5e671110255be575f7f58b2e63931f3a7c6b960fa489e67343c87e1d61 OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/5efafabd06c21547fa756512bbf92b3a8b1e9e9a6cf872080b4cadd8821f7c35 OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/400863baa375f1b465e0f72bfe6bc18f6ac13cc065fa9fc66254df9e88e7ff5c OK
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/aad80f9583cb54f43ea70280c68a737f01441f430f96c1f290fd0ee24c979009 OK
2020/07/22 23:54:39 GET 200 127.0.0.1 /cas/0e5166cbbaee2f9bcb621398ab8f0c75deb169766d88b9f7b4047f39e2dcef31
2020/07/22 23:54:39 GET 200 127.0.0.1 /cas/f5ea44e96bcbec7c85633f0ebaecb16cc374ed1b1e2fb29513b76d0f1896289e
2020/07/22 23:54:39 GET 404 127.0.0.1 /cas/d35d521228f07e30b28984179d3435236ace2eddaadaa1526aaea58e39a389ba
2020/07/22 23:54:39 GET 404 127.0.0.1 /cas/615324b6d4553bf401f3e20e1d1e7fbb02f99f4d6a672f069157871547b0b4cb
2020/07/22 23:54:39 GET 404 127.0.0.1 /cas/6a36cd36e694976de00dd7aa55a8731a7fa9e06cde1a855d6409530bcfbd4e40
2020/07/22 23:54:39 GET 404 127.0.0.1 /cas/f8e5351bad6c3c961fbc247790047c031508b98ae083f8d72483505c8c039931
2020/07/22 23:54:39 GET 200 127.0.0.1 /cas/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2020/07/22 23:54:39 S3 DOWNLOAD envoy-ci-build-cache-us-east-2 public-x64/cas/d35d521228f07e30b28984179d3435236ace2eddaadaa1526aaea58e39a389ba OK
2020/07/22 23:54:39 GET 200 127.0.0.1 /cas/d35d521228f07e30b28984179d3435236ace2eddaadaa1526aaea58e39a389ba
2020/07/22 23:54:39 S3 DOWNLOAD envoy-ci-build-cache-us-east-2 public-x64/cas/400863baa375f1b465e0f72bfe6bc18f6ac13cc065fa9fc66254df9e88e7ff5c OK
2020/07/22 23:54:39 GET 200 127.0.0.1 /cas/400863baa375f1b465e0f72bfe6bc18f6ac13cc065fa9fc66254df9e88e7ff5c
2020/07/22 23:54:39 S3 DOWNLOAD envoy-ci-build-cache-us-east-2 public-x64/cas/bd62d11533ba6d63328b92369ec905eaa18380badbfc394603feef887388e062 OK
2020/07/22 23:54:39 GET 200 127.0.0.1 /cas/bd62d11533ba6d63328b92369ec905eaa18380badbfc394603feef887388e062
2020/07/22 23:54:39 S3 DOWNLOAD envoy-ci-build-cache-us-east-2 public-x64/cas/5c458a5e671110255be575f7f58b2e63931f3a7c6b960fa489e67343c87e1d61 OK
2020/07/22 23:54:39 GET 200 127.0.0.1 /cas/5c458a5e671110255be575f7f58b2e63931f3a7c6b960fa489e67343c87e1d61
2020/07/22 23:54:39 S3 DOWNLOAD envoy-ci-build-cache-us-east-2 public-x64/cas/39cda18dd362ba07607ecb7007f92a4270408b2eefd3a0b6ca4f7a06998f7de2 OK
2020/07/22 23:54:39 GET 200 127.0.0.1 /cas/39cda18dd362ba07607ecb7007f92a4270408b2eefd3a0b6ca4f7a06998f7de2
2020/07/22 23:54:39 S3 DOWNLOAD envoy-ci-build-cache-us-east-2 public-x64/cas/615324b6d4553bf401f3e20e1d1e7fbb02f99f4d6a672f069157871547b0b4cb OK
2020/07/22 23:54:39 GET 200 127.0.0.1 /cas/615324b6d4553bf401f3e20e1d1e7fbb02f99f4d6a672f069157871547b0b4cb
2020/07/22 23:54:39 S3 DOWNLOAD envoy-ci-build-cache-us-east-2 public-x64/cas/aad80f9583cb54f43ea70280c68a737f01441f430f96c1f290fd0ee24c979009 OK
2020/07/22 23:54:39 S3 DOWNLOAD envoy-ci-build-cache-us-east-2 public-x64/cas/f8e5351bad6c3c961fbc247790047c031508b98ae083f8d72483505c8c039931 OK
2020/07/22 23:54:39 GET 200 127.0.0.1 /cas/aad80f9583cb54f43ea70280c68a737f01441f430f96c1f290fd0ee24c979009
2020/07/22 23:54:39 GET 200 127.0.0.1 /cas/f8e5351bad6c3c961fbc247790047c031508b98ae083f8d72483505c8c039931
Particularly, for entries like /cas/d35d521228f07e30b28984179d3435236ace2eddaadaa1526aaea58e39a389ba, S3 returns OK :
2020/07/22 23:54:39 S3 CONTAINS envoy-ci-build-cache-us-east-2 public-x64/cas/d35d521228f07e30b28984179d3435236ace2eddaadaa1526aaea58e39a389ba OK
While bazel-remote return 404 to Bazel:
2020/07/22 23:54:39 GET 404 127.0.0.1 /cas/d35d521228f07e30b28984179d3435236ace2eddaadaa1526aaea58e39a389ba
And the download completes later:
2020/07/22 23:54:39 S3 DOWNLOAD envoy-ci-build-cache-us-east-2 public-x64/cas/d35d521228f07e30b28984179d3435236ace2eddaadaa1526aaea58e39a389ba OK
This results in
WARNING: Reading from Remote Cache:
BulkTransferException
and a cache miss.
Thanks for the bug report.
Looking at instances of d35d521228f07e30b28984179d3435236ace2eddaadaa1526aaea58e39a389ba in your logs, I suspect that there are two concurrent requests for the blob:
request 1 arrives, the blob is not available locally, but a place is reserved in the LRU index, then bazel-remote starts downloading the blob from s3
request 2 arrives, the blob is still not available locally, and a place cannot be reserved in the LRU index because there's an in-progress download, so bazel-remote returns the 404 error
request 1 finishes downloading from s3, then successfully serves the blob
So this is almost certainly related to #267 - a shortcut was taken early on in bazel-remote's design, based on the assumption that such concurrent requests are rare enough to ignore. That assumption no longer seems to hold with more recent versions of bazel.
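For illustration, one generic way to collapse such concurrent fetches is to deduplicate in-flight proxy downloads so the second request waits for the first instead of failing. A sketch using golang.org/x/sync/singleflight; this is not necessarily how bazel-remote's actual fix works:
package cache

import "golang.org/x/sync/singleflight"

// blobProxy is an assumed minimal interface to the s3/http proxy backend.
type blobProxy interface {
	Fetch(hash string) ([]byte, error)
}

type proxiedCache struct {
	group singleflight.Group
	proxy blobProxy
}

// getBlob collapses concurrent requests for the same hash into one proxy
// download, so a second caller waits for the in-flight fetch instead of
// being told the blob does not exist.
func (c *proxiedCache) getBlob(hash string) ([]byte, error) {
	v, err, _ := c.group.Do(hash, func() (interface{}, error) {
		return c.proxy.Fetch(hash)
	})
	if err != nil {
		return nil, err
	}
	return v.([]byte), nil
}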
The reporter of that issue has gone quiet, so I will try implementing this myself. I might have a test build for you to try next week.
a shortcut was taken early on in bazel-remote's design, based on the assumption that such concurrent requests are rare enough to ignore. That assumption no longer seems to hold with more recent versions of bazel.
@mostynb Yes that's the case. I will open a bug in bazel as well to not request same blob.
I was able to hack a quick solution by returning the result from the proxy directly when a blob is in progress: https://github.com/buchgr/bazel-remote/compare/master...lizan:inprogress?expand=1
Another potential quick workaround is to return 503 instead of 404; that at least lets Bazel retry the request instead of executing the action.
In our case, re-executing the action (which is a rules_foreign_cc build) produces a slightly different result, which is foreign_cc's issue though. As this happens in an early action, the cache entries of targets that depend on it are also invalidated.
I will open a bug in bazel as well to not request same blob.
Note that this is also an issue on the bazel-remote side when you have multiple bazel clients interacting with the cache simultaneously.
I was able to hack a quick solution by return result from proxy directly when a blob is in progress: https://github.com/buchgr/bazel-remote/compare/master...lizan:inprogress?expand=1
I have made a start on the cleanup work to remove the assumption, and so far it appears to be easier than I thought. So I would prefer to focus on this, and hopefully have something ready for you to test fairly soon.
Another potential quick workaround is to return 503 instead of 404, which at least lets Bazel retry the request instead of executing the action.
That is an interesting idea, definitely worth investigating if my cleanup work takes too long.
Here's my work-in-progress fix: https://github.com/mostynb/bazel-remote/commits/concurrent_inserts
@mostynb thanks, I can confirm that the branch fixes the issue with my minimal case.
|
gharchive/issue
| 2020-07-23T00:23:53 |
2025-04-01T04:56:13.648924
|
{
"authors": [
"lizan",
"mostynb"
],
"repo": "buchgr/bazel-remote",
"url": "https://github.com/buchgr/bazel-remote/issues/318",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1784725164
|
🛑 Videoglancer is down
In 683f14a, Videoglancer (https://videoglancer.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Videoglancer is back up in fa87cf3.
|
gharchive/issue
| 2023-07-02T16:58:52 |
2025-04-01T04:56:13.686734
|
{
"authors": [
"budlebee"
],
"repo": "budlebee/upptime-kundera",
"url": "https://github.com/budlebee/upptime-kundera/issues/248",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
390496767
|
Add new link
As title
Hey, @mmarif4u There was a conflict in your PR, so I just added the link separately in 68e67164eb964460de9c9cff715525a2803d2fa3
Thanks!
|
gharchive/pull-request
| 2018-12-13T02:55:38 |
2025-04-01T04:56:13.687673
|
{
"authors": [
"budparr",
"mmarif4u"
],
"repo": "budparr/awesome-hugo",
"url": "https://github.com/budparr/awesome-hugo/pull/8",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1243584656
|
Clients can't differentiate server-sent and client-synthesized errors
Talked to @robbertvanginkel about this for a while today: in short, the problem is that clients can't tell whether an *Error was sent by the server, or whether it was synthesized within the client code (e.g., to wrap an underlying networking error). grpc-go has the same problem, and documents the resulting complexity thoroughly.
I don't think we can differentiate based on status code, because many codes are semantically appropriate for both clients and servers.
We could do something like this:
type synthesizedError struct {
	err error
}

func (e *synthesizedError) Unwrap() error { return e.err }
func (e *synthesizedError) Error() string { return e.err.Error() }

func IsFromServer(err error) bool {
	if se := new(synthesizedError); errors.As(err, &se) {
		return false
	}
	return true
}
and then be careful to always wrap client-created errors in synthesizedError. Introducing this change would be backward-compatible, so let's defer deciding until we've released Connect and have some more user experience reports.
Edit, months later: it's probably simpler and much less error-prone to do the opposite of what I suggested above, and specially flag errors that are actually read off the wire.
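A minimal sketch of that opposite approach, assuming hypothetical names (wireError, newWireError) rather than connect-go's actual API, could look like this; the wrapping would happen only where an error status is decoded off the wire:

package example

import "errors"

// Sketch only: flag errors that were actually read off the wire, instead of
// flagging the ones synthesized inside the client.
type wireError struct {
	err error
}

func (e *wireError) Error() string { return e.err.Error() }
func (e *wireError) Unwrap() error { return e.err }

// newWireError would be called only where a server response or trailer is
// decoded into an error.
func newWireError(err error) error { return &wireError{err: err} }

// IsFromServer reports whether err (or anything it wraps) came off the wire.
func IsFromServer(err error) bool {
	var we *wireError
	return errors.As(err, &we)
}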
@ElliotMJackson If you're interested, this is a good issue to start with!
|
gharchive/issue
| 2022-05-20T20:26:15 |
2025-04-01T04:56:13.700249
|
{
"authors": [
"akshayjshah"
],
"repo": "bufbuild/connect-go",
"url": "https://github.com/bufbuild/connect-go/issues/222",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1267890913
|
Add tests for nil sender/receiver
In the HTTP handler, there is code that handles returning nil
sender/receiver and continues to call interceptors. This is currently
unreachable with the exported API - add tests using a custom
protocolHandler implementation.
Mea culpa - I've steered you wrong here. :sob: #290 is definitely the better approach.
|
gharchive/pull-request
| 2022-06-10T18:43:17 |
2025-04-01T04:56:13.701563
|
{
"authors": [
"akshayjshah",
"pkwarren"
],
"repo": "bufbuild/connect-go",
"url": "https://github.com/bufbuild/connect-go/pull/293",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1463266919
|
add generics for Protocol.createHandler
Adds type coverage for the impl function.
@fubhy, I've been going back and forth on this. If the return type also had a related type argument, generics could ensure that they match:
createHandler<T>(spec: ImplSpec<T>): ImplHandler<T>
But it doesn't, and it's fair to implement a Protocol and use the ImplSpec with its default type arguments. We don't because it's a little bit helpful to add non-default type arguments, so that we don't confuse I and O, but why not leave that up to the implementation?
I can see both sides, but I'm leaning towards the simple version we currently have. Closing this, but thank you for the PR!
Agreed. This only came up while experimenting with other parts (more specifically the testing use-case that I was playing around with, as mentioned on Slack). There, I was trying to work with individual handlers. But I realized (as you also suggested) that it's perfectly fine to just use unimplementService for that case, so my use-case is solved differently now anyway :-)
|
gharchive/pull-request
| 2022-11-24T12:24:35 |
2025-04-01T04:56:13.704021
|
{
"authors": [
"fubhy",
"timostamm"
],
"repo": "bufbuild/connect-web",
"url": "https://github.com/bufbuild/connect-web/pull/340",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1811367310
|
Missing Images?
In the default XML, views/res/btn_default_small_transparent.xml and views/res/btn_default_small_transparent_dark.xml both have references to drawables I can't find, and nothing in the views folder has any file with the referenced drawable names.
Should there be images for btn_default_small_normal_hover and btn_default_small_normal_hover_dark?
No I can not, I can't find anything broken that would need the images.
If there are unnecessary defines they should be removed
|
gharchive/issue
| 2023-07-19T07:48:30 |
2025-04-01T04:56:13.789439
|
{
"authors": [
"GrimMaple",
"Yorizuka"
],
"repo": "buggins/dlangui",
"url": "https://github.com/buggins/dlangui/issues/668",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1867465946
|
RuntimeError when launching a training
Hi,
I tried launching a training with the following arguments:
python train.py --aux_weight 0.9 --aux_loss_type lpips --beta 0.3072 --device 0
But I obtained an error at this line:
https://github.com/buggyyang/CDC_compression/blob/e5d5e29d9d730c1f2052609bdb29f9fd83a0a647/modules/unet.py#L109
How can I solve it?
Sorry for the late reply, I'm not sure if you have solved it. It seems there is a shape mismatch between tensors. Maybe you can try to debug it by printing the shape of each tensor? For training, we use 256x256 resolution RGB images; note that the noise tensor must have the same shape.
I also encountered the same problem. You only need to change default=[4, 2] to default=[4, 3, 2, 1] in the parser.add_argument('--reverse_context_dim_mults', ...) line in train.py and it should work.
|
gharchive/issue
| 2023-08-25T17:51:07 |
2025-04-01T04:56:13.794277
|
{
"authors": [
"Yolice",
"buggyyang",
"marl917"
],
"repo": "buggyyang/CDC_compression",
"url": "https://github.com/buggyyang/CDC_compression/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
389231078
|
mapping fail not uploaded but gradle task succeed
Expected behavior
If I run a command like :app:uploadBugsnagProduction-releaseMapping and the upload fails, the task should fail.
Observed behavior
I get a message saying that the upload failed:
Bugsnag upload failed with code 400: proguard file generated without -keepattributes LineNumberTable
But the build succeed.
Steps to reproduce
Run ProGuard without -keepattributes LineNumberTable and try to upload the mapping manually.
Version
4.5.0
Additional information
This is a problem when you are running scripts or when your whole release is done by Jenkins or similar.
Thanks for the report @BraisGabin - this has now been addressed by #151
|
gharchive/issue
| 2018-12-10T10:19:53 |
2025-04-01T04:56:13.796770
|
{
"authors": [
"BraisGabin",
"fractalwrench"
],
"repo": "bugsnag/bugsnag-android-gradle-plugin",
"url": "https://github.com/bugsnag/bugsnag-android-gradle-plugin/issues/138",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
876451544
|
PLAT-6370 unity support
Goal
Expose BSGBreadcrumbTypeValue and BSGBreadcrumbTypeFromString via BugsnagBreadcrumb+Private.h
This is needed to support https://github.com/bugsnag/bugsnag-unity/pull/234
Testing
Re-run unit and e2e tests
This has introduced a lint warning, and I'm not sure that the introduction of an "unknown" string is desirable.
Maybe the Unity layer could deal with the NSDictionary representation which can be accessed via -[BugsnagBreadcrumb objectValue] ?
Technically "unknown" will never happen given our code, but I needed a non-null in all possibilities. The old code just did an implicit return of nil (which would also cause problems if that possibility occurred). That worked before because there was no declaration for the function, and so the compiler's nonnull checks were bypassed.
I could access it via the dict, but then we're back to losing all type checking and name binding, and we'd still have a nonnull being filled with a nullable.
I'd rather avoid using dictionary lookups to get internal values and instead make use of the type checker and type bindings. Otherwise we have to manually keep everything in sync across multiple projects, and that's asking for trouble.
|
gharchive/pull-request
| 2021-05-05T13:28:14 |
2025-04-01T04:56:13.799872
|
{
"authors": [
"kstenerud",
"nickdowell"
],
"repo": "bugsnag/bugsnag-cocoa",
"url": "https://github.com/bugsnag/bugsnag-cocoa/pull/1088",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
974048403
|
Localized rescue for crashy button clicks
Goal
Restrict the rescuing of errors on Appium button clicks to just those that occur for run_scenario (and on macOS).
Design
Until now such errors were handled by Maze Runner for all button clicks, but this will be removed in its next release as it is too generic.
Testing
Covered by CI.
💭 In theory only the When("I run {string} and relaunch the app") step should ever trigger crashes - so we might be able to restrict the rescue to that step and be more specific
💭 In theory only the When("I run {string} and relaunch the app") step should ever trigger crashes - so we might be able to restrict the rescue to that step and be more specific
Yes, agreed - now reworked.
|
gharchive/pull-request
| 2021-08-18T20:55:08 |
2025-04-01T04:56:13.802574
|
{
"authors": [
"nickdowell",
"twometresteve"
],
"repo": "bugsnag/bugsnag-cocoa",
"url": "https://github.com/bugsnag/bugsnag-cocoa/pull/1169",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
352735775
|
Android Crash on start
Description
Trying to install the alpha version of Bugsnag in our Unity project
Issue
When starting the application it crashes
Environment
bugsnag-unity version: 4.0.0 alpha 1
Unity version: 2017.4.9f1
Operating system name and version: MacOs 10.13.5
Target platform names and versions: Android 6.0.1
Initializing bugsnag via the Unity UI or in code? Code
Bug report
art E JNI ERROR (app bug): accessed stale local reference 0x200001 (index 0 in a table of size 0)
F art/runtime/indirect_reference_table.cc:67] JNI ERROR (app bug): see above.
google-breakpad W ### ### ### ### ### ### ### ### ### ### ### ### ###
W Chrome build fingerprint:
W 68.0.3440.91
W 344009150
W ### ### ### ### ### ### ### ### ### ### ### ### ###
CRASH E signal 11 (SIGSEGV), code 2 (SEGV_ACCERR), fault addr da8d0000
E *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
E Build fingerprint: 'Verizon/nobleltevzw/nobleltevzw:6.0.1/MMB29K/N920VVRS2BPF2:user/release-keys'
E Revision: '9'
E pid: 32461, tid: 306, name: Thread-2392 >>> com.************.********** <<<
E r0 00000000 r1 00000000 r2 0001f3c4 r3 da8d0000
E r4 da8cdfbc r5 da8cdf40 r6 e9b443b9 r7 da8ce020
E r8 00000000 r9 da8cdf50 sl c69c9a80 fp da8cdfbc
E ip 00000004 sp da8cded4 lr f612cd61 pc f74b0a98 cpsr eeb61c4c
E backtrace:
E #00 pc 00017a98 /system/lib/libc.so (memset+48)
E #01 pc 00009d5d /system/lib/libunwind.so (_Uarm_local_access_addr_space_init+8)
E #02 pc e9b4608c <unknown/absolute>
E stack:
E da8cde94 ffffffff
E da8cde98 ffffffff
E da8cde9c 00000000
E da8cdea0 00000000
E da8cdea4 00000000
E da8cdea8 00000032
E da8cdeac 0000001f
E da8cdeb0 da8cdf40
E da8cdeb4 e9b4439a
E da8cdeb8 da8ce020
E da8cdebc 00000005
E da8cdec0 da8cdf50
E da8cdec4 00000000
E da8cdec8 c69c9a80 [anon:libc_malloc]
E da8cdecc e9b460a8
E da8cded0 00000000
E #00 da8cded4 da8cdfbc
E ........ ........
E #01 da8cded4 da8cdfbc
E da8cded8 e9b46090
E #02 da8cdedc f612c247 /system/lib/libunwind.so
E da8cdee0 ffffffff
E da8cdee4 00000000
E da8cdee8 00001000
E da8cdeec 00000001
E da8cdef0 00000000
E da8cdef4 00000037
E da8cdef8 e9b45000
E da8cdefc e9b460c0
E da8cdf00 0000002d
E da8cdf04 da8cdf3c
E da8cdf08 e9b443b9
E da8cdf0c 00007ecd
E da8cdf10 e9b4439a
E da8cdf14 f7513ec0
E da8cdf18 da8cdf34
E memory near r2:
E 0001f3a4 ffffffff ffffffff ffffffff ffffffff ................
E 0001f3b4 ffffffff ffffffff ffffffff ffffffff ................
E 0001f3c4 ffffffff ffffffff ffffffff ffffffff ................
E 0001f3d4 ffffffff ffffffff ffffffff ffffffff ................
E 0001f3e4 ffffffff ffffffff ffffffff ffffffff ................
E 0001f3f4 ffffffff ffffffff ffffffff ffffffff ................
E 0001f404 ffffffff ffffffff ffffffff ffffffff ................
E 0001f414 ffffffff ffffffff ffffffff ffffffff ................
E 0001f424 ffffffff ffffffff ffffffff ffffffff ................
E 0001f434 ffffffff ffffffff ffffffff ffffffff ................
E 0001f444 ffffffff ffffffff ffffffff ffffffff ................
E 0001f454 ffffffff ffffffff ffffffff ffffffff ................
E 0001f464 ffffffff ffffffff ffffffff ffffffff ................
E 0001f474 ffffffff ffffffff ffffffff ffffffff ................
E 0001f484 ffffffff ffffffff ffffffff ffffffff ................
E 0001f494 ffffffff ffffffff ffffffff ffffffff ................
E code around pc:
E f74b0a78 e1a01c01 e1811421 e1811821 e213c007 ....!...!.......
E f74b0a88 1a000024 e1a00001 e2522040 3a00000a $.......@ R....:
E f74b0a98 e1c300f0 e1c300f8 e1c301f0 e1c301f8 ................
E f74b0aa8 e1c302f0 e1c302f8 e1c303f0 e1c303f8 ................
E f74b0ab8 e2833040 e2522040 aafffff4 e2822040 @0..@ R.....@ ..
E f74b0ac8 e1b0cd82 3a000004 e1c300f0 e1c300f8 .......:........
E f74b0ad8 e1c301f0 e1c301f8 e2833020 5a000002 ........ 0.....Z
E f74b0ae8 e1c300f0 e1c300f8 e2833010 e1b0ce82 .........0......
E f74b0af8 3a000000 e0c300f8 5a000000 e4831004 ...:.......Z....
E f74b0b08 e1b0cf82 14c31001 24c31001 25c31000 ...........$...%
E f74b0b18 e8bd0001 e12fff1e e26cc008 e042200c ....../...l.. B.
E f74b0b28 e1b00f8c 44c31001 24c31001 24c31001 .......D...$...$
E f74b0b38 e35c0004 3affffd2 e4831004 eaffffd0 ..\....:........
E f74b0b48 eec01b10 e1b0ce82 3a000000 f400070d ...........:....
E f74b0b58 aa000000 f480080d e1b0cf82 44c01001 ...............D
E f74b0b68 24c01001 24c01001 e8bd0001 e12fff1e ...$...$....../.
E code around lr:
E f612cd40 2024f845 f855bdf8 60311024 f06fbdf8 E.$ ..U.$.1`..o.
E f612cd50 bdf80002 2100b510 46044a03 ea58f7f8 .......!.J.F..X.
E f612cd60 447b4b02 bd1060e3 00021448 ffffff47 .K{D.`..H...G...
E f612cd70 2100b510 4a154c16 4620447c ea48f7f8 ...!.L.J|D F..H.
E f612cd80 49142301 4a154814 4b156263 68094479 .#.I.H.Jcb.KyD.h
E f612cd90 68004478 6061447a 447b4912 48126020 xD.hzDa`.I{D `.H
E f612cda0 4a1260a2 4b1260e3 44784479 6812447a .`.J.`.KyDxDzD.h
E f612cdb0 21006161 6120447b 61a24620 61e3460a aa.!{D a F.a.F.a
E f612cdc0 fc1df7f8 4010e8bd bb94f7ff 00021448 .......@....H...
E f612cdd0 0000b2e0 00007128 00007128 fffffef5 ....(q..(q......
E f612cde0 fffffeff ffffff3b ffffff73 00007110 ....;...s....q..
E f612cdf0 ffffff05 b085b530 a8044605 0006e900 ....0....F......
E f612ce00 461a9902 b9199c03 f06fb914 e00d0002 ...F......o.....
E f612ce10 0c02f014 682b6868 9300d004 69042300 ....hh+h.....#.i
E f612ce20 e00347a0 46639300 47a868c5 bd30b005 .G....cF.h.G..0.
E f612ce30 b5734b62 460d4604 681b447b b9086818 bKs..F.F{D.h.h..
System.err W remove failed: ENOENT (No such file or directory) : /data/user/0/com.************.**********/files/AppEventsLogger.persistedevents
om.facebook.FacebookSdk D getGraphApiVersion: v3.0
ViewRootImpl D ViewPostImeInputStage processPointer 0
CRASH E other thread is trapped; signum = 11
InputEventReceiver E Exception dispatching input event.
MessageQueue-JNI E Exception in MessageQueue callback: handleReceiveCallback
E java.lang.Error: signal 11 (SIGSEGV), code 2 (SEGV_ACCERR), fault addr da8d0000
E Build fingerprint: 'Verizon/nobleltevzw/nobleltevzw:6.0.1/MMB29K/N920VVRS2BPF2:user/release-keys'
E Revision: '9'
E pid: 32461, tid: 306, name: Thread-2392 >>> com.************.********** <<<
E r0 00000000 r1 00000000 r2 0001f3c4 r3 da8d0000
E r4 da8cdfbc r5 da8cdf40 r6 e9b443b9 r7 da8ce020
E r8 00000000 r9 da8cdf50 sl c69c9a80 fp da8cdfbc
E ip 00000004 sp da8cded4 lr f612cd61 pc f74b0a98 cpsr eeb61c4c
E at libc.memset(memset:48)
E at libunwind._Uarm_local_access_addr_space_init(_Uarm_local_access_addr_space_init:8)
E at Unknown.0000208c(Unknown Source)
Tested with v3.6.6 (2018-05-24).
Crash is not happening with this version.
@willispinaud thanks for trying out the alpha!
Is there any more of the log that you can share?
Are you upgrading your application from an old version of Bugsnag or are you installing the Bugsnag package into an application that hasn't had Bugsnag installed before?
Which methods are you calling in the Bugsnag codebase?
I will assemble the combination of versions you mentioned to see if I can reproduce this.
I've managed to reproduce this with android 6, investigating the cause and a fix now
This has been fixed in the latest alphas so I'm closing. Thanks for the report @willispinaud and @martin308 for the fix!
|
gharchive/issue
| 2018-08-21T22:34:11 |
2025-04-01T04:56:13.809891
|
{
"authors": [
"martin308",
"snmaynard",
"willispinaud"
],
"repo": "bugsnag/bugsnag-unity",
"url": "https://github.com/bugsnag/bugsnag-unity/issues/88",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
900663140
|
fix(maxbreadcrumbs): set the max breadcrumbs in the native layer on init
Fix max breadcrumbs config setting not working
The native layers were not informed of the config setting
Simply following the setup of other config values
Added missing code to create native config methods
Tested manually
Agreed to hold off merging this until v5 is released
|
gharchive/pull-request
| 2021-05-25T11:16:33 |
2025-04-01T04:56:13.812008
|
{
"authors": [
"fractalwrench",
"rich-bugsnag"
],
"repo": "bugsnag/bugsnag-unity",
"url": "https://github.com/bugsnag/bugsnag-unity/pull/275",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1916579974
|
Improve ockam project ticket by removing deprecation description from --project-path
Current behavior
When you run ockam project ticket --help, you will get the following output as shown below.
The --project-path argument is listed as "DEPRECATED".
Desired behavior
This argument is still needed and should not be marked as "DEPRECATED". It is needed in some cases and not in others. In use cases that involve working with Ockam Orchestrator, this argument is not needed. However, in cases where you start your own authority node, it is needed.
Please take a look at this file to get started.
We love helping new contributors! ❤️
If you have questions or need help as you explore, please join us on Discord. If you're looking for other issues to contribute to, please checkout our good first issues.
@0scvr That's awesome, this is all yours. Please let us know if you have any questions as you explore. You can also ask questions on the contributors discord https://discord.gg/RAbjRr3kds
|
gharchive/issue
| 2023-09-28T01:50:45 |
2025-04-01T04:56:13.836432
|
{
"authors": [
"nazmulidris"
],
"repo": "build-trust/ockam",
"url": "https://github.com/build-trust/ockam/issues/6126",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2484860510
|
GEM052 IFC4 Scenario
This ends up checking whether the ContextType is within the list of valid RepresentationTypes, which always fails
https://github.com/buildingSMART/ifc-gherkin-rules/blob/f7dcaac762beb0c613d46f29fe94ca8aa778f9dd/features/GEM052_Correct-geometric-subcontexts.feature#L24-L29
Thanks both for reporting it. It's a known issue, already fixed, and it will be included in the next release of the service
This issue has been fixed by the 0.6.6 release https://github.com/buildingSMART/ifc-gherkin-rules/commit/c5808bfa00bbfc3cfed53ffc8d92cd3e0ce34491
|
gharchive/issue
| 2024-08-24T20:53:15 |
2025-04-01T04:56:13.863371
|
{
"authors": [
"evandroAlfieri",
"isma3lMB"
],
"repo": "buildingSMART/ifc-gherkin-rules",
"url": "https://github.com/buildingSMART/ifc-gherkin-rules/issues/268",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|