id (string, 4–10 chars) | text (string, 4–2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 – 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
874203602
|
Overwrite the width
I am using this basic template from the react-alert library. It works well, except that my text is slightly too long, wraps to a second line, and doesn't look ideal. Is there a simple way to override the width from this template?
const alertStyle = {
backgroundColor: '#151515',
color: 'white',
padding: '10px',
textTransform: 'uppercase',
borderRadius: '3px',
display: 'flex',
justifyContent: 'space-between',
alignItems: 'center',
boxShadow: '0px 2px 2px 2px rgba(0, 0, 0, 0.03)',
fontFamily: 'Arial',
width: '300px',
boxSizing: 'border-box'
}
The template renders a React component that accepts a style prop, so you can do the following in your App container:
function App () {
return (
<AlertProvider template={p => <AlertTemplate {...p} style={{width: 600}}/>} {...options}>
...
</AlertProvider>
)
}
|
gharchive/issue
| 2021-05-03T03:51:21 |
2025-04-01T04:35:48.701632
|
{
"authors": [
"dmikester1",
"michaelmika"
],
"repo": "schiehll/react-alert-template-basic",
"url": "https://github.com/schiehll/react-alert-template-basic/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
3512921
|
twig.js causes error in AsseticBundle
Hi, I have configured assetic like this:
assetic:
    debug: %kernel.debug%
    use_controller: false
    filters:
        cssrewrite: ~
        closure:
            jar: %kernel.root_dir%/Resources/java/compiler.jar
        yui_js:
            jar: %kernel.root_dir%/Resources/java/yuicompressor.jar
        yui_css:
            jar: %kernel.root_dir%/Resources/java/yuicompressor.jar
(indents changed to make post nicer - they are not wrong in the config file).
And my template like this:
{% javascripts "@MyAdminBundle/Resources/views/Resource/videoView.html.twig"
filter="twig_js, ?yui_js" %}
<script language="javascript" src="{{ asset_url }}"></script>
{% endjavascripts %}
Which generates the following stacktrace in _dev:
[exception] 500 | Internal Server Error | Exception
[message] SplObjectStorage::serialize() must return a string or NULL
[1] Exception: SplObjectStorage::serialize() must return a string or NULL
at n/a
in /home/tarjei/myproject/vendor/assetic/src/Assetic/Asset/AssetCache.php line 138
at serialize()
in /home/tarjei/myproject/vendor/assetic/src/Assetic/Asset/AssetCache.php line 0
at Assetic\Asset\AssetCache::getCacheKey(object(Assetic\Asset\FileAsset), null, 'dump')
in /home/tarjei/myproject/vendor/assetic/src/Assetic/Asset/AssetCache.php line 62
at Assetic\Asset\AssetCache->dump()
in /home/tarjei/myproject/vendor/bundles/Symfony/Bundle/AsseticBundle/Controller/AsseticController.php line 80
at Symfony\Bundle\AsseticBundle\Controller\AsseticController->render('4bc396c', '0')
in line
at call_user_func_array(array(object(Symfony\Bundle\AsseticBundle\Controller\AsseticController), 'render'), array('4bc396c', '0'))
in /home/tarjei/myproject/app/cache/dev/classes.php line 3905
at Symfony\Component\HttpKernel\HttpKernel->handleRaw(object(Symfony\Component\HttpFoundation\Request), '1')
in /home/tarjei/myproject/app/cache/dev/classes.php line 3875
at Symfony\Component\HttpKernel\HttpKernel->handle(object(Symfony\Component\HttpFoundation\Request), '1', true)
in /home/tarjei/myproject/app/cache/dev/classes.php line 4852
at Symfony\Bundle\FrameworkBundle\HttpKernel->handle(object(Symfony\Component\HttpFoundation\Request), '1', true)
in /home/tarjei/myproject/app/bootstrap.php.cache line 547
at Symfony\Component\HttpKernel\Kernel->handle(object(Symfony\Component\HttpFoundation\Request))
in /home/tarjei/myproject/web/app_dev.php line 26
[2] Exception: SplObjectStorage::serialize() must return a string or NULL
at n/a
in /home/tarjei/myproject/vendor/assetic/src/Assetic/Asset/AssetCache.php line 138
at serialize()
in /home/tarjei/myproject/vendor/assetic/src/Assetic/Asset/AssetCache.php line 0
at Assetic\Asset\AssetCache::getCacheKey(object(Assetic\Asset\FileAsset), null, 'dump')
in /home/tarjei/myproject/vendor/assetic/src/Assetic/Asset/AssetCache.php line 62
at Assetic\Asset\AssetCache->dump()
in /home/tarjei/myproject/vendor/bundles/Symfony/Bundle/AsseticBundle/Controller/AsseticController.php line 80
at Symfony\Bundle\AsseticBundle\Controller\AsseticController->render('4bc396c', '0')
in line
at call_user_func_array(array(object(Symfony\Bundle\AsseticBundle\Controller\AsseticController), 'render'), array('4bc396c', '0'))
in /home/tarjei/myproject/app/cache/dev/classes.php line 3905
at Symfony\Component\HttpKernel\HttpKernel->handleRaw(object(Symfony\Component\HttpFoundation\Request), '1')
in /home/tarjei/myproject/app/cache/dev/classes.php line 3875
at Symfony\Component\HttpKernel\HttpKernel->handle(object(Symfony\Component\HttpFoundation\Request), '1', true)
in /home/tarjei/myproject/app/cache/dev/classes.php line 4852
at Symfony\Bundle\FrameworkBundle\HttpKernel->handle(object(Symfony\Component\HttpFoundation\Request), '1', true)
in /home/tarjei/myproject/app/bootstrap.php.cache line 547
at Symfony\Component\HttpKernel\Kernel->handle(object(Symfony\Component\HttpFoundation\Request))
in /home/tarjei/myproject/web/app_dev.php line 26
[3] Exception: SplObjectStorage::serialize() must return a string or NULL
at n/a
in /home/tarjei/myproject/vendor/assetic/src/Assetic/Asset/AssetCache.php line 138
at serialize()
in /home/tarjei/myproject/vendor/assetic/src/Assetic/Asset/AssetCache.php line 0
at Assetic\Asset\AssetCache::getCacheKey(object(Assetic\Asset\FileAsset), null, 'dump')
in /home/tarjei/myproject/vendor/assetic/src/Assetic/Asset/AssetCache.php line 62
at Assetic\Asset\AssetCache->dump()
in /home/tarjei/myproject/vendor/bundles/Symfony/Bundle/AsseticBundle/Controller/AsseticController.php line 80
at Symfony\Bundle\AsseticBundle\Controller\AsseticController->render('4bc396c', '0')
in line
at call_user_func_array(array(object(Symfony\Bundle\AsseticBundle\Controller\AsseticController), 'render'), array('4bc396c', '0'))
in /home/tarjei/myproject/app/cache/dev/classes.php line 3905
at Symfony\Component\HttpKernel\HttpKernel->handleRaw(object(Symfony\Component\HttpFoundation\Request), '1')
in /home/tarjei/myproject/app/cache/dev/classes.php line 3875
at Symfony\Component\HttpKernel\HttpKernel->handle(object(Symfony\Component\HttpFoundation\Request), '1', true)
in /home/tarjei/myproject/app/cache/dev/classes.php line 4852
at Symfony\Bundle\FrameworkBundle\HttpKernel->handle(object(Symfony\Component\HttpFoundation\Request), '1', true)
in /home/tarjei/myproject/app/bootstrap.php.cache line 547
at Symfony\Component\HttpKernel\Kernel->handle(object(Symfony\Component\HttpFoundation\Request))
in /home/tarjei/myproject/web/app_dev.php line 26
I think this is due to Twig.js keeping a reference to a simplexml object somewhere - but I have no idea where. I tried to find it but got lost.
A temporary workaround:
--- a/src/TwigJs/Assetic/TwigJsFilter.php
+++ b/src/TwigJs/Assetic/TwigJsFilter.php
@@ -48,4 +48,9 @@ class TwigJsFilter implements FilterInterface
public function filterLoad(AssetInterface $asset)
{
}
-}
\ No newline at end of file
+
+ public function __sleep() {
+ return array();
+ }
+}
+
|
gharchive/issue
| 2012-03-05T20:08:34 |
2025-04-01T04:35:48.732492
|
{
"authors": [
"tarjei"
],
"repo": "schmittjoh/twig.js",
"url": "https://github.com/schmittjoh/twig.js/issues/20",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
58516290
|
SAPBroadphase > insertionSortXYZ useless?
I looked at the SAPBroadphase source code and found 3 functions, insertionSortX / insertionSortY / insertionSortZ, which are used nowhere. Are these functions useful? If not, they could be removed to save 0.5 kB... wow!! ;-)
NB: I'm writing a GWT wrapper over your fantastic library, that's why I'm reviewing each method ;)
Hmm... They're not used? In .sortList()? Gotta double check on my laptop tomorrow.
Thanks a bunch for your work, it's greatly appreciated!
Damn you are totally right, how did I miss this one... Sorry, my bad!
|
gharchive/issue
| 2015-02-22T20:10:50 |
2025-04-01T04:35:48.796704
|
{
"authors": [
"jgottero",
"schteppe"
],
"repo": "schteppe/cannon.js",
"url": "https://github.com/schteppe/cannon.js/issues/186",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
910272980
|
Fix Critical Bugs
@rrcomtech @ronnyporsch @pcprinz
In order to improve comprehensibility, maintainability, and overall quality, the critical bugs detected by SonarQube need to be removed in the backend subproject.
@rrcomtech @ronnyporsch @rrcomtech All done.
|
gharchive/issue
| 2021-06-03T08:53:16 |
2025-04-01T04:35:48.798323
|
{
"authors": [
"schubmat"
],
"repo": "schubmat/Custom-MADE",
"url": "https://github.com/schubmat/Custom-MADE/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1789577572
|
fix(deps): update aws-cli to v2.12.6
Might fix https://github.com/schueco/platform-team-tasks/issues/80
:tada: This PR is included in version 1.0.7 :tada:
The release is available on the GitHub releases page.
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2023-07-05T13:34:37 |
2025-04-01T04:35:48.800444
|
{
"authors": [
"Jeinhaus"
],
"repo": "schueco/atlantis-docker-image",
"url": "https://github.com/schueco/atlantis-docker-image/pull/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
191238907
|
Provide us with your github repository
Dear participant.
Please provide us with the link to the github repository that you will be using during the hackfest.
Thanks !
Bruce
Hello - could you respond with your ORCID please ?
https://github.com/damilare88/gwas_usecases
Ok, thanks
|
gharchive/issue
| 2016-11-23T10:50:22 |
2025-04-01T04:35:48.819982
|
{
"authors": [
"brucellino",
"mtorrisi"
],
"repo": "sci-gaia/LagosHackfest",
"url": "https://github.com/sci-gaia/LagosHackfest/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
792141126
|
Multisite
What does this PR do?
It adds the ability to serve multiple sites sharing the same IAHX core files.
Where could the review start?
Two ServerNames (search.scielo.org and search.revenf.org) were configured in the apache-config.conf file, and for local testing they were also added to the computer's hosts file.
A ticket was opened with the infrastructure team to add a new domain so that usage tests can be performed.
Ticket: https://suporte.scielo.org/#ticket/zoom/713
In conversation with the team, we decided in a meeting that a usage validation of the application is unnecessary at this point @alexxxmendonca.
Because of this change, the indexing system gained new domains, as follows:
http://homolog-search.scielo.org
http://homolog-search.revenf.org
|
gharchive/pull-request
| 2021-01-22T16:30:07 |
2025-04-01T04:35:48.826106
|
{
"authors": [
"deandr",
"jamilatta"
],
"repo": "scieloorg/search-journals",
"url": "https://github.com/scieloorg/search-journals/pull/526",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
413542240
|
wire up CI providers and add build badge
Appveyor was previously used on the project to test on Windows, so I'm inclined to enable that again unless someone wants to try enabling Azure Pipelines so this can be tested on macOS and Linux.
Appveyor also supports Linux environments, but I've not had a chance to use that yet
What level of access is needed in order to try setting up Azure pipelines?
Also, do we want parallel spikes on AP and AV?
Or to decide on one and run with it?
What level of access is needed in order to try setting up Azure pipelines?
It needs a Microsoft account on Azure DevOps to host the project, from a user that has write access to the project on GitHub to install the GitHub app.
Also, do we want parallel spikes on AP and AV?
Thinking on this some more, I'm fine with reusing the Appveyor configuration for now. Let's hold off on Azure Pipelines in favour of just getting the current config up and running.
|
gharchive/issue
| 2019-02-22T19:19:19 |
2025-04-01T04:35:48.840305
|
{
"authors": [
"M-Zuber",
"shiftkey"
],
"repo": "scientistproject/Scientist.net",
"url": "https://github.com/scientistproject/Scientist.net/issues/115",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1303425235
|
New release?
@kernc Hi. I see there are many PRs ready to merge to master that would greatly improve the library. Any idea when this will happen and when a new release will be launched? I think many people and other libraries would benefit from this. I could help if that's needed. Thanks
For a start, getting something like https://github.com/scikit-optimize/scikit-optimize/pull/1074 in and finally have a working CI again would be great. One of these days I might just get round to it.
|
gharchive/issue
| 2022-07-13T13:16:30 |
2025-04-01T04:35:48.894251
|
{
"authors": [
"kernc",
"tvdboom"
],
"repo": "scikit-optimize/scikit-optimize",
"url": "https://github.com/scikit-optimize/scikit-optimize/issues/1117",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
233822374
|
It is no longer possible to add images when replying
Is this because the Sina image host is down?
Please provide detailed information about the failure to add images,
for example the browser and browser version, the extension version, error messages, etc., so we can diagnose it.
Are you logged in to Sina Weibo?
|
gharchive/issue
| 2017-06-06T08:54:42 |
2025-04-01T04:35:48.898385
|
{
"authors": [
"SinanWang",
"p0we7",
"sciooga"
],
"repo": "sciooga/v2ex-plus",
"url": "https://github.com/sciooga/v2ex-plus/issues/57",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
497136807
|
BUG: solve_ivp and lsoda method does not work with array_like jacobians
solve_ivp supposedly accepts array_like jacobians, intended for jacobians that are constant in time and state. However, both LSODA implementations in scipy require callable jacobians, and solve_ivp uses the ode integrator implementation.
Note that the LSODA method requires a stiff ODE to actually test this, but stiff problems were only recently added to the tests, and only in a limited fashion, in #10309. So this isn't covered by the test suite.
I'm using the same problem setup as in #10793.
Reproducing code example:
from scipy.integrate import solve_ivp, odeint, ode
import numpy as np

def f(t, y):
    return np.array([
        y[0],
        -0.04 * y[1] + 1e4 * y[2] * y[3],
        0.04 * y[1] - 1e4 * y[2] * y[3] - 3e7 * y[2]**2,
        3e7 * y[2]**2,
        y[4]
    ])

def jac(t, y):
    return np.array([
        [1, 0, 0, 0, 0],
        [0, -0.04, 1e4*y[3], 1e4*y[2], 0],
        [0, 0.04, -1e4*y[3]-3e7*2*y[2], -1e4*y[2], 0],
        [0, 0, 3e7*2*y[2], 0, 0],
        [0, 0, 0, 0, 1]
    ])

rtol = 1e-8
atol = 1e-10
tspan = (0, 1000)
t_eval = np.linspace(*tspan, 100)
y0 = np.array([0, 1, 0, 0, 0])
method = 'LSODA'
jac_const = jac(0, y0)

sol = solve_ivp(f, tspan, y0, t_eval=t_eval, method='LSODA', rtol=rtol, atol=atol)
sol_jac = solve_ivp(f, tspan, y0, jac=jac, t_eval=t_eval, method='LSODA', rtol=rtol, atol=atol)
assert np.allclose(sol.y, sol_jac.y)

sol_jac_const = solve_ivp(f, tspan, y0, jac=jac_const, t_eval=t_eval, method='LSODA', rtol=rtol, atol=atol)
# error
assert np.allclose(sol.y, sol_jac_const.y)
Error message:
Traceback (most recent call last):
File "test.py", line 35, in <module>
sol_jac_const = solve_ivp(f, tspan, y0, jac=jac_const, t_eval=t_eval, method='LSODA', rtol=rtol, atol=atol)
File "/SFS/user/wp/flammm/testsp/venv/lib/python3.6/site-packages/scipy/integrate/_ivp/ivp.py", line 502, in solve_ivp
message = solver.step()
File "/SFS/user/wp/flammm/testsp/venv/lib/python3.6/site-packages/scipy/integrate/_ivp/base.py", line 182, in step
success, message = self._step_impl()
File "/SFS/user/wp/flammm/testsp/venv/lib/python3.6/site-packages/scipy/integrate/_ivp/lsoda.py", line 149, in _step_impl
solver.f, solver.jac or (lambda: None), solver._y, solver.t,
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Scipy/Numpy/Python version information:
This is clearly stated in the documentation, but IMO, the documentation is hard to parse in the current format with so many edge cases depending on solver usage.
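As a stop-gap, a minimal workaround sketch building on the example above (not an official fix): wrap the constant matrix in a trivial callable so that the LSODA path always receives a callable jacobian.

# Workaround sketch: expose the constant jacobian through a callable.
jac_const_callable = lambda t, y, J=jac_const: J

sol_jac_wrapped = solve_ivp(f, tspan, y0, jac=jac_const_callable, t_eval=t_eval,
                            method='LSODA', rtol=rtol, atol=atol)
assert np.allclose(sol.y, sol_jac_wrapped.y)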
|
gharchive/issue
| 2019-09-23T14:34:08 |
2025-04-01T04:35:48.912931
|
{
"authors": [
"MatthewFlamm"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/issues/10864",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1519021221
|
scipy.stats.bootstrap broke with statistic returning multiple values
The issue appeared when upgrading from scipy-1.9.3 to scipy-1.10.0.
We are using scipy.stats.bootstrap with a vectorized statistic that returns multiple values, i.e. computes multiple statistics at once.
Consider the following example:
import numpy as np
from scipy.stats import bootstrap

def statistic_ccc(cm: np.ndarray, axis: int) -> np.ndarray:
    tp, tn, fp, fn = cm
    actual = tp + fp
    expected = tp + fn
    ret = np.nan_to_num(
        concordance_array(actual, expected, axis=axis), copy=False, nan=0.0
    )
    # This is intended to return multiple statistics, this just inserts an extra
    # dimension. With scipy-1.9.3, stacking along the zeroth dimension worked.
    # However, due to the changes in PR 16455, this no longer works.
    return ret[None, ...]

def concordance_array(x: np.ndarray, y: np.ndarray, axis: int) -> np.ndarray:
    assert x.shape == y.shape
    assert axis == -1
    if x.ndim == 1:
        return np.asarray(concordance(x, y), dtype=float)
    if x.ndim == 2:
        return np.array(
            [concordance(x[k, :], y[k, :]) for k in range(x.shape[0])], dtype=float
        )
    assert False

def concordance(x: np.ndarray, y: np.ndarray) -> float:
    xm = x.mean()
    ym = y.mean()
    covariance = ((x - xm) * (y - ym)).mean()
    ccc = (2 * covariance) / (x.var() + y.var() + (xm - ym) ** 2)
    return ccc

# Elements are (tp, tn, fp, fn)
data = (
    np.array([[4, 0, 0, 2], [2, 1, 2, 1], [0, 6, 0, 0], [0, 6, 3, 0], [0, 8, 1, 0]]),
)

ret = bootstrap(
    data,
    statistic_ccc,
    confidence_level=0.9,
    random_state=42,
    vectorized=True,
)
print(ret.confidence_interval)
With scipy-1.9.3, the output was
ConfidenceInterval(low=array([0.]), high=array([0.87804878]))
or, without the insertion of another dimension in statistic_ccc,
ConfidenceInterval(low=0.0, high=0.878048780487805)
With scipy-1.10.0, the output becomes
ConfidenceInterval(low=array([nan]), high=array([nan]))
or, without the insertion of another dimension in statistic_ccc,
ConfidenceInterval(low=0.23076923076923073, high=0.8962655601659751)
which still differs from the behavior in scipy-1.9.3, but produces finite values.
This is caused by the changes in #16455, more specifically the use of len in n_j = [len(theta_hat_i) for theta_hat_i in theta_hat_ji].
Due to the batching and concatenation in _bca_interval, the statistic cannot insert a dimension at the end. Using theta_hat_i.shape[-1] instead of len(theta_hat_i) would solve the issue for my use case, but I'm not sure whether I'm missing anything. What do you think?
All computations in _bca_interval are done along axis=-1, so .shape[-1] looks more reasonable than .shape[0] / len(). If you agree, I can prepare a PR with that one-line fix, and add a parametrization to the existing test cases.
Also, shouldn't this be labelled bug or regression instead of query?
I broke it, so I'll fix it today.
@slanmich it looks to me like you really have a four-sample statistic. When each of tp, tn, fp, fn are 1D arrays (e.g. of length 5, like you have in data right now), statistic_ccc returns one number, right? I'm surprised you didn't get an error before. This is how bootstrap saw your data before:
data was a tuple with one sample, a 5 x 4 array,
axis=0 by default
bootstrap expects that statistic is a reduction statistic that acts along axis=0, so bootstrap would expect statistic_ccc(data, axis=0) to return an array with four elements.
If you want tp, tn, fp, fn to get reduced to a single number, this needs to be treated as a multi-sample statistic; data should have four elements. You can do this by taking data out of the tuple and transposing it, and statistic_ccc should accept tp, tn, fp, fn as separate arguments.
There is still a bug, but I'll continue to investigate.
@slanmich it looks to me like you really have a four-sample statistic. When each of tp, tn, fp, fn are 1D arrays (e.g. of length 5, like you have in data right now), statistic_ccc returns one number, right? I'm surprised you didn't get an error before. This is how bootstrap saw your data before:
data was a tuple with one sample, a 5 x 4 array,
axis=0 by default
bootstrap expects that statistic is a reduction statistic that acts along axis=0, so bootstrap would expect statistic_ccc(data, axis=0) to return an array with four elements.
If you want tp, tn, fp, fn to get reduced to a single number, this needs to be treated as a multi-sample statistic; data should have four elements. You can do this by taking data out of the tuple and transposing it, and statistic_ccc should accept tp, tn, fp, fn as separate arguments.
There is still a bug, but I'll continue to investigate.
Right, I think I also tried passing data[0].transpose(), but that causes the following ValueError, which is probably what you are referring to.
actual = tp + fp
ValueError: operands could not be broadcast together with shapes (5,4) (5,5)
That error goes away if I call bootstrap with paired=True, which I thought would apply to my case. The handling of the paired parameter in _bootstrap_iv modifies data to contain a single item, which is probably why I stayed with the original single-item data in the example.
Ah, yes, ok. I'll submit a PR soon. Also, I mentioned above that I left the ability for bootstrap to handle multi-output statistics undocumented. Other than this issue, has it been working well enough that support for multi-output statistics should be a documented feature? Any other features you'd like to see in 1.11.0?
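For reference, a minimal sketch of the multi-sample restructuring described above, reusing data and concordance_array from the original example; the signature and call form are illustrative and not confirmed as the final fix.

# Sketch only: treat (tp, tn, fp, fn) as four paired 1-D samples of length 5,
# so the statistic reduces along axis=-1 as bootstrap expects.
def statistic_ccc_4samples(tp, tn, fp, fn, axis=-1):
    actual = tp + fp
    expected = tp + fn
    return np.nan_to_num(concordance_array(actual, expected, axis=axis), nan=0.0)

tp, tn, fp, fn = data[0].T  # unpack the 5x4 array into four samples
ret = bootstrap((tp, tn, fp, fn), statistic_ccc_4samples, paired=True,
                confidence_level=0.9, random_state=42, vectorized=True)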
|
gharchive/issue
| 2023-01-04T13:54:58 |
2025-04-01T04:35:48.927189
|
{
"authors": [
"mdhaber",
"slanzmich"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/issues/17715",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
58057039
|
box constraint in scipy optimize cobyla
I don't really understand why box constraints are not allowed in cobyla, because every box constraint is an inequality constraint: x >= lb is x - lb >= 0 and x <= ub is ub - x >= 0.
This should be easily accomplished using something along the lines of
# boxBounds is a numpy.ndarray of shape (len(x0), 2)
# x0 is the parameter vector
cons = list()
for i in range(0, len(x0)):
    cons += ({'type': 'ineq', 'fun': (lambda x, i: x[i] - boxBounds[i, 0]), 'args': (i,)},)
    cons += ({'type': 'ineq', 'fun': (lambda x, i: boxBounds[i, 1] - x[i]), 'args': (i,)},)
or with two arguments, where the second is the value of the bound.
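For completeness, a self-contained sketch of the idea; the objective, bounds, and starting point below are made up for illustration and are not from the original post.

import numpy as np
from scipy.optimize import minimize

x0 = np.array([1.0, 1.0])                       # hypothetical starting point
boxBounds = np.array([[0.0, 2.0], [0.0, 3.0]])  # hypothetical lower/upper bounds

def func(x):
    # hypothetical objective
    return np.sum((x - 1.5) ** 2)

# Translate each box bound into two inequality constraints:
# x[i] - lb >= 0 and ub - x[i] >= 0.
cons = []
for i in range(len(x0)):
    cons.append({'type': 'ineq', 'fun': lambda x, i=i: x[i] - boxBounds[i, 0]})
    cons.append({'type': 'ineq', 'fun': lambda x, i=i: boxBounds[i, 1] - x[i]})

res = minimize(func, x0, method='COBYLA', constraints=cons)
print(res.x)  # stays within [0, 2] x [0, 3]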
+1 It's really weird to get an out of bounds result when using everything just as documented:
In [34]: scipy.optimize.minimize(func, 0, method='COBYLA', bounds=[(0, 2 * numpy.pi)])
/Users/samuelainsworth/research/res-venv/lib/python2.7/site-packages/scipy/optimize/_minimize.py:400: RuntimeWarning: Method COBYLA cannot handle bounds.
RuntimeWarning)
Out[34]:
fun: -0.99999999543552442
maxcv: 0.0
message: 'Optimization terminated successfully.'
nfev: 24
status: 1
success: True
x: array(-1.57070078125)
The warning is great, but this should at the very least be included in the documentation.
|
gharchive/issue
| 2015-02-18T11:36:45 |
2025-04-01T04:35:48.930598
|
{
"authors": [
"edwintye",
"samuela"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/issues/4532",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
60790120
|
Incorporate pbcf improvements from gh-4587 to scipy.special
Incorporate the asymptotic expansion from gh-4587 into the scipy.special parabolic cylinder functions.
Just passing by and thought I'd ping this before it gets lost in the mists of time.
@ewmoore Thoughts?
I've actually been working on this...
|
gharchive/issue
| 2015-03-12T09:17:21 |
2025-04-01T04:35:48.932023
|
{
"authors": [
"charris",
"person142",
"pv"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/issues/4626",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1471344807
|
ENH: Add distance_correlation to scipy.stats
Reference issue
Closes #13728
What does this implement/fix?
This PR adds Distance Correlation, a high-dimensional independence test that is powerful enough to detect both linear and nonlinear trends. scipy.stats contains a host of independence and other hypothesis tests, but they are limited by assumptions of normality, linearity, unidimensionality, etc. While this may be appropriate in many circumstances, it is increasingly important to analyze nonlinear and high-dimensional trends. The tests included can also be used for both classification and regression.
Additional information
I wrote scipy.stats.multiscale_graphcorr and this implementation builds upon that with minimal additional code.
The implementation is ported from the hyppo package which I wrote and currently maintain. Code in that package is licensed under MIT.
@bstraus1 and @kfangster are coauthors for this PR.
Not sure if this is an error in my fork or in the main repo, but tests aren't building due to:
ERROR: File ../../_lib/highs/src/mip/HighsObjectiveFunction.cpp does not exist.
I checked whether this file exists in the fork or the main repo and it doesn't. Do you all think this is caused by a syncing error in my fork, or is it a known build error?
Out of curiosity what is the difference between this and https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.correlation.html#scipy.spatial.distance.correlation
Out of curiosity what is the difference between this and https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.correlation.html#scipy.spatial.distance.correlation
They are two different things. This is a measure of nonlinear correlation based on the Euclidean distances (see https://en.wikipedia.org/wiki/Distance_correlation). The other you mention is a distance based on the (linear) Pearson correlation.
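For context, a minimal sketch of the (biased) sample distance correlation, following the Wikipedia definition linked above rather than the hyppo/PR implementation:

import numpy as np
from scipy.spatial.distance import cdist

def distance_correlation(x, y):
    # Biased sample distance correlation between two samples of equal length.
    x = np.atleast_2d(x.reshape(len(x), -1))
    y = np.atleast_2d(y.reshape(len(y), -1))
    a = cdist(x, x)  # pairwise Euclidean distances within each sample
    b = cdist(y, y)
    # Double-center each distance matrix.
    A = a - a.mean(axis=0) - a.mean(axis=1, keepdims=True) + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1, keepdims=True) + b.mean()
    dcov2 = (A * B).mean()
    dvar_x = (A * A).mean()
    dvar_y = (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(0)
x = rng.normal(size=100)
print(distance_correlation(x, x ** 2))  # picks up the nonlinear relationship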
|
gharchive/pull-request
| 2022-12-01T13:43:55 |
2025-04-01T04:35:48.937621
|
{
"authors": [
"j-bowhay",
"sampan501",
"vnmabus"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/pull/17517",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2395541607
|
ENH: Add the ability to specify k in NearestNDInterpolator
This commit allows you to specify k nearest neighbors in NearestNDInterpolator. This enables averaging over the k closest neighbors rather than using only the single nearest one. It also allows you to specify the averaging method, either uniform or distance-weighted (a rough sketch of the idea follows below).
Reference issue
addresses gh-21138
What does this implement/fix?
allows you to specify k in NearestNDInterpolator
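Roughly, the proposed behaviour can be sketched with scipy.spatial.cKDTree directly; this is an illustration with made-up sample data, not the PR's implementation.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((100, 2))                 # hypothetical scattered sample points
values = np.sin(points[:, 0]) + points[:, 1]  # hypothetical values at those points

def knn_interpolate(xi, k=3, weights="distance"):
    # Average the values of the k nearest neighbours of each query point (k > 1).
    tree = cKDTree(points)
    dist, idx = tree.query(xi, k=k)
    if weights == "uniform":
        return values[idx].mean(axis=-1)
    # Inverse-distance weighting; guard against zero distances.
    w = 1.0 / np.maximum(dist, 1e-12)
    return np.sum(values[idx] * w, axis=-1) / np.sum(w, axis=-1)

print(knn_interpolate(np.array([[0.5, 0.5]]), k=5))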
What @j-bowhay said in #21138: this sounds almost like an IDW, so let's consider this PR together with #2022 and https://github.com/scipy/scipy/pull/19665
While it's true that with distance weights it is IDW, it could still be useful to be able to specify k points to average rather than just 1. I can continue the discussion in gh-2022
Currently, NearestNDInterpolator does something simple and straightforward: for each input point it selects a nearest-neighbor value. The only additional options are those from KD-tree.
IMO it's better to keep it as is, and add any additional heuristics to a separate object. The proposal in https://github.com/scipy/scipy/pull/19665 and the discussion in https://github.com/scipy/scipy/issues/2022 looks like a natural candidate.
I would agree with @ev-br I would also like to see this kept separate. If you would be interested in reviving https://github.com/scipy/scipy/pull/19665 by address the review comment that wold be appreciated.
No worries then, feel free to close this pull request in that case! Thanks for looking into this.
It appears we are all in agreement that a generalization of this PR is best done in a separate class, as a version of IDW or some such.
Thank you @PolarBean and I'm very much looking forward to your involvement in https://github.com/scipy/scipy/pull/19665 or its successor (if you feel like, it's perfectly OK to take over a stalled PR, address review comments and resubmit). Thank Jake for weighting in, too.
|
gharchive/pull-request
| 2024-07-08T12:43:47 |
2025-04-01T04:35:48.943314
|
{
"authors": [
"PolarBean",
"ev-br",
"j-bowhay"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/pull/21139",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
93762027
|
BUG: Fix dlsim when the system has no dynamics.
See #4970 for more details.
When the system is characterized by A=[], B=[], C=[], D=[1], the function generates an error.
I've added a check on the value of the matrix A.
Thanks @tfabbri. Makes sense to make that work I think. Could you please add a unit test in signal/tests/test_dltisys.py?
I have added new unit test to signal/tests/test_dltisys.py
As per https://github.com/scipy/scipy/issues/4970#issuecomment-520613155, the underlying issue was resolved elsewhere
|
gharchive/pull-request
| 2015-07-08T10:57:16 |
2025-04-01T04:35:48.945623
|
{
"authors": [
"lucascolley",
"rgommers",
"tfabbri"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/pull/5025",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
139433566
|
MAINT: sparse: clean up format conversion methods
This PR removes a bunch of redundant subclass overrides,
and adds some small enhancements to existing conversion methods.
It also provides:
spmatrix.tocsr, based on spmatrix.tocoo
dia_matrix.tocsc, which avoids a conversion through COO
@@ master #5946 diff @@
======================================
Files 238 238
Stmts 43710 43716 +6
Branches 8200 8200
Methods 0 0
======================================
+ Hit 34139 34147 +8
Partial 2599 2599
+ Missed 6972 6970 -2
Review entire Coverage Diff as of 91cd0da
Powered by Codecov. Updated on successful CI builds.
Thanks for the comments, @pv. I pushed a new commit so you can see what changed.
Also, some asv highlights:
BENCHMARK BEFORE AFTER FACTOR
...version.time_conversion('dia', 'csr') 2.60ms 1.87ms 0.72061911x
...version.time_conversion('dia', 'csc') 2.60ms 1.31ms 0.50364981x
...version.time_conversion('coo', 'csr') 1.12ms 449.57μs 0.39992938x
...version.time_conversion('coo', 'csc') 1.14ms 448.04μs 0.39172148x
thanks, probably ok
|
gharchive/pull-request
| 2016-03-09T00:16:09 |
2025-04-01T04:35:48.950016
|
{
"authors": [
"codecov-io",
"perimosocordiae",
"pv"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/pull/5946",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
277553761
|
Convert collection acquisition method to use proper container storage paths
Fixes missing info_exists, adds proper permissions checking.
Review Checklist
Tests were added to cover all code changes
Documentation was added / updated
Code and tests follow standards in CONTRIBUTING.md
Codecov Report
Merging #1014 into master will increase coverage by 0.01%.
The diff coverage is 90%.
@@ Coverage Diff @@
## master #1014 +/- ##
==========================================
+ Coverage 90.44% 90.45% +0.01%
==========================================
Files 50 50
Lines 6758 6756 -2
==========================================
- Hits 6112 6111 -1
+ Misses 646 645 -1
|
gharchive/pull-request
| 2017-11-28T22:02:03 |
2025-04-01T04:35:48.955995
|
{
"authors": [
"codecov-io",
"nagem"
],
"repo": "scitran/core",
"url": "https://github.com/scitran/core/pull/1014",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2517541912
|
Can't get the container to run reliably
I had several issues getting things to work. Through trial and error I was able to figure out how to configure things with docker compose, and it was running, but now it will not run. Below are the errors I receive in the logs:
today at 11:55:53 AM always restart: false
today at 1:57:47 PM npm ERR! path /app
today at 1:57:47 PM npm ERR! command failed
today at 1:57:47 PM npm ERR! signal SIGTERM
today at 1:57:47 PM npm ERR! command sh -c -- node index.js
today at 1:57:47 PM
today at 1:57:47 PM npm ERR! A complete log of this run can be found in:
today at 1:57:47 PM npm ERR! /home/app/.npm/_logs/2024-09-10T15_55_51_112Z-debug-0.log
today at 1:58:29 PM
today at 1:58:29 PM > plate-minder@0.2.2 start
today at 1:58:29 PM > node index.js
today at 1:58:29 PM
today at 1:58:32 PM always restart: false
today at 2:04:47 PM npm ERR! path /app
today at 2:04:47 PM npm ERR! command failed
today at 2:04:47 PM npm ERR! signal SIGTERM
today at 2:04:47 PM npm ERR! command sh -c -- node index.js
today at 2:04:47 PM
today at 2:04:47 PM npm ERR! A complete log of this run can be found in:
today at 2:04:47 PM npm ERR! /home/app/.npm/_logs/2024-09-10T17_58_29_936Z-debug-0.log
today at 2:04:48 PM
today at 2:04:48 PM > plate-minder@0.2.2 start
today at 2:04:48 PM > node index.js
today at 2:04:48 PM
today at 2:04:50 PM always restart: false
today at 2:04:50 PM always restart: false
today at 2:04:50 PM always restart: false
today at 2:04:50 PM always restart: false
today at 2:04:50 PM always restart: false
today at 2:04:50 PM always restart: false
today at 2:04:50 PM always restart: false
I don't know what the problem is. The other two containers for the other services are running, but I get an error saying "Failed to fetch", which I believe is because the plate-minder service will not run.
docker compose yaml:
plate-minder:
  container_name: plate-minder
  restart: unless-stopped
  image: sclaflin/plate-minder:latest
  ports:
    - 4000:4000
  environment:
    - TZ=America/New_York
  volumes:
    # Sets the docker container to the host's local time
    - /etc/localtime:/etc/localtime:ro
    - /home/neil/docker/data/plate-minder/data:/app/data
    - /home/neil/docker/data/plate-minder/config.yaml:/app/config.yaml
  # For Intel related hardware acceleration, the container needs the same
  # group id as /dev/dri/renderD128.
open-alpr-http-wrapper:
  container_name: open-alpr-http-wrapper
  restart: unless-stopped
  image: sclaflin/open-alpr-http-wrapper:latest
  ports:
    - 3000:3000
plate-minder-web:
  container_name: plate-minder-web
  image: sclaflin/plate-minder-web:latest
  restart: unless-stopped
  # The default configuration assumes docker is running on the same machine
  # you're viewing the web UI with. If you're accessing the service from a
  # different computer, you should set the PLATE_MINDER_URL to whatever host
  environment:
    - PLATE_MINDER_URL=http://localhost:4000
  ports:
    - 8080:80
config.yaml for plate-minder:
sources:
  - type: rtsp
    name: mainlpr
    captureInterval: 0.5
    url: 'rtsp://admin:dsfdsf!@192.168.15.29:554/cam/realmonitor?channel=1&subtype=0'
openALPR:
  url: http://192.168.10.253:3000/detect
  country_code: 'us'
recorders:
  - type: file
    pattern: './data/images/{{DATE}}/{{SOURCE}}/{{TIME}}_{{PLATE}}.jpg'
    retainDays: 30
filters:
  - type: mask
    shapes:
      - 1267,0,1920,0,1920,100,1267,100
debug: false
restService:
  enable: true
  port: 4000
|
gharchive/issue
| 2024-09-10T19:34:41 |
2025-04-01T04:35:48.958852
|
{
"authors": [
"heffneil"
],
"repo": "sclaflin/Plate-Minder",
"url": "https://github.com/sclaflin/Plate-Minder/issues/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1886284734
|
[Request] Automatic switch to new chat when limit is reached
Is your feature request related to a problem? Please describe.
No response
Describe the solution you'd like
For example, it would be nice if after the 30/30th message in the chat, it starts a new chat and starts again at 1/30. In the current state it just stops.
Describe alternatives you've considered
No response
Additional context
No response
Actually, when you reach the limit, it should start a new chat. I'll test it again, maybe it somehow got disabled, but it was something I added from the very first releases.
Duplicate of #87. I'll keep it open until I have some time to code and test all these issues.
|
gharchive/issue
| 2023-09-07T17:02:48 |
2025-04-01T04:35:48.962922
|
{
"authors": [
"Eikosa",
"scmanjarrez"
],
"repo": "scmanjarrez/EdgeGPT-Telegram-Bot",
"url": "https://github.com/scmanjarrez/EdgeGPT-Telegram-Bot/issues/110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
369137878
|
fedora download through cdi takes too long
We should either use a lighter image or use the uploader with a prefetched local image.
First run -> Segmentation fault
E1011 13:45:34.502150 1 prlimit.go:143] qemu-img [convert -p -f qcow2 -O raw json: {"file.driver": "https", "file.url": "https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2", "file.timeout": 3600} /data/disk.img] failed output is:
E1011 13:45:34.503582 1 prlimit.go:144] (0.00/100%) (1.01/100%) (2.01/100%) (3.02/100%) (4.02/100%)
E1011 13:45:34.503723 1 importer.go:40] signal: segmentation fault
Second run stuck on 87%
addressed in https://github.com/scollier/kubevirt-tutorial/commit/13bd8583b657107d7a9bd9b3f41d23f6a873399d
|
gharchive/issue
| 2018-10-11T14:04:30 |
2025-04-01T04:35:48.965898
|
{
"authors": [
"StLuke",
"karmab"
],
"repo": "scollier/kubevirt-tutorial",
"url": "https://github.com/scollier/kubevirt-tutorial/issues/31",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
297499782
|
Need centralized error handling
Type of issue
[ ] Bug / Error
[ ] Idea / Feature
[x] Improvement detail
Short description of the issue
Every route is handling errors on its own, which just duplicates effort. Errors should be handled centrally.
Possible fix
I have created a PR #75. A middleware is added which works as an error handler; it will catch all errors thrown by any module.
Closing as PR is already merged
|
gharchive/issue
| 2018-02-15T16:01:57 |
2025-04-01T04:35:48.983439
|
{
"authors": [
"varunzxzx"
],
"repo": "scorelab/Stackle",
"url": "https://github.com/scorelab/Stackle/issues/76",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
303213024
|
pack+import bat - solution import issue
Getting an error when running the pack+import.bat command using spkl in VS2017.
Any idea?
Following root components are not defined in customizations:
Type='PluginAssembly', Id (or schema name)='{0201327f-2822-e811-a8a4-0022480173bb}'.
Following objects, required by the solution, are not present. /n/r Type='PluginAssembly', Id (or schema name)='PluginAssembly-SF365Plugins, Version=2.0.0.0, Culture=neutral, PublicKeyToken=15f92f8a08de05d5'.
/n/r Please do a dependency check on your solution prior to exporting, add the missing objects to your solution and re-export/n/r
Error occurred during execution of plugin 'RootComponentValidation': RootComponent validation failed.
See log file 'packagerlog.txt' for details.
CrmSvcUtil Runtime = 00:00:01.4195902
Solution Packager exited with error 3
2018-03-07 18:33:19.295 - Info Processing Component: Entities
2018-03-07 18:33:19.298 - Info Processing Component: Roles
2018-03-07 18:33:19.340 - Info Processing Component: Workflows
2018-03-07 18:33:19.346 - Info Processing Component: FieldSecurityProfiles
2018-03-07 18:33:19.347 - Info Processing Component: Templates
2018-03-07 18:33:19.349 - Info Processing Component: EntityMaps
2018-03-07 18:33:19.349 - Info Processing Component: EntityRelationships
2018-03-07 18:33:19.351 - Verbose Reading: C:\SF365\Solution\package\Other\Relationships.xml
2018-03-07 18:33:19.354 - Info Processing Component: optionsets
2018-03-07 18:33:19.354 - Info Processing Component: SolutionPluginAssemblies
2018-03-07 18:33:19.372 - Verbose Reading: C:\SF365\Solution\package\PluginAssemblies\SF365Plugins-0201327F-2822-E811-A8A4-0022480173BB\SF365Plugins.dll.data.xml
2018-03-07 18:33:19.379 - Info - SF365Plugins, Version=2.0.0.0, Culture=neutral, PublicKeyToken=15f92f8a08de05d5
2018-03-07 18:33:19.382 - Info Mapping: C:\SF365\Solution\package\PluginAssemblies\SF365Plugins-0201327F-2822-E811-A8A4-0022480173BB\SF365Plugins.dll to C:\SF365\Plugins\bin\Debug\SF365Plugins.dll
2018-03-07 18:33:19.383 - Verbose Reading: C:\SF365\Plugins\bin\Debug\SF365Plugins.dll
2018-03-07 18:33:19.384 - Info Processing Component: SdkMessageProcessingSteps
2018-03-07 18:33:19.387 - Verbose Reading: C:\SF365\Solution\package\SdkMessageProcessingSteps{703b4e82-2822-e811-a8a4-00224801a0fe}.xml
2018-03-07 18:33:19.407 - Warning Following root components are not defined in customizations:
Type='PluginAssembly', Id (or schema name)='{0201327f-2822-e811-a8a4-0022480173bb}'.
2018-03-07 18:33:19.407 - Warning Following root components are not defined in customizations:
Type='PluginAssembly', Id (or schema name)='{0201327f-2822-e811-a8a4-0022480173bb}'.
2018-03-07 18:33:19.409 - Error Following objects, required by the solution, are not present. /n/r Type='PluginAssembly', Id (or schema name)='PluginAssembly-SF365Plugins, Version=2.0.0.0, Culture=neutral, PublicKeyToken=15f92f8a08de05d5'.
/n/r Please do a dependency check on your solution prior to exporting, add the missing objects to your solution and re-export/n/r
2018-03-07 18:33:19.415 - Error Error occurred during execution of plugin 'RootComponentValidation': RootComponent validation failed.
2018-03-07 18:33:19.416 - Error See log file 'packagerlog.txt' for details.
2018-03-07 18:33:19.527 - Error Microsoft.Crm.Tools.SolutionPackager.PluginExecutionException: RootComponent validation failed.
at Microsoft.Crm.Tools.SolutionPackager.Plugins.RootComponentsValidation.BeforeWrite(PluginContext pluginContext)
at Microsoft.Crm.Tools.SolutionPackager.SolutionPackager.InvokePlugin(PluginConfigurationElement pluginConfig, Action`1 action)
at Microsoft.Crm.Tools.SolutionPackager.SolutionPackager.Run(IPackageReader reader, IPackageWriter writer)
at Microsoft.Crm.Tools.SolutionPackager.SolutionPackager.Run()
at Microsoft.Crm.Tools.SolutionPackager.Program.Main(String[] args)
2018-03-07 18:33:19.530 - Verbose CrmSvcUtil Runtime = 00:00:01.4195902
fixed it by installing latest nuget pkg
|
gharchive/issue
| 2018-03-07T18:38:46 |
2025-04-01T04:35:49.079002
|
{
"authors": [
"abz789"
],
"repo": "scottdurow/SparkleXrm",
"url": "https://github.com/scottdurow/SparkleXrm/issues/210",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
335627139
|
Unpack/Pack issue between v9 and c8.2
Hi Scott
No problems unpacking/packing a solution with the following:
-spkl 1.0.198
-Microsoft.CrmSdk.CoreTools 8.2.05
Solution imports with no issue into crm online Version 1612 (9.0.2.449) (DB 9.0.2.449)
With the above I am unable to deploy plugins or generate early-bound entities, and I get the following error:
CrmSvcUtil Error: 2 : Exiting program with exit code 2 due to exception : System.IO.FileLoadException: Could not load file or assembly 'CrmSvcUtil, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
Exiting program with exception: Could not load file or assembly 'CrmSvcUtil, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
File name: 'CrmSvcUtil, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, IntPtr pPrivHostBinder, Boolean loadTypeFromPartialName, ObjectHandleOnStack type)
at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean loadTypeFromPartialName)
The problem is that my plugins reference CoreAssemblies 9.0.0.7. When I update Microsoft.CrmSdk.CoreTools to 9.0, plugins and early-bound entities run with no issues, but then my solution import fails with invalid XML on Visualizations.
The same happens when I install spkl in the plugins project directly, as it picks up the first CoreTools in my packages:
spkl.1.0.198\tools....\Microsoft.CrmSdk.CoreTools.8.2.0.5\content\bin\coretools\CrmSvcUtil.exe
Can you try the latest build- https://www.nuget.org/packages/spkl/1.0.225-beta
It’s going to be the next release version shortly.
With the new version and CoreTools 9.0.0.7 I am able to deploy plugins and create early-bound entities, but I am unable to import the packed solution; error:
The element 'Visualizations' has invalid child element 'savedqueries'. List of possible elements expected: 'visualization'.
If you have multiple spkl references, ie. one for the solution and other for plugins, it still picks up the first within packages as below:
_spkl.1.0.198\tools....\Microsoft.CrmSdk.CoreTools.8.2.0.5_
So basically with 1.0.198 with CoreTools 8.2.0.5 I can successfully unpack/pack and import solutions - Solution project is separate from Plugins project.
Plugins deploy correctly with spkl.1.0.198 and CoreTools.9.0.0.7 - but Earlybound fails.
I think for now I will generate early-bound entities separately.
It looks like unpack/pack with CoreTools 9+ generates different XML schema than 8.2 hence the Solution import failure.
@hmdvs Is this still an issue for you with the least release?
@scottdurow I've not re looked at this issue since. Will give it a test on the new version and report results.
@scottdurow - I re tested unpack/pack and it looks good with the following:
crm:Version 1612 (9.0.2.751)
spkl:1.0.226
CoreTools: 9.0.0.7
I tried the new one and not facing any issue :)
|
gharchive/issue
| 2018-06-26T01:43:36 |
2025-04-01T04:35:49.089325
|
{
"authors": [
"hmdvs",
"mihirkadam",
"scottdurow"
],
"repo": "scottdurow/SparkleXrm",
"url": "https://github.com/scottdurow/SparkleXrm/issues/249",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
300037180
|
history.html on readdocs does not properly format bullet points
See: https://dateparser.readthedocs.io/en/latest/history.html
Bullet points used on the page show up as asterisks instead of bullets. This is only true for part of the bullet lists.
Good point, thank you.
The formatting was fixed in https://github.com/scrapinghub/dateparser/pull/384, but ReadTheDocs still displayed the version from the release commit. Rebuilt.
@asadurski Looks much better! Good job.
|
gharchive/issue
| 2018-02-25T15:58:10 |
2025-04-01T04:35:49.155202
|
{
"authors": [
"asadurski",
"thernstig"
],
"repo": "scrapinghub/dateparser",
"url": "https://github.com/scrapinghub/dateparser/issues/389",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1083865827
|
support for dates with dots and spaces
Closes #1010
This will support dates such as "26 .10.21" or "26 . 10.21". In a date like "26 . 10.21" the first period was getting removed during sanitization even though it was surrounded only by numerals and spaces, which was not needed; that is why I changed the period sanitization regex a bit.
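For illustration, the kind of input this change targets; the expected output is inferred from the description above and is not verified here.

import dateparser

# Dates with stray spaces around the separators, as described above.
print(dateparser.parse("26 .10.21"))   # expected: 2021-10-26 00:00:00
print(dateparser.parse("26 . 10.21"))  # expected: 2021-10-26 00:00:00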
@Gallaecio sorry to bother but can you look into this PR? This is a regression for us (https://github.com/scrapinghub/dateparser/issues/1010) 🙏 Many thanks!
Can someone merge this to fix https://github.com/scrapinghub/dateparser/issues/1010 ? 🙏
|
gharchive/pull-request
| 2021-12-18T16:25:51 |
2025-04-01T04:35:49.157488
|
{
"authors": [
"atharmohammad",
"jc-louis"
],
"repo": "scrapinghub/dateparser",
"url": "https://github.com/scrapinghub/dateparser/pull/1028",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1336850658
|
Improve Japanese support
Thank you for maintaining this very useful library.
I've just improved Japanese support a little.
Please check if this is ok.
Hi @smurak Could you please resolve the conflicts? Thanks in advance.
Thanks for checking, @serhii73 .
Resolved conflicts.
Thanks!
|
gharchive/pull-request
| 2022-08-12T07:24:19 |
2025-04-01T04:35:49.159115
|
{
"authors": [
"Gallaecio",
"smurak"
],
"repo": "scrapinghub/dateparser",
"url": "https://github.com/scrapinghub/dateparser/pull/1068",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
578469298
|
Detect empty string in the input
The current release takes too much time handling empty strings (#470).
Hi @utkarsh261 thanks for this PR, could you add some tests? There is already a PR regarding this: https://github.com/scrapinghub/dateparser/pull/486/ if you want to check it.
hey @noviluni , didn't notice #486 earlier, thanks.
|
gharchive/pull-request
| 2020-03-10T10:12:28 |
2025-04-01T04:35:49.160786
|
{
"authors": [
"noviluni",
"utkarsh261"
],
"repo": "scrapinghub/dateparser",
"url": "https://github.com/scrapinghub/dateparser/pull/631",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
222233872
|
Add VCR.py json-serialized tests
The PR adds additional tests using JSON serialization, based on:
a new vcrpy accept-header matcher to distinguish msgpack-marked requests from others
an additional fixture json_and_msgpack to parametrize some of the existing tests (sketched below)
JSON-serialized tests put into separate vcrpy cassettes, named with a -json.gz suffix
a pytest --disable-msgpack option to disable msgpack tests (same as when msgpack is unavailable)
I have some doubts about whether we need to add a Travis configuration to run all the tests with msgpack uninstalled, to make sure everything works as expected without msgpack at all; let's discuss it if needed.
Please, review.
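A rough sketch of what the json_and_msgpack fixture and the --disable-msgpack option might look like in a conftest.py; this is illustrative only and not necessarily the PR's exact code.

# conftest.py (sketch)
import pytest

def pytest_addoption(parser):
    # Allow skipping msgpack-based runs, e.g. when msgpack is not installed.
    parser.addoption("--disable-msgpack", action="store_true", default=False,
                     help="run the client tests with JSON serialization only")

@pytest.fixture(params=["json", "msgpack"])
def json_and_msgpack(request):
    # Parametrizes a test to run once per serialization format.
    if request.param == "msgpack" and request.config.getoption("--disable-msgpack"):
        pytest.skip("msgpack tests are disabled")
    return request.param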
Codecov Report
Merging #72 into master will increase coverage by <.01%.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #72 +/- ##
==========================================
+ Coverage 92.81% 92.82% +<.01%
==========================================
Files 28 28
Lines 1866 1867 +1
==========================================
+ Hits 1732 1733 +1
Misses 134 134
Impacted Files
Coverage Δ
scrapinghub/hubstorage/serialization.py
73.52% <100%> (+0.8%)
:arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ce09ed5...1b3d585. Read the comment docs.
I have some doubts if we need to add Travis configuration to run all the tests with uninstalled msgpack to make sure everything works as expected w/o msgpack at all
why not?
why not?
@chekunkov I had to add some new tests and corresponding cassettes for oldy hubstorage client, now satisfied with results, please review when you have a minute.
@chekunkov Let me know if there's anything else worrying you so we can proceed with the PR.
|
gharchive/pull-request
| 2017-04-17T21:15:59 |
2025-04-01T04:35:49.170545
|
{
"authors": [
"chekunkov",
"codecov-io",
"vshlapakov"
],
"repo": "scrapinghub/python-scrapinghub",
"url": "https://github.com/scrapinghub/python-scrapinghub/pull/72",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
136173735
|
Lua while loop problem
I am trying to run Lua code in Splash, but it fails and I don't know how to continue.
Below is my code. Does anyone know how to run a while loop in Splash?
import scrapy
from scrapy.selector import Selector
from splashtry.items import SplashtryItem

class MySpider(scrapy.Spider):
    name = "ssplash"
    allowed_domains = ["twitter.com"]
    start_urls = ["https://twitter.com/ScrapingHub", "https://twitter.com/Jackie_Sap"]

    def start_requests(self):
        for url in self.start_urls:
            script = """
            local Splash = require("splash")
            function Splash:wait_for(condition)
                while not condition() do
                    assert(splash:runjs("window.scrollTo(0,document.body.scrollHeight)"))
                    assert(self:wait(0.5))
                end
            end
            require("wait_for")
            function main(splash)
                assert(splash:go(splash.args.url))
                splash:wait_for(function()
                    return splash:evaljs("document.body.scrollTop == document.body.scrollHeight")
                end)
                return splash:html()
            end
            """
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'args': {'lua_source': script},
                    'endpoint': 'execute',
                }
            })

    def parse(self, response):
        sel = Selector(response)
        item = SplashtryItem()
        items = []
        item['tweets'] = sel.xpath('//div[@class="js-tweet-text-container"]/p/text()').extract()
        item['location'] = sel.xpath('//span[@class="ProfileHeaderCard-locationText u-dir"]').extract()
        item['birth'] = sel.xpath('//span[@class="ProfileHeaderCard-birthdateText u-dir"]').extract()
        item['name'] = sel.xpath('//title/text()').extract()
        items.append(item)
        return items
I think the problem is with local Splash = require("splash") - this module is not available under the default sandbox settings. You likely copied the splash:wait_for example from the [Writing Modules](http://splash.readthedocs.org/en/stable/scripting-libs.html#writing-modules) docs, but this section is specifically about writing modules which you set up at Splash startup time, not about writing scripts you send to Splash.
In the future, please post the error message you get from Splash; it should contain useful information.
-- wait_for function without all these modules
function wait_for(splash, condition)
while not condition() do
splash:wait(0.05)
end
end
Hi @kmike, based on your function I changed my code, but it runs into another error. Could you help me look at it?
def start_requests(self):
    for url in self.start_urls:
        script = """
        function wait_for(splash, condition)
            while not condition() do
                assert(splash:runjs("window.scrollTo(0,document.body.scrollHeight)"))
                splash:wait(0.5)
            end
        end
        function main(splash)
            assert(splash:go(splash.args.url))
            wait_for(splash, function()
                return splash:evaljs("document.body.scrollTop == document.body.scrollHeight")
            end)
            return splash:html()
        end
        """
        yield scrapy.Request(url, self.parse, meta={
            'splash': {
                'args': {'lua_source': script},
                'endpoint': 'execute',
            }
        })
Now the error shows:
2016-02-24 22:10:34 [scrapy] INFO: Scrapy 1.0.5 started (bot: splashtry)
2016-02-24 22:10:34 [scrapy] INFO: Optional features available: ssl, http11
2016-02-24 22:10:34 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'splashtry.spiders', 'FEED_URI': 'si.json', 'DUPEFILTER_CLASS': 'scrapyjs.SplashAwareDupeFilter', 'SPIDER_MODULES': ['splashtry.spiders'], 'BOT_NAME': 'splashtry', 'FEED_FORMAT': 'json', 'HTTPCACHE_STORAGE': 'scrapyjs.SplashAwareFSCacheStorage', 'DOWNLOAD_DELAY': 1}
2016-02-24 22:10:34 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2016-02-24 22:10:35 [py.warnings] WARNING: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapyjs/middleware.py:8: ScrapyDeprecationWarning: Module scrapy.log has been deprecated, Scrapy now relies on the builtin Python library for logging. Read the updated logging entry in the documentation to learn more.
from scrapy import log
2016-02-24 22:10:35 [py.warnings] WARNING: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapyjs/dupefilter.py:8: ScrapyDeprecationWarning: Module scrapy.dupefilter is deprecated, use scrapy.dupefilters instead
from scrapy.dupefilter import RFPDupeFilter
2016-02-24 22:10:35 [py.warnings] WARNING: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapyjs/cache.py:11: ScrapyDeprecationWarning: Module scrapy.contrib.httpcache is deprecated, use scrapy.extensions.httpcache instead
from scrapy.contrib.httpcache import FilesystemCacheStorage
2016-02-24 22:10:35 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, SplashMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-02-24 22:10:35 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-02-24 22:10:35 [scrapy] INFO: Enabled item pipelines:
2016-02-24 22:10:35 [scrapy] INFO: Spider opened
2016-02-24 22:10:35 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-02-24 22:10:35 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-02-24 22:11:05 [scrapy] DEBUG: Retrying <POST http://192.168.99.100:8050/execute> (failed 1 times): 504 Gateway Time-out
2016-02-24 22:11:06 [scrapy] DEBUG: Retrying <POST http://192.168.99.100:8050/execute> (failed 1 times): 504 Gateway Time-out
2016-02-24 22:11:11 [scrapy] DEBUG: Retrying <POST http://192.168.99.100:8050/execute> (failed 2 times): 400 Bad Request
2016-02-24 22:11:11 [scrapy] DEBUG: Retrying <POST http://192.168.99.100:8050/execute> (failed 2 times): 400 Bad Request
2016-02-24 22:11:35 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-02-24 22:11:41 [scrapy] DEBUG: Gave up retrying <POST http://192.168.99.100:8050/execute> (failed 3 times): 504 Gateway Time-out
2016-02-24 22:11:41 [scrapy] DEBUG: Crawled (504) <POST http://192.168.99.100:8050/execute> (referer: None)
2016-02-24 22:11:41 [scrapy] DEBUG: Ignoring response <504 http://192.168.99.100:8050/execute>: HTTP status code is not handled or not allowed
2016-02-24 22:11:42 [scrapy] DEBUG: Gave up retrying <POST http://192.168.99.100:8050/execute> (failed 3 times): 504 Gateway Time-out
2016-02-24 22:11:42 [scrapy] DEBUG: Crawled (504) <POST http://192.168.99.100:8050/execute> (referer: None)
2016-02-24 22:11:42 [scrapy] DEBUG: Ignoring response <504 http://192.168.99.100:8050/execute>: HTTP status code is not handled or not allowed
2016-02-24 22:11:42 [scrapy] INFO: Closing spider (finished)
2016-02-24 22:11:42 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 5613,
'downloader/request_count': 6,
'downloader/request_method_count/POST': 6,
'downloader/response_bytes': 1746,
'downloader/response_count': 6,
'downloader/response_status_count/400': 2,
'downloader/response_status_count/504': 4,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 2, 25, 3, 11, 42, 649283),
'log_count/DEBUG': 11,
'log_count/INFO': 8,
'log_count/WARNING': 3,
'response_received_count': 2,
'scheduler/dequeued': 8,
'scheduler/dequeued/memory': 8,
'scheduler/enqueued': 8,
'scheduler/enqueued/memory': 8,
'splash/execute/request_count': 2,
'splash/execute/response_count/400': 2,
'splash/execute/response_count/504': 4,
'start_time': datetime.datetime(2016, 2, 25, 3, 10, 35, 145048)}
hey @WUJJU,
The log shows 'bad request'; I suggest debugging your script first in the Splash UI: visit <your-splash-url>:8050 in a browser and put your script there.
Also note that you've got a 504 error, which means the request timed out. The default timeout is 30s, and it looks like your script doesn't finish in that time. Maybe the waiting condition never occurred - try e.g. limiting the number of iterations in wait_for, as in the sketch below.
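Something along these lines, for example (an untested sketch; the 20-iteration cap and the boolean return value are arbitrary assumptions, not Splash API requirements):
function wait_for(splash, condition, max_iterations)
  local iterations = 0
  while not condition() do
    if iterations >= max_iterations then
      return false  -- give up instead of running into the HTTP timeout
    end
    splash:wait(0.5)
    iterations = iterations + 1
  end
  return true
end
Calling wait_for(splash, check, 20) from main then waits at most ~10 seconds before giving up.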
I'm closing this issue because it doesn't look like a Splash bug.
Sorry, I forgot to close it.
|
gharchive/issue
| 2016-02-24T20:10:29 |
2025-04-01T04:35:49.192405
|
{
"authors": [
"WUJJU",
"kmike"
],
"repo": "scrapinghub/splash",
"url": "https://github.com/scrapinghub/splash/issues/389",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1314716311
|
Include the prompt in the results
Is there any way to get the prompt in response.MultiResponse?
Tried adding opoptions.WithNoStripPrompt() but no luck yet. While GetPrompt() can get the prompt, it would be nice to get the exact prompt if a certain command in a long list passed to SendConfigs failed
It seems the option should be set on SendConfigs (and not in NewPlatform!)
d.SendConfigs(data, opoptions.WithNoStripPrompt())
Thanks!!
Hey @kellerza -- yep you got it -- it is kind of confusing and that is my fault, so sorry about that! but yeah, the idea is that any "platform" wide options live in options (just options) and apply to the driver as a whole, and the "operation" options live in "opoptions" and are applied on a per operation basis. HTH!
|
gharchive/issue
| 2022-07-22T09:03:56 |
2025-04-01T04:35:49.195969
|
{
"authors": [
"carlmontanari",
"kellerza"
],
"repo": "scrapli/scrapligo",
"url": "https://github.com/scrapli/scrapligo/issues/87",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1107613008
|
🛑 Github Insights is down
In 66c8a6d, Github Insights (https://github-insights.atanas.info) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Github Insights is back up in cf97b00.
|
gharchive/issue
| 2022-01-19T03:57:35 |
2025-04-01T04:35:49.245745
|
{
"authors": [
"scriptex"
],
"repo": "scriptex/uptime",
"url": "https://github.com/scriptex/uptime/issues/491",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1280218506
|
backlog: add gas reporter plugin in solidity
very convenient
https://github.com/fluidex/plonkit/blob/0a84db7d9844b1a6a91f97f644445bac17d32d45/test/contract/single/hardhat.config.js#L13
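For reference, a minimal sketch of what enabling it could look like, assuming the widely used hardhat-gas-reporter package (the solidity version and option values here are placeholders, not taken from the linked config):
// hardhat.config.js - rough sketch, assuming hardhat-gas-reporter is installed
require("hardhat-gas-reporter");

module.exports = {
  solidity: "0.8.17", // placeholder compiler version
  gasReporter: {
    enabled: true,   // prints a gas usage table after `npx hardhat test`
    currency: "USD",
  },
};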
Cool, I will have a try.
|
gharchive/issue
| 2022-06-22T14:13:11 |
2025-04-01T04:35:49.246902
|
{
"authors": [
"lispc",
"xgaozoyoe"
],
"repo": "scroll-tech/halo2-snark-aggregator",
"url": "https://github.com/scroll-tech/halo2-snark-aggregator/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
233422059
|
Update docs with already existing mapping variables
There are three keymaps already present in the code that aren't included in the docs. I am currently using these mappings (g:NERDTreeMapPreview, g:NERDTreeMapPreviewSplit and g:NERDTreeMapPreviewVSplit) in my .vimrc and can confirm that they work correctly.
This pull request updates the documentation to mention these mappings.
Good catch @asnr. Thanks for the update.
|
gharchive/pull-request
| 2017-06-04T09:20:32 |
2025-04-01T04:35:49.248479
|
{
"authors": [
"PhilRunninger",
"asnr"
],
"repo": "scrooloose/nerdtree",
"url": "https://github.com/scrooloose/nerdtree/pull/699",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
}
|
332026107
|
[Issue] Creating S/MIME Certificates fails with 400 Bad Request
Add support for managing a user's S/MIME Certificates
https://developers.google.com/gmail/api/guides/smime_certs
Thanks, @jtwaddle !
Working on this still @jtwaddle - Once I have this ready, I may need some testing feedback from you since my personal domain doesn't have S/MIME (G Suite Business) and my org uses an external solution for handling email encryption, so no dice there either lol.
I'll keep you updated here though!
Branch created for this feature request: https://github.com/scrthq/PSGSuite/tree/feature/SMIME_support_issue57
hey @jtwaddle - S/MIMEInfo functions have been added in as of v2.11.0! I don't have an Enterprise subscription with S/MIME enabled, so I'm not 100% comfortable that New-GSGmailSMIMEInfo is going to format the cert correctly. When you get a chance, can you let me know if all is well?
Thank you! I will test it out next week!
Thanks again!
Sounds good! Looking forward to your feedback, have a great weekend!
Initial Testing:
Get-GSGmailSMIMEInfo - appears to function as expected.
New-GSGmailSMIMEInfo: I am getting an error when trying to run it. The cert and password work fine when I add it via the GUI.
New-GSGmailSMIMEInfo -User 'me@zoo.com' -SendAsEmail 'me@zoo.com' -Pkcs12 "D:\gmailcertzoo.pfx" -EncryptedKeyPassword $SecurePassword
New-GSGmailSMIMEInfo : Exception calling "Execute" with "0" argument(s): "Google.Apis.Requests.RequestError
Bad Request [400]
Errors [
Message[Bad Request] Location[ - ] Reason[invalidArgument] Domain[global]
]
"
At line:1 char:1
The issue looks related to these lines:
EncryptedKeyPassword {
$body.$key = (New-Object PSCredential "user",$PSBoundParameters[$key]).GetNetworkCredential().Password
}
Pkcs12 {
$p12String = Convert-Base64 -From NormalString -To WebSafeBase64String -String "$([System.IO.File]::ReadAllText((Resolve-Path $PSBoundParameters[$key]).Path))"
$body.$key = $p12String
}
It looks like you need to set these to two different values instead of both to $body.$key, making them both part of a smimeInfo object, which I assume is $body.
so, the $key in $body.$key is referring to the current PSBoundParameter in the foreach statement - super normal and a common pattern throughout the rest of PSGSuite.
It's either bad formatting on the EncryptedKeyPassword (i.e. Google is expecting a Base64 encoded string, not a plain text string) and/or bad formatting on the P12Key... Google's documentation isn't *super* clear on it and there are discrepancies between what their Rest API docs say, what Python examples are doing, and what their .NET SDK docs say, so it's a bit of a mixed bag :-(.
Do you have another non-password-protected key to test with, by chance? That way we can at least eliminate one variable and focus solely on whether the key itself is parsed into the body correctly.
I got the same error with this code change.
On Fri, Jul 13, 2018 at 3:25 PM Jessie Twaddle jtwaddle@gmail.com wrote:
Thank you for clarifying. I will test it out!
Jessie
On Fri, Jul 13, 2018 at 12:27 AM Nate Ferrell notifications@github.com
wrote:
@jtwaddle https://github.com/jtwaddle - I have some ideas on how to
adjust that block below. If you could swap that out, reimport the module
with the -Force parameter, then try, that would be awesome!
I have no doubt that the cert and password you're supplying are correct
and valid, this is a conversion issue within the function.
I re-opened this issue for tracking so it doesn't get buried =]
EncryptedKeyPassword {
$body.$key = (New-Object PSCredential "user",$PSBoundParameters[$key]).GetNetworkCredential().Password
}
Pkcs12 {
$p12String = Convert-Base64 -From Base64String -To WebSafeBase64String -String ([System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes((Resolve-Path $PSBoundParameters[$key]).Path)))
$body.$key = $p12String
}
Any other ideas of what I could try?
@jtwaddle - Thanks for your help out with testing and your patience on this! Here are a few more options:
1. P12 as Base64 and Password as plain text:
EncryptedKeyPassword {
$body.$key = (New-Object PSCredential "user",$PSBoundParameters[$key]).GetNetworkCredential().Password
}
Pkcs12 {
$body.$key = [System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes((Resolve-Path $PSBoundParameters[$key]).Path))
}
2. P12 as Base64 and Password as Base64:
EncryptedKeyPassword {
$body.$key = Convert-Base64 -From NormalString -To Base64String -String (New-Object PSCredential "user",$PSBoundParameters[$key]).GetNetworkCredential().Password
}
Pkcs12 {
$body.$key = [System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes((Resolve-Path $PSBoundParameters[$key]).Path))
}
3. P12 as WebSafeBase64 and Password as WebSafeBase64:
EncryptedKeyPassword {
$body.$key = Convert-Base64 -From NormalString -To WebSafeBase64String -String (New-Object PSCredential "user",$PSBoundParameters[$key]).GetNetworkCredential().Password
}
Pkcs12 {
$body.$key = Convert-Base64 -From Base64String -To WebSafeBase64String -String ([System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes((Resolve-Path $PSBoundParameters[$key]).Path)))
}
Let me know if any of these get you going!
Thank you for sending some more to try. I did not have any luck with any
of these 3 options.
Do you have any more ideas?
Seeing if I can get Google to give me a test domain with Enterprise licensing so I can test on my end. I don't have any other suggestions off hand right now though 😞 I'll keep you updated!
Did you ever have any luck with this?
Nothing yet 😢 I've tried a couple different avenues to get access to an Enterprise account so I can test further as well as reached out to their Gmail API support team and have gotten literally nothing useful back 😞
Any other ideas on things I could try?
Any other ideas I could try?
hey @jtwaddle - nothing yet, apologies on the delay on this, it's literally been forever and I feel bad 😢. I am going to check out bumping my own account to Enterprise where I can test so I can close this one out, just been a bit slammed.
No problem at all. Thank you for trying!
hey @jtwaddle - I updated my own account to Enterprise and have been giving it a few whacks and am at least replicating the issue. Going to try going through the REST API directly instead of the .NET SDK, in case there's an issue with the .NET SDK itself. I should hopefully have some progress on this by this weekend!
Now to find a low-cost cert that Google will allow for S/MIME...
Sectigo (fka Comodo) actually has some reasonably priced ones that I believe are trusted by Google for S/MIME
They used to offer a free one. I am not sure if they still do.
Doesn't appear so (but will confirm with their support for sure). Google search still turns up results that point at https://www.comodo.com/home/email-security/free-email-certificate.php, but going to that link takes you to the page linked in my last comment and searching for the word "free" in the page contents doesn't yield anything, so my guess is Google cached search results still showing =\
This may work potentially: https://sectigo.com/ssl-certificates/free-trial
Any luck with this?
Thank you!
Hey @jtwaddle - I've been on vacation but should hopefully be jumping on this again by the weekend!
Any luck on this one?
@jtwaddle - still nothing, I need to pick this back up. Thanks for the poke!
@jtwaddle Opened up https://github.com/googleapis/google-api-dotnet-client/issues/1492 to see if there's an issue with the .NET SDK potentially
@jtwaddle - Working on the issue in the google-api-dotnet-client repo, but I was doing some code comparison against GAM and was able to replicate the resulting string being sent as the value for Pkcs12 when inserting a new S/MIME object.
v2.35.1 is being deployed now and should be ready to test at your convenience =]
@jtwaddle - let me know if you've had a chance to test! Working with the Google API Client team and the changes implemented should have it working now. Anxiously awaiting your feedback :D
|
gharchive/issue
| 2018-06-13T14:40:22 |
2025-04-01T04:35:49.329748
|
{
"authors": [
"jtwaddle",
"scrthq"
],
"repo": "scrthq/PSGSuite",
"url": "https://github.com/scrthq/PSGSuite/issues/57",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
524608034
|
Could you update docs to use tippy.js v5?
Is your feature request related to a problem? Please describe.
Apparently the suggestion example uses tippy.js version 4.x.
Here is the code: https://github.com/scrumpy/tiptap/blob/master/examples/Components/Routes/Suggestions/index.vue#L220
Describe the solution you'd like
Could you update the example to use version 5.x of tippy.js? Here's what I did: https://github.com/Human-Connection/Human-Connection/pull/2258
From the PR description:
I've spent a considerable amount of time trial-and-erroring the correct way of integrating tippy.js into tiptap. Here's what I came up with. It's probably not nice to destroy and recreate the tiptap context menu, but I didn't know a better way to do it. The new API of tippy.js does not have the instance.popperInstance.scheduleUpdate method anymore.
There is the sticky plugin which should reposition the popper menu when the content or the reference object changes. I couldn't find a way to get the desired behaviour since it's calling the virtualNode.getBoundingClientRect() 4 times, and in the end, it's always at position 0/0 (top left corner) thus invisible. https://atomiks.github.io/tippyjs/plugins/
The original error message was that you cannot pass a normal object to tippy(object, { ... }) anymore. https://atomiks.github.io/tippyjs/misc/
Describe alternatives you've considered
You are using MutationObserver to observe this.$refs.suggestions and you call instance.popperInstance.scheduleUpdate. The latter is not available anymore, I'm afraid. So the best solution I came up with is to destroy the menu all the time. I wonder if that's an appropriate solution.
Probably a better solution is to use tippy's sticky plugin.
Looking forward to this
Hey @roschaefer,
I don't have a solution for tippy 5, but I managed to get tippy.js 6 (current version) working with tiptap. Your idea to use the sticky plugin helped!
I created a PR here: https://github.com/scrumpy/tiptap/pull/655
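Roughly, the idea looks something like this (a simplified sketch - virtualNode and suggestionsElement are placeholders for tiptap's suggestion decoration and the rendered menu element; see the PR for the actual integration):
import tippy, { sticky } from 'tippy.js'

// The sticky plugin re-positions the popup whenever the reference rect changes,
// so the menu follows the suggestion text as the editor content updates.
const popup = tippy(document.body, {
  getReferenceClientRect: () => virtualNode.getBoundingClientRect(),
  appendTo: () => document.body,
  content: suggestionsElement,
  interactive: true,
  trigger: 'manual',
  showOnCreate: true,
  plugins: [sticky],
  sticky: true,
})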
this is fixed now!
|
gharchive/issue
| 2019-11-18T20:42:47 |
2025-04-01T04:35:49.339425
|
{
"authors": [
"gambolputty",
"philippkuehn",
"roschaefer",
"rosnaib11"
],
"repo": "scrumpy/tiptap",
"url": "https://github.com/scrumpy/tiptap/issues/523",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
124717214
|
use prctl instead of (glibc-specific) pthread_setname_np on Linux
With this small change, this code should build with other libc's on linux (eg uClibc).
Merged. Thanks a lot for providing the patch.
|
gharchive/pull-request
| 2016-01-04T08:46:48 |
2025-04-01T04:35:49.366744
|
{
"authors": [
"mostynb",
"tuexen"
],
"repo": "sctplab/usrsctp",
"url": "https://github.com/sctplab/usrsctp/pull/44",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
890377821
|
Fix one more sctp_process_cookie_existing return path...
...which needs to unlock stcb when returning NULL, when forwarding the return
value of sctp_process_cookie_new.
I committed a similar fix, which was tested by Tolya. Just saw the PR after it.
|
gharchive/pull-request
| 2021-05-12T18:35:10 |
2025-04-01T04:35:49.368075
|
{
"authors": [
"taylor-b",
"tuexen"
],
"repo": "sctplab/usrsctp",
"url": "https://github.com/sctplab/usrsctp/pull/588",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
859033020
|
data.json is not generated correctly
🐞 Bug report
Description
After updating scully and ng-lib to version 1.1.1, the data.json contains the following string:
")[1].split("
While the state script in index.html contains the correct information and the page renders properly, navigating to the other pages in the running app breaks because it can't get the data from the state.
🔬 Minimal Reproduction
Have an application that utilizes TransferStateService.
Build the app and run scully.
Go to dist/static/data.json
I tried it in 2 different projects and it has the same issue.
💻Your Environment
Angular Version:
Angular CLI: 11.1.4
Node: 14.16.0
OS: linux x64
Angular: 11.1.2
... animations, cdk, common, compiler, compiler-cli, core, forms
... localize, platform-browser, platform-browser-dynamic, router
Ivy Workspace: Yes
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1101.4
@angular-devkit/build-angular 0.1101.4
@angular-devkit/core 11.1.4
@angular-devkit/schematics 11.1.4
@angular/cli 11.1.4
@schematics/angular 11.1.4
@schematics/update 0.1101.4
rxjs 6.6.3
typescript 4.1.3
Scully Version:
"@scullyio/ng-lib": "^1.1.1",
"@scullyio/scully": "^1.1.1",
@BlindDespair This problem is fixed already. It's happening if you have template strings inside the JSON you're trying to embed.
I'll check if we did release the fix tomorrow.
@SanderElias sounds great, thank you for a quick response on the issue!
@BlindDespair Do you still have this issue?
We encountered this before, and that turned out to be an issue with the htmlMinimize plugins. (you need to do some settings, but I don't recall which ones)
I still have the issue, but I'll check how I can configure the htmlMinimize plugin and see if it helps. I'll post my findings here.
@SanderElias I checked the configs and what helps is switching the minifyJS option off.
So here is the difference it produces:
minifyJS: false,:
minifyJS: true,:
and this part is exactly what gets parsed by scully, I guess you can tell why:
So I am gonna turn it off for now, but I think this has to be solved by default somehow. From what I see, the version with minified inline JS is only 17 bytes smaller, which I can easily let go of. I suggest we open an issue on scully-plugin-minify-html and propose making minifyJS: false the default, but if you have a better idea of how this can be handled, that would be interesting to know.
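For reference, turning it off looks roughly like this (a sketch only - the plugin import name and the minifyOptions key are assumptions, so double-check against the scully-plugin-minify-html README):
// scully.<project>.config.ts - rough sketch
const { MinifyHtml } = require('scully-plugin-minify-html');
const { setPluginConfig } = require('@scullyio/scully');

setPluginConfig(MinifyHtml, {
  minifyOptions: {
    collapseWhitespace: true,
    removeComments: true,
    minifyJS: false, // leave Scully's inline transfer-state script untouched
  },
});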
Yes, the minify plugin does mess up the order of the lines. The script that is injected by Scully is optimized by hand and put in that exact order for a reason.
During the write-to-disk, the write-to-disk plugin extracts the first string that is inside the __SCULLY__xxx markers.
The 17 bytes is probably some white-space, that is put in by the conversion from JSON to the embedded string.
As there is nothing Scully can do about this, I'm going to close the issue. Thanks for reporting back
issue still present using "scully-plugin-minify-html": "^6.0.0"
|
gharchive/issue
| 2021-04-15T16:08:50 |
2025-04-01T04:35:49.390903
|
{
"authors": [
"AlonsoK28",
"BlindDespair",
"SanderElias"
],
"repo": "scullyio/scully",
"url": "https://github.com/scullyio/scully/issues/1331",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
537663675
|
fix(sampleblog): add mine lazy routing
If the mine module is left as it is (without lazy routing), guess-parser will produce the following error.
Error: Multiple root routing modules found
/scully/projects/sampleBlog/src/app/app.module.ts,
/scully/projects/sampleBlog/src/app/mine/mine-routing.module.ts
at findRootModule (/scully/node_modules/guess-parser/dist/guess-parser/index.js:432:15)
at Object.exports.parseRoutes [as parseAngularRoutes] (/scully/node_modules/guess-parser/dist/guess-parser/index.js:596:31)
at Object.exports.traverseAppRoutes (/scully/scully/bin/routerPlugins/traverseAppRoutesPlugin.js:7:39)
at Object.staticServer (/scully/scully/bin/utils/staticServer.js:13:56)
at startStaticServer (/scully/scully/bin/scully.js:149:20)
at Timeout.setTimeout [as _onTimeout] (/scully/scully/bin/scully.js:157:9)
at ontimeout (timers.js:436:11)
at tryOnTimeout (timers.js:300:5)
at listOnTimeout (timers.js:263:5)
at Timer.processTimers (timers.js:223:10)
@jorgeucano check this out
Looks like #7 will nullify this PR
Hey, thanks for the PR, but we removed this!
|
gharchive/pull-request
| 2019-12-13T17:07:04 |
2025-04-01T04:35:49.393045
|
{
"authors": [
"aaronfrost",
"jorgeucano",
"nartc",
"tieppt"
],
"repo": "scullyio/scully",
"url": "https://github.com/scullyio/scully/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2545545555
|
fix(results): display and submit results fixes
Perf Simple Query results were not displayed due to a Cell.svelte error on a null value. Also fixed the table error status when there's an UNSET status cell value.
And most important, fixed a regression in submit_results when submitting tables with text values.
@k0machi sorry for merging without approval, but it was needed to hot-fix my previous PR.
Let me know if something needs to be changed here.
|
gharchive/pull-request
| 2024-09-24T14:17:30 |
2025-04-01T04:35:49.431262
|
{
"authors": [
"soyacz"
],
"repo": "scylladb/argus",
"url": "https://github.com/scylladb/argus/pull/460",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1701118665
|
fix(schema): reorganize statement
Currently gemini has a weird way of attaching values to the statement: it is done in the form of a function, which makes no sense.
Let's remove the function and use the raw values stored in the Stmt struct.
Please, add more details to the commit messages describing the benefits of this change.
Done
|
gharchive/pull-request
| 2023-05-09T00:27:14 |
2025-04-01T04:35:49.432655
|
{
"authors": [
"dkropachev"
],
"repo": "scylladb/gemini",
"url": "https://github.com/scylladb/gemini/pull/303",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
463430201
|
scylla-cql-optimization and scylla-errors added to generate-dashboards.sh but not load-grafana.sh
@amnonh
For folks that use load-grafana.sh directly, the two latest dashboards are missing.
I have used load-grafana.sh in the past and it was easy to use with my pre-existing Grafana installation. I guess it makes sense for load-grafana.sh to be a distinct step after genconfig.py and not have it attempt generation.
|
gharchive/issue
| 2019-07-02T20:52:05 |
2025-04-01T04:35:49.445801
|
{
"authors": [
"miltieIV2"
],
"repo": "scylladb/scylla-grafana-monitoring",
"url": "https://github.com/scylladb/scylla-grafana-monitoring/issues/665",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1157923380
|
Support unsigned versions of integers.
Unsigned versions can be automatically mapped to the signed versions that the database uses.
Lossy - definitely not. A runtime bound check should be performed
Since it just bit me, for varints columns I think it should Just Work
I think that this idea may backfire on us later. Lack of unsigned types in Scylla / C* is of course an issue for some application developers because it increases complexity (as the developer now needs to validate data and deal with errors).
This complexity is however inherent to developing with Scylla / C* (as long as they don't have unsigned types) and trying to hide this complexity may cause a perfectly working application to suddenly start throwing errors after long time.
The new deserialization framework will probably allow downcasting the error to a specific implementation and checking what exactly went wrong - but doing this will be much more difficult than creating 2 structures (the first with signed types, to interact with the DB, the second with unsigned types, to use in the rest of the application) and handling conversion errors.
For those reasons I think it's better to make users aware of this complexity and deal with it explicitly - it will result in more robust, less error-prone applications.
Instead of implementing SerializeCql and FromCqlVal for u* types, this crate could provide wrapper types for handling different serialization patterns.
There are 4 different approaches:
as: Just convert the unsigned to signed and back. (Could be a wrapper like SignedU*)
MappedU*: e.g. MappedU32(u32) would map the range of an u32 to the range of an i32. (So, u32::MIN..u32::MAX -> i32::MIN..i32::MAX)
BlobU*: e.g. BlobU32(u32) would convert the u32 to [u8; 4] and back. (Also supports u128)
BigInt: Store the signed number as a BigInt. (Could be a wrapper like VarU*)
I'm using MappedU* and BlobU* in my project.
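For illustration, a rough sketch of the MappedU32 idea (just the range mapping; the FromCqlVal / SerializeCql impls are left out, and this is not part of the driver):
// Maps u32::MIN..=u32::MAX onto i32::MIN..=i32::MAX by flipping the sign bit,
// so ordering is preserved when the value lives in a signed CQL `int` column.
pub struct MappedU32(pub u32);

impl MappedU32 {
    pub fn to_signed(self) -> i32 {
        (self.0 ^ (1 << 31)) as i32
    }

    pub fn from_signed(v: i32) -> Self {
        MappedU32((v as u32) ^ (1 << 31))
    }
}

// MappedU32(0).to_signed() == i32::MIN, MappedU32(u32::MAX).to_signed() == i32::MAX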
|
gharchive/issue
| 2022-03-03T03:22:37 |
2025-04-01T04:35:49.460690
|
{
"authors": [
"JonahPlusPlus",
"Lorak-mmk",
"dtzxporter",
"gsson",
"psarna"
],
"repo": "scylladb/scylla-rust-driver",
"url": "https://github.com/scylladb/scylla-rust-driver/issues/409",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2037253314
|
getting "For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions."
I can run the program flawlessly up to Running on local URL: http://127.0.0.1:7860, but once it starts to animate, the error occurs.
Traceback (most recent call last):
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\gradio\queueing.py", line 456, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\gradio\blocks.py", line 1522, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\gradio\blocks.py", line 1144, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\gradio\utils.py", line 674, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\demo\gradio_animate.py", line 30, in animate
return animator(
^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\demo\animate.py", line 236, in __call__
sample = self.pipeline(
^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\magicanimate\pipelines\pipeline_animation.py", line 654, in __call__
ref_image_latents = self.images2latents(source_image[None, :], latents_dtype).cuda()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\magicanimate\pipelines\pipeline_animation.py", line 394, in images2latents
latents.append(self.vae.encode(images[frame_idx:frame_idx+1])['latent_dist'].mean * 0.18215)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\diffusers\models\autoencoder_kl.py", line 260, in encode
h = self.encoder(x)
^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\diffusers\models\vae.py", line 144, in forward
sample = self.mid_block(sample)
^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\diffusers\models\unet_2d_blocks.py", line 562, in forward
hidden_states = attn(hidden_states, temb=temb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\diffusers\models\attention_processor.py", line 417, in forward
return self.processor(
^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\diffusers\models\attention_processor.py", line 1036, in __call__
hidden_states = F.scaled_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\gradio\queueing.py", line 501, in process_events
response = await self.call_prediction(awake_events, batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\DisplacementMap\MagicAnimate_forwin\magic-animate-for-windows\venv\Lib\site-packages\gradio\queueing.py", line 465, in call_prediction
raise Exception(str(error) if show_error else None) from error
Exception: None
I am running a GTX 980 with Python 3.11.7 and torch-2.0.1cu118-cp311-cp311-win_amd64
C:\Users\PatZ>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Feb__8_05:53:42_Coordinated_Universal_Time_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0
Your torch is built for cuda 11.8, but you have cuda 12.1 installed locally...
Please change requirements-windows from --extra-index-url https://download.pytorch.org/whl/cu118
to --extra-index-url https://download.pytorch.org/whl/cu121
C:\Users\PatZ>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
C:\Users\PatZ>time
The current time is: 16:53:20.28
Enter the new time:
C:\Users\PatZ>date
The current date is: 12/12/2023 Tue
Enter the new date: (mm-dd-yy)
Should be this; I didn't reopen the cmd window after I installed the 11.8 cuda, sorry.
We recommend python 3.10 because some deps have errors in 3.11.
C:\Users\PatZ>python --version
Python 3.10.11
C:\Users\PatZ>time
The current time is: 17:23:53.55
Enter the new time:
C:\Users\PatZ>date
The current date is: 12/12/2023 Tue
Enter the new date: (mm-dd-yy)
C:\Users\PatZ>
--extra-index-url https://download.pytorch.org/whl/cu118/torch-2.0.1%2Bcu118-cp310-cp310-win_amd64.whl
I installed python 3.10 but got the same error
If your local cuda is 12.1, please install cu121.
It is a cuda environment problem...
What do you mean by local cuda? Isn't the cuda version dependent on which CUDA Toolkit you have installed on your machine? I had cuda 11.1 at the beginning (from nvidia control panel - system info), then I saw the project needs 11.3 or above, so I installed the 12.3 version, but torch does not support 12.3 yet, so I installed 12.1. Soon after, I figured out the program requires v0.15 torchvision, which will only work with v2.0 torch, and v2.0 torch does not support cuda 12.1, so finally I installed 11.8.
Check your environment path in windows; if you have multiple cuda versions, it will use the latest one.
We use the venv's torch with a cuda version like cu118 or cu121,
but your system should have a cuda installed as well.
This is my environment path, and pip list (after venv/scripts/activate) followed by nvcc --version
As far as I know, the GTX 980 uses compute capability SM 5.2.
I guess cuda 11.8 or xformers==0.0.22 deprecated SM 5.2
(xformers 0.0.22, maybe).
You can downgrade these versions for your GPU (GTX 980).
|
gharchive/issue
| 2023-12-12T08:39:11 |
2025-04-01T04:35:49.713696
|
{
"authors": [
"sdbds",
"swpatrick"
],
"repo": "sdbds/magic-animate-for-windows",
"url": "https://github.com/sdbds/magic-animate-for-windows/issues/11",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1256179192
|
rework json / line reassembly / splitting
test for linting
i'll let it run for a day and let you know when you can merge, thanks!
|
gharchive/pull-request
| 2022-06-01T14:41:33 |
2025-04-01T04:35:49.755202
|
{
"authors": [
"wiedehopf"
],
"repo": "sdr-enthusiasts/acars_router",
"url": "https://github.com/sdr-enthusiasts/acars_router/pull/48",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2317266684
|
Not really an issue, but works with dotool, adding Wayland compatibility
Hello,
I just swapped in dotool everywhere xdotool was in the script and having installed the former, which is Wayland compatible, found your script worked great!
Did you try with ydotool? dotool is not available on Debian/Ubuntu.
|
gharchive/issue
| 2024-05-25T21:58:43 |
2025-04-01T04:35:49.760753
|
{
"authors": [
"dkragen",
"emk2203"
],
"repo": "sdushantha/fontpreview",
"url": "https://github.com/sdushantha/fontpreview/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
314637098
|
HelpWindowTest: fix failing test in non-headless mode
HelpWindowTest#focus_helpWindowNotFocused_focused() test checks that
the HelpWindow#focus() method will cause the HelpWindow to be in focus
when called. The test first asserts that the HelpWindow is not in focus
after it is shown, before calling this method and asserting that
the HelpWindow is now in focus.
When tests are run in non-headless mode, the HelpWindow will be in
focus immediately after it is shown, thus our first assertion is
incorrect, causing the test to consistently fail in non-headless mode.
Let's add a method GuiRobot#removeFocus(), and update the test to call
this method to remove focus from the HelpWindow after it is shown
to ensure the first assertion is correct in non-headless mode.
@vivekscl @Rinder5 for your review.
Actually, should I add in the comments that the removeFocus() does not work in headless mode due to an issue with Monocle, and make the test return immediately if in headless mode?
@yamidark
Actually, should I add in the comments that the removeFocus() does not work in headless mode due to an issue with Monocle, and make the test method return immediately if in headless mode?
Actually, I'm not sure why the test is even run in headless mode? For headless mode to be useful its results need to match the real-world head-full mode, otherwise its test results are meaningless since the fact that a test passes/fails in headless mode does not mean that it would pass/fail in the real-world.
If Monocle's window focusing behavior does not match the "real world" then there is no point trying to run focus tests with it. Better to be explicit about that by printing a gigantic "TEST SKIPPED IN HEADLESS MODE, RUN IT IN HEAD-FULL MODE INSTEAD" message.
If Monocle's window focusing behavior does not match the "real world" then there is no point trying to run focus tests with it.
Actually I take that back. There's still some value in the test if it fails if requestFocus is not called for whatever reason.
@yamidark
Sorry, I assert again that the test should not be run in headless mode :-). It's relying on undefined behavior, and I think that is part of the reason why HelpWindowTest is stalling our builds.
Consider this: (Make sure to run it in headless mode)
package seedu.address.ui;
import static org.junit.Assert.assertTrue;
import org.junit.Test;
import org.testfx.api.FxRobot;
import org.testfx.api.FxToolkit;
import javafx.scene.Scene;
import javafx.scene.layout.Pane;
import javafx.stage.Stage;
public class HelpWindowTest {
Stage stage1;
Stage stage2;
@Test
public void test() throws Exception {
FxRobot fxRobot = new FxRobot();
FxToolkit.registerPrimaryStage();
fxRobot.interact(() -> {
stage1 = new Stage();
stage1.setScene(new Scene(new Pane()));
stage2 = new Stage();
stage2.setScene(new Scene(new Pane()));
});
fxRobot.interact(stage1::show);
assertTrue(stage1.isFocused()); // NOTE: stage1 is focused once it is shown!!!
}
}
Note that upon calling stage1::show, stage1 is immediately focused.
Now, consider the following case where stage1 and stage2 share the same scene:
package seedu.address.ui;
import static org.junit.Assert.assertTrue;
import org.junit.Test;
import org.testfx.api.FxRobot;
import org.testfx.api.FxToolkit;
import javafx.scene.Scene;
import javafx.scene.layout.Pane;
import javafx.stage.Stage;
public class HelpWindowTest {
Stage stage1;
Stage stage2;
@Test
public void test() throws Exception {
FxRobot fxRobot = new FxRobot();
FxToolkit.registerPrimaryStage();
fxRobot.interact(() -> {
Scene scene = new Scene(new Pane());
stage1 = new Stage();
stage1.setScene(scene);
stage2 = new Stage();
stage2.setScene(scene);
});
fxRobot.interact(stage1::show);
assertTrue(stage1.isFocused()); // ASSERTION FAILURE!!!!
}
}
stage1 is now not automatically focused when it is shown.
So, on normal operation in headless mode stages are supposed to be focused automatically when they are shown. But when stages share a Scene (which the JavaFX docs specifically say not to), then the focus does not occur.
I think this is exactly what is happening in HelpWindowTest. Specifically this:
Stage helpWindowStage = FxToolkit.setupStage((stage) -> stage.setScene(helpWindow.getRoot().getScene()));
I also believe that this is tied to HelpWindowTest randomly stalling our builds, since from my investigation it stalls due to a deadlock between the Application thread and the renderer thread because the Application thread miscounts the number of scenes.
@pyokagan Should I also make the other tests in HelpWindowTest also skip in headless mode, since they also seem to have undefined behaviour as well?
@yamidark Please submit another version because your PR has been modified.
|
gharchive/pull-request
| 2018-04-16T12:53:05 |
2025-04-01T04:35:49.793946
|
{
"authors": [
"pyokagan",
"yamidark"
],
"repo": "se-edu/addressbook-level4",
"url": "https://github.com/se-edu/addressbook-level4/pull/880",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
34146564
|
GitHub issue editor (Markdown) study notes
GitHub issue editor (Markdown) study notes
Code block
<html>
</html>
http://www.baidu.com/
Autolink
http://www.baidu.com/
Bold
China
This is h1
This is h2
A p tag is used here.
Lists
Unordered
Item 1
Item 2
Item 3
Ordered
Item 1
Item 2
Item 3
Horizontal rule
Various emoji:
:+1:
:-1:
:airplane:
:art:
:bear:
:beer:
:bike:
:bomb:
:book:
:bulb:
:bus:
:cake:
:calling:
:clap:
:computer:
:cool:
:cop:
:email:
:feet:
:fire:
:fish:
:fist:
:gift:
:hammer:
:heart:
:iphone:
:key:
:leaves:
:lipstick:
:lock:
:mag:
:mega:
:memo:
:moneybag:
:new:
:octocat:
:ok:
:pencil:
:punch:
:runner:
:scissors:
:ski:
:smoking:
:sparkles:
:star:
:sunny:
:taxi:
:thumbsdown:
:thumbsup:
:tm:
:tophat:
:train:
:v:
:v2:
:vs:
:warning:
:wheelchair:
:zap:
:zzz:
Corresponding rendering
:+1:
:-1:
:airplane:
:art:
:bear:
:beer:
:bike:
:bomb:
:book:
:bulb:
:bus:
:cake:
:calling:
:clap:
:computer:
:cool:
:cop:
:email:
:feet:
:fire:
:fish:
:fist:
:gift:
:hammer:
:heart:
:iphone:
:key:
:leaves:
:lipstick:
:lock:
:mag:
:mega:
:memo:
:moneybag:
:new:
:octocat:
:ok:
:pencil:
:punch:
:runner:
:scissors:
:ski:
:smoking:
:sparkles:
:star:
:sunny:
:taxi:
:thumbsdown:
:thumbsup:
:tm:
:tophat:
:train:
:v:
:v2:
:vs:
:warning:
:wheelchair:
:zap:
:zzz:
#271
:1st_place_medal:
|
gharchive/issue
| 2014-05-23T05:27:20 |
2025-04-01T04:35:49.812308
|
{
"authors": [
"kinneylee",
"yanyinxi"
],
"repo": "seajs/seajs",
"url": "https://github.com/seajs/seajs/issues/1214",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1587562082
|
chore: When PR is first committed, E2E tests are performed by default
Describe what this PR does / why we need it
When PR is first committed, E2E tests are performed by default
Does this pull request fix one issue?
Describe how you did it
Describe how to verify it
Special notes for reviews
Codecov Report
Base: 18.34% // Head: 18.34% // No change to project coverage :thumbsup:
Coverage data is based on head (c7072e7) compared to base (5f73ef4).
Patch has no changes to coverable lines.
:mega: This organization is not using Codecov’s GitHub App Integration. We recommend you install it so Codecov can continue to function properly for your repositories. Learn more
Additional details and impacted files
@@ Coverage Diff @@
## main #2045 +/- ##
=======================================
Coverage 18.34% 18.34%
=======================================
Files 101 101
Lines 9222 9222
=======================================
Hits 1692 1692
Misses 7299 7299
Partials 231 231
Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here.
:umbrella: View full report at Codecov.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
|
gharchive/pull-request
| 2023-02-16T12:07:57 |
2025-04-01T04:35:49.819554
|
{
"authors": [
"codecov-commenter",
"zhy76"
],
"repo": "sealerio/sealer",
"url": "https://github.com/sealerio/sealer/pull/2045",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1127735131
|
Update build configuration
The build configuration of this project is outdated and may no longer work.
This pull request will be merged automatically if there are no conflicts.
:tada: This PR is included in version 4.0.29 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2022-02-08T20:38:55 |
2025-04-01T04:35:49.822120
|
{
"authors": [
"comgit"
],
"repo": "sealsystems/node-request-service",
"url": "https://github.com/sealsystems/node-request-service/pull/168",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1921397946
|
Menu fixes
Closes #425
See commits for list of changes.
Notable things
Now calculating container dimensions via body which doesn't change depending on content.
Demo
https://github.com/seamapi/react/assets/11449462/bce78d7b-9f65-4875-a240-f87c45d436e7
Filter button is showing again.
Opening nested menu will align to correct anchor.
Multiple menus in storybook still open / are positioned correctly.
Menus only open up if there is enough space. No clipping.
Opening a menu after scrolling (and while scrolling) still positions the menu correctly.
@mikewuu should there be a down-chevron icon in that inner menu trigger? Looks like it's not rendering (I know this is unrelated to the changes made here). Could this be another instance of the mask ID conflicts?
@xplato indeed it is. fixed in https://github.com/seamapi/react/pull/503/commits/340bb8feef9f06f443d93ad0aa3851057b5147cb
|
gharchive/pull-request
| 2023-10-02T07:36:25 |
2025-04-01T04:35:49.825989
|
{
"authors": [
"mikewuu",
"xplato"
],
"repo": "seamapi/react",
"url": "https://github.com/seamapi/react/pull/503",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1356176723
|
Object Pages Not Shown in Spotter when Contents Matches
Reported by @RalfBarkow
Asked on GT Discord 2022-08-30
This can be fixed by altering #gtSpotterContainingSubPartsFor: to send pagesDo: per @chisandrei on Discord.
However, it seemed very slow, so we will wait for two things:
new streaming Spotter implementation, the release of which is apparently imminent
time to profile
|
gharchive/issue
| 2022-08-30T18:57:56 |
2025-04-01T04:35:49.843777
|
{
"authors": [
"seandenigris"
],
"repo": "seandenigris/Objective-Lepiter",
"url": "https://github.com/seandenigris/Objective-Lepiter/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
211525341
|
Spelling and grammar corrections
Still coming along nicely @seankross!
Thank you so much for the edits!
|
gharchive/pull-request
| 2017-03-02T21:35:52 |
2025-04-01T04:35:49.844662
|
{
"authors": [
"jonmcalder",
"seankross"
],
"repo": "seankross/the-unix-workbench",
"url": "https://github.com/seankross/the-unix-workbench/pull/5",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2538151552
|
Bump rustls-native-certs
The current version of reqwest will pull in two versions of rustls-native-certs if that feature is enabled; this just bumps the version so that there is only one.
├── rustls-native-certs v0.7.3
│   └── reqwest v0.12.7
├── rustls-native-certs v0.8.0
│   └── hyper-rustls v0.27.3
│       └── reqwest v0.12.7
Looks like the API changed in the new version. Would you want to look in how to upgrade?
|
gharchive/pull-request
| 2024-09-20T07:59:25 |
2025-04-01T04:35:49.845840
|
{
"authors": [
"Jake-Shadle",
"seanmonstar"
],
"repo": "seanmonstar/reqwest",
"url": "https://github.com/seanmonstar/reqwest/pull/2427",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1405958834
|
[vacuum] cannot hydrate needle from file: size mismatch
Describe the bug
One volume regularly can't make a compact
System Setup
3.31
Expected behavior
I would like a recommendation, preferably right in the log, on how to fix such a volume when there is no second replica
Additional context
Oct 12, 2022 @ 15:14:36.665 | E1012 10:14:36.664977 volume_grpc_vacuum.go:63 failed compact volume 1777: cannot hydrate needle from file: size mismatch
volume.list
DataNode fast-volume-3:8080 hdd(volume:1278/1600 active:1278 free:322 remote:0)
volume id:853 size:620119072 collection:"monitoring-thanos" file_count:42 delete_count:1 deleted_byte_count:31457301 replica_placement:10 version:3 compact_revision:33 modified_at_second:1664541139
DataNode fast-volume-0:8080 hdd(volume:1259/1600 active:1258 free:341 remote:0)
volume id:853 size:1079987944 collection:"monitoring" file_count:68 delete_count:14 deleted_byte_count:396953201 replica_placement:10 version:3 compact_revision:32 modified_at_second:1664541139
volume.check.disk
> volume.check.disk -force -slow -volumeId 853
volume 853 fast-volume-3:8080 has 40 entries, fast-volume-0:8080 missed 0 entries
volume 853 fast-volume-0:8080 has 40 entries, fast-volume-3:8080 missed 0 entries
volume.vacuum -volumeId 853
this needs to set the threshold.
You need to use:
volume.fsck -v -verifyNeedles -forcePurging -reallyDeleteFromVolume
https://github.com/seaweedfs/seaweedfs/pull/3879
It takes too long to be reproduced after volume.fsck -v -verifyNeedles -forcePurging -reallyDeleteFromVolume
|
gharchive/issue
| 2022-10-12T10:24:04 |
2025-04-01T04:35:49.884770
|
{
"authors": [
"chrislusf",
"kmlebedev"
],
"repo": "seaweedfs/seaweedfs",
"url": "https://github.com/seaweedfs/seaweedfs/issues/3835",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
527492223
|
currentResX/Y seems incorrect
My goal is to differentiate between the native resolution of a screen and its scaled current resolution:
$ node
Welcome to Node.js v12.10.0.
Type ".help" for more information.
> const si = require('systeminformation');
undefined
> si.graphics().then(data => console.log(data))
Promise { <pending> }
> {
controllers: [
{
vendor: 'Intel',
model: 'Intel Iris Plus Graphics 650',
bus: 'Built-In',
vram: 1536,
vramDynamic: true
},
{ vendor: '', model: '', bus: '', vram: -1, vramDynamic: false }
],
displays: [
{
vendor: '',
model: 'Color LCD',
main: false,
builtin: false,
connection: '',
sizex: -1,
sizey: -1,
pixeldepth: 30,
resolutionx: 2560,
resolutiony: 1600,
currentResX: 2560,
currentResY: 1600,
positionX: 0,
positionY: 0,
currentRefreshRate: -1
},
{
vendor: '',
model: 'LG QHD',
main: true,
builtin: false,
connection: 'DisplayPort',
sizex: -1,
sizey: -1,
pixeldepth: 30,
resolutionx: 2560,
resolutiony: 1440,
currentResX: 2560,
currentResY: 1440,
positionX: 0,
positionY: 0,
currentRefreshRate: -1
}
]
}
The resolution on my macbook's retina screen is 1680x1050 -- I don't think it's possible to get 2560x1600 native resolution on the 13.3" screen.
If we're just using system_profiler SPDisplaysDataType here, it won't provide the scaled resolution we're looking for.
Fortunately (as you have a Retina display) your MacBook Pro really has a native screen resolution of 2560 x 1600. See also https://www.apple.com/macbook-pro-13/specs/ ... under "Display" in the first text line you should see "... 2560‑by‑1600 native resolution ...". Also on your Mac, in the apple menu - About my Mac - in the display tab you should also see this. So I also tested it here on my machine and it also shows the correct resolution. Closing it here.
@sebhildebrandt fwiw, a reliable way to get scaled resolution is this: https://wiki.libsdl.org/SDL_GetCurrentDisplayMode
If you don't want to use SDL, I am sure you can look at the source of how they accomplish it.
|
gharchive/issue
| 2019-11-23T01:47:11 |
2025-04-01T04:35:49.913939
|
{
"authors": [
"Suhail",
"sebhildebrandt"
],
"repo": "sebhildebrandt/systeminformation",
"url": "https://github.com/sebhildebrandt/systeminformation/issues/300",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1575927418
|
networkInterfaces type definition seems broken
In recent commit 21b1d62, networkInterfaces() return type was updated from Promise<Systeminformation.NetworkInterfacesData[]> to Promise<Systeminformation.NetworkInterfacesData[] | Systeminformation.NetworkInterfacesData> with no apparent changes to implementation.
Was this change really valid? The current implementation seems to always return an array of NetworkInterfacesData. Also, the corresponding documentation was not updated.
Note that this breaks typescript compilation in code using the result directly as an array :
error TS2339: Property 'find' does not exist on type 'NetworkInterfacesData | NetworkInterfacesData[]'.
Property 'find' does not exist on type 'NetworkInterfacesData'.
@AltarBeastiful ... in the docs https://systeminformation.io/network.html there is a section Get Default Interface only where you can call the function with the string default. Here the function returns just the default interface (not returned in an array).
I am not sure, where your error message occurs. Is it somewhere in your code where you try to find a specific interface?
I am aware that the current solution is not ideal. This will change in version 6, as that change would be a breaking change.
@AltarBeastiful closing it for now.
@AltarBeastiful ... in the docs https://systeminformation.io/network.html there is a section Get Default Interface only where you can call the function with the string default. Here the function returns just the default interface (not returned in an array).
I am not sure, where your error message occurs. Is it somewhere in your code where you try to find a specific interface?
I am aware that the current solution is not ideal. This will change in version 6, as that change would be a breaking change.
Sorry for commenting in a closed issue,
your suggestion still won't solve the typing issues for me.
const relevantNetwork = await si.networkInterfaces("default");
const speedBytes = (relevantNetwork.speed * 1000000) / 8;
// TS2339: Property 'speed' does not exist on type 'NetworkInterfacesData | NetworkInterfacesData[]'. Property 'speed' does not exist on type 'NetworkInterfacesData[]'.
@Evilu ... I need to have a look at it. Will provide a fix as soon as possible (sorry for the inconvenience)
@Evilu , when I am running this code:
const si = require('../systeminformation/lib/index.js');
si.networkInterfaces('default').then(data => {
console.log(data);
})
I get:
{
iface: 'en0',
ifaceName: 'en0',
default: true,
ip4: '192.xxx.xxx.xxx',
ip4subnet: '255.255.255.0',
ip6: 'fe80::xxxx:xxxx:xxxx:xxxx',
ip6subnet: 'ffff:ffff:ffff:ffff::',
mac: 'xx:xx:xx:xx:xx:xx',
internal: false,
virtual: false,
operstate: 'up',
type: 'wireless',
duplex: 'full',
mtu: 1500,
speed: 130.26,
dhcp: true,
dnsSuffix: '',
ieee8021xAuth: '',
ieee8021xState: '',
carrierChanges: 0
}
So speed is actually there ...
Having a look at the interface definition in index.d.ts we have:
interface NetworkInterfacesData {
iface: string;
ifaceName: string;
default: boolean;
ip4: string;
ip4subnet: string;
ip6: string;
ip6subnet: string;
mac: string;
internal: boolean;
virtual: boolean;
operstate: string;
type: string;
duplex: string;
mtu: number | null;
speed: number | null; <--------------
dhcp: boolean;
dnsSuffix: string;
ieee8021xAuth: string;
ieee8021xState: string;
carrierChanges: number;
}
export function networkInterfaces(
cb?:
| ((data: Systeminformation.NetworkInterfacesData[] | Systeminformation.NetworkInterfacesData) => any)
| boolean
| string,
rescan?: boolean,
defaultString?: string
): Promise<Systeminformation.NetworkInterfacesData[] | Systeminformation.NetworkInterfacesData>;
So I cannot spot, why we get this error ... do you have any idea??
On what OS are you running your code?
@Evilu ... probably you need to check if your result is an array or not ...
const relevantNetwork = await si.networkInterfaces("default");
if (!Array.isArray(relevantNetwork)) {
const speedBytes = (relevantNetwork.speed * 1000000) / 8;
}
Good idea, also it'll probably shout about speed being null.
Thanks!
let speedBytes;
if (!Array.isArray(relevantNetwork) && relevantNetwork.speed) {
    speedBytes = (relevantNetwork.speed * 1000000) / 8;
}
Hi, apologies from me for commenting on a closed issue too.
I am using this code:
const networkInterfaces = (await si.networkInterfaces()).filter((adapter) => { ... });
and getting a type error
error TS2339: Property 'filter' does not exist on type 'NetworkInterfacesData | NetworkInterfacesData[]'.
Property 'filter' does not exist on type 'NetworkInterfacesData'.
const networkInterfaces = (await si.networkInterfaces()).filter((adapter) => {
Granted, I am not totally confident with typescript, but if networkInterfaces() is returning an array, then surely the filter method should exist?
This started appearing when I go from: (but not sure which specific version it starts creating this ts error)
- "systeminformation": "5.16.9",
+ "systeminformation": "^5.21.9",
Thanks for any input from anyone here!
@bwp91 ... as it can be a single interface (e.g. when calling si.networkInterfaces('default')) OR an array, you need to check first if it is an array ... unfortunately I introduced this some time ago (forgetting to change also the types) and changing it to always return an array would be a breaking change (so a major version bump). In version 6 I corrected it besides some more breaking changes ... but this version is not yet ready and released.
Hi @sebhildebrandt, thank you for your quick reply. So just to clarify: if the 'default' param isn't given, and the only network interface is the default, this would still be returned as a single array of one object, right?
@bwp91 ... not really, if you have no parameter, you get an array ... but the typescript types reflect, that there can be two types of results and therefore it complains about the .filter (which would only be available in one of the two types of results) even we are in your case sure, that this is an array. But when you include this check, the typescript error will disappear.
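A minimal sketch of that check for the no-parameter case, so TypeScript narrows the union before .filter is called (the internal-interface predicate is just an example, not from the thread):
import * as si from 'systeminformation';

async function externalInterfaces() {
  const result = await si.networkInterfaces();
  // Without the 'default' argument this is an array at runtime,
  // but the declared type is a union, so narrow it first:
  const adapters = Array.isArray(result) ? result : [result];
  return adapters.filter((adapter) => !adapter.internal);
}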
if you have no parameter, you get an array
I think this what I am trying to confirm - forgetting any issues of typescript now - if I do NOT pass the default param, and the ONLY network interface found is the default, then this will be returned in an ARRAY of ONE result? rather than just an OBJECT?
Sorry for caps, is for clarification, not shouting 😁
@bwp91 correct ... only if you provide the default param, you get an object (and not an array) ... No problem about the caps ;-) ... all fine ... I am sorry that I somehow messed it up with the two types of result. As said, this will be corrected in version 6.
no worries at all appreciate your fast responses. and can just ts-ignore this until next release 😁
appreciate your work on this repo!
|
gharchive/issue
| 2023-02-08T11:11:17 |
2025-04-01T04:35:49.928406
|
{
"authors": [
"AltarBeastiful",
"Evilu",
"bwp91",
"sebhildebrandt"
],
"repo": "sebhildebrandt/systeminformation",
"url": "https://github.com/sebhildebrandt/systeminformation/issues/775",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
94929982
|
Add YoutubeApiBridge.php
The YoutubeApiBridge utilizes the YouTube Data API to receive channel or playlist information. About 200 entries can be requested within 8 seconds (could be more with faster internet I guess), however a valid API key with active YouTube Data API is required (must be provided with the request).
a valid API key with active YouTube Data API is required
I personally reject authenticated API (that's the very reason I started messing with Twitter and this led to the rss-bridge project), yet I'm gonna leave the PR open for other collaborators: political reasons of one person don't supersede practical reasons of the many.
PR closed: no feedback / master refactoring
|
gharchive/pull-request
| 2015-07-14T11:58:27 |
2025-04-01T04:35:49.930657
|
{
"authors": [
"LogMANOriginal",
"mitsukarenai"
],
"repo": "sebsauvage/rss-bridge",
"url": "https://github.com/sebsauvage/rss-bridge/pull/139",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2493683581
|
The database file gets locked
Issue Type
Others
Have you searched for existing documents and issues?
Yes
OS Platform and Distribution
centos7
All_in_one Version
0.6
Kuscia Version
0.7
What happend and What you expected to happen.
The database file is locked. Why does this happen?
Log output.
2024-08-29 15:32:32 [http-nio-8080-exec-4] WARN o.h.e.jdbc.spi.SqlExceptionHelper - SQL Error: 5, SQLState: null
2024-08-29 15:32:32 [http-nio-8080-exec-4] ERROR o.h.e.jdbc.spi.SqlExceptionHelper - [SQLITE_BUSY] The database file is locked (database is locked)
2024-08-29 15:32:32 [http-nio-8080-exec-4] ERROR o.s.s.w.e.SecretpadExceptionHandler - handler Exception error
org.springframework.dao.CannotAcquireLockException: could not execute statement [[SQLITE_BUSY] The database file is locked (database is locked)] [update user_tokens set gmt_create=?,gmt_modified=?,gmt_token=?,is_deleted=?,name=?,session_data=? where token=?]; SQL [update user_tokens set gmt_create=?,gmt_modified=?,gmt_token=?,is_deleted=?,name=?,session_data=? where token=?]
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:262)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:232)
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.translateExceptionIfPossible(AbstractEntityManagerFactoryBean.java:550)
at org.springframework.dao.support.ChainedPersistenceExceptionTranslator.translateExceptionIfPossible(ChainedPersistenceExceptionTranslator.java:61)
at org.springframework.dao.support.DataAccessUtils.translateIfNecessary(DataAccessUtils.java:243)
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:152)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$CrudMethodMetadataPopulatingMethodInterceptor.invoke(CrudMethodMetadataPostProcessor.java:164)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:244)
at jdk.proxy2/jdk.proxy2.$Proxy247.saveAndFlush(Unknown Source)
at org.secretflow.secretpad.web.interceptor.LoginInterceptor.processByUserRequest(LoginInterceptor.java:198)
at org.secretflow.secretpad.web.interceptor.LoginInterceptor.preHandle(LoginInterceptor.java:149)
at org.springframework.web.servlet.HandlerExecutionChain.applyPreHandle(HandlerExecutionChain.java:146)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1076)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:974)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:914)
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:590)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885)
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:658)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:205)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
at org.secretflow.secretpad.web.filter.AddResponseHeaderFilter.doFilterInternal(AddResponseHeaderFilter.java:61)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:109)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:482)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:115)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:673)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:391)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:896)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1744)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.hibernate.exception.LockAcquisitionException: could not execute statement [[SQLITE_BUSY] The database file is locked (database is locked)] [update user_tokens set gmt_create=?,gmt_modified=?,gmt_token=?,is_deleted=?,name=?,session_data=? where token=?]
at org.hibernate.community.dialect.SQLiteDialect.lambda$buildSQLExceptionConversionDelegate$1(SQLiteDialect.java:453)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:56)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:108)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:278)
at org.hibernate.engine.jdbc.mutation.internal.AbstractMutationExecutor.performNonBatchedMutation(AbstractMutationExecutor.java:107)
at org.hibernate.engine.jdbc.mutation.internal.MutationExecutorSingleNonBatched.performNonBatchedOperations(MutationExecutorSingleNonBatched.java:40)
at org.hibernate.engine.jdbc.mutation.internal.AbstractMutationExecutor.execute(AbstractMutationExecutor.java:52)
at org.hibernate.persister.entity.mutation.UpdateCoordinatorStandard.doStaticUpdate(UpdateCoordinatorStandard.java:771)
at org.hibernate.persister.entity.mutation.UpdateCoordinatorStandard.performUpdate(UpdateCoordinatorStandard.java:327)
at org.hibernate.persister.entity.mutation.UpdateCoordinatorStandard.coordinateUpdate(UpdateCoordinatorStandard.java:244)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2754)
at org.hibernate.action.internal.EntityUpdateAction.execute(EntityUpdateAction.java:166)
at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:635)
at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:502)
at org.hibernate.event.internal.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:364)
at org.hibernate.event.internal.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:39)
at org.hibernate.event.service.internal.EventListenerGroupImpl.fireEventOnEachListener(EventListenerGroupImpl.java:127)
at org.hibernate.internal.SessionImpl.doFlush(SessionImpl.java:1412)
at org.hibernate.internal.SessionImpl.flush(SessionImpl.java:1398)
at jdk.internal.reflect.GeneratedMethodAccessor153.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.springframework.orm.jpa.SharedEntityManagerCreator$SharedEntityManagerInvocationHandler.invoke(SharedEntityManagerCreator.java:312)
at jdk.proxy2/jdk.proxy2.$Proxy216.flush(Unknown Source)
at org.springframework.data.jpa.repository.support.SimpleJpaRepository.flush(SimpleJpaRepository.java:663)
at org.springframework.data.jpa.repository.support.SimpleJpaRepository.saveAndFlush(SimpleJpaRepository.java:630)
at jdk.internal.reflect.GeneratedMethodAccessor156.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.springframework.data.repository.core.support.RepositoryMethodInvoker$RepositoryFragmentMethodInvoker.lambda$new$0(RepositoryMethodInvoker.java:288)
at org.springframework.data.repository.core.support.RepositoryMethodInvoker.doInvoke(RepositoryMethodInvoker.java:136)
at org.springframework.data.repository.core.support.RepositoryMethodInvoker.invoke(RepositoryMethodInvoker.java:120)
at org.springframework.data.repository.core.support.RepositoryComposition$RepositoryFragments.invoke(RepositoryComposition.java:516)
at org.springframework.data.repository.core.support.RepositoryComposition.invoke(RepositoryComposition.java:285)
at org.springframework.data.repository.core.support.RepositoryFactorySupport$ImplementationMethodExecutionInterceptor.invoke(RepositoryFactorySupport.java:628)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
at org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.doInvoke(QueryExecutorMethodInterceptor.java:168)
at org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.invoke(QueryExecutorMethodInterceptor.java:143)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
at org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:70)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
at org.springframework.data.repository.core.support.EventPublishingRepositoryProxyPostProcessor$EventPublishingMethodInterceptor.invoke(EventPublishingRepositoryProxyPostProcessor.java:108)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:123)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:391)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:119)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:137)
... 59 common frames omitted
Caused by: org.sqlite.SQLiteException: [SQLITE_BUSY] The database file is locked (database is locked)
at org.sqlite.core.DB.newSQLException(DB.java:1179)
at org.sqlite.core.DB.newSQLException(DB.java:1190)
at org.sqlite.core.DB.execute(DB.java:985)
at org.sqlite.core.DB.executeUpdate(DB.java:1054)
at org.sqlite.jdbc3.JDBC3PreparedStatement.lambda$executeLargeUpdate$2(JDBC3PreparedStatement.java:119)
at org.sqlite.jdbc3.JDBC3Statement.withConnectionTimeout(JDBC3Statement.java:454)
at org.sqlite.jdbc3.JDBC3PreparedStatement.executeLargeUpdate(JDBC3PreparedStatement.java:118)
at org.sqlite.jdbc3.JDBC3PreparedStatement.executeUpdate(JDBC3PreparedStatement.java:100)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:275)
... 103 common frames omitted
datasource:
  hibernate.dialect: org.hibernate.dialect.SQLiteDialect
  driver-class-name: org.sqlite.JDBC
  url: jdbc:sqlite:./db/secretpad.sqlite
  hikari:
    idle-timeout: 60000
    maximum-pool-size: 30
    connection-timeout: 50000
The connection pool settings above were modified earlier.
1. Make sure that only one program operates on the database at a time. This avoids multiple programs locking the database file simultaneously.
2. If the problem persists, try restarting the database server, or check whether other programs on the server are currently using the database file.
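If the deployment has to stay on SQLite, a hedged sketch of settings that are often used to reduce SQLITE_BUSY errors (property names follow the Spring/Hikari configuration shown above; the values are assumptions, not something tested against Secretpad):
datasource:
  hikari:
    # SQLite allows only one writer at a time; a single connection avoids lock contention
    maximum-pool-size: 1
    # Wait up to 5 seconds for a lock instead of failing immediately with SQLITE_BUSY
    connection-init-sql: PRAGMA busy_timeout = 5000;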
|
gharchive/issue
| 2024-08-29T07:39:08 |
2025-04-01T04:35:49.959271
|
{
"authors": [
"BrainWH",
"wangzeyu135798"
],
"repo": "secretflow/secretpad",
"url": "https://github.com/secretflow/secretpad/issues/129",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
983717712
|
Understanding blockchain as a service
Proposed title of article
Understanding blockchain as a service (BaaS)
Introduction paragraph (2-3 paragraphs):
Blockchain-as-a-service (BaaS) is the third-party creation and management of cloud-based networks for companies in the business of building blockchain applications. BaaS is based on the software-as-a-service (SaaS) model and allows customers to leverage cloud-based solutions to build, host, and operate their own blockchain apps and related functions on the blockchain. BaaS helps customers achieve faster application development, lower maintenance costs, and quicker adoption of blockchain technologies.
Key takeaways:
understanding deeper concepts of blockchain
how to offer blockchain as a service.
implementation of blockchain as a service
References:
- https://www.trustradius.com/blockchain-as-a-service-baas
Templates to use as guides
How To Guide Template
Software Review Template
Tutorial Template
Good afternoon and thank you for submitting your topic suggestion. @crezyj
Your topic form has been entered into our queue and should be reviewed (for approval) as soon as a content moderator is finished reviewing the ones in the queue before it.
@crezyj
Sounds like a helpful topic - let's please be sure it adds value beyond what is in the official docs and what is covered in other blog sites (beyond a basic explanation - it would be best to reference your favorite intro article and build upon it).
Please be sure to double check that it does not overlap with any existing EngEd articles, articles on other blog sites, or any incoming EngEd topic suggestions (if you haven't already) to avoid any potential article closure, please reference any relevant EngEd articles in yours. - Approved
Thanks, I am working on the topic; it's almost done.
|
gharchive/issue
| 2021-08-31T10:48:06 |
2025-04-01T04:35:49.966917
|
{
"authors": [
"WanjaMIKE",
"crezyj",
"hectorkambow"
],
"repo": "section-engineering-education/engineering-education",
"url": "https://github.com/section-engineering-education/engineering-education/issues/3612",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1138005226
|
[Languages] Implementations of dependency injection in C#
Proposed title of the article
[Languages] Implementations of dependency injection in C#
Proposed article introduction
A dependency is an object that another object relies on. With dependency injection (also called inversion of control), an object that needs another object does not have to build it itself; the dependency is supplied from outside. Mocking the dependencies is an effective way to make testing more efficient.
If class A calls a method on class B, and class B calls a method on class C, then class A is dependent upon both classes B and C. Instead of requiring these classes to create instances of B and C, we can use dependency injection to provide an instance of class C to class B and an instance of class B to class A.
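As a minimal illustration of the constructor-injection case described above (all class names are hypothetical examples, not part of any existing project):
using System;

public interface IMessageSender
{
    void Send(string message);
}

public class EmailSender : IMessageSender
{
    public void Send(string message) => Console.WriteLine($"Email: {message}");
}

public class NotificationService
{
    private readonly IMessageSender _sender;

    // Constructor injection: the dependency is supplied from outside
    public NotificationService(IMessageSender sender) => _sender = sender;

    public void Notify(string message) => _sender.Send(message);
}

public static class Program
{
    public static void Main()
    {
        // Composition root: wire the concrete implementation in one place
        var service = new NotificationService(new EmailSender());
        service.Notify("Build finished");
    }
}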
Key takeaways
The article will cover:
An overview of dependency injection
How to implement dependency injection using method injection in C#
How to implement dependency injection using constructor injection in C#
How to implement dependency injection using property injection in C#
How to implement dependency injection using interface-based injection in C#
Article quality
This article would deal with explaining dependency injection and its implementation in C# in detail. To make my article unique I will create my own project to demonstrate the implementation process. Also, my article will be beginner-friendly and the code snippets will be well explained and tested.
References
N/A
Sounds like a helpful topic - let's please be sure it adds value beyond what is in any official docs and/or what is covered in other blog sites. (the articles should go beyond a basic explanation - and it is always best to reference any EngEd article and build upon it). @djayjames
Please be attentive to grammar/readability and make sure that you put your article through a thorough editing review prior to submitting it for final approval. (There are some great free tools that we reference in EngEd resources.)
ANY ARTICLE SUBMITTED WITH GLARING ERRORS WILL BE IMMEDIATELY CLOSED.
Please be sure to double-check that it does not overlap with any existing EngEd articles, articles on other blog sites, or any incoming EngEd topic suggestions (if you haven't already) to avoid any potential article closure, please reference any relevant EngEd articles in yours. - Approved
can anyone reopen this article for me please!
You may now link the PR!
|
gharchive/issue
| 2022-02-15T00:21:16 |
2025-04-01T04:35:49.972276
|
{
"authors": [
"Ericgacoki",
"ahmadmardeni1",
"djayjames"
],
"repo": "section-engineering-education/engineering-education",
"url": "https://github.com/section-engineering-education/engineering-education/issues/6659",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2469805185
|
fix(KONFLUX-3663): format PipelineRun files and upload SAST results
This update configures the SAST task to upload SARIF results to quay.io for long-term storage
Please note that this PR was automatically generated and may include unrelated changes due to automatic YAML formatting performed by yq
The YAML files will be indented using 2 spaces, if the YAML file uses indentationless list the automation will try to keep this format
The PR contains two separate commits:
Format YAML files: Ensures consistent indentation and formatting of the YAML files
Upload SAST results: Configures the PipelineRun files to enable uploading SARIF results to quay.io
Separating these changes into two commits simplifies the review process. The first commit focuses on indentation and formatting, while the second commit contains the semantic changes
Related:
https://issues.redhat.com/browse/KONFLUX-3663
https://issues.redhat.com/browse/KONFLUX-2263
/approve
|
gharchive/pull-request
| 2024-08-16T08:43:09 |
2025-04-01T04:35:49.991520
|
{
"authors": [
"JasonPowr",
"ccronca"
],
"repo": "securesign/secure-sign-operator",
"url": "https://github.com/securesign/secure-sign-operator/pull/556",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2650035468
|
Write tests for dalton-agent
Currently, there are no tests for the dalton-agent code.
The dalton-agent will need some refactoring in order to be testable.
Create a file of tests IN the dalton-agent directory
Use importlib to import from the dalton-agent file, since the file name contains a dash (see the sketch after this list)
Need to do some refactoring of dalton-agent.py before we can run tests on it
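A minimal sketch of the importlib approach mentioned above (the test file name and the smoke test are assumptions for illustration only):
import importlib.util
from pathlib import Path

# "dalton-agent.py" contains a dash, so a normal `import` statement cannot
# reference it; load it explicitly from its file path instead.
spec = importlib.util.spec_from_file_location(
    "dalton_agent", Path(__file__).parent / "dalton-agent.py"
)
dalton_agent = importlib.util.module_from_spec(spec)
spec.loader.exec_module(dalton_agent)

def test_module_loads():
    # Hypothetical smoke test: the module imported successfully
    assert dalton_agent is not None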
|
gharchive/issue
| 2024-11-11T18:11:43 |
2025-04-01T04:35:49.993524
|
{
"authors": [
"rkoumis"
],
"repo": "secureworks/dalton",
"url": "https://github.com/secureworks/dalton/issues/196",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
166234242
|
Linux Crash
As far as I know this isn't an issue with extraterm but I'm posting this anyway since it might help someone else. Upon launching extraterm it crashed because of a limit on the number of inotify watches.
Doing the following made the error go away which allowed extraterm to launch:
TypeError: Unable to watch path
If you get following error with a big traceback right after Atom starts:
TypeError: Unable to watch path
you have to increase number of watched files by inotify. For testing if this is the reason for this error you can issue
sudo sysctl fs.inotify.max_user_watches=32768
and restart Atom. If Atom now works fine, you can make this setting permanent:
echo 32768 | sudo tee -a /proc/sys/fs/inotify/max_user_watches
See also #2082.
Source: https://github.com/atom/atom/blob/master/docs/build-instructions/linux.md#typeerror-unable-to-watch-path
Thanks for the info. Extraterm uses inotify watches to monitor its theme files to do live updates which is very handy if you're working on a theme. It is not vital to the function of Extraterm for 95%+ of users. It should thus handle this error condition and not blow up. Or I could make it an option somewhere for those who need live theme loading. It is a bug in other words.
I've removed the code which watched the file system. This problem can't happen now.
|
gharchive/issue
| 2016-07-19T02:24:28 |
2025-04-01T04:35:50.006724
|
{
"authors": [
"aidanharris",
"sedwards2009"
],
"repo": "sedwards2009/extraterm",
"url": "https://github.com/sedwards2009/extraterm/issues/23",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2488568524
|
Make things work
Initially, this branch started out as adding an alarm to the CodeDeploy group. I deleted all that in https://github.com/seek-oss/aws-codedeploy-hooks/pull/58/commits/52d528e2a8ad108287c2424a0355b1413afcc1dc.
My experience with the alarm:
Going from healthy to unhealthy, a lambda erroring 5 times a minute wasn't quick enough to roll back the deployment
Every subsequent deploy back to health needed [skip alarm] otherwise it would autoroll back
I was getting internal CloudFormation errors 🤷
In all, I think the smoke testing stuff should be enough. Lambda deploys are too quick for the "slow" CloudWatch alarm to update, and we're not in ECS world with a persistent target group for blue/green that we alternate between.
Changed this PR significantly - worth another look
|
gharchive/pull-request
| 2024-08-27T07:51:09 |
2025-04-01T04:35:50.013213
|
{
"authors": [
"AaronMoat"
],
"repo": "seek-oss/aws-codedeploy-hooks",
"url": "https://github.com/seek-oss/aws-codedeploy-hooks/pull/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
132527375
|
If a trait name is already carrying a FullStory type tag, don't mess with that.
This allows users to specify that a quantity is "real" (i.e. float) even if the
momentary value happens to be an integer. It also passes type mismatches
through so FullStory can complain about it, instead of assuming that the user
must have meant avg_purchase_real to be a string just 'cause they botched the
value when passing the trait in. And that same avg_purchase_real won't
become avg_purchase_real_real, either, which is all good.
This is the second half of https://github.com/segment-integrations/analytics.js-integration-fullstory/pull/6
this makes sense to me @f2prateek we do something similar for Salesforce, which requires __c attached at the end of the custom properties so we do a look up to see if the last three chars is __c and leave it if it is.
Let's get this merged after #6 is sorted out. @fka3
Did you mean #6, which I closed already and split into #8 and #7? Or did you mean #7?
They're independent enough that I can land them in either order; I have some actual heat under #7 because customers are encountering it in the wild.
@fka3 sorry i meant #7
@f2prateek can you comment on this?
I think we should update the PR to allow type tags for ones we can detect automatically (string, bool, date, etc.) and maybe just have this logic for numbers or types that can't be automatically detected if needed by the client.
The 2nd change (as @fka3 suggested) we'd want to make is to automatically look for our defined fields in spec and define the appropriate type tag.
I think you're misunderstanding the intent, really... if the user TELLS YOU WHAT THEY WANT, in this case by explicitly putting the admittedly odd (and therefore unlikely to be accidental) type tagging into their name, then I think this code should pass that name through unchanged, no matter what. That's the big point.
Even if the user is "wrong" (maybe they say it's an int but they're passing a string, or vice versa), there are two possible errors: they might have mis-named the variable, or they may be making a mistake about the value passed. If you always try to infer the type, you're assuming the second error is never being made (and you're going to make truly bizarre variable names, like foo_int_str, because you'll tack _str onto foo_int because a string was passed to it). The fact that accepting the user's name ALSO lets them enforce foo_real even if the passed value happens to be integral is just a bonus you get from listening to what they said they wanted.
Absent that, you're already doing a decent job of best-effort figuring what type to use if there is no indication from the user; I don't have any problem with that!
It would be a good idea to specify types for your defined fields that we don't recognize, but those will already be going through convert, so the obvious ones should already be tagged normally. You did mention "total" as a spec'ed value (though I don't see it at https://segment.com/docs/spec/identify/#traits), so forcing that to total_real would probably be helpful, for example.
A customer using FullStory and Mixpanel will now have to see all their reports with foo_real in Mixpanel, just so that FullStory can use foo as a real type.
And this only compounds if each integration had a limitation like this. Imagine if Mixpanel needed all floating numbers to be tagged float, and Amplitude needed them to be tagged double. Would our users send the same property 3 times tagged differently for each tool, adding extra properties to each event?
analytics.track("Loaded a Page", {
size_real: 32,
size_double: 32,
size_float: 32
});
Additionally, size_double would actually become size_double_int and size_float_int in FullStory.
To reiterate - your approach only makes sense if a customer is using only FullStory, which needs type tags instead of dynamically adapting types to incoming data as most tools try to do.
When a customer absolutely needs to force a type tag - they can call into FullStory directly using our ready API. It's more verbose, but the data in the end tools will be cleaner.
analytics.track("Loaded a Page", { size: 32 }); // + pass a flag to ignore full story
analytics.ready(function(){
// Whatever the equivalent call is.
window.fs.track("Loaded a Page", { size_real: 32 });
});
I still feel like we're talking past each other here. Would a phone call or VC help?
I'm not suggesting that everybody tag anything, nor that (hypothetically) MixPanel and FullStory have dueling type tags. But if that world did develop, you'd fix it in these integration wrappers---FullStory would map all of { real, float, double } as _real, MixPanel's integration layer would map to float, etc.
But since I'm not suggesting that, that whole argument is a strawman anyway.
What I am suggesting is that if the user does offer FullStory style typing, for whatever reason (for example to make sure their real variables stay reals, even if the momentary value is integral), then that type tag should be respected, and that the integration layer shouldn't double it. And yes, they pay for that by having MixPanel variables that have typed names, true. It's harmless there and helpful here.
The ready() API is useful; I hadn't known about that. But it's going to, in effect, force all users who care about any real-valued variable to use analytics.ready(), at least if they use FullStory... which is going to cramp use of your analytics.identify(), which I assume you'd like to avoid.
So, to me, the downside of not doing this is that users either have to over-use analytics.ready(), or get doubled FullStory variables (foo_real and also foo_int once the numbers come up wrong). And if they naively try to do it the FullStory way, they get truly bizarre variables (foo_real_real and foo_real_int)... which are basically permanent once set.
And the downside of doing this... I honestly don't see it. There's not an objection in principle, because you do something similar for Salesforce (and neither of you have decried that as a historical mistake). There's not a burden on other tools, because you'd only get this behavior if you asked for it by using the type tags. There's not a future multi-tool tug-of-war risk, really, because (a) the other tools don't ask for tagging anyway, and (b) if they did, this integration layer is a perfect place to resolve the tug-of-war for each tool individually.
You guys own the gate, obviously, and although you and @hankim813 seem to disagree, you're (plural) in your rights to just say no. But I still don't understand the objection; I'll spot that FullStory's way is the oddball, but dealing with such variations is exactly why this layer should exist, and should handle it sensibly.
Given that I'm the only one against this, it's probably best to merge this :)
@fka3 sorry, but if you could rebase your changes against master we can get this out - if not I can get around to it sometime next week.
Done.
Thanks!
|
gharchive/pull-request
| 2016-02-09T20:41:41 |
2025-04-01T04:35:50.045997
|
{
"authors": [
"f2prateek",
"fka3",
"hankim813"
],
"repo": "segment-integrations/analytics.js-integration-fullstory",
"url": "https://github.com/segment-integrations/analytics.js-integration-fullstory/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1015811540
|
DataRangers Destination
rewrite: true
title: Datarangers
beta: true
DataRangers Destination
DataRangers provides product analytics for mobile and web applications, including event/retention/funnel/error analysis, user segmentation, user paths, behavior lookup, A/B testing and other functions.
In addition to the docs below, please reference DataRangers' integration guide.
This destination is maintained by DataRangers. For any issues with the destination, please contact the DataRangers Support team
Getting Started
{% include content/connection-modes.md %}
From the Destinations catalog page in the Segment App, click Add Destination.
Search for "DataRangers" in the Destinations Catalog, and select the "DataRangers" destination.
Choose which Source should send data to the "DataRangers" destination.
In DataRangers, go to your "Organization Settings" > "Project List" and find the targeted project.
Click on Details for the targeted project and find the API key ("App Key") on the pop-out information page. This should be a 32-character string of numbers and letters.
Enter the "API Key" in the "DataRangers" destination settings in Segment.
Page
If you aren’t familiar with the Segment Spec, take a look at the Page method documentation (https://segment.com/docs/connections/spec/page/) to learn about what it does. An example call would look like:
analytics.page()
Segment sends Page calls to DataRangers as a page event.
Screen
If you aren’t familiar with the Segment Spec, take a look at the Screen method documentation to learn about what it does. An example call would look like:
[[SEGAnalytics sharedAnalytics] screen:@"Home"];
Segment sends Screen calls to DataRangers as a screen event.
Identify
If you aren’t familiar with the Segment Spec, take a look at the Identify method documentation to learn about what it does. An example call would look like:
analytics.identify('userId123', {
email: 'john.doe@example.com'
});
Segment sends Identify calls to DataRangers as an identify event with SSID.
Track
If you aren’t familiar with the Segment Spec, take a look at the Track method documentation to learn about what it does. An example call would look like:
analytics.track('Login Button Clicked')
Segment sends Track calls to DataRangers as a track event with event name and properties.
Hi @RobbenWeemsSegment, I'm not sure what's up with this Pull Request, but it seems like it might be backwards. I'll ping you on Slack to discuss.
|
gharchive/pull-request
| 2021-10-05T02:18:25 |
2025-04-01T04:35:50.079800
|
{
"authors": [
"RobbenWeemsSegment",
"markzegarelli"
],
"repo": "segmentio/segment-docs",
"url": "https://github.com/segmentio/segment-docs/pull/1951",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2153740310
|
Overhang parameter required
>>> ldseguid("AT", "AT")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: ldseguid() missing 1 required positional argument: 'overhang'
You appear to be using an old version here. The use of overhang parameter was dropped in:
$ git log --grep overhang
commit 975f74e5b2f60b0ab610a0b9dc697ad1ef29edad
Author: Henrik Bengtsson <henrik.bengtsson@gmail.com>
Date: Sun Feb 11 17:57:12 2024 -0800
Python: Drop argument 'overhang' from ldseguid() [#73]
I see! we should push the latest version to PyPI
I see! we should push the latest version to PyPI
Ideally, we should just have had a "don't use" stub on PyPI until we know that SEGUID v2 is final and stable; my main concern all this time has been that there are users out there who might have started producing checksums that will no longer be correct. At least the prefixes of the PyPI version are different.
So, we should hold back on pushing an update until we reach milestone 1.0 (= SEGUID v2 is finalized and we don't believe the API or the algorithm will change anymore).
|
gharchive/issue
| 2024-02-26T09:57:21 |
2025-04-01T04:35:50.094286
|
{
"authors": [
"HenrikBengtsson",
"louisabraham"
],
"repo": "seguid/seguid-python",
"url": "https://github.com/seguid/seguid-python/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
42179010
|
.select2-choice in RTL is behind select2-arrow
hi,
.select2-choice should have something like this
html[dir="rtl"] .select2-search-choice-close {
right: auto;
left: 24px;
}
This issue has been fixed in later versions of Select2.
|
gharchive/issue
| 2014-09-08T09:36:23 |
2025-04-01T04:35:50.137294
|
{
"authors": [
"hellxcz",
"kevin-brown"
],
"repo": "select2/select2",
"url": "https://github.com/select2/select2/issues/2663",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
94912858
|
infinite scroll stops after element is selected
I am using 4.0.0 version and I want to populate a select with data returned by an Ajax call, using infinite scrolling.
New elements are loaded each time I reach the end of the scrollbar, and everything works fine until I select an element.
After an element has been selected, if I open again the dropdown menu and I scroll to the bottom of the list, nothing happens and the load no longer starts (Ajax url is not called).
When I initialize the plugin for the first time, I add some elements in the dropDown and preselect an element automatically, but doing that does not cause the issue described above (which seems to be generated only when an element is selected manually)
This appears to be more of a usage question than a bug report or common feature request. As we continue to keep the Select2 issue tracker geared towards these two areas, bug reports and common feature requests, we are redirecting usage questions to other communities such as the mailing list, IRC, or Stack Overflow.
This issue does not appear to conform to our contributing guidelines.
Please re-read the contributing guidelines carefully and then edit your question. Once your issue complies with the guidelines, we can reopen it.
|
gharchive/issue
| 2015-07-14T10:15:30 |
2025-04-01T04:35:50.140793
|
{
"authors": [
"alexweissman",
"azza0091",
"kevin-brown"
],
"repo": "select2/select2",
"url": "https://github.com/select2/select2/issues/3574",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
126646124
|
Added array example when using data attribute
tokenSeparators expects an array of characters when using javascript to initialize select2. When using the data-* attribute data-token-separators , you don't write an array, you simply write the characters one after the other, in quotes.
Any particular reason why this works? I would normally expect jQuery to pick up the array (written as a string in data-token-separators) and convert it to a JS array.
In theory,
<select data-token-separators=";,"></select>
Should be equivalent to
$('select').select2({
tokenSeparators: ';,'
});
Which isn't exact an array (though it is array-like).
Hi @kevin-brown, thanks for looking into it. I tried several array-like syntaxes and select2 kept ignoring my separators and would use anything that was inside quotes. For example, if I used data-token-separator="[',', ';']", then typing a space triggered a new tag, instead of just the comma and semicolon. As it took me a while to get this working, I thought it would be good to have an example in the docs.
Let me know if there is anything else I can do.
Thank you.
It's because you are using single quotes instead of double quotes in your array. JSON requires double quotes, so jQuery isn't parsing it as valid JSON.
Related Stack Overflow question
Hi Kevin, sorry for the delay, you are right, if I use single quotes around the array syntax, it works, unfortunately that doesn't work for me because I'm using Scala to generate the html and their parser forces me to use double quotes on any attribute.
Would you accept the PR for documentation if I add both syntaxes, with a note that if your server side language forces double quotes, this is the alternative syntax that also works?
Thanks
Would you accept the PR for documentation if I add both syntaxes, with a note that if your server side language forces double quotes, this is the alternative syntax that also works?
There's a problem with this: technically the alternative syntax is wrong. It's not just that it's wrong in the "it technically produces the equivalent data" sense, it only works because strings are indexable similar to how arrays are. Select2 expects an array for the tokenSeparators option, and passing a string is not officially supported and may break in the future.
It works, yes. Only if you need to provide an array of characters, but not if you need to provide an array of strings (or other data, like numbers). So I'm not comfortable with us recommending it as an alternative syntax for arrays when it doesn't actually produce an array.
I would be comfortable with documenting the official way for providing an array (using JSON) in the attribute.
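For reference, a minimal sketch of that official JSON form (the array elements need double quotes, so the attribute itself is wrapped in single quotes; if your templating language forces double-quoted attributes, escaping the inner quotes as &quot; should yield the same parsed value, since the browser decodes the entities before jQuery reads the attribute):
<select data-token-separators='[",", ";"]'></select>
<!-- double-quoted attribute variant, with the inner quotes escaped -->
<select data-token-separators="[&quot;,&quot;, &quot;;&quot;]"></select>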
|
gharchive/pull-request
| 2016-01-14T12:34:39 |
2025-04-01T04:35:50.146765
|
{
"authors": [
"fmpwizard",
"kevin-brown"
],
"repo": "select2/select2",
"url": "https://github.com/select2/select2/pull/4084",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2095353866
|
rewrite fe_entrypoint.sh script
Find the master by using the column name IsMaster, to avoid relying on the column order of show frontends;.
down
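A minimal sketch of the idea, assuming the script queries an FE with the mysql client in batch mode (tab-separated output with a header row); the host/port variables and the first column being the FE name are assumptions, not the actual script:
mysql -h "$FE_HOST" -P "$FE_QUERY_PORT" -u root --batch -e 'show frontends;' |
awk -F'\t' '
  NR == 1 { for (i = 1; i <= NF; i++) if ($i == "IsMaster") col = i; next }   # locate the IsMaster column by header name
  $col == "true" { print $1 }                                                 # print the first field of the master row
'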
|
gharchive/issue
| 2024-01-23T06:44:28 |
2025-04-01T04:35:50.148201
|
{
"authors": [
"intelligentfu"
],
"repo": "selectdb/doris-operator",
"url": "https://github.com/selectdb/doris-operator/issues/102",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2509784274
|
Question: LDAP Configuration with plain bind password
Question
Hi Everyone,
We use Ansible Semaphore at our company and we got LDAPS working, but why is there no way to hash the password instead of having to write it down in plain text?
{
"ldap_binddn": "cn=admin,dc=example,dc=org",
"ldap_bindpassword": "admin_password",
"ldap_server": "localhost:389",
"ldap_searchdn": "ou=users,dc=example,dc=org",
"ldap_searchfilter": "(&(objectClass=inetOrgPerson)(uid=%s))",
"ldap_mappings": {
"dn": "",
"mail": "uid",
"uid": "uid",
"cn": "cn"
},
"ldap_enable": true,
"ldap_needtls": false,
}
Is there some way to encrypt the configfile or hash the password?
Semaphore Version: 2.10.22-e44910d-1721658299
OS: Debian 12
Related to
No response
I am with you; it would be great to not have that in clear text. I am not a developer on this project, just a Semaphore user.
You can set the permissions on config.json as below and the service will still start without issue. If the semaphore user is ever compromised, it cannot modify the config:
sudo chown root:semaphore config.json
sudo chmod 0640 config.json
ls -la config.json
# -rw-r-----. 1 root semaphore 3482 Sept 13 00:00 config.json
For anyone else who may see this: you will want a service account dedicated to LDAP bind requests with least privilege applied. This ensures that, if the password is exposed, it can only be used for LDAP queries. If you have ways of improving what I mention here, please do.
Add that user to a security group called "Deny Interactive Login" or something like that. Then add that group to these GPO settings to disable interactive login for the account
Computer Configuration > Policies > Windows Settings > Security Settings > Local Policies > User Rights Assignment
Deny Access to this computer from the network
Deny logon as batch service
Deny logon as a service
Deny logon locally
Deny log on through Terminal Services
Hope this helps for now. A lot of applications use the clear text password in config files, and I would be surprised but very happy if this was resolved
Hi @tobor88, @DarthDensus any ideas how to store password securely?
To store your password securely, just use a password manager. In our company we use Netwrix; for personal use I have a 1Password subscription.
|
gharchive/issue
| 2024-09-06T08:01:18 |
2025-04-01T04:35:50.183365
|
{
"authors": [
"DarthDensus",
"fiftin",
"tobor88"
],
"repo": "semaphoreui/semaphore",
"url": "https://github.com/semaphoreui/semaphore/issues/2331",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
49994284
|
@page is formatted with extra spaces like "@ page"
@page is formatted with extra spaces like "@ page"
While trying to beautify the following code:
img {
max-width: 100%!important;
}@page {
margin: .5cm;
}
it was formatted as
img {
max-width: 100%!important;
}@ page {
margin: .5cm;
}
Seems similar to issue 35
https://github.com/senchalabs/cssbeautify/issues/35
|
gharchive/issue
| 2014-11-25T09:37:43 |
2025-04-01T04:35:50.203750
|
{
"authors": [
"MortalCatalyst",
"minhal-mauj"
],
"repo": "senchalabs/cssbeautify",
"url": "https://github.com/senchalabs/cssbeautify/issues/42",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2361681958
|
release: 3.14.11
[v3.11.14] (June 19, 2024)
Features
Markdown Support for Text Messages
Added enableMarkdownForUserMessage to UIKitOptions. When enabled, markdown syntaxes for bold and link are now applied to user messages.
Descriptive Color Names Support
Users can now customize the color set using more descriptive color names instead of numerical codes.
Added a color map parsing utility function, mapColorKeys, to support both new and existing color names.
Detailed color name mappings:
Primary, Secondary, Error, information
Primary-500 -> Primary-extra dark
Primary-400 -> Primary-dark
Primary-300 -> Primary-main
Primary-200 -> Primary-light
Primary-100 -> Primary-extra light
Secondary-500 -> Secondary-extra dark
Secondary-400 -> Secondary-dark
Secondary-300 -> Secondary-main
Secondary-200 -> Secondary-light
Secondary-100 -> Secondary-extra light
Background 100~700: No changes
Overlay
Overlay-01 -> Overlay-dark
Overlay-02 -> Overlay-light
OnLight & OnDark
// On Light
On Light-01 -> Onlight-text-high-emphasis
On Light-02 -> Onlight-text-mid-emphasis
On Light-03 -> Onlight-text-low-emphasis
On Light-04 -> Onlight-text-disabled
// On Dark
On Dark -01 -> Ondark-text-high-emphasis
On Dark -02 -> Ondark-text-mid-emphasis
On Dark -03 -> Ondark-text-low-emphasis
On Dark -04 -> Ondark-text-disabled
Chores
Message Menu Component Refactor
Created MessageMenuProvider, useMessageMenuContext, and MessageMenu component.
Replaced MessageItemMenu with MessageMenu in GroupChannel. Future PR will apply it to Thread.
Migrated MobileContextMenu and MobileBottomSheet using MessageMenuProvider.
/bot create ticket
[Creating Ticket] 🔖 Done https://sendbird.atlassian.net/browse/SDKRLSD-1322
|
gharchive/pull-request
| 2024-06-19T08:13:57 |
2025-04-01T04:35:50.210352
|
{
"authors": [
"AhyoungRyu",
"sendbird-sdk-deployment"
],
"repo": "sendbird/sendbird-uikit-react",
"url": "https://github.com/sendbird/sendbird-uikit-react/pull/1141",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
466329442
|
Wait Additional Email Testing Credit Add-Ons
Wait to deploy until additional Email Testing credit add-ons have been deployed.
Updating the docs to reflect the availability of additional Email Testing credits for purchase as an add-on.
Description of the change:
Reason for the change:
Link to original source:
Closes #
Hello @sendgrid-ryan, thanks again for the PR! We appreciate your contribution and look forward to continued collaboration. Thanks! Team SendGrid DX
|
gharchive/pull-request
| 2019-07-10T14:04:16 |
2025-04-01T04:35:50.213220
|
{
"authors": [
"sendgrid-ryan",
"thinkingserious"
],
"repo": "sendgrid/docs",
"url": "https://github.com/sendgrid/docs/pull/5410",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
270085684
|
Add Code Review to Contributing.md
Add a "Code Reviews" section on CONTRIBUTING.md
Put the following under the header:
If you can, please look at open PRs and review them. Give feedback and help us merge these PRs much faster! If you don't know how, Github has some great information on how to review a Pull Request.
Link the header in the Contributing.md Table of Contents
Add a link to the README.md file: Review Pull Requests under "Contributing Section"
@thinkingserious: PR:#81
Closing all open issues and PRs. This repo is no longer maintained. Please use https://github.com/sendgrid/sendgrid-nodejs/tree/master/packages/client instead.
|
gharchive/issue
| 2017-10-31T19:13:15 |
2025-04-01T04:35:50.217533
|
{
"authors": [
"childish-sambino",
"mptap",
"thinkingserious"
],
"repo": "sendgrid/nodejs-http-client",
"url": "https://github.com/sendgrid/nodejs-http-client/issues/79",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
585705267
|
Can not send email design via sendgrid
Issue Summary
I made an email design on the SendGrid homepage and copied the code into Node.js in order to send it via my app, but I am not able to send it.
Steps to Reproduce
This is the first step
Code Snippet
const sgMail = require('@sendgrid/mail')
const path = require('path');
require('dotenv').config({path: path.resolve(process.cwd(), 'config', '.env'), debug: true});
sgMail.setApiKey(process.env.SENDGRID_API_KEY)
const sendWelcomeEmail = (email, name) => {
sgMail.send({
to: email,
from: 'softwaredeveloperjava80@gmail.com',
subject:'Welcome to DanceWithMe!',
html: `<td style="font-size:6px; line-height:10px; padding:0px 0px 0px 0px;" valign="top" align="center">
<img class="max-width" border="0" style="display:block; color:#000000; text-decoration:none; font-family:Helvetica, arial, sans-serif; font-size:16px; max-width:100% !important; width:100%; height:auto !important;" width="600" alt="" data-proportionally-constrained="true" data-responsive="true" src="http://cdn.mcauto-images-production.sendgrid.net/ecfbf4aace3a49e1/b128e5f9-57e9-4c9f-9e4f-590648d8c153/1600x900.png">
</td>`,
text:`Welcome ${name}. Be ready to join wonderful dance events with the best partners which our matching engine will offer to you!`
})
}
module.exports = sendWelcomeEmail
Technical details:
sendgrid-nodejs version: @sendgrid/mail": "^6.5.4"
node version: v12.14.1
Hello @Mert1980,
Could you please provide us with the error you have received?
It looks like the quotes around the content for html or text are not correct. They should be the same as what you have for subject. That is, ' instead of `.
With best regards,
Elmer
Hello @thinkingserious, I moved the text into html and it is working fine now, except for the style. The response email shows the picture, text, and button side by side rather than top to bottom. I couldn't figure out how to solve it. Would you please give me some advice?
const sendWelcomeEmail = (email, name) => { sgMail.send({ to: email, from: "softwaredeveloperjava80@gmail.com", subject: "Welcome to DanceWithMe!", html:
Thanks for joining to our community. Please click the DanceWithMe button to explore the latest dance events and find your best partner!
DanceWithMe
}); };
Hello @Mert1980,
I suggest you use Litmus (or similar tooling) for testing out email styles.
You may also want to look into our transactional templates. Documentation on how to use those templates in this helper library can be found here.
Thanks!
With best regards,
Elmer
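On the side-by-side layout question specifically, a minimal sketch (not SendGrid-specific, and the src/href values are placeholders): most email clients will stack the pieces top to bottom if each one sits in its own block-level element, for example:
<div>
  <img src="IMAGE_URL" alt="DanceWithMe" style="display:block; width:100%; max-width:600px; margin:0 auto;">
</div>
<p>Thanks for joining our community. Please click the DanceWithMe button to explore the latest dance events and find your best partner!</p>
<p style="text-align:center;">
  <a href="APP_URL" style="display:inline-block; padding:12px 24px; background:#1a82e2; color:#ffffff; text-decoration:none;">DanceWithMe</a>
</p>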
|
gharchive/issue
| 2020-03-22T12:18:07 |
2025-04-01T04:35:50.234876
|
{
"authors": [
"Mert1980",
"thinkingserious"
],
"repo": "sendgrid/sendgrid-nodejs",
"url": "https://github.com/sendgrid/sendgrid-nodejs/issues/1071",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
166373967
|
PHP Version Compatibility
Issue Summary
In your documentation, you don't seem to state anywhere (that I've seen) which version of PHP is required to use this library and, unfortunately, I have a client who is using an old version of PHP (5.3.3) which doesn't support the new PHP Array short form [ 'email' => $this->getEmail() ] (Mail.php:39 in your 5.0.4 release).
Please can you at least either (a) document the minimum required PHP version for the library to work or (b) not use backwards-incompatible code? I'm happy to create a PR if you'd like me to fix these issues.
Steps to Reproduce
Install SendGrid PHP API r5.0.4
Create a simple file which executes $sg = new \SendGrid($apiKey);
Execute the file
This is a bit disappointing on both the documentation front and on the build side: I appreciate that we all want nice, tidy and efficient code, but so far I'm finding this unusable without a bunch of work. I'd like to use your library, but you've explicitly excluded a whole group of users.
Technical details:
sendgrid-php Version: 5.0.4
PHP Version: 5.3.3
Turns out the reason for this issue is simply usage of the new Array form and that your composer.json file is incorrect in regard to the version of PHP that is required to use this library.
I will attempt to correct this on 5.0.3 (as your composer.json says it requires PHP >= 5.3), see if I can get it working, and create a PR if required/desired.
Hello @ilithium,
Thanks for the feedback and offer to assist!
This library will only support versions that are not end of life. Currently, that means versions 5.6 and above.
That said, I'm sure this community would appreciate a fork that supports older versions and we would be happy to link to it.
I will be adding the version requirement to the README under the dependencies section for better visibility.
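For reference, a minimal sketch of the kind of constraint that README/composer.json change implies (the exact minimum is whatever the maintainers document; >=5.6 reflects the statement above):
{
    "require": {
        "php": ">=5.6"
    }
}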
|
gharchive/issue
| 2016-07-19T16:21:48 |
2025-04-01T04:35:50.240800
|
{
"authors": [
"ilithium",
"thinkingserious"
],
"repo": "sendgrid/sendgrid-php",
"url": "https://github.com/sendgrid/sendgrid-php/issues/267",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
515394423
|
closes sendinblue/magento2-plugin#4
Let’s add a contributing file, so we can work better, together!
This closes issue #4
@marcelmastel is this the fix for magento 2.3.3 compatibility??
|
gharchive/pull-request
| 2019-10-31T12:31:09 |
2025-04-01T04:35:50.263170
|
{
"authors": [
"marcelmastel",
"marcolesh-doto"
],
"repo": "sendinblue/magento2-plugin",
"url": "https://github.com/sendinblue/magento2-plugin/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
430887322
|
Hello, the top-level menu I created has no header; how should I handle this?
1. The header problem is solved; after adding it, it works as expected. But the child table still won't nest.
2. When nesting a child table, how do I pass in the data? I'm currently building directly on top of the scaffold; how can I embed a child table in this kind of table?
data () {
  return {
    mdl: {},
    // advanced search expand/collapse
    advanced: false,
    // query parameters
    queryParam: {},
    // table header
    columns: [
Thanks
You say "table" but don't say which table; can you describe it clearly? If you can't describe it, you can also give an example or a picture. Posting a fragment of code like this is impossible to read.
From the description you provided, I don't see a single table, nor the table you want to embed.
Sorry about that.
Here's the situation: I have JSON like in figure 1, which is displayed on the page as in figure 2, resulting in figure 3. But the commodities field inside it is a sub-list, and I want to display it like figure 4. How should I do that? The official docs use default data; how do I put my data into the sub-list? Thanks.
Can't you figure this out from the docs?
<a-table :columns="columns" :dataSource="data" class="components-table-demo-nested">
<a slot="operation" slot-scope="text" href="javascript:;">Publish</a>
<a-table
slot="expandedRowRender"
slot-scope="text"
:columns="innerColumns"
:dataSource="innerData"
:pagination="false"
>
<span slot="status" slot-scope="text">
<a-badge status="success" />Finished
</span>
<span slot="operation" slot-scope="text" class="table-operation">
<a href="javascript:;">Pause</a>
<a href="javascript:;">Stop</a>
<a-dropdown>
<a-menu slot="overlay">
<a-menu-item>
Action 1
</a-menu-item>
<a-menu-item>
Action 2
</a-menu-item>
</a-menu>
<a href="javascript:;">
More <a-icon type="down" />
</a>
</a-dropdown>
</span>
</a-table>
</a-table>
Parameter: expandedRowRender
Description: extra expanded row content
Type: Function(record, index, indent, expanded):VNode | slot="expandedRowRender" slot-scope="record, index, indent, expanded"
Default: -
How about this: implement an a-table with your data in a codesandbox and post it, and I'll help you change it to support expanded rows.
|
gharchive/issue
| 2019-04-09T10:20:41 |
2025-04-01T04:35:50.269105
|
{
"authors": [
"baiLangA",
"sendya"
],
"repo": "sendya/ant-design-pro-vue",
"url": "https://github.com/sendya/ant-design-pro-vue/issues/182",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1284248359
|
Overall Testing
Acceptance Criteria
Identify bottlenecks/performance constraints globally and per module
Edge cases fixed (if any)
Performance/Stability ironed out
Tasks
[ ] Identify edge cases and fix them
[ ] Identify bottlenecks and fix them
[ ] Report on application performance (CPU/RAM/Runtime) (ticket comment)
[ ] Report on overall application stability (ticket comment)
Doc: https://docs.google.com/document/d/1BbPYqZFaDrPg-Q8FxXYzu5zKei6bCHmpHBCNkC-UhKs/edit?usp=sharing
|
gharchive/issue
| 2022-06-24T22:32:22 |
2025-04-01T04:35:50.418325
|
{
"authors": [
"siddhant-28"
],
"repo": "seng499-company2/algorithm2",
"url": "https://github.com/seng499-company2/algorithm2/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
156778425
|
Shorthand analyze options don't work
I'm new to deptrac, so if this isn't a bug but just incorrect usage, feel free to close it!
The Run Deptrac-section of the readme states:
php deptrac.phar
# which is equivalent to
php deptrac.phar analyze depfile.yml
So, that made me assume that you could append the various graphviz-options like so:
php deptrac.phar --formatter-graphviz-dump-image="deptrac-imagedump.png"
But that doesn't seem to work. Instead, you have to use the 'full' notation to make the image-dump work:
php deptrac.phar analyze depfile.yml --formatter-graphviz-dump-image="deptrac-imagedump.png"
I suggest either enabling the use of the options with the shorthand notation as well, removing the references (in the documentation) to the shorthand version, or providing an example of the full notation including the options in the readme for that option.
this is related to the way the symfony console component works (take a look at http://symfony.com/doc/current/components/console/changing_default_command.html)
This feature has a limitation: you cannot use it with any Command arguments.
we already mention this limitation in the docs:
https://github.com/sensiolabs-de/deptrac#formatters
feel free to try to improve the docs (PR) to ensure that people don't get confused by this "issue".
|
gharchive/issue
| 2016-05-25T15:22:46 |
2025-04-01T04:35:50.431804
|
{
"authors": [
"Didel",
"slde-flash"
],
"repo": "sensiolabs-de/deptrac",
"url": "https://github.com/sensiolabs-de/deptrac/issues/102",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1282249615
|
🛑 Klaudia.io Application is down
In d8aa46b, Klaudia.io Application (https://app.klaudia.io) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Klaudia.io Application is back up in 9248a69.
|
gharchive/issue
| 2022-06-23T11:26:04 |
2025-04-01T04:35:50.436503
|
{
"authors": [
"UncleSamSwiss"
],
"repo": "sensioty/upptime",
"url": "https://github.com/sensioty/upptime/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
169008653
|
Extend supported sensu_client attributes to include all those defined in reference docs
Description
This change set extends the sensu_client provider to support the following additional client attributes as defined by the Sensu Client Reference Documentation:
deregister
deregistration
keepalives
redact
registration
safe_mode
socket
Motivation and Context
Motivated by #395
Closes #395
How Has This Been Tested?
Added unit tests for sensu_client provider.
Types of changes
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Checklist:
[x] My change requires a change to the documentation.
[x] I have updated the documentation accordingly.
[x] I have read the CONTRIBUTING document.
Outstanding :+1:
Nice!
|
gharchive/pull-request
| 2016-08-02T22:57:41 |
2025-04-01T04:35:50.441494
|
{
"authors": [
"calebhailey",
"cwjohnston",
"portertech"
],
"repo": "sensu/sensu-chef",
"url": "https://github.com/sensu/sensu-chef/pull/476",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
151424419
|
Redis master group/set configuration key needs to be more obvious
"master": "" should be something like "master_group": "".
I like this because I think it is more consistent with Redis' own terminology for this. e.g. the following via http://redis.io/topics/sentinel:
The meaning of the arguments of sentinel monitor statements is the following:
sentinel monitor <master-group-name> <ip> <port> <quorum>
|
gharchive/issue
| 2016-04-27T16:05:12 |
2025-04-01T04:35:50.447101
|
{
"authors": [
"cwjohnston",
"portertech"
],
"repo": "sensu/sensu-redis",
"url": "https://github.com/sensu/sensu-redis/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
170182928
|
Sensu sent only one email
I got this message repeated:
{"timestamp":"2016-08-09T16:28:28.539539+0200","level":"info","message":"handler output","handler":{"type":"pipe","command":"cat","name":"stdout"},"output":["{\"client\":{\"name\":\"n2.client.io\",\"address\":\"138.201.127.97\",\"subscriptions\":[\"etcd\",\"openshift-node\",\"default_rhel\",\"default\"],\"load\":{\"warning\":\"1,1,1\",\"critical\":\"2,2,2\"},\"version\":\"0.20.3\",\"timestamp\":1470720441},\"check\":{\"thresholds\":{\"warning\":120,\"critical\":180},\"name\":\"keepalive\",\"issued\":1470752908,\"executed\":1470752908,\"output\":\"No keepalive sent from client for 32467 seconds (>=180)\",\"status\":2,\"type\":\"standard\",\"history\":[\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\",\"2\"],\"total_state_change\":0},\"occurrences\":1074,\"action\":\"create\",\"timestamp\":1470752908,\"id\":\"c2b54e25-79fc-415f-bb0f-429279fbdac5\",\"last_state_change\":null,\"last_ok\":null}"]}
But Sensu only sent a message on the first occurrence of this event.
I went through the configuration (which was done via Ansible scripts) and I don't know where to look.
I am using the standard sensu-plugin mailer handler.
Could the problem be that some checks have their refresh interval set to one day (which I will see tomorrow)?
I've written the refresh interval like this:
"keepalive": {
"thresholds": {
"warning": 60,
"critical": 300
},
"handlers": ["mailer"],
"refresh": 1800
},
into the client.json file, and it seems that everything is working now.
This looks to be an issue related to the event handler doing its own event filtering (not in Sensu core). Event handlers built on the sensu-plugin library do support occurrence filtering, using the check definition attributes occurrences and refresh. It looks like you have figured this out, by changing the refresh value to 30 minutes :+1:
|
gharchive/issue
| 2016-08-09T14:37:40 |
2025-04-01T04:35:50.451011
|
{
"authors": [
"david-strejc",
"portertech"
],
"repo": "sensu/sensu",
"url": "https://github.com/sensu/sensu/issues/1407",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
60131678
|
Various parts of the UI break if there is a "/" in a DC name
We set up Uchiwa, and one of our DCs was named dal05 / dal09, and the url becomes: /#/client/dal05%20/%20dal09/<node>. I assume this extra / in the url is causing some issues on the frontend.
Thanks @mattrobenolt for reporting this issue, I'll try to have a fix released within the next release.
After some investigation, it looks like replacing encodeURI with encodeURIComponent in the go method, in order to escape characters like /, would require a good amount of work.
For now, I'll decrease the priority of this issue and you should avoid using the characters : / ; and ? in a datacenter or client name. I'll update the documentation to reflect this limitation. I'll gladly accept a PR for this issue, otherwise I'll work on it once the other issues with greater priority are completed.
|
gharchive/issue
| 2015-03-06T17:22:35 |
2025-04-01T04:35:50.454211
|
{
"authors": [
"mattrobenolt",
"palourde"
],
"repo": "sensu/uchiwa",
"url": "https://github.com/sensu/uchiwa/issues/279",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2458844955
|
Enable HACS validation workflow
Enable the validate.yaml HACS validation workflow once these issues are fixed:
Add home-assistant topic to repository
Add images to Home Assistant brands repository
Reference: 35b2aa0
Fixed. Validations reenabled with 1dcef94.
|
gharchive/issue
| 2024-08-10T01:45:42 |
2025-04-01T04:35:50.463776
|
{
"authors": [
"fzuccolo"
],
"repo": "senziio-admin/hacs-senziio-integration",
"url": "https://github.com/senziio-admin/hacs-senziio-integration/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
155952894
|
Suffix file not bundled with Hex package?
Created a new project (mix new foo), added the necessary things to mix.exs:
...
def application do
[applications: [:logger, :public_suffix]]
end
defp deps do
[{:public_suffix, "~> 0.2.0"}]
end
...
Then:
$ mix deps.get
Running dependency resolution
Dependency resolution completed
idna: 2.0.0
public_suffix: 0.2.0
* Getting public_suffix (Hex package)
Checking package (https://hexpmrepo.global.ssl.fastly.net/tarballs/public_suffix-0.2.0.tar)
Using locally cached package
* Getting idna (Hex package)
Checking package (https://hexpmrepo.global.ssl.fastly.net/tarballs/idna-2.0.0.tar)
Fetched package
$ mix compile
==> idna (compile)
Compiled src/idna_ucs.erl
Compiled src/idna_unicode.erl
Compiled src/punycode.erl
Compiled src/idna.erl
Compiled src/idna_unicode_data.erl
==> public_suffix
Compiled lib/public_suffix/remote_file_fetcher.ex
Compiled lib/public_suffix/rules_parser.ex
== Compilation error on file lib/public_suffix.ex ==
** (File.Error) could not read file /home/user/elixir/foo/deps/public_suffix/data/public_suffix_list.dat: no such file or directory
(elixir) lib/file.ex:244: File.read!/1
lib/public_suffix.ex:122: (module)
(stdlib) erl_eval.erl:670: :erl_eval.do_apply/6
could not compile dependency :public_suffix, "mix compile" failed. You can recompile this dependency with "mix deps.compile public_suffix", update it with "mix deps.update public_suffix" or clean it with "mix deps.clean public_suffix"
Works fine with download_data_on_compile: true, though.
(Elixir 1.2.5, Hex 0.12.0)
Sorry for the trouble and thanks for reporting it! I just released 0.2.1 with this fixed.
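For anyone stuck on an older release, a minimal sketch of the kind of packaging fix this implies; it assumes the root cause was that data/ was not listed in the Hex package files in mix.exs, and the exact file list shown is an assumption:
defp package do
  [
    # ship the bundled suffix list so consumers can compile without downloading it
    files: ["lib", "data", "mix.exs", "README.md", "LICENSE"],
    licenses: ["MIT"],
    links: %{"GitHub" => "https://github.com/seomoz/publicsuffix-elixir"}
  ]
end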
|
gharchive/issue
| 2016-05-20T12:32:55 |
2025-04-01T04:35:50.469643
|
{
"authors": [
"andersju",
"myronmarston"
],
"repo": "seomoz/publicsuffix-elixir",
"url": "https://github.com/seomoz/publicsuffix-elixir/issues/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2177910045
|
sharing a file via curl
Hi,
I found this command very useful, so I decided to add it to your repo.
Please create a condition to check for no input and give the user the necessary guidance.
@sepsoh I made a change to meet your requirements, so take a look at it!
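A minimal sketch of the kind of guard being requested (the usage text is a placeholder, not the actual script):
#!/usr/bin/env bash
# Abort with guidance when no file argument is supplied.
if [ -z "$1" ]; then
    echo "Usage: $(basename "$0") <file-to-share>" >&2
    exit 1
fi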
|
gharchive/pull-request
| 2024-03-10T21:19:51 |
2025-04-01T04:35:50.484958
|
{
"authors": [
"Milad75Rasouli",
"sepsoh"
],
"repo": "sepsoh/awesome-bash-scripts",
"url": "https://github.com/sepsoh/awesome-bash-scripts/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
56650128
|
added ActiveRecord like logging messages for each executed migration
see https://github.com/sequelize/cli/issues/92
Please don't output directly to console.log - use some logging system so that clients can control where the log goes. E.g. Sequelize accepts an option { logging: } and uses that to output logs.
does umzug already have a logging: option?
afaik, the sequelize-cli disables the sequelize logs (mainly to disable the sql prints), so I don't think I can use their logging function.
What would you propose?
I'm not suggesting using the sequelize logs directly. What I meant was add a logging: option to Umzug if it doesn't have one already. Sequelize/cli can specify logging: console.log and other clients of Umzug can do as they see fit.
I like that idea - see newest commit
Looks good to me, just missing documentation.
Will review it later today
@rpaterson does the documentation just go into the readme or somewhere else, as well? Do you want me to write something up?
Just the readme AFAIK
Looks good. The readme is right. Please also add tests for the log function :)
doc & tests added
LGTM
Sorry to bother you but would it be possible to actually call the migrate function and check if the expected logging behavior is executed?
Okay here we go
@sdepold feedback?
Sorry. The last days were a bit stressful :) Could you please add an assertion for the actual arguments that get passed to the logger? So ideally we would really make sure messages are getting logged.
Hossa! Merged + added the missing test pieces on my own.
just released v1.5.0
awesome!
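A minimal sketch of how a client can use the new option (assuming the 1.5.0 API discussed above; the storage and migrations settings are placeholders):
var Umzug = require('umzug');

var umzug = new Umzug({
  logging: console.log,               // route umzug's log messages wherever the client wants, or pass false to silence them
  storage: 'json',
  migrations: { path: 'migrations' }
});

umzug.up().then(function (migrations) {
  console.log('executed ' + migrations.length + ' migrations');
});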
|
gharchive/pull-request
| 2015-02-05T10:06:50 |
2025-04-01T04:35:50.552090
|
{
"authors": [
"alubbe",
"rpaterson",
"sdepold"
],
"repo": "sequelize/umzug",
"url": "https://github.com/sequelize/umzug/pull/16",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
127226634
|
CLOUD-46290 fixed validation
@akanto pls review
The validation on the request objects was missing; this PR fixes the problem.
Jenkins build finished, all tests passed.
Refer to this link for build results: http://ci.sequenceiq.com/job/cloudbreak-pull-request/1956/
|
gharchive/pull-request
| 2016-01-18T13:29:29 |
2025-04-01T04:35:50.561385
|
{
"authors": [
"doktoric",
"jenkins-sequenceiq"
],
"repo": "sequenceiq/cloudbreak",
"url": "https://github.com/sequenceiq/cloudbreak/pull/1209",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
160484238
|
rename blueprint: hdp-etl-edw-tp to hdp-etl-edw-spark2-hive2
BUG-58088
Can one of the admins verify this patch?
LGTM
@martonsereg could you merge it? Please be aware that a blueprint was renamed and the mapping needs to be updated in the other project.
@seanorama
please remove hive2 from the name as Ram wrote in the email, and I'll merge it then.
@martonsereg done.
|
gharchive/pull-request
| 2016-06-15T17:53:59 |
2025-04-01T04:35:50.564132
|
{
"authors": [
"akanto",
"jenkins-sequenceiq",
"martonsereg",
"seanorama"
],
"repo": "sequenceiq/cloudbreak",
"url": "https://github.com/sequenceiq/cloudbreak/pull/1700",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
438023247
|
How to check which data structure matched the json data efficiently
I am receiving JSON data from the aria2 RPC interface, and the response can be one of many types.
In order to check which data structure matches the JSON data, my approach is like this:
#[derive(Serialize, Deserialize)]
struct A {
a: String
}
#[derive(Serialize, Deserialize)]
struct B {
b: String,
c: String,
}
//...
fn parse(json: &str){
if let Ok(a) = serde_json::from_str::<A>(json) {
//json data is type A
} else {
if let Ok(b) = serde_json::from_str::<B>(json) {
//json data is type B
} else {
//more conditional statements
}
}
}
Are there other approaches to make this type checking more elegant and efficient?
You could try parsing as an untagged enum:
#[derive(Deserialize)]
#[serde(untagged)]
enum Rpc {
A(A),
B(B),
}
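A short usage sketch on top of that, assuming the same A and B definitions from the question:
fn parse(json: &str) -> Result<(), serde_json::Error> {
    match serde_json::from_str::<Rpc>(json)? {
        Rpc::A(a) => println!("json data is type A: {}", a.a),
        Rpc::B(b) => println!("json data is type B: {} {}", b.b, b.c),
    }
    Ok(())
}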
Thank you very much! Your approach works fine.
|
gharchive/issue
| 2019-04-28T07:11:10 |
2025-04-01T04:35:50.569635
|
{
"authors": [
"dtolnay",
"moeworld"
],
"repo": "serde-rs/json",
"url": "https://github.com/serde-rs/json/issues/538",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
336718455
|
"can't find crate" error for crate:: paths in enum newtype variants when using the 2018 preview
I'm seeing error: can't find crate for ..., can't find crate errors when trying to use crate::foo::Bar-style items in enum newtype variants and newtype structs. It works as expected for both normal structs and enum struct variants.
From my small amount of digging, it looks as though field.ty is missing the leading crate element when deserialize_externally_tagged_newtype_variant gets called, e.g. crate::foo::Bar becomes just ::foo::Bar.
Since absolute paths starting with :: were valid in rust 2015, I assumed that they would be valid in the 2018 edition as well, which would make the absence of the crate element a non-issue, but maybe not?
// Opt in to unstable features expected for Rust 2018
#![feature(rust_2018_preview, use_extern_macros)]
// Opt in to warnings about new 2018 idioms
#![warn(rust_2018_idioms)]
use serde_derive::Deserialize;
#[derive(Deserialize)]
struct Foo;
// These three work as expected:
#[derive(Deserialize)]
enum ThingOne {
Foo(self::Foo),
}
#[derive(Deserialize)]
enum ThingTwo {
Foo(Foo),
}
enum ThingThree {
Foo(crate::Foo),
}
// The next three don't work and all give the same error:
// error: can't find crate for `Foo`, can't find crate
// #[derive(Deserialize)]
// enum ThingFour {
// Foo(crate::Foo),
// }
// #[derive(Deserialize)]
// enum ThingFive {
// Foo(::Foo),
// }
// enum ThingSix {
// Foo(::Foo),
// }
Thanks, I released Syn 0.14.4 with a fix. This was a parsing ambiguity between crate::X as an absolute path and crate (::X) as a visibility specifier. https://github.com/rust-lang/rust/issues/45388#issuecomment-386594513
|
gharchive/issue
| 2018-06-28T17:49:36 |
2025-04-01T04:35:50.573485
|
{
"authors": [
"dtolnay",
"jechase"
],
"repo": "serde-rs/serde",
"url": "https://github.com/serde-rs/serde/issues/1326",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
194498639
|
Teradata: Json fields returns JDK6_SQL_Clob
When selecting a JSON field from Teradata, the field is returned as a string consisting of [JDK6_SQL_Clob].
Additionally, if I cast it as a string and try to copy and paste the results, only the keys are pasted, with no spaces.
However, if I switch to the text view the text can be copy pasted as expected.
copy paste from grid view
timestampcalloutId
copy paste from text view
{"timestamp": 1480909216, "calloutId": "To"}
Same query / driver returns results as expected in another client
For some reason DBeaver thinks that the event_json column has a complex Object data type (not CLOB).
Does this problem happens when you execute custom SQL query or when you browse table data (or both)?
It happens with custom sql and when browsing table data.
Please open "Metadata" panel and see what actual data type this column has.
name: event_json
label: event_json
#: 8
table name: table_name
type: JSON
max length: 2,000
precision: 0
scale: 0
not null: false
auto: false
JSON is a new data type in Teradata and it support isn't fully JDBC compliant.
I'll add a workaround in 3.8.2 but I can't test it with real Teradata server.
Please check.
The Json values are now showing up as [CLOB] instead of [JDK6_SQL_Clob]
What happens if you double click?
If I double click it acts like a text field.
p.s. one thing I'd like to call out is that the strange behavior with copy / paste I mentioned above is fixed.
copy paste from grid view
timestampcalloutId
copy paste from text view
{"timestamp": 1480909216, "calloutId": "To"}
Similar to #538.
Please check error log and debug log (https://github.com/serge-rider/dbeaver/wiki/Log-files). There should be some errors.
Nothing in the error log but I found this in dbeaver-debug.log
2016-12-19 16:11:21.656 - java.sql.SQLException: java.lang.AbstractMethodError: com.teradata.jdbc.jdk6.JDK6_SQL_Connection.getSchema()Ljava/lang/String;
java.sql.SQLException: java.lang.AbstractMethodError: com.teradata.jdbc.jdk6.JDK6_SQL_Connection.getSchema()Ljava/lang/String;
at org.jkiss.dbeaver.model.impl.jdbc.JDBCUtils.callMethod17(JDBCUtils.java:756)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCConnectionImpl.getSchema(JDBCConnectionImpl.java:511)
at org.jkiss.dbeaver.ext.generic.model.GenericDataSource.determineSelectedEntity(GenericDataSource.java:753)
at org.jkiss.dbeaver.ext.generic.model.GenericDataSource.refreshDefaultObject(GenericDataSource.java:713)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$12.run(SQLEditor.java:2045)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:103)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Caused by: java.lang.AbstractMethodError: com.teradata.jdbc.jdk6.JDK6_SQL_Connection.getSchema()Ljava/lang/String;
at sun.reflect.GeneratedMethodAccessor75.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCUtils.callMethod17(JDBCUtils.java:746)
... 6 more
Hm, something is wrong with CLOB fetching, but I can't tell what exactly.
Try to enable CLOB caching (preferences->ResultSets->Binaries)
So if I enable that option the JSON is displayed properly.
Reviewing the logs, however, reveals the same error regardless of whether or not the option is enabled.
2016-12-20 10:04:59.835 - java.sql.SQLException: java.lang.AbstractMethodError: com.teradata.jdbc.jdk6.JDK6_SQL_Connection.getSchema()Ljava/lang/String;
java.sql.SQLException: java.lang.AbstractMethodError: com.teradata.jdbc.jdk6.JDK6_SQL_Connection.getSchema()Ljava/lang/String;
at org.jkiss.dbeaver.model.impl.jdbc.JDBCUtils.callMethod17(JDBCUtils.java:756)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCConnectionImpl.getSchema(JDBCConnectionImpl.java:511)
at org.jkiss.dbeaver.ext.generic.model.GenericDataSource.determineSelectedEntity(GenericDataSource.java:753)
at org.jkiss.dbeaver.ext.generic.model.GenericDataSource.refreshDefaultObject(GenericDataSource.java:713)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$12.run(SQLEditor.java:2045)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:103)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Caused by: java.lang.AbstractMethodError: com.teradata.jdbc.jdk6.JDK6_SQL_Connection.getSchema()Ljava/lang/String;
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCUtils.callMethod17(JDBCUtils.java:746)
... 6 more
Also, here are what I think are unrelated errors that I noticed when viewing the log file after clearing and reviewing it. These are logged on connecting to Teradata.
2016-12-20.09:59:59.930 TERAJDBC4 ERROR [Connect to datasource "Teradata"] TeraDriver/org.jkiss.dbeaver.registry.driver.DriverClassLoader@7f203bff The com.ncr.teradata.TeraDriver class name is deprecated. Please use the com.teradata.jdbc.TeraDriver class name instead.
2016-12-20 10:00:02.066 - Open [http://dbeaver.jkiss.org/product/version.xml]
2016-12-20 10:00:02.384 - java.sql.SQLException: java.lang.AbstractMethodError: com.teradata.jdbc.jdk6.JDK6_SQL_Connection.getSchema()Ljava/lang/String;
java.sql.SQLException: java.lang.AbstractMethodError: com.teradata.jdbc.jdk6.JDK6_SQL_Connection.getSchema()Ljava/lang/String;
at org.jkiss.dbeaver.model.impl.jdbc.JDBCUtils.callMethod17(JDBCUtils.java:756)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCConnectionImpl.getSchema(JDBCConnectionImpl.java:511)
at org.jkiss.dbeaver.ext.generic.model.GenericDataSource.determineSelectedEntity(GenericDataSource.java:753)
at org.jkiss.dbeaver.ext.generic.model.GenericDataSource.initialize(GenericDataSource.java:426)
at org.jkiss.dbeaver.registry.DataSourceDescriptor.connect(DataSourceDescriptor.java:685)
at org.jkiss.dbeaver.runtime.jobs.ConnectJob.run(ConnectJob.java:74)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:103)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Caused by: java.lang.AbstractMethodError: com.teradata.jdbc.jdk6.JDK6_SQL_Connection.getSchema()Ljava/lang/String;
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCUtils.callMethod17(JDBCUtils.java:746)
... 7 more
2016-12-20 10:00:02.388 - Connected (teradata-156d77c8
These errors (warnings actually) are not related to CLOBs.
It is quite strange that there are no CLOB-related errors in the debug log. I still can't reproduce this problem and thus I can't fix it.
BTW maybe there is an error in results editor status message?
Like this:
No error in the status message / error log.
This error may happen (and printed in status bar) when you view LOB in value panel.
I've added a bit more error logging. Hopefully we'll see it in the next version.
I couldn't trigger an error when viewing it in the value panel.
I'll check and see if I can find an error when you release the next version.
I am trying to read a json value, all I see is [CLOB], and at the bottom of the screen I see:
SQL Error [20000] [XJ215]: You cannot invoke other java.sql.Clob/java.sql.Blob methods after calling the free() method or after the Blob/Clob's transaction has been committed or rolled back.
I am trying to read a json value, all I see is [CLOB], and at the bottom of the screen I see:
SQL Error [20000] [XJ215]: You cannot invoke other java.sql.Clob/java.sql.Blob methods after calling the free() method or after the Blob/Clob's transaction has been committed or rolled back.
In order to work with LOBs in Derby you need to switch into transactional mode (on the main toolbar - toggle auto-commit mode).
Another option - enable CLOB/BLOB caching in preferences.
Please check how it works in 3.8.3
Sorry for the long delay on this. It's still showing [CLOB] instead of the value when the cache option is disabled.
Looking in the logs I see the same error as before.
Anyway, since I can see the real value with the cache option enabled, I guess you can close this?
2017-04-27 14:31:13.709 - Connect with 'Teradata' (teradata-156d77c8c3d-9484f66c7c0e459)
2017-04-27 14:31:14.608 - java.sql.SQLException: java.lang.AbstractMethodError: com.teradata.jdbc.jdk6.JDK6_SQL_Connection.getSchema()Ljava/lang/String;
java.sql.SQLException: java.lang.AbstractMethodError: com.teradata.jdbc.jdk6.JDK6_SQL_Connection.getSchema()Ljava/lang/String;
at org.jkiss.dbeaver.model.impl.jdbc.JDBCUtils.callMethod17(JDBCUtils.java:769)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCConnectionImpl.getSchema(JDBCConnectionImpl.java:533)
at org.jkiss.dbeaver.ext.generic.model.GenericDataSource.determineSelectedEntity(GenericDataSource.java:784)
at org.jkiss.dbeaver.ext.generic.model.GenericDataSource.initialize(GenericDataSource.java:457)
at org.jkiss.dbeaver.registry.DataSourceDescriptor.connect(DataSourceDescriptor.java:725)
at org.jkiss.dbeaver.runtime.jobs.ConnectJob.run(ConnectJob.java:73)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:95)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Caused by: java.lang.AbstractMethodError: com.teradata.jdbc.jdk6.JDK6_SQL_Connection.getSchema()Ljava/lang/String;
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCUtils.callMethod17(JDBCUtils.java:759)
... 7 more
Ok
|
gharchive/issue
| 2016-12-09T02:30:47 |
2025-04-01T04:35:50.622464
|
{
"authors": [
"nasht00",
"r-richmond",
"serge-rider"
],
"repo": "serge-rider/dbeaver",
"url": "https://github.com/serge-rider/dbeaver/issues/1043",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
219857427
|
Is it possible to log the content of UPDATE transactions for manually edited data?
I occasionally use DBeaver in transactional mode to manually edit table data (by clicking on cells and typing new values). It is very useful to be able to modify multiple rows in multiple tables and then commit all the changes simultaneously.
However, although I have enabled Query Manager logging to file, I cannot see the actual changes made in the log.
Here is an example log excerpt after updating a number of rows and committing:
!ENTRY org.jkiss.dbeaver.core 0 201 2017-04-06 11:21:37.381
!MESSAGE COMMIT
!ENTRY org.jkiss.dbeaver.core 1 206 2017-04-06 11:28:42.991
!MESSAGE SELECT x.* FROM <redacted> x
WHERE <redacted> IN <redacted>
!SUBENTRY 1 org.jkiss.dbeaver.core 1 0 2017-04-06 11:28:43.055
!MESSAGE SUCCESS [19]
!ENTRY org.jkiss.dbeaver.core 0 203 2017-04-06 11:28:39.808
!MESSAGE COMMIT
!ENTRY org.jkiss.dbeaver.core 0 207 2017-04-06 11:29:36.427
!MESSAGE COMMIT
!ENTRY org.jkiss.dbeaver.core 0 217 2017-04-06 11:31:01.897
!MESSAGE COMMIT
As you can see, only the COMMITs are shown. Is there any way to make DBeaver log the content of the UPDATE queries too, so I can keep a record of the data which was manually changed?
Although I only mention UPDATE transactions in the previous post, naturally it would be helpful to log other data-modifying statements such as DELETE, INSERT, REPLACE etc.
Go to QM preferences (Filters button in QM view or Preferences->Database->Query Manager).
Enable all query types and all object types - thus you will see all logs.
|
gharchive/issue
| 2017-04-06T10:45:04 |
2025-04-01T04:35:50.625683
|
{
"authors": [
"serge-rider",
"zejji"
],
"repo": "serge-rider/dbeaver",
"url": "https://github.com/serge-rider/dbeaver/issues/1534",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|