id (string, 4 to 10 chars) | text (string, 4 to 2.14M chars) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
356208878
|
missing dependency "libgconf-2-4"
I used the "Sqlectron_1.29.0_amd64.deb" on ubuntu 18.04 LTS and it didn't work, so I tried running it from command line and it complained about a missing dependency so I had to install it manually (libgconf-2-4).
I assume this dependency should be added to the deb file itself?
It is indeed missing from the deb file. Confirming here. Found the workaround here via Google: https://github.com/electron/electron/issues/1518
sudo apt-get install libgconf-2-4
Issue present in 1.30.0 on Arch install with Pacman.
Are there any updates on this? Do we even know what is causing it?
@LoveSponge the project is dead - switch to DBeaver: https://www.archlinux.org/packages/community/x86_64/dbeaver/
Are there any updates on this? Do we even know what is causing it?
What's causing it is that Chrome added a dependency on libgconf to function on Linux, except that it does not come with Chrome, nor is it installed by default on most distros. After spending yet another day trying to get VirtualBox and a Debian / Ubuntu OS to run inside of it (with both failing at different points of installation / runtime), I'm going to kick this issue, and fixing it "properly", down the road to a later version, and instead just update the README to say you need to install libgconf if on Linux. There's a good chance that just upgrading the version of electron / electron-builder that sqlectron uses will resolve this as well.
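For anyone packaging a fix locally: electron-builder can declare extra deb dependencies. A minimal sketch of what that could look like in the package.json build config, assuming the stock electron-builder deb.depends option (the exact package list is an assumption and may vary by distro):
{
  "build": {
    "linux": { "target": ["deb"] },
    "deb": {
      "depends": ["libgconf-2-4", "gconf2", "gconf-service"]
    }
  }
}
Note that, if I recall the electron-builder docs correctly, setting depends replaces the default dependency list rather than appending to it, so the defaults would need to be restated as well.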
@seantcanavan I only realised after posting my comment;
since then I've found Beekeeper Studio (link), a great alternative with active maintenance.
This project does have active maintenance, though my time is split amongst a number of things.
I want to think this issue has been fixed with 1.31.0, but it's hard for me to say, as at the moment I do not have a Linux environment to test on natively, and my last few attempts at setting up a VirtualBox image have been met with a number of annoying problems just installing Debian / Ubuntu, let alone getting them to run. It will definitely be fixed in 1.32.0, as I am upgrading the electron and electron-builder dependencies, which should force gconf to be declared as a dependency by default when building the project.
This was fixed in #507.
Help, I don't want to build from source. Discover says "dependency resolution failed", but it does not tell me which dependency :<
@dev-rsonx just switch to dbeaver already it's free and a million times better. It's fully cross-platform as well. https://dbeaver.io/download/
@dev-rsonx what linux distribution and version are you using?
|
gharchive/issue
| 2018-09-01T15:01:33 |
2025-04-01T06:40:27.877698
|
{
"authors": [
"LoveSponge",
"MasterOdin",
"arakash92",
"dev-rsonx",
"seantcanavan"
],
"repo": "sqlectron/sqlectron-gui",
"url": "https://github.com/sqlectron/sqlectron-gui/issues/449",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1761582639
|
SQL Saturday Columbus 2023 - Site Adds
Please add the link to the schedule
https://sessionize.com/api/v2/piprkmwb/view/GridSmart
Please add the precon links
Understanding and Optimizing SQL Server Performance - Joey D'Antoni
https://www.eventbrite.com/e/657500942017
An Introduction to Python for Data Science and Data Engineering - Chris Hyde
https://www.eventbrite.com/e/657503609997
done
|
gharchive/issue
| 2023-06-17T01:58:55 |
2025-04-01T06:40:27.939238
|
{
"authors": [
"pshore73",
"way0utwest"
],
"repo": "sqlsaturday/sqlsatwebsite",
"url": "https://github.com/sqlsaturday/sqlsatwebsite/issues/201",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
119921244
|
HttpLoggingInterceptor shows a gzipped response as a raw string
I use the standard HttpLoggingInterceptor with the log level set to Level.BODY. When the response from a server is gzipped HttpLoggingInterceptor shows it as a raw string (encoded). Is it expected behaviour? It doesn't seem to be very useful unless your server has some issue with gzip :)
pidcat:
OkHttp D <-- HTTP/1.1 200 OK (146ms)
D Date: Wed, 02 Dec 2015 10:53:01 GMT
D Server: Apache
D X-Frame-Options: SAMEORIGIN
D Access-Control-Allow-Origin: *
D Cache-Control: no-cache, private
D Pragma: no-cache
D Content-Encoding: gzip
D Vary: Accept-Encoding
D Content-Length: 367
D Keep-Alive: timeout=15, max=69
D Connection: Keep-Alive
D Content-Type: application/json;charset=UTF-8
D OkHttp-Selected-Protocol: http/1.1
D OkHttp-Sent-Millis: 1449053545316
D OkHttp-Received-Millis: 1449053545463
D [367 bytes of gzip-compressed binary response data, logged as unprintable characters]
D <-- END HTTP (367-byte body)
Is it expected behaviour?
In the sense that there's no logic to handle non-plain-text bodies, then yes, it behaves as we expect.
Are you adding this as a network interceptor or a normal interceptor? If you do not set it as a network interceptor you should see the plain-text body.
by adding it as a regular interceptor I'm getting decoded data :)
thanks @JakeWharton
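For reference, a minimal sketch of the two registration points (standard OkHttp builder API; the logger setup is otherwise assumed):
HttpLoggingInterceptor logging = new HttpLoggingInterceptor();
logging.setLevel(HttpLoggingInterceptor.Level.BODY);

OkHttpClient client = new OkHttpClient.Builder()
    // Application interceptor: runs after OkHttp decompresses the response,
    // so BODY logging prints readable plain text (but not every wire header).
    .addInterceptor(logging)
    // Network interceptor: runs on the wire, so it sees every header,
    // but the body is still the raw gzipped bytes.
    //.addNetworkInterceptor(logging)
    .build();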
I have tried to use HttpLoggingInterceptor as both a network and an application interceptor, but one way it doesn't show the gzipped body and the other way it doesn't show all the headers. Is there any default solution to log both? If not, how hard would it be to write one?
Best approach is to use Chuck or Charles.
https://github.com/jgilfelt/chuck
https://www.charlesproxy.com/
|
gharchive/issue
| 2015-12-02T11:20:21 |
2025-04-01T06:40:27.964196
|
{
"authors": [
"JakeWharton",
"jbaginski",
"mikhailmelnik",
"swankjesse"
],
"repo": "square/okhttp",
"url": "https://github.com/square/okhttp/issues/2058",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
209148583
|
CertificatePinner not working
I am working with CertificatePinner to prevent man-in-the-middle attacks.
I have an SSL certificate.
The problem here is that the web service works fine even if I enter invalid SHA pins.
CertificatePinner certificatePinner = new CertificatePinner.Builder()
.add("myadr.com", "sha256/********************************=")
.build();
Authenticator authenticator = new Authenticator() {
@Override
public Request authenticate(Route route, Response response) throws IOException {
String credential = Credentials.basic("******", "********");
return response.request().newBuilder()
.header("Authorization", credential)
.build();
}
};
OkHttpClient client = new OkHttpClient.Builder()
.certificatePinner(certificatePinner)
.connectTimeout(timeOut, TimeUnit.MILLISECONDS)
.readTimeout(timeOut, TimeUnit.MILLISECONDS)
.authenticator(authenticator)
.build();
Request request;
......................
Could you please provide a complete test case?
My code still works even if I change the SHA pin; the web service keeps working.
CertificatePinner certificatePinner = new CertificatePinner.Builder()
.add(HOSTNAME, "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
.build();
OkHttpClient client;
client = new OkHttpClient.Builder()
.connectTimeout(timeOut, TimeUnit.MILLISECONDS)
.readTimeout(timeOut, TimeUnit.MILLISECONDS)
.authenticator(new Authenticator() {
@Override
public Request authenticate(Route route, Response response) throws IOException {
String credential = Credentials.basic("*********", "********");
return response.request().newBuilder()
.header("Authorization", credential)
.build();
}
})
.build();
Request request;
Response response = null;
if (method == GET) {
if (paramsGet != null) {
String paramString = URLEncodedUtils
.format(paramsGet, "utf-8");
url += "?" + paramString;
}
String deviceID = Utility.DEVICE_ID;
String carrier = Utility.CARRIER;
request = new Request.Builder()
.url(url)
.addHeader("Device", deviceID)
.addHeader("carrier", carrier)
.build();
response = client.newCall(request).execute();
}
In the code sample above no certificate pinner is installed.
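To make the diagnosis concrete, a minimal sketch of the missing wiring (reusing the placeholder HOSTNAME and pin from the sample above): the pinner must be passed to the client builder, otherwise the pins are never checked.
CertificatePinner certificatePinner = new CertificatePinner.Builder()
    .add(HOSTNAME, "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .build();

OkHttpClient client = new OkHttpClient.Builder()
    // without this call the pins above are silently ignored
    .certificatePinner(certificatePinner)
    .connectTimeout(timeOut, TimeUnit.MILLISECONDS)
    .readTimeout(timeOut, TimeUnit.MILLISECONDS)
    .build();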
Thanks, do you have a tutorial that can help me?
Merci
On 21 Feb 2017 8:32 p.m., "Jesse Wilson" notifications@github.com wrote:
https://github.com/square/okhttp/blob/master/samples/guide/src/main/java/okhttp3/recipes/CertificatePinning.java
Good morning,
I added the CertificatePinning class with my custom parameters, but how do I use this class? I am working on an Android application.
|
gharchive/issue
| 2017-02-21T13:41:35 |
2025-04-01T06:40:27.977579
|
{
"authors": [
"mohamedchouat",
"swankjesse"
],
"repo": "square/okhttp",
"url": "https://github.com/square/okhttp/issues/3179",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
500794513
|
callEnd and responseBodyEnd methods are not called for EventListener
When using an asynchronous call with a response Callback, the callEnd and responseBodyEnd callbacks are not triggered, but the Callback's onResponse does get triggered.
val client = OkHttpClient.Builder()
.eventListenerFactory(HttpEventListenerFactory.FACTORY)
.build()
val request = Request.Builder()
.url("http://jsonplaceholder.typicode.com/comments?postId=1")
.build()
with(client) {
newCall(request).enqueue(object : Callback {
override fun onFailure(call: Call, e: IOException) {
Log.d("OkHttp##", "Request failed")
}
override fun onResponse(call: Call, response: Response) {
//this is getting triggered
Log.d("OkHttp##", "Response received")
}
})
}
EventListener class
public class HttpEventListenerFactory extends EventListener {
    public static final Factory FACTORY = new Factory() {
        final AtomicLong nextCallId = new AtomicLong(1L);

        @Override
        public EventListener create(Call call) {
            long callId = nextCallId.getAndIncrement();
            Log.d("OkHttp##", "next call id : " + nextCallId);
            String message = String.format(Locale.US, "%04d %s%n", callId, call.request().url());
            Log.d("OkHttp##", message);
            return new HttpEventListenerFactory(callId, System.nanoTime());
        }
    };

    // field and constructor implied by the factory above (missing from the original paste)
    private final long callId;

    public HttpEventListenerFactory(long callId, long callStartNanos) {
        this.callId = callId;
    }

    private void printEvent(String name, long callId) {
        Log.d("OkHttp##", name + ", call " + callId);
    }

    @Override
    public void responseBodyEnd(Call call, long byteCount) {
        // this method never gets called
        printEvent("Response body end", callId);
    }
}
It doesn't look like you are consuming the response body and then closing it. This is likely the problem.
What do you suggest I should do in my code to get the responseBodyEnd callback ?
What do you suggest I should do in my code to get the responseBodyEnd callback ?
Read the body and then close the response?
Thank you @yschimke. It worked.
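For future readers, a minimal sketch of a callback that consumes and closes the body, assuming OkHttp 4's Kotlin API (as in the snippet above):
override fun onResponse(call: Call, response: Response) {
    // use {} closes the response when the block exits, and string() reads
    // the body to the end; both are needed for responseBodyEnd/callEnd to fire
    response.use {
        val body = it.body?.string()
        Log.d("OkHttp##", "Response received: ${body?.length ?: 0} chars")
    }
}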
|
gharchive/issue
| 2019-10-01T10:05:46 |
2025-04-01T06:40:27.980957
|
{
"authors": [
"bantyK",
"yschimke"
],
"repo": "square/okhttp",
"url": "https://github.com/square/okhttp/issues/5516",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
579454628
|
Windows build
Build Addle for Windows with MinGW-w64 and MSVC.
Bonus points for attempting to build Addle for Win32.
I'm seriously deliberating whether to support MSVC. It's doable, but a pain, and I question its necessity.
After some deliberation, I have decided that Addle will only support MinGW-w64 based builds on Windows. Supporting MSVC adds clutter to the source code, constrains the libraries Addle can use on Windows, and represents a general vague compatibility hazard where all our other target platforms are POSIX. These would be perfectly livable (lots of projects put up with it), but frankly I don't think MSVC support would add that much value to Addle.
MinGW-w64 works quite well and definitely simplifies this leg of cross-platform support. Meanwhile, Addle isn't a library or utility that necessitates ABI compatibility, and the requirement of Qt is already quite large enough that I don't find the requirement of a (free and open source) compiler and toolchain to be that significant of an increase to our build burden.
I still intend for Addle to support plugins compiled with MSVC.
Clang exposes some interesting MSVC compatibility features that could represent some kind of compromise solution and/or means of using Visual Studio to develop and debug Addle.
I am reversing this decision in light of three important pieces of information:
__declspec( dllexport ) and __declspec( dllimport ) specifiers are actually not so much about MSVC as they are about Windows DLLs -- which stands to reason, given the "DLL" in the names. While MinGW-w64/GCC is able to link symbols declared without these specifiers, they will not be relocated during runtime, meaning that the "same" symbols will have different addresses from the perspectives of different libraries. That is a problem when, for example, calling QObject::connect in one library, using a pointer to a member function defined in another library.
If a library contains __declspec( dllexport ) directives, MinGW-w64/GCC will link it using MSVC-style rules instead of the default Linux-style rules, meaning that GCC will produce similar linker errors for missing export specifiers.
Classes containing only pure-virtual methods do not need to be exported.
These import/export specifiers were a big reason why I was so reluctant to support MSVC, since GCC "worked" without them and MSVC didn't. But note 1 shows us avoiding them is much more of a "compatibility hazard" than using them. Notes 2 and 3 show us that they will not be too difficult to maintain as their use can be checked from Linux and there will not be nearly as many of them as I expected.
MSVC still requires small adjustments to be happy, but at this point they're basically trivial. Meanwhile, MSVC is the most popular and well-supported compiler for native Windows binaries, so I might as well support it in the spirit of making Addle more accessible to the developer community and the tools that are available. Plus, being natively supported makes it possibly advantageous for performance or debugging on Windows.
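For illustration, the usual way projects handle note 1 is a shared export macro; a minimal sketch of the pattern (macro and class names here are hypothetical, not Addle's actual headers):
// export.h (hypothetical)
#ifdef _WIN32
#  ifdef ADDLE_BUILDING_DLL
#    define ADDLE_EXPORT __declspec(dllexport)  // compiling the library itself
#  else
#    define ADDLE_EXPORT __declspec(dllimport)  // consuming the library
#  endif
#else
#  define ADDLE_EXPORT  // POSIX targets need no specifier
#endif

// A concrete class crossing the DLL boundary must be exported, so its
// symbols are relocated to the same address from every library's perspective:
class ADDLE_EXPORT SomeService
{
public:
    void doThing();
};

// A pure-virtual interface (note 3) needs no export specifier:
class ISomeInterface
{
public:
    virtual ~ISomeInterface() = default;
    virtual void doThing() = 0;
};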
After 999603e86789397d1d03ebaf771ca7c98767e5e7 I was able to successfully build and run the dev_1 branch using MSVC.
|
gharchive/issue
| 2020-03-11T18:13:54 |
2025-04-01T06:40:28.023227
|
{
"authors": [
"squeevee"
],
"repo": "squeevee/Addle",
"url": "https://github.com/squeevee/Addle/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
507091774
|
Natural join seems to eliminate rows which it shouldn't
MySQL [gitbase]> select blob_hash, repository_id from blobs natural join repositories where blob_hash in ('93ec5b4525363844ddb1981adf1586ebddbc21c1', 'aad34590345310fe813fd1d9eff868afc4cea10c', 'ed82eb69daf806e521840f4320ea80d4fe0af435');
+------------------------------------------+-------------------------------------+
| blob_hash | repository_id |
+------------------------------------------+-------------------------------------+
| aad34590345310fe813fd1d9eff868afc4cea10c | github.com/bblfsh/javascript-driver |
| ed82eb69daf806e521840f4320ea80d4fe0af435 | github.com/src-d/enry |
| aad34590345310fe813fd1d9eff868afc4cea10c | github.com/bblfsh/python-driver |
| 93ec5b4525363844ddb1981adf1586ebddbc21c1 | github.com/src-d/go-mysql-server |
| aad34590345310fe813fd1d9eff868afc4cea10c | github.com/bblfsh/ruby-driver |
| ed82eb69daf806e521840f4320ea80d4fe0af435 | github.com/src-d/gitbase |
+------------------------------------------+-------------------------------------+
6 rows in set (14.90 sec)
MySQL [gitbase]> select blob_hash, repository_id from blobs where blob_hash in ('93ec5b4525363844ddb1981adf1586ebddbc21c1', 'aad34590345310fe813fd1d9eff868afc4cea10c', 'ed82eb69daf806e521840f4320ea80d4fe0af435');
+------------------------------------------+-------------------------------------+
| blob_hash | repository_id |
+------------------------------------------+-------------------------------------+
| aad34590345310fe813fd1d9eff868afc4cea10c | github.com/bblfsh/python-driver |
| aad34590345310fe813fd1d9eff868afc4cea10c | github.com/bblfsh/javascript-driver |
| ed82eb69daf806e521840f4320ea80d4fe0af435 | github.com/src-d/enry |
| aad34590345310fe813fd1d9eff868afc4cea10c | github.com/bblfsh/ruby-driver |
| 93ec5b4525363844ddb1981adf1586ebddbc21c1 | github.com/src-d/gitbase |
| ed82eb69daf806e521840f4320ea80d4fe0af435 | github.com/src-d/gitbase |
| 93ec5b4525363844ddb1981adf1586ebddbc21c1 | github.com/src-d/go-mysql-server |
| ed82eb69daf806e521840f4320ea80d4fe0af435 | github.com/src-d/go-mysql-server |
+------------------------------------------+-------------------------------------+
8 rows in set (0.13 sec)
Also note that removing the natural join makes things go much faster. It was my understanding that normally we want to join with repositories to benefit from some specific optimizations (although I'm guessing that filtering on blob_hash makes those optimizations moot).
Normally we don't want to join with repositories unless there are already joins involved. When querying a single table like blobs, they usually have other optimizations in place.
For example, blobs with a filter like blob_hash IN (list) only reads the given blobs in each repository. That's why it's faster with no join.
As with everything: it depends on the query, depending on what you want, some optimizations may be better than others for performance.
In any case, I reproduced the bug and there's actually an issue. It seems to not return the repeated rows for some reason.
Yeah, I suspected something like that. Anyway, for my use case the lack of duplicated rows is not an issue, so for me this is not high priority.
This bug is really weird. The natural join is the one returning the correct result. If you remove the optimization in the blobs table, it returns the same. So, there's something going on, because repo.BlobObjects() doesn't return these blobs, but accessing them directly does.
@alexpdp7 are you using siva files obtained from gitcollector?
I tried with regular repositories and it didn't happen.
Yup, it's using siva
Narrowed it down to a siva issue and reported it to go-borges: https://github.com/src-d/go-borges/issues/90, so leaving this as blocked until it's solved on their side.
|
gharchive/issue
| 2019-10-15T08:35:45 |
2025-04-01T06:40:28.152435
|
{
"authors": [
"alexpdp7",
"erizocosmico"
],
"repo": "src-d/gitbase",
"url": "https://github.com/src-d/gitbase/issues/977",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
293376982
|
Does it support new file or notebook?
When choosing to create a new file or notebook, it redirects to a blank page.
Please turn on debug logging (README) and send the full log here. Otherwise I cannot help - works everywhere in our setups.
Also please say which python version you are using.
Also try to run the test suite and report the result (python3.5 test.py)
|
gharchive/issue
| 2018-02-01T01:56:20 |
2025-04-01T06:40:28.154183
|
{
"authors": [
"chlins",
"vmarkovtsev"
],
"repo": "src-d/jgscm",
"url": "https://github.com/src-d/jgscm/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
254378468
|
Document dependencies
I created a Requirements section that explains what is needed to serve the landing.
The full process to install and configure Go is not described, but links to do so are included.
I'd need you to confirm that the docs are now clear enough.
@marnovo and @ricardobaeta, I'd need you to take a look at this PR, since your feedback is needed to improve the docs :D
|
gharchive/pull-request
| 2017-08-31T15:38:20 |
2025-04-01T06:40:28.155754
|
{
"authors": [
"dpordomingo"
],
"repo": "src-d/landing",
"url": "https://github.com/src-d/landing/pull/148",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2431672876
|
remove syslog-ng
Hi @KTodts
do you recall where you removed syslog-ng in time for 24.3? I forgot which lab that was.
Maybe you can replicate this for this repo as well?
@hellt,
yes sure!
it was this repo
|
gharchive/issue
| 2024-07-26T08:07:50 |
2025-04-01T06:40:28.169149
|
{
"authors": [
"KTodts",
"hellt"
],
"repo": "srl-labs/srl-telemetry-lab",
"url": "https://github.com/srl-labs/srl-telemetry-lab/issues/40",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
356181418
|
Bump dependencies for newer Java and Leiningen support
This PR bumps a couple of dependencies:
Newer Leiningen
When running leiningen commands with a newer JDK (I'm running java 10), things fail immediately:
$ ./go test
Start frontend tests
[...]
Start backend tests
Exception in thread "main" java.lang.ExceptionInInitializerError
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:374)
at clojure.lang.RT.classForName(RT.java:2168)
at clojure.lang.RT.classForName(RT.java:2177)
at clojure.lang.RT.loadClassForName(RT.java:2196)
at clojure.lang.RT.load(RT.java:443)
at clojure.lang.RT.load(RT.java:419)
at clojure.core$load$fn__5677.invoke(core.clj:5893)
at clojure.core$load.invokeStatic(core.clj:5892)
at clojure.core$load.doInvoke(core.clj:5876)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core__init.load(Unknown Source)
at clojure.core__init.<clinit>(Unknown Source)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:374)
at clojure.lang.RT.classForName(RT.java:2168)
at clojure.lang.RT.classForName(RT.java:2177)
at clojure.lang.RT.loadClassForName(RT.java:2196)
at clojure.lang.RT.load(RT.java:443)
at clojure.lang.RT.load(RT.java:419)
at clojure.lang.RT.doInit(RT.java:461)
at clojure.lang.RT.<clinit>(RT.java:331)
at clojure.main.<clinit>(main.java:20)
Caused by: java.lang.ClassNotFoundException: java/sql/Timestamp
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:374)
at clojure.lang.RT.classForName(RT.java:2168)
at clojure.lang.RT.classForNameNonLoading(RT.java:2181)
at clojure.instant$loading__5569__auto____6869.invoke(instant.clj:9)
at clojure.instant__init.load(Unknown Source)
at clojure.instant__init.<clinit>(Unknown Source)
... 23 more
Seems like Leiningen 2.7.* has some issues with those newer Java versions, so this PR updates the packaged lein script to the most recent stable release.
Newer lambdacd libraries as dev-dependencies
The newer Leiningen version enforces HTTPS for repositories. The previously used library versions were still pulling in an insecure repository as a transitive dependency, resulting in errors on build:
$ ./go test
Start frontend tests
[...]
Start backend tests
Picked up _JAVA_OPTIONS: -Xmx2048m -Xms512m
Retrieving lambdacd/lambdacd/0.13.3/lambdacd-0.13.3.pom from clojars
Retrieving throttler/throttler/1.0.0/throttler-1.0.0.pom from clojars
Retrieving hiccup/hiccup/1.0.5/hiccup-1.0.5.pom from clojars
Retrieving org/clojure/clojure/1.2.1/clojure-1.2.1.pom from clojars
Retrieving me/raynes/conch/0.8.0/conch-0.8.0.pom from clojars
Tried to use insecure HTTP repository without TLS.
This is almost certainly a mistake; however in rare cases where it's
intentional please see `lein help faq` for details.
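If bumping the libraries were not an option, the usual workaround is to re-declare the offending repository over HTTPS in project.clj. A minimal sketch, assuming the insecure repository is Clojars (the project name here is a placeholder; check the actual error output for the real repository name and URL):
;; project.clj (sketch): override the insecure repository with its HTTPS equivalent
(defproject lambda-ui "0.0.0"
  :repositories [["clojars" {:url "https://repo.clojars.org/"}]])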
Newer http-kit to support newer Java versions
The older http-kit had problems with Java 9/10 (observed and fixed in LambdaCD as well)
It would be great if this PR can be merged and a new release can be created!
Thanks for your great work!
|
gharchive/pull-request
| 2018-09-01T07:38:45 |
2025-04-01T06:40:28.173963
|
{
"authors": [
"andibraeu",
"flosell"
],
"repo": "sroidl/lambda-ui",
"url": "https://github.com/sroidl/lambda-ui/pull/108",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1816876921
|
🛑 NZBGet is down
In c2f0fbd, NZBGet (https://nzbget.srueg.ch/ping) was down:
HTTP code: 530
Response time: 64 ms
Resolved: NZBGet is back up in 02e5bcd.
|
gharchive/issue
| 2023-07-22T19:52:37 |
2025-04-01T06:40:28.184240
|
{
"authors": [
"srueg"
],
"repo": "srueg/upptime",
"url": "https://github.com/srueg/upptime/issues/390",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1835623294
|
🛑 NZBGet is down
In 3afb5c6, NZBGet (https://nzbget.srueg.ch/ping) was down:
HTTP code: 502
Response time: 3692 ms
Resolved: NZBGet is back up in 851ba6c after 38 days, 14 hours, 16 minutes.
|
gharchive/issue
| 2023-08-03T19:31:14 |
2025-04-01T06:40:28.186620
|
{
"authors": [
"srueg"
],
"repo": "srueg/upptime",
"url": "https://github.com/srueg/upptime/issues/634",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
863773793
|
Double requestSolution sent for Firefox Focus (Android)
Similar to https://github.com/ssb-ngi-pointer/go-ssb-room/issues/170 but sign-in succeeds, it's just odd or undesirable that there would be two calls to requestSolution.
On Chrome Android and normal Firefox Android, only one requestSolution is called.
I have a hunch Firefox Focus might be doing a HEAD request first, which might be handled like a GET.
|
gharchive/issue
| 2021-04-21T11:37:41 |
2025-04-01T06:40:28.206161
|
{
"authors": [
"cryptix",
"staltz"
],
"repo": "ssb-ngi-pointer/go-ssb-room",
"url": "https://github.com/ssb-ngi-pointer/go-ssb-room/issues/178",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
358359140
|
KeyNotFoundException when generating source code using Jenny.
Hi All
I'd like to suggest something: I found out what causes a KeyNotFoundException.
I'm pretty sure you all already know it.
It happens when a user hasn't generated code from Entitas yet (or hasn't imported the plugin).
I wanted to modify it to print 'Please generate code using Entitas' or to execute that process,
but it seems like DesperateDevs is not part of this project.
Hi,
indeed, I've seen this too :D This error message might be confusing, you are right.
DesperateDevs is not part of the Entitas project, I will take a look into this.
Fyi:
KeyNotFoundException is usually thrown by the Properties.cs class when a key is requested that doesn't exist, e.g. when the Preferences.properties is not yet configured
|
gharchive/issue
| 2018-09-09T08:04:06 |
2025-04-01T06:40:28.221039
|
{
"authors": [
"gd2020",
"sschmid"
],
"repo": "sschmid/Entitas-CSharp",
"url": "https://github.com/sschmid/Entitas-CSharp/issues/784",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1693936324
|
Update README.md
Instead of -t, it should be -i for the image to build/run.
Thanks!
|
gharchive/pull-request
| 2023-05-03T12:00:52 |
2025-04-01T06:40:28.229332
|
{
"authors": [
"ranjitkathiriya",
"sskorol"
],
"repo": "sskorol/ros2-humble-docker-dev-template",
"url": "https://github.com/sskorol/ros2-humble-docker-dev-template/pull/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
910782789
|
Update logging, change logging module name
Hi Gordon, there are a few changes here. Some are more ticky-tacky, so feel free to push back on those if needed. The main change is the logging on line 77 of servicex_adapter.py ( self.logger.info(f"Metric: {json.dumps(mesg)}") ). This outputs the necessary line to get parsed and used for performance information by Ilija's code.
The other changes are:
changing the name from logging to did_logging, since the original name might cause confusion with the logging module
using logging.exception in the except clauses; doing this gets rid of the need for traceback.print_exc, since the exception reporting outputs the same information
in servicex_adapter, I changed __logging to an instance attribute and renamed it to self.logger. I think using an instance attribute looks better; also, __logging being used in the methods might be a bit confusing for people (and possibly the interpreter), since it's really close to self.__logging, which would get name mangled.
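A minimal sketch of the combined pattern (class and method names here are hypothetical, not the actual ServiceX_DID_Finder_lib API):
import json
import logging


class ServiceXAdapter:
    def __init__(self):
        # instance attribute instead of the name-mangling-prone __logging
        self.logger = logging.getLogger(__name__)

    def post_metric(self, mesg):
        # emits the line that gets parsed for performance information
        self.logger.info(f"Metric: {json.dumps(mesg)}")

    def post_status_update(self, status):
        try:
            self.post_metric({"status": status})
        except Exception:
            # logs the message plus traceback, replacing traceback.print_exc()
            self.logger.exception("Failed to post status update")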
These look great, thank you very much!!
|
gharchive/pull-request
| 2021-06-03T19:16:24 |
2025-04-01T06:40:28.233480
|
{
"authors": [
"gordonwatts",
"sthapa"
],
"repo": "ssl-hep/ServiceX_DID_Finder_lib",
"url": "https://github.com/ssl-hep/ServiceX_DID_Finder_lib/pull/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1474240951
|
Physical backspace key is slow
When remapping backspace to a physical key, the responsiveness is much lower than when using the software backspace button. Can this responsiveness be improved somehow?
I think I know what you mean. Unfortunately, I can't make them work the same on all phones, because the on-screen key needs to be adjusted programmatically, while the keypad repeat rate is handled by Android and it will probably differ a little on every phone.
Either way, I am almost certain there is a mistake in the code, slowing down the hardware key more than it should, that I will fix.
Technical Description:
The on-screen key sends events so fast, that I had to write extra code to make it repeat every 100 ms or so. Unfortunately, I forgot the hardware keys have a normal repeat rate and additionally applied the software repeat rate to the hardware backspace. When the two are combined, it results in a longer delay between two "delete" events.
Solution:
Apply the repeat delay only to the soft key.
Thanks for reporting! I wouldn't have noticed it myself.
The hardware key is no longer manually controlled, so it will now repeat at whatever the default phone/Android rate is. On my phone the delay between two events is about 50 ms. I adjusted the soft key to match that. It may differ on other phones, but nothing can be done about it. Case closed.
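A minimal sketch of the fix described above (class and method names are hypothetical, not TT9's actual code): throttle only the on-screen key and let Android's native key repeat drive the hardware key.
import android.os.SystemClock;

class BackspaceHandler {
    private static final long SOFT_REPEAT_MS = 50; // roughly the observed hardware repeat rate
    private long lastSoftDelete = 0;

    // Fired very rapidly while the on-screen key is held, so throttle it.
    boolean onSoftBackspace() {
        long now = SystemClock.uptimeMillis();
        if (now - lastSoftDelete < SOFT_REPEAT_MS) {
            return true; // swallow the event, too soon since the last delete
        }
        lastSoftDelete = now;
        return deleteBeforeCursor();
    }

    // Fired at Android's own key-repeat rate; apply no extra delay.
    boolean onHardwareBackspace() {
        return deleteBeforeCursor();
    }

    private boolean deleteBeforeCursor() {
        // delete one character via the InputConnection (omitted in this sketch)
        return true;
    }
}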
|
gharchive/issue
| 2022-12-03T21:25:45 |
2025-04-01T06:40:28.238851
|
{
"authors": [
"TomJansen",
"sspanak"
],
"repo": "sspanak/tt9",
"url": "https://github.com/sspanak/tt9/issues/123",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2252210530
|
Generated typescript files should not be excluded from git
Hi,
When you want to check TypeScript and ESLint rules in CI on PR branches, you quickly end up in the situation that you first need to deploy your app in CI before the types are generated. For PRs this is not really ideal, since you don't want the changes to be applied directly (hence the PR).
An option would be to generate the types in userspace (ex: sst.types.ts), indicating that the files should not be edited and are always overwritten; this way they can be committed and CI can run.
Regards
yeah we need to think about this more - maybe it does make sense to check in the generated code. I was thinking about adding a codegen field in the config to allow specifying additional places to output code (and support other languages like Go)
we reworked where these get generated so you can handle however you want - try 0.0.343
|
gharchive/issue
| 2024-04-19T06:47:56 |
2025-04-01T06:40:28.251718
|
{
"authors": [
"JanStevens",
"thdxr"
],
"repo": "sst/ion",
"url": "https://github.com/sst/ion/issues/271",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2278305469
|
AWS Cognito preSignUp trigger arn: invalid prefix
Hello all, when the Cognito preSignUp trigger is created, the deploy is terminated and there is an error that says:
XXX is an invalid ARN: arn: invalid prefix. Examine values at 'XXX.lambdaConfig.preSignUp'.
Nice I'll ask Frank to review.
@jaduplessis I've just hit this issue and after fixing it using workaround (see below) I discovered another issue, this time in pulumi:
https://github.com/pulumi/pulumi-aws/issues/678
Should the invoke permission be implemented here as well? E.g. adding an invoke permission for each function:
new aws.lambda.Permission("AllowExecutionFromCognito", {
action: "lambda:InvokeFunction",
function: migrateUser.name,
principal: "cognito-idp.amazonaws.com",
sourceArn: pool.nodes.userPool.arn,
});
Workaround:
const migrateUser = new sst.aws.Function("MigrateUser", {
handler: "src/lambdas/migrate-user.handler",
});
const pool = new sst.aws.CognitoUserPool(
"RekapUserPool",
{
transform: {
userPool: {
lambdaConfig: {
userMigration: migrateUser.arn,
},
}
}
)
@jakubknejzlik I think you make a good point. I had to implement something similar myself. I've updated the PR to create the permissions after the pool and functions have been defined
|
gharchive/issue
| 2024-05-03T19:34:53 |
2025-04-01T06:40:28.254765
|
{
"authors": [
"jaduplessis",
"jakubknejzlik",
"jayair",
"urosbelov"
],
"repo": "sst/ion",
"url": "https://github.com/sst/ion/issues/361",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2514892910
|
NextjsSite.buildDefaultServerCachePolicyProps() is no longer available?
Since 2.43.0, NextjsSite.buildDefaultServerCachePolicyProps() has been removed? It's still in the docs though.
What would be the recommended way to ensure we are only modifying the defaults rather than redefining the settings?
// Add Accept-Language to cache policy
const defaultServerCachePolicyProps = NextjsSite.buildDefaultServerCachePolicyProps()
const cachePolicyProps = {
...defaultServerCachePolicyProps,
headerBehavior: {
...defaultServerCachePolicyProps.headerBehavior,
behavior: defaultServerCachePolicyProps.headerBehavior?.behavior || 'whitelist',
headers: [
...(defaultServerCachePolicyProps.headerBehavior?.headers || []),
'Accept-Language',
],
},
}
const serverCachePolicy = new cf.CachePolicy(stack, 'ServerCache', cachePolicyProps)
We ended up looking at the serverCachePolicy in SSRSite + NextjsSite, copied it into our config, and added our needed changes...
{
cdk: {
distribution: {
defaultBehavior: {
responseHeadersPolicy: cf.ResponseHeadersPolicy.SECURITY_HEADERS, // Added sane sec defaults
},
},
serverCachePolicy: new cf.CachePolicy(stack, 'ServerCache', {
queryStringBehavior: cf.CacheQueryStringBehavior.all(),
headerBehavior: cf.CacheHeaderBehavior.allowList(
'x-open-next-cache-key',
'accept-language', // Added lang support
),
cookieBehavior: cf.CacheCookieBehavior.allowList('appSession', 'hubspotutk'), // Added Auth0 and HubSpot sessions
defaultTtl: cdk.Duration.days(0),
maxTtl: cdk.Duration.days(365),
minTtl: cdk.Duration.days(0),
enableAcceptEncodingBrotli: true,
enableAcceptEncodingGzip: true,
comment: 'SST server response cache policy',
}),
},
}
|
gharchive/issue
| 2024-09-09T20:49:47 |
2025-04-01T06:40:28.257548
|
{
"authors": [
"iDVB"
],
"repo": "sst/sst",
"url": "https://github.com/sst/sst/issues/3855",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
203131470
|
influxdb metrics weird
As seen below, it reports http data as 0.X. Is that intentional?
I'm not sure what you are referring to with 0.X.
In the image attached, it shows HTTP queries as 0.05 ops or 0.30 ops.
That seems inaccurate to me. Unless I don't understand it right?
I see what you are referring to. I don't currently have those dashboards running so I can't validate the data, but that does look odd.
closing as the old dashboards are not kept up to date.
|
gharchive/issue
| 2017-01-25T15:20:01 |
2025-04-01T06:40:28.259661
|
{
"authors": [
"sstarcher",
"xreeckz"
],
"repo": "sstarcher/grafana-dashboards",
"url": "https://github.com/sstarcher/grafana-dashboards/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1574471969
|
fix release build for npm and pypi packages
The release jobs for a tag pipeline are missing credentials and/or using the wrong image to publish their packages:
https://git.apextoaster.com/ssube/onnx-web/-/jobs/432976
https://git.apextoaster.com/ssube/onnx-web/-/jobs/432977
Fix that before the next release, so that it can happen automatically.
https://git.apextoaster.com/ssube/onnx-web/-/jobs/434939 passed and a fix for https://git.apextoaster.com/ssube/onnx-web/-/jobs/434921 has been pushed.
|
gharchive/issue
| 2023-02-07T14:49:08 |
2025-04-01T06:40:28.286342
|
{
"authors": [
"ssube"
],
"repo": "ssube/onnx-web",
"url": "https://github.com/ssube/onnx-web/issues/115",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2541052141
|
build flash -firmware
please help me
Hi, the link you posted is broken and I can't check the details. Could you please repost it?
Hi, thanks for your reply. That issue was due to computer configuration problems, which have already been solved. I have encountered another problem:
npm run mod ./mods/face/manifest.json
Error: no manifest!
please help me!
Please specify the target according to the device you are using, as shown below.
npm run mod --target=esp32/m5stack_core2 ./mods/face/manifest.json
I followed your instructions to run the command, but still received an error message. However, I changed --target=esp32/m5stack_core2 to --target=esp32, and now it can be run.
run as shown below:
s@ubuntu:~/stack-chan/firmware$ npm run mod --target=esp32/m5stack_core2 ./mods/face/manifest.json
stack-chan@0.2.1 mod
cross-env npm_config_target?=esp32/m5stack cross-env-shell mcrun -d -m -p $npm_config_target ./mods/face/manifest.json
Error: No such file or directory
s@ubuntu:~/stack-chan/firmware$ npm run mod --target=esp32 ./mods/face/manifest.json
stack-chan@0.2.1 mod
cross-env npm_config_target?=esp32/m5stack cross-env-shell mcrun -d -m -p $npm_config_target ./mods/face/manifest.json
But I encountered another mistake,as shown below:
Python requirements are satisfied.
Please wait - probing for device
Failed to find a suitable device. Check your connections or set UPLOAD_PORT
Please wait - probing for device
Failed to find a suitable device. Check your connections or set UPLOAD_PORT
/bin/sh: 1: [[: not found
/home/s/Projects/moddable/tools/serial2xsbug/serial2xsbug_lin.c:132: No such file or directory
make: *** [/home/s/Projects/moddable/build/tmp/esp32/debug/face/makefile:136: debug] Error 1
However, I have already connected an m5stack.basic to the virtual machine using type_c
If you have recently set up the environment with npm run setup, it is likely due to an issue with Moddable. In that case, updating Moddable to the latest version
should resolve the issue.
https://github.com/Moddable-OpenSource/moddable/issues/1408
You can check the commit of the installed Moddable with npm run doctor.
I'm sorry; following your advice and updating Moddable to the latest version, I reproduced the same problem, as shown below:
Build environment:linux
Target device: esp32/m5stack
Steps to Reproduce:
Build and install the app using this build command: npm run mod --target=esp32/m5stack ./mods/look_around/manifest.json
/bin/sh: 1: [[: not found
/home/s/Projects/moddable/tools/serial2xsbug/serial2xsbug_lin.c:132: No such file or directory
make: *** [/home/s/Projects/moddable/build/tmp/esp32/debug/look_around/makefile:133: debug] Error 1
It's true that there is a failure, but from the logs, it seems that another issue is occurring.
How did you update Moddable?
I will perform the following actions:
git clone https://github.com/Moddable-OpenSource/moddable
I made another attempt; it gets stuck at the following prompt:
s@ubuntu:~/stack-chan/firmware$ npm run mod --target=esp32/m5stack ./mods/face/manifest.json
stack-chan@0.2.1 mod
cross-env npm_config_target?=esp32/m5stack cross-env-shell mcrun -d -m -p $npm_config_target ./mods/face/manifest.json
Detecting the Python interpreter
Checking "python3" ...
Python 3.8.10
"python3" has been detected
Checking Python compatibility
Using a supported version of tool cmake found in PATH: 3.16.3.
However the recommended version is 3.24.0.
Using a supported version of tool cmake found in PATH: 3.16.3.
However the recommended version is 3.24.0.
Constraint file: /home/s/.espressif/espidf.constraints.v5.3.txt
Requirement files:
/home/s/esp32/esp-idf/tools/requirements/requirements.core.txt
Python being checked: /home/s/.espressif/python_env/idf5.3_py3.8_env/bin/python
Python requirements are satisfied.
/bin/sh: 1: [[: not found
Has this open-source project already rejected interested developers? Please reply to me, thank you.
As you continue developing stack-chan, I recommend understanding Moddable as well.
For this Moddable update, simply updating the repository is not sufficient. Please refer to the following documentation for the updating and other troubleshooting.
https://github.com/Moddable-OpenSource/moddable/blob/public/documentation/devices/esp32.md
thank you for your reply
|
gharchive/issue
| 2024-09-22T12:19:26 |
2025-04-01T06:40:28.315577
|
{
"authors": [
"stc1988",
"su-cheng-yang"
],
"repo": "stack-chan/stack-chan",
"url": "https://github.com/stack-chan/stack-chan/issues/294",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2718106092
|
doc: describe self signed certificate lifetime configuration
Part of: https://github.com/stackabletech/issues/issues/586
Replaced by https://github.com/stackabletech/documentation/pull/689, which incorporated the changes from here
|
gharchive/pull-request
| 2024-12-04T15:28:12 |
2025-04-01T06:40:28.317315
|
{
"authors": [
"razvan",
"sbernauer"
],
"repo": "stackabletech/documentation",
"url": "https://github.com/stackabletech/documentation/pull/688",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2652344419
|
Not able to see my previous chat
Describe the bug
When I open bolt.new and open the side menu, I can't see my previous chat. I had a full app that I was almost finishing, and now I can't find it.
Link to the Bolt URL that caused the error
https://bolt.new/
Steps to reproduce
As you can see I don't see any project here in this screenshot
But if I open https://stackblitz.com/ I can see my previous projects, though only the ones for which I created a repo. Unfortunately, I didn't create a repo for the project I was working on, and I don't see how I can recover it.
Expected behavior
I should be able to see all my projects.
Screen Recording / Screenshot
No response
Platform
OS: macOS
Browser: Chrome
Version: Version 130.0.6723.117 (Official Build) (arm64)
Additional context
No response
Appreciate the feedback! We are aware of this issue with chat persistence. Temporary workarounds and updates can be found here: #39. (Go to stackblitz.com, login (same credentials as Bolt), click Collections on the left-hand side, click Bolt Collection). Appreciate your patience as improvements continue to be made.
|
gharchive/issue
| 2024-11-12T14:25:59 |
2025-04-01T06:40:28.333537
|
{
"authors": [
"blbacelar",
"endocytosis"
],
"repo": "stackblitz/bolt.new",
"url": "https://github.com/stackblitz/bolt.new/issues/2098",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2567930527
|
Bug: Project Not Persisted To Backend for Some Free Tier Users (RESOLVED)
Describe the bug
Project suddenly disappeared after a page refresh, making me lose 4 hours of work in an instant. :-/
Link to the Bolt URL that caused the error
https://bolt.new/~/bolt-vanilla-vite-6e7ljo
Steps to reproduce
Browser window got refreshed and the project disappeared completely.
Expected behavior
The project should have persisted after the browser window was refreshed.
Screen Recording / Screenshot
Platform
Browser name = Chrome
Full version = 129.0.0.0
Major version = 129
navigator.appName = Netscape
navigator.userAgent = Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36
performance.memory = {
"totalJSHeapSize": 82182935,
"usedJSHeapSize": 80779135,
"jsHeapSizeLimit": 2172649472
}
Username = tincleo
Chat ID = eb14bf1cab7d
Additional context
Please add some versioning to save versions while building something.
Apologies for this frustrating experience!
While there is a bug affecting a small number of free users (see below for more info) it is possible that what you are experiencing is simply related to our current limited chat persistence model. Chat history is currently not persisted across browsers/devices, and chats will disappear from the Bolt.new UI if your cache is cleared, so it is possible that your project was actually saved to StackBlitz behind the scenes. If that is the case, you can find your project on StackBlitz.com (same account as Bolt.new) > Collections Tab > Bolt collection. If this is you, we are landing improved chat persistence in 1-2 weeks and you can track that in issue 39.
Unfortunately, If your project is not stored there, then it is likely you are experiencing the bug described below:
Our engineering team has been looking at the logs and links that were provided related to this issue (thanks for providing these) and we have identified a root cause affecting a small number of users on our free tier rate limits. We are shipping a fix now but unfortunately are unlikely to be able to recover projects that ran into this error. We are working as I type this to ship the fix!
While bugs can happen during beta periods, we are sorry that you experienced this! I want to thank everyone that reported this issue and provided project links and information as it was key in identifying and resolving this issue for everyone going forward :man-bowing:
🎉 This fix has been shipped to production: This should not affect any users going forward!
If you are looking for a missing project, here's where you should find it:
Find Missing Chats or Open Chats On a New Browser/Device
Login to StackBlitz.com (same account as Bolt.new)
Open the Collections Tab
Open the Bolt collection
Open the project you want to edit in Bolt
Press the "Open in Bolt" button and continue coding!
Dear Bolt.new Support Team,
I am writing to follow up on my previous communication regarding the loss of my project, "Advanced Bookmark Management System with React and TypeScript." I appreciate the recent update indicating that a fix has been shipped to production, but unfortunately, this does not resolve my specific issue.
To reiterate my situation:
Paid User Status: I was using a paid plan when my project disappeared. This incident has significantly impacted my work, and I believe it warrants additional attention beyond what has been provided for free-tier users.
Project Recovery Attempts: I have followed the suggested steps to locate missing projects via StackBlitz.com under the Collections Tab > Bolt collection. However, my original project is not there. The files currently available were created after the loss as I attempted to restart my work.
Given these circumstances, I am requesting:
A thorough investigation into what happened to my original project and any potential recovery options available for paid users.
A refund or token reset due to the disruption caused by this data loss, reflecting my status as a paying customer.
I understand that bugs can occur during beta periods, but as a paying user, I expected a higher level of data protection and customer support. Could you please escalate this issue to ensure it receives the necessary attention?
Thank you for your assistance. I look forward to your response and a satisfactory resolution to this matter.
Best regards,
Alex
@Jedimind369 absolutely! Please reach out to hello@stackblitz.com with the same info as above + project URLs for assistance with this!
While you are fixing the bug for chat persistence: what is the guidance for storing the chat projects? Can I save them locally or in GitHub? It's not clear how bolt.new projects can be saved.
|
gharchive/issue
| 2024-10-05T12:02:44 |
2025-04-01T06:40:28.345997
|
{
"authors": [
"Jedimind369",
"danishmbutt",
"kc0tlh",
"tincleo"
],
"repo": "stackblitz/bolt.new",
"url": "https://github.com/stackblitz/bolt.new/issues/50",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2438313524
|
Adding urls to the previews definition
Is your feature request related to a problem?
I want to show several preview tabs - with different relative urls for example:
{port:5173, title:"frontend"}
{port:5173, title:"api",url:"/api/tasks"}
{port:5173, title:"admin",url:"/api/admin"}
Describe the solution you'd like.
I want to show several preview tabs - with different relative urls for example:
{port:5173, title:"frontend"}
{port:5173, title:"api",url:"/api/tasks"}
{port:5173, title:"admin",url:"/api/admin"}
Describe alternatives you've considered.
reviewed the https://tutorialkit.dev/reference/configuration/ url and couldn't find anything
Additional context
No response
Currently this is not possible but we are planning to add support for it. I've done some design for the possible API earlier:
previews:
- 1234/pathname
- [1234/pathname, 'Title']
- { port: 1234, path: '/pathname', title: 'Title' }
Released in 0.1.5.
|
gharchive/issue
| 2024-07-30T17:16:20 |
2025-04-01T06:40:28.353123
|
{
"authors": [
"AriPerkkio",
"noam-honig"
],
"repo": "stackblitz/tutorialkit",
"url": "https://github.com/stackblitz/tutorialkit/issues/192",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
694146941
|
Allow megaparsec 9.0.0
https://github.com/commercialhaskell/stackage/issues/5632
cassava-megaparsec-2.0.1 builds and passes tests with megaparsec-9.0.0, so a hackage revision should be enough.
Fixed by the 2.0.2 release.
|
gharchive/issue
| 2020-09-05T18:55:57 |
2025-04-01T06:40:28.354640
|
{
"authors": [
"simonmichael"
],
"repo": "stackbuilders/cassava-megaparsec",
"url": "https://github.com/stackbuilders/cassava-megaparsec/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1105220283
|
🛑 JodoHost is down
In e814c43, JodoHost (https://www.jodohost.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: JodoHost is back up in abb8167.
|
gharchive/issue
| 2022-01-16T23:33:43 |
2025-04-01T06:40:28.357170
|
{
"authors": [
"stackexpress-shivam"
],
"repo": "stackexpress-shivam/upptime",
"url": "https://github.com/stackexpress-shivam/upptime/issues/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
127108706
|
tex.setPixels binds to unit zero
This seems to be causing issues with gl-audio-analyser
https://github.com/stackgl/gl-audio-analyser/issues/1
Because the bind() call in setPixels is not necessarily going to target the same active unit as the user requested with tex.bind(n), it seems to be causing some issues. I didn't realize this, but according to the spec, the two are tied:
glTexImage2D specifies a two-dimensional or cube-map texture for the current texture unit, specified with glActiveTexture.
I wonder if we should keep track of the most recently bound texture unit for that instance, and bind to that? So:
tex1.setPixels(...) // binds to 0
tex1.bind(3)
tex2.bind(5)
tex1.setPixels(...) // binds to 3
A little less likely to cause issues, but it might still result in weirdness if you're working at the WebGL level. Either way would be worth one of us documenting this behaviour in the readme's setPixels() docs when/if we next have a chance :)
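A minimal sketch of the proposed tracking, written against raw WebGL calls (these helper names are illustrative, not gl-texture2d's actual internals):
// Remember the unit the user chose so later uploads can restore it.
function bindTexture (gl, texture, unit) {
  texture._boundUnit = unit
  gl.activeTexture(gl.TEXTURE0 + unit)
  gl.bindTexture(gl.TEXTURE_2D, texture.handle)
}

// texImage2D targets the *current* active unit, so re-activate the
// remembered one instead of silently stomping on unit 0.
function setPixels (gl, texture, pixels, width, height) {
  gl.activeTexture(gl.TEXTURE0 + (texture._boundUnit || 0))
  gl.bindTexture(gl.TEXTURE_2D, texture.handle)
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, pixels)
}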
|
gharchive/issue
| 2016-01-17T17:03:10 |
2025-04-01T06:40:28.360304
|
{
"authors": [
"hughsk",
"mattdesl"
],
"repo": "stackgl/gl-texture2d",
"url": "https://github.com/stackgl/gl-texture2d/issues/11",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2338292263
|
ROX-23972: Add enableNetworkPolicies option to Helm charts
Description
Add the enableNetworkPolicies flag (default true) to the ACS Operator's CentralServices and SecuredClusterServices charts. Disabling it will prevent NetworkPolicy objects from being created.
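Conceptually, each NetworkPolicy template is gated on the new flag. A hedged sketch of the pattern (simplified; the value path and policy spec here are assumptions, not the exact chart source):
{{- if ._rox.system.enableNetworkPolicies }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: central-db
  namespace: {{ .Release.Namespace }}
spec:
  podSelector:
    matchLabels:
      app: central-db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: central
{{- end }}
With helm upgrade --set system.enableNetworkPolicies=false, the guarded objects are simply not rendered, so Helm removes any existing ones on upgrade (see the testing notes below).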
Checklist
[x] Investigated and inspected CI test results
[x] Unit test and regression tests added
[x] Evaluated and added CHANGELOG entry if required
[ ] Determined and documented upgrade steps
[ ] Documented user facing changes (create PR based on openshift/openshift-docs and merge into rhacs-docs)
Testing Performed
Here I tell how I validated my change
go test -v github.com/stackrox/rox/pkg/helm/charts/tests/centralservices
go test -v github.com/stackrox/rox/pkg/helm/charts/tests/securedclusterservices
CI
Reminder for reviewers
In addition to reviewing code here, reviewers must also review testing and request further testing in case the
performed one does not seem sufficient. As a reviewer, you must not approve the change until you understand the
performed testing and you are satisfied with it.
LGTM. Did you manually test that disabling the option and then upgrading the installation removes the existing network policies?
Yes, good idea :) Now I did:
kind delete cluster
kind create cluster
kubectx kind-kind
cdrox
./bin/linux_amd64/roxctl helm output central-services --image-defaults=development_build --debug --remove
helm upgrade --install -n stackrox stackrox-central-services --create-namespace ./stackrox-central-services-chart --set central.persistence.none=true --disable-openapi-validation
k get netpol -A
NAMESPACE NAME POD-SELECTOR AGE
stackrox allow-ext-to-central app=central 2m43s
stackrox central-db app=central-db 2m43s
stackrox scanner app=scanner 2m43s
stackrox scanner-db app=scanner-db 2m43s
helm upgrade --install -n stackrox stackrox-central-services --create-namespace ./stackrox-central-services-chart --set central.persistence.none=true --disable-openapi-validation --set system.enableNetworkPolicies=false
k get netpol -A
No resources found
helm upgrade --install -n stackrox stackrox-central-services --create-namespace ./stackrox-central-services-chart --set central.persistence.none=true --disable-openapi-validation --set system.enableNetworkPolicies=true
k get netpol -A
NAMESPACE NAME POD-SELECTOR AGE
stackrox allow-ext-to-central app=central 2s
stackrox central-db app=central-db 2s
stackrox scanner app=scanner 2s
stackrox scanner-db app=scanner-db 2s
|
gharchive/pull-request
| 2024-06-06T13:38:36 |
2025-04-01T06:40:28.382774
|
{
"authors": [
"ebensh"
],
"repo": "stackrox/stackrox",
"url": "https://github.com/stackrox/stackrox/pull/11419",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1963490497
|
fix(ui): Sync FE and BE types for exception requests
Description
Updates UI expected response types to match BE type refactor in https://github.com/stackrox/stackrox/pull/8333
Checklist
[ ] Investigated and inspected CI test results
[ ] Unit test and regression tests added
[ ] Evaluated and added CHANGELOG entry if required
[ ] Determined and documented upgrade steps
[ ] Documented user facing changes (create PR based on openshift/openshift-docs and merge into rhacs-docs)
If any of these don't apply, please comment below.
Testing Performed
Repeat the testing steps for submitting deferral or false positive requests from previous PRs.
Current dependencies on/for this PR:
master
PR #8375 👈
This comment was auto-generated by Graphite.
|
gharchive/pull-request
| 2023-10-26T12:47:33 |
2025-04-01T06:40:28.387727
|
{
"authors": [
"dvail"
],
"repo": "stackrox/stackrox",
"url": "https://github.com/stackrox/stackrox/pull/8375",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1686593733
|
Claims - Inconsistency in the type of percentage properties?
I have a question regarding the types of the percentage properties of claimDetails.
percentage in defaultInterest is defined as number.
defaultCharge and discount (dayAmountItem) can be either a fixed amount or a percentage. They are defined as integer.
Is there a specific reason for this or should they all be of the type number?
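For illustration, harmonizing them on number would mean schema fragments along these lines (the property layout is a guess from the description above, not the actual schema):
{
  "defaultInterest": { "percentage": { "type": "number" } },
  "defaultCharge":   { "percentage": { "type": "number" } },
  "discount":        { "percentage": { "type": "number" } }
}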
This issue was addressed in the Claims 3.1 work done under workgroup VH-7. See the referenced comments and pull requests as appropriate.
|
gharchive/issue
| 2023-04-27T10:45:49 |
2025-04-01T06:40:28.404965
|
{
"authors": [
"danielsveins",
"kristinnstefansson"
],
"repo": "stadlar/IST-FUT-FMTH",
"url": "https://github.com/stadlar/IST-FUT-FMTH/issues/167",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
945970847
|
🛑 Jenkins Server is down
In 3ae00b4, Jenkins Server (https://jenkins01.its-telekom.eu/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Jenkins Server is back up in 5caf764.
|
gharchive/issue
| 2021-07-16T05:58:13 |
2025-04-01T06:40:28.430876
|
{
"authors": [
"stamateas"
],
"repo": "stamateas/upptime",
"url": "https://github.com/stamateas/upptime/issues/1002",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1028935642
|
⚠️ GitLab Server has degraded performance
In bc9cd72, GitLab Server (https://gitlab01.its-telekom.eu) experienced degraded performance:
HTTP code: 200
Response time: 9257 ms
Resolved: GitLab Server performance has improved in 1cfe653.
|
gharchive/issue
| 2021-10-18T10:31:48 |
2025-04-01T06:40:28.433364
|
{
"authors": [
"stamateas"
],
"repo": "stamateas/upptime",
"url": "https://github.com/stamateas/upptime/issues/1185",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
261950870
|
No quantiles test in the unit tests for exp_mod_normal_rng
Summary:
There aren't any tests that validate exp_mod_normal_rng via assert_matches_quantiles. These tests are there for all the other distributions.
Description
Current code: https://github.com/stan-dev/math/blob/develop/test/unit/math/prim/scal/prob/exp_mod_normal_test.cpp
Example _rng with test: https://github.com/stan-dev/math/blob/develop/test/unit/math/prim/scal/prob/normal_test.cpp
The issue with this is it's not easy to test that exp_mod_normal_rng is working correctly. I found this out after I introduced a bug into it in my local working branch (trying to make it a little faster -- these changes didn't get committed) and the unit tests passed.
Current Version:
v2.17.0
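For illustration, a goodness-of-fit test against the implemented CDF might look roughly like this (include paths, parameter values, and tolerances are illustrative, not taken from the test suite):
#include <stan/math/prim/scal.hpp>
#include <boost/random/mersenne_twister.hpp>
#include <gtest/gtest.h>
#include <algorithm>
#include <vector>

TEST(ProbDistributionsExpModNormal, rngMatchesCdf) {
  boost::random::mt19937 rng;
  int N = 10000;
  std::vector<double> draws(N);
  for (int i = 0; i < N; ++i)
    draws[i] = stan::math::exp_mod_normal_rng(2.0, 1.0, 0.5, rng);
  std::sort(draws.begin(), draws.end());
  // The empirical quantile at probability p should sit where the
  // analytic CDF reaches p.
  for (double p : {0.1, 0.25, 0.5, 0.75, 0.9}) {
    double q = draws[static_cast<int>(p * N)];
    EXPECT_NEAR(p, stan::math::exp_mod_normal_cdf(q, 2.0, 1.0, 0.5), 0.02);
  }
}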
Thanks for filing the issue. The CDFs are implemented, so it shouldn't be hard to add them.
P.S. I assigned to you (@bbbales2), but if you don't want to deal with it, please unassign yourself.
I believe this was fixed in #833.
|
gharchive/issue
| 2017-10-01T22:13:41 |
2025-04-01T06:40:28.474101
|
{
"authors": [
"bbbales2",
"bob-carpenter",
"mcol"
],
"repo": "stan-dev/math",
"url": "https://github.com/stan-dev/math/issues/635",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
539474159
|
Generic var templates for operators and std::iterator_traits var/fvar specialization
Summary
This pulls out / cleans up some of the stuff in the complex branch. Namely
Pulls out and tidies up the templates that are added to the var operators (sketched just after this list)
Adds std::iterator_traits specializations for var and fvar
Adds cmath member functions to the stan::math namespace for autodiff types
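The operator pattern from the first bullet might look roughly like this (a sketch: the template header matches the diff fragments quoted in the review below, but the body here is illustrative):
// Accepts var&, const var&, and var&& via deduced (universal) references,
// constrained to var arguments by the require_all_var_t alias.
template <typename Var1, typename Var2, require_all_var_t<Var1, Var2>...>
inline var operator+(Var1&& a, Var2&& b) {
  return var(new add_vv_vari(a.vi_, b.vi_));  // body illustrative
}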
Tests
@bob-carpenter do we need additional tests for the iterators?
Side Effects
std will have std::iterator_traits specializations for var and fvar
Checklist
[x] Math issue #123
[x] Copyright holder: Steve Bronder
The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder is agreeing to the license the submitted work under the following licenses:
- Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
- Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
[x] the basic tests are passing
unit tests pass (to run, use: ./runTests.py test/unit)
header checks pass, (make test-headers)
docs build, (make doxygen)
code passes the built in C++ standards checks (make cpplint)
[x] the code is written in idiomatic C++ and changes are documented in the doxygen
[x] the new changes are tested
@bob-carpenter ready for review!
@SteveBronder Should I really be reviewing this given that I wrote some of it?
Good point! Would anyone else mind taking a look at this?
OK. I'll just go ahead and review today or tomorrow---it will have two of us looking at all the code.
I've read through this mainly because I happened to see how you dealt with std::isinf(), which hit me in #1545 (and I had worked around it in not such a nice way). It turned out that I think I've learned a lot of new things, so it was time well spent.
Glad to hear and thanks for looking this over!
As you'll see, most of comments concern the ordering of typenames that doesn't match the ordering of function arguments. Of course that's not a big deal, so I leave it up to you what do about that.
I think that's v reasonable to fix up
I totally skipped stan/math/rev/core/std_iterator_traits.hpp because I can't even pretend to know what use it has, and also stan/math/rev/scal/fun/pow.hpp because I haven't cracked std::forward yet. So you should get a second pair of more experienced eyes on this! :)
That's fair, after snooping around online the link below makes me think they are much harder to implement than I originally thought. I'll probably have to rework these
https://github.com/tzlaine/stl_interfaces
OK. I'll just go ahead and review today or tomorrow---it will have two of us looking at all the code. This is going to take some reviewing and I have to head out soon, so I can get to it tomorrow.
Worst case we had 3 sets of eyes on it now (thanks again @mcol!) so I think that's fine
looking at cmath it looks like all these are in the cmath header? For clarification you want signbit, isinf, isfinite, isnan, copysign, and isnormal to be in their own header files?
Sorry, but yes. That's how we've laid out the other cmath function overloads.
lol wugh aight I'll get to this. They also need tests
+using require_all_autodiff_t = require_all_t<is_autodiff...>;
Now that I see what they're doing I think these should be enable_if_all_autodiff_t.
... I've leaned towards not adding docs, since aliases are inlined in the docs and you can see the full definition
Nice. So that'll follow cppreference's definition, which is OK by me.
var is just a pointer to vari, so it can be copied by value efficiently. An fvar is just two Ts held by value, not reference. So I don't see anything to move anywhere.
On Dec 30, 2019, at 1:53 PM, Steve Bronder notifications@github.com wrote:
@SteveBronder commented on this pull request.
In stan/math/rev/core/operator_addition.hpp:
@param a First variable operand.
@param b Second variable operand.
@return Variable result of adding two variables.
*/
-inline var operator+(const var& a, const var& b) {
+template <typename Var1, typename Var2, require_all_var_t<Var1, Var2>...>
The direct answer to this is that a function signature with const var&& would only accept constant rvalue types. The Var1&& and Var2&& in the function signature are universal references. Combined with require_all_var_t the operators here accept var&, const var&, and var&& (they could accept const var&& though idt const rvalues are a thing). It allows the caller (callee?) to do something like auto the_sign = copysign(std::forward(an_fvar)); to move a fvar, for example, when the call site can tell a_fvar is movable.
The longer indirect answer is that if we want "universal references" in function signatures for perfect forwarding then you can't have any declaration specifiers alongside the type specifier for each function parameter.
Would we ever allow a var to be moved?
I think I get what perfect forwarding is doing after reading a couple explanations. I should've thought through why T&& would be more general than var&&---I always forget that sometimes T matches more than just a base type---it picks up qualifiers.
I don't think we ever need to pass non-constant references for autodiff variables or move them. Where would the extra generality get used? Or the other way around, is there an example where (const var&) as an argument for a var won't work?
fvar is 16 bytes and fvar<fvar> is 32 bytes whether T is var or double. But everything lives on the function call stack, so I'm not seeing what can get usefully forwarded.
On Dec 30, 2019, at 2:10 PM, Steve Bronder notifications@github.com wrote:
@SteveBronder commented on this pull request.
In stan/math/rev/core/operator_equal.hpp:
@param a First variable.
@param b Second variable.
@return True if the first variable's value is the same as the
second's.
*/
-inline bool operator==(const var& a, const var& b) {
+template <typename Var1, typename Var2, require_all_var_t<Var1, Var2>...>
I agree but let's flag this as an issue and do it in another PR once this goes through.
On a side note, do we think of fvar's as a "heavy" type? we may want specializations that forward for fvar's with vars or fvar's inside
Yep! I think with all these it would be good to do them as a seperate PR else this is going to blast off into like a 800-1000 line change PR
Agreed.
I can help write the tests if you want to give me some or all of them to do.
Actually I think the only ones we need testing for are
iterator_traits for var / fvar (? idk if this needs tests but we could check we can call and declare it with types we expect
ad types, and arithmetic for
signbit
isinf
isfinite
isnan
isnormal
copysign
I can probs get to them this weekend if you can do it earlier then :+1:
Now that I see what they're doing I think these should be enable_if_all_autodiff_t.
I'll make a seperate issue for this
I don't think we ever need to pass non-constant references for autodiff variables or move them. Where would the extra generality get used? Or the other way around, is there an example where (const var&) as an argument for a var won't work?
Tadej and I had a big discussion on ref vs const ref for Eigen matrices here. There's also this stackoverflow post I found interesting talking about casting rvalue's to const lvalue's in function calls.
For var I think it might be a different story. Though if you look at fvar's inverse function in mat it has these eigen matrix fvars multiplied together.
m_deriv = multiply(multiply(m_inv, m_deriv), m_inv);
I think the above stackoverflow's rvalue / const lvalue example would apply here
I have to think about this. var seems to be trivially copyable so idk. If var ever became non-trivial then these could matter.
Do non-const references hurt in some way?
I'll look over the C++ templates book tmrw, section 7 has a bunch of stuff about passing ref, const ref, etc. stuff
fvar is 16 bytes and fvar<fvar> is 32 bytes whether T is var or double. But everything lives on the function call stack, so I'm not seeing what can get usefully forwarded.
I'm not following why everything sitting on the function call stack makes their sizes not relevant
I'm not following why everything sitting on the function call stack makes their sizes not relevant
Because there's no memory allocated with malloc to steal.
Compare this with the behavior of std::vector, which allocates memory using malloc and frees it in the destructor. When you deep copy a standard vector, it requires a malloc and copy, whereas when you move it's just an assignment.
With var, there's nothing in memory to be moved. If I write
var a = 3.2;
A var is just a struct containing a vari*. So a copy is just copying a pointer.
An fvar is just a struct containing two T type objects. Again no malloc going on and there's no difference between a copy and a deep copy.
Because there's no memory allocated with malloc to steal.
gah! brainfart alright I'll swap these back (though I'd like to leave the templates for Arith)
I think the templates for arithmetic make sense---it cuts out a lot of code.
Is this ready to review again?
Not yet! Sorry gonna try to get around to to tonight. Didn't have time to write tests yet for the new functions. I'll ping you when this is ready for review
The problem now is that the test server is running out of memory. @serban-nicusor-catena ---should we just restart the tests or does something else need to happen?
g++ -std=c++1y -m64 -D_REENTRANT -Wall -Wno-unused-function -Wno-uninitialized -Wno-unused-but-set-variable -Wno-unused-variable -Wno-sign-compare -Wno-unused-local-typedefs -I lib/stan_math/lib/tbb_2019_U8/include -O3 -I src -I . -I lib/stan_math/ -I lib/stan_math/lib/eigen_3.3.3 -I lib/stan_math/lib/boost_1.72.0 -I lib/stan_math/lib/sundials_5.1.0/include -I lib/stan_math/lib/gtest_1.8.1/include -I lib/stan_math/lib/gtest_1.8.1 -D_USE_MATH_DEFINES -DBOOST_DISABLE_ASSERTS -c src/test/unit/lang/parser/assignment_statement_test.cpp -o test/unit/lang/parser/assignment_statement_test.o
cc1plus.exe: out of memory allocating 2097112 bytes
cc1plus.exe: out of memory allocating 1442192 bytes
make/tests:13: recipe for target 'test/unit/lang/generator/generate_idxs_test.o' failed
mingw32-make: *** [test/unit/lang/generator/generate_idxs_test.o] Error 1
mingw32-make: *** Waiting for unfinished jobs....
make/tests:13: recipe for target 'test/unit/lang/generator/generate_cpp_test.o' failed
mingw32-make: *** [test/unit/lang/generator/generate_cpp_test.o] Error 1
Argh, Nic did apply this fix https://www.intel.com/content/www/us/en/programmable/support/support-resources/knowledge-base/embedded/2016/cc1plus-exe--out-of-memory-allocating-65536-bytes.html but it seems to not do the trick.
It only occurs occasionally on the recently added Windows machine (running in our lab in Ljubljana). Let me know @serban-nicusor-toptal if I can do something you cant remotely. Will also buy some additional RAM.
I restarted the upstream tests https://jenkins.mc-stan.org/blue/organizations/jenkins/Math Pipeline/detail/PR-1525/30/pipeline
If you only restart a stage Github doesnt show the yellow dot but if it will pass it will go green.
Thanks for the help here @rok-cesnovar
Ram and swap are more than enough I think so it may be a software configuration which doesn't let it allocate more maybe ... I'll look more into it tonight.
Yatta! The fully-templated version works. This is useful, as I'll be able to apply it elsewhere, where I've been having trouble with unwanted promotion of primitives to var.
@SteveBronder or @rok-cesnovar : would one of you review my changes to pow() here---nothing else has changed since it's been reviewed. I also added tests for instantiating pow.
I approved the changes I was waiting for.
Sorry for the delay I'll take a look at this today
|
gharchive/pull-request
| 2019-12-18T06:15:27 |
2025-04-01T06:40:28.514925
|
{
"authors": [
"SteveBronder",
"bob-carpenter",
"rok-cesnovar",
"serban-nicusor-toptal"
],
"repo": "stan-dev/math",
"url": "https://github.com/stan-dev/math/pull/1525",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1997882501
|
update psis to not add back the max and return the unnormalized resample ratios
Fixes #3241 by just removing the line that adds back the max log likelihood ratio
Submission Checklist
[x] Run unit tests: ./runTests.py src/test/unit
[x] Run cpplint: make cpplint
[x] Declare copyright holder and open-source license: see below
Summary
Intended Effect
How to Verify
Should we have tests for this? This just seemed like a numeric bug inside of the function so I'm not sure if / how to test this
Side Effects
Documentation
Copyright and Licensing
Please list the copyright holder for the work you are submitting (this will be you or your assignee, such as a university or company): Flatiron Institute
By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:
Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Fixes https://github.com/stan-dev/stan/issues/3241 by just removing the line that adds back the max log likelihood ratio
it seems you also dropped the normalization, which is ok, if boost is accepting unnormalized weights
it seems you also changed how the truncation is done, by using a one-liner instead of for loop, and as I'm not familiar with the syntax, I'm not able to verify that it does the same thing, but I assume you know it's the same
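For reference, a one-liner truncation of the kind mentioned could be sketched like this (names illustrative, not the PR's actual code):
#include <algorithm>
#include <vector>

void truncate_log_weights(std::vector<double>& lw, double log_cutoff) {
  // Cap each log ratio at the cutoff, replacing an explicit for loop.
  std::transform(lw.begin(), lw.end(), lw.begin(),
                 [log_cutoff](double w) { return std::min(w, log_cutoff); });
}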
|
gharchive/pull-request
| 2023-11-16T22:14:00 |
2025-04-01T06:40:28.531886
|
{
"authors": [
"SteveBronder",
"avehtari"
],
"repo": "stan-dev/stan",
"url": "https://github.com/stan-dev/stan/pull/3243",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
516182409
|
Adding FixedValueRule class
This adds a class for fixed value rules in the mold of the current existing Rule classes.
I'm on board with this, and agree with Chris's assessment!
|
gharchive/pull-request
| 2019-11-01T15:28:06 |
2025-04-01T06:40:28.566869
|
{
"authors": [
"kjmahalingam",
"ngfreiter"
],
"repo": "standardhealth/sushi",
"url": "https://github.com/standardhealth/sushi/pull/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1501078791
|
Add support for sharding using Slurm job arrays
In the short term, we want to run write_run_display_json on multiple nodes.
Addresses #1260
Abandoning this for a few of reasons:
The use of hash is incorrect because of hash randomization; we should be using a stable hash function instead (see the sketch after this list).
The helper job submission script in the Stanford NLP SLURM cluster does not support job arrays.
I would prefer to have a sharding mechanism that isn't tied to SLURM, and works in other environments as well.
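A process-stable hash of the kind the first point calls for could be sketched like this (illustrative only):
import hashlib

def stable_shard(key: str, num_shards: int) -> int:
    """Deterministically map a key to a shard, stable across processes.

    Python's built-in hash() is randomized per interpreter run
    (PYTHONHASHSEED), so it cannot be used to assign work to shards.
    """
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards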
|
gharchive/pull-request
| 2022-12-17T01:22:44 |
2025-04-01T06:40:28.590555
|
{
"authors": [
"yifanmai"
],
"repo": "stanford-crfm/helm",
"url": "https://github.com/stanford-crfm/helm/pull/1261",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2055534303
|
about the ranking operation
Hi, @okhat
Thanks for the great repo! I have a hard time understanding the code for reranking. Especially in this section: https://github.com/stanford-futuredata/ColBERT/blob/706a7265b06c6b8de1e3236294394e5ada92134e/colbert/ranking/index_ranker.py#L56C7-L112
I have searched the relevant github issue and I understand these code are for efficiency from this issue.
But is there relevant documentation somewhere to help me better understand this? It seems that it first turns the 2D embeddings into 3D embeddings with different strides (108 and 180) for matrix multiplication. But I don't get why we need this stride parameter. Why couldn't we just do something like this:
load all embeddings and corresponding doclens
get the embedding per passage based on the pids
padding them to the same length for matrix multiplication
maxsim operation and select the topk
After digging into the code, I think I got the logic behind it. The stride is used to partition the documents into buckets based on their length (if I understand it correctly). In the actual implementation of ColBERTv1, it splits the documents into two buckets: one for documents whose lengths fall within the 90th percentile of the collection (which is 108), and one for the rest.
This saves flops in the matching process: scores = (D @ group_Q) * mask.unsqueeze(-1)
BTW, for anyone who is also curious about the stride operation, I hope this would help:
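A rough sketch of the idea in code (illustrative, not ColBERT's actual implementation; the stride values come from the discussion above):
import torch

def bucket_by_stride(doclens, strides=(108, 180)):
    # Put each passage in the first length bucket that fits it; in
    # ColBERTv1, roughly 90% of passages fit the short stride.
    buckets = {s: [] for s in strides}
    for pid, length in enumerate(doclens):
        stride = next((s for s in strides if length <= s), strides[-1])
        buckets[stride].append(pid)
    return buckets

def maxsim(D_padded, mask, Q):
    # D_padded: (num_docs, stride, dim); mask: (num_docs, stride);
    # Q: (num_query_tokens, dim). Padding rows are zeroed by the mask.
    scores = (D_padded @ Q.t()) * mask.unsqueeze(-1)
    # Max over document tokens, then sum over query tokens (MaxSim).
    return scores.max(dim=1).values.sum(dim=-1)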
|
gharchive/issue
| 2023-12-25T09:02:21 |
2025-04-01T06:40:28.595155
|
{
"authors": [
"Hannibal046"
],
"repo": "stanford-futuredata/ColBERT",
"url": "https://github.com/stanford-futuredata/ColBERT/issues/281",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2225239096
|
fix(dspy): add copy to AzureOpenAI to propagate api_(key|version|...)
Fixes issue #591 without re-creating issue #543
The problem was that AzureOpenAI.copy used LM.copy, but because api_key etc. aren't in the AzureOpenAI.kwargs, and LM.copy per default only propagates self.__class__.kwargs.
I now gave AzureOpenAI its own copy-method. This isn't really beautiful, but it seems to work pretty well.
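Roughly, the shape of the fix (the attribute names here are assumptions based on the PR description, not necessarily dspy's actual fields):
class AzureOpenAI(LM):
    def copy(self, **kwargs):
        kwargs = {**self.kwargs, **kwargs}
        # api_base/api_version/api_key live outside self.kwargs, so the
        # generic LM.copy drops them; re-pass them explicitly instead.
        return self.__class__(
            api_base=self.api_base,
            api_version=self.api_version,
            api_key=self.api_key,
            **kwargs,
        )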
Thanks @snimu !
|
gharchive/pull-request
| 2024-04-04T11:34:11 |
2025-04-01T06:40:28.597318
|
{
"authors": [
"arnavsinghvi11",
"snimu"
],
"repo": "stanfordnlp/dspy",
"url": "https://github.com/stanfordnlp/dspy/pull/769",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
123244611
|
SDR.handleImages problem - no screenshots in log from Android 6.0+ devices
Hello, lately I've been fighting with Spoon being unable to create complete *.html log for Android 6.0+ devices. I can't fully understand/localise the problem so I want to share my thoughts, get your opinion and make it work.
Specify the problem
1. Run tests on device with Android 6.0+ from terminal with usage of ./gradlew spoon task.
2. To take your screenshots, use the Spoon-provided method Spoon.screenshot(Activity activity, String tag)
3. Wait for tests to finish.
4. You can see log like that:
(...)
2015-12-18 15:58:00 [STRL.testRunEnded] elapsedTime=408847
03:58:00 I/XmlResultReporter: XML test result file generated at /Users/F1sherKK/Dev/myapp-Android/app/build/spoon-log/normal/debugRelease/junit-reports/05f3785c3444f1bf.xml. Total tests 32, failure 1, passed 31,
2015-12-18 15:58:00 [SDR.run] About to grab screenshots and prepare output for [05f3785c3444f1bf]
2015-12-18 15:58:00 [SDR.pullDirectory] Internal path is /data/data/com.myapp.sendmoney.debug1/app_spoon-screenshots
2015-12-18 15:58:00 [SDR.pullDirectory] External path is /sdcard/app_spoon-screenshots
2015-12-18 15:58:00 [SDR.pullDirectory] Pulling files from external dir on [05f3785c3444f1bf]
2015-12-18 15:58:05 [SDR.pullDirectory] Pulling files from internal dir on [05f3785c3444f1bf]
2015-12-18 15:58:06 [SDR.pullDirectory] Done pulling app_spoon-screenshots from on [05f3785c3444f1bf]
2015-12-18 15:58:06 [SDR.pullDirectory] Internal path is /data/data/com.myapp.sendmoney.debug1/app_spoon-files
2015-12-18 15:58:06 [SDR.pullDirectory] External path is /sdcard/app_spoon-files
2015-12-18 15:58:06 [SDR.pullDirectory] Pulling files from external dir on [05f3785c3444f1bf]
2015-12-18 15:58:06 [SDR.pullDirectory] Pulling files from internal dir on [05f3785c3444f1bf]
2015-12-18 15:58:06 [SDR.pullDirectory] Done pulling app_spoon-files from on [05f3785c3444f1bf]
2015-12-18 15:58:06 [SDR.handleImages] Moving screenshots to the image folder on [05f3785c3444f1bf]
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.flowtests.NotificationCenterActivity.NotificationCenterActivityFunctionTest#assertReferralPopUpWillAppear
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.flowtests.NotificationCenterActivity.NotificationCenterActivityFunctionTest#assertReferralPopUpWillHide
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerServiceNavigator#addMockItem_newMessage
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerService
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerServiceNavigator#clickOnBackButton_returnToNotificationCenter
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerServiceNavigator#clickOnBackButton_returnToNotificationCenter
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerServiceNavigator#clickOnBackButton_returnToNotificationCenter
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerServiceNavigator#clickOnBackButton_returnToNotificationCenter
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerServiceNavigator#clickOnItem_referralType
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.General.GeneralNavigator#sendNotification_addSampleMessage
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.InviteFriendsActivity.InviteFriendsNavigator#clickOnBackButton_returnToCustomerService
(...)
And html log from tests lacks screenshots.
Example: [log without screenshots](http://s8.postimg.org/ojirb2rc5/Screen_Shot_2015_12_21_at_10_56_54.png)
Fix tries and observations
As we know, since Marshmallow the way we have to deal with permissions changed a bit. I thought that a lack of the
WRITE_EXTERNAL_STORAGE
or
READ_EXTERNAL_STORAGE
permissions might have caused Spoon to malfunction. So I created this small shell script, which should grant those two permissions to the device from the console:
# Grant runtime storage permissions on API 23+ (Marshmallow) before running Spoon
SDK=`adb shell getprop ro.build.version.sdk | tr -d '\r'`
if (( "$SDK" >= 23 )) ; then
adb shell pm grant com.myapp.sendmoney.debug1 android.permission.WRITE_EXTERNAL_STORAGE
adb shell pm grant com.myapp.sendmoney.debug1 android.permission.READ_EXTERNAL_STORAGE
fi
./gradlew spoon
So I granted the permissions to my device. I ran the tests, but the result was the same; nothing changed. So if my device has permission to pull the screenshots, I thought that maybe there are no screenshots. I checked, and the screenshots are indeed created on the device during the test and successfully pulled from the device after the test, but not inserted into the HTML.
So now I would like to find out - where is the difference between pre 6.0 devices and the 6.0+ ones. Where lies the problem? Did anyone of you faced/fixed it? Any thoughts what could be done?
We're facing the exact same problem right now and are probing for a solution without success so far. We'll post here if we find something.
Seems like the problem got fixed after the spoon plugin got updated to:
com.stanfy.spoon:spoon-gradle-plugin:1.0.4
yep, should be fixed now
closing the ticket
|
gharchive/issue
| 2015-12-21T09:55:25 |
2025-04-01T06:40:28.609518
|
{
"authors": [
"FisherHUB",
"nicolasgramlich",
"roman-mazur"
],
"repo": "stanfy/spoon-gradle-plugin",
"url": "https://github.com/stanfy/spoon-gradle-plugin/issues/90",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2407200734
|
Favourites, history or progress
Summary
A quick way to access shows being watched from the home screen, rather than searching at every app start. This could be in the form of a favourites list, a watch history, or a Progress list from Trakt. Once the app starts to the home screen, it shouldn't take more than 2 clicks to continue watching a show!
Please confirm the following
[X] I have searched the existing issues and this is a new ticket, NOT a duplicate or related to another open issue.
Already implemented: when you open a movie or TV show, you can favorite it with a button near the "Watch" button
|
gharchive/issue
| 2024-07-13T23:57:15 |
2025-04-01T06:40:28.614269
|
{
"authors": [
"Axanderism",
"stantanasi"
],
"repo": "stantanasi/streamflix",
"url": "https://github.com/stantanasi/streamflix/issues/136",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1359630635
|
[Feature] Public Native name_of function
Pull request type
Please check the type of change your PR introduces:
[ ] Bugfix
[x] Feature
[ ] Code style update (formatting, renaming)
[ ] Refactoring (no functional changes, no api changes)
[ ] Build related changes
[ ] Documentation content changes
[ ] Other (please describe):
public native fun name_of<TokenType: store>(): (address, vector<u8>, vector<u8>);
Is the store constraint in this native method required? @jolestar
native fun name_of<TokenType: store>(): (address, vector<u8>, vector<u8>);
It shouldn't be required.
Then I'll remove it first.
I've merged this; @lerencao @baichuan3 @0xpause, if there are any problems, please raise them separately.
|
gharchive/pull-request
| 2022-09-02T02:33:51 |
2025-04-01T06:40:28.622668
|
{
"authors": [
"WGB5445",
"jolestar"
],
"repo": "starcoinorg/starcoin-framework",
"url": "https://github.com/starcoinorg/starcoin-framework/pull/107",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1966941974
|
Conflict with GenesisBook
Could you please just fix the 4D pocket... It seems some other API conflicts with GenesisBook.
Please provide the error message, otherwise I don't know where to fix it.
|
gharchive/issue
| 2023-10-29T11:01:55 |
2025-04-01T06:40:28.627848
|
{
"authors": [
"WindKn",
"starfi5h"
],
"repo": "starfi5h/DSP_Mod_Support",
"url": "https://github.com/starfi5h/DSP_Mod_Support/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1011131148
|
Stargate fails to start because of too many open files
I am running Stargate 1.0.31 in Kubernetes with K8ssandra. The Stargate image used is stargateio/stargate-3_11:v1.0.31. In one of our automated tests we have seen Stargate fail to start a few times with this a resource limit error like this:
INFO [main] 2021-09-28 21:25:10,597 AbstractConnector.java:331 - Started Server@29862f53{HTTP/1.1, (http/1.1)}{0.0.0.0:8082}
INFO [main] 2021-09-28 21:25:10,598 Server.java:415 - Started @80358ms
INFO [main] 2021-09-28 21:25:10,599 BaseActivator.java:185 - Started restapi
Finished starting bundles.
Unexpected error: java.io.IOException: User limit of inotify instances reached or too many open files
java.lang.RuntimeException: java.io.IOException: User limit of inotify instances reached or too many open files
at io.stargate.starter.Starter.watchJarDirectory(Starter.java:539)
at io.stargate.starter.Starter.start(Starter.java:441)
at io.stargate.starter.Starter.cli(Starter.java:619)
at io.stargate.starter.Starter.main(Starter.java:660)
Caused by: java.io.IOException: User limit of inotify instances reached or too many open files
at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
at io.stargate.starter.Starter.watchJarDirectory(Starter.java:526)
... 3 more
This is in a CI environment with limited cpu/memory resources. The test is running in the free tier runner in GitHubActions. The runner vm has 2 cpus and 7 GB memory. The particular test in which this failed had already deployed two Cassandra nodes and one Stargate node. This failure is from the second Stargate node.
I believe the open file limit on the vm is set to 65536. I don't think I am able to increase it. Maybe the solution is to run my tests in an environment with more resources, but it would be nice if Stargate could be less demanding, especially considering this happens on startup.
Huh, well that's a new one. We run some things in the GitHub actions free tier as well without a problem, granted it is with fewer nodes.
The odd thing is that file descriptors shouldn't be heavily consumed until the services start taking traffic. In a resource constrained environment I'd expect the error you're seeing to occur under load rather than on start up.
We can take a look at dropwizard to see if there's anything we can tune. Although I wonder if your runner had a noisy neighbor?
We have only seen this 2 or 3 times so maybe it is noisy neighbors.
@jsanda @Miles-Garnsey Maybe what we can do is attempt these tests on the self-hosted runner we're going to set up, keep the file descriptor limits at the defaults, and see if we run into this problem there; that would help rule out the noisy neighbor problem?
The file limit error has only happened a few times. We can run the test N times on a self-hosted runner without the error happening. That doesn't mean it won't happen, but it does give increased confidence. We have been deploying nodes with heaps configured as low as 256 MB and 384 MB. Surprisingly that works fine a lot of the time, but we have issues too often. The issues are not limited to this open file limit error. The situation is like a game of Jenga :)
https://media.giphy.com/media/PlnQNcQ4RYOhG/giphy.gif
@dougwettlaufer @jsanda How about creating a small fix here by adding an --enableBundlesWatch flag which would be false by default, so that you can opt in to the watching of bundles... or vice versa.
Doug, I think we don't need bundles watching by default, but this might be a breaking change. So we could also go with --disableBundlesWatch, keep the current behavior by default, but give anybody an option to avoid it.
I'm just curious here, what is the bundle watching used for @ivansenic ?
With OSGi you can replace bundles during the runtime. Meaning you can paste a new version of a jar to the folder we are watching and you would in runtime update that specific bundle with new version.
Makes sense, I guess I was more wondering if that's something that is often done with Stargate? Just trying to gauge for example, is that an option we'd want to see exposed through K8ssandra or would it be sufficient just to turn it off by default when deployed through K8ssandra.
If you ask me, and you do :smile:, I would say it should be turned off in Kubernetes. I mean this is old tech, developed for monoliths and actually this bundle reloading was a way to achieve something you would nowadays do in the cloud. You have a new version, no problem, deploy. In fact, that's the whole benefit of the cloud-native development that you can deploy as much times as you want.
Right, watching the directory is to enable the hot-reload use case which really doesn't apply in the cloud. How about we add the --disableBundlesWatch and try-catch @ivansenic? That way we can avoid the error from ever happening with the flag and if it isn't set then avoid completely breaking with the try-catch.
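A sketch of what that could look like (the flag name comes from this thread; the annotation style, logger, and surrounding code are assumptions, not Stargate's actual source):
@Option(name = "--disableBundlesWatch",
        description = "Disable hot-reloading of bundles from the watched directory")
private boolean disableBundlesWatch = false;

private void maybeWatchJarDirectory(String dir) {
    if (disableBundlesWatch) {
        return;  // hot reload rarely applies in Kubernetes deployments
    }
    try {
        watchJarDirectory(dir);  // method from the stack trace above
    } catch (Exception e) {
        // Degrade gracefully instead of aborting startup when inotify or
        // file-descriptor limits are exhausted.
        logger.warn("Bundle watching disabled: " + e.getMessage());
    }
}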
|
gharchive/issue
| 2021-09-29T15:49:19 |
2025-04-01T06:40:28.639330
|
{
"authors": [
"dougwettlaufer",
"ivansenic",
"jdonenine",
"jsanda"
],
"repo": "stargate/stargate",
"url": "https://github.com/stargate/stargate/issues/1286",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2634456505
|
feat(blockifier): add secp logic
This PR adds the logic for secp in preparation of the native syscalls.
The code has been refactored such that the implementation is shared between the native and non-native syscalls.
This change is
crates/blockifier/src/execution/native/syscall_handler.rs line 450 at r1 (raw file):
Previously, meship-starkware (Meshi Peled) wrote…
Can we move these tests to the secp test? I am not sure this is the best solution, but I don't like the tests to be part of the syscall handler file.
These can be moved to somewhere else, but that would require making Secp256Point public.
crates/blockifier/src/execution/syscalls/secp.rs line 40 at r13 (raw file):
pub fn secp_mul(&mut self, request: SecpMulRequest) -> SyscallResult<SecpMulResponse> {
let ep_point = self.get_point_by_id(request.ec_point_id)?;
let result = *ep_point * Curve::ScalarField::from(request.multiplier);
@ilyalesokhin-starkware
?
Suggestion:
let ec_point = self.get_point_by_id(request.ec_point_id)?;
let result = *ec_point * Curve::ScalarField::from(request.multiplier);
|
gharchive/pull-request
| 2024-11-05T05:30:00 |
2025-04-01T06:40:28.647476
|
{
"authors": [
"avi-starkware",
"reviewable-StarkWare",
"xrvdg"
],
"repo": "starkware-libs/sequencer",
"url": "https://github.com/starkware-libs/sequencer/pull/1813",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
583207719
|
PCM_DPS result column order changed in order to match with JHU result
fixed result column order and added Last_Update_Date in UTC time zone to it
Please do Cells -> 'All outputs' -> 'Clear' in the notebook before you commit
|
gharchive/pull-request
| 2020-03-17T18:18:48 |
2025-04-01T06:40:28.650377
|
{
"authors": [
"Atsidir",
"tfoldi"
],
"repo": "starschema/COVID-19-data",
"url": "https://github.com/starschema/COVID-19-data/pull/25",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2729101963
|
Relabel the infected state as Infected
The infection class currently has:
ss.State('infected', label='Infectious')
This is fine for SIR, but modules that inherit from Infection are stuck with the infected state being labeled as Infectious, when often there's a non-infectious-but-infected latent period.
Child classes could replace this state, as SIR already does, but I expect most will want the infected state to be "Infected" rather than "Infectious."
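For instance (a sketch: only ss.State comes from this issue, and wiring the states into a module is omitted), a disease with a latent period would want both states distinguished:
import starsim as ss

infected   = ss.State('infected',   label='Infected')    # includes latent cases
infectious = ss.State('infectious', label='Infectious')  # actively transmitting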
I think this is probably better, but I can see an argument both ways. "Infectious" is the standard (https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology), and, as you said, anyone who wants to overwrite the default easily can. By default, infection.infected and infected.infectious are synonyms (via the infectious property), so I think a case could be made for either.
Since infected is a superset of infectious, and many diseases don't have equality, I'd recommend the relabeling. Also then the state name and label align.
|
gharchive/issue
| 2024-12-10T05:56:28 |
2025-04-01T06:40:28.787001
|
{
"authors": [
"cliffckerr",
"daniel-klein"
],
"repo": "starsimhub/starsim",
"url": "https://github.com/starsimhub/starsim/issues/822",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
670238349
|
Audit SignState Usage
Profiling the pong server has revealed that we spend a lot of time in SignState. We should audit our calls to SignState to make sure we're only calling when necessary.
See the flamegraph:
https://statechannels.slack.com/files/UMXGSF1EY/F017WR84PD4/flamegraph.html
I think we can get a quick win by only calling hash state once per state?
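A sketch of that memoization (type and function names are assumptions, not the repo's actual API):
declare function hashState(state: object): string; // the expensive call

const hashCache = new WeakMap<object, string>();

function hashStateOnce(state: object): string {
  let hash = hashCache.get(state);
  if (hash === undefined) {
    hash = hashState(state); // computed at most once per state object
    hashCache.set(state, hash);
  }
  return hash;
}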
This has been done!
|
gharchive/issue
| 2020-07-31T21:11:46 |
2025-04-01T06:40:28.845859
|
{
"authors": [
"andrewgordstewart",
"lalexgap"
],
"repo": "statechannels/statechannels",
"url": "https://github.com/statechannels/statechannels/issues/2406",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1761448251
|
internal/document: migrate to TOML
Introduces attributeParser internal interface to deal with attribute parsing.
Tries to parse as TOML first, then falls back to our old parsing language written by @adambabik ("babikML").
In order to use double quotes rather than single, I had to use the old version of pelletier's go-toml (v1 rather than v2). We can use v2, I just personally thought double quotes are more common for toml.
There's some jankiness around inline toml, since toml is primarily a multiline format, but supports inline maps for nested attributes. So for parsing, I had to wrap the parsed string like so:
attrs={ <provided attributes> }
And for serialization, I had to join the lines back into a single string.
This is all in attributes.go - I would appreciate a more thorough review if possible, since it would be pretty bad if this broke.
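The fallback chain could be pictured like this (the attributeParser name is from this PR; the method signature and helper are assumptions):
type attributeParser interface {
	Parse(raw string) (map[string]string, error)
}

// Try TOML first, then fall back to the legacy parser.
func parseAttributes(raw string, parsers ...attributeParser) (map[string]string, error) {
	var firstErr error
	for _, p := range parsers {
		if attrs, err := p.Parse(raw); err == nil {
			return attrs, nil
		} else if firstErr == nil {
			firstErr = err
		}
	}
	return nil, firstErr
}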
Bye bye 👋 BabikML, you served us well!
What was the reasoning behind TOML? It requires unmaintained TOML v1 and additional operations after parsing and writing. Why not JSON?
I guess, we could do JSON as long as we curb or restrain the use of nesting objects
Will be switching to JSON
|
gharchive/pull-request
| 2023-06-16T22:50:06 |
2025-04-01T06:40:28.849187
|
{
"authors": [
"mxsdev",
"sourishkrout"
],
"repo": "stateful/runme",
"url": "https://github.com/stateful/runme/pull/308",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1347964142
|
Improvements on project item references
fixes #126
This patch implements a set of improvements to the linking mechanics of project items (todos, clipboard items and notes). It implements a file watcher logic that is able to update references when line changes. It also consolidates the item links into a single React component to remove duplicate code.
Todo:
[x] enhance unit tests for new components
[ ] write some e2e tests to ensure functionality is given
Created a todo from the context menu inside the editor and got this when I overwrote the changes (stash pop) and clicked on the linkage icon in the widget:
Using a filewatcher is clever. I had a plan once to use a fuzzy text collation algo (levenshtein distance or simhash) to infer that a todo comment was edited / moved. Maybe that's the next level here. Endless amounts of strategies to do some work ahead of time to make scanning every line in the file fast. However, probably good call to keep it simple.
However, probably good call to keep it simple.
Adding the levenshtein distance algo shouldn't be difficult nor complex, thanks for the tip.
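For reference, a minimal Levenshtein distance in TypeScript (a generic textbook sketch, not tied to the Marquee codebase):
function levenshtein(a: string, b: string): number {
  const dp: number[][] = [];
  for (let i = 0; i <= a.length; i++) {
    dp.push(new Array(b.length + 1).fill(0));
    dp[i][0] = i;
  }
  for (let j = 0; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}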
|
gharchive/pull-request
| 2022-08-23T13:29:24 |
2025-04-01T06:40:28.852526
|
{
"authors": [
"christian-bromann",
"sourishkrout"
],
"repo": "stateful/vscode-marquee",
"url": "https://github.com/stateful/vscode-marquee/pull/206",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2395493942
|
Video not working
On google collab, video not working!
This program created two folders. The first with input images. The second (empty file log)
@statho please help
Please see the response to #18.
|
gharchive/issue
| 2024-07-08T12:23:48 |
2025-04-01T06:40:28.855445
|
{
"authors": [
"exel-dot",
"statho"
],
"repo": "statho/ScoreHMR",
"url": "https://github.com/statho/ScoreHMR/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
99643114
|
Kick the osx builders?
http://buildbot.e.ip.saba.us:8010/builders
This has been delaying nightlies for about 3 days and will need addressing before we can make RC binaries. We can maybe manually work around it for all platforms other than mac if needed.
cc @staticfloat
Done. The VM host got rebooted, but the VM didn't get started back up correctly.
Thanks! I swear I've been trying things on rundeck before opening these, but so far with little success.
Yeah, unfortunately that problem wasn't solvable with rundeck. :P
|
gharchive/issue
| 2015-08-07T13:00:24 |
2025-04-01T06:40:28.861264
|
{
"authors": [
"staticfloat",
"tkelman"
],
"repo": "staticfloat/julia-buildbot",
"url": "https://github.com/staticfloat/julia-buildbot/issues/27",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1535091855
|
chore(main): release 0.1.2
:robot: I have created a release beep boop
0.1.2 (2023-01-30)
Dependency Updates
deps: bump github.com/onsi/ginkgo/v2 from 2.6.1 to 2.7.0 (#62) (c34c324)
deps: bump github.com/onsi/ginkgo/v2 from 2.7.0 to 2.7.1 (#77) (3b026b9)
deps: bump github.com/onsi/gomega from 1.24.2 to 1.25.0 (#69) (2795e36)
deps: bump github.com/onsi/gomega from 1.25.0 to 1.26.0 (#74) (351b75f)
deps: bump github.com/spf13/viper from 1.14.0 to 1.15.0 (#70) (b1422c8)
deps: bump sigs.k8s.io/controller-runtime from 0.14.1 to 0.14.2 (#76) (51f16a4)
This PR was generated with Release Please. See documentation.
:robot: Release is at https://github.com/statnett/controller-runtime-viper/releases/tag/v0.1.2 :sunflower:
|
gharchive/pull-request
| 2023-01-16T15:10:51 |
2025-04-01T06:40:28.870052
|
{
"authors": [
"mikaelol"
],
"repo": "statnett/controller-runtime-viper",
"url": "https://github.com/statnett/controller-runtime-viper/pull/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2239604954
|
Properly fix configmanager installation
Create the directory that it should, rather than creating a directory named after the file
Currently creates the directory below rather than just ensuring that the one it's trying to copy to exists:
oops..
|
gharchive/pull-request
| 2024-04-12T09:48:32 |
2025-04-01T06:40:29.433836
|
{
"authors": [
"Plootie",
"artehe"
],
"repo": "stayintarkov/SIT.Manager.Avalonia",
"url": "https://github.com/stayintarkov/SIT.Manager.Avalonia/pull/180",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
829025013
|
Add a TLDR for cloning
Checklist
Please ensure the following tasks are completed before submitting this pull request.
[x] Read, understood, and followed the contributing guidelines, including the relevant style guides.
[x] Read and understand the Code of Conduct.
[x] Read and understood the licensing terms.
[x] Searched for existing issues and pull requests before submitting this pull request.
[x] Filed an issue (or an issue already existed) prior to submitting this pull request.
[x] Rebased onto latest develop.
[x] Submitted against develop branch.
Description
What is the purpose of this pull request?
This pull request:
add TLDR for contribution
Related Issues
Does this pull request have any related issues?
This pull request:
Addresses https://github.com/stdlib-js/stdlib/issues/373#issuecomment-782247121
Questions
Any questions for reviewers of this pull request?
No.
Other
Any other information relevant to this pull request? This may include screenshots, references, and/or implementation notes.
No.
@stdlib-js/reviewers
I added a TLDR in https://github.com/stdlib-js/stdlib/commit/45f6535faa0b00d13fb2d96bfa708ff84933b4a3.
|
gharchive/pull-request
| 2021-03-11T10:40:42 |
2025-04-01T06:40:29.440887
|
{
"authors": [
"aminya",
"kgryte"
],
"repo": "stdlib-js/stdlib",
"url": "https://github.com/stdlib-js/stdlib/pull/385",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
284387136
|
What is the function to get Steem Power of an account?
I've looked over at https://steem.readthedocs.io/en/latest/ and i couldn't find anything. Is it possibe?
all i got was
sp_to_rshares(sp, voting_power=10000, vote_pct=10000)
Parameters:
sp (number) – Steem Power
voting_power (int) – voting power (100% = 10000)
vote_pct (int) – voting participation (100% = 10000)
Tried
from steem.account import Account
account = Account("manuelcho12")
print("VP: %s" % account.voting_power())
print("SP: %s" % account.sp())
Got this error:
VP: 94.13
Traceback (most recent call last):
File "steemapp.py", line 5, in
print("SP: %s" % account.sp())
TypeError: 'float' object is not callable
You get this from the Account object. You have to take into account SP delegated to you and SP you have delegated to get the active SP. I don't think there is an out-of-the-box function.
See below. I have used rstrip to remove VESTS from the string. I haven't checked, but there might be a converter function that can simplify it, but the concept stays the same.
allSP = float(account.get('vesting_shares').rstrip(' VESTS'))
delSP = float(account.get('delegated_vesting_shares').rstrip(' VESTS'))
recSP = float(account.get('received_vesting_shares').rstrip(' VESTS'))
activeSP = account.converter.vests_to_sp(allSP - delSP + recSP)
thank you for the workaround.
account.get_balances() also gets the total vests, but you have to use the JSON and convert the total vests to SP.
|
gharchive/issue
| 2017-12-24T23:50:37 |
2025-04-01T06:40:29.463959
|
{
"authors": [
"E-D-A",
"kingsofcoms"
],
"repo": "steemit/steem-python",
"url": "https://github.com/steemit/steem-python/issues/110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1581420491
|
Encoding record with 1 field doesn't work
record Config where
constructor MkConfig
test: String
Encoding MkConfig "test1" produces the string "test1" instead of the object {"test" : "test1"}
This is expected behavior as it reflects the representation of such a record at runtime. However, I think this should be customizable via JSON.Option.Options. I'll look into it.
Hi @raders ! With the latest version of idris2-json, you can set unwrapRecords to False to get the most verbose expansion of the record, and set sum to UntaggedValue so that you only keep the "value" (which is the pair you want).
module Main
import Hedgehog
import JSON.Derive
%language ElabReflection
record Config where
constructor MkConfig
test: String
%runElab derive "Config" [Show,Eq,(customToJSON $ {sum:=UntaggedValue, unwrapRecords:=False} defaultOptions),(customFromJSON $ {sum:=UntaggedValue, unwrapRecords:=False} defaultOptions)]
main : IO ()
main = test . pure $ MkGroup "Maelstrom" [("#22", property $ encode (MkConfig "test1") === #"{"test":"test1"}"#)]
Looks good. Thank you!
I'm closing this as it seems this is now resolved.
|
gharchive/issue
| 2023-02-12T21:45:19 |
2025-04-01T06:40:29.468035
|
{
"authors": [
"Victor-Savu",
"raders",
"stefan-hoeck"
],
"repo": "stefan-hoeck/idris2-json",
"url": "https://github.com/stefan-hoeck/idris2-json/issues/22",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1365058089
|
chore: upgrade to cdk v2.240.0
Fixes #
Codecov Report
Base: 100.00% // Head: 100.00% // No change to project coverage :thumbsup:
Coverage data is based on head (fe798a7) compared to base (1ba42aa).
Patch coverage: 100.00% of modified lines in pull request are covered.
Additional details and impacted files
@@ Coverage Diff @@
## master #174 +/- ##
=========================================
Coverage 100.00% 100.00%
=========================================
Files 3 3
Lines 51 52 +1
Branches 14 15 +1
=========================================
+ Hits 51 52 +1
Impacted Files | Coverage Δ
src/budget_notifier.ts | 100.00% <100.00%> (ø)
:umbrella: View full report at Codecov.
|
gharchive/pull-request
| 2022-09-07T18:45:47 |
2025-04-01T06:40:29.478864
|
{
"authors": [
"codecov-commenter",
"stefanfreitag"
],
"repo": "stefanfreitag/cdk-budget-notifier",
"url": "https://github.com/stefanfreitag/cdk-budget-notifier/pull/174",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2280667916
|
Warning "No region type defined for eSc_dummyblock_c" when training a region model
When I try training a region model, I get quite a few of such warnings:
WARNING [05/06 11:19:39 laypa.page_xml.xmlPAGE]: No region type defined for eSc_dummyblock_ at /home/user/training-laypa/region/2024.05.06/val_input/page/4212.xml
WARNING [05/06 11:19:39 laypa.page_xml.xmlPAGE]: Element type "None" undefined in class dict /home/user/training-laypa/region/2024.05.06/val_input/page/4212.xml
WARNING [05/06 11:19:39 laypa.page_xml.xmlPAGE]: No region type defined for eSc_dummyblock_ at /home/user/training-laypa/region/2024.05.06/val_input/page/4212.xml
WARNING [05/06 11:19:39 laypa.page_xml.xmlPAGE]: Element type "None" undefined in class dict /home/user/training-laypa/region/2024.05.06/val_input/page/4212.xml
WARNING [05/06 11:19:39 laypa.page_xml.xmlPAGE]: No region type defined for eSc_dummyblock_ at /home/user/training-laypa/region/2024.05.06/val_input/page/4212.xml
WARNING [05/06 11:19:39 laypa.page_xml.xmlPAGE]: Element type "None" undefined in class dict /home/user/training-laypa/region/2024.05.06/val_input/page/4212.xml
Why would this happen and should I pay attention? If yes, how can I fix this?
Thanks in advance!
This means that in the GT there is a region that doesn't have a label. If the region has no label it will be ignored, so its pixels will just be classified (in the GT) as background
This mean that in the GT there is a region that doesn't have a label.
Hmm that's weird, since all regions in my GT do have labels.
You are right, seems eScriptorium didn't remove the dummy green region that occupied the whole page (a default situation after running Loghi with only baseline detection), although I did remove it manually in their UI.
Question is, why do I get 3 similar warnings about it even though there's only 1 eSc_dummyblock_ element?
The three warnings are probably due to the preprocessing reading this region 3 times (could probably be optimized :smile: ): once for semantic segmentation, once for instance segmentation, and once for panoptic segmentation.
This is due to the reading order, but I am not sure why this would change. Maybe @rvankoert or @TimKoornstra knows. As there were some changes to the inference script. But this is done somewhere in the Java or bash part
Does that mean not running the RECALCULATEREADINGORDER order as well?
This might very well be the issue, thanks for the tip, I will check this!
|
gharchive/issue
| 2024-05-06T11:33:09 |
2025-04-01T06:40:29.483336
|
{
"authors": [
"fattynoparents",
"stefanklut"
],
"repo": "stefanklut/laypa",
"url": "https://github.com/stefanklut/laypa/issues/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
112693676
|
Fix #31
Use HashAlgorithm.Create(string) so that .NET loads FIPS-compliant hash algorithms if available on the local machine. This also allows algorithms to be overridden as described here.
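For illustration, the lookup-based form (a minimal sketch using the standard .NET API, not WebApiThrottle-specific code):
using System.Security.Cryptography;
using System.Text;

byte[] data = Encoding.UTF8.GetBytes("example");
// Resolving by name lets machine configuration substitute a
// FIPS-compliant implementation (or an override) for "SHA256".
using (HashAlgorithm sha = HashAlgorithm.Create("SHA256"))
{
    byte[] hash = sha.ComputeHash(data);
}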
Thanks
|
gharchive/pull-request
| 2015-10-21T22:39:54 |
2025-04-01T06:40:29.485249
|
{
"authors": [
"martincostello",
"stefanprodan"
],
"repo": "stefanprodan/WebApiThrottle",
"url": "https://github.com/stefanprodan/WebApiThrottle/pull/40",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2055999836
|
🛑 Geschaeftspartner Portal API is down
In 3988a64, Geschaeftspartner Portal API (https://niko.neuenhauser.de/api/gpp/q/health) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Geschaeftspartner Portal API is back up in 2ba2d5b after 10 hours, 24 minutes.
|
gharchive/issue
| 2023-12-26T03:49:10 |
2025-04-01T06:40:29.487753
|
{
"authors": [
"stefanwendelmann"
],
"repo": "stefanwendelmann/niko-uptime",
"url": "https://github.com/stefanwendelmann/niko-uptime/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1078204851
|
Feasibility check of the WebUntis API interface
Task
Postman Requests for getting
classes and class teacher by room
pupils of class
the current lesson of class
Requirements
API-key -> ask sysadmin
Webuntis API documentation
Postman
Contacted sysadmin on 16.12.2021, still no response.
Got some information from Prof. Stütz:
Example project
https://birklbauerjonas.github.io/webUntis-docs/
by a few former HTL students.
API Documentation
JSONRpc api
Here and/or here.
Additional info
Npm package for nodejs
Here.
Former project at HTL Here.
|
gharchive/issue
| 2021-12-13T08:07:41 |
2025-04-01T06:40:29.495883
|
{
"authors": [
"Nydery"
],
"repo": "steinmax/reSign",
"url": "https://github.com/steinmax/reSign/issues/29",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
739684327
|
module .. does not contain package github.com/stellar/go/clients/txnbuild
What version are you using?
v0.0.0-20201109195737-ef1e30d4691b
What did you do?
There is an incorrect installation command in the txnbuild README.md, which gives the installation instruction as
go get github.com/stellar/go/clients/txnbuild
The above command is incorrect since txnbuild is not inside the clients directory. If the above command is used, the below error message will be displayed
module github.com/stellar/go@upgrade found (v0.0.0-20201109195737-ef1e30d4691b), but does not contain package github.com/stellar/go/clients/txnbuild
This should be
go get github.com/stellar/go/txnbuild
What did you expect to see?
I expected the installation of txnbuild.
What did you see instead?
It threw error
Thanks @aanupam23. I am correcting the installation instructions in #3207.
|
gharchive/issue
| 2020-11-10T08:05:56 |
2025-04-01T06:40:29.502111
|
{
"authors": [
"aanupam23",
"leighmcculloch"
],
"repo": "stellar/go",
"url": "https://github.com/stellar/go/issues/3202",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
728406265
|
services/horizon: Allow captive core to start from any ledger.
PR Checklist
PR Structure
[ ] This PR has reasonably narrow scope (if not, break it down into smaller PRs).
[ ] This PR avoids mixing refactoring changes with feature changes (split into two PRs
otherwise).
[ ] This PR's title starts with name of package that is most changed in the PR, ex.
services/friendbot, or all or doc if the changes are broad or impact many
packages.
Thoroughness
[ ] This PR adds tests for the most critical parts of the new functionality or fixes.
[ ] I've updated any docs (developer docs, .md
files, etc... affected by this change). Take a look in the docs folder for a given service,
like this one.
Release planning
[ ] I've updated the relevant CHANGELOG (here for Horizon) if
needed with deprecations, added features, breaking changes, and DB schema changes.
[ ] I've decided if this PR requires a new major/minor version according to
semver, or if it's mainly a patch change. The PR is targeted at the next
release branch if it's not a patch change.
What
Allow captive core to start from any ledger.
Why
Previously we were limiting the ledgers where online captive core could start since we were always trying to start (captive core) from the previous check-point ledger.
This was probably problematic since this wouldn't work for ledgers smaller than 63.
Known limitations
[TODO or N/A]
Thanks!
This seems to fix #3157 ! (which I tested using #3144 ).
However, I find it confusing that the stats obtained from GET / don't reflect the log messages. For instance, even if Horizon was outputting this:
time="2020-10-27T14:36:46.525Z" level=info msg="Ingestion system state machine transition" current_state="resume(latestSuccessfullyProcessedLedger=61)" next_state="resume(latestSuccessfullyProcessedLedger=61)" pid=198 service=expingest
time="2020-10-27T14:36:46.533Z" level=info msg="Waiting for ledger to be available in stellar-core" core_sequence=61 ingest_sequence=62 pid=198 service=expingest
The root stats were still at 0 (including the CoreSequence):
I would expect the IngestSequence and CoreSequence to be consistent in both the log messages and the root endpoint.
@2opremio I think I ran into this while working on this PR but haven't debugged it much yet. I suspect that changes in https://github.com/stellar/go/pull/3106 broke something. If you /bin/bash the container and run curl localhost:8000 there you'll see correct values. So it looks like two Horizons are running? @tamirms can you confirm/take a look?
It's strange, because after ledger 64 is reached (according to the logs) the CoreSequence I obtain is correct.
If you /bin/bash the container and run curl localhost:8000 there you'll see correct values.
True.
$ docker exec -ti horizon-integration curl localhost:8000 | grep ledger
"ledger": {
"href": "http://localhost:8000/ledger/{sequence}",
"ledgers": {
"href": "http://localhost:8000/ledgers{?cursor,limit,order}",
"ingest_latest_ledger": 18,
"history_latest_ledger": 18,
"history_elder_ledger": 2,
"core_latest_ledger": 18,
@2opremio I noticed there is a new env variable: HORIZON_INTEGRATION_ENABLE_CAPTIVE_CORE. I haven't checked it but maybe it will fix it.
I'm going to approve this PR, but please 👍 too because I partially worked on this. And maybe let's move further discussion about the container issue to a new issue.
OK, just for the record. This seems to be the problem:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9a1d35cf918b stellar/quickstart:testing2 "/start --standalone…" 2 minutes ago Up 2 minutes 0.0.0.0:32797->1570/tcp, 0.0.0.0:32796->5432/tcp, 0.0.0.0:32795->6060/tcp, 0.0.0.0:32794->8000/tcp, 0.0.0.0:32793->11625/tcp, 0.0.0.0:32792->11626/tcp horizon-integration
it should be 0.0.0.0:8000->8000/tcp instead
|
gharchive/pull-request
| 2020-10-23T17:50:45 |
2025-04-01T06:40:29.513121
|
{
"authors": [
"2opremio",
"abuiles",
"bartekn"
],
"repo": "stellar/go",
"url": "https://github.com/stellar/go/pull/3160",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1205096971
|
core: Add inclusive and additive amount calculation in SEP-31
PR Checklist
PR Structure
[x] This PR has reasonably narrow scope (if not, break it down into smaller PRs).
[x] This PR avoids mixing refactoring changes with feature changes (split into two PRs
otherwise).
[x] This PR's title starts with name of package that is most changed in the PR, ex.
paymentservice.stellar, or all or doc if the changes are broad or impact many
packages.
Thoroughness
[ ] This PR adds tests for the most critical parts of the new functionality or fixes.
What
Add inclusive and additive amount calculation in SEP-31. Default to "inclusive"
Why
There are anchors who calculate the amount using the following equation:
amount_in = amount + fee (additive)
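A sketch of the two fee modes described above (hypothetical helper; "inclusive" is assumed to mean the fee comes out of the requested amount):
// Illustrative only: how amount_in differs between the two fee modes.
type FeeMode = "inclusive" | "additive";

function amountIn(amount: number, fee: number, mode: FeeMode): number {
  // additive: the sender pays the fee on top of the requested amount
  // inclusive (assumed): the fee is already contained in the amount
  return mode === "additive" ? amount + fee : amount;
}

// amountIn(100, 2, "additive")  -> 102
// amountIn(100, 2, "inclusive") -> 100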
Known limitations
Demo wallet won't work with additive settings.
Reference Server Preview is available here: https://anchor-ref-pr195.previews.kube001.services.stellar-ops.com/
SEP Server Preview is available here: https://anchor-sep-pr195.previews.kube001.services.stellar-ops.com/
Also, it would be great to add tests to make sure we cover all these cases. This part of the code is a bit complicated and we should have tests in place to make sure a change won't break things here.
|
gharchive/pull-request
| 2022-04-14T23:16:16 |
2025-04-01T06:40:29.531374
|
{
"authors": [
"lijamie98",
"marcelosalloum",
"stellar-jenkins"
],
"repo": "stellar/java-stellar-anchor-sdk",
"url": "https://github.com/stellar/java-stellar-anchor-sdk/pull/195",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2054343788
|
Change to Circle USDC.
What
Switching e2e tests to use circle's USDC.
Why
It's easier to fund the distribution account with Circle's USDC than to maintain one more asset during testnet resets.
Known limitations
[TODO or N/A]
Checklist
PR Structure
[ ] This PR has a reasonably narrow scope (if not, break it down into smaller PRs).
[ ] This PR title and description are clear enough for anyone to review it.
[ ] This PR does not mix refactoring changes with feature changes (split into two PRs otherwise).
Thoroughness
[ ] This PR adds tests for the new functionality or fixes.
[ ] This PR contains the link to the Jira ticket it addresses.
Configs and Secrets
[ ] No new CONFIG variables are required -OR- the new required ones were added to the helmchart's values.yaml file.
[ ] No new CONFIG variables are required -OR- the new required ones were added to the deployments (pr-preview, dev, demo, prd).
[ ] No new SECRETS variables are required -OR- the new required ones were mentioned in the helmchart's values.yaml file.
[ ] No new SECRETS variables are required -OR- the new required ones were added to the deployments (pr-preview secrets, dev secrets, demo secrets, prd secrets).
Release
[ ] This is not a breaking change.
[ ] This is ready for production.. If your PR is not ready for production, please consider opening additional complementary PRs using this one as the base. Only merge this into develop or main after it's ready for production!
Deployment
[ ] Does the deployment work after merging?
stellar-disbursement-platform-backend-preview is available here:
SDP: https://sdp-backend-pr132.previews.kube001.services.stellar-ops.com/health
AP: https://sdp-ap-pr132.previews.kube001.services.stellar-ops.com/health
Frontend: https://sdp-backend-dashboard-pr132.previews.kube001.services.stellar-ops.com
|
gharchive/pull-request
| 2023-12-22T18:16:37 |
2025-04-01T06:40:29.545945
|
{
"authors": [
"marwen-abid",
"stellar-jenkins"
],
"repo": "stellar/stellar-disbursement-platform-backend",
"url": "https://github.com/stellar/stellar-disbursement-platform-backend/pull/132",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1093000870
|
Implement a popup for individual markers
Context
As a user, I want to be able to quickly view the details of events displayed on the map. To that end, clicking an individual marker should display a popup containing the basic information about the event.
Task
Implement displaying the main event details in a popup when a marker is clicked (see the sketch after this list):
Event date
Event address
Event type
Additional description.
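A minimal sketch of such a popup (Leaflet is assumed as the map library; the event record shape is illustrative):
import * as L from "leaflet";

// Hypothetical event record matching the fields listed above.
interface HistEvent {
  date: string;
  address: string;
  type: string; // e.g. "artillery shell hit"
  description: string;
}

function addEventMarker(map: L.Map, lat: number, lng: number, e: HistEvent) {
  // bindPopup() attaches a popup that Leaflet opens when the marker is clicked.
  L.marker([lat, lng])
    .addTo(map)
    .bindPopup(`<b>${e.date}</b><br>${e.address}<br>${e.type}<br>${e.description}`);
}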
@gellouglas, what will the "event type" options be? Will the additional description be text only, or will the popups also contain images?
> @gellouglas, what will the "event type" options be? Will the additional description be text only, or will the popups also contain images?
Event types on the markers:
artillery shell hit
high-explosive bomb hit
incendiary bomb hit
For now the additional description is text only. It seems to me that overloading the hints with images is not the best idea. Images will probably be shown in the extended-description field on the panel.
|
gharchive/issue
| 2022-01-04T04:27:49 |
2025-04-01T06:40:29.557591
|
{
"authors": [
"gellouglas",
"stepan-anokhin",
"vzakhari"
],
"repo": "stepan-anokhin/spb-histmap",
"url": "https://github.com/stepan-anokhin/spb-histmap/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
309527319
|
Check your .tlu/properties.dat properties
I can't install the bot, and the update seems to have a bug or something.
Here a screenshot: http://prntscr.com/ixqgua
Try to download the newest version manually: https://github.com/stephan-strate/teamspeak-league-update/releases/download/3.0.1/teamspeak-league-update.jar and delete your .tlu folder.
I had a bug in Version 3.0.0, which caused this issue.
Where is the .tlu folder?
It is in the same folder as your teamspeak-league-update.jar, it might be hidden by your system. Try to show hidden files and folders.
Thanks man!
All setups are right.
Here a screen: http://prntscr.com/iy63p0
I reconnect with my second ID and don't get my rank.
I connect with the verified ID from the bot. What I mean is, the ID is the same on reconnect.
Difficult to tell where the problem is. You can send me your .tlu folder to my mail address (you can find my mail address on my profile, left side)
I released a new version 3.0.2 with a small bugfix. This should fix your problems, you might need to delete your .tlu/properties.dat
|
gharchive/issue
| 2018-03-28T20:52:22 |
2025-04-01T06:40:29.562917
|
{
"authors": [
"NaturFront",
"stephan-strate"
],
"repo": "stephan-strate/teamspeak-league-update",
"url": "https://github.com/stephan-strate/teamspeak-league-update/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1510616702
|
Any plan to support NDJSON?
Hello.
I think glaze is a great library.
I have started to use glaze when I write new programs.
As a next phase, I would like to replace other libraries with glaze in existing programs where speed is important, but NDJSON (Newline Delimited JSON) support is an issue.
http://ndjson.org/
As far as I know, glaze does not support NDJSON.
Are there any plans to support NDJSON?
NDJSON is a reasonable and great suggestion. We'll try to add support for this soon, probably as a compile time option.
@toge
NDJSON support has been added for array types like std::array, std::vector, std::tuple, and glz::array
For runtime deduced variant types see this Variant Handling documentation on the Wiki. This will let you deduce the type from the NDJSON.
Here are some tests that demonstrate the behavior and calling syntax:
std::vector<std::string> x = { "Hello", "World", "Ice", "Cream" };
std::string s = glz::write_ndjson(x);
expect(s ==
R"("Hello"
"World"
"Ice"
"Cream")");
x.clear();
glz::read_ndjson(x, s);
expect(x[0] == "Hello");
expect(x[1] == "World");
expect(x[2] == "Ice");
expect(x[3] == "Cream");
Another example:
std::tuple<my_struct, sub_thing> x{};
std::string s = glz::write_ndjson(x);
expect(s ==
R"({"i":287,"d":3.14,"hello":"Hello World","arr":[1,2,3]}
{"a":3.14,"b":"stuff"})");
auto& first = std::get<0>(x);
auto& second = std::get<1>(x);
first.hello.clear();
first.arr[0] = 0;
second.a = 0.0;
second.b.clear();
glz::read_ndjson(x, s);
expect(first.hello == "Hello World");
expect(first.arr[0] = 1);
expect(second.a == 3.14);
expect(second.b == "stuff");
@stephenberry
Thanks a lot!
It is wonderful.
I will give it a try!
|
gharchive/issue
| 2022-12-26T05:21:19 |
2025-04-01T06:40:29.574401
|
{
"authors": [
"stephenberry",
"toge"
],
"repo": "stephenberry/glaze",
"url": "https://github.com/stephenberry/glaze/issues/92",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
145555267
|
Restrict use of qualified table names
Qualified table names are not allowed when creating indexes or foreign key references.
Fixes #395
Nice solution! Thank you!
|
gharchive/pull-request
| 2016-04-04T00:17:40 |
2025-04-01T06:40:29.575856
|
{
"authors": [
"jlawton",
"stephencelis"
],
"repo": "stephencelis/SQLite.swift",
"url": "https://github.com/stephencelis/SQLite.swift/pull/396",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
175129530
|
Fix positional argument count on get_db_prep_value
When I try to run the Wordpress importer with the latest git head of Mezzanine (ee24a48b5f33) and Django 1.10.1, I get the error reproduced at the bottom of this comment when it tries to import WordPress pages. This occurs on both SQLite (which didn't experience this problem in February when I was last experimenting) and PostgreSQL (which I didn't test previously).
Despite spending a bit of time looking into it, I'm not totally sure why this started appearing now. The code at core/fields.py:103 doesn't appear to have changed since 2012 and the Django API for this function hasn't changed since Django 1.2. Probably Django is passing the connection object now regardless of whether a multi-tenant DB configuration is used or not, and maybe it wasn't before?
In any case, I'm not sure what the hiccup is, but now that I got it working, I'm not super interested in spending more time hunting down the root cause. It's working again with the attached patch. Hope this helps. :smile:
Imported comment by: Test
Traceback (most recent call last):
File "manage.py", line 14, in <module>
execute_from_command_line(sys.argv)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line
utility.execute()
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 359, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/core/management/base.py", line 294, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/core/management/base.py", line 345, in execute
output = self.handle(*args, **options)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/Mezzanine-4.2.0-py3.5.egg/mezzanine/blog/management/base.py", line 232, in handle
page, created = RichTextPage.objects.get_or_create(**page)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/query.py", line 473, in get_or_create
return self.get(**lookup), False
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/query.py", line 379, in get
num = len(clone)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/query.py", line 238, in __len__
self._fetch_all()
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/query.py", line 1087, in _fetch_all
self._result_cache = list(self.iterator())
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/query.py", line 54, in __iter__
results = compiler.execute_sql()
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 824, in execute_sql
sql, params = self.as_sql()
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 376, in as_sql
where, w_params = self.compile(self.where) if self.where is not None else ("", [])
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 353, in compile
sql, params = node.as_sql(self, self.connection)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/sql/where.py", line 79, in as_sql
sql, params = compiler.compile(child)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 353, in compile
sql, params = node.as_sql(self, self.connection)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/lookups.py", line 156, in as_sql
rhs_sql, rhs_params = self.process_rhs(compiler, connection)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/lookups.py", line 92, in process_rhs
return self.get_db_prep_lookup(value, connection)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/lookups.py", line 184, in get_db_prep_lookup
[get_db_prep_value(value, connection, prepared=True)]
TypeError: get_db_prep_value() takes 2 positional arguments but 3 were given
Thanks for tracking this down!
I had a look myself and the code in Django is a bit inconsistent - sometimes it's defined as a positional arg, and sometimes as a keyword arg. Anyway it looks like the change in 1.10 here caused the problem: https://github.com/django/django/commit/eab5df12b664b154b2e280330aa43d8c0621b94a
|
gharchive/pull-request
| 2016-09-05T21:37:49 |
2025-04-01T06:40:29.581557
|
{
"authors": [
"sjuxax",
"stephenmcd"
],
"repo": "stephenmcd/mezzanine",
"url": "https://github.com/stephenmcd/mezzanine/pull/1668",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
475092105
|
ZED camera causes ubuntu 18.04 with RX 2080 and CUDA 10.0 to freeze
My setup:
Ubuntu 18.04
AMD Ryzen 2700X
NVidia RTX 2080
nvidia-driver-410
cuda-10-0
Steps to reproduce:
1)
Connect ZED camera
If ZED camera is not detected, try plug/unplug until it is
Try to calibrate camera
if the calibration window becomes unresponsive, try restarting calibration
Wait for system freeze
2)
Connect ZED camera
If ZED camera is not detected, try plug/unplug until it is
Try to see video using ZED explorer
if camera is not detected, just wait
Any news?
Hi,
Can you tell the ZED SDK version you are using ?
The latest, 2.8.3, from https://download.stereolabs.com/zedsdk/2.8/ubuntu18 .
|
gharchive/issue
| 2019-07-31T11:37:20 |
2025-04-01T06:40:29.612162
|
{
"authors": [
"EmilPi",
"obraun-sl"
],
"repo": "stereolabs/zed-examples",
"url": "https://github.com/stereolabs/zed-examples/issues/165",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
458668183
|
Disparity as Grayscale image
Hi,
I need the disparity as an image, so what's the best way to get it?
I am thinking of normalizing MEASURE_DISPARITY to 0-1 and then multiplying by 255, but the issue is that each image has a different range, so this won't be accurate in real time. Please let me know if there is a better way to do it.
Hi,
Yes, there is a better way to display the disparity map. Use the retrieveImage function with view=VIEW_DEPTH
The disparity and depth are visually similar.
Ok, thank you @adujardin for this quick response.
I posted an issue a long time ago and would appreciate it even more if you could answer that one too.
|
gharchive/issue
| 2019-06-20T13:45:24 |
2025-04-01T06:40:29.617582
|
{
"authors": [
"adujardin",
"zaher88abd"
],
"repo": "stereolabs/zed-python-api",
"url": "https://github.com/stereolabs/zed-python-api/issues/86",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2685856197
|
bug: Oil window just closes without error when trying to add/remove file
Did you check the docs and existing issues?
[X] I have read the docs
[X] I have searched the existing issues
Neovim version (nvim -v)
NVIM v0.11.0-dev-1207+g534544cbf7
Operating system/version
Arch Linux LTS
Describe the bug
I have not determined when, but quite often when I try to add or remove files, the oil window just closes and puts me back in the last file I was in. There is no error.
It started happening a few versions ago, but I'm not sure which. It happens in multiple projects.
Right now I have a directory where I can delete some folders, but not all, and adding a file/folder just closes the window.
Is there some logs I can find to see what's happening?
I used the below repro config, and I did not get the error, but I did try with a clean oil config with my own nvim config, and got the issue.
What is the severity of this bug?
breaking (some functionality is broken)
Steps To Reproduce
Be in file
Run this keymap: vim.keymap.set("n", "-", require("oil").open, { desc = "Open parent directory" })
Delete folder
I haven't determined the exact cause, so it's hard to give a definitive reproduction.
Expected Behavior
Can modify the oil buffer
Directory structure
/012 .git/
/010 .github/
/006 .tests/
/013 autoload/
/003 deps/
/004 doc/
/001 ftplugin/
/011 lua/
/009 scripts/
/002 tests/
/014 .gitignore
/007 CONTRIBUTING.md
/008 Makefile
/005 README.md
/015 stylua.toml
Repro
-- save as repro.lua
-- run with nvim -u repro.lua
-- DO NOT change the paths
local root = vim.fn.fnamemodify("./.repro", ":p")
-- set stdpaths to use .repro
for _, name in ipairs({ "config", "data", "state", "runtime", "cache" }) do
vim.env[("XDG_%s_HOME"):format(name:upper())] = root .. "/" .. name
end
-- bootstrap lazy
local lazypath = root .. "/plugins/lazy.nvim"
if not vim.loop.fs_stat(lazypath) then
vim.fn.system({
"git",
"clone",
"--filter=blob:none",
"--single-branch",
"https://github.com/folke/lazy.nvim.git",
lazypath,
})
end
vim.opt.runtimepath:prepend(lazypath)
-- install plugins
local plugins = {
"folke/tokyonight.nvim",
{
"stevearc/oil.nvim",
config = function()
require("oil").setup({
-- add any needed settings here
})
end,
},
-- add any other plugins here
}
require("lazy").setup(plugins, {
root = root .. "/plugins",
})
vim.cmd.colorscheme("tokyonight")
-- add anything else here
Did you check the bug with a clean config?
[X] I have confirmed that the bug reproduces with nvim -u repro.lua using the repro.lua file above.
I tracked it down to be caused by this plugin: https://github.com/Shatur/neovim-session-manager
Probably some code related to session saving broke it. I didn't find a fix, I just switched plugins.
|
gharchive/issue
| 2024-11-23T11:21:57 |
2025-04-01T06:40:29.626138
|
{
"authors": [
"oysandvik94"
],
"repo": "stevearc/oil.nvim",
"url": "https://github.com/stevearc/oil.nvim/issues/520",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
256446382
|
Clique Community Persistence: A Topological Visual Analysis Approach for Complex Networks
Dear Steve,
thank you so much for this service! I have finally added all information about this project on OSF.io. Would you kindly include the relevant URIs?
All data: https://osf.io/rdktg/
Paper (PDF): https://mfr.osf.io/render?url=https://osf.io/td973/?action=download%26mode=render
Supplementary materials (PDF): https://mfr.osf.io/render?url=https://osf.io/e8dmp/?action=download%26mode=render
Additional explanations: https://mfr.osf.io/render?url=https://osf.io/ygd6m/?action=download%26mode=render
Hope that I am using this service correctly---this is my first project on OSF, inspired by your initiative. Thanks for your efforts!
Best,
Bastian
Thanks Bastian. The paper is updated. And I've put the OSF repository as the materials. The data category is for experiment results such as experiment subject responses or algorithmic runtime results. I'm not sure the data folder here fits into that. Feel free to comment in this issue if you think anything should change.
Hi Steve! I realize that I should have named the folder somewhat differently, but it actually contains the results of algorithmic runs as well as scripts to create them for yourself (in order to compare them with the ones we reported in the paper). I'll also add a link from the GitHub repository to the OSF one so that more people can find it.
Hi Steve, sorry to bother you again, but I wanted to ask whether you could add the OSF repository as a data repository of our paper, as well. I have uploaded the raw input files as well as the results of our analysis in order to make everything reproducible.
I took a look at the data folder, and I'm still not sure if it fits the criteria of experiment data rather than materials.
I don't see:
A data dictionary (how do I read the data?)
An explanation in the paper about how the data is used for an analysis (e.g., mean & SD of algorithm run times) or comparison (algorithm A vs algorithm B; simulation vs ground truth; algorithm vs human estimation; etc.). I see the conclusion mentions a contrast with existing methods, but I don't see a discussion of the comparison using the data.
I just want to make sure that I understand what's posted, as it seems to be quite outside the typical umbrella of experiment results.
OK, I understand that I was mistaken about the nature of these experimental data. I have added an updated README to the Data folder to explain what is in there. In short, I added:
all networks that we analysed in our paper
the results of this analysis in the form of so-called persistence diagrams
further data calculated with our method (topological centrality) and a comparison between existing centrality measures
comparisons between our method and existing methods in the form of distance matrices (one is generated with the old method, the other with our method)
embeddings & glyphs to reproduce the figures in our paper
I also added detailed instructions and automated scripts for reproducing every figure and every table in the paper. The results are added in order to make it possible to check the correctness of the calculations.
So, I was hoping that this (along with the supplied code) should make it possible to reproduce just about everything in the paper (and also try it out with new data, if possible).
Sorry for taking up so much time; I have to admit that Open Science is something new for me—but I'm very excited to try it out and see the benefits for other scientists.
|
gharchive/issue
| 2017-09-09T15:35:10 |
2025-04-01T06:40:29.636684
|
{
"authors": [
"Submanifold",
"steveharoz"
],
"repo": "steveharoz/open-access-vis",
"url": "https://github.com/steveharoz/open-access-vis/issues/38",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
33449969
|
Group items
Is there an index or anything I could hack to group items in a wrapping template?
IE... create rows, blocks of items for a slide show etc.
I'm closing this as it's very old, but for future reference, the mock option and success callback can be used to work with the data before inserting it into the DOM yourself.
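Roughly, with the v1 options mentioned above (the grouping logic here is purely illustrative):
// Sketch: fetch the data via Instafeed, group it yourself, then insert
// the markup manually. `mock: true` skips Instafeed's own DOM insertion;
// `success` receives the raw API response.
declare const Instafeed: any; // browser global provided by instafeed.js

const feed = new Instafeed({
  accessToken: "...",
  mock: true,
  success(data: { data: any[] }) {
    const rows: any[][] = [];
    for (let i = 0; i < data.data.length; i += 4) {
      rows.push(data.data.slice(i, i + 4)); // e.g. 4 items per row/slide
    }
    // ...render each row into your own wrapping template
  },
});
feed.run();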
|
gharchive/issue
| 2014-05-13T23:52:46 |
2025-04-01T06:40:29.673504
|
{
"authors": [
"benjamin-hull",
"sprynm"
],
"repo": "stevenschobert/instafeed.js",
"url": "https://github.com/stevenschobert/instafeed.js/issues/109",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
791580526
|
Doesn't work for duplicate params
Doesn't work for duplicate params
It just adds it as a second param, rather than replacing the first
> Doesn't work for duplicate params
> It just adds it as a second param, rather than replacing the first
I just made the same kind of library, with the same name and even the same options. And I found this package because I cannot publish mine under the name build-url, so I published my library as a scoped package: @googlicius/build-url
It can replace existing query params, and TypeScript is supported. So please check it out.
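For anyone who just needs replace-instead-of-append semantics, the native WHATWG URL API already does this (no library assumed):
// searchParams.set() overwrites any existing value for the key;
// searchParams.append() would add a duplicate "page" param instead.
const url = new URL("https://example.com/search?page=1&sort=asc");
url.searchParams.set("page", "2");

console.log(url.toString());
// https://example.com/search?page=2&sort=asc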
|
gharchive/issue
| 2021-01-22T00:02:29 |
2025-04-01T06:40:29.677089
|
{
"authors": [
"Geczy",
"googlicius"
],
"repo": "steverydz/build-url",
"url": "https://github.com/steverydz/build-url/issues/49",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
360937592
|
[BUG] In DialogListAdapter, addItem(dialog) adds the item to end of the list but updates the first item
On adding a new dialog, item gets added but UI isn't updated.
/**
* Add dialog to the end of dialogs list
*
* @param dialog dialog item
*/
public void addItem(DIALOG dialog) {
items.add(dialog);
notifyItemInserted(0); <=== should be notifyItemInserted(items.size() - 1)
}
Hi! Yeah you are right! I can't understand how we missed it. I'll fix it. You can use method addItem(int position, DIALOG dialog) while I haven't update library.
Yup! I am using the addItem(int position, DIALOG dialog) for now.
|
gharchive/issue
| 2018-09-17T16:02:24 |
2025-04-01T06:40:29.679617
|
{
"authors": [
"bevzaanton",
"mayuroks"
],
"repo": "stfalcon-studio/ChatKit",
"url": "https://github.com/stfalcon-studio/ChatKit/issues/198",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
236979020
|
Add support to sync only keys matching a certain pattern
I've used this tool in the past and it works great, but now I have a case in which I need to dump keys that only match a certain pattern instead of dumping all the keys of a DB.
The MATCH option of the SCAN command can be used to add this support.
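Rump itself is written in Go, but as an illustration of the SCAN + MATCH semantics, here is a sketch using node-redis (v4 iterator API):
import { createClient } from "redis";

async function listMatchingKeys() {
  // MATCH is applied by the server after each SCAN batch is read, so
  // sparse matches can produce many near-empty iterations.
  const client = createClient({ url: "redis://localhost:6379" });
  await client.connect();

  for await (const key of client.scanIterator({ MATCH: "user:*", COUNT: 100 })) {
    console.log(key); // a key that would be synced
  }

  await client.quit();
}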
The following commit https://github.com/fzunino/rump/commit/67bc2e1dac6d46019019a8f6cf6996ba8a02cf70 adds this support adding an optional command line argument called match and using * as default value, maintaining the current semantic.
Let me know and I can submit a PR with this commit.
@fzunino Yes, that's great idea!
Let's keep the new argument optional. :+1:
Hi,
We just released Rump 1.0.0 and dropped support for MATCH, considering that it could lead to inconsistencies since the filter is applied after SCAN: https://redis.io/commands/scan#the-match-option
It is important to note that the MATCH filter is applied after elements are retrieved from the collection, just before returning data to the client. This means that if the pattern matches very little elements inside the collection, SCAN will likely return no elements in most iterations.
|
gharchive/issue
| 2017-06-19T18:27:16 |
2025-04-01T06:40:29.685971
|
{
"authors": [
"badshark",
"fzunino",
"nixtrace"
],
"repo": "stickermule/rump",
"url": "https://github.com/stickermule/rump/issues/16",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
681833873
|
Fix small mistake in time.rs: kilohertz should be megahertz
Fix error in description of Megahertz
Perfect, thanks!
|
gharchive/pull-request
| 2020-08-19T13:18:52 |
2025-04-01T06:40:29.702061
|
{
"authors": [
"TheZoq2",
"codemaster97"
],
"repo": "stm32-rs/stm32f1xx-hal",
"url": "https://github.com/stm32-rs/stm32f1xx-hal/pull/260",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
787814775
|
Fetching packages stuck at last package
Expected Behavior
For the install to continue
Actual Behavior
Steps to Reproduce the Problem
Clone the repo
Run yarn install
Stuck
Specifications
Platform: Ubuntu 18.04 LTS
Subsystem:
fsevents@1.2.13: The platform "linux" is incompatible with this module
|
gharchive/issue
| 2021-01-17T22:57:14 |
2025-04-01T06:40:29.725154
|
{
"authors": [
"UNlDAN"
],
"repo": "stockmarkat/stockmarket-simulation",
"url": "https://github.com/stockmarkat/stockmarket-simulation/issues/156",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2588875880
|
[Manual] Operator bundle update
Description
Please provide a brief description of the purpose of this pull request.
Related Issue
If applicable, please reference the issue(s) that this pull request addresses.
Changes Made
Provide a clear and concise overview of the changes made in this pull request.
Screenshots (if applicable)
Add screenshots or GIFs that demonstrate the changes visually, if relevant.
Checklist
[ ] I have tested the changes locally and they are functioning as expected.
[ ] I have updated the documentation (if necessary) to reflect the changes.
[ ] I have added/updated relevant unit tests (if applicable).
[ ] I have ensured that my code follows the project's coding standards.
[ ] I have checked for any potential security issues and addressed them.
[ ] I have added necessary comments to the code, especially in complex or unclear sections.
[ ] I have rebased my branch on top of the latest main/master branch.
Additional Notes
Add any additional notes, context, or information that might be helpful for reviewers.
Reviewers
Tag the appropriate reviewers who should review this pull request. To add reviewers, please add the following line: /cc @reviewer1 @reviewer2
/cc @cameronmwall @ngraham20
Definition of Done
[ ] Code is reviewed.
[ ] Code is tested.
[ ] Documentation is updated.
[ ] All checks and tests pass.
[ ] Approved by at least one reviewer.
[ ] Merged into the main/master branch.
/cherry-pick backplane-2.7
|
gharchive/pull-request
| 2024-10-15T13:58:00 |
2025-04-01T06:40:29.749415
|
{
"authors": [
"dislbenn"
],
"repo": "stolostron/backplane-operator",
"url": "https://github.com/stolostron/backplane-operator/pull/1019",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2509026927
|
[doc] remove hosted import
Summary
Related issue(s)
Fixes #
/retest
|
gharchive/pull-request
| 2024-09-05T23:29:28 |
2025-04-01T06:40:29.750753
|
{
"authors": [
"ldpliu"
],
"repo": "stolostron/multicluster-global-hub",
"url": "https://github.com/stolostron/multicluster-global-hub/pull/1089",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1410523327
|
[WIP] refactor hoh addon intallation
Signed-off-by: Zhiwei Yin zyin@redhat.com
/assign @clyang82
/assign @morvencao
/test test-e2e
|
gharchive/pull-request
| 2022-10-16T15:26:58 |
2025-04-01T06:40:29.752130
|
{
"authors": [
"zhiweiyin318"
],
"repo": "stolostron/multicluster-global-hub",
"url": "https://github.com/stolostron/multicluster-global-hub/pull/246",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2369581564
|
ACM-10812: Fix status report
We are still having issues with status reporting. In the latest instance, the endpoint-operator fails to update the metrics-collector deployment because of a conflict, then sets the status to degraded. And it remains in that state while the metrics are forwarded.
This PR:
Adds retry on conflict error to the metrics-collector related updates.
Changes the comparison between the found resources version and the desired ones to use DeepDerivative. This will reduce the number of unnecessary updates.
Adds retry on conflict for status updates made by the metrics-collector.
Sort conditions before updating the status in the metrics-collector to ensure it handles the most recent one.
I tried to change as few things as possible.
/retest
/cherrypick release-2.10
|
gharchive/pull-request
| 2024-06-24T08:27:23 |
2025-04-01T06:40:29.755251
|
{
"authors": [
"thibaultmg"
],
"repo": "stolostron/multicluster-observability-operator",
"url": "https://github.com/stolostron/multicluster-observability-operator/pull/1505",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2228549294
|
Upgraded to Go1.21 and dependencies
Upgraded all dependencies and go version to 1.21
/retest
Trying to get unit tests to pass
Getting unit tests to pass
|
gharchive/pull-request
| 2024-04-05T18:03:15 |
2025-04-01T06:40:29.757445
|
{
"authors": [
"ngraham20"
],
"repo": "stolostron/multiclusterhub-operator",
"url": "https://github.com/stolostron/multiclusterhub-operator/pull/1426",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2035947707
|
Links with absolute urls do not work
Hello,
Context
Currently, it's impossible to add an absolute URL link that would redirect the user from a Stoplight Elements page to a different page located outside of the Stoplight Elements scope.
Steps to Reproduce
Have @stoplight/elements generated API be placed under specific route, i.e. my-domain.com/api#.
Use hash router.
Have some pages outside of the @stoplight/elements scope, i.e. my-domain.com/guides
Attempt to add a link in the API documentation that would direct the user to the guides page, i.e. [check out our guides page](/guides)
Current Behavior
The generated link points to my-domain.com/api#/guides instead of my-domain.com/guides
Expected Behavior
I would expect a valid link to be generated, which would point to my-domain.com/guides.
Possible Workaround/Solution
No workarounds, unless the final host URL is known in advance.
Version used: Latest @stoplight/elements version
@manvydasu We propose the following enhancement to achieve what you're after:
pick some sort of token to use in the link definition to tell us to put the host in the URL (i.e. [check out our guides page]($$origin/guides))
when we come across this token, we'd replace it with the origin where the app is currently hosted
We will work with our Product team to prioritize this, but feel free to put up a PR for the proposed solution above in the meantime.
|
gharchive/issue
| 2023-12-11T15:35:01 |
2025-04-01T06:40:29.772493
|
{
"authors": [
"chohmann",
"manvydasu"
],
"repo": "stoplightio/elements",
"url": "https://github.com/stoplightio/elements/issues/2469",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1034896679
|
405 Method Not Allowed, response is invalid
Describe the bug
For operations not described in the OAS Prism will return a 405, but without the required Allow header value.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/405
To Reproduce
Created a default API in Stoplight Studio, exposing a User endpoint, the one provided out of the box.
Running this request against the prism mock.
curl --request GET 'http://127.0.0.1:3100/user'
Expected behavior
HTTP/1.1 405, Method Not Allowed
Allow: POST
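For reference, a spec-compliant 405 handler looks roughly like this in a hand-rolled server (Express sketch, unrelated to Prism's internals):
import express from "express";

const app = express();

// The only method the OAS describes for /user.
app.post("/user", (_req, res) => res.status(201).end());

// Any other method on /user: 405 plus the mandatory Allow header.
app.all("/user", (_req, res) => {
  res.set("Allow", "POST").status(405).end();
});

app.listen(3100);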
405 is coming for me for no reason. It was working 2 weeks back
> 405 is coming for me for no reason. It was working 2 weeks back
I'm not sure how this has anything to do with the issue I raised @abhayathapa?
|
gharchive/issue
| 2021-10-25T09:28:54 |
2025-04-01T06:40:29.775932
|
{
"authors": [
"abhayathapa",
"morten-nielsen"
],
"repo": "stoplightio/prism",
"url": "https://github.com/stoplightio/prism/issues/1929",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1265325083
|
Cache option not working
If i try to enable cache option:
buildModules: [
  ["@storyblok/nuxt", {
    accessToken: "xxxxx",
    apiOptions: {
      cache: { type: "memory" },
    },
  }],
],
I get an error:
Cannot add property accessToken, object is not extensible
This is also not working:
buildModules: [
  ["@storyblok/nuxt", {
    accessToken: "xxxxx",
    apiOptions: {
      accessToken: "xxxxx",
      cache: { type: "memory" },
    },
  }],
],
I am using the latest version.
If I remove the apiOptions property, Storyblok makes two requests:
301 https://api.storyblok.com/v2/cdn/stories/en?version=published&token=xxxxx&cv=undefined =>
200 https://api.storyblok.com/v2/cdn/stories/en?cv=1654702667&token=xxxxx&version=published
How can I prevent this 301?
Confirming this issue. Whenever I pass apiOptions to the module it start failing with [nuxt] [request error] Cannot add property accessToken, object is not extensible.
The problematic code seems to be this bit in the storyblokInit function (code taken from dist folder in node_modules):
const { bridge, accessToken, use = [], apiOptions = {} } = pluginOptions;
apiOptions.accessToken = apiOptions.accessToken || accessToken;
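One way to avoid the crash is to stop mutating the caller's (possibly frozen) options object and build a fresh one instead (a sketch only, not necessarily the fix that shipped in v4.3.0):
// Copy apiOptions before touching it: Nuxt may hand the module a
// non-extensible (frozen) object, so in-place assignment throws.
const { bridge, accessToken, use = [], apiOptions = {} } = pluginOptions;
const resolvedApiOptions = {
  ...apiOptions,
  accessToken: apiOptions.accessToken || accessToken,
};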
This should be fixed as of V4.3.0. Check https://github.com/storyblok/storyblok-nuxt/issues/170#issuecomment-1239527769.
Closing since it hasn't been active in a while and the latest comment from @StaffOfHades. If it's still happening feel free to re-open providing a valid reproduction link
Thanks!
|
gharchive/issue
| 2022-06-08T21:18:18 |
2025-04-01T06:40:29.808878
|
{
"authors": [
"StaffOfHades",
"alvarosabu",
"arpadgabor",
"do-web"
],
"repo": "storyblok/storyblok-nuxt",
"url": "https://github.com/storyblok/storyblok-nuxt/issues/149",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1878890707
|
Upgrade jest dependencies to v29 [rebased]
#319 + #345 but rebased on next.
Blocked by
#354
[ ] Release of #349
Dropping node 12 is fine, storybook itself also doesn't support node 12 anymore.
@legobeat Sounds good to me.
@yannbf Let's do this when you are back.
Hey there! Sorry for not checking this sooner. I'll update this PR and test it out next week!
|
gharchive/pull-request
| 2023-09-03T00:35:31 |
2025-04-01T06:40:29.848307
|
{
"authors": [
"kasperpeulen",
"legobeat",
"yannbf"
],
"repo": "storybookjs/test-runner",
"url": "https://github.com/storybookjs/test-runner/pull/348",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1245231027
|
[Bug] Protected properties are included in args table
The args table currently includes protected properties of components without indicating that they are protected.
I guess in most cases, protected properties do not need to be shown at all. In an API documentation, they are only relevant for classes that are intended to be subclassed. In other cases, protected members usually are an artifact of how a component is implemented instead of part of the API.
The angular framework itself excludes protected members from the public API except for classes that are explicitly marked as non-final in the documentation.
Yes, the plan was to exclude protected and private properties, and I simply overlooked the protected properties.
Since the target audience of the generated types is developers consuming public components, not extending them, I think it is fine to always exclude protected properties.
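For reference, filtering non-public members with the TypeScript compiler API can look like this (a sketch; the plugin's actual implementation may differ):
import ts from "typescript";

// True if a class member is part of the public API surface:
// neither private/protected modifiers nor a #private name.
function isPublicMember(member: ts.ClassElement): boolean {
  const flags = ts.getCombinedModifierFlags(member);
  if (flags & (ts.ModifierFlags.Private | ts.ModifierFlags.Protected)) {
    return false;
  }
  return !(member.name && ts.isPrivateIdentifier(member.name));
}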
|
gharchive/issue
| 2022-05-23T14:11:07 |
2025-04-01T06:40:29.850506
|
{
"authors": [
"Etienne-Buschong",
"Yogu"
],
"repo": "storybookjs/webpack-angular-types-plugin",
"url": "https://github.com/storybookjs/webpack-angular-types-plugin/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
192563241
|
Update Typescript Definition File closes #632
This adds the addDecorator exported function to the definitions.
Awesome.
Thanks.
|
gharchive/pull-request
| 2016-11-30T13:18:11 |
2025-04-01T06:40:29.851563
|
{
"authors": [
"arunoda",
"wmonk"
],
"repo": "storybooks/react-storybook",
"url": "https://github.com/storybooks/react-storybook/pull/634",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
206711028
|
Webpack 2
Hi, is there any guideline on how we can move Storybook to webpack 2?
Thanks!
refs #637
Note to self:
This fork seems to have done pretty much the same things I have also done, could be interesting to compare at some point.
@ndelangen Is this the proper issue to watch for progress on webpack 2 support in storybook?
We'll be releasing an 3.0.0.alpha.01 soonish, which will have webpack 2 support.
We'll probably have to write some good migration guides before the real release.
Webpack 2 support is already in master, so you could already give it a try manually by either npm link or local file dependencies.
We're real close to a 3.0.0-alpha.01 release people!
I think after this: https://github.com/storybooks/storybook/issues/773#issuecomment-297679733 I will be publishing!
Any updates on this?
I second this...any update?
I'm working on it as much as I can 👍
It looks like #773 was unblocked today. Is there anything still holding up a 3.0.0-alpha release? :smile:
Not too much, 🔬 details I think and then 🚢
I'm going to release 3.0.0-alpha.0 in a short while, so this can be closed.
|
gharchive/issue
| 2017-02-10T05:18:08 |
2025-04-01T06:40:29.856850
|
{
"authors": [
"ajhyndman",
"andrewcashmore",
"delijah",
"eatrocks",
"joscha",
"ndelangen",
"tomitrescak"
],
"repo": "storybooks/storybook",
"url": "https://github.com/storybooks/storybook/issues/690",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
221962143
|
2 storyshots libraries???
Issue by pyros2097
Saturday Mar 04, 2017 at 12:18 GMT
Originally opened as https://github.com/storybooks/storyshots/issues/82
Why are there 2 storyshots libraries? Which one should I use? It also seems they are different versions.
npm i -D @kadira/storyshots
npm i -D storyshots
Comment by mnmtanish
Saturday Apr 01, 2017 at 11:07 GMT
Please use the storyshots module
|
gharchive/issue
| 2017-04-15T17:39:28 |
2025-04-01T06:40:29.860768
|
{
"authors": [
"shilman"
],
"repo": "storybooks/storybook",
"url": "https://github.com/storybooks/storybook/issues/885",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
244799635
|
Add Docgen info to PropTable
Issue:
Storybook Info Addon currently does not support React docgen for documenting components. This was actually implemented on the old repo but got lost on the migration:
https://github.com/storybooks/react-storybook-addon-info/commit/092e10a736d9381ffdab5609ec7585df9d2cb7a1
What I did
Added support for react docgen on the PropTable component and documentation
How to test
Follow the instructions on the readme on a repo supporting flowtype. normal npm link approaches should be helpful
@jribeiro Thanks for contributing this! Is there any chance you can add this to the examples/cra-kitchen-sink example as part of this PR? We're starting to make sure that all PRs have good examples there.
To get the example running on your machine:
yarn && yarn bootstrap
cd examples/cra-kitchen-sink
yarn storybook
Many thanks again and please let me know if you have any questions!
Hi @shilman, thanks for that. I've added an example to the examples/cra-kitchen-sink. Let me know if you need any other change.
Hi @danielduan, is this not needed anymore?
Sorry about that, I tried to fix the merge conflicts and when I pushed to your branch, this got closed automatically. New PR is above ^
|
gharchive/pull-request
| 2017-07-21T22:40:33 |
2025-04-01T06:40:29.864606
|
{
"authors": [
"danielduan",
"jribeiro",
"shilman"
],
"repo": "storybooks/storybook",
"url": "https://github.com/storybooks/storybook/pull/1505",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
469272569
|
Opening titles component
[x] themeable variations
[x] data schema
[x] responsiveness
[x] micro interactions: hovers/active
[x] abstract anilink/button component variation?
Done in https://github.com/storycopter/storycopter/commit/94b649e4b4565330fd569bacb513d5882bc665c8
|
gharchive/issue
| 2019-07-17T15:00:55 |
2025-04-01T06:40:29.866946
|
{
"authors": [
"piotrf"
],
"repo": "storycopter/storycopter",
"url": "https://github.com/storycopter/storycopter/issues/22",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|