id | text | source | created | added | metadata
---|---|---|---|---|---|
166590101
|
Inconsistent code
source code
public int testfor() {
    int tmp = 1;
    for (int i = 10; i > -1; i--) {
        if (i > tmp) {
            for (int j = 0; j < 54; j += 4) {
                if (i < j) {
                    for (int k = j; k < j + 4; k++) {
                        if (tmp > k) {
                            return 0;
                        }
                    }
                    break;
                }
            }
        }
        tmp++;
    }
    return tmp;
}
decompile code
public int testfor() {
    int tmp = 1;
    for (int i = 10; i > -1; i--) {
        if (i > tmp) {
            int j = 0;
            while (j < 54) {
                if (i < j) {
                    for (int k = j; k < j + 4; k++) {
                        if (tmp > k) {
                            return 0;
                        }
                    }
                    continue;
                } else {
                    j += 4;
                }
            }
            continue;
        }
        tmp++;
    }
    return tmp;
}
another
source code
public void testswitchif(int i, int index) {
    switch (i) {
        case 0:
            if (index != 7) {
                if (index == 2) {
                    System.out.println(2);
                }
                if (index == 1) {
                    System.out.println(1);
                    break;
                }
            } else {
                System.out.println(7);
                return;
            }
            break;
        default:
            break;
    }
    System.out.println("end");
}
decompile code
public void testswitchif(int i, int index) {
    switch (i) {
        case 0:
            if (index != 7) {
                if (index == 2) {
                    System.out.println(2);
                }
                if (index == 1) {
                    System.out.println(1);
                    break;
                }
            }
            System.out.println(7);
            return;
            break;
    }
    System.out.println("end");
}
In the newest versions, testfor() is decompiled to
public int testfor() {
    int i = 1;
    for (int i2 = 10; i2 > -1; i2--) {
        if (i2 > i) {
            int i3 = 0;
            while (i3 < 54) {
                if (i2 < i3) {
                    for (int i4 = i3; i4 < i3 + 4; i4++) {
                        if (i > i4) {
                            return 0;
                        }
                    }
                    continue;
                } else {
                    i3 += 4;
                }
            }
            continue;
        }
        i++;
    }
    return i;
}
testswitchif is decompiled correctly
|
gharchive/issue
| 2016-07-20T14:07:04 |
2025-04-01T06:40:24.681830
|
{
"authors": [
"basherone",
"sergey-wowwow"
],
"repo": "skylot/jadx",
"url": "https://github.com/skylot/jadx/issues/130",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1953800777
|
Issues installing on macOS Sonoma, Python 3.11 and 3.12
While trying to install pymediasoup in the aforementioned environment I'm running into trouble, most likely with the av submodule.
(ms) ~ $ pip install pymediasoup
Collecting pymediasoup
Using cached pymediasoup-0.2.2-py3-none-any.whl (41 kB)
Collecting aiortc<2.0.0,>=1.2.0 (from pymediasoup)
Downloading aiortc-1.5.0.tar.gz (1.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 4.0 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting h264-profile-level-id<2.0.0,>=1.0.0 (from pymediasoup)
Using cached h264_profile_level_id-1.0.0-py2.py3-none-any.whl (5.0 kB)
Collecting pydantic<2.0.0,>=1.8.1 (from pymediasoup)
Obtaining dependency information for pydantic<2.0.0,>=1.8.1 from https://files.pythonhosted.org/packages/39/9f/ab6d19c5d3fccc1e3e0d835ac773031388802b31d93937daf878465c2ecf/pydantic-1.10.13-py3-none-any.whl.metadata
Downloading pydantic-1.10.13-py3-none-any.whl.metadata (149 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 149.6/149.6 kB 4.5 MB/s eta 0:00:00
Collecting pyee<9.0.0,>=8.1.0 (from pymediasoup)
Using cached pyee-8.2.2-py2.py3-none-any.whl (12 kB)
Collecting sdp-transform<2.0.0,>=1.0.1 (from pymediasoup)
Obtaining dependency information for sdp-transform<2.0.0,>=1.0.1 from https://files.pythonhosted.org/packages/0e/47/80a3782ebe97cddb5c91f22e736cb5270288b11b1829d6943d75b90d7d5a/sdp_transform-1.0.6-py3-none-any.whl.metadata
Using cached sdp_transform-1.0.6-py3-none-any.whl.metadata (598 bytes)
Collecting aioice<1.0.0,>=0.9.0 (from aiortc<2.0.0,>=1.2.0->pymediasoup)
Using cached aioice-0.9.0-py3-none-any.whl (24 kB)
Collecting av<11.0.0,>=9.0.0 (from aiortc<2.0.0,>=1.2.0->pymediasoup)
Downloading av-10.0.0.tar.gz (2.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 3.9 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [80 lines of output]
Compiling av/plane.pyx because it changed.
[1/1] Cythonizing av/plane.pyx
Compiling av/dictionary.pyx because it changed.
[1/1] Cythonizing av/dictionary.pyx
Compiling av/stream.pyx because it changed.
[1/1] Cythonizing av/stream.pyx
Compiling av/option.pyx because it changed.
[1/1] Cythonizing av/option.pyx
Compiling av/enum.pyx because it changed.
[1/1] Cythonizing av/enum.pyx
Compiling av/bytesource.pyx because it changed.
[1/1] Cythonizing av/bytesource.pyx
Compiling av/buffer.pyx because it changed.
[1/1] Cythonizing av/buffer.pyx
Compiling av/packet.pyx because it changed.
[1/1] Cythonizing av/packet.pyx
Compiling av/error.pyx because it changed.
[1/1] Cythonizing av/error.pyx
Compiling av/_core.pyx because it changed.
[1/1] Cythonizing av/_core.pyx
Compiling av/format.pyx because it changed.
[1/1] Cythonizing av/format.pyx
performance hint: av/logging.pyx:232:5: Exception check on 'log_callback' will always require the GIL to be acquired.
Possible solutions:
1. Declare the function as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
2. Use an 'int' return type on the function to allow an error code to be returned.
Error compiling Cython file:
------------------------------------------------------------
...
cdef const char *log_context_name(void *ptr) nogil:
cdef log_context *obj = <log_context*>ptr
return obj.name
cdef lib.AVClass log_class
log_class.item_name = log_context_name
^
------------------------------------------------------------
av/logging.pyx:216:22: Cannot assign type 'const char *(void *) except? NULL nogil' to 'const char *(*)(void *) noexcept nogil'. Exception values are incompatible. Suggest adding 'noexcept' to type 'const char *(void *) except? NULL nogil'.
Error compiling Cython file:
------------------------------------------------------------
...
# Start the magic!
# We allow the user to fully disable the logging system as it will not play
# nicely with subinterpreters due to FFmpeg-created threads.
if os.environ.get('PYAV_LOGGING') != 'off':
lib.av_log_set_callback(log_callback)
^
------------------------------------------------------------
av/logging.pyx:351:28: Cannot assign type 'void (void *, int, const char *, va_list) except * nogil' to 'av_log_callback'. Exception values are incompatible. Suggest adding 'noexcept' to type 'void (void *, int, const char *, va_list) except * nogil'.
Compiling av/logging.pyx because it changed.
[1/1] Cythonizing av/logging.pyx
Traceback (most recent call last):
File "/Users/decades/anaconda3/envs/ms/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/Users/decades/anaconda3/envs/ms/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/decades/anaconda3/envs/ms/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/7l/rc18f0m564qgtmjlzn6b_n5m0000gn/T/pip-build-env-hqi7pcce/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 355, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/7l/rc18f0m564qgtmjlzn6b_n5m0000gn/T/pip-build-env-hqi7pcce/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 325, in _get_build_requires
self.run_setup()
File "/private/var/folders/7l/rc18f0m564qgtmjlzn6b_n5m0000gn/T/pip-build-env-hqi7pcce/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 507, in run_setup
super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
File "/private/var/folders/7l/rc18f0m564qgtmjlzn6b_n5m0000gn/T/pip-build-env-hqi7pcce/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 341, in run_setup
exec(code, locals())
File "<string>", line 157, in <module>
File "/private/var/folders/7l/rc18f0m564qgtmjlzn6b_n5m0000gn/T/pip-build-env-hqi7pcce/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 1154, in cythonize
cythonize_one(*args)
File "/private/var/folders/7l/rc18f0m564qgtmjlzn6b_n5m0000gn/T/pip-build-env-hqi7pcce/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 1321, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: av/logging.pyx
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
WARNING: There was an error checking the latest version of pip.
(ms) ~ $
Has anybody heard about this? Any idea what to do?
PS: It works in a 3.9 VM. It also does not install in a 3.10 VM, but with a different error there.
I'm fine with 3.9
I see right now that there is only support up to 3.9.
v0.2.5 now supports python3.11
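For reference, the Cython compile errors above already point at the kind of change needed: under Cython 3, C callbacks that cannot raise must be declared noexcept so their type matches the plain C function-pointer slot. A minimal sketch inferred from the error text only, not the actual av patch:

cdef const char *log_context_name(void *ptr) noexcept nogil:
    # noexcept makes the function's type compatible with the
    # 'const char *(*)(void *) noexcept nogil' slot in lib.AVClass.item_name
    cdef log_context *obj = <log_context*>ptr
    return obj.name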
|
gharchive/issue
| 2023-10-20T08:31:41 |
2025-04-01T06:40:24.686699
|
{
"authors": [
"neilyoung",
"skymaze"
],
"repo": "skymaze/pymediasoup",
"url": "https://github.com/skymaze/pymediasoup/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1273795499
|
Feature: Add multi-threading and retry when caching Tokens and Markets during startup
Abstract
As of now, 2 getProgramAccounts RPC requests are made to a Solana RPC server on startup, first to cache the USDC markets, then the SOL markets. They are very slow and can time out, causing startup to fail.
Fix
Add multi-threading to the 2 RPC calls, so they are made concurrently. CompletableFuture may be preferred, or whichever modern Java concurrency pattern is most elegant.
If an RPC timeout is experienced in either thread, retry up to 3 times.
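A minimal sketch of the threaded-with-retry approach described above (the cacheMarkets and withRetry names are illustrative, not taken from the repo):

import java.util.concurrent.CompletableFuture;

// Run both market caches concurrently; retry each up to 3 times on failure.
CompletableFuture<Void> usdcMarkets = CompletableFuture.runAsync(() -> withRetry(3, () -> cacheMarkets("USDC")));
CompletableFuture<Void> solMarkets = CompletableFuture.runAsync(() -> withRetry(3, () -> cacheMarkets("SOL")));
CompletableFuture.allOf(usdcMarkets, solMarkets).join(); // block startup until both caches are ready

static void withRetry(int attempts, Runnable task) {
    for (int attempt = 1; ; attempt++) {
        try {
            task.run();
            return;
        } catch (RuntimeException e) { // e.g. an RPC timeout
            if (attempt == attempts) throw e;
        }
    }
}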
Retry doesn't seem needed anymore, now that the majority of the Solana network is on 1.10.25, and both markets cache in ~1-3 seconds with the threading.
Resolved by 54e3351
|
gharchive/issue
| 2022-06-16T16:20:17 |
2025-04-01T06:40:24.690310
|
{
"authors": [
"skynetcap"
],
"repo": "skynetcap/serum-data",
"url": "https://github.com/skynetcap/serum-data/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
836918163
|
Would it be reasonable to get the shared_ptr from a resource_handle?
Hello, I've been using the resource management classes and they've been great, but I ran into an issue as I've started to build out my project's accompanying UI library.
I would like this UI lib to not enforce an entt dependency on the consumer (especially not a requirement that they use an entt::resource_cache), but I would like to continue using it in my own project. Unfortunately, there's no way that I can see to get the underlying shared_ptr from a resource_handle, so I don't seem to have a good way to pass lifetime responsibilities along to the UI objects that will be using the resources.
My question is: would it be reasonable to add a way to get the shared_ptr from a resource_handle? Or is there some complication that I'm not seeing?
Technically speaking, it's trivial.
On the other side, I always wanted to remove the constraint on the shared pointer and have stateless loaders that manage their own memory. In this case, your suggestion would break the design.
Since caches are defined on a per-type basis, we could make the handle sfinae-friendly and allow user customizations. This would solve your issue. Actually, it's already possible.
Another approach is that your library exports its own handle type to callers rather than the EnTT one. Overall, it would also make it easier to modify your engine internals without affecting other libraries, at least as long as the API of your handle doesn't change. As a first implementation, it can just be an alias for the EnTT handle.
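A sketch of that last suggestion (the ui namespace and header name are hypothetical; entt::resource_handle is the handle type this thread is about):

// ui_handle.h - the UI library exposes its own handle type to consumers
#include <entt/resource/handle.hpp>

namespace ui {
    // For now just an alias for the EnTT handle; later it can become a real
    // wrapper type without changing the UI library's public API.
    template<typename Resource>
    using handle = entt::resource_handle<Resource>;
}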
Good ideas, I went with the specialization approach for now and it's working just fine. Thanks!
Please, leave this issue open. It made me think and I can achieve my goal with a few changes.
Briefly, if the loader returned a handle rather than a shared pointer, I could allow for specializations that don't rely on shared pointers anymore while still offering a default that works as it does nowadays.
A breaking change in the API, but also an easy one to manage, so I don't really care about it.
|
gharchive/issue
| 2021-03-20T21:48:29 |
2025-04-01T06:40:24.703697
|
{
"authors": [
"Net5F",
"skypjack"
],
"repo": "skypjack/entt",
"url": "https://github.com/skypjack/entt/issues/679",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2402742285
|
Incorrect type definition for thread_broadcast subtype
@slack/bolt version
3.19.0
Your App and Receiver Configuration
n/a
Node.js runtime version
18.12.0
Steps to reproduce:
The type declared for the thread_broadcast subtype does not match the actual event body delivered to the callback.
Actual
{
  ...envelopedEventStuff,
  "event": {
    "app_id": "<app_id>",
    "blocks": [],
    "bot_id": "<bot_id>",
    "channel": "<channel>",
    "channel_type": "<channel_type>",
    "event_ts": "<timestamp>",
    "subtype": "thread_broadcast",
    "text": "<text>",
    "ts": "<timestamp>",
    "type": "message",
    "user": "<user>"
  }
}
Type in SDK and Docs:
{
  type: 'message';
  subtype: 'thread_broadcast';
  event_ts: string;
  text: string;
  attachments?: MessageAttachment[];
  blocks?: (KnownBlock | Block)[];
  user: string;
  ts: string;
  thread_ts?: string;
  root: (GenericMessageEvent | BotMessageEvent) & {
    thread_ts: string;
    reply_count: number;
    reply_users_count: number;
    latest_reply: string;
    reply_users: string[];
  };
  client_msg_id: string;
  channel: string;
  channel_type: channelTypes;
}
Expected result:
Types in SDK must match the actual event body.
Actual result:
There is a mismatch between the event body being posted to the callback URL and the type definition.
What specifically is missing for you, @arunonl?
@filmaj root is the property that is missing, and it seems to be missing when a bot does a thread broadcast (it seems to work fine for a user).
I am unable to reproduce that. Here are the details for my test:
I wrote a simple app that listens for a message "do the broadcast" and replies-in-thread w/ a message "broadcast" that is also broadcasted.
I have a generic message listener which logs out details for the event payload.
Here is what the test looks like in the Slack client:
The code for my two handlers are:
app.message('broadcast', async ({ message, say, client, event, payload }) => {
  console.log('event', event);
});

app.message('do the broadcast', async ({ message, say, client, event, payload }) => {
  await say({
    text: 'broadcast',
    channel: message.channel,
    thread_ts: message.ts,
    reply_broadcast: true,
  });
});
And the event payload logging for the above test is:
event {
  subtype: 'thread_broadcast',
  bot_id: 'B060TP7C3SL',
  thread_ts: '1720705130.212029',
  root: {
    user: 'U02AEHE4KG9',
    type: 'message',
    ts: '1720705130.212029',
    client_msg_id: 'bc94cc53-8e54-4542-bdd7-d3f67ed922d9',
    text: 'do the broadcast',
    team: 'T029V6468RL',
    thread_ts: '1720705130.212029',
    reply_count: 1,
    reply_users_count: 1,
    latest_reply: '1720705130.809389',
    reply_users: [ 'U0604PD417C' ],
    is_locked: false,
    blocks: [ [Object] ]
  },
  type: 'message',
  ts: '1720705130.809389',
  app_id: 'A0601REQL93',
  text: 'broadcast',
  blocks: [ { type: 'rich_text', block_id: 'vc/fN', elements: [Array] } ],
  channel: 'C029YT5KEMB',
  event_ts: '1720705130.809389',
  channel_type: 'channel'
}
@filmaj thank you for trying to reproduce this. Looks like this happens when thread_ts is a random value that does not exist, like in the code below.
index.js
const express = require('express');
const app = express();
app.use(express.json());

app.post("/callback", (req, res) => {
  console.log(req.body);
  res.json({ challenge: req.body?.challenge });
});

app.listen(3000, async () => {
  console.log('Server is running on port 3000');
});
sendMessage.js
const signingSecret = "";
const token = "";
const channel = "";

const { App } = require('@slack/bolt');
const slackApp = new App({
  signingSecret,
  token
});

slackApp.client.chat.postMessage({
  channel,
  text: "broadcast message from bot",
  thread_ts: "1720709525",
  reply_broadcast: true
}).then((res) => {
  console.log('Message sent: ', res.ts);
}).catch((err) => {
  console.error(err);
});
Oh wow, actually you are right, and I think this is a backend bug: if you send a message via chat.postMessage and provide garbage for thread_ts and reply_broadcast: true, then a regular message will be posted and your app will receive a type: message + subtype: thread_broadcast event for that message!
Going to mark that part as a backend bug. Separately, yes, we can add an optional bot_id parameter to the event type.
|
gharchive/issue
| 2024-07-11T09:42:41 |
2025-04-01T06:40:24.736906
|
{
"authors": [
"arunonl",
"filmaj"
],
"repo": "slackapi/bolt-js",
"url": "https://github.com/slackapi/bolt-js/issues/2162",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1903848821
|
[BUG] Can't access CLA assistant
Describe the bug
I can't sign CLA from Contributing guidelines. I found an error.
I want to contribute but I can't now.
Requirements (place an x in each of the [ ])**
[x] I've read and understood the Contributing guidelines and have done my best effort to follow them.
[x] I've read and agree to the Code of Conduct.
[x] I've searched for any related issues and avoided creating a duplicate issue.
To Reproduce
Access to https://cla-assistant.io/slackapi/deno-slack-api
Expected behavior
A developer can begin to sign CLA.
Screenshots
Reproducible in:
deno-slack-api version:
Deno version:
OS version(s):
Additional context
Add any other context about the problem here.
@filmaj @WilliamBergamin Please check a site (CLA assistant) configuration or a contribution guide 👀
@whywaita, apologies for our delayed response here.
We've switched from the CLA assistant to the Salesforce one; however, it seems that we haven't updated the contribution guide yet 🤦 . Our team will resolve this later, so please proceed with your pull request and sign our newest CLA. Moreover, please remember that, if you plan to make significant changes, you should start a discussion before sending such code changes. Thank you for your interest!
@seratch Thank you for your quick comment!
I got it, I will open an issue and pull request.
Thank you very much for pointing this issue out. We've resolved the issue on the contribution guide ✅
|
gharchive/issue
| 2023-09-19T23:20:51 |
2025-04-01T06:40:24.743999
|
{
"authors": [
"seratch",
"whywaita"
],
"repo": "slackapi/deno-slack-api",
"url": "https://github.com/slackapi/deno-slack-api/issues/84",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1324757905
|
ModuleNotFoundError: No module named 'aiohttp'
Reproducible in:
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
The Slack SDK version
slack-sdk==3.18.1
Python runtime version
Python == 3.7
OS info
ProductName: macOS
ProductVersion: 12.4
BuildVersion: 21F79
Steps to reproduce:
1. Import both WebClient and SlackApiError from the slack package in my Apache Airflow DAG
2. Apache Airflow raises the error below
Expected result:
Apache Airflow DAG successfully loaded into the webserver UI
Actual result:
Broken DAG: [/opt/airflow/dags/trigger_dag.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/slack/__init__.py", line 7, in <module>
from slack_sdk.rtm import RTMClient # noqa
File "/home/airflow/.local/lib/python3.7/site-packages/slack_sdk/rtm/__init__.py", line 16, in <module>
import aiohttp
ModuleNotFoundError: No module named 'aiohttp'
Hi @PapiG0nz0 - aiohttp is an external module used by python-slack-sdk. I'm not so familiar with the Apache Airflow DAG environment, but you will need to make sure that module is available for the Python SDK to work.
@PapiG0nz0
I don't think that your Airflow DAG needs the RTM client. If you are developing new code, you can use slack_sdk modules instead.
This SDK provides slack modules for smooth migration from the slackclient v2 package. Refer to the v3 migration guide for details. If you have existing code, you can go with either of (see the sketch after this list):
Migrate to slack_sdk modules instead of importing slack
Add aiohttp to your dependencies if you need to import slack modules for some reason
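To illustrate the first option, the v3 import paths look like this (a minimal sketch; the token and channel are placeholders):

from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

client = WebClient(token="xoxb-your-token")
try:
    client.chat_postMessage(channel="#general", text="Hello from Airflow")
except SlackApiError as e:
    print(e.response["error"])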
I hope this helps.
|
gharchive/issue
| 2022-08-01T17:42:32 |
2025-04-01T06:40:24.765066
|
{
"authors": [
"PapiG0nz0",
"seratch",
"srajiang"
],
"repo": "slackapi/python-slack-sdk",
"url": "https://github.com/slackapi/python-slack-sdk/issues/1250",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1568879744
|
Send message even when Github Action fails
Description
I noticed that no message is sent whenever the action fails to execute (see the image below). Is there a way to force the message for any action state? There are cases where an error is as important as a success (e.g. failing to build the app and deploy it).
Fwiw, I'm using the Slack Workflow Builder approach.
What type of issue is this? (place an x in one of the [ ])
[ ] bug
[ ] enhancement (feature request)
[x] question
[ ] documentation related
[ ] example code related
[ ] testing related
[ ] discussion
Requirements (place an x in each of the [ ])
[x] I've read and understood the Contributing guidelines and have done my best effort to follow them.
[x] I've read and agree to the Code of Conduct.
[x] I've searched for any related issues and avoided creating a duplicate issue.
Hey @fnando, this is a pretty neat use case! It seems like this might be possible by adding if: ${{ always() }} to this step of your GitHub Action. An example with failure() can be found on these docs, but the same syntax should apply to always(). I hope this is helpful!
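For reference, the suggestion above would look roughly like this in a workflow step (a sketch; the payload shape and secret name are placeholders, not from the docs):

- name: Notify Slack
  if: ${{ always() }}   # run this step whether earlier steps succeeded or failed
  uses: slackapi/slack-github-action@v1
  with:
    payload: '{"status": "${{ job.status }}"}'
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}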
|
gharchive/issue
| 2023-02-02T22:28:24 |
2025-04-01T06:40:24.772306
|
{
"authors": [
"e-zim",
"fnando"
],
"repo": "slackapi/slack-github-action",
"url": "https://github.com/slackapi/slack-github-action/issues/179",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
180949497
|
Text input not following the keyboard
What is the best way to debug this behaviour so that I can send you a better issue description? I am asking because this should be fixed with version 9.4.
Thank you for your kindness. But I can't reproduce it. It seems that in my implementation the text input does not recognize that it becomes first responder.
Maybe I miss something new in the initialization stage?
The only code I use here is
inverted = false
bounces = true
shakeToClearEnabled = true
keyboardPanningEnabled = true
shouldScrollToBottomAfterKeyboardShows = true
let attachmentIcon = UIImage(named: "icon_Attachment") as UIImage?
leftButton.setImage(attachmentIcon, forState: .Normal)
// TODO: Find better solution that does not break autoconstraints
// ...
leftButton.translatesAutoresizingMaskIntoConstraints = true // this breaks autolayout with constraints
leftButton.frame = CGRectMake(8, 8, 25, 30)
leftButton.contentMode = .ScaleAspectFill
leftButton.imageView?.contentMode = .ScaleAspectFill
leftButton.imageView?.clipsToBounds = false
leftButton.tintColor = MyTheme.sbColorPaletteDarkGrey
rightButton.setTitle("Send", forState: .Normal)
textInputbar.autoHideRightButton = true
textInputbar.maxCharCount = 4000;
textInputbar.counterStyle = SLKCounterStyle.Split
textInputbar.counterPosition = SLKCounterPosition.Top
textInputbar.editorTitle.textColor = MyTheme.sbColorPaletteDarkGrey
textInputbar.editorLeftButton.tintColor = MyTheme.sbColorPaletteDarkGrey
textInputbar.editorRightButton.tintColor = MyTheme.sbColorPaletteDarkGrey
textView.layer.borderWidth = 0
textView.keyboardType = UIKeyboardType.Default
textView.placeholder = "Type new message here..."
textView.placeholderColor = MyTheme.sbColorPaletteMediumGrey
My class includes
class MessageStreamViewController: SLKTextViewController, MyCallbackDelegate, MyProtocol, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
Neither textInputbarDidMove is triggered, nor does the overridden didChangeKeyboardStatus fire.
@headkit Do you by any chance override the method func target(forAction action: Selector, withSender sender: Any?) -> Any? of SLKTextView? I modified the method to prevent "select/copy/paste menu" and screwed something up, when I removed the override it worked again. Spend my entire afternoon figuring it out...
@jpodcedensek hey, thanx for your thoughts on that - unfortunately I don't overwrite any SLKTextView methods at all. I am still diggin'...
strange - gone it is, working fine!
@headkit How to solve this problem? Could you let me know?
For anyone coming in after me: I had to disable IQKeyboardManagerSwift using the viewDidAppear and viewWillDisappear methods.
The above comment helped me.
I did the following:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    IQKeyboardManager.sharedManager().enable = false
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    IQKeyboardManager.sharedManager().enable = true
}
|
gharchive/issue
| 2016-10-04T17:02:38 |
2025-04-01T06:40:24.777557
|
{
"authors": [
"DanH-SyncInteractive",
"ShawnBaek",
"cbrandsma-cs",
"headkit",
"jpodcedensek"
],
"repo": "slackhq/SlackTextViewController",
"url": "https://github.com/slackhq/SlackTextViewController/issues/525",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
669988965
|
seperated -> separated
Fixes a typo
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. Evan Jensen seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
Thanks for the PR @evanpjensen! We can merge this once you signed the CLA.
Fixed by #302
|
gharchive/pull-request
| 2020-07-31T16:27:41 |
2025-04-01T06:40:24.781529
|
{
"authors": [
"CLAassistant",
"evanpjensen",
"nbrownus",
"wadey"
],
"repo": "slackhq/nebula",
"url": "https://github.com/slackhq/nebula/pull/264",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
527339846
|
Added basic macOS signing
In reference to issue #24, this adds a new make target for signing macOS binaries. I didn't integrate it into release because it would seem weird to only have the macOS signing there, and I don't have the time right now to figure out the Linux stuff, and I don't know the Windows one.
But to my knowledge this is about what it should look like. The default TEAMID is the Slack one (pulled from the signed macOS Slack app) but can be overridden for enterprises deploying their own forks of nebula (or ones with unmerged master changes) with:
make TEAMID=<Some other TEAMID> sign-darwin
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. Mitchell Grenier seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
This probably needs to be reworked for notarization. But I imagine there's some work there to support iOS.
|
gharchive/pull-request
| 2019-11-22T18:09:28 |
2025-04-01T06:40:24.785257
|
{
"authors": [
"CLAassistant",
"directionless",
"obelisk"
],
"repo": "slackhq/nebula",
"url": "https://github.com/slackhq/nebula/pull/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1625747929
|
Allow listen.host to contain names
On some networks (like fly.io, DO) it is desirable to listen on a specific address, and the easiest way to discover that address is by using a known name.
Closes #817
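A sketch of what this enables in the config (the hostname is a placeholder; listen.host and listen.port are existing nebula settings):

listen:
  # a resolvable name instead of an IP; resolved to an address at startup
  host: "myhost.internal"
  port: 4242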
any docs on this feature to add to https://nebula.defined.net?
|
gharchive/pull-request
| 2023-03-15T15:25:33 |
2025-04-01T06:40:24.787033
|
{
"authors": [
"jasikpark",
"nbrownus"
],
"repo": "slackhq/nebula",
"url": "https://github.com/slackhq/nebula/pull/825",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
785939777
|
Drop support for Ruby 2.3 and 2.4
Per discussion in #1360
No need for it to go into this PR, but what is the status of building on Ruby 3.0? Are we waiting for libraries to update to be compatible?
Middleman uses the deprecated URI::escape method which was removed in Ruby 3.0, so we have to wait for an upstream release there before slate can support 3.0.
Sorry to have missed this, but does the Gemfile need updating too?
https://github.com/slatedocs/slate/blob/e546ad54c52d56089fee9c1376f18598d6408264/Gemfile#L1
|
gharchive/pull-request
| 2021-01-14T12:13:35 |
2025-04-01T06:40:24.802008
|
{
"authors": [
"MasterOdin",
"MikeRalphson"
],
"repo": "slatedocs/slate",
"url": "https://github.com/slatedocs/slate/pull/1366",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1531617907
|
irrlicht 1.8 and .NET 7
Hi
Would it be possible to port your fantastic library so it supports latest irrlicht 1.8 and also .NET 7?
Thx
Hopefully it's as easy as changing the TFM from netcoreapp3.1 to net7.0-windows.
It does already target netcoreapp3.1 which you should be able to reference from a net7.0-windows assembly
https://www.nuget.org/packages/Irrlicht.NetCore.x64
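For illustration, the retarget is essentially a one-line TFM change in the project file (a sketch, not the actual Irrlicht.Net project file):

<PropertyGroup>
  <!-- was netcoreapp3.1; net6.0-windows or net7.0-windows for current .NET -->
  <TargetFramework>net6.0-windows</TargetFramework>
  <PlatformTarget>x64</PlatformTarget>
</PropertyGroup>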
Thanks i try it!
I have read that .NET Core will be dropped by Microsoft (or maybe it already has been).
Will you try to make .net 7 version?
Do you know if original irrlicht lime project is abandoned?
Is it possible to allow issues in the irrlicht lime fork? They are disabled.
I want to test it and report problems I find
Fixed, sorry about that, I never noticed issues were disabled!
I have made some custom changes in this fork. Mostly around setting the absolute position of scene nodes. The default behavior is unchanged.
I haven't updated the irrlicht version since forking irrlicht lime in 2020.
I don't see any activity on the original repo since 2019.
And yes, netcoreapp3.1 is out of supports since a few weeks ago
Do you want to check whether there are things not yet included/updated, so that the latest irrlicht version is supported?
I will check, but I don't have much personally invested. I welcome your contributions!
Unfortunately I can just use irrlicht and am not good enough to contribute to this :(
I've updated irrlicht to the latest copy from the mirror at https://gitlab.com/pgimeno/irrlicht-mirror.
Then, I updated my fork of irrlicht lime (Irrlicht.Net) to target net6.0-windows and use the VS2022 C++ toolset (v143; it was v142). Everything compiled but I did not run it.
my fork of irrlicht: https://github.com/slater1/Irrlicht
used by my fork of irrlicht lime: https://github.com/slater1/Irrlicht.Net
Can confirm it works. Tested with GraphicsTemplate
A call to GUIEnvironment.AddImage(...) threw an AccessViolationException. I don't have time to look into it. I commented it out in the app.
Be sure to copy Ijwhost.dll and Irrlicht.dll from Irrlicht.NetCore\bin\x64\Release. This is done automatically with the nuget packages but net6.0-windows isn't published (yet)
Thanks, I have the same result so I did it correctly :)
Can I ask why you stay with .NET 6.0 and not use 7.0?
Maybe it's a good idea to move the examples from irrlicht lime to your repo once they are all working?
Great that it works for you too! I chose 6.0 because it is supported longer than 7.0. https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core
I suppose it would be good to port some examples. I haven't because I don't need it and my fork is pretty obscure. But if it gathers interest it makes sense to.
|
gharchive/issue
| 2023-01-13T02:28:34 |
2025-04-01T06:40:24.812503
|
{
"authors": [
"blender-girl",
"slater1"
],
"repo": "slater1/blog",
"url": "https://github.com/slater1/blog/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
188903685
|
Try to cythonize when Cython is installed
Change to use Cython to compile .pyx, GCC to compile .cpp, or pure Python, depending on what is available when running python setup.py install.
Hello,
when I run python setup.py install
I get an error or warning:
building 'fastdtw._fastdtw' extension
setup.py:81: UserWarning: compilation failed. Installing pure python package
warnings.warn(reason+'compilation failed. Installing pure python package')
How can I fix it?
Thanks a lot
@zhahaoqiang
Could you paste the entire error message?
Please also take a look at:
https://github.com/slaypni/fastdtw/issues/13
|
gharchive/pull-request
| 2016-11-12T10:05:51 |
2025-04-01T06:40:24.816795
|
{
"authors": [
"slaypni",
"zhahaoqiang"
],
"repo": "slaypni/fastdtw",
"url": "https://github.com/slaypni/fastdtw/pull/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2443615327
|
Add configurable stream sorting
Possible options:
Order by relevance (this is the current default)
Order by download size
Order by download time
... (maybe more possibilities?)
The options are configurable here when accessing the search api:
https://github.com/sleeyax/stremio-easynews-addon/blob/726239538ccec2cc61c15d63a7f1cad8320983e3/packages/api/src/api.ts#L29-L34
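A sketch of how configurable sorting could look (the type and field names are hypothetical, not taken from the addon):

type SortOption = 'relevance' | 'size' | 'time';

interface StreamFile {
  relevance: number;
  sizeBytes: number;
  postedAt: number; // unix epoch seconds
}

const comparators: Record<SortOption, (a: StreamFile, b: StreamFile) => number> = {
  relevance: (a, b) => b.relevance - a.relevance, // current default
  size: (a, b) => b.sizeBytes - a.sizeBytes,      // order by download size
  time: (a, b) => b.postedAt - a.postedAt,        // order by download time
};

function sortFiles(files: StreamFile[], option: SortOption): StreamFile[] {
  return [...files].sort(comparators[option]);
}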
One suggestion would be if it's possible to sort by HDR thing.
Like priority would be Dolby Vision > HDR > 4k > 1080p
Create a new issue for that.
|
gharchive/issue
| 2024-08-01T22:46:43 |
2025-04-01T06:40:24.832018
|
{
"authors": [
"sleeyax",
"znre"
],
"repo": "sleeyax/stremio-easynews-addon",
"url": "https://github.com/sleeyax/stremio-easynews-addon/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
119064250
|
Small issue with Xcode 7
I just downloaded Chipmunk from here and ran the example in Xcode 7; as a result I see an issue in cpSpatialIndex.h: ../Chipmunk-7.0.1/include/chipmunk/cpSpatialIndex.h:57:5: Unknown command tag name.
/// @private
So I just changed it to
// @private
and it compiles OK. Xcode 7.1.1
It's fixed in the master branch (I think). I should probably make another tag.
+1, just ran into that error :)
|
gharchive/issue
| 2015-11-26T15:28:08 |
2025-04-01T06:40:24.835274
|
{
"authors": [
"KAMIKAZEUA",
"gr8bit",
"slembcke"
],
"repo": "slembcke/Chipmunk2D",
"url": "https://github.com/slembcke/Chipmunk2D/issues/114",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
234070011
|
Component Wrappers
Storages like FlaggedStorage and ChangedStorage (I'm assuming most "wrapper" storages as well) could be improved if it were possible to return wrapper types instead of just the components themselves:
pub trait UnprotectedStorage<T> {
    type Wrapper<'a>;
    type WrapperMut<'a>;
    ...
    fn get<'a>(&'a self) -> Self::Wrapper<'a>;
    fn get_mut<'a>(&'a mut self) -> Self::WrapperMut<'a>;
    ...
}

impl<T> UnprotectedStorage<T> for ChangedStorage {
    type Wrapper<'a> = &'a T;
    type WrapperMut<'a> = Tracked<&'a mut T>;
    ...
    fn get<'a>(&'a self) -> &'a T { ... }
    fn get_mut<'a>(&'a mut self) -> Tracked<&'a mut T> { ... }
    ...
}
Alternatively:
pub trait UnprotectedStorage<'a, T> {
    type Wrapper: 'a;
    type WrapperMut: 'a;
    ...
    fn get(&self) -> Self::Wrapper;
    fn get_mut(&mut self) -> Self::WrapperMut;
    ...
}

impl<'a, T> UnprotectedStorage<'a, T> for ChangedStorage {
    type Wrapper = &'a T;
    type WrapperMut = Tracked<'a, &'a mut T>;
    ...
    fn get<'a>(&'a self) -> &'a T { ... }
    fn get_mut<'a>(&'a mut self) -> Tracked<'a, &'a mut T> { ... }
    ...
}
Since it would eliminate the need for things like maintain methods and let these storages be used like any other.
Sadly, I don't think it is possible in current Rust's type system until we get either lifetime ATCs or a non-'static TypeId (since this is what prevents just adding a lifetime to UnprotectedStorage<T>).
It is possible with a little boilerplate.

trait Wrapper<'a> {
    type Mut;
    type Ref;
}

trait UnprotectedStorage<T> {
    type Wrapper: for<'a> Wrapper<'a>;
    fn get<'a>(&'a self) -> <Self::Wrapper as Wrapper<'a>>::Ref;
    fn get_mut<'a>(&'a mut self) -> <Self::Wrapper as Wrapper<'a>>::Mut;
}
I think this would unnecessarily complicate the API. If you need such a special solution you can just as well create your own component storages (which is quite easy as can be seen in specs-static).
|
gharchive/issue
| 2017-06-07T01:48:54 |
2025-04-01T06:40:24.845641
|
{
"authors": [
"Aceeri",
"omni-viral",
"torkleyy"
],
"repo": "slide-rs/specs",
"url": "https://github.com/slide-rs/specs/issues/168",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2156851058
|
Sli.dev and twoslash cause strange behavior
Describe the bug
When I use twoslash, the popovers appear statically on 1 slide, as shown in the screenshot below... (tested on both my custom theme and the default theme)
Presentation repo
Desktop (please complete the following information):
OS: Windows 11
Browser: 122.0.6261.69
Slidev version: ^0.47.5
Can you help narrow it down to a minimal reproduction? Thank you.
https://stackblitz.com/edit/slidev-btd8ya?file=slides.md
❯ npm list
slidev-btd8ya@ /home/projects/slidev-btd8ya
+-- @slidev/cli@0.48.0-beta.19
+-- @slidev/theme-default@0.25.0
`-- @slidev/theme-seriph@0.25.0
any updates?
By the way, your presentations are very cool! Don't you share their source code?
@kravetsone 👉 here
twoslash turned white on beta.... By the way, everything is in place when exporting to PDF
Can you fix it today? It's important to me)
I will try to fix this later. But it seems a little bit hard.
Thank you! Slidev is awesome...
I spent 1 hour on this but made no actual progress. (It's really hard!) However, there is a workaround that can hide all the popovers when the current slide doesn't need them.
add the following to your <root>/styles.css:

.v-popper__popper.v-popper--theme-twoslash {
  display: none;
}

add the following to your <root>/components/ShowTwoslash.vue:

<script setup>
import { computed } from 'vue'
import { useStyleTag } from '@vueuse/core'

useStyleTag(computed(() =>
  ['slide', 'presenter'].includes($renderContext.value) && $page.value === $slidev.nav.currentPage ? `
.v-popper__popper.v-popper--theme-twoslash {
  display: block !important;
}` : ''))
</script>

<template>
  <div />
</template>

if you want to enable Twoslash popovers on a specific slide, add the following code to that slide:

<ShowTwoslash />
It hides these elements on other slides but they are still on the edge))
well, I'll wait! Thanks for the work!
|
gharchive/issue
| 2024-02-27T15:11:54 |
2025-04-01T06:40:24.860508
|
{
"authors": [
"KermanX",
"antfu",
"kovsu",
"kravetsone"
],
"repo": "slidevjs/slidev",
"url": "https://github.com/slidevjs/slidev/issues/1349",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2601647303
|
Can't build Embedded example
Run ./build.sh, got the error in the screenshot. Need some help.
If I run it again, the first error is gone, but the last one remains.
can you verify that you are running it with a recent developer toolchain?
swift --version
should be something like Apple Swift version 6.1-dev (LLVM 89ccf4b8a46135a, Swift 6a5ae8d5df144dd)
@sliemeobn I'm using Swift 6.0 from https://github.com/swiftwasm/swift/releases.
So where can I find a 6.1 dev snapshot?
you don't need a wasm/wasi SDK or toolchain, just a plain Swift toolchain from swift.org.
Changing to a Swift 6.1 snapshot and running swift package update made it work. The readme just needs some guidance on this.
|
gharchive/issue
| 2024-10-21T07:39:45 |
2025-04-01T06:40:24.864280
|
{
"authors": [
"bestwnh",
"sliemeobn"
],
"repo": "sliemeobn/elementary-dom",
"url": "https://github.com/sliemeobn/elementary-dom/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
339172793
|
Add text/plain to known content types
Add text/plain to known content types in AbstractHandler.php.
Coverage remained the same at 97.106% when pulling fcea184d2813e33cc8eb1235c8c0c9b01df199ff on filips123:patch-1 into 1ca78596de8f1b0e2389b812a6ce64d1ccd9e49f on slimphp:3.x.
What is the use-case that this change is for?
It could be for some text-based or CLI websites and applications.
Do you have an actual app in production that needs this change? I'm struggling to imagine a text/plain website and haven't seen one in the wild.
No. This is just an idea, but it may be usable for someone.
I think we'll wait for someone to have a production use-case for this one.
|
gharchive/pull-request
| 2018-07-07T20:32:10 |
2025-04-01T06:40:24.869993
|
{
"authors": [
"akrabat",
"coveralls",
"filips123"
],
"repo": "slimphp/Slim",
"url": "https://github.com/slimphp/Slim/pull/2464",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
455175347
|
Python 3.5: no module named textblob
When I start this program in Python 3.5, I get a no-module error, but in Python 2 it works.
Why?
This is likely not a textblob-specific issue. Make sure you have your Python 3.5 virtual environment activated, with textblob installed within it.
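For example, a clean setup might look like this (the environment name and the python3.5 binary name are assumptions about the local machine):

python3.5 -m venv textblob-env        # create an isolated Python 3.5 environment
source textblob-env/bin/activate      # activate it
pip install textblob                  # install textblob into this environment
python -m textblob.download_corpora   # fetch the corpora textblob needs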
|
gharchive/issue
| 2019-06-12T12:07:26 |
2025-04-01T06:40:24.871317
|
{
"authors": [
"Schokolino1",
"sloria"
],
"repo": "sloria/TextBlob",
"url": "https://github.com/sloria/TextBlob/issues/270",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2251605835
|
chore: v2.0.0-rc.0: update adversarial test: Adversarial container-based builder
next step in release process
https://github.com/slsa-framework/slsa-github-generator/blob/main/RELEASE.md#adversarial-container-based-builder
slsa-framework/slsa-github-generator#3576
expected error
https://github.com/slsa-framework/example-package/actions/runs/8744989966/job/23999061366#step:2:205
Fetching the builder with ref: refs/tags/v2.0.0-rc.0
Builder version: v2.0.0-rc.0
BUILDER_REPOSITORY: slsa-framework/slsa-github-generator
verifier hash computed is 54e4f40bf120bce1cef1ff123fef3456e8c526f315c47e22ed6acfe02a06b9a8
verifier hash verification has passed
WARNING: Insecure SLSA_VERIFIER_TESTING is enabled.
Verified signature against tlog entry index 86835911 at URL: https://rekor.sigstore.dev/api/v1/log/entries/24296fb24b8ad77a4d71a9c543f0d1ccbeb37f2316571b12654dd2dac0abfd82392bb5b769507eeb
Verifying artifact slsa-builder-docker-linux-amd64: FAILED: expected hash '5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03' not found: artifact hash does not match provenance subject
FAILED: SLSA verification failed: expected hash '5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03' not found: artifact hash does not match provenance subject
Error: Process completed with exit code 6.
now succeeding
https://github.com/slsa-framework/example-package/actions/runs/8744989966/job/23999100929
|
gharchive/pull-request
| 2024-04-18T21:34:16 |
2025-04-01T06:40:24.886216
|
{
"authors": [
"ramonpetgrave64"
],
"repo": "slsa-framework/example-package",
"url": "https://github.com/slsa-framework/example-package/pull/362",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1262684169
|
E2E: go tag main SLSA3 adversarial asset provenance
Repo: https://github.com/slsa-framework/example-package/tree/v11.0.28
Run: https://github.com/slsa-framework/example-package/actions/runs/2451972251
Workflow name: go tag main SLSA3 adversarial asset provenance
Workflow file: https://github.com/slsa-framework/example-package/tree/main/.github/workflows/e2e.go.tag.main.adversarial-asset-provenance.slsa3.yml
Trigger: push
Branch: v11.0.28
Date: Tue Jun 7 03:50:38 UTC 2022
This e2e test is flaky. I've increased the time for tampering with the artifact, and we'll see if it helps.
Repo: https://github.com/slsa-framework/example-package/tree/v11.0.29
Run: https://github.com/slsa-framework/example-package/actions/runs/2458949391
Workflow name: go tag main SLSA3 adversarial asset provenance
Workflow file: https://github.com/slsa-framework/example-package/tree/main/.github/workflows/e2e.go.tag.main.adversarial-asset-provenance.slsa3.yml
Trigger: push
Branch: v11.0.29
Date: Wed Jun 8 03:54:05 UTC 2022
Tests are passing now. Closing this issue.
|
gharchive/issue
| 2022-06-07T03:50:39 |
2025-04-01T06:40:24.890738
|
{
"authors": [
"laurentsimon"
],
"repo": "slsa-framework/slsa-github-generator",
"url": "https://github.com/slsa-framework/slsa-github-generator/issues/176",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2196734282
|
[e2e]: go push main config-ldflags slsa3
Repo: https://github.com/slsa-framework/example-package/tree/main
Run: https://github.com/slsa-framework/example-package/actions/runs/8354560514
Workflow file: https://github.com/slsa-framework/example-package/tree/main/.github/workflows/e2e.go.push.main.config-ldflags.slsa3.yml
Workflow runs: https://github.com/slsa-framework/example-package/actions/workflows/e2e.go.push.main.config-ldflags.slsa3.yml
Trigger: push
Branch: main
Date: Wed Mar 20 06:10:52 UTC 2024
Repo: https://github.com/slsa-framework/example-package/tree/main
Run: https://github.com/slsa-framework/example-package/actions/runs/8365006501
Workflow file: https://github.com/slsa-framework/example-package/tree/main/.github/workflows/e2e.go.push.main.config-ldflags.slsa3.yml
Workflow runs: https://github.com/slsa-framework/example-package/actions/workflows/e2e.go.push.main.config-ldflags.slsa3.yml
Trigger: push
Branch: main
Date: Wed Mar 20 20:06:53 UTC 2024
Tests are passing now. Closing this issue.
|
gharchive/issue
| 2024-03-20T06:10:53 |
2025-04-01T06:40:24.895406
|
{
"authors": [
"laurentsimon"
],
"repo": "slsa-framework/slsa-github-generator",
"url": "https://github.com/slsa-framework/slsa-github-generator/issues/3386",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2509398987
|
[e2e]: generic workflow_dispatch branch1 default slsa3
Repo: https://github.com/slsa-framework/example-package/tree/branch1
Run: https://github.com/slsa-framework/example-package/actions/runs/10731702244
Workflow file: https://github.com/slsa-framework/example-package/tree/main/.github/workflows/e2e.generic.workflow_dispatch.branch1.default.slsa3.yml
Workflow runs: https://github.com/slsa-framework/example-package/actions/workflows/e2e.generic.workflow_dispatch.branch1.default.slsa3.yml
Trigger: workflow_dispatch
Branch: branch1
Date: Fri Sep 6 03:07:35 UTC 2024
Repo: https://github.com/slsa-framework/example-package/tree/branch1
Run: https://github.com/slsa-framework/example-package/actions/runs/10748107679
Workflow file: https://github.com/slsa-framework/example-package/tree/main/.github/workflows/e2e.generic.workflow_dispatch.branch1.default.slsa3.yml
Workflow runs: https://github.com/slsa-framework/example-package/actions/workflows/e2e.generic.workflow_dispatch.branch1.default.slsa3.yml
Trigger: workflow_dispatch
Branch: branch1
Date: Sat Sep 7 03:07:20 UTC 2024
Tests are passing now. Closing this issue.
|
gharchive/issue
| 2024-09-06T03:07:36 |
2025-04-01T06:40:24.900457
|
{
"authors": [
"ianlewis"
],
"repo": "slsa-framework/slsa-github-generator",
"url": "https://github.com/slsa-framework/slsa-github-generator/issues/3866",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
340871244
|
Front-end project's built dist served on Nginx returns a server 500 error
I wanted to simulate a production environment, but the built dist folder has a completely different structure from a standard project and contains no index.html, so running it directly on Nginx gives a 500 error.
After npm run build, a dist folder and an index_prod.html file are generated; take these two out and configure them in nginx, and it works.
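A minimal nginx sketch of that setup (the root path is a placeholder; index_prod.html is the build output described above):

server {
    listen 80;
    # directory containing the copied dist/ folder and index_prod.html
    root /var/www/app;
    index index_prod.html;

    location / {
        try_files $uri $uri/ /index_prod.html;
    }
}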
|
gharchive/issue
| 2018-07-13T03:43:47 |
2025-04-01T06:40:24.936447
|
{
"authors": [
"EGOISTW",
"smallsnail-wh"
],
"repo": "smallsnail-wh/interest",
"url": "https://github.com/smallsnail-wh/interest/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2631092628
|
🛑 CBT (Computer Based Test) Website (proxmox tunnel) is down
In 25640e6, CBT (Computer Based Test) Website (proxmox tunnel) (https://cbt-cfd.sman3palu.sch.id) was down:
HTTP code: 530
Response time: 62 ms
Resolved: CBT (Computer Based Test) Website (proxmox tunnel) is back up in 2ca706f after 15 minutes.
|
gharchive/issue
| 2024-11-03T09:40:24 |
2025-04-01T06:40:24.945404
|
{
"authors": [
"hansputera"
],
"repo": "smantriplw/uptime-services",
"url": "https://github.com/smantriplw/uptime-services/issues/1139",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2613286015
|
🛑 Dokumentasi/Foto Website (tunnel) is down
In 73a0aa7, Dokumentasi/Foto Website (tunnel) (https://dokumentasi.sman3palu.sch.id/owncloud) was down:
HTTP code: 530
Response time: 86 ms
Resolved: Dokumentasi/Foto Website (tunnel) is back up in 62e3f1b after 3 hours, 4 minutes.
|
gharchive/issue
| 2024-10-25T07:10:28 |
2025-04-01T06:40:24.947933
|
{
"authors": [
"hansputera"
],
"repo": "smantriplw/uptime-services",
"url": "https://github.com/smantriplw/uptime-services/issues/950",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
509792714
|
Support Mithril.js
Description
Right now we support the top players (React obviously, Angular and Vue), but (except for Hyperapp) we do not support other (established) frameworks.
We should go for integrating a converter to handle Mithril.js. Like all other converters it would be opt-in.
Background
Homepage mithril.js.org
GitHub github.com/MithrilJS/mithril.js
Discussion
Right now I can't think of anything to discuss.
Landed in develop / preview.
|
gharchive/issue
| 2019-10-21T07:53:32 |
2025-04-01T06:40:24.950430
|
{
"authors": [
"FlorianRappl"
],
"repo": "smapiot/piral",
"url": "https://github.com/smapiot/piral/issues/79",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
714914923
|
Fixed "Team members profile pictures were not showing"
Fixed "Team members profile pictures were not showing"
Fixes #251
Before
After
@smaranjitghose Please have a look at this PR.
|
gharchive/pull-request
| 2020-10-05T14:55:12 |
2025-04-01T06:40:24.952600
|
{
"authors": [
"dipanshparmar"
],
"repo": "smaranjitghose/doc2pen",
"url": "https://github.com/smaranjitghose/doc2pen/pull/253",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
493892832
|
Force list paths: use patterns instead of full paths
Hello!
I have an XML file where nodes contain nodes of the same type, and sometimes there is one element, sometimes several.
Example:
<catalog>
    <categories>
        <category/>
        <category>
            <categories>
                <category/>
            </categories>
        </category>
    </categories>
</catalog>
In object representation it is simple:
class Category {
    @Nullable
    Categories categories;
    // ... other meaningful fields
}

class Categories {
    // at least 1 item
    List<Category> list;
}
So, I need to force all of these to be lists instead of a one-item property.
.forceList("/catalog/categories/category")
.forceList("/catalog/categories/category/categories/category")
.forceList("/catalog/categories/category/categories/category/categories/category")
But my source XML can be nested to arbitrary depth. I need something like:
.forceList("./categories/category")
or
.forceList("*/categories/category")
Can you improve the tag check so that it does not search for an exact path but for a pattern, or some other kind of regex-like structure?
Probably it should be a method called forceListPattern(String pattern).
Hello Dmitri
I have started to look into this request and probably have a solution for it, but I need some time as I haven't mastered regex. For example * does not compile, but I can use "category$". I'll probably ask you to check / validate the changes I'm about to make before I publish for everyone.
Arnaud.
@smart-fun Hello!
I made a mistake with *. The correct form would be .forceList(".*/categories/category")
Let me make a PR; maybe you will like the solution I suggest. There is one additional method, with tests for it.
<catalog>
    <categories>
        <category/>
        <category>
            <categories>
                <category/>
            </categories>
        </category>
    </categories>
</catalog>
This may be converted to JSON:
{
    "catalog": {
        "categories": {
            "category": [
                {
                    "-self-closing": "true"
                },
                {
                    "categories": {
                        "category": {
                            "-self-closing": "true"
                        }
                    }
                }
            ]
        }
    },
    "#omit-xml-declaration": "yes"
}
Hello @relaxedSoul and @javadev
sorry for the delay. I added a method for regex long ago but didn't push it.
I just tried it a little as I'm not comfortable with regex. Could you please give the 1.5.0 pre-release a try?
the new method in the builder is: forceListPattern(String pattern)
thanks!
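For reference, usage would presumably mirror the existing builder API (a sketch; only forceListPattern is confirmed by this thread, and the Builder chain follows the library's usual pattern):

String json = new XmlToJson.Builder(xmlString)
        .forceListPattern(".*/categories/category")  // every matching tag becomes a JSON array
        .build()
        .toString();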
|
gharchive/issue
| 2019-09-16T07:40:52 |
2025-04-01T06:40:24.961386
|
{
"authors": [
"javadev",
"relaxedSoul",
"smart-fun"
],
"repo": "smart-fun/XmlToJson",
"url": "https://github.com/smart-fun/XmlToJson/issues/21",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1777514406
|
"with VS for condition" FHIR Condition category should specify "encounter" vs "problem-list"
https://build.fhir.org/valueset-condition-category.html
applies to counts, especially
core__count_condition_icd10_month
requested by @James-R-Jones
Related to "Encounter Reason" https://github.com/smart-on-fhir/cumulus-library/issues/31
Resolved: core__count_condition_month.cond_category_code
Note that core__count_condition_icd10_month no longer exists.
@comorbidity should we remove prior icd10_month datasets from the aggregator?
|
gharchive/issue
| 2023-06-27T18:25:50 |
2025-04-01T06:40:24.964968
|
{
"authors": [
"comorbidity",
"dogversioning"
],
"repo": "smart-on-fhir/cumulus-library",
"url": "https://github.com/smart-on-fhir/cumulus-library/issues/55",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1414197529
|
关于FPS
安卓:计算界面滑动的fps,>8.0系统的手机不支持游戏类的APP,视频类的APP也会有兼容问题。
iOS:xcode instruments的数据,有质疑数据不准确的,自己先用xcode比对一下。
https://bbs.perfdog.qq.com/?m=app&c=detail&a=index&id=201
看了perfdog的计算方法,用SurfaceView来计算的,但是我自己测试起来很多机器是没有更新数据的,除非perfdog不是用adb的方式获取。有了解的朋友可以给点意见。
Has this problem been solved?
|
gharchive/issue
| 2022-10-19T03:06:27 |
2025-04-01T06:40:24.967191
|
{
"authors": [
"morningofowl",
"rafa0128"
],
"repo": "smart-test-ti/SoloX",
"url": "https://github.com/smart-test-ti/SoloX/issues/79",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1374619789
|
[feature/sc-51602] Adding soak test support
Added a remote runner that runs the soak tests in k8s
Added logging to soak tests instead of panicking
Default value is 720h for the k8s env and 10m for the soak
Adjusted soak logic to switch between positive and negative answers
Should #95 be closed now?
|
gharchive/pull-request
| 2022-09-15T14:26:16 |
2025-04-01T06:40:24.969921
|
{
"authors": [
"archseer",
"smickovskid"
],
"repo": "smartcontractkit/chainlink-starknet",
"url": "https://github.com/smartcontractkit/chainlink-starknet/pull/120",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1127193999
|
AttributeError: 'function' object has no attribute 'transfer'
Hi!
I am getting the same error running aave_borrow.py, both on kovan and on mainnet-fork, when I try to use the approve_erc20 function.
My code looks like this:
from brownie import network, config, interface
from scripts.helpful_scripts import get_account
from scripts.get_weth import get_weth
from web3 import Web3
#0.1
AMOUNT = Web3.toWei(0.1, "ether")
def main():
account = get_account
erc20_address = config["networks"][network.show_active()]["weth_token"]
if network.show_active() in ["mainnet-fork"]:
get_weth()
# get_weth()
lending_pool = get_lending_pool()
# Approve sending out ERC20 tokens
approve_erc20(AMOUNT, lending_pool.address, erc20_address, account)
def approve_erc20(amount, spender, erc20_address, account):
print("Approving ERC20 token...")
erc20 = interface.IERC20(erc20_address)
tx = erc20.approve(spender, amount, {"from": account})
tx.wait(1)
print("Approved!")
return tx
def get_lending_pool():
# ABI
# Address
lending_pool_addresses_provider = interface.ILendingPoolAddressesProvider(
config["networks"][network.show_active()]["lending_pool_addresses_provider"]
)
lending_pool_address = lending_pool_addresses_provider.getLendingPool()
lending_pool = interface.ILendingPool(lending_pool_address)
return lending_pool
The error is popping up when the execution reaches the
tx = erc20.approve(spender, amount, {"from": account})
line.
Every other part of the code runs perfectly and gets the same results as in the video.
Can someone help me?
could you share the contract code here please? or are you using the interface on lesson 10?
I only use interfaces
I'll do more research on this; meanwhile, if you find any workaround please share.
I'm having the same issue
Solved it! Missing parentheses on the get_account function call.
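In other words, the bug was assigning a function reference instead of calling it:
# Bug: binds the function object itself, so `account` is not an account
account = get_account
# Fix: call the function and use its return value
account = get_account()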
|
gharchive/issue
| 2022-02-08T12:29:40 |
2025-04-01T06:40:24.991399
|
{
"authors": [
"akocrypto",
"cromewar",
"rkirmann"
],
"repo": "smartcontractkit/full-blockchain-solidity-course-py",
"url": "https://github.com/smartcontractkit/full-blockchain-solidity-course-py/issues/988",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
141148655
|
Fixes application not reconnecting as well as random crashing.
Fixes #1
This PR is ready for review.
Risk
This PR makes no API changes.
Testing Plan
Disconnect and reconnect an application over TCP/IP multiple times, as well as disconnecting the proxy and reconnecting then connecting an application.
Summary
This PR fixes an issue with applications not being able to reconnect. We also found an issue with applications crashing in a specific, reproducible incident.
Start Relay.
Connect to Core.
Force close Relay.
Reopen Relay.
Disconnect Relay.
Reconnect Relay.
Connect app to Relay.
Crash.
Changelog
Bug Fixes
Fixed EASession's category that was mistyped, which never closed the output stream.
Fixed issue with application randomly crashing because the read of NSStream was returning back -1 and we were still trying to read.
@jamescs can you please review?
|
gharchive/pull-request
| 2016-03-16T01:55:10 |
2025-04-01T06:40:24.995722
|
{
"authors": [
"asm09fsu"
],
"repo": "smartdevicelink/relay_app_ios",
"url": "https://github.com/smartdevicelink/relay_app_ios/pull/2",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
614095547
|
Crash zoom-out with 1.3.6 version
After updating to version 1.3.6 and passing "false" as the second parameter in the adapter, I get this crash during zoom-out:
2020-05-07 16:16:26.242 8705-8705/com.xxx E/RecyclerView: No adapter attached; skipping layout
2020-05-07 16:16:39.015 8705-8705/com.xxx E/MotionEvent-JNI: validatePointerIndex pointerIndex:-1, pointerCount:1
2020-05-07 16:16:39.017 8705-8705/com.xxx E/InputEventReceiver: Exception dispatching input event.
2020-05-07 16:16:39.020 8705-8705/com.xxx E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.xxx, PID: 8705
java.lang.IllegalArgumentException: pointerIndex out of range
at android.view.MotionEvent.nativeGetAxisValue(Native Method)
at android.view.MotionEvent.getX(MotionEvent.java:2122)
at com.smarteist.autoimageslider.SliderPager.onInterceptTouchEvent(SliderPager.java:2073)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2247)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2782)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2422)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2782)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2422)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2782)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2422)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2782)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2422)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2782)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2422)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2782)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2422)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2782)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2422)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2782)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2422)
at com.android.internal.policy.DecorView.superDispatchTouchEvent(DecorView.java:439)
at com.android.internal.policy.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1839)
at android.app.Activity.dispatchTouchEvent(Activity.java:3080)
at androidx.appcompat.view.WindowCallbackWrapper.dispatchTouchEvent(WindowCallbackWrapper.java:69)
at androidx.appcompat.view.WindowCallbackWrapper.dispatchTouchEvent(WindowCallbackWrapper.java:69)
at com.android.internal.policy.DecorView.dispatchTouchEvent(DecorView.java:401)
at android.view.View.dispatchPointerEvent(View.java:10293)
at android.view.ViewRootImpl$ViewPostImeInputStage.processPointerEvent(ViewRootImpl.java:4968)
at android.view.ViewRootImpl$ViewPostImeInputStage.onProcess(ViewRootImpl.java:4792)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:4298)
at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:4351)
at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:4317)
at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:4464)
at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:4325)
at android.view.ViewRootImpl$AsyncInputStage.apply(ViewRootImpl.java:4521)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:4298)
at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:4351)
at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:4317)
at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:4325)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:4298)
at android.view.ViewRootImpl.deliverInputEvent(ViewRootImpl.java:6857)
at android.view.ViewRootImpl.doProcessInputEvents(ViewRootImpl.java:6831)
at android.view.ViewRootImpl.enqueueInputEvent(ViewRootImpl.java:6774)
at android.view.ViewRootImpl$WindowInputEventReceiver.onInputEvent(ViewRootImpl.java:7029)
at android.view.InputEventReceiver.dispatchInputEvent(InputEventReceiver.java:185)
at android.view.InputEventReceiver.nativeConsumeBatchedInputEvents(Native Method)
at android.view.InputEventReceiver.consumeBatchedInputEvents(InputEventReceiver.java:176)
at android.view.ViewRootImpl.doConsumeBatchedInput(ViewRootImpl.java:6988)
at android.view.ViewRootImpl$ConsumeBatchedInputRunnable.run(ViewRootImpl.java:7055)
at android.view.Choreographer$CallbackRecord.run(Choreographer.java:907)
at android.view.Choreographer.doCallbacks(Choreographer.java:709)
2020-05-07 16:16:39.020 8705-8705/com.xxx E/AndroidRuntime: at android.view.Choreographer.doFrame(Choreographer.java:638)
at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:893)
at android.os.Handler.handleCallback(Handler.java:836)
at android.os.Handler.dispatchMessage(Handler.java:103)
at android.os.Looper.loop(Looper.java:203)
at android.app.ActivityThread.main(ActivityThread.java:6255)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1063)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:924)
This is a bug in SliderPager, thanks for reporting. Library updated.
thanks for your support smarteist.
So, should we wait for the 1.3.7 release?
yes now updated.
Confirm, bug fixed. Thank you
I'm facing this issue on 1.4.0, how can I fix it?
Error Log:
java.lang.IllegalArgumentException: pointerIndex out of range
at android.view.MotionEvent.nativeGetAxisValue(Native Method)
at android.view.MotionEvent.getX(MotionEvent.java:2608)
at androidx.viewpager.widget.ViewPager.onInterceptTouchEvent(ViewPager.java:2072)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2753)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3222)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2904)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3222)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2904)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3222)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2904)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3222)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2904)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3222)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2904)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3222)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2904)
at com.android.internal.policy.DecorView.superDispatchTouchEvent(DecorView.java:749)
at com.android.internal.policy.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1880)
at android.app.Activity.dispatchTouchEvent(Activity.java:3489)
at androidx.appcompat.view.WindowCallbackWrapper.dispatchTouchEvent(WindowCallbackWrapper.java:69)
at com.android.internal.policy.DecorView.dispatchTouchEvent(DecorView.java:707)
at android.view.View.dispatchPointerEvent(View.java:13732)
at android.view.ViewRootImpl$ViewPostImeInputStage.processPointerEvent(ViewRootImpl.java:6172)
at android.view.ViewRootImpl$ViewPostImeInputStage.onProcess(ViewRootImpl.java:5950)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:5399)
at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:5452)
at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:5418)
at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:5577)
at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:5426)
at android.view.ViewRootImpl$AsyncInputStage.apply(ViewRootImpl.java:5634)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:5399)
at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:5452)
at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:5418)
at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:5426)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:5399)
at android.view.ViewRootImpl.deliverInputEvent(ViewRootImpl.java:8464)
at android.view.ViewRootImpl.doProcessInputEvents(ViewRootImpl.java:8384)
at android.view.ViewRootImpl.enqueueInputEvent(ViewRootImpl.java:8337)
at android.view.ViewRootImpl$WindowInputEventReceiver.onInputEvent(ViewRootImpl.java:8579)
at android.view.InputEventReceiver.dispatchInputEvent(InputEventReceiver.java:198)
at android.view.InputEventReceiver.nativeConsumeBatchedInputEvents(Native Method)
at android.view.InputEventReceiver.consumeBatchedInputEvents(InputEventReceiver.java:187)
at android.view.ViewRootImpl.doConsumeBatchedInput(ViewRootImpl.java:8538)
at android.view.ViewRootImpl$ConsumeBatchedInputRunnable.run(ViewRootImpl.java:8606)
at android.view.Choreographer$CallbackRecord.run(Choreographer.java:949)
at android.view.Choreographer.doCallbacks(Choreographer.java:761)
at android.view.Choreographer.doFrame(Choreographer.java:690)
at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:935)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:214)
at android.app.ActivityThread.main(ActivityThread.java:7124)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:494)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:975)
|
gharchive/issue
| 2020-05-07T14:19:25 |
2025-04-01T06:40:25.021137
|
{
"authors": [
"Poissac9",
"developerGM",
"smarteist"
],
"repo": "smarteist/Android-Image-Slider",
"url": "https://github.com/smarteist/Android-Image-Slider/issues/130",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
692149296
|
Fix: missing select values
fix #82
The mongo search may not include the checked values.
1. When no checked values are missing from the first search, the result is the result of the first search.
2. When there are missing values, run a second search.
3. If a value is still missing (we were not able to get the checked value from search), run a third search.
When 2 or 3 happens, the final result is the combination of the 1st, 2nd, and 3rd searches.
Fails
:no_entry_sign:
Your PR has lint errors. Please fix these and commit them.
facet.js
src/example-types/facet.js
Expected method shorthand.
Line 93: result: async (node, search) => {
Unexpected block statement surrounding arrow body; parenthesize the returned value and move it immediately after the =>.
Line 152: let matchSelectedValues = (node) => {
:no_entry_sign:
Please assign someone to merge this PR, and optionally include people who should review.
Warnings
:warning:
The README has not been updated. Please update the README.
:warning:
Your PR has lint warnings. Please consider fixing these.
facet.js
src/example-types/facet.js
'mapKeywordFilters2' is assigned a value but never used.
Line 61: let mapKeywordFilters2 = node =>
:warning:
Branch being merged does not follow Git Flow
Messages
:book:
We were able to automatically fix some formatting issues in this PR for you!
:book:
Could not find any browser results.
Some things that were possibly fixed:
Code that could be fixed via the --fix flag
Formatting that could be fixed by prettier
Take a look at this commit to see what happened in detail: 535721a23bff51648e869f7354ed430d001392d5
And look at this wiki page to see the reasoning behind the ESLint rules: https://github.com/smartprocure/eslint-config-smartprocure/wiki/Rules-and-Why-We-Chose-Them
Generated by :no_entry_sign: dangerJS against e3a9576e4806d2febe7d8fbae2ac237124dc1e96
@doug I do not think we can write a unit test for this case, since there is no way to reproduce the missing-value bug reliably.
@chris110408 we definitely need a unit test for this
Yes, I am adding the unit test now.
zero count checked values
make sure to include a test case for the 0s (values that are checked but aren't in the response, e.g. because they were checked prior and then the criteria changed)
I may need @doug-patterson's help on that; I am not able to figure out how to mock config.
@chris110408 I don't think the config has anything to do with what Sam is asking for. You just need to write a unit test that shows that any selected value will appear as checked in the options list with a 0 count, even if it can't actually be aggregated out of the search results themselves.
In the context of the other tests I think one like this would be sufficient:
set up so that collection doesn't contain some selectedValue, but node.values includes it
run the tested code on the test data as usual
check that selectedValue is among the returned options with a 0 count
@daedalus28 can let us know if that's not what he was thinking of
@daedalus28, after I chatted with @doug-patterson, we found out we do need to mock the 4th argument "config" to facet.results, because we have to call it in https://github.com/smartprocure/contexture-mongo/blob/1ea34bbcfcc5d20487c6cefa41e9f3188f4fac6d/src/example-types/facet.js#L185 to get the stillmissingResult, and we need stillmissingResult to get the zero-count label. I do not think we can bypass mocking "config" to make the unit test work. Since the "config" argument never existed before, @doug-patterson would need to spend hours looking at this issue from scratch and researching across all the contexture libraries. @doug-patterson thinks you are the best person to help me with this issue.
Just mock out the couple of methods you're using - they're pretty straightforward
Using it and mocking out that method are two different stories; neither I nor @doug-patterson knows how to mock it out.
@daedalus28 and @doug-patterson yes, you guys are right. I was overthinking this; this is not an integration test, and mocking the config is a pretty straightforward step.
|
gharchive/pull-request
| 2020-09-03T17:28:05 |
2025-04-01T06:40:25.045250
|
{
"authors": [
"chris110408",
"daedalus28",
"decrapifier",
"doug-patterson"
],
"repo": "smartprocure/contexture-mongo",
"url": "https://github.com/smartprocure/contexture-mongo/pull/85",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2456974218
|
The output video only has sound; the picture is completely black. I don't know where the error is!
The program doesn't report any errors either, so I have no idea what's wrong!
Turn on denoising.
Is it this "denoising" option?
Thanks for the reply! Below is my full workflow. Thanks again!!!
I see you enabled save_video. Is there a saved video file in the output folder?
It saved, but the resulting video also has a black picture: just a black screen plus audio, the same as the one produced afterwards.
Did the VAE fail to decode? Check your VAE model and config file.
The VAE is just the one ComfyUI downloaded automatically, and my directory layout matches what your readme says. Do I need to change the vae value, i.e. change stabilityai/sd-vae-ft-mse to vae?
Yes. The vae I provided is only for convenient testing; the default VAE definitely won't cause errors.
OK, I'll change it and try again later. Thank you!
Was this solved in the end? I also get no errors, but no video comes out.
If no image comes out, it's mainly a problem with the UNet or the VAE, especially the VAE.
Same problem here: no errors, no image, a black video with sound. Did you solve it?
Same problem.
Solved. It seems my GPU was too small, so the generated video came out corrupted. I switched to a bigger GPU and it generated successfully.
|
gharchive/issue
| 2024-08-09T02:31:14 |
2025-04-01T06:40:25.116203
|
{
"authors": [
"foxyear-kyumin",
"letture",
"nancygd",
"rong7932",
"smthemex"
],
"repo": "smthemex/ComfyUI_EchoMimic",
"url": "https://github.com/smthemex/ComfyUI_EchoMimic/issues/45",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
952219959
|
Divisor Sum
Aim
Calculate the sum of all the divisors of the entered number and display it using Python.
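A minimal Python sketch of the task (whether n itself counts as a divisor is an assumption; drop the endpoint for proper divisors only):
def divisor_sum(n: int) -> int:
    # Sum every d in [1, n] that divides n exactly.
    return sum(d for d in range(1, n + 1) if n % d == 0)

number = int(input("Enter a number: "))
print(f"Sum of divisors of {number}: {divisor_sum(number)}")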
/assign
|
gharchive/issue
| 2021-07-25T07:57:05 |
2025-04-01T06:40:25.120465
|
{
"authors": [
"Manasi2001"
],
"repo": "smv1999/CompetitiveProgrammingQuestionBank",
"url": "https://github.com/smv1999/CompetitiveProgrammingQuestionBank/issues/748",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1145145723
|
[Enhancement]: Update to 2068?
What changes would you like?
Update to 2068?
Any extra information?
No response
I've just bumped to 2071, build is in progress.
2071 is now live in stable
or just update to 2074 ?
2079 is merged and available in edge
|
gharchive/issue
| 2022-02-20T21:57:22 |
2025-04-01T06:40:25.193079
|
{
"authors": [
"diddledani",
"jonas-sfx",
"ryanpcmcquen",
"zyga"
],
"repo": "snapcrafters/sublime-merge",
"url": "https://github.com/snapcrafters/sublime-merge/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2635879010
|
Parsing tsconfig doesn't work if the file contains comments
I run into issues where this action fails if the TypeScript config file contains comments. I think this is because TypeScript uses JSONC. You might be able to fix this by using TypeScript's JSON compiler, like this comment suggests: https://www.reddit.com/r/typescript/comments/8na5vb/comment/dzus6a9/
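A minimal sketch of that suggestion, using the TypeScript compiler API's JSONC-aware reader in place of JSON.parse (exactly where it slots into the action is an assumption):
import * as ts from 'typescript'

// ts.readConfigFile parses with TypeScript's own JSONC parser, so comments
// and trailing commas in tsconfig.json are accepted.
const { config, error } = ts.readConfigFile('tsconfig.json', ts.sys.readFile)
if (error) {
  throw new Error(ts.flattenDiagnosticMessageText(error.messageText, '\n'))
}
console.log(config.compilerOptions)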
Ended up making PR 601 to fix this. Let me know what you think!
Thx a lot ❤️
I will review it asap
|
gharchive/issue
| 2024-11-05T16:05:11 |
2025-04-01T06:40:25.195302
|
{
"authors": [
"abenhamdine",
"reccanti"
],
"repo": "snapshift/action-check-typescript",
"url": "https://github.com/snapshift/action-check-typescript/issues/600",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
158597547
|
viewing registers causes GDB to hang
Hi,
I noticed that if I run
voltron view reg
in another terminal, then my gdb will hang like this
(gdb) finish
Run till exit from #0 0x0000000000401a43 in Gets (dest=dest@entry=0x5561dc78 "\020\203\006") at support.c:163
getbuf () at buf.c:16
16 buf.c: No such file or directory.
1: x/i $pc
=> 0x4017b4 <getbuf+12>: mov $0x1,%eax
Value returned is $6 = 0x5561dc78 "012"
(cursor hangs here)
If I quit the register view, everything works fine.
What platform and version of GDB are you on?
Could you please follow the steps here and post your debug logs?
Thanks
Platform:
$ uname -a
Linux cl1 3.13.0-71-generic #114-Ubuntu SMP Tue Dec 1 02:34:22 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
GDB:
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1
And the logs (main.log and debugger.log):
https://gist.github.com/lazywei/48dfc51ff0f6397f48da8f1355fd372e
Is that Ubuntu 14.04?
yes, i think so
Oh wait it's there in the GDB version. Cool.
So, when you ran through those steps I guess GDB hung after run and you weren't able to stepi?
GDB hung after finish and I wasn't able to stepi
I can't reproduce this issue. I've set a breakpoint in the test inferior, hit it and finished successfully without GDB hanging.
Can you please follow the steps in the Troubleshooting page I linked exactly (using the test inferior included with Voltron, and without finish) and make sure that also works as expected?
If you could try to reproduce the issue with an inferior that I have access to that would be helpful in trying to reproduce the issue here.
I guess this case is related to the executable I'm debugging with. So I'm not sure if this is really a bug for Voltron. :-P
Did you try it without loading Voltron? GDB can be pretty crashy at the best of times :(
It works fine with Voltron --- as long as I don't open register view. lol
I won't mind if you prefer to close this issue, btw.
No worries. Thanks.
|
gharchive/issue
| 2016-06-06T04:09:04 |
2025-04-01T06:40:25.206914
|
{
"authors": [
"lazywei",
"snare"
],
"repo": "snare/voltron",
"url": "https://github.com/snare/voltron/issues/151",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2538819538
|
Feature request: DM user if they follow bridged user but aren't bridged themselves
When someone follows my fedi account bridged to BSky, and they are not bridged themselves, nothing happens. I.e. I don't find out about it and I cannot follow back.
If I do double check my follower lists and compare and find out about this person, I can then DM the bridge account in Fedi to have the bridge account in BSky nudge the person. That's too complicated.
They should be informed right away when following me, ideally by my own bridged account and not the bridge, that they need to follow @ap.brid.gy to enable normal interactions with me.
This probably should have a bit of logic to check whether the user fulfills the requirements for bridging, ideally informing them how to do so if that's not the case.
|
gharchive/issue
| 2024-09-20T13:23:08 |
2025-04-01T06:40:25.209346
|
{
"authors": [
"Tamschi",
"h-2"
],
"repo": "snarfed/bridgy-fed",
"url": "https://github.com/snarfed/bridgy-fed/issues/1340",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
103087656
|
sncosmo.realize_lcs should have a key to switch off scatter
Currently, sncosmo.realize_lcs() obtains bandfluxes at times and filters indicated by obstable and calculates the fluxerror from the skynoise column of obstable, and the fluxes. It then adds a scatter to the flux values based on fluxerror. While all of this is correct, and should be the default, it is nice for checking to be able to turn the scatter off. Because it correctly uses the fluxes too to calculate the noise, this cannot be turned off by simply changing the skynoise column to 0.0. One could plot the model light curve, but it is nice to see what the points are at the observed times and bands only.
I would like to add a boolean argument scatter to the function's call signature with a default value of True.
If scatter=True, the current behavior will be replicated. If scatter=False, then the scatter will not be added to the model.bandflux calls. This is implemented in https://github.com/rbiswas4/sncosmo/tree/turnoffscatter
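A usage sketch of the proposed flag (the observations table follows realize_lcs's documented column set; values are illustrative):
import sncosmo
from astropy.table import Table

model = sncosmo.Model(source='salt2')
params = [{'z': 0.5, 't0': 0.0, 'x0': 1.e-5, 'x1': 0., 'c': 0.}]
obstable = Table({'time': [0., 10., 20.],
                  'band': ['sdssr'] * 3,
                  'gain': [1.] * 3,
                  'skynoise': [100.] * 3,
                  'zp': [25.] * 3,
                  'zpsys': ['ab'] * 3})

# Current behavior: fluxes are perturbed according to fluxerror.
lcs = sncosmo.realize_lcs(obstable, model, params, scatter=True)

# Proposed: exact model fluxes at the observed times/bands, errors still reported.
lcs_clean = sncosmo.realize_lcs(obstable, model, params, scatter=False)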
Is there a reason to avoid having this?
Can this be modified slightly to support similar use cases/ wishlist items other people have?
Sounds like a good idea to me. scatter does seem like a pretty good name for the argument. Can you create a PR from your branch?
closed by #106
|
gharchive/issue
| 2015-08-25T18:27:51 |
2025-04-01T06:40:25.213236
|
{
"authors": [
"kbarbary",
"rbiswas4"
],
"repo": "sncosmo/sncosmo",
"url": "https://github.com/sncosmo/sncosmo/issues/105",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
410943016
|
support for respeaker v2
I own a ReSpeaker V2 with 4 mics and 12 LEDs, and would like to know if this also works, as it is connected via USB.
Hello,
It's not yet supported for all USB-based respeakers. But I would say it's a nice-to-have for the future.
|
gharchive/issue
| 2019-02-15T21:03:06 |
2025-04-01T06:40:25.321731
|
{
"authors": [
"CoorFun",
"thundergreen"
],
"repo": "snipsco/snips-skill-respeaker",
"url": "https://github.com/snipsco/snips-skill-respeaker/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
508840864
|
Official documentation for the gitalk plugin
https://snowdreams1006.github.io/hexo-plugin-gitalk/
Official documentation for the hexo-plugin-gitalk plugin
"hexo-plugin-gitalk": "^0.2.0": the latest version introduces MD5 hashing, fixing the gitalk id config option at a length of 32, so you no longer have to worry about errors caused by overly long paths.
Please update to the latest version, and if you run into any problems, feel free to leave feedback!
The way this plugin is written is a bit crude... it even renders a comment box after every article on the homepage article list.
Cool
|
gharchive/issue
| 2019-10-18T04:01:36 |
2025-04-01T06:40:25.356792
|
{
"authors": [
"Veitor",
"aritlh",
"snowdreams1006"
],
"repo": "snowdreams1006/hexo-plugin-gitalk",
"url": "https://github.com/snowdreams1006/hexo-plugin-gitalk/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2400982442
|
SNOW-1528909: Snowflake CLI cannot handle UTF-16LE encoded text files
SnowCLI version
1.6.0rc0
Python version
Python 3.11.9
Platform
macOS-14.5-arm64-arm-64bit
What happened
Powershell redirects (e.g. command > file) by default encode output using UTF-16LE. Unfortunately, Snowflake CLI in a lot of paths is assuming utf-8 encoding, which makes common workflows fail there. Here's an example PR that simply changes the input for a snow sql -f command to use that encoding, showing the failure: #1299
Console output
src/snowflake/cli/api/commands/snow_typer.py:96: in command_callable_decorator
result = command_callable(*args, **kw)
src/snowflake/cli/api/commands/decorators.py:158: in wrapper
return func(**options)
src/snowflake/cli/api/commands/decorators.py:158: in wrapper
return func(**options)
src/snowflake/cli/plugins/sql/commands.py:82: in execute_sql
single_statement, cursors = SqlManager().execute(query, files, std_in, data=data)
src/snowflake/cli/plugins/sql/manager.py:60: in execute
query_from_file = SecurePath(file).read_text(
src/snowflake/cli/api/secure_path.py:157: in read_text
return self._path.read_text(*args, **kwargs)
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/pathlib.py:1059: in read_text
return f.read()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <encodings.utf_8.IncrementalDecoder object at 0x7f7815a95f90>
input = b'\xff\xfe/\x00*\x00\n\x00 \x00C\x00o\x00p\x00y\x00r\x00i\x00g\x00h\x00t\x00 \x00(\x00c\x00)\x00 \x002\x000\x002\x004\...00e\x00c\x00t\x00 \x00r\x00o\x00u\x00n\x00d\x00(\x00l\x00n\x00(\x001\x000\x000\x00)\x00,\x00 \x004\x00)\x00;\x00\n\x00'
final = True
> ???
E UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
<frozen codecs>:322: UnicodeDecodeError
How to reproduce
1. Encode a file using UTF-16LE
2. Use it as snowflake.yml, as a post-deploy hook, or as an input to snow sql -f
3. Observe a utf-8 codec error
We may need to use a tool like https://github.com/jawah/charset_normalizer
I think we could get away with something a little lighter-weight and more deterministic. BOM detection alone will solve the standard codepath for Windows, and if we give users the ability to use (python-standard? *nix locale?) env vars to match any overrides they've made on their local system, that coverage should be enough to resolve this ticket.
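For illustration, a minimal sketch of the BOM-detection idea (standalone; where exactly it would hook into the CLI's file reading, e.g. SecurePath.read_text, is an assumption):
import codecs

# Longer BOMs first, so UTF-32 LE is not misdetected as UTF-16 LE (shared prefix).
_BOMS = [
    (codecs.BOM_UTF32_LE, "utf-32-le"),
    (codecs.BOM_UTF32_BE, "utf-32-be"),
    (codecs.BOM_UTF8, "utf-8"),
    (codecs.BOM_UTF16_LE, "utf-16-le"),
    (codecs.BOM_UTF16_BE, "utf-16-be"),
]

def decode_with_bom(raw: bytes, default: str = "utf-8") -> str:
    for bom, encoding in _BOMS:
        if raw.startswith(bom):
            return raw[len(bom):].decode(encoding)  # strip the BOM bytes
    return raw.decode(default)

text = decode_with_bom(open("script.sql", "rb").read())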
|
gharchive/issue
| 2024-07-10T14:47:14 |
2025-04-01T06:40:25.369697
|
{
"authors": [
"sfc-gh-cgorrie",
"sfc-gh-turbaszek"
],
"repo": "snowflakedb/snowflake-cli",
"url": "https://github.com/snowflakedb/snowflake-cli/issues/1303",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2523050687
|
SNOW-1662210: Add ability to patch for Session sql method
What is the current behavior?
Unable to patch Session.sql() method
import pytest
from project.utils import get_env_var_config
from snowflake.snowpark.session import Session
def pytest_addoption(parser):
parser.addoption("--snowflake-session", action="store", default="live")
@pytest.fixture(scope='module')
def session(request) -> Session:
if request.config.getoption('--snowflake-session') == 'local':
return Session.builder.configs({'local_testing': True}).create()
else:
return Session.builder.configs(get_env_var_config()).create()
Error: NotImplementedError: [Local Testing] Session.sql is not supported.
What is the desired behavior?
Ability to patch Session.sql() so code that uses Session.sql() can have tests made
import pytest
from project.utils import get_env_var_config
from snowflake.snowpark.session import Session
from unittest.mock import patch
def pytest_addoption(parser):
parser.addoption("--snowflake-session", action="store", default="live")
def mock_sql():
# code that will use create_dataframe to create the test table for the procedure based on a
# given query string
pass
@pytest.fixture(scope='module')
def session(request) -> Session:
if request.config.getoption('--snowflake-session') == 'local':
mock_session = Session.builder.configs({'local_testing': True}).create()
with patch.object(mock_session, 'sql', side_effect=mock_sql):
return mock_session
else:
return Session.builder.configs(get_env_var_config()).create()
If this is not an existing feature in snowflake-snowpark-python. How would this impact/improve non local testing mode?
It would allow users to work around the NotImplementedError for Session.sql when it is required in the code
References, Other Background
https://docs.snowflake.com/en/developer-guide/snowpark/python/tutorials/testing-tutorial#configure-local-testing
Thanks for reaching out.
For sql operations, we presently recommend patching the sql method manually, just like what you posted: https://docs.snowflake.com/en/developer-guide/snowpark/python/testing-locally#sql-operations
May I ask if you are looking for other ways to patch the sql call?
I don't think I am doing it correctly. What I am trying to accomplish is patching the sql method to create the desired dataframes for testing a sproc. However, during setup I get the following error: AttributeError: 'NoneType' object has no attribute 'create_dataframe'. I have updated my code above to reflect what I am trying to do.
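For reference, a sketch of one way to wire the patch so the replacement can actually build dataframes (the table contents are placeholders; the key point is keeping the real session in scope so create_dataframe is called on an actual Session rather than None):
from unittest import mock
from snowflake.snowpark.session import Session

def make_local_session() -> Session:
    session = Session.builder.configs({"local_testing": True}).create()

    def fake_sql(query: str, *args, **kwargs):
        # Placeholder: return whatever test table the given query should produce.
        return session.create_dataframe([[1, "a"]], schema=["id", "label"])

    # Patch the bound method on this instance; keep the patcher alive for the
    # session's lifetime (e.g. stop it in fixture teardown).
    mock.patch.object(session, "sql", side_effect=fake_sql).start()
    return session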
|
gharchive/issue
| 2024-09-12T18:14:59 |
2025-04-01T06:40:25.397620
|
{
"authors": [
"sfc-gh-aling",
"treyhannamconga"
],
"repo": "snowflakedb/snowpark-python",
"url": "https://github.com/snowflakedb/snowpark-python/issues/2286",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
400841724
|
Schema DDL: port checkVersionsConsistency from igluctl
Useful for multiple applications including Server.
Migrated to https://github.com/snowplow-incubator/schema-ddl/issues/11
|
gharchive/issue
| 2019-01-18T18:37:21 |
2025-04-01T06:40:25.403135
|
{
"authors": [
"aldemirenes",
"chuwy"
],
"repo": "snowplow/iglu",
"url": "https://github.com/snowplow/iglu/issues/472",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
304641692
|
Use Constructor instead of Tang injector
We create physical objects (sources, operators, and sinks) using the Tang injector. However, Tang injection is too slow compared to calling the constructor directly, which degrades system performance during object creation (a sketch of the two paths follows the numbers below). We should fix this problem.
A simple experimental result:
Time to create 100,000 map operators with Tang injector: 14,989 ms
Time to create 100,000 map operators with constructor: 322 ms
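For illustration, a sketch of the two paths (MapOperator's constructor arguments are illustrative; the injector calls follow Tang's standard API):
import org.apache.reef.tang.Injector;
import org.apache.reef.tang.Tang;

// Slow path: reflective instantiation through the Tang injector.
final Injector injector = Tang.Factory.getTang().newInjector(conf);
final MapOperator mapOp = injector.getInstance(MapOperator.class);

// Fast path: call the constructor directly once the dependencies are known.
final MapOperator mapOp2 = new MapOperator(mapFunction);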
|
gharchive/issue
| 2018-03-13T06:23:45 |
2025-04-01T06:40:25.425302
|
{
"authors": [
"taegeonum"
],
"repo": "snuspl/mist",
"url": "https://github.com/snuspl/mist/issues/1014",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
620005254
|
Integrate the tool of inserting table into the toolbar module of quill (Fix #34)
I built a module out of Rudy's solution and tried to do it like the official Quill pickers. The integration was not possible without extending the Snow theme.
The solution looks like this:
Usage
var quill = new Quill("#quill-div", {
"theme": "better-table-snow",
"modules": {
"better-table": [],
"keyboard": {
"bindings": quillBetterTable.keyboardBindings
},
"toolbar": [
["clean"],
[{
"list": "ordered"
}, {
"list": "bullet"
}],
[{
"indent": "-1"
}, {
"indent": "+1"
}],
["bold", "italic", "underline", "strike", {
"script": "super"
}, {
"script": "sub"
}],
["link", {
"better-table": []
}]
]
}
});
I also added some small changes in the code (e.g. split up the build process to make parts of it work under Windows, and pass an empty object as the operationMenu config if none is set).
@simialbi thanks for this work, have also integrated and it works nicely. Performance when inserting larger tables is slow however - any ideas how to fix?
Could not reproduce it. Which size are you talking about?
7x7. Could be due to my integration with ReactQuill in that case if you are unable to reproduce. Thanks for the reply, will keep you posted if I find the cause
From: simialbi notifications@github.com
Sent: Tuesday, June 23, 2020 5:11:33 PM
To: soccerloway/quill-better-table quill-better-table@noreply.github.com
Cc: delewis13 daniel.elliott.lewis@gmail.com; Comment comment@noreply.github.com
Subject: Re: [soccerloway/quill-better-table] Integrate the tool of inserting table into the toolbar module of quill (Fix #34) (#56)
Could not reproduce it. Which size are you talking about?
—
You are receiving this because you commented.
Reply to this email directly, view it on GitHubhttps://github.com/soccerloway/quill-better-table/pull/56#issuecomment-647954744, or unsubscribehttps://github.com/notifications/unsubscribe-auth/AEZMJXPIS2QTFWKMSJ4DPXDRYBISLANCNFSM4ND3IPOQ.
why don't you merge it ?
|
gharchive/pull-request
| 2020-05-18T08:19:31 |
2025-04-01T06:40:25.449808
|
{
"authors": [
"Daedra22",
"delewis13",
"simialbi"
],
"repo": "soccerloway/quill-better-table",
"url": "https://github.com/soccerloway/quill-better-table/pull/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
154742943
|
Allow rails5
Update prefactory to work with rails5.
+1, thanks!
|
gharchive/pull-request
| 2016-05-13T15:46:51 |
2025-04-01T06:40:25.459949
|
{
"authors": [
"cashins",
"seanwalbran"
],
"repo": "socialcast/prefactory",
"url": "https://github.com/socialcast/prefactory/pull/10",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
299432770
|
Speeding up DNS requests
Hi,
I needed a simple local DNS server to help me resolve internal names like ip-238.32.42.14.*.internal locally to the implied IP address. RubyDNS worked well for this and have a simple script working and resolving the names as expected. I've configured a RubyDNS server as the first entry via network manager / resolv.conf in linux.
However my DNS requests are resolving significantly more slowly when requesting a regular record. I'm no DNS expert but assume that generally on a host some caching mechanism is employed and that's why I'm so much slower when running the script.
Is the caching assumption correct regarding the slowdown? If so should I just use a map and some sort of timer to expire the records in that map? Can you give me other advice regarding my resolver or DNS configuration?
Thanks again for providing this DNS solution!
require 'rubydns'
INTERFACES = [
[:udp, "0.0.0.0", 53],
[:tcp, "0.0.0.0", 53],
]
IN = Resolv::DNS::Resource::IN
# Use upstream DNS for name resolution.
def getResolver(ipaddress,port)
return RubyDNS::Resolver.new([[:udp,ipaddress,port],[:tcp,ipaddress,port]])
end
#GOOGLE_DNS = RubyDNS::Resolver.new([[:udp, "8.8.8.8", 53], [:tcp, "8.8.8.8", 53]])
# TODO: get these from resolv.conf or network manager
NET_1 = getResolver("101.123.128.20",53)
#NET_2 = getResolver("101.124.82.144",53)
# Start the RubyDNS server
RubyDNS::run_server(INTERFACES) do
single_ip_match_group = /ip-([^.]*).*/ #internal/
match(single_ip_match_group) do |transaction, match_data|
logger.info("matched ")
logger.info(match_data[1].to_s)
ip = match_data[1].to_s.gsub("-",".")
transaction.respond!(ip)
end
# Default DNS handler
otherwise do |transaction|
logger.info("not matched")
transaction.passthrough!(NET_1)
end
end
Thanks for your detailed report. I will take a look at it, at my earliest convenience.
In the first instance, can you try using dig to go directly to the upstream server and then via your RubyDNS server and report back the latency?
Hi @ioquatix ,
I didn't mention before I'm not sure that it's DNS resolution that is causing my issues. What I've noticed is when I use the rubydns resolver script and try to visit a website it's very slow to load vs. without; using either firefox or chrome . Then I assumed this was related to DNS resolution, but you might know better than me.
Regarding your questions
While running the rubydns script with a slightly different DNS server IP:
; <<>> DiG 9.10.3-P4-Debian <<>> slashdot.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42406
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4000
;; QUESTION SECTION:
;slashdot.org. IN A
;; ANSWER SECTION:
slashdot.org. 44 IN A 216.105.38.15
;; Query time: 93 msec
;; SERVER: 10.12.138.20#53(10.12.138.20)
;; WHEN: Thu Feb 22 16:51:12 PST 2018
;; MSG SIZE rcvd: 57
Without running RubyDNS script:
dig slashdot.org
; <<>> DiG 9.10.3-P4-Debian <<>> slashdot.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13114
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4000
;; QUESTION SECTION:
;slashdot.org. IN A
;; ANSWER SECTION:
slashdot.org. 300 IN A 216.105.38.15
;; Query time: 25 msec
;; SERVER: 10.12.138.20#53(10.12.138.20)
;; WHEN: Thu Feb 22 16:52:27 PST 2018
;; MSG SIZE rcvd: 57
Not sure if I made this clear or not, but I'm setting this ruby resolver as my first entry (127.0.0.1) via network-manager to resolv.conf. (ignoring dhcp assigned 10.12.138.20 as the first entry to resolv.conf)
@ioquatix Could it be that the resolver 10.12.138.20 above doesn't resolve all my requests, is timing out; so I'm cycling through my other resolvers from resolv.conf, and that's causing the delay?
However, I would think this would not be the case because 10.12.138.20 is normally my first resolver assigned by dhcp / network-manager when I don't override (as mentioned above), and I would then expect the same timeout when not using the script.
Maybe I should try logging the passthrough?
@ioquatix Any ideas regarding this?
My kids have been sick this week, so I haven't had much spare time, but rest assured I appreciate your continued interest in this performance problem and I certainly want to look into it.
@drocsid did you put localhost or 127.0.0.1 as dns server?
running examples/basic-dns.rb
query takes 1s:
$ time dig @localhost -p 5300 example.com
..
;; Query time: 23 msec
dig @localhost -p 5300 example.com 0.00s user 0.01s system 0% cpu 1.037 total
query takes 0.03s:
$ time dig @127.0.0.1 -p 5300 example.com
..
;; Query time: 23 msec
dig @127.0.0.1 -p 5300 example.com 0.00s user 0.01s system 25% cpu 0.031 total
Judging by
;; Query time: 25 msec
in the sample output from @drocsid I'm going to assume your assessment is correct @Matti - thanks for adding to the discussion. I believe this is now resolved!
I think this has something to do with ipv6?
What makes you say that?
I have a vague memory that using "localhost" instead of 127.0.0.1 starts some IPv6 madness. I think that's what's going on here: the first resolves are done with IPv6, which times out in 1s, and then IPv4 is fast as expected.
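If that's right, the practical workaround is to point resolution at the IPv4 loopback literal rather than the name, e.g. in /etc/resolv.conf:
nameserver 127.0.0.1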
|
gharchive/issue
| 2018-02-22T17:04:45 |
2025-04-01T06:40:25.483151
|
{
"authors": [
"drocsid",
"ioquatix",
"matti"
],
"repo": "socketry/rubydns",
"url": "https://github.com/socketry/rubydns/issues/70",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1247955619
|
fix: handle %.% in profile columns properly and other bugs
Fixes improper inclusion handling for columns: "%.%" and fixes additional bugs uncovered by adding additional test case.
Resolves: https://sodadata.atlassian.net/browse/SODA-629
pls rebase after #1374 is merged, it should fix some of the athena issues. I will also take a look into the athena tests after that myself, the new test fails because of the same issue as you fixed here for snowflake
@m1n0 I rebased and it looks like the athena tests still fail. Could you take a look? I have no idea what the assumptions with this db are.
Regarding your comment about the schema and database variables: I indeed thought it was handled. I checked sql_table_include_exclude_filter, which does seem to leverage the schema attribute, so my feeling is that it does not work properly.
I'd welcome you trying to make it work. If so, please open a branch from this one as I have a tendency to force push when there are upstream changes that I'm not aware of.
|
gharchive/pull-request
| 2022-05-25T11:27:34 |
2025-04-01T06:40:25.497138
|
{
"authors": [
"bastienboutonnet",
"m1n0"
],
"repo": "sodadata/soda-core",
"url": "https://github.com/sodadata/soda-core/pull/1377",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1433274662
|
fix(sec): upgrade commons-io:commons-io to 2.7
What happened?
There is 1 security vulnerability found in commons-io:commons-io 2.5
CVE-2021-29425
What did I do?
Upgrade commons-io:commons-io from 2.5 to 2.7 for vulnerability fix
What did you expect to happen?
Ideally, no insecure libs should be used.
The specification of the pull request
PR Specification from OSCS
Your email has been received; I will reply as soon as I see it. Wishing you a pleasant day.
|
gharchive/pull-request
| 2022-11-02T14:44:31 |
2025-04-01T06:40:25.518588
|
{
"authors": [
"FlyingPigHasDream",
"chncaption"
],
"repo": "sofastack/sofa-jarslink",
"url": "https://github.com/sofastack/sofa-jarslink/pull/137",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
213034079
|
Fix baremetal order example
This pull request is to fix issue #112. I used 251 package in the example now.
If you run the example now, it will show something like that
Coverage remained the same at 99.762% when pulling b10d7da37186783cf4e082c6f410fe8e9afc1fc3 on ArtsiomMusin:fix-baremetal-order-example into 0adc74b10d89f9ed840b08c70186ee02ea531c2b on softlayer:master.
|
gharchive/pull-request
| 2017-03-09T13:17:12 |
2025-04-01T06:40:25.554118
|
{
"authors": [
"ArtsiomMusin",
"coveralls"
],
"repo": "softlayer/softlayer-ruby",
"url": "https://github.com/softlayer/softlayer-ruby/pull/123",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
179663468
|
MessageTimers (DelaySeconds) with long polling (WaitTimeSeconds)
I am using message timers to delay the delivery of the message and long polling to reduce the number of calls to SQS. Currently the WaitTimeSeconds is set to 10 seconds and the DelaySeconds for each message to 2 seconds. The message is however not being delivered after 2 seconds, but rather after the long polling period of 10 seconds. I have confirmed that this behaves as expected when using Amazon SQS. Is there a configuration flag that I might be missing?
Thanks, Igor.
So this works in "real" SQS = messages are delivered after 2 seconds, while in ElasticMQ they are delivered when?
Can you maybe share some code which demonstrates this?
Yeah sorry I should have provided more details. I am using JavaScript SDK. This is the code to send messages:
defaultQueue.sendMessage(
{
MessageBody: JSON.stringify(message),
QueueUrl: defaultQueueUrl,
DelaySeconds: 2
},
(error, result) => {
logger.info(`Sent message ${message.body.handlerName} at ${new Date()}`);
if (error) {
reject(error);
} else {
resolve(result);
}
}
);
And this the code to receive messages:
const params = {
QueueUrl: defaultQueueUrl,
AttributeNames: [ATTR_RECEIVE_COUNT],
MaxNumberOfMessages: 1,
WaitTimeSeconds: 10
};
return new Promise((resolve, reject) => {
defaultQueue.receiveMessage(params, (error, data) => {
if (error) {
reject(error);
} else {
logger.info(`Received messages ${JSON.stringify(data.Messages)} at ${new Date()}`);
const messages = data.Messages || [];
resolve(messages);
}
});
});
Using elastic mq this code produces following logs:
Sent message POLL_FOR_RESULT_HANDLER at Wed Sep 28 2016 23:10:31 GMT+0000 (UTC)
Received messages undefined at Wed Sep 28 2016 23:10:41 GMT+0000 (UTC)
Received messages [{...,"Body":"{\"body\":{\"handlerName\":\"POLL_FOR_RESULT_HANDLER\",}”….] at Wed Sep 28 2016 23:10:41 GMT+0000 (UTC)
With the real sqs, I get:
Sent message POLL_FOR_RESULT_HANDLER at Thu Sep 29 2016 00:30:16 GMT+0000 (UTC)
Received messages [{…."Body\":\"{\\\"body\\\":{"handlerName\\\":\\\"POLL_FOR_RESULT_HANDLER\\\”}…}] at Thu Sep 29 2016 00:30:18 GMT+0000 (UTC)
I amended one of the existing tests to check that scenario and it passes:
https://github.com/adamw/elasticmq/blob/master/rest/rest-sqs-testing-amazon-java-sdk/src/test/scala/org/elasticmq/rest/sqs/AmazonJavaSdkTestSuite.scala#L817-L831
Are you using the latest ElasticMQ version?
There's also a mysterious undefined message at 23:10:41 - maybe you can try logging the raw content (without JSON parsing) to see what it is?
Also, the server should write DEBUG-level logs such as:
08:23:58.547 [elasticmq-akka.actor.default-dispatcher-3] DEBUG org.elasticmq.actor.queue.QueueActor - testQueue1: Sent message
08:23:58.579 [elasticmq-akka.actor.default-dispatcher-3] DEBUG org.elasticmq.actor.queue.QueueActor - testQueue1: Awaiting messages: start for sequence 0.
08:24:00.572 [elasticmq-akka.actor.default-dispatcher-3] DEBUG org.elasticmq.actor.queue.QueueActor - testQueue1: Receiving message d082e833-82ea-4804-b80e-a15d4899dce3
08:24:00.572 [elasticmq-akka.actor.default-dispatcher-3] DEBUG org.elasticmq.actor.queue.QueueActor - testQueue1: Awaiting messages: replying to sequence 0 with 1 messages.
which have exact timestamps of actions. Could you check that with your test?
Probably out of date
|
gharchive/issue
| 2016-09-28T02:07:51 |
2025-04-01T06:40:25.610733
|
{
"authors": [
"adamw",
"igorsechyn"
],
"repo": "softwaremill/elasticmq",
"url": "https://github.com/softwaremill/elasticmq/issues/80",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2096490909
|
add McToken DAO
Passed all checks on local instance
Any chance we can get this pushed?
|
gharchive/pull-request
| 2024-01-23T16:33:24 |
2025-04-01T06:40:25.639973
|
{
"authors": [
"MyceliumX"
],
"repo": "solana-labs/governance-ui",
"url": "https://github.com/solana-labs/governance-ui/pull/2077",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1779597576
|
token-2022: Refactor for unsized extension support
Problem
We want to add metadata support directly in token-2022, but token-2022's extensions only support extensions whose sizes are known at compile-time.
Solution
This is just a refactor, moving some functions and parameters around, and adding some more return info from other functions. No functional changes are contained here for ease of review in the next bit, which starts to add new functionality, after #4646 lands too.
Roughly, the steps after that go:
add alloc in the public interface, for allocating bytes to an extension
add UnsizedExtension trait, which specifies that an extension can use alloc and realloc. This is for compile-time safety, to keep using the in-place get_extension for all other extensions
add realloc, for an existing extension (ripping off https://github.com/solana-labs/solana-program-library/blob/ed8818c53438b32f96a77f86752e98db02a764ef/libraries/type-length-value/src/state.rs#L358)
add helpers for fetching the sizes of all TLV entries, along with calculating a new size for a realloc
add realloc for the whole account (ripping off https://github.com/solana-labs/solana-program-library/blob/ed8818c53438b32f96a77f86752e98db02a764ef/libraries/type-length-value/src/state.rs#L414)
Once that's all in place, it'll be a breeze to implement the metadata instructions!
Of course, no problem! The part that I skipped over is that once we have access to the raw bytes, and can alloc / realloc unsized extensions, then we can use any not-in-place serde (read: borsh) much more easily.
For initialization, the flow goes:
have an instance of the unsized extension
alloc the exact number of bytes that it needs
write the unsized instance into those bytes
And during update, the flow goes (a rough sketch follows the list):
have a new instance of the unsized extension
get the old size from the buffer
get the new size from the instance
realloc the TLV slot and underlying AccountInfo
write the unsized instance into the bytes
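As a rough illustration of that update flow (every name below, from MyUnsizedExt to get_extension_bytes and realloc, is a hypothetical placeholder rather than the final token-2022 API; Borsh stands in for the not-in-place serde mentioned above):
use borsh::BorshSerialize;

// Hypothetical sketch of updating a variable-length extension.
let new_bytes = new_value.try_to_vec()?;                   // serialize the new instance
let old_len = state.get_extension_bytes::<MyUnsizedExt>()?.len();
if new_bytes.len() != old_len {
    state.realloc::<MyUnsizedExt>(new_bytes.len())?;       // resize TLV slot and account
}
state.get_extension_bytes_mut::<MyUnsizedExt>()?.copy_from_slice(&new_bytes);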
I'll split up the commits more clearly to work up to that
Closing in favor of #4656
|
gharchive/pull-request
| 2023-06-28T19:20:49 |
2025-04-01T06:40:25.648517
|
{
"authors": [
"joncinque"
],
"repo": "solana-labs/solana-program-library",
"url": "https://github.com/solana-labs/solana-program-library/pull/4647",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1219406415
|
Update D3D Network Token img link
Update the token img link
https://github.com/solana-labs/token-list/pull/26478
I'm submitting a ...
[ ] bug report
[ ] feature request
[ ] question about the decisions made in the repository
[ ] question about how to use this project
Summary
Other information (e.g. detailed explanation, stack traces, related issues, suggestions how to fix, links for us to have context, eg. StackOverflow, personal fork, etc.)
bump
bump
|
gharchive/issue
| 2022-04-28T22:48:31 |
2025-04-01T06:40:25.662248
|
{
"authors": [
"rodasemi5"
],
"repo": "solana-labs/token-list",
"url": "https://github.com/solana-labs/token-list/issues/26479",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1101374565
|
Fix BMBO decimals 7->9
Please note: This repository is being rebuilt to accept the new volume of token additions and modifications. PR merges will be delayed.
I agree to not ping anybody on Discord/Twitter/email about this pull request. Instead I will inquire by posting a new comment in the pull request if needed.
PRs are reviewed in bulk and and can take up to two weeks to be merged.
This repository is managed using an auto merge action. Please ensure your PR has no deleted lines, and it will be merged.
Please provide the following information for your token.
Please include change to the src/tokens/solana.tokenlist.json file in the PR.
DON'T modify any other token on the list.
At minimum each entry should have
Token Address:
Token Name:
Token Symbol:
Logo: (logo should be uploaded under assets/mainnet//*.<png/svg>)
Link to the official homepage of token:
Coingecko ID if available (https://www.coingecko.com/api/documentations/v3#/coins/get_coins__id_):
Auto merge requirements
Your pull request will be automatically merged if the following conditions are met:
Your pull request only adds new tokens to the list. Any modification to existing
tokens will require manual review to prevent unwanted modifications.
Your pull request does not touch unrelated code. In particular, reformatting changes to unrelated
code will cause the auto merge to reject your PR.
Any asset files added correspond to the token address you are adding. Asset files
must be PNG, JPG or SVG files.
Your change is valid JSON and conforms to the schema. If your change failed validation,
read the error message carefully and update your PR accordingly.
No other tokens shares the same name, symbol or address.
For example, this change would be rejected due to unrelated changes:
The bot runs every 60 minutes and bulk-merges all open pull requests to prevent conflicts.
This means that you need to wait up to 60 minutes for your pull request to be merged or reprocessed.
Merged without checks => #2169
@rishkumaria
|
gharchive/pull-request
| 2022-01-13T08:35:37 |
2025-04-01T06:40:25.670395
|
{
"authors": [
"aliel"
],
"repo": "solana-labs/token-list",
"url": "https://github.com/solana-labs/token-list/pull/13785",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
99019068
|
[ANN] Do not set the training algorithm every time the ANN is retrained.
When the ANN is considered trained, change to incremental training only once.
Signed-off-by: Guilherme Iscaro guilherme.iscaro@intel.com
+1
Merged.
|
gharchive/pull-request
| 2015-08-04T17:29:02 |
2025-04-01T06:40:25.680521
|
{
"authors": [
"cabelitos",
"otaviobp"
],
"repo": "solettaproject/soletta-machine-learning",
"url": "https://github.com/solettaproject/soletta-machine-learning/pull/18",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
820197809
|
"Machintosh HD" typo in soliant_fms_zbx_export_templates.xml
In soliant_fms_zbx_export_templates.xml the default value of the macro $FM_DATABASE_VOLUME for Mac, at line 2988, is "Machintosh HD" and should be "Macintosh HD".
Will be fixed in the next release, thanks!
|
gharchive/issue
| 2021-03-02T16:50:09 |
2025-04-01T06:40:25.686375
|
{
"authors": [
"jedtech-john",
"wimdecorte"
],
"repo": "soliantconsulting/FileMaker-Server-Zabbix-Templates",
"url": "https://github.com/soliantconsulting/FileMaker-Server-Zabbix-Templates/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
751904301
|
Error with 4xx rather than 5xx on unsupported authorization scheme
Currently (after #358), a request with invalid credentials will fail with a 500:
curl localhost:3000 -H "Authorization: fake" -i
HTTP/1.1 500 Internal Server Error
X-Powered-By: Community Solid Server
Access-Control-Allow-Origin: *
content-type: text/plain
Date: Thu, 26 Nov 2020 23:59:28 GMT
Connection: keep-alive
Transfer-Encoding: chunked
InternalServerError: No handler supports the given input: [No DPoP Authorization header specified., Unexpected Authorization scheme.]
at FirstCompositeHandler.findHandler (/Users/ruben/Documents/UGent/Solid/solid-community-server/src/util/FirstCompositeHandler.js:93:19)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
Let's make that a 400 instead, since this is a client error.
Change likely to be made in AuthenticatedLdpHandler.
We probably want to keep the underlying error in a .cause field or so, such that (at a later point) we can describe to the user:
No DPoP Authorization header specified.
Unexpected Authorization scheme.
This seems okay now after updating https://github.com/solid/community-server/pull/358, in the sense that it gives 501 Not Implemented, which could be a correct answer to an unsupported authorization scheme being suggested.
On the other hand, a better response could be 403 with a suggestion for a better protocol.
Since so many handlers now throw 501 it might make more sense to change the composite handler to take the lower range if it receives multiple errors instead of the higher range. E.g., if it gets a 501 and 400, it throws a 400. Or maybe specifically check for 501s, since it's not because one handler doesn't implement something that we don't have an other handler implementing it.
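A minimal sketch of that lower-range selection (hypothetical names; the server's actual error classes differ):
interface HttpError extends Error {
  statusCode: number;
}

// Pick the error with the lowest status code so a client error (4xx)
// wins over "not implemented" (501) when several handlers reject.
function pickCompositeError(errors: HttpError[]): HttpError {
  return errors.reduce((lowest, err) =>
    err.statusCode < lowest.statusCode ? err : lowest,
  );
}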
Also very related to #364. If none of our handlers support something, is that a server fault or a user sending wrong data?
Since so many handlers now throw 501 it might make more sense to change the composite handler to take the lower range if it receives multiple errors instead of the higher range.
Two other options I had thought of:
Have a FallbackCompositeAsyncHandler, which does the same as FirstCompositeAsyncHandler, except canHandle errors with (only) the error message of the last (as opposed to a combined error message). A bit of this idea is in https://github.com/solid/community-server/pull/358/files#diff-371f0f66636dcb4e6791bed5402b5c6deefe31d7312686899ab1716bdc4cd072, where the last handler is really supposed to take them all.
Have an optional argument to construct the error (as with pipeStream). Note that this is actually a specialized version of the first option; just having a StaticAsyncHandler that throws a specific error would have the same effect.
If none of our handlers support something, is that a server fault or a user sending wrong data?
It depends on the perspective 🙂 I think ultimately, when we implement the full Solid spec, a 400 sends the signal "I'm fine, you should just try something else".
The latest version ignores incorrect auth; seems good enough for now.
|
gharchive/issue
| 2020-11-27T00:03:13 |
2025-04-01T06:40:25.694392
|
{
"authors": [
"RubenVerborgh",
"joachimvh"
],
"repo": "solid/community-server",
"url": "https://github.com/solid/community-server/issues/361",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
916195982
|
Align CoC for the Solid Community & make it visible
I wanted to record my observation about the CoCs I found and their location. If you are interested in the CoC content check this issue: https://github.com/solid/deit/issues/6
Website (https://solidproject.org/) - find CoC under Community header menu and in the footer - links to https://github.com/solid/process/blob/main/code-of-conduct.md
Forum (https://forum.solidproject.org/) - mentions Privacy Policy and Terms of Service but not the CoC
Gitter (https://gitter.im/solid/home) - mentions CoC only in the chat group
GitHub (https://github.com/solid) - no mention of CoC
Women of Solid (https://www.womenofsolid.org) - has its own CoC: https://www.womenofsolid.org/code-of-conduct.html
TODO:
do we want to align all CoCs and have only one? Is there a reason why the Women of Solid CoC is different?
Try to showcase the CoC in a visible way on all channels. Suggestions:
add CoC in the header of the forum
can one add the CoC on the whole Solid space on Gitter, as a header or footer?
In GitHub the recommendation is to have a CoC for each repo. Do we want to have that? Reference: https://opensource.creativecommons.org/contributing-code/github-repo-guidelines/
Just to note:
Website (https://solidproject.org/) - find CoC under Community header menu and in the footer - links to https://github.com/solid/process/blob/main/code-of-conduct.md
Those links were only added when you pointed out there were none, so thanks for that :)
I created the solid/chat topic including the CoC. Left CoC out of solid/specification because the link to the repo with the README includes both Solid CoC and W3C CEPC: https://github.com/solid/specification#code-of-conduct . In any case, I've updated the topic for solid/specification to that just now as well as for:
solid/authentication-panel
solid/authorization-panel
solid/data-interoperability-panel
solid/notifications-panel
solid/test-suite
|
gharchive/issue
| 2021-06-09T13:21:50 |
2025-04-01T06:40:25.715752
|
{
"authors": [
"Vinnl",
"csarven",
"theRealImy"
],
"repo": "solid/deit",
"url": "https://github.com/solid/deit/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1685838476
|
doesn't support "mailto:" hrefs
I'm using this component to render links in some MDX content, which is convenient since it works out of the box for any kind of links, internal or external. However, it doesn't seem to handle mailto href values appropriately, forcing me to create a custom wrapper. I think it should probably be supported out of the box, along with "tel" and other similar patterns.
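For reference, a minimal sketch of such a wrapper (assuming Solid's <A> from @solidjs/router; the component name and scheme list are illustrative):
import { A } from "@solidjs/router";
import type { JSX } from "solid-js";

// Render a plain anchor for non-navigation schemes so the router never
// intercepts mailto:, tel: and similar hrefs.
const NON_NAVIGABLE = /^(mailto|tel|sms):/i;

function SmartLink(props: { href: string; children: JSX.Element }) {
  return NON_NAVIGABLE.test(props.href) ? (
    <a href={props.href}>{props.children}</a>
  ) : (
    <A href={props.href}>{props.children}</A>
  );
}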
Adding more cases to bail out on seems unnecessary. Like if you are using mailto: you probably know. But I will move this to discussions. Because of developments around partial hydration patterns I am still very much thinking about the impact of <A> vs <a>.
|
gharchive/issue
| 2023-04-18T20:07:56 |
2025-04-01T06:40:25.717687
|
{
"authors": [
"DaniGuardiola",
"ryansolid"
],
"repo": "solidjs/solid-router",
"url": "https://github.com/solidjs/solid-router/issues/266",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
972329007
|
JSX-Lite changed to "Mitosis"
maybe they visited a butterfly farm?
Pull Request Test Coverage Report for Build 1138139745
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 89.94%
Totals:
Change from base Build 1138131387: 0.0%
Covered Lines: 1101
Relevant Lines: 1171
💛 - Coveralls
|
gharchive/pull-request
| 2021-08-17T05:58:27 |
2025-04-01T06:40:25.722260
|
{
"authors": [
"coveralls",
"tomByrer"
],
"repo": "solidjs/solid",
"url": "https://github.com/solidjs/solid/pull/605",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
908370543
|
Update sbt-tpolecat to 0.1.20
Updates io.github.davidgregory084:sbt-tpolecat from 0.1.8 to 0.1.20.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "io.github.davidgregory084", artifactId = "sbt-tpolecat" } ]
labels: sbt-plugin-update, semver-patch
Superseded by #83.
|
gharchive/pull-request
| 2021-06-01T14:44:19 |
2025-04-01T06:40:25.725764
|
{
"authors": [
"scala-steward"
],
"repo": "solidninja/schema-registry-sttp-client",
"url": "https://github.com/solidninja/schema-registry-sttp-client/pull/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1312005255
|
Add shipments to authorize card api
Add shipments data to Authorize Card API payload.
In this way, the shipment will be saved on the Bolt Transaction.
QA:
place an order
check that the shipment address stored on the transaction
ref https://merchant-sandbox.bolt.com/transaction/8RNG-7CJQ-HXD4
I was thinking if we can add a rescue and a spec for the case where the order doesn't have a ship_address, but I guess that can't happen?
Yes, solidus doesn't allow it.
|
gharchive/pull-request
| 2022-07-20T22:04:16 |
2025-04-01T06:40:25.728306
|
{
"authors": [
"DanielePalombo"
],
"repo": "solidusio-contrib/solidus_bolt",
"url": "https://github.com/solidusio-contrib/solidus_bolt/pull/121",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
127677169
|
Concerns separation
I do not know a lot about Imba internals, but... what do you think about separating DOM management from the language itself?
Like having an expressive language and a powerful framework based on it.
It is slightly more difficult to separate than one would guess. Since tags really is a core part of the language, I'm not sure where we would draw the line. It is possible to move more of the tag related methods out into a separate library, but since ie. <div.large.warn> is a valid and native part of the language, we need the runtime/framework for that to work as expected (I think).
Is the motivation to be able to use the tags without Imba, or merely to clean up and tighten imba 'core' and be able to use Imba without including all this tag-related stuff? :)
Motivated by the fact that after the CoffeeScript hype, people are actively looking for an alternative syntax, and ES6 does not fit too well.
The keyword here is "syntax", because a general-purpose language is needed, i.e. tags are not needed on the server... Instead, a strong small core that evolves separately from the tags is needed.
But if you say Imba was not intended as a general purpose superset, that's ok.
After all, no tool can fit all needs.
Since tags is in fact a core part of the syntax itself, it is probably difficult to separate the two without basically splitting it up into two languages :/ But even though tags are useful specifically for web apps, Imba itself can be used as a general purpose language? Some languages don't have first-class support for regular expressions. But a language like js has.. you can create a regexp with the /regex?/ syntax. It would feel pretty strange if that could throw an error RegExp library not included? Unless we split up Imba in two languages, that would happen if you tried using tags (native syntax) without including the 'framework'.
Maybe we should simply market it more as a general purpose language - because it absolutely is. But imho people who create web apps will get the most benefit from it :)
I was thinking about this in the car yesterday.
:+1: I think it'd be great to split the language from the library. The compiler would stand separate from the lib, and could accept a --lib flag that would internally switch compilation logic for Imba (global) based helpers as opposed to natively-stubbed replacements.
This allows anyone (hello ruby devs?) to write more natural-feeling JavaScript, with option of easily working with/integrating a great DOM library too.
Essentially, Imba acts as the CoffeeScript 2.0 (for syntax & compilation), but comes with an optional but highly recommended library behind it.
|
gharchive/issue
| 2016-01-20T13:00:11 |
2025-04-01T06:40:25.812730
|
{
"authors": [
"lukeed",
"sleewoo",
"somebee"
],
"repo": "somebee/imba",
"url": "https://github.com/somebee/imba/issues/53",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2267317563
|
feat: support class hot reload
Introduces Java dynamic compilation
Instrumentation only supports updates within method bodies
feat: support class hot reload
For now, only modifications within method bodies are supported
Verified on static methods, instance methods, and inner class methods
|
gharchive/pull-request
| 2024-04-28T03:28:02 |
2025-04-01T06:40:25.857672
|
{
"authors": [
"songbiaoself"
],
"repo": "songbiaoself/SuperHotSwap",
"url": "https://github.com/songbiaoself/SuperHotSwap/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2379612467
|
Coze cannot be added
Routine checks
[ ] I have confirmed that there is currently no similar issue
[ ] I have confirmed that I have upgraded to the latest version
[ ] I have fully read the project README, especially the FAQ section
[ ] I understand and am willing to follow up on this issue, helping to test and provide feedback
[ ] I understand and accept the above, and understand that the maintainers have limited time; issues that do not follow the rules may be ignored or closed directly
Problem description
I don't know why Coze doesn't work after being added. Has anyone managed to integrate it successfully? Please share the concrete steps.
Reproduction steps
Expected result
Related screenshots
If there are none, please delete this section.
Mystery solved: only the international version of Coze is supported, not the Chinese version.
https://github.com/songquanpeng/one-api/issues/1462#event-12973112021
How come? I got it working. Did you change the API address? The Chinese version needs the corresponding cn domain.
How come? I got it working. Did you change the API address? The Chinese version needs the corresponding cn domain.
Do I enter the Chinese domain in the proxy field? I don't know where to change it.
How come? I got it working. Did you change the API address? The Chinese version needs the corresponding cn domain.
Do I enter the Chinese domain in the proxy field? I don't know where to change it.
It's right here at the API address.
https://github.com/songquanpeng/one-api/issues/1462#event-12973112021
Do I enter the Chinese domain in the proxy field? I don't know where to change it.
#1462 (comment)
Do I enter the Chinese domain in the proxy field? I don't know where to change it.
Proxy? Are you using newapi?
#1462 (comment)
Do I enter the Chinese domain in the proxy field? I don't know where to change it.
Proxy? Are you using newapi?
The proxy URL on the last line of one-api. It's solved now, I changed com to cn.
|
gharchive/issue
| 2024-06-28T05:31:15 |
2025-04-01T06:40:25.867339
|
{
"authors": [
"QAbot-zh",
"YFSakura",
"sparkssssssss"
],
"repo": "songquanpeng/one-api",
"url": "https://github.com/songquanpeng/one-api/issues/1577",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1686837336
|
Bug: Bad authorization state. Refreshing the page might solve the issue. (
Hello,
In my React app I've got an error when I first enter the page:
This may indicate that there is no 'code' query param in my URL. Indeed there is no 'code' query param, and after reloading the page there is.
I tried to change settings in my router:
Right now the default page redirects to /dashboard. But even when I delete it, it doesn't work.
In TAuthConfig I have autoLogin: true and clearURL: false,
I'm using this version:
My expected behaviour: login without refreshing the page. Of course I can force the login() method but it does not do the trick for me in this case
Could you be running into this issue? https://github.com/soofstad/react-oauth2-pkce#after-redirect-back-from-auth-provider-with-code-no-token-request-is-made
Yeah, I read about it and that's not the case here. When I'm redirected from my provider everything works fine. It happens when I enter the page -> provider/library somehow remembers the session -> no redirect to provider to log in -> error 'bad authorization state...'
All my routes are wrapper inside AuthProvider. AuthProvider is in index.sx and App routes in App.tsx one level lower
Are you able to make minimal example on how to recreate this bug?
If not, I'd like so see the state of localStorage, both before and after you are redirected to the authentication provider. If everything there looks alright, then you should have a look at what parameters the auth servers sets when it redirects you back after login.
Ok - most updated version:
First scenario:
Local storage is empty - everything works fine. I'm redirected to provider -> there log in -> redirect to my app and everything works
Second scenario:
Local storage is NOT empty. There is some ROCP data. I'm NOT redirected to provider -> there is no 'code' param in my url and error occurs.
My local storage:
I think in order to reproduce it I need to:
Log in to the provider first OUTSIDE my application environment.
Turn on my app
Then there is no redirect and beforementioned data in local storage.
react-oauth2-code-pkce will only attempt to retrieve the code from the URL if loginInProgress is true.
That is set right before redirecting to auth provider.
The login flow needs to start from the web app using the package. Any redirects to the web-app with code besides that will not work. However, it should just automatically log the user in anyway, with a new redirect.
Have you tried clearing all the web app's persistent data? Calling "logout()" should be enough.
If you want some more help with this, I really need an example of how to recreate it.
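As a workaround sketch (assuming the context exposes error and logOut roughly as in the library's README; treat the exact names as an assumption):
import { useContext, useEffect } from "react";
import { AuthContext } from "react-oauth2-code-pkce";

// Clear persisted ROCP state when the provider reports a bad
// authorization state, so the next load starts a fresh login flow.
function AuthErrorReset() {
  const { error, logOut } = useContext(AuthContext);
  useEffect(() => {
    if (error?.includes("Bad authorization state")) {
      logOut();
    }
  }, [error, logOut]);
  return null;
}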
|
gharchive/issue
| 2023-04-27T13:17:50 |
2025-04-01T06:40:25.971298
|
{
"authors": [
"pablojakub",
"soofstad"
],
"repo": "soofstad/react-oauth2-pkce",
"url": "https://github.com/soofstad/react-oauth2-pkce/issues/85",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1004034060
|
COM-2329 - Fix missing discount field in bill billingItemList
The discount field was missing from bill.billingItemList.
We should also add the discount line to the table when a discount is applied to one of the billing items, shouldn't we?
For that, we would need to add totalDiscount += bi.discount; at line 337, inside the const bi of bill.billingItemList loop of the bills service.
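A minimal sketch of that accumulation (hypothetical types; the real service iterates the bill's billing items directly):
interface BillingItem { discount?: number; }

// Sum the discount of every billing item, treating a missing
// discount as zero.
function computeTotalDiscount(billingItemList: BillingItem[]): number {
  let totalDiscount = 0;
  for (const bi of billingItemList) {
    totalDiscount += bi.discount ?? 0;
  }
  return totalDiscount;
}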
|
gharchive/pull-request
| 2021-09-22T08:54:34 |
2025-04-01T06:40:25.976511
|
{
"authors": [
"KennyCallegari",
"manonpalin"
],
"repo": "sophiemoustard/compani-api",
"url": "https://github.com/sophiemoustard/compani-api/pull/1620",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1244290260
|
Added logic to have a wait screen
Updated client to show a slightly different view when waiting for participants to join a session based on session status. closes #80
closes #108
|
gharchive/pull-request
| 2022-05-22T15:32:57 |
2025-04-01T06:40:25.977368
|
{
"authors": [
"carolernst-uzh"
],
"repo": "sopra-fs22-group-18/client",
"url": "https://github.com/sopra-fs22-group-18/client/pull/90",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1681883187
|
security adjustments-> working controller tests
see issue #102 for detailed description
directly pushed to main with commit f03ce96
|
gharchive/pull-request
| 2023-04-24T19:21:01 |
2025-04-01T06:40:25.978372
|
{
"authors": [
"weberlii"
],
"repo": "sopra-fs23-group-13/meme-it-server",
"url": "https://github.com/sopra-fs23-group-13/meme-it-server/pull/127",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
568663499
|
Stack overflow when using refinements
Input
There is no Sorbet.run link because this is Rails code.
I'm trying to extend ActiveModel::ValidationError to use multiple models with a Ruby refinement.
# This refinement extends ActiveModel::ValidationError to include support for multiple models.
module Refinement
module ActiveModelValidationError
refine ActiveModel::ValidationError do
extend T::Sig
sig { returns(T::Enumerable[ActiveModel::Validations]) }
def models
@models || [ @model ]
end
sig { params(models: T::Enumerable[ActiveModel::Validations]).void }
def models=(models)
@models = models
end
end
# HACK: Refinements currently do not support overriding the private initialize method on
# objects. Instead, we have to override the `new` method on the class.
refine ActiveModel::ValidationError.singleton_class do
extend T::Sig
# Creates a new ActiveModel::ValidationError using an array of models.
# @param models This can be either a single model (the default), or a collection of models.
# @return Returns the new instance of ActiveModel::ValidationError.
sig do
params(model_or_models: T.any(
ActiveModel::Validations,
T::Enumerable[ActiveModel::Validations]
)) .returns(ActiveModel::ValidationError)
end
def new(model_or_models)
error = super(model_or_models.is_a?(Enumerable) ? model_or_models.first : model_or_models)
error.models = model_or_models if model_or_models.is_a? Enumerable
error
end
end
end
end
Observed output
When I run this:
using Refinement::ActiveModelValidationError
ActiveModel::ValidationError.new(Banana.first)
I get this:
# ./lib/refinement/active_model_validation_error.rb:33:in `new'
# /usr/local/bundle/gems/sorbet-runtime-0.5.5360/lib/types/private/methods/call_validation.rb:126:in `call'
# /usr/local/bundle/gems/sorbet-runtime-0.5.5360/lib/types/private/methods/call_validation.rb:126:in `validate_call'
# /usr/local/bundle/gems/sorbet-runtime-0.5.5360/lib/types/private/methods/call_validation.rb:186:in `block in create_validator_slow'
# ./lib/refinement/active_model_validation_error.rb:33:in `new'
# /usr/local/bundle/gems/sorbet-runtime-0.5.5360/lib/types/private/methods/call_validation.rb:126:in `call'
# /usr/local/bundle/gems/sorbet-runtime-0.5.5360/lib/types/private/methods/call_validation.rb:126:in `validate_call'
# /usr/local/bundle/gems/sorbet-runtime-0.5.5360/lib/types/private/methods/call_validation.rb:186:in `block in create_validator_slow'
# ./lib/refinement/active_model_validation_error.rb:33:in `new'
...
Expected behavior
I'd expect to be able to run my code without a stack overflow.
Sorbet doesn't currently support refinements, is it possible to write your example with inheritance instead?
It's possible, but refinements are a great feature, and I'd prefer to use them. 🙂
Are there any plans on supporting refinements in the future?
They aren't used at Stripe and thus were unlikely to put effort from our side to support them.
@DarkDimius Thanks for the reply!
If you don't mind me asking, why don't you guys use them? They seem like a good feature for larger organizations. 🙂
One of the most important parts of being able to work in a large codebase is to be able to have code mean the same thing when it's copy / pasted from one place to another. In that case, it's better to have monkey patches everywhere or nowhere (and increasingly, we prefer nowhere, using codemods to remove them entirely). Refinements make it so that copy / pasting code from one place to another can break. This is very non-intuitive. We don't want to have people track down which refinements they have to magically bring into scope to get their code to work--it's hard enough as it is with include and extend.
|
gharchive/issue
| 2020-02-21T00:28:58 |
2025-04-01T06:40:25.996976
|
{
"authors": [
"DarkDimius",
"LandonSchropp",
"elliottt",
"jez"
],
"repo": "sorbet/sorbet",
"url": "https://github.com/sorbet/sorbet/issues/2690",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1831709304
|
Show demangled package name in blame output
Motivation
We have other data sources within Stripe that use the demangled package
name. The mangled name is an artifact of Sorbet's internals, not a
public API.
Test plan
We do not have tests for this. I am assuming that if it compiles it works.
I did trace the code to make sure that PackageInfo::show shows the demangled name.
cc @maruth-stripe
|
gharchive/pull-request
| 2023-08-01T17:12:03 |
2025-04-01T06:40:25.999416
|
{
"authors": [
"jez"
],
"repo": "sorbet/sorbet",
"url": "https://github.com/sorbet/sorbet/pull/7195",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
119477793
|
London restaurants integration
Hi,
OpenTable recently added more than 2300 restaurants for the city of London, available at the following link http://www.opentable.co.uk/ .
It would be very useful for people without the affiliate program to access this new set of data.
Thanks.
+1
Would be great to add the UK. Is there any way I can help?
|
gharchive/issue
| 2015-11-30T11:57:07 |
2025-04-01T06:40:26.054460
|
{
"authors": [
"VikiBonzo",
"jwardle",
"mrkrumhausen"
],
"repo": "sosedoff/opentable",
"url": "https://github.com/sosedoff/opentable/issues/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
165984191
|
feat(core): ability to compare two immutable structures
Hi Juan! First, thanks for this port :smile:
I thought it would be a good idea to implement the ability to compare two immutable structures, since using redux with immutable is a common pattern that tends to grow.
The only potential issue is that it require to add the immutable dependency, to both check if passed states are immutable iterables, and to compare them. But it shouldn't be too much of a problem, since redux-ava is supposed to be installed only in development anyway.
I've decided to deep-freeze only the action in case both states are immutables, since the previous state would be inevitably immutable.
Let me know your thoughts about this!
Any updates on merging this? Would love to see this feature in redux-ava, as Immutable.js is often the best choice in use with redux.
Hi! Sorry I've been super busy :-( I'll review this during my lunch and get back to you—adding a big dependency is always a hard choice but it looks like the benefits outweigh it. Thanks for your patience.
Hi @sotojuan, thanks for the merge!
Just to explain why I added the version field inside the package.json, was because I was installing the package using my specific url Apercu/redux-ava#immutable, and npm wasn't happy this field was missing, but could it be removed if you wish, since it's not a very common way to install.
|
gharchive/pull-request
| 2016-07-17T16:36:53 |
2025-04-01T06:40:26.099012
|
{
"authors": [
"Apercu",
"meriadec",
"sotojuan"
],
"repo": "sotojuan/redux-ava",
"url": "https://github.com/sotojuan/redux-ava/pull/5",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1714040917
|
After setting up a reverse proxy through Nginx, all resource requests fail
What kind of trouble did you run into?
After setting the home page of my home site through a reverse proxy, loading resources fails and subpages cannot be accessed either
http://firekula.site/
How can this issue be reproduced?
Application version
Search
[X] Before submitting this form, I searched for related issues and found no relevant issue or solution.
Additional description
No response
I suspect the problem may lie with the try_files directive. Do you have other local files that need to be accessed? You could also try a configuration like the following:
server {
location / {
try_files $uri @proxy;
}
location @proxy {
proxy_pass http://192.168.3.57:5005;
}
}
Writing it this way works! Thank you!
|
gharchive/issue
| 2023-05-17T14:28:28 |
2025-04-01T06:40:26.106702
|
{
"authors": [
"LightAPIs",
"firekula"
],
"repo": "soulteary/docker-flare",
"url": "https://github.com/soulteary/docker-flare/issues/123",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1717694727
|
random profile details
-> Profile making steps need to be updated
working on it
@sourabhsikarwar Kindly add gssoc'23 tag
Kindly merge the PR @sourabhsikarwar
I am working next on fixing avatar.
|
gharchive/issue
| 2023-05-19T19:05:13 |
2025-04-01T06:40:26.114964
|
{
"authors": [
"BHANUJATIN"
],
"repo": "sourabhsikarwar/Scene-Movie-Platform",
"url": "https://github.com/sourabhsikarwar/Scene-Movie-Platform/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
953641563
|
No email notification with multiple domains under a single Commento++ instance
Hello,
have a running Commento(++) instance for years and it's running fine without any problems.
Today I have added a new domain under the same instance, and all works (comments are presented, registered in the DB, visible in the Dashboard etc), apart from the notifications.
Am I wrong that the notifications should come via the same route as my initial domain notifications are coming from? I am using SendGrid as my smtp host, and I see no way to configure another "route" to manage another domain and its notifications?
Any info on this?
Tnx in advanced!
Thanks for the bug report! Yep that's correct in that is how it should be working! Do you have any errors that you can see from the logs?
I have this running in Docker (Caroga repo), and there is nothing special in the default container log. Any specific log I should be looking at? Location?
tnx!
Hello!
Any progress on this issue? Tnx!
Another bump on this. Just want to know if this is an easy fix, a problem in the config or something else, or should I just run with another separate instance of Commento++?
tnx!
|
gharchive/issue
| 2021-07-27T08:35:31 |
2025-04-01T06:40:26.120188
|
{
"authors": [
"rusty1281",
"souramoo"
],
"repo": "souramoo/commentoplusplus",
"url": "https://github.com/souramoo/commentoplusplus/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1348225109
|
fix(authentication-service): remove device info and auth clients from token
BREAKING CHANGE:
auth clients in user model made optional
gh-991
Description
remove device info and auth clients from token
Fixes #991
Type of change
Please delete options that are not relevant.
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[x] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] Intermediate change (work in progress)
Checklist:
[x] Performed a self-review of my own code
[ ] npm test passes on your machine
[ ] New tests added or existing tests modified to cover all changes
[ ] Code conforms with the style guide
[ ] API Documentation in code was updated
[ ] Any dependent changes have been merged and published in downstream modules
snyk is failing
|
gharchive/pull-request
| 2022-08-23T16:35:24 |
2025-04-01T06:40:26.133324
|
{
"authors": [
"akshatdubeysf",
"yeshamavani"
],
"repo": "sourcefuse/loopback4-microservice-catalog",
"url": "https://github.com/sourcefuse/loopback4-microservice-catalog/pull/992",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2035974381
|
bug: Can't delete an individual chat from the individual chat panel
Version
v0.19.1702220762
Describe the bug
Delete chat in the individual chat panel is no longer there.
Expected behavior
I should be able to delete the open chat with an option in the title.
Additional context
No response
@toolmantim, do we have a design for this icon? AFAIU, @taylorsperry means the "delete chat" functionality in the sidebar. Please correct me if I'm wrong.
I'd meant the title (not the sidebar), but tbh, I don't know why I thought I needed that. I think being able to delete an individual chat from the sidebar is sufficient, unless we hear otherwise from users. Closing for now.
|
gharchive/issue
| 2023-12-11T15:47:57 |
2025-04-01T06:40:26.137032
|
{
"authors": [
"taylorsperry",
"valerybugakov"
],
"repo": "sourcegraph/cody",
"url": "https://github.com/sourcegraph/cody/issues/2263",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2044185439
|
bug: Cody.codebase setting is no longer respected in chat
Version
v1.0.1
Describe the bug
When I am working in a repo that is not indexed on my enterprise sourcegraph instance, or the git remote detection does not match a repo on my instance, I am not able to set the cody codebase manually.
Expected behavior
I should be able to set the cody.codebase setting and see it's value reflected in enhanced context selection for chat.
Additional context
With no git remote
overriding the git remote
A customer also reported being impacted by this issue
A customer also reported being impacted by this issue
I'm just testing a fix for this now.
|
gharchive/issue
| 2023-12-15T18:29:08 |
2025-04-01T06:40:26.140533
|
{
"authors": [
"chwarwick",
"morgangauth"
],
"repo": "sourcegraph/cody",
"url": "https://github.com/sourcegraph/cody/issues/2409",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2558016851
|
bug: Failed to create "sourcegraph" autocomplete provider derived from "site-config-cody-llm-configuration". Please report the issue using the "Cody Debug: Report Issue" VS Code command.
Type: Bug
Extension Information
Cody Version: 1.35.1727622959
VS Code Version: 1.94.0-insider
Extension Host: desktop
Steps to Reproduce
Expected Behaviour
Logs
Extension version: 1.35.1727622959
VS Code version: Code - Insiders 1.94.0-insider (b7894e64dd103a19dd5015326d8310232236de0f, 2024-09-30T12:16:57.458Z)
OS version: Windows_NT x64 10.0.22631
Modes:
System Info
CPUs: AMD Ryzen 5 5600H with Radeon Graphics (12 x 3294)
GPU Status: 2d_canvas: enabled; canvas_oop_rasterization: enabled_on; direct_rendering_display_compositor: disabled_off_ok; gpu_compositing: enabled; multiple_raster_threads: enabled_on; opengl: enabled_on; rasterization: enabled; raw_draw: disabled_off_ok; skia_graphite: disabled_off; video_decode: enabled; video_encode: enabled; vulkan: disabled_off; webgl: enabled; webgl2: enabled; webgpu: enabled; webnn: disabled_off
Load (avg): undefined
Memory (System): 23.35GB (4.47GB free)
Process Argv: --crash-reporter-id bd49e885-cbd6-4c66-a074-590ffe85aa12
Screen Reader: yes
VM: 0%
A/B Experiments
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256197
vscod805cf:30301675
vsaa593cf:30376535
py29gd2263:31024238
c4g48928:30535728
2i9eh265:30646982
962ge761:30841072
pythongtdpath:30726887
welcomedialog:30812478
pythonnoceb:30776497
asynctok:30898717
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
2e7ec940:31000449
pythontbext0:30879054
accentitlementst:30870582
dsvsc016:30879898
dsvsc017:30880771
dsvsc018:30880772
cppperfnew:30980852
pythonait:30973460
da93g388:31013173
a69g1124:31018687
dvdeprecation:31040973
dwnewjupyter:31046869
nb_pri_only:31057983
nativerepl1:31134653
refactort:31084545
pythonrstrctxt:31093868
flighttreat:31119334
wkspc-onlycs-t:31132770
nativeloc1:31118317
wkspc-ranged-t:31125599
e80f6927:31120813
autoexpandse:31146404
12bdf347:31141542
iacca2:31144504
notype1:31143044
c9j82188:31138334
showchatpanel:31139797
f8igb616:31140137
Failed to create "sourcegraph" autocomplete provider derived from "site-config-cody-llm-configuration". Please report the issue using the "Cody Debug: Report Issue" VS Code command.
|
gharchive/issue
| 2024-10-01T02:14:52 |
2025-04-01T06:40:26.149360
|
{
"authors": [
"jatinbht"
],
"repo": "sourcegraph/cody",
"url": "https://github.com/sourcegraph/cody/issues/5765",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1888473700
|
update claude infill prompt to fix indent issue
RE: https://sourcegraph.slack.com/archives/C05AGQYD528/p1694095905077429
Fixes the issue mentioned in the attached Slack thread where multi-line autocompletion is not activated on an empty line.
This was caused by the modified indentation for the active cursor lines that we currently use with the prompt that does not support "infill".
To get the infill to work with the updated prompt, the indentation needs to be preserved.
This PR adds a new tail and head that the new prompt uses to resolve the issue. Test results are also attached to the test suite for the record.
Test plan
Run test suite to compare results between current claude instant and claude instant supporting infill.
Set: "cody.autocomplete.advanced.model": "claude-instant-infill"
This is truly amazing. I'm so glad that you folks considered using a different model to better solve this problem. I was very perplexed when my previous PR (https://github.com/sourcegraph/cody/pull/990/) could not perform on certain edge cases, because it seemed like a limitation of the language model regardless of the prompt that I tried to use.
The way I approached the problem before I saw your solution was to test five different sets of prompts, also varying the length of the generated trim lines, and for each of these prompts I would take screenshots of what went well and what did not go so well. But using a different model just completely blows everything out of the water and it's such a great solution.
I do have 2 questions though,
I don't see this model in the official Anthropic documentation. Where does this model even come from? Is this just an internal sourcegraph thing(I am just curious about this)
The other thing that I was wondering about: one thing in this code that might potentially be helpful is having trim blocks which are four lines instead of two lines. In my local experiments of A/B testing that was something that helped quite a bit. Is that something you have any thoughts on? I'm happy to try it out just to see if it performs better. But it seems like there's a very high chance you have probably already tried it.
And once again, this is truly amazing so glad you found a clean way to solve this very critical functionality 💯
@arafatkatze
I don't see this model in the official Anthropic documentation. Where does this model even come from (is it just the Claude Instant model)? Is this just an internal Sourcegraph thing? (I am curious about this, if it's okay to share.)
We use the same Claude Instant 1.2 model here as we do for the non infilling, we only decided to use the model field on the client as a feature flag so we can still run the old version to compare it against.
I would take screenshots of what went well and what went not so good
The problem with this approach is that there's inherent randomness in the LLM output, so any single screenshot is only anecdotal evidence and it often depends a lot on rng. We are working on a better test setup that can create multiple completions based on a test dataset so we have at least some statistical certainty that a solution is better (we're running into a lot of whack-a-mole with prompt tweaks without that tooling).
The other thing that I was worrying or wondering about is that in this code one thing that might potentially be helpful is having trim blocks which are four lines instead of two lines. In my local experiments of A-B testing that was something that helped quite a bit. Is that something that you have any thoughts on?
Yeah we should probably run an experiment on this arbitrary threshold. When I tried to increase it locally, I got worse results but a lot has changed since then for sure. Our A/B pipeline is currently a bit full though as we're also evaluating other completion providers with models that were trained for the fill-in-the-middle use case.
@philipp-spiess Thanks a lot for the thorough explanation and clarifying things. I misread the naming convention as a different model altogether.
The problem with this approach is that there's inherent randomness in the LLM output so any single screenshot is only anecdotal evidence and it often depends a lot on rng. We are working on a better test setup that can create multiple completions based on a test dataset so we have at least some statistical certainty that a solution is better (we're running into a lot of whack-a-mole with prompt tweaks if without that tooling).
Yeah having a good testing suite to make multiple completions sounds like an awesome idea. Right now, the whack-a-mole approach is too time consuming and I am certain there is a better way to resolve this. I would love to contribute to some issues to build a better test setup to support multiple completions.
Yeah we should probably run an experiment on this arbitrary threshold. When I tried to increase it locally, I got worse results but a lot has changed since then for sure. Our A/B pipeline is currently a bit full though as we're also evaluating other completion providers with models that were trained for the fill-in-the-middle use case.
Yeah for me increasing the trim length stopped the odds of it repeated the thing it had said already. But then the whole prompt I was using with the shorter trim s with was very different(See Below)
const prefixMessages: Message[] = [
{
speaker: 'human',
text: 'You are a sophisticated code-completion AI, specifically designed to understand the intricacies of coding context. Your abilities include grasping the semantic and syntactic elements of the code I’m working on and offering completion suggestions that not only fit the functional requirements but also adhere to the stylistic and architectural patterns present in the existing codebase.',
},
{
speaker: 'assistant',
text: 'Acknowledged. My design incorporates advanced contextual understanding, which allows me to generate code completions that are functionally coherent, stylistically consistent, and architecturally aligned with your existing code.',
},
{
speaker: 'human',
text: `Complete the following code: ${OPENING_CODE_TAG}
${head.trimmed}${OPENING_CODE_TAG}${tail.trimmed}${CLOSING_CODE_TAG}${this.options.docContext.suffix}`,
},
{
speaker: 'assistant',
text: `Here is the code snippet aligned with your guidelines: ${OPENING_CODE_TAG}${tail.trimmed}`,
},
]
On closer examination, comparing my own prompt with the existing prompt from @abeatrix, I found that the prompts introduced in the latest PRs perform much better, so trying out my prompt wouldn't be super helpful.
Regardless, this was a good learning experience. And I am very optimistic about a future with much better infill models that are finetuned specifically for the problem we are trying to solve. I would be very happy to try those out in the future.
As an additional note: right now, I am working on the issue of https://github.com/sourcegraph/cody/issues/585 and I have many thoughts on that one. I would love to text you or @abeatrix on Discord to help me understand the problem better.
@arafatkatze happy to chat! Im bad at keeping track with discord/social messages though, so if you don't hear back from me feel free to leave a comment on the issue you linked and ping me!
|
gharchive/pull-request
| 2023-09-08T23:12:18 |
2025-04-01T06:40:26.162542
|
{
"authors": [
"abeatrix",
"arafatkatze",
"philipp-spiess"
],
"repo": "sourcegraph/cody",
"url": "https://github.com/sourcegraph/cody/pull/990",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
908259311
|
Standardise files with files in sous-chefs/repo-management
Signed-off-by: Dan Webb dan.webb@damacus.io
Released as: 2.0.1
|
gharchive/pull-request
| 2021-06-01T12:51:10 |
2025-04-01T06:40:26.276592
|
{
"authors": [
"damacus",
"kitchen-porter"
],
"repo": "sous-chefs/htpasswd",
"url": "https://github.com/sous-chefs/htpasswd/pull/49",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2311849383
|
fix(styles): fix import.meta.env
explicitly declared import.meta.env type as Env.ImportMeta
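A minimal sketch of that declaration (assuming an Env.ImportMeta interface already defined in the project's typings; the file path is illustrative):
// src/typings/vite-env.d.ts
// Give import.meta.env an explicit type instead of Vite's default.
interface ImportMeta {
  readonly env: Env.ImportMeta;
}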
Not sure whether your config is wrong, but the project's own type inference works correctly; if you need the check, you can add it yourself.
|
gharchive/pull-request
| 2024-05-23T03:15:12 |
2025-04-01T06:40:26.302311
|
{
"authors": [
"Azir-11",
"wynn-w"
],
"repo": "soybeanjs/soybean-admin",
"url": "https://github.com/soybeanjs/soybean-admin/pull/444",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
520281200
|
Confirmation::wait() blocks forever on failure when running on Windows
Executing the following code on Windows and Linux (actually WSL running on the same machine to demonstrate that network/firewall are not at fault) and with rabbitmq halted, the Windows executable blocks forever or at least until interrupted with control-c.
Under Linux it will, as expected, return immediately and produce output: "Connection failed: ConnectionRefused"
This was tested with lapin 0.28.1.
use lapin::{Connection, ConnectionProperties};
fn main() {
match Connection::connect("amqp://127.0.0.1:5672/%2f", ConnectionProperties::default()).wait() {
Ok(_conn) => println!("Connected"),
Err(e) => println!("Connection failed: {:?}", e) ,
};
}
As an addendum, with logging set to trace, it prints the following (and then hangs)
[2019-11-09T00:46:17Z ERROR lapin::io_loop] error reading: IOError(Os { code: 10057, kind: NotConnected, message: "A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied." }) [2019-11-09T00:46:17Z ERROR lapin::connection] Connection error
Interesting, I'm getting strange behavior as well, no errors, and I'm running on Windows. However, my issue is it connects to the queue successfully every time, but it stops processing messages part way through. If I restart it may process everything, then if I restart again it might just process 3 things, etc.
Hi,
Fwiw, I'm working on better error reporting; once I'm done with this, we might get better insight into what's going on, but it looks like mio works differently on Windows.
I do not have a Windows system to test this though.
Any chance you could retry this with lapin 0.30?
Error reporting will hopefully be better
redoing the test with lapin 0.30.1 looks similar:
Running `target\debug\lapin_test.exe`
[2020-02-26T22:50:58Z TRACE mio::poll] registering with poller
[2020-02-26T22:50:58Z TRACE mio::sys::windows::selector] register Token(1) Readable | Writable
[2020-02-26T22:50:58Z TRACE mio::sys::windows::tcp] scheduling a connect
[2020-02-26T22:50:58Z DEBUG lapin::channels] create channel with id 0
[2020-02-26T22:50:58Z TRACE lapin::connection] connection send_frame; channel_id=0
[2020-02-26T22:50:58Z TRACE lapin::connection] connection set readable
[2020-02-26T22:50:58Z TRACE mio::poll] registering with poller
[2020-02-26T22:50:58Z TRACE mio::sys::windows::selector] reregister Token(1) Readable | Writable
[2020-02-26T22:50:58Z TRACE mio::sys::windows::selector] set readiness to (empty)
[2020-02-26T22:50:58Z TRACE mio::sys::windows::tcp] scheduling a read
[2020-02-26T22:50:58Z TRACE mio::sys::windows::selector] set readiness to Readable
[2020-02-26T22:50:58Z TRACE mio::sys::windows::selector] set readiness to Readable | Writable
[2020-02-26T22:50:58Z TRACE mio::poll] registering with poller
[2020-02-26T22:50:58Z TRACE mio::poll] registering with poller
[2020-02-26T22:50:58Z TRACE lapin::io_loop] io_loop run
[2020-02-26T22:50:58Z TRACE lapin::io_loop] io_loop poll
[2020-02-26T22:50:58Z TRACE mio::sys::windows::selector] select; timeout=Some(0ns)
[2020-02-26T22:50:58Z TRACE mio::sys::windows::selector] polling IOCP
[2020-02-26T22:50:58Z TRACE mio::sys::windows::selector] returning
[2020-02-26T22:50:58Z TRACE lapin::io_loop] io_loop poll done
[2020-02-26T22:50:58Z TRACE lapin::io_loop] io_loop do_run; can_read=true, can_write=true, has_data=true
[2020-02-26T22:50:58Z TRACE lapin::io_loop] will write to buffer: ProtocolHeader
[2020-02-26T22:50:58Z TRACE mio::sys::windows::selector] set readiness to Readable
[2020-02-26T22:50:58Z TRACE mio::sys::windows::tcp] scheduling a write of 8 bytes
[2020-02-26T22:50:58Z TRACE mio::sys::windows::tcp] write error: A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied. (os error 10057)
[2020-02-26T22:50:58Z TRACE mio::sys::windows::selector] set readiness to Readable | Writable
[2020-02-26T22:50:58Z TRACE lapin::io_loop] wrote 8 bytes
[2020-02-26T22:50:58Z TRACE mio::sys::windows::selector] set readiness to Writable
[2020-02-26T22:50:58Z TRACE mio::sys::windows::tcp] scheduling a read
[2020-02-26T22:50:58Z TRACE mio::sys::windows::selector] set readiness to Readable | Writable
[2020-02-26T22:50:58Z ERROR lapin::io_loop] error reading: IOError(Os { code: 10057, kind: NotConnected, message: "A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied." })
[2020-02-26T22:50:58Z ERROR lapin::connection] Connection error
Ok, I think I know how to « fix » this. It will remove the hang but your program still won't work though.
It seems that mio tells us that the tcp connection is connected, writable and readable, we then « successfully » write 11 bytes (or so mio tells us) and then hit this error when reading: not connected, which means we were never connected in the first place and the write failed, I guess.
Can you retry with 0.32?
v0.32 is much better; the call returns and hits the outer match statement as intended, rather than hanging. Thank you for taking the time to fix this.
[2020-02-27T10:39:44Z ERROR lapin::io_loop] error reading: IOError(Os { code: 10057, kind: NotConnected, message: "A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied." })
Connection failed: IOError(Os { code: 10057, kind: NotConnected, message: "A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied." })
[2020-02-27T10:39:44Z ERROR lapin::connection] Connection error
|
gharchive/issue
| 2019-11-08T23:24:10 |
2025-04-01T06:40:26.349693
|
{
"authors": [
"Keruspe",
"ajnewlands",
"andrewbanchich"
],
"repo": "sozu-proxy/lapin",
"url": "https://github.com/sozu-proxy/lapin/issues/216",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1519183611
|
New tool. Yes, a plasma cutter
Adds a new tool - a plasma cutter, used for destroying structures, self-defense, and breaking into lockers (for the syndicate). In the future I will make a syndicate version that can be bought in the uplink.
For deconstructing things there is the RCD. And for breaking into lockers it would be better to write separate code.
For deconstructing things there is the RCD. And for breaking into lockers it would be better to write separate code.
The plasma cutter was made precisely as a cut-down version of the RCD, because the RCD is far from available to everyone, and its charge count is small.
And I saw no point in writing a separate script for lock-breaking; yes, it could be done through events, but the functionality would be the same, and it is simpler, when the object is clicked, to just check for a lock and remove it.
Send this to the upstream repo, they will review the code better there, but right now it doesn't even build.
Yep, I'll sort everything out, finish it, and submit it.
|
gharchive/pull-request
| 2023-01-04T15:38:43 |
2025-04-01T06:40:26.425292
|
{
"authors": [
"Abobadadada",
"LarryRussian",
"Morb0"
],
"repo": "space-syndicate/space-station-14",
"url": "https://github.com/space-syndicate/space-station-14/pull/703",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1550196486
|
Toggle lights command keeps resetting itself
Description
It's quite annoying when you gotta see some administrative shit but gotta turn it back on every 1 or 2 minues
Reproduction
be admin
turn on toggle lights command
wait 1 to 3 minutes
???
profit
I think it's something to do with player-specific state handling or the likes as it doesn't seem to happen locally but happens frequently on live.
Ive never experienced this.
https://github.com/space-wizards/space-station-14/pull/15053
|
gharchive/issue
| 2023-01-20T01:40:29 |
2025-04-01T06:40:26.428607
|
{
"authors": [
"DrSmugleaf",
"IProduceWidgets",
"Mirino97",
"metalgearsloth"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/issues/13606",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2124452854
|
Device linking needs prediction
Title, flipping levers / opening linked doors feels bad.
a lot/all packet handling is serverside so this would be big
New to networking in SS, so I'll give this a go to figure it out.
What I've gathered so far:
Server/Client/Shared folders determine what code runs in which compilations
Components/fields are replicated either manually (gotta figure out how), or with [AutoGenerateComponentState]/ComponentHandleState Event per Tick() (oh no!?).
So imma try move InvokePort() and whatever it calls into Shared and auto replicate DeviceLinkSinkComponent
as a starting point, then manually replicate the fields OnActivate() once that works (if I got this right...).
Read like all existing net replication code, made hundreds of lines of changes before noticing I forgor to build
the new branch to begin with so LSP was broken and I rage quit.
If another newer person wants to take a shot at this, feel free but not obligated to hmu for a team up.
|
gharchive/issue
| 2024-02-08T06:33:53 |
2025-04-01T06:40:26.431435
|
{
"authors": [
"Bixkitts",
"deltanedas",
"metalgearsloth"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/issues/25042",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1329468574
|
Melee refactor
Goals:
prediction
fix the attack effects jank (e.g. xenos) and kill effectsystem
toggle precision as an action
Fadeout for the red hit marker rather than on / off
thrust / swing per weapon (which I think we have anyway)
Fix the code being jank so it's easier to make changes
windup?
Requires https://github.com/space-wizards/space-station-14/pull/8475 because I want predicted component changes.
Requires https://github.com/space-wizards/space-station-14/pull/8475 because I want predicted component changes.
if the requirement is the reason the PR was closed, that should hopefully be fixed soon, unless there's some issues with the PR,
Requires #8475 because I want predicted component changes.
if the requirement is the reason the PR was closed, that should hopefully be fixed soon, unless there's some issues with the PR,
I mainly need predicted component changes on client for active melee weapon accumulators; if the PR doesn't fix my issue I can just find a workaround.
|
gharchive/pull-request
| 2022-08-05T05:30:47 |
2025-04-01T06:40:26.435798
|
{
"authors": [
"ElectroJr",
"metalgearsloth"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/pull/10319",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1706651584
|
add oily oaf drink
About the PR
mix 2u manly dorf and 1u soda water to get 3u of oily oaf
ROCK AND STONE FOREVER!
Media
[X] I have added screenshots/videos to this PR showcasing its changes ingame, or this PR does not require an ingame showcase
Changelog
:cl:
add: Added the Oily Oaf drink; mix manly dorf and soda water to get it.
I think changing the full sprite of the manly dorf and its taste description would be better
|
gharchive/pull-request
| 2023-05-11T22:22:55 |
2025-04-01T06:40:26.438557
|
{
"authors": [
"AJCM-git",
"deltanedas"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/pull/16349",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2553264858
|
Swap the advanced tool borg modules omnitool for jaws and a power drill
About the PR
The Engineering Cyborg Advanced Tool Module no longer has an omnitool, but instead jaws of life and a power drill, with the network configurator being swapped for a multitool.
Why / Balance
The omnitool is a pain to use since you have to spam Z to cycle to the correct tool; this loadout is much easier to use. Also, this way the borg keeps a blunt weapon (the jaws) for simplemobs etc., which it would otherwise lose since the module loses its toolbar and the welder is a much worse blunt weapon.
Media
Not necessary.
Requirements
[X] I have read and am following the Pull Request and Changelog Guidelines.
[X] I have added media to this PR or it does not require an ingame showcase.
Changelog
:cl: BramvanZijp
tweak: The Engineering Cyborg's Advanced Tool Module now has jaws of life and a power drill instead of an omnitool, with a multitool replacing the network configurator.
Jaws let it open bolted doors. Probably don't want that.
i hate omnitool good pr
I still think this is the wrong approach to the problem. If the problem is with the omnitool, fix the omnitool instead of removing it from the borg module.
As for keeping a blunt weapon for simplemobs with the jaws: the module is not meant for combat, so that is irrelevant.
Omnitool is a pain to use, so this will be a really nice change until the omni gets reworked or modified to be less annoying to use.
Should be fine 👍
Thank you for your contribution.
|
gharchive/pull-request
| 2024-09-27T15:56:51 |
2025-04-01T06:40:26.444390
|
{
"authors": [
"0x6273",
"BramvanZijp",
"IProduceWidgets",
"Radezolid",
"deltanedas",
"slarticodefast"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/pull/32487",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2582380012
|
Allow gas to escape from burning things and add gas emitting candles.
About the PR
Now flammable things can emit gases while they burn.
Also adds gas-emitting candles.
Why / Balance
Slow push toward miner removal?
A crate of air gas candles can be ordered from cargo for 2500 spesos and provides 32000 moles of 3-nitrogen/1-oxygen. The catch is you have to burn the candles, collect the gas, and handle the hot temperatures they produce. These are the only candles available to players.
The candles added emit roughly 1500 mol per candle if all the gas is successfully retained. The exact amount produced depends on temperature/pressure regulation.
The flammable changes probably won't even be noticed by players. You still die, and the damage from actually burning has not changed. The only difference is you will no longer become surface-of-the-sun hot from being briefly on fire. This might lead to fewer ashings, as high temperatures (which are very hard to alleviate in game) also cause heat damage.
Technical details
I changed the logic of how the flammable component handles increasing heat. Previously it was just additive and led to absurd temperatures approaching or even surpassing the surface of the sun in a matter of minutes. Now that doesn't happen.
I added a limit to the temperature that a burning entity will increase to. This limit can be changed, and might be a good idea to do so depending on accelerants splashed or other factors, but I leave that as an exercise for the reader.
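A minimal sketch of that clamp, as a helper inside the flammable system; MaximumFireTemperature is a hypothetical name for the new cap, while the JoulesPerFirestack * FireStacks heat call is quoted from the review thread below:
```csharp
// Per atmos update, for each burning entity. Assumes this lives in the
// flammable EntitySystem with _temperatureSystem as an injected dependency.
private void AddFireHeat(EntityUid uid, FlammableComponent flammable)
{
    if (!TryComp<TemperatureComponent>(uid, out var temperature))
        return;

    // Stop adding heat once the entity hits the fire's temperature cap,
    // instead of additively racing toward sun-surface temperatures.
    if (temperature.CurrentTemperature >= flammable.MaximumFireTemperature)
        return;

    _temperatureSystem.ChangeHeat(
        uid,
        flammable.JoulesPerFirestack * flammable.FireStacks,
        ignoreHeatResistance: false,
        temperature);
}
```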
Media
Requirements
[x] I have read and am following the Pull Request and Changelog Guidelines.
[x] I have added media to this PR or it does not require an ingame showcase.
Breaking changes
Changelog
:cl:
add: Cargo can now order bargain priced gas candles for the Engineering Department!
Please be aware that our pull request guidelines ask that you split up unrelated refactoring changes (e.g. removing [ViewVariables] for unrelated fields) in a separate PR. This makes reviewing your actual PR easier, reduces merge conflicts, and makes it easier to revert if things go wrong.
In place of RequiresOxygen, perhaps it would generalize better to have a GasMixture? InputGasMix, similar to how you define a new GasMixture? EmissiveGasMix. That could extend more cleanly to different types of burning things.
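A sketch of that suggestion, with hypothetical field names on the flammable component:
```csharp
using Content.Shared.Atmos;
using Robust.Shared.GameObjects;
using Robust.Shared.Serialization.Manager.Attributes;

// Hypothetical generalization: describe both sides of the burn as gas
// mixtures instead of a RequiresOxygen bool. Field names are illustrative.
public sealed partial class FlammableComponent : Component
{
    // Gas consumed per burn tick; null means "burns in anything".
    [DataField]
    public GasMixture? InputGasMix;

    // Gas released per burn tick (what this PR adds for the candles).
    [DataField]
    public GasMixture? EmissiveGasMix;
}
```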
Stepping back a bit and considering gameplay design issues: What problem does this solve? If it's about finding a way to supply the station with gas that isn't a miner, isn't this like a super beefed gas canister? Now I can see why you want to make burning the candles require a bit more work. But I see two issues here:
Functionally, candles are still pretty much just less useful canisters (even canisters have pressure limits)
The source of gas (via these candles) is still cargo, which several folks have objected to in the past
Would it be possible to name the candles something realistic? Real oxygen candles used in submarines are sodium chlorate, which on burning produces oxygen and salt. For nitrogen it could be ammonium nitrate, since it thermally decomposes into nitrogen and water.
A different PR could make it possible for chemistry to produce them in a way similar to plastic (upon mixing the proper reagents it pops out of the beaker). I envision medical trying to keep people alive in a situation where station atmos is fucked, and chemists can make these oxygen candles to pressurize the medical area and save the day.
The canisters have the unfortunate tendency to be accidentally emptied when people refill their tanks, so a constant supply of oxygen filling a room would be a nice alternative for dealing with station-wide atmos issues.
Okay, so I got clued into more of this bug with the runaway temps when I noticed that the temperatures between the flammable entities and the local atmos weren't equalizing at all. I addressed that, so although this isn't "superconduction", it's... conduction lmao.
This is pretty closely related to #319 because it implements temperature as a more interactive atmos mechanic.
All that said, I think the presently-PR'd heat-addition approach (_temperatureSystem.ChangeHeat(uid, flammable.JoulesPerFirestack * flammable.FireStacks, false, temp);) is the right way to go. Thanks for working on this and for your patience while we clean up the last bits.
|
gharchive/pull-request
| 2024-10-12T00:50:28 |
2025-04-01T06:40:26.455365
|
{
"authors": [
"IProduceWidgets",
"Partmedia",
"daevidos"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/pull/32760",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|