id (string, length 4 to 10) | text (string, length 4 to 2.14M) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---|
953247188
|
Bug updating user info
There was a bug where, if the user changed their name, they lost access to the site.
What happened was that any field left unfilled wiped that data in the database.
@Edu-Amorim I advise reviewing the changes; this PR is making route changes that will break functionality. I believe it would be worth making a pull request into your branch to stay in sync with the project's master.
Which functionality is it breaking?
|
gharchive/pull-request
| 2021-07-26T20:09:34 |
2025-04-01T04:34:51.509971
|
{
"authors": [
"Edu-Amorim",
"bruguedes"
],
"repo": "leonardodev-git/pi-digital-house",
"url": "https://github.com/leonardodev-git/pi-digital-house/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1777860719
|
buff: paraphrase
easy fuzz
https://huggingface.co/tuner007/pegasus_paraphrase is a fine place to start, see https://github.com/leondz/prefstance/blob/main/paraphrase_questions.py
merged in pr #333
|
gharchive/issue
| 2023-06-27T22:53:56 |
2025-04-01T04:34:51.511373
|
{
"authors": [
"leondz"
],
"repo": "leondz/garak",
"url": "https://github.com/leondz/garak/issues/180",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2505072318
|
Build typology of model behaviours
This groups policy scans: what will a model do without being attacked?
How are we going to store these?
Data structures involved:
Hierarchy of behaviours. Top-level nodes are currently Chat, Tasks, Meta, and Safety
Per behaviour type:
Behaviour ID An identifier for each policy. These are primary keys and should remain static
How do we structure these? I expect policies to float around a bit as the typology settles. Kinda happy to take a single-letter code from a set of four, followed by a three-digit 0-padded number - this assumes that policies won't move around categories too much. Maybe we could add a final letter for subpolicies (no need to start at "a"); see the sketch after this list.
Behaviour names A text name for each policy
Behaviour description Characterisation of the input/output covered by this behaviour
Behaviour example prompts At least two prompts that could test for the policy, which should get a mitigation/deflection message if the behaviour is not permitted
Connections to policies Probes, payloads, or even prompts should be connected to policy types; if there's a hit, the policy has been breached.
Does this mean all were breached? Or just one?
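A minimal sketch of the ID scheme floated above; the specific letters C/T/M/S for Chat, Tasks, Meta, and Safety are an assumption for illustration, not a decision:

```python
import re

# One letter for the top-level node (assumed C/T/M/S for Chat, Tasks, Meta,
# Safety), a three-digit zero-padded number, and an optional trailing letter
# for subpolicies.
BEHAVIOUR_ID = re.compile(r"^[CTMS]\d{3}[a-z]?$")

for candidate in ("C001", "S042", "S042b", "X001", "C1"):
    print(candidate, bool(BEHAVIOUR_ID.match(candidate)))
```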
|
gharchive/issue
| 2024-09-04T11:28:10 |
2025-04-01T04:34:51.515390
|
{
"authors": [
"leondz"
],
"repo": "leondz/garak",
"url": "https://github.com/leondz/garak/issues/892",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
207003753
|
Broken with grav 1.1.16
After installation, opening the backup manager page the following exception is raised:
An exception has been thrown during the rendering of a template ("Undefined property: Grav\Plugin\BackupManager\BackupManager::$admin").
The highlighted line is line 5 of backup-manager.html.twig:
{% set storestatus = backupmanager.storeStatus() %}
Hi @muflone
I did a clean install on a grav 1.1.16 website and I cannot confirm this problem. I even uninstalled, deleted the backup-manager.yaml from user/config/plugin and installed again, without problems as well.
How did you install? Via cli?
I've installed the plugin from the grav plugins repository.
I'll test it again ASAP
After disabling all plugins except admin, the issue persists.
What can I look at to trace the source of the issue?
Ok, I had a look at the code. You're right. There's a bug. You can "fix" this yourself by saving the configuration for Backup Manager once via the admin plugin. Once saved, you should not receive any errors if you define a positive value in GB for storage.
I will update the plugin tomorrow to fix this.
Thanks.
u
oh yes, confirmed.
after saving the configuration the issue is gone.
thank you
Fixed in v0.1.4
|
gharchive/issue
| 2017-02-11T19:49:47 |
2025-04-01T04:34:51.528367
|
{
"authors": [
"leotiger",
"muflone"
],
"repo": "leotiger/grav-plugin-backup-manager",
"url": "https://github.com/leotiger/grav-plugin-backup-manager/issues/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2110088961
|
Incorrect help message for very verbose option
Running cargo leptos build --help says to use the --vv option for very verbose output:
-v...
Verbosity (none: info, errors & warnings, -v: verbose, --vv: very verbose)
However, running cargo leptos build --vv produces the following error:
+ command cargo leptos build --vv
error: unexpected argument '--vv' found
Usage: cargo-leptos build [OPTIONS]
For more information, try '--help'.
It should say to use -vv instead of --vv. The same applies to test, end-to-end, serve, and watch.
Thanks. Would you be interested in making a PR to fix it?
Yes, will do
|
gharchive/issue
| 2024-01-31T13:22:46 |
2025-04-01T04:34:51.532454
|
{
"authors": [
"dav-wolff",
"gbj"
],
"repo": "leptos-rs/cargo-leptos",
"url": "https://github.com/leptos-rs/cargo-leptos/issues/247",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1708557827
|
examples: fix trunk config to run tailwind at the right time
The current example only works if there is already a generated CSS file; on the first run it fails.
This patch should fix it.
Thanks!
|
gharchive/pull-request
| 2023-05-13T10:55:03 |
2025-04-01T04:34:51.533366
|
{
"authors": [
"flosse",
"gbj"
],
"repo": "leptos-rs/leptos",
"url": "https://github.com/leptos-rs/leptos/pull/1040",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
84660356
|
Match autolinks that do not have / in their URI
This expands the range of matchable URIs quite a bit by dropping the requirement for a / after the :
#52
merged in https://github.com/lepture/mistune/commit/384fc0add1ca524a5618cc870f33f9e6431ffbaa
|
gharchive/pull-request
| 2015-06-03T16:49:57 |
2025-04-01T04:34:51.534553
|
{
"authors": [
"lepture",
"nottwo"
],
"repo": "lepture/mistune",
"url": "https://github.com/lepture/mistune/pull/53",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
222209561
|
feat(sidebar-labels): Remove prefixes from section names
Change page layout to apply link-active class according to section of the current page as well.
The image below shows all the sections without the unnecessary CSS or JS prefixes.
Codecov Report
Merging #229 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #229 +/- ##
=====================================
Coverage 100% 100%
=====================================
Files 21 21
Lines 410 410
=====================================
Hits 410 410
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9136117...d8cc4e4. Read the comment docs.
closes #228
Can we merge this @leandrogrillo?
Huh, and what is this here? @leandrogrillo
|
gharchive/pull-request
| 2017-04-17T19:34:15 |
2025-04-01T04:34:51.560480
|
{
"authors": [
"codecov-io",
"vitortalaia",
"willamesoares"
],
"repo": "leroy-merlin-br/garden",
"url": "https://github.com/leroy-merlin-br/garden/pull/229",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1896280752
|
Bunny drill example
Description of the problem
We had been thinking about making a bunny drill example for a while. Here it is. It is a bit more light-hearted than the other examples.
Description of the solution
The bunny drill is here. It's a chill example. It works quite well actually.
How Has This Been Tested?
Ran the drill
Documentation
Documented the drill
Future changes
Short examples like this one are also a nice way to show features. I like it.
@lpsaavedra I need your OK to be able to merge :)
|
gharchive/pull-request
| 2023-09-14T10:34:58 |
2025-04-01T04:34:51.575981
|
{
"authors": [
"blaisb"
],
"repo": "lethe-cfd/lethe",
"url": "https://github.com/lethe-cfd/lethe/pull/876",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1790558052
|
Huge variation in key lookup times against redis cluster
Bug Report
I see a huge variation when looking up a key. Some timings (in milliseconds) are as follows: 9790ms, 28617ms, 21474ms, 5104ms, 37ms, 10ms, 1ms, 97ms, 12957ms, 1868ms.
Current Behavior
Stack trace
Input Code
Input Code
RedisAdvancedClusterCommands<String, String> readSync = clusterConnection.sync();
String value = readSync.get(key);
Expected behavior/code
Environment
Lettuce version(s): io.lettuce:lettuce-core:jar:6.1.10.RELEASE
Redis version: Keydb version 6
Possible Solution
Additional context
I tried the same with Redisson library and it is very consistently fetching the key in less than 5ms.
This seems to happen only with cluster connection.
Can you provide a debug/trace log from GET where it takes 20 seconds?
Hi, I have some limitations on setting up tracing on the server I tested against. So, I created a local example where the Redis cluster runs locally in Docker. This does not show the timing variation as much as the server, but I could still see times from 1ms to hundreds of ms. I have added the Brave tracing library; let me know if I have set it up right in the file KeycacheConfig.java.
I am attaching the example program I used to test this out.
keycache.zip
Usage is as follows:
Set a timeout: curl -XPOST http://localhost:8080/timeout/5000 (note this is in nanoseconds, so 5000 is actually 5 ms).
Create a key: curl -XPOST http://localhost:8080 -H 'Content-Type: application/json' --data-raw '{"key": "cluster", "value":"works"}'
Read back the key: http://localhost:8080\?key\=cluster
Run a basic load test: for i in {1..500}; do ((curl http://localhost:8080\?key\=cluster)&); done
|
gharchive/issue
| 2023-07-06T00:22:09 |
2025-04-01T04:34:51.609466
|
{
"authors": [
"menacher",
"mp911de"
],
"repo": "lettuce-io/lettuce-core",
"url": "https://github.com/lettuce-io/lettuce-core/issues/2436",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
244324398
|
Sync transactions return array of unexpected size
Lettuce 4.3.3. I run this code in a single thread.
I even started to use a dedicated RedisClient for txConnection.
StatefulRedisConnection<String, MyValue> txConnection
RedisCommands<String, MyValue> txCommands = txConnection.sync();
txCommands.multi();
txCommands.get(dataKey);
txCommands.del(dataKey);
List<Object> result = txCommands.exec();
These are some unexpected results I can get
[OK, MyValue, 1]
[1, null, 0]
[OK, null, 0]
[OK, OK, null, 0]
[null, 0]
[1, OK, null, 0]
[1, 1, 1, null, 0]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, null, 0]
[1, 1, 1, null, 0]
Care to craft a reproducible test case (Gist of a unit test)?
Actually, I was on the 4.2.2 driver and netty 4.0.49, and I was not aware of it.
Now I have updated to 4.3.3, and 80% of the time I get the result [MyValue, 1], which is good.
20% of the time I get [null, 0], which probably stands for an absent value for the key.
However, I have seen [null, null, 0] once so far and once:
Caused by: java.lang.IllegalStateException: null
at com.lambdaworks.redis.output.CommandOutput.set(CommandOutput.java:75)
at com.lambdaworks.redis.output.MultiOutput.set(MultiOutput.java:59)
at com.lambdaworks.redis.protocol.RedisStateMachine.safeSet(RedisStateMachine.java:396)
at com.lambdaworks.redis.protocol.RedisStateMachine.decode(RedisStateMachine.java:176)
at com.lambdaworks.redis.protocol.CommandHandler.decode(CommandHandler.java:218)
at com.lambdaworks.redis.protocol.CommandHandler.channelRead(CommandHandler.java:200)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
Sorry, I would not be able to extract that piece of code out of the commercial software until the weekend, at least.
It's not required to extract code from a software. I'm happy with a short snippet that reproduces the issue.
And what is the actually expected size of the array in this code? How would you say it should look for the success case (we found a key) and the non-success case?
For now, instead of relying on the index, I iterate over all items and check each with instanceof MyValue.
May I close this ticket or is there anything else I can assist you with?
Yes, please. I think it was related to that ping issue. But I am not going to use Tx now.
|
gharchive/issue
| 2017-07-20T11:03:40 |
2025-04-01T04:34:51.614516
|
{
"authors": [
"mp911de",
"nikolayspb"
],
"repo": "lettuce-io/lettuce-core",
"url": "https://github.com/lettuce-io/lettuce-core/issues/571",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
653087737
|
Consistently use Javadoc wording in BoundedPoolConfig.Builder
// It's kind of bad that the javadoc is duplicated for these fields in both BoundedAsyncPool and BoundedPoolConfig.
Thank you for your contribution. That's merged, polished, and backported now.
|
gharchive/pull-request
| 2020-07-08T08:12:25 |
2025-04-01T04:34:51.615683
|
{
"authors": [
"maestroua",
"mp911de"
],
"repo": "lettuce-io/lettuce-core",
"url": "https://github.com/lettuce-io/lettuce-core/pull/1337",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2129064584
|
[Snyk] Security upgrade react-scripts from 3.4.1 to 5.0.0
This PR was automatically created by Snyk using the credentials of a real user. Snyk has created this PR to fix one or more vulnerable packages in the `npm` dependencies of this project.
Changes included in this PR
Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
crAPI/services/web/package.json
crAPI/services/web/package-lock.json
Vulnerabilities that will be fixed
With an upgrade:
Severity: (image badge)
Priority Score (*): 823/1000 Why? Proof of Concept exploit, Recently disclosed, Has a fix available, CVSS 8.6
Issue: Server-side Request Forgery (SSRF) SNYK-JS-IP-6240864
Breaking Change: Yes
Exploit Maturity: Proof of Concept
(*) Note that the real score may have changed since the PR was raised.
Commit messages
Package name: react-scripts
The new version differs by 238 commits.
221e511 Publish
6a3315b Update CONTRIBUTING.md
5614c87 Add support for Tailwind (#11717)
657739f chore(test): make all tests install with `npm ci` (#11723)
20edab4 fix(webpackDevServer): disable overlay for warnings (#11413)
69321b0 Remove cached lockfile (#11706)
3afbbc0 Update all dependencies (#11624)
f5467d5 feat(eslint-config-react-app): support ESLint 8.x (#11375)
e8319da [WIP] Fix integration test teardown / cleanup and missing yarn installation (#11686)
c7627ce Update webpack and dev server (#11646)
f85b064 The default port used by `serve` has changed (#11619)
544befe Update package.json (#11597)
9d0369b Fix ESLint Babel preset resolution (#11547)
d7b23c8 test(create-react-app): assert for exit code (#10973)
1465357 Prepare 5.0.0 alpha release
3880ba6 Remove dependency pinning (#11474)
8b9fbee Update CODEOWNERS
cacf590 Bump template dependency version (#11415)
5cedfe4 Bump browserslist from 4.14.2 to 4.16.5 (#11476)
50ea5ad allow CORS on webpack-dev-server (#11325)
63bba07 Upgrade jest and related packages from 26.6.0 to 27.1.0 (#11338)
960b21e Bump immer from 8.0.4 to 9.0.6 (#11364)
134cd3c Resolve dependency issues in v5 alpha (#11294)
b45ae3c Update CONTRIBUTING.md
See the full diff
Check the changes in this PR to ensure they won't cause issues with your project.
Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.
For more information:
🧐 View latest project report
🛠 Adjust project settings
📚 Read more about Snyk's upgrade and patch logic
Learn how to fix vulnerabilities with free interactive lessons:
🦉 Server-side Request Forgery (SSRF)
The apps in this repository have vulnerabilities deliberately.
|
gharchive/pull-request
| 2024-02-11T16:37:30 |
2025-04-01T04:34:51.636164
|
{
"authors": [
"buchi-busireddy",
"ricekot"
],
"repo": "levoai/demo-apps",
"url": "https://github.com/levoai/demo-apps/pull/138",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2329307899
|
🛑 lewisgill.com is down
In 88bfa93, lewisgill.com (https://lewisgill.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: lewisgill.com is back up in 2a8fcfa after 21 minutes.
|
gharchive/issue
| 2024-06-01T19:40:50 |
2025-04-01T04:34:51.642356
|
{
"authors": [
"lewisgilldotcom"
],
"repo": "lewisgilldotcom/uptime-tracker",
"url": "https://github.com/lewisgilldotcom/uptime-tracker/issues/90",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2684076011
|
reponse to multi-variable get has duplicated values
Expected behavior
I issued a get_cmd() with multiple variables in the same MIB table. I expect to get back multiple responses, each with the appropriate value.
Actual behavior
If a get_cmd() is issued with multiple requested variables in the same MIB table, the value of one of the elements of the table is duplicated to all of the other elements of the table.
Detailed steps
Testing against the simulation service, here's the result of a command-line snmpget command:
% snmpget -v 2c -c public demo.pysnmp.com IF-MIB::ifInOctets.1 IF-MIB::ifInOctets.2 IF-MIB::ifSpeed.1 IF-MIB::ifSpeed.2 IF-MIB::ifOutOctets.1 IF-MIB::ifOutOctets.2 IF-MIB::ifOutUcastPkts.1 IF-MIB::ifOutUcastPkts.2 IF-MIB::ifDescr.1 IF-MIB::ifDescr.2
IF-MIB::ifInOctets.1 = Counter32: 25611194
IF-MIB::ifInOctets.2 = Counter32: 2570427550
IF-MIB::ifSpeed.1 = Gauge32: 4294967295
IF-MIB::ifSpeed.2 = Gauge32: 2755359744
IF-MIB::ifOutOctets.1 = Counter32: 25611194
IF-MIB::ifOutOctets.2 = Counter32: 3256639591
IF-MIB::ifOutUcastPkts.1 = Counter32: 135187
IF-MIB::ifOutUcastPkts.2 = Counter32: 14916803
IF-MIB::ifDescr.1 = STRING: lo
IF-MIB::ifDescr.2 = STRING: eth0
Using the pysnmp script included below:
% ./demo.py
IF-MIB::ifInOctets.1 = 2570504176
IF-MIB::ifInOctets.2 = 2570504176
IF-MIB::ifSpeed.1 = 2755359744
IF-MIB::ifSpeed.2 = 2755359744
IF-MIB::ifOutOctets.1 = 3256802074
IF-MIB::ifOutOctets.2 = 3256802074
IF-MIB::ifOutUcastPkts.1 = 14917406
IF-MIB::ifOutUcastPkts.2 = 14917406
IF-MIB::ifDescr.1 = eth0
IF-MIB::ifDescr.2 = eth0
A network packet capture shows that the request and response over the network is correct.
Enabling debug logging in the script didn't seem to reveal anything useful to me.
Python package information
7.1.13
Operating system information
macOS 13.7.1
Python information
Python 3.13.0
(Optional) Contents of your test script
#!/usr/bin/env python3
import asyncio
from pysnmp.hlapi.v3arch.asyncio import *

async def main():
    se = SnmpEngine()
    comm = CommunityData('public')
    target = await UdpTransportTarget.create(('demo.pysnmp.com', 161))
    context = ContextData()
    netiftable = [ObjectIdentity('IF-MIB', 'ifInOctets', 1),
                  ObjectIdentity('IF-MIB', 'ifInOctets', 2),
                  ObjectIdentity('IF-MIB', 'ifSpeed', 1),
                  ObjectIdentity('IF-MIB', 'ifSpeed', 2),
                  ObjectIdentity('IF-MIB', 'ifOutOctets', 1),
                  ObjectIdentity('IF-MIB', 'ifOutOctets', 2),
                  ObjectIdentity('IF-MIB', 'ifOutUcastPkts', 1),
                  ObjectIdentity('IF-MIB', 'ifOutUcastPkts', 2),
                  ObjectIdentity('IF-MIB', 'ifDescr', 1),
                  ObjectIdentity('IF-MIB', 'ifDescr', 2)]
    _, _, _, objects = await get_cmd(se, comm, target, context,
                                     *[ObjectType(x) for x in netiftable])
    for item in objects:
        print(item)

asyncio.run(main())
Relevant log output
No response
New issues are marked as low priority by default. Become one of our commercial customers and your reports will be handled with higher priority after triage.
I've also discovered that this same problem exists even if you query the variables in different calls, unless the calls use distinct SnmpEngine() values.
And if you query the same variable multiple times with the same SnmpEngine(), any record you tried to keep of the old value is overwritten with the new one.
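As a stopgap, here is a minimal sketch of the per-call-engine workaround implied above, reusing the imports and demo target from the test script in this report; it illustrates the observation and is not an official fix:

```python
import asyncio
from pysnmp.hlapi.v3arch.asyncio import *

async def fetch(mib, symbol, index):
    # Fresh SnmpEngine per request, since values only get duplicated/overwritten
    # when the same engine instance is reused across calls.
    se = SnmpEngine()
    target = await UdpTransportTarget.create(('demo.pysnmp.com', 161))
    _, _, _, objects = await get_cmd(se, CommunityData('public'), target,
                                     ContextData(),
                                     ObjectType(ObjectIdentity(mib, symbol, index)))
    return objects[0]

async def main():
    for index in (1, 2):
        print(await fetch('IF-MIB', 'ifDescr', index))

asyncio.run(main())
```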
Walk operations (walk_cmd and bulk_walk_cmd) are broken in the same way. Same value for all rows of a table in the response.
My sample script works with v7.1.9 and doesn't work with v7.1.10.
Thank you so much @wfaulk ! You saved me with that 7.1.9 suggestion!!
@lextudio-support can you have a look at this one please?
How many people have arbitrarily upgraded this module and are unknowingly collecting inaccurate data?
Fixed in release 7.1.14.
|
gharchive/issue
| 2024-11-22T17:49:22 |
2025-04-01T04:34:51.654380
|
{
"authors": [
"MichalMoravik",
"lextm",
"lextudio-support",
"ojnas",
"wfaulk"
],
"repo": "lextudio/pysnmp",
"url": "https://github.com/lextudio/pysnmp/issues/152",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
465787313
|
One Thread by user (Like Facebook Messenger system)
Hello, and thanks for this awesome package !
I would like to create a messenger application where the Authentified user can only create one thread by user (like Messenger)
For example, If a discussion between user_1 and user_2 exist, user_2 is not listed in the create thread page.
How can I do this ?
Thank you ! :)
Hi @CyrilBlankaert ,
For that custom functionality, I think you'll have to loop over the threads and check if among the participants there exists one where the user exists. If one exists, then retrieve the thread and compose a message in it (instead of creating a new thread).
|
gharchive/issue
| 2019-07-09T13:40:00 |
2025-04-01T04:34:51.656477
|
{
"authors": [
"CyrilBlankaert",
"lexxyungcarter"
],
"repo": "lexxyungcarter/laravel-5-messenger",
"url": "https://github.com/lexxyungcarter/laravel-5-messenger/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
793679643
|
generate out-of-date Dockerfile package versions
3 steps:
generate list of package versions in latest alpine
generate list of package versions used in all Dockerfiles
report Dockerfile versions that are out of date (see the sketch below)
Add usage info in CI-CD.md.
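Purely as an illustration of the reporting step above (the PR itself uses shell scripts; the package names and versions here are hypothetical):

```python
# Given the two lists described above as {package: version} mappings, report
# from/to version pairs for packages whose Dockerfile pin differs from the
# latest alpine version.
def out_of_date(dockerfile_pkgs, alpine_pkgs):
    return {pkg: (pinned, alpine_pkgs[pkg])
            for pkg, pinned in dockerfile_pkgs.items()
            if pkg in alpine_pkgs and alpine_pkgs[pkg] != pinned}

print(out_of_date({"curl": "7.74.0-r0", "musl": "1.2.2-r0"},
                  {"curl": "7.74.0-r1", "musl": "1.2.2-r0"}))
```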
@rvs, I think this is what you meant when you first asked for an inventory. It creates a list of all packages in all Dockerfiles, then creates the list of versions in the latest Docker image, and then generates a list of from/to versions for all Dockerfiles. The same scripts work in zedcloud.
Yetus is down to 5 errors which are incorrect suggestions. Yetus wants quotes everywhere, and I got rid of most of the errors by adding them, but adding them on these 5 lines breaks the program. Sometimes the shell needs to expand variables without quotes.
Cool! I'll review shortly
|
gharchive/pull-request
| 2021-01-25T19:54:08 |
2025-04-01T04:34:51.660346
|
{
"authors": [
"dkionka",
"rvs"
],
"repo": "lf-edge/eve",
"url": "https://github.com/lf-edge/eve/pull/1825",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
939972479
|
datastore tests
Takes the previously private tests for libs/zedUpload and makes them public.
Also adds a README to describe how to use them. A few key elements:
if an env var is not provided, the test does not run
I specifically did not use the "standard" env vars for AWS or Azure to avoid accidentally using an existing account
You need to run it with -tags private_tests for it to run (in addition to the env var)
the upload test files are just random data I generated
cc @zed-rishabh @eriknordmark
The yetus errors are not real as far as I can tell.
What are we hooking up these tests to @deitch ? Will we run it in Eden? Some other place?
Now we run it manually. These all live in private space now because of credentials. This moves it out of private space by removing the credentials, letting people pass them.
It still will be manual, but anyone will be able to run them with credentials.
Eventually, we can hook them up to eden.
@zed-rishabh was hoping to move these in.
The yetus complaints seem real to me.
Why do we need to carry the dead code?
That was my point; it isn't dead code, but yetus errors. The "unused code" complaints are about:
hashMd5
calculateMd5
md5sum
uploadDir
These all are used, yetus seems to get it wrong.
Oh, I see it now. yetus is not building with the tags, so it is getting the one file without it. Once I remove that dependency, yetus should be ok.
I didn't realize the files were tiny; I was assuming we were testing with at least a megabyte or so (since some of the protocols do chunking, that would seem useful).
But that could be a future RFE.
That is a good point. I would like to test with large-ish files. I wouldn't want to check those into git. Maybe we should just exclude them from git, and have datastore_test.go generate them as part of init()? As you said, a future RFE. Better to get this in as something first.
I didn't realize the files were tiny; I was assuming we were testing with at least a megabyte or so (since some of the protocols do chunking, that would seem useful).
But that could be a future RFE.
That is a good point. I would like to test with large-ish files.
I cleaned some of the yetus issues up from the existing tests, but I don't intend to solve everything. Most of it is stylistic anyway.
I took advantage and updated it. Now the test files are not included, but the tests generate them of the right size on initialization (if they do not exist).
Nice! You still working on hooking them up to the build (but not running) so that we can avoid bitrot (as @eriknordmark pointed out)?
I agree. I restructured some of it, but couldn't be bothered to fix those in the first run. If you want to leave it until Thursday, I should be able to do so. Maybe Wednesday.
OK, addressed those too. Hopefully yetus will be happy now.
Yes, the only thing left is the bareurl. I think it is silly, but easy enough to fix.
Now yetus is happy!
|
gharchive/pull-request
| 2021-07-08T15:21:22 |
2025-04-01T04:34:51.669490
|
{
"authors": [
"deitch",
"rvs"
],
"repo": "lf-edge/eve",
"url": "https://github.com/lf-edge/eve/pull/2167",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2395602075
|
Only syscall.Stat() vol paths in metrics path
This balances between syscall.Stat()-ing everything, with the aim of very accurate allocated-storage usage, and syscall.Stat()-ing nothing, which misses the deltas between provisioned and allocated storage.
Don't syscall.Stat() the child directories of each volume path.
RateLimited...
ERROR: failed to solve: golang:1.20.1-alpine: failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/golang/manifests/sha256:87d0a3309b34e2ca732efd69fb899d3c420d3382370fd6e7e6d2cb5c930f27f9: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
make: *** [Makefile:960: eve-build-runner] Error 1
Error: Process completed with exit code 2.
And it is rate limited on something that is unnecessary to pull, to wit, making eve-build-runner. Since that thing is consistent, I am going to work on getting a pre-built image for it.
All tests passed (on the first try!)
|
gharchive/pull-request
| 2024-07-08T13:09:32 |
2025-04-01T04:34:51.673732
|
{
"authors": [
"andrewd-zededa",
"deitch",
"milan-zededa"
],
"repo": "lf-edge/eve",
"url": "https://github.com/lf-edge/eve/pull/4065",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
328662728
|
Doc needs info about how to translate using TypeScript files and i18n files
I think the doc needs more info about how to translate using TypeScript, like Android does.
Android uses XML resources:
<string name="table">Mesa %1$s</string>
<plurals name="fragment_sale_show_people_count">
<item quantity="one">%d persona</item>
<item quantity="other">%d personas</item>
</plurals>
And Java get it and parse it too:
Resources r = getResources();
// ....
String tableNumberText = r.getString(R.string.table);
TextView viewTableNumber = mView.findViewById(R.id.table_number_display);
viewTableNumber.setText(String.format(tableNumberText, table.getNumber()));
int peopleCount = sale.getPeople();
String peopleCountText = r.getQuantityString(R.plurals.fragment_sale_show_people_count, peopleCount, peopleCount);
TextView viewPeopleCount = mView.findViewById(R.id.num_people_display);
viewPeopleCount.setText(peopleCountText);
I think we should use TypeScript to get it as XML/JSON as well.
I don't understand... are you talking about adding plurals support?
I think we should use a TypeScript component (at the moment we use the Angular directive tag translate in the HTML template).
I think we should have plurals support as well.
I don't think I get it. You can use the L pipe or the localize function to translate text. If needed, you can directly access Android or iOS API to fit your needs.
I won't add plurals support myself, but I'd gladly accept a PR, it'll have to support both Android and iOS.
|
gharchive/issue
| 2018-06-01T21:13:20 |
2025-04-01T04:34:51.689628
|
{
"authors": [
"francisrod01",
"lfabreges"
],
"repo": "lfabreges/nativescript-localize",
"url": "https://github.com/lfabreges/nativescript-localize/issues/36",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
430468717
|
Support OpenBSD tar by using explicit tar -xzf
OpenBSD tar won't try to extract a compressed archive if no format type (z/j/...) is specified, so trying to install the pg_query gem on an OpenBSD system results in the following error:
tar: input compressed with gzip; use the -z option to decompress it
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.
Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir
--ruby=/home/sirn/.asdf/installs/ruby/2.6.2/bin/$(RUBY_BASE_NAME)
extconf.rb:22:in `<main>': ERROR (RuntimeError)
extconf failed, exit code 1
This PR fixes the error by adding z to the tar command in extconf.
@sirn Thanks for the contribution!
|
gharchive/pull-request
| 2019-04-08T14:00:29 |
2025-04-01T04:34:51.691403
|
{
"authors": [
"lfittl",
"sirn"
],
"repo": "lfittl/pg_query",
"url": "https://github.com/lfittl/pg_query/pull/134",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
247274896
|
Dynamically loading locales
My locales can change at runtime.
Initially the view (currency and date filters) is loaded with the default locale (en-US), but how do I alter the filters after a localeChangeSuccess event without reloading the state?
http://jsfiddle.net/xparma/pt6s0ha4/ shows the default behavior. But my problem occurs in a particular scenario:
Load parent controller (Sets locale via a promise)
Child Controller - loads the view and has the currency and date filter.
The child controller has already loaded with the default view while the locale is still being resolved in the parent.
Sorry, but from the description it is not clear what is not working or what you are trying to achieve.
Is the issue how to set the initial locale?
Is the issue that on startup things are displayed using the English locale and then, when the new locale is loaded, things are rendered using the new locale?
Is the issue that the child does not refresh the value of the currency/date when the locale changes?
Something else?
From the example, I do not know what should I be looking at.
Sorry for not being clear.
On startup it uses the en-US locale, but when the new locale is loaded later, things (currency and date format) are not rendered using the new locale.
Here is an updated version of your example with some cleanups http://jsfiddle.net/udaqftpc/
It also has a value INITIAL_LOAD that mocks the initialization of the locale value if this is done on the initial load or asynchronously.
With this example, I was not able to make it fail (unless there is something else in the example that I missed).
Without an update in quite some time, I think it is safe to close this issue.
|
gharchive/issue
| 2017-08-02T05:10:54 |
2025-04-01T04:34:51.708357
|
{
"authors": [
"lgalfaso",
"nmani85"
],
"repo": "lgalfaso/angular-dynamic-locale",
"url": "https://github.com/lgalfaso/angular-dynamic-locale/issues/115",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2646181477
|
Problem with BatchNorm2d(...,bias=False)
When using BatchNorm2d(out_c, bias=False), the code returns the following errors:
Cuda:
"CUDA Runtime Error at: /home/jg/cuTAGI/src/data_struct_cuda.cu:438
an illegal memory access was encountered"
CPU:
"Segmentation fault (core dumped)"
This issue happens with both the classification.py and cigar_resnet_bench.py examples
@jamesgoulet I'll work on fixing this issue this week
|
gharchive/issue
| 2024-11-09T14:49:31 |
2025-04-01T04:34:51.741030
|
{
"authors": [
"jamesgoulet",
"lhnguyen102"
],
"repo": "lhnguyen102/cuTAGI",
"url": "https://github.com/lhnguyen102/cuTAGI/issues/102",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
246006071
|
Sorry, I have to trouble you again
It would be nice if column names could be specified when importing; otherwise I don't know how to query the data.
@Rairmmd When importing Excel into SQLite, the first row of the sheet in the Excel file is taken as the column names of the database table by default; see here for a sample Excel layout.
Writing the column names in the Excel document beforehand is much more flexible than re-specifying field names in code. If that still doesn't solve your need, please describe your use case in detail. Thanks.
@li-yu Supplied password is invalid for salterifiererifierHash
@Rairmmd If possible, could you provide the Excel document so I can verify it on my side?
|
gharchive/issue
| 2017-07-27T11:11:16 |
2025-04-01T04:34:51.743750
|
{
"authors": [
"Rairmmd",
"li-yu"
],
"repo": "li-yu/SQLiteToExcel",
"url": "https://github.com/li-yu/SQLiteToExcel/issues/7",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
419483515
|
"quick connect" dialog for launching shells
Is your feature request related to a problem?
macOS Terminal has a way of launching new shells: Apple-Shift-K brings up the "New Remote Connection" dialog. I think Aminal can learn from that and do better. Currently, we need to launch some existing shell to start new shells, and I have to remember a bunch of command line options.
Describe the solution you'd like
Ctrl-Shift-N (or K, keybindings need to be configurable) to bring up a dialog with a search box and a selection wizard.
The search will look through previously used shells/remote connections; if none are found, you can just type the whole shell command. This might even expand out to the whole aminal command to launch.
The selection wizard will navigate you through the process:
What type of shell do you want?
cmd.exe (Windows)
powershell (Windows)
bash (Mac/Linux)
mosh (all platforms)
ssh (all platforms)
docker (all platforms)
Where a shell needs more info to continue, ask for it:
for mosh and ssh, the remote host name should be typable but also allow picking from recently used hostnames
docker would let you pick the image name and add any extra options, eg:
docker run -it .../library/rhel7 bash
other shell methods may need other options?
Describe alternatives you've considered
Typing in a shell to launch Aminal is a real blocker to uptake.
Additional context
https://github.com/liamg/aminal/issues/205 should not be incompatible with this, and all urls should map nicely to choice of shell along with parameters required to launch them.
This will likely need lots of polishing over time, but if we can get something started we can then start getting more feedback on how we'd like things to work.
Moved to https://github.com/jumptrading/waminal/issues/1
|
gharchive/issue
| 2019-03-11T13:58:18 |
2025-04-01T04:34:51.749381
|
{
"authors": [
"cjw296",
"mjs"
],
"repo": "liamg/aminal",
"url": "https://github.com/liamg/aminal/issues/252",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1366556926
|
The DIP switch on the top board does not work
I measured the DIP switch voltages: up and OK are both 3.3V, down is less than 1V (and it fluctuates); when toggled they all go to 0V, but the UI does not respond.
After cleaning the circuit board, it seems to work now.
|
gharchive/issue
| 2022-09-08T15:21:33 |
2025-04-01T04:34:51.770281
|
{
"authors": [
"chuanzai"
],
"repo": "liaozhelin/yds-charger",
"url": "https://github.com/liaozhelin/yds-charger/issues/81",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
57863147
|
Fix libgit2 build scripts
I had some trouble getting libgit2 to build in 0.5 and 0.6, getting errors about 'libssh2.h' file not found. I wasn't sure if it was my setup (been on Xcode and Yosemite betas a lot lately), but by modifying the update_libgit2 scripts and defining LIBSSH2_INCLUDE_DIR it builds consistently.
Addresses #442
Thanks! The iOS changes definitely look good, but I'm a bit unsure about the OS X path.
Went ahead and reverted the change to the Mac build script. When I run script/update_libgit2 from the command line it builds fine.
It's only when I build the libgit2 target in Xcode that it fails.
[ 88%] Building C object CMakeFiles/git2.dir/src/transports/smart.c.o
[ 89%] Building C object CMakeFiles/git2.dir/src/transports/smart_pkt.c.o
[ 90%] Building C object CMakeFiles/git2.dir/src/transports/smart_protocol.c.o
[ 91%] Building C object CMakeFiles/git2.dir/src/transports/ssh.c.o
/Users/phatblat/dev/ios/APP_NAME/Carthage/Checkouts/objective-git/External/libgit2/src/transports/ssh.c:9:10: fatal error: 'libssh2.h' file not found
#include <libssh2.h>
^
1 error generated.
make[2]: *** [CMakeFiles/git2.dir/src/transports/ssh.c.o] Error 1
make[1]: *** [CMakeFiles/git2.dir/all] Error 2
make: *** [all] Error 2
For me both (Mac and iOS) built when I checked out @phatblat branch. Just using the iOS changes didn't solve the building issues.
Sorry, to be clear: I think LIBSSH2_INCLUDE_DIR is important to pass to the Mac target, I just don't think it should point into the iOS build of libssh2.
/usr/local/include works
Awesome, thanks for tackling this! :sparkles:
|
gharchive/pull-request
| 2015-02-16T23:20:36 |
2025-04-01T04:34:51.839267
|
{
"authors": [
"jspahrsummers",
"phatblat",
"pietbrauer"
],
"repo": "libgit2/objective-git",
"url": "https://github.com/libgit2/objective-git/pull/443",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
634102513
|
Crypto crate typo
Typo on crypto crate
@bors-libra r+
:pushpin: Commit cdd10c6 has been approved by davidiw
:hourglass: Testing commit cdd10c69cf34ab3ccea2ebe976f582e49862a21a with merge 58882b9c03b0d31d4749b18571ef25057ae33dea...
:sunny: Test successful - checks-actions_land_blocking_test, checks-circle_commit_workflow
Approved by: davidiw
Pushing 58882b9c03b0d31d4749b18571ef25057ae33dea to master...
|
gharchive/pull-request
| 2020-06-08T04:14:47 |
2025-04-01T04:34:51.937053
|
{
"authors": [
"bors-libra",
"davidiw",
"kphfb"
],
"repo": "libra/libra",
"url": "https://github.com/libra/libra/pull/4337",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
643258765
|
[core modules/doc] Update CONTRIBUTING.md for Move
Update the CONTRIBUTING doc for Move to include the new naming conventions, and also update comments and indentation within script and address blocks.
@bors-libra r=sblackshear
:pushpin: Commit 12e9f16 has been approved by sblackshear
:hourglass: Testing commit 12e9f161579daab2f99922cf2675d9abcef02a49 with merge e40d79539cb96eb816a3196dceb8ad5cec133e54...
:sunny: Test successful - checks-actions_land_blocking_test, checks-circle_commit_workflow
Approved by: sblackshear
Pushing e40d79539cb96eb816a3196dceb8ad5cec133e54 to master...
|
gharchive/pull-request
| 2020-06-22T17:53:42 |
2025-04-01T04:34:51.940337
|
{
"authors": [
"bors-libra",
"tzakian"
],
"repo": "libra/libra",
"url": "https://github.com/libra/libra/pull/4650",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2628471801
|
Whether the contents of pow_response and crypto_response can be realized
No reference information could be found
Please clarify what you mean, because I don't understand. I think you're asking a question so I'm moving this into a discussion.
|
gharchive/issue
| 2024-11-01T07:06:30 |
2025-04-01T04:34:52.019487
|
{
"authors": [
"SuperYogurt",
"roderickvd"
],
"repo": "librespot-org/librespot",
"url": "https://github.com/librespot-org/librespot/issues/1387",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2465571127
|
[SDL2] CI macOS runner failing in release-2.8.x, but succeeding in SDL2
See: https://github.com/libsdl-org/SDL_mixer/actions/runs/10386371038/job/28757509706
CC: @madebr
https://github.com/libsdl-org/SDL_mixer/commit/0c32440b739b5c34330901665e22128d290aaa7a fixes it
Older CMake versions don't know about Apple silicon
|
gharchive/issue
| 2024-08-14T11:24:24 |
2025-04-01T04:34:52.124528
|
{
"authors": [
"madebr",
"sezero"
],
"repo": "libsdl-org/SDL_mixer",
"url": "https://github.com/libsdl-org/SDL_mixer/issues/629",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
}
|
1838202680
|
ci: test x86/x64/arm32/arm64 MSVC
Fixes #91
Fixes #92
We don't have to go overboard with -Wundef and documentation warnings: They are directly inherited from SDL2.
I haven't read the whole thread, but sdl2-compat should build with the same flags as SDL2, with no dependency on any C runtime on Windows. If there are targets (clang?) that require it, they should be considered unsupported and not build in CI.
FYI, I fixed the implicit memcpy() in 1dcd1cf.
We probably shouldn't bring over our own memcpy and memset, we should either call SDL3's versions or fix any implicit calls generated by the compiler.
FYI, I fixed the implicit memcpy() in 1dcd1cf. We probably shouldn't bring over our own memcpy and memset, we should either call SDL3's versions or fix any implicit calls generated by the compiler.
Not sure how to eliminate the implicit memcpy here:
https://github.com/libsdl-org/sdl2-compat/blob/1dcd1cf8850ce6ca359303fb35204c09c2e6e688/src/sdl2_compat.c#L5742-L5753
Not sure how to eliminate the implicit memcpy here:
https://github.com/libsdl-org/sdl2-compat/blob/1dcd1cf8850ce6ca359303fb35204c09c2e6e688/src/sdl2_compat.c#L5742-L5753
Yeah, I don't know that we can. Returning a GUID on the stack probably isn't a great idea for other languages as well.
Maybe we should change the function to fill in a GUID parameter?
Yeah, I don't know that we can. Returning a GUID on the stack probably isn't a great idea for other languages as well. Maybe we should change the function to fill in a GUID parameter?
For SDL3, this can work.
But for sdl2-compat, you will still need to copy the guid to the stack memory of the caller when returning.
Looks like -lgcc -static-libgcc is not sufficient for providing memcpy for arm32: https://github.com/libsdl-org/sdl2-compat/actions/runs/5779838735/job/15662676006?pr=93
So sdl2-compat now does both: -lgcc -static-libgcc + SDL3-based memcpy + memset implementation.
Looks like -lgcc -static-libgcc is not sufficient for providing memcpy for arm32: https://github.com/libsdl-org/sdl2-compat/actions/runs/5779838735/job/15662676006?pr=93
Well, obviously it cannot, at least because it's not gcc :)
SDL3-based memcpy + memset implementation.
yes, with a #pragma intrinsic (or #pragma function?) in mslibc.h, it should work
Looks like -lgcc -static-libgcc is not sufficient for providing memcpy for arm32: https://github.com/libsdl-org/sdl2-compat/actions/runs/5779838735/job/15662676006?pr=93
Well, obviously it cannot, at least because it's not gcc :)
Dammit. You're right.
So -gcc -static-libgcc is not needed. But let's keep it for exotic untested mingw toolchains?
SDL3-based memcpy + memset implementation.
yes, with a #pragma intrinsic (or #pragma function?) in mslibc.h, it should work
I added it in sdl2_compat.c below the inclusion of x86_msvc.h.
Looks like -lgcc -static-libgcc is not sufficient for providing memcpy for arm32: https://github.com/libsdl-org/sdl2-compat/actions/runs/5779838735/job/15662676006?pr=93
Well, obviously it cannot, at least because it's not gcc :)
Dammit. You're right. So -gcc -static-libgcc is not needed. But let's keep it for exotic untested mingw toolchains?
Well, it is needed for at least x86 mingw
Ah, for 64-bit division, that makes sense. Okay, I guess we keep it and use it whenever we're doing -nostdlib? We should document why it's necessary to save archeology work in the future.
Not sure how to eliminate the implicit memcpy here:
https://github.com/libsdl-org/sdl2-compat/blob/1dcd1cf8850ce6ca359303fb35204c09c2e6e688/src/sdl2_compat.c#L5742-L5753
Yeah, I don't know that we can.
We can do the following:
diff --git a/src/sdl2_compat.c b/src/sdl2_compat.c
index 5d78b5a..feb4b27 100644
--- a/src/sdl2_compat.c
+++ b/src/sdl2_compat.c
@@ -5747,7 +5747,8 @@ SDL_JoystickGetDeviceGUID(int idx)
if (!jid) {
SDL3_zero(guid);
} else {
- guid = SDL3_GetJoystickInstanceGUID(jid);
+ const SDL_JoystickGUID guid_ = SDL3_GetJoystickInstanceGUID(jid);
+ SDL3_memcpy(&guid, &guid_, sizeof(guid));
}
return guid;
}
Right now, this PR looks pretty much OK. A few things to note:
Do we need to apply https://github.com/libsdl-org/sdl2-compat/pull/93#issuecomment-1668096720 or something similar?
The only windows target where we do not build with /NODEFAULTLIB is ARM 32 bits using MSVC, because we don't have any asm for it. (Any mingw or clang should be OK with -nostdlib because we link to libgcc.) Should we even care?
I still think we should change the MSVC_CLANG check to simply MSVC for ARM 32 bit, i.e. https://github.com/libsdl-org/sdl2-compat/pull/93#pullrequestreview-1564191674 - but I won't fight for it.
I think the -Wundef and -Wdocumentation fixes (except for the one-liner in sdl2_compat.c) are overkill here: All of the affected sources/headers are directly copied from SDL2 where these fixes aren't applied at all. They should be fixed in SDL2 first if we do care. (Exception is the one-liner in sdl2_compat.c which is OK.)
SDL2COMPAT_WERROR should default to OFF I guess - can be enabled in CI..
* The only windows target where we do not build with `/NODEFAULTLIB` is ARM 32 bits using MSVC, because we don't have any asm for it. (Any mingw or clang should be OK with `-nostdlib` because we link to libgcc.) Should we even care?
This question applies to SDL3 as well.
And SDL2 too. @slouken: Do we keep going as is?
* I think the -Wundef and -Wdocumentation fixes are overkill here: All of the affected sources/headers are directly copied from SDL2 where these fixes aren't applied at all. They should be fixed in SDL2 first if we do care. (Exception is the one-liner in sdl2_compat.c which is OK.)\
The -Wundef warnings are already fixed in SDL3, and are in pr for SDL2 at libsdl-org/SDL#7554. About the Wdocumentation patches, let's port these to SDL2.
OK, as you wish.
Off-topic actually, but since you went overboard with this, how about testing SDL3 symbol correctness automatically? I mean this:
https://github.com/libsdl-org/sdl2-compat/blob/main/src/sdl2_compat.c#L123-L132
I have been doing it manually by changing that #if 0 to #if 1 successfully,
and if the compiler is a c++ compiler, it fails immediately (in plain C, it'll
be just a warning): Try changing the Init parameter from Uint32 to Sint32 in
sdl3_syms.h and see. Useful for detecting SDL3 api changes not being properly
reflected here.
For automatic testing, I can think of something like this: Do you think that we
should care about that and add something equivalent to the cmake'ry?
diff --git a/src/sdl3_testsyms.cc b/src/sdl3_testsyms.cc
new file mode 100644
index 0000000..a389bf2
--- /dev/null
+++ b/src/sdl3_testsyms.cc
@@ -0,0 +1 @@
+#include "sdl2_compat.c"
diff --git a/src/Makefile.linux b/src/Makefile.linux
index 86ce95b..eeaf23c 100644
--- a/src/Makefile.linux
+++ b/src/Makefile.linux
@@ -6,2 +6,3 @@ INCLUDES = -Iinclude
CC = gcc
+CXX= g++
LD = $(CC)
@@ -25,3 +26,3 @@ all: $(SHLIB)
-$(SHLIB): $(OBJ)
+$(SHLIB): $(OBJ) sdl3_testsyms.o
$(LD) -o $@ $(LDFLAGS) $(OBJ) $(LDLIBS)
@@ -34,2 +35,5 @@ $(SHLIB): $(OBJ)
+sdl3_testsyms.o: sdl3_testsyms.cc
+ $(CXX) $(CFLAGS) $(CPPFLAGS) -DSDL2COMPAT_TEST_SYMS=1 $(INCLUDES) -o $@ -c $<
+
distclean: clean
diff --git a/src/sdl2_compat.c b/src/sdl2_compat.c
index bb0b33a..1b8af9c 100644
--- a/src/sdl2_compat.c
+++ b/src/sdl2_compat.c
@@ -122,3 +122,6 @@ extern "C" {
*/
-#if 0
+#ifndef SDL2COMPAT_TEST_SYMS
+#define SDL2COMPAT_TEST_SYMS 0
+#endif
+#if SDL2COMPAT_TEST_SYMS
#define SDL3_SYM(rc,fn,params,args,ret) \
A little late, but noticed that we are now building with -fno-strict-aliasing: It doesn't seem necessary here in sdl2-compat. (It was only added to SDL as a temporary measure for https://github.com/libsdl-org/SDL/issues/2974)
* The only windows target where we do not build with `/NODEFAULTLIB` is ARM 32 bits using MSVC, because we don't have any asm for it. (Any mingw or clang should be OK with `-nostdlib` because we link to libgcc.) Should we even care?
This question applies to SDL3 as well.
Should we even care that we're not using /NODEFAULTLIB? That's actually the reason there isn't ARM support in the Visual Studio project files.
I don't know of anyone shipping an ARM version of SDL though, so I think it's fine for us to verify in CI and then let people build it themselves with their own C runtime dependencies if they want.
@madebr: Regarding the Add SDL3-based implementation of memcpy and memcpy commit:
I believe the following is the actual intention which is what SDL3 and SDL2 do.
OK to push, or am I missing anything?
diff --git a/src/sdl2_compat.c b/src/sdl2_compat.c
index df5302e..72e9c3b 100644
--- a/src/sdl2_compat.c
+++ b/src/sdl2_compat.c
@@ -589,44 +589,54 @@ LoadSDL3(void)
}
}
}
return okay;
}
-#ifndef SDL_BUILDING_WINRT
-#if defined(_MSC_VER) && defined(_M_IX86)
+#if defined(_MSC_VER) && !defined(SDL_BUILDING_WINRT)
+#ifdef _M_IX86
#include "x86_msvc.h"
#endif
-#if defined(_WIN32) && !defined(__WATCOMC__)
/* NOLINTNEXTLINE(readability-redundant-declaration) */
extern void *memcpy(void *dst, const void *src, size_t len);
/* NOLINTNEXTLINE(readability-redundant-declaration) */
extern void *memset(void *dst, int c, size_t len);
-#ifdef _MSC_VER
#ifndef __INTEL_LLVM_COMPILER
#pragma intrinsic(memcpy)
#pragma intrinsic(memset)
#endif
#pragma function(memcpy)
#pragma function(memset)
-#endif
/* NOLINTNEXTLINE(readability-inconsistent-declaration-parameter-name) */
void *memcpy(void *dst, const void *src, size_t len)
{
return SDL3_memcpy(dst, src, len);
}
/* NOLINTNEXTLINE(readability-inconsistent-declaration-parameter-name) */
void *memset(void *dst, int c, size_t len)
{
return SDL3_memset(dst, c, len);
}
+#endif /* MSVC && !WINRT */
+
+#if defined(__ICL) && defined(_WIN32)
+/* The classic Intel compiler generates calls to _intel_fast_memcpy
+ * and _intel_fast_memset when building an optimized SDL library */
+void *_intel_fast_memcpy(void *dst, const void *src, size_t len)
+{
+ return SDL_memcpy(dst, src, len);
+}
+
+void *_intel_fast_memset(void *dst, int c, size_t len)
+{
+ return SDL_memset(dst, c, len);
+}
#endif
-#endif /* ! WINRT */
#ifdef SDL_BUILDING_WINRT
EXTERN_C void error_dialog(const char *errorMsg);
#elif defined(_WIN32)
static void error_dialog(const char *errorMsg)
{
That looks sensible. Untested, but ok by me.
I think you want to use SDL3_memcpy instead of SDL_memcpy in the new mem* functions.
|
gharchive/pull-request
| 2023-08-06T13:05:58 |
2025-04-01T04:34:52.147927
|
{
"authors": [
"madebr",
"sezero",
"slouken"
],
"repo": "libsdl-org/sdl2-compat",
"url": "https://github.com/libsdl-org/sdl2-compat/pull/93",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
}
|
538768966
|
build failed on Linux
Building on Linux failed with the following output:
deps/src/libuv/.libs/libuv.a(libuv_la-fs.o): In function `uv__fs_mkstemp':
deps/src/libuv/src/unix/fs.c:289: undefined reference to `dlsym'
deps/src/libuv/src/unix/fs.c:294: undefined reference to `dlerror'
collect2: error: ld returned 1 exit status
CMakeFiles/app.dir/build.make:174: recipe for target 'app' failed
make[2]: *** [app] Error 1
CMakeFiles/Makefile2:146: recipe for target 'CMakeFiles/app.dir/all' failed
make[1]: *** [CMakeFiles/app.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
version: v1.34.0
platform: Linux localhost.localdomain 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u5 (2017-09-19) x86_64 GNU/Linux
Can you post the full make compilation line? V=1 make should do. I cannot repro on Linux myself.
|
gharchive/issue
| 2019-12-17T00:51:27 |
2025-04-01T04:34:52.165617
|
{
"authors": [
"neevek",
"saghul"
],
"repo": "libuv/libuv",
"url": "https://github.com/libuv/libuv/issues/2579",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
199989994
|
Getting unicode error when enabling flask-compress on my application
Hello!
I am running my app on Heroku and was hoping to add gzip compression. I tried whitenoise first and although it didn't give any errors, the gzip compression was not working. Then I found your extension and was happy to see it is so simple. Thanks for making this. Unfortunately I am running into the following error:
UnicodeDecodeError: 'utf8' codec can't decode byte 0x8b in position 1: invalid start byte
If I comment out the line that adds compress, then the app works again.
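For reference, a minimal sketch of how the extension is typically wired up (the app structure below is an assumption for illustration, not the reporter's code):

```python
from flask import Flask
from flask_compress import Compress

app = Flask(__name__)
Compress(app)  # enables gzip compression of eligible responses

@app.route("/")
def index():
    # A body large enough for compression to be worthwhile.
    return "hello world " * 500
```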
Fixed it. It was this problem: https://github.com/mgood/flask-debugtoolbar/issues/83
Thank you for keeping this Issue updated. Sorry I wasn't able to reply until now.
|
gharchive/issue
| 2017-01-11T02:43:33 |
2025-04-01T04:34:52.171553
|
{
"authors": [
"libwilliam",
"tatsuhirosatou"
],
"repo": "libwilliam/flask-compress",
"url": "https://github.com/libwilliam/flask-compress/issues/21",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1112558964
|
fix: resolve eslint issues
This does not solve all eslint issues but it reduces the number from 88 to 35.
There are some things that have to be discussed but since it touches so many files I want to get this merged ASAP.
Uff why the hell do those errors happen in the node_modules? 🙄
|
gharchive/pull-request
| 2022-01-24T12:05:38 |
2025-04-01T04:34:53.681517
|
{
"authors": [
"mathiasmoeller"
],
"repo": "lifinance/sdk",
"url": "https://github.com/lifinance/sdk/pull/27",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1244074482
|
bump sbt version
Mailing List thread:
https://groups.google.com/u/1/g/liftweb/c/L5xlSVzt_wo
fix link to the sbt-lift-build plugin (#1998)
fix error on aarch64 java (apple silicon) (#1999)
Hello,
This project seems stalled. Is there a chance this PR will be merged and packages updated?
Regards,
|
gharchive/pull-request
| 2022-05-21T20:24:01 |
2025-04-01T04:34:53.683588
|
{
"authors": [
"invenis-paris",
"lvitaly"
],
"repo": "lift/framework",
"url": "https://github.com/lift/framework/pull/2000",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
262491248
|
Cleanup/Add 'gotoExercise' command
Add new command 'gotoExercise' with TAB completion
Remove unused code
Simplify code
Update copyright messages
@vpetro Petro. Merged and yes, this change will be reflected in the course materials when a course is released. The latter doesn't happen automatically (we have to push a button to do so).
|
gharchive/pull-request
| 2017-10-03T16:16:01 |
2025-04-01T04:34:53.744743
|
{
"authors": [
"eloots"
],
"repo": "lightbend-training/course-management-tools",
"url": "https://github.com/lightbend-training/course-management-tools/pull/50",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2171836016
|
Improve coin selection unit tests
Fixes https://github.com/lightninglabs/taproot-assets/issues/124.
Found some issues (in the unit tests) revealed by these test changes.
Put up a branch that could get pulled into this PR: https://github.com/lightninglabs/taproot-assets/tree/coin-selection-test-fixes
Thanks for the review and the fixes. I took over the commit with the group anchor improvement.
IIUC the timeout would occur for any signal since the method call was complete.
That was kind of on purpose. We explicitly expect the LeaseCoins method not to be called. I agree that the timeout would occur on any other channel as well. But IMO selecting on the listSignals a second time doesn't really change that?
That was kind of on purpose. We explicitly expect the LeaseCoins method not to be called. I agree that the timeout would occur on any other channel as well. But IMO selecting on the listSignals a second time doesn't really change that?
Ah, fair enough - so any of the 3 calls timing out is sufficient.
|
gharchive/pull-request
| 2024-03-06T15:50:39 |
2025-04-01T04:34:53.784248
|
{
"authors": [
"guggero",
"jharveyb"
],
"repo": "lightninglabs/taproot-assets",
"url": "https://github.com/lightninglabs/taproot-assets/pull/823",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
890389708
|
multi: validate payment params at RPC layer
With this patch, we'll fail out earlier in the cycle in case of some wonky parameters, and not leave zombie payments in the router which currently are not cleaned up.
Naming currently conflicts with another check in routerrpc, so I could change the validation function name.
|
gharchive/pull-request
| 2021-05-12T18:50:58 |
2025-04-01T04:34:53.785282
|
{
"authors": [
"Crypt-iQ"
],
"repo": "lightningnetwork/lnd",
"url": "https://github.com/lightningnetwork/lnd/pull/5293",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
306526381
|
how to exclude transitive dependency
for example, in sbt
"org.apache.spark" %% "spark-core" % "2.3.0" exclude ("org.slf4j", "slf4j-log4j12")
how do I do the same in mill ?
ivy"org.apache.spark::spark-core:2.3.0" ???
There is already an issue opened for this: https://github.com/lihaoyi/mill/issues/214
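For readers landing here before that issue is resolved: a hedged sketch of what the exclusion can look like in a build.sc, assuming the exclude helper on ivy dependencies that takes organisation -> name pairs (the module layout and version numbers below are placeholders):
// build.sc - illustrative sketch only
import mill._, scalalib._

object app extends ScalaModule {
  def scalaVersion = "2.12.8"
  def ivyDeps = Agg(
    // exclude takes (organisation -> name) pairs for transitive exclusions
    ivy"org.apache.spark::spark-core:2.3.0"
      .exclude("org.slf4j" -> "slf4j-log4j12")
  )
}
Multiple pairs can be passed to a single exclude call if several transitive dependencies need to be dropped.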
|
gharchive/issue
| 2018-03-19T16:04:05 |
2025-04-01T04:34:53.825133
|
{
"authors": [
"shengc"
],
"repo": "lihaoyi/mill",
"url": "https://github.com/lihaoyi/mill/issues/245",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
806891253
|
Update lds.py
Seems like this should be changed to verbose as well to allow total suppression of output
Thanks, Grace!
|
gharchive/pull-request
| 2021-02-12T02:10:13 |
2025-04-01T04:34:53.903762
|
{
"authors": [
"ghuckins",
"slinderman"
],
"repo": "lindermanlab/ssm",
"url": "https://github.com/lindermanlab/ssm/pull/127",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2062185631
|
Service discovery with Nacos
It would be useful to support service registration and discovery using Nacos.
API specification: https://nacos.io/en-us/docs/v2/guide/user/open-api.html
Hi @ikhoon. I want to try this issue, can you assign this issue to me?
Sure. Thanks in advance! 🙇♂️
I am a Nacos contributor and I can give some advice or help.
Nacos provides nacos-client. Nacos client version 1.x is based on HTTP; from version 2.x the Nacos client is based on gRPC.
The current PR is good and lightweight. It looks like the PR uses Armeria to create a Nacos HTTP client.
If we need more Nacos power (gRPC for long connections) and Nacos plugins (e.g. the auth plugin), it should use nacos-client to build the Nacos module, like the ZooKeeper module does.
As Armeria provides various transport layers, we prefer to use our stack to support certain protocols.
If we need more nacos power (grpc for long connection)
It sounds like gRPC is a better choice. I was wondering if there was a way to replace the gRPC client with Armeria gRPC client.
If it is not supported, I am curious about what you think about providing an API that can replace the underlying client with an Armeria gRPC client. Armeria is a rich gRPC client with more features than its upstream clients.
As Armeria provides various transport layers, we prefer to use our stack to support certain protocols.
It makes sense.
It sounds like gRPC is a better choice. I was wondering if there was a way to replace the gRPC client with Armeria gRPC client.
I cannot replace the gRPC layer of nacos-client with Armeria gRPC directly.
If it is not supported, I am curious about what you think about providing an API that can replace the underlying client with an Armeria gRPC client. Armeria is a rich gRPC client with more features than its upstream clients.
I think we can use the Armeria gRPC client to create a NacosGrpcClient that supports register/list/watch, etc.
Maybe I need to try creating a prototype project (armeria-nacos-client) to prove it.
The Nacos gRPC API is https://github.com/alibaba/nacos/blob/develop/api/src/main/proto/nacos_grpc_service.proto
which uses Jackson to serialize the body to JSON and the JSON to bytes in the payload.
which uses Jackson to serialize the body to JSON and the JSON to bytes in the payload.
We have our own JSON marshaller which is more performant. https://github.com/line/armeria/blob/727ac8003dde3f665b80b17f25c71e48ccb7a340/grpc/src/main/java/com/linecorp/armeria/common/grpc/GrpcJsonMarshaller.java#L38-L38
I don't know what wire protocol is used for the JSON payload. Armeria gRPC client supports JSON out of the box.
https://github.com/line/armeria/blob/727ac8003dde3f665b80b17f25c71e48ccb7a340/grpc/src/main/java/com/linecorp/armeria/common/grpc/GrpcSerializationFormats.java#L37-L40
Maybe I need to try creating a prototype project (armeria-nacos-client) to test this.
Should I try this?
Yes, please. 🙏 I'm looking forward to the result. If we can fully support all Nacos features, that would be better.
|
gharchive/issue
| 2024-01-02T10:00:44 |
2025-04-01T04:34:53.912222
|
{
"authors": [
"KonaEspresso94",
"ikhoon",
"shalk"
],
"repo": "line/armeria",
"url": "https://github.com/line/armeria/issues/5365",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1900993036
|
Fix a bug where HttpResponse can't complete with CompletionStage
Motivation:
CompletionStage<T> CompletableFuture.minimalCompletionStage() returns a MinimalStage, which extends CompletableFuture and raises UnsupportedOperationException if methods available only on CompletableFuture are called.
HttpResponse.of(CompletionStage<HttpResponse>) checks whether the given value is an instance of CompletableFuture to use an optimized path for CompletableFuture. This does not work as intended because a MinimalStage can easily be passed as the method parameter.
CompletableFuture<HttpResponse> future = new CompletableFuture<>();
HttpResponse.of(future.minimalCompletionStage());
Discord discussion: https://discord.com/channels/1087271586832318494/1087272728177942629/1153277808630566942
Modifications:
Do not check whether a CompletionStage is an instance of CompletableFuture in HttpResponse and StreamMessage.
Result:
You no longer see UnsupportedOperationException when creating an HttpResponse with a CompletionStage.
Considering this bug, there seems to be one more piece of code that's basically the same. Could this have a path that could cause the same problem?
I'm thinking of DefaultStreamMessage.cleanupObjects() where completeExceptionally is called (it's another one of the overridden methods in MinimalStage).
Could this have a path that could cause the same problem?
Possible. I tried to check all usages of CompletionStage when creating this PR but let me double-check.
I'm thinking of DefaultStreamMessage.cleanupObjects() where completeExceptionally is called
Good point. I guess we can ignore it because the CompletableFuture is created internally which is an instance of AwaitDemandFuture. Let me change the type to AwaitDemandFuture for clarity.
@ikhoon 👍 👍 👍
@tobias- Thanks for the report and through review! 😄
|
gharchive/pull-request
| 2023-09-18T13:50:45 |
2025-04-01T04:34:53.918289
|
{
"authors": [
"ikhoon",
"minwoox",
"tobias-"
],
"repo": "line/armeria",
"url": "https://github.com/line/armeria/pull/5190",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
591731450
|
LineApiError{httpResponseCode=200, message='java.io.IOException: org.json.JSONException: Unable to load class named [io.jsonwebtoken.io.JacksonDeserializer]
What did you do?
3rd Login
What did you expect?
get user email information
What happened actually?
Login FAILED!
E/ERROR: LineApiError{httpResponseCode=200, message='java.io.IOException: org.json.JSONException: Unable to load class named [io.jsonwebtoken.io.JacksonDeserializer] from the thread context, current, or system/application ClassLoaders. All heuristics have been exhausted. Class could not be found.
Your environment?
implementation 'com.linecorp.linesdk:linesdk:5.4.1'
It looks like you hit a class dependency issue.
Could you please share your build.gradle file with us?
I guess Jackson was introduced into your project directly, or by some other dependency in your project.
If that's true, you can refer to this workaround:
https://github.com/jwtk/jjwt/issues/397
If this workaround works for you, please let us know.
I think this issue is fixed in io.jsonwebtoken:jjwt v0.10.6,
we will upgrade our io.jsonwebtoken:jjwt dependency to the latest version later
We have upgraded jjwt to the latest version 0.11.1 : #72
It will be released with next version of line-sdk-android
I think this will fix this issue.
Close due to no response for over 2 months.
|
gharchive/issue
| 2020-04-01T08:36:04 |
2025-04-01T04:34:53.927511
|
{
"authors": [
"KeunWoongYuk",
"LiYing2010",
"plateaukao"
],
"repo": "line/line-sdk-android",
"url": "https://github.com/line/line-sdk-android/issues/68",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2413095276
|
Is the probability extracted corresponding to the selected token?
Hi UQ_ICL authors!
According to the paper, the second step of the entropy-based uncertainty estimation framework is "2) selecting token(s) that is relevant to the answer and extract the probabilities". However, I saw that in the code, after applying softmax() to the logits, you used max() to pick the highest probability. Is that truly the probability associated with the selected token?
Thanks!
Hi, you can check this line: https://github.com/lingchen0331/UQ_ICL/blob/707e4df90a8d00b34c68ec9cf674675e8a19365b/sources/utils.py#L147C1-L148C1
It gets the index of the first token that corresponds to a number
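To make the distinction being asked about concrete, here is a small illustrative PyTorch sketch; the tensor shapes and variable names are hypothetical and not taken from the repository:
import torch

# Hypothetical names and shapes for a single generation step.
logits = torch.randn(10, 32000)           # [seq_len, vocab_size]
answer_pos, answer_token_id = 3, 1234     # position and id of the selected answer token

probs = torch.softmax(logits, dim=-1)
prob_of_selected_token = probs[answer_pos, answer_token_id]  # tied to the chosen token
prob_of_top_token = probs[answer_pos].max()                  # what a bare max() returns

# The two coincide only when the selected token happens to be the argmax at that position.
print(prob_of_selected_token.item(), prob_of_top_token.item())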
|
gharchive/issue
| 2024-07-17T09:17:33 |
2025-04-01T04:34:53.942976
|
{
"authors": [
"lingchen0331",
"luke217"
],
"repo": "lingchen0331/UQ_ICL",
"url": "https://github.com/lingchen0331/UQ_ICL/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
367533810
|
Fixed "Remove the twist from navbar"
Fixes issue #996
Great :)
|
gharchive/pull-request
| 2018-10-07T08:42:10 |
2025-04-01T04:34:53.946388
|
{
"authors": [
"BennyCarlsson",
"pegasus-lynx"
],
"repo": "lingonsaft/hacktoberfest",
"url": "https://github.com/lingonsaft/hacktoberfest/pull/1002",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
73336485
|
Login redirects to the /home URL, causes confusion
EDIT: Changing issue description, since I can't just delete the whole thing.
I just ran into a rather frustrating issue (that is mostly my fault).
Here's the scenario:
app is in debug mode
had dummy User instances already loaded in the db from other dev work
not using UserMixin because I haven't in prior projects and don't really like all that extra code
wired up flask-user as i've done before in other projects
use forgot-password to receive email & set password on one of the fake users
totally forget that there's an active flag on the User
stay in endless loop that looks like login is not working & constantly redirecting to itself, but to the new /home URL, leading someone who has used flask-user in the past to wonder where the heck that came from (and then discover it's hard-coded in the pkg).
I just wasted hours trying to figure out where in the heck this /home URL came from, what view it was tied to, and why my once-working Flask-User process was suddenly broken in the latest version. And it was all because I hadn't set any active flags to True. Oh, and once I figured that out, now I have to have a has_confirmed_email attr/method on my User object.
Sigh.
Suggestion: if the app is in debug mode, it'd be pretty awesome to have some more errors thrown, console messages logged out, or something that helps identify why things are not working in places they should, but something that is expected is either not present, or not what it should be. i know there are some calls to set flash messages, but that's not enough in all scenarios--especially if we're customizing templates and don't have flash messages included.
Maybe it's just me, but Flask-User is increasing the amount of schema & model attributes it needs just to successfully pull off authentication. This is starting to feel like very poor design spiraling out of control. I'm not sure I am going to want to keep using Flask-User if this is how the project is going to be going in the future. I've now done two upgrades, and both times have found me wasting hours trying to discover unintuitive behavior based on schema/model assumptions.
Above all else, if the app is in debug, that should be the time to throw all kinds of show-stopping messages at a developer that they don't have things right. Or maybe allow it to be activated with a flag or something. But when you're using a 3rd-party lib that you're trying to treat like a black box, when the box doesn't work and doesn't tell you, things get pretty annoying rather quickly.
Greetings, I agree, the package is getting big. If you are simply interested in a simple user authentication protocol, I'd suggest Flask-Login (https://flask-login.readthedocs.org/en/latest/), which is what this module is dependent upon.
Flask-User attempts to do a lot more than a simple login while trying to not sacrifice customizability. When using this package I find myself diving into the source more so than other projects, but that is in large part because of how customizable it is and because deploying a fully-featured user authentication system necessitates scrutinizing every nuance of the system.
When referencing your specific issue, there is a flash that should have been sent to the login template upon failing user authentication:
https://github.com/lingthio/Flask-User/blob/master/flask_user/views.py#L704
Do you have flash set up in your application?
@neurosnap thanks for the reply, mate.
I do have flash setup, and included in my templates, but for some reason, the message about active status wasn't showing up at all. I just kept being redirected from /login to /home.
Moreover, I do want some of Flask-User's heightened user creation/auth features -- things like confirmation emails, password tokens/resets, etc. So, I don't want simple user auth, else I would have just gone with Flask-Login or rolled a simple login method myself. It's all the confirm/reset/forgot/email features that I find helpful -- *but only insofar as they are related to handling going from creating > authenticating a User. After auth, my projects are generally really large, complex, custom applications my clients use to manage their business processes. Things like profiles are either not necessary, or implemented in custom ways.
I just don't think all the extra profile stuff should be hard-coded into the app. Seems things ought to be broken up into some smaller disabled-by-default features that developers can turn on.
I also think that, at least when in debug, it might be worth having some verbose logging going to the console or something. Only having it routed through the flash system, expecting it to show up in templates can leave things in a state where devs like me, who don't allow blanket flash messages to appear everywhere in client applications, can't figure out what is happening.
Anyway, I'd have deleted this issue if I could. Feel free to close it, as it seems to be less of a bug, and more of an issue with documentation not really being clear (imo) about all that is mandatory to get things working correctly. When I read in the docs that one should include the UserMixin, I see it as a suggestion, like Flask-Login's UserMixin (on which Flask-User's is based). I don't anticipate it being something that will prevent me from getting user's logged in at all.
I really think all this User + UserAuth + UserProfile + UserRoles + UserEmails schema/model dependence is really getting out-of-hand. And the docs just aren't making it clear enough just how much Flask-User is depending on all these things in various ways.
Maybe the project should be renamed to "Flask-User (and the kitchen sink)" :)
I ran into this today as well and tripped me up, luckily I didn't get to far down the rabbit hole. My problem specifically was this redirect https://github.com/lingthio/Flask-User/blob/master/flask_user/views.py#L705
I wonder if we could customize that route in anyway? Maybe a new route for a user who is inactive? I would much prefer that they are sent to the sign in page again with the same flash message.
I can put together a PR if that is something you would be interested in. We might also consider the same for https://github.com/lingthio/Flask-User/blob/master/flask_user/views.py#L714. To my knowledge that is the only two place user.home is used.
Hi all, Thanks for taking the time to weigh in.
Re: /home URL: See issue #70.
Re: User + UserAuth + UserProfile + UserRoles + UserEmails schema: See issue #70. ;-)
Closing this as a duplicate issue.
Fixed in v0.6.3.
|
gharchive/issue
| 2015-05-05T14:51:15 |
2025-04-01T04:34:53.961475
|
{
"authors": [
"bobwaycott",
"lingthio",
"nZac",
"neurosnap"
],
"repo": "lingthio/Flask-User",
"url": "https://github.com/lingthio/Flask-User/issues/71",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2626028158
|
Add more places for Donation sections. Refactored existing logic.
Added optional donation sections to:
Footer
Introduction
New Support me page in Nav Menu (with new description field)
Made donation under blog post optional. Refactored/moved existing code. Discussion at: #357
BREAKING CHANGES
Moved existing support configuration into new SupportMeConfiguration out from ApplicationConfiguration
Preview:
Footer
Introduction
Support me page
Still Needed
@linkdotnet (or anyone else), please make the following changes to complete this PR.
Support me page in nav menu needs an icon (placeholder used).
New support me page needs css formatting (centering, padding, font size?,...)
How to configure new support me settings:
"SupportMe": {
"KofiToken": "ABC123",
"GithubSponsorName": "your-tag-here",
"PatreonName": "your-tag-here",
"ShowUnderBlogPost": true,
"ShowUnderIntroduction": true,
"ShowInFooter": true,
"ShowSupportMePage": true,
"SupportMePageDescription": "Buy my book here: [My Blazor Book](https://google.com) or please contribute to my open-source project here: [My Awesome Repo](https://github.com) . This can be **markdown**."
}
Will work on reviews late tomorrow for me
Thanks! Take your time and let me know if I can help
99% there I think 😅 will look into SupportMePageDescription tomorrow, or feel free if it's a quick fix for you.
Added/fixed tests, add/moved documentation, and addressed your comments.
Thank you
Not sure if I can push directly to your branch - I'll try :D
Okay - that works. Nice
Nice work @digitaldirk - much appreciated. I would merge it if there aren't any open points from your side?
I added the migration text and some tests for the new component you created!
Lol - I accidentally merged :D. Wanted to rebase on the latest master. Well, let me know if there were some missing points.
Lol - I accidentally merged :D. Wanted to rebase on the latest master. Well, let me know if there were some missing points.
No worries. Thank you for cleaning up and adding the tests.
The only things left:
I didn't find an icon I loved for the support me page nav menu; please pick one for it.
In 'migration.md' I added notes at the bottom and you added some at the top; could these please be combined so it's only in one place?
I added this one:
|
gharchive/pull-request
| 2024-10-31T05:52:14 |
2025-04-01T04:34:53.982413
|
{
"authors": [
"digitaldirk",
"linkdotnet"
],
"repo": "linkdotnet/Blog",
"url": "https://github.com/linkdotnet/Blog/pull/358",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
307075966
|
VerifiableProperties needs more APIs
VerifiableProperties needs a few more APIs to support the use cases that have cropped up.
a "get if positive, otherwise throw" function for all types
also provide separate functions for min and max bounds (for ranges)
@vgkholla Would it make more sense to implement the getter methods to return Java's Optional? This way the caller could choose if they want to get the value or throw or get the value or do some other action.
@mbabauer That might be a good addition but if everyone has to write the code to do "some other action", that wouldn't serve the purpose either (we could provide a default static orElse() function in VerifiableProperties to cover most cases)
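To illustrate the idea, here is a hypothetical sketch; the class and method names below are made up and are not part of the existing VerifiableProperties API:
import java.util.Optional;
import java.util.Properties;

// Hypothetical sketch, not the actual VerifiableProperties implementation.
class OptionalVerifiableProperties {
  private final Properties props;

  OptionalVerifiableProperties(Properties props) {
    this.props = props;
  }

  // "Get if positive, otherwise empty" - callers decide whether to throw or fall back.
  Optional<Integer> getPositiveInt(String name) {
    String value = props.getProperty(name);
    if (value == null) {
      return Optional.empty();
    }
    int parsed = Integer.parseInt(value);
    return parsed > 0 ? Optional.of(parsed) : Optional.empty();
  }
}
A caller could then chain .orElse(defaultValue) for a fallback or .orElseThrow(...) to keep the "otherwise throw" behavior.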
|
gharchive/issue
| 2018-03-20T23:34:35 |
2025-04-01T04:34:54.015082
|
{
"authors": [
"mbabauer",
"vgkholla"
],
"repo": "linkedin/ambry",
"url": "https://github.com/linkedin/ambry/issues/886",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
727362694
|
Why is there no table structure metadata after running mysql_etl.py?
I configured the connection information for connecting to MySQL in the file "mysql_etl.py", then executed this Python script.
A fragment of the execution process output is as follows:
{'auditHeader': None, 'proposedSnapshot': ('com.linkedin.pegasus2avro.metadata.snapshot.DatasetSnapshot', {'urn': 'urn:li:dataset:(urn:li:dataPlatform:mysql,ec_oms.virtual_manage_store,PROD)', 'aspects': [{'owners': [{'owner': 'urn:li:corpuser:datahub', 'type': 'DATAOWNER'}], 'lastModified': {'time': 0, 'actor': 'urn:li:corpuser:datahub'}}, {'schemaName': 'ec_oms.virtual_manage_store', 'platform': 'urn:li:dataPlatform:mysql', 'version': 10, 'created': {'time': 1603370741.888741, 'actor': 'urn:li:corpuser:datahub'}, 'lastModified': {'time': 1603370741.888741, 'actor': 'urn:li:corpuser:datahub'}, 'hash': '', 'platformSchema': {'tableSchema': "[('store_id', 'varchar(64)', 'NO', 'PRI', None, ''), ('protocol_id', 'varchar(64)', 'NO', '', '', ''), ('enable_at', 'datetime', 'NO', '', '2030-01-01 00:00:00', ''), ('disable_at', 'datetime', 'NO', '', 'CURRENT_TIMESTAMP', ''), ('created_stamp', 'datetime', 'NO', '', None, ''), ('created_by','varchar(64)', 'NO', '', '', ''), ('last_updated_stamp', 'datetime', 'NO', '', None, ''), ('last_updated_by', 'varchar(64)', 'NO', '','', '')]"}, 'fields': [{'fieldPath': 'store_id', 'nativeDataType': 'varchar(64)', 'type': {'type': {'com.linkedin.pegasus2avro.schema.StringType': {}}}}, {'fieldPath': 'protocol_id', 'nativeDataType': 'varchar(64)', 'type': {'type': {'com.linkedin.pegasus2avro.schema.StringType': {}}}}, {'fieldPath': 'enable_at', 'nativeDataType': 'datetime', 'type': {'type': {'com.linkedin.pegasus2avro.schema.StringType': {}}}}, {'fieldPath': 'disable_at', 'nativeDataType': 'datetime', 'type': {'type': {'com.linkedin.pegasus2avro.schema.StringType':{}}}}, {'fieldPath': 'created_stamp', 'nativeDataType': 'datetime', 'type': {'type': {'com.linkedin.pegasus2avro.schema.StringType': {}}}}, {'fieldPath': 'created_by', 'nativeDataType': 'varchar(64)', 'type': {'type': {'com.linkedin.pegasus2avro.schema.StringType': {}}}}, {'fieldPath': 'last_updated_stamp', 'nativeDataType': 'datetime', 'type': {'type': {'com.linkedin.pegasus2avro.schema.StringType': {}}}}, {'fieldPath': 'last_updated_by', 'nativeDataType': 'varchar(64)', 'type': {'type': {'com.linkedin.pegasus2avro.schema.StringType': {}}}}]}]}), 'proposedDelta': None} has been successfully produced!
[root@localhost mysql-etl]#
However, I only see the list of tables in the DataHub web UI, as follows:
but there is no table structure metadata, as follows:
How can I fix this?
Are you able to access your database using API calls?
Here are examples
https://github.com/linkedin/datahub/blob/928444928a1618d0a861cfff371d12951d39a3ab/gms/README.md
What does get api of schema return ?
https://github.com/linkedin/datahub/blob/928444928a1618d0a861cfff371d12951d39a3ab/gms/README.md#get-dataset-schema
I tried several times, but I still couldn't get it to work. Is there any missing step?
1: build the mxe-schemas module:
./gradlew :metadata-events:mxe-schemas:build
2: check my Python version
[root@localhost ~]# python3 --version
Python 3.6.8
3: install requirements
[root@localhost mysql-etl]# cat requirements.txt
avro-python3==1.8.2
confluent-kafka[avro]==1.4.0
mysql-connector==2.2.9
[root@localhost mysql-etl]# pip3 install -r requirements.txt
4: configure my MySQL environment variables in mysql_etl.py
run mysql_etl.py
[root@localhost mysql-etl]# python3 mysql_etl.py
Then, the result of running mysql_etl.py is as shown above. In addition to the above four steps, what else needs to be done?
@HondaHsu2020 Please provide the result of the schema API call as suggested by @nagarjunakanamarlapudi.
@mars-lan
[root@localhost mysql-etl]# curl -H 'X-RestLi-Protocol-Version:2.0.0' -H 'X-RestLi-Method: get' 'http://10.118.71.181:8080/datasets/($params:(),name:ec_order.purchasing,origin:PROD,platform:urn%3Ali%3AdataPlatform%3Amysql)/schema/0' | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7163 100 7163 0 0 1109k 0 --:--:-- --:--:-- --:--:-- 1399k
{
"exceptionClass": "com.linkedin.restli.server.RestLiServiceException",
"stackTrace": "com.linkedin.restli.server.RestLiServiceException [HTTP Status:404]\n\tat com.linkedin.metadata.restli.RestliUtils.resourceNotFoundException(RestliUtils.java:61)\n\tat com.linkedin.metadata.restli.RestliUtils.resourceNotFoundException(RestliUtils.java:56)\n\tat java.util.Optional.orElseThrow(Optional.java:290)\n\tat com.linkedin.metadata.restli.BaseVersionedAspectResource.lambda$get$0(BaseVersionedAspectResource.java:85)\n\tat com.linkedin.metadata.restli.RestliUtils.toTask(RestliUtils.java:27)\n\tat com.linkedin.metadata.restli.BaseVersionedAspectResource.get(BaseVersionedAspectResource.java:82)\n\tat com.linkedin.metadata.resources.dataset.SchemaResource.get(SchemaResource.java:26)\n\tat sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat com.linkedin.restli.internal.server.RestLiMethodInvoker.doInvoke(RestLiMethodInvoker.java:172)\n\tat com.linkedin.restli.internal.server.RestLiMethodInvoker.invoke(RestLiMethodInvoker.java:326)\n\tat com.linkedin.restli.internal.server.filter.FilterChainDispatcherImpl.onRequestSuccess(FilterChainDispatcherImpl.java:47)\n\tat com.linkedin.restli.internal.server.filter.RestLiFilterChainIterator.onRequest(RestLiFilterChainIterator.java:86)\n\tat com.linkedin.restli.internal.server.filter.RestLiFilterChain.onRequest(RestLiFilterChain.java:55)\n\tat com.linkedin.restli.server.BaseRestLiServer.handleResourceRequest(BaseRestLiServer.java:218)\n\tat com.linkedin.restli.server.RestRestLiServer.handleResourceRequestWithRestLiResponse(RestRestLiServer.java:242)\n\tat com.linkedin.restli.server.RestRestLiServer.handleResourceRequest(RestRestLiServer.java:211)\n\tat com.linkedin.restli.server.RestRestLiServer.handleResourceRequest(RestRestLiServer.java:181)\n\tat com.linkedin.restli.server.RestRestLiServer.doHandleRequest(RestRestLiServer.java:164)\n\tat com.linkedin.restli.server.RestRestLiServer.handleRequest(RestRestLiServer.java:120)\n\tat com.linkedin.restli.server.RestLiServer.handleRequest(RestLiServer.java:132)\n\tat com.linkedin.restli.server.DelegatingTransportDispatcher.handleRestRequest(DelegatingTransportDispatcher.java:70)\n\tat com.linkedin.r2.filter.transport.DispatcherRequestFilter.onRestRequest(DispatcherRequestFilter.java:70)\n\tat com.linkedin.r2.filter.TimedRestFilter.onRestRequest(TimedRestFilter.java:72)\n\tat com.linkedin.r2.filter.FilterChainIterator$FilterChainRestIterator.doOnRequest(FilterChainIterator.java:146)\n\tat com.linkedin.r2.filter.FilterChainIterator$FilterChainRestIterator.doOnRequest(FilterChainIterator.java:132)\n\tat com.linkedin.r2.filter.FilterChainIterator.onRequest(FilterChainIterator.java:62)\n\tat com.linkedin.r2.filter.TimedNextFilter.onRequest(TimedNextFilter.java:55)\n\tat com.linkedin.r2.filter.transport.ServerQueryTunnelFilter.onRestRequest(ServerQueryTunnelFilter.java:58)\n\tat com.linkedin.r2.filter.TimedRestFilter.onRestRequest(TimedRestFilter.java:72)\n\tat com.linkedin.r2.filter.FilterChainIterator$FilterChainRestIterator.doOnRequest(FilterChainIterator.java:146)\n\tat com.linkedin.r2.filter.FilterChainIterator$FilterChainRestIterator.doOnRequest(FilterChainIterator.java:132)\n\tat com.linkedin.r2.filter.FilterChainIterator.onRequest(FilterChainIterator.java:62)\n\tat com.linkedin.r2.filter.TimedNextFilter.onRequest(TimedNextFilter.java:55)\n\tat com.linkedin.r2.filter.message.rest.RestFilter.onRestRequest(RestFilter.java:50)\n\tat 
com.linkedin.r2.filter.TimedRestFilter.onRestRequest(TimedRestFilter.java:72)\n\tat com.linkedin.r2.filter.FilterChainIterator$FilterChainRestIterator.doOnRequest(FilterChainIterator.java:146)\n\tat com.linkedin.r2.filter.FilterChainIterator$FilterChainRestIterator.doOnRequest(FilterChainIterator.java:132)\n\tat com.linkedin.r2.filter.FilterChainIterator.onRequest(FilterChainIterator.java:62)\n\tat com.linkedin.r2.filter.FilterChainImpl.onRestRequest(FilterChainImpl.java:96)\n\tat com.linkedin.r2.filter.transport.FilterChainDispatcher.handleRestRequest(FilterChainDispatcher.java:75)\n\tat com.linkedin.r2.util.finalizer.RequestFinalizerDispatcher.handleRestRequest(RequestFinalizerDispatcher.java:61)\n\tat com.linkedin.r2.transport.http.server.HttpDispatcher.handleRequest(HttpDispatcher.java:101)\n\tat com.linkedin.r2.transport.http.server.AbstractR2Servlet.service(AbstractR2Servlet.java:105)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:790)\n\tat com.linkedin.restli.server.spring.ParallelRestliHttpRequestHandler.handleRequest(ParallelRestliHttpRequestHandler.java:61)\n\tat org.springframework.web.context.support.HttpRequestHandlerServlet.service(HttpRequestHandlerServlet.java:73)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:790)\n\tat org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:852)\n\tat org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:544)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:536)\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)\n\tat org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1581)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)\n\tat org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1307)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)\n\tat org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:482)\n\tat org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1549)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)\n\tat org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1204)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)\n\tat org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\n\tat org.eclipse.jetty.server.Server.handle(Server.java:494)\n\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:374)\n\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:268)\n\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\n\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)\n\tat org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)\n\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782)\n\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918)\n\tat java.lang.Thread.run(Thread.java:748)\n",
"status": 404
}
The curl call looks correct to me. The fact that it returned 404 means your MCE wasn't ingested correctly. Check the docker log to see if there's any error?
docker logs datahub-mce-consumer
@mars-lan
I have good news today: the curl call for the dataset schema was successful, as follows:
[root@localhost datahub-0.4.3]# curl -H 'X-RestLi-Protocol-Version:2.0.0' -H 'X-RestLi-Method: get' 'http://10.118.71.181:8080/datasets/($params:(),name:ec_order.purchasing_status,origin:PROD,platform:urn%3Ali%3AdataPlatform%3Amysql)/schema/0' | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1393 100 1393 0 0 233k 0 --:--:-- --:--:-- --:--:-- 340k
{
"created": {
"actor": "urn:li:corpuser:etl",
"time": 1603869810
},
"platformSchema": {
"com.linkedin.schema.OracleDDL": {
"tableSchema": ""
}
},
"lastModified": {
"actor": "urn:li:corpuser:etl",
"time": 1603869810
},
"schemaName": "ec_order.purchasing_status",
"fields": [
{
"fieldPath": "id",
"nullable": false,
"type": {
"type": {
"com.linkedin.schema.NumberType": {}
}
},
"nativeDataType": "BIGINT(display_width=64)",
"recursive": false
},
{
"fieldPath": "purchasing_id",
"description": "采购单ID",
"nullable": false,
"type": {
"type": {
"com.linkedin.schema.StringType": {}
}
},
"nativeDataType": "VARCHAR(length=64)",
"recursive": false
},
{
"fieldPath": "status",
"description": "采购单状态",
"nullable": false,
"type": {
"type": {
"com.linkedin.schema.EnumType": {}
}
},
"nativeDataType": "ENUM('NOT_SUBMITTED', 'CHECK_PENDING', 'AUDIT_FAILED', 'ORDER_PLACED')",
"recursive": false
},
{
"fieldPath": "created_by",
"description": "变更人",
"nullable": false,
"type": {
"type": {
"com.linkedin.schema.StringType": {}
}
},
"nativeDataType": "VARCHAR(length=64)",
"recursive": false
},
{
"fieldPath": "change_reason",
"description": "变更备注",
"nullable": false,
"type": {
"type": {
"com.linkedin.schema.StringType": {}
}
},
"nativeDataType": "VARCHAR(length=512)",
"recursive": false
},
{
"fieldPath": "created_stamp",
"nullable": false,
"type": {
"type": {
"com.linkedin.schema.NullType": {}
}
},
"nativeDataType": "DATETIME()",
"recursive": false
}
],
"version": 0,
"platform": "urn:li:dataPlatform:mysql",
"hash": ""
}
But there is no table structure metadata in the DataHub web UI, as follows:
Interesting. Could you open "Developer Tools" in Chrome > "Network" tab and share the result of the /api/v2/datasets/<urn>/schema API call?
Closing as this is no longer relevant.
|
gharchive/issue
| 2020-10-22T12:53:47 |
2025-04-01T04:34:54.056374
|
{
"authors": [
"HondaHsu2020",
"NiravLangaliya",
"mars-lan",
"nagarjunakanamarlapudi",
"shirshanka"
],
"repo": "linkedin/datahub",
"url": "https://github.com/linkedin/datahub/issues/1953",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
159524568
|
[Avro to ORC] Mark all workunits of a dataset as failed if one task fails
In the Avro to Orc conversion we use gobblin DATASET_URN so Gobblin runtime publishes one dataset at a time by calling public void publishData(Collection<? extends WorkUnitState> states) for workunits of each dataset. Since we currently use a partition level watermark all workunits of a dataset need to be failed if any one of them fail.
@abti can you review
LGTM
|
gharchive/pull-request
| 2016-06-09T22:40:18 |
2025-04-01T04:34:54.062191
|
{
"authors": [
"abti",
"pcadabam"
],
"repo": "linkedin/gobblin",
"url": "https://github.com/linkedin/gobblin/pull/1046",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
93360697
|
Fixed a bug that may cause a job to hang if its tasks fail to getExtractor
This bug is affecting stand-alone jobs using LocalJobLauncher that may hang if any of its tasks fails to getExtractor. In this case, the finally part of the Task.run method will be executed, and this.taskStateTracker.onTaskCompletion(this) is called. In the implementation class LocalTaskStateTracker2, this.taskExecutor.retry(task) is called, which will throw an exception on task.getTaskState().getPropAsInt(ConfigurationKeys.FORK_BRANCHES_KEY) since the key FORK_BRANCHES_KEY is not set because the task failed before the number of branches is known and that key is set. This runtime exception is thrown in the finally clause and is not caught, causing the task to hang.
Signed-off-by: Yinan Li liyinan926@gmail.com
@zliu41 can you review this?
In LocalTaskStateTracker2.onTaskCompletion, should task.markTaskCompletion(); be moved to a finally block to make sure it is called?
task.markTaskCompletion should only be called if all retries fail. So calling it in a finally clause is not appropriate. What we can do is to make sure it is called if any Throwable is thrown.
So should it be put in a catch(Throwable t) block?
Yes, updated.
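For illustration, here is a compilable sketch of the shape this fix takes; the interfaces below are stand-ins, not the real Gobblin types, and the snippet is not the exact patch:
// Illustrative only: if the retry path throws anything (e.g. the missing
// FORK_BRANCHES_KEY described above), the task must still be marked as completed.
final class RetryOrComplete {
  interface Task { String getTaskId(); void markTaskCompletion(); }
  interface TaskExecutor { void retry(Task task); }

  static void onTaskCompletion(TaskExecutor executor, Task task) {
    try {
      executor.retry(task);
      return; // a retry was scheduled; completion is reported by the retried run
    } catch (Throwable t) {
      // never let the exception escape, otherwise the task hangs
      System.err.println("Failed to retry task " + task.getTaskId() + ": " + t);
    }
    task.markTaskCompletion();
  }
}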
LGTM
|
gharchive/pull-request
| 2015-07-06T19:39:54 |
2025-04-01T04:34:54.066223
|
{
"authors": [
"liyinan926",
"zliu41"
],
"repo": "linkedin/gobblin",
"url": "https://github.com/linkedin/gobblin/pull/199",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2543683599
|
Skip compaction on non-partitioned tables
Summary
Currently, there is no reason to run data compaction on non-partitioned tables since most of those tables get overwritten.
Changes
[ ] Client-facing API Changes
[ ] Internal API Changes
[x] Bug Fixes
[ ] New Features
[ ] Performance Improvements
[ ] Code Style
[ ] Refactoring
[ ] Documentation
[ ] Tests
For all the boxes checked, please include additional details of the changes made in this pull request.
Testing Done
[ ] Manually Tested on local docker setup. Please include commands ran, and their output.
[ ] Added new tests for the changes made.
[x] Updated existing tests to reflect the changes made.
[ ] No tests added or updated. Please explain why. If unsure, please feel free to ask for help.
[ ] Some other form of testing like staging or soak time in production. Please explain.
For all the boxes checked, include a detailed description of the testing done for the changes made in this pull request.
Additional Information
[ ] Breaking Changes
[ ] Deprecations
[ ] Large PR broken into smaller PRs, and PR plan linked in the description.
For all the boxes checked, include additional details of the changes made in this pull request.
obsolete
|
gharchive/pull-request
| 2024-09-23T21:20:53 |
2025-04-01T04:34:54.071394
|
{
"authors": [
"teamurko"
],
"repo": "linkedin/openhouse",
"url": "https://github.com/linkedin/openhouse/pull/208",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
903401822
|
boilerplate required for linkerd-smi
This PR adds all the basic boilerplate code required to run
and maintain this repo.
This includes:
bin/* scripts to build, run and test both golang and other
code, i.e. markdown, helm, bash, etc.
.github folder with the static, unit and codeql workflows
that run for all PR's.
LICENSE, README, DCO for the repo.
The plan is that once we get this merged first, it will make reviewing #1 much easier.
Signed-off-by: Tarun Pothulapati tarunpothulapati@outlook.com
@alpeb That makes sense. I've removed them and also the kind/k3d related shell files. We will instead try to use direct commands or third-party action libraries for the same, as you mentioned!
|
gharchive/pull-request
| 2021-05-27T08:30:37 |
2025-04-01T04:34:54.075199
|
{
"authors": [
"Pothulapati"
],
"repo": "linkerd/linkerd-smi",
"url": "https://github.com/linkerd/linkerd-smi/pull/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
475227980
|
LKE-718 Enable viewing of kubeconfig- UI Followup
Description
UI Adjustments to kubeconfig buttons and the 'View config' drawer.
Type of Change
Non breaking change ('update')
Spoke offline about the above; going to move forward as-is with regard to indentation and look into possible adjustments later.
|
gharchive/pull-request
| 2019-07-31T15:57:53 |
2025-04-01T04:34:54.222123
|
{
"authors": [
"WilkinsKa1"
],
"repo": "linode/manager",
"url": "https://github.com/linode/manager/pull/5292",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
158459953
|
Support LINQPad Program kind of queries
LINQPad Program queries could be supported easily by appending a call to Main at the end of the script such that:
void Main()
{
Console.WriteLine(DateTime.Now);
}
gets compiled to the following C# script:
void Main()
{
Console.WriteLine(DateTime.Now);
}
Main(); // added by LINQPadless
If Main is async then call to it needs to be awaited. For example:
async Task Main()
{
await Task.Delay(TimeSpan.FromSeconds(1));
Console.WriteLine(DateTime.Now);
}
should become:
async Task Main()
{
await Task.Delay(TimeSpan.FromSeconds(1));
Console.WriteLine(DateTime.Now);
}
await Main(); // added by LINQPadless
One option, albeit a little heavy-handed, would be to use Roslyn to parse and determine whether Main is async or not.
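A rough sketch of that option (illustrative only, not the project's implementation), assuming the Microsoft.CodeAnalysis.CSharp package is referenced and the query source is available as a string:
// Illustrative sketch: decide whether to append "Main();" or "await Main();".
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

static class MainCallAppender
{
    public static string AppendMainCall(string queryCode)
    {
        var root = CSharpSyntaxTree.ParseText(queryCode).GetRoot();
        var main = root.DescendantNodes()
                       .OfType<MethodDeclarationSyntax>()
                       .FirstOrDefault(m => m.Identifier.ValueText == "Main");

        var isAsync = main != null
                   && main.Modifiers.Any(SyntaxKind.AsyncKeyword);

        var call = isAsync
            ? "await Main(); // added by LINQPadless"
            : "Main(); // added by LINQPadless";

        return queryCode + "\n\n" + call;
    }
}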
|
gharchive/issue
| 2016-06-03T21:39:52 |
2025-04-01T04:34:54.225862
|
{
"authors": [
"atifaziz"
],
"repo": "linqpadless/LinqPadless",
"url": "https://github.com/linqpadless/LinqPadless/issues/2",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1803836653
|
Feature/Plugin Wishlist
Hi, feel free to add the feature you want or your favorite plugin to this wish list!
I suggest two plugins with the same function:
vim-auto-save
auto-save.nvim
I use the first one, since I have little lua experience.
cmp-dictionary
A dictionary completion source for nvim-cmp. This plugin provides one of the easiest ways to add desired completion candidates to nvim-cmp.
@nirvana6, auto-save.nvim seems to have newer features and is still maintained; added in #217.
@nirvana6, auto-save.nvim seems to have newer features and is still maintained; added in #217.
thx. 😄
|
gharchive/issue
| 2023-07-13T22:19:03 |
2025-04-01T04:34:54.230006
|
{
"authors": [
"linrongbin16",
"nirvana6"
],
"repo": "linrongbin16/lin.nvim",
"url": "https://github.com/linrongbin16/lin.nvim/issues/200",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
200343877
|
@default_handler falls with excpetion
Traceback (most recent call last): File "/Library/Python/2.7/site-packages/slackbot/dispatcher.py", line 55, in _dispatch_msg_handler func(Message(self._client, msg), *args) TypeError: default_handler() takes exactly 2 arguments (1 given)
It was working when I tried "Hello, ping!" a few days ago, but after finishing the real logic implementation I got this error.
I have no idea why I have this problem, but it would be very nice if someone could help me handle it!
Can you show your code snippet for your default handler function?
Sorry, my bad - the problem was in the way I tried to use the decorators: they were used inside a custom wrapper class (but they have to decorate dedicated functions at module level, without any class wrappers).
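In case it helps the next person who hits this trace, a minimal sketch of the working shape, assuming the stock slackbot plugin mechanism; the handler must be a plain module-level function in a plugin module, not a method on a wrapper class:
# my_plugins/default.py - minimal sketch of a default handler for slackbot
from slackbot.bot import default_reply


@default_reply
def my_default_handler(message):
    # Called for any message that no other handler matched.
    message.reply("Sorry, I didn't understand that.")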
|
gharchive/issue
| 2017-01-12T11:32:32 |
2025-04-01T04:34:54.318905
|
{
"authors": [
"evasyuk",
"lins05"
],
"repo": "lins05/slackbot",
"url": "https://github.com/lins05/slackbot/issues/127",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
115389668
|
Consider implementing JSON API
Consider implementing JSON API. See: http://jsonapi.org/
This could be considered related to issue #129 (see that for more info).
|
gharchive/issue
| 2015-11-05T22:27:57 |
2025-04-01T04:34:54.457286
|
{
"authors": [
"david-a-wheeler"
],
"repo": "linuxfoundation/cii-best-practices-badge",
"url": "https://github.com/linuxfoundation/cii-best-practices-badge/issues/131",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
281982223
|
plumb cni settings
- What I did
Minimize usage of /var binding in containers and simplify cni metadata path.
- How I did it
Let the cni config persist in /var/lib/cni/conf and /var/lib/cni/bin,
then share them via mounts to their default positions in /etc/cni/net.d and /opt/cni/bin in docker, kubelet and hostroot (cri-containerd mounts the weave pods' HostPath here).
- How to verify it
I ran both cluster types.
- Description for the changelog
plumb cni settings
- A picture of a cute animal (not mandatory but encouraged)
#33 is merged so rebase onto master should be sufficient.
Please sign your commits following these rules:
https://github.com/moby/moby/blob/master/CONTRIBUTING.md#sign-your-work
The easiest way to do this is to amend the last commit:
$ git clone -b "cni_settings" git@github.com:w9n/kubernetes.git somewhere
$ cd somewhere
$ git rebase -i HEAD~842354109296
editor opens
change each 'pick' to 'edit'
save the file and quit
$ git commit --amend -s --no-edit
$ git rebase --continue # and repeat the amend for each commit
$ git push -f
Amending updates the existing PR. You DO NOT need to open a new one.
|
gharchive/pull-request
| 2017-12-14T05:00:23 |
2025-04-01T04:34:54.465109
|
{
"authors": [
"GordonTheTurtle",
"ijc",
"w9n"
],
"repo": "linuxkit/kubernetes",
"url": "https://github.com/linuxkit/kubernetes/pull/29",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
59955184
|
Breaks when user settings are added
I followed the advice on the default .sublime-settings and put this in my user configuration LaTeXWordCount.sublime-settings:
{
"LaTeX": {
"exclude_footnotes": false
}
}
Then, when I run the word count, nothing appears to happen. This is printed on the console:
Traceback (most recent call last):
File "/Applications/Sublime Text.app/Contents/MacOS/sublime_plugin.py", line 549, in run_
return self.run(edit)
File "WordCount in /Users/jonchan/Library/Application Support/Sublime Text 3/Installed Packages/LaTeX Word Count.sublime-package", line 142, in run
File "WordCount in /Users/jonchan/Library/Application Support/Sublime Text 3/Installed Packages/LaTeX Word Count.sublime-package", line 89, in wordcount_latex
TypeError: 'NoneType' object is not iterable
I've also tried this with "exclude_abstract": false. This doesn't happen when the user config file is deleted, and appears to happen even if a blank file is present.
Hmm the current version can't look up default values that are nested in other config elements, so as a workaround for now you could simply copy-paste the entire "LaTeX" configuration block and adjust the exclude-options in there, i.e. have this as your user package settings:
{
"LaTeX": {
"markup_commands": ["text\\w+", "uppercase", "uline", "emph", "caption"],
"exclude_headers": false,
"exclude_footnotes": false,
"exclude_appendices": true,
"exclude_abstract": false
}
}
Please let me know if that works for you
It does! Any ideas on how to fix it? I don't know Python, so I can't help.
|
gharchive/issue
| 2015-03-05T14:01:41 |
2025-04-01T04:34:54.544457
|
{
"authors": [
"NathanJang",
"lionandoil"
],
"repo": "lionandoil/SublimeLaTeXWordCount",
"url": "https://github.com/lionandoil/SublimeLaTeXWordCount/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
245338288
|
An Android 6.0.1 phone cannot play m3u8; playback works fine on other phones. Looking for a solution, thanks.
@lipangit
That's beyond what I can solve.
|
gharchive/issue
| 2017-07-25T09:24:55 |
2025-04-01T04:34:54.559871
|
{
"authors": [
"lipangit",
"xdf0501"
],
"repo": "lipangit/JieCaoVideoPlayer",
"url": "https://github.com/lipangit/JieCaoVideoPlayer/issues/1081",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
245723724
|
I wrote several fragments in an activity and use the player inside a fragment, but the app crashes when going fullscreen
When I tap fullscreen, or when the gravity sensor rotates the video to fullscreen, the app crashes. Did I misconfigure something? Can the gravity-sensing feature be disabled manually?
It reports this exception: Caused by: java.lang.NullPointerException: Attempt to read from field 'int android.support.v4.app.Fragment.mContainerId' on a null object reference
Check the README; is the AndroidManifest set up correctly?
The parent activity is already configured, but the problem still occurs.
It's fixed now; it was indeed an activity configuration issue.
|
gharchive/issue
| 2017-07-26T13:33:59 |
2025-04-01T04:34:54.561431
|
{
"authors": [
"MiChongGET",
"lipangit"
],
"repo": "lipangit/JieCaoVideoPlayer",
"url": "https://github.com/lipangit/JieCaoVideoPlayer/issues/1083",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
866859092
|
non Chinese vocab (input)
Hi, thanks for the repo!
Can I use the code for non-Chinese languages?
Let's say for Russian text.
Thanks!
Sure, you may adjust this script to prepare the data for your scenario: https://github.com/lipiji/SongNet/blob/master/prepare_data.py
thx.
|
gharchive/issue
| 2021-04-24T23:31:50 |
2025-04-01T04:34:54.563031
|
{
"authors": [
"lipiji",
"pavelxx1"
],
"repo": "lipiji/SongNet",
"url": "https://github.com/lipiji/SongNet/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1350645489
|
Bump golangci-lint version to 1.49.0 and remove deprecated linters
Description
This PR bumps the golangci-lint version to 1.49.0 and removes a few deprecated linters. Additionally, it fixes a few trivial issues.
/test
/merge
|
gharchive/pull-request
| 2022-08-25T09:58:49 |
2025-04-01T04:34:54.590460
|
{
"authors": [
"giorio94"
],
"repo": "liqotech/liqo",
"url": "https://github.com/liqotech/liqo/pull/1399",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1361133582
|
Bump gci version to 0.7.0
The older version treats standard packages containing "/" (e.g. "net/netip") as third-party packages.
This version doesn't have this issue.
/test
/merge
|
gharchive/pull-request
| 2022-09-04T12:23:09 |
2025-04-01T04:34:54.592000
|
{
"authors": [
"aleoli",
"cheina97"
],
"repo": "liqotech/liqo",
"url": "https://github.com/liqotech/liqo/pull/1409",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
934973349
|
bugfix pod termination
Description
Pod deletion initiated from the home pod was not completed correctly.
Indeed, the pod's container statuses in the home cluster were never set to
terminated, so the virtual kubelet was not able to delete the pod.
This fix updates the container statuses in one go when the foreign pod
is deleted, i.e. we still miss updating the home pod when the foreign
containers are terminated. Fixing that problem is out of scope and would
probably require modifying the pod blacklisting mechanism, which seems
too risky at this time. Issue described here:
#721.
Resolves: #598
How Has This Been Tested?
Manually on my development workstation using 2 local clusters (created with kind).
Unit tests added for PodIncomingReflector.
/ok-to-test
/rebase
/rebase
/rebase
/merge
|
gharchive/pull-request
| 2021-07-01T15:35:54 |
2025-04-01T04:34:54.595283
|
{
"authors": [
"filippoprojetto",
"palexster"
],
"repo": "liqotech/liqo",
"url": "https://github.com/liqotech/liqo/pull/722",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1533934572
|
Spring Integration doesn't work with logs not from classpath when using includes
Environment
Liquibase Version: 4.18.0
Liquibase Integration & Version: Spring Boot 2.7.7
Description
This is a follow-up of #1540. Initially resolving the changelog outside classpath did not work. This was resolved with #1540. When using includes the problem still exists.
The problem is the code resolving files defined with an include. The problem also exists for location kinds other than the classpath. The problematic code is in liquibase.changelog.DatabaseChangeLog#include(java.lang.String, boolean, liquibase.resource.ResourceAccessor, liquibase.ContextExpression, liquibase.Labels, java.lang.Boolean, liquibase.changelog.DatabaseChangeLog.OnUnknownFileFormat, liquibase.changelog.DatabaseChangeLog.ModifyChangeSets)
`
public boolean include(String fileName,
boolean isRelativePath,
ResourceAccessor resourceAccessor,
ContextExpression includeContextFilter,
Labels labels,
Boolean ignore,
OnUnknownFileFormat onUnknownFileFormat,
ModifyChangeSets modifyChangeSets)
throws LiquibaseException {
if (".svn".equalsIgnoreCase(fileName) || "cvs".equalsIgnoreCase(fileName)) {
return false;
}
if (isRelativePath) {
try {
fileName = resourceAccessor.get(this.getPhysicalFilePath()).resolveSibling(fileName).getPath();
fileName = Paths.get(fileName).normalize().toString()
.replace("\\", "/");
} catch (IOException e) {
throw new UnexpectedLiquibaseException(e);
}
}
...
`
When the filename is based on a file URL, the call to Path.normalize() will fail. I would suggest not normalizing when a protocol like file: is used in the filename.
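A compilable sketch of that suggestion (illustrative only, not a tested patch against the Liquibase code base); it skips the Paths.get(...).normalize() step whenever the resolved name carries a URL scheme of at least two characters, so Windows drive letters such as D: are still normalized as plain paths:
import java.nio.file.Paths;

// Illustrative sketch of the suggested guard for DatabaseChangeLog.include(...).
final class IncludePathNormalizer {
    static String normalizeIfPlainPath(String fileName) {
        // Require a scheme of at least two characters so "D:/..." stays a plain path.
        boolean hasUrlScheme = fileName.matches("^[a-zA-Z][a-zA-Z0-9+.-]+:.*");
        if (hasUrlScheme) {
            return fileName;
        }
        return Paths.get(fileName).normalize().toString().replace("\\", "/");
    }
}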
Steps To Reproduce
Simple Spring Boot application using a changelog with include specified with file protocol rather than classpath.
Sample project https://github.com/reallyinsane/liquibase-3692
Actual Behavior
The changelog referenced via include cannot be loaded.
Caused by: java.nio.file.InvalidPathException: Illegal char <:> at index 4: file:D:/.../src/main/liquibase/changelog-1.0.xml at java.base/sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182) ~[na:na] at java.base/sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153) ~[na:na] at java.base/sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77) ~[na:na] at java.base/sun.nio.fs.WindowsPath.parse(WindowsPath.java:92) ~[na:na] at java.base/sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:229) ~[na:na] at java.base/java.nio.file.Path.of(Path.java:147) ~[na:na] at java.base/java.nio.file.Paths.get(Paths.java:69) ~[na:na] at liquibase.changelog.DatabaseChangeLog.include(DatabaseChangeLog.java:759) ~[liquibase-core-4.18.0.jar:na] at liquibase.changelog.DatabaseChangeLog.handleChildNode(DatabaseChangeLog.java:438) ~[liquibase-core-4.18.0.jar:na] at liquibase.changelog.DatabaseChangeLog.load(DatabaseChangeLog.java:376) ~[liquibase-core-4.18.0.jar:na] at liquibase.parser.core.xml.AbstractChangeLogParser.parse(AbstractChangeLogParser.java:23) ~[liquibase-core-4.18.0.jar:na]
The correctly resolved included file should be loaded with the file protocol.
Before parsing, the file name is normalized using Path.normalize(), which does not work for URL-based names.
Expected/Desired Behavior
normalize() is not called when a protocol like the file protocol is used. When the normalize call is skipped, everything works as expected.
Hello! To solve this you need to pass the searchPath param to Liquibase with the path to the changelog, and then pass the changelog file in the changelog param. We checked if there is any feature that allows that in Spring Boot but couldn't find any; maybe to solve this you need to submit a PR to Spring Boot. I think the code is over here
I disagree. If you don't want to support protocols other than classpath, then fine. But then this should be documented, as I don't want to change the Spring Liquibase autoconfiguration just because of a different protocol. The code I pointed at simply requires the name to always be a file path. I also don't understand how specifying the searchPath will help with the code mentioned.
I have the same issue with
Liquibase Version: 4.18.0
Spring Boot: 2.7.10
I specify "spring.liquibase.change-log=file:build/database/liquibase/migration/dbchangelog.xml" as TestPropertySource for my SpringBootTest.
If I run the maven build under Linux with openjdk version "17.0.5" 2022-10-18 everything is fine.
Running it under Windows (STS 4.17.s or via maven within eclipse) with OpenJDK 17.0.2 I get:
Caused by: java.nio.file.InvalidPathException: Illegal char <:> at index 4: file:build/database/liquibase/migration/changes/feature-01-example-table/changesets.xml
    at java.base/sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
    at java.base/sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
    at java.base/sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
    at java.base/sun.nio.fs.WindowsPath.parse(WindowsPath.java:92)
    at java.base/sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:232)
    at java.base/java.nio.file.Path.of(Path.java:147)
    at java.base/java.nio.file.Paths.get(Paths.java:69)
    at liquibase.changelog.DatabaseChangeLog.include(DatabaseChangeLog.java:812)
    at liquibase.changelog.DatabaseChangeLog.handleChildNode(DatabaseChangeLog.java:452)
    at liquibase.changelog.DatabaseChangeLog.load(DatabaseChangeLog.java:388)
    at liquibase.parser.core.xml.AbstractChangeLogParser.parse(AbstractChangeLogParser.java:23)
    ... 110 common frames omitted
Maybe it has something to do with the special meaning of ':' under Windows?
Hello! You are right that we should improve our documentation, but that's the way Liquibase is designed. The reason we don't support that kind of behavior is that Liquibase uses the path to the changelog as part of the identifier of a changeset; you can read about that over here . Using searchPath tells Liquibase where to look. If you try to have a path like @tschniewind has, I'm assuming it's for a dev env; if you stage those changes the path would change and the changeset will have a different identifier. We also should fix the
|
gharchive/issue
| 2023-01-15T17:59:57 |
2025-04-01T04:34:54.606802
|
{
"authors": [
"FBurguer",
"reallyinsane",
"tschniewind"
],
"repo": "liquibase/liquibase",
"url": "https://github.com/liquibase/liquibase/issues/3692",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1948094261
|
DAT-15548
Impact
[X] Bug fix (non-breaking change which fixes expected existing functionality)
[ ] Enhancement/New feature (adds functionality without impacting existing logic)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Description
Things to be aware of
Things to worry about
Additional Context
check comment : https://datical.atlassian.net/browse/DAT-15548?focusedCommentId=131845
|
gharchive/pull-request
| 2023-10-17T19:19:14 |
2025-04-01T04:34:54.610349
|
{
"authors": [
"sayaliM0412"
],
"repo": "liquibase/liquibase",
"url": "https://github.com/liquibase/liquibase/pull/5068",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
261911852
|
Examples do not run on OSX 10.12.x with SBCL
On newer versions of OSX, the examples do not run under SBCL:
* 2017-10-01 15:05:47.856 sbcl[37839:3834749]
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException',
reason: 'nextEventMatchingMask should only be called from the Main Thread!'
*** First throw call stack:
(
0 CoreFoundation 0x... __exceptionPreprocess + 171
1 libobjc.A.dylib 0x... objc_exception_throw + 48
2 AppKit 0x... -[NSApplication(NSEvent)
_nextEventMatchingEventMask:untilDate:inMode:dequeue:]
+ 4480
3 libSDL2.dylib 0x... Cocoa_PumpEvents + 211
4 libSDL2.dylib 0x... SDL_PumpEvents_REAL + 23
5 libSDL2.dylib 0x... SDL_WaitEventTimeout_REAL + 76
6 ??? 0x... 0x0 + 582453995
7 ??? 0x... 0x0 + 582783784
8 ??? 0x... 0x0 + 582786444
9 ??? 0x... 0x0 + 582588451
10 ??? 0x... 0x0 + 582590824
11 ??? 0x... 0x0 + 580565872
)
This is due to the initialisation functions needing to be run from the main thread.
Addressed this in the example in the README
|
gharchive/issue
| 2017-10-01T12:13:12 |
2025-04-01T04:34:54.618587
|
{
"authors": [
"mfiano",
"tavurth"
],
"repo": "lispgames/sdl2kit",
"url": "https://github.com/lispgames/sdl2kit/issues/10",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2319647576
|
Import loop in @lit/localize
Which package(s) are affected?
Localize (@lit/localize)
Description
Hi there 👋,
Thanks for all the wonderful work! I noticed a little something:
lit-localize.js and init/*.js have an import loop.
There is already a comment to drop the lines:
https://github.com/lit/lit/blob/abf30b3e895ea5d833f6d9559612e2b1ba47580d/packages/localize/src/lit-localize.ts#L15C1-L22C37
However, the comment is only regarding the bundle size, not the import loop.
Reproduction
I noticed this while using a post-processor that traverses imports. It's much too complicated to set up, but you can simply check the imports manually and verify the issue:
https://github.com/search?q=repo%3Alit%2Flit+import+{_installMsgImplementation}+from+'..%2Flit-localize.js'%3B&type=code
Workaround
I used esbuild to bundled it. Most bundlers will ignore the loop.
Is this a regression?
No or unsure. This never worked, or I haven't tried before.
Affected versions
latest
Browser/OS/Node environment
OS independent issue.
Potential solutions
Remove the code, that's marked for deprecation (clean, but risky).
Move the _installMsgImplementation into a separate file (safe).
What's the problem being caused by the import cycle?
What's the problem being caused by the import cycle?
Django can't process the package. It has built-in static file collection, including caching. This requires resolving dependencies and assigning hash-based version postfixes (cache busters) to filenames. Django ships with cycle detection, so it's impossible to resolve the problem.
That behavior is very much required, especially with in-browser ESM loading. We actually serve Lit as ESM in the browser. It's been wonderful, except for translations.
Solution 2 seems like a quick and easy fix – it's an internal function anyway – and we can punt the removal of re-exports to a later breaking change.
Thank you that would be amazing!
Sure, I think, I did everything I was supposed to according to the contributing guide, don't hesitate to amend my patch.
|
gharchive/issue
| 2024-05-27T18:52:38 |
2025-04-01T04:34:54.626511
|
{
"authors": [
"augustjk",
"codingjoe",
"justinfagnani"
],
"repo": "lit/lit",
"url": "https://github.com/lit/lit/issues/4655",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
160313618
|
Allow setting of initial variables using data-var-*
This is a convenience form for declaring them in data-vars and then setting them in the init code.
data-vars can be phased out, then.
data-var-* should ideally default to 0, but data-var-*="" should give the empty string.
data-vars (and subsequently, data-var) has now been phased out in favour of just declaring all needed variables inside the init function.
|
gharchive/issue
| 2016-06-15T00:19:40 |
2025-04-01T04:34:54.653148
|
{
"authors": [
"literallybenjam"
],
"repo": "literallybenjam/jelli",
"url": "https://github.com/literallybenjam/jelli/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
613598512
|
Added go.mod and deprecated dep
Signed-off-by: Rahul M Chheda rahul.chheda@mayadata.io
What this PR does / why we need it:
Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #
Special notes for your reviewer:
Checklist:
[ ] Fixes #
[ ] Labelled this PR & related issue with documentation tag
[ ] PR messages has document related information
[ ] Labelled this PR & related issue with breaking-changes tag
[ ] PR messages has breaking changes related information
[ ] Labelled this PR & related issue with requires-upgrade tag
[ ] PR messages has upgrade related information
[ ] Commit has unit tests
[ ] Commit has integration tests
Add the description and reference it to the issue
|
gharchive/pull-request
| 2020-05-06T20:39:04 |
2025-04-01T04:34:54.665242
|
{
"authors": [
"chandankumar4",
"rahulchheda"
],
"repo": "litmuschaos/chaos-runner",
"url": "https://github.com/litmuschaos/chaos-runner/pull/66",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
719552575
|
Updated README.md
Proposed changes
Summarize your changes here to communicate with the maintainers and make sure to put the link of that issue
Types of changes
What types of changes does your code introduce to Litmus? Put an x in the boxes that apply
[ ] New feature (non-breaking change which adds functionality)
[ ] Bugfix (non-breaking change which fixes an issue)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] Documentation Update (if none of the other choices applies)
Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code.
[ ] I have read the CONTRIBUTING doc
[ ] I have signed the commit for DCO to be passed.
[ ] Lint and unit tests pass locally with my changes
[ ] I have added tests that prove my fix is effective or that my feature works (if appropriate)
[ ] I have added necessary documentation (if appropriate)
Dependency
Please add the links to the dependent PR need to be merged before this (if any).
Special notes for your reviewer:
The overview section doesn't intend to introduce experiment categories right away.
|
gharchive/pull-request
| 2020-10-12T17:50:01 |
2025-04-01T04:34:54.670916
|
{
"authors": [
"ksatchit",
"prakharshreyash15"
],
"repo": "litmuschaos/litmus",
"url": "https://github.com/litmuschaos/litmus/pull/2244",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1268193563
|
Hooking an instance variable fails
I hooked an instance variable, but it has no effect.
However, if I copy this hook and change it to record the instance variable, recording works. Neither the before nor the after hook point works.
The app has storage permission, and hooking return values also works.
The variable may have been changed after the hook; you need to find the right hook point.
|
gharchive/issue
| 2022-06-11T07:44:41 |
2025-04-01T04:34:54.697581
|
{
"authors": [
"hdyyds666",
"littleWhiteDuck"
],
"repo": "littleWhiteDuck/SimpleHook",
"url": "https://github.com/littleWhiteDuck/SimpleHook/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2109676702
|
lfs_file_write() return a error message of LFS_ERR_NOENT
hello
I am using littlefs 2.8.01. It was working normally for a long time; suddenly, lfs_file_write() returned an error (LFS_ERR_NOENT). Only formatting can restore normal operation, and restarting does not solve the problem.
I am Chinese and my English is not good, but I want to describe the problem to you; I hope to get help from you.
thanks
I am seeing the same issue with lfs 2.8, disc v.2.1
Block 0 and block 1 seem to be corrupted after extensive use.
Is it possible a recent change introduced this problem? Is there an "older known good version" that anyone can recommend?
I am seeing the same issue with lfs 2.8, disc v.2.1. Block 0 and block 1 seem to be corrupted after extensive use. Is it possible a recent change introduced this problem? Is there an "older known good version" that anyone can recommend?
Thanks, I am very happy to receive your reply; it is useful to me. I am now using the newer version v2.8.2. I hope it can avoid the problem. If you have already solved it, please tell me. Thanks.
Best wishes to you!
No, I haven't resolved, I continue to test the flash drivers etc. which all seem fine. The test harness is pretty intensive and significant tests pass, but eventually it fails.
No, I haven't resolved, I continue to test the flash drivers etc. which all seem fine. The test harness is pretty intensive and significant tests pass, but eventually it fails.
I have been testing this problem for a long time, but it has only appeared once. I think version v2.8.1 possibly has a bug, but I have no solid evidence. I hope the author of littlefs can help me solve it; otherwise, I dare not use the code. I want to recommend spiffs to you; it is another file system, and it is very stable. Moreover, if you have another method or idea, please tell me. You can contact me quickly via the email 2460070599@qq.com, when it is convenient for you.
Best wishes to you!
hi
thanks for the email
I am suspicious it's a problem with littlefs, but I need to do more checks..
I am noticing littlefs does different things sometimes and I assume this is because something is wrong in my code;
it could be hardware, power for example, but I still suspect software...
what flash chip are you using?
Sorry, I don't think I have much to add.
The last significant change was the introduction of the FCRC in v2.6.0 (so v2.5.1 may be interesting). Since then we've only had minor changes (additive features, heuristic tweaks, documentation, etc; releases).
I don't know why lfs_file_write would return LFS_ERR_NOENT. This could happen if the file's metadata entry disappeared, but that shouldn't happen normally. It would be very interesting to know if this changed across a version bump.
I want to recommend spiffs to you; it is another file system, and it is very stable
spiffs is a good filesystem. It predates littlefs and takes a very different approach, so I wouldn't be surprised if there are cases where spiffs behaves better than littlefs.
Thanks, I am very happy to receive your reply; it is useful to me. If you have anything else, please tell me.
Running with my test harness... 2.0, 2.4, and 2.8 fail the same way (after some time) - with a file creation error. 2.1.1 gets through twice as many tests as the versions above, but ultimately fails with the same error.
Sorry, I have no good idea how to solve this problem. I am ready to use the old version of littlefs (2.5.1).
|
gharchive/issue
| 2024-01-31T09:39:04 |
2025-04-01T04:34:54.732555
|
{
"authors": [
"evaneight",
"filkli"
],
"repo": "littlefs-project/littlefs",
"url": "https://github.com/littlefs-project/littlefs/issues/933",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1174446504
|
🛑 USTC is down
In 9d07f36, USTC (https://www.ustc.edu.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: USTC is back up in 7a635c2.
|
gharchive/issue
| 2022-03-20T05:40:51 |
2025-04-01T04:34:54.739482
|
{
"authors": [
"littlekud"
],
"repo": "littlekud/sites-status",
"url": "https://github.com/littlekud/sites-status/issues/587",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1248027220
|
Learn about google auth and perhaps switch to openidconnect-rs
Perhaps switch the current goauth google auth package to
https://github.com/ramosbugs/openidconnect-rs as it seems more reliable and perhaps even easier to use?
To Read:
[ ] https://developers.google.com/identity/protocols/oauth2/openid-connect
[ ] https://openid.net/connect/
The example:
https://github.com/ramosbugs/openidconnect-rs/blob/main/examples/google.rs
Hi @liufuyang , thanks for open-sourcing this project! Is there progress on this ticket ? I'm happy to look into this.
@dwirya Hey there, thank you, feel free to do it, I haven't started on this yet. Do you also think it is a good idea to do this change? Feel free to make a PR if you have ideas then we can take a look at it together 🙂
@liufuyang good to know, thanks! Ideally I should be able to:
Attach a GCP credential in my container
Use gcloud login to populate application_default_credentials.json
In my Rust service, use read_authorized_user_secret() in yup_oauth2 (was going to try this instead) that will read that JSON file.
This way, I don't have to manage the secret files anymore. The current state in bigtable_rs requires storing the key-file locally in the container, which I think is less than ideal from a security POV. It's not a deal-breaker, but certainly can be improved. That's my take on it, I'm also happy to experiment with openidconnect to compare the two in terms of ergonomics.
@dwirya Thank you so much. https://github.com/dermesser/yup-oauth2 looks like a good alternative. I don't know the details between yup-oauth2 and openidconnect-rs, so feel free to choose what you think makes the best case here.
Can you elaborate a bit more about point 1 above?
For points 2 and 3, I think for the current implementation the workflow can still be similar as firstly you run gcloud login then set GOOGLE_APPLICATION_CREDENTIALS=$HOME/.config/gcloud/application_default_credentials.json to make it work.
The current state in bigtable_rs requires storing the key-file locally in the container, which I think is less than ideal from a security POV
Totally agree that it would be great to find a way to not store the key JSON file inside a container; it's just that for now I am not sure how you would solve it via things like yup-oauth2.
What we are really lacking is an implementation of Application Default Credentials, for example here is a Google's Java implementation.
But looks like yup-oauth2 is already providing this feature, as its readme says:
Service account flow: Non-interactive authorization of server-to-server communication based on public key cryptography. Used for services like Cloud Pubsub, Cloud Storage, ...
And perhaps this feature is what you want to try out? It would be super nice if you could test this out. Using a key json file with GOOGLE_APPLICATION_CREDENTIALS is easy and if we could figure out a way to allow Service Account flow works then that would be super awesome :)
Just to note down some docs about ServiceAccount auth:
CloudRun ServiceAccount fetching access token
Compute Engine auth directly with access tokens
Otherwise, please give a look at https://github.com/hrvolapeter/gcp_auth
Seems to be an even simpler implementation and provides exactly what we need here?
I actually don't have much experience in OAuth too 😅. But I'm sure given enough time to read through the docs I can figure something out.
Regarding my point 1, coming from AWS, when running a container, Fargate (CloudRun equivalent) will inject the credentials into the container via volume mounting. I'm pretty sure a similar flow exists in GCP from skimming through the docs here.
I'm not particularly happy with my current solution, which is to embed the key-file in the project and hence, the container itself. I can also do gcloud login in the container, but since each key-file can only be downloaded once, I will end up with one key-file per image. Since a key-file is permanent, this may pose security risks if one of the key-files is compromised, because now I don't know which key-file to disable. Therefore, I think embedding it is safer, since if it's compromised, I just need to replace it with a new key-file.
The gcp_auth crate looks interesting, although I prefer to use yup_oauth2 simply because other GCP crates such as bigquery_storage are using it as well, so it looks more reliable.
If it's all good to you, I'll fork the current main branch and try out Application Default Credentials (ADC) using yup_oauth2 and a service account attached to the container. Will let you know soon 😁.
I see. bigquery_storage seems to not be maintained that well, or? I think we can use whatever we feel is simple here. Yes, feel free to create a PR, and if you don't have time for looking deeper into gcp_auth, I might be able to give it a try if I have time.
One more thing is, I still didn't understand your statement:
I'm not particularly happy with my current solution, which is to embed the key-file in the project and hence, the container itself.
The current version of bigtable_rs does support configuring GOOGLE_APPLICATION_CREDENTIALS, so in your container you can set up a volume mount that contains a credential JSON (so you do not embed any key-file in the docker image, and you only do it once with a single key); then in your container's environment settings you set GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/container/secret/volume/mount/key.json
Wouldn't this solve your problem?
What I don't see is what extra it brings you to use yup_oauth2 here, as we already have goauth used here (but we need more code in this repo to make it really work). For now I don't see how yup_oauth2 will help you better than the current bigtable_rs impl. (The only benefit for us here is that it can probably reduce the lines of code.) Or perhaps I missed something? 🤔
The doc link you gave above is about service identity, and that is a very different approach than mounting key.json (either baking it into the image or using k8s/CloudRun secret volume mounting). Perhaps that is what you want - do not use any JSON key and do not set any GOOGLE_APPLICATION_CREDENTIALS? Then for sure we will be in need of something like yup_oauth2 or gcp_auth 😄
@dwirya I got some time yesterday and here https://github.com/liufuyang/bigtable_rs/pull/27 is my proposal and I think gcp_auth is a cleaner solution + simpler API?
You can take a look at the last commit showing the part of start using gcp_auth.
Let me know if you like this approach. Otherwise, if you want to try with yup_oauth2, you can checkout from my branch and try switch gcp_auth to yup_oauth2. We can take a look at whether it feels better. What do you think?
Hi @liufuyang , sorry for the late reply as I've been busy with work. Currently we've decided to do the volume mounting approach similar to your proposal, and I think it's probably more straightforward than using ADC in my opinion. Therefore I don't really see the benefit of my initial approach anymore.
But seeing that you're using gcp_auth, I think it looks pretty good. Since I won't be using ADC until I hit a roadblock, I don't really have any opinions regarding the change. If there's anything else you'd like me to investigate, you can let me know :)
Sounds great then, I will try to release a new version soon with these updates, as I think it will be more efficient than the current implementation. Feel free to try it when I release a new version, and if you have any problems just let me know and I will try to fix them as soon as possible 👍
@dwirya I have version 0.2.1 released, if you want, give it a try and let me know if you encounter any issues. I will close this issue for now. Thanks again for taking initiative to help here :)
|
gharchive/issue
| 2022-05-25T12:31:44 |
2025-04-01T04:34:54.767838
|
{
"authors": [
"dwirya",
"liufuyang"
],
"repo": "liufuyang/bigtable_rs",
"url": "https://github.com/liufuyang/bigtable_rs/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1435500291
|
Suggestion: drop the default Converter
Could you consider removing the default GsonConverter and letting users define their own?
Right now GSON is included by default; if a different JSON parsing library is used, an unused GSON dependency gets pulled in.
The built-in Gson is not only used for GsonConverter, it is also used in other places; I tried to remove it but gave up.
If you don't use it, you can remove it manually, although that doesn't make much difference, because when building a release package, unused code is removed automatically.
|
gharchive/issue
| 2022-11-04T03:39:38 |
2025-04-01T04:34:54.774235
|
{
"authors": [
"jyygithub",
"liujingxing"
],
"repo": "liujingxing/rxhttp",
"url": "https://github.com/liujingxing/rxhttp/issues/414",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
505617787
|
Database foreign-key associations: the generated entities have no corresponding relationships
Could entity association relationships be generated, such as one-to-many and many-to-one?
There is no context information, so association relationships cannot be generated.
|
gharchive/issue
| 2019-10-11T03:34:15 |
2025-04-01T04:34:54.774997
|
{
"authors": [
"honeykee",
"liukefeng2008"
],
"repo": "liukefeng2008/idea-plugin-jpa-support",
"url": "https://github.com/liukefeng2008/idea-plugin-jpa-support/issues/32",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1926619481
|
Uninstall? All local traffic is now forwarded to HTTPS
Describe the bug
I needed this briefly, now I no longer need it. I removed the package from the project, but all HTTP requests are still being forwarded to HTTPS, which now fails since I'm not using this package any longer.
Reproduction
Install and setup package
Remove package
Start dev server with https: false
Local url forwards everything to HTTPS still and returns an SSL/TLS error
This includes all local traffic. So opening an entirely different project no longer works either. Effectively all HTTP traffic to local host is now redirected to HTTPS.
Please clear your browser cache
|
gharchive/issue
| 2023-10-04T16:50:08 |
2025-04-01T04:34:54.779614
|
{
"authors": [
"bruceharrison1984",
"liuweiGL"
],
"repo": "liuweiGL/vite-plugin-mkcert",
"url": "https://github.com/liuweiGL/vite-plugin-mkcert/issues/74",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
217844338
|
While testing bot.py I found the following problems! Sharing them below.
1. If the other party is using the desktop version of WeChat, the bot does not reply.
2. If, in a group, you use the bot's WeChat account and change the default group chat name, the bot does not reply.
1 #36
|
gharchive/issue
| 2017-03-29T11:21:58 |
2025-04-01T04:34:54.781157
|
{
"authors": [
"Huarong",
"eastossifrage"
],
"repo": "liuwons/wxBot",
"url": "https://github.com/liuwons/wxBot/issues/211",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
173750761
|
#4450 update storm version to 1.0.0
The pull request references the related JIRA issue ("[FLINK-4450] ")
Update the Storm version to 1.0.0, because the new Storm version uses the package path "org.apache." in place of "backtype.", and this change is permanent.
A flink-storm-examples pull request will be submitted later.
Builds successfully, runs successfully.
|
gharchive/pull-request
| 2016-08-29T11:11:02 |
2025-04-01T04:34:54.784094
|
{
"authors": [
"liuyuzhong"
],
"repo": "liuyuzhong/flink",
"url": "https://github.com/liuyuzhong/flink/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1800200737
|
🛑 navyfilms.tv is down
In 66a012f, navyfilms.tv (https://navyfilms.tv) was down:
HTTP code: 0
Response time: 0 ms
Resolved: navyfilms.tv is back up in a7062ba.
|
gharchive/issue
| 2023-07-12T05:16:52 |
2025-04-01T04:34:54.787528
|
{
"authors": [
"nonamenix"
],
"repo": "live4dev/uptime.live4.dev",
"url": "https://github.com/live4dev/uptime.live4.dev/issues/459",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2524185575
|
It took very long time to connect the livekit room
It takes a very long time to connect to the LiveKit room
When I connect to the LiveKit room via wss URL and token, it takes too long to connect.
The log is below:
09-13 01:00:08.415 2929 6204 I AsrLiveKitServer: startAsrLiveKit: wsURL=ws://, token=**
09-13 01:00:08.886 2929 2929 I AsrLiveKitServer: connect end
09-13 01:00:10.034 2929 5458 I AsrLiveKitServer: events collect: io.livekit.android.events.RoomEvent$Connected@f34b482
09-13 01:00:10.036 2929 2929 I AsrLiveKitServer: set mic enable and speak not mute
09-13 01:00:10.037 2929 2929 I AsrLiveKitServer: set mic enable and speak not mute
09-13 01:00:10.039 2929 2929 I AsrLiveKitServer: set mic enable and speak not mute
09-13 01:00:10.204 2929 4980 I AsrLiveKitServer: events collect: io.livekit.android.events.RoomEvent$TrackPublished@35453d0
09-13 01:00:13.529 2929 5109 I AsrLiveKitServer: events collect: io.livekit.android.events.RoomEvent$ConnectionQualityChanged@57d96ce
09-13 01:00:14.545 2929 5109 I AsrLiveKitServer: events collect: io.livekit.android.events.RoomEvent$ParticipantConnected@bc029ef
09-13 01:00:15.019 2929 5109 I AsrLiveKitServer: events collect: io.livekit.android.events.RoomEvent$DataReceived@b72dcfc
09-13 01:00:15.020 2929 5109 I AsrLiveKitServer: :��
09-13 01:00:15.020 2929 5109 I AsrLiveKitServer: events collect: io.livekit.android.events.RoomEvent$TrackPublished@6250f85
09-13 01:00:15.020 2929 5109 I AsrLiveKitServer: events collect: io.livekit.android.events.RoomEvent$ConnectionQualityChanged@e6051da
09-13 01:00:15.066 2929 4996 I AsrLiveKitServer: events collect: io.livekit.android.events.RoomEvent$TrackSubscribed@8f760b
09-13 01:00:15.066 2929 4996 I AsrLiveKitServer: event: io.livekit.android.room.track.RemoteAudioTrack@a8ecce8
Hi @tombang,
Did this issue occur the first time you called connect, or did it happen in subsequent attempts? I've encountered a similar bug where calling room.disconnect() gets blocked, which causes any subsequent connect attempts to fail indefinitely.
Would appreciate any insight you can provide!
Hi @thiendn160794 ,
This issue occurs the first time I call connect, and it occurs again after disconnect().
|
gharchive/issue
| 2024-09-13T08:16:17 |
2025-04-01T04:34:54.796797
|
{
"authors": [
"thiendn160794",
"tombang"
],
"repo": "livekit/client-sdk-android",
"url": "https://github.com/livekit/client-sdk-android/issues/503",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1116804800
|
L2: Only non-self delegated stake for DelegatorPool in L2Migrator
What does this pull request do? Explain your changes. (required)
This PR fixes an issue where the L2Migrator adds an orchestrator's entire delegated stake for a newly created DelegatorPool in finalizeMigrateDelegator(). The L2Migrator should only add an orchestrator's non-self delegated stake for the DelegatorPool because the orchestrator's self-stake is already accounted for by the bondForWithHint() call that the L2Migrator executes to add the self-stake for the orchestrator. The fix is to subtract the orchestrator's self-stake from its delegated stake in order to calculate its non-self delegated stake and to use that value when staking for the DelegatorPool.
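As a quick illustration of the double counting the fix avoids, here is a minimal sketch in plain Java (the contract itself is Solidity; the variable names and numbers below are made up for illustration only and are not taken from the L2Migrator code):

```java
import java.math.BigInteger;

// Illustrative only: mirrors the arithmetic described above, not the contract code.
public class DelegatorPoolStakeSketch {
    public static void main(String[] args) {
        // Hypothetical 18-decimal integer amounts.
        BigInteger delegatedStake = new BigInteger("1000000000000000000000"); // total stake delegated to the orchestrator
        BigInteger selfStake = new BigInteger("400000000000000000000");       // orchestrator's own stake, already migrated via bondForWithHint()

        // Only the non-self portion should be staked for the DelegatorPool;
        // adding the full delegated stake would count the self-stake twice.
        BigInteger nonSelfDelegatedStake = delegatedStake.subtract(selfStake);

        System.out.println("Stake for DelegatorPool: " + nonSelfDelegatedStake); // 600000000000000000000
    }
}
```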
Specific updates (required)
See commit history.
How did you test each of these updates (required)
Updated tests.
Does this pull request close any open issues?
N/A
Checklist:
[x] README and other documentation updated
[x] All tests using yarn test pass
Pull Request Test Coverage Report for Build 1758768854
1 of 1 (100.0%) changed or added relevant line in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 100.0%
Totals
Change from base Build 1742152119:
0.0%
Covered Lines:
245
Relevant Lines:
245
💛 - Coveralls
|
gharchive/pull-request
| 2022-01-27T22:30:00 |
2025-04-01T04:34:54.804989
|
{
"authors": [
"coveralls",
"yondonfu"
],
"repo": "livepeer/arbitrum-lpt-bridge",
"url": "https://github.com/livepeer/arbitrum-lpt-bridge/pull/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1333667223
|
No transcoding for 4K videos
https://discord.com/channels/423160867534929930/992554008776540180/1006636908836827267
having issues with 4K streams. At some point I thought they just didn't work, but now only transcoding is broken; only the source rendition is available
likely transcoding is taking too long and then being ignored as if it timed out? i know it's normally close to/worse than real-time for 4K videos
live for the next ~2h:
https://lvpr.tv/?url=https://sao-canary-catalyst-0.livepeer.fun/hls/video+196a6qz7pg06nl5h/index.m3u8?mute=false
Streaming command:
ffmpeg -re -i ./vintage/vintage-cercle.mp4 -c:v copy -c:a copy -strict -2 -f flv rtmp://sao-canary-catalyst-0.livepeer.fun/live/196a-em8j-4822-1hun
File is 4K with H264 and AAC as required. Can send it over to someone but I don't think there's anything special with it. Works fine on livepeer.com.
I've successfully tested 4k60 transcoding with a local GPU, so it doesn't seem to be an inherent problem in go-livepeer or anything like that — probably you're right about the timeouts.
@iameli do we have any tighter timeouts on multi-node catalyst vs our current setup of mist+broadcaster today?
@victorges @emranemran Are you working on this issue? If yes, could you add assignee and move to In Progress?
Looking into this. I was able to reproduce the issue:
#EXTM3U
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-STREAM-INF:CODECS="avc1.640033,mp4a.40.2",RESOLUTION=3840x2160,FRAME-RATE=29.97,BANDWIDTH=17448194,AVERAGE-BANDWIDTH=14763857
0_1/index.m3u8?mTrack=0&iMsn=2&sessId=157082073
Handing over to @hjpotter92 , because @emranemran is OOO.
unable to repro, tested using the following command:
fetching the playback manifest:
➜ http https://sin-canary-catalyst-0.livepeer.fun/hls/videorec+0c5e9i6m8hkzbfwg/index.m3u8
HTTP/1.1 200 OK
Accept: */*
Accept-Encoding: gzip, deflate
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: *
Access-Control-Allow-Methods: GET, POST, OPTIONS, HEAD
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: *
Access-Control-Max-Age: 600
Access-Control-Request-Headers: *
Access-Control-Request-Method: GET
Cache-Control: no-store
Content-Length: 865
Content-Type: application/vnd.apple.mpegurl
Date: Tue, 23 Aug 2022 06:49:44 GMT
Expires: 0
Host: sin-canary-catalyst-0.livepeer.fun
Pragma: no-cache
Set-Cookie: sid=3431820251; Max-Age=600
User-Agent: HTTPie/3.2.1
X-Forwarded-For: 122.171.17.100
X-Forwarded-Host: sin-canary-catalyst-0.livepeer.fun
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: dp6185
X-Real-Ip: 122.171.17.100
#EXTM3U
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-STREAM-INF:CODECS="avc1.640034",RESOLUTION=3840x2160,FRAME-RATE=25,BANDWIDTH=27844804,AVERAGE-BANDWIDTH=23560988
0/index.m3u8?mTrack=0&iMsn=1&sessId=3431820251
#EXT-X-STREAM-INF:CODECS="avc1.4d401f",RESOLUTION=1280x720,FRAME-RATE=25,BANDWIDTH=3933332,AVERAGE-BANDWIDTH=3328204
1/index.m3u8?mTrack=0&iMsn=1&sessId=3431820251
#EXT-X-STREAM-INF:CODECS="avc1.4d401e",RESOLUTION=854x480,FRAME-RATE=25,BANDWIDTH=2093936,AVERAGE-BANDWIDTH=1771792
2/index.m3u8?mTrack=0&iMsn=1&sessId=3431820251
#EXT-X-STREAM-INF:CODECS="avc1.4d401e",RESOLUTION=640x360,FRAME-RATE=25,BANDWIDTH=1039678,AVERAGE-BANDWIDTH=879727
3/index.m3u8?mTrack=0&iMsn=1&sessId=3431820251
#EXT-X-STREAM-INF:CODECS="avc1.4d4015",RESOLUTION=426x240,FRAME-RATE=25,BANDWIDTH=325062,AVERAGE-BANDWIDTH=275053
4/index.m3u8?mTrack=0&iMsn=1&sessId=3431820251
Hmm weird, could it be dependent on the region we're using and the orchestrators availability in the public network? I streamed to sao region when it consistently did not work. You could SSH to my dev server (lp-dev function in the lp.fish hack) and then stream one of the files under ~/workspace/test-videos. The specific 4K one was vintage-cercle.mp4, but I think there's a BBB one as well.
Hmm weird, could it be dependent on the region we're using and the orchestrators availability in the public network? I streamed to sao region when it consistently did not work. You could SSH to my dev server (lp-dev function in the lp.fish hack) and then stream one of the files under ~/workspace/test-videos. The specific 4K one was vintage-cercle.mp4, but I think there's a BBB one as well.
tested from inside gcloud instance; still unable to repro. tried streaming to sao/sin/nyc and fetching index.m3u8 from all 3 of them back; each time the response was (with a little delay when stream started):
#EXTM3U
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="und-1",NAME="AAC-1",URI="1/index.m3u8?mTrack=0&iMsn=1&sessId=147465773"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="und-2",NAME="AAC-2",URI="2/index.m3u8?mTrack=0&iMsn=1&sessId=147465773"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="und-3",NAME="AAC-3",URI="3/index.m3u8?mTrack=0&iMsn=1&sessId=147465773"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="und-4",NAME="AAC-4",URI="4/index.m3u8?mTrack=0&iMsn=1&sessId=147465773"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="und-5",NAME="AAC-5",URI="5/index.m3u8?mTrack=0&iMsn=1&sessId=147465773"
#EXT-X-STREAM-INF:AUDIO="aud",CODECS="avc1.640033,mp4a.40.2",RESOLUTION=3840x2160,FRAME-RATE=25,BANDWIDTH=17020182,AVERAGE-BANDWIDTH=14401693
0/index.m3u8?mTrack=0&iMsn=1&sessId=147465773
#EXT-X-STREAM-INF:AUDIO="aud",CODECS="avc1.4d401f,mp4a.40.2",RESOLUTION=1280x720,FRAME-RATE=25,BANDWIDTH=4202078,AVERAGE-BANDWIDTH=3555605
6/index.m3u8?mTrack=0&iMsn=1&sessId=147465773
#EXT-X-STREAM-INF:AUDIO="aud",CODECS="avc1.4d401e,mp4a.40.2",RESOLUTION=854x480,FRAME-RATE=25,BANDWIDTH=2325118,AVERAGE-BANDWIDTH=1967407
7/index.m3u8?mTrack=0&iMsn=1&sessId=147465773
#EXT-X-STREAM-INF:AUDIO="aud",CODECS="avc1.4d401e,mp4a.40.2",RESOLUTION=640x360,FRAME-RATE=25,BANDWIDTH=1242748,AVERAGE-BANDWIDTH=1051556
8/index.m3u8?mTrack=0&iMsn=1&sessId=147465773
#EXT-X-STREAM-INF:AUDIO="aud",CODECS="avc1.4d4015,mp4a.40.2",RESOLUTION=426x240,FRAME-RATE=25,BANDWIDTH=502684,AVERAGE-BANDWIDTH=425348
9/index.m3u8?mTrack=0&iMsn=1&sessId=147465773
➜ http https://nyc-canary-catalyst-0.livepeer.fun/hls/videorec+0c5e9i6m8hkzbfwg/index.m3u8
HTTP/1.1 200 OK
Accept: */*
Accept-Encoding: gzip, deflate
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: *
Access-Control-Allow-Methods: GET, POST, OPTIONS, HEAD
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: *
Access-Control-Max-Age: 600
Access-Control-Request-Headers: *
Access-Control-Request-Method: GET
Cache-Control: no-store
Content-Length: 1591
Content-Type: application/vnd.apple.mpegurl
Date: Wed, 24 Aug 2022 05:59:46 GMT
Expires: 0
Host: nyc-canary-catalyst-0.livepeer.fun
Pragma: no-cache
Set-Cookie: sid=1415100691; Max-Age=600
User-Agent: HTTPie/3.2.1
X-Forwarded-For: 122.171.17.100
X-Forwarded-Host: nyc-canary-catalyst-0.livepeer.fun
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: dp2426
X-Real-Ip: 122.171.17.100
#EXTM3U
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="und-1",NAME="AAC-1",URI="1/index.m3u8?mTrack=0&iMsn=8&sessId=1415100691"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="und-2",NAME="AAC-2",URI="2/index.m3u8?mTrack=0&iMsn=8&sessId=1415100691"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="und-3",NAME="AAC-3",URI="3/index.m3u8?mTrack=0&iMsn=8&sessId=1415100691"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="und-4",NAME="AAC-4",URI="4/index.m3u8?mTrack=0&iMsn=8&sessId=1415100691"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="und-5",NAME="AAC-5",URI="5/index.m3u8?mTrack=0&iMsn=8&sessId=1415100691"
#EXT-X-STREAM-INF:AUDIO="aud",CODECS="avc1.640033,mp4a.40.2",RESOLUTION=3840x2160,FRAME-RATE=25,BANDWIDTH=16100417,AVERAGE-BANDWIDTH=13623430
0/index.m3u8?mTrack=0&iMsn=8&sessId=1415100691
#EXT-X-STREAM-INF:AUDIO="aud",CODECS="avc1.4d401f,mp4a.40.2",RESOLUTION=1280x720,FRAME-RATE=25,BANDWIDTH=4063238,AVERAGE-BANDWIDTH=3438125
6/index.m3u8?mTrack=0&iMsn=8&sessId=1415100691
#EXT-X-STREAM-INF:AUDIO="aud",CODECS="avc1.4d401e,mp4a.40.2",RESOLUTION=854x480,FRAME-RATE=25,BANDWIDTH=2249915,AVERAGE-BANDWIDTH=1903774
7/index.m3u8?mTrack=0&iMsn=8&sessId=1415100691
#EXT-X-STREAM-INF:AUDIO="aud",CODECS="avc1.4d401e,mp4a.40.2",RESOLUTION=640x360,FRAME-RATE=25,BANDWIDTH=1210518,AVERAGE-BANDWIDTH=1024285
8/index.m3u8?mTrack=0&iMsn=8&sessId=1415100691
#EXT-X-STREAM-INF:AUDIO="aud",CODECS="avc1.4d4015,mp4a.40.2",RESOLUTION=426x240,FRAME-RATE=25,BANDWIDTH=495695,AVERAGE-BANDWIDTH=419434
9/index.m3u8?mTrack=0&iMsn=8&sessId=1415100691
Confirmed that it is now working for me as well!
|
gharchive/issue
| 2022-08-09T19:00:29 |
2025-04-01T04:34:54.847104
|
{
"authors": [
"emranemran",
"hjpotter92",
"iameli",
"leszko",
"victorges"
],
"repo": "livepeer/catalyst",
"url": "https://github.com/livepeer/catalyst/issues/96",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1504065917
|
[bug] Lot of API requests when video is not even playing
Is there an existing issue for this?
[X] I have searched the existing issues
Package Version
2.0.0-next.13
Current Behavior
When many players are rendered in a list on a screen, there are a lot of API calls. 😞
https://www.veed.io/view/ceb3ecfc-ee52-4797-a8a7-503ce7ed81be?panel=share
Expected Behavior
We can fetch and auto-upload things when there is user interaction like play in the player.
Steps To Reproduce
No response
Link to Minimal Reproducible Example (CodeSandbox, StackBlitz, etc.)
No response
Anything else?
No response
cc @0xcadams
This should be fixed by https://github.com/livepeer/livepeer.js/issues/60
|
gharchive/issue
| 2022-12-20T06:44:58 |
2025-04-01T04:34:54.861242
|
{
"authors": [
"0xcadams",
"sasicodes"
],
"repo": "livepeer/livepeer.js",
"url": "https://github.com/livepeer/livepeer.js/issues/210",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
637786691
|
add orchestrator prices to table and campaign view
What does this pull request do? Explain your changes. (required)
This PR adds orchestrator pricing to the orchestrator table and campaign views using the buidl api. It also introduces pagination to the orchestrator table view for better performance (dynamically updating prices for 100 rows was sluggish and pagination was long overdue).
Notes:
Pagination is currently set to display 10 orchestrators per page. How do we feel about that? Do we want to increase to 20?
With a greater emphasis on price and performance now, any thoughts on sorting by total generated fees as opposed to total stake by default?
Does this pull request close any open issues?
Closes #740, #744
Changes look good 👍 perhaps let's rebase and force push already and that will also re-run CI so we can get a staging link and clean commit history in one go.
@kyriediculous rebased and all checks pass 👍 . I'll plan on squash + merging on Monday along with release notes for the "what's new" section.
|
gharchive/pull-request
| 2020-06-12T14:21:57 |
2025-04-01T04:34:54.864726
|
{
"authors": [
"adamsoffer",
"kyriediculous"
],
"repo": "livepeer/livepeerjs",
"url": "https://github.com/livepeer/livepeerjs/pull/749",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1724696482
|
Fix deeply nested comprehensions
Closes liveview-native/liveview-native-core#19
Closes liveview-native/liveview-native-core#18
why was this fixed in swift? This is a bug in core and fixing this in swift means we need to fix this bug in each client rather than once in core
@bcardarella Core does not handle this yet. @simlay is working on bringing it to core.
@carson-katri when issues are closed on Core from the Swift client, that doesn't allow us to track the bug to ensure it is being properly addressed.
We need to hold the line on not adding things to the Swift client as temporary fixes, and deal with the pain of waiting on upstream fixes in Core.
|
gharchive/pull-request
| 2023-05-24T20:34:41 |
2025-04-01T04:34:54.866838
|
{
"authors": [
"bcardarella",
"carson-katri"
],
"repo": "liveview-native/liveview-client-swiftui",
"url": "https://github.com/liveview-native/liveview-client-swiftui/pull/954",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1338129624
|
🛑 Movie is down
In 1fcb1ad, Movie (http://h.liyaodong.com:8096) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Movie is back up in ebc61f9.
|
gharchive/issue
| 2022-08-14T04:15:45 |
2025-04-01T04:34:54.898420
|
{
"authors": [
"liyaodong"
],
"repo": "liyaodong/uptime",
"url": "https://github.com/liyaodong/uptime/issues/200",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
899194294
|
training
Hello, is the overall training procedure the one described in the README you provided? I set up the environment following those steps, then installed the COCO dataset, and when I ran bash all.sh the error below appeared. What is causing this problem?
Using /home/lab409/anaconda3/envs/bcnet/lib/python3.7/site-packages/oauthlib-3.1.0-py3.7.egg
Finished processing dependencies for detectron2==0.1
tee: log/train_log_159.txt: No such file or directory
Command Line Args: Namespace(config_file='configs/fcos/fcos_imprv_R_50_FPN_1x.yaml', dist_url='tcp://127.0.0.1:50152', eval_only=False, machine_rank=0, num_gpus=2, num_machines=1, opts=[], resume=False)
Traceback (most recent call last):
File "tools/train_net.py", line 161, in
args=(args,),
File "/home/lab409/BCNet-main /BCNet-main/detectron2/engine/launch.py", line 48, in launch
daemon=False,
File "/home/lab409/anaconda3/envs/bcnet/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
while not spawn_context.join():
File "/home/lab409/anaconda3/envs/bcnet/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 118, in join
raise Exception(msg)
Exception:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/home/lab409/anaconda3/envs/bcnet/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/lab409/BCNet-main /BCNet-main/detectron2/engine/launch.py", line 71, in _distributed_worker
assert num_gpus_per_machine <= torch.cuda.device_count()
AssertionError
The number of required GPUs is larger than the number of GPUs actually available.
|
gharchive/issue
| 2021-05-24T02:54:00 |
2025-04-01T04:34:54.918507
|
{
"authors": [
"liujie1202",
"lkeab"
],
"repo": "lkeab/BCNet",
"url": "https://github.com/lkeab/BCNet/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1694418139
|
🛑 Goodmd total.good-md.com:5280 is down
In d0c0402, Goodmd total.good-md.com:5280 (https://total.good-md.com:5280) was down:
HTTP code: 503
Response time: 1303 ms
Resolved: Goodmd total.good-md.com:5280 is back up in ae64cad.
|
gharchive/issue
| 2023-05-03T16:33:21 |
2025-04-01T04:34:54.922161
|
{
"authors": [
"lksjames"
],
"repo": "lksjames/monitoring",
"url": "https://github.com/lksjames/monitoring/issues/732",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
587914004
|
Added title to charts in statistics view
Charts in the Statistics View were missing Title.
Before:
After:
I don't see the point of adding both the Title and the graphic Name.
Plus, the Title field has limitations today (doesn't support punctuation, doesn't support accented characters, ...), so it's not very practical.
In my opinion we should have only one field Title with a default value equal to the current graphic Name.
@llaske if we have only a title field with a default value equal to the current graphic name, then it will cause problems with translation. The graphic field supports translation but the title does not. If we add a title with the default value as the graphic name, then it won't be translated on language change.
What do you think?
@NikhilM98 good remark. But it's an issue only for graphics created by default. For graphics created by users, we could expect that the user uses their own language.
Maybe we could consider that if the user leaves the title blank, the default name will be used (and will be localized)?
we could expect that the user uses their own language
Yeah. I was thinking the same. It's expected that the user will use his own language.
Maybe we could consider that if the user leaves the title blank, the default name will be used (and will be localized)?
Sounds good
@llaske I have made the changes. Please have a look.
That's nice like that. Thanks.
|
gharchive/pull-request
| 2020-03-25T18:46:59 |
2025-04-01T04:34:54.927496
|
{
"authors": [
"NikhilM98",
"llaske"
],
"repo": "llaske/sugarizer-server",
"url": "https://github.com/llaske/sugarizer-server/pull/244",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1981716765
|
For the oneOf type, can the description be displayed?
Hoping for the effect shown in the screenshot.
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Postgres Source Spec",
"type": "object",
"required": ["host", "port", "database", "username"],
"properties": {
"host": {
"title": "Host",
"description": "Hostname of the database.",
"type": "string",
"order": 0,
"group": "db"
},
"port": {
"title": "Port",
"description": "Port of the database.",
"type": "integer",
"minimum": 0,
"maximum": 65536,
"default": 5432,
"examples": ["5432"],
"order": 1,
"group": "db"
},
"database": {
"title": "Database Name",
"description": "Name of the database.",
"type": "string",
"order": 2,
"group": "db"
},
"schemas": {
"title": "Schemas",
"description": "The list of schemas (case sensitive) to sync from. Defaults to public.",
"type": "array",
"items": {
"type": "string"
},
"minItems": 0,
"uniqueItems": true,
"default": ["public"],
"order": 3,
"group": "db"
},
"username": {
"title": "Username",
"description": "Username to access the database.",
"type": "string",
"order": 4,
"group": "auth"
},
"password": {
"title": "Password",
"description": "Password associated with the username.",
"type": "string",
"airbyte_secret": true,
"order": 5,
"group": "auth",
"always_show": true
},
"jdbc_url_params": {
"description": "Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (Eg. key1=value1&key2=value2&key3=value3). For more information read about <a href="https://jdbc.postgresql.org/documentation/head/connect.html\">JDBC URL parameters.",
"title": "JDBC URL Parameters (Advanced)",
"type": "string",
"order": 6,
"group": "advanced",
"pattern_descriptor": "key1=value1&key2=value2"
},
"ssl_mode": {
"title": "SSL Modes",
"description": "SSL connection modes. \n Read more <a href="https://jdbc.postgresql.org/documentation/head/ssl-client.html\"> in the docs.",
"type": "object",
"order": 8,
"group": "security",
"oneOf": [
{
"title": "disable",
"additionalProperties": true,
"description": "Disables encryption of communication between Airbyte and source database.",
"required": ["mode"],
"properties": {
"mode": {
"type": "string",
"const": "disable",
"default": "disable",
"ui:hidden": true,
"order": 0
}
}
},
{
"title": "allow",
"additionalProperties": true,
"description": "Enables encryption only when required by the source database.",
"required": ["mode"],
"properties": {
"mode": {
"type": "string",
"const": "allow",
"default": "allow",
"ui:hidden": true,
"order": 0
}
}
},
{
"title": "prefer",
"additionalProperties": true,
"description": "Allows unencrypted connection only if the source database does not support encryption.",
"required": ["mode"],
"properties": {
"mode": {
"type": "string",
"const": "prefer",
"default": "prefer",
"ui:hidden": true,
"order": 0
}
}
},
{
"title": "require",
"additionalProperties": true,
"description": "Always require encryption. If the source database server does not support encryption, connection will fail.",
"required": ["mode"],
"properties": {
"mode": {
"type": "string",
"const": "require",
"default": "require",
"ui:hidden": true,
"order": 0
}
}
},
{
"title": "verify-ca",
"additionalProperties": true,
"description": "Always require encryption and verifies that the source database server has a valid SSL certificate.",
"required": ["mode", "ca_certificate"],
"properties": {
"mode": {
"type": "string",
"const": "verify-ca",
"default": "verify-ca",
"ui:hidden": true,
"order": 0
},
"ca_certificate": {
"type": "string",
"title": "CA Certificate",
"description": "CA certificate",
"airbyte_secret": true,
"multiline": true,
"order": 1
},
"client_certificate": {
"type": "string",
"title": "Client Certificate",
"description": "Client certificate",
"airbyte_secret": true,
"multiline": true,
"order": 2,
"always_show": true
},
"client_key": {
"type": "string",
"title": "Client Key",
"description": "Client key",
"airbyte_secret": true,
"multiline": true,
"order": 3,
"always_show": true
},
"client_key_password": {
"type": "string",
"title": "Client key password",
"description": "Password for keystorage. If you do not add it - the password will be generated automatically.",
"airbyte_secret": true,
"order": 4
}
}
},
{
"title": "verify-full",
"additionalProperties": true,
"description": "This is the most secure mode. Always require encryption and verifies the identity of the source database server.",
"required": ["mode", "ca_certificate"],
"properties": {
"mode": {
"type": "string",
"const": "verify-full",
"default": "verify-full",
"ui:hidden": false,
"order": 0
},
"ca_certificate": {
"type": "string",
"title": "CA Certificate",
"description": "CA certificate",
"airbyte_secret": true,
"multiline": true,
"order": 1
},
"client_certificate": {
"type": "string",
"title": "Client Certificate",
"description": "Client certificate",
"airbyte_secret": true,
"multiline": true,
"order": 2,
"always_show": true
},
"client_key": {
"type": "string",
"title": "Client Key",
"description": "Client key",
"airbyte_secret": true,
"multiline": true,
"order": 3,
"always_show": true
},
"client_key_password": {
"type": "string",
"title": "Client key password",
"description": "Password for keystorage. If you do not add it - the password will be generated automatically.",
"airbyte_secret": true,
"order": 4
}
}
}
]
},
"replication_method": {
"type": "object",
"title": "Update Method",
"description": "Configures how data is extracted from the database.",
"order": 9,
"group": "advanced",
"default": "CDC",
"display_type": "radio",
"oneOf": [
{
"title": "Read Changes using Write-Ahead Log (CDC)",
"description": "Recommended - Incrementally reads new inserts, updates, and deletes using the Postgres <a href="https://docs.airbyte.com/integrations/sources/postgres/#cdc\">write-ahead log (WAL). This needs to be configured on the source database itself. Recommended for tables of any size.",
"required": ["method", "replication_slot", "publication"],
"additionalProperties": true,
"properties": {
"method": {
"type": "string",
"const": "CDC",
"default": "CDC",
"ui:hidden": true,
"order": 1
},
"plugin": {
"type": "string",
"title": "Plugin",
"description": "A logical decoding plugin installed on the PostgreSQL server.",
"enum": ["pgoutput"],
"default": "pgoutput",
"order": 2
},
"replication_slot": {
"type": "string",
"title": "Replication Slot",
"description": "A plugin logical replication slot. Read about <a href="https://docs.airbyte.com/integrations/sources/postgres#step-3-create-replication-slot\">replication slots.",
"order": 3
},
"publication": {
"type": "string",
"title": "Publication",
"description": "A Postgres publication used for consuming changes. Read about <a href="https://docs.airbyte.com/integrations/sources/postgres#step-4-create-publications-and-replication-identities-for-tables\">publications and replication identities.",
"order": 4
},
"initial_waiting_seconds": {
"type": "integer",
"title": "Initial Waiting Time in Seconds (Advanced)",
"description": "The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about <a href="https://docs.airbyte.com/integrations/sources/postgres#step-5-optional-set-up-initial-waiting-time\">initial waiting time.",
"default": 300,
"order": 5,
"min": 120,
"max": 1200
},
"queue_size": {
"type": "integer",
"title": "Size of the queue (Advanced)",
"description": "The size of the internal queue. This may interfere with memory consumption and efficiency of the connector, please be careful.",
"default": 10000,
"order": 6,
"min": 1000,
"max": 10000
},
"lsn_commit_behaviour": {
"type": "string",
"title": "LSN commit behaviour",
"description": "Determines when Airbtye should flush the LSN of processed WAL logs in the source database. After loading Data in the destination is default. If While reading Data is selected, in case of a downstream failure (while loading data into the destination), next sync would result in a full sync.",
"enum": [
"While reading Data",
"After loading Data in the destination"
],
"default": "After loading Data in the destination",
"order": 7
}
}
},
{
"title": "Detect Changes with Xmin System Column",
"description": "Recommended - Incrementally reads new inserts and updates via Postgres <a href="https://docs.airbyte.com/integrations/sources/postgres/#xmin\">Xmin system column. Only recommended for tables up to 500GB.",
"required": ["method"],
"properties": {
"method": {
"type": "string",
"const": "Xmin",
"default": "Xmin",
"ui:hidden": true,
"order": 0
}
}
},
{
"title": "Scan Changes with User Defined Cursor",
"description": "Incrementally detects new inserts and updates using the <a href="https://docs.airbyte.com/understanding-airbyte/connections/incremental-append/#user-defined-cursor\">cursor column chosen when configuring a connection (e.g. created_at, updated_at).",
"required": ["method"],
"properties": {
"method": {
"type": "string",
"const": "Standard",
"default": "Standard",
"ui:hidden": true,
"order": 8
}
}
}
]
}
},
"groups": [
{
"id": "db"
},
{
"id": "auth"
},
{
"id": "security",
"title": "Security"
},
{
"id": "advanced",
"title": "Advanced"
}
]
}
1. For oneOf types, can the description be displayed?
2. If the value is const, can the input field be hidden by default? The current workaround is to set both "default": "CDC" and "ui:hidden": true.
Right now no description is displayed.
You can display the anyOf description by setting ui:showDescription to true.
See this documentation: ui:showDescription
This won't be handled, since in some scenarios the content rendered by the const element does need to be displayed, as a hint for example.
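A minimal sketch of applying that reply to the schema in this issue (assuming the ui-schema is keyed by the same property names — ssl_mode and replication_method here; the exact nesting may differ in your form setup):

// Sketch only: ssl_mode and replication_method are the oneOf fields from the
// schema pasted above; 'ui:showDescription' is the switch referenced in the
// reply above and in the library documentation.
const uiSchema = {
  ssl_mode: {
    'ui:showDescription': true, // render the selected oneOf branch's description
  },
  replication_method: {
    'ui:showDescription': true,
  },
};

export default uiSchema;

Passed to the form component together with the schema (for example through its ui-schema prop), this should surface the per-option descriptions without editing the schema itself.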
|
gharchive/issue
| 2023-11-07T16:08:44 |
2025-04-01T04:34:54.985423
|
{
"authors": [
"allendata0706",
"lljj-x"
],
"repo": "lljj-x/vue-json-schema-form",
"url": "https://github.com/lljj-x/vue-json-schema-form/issues/334",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1591739167
|
How to increase effect of text prompt?
Is there any way we can increase the relative weight of the SD text conditioning? The reason is that sometimes the additional condition is too strong and what we are trying to bring out using the text goes unfulfilled.
@lllyasviel
The attention layers in ControlNet also receive text. Technically it should be able to mix them appropriately if trained properly.
might be related:
[New Feature] Control Mode https://github.com/Mikubill/sd-webui-controlnet/issues/1011#issuecomment-1529035570
On "Balanced", "My prompt is more important" and "ControlNet is more important"
|
gharchive/issue
| 2023-02-20T12:10:05 |
2025-04-01T04:34:54.989292
|
{
"authors": [
"geroldmeisinger",
"lllyasviel",
"xiankgx"
],
"repo": "lllyasviel/ControlNet",
"url": "https://github.com/lllyasviel/ControlNet/issues/117",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2403710536
|
Upload Keyframes!
Thanks for the great work as always!
How about allowing users to upload their own keyframes?
I'm expecting to be able to create some pretty cool animations.
https://github.com/lllyasviel/Paints-UNDO/assets/8006000/0866a049-0230-4c74-9a32-4e5520451c41
https://github.com/lllyasviel/Paints-UNDO/assets/8006000/da323cb1-1087-488e-ae6c-ab6719220f56
https://github.com/toyxyz/Paints-UNDO
Can you tell us which file to replace? I really don't want to have 2 installations.
Can you tell us which file to replace? I really don't want to have 2 installations.
This one! https://github.com/toyxyz/Paints-UNDO/blob/main/gradio_app.py
Thank you
|
gharchive/issue
| 2024-07-11T17:37:08 |
2025-04-01T04:34:54.992940
|
{
"authors": [
"barepixels",
"toyxyz"
],
"repo": "lllyasviel/Paints-UNDO",
"url": "https://github.com/lllyasviel/Paints-UNDO/issues/50",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2329652956
|
Add FAQ page to docs
This PR adds a Frequently Asked Questions (FAQ) page to the docs.
@MacOS- thanks, Stefan - nice work! 👍
|
gharchive/pull-request
| 2024-06-02T13:15:25 |
2025-04-01T04:34:54.994241
|
{
"authors": [
"MacOS",
"doberst"
],
"repo": "llmware-ai/llmware",
"url": "https://github.com/llmware-ai/llmware/pull/823",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
818182252
|
Mention CPU and RAM requirements in README
I take it this docker image needs significantly more resources than a basic image like nginx or bind. But what are the requirements?
What server size should users pick? 1 core and 2 GB RAM seems to be the baseline when renting virtual servers.
Do requirements depend much on the number of concurrently connected users, or on game progress (e.g. the number and size of player-built buildings)?
I think it also depends on the speed. My server has an 8259U as CPU, and valheim-server uses a constant 60% of one core without any players on it.
2GB is also too little; mine is using 2.1GB (again without any players), and you need spare memory for the system.
My volume (/var/lib/docker/volumes/valheim-data) is actually 4GB. I think /dl is not used anymore:
1,0 GiB [##########] /dl
1,0 GiB [######### ] /plus
1,0 GiB [######### ] /server
905,1 MiB [######## ] /valheim_server_Data
So at least it should be:
CPU: 2 cores, at least 2.5 GHz
Memory: 4 GB
Disk: 4 GB
Thank you for the recommendation. Will add them.
|
gharchive/issue
| 2021-02-28T11:17:09 |
2025-04-01T04:34:54.997904
|
{
"authors": [
"Xav-Pe",
"lloesche",
"schildbach"
],
"repo": "lloesche/valheim-server-docker",
"url": "https://github.com/lloesche/valheim-server-docker/issues/123",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
695620263
|
ssh / shell based project file path
The project controller assumes that the "project name" is the same as the SSH repo name, which becomes a folder under .tmp when the project is initialized.
fixed in version 3.0, just merged and pushed to pypi
|
gharchive/issue
| 2020-09-08T07:17:40 |
2025-04-01T04:34:54.999195
|
{
"authors": [
"russlooker"
],
"repo": "llooker/pylookml",
"url": "https://github.com/llooker/pylookml/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|