id | text | source | created | added | metadata
---|---|---|---|---|---
143306446 | Jakucera/config
Create a config module to store the base_url constant.
:+1:
| gharchive/pull-request | 2016-03-24T17:15:17 | 2025-04-01T06:37:40.903906 | {
"authors": [
"jakucera",
"jdalton"
],
"repo": "WebAppsAndFrameworks/ng-app",
"url": "https://github.com/WebAppsAndFrameworks/ng-app/pull/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
587617365 | Entering a state does not construct the state class
Using the debugger I noticed that the method $enter is called while the constructor is never called.
My goal is to add the event listener before entering a state but only in the context of this state.
Hi @MattiaPontonioKineton
That's correct, the classes that describe a component's states are not instantiated.
You should think of states as code that modifies or sets the behaviour of a component. States shouldn't have their own properties but rather modify the component's (that's why you have full access to the component's context by default). Of course, there are reasonable cases where it would make sense for a state to encapsulate some data of its own, but Lightning's state machine implementation goes in a different direction.
So, addressing your issue, my suggestion is to use the component's constructor to initialise your state, or the state's $enter method if it should be delegated.
Best regards
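For illustration, a minimal sketch of that suggestion (hypothetical component and handler names; only $enter/$exit and the static _states() mechanism come from this thread, the rest is assumed Lightning boilerplate):

// Sketch: state classes are never instantiated, so keep the data on the
// component itself and let the state only toggle behaviour.
class Player extends lng.Component {
  _construct() {
    // component-owned property, initialised once in the component lifecycle
    this._keyHandler = null;
  }
  static _states() {
    return [
      class Playing extends this {
        $enter() {
          // delegated setup: runs each time this state is entered
          this._keyHandler = (event) => this._handleKey(event);
        }
        $exit() {
          this._keyHandler = null;
        }
      },
    ];
  }
}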
@g-zachar,
thanks for the clarification.
As a side note, I want to point out that this causes an inconsistency in VS Code's type checking. It may turn the use of the class keyword into an error.
It's a sort of downgrade of the JavaScript feature set.
Anyway, if it leads to better code it should be followed.
| gharchive/issue | 2020-03-25T11:08:07 | 2025-04-01T06:37:41.243970 | {
"authors": [
"MattiaPontonioKineton",
"g-zachar",
"mattiapontonio"
],
"repo": "WebPlatformForEmbedded/Lightning",
"url": "https://github.com/WebPlatformForEmbedded/Lightning/issues/139",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2023007141 | Creating a script element results in an unexpected opening tag.
I found an issue when creating a script element.
The script element's toString() results in '\x3Cscript />' instead of <script />.
And creating a nested element that contains a script element will send an unexpected result to the client, i.e.
<script type="module" src="index.js"></head><body class="antialiased" /></html></script>
Kindly need you to review.
Thanks & regards
self closing tags don't exist in HTML
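As an illustration of that point, a minimal sketch using standard DOM calls (which linkedom mirrors); the serialization shown is the browser's:

// "<script />" is not valid HTML: parsers treat it as an opening tag only,
// so a serializer must always emit an explicit closing tag for script.
const script = document.createElement('script');
script.setAttribute('type', 'module');
script.setAttribute('src', 'index.js');
document.head.appendChild(script);
// document.head.innerHTML -> '<script type="module" src="index.js"></script>'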
| gharchive/issue | 2023-12-04T04:35:52 | 2025-04-01T06:37:41.247002 | {
"authors": [
"Srabutdotcom",
"WebReflection"
],
"repo": "WebReflection/linkedom",
"url": "https://github.com/WebReflection/linkedom/issues/251",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
1787204513 | [OTHER] Shouldn't the teams page be team?
What would you like to share?
This is not a major issue at all; however, I think it would be more consistent and better to name the page "team", as we're referring to the whole WebXDAO team, not teams (even though we have multiple teams or sub-teams, I don't think that counts).
In case it's going to be updated, a redirect should be added, IMO.
What do you think about this? Thanks. 🙂
Additional information
No response
Checklist
[X] I have read the Contributing Guidelines
[X] I have checked the existing issues
[ ] I am willing to work on this issue (optional)
[ ] I am a GSSoC'23 contributor
Well spotted!
Thanks! Working on this. 🙂
| gharchive/issue | 2023-07-04T05:58:18 | 2025-04-01T06:37:41.251026 | {
"authors": [
"Panquesito7",
"mkubdev"
],
"repo": "WebXDAO/WebXDAO.github.io",
"url": "https://github.com/WebXDAO/WebXDAO.github.io/issues/471",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
201816903 | www - search
I'll also drop descriptions of some of the "more complex" functions of the website into the individual issues here for you. There aren't that many of them :)
A search today can end up in three different places:
1. The user searches for an arbitrary phrase that has more than one result, e.g. http://webarchiv.cz/cs/zdroj/noviny
They are shown a list of publicly available sources with the option to switch between visual and text display. The search thus runs against the Seeder database, over sources that have a contract.
2. The user searches for an arbitrary phrase that has exactly one result, e.g. http://webarchiv.cz/cs/zdroj/essentia
The user is shown a sort of landing page for the source; compared to the previous search it contains more information. We use it for sharing sources, see #310.
3. The user enters a URL. They are redirected straight into the Wayback, so the display step on our pages is skipped and the results are shown directly in the Wayback. E.g. try searching for nkp.cz.
Implement something like https://github.com/WebarchivCZ/WWW/blob/master/app/presenters/BasePresenter.php#L100
| gharchive/issue | 2017-01-19T10:16:57 | 2025-04-01T06:37:41.255211 | {
"authors": [
"Visgean",
"kvasnicaj"
],
"repo": "WebarchivCZ/Seeder",
"url": "https://github.com/WebarchivCZ/Seeder/issues/311",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
863994942 | http?
https://github.com/Webhose/webhoseio-python/blob/3c5ff4616a79b44c61df4a5b0191f9212666c915/webhoseio/__init__.py#L17
Shouldn't this use https?
@MaxwellRebo was thinking the exact same thing
https://github.com/Webhose/webhoseio-python/pull/5
| gharchive/issue | 2021-04-21T15:10:24 | 2025-04-01T06:37:41.256833 | {
"authors": [
"MaxwellRebo",
"NescobarAlopLop"
],
"repo": "Webhose/webhoseio-python",
"url": "https://github.com/Webhose/webhoseio-python/issues/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
721930500 | Question about dependencies between tasks
Regarding the relationships between tasks: say task a has child tasks b and c, and b and c share the same child task d. When task a is started, do the downstream child tasks have to wait for the upstream ones to finish before continuing, or do they all run concurrently once triggered?
Tasks b and c run at the same time; after b succeeds, task d runs, and after c succeeds, task d runs again.
If I want task d to run only after all of the upstream tasks have finished, how should I do that?
You could consider introducing a DAG for this part.
Got it.
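For illustration, a tiny sketch of the "wait for all parents" semantics a DAG gives you (hypothetical scheduler, not datax-web code):

# Each task runs only once all of its parents are done, so d fires a single
# time after BOTH b and c finish, instead of once per upstream trigger.
deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
done = set()

def run(task):
    print("running", task)
    done.add(task)
    for t, parents in deps.items():
        if t not in done and parents <= done:
            run(t)

run("a")  # prints a, b, c, d exactly once each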
| gharchive/issue | 2020-10-15T02:56:36 | 2025-04-01T06:37:41.315902 | {
"authors": [
"LxqGit-blip",
"WeiYe-Jing"
],
"repo": "WeiYe-Jing/datax-web",
"url": "https://github.com/WeiYe-Jing/datax-web/issues/351",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1571393888 | Redis Library changed from Jedis to Lettuce + Reload with plugin managers
Lettuce is a Redis library with better performance than Jedis. Lettuce also has a built-in system for async commands that is very useful. I use a pool of n connections, so you know how many connections you are using and it's easy to debug.
In the plugin main I added a filter for already present permissions so the plugin can still load and skip those permissions. This is related to the reload process.
Thanks also to @Emibergo02 for helping me with the redis classes.
Tested in production and everything is working.
I added a commit with the requested fixes.
I'm still kind of thinking the Lettuce stuff would be better moved into its own library before merging
Any news on this @WiIIiam278?
Closing as stale (since this targets a previous major version) -- feel free to rebase and PR again on the latest with feedback in mind :)
| gharchive/pull-request | 2023-02-05T11:32:41 | 2025-04-01T06:37:41.423361 | {
"authors": [
"WiIIiam278",
"alexdev03",
"iVillager"
],
"repo": "WiIIiam278/HuskHomes2",
"url": "https://github.com/WiIIiam278/HuskHomes2/pull/308",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1599332938 | Protocolize packet seemingly failed to decode
Encountered this error:
[23:04:30 ERROR] [Protocolize]: === EXCEPTION CAUGHT IN DECODER ===
[23:04:30 ERROR] [Protocolize]: Protocolize 2.2.5:598
[23:04:30 ERROR] [Protocolize]: Stream Direction: UPSTREAM
[23:04:30 ERROR] [Protocolize]: InboundConnection: [connected player] Unnm3d (/31.156.170.216:60227), ServerConnection: null
[23:04:30 ERROR] [Protocolize]: Protocol version: 1.19.3
[23:04:30 ERROR]: io.netty.handler.codec.EncoderException: java.lang.NullPointerException: Cannot invoke "java.util.List.size()" because "this.entities" is null
netty errors blah blah blah
[23:04:30 ERROR]: Caused by: java.lang.NullPointerException: Cannot invoke "java.util.List.size()" because "this.entities" is null
[23:04:30 ERROR]: at net.william278.velocitab.packet.UpdateTeamsPacket.write(UpdateTeamsPacket.java:117)
[23:04:30 ERROR]: at dev.simplix.protocolize.velocity.packet.VelocityProtocolizePacket.encode(VelocityProtocolizePacket.java:66)
[23:04:30 ERROR]: at com.velocitypowered.proxy.protocol.netty.MinecraftEncoder.encode(MinecraftEncoder.java:54)
[23:04:30 ERROR]: at com.velocitypowered.proxy.protocol.netty.MinecraftEncoder.encode(MinecraftEncoder.java:32)
[23:04:30 ERROR]: at io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:107)
[23:04:30 ERROR]: ... 60 more
[23:04:30 ERROR]: [server connection] Unnm3d -> survival-spawn: exception encountered in com.velocitypowered.proxy.connection.backend.BackendPlaySessionHandler@7c8fbec5
com.velocitypowered.proxy.util.except.QuietRuntimeException: A packet did not decode successfully (invalid data). If you are a developer, launch Velocity with -Dvelocity.packet-decode-logging=true to see more.
Tomorrow I'll try to find the cause. If you want a tester, here I am.
@Emibergo02 Did you test this on the latest commit? Further, were you playing on a 1.19.3 client? What velocity version were you running?
Duly note Protocolize 2.2.5 is required (you must build Protocolize@master at the moment).
yep. tested latest commit now. 1.19.3 client. latest protocolize 2.2.5. running Velocity-NoChatSigning latest commit
[12:22:07 ERROR] [Protocolize]: === EXCEPTION CAUGHT IN DECODER ===
[12:22:07 ERROR] [Protocolize]: Protocolize 2.2.5:599
[12:22:07 ERROR] [Protocolize]: Stream Direction: DOWNSTREAM
[12:22:07 ERROR] [Protocolize]: InboundConnection: null, ServerConnection: [server connection] Unnm3d -> survival-spawn
[12:22:07 ERROR] [Protocolize]: Protocol version: 1.19.3
[12:22:07 ERROR]: io.netty.handler.codec.CorruptedFrameException: Error decoding class dev.simplix.protocolize.velocity.packets.GeneratedUpdateTeamsPacketWrapper Direction CLIENTBOUND Protocol 1.19.3 State PLAY ID 56
[12:22:07 ERROR]: at com.velocitypowered.proxy.protocol.netty.MinecraftDecoder.handleDecodeFailure(MinecraftDecoder.java:131)
[12:22:07 ERROR]: at com.velocitypowered.proxy.protocol.netty.MinecraftDecoder.tryDecode(MinecraftDecoder.java:86)
[12:22:07 ERROR]: at com.velocitypowered.proxy.protocol.netty.MinecraftDecoder.channelRead(MinecraftDecoder.java:61)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
[12:22:07 ERROR]: at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
[12:22:07 ERROR]: at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
[12:22:07 ERROR]: at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
[12:22:07 ERROR]: at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
[12:22:07 ERROR]: at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:333)
[12:22:07 ERROR]: at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:454)
[12:22:07 ERROR]: at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
[12:22:07 ERROR]: at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
[12:22:07 ERROR]: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
[12:22:07 ERROR]: at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
[12:22:07 ERROR]: at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)
[12:22:07 ERROR]: at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:499)
[12:22:07 ERROR]: at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
[12:22:07 ERROR]: at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
[12:22:07 ERROR]: at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
[12:22:07 ERROR]: at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
[12:22:07 ERROR]: at java.base/java.lang.Thread.run(Thread.java:833)
[12:22:07 ERROR]: Caused by: io.netty.handler.codec.CorruptedFrameException: Protocolize is unable to read packet net.william278.velocitab.packet.UpdateTeamsPacket at protocol version 1.19.3 in direction CLIENTBOUND
[12:22:07 ERROR]: at dev.simplix.protocolize.velocity.packet.VelocityProtocolizePacket.decode(VelocityProtocolizePacket.java:58)
[12:22:07 ERROR]: at com.velocitypowered.proxy.protocol.netty.MinecraftDecoder.tryDecode(MinecraftDecoder.java:84)
[12:22:07 ERROR]: ... 34 more
[12:22:07 ERROR]: Caused by: java.lang.NullPointerException: Cannot invoke "java.util.List.add(Object)" because "this.entities" is null
[12:22:07 ERROR]: at net.william278.velocitab.packet.UpdateTeamsPacket.read(UpdateTeamsPacket.java:95)
[12:22:07 ERROR]: at dev.simplix.protocolize.velocity.packet.VelocityProtocolizePacket.decode(VelocityProtocolizePacket.java:49)
[12:22:07 ERROR]: ... 35 more
This is with -Dvelocity.packet-decode-logging=true flag on Velocity
@Emibergo02 Can you try again with the latest commit?
Of course. All working as expected. I'm closing this issue
| gharchive/issue | 2023-02-24T22:08:01 | 2025-04-01T06:37:41.428520 | {
"authors": [
"Emibergo02",
"WiIIiam278"
],
"repo": "WiIIiam278/Velocitab",
"url": "https://github.com/WiIIiam278/Velocitab/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2270983055 | I'm having trouble following the docs
In addition to the code examples, include real-world usage scenarios to illustrate how the library can be applied in concrete projects. This would help users more easily picture applying the library in their own projects.
The documentation will be updated as the project progresses.
| gharchive/issue | 2024-04-30T09:33:29 | 2025-04-01T06:37:41.479764 | {
"authors": [
"Freddyede",
"hamed-cell"
],
"repo": "WildCodeSchool-CDA-LYON-02-2024/P2-React-Markdown",
"url": "https://github.com/WildCodeSchool-CDA-LYON-02-2024/P2-React-Markdown/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
157978024 | Always regenerate variables for jdk name and url
This allows us to use this role to install multiple versions of java.
I use this role to provision CI systems that require multiple versions of Java. Attempting to install two different versions of Java resulted in URLs that don't exist, like:
http://download.oracle.com/otn-pub/java/jdk/8u51-b16/jdk-7u80-linux-x64.tar.gz
Which obviously gets some versions mixed up.
There might be a better way to solve this, but this fixes the problem for me.
Maybe I still need the generic part, I will investigate.
Hi @William-Yeh, it looks like the failed build on travis-ci didn't even run. Can you try re-running it? Aside from that what else do you need to get this merged?
Bump
closing due to lack of response.
| gharchive/pull-request | 2016-06-01T18:40:14 | 2025-04-01T06:37:41.693251 | {
"authors": [
"Dirrk",
"trevorriles"
],
"repo": "William-Yeh/ansible-oracle-java",
"url": "https://github.com/William-Yeh/ansible-oracle-java/pull/30",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
103635245 | Test tox flush arg seems to be ignored
It seems to assume that pynt supports bools from the command line but it does not.
Really, the feature should just be eliminated as it's simply an alias. That's unlike the other commands, which generally allow people to work without having to care whether they activated the venv, cd'd into a lower-level project dir, or both.
| gharchive/issue | 2015-08-28T02:37:56 | 2025-04-01T06:37:41.700423 | {
"authors": [
"ivanvenosdel"
],
"repo": "WimpyAnalytics/pynt-of-django",
"url": "https://github.com/WimpyAnalytics/pynt-of-django/issues/16",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
62419272 | UTF-8 as default codepage
I recently had trouble when echoing strings containing non-ASCII chars to stdout/stderr on Windows systems (over WinRM), especially umlauts (which occur in latin1) and other UTF-8 characters: they were always messed up in the WinRM response.
I found that the gem specifies codepage 437 by default when opening the shell (https://github.com/WinRb/WinRM/blob/ec9140063bc47f92b52f3a84bde1e4b0800ed911/lib/winrm/winrm_service.rb#L99), so only characters from this codepage (which contains only the ASCII set and some other chars) are displayed correctly.
When passing the option :codepage => 65001 (which corresponds to UTF-8 according to this index: https://msdn.microsoft.com/en-us/library/dd317756(VS.85).aspx) to open_shell, Windows successfully receives the UTF-8 chars.
But when getting the command output, I get the error Encoding::CompatibilityError: incompatible encoding regexp match (UTF-8 regexp with ASCII-8BIT string). If I add force_encoding('utf-8') when getting the command output, the error is gone and the utf-8 chars are received correctly on the client (on both windows and linux clients).
In order to support the whole unicode character set in this Gem, this PR changes the default codepage for WinRM shell to utf-8.
Maybe this change is too radical by introducing a new default and will break dependent gems or programs...
But I couldn't find another solution - adding force_encoding('utf-8') would break the response when using another codepage than utf-8 for the shell. Nevertheless, I think utf-8 as the default seems appropriate nowadays. The spec suite is passing on both Windows and Linux.
A less intrusive alternative (completely backward-compatible): introduce a utf-8 => true option that automatically sets the codepage to 65001 and forces the encoding to utf-8 on the output (but leaves the default & other codepages alone) - but I think that would be less elegant.
Spec is included and tested on windows 8.1 & ubuntu linux (ruby 1.9.3) against a windows 2012r2 server.
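For readers who land here before the new default ships, a minimal sketch of the workaround described above (pre-2.0 WinRM::WinRMService API; endpoint and credentials are hypothetical):

require 'winrm'

winrm = WinRM::WinRMService.new('http://host:5985/wsman', :plaintext,
                                :user => 'vagrant', :pass => 'vagrant')
# open the remote shell with the UTF-8 codepage instead of the 437 default
shell_id = winrm.open_shell(:codepage => 65001)
command_id = winrm.run_command(shell_id, 'echo äöü')
output = winrm.get_command_output(shell_id, command_id)
# the response bytes arrive tagged ASCII-8BIT; reinterpret them as UTF-8
text = output[:data].map { |d| d[:stdout] }.compact.join.force_encoding('UTF-8')
winrm.cleanup_command(shell_id, command_id)
winrm.close_shell(shell_id)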
Could this be merged?
@zenchild @pmorton Is there a reason you can think of where this would cause issues? It seems pretty sensible to default to UTF-8.
This may become a pressing issue. I'm currently trying to setup packer templates for Windows Nano server and its using 65001 and complains when using 437. This will be an issue with the GO package as well.
@mwrock Interesting about Nano, but that sounds like a good change. My main worry is backwards compatibility. I'm pretty sure it'll be a safe change, but I was hoping for some additional opinions since I may have missed something.
I just tested against nano, win2012R2, win 8.1 and win 2008R2 endpoints. All were successful. I don't think we need to go any further back than 2008R2. I feel pretty comfortable with this change. @tmm1: would you mind fixing up merge conflicts with master?
@sneal any objection to me fixing the merge conflict here and bumping to 1.3.5.dev?
Go for it! I think we've done our due diligence on this one.
cool. done. If you could push a new dev gem to ruby gems whenever you have a chance that would be awesome.
@mwrock Done
| gharchive/pull-request | 2015-03-17T14:54:14 | 2025-04-01T06:37:41.711700 | {
"authors": [
"MatthiasWinzeler",
"mwrock",
"sneal",
"tmm1"
],
"repo": "WinRb/WinRM",
"url": "https://github.com/WinRb/WinRM/pull/130",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2370375916 | Update jupyter_translate.py
Hello @WittmannF,
Nice library! I'm suggesting a few changes to the regex and a 2-step open scheme for JSON files to handle codecs.
If you agree, just merge it.
Cheers
Andre
Thanks @andrebelem! I'll take a look. I completely forgot about this project. Great to see it is still working. Are you interested in being one of the admins?
Hello Fernando. It's a very useful script!
You can reach me (in Portuguese!) at @.***
Cheers
Andre
| gharchive/pull-request | 2024-06-24T14:16:56 | 2025-04-01T06:37:41.739739 | {
"authors": [
"WittmannF",
"andrebelem"
],
"repo": "WittmannF/jupyter-translate",
"url": "https://github.com/WittmannF/jupyter-translate/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
294116720 | Beginnings of a more general Makefile
This is not ideal yet, but a little better. I guess we may have to bite the bullet and list and filter the src/ files to have a proper library dependency. Then we may want to properly (with soname, major, minor?) build a library, either dynamic or (it is small, after all) static.
I think I correctly fast-forwarded to have my two commits after yours.
Thanks. It builds a dynamic library on Mac, do you want to add a similar logic for Linux? (I'm happy to merge as-is, since it's already an improvement.)
I am a fan of many small increments so I'd merge now too.
Getting the reader working is more important for me :)
But we can build a simple dynamic library. It'll take me a few lines and we have to -fPIC and all that. Happy to do that, especially since we have the macOS side working already.
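For reference, a minimal sketch of what the Linux side could look like (file names and version numbers are hypothetical, not taken from this repo's Makefile):

# Hypothetical Linux shared-library rules: position-independent objects
# plus a versioned soname, as discussed above.
CC      ?= cc
CFLAGS  += -fPIC -O2
OBJS     = $(patsubst %.c,%.o,$(wildcard src/*.c))
SONAME   = librdata.so.0

librdata.so.0.1: $(OBJS)
	$(CC) -shared -Wl,-soname,$(SONAME) -o $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<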
| gharchive/pull-request | 2018-02-03T14:53:49 | 2025-04-01T06:37:41.742042 | {
"authors": [
"eddelbuettel",
"evanmiller"
],
"repo": "WizardMac/librdata",
"url": "https://github.com/WizardMac/librdata/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1335722165 | 🛑 Chatbot SILI is down
In 40418c6, Chatbot SILI (https://sili.wjghj.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Chatbot SILI is back up in 8ee2c84.
| gharchive/issue | 2022-08-11T09:48:38 | 2025-04-01T06:37:41.744522 | {
"authors": [
"Dragon-Fish"
],
"repo": "Wjghj-Project/status",
"url": "https://github.com/Wjghj-Project/status/issues/364",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
190865095 | Does portalocker support python 3.5.2 win?
Windows 7 x64, python 3.5.2
Code from the example ('somefile' exists and is writable):
file = open('somefile', 'r+')
portalocker.lock(file, portalocker.LOCK_EX)
crashed with message
Process finished with exit code -1073740777 (0xC0000417)
stack overflow/stack exhaustion
on line 33 in portalocker.py
msvcrt.locking(file_.fileno(), mode, -1)
But that is not all. Rewriting the code to
portalocker.lock(file, portalocker.LOCK_SH)
makes the script fail with the message
File "C:\Python\lib\site-packages\portalocker\portalocker.py", line 14, in lock
mode = msvcrt.LK_RLOCK
AttributeError: module 'msvcrt' has no attribute 'LK_RLOCK'
+1 for Win 10
+1 for python 3.6
Though it seemed like a simple AttributeError which could be fixed with a simple
getattr(msvcrt, 'LK_RLOCK', msvcrt.LK_RLCK)
it goes deeper for me.
Having applied the fix above I stumbled upon another issue: my python interpreter started failing.
The issue is:
msvcrt.locking(file_.fileno(), mode, -1) (https://github.com/WoLpH/portalocker/blob/develop/portalocker/portalocker.py#L33)
If I put a random number instead of -1 it starts working.
@keenondrums
msvcrt.locking(fd, mode, nbytes), where nbytes means "the locked region of the file extends from the current file position for nbytes bytes, and may continue beyond the end of the file".
We need to lock the whole file.
@vitidev I see. I just pointed out that if we need to lock the whole file, -1 is not the answer.
These lines:
savepos = file_.tell()
if savepos:
file_.seek(0)
try:
msvcrt.locking(file_.fileno(), mode, -1)
need to be replaced with something like this:
savepos = file_.tell()
file_.seek(0, os.SEEK_END)  # jump to the end to measure the file...
size = file_.tell()         # ...so we can lock exactly its current size
file_.seek(0)               # lock starting from the beginning of the file
try:
    msvcrt.locking(file_.fileno(), mode, size)
I see no indication from the documentation that -1 is a valid value for the third argument. The msvcrt.locking call causes Python to silently exit, which must be a Python bug, though.
After updating 3.5.2 to 3.5.3 and still getting the crash, I verified that 2.7 and 3.4 don't cause a crash then filed a bug: http://bugs.python.org/issue29392
The msvcrt module uses _locking which says "It is possible to lock bytes past end of file".
Perhaps one can simply lock as much as possible, 2147483647? Any larger value gives OverflowError: Python int too large to convert to C long.
@techtonik added -1, what do you think?
It looks like the bug was fixed so that's good :)
http://bugs.python.org/issue29392
Still... doesn't help much with the bug though. Not sure I can help much here guys (no windows) but I'll help with anything I can offer.
The second bug, "AttributeError: module 'msvcrt' has no attribute 'LK_RLOCK'", is not fixed in 1.1.0
Oops, Github automatically closed this. I've reopened.
The fix is on line 14: mode = msvcrt.LK_RLOCK -> mode = msvcrt.LK_RLCK
I did not find LK_RLOCK in any Python documentation, and I think it's a typo
Fixed on develop, I'm releasing a new version today :)
The new release works perfect for me on Python 2.x and 3.x on Windows, OS X and Linux
| gharchive/issue | 2016-11-21T23:23:46 | 2025-04-01T06:37:41.767399 | {
"authors": [
"RazerM",
"TWAC",
"WoLpH",
"keenondrums",
"vitidev"
],
"repo": "WoLpH/portalocker",
"url": "https://github.com/WoLpH/portalocker/issues/31",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
244928579 | GPG signatures for source validation
Hi,
you know what to do :)
I will try to get python progressbar2 into [community] of Arch Linux. Could you please tell me what is and isn't compatible with the old/other version? I want to simply replace the package instead of creating another one.
Also correct:
https://python-utils.readthedocs.io/en/latest/
Done, should be a fully signed release now :)
Let me know if you have any issues
| gharchive/issue | 2017-07-23T18:49:28 | 2025-04-01T06:37:41.769585 | {
"authors": [
"NicoHood",
"WoLpH"
],
"repo": "WoLpH/python-utils",
"url": "https://github.com/WoLpH/python-utils/issues/3",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
165874699 | Adds metric policy
Summary
Addresses #46
👍
| gharchive/pull-request | 2016-07-15T20:49:33 | 2025-04-01T06:37:41.795248 | {
"authors": [
"epintos",
"mdesanti"
],
"repo": "Wolox/codestats",
"url": "https://github.com/Wolox/codestats/pull/64",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2659490369 | Tabs (reusable)
Tab - Make a reusable component for the tab to allow navigating between sections.
View:
I would split this - one is a hero and one is a title.
The navigation (tabs) is also used on the event container (Upcoming events, Past events)
If the reusable component can already display different titles in the font size and colour we want then just use that, and rewrite this ticket to be the tabs
@stepsen89 I got a bit confused; what do you consider a hero? I thought it would be the first part that you see on the page, like an image, title... But in this case the hero you're referring to is the blue background, right? And I guess the hero section is the first part of the page that is more than one element.
| gharchive/issue | 2024-11-14T17:09:09 | 2025-04-01T06:37:41.802187 | {
"authors": [
"joanaBrit",
"stepsen89"
],
"repo": "Women-Coding-Community/wcc-frontend",
"url": "https://github.com/Women-Coding-Community/wcc-frontend/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
336002370 | Maya2GLTF PBR Material - Environment maps not being exported
Hello!
I am having an issue with the maya PBR material. While applying maps to my car body, I have applied the following texture:
...to the Environment fields outlined below:
I export the object out of maya, but unfortunately, when I import my object into the threejs editor my environment map is empty.
Am I using the environment properties incorrectly or is this a bug?
glTF 2.0 files do not specify the environment map. I have just put it in the Maya shader for previewing.
So it is not a bug, it is by design, because I cannot put this in a standard glTF file.
Typically you specify an environment in your render engine of choice, separate from the 3D models you are importing. It will be the same for all models.
That being said, as soon as Maya2glTF supports the KHR lights extension, environment maps will be exported as part of the lights, I guess. Obviously the render engines must also support this extension.
Thank you for the reply. I just did a test GLTF export from the ThreeJS editor to debug this issue and I see what you are talking about. Good to know! Feel free to close this issue :)
Thanks for reporting! Getting user feedback is important. Closing this now then :-)
| gharchive/issue | 2018-06-26T21:45:22 | 2025-04-01T06:37:41.807822 | {
"authors": [
"Chase-Reid",
"Ziriax"
],
"repo": "WonderMediaProductions/Maya2glTF",
"url": "https://github.com/WonderMediaProductions/Maya2glTF/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
165915042 | Add sniff for long conditional end comments.
From the handbook:
Furthermore, if you have a really long block, consider whether it can be broken into two or more shorter blocks or functions. If you consider such a long block unavoidable, please put a short comment at the end so people can tell at glance what that ending brace ends – typically this is appropriate for a logic block, longer than about 35 rows, but any code that’s not intuitively obvious can be commented.
Ref: https://make.wordpress.org/core/handbook/best-practices/coding-standards/php/#brace-style
This rule is so far not covered, but there is a sniff available for this upstream.
The finer details of a PR for this depend on the upstream PR https://github.com/squizlabs/PHP_CodeSniffer/pull/1074 and therefore can only be added once the upstream PR has been merged and the minimum PHPCS version required for WPCS is upped to match.
If it is desired to have this functionality available before that point in time, a copy of the - adjusted - upstream sniff could be used for the time being and altered to extend the upstream class at a later point in time.
Commit in feature branch which implements the functionality based on upstream: ~~https://github.com/WordPress-Coding-Standards/WordPress-Coding-Standards/commit/6774ab0f4f756cbaedee078713098443c7a13885~~ https://github.com/WordPress-Coding-Standards/WordPress-Coding-Standards/commit/c38a6c095d47229a64328b52efd1d216d39f1945
[Edit]: Travis will - of course - fail for this branch as long as the upstream PR has not been merged yet.
Personally, I can't stand these redundant comments, but the upstream patch makes sense to improve it.
Personally, I can't stand these redundant comments
True that, but it is in the handbook, which is why I suggest for WPCS to cover it. People can always turn it off for individual projects. And at least WP only suggests it for > 35 lines.
(which would make the condition a prime candidate for refactoring anyway)
I have found reasons not to be that harsh against clarity comments at the end of blocks, esp. on nested stuff where just seeing 3 closing blocks with a short comment allows a quick understanding of what you're seeing in complex situations.
Just that manually maintaining the comment content accuracy is annoying.
@lkraav The sniff actually contains a fixer, so that can be handled for you ;-)
FYI: looks like there's some movement upstream - the PR for this has been merged. Still, there isn't a released PHPCS version which contains it atm which we could set as a minimum version, so we'd still need to bridge this with an extended class for now or wait until it is contained in a released version and the WPCS minimum required PHPCS version has caught up.
Opinions ?
For the record, how would one disable this in their phpcs.xml file?
For the record, how would one disable this in their phpcs.xml file?
<exclude name="Squiz.Commenting.LongConditionClosingComment" />
And if a different line limit or end comment is preferred, you can overrule the settings by adding the following in phpcs.xml (with different values for the properties):
<rule ref="Squiz.Commenting.LongConditionClosingComment">
<properties>
<property name="lineLimit" value="35" />
<property name="commentFormat" value="// End %s()." />
</properties>
</rule>
| gharchive/issue | 2016-07-16T07:05:08 | 2025-04-01T06:37:41.850060 | {
"authors": [
"GaryJones",
"jrfnl",
"lkraav"
],
"repo": "WordPress-Coding-Standards/WordPress-Coding-Standards",
"url": "https://github.com/WordPress-Coding-Standards/WordPress-Coding-Standards/issues/606",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
20885773 | Flag bypassing of Settings API
The only way I can think of to automate checking of this is to look for instances of add_menu_page or add_submenu_page and their callback argument, and then to attempt to find that callback function and check whether it outputs any fields; if it does, warn that they should use the Settings API.
However, this seems to be better checked with PHPUnit. A unit test could be run which executes the admin page callback for each admin page and checks to see if the settings API is ever invoked during the execution of the function.
Not Using the Settings API #
Instead of handling the output of settings pages and storage yourself, use the WordPress Settings API as it handles a lot of the heavy lifting for you including added security.
Make sure to also validate and sanitize submitted values from users using the sanitize callback in the register_setting call.
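For concreteness, a minimal sketch of the pattern the handbook describes (the option group and name are hypothetical):

// Hypothetical example: let the Settings API store and sanitize the value
// instead of reading $_POST in the admin page callback yourself.
add_action( 'admin_init', function () {
    register_setting(
        'my_plugin_options',           // settings group (assumed name)
        'my_plugin_api_key',           // option name (assumed)
        array(
            'type'              => 'string',
            'sanitize_callback' => 'sanitize_text_field',
        )
    );
} );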
Is this meant to be for VIP only, or is it something everyone could benefit from?
I think checking that register_setting() is always called with sanitize_callback set in the $args would be a good addition for everyone, this is covered by #126.
Checking the callback functions passed to add_(sub)menu_page() is something which IMHO cannot easily be done in a reliable manner with PHPCS.
Closing as VIP issues are no longer relevant here.
| gharchive/issue | 2013-10-11T18:21:38 | 2025-04-01T06:37:41.853706 | {
"authors": [
"GaryJones",
"jrfnl",
"westonruter"
],
"repo": "WordPress-Coding-Standards/WordPress-Coding-Standards",
"url": "https://github.com/WordPress-Coding-Standards/WordPress-Coding-Standards/issues/91",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
915840652 | Run ci on main push only
Fixes #96
This PR restricts GitHub actions to run on push only to the main branch to make sure that the linting and testing CI actions are not run twice.
I had used 'master' branch by mistake, and had to rename the branch because of that. Renaming a PR branch closes that PR, apparently.
Signed-off-by: Olga Bulat obulat@gmail.com
FYI there is a $default-branch variable that can be used in GitHub actions, so that 'main' or 'master' doesn't have to be hardcoded.
Oh, that's great to know!
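For reference, a minimal sketch of the resulting trigger block ('main' hardcoded, as in this PR; the rest of the workflow is omitted):

# Pushes only trigger on the default branch, so a push to a PR branch
# runs the pull_request event once instead of both events.
on:
  push:
    branches:
      - main
  pull_request: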
| gharchive/pull-request | 2021-06-09T06:37:25 | 2025-04-01T06:37:42.050712 | {
"authors": [
"obulat",
"zackkrida"
],
"repo": "WordPress/openverse-catalog",
"url": "https://github.com/WordPress/openverse-catalog/pull/98",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2027130632 | Add catalog to dependencies to update by Renovate
Description
I noticed the Catalog was missing from these updates, but also why is it limited to development dependencies?
Checklist
[x] My pull request has a descriptive title (not a vague title like Update index.md).
[x] My pull request targets the default branch of the repository (main) or a parent feature branch.
[x] My commit messages follow best practices.
[x] My code follows the established code style of the repository.
[ ] I added or updated tests for the changes I made (if applicable).
[ ] I added or updated documentation (if applicable).
[ ] I tried running the project locally and verified that there are no visible errors.
[ ] I ran the DAG documentation generator (if applicable).
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
@AetherUnbound I'm refering to that group of dependencies, it includes all the python projects, not only the catalog so it looked strange to me.
https://github.com/WordPress/openverse/blob/9b44496703708454198cb80dc18a630f63ed64e6/.github/renovate.json#L67-L70
@dhruvkb I believe that is only for putting the label on the files that are under the catalog folder 🤔
| gharchive/pull-request | 2023-12-05T21:12:28 | 2025-04-01T06:37:42.056491 | {
"authors": [
"krysal"
],
"repo": "WordPress/openverse",
"url": "https://github.com/WordPress/openverse/pull/3465",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2506093230 | 🛑 Luvo is down
In fe1e31e, Luvo (https://luvo.care/health/check) was down:
HTTP code: 502
Response time: 660 ms
Resolved: Luvo is back up in 5945288 after 12 minutes.
| gharchive/issue | 2024-09-04T19:11:58 | 2025-04-01T06:37:42.246939 | {
"authors": [
"jminiat-wca"
],
"repo": "Wound-Care-Advantage/uptime",
"url": "https://github.com/Wound-Care-Advantage/uptime/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
423752806 | undefined StructType class
Hi,
when we generate php classes for our "roc.wsdl" web service, we end up with a method
\ServiceType\Operation::setSoapHeaderMessage(\StructType\Max2048TextType $message, $nameSpace = 'urn:amc:ci', $mustUnderstand = false, $actor = null) that uses an unknown \StructType\Max2048TextType type for its "$message" parameter.
All our .xsd and .wsdl files can be found at: https://drive.google.com/drive/folders/1m-OK8vhcMVsWKa_pyyesQK81rtTyc-PV
Note that after retrieving these .xsd and .wsdl files, you must first replace all "__LOCAL_PATH__" occurrences by the new local path of those files.
Thank you for your help :)
Thierry.
Is it possible for you to use the feature/issue-185 branch with:
php /var/www/console g:p \
--urlorpath=roc.wsdl \
--destination=./ \
--composer-name=roc/soap \
--force
in order to validate the fix before merging it :)
Thanks
PS: imports use relative paths; you don't need the full path in your XSD, just put the file name if it is in the same directory as the WSDL
Hi @mikaelcom,
thank you very much. I confirm that your patch is working great, the issue is fixed for me ;)
Thierry.
| gharchive/issue | 2019-03-21T14:18:05 | 2025-04-01T06:37:42.269002 | {
"authors": [
"mikaelcom",
"tbl0605"
],
"repo": "WsdlToPhp/PackageGenerator",
"url": "https://github.com/WsdlToPhp/PackageGenerator/issues/185",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
172847539 | Food Crafting
food crafting for pills can be done with 1oz or the full 160oz
Currently there is no easy way to handle NBT data in Forge recipes; it will take some work.
| gharchive/issue | 2016-08-24T01:55:24 | 2025-04-01T06:37:42.274473 | {
"authors": [
"Servovicis",
"Wurmatron"
],
"repo": "Wurmcraft/WurmTweaks",
"url": "https://github.com/Wurmcraft/WurmTweaks/issues/98",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1726101918 | Support other audio types
this should support other common types such as mp4. Ideally this would also support audio types coming from WhatsApp
48fd65b addresses this
| gharchive/issue | 2023-05-25T16:03:09 | 2025-04-01T06:37:42.296034 | {
"authors": [
"Wyrine"
],
"repo": "Wyrine/mp3-cutting",
"url": "https://github.com/Wyrine/mp3-cutting/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1613060752 | buttons (heart and info)
Take a look at the buttons; for me they were oval (egg-shaped). I don't think they're supposed to be like that?
Solved by giving the buttons a standard height and width.
| gharchive/issue | 2023-03-07T09:36:57 | 2025-04-01T06:37:42.296858 | {
"authors": [
"WyroneBlue",
"laibaaac"
],
"repo": "WyroneBlue/rijksmuseum-gallery-app",
"url": "https://github.com/WyroneBlue/rijksmuseum-gallery-app/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
229329291 | TICK-358 longitudinal features lagger
Add a preprocessor to lag exposure features matrices
@MaryanMorel The two commits should have been squashed
https://twitter.com/jamesfublo/status/402407321265274881?ref_src=twsrc^tfw&ref_url=http%3A%2F%2Fjamescooke.info%2Fgit-to-squash-or-not-to-squash.html
Also each commit should contain the task name it is related to :)
I know, my bad… I saw that when it was too late, and I didn’t want to mess with the master to make it right :(
| gharchive/pull-request | 2017-05-17T12:04:09 | 2025-04-01T06:37:42.303364 | {
"authors": [
"MaryanMorel",
"Mbompr"
],
"repo": "X-DataInitiative/tick",
"url": "https://github.com/X-DataInitiative/tick/pull/19",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2645459312 | Remove unused configs
This will change the lineage, but it is fine.
Pull Request Test Coverage Report for Build 11752164398
Details
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage decreased (-0.02%) to 89.763%
Totals:
Change from base Build 11277366952: -0.02%
Covered Lines: 1210
Relevant Lines: 1348
💛 - Coveralls
| gharchive/pull-request | 2024-11-09T01:36:44 | 2025-04-01T06:37:42.327336 | {
"authors": [
"coveralls",
"dachengx"
],
"repo": "XENONnT/axidence",
"url": "https://github.com/XENONnT/axidence/pull/93",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
203776106 | Should this be put on the App Store?
Extensions for Xcode are still allowed with apps from the App Store and can be enabled in the System Settings.
Would this not be a viable route for XVim?
P.S. For reference see Swiftify
If possible, I think it's a great idea.
See https://github.com/XVimProject/XVim/issues/964 on why this is not possible.
| gharchive/issue | 2017-01-28T01:02:23 | 2025-04-01T06:37:42.354736 | {
"authors": [
"ChrisBuchholz",
"Gminfly",
"keith"
],
"repo": "XVimProject/XVim",
"url": "https://github.com/XVimProject/XVim/issues/1041",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
375623410 | Solved
Sorry for opening this issue. Wrong repo... Issue can get removed.
haha, whoops
| gharchive/issue | 2018-10-30T18:05:57 | 2025-04-01T06:37:42.404765 | {
"authors": [
"ChaseFlorell",
"JFMG"
],
"repo": "XamFormsExtended/Xfx.Controls",
"url": "https://github.com/XamFormsExtended/Xfx.Controls/issues/75",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1871758211 | Comparison Report: Shorten help text that is always visible
The section help texts are sometimes quite lengthy. Experienced users won't need that description any longer and might be annoyed by always having to scroll past the text. What about making the larger part of the text appear on click only, as already done in the load test report?
PO: Agree.
Did the same for trend report to keep appearance similar for all test types.
| gharchive/issue | 2023-08-29T14:08:15 | 2025-04-01T06:37:42.431720 | {
"authors": [
"jowerner",
"js-xc",
"rschwietzke"
],
"repo": "Xceptance/XLT",
"url": "https://github.com/Xceptance/XLT/issues/416",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
734037950 | WIP: add Esperanto translation of the Code of Conduct
Filing as draft, as I'd like the translation to be proofread by more well-versed Esperanto speakers than I am.
:green_heart: EO: aldonu Esperantan tradukon de la Kondutkodekso
Aldonante kiel malneton, ĉar mi volas, ke Esperantistoj pli spertaj ol mi provlegu la tradukon.
Thank you! My Esperanto is very bad, but I will ask my Twitter followers. Thanks for translating!
| gharchive/pull-request | 2020-11-01T19:03:10 | 2025-04-01T06:37:42.433118 | {
"authors": [
"Xe",
"das-g"
],
"repo": "Xe/creators-code",
"url": "https://github.com/Xe/creators-code/pull/8",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1213472596 | ActiveMerchantGrid UI improvements
Added horizontal separators between each merchant. Also added auto-sort on active merchants so the highest-voted merchant is always on top; merchants with negative votes are greyed out depending on the number of votes (#37).
Zones are now clickable and open the image in a new tab (#40), Only clickable in merchants table, not in UpdateMerchants page to avoid confusion.
A star icon now appears before notified cards and rapports (#43).
So just to be clear how this works, it greys out only the 2nd/3rd/etc row?
Looks good though, other than the extra comment I marked above. I'm assuming it can be removed since the sorting still appears to be working.
Related I should really add some dummy test images for the fake zones...
Yes, it only greys out additional merchants. I found it looks really odd if the first merchant is greyed out as well. On that note: I chose not to completely hide/remove merchants over the downvote threshold, since it might be confusing to see merchants disappear (and users can't suggest them again; they will be blocked in MerchantHub), and I didn't want to add logic to completely remove a merchant server-side.
Oh yeah, the marked line can be removed, the actual sorting happens in the view, not the list. That's from testing, I forgot to remove that. Shame on me...
| gharchive/pull-request | 2022-04-23T22:54:57 | 2025-04-01T06:37:42.436034 | {
"authors": [
"Cyco0815",
"Xeio"
],
"repo": "Xeio/WanderLost",
"url": "https://github.com/Xeio/WanderLost/pull/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2080884175 | Allow external http client and add custom request options
Hi,
Thanks for creating this npm module and sharing it with the world!
I've used it and made a few small improvements:
Aligned spacing & formatting (more by accident)
Exported HttpClient and added a clientFactory option for Kindle class. That is very useful when you want to NOT use the tls-client-api server but the shared library.
Added pagination handling to Kindle class. The default behaviour hasn't changed, there is an on-demand option for it.
Moved fetching of books into a separate function (fetchBooks) to reduce complexity (increased due to pagination handling).
Added query and filtering options (also includes the pagination).
Please take a look at it and I hope that we merge them!
Thanks!
This is excellent, thanks a lot for the PR. I've been meaning to make tls-client-api optional/more modular but just haven't had the motivation for it since I solved my own problem with the proxy.
One thought I have is since this includes pagination, I feel like it's the kind of thing that should be turned on by default. It feels a little bit strange to pass a false boolean flag to turn something on. Can we flip this behavior around to have pagination on by default so it can be released as the next major version to prevent breaking changes for existing users?
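Something like this minimal sketch of the flipped default (names are hypothetical, not the library's actual API):

// Hypothetical: pagination is on unless explicitly disabled, avoiding a
// `false`-means-on flag while keeping an opt-out for current behaviour.
interface FetchOptions {
  paginate?: boolean; // undefined => true
}

function shouldPaginate(options: FetchOptions = {}): boolean {
  return options.paginate ?? true; // callers pass { paginate: false } to opt out
}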
@Xetera Do you have a preferred formatter / code style that I can use for this project? I am fighting with my vscode right now, because there are no project specific settings. And my defaults don't seem to match yours.
And I also noticed that the package scripts are a bit messed up. build:all calls yarn but the package manager is pnpm.
2 spaces, semi, trailing commas is good. It's basically the changes you've got in your PR so that's no problem. I'd also like to stick to pnpm. Sorry, just feeling the embarrassment of having what I thought would just be a personal project get used by others I guess 😮💨
2 spaces, semi, trailing commas is good. It's basically the changes you've got in your PR so that's no problem. I'd also like to stick to pnpm. Sorry, just feeling the embarrassment of having what I thought would just be a personal project get used by others I guess 😮💨
No worries, this is our free time and not work 😁
pnpm is fine.
I've pushed a new commit that fixes it.
I also got sidetracked a little bit and wrote some tests. I will push that commit, too. Please take a look at it. I use msw to mock the network requests. That way the real Api is not part of the test and the tests can be run by anyone.
I can revert that commit if you do not like it. But I think it's worth it.
Looks great, I've used msw a couple times before and it's awesome so I don't mind it. Happy to merge this if you feel it's good to go
I think the PR is ready.
| gharchive/pull-request | 2024-01-14T20:23:17 | 2025-04-01T06:37:42.445635 | {
"authors": [
"Gitii",
"Xetera"
],
"repo": "Xetera/kindle-api",
"url": "https://github.com/Xetera/kindle-api/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1773766928 | rosdep install problem
I encountered the problem of not being able to find the package at this step. Could you please help me take a look?
ros2@ubuntu:~/aubo_ros2_ws$ rosdep install --from-paths src --ignore-src --rosdistro foxy -r -y
ERROR: the following packages/stacks could not have their rosdep keys resolved
to system dependencies:
aubo_ros2_moveit_config: Cannot locate rosdep definition for [warehouse_ros_mongo]
Continuing to install resolvable dependencies...
#All required rosdeps installed successfully
I am using Ubuntu 20.04 + ROS 2 Foxy.
You can try to manually install it: sudo apt install ros-foxy-warehouse-ros-mongo
Thank you. It's useful.
| gharchive/issue | 2023-06-26T03:28:01 | 2025-04-01T06:37:42.463935 | {
"authors": [
"XieShaosong",
"dashuaip"
],
"repo": "XieShaosong/aubo_robot_ros2",
"url": "https://github.com/XieShaosong/aubo_robot_ros2/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1160582582 | 🛑 Zoltar is down
In 9ea9ace, Zoltar (https://Zoltar-12.tikihed.repl.co) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Zoltar is back up in b6f0bad.
| gharchive/issue | 2022-03-06T11:11:26 | 2025-04-01T06:37:42.466465 | {
"authors": [
"Xiija"
],
"repo": "Xiija/UpMonitor",
"url": "https://github.com/Xiija/UpMonitor/issues/858",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1206319358 | PyTorch Quantization - [VAIQ_WARN]: Node ouptut tensor is not quantized
I'm trying to quantize and export a simple 2 layer MLP, code seen below:
import torch

class Feedforward(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super(Feedforward, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.fc1 = torch.nn.Linear(self.input_size, self.hidden_size)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(self.hidden_size, 1)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        hidden = self.fc1(x)
        relu = self.relu(hidden)
        output = self.fc2(relu)
        output = self.sigmoid(output)
        return output
I use the following code to quantize and export my model:
from pytorch_nndct.apis import torch_quantizer  # Vitis-AI quantizer API

model = load_model()
input = torch.randn([2, 2, 2])
model.eval()
quantizer = torch_quantizer('calib', model, (input))
quant_model = quantizer.quant_model
quantizer.export_quant_config()
input = torch.randn([1, 2, 2])
quantizer = torch_quantizer('test', model, (input))
quantizer.export_xmodel()
I get the following warning when I run export_quant_config():
[VAIQ_WARN]: Node ouptut tensor is not quantized: Feedforward::input_0 type: input
and the same warning for all layers and parameters in my model.
When I debugg the function I can gather the following quantization info about my model:
> (Pdb) TORCHQuantizer._QuantInfo
> {'param': {'Feedforward::fc1.weight': [8, None], 'Feedforward::fc1.bias': [8, None], 'Feedforward::fc2.weight': [8, None], 'Feedforward::fc2.bias': [8, None]}, 'output': {'Feedforward::input_0': [8, None], 'Feedforward::Feedforward/ReLU[relu]/input': [8, None], 'Feedforward::Feedforward/Linear[fc2]/51': [8, None]}, 'input': {}}
I'm guessing that the bnfp value should not be None, but I have no idea how to properly quantize my model.
Any suggestions?
I have the same question
[VAIQ_WARN]: Node ouptut tensor is not quantized: YoloBody::input_0 type: input
[VAIQ_WARN]: Node ouptut tensor is not quantized: YoloBody::YoloBody/Focus[backbone]/CSPDarknet[backbone]/Focus[stem]/input.1 type: concat
[VAIQ_WARN]: Node ouptut tensor is not quantized: YoloBody::YoloBody/Focus[backbone]/CSPDarknet[backbone]/Focus[stem]/BaseConv[conv]/Conv2d[conv]/input.2 type: conv2d
[VAIQ_WARN]: Node ouptut tensor is not quantized: YoloBody::YoloBody/Focus[backbone]/CSPDarknet[backbone]/Focus[stem]/BaseConv[conv]/SiLU[act]/input.3 type: elemwise_mul
……
And the final tip is
[VAIQ_WARN]: Quantization is not performed completely, check if model inference function is called!!!
Model forward loop for evaluation is needed before export quantization config.
Please refer to the function "evaluate" in the demo resnet18_quant.py.
Model forward loop for evaluation is needed before exporting quantization config.
Please refer to the function "evaluate" in the example resnet18_quant.py.
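For anyone hitting the same warning, a minimal sketch of that missing step (calib_loader here is a made-up stand-in for any representative inputs):

# Run forward passes in calib mode so the quantizer can collect
# activation statistics, then export the quantization config.
quant_model = quantizer.quant_model
quant_model.eval()
with torch.no_grad():
    for x in calib_loader:          # any representative inputs work
        quant_model(x)              # this populates the quantization info
quantizer.export_quant_config()     # no more "not quantized" warnings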
| gharchive/issue | 2022-04-17T08:48:03 | 2025-04-01T06:37:42.472962 | {
"authors": [
"DouDinai",
"Orchidaceae",
"zl200881"
],
"repo": "Xilinx/Vitis-AI",
"url": "https://github.com/Xilinx/Vitis-AI/issues/761",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1238260023 | Update VART recipes to PetaLinux 2022.1 / honister
Hi, can you estimate when the recipes and CMake files for Yocto 3.4 (Honister) / PetaLinux 2022.1 will be updated?
Hi @Daniel-O551
VART recipes for PetaLinux 2022.1 will be released in VAI2.5.
| gharchive/issue | 2022-05-17T08:20:51 | 2025-04-01T06:37:42.474290 | {
"authors": [
"Daniel-O551",
"qianglin-xlnx"
],
"repo": "Xilinx/Vitis-AI",
"url": "https://github.com/Xilinx/Vitis-AI/issues/801",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
468307087 | Use BufReader::get_mut to write to a socket
On rustc 1.37.0-nightly (5f9c0448d 2019-06-25) it fails with:
error[E0596]: cannot borrow data in a `&` reference as mutable
--> /root/.cargo/git/checkouts/conetty-a662beb3b89f3dc0/7acf957/src/tcp_client.rs:54:9
|
54 | s.get_ref().write_all(&(req.finish(id)))?;
| ^^^^^^^^^^^ cannot borrow as mutable
error: aborting due to previous error
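For context, the change is presumably the one-liner at tcp_client.rs:54, since get_mut() yields the mutable reference that write_all() needs:

// before: get_ref() returns &R, but write_all() requires &mut self
s.get_ref().write_all(&(req.finish(id)))?;
// after: get_mut() returns &mut R, satisfying the Write bound
s.get_mut().write_all(&(req.finish(id)))?;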
Thanks!
| gharchive/pull-request | 2019-07-15T19:55:41 | 2025-04-01T06:37:42.525443 | {
"authors": [
"ArtemGr",
"Xudong-Huang"
],
"repo": "Xudong-Huang/conetty",
"url": "https://github.com/Xudong-Huang/conetty/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
694917648 | meaning of >2px, >3px, >5px
I cannot find the meaning of >2px, >3px, >5px either in KITTI's documentation or in "Depth Map Prediction from a Single Image using a Multi-Scale Deep Network". Could you explain where I can find this information?
I guess it is disparity error?
What is the ground truth for the comparison?
Why input lidar has invalid Abs Rel but some error in >2px,>3px and >5px?
"What is the ground truth for the comparison?"
--KITTI stereo provides disparity ground truth for the evaluation.
"Why input lidar has invalid Abs Rel but some error in >2px,>3px and >5px?"
--It is because the input lidar and the ground truth are not dense. In this case, we do not evaluate the abs rel metric on the input lidar. For bad pixel rate metrics, since they compute the error ratios regardless of the densities of the inputs, we provide the results in our paper.
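For reference, the >k px ("bad pixel") rate is commonly defined as the fraction of pixels with valid ground truth whose disparity error exceeds k pixels:

BadPix(k) = (1 / |V|) * sum_{p in V} 1[ |d_pred(p) - d_gt(p)| > k ]

where V is the set of pixels that have ground-truth disparity. Because it is a ratio over whichever pixels are valid, it can be computed on sparse inputs and sparse ground truth alike, which matches the explanation above.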
| gharchive/issue | 2020-09-07T09:39:17 | 2025-04-01T06:37:42.528107 | {
"authors": [
"JUGGHM",
"kuwt",
"yiranzhong"
],
"repo": "XuelianCheng/LidarStereoNet",
"url": "https://github.com/XuelianCheng/LidarStereoNet/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1144874166 | Wrong macro used for loop condition in paging.cpp:initDirectory()
https://github.com/XyrisOS/xyris/blob/adf3ae0afff96e621f22f5f2c49e58e8d7b59203/Kernel/Memory/paging.cpp#L134
I believe the upper bound for the iterator of this for loop is meant to be ARCH_PAGE_TABLE_ENTRIES. The current code works because the two macros happen to have the same value.
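For clarity, the presumable fix is just swapping the constant in the loop bound:

// iterate over the entries of one page table, not another equally-sized constant
for (size_t i = 0; i < ARCH_PAGE_TABLE_ENTRIES; i++) {
    // ... initialise entry i ...
}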
You would be correct! Thank you for opening an issue, I'll merge in a fix soon.
| gharchive/issue | 2022-02-19T23:45:19 | 2025-04-01T06:37:42.535062 | {
"authors": [
"Kfeavel",
"vannaka"
],
"repo": "XyrisOS/xyris",
"url": "https://github.com/XyrisOS/xyris/issues/386",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1113383509 | Resolve "Replace liballoc with Custom Heap"
Closes #367
Will write some unit tests for this soon. Thanks to @micahswitzer for writing a basic one used for testing (janky-heap-tests).
| gharchive/pull-request | 2022-01-25T03:30:50 | 2025-04-01T06:37:42.536257 | {
"authors": [
"Kfeavel"
],
"repo": "XyrisOS/xyris",
"url": "https://github.com/XyrisOS/xyris/pull/368",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1659565741 | 🛑 MOMO is down
In 64271d5, MOMO (https://www.momoshop.com.tw/main/Main.jsp) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MOMO is back up in ee1110f.
| gharchive/issue | 2023-04-08T16:01:32 | 2025-04-01T06:37:42.538569 | {
"authors": [
"Y1YangLin"
],
"repo": "Y1YangLin/upptime",
"url": "https://github.com/Y1YangLin/upptime/issues/669",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
701573407 | Cross-domain API testing: unable to switch environments
Version
~
What is the problem
When running a test collection, only one domain can be selected, but the endpoints in the collection may belong to multiple domains, so there is no way to switch environments uniformly.
How to reproduce this problem
~
What browser
~
What OS (Linux, Windows, macOS)
This is indeed a pain point!
| gharchive/issue | 2020-09-15T03:18:43 | 2025-04-01T06:37:42.546326 | {
"authors": [
"TTtesting",
"jinhui20073000"
],
"repo": "YMFE/yapi",
"url": "https://github.com/YMFE/yapi/issues/1927",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
312902238 | Suggestion: auto-fill example parameters when running an API
Every time I debug an API I have to re-enter the parameters. Or am I just using it wrong?
You can save it as a test case...
I strongly second this suggestion...
Having to fill everything in by hand every time is quite inefficient.
| gharchive/issue | 2018-04-10T12:33:06 | 2025-04-01T06:37:42.547377 | {
"authors": [
"jszjgqq",
"mahaixue",
"superwg1984"
],
"repo": "YMFE/yapi",
"url": "https://github.com/YMFE/yapi/issues/227",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
12877316 | Streamline API return codes & output it as HTTP header
This is a COPY of Issue 1277: Streamline API return codes & output it as HTTP header, filed on Google Code before the project was moved to GitHub.
Submitted on 2013-01-11T21:01:11.000Z by ozh...@gmail.com
Status: Accepted
Please review the original issue and especially its comments. Comments here on closed issues will be ignored. Thanks.
Original description
Currently the API result arrays contain either 'statusCode', 'errorCode' or nothing.
- make it consistent: 'statusCode' and 'message' for all methods
- (in case a custom method doesn't implement that, output default status & message)
- use it as a HTTP header in the output
This might induce some breakage for people using API: make a detailed blog post about it
Closed (or dismissed, really) via #3233
| gharchive/issue | 2013-04-06T12:52:57 | 2025-04-01T06:37:42.550475 | {
"authors": [
"ozh"
],
"repo": "YOURLS/YOURLS",
"url": "https://github.com/YOURLS/YOURLS/issues/1277",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
288257126 | No redirection when using "www"
Hello;
Our short URLs are not properly redirecting when using "www".
I can access the home page with or without "www" but it returns back to home page if I use "www" with the shortned url.
Thanks for any feedback.
YES! I've been pulling my hair over this. Thanks @codegrrrl for your time and expertise 👍
I would like to know what "properly redirecting when using www" means, from the OP.
I'm on the home stretch of a PR refactoring the HTTP(S) scheme usage. This involves removing the YOURLS_SITE constant from config.php. So this topic is important during testing. What should a very vanilla install do? I believe what ozh reported HERE us the correct behavior and this issue is Not a Bug!
If YOURLS_SITE is defined as http://sho.rt does [properly redirecting] include http://redirect.sho.rt, http://w3.sho.rt, or http://www.sho.rt?
The domain sho.rt and the domain www.sho.rt are two different domains, suggesting the need for Multi-Domain Support #560, without hacking the core files.
A simple, standard 301 redirect in .htaccess or the server block from www. to non-www. would do. This is used on many WordPress sites and is easy to do. Howbeit, it does add an extra 301.
# FROM www. --TO-- NO www. -- In .htaccess
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^(.*)$ http://%1/$1 [R=301,L]
If the domain www. is a special case in YOURLS, what is the configuration for a site that does NOT want to serve www. requests? (Personally I want www. requests to be redirected to the non-www. "home page" where I have WordPress waiting to serve their needs.)
If you can delay a little, or temporarily use one of the above options. I expect to release the HTTP(S) refactoring PR and HTTPS Plugin in January 2019. Then all my effort will be directed on the YOURLS MultiSite Plugin. MultiSite is 99% done and working on my production server with three different domains. It can easily handle a configuration as the OP describes. Expect MultiSite Feburary 2019.
@ozh I believe the OP's issue is Not a Bug but a Multi-Domain Support #560 configuration error (see No. 1 above). What should a very vanilla install do with two different domains? I believe what you reported HERE is the correct behavior.
Social media sites are mangling non-www URLs. So if you say “site.net/nick” it’s showing that, but when someone clicks it’s sending them to http(s)://www.site.net/nick
This is an issue on twitter and facebook. The fix is to say “http(s)://site.net/nick” and not omit the http(s) portion OR — fix the site to accept the www portion if you have a dedicated domain for your shorturls (like many main sites both www and non-www resolve to the same/main site).
So I’m interested in whether this has been resolved in a graceful way or if it’s still pending. (I’m working on a cell phone right now with reading glasses in a shell app so it’s not like it’s a great time to read a ton of text lol — I may have to hack it and get back to updates and installations/fixes later).
I'm looking forward to a refactored YOURLS which does this "correctly." But for now, @codegrrrl 's "dirty" fix saved me!
Everybody: this issue has been fixed with latest commits.
Example: https://www.ozh.in/xt and https://ozh.in/xt both point here
(ozh.in is running current master version. I will probably release a new release sometimes soon)
| gharchive/issue | 2018-01-12T21:54:00 | 2025-04-01T06:37:42.562596 | {
"authors": [
"Crisses",
"PopVeKind",
"ayyoovod",
"boumaj123",
"johnaweiss",
"ozh"
],
"repo": "YOURLS/YOURLS",
"url": "https://github.com/YOURLS/YOURLS/issues/2354",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
113211394 | Implement SPI feature
Tracks the SPI feature of konashi-ios-sdk.
LGTM
| gharchive/pull-request | 2015-10-25T07:21:29 | 2025-04-01T06:37:42.563684 | {
"authors": [
"0x0c",
"sagiii"
],
"repo": "YUKAI/konashi-js-sdk",
"url": "https://github.com/YUKAI/konashi-js-sdk/pull/14",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2007844574 | package manager not found using create-lunaria
┌ create-lunaria
│
◇ Which @lunariajs package would you like to set up?
│ @lunariajs/core
│
◇ Where should we set up Lunaria?
│ ./lunaria
│
◇ Does your project use TypeScript?
│ Yes
│
◇ Do you wish to install @lunariajs/core and its dependencies?
│ Yes
└ Could not find your package manager. Setup wizard cancelled.
npm ERR! code 1
npm ERR! path /Users/alexanderniebuhr/Developer/tmp
npm ERR! command failed
npm ERR! command sh -c create-lunaria
npm ERR! A complete log of this run can be found in: /Users/alexanderniebuhr/.npm/_logs/2023-11-23T09_54_33_434Z-debug-0.log
The create-lunaria wizard was deprecated in favor of lunaria init, so this shouldn't be an issue anymore. Thanks for reporting!
| gharchive/issue | 2023-11-23T09:55:27 | 2025-04-01T06:37:42.710516 | {
"authors": [
"Yan-Thomas",
"alexanderniebuhr"
],
"repo": "Yan-Thomas/lunaria",
"url": "https://github.com/Yan-Thomas/lunaria/issues/52",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1335385746 | Model training
Is there any code for model training?
Same question here.
| gharchive/issue | 2022-08-11T02:07:29 | 2025-04-01T06:37:42.716864 | {
"authors": [
"ChenJian7578",
"xiaolv899"
],
"repo": "YaoFANGUK/video-subtitle-generator",
"url": "https://github.com/YaoFANGUK/video-subtitle-generator/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
77254462 | [Build 525] HTML5 player reloads when forcing the flash player
When forcing the flash player, every time you load a video, a couple of seconds of it play in the new player, then it reloads and shows the flash player. This is mildly annoying, but I'm afraid it might be eating up resources that shouldn't even be a concern. Then again, I don't know much about this so I could be entirely wrong.
That is intentional and is not a bug. YouTube is automatically loading the HTML5 player, but your extension doesn't force Flash Player until the page stops loading. You should really be using the HTML5 player though, as it has the benefit of letting you watch videos at higher frame rates, instead of just a fixed 30fps.
The only reason I'm not using the HTML5 player is that it doesn't allow the progress bar to hide, and therefore I can only watch a letterboxed version of the video instead of the size I'm currently using with the flash player.
@Pikachuy It hides for me. If it's not disappearing, just move your cursor around the player, then move it off and it should disappear. Make sure you've also set the option to hide the progress bar. HTML5 player is superior to Flash, due to high fps playback, so you should definitely be using it over Flash. :)
The HTML5 player has a serious tearing problem (a vertical-sync issue) if the Windows user has disabled the Aero style, and Aero is nonsense that normal people disable right away.
So the HTML5 player is junk for me; that's why I only use the Flash player, which supports vertical sync.
The problem with refreshing the page is very annoying. If possible, try to fix it. Thank you.
@Hobbix Keep in mind that YouTube intends to drop flash player support for their website soon; that is why they are moving completely to the HTML5 player, and you won't be able to use their flash player any longer when that time comes.
Yup, it's not because of YTC..
Closing...
| gharchive/issue | 2015-05-17T08:30:04 | 2025-04-01T06:37:42.730444 | {
"authors": [
"Hobbix",
"Pikachuy",
"SuperSajuuk",
"Yonezpt",
"ireun"
],
"repo": "YePpHa/YouTubeCenter",
"url": "https://github.com/YePpHa/YouTubeCenter/issues/1823",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
614040345 | Referencing an Elasticsearch query in a JSON file
Referencing an Elasticsearch query in the filter would make life really easy. I know a Kibana dashboard could be used, but I think that alternative also forces the use of an ElastAlert .yaml rule file. I think it would be much simpler if we could have some placeholders in our Elasticsearch query that ElastAlert replaces.
We would reference this .json file in our filter section.
I appreciate your feedback on this.
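For illustration, the request boils down to something like this hypothetical rule snippet (query_file is not an existing ElastAlert option; the name is made up here to show the idea):

filter:
- query_file: queries/my_search.json  # external ES query containing placeholders for ElastAlert to fill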
Yelp/elastalert is no longer maintained. Please use jertel/elastalert2.
https://github.com/jertel/elastalert2
| gharchive/issue | 2020-05-07T13:02:11 | 2025-04-01T06:37:42.745594 | {
"authors": [
"hbarisik",
"nsano-rururu"
],
"repo": "Yelp/elastalert",
"url": "https://github.com/Yelp/elastalert/issues/2788",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
209702173 | Adding Exotel sms Alerter support for alerting
This pull request adds SMS support for the Exotel SMS API (https://www.exotel.in). I have tested it and it works well in production.
@danielpops can you please merge this.
| gharchive/pull-request | 2017-02-23T09:05:41 | 2025-04-01T06:37:42.746796 | {
"authors": [
"ramey"
],
"repo": "Yelp/elastalert",
"url": "https://github.com/Yelp/elastalert/pull/914",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
244025227 | Support for FreeBSD.
All,
I'd like to use this Puppet module on FreeBSD, but there is no support for it. Would you be open to a pull request that addresses this issue?
I would accept any PR that expands the OS support of this module.
| gharchive/issue | 2017-07-19T12:34:47 | 2025-04-01T06:37:42.749167 | {
"authors": [
"madelaney",
"solarkennedy"
],
"repo": "Yelp/puppet-uchiwa",
"url": "https://github.com/Yelp/puppet-uchiwa/issues/84",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
268821051 | missing link to github from npm
Noticed you're missing a field to link to github from https://www.npmjs.com/package/choo-tts. Can't PR right now, but might be worth fixing. Thanks!
That's because package.json is missing a repository property, right? I think it's happening in all my repos :anguished:
Thanks for reporting!
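For reference, this is the standard field npm reads; for this repo it would presumably look like:

{
  "repository": {
    "type": "git",
    "url": "git+https://github.com/YerkoPalma/choo-tts.git"
  }
}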
| gharchive/issue | 2017-10-26T16:08:08 | 2025-04-01T06:37:42.755323 | {
"authors": [
"YerkoPalma",
"yoshuawuyts"
],
"repo": "YerkoPalma/choo-tts",
"url": "https://github.com/YerkoPalma/choo-tts/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2020393840 | [Bug] WebDAV sync problems
The problems I'm currently hitting with WebDAV data sync on Windows are as follows:
1. Even though I'm using Jianguoyun (Nutstore) WebDAV, the network connection is unstable and sync fails much of the time;
2. The JSON file exported locally on Windows is garbled, so it can neither be viewed nor re-imported;
3. The JSON file synced from Windows via WebDAV is fine and can be viewed normally.
In summary: right now it may not be possible to sync via WebDAV reliably, so the data is at risk of loss (sync to the cloud fails, and local export is broken).
What puzzles me is that Jianguoyun's WebDAV is hosted inside China, so this route should not be a problem. Maybe the network path needs optimizing. Thanks.
Go ask whoever you bought the service from; why come to this UI forum?
I'm reporting an issue in good faith, and Jianguoyun's WebDAV is perfectly reachable inside China. What do you mean, "why come to the forum"?
Jianguoyun's WebDAV doesn't support CORS, so I provide a built-in proxy to work around that, but I can't guarantee its availability. The garbled exported JSON is just an encoding issue, a perennial topic on the Windows platform; you can look up the fix yourself.
I ran some tests, hoping they help:
Tested: Windows 2.9.7 and 2.9.6 are both fine (export, view, and import all work), but every version after them has the problem.
One detail: the JSON file exported by 2.9.7 is 1727 KB and opens and imports normally, while the exports from 2.9.8, 2.9.9, and 2.9.10 are 1102 KB, 1331 KB, and 1079 KB respectively, noticeably smaller and inconsistent in size.
I only learned about this issue today after updating to 2.10 and seeing Jianguoyun sync fail; following.
Same here.
Thanks a lot. Here is what happened: I re-downloaded 2.9.7 and successfully exported my data, then installed 2.10.3 (and also tested several later versions). The data exported by 2.9.7 imports successfully, but data exported by the later versions is still garbled and cannot be re-imported.
I tried changing the system language and so on; nothing worked.
Opening the files in Notepad shows that the intact exported JSON uses CRLF line endings and UTF-8 encoding, while the garbled exports from the other versions all use LF endings and ANSI encoding (converting them to CRLF and UTF-8 leaves them garbled). So this does not look like the "perennial Windows problem" you mentioned, @Yidadaa.
The sync feature still has a lot of room for improvement; one careless step and you get a result you did not want. It would be best to split it into separate "upload backup" and "download backup" buttons.
On v2.11.3 I can finally connect to Jianguoyun WebDAV (probably related to the fix in #3972), but for some reason I cannot upload files to it.
| gharchive/issue | 2023-12-01T08:44:39 | 2025-04-01T06:37:42.813135 | {
"authors": [
"PinkPanther-ny",
"Robinson28years",
"TCOTC",
"Yidadaa",
"jkjoker",
"kitaev-chen",
"reece00"
],
"repo": "Yidadaa/ChatGPT-Next-Web",
"url": "https://github.com/Yidadaa/ChatGPT-Next-Web/issues/3423",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1643523863 | Can the app name and description in the sidebar be customized?
Does the app name in the left sidebar support custom modification?
Solved, thanks!
How did you solve it? Please share.
Edit the file app/components/home.tsx: find function Home(); the original app name is inside the div tag it returns, and you can just change it to your own.
I hope the app name (logo) and description can be customized through configuration. Since I want to stay in sync with upstream updates, manual edits cause merge conflicts. It would be great to set the site name and description via environment variables, like the other configuration options.
Where exactly do I change it?
Is it no longer possible to change it now?
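For anyone still looking, the edit described above is of this general shape (illustrative JSX only; the real markup in app/components/home.tsx varies between versions):

function Home() {
  return (
    <div className="container">
      {/* replace these hard-coded strings with your own name and description */}
      <div className="sidebar-title">My Chat</div>
      <div className="sidebar-sub-title">My own description.</div>
      {/* ... rest of the layout ... */}
    </div>
  );
}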
| gharchive/issue | 2023-03-28T09:11:54 | 2025-04-01T06:37:42.816411 | {
"authors": [
"205637827",
"DarriusL",
"YouCanYouUp741",
"huwan"
],
"repo": "Yidadaa/ChatGPT-Next-Web",
"url": "https://github.com/Yidadaa/ChatGPT-Next-Web/issues/95",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1837944346 | Is there a reference paper for this code?
Hello, I'd like to study the code alongside the literature. Are there any corresponding reference papers?
You can refer to: [1] Waveform Design and Signal Processing Aspects for Fusion of Wireless Communications and Radar Sensing
[2] Performance analysis of joint radar and communication using OFDM and OTFS
| gharchive/issue | 2023-08-05T21:27:48 | 2025-04-01T06:37:42.887618 | {
"authors": [
"YongzhiWu",
"YuetianZhou"
],
"repo": "YongzhiWu/OFDM_ISAC_simulator",
"url": "https://github.com/YongzhiWu/OFDM_ISAC_simulator/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1488509434 | Compiling help: [Did what was told to and still got errors / source/funkin/system/MusicBeatState.hx:89: characters 43-45 : Expected )]
I have everything installed from the original FNF source code; I ran update.bat and then tried to compile in debug, "Dbeta", and normal modes, and all of them gave the error below:
source/funkin/system/MusicBeatState.hx:89: characters 43-45 : Expected )
[EDIT]: I tried running the code in Windows PowerShell and the command prompt; CMD crashes instead of giving me the above error, and PowerShell is the one that gives me the error.
[x] Windows
[ ] Mac
[ ] Linux
[ ] HTML5
Seems like your Haxe version does not support obj is Type statements
Are you sure you're using the latest version available?
If not, you can update to 4.2.5 here
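For reference, this is the construct involved (names illustrative); Std.isOfType is the spelling that older 4.x compilers accept, if I remember the docs correctly:

// Haxe 4.2+ syntax, which older parsers reject with "Expected )":
if (obj is MusicBeatState) { /* ... */ }
// Pre-4.2-friendly equivalent:
if (Std.isOfType(obj, MusicBeatState)) { /* ... */ }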
I just realised that I'm using haxe 4.1.5, thank you!
I got a whole slew of errors after updating my version of haxe to 4.2.5 so I'm just going to wait for a full release
Update your lime version.
haxelib install lime
Alright thank you
| gharchive/issue | 2022-12-10T13:56:18 | 2025-04-01T06:37:42.908544 | {
"authors": [
"Megalonumber0ne",
"YoshiCrafter29"
],
"repo": "YoshiCrafter29/CodenameEngine",
"url": "https://github.com/YoshiCrafter29/CodenameEngine/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
451871136 | README.md: Missing formatting for <size>
This is a drive-by PR to fix an invisible reference to <size> due to missing formatting.
I do not know the answer
| gharchive/pull-request | 2019-06-04T08:46:22 | 2025-04-01T06:37:42.912366 | {
"authors": [
"comicaza",
"tux3"
],
"repo": "YosysHQ/yosys",
"url": "https://github.com/YosysHQ/yosys/pull/1062",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
466816388 | synth_ice40: switch -relut to be always on
As far as I know this switch either does nothing or makes the design faster. People like @smunaut have been using it extensively and it appears robust.
As I mentioned on IRC, wouldn't it be better to ignore the "-relut" option rather than remove it? Just so all existing scripts that have it don't break.
Otherwise you need a Makefile test for yosys's version to know whether to explicitly add that option or leave it out :/
I think Yosys has never done that for any other options, did it?
Note: with -abc9, -relut doesn't seem to do much of anything on the designs I tested it on. Which is strange, because it should, in principle, on a techmapped adder followed by a mux with a constant or one of the operands. Nevertheless it doesn't hurt to enable it there as well.
I couldn't find any instance in synth_ice40 of any option being removed.
But in synth_xilinx, for instance, I could find at least one example ( 6c256b8cda66e2ba128d5fa3ba344fe4717711f8 ) where -arch got replaced by -family but the code still supports -arch for compatibility.
But I can also find examples of non-compatible changes as well ( 36e6da53964b406ad379a60fc289aa3af9beb8a9 ).
So not sure what the policy is here. But -relut is definitely a pretty popular option currently ...
@smunaut It is now parsed and ignored.
But I can also find examples of non-compatible changes as well ( 36e6da5 ).
So not sure what the policy is here. But -relut is definitely a pretty popular option currently ...
This incompatible example was a new option made on a branch and changed on that same branch before it was merged into master. Generally, I would say we preserve them as much as possible, even silently.
I've knowingly broken this once I think, for an option that was wrong to begin with and was there only for a short time....
Could we please have a -no-relut option now. My design actually runs around 30% slower with -relut.
Not sure if there are other designs that suffer the same fate but I have been tracking this as the cause of a regression where the design used to run at 44 MHz but now only does 33 MHz.
I'm a little surprised that this went from optional to always on without the means to opt-out.
@janrinze Please see #1187
I'm a little surprised that this went from optional to always on without the means to opt-out.
This regression was caused by commit 437fec0d88b4a2ad172edf0d1a861a38845f3b1d, which changed the semantics of SB_LUT4 without considering all in-tree users of SB_LUT4, which unfortunately sometimes happens and leads to undesirable results. There is nothing wrong with -relut itself though.
| gharchive/pull-request | 2019-07-11T10:48:05 | 2025-04-01T06:37:42.919126 | {
"authors": [
"eddiehung",
"janrinze",
"smunaut",
"whitequark"
],
"repo": "YosysHQ/yosys",
"url": "https://github.com/YosysHQ/yosys/pull/1183",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
1084008813 | hocuspocus.dev usage example
Do you know of a usage example of using SyncedStore with hocuspocus.dev as the server?
Hi and welcome @hangtwenty !
This should be similar to how other providers work, as described at https://syncedstore.org/docs/sync-providers.
I haven't tried this yet, but when setting up the store, try this:
import { syncedStore, getYjsValue } from "@syncedstore/core";
import { HocuspocusProvider } from "@hocuspocus/provider";

export const store = syncedStore({ arrayData: [] });
const doc = getYjsValue(store);

let currentStates = [];
const provider = new HocuspocusProvider({
  url: "ws://127.0.0.1:1234",
  name: "example-document",
  document: doc,
  onAwarenessUpdate: ({ states }) => {
    currentStates = states;
  },
});
Feel free to reopen if you have additional questions! Looking forward to hearing your experience with syncedstore :)
| gharchive/issue | 2021-12-19T04:11:30 | 2025-04-01T06:37:42.922666 | {
"authors": [
"YousefED",
"hangtwenty"
],
"repo": "YousefED/SyncedStore",
"url": "https://github.com/YousefED/SyncedStore/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
643064215 | Training error
Hello, I built PPDM as it is written in Installation.
But when I launch the training I got an error (screenshot omitted).
What might be the problem ?
I'm using AWS g4dn.xlarge machine.
It may be caused by the wrong CUDA version. I guess that you use CUDA 10, but the DCN only supports CUDA 9. I plan to rewrite the repo for CUDA 10 and PyTorch 1.4 soon.
@YueLiao thanks for the quick response!
Yep, I know about the CUDA version; for that reason I've changed the CUDA symlink from 10 to 9,
like this:
sudo rm -fr /usr/local/cuda
ln -s /usr/local/cuda-9 /usr/local/cuda
The error is caused by a mismatch between the CUDA version used when compiling DCN and when running the code. You need to modify the PATH like this:
export PATH="$CUDA_HOME/bin:${PATH}"
export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME
export LD_LIBRARY_PATH="$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH"
export LIBRARY_PATH=$CUDA_HOME/lib64:$LIBRARY_PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export CFLAGS="-I$CUDA_HOME/include $CFLAGS"
Sorry for a long reply.
Unfortunately, it didn't work.
Looking forward for the solution for CUDA 10.
Thanks !
Did you solve it? I encountered the same problem; I also checked the version and still get this error.
The pt1 branch already supports CUDA 10~
| gharchive/issue | 2020-06-22T13:17:40 | 2025-04-01T06:37:42.945299 | {
"authors": [
"Alasile",
"YueLiao",
"ghost",
"volodymyrkepsha"
],
"repo": "YueLiao/PPDM",
"url": "https://github.com/YueLiao/PPDM/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
413718818 | Crash Alert low
[SF - ERROR] SF:lanai/processor/systems/ai_decoder.txt:137: attempt to call global 'CheckAutoMode' (a nil value)
[starfall_processor [2081]] Server Error
SF:lanai/processor/systems/ai_decoder.txt:137: attempt to call global 'CheckAutoMode' (a nil value)
stack traceback:
SF:lanai/processor/systems/ai_decoder.txt:137: in function 'OnChatMessage'
SF:lanai/processor/systems/ai_utils.txt:22: in function 'ReadData'
SF:lanai/processor/systems/protocol.txt:85: in function SF:lanai/processor/systems/protocol.txt:61
Does this happen after the core is completely initialized (that means no more yellow light), or does this happen while the core is initializing?
#3
It happens after initialization is completely done.
Ok then I'll have a look asap.
I checked it and I can't reproduce the crash when LanAI is completely initialized...
Should be fixed now, comment if it's not...
#4
| gharchive/issue | 2019-02-23T17:44:39 | 2025-04-01T06:37:43.008173 | {
"authors": [
"TheDeadScythe",
"Yuri6037"
],
"repo": "Yuri6037/TSCM_Starfall",
"url": "https://github.com/Yuri6037/TSCM_Starfall/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2114165990 | Support for Persona 3 Reload
Game name: Persona 3 Reload
Game package name: SEGAofAmericaInc.L0cb6b3aea_s751p9cej88mt
wgs.zip
SteamSave.zip
Duplicate of #114.
Please follow that issue.
Also thanks for the saves, they'll help!
| gharchive/issue | 2024-02-02T06:16:21 | 2025-04-01T06:37:43.109672 | {
"authors": [
"Z1ni",
"dr1055"
],
"repo": "Z1ni/XGP-save-extractor",
"url": "https://github.com/Z1ni/XGP-save-extractor/issues/115",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2637300131 | use of closed network connection
Hey,
I recently added your container to the helm chart SQLJames/factorio-server-charts.
While the container works as intended, it sometimes crashes:
{"level":"info","ts":1730878872.1065865,"caller":"cmd/local.go:115","msg":"Healthcheck server started"}
⇨ http server started on [::]:34197
{"level":"info","ts":1730878872.106785,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"127.0.0.1","Port":34197}
{"level":"info","ts":1730878877.758352,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"10.41.3.197:31497"}
{"level":"info","ts":1730878880.5381744,"caller":"cmd/local.go:135","msg":"Read from socket","Bytes":1,"Remote":"127.0.0.1:31497"}
{"level":"info","ts":1730878880.5382175,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"127.0.0.1","Port":34197}
{"level":"error","ts":1730878882.7584276,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp 127.0.0.1:60707: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"}
{"level":"info","ts":1730878887.7583582,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"10.41.3.197:31497"}
{"level":"error","ts":1730878892.7618413,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp 127.0.0.1:50433: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"}
{"level":"info","ts":1730878895.554898,"caller":"cmd/local.go:135","msg":"Read from socket","Bytes":1,"Remote":"127.0.0.1:31497"}
{"level":"info","ts":1730878895.5549371,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"127.0.0.1","Port":34197}
{"level":"info","ts":1730878897.7583709,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"10.41.3.197:31497"}
{"level":"error","ts":1730878898.8152833,"caller":"cmd/local.go:132","msg":"net.ReadFromUDP() error: read udp 127.0.0.1:34197: use of closed network connection","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.3\n\t/app/cmd/local.go:132\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:75"}
{"level":"error","ts":1730878902.7591352,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp 127.0.0.1:55703: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"}
{"level":"info","ts":1730878903.0005345,"caller":"cmd/local.go:190","msg":"graceful shutting down"}
{"level":"error","ts":1730878903.000601,"caller":"cmd/local.go:195","msg":"exit reason: read udp 127.0.0.1:34197: use of closed network connection","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1\n\t/app/cmd/local.go:195\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:920\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:1044\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:968\ngithub.com/zcube/factorio-port-fixer/cmd.Execute\n\t/app/cmd/root.go:31\nmain.main\n\t/app/main.go:10\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}
When the container crashes, the clients disconnect with "server not responding", despite the fact that the game-server logs look fine.
It doesn't just crash occasionally; it crashes within 10 minutes, which makes the server unplayable.
Related: https://github.com/SQLJames/factorio-server-charts/issues/62
@Kariton @samip5 It's probably a health check issue. Isn't the health check failing because it's 127.0.0.1?
upstream
https://github.com/SQLJames/factorio-server-charts/blob/main/charts/factorio-server-charts/templates/deployment.yaml#L218
my chart
https://github.com/ZCube/factorio-server-charts/blob/64f817b163a947a71aa53844afe6be78482191a0/charts/factorio-server-charts/templates/deployment.yaml#L205
There were no problems with chart-based server operation in the past (I did not include a health check), and the docker-based server I recently started operating has been running without problems for more than 3 days.
healthcheck:
  test: curl --fail pingpong:34197/health || exit 1
  interval: 20s
  retries: 5
  start_period: 20s
  timeout: 10s
hmpf.
absolutely. you cannot listen on localhost and expect livenessProbes to work on a "public" site...
the port_fixer should listen to "0.0.0.0" and then work as expected.
nice catch!
gonna test, confirm and patch ASAP.
well.
the livenessProbe check seems to be the problem - not the listen IP.
Thu, Nov 7 2024 3:08:42 pm {"level":"info","ts":1730988522.3731894,"caller":"cmd/local.go:115","msg":"Healthcheck server started"}
Thu, Nov 7 2024 3:08:42 pm {"level":"info","ts":1730988522.3731928,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"0.0.0.0","Port":34197}
Thu, Nov 7 2024 3:08:42 pm ⇨ http server started on [::]:34197
Thu, Nov 7 2024 3:08:55 pm {"level":"info","ts":1730988535.616322,"caller":"cmd/local.go:135","msg":"Read from socket","Bytes":1,"Remote":"127.0.0.1:31497"}
Thu, Nov 7 2024 3:08:55 pm {"level":"info","ts":1730988535.6163716,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"0.0.0.0","Port":34197}
Thu, Nov 7 2024 3:08:56 pm {"level":"info","ts":1730988536.461232,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"10.41.3.197:31497"}
Thu, Nov 7 2024 3:09:01 pm {"level":"error","ts":1730988541.462053,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp [::]:58875: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"}
Thu, Nov 7 2024 3:09:06 pm {"level":"info","ts":1730988546.460379,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"10.41.3.197:31497"}
Thu, Nov 7 2024 3:09:10 pm {"level":"info","ts":1730988550.6329925,"caller":"cmd/local.go:135","msg":"Read from socket","Bytes":1,"Remote":"127.0.0.1:31497"}
Thu, Nov 7 2024 3:09:10 pm {"level":"info","ts":1730988550.6330307,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"0.0.0.0","Port":34197}
Thu, Nov 7 2024 3:09:11 pm {"level":"error","ts":1730988551.4609427,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp [::]:60609: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"}
Thu, Nov 7 2024 3:09:16 pm {"level":"info","ts":1730988556.4605217,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"10.41.3.197:31497"}
Thu, Nov 7 2024 3:09:17 pm {"level":"error","ts":1730988557.5172606,"caller":"cmd/local.go:132","msg":"net.ReadFromUDP() error: read udp [::]:34197: use of closed network connection","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.3\n\t/app/cmd/local.go:132\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:75"}
Thu, Nov 7 2024 3:09:21 pm {"level":"error","ts":1730988561.4610858,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp [::]:36376: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"}
Thu, Nov 7 2024 3:09:21 pm {"level":"info","ts":1730988561.7004497,"caller":"cmd/local.go:190","msg":"graceful shutting down"}
Thu, Nov 7 2024 3:09:21 pm {"level":"error","ts":1730988561.7005007,"caller":"cmd/local.go:195","msg":"exit reason: read udp [::]:34197: use of closed network connection","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1\n\t/app/cmd/local.go:195\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:920\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:1044\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:968\ngithub.com/zcube/factorio-port-fixer/cmd.Execute\n\t/app/cmd/root.go:31\nmain.main\n\t/app/main.go:10\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}
periodSeconds: 10
initialDelaySeconds: 5
failureThreshold: 3
the container gets killed.
Now udp [::]:34197 looks like an IPv6 problem to me, but I don't have dual-stack enabled...
is ReadFromUDP actually related to http probes?
Thu, Nov 7 2024 3:37:14 pm ⇨ http server started on [::]:34197
Thu, Nov 7 2024 3:37:14 pm {"level":"info","ts":1730990234.3001275,"caller":"cmd/local.go:115","msg":"Healthcheck server started"}
Thu, Nov 7 2024 3:37:14 pm {"level":"info","ts":1730990234.3001416,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"0.0.0.0","Port":34197}
Thu, Nov 7 2024 3:37:27 pm {"level":"info","ts":1730990247.8176143,"caller":"cmd/local.go:135","msg":"Read from socket","Bytes":1,"Remote":"127.0.0.1:31497"}
Thu, Nov 7 2024 3:37:27 pm {"level":"info","ts":1730990247.8176613,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"0.0.0.0","Port":34197}
Thu, Nov 7 2024 3:37:28 pm {"level":"info","ts":1730990248.2562778,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"10.41.3.197:31497"}
Thu, Nov 7 2024 3:37:33 pm {"level":"error","ts":1730990253.256902,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp [::]:55627: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"}
Thu, Nov 7 2024 3:37:38 pm {"level":"info","ts":1730990258.256299,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"10.41.3.197:31497"}
Thu, Nov 7 2024 3:37:42 pm {"level":"info","ts":1730990262.8342316,"caller":"cmd/local.go:135","msg":"Read from socket","Bytes":1,"Remote":"127.0.0.1:31497"}
Thu, Nov 7 2024 3:37:42 pm {"level":"info","ts":1730990262.8342786,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"0.0.0.0","Port":34197}
Thu, Nov 7 2024 3:37:43 pm {"level":"error","ts":1730990263.2572982,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp [::]:39508: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"}
Thu, Nov 7 2024 3:37:48 pm {"level":"info","ts":1730990268.256682,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"10.41.3.197:31497"}
Thu, Nov 7 2024 3:37:53 pm {"level":"error","ts":1730990273.2570317,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp [::]:44050: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"}
Thu, Nov 7 2024 3:37:53 pm {"level":"error","ts":1730990273.2741623,"caller":"cmd/local.go:132","msg":"net.ReadFromUDP() error: read udp [::]:34197: use of closed network connection","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.3\n\t/app/cmd/local.go:132\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:75"}
Thu, Nov 7 2024 3:37:53 pm {"level":"info","ts":1730990273.2743487,"caller":"cmd/local.go:190","msg":"graceful shutting down"}
Thu, Nov 7 2024 3:37:53 pm {"level":"error","ts":1730990273.2743661,"caller":"cmd/local.go:195","msg":"exit reason: read udp [::]:34197: use of closed network connection","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1\n\t/app/cmd/local.go:195\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:920\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:1044\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:968\ngithub.com/zcube/factorio-port-fixer/cmd.Execute\n\t/app/cmd/root.go:31\nmain.main\n\t/app/main.go:10\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /health
    port: port-fixer
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
ports:
  - containerPort: 34197
    name: port-fixer
    protocol: TCP
Unless it's HTTP/3 (QUIC), yes.
HTTP is TCP.
I'm unable to figure out what exactly is going on.
Removing the probe entirely works.
Greater thresholds or delays do not help.
To me it looks like an issue in the /health endpoint or something related.
Is there a way to get debug logging going?
Liveness probe failed: Get "http://10.41.3.219:34197/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
root@factorio-server-7c8497fdc-srv9l:/# curl 127.0.0.1:34197/health
OKroot@factorio-server-7c8497fdc-srv9l:/#
I have disabled the livenessProbe upstream.
I don't understand why that is a problem.
From within the factorio container it works with listener 127.0.0.1 (I think I tested that before hardcoding it in the chart...)
port-fixer:
  - args:
      - local
      - '--ip=127.0.0.1'
      - '--port=34197'
      - '--remotePort=31497'
{"level":"info","ts":1730994447.8460875,"caller":"cmd/local.go:115","msg":"Healthcheck server started"}
{"level":"info","ts":1730994447.8460903,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"127.0.0.1","Port":34197}
⇨ http server started on [::]:34197
probe test:
root@factorio-server-5bcbf95db8-s4gft:/# curl 127.0.0.1:34197/health
OKroot@factorio-server-5bcbf95db8-s4gft:/# curl 10.41.3.152:34197/health # pod IP
OKroot@factorio-server-5bcbf95db8-s4gft:/#
So listening on 0.0.0.0 or 127.0.0.1 makes no difference here.
I re-tested both and neither of them worked as expected; higher timeouts / thresholds / delays did not help.
But when deploying rcon-api within the same pod, that probe does work.
@Kariton I finally remembered. The /health API was an API for checking the health of the Factorio server, not factorio-port-fixer.
Would you mind adding a "correct" /healthz endpoint?
Use /health:
services:
  pingpong:
    image: ghcr.io/zcube/factorio-port-fixer:main
    command: /factorio-port-fixer local --ip=0.0.0.0 --port=34197 --remotePort=${PORT:-34197}
    healthcheck:
      test: curl --fail 127.0.0.1:34197/health || exit 1
  factorio:
    image: factoriotools/factorio:stable
    environment:
      - PORT=${PORT:-34197}
    healthcheck:
      test: curl --fail pingpong:34197/health_for_factorio || exit 1
I am closing this as I believe it has been resolved in v1.0.4.
| gharchive/issue | 2024-11-06T07:47:30 | 2025-04-01T06:37:43.138097 | {
"authors": [
"Kariton",
"ZCube",
"samip5"
],
"repo": "ZCube/factorio-port-fixer",
"url": "https://github.com/ZCube/factorio-port-fixer/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1210795250 | 🛑 ZEENKEY CDN is down
In d6f37ae, ZEENKEY CDN (https://zeenkeycdn.ga) was down:
HTTP code: 500
Response time: 247 ms
Resolved: ZEENKEY CDN is back up in 0f223d0.
| gharchive/issue | 2022-04-21T09:56:21 | 2025-04-01T06:37:43.141200 | {
"authors": [
"ZEENKEY"
],
"repo": "ZEENKEY/statuspage",
"url": "https://github.com/ZEENKEY/statuspage/issues/576",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
74417165 | Problem with my CustomRoleProvider
Hello, I may not have understood the docs, but I have declared this in my zfc_rbac.global.php file:
'role_provider_manager' => [
    'factories' => [
        'role_db_provider' => 'ShishiUser\Factory\RoleProviderFactory',
    ]
],
'role_provider' => [
    'role-db-provider' => [
    ],
],
My factory:
<?php
namespace ShishiUser\Factory;

use Zend\ServiceManager\FactoryInterface;
use ShishiUser\Role\RoleDbProvider;

class RoleProviderFactory implements FactoryInterface
{
    /* (non-PHPdoc)
     * @see \Zend\ServiceManager\FactoryInterface::createService()
     */
    public function createService(\Zend\ServiceManager\ServiceLocatorInterface $serviceLocator)
    {
        // TODO Auto-generated method stub
        $sm = $serviceLocator->get('ServiceManager');
        $roleRepository = $sm->get('shishi-user-role-repository');
        $roleDbProvider = new RoleDbProvider();
        $roleDbProvider->setRoleRepository($roleRepository);
        return $roleDbProvider;
    }
}
?>
And the RoleDbProvider class:
<?php
namespace ShishiUser\Role;

use ZfcRbac\Role\RoleProviderInterface;
use ShishiUser\Repository\RoleRepository;
use ZfcRbac\Exception\RoleNotFoundException;
use Rbac\Role\HierarchicalRole;
use Rbac\Role\Role;

class RoleDbProvider implements RoleProviderInterface
{
    public $roleRepository;

    public function getRoles(array $roleNames)
    {
        $roles = $this->roleRepository->findByLibelles($roleNames);
        $rbacRoles = [];
        if (count($roles) >= count($roleNames)) {
            foreach ($roles as $role) {
                if ($role->getChilds() !== null) {
                    $rbacRole = new HierarchicalRole($role->getRol_libelle());
                    foreach ((array) $role->getChilds() as $childRole) {
                        $rbacRole->addChild($childRole);
                    }
                } else {
                    $rbacRole = new Role($role->getRol_libelle());
                }
                // collect the role's permissions (empty list when it has none)
                $permissions = ($role->getRol_permissions() !== null) ? $role->getRol_permissions() : [];
                foreach ($permissions as $permission) {
                    $rbacRole->addPermission($permission->getPerm_libelle());
                }
                $rbacRoles[] = $rbacRole;
            }
            return $rbacRoles;
        }
        // We have roles that were asked but couldn't be found in database... problem!
        foreach ($roles as &$role) {
            $role = $role->getName();
        }
        throw new RoleNotFoundException(sprintf('Some roles were asked but could not be loaded from database: %s', implode(', ', array_diff($roleNames, $roles))));
    }

    public function setOptions($options)
    {
        $this->options = $options;
        return $this;
    }

    public function setRoleRepository(RoleRepository $roleRepository)
    {
        $this->roleRepository = $roleRepository;
        return $this;
    }
}
?>
I am trying to retrieve roles from my database; could you explain what I am doing wrong, please?
Thank you in advance
cordially
Ping @bakura10 for translation :P
Shishi, please remember to write in English; I'm the only French speaker here ^^. Also remember to add syntax highlighting to your code, it's impossible to read (https://help.github.com/articles/github-flavored-markdown/#syntax-highlighting).
Haha yup @danizord, I'll try to help him ;).
I have edited my question
Hi,
I think the error come from this:
class RoleProviderFactory implements FactoryInterface
{
    /* (non-PHPdoc)
     * @see \Zend\ServiceManager\FactoryInterface::createService()
     */
    public function createService(\Zend\ServiceManager\ServiceLocatorInterface $serviceLocator)
    {
        // TODO Auto-generated method stub
        $sm = $serviceLocator->get('ServiceManager');
        $roleRepository = $sm->get('shishi-user-role-repository');
        $roleDbProvider = new RoleDbProvider();
        $roleDbProvider->setRoleRepository($roleRepository);
        return $roleDbProvider;
    }
}
This object is constructed using a plugin manager (the role provider plugin manager).
The $serviceLocator you received in the factory is a plugin manager. If you want to retrieve the main service locator, you have to replace:
$sm = $serviceLocator->get('ServiceManager');
to:
$sm = $serviceLocator->getServiceLocator();
getServiceLocator is a method defined on each plugin manager that allows to retrieve the amin plugin manager.
Let me know if that works!
And thanks for translating into English ;).
Hi,
I get the same error when I replace this code:
$sm = $serviceLocator->get('ServiceManager');
with
$sm = $serviceLocator->getServiceLocator();
The error:
Fatal error: Uncaught exception 'Zend\ServiceManager\Exception\ServiceNotFoundException' with message 'ZfcRbac\Role\RoleProviderPluginManager::get was unable to fetch or create an instance for role-db-provider' in E:\Zend studio 12 Workspace\ShishiBlog\vendor\zendframework\zendframework\library\Zend\ServiceManager\ServiceManager.php:555 Stack trace: #0 E:\Zend studio 12 Workspace\ShishiBlog\vendor\zendframework\zendframework\library\Zend\ServiceManager\AbstractPluginManager.php(116): Zend\ServiceManager\ServiceManager->get('role-db-provide...', true) #1 E:\Zend studio 12 Workspace\ShishiBlog\vendor\zf-commons\zfc-rbac\src\ZfcRbac\Factory\RoleServiceFactory.php(56): Zend\ServiceManager\AbstractPluginManager->get('role-db-provide...', Array) #2 [internal function]: ZfcRbac\Factory\RoleServiceFactory->createService(Object(Zend\ServiceManager\ServiceManager), 'zfcrbacservicer...', 'ZfcRbac\\Service...') #3 E:\Zend studio 12 Workspace\ShishiBlog\vendor\zendframework\zendframework\library\Zend\ServiceManager\ServiceManager.php( in E:\Zend studio 12 Workspace\ShishiBlog\vendor\zendframework\zendframework\library\Zend\ServiceManager\ServiceManager.php on line 555
thanks a lot for your help
Ha maybe I get it. In your "role_provider" key you called it "role-db-provider", but in your plugin manager config you called it "role_db_provider". Those are different names!
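In other words, the two keys must be spelled identically, for example:

'role_provider_manager' => [
    'factories' => [
        'role_db_provider' => 'ShishiUser\Factory\RoleProviderFactory',
    ],
],
'role_provider' => [
    // must match the name registered above exactly (underscores, not hyphens)
    'role_db_provider' => [],
],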
I'll assume that the problem was the incorrect config key. Closing due to lack of activity.
| gharchive/issue | 2015-05-08T16:23:38 | 2025-04-01T06:37:43.161600 | {
"authors": [
"Shishi666",
"bakura10",
"danizord"
],
"repo": "ZF-Commons/zfc-rbac",
"url": "https://github.com/ZF-Commons/zfc-rbac/issues/295",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
269456512 | Bug: Unown has no levels in sent JSON
Hi, this is for battle type gen7randombattle.
I noticed the Unown JSON does not have the details attribute sent when the request action is sent.
Example websocket data received from the server:
|request|{"active":[{"moves":[{"move":"Mirror Coat","id":"m
...: irrorcoat","pp":32,"maxpp":32,"target":"scripted","disabled":false},{"move":"Le
...: ech Life","id":"leechlife","pp":16,"maxpp":16,"target":"normal","disabled":fals
...: e},{"move":"Liquidation","id":"liquidation","pp":16,"maxpp":16,"target":"normal
...: ","disabled":false},{"move":"Toxic","id":"toxic","pp":16,"maxpp":16,"target":"n
...: ormal","disabled":false}]}],"side":{"name":"Sooham Rafiz","id":"p1","pokemon":[
...: {"ident":"p1: Araquanid","details":"Araquanid, L79, F","condition":"237/237","a
...: ctive":true,"stats":{"atk":156,"def":191,"spa":125,"spd":254,"spe":112},"moves"
...: :["mirrorcoat","leechlife","liquidation","toxic"],"baseAbility":"waterbubble","
...: item":"leftovers","pokeball":"pokeball","ability":"waterbubble"},{"ident":"p1:
...: Unown","details":"Unown","condition":"258/258","active":false,"stats":{"atk":14
...: 9,"def":153,"spa":201,"spd":153,"spe":153},"moves":["hiddenpowerpsychic60"],"ba
...: seAbility":"levitate","item":"choicespecs","pokeball":"pokeball","ability":"lev
...: itate"},{"ident":"p1: Aurorus","details":"Aurorus, L81, M","condition":"331/331
...: ","active":false,"stats":{"atk":129,"def":163,"spa":207,"spd":196,"spe":141},"m
...: oves":["freezedry","stealthrock","blizzard","ancientpower"],"baseAbility":"snow
...: warning","item":"leftovers","pokeball":"pokeball","ability":"snowwarning"},{"id
...: ent":"p1: Tapu Fini","details":"Tapu Fini, L75","condition":"229/229","active":
...: false,"stats":{"atk":117,"def":216,"spa":186,"spd":239,"spe":171},"moves":["moo
...: nblast","surf","calmmind","substitute"],"baseAbility":"mistysurge","item":"left
...: overs","pokeball":"pokeball","ability":"mistysurge"},{"ident":"p1: Garbodor","d
...: etails":"Garbodor, L81, F","condition":"262/262","active":false,"stats":{"atk":
...: 201,"def":179,"spa":144,"spd":179,"spe":168},"moves":["toxic","gunkshot","haze"
...: ,"toxicspikes"],"baseAbility":"aftermath","item":"blacksludge","pokeball":"poke
...: ball","ability":"aftermath"},{"ident":"p1: Claydol","details":"Claydol, L81","c
...: ondition":"230/230","active":false,"stats":{"atk":160,"def":217,"spa":160,"spd"
...: :241,"spe":168},"moves":["toxic","earthquake","icebeam","rapidspin"],"baseAbili
...: ty":"levitate","item":"leftovers","pokeball":"pokeball","ability":"levitate"}]}
...: ,"rqid":2}
Notice the following:
{u'ability': u'levitate',
u'active': False,
u'baseAbility': u'levitate',
u'condition': u'258/258',
u'details': u'Unown',
u'ident': u'p1: Unown',
u'item': u'choicespecs',
u'moves': [u'hiddenpowerpsychic60'],
u'pokeball': u'pokeball',
u'stats': {u'atk': 149, u'def': 153, u'spa': 201, u'spd': 153, u'spe': 153}}
Usually details is of the format pokemon_ident, level, gender_if_any, e.g. Araquanid, L79, F.
level is left off of details if it's 100. Unown is level 100 in Random Battle.
details is documented in PROTOCOL.md in the |switch| major action.
https://github.com/Zarel/Pokemon-Showdown/blob/master/PROTOCOL.md#major-actions
Relevant excerpt of PROTOCOL.md:
DETAILS is a comma-separated list of all information about a pokemon visible on the battle screen: species, shininess, gender, and level. So it starts with SPECIES, adding , shiny if it's shiny, , M if it's male, , F if it's female, , L## if it's not level 100.
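For reference, a minimal Python sketch of parsing a DETAILS string per the excerpt above; the helper name and defaults are illustrative assumptions, not Pokemon Showdown's actual client code:

def parse_details(details: str) -> dict:
    """Parse a DETAILS string such as 'Araquanid, L79, F' or 'Unown'."""
    parts = [p.strip() for p in details.split(",")]
    info = {"species": parts[0], "shiny": False, "gender": None, "level": 100}
    for part in parts[1:]:
        if part == "shiny":
            info["shiny"] = True
        elif part in ("M", "F"):
            info["gender"] = part
        elif part.startswith("L"):
            info["level"] = int(part[1:])
    return info

# The level marker is omitted entirely at level 100, hence Unown's bare details:
assert parse_details("Unown")["level"] == 100
assert parse_details("Araquanid, L79, F") == {
    "species": "Araquanid", "shiny": False, "gender": "F", "level": 79}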
| gharchive/issue | 2017-10-30T01:18:36 | 2025-04-01T06:37:43.199735 | {
"authors": [
"Zarel",
"sooham"
],
"repo": "Zarel/Pokemon-Showdown",
"url": "https://github.com/Zarel/Pokemon-Showdown/issues/4094",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
418888112 | Refactor Gen 1/Stadium code to eliminate the modifiedStats table
Can you also see if you can eliminate Gen 1's reliance on the modifiedStats table? It probably doesn't need to be separate from storedStats.
Originally posted by @Zarel in https://github.com/Zarel/Pokemon-Showdown/pull/5274#issuecomment-470981380
I'm not sure what the idea behind using modifiedStats was, but it seems like we'd still need to use more than one stat table just for Transformed Pokemon. Gen 1 Transform keeps the user's original stats somewhere, and copies the current modified stats of the target, and stores the original stats of the target for the purposes of calculating damage during a critical hit.
I'm not sure what the idea behind using modifiedStats was, but it seems like we'd still need to use more than one stat table just for Transformed Pokemon.
We currently have baseStoredStats (user's original stats) and storedStats (where the transformed stats would be copied to).
and copies the current modified stats of the target
transformInto currently copies over volatiles and boosts to handle this, but I think this is where Gen 1 is weird (thanks to the Crystal_ discovery) and where the ordering of how boosts got applied relative to status forcing stat recalculation matters? And thus we need some way of tracking that, hence the current modifiedStats table?
and stores the original stats of the target for the purposes of calculating damage during a critical hit.
Do we do this today :S? It seems like we'd need another stats table for that.
From what you're saying, it seems we need at least 3 stats tables (user's original, target's modified + original), though I'm not sure the 3 we currently have cover everything.
Yes, I think baseStoredStats, storedStats, and a third table would be necessary just for Transform. The current modifiedStats can be rolled into storedStats safely.
I also think there's probably a bug with how Transform currently calculates stats but I haven't had time to look into it yet. https://www.smogon.com/forums/threads/gen-1-and-tradebacks-dev-post-bugs-here.3524844/page-14#post-8064239
| gharchive/issue | 2019-03-08T17:41:39 | 2025-04-01T06:37:43.206054 | {
"authors": [
"Marty-D",
"scheibo"
],
"repo": "Zarel/Pokemon-Showdown",
"url": "https://github.com/Zarel/Pokemon-Showdown/issues/5276",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
106421220 | Add -ability tag to Imposter activation
The other option is to modify transformInto in such a way that it broadcasts -transform messages to use the correct [from] parameter, then add a client change to check for it.
Either way will allow Imposter to be properly tracked by the client.
I was going to say the other option would be better, since otherwise Imposter will be revealed even when it doesn't activate against Illusion and substitutes.
Imposter doesn't activate on Illusion and substitutes? Oh, my bad.
| gharchive/pull-request | 2015-09-14T20:27:44 | 2025-04-01T06:37:43.208113 | {
"authors": [
"Marty-D",
"ascriptmaster"
],
"repo": "Zarel/Pokemon-Showdown",
"url": "https://github.com/Zarel/Pokemon-Showdown/pull/2147",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1620197596 | 🛑 Panel Excelencia is down
In b887ef5, Panel Excelencia (http://excelenciadigital.info:3002/PaneldeValidacion/Login.Aspx) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Panel Excelencia is back up in 65fc7f1.
| gharchive/issue | 2023-03-12T01:19:59 | 2025-04-01T06:37:43.210597 | {
"authors": [
"ZarzueloM"
],
"repo": "ZarzueloM/status",
"url": "https://github.com/ZarzueloM/status/issues/111",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
956053562 | Reject checkpointed blocks that would increase note commitment trees beyond their max sizes
Motivation
This was an implicit consensus rule, now made explicit.
Specifications
Designs
Related Work
Zebra already returns an error in all relevant cases:
the non-finalized state returns an error if a note commitment tree becomes full
the checkpoint verifier binds the transaction IDs to the block header
v1-v4 transactions bind all transaction data to the transaction ID
v5 transactions bind note commitments to the transaction ID
| gharchive/issue | 2021-07-29T17:23:06 | 2025-04-01T06:37:43.220717 | {
"authors": [
"dconnolly",
"teor2345"
],
"repo": "ZcashFoundation/zebra",
"url": "https://github.com/ZcashFoundation/zebra/issues/2544",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1048479696 | 🛑 Game Server Proxy (prod) is down
In 5d171d3, Game Server Proxy (prod) (https://proxy.zebr-a.com/api/Version) was down:
HTTP code: 502
Response time: 328 ms
Resolved: Game Server Proxy (prod) is back up in 772dc47.
| gharchive/issue | 2021-11-09T11:12:46 | 2025-04-01T06:37:43.223223 | {
"authors": [
"zlumer"
],
"repo": "Zebrainy/upptime",
"url": "https://github.com/Zebrainy/upptime/issues/214",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
655207250 | log entries are off by one
I am using version 2.2.6 of the adapter with an nuki soft bridge adapter installation.
The entries in nuki-extended.0.smartlocks.<lock>.logs are always missing the last entry.
on('nuki-extended.0.smartlocks.<lock>.logs', () => {
    const log = JSON.parse(getState('nuki-extended.0.smartlocks.<lock>.logs').val);
    console.log(log[0]);
});
is only logging the action before the last one.
I am able to validate this by manually checking the log entries from the object after any action.
How can i get the most recent action (with triggering user) triggered on the lock?
I can't confirm this with my installation.
The log entries are retrieved from the Web API, which means that it matches the log you can find on https://web.nuki.io/. Could you please check if the most recent entries are available or missing there as well?
Furthermore, the up-to-dateness of the log depends on the refresh time you have set in the adapter settings for the Nuki Web API. What refresh frequency have you set?
Thanks for your input @Zefau
The refresh frequency had been set to 0, but the log object was still refreshed/changed on actions on the Nuki (always 1 entry off).
I changed the frequency to 5 seconds, now its working fine.
Where do the log changes come from while the setting is set to 0?
could the issue be https://github.com/Zefau/ioBroker.nuki-extended/blob/f655657a5f8689fb684a8f7c6eb2c7463cdf879a/nuki-extended.js#L386 ?
On a callback from the bridge for any action, the adapter refreshes the Web API as well.
But if there is any latency between the bridge and the Nuki web log, the log may not contain the most recent update from the bridge yet.
This would explain the off-by-one error.
Yes, I guess that's it
I quickly added a timeout for this case. Could you install the current Github version (no version number change for now) and verify the fix?
Kind of working - the Nuki log is really weird.
I tried verifying the delay by timing from the action until the logs became available in https://web.nuki.io/#/pages/activity-log (pressing "clear filter" continuously, as this fetches the logs again).
The logs are sometimes delayed by 2-3 minutes - not sure if this has something to do with Nuki's Android bridge.
Is it currently possible to trigger nukiWebApi.getSmartlockLogs by script?
So it is fixed with regards to the ioBroker adapter, but not fixed with regards to Nuki Bridge vs. Web API ?
I assigned foreign issue to this. Let's see what answer you get in the Nuki Developer forum.
Yes, it cannot be fixed within this adapter, as the Nuki web logs are delayed themselves.
The software bridge is not able to provide logs according to the bridge API documentation; I'm not sure about the details the hardware bridge could provide.
Hopefully nuki will provide a solution from their side, otherwise this adapter should mention the log delay within the documentation and maybe provide some kind of trigger object to fetch logs from web api again.
This way scripts would be able to handle the unknown delay to get details about the last action from a bridge-callback (e.g. the user which triggered the action).
Also nuki notification callbacks could be a solution for this, but they are still in beta.
I will keep this up to date if there is any response from Nuki.
| gharchive/issue | 2020-07-11T13:18:53 | 2025-04-01T06:37:43.256834 | {
"authors": [
"Zefau",
"cyptus"
],
"repo": "Zefau/ioBroker.nuki-extended",
"url": "https://github.com/Zefau/ioBroker.nuki-extended/issues/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
268598064 | Airboat sometimes corrupts the view.
Seems to happen when one bumps into stuff with a bigger latency.
This was fixed in Garry's Mod; if it happens again I'll reopen it.
| gharchive/issue | 2017-10-26T01:09:40 | 2025-04-01T06:37:43.264434 | {
"authors": [
"ZehMatt"
],
"repo": "ZehMatt/Lambda",
"url": "https://github.com/ZehMatt/Lambda/issues/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1246783767 | 2048 Game
🛠️ Fixes Issue (Number)
#677
I have made a 2048 game using JavaScript.
✅ Check List (Check all the applicable boxes)
[ ✅ ] My code doesn't break any part of the project (Zero Octave-Javascript-Projects).
[ ✅ ] This PR does not contain plagiarized content.
[ ✅ ] My Addition/Changes works properly and matches the overall repo pattern.
[ ✅ ] The title of my pull request is a short description of the requested changes.
📷 Screenshots
don't delete any line of code from readme just add yours
@Astrodevil Please review the changes once. I have updated the README.md and cards.json files.
| gharchive/pull-request | 2022-05-24T16:05:38 | 2025-04-01T06:37:43.290348 | {
"authors": [
"NOiR-07",
"SAMRIDHISINHA"
],
"repo": "ZeroOctave/ZeroOctave-Javascript-Projects",
"url": "https://github.com/ZeroOctave/ZeroOctave-Javascript-Projects/pull/700",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2073816723 | Rotating Causes Collider to Jitter on Y-Axis on Remote Client
For reasons I'm not able to investigate right now, rotating the player object causes the Collider game object to drift on the Y-axis. This seems to result from the delayed loop and some constraint behavior, or from an incorrect understanding on my part.
The issue appears to be unrelated to my setup; it's more likely a bug in VRChat?
https://github.com/Zexxx/Smart-Floor-Collider/assets/21136842/450426da-bf49-4abd-9ed2-ac1374d17b78
| gharchive/issue | 2024-01-10T08:07:17 | 2025-04-01T06:37:43.330256 | {
"authors": [
"Zexxx"
],
"repo": "Zexxx/Smart-Floor-Collider",
"url": "https://github.com/Zexxx/Smart-Floor-Collider/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
400095780 | 第三章 3.5.1 节 代码清单 3-63
Page and line number
Page 85
Listing 3-63
Text or typesetting errors
None
Code error
match & x { // this should be match &*x instead
    //...
}
It could also be:
match &x[..] { // or: match x.as_ref() {
    // ...
}
This match behavior depends on the Rust compiler version; the 2021 edition made many improvements to match ergonomics.
| gharchive/issue | 2019-01-17T03:30:09 | 2025-04-01T06:37:43.335635 | {
"authors": [
"ZhangHanDong",
"codergege",
"m104ngc4594"
],
"repo": "ZhangHanDong/tao-of-rust-codes",
"url": "https://github.com/ZhangHanDong/tao-of-rust-codes/issues/86",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Decoding msgtype -2007 (marketface sticker emoji)
Decoded using the approach from https://github.com/ZhangJun2017/QQChatHistoryExporter/issues/4.
You can see that the keys and values are already quite obvious, but decoding still requires rewriting the class structure to match them; plain Java code alone can't get further here.
java -jar SerializationDumper-v1.13.jar ACED000573720029636F6D2E74656E63656E742E6D6F62696C6571712E646174612E4D61726B466163654D6573736167650000000000018F4E02000E4900096346616365496E666F490008635375625479706549000D64774D53474974656D547970654900076477546162494449000B696D61676548656967687449000A696D61676557696474684A0005696E6465784900096D65646961547970654900057753697A654C0008666163654E616D657400124C6A6176612F6C616E672F537472696E673B5B000B6D6F62696C65706172616D7400025B425B0008726573764174747271007E00025B00067362664B657971007E00025B000673627566494471007E000278700000000100000003000000060003106B000000C8000000C800000000000000000000000000000025740003E795A5757200025B42ACF317F8060854E00200007870000000007571007E0005000000120A0608AC0210AC020A0608C80110C80140017571007E000500000010653233323937616361363634343037327571007E000500000010F6FF8700520167181B71D1286B673C2A
STREAM_MAGIC - 0xac ed
STREAM_VERSION - 0x00 05
Contents
TC_OBJECT - 0x73
TC_CLASSDESC - 0x72
className
Length - 41 - 0x00 29
Value - com.tencent.mobileqq.data.MarkFaceMessage - 0x636f6d2e74656e63656e742e6d6f62696c6571712e646174612e4d61726b466163654d657373616765
serialVersionUID - 0x00 00 00 00 00 01 8f 4e
newHandle 0x00 7e 00 00
classDescFlags - 0x02 - SC_SERIALIZABLE
fieldCount - 14 - 0x00 0e
Fields
0:
Int - I - 0x49
fieldName
Length - 9 - 0x00 09
Value - cFaceInfo - 0x6346616365496e666f
1:
Int - I - 0x49
fieldName
Length - 8 - 0x00 08
Value - cSubType - 0x6353756254797065
2:
Int - I - 0x49
fieldName
Length - 13 - 0x00 0d
Value - dwMSGItemType - 0x64774d53474974656d54797065
3:
Int - I - 0x49
fieldName
Length - 7 - 0x00 07
Value - dwTabID - 0x64775461624944
4:
Int - I - 0x49
fieldName
Length - 11 - 0x00 0b
Value - imageHeight - 0x696d616765486569676874
5:
Int - I - 0x49
fieldName
Length - 10 - 0x00 0a
Value - imageWidth - 0x696d6167655769647468
6:
Long - L - 0x4a
fieldName
Length - 5 - 0x00 05
Value - index - 0x696e646578
7:
Int - I - 0x49
fieldName
Length - 9 - 0x00 09
Value - mediaType - 0x6d6564696154797065
8:
Int - I - 0x49
fieldName
Length - 5 - 0x00 05
Value - wSize - 0x7753697a65
9:
Object - L - 0x4c
fieldName
Length - 8 - 0x00 08
Value - faceName - 0x666163654e616d65
className1
TC_STRING - 0x74
newHandle 0x00 7e 00 01
Length - 18 - 0x00 12
Value - Ljava/lang/String; - 0x4c6a6176612f6c616e672f537472696e673b
10:
Array - [ - 0x5b
fieldName
Length - 11 - 0x00 0b
Value - mobileparam - 0x6d6f62696c65706172616d
className1
TC_STRING - 0x74
newHandle 0x00 7e 00 02
Length - 2 - 0x00 02
Value - [B - 0x5b42
11:
Array - [ - 0x5b
fieldName
Length - 8 - 0x00 08
Value - resvAttr - 0x7265737641747472
className1
TC_REFERENCE - 0x71
Handle - 8257538 - 0x00 7e 00 02
12:
Array - [ - 0x5b
fieldName
Length - 6 - 0x00 06
Value - sbfKey - 0x7362664b6579
className1
TC_REFERENCE - 0x71
Handle - 8257538 - 0x00 7e 00 02
13:
Array - [ - 0x5b
fieldName
Length - 6 - 0x00 06
Value - sbufID - 0x736275664944
className1
TC_REFERENCE - 0x71
Handle - 8257538 - 0x00 7e 00 02
classAnnotations
TC_ENDBLOCKDATA - 0x78
superClassDesc
TC_NULL - 0x70
newHandle 0x00 7e 00 03
classdata
com.tencent.mobileqq.data.MarkFaceMessage
values
cFaceInfo
(int)1 - 0x00 00 00 01
cSubType
(int)3 - 0x00 00 00 03
dwMSGItemType
(int)6 - 0x00 00 00 06
dwTabID
(int)200811 - 0x00 03 10 6b
imageHeight
(int)200 - 0x00 00 00 c8
imageWidth
(int)200 - 0x00 00 00 c8
index
(long)0 - 0x00 00 00 00 00 00 00 00
mediaType
(int)0 - 0x00 00 00 00
wSize
(int)37 - 0x00 00 00 25
faceName
(object)
TC_STRING - 0x74
newHandle 0x00 7e 00 04
Length - 3 - 0x00 03
Value - ??? - 0xe795a5
mobileparam
(array)
TC_ARRAY - 0x75
TC_CLASSDESC - 0x72
className
Length - 2 - 0x00 02
Value - [B - 0x5b42
serialVersionUID - 0xac f3 17 f8 06 08 54 e0
newHandle 0x00 7e 00 05
classDescFlags - 0x02 - SC_SERIALIZABLE
fieldCount - 0 - 0x00 00
classAnnotations
TC_ENDBLOCKDATA - 0x78
superClassDesc
TC_NULL - 0x70
newHandle 0x00 7e 00 06
Array size - 0 - 0x00 00 00 00
Values
resvAttr
(array)
TC_ARRAY - 0x75
TC_REFERENCE - 0x71
Handle - 8257541 - 0x00 7e 00 05
newHandle 0x00 7e 00 07
Array size - 18 - 0x00 00 00 12
Values
Index 0:
(byte)10 - 0x0a
Index 1:
(byte)6 - 0x06
Index 2:
(byte)8 - 0x08
Index 3:
(byte)-84 - 0xac
Index 4:
(byte)2 - 0x02
Index 5:
(byte)16 - 0x10
Index 6:
(byte)-84 - 0xac
Index 7:
(byte)2 - 0x02
Index 8:
(byte)10 - 0x0a
Index 9:
(byte)6 - 0x06
Index 10:
(byte)8 - 0x08
Index 11:
(byte)-56 - 0xc8
Index 12:
(byte)1 - 0x01
Index 13:
(byte)16 - 0x10
Index 14:
(byte)-56 - 0xc8
Index 15:
(byte)1 - 0x01
Index 16:
(byte)64 (ASCII: @) - 0x40
Index 17:
(byte)1 - 0x01
sbfKey
(array)
TC_ARRAY - 0x75
TC_REFERENCE - 0x71
Handle - 8257541 - 0x00 7e 00 05
newHandle 0x00 7e 00 08
Array size - 16 - 0x00 00 00 10
Values
Index 0:
(byte)101 (ASCII: e) - 0x65
Index 1:
(byte)50 (ASCII: 2) - 0x32
Index 2:
(byte)51 (ASCII: 3) - 0x33
Index 3:
(byte)50 (ASCII: 2) - 0x32
Index 4:
(byte)57 (ASCII: 9) - 0x39
Index 5:
(byte)55 (ASCII: 7) - 0x37
Index 6:
(byte)97 (ASCII: a) - 0x61
Index 7:
(byte)99 (ASCII: c) - 0x63
Index 8:
(byte)97 (ASCII: a) - 0x61
Index 9:
(byte)54 (ASCII: 6) - 0x36
Index 10:
(byte)54 (ASCII: 6) - 0x36
Index 11:
(byte)52 (ASCII: 4) - 0x34
Index 12:
(byte)52 (ASCII: 4) - 0x34
Index 13:
(byte)48 (ASCII: 0) - 0x30
Index 14:
(byte)55 (ASCII: 7) - 0x37
Index 15:
(byte)50 (ASCII: 2) - 0x32
sbufID
(array)
TC_ARRAY - 0x75
TC_REFERENCE - 0x71
Handle - 8257541 - 0x00 7e 00 05
newHandle 0x00 7e 00 09
Array size - 16 - 0x00 00 00 10
Values
Index 0:
(byte)-10 - 0xf6
Index 1:
(byte)-1 - 0xff
Index 2:
(byte)-121 - 0x87
Index 3:
(byte)0 - 0x00
Index 4:
(byte)82 (ASCII: R) - 0x52
Index 5:
(byte)1 - 0x01
Index 6:
(byte)103 (ASCII: g) - 0x67
Index 7:
(byte)24 - 0x18
Index 8:
(byte)27 - 0x1b
Index 9:
(byte)113 (ASCII: q) - 0x71
Index 10:
(byte)-47 - 0xd1
Index 11:
(byte)40 (ASCII: () - 0x28
Index 12:
(byte)107 (ASCII: k) - 0x6b
Index 13:
(byte)103 (ASCII: g) - 0x67
Index 14:
(byte)60 (ASCII: <) - 0x3c
Index 15:
(byte)42 (ASCII: *) - 0x2a
Sample file
6618684157263489480.txt
sudo pip install javaobj-py3
Python 3.8.10 (default, Mar 15 2022, 12:22:08)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import javaobj
>>> j = javaobj.JavaObjectUnmarshaller(open('6618684157263489480.txt', 'rb')).readObject()
>>> j.
j.annotations j.cSubType j.dwMSGItemType j.faceName j.imageHeight j.index j.mobileparam j.sbfKey j.wSize
j.cFaceInfo j.classdesc j.dwTabID j.get_class( j.imageWidth j.mediaType j.resvAttr j.sbufID
>>> j.
By reverse-engineering the QQ installer, I obtained the definition of the sticker entity class.
Definitions of other message formats can be obtained the same way.
The reverse-engineering tool is pxb1988/dex2jar.
public class MarkFaceMessage implements Serializable
{
public static final long serialVersionUID = 102222L;
public String backColor;
public long beginTime = 0L;
public int cFaceInfo = 1;
public int cSubType = 3;
public String copywritingContent;
public int copywritingType = 0;
public int dwMSGItemType = 6;
public int dwTabID;
public long endTime = 0L;
public String faceName = null;
public String from;
public boolean hasIpProduct = false;
public int imageHeight = 0;
public int imageWidth = 0;
public long index = 0L;
public boolean isAPNG = false;
public boolean isReword = false;
public String jumpUrl;
public int mediaType = 0;
public byte[] mobileparam;
public byte[] resvAttr;
public byte[] sbfKey;
public byte[] sbufID;
public boolean shouldDisplay = false;
public boolean showIpProduct = false;
public StickerInfo stickerInfo = null;
public List<Integer> voicePrintItems;
public String volumeColor;
public int wSize = 37;
}
Thanks! I don't know Python either. I looked at the https://pypi.org/project/javaobj-py3/ docs but couldn't figure out how to iterate over the keys.
Could you show me how to traverse the j object, convert it to JSON, and write it to a new file?
I found one here: https://github.com/tcalmant/python-javaobj/issues/42#issuecomment-631925681. The source is below; it should be fine, right?
from json import JSONEncoder

class MyCustomEncoder(JSONEncoder):
    def default(self, o):
        return o.__dict__

import javaobj
import json

j = javaobj.JavaObjectUnmarshaller(open('emoji.txt', 'rb')).readObject()
data = json.dumps(j, cls=MyCustomEncoder)
print(data)
The decoded result isn't perfect, but it's usable. It probably still needs to be done natively in Java.
{
"classdesc": {
"name": "com.tencent.mobileqq.data.MarkFaceMessage",
"serialVersionUID": 102222,
"flags": 2,
"fields_names": [
"cFaceInfo",
"cSubType",
"dwMSGItemType",
"dwTabID",
"imageHeight",
"imageWidth",
"index",
"mediaType",
"wSize",
"faceName",
"mobileparam",
"resvAttr",
"sbfKey",
"sbufID"
],
"fields_types": ["I", "I", "I", "I", "I", "I", "J", "I", "I", "Ljava/lang/String;", "[B", "[B", "[B", "[B"],
"superclass": null
},
"annotations": [],
"cFaceInfo": 1,
"cSubType": 3,
"dwMSGItemType": 6,
"dwTabID": 107538,
"imageHeight": 200,
"imageWidth": 200,
"index": 0,
"mediaType": 0,
"wSize": 37,
"faceName": "吃饺子",
"mobileparam": {
"classdesc": {
"name": "[B",
"serialVersionUID": -5984413125824720000,
"flags": 2,
"fields_names": [],
"fields_types": [],
"superclass": null
},
"annotations": [],
"_data": []
},
"resvAttr": {
"classdesc": {
"name": "[B",
"serialVersionUID": -5984413125824720000,
"flags": 2,
"fields_names": [],
"fields_types": [],
"superclass": null
},
"annotations": [],
"_data": [10, 6, 8, -84, 2, 16, -84, 2, 10, 6, 8, -56, 1, 16, -56, 1, 64, 1]
},
"sbfKey": {
"classdesc": {
"name": "[B",
"serialVersionUID": -5984413125824720000,
"flags": 2,
"fields_names": [],
"fields_types": [],
"superclass": null
},
"annotations": [],
"_data": [51, 101, 98, 98, 50, 56, 101, 57, 100, 55, 101, 55, 49, 100, 57, 100]
},
"sbufID": {
"classdesc": {
"name": "[B",
"serialVersionUID": -5984413125824720000,
"flags": 2,
"fields_names": [],
"fields_types": [],
"superclass": null
},
"annotations": [],
"_data": [101, -72, 46, -83, -1, 8, -111, -111, -8, -48, -86, -27, -70, 85, -66, 104]
}
}
Decoded it! But I don't know how to match it to the corresponding file.
{
"index": 0,
"faceName": "吃饺子",
"dwMSGItemType": 6,
"cFaceInfo": 1,
"wSize": 37,
"sbufID": [101, -72, 46, -83, -1, 8, -111, -111, -8, -48, -86, -27, -70, 85, -66, 104],
"dwTabID": 107538,
"cSubType": 3,
"hasIpProduct": false,
"showIpProduct": false,
"sbfKey": [51, 101, 98, 98, 50, 56, 101, 57, 100, 55, 101, 55, 49, 100, 57, 100],
"mediaType": 0,
"imageWidth": 200,
"imageHeight": 200,
"mobileparam": [],
"resvAttr": [10, 6, 8, -84, 2, 16, -84, 2, 10, 6, 8, -56, 1, 16, -56, 1, 64, 1],
"isReword": false,
"copywritingType": 0,
"copywritingContent": "null",
"jumpUrl": "null",
"shouldDisplay": false,
"stickerInfo": null
}
From https://luotianyi.vc/391.html I learned:
The GIFs stored in the folder are simply obfuscated. Using a hex editor, change the bytes 47 48 46 39 39 60 xx xx at offset 00000000 to the standard GIF89a header 47 49 46 38 39 61 xx xx, and the file will be recognized correctly.
To solve the problem that "stickers above 100k can't be extracted this way": take the hex in groups of four digits; if the group is even, add 1; if it is odd, subtract 1. For example, 4748 is even, +1 gives 4749; 322F is odd, -1 gives 322E. The whole file header has to be converted group by group until the unobfuscated part begins. For example, "4E44 5452 4340 5044 322F 3002 0101 0001 21FE 0B59 4D51 2045 6175 6159" decodes to "4E45 5453 4341 5045 322E 3003 0100 0000 21FF 0B58 4D50 2044 6174 6158". This approach solves the problem that "converting only the file header fails".
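Since "even +1, odd -1" on a four-hex-digit (16-bit) group is the same as flipping the group's lowest bit, the transformation reduces to XOR-ing every second byte with 1. A minimal Python sketch based only on the description above (the function name is an assumption, and how far into the file the obfuscation extends is still an open question):

def deobfuscate_header(data: bytes, n=None) -> bytes:
    """Undo the 'even +1, odd -1' per-16-bit-group obfuscation on the
    first n bytes (default: the whole buffer)."""
    n = len(data) if n is None else n
    head = bytearray(data[:n])
    for i in range(1, len(head), 2):
        head[i] ^= 1  # flip the low bit of each 2-byte group
    return bytes(head) + data[n:]

# Round-trips the worked example from the quote above:
obf = bytes.fromhex("4E44545243405044322F30020101000121FE0B594D51204561756159")
dec = bytes.fromhex("4E45545343415045322E30030100000021FF0B584D50204461746158")
assert deobfuscate_header(obf) == dec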
It would be great to get an original file to compare against its obfuscated version.
I found two sample pairs; the obfuscated file and the source file differ in size, but the header obfuscation is indeed: groups of four hex digits, even +1, odd -1.
{
"imageWidth": 200,
"sbufID": [
41, -64, -90, 30, 33, -87, -83, -93, -47, 8, 16, -71, -18, -4, -104, -102
],
"copywritingType": 0,
"index": 0,
"cFaceInfo": 1,
"showIpProduct": false,
"mediaType": 0,
"wSize": 37,
"imageHeight": 200,
"faceName": "哼",
"dwTabID": 195484,
"hasIpProduct": false,
"resvAttr": [
10, 6, 8, -84, 2, 16, -84, 2, 10, 6, 8, -56, 1, 16, -56, 1, 64, 1
],
"mobileparam": [],
"sbfKey": [49, 97, 54, 57, 55, 54, 99, 98, 56, 48, 50, 56, 99, 99, 102, 101],
"cSubType": 3,
"dwMSGItemType": 6,
"isReword": false,
"shouldDisplay": false
}
To keep the images from being compressed, I put them in a Zip archive.
15094.zip
In some directories you can find a /Android/data/com.tencent.mobileqq/Tencent/MobileQQ/.emotionsm/[dwTabID]/[dwTabID].jtmp file in JSON format, from which you can get the sticker pack's name (Package Name).
But for sticker packs without a jtmp file, I don't know how to get the pack name (Package Name).
Opening the corresponding sticker in Android QQ shows the pack name, but I don't know where it's stored.
qqfav_[QQ number].db is probably deprecated; its modification date is still from last year.
Thanks! Decoding and export are now supported: https://github.com/ZhangJun2017/QQChatHistoryExporter/commit/bfcf855be67e9f3320de5c5f9e5b2ace3cc7b845
| gharchive/issue | 2022-05-26T07:38:14 | 2025-04-01T06:37:43.354461 | {
"authors": [
"StephenZeng-Wonder",
"ZhangJun2017",
"demobin8",
"fchunfen",
"lqzhgood"
],
"repo": "ZhangJun2017/QQChatHistoryExporter",
"url": "https://github.com/ZhangJun2017/QQChatHistoryExporter/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Add support for HTML and XML
Will consider it for the next release.
Added support for XML; HTML will not be supported.
| gharchive/issue | 2015-11-14T11:05:58 | 2025-04-01T06:37:43.368173 | {
"authors": [
"ZhaoKaiQiang"
],
"repo": "ZhaoKaiQiang/KLog",
"url": "https://github.com/ZhaoKaiQiang/KLog/issues/1",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1096967066 | 2022-01-09
2022-01-09
function slowestKey(releaseTimes: number[], keysPressed: string): string {
    let key = ''
    let time = 0
    let lastKeyTime = 0
    const len = releaseTimes.length
    for (let i = 0; i < len; i += 1) {
        const tempTime = releaseTimes[i] - lastKeyTime
        const char = keysPressed[i]
        if (tempTime > time) {
            key = char
            time = tempTime
        }
        if (tempTime === time) {
            key = key >= char ? key : char
        }
        lastKeyTime = releaseTimes[i]
    }
    return key
};
#
# @lc app=leetcode.cn id=1629 lang=python3
#
# [1629] Slowest Key
#
# @lc code=start
from typing import List
class Solution:
    def slowestKey(self, releaseTimes: List[int], keysPressed: str) -> str:
        res_map = {}
        for i, key in enumerate(keysPressed):
            res_map[key] = max(
                res_map.get(key, 0), releaseTimes[i] - (releaseTimes[i - 1] if i else 0)
            )
        # ord(key)/1000 is < 1, so it only breaks ties between equal durations
        # in favor of the lexicographically larger key.
        return max(res_map, key=lambda key: res_map[key] + ord(key) / 1000)
# @lc code=end
WeChat id: 而我撑伞
From the vscode plugin
/*
* @lc app=leetcode.cn id=1629 lang=csharp
*
 * [1629] Slowest Key
*/
// @lc code=start
public class Solution
{
    public char SlowestKey(int[] releaseTimes, string keysPressed)
    {
        var key = keysPressed[0];
        var max = releaseTimes[0];
        for (int i = 1; i < keysPressed.Length; i++)
        {
            var elapsed = releaseTimes[i] - releaseTimes[i - 1];
            if (elapsed < max)
            {
                continue;
            }
            if (elapsed == max)
            {
                key = keysPressed[i] > key ? keysPressed[i] : key;
                continue;
            }
            key = keysPressed[i];
            max = elapsed;
        }
        return key;
    }
}
// @lc code=end
WeChat id: 我要成為 Dr. 🥬
From the vscode plugin
| gharchive/issue | 2022-01-08T16:01:16 | 2025-04-01T06:37:43.371682 | {
"authors": [
"ISchatz",
"Zheaoli",
"jingkecn",
"jinmu333",
"sishenhei7"
],
"repo": "Zheaoli/do-something-right",
"url": "https://github.com/Zheaoli/do-something-right/issues/110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1820234744 | Code Error.
We have fixed it.
| gharchive/issue | 2023-07-25T12:32:50 | 2025-04-01T06:37:43.381870 | {
"authors": [
"LemonQC"
],
"repo": "Zhiquan-Wen/DDG",
"url": "https://github.com/Zhiquan-Wen/DDG/issues/3",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1269411837 | Fix reading Groups token information when compiling for x86_64
The struct we're reading from the buffer is defined as such:
typedef struct _TOKEN_GROUPS {
DWORD GroupCount;
SID_AND_ATTRIBUTES *Groups[];
} TOKEN_GROUPS, *PTOKEN_GROUPS;
which would translate to this in rust:
pub struct _TOKEN_GROUPS {
pub GroupCount: u32,
Groups: *const c_void,
}
The previous code used to assume that the offset of Groups was 4,
which is true on i686 but not on x86_64.
Output of rustc -Zprint-type-sizes on i686
print-type-size type: `_TOKEN_GROUPS`: 8 bytes, alignment: 4 bytes
print-type-size field `.GroupCount`: 4 bytes
print-type-size field `.Groups`: 4 bytes
Output of rustc -Zprint-type-sizes on x86_64
print-type-size type: `_TOKEN_GROUPS`: 16 bytes, alignment: 8 bytes
print-type-size field `.GroupCount`: 4 bytes
print-type-size padding: 4 bytes
print-type-size field `.Groups`: 8 bytes, alignment: 8 bytes
Oh dear god, I forgot I made this. I'll assume you tested this and it's all good.
| gharchive/pull-request | 2022-06-13T13:07:27 | 2025-04-01T06:37:43.505701 | {
"authors": [
"Eijebong",
"ZoeyR"
],
"repo": "ZoeyR/windows-accesstoken",
"url": "https://github.com/ZoeyR/windows-accesstoken/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1397248998 | Added a java program to find the number of unique characters in a string
This java program finds the number of unique characters in a string.
@pritika163 kindly move program to Java/Data Structures/String problems
@pritika163 Your PR has been merged successfully. Thank you for your contribution.
Give repo a star if it was useful.
Happy hacking.
| gharchive/pull-request | 2022-10-05T05:22:56 | 2025-04-01T06:37:43.507560 | {
"authors": [
"Zohaib-Sathio",
"pritika163"
],
"repo": "Zohaib-Sathio/Hacktoberfest_22",
"url": "https://github.com/Zohaib-Sathio/Hacktoberfest_22/pull/76",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2476738613 | A Soul's Bane: Confusion Beast Room
The quest helper says to find the correct confusion beast and highlights all of them. The correct confusion beast to attack is always the one with NPC ID 1067.
Fix should be pretty simple: remove this line and update the message above it: https://github.com/Zoinkwiz/quest-helper/blob/master/src/main/java/com/questhelper/helpers/quests/asoulsbane/ASoulsBane.java#L249-L250
This is intentional, as we don't want to provide information to the player that they'd otherwise be unable to obtain themselves, as adjusted in https://github.com/Zoinkwiz/quest-helper/commit/530d2ee28fd4a6a712e2da8a1bcde2aa8646dccf.
| gharchive/issue | 2024-08-21T00:09:38 | 2025-04-01T06:37:43.509793 | {
"authors": [
"Zoinkwiz",
"larsiny"
],
"repo": "Zoinkwiz/quest-helper",
"url": "https://github.com/Zoinkwiz/quest-helper/issues/1710",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
357228672 | SkeletonStray & ZombieHusk
Hi,
I report that these two creatures don't give any payment from the jobs for killing them. I just installed the plugin (which means I have the default jobConfig.yml) and added the Stray and Husk with the same gains as their parents (Skeleton and Zombie), but when I kill them, there is nothing.
STRAY:
    income: 10.0
    points: 10
    experience: 15
Spigot 1.12.2
Jobs 4.8.0
Vault 1.6.1
There are many items/blocks missing from the default jobConfig.yml file. It hasn't been kept as current as the release, so you'll need to add/remove the items for each job as you like.
FWIW - I've been going thru the same file the last couple days and revising it quite a bit to pull in more recent released items, converting all numeric ID's to namespaceID's and making each job be more of real life scenario. I'll be sharing with @Zrips once I'm done so he can decide to include it as a new default, revise as he deems necessary or not use it at all. :-)
I know there are some missing; that's why I added them to my jobs. But like I said earlier, Stray and Husk didn't work and I don't know why, because polar_bear, evoker, etc. all work except for these two.
I even tried to use their IDs from words_en.yml, but without success.
081edff7cb733cdcf528bd1f1567ddbccb0357bb
| gharchive/issue | 2018-09-05T13:26:00 | 2025-04-01T06:37:43.603957 | {
"authors": [
"Zrips",
"smmmadden",
"weberlepecheur"
],
"repo": "Zrips/Jobs",
"url": "https://github.com/Zrips/Jobs/issues/230",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
272269734 | phylopandas DataFrame is not really a phylopandas DataFrame
Need to figure out how to make sure all pandas.DataFrame methods return phylopandas.DataFrames when called by phylopandas.
Solved in #9
| gharchive/issue | 2017-11-08T16:49:08 | 2025-04-01T06:37:43.606986 | {
"authors": [
"Zsailer"
],
"repo": "Zsailer/phylopandas",
"url": "https://github.com/Zsailer/phylopandas/issues/7",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1062725803 | Be able to set working directory or use the project path as default working directory
What would you like to be added:
As far as I can tell, horusec assumes that the current directory as the working dir. It would be useful if horusec supports a different working dir or if treats the project path as so.
Actual usage.
$ cd `project diretory` (where my horusec-config.json is in)
$ ./horusec start -e -p ./
If I don't do the first step, horusec does not find horusec-config.json file. Would be very nice if it does.
I could pass the working dir:
$ ./horusec start -e -p ./ -W ./
Or use the project path as the working dir (though maybe that would be a breaking change).
Why is this needed:
Basically, I wanted to create a Gradle task that wraps the horusec binary. I couldn't make it work because of this behavior.
Hi @leandrorodrigueszup thanks for the suggestion.
Would using --config-file-path not help you in this case?
./horusec start -e -p ./ --config-file-path path/to/your/config-file
Hi @matheusalcantarazup thanks for the reply!
Yes, it works! :-)
But still a little verbose and unnatural in my opinion.
But anyways, I can move on now. Thanks!
This is good! I'll let this issue open for further discussion.
Thanks very much for your suggestion.
| gharchive/issue | 2021-11-24T18:02:21 | 2025-04-01T06:37:43.620015 | {
"authors": [
"leandrorodrigueszup",
"matheusalcantarazup"
],
"repo": "ZupIT/horusec",
"url": "https://github.com/ZupIT/horusec/issues/815",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1522887938 | Unexpected behavior when constructing vectors from wrong-dtype numpy arrays
In the wiki, it is stated that the vector objects support both construction from iterables and from objects that supports the buffer protocol (such as numpy arrays). It seems that the buffer protocol construction has precedence over the iterable construction. This makes sense, but it has an unfortunate side-effect.
When constructing an integer vector using a numpy array of integers of the wrong size, the resulting vector is not as I would expect:
>>> a = np.array([1,1,1], dtype=np.int64)
>>> glm.ivec3(a)
ivec3( 1, 0, 1 )
On my system, dtype=int64 is the default, so ivec3(np.array([1,1,1])) produces this "bug". This effect only seems to occur for integer and boolean vectors.
If the vector had used the iterable construction method here, the result would be as expected.
This effect makes combining glm and numpy quite a bit more dangerous. This "bug" is subtle and easy to miss. For safety, vectors always have to be constructed using the unpacking operator (ivec3(*np.array([1,1,1]))). Even worse: since arbitrary iterables may also support the buffer protocol, this precaution has to be taken in all functions that accept a generic iterable.
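Until this is resolved, here are a couple of defensive construction patterns (a sketch; the assumption that converting to a matching 32-bit dtype avoids the misread is based on the behavior described above):

import numpy as np
import glm

a = np.array([1, 1, 1])                 # dtype=int64 by default on this platform
v1 = glm.ivec3(*a)                      # unpacking bypasses the buffer path
v2 = glm.ivec3(a.astype(np.int32))      # match the target component width
assert v1 == glm.ivec3(1, 1, 1) and v2 == glm.ivec3(1, 1, 1)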
My feature request:
Could PyGLM be changed to only use buffer protocol construction if the types match, and use iterable construction otherwise?
Hi there @avdstaaij ,
you seem to have found a bug.
On my system, the same code runs without issues:
>>> a = np.array([1,1,1], dtype=np.int64)
>>> glm.ivec3(a)
ivec3( 1, 1, 1 )
However, running the code on linux produces the error you've mentioned.
The bug persists with any given vector size:
>>> a = numpy.array([1,1,1], dtype="int64")
>>> glm.ivec3(a)
ivec3( 1, 0, 1 )
>>> glm.i8vec3(a)
i8vec3( 1, 0, 1 )
>>> glm.i64vec3(a)
i64vec3( 1, 0, 1 )
I haven't been able to look at possible causes in the code yet.
The way PyGLM converts a given object to e.g. a 32-bit int vector is by following a simple checklist:
Is the object a number?
Is the type of the object a PyGLM type?
Does the object support the buffer protocol?
Is the object a tuple or list?
If the object does support the buffer protocol, PyGLM extracts its data and identifies, which PyGLM type it corresponds to. Afterwards, it uses a suitable constructor for the target type.
I.e. in your example, PyGLM would first convert the numpy array to a 64-bit int vector (i64vec3) and continue by converting that to a 32-bit int vector (ivec3). This way you can convert from any bit-width to any other bit-width as you would expect - unless there is a bug, as is the case here.
The reason for this bug is that the data given by numpy is interpreted based on its format string. On the linux platform I've tested on, the C datatype long appears to be 8 bytes wide, unlike on my Windows OS, where it's 4 bytes wide.
Therefore, on the Windows machine, numpy reports the format (np.array((1,1,1), dtype=np.int64).data.format) as "q" (long long) and on Linux it reports as "l" (long). PyGLM always interprets "l" as 4 bytes, regardless of the actual size of long, according to this table: https://docs.python.org/3/library/struct.html#format-characters.
This needs to be fixed.
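One possible direction, sketched here in Python for illustration (PyGLM's real fix would live in its C code): derive each format character's width from the current platform with struct.calcsize instead of a fixed table.

import struct

# 'l' is 8 bytes on LP64 Linux but 4 bytes on Windows; calcsize knows which.
INT_WIDTHS = {fmt: struct.calcsize(fmt) for fmt in "bBhHiIlLqQ"}

def int_type_for(fmt: str):
    """Map a buffer-protocol format char to (signedness, bit width)."""
    return ("signed" if fmt.islower() else "unsigned", INT_WIDTHS[fmt] * 8)

assert int_type_for("q") == ("signed", 64)
# On 64-bit Linux int_type_for("l") is ("signed", 64); on Windows, ("signed", 32).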
Looks like I was too eager assuming what the issue was :).
| gharchive/issue | 2023-01-06T17:11:05 | 2025-04-01T06:37:43.645672 | {
"authors": [
"Zuzu-Typ",
"avdstaaij"
],
"repo": "Zuzu-Typ/PyGLM",
"url": "https://github.com/Zuzu-Typ/PyGLM/issues/196",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
} |
538627393 | No documentation
Hi,
I couldn't find any documentation about PyALL in this repo.
Where can I find documents describing PyALL in detail?
Hello Sean,
I'm not sure this is what you're looking for at all.
First of all, "PyALL" doesn't exist. There is no such library (at least in the PyPI.
This (PyOpenAL) is a library that provides bindings to OpenAL.
You'll find advanced documentation on their site.
Aside from that, PyOpenAL provides a few of its own methods. They're documented in the README.
| gharchive/issue | 2019-12-16T20:11:16 | 2025-04-01T06:37:43.648830 | {
"authors": [
"SeanTolstoyevski",
"Zuzu-Typ"
],
"repo": "Zuzu-Typ/PyOpenAL",
"url": "https://github.com/Zuzu-Typ/PyOpenAL/issues/12",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
1736622227 | Develop
modified: controller/conexion.php
a
| gharchive/pull-request | 2023-06-01T15:57:05 | 2025-04-01T06:37:43.649629 | {
"authors": [
"Zyanya714"
],
"repo": "Zyanya714/yectia",
"url": "https://github.com/Zyanya714/yectia/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
999692617 | feat: include systems as part of Netlify collections
The about pages for Design System and Application Framework are not part of the Netlify CMS collections, so hoping this will surface them there.
Aaaayyyooo it worked!
| gharchive/pull-request | 2021-09-17T20:20:09 | 2025-04-01T06:37:43.654198 | {
"authors": [
"juliexiong"
],
"repo": "Zywave/booster",
"url": "https://github.com/Zywave/booster/pull/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1063761271 | Discuss ways to merge separately mapped sidewalks and bike lanes
This conversation started at https://github.com/a-b-street/abstreet/discussions/789 (search term "separate" and "multiple OSM ways")
The main question is:
…how will the library figure out if separate ways are geometrically located to the left or right of the main road?
Preprocess data
I think we should continue the search for a solution to this. AFAIK there is no way to solve this in JavaScript/Node. But it is possible to solve it in QGIS and likely the underlying Python libraries.
So it should be possible to preprocess the data in a way that osm2lanes can then work with.
I think we should check if this could be solved (in a performant way) with Osmium or osm2pgsql or Overpass, … – I don't know enough about those tools ATM.
I assume one of those tools would be able to merge the separately mapped ways into the central way. This could be done just by proximity, but I assume that would be error-prone. But there are tags that we can use to reduce the error rate (a possible tag filter is sketched after the lists below):
Sidewalks
highway=footway + footway=sidewalk on the separately mapped way is a strong indicator that the way belongs to the main road
sidewalk=separate/sidewalk:(left|right|both)=separate on the main road are a strong indicator that there is a sidewalk to look for nearby
Cycleways
is_sidewalk=yes on the separately mapped way is a strong indicator that the way belongs to the main road. This tag is still experimental, but it could help a lot. We are experimenting with it to improve routing for footways and cycleways; more on this wiki page on footway mapping (GER) and this wiki page on cycleway mapping (GER). The additional experimental tags is_sidepath:of=(secondary|residential|<HighwayValueOfMainRoad>) and is_sidepath:of:name=<NameValueOfMainRoad> might also be useful, but are likely better processed automatically by the script.
cycleway=separate/cycleway:(left|right|both)=(separate|optional_sidepath) on the main road are a strong indicator that there is a cycleway to look for nearby (German wiki page)
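A rough Python sketch of such a tag-based prefilter; the function names and tag sets are assumptions distilled from the lists above, and real data would need fuzzier matching:

SIDEPATH_TAGS = {
    ("highway", "footway"), ("footway", "sidewalk"),  # separately mapped sidewalks
    ("is_sidewalk", "yes"),                           # experimental sidepath tagging
}
MAIN_ROAD_KEYS = [
    "sidewalk", "sidewalk:left", "sidewalk:right", "sidewalk:both",
    "cycleway", "cycleway:left", "cycleway:right", "cycleway:both",
]

def looks_like_sidepath(tags: dict) -> bool:
    return any((k, v) in SIDEPATH_TAGS for k, v in tags.items())

def expects_separate_sidepath(tags: dict) -> bool:
    # "separate" (or "optional_sidepath" for cycleways) on the main road is a
    # strong hint that a parallel way should be searched for nearby.
    return any(tags.get(k) in ("separate", "optional_sidepath") for k in MAIN_ROAD_KEYS)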
Thank you! Great issue.
I've faced this problem a couple of times. We can certainly add multiple way processing, but I'm not sure how to identify them in the first place. As you mentioned, probably all the approaches are error-prone.
I've attempted this before, https://github.com/a-b-street/abstreet/issues/330 tells the full story. Some of the problems are https://github.com/a-b-street/abstreet/issues/330#issuecomment-758173792.
But I did make some headway on this -- https://github.com/a-b-street/abstreet/blob/master/map_model/src/make/snappy.rs for reference. This code only tries to deal with merging separate cycleways with the main road. As a first pass, it only looks at ways with separation:left|right (https://wiki.openstreetmap.org/wiki/Proposed_features/cycleway:separation). Partly this helps us know which direction to search and partly this was a way for me to slowly "opt in" some cycleways around Seattle into the experimental merging, by filling out the tag.
Walking through how it works...
Find the center-line and estimated width of every separate cycleway
Create a quadtree (spatial partitioning structure that can quickly answer questions like "what's nearby") containing all of the left and right edges of every "regular" road segment
Look for matches
Delete any way that was snapped, and insert the lanes (and separators) into the main road
I have a few different ideas for how to look for matches. The thing currently implemented steps every 5 meters along a cycleway, projects a perpendicular test line 3 meters from the left or right edge (based on the separation tags), and looks for the nearest collision with one of the main road edges. Based on problems I hit, the match also has to have the same layer (z-ordering). And I was also requiring the cyclepath and main road to have a similar angle where the perpendicular line connects them -- within 30 degrees.
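To make the matching step concrete, here is a heavily simplified Python/Shapely sketch of the perpendicular-probe test described above; the 5 m / 3 m / 80% numbers come from that description, while the function shape (and skipping the angle and layer checks) is my own simplification, not the actual Rust in snappy.rs:

import math
from shapely.geometry import LineString

STEP_M, PROBE_M = 5.0, 3.0  # sample spacing and perpendicular probe length

def snap_fraction(cycleway: LineString, road_edge: LineString, side: int) -> float:
    """Fraction of sample points whose 3 m perpendicular probe (side=+1 left,
    -1 right) hits the road edge; angle and layer checks are omitted here."""
    samples = max(2, int(cycleway.length // STEP_M) + 1)
    hits = 0
    for i in range(samples):
        t = i / (samples - 1)
        pt = cycleway.interpolate(t, normalized=True)
        ahead = cycleway.interpolate(min(1.0, t + 0.01), normalized=True)
        heading = math.atan2(ahead.y - pt.y, ahead.x - pt.x)
        ang = heading + side * math.pi / 2
        probe = LineString([(pt.x, pt.y),
                            (pt.x + PROBE_M * math.cos(ang),
                             pt.y + PROBE_M * math.sin(ang))])
        if probe.intersects(road_edge):
            hits += 1
    return hits / samples  # require >= 0.8 before merging, per the 80% rule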
One of the biggest complications is when the main road or the cycleway are chopped up into different pieces. Around some intersections in Seattle where I was testing, some of the curb cuts are micro-mapped, so there will be tiny cycleway segments at a bunch of different angles. https://www.openstreetmap.org/way/835195091 is a simpler example of that. The current snapping code requires at least 80% of the cycleway to snap to some roads (maybe multiple). An example where this is tricky is the Burke Gilman trail at https://www.openstreetmap.org/node/53108420#map=19/47.66568/-122.30189. It's mostly a separate trail, but around here, it physically joins up and becomes part of the sidewalk on NE Blakeley Street:
(The screenshot is without any snapping)
Sometimes we might have to logically split a longer main road's way to indicate where a cyclepath runs parallel to it or not.
These are just some of the problems I hit. Many might be specific to a certain area.
AFAIK there is no way to solve this in JavaScipt/Node. But it is possible to solve it in QGIS and likely the underlying Python libraries.
If we can get a robust approach working using any language/dependencies, it's definitely possible to get it working everywhere. If GDAL or Shapely or libraries in some language make it easy to come up with good heuristics, then we can adapt the approach elsewhere
I feel like this should be beyond the scope of osm2lanes because it involves spatial reasoning. (Probably any problem involving multiple osm ways will end up needing some).
There are some great insights in this thread, and I would like to add that the calculated or estimated width that osm2lanes produces would be an valuable input into any algorithm that was able to do this snapping. It might even be useful to be able to request "minimum sensible" "likely" or "maximum sensible" estimated widths from osm2lanes.
| gharchive/issue | 2021-11-25T16:12:58 | 2025-04-01T06:37:43.669644 | {
"authors": [
"BudgieInWA",
"dabreegster",
"enzet",
"tordans"
],
"repo": "a-b-street/osm2lanes",
"url": "https://github.com/a-b-street/osm2lanes/issues/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2393369712 | No locales were configured, but expected at least the default locale
I get the error: No locales were configured, but expected at least the default locale en. Perhaps you need to add it to your a2lix_translation_form.locales bundle configuration?
I'm using Symfony 7 and just set up the translation bundle. Also I did the required steps:
in config/packages/a2lix.yaml
a2lix_translation_form:
    locale_provider: default
    locales: [en, fr, es, de]
    default_locale: en
    required_locales: [fr]
    templating: "@A2lixTranslationForm/bootstrap_4_layout.html.twig"
and in my formType
$builder->add('translations', TranslationsType::class, [
    'locales' => ['en', 'fr', 'es', 'de'],
    'default_locale' => ['en'],
    'required_locales' => ['fr'],
    'fields' => [
        'description' => [
            'field_type' => 'textarea',
            'label' => 'descript.',
            'locale_options' => [
                'es' => ['label' => 'descripción'],
                'fr' => ['display' => false]
            ]
        ]
    ],
    'excluded_fields' => ['details'],
    'locale_labels' => [
        'fr' => 'Français',
        'en' => 'English',
    ],
]);
in config/bundles.php
A2lix\AutoFormBundle\A2lixAutoFormBundle::class => ['all' => true],
A2lix\TranslationFormBundle\A2lixTranslationFormBundle::class => ['all' => true],
in config/packages/translation.yaml
framework:
    default_locale: en
    translator:
        default_path: '%kernel.project_dir%/translations'
        fallbacks:
            - en
        providers:
I went through the documentation and the other issues but still couldn't resolve the problem.
@Paradonized I got the exact same error as you..
Also working on Symfony 7.
Any update ?
| gharchive/issue | 2024-07-06T03:47:44 | 2025-04-01T06:37:43.716650 | {
"authors": [
"Paradonized",
"c-vandendyck-kbr"
],
"repo": "a2lix/TranslationFormBundle",
"url": "https://github.com/a2lix/TranslationFormBundle/issues/394",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
132940391 | Controller 'asSortable', required by directive 'asSortableItem', can't be found!
Hey, I'm getting this error, even though my code is almost exactly the same as in the demo.
View:
<p ng-if="items.length > 0"
as-sortable="dragControlListeners"
ng-model="items"
class="mappa-block">
<span class="mappa-subtitle">List:</span>
<div ng-repeat="item in items" as-sortable-item>
<div as-sortable-item-handle>
test
</div>
</div>
</p>
Controller:
app.controller('MainCtrl', function ($scope, dataStorage) {
    $scope.items = [];
    $scope.newName = '';
    $scope.newUrl = '';
    $scope.dragControlListeners = dragControlListeners = {
        accept: function (sourceItemHandleScope, destSortableScope) { return true },
        itemMoved: function (event) { },
        orderChanged: function (event) { }
    };
    dataStorage.load().then(function (items) {
        $scope.items = items;
    });
    ...
Am I making some silly mistake? Any advice on how to debug it? Thanks in advance!
why do u do this ?
$scope.dragControlListeners = dragControlListeners = {
how do u initialize the module dependencies?
Perhaps related to this? https://github.com/a5hik/ng-sortable/issues/230
| gharchive/issue | 2016-02-11T10:46:59 | 2025-04-01T06:37:43.740974 | {
"authors": [
"a5hik",
"anorudes",
"rhclayto"
],
"repo": "a5hik/ng-sortable",
"url": "https://github.com/a5hik/ng-sortable/issues/279",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |