id (string, lengths 4 to 10) | text (string, lengths 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
244040329
|
Unable to download JSON metadata (Pluralsight)
Youtube-dl version: 2017.07.15
ERROR: Unable to download JSON metadata: HTTP Error 403: Forbidden (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 502, in _request_webpage return self._downloader.urlopen(url_or_request) File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 2151, in urlopen return self._opener.open(req, timeout=self._socket_timeout) File "/usr/lib/python2.7/urllib2.py", line 435, in open response = meth(req, response) File "/usr/lib/python2.7/urllib2.py", line 548, in http_response 'http', request, response, code, msg, hdrs) File "/usr/lib/python2.7/urllib2.py", line 473, in error return self._call_chain(*args) File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain result = func(*args) File "/usr/lib/python2.7/urllib2.py", line 556, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
Carefully read new issue template and provide all requested information.
|
gharchive/issue
| 2017-07-19T13:27:53 |
2025-04-01T04:35:43.107226
|
{
"authors": [
"NikhilDoWhile",
"dstftw"
],
"repo": "rg3/youtube-dl",
"url": "https://github.com/rg3/youtube-dl/issues/13681",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
379743474
|
Unable to download youtube video 403 forbidden
youtube-dl --version
2018.11.07
youtube-dl "https://www.youtube.com/watch?v=71wQeC3ohug" -v
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['https://www.youtube.com/watch?v=71wQeC3ohug', '-v']
[debug] Encodings: locale cp936, fs mbcs, out cp936, pref cp936
[debug] youtube-dl version 2018.11.07
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.17134
[debug] exe versions: ffmpeg 3.4.2, ffprobe 3.4.2, rtmpdump 2.4.0
[debug] Proxy map: {'https': 'https://127.0.0.1:1080', 'http': 'http://127.0.0.1:1080', 'ftp': 'ftp://127.0.0.1:1080'}
[youtube] 71wQeC3ohug: Downloading webpage
[youtube] 71wQeC3ohug: Downloading video info webpage
[debug] Default format spec: bestvideo+bestaudio/best
WARNING: Requested formats are incompatible for merge and will be merged into mkv.
[debug] Invoking downloader on 'https://r2---sn-ogul7n7d.googlevideo.com/videoplayback?mt=1542023459&pl=23&mv=m&ei=h2npW-uJN9CAqAGFnpMQ&ms=au%2Crdu&mm=31%2C29&expire=1542045160&clen=111336310&ip=121.50.46.248&requiressl=yes&source=youtube&signature=3C71BF6D8D3534FB1CED7E1D596E1E1D9A1EF2E4.5A9776AA7C50D77B217E68FA5905265D375BB3E9&lmt=1540632754231181&mime=video%2Fmp4&key=yt6&itag=137&dur=216.416&fvip=2&gcr=jp&initcwndbps=867500&ipbits=0&c=WEB&keepalive=yes&id=o-ABsW8vL4iJo_Mi2WfsnX9pVP_9KPiHcpoOMaNvXvV1OT&aitags=133%2C134%2C135%2C136%2C137%2C160%2C242%2C243%2C244%2C247%2C248%2C278&gir=yes&mn=sn-ogul7n7d%2Csn-oguelnl7&sparams=aitags%2Cclen%2Cdur%2Cei%2Cgcr%2Cgir%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Ckeepalive%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Crequiressl%2Csource%2Cexpire&txp=5432432&ratebypass=yes'
ERROR: unable to download video data: HTTP Error 403: Forbidden
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpj6iiia1g\build\youtube_dl\YoutubeDL.py", line 1902, in process_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpj6iiia1g\build\youtube_dl\YoutubeDL.py", line 1847, in dl
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpj6iiia1g\build\youtube_dl\downloader\common.py", line 364, in download
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpj6iiia1g\build\youtube_dl\downloader\http.py", line 341, in real_download
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpj6iiia1g\build\youtube_dl\downloader\http.py", line 109, in establish_connection
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpj6iiia1g\build\youtube_dl\YoutubeDL.py", line 2211, in urlopen
File "C:\Python\Python34\lib\urllib\request.py", line 470, in open
File "C:\Python\Python34\lib\urllib\request.py", line 580, in http_response
File "C:\Python\Python34\lib\urllib\request.py", line 508, in error
File "C:\Python\Python34\lib\urllib\request.py", line 442, in _call_chain
File "C:\Python\Python34\lib\urllib\request.py", line 588, in http_error_default
urllib.error.HTTPError: HTTP Error 403: Forbidden
Most likely caused by your proxy.
|
gharchive/issue
| 2018-11-12T11:54:08 |
2025-04-01T04:35:43.118333
|
{
"authors": [
"dstftw",
"flhang"
],
"repo": "rg3/youtube-dl",
"url": "https://github.com/rg3/youtube-dl/issues/18161",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
34479924
|
-x Can't find directory or file
part of #2963 and #2924
@phihag
\\\\
C:\youtube-dl>youtube-dl "https://www.youtube.com/watch?v=_qNxNIzVAXo" -f best -x --no-mtime --add-metadata -v
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['https://www.youtube.com/watch?v=_qNxNIzVAXo', '-f', 'best', '-x', '--no-mtime', '--add-metadata', '-v']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2014.05.19
[debug] Python version 2.7.5 - Windows-8-6.2.9200
[debug] Proxy map: {}
[youtube] Setting language
[youtube] _qNxNIzVAXo: Downloading webpage
[youtube] _qNxNIzVAXo: Downloading video info webpage
[youtube] _qNxNIzVAXo: Extracting video information
[download] 【Drum&Bass】BMotion ft. Jon Lilygreen - All My Love-_qNxNIzVAXo.mp4 has already been downloaded
[ffmpeg] Adding metadata to '【Drum&Bass】BMotion ft. Jon Lilygreen - All My Love-_qNxNIzVAXo.mp4'
[debug] ffmpeg command line: ffmpeg -y -i 'Drum&BassBMotion ft. Jon Lilygreen - All My Love-_qNxNIzVAXo.mp4' -c copy -metadata date=20140520 -metadata 'artist=xKito Music' -metadata 'title=Drum&BassBMotion ft. Jon Lilygreen - All My Love' 'Drum&BassBMotion ft. Jon Lilygreen - All My Love-_qNxNIzVAXo.temp.mp4'
ERROR: Drum&BassBMotion ft. Jon Lilygreen - All My Love-_qNxNIzVAXo.mp4: No such file or directory
Traceback (most recent call last):
  File "youtube_dl\YoutubeDL.pyo", line 1073, in post_process
  File "youtube_dl\postprocessor\ffmpeg.pyo", line 477, in run
  File "youtube_dl\postprocessor\ffmpeg.pyo", line 65, in run_ffmpeg
  File "youtube_dl\postprocessor\ffmpeg.pyo", line 62, in run_ffmpeg_multiple_files
FFmpegPostProcessorError
C:\youtube-dl>
////////
Also, I'm not sure about this part but I wouldn't be surprised if the downloaded thumbnail and metadata-txt-file had different names from the actual video... (seeing as how the -x function couldn't get the right file)
Fixed quite some time ago.
|
gharchive/issue
| 2014-05-28T15:53:13 |
2025-04-01T04:35:43.125973
|
{
"authors": [
"Soroid",
"dstftw"
],
"repo": "rg3/youtube-dl",
"url": "https://github.com/rg3/youtube-dl/issues/2999",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
131459146
|
arte.tv: Unable to extract iframe url
It seems that all (?) video downloads from the popular German/French arte TV channel are broken.
Using youtube-dl version 2016.02.04, I get the following error constantly on all video pages, even older ones that worked before.
ERROR: Unable to extract iframe url;
This issue is already fixed and the fix will be incorporated in the next version of youtube-dl.
I should have read the closed issues more carefully. Sorry to waste your time. :-/
|
gharchive/issue
| 2016-02-04T20:04:39 |
2025-04-01T04:35:43.128009
|
{
"authors": [
"del0rean",
"dstftw"
],
"repo": "rg3/youtube-dl",
"url": "https://github.com/rg3/youtube-dl/issues/8432",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
1878869175
|
🛑 Zimbra is down
In 195aa7c, Zimbra (https://posta.comune.preganziol.tv.it) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Zimbra is back up in 10b2552 after 5 minutes.
|
gharchive/issue
| 2023-09-02T22:48:37 |
2025-04-01T04:35:43.159171
|
{
"authors": [
"rglauco"
],
"repo": "rglauco/upptime",
"url": "https://github.com/rglauco/upptime/issues/155",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2262407997
|
[Feature Request] Bookmark node view offset
Option to add screen offset to bookmark node.
It's a very useful node, but sometimes the position where I place the node and the screen position I want it to jump to need to be slightly different.
Also suggestions for other features this node can have:
Align view (Default is top-left corner, similar to current behavior)
Align view options in list format: top-left / top-center / bottom-right / etc
Lock node position
Node could look something like:
Shortcut Key: 1
Zoom: 1.00
Align View: Top-Left
Offset X Axis (Left - Right): 0.00
Offset Y Axis (Up-Down): 0.00
(not sure if scaling input or pixel works better for offset)
I've submitted a pull request for this feature at #252. Feel free to check it and share your thoughts!
|
gharchive/issue
| 2024-04-25T00:44:31 |
2025-04-01T04:35:43.165896
|
{
"authors": [
"Alexyoe",
"aphaits"
],
"repo": "rgthree/rgthree-comfy",
"url": "https://github.com/rgthree/rgthree-comfy/issues/212",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
728900870
|
Editor plugins
I think a language's success is very dependent on its tooling, and having good editor support is step one.
This is not a rhai issue per se, but I think it is good to have a place to track the progress of the ecosystem; it can also serve as a reference when people search for it.
To my knowledge, the main editors used nowadays are vim, vscode, emacs and sublime text, so it would be nice to have plugins for those.
I started a vim plugin here https://github.com/kuon/rhai.vim, but this is my first syntax plugin and there are a few features I have not implemented yet.
I have been wanting a plugin for Visual Studio Code as well...
Actually, a good way to do this is to build a language server which can work with multiple IDEs. Short of that, a TextMate grammar may be better than nothing...
I'm wondering - does your vim plugin do anything other than syntax highlighting? There is an online playground available with syntax highlighting that you can look at.
Yeah, a language server would be required for more advanced features. And yes, my plugin only does basic syntax highlighting for now.
Nicolas Goy
@kuon how is your vim plugin coming along? How does it work out?
Would you like to publish it in the official org itself? I can make a rhai-plugins repo...
I didn't really do any more work on it, it provides really minimal features for syntax highlight.
At present, I am working on another project, and I don't have time to maintain it. But if someone wants to take over, you can fork it and I'll redirect my repo to yours. But to play nice with plugin manager, it's better in its own repository.
Ah, so something like rhai-vim-plugin
@kuon Just wondering if you still have your vim plugin for Rhai?
Rhai now has official VS Code syntax highlighting: https://marketplace.visualstudio.com/items?itemName=rhaiscript.vscode-rhai
Would be nice to add a vim-rhai to it as well.
Hi @schungx, I'm looking for projects for my university thesis for this semester and writing an LSP server for rhai came up as an idea. It would be a completely personal project that would be eventually open-sourced at the end.
I have a few concerns/questions before I commit to it:
Has anyone started working on something like this? I have to work completely alone before it is open-sourced, so I cannot contribute unfortunately, and I'd rather not have 2 competing implementations.
Most likely I'd have to write my own parser as well, how "hard" is parsing rhai? (especially since custom keywords can be defined) Is there a syntax specification somewhere?
What I currently have in mind:
Since rhai is very context-dependent, I'd rather implement the LSP as a library, where people can provide their own Engine that can be queried for modules and custom keywords
then figure out a way to create a context for the LSP so that these could be mapped to files (or even get/construct the source code from the user-provided engine).
I haven't started digging deeper, I thought I'd ask here first. Do you think this would be feasible, or do you see any major blocker why I wouldn't be able to implement something like this in the following 3-4 months?
This, now, would be wonderful. I've always wanted an LSP server but never learned how to write one.
Has anyone started working on something like this?
Not really...
Most likely I'd have to write my own parser as well, how "hard" is parsing rhai?
It is not hard at all. You can simply "borrow" the parser from the Rhai project itself. It builds an entire AST.
If you need any special functionalities, let me know, and I'll see if I can work with you to add it.
I'd recommend leaving custom syntax out of it because it is user-defined and you won't be able to handle it in a separate LSP server anyway.
implement the LSP as a library, where people can provide their own Engine that can be queried for modules and custom keywords
This sounds interesting, meaning that it can handle user-defined functions and custom syntax. However, I'd say a "standard" build that is added to the VS Code Rhai extension, for example, would work great already.
any major blocker why I wouldn't be able to implement something like this in the following 3-4 months?
Nothing that I can think of... but I have never written an LSP before so I won't really know!
This was also my first reaction, and it makes perfect sense technically, but the thesis needs enough original work done by me, so I probably cannot reuse it. I'll see if I can work something out.
Rhai uses a hand-crafted recursive-descent parser which is actually quite simple to understand. I'd say the parser is actually a very small portion of an LSP server, you may be able to get away with it.
On the other hand, it would be interesting to write a grammar for Rhai and put it into an LALR generator or something... or some PEG generator... That would be original!
How useful would this be though?
That's true... but at least we have jump-to-definition for variables and functions etc. Probably no type checking and function arguments checking... However, I suppose it would still be useful at least...
I'd like to create an IntelliJ IDEA plugin for supporting (initially) Rhai syntax highlighting. Is there any grammar definition for Rhai that I would be able to convert into an antlr grammar?
I'd suggest you start with a standard-issue JavaScript grammar, as Rhai is very similar to JavaScript in syntax.
Just a few things to remember:
There can be number digit separators _ but they can only be in the middle of numbers, and they cannot be next to the decimal point...
Names like _123, _0 etc. are not valid identifiers while in JavaScript they are.
JavaScript has no statement expressions, so you need to consider them
Semicolon requirements for JavaScript are different
I built the vscode and textmate syntax highlighting by starting off with the JavaScript files and then simply deleted the unnecessary stuff.
I've made progress with the LSP and submitted my BSc thesis; I uploaded it here. It was mostly written out of necessity, so it might be vague/incomplete in parts and probably even inaccurate, but it should be enough to skim through and get an idea of what I've been up to if you're interested.
I don't want to make repo completely public yet as it needs more documentation and some refactoring to which I'll get to at some point in January, but I can invite anyone as a collaborator until then.
Wow, this looks wonderful and a lot of work! Especially on the LSP implementation, rewritten parser, fault recovery, and syntax highlighting.
Thanks!
I'm wondering if it is possible at all to eventually merge your parser implementation into Rhai, replacing the existing implementation.
I wouldn't do this for the following reasons:
I don't think my parser is superior in any way, and it is not yet widely tested
it also can't yet handle user-defined syntax and tokenization
I don't think there would be any benefit when the goal is to execute the code as fast as possible, skipping comments and bailing on errors is the way to go
So I believe the two parsers serve different purposes, and uniformity can be mostly achieved with common compliance tests. Currently I use the scripts in the repository for this, but they will need to be extended as there are some language constructs they don't cover.
Even if my parser turns out to be somewhat better, I'm sure the benefits will only be marginal and wouldn't warrant a complete rewrite.
Using a common tokenizer on the other hand should be possible, and I'm definitely for it as Logos is awesome.
I decided to actually open up the lsp repository in its current state so that you can look at it and discussions can be done there.
It's rather messy and I don't have too much time right now, but there is little point in keeping it hidden.
@schungx I also invited you as a collaborator.
Thanks! Will be peeking into it this weekend...
BTW, any interest to move it into rhaiscript/lsp? Or would you prefer to get it into a stable form before putting it under the org?
BTW, any interest to move it into rhaiscript/lsp? Or would you prefer to get it into a stable form before putting it under the org?
Yes, it was my intention from the beginning. Although I'd definitely like to get your approval first to see if the content/quality etc. aligns with the rest of Rhai. But it's up to you really, as soon as you tell me, I can start a transfer of the repo.
Well, I guess there is a button somewhere in Github to transfer a repo to an org... but I haven't done it myself... See if you can do it yourself... if you need authentication or something, let me know and I'll figure it out.
It depends on you when you'd want to do the move. Some people I see wait until it is at least in a usable form before putting a repo under an org, while others will start off right in the org.
Doing development under the org itself has the benefit of making more people aware of the work, so they're more likely to help out than the chance that they discover the repo under your name.
If it's up to me, then I'd definitely transfer it, then keep working on it there.
According to github docs:
To transfer repositories to an organization, you must have repository creation permissions in the receiving organization.
I wouldn't necessarily want to get any permissions in the org; see if you have the rights in rhai-lsp to transfer it.
Done. I just sent you a member invitation.
You may want to decide whether to call it rhaiscript/rhai-lsp or just rhaiscript/lsp...
I've transferred it, but I seem to have lost rights to it in the process.
There was no option to rename it during transfer, but we can rename it regardless if you wish (I have no opinion on it to be honest).
I've transferred it, but I seem to have lost rights to it in the process.
Fixed.
There was no option to rename it during transfer, but we can rename it regardless if you wish (I have no opinion on it to be honest).
It is renamed to rhaiscript/lsp which is better naming.
|
gharchive/issue
| 2020-10-24T23:42:33 |
2025-04-01T04:35:43.190908
|
{
"authors": [
"Hodkinson",
"kuon",
"schungx",
"tamasfe"
],
"repo": "rhaiscript/rhai",
"url": "https://github.com/rhaiscript/rhai/issues/268",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1880101324
|
Why does clicking submit need us to dedicate our audio to public domain? What does this mean for privacy?
Quick question: I've read through the source code and can't see any automatic uploading of voice data, so why, when clicking Submit in the recording studio, does a disclaimer say 'By clicking Submit, you agree to dedicate your recorded audio to the public domain (CC0)'? Is the data I record uploaded automatically, or is this implying that if I choose to upload my recorded voice data, it should be public domain?
TLDR; Is clicking the submit button uploading my private voice data automatically to a server somewhere?
Okay, thank you for clearing that up! (And I must say thank you very much for creating this, it's enabled me to have great fun cloning my voice and getting into the world of machine learning.)
|
gharchive/issue
| 2023-09-04T11:42:57 |
2025-04-01T04:35:43.197838
|
{
"authors": [
"mario872"
],
"repo": "rhasspy/piper-recording-studio",
"url": "https://github.com/rhasspy/piper-recording-studio/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
706368765
|
Multiple Satellites in one Room
I have 2 satellites in one room; sadly I am not able to mix the audio and send it to one Rhasspy install. Is there any option to "group" those 2 satellites? Currently both satellites are triggering on the wake word and it's a total mess.
Hmmm...do you have both satellites connected to the base via HTTP or MQTT? If it's HTTP, it's hard to do much since they're not really aware of each other. But there might be an option for MQTT.
I have everything connected via MQTT. Any ideas on how to accomplish this?
Sure, you could set two different wakewords.
Otherwise, when both sats hear the wakeword, they respond.
Can I ask what is the use of those 2 satellites in 1 room?
Yes, but two different wakewords is lame.
I use one sat for the whole room and the other for my headset. I can't mix audio because of the delay added by my USB headset.
I don't see any other options tbh.
But I also don't quite understand the use-case for having a sat for a whole room and another for a headset.
They basically control the same thing right? It's probably me, so sorry about that.
What might be possible, but also not a very nice solution, is to run 2 Rhasspy instances....
I am running multiple Rhasspy instances in Docker. But here is my problem: both of them get triggered at the same time.
Running Rhasspy on my headset allows me to talk much more quietly, and in general recognition is much better.
I see, well you can separate the two instances and connect sat1 with server1 and sat2 with server2
This can be done by putting sat1 in all the Satellite ID field of server1 and sat2 in server2.
That way, sat1 does not trigger server2 and vice versa
What you cannot do with the same wakeword is to have a trigger from sat1 only, if sat2 also triggers the wakeword.
I suggest closing this issue and maybe go for a feature request here:
https://community.rhasspy.org/c/feature-requests/7
I have everything connected via MQTT. Any ideas on how to accomplish this?
A small change to the dialogue manager could make this work. If it was aware that site ids were in a "group", then it could simply not start a new dialogue session if a member of the group already has one. So if a second wake up attempt comes in, it will be ignored.
I'm trying to think of what other uses grouping site ids this way might have.
So if a second wake up attempt comes in, it will be ignored.
This is exactly what I need! Grouping site ids. It's useful in every case where you have 2 mics close to each other (big rooms, headsets, smartphones). I could imagine many use cases.
Nice idea, but very often devices hear speakers from other rooms, and these would not be "grouped".
What about something that checks the voice start time? Since sound waves travel at a set rate, if any other device hears the wake word within X milliseconds, only the first device should continue. It would not be very often that more than 2 devices get asked something at exactly the same time.
My lounge room device can hear me speaking in the kitchen, but I would not "group" these devices.
Thoughts?
Yes, a configurable delay in milliseconds would be nice (0 for disabled). As already mentioned, it is very rare that 2 people close to each other need to call Rhasspy at the same time. So there are no real tradeoffs, just benefits.
I think the most flexible way is to create custom groups, each with a custom delay of X milliseconds.
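As a rough illustration of the time-window idea (not Rhasspy's actual code; the names GROUP_DELAY_MS and on_hotword_detected are made up for this sketch), the debounce could look something like this in Python:

import time

# Hypothetical per-group debounce window in milliseconds (0 disables it).
GROUP_DELAY_MS = 100

_last_wakeup = {}  # group id -> timestamp of the most recently accepted wake-up

def on_hotword_detected(group_id):
    """Return True if this wake-up should start a session, False if it arrived
    within GROUP_DELAY_MS of another satellite in the same group."""
    now = time.monotonic() * 1000.0
    last = _last_wakeup.get(group_id)
    if last is not None and GROUP_DELAY_MS > 0 and (now - last) < GROUP_DELAY_MS:
        return False  # a nearby satellite already won this wake-up
    _last_wakeup[group_id] = now
    return True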
My lounge room device can hear me speaking in the kitchen, but I would not "group" these devices.
Thoughts?
These were exactly my thoughts.
I was about to raise this as a feature request, until I saw this issue thread.
I was hoping for a way to say "turn on the light" and use the sat ID to determine which room.
I don't want another light turning on due to it also catching my voice.
The Alexa works like this, which is what I'm trying to phase out.
Here's what I'm thinking for this: assume there's a new configuration option for a site id "group separator". By default, this is not set and Rhasspy's behavior is unchanged.
As an example, let's say the group separator is set to "." (period). Assume you have 2 satellites and a base station with these site ids:
base
living_room.satellite1
living_room.satellite2
Now, you wake up Rhasspy in the living room and both satellites detect it. With the group separator, the dialogue manager could split the site ids into <group>.<name> and notice that both <group> values are the same (living_room). In that case, the first session "wins" and the second session is quietly dropped.
Without the group separator, the second session would override the first (Snips' original behavior).
Thoughts?
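To make the proposal concrete, here is a minimal Python sketch of the splitting and "first session wins" logic, assuming a simple set of active groups; the names and bookkeeping are illustrative, not Rhasspy's real dialogue manager internals:

GROUP_SEPARATOR = "."          # not set by default, so behavior is unchanged
active_groups = set()          # groups that currently have an open session

def group_of(site_id):
    """Everything before the first separator is the group id."""
    if GROUP_SEPARATOR and GROUP_SEPARATOR in site_id:
        return site_id.split(GROUP_SEPARATOR, 1)[0]
    return site_id             # ungrouped sites are their own group

def try_start_session(site_id):
    group = group_of(site_id)
    if group in active_groups:
        return False           # another satellite in this group already won
    active_groups.add(group)
    return True

def end_session(site_id):
    active_groups.discard(group_of(site_id))

# living_room.satellite1 wins; living_room.satellite2 is quietly dropped
print(try_start_session("living_room.satellite1"))  # True
print(try_start_session("living_room.satellite2"))  # False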
I really like your approach; it doesn't even require changes to the GUI. Simply rename all sats and you are ready to go!
That looks like a good idea.
A couple of queries..
The WebSocket API currently includes a 'siteId' property. Would this remain, or would you plan to split it into 2 properties (e.g. groupId, siteId)?
You mention if they are not grouped, the second session would override the first...
Say you have the following setup:
base
living_room.satellite1
living_room.satellite2
kitchen.satellite1
You speak a command in the living room, intended for the 'living_room' satellites.
The kitchen satellite then also picks up the speech in the distance. Would the kitchen satellite then override the living room satellite(s)?
Anything passing through the dialogue manager would be split, so this should work even if it comes from the websocket API
In that case, I'd say it might be better to name things like this:
base
upstairs.living_room.satellite1
upstairs.living_room.satellite2
upstairs.kitchen.satellite1
I assume the living room and kitchen are "upstairs". With the proposed extension, you would want to group all satellites that could potentially activate at the same time. I left "living_room" and "kitchen" in the site ids assuming the intent handler might care to do something with them.
Just check the start time: any satellite that starts a session within something like 100 ms of the first one is dropped.
No name changing.
Simple: any item in any room that hears the wake word and starts a session; first in wins.
In the Advanced settings, you can now set dialogue.group_separator as described above. For example:
{
"dialogue": {
"group_separator": "."
}
}
With that, you can set the site ids of two satellites to be something like living_room.satellite1 and living_room.satellite2 (everything before the "." is the "group id"). As long as one satellite in the same group has an active session, others will not be able to start one. So no more double wake ups :)
Please re-open the issue if you find a bug!
I have been testing this out with two satellites in the same room, both using wakeword detection (Mycroft Precise custom trained networks), and am finding that this does not prevent double wakeups, in addition to blocking any dialogue sessions from the same satellite when one is already active. This is a bit of a problem, since for some reason my dialogue sessions from any device take forever to end even after the intent has been handled, so I cannot issue commands consecutively by waking it up again right after it has "finished" a session. Right now, with the satellite grouping in use the way you've instructed here, I'm getting double wakeups and this blocking issue. Do you know what might be happening here?
Some logs for demonstration; you can see one of my satellites refusing to wake up again while its "finished" dialogue session is still active for some reason. This does not happen without satellite grouping, where I get consecutive wakeups every time.
[DEBUG:2020-11-28 16:23:27,467] rhasspyserver_hermes: <- NluIntent(input='never mind', intent=Intent(intent_name='NullAction', confidence_score=1.0), site_id='bedroom.rhasspi', id=None, slots=[], session_id='bedroom.rhasspi-athena-custom-3c66ef8f-eaf4-40cc-b383-fade0c37a284', custom_data=None, asr_tokens=[[AsrToken(value='never', confidence=1.0, range_start=0, range_end=5, time=None), AsrToken(value='mind', confidence=1.0, range_start=6, range_end=10, time=None)]], asr_confidence=None, raw_input='never mind', wakeword_id='athena-custom', lang=None)
[DEBUG:2020-11-28 16:23:23,207] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-custom.pb', model_version='', model_type='personal', current_sensitivity=0.5, site_id='bedroom.rhasspi', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 16:23:20,286] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-custom.pb', model_version='', model_type='personal', current_sensitivity=0.5, site_id='bedroom.rhasspi', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 16:23:17,439] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-custom.pb', model_version='', model_type='personal', current_sensitivity=0.5, site_id='bedroom.rhasspi', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 16:23:14,077] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-custom.pb', model_version='', model_type='personal', current_sensitivity=0.5, site_id='bedroom.rhasspi', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 16:23:10,742] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-custom.pb', model_version='', model_type='personal', current_sensitivity=0.5, site_id='bedroom.rhasspi', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 16:23:03,417] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-custom.pb', model_version='', model_type='personal', current_sensitivity=0.5, site_id='bedroom.rhasspi', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 16:22:57,624] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-custom.pb', model_version='', model_type='personal', current_sensitivity=0.5, site_id='bedroom.rhasspi', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 16:22:54,995] rhasspyserver_hermes: <- NluIntent(input='never mind', intent=Intent(intent_name='NullAction', confidence_score=1.0), site_id='bedroom.rhasspi', id=None, slots=[], session_id='bedroom.rhasspi-athena-custom-d68cd629-bf0b-464b-a1be-89b7d342a2e5', custom_data=None, asr_tokens=[[AsrToken(value='never', confidence=1.0, range_start=0, range_end=5, time=None), AsrToken(value='mind', confidence=1.0, range_start=6, range_end=10, time=None)]], asr_confidence=None, raw_input='never mind', wakeword_id='athena-custom', lang=None)
[DEBUG:2020-11-28 16:22:51,456] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-custom.pb', model_version='', model_type='personal', current_sensitivity=0.5, site_id='bedroom.rhasspi', session_id=None, send_audio_captured=None, lang=None)
Second satellite:
[DEBUG:2020-11-28 22:13:23,537] rhasspyserver_hermes: Sent 389 char(s) to websocket
[DEBUG:2020-11-28 22:13:23,537] rhasspyserver_hermes: Sent 389 char(s) to websocket
[DEBUG:2020-11-28 22:13:23,537] rhasspyserver_hermes: Sent 389 char(s) to websocket
[DEBUG:2020-11-28 22:13:23,537] rhasspyserver_hermes: Sent 389 char(s) to websocket
[DEBUG:2020-11-28 22:13:23,537] rhasspyserver_hermes: Sent 389 char(s) to websocket
[DEBUG:2020-11-28 22:13:23,535] rhasspyserver_hermes: <- NluIntent(input='never mind', intent=Intent(intent_name='NullAction', confidence_score=1.0), site_id='bedroom.gp01', id=None, slots=[], session_id='bedroom.gp01-athena-gp01-2b4d9fdf-b60f-42ae-9d37-d01b29db67aa', custom_data=None, asr_tokens=[[AsrToken(value='never', confidence=1.0, range_start=0, range_end=5, time=None), AsrToken(value='mind', confidence=1.0, range_start=6, range_end=10, time=None)]], asr_confidence=None, raw_input='never mind', wakeword_id='athena-gp01', lang=None)
[DEBUG:2020-11-28 22:13:20,548] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-gp01.pb', model_version='', model_type='personal', current_sensitivity=0.7, site_id='bedroom.gp01', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 22:13:15,304] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-gp01.pb', model_version='', model_type='personal', current_sensitivity=0.7, site_id='bedroom.gp01', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 22:13:10,658] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-gp01.pb', model_version='', model_type='personal', current_sensitivity=0.7, site_id='bedroom.gp01', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 22:13:06,162] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-gp01.pb', model_version='', model_type='personal', current_sensitivity=0.7, site_id='bedroom.gp01', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 22:13:01,665] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-gp01.pb', model_version='', model_type='personal', current_sensitivity=0.7, site_id='bedroom.gp01', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 22:12:59,118] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-gp01.pb', model_version='', model_type='personal', current_sensitivity=0.7, site_id='bedroom.gp01', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 22:12:57,020] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-gp01.pb', model_version='', model_type='personal', current_sensitivity=0.7, site_id='bedroom.gp01', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 22:12:54,921] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-gp01.pb', model_version='', model_type='personal', current_sensitivity=0.7, site_id='bedroom.gp01', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 22:12:52,298] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-gp01.pb', model_version='', model_type='personal', current_sensitivity=0.7, site_id='bedroom.gp01', session_id=None, send_audio_captured=None, lang=None)
[DEBUG:2020-11-28 22:12:50,176] rhasspyserver_hermes: Sent 389 char(s) to websocket
[DEBUG:2020-11-28 22:12:50,176] rhasspyserver_hermes: Sent 389 char(s) to websocket
[DEBUG:2020-11-28 22:12:50,176] rhasspyserver_hermes: Sent 389 char(s) to websocket
[DEBUG:2020-11-28 22:12:50,175] rhasspyserver_hermes: Sent 389 char(s) to websocket
[DEBUG:2020-11-28 22:12:50,175] rhasspyserver_hermes: Sent 389 char(s) to websocket
[DEBUG:2020-11-28 22:12:50,173] rhasspyserver_hermes: <- NluIntent(input='never mind', intent=Intent(intent_name='NullAction', confidence_score=1.0), site_id='bedroom.gp01', id=None, slots=[], session_id='bedroom.gp01-athena-gp01-51390e60-4309-4ad0-9ee8-92a8bca42bd9', custom_data=None, asr_tokens=[[AsrToken(value='never', confidence=1.0, range_start=0, range_end=5, time=None), AsrToken(value='mind', confidence=1.0, range_start=6, range_end=10, time=None)]], asr_confidence=None, raw_input='never mind', wakeword_id='athena-gp01', lang=None)
[DEBUG:2020-11-28 22:12:47,130] rhasspyserver_hermes: <- HotwordDetected(model_id='athena-gp01.pb', model_version='', model_type='personal', current_sensitivity=0.7, site_id='bedroom.gp01', session_id=None, send_audio_captured=None, lang=None)
Hello,
I'm having problems with multiple satellites and concurrent wakewords triggering.
I have:
- 1 master node
Name: master
Version: 2.5.10
Deployment: Docker
Arch: x86_64 / Debian Buster
Connexion: Ethernet
- 1 satellite node
Name: rdc.raspberrypi3
Version: 2.5.10
Deployment: .deb package
Arch: armhf / Debian Buster
Connexion: Wifi
Location: kitchen (can hear me from living room)
Wakeword: Raven (hey-flipper)
- 1 satellite node
Name: rdc.respeakercore2
Version: 2.5.10
Deployment: .deb package
Arch: armhf / Debian Buster
Connexion: Wifi
Location: living room (can hear me from the kitchen)
Wakeword: Raven (hey-flipper)
All nodes are using the same external MQTT server.
The problem
The problem is that my satellites are triggering at the same time and are both trying to handle the intent at the same time.
I find it weird because:
I configured "group_separator=." on my master
I added a prefix "rdc." on my two nodes
I wonder if I did something wrong in the config?
NB:
I've tested and had the same behaviour on 2.5.9 and 2.5.10
rdc.raspberrypi3 has its own secondary wakeword (hey-rasspy)
I noticed that when I say "hey rasspy", only rdc.raspberrypi3 wakes up and manage the command.
If I use 1 different wakeword for each satellite, I have no problems.
I also noticed that Rhasspy is slower at handling requests when both satellites are running.
When I stop 1 of them, it gets lightning fast.
Here are the configs:
{
"dialogue": {
"group_separator": ".",
"satellite_site_ids": "master,satellite1,satellite2,satellite3,rhasspy-sejour,rhasspy-rdc,rhasspy-salon,rdc.respeakercore2,rdc.raspberrypi3",
"system": "rhasspy"
},
"handle": {
"remote": {
"url": "http://192.168.1.253:1880/gestionIntent"
},
"satellite_site_ids": "rhasspy-sejour,satellite1,satellite3,rhasspy-rdc,rhasspy-salon,rdc.respeakercore2,rdc.raspberrypi3",
"system": "remote"
},
"intent": {
"fuzzywuzzy": {
"min_confidence": "0,5"
},
"satellite_site_ids": "master,satellite1,satellite2,satellite3,rhasspy-sejour,rhasspy-rdc,rhasspy-salon,rdc.respeakercore2,rdc.raspberrypi3",
"system": "fsticuffs"
},
"mqtt": {
"enabled": "true",
"host": "192.168.1.253",
"port": "1885",
"site_id": "master"
},
"sounds": {
"error": "${RHASSPY_PROFILE_DIR}/beep_ko.wav",
"recorded": "${RHASSPY_PROFILE_DIR}/beep_ok.wav",
"wake": "${RHASSPY_PROFILE_DIR}/beep_hi.wav"
},
"speech_to_text": {
"satellite_site_ids": "master,satellite1,satellite2,satellite3,rhasspy-sejour,rhasspy-rdc,rhasspy-salon,rdc.respeakercore2,rdc.raspberrypi3",
"system": "kaldi"
},
"text_to_speech": {
"espeak": {
"voice": "fr-FR"
},
"larynx": {
"default_voice": "siwis"
},
"marytts": {
"effects": {
"effect_F0Scale_parameters": "f0Scale:1.1;",
"effect_F0Scale_selected": "on",
"effect_TractScaler_parameters": "amount:1;",
"effect_TractScaler_selected": "on",
"effect_Volume_parameters": "amount:0,5;",
"effect_Volume_selected": "on"
},
"locale": "fr",
"url": "http://192.168.1.253:59125/process",
"voice": "enst-camille-hsmm",
"volume": "1"
},
"satellite_site_ids": "master,satellite1,satellite2,satellite3,rhasspy-sejour,rhasspy-rdc,rhasspy-salon,rdc.respeakercore2,rdc.raspberrypi3",
"system": "nanotts"
}
}
{
"dialogue": {
"satellite_site_ids": "rdc.respeakercore2"
},
"handle": {
"remote": {
"url": "http://192.168.1.253:1880/gestionIntent"
},
"satellite_site_ids": "rdc.respeakercore2"
},
"intent": {
"system": "hermes"
},
"microphone": {
"arecord": {
"device": "pulse",
"siteId": "rdc.respeakercore2",
"udp_audio_host": "127.0.0.1",
"udp_audio_port": "8888"
},
"pyaudio": {
"siteId": "rdc.respeakercore2",
"udp_audio_host": "127.0.0.1",
"udp_audio_port": "8888"
},
"system": "arecord"
},
"mqtt": {
"enabled": "true",
"host": "192.168.1.253",
"port": "1885",
"site_id": "rdc.respeakercore2"
},
"sounds": {
"aplay": {
"device": "pulse"
},
"system": "aplay",
"wake": "${RHASSPY_PROFILE_DIR}/beep_hi.wav"
},
"speech_to_text": {
"system": "hermes"
},
"text_to_speech": {
"satellite_site_ids": "rdc.respeakercore2",
"system": "hermes"
},
"wake": {
"raven": {
"keywords": {
"hey-flipper": {
"enabled": true
}
},
"udp_audio": "127.0.0.1:8888"
},
"satellite_site_ids": "rdc.respeakercore2",
"system": "raven"
},
"rhasspy": {
"listen_on_start": true
}
}
{
"dialogue": {
"satellite_site_ids": "etage.raspberrypi3"
},
"handle": {
"remote": {
"url": "http://192.168.1.253:1880/gestionIntent"
},
"satellite_site_ids": "rdc.respeakercore2"
},
"intent": {
"system": "hermes"
},
"microphone": {
"arecord": {
"device": "plughw:CARD=seeed8micvoicec,DEV=0",
"siteId": "rdc.respeakercore2",
"udp_audio_host": "127.0.0.1",
"udp_audio_port": "8888"
},
"pyaudio": {
"siteId": "rdc.raspberrypi3",
"udp_audio_host": "127.0.0.1",
"udp_audio_port": "8888"
},
"system": "pyaudio"
},
"mqtt": {
"enabled": "true",
"host": "192.168.1.253",
"port": "1885",
"site_id": "rdc.raspberrypi3"
},
"sounds": {
"aplay": {
"device": "pulse"
},
"system": "aplay",
"wake": "${RHASSPY_PROFILE_DIR}/beep_hi.wav"
},
"speech_to_text": {
"system": "hermes"
},
"text_to_speech": {
"satellite_site_ids": "rdc.raspberrypi3",
"system": "hermes"
},
"wake": {
"raven": {
"keywords": {
"hey-flipper": {
"enabled": true
},
"hey-rhasspy": {
"enabled": true
}
},
"udp_audio": "127.0.0.1:8888"
},
"satellite_site_ids": "rdc.raspberrypi3",
"system": "raven"
},
"rhasspy": {
"listen_on_start": true
}
}
Feel free to request more traces and tests if needed.
Thank you very much for your help.
Best regards
Hello,
I noticed some mistakes in my rdc.raspberrypi3 satellite.
The site id wasn't correctly set for microphone and handle.
I updated it like this:
{
"dialogue": {
"satellite_site_ids": "rdc.raspberrypi3"
},
"handle": {
"remote": {
"url": "http://192.168.1.253:1880/gestionIntent"
},
"satellite_site_ids": "rdc.raspberrypi3"
},
"intent": {
"system": "hermes"
},
"microphone": {
"pyaudio": {
"siteId": "rdc.raspberrypi3",
"udp_audio_host": "127.0.0.1",
"udp_audio_port": "8888"
},
"system": "pyaudio"
},
"mqtt": {
"enabled": "true",
"host": "192.168.1.253",
"port": "1885",
"site_id": "rdc.raspberrypi3"
},
"sounds": {
"aplay": {
"device": "pulse"
},
"system": "aplay",
"wake": "${RHASSPY_PROFILE_DIR}/beep_hi.wav"
},
"speech_to_text": {
"system": "hermes"
},
"text_to_speech": {
"satellite_site_ids": "rdc.raspberrypi3",
"system": "hermes"
},
"wake": {
"raven": {
"keywords": {
"hey-flipper": {
"enabled": true
}
},
"minimum_matches": "1",
"probability_threshold": "0.6",
"udp_audio": "127.0.0.1:8888",
"vad_sensitivity": "1"
},
"satellite_site_ids": "rdc.raspberrypi3",
"system": "raven"
},
"rhasspy": {
"listen_on_start": true
}
}
Now the behavior is:
I say the wake word (hey rhasspy)
Both satellites are catching it
I say a command ("donne moi l'heure" English --> "give me the time")
Both satellites are recognizing/handling the intent (GetTime)
Both satellites are sending the response via TTS
All of this happens on the same timeframe on my satellites.
Here are the logs:
Node: master
[DEBUG:2021-04-09 13:41:07,111] rhasspyserver_hermes: Sent 441 char(s) to websocket
[DEBUG:2021-04-09 13:41:05,781] rhasspyserver_hermes: Sent 437 char(s) to websocket
Node: rdc.raspberrypi3
[DEBUG:2021-04-09 13:36:27,027] rhasspyserver_hermes: Sent 437 char(s) to websocket
[DEBUG:2021-04-09 13:36:27,023] rhasspyserver_hermes: <- NluIntent(input="donne moi l'heure", intent=Intent(intent_name='GetTime', confidence_score=1.0), site_id='rdc.raspberrypi3', id=None, slots=[], session_id='rdc.raspberrypi3-hey-flipper-16260ade-35e0-47a4-84d0-965c76c344de', custom_data='hey-flipper', asr_tokens=[[AsrToken(value='donne', confidence=1.0, range_start=0, range_end=5, time=None), AsrToken(value='moi', confidence=1.0, range_start=6, range_end=9, time=None), AsrToken(value="l'heure", confidence=1.0, range_start=10, range_end=17, time=None)]], asr_confidence=None, raw_input="donne moi l'heure", wakeword_id='hey-flipper', lang=None)
[WARNING:2021-04-09 13:36:21,644] rhasspyserver_hermes: Dialogue management is disabled. ASR will NOT be automatically enabled.
[DEBUG:2021-04-09 13:36:21,643] rhasspyserver_hermes: <- HotwordDetected(model_id='/etc/rhasspy/profiles/fr/raven/hey-flipper/example-0.wav', model_version='', model_type='personal', current_sensitivity=0.6, site_id='rdc.raspberrypi3', session_id=None, send_audio_captured=None, lang=None)
Node: rdc.respeakercore2
[DEBUG:2021-04-09 11:41:07,146] rhasspyserver_hermes: Sent 441 char(s) to websocket
[DEBUG:2021-04-09 11:41:07,140] rhasspyserver_hermes: <- NluIntent(input="donne moi l'heure", intent=Intent(intent_name='GetTime', confidence_score=1.0), site_id='rdc.respeakercore2', id=None, slots=[], session_id='rdc.respeakercore2-hey-flipper-699c2a02-3302-4457-95af-4d20fd934fca', custom_data='hey-flipper', asr_tokens=[[AsrToken(value='donne', confidence=1.0, range_start=0, range_end=5, time=None), AsrToken(value='moi', confidence=1.0, range_start=6, range_end=9, time=None), AsrToken(value="l'heure", confidence=1.0, range_start=10, range_end=17, time=None)]], asr_confidence=None, raw_input="donne moi l'heure", wakeword_id='hey-flipper', lang=None)
[WARNING:2021-04-09 11:41:01,511] rhasspyserver_hermes: Dialogue management is disabled. ASR will NOT be automatically enabled.
[DEBUG:2021-04-09 11:41:01,506] rhasspyserver_hermes: <- HotwordDetected(model_id='/etc/rhasspy/profiles/fr/raven/hey-flipper/example-0.wav', model_version='', model_type='personal', current_sensitivity=0.6, site_id='rdc.respeakercore2', session_id=None, send_audio_captured=None, lang=None)
Dialogue management is disabled on my satellites, so I only configured "dialogue.group_separator": "." on the master.
Is that correct?
Thank you for your help
Best regards
I think I've figured out the problem here, and it will be fixed in the next update to 2.5.10.
During audio playback, it's possible for a second satellite to sneak in and start a new session. My automated tests didn't include audio, so I never saw it. I'm using locks now in the dialogue manager to prevent this behavior.
Great ! Thank you for the feedback.
Let's wait for the next update 🙂👍
Hopefully fixed by now. Please let me know if this is still a problem.
|
gharchive/issue
| 2020-09-22T13:07:59 |
2025-04-01T04:35:43.241151
|
{
"authors": [
"Romkabouter",
"benedikt-bartscher",
"jerome83136",
"joshward9182",
"oziee",
"synesthesiam",
"xBelladonna"
],
"repo": "rhasspy/rhasspy",
"url": "https://github.com/rhasspy/rhasspy/issues/113",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
590993373
|
Should --debug be a global or command specific option?
(base) Adams-MBP:services aroberts$ ./services --debug promote --from https://github.com/a-roberts/gitops-repo-testing --to https://github.com/a-roberts/staging --service service-a
2020/03/31 11:27:59 fatal: destination path 'gitops-repo-testing' already exists and is not an empty directory.
(base) Adams-MBP:services aroberts$ ./services promote --from https://github.com/a-roberts/gitops-repo-testing --to https://github.com/a-roberts/staging --service service-a --debug
Incorrect Usage: flag provided but not defined: -debug
NAME:
services promote - promote from one environment to another
USAGE:
services promote [command options] [arguments...]
OPTIONS:
--from value source Git repository
--to value destination Git repository
--service value service name to promote
--branch-name value the name to use for the newly created branch (default: "test-branch")
--cache-dir value where to cache Git checkouts (default: "~/.promotion/cache")
--commit-name value the name to use for commits when creating branches [$COMMIT_NAME]
--commit-email value the email to use for commits when creating branches [$COMMIT_EMAIL]
--help, -h show help (default: false)
2020/03/31 11:31:28 flag provided but not defined: -debug
You can see I had to provide it globally (so at the top level, ./services --debug), which means I can't stick it on the end. There was good discussion in Slack when I raised this (so @bigkevmcd and @mnuttall, let me know if I'm misquoting you or I've missed something important!).
Kevin suggested we could move to using cobra (I think that's an aside), and even provided the change we'd want to make (at the time of this issue, simply moving the flag from the global flag list to the promote flag list):
https://github.com/rhd-gitops-example/services/blob/master/cmd/services/main.go#L37 to https://github.com/rhd-gitops-example/services/blob/master/cmd/services/main.go#L85
so I think this makes for a useful first issue for somebody if we did want to make it so.
I'll have a go at making this a promote flag instead
Yeah, moving to cobra would be a bit more of a change (not a huge one) and I'm not convinced it'd be any easier to understand the flags bit, but, more folks are probably familiar with cobra :-)
Merged so let's close this one, thanks all
|
gharchive/issue
| 2020-03-31T10:39:38 |
2025-04-01T04:35:43.247620
|
{
"authors": [
"Megan-Wright",
"a-roberts",
"bigkevmcd"
],
"repo": "rhd-gitops-example/services",
"url": "https://github.com/rhd-gitops-example/services/issues/27",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1073132360
|
Whether to collaborate on the React work
All of the work assigned to me is finished, so I'd like to join the React side as a supporting member. Do you need extra people?
It looks like work on some of the components is falling behind.
Thank you.
|
gharchive/issue
| 2021-12-07T09:42:05 |
2025-04-01T04:35:43.249202
|
{
"authors": [
"BaekJae",
"jaejin28",
"rhdenro"
],
"repo": "rhdenro/jinddoback2",
"url": "https://github.com/rhdenro/jinddoback2/issues/60",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2690209213
|
🛑 BOS2 is down
In 4dc1a83, BOS2 ($URL_BOS2) was down:
HTTP code: 0
Response time: 0 ms
Resolved: BOS2 is back up in 0f554e7 after 25 minutes.
|
gharchive/issue
| 2024-11-25T10:42:58 |
2025-04-01T04:35:43.279141
|
{
"authors": [
"rholak"
],
"repo": "rholak/upptime",
"url": "https://github.com/rholak/upptime/issues/1582",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2599422519
|
🛑 BOS2 is down
In 0cd3e59, BOS2 ($URL_BOS2) was down:
HTTP code: 0
Response time: 0 ms
Resolved: BOS2 is back up in 3348ee6 after 26 minutes.
|
gharchive/issue
| 2024-10-19T17:17:42 |
2025-04-01T04:35:43.281464
|
{
"authors": [
"rholak"
],
"repo": "rholak/upptime",
"url": "https://github.com/rholak/upptime/issues/346",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
153635264
|
Having EAP Control Script
Shall we have an eapctl.sh for cases where we don't have EAP installed as a service, to set the bind-address and other settings we currently set in jboss_eap/main.yml?
IMHO, we want to be very prescriptive in this project, and that includes EAP installed as a service. At this time, I don't see value in supporting alternative mechanisms for installation/management.
in supporting alternative mechanisms for installation/management.
On May 8, 2016 12:41 AM, "Kamesh Sampath" notifications@github.com wrote:
Shall we have eapctl.sh in cases where we don't have EAP installed as service and set the
bind-address and other settings we do right now in the jboss_eap/main.yml.
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub
https://github.com/rhtconsulting/jboss_eap/issues/23
Got it.
|
gharchive/issue
| 2016-05-08T04:40:56 |
2025-04-01T04:35:43.300191
|
{
"authors": [
"kameshsampath",
"sherl0cks"
],
"repo": "rhtconsulting/jboss_eap",
"url": "https://github.com/rhtconsulting/jboss_eap/issues/23",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1072575596
|
[BUG] Error: can't find external program "python"
Describe the bug
The module relies on an external dependency named "python"
https://github.com/rhythmictech/terraform-terraform-errorcheck/blob/074abcbf0ff58b85c2a2a74340853b29c52bcb7c/main.tf#L2
which fails with:
│ Error: can't find external program "python"
│
│ with module.errorcheck_invalid.data.external.this,
│ on .terraform/modules/errorcheck_invalid/main.tf line 1, in data "external" "this":
│ 1: data "external" "this" {
Changing the command to "python3" in my case fixed the issue.
To Reproduce
Steps to reproduce the behavior:
Attempt to use this module without "python" in the path.
Expected behavior
terraform apply to finish without "internal" errors.
Desktop (please complete the following information):
OS: Ubuntu 21.04
Version: Terraform v1.0.11
Having the exact same issue in Terraform Cloud, but support told me jq is available, so in theory we can just set use_jq = true.
I see. This should be an easy fix. Add a variable for the first argument passed to program. Then you can pass in whatever python you want
I am having the same issue as well, so I set use_jq = true, but unfortunately that didn't work either.
I am still getting the same error:
@shubhisethi what are you passing in as the program?
@sblack4 I just tried it with python3 and it worked. Thanks for working on this issue.
|
gharchive/issue
| 2021-12-06T20:21:27 |
2025-04-01T04:35:43.332766
|
{
"authors": [
"gchamon",
"sblack4",
"sed-i",
"shubhisethi"
],
"repo": "rhythmictech/terraform-terraform-errorcheck",
"url": "https://github.com/rhythmictech/terraform-terraform-errorcheck/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
547339585
|
An error that could be linked to Python2-3 problems
So running a test, I got this error. Some other errors occurred before, but I fixed those. Any suggestions here?
File "test_frcnn_count.py", line 173, in main
list_files = sorted(get_file_names(img_path), key=lambda var:[int(x) if x.isdigit() else x for x in re.findall(r'[^0-9]|[0-9]+', var)])
TypeError: '<' not supported between instances of 'int' and 'str'
I have the same error, but I can't fix it.
I struggled with "test_frcnn_count.py" as well, but I finally made it work.
First, fix this line:
re.findall(r'[^0-9]|[0-9]+', var)]) -> re.findall(r'[^0-9]|[0-9]+', str(var))])
Then fix line 112:
with open(config_output_filename, 'r') -> with open(config_output_filename, 'rb')
Finally, fix lines 257-258:
cv2.rectangle(img_scaled,(x1, y1),(x2, y2), (class_to_color[key]),2) -> cv2.rectangle(img_scaled,(x1, y1),(x2, y2), (int(class_to_color[key][0]),int(class_to_color[key][1]),int(class_to_color[key][2])),2)
I think this code was written for Python 2.x, so I changed these lines.
I hope this helps you fix it.
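For reference, here is a small self-contained Python 3 sketch of a natural-sort key that avoids the int-vs-str comparison entirely by tagging each piece; it is a general workaround under the assumption that filenames are sortable strings, not the script's original code:

import re

def natural_key(name):
    # Split into digit and non-digit runs, and tag each piece so that an int
    # is never compared directly against a str (the cause of the TypeError).
    return [(0, int(part)) if part.isdigit() else (1, part)
            for part in re.findall(r'[0-9]+|[^0-9]+', str(name))]

files = ["frame10.jpg", "frame2.jpg", "frame1.jpg"]
print(sorted(files, key=natural_key))
# ['frame1.jpg', 'frame2.jpg', 'frame10.jpg']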
|
gharchive/issue
| 2020-01-09T09:01:00 |
2025-04-01T04:35:43.337006
|
{
"authors": [
"Janzeero-PhD",
"KimTaeHyeong-97"
],
"repo": "riadhayachi/faster-rcnn-keras",
"url": "https://github.com/riadhayachi/faster-rcnn-keras/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
498423744
|
Define gff3 specification
We need to write an explicit gff3 specification for our transcriptome gff3, with code to test this added to check_gff_fasta.py. Making transcriptome annotations compatible is the key to bringing in new organisms and to making data portable between different tools such as ORF finders.
I'm concerned that we're not following standard practice as described in gmod, in particular on IDs/parent, and on feature types such as UTR5 vs five_prime_utr.
We might want to add
start_codon
stop_codon
ORF (for upstream ORFs)
@ewallace will check with the Trips-viz team to see if we can make something compatible with their browser. This would make it easy to process output in Riboviz and visualize in Trips-viz.
Related to #25
trips-viz developers are willing to engage, so we should do that.
Confirmed that
TRIPS-Viz developers are engaging. @ewallace to set up a call with them and @FlicAnderson to figure next steps.
official gff3 specification has sequence ontology terms five_prime_UTR and three_prime_UTR, so we need to change the names in our gffs accordingly.
The checker check_gff_fasta.py currently has three constraints/tests (sketched in code below):
the beginning of every CDS is a start codon (ATG; translates to M)
the stop of every CDS is a stop codon (TAG, TGA, TAA; translates to *)
there are no stop codons internal to the CDS.
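A minimal plain-Python sketch of these three checks on a single CDS nucleotide string (illustration only - the real checker works from the FASTA and GFF files; the example sequence is a toy five-codon CDS):

STOP_CODONS = {"TAA", "TAG", "TGA"}

def check_cds(cds):
    # cds: upper-case nucleotide string whose length is a multiple of 3
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    issues = []
    if codons[0] != "ATG":
        issues.append("does not start with ATG")
    if codons[-1] not in STOP_CODONS:
        issues.append("does not end with a stop codon")
    if any(codon in STOP_CODONS for codon in codons[:-1]):
        issues.append("has an internal stop codon")
    return issues

print(check_cds("ATGGCCCACTGTTAA"))  # [] - passes all three checks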
We want to:
give UTR features as arguments, with default values from SO five_prime_UTR and three_prime_UTR
put each constraint in a separate function
add flexibility in start codon specification for non-ATG starts, so must check the codon not the translated amino acid
add flexibility in stop codon specification for non-TGA/TAG/TAA stops, ditto
add constraint to check UTRs (a) present and (b) longer than a user-specified length, default 13.
check uORFs in the same way as CDSs, if present
count how many transcripts have multiple CDSs
add constraint to check if CDSs or uORFs overlap
report out how many features pass each constraint
flags to allow filtering for each constraint
count all features and their lengths
We still have some questions:
do we want to list uORFs as uORFs or as CDSs with names e.g. YAL002W_uORF1? The issue is that generate_stats_figs.R currently only quantifies over CDSs, so uORF quantification requires change there.
how do we specify alternative in-frame starts, i.e. ORF extensions?
how do we test parent relationships, or feature names more generally. For example, should the five_prime_UTR of YAL002W be called YAL002W_five_prime_UTR?
Note that gffutils already has feature inspection code in inspection.py. Already, check_fasta_gff.py uses gffutils so this should help.
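For illustration, a rough sketch of the kind of feature inspection gffutils makes easy (the file name is hypothetical):

import gffutils

# Build an in-memory database from a GFF3 file and walk its CDS features.
db = gffutils.create_db("transcripts.gff3", dbfn=":memory:", keep_order=True)
for cds in db.features_of_type("CDS"):
    print(cds.seqid, cds.start, cds.end, cds.strand,
          cds.attributes.get("ID"), cds.attributes.get("Name"))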
Made a start on this, replacing the #23/#50 python scripts, in new branch gff-spec-74.
Some example files and a README.md are in data/annotationexamples in that branch. Incomplete as needs the transcript-centric gffs adding.
Would like to discuss with @mikej888 what else is needed in the spec. We could code this using Biopython and gffutils, which would then replace script_for_transcript_annotation.Rmd.
I'll explore the content in more detail when I've progressed #167, but to summarise my interpretation of the task to date....
Port script_for_transcript_annotation.Rmd into a Python script which:
Validates input genome GFF file (from #50, "First challenge will be to validate the input genome gff file. Our check_gff_fasta.py validates the transcriptome gff and fasta file. The upstream challenge in script_for_transcript_annotation.Rmd is to check that and then make valid transcriptome files.")
Creates transcriptome GFF/FASTA from genome GFF/FASTA files.
Uses Biopython and gffutils packages.
I agree with that interpretation.
Today I updated branch gff-spec-74 with transcript-centric gffs for examples YAL10 and YBL72ish in commit def937e. Those gffs and fasta files are generated from a mixture of R/tidyverse/unix bedtools code in a separate repository yeastutrgff. Some of the code in r/tidyverse is less clunky; but the intron removal we did quite separately using bedtools functions.
We should test gff and fasta compatibility with check_fasta_gff before anything else; I checked by eye that transcript lengths from the gff and fasta agreed, but not that CDS were in the correct position.
Also, I will email the fungidb developer team to find out how they produce their transcript information for the fungidb website.
Updated and fixed some bugs in the two examples:
YAL10_transcriptsfixed_R64-2-1_left18_right15
YBL72ish_transcripts_withUTRs
Still did not test with check_fasta_gff because I don't currently have access to an installation where that is working. Or, I have not understood something about how to call the code from the command line. We can discuss at dev meeting.
The syntax is now
$ python -m riboviz.tools.check_fasta_gff -f <FASTA_FILE> -g <GFF_FILE>
For example:
$ python -m riboviz.tools.check_fasta_gff -f vignette/input/yeast_YAL_CDS_w_250utrs.fa -g vignette/input/yeast_YAL_CDS_w_250utrs.gff3
Created by: RiboViz
Date: 2020-04-20 02:26:42.265614
Command-line tool: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
File: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
Version: commit 0ecd69ec0786eaeee78c4ce0999f4a181c775044 date 2020-03-04 05:45:40-08:00
Checking fasta file vignette/input/yeast_YAL_CDS_w_250utrs.fa
with gff file vignette/input/yeast_YAL_CDS_w_250utrs.gff3
YAL001C doesn't start with ATG.
YAL001C doesn't stop at end.
YAL001C has internal STOP.
$ python -m riboviz.tools.check_fasta_gff -f data/yeast_CDS_w_250utrs.fa -g data/yeast_CDS_w_250utrs.gff3
Created by: RiboViz
Date: 2020-04-20 02:27:04.606550
Command-line tool: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
File: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
Version: commit 0ecd69ec0786eaeee78c4ce0999f4a181c775044 date 2020-03-04 05:45:40-08:00
Checking fasta file data/yeast_CDS_w_250utrs.fa
with gff file data/yeast_CDS_w_250utrs.gff3
Q0050 has internal STOP.
Q0055 has internal STOP.
Q0060 has internal STOP.
Q0065 has internal STOP.
Q0070 has internal STOP.
Q0045 has internal STOP.
Q0075 doesn't start with ATG.
Q0075 has internal STOP.
Q0085 has internal STOP.
Q0110 has internal STOP.
Q0115 has internal STOP.
Q0120 has internal STOP.
Q0105 has internal STOP.
Q0140 has internal STOP.
Q0160 has internal STOP.
Q0250 has internal STOP.
Q0255 has internal STOP.
Q0275 has internal STOP.
A run on your dummy data:
YAL10_transcriptsfixed_R64-2-1_left18_right15:
$ git checkout gff-spec-74
$ python -m riboviz.tools.check_fasta_gff -f data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.fasta -g data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.gff
Created by: RiboViz
Date: 2020-04-20 02:37:48.831018
Command-line tool: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
File: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
Version: commit 5ed5018113923c0110c86a6ae8f97d099cee0ca6 date 2020-04-18 10:16:21+01:00
Checking fasta file data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.fasta
with gff file data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.gff
Line length of fasta file is not consistent! Inconsistent line found in >YAL001C_CDS at line 3.
Line length of fasta file is not consistent! Inconsistent line found in >YAL001C_CDS at line 3.
Line length of fasta file is not consistent! Inconsistent line found in >YAL001C_CDS at line 3.
Line length of fasta file is not consistent! Inconsistent line found in >YAL001C_CDS at line 3.
Line length of fasta file is not consistent! Inconsistent line found in >YAL001C_CDS at line 3.
Line length of fasta file is not consistent! Inconsistent line found in >YAL001C_CDS at line 3.
Line length of fasta file is not consistent! Inconsistent line found in >YAL001C_CDS at line 3.
Line length of fasta file is not consistent! Inconsistent line found in >YAL001C_CDS at line 3.
Line length of fasta file is not consistent! Inconsistent line found in >YAL001C_CDS at line 3.
Line length of fasta file is not consistent! Inconsistent line found in >YAL001C_CDS at line 3.
YBL72ish_transcripts_withUTRs:
$ python -m riboviz.tools.check_fasta_gff -f data/annotationexamples/YBL72ish_transcripts_withUTRs.fasta -g data/annotationexamples/YBL72ish_transcripts_withUTRs.gff
Created by: RiboViz
Date: 2020-04-20 02:39:03.714980
Command-line tool: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
File: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
Version: commit 5ed5018113923c0110c86a6ae8f97d099cee0ca6 date 2020-04-18 10:16:21+01:00
Checking fasta file data/annotationexamples/YBL72ish_transcripts_withUTRs.fasta
with gff file data/annotationexamples/YBL72ish_transcripts_withUTRs.gff
Fix issue with YAL10_transcriptsfixed_R64-2-1_left18_right15:
From How can I join the wrap lines in a huge FASTA file?, which concatenates all the sequences for a record into a single wrapped line:
$ cat data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.fasta | sed -e '/^>/s/$/@/' -e 's/^>/#/'|tr '\n' ' '|sed 's/ //g'|sed 's/@/\n/g'|sed 's/#/\n>/g'|sed '/^$/d' > YAL10_transcriptsfixed_R64-2-1_left18_right15_wrapped.fasta
$ python -m riboviz.tools.check_fasta_gff -f YAL10_transcriptsfixed_R64-2-1_left18_right15_wrapped.fasta -g data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.gff
Created by: RiboViz
Date: 2020-04-20 03:07:59.642646
Command-line tool: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
File: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
Version: commit 5ed5018113923c0110c86a6ae8f97d099cee0ca6 date 2020-04-18 10:16:21+01:00
Checking fasta file YAL10_transcriptsfixed_R64-2-1_left18_right15_wrapped.fasta
with gff file data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.gff
'YAL001C_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_wrapped.fasta.'
'YAL002W_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_wrapped.fasta.'
'YAL003W_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_wrapped.fasta.'
'YAL005C_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_wrapped.fasta.'
'YAL007C_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_wrapped.fasta.'
'YAL008W_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_wrapped.fasta.'
'YAL009W_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_wrapped.fasta.'
'YAL010C_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_wrapped.fasta.'
'YAL011W_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_wrapped.fasta.'
'YAL012W_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_wrapped.fasta.'
A preferable, in-Python approach uses format_fasta.py, which reads then writes the FASTA file. The output file has each sequence spread over multiple lines, but all the lines have the same length, with the possible, but acceptable, exception of the final line for a sequence:
from Bio import SeqIO
import sys
in_file = sys.argv[1]
out_file = sys.argv[2]
with open(in_file, "rt") as f:
    records = list(SeqIO.parse(f, "fasta"))
SeqIO.write(records, out_file, "fasta")
$ python format_fasta.py data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.fasta YAL10_transcriptsfixed_R64-2-1_left18_right15_formatted.fasta
$ python -m riboviz.tools.check_fasta_gff -f YAL10_transcriptsfixed_R64-2-1_left18_right15_formatted.fasta -g data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.gff
Created by: RiboViz
Date: 2020-04-20 03:05:47.342654
Command-line tool: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
File: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
Version: commit 5ed5018113923c0110c86a6ae8f97d099cee0ca6 date 2020-04-18 10:16:21+01:00
Checking fasta file YAL10_transcriptsfixed_R64-2-1_left18_right15_formatted.fasta
with gff file data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.gff
'YAL001C_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_formatted.fasta.'
'YAL002W_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_formatted.fasta.'
'YAL003W_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_formatted.fasta.'
'YAL005C_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_formatted.fasta.'
'YAL007C_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_formatted.fasta.'
'YAL008W_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_formatted.fasta.'
'YAL009W_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_formatted.fasta.'
'YAL010C_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_formatted.fasta.'
'YAL011W_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_formatted.fasta.'
'YAL012W_mRNA not in YAL10_transcriptsfixed_R64-2-1_left18_right15_formatted.fasta.'
Thanks, that was my mistake on YAL10 examples, both the sequence names and the absence of line wrapping. Try the update in 48b2fb6?
$ git log -1
commit 48b2fb6b390774d37b2acb297c1f3d005e586299 (HEAD -> gff-spec-74, origin/gff-spec-74)
Author: Edward Wallace <ewjwallace@gmail.com>
Date: Mon Apr 20 17:36:37 2020 +0100
YAL10 example, seqnames and text wrap
$ python -m riboviz.tools.check_fasta_gff -f data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.fasta -g data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.gff
Created by: RiboViz
Date: 2020-04-21 01:31:35.544253
Command-line tool: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
File: /home/ubuntu/riboviz/riboviz/tools/check_fasta_gff.py
Version: commit 48b2fb6b390774d37b2acb297c1f3d005e586299 date 2020-04-20 17:36:37+01:00
Checking fasta file data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.fasta
with gff file data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.gff
Sorted!
We agreed that these files, plus script for transcript annotation (#23) are enough starting material to test the gff3 specification and annotation task.
We could easily also add other fungal examples from fungidb data, perhaps a small selection of genes from S. pombe 972h-, which has UTR and intron annotations, for example the S. pombe act1 gene.
Later, we need to add more examples to cover:
multiple ORFs per transcript, #159 for bacteria.
drosophila examples covering bicistronic genes.
Today I discussed drosophila examples with Julie Aspden. Julie and her student Isabel will send us drosophila .fasta and .gff examples, hopefully in next week. From those we can obtain the bicistronic transcripts for testing.
@ewallace and @mikej888 discussed this work. Mike's initial task can be to write a script to calculate the positions of all codons in a gene. This would allow us to replace yeast_codon_pos_i200.RData (binary R file format with nested lists and which starts at position 201) with a plain-text three column table. For example, suppose we have coding sequences:
G1: ATG AAA TAA
G2: ATG GGG CCC TAG
ATG is the start codon; TAA and TAG are stop codons. Then the codon position table should read (check column names for consistency):
Gene Pos Codon
G1 1 ATG
G1 2 AAA
G1 3 TAA
G2 1 ATG
G2 2 GGG
G2 3 CCC
G2 4 TAG
i.e. a table with all codons and their positions. generate_stats_figs.R can then be updated to read this and use (a subset of) the data within the table.
The script to produce this should take both FASTA and GFF files and output this table. Remember that GFF positions are 1-indexed.
Check to see if Biopython has any useful functions to help, e.g. to chop a sequence into codons (see the sketch at the end of this comment).
Complement with unit tests.
More generally consider how these, and related functions, can be collected together into a coherent library for use by ourselves and others.
https://github.com/xryanglab/RiboCode may provide inspiration.
RiboViz overloads names to refer to genes, transcripts and coding sequences. It would be desirable to give genes, transcripts and coding sequences different names. GFF files record transcript names in the leftmost sequence column. This also records the names of features. Example names could include: YAL001C_<transcript-plus-12-minus-15>, or, following FungiDB <GENE>_t0,1,2,3 for transcripts. Naming patterns should be configurable.
Relates to @FlicAnderson work on #194 which will read this data.
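A minimal sketch of the codon-position extraction described above, assuming a transcript-centric FASTA/GFF pair like the examples in this thread (forward-strand CDS only; file names and the trailing write-out are illustrative):

import gffutils
import pandas as pd
from Bio import SeqIO

def cds_codon_positions(fasta_file, gff_file):
    # Build Gene / Pos / Codon rows for every CDS feature, with 1-indexed positions.
    seqs = SeqIO.to_dict(SeqIO.parse(fasta_file, "fasta"))
    db = gffutils.create_db(gff_file, dbfn=":memory:", keep_order=True)
    rows = []
    for cds in db.features_of_type("CDS"):
        # GFF coordinates are 1-indexed and end-inclusive.
        seq = str(seqs[cds.seqid].seq[cds.start - 1:cds.end]).upper()
        for i in range(0, len(seq) - len(seq) % 3, 3):
            rows.append({"Gene": cds.seqid, "Pos": i // 3 + 1, "Codon": seq[i:i + 3]})
    return pd.DataFrame(rows, columns=["Gene", "Pos", "Codon"])

# cds_codon_positions("transcripts.fa", "transcripts.gff3").to_csv(
#     "codon_positions.tsv", sep="\t", index=False)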
Merged current develop branch into gff-spec-74 and committed WIP version riboviz/extract_cds_codons.py which uses the GFF file CDS entries to get coding sequences then splits these into their codons and creates a DataFrame holding the data in the desired format. This DataFrame needs to be saved as a TSV file, the code refactored into more functions, the command-line part extracted out into riboviz.tools.extract_cds_codons.py (essentially copy check_fasta_gff.py) and tests need to be implemented.
Current state:
Refactored into functions, within riboviz/fasta_gff.py.
Moved riboviz/check_fasta_gff.py function into the above.
Added riboviz/tools/extract_cds_codons.py.
Ran on examples:
$ python -m riboviz.tools.extract_cds_codons \
-f data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.fasta \
-g data/annotationexamples/YAL10_transcriptsfixed_R64-2-1_left18_right15.gff -c testYAL.tsv
$ python -m riboviz.tools.extract_cds_codons \
-f data/annotationexamples/YBL72ish_transcripts_withUTRs.fasta \
-g data/annotationexamples/YBL72ish_transcripts_withUTRs.gff -c testYBL.tsv
$ python -m riboviz.tools.extract_cds_codons \
-f vignette/input/yeast_YAL_CDS_w_250utrs.fa \
-g vignette/input/yeast_YAL_CDS_w_250utrs.gff3 -c testVignetteYeastYal.tsv
$ python -m riboviz.tools.extract_cds_codons \
-f data/yeast_CDS_w_250utrs.fa \
-g data/yeast_CDS_w_250utrs.gff3 -c testDataYeastCds.tsv
Started implementing unit tests. Includes test for G1, G2 example above.
Questions:
It is CDS (coding sequence) codons not all sequence codons?
Should CDS start and stop codons themselves be included?
check_fasta_gff-related questions:
Should padding out the sequence with N to a length divisible by three be done? If not, then how should such sequences be handled? Currently implemented. But should it be padded at the end of the CDS, after the stop codon? Or before the stop codon?
What about the other check_fasta_gff checks? Not implemented.
Can multiple CDS arise in a single transcript/gene? If so then how should numbering scheme work? Will the NAME attribute be different for each in the GFF? Should we include a CDS index in the TSV file?
What name do you want for the Gene in the TSV file? At present, the sequence ID, from the GFF file, is used e.g. YAL001C_mRNA. Could use attributes["Name"] value i.e. YAL001C_CDS from the GFF file? What if there is no attribute?
Extend to support configurable naming pattern:
RiboViz overloads names to refer to genes, transcripts and coding sequences.
Give genes, transcripts and coding sequences different names.
GFF files record transcript names in the leftmost sequence column.
This also records the names of features.
Example names could include: YAL001C_<transcript-plus-12-minus-15>, or, following FungiDB <GENE>_t0,1,2,3 for transcripts.
Change data input file yeast_codon_pos_i200.RData to alternative file format #194, "the choice to ignore the first 200 codon positions, and genes shorter than 200nt, is hard-coded in both the data-creation and data-analysis code. This should be a parameter."
Does that comment relate to this?
It is CDS (coding sequence) codons not all sequence codons?
Yes.
Should CDS start and stop codons themselves be included?
start - yes.
stop - ideally, that should be a flag. default value yes I think?
check_fasta_gff-related questions:
Should padding out the sequence with N to a length divisible by three be done? If not, then how should such sequences be handled? Currently implemented. But should it be padded at the end of the CDS, after the stop codon? Or before the stop codon?
No, if there is an incomplete codon, we should not count it. Possibly that could be a flag, but I can't imagine a use-case, unless @shahpr or @lianafaye suggest otherwise?
What about the other check_fasta_gff checks? Not implemented.
Can multiple CDS arise in a single transcript/gene? If so then how should numbering scheme work? Will the NAME attribute be different for each in the GFF? Should we include a CDS index in the TSV file?
What name do you want for the Gene in the TSV file? At present, the sequence ID, from the GFF file, is used e.g. YAL001C_mRNA. Could use attributes["Name"] value i.e. YAL001C_CDS from the GFF file? What if there is no attribute?
Extend to support configurable naming pattern:
RiboViz overloads names to refer to genes, transcripts and coding sequences.
Give genes, transcripts and coding sequences different names.
GFF files record transcript names in the leftmost sequence column.
This also records the names of features.
Example names could include: YAL001C_<transcript-plus-12-minus-15>, or, following FungiDB <GENE>_t0,1,2,3 for transcripts.
Change data input file yeast_codon_pos_i200.RData to alternative file format #194, "the choice to ignore the first 200 codon positions, and genes shorter than 200nt, is hard-coded in both the data-creation and data-analysis code. This should be a parameter."
Does that comment relate to this?
Progress on refactoring check_fasta_gff functionality.
Reports on:
CDS has length not divisible by 3.
Beginning of a CDS does not have a start codon (ATG).
End codon of the CDS is not a stop codon (TAG, TGA, TAA).
Stop codons internal to the CDS.
CDS has no ID or Name attribute.
CDS has non-unique ID attribute.
Multiple CDS for a sequence.
If no feature ID attribute then Name is used in TSV file. If no Name attribute, then the token Undefined is used.
Outputs TSV file with Sequence ID, Feature ID, Issue columns.
Progress at commit f6ce9d0, refactoring check_fasta_gff functionality.
Additionally reports on Sequence defined in GFF file missing in FASTA file.
Added riboviz/test/data/ test files test_fasta_gff_check.gff and test_fasta_gff_check.fasta.
Added tests to riboviz.test.test_fasta_gff
I ran this and it produced the same output as Mike:
python -m riboviz.tools.get_cds_codons_file -f riboviz/test/data/test_fasta_gff_codons.fasta -g riboviz/test/data/test_fasta_gff_codons.gff -c test_cds.txt
head test_cds.txt -n 20
# Created by: RiboViz
# Date: 2020-10-14 14:46:26.835204
# Command-line tool: /homes/ewallac2/riboviz/riboviz/tools/get_cds_codons_file.py
# File: /homes/ewallac2/riboviz/riboviz/fasta_gff.py
# Version: commit f6ce9d043d030ac7a13a02651f34ed3d05b37c08 date 2020-09-16 09:19:35-07:00
Gene Codon Pos
YAL001C_CDS ATG 1
YAL001C_CDS GCC 2
YAL001C_CDS CAC 3
YAL001C_CDS TGT 4
YAL001C_CDS TAA 5
YAL002C_CDS ATG 1
YAL002C_CDS GTA 2
YAL002C_CDS TCA 3
YAL002C_CDS GGA 4
YAL002C_CDS TAG 5
YAL004CSingleCodonCDS_CDS ATG 1
YAL004CSingleCodonCDS_CDS AGA 2
YAL004CSingleCodonCDS_CDS TGA 3
YAL005CMultiCDS_CDS_1 ATG 1
I could also run:
python -m riboviz.tools.check_fasta_gff -f riboviz/test/data/test_fasta_gff_codons.fasta -g riboviz/test/data/test_fasta_gff_codons.gff
This returns:
Created by: RiboViz
Date: 2020-10-14 14:51:29.662039
Command-line tool: /homes/ewallac2/riboviz/riboviz/tools/check_fasta_gff.py
File: /homes/ewallac2/riboviz/riboviz/tools/check_fasta_gff.py
Version: commit f6ce9d043d030ac7a13a02651f34ed3d05b37c08 date 2020-09-16 09:19:35-07:00
Sequence YAL003CMissingGene_mRNA missing in FASTA file
Sequence YAL005CMultiCDS_mRNA has multiple CDS
Sequence YAL007CBadLength_mRNA feature YAL007CBadLength_CDS has length not divisible by 3
Sequence YAL007CBadLength_mRNA feature YAL007CBadLength_CDS doesn't stop at end
Sequence YAL008CNoIdNameAttr_mRNA feature Undefined has no 'ID' or 'Name' attribute
Sequence YAL009CMultiDuplicateCDS_mRNA has multiple CDS
Sequence YAL009CMultiDuplicateCDS_mRNA has non-unique 'ID' attribute YAL009CMultiDuplicateCDS_CDS
TODOs after discussion with @ewallace to answer preceding questions:
get_cds_codons:
Provide feature_format as a parameter, default value {}_CDS.
check_fasta_gff:
Provide a flag to allow which of ID or Name are to be used in output reports if both are defined.
Provide feature_format as a parameter, default value {}_CDS, if a feature name is undefined (i.e. same behaviour as get_cds_codons replacing current Undefined parameter).
For non-unique feature IDs have one output report to cover all the non-unique feature IDs.
Add a Notes column to the output file with additional information e.g. number of duplicates, from the above.
Add the free-text printed to standard output to this Notes column too.
Iterate through FASTA file and check any sequences in FASTA file that have no entries in GFF file.
Get sequence column identifiers from the GFF file.
Get list of sequence names from the FASTA file.
Calculate FASTA - GFF names.
In light of "Solve issue with and add e. coli data" example-datasets/12, allow specification of non-canonical start codons. check_fasta_gff needs to be extended to allow non-canonical start codons as arguments (e.g. GTG or TTG; V or L after translation). It would make the code clearer too if we checked for start codons directly rather than their translations.
TODOs as of 03/11/20:
get_cds_codons:
Provide feature_format as a parameter, default value {}_CDS.
check_fasta_gff:
In light of "Solve issue with and add e. coli data" example-datasets/12, allow specification of non-canonical start codons. check_fasta_gff needs to be extended to allow non-canonical start codons as arguments (e.g. GTG or TTG; V or L after translation). It would make the code clearer too if we checked for start codons directly rather than their translations (see the sketch after this list).
Provide a flag to allow which of ID or Name are to be used in output reports if both are defined.
Provide feature_format as a parameter, default value {}_CDS, if a feature name is undefined (i.e. same behaviour as get_cds_codons replacing current Undefined parameter).
For non-unique feature IDs have one output report to cover all the non-unique feature IDs.
Add a Notes column to the output file with additional information e.g. number of duplicates, from the above.
Add the free-text printed to standard output to this Notes column too.
Iterate through FASTA file and check any sequences in FASTA file that have no entries in GFF file.
Get sequence column identifiers from the GFF file.
Get list of sequence names from the FASTA file.
Calculate FASTA - GFF names.
Provide suitable error messages for missing/empty files.
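For the non-canonical start codon item, the change amounts to comparing the first codon against a configurable set instead of translating it and testing for 'M' - roughly (a sketch; names are illustrative):

def starts_with_allowed_codon(cds_seq, start_codons=("ATG",)):
    # start_codons could come from a --start-codon flag, e.g. ("ATG", "GTG", "TTG")
    return str(cds_seq[:3]).upper() in start_codons

print(starts_with_allowed_codon("GTGAAATAA", ("ATG", "GTG", "TTG")))  # True
print(starts_with_allowed_codon("GTGAAATAA"))                         # False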
Completed as of commit c49ff73:
get_cds_codons:
--cds-feature-format <FORMAT> is now supported (default {}_CDS) to format sequence ID if no feature ID or name can be found.
check_fasta_gff:
--feature-format <FORMAT> is now supported (default {}_CDS) to format sequence ID if no feature ID or name can be found.
riboviz.check_fasta_gff reports number of duplicated feature IDs.
TSV now has additional Data column which can be used for JSON-strings with additional data about an issue e.g. number of duplicated feature IDs. Updates to tests and printing output still to be done.
Completed as of commit d13fe54:
check_fasta_gff:
TSV Data column is used for arbitrary data.
Duplicate feature IDs (DuplicateFeatureId) reported for each sequence.
Summary issue (DuplicateFeatureIds) records count of all such duplications.
Sequences defined in the FASTA file that have no related features in the GFF file are reported.
Removed translation of sequences. Sequences are now split into codons and these used for start and stop codon-related checks.
Commit: 77a896b, added https://github.com/riboviz/riboviz/blob/gff-spec-74/docs/user/check-fasta-gff.md.
Completed as of commit 6958b97:
Renamed riboviz.tools.get_cds_codons_file => riboviz.tools.get_cds_codons.
riboviz.tools.get_cds_codons and riboviz.tools.check_fasta_gff allow user to specify which of feature ID or Name are to be used in output reporting if both are defined.
Completed as of commit 2b888b2...
Added missing and empty file handling to riboviz.get_cds_codons and riboviz.check_fasta_gff.
Updated riboviz.check_fasta_gff to:
Record supplementary data about additional issues
NO_START_CODON: actual codon found.
NO_STOP_CODON: actual codon found.
MULTIPLE_CDS: count of the number of CDSs found.
Support list of allowed start codons, default ['ATG'].
Updated riboviz.tools.check_fasta_gff to support an optional --start-codon flag to provide a list of allowed start codons e.g. --start-codon ATG AAA. Defaults to ATG. @acope3 please try this for "Solve issue with and add e. coli data", https://github.com/riboviz/example-datasets/issues/12
@ewallace will ask his honours students to test this in the new year.
Reminder! We will ask @3mma-mack and @swinterbourne to test this, as part of adding S. pombe data example-datasets#21.
I'm testing this. It works.
python -m riboviz.tools.check_fasta_gff \
-f ../example-datasets/fungi/cryptococcus/annotation/H99_10p_up12dwn9_CDS_with_120bputrs.fa \
-g ../example-datasets/fungi/cryptococcus/annotation/H99_10p_up12dwn9_CDS_120bpL_120bpR.gff3
returns no issues
Running on the issue from https://github.com/riboviz/example-datasets/issues/12
python -m riboviz.tools.check_fasta_gff \
-f ../example-datasets/bacteria/ecoli/annotations/Ecoli_REL606_CDS_w_25utrs.fa \
-g ../example-datasets/bacteria/ecoli/annotations/Ecoli_REL606_CDS_w_25utrs.gff3
returns results:
Sequence ECB_02723 feature ECB_02723 has length not divisible by 3
Sequence ECB_02723 feature ECB_02723 doesn't end with a recognised stop codon but with ANN
Sequence ECB_02723 feature ECB_02723 has an internal stop codon
Sequence ECB_00018 feature ECB_00018 doesn't start with a recognised start codon but with GTG
Sequence ECB_00021 feature ECB_00021 doesn't start with a recognised start codon but with GTG
...
Specifying more start codons, ATG/TTG/GTG:
python -m riboviz.tools.check_fasta_gff \
-f ../example-datasets/bacteria/ecoli/annotations/Ecoli_REL606_CDS_w_25utrs.fa \
-g ../example-datasets/bacteria/ecoli/annotations/Ecoli_REL606_CDS_w_25utrs.gff3 \
--start-codon ATG TTG GTG
results in (complete output here):
Created by: RiboViz
Date: 2021-01-16 09:23:19.720064
Command-line tool: /Users/edwardwallace/Repos/riboviz/riboviz/riboviz/tools/check_fasta_gff.py
File: /Users/edwardwallace/Repos/riboviz/riboviz/riboviz/tools/check_fasta_gff.py
Version: commit 6b4cfda7a250898f7d216d1a914d4b6ed9f01909 date 2021-01-12 07:25:44-08:00
Sequence ECB_02723 feature ECB_02723 has length not divisible by 3
Sequence ECB_02723 feature ECB_02723 doesn't end with a recognised stop codon but with ANN
Sequence ECB_02723 feature ECB_02723 has an internal stop codon
Sequence ECB_00526 feature ECB_00526 doesn't start with a recognised start codon but with CTG
Sequence ECB_00820 feature ECB_00820 doesn't start with a recognised start codon but with CTG
Sequence ECB_00835 feature ECB_00835 doesn't start with a recognised start codon but with CTG
Sequence ECB_01432 feature ECB_01432 has an internal stop codon
Sequence ECB_01627 feature ECB_01627 doesn't end with a recognised stop codon but with AGC
Sequence ECB_01907 feature ECB_01907 doesn't start with a recognised start codon but with CTG
Sequence ECB_01909 feature ECB_01909 doesn't start with a recognised start codon but with CTG
Sequence ECB_03779 feature ECB_03779 has an internal stop codon
Sequence ECB_03951 feature ECB_03951 has an internal stop codon
I spot-checked: ECB_01432/fdnG and ECB_03779/fdhF and ECB_03951/fdhF each have a UGA codon which encodes selenocysteine, not stop codons. So this is giving sensible results on real data.
Running on the tiny simulated dataset:
python -m riboviz.tools.check_fasta_gff \
-f ../example-datasets/simulated/mok/annotation/tiny_2genes_20utrs.fa \
-g ../example-datasets/simulated/mok/annotation/tiny_2genes_20utrs.gff3
detects no problems.
Running a biologically erroneous alternative on the same dataset:
python -m riboviz.tools.check_fasta_gff \
-f ../example-datasets/simulated/mok/annotation/tiny_2genes_20utrs.fa \
-g ../example-datasets/simulated/mok/annotation/tiny_2genes_20utrs.gff3 \
--start-codon AAA
returns appropriate results (here in full):
Created by: RiboViz
Date: 2021-01-16 09:49:49.650282
Command-line tool: /Users/edwardwallace/Repos/riboviz/riboviz/riboviz/tools/check_fasta_gff.py
File: /Users/edwardwallace/Repos/riboviz/riboviz/riboviz/tools/check_fasta_gff.py
Version: commit 6b4cfda7a250898f7d216d1a914d4b6ed9f01909 date 2021-01-12 07:25:44-08:00
Sequence MAT feature MAT doesn't start with a recognised start codon but with ATG
Sequence MIKE feature MIKE doesn't start with a recognised start codon but with ATG
So the logic of the program seems great.
I have some requests on output formatting that should be easy?
Can we add the parameters to the header, both for stdout and the tsv header? So print
fasta file: FASTA
gff file: GFF
start codon: START_CODON
I'm not sure if any other params are needed?
Could we calculate summary information: number of issues of each type? Something like:
print("Issue\tCount\n")
for (issue_type, issue_data) in issues:
if issue_type in ISSUE_FORMATS:
n_issues_of_type = unique(issue_data) # count the number of issues of that type
print(issue_type + "\t" + n_issues_of_type + "\n")
Then this summary info could be printed in the header in stdout and the .tsv. I am wondering if this could replace writing out features to stdout, or if it's worth wrapping in a verbose flag? The point is it's helpful to have a simple output for files with no issues, reminding the user "here's the issues we checked for and there aren't any".
Lastly, note that this has been refactored, so closing this issue will also close #25.
Changes as of commit a63c24c:
FASTA file, GFF file and start codons printed and added to file header.
Issue counts calculated, printed, added to file header.
Issue details only printed, if a -v verbose flag is provided.
Example with -v:
$ python -m riboviz.tools.check_fasta_gff \
-f data/yeast_CDS_w_250utrs.fa \
-g data/yeast_CDS_w_250utrs.gff3 \
-o check_data_CDS.tsv -v
...
Configuration:
fasta_file data/yeast_CDS_w_250utrs.fa
gff_file data/yeast_CDS_w_250utrs.gff3
start_codons ['ATG']
Issue summary:
Issue Count
InternalStopCodon 17
NoStartCodon 1
Issue details:
Sequence Q0050 feature Q0050 has an internal stop codon
Sequence Q0055 feature Q0055 has an internal stop codon
Sequence Q0060 feature Q0060 has an internal stop codon
Sequence Q0065 feature Q0065 has an internal stop codon
Sequence Q0070 feature Q0070 has an internal stop codon
Sequence Q0045 feature Q0045 has an internal stop codon
Sequence Q0075 feature Q0075 doesn't start with a recognised start codon but with ATA
Sequence Q0075 feature Q0075 has an internal stop codon
Sequence Q0085 feature Q0085 has an internal stop codon
Sequence Q0110 feature Q0110 has an internal stop codon
Sequence Q0115 feature Q0115 has an internal stop codon
Sequence Q0120 feature Q0120 has an internal stop codon
Sequence Q0105 feature Q0105 has an internal stop codon
Sequence Q0140 feature Q0140 has an internal stop codon
Sequence Q0160 feature Q0160 has an internal stop codon
Sequence Q0250 feature Q0250 has an internal stop codon
Sequence Q0255 feature Q0255 has an internal stop codon
Sequence Q0275 feature Q0275 has an internal stop codon
If -v is omitted then Issue details: and its subsequent text is not printed.
$ cat check_data_CDS.tsv
...
# fasta_file: data/yeast_CDS_w_250utrs.fa
# gff_file: data/yeast_CDS_w_250utrs.gff3
# start_codons: ['ATG']
# InternalStopCodon: 17
# NoStartCodon: 1
Sequence Feature Issue Data
Q0050 Q0050 InternalStopCodon
Q0055 Q0055 InternalStopCodon
Q0060 Q0060 InternalStopCodon
Q0065 Q0065 InternalStopCodon
Q0070 Q0070 InternalStopCodon
Q0045 Q0045 InternalStopCodon
Q0075 Q0075 NoStartCodon ATA
Q0075 Q0075 InternalStopCodon
Q0085 Q0085 InternalStopCodon
Q0110 Q0110 InternalStopCodon
Q0115 Q0115 InternalStopCodon
Q0120 Q0120 InternalStopCodon
Q0105 Q0105 InternalStopCodon
Q0140 Q0140 InternalStopCodon
Q0160 Q0160 InternalStopCodon
Q0250 Q0250 InternalStopCodon
Q0255 Q0255 InternalStopCodon
Q0275 Q0275 InternalStopCodon
I tested this and it works delightfully.
Is it possible to include in the summary the issues that were checked for with zero counts?
For example, editing the current output:
$ python -m riboviz.tools.check_fasta_gff \
> -f ../example-datasets/simulated/mok/annotation/tiny_2genes_20utrs.fa \
> -g ../example-datasets/simulated/mok/annotation/tiny_2genes_20utrs.gff3
Created by: RiboViz
Date: 2021-01-19 18:20:53.048658
Command-line tool: /exports/eddie3_homes_local/ewallac2/riboviz/riboviz/riboviz/tools/check_fasta_gff.py
File: /exports/eddie3_homes_local/ewallac2/riboviz/riboviz/riboviz/tools/check_fasta_gff.py
Version: commit a63c24cfb46fd21e73cc4bbc29d5ea31b2f01fde date 2021-01-19 05:17:54-08:00
Configuration:
fasta_file ../example-datasets/simulated/mok/annotation/tiny_2genes_20utrs.fa
gff_file ../example-datasets/simulated/mok/annotation/tiny_2genes_20utrs.gff3
start_codons ['ATG']
Issue summary:
Issue Count
NoStartCodon 0
OtherIssue 0
My point is that an output that says "we checked this and there are no problems of this type" is more informative than an output that says " ".
Commit: 666d1f7. Prints ordered list of counts for all issues, including those which were zero.
$ python -m riboviz.tools.check_fasta_gff \
-f data/yeast_CDS_w_250utrs.fa \
-g data/yeast_CDS_w_250utrs.gff3 \
-o check_data_CDS.tsv -v
...
Configuration:
fasta_file data/yeast_CDS_w_250utrs.fa
gff_file data/yeast_CDS_w_250utrs.gff3
start_codons ['ATG']
Issue summary:
Issue Count
InternalStopCodon 17
NoStartCodon 1
MultipleCDS 0
SequenceNotInGFF 0
IncompleteFeature 0
NoStopCodon 0
DuplicateFeatureId 0
NoIdName 0
DuplicateFeatureIds 0
SequenceNotInFASTA 0
...
Beautiful!
|
gharchive/issue
| 2019-09-25T17:30:39 |
2025-04-01T04:35:43.424520
|
{
"authors": [
"ewallace",
"mikej888"
],
"repo": "riboviz/riboviz",
"url": "https://github.com/riboviz/riboviz/issues/74",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2082812724
|
Add missing Descriptions
The following language metadata is missing a fitting description:
Type System:
[ ] Safe
[ ] Strong
[ ] Weak
Syntax Style:
[ ] Assembly
[ ] Lisp
[ ] Pascal
[ ] Perl
Applications:
[ ] Mobile
[ ] Apple
[ ] General
[ ] Client
[ ] Server
[ ] Microsoft
[ ] Games
[ ] Web
[ ] System
[ ] Desktop
[ ] Fun
[ ] Education
[ ] Ai
[ ] Science
[ ] Scripts
Paradigms:
[ ] Object Oriented
[ ] Functional
[ ] Generic
[ ] Imperative
[ ] Structured
[ ] Procedural
[ ] Declarative
[ ] Event Driven
[ ] Reflective
[ ] Task Driven
[ ] Concurrent
[ ] Natural Language
Also, most of them have their own section on Wikipedia, so a "more information" link might also be a good addition.
|
gharchive/issue
| 2024-01-15T23:47:57 |
2025-04-01T04:35:43.434634
|
{
"authors": [
"ricardoboss"
],
"repo": "ricardoboss/Prolangle",
"url": "https://github.com/ricardoboss/Prolangle/issues/86",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
233066782
|
add mobilenet please
pytorch-mobilenet: PyTorch MobileNet Implementation of "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", 1704.04861
Thanks a lot! :)
|
gharchive/issue
| 2017-06-02T03:25:32 |
2025-04-01T04:35:43.462103
|
{
"authors": [
"acgtyrant",
"rickiepark"
],
"repo": "rickiepark/awesome-pytorch",
"url": "https://github.com/rickiepark/awesome-pytorch/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
237962468
|
Fix/optimize search method (current search method does work)
$search seems not to work when passed in a GET request, or it's just me (I did test this, however), but this needs further investigation. The current search works with startswith but causes multiple requests to be sent (depending on the number of values to be searched and the SageOne entity property count) because the GET request parameters become too long; maybe the parameters can be passed inside the request entity (if a GET request has a body).
Will keep this issue open. Old documentation I grabbed for SA's SageOne API version 2.0.1 in 2014 seems to state that the API host doesn't support $search, or something like that. Will keep the issue open for a while; maybe a solution can be found by contacting Sage One API SA support staff.
|
gharchive/issue
| 2017-06-22T20:21:01 |
2025-04-01T04:35:43.498717
|
{
"authors": [
"ricomaster9000"
],
"repo": "ricomaster9000/sageOneApiLibrary-GLOBAL",
"url": "https://github.com/ricomaster9000/sageOneApiLibrary-GLOBAL/issues/7",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1252338381
|
Mobile Code Extension
Is your feature request related to a problem? Please describe.
Mobile Code Extension
Describe the solution you'd like.
I will make a mobile code extension in which the user will type the name of a country and on clicking the button will get the mobile code of that country.
Describe alternatives you've considered.
.
Add any other context or screenshots about the feature request here.
I am contributor at GSSoC '22. Please assign this issue to me.
@praniti111 @ridsuteri @Akshima-Ghai
@Sukriti-m I think you have added a similar extension with the country code name?
That contained short country codes. For example, France can be written as FRA, India as IND.
But this will contain the phone number codes of different countries, such as +91 for India.
oh ohk sorry my bad, go ahead :)
|
gharchive/issue
| 2022-05-30T07:52:32 |
2025-04-01T04:35:43.520572
|
{
"authors": [
"Sukriti-m",
"ridsuteri"
],
"repo": "ridsuteri/Awesome-Chrome-Extensions",
"url": "https://github.com/ridsuteri/Awesome-Chrome-Extensions/issues/420",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
337132390
|
[UnknownApiError] Endpoint request timed out on QPU
I was running some optimization program on QPU and after some time I got this error:
UnknownApiError Traceback (most recent call last)
in ()
35 'xatol': 0.01,
36 'return_all': False,
---> 37 'fatol': 0.01})
/anaconda3/lib/python3.6/site-packages/scipy/optimize/_minimize.py in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options)
473 callback=callback, **options)
474 elif meth == 'nelder-mead':
--> 475 return _minimize_neldermead(fun, x0, args, callback, **options)
476 elif meth == 'powell':
477 return _minimize_powell(fun, x0, args, callback, **options)
/anaconda3/lib/python3.6/site-packages/scipy/optimize/optimize.py in _minimize_neldermead(func, x0, args, callback, maxiter, maxfev, disp, return_all, initial_simplex, xatol, fatol, **unknown_options)
594 fsim = numpy.take(fsim, ind, 0)
595 if callback is not None:
--> 596 callback(sim[0])
597 iterations += 1
598 if retall:
in callback_func(input_params)
14 global Nfeval
15 global min_loss_history
---> 16 loss = targetfunc_q0(input_params)
17 list_display = [Nfeval]
18 list_display.extend(input_params)
in targetfunc_q0(params)
5 print('Group 0')
6 for input_vec in group0:
----> 7 qpu_prob = evaluate_q0(input_vec, params, qubits_chosen)
8 prob_group0 = prob_group0 + [qpu_prob]
9
in evaluate_q0(input_vec, params, qubits_chosen)
21 count += 1
22
---> 23 qpu_result = qpu.get_job(qpu_job_id).result()
24 qpu_prob = float(qpu_result.count([1, 0])+qpu_result.count([1, 1]))/float(N_RUNS)
25 print("\nProbability: {}".format(qpu_prob))
/anaconda3/lib/python3.6/site-packages/pyquil-1.8.0-py3.6.egg/pyquil/api/qpu.py in get_job(self, job_id)
289 :rtype: Job
290 """
--> 291 response = get_json(self.session, self.async_endpoint + "/job/" + job_id)
292 return Job(response.json(), 'QPU')
293
/anaconda3/lib/python3.6/site-packages/pyquil-1.8.0-py3.6.egg/pyquil/api/_base_connection.py in get_json(session, url)
76 res = session.get(url)
77 if res.status_code >= 400:
---> 78 raise parse_error(res)
79 return res
80
/anaconda3/lib/python3.6/site-packages/pyquil-1.8.0-py3.6.egg/pyquil/api/_base_connection.py in parse_error(res)
103
104 if 'error_type' not in body:
--> 105 raise UnknownApiError(str(body))
106
107 error_type = body['error_type']
UnknownApiError: {'message': 'Endpoint request timed out'}
The server has failed to return a proper response. Please describe the problem
and copy the above message into a GitHub issue at:
https://github.com/rigetticomputing/pyquil/issues
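This looked like a transient gateway timeout from the job-status poll rather than a problem with the job itself, so one workaround at the time was simply to retry the poll. A sketch only - it assumes the qpu.get_job() call shown in the traceback and catches the API error generically:

import time

def get_job_with_retry(qpu, job_id, max_tries=5, delay=10):
    # Retry transient "Endpoint request timed out" errors when polling a job.
    last_error = None
    for _ in range(max_tries):
        try:
            return qpu.get_job(job_id)
        except Exception as err:  # e.g. the UnknownApiError above
            last_error = err
            time.sleep(delay)
    raise RuntimeError("giving up on job {}: {}".format(job_id, last_error))

The direct qpu.get_job(qpu_job_id).result() call in the optimization loop could then go through this wrapper.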
Thanks @yudongcao, sorry for the late response. As you likely know, this issue shouldn't persist with our new SDK :) let me know if you run into any issues.
|
gharchive/issue
| 2018-06-29T20:45:46 |
2025-04-01T04:35:43.538958
|
{
"authors": [
"ryankarle",
"yudongcao"
],
"repo": "rigetticomputing/pyquil",
"url": "https://github.com/rigetticomputing/pyquil/issues/491",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1413582679
|
Create bert-codegen.h
Orig: ring/language/include/codegen.h
New: ring/language/include/bert-codegen.h ==> codegen.h
Adds "^^" as power function example 4^^3 => 64
Uses "void ring_vm_explog ( VM *pVM )"
Hello Bert
I don't plan to add this operator at the current stage
Will think about this in the future once I determine the approach that matches the language design
For example we have many functions in this chapter : https://ring-lang.github.io/doc1.17/mathfunc.html
One of the ideas is to keep using them while introducing specific VM instruction instead of the (function call)
to get the same performance.
I will close this for now, and will check them in the future.
Thanks
Hello Bert
Thanks for your contribution, Now Ring 1.18 support the ^^ operator
|
gharchive/pull-request
| 2022-10-18T17:23:08 |
2025-04-01T04:35:43.567464
|
{
"authors": [
"MahmoudFayed",
"umariani"
],
"repo": "ring-lang/ring",
"url": "https://github.com/ring-lang/ring/pull/1478",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
428313598
|
logrotate command does not close log
Issuing logrotate has no impact on my debug.log (i.e., it does not close the log). If I manually rm -f /var/log/rippled/debug.log, then issuing logrotate successfully reopens the log, so it seems like an issue with closing the log file.
Documentation indicates that the logrotate command "closes and reopens the log file."
I can replicate the issue on two servers, both running rippled 1.2.2 & CentOS 7.
As a minor addition, the compulsive part of me thinks it makes sense for the command to be log_rotate for overall consistency.
I can confirm that the same issue occurs in Ubuntu 16.04
Further investigation (thanks @mellery451 ) shows that the rippled logrotate method is supposed to be used with external log rotation mechanisms. See : https://developers.ripple.com/logrotate.html
In the current form of the code, the method can be done away with completely, and replaced with logrotate.d such as:
/var/log/rippled/debug.log {
missingok
notifempty
daily
rotate 10
copytruncate
compress
create 0644 rippled rippled
}
While this doesn't help Windows-based installations, neither does the rippled logrotate method. It simply opens and closes the log file. No rotation.
I think we should actively consider moving to logrotate.d and include it in the packaging.
@nbougalis
First, I think we can acknowledge that the name of this method is misleading at best. That said, it turns out that it does exactly the right thing for use with external log rotation mechanisms (logrotate being the standard on most linux distros). I think logrotate issues SIGHUP by default, but for various legacy reasons, rippled already uses SIGHUP and SIGINT to terminate...so that doesn't work. So this logrotate method fits nicely into a postrotate stanza for logrotate config, e.g. something like:
"/var/log/rippled/*.log" {
daily
rotate 10
nocreate
compress
compresscmd /usr/bin/nice
compressoptions -n19 ionice -c3 gzip
compressext .gz
postrotate
/opt/ripple/bin/rippled --conf /etc/opt/ripple/rippled.cfg logrotate
endscript
}
Regarding the initial experiment, something like this might help clarify how this all works:
mv debug.log debug.log.1 && /opt/ripple/bin/rippled --conf /opt/ripple/etc/rippled.cfg logrotate && ls -latr
The copytruncate option proposed above is another way to configure logrotate, but it runs the risk of losing log lines since it's possible for lines to be written after the copy but before the truncate. Using the move + logrotate ensures nothing is lost because the internal command can hold off log writes until the close/reopen has completed.
FWIW, I've tested this same mechanism on macos (using the homebrew logrotate) and it works as expected. On windows, if the filesystem still disallows renames while open then I suspect the move and close/open will probably not work - more investigation needed there.
In summary, I think this method is possibly poorly named but works as designed for use with logrotate.d.
Thanks @mellery451 . I think this logrotate snippet should be included in the packaging. Also the developer docs need to reflect this in the rippled logrotate method.
@mDuo13 what do you think about adding a little extra info to the docs about the specifics of using this method, maybe including sample postrotate command?
@mellery451, I assume most people probably log at less verbose levels (since debug & trace are verbose to the point of consuming massive amounts of space).
Thus, I think having a default logrotate.d configuration that rotates daily and keeps 10 logfiles may make it needlessly more difficult to review logs, as there isn't particularly a need to rotate daily for warn or error.
Adding minsize in the above method could provide a nice balance across log levels.
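For illustration only, a sketch of how minsize might slot into the earlier stanza (the 10M threshold is an arbitrary assumption, not a recommendation):
"/var/log/rippled/*.log" {
    minsize 10M
    daily
    rotate 10
    nocreate
    compress
    postrotate
        /opt/ripple/bin/rippled --conf /etc/opt/ripple/rippled.cfg logrotate
    endscript
}
With minsize set, the daily rotation is skipped until the file has actually grown past the threshold, so quiet warn/error logs rotate far less often than chatty debug/trace logs.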
Logging should be structured and remote anyway; having more than a few days of backup on the local disk of the actual machine is overkill imho. I'd cut the logrotate rotate setting down to 5 or so, since by default (without limiting the log settings with the startup RPC call in the config file) logs are quite spammy already.
@mellery451: I think we can and should reevaluate the SIGHUP/SIGINT situation, and bring rippled more inline with the convention about these signals; granted, we're unlikely to ever support full config file reloading, but we could behave better.
Not for this issue, but something to consider.
@MarkusTeufelberger: I agree with you about logging; we also need to distinguish between log types; there's "programmer" logging (e.g. "Need 119 bytes, rounding up to 128") where unstructured and local is fine, and there's logging for operations (e.g. "[Thu Jun 13 18:56:29 2019] [connection:info] [client 192.168.0.183:51235] [version rippled-1.2.4]") where structured and remote are good.
This is a big project: sorting out the existing logging and implementing the new stuff. Would you be interested in actively contributing to the effort? Even if it's only to review proposals.
I'm going to need more than the above example to provide docs I'm happy with for this, but we could totally add a brief tutorial about automating log rotation using the logrotate command.
93232ec7dfb4d47ba8552ad01d2f18cab63c53d8 adds a logrotate config to packages. Closing this issue, but feel free to open new issues if there are unresolved concerns.
|
gharchive/issue
| 2019-04-02T16:01:37 |
2025-04-01T04:35:43.622417
|
{
"authors": [
"MarkusTeufelberger",
"alloyxrp",
"crypticrabbit",
"mDuo13",
"mellery451",
"nbougalis"
],
"repo": "ripple/rippled",
"url": "https://github.com/ripple/rippled/issues/2892",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
494284407
|
1.3.1 build failed (Fetched in submodule path 'doc/docca', but it did not contain 335dbf9)
Can't build 1.3.1; it seems that https://github.com/vinniefalco/docca.git was updated a couple of days ago
[ 20%] Performing build step for 'snappy'
-- snappy build command succeeded. See also /rippled/.nih_c/unix_makefiles/GNU_7.4.0/Release/src/snappy-stamp/snappy-build-*.log
[ 20%] No install step for 'snappy'
[ 21%] No test step for 'snappy'
[ 22%] Completed 'snappy'
[ 22%] Built target snappy
Scanning dependencies of target nudb_src
[ 22%] Creating directories for 'nudb_src'
[ 23%] Performing download step (git clone) for 'nudb_src'
Cloning into 'nudb_src'...
Note: checking out '2.0.1'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b <new-branch-name>
HEAD is now at 79c1dca Set version to 2.0.1
Submodule 'doc/docca' (https://github.com/vinniefalco/docca.git) registered for path 'doc/docca'
Submodule 'extras/rocksdb' (https://github.com/facebook/rocksdb.git) registered for path 'extras/rocksdb'
Cloning into '/rippled/.nih_c/unix_makefiles/GNU_7.4.0/Release/src/nudb_src/doc/docca'...
Cloning into '/rippled/.nih_c/unix_makefiles/GNU_7.4.0/Release/src/nudb_src/extras/rocksdb'...
error: Server does not allow request for unadvertised object 335dbf9c3613e997ed56d540cc8c5ff2e28cab2d
Fetched in submodule path 'doc/docca', but it did not contain 335dbf9c3613e997ed56d540cc8c5ff2e28cab2d. Direct fetching of that commit failed.
CMake Error at /rippled/.nih_c/unix_makefiles/GNU_7.4.0/Release/tmp/nudb_src-gitclone.cmake:93 (message):
Failed to update submodules in:
'/rippled/.nih_c/unix_makefiles/GNU_7.4.0/Release/src/nudb_src'
CMakeFiles/nudb_src.dir/build.make:90: recipe for target '../.nih_c/unix_makefiles/GNU_7.4.0/Release/src/nudb_src-stamp/nudb_src-download' failed
make[2]: *** [../.nih_c/unix_makefiles/GNU_7.4.0/Release/src/nudb_src-stamp/nudb_src-download] Error 1
make[1]: *** [CMakeFiles/nudb_src.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:486: recipe for target 'CMakeFiles/nudb_src.dir/all' failed
Upstream issue is https://github.com/CPPAlliance/NuDB/issues/77
Thanks @sergeygalkin. @mellery451, can we take a look at this? Thanks.
ugh - looks to me like a submodule of NuDB (our dependency) rewrote history and yet the submodule in NuDB is still pointing to an old commit.
My best suggestion to get past this is something like:
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 7f2ff1055..276a73f9a 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -1412,6 +1412,7 @@ if (is_root_project) # NuDB not needed in the case of xrpl_core inclusion build
nudb_src
GIT_REPOSITORY https://github.com/CPPAlliance/NuDB.git
GIT_TAG 2.0.1
+ GIT_SUBMODULES extras/rocksdb
)
FetchContent_GetProperties(nudb_src)
if(NOT nudb_src_POPULATED)
@@ -1423,6 +1424,7 @@ if (is_root_project) # NuDB not needed in the case of xrpl_core inclusion build
PREFIX ${nih_cache_path}
GIT_REPOSITORY https://github.com/CPPAlliance/NuDB.git
GIT_TAG 2.0.1
+ GIT_SUBMODULES extras/rocksdb
CONFIGURE_COMMAND ""
BUILD_COMMAND ""
TEST_COMMAND ""
...which would cause only the rocks submodule to get updated. It would be best to tell cmake to skip all submodules in this case, but we don't have that option.
Sorry for any inconvenience.
ok - looks like the orphaned commit has been restored and the unmodified build is working again.
|
gharchive/issue
| 2019-09-16T21:23:55 |
2025-04-01T04:35:43.627771
|
{
"authors": [
"mellery451",
"nbougalis",
"sergeygalkin"
],
"repo": "ripple/rippled",
"url": "https://github.com/ripple/rippled/issues/3080",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
211338172
|
Add Escrow support:
Escrow replaces the existing SusPay implementation with improved code that also adds hashlock support to escrow payments, making RCL ILP enabled.
The new functionality is under the Escrow amendment, which supersedes and replaces the SusPay amendment.
This commit also deprecates the CryptoConditions amendment, which is replaced by the CryptoConditionSuite amendment which, once enabled, will allow use of cryptoconditions other than hashlocks.
Codecov Report
Merging #2038 into develop will decrease coverage by -0.32%.
The diff coverage is 68.33%.
@@ Coverage Diff @@
## develop #2038 +/- ##
===========================================
- Coverage 67.93% 67.61% -0.32%
===========================================
Files 685 680 -5
Lines 49591 49167 -424
===========================================
- Hits 33689 33244 -445
- Misses 15902 15923 +21
Impacted Files (Coverage Δ):
src/ripple/protocol/Indexes.h: 100% <ø> (ø) :white_check_mark:
src/ripple/protocol/TxFormats.h: 100% <ø> (ø) :white_check_mark:
src/ripple/ledger/impl/ApplyStateTable.cpp: 86.18% <ø> (ø) :white_check_mark:
src/ripple/protocol/LedgerFormats.h: 100% <ø> (ø) :white_check_mark:
src/ripple/app/main/Amendments.cpp: 100% <ø> (ø) :white_check_mark:
src/ripple/conditions/Fulfillment.h: 100% <100%> (ø) :white_check_mark:
src/ripple/conditions/Condition.h: 100% <100%> (+3.44%) :white_check_mark:
src/ripple/protocol/impl/LedgerFormats.cpp: 100% <100%> (ø) :white_check_mark:
src/ripple/protocol/impl/TxFormats.cpp: 100% <100%> (ø) :white_check_mark:
src/ripple/app/tx/impl/Escrow.h: 100% <100%> (ø)
... and 16 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b7e2a3b...a7dce81. Read the comment docs.
It might be worth having the escrow unittest test unsupported preamble tags.
@seelabs: the size issue doesn't really concern me at this time because we only accept objects as part of a transaction, and we place limits on the size of transactions. See, for example, https://github.com/ripple/rippled/blob/develop/src/ripple/protocol/Protocol.h#L46
One problem with a limit is this: we don't know the limit of a fulfillment given a condition. A limit means that we might accept a condition, only to then reject the fulfillment for it, something explicitly not allowed in the spec.
:+1:
Unit tests are failing:
#10 failed: PreimageSha256_test.cpp(176)
#12 failed: PreimageSha256_test.cpp(178)
In utils.h, https://github.com/ripple/rippled/pull/2038/commits/c2563984d7616515eb43ae57cb3ece82899cae62#diff-8d3b885a6cc70f3156381be81dd5fc57R118
The check for empty needs to be moved down as s can never be empty here.
:+1:
In 0.60.0-rc1
|
gharchive/pull-request
| 2017-03-02T09:42:29 |
2025-04-01T04:35:43.647015
|
{
"authors": [
"codecov-io",
"miguelportilla",
"nbougalis",
"seelabs"
],
"repo": "ripple/rippled",
"url": "https://github.com/ripple/rippled/pull/2038",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
113144160
|
Add test for thrift accessing a compact storage created table.
This currently fails on the cassandra-3.0 branch.
Thanks for the test -- can you please address the linter errors found with CI?
Could you briefly explain how this tests COMPACT STORAGE on 3.0? My understanding was that there is no compact storage anymore.
It does a "create table ... with compact storage" and then inserts and reads from the table using thrift. There isn't a different storage format on disk any more, not sure if the "with compact storage" does anything or not under 3.0, but for the test to work under C* 2.1/2.2 it is needed...
(This test passes on 2.1/2.2 and causes an exception on 3.0) https://issues.apache.org/jira/browse/CASSANDRA-10586
Gotcha -- yeah, I'm pretty sure WITH COMPACT STORAGE is a noop in 3.0.
Is it all right if I merge this, then make CASSANDRA-10586 a subtask of CASSANDRA-10166, our ticket for tracking all failing 3.0 tests?
SGTM
Great, will do. Thanks for making the changes for the linter and explaining the test.
|
gharchive/pull-request
| 2015-10-24T07:52:26 |
2025-04-01T04:35:43.651469
|
{
"authors": [
"JeremiahDJordan",
"mambocab"
],
"repo": "riptano/cassandra-dtest",
"url": "https://github.com/riptano/cassandra-dtest/pull/629",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2715223171
|
enable unstable feature for cargo risczero test
Not adding additional semantics by handling an unstable feature flag, given this is already gated behind the experimental feature flag.
Needed to be able to test new unstable syscalls
@austinabell, I've pushed ccaa353fd40b3ad6e9e541455d05cd788ed6c89b...b74308fc990c4d55f24a656e4e06b5eb26880b67 to avoid the need to specify RISC0_FEATURE_bigint2 manually when building. I am still working on building and testing this.
|
gharchive/pull-request
| 2024-12-03T14:42:53 |
2025-04-01T04:35:43.656177
|
{
"authors": [
"austinabell",
"nategraf"
],
"repo": "risc0/risc0",
"url": "https://github.com/risc0/risc0/pull/2598",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1423894955
|
vstopi behavior refers to vstopei
The behavior of CSR vstopi refers to the value of CSR vstopei (section 7.3.3). But vstopei is present only when an IMSIC is implemented. What happens if an IMSIC is not implemented?
Is there an assumption that hstatus.VGEIN can only be non-zero if an IMSIC is present? Is this stated anywhere?
Thanks.
Section 7.3.3, "Virtual supervisor top interrupt CSR (vstopi)", refers to vstopei in two bullet points:
if hstatus.VGEIN is the valid number of a guest interrupt file, bit 9 is one in both vsip and vsie, and vstopei is not zero ...
if hstatus.VGEIN is the valid number of a guest interrupt file, bit 9 is one in both vsip and vsie, and vstopei is zero ...
If a hart has no IMSIC, then there are no guest interrupt files for the hart, so it's hard to see how hstatus.VGEIN could be the "valid number of a guest interrupt file". That makes both conditions false before we even get to worrying about the value of vstopei (which you are correct to point out doesn't exist in this scenario). I don't see any logical ambiguity.
Is there an assumption that hstatus.VGEIN can only be non-zero if an IMSIC is present?
Maybe it can be nonzero, but it can't be the number of a guest interrupt file when there are no guest interrupt files.
Ok, thanks.
|
gharchive/issue
| 2022-10-26T11:51:22 |
2025-04-01T04:35:43.660682
|
{
"authors": [
"JamesKenneyImperas",
"jhauser-us"
],
"repo": "riscv/riscv-aia",
"url": "https://github.com/riscv/riscv-aia/issues/31",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
757595838
|
Specify minimum supported clang (and maybe also gcc) version
We should say something about what kind of compiler versions are required to run riscv-compliance.
This was filed because Clang didn't support a test program written in assembler. It isn't clear what assembler features Clang wasn't supporting at that time, but it seems like a Clang issue, and nothing that the Architectural tests have any control over (unless we are using some GCC feature that Clang does not and will not support in the future).
In addition: our plan is to deploy tests in a Docker image with all the tools required included, which may obviate any requirement for Clang.
So, I don't think this is a question that can be answered by us, and possibly won't need to be answered.
|
gharchive/issue
| 2020-12-05T07:28:04 |
2025-04-01T04:35:43.662282
|
{
"authors": [
"allenjbaum",
"bluewww"
],
"repo": "riscv/riscv-arch-test",
"url": "https://github.com/riscv/riscv-arch-test/issues/155",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2098235431
|
Wrong operand order of Zba instructions
SHxADD and SHxADD.UW shift rs1 and add it to an unshifted rs2; this specification defines them the other way around.
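For reference, the intended semantics (rs1 is the shifted operand) are, for example:
sh1add rd, rs1, rs2      # rd = (rs1 << 1) + rs2
sh2add.uw rd, rs1, rs2   # rd = (zero-extended rs1[31:0] << 2) + rs2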
@tariqkurd-repo : Is this fixed? Can we close the ticket?
The PR has been merged so this should be fixed. @sorear Please re-open in case one of them was missed.
|
gharchive/issue
| 2024-01-24T13:09:38 |
2025-04-01T04:35:43.666468
|
{
"authors": [
"andresag01",
"arichardson",
"sorear"
],
"repo": "riscv/riscv-cheri",
"url": "https://github.com/riscv/riscv-cheri/issues/28",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1241184240
|
Where i can find Assembler mnemonics for RISC-V ?
I can't find it in draft-20220513-3e7b38
https://github.com/riscv-non-isa/riscv-asm-manual/blob/master/riscv-asm.md
|
gharchive/issue
| 2022-05-19T04:16:06 |
2025-04-01T04:35:43.667649
|
{
"authors": [
"aswaterman",
"luyahan"
],
"repo": "riscv/riscv-isa-manual",
"url": "https://github.com/riscv/riscv-isa-manual/issues/847",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1566531005
|
When the hypervisor extension is implemented, the following are also mandatory
Does this mean these are included in H? We simplified the spreadsheet to have 2 columns: included, implied. Currently we have the following things marked mandatory in RV22S64, but it sounds like that is wrong:
Ssstateen Supervisor-mode view of the state-enable extension. The supervisor-mode (sstateen0-3) and hypervisor-mode (hstateen0-3) state-enable registers must be provided.
Shcounterenw For any hpmcounter that is not read-only zero, the corresponding bit in hcounteren must be writable.
Shvstvala vstval must be written in all cases described above for stval.
Shtvala htval must be written with the faulting guest physical address in all circumstances permitted by the ISA.
Shvstvecd vstvec.MODE must be capable of holding the value 0 (Direct). When vstvec.MODE=Direct, vstvec.BASE must be capable of holding any valid four-byte-aligned address.
Shvsatpa All translation modes supported in satp must be supported in vsatp.
Shgatpa For each supported virtual memory scheme SvNN supported in satp, the corresponding hgatp SvNNx4 mode must be supported. The hgatp mode Bare must also be supported
TL;DR answer: no, they are not a part of H.
It's not "extension requires/implies/includes extension" thing. It means "profile requires extension" (under some circumstances; i.e. H is implemented).
All Sh* extensions defined in the RISC-V Profiles denote certain property of the hypervisor extension implementation (which minimum H implementation are not required to do so). In other words, those Sh* extensions are additional restrictions to the minimal H extension. If we use the term "imply", Sh* extensions may imply H but not vice versa.
Ssstateen extension is a completely separate extension from H (even Ssstateen without H is possible in theory) but provides some flexibility running the guest operating system if H is implemented. I'm not completely sure about the reason this is mandated when H is implemented but having it is helpful anyway.
|
gharchive/issue
| 2023-02-01T17:37:12 |
2025-04-01T04:35:43.672594
|
{
"authors": [
"a4lg",
"mark-riscv"
],
"repo": "riscv/riscv-profiles",
"url": "https://github.com/riscv/riscv-profiles/issues/93",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2604583812
|
bug(batch): restarted serving node occasionally doesn't bump up min_pinned_hummock_version
Describe the bug
A serving node coexists with a frontend node in the same RisingWave standalone process in a pod.
Restart the pod.
The min_pinned_hummock_version held by this serving node is no longer increasing, unexpectedly. This is the issue.
show processlist returns no records.
The min_pinned_hummock_version starts to increase due to forceful expiration after max_version_pinning_duration_sec.
Error message/log
No response
To Reproduce
No response
Expected behavior
No response
How did you deploy RisingWave?
No response
The version of RisingWave
v1.10
Additional context
No response
Restart the pod.
So after the pod restarts, somehow after it comes back online, it continues to hold the same min_pinned_hummock_version, which suggests this is persisted somewhere.
Where do we pin the hummock version, is it managed by meta / serving / frontend?
Each worker node holds a min_pinned_hummock_version, which is persisted in meta node.
Compute node (including serving node) is expected to bump it up periodically.
|
gharchive/issue
| 2024-10-22T07:50:30 |
2025-04-01T04:35:43.683642
|
{
"authors": [
"kwannoel",
"zwang28"
],
"repo": "risingwavelabs/risingwave",
"url": "https://github.com/risingwavelabs/risingwave/issues/19050",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1606198234
|
Tracking: sqlsmith snapshot generation
Track: https://github.com/risingwavelabs/risingwave/issues/8220
High priority:
[x] Generation should not fail if query caused an unexpected error. https://github.com/risingwavelabs/risingwave/pull/8259
[x] Instead of capturing setup_sql and queries, just scrape from output logs to find failure reason. It will cleanup the code. https://github.com/risingwavelabs/risingwave/pull/8343
[x] Clean up run_pre_generated_queries interface. It currently takes in ddl too, but that is not needed, since we now bundle everything into queries.sql instead of ddl.sql and queries.sql.
[x] Script to extract failing queries. https://github.com/risingwavelabs/risingwave/pull/8343
[x] shrink failing queries #8507
[ ] #10043
[ ] Unit test the shell scripts
[ ] Extract plan for failing query (if panic did not occur in FE).
[ ] Dump full backtrace.
[ ] Script to automatically open issues for selected failing query.
[ ] Better error logging for skipped queries.
Low priority:
[ ] Collect and aggregate queries with same / similar failing reasons. Not really needed now, but if more failing queries start being generated, this will be needed. Can use some fuzzy compare or just sort, since error messages may not match 1-1, but are highly similar.
[ ] If ddl fails, entire generation pipeline fails. Patch it with a different seed if this happens.
[ ] Generate 1000 queries for cron.
[ ] Collect total generated query / expected to measure how effective it is.
[ ] Generation should run more times if insufficient queries generated. Currently sqlsmith generates sufficient queries, we can revisit if needed.
[ ] Decouple generation and execution: Generate queries first, execute after.
[ ] When executing, if a query causes a crash in the cluster:
Record the failing query
Restart the cluster
Re-run ddl + dml
Continue executing rest of queries.
I will add some docs to sqlsmith/develop.md later as well.
Added: https://github.com/risingwavelabs/risingwave/pull/10092/commits/826c692a31b3f221637b396ea69817d895789ef3
|
gharchive/issue
| 2023-03-02T06:40:10 |
2025-04-01T04:35:43.692080
|
{
"authors": [
"kwannoel"
],
"repo": "risingwavelabs/risingwave",
"url": "https://github.com/risingwavelabs/risingwave/issues/8284",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1721471962
|
test client libraries in the main-cron test
https://www.notion.so/risingwave-labs/client-libraries-215dccb91503473787a4ddfe0aa2b422
Please write the tests and run them in the main-cron test so that we can verify them on a daily basis.
Feel free to assign the owner to each test. @sumittal
Please open a PR once the implementation of tests starts so that people can help.
[ ] Java PostgreSQL JDBC driver(https://jdbc.postgresql.org/)
[ ] node.js node-postgres(https://www.npmjs.com/package/pg)
[ ] python psycopg2 (https://pypi.org/project/psycopg2/)
[ ] go pgx(https://github.com/jackc/pgx)
[ ] PHP pdo-pgsql(https://www.php.net/manual/en/ref.pdo-pgsql.php)
[ ] Ruby ruby-pg(https://github.com/ged/ruby-pg)
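As a rough sketch of the kind of smoke test meant here (host, port 4566, user root and dbname dev are assumptions for a local RisingWave setup, not verified settings), a psycopg2 check might look like:
import psycopg2

# Minimal connectivity smoke test: create a table, insert, read back, clean up.
conn = psycopg2.connect(host="localhost", port=4566, user="root", dbname="dev")
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS t_client_smoke (v INT)")
    cur.execute("INSERT INTO t_client_smoke VALUES (1), (2), (3)")
    cur.execute("FLUSH")  # make the inserts visible to the following read
    cur.execute("SELECT COUNT(*) FROM t_client_smoke")
    assert cur.fetchone()[0] == 3
    cur.execute("DROP TABLE t_client_smoke")
conn.close()
The other drivers would exercise the same sequence through their own APIs.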
BTW, here's a prior art: https://github.com/risingwavelabs/risingwave/pull/7113
Basically sqllogictest --engine=external can create a subprocess as the driver (with this protocol https://github.com/risinglightdb/sqllogictest-rs/blob/27eb9f50993e10b36c1f4f68ad3afe499adbbb49/sqllogictest-engines/src/external.rs#L16-L36), so we can test any client library with all our slt test suites in this way. Although I'm not sure whether we want to do so, or whether some simple tests are enough.
cc: @abhyuday26
|
gharchive/issue
| 2023-05-23T08:06:08 |
2025-04-01T04:35:43.698696
|
{
"authors": [
"lmatz",
"xxchan"
],
"repo": "risingwavelabs/risingwave",
"url": "https://github.com/risingwavelabs/risingwave/issues/9958",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
417690071
|
Extension causes high cpu load
Issue Type: Performance
Extension Name: LiveServer
Extension Version: 5.5.1
OS Version: Windows_NT ia32 10.0.10586
VSCode version: 1.31.1
:warning: Make sure to attach this file from your home-directory: C:\Users\WEBDEVELOPER2\ritwickdey.LiveServer-unresponsive.cpuprofile.txt :warning:
Find more details here: https://github.com/Microsoft/vscode/wiki/Explain:-extension-causes-high-cpu-load
Hello. A few questions:
When are you getting this issue? After VS Code loads, or after clicking Go Live?
Does this happen every time, or randomly?
Please reply here https://github.com/ritwickdey/vscode-live-server/issues/278
Duplicate of https://github.com/ritwickdey/vscode-live-server/issues/278
[Note: This issue was closed by a JavaScript script (written by me):
https://gist.github.com/ritwickdey/36682dabe4a992c57e4562c935bfbbdd ]
Thank You.
|
gharchive/issue
| 2019-03-06T08:56:20 |
2025-04-01T04:35:43.716383
|
{
"authors": [
"ritwickdey",
"temilolakutelu"
],
"repo": "ritwickdey/vscode-live-server",
"url": "https://github.com/ritwickdey/vscode-live-server/issues/336",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
381460894
|
Suddenly lots of inexplicable crawls since 5.2.15
Since the update to 5.2.15 it has been filling up my link collector, and because of the automatic download it pulled 100 GB overnight - now the Shareonline account is offline for the moment because I hit the daily limit. On top of that it of course used up quite a few captcha solves at 9kw :(
My lists are all empty, and yet something is still being found?!
I even set imdbyear to 2019.
Debug log and ini attached!
I removed my logins, and the automatic download is currently set to False.
RSScrawler_DEBUG.log
RSScrawler.ini.txt
Thanks for the report. I'll have to look at this over the weekend.
The MB search has so far used a rather overloaded search method. Based on my tests, the change shouldn't actually behave this unexpectedly. The goal is to get the search clean.....
Same here.
Above all, regex is causing problems.
I'll delete my ini and set everything up again; a colleague doesn't have the problem.
He has no year in the IMDB search and Regex = false.
Let's see...
2 lessons:
The Docker image needs an "install the following version" function.
If you're not using Docker, 5.2.14 can also be reinstalled via pip....
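For example, something like pip install rsscrawler==5.2.14 should do it (assuming the PyPI package name is rsscrawler).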
My colleague wasn't on 5.2.15 yet ;) Now it's kicking off for him too...
I've rolled back the change for now. For testing, I need as many of your DBs/logs from 5.2.15 as possible.
The blog search apparently has even more bugs,.. for me it sometimes runs for 20 minutes...
In the end we'll have a better product, though.
Can you also do a rollback with the Docker container, or is that not possible?!
Just restart it.
Funnily enough, I don't seem to have the problem with this version. At least I'm not getting any Pushbullet notifications from it.
|
gharchive/issue
| 2018-11-16T06:10:13 |
2025-04-01T04:35:43.738536
|
{
"authors": [
"DKeppi",
"DaLeberkasPepi",
"Gutz-Pilz",
"rix1337"
],
"repo": "rix1337/RSScrawler",
"url": "https://github.com/rix1337/RSScrawler/issues/275",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
330600778
|
Widget media ignored by wagtailmenus?
Hi,
Using django-fontawesome in a wagtailmenus menu doesn't work for me, whereas it works as-is when I put it in base Wagtail models. The admin form seems well built, but the js and css deps don't seem to be included. Are widget media ignored by wagtailmenus?
My code:
from django.db import models
from fontawesome.fields import IconField
from modelcluster.fields import ParentalKey
# FieldPanel lives in wagtail.admin.edit_handlers in Wagtail 2.x
from wagtail.admin.edit_handlers import FieldPanel
from wagtailmenus.models import AbstractFlatMenuItem


class HomeFlatMenuItem(AbstractFlatMenuItem):
    menu = ParentalKey(
        'wagtailmenus.FlatMenu',
        on_delete=models.CASCADE,
        related_name="home_menu_items",
    )
    icone = IconField(
        verbose_name='Icone',
        max_length=100,
        blank=True,
        help_text="Icone du menu",
    )

    panels = AbstractFlatMenuItem.panels + [
        FieldPanel('icone'),
    ]
Hi @fpoulain,
I think what you're observing here is a limitation of Wagtail, rather than something wagtailmenus is doing. In the past, I've found that you have to use the widget argument when defining a FieldPanel instance to get it to use a custom widget.
I have a feeling this might work if you added from fontawesome.widgets import IconWidget to your imports, and updated your panels definition like so:
panels = AbstractFlatMenuItem.panels + [
FieldPanel('icone', widget=IconWidget)
]
If that still doesn't work, let me know.
Hi @fpoulain, did you have any luck with the above approach?
Hi ababic,
Thanks for your answer. I didn't get time to try before tonight.
I think what you're observing here is a limitation of Wagtail,
By the way:
if I define my model with icone = IconField('Icone', blank= True) and I add FieldPanel('icone') to content_panels, it works with wagtail as expected.
If I define it in a StreamBlock, it works with wagtail as expected.
So, my problem occurs only when I use that field in a menu.
I have a feeling this might work if you added from fontawesome.widgets import IconWidget to your imports
I tried before reporting the bug, and I retried this way tonight, and it didn't work.
Hi @fpoulain,
Okay, sorry to hear that didn't solve the problem. Does it work for you elsewhere in Wagtail if you use the field on a model included in an interface using InlinePanel (like menu items are)? I think that could make a difference.
Also, are you using wagtailmenus with https://github.com/wagtail/wagtail-condensedinlinepanel or without?
I've been trying this out locally, and it seems to be working fine for me when added to a custom menu item model, using just a standard FieldPanel:
On a custom main menu:
And on a custom flat menu:
Do you see any javascript errors in your console or anything? Perhaps there's a conflict with something else you're using?
I've been trying this out locally, and it seems to be working fine for me when added to a custom menu item model, using just a standard FieldPanel.
Arf, very weird. It continues to not work here. Can you please post your code snippet? I am using Wagtail 2.0.1 and wagtailmenus 2.9 with Python 3.5.
Do you see any javascript errors in your console or anything?
No, the css and js links aren't added in the head. That's why I thought the menu class missed the widget's media somewhere. Also, the select html widget is properly built; it only misses the css and js links.
Perhaps there's a conflict with something else you're using?
I have only very few deps and it's a "toy" project to learn Wagtail.
Hi @fpoulain,
I just followed the installation instructions from here:
https://github.com/redouane/django-fontawesome
Added an IconField (supplying no arguments, as shown in the project's README) to the MultilingualMenuItem model from:
https://github.com/rkhleics/wagtailmenus/blob/master/wagtailmenus/tests/models/menus.py#L25
Updated the panels attribute on the same model to include FieldPanel('icon')
Updated the development settings to:
WAGTAILMENUS_MAIN_MENU_MODEL = 'tests.CustomMainMenu'
WAGTAILMENUS_FLAT_MENU_MODEL = 'tests.CustomFlatMenu'
And the field rendered as shown in the screenshots (using Wagtail 2.0 and the latest dev version of wagtailmenus - which shouldn't matter, as there have been absolutely no UI/modeladmin changes for a good few versions).
Closing this for now, as I'm unable to replicate the issue.
Ok. I didn't find time to check it. I will recreate a project from scratch and try to understand why this doesn't work for me. Thanks for the help.
|
gharchive/issue
| 2018-06-08T10:14:24 |
2025-04-01T04:35:43.765074
|
{
"authors": [
"ababic",
"fpoulain"
],
"repo": "rkhleics/wagtailmenus",
"url": "https://github.com/rkhleics/wagtailmenus/issues/250",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1058415566
|
Some problems with multiple CPUs
I ran the experiments on an Intel Xeon Gold 5118 and a GTX 2080 Ti, but I found that the GPU utilization is very low. It takes 10 hours to train Walker_walk/train_ppo.sh for 3 seeds. Is it my experimental parameters, or is this normal?
I think there is no problem on the parameters side :)
Thanks for your answer!
I want to know how to maximize the usage of the CPU.
|
gharchive/issue
| 2021-11-19T11:13:37 |
2025-04-01T04:35:43.829464
|
{
"authors": [
"Huwenbo-git",
"pokaxpoka"
],
"repo": "rll-research/BPref",
"url": "https://github.com/rll-research/BPref/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1727153445
|
maps_topdown_object
I cannot find the image list (star_yellow, blue) in maps_topdown_object; this folder does not exist.
You can download the assets from this link.
|
gharchive/issue
| 2023-05-26T08:28:59 |
2025-04-01T04:35:43.830830
|
{
"authors": [
"bareblackfoot",
"rginjapan"
],
"repo": "rllab-snu/TopologicalSemanticGraphMemory",
"url": "https://github.com/rllab-snu/TopologicalSemanticGraphMemory/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
208118451
|
Add locale for Japanese(ja_jp)
Hi! Thanks for creating this library.
I added the Japanese format. It seems to be working fine :D
Thanks for this, will publish soon.
|
gharchive/pull-request
| 2017-02-16T13:17:14 |
2025-04-01T04:35:43.835236
|
{
"authors": [
"jinjor",
"rluiten"
],
"repo": "rluiten/elm-date-extra",
"url": "https://github.com/rluiten/elm-date-extra/pull/32",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
916419997
|
Explore 256 color option for ConsoleColor
Not sure if it works, but there is a blog post that indicates ways to get 256 colors using escape sequences: https://www.lihaoyi.com/post/BuildyourownCommandLinewithANSIescapecodes.html#256-colors
Potential for cursor movement also.
There appears to be an INVERSE style as well that works but I haven't supported: https://stackoverflow.com/questions/2048509/how-to-echo-with-different-colors-in-the-windows-command-line
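A quick sketch of what those escape sequences look like (standard xterm 256-color SGR codes, shown here in Python rather than this project's ConsoleColor code; the color numbers are arbitrary):
ESC = "\x1b"

def color256(text, fg, bg=None):
    # 38;5;<n> selects a 256-color foreground, 48;5;<n> a 256-color background.
    seq = ESC + "[38;5;%dm" % fg
    if bg is not None:
        seq += ESC + "[48;5;%dm" % bg
    return seq + text + ESC + "[0m"  # SGR 0 resets all attributes

print(color256("hello", 208))                 # orange-ish foreground
print(ESC + "[7m" + "inverse" + ESC + "[0m")  # SGR 7 = reverse/inverse video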
|
gharchive/issue
| 2021-06-09T16:59:37 |
2025-04-01T04:35:43.836970
|
{
"authors": [
"rlwhitcomb"
],
"repo": "rlwhitcomb/utilities",
"url": "https://github.com/rlwhitcomb/utilities/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
199380576
|
Assert I2C/SPI bus speed a valid value in serial class
see https://github.com/rm-hull/ssd1306/blob/master/oled/serial.py#L96
Also, provide choice in demo_opts: https://github.com/rm-hull/ssd1306/blob/master/examples/demo_opts.py#L24
Migrated
|
gharchive/issue
| 2017-01-07T19:36:06 |
2025-04-01T04:35:43.838628
|
{
"authors": [
"rm-hull"
],
"repo": "rm-hull/luma.oled",
"url": "https://github.com/rm-hull/luma.oled/issues/109",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2316527865
|
🛑 5typos.ne Blog is down
In cb4608d, 5typos.ne Blog (https://5typos.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: 5typos.ne Blog is back up in f22a024 after 26 minutes.
|
gharchive/issue
| 2024-05-25T01:08:20 |
2025-04-01T04:35:43.878364
|
{
"authors": [
"rmateu"
],
"repo": "rmateu/statuspage",
"url": "https://github.com/rmateu/statuspage/issues/510",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
53312900
|
handle slovenian translation correctly
We have a few more cases to handle than we currently do.
cc @mclion
This is awesome. Thank you. Considering I don't know any Slovenian, and there are a lot of requested changes here, could you please get at least one other person to +1 this and verify it helps move things in the correct direction?
Maybe someone will. Meanwhile you can see the plural formula for Slovenian here: https://www.gnu.org/software/gettext/manual/html_node/Plural-forms.html
+1 looks correct to me
Seems legit.
Fiddle with the new translations. (may not work months from now)
http://jsfiddle.net/gm75oaz8/1/
Pitch for testing (in slovene):
"Pikci, dajte stestirat."
Yeah, this code is better. It's by the book. It's the formula you get in poedit when you switch to the Slovenian dictionary, if you want to check.
Rock on. Thanks everyone! :beers:
|
gharchive/pull-request
| 2015-01-03T19:27:29 |
2025-04-01T04:35:43.904210
|
{
"authors": [
"brodul",
"iElectric",
"mclion",
"offlinehacker",
"rmm5t"
],
"repo": "rmm5t/jquery-timeago",
"url": "https://github.com/rmm5t/jquery-timeago/pull/208",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2329257817
|
chore: add clang tidy configuration file
Close #7
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 100.00%. Comparing base (0fa815f) to head (cd8be42).
Additional details and impacted files
@@ Coverage Diff @@
## main #13 +/- ##
=========================================
Coverage 100.00% 100.00%
=========================================
Files 2 2
Lines 4 4
=========================================
Hits 4 4
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
gharchive/pull-request
| 2024-06-01T17:40:30 |
2025-04-01T04:35:43.915300
|
{
"authors": [
"codecov-commenter",
"rng-dynamics"
],
"repo": "rng-dynamics/lux-sp",
"url": "https://github.com/rng-dynamics/lux-sp/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
272006981
|
Enable image upload for comments
Super complicated
Closes #758
IT WAS A TWO LINE FIX!!! 😁
I'm so extremely sorry for my misleading comment in the original thread. I should not have set false expectations for the feature to be delivered in such a small number of lines changed. 😞
|
gharchive/pull-request
| 2017-11-07T22:25:48 |
2025-04-01T04:35:43.922955
|
{
"authors": [
"BasThomas",
"Sherlouk"
],
"repo": "rnystrom/GitHawk",
"url": "https://github.com/rnystrom/GitHawk/pull/891",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1419296121
|
[💡 FEATURE REQUEST]: Expose client certificate to PHP
Plugin
HTTP
I have an idea!
mTLS works great, would be even better if this plugin and the PHP worker library could expose variables representing the parsed client certificate in PHP's $_SERVER superglobal or environment or PSR ServerRequest, in the same way for example on Apache you'd get:
$_SERVER['HTTPS'] = "on"
$_SERVER['SSL_CLIENT_VERIFY'] = "SUCCESS"
$_SERVER['SSL_CLIENT_S_DN'] = <DN string>
Hey @dwgebler 👋🏻
Thanks for the proposal. I'll discuss it with our PHP team 👍🏻
Thanks @rustatian this would be particularly useful for
a) a service which may run on RR via either HTTP or HTTPS and you'd like to be able to know which method was used by the client
b) a service is running via HTTPS with mTLS but you'd like to be able to distinguish between clients based on the DN of the certificate issued to them, to identify the user
Yeah, thanks for the clarification. I'll discuss that with the PHP team on Monday 😃. It is, for sure, not a problem at all to pass some envs to the PHP process (with the parsed cert data); I just need to sync with the PHP team on that (since I'm not a PHP dev 😢).
specs:
https://www.cryptosys.net/pki/manpki/pki_distnames.html
https://www.alvestrand.no/objectid/2.5.4.html
https://httpd.apache.org/docs/2.4/mod/mod_ssl.html
As a quick and dirty draft, here are the changes I made in roadrunner-http (Go) and roadrunner-http (PHP) to get what I wanted:
In HTTP handler:
if r.TLS != nil {
req.IsHttps = true
// Get a complete JSON representation of the entire certificate, why not?
cert, err := json.Marshal(r.TLS.PeerCertificates[0])
if err != nil {
h.log.Error("failed to marshal certificate", zap.Error(err))
} else {
req.Certificate = string(cert)
req.CertificateSubject = r.TLS.PeerCertificates[0].Subject.String()
}
}
And just modified request to include these in the JSON payload which the PHP worker receives.
Then in PHP roadrunner-http package:
Add tlsParams as array to Request
In HttpWorker::hydrateRequest:
$request->tlsParams['HTTPS'] = $context['HTTPS'] ?? false;
if (!empty($context['certificate'])) {
$request->tlsParams['ssl_client_verify'] = 'SUCCESS';
$request->tlsParams['ssl_client_certificate'] = json_decode($context['certificate'], true);
$request->tlsParams['ssl_client_subject'] = $context['certificateSubject'] ?? '';
}
In PSR7Worker::configureServer:
$server['HTTPS'] = $request->tlsParams['HTTPS'];
$server['SSL_CLIENT_VERIFY'] = $request->tlsParams['ssl_client_verify'];
$server['SSL_CLIENT_CERT'] = $request->tlsParams['ssl_client_certificate'];
$server['SSL_CLIENT_SUBJECT'] = $request->tlsParams['ssl_client_subject'];
Yeah, nice 😃
It should generally be in a separate middleware. I'm also not a big fan of JSON'ing everything, but the POC looks nice, thanks.
If you need this functionality ASAP, you may also fork the HTTP plugin and build your custom RR with the Velox. You may have a look at the tutorial here: https://youtu.be/h5PPvc_YOtg
Cheers for that, I've re-written what I need as a custom middleware plugin which adds a flag for whether the request was HTTP/HTTPS and the client certificate data if present through the existing PSR-7 attributes on the ServerRequest, so very minimal change. Compiled RR binary with the plugin and enabled it in my application .rr.yaml and it all works a treat :) obviously would be great to have this feature included in the official RR HTTP package as a default middleware so anyone else needing similar functionality can just have it out the box. Great learning experience for me though, this was my first time writing any Go.
RoadRunner is fantastic, by the way! Got an API which I built with Symfony components which I was running on a regular Apache/PHP-FPM stack with mTLS, where the username is looked up by the certificate CN. On this stack a single API request took about 80ms on average, the same API translated to Roadrunner (and still using Symfony components with PSR-7 bridge) with my plugin is completing the same request in an average 1.4ms! That is a phenomenal improvement, I love it.
obviously would be great to have this feature included in the official RR HTTP package as a default middleware so anyone else needing similar functionality can just have it out the box.
Nice, yeah, I'll create something like a mod_tls middleware. We just need a few more upvotes on your proposal 😃
Great learning experience for me though, this was my first time writing any Go.
Excellent, my idea about plugins was precisely about what you did. Anyone can create a plugin for the RR, even with a small Go experience, and then compile its RR binary for any OS.
On this stack a single API request took about 80ms on average, the same API translated to Roadrunner (and still using Symfony components with PSR-7 bridge) with my plugin is completing the same request in an average 1.4ms! That is a phenomenal improvement, I love it.
Wow, that's impressive 👍🏻 Very glad to hear that 😃 Enjoy RR ❤️
You may also join our discord server and ping our PHP guys or me if you have questions: https://discord.gg/TFeEmCs
|
gharchive/issue
| 2022-10-22T13:29:31 |
2025-04-01T04:35:43.935636
|
{
"authors": [
"dwgebler",
"rustatian"
],
"repo": "roadrunner-server/roadrunner",
"url": "https://github.com/roadrunner-server/roadrunner/issues/1324",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
95887550
|
Tab bar is darker than it should be when each tab child is a storyboard view controller
I am trying to use RBStoryboardLink and I have two storyboards: one that has a tab controller with regular view controllers as its tab children, and the other in which each tab child is an RBStoryboardLink to another storyboard. The one with regular tab children works as expected; however, the one with storyboard view controllers as tab children has a dark greyish nav bar and tab bar. Has anyone encountered this?
Just set needsTopLayoutGuide/needsBottomLayoutGuide to "No" in the RBStoryboardLink view controller settings.
|
gharchive/issue
| 2015-07-19T05:06:02 |
2025-04-01T04:35:43.938285
|
{
"authors": [
"ctng1213",
"dmkcv"
],
"repo": "rob-brown/RBStoryboardLink",
"url": "https://github.com/rob-brown/RBStoryboardLink/issues/67",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1269945689
|
Investigate bundling node_modules
It'd be nice to not rely on npmjs.com during install/runtime of the extension, from a very quick sketch:
# cd yaml.novaextension
tar -zcvf node_modules.tar.gz package.json package-lock.json node_modules
du -hs node*
# 18M node_modules
# 3.3M node_modules.tar.gz
Then the extension could untar the node_modules on startup and remove the tar (or rename it to something like node_modules.tar.gz.bk so it knows not to do it again, but the tar is still there if needed). Alternatively it could update the package.json (and lock?) version(s) to that of the extension and only run the untar if the version has changed.
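A rough shell-level sketch of that startup step (the .bk rename is just the marker idea above, not a settled design):
if [ -f node_modules.tar.gz ]; then
  # Unpack the bundled dependencies once, then mark the tarball as already handled.
  tar -xzf node_modules.tar.gz
  mv node_modules.tar.gz node_modules.tar.gz.bk
fi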
There could be a command to reinstall if required too, e.g. Extensions → YAML → Reinstall Server perhaps.
Related to #28 and #18
|
gharchive/issue
| 2022-06-13T20:55:52 |
2025-04-01T04:35:43.947649
|
{
"authors": [
"robb-j"
],
"repo": "robb-j/nova-yaml",
"url": "https://github.com/robb-j/nova-yaml/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1902126550
|
Fix backticks in doc comments
Hello!
Courtesy of clippy::doc_markdown, here is a collection of what I hope are fairly uncontroversial tweaks to the docs: adding/removing backticks for visual consistency. 😅
I left all mentions of SysEx alone because I see that they are consistently unticked and it's more of a concept than an explicit keyword (currently just an example plugin's name), but I could add those too if that's desired?
Thanks!
|
gharchive/pull-request
| 2023-09-19T03:09:19 |
2025-04-01T04:35:43.949130
|
{
"authors": [
"ijijn",
"robbert-vdh"
],
"repo": "robbert-vdh/nih-plug",
"url": "https://github.com/robbert-vdh/nih-plug/pull/86",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
432923780
|
Monosymmetric Beam Generator
Is your feature request related to a problem? Please describe.
Request for a monosymmetric I-beam generator function.
Describe the solution you'd like
Generate a monosymmetric I-beam with different top and bottom flange thickness and widths.
Describe alternatives you've considered
N/A
Additional context
N/A
Added in commit 9ee1886
|
gharchive/issue
| 2019-04-14T02:17:05 |
2025-04-01T04:35:43.951280
|
{
"authors": [
"robbievanleeuwen"
],
"repo": "robbievanleeuwen/section-properties",
"url": "https://github.com/robbievanleeuwen/section-properties/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2002880102
|
invalid imports in the documentation
The examples in the documentation often point to invalid references. For instance, the following imports
from sectionproperties.pre import Material
from sectionproperties.pre.library import i_section
from sectionproperties.analysis import Section
should be
from sectionproperties.pre.pre import Material
from sectionproperties.pre.library.steel_sections import i_section
from sectionproperties.analysis.section import Section
These could be avoided by either populating the __init__.py files or correcting the examples in the docs.
At the 'Results' section of the User Guide, the reference to the function plot_stress is invalid.
Hi @BALOGHBence, are you getting any import errors? The __init__.py files already contain these references so they should work. This was introduced recently in v3.0.0 to improve readability (see features).
I cannot find the invalid reference to plot stress, could you clarify this?
Hi. I think the problem here is that I used the package under Python 3.8. I can see now that version 3.0.0 requires Python 3.9 or newer.
|
gharchive/issue
| 2023-11-20T19:44:13 |
2025-04-01T04:35:43.954778
|
{
"authors": [
"BALOGHBence",
"robbievanleeuwen"
],
"repo": "robbievanleeuwen/section-properties",
"url": "https://github.com/robbievanleeuwen/section-properties/issues/361",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
61666706
|
unable to run source ~/.zshrc successfully, issues found in .oh-my-zsh.sh. Maybe already solved, can someone reference a resolved ticket?
I am getting this when I run
source ~/.zshrc
-bash: /Users/ram/.oh-my-zsh/oh-my-zsh.sh: line 26: syntax error near unexpected token `('
-bash: /Users/ram/.oh-my-zsh/oh-my-zsh.sh: line 26: `for config_file ($ZSH/lib/*.zsh); do'
-bash: typeset: -g: invalid option
typeset: usage: typeset [-afFirtx] [-p] name[=value] ...
-bash: /Users/ram/.oh-my-zsh/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh: line 81: syntax error near unexpected token `always'
-bash: /Users/ram/.oh-my-zsh/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh: line 81: ` } always {'
The zshrc file doesn't have to be sourced manually; when you start zsh, it will read the file automatically.
What is happening is that you're running bash instead of zsh, so oh-my-zsh cannot start.
You have to do 2 things:
Actually 3 things: make sure you have zsh installed.
Make sure zsh is your default shell: echo $SHELL should say /some/thing/**zsh**. If not, run chsh -s $(which zsh).
Restart your terminal and the prompt should change.
Many thanks, I had to manually make the update, it would not complete it via console. System Preferences -> Users & Groups -> Click the lock -> Right click your user -> Advanced Options -> and paste /usr/local/bin/zsh in the login shell field.
Cool, glad to be of help :+1:
run zsh and then everything is ok
@Flyty it's unreal how much time I spent finding this most obvious solution ever. Thanks a lot!
@Flyty Unreal
Thanks @dingolfsson
|
gharchive/issue
| 2015-03-14T17:55:20 |
2025-04-01T04:35:43.961294
|
{
"authors": [
"Flyty",
"RichardBansal",
"ashishra0",
"bwoodlt",
"dingolfsson",
"mcornella",
"relaxed-tomato"
],
"repo": "robbyrussell/oh-my-zsh",
"url": "https://github.com/robbyrussell/oh-my-zsh/issues/3691",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
115912369
|
echo $SHELL prints /bin/bash
echo $SHELL prints /bin/bash
Shouldn't it be printing /bin/zsh?
Probably. $SHELL is the path to your default shell, so it seems like your default shell didn't get changed to zsh. If this is the case, you can fix it yourself with chsh -s /bin/zsh.
I hope my problem is relevant to this issue. I am having problems making oh-my-zsh work in both Terminal and iTerm2. I tried everything from changing the default shell to configuring the settings. Here is a screenshot of my attempt to apply the agnoster theme in iTerm:
Same attempt in Terminal:
Regarding the issue, here is what $SHELL command gives me:
Agnoster requires a special Powerline-patched font in order to display properly. See its README.
@apjanke So from what I understood, my oh-my-zsh is running correctly in my terminal. Like you said, I just need to read README files more often. Might that also be why the background is in no way influenced by the themes?
That's what it seems like. But in this case I think I misled you: there's no README for Agnoster in Oh My Zsh. (Only in the upstream Agnoster repo.) Sorry.
The themes/agnoster.zsh-theme has info about the fonts it needs in the comments of the file itself. So if a theme isn't working, taking a look at the theme's source code to see if there are notes on its requirements is also a good place to look.
@chirag7jain is your issue fixed?
Closing for the time being. Reopen if it's not solved.
I think some recent update has fixed the issue.
I am getting the correct output now
|
gharchive/issue
| 2015-11-09T16:25:37 |
2025-04-01T04:35:43.966932
|
{
"authors": [
"apjanke",
"chirag7jain",
"mcornella",
"mixania",
"ncanceill"
],
"repo": "robbyrussell/oh-my-zsh",
"url": "https://github.com/robbyrussell/oh-my-zsh/issues/4596",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
166427780
|
Error with git.zsh:42
Since the last update I get this error on my Mac (10.11.5).
~/.oh-my-zsh/lib/git.zsh:42: parse error near `elif'
zsh: command not found: git_prompt_info
I was able to resolve this by changing the line endings from CRLF to LF for lib/git.zsh.
e.g. in vim -> :set ff=unix
Yeah Thanks! It works!
For the next time, here's the solution: https://github.com/robbyrussell/oh-my-zsh/issues/4069#issue-89607351
|
gharchive/issue
| 2016-07-19T20:29:10 |
2025-04-01T04:35:43.969146
|
{
"authors": [
"adrianhj",
"cbou",
"mcornella"
],
"repo": "robbyrussell/oh-my-zsh",
"url": "https://github.com/robbyrussell/oh-my-zsh/issues/5241",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
346864825
|
agnoster displays incorrectly
There seem to be gaps with the badge that's displayed. Also the colors seem a bit dull. I'm not sure what's wrong here. I'm using powerline fonts.
That's down to your chosen Powerline font and the font size setting in your terminal emulator. Play a bit with font sizes and Powerline fonts until you have a combination that looks good.
As per the colors, look at the color scheme of your terminal emulator.
This is not OMZ related.
Yeah, I realized that. Found this thread:
https://github.com/powerline/fonts/issues/176
It was an issue with urxvt. Switching the terminal fixed it. Thanks
|
gharchive/issue
| 2018-08-02T05:29:51 |
2025-04-01T04:35:43.971688
|
{
"authors": [
"Nano-Sec",
"mcornella"
],
"repo": "robbyrussell/oh-my-zsh",
"url": "https://github.com/robbyrussell/oh-my-zsh/issues/7029",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
219808486
|
Add nohup plugin
Adding a new plugin. When pressing Ctrl + H it will prefix the current command with nohup and will add &> command_name.out at the end (and vice versa).
It is very useful when launching a lot of background scripts
Has no interest.
My plugin was an amazing piece of genius. You do not deserve it :)
I'm not saying it wasn't ;) Feel free to make a repository out of it and add it to the External plugins wiki.
|
gharchive/pull-request
| 2017-04-06T07:22:53 |
2025-04-01T04:35:43.973608
|
{
"authors": [
"mcornella",
"micrenda"
],
"repo": "robbyrussell/oh-my-zsh",
"url": "https://github.com/robbyrussell/oh-my-zsh/pull/6011",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2342561249
|
🛑 Divulgación de la Ciencia UNAM is down
In 2d77837, Divulgación de la Ciencia UNAM (https://www.dgdc.unam.mx) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Divulgación de la Ciencia UNAM is back up in 96729e5 after 41 minutes.
|
gharchive/issue
| 2024-06-09T23:50:32 |
2025-04-01T04:35:44.031851
|
{
"authors": [
"robertormzg"
],
"repo": "robertormzg/upptime-dgdc",
"url": "https://github.com/robertormzg/upptime-dgdc/issues/865",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
379618935
|
Have ability to run cron at startup
This may be a dupe of https://github.com/robfig/cron/issues/119 , and if so I apologise.
I'm proposing a feature request: an @startup or @immediate or something similar keyword. Namely, run the function as soon as the cron service starts (in addition to any other schedule provided, of course).
In some Linux implementations of cron there's a @reboot flag which I'm sort of trying to emulate here. This flag could be called @reboot too, though that might be confusing.
Call a go routine on startup instead? You don't need this library to do that!
Ha, that's true, but if I want the cron schedule / calling of the goroutine on startup to be conditional on external config, then I do. We pass in an external config file that may or may not have a list of cron schedules to run various goroutines on, and some of those schedules may or may not be "run this on startup".
I prefix jobs I want to run at startup with a ! character and execute the desired task by calling the function directly. I then override the schedule string:
sch = strings.TrimLeft(sch, "!")
I would like to stick to the cron spec without adding one-off features, especially because this can be easily done outside of cron with a custom prefix as suggested by @junkiebev or a separate bit. Sorry!
What about @reboot?
|
gharchive/issue
| 2018-11-12T04:42:18 |
2025-04-01T04:35:44.037760
|
{
"authors": [
"dana321",
"holtwilkins",
"junkiebev",
"kleijnweb",
"robfig"
],
"repo": "robfig/cron",
"url": "https://github.com/robfig/cron/issues/160",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
201199432
|
Support for Android Databinding library?
It would be useful to be able to make use of the databinding library to update the ticker, to follow MVVM design principles. Is there any way to do this currently?
e.g:
<com.robinhood.ticker.TickerView
android:id="@+id/opponent_score_ticker"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textAppearance="@style/text_large"
android:text="@{viewModel.number}"/>
Hello,
Unfortunately I am unsure how the data binding library integrates with custom views. If the framework doesn't do it automatically then we currently do not support it.
@kevcgrant you can just create your own binding adapter :)
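A minimal sketch of such a binding adapter, assuming Kotlin and androidx data binding; the function name is illustrative:
import androidx.databinding.BindingAdapter
import com.robinhood.ticker.TickerView

// Lets android:text="@{viewModel.number}" bind straight to a TickerView.
@BindingAdapter("android:text")
fun setTickerText(view: TickerView, text: String?) {
    view.setText(text ?: "")  // setText drives the tick animation on change
}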
@mcassiano thanks! That is what I ended up doing.
|
gharchive/issue
| 2017-01-17T07:56:28 |
2025-04-01T04:35:44.055990
|
{
"authors": [
"jinatonic",
"kevcgrant",
"mcassiano"
],
"repo": "robinhood/ticker",
"url": "https://github.com/robinhood/ticker/issues/41",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2096081579
|
Compile Errors
Rob - thanks for this project. I get a couple of compile errors - any suggestions where I'm going wrong? Thanks
ESP32TimeServer_Test1.ino: In function 'void setDateAndTimeFromGPS(void*)':
ESP32TimeServer_Test1:355:19: error: aggregate 'tm wt' has incomplete type and cannot be defined
struct tm wt;
^~ (up arrow points to wt)
ESP32TimeServer_Test1:369:34: error: 'mktime' was not declared in this scope
candidateDateAndTime = mktime(&wt) + 1; // not sure why the + 1 but it is
^~~~~ (up arrow points to mktime)
Hmmm, I just compiled it myself and it works fine.
Did you copy the code exactly?
If yes, I'm guessing your issue is associated with a time library someplace on your system.
I did a quick google and found a variety of people having the same problem with other projects. Here are two examples:
https://community.platformio.org/t/issue-with-struct-tm/19017/4
https://community.platformio.org/t/aggregate-tm-timeinfo-has-incomplete-type-and-cannot-be-defined/13715
Also, it if helps, the ESPTime.h library I am using is this one:
https://github.com/fbiego/ESP32Time
Originally, when I compiled my code I was using version 2.0.0 and it compiled fine. Following your post I upgraded it to version 2.0.4 (the current version) and it also compiled fine. Are you perhaps using an older library?
Rob – Thanks. You have given me something definite to follow up. I’ll let you know how I get on. Thanks again – Ian
Rob
I’m OK with ESP32Time.h – using v2.0.4
But – I’m back into…
void setDateAndTimeFromGPS(void *parameter)
and that structure
struct tm wt;
error: aggregate 'tm wt' has incomplete type and cannot be defined
also….
I can’t find where mktime is declared in….
candidateDateAndTime = mktime(&wt) + 1; // not sure why the + 1 but it is
What version of https://github.com/khoih-prog/Timezone_Generic are you using?
You #include <Timezone.h> but the library needs you to #include <Timezone_Generic.h> - so I can’t fathom the version.
Thanks again.
Ian
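(For what it's worth, this class of error is usually cured by making sure the standard C time header is included before struct tm and mktime are used; a hedged aside based on the error text, not something confirmed in this thread:)
#include <time.h>   // declares struct tm and mktime()

struct tm wt = {};                              // now a complete type
time_t candidateDateAndTime = mktime(&wt) + 1;  // mktime is in scope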
I just checked my libraries, it looks as if I'm now using
https://github.com/JChristensen/Timezone (version 1.24)
for Timezone.h
Not sure when I switched to it, or why, but it seems to work with this project too.
closed due to lack of feedback
closed due to lack of feedback (no activity in over six months)
|
gharchive/issue
| 2024-01-23T13:20:20 |
2025-04-01T04:35:44.117220
|
{
"authors": [
"ianburton20",
"roblatour"
],
"repo": "roblatour/ESP32TimeServer",
"url": "https://github.com/roblatour/ESP32TimeServer/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1596849741
|
How to get images in the original aspect ratio?
Thanks for your amazing work of creating these datasets.
The images appear stretched and squashed to 640x640. I am not sure if I made a mistake in some download option.
IMO, model evaluations must be done on letterboxed images rather than stretched images for accurate comparison.
Each dataset has a raw version available in addition to the 640x640 version
|
gharchive/issue
| 2023-02-23T13:17:09 |
2025-04-01T04:35:44.134631
|
{
"authors": [
"sandeepjana",
"yeldarby"
],
"repo": "roboflow/roboflow-100-benchmark",
"url": "https://github.com/roboflow/roboflow-100-benchmark/issues/42",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1892055813
|
fix readme link to contributing.md
Description
The link to the contributing guide in the readme leads to the main repo. This change updates the link to be the CONTRIBUTING page.
Docs
[x] README.md: link to CONTRIBUTING.md was updated
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
smethnani seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.
|
gharchive/pull-request
| 2023-09-12T09:16:17 |
2025-04-01T04:35:44.138982
|
{
"authors": [
"CLAassistant",
"smethnani"
],
"repo": "roboflow/roboflow-python",
"url": "https://github.com/roboflow/roboflow-python/pull/187",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1069866709
|
Helmfile apply does not install deleted resources
Steps to reproduce:
Install a new set of releases via helmfile apply
Use kubectl to delete one of the resources created for one of the released apps, eg kubectl delete svc APP1_NAME
Run helmfile apply again: I would expect it to see (when it does a diff) that svc APP1_NAME is missing, and therefore that the desired state differs from the current k8s state, and so to run a helm upgrade on the app whose k8s resource was deleted. But it does not do that.
There's a chance this is a helm or even helm diff issue (or helmfile diff but I think it uses helm diff plugin), I will see if I can dig some more but meanwhile, I thought this might just be a known limitation or issue.
More likely it's related to this one and how helm-diff works. IIRC it makes a diff against a Helm state stored in secrets. Because of this, we started thinking of switching to helmfile sync.
@schollii @andrewnazarov Probably this would work with the new --three-way-merge option added to helm-diff recently.
https://github.com/databus23/helm-diff/pull/304
It'll be a few weeks before this is available in helmfile, correct?
@schollii You're almost correct but this time I made it configurable via an envvar (https://github.com/databus23/helm-diff/pull/336) too. That means you'll be able to enable the new behavior by running helmfile with HELM_DIFF_THREE_WAY_MERGE=true as helm-diff processes inherit env from the helmfile process.
FYI, helm-diff v3.3.0 has been released with the new three-way-merge option 😃
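A usage sketch, assuming helm-diff v3.3.0 or later is installed as a Helm plugin:
HELM_DIFF_THREE_WAY_MERGE=true helmfile apply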
|
gharchive/issue
| 2021-12-02T18:38:12 |
2025-04-01T04:35:44.148648
|
{
"authors": [
"andrewnazarov",
"mumoshu",
"schollii"
],
"repo": "roboll/helmfile",
"url": "https://github.com/roboll/helmfile/issues/2013",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
372206449
|
Implement a load-balancing system
Code maintenance request
Describe the problem that makes these changes necessary
The application cannot handle heavy load. Based on idea #47.
Describe the changes you consider necessary
Implement and configure a distributed-access system (for example, nginx).
Additional materials
Link to a document with an overview of the tool.
Some changes inside the code are needed to deploy a version of the project with the modified server and the added proxy server, and load testing should also be carried out.
|
gharchive/issue
| 2018-10-20T12:28:25 |
2025-04-01T04:35:44.151188
|
{
"authors": [
"nikita03565",
"ptrdiff"
],
"repo": "robot-lab/projectsfair",
"url": "https://github.com/robot-lab/projectsfair/issues/54",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1399312242
|
Migrate this repo to https://github.com/roboticslab-uc3m @imontesino plz?
Migrate this repo to https://github.com/roboticslab-uc3m @imontesino plz?
The repo has just been transferred to our org. Thanks!!!
|
gharchive/issue
| 2022-10-06T10:31:12 |
2025-04-01T04:35:44.167217
|
{
"authors": [
"jgvictores"
],
"repo": "roboticslab-uc3m/opensim-gui-docker",
"url": "https://github.com/roboticslab-uc3m/opensim-gui-docker/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
761497809
|
iCubWaterloo01 robot
@julijenv @violadelbono this is the draft PR to add iCubWaterloo01.
Please use a fork next time.
cc @Nicogene
@julijenv @violadelbono I've applied the fixes introduced via #220 also to iCubWaterloo01: see https://github.com/robotology/robots-configuration/pull/219/commits/5fe7797f6650736a4bb4b38c7faf4a8b41544275.
Thus, you may want to pull these locally.
I did a pull on robots-configuration. I think we are ready to merge
|
gharchive/pull-request
| 2020-12-10T18:32:30 |
2025-04-01T04:35:44.170695
|
{
"authors": [
"pattacini",
"violadelbono"
],
"repo": "robotology/robots-configuration",
"url": "https://github.com/robotology/robots-configuration/pull/219",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
757138431
|
Extension required for PHP 8
To avoid problems with the cropper on PHP 8 you need to uncomment the extension:
extension=gd
in the php.ini file
The GD extension is already listed as a requirement of the component.
|
gharchive/issue
| 2020-12-04T14:20:56 |
2025-04-01T04:35:44.180435
|
{
"authors": [
"curruwilla",
"robsonvleite"
],
"repo": "robsonvleite/cropper",
"url": "https://github.com/robsonvleite/cropper/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
246228325
|
Feature request
This is a really nice plugin; I'm finding it very handy.
I'm not sure whether this is even possible, but could it be displayed in a three-pane style like the dedicated 2ch browsers (Jane Style and the like)?
Keep the left pane as it is
Make the upper-right pane a list showing only the thread title, last-updated date, and comment count
Make the lower-right pane show the thread's posts, with tabs so you can switch between several threads
Thank you for using it!
I don't use Jane Style, so I looked into it; I assume you mean a tabbed, mailer-like layout?
If so, unfortunately that isn't supported at the moment...
Showing the thread titles in the upper-right pane looks feasible, but turning the lower-right pane into tabs seems like quite a bit of work 💦
I don't use Jane Style, so I looked into it; I assume you mean a tabbed, mailer-like layout?
That's right. Please picture a typical mailer (Thunderbird, for example).
Showing the thread titles in the upper-right pane looks feasible, but turning the lower-right pane into tabs seems like quite a bit of work 💦
Couldn't it be done by fetching the thread's post list when a thread is selected and keeping it in memory (or in LocalStorage or similar)?
I have no real grasp of the internals, so this is just a rough idea; sorry.
|
gharchive/issue
| 2017-07-28T03:30:23 |
2025-04-01T04:35:44.241533
|
{
"authors": [
"gyamxxx",
"rockwillj"
],
"repo": "rockwillj/Workplace-Readable",
"url": "https://github.com/rockwillj/Workplace-Readable/issues/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1636266049
|
Android fix - App crash on barcode read.
This fixes the issue where the app crashes when scanning a barcode on Android.
Any updates on when this will be merged? I'm having the same issue.
Any updates? I could use this; I am experiencing crashes on Android.
|
gharchive/pull-request
| 2023-03-22T18:07:57 |
2025-04-01T04:35:44.248642
|
{
"authors": [
"fabrianibrahim",
"kheuser",
"parkerbo"
],
"repo": "rodgomesc/vision-camera-code-scanner",
"url": "https://github.com/rodgomesc/vision-camera-code-scanner/pull/138",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
334284617
|
Don't assume rspec-mocks when stubbing Selinux
Try to stub Selinux with rspec-mocks if available, otherwise mocha. If neither is available, do nothing.
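A rough sketch of that fallback; both the helper name and the stubbed method are illustrative rather than rspec-puppet's actual code:
def stub_selinux(selinux)
  if defined?(RSpec::Mocks)
    allow(selinux).to receive(:is_selinux_enabled).and_return(0)
  elsif defined?(Mocha)
    selinux.stubs(:is_selinux_enabled).returns(0)
  end
  # With neither mocking framework loaded, leave Selinux untouched.
end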
Coverage decreased (-0.08%) to 91.986% when pulling c76cf02130a006da62a0e6c7e4a79a047918e7f4 on selinux_stub_regression into 9f5c3a8e219a1166e75e2c18f4bf5d178ee93996 on master.
|
gharchive/pull-request
| 2018-06-20T23:30:51 |
2025-04-01T04:35:44.250433
|
{
"authors": [
"coveralls",
"rodjek"
],
"repo": "rodjek/rspec-puppet",
"url": "https://github.com/rodjek/rspec-puppet/pull/698",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
106978531
|
Extract childIndexMap in favor of core performance
childIndexMap currently gets created every time children get resolved, which means O(2n) for every set of children.
This could be done only if there actually is a child-index pseudo. Typemaps could be saved to the Component while index maps need to be created every time.
This might be solved by passing down a 'parent' prop or sth. like that
Might also fit within context
While iterating children, the parent element needs to be added to newProps as well.
Also, we need to add keys to every index-sensitive element to compare those later.
Fixed with #108
|
gharchive/issue
| 2015-09-17T13:00:01 |
2025-04-01T04:35:44.256887
|
{
"authors": [
"rofrischmann"
],
"repo": "rofrischmann/react-look",
"url": "https://github.com/rofrischmann/react-look/issues/91",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
123450770
|
Update Mixin.md
Typo fixed
Thanks :)
|
gharchive/pull-request
| 2015-12-22T09:59:34 |
2025-04-01T04:35:44.257866
|
{
"authors": [
"rickdoesburg",
"rofrischmann"
],
"repo": "rofrischmann/react-look",
"url": "https://github.com/rofrischmann/react-look/pull/161",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1673153526
|
undefined: v8.NewContext
I'm using Win11 + WSL2 + Ubuntu 22.04; I don't know why the library functions don't seem to be imported.
Do you have build-essentials installed?
In my case, it worked after executing "go env -w CGO_ENABLED='1'".
|
gharchive/issue
| 2023-04-18T13:36:47 |
2025-04-01T04:35:44.259805
|
{
"authors": [
"0x5457",
"dream2333",
"ondrej-smola"
],
"repo": "rogchap/v8go",
"url": "https://github.com/rogchap/v8go/issues/380",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
393941928
|
godef by expression not working with go/packages
I think this works without go/packages:
Create hello.go:
package main
import "fmt"
func main() {
fmt.Println("Hello, World!")
}
go mod init.
Run godef -f hello.go fmt.Println.
You get the error: godef: Offset -1 was not a valid identifier
try godef -t -f hello.go -o 43, you may get
xxx/hello.go:3:8
import (fmt "fmt")
This is about the expression mode not working. Offset mode still works.
Module mode doesn't support expressions for now; it uses tools/x/package to find the definition.
|
gharchive/issue
| 2018-12-25T00:52:36 |
2025-04-01T04:35:44.287591
|
{
"authors": [
"segevfiner",
"sunliver"
],
"repo": "rogpeppe/godef",
"url": "https://github.com/rogpeppe/godef/issues/104",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
259416119
|
For some Reason the Frame data is not being captured by the App.
First thanks for the awesome work : ).
I am facing a tiny problem; I hope you can help.
The following link is an image of the problem that a lot of people are facing :( any guidance regarding the required frameworks or software needed to run this tool properly would really be appreciated.
https://drive.google.com/open?id=0B5DwimTy6XK_ekJlMXVRVHVWRDg
There was a patch for Tekken, so you need to update the offsets, mate; check the readme for how to do so.
Really appreciate the help; I have now read the readme file. I think if someone could make a video tutorial it would be nothing less than a blessing.
Here's the link of my progress,
https://drive.google.com/open?id=0B5DwimTy6XK_VllaWjlMcjRBN3c
So, I have no knowledge of Python. A few points.
I have no clue what the following means, and if someone could make a video tutorial or a blog post about it I would really appreciate that.
Updating Memory Addresses with Cheat Engine after patches ?
When Tekken 7.exe is patched on Steam, it may change the location in memory of the relevant addresses. ? To find the new addresses, use Cheat Engine or another memory editor to locate the values, then find the new pointer addresses: ?
Currently, Tekken Bot only needs one value (Tekken7.exe Base + first offset --> follow that pointer to a second pointer --> follow the second pointer to the base of the player data object in memory). ? To find the player data object you can use the following values for player 1 animation ids: ?
Standing: 32769
Crouching (holding down, no direction. Hold for a second to avoid the crouching animation id): 32770
Alternately, you can search for move damage which is displayed in training mode and active (usually) for the duration of the move. ?
Whatever you find, there should be 9 values, eight in addresses located close together and one far away. Find the offset to the pointer to the pointer of any of the first 8 and replace the 'player_data_pointer_offset' value in MemoryAddressEnum.py. ?
Can we contact roguelike2d to let him know what we need updated? Does anyone know how to contact him?
I'm at a loss on how to update the program myself (not a programmer by any means). I agree with trying to get hold of roguelike2d to see if he/she can update the bot for us.
change the player_data_pointer_offset on MemoryAddressEnum.py from
player_data_pointer_offset = 0x033F6B40
to
player_data_pointer_offset = 0x33DECC0
I think the latest Tekken 7 update may have caused the latest version to not work anymore. It's back to showing only question marks again.
roguelike2d we need an update pls pls pls
player_data_pointer_offset = 0x33DFC40
;)
Thanks lovegu. I wish I knew how to edit that field and compile it. I'll patiently wait for rogue2d to update the version. :)
Hi roguelike2d, would you be able to update this version? Current version is still not working with the latest version in Steam.
PRoosta made properly working version
https://github.com/roguelike2d/TekkenBot/pull/58
Sadly, it doesn't work anymore.
|
gharchive/issue
| 2017-09-21T08:30:49 |
2025-04-01T04:35:44.296495
|
{
"authors": [
"GohersWay",
"Haitakekakashi",
"SamppaZ",
"excision1",
"lovegu",
"riririru",
"yup-yup"
],
"repo": "roguelike2d/TekkenBot",
"url": "https://github.com/roguelike2d/TekkenBot/issues/49",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1901478902
|
'validator' + 'onSaved' is not defined
Installed drop_down_search_field: ^1.0.0, and when I add the DropDownSearchField() widget and follow along with the example, I get an error for the validator parameter. The error says there is no named parameter validator.
The same sort of error shows for onSaved. I am using this input within a Form() widget. When I inspect the source code, it is true that I do not see these options in the constructor.
My mistake. I see I should have used DropDownSearchFormField(). duh.
|
gharchive/issue
| 2023-09-18T18:05:12 |
2025-04-01T04:35:44.298485
|
{
"authors": [
"chuckntaylor"
],
"repo": "rohanjariwala03/drop_down_search_field",
"url": "https://github.com/rohanjariwala03/drop_down_search_field/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1093993294
|
JQF requires Java 9+ since release 1.8
I have a feeling this wasn't intended, as we'd expect to at least target Java 11 or 17, not 9 :)
I think because of this line, the build is generating a library targeting Java 9
https://github.com/rohanpadhye/JQF/blame/cade82293b9a5060e8800adac4ceec5798178575/pom.xml#L142
This results in an error like this when running on Java 8
Caused by: java.lang.UnsupportedClassVersionError: edu/berkeley/cs/jqf/fuzz/Fuzz has been compiled by a more recent version of the Java Runtime (class file version 53.0), this version of the Java Runtime only recognizes class file versions up to 52.0
class file 53.0 is for Java 9
If it is intended to raise the floor, it should be with a 2.x release, but I suspect newer features aren't being used and it's just a build configuration issue.
You are right that JQF 1.8 requires at least Java 9 to build. Java 8 was released in 2014 and reached end of public updates in 2019, so I don't expect it to be in active use by anyone building JQF.
We could target Java 11 or 17, but we're currently just targeting the oldest version with which everything builds. There's no change to the public API of JQF, so the major version is not yet bumped; though a 2.x release is on the cards.
Note that JQF should still be able to fuzz targets that are themselves compiled with very old versions of Java.
@rohanpadhye My issue is about not being able to use JQF from Maven Central with my Java 8 app. This is most likely because the release is causing the target bytecode to be Java 9 compatible.
Yes, that's intentional. The current version depends on JDK methods that were only introduced in Java 9, and so will not work with a Java 8 app.
Whether updating a language/runtime requirement implies a major version update (e.g. 2.x as you suggested in your original post) is an interesting point, though, and has made me think.
Semantic Versioning (at least as of 2.0.0) does not appear to offer any opinions on this matter. Major version update is only required for breaking changes to the public API, but not updates to own dependencies. It is not clear to me how updates to language or run-time dependencies required of clients should be handled.
From a quick glance at projects that are more widely used than JQF, it looks like language requirements do get updated with minor version updates. For example, Apache Spark moved from a Java 7 to a Java 8 requirement between 2.1.0 and 2.2.0, and similarly updated Python requirements across 3.0.0 and 3.2.0. Jackson updated its target Java version across versions 2.6 and 2.7. Hmmm.
Thanks for confirming the version change is intentional; in that case no worries. I've seen a lot of libraries bump the major version when changing the Java floor, especially from the still very commonly used Java 8, but it's up to each project. We'll disable our fuzz tests on the Java 8 run but keep them on the others.
|
gharchive/issue
| 2022-01-05T05:21:59 |
2025-04-01T04:35:44.305590
|
{
"authors": [
"anuraaga",
"rohanpadhye"
],
"repo": "rohanpadhye/JQF",
"url": "https://github.com/rohanpadhye/JQF/issues/172",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1837777320
|
Automatically installs robot models in the robots directory
The change in this PR will allow robot models in the robots directory to be automatically discovered and installed. This eliminates the need to make local changes to CMakeLists.txt each time a new robot is added.
For jvrc_mj_description, the behavior is the same as before; i.e., print FATAL_ERROR if not found, and call add_subdirectory if found.
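A hypothetical sketch of what such auto-discovery can look like in CMake (not the actual diff of this PR):
file(GLOB robot_dirs RELATIVE "${CMAKE_CURRENT_SOURCE_DIR}/robots" "${CMAKE_CURRENT_SOURCE_DIR}/robots/*")
foreach(robot ${robot_dirs})
  if(IS_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/robots/${robot}")
    add_subdirectory("robots/${robot}")  # each robot ships its own CMakeLists.txt
  endif()
endforeach()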
Thanks! :+1:
|
gharchive/pull-request
| 2023-08-05T14:03:38 |
2025-04-01T04:35:44.308501
|
{
"authors": [
"mmurooka",
"rohanpsingh"
],
"repo": "rohanpsingh/mc_mujoco",
"url": "https://github.com/rohanpsingh/mc_mujoco/pull/50",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2335925064
|
fix typing for e2
For more discussion, see #89.
Summary by CodeRabbit
Refactor
Improved type hinting for better code clarity and precision.
Codecov Report
Attention: Patch coverage is 0% with 3 lines in your changes missing coverage. Please review.
Project coverage is 6.36%. Comparing base (75fbd8e) to head (674a7a4).
File: midealocal/devices/e2/__init__.py | Patch: 0.00% | 3 lines missing :warning:
Additional details and impacted files
@@ Coverage Diff @@
## main #92 +/- ##
=====================================
Coverage 6.36% 6.36%
=====================================
Files 77 77
Lines 6630 6630
=====================================
Hits 422 422
Misses 6208 6208
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
This should be the case for a StrEnum.
The main purpose of this PR is to pass the mypy check with minimal modifications.
We can start the refactoring work in another PR.
And my main point is not to affect the user's current configuration.
|
gharchive/pull-request
| 2024-06-05T13:28:55 |
2025-04-01T04:35:44.318990
|
{
"authors": [
"Necroneco",
"codecov-commenter"
],
"repo": "rokam/midea-local",
"url": "https://github.com/rokam/midea-local/pull/92",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
421738106
|
Stopping debug does not always kill all active threads
When restarting debugging or stopping following a BrightScript crash, it should run the exit command until the "Thread detached" message is detected.
In some cases this leads to the channel getting stuck in an open state where it will not accept a new sideload. I am able to get around this by opening a telnet connection and running exit myself a few times until I get the "Thread detached" message. After that the channel closes to the home screen correctly and I am able to sideload again.
@triwav this is the same thing we were discussing on Slack, right, where we decided it would be better to send the kill command instead of the home press? I like the idea of running the exit command repeatedly until we get the "thread detached" message. Thoughts?
PR related to this issue: #124
@triwav Thinking we can call this one fixed? The only time this seems to happen now is if the box is in a really messed-up state. But that's rare now and there's not a lot we can do about it. Also, I think the debug protocol addresses this issue long term.
I still see it happen. The fact that we now allow you to issue your own commands makes this way easier to fix by just issuing the quit command yourself.
@chrisdp @triwav is this still an issue?
I don't think so... @triwav thoughts?
|
gharchive/issue
| 2019-03-15T23:25:44 |
2025-04-01T04:35:44.322983
|
{
"authors": [
"TwitchBronBron",
"chrisdp",
"triwav"
],
"repo": "rokucommunity/vscode-brightscript-language",
"url": "https://github.com/rokucommunity/vscode-brightscript-language/issues/116",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1356786869
|
[BUG] Fix nudges deep links change
Describe the bug
Handle the new nudges deep links format
After discussing with @dobromirdobrev it turned out that this is only a configuration issue, so closing it.
|
gharchive/issue
| 2022-08-31T05:10:30 |
2025-04-01T04:35:44.324254
|
{
"authors": [
"petyos"
],
"repo": "rokwire/lms-building-block",
"url": "https://github.com/rokwire/lms-building-block/issues/58",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
797728503
|
Random reboot with 0.1.4
Hi,
Facing random reboots with 0.1.4/r30gb as soon as I access my camera with any ONVIF compatible apps (Android or Windows). No problem through RTSP however.
Thanks
Check if the problem is the snapshot, for example through the web interface.
Yes, seems to be. When I open the Snapshot tab, I can see a fresh snapshot and then the camera reboots itself. No problem however on the PTZ tab.
It's a known issue: low memory.
Enable swap space.
Thank you, it works. My bad, I deactivated the swap space in 0.1.4 and didn't think it could be the reason. Sorry for wasting your time!
No problem.
It would be nice to indicate this configuration change in README.md, so new users would be aware of it
This isn't a configuration change.
The swap space problem has been here from the beginning.
But I could add a note in the readme.
|
gharchive/issue
| 2021-01-31T14:48:14 |
2025-04-01T04:35:44.329603
|
{
"authors": [
"huntz",
"roleoroleo",
"sro2000"
],
"repo": "roleoroleo/yi-hack-Allwinner-v2",
"url": "https://github.com/roleoroleo/yi-hack-Allwinner-v2/issues/84",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
681545869
|
Snapshot URL without credentials box appearing
Hi,
I tried to set the snapshot URL with username and password, but with no success.
Can you help me with this?
You have to configure authentication in "Configuration page" and use basic auth.
e.g.
http://user:password@IP-CAM:8080/cgi-bin/snapshot.sh
Thanks... somehow in Edge it won't work...
|
gharchive/issue
| 2020-08-19T04:57:05 |
2025-04-01T04:35:44.331585
|
{
"authors": [
"roleoroleo",
"rt400"
],
"repo": "roleoroleo/yi-hack-Allwinner",
"url": "https://github.com/roleoroleo/yi-hack-Allwinner/issues/121",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
297124607
|
ReadmeTest class doesn't pass the tests
Hi!
Noticed #314 pull request fail and tested it locally.
ReadmeTest::testBasicUsage() and ReadmeTest::testBasicUsage2() methods' assertions pass fine when ReadmeTest.php runs alone, but fail when all tests run together.
It seems that previous tests set Rollbar static attributes that interfere with later usage.
@gelige I removed the assertions, for now, to let the builds pass. I will resolve the failing assertions in the next tag.
Fixed in https://github.com/rollbar/rollbar-php/pull/326
|
gharchive/issue
| 2018-02-14T14:54:38 |
2025-04-01T04:35:44.339927
|
{
"authors": [
"ArturMoczulski",
"gelige"
],
"repo": "rollbar/rollbar-php",
"url": "https://github.com/rollbar/rollbar-php/issues/315",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2660075933
|
Remove main install docs ldfile
How did that get in? I thought I removed it when I merged that PR. Oops.
Must remove that; it has been resolved since Dubai, both here and in local-ic.
no need to push new bin
c4801bf2f25bce531ade91f7d3a0d98cdeedeb89
|
gharchive/issue
| 2024-11-14T21:15:57 |
2025-04-01T04:35:44.348670
|
{
"authors": [
"Reecepbcups"
],
"repo": "rollchains/spawn",
"url": "https://github.com/rollchains/spawn/issues/265",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
444023755
|
New Versions
Testing against new python/postgres versions. Reformatting and optimizing imports
Coverage remained the same at 100.0% when pulling 51710a18d25f787b008930d9fb67dbadd23974c7 on feature/new-versions into ac3257faa3a7a4353a2f83e3e3f260bd38f543c1 on master.
|
gharchive/pull-request
| 2019-05-14T16:57:06 |
2025-04-01T04:35:44.384770
|
{
"authors": [
"coveralls",
"rolobio"
],
"repo": "rolobio/DictORM",
"url": "https://github.com/rolobio/DictORM/pull/57",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
82939407
|
Scope relations' method resolution to direct classes
This is mainly because Rails is stupid, but for example I was trying to do the following:
# Other code omitted for brevity
class ProductsRelation < ROM::Relation[:sql]
dataset :products
register_as :products
one_to_many :options, key: :product_id
def with_options
association_join(:options)
end
def by_id(id)
where(id: id)
end
end
# Let's get a Product with its Options!
rom.relation(:products).by_id(params[:product_id]).with_options.one!
Which raises an ArgumentError due to http://apidock.com/rails/Object/with_options: because method_missing is used to call the relation's methods, it ends up calling the method on Object instead.
I'll work around it for now, but wondering if anything has been thought about in this regard?
Gah that's just...
Anyway, that method_missing will be gone prior to 1.0.0. Each relation will have its own lazy class that will decorate the relation's methods to provide an autocurry mechanism and so on.
Now you just gave me a reason to do it faster :)
Lazy with method_missing is gone so this is fixed too :)
|
gharchive/issue
| 2015-05-30T23:07:28 |
2025-04-01T04:35:44.387898
|
{
"authors": [
"pnomolos",
"solnic"
],
"repo": "rom-rb/rom",
"url": "https://github.com/rom-rb/rom/issues/255",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1201637245
|
Updating model via update method fails if revision id is used and model is not modified
Here's the test case:
async def test_empty_update():
doc = DocumentWithRevisionTurnedOn(num_1=1, num_2=2)
await doc.insert()
# This fails with RevisionIdWasChanged
await doc.update({"$set": {"num_1": 1}})
It seems that in the update method we check if result.modified_count == 0 and that causes the problem. Maybe result.matched_count == 0 would be more appropriate?
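A small runnable illustration of why matched_count is the safer check (needs a local MongoDB; database and collection names are made up):
import asyncio

from motor.motor_asyncio import AsyncIOMotorClient

async def main() -> None:
    coll = AsyncIOMotorClient()["test_db"]["docs"]
    await coll.delete_many({})
    await coll.insert_one({"_id": 1, "num_1": 1})

    # An update that matches the document but changes nothing:
    result = await coll.update_one({"_id": 1}, {"$set": {"num_1": 1}})
    assert result.matched_count == 1   # the document was found
    assert result.modified_count == 0  # nothing changed, so a check on
                                       # modified_count misreports a conflict

asyncio.run(main())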
Good catch! Thank you!
|
gharchive/issue
| 2022-04-12T10:35:03 |
2025-04-01T04:35:44.390808
|
{
"authors": [
"MaratBR",
"roman-right"
],
"repo": "roman-right/beanie",
"url": "https://github.com/roman-right/beanie/issues/239",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
892354397
|
Adds unique flag to Indexed field
Adds the ability to create or integrate existing unique indexes using the Indexed field.
class Plan(Document):
key: Indexed(str, unique=True)
name: str
type: str
This is accomplished by turning _indexed into a tuple and passing the extra value(s) to pymongo.IndexModel as keyword parameters. This can be further extended to add other index attributes to the Indexed field init.
This functionality was already available in the code by using a list of IndexModel in a model's Collection definition, but adding this init kwarg is much more readable and concise. For clarification, it does not affect the index list method demoed in the index demo.
Additionally, I've added a ValueError check for the value of index_type since the truthy check in the collection factory now evaluates the tuple's truthiness.
To your point about the ODM checking the inputs and looking at the IndexModel docs, if the Indexed field uses **kwargs instead of named parameters, it can pass the kwargs directly to the ODM using the same tuple method. Sacrifices some readability but guarantees support for all IndexModel flags even if the ODM adds new ones.
In that case, would you prefer a test that only validates that the kwargs made it to the IndexModel object, or one that validates the Indexed kwargs were successfully applied to the collection's indexes?
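A rough sketch of that **kwargs idea (illustrative, not the actual patch):
import pymongo

def Indexed(typ, index_type=pymongo.ASCENDING, **kwargs):
    # Wrap a type and remember how it should be indexed.
    class NewType(typ):
        _indexed = (index_type, kwargs)  # later unpacked into pymongo.IndexModel
    NewType.__name__ = f"Indexed {typ.__name__}"
    return NewType

# usage: key: Indexed(str, unique=True)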
Hi @flyinactor91,
Sorry for the delay.
About the index type checking and value error:
I think, this is not needed and it doesn't cover all the cases. Drop this if-statement, please.
About kwargs.
This is an interesting idea. I think this will work, yes. 👍
About tests.
This is up to you. It depends on the implementation, I'd say.
If I will not answer fast enough here, you can reach me in discord.
|
gharchive/pull-request
| 2021-05-15T04:34:18 |
2025-04-01T04:35:44.395701
|
{
"authors": [
"flyinactor91",
"roman-right"
],
"repo": "roman-right/beanie",
"url": "https://github.com/roman-right/beanie/pull/32",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1076331892
|
KEY_CONFIG.md vim keybindings code snippet left key is double mapped to open help
When copying the basic vim keybindings snippet from the docs I ran into a bug where the h key was double mapped to open_help and move_left. I suggest extending the snippet by remapping open_help to H like so:
(
focus_right: Some(( code: Char('l'), modifiers: ( bits: 0,),)),
focus_left: Some(( code: Char('h'), modifiers: ( bits: 0,),)),
focus_above: Some(( code: Char('k'), modifiers: ( bits: 0,),)),
focus_below: Some(( code: Char('j'), modifiers: ( bits: 0,),)),
move_left: Some(( code: Char('h'), modifiers: ( bits: 0,),)),
move_right: Some(( code: Char('l'), modifiers: ( bits: 0,),)),
move_up: Some(( code: Char('k'), modifiers: ( bits: 0,),)),
move_down: Some(( code: Char('j'), modifiers: ( bits: 0,),)),
open_help: Some(( code: Char('H'), modifiers: ( bits: 0,),)),
)
wrong repo 🤦♂️
|
gharchive/issue
| 2021-12-10T01:54:19 |
2025-04-01T04:35:44.425371
|
{
"authors": [
"dannyknows"
],
"repo": "ron-rs/ron",
"url": "https://github.com/ron-rs/ron/issues/348",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1174220049
|
Request support for the new site 白兔俱乐部 (Hares Club)
Site name: 白兔俱乐部
Site URL: https://club.hares.top/
Site description: HD 2K resource site
Resource types:
Open registration: no
Joint liability (连坐): no
Site rules:
dupe: #935
|
gharchive/issue
| 2022-03-19T09:26:51 |
2025-04-01T04:35:44.430457
|
{
"authors": [
"AdoShan",
"Rhilip"
],
"repo": "ronggang/PT-Plugin-Plus",
"url": "https://github.com/ronggang/PT-Plugin-Plus/issues/1022",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
742999977
|
On the Mac version of Chrome, the extension gets removed automatically after the machine reboots
PT helper version: 1.5.0
PT helper installation method: installed from the zip package
Browser name and version: Chrome 86.0.4240.193 (official build) (x86_64)
Other extensions installed in the browser: yes
Does it work normally after disabling the other extensions: I don't think the other extensions are the problem
Problem description:
It can still be found in the browser's extensions tool
Message shown: "This extension is not listed in the Chrome Web Store and may have been added without your knowledge."
Related screenshots:
Steps to reproduce:
https://xclient.info/a/1ddd2a3a-d34b-b568-c0d0-c31a95f0b309.html
Thanks, that solved it!
|
gharchive/issue
| 2020-11-14T13:01:36 |
2025-04-01T04:35:44.434144
|
{
"authors": [
"gdw1986",
"ted423"
],
"repo": "ronggang/PT-Plugin-Plus",
"url": "https://github.com/ronggang/PT-Plugin-Plus/issues/635",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1455703223
|
Nothing Running
I have the internet connection checker up and running; it shows connected. Both ethernet and wifi are connected. But the monitor just gives me the title of 'Internet Monitor' that is underlined, and never runs any test.
Is there supposed to be a point where we tell the tester where to run? I'm not seeing that in any of the module config.
I have the same problem.
same here
Running inside Docker on a Raspberry Pi 4; all seems OK, but as above only the title appears. I have configured it inside Docker (docker exec -it mm bash) but still don't see any icon. Any advice welcome...
I've just tried this module on a fresh install; nothing is displaying for me either.
Not working, and I tried this: https://github.com/BrianHepler/internet-monitor.git
OK, so this appears to be an issue because of a change on the Speedtest.net web site. If you're not running a newer version, then it fails to get a list of servers. After poking at it for a while, I manually updated speedtest-net to 2.2.0 and then had to fix a couple of other modules that needed updating. I'll probably have to investigate package-lock.json files so future updates don't stomp on my work.
I had to update (install commands sketched below):
lzma-native (8.0.6)
decompress-tarxz (3.0.0)
ddsol/speedtest-net (2.2.0)
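A sketch of the equivalent install commands (the module path is illustrative; speedtest-net is the npm name of ddsol/speedtest-net):
cd ~/MagicMirror/modules/internet-monitor
npm install lzma-native@8.0.6 decompress-tarxz@3.0.0 speedtest-net@2.2.0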
Check the output of the terminal window when you start MagicMirror for clues.
WARNING! Could not load config file. Starting with default configuration. Error found: Error: Cannot find module /home/xxxxx/MagicMirror/modules/internet-monitor/node_modules/lzma-native/binding-v4.0.6-electron-v22.0-linux-arm/lzma_native.node'
Require stack:
/home/xxxxx/MagicMirror/modules/internet-monitor/node_modules/lzma-native/index.js
/home/xxxxx/MagicMirror/modules/internet-monitor/node_modules/decompress-tarxz/index.js
/home/xxxxx/MagicMirror/modules/internet-monitor/node_modules/speedtest-net/index.js
/home/xxxxx/MagicMirror/modules/internet-monitor/node_helper.js
/home/xxxxx/MagicMirror/js/app.js
/home/xxxxx/MagicMirror/js/electron.js
/home/xxxxx/MagicMirror/node_modules/electron/dist/resources/default_app.asar/main.js
The problem for me... is also Electron and node_helper.
The default config.js file doesn't reference internet-monitor. Issues with lzma-native should be a version problem. The reference to v4.0.6 makes me think that you don't have 8.0 installed. If you're getting errors about MAX-whatever, I got around that by directing it not to use the GPU. https://forum.magicmirror.builders/topic/17306/mesa-loader-failed-to-retrieve-device-information/2
I upgraded lzma using npm install lzma-native@8.0.6 from the internet-modules/node_modules folder. You can check the version from package.json in the module directory.
I've got things working on my test machine, but the actual mirror is not working. It's complaining about st.on not being a function. Haven't worked through that problem. That call comes from node_helper.js.
|
gharchive/issue
| 2022-11-18T19:46:26 |
2025-04-01T04:35:44.446597
|
{
"authors": [
"KennethGrainger",
"Kevin11Price",
"StryderGX",
"bigdog-will",
"davidoesch",
"mirrormonark",
"noz1380"
],
"repo": "ronny3050/internet-monitor",
"url": "https://github.com/ronny3050/internet-monitor/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
574674851
|
Which is correct, <module collation> or <module collations>?
I just found typo of <module collation> or <module collations> .
In sql-2003-2.bnf,the below lines uses <module collation> but no definition in BNF.
<SQL-client module definition> ::=
<module name clause> <language clause> <module authorization clause>
[ <module path specification> ]
[ <module transform group specification> ]
[ <module collation> ]
[ <temporary table declaration>... ]
<module contents>...
But there is a <module collations> definition in the BNF.
<module collations> ::= <module collation specification>...
I have posted many issues at once, so I will close them and post one issue at a time.
|
gharchive/issue
| 2020-03-03T13:39:21 |
2025-04-01T04:35:44.449752
|
{
"authors": [
"GCer-Hidenori"
],
"repo": "ronsavage/SQL",
"url": "https://github.com/ronsavage/SQL/issues/15",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1446327871
|
🛑 TROJAN 🇸🇬 Singapore SGO 1 is down
In e9b0b00, TROJAN 🇸🇬 Singapore SGO 1 (https://sgt-2.opensvr.net/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: TROJAN 🇸🇬 Singapore SGO 1 is back up in 4d4bcea.
|
gharchive/issue
| 2022-11-12T08:13:02 |
2025-04-01T04:35:44.478840
|
{
"authors": [
"roosterkid"
],
"repo": "roosterkid/opentunnel-status-server",
"url": "https://github.com/roosterkid/opentunnel-status-server/issues/11670",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1458038967
|
🛑 TROJAN 🇭🇰 Hong Kong HKE 1 is down
In 13c8a13, TROJAN 🇭🇰 Hong Kong HKE 1 (https://hkt-1.opensvr.net/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: TROJAN 🇭🇰 Hong Kong HKE 1 is back up in b4ee8b5.
|
gharchive/issue
| 2022-11-21T14:26:27 |
2025-04-01T04:35:44.481339
|
{
"authors": [
"roosterkid"
],
"repo": "roosterkid/opentunnel-status-server",
"url": "https://github.com/roosterkid/opentunnel-status-server/issues/12333",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1506675711
|
🛑 XRAY 🇩🇪 Germany DEH 1 is down
In 8e4e10d, XRAY 🇩🇪 Germany DEH 1 (https://dex-1.openv2ray.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: XRAY 🇩🇪 Germany DEH 1 is back up in a806a79.
|
gharchive/issue
| 2022-12-21T17:32:14 |
2025-04-01T04:35:44.483992
|
{
"authors": [
"roosterkid"
],
"repo": "roosterkid/opentunnel-status-server",
"url": "https://github.com/roosterkid/opentunnel-status-server/issues/14485",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|