Columns:
id: string (length 4 to 10)
text: string (length 4 to 2.14M)
source: string (2 classes)
created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict
2041323508
Implement a feature to read the ISBN from a barcode On the smartphone version only, make it possible to scan a book's barcode with the camera and read the ISBN. https://serratus.github.io/quaggaJS/ https://github.com/ericblade/quagga2
gharchive/issue
2023-12-14T09:45:16
2025-04-01T06:36:47.187586
{ "authors": [ "BlueSchnauzer" ], "repo": "BlueSchnauzer/BookLogger", "url": "https://github.com/BlueSchnauzer/BookLogger/issues/81", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
989082313
Which branch's driver should be used with the BW-DR03 driver box? The current ROS package driver works fine with the BW-DR02, but after switching to the new BW-DR03 driver box it cannot move forward or backward properly. What could be the cause? I just bought a BW-DR03 driver. Do you have a wiring diagram for a pair of hub motors in English? Do I need to calibrate motors before using the ROS package to manage it? The 02 and 03 protocols are the same. First use the mtools tool to verify whether the 03 works properly, and whether its parameters are consistent with the 02.
gharchive/issue
2021-09-06T11:39:37
2025-04-01T06:36:47.197624
{ "authors": [ "haquebd", "randoms", "zhAlpha" ], "repo": "BluewhaleRobot/xqserial_server", "url": "https://github.com/BluewhaleRobot/xqserial_server/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1216958433
utils/xz-utils: Provide a dev package Upgrade to 5.2.5 whilst we're at it. This probably needs some fixes in the example repos. I'll provide those if this MR is considered fine for inclusion. LGTM. I still need to update the example repos to the new split layout... :see_no_evil: I still need to update the example repos to the new split layout... :see_no_evil: I'll do this too. I'll do this too. I'll postpone the update of the examples until BobBuildTool/basement-gnu-linux#1 is merged. I just want to follow up on this: is there anything I've missed? AFAICS, when https://github.com/BobBuildTool/basement-gnu-linux/pull/1 is merged, this and #134 can be merged, too. Afterwards, I'll update the example repos to use the latest basement including the split. I just want to follow up on this: is there anything I've missed? No. I just have been lazy... :see_no_evil:
gharchive/pull-request
2022-04-27T07:59:57
2025-04-01T06:36:47.219805
{ "authors": [ "Ferruck", "jkloetzke" ], "repo": "BobBuildTool/basement", "url": "https://github.com/BobBuildTool/basement/pull/133", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1482683991
issue#4 | @rebeccahongsf | update ordering to work on multiple browsers Summary update to ascending order which works on multiple browsers Issue ticket number and link #4 Checklist before requesting a review [x] I have performed a self-review of my code Screenshots (if applicable) Firefox: Chrome: @jwu910
gharchive/pull-request
2022-12-07T19:31:56
2025-04-01T06:36:47.223018
{ "authors": [ "rebeccahongsf" ], "repo": "BobaTalks/bobatalks.github.io", "url": "https://github.com/BobaTalks/bobatalks.github.io/pull/10", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
530150700
Multi user deployment changes @gaoning777 Here are our deployment changes. Thanks Yuan Superseded by https://github.com/Bobgy/manifests/pull/2
gharchive/pull-request
2019-11-29T04:30:44
2025-04-01T06:36:47.226004
{ "authors": [ "Bobgy", "gaoning777" ], "repo": "Bobgy/manifests", "url": "https://github.com/Bobgy/manifests/pull/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
276079094
CLI ETA? Hey, Thanks for this one! I'd like to test on a React Native using CLI. Is there an ETA? Pasting in a code editor didn't work :( @j2l , hi, thanks. Can you please show which code doesn't work? Or what error are you getting in dev-tools. Thanks. Thanks for replying quickly! First I tried to paste some react native code in a online editor (from Under the hood ReactJS) and svg wasn't produced, no error seen. Then I tried CLI: npm i -g js2flowchart installed it but jsflowchart is a unknown command. Hey, it wasn’t implemented yet, that’s why it didn’t work like global install. I pushed a CLI feature yesterday, please update js2flowchart from npm and try again. Thanks. Let me know if still have issues. Cool! But it doesn't like some react native: js2flowchart App.js Error at parseCodeToAST: Unexpected token (51:8) C:\Users\pm\AppData\Roaming\npm\node_modules\js2flowchart\dist\js2flowchart.js:18131 throw e; ^ SyntaxError: Unexpected token (51:8) at Parser.pp$5.raise (C:\Users\pm\AppData\Roaming\npm\node_modules\js2flowchart\dist\js2flowchart.js:13702:13) at Parser.pp.unexpected (C:\Users\pm\AppData\Roaming\npm\node_modules\js2flowchart\dist\js2flowchart.js:11009:8) at Parser.pp$1.parseClassProperty (C:\Users\pm\AppData\Roaming\npm\node_modules\js2flowchart\dist\js2flowchart.js:11819:50) at Parser.pp$1.parseClassBody (C:\Users\pm\AppData\Roaming\npm\node_modules\js2flowchart\dist\js2flowchart.js:11764:34) at Parser.pp$1.parseClass (C:\Users\pm\AppData\Roaming\npm\node_modules\js2flowchart\dist\js2flowchart.js:11654:8) at Parser.pp$1.parseExport (C:\Users\pm\AppData\Roaming\npm\node_modules\js2flowchart\dist\js2flowchart.js:11890:19) at Parser.pp$1.parseStatement (C:\Users\pm\AppData\Roaming\npm\node_modules\js2flowchart\dist\js2flowchart.js:11132:74) at Parser.pp$1.parseBlockBody (C:\Users\pm\AppData\Roaming\npm\node_modules\js2flowchart\dist\js2flowchart.js:11516:21) at Parser.pp$1.parseTopLevel (C:\Users\pm\AppData\Roaming\npm\node_modules\js2flowchart\dist\js2flowchart.js:11026:8) at Parser.parse (C:\Users\pm\AppData\Roaming\npm\node_modules\js2flowchart\dist\js2flowchart.js:10921:17) @j2l, there is a code state = { appIsReady: false }; inside of class definition. It's not like es6 way of doing that and I use es6-only parser which doesn't like it (it's not valid actually from that point of view), so it breaks there. So if you remove that 3 lines and try without it, you'll see it works fine. Indeed :) Fixed for me. Cheers!
gharchive/issue
2017-11-22T14:04:58
2025-04-01T06:36:47.244972
{ "authors": [ "Bogdan-Lyashenko", "j2l" ], "repo": "Bogdan-Lyashenko/js-code-to-svg-flowchart", "url": "https://github.com/Bogdan-Lyashenko/js-code-to-svg-flowchart/issues/14", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
495707075
Not Working This is not working for me; it gives results as matching for all the faces. This lib is built on top of OpenFace ML. I can't improve it for now or in the near future.
gharchive/issue
2019-09-19T10:18:04
2025-04-01T06:36:47.246966
{ "authors": [ "BohdanNikoletti", "Harishreddy122" ], "repo": "BohdanNikoletti/SFaceCompare", "url": "https://github.com/BohdanNikoletti/SFaceCompare/issues/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1722043563
Install BookStack on Synology NAS Attempted Debugging [X] I have read the debugging page Searched GitHub Issues [X] I have searched GitHub for the issue. Describe the Scenario Hi I am trying to create a backup install of my main BookStack install on a Synology DS414j NAS. I have two real questions: How to install BookStack on the NAS How to 'mirror' my server install to the NAS so they are synchronised Question 1: I have updated the NAS to the latest allowed DSM version (DSM 7.1.1-42962 Update 5), and updated Web Station. I have also installed PHP 8.0. However, all the tutorials I have found say to install Docker or Portainer to get a working BookStack install. My problem is that neither package are available (as far as I can see), so I can't even set up a container to install BookStack to! Has anyone recently managed to create a BookStack install on this NAS? Question 2: Regardless of whether I get the NAs install working, what is the best way of synchronising two installs? Cani I rsync the install from one device to another? Can I restore an sql dump and restore the images, etc? I am mindful of the fact that the .env settings for the db may be different. I am no expert on this stuff, so would welcome some pointers. Mark Exact BookStack Version Latest version Log Content No response PHP Version 8.x Hosting Environment Server - Ubuntu 22.04LTS NAS....? Hi @techauthoruk My problem is that neither package are available (as far as I can see), so I can't even set up a container to install BookStack to! Yeah, the lower value range of Synology boxes (Which I think are those with j at the end of the model name) won't have certain packages due to system limitations. Has anyone recently managed to create a BookStack install on this NAS? It likely is possible to get it running, but it might be messy to do outside of docker due to the requirements needed. It's not really what a Synology NAS is suited for tbh. Regardless of whether I get the NAs install working, what is the best way of synchronising two installs? Cani I rsync the install from one device to another? What is your actual goal in this? There are methods but no easy way for a bi-directional sync. For a single-direction sync you can indeed dump the database and copy the files over. Our backup and restore docs detail the mechanics of this. As an easier potential option, The latest release includes a system CLI that allows potentially easier backup/restore. It's in early alpha and can still be subject to bugs/issues, but it automates and standardises a lot of the process in the docs linked above. I'm not confident it'd run without some pains on a Synology system though. Personally, through experience I've found it's more hassle than it's worth to stretch the use-case of something like a Synology box, which ideally you'd want to remain reliable in it's primary purpose. You'd be going against the grain and there'd be little existing guidance/docs/experience to help when you run into issues (as you're finding now). I have a separate little intel NUC box to run apps, running proxmox to separate/compartmentalise things via VMs or LXC containers. I have a little overview of my setup here. If you just need to have a backup of your system though, you could maybe just look to use the new BookStack system CLI to create backups then store the resulting ZIP on your NAS like any other file. @ssddanbrown thank you for all your comments. I have abandoned the idea of using the Synology now - there is just no reliable way to get Docker installed. 
My goal was actually to have a replica of my main bookstack instance installed elsewhere for reference. The sync would only ever be one way (main instance to backup). The reason behind this - I keep all my notes and instructions for the management of my software apps in bookstack. My server went down a few days ago, and of course I couldn't access my notes to do reinstalls, etc! I wanted a backup on another machine/device so that I could see my notes while I sorted the issues. I do routinely back up the bookstack instance in line with the comments on your site, so it's not a problem restoring bookstack if needed. I tried this morning restoring the sql backup from my main instance into a bookstack install on another machine, but this didn't work - I get a 419 error when trying to log in. I'm going to play around a little more to see if I can find out why. Thank you again for your comments. OK, so resolved now. My 419 issues seemed to be due to an incorrect database pasword. Changed the password, and everything is working as it should, so closing this.
gharchive/issue
2023-05-23T13:03:16
2025-04-01T06:36:47.275035
{ "authors": [ "ssddanbrown", "techauthoruk" ], "repo": "BookStackApp/BookStack", "url": "https://github.com/BookStackApp/BookStack/issues/4257", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1980053033
403 error only when inserting specific code block in editor Describe the Bug I've found an odd bug that occurs specifically with one piece of code within my instance. Narrowing this down took quite a bit. The page I'm using is a reference page where I have approx 15 different SQL queries for reference. When inserting the following text as a code block in any page, "sp__dbutilisation tempdb;" the save button leads immediately to a 403 error page. My bookstack instance immediately slows down for the next few minutes. When I remove this code block, the page saves fine. I'm assuming that that the db command is getting executed on save? I updated to the latest instance and see the same issue. DB is MySQL Steps to Reproduce Create new page named "Queries" or anything else Create a code block with the following code (code block, not in-line code) sp__dbutilisation tempdb; Try saving page Immediately see 403 error Expected Behaviour Page saving without 403 error Screenshots or Additional Context No response Browser Details Chrome Exact BookStack Version 2023.10.1 Searched a bit more. Met me try #555 and #1792 and will update here if the issue persists. Can not recreate the issue on the demo site. Solved by disabling apache mod_security m
gharchive/issue
2023-11-06T21:02:21
2025-04-01T06:36:47.279956
{ "authors": [ "slimninja" ], "repo": "BookStackApp/BookStack", "url": "https://github.com/BookStackApp/BookStack/issues/4654", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2396546387
Internally Hosted Draw.IO is not usable, "This content is blocked. Contact the site owner to fix the issue." Describe the Bug Similar to #2285 , I am getting a gray page in Chrome that says "This content is blocked. Contact the site owner to fix the issue." I have the following environment variables set for the container: DRAWIO=http://172.31.1.167:8080/?embed=1&proto=json&spin=1&configure=1&stealth=1 I have also attempted to modify this environment variable: ALLOWED_IFRAME_SOURCES= I've tried: http://172.31.1.167:8080 https://172.31.1.167:8443 http://172.31.1.167* The only one that "works" is if I make it ALLOWED_IFRAME_SOURCES="*", which seems like a security vulnerability even if I'm running this on a LAN. Note: I can access the plain old Draw.IO interface just fine: http://172.31.1.167:8080, and it loads. Steps to Reproduce Edit a page, click the icon to work on a Draw.io image. Expected Behaviour I expect to load into a Draw.IO instance. Screenshots or Additional Context No response Browser Details Chrome and Edge on Windows 11 Exact BookStack Version v24.05.2 Hi @thickconfusion, You shouldn't need to adjust the iframe sources since BookStack will look to automatically add any custom drawio URL, where set, to the CSP rules. Maybe our custom handling is tripping up any additional rules you're adding. It does look though like we are not currently handling scenarios where non-protocol-standard ports are used. I've marked this to be tested for next patch, against a custom-ported drawio instance. Dev reference https://github.com/BookStackApp/BookStack/blob/78ebcb6f38ee7a984b26cd56dff882ae9d7e9f95/app/Util/CspService.php#L144 Sure, I was just saying that we attempt to handle this so you shouldn't have to set the iframe sources, but we currently don't handle custom defined ports. I've now fixed port handling via 897bb338f956245e2c86bda6cd5c6a67711f9448, with testing to cover, which will be part of the next patch release so I'll therefore close this off. Not sure why your custom ALLOWED_IFRAME_SOURCES additions did not work, since I could work around this on my dev instance via this method, but could be down to browser specifics or configuration changes not take place when expected. If you still have issues after the next patch release feel free to still comment here for further investigation.
gharchive/issue
2024-07-08T20:51:44
2025-04-01T06:36:47.288496
{ "authors": [ "ssddanbrown", "thickconfusion" ], "repo": "BookStackApp/BookStack", "url": "https://github.com/BookStackApp/BookStack/issues/5107", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2474287476
[UI] Token input (add missing part) The bottom part of this is still missing. This is kind of unnecessary, so it won't be implemented.
gharchive/issue
2024-07-08T19:12:58
2025-04-01T06:36:47.310244
{ "authors": [ "gabitoesmiapodo" ], "repo": "BootNodeDev/dAppBoosterLandingPage", "url": "https://github.com/BootNodeDev/dAppBoosterLandingPage/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2079122859
FR: short script to (auto)test wordforms? A script should be created to (possibly auto-) test a wordform file. this script should be extensible need to run it on all existing wordform files Common error: consecutive colons (":") in a line. Possibly due to errors when converting from hunspell For example forms_TR.txt:18516 Script is at 3c019616510f585dff69e00dbcab23bce6df1274 Fix is at 0f9abb57b65d2e05e518fa0df75344105fd49e1a The script is created, all errors are fixed (PL - with #1). Warning possibly could be fixed with #4
gharchive/issue
2024-01-12T15:37:44
2025-04-01T06:36:47.314421
{ "authors": [ "BorisNA" ], "repo": "BorisNA/wordforms", "url": "https://github.com/BorisNA/wordforms/issues/3", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
223309447
Links are not working on mobile devices Hi Naufal, thanks for magnificent component! I experienced little bug recently, links are not clickable inside of <ReactScrollbar /> on mobile devices Example: https://spreecode.github.io/react-scrollbar/ Safari iOS 9.3.5: Not working Safari iOS 10.2: Not working Opera Mini: Not working Mobile Chrome: Tricky (sometimes working) Mobile Firefox: Not working I think the problem is here, when you comment out these lines then links are clickable. Obviously when commented out the scrolling behavior is not working as expected.
gharchive/issue
2017-04-21T08:08:06
2025-04-01T06:36:47.317011
{ "authors": [ "devsli" ], "repo": "BosNaufal/react-scrollbar", "url": "https://github.com/BosNaufal/react-scrollbar/issues/20", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1908671597
Binary Sensor to send notification in advance of operation. Describe the feature Send a notification when new rates published of when a binary sensor will start/stop. Use case: dishwasher on/off is momentary switch but start can be delayed by the user. This advanced/ 'heads-up' notification message would allow recipients to know the correct time to set delay to get lowest overnight price for the dishwasher cycle. Expected behavior Trigger: Tomorrow's rates published binary 'notification' sensor run time calculated notification sent via mobile app containing start/stop times that meet the conditions of the sensor, or a use configurable message with these times as values that can be added to the message with whatever formatting the user chooses. Hello. If I understand your scenario correctly, a new sensor isn't needed as you can do this with existing HA functionality. The times for when the sensor is due can be found as attributes on the sensor. I for instance have a template sensor which extracts this time out vacuum_time: friendly_name: Vacuum time icon_template: mdi:clock value_template: >- {{ as_local(state_attr("binary_sensor.octopus_energy_target_vacuum", "next_time")).strftime('%H:%M') }} and then I have an automation that sends an alert at a specific time to tell me when my vacuum is due on - alias: Cleaning - Clean house notice trigger: - platform: event event_type: morning_reminder condition: - condition: state entity_id: input_boolean.is_cleaning_scheduled state: 'on' action: - event: notify_channels event_data_template: mode: speaker title: Cleaning occurring message: > {{ [ "I'm due to clean today at " + states("sensor.vacuum_time") + ". If you don't want this, ask me to 'turn off cleaning'.", "I'll be cleaning today at " + states("sensor.vacuum_time") + ". If it's not convenient, ask me to 'turn off cleaning'.", ] | random }} If you're wanting to send an alert x minutes before then I have a template sensor which stores the datetime the notification should go off vacuum_warning_one: friendly_name: "Vacuum warning one" device_class : timestamp value_template: >- {% set date = state_attr("binary_sensor.octopus_energy_target_vacuum_working_hours", "next_time") %} {% if date != None %} {{ date - timedelta( minutes = 30 ) }} {% else %} {{ as_datetime("2021-01-01T00:00Z") }} {% endif %} and then an automation looks like - alias: Cleaning - Clean house warning trigger: - platform: time at: sensor.vacuum_warning_one condition: - condition: state entity_id: input_boolean.is_cleaning_scheduled state: 'on' action: - event: notify_channels event_data_template: mode: speaker title: Cleaning occurring message: > {% set target_time = as_local(state_attr("binary_sensor.octopus_energy_target_vacuum", "next_time")) %} {% set minutes = ((as_timestamp(target_time) - as_timestamp(now())) / 60) | round(0) %} {{ [ "I'm planning to clean the house in " + minutes|string + " minutes. If you don't want this, ask me to 'turn off cleaning'.", "I'll be cleaning the house in " + minutes|string + " minutes. 
If it's not convenient, ask me to 'turn off cleaning'.", ] | random }} As you can see I use an input boolean to determine if the vacuum should go on or not, which I use as a condition for an automation when triggering the vacuum - alias: Cleaning - Start Automatic While Everyone Out (Non Work Day) id: 5418B9B8-3B54-49A2-8DCA-C9846B67F751 trigger: - platform: state entity_id: binary_sensor.octopus_energy_target_vacuum to: 'on' condition: - condition: state entity_id: input_boolean.is_cleaning_scheduled state: 'on' action: - event: start_cleaning Let me know if I've misunderstood your scenario. Thank you, this looks exactly what I am looking for... and more ! Much appreciated 👍 Excellent. I'll close this feature request then :)
gharchive/issue
2023-09-22T10:48:39
2025-04-01T06:36:47.339190
{ "authors": [ "BottlecapDave", "Jonaz80" ], "repo": "BottlecapDave/HomeAssistant-OctopusEnergy", "url": "https://github.com/BottlecapDave/HomeAssistant-OctopusEnergy/issues/416", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
182061862
TypeError: session.reply is not a function I was following the docs in order to setup the bot and suddenly I've encountered this error. $ bottr-cli start Mon, 10 Oct 2016 16:42:13 GMT body-parser deprecated undefined extended: provide extended option at node_modules\bottr\lib\bot.js:36:30 new websocket connection new websocket connection message_received event triggered Missing error handler on `socket`. TypeError: session.reply is not a function at Event.<anonymous> (C:\wamp\www\bottr\index.js:13:11) at Event.next (C:\wamp\www\bottr\node_modules\bottr\lib\event.js:16:14) at Event.triggerNext (C:\wamp\www\bottr\node_modules\bottr\lib\event.js:10:12) at Event.<anonymous> (C:\wamp\www\bottr\node_modules\bottr\lib\bot.js:22:5) at Event.next (C:\wamp\www\bottr\node_modules\bottr\lib\event.js:16:14) at EventEmitter.emit (C:\wamp\www\bottr\node_modules\bottr\lib\event-emitter.js:28:9) at Bot.trigger (C:\wamp\www\bottr\node_modules\bottr\lib\bot.js:49:26) at WebsocketClient.<anonymous> (C:\wamp\www\bottr\node_modules\bottr\lib\websocket-client.js:24:14) at emitOne (events.js:78:13) at Socket.emit (events.js:170:7) at Socket.onevent (C:\wamp\www\bottr\node_modules\socket.io\lib\socket.js:348:8) at Socket.onpacket (C:\wamp\www\bottr\node_modules\socket.io\lib\socket.js:308:12) at Client.ondecoded (C:\wamp\www\bottr\node_modules\socket.io\lib\client.js:194:14) at Decoder.Emitter.emit (C:\wamp\www\bottr\node_modules\component-emitter\index.js:134:20) at Decoder.add (C:\wamp\www\bottr\node_modules\socket.io-parser\index.js:247:12) at Client.ondata (C:\wamp\www\bottr\node_modules\socket.io\lib\client.js:176:18) Looks like the old example code needs to be updated to be send, session keeps track of the conversation now. @ummahusla I've pushed a new version of bottr-cli let me know if bottr-cli init generates something that works. @jcampbell05 Will do it in the evening 👍
gharchive/issue
2016-10-10T16:45:24
2025-04-01T06:36:47.341825
{ "authors": [ "jcampbell05", "ummahusla" ], "repo": "Bottr-js/Bottr", "url": "https://github.com/Bottr-js/Bottr/issues/16", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
211597987
Incorrect gateway used? Even if i am giving valid twilio credentials , the message "your token is 1234 " is diplaying. TWO_FACTOR_CALL_GATEWAY = 'two_factor.gateways.twilio.gateway.Twilio' TWO_FACTOR_SMS_GATEWAY = 'two_factor.gateways.twilio.gateway.Twilio' TWILIO_ACCOUNT_SID = 'AC86a1663acf************affba5d3' TWILIO_AUTH_TOKEN = '029be37d7**********1c0257c9b' TWILIO_CALLER_ID = '+919781539134' #verified caller ID It appears as if you're using the example project, as it would be the only explanation for that message appearing on your screen. Please follow the installation/configuration instructions, somewhere it tells you to set the correct TWO_FACTOR_CALL_GATEWAY and TWO_FACTOR_SMS_GATEWAY.
gharchive/issue
2017-03-03T05:19:14
2025-04-01T06:36:47.343940
{ "authors": [ "Bouke", "vikram-lapenatech" ], "repo": "Bouke/django-two-factor-auth", "url": "https://github.com/Bouke/django-two-factor-auth/issues/196", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1345682627
Chatbot ? Hello, how are you all 👍, has anyone used brain.js? I'm programming a chatbot but I don't much like the way I'm doing it, so I wanted to know if someone who has already done this before can give me a code example with brain.js to teach the bot to write and to respond to the user's text in a coherent way; after that it would only remain to classify some words in order to execute commands. I saw this example using recurrent neural networks: const trainingData = [ 'Jane saw Doug.', 'Spot saw himself.', 'Doug saw Jane.' ]; const lstm = new brain.recurrent.LSTM(); const result = lstm.train( trainingData, { iterations: 1000 } ); const run1 = lstm.run('Jane'); const run2 = lstm.run('Spot'); const run3 = lstm.run('Doug'); console.log('run 1: Jane' + run1); console.log('run 2: Spot' + run2); console.log('run 3: Doug' + run3); Supposedly this teaches the network to create sentences. I tried it and added my own training data, but I get pure nonsense as a result. I don't want to use external services, I want to create my own chatbot. Has anyone used brain.js and done this? But in a way where the text doesn't have to be classified? I want the AI to first learn to write Spanish by passing it a large file full of text; then, since the network already knows how to write, I want to process the user's input text and, as output, get only a suggestion of how the network should respond so that the conversation feels natural, and spare myself having to classify a pile of text by hand. Hello IvanJRCH, I'm writing the answer in English to keep linguistic cohesion in the repository: I understand that you want to train your AI, please see the training reference. You'll need to work out the training before the chatbot answers make sense at all, I suggest you to define the use-cases, make sure your intents are distinct and that each intent contains many utterances. Also note that your job isn't done after your chatbot has been deployed. Continuous improvement is important for a successful result and identifying situations where your chatbot needs more training will give you important insights about it. I don't spot any bug or problem related with Brain.js here so I'd recommend to close this issue. hello, thank you very much for your answer, I have created a chatbot, but the chatbot only responds with predefined responses that are already in training, what I would like is for the AI to learn to write, and to respond to the user with consistency only by passing examples of a normal conversation that I will take from a plain txt file, but that the AI does not respond with predefined text but creates its own dialog, then I plan to use another neural network to classify the intention of the text of the AI not of the user, in order to execute commands, I saw the example of using recursive.LSTM(); but notice that it only repeats what is already in the training, how can I make the AI create its own conversations but not with predefined texts, is using LSTM the right way? Could you give me an example? I don't understand how to put that in the code. Also, I don't understand the neural network well either. Thank you very much for answering. Hi IvanJRCH, I'm not an expert on ML, just reached this project as a way to learn more in my free time.
Also got this resource that may help (didn't read it yet) https://www.deeplearningbook.org/ I suggest you to interact with the community through other sites more prone to the conversation such stackoverflow, dev.to or any other forum that's specific to have conversations around the niche of ML. This way we don't convert this issue tracker into something else. Have a great day and hope you find the way to solve your issue :) Thank you very much, I'll take a look at the link.
gharchive/issue
2022-08-22T01:40:19
2025-04-01T06:36:47.389272
{ "authors": [ "IvanJRCH", "JoelBonetR" ], "repo": "BrainJS/brain.js", "url": "https://github.com/BrainJS/brain.js/issues/836", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1994959670
rename to SQ_ASSD instance_ASSD should be renamed to SQ_ASSD to be consistent with the other metrics. The same goes for the respective SD. Should we also introduce PQ_ASSD?
gharchive/issue
2023-11-15T15:04:07
2025-04-01T06:36:47.390889
{ "authors": [ "neuronflow" ], "repo": "BrainLesion/panoptica", "url": "https://github.com/BrainLesion/panoptica/issues/38", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2631563335
🛑 Drank jowile is down In 47f1eb5, Drank jowile (https://drank.jowile.be) was down: HTTP code: 0 Response time: 0 ms Resolved: Drank jowile is back up in 8072680 after 7 minutes.
gharchive/issue
2024-11-03T23:43:14
2025-04-01T06:36:47.393323
{ "authors": [ "BramB-1952444" ], "repo": "BramB-1952444/uptime", "url": "https://github.com/BramB-1952444/uptime/issues/1129", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
275863193
Ensuring backward compatibility in case using a deprecated BUO method for latest SDK BUO# addContentMetadata() is deprecated and new integration should use BUO#setContentMetadata(). This fix ensure backward compatibility in case the app is still using the deprecated methods @aaustin @EvangelosG @derrickstaten 👍
gharchive/pull-request
2017-11-21T21:19:33
2025-04-01T06:36:47.394636
{ "authors": [ "EvangelosG", "sojanpr" ], "repo": "BranchMetrics/android-branch-deep-linking", "url": "https://github.com/BranchMetrics/android-branch-deep-linking/pull/508", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2146073933
URL input: 1080p video much slower As the title says, when testing I found this problem. Looking forward to your reply I'm not sure this is enough information to go by, could you please provide: Description: Describe what the bug or issue is (e.g. crashes when setting X) and how it can be reproduced. Command: Place a full copy of the command line options you are using here, for example: scenedetect -i some_video.mp4 -s some_video.stats.csv -o outdir detect-content --threshold 28 list-scenes save-images Output: Copy the output of running the application here. Where possible, generate a debug log by adding -v debug -l BUG_REPORT.txt to the beginning of your command, and attach BUG_REPORT.txt to your issue. Environment: The operating system and how you installed PySceneDetect may be relevant to the issue. Please run scenedetect version --all and copy the output here, or provide other details on how PySceneDetect was installed. Media/Files: Attach or link to any files relevant to the issue, including videos (or YouTube links), scene files, stats files, and log output. This is an automatic vacation reply from a QQ mailbox. Hello, I am currently on vacation and cannot reply to your email in person. I will reply to you as soon as possible after the vacation ends. https://drive.google.com/file/d/1h5tztZkpV6ziJzgtVieJqcnhF4Bo82f_/view?usp=sharing =================This is the url =================This is a speed comparison. The URL input is very slow, but the same video is downloaded locally as input and the strips are split very quickly. from scenedetect import open_video, SceneManager, split_video_ffmpeg from scenedetect.detectors import ContentDetector from scenedetect.video_splitter import split_video_ffmpeg video_path = '1080pvdieo.mp4' threshold=27.0 Open our video, create a scene manager, and add a detector. video = open_video(video_path) scene_manager = SceneManager() scene_manager.add_detector( ContentDetector(threshold=threshold)) scene_manager.detect_scenes(video, show_progress=True) scene_list = scene_manager.get_scene_list() ================= this is code URL input is handled by OpenCV if you don't change anything, have you tried using a different backend? https://www.scenedetect.com/docs/0.6.2/api/backends.html Traceback (most recent call last): File "C:\Users\Administrator\Desktop\test.py", line 9, in video = open_video(video_path, backend='pyav') File "D:\ProgramData\miniconda3\envs\py39pt1121\lib\site-packages\scenedetect_init_.py", line 143, in open_video return backend_type(path, framerate, **kwargs) File "D:\ProgramData\miniconda3\envs\py39pt1121\lib\site-packages\scenedetect\backends\pyav.py", line 107, in init self._io = open(path_or_io, 'rb') OSError: [Errno 22] Invalid argument When I use pyav as backends, I get an error directly. cv2.VideoCapture When reading a URL, if the video resolution corresponding to the URL is very high and the data rate is very high, it will be very slow. Thank you for following up on this, I appreciate it. If you suspect this is due to OpenCV and isn't just an issue with using ffmpeg (or whichever backend OpenCV itself is using to process the stream), you might want to reach out in their repo. Best regards!
gharchive/issue
2024-02-21T08:08:58
2025-04-01T06:36:47.424950
{ "authors": [ "Breakthrough", "babyta" ], "repo": "Breakthrough/PySceneDetect", "url": "https://github.com/Breakthrough/PySceneDetect/issues/381", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
482998457
RexRay doesn't work on Docker latest (19) https://github.com/BretFisher/dogvscat/blob/master/stack-rexray.yml Using docker stack deploy -c stack-rexray.yml rexray gives back the following error: network "bridge" is declared as external, but it is not in the right scope: "local" instead of "swarm" Would love to see what those TODO notes resolve into as well as this has been very useful for my swarm. Did that work for you?
gharchive/issue
2019-08-20T17:52:11
2025-04-01T06:36:47.449431
{ "authors": [ "BretFisher", "syntaqx" ], "repo": "BretFisher/dogvscat", "url": "https://github.com/BretFisher/dogvscat/issues/24", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
204332218
US81659 - Update lang terms Replace English default terms in non English locales with the proper translated term Add NL locale Tested and looks fine: Test cases: Cycle through all lms languages in chrome and ensure no exceptions are thrown in the console Ensure new lang terms are reflected in the app by setting updates on, and setting courses into inactive, inactive started, inactive ended, starting later, already ended.
gharchive/pull-request
2017-01-31T15:32:47
2025-04-01T06:36:47.460808
{ "authors": [ "JoshuaKlassen", "ctwomey1" ], "repo": "Brightspace/d2l-my-courses-ui", "url": "https://github.com/Brightspace/d2l-my-courses-ui/pull/289", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2180144061
QE-147 Include error message with failed (flaky?) tests Would help understand test failure reason trends, and we can focus de-flaking efforts on areas of higher impact. EG: This run failed with Error: locator.fill: Target page, context or browser has been closed; which is clearly not something we can resolve directly, but if we notice an uptick post-Playwright update, we can complain to them about it OTOH, if we notice alot of tests failing to find locators or whatever, maybe we need better training on how to pick good ones Not sure if we can get it for flaky tests (ie: the first run of a retried set) too, but if we can that'd be even better Duplicate of https://github.com/Brightspace/test-reporting-node/issues/87. Gonna try and send error message, file, line and column values for each test when an error occurs. For retired tests I'm just gonna send the last error for now. If we find we need all error information or something else we can explore that separately. Moved to Jira
gharchive/issue
2024-03-11T20:17:22
2025-04-01T06:36:47.467164
{ "authors": [ "Dan-DeAraujo", "devpow112" ], "repo": "Brightspace/test-reporting-node", "url": "https://github.com/Brightspace/test-reporting-node/issues/149", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2262402896
Ability to hold down one button throughout runtime of NXBT macro It seems that the NXBT macro scripting "language" only supports subsequent inputs, and not continuous inputs that may continue to be sent simultaneously alongside other inputs. ^ also looking for this feature Hi, idk your exact usecase but checkout this: https://github.com/Brikwerk/nxbt/issues/151#issuecomment-2124226564 @BlameFelix Thanks! I'll look into it soon and report the results here after.
gharchive/issue
2024-04-25T00:36:38
2025-04-01T06:36:47.472788
{ "authors": [ "BlameFelix", "TaylorT52", "syndiate" ], "repo": "Brikwerk/nxbt", "url": "https://github.com/Brikwerk/nxbt/issues/152", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
175094007
Picking video or image externally crashes app After having added Video preview support, the observer does not correctly take into account videos when reordering the collection, or when a picture/video is taken or deleted outside of the app and the app is brought into the foreground again. Fixed in https://github.com/BruelAndKjaer/Chafu/commit/b291db7a3330f2afc3069428a49fb0a708f38df9 Using ObservableCollection instead and updating indexes using CollectionChangedEvent
gharchive/issue
2016-09-05T15:32:44
2025-04-01T06:36:47.481047
{ "authors": [ "Cheesebaron" ], "repo": "BruelAndKjaer/Chafu", "url": "https://github.com/BruelAndKjaer/Chafu/issues/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2173511026
res.getAckMessage() is not defined In the example the code uses getAckMessage() but that method is not defined for sendResponse. const messageRes = res.getAckMessage() I will review tonight and have a fix out maybe by the end of the week. This is an error just in my documentation. The "sender" handler: const listener = server.createInbound({port: 3000}, async (req, res) => { const messageReq = req.getMessage() expect(messageReq.get('MSH.12').toString()).toBe('2.7') await res.sendResponse('AA') //Here, we are sending back to the client the Ack or Failure. This is where the AA or AF message gets generated. }) and in the client: const outbound = client.createConnection({ port: 3000 }, async (res) => { const messageRes = res.getMessage() expect(messageRes.get('MSA.1').toString()).toBe('AA') // this is where we are confirming the message response sent by the server. dfd.resolve() }) So I can include: getAckMessage(): Message | undefined { return this._ack } ...but it might be undefined if called before sending the AA or AF. :tada: This issue has been resolved in version 2.1.0-beta.3 :tada: The release is available on: npm package (@develop dist-tag) GitHub release Your semantic-release bot :package::rocket: :tada: This issue has been resolved in version 2.2.0-beta.1 :tada: The release is available on: npm package (@develop dist-tag) GitHub release Your semantic-release bot :package::rocket:
gharchive/issue
2024-03-07T10:21:45
2025-04-01T06:36:47.511860
{ "authors": [ "Bugs5382", "krisc-informatica" ], "repo": "Bugs5382/node-hl7-server", "url": "https://github.com/Bugs5382/node-hl7-server/issues/61", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2532274870
[1|Topic Detail Page] Breadboard 🍰 Appetite 🍰 Property Value Points 3 points Difficulty rating ★★☆☆☆ Maximum Assignees 2 person 🤔 Problem Statement Make a breadboard for each screen in the fat-marker sketch. Every place and affordance should be accounted for, but don't focus on making it look pretty. This should be essentially a bare HTML file, with no JS, and only CSS if required for basic layouts. Implement fake navigation and interactions using normal <a> tags. This purely for getting an interactive "skeleton" up, and seeing in real life whether this design would work, or has any major problems. Please see this page for more info on what your breadboard should be like (and, if you want understand the fat-marker sketch and places affordances drawing better, please see this page). 🧪 Required Tests None (but the Code Wizard will review whether or not your code warrants some tests! Review the Code & Testing Guidelines for more info) ⚠️ Careful about: Don't make a separate desktop design. There should be one design, that works both in landscape and portrait orientations. 🤖 Technologies focused on in this feature React Tailwind Shadcn I can work on this one @puma @Sahib4 it's yours! You're now assigned. I can work on this as well @puma @khnatiuk I added in @namanparashar123 , but he is talking with @Sahib4 right now. @namanparashar123 @Sahib4 can you come see me? There are folks ready to work on Phase 2 but we're unsure whether you have completed Phase 1 yet. @ViniOkamoto please let a PuMA know if you want to be added to this issue. On it already! @khnatiuk Please add me to it @puma Yay!
gharchive/issue
2024-09-17T22:18:57
2025-04-01T06:36:47.522970
{ "authors": [ "Iwaslazkis", "NicoleOkamoto", "Sahib4", "ViniOkamoto", "hzz4343", "khnatiuk", "namanparashar123" ], "repo": "Builder-s-League/BuildersLeague-Edition1", "url": "https://github.com/Builder-s-League/BuildersLeague-Edition1/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2102567325
Feature/pnpm Description to fix changesets publishing Closing as there are way too many "@babel/" conflicts to sift through
gharchive/pull-request
2024-01-26T17:14:26
2025-04-01T06:36:47.523759
{ "authors": [ "samijaber" ], "repo": "BuilderIO/mitosis", "url": "https://github.com/BuilderIO/mitosis/pull/1345", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1609067660
[📖] Unclear instructions when it comes to testing/running your SSG'd site Suggestion node server/entry.ssr.js just returns instantly (verified I ran yarn build.server before), so it's not clear if this is old documentation for an older version I think you'd need something like npx http-server dist/ after running build, but that's npm specific. What's the cool kid way to run a simple http server locally these days?
gharchive/issue
2023-03-03T18:16:11
2025-04-01T06:36:47.524867
{ "authors": [ "nnelgxorz", "qq99" ], "repo": "BuilderIO/qwik", "url": "https://github.com/BuilderIO/qwik/issues/3250", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
349905924
Missing linux-build from latest release and hashes The latest release (0.8.11) does not contain a 'linux-unpacked.tar.gz' file or anything similiar. Since I'm currently trying to maintain the Arch AUR package it would be nice if we could make sure this file exists in coming updates in an expectable filename. And if it's not too much of a hassle, it might be nice if the build-process creates SHA-hashes for the generated distributables (since it's somewhat bank/privacy-related). Of course I could generate hashes myself, but 'original' ones are probably trustworthier. I'll check why the unpacked file is suddenly no longer uploaded since 0.8.11, it might have to do with the hotfix I ran a while. For the hashes you can always check the build for that specific release. At the bottom of every travis build a list of hashes are created. I'll reply here for the correct ones for the current build when I fix it 👍 Added the download file and the checksum for it is 8f36b98ea79b1323ffe3968c2105a099073700c536ed005868f6c702de5cb68c. I'll make a habit out of adding a link to the travis build as well so you can find the hashes more easily next time and to keep things a bit more transparent. Thanks for reporting this btw, I completely missed it 👍
gharchive/issue
2018-08-13T06:46:19
2025-04-01T06:36:47.540871
{ "authors": [ "Crecket", "L00Cyph3r" ], "repo": "BunqCommunity/bunqDesktop", "url": "https://github.com/BunqCommunity/bunqDesktop/issues/256", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
386436429
Refresh button still not working in 2.7.0 I have 2.7.0 installed via the marketplace, running the organizr docker container with :latest tag. Clicking the refresh button on a card opens opens the info instead of refreshing the cover: I am currently using "Organizr V2 On-top of Burry" from https://github.com/Archmonger/Blackberry-Flat, but the issue persists even with that turned off. I have also tried clearing cache in my browser and in CloudFlare. Fixed in 2.8.0 Will test when update is available through the marketplace. Confirmed that it now works :D thank you!
gharchive/issue
2018-12-01T08:15:15
2025-04-01T06:36:47.552419
{ "authors": [ "Burry", "JohanSF" ], "repo": "Burry/organizr-v2-plex-theme", "url": "https://github.com/Burry/organizr-v2-plex-theme/issues/13", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1071674055
POST /event/{eventID}/participate: Event participation registration https://busandevelopers.github.io/BGM-Event-Calendar-API-Documentation/#operation/addEventTicket New event participation request: enter the participant's name and contact information (phone number and email); additional comments can be entered optionally. If the name + email pair matches an existing one, it is treated as a duplicate participation request and a Bad Request is returned. [x] Database query [x] Web server logic [x] Test code Specify the phone number format (requests are sent without hyphens)
gharchive/issue
2021-12-06T03:05:05
2025-04-01T06:36:47.553691
{ "authors": [ "hyecheol123" ], "repo": "BusanDevelopers/BGM-Event-Calendar-API", "url": "https://github.com/BusanDevelopers/BGM-Event-Calendar-API/issues/23", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1298334102
Fresh build- front end errors After a clean install and trying to run this, the frontend locks up and I get the following errors: Warning: React does not recognize the `computedMatch` prop on a DOM element. If you intentionally want it to appear in the DOM as a custom attribute, spell it as lowercase `computedmatch` instead. If you accidentally passed it from a parent component, remove it from the DOM element. ReferenceError: Can't find variable: process TypeScript error in /app/src/index.tsx(8,4): 'Router' cannot be used as a JSX component. Its instance type 'BrowserRouter' is not a valid JSX element. The types returned by 'render()' are incompatible between these types. Type 'React.ReactNode' is not assignable to type 'import("/node_modules/@types/react-transition-group/node_modules/@types/react/index").ReactNode'. TS2786 6 | 7 | ReactDOM.render( > 8 | <Router> | ^ 9 | <App /> 10 | </Router>, 11 | document.getElementById('root') TypeScript error in /app/src/Routes.tsx(31,6): 'Switch' cannot be used as a JSX component. Its instance type 'Switch' is not a valid JSX element. The types returned by 'render()' are incompatible between these types. Type 'React.ReactNode' is not assignable to type 'import("/node_modules/@types/react-transition-group/node_modules/@types/react/index").ReactNode'. TS2786 TypeScript error in /app/src/views/SignUp.tsx(58,6): 'Redirect' cannot be used as a JSX component. Its instance type 'Redirect' is not a valid JSX element. TS2786 Any idea how to troubleshoot this? Which version of React and React Router are you using? React Router v6 made a lot of changes. Warning: React does not recognize the computedMatch prop on a DOM element. If you intentionally want it to appear in the DOM as a custom attribute, spell it as lowercase computedmatch instead. If you accidentally passed it from a parent component, remove it from the DOM element. ReferenceError: Can't find variable: process ``` TypeScript error in /app/src/index.tsx(8,4): 'Router' cannot be used as a JSX component. Its instance type 'BrowserRouter' is not a valid JSX element. The types returned by 'render()' are incompatible between these types. Type 'React.ReactNode' is not assignable to type 'import("/node_modules/@types/react-transition group/node_modules/@types/react/index").ReactNode'. TS2786 6 | 7 | ReactDOM.render( > 8 | <Router> | ^ 9 | <App /> 10 | </Router>, 11 | document.getElementById('root') ReactDOM.render has been changed to createRoot. 1 | Import { createRoot} from 'react-dom/client' 6 | const container = document.getElementById('root'); 7 | const root = createRoot(container!); 8 | root.render( 9 | <Router > 10 | <App /> 11 | <Router /> 12 | ); 'Switch' cannot be used as a JSX component. Its instance type 'Switch' is not a valid JSX element.``` In v6 'Switch' was removed entirely. 'Switch' is now 'Routes' The types returned by 'render()' are incompatible between these types. Type 'React.ReactNode' is not assignable to type 'import("/node_modules/@types/react-transition-group/node_modules/@types/react/index").ReactNode'. TS2786 There have also been changes made in the way that routes are rendered. 
This is what it looked like prior: <header className={classes.header}> <Route path="/login" component={Login} /> <Login /> <Route path="/signup" component={SignUp} /> <Route path="/logout" render={() => { logout(); history.push('/'); return null; }} /> <PrivateRoute path="/protected" component={Protected} /> <Route exact path="/" component={Home} /> </header> </div> </Routes> This requires some refactoring, so I'd recommend taking a look at what upgrades are needed for React 18 and react hooks 'Redirect' cannot be used as a JSX component. In v6 'Redirect' has been changed to 'Navigate'. Change 'useHistory' to 'useNavigate' and then 'Redirect' to 'Navigate' in the jsx component. I would appreciate it if someone could push a PR with the latest react and react-router. As it stands now, the frontend build fails, leading to the above error. They're using the version that's in the clean install, as described in the readme.md I see the same behavior following the instructions provided, and given I was installing this to start playing with and learning react, makes the whole thing pretty useless. Confirmed this is still a problem. I was hoping to use this as a basis for a pet project/learning. Unfortunately, standing up from scratch using README results in a broken state as mentioned by @Kfelts and @chhopsky
gharchive/issue
2022-07-08T01:05:15
2025-04-01T06:36:47.558368
{ "authors": [ "BrendanJM", "Kfelts", "akcode47", "chhopsky", "kevingigiano" ], "repo": "Buuntu/fastapi-react", "url": "https://github.com/Buuntu/fastapi-react/issues/192", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
925924323
OOP rewrite Hi, Thanks for the great work. I intend to use this package for my Cryptocurrency project and rope this package in as dependency for my ninja-dart project. Right now, the library is very golang-like. Would you mind if I rewrite this package to be more OOP and Darty and send you a pull request? Thanks again! Of course you can and it is welcomed. But I will rewrite this with my new packages ecdsa and elliptic. Maybe it would be better that starting your contribution after reviewing my new code?
gharchive/issue
2021-06-21T07:41:28
2025-04-01T06:36:47.577346
{ "authors": [ "C0MM4ND", "tejainece" ], "repo": "C0MM4ND/dart-secp256k1", "url": "https://github.com/C0MM4ND/dart-secp256k1/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2350530421
🛑 C21CoasttoCoast is down In 243180b, C21CoasttoCoast (https://www.c21coasttocoast.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: C21CoasttoCoast is back up in a9ac2d6 after 54 minutes.
gharchive/issue
2024-06-13T08:31:44
2025-04-01T06:36:47.578913
{ "authors": [ "C21coastsh" ], "repo": "C21coastsh/ws-ut", "url": "https://github.com/C21coastsh/ws-ut/issues/49", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2080070703
🛑 RadCord Bot is down In e36503d, RadCord Bot (https://radcord.xyz) was down: HTTP code: 0 Response time: 0 ms Resolved: RadCord Bot is back up in 06e57e7 after 9 minutes.
gharchive/issue
2024-01-13T04:39:53
2025-04-01T06:36:47.600152
{ "authors": [ "CASPERg267" ], "repo": "CASPERg267/Uptime-page", "url": "https://github.com/CASPERg267/Uptime-page/issues/2016", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
266611402
Video-style controls Add a UI Component to "Play" the network like a video. Also "Pause", "Speed up", "Slow Down" Added in 9acc6667bb0d0a68428fcffc6c440922a177c04e
gharchive/issue
2017-10-18T19:19:32
2025-04-01T06:36:47.703391
{ "authors": [ "AABoyles" ], "repo": "CDCgov/MicrobeTRACE", "url": "https://github.com/CDCgov/MicrobeTRACE/issues/90", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2112891415
DEV 6811 Fixes issues from testing: If you remove the Latitude/Longitude configuration after selecting it, you get an unrecoverable error Can we have the cities we support by default be placed, even if others are placed using lat/long? Example, "New York City" in the data with no lat/long If you switch to pin, then switch back, the dots disappear because the "geoCodeCircleSize" or some setting is now gone Need to expose the "Geocode Circle Size" setting when lat/long is used. Could that default to the size of the normal city circles and allow folks to adjust it whether lat/long is used or not? If you filter the map to only show some locations, the custom locations still appear @mpallansch - I tested with the world geo code example file. I'm still seeing the circle appear for NYC when I have the location set to United States. Same with having the location set to Alaska. @mpallansch - I tested with the world geo code example file. I'm still seeing the circle appear for NYC when I have the location set to United States. Same with having the location set to Alaska. Looks like that file has erroneous lat/long data for each point (US, Alaska, and NYC all have NYC lat/long data): { "Country": "New York City", "Cases": 300, "Category": "Has not historically reported monkeypox", "AsOf": "11 Jul 2022 5:00 PM EDT", "longitude": "-74.006", "latitude": "40.712" }, { "Country": "United States of America", "Cases": 10, "Category": "Has not historically reported monkeypox", "AsOf": "11 Jul 2022 5:00 PM EDT", "longitude": "-74.006", "latitude": "40.712" }, { "Country": "Alaska", "Cases": 500, "Category": "Has not historically reported monkeypox", "AsOf": "11 Jul 2022 5:00 PM EDT", "longitude": "-74.006", "latitude": "40.712" } I don't think this will be an issue for valid use cases.
gharchive/pull-request
2024-02-01T16:14:33
2025-04-01T06:36:47.709120
{ "authors": [ "adamdoe", "mpallansch" ], "repo": "CDCgov/cdc-open-viz", "url": "https://github.com/CDCgov/cdc-open-viz/pull/1062", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1980461808
Finalize messaging recommendations/application User story As a ReportStream user, I want to understand if ReportStream can help me so I can decide what to do. Background & context User tests revealed some need for further clarity in our messaging. Last sprint reviewed the current messaging in light of user test feedback and other current realities. This ticket builds on that to provide specific recommendations for alignment/approval. Open questions Working links Acceptance criteria [x] Recommendations documented and reviewed with necessary stakeholders [ ] Next steps planned (help create engr ticket) Full deck of updates here. Will create a separate doc specific to home page changes for the eng ticket.
gharchive/issue
2023-11-07T03:15:16
2025-04-01T06:36:47.718501
{ "authors": [ "audreykwr" ], "repo": "CDCgov/prime-reportstream", "url": "https://github.com/CDCgov/prime-reportstream/issues/12108", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
2294323015
AL Gap Analysis - Test Orders Story As AL HCO, to ensure we'll receive all needed results data to an EHR, we need to relate the ReportStream results information to what will be received by the EHR. Pre-conditions [ ] Assumptions of prior or future work that's out of scope for this story Acceptance Criteria [ ] Message profiles are now finalized [ ] Publish list of transformation requirements for Intermediary [ ] Get approvals [ ] Create explicit stories for Intermediary transformation requirements Tasks [x] Draft side-by-side comparison of sender & receiver profiles [ ] Finalize side-by-side comparison of sender & receiver profiles [ ] Identify/highlight differences (gaps) [ ] Confirm gap solutions (3 possible actions) [ ] 1. Sender will update configurations/message mappings to align with receiver profile [ ] 2. Receiver will update message processing to align with sender profile [ ] 3. Intermediary will transform/translate messages mid-stream to alleviate burden on sender & receiver [ ] Update message profiles (if sender or receiver are making changes to accommodate) Definition of Done [ ] Documentation tasks completed [ ] Documentation and diagrams created or updated [ ] Implementation guide (/ig folder) [ ] ADRs (/adr folder) [ ] Main README.md [ ] Other READMEs in the repo [ ] If applicable, update the ReportStream Setup section in README.md [ ] Threat model updated [ ] API documentation updated [ ] Code quality tasks completed [ ] Code refactored for clarity and no design/technical debt [ ] Adhere to separation of concerns; code is not tightly coupled, especially to 3rd party dependencies [ ] Code is reviewed or developed by pair; 1 approval is needed but consider requiring an outside-the-pair reviewer [ ] Code quality checks passed [ ] Security & Privacy tasks completed [ ] Security & privacy gates passed [ ] Testing tasks completed [ ] Load tests passed [ ] Unit test coverage of our code >= 90% [ ] Build & Deploy tasks completed [ ] Build process updated [ ] API(s) are versioned [ ] Feature toggles created and/or deleted. Document the feature toggle [ ] Source code is merged to the main branch Research Questions Optional: Any initial questions for research Decisions Optional: Any decisions we've made while working on this story Notes Profile stories: #992 #618 Also pending verification of order profile by Cerner
gharchive/issue
2024-05-14T03:50:01
2025-04-01T06:36:47.729932
{ "authors": [ "JohnNKing" ], "repo": "CDCgov/trusted-intermediary", "url": "https://github.com/CDCgov/trusted-intermediary/issues/1090", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
937813656
Invalid Base64 charater 0xa I am using netopeer2-cli and netopeer2-server. I am trying to merge tls_keystore.xml to ietf-keystore as written in README, to try to enable TLS. When I type the command as below, I see the sysrepocfg error about data parsing failed.. Could you help me to solve this error? ubuntu@:~/netopeer2/build$ sudo sysrepocfg -f xml -d startup --edit=../example_configuration/tls_keystore.xml -m ietf-keystore -v3 [INF]: Scheduled changes not applied because of other existing connections. [INF]: Connection 19 created. [INF]: Session 117 (user "root", CID 19) created. sysrepocfg error: libyang: Invalid Base64 character 0xa. sysrepocfg error: Data parsing failed [INF]: No datastore changes to apply. removed unnecessary empty spaces.
gharchive/issue
2021-07-06T11:49:28
2025-04-01T06:36:47.758917
{ "authors": [ "j1y3p4rk" ], "repo": "CESNET/netopeer2", "url": "https://github.com/CESNET/netopeer2/issues/947", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
565944205
Rewrote auth scripts. All scripts now time out if they run too long. Login functions are now moved to separate helper scripts. Added missing newlines. Merging after discussion with Dominik Bučík
gharchive/pull-request
2020-02-16T18:05:00
2025-04-01T06:36:47.763324
{ "authors": [ "pajavyskocil" ], "repo": "CESNET/proxyidp-nagios-scripts", "url": "https://github.com/CESNET/proxyidp-nagios-scripts/pull/8", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
294783323
delay csvdb load tasks by a random offset to avoid database lock issues This resolved my issues when processing a lot of FASTQ files in parallel. (There may be better ways to address the issue, though) Thanks for the input, Kevin. It looks like a good workaround. Please bear in mind that SQLite is not a proper Relational Database Management System, and actually its performance is dependent on the underlying filesystem. I tried to fix SQLite operational errors with the following changes but more thought might be required: Pipeline changes: https://github.com/CGATOxford/CGATPipelines/pull/380 scripts changes: https://github.com/CGATOxford/cgat/pull/377 Those changes were added to the master in the v0.3.2 release, could you please let me know what version are you running with the following commands: pip list | grep -i cgat Best regards, Sebastian Hi Sebastian, I think I should be up to date. I pulled from the master branch this morning. $ pip list | grep -i cgat DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning. CGAT (0.3.2, /gfs/devel/kralbrecht/cgat) CGATPipelines (0.3.2, /gfs/devel/kralbrecht/CGATPipelines) CGATReport (0.7.6.1) Let me know if there's any command I should run to update the CGAT installation, or something. For the record, I was systematically getting the 'database locked' error message for pipeline_readqc.py with 384 FASTQ files. I don't know exactly where this falls in the range of input files that you usually process in a single run. As I said, the random delay of 1-30 seconds worked in my case, but might need to be increased or even parameterised if the error pops out again for larger amounts of input files. Best Kevin Thanks, Kevin. We try to solve this by asking ruffus to limit the number of concurrent jobs to 1`: @jobs_limit(PARAMS.get("jobs_limit_db", 1), "db") However, it does not seem to work properly. We are about to perform a major refactoring on the code, and we'll try to solve this problem then. The idea you propose in this PR is a potential solution. I will leave this open for our reference and I will decide what to do during the code refactoring. Many thanks! Sebastian Hi @sebastian-luna-valero Yes I saw the jobs_limit instruction in the code, and it did puzzle me as to why it does not seem to work. However, I was short on time and instead of investigating the issue, implemented the quick fix offered in this PR. Obviously, the ideal fix here would be to somehow have this jobs_limit instruction work as expected. I 100% agree that this PR is not the ideal implementation, and I would not take it personally if it is closed without being merged. Happy to see it used as a reference. As a user, I'll say that I'm happy enough to have this random delay even increased to a minute or two if needed, which would still represent a fraction of the total pipeline run, in compensation for reducing the chance of database block virtually to nil. Thanks for your work on the refactoring, I look forward to the result!
gharchive/pull-request
2018-02-06T14:41:15
2025-04-01T06:36:47.967380
{ "authors": [ "kevinrue", "sebastian-luna-valero" ], "repo": "CGATOxford/CGATPipelines", "url": "https://github.com/CGATOxford/CGATPipelines/pull/396", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
913109516
Colors for Cancel and Ok Button and Select Chip when selected Hey, I was trying to implement this package into my app and I wanted to change the colors for the buttons when you open the Bottom Sheet and when you select the chips inside the Sheet. I tried looking online and within the actual code but couldn't find anything that worked. If theres a solution for this please let me know. Thank you. My code for the MultiSelectBottomShieldField looks like the following: Container( width: 335, decoration: BoxDecoration( color: Color(0xFFF0F0F0), border: Border.all( color: Color(0xFFE8E8E8), width: 2, ), borderRadius: BorderRadius.circular(8), ), child: Column( children: <Widget>[ MultiSelectBottomSheetField<Software>( initialChildSize: 0.4, decoration: BoxDecoration(), listType: MultiSelectListType.CHIP, initialValue: _selectedSoftware, searchable: true, items: _items, buttonText: Text("Select Pharmacy Software...", style: GoogleFonts.inter( color: Color(0xFFBDBDBD), fontSize: 16)), onConfirm: (values) { _selectedSoftware = values; }, chipDisplay: MultiSelectChipDisplay( items: _selectedSoftware .map((e) => MultiSelectItem(e, e.toString())) .toList(), chipColor: Color(0xFF5DB075), onTap: (value) { _selectedSoftware.remove(value); return _selectedSoftware; }, textStyle: TextStyle(color: Colors.white), ), ), ], ), ), Never mind! I missed the selectedColor and selectedItemsTextStyle fields.
gharchive/issue
2021-06-07T05:28:54
2025-04-01T06:36:47.972461
{ "authors": [ "sgarg15" ], "repo": "CHB61/multi_select_flutter", "url": "https://github.com/CHB61/multi_select_flutter/issues/45", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
1631825225
Added openCL support HIP supports openCL as backend hence enabled for hipblas as well. Closing it because new PR has included the change.
gharchive/pull-request
2023-03-20T10:34:49
2025-04-01T06:36:47.973389
{ "authors": [ "Sarbojit2019" ], "repo": "CHIP-SPV/H4I-MKLShim", "url": "https://github.com/CHIP-SPV/H4I-MKLShim/pull/4", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2528727292
improve mousemovement The mousemovement could use a multiplier parameter (more extreme movement) and also a vertical movement The server I play on requires both horizontal and vertical mouse movement (and it could be that a few pixels is not enough), which I already achieved with a macro but I dont want to keep my MC window active all the time Aight i added it use /antiafk mousemovement and /antiafk mousemovement to disable/enable it
gharchive/issue
2024-09-16T15:01:15
2025-04-01T06:36:47.990817
{ "authors": [ "CIOCOLATA47", "NthnH" ], "repo": "CIOCOLATA47/AntiAfk", "url": "https://github.com/CIOCOLATA47/AntiAfk/issues/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
213271739
fifth_robot_param tuning — Since processing speed may be a factor, I'll note the tuning and its environment here. Vostro3546, i3 3000 series, commit 26629e8772bf9f0de196dabfbe3181669a493c0d. It can now move stress-free, but the left/right rotation efficiency differs abnormally. It turns a lot, so I'm applying heavy correction via the params, but the values have reached an absurd level. It would be fine if the viscose value could be varied per side, but there is no such option, so nothing can be done. The odom is rotating a lot. The cause is obvious: odom is computed on the yp_spur side from the left/right rotation rates. The map is rotating along with it, though... I think this is no good. It may be better to redo the tuning. The abnormal left/right rotation efficiency may have been caused by a hardware failure. Closing for now.
gharchive/issue
2017-03-10T08:07:00
2025-04-01T06:36:47.992888
{ "authors": [ "yasu80" ], "repo": "CIR-KIT/fifth_robot_pkg", "url": "https://github.com/CIR-KIT/fifth_robot_pkg/issues/56", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
624349910
In a component wrapped with KeepAlive, react-router-dom cannot get params. In a component wrapped with KeepAlive, pathname can be obtained but params cannot. Details: the Route path is '/playlist/:id' and I visit '/playlist/12345'. react-router-dom provides the two hooks useLocation and useParams; useLocation returns {pathname:"/playlist/12345",...}, while useParams only returns an empty object. withRouter behaves the same way — props.match.params is also an empty object. Without wrapping in KeepAlive everything works fine. I figured it out: <Route path="/playlist/:id" render={props => ( <KeepAlive> <Playlist {...props} /> </KeepAlive> )} /> — in the Playlist component just read props.match.params.id directly; don't use withRouter or useParams. The question is how to make useParams work normally — is there a way to solve this? I found that when props are passed in via render, the inner component's props.match is also empty. Use the react-router-cache-route library, written by the same author — useParams works normally there. Check the relative position of AliveScope and Router; AliveScope needs to be inside the Router: <Router> <AliveScope> <Route .../> </AliveScope> </Router> It is indeed inside the Router. Does the problem go away when KeepAlive is not used? If you have a key code snippet please share it; a demo is even better. Please pull this — reproduction: https://codesandbox.io/s/keepalivebug-l5lx2 Steps: comment out and restore keepAlive and compare; when keepAlive is present, the component's props.match and useParams output are missing the page parameter. Remember to include the parameter when visiting, e.g. https://l5lx2.csb.app/#/pageA/123 or https://l5lx2.csb.app/#/123. It feels like the keepAlive component does not pass match correctly down to the child component. For now it looks like these things may need to be added; after 5.2.0 react-router-dom no longer exposes RouterContext, so I'll look into how to solve it. That works when react-activation/babel is not used, but with that plugin, for routes like /:id, after switching the id the id inside match is still the old one and does not change. I'm using react-router-cache-route and useParams also comes back empty there — when navigating away, the current component re-renders and useParams returns empty, causing an error. @xydegithub react-router-cache-route should be OK; you can open an issue over there — a couple of days ago I saw someone mention this too, but it turned out to be fine.
gharchive/issue
2020-05-25T14:57:17
2025-04-01T06:36:48.008574
{ "authors": [ "CJY0208", "ChihoSy", "calmchang", "chiihooy", "xydegithub" ], "repo": "CJY0208/react-activation", "url": "https://github.com/CJY0208/react-activation/issues/43", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
220195706
Add an option to generate graph URIs without hashes Both in the nano publication code, but also from the command line. Fixed in 6eae1be
gharchive/issue
2017-04-07T12:09:28
2025-04-01T06:36:48.011695
{ "authors": [ "RinkeHoekstra" ], "repo": "CLARIAH/wp4-converters", "url": "https://github.com/CLARIAH/wp4-converters/issues/20", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2705040787
issue-147-add-dashboard-for-showing-question-variable-topic-mismatches Add dashboard for displaying question/variable pair where their assigned topics do not match. Addresses issue https://github.com/CLOSER-Cohorts/dashboard/issues/161 @spuddybike just a reminder about this open PR, from before the EDDI conference.
gharchive/pull-request
2024-11-29T12:55:21
2025-04-01T06:36:48.024812
{ "authors": [ "ollylucl" ], "repo": "CLOSER-Cohorts/dashboard", "url": "https://github.com/CLOSER-Cohorts/dashboard/pull/166", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1270077313
🛑 Development is down In 6ca4d1d, Development (https://development.cmasnap.com/api/ping/) was down: HTTP code: 0 Response time: 0 ms Resolved: Development is back up in 9a5e9bf.
gharchive/issue
2022-06-13T23:43:33
2025-04-01T06:36:48.029294
{ "authors": [ "kevincolten" ], "repo": "CMAsnap/status", "url": "https://github.com/CMAsnap/status/issues/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
376957274
Add files via upload Adding modified UML diagram Approved :)
gharchive/pull-request
2018-11-02T20:29:22
2025-04-01T06:36:48.031593
{ "authors": [ "UALBERTA-rcvoon", "wcomhelp" ], "repo": "CMPUT301F18T28/Personal-Condition-Tracker", "url": "https://github.com/CMPUT301F18T28/Personal-Condition-Tracker/pull/36", "license": "BSD-3-Clause-Clear", "license_type": "permissive", "license_source": "github-api" }
1360582346
use ROS timer instead of std::thread and use multi thread spinner @tatsuya-ishihara Could you review the change? I didn't know the multi-thread spinning APIs and tried to use my own thread to keep the FPS up. However, it looks like a cause of a bug in a certain situation. So, now I reuse ROS timer and use multi-thread spinning both in node and nodelet implementation. I could not reproduce the bug situation, so I just tested if it was working in a test condition (a few people) without navigation. Thank you. It looks good to me. @tatsuya-ishihara, could you "review" as github function to change the status?
gharchive/pull-request
2022-09-02T20:22:35
2025-04-01T06:36:48.107792
{ "authors": [ "daisukes", "tatsuya-ishihara" ], "repo": "CMU-cabot/cabot", "url": "https://github.com/CMU-cabot/cabot/pull/63", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1657061895
ivi-controller: update data type of member in struct ivishell From weston version 8.0.0, data type of param in weston_config_section_get_bool() changed from int to bool. So, we need to update in wayland-ivi-extension source. (< version 8.0.0 ) (>= version 8.0.0) Tested-by: Au Doan Ngoc au.doanngoc@vn.bosch.com reviewed-by: Tran Ba Khang khang.tranba@vn.bosch.com Reviewed-by: Harsha M M harsha.manjulamallikarjun@in.bosch.com thanks for the hint we would consider this, in case you have already prepared something just push it ;-)
gharchive/pull-request
2023-04-06T09:47:48
2025-04-01T06:36:48.122073
{ "authors": [ "HarshaMM", "audoan99", "efriedrich", "khangtb1" ], "repo": "COVESA/wayland-ivi-extension", "url": "https://github.com/COVESA/wayland-ivi-extension/pull/155", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
249766487
Feature ParticleFilter based tracker New Feature Vscan based Rao-Blackwellized Particle Filter tracker adapted to work on Autoware Expected Result https://drive.google.com/open?id=0BzYuVrO9pnh6VUhQd0N2TjBsM3c Documentation Based on the work described in: http://ieeexplore.ieee.org/document/7759043/ Constraints Requires hdl-64 to work reliably Uses 2d scan to match between frames Can only track up to 5 objects in realtime, using a GPU Needs the correct pose, for that reason in some cases, the tracking will fail @amc-nu What's the status on this PR?
gharchive/issue
2017-08-11T22:57:27
2025-04-01T06:36:48.146305
{ "authors": [ "amc-nu", "gbiggs" ], "repo": "CPFL/Autoware", "url": "https://github.com/CPFL/Autoware/issues/776", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1983739383
850 avoid search by id before every server mode download download info is stored in instance of EODataAccessGateway with key composed of product type, provider and id if download is attempted for product where id is not in download info and search by id is available for provider and product type, it will be executed; if search by id is not possible an error will be raised to do a search updated based on input from @alambare-csgroup closed because change requests were implemented on new branch -> new PR: https://github.com/CS-SI/eodag/pull/1012
gharchive/pull-request
2023-11-08T14:28:08
2025-04-01T06:36:48.190814
{ "authors": [ "jlahovnik" ], "repo": "CS-SI/eodag", "url": "https://github.com/CS-SI/eodag/pull/916", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
361832003
As a medical clinic receptionist, I want to request for a specific doctor for consultation So that I can satisfy patient need. what's the difference between this and #36 ?
gharchive/issue
2018-09-19T16:36:41
2025-04-01T06:36:48.198281
{ "authors": [ "arsalanc-v2", "jjlee050" ], "repo": "CS2103-AY1819S1-W14-1/main", "url": "https://github.com/CS2103-AY1819S1-W14-1/main/issues/41", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
181328993
As a user, I can tag my tasks based on priority. so that I can prioritize my goals this feature is not implemented
gharchive/issue
2016-10-06T05:40:51
2025-04-01T06:36:48.206890
{ "authors": [ "sunset1215" ], "repo": "CS2103AUG2016-T13-C1/main", "url": "https://github.com/CS2103AUG2016-T13-C1/main/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
749382259
Implement gravity mode — Using the def gravity function, a floating block is placed as far down (y coordinate) as possible at its x coordinate, while a block that has settled somewhere keeps its own position; that is, the colors were painted into matrix[][]. And since the colors were painted into matrix, def erase_mino was used to remove the blocks that had previously come down. + In the middle of the gameplay code, draw_mino was acting unnecessarily and my code didn't run well, so I removed it. For now I confirmed there are no problems, but if any error does occur, please let me know. While testing, the leaderboard ended up with far too many 'aaa' entries, so I just cleaned it up. Since the penalty condition for gravity mode has not been added yet, even if a gravity-mode issue is added I will only link it and not close the issue. Since gravity mode is currently applied in place of single mode, I will merge this PR first and then submit an additional PR so that single mode and gravity mode can be switched via key input in the else loop of the start screen. Yes, I added the issue and milestone yesterday, and since the penalty condition hasn't been created yet I've kept the issue open. OK, then please apply the penalty condition together once it's ready!
gharchive/pull-request
2020-11-24T06:23:33
2025-04-01T06:36:48.232625
{ "authors": [ "RoJooHee", "c2lv" ], "repo": "CSID-DGU/2020-2-OSSP-CP-17woljang-9", "url": "https://github.com/CSID-DGU/2020-2-OSSP-CP-17woljang-9/pull/35", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1977452911
[BE] Create an app for adding the academic calendar + add academic-calendar dummy data. Work done: added academic-calendar data from a CSV file, created an app, and stored that data in Django. Confirmed!
gharchive/pull-request
2023-11-04T18:09:25
2025-04-01T06:36:48.234300
{ "authors": [ "HyeonJeKim", "hayeon2001" ], "repo": "CSID-DGU/2023-2-OSSProj-MSG_UP-8", "url": "https://github.com/CSID-DGU/2023-2-OSSProj-MSG_UP-8/pull/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
447135185
Update to JetBrains.ReSharper.SDK 2019.1.1. Fixes #86 v0.18.0
gharchive/pull-request
2019-05-22T13:24:48
2025-04-01T06:36:48.254283
{ "authors": [ "GreenKn1ght", "RicoSuter" ], "repo": "CSharpAnalyzers/ExceptionalReSharper", "url": "https://github.com/CSharpAnalyzers/ExceptionalReSharper/pull/87", "license": "MS-PL", "license_type": "permissive", "license_source": "github-api" }
137747628
"Attendees must read and follow our Code of Conduct" Broken link on meetup and ctfeds.org ... Fixed! Thank you for spotting it and letting us know. :)
gharchive/issue
2016-03-02T01:52:08
2025-04-01T06:36:48.257339
{ "authors": [ "SteveBarnett", "gregbenner" ], "repo": "CTFEDs/ctfeds.org", "url": "https://github.com/CTFEDs/ctfeds.org/issues/17", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1039078417
Full Financials PDF Get financials PDF from Patty Leslie. Latest update should be ready by tomorrow (Fri Oct 29) This looks good to me From: Alex Finnarn Sent: Friday, October 29, 2021 1:33:45 PM To: CUCentralAdvancement/essential-cu Cc: Yifei Wu Subject: Re: [CUCentralAdvancement/essential-cu] Full Financials PDF (Issue #459) PDF link added to https://essential-staging-cu.herokuapp.com/impact-reports/joy/financials
gharchive/issue
2021-10-29T01:23:25
2025-04-01T06:36:48.278241
{ "authors": [ "Feissit" ], "repo": "CUCentralAdvancement/essential-cu", "url": "https://github.com/CUCentralAdvancement/essential-cu/issues/459", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1487145926
Testing [copied from CUNY-CL/abstractness/issues/87] We should add integration tests (I hesitate to call these unit tests), simply limiting ourselves to the model sizes and data quantities we can run on CircleCI's free tier. We get 6,000 compute-minutes per month...all of this is pretty generous except that I am unclear whether we can use their GPU images or are stuck on CPU (ideally we'd parameterize tests on both). I think it ought to be possible to do actual training of the major models using, say, 1,000 examples. Unit tests could include g2p (for feature-less) and inflection (for feature-full) from SIGMORPHON. The current training and prediction functions are structured to read and write directly to the file system. They should be modularized to take ordinary arguments and return the results: for training, a function could simply return the best model (or its path) with metadata (wall clock time, training accuracy, development accuracy), and then the command-line enabled version of that loop could invoke this for prediction, a function could simply return the accuracy. These functions can then be called by the existing (null return type) training and prediction functions, the ones parameterized with click flags. This will also support two other projects (issues coming soon): benchmarking W&B-enabled hyperparameter sweeping This is a blocker for a post-beta release candidate. Yoyodyne test strategy.pdf The above describes my current thinking about the test strategy.
gharchive/issue
2022-12-09T17:37:13
2025-04-01T06:36:48.286727
{ "authors": [ "kylebgorman" ], "repo": "CUNY-CL/yoyodyne", "url": "https://github.com/CUNY-CL/yoyodyne/issues/5", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1903629581
🛑 India mirror is down In b85ec28, India mirror (https://mirror.albony.xyz/cachylinux/) was down: HTTP code: 0 Response time: 0 ms Resolved: India mirror is back up in 80448ba after 6 minutes.
gharchive/issue
2023-09-19T19:48:03
2025-04-01T06:36:48.313414
{ "authors": [ "vnepogodin" ], "repo": "CachyOS/statuspage", "url": "https://github.com/CachyOS/statuspage/issues/114", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2588663599
Add missing FK connection Closes #2591 did you close this on purpose? the missing FW connection changes are not merged yet. yes. i renamed the branch and it auto closed this PR. i remade antother
gharchive/pull-request
2024-10-15T12:40:47
2025-04-01T06:36:48.316368
{ "authors": [ "SolidProgramming", "tpurschke" ], "repo": "CactuseSecurity/firewall-orchestrator", "url": "https://github.com/CactuseSecurity/firewall-orchestrator/pull/2593", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2553875357
correction test — ok, I will fix it, reopening the issue
gharchive/issue
2024-09-27T23:21:28
2025-04-01T06:36:48.361497
{ "authors": [ "CaioAleixo" ], "repo": "CaioAleixo/DevOps2", "url": "https://github.com/CaioAleixo/DevOps2/issues/2", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
59580987
Added owner web_arg for queue API. Fixes #405 :+1: LGTM
gharchive/pull-request
2015-03-03T01:23:37
2025-04-01T06:36:48.366631
{ "authors": [ "Sumukh", "moowiz" ], "repo": "Cal-CS-61A-Staff/ok", "url": "https://github.com/Cal-CS-61A-Staff/ok/pull/407", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2382357650
Issue with using the FreeTAKServer Manager. Followed the youtube video and got it too install no problem but when I start the server I get two cmd screens like this, I have added screen shots too this. Thanks also tried this of no and still same issues Hi there, Looks like some dependencies are conflicting. You can check them here. for verification. The manager was made for the older version 1 of FTS and there is a Winforms and WPF version both should work the same but sometimes the different API calls can cause a strange error/bug however in this case it's more likely a Python thing since all the software is using python for the FTS stuff the C# part is just managing the interface an minor things like logging etc. Are you using the version in the releases tab? and did you install via the MSI installer? Also I tested the setup using Python 3.11.3 and Python 3.8.10 only, so others may have made changes to modules. Let me know. If you're still having issues you can try setup a VM in windows to install the latest version 2 stuff via Linux there's been many improvements to the Linux install. I'm not sure when I'll take a look at updating this software again been very busy at work recently
gharchive/issue
2024-06-30T17:46:57
2025-04-01T06:36:48.383170
{ "authors": [ "Cale-Torino", "jusromaine" ], "repo": "Cale-Torino/FreeTAKServer_Manager", "url": "https://github.com/Cale-Torino/FreeTAKServer_Manager/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1027361265
removed base package.json files #67 Discarded package.json and package.lock.json files Hi, @upkarlidder I've added the PR.
gharchive/pull-request
2021-10-15T11:44:47
2025-04-01T06:36:48.391650
{ "authors": [ "gibrankasif" ], "repo": "Call-for-Code-for-Racial-Justice/Incident-Accuracy-Reporting-System", "url": "https://github.com/Call-for-Code-for-Racial-Justice/Incident-Accuracy-Reporting-System/pull/71", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2271205119
[WIP] add new operator LU factorization Thanks for your contribution and we appreciate it a lot. :rocket::rocket: 1. Motivation add floating point operator LU factorization 2. Modification add implementation of floating point LU factorization 3. Test Report 3.1 Modification Details 3.1.1 Accuracy Acceptance Standard For static threshold standard details, see: MLU-OPS™ Accuracy Acceptance Standard. static threshold diff1 [ ] float32 mlu diff1 <= 1e-5 [*] float32 mlu diff1 <= 3e-3 [ ] float16 mlu diff1 <= 3e-3 diff2 [ ] float32 mlu diff2 <= 1e-5 [* ] float32 mlu diff2 <= 3e-3 [ ] float16 mlu diff2 <= 3e-3 diff3 [ ] mlu diff3 == 0 [ ] mlu diff3_1 == 0 [ ] mlu diff3_2 == 0 dynamic threshold [ ] diff1: mlu diff1 <= max(baseline diff1 * 10, static threshold) [ ] diff2: mlu diff2 <= max(baseline diff2 * 10, static threshold) [ ] diff3: mlu diff3 <= max(baseline diff3 * 10, static threshold) float32, threshold = 1e-5 float16, threshold = 1e-3 3.1.2 Operator Scheme checklist Supported hardware [* ] MLU370 [ ] MLU590 Job types [ ] BLOCK [ ] UNION1 [ ] UNION2 [ ] UNION4 [* ] The operator will dynamically select the most suitable task type, for example, UNION8 3.2 Accuracy Test 3.2.1 Accuracy Test If you have checked the following items, please tick the relevant box. [ ] Data type test (e.g. float32/int8) [ ] Multi-dimensional tensor test [ ] Layout test [ ] Different size/integer remainder end segment/alignment misalignment test [ ] Zero dimensional tensor test/zero element test [ ] stability test [ ] Multiple platform test [ ] Gen_case module test, see: Gencase-User-Guide-zh [ ] Nan/INF tests [ ] Bug fix tests [ ] For memory leak check details, see: GTest-User-Guide-zh [ ] For code coverage check details, see: GTest-User-Guide-zh [ ] For I/O calculation efficiency check details, see: MLU-OPS™-Performance-Acceptance-Standard 3.3 Performance Test Platform:MLU370 ----------- case0 ----------- case0 [Op name ]: sgetrf [Shape ]: input.shape=[256,256], output.shape=[256,256] [Data type] ]: float32 [MLU Hardware Time ]: 6460 (us) [MLU Interface Time ]: 15336.7 (us) [MLU IO Efficiency ]: 0.00026419 [MLU Compute Efficiency ]: 9.90712e-06 [MLU Workspace Size ]: -1 (Bytes) [MLU Kernel Name(s) ]: {} [MLU TheoryOps ]: 65536 (Ops) [MLU TheoryIOs ]: 524288 (Bytes) [MLU ComputeForce ]: 1.024e+12 (op/s) [MLU IoBandWidth ]: 307.2 (GB/s) [GPU Hardware Time ]: -1 (us) [GPU IO Efficiency ]: -1 [GPU Compute Efficiency ]: -1 [GPU Workspace Size ]: -1 (Bytes) [Diffs]: [output] DIFF1: 1.798500e-04 DIFF2: 7.016698e-04 [^ OK ] ../../test/mlu_op_gtest/pb_gtest/src/zoo/sgetrf/test_case/case0.prototxt [ OK ] sgetrf/TestSuite.mluOp/0 (36 ms) [----------] 1 test from sgetrf/TestSuite (36 ms total) [----------] Global test environment tear-down [ SUMMARY ] Total 1 cases of 1 op(s). ALL PASSED. [==========] 1 test case from 1 test suite ran. (3727 ms total) [ PASSED ] 1 test case. 3.4 Summary Analysis Please give a brief overview here, if you want to note and summarize the content. 当前文件有冲突,请rebase master 的同时,解除文件冲突。 Conflicting files docs/user_guide/9_operators/index.rst mlu_op.h 当前文件有冲突,请rebase master 的同时,解除文件冲突。 Conflicting files docs/user_guide/9_operators/index.rst mlu_op.h Feature: Refactor roi align rotated forward 参考这个格式修改下commit信息
gharchive/pull-request
2024-04-30T11:28:40
2025-04-01T06:36:48.436418
{ "authors": [ "ArtIntAI", "Chuancysun", "duzekunKTH" ], "repo": "Cambricon/mlu-ops", "url": "https://github.com/Cambricon/mlu-ops/pull/1019", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
55714280
Cannot use Hash as a resource I realise this is an unusual way to use CanCan, but I have noticed that the following fails unpredictably: can :some_action, Hash do |params| # actual logic omitted puts params true end If I try can? :some_action, {a: 1}, it will return false and not run the block above at all. If I try can? :some_action, {a: {a: 1}}, it will behave correctly, printing {a: {a: 1}}. This appears to be due to rule.rb's relevant method, which contains the following code: 24: def relevant?(action, subject) 25: subject = subject.values.first if subject.class == Hash 26: @match_all || (matches_action?(action) && matches_subject?(subject)) 27: end I'm curious what that is meant to do? Any thoughts? Is the use case so mad that it just should never be done? The use case is basically where the subject is just one of several arbitrary parameters. For example, rather than "view a client", "view a client's financial projections" or "view a client's contact details". Any tips on how to achieve logic that goes beyond action and subject would be helpful also. I see that this behaviour doesn't occur if the subject is a subclass of Hash, such as ActionController::Parameters. @GeorgeDewar I'll look into this when I can, but I agree this case is not conventional. Ideally, you should never be passing in a Hash here. An object can have a hash for its internal data structure, but I would recommend actually modeling your application with classes representative of the data they contain and operate on. I haven't dug into that yet, but I'm betting strongly that this won't be changed. Parameters doesn't get caught by this because it checks class, not superclass, nor does it use is_a?. When you pass a hash, I believe it is expecting a list of conditions.
gharchive/issue
2015-01-28T04:51:09
2025-04-01T06:36:48.440864
{ "authors": [ "GeorgeDewar", "Senjai" ], "repo": "CanCanCommunity/cancancan", "url": "https://github.com/CanCanCommunity/cancancan/issues/174", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
827696440
[BUG] -Getting File Not Found Exception when creating new File with cropped uri.getPath() Lib Version 2.2.1 Describe the bug I am able to crop the image and put in ImageView successfully. But when I am using the same URI path to create a file it's throwing FileNotFoundException. The URI path is starting like content://..something/package name/ myfiles/... something like this. When I debugged I am able to see some data like NO CACHE inside the URI value. I am pasting my code below: Calling Method to crop: public void onSelectImageClick() { CropImage .activity(null) .setOutputCompressFormat(Bitmap.CompressFormat.JPEG) .setGuidelines(CropImageView.Guidelines.ON) .setFixAspectRatio(true) .start(this); } Getting cropped result in onActivityResuly() @Override protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == CropImage.CROP_IMAGE_ACTIVITY_REQUEST_CODE) { CropImage.ActivityResult result = CropImage.getActivityResult(data); if (resultCode == RESULT_OK) { Glide.with(this) .load(result.getUri()) .into(actorPic); isImageViewAdded = true; //handleCropResult(CropImage.getActivityResult(data)) imagePath = result.getUri(); //imagePath = CropImage.getPickImageResultUri(this, data); //insertSingleItem(result.getUri()); } else if (resultCode == CropImage.CROP_IMAGE_ACTIVITY_RESULT_ERROR_CODE) { Toast.makeText(this, "Cropping failed: " + result.getError(), Toast.LENGTH_LONG).show(); } } if(requestCode == CropImage.PICK_IMAGE_CHOOSER_REQUEST_CODE){ Uri imageUri = CropImage.getPickImageResultUri(AddActorsActivity.this, data); imagePath = imageUri; } } But the imageview is able to find the path and successfully update the cropped image inside it through Glide. Expected behavior I think the cropped Image is not caching and so it's happening like this. Hi, In my case I solved the problem following this: https://developer.android.com/training/data-storage/shared/documents-files?hl=es-419 Hope it helps you too! Regards, I am creating the file like new File(uri.getPath()) . The same code was working before. @TomasMVazquez Can you please share the piece of code which you have used to create the file from the uri. I am creating the file like new File(uri.getPath()) . The same code was working before. @TomasMVazquez Can you please share the piece of code which you have used to create the file from the uri. @RanjitPati maybe this is happening because of Android OS permission changes. Now using scope storage we don't get a file path anymore. Maybe you can change the URI string "content" to "file" But is not the real fix, the library will return the URI for the image using the scope storage like google force us now. If you plan to create a file you need to get write storage permission and get the path where you put the image. Make sense? com.theartofdev.edmodo:android-image-cropper:2.8.0 It works here pretty fine I am creating the file like new File(uri.getPath()) . The same code was working before. @TomasMVazquez Can you please share the piece of code which you have used to create the file from the uri. What I needed was to get Base64 from Uri/Path, originaly I was getting the path but with the change I'm using directly the Uri: Original Code: fun imageFileToBase64(imageFile: File): String { return FileInputStream(imageFile).use { inputStream -> ByteArrayOutputStream().use { outputStream -> Base64OutputStream(outputStream, Base64.DEFAULT).use { base64FilterStream -> inputStream.copyTo(base64FilterStream) base64FilterStream.flush() outputStream.toString() } } } } New Code: fun imageUriToBase64(context: Context, uri: Uri): String? { val contentResolver = context.contentResolver return contentResolver.openInputStream(uri)?.use { inputStream -> ByteArrayOutputStream().use { outputStream -> Base64OutputStream(outputStream, Base64.DEFAULT).use { base64FilterStream -> inputStream.copyTo(base64FilterStream) base64FilterStream.flush() outputStream.toString() } } } } com.theartofdev.edmodo:android-image-cropper:2.8.0 It works here pretty fine, you can check this library for you reference. This was the main reason I fork from the old library @AbdulmalekAlshugaa , this was the way it work before OS 11. Now with OS 11, this behaviour changed so the library was updated, not we need to update the usage of it too. Please let's put all discussion about this on the same place: https://github.com/CanHub/Android-Image-Cropper/discussions/87 Sadly this is an Android OS change, but if anyone know a better fix for the latest OS, using scope storage and keeping it file drop a PR cause will help everyone =) I will close so we focus the discussion on the same place https://github.com/CanHub/Android-Image-Cropper/discussions/87 Please try the latest 3.0.0 release and let me know
gharchive/issue
2021-03-10T12:56:44
2025-04-01T06:36:48.452729
{ "authors": [ "AbdulmalekAlshugaa", "Canato", "RanjitPati", "TomasMVazquez" ], "repo": "CanHub/Android-Image-Cropper", "url": "https://github.com/CanHub/Android-Image-Cropper/issues/84", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2306826181
Profile-guided optimization Add a pgo mode to the compiler, which compiles with logging for runtime types (or more in the future) which is dumped to file. Then, add an argument to regular compile which takes in such file as input and uses it for optimization. Done in 97bb4f33be95a23eaebd017cea80a90a1d49abef!
gharchive/issue
2024-05-20T21:28:17
2025-04-01T06:36:48.454225
{ "authors": [ "CanadaHonk" ], "repo": "CanadaHonk/porffor", "url": "https://github.com/CanadaHonk/porffor/issues/30", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1793305290
Chore: bump to 0.6.0 Description Bump to 0.6.0 Codecov Report Patch coverage: 88.57% and project coverage change: -0.05 :warning: Comparison is base (d121a7a) 90.52% compared to head (4e99b6a) 90.47%. :exclamation: Your organization is not using the GitHub App Integration. As a result you may experience degraded service beginning May 15th. Please install the Github App Integration for your organization. Read more. Additional details and impacted files @@ Coverage Diff @@ ## develop #225 +/- ## =========================================== - Coverage 90.52% 90.47% -0.05% =========================================== Files 331 335 +4 Lines 5477 5596 +119 Branches 732 742 +10 =========================================== + Hits 4958 5063 +105 - Misses 374 384 +10 - Partials 145 149 +4 Flag Coverage Δ build 90.55% <ø> (ø) catalog-server 100.00% <ø> (ø) cli 75.85% <ø> (ø) extension-authenticator-canner 78.37% <100.00%> (-2.11%) :arrow_down: extension-dbt 97.43% <ø> (ø) extension-driver-canner 84.65% <ø> (ø) extension-driver-clickhouse 88.09% <88.09%> (?) extension-driver-pg 96.11% <ø> (ø) extension-driver-snowflake 96.26% <ø> (ø) extension-store-canner 98.30% <ø> (ø) integration-testing 90.27% <ø> (ø) serve 87.17% <92.30%> (+0.05%) :arrow_up: Flags with carried forward coverage won't be shown. Click here to find out more. Impacted Files Coverage Δ ...-driver-clickhouse/src/lib/clickHouseDataSource.ts 81.66% <81.66%> (ø) ...c/lib/middleware/auth/authCredentialsMiddleware.ts 90.00% <85.71%> (-1.67%) :arrow_down: .../extension-driver-clickhouse/src/lib/typeMapper.ts 90.24% <90.24%> (ø) ...-authenticator-canner/src/lib/authenticator/pat.ts 75.75% <100.00%> (-2.63%) :arrow_down: packages/extension-driver-clickhouse/src/index.ts 100.00% <100.00%> (ø) .../extension-driver-clickhouse/src/lib/sqlBuilder.ts 100.00% <100.00%> (ø) ...kages/serve/src/lib/auth/httpBasicAuthenticator.ts 90.90% <100.00%> (+3.67%) :arrow_up: ...es/serve/src/lib/auth/passwordFileAuthenticator.ts 80.48% <100.00%> (-1.34%) :arrow_down: ...ges/serve/src/lib/auth/simpleTokenAuthenticator.ts 100.00% <100.00%> (ø) :umbrella: View full report in Codecov by Sentry. :loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
gharchive/pull-request
2023-07-07T10:49:18
2025-04-01T06:36:48.484802
{ "authors": [ "codecov-commenter", "onlyjackfrost" ], "repo": "Canner/vulcan-sql", "url": "https://github.com/Canner/vulcan-sql/pull/225", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1376898585
Adding 0's after the "fractionDigits" limit doesn't trigger formatting $1.2211111111 > $1.22 ❤️ $1.2200000000 > $1.2200000000 💔 Hi, there! I would like to take up this issue. Can it be assigned to me? @gokullan sounds good, submit a PR when you are ready. Thanks!
gharchive/issue
2022-09-17T23:52:20
2025-04-01T06:36:48.500954
{ "authors": [ "fmaclen", "gokullan" ], "repo": "Canutin/svelte-currency-input", "url": "https://github.com/Canutin/svelte-currency-input/issues/30", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1014063974
Lazy Loading Is your feature request related to a problem? Please describe. To load the welcome page faster Describe the solution you'd like 💡 Use lazy loading module to detach the welcome part so the initial page can load faster Can i get this task? Hello @infinityover, Sure!
gharchive/issue
2021-10-02T14:18:12
2025-04-01T06:36:48.502913
{ "authors": [ "goliakshay357", "infinityover" ], "repo": "Canvasbird/canvasboard", "url": "https://github.com/Canvasbird/canvasboard/issues/420", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
191455591
While deploying the docker-compose there are errors on the primus module as strict & const seem not compatible... I just launched the docker compose to run a cluster, and launching the eth-netstats is broken. JMarc /eth-netstats/node_modules/primus/index.js:177 const sandbox = Object.keys(global).reduce((acc, key) => { ^^^^^ SyntaxError: Use of const in strict mode. at exports.runInThisContext (vm.js:73:16) at Module._compile (module.js:443:25) at Object.Module._extensions..js (module.js:478:10) at Module.load (module.js:355:32) at Function.Module._load (module.js:310:12) at Module.require (module.js:365:17) at require (module.js:384:17) at Object. (/eth-netstats/app.js:44:14) at Module._compile (module.js:460:26) at Object.Module._extensions..js (module.js:478:10) npm ERR! Linux 3.13.0-52-generic npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "start" npm ERR! node v0.12.17 npm ERR! npm v2.15.1 npm ERR! code ELIFECYCLE npm ERR! eth-netstats@0.0.9 start: node ./bin/www npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the eth-netstats@0.0.9 start script 'node ./bin/www'. npm ERR! This is most likely a problem with the eth-netstats package, npm ERR! not with npm itself. npm ERR! Tell the author that this fails on your system: npm ERR! node ./bin/www npm ERR! You can g I had a similar issue, and fixed that by upgrading the node version for the eth-netstats container; updating the first line of the eth-netstats/Dockerfile from FROM node:0.12 to FROM node:7 did it for me. yes thx its fixed in my case as well, we should update the file and release... Closing via https://github.com/Capgemini-AIE/ethereum-docker/pull/18
gharchive/issue
2016-11-24T08:06:18
2025-04-01T06:36:48.509966
{ "authors": [ "jmlambert78", "meken", "tayzlor" ], "repo": "Capgemini-AIE/ethereum-docker", "url": "https://github.com/Capgemini-AIE/ethereum-docker/issues/17", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
269286104
Could you provide a full-featured build? A way to include this library with support for all playback formats — some use cases want to support every kind of video, covering as many formats as possible. The package size can be ignored. — You need to compile this yourself; the build instructions are on the home page.
gharchive/issue
2017-10-28T05:55:18
2025-04-01T06:36:48.529444
{ "authors": [ "CarGuo", "SaudM" ], "repo": "CarGuo/GSYVideoPlayer", "url": "https://github.com/CarGuo/GSYVideoPlayer/issues/535", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2574053877
🛑 PlanetHoster is down In 055d076, PlanetHoster (https://www.planethoster.com/fr) was down: HTTP code: 0 Response time: 0 ms Resolved: PlanetHoster is back up in 05baa6f after 5 minutes.
gharchive/issue
2024-10-08T19:39:30
2025-04-01T06:36:48.532811
{ "authors": [ "matteoferrux" ], "repo": "Carbubu/upptime", "url": "https://github.com/Carbubu/upptime/issues/285", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
292183097
URL calculations when laravel is run under a sub-directory are broken Assuming you are running laravel under the /project/ subdirectory for a domain. you usually use a custom url-rewrite to ensure that http://example.com/project/DURC/author/ is rewritten to http://example.com/project/index.php/DURC/Author/ But in that case the routes need to be build in terms of /DURC/author but the links in all of the templates need to be for /project/DURC/author/ perhaps this is an indication that the whole url calculation method is simplistic and needs to be improved. No one hosts Laravel this way. It is a bother to code this and it is not worth it.
gharchive/issue
2018-01-28T06:46:41
2025-04-01T06:36:48.540746
{ "authors": [ "ftrotter" ], "repo": "CareSet/DURC", "url": "https://github.com/CareSet/DURC/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2076961257
🛑 jereconstructionroofing is down In ced4963, jereconstructionroofing (jereconstructionroofing.com) was down: HTTP code: 429 Response time: 332 ms Resolved: jereconstructionroofing is back up in 4405177 after 1 hour, 56 minutes.
gharchive/issue
2024-01-11T15:27:28
2025-04-01T06:36:48.543505
{ "authors": [ "Careas" ], "repo": "Careas/vps-monitor", "url": "https://github.com/Careas/vps-monitor/issues/320", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
129001652
The issue has been resolved Feel free to just close the issue next time instead of renaming the title :)
gharchive/issue
2016-01-27T01:52:25
2025-04-01T06:36:48.544224
{ "authors": [ "benwilson512", "icn" ], "repo": "CargoSense/ex_aws", "url": "https://github.com/CargoSense/ex_aws/issues/106", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1515898638
consistency of scanlation team when reading Describe your suggested feature when reading, if you start with, say asura scan but there's also entries for the same chapter from, say, flame scan, if i start reading from asura scan, it sometimes switch to flamescan in the next chapter, even if asura scan is available, it's not a problem if it's on the same language, but it's infuriating when it switch from English to French when going to the next chapter it should also be noted that when a chapter gets marked as read when i finish it, it doesn't mark the same chapter from another team as read too Other details No response Acknowledgements [X] I have searched the existing issues and this is a new ticket, NOT a duplicate or related to another open issue. [X] I have written a short but informative title. [X] I have updated the app to the newest version Latest. [X] I have checked through the app settings for my feature. [X] I will fill out all of the requested information in this form. Are you talking about when skip duplicates is enabled? i do have skip duplicate enabled the thing I'm talking about is consistency across chapters when skipping duplicates, and the fact that duplicate aren't marked as read either Well duplicates marked as read is not what the option provides. As for skip duplicates it doesn't take into account the scanlator cause it's a pain to try to get something like that working. Just filter out the scanlator you want then make the others as read if it's that big of issue. Then unfilter the scanlator This should be slightly more consistent now
gharchive/issue
2023-01-01T23:37:22
2025-04-01T06:36:48.554984
{ "authors": [ "CarlosEsco", "yapudjus" ], "repo": "CarlosEsco/Neko", "url": "https://github.com/CarlosEsco/Neko/issues/1288", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
228114265
Don't mangle key typos. https://github.com/Carthage/Carthage/issues/1926 I think for the sake of better error messages Commandant should not try and convert flags to a set https://github.com/Carthage/Commandant/blob/903eaec12f3a68782496065dcc68393b8ab1b4bd/Sources/Commandant/ArgumentParser.swift#L75 It will mangle a --key typo and produce confusing errors. If flags are passed as used, e.g.: $ carthage update -platform Unrecognized arguments: -armpfolt Should become $ carthage update -platform Unrecognized arguments: -platform That's only done if there's a single -, which denotes a single-letter flag/argument. That's how basically all *nix tools work. Do all *nix tools randomly mangle the input in the error message? This issue isn't about the fact that there is an error message. This issue is about the formatting and content of the error message. It looks like most tools bail on the first unknown option: $ python -asenxtaboe Unknown option: -a I'm not sure if that's better as it gives you less information.
gharchive/issue
2017-05-11T20:39:34
2025-04-01T06:36:48.624395
{ "authors": [ "aaroncrespo", "mdiep" ], "repo": "Carthage/Commandant", "url": "https://github.com/Carthage/Commandant/issues/104", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
145121399
Fix overviews integration for named layers Fix #400 @dgaubert @rtorre please take a look :+1:
gharchive/pull-request
2016-04-01T08:38:08
2025-04-01T06:36:48.625543
{ "authors": [ "dgaubert", "jgoizueta" ], "repo": "CartoDB/Windshaft-cartodb", "url": "https://github.com/CartoDB/Windshaft-cartodb/pull/407", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
269991292
Formula dataview https://github.com/CartoDB/cartodb.js/issues/1851 @alonsogarciapablo I don't feel like releasing only the bboxFilter changes. Let's do it after the integration phase of all dataview changes.
gharchive/pull-request
2017-10-31T14:58:45
2025-04-01T06:36:48.627903
{ "authors": [ "ivanmalagon" ], "repo": "CartoDB/cartodb.js", "url": "https://github.com/CartoDB/cartodb.js/pull/1858", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
95301239
Broken date format in CSV causes whole file to fail Just had a CSV fail with error 9999 due to a column like this: date 2000-01-01 2000-1-NA IMHO it should never fail to import just because of some data values which don't fit the type that CartoDB has assigned. @stevage what do you propose for the failing cells? Set them to null? Either that or convert the whole column back to string. (I don't know anything about how this is implemented.) If we set the column back to string, we will just have the problem when converting later. So better to do at import time IMO If by "later" you mean, when the user explicitly converts the column type to date, that's better in two ways: You know that they really wanted a date. (Currently you're not that sure - you see one value that looks like a date, and another one that definitely isn't a date.) You're in a better position to give specific error feedback. (As opposed to just aborting with a generic error message.) Deployed a change that fixes this
gharchive/issue
2015-07-15T22:09:24
2025-04-01T06:36:48.630772
{ "authors": [ "ethervoid", "saleiva", "stevage" ], "repo": "CartoDB/cartodb", "url": "https://github.com/CartoDB/cartodb/issues/4483", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
173740309
Import timeouts are happening from the import_cleanup process at Table model This part of the code is using connections to the database "as superuser". Perhaps we should switch it by the new methods (direct connections or transactions with timeout) we've been using lately. https://github.com/CartoDB/cartodb/blob/6f060448fb68288d7f9695906df4b556c213b6fc/app/models/table.rb#L302 cc @oriolbx Started to check this. The import_cleanup process is actually flagged as potential code to be removed because the CartoDBfication already manages similar issues at _CDB_Has_Usable_Primary_ID. I'm gonna study it to see if it can be completely dropped, or otherwise treat better the timeout. There's a difference between this code and CartoDBfy: the ruby part gets rid of other known columns, generated usually by Ogr2ogr which are used as PK, but CartoDBfication doesn't. Removing the code without changing the PK name in ogr2ogr would provoke that exporting and importing a table (with cartodb_id) would add an extra gid column on each import, because this extra column wouldn't be dropped. Already solved in the mentioned PR, thus closing this.
gharchive/issue
2016-08-29T10:04:42
2025-04-01T06:36:48.633886
{ "authors": [ "iriberri", "rafatower" ], "repo": "CartoDB/cartodb", "url": "https://github.com/CartoDB/cartodb/issues/9588", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
717536018
Queue snapshots This is a working version that allows dumping snapshots of the queue of a node by running $ kill -USR1 $NODE_PID where $NODE_PID should actually the PID of the node that should dump its queue. It will create a queue_dump.json file in the working directoy of the node. Example, dumping a node with a single event in it: $ jq < queue_dump.json { "NetworkIncoming": [], "Network": [], "Regular": [ "AddressGossiper" ], "Api": [] } (the jq is not necessary here, but allows filtering. I use it because it makes the output pretty :)) Currently only the name of the event is dumped, as all the inner fields are skipped. If more output it desired, Serialize should be derived on inner events, possibly with skip_serialize on problematic fields. Some jq examples Counts of all event types: jq 'map_values( map(keys[0] | {"type": ., weight: 1})| group_by(.type) | map ([.[0].type,(.|length)]) | map({(.[0]): .[1]}) )' queue_dump.json Count each queue jq 'map_values(map(keys[0]))' queue_dump.json @marc-casperlabs How about we get this to compile and then squash all commits into one? If people are expected to keep it on the side (without merging to master) and use when necessary, it will be much easier to have it as a single commit that can be cherry-picked easily. Thinking about it, I think it's not harmful to merge this into master - the cost of having a few extra Serialize / Deserialize derives floating around. Actual runtime cost is is a single, never-taken branch on an atomic bool. I'd prefer to feature-gate the functionality (really just the signal-handling code so that we don't query a bool on every event, not suggesting we feature-gate the bulk of the changes in this PR). But that's not a blocker - leave as is if you prefer. I think that's absolutely unnecessary. We are talking about what I believe amounts to one or two CPU instructions, essentially never branching! That's not even remotely close to the threshold I would consider required to make feature gating worthwhile. I can remove a log message somewhere to compensate :) @marc-casperlabs How about we get this to compile and then squash all commits into one? If people are expected to keep it on the side (without merging to master) and use when necessary, it will be much easier to have it as a single commit that can be cherry-picked easily. This will no longer work, as we have more and more events and such implement Serialize to be dumpable. I'd argue that it is more trouble than it is worth at this point. @marc-casperlabs How about we get this to compile and then squash all commits into one? If people are expected to keep it on the side (without merging to master) and use when necessary, it will be much easier to have it as a single commit that can be cherry-picked easily. This will no longer work, as we have more and more events and such implement Serialize to be dumpable. I'd argue that it is more trouble than it is worth at this point. Sure, that comment is over month old. bors r+
gharchive/pull-request
2020-10-08T17:36:13
2025-04-01T06:36:48.687415
{ "authors": [ "goral09", "marc-casperlabs" ], "repo": "CasperLabs/casper-node", "url": "https://github.com/CasperLabs/casper-node/pull/343", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1787804871
Add token role info in each notebook for clarity Pending a better autogen token, make it clear in each colab that the Token should be a manually-generated "DB Admin" token. in prod now
gharchive/issue
2023-07-04T12:07:30
2025-04-01T06:36:48.688830
{ "authors": [ "hemidactylus" ], "repo": "CassioML/cassio-website", "url": "https://github.com/CassioML/cassio-website/issues/54", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1223052440
Town Hall Slides - Monday, 09th May, 2022 Townhall slides - Monday, 09th May, 2022 Previous Town Hall - https://github.com//Catalyst-Circle/Catalyst-Circle-Coordination/issues/108 CC Admin meeting - [ ] Prepare Town Hall slides for Wednesday Previous in series: #108 Next in series: #121
gharchive/issue
2022-05-02T15:54:29
2025-04-01T06:36:48.702405
{ "authors": [ "Andre-Diamond" ], "repo": "Catalyst-Circle/Catalyst-Circle-Admin-Coordination", "url": "https://github.com/Catalyst-Circle/Catalyst-Circle-Admin-Coordination/issues/120", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
236986599
Export Format Yolo, the bbox txt is all Infinity Infinity Infinity Infinity Export Format Yolo, the bbox txt is all Infinity Infinity Infinity Infinity I will take a look; can you send me an example output? @aribornstein Now I am anxious to switch to YOLO, please help me solve this, thank you. It looks like there is a scaling error in the image support version of the code. I will hopefully have some time to resolve it in the upcoming week. In the meantime it might be worthwhile to look at the CNTK FastRCNN option. https://github.com/Microsoft/CNTK/wiki/Object-Detection-using-Fast-R-CNN I've identified what's causing the problem and am working to resolve it. It's caused by the fact that while video frames all have the same dimensions, images don't. I will update you once this is fixed. Should be resolved now, try the new version :)
gharchive/issue
2017-06-19T18:53:35
2025-04-01T06:36:48.705815
{ "authors": [ "aribornstein", "whaozl" ], "repo": "CatalystCode/VOTT", "url": "https://github.com/CatalystCode/VOTT/issues/125", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
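As a rough illustration of the scaling issue the maintainer describes above (VOTT itself is JavaScript; the function below and its names are hypothetical, not project code): YOLO labels are normalized by the dimensions of the specific image, so reusing a shared frame size, or an unset size, for differently sized images is one plausible way to get the reported output, since in JavaScript dividing by a missing or zero dimension produces Infinity.

# Hypothetical sketch of per-image YOLO export, not actual VOTT code.
def to_yolo_line(class_id, box, image_w, image_h):
    # box is (x1, y1, x2, y2) in pixels of this particular image.
    if image_w <= 0 or image_h <= 0:
        raise ValueError("image dimensions must be known before exporting")
    x1, y1, x2, y2 = box
    x_center = (x1 + x2) / 2.0 / image_w   # normalized to [0, 1]
    y_center = (y1 + y2) / 2.0 / image_h
    width = (x2 - x1) / image_w
    height = (y2 - y1) / image_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

print(to_yolo_line(0, (10, 20, 110, 220), image_w=640, image_h=480))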
1922095591
Create bidirectional "merged vertex" and "merged edge" coders In some applications, it is necessary to de-type property graph vertices and edges for the sake of interoperability with type-unaware components, merging them all into a single undifferentiated set of vertices and/or edges -- every vertex or edge with the same type as every other vertex or edge. However, if we simply add a "label" property for each element, and associate labels with the original types of the elements, we can reconstitute them when mapping data in the other direction -- from the merged view back to the typed view. These coders will be needed first in Java, so will be prototyped in that language. Done and tested.
gharchive/issue
2023-10-02T15:00:12
2025-04-01T06:36:49.058556
{ "authors": [ "joshsh" ], "repo": "CategoricalData/hydra", "url": "https://github.com/CategoricalData/hydra/issues/106", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
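A tiny sketch of the merge/unmerge idea from the record above. Hydra prototyped the real coders in Java, so the Python below, and names like merge_vertex and unmerge_vertex, are illustrative assumptions rather than Hydra's API; only the "label" property comes from the issue itself.

# Illustration only: stash the original type as a "label" property on merge,
# and recover it when mapping back from the merged view to the typed view.
def merge_vertex(vertex_props, type_name):
    merged = dict(vertex_props)
    merged["label"] = type_name
    return merged

def unmerge_vertex(merged_props):
    props = dict(merged_props)
    type_name = props.pop("label")
    return type_name, props

type_name, props = unmerge_vertex(merge_vertex({"name": "Ada"}, "Person"))
assert type_name == "Person" and props == {"name": "Ada"}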
2082923970
Thanks for the integration, boss Thanks for the integration, boss You're welcome; I happened to be using it myself anyway. The need seemed fairly common, so I shared it.
gharchive/issue
2024-01-16T02:41:59
2025-04-01T06:36:49.059817
{ "authors": [ "Caterpolaris", "netesheng" ], "repo": "Caterpolaris/aria2list", "url": "https://github.com/Caterpolaris/aria2list/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2601919457
[BUG] Google icon is not vertically centered on the login page Describe the bug The Google icon is not vertically centered To Reproduce Steps to reproduce the behavior: Go to 'Login page' Expected behavior The Google icon should be vertically centered Screenshots @jinxsaber hi, are you working on this? @ali-ahnaf Can I work on this issue? Can you assign it to me? @ali-ahnaf Can I work on this issue? Can you assign it to me? Sure, go ahead. Let me know if you need help installing the development environment.
gharchive/issue
2024-10-21T09:27:14
2025-04-01T06:36:49.152571
{ "authors": [ "Propo41", "ali-ahnaf", "talhawebguru" ], "repo": "Cefalo/quick-meet", "url": "https://github.com/Cefalo/quick-meet/issues/75", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2599913608
🛑 Nextcloud is down In cc707f6, Nextcloud (https://nextcloud.hiripple.com) was down: HTTP code: 530 Response time: 124 ms Resolved: Nextcloud is back up in fe7dcac after 52 minutes.
gharchive/issue
2024-10-20T03:35:35
2025-04-01T06:36:49.159895
{ "authors": [ "CelestialRipple" ], "repo": "CelestialRipple/ripplelog", "url": "https://github.com/CelestialRipple/ripplelog/issues/4012", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2687470132
🛑 Alist is down In 70d56fc, Alist (https://alist.hiripple.com) was down: HTTP code: 530 Response time: 56 ms Resolved: Alist is back up in 34dcaa0 after 1 hour, 30 minutes.
gharchive/issue
2024-11-24T09:40:14
2025-04-01T06:36:49.162484
{ "authors": [ "CelestialRipple" ], "repo": "CelestialRipple/ripplelog", "url": "https://github.com/CelestialRipple/ripplelog/issues/5060", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
696169942
Solve lease renewal issue in Kube-Scheduler/KCM/WCM What would you like to be added: In performance tests, we saw master components start to have issues when they cannot renew their leases. Why is this needed: Resolved by fixing a bug where clients shared the same rate limiter
gharchive/issue
2020-09-08T20:52:45
2025-04-01T06:36:49.190613
{ "authors": [ "Sindica" ], "repo": "CentaurusInfra/arktos", "url": "https://github.com/CentaurusInfra/arktos/issues/693", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }