id (string, lengths 4 to 10) | text (string, lengths 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---|
620884715 | Faction command does not implement PluginIdentifiableCommand
This prevents Poggit from detecting the command, and hence the plugin cannot be searched.
Shouldn't this be done by Commando, though?
Fun fact: Commando doesn't even take a Plugin instance, so they can't fix it without breaking BC.
https://github.com/CortexPE/Commando/blob/master/src/CortexPE/Commando/BaseCommand.php#L74-L78
I created https://github.com/CortexPE/Commando/issues/26 on Commando though.
| gharchive/issue | 2020-05-19T11:03:36 | 2025-04-01T06:36:52.662295 | {
"authors": [
"JackMD",
"SOF3"
],
"repo": "DaPigGuy/PiggyFactions",
"url": "https://github.com/DaPigGuy/PiggyFactions/issues/49",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
282026231 | Bug in the position property of the bm-marker component
While using the component, I found that bm-marker's position is offset when bound to data, but works correctly when the latitude/longitude are hard-coded.
Auto closed by issues bot. Please create your issue by issue generator.
| gharchive/issue | 2017-12-14T09:01:26 | 2025-04-01T06:36:52.682438 | {
"authors": [
"Dafrok",
"javatong"
],
"repo": "Dafrok/vue-baidu-map",
"url": "https://github.com/Dafrok/vue-baidu-map/issues/259",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
966417508 | Error while importing SecHub Scans into IntelliJ
When I try to import a SecHubReport into the IntelliJ plugin, I get the following error:
importError.txt
Setup:
OS: Windows
IntelliJ Version: 2021.2 Ultimate Edition,
SecHub Plugin Version: 0.2.1
java version "15.0.1" 2020-10-20
Java(TM) SE Runtime Environment (build 15.0.1+9-18)
Java HotSpot(TM) 64-Bit Server VM (build 15.0.1+9-18, mixed mode, sharing)
Here is the stacktrace directly inside a comment, just for convenience (it's the content of importError.txt):
java.lang.NoClassDefFoundError: com/daimler/sechub/client/java/SecHubReportException
at com.daimler.sechub.action.SechubImportAction.actionPerformed(SechubImportAction.java:43)
at com.intellij.openapi.actionSystem.ex.ActionUtil.lambda$performActionDumbAwareWithCallbacks$4(ActionUtil.java:240)
at com.intellij.openapi.actionSystem.ex.ActionUtil.performDumbAwareWithCallbacks(ActionUtil.java:261)
at com.intellij.openapi.actionSystem.ex.ActionUtil.performActionDumbAwareWithCallbacks(ActionUtil.java:240)
at com.intellij.openapi.actionSystem.impl.ActionMenuItem$ActionTransmitter.lambda$actionPerformed$0(ActionMenuItem.java:272)
at com.intellij.openapi.wm.impl.FocusManagerImpl.runOnOwnContext(FocusManagerImpl.java:236)
at com.intellij.openapi.wm.impl.IdeFocusManagerImpl.runOnOwnContext(IdeFocusManagerImpl.java:67)
at com.intellij.openapi.actionSystem.impl.ActionMenuItem$ActionTransmitter.actionPerformed(ActionMenuItem.java:264)
at java.desktop/javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1967)
at com.intellij.openapi.actionSystem.impl.ActionMenuItem.lambda$fireActionPerformed$0(ActionMenuItem.java:98)
at com.intellij.openapi.application.TransactionGuardImpl.performUserActivity(TransactionGuardImpl.java:94)
at com.intellij.openapi.actionSystem.impl.ActionMenuItem.fireActionPerformed(ActionMenuItem.java:98)
at com.intellij.ui.plaf.beg.BegMenuItemUI.doClick(BegMenuItemUI.java:515)
at com.intellij.ui.plaf.beg.BegMenuItemUI$MyMouseInputHandler.mouseReleased(BegMenuItemUI.java:545)
at java.desktop/java.awt.Component.processMouseEvent(Component.java:6652)
at java.desktop/javax.swing.JComponent.processMouseEvent(JComponent.java:3345)
at java.desktop/java.awt.Component.processEvent(Component.java:6417)
at java.desktop/java.awt.Container.processEvent(Container.java:2263)
at java.desktop/java.awt.Component.dispatchEventImpl(Component.java:5027)
at java.desktop/java.awt.Container.dispatchEventImpl(Container.java:2321)
at java.desktop/java.awt.Component.dispatchEvent(Component.java:4859)
at java.desktop/java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4918)
at java.desktop/java.awt.LightweightDispatcher.processMouseEvent(Container.java:4547)
at java.desktop/java.awt.LightweightDispatcher.dispatchEvent(Container.java:4488)
at java.desktop/java.awt.Container.dispatchEventImpl(Container.java:2307)
at java.desktop/java.awt.Window.dispatchEventImpl(Window.java:2784)
at java.desktop/java.awt.Component.dispatchEvent(Component.java:4859)
at java.desktop/java.awt.EventQueue.dispatchEventImpl(EventQueue.java:778)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:727)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:721)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:85)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:95)
at java.desktop/java.awt.EventQueue$5.run(EventQueue.java:751)
at java.desktop/java.awt.EventQueue$5.run(EventQueue.java:749)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:85)
at java.desktop/java.awt.EventQueue.dispatchEvent(EventQueue.java:748)
at com.intellij.ide.IdeEventQueue.defaultDispatchEvent(IdeEventQueue.java:886)
at com.intellij.ide.IdeEventQueue.dispatchMouseEvent(IdeEventQueue.java:815)
at com.intellij.ide.IdeEventQueue._dispatchEvent(IdeEventQueue.java:752)
at com.intellij.ide.IdeEventQueue.lambda$dispatchEvent$7(IdeEventQueue.java:442)
at com.intellij.openapi.progress.impl.CoreProgressManager.computePrioritized(CoreProgressManager.java:825)
at com.intellij.ide.IdeEventQueue.lambda$dispatchEvent$8(IdeEventQueue.java:441)
at com.intellij.openapi.application.impl.ApplicationImpl.runIntendedWriteActionOnCurrentThread(ApplicationImpl.java:794)
at com.intellij.ide.IdeEventQueue.dispatchEvent(IdeEventQueue.java:493)
at java.desktop/java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:203)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:124)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:113)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:109)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:90)
Caused by: java.lang.ClassNotFoundException: com.daimler.sechub.client.java.SecHubReportException PluginClassLoader(plugin=PluginDescriptor(name=SecHub, id=com.daimler.sechub.sechub-plugin-intellij, descriptorPath=plugin.xml, path=~\AppData\Roaming\JetBrains\IntelliJIdea2021.2\plugins\sechub-plugin-intellij-0.2.1.jar, version=0.2.1, package=null), packagePrefix=null, instanceId=234, state=active)
at com.intellij.ide.plugins.cl.PluginClassLoader.loadClass(PluginClassLoader.java:254)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
... 52 more
@Alexandra-Koenig : I think this happens because of the new signing process of the JetBrains platform. The new signing will be handled by #25. I added both issues to Milestone 0.3.0.
Please reopen this issue if the new release (0.3.0) does not fix the problem.
Plugin v0.3.0 is now available on IntelliJ Marketplace (alpha channel). This should fix the problem.
Tested with 2021.2 Ultimate Edition, and it now works. This will also be mentioned in the release https://github.com/Daimler/sechub-plugin-intellij/releases/tag/v0.3.0
| gharchive/issue | 2021-08-11T10:24:03 | 2025-04-01T06:36:52.702480 | {
"authors": [
"Alexandra-Koenig",
"de-jcup"
],
"repo": "Daimler/sechub-plugin-intellij",
"url": "https://github.com/Daimler/sechub-plugin-intellij/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
99619755 | Add HackLang icon
Just like the title says
Thanks. Added with 4ce2190.
Great :clap:
| gharchive/pull-request | 2015-08-07T10:06:17 | 2025-04-01T06:36:52.710925 | {
"authors": [
"astephenb",
"steelbrain"
],
"repo": "DanBrooker/file-icons",
"url": "https://github.com/DanBrooker/file-icons/pull/170",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2538837655 | chore(main): release 0.7.1
:robot: I have created a release beep boop
0.7.1 (2024-09-20)
Bug Fixes
README (#33) (5e564f1)
separately export db connection classes (#31) (75e2e57)
This PR was generated with Release Please. See documentation.
:robot: Created releases:
v0.7.1
:sunflower:
| gharchive/pull-request | 2024-09-20T13:30:14 | 2025-04-01T06:36:52.715052 | {
"authors": [
"DanForys"
],
"repo": "DanForys/ts-query-model",
"url": "https://github.com/DanForys/ts-query-model/pull/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
150789525 | User Guide
Finish off and put into version control.
Wiki pages have their own repo - need to explore this further
| gharchive/issue | 2016-04-25T08:41:37 | 2025-04-01T06:36:52.715874 | {
"authors": [
"DanGrew"
],
"repo": "DanGrew/JenkinsTestTracker",
"url": "https://github.com/DanGrew/JenkinsTestTracker/issues/108",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
535786312 | Bug on send_dm
Hello,
On send_dm, I have this error:
Traceback (most recent call last):
File "/home/devsupportbot/bot/lib/python3.6/site-packages/apscheduler/executors/base.py", line 125, in run_job
retval = job.func(*job.args, **job.kwargs)
File "/home/devsupportbot/bot/mybot/plugins/meteo.py", line 38, in humeurJour
Réagis en cliquant sur une icône''')
File "/home/devsupportbot/bot/lib/python3.6/site-packages/machine/plugins/base.py", line 191, in send_dm
self._client.send_dm(user, text)
File "/home/devsupportbot/bot/lib/python3.6/site-packages/machine/slack.py", line 82, in send_dm
dm_channel = self.open_im(u.id)
AttributeError: 'NoneType' object has no attribute 'id'
Here is my code:
def humeurJour(self):
    """humeurJour : demande l'humeur du jour sur le canal meteo"""
    for u in DbSession.query(users.utilisateur, slackusers.name, slackusers.user_id).join(slackusers, services).filter(
            services.nom == "Support").order_by(users.slackid):
        print(u.utilisateur, u.name, u.user_id)
        msg = self.send_dm(u.name, 'Salut , ' + u.name + ''' ,comment vas tu aujourdhui? \n Réagis en cliquant sur une icône''')
        channel = msg['channel']
        reaction_channel.append(channel)
        ts = msg['ts']
        for i in DbSession.query(emojis).order_by(emojis.score):
            print(i.emoji)
            self.react(channel, ts, i.emoji)
Auto-solved: the issue came from my test Slack instance not having all the users of prod.
I'll close this.
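For anyone hitting the same AttributeError, here is a minimal defensive sketch (the helper name is assumed for illustration; it is not part of slack-machine) that skips users the workspace cannot resolve before calling send_dm:

def safe_send_dm(plugin, slack_name, message):
    # The traceback shows send_dm crashing when the Slack user lookup
    # returns None (the user exists in the DB but not in this workspace).
    user = plugin.find_user_by_name(slack_name)  # assumed lookup; may return None
    if user is None:
        print(f"Skipping {slack_name}: not found in this Slack workspace")
        return None
    return plugin.send_dm(slack_name, message)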
| gharchive/issue | 2019-12-10T15:00:36 | 2025-04-01T06:36:52.736826 | {
"authors": [
"dorel14"
],
"repo": "DandyDev/slack-machine",
"url": "https://github.com/DandyDev/slack-machine/issues/276",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
373504155 | Allow us to give vector drawable
It would be very helpful to us if we could use our own drawables to animate.
This is related to this issue where shapes and drawables are being discussed: https://github.com/DanielMartinus/Konfetti/issues/11
| gharchive/issue | 2018-10-24T14:11:10 | 2025-04-01T06:36:52.750582 | {
"authors": [
"DanielMartinus",
"spiderShaki"
],
"repo": "DanielMartinus/Konfetti",
"url": "https://github.com/DanielMartinus/Konfetti/issues/68",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
738870595 | RU Domains UnicodeDecodeError: 'utf-8' codec can't decode
Hi, I am getting an error while collecting information for RU domains.
I have Windows 10 RU; checking domains in other zones works without errors.
Domain zone RU - UnicodeDecodeError: 'utf-8' codec can't decode byte 0xdd in position 124: invalid continuation byte
Tried with a random .ru, and it works:
>>> domain = whois.query('football.ru')
>>> print(domain.__dict__)
{'name': 'football.ru', 'registrar': 'REGTIME-RU', 'registrant_country': '', 'creation_date': datetime.datetime(2002, 12, 15, 21, 0), 'expiration_date': datetime.datetime(2021, 12, 16, 21, 0), 'last_updated': None, 'status': 'REGISTERED, DELEGATED, VERIFIED', 'statuses': ['REGISTERED, DELEGATED, VERIFIED'], 'dnssec': False, 'name_servers': {'ns-cloud-d3.googledomains.com', 'ns-cloud-d1.googledomains.com', 'ns-cloud-d4.googledomains.com', 'ns-cloud-d2.googledomains.com'}}
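As a possible workaround, here is a minimal sketch (not part of python-whois) that decodes a raw whois response with encoding fallbacks; treating the 0xdd byte as a legacy Cyrillic encoding such as cp1251 or koi8-r is an assumption about this registrar:

def decode_whois(raw: bytes) -> str:
    # Try UTF-8 first, then common Cyrillic encodings (assumed candidates).
    for encoding in ("utf-8", "cp1251", "koi8-r"):
        try:
            return raw.decode(encoding)
        except UnicodeDecodeError:
            continue
    return raw.decode("utf-8", errors="replace")  # last resort: never raise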
| gharchive/issue | 2020-11-09T09:49:19 | 2025-04-01T06:36:52.768039 | {
"authors": [
"DannyCork",
"sanitarn"
],
"repo": "DannyCork/python-whois",
"url": "https://github.com/DannyCork/python-whois/issues/132",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1378708402 | Translate install steps
Added English versions of:
the install steps
license activation
the video channel
Besides, it is suggested to cover your phone number and email address with mosaics in the pics you uploaded.
| gharchive/pull-request | 2022-09-20T01:48:17 | 2025-04-01T06:36:52.814859 | {
"authors": [
"Michelle951",
"windsonsea"
],
"repo": "DaoCloud/DaoCloud-docs",
"url": "https://github.com/DaoCloud/DaoCloud-docs/pull/143",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2759180807 | docker.io/python:3.9-slim
Image domain (image registry)
[ ] Please make sure the domain is included
Image tag
[ ] Please make sure the image tag is included
Image exists
[ ] Please make sure this image actually exists
1
| gharchive/issue | 2024-12-26T02:06:36 | 2025-04-01T06:36:52.816795 | {
"authors": [
"PinocchioLY"
],
"repo": "DaoCloud/public-image-mirror",
"url": "https://github.com/DaoCloud/public-image-mirror/issues/37112",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2017226683 | Failed to open the library
I'm getting the following error when trying to print Libgit2.version
Failed to open the library. Make sure that libgit2 library is bundled with the application.
Another exception was thrown: Invalid argument(s): Failed to load dynamic library
'/home/johannes/td/build/linux/x64/debug/bundle/lib/libgit2-1.6.2.so': libssh2.so.1: cannot open shared object file: No such file or directory
Installed with:
$ flutter pub add git2dart
$ uname -a
Linux ubuntu 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Installing libgit2-dev on Ubuntu seems to have solved the issue:
sudo apt install libgit2-dev
However, shouldn't the library be bundled with git2dart for it to work, e.g. on Android?
| gharchive/issue | 2023-11-29T19:00:27 | 2025-04-01T06:36:52.909807 | {
"authors": [
"thyssentishman"
],
"repo": "DartGit-dev/git2dart",
"url": "https://github.com/DartGit-dev/git2dart/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1214994256 | Energy values required
Hi,
is there any chance to get the energy values (kWh) from the API? These values are shown in the Energy Diagram. They are necessary for the energy dashboard of HA. It looks like these values are available via https://www.alphaess.com/api/Power/SticsByPeriod and/or https://www.alphaess.com/api/ESS/SticsSummeryDataForCustomer
Looks like the necessary values for issue #4 ...
Regards,
Ralf
Hi Ralf, check out https://github.com/CharlesGillanders/homeassistant-alphaESS.
It doesn't replace this add-on, but when run alongside it gives you everything you need for both real time data and the data needed for the energy dashboard.
| gharchive/issue | 2022-04-25T20:14:01 | 2025-04-01T06:36:52.912498 | {
"authors": [
"rafkra",
"tbgoose"
],
"repo": "DasLetzteEinhorn/AlphaESS_Monitor_Hass",
"url": "https://github.com/DasLetzteEinhorn/AlphaESS_Monitor_Hass/issues/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
dev-proc/versioning.md: initial corrections to Dasharo naming scheme
Based on discussion in the following issues and on announcements made at DUG#6, a new naming convention for Dasharo products was introduced. For details, please check:
https://github.com/Dasharo/docs/pull/820
https://github.com/Dasharo/dasharo-issues/issues/762
@macpijan ping
@macpijan @miczyg1 ping
| gharchive/pull-request | 2024-06-25T13:32:23 | 2025-04-01T06:36:52.932743 | {
"authors": [
"pietrushnic"
],
"repo": "Dasharo/docs",
"url": "https://github.com/Dasharo/docs/pull/841",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1836628231 | meta-dts-distro/recipes-dts/dts/dasharo-deploy: fix downloading Dell …
…BIOS Update packages
@miczyg1 Picked here on top of the latest changes and rebuilding locally: https://github.com/Dasharo/meta-dts/pull/34
@TomaszAIR Didn't we have CI for DTS? Couldn't we build PRs and upload artifacts?
@macpijan we have a weekly CI for checking the cache, but we did not create one for pushed PRs; I opened an issue for this: https://github.com/Dasharo/dasharo-issues/issues/476
@miczyg1 changes from here are already integrated in #36 so I am closing this one.
@TomaszAIR the problem is the change didn't work as expected. I must have messed up the bash syntax because wget doesn't detect the URL after the change.
| gharchive/pull-request | 2023-08-04T12:05:00 | 2025-04-01T06:36:52.935317 | {
"authors": [
"TomaszAIR",
"macpijan",
"miczyg1"
],
"repo": "Dasharo/meta-dts",
"url": "https://github.com/Dasharo/meta-dts/pull/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1875181631 | [NEW] airnewzealand.com
Domain Name:
airnewzealand.com
Purpose:
Airline
Relevance:
https://www.airnewzealand.com/cyber-security-account-protection
Additional Information:
Needs to resolve a merge conflict @irew
| gharchive/pull-request | 2023-08-31T09:52:32 | 2025-04-01T06:36:52.943081 | {
"authors": [
"Mikescops",
"irew"
],
"repo": "Dashlane/passkeys-resources",
"url": "https://github.com/Dashlane/passkeys-resources/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
737057854 | Lc alink text fix
<your comments for this PR go here >
Have you read Terra's Contributing Guide lately? If not, do that first.
I, the developer opening this PR, do solemnly pinky swear that:
[ ] PR is labeled with a Jira ticket number and includes a link to the ticket
[ ] PR is labeled with a security risk modifier [no, low, medium, high]
[ ] PR describes scope of changes
In all cases:
[ ] Get a minimum of one thumbs worth of review, preferably 2 if enough team members are available
[ ] Get PO sign-off for all non-trivial UI or workflow changes
[ ] Verify all tests go green
[ ] Squash and merge; you can delete your branch after this
[ ] Test this change deployed correctly and works on dev environment after deployment
I think this branch needs to be rebased from the latest develop.
| gharchive/pull-request | 2020-11-05T15:59:40 | 2025-04-01T06:36:52.967525 | {
"authors": [
"rushtong",
"solideoglori"
],
"repo": "DataBiosphere/duos-ui",
"url": "https://github.com/DataBiosphere/duos-ui/pull/693",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2125961744 | WOR-1510 upgrade otel override HttpServerMetrics
https://broadworkbench.atlassian.net/browse/WOR-1510
Does a service using OTEL via TCL need to do anything extra to get these changes?
Yes, they need to upgrade their TCL version; I will start doing that tomorrow.
| gharchive/pull-request | 2024-02-08T19:51:50 | 2025-04-01T06:36:52.971564 | {
"authors": [
"dvoet"
],
"repo": "DataBiosphere/terra-common-lib",
"url": "https://github.com/DataBiosphere/terra-common-lib/pull/131",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
458907139 | rename tools to workflows [SATURN-868]
This replaces references to tools in the code with workflows, along with redirectors for affected paths. There are still a few places in copy where we refer to tools; I didn't touch those, as I want to check in with product first.
@panentheos I think I got them all
| gharchive/pull-request | 2019-06-20T22:07:56 | 2025-04-01T06:36:52.972798 | {
"authors": [
"zarsky-broad"
],
"repo": "DataBiosphere/terra-ui",
"url": "https://github.com/DataBiosphere/terra-ui/pull/1706",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1247043760 | [DC-362] add metrics to the retry logic in terra UI
Added a retry handler for AJAX calls to report metrics to Mixpanel.
Thinking about this some more, it seems like this will capture many request failures that we don’t actually care about because they’re part of the normal operation of Terra. For example, every time I load Terra, there are several 404 responses from Bard because I don’t have NIH, Anvil, etc accounts linked. This would report events for all of those. There are also some expected error responses in data tables. For example, if you try to delete a row referenced by another row, the API returns 429 to let you know that you have to delete the reference first. Is this going to capture so many false positives that any potential signal is lost in the noise?
I agree, we can filter out the Bond errors.
Filtering out Bond errors by service URL should use the URL for the current environment (#3072 (comment)).
Could you clarify? I'm unsure: instead of using a direct root of https://broad-bond-dev.appspot.com, should we replace it with the args[0] value of https://broad-bond-dev.appspot.com/api/link/v1/fence? Is it redundant?
Could you clarify? I'm unsure: instead of using a direct root of https://broad-bond-dev.appspot.com, should we replace it with the args[0] value of https://broad-bond-dev.appspot.com/api/link/v1/fence? Is it redundant?
In development, Terra UI makes Bond requests to broad-bond-dev.appspot.com. In production, it makes those requests to broad-bond-prod.appspot.com. Thus, filtering out requests to broad-bond-dev.appspot.com would not affect requests in production.
These service URLs are configured for each environment (dev, alpha, staging, prod, etc.) Instead of hardcoding a URL like broad-bond-dev.appspot.com, this should use the configured service URL (getConfig().bondUrlRoot).
https://github.com/DataBiosphere/terra-ui/blob/74a981df7a6a5a27a96bd13e5bae60644cbf8298/src/libs/ajax.js#L156-L169
Filtering out Bond errors by service URL should use the URL for the current environment (#3072 (comment)).
Could you clarify? I'm unsure: instead of using a direct root of https://broad-bond-dev.appspot.com, should we replace it with the args[0] value of https://broad-bond-dev.appspot.com/api/link/v1/fence? Is it redundant?
Or did you mean replacing it with getConfig().bondUrlRoot?
Missed that, thanks.
| gharchive/pull-request | 2022-05-24T19:39:35 | 2025-04-01T06:36:52.979783 | {
"authors": [
"nawatts",
"petesantos",
"slucasbroad"
],
"repo": "DataBiosphere/terra-ui",
"url": "https://github.com/DataBiosphere/terra-ui/pull/3072",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1267155744 | HPA target average value being summed when same metric is being used in more than one HPA
Output
$kubectl describe cm datadog-custom-metrics
external_metric-horizontal-core-autoscaler2-Sample.Datadog.Metric:
{"metricName":"Sample.Datadog.Metric","labels":{},"ts":1654754970,"reference":{"type":"horizontal","name":"autoscaler2","namespace":"core","uid":"12345678-1234-5678-9456-74162016a00f"},"value":0.5,"valid":true}
external_metric-horizontal-core-autoscaler1-Sample.Datadog.Metric:
{"metricName":"Sample.Datadog.Metric","labels":{},"ts":1654754970,"reference":{"type":"horizontal","name":"autoscaler1","namespace":"core","uid":"12345678-1234-5678-a0e1-6c9d34bd20cd"},"value":0.5,"valid":true}
$kubectl describe hpa autoscaler1 -n core
Name: autoscaler1
Namespace: core
CreationTimestamp: Fri, 08 Apr 2022 11:45:01 +0800
Reference: Deployment/autoscaler1
Metrics: ( current / target )
"Sample.Datadog.Metric" (target average value): 1 / 15
Min replicas: 1
Max replicas: 1
Deployment pods: 1 current / 1 desired
$kubectl describe hpa autoscaler2 -n core
Name: autoscaler2
Namespace: core
CreationTimestamp: Thu, 09 Jun 2022 12:12:09 +0800
Reference: Deployment/autoscaler2
Metrics: ( current / target )
"Sample.Datadog.Metric" (target average value): 1 / 15
Min replicas: 1
Max replicas: 1
Deployment pods: 1 current / 1 desired
Describe what happened:
I'm using the same metric for different Horizontal Pod Autoscalers (HPAs). As shown in the datadog-custom-metrics config map above, Sample.Datadog.Metric is being used in two different HPAs: Autoscaler1 and Autoscaler2.
However, when describing each of the HPAs, you can see the target average value is 1 instead of 0.5; it seems the HPA is summing up the values whenever the metric name is the same.
Describe what you expected:
I expect Autoscaler1 and Autoscaler2 to show 0.5 as the target average value instead of 1.
Steps to reproduce the issue:
Create two Horizontal Pod Autoscalers and use the same metric name.
Additional environment details (Operating System, Cloud provider, etc):
Is this still happening?
| gharchive/issue | 2022-06-10T07:14:34 | 2025-04-01T06:36:53.005177 | {
"authors": [
"acastro2",
"mengschin"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/issues/12363",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1822465188 | Drop prometheus metrics
Hi,
We've got some workloads exporting Prometheus-formatted metrics.
The DD agent discovers them successfully for scraping based on workload annotations, and time series are indeed collected.
We would like to do some basic relabeling, namely drop some of the time series, as they do not serve any meaningful purpose at this point and have high cardinality.
The following configuration is added to the annotation:
"metrics": [".*"]
"exclude_metrics": ["rest.*"]
The expected outcome is for the DD agent to drop everything starting with rest; however, these metrics keep piling up.
Please advise what is being missed.
TIA
Can you please show the full configuration and an example of a metric that is not being excluded?
| gharchive/issue | 2023-07-26T13:49:36 | 2025-04-01T06:36:53.007869 | {
"authors": [
"danielmilanov",
"ofek"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/issues/18400",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2064691187 | Skip TestWindowsTestSuite/TestManualProcessDiscoveryCheck
What does this PR do?
Skip TestManualProcessDiscoveryCheck in the Process Agent Windows E2E test suite due to flakiness
https://github.com/DataDog/datadog-agent/pull/21842 skipped the wrong test
Motivation
The test sometimes flakes when the number of processes returned is greater than 100. When this happens, two JSON objects are returned, which causes the following error during unmarshaling:
Error: Received unexpected error:
invalid character '{' after top-level value
Test: TestWindowsTestSuite/TestManualProcessDiscoveryCheck
Messages: failed to unmarshal process check output
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
Reviewer's Checklist
[x] If known, an appropriate milestone has been selected; otherwise the Triage milestone is set.
[ ] Use the major_change label if your change either has a major impact on the code base, is impacting multiple teams, or is changing important well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a releasenote.
[ ] A release note has been added or the changelog/no-changelog label has been applied.
[ ] Changed code has automated tests for its functionality.
[x] Adequate QA/testing plan information is provided, except if the qa/skip-qa label is applied together with either the required qa/done or qa/no-code-change label.
[x] At least one team/.. label has been applied, indicating the team(s) that should QA this change.
[x] If applicable, docs team has been notified or an issue has been opened on the documentation repo.
[ ] If applicable, the need-change/operator and need-change/helm labels have been applied.
[ ] If applicable, the k8s/<min-version> label, indicating the lowest Kubernetes version compatible with this feature.
[ ] If applicable, the config template has been updated.
Made sure it's skipping the correct test now:
--- PASS: TestWindowsTestSuite (854.18s)
--- PASS: TestWindowsTestSuite/TestManualProcessCheck (3.47s)
--- PASS: TestWindowsTestSuite/TestManualProcessCheckWithIO (27.33s)
--- SKIP: TestWindowsTestSuite/TestManualProcessDiscoveryCheck (24.30s)
--- PASS: TestWindowsTestSuite/TestProcessCheck (16.86s)
--- PASS: TestWindowsTestSuite/TestProcessCheckIO (51.58s)
--- PASS: TestWindowsTestSuite/TestProcessDiscoveryCheck (64.73s)
PASS
ok github.com/DataDog/datadog-agent/test/new-e2e/tests/process 1240.863s
https://gitlab.ddbuild.io/DataDog/datadog-agent/-/jobs/400199699
/merge
| gharchive/pull-request | 2024-01-03T21:49:09 | 2025-04-01T06:36:53.015811 | {
"authors": [
"robertjli"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/pull/21852",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2313242442 | service_discovery: add telemetry
What does this PR do?
Adds telemetry for the service_discovery check.
Motivation
Get data on how the service_discovery check is performing/behaving.
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
Additional code coverage is needed, too.
/merge
| gharchive/pull-request | 2024-05-23T15:40:18 | 2025-04-01T06:36:53.018656 | {
"authors": [
"jonbodner",
"rarguelloF"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/pull/25868",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2383521531 | check for group in IsNodeMetadata
What does this PR do?
This PR checks the resource group in IsNodeMetadata to avoid returning true for resources named nodes that are not in the empty Kubernetes group.
Motivation
Some resources might have the name nodes, but they belong to another (custom) API Group (i.e. not the empty group in kubernetes). IsNodeMetadata should return false in these cases.
Additional Notes
Currently there is an issue in the resource-group-version mapping for generic metadata collection, which, in some cases, leads to watching metrics.k8s.io/v1beta1, Resource=nodes instead of /v1, Resource=nodes.
This will be fixed in a separate PR.
The fix in this PR avoids using the metadata collected for metrics.k8s.io/v1beta1, Resource=nodes as if it were native node metadata, which would result in incorrect data.
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
No need for QA; we have automated testing.
/merge
| gharchive/pull-request | 2024-07-01T10:57:59 | 2025-04-01T06:36:53.022566 | {
"authors": [
"adel121"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/pull/27186",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2386984032 | [CWS] remove now unused secl.json
What does this PR do?
This file was split into secl_linux.json and secl_windows.json. It can safely be removed now.
Motivation
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
/merge
| gharchive/pull-request | 2024-07-02T19:19:44 | 2025-04-01T06:36:53.024778 | {
"authors": [
"paulcacheux"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/pull/27261",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2486693649 | [EBPF] Fix the suggested kmt.test command when creating config from CI
What does this PR do?
Fixes the suggested kmt.test command that is shown when the config is created in kmt.config with the --from-ci-pipeline argument. Also adds the list of failed tests to the command via the --run argument.
Motivation
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
/merge
| gharchive/pull-request | 2024-08-26T12:03:06 | 2025-04-01T06:36:53.027471 | {
"authors": [
"gjulianm"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/pull/28740",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2748354314 | fix(installer): Make policy metadata files root-owned & world-readable
What does this PR do?
Fixes a bug at installer install time: we try to write the policy metadata files as dd-agent before having created the dd-agent user.
This PR makes the file owned by root instead, and world-readable so that the daemon can read it. There is no sensitive data in this file anyway.
Motivation
Describe how you validated your changes
Tested manually on a VM + E2E tests
Possible Drawbacks / Trade-offs
Additional Notes
/merge
| gharchive/pull-request | 2024-12-18T17:13:49 | 2025-04-01T06:36:53.029492 | {
"authors": [
"BaptisteFoy"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/pull/32356",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
665376304 | Add system.mem.slab_reclaimable gauge
What does this PR do?
Adds a gauge for system.mem.slab_reclaimable. This is the part of slab memory that might be reclaimed (i.e., caches).
Motivation
Datadog 7.x adds SReclaimable memory, if available on the system, to the system.mem.cached gauge by default: https://github.com/shirou/gopsutil/commit/f9e238c38b5f16a36794dd2f0f751c7376c64a60.
This may lead to inconsistent metrics for clients migrating from Datadog 5.x, where system.mem.cached didn't include SReclaimable memory. Adding a gauge for system.mem.slab_reclaimable allows an inverse calculation to remove this value from the system.mem.cached gauge, as the worked example below shows.
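A small worked example of that inverse calculation (the numbers are illustrative only, not real measurements):

cached_7x = 1200.0         # system.mem.cached reported by Agent 7.x (includes SReclaimable), in MiB
slab_reclaimable = 200.0   # the new system.mem.slab_reclaimable gauge
cached_5x_equivalent = cached_7x - slab_reclaimable  # matches the Agent 5.x definition
print(cached_5x_equivalent)  # 1000.0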
Additional Notes
This PR is a follow up to https://github.com/DataDog/datadog-agent/issues/5038.
Thanks @mx-psi,
I've incorporated those suggestions.
Thank you so much for your patience as I figured this out! 😅
I think the CLA is good now.
No probs, thanks again!
| gharchive/pull-request | 2020-07-24T19:40:50 | 2025-04-01T06:36:53.033651 | {
"authors": [
"jared-gs",
"mx-psi"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/pull/6053",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2580174430 | 404 Errors ( Unexpected telemetry response ) from Datadog Trace
Hello,
Recently, I updated Istio from version 1.20.8 to 1.21.6. Immediately after the update, the following errors have been occurring intermittently in the envoy tracing scope for every istio-proxy container:
[source/extensions/tracers/datadog/logger.cc:23] Unexpected Remote Configuration status 503 with body (if any, starts on next line):
upstream connect error or disconnect/reset before headers. reset reason: connection termination
[source/extensions/tracers/datadog/logger.cc:23] Unexpected telemetry response status 404 with body (if any, starts on next line):
404 page not found
I was able to resolve the first error by referring to this Issue and setting DD_REMOTE_CONFIGURATION_ENABLED = "false" in the IstioOperator YAML template file.
However, I haven't been able to solve the second error yet.
Also, I am not familiar with C++, but I found a code block that seems to be involved in this error:
https://github.com/DataDog/dd-trace-cpp/blob/main/src/datadog/datadog_agent.cpp#L176-L190
If anyone understands the cause of this error and knows how to resolve it, I would greatly appreciate your help.
Hi @koizumi7010
We collect telemetry to get more insight on the tracer and its usage. This feature can be disabled with DD_INSTRUMENTATION_TELEMETRY_ENABLED=false.
Learn more on all the configurations we support.
Let me know if that solves your issue :)
@dmehala
Thanks for the reply!
Setting DD_INSTRUMENTATION_TELEMETRY_ENABLED = 'false' in the IstioOperator YAML template file eliminated the error :) 👍🏼
| gharchive/issue | 2024-10-11T01:12:17 | 2025-04-01T06:36:53.048578 | {
"authors": [
"dmehala",
"koizumi7010"
],
"repo": "DataDog/dd-trace-cpp",
"url": "https://github.com/DataDog/dd-trace-cpp/issues/164",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
202164044 | [appveyor][windows] fix winfixme checks
So for some reason the mocking of modules loaded in the check breaks on windows. Not sure what python does differently there, but it's definitely windows specific. We'll have to get to the bottom of it and address it.
Fixed here: https://github.com/DataDog/integrations-core/pull/79
| gharchive/issue | 2017-01-20T15:13:45 | 2025-04-01T06:36:53.420652 | {
"authors": [
"truthbk"
],
"repo": "DataDog/integrations-core",
"url": "https://github.com/DataDog/integrations-core/issues/124",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
447765183 | [ibm_mq] fix queue auto discovery to include any qlocal in addition to qmodel, …
[ibm_mq] fix queue auto discovery to include any qlocal in addition to qmodel, filtering to include only DEFTYPE(PREDEFINED) types
What does this PR do?
Expand metric collection for queues:
This change allows for MQ queues of type QLOCAL to be considered for metric collection in addition to those of type QMODEL. The existing implementation only allows for those queues of type QMODEL. This is required for my use case as nearly all of our queues are of type QLOCAL. (I suspect this is true for most orgs that use IBM MQ.)
Filtering out of 'system' queues is also updated to look at the DEFTYPE attribute to determine a 'system' queue in lieu of the hard-coded list found in the existing version of config.py -- this filter is now based on whether a queue's DEFTYPE attribute is PREDEFINED (if so, consider it, else ignore).
Also, this change provides regex support for the queue_patterns configuration (overcoming MQ's right-side-wildcard-only pattern treatment).
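As a rough illustration of that regex-based filtering, here is a sketch with assumed names (not the integration's actual code):

import re

def is_discoverable(queue_name, queue_patterns):
    # MQ's native patterns only allow a trailing wildcard, so matching the
    # discovered queue names against full regexes client-side lifts that limit.
    return any(re.match(pattern, queue_name) for pattern in queue_patterns)

print(is_discoverable("APP.ORDERS.IN", [r"APP\..*"]))  # True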
Motivation
:information_source: This change was discussed with DataDog pre-sales during a trial-period checkpoint call. DataDog was eager to review the changes and requested that this patch be submitted as a PR. It is understood that there may be more refinement needed to this PR if DataDog were to choose to incorporate this and that the PR may not be merge-ready in its current state.
While involved with a Datadog evaluation, I found that there were no metrics reported for any queues using the auto-discovery configuration. After looking in the integration code, I found the limitation mentioned above that queues would have to be of type QMODEL in order to be considered for metric collection. This meant that the IBM MQ integration would be unusable for my use case. In order to proceed with my evaluation, QLOCALs would also have to be considered.
Additional Notes
None.
Review checklist (to be filled by reviewers)
[ ] PR title must be written as a CHANGELOG entry (see why)
[ ] Files changes must correspond to the primary purpose of the PR as described in the title (small unrelated changes should have their own PR)
[ ] PR must have changelog/ and integration/ labels attached
[ ] Feature or bugfix must have tests
[ ] Git history must be clean
[ ] If PR adds a configuration option, it must be added to the configuration file.
Hi @goodgrits, I'm a PM at Datadog and would like to thank you for submitting this PR! While our team is reviewing, I'd like to chat with you in more detail about this request.
Could you please email me at dhruv.sahni@datadoghq.com with the best contact information to reach you at?
Please have a look at the proposed changes in https://github.com/DataDog/integrations-core/tree/julia/unit-tests. There is a type fix (the issue was already present), and it updates the example file and test to account for regexes.
@hithwen : Should your #3893 supersede this PR? That would seem fine to me.
@goodgrits Ok, lets do that and close this one.
| gharchive/pull-request | 2019-05-23T16:39:33 | 2025-04-01T06:36:53.429101 | {
"authors": [
"dsahni",
"goodgrits",
"hithwen"
],
"repo": "DataDog/integrations-core",
"url": "https://github.com/DataDog/integrations-core/pull/3801",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
761404078 | Add loader config
What does this PR do?
Add loader config
Note: We might want to delay this change to keep is_jmx for one or more Agent versions.
Motivation
https://github.com/DataDog/datadog-agent/pull/6700
Related PR: https://github.com/DataDog/datadog-agent/pull/6953
Closing for now, since there is not much benefit right now to using loader instead of is_jmx.
| gharchive/pull-request | 2020-12-10T16:47:29 | 2025-04-01T06:36:53.431429 | {
"authors": [
"AlexandreYang"
],
"repo": "DataDog/integrations-core",
"url": "https://github.com/DataDog/integrations-core/pull/8183",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
296151794 | [Feature] robust regression and error function
With my dataset, I had some trouble: no matter what is entered as min_stop_frac, it would always return the same number of segments. It would merge large segments that had almost no deviations before (creating a visible error), instead of merging small segments with bigger deviations. Taking the square root of the squared error that is returned by the OLS linear regression function fixed it, reduced the error, and improved the result immensely:
I replaced:
https://github.com/DataDog/piecewise/blob/3a15a1c3113cbbecf979bb318f19f2c7fbdc9408/piecewise/regressor.py#L301
with
return tuple(coeffs), 0.0 if len(error) == 0 else float(math.sqrt(error))
I'm not sure if this should be pushed in a PR, as it would have to be checked against more data, but maybe it would make sense from an API standpoint to let the user supply their own linear regression/cost function, so one can use a more robust regression like the Theil-Sen estimator or RANSAC in case the dataset has outliers.
Thank you lots for this library, it helped me quite a bit.
I definitely like this idea of make the cost function pluggable. If you're interested in pushing a PR, that'd be great. Otherwise, I can probably get around to adding this functionality eventually, although I'm not sure how soon.
@mrgreywater Thank you!
@StephenKappel But how do I increase the number of segments? I want to see red lines on the chart. min_stop_frac doesn't help.
https://puu.sh/AP63P/967d3d3243.png
Hey @jrtechs -- I'd be happy to have a PR to make this better! Here's some ideas...
Currently, the algorithm only remembers the segments from what it thinks is the best state so far. It does this based on the increase in total error (i.e., the cost of the merge). It does not consider how many merges remain before it ends up with a single segment, nor does it consider granular information about the cost of merges in the past.
The algorithm is trying to catch the big jump in total error that normally accompanies the one-merge-too-far. Something like shown here...
I think the problem comes when that sudden increase in error starts with an error increase that isn't the single largest error increase.
For example, the algorithm will do the right thing with this series of merge costs:
1, 1, 1, 2, 3, 3, 3, 4, 5, 5, 5, 5, 100, 125, 100, 105
But it will go one merge too far for this series of merge costs:
1, 1, 1, 2, 3, 3, 3, 4, 5, 5, 5, 5, 50, 105, 100, 125
We could try:
Tracking the cost of all merges to date, so we could use some more interesting statistics of the merge costs rather than just the max and cumulative sum of the past.
Remembering more than one state so that we don't need to make the decision of whether or not to discard a previous potentially-interesting state until we see all the merge costs.
The hope would be that, in retrospect, we'd be able to make smarter choices than we can on the fly.
A simpler solution for the specific example you provide could be providing a parameter to require more significant increases in cost for a merge to be considered the tipping point. On this line, cost_increase == biggest_cost_increase could become something like cost_increase >= 2.0*biggest_cost_increase, where 2.0 would be parameterized.
@StephenKappel I will definitely look into that. Just for clarification, what would be the difference between min_stop_frac and this new error_increase_tolerance? Does min_stop_frac define the increase in error allowed before we stop merging, whereas error_increase_tolerance would help us push more partitions/buckets to be merged?
The main motivation for min_stop_frac is to prevent the algorithm from giving a suboptimal solution when the "best" solution is a single line segment. It prevents the algorithm from stopping merging too soon if no single merge has led to a large fraction of the total error.
A new parameter would help the algorithm stop sooner, but this wouldn't contradict min_stop_frac. That is, in order to stop merging, the min_stop_frac-based threshold would still have to be met. However, after that's met (it will always be met for every future iteration after it's first true), we need a new parameter to help prevent too much merging.
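A minimal sketch of that parameterized tipping-point rule (names assumed; this is not the library's actual code):

def is_tipping_point(cost_increase, biggest_cost_increase, tolerance=2.0):
    # Require the current merge's cost increase to dominate the biggest past
    # increase by a tunable factor before treating it as one-merge-too-far.
    return cost_increase >= tolerance * biggest_cost_increase

# With past increases topping out at 5, a jump to 50 clears the 2x bar...
assert is_tipping_point(50, 5)
# ...while the earlier 4 -> 5 step does not.
assert not is_tipping_point(5, 4)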
| gharchive/issue | 2018-02-11T01:57:01 | 2025-04-01T06:36:53.440989 | {
"authors": [
"Andrey-Pavlov",
"StephenKappel",
"jrtechs",
"mrgreywater"
],
"repo": "DataDog/piecewise",
"url": "https://github.com/DataDog/piecewise/issues/4",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2212053271 | 🛑 Les lundis - ENSTA is down
In 3363853, Les lundis - ENSTA (https://les-lundis.ensta-paris.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Les lundis - ENSTA is back up in 48d2997 after 3 minutes.
| gharchive/issue | 2024-03-27T23:18:32 | 2025-04-01T06:36:53.447573 | {
"authors": [
"DataEnsta"
],
"repo": "DataEnsta/upptime",
"url": "https://github.com/DataEnsta/upptime/issues/1474",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1977660232 | 🛑 DaTA is down
In 8326ffe, DaTA (https://data-ensta.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: DaTA is back up in 16e3af5 after 15 minutes.
| gharchive/issue | 2023-11-05T06:52:30 | 2025-04-01T06:36:53.450209 | {
"authors": [
"DataEnsta"
],
"repo": "DataEnsta/upptime",
"url": "https://github.com/DataEnsta/upptime/issues/290",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1809338208 | openapi/submitTask接口任务启动报错
Search before asking
[X] I had searched in the issues and found no similar issues.
What happened
The endpoint reports: "failed to read a valid Token".
Inspecting the code shows that the submitTask method in TaskServiceImpl calls buildJobConfig, which fetches the user's permissions and, in turn, the login information.
What you expected to happen
Endpoints under /openapi should not keep fetching the userId from the login information; /openapi/submitTask should be able to run tasks normally.
How to reproduce
Call /openapi/submitTask directly without passing a cookie and it throws an error.
Anything else
No response
Version
dev
Are you willing to submit PR?
[X] Yes I am willing to submit a PR!
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Refresh the page and log in again.
The best approach is to implement token management: integrate permissions and other information via the token, then access the openapi.
@GetMapping("/submitTask")
public Result submitTask(@RequestParam Integer id) {
    taskService.initTenantByTaskId(id);
    JobResult jobResult = taskService.submitTask(id);
    if (jobResult.isSuccess()) {
        return Result.succeed(jobResult, "执行成功");
    } else {
        return Result.failed(jobResult, jobResult.getError());
    }
}
| gharchive/issue | 2023-07-18T07:38:10 | 2025-04-01T06:36:53.455491 | {
"authors": [
"SPWanderer",
"aiwenmo",
"leechor",
"zhu-mingye"
],
"repo": "DataLinkDC/dinky",
"url": "https://github.com/DataLinkDC/dinky/issues/2140",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2170700614 | [Bug] [k8s app] Submitting a job in k8s app HA mode gives Flink jobs the same job id; submit and cancel throw errors
Search before asking
[X] I had searched in the issues and found no similar issues.
What happened
When submitting a job in k8s application HA mode, the Flink jobs get the same job id, like 0000000006924de00000000000000000. Submitting a job throws the error "Duplicate entry 'test20000000006924de00000000000000000-1' for key 'cluster_un_idx1'" when the old Flink instance still exists, and cancelling a job throws "Expected one result (or null) to be returned by selectOne(), but found: 5" because of the same id.
What you expected to happen
Submitting and cancelling jobs should work well.
How to reproduce
Submit a job in k8s application HA mode.
Anything else
No response
Version
1.0.0
Are you willing to submit PR?
[ ] Yes I am willing to submit a PR!
Code of Conduct
[X] I agree to follow this project's Code of Conduct
You can try removing the unique constraint on name in the MySQL table; it does not affect tasks.
| gharchive/issue | 2024-03-06T06:00:36 | 2025-04-01T06:36:53.459577 | {
"authors": [
"gaoyan1998",
"zhengtingxue"
],
"repo": "DataLinkDC/dinky",
"url": "https://github.com/DataLinkDC/dinky/issues/3244",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
828226316 | [HW] Roozbeh
Name: Roozbeh
Linkedin Profile: www.linkedin.com/in/roozbeh-dargahi
Attach the homework screenshots below:
https://api.badgr.io/public/assertions/TC5ZhD0AT2yOFtxc2DwH8A?identity__url=https%3A%2F%2Fwww.linkedin.com%2Fin%2Froozbeh-dargahi
| gharchive/issue | 2021-03-10T19:24:24 | 2025-04-01T06:36:53.469249 | {
"authors": [
"HadesArchitect",
"RooDK"
],
"repo": "DataStax-Academy/Intro-to-Cassandra-for-Developers",
"url": "https://github.com/DataStax-Academy/Intro-to-Cassandra-for-Developers/issues/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
54782651 | Create and update prospects
I added a few functions to create and update prospects. They're all yours if you want them in the library :)
Looks pretty good; the main thing is that the style doesn't match.
Can you offer some specific style guidance to @jonathankeebler? It would be nice to get this functionality merged in, especially since it's the only package up on npm. In the meantime I'm installing his branch.
| gharchive/pull-request | 2015-01-19T16:15:29 | 2025-04-01T06:36:53.470535 | {
"authors": [
"ibash",
"jonathankeebler",
"micahlmartin"
],
"repo": "Datahero/node-pardot",
"url": "https://github.com/Datahero/node-pardot/pull/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1087197620 | CA 124 Deprecates s3_policy_arns & CA 125 Cloudwatch logs organization
Deprecates s3_policy_arns in favor of additional_policy_arns.
Adds the ability to create the CloudWatch log group and pass it to the shell script.
This PR doesn't appear to be linked to a DevOps/SRE jira ticket
| gharchive/pull-request | 2021-12-22T21:21:01 | 2025-04-01T06:36:53.472821 | {
"authors": [
"fvazquez-caylent",
"tamr-teamcity"
],
"repo": "Datatamer/terraform-aws-tamr-vm",
"url": "https://github.com/Datatamer/terraform-aws-tamr-vm/pull/38",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1348922544 | When used in a flake: error: attribute 'currentSystem' missing
Hi, I'm a newbie, so I apologize if this question is in the wrong place. I'm trying to use mach-nix in this flake.nix:
{
outputs = {self, nixpkgs}:
let
mach-nix = import (builtins.fetchGit {
url = "https://github.com/DavHau/mach-nix";
ref = "refs/tags/3.5.0";
rev = "7e14360bde07dcae32e5e24f366c83272f52923f";
}) { };
in
{
defaultPackage.x86_64-linux = mach-nix.mkPython rec {
requirements = ''
numpy
'';
};
};
}
But when I run nix shell . I get this error:
error: attribute 'currentSystem' missing
at /nix/store/5n402azp0s9vza4rziv4z5y88v2cv1mq-nixpkgs/pkgs/top-level/impure.nix:17:43:
16| # (build, in GNU Autotools parlance) platform.
17| localSystem ? { system = args.system or builtins.currentSystem; }
| ^
18|
I was reading a blog about this which said:
You may get an error like this:
error: attribute 'currentSystem' missing
This happens because in the context of flakes, builtins.currentSystem does not exist (it is, after all, an impurity). If you come across this, try to refactor your legacy-nix portion so the system is always an argument, and provide that argument from your flake, as above.
From that I get the feeling that I need to provide x86_64-linux as a parameter somehow. But where do I put it?
You should follow the example described here: https://github.com/DavHau/mach-nix/blob/master/examples.md#use-mach-nix-from-a-flake - generally you want to use flake inputs to "import" remote nix packages/code.
You could try (untested):
{
outputs = {self, nixpkgs}:
let
mach-nix = import (builtins.fetchGit {
url = "https://github.com/DavHau/mach-nix";
ref = "refs/tags/3.5.0";
rev = "7e14360bde07dcae32e5e24f366c83272f52923f";
}) { inherit system; };
in
{
defaultPackage.x86_64-linux = mach-nix.mkPython rec {
requirements = ''
numpy
'';
};
};
}
{ inherit system; } is not working.
What you can do as a workaround is to build your configuration with the --impure option.
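For completeness, the flake-input style from the linked examples would look roughly like this (a sketch, untested here; it assumes mach-nix's flake exposes lib.${system}.mkPython as shown in its examples.md):

{
  inputs.mach-nix.url = "github:DavHau/mach-nix/3.5.0";
  outputs = { self, mach-nix }:
    let
      system = "x86_64-linux";
    in
    {
      defaultPackage.${system} = mach-nix.lib.${system}.mkPython {
        requirements = ''
          numpy
        '';
      };
    };
}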
| gharchive/issue | 2022-08-24T06:16:02 | 2025-04-01T06:36:53.483534 | {
"authors": [
"573",
"GeorgesAlkhouri",
"MatrixManAtYrService",
"ryanswrt"
],
"repo": "DavHau/mach-nix",
"url": "https://github.com/DavHau/mach-nix/issues/506",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
176415015 | update projects and library for swift 2.3 (updated)
I started with @bersaelor's swift_2.3 branch, rebased against @DaveWoodCom's master, and made one small change to get it to build with Xcode 8 GM.
I don't see this getting merged to master, but could you maybe keep a swift_2.3 compatibility branch with these changes?
Thanks for this PR. You should be able to use the swift_2.3 branch now. Let me know if there are any issues.
| gharchive/pull-request | 2016-09-12T16:02:27 | 2025-04-01T06:36:53.488958 | {
"authors": [
"DaveWoodCom",
"humblehacker"
],
"repo": "DaveWoodCom/XCGLogger",
"url": "https://github.com/DaveWoodCom/XCGLogger/pull/151",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1588799376 | Support building OpenCloudSaves as a shared library for 3rd party integration
Is your feature request related to a problem? Please describe.
OpenCloudSaves currently only provides an executable binary, which makes it more difficult for 3rd party integration.
Describe the solution you'd like
Provide a shared library build of OpenCloudSaves. Go's buildmodes allow exporting Go methods as a C shared library, which can be used by 3rd party software to integrate OpenCloudSaves.
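A minimal sketch of what such an export could look like with -buildmode=c-shared (the function name RunCloudSave is hypothetical, not an existing OpenCloudSaves API):

package main

import "C"

//export RunCloudSave
func RunCloudSave(args *C.char) C.int {
	// forward the C string as CLI-style flags to the sync logic
	_ = C.GoString(args)
	return 0
}

// main is required for -buildmode=c-shared builds; it is never called by the host.
func main() {}

Built with: go build -buildmode=c-shared -o libopencloudsave.so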
Describe alternatives you've considered
One alternative would be to use the CLI provided by OpenCloudSaves to integrate it into other software.
Additional context
I am currently developing a gamepad native game launcher and overlay called OpenGamepadUI as a free and open source alternative. I would love to be able to integrate OpenCloudSaves into it either natively or as a plugin.
Will target for 0.17
Right now my concept for the API will be to expose a C interface that allow an invocation of the application using our option flags https://github.com/DavidDeSimone/OpenCloudSaves/blob/main/main.go#L14-L23
This will be our stable interface that will follow semver conventions. This way you can embed OpenCloudSave into your application without having to invoke from the command line.
Internally, I try to have the GUI basically "invoke" the command line by using calls to CLIMain, so in general the app internally uses those flags to drive behavior.
That would be perfect!
That would be perfect!
A couple of other issues I am thinking through:
OpenCloudSave currently compiles and distributes a copy of rclone to perform the actual syncing. There are a couple of solutions I can think of for this:
a. Require users of opencloudsave.so to provide an rclone for usage
b. Try to bundle rclone into open cloud save (not sure of the difficulty here)
On windows, OpenCloudSave requires a WebView DLL and WebView2 to be installed by the end user. This is all currently handled by our MSI - these requirements would end up passed on to the users of opencloudsave.so
a. I don't know of another way around this - but I imagine it won't be a deal breaker for most applications, since they can just copy our install flow/distribution from how we build our MSI.
I think it's reasonable to require the integrator to bundle or make their package depend on rclone themselves if they're using the shared library. Maybe OpenCloudSave could also expose an interface to specify the path to rclone if the integrating application has rclone in a custom directory?
For OpenGamepadUI, since it's Linux-only, I was just planning on adding rclone as a dependency after integrating OpenCloudSave.
I think it's reasonable to require the integrator to bundle or make their package depend on rclone themselves if they're using the shared library. Maybe OpenCloudSave could also expose an interface to specify the path to rclone if the integrating application has rclone in a custom directory?
For OpenGamepadUI, since it's Linux-only, I was just planning on adding rclone as a dependency after integrating OpenCloudSave.
Yeah, this sounds reasonable to me - I like the idea of exposing a hook for a user to specify the path to the rclone they want to use. I might expose that in the GUI layer as well.
| gharchive/issue | 2023-02-17T06:00:12 | 2025-04-01T06:36:53.503372 | {
"authors": [
"DavidDeSimone",
"ShadowApex"
],
"repo": "DavidDeSimone/OpenCloudSaves",
"url": "https://github.com/DavidDeSimone/OpenCloudSaves/issues/49",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
123160159 | test-gcd.dice Bug. You cannot assign values to parameters
It was too complicated to support assignment of values to parameters, please remove all instances of this and test-gcd will work
Corrected.
| gharchive/issue | 2015-12-20T13:38:03 | 2025-04-01T06:36:53.517218 | {
"authors": [
"DavidWatkins",
"KhaledAtef"
],
"repo": "DavidWatkins/Dice",
"url": "https://github.com/DavidWatkins/Dice/issues/125",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1899304591 | Example is unavailable
The example link in the readme (https://github.com/DawnGroveStudios/GodotP2PNetworkExample) leads to a 404.
Sorry about that, this example project is still a work in progress. We will hopefully have it finished within the next few days.
Sorry for the delay, I have made this repo public. However, it is incomplete since I am spending most of my time on a different game. I will work on improving and adding more examples to that repo in the coming weeks, as well as improving the documentation.
https://github.com/DawnGroveStudios/GodotP2PNetworkExample/tree/main/basic
Closing out for now but please leave another issue if you would like to see a specific example or if you have any ideas to help improve this plugin in.
| gharchive/issue | 2023-09-16T05:24:40 | 2025-04-01T06:36:53.545300 | {
"authors": [
"Seann-Moser",
"TCMine"
],
"repo": "DawnGroveStudios/GodotP2PNetwork",
"url": "https://github.com/DawnGroveStudios/GodotP2PNetwork/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
960036934 | List Anchors RPC
What kind of PR is this?:
/kind feature
What this PR does / why we need it:
Implement listanchors RPC
Which issue(s) does this PR fixes?:
Fixes #48
Additional comments?:
I want to write a test case that asserts the number of anchors returned when the listanchors RPC is called. I have tried creating anchors via spv_anchorrewards, but to no avail.
My questions are
What is the difference between spv_listanchors and listanchors?
Is there a away to create anchors so that i can make the assertion i outlined above?
| gharchive/pull-request | 2021-08-04T08:09:17 | 2025-04-01T06:36:53.560936 | {
"authors": [
"siradji"
],
"repo": "DeFiCh/jellyfish",
"url": "https://github.com/DeFiCh/jellyfish/pull/558",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1392274075 | FFI communication between native and meta chain
What would you like to be added:
As an alternative to JSON RPC communication (#80), I propose invoking functions through FFI. Metachain (MC) can be compiled as a staticlib, which then gets linked to defid (NC). The PoC for this approach exists in libmc branch in metachain and ain repository. CLI args from NC will be passed to MC, which will be bootstrapped and then defid will follow.
Why is this needed:
This way, we can eliminate the security concerns for running MC as an independent process and communicating through RPC (#45), which also reduces TCP overhead and avoids us packaging and running two binaries which will most probably exist in the same machine. #94 wouldn't have any adverse impact on this decision as the source of the bottleneck will still be in defid. It should be noted that MC and NC will not have any knowledge of what the other is doing and neither will one manage the other. However, when MC fails and exits, it will shutdown defid as well (at this point).
As for e2e testing, we can still expose RPC from MC, because essentially they'll be the same FFI calls.
cc @prasannavl @fuxingloh @DieHard073055
As an alternative to JSON RPC communication (#80), I propose invoking functions through FFI. Metachain (MC) can be compiled as a staticlib, which then gets linked to defid (NC). The PoC for this approach exists in libmc branch in ain repository. CLI args from NC will be passed to MC, which will be bootstrapped and then defid will follow.
I think it's sound, given both options are provided with a shared interface for the sake of testing, with the ability to force defid to switch between the prepackaged binary or a provided RPC URL --metachain_url=.... #97 is also a very important related issue where, in the future or now, we might not want to "force" all defid clients to be mining for NC and MC, given MC has different performance and storage requirements for masternode operators.
This way, we can eliminate the security concerns for running MC as an independent process and communicating through RPC (#45),
Although #45 is important, it isn't sane for masternode operators to operate on a host machine they can't trust. It's more of an additional security good practice IMO, as I don't think they should not be operating their masternode over the network.
which also reduces TCP overhead and avoids us packaging and running two binaries which will most probably exist in the same machine. #94 wouldn't have any adverse impact on this decision as the source of the bottleneck will still be in defid.
From the current literature on our threading model, I think #94 requires a renewal of our existing threading model for MC integration rather than the bottleneck of integration.
I think it's sound given if both options are provided with a shared Interfacing for the sake of testing with the ability to force defid to switch between prepackaged binary or via a provided RPC URL --metachain_url=....
While I'm okay with supporting RPC for the purpose of testing, the whole aim of using FFI is to prepackage the binaries and to not worry about having RPC calls in production or needing to secure them, but if we're planning to have it in the near future anyway, then I don't think the FFI way serves any purpose. @prasannavl Need your opinion here.
https://github.com/DeFiCh/metachain/issues/97 is also a very important related issue where in the future or now, we might not want to "force" all defid clients to be mining for NC and MC, given MC has different performance and storage requirements for masternode operators.
Is the performance and storage requirements the only constraint for not being able to run MC alongside NC?
Although https://github.com/DeFiCh/metachain/issues/45 is important, it isn't sane for masternode operators to operate on a host machine they can't trust.
This is not specific to MC though, right? Wouldn't it apply to NC as well?
While I'm okay with supporting RPC for the purpose of testing, the whole aim of using FFI is to prepackage the binaries and to not worry about having RPC calls in production or needing to secure them, but if we're planning to have it in the near future anyway, then I don't think the FFI way serves any purpose. @prasannavl Need your opinion here.
Yup, DMC is the first part of a series of changes to modulize the DeFiChain blockchain into several components that were initially proposed in DFIP 2111-B. DMC is the first of many execution planes we want to integrate into a shared network plane. The goal is to modularize the blockchain to make upgrading easier. It was explained in a video on a generative approach to adding more modularized components with a shared network plane.
Is the performance and storage requirements the only constraint for not being able to run MC alongside NC?
Part of it. With DMC embedded, the "server profile" and server maintenance's operational flow must be changed if we force defid to be prepackaged. Several operators such as exchanges, individuals, and node operators would have to reconfigure their setup to fit the runtime requirements running defid. It's not simply a plug-and-play since these must be pre-communicated, tested, and operationalized.
This is not specific to MC though, right? Wouldn't it apply to NC as well?
Yup, it would; it's not specific to MC or NC, just running servers or validators in general. IMO, additional security cushions more for the uninitiated server maintainers.
Btw, @mambisi has brought it to my attention going through the FFI route will not be as easy as it sounds because we won't get the context from substrate outside of wasm environment. Until we have a clear direction there, this is how we've currently decided to go:
graph
subgraph DeFiCh/ain
nc[Native chain consensus]
nb[Native chain bootstrap]
nb --> nc
nb --" FFI "--> mb
nc --" FFI "--> nrc
nb --" FFI "--> lock
subgraph "DeFiCh/libain-rs"
pt["Protobuf spec"]
ngc["gRPC client"]
nrc["RPC client"]
pt --> ngc
pt --> nrc
end
subgraph DeFiCh/metachain
mb[Metachain bootstrap]
mc[Metachain consensus]
mprc[RPC server]
lock[Random number agreement]
mprc --> lock
lock --> mprc
mb --> mc
mprc --> mc
nrc --" JSON RPC "--> mprc
end
end
Metachain build will also emit a static library in addition to the executable. This gets linked to defid. When defid starts, it has the option to boot up metachain (and arguments following a certain flag will be passed to it). When metachain terminates, it can issue a shutdown request to defid.
When metachain is under defid's management, both will agree upon a large random number which will then be set alongside the RPC arguments. This way, any incoming request (to those consensus RPC endpoints) not coming from defid will be dropped.
The consensus mechanism will still be through RPC (addressed in #80). FFI will be used only to boot up metachain and agree upon the random number. E2E tests will work normally on both metachain side and ain side because there won't be any breaking changes.
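As a rough sketch, the exported boot surface could look like this in Rust (the function names and the rand crate dependency are assumptions, not the actual metachain API):

use std::os::raw::c_char;

#[no_mangle]
pub extern "C" fn mc_start(_argc: i32, _argv: *const *const c_char) -> i32 {
    // bootstrap metachain with the CLI args forwarded by defid
    0
}

#[no_mangle]
pub extern "C" fn mc_shared_secret() -> u64 {
    // the large random number agreed upon with defid,
    // used to gate the consensus RPC endpoints
    rand::random::<u64>()
}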
Btw, @mambisi has brought it to my attention going through the FFI route will not be as easy as it sounds because we won't get the context from substrate outside of wasm environment. Until we have a clear direction there, this is how we've currently decided to go:
Yup sounds good, I'm afraid of that as well.
Metachain build will also emit a static library in addition to the executable. This gets linked to defid. When defid starts, it has the option to boot up metachain (and arguments following a certain flag will be passed to it). When metachain terminates, it can issue a shutdown request to defid.
Yup, this is great; it's important that it's optional. https://github.com/DeFiCh/metachain/issues/97#issuecomment-1277355273
Does this also mean instead of starting up MetaChain, we could also provide an URL with a port number?
When metachain is under defid's management, both will agree upon a large random number which will then be set alongside the RPC arguments. This way, any incoming request (to those consensus RPC endpoints) not coming from defid will be dropped.
Could we use JWT instead if it's not too troublesome? Minor, I'm also very comfortable with this design for now, but it might require changes. Let's see, it's not important now.
The consensus mechanism will still be through RPC (addressed in JSON-RPC communication between Native Chain and Meta Chain #80). FFI will be used only to boot up metachain and agree upon the random number. E2E tests will work normally on both metachain side and ain side because there won't be any breaking changes.
Sounds good.
Does this also mean instead of starting up MetaChain, we could also provide an URL with a port number?
I didn't have this in mind, but it shouldn't be hard to implement with the current design.
Could we use JWT instead if it's not too troublesome? Minor, I'm also very comfortable with this design for now, but it might require changes. Let's see, it's not important now.
Agreed, let's get both defid and metachain running in a single node. I'm sure we can improve the mechanism later on.
| gharchive/issue | 2022-09-30T11:09:19 | 2025-04-01T06:36:53.578875 | {
"authors": [
"fuxingloh",
"wafflespeanut"
],
"repo": "DeFiCh/metachain",
"url": "https://github.com/DeFiCh/metachain/issues/106",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2199607765 | Is it possible to get segstats of cerebellum after run FastSurfer pipeline?
Question/Support Request
Is it possible to get segstats of cerebellum after run FastSurfer pipeline?
Since the default option is not generating segstats, I wonder if I should run all pipeline, or can generate segstats for cerebellum afterwards.
Bests,
After running the cerebnet module, which is part of the segmentation pipeline, you should get the statsfile, see :
https://deep-mi.org/FastSurfer/dev/overview/OUTPUT_FILES.html#cerebnet-module
You can run with --seg_only, which will include cerebnet. No need to run the longer surface pipeline for this.
Make sure you use the latest release.
Thanks.
Actually, I ran only cerebnet with already processed data via python directly as below.
python CerebNet/run_prediction.py --t1 $t1 --asegdkt_segfile $asegdkt_segfile --conformed_name $conformed_name --cereb_segfile $cereb_segfile --seg_log $seg_log --batch_size $batch_size --viewagg_device $viewagg --device $device --async_io --threads $threads
I missed --cereb_statsfile $cereb_statsfile, so the statsfile was not generated.
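For reference, the same invocation with the stats file flag added would be (a sketch; the variables are the same path placeholders as above):

python CerebNet/run_prediction.py --t1 $t1 --asegdkt_segfile $asegdkt_segfile --conformed_name $conformed_name --cereb_segfile $cereb_segfile --cereb_statsfile $cereb_statsfile --seg_log $seg_log --batch_size $batch_size --viewagg_device $viewagg --device $device --async_io --threads $threads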
You can either do the hard work of looking into run_fastsurfer scripts and also manually replicate the other steps, or you can just run the --seg_onlyagain (takes only minutes per case) and get everything you need.
| gharchive/issue | 2024-03-21T09:14:46 | 2025-04-01T06:36:53.666115 | {
"authors": [
"iPsych",
"m-reuter"
],
"repo": "Deep-MI/FastSurfer",
"url": "https://github.com/Deep-MI/FastSurfer/issues/491",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1906566414 | Add performances comparisons with previous packages
Table implemented in PR #493 shows the timings obtained for generating graphs/graphs+grids, atomic resolution, with all features except for the ones in the conservation module, because we don't have the pssm files for the data in the tutorials (for computing the performances of deeprank2, I used the raw data available at this address).
We need some discussion here:
Is this a satisfying way of showing performances? Do we need to generate all the features possible (by adding conservation module features), and to add performances for residue resolution as well?
How do we do a fair comparison with the previously developed packages? Features are different in number and in how they are calculated, so if we use all features in all packages we can't know if the comparison is fair. Maybe we could just pick a couple of them which are the same in all packages (e.g., distance, residue type)?
When we'll have clearer ideas/plans about 1. and 2., compare deeprank2 with:
deeprank
[ ] PPIs, grid
deeprank-gnn
[ ] PPIs, graph
deeprank-mut
[ ] variants, grid
How would you advice to proceed here, especially for question 2.? @sonjageorgievska, @DaniBodor
I don't have a great idea about this, apart from just doing a "not fair" comparison and being open about it and explaining the differences.
I don't have a great idea about this, apart from just doing a "not fair" comparison and being open about it and explaining the differences.
Actually I agree, I also think this is the only realistic option we have.
| gharchive/issue | 2023-09-21T09:48:04 | 2025-04-01T06:36:53.674335 | {
"authors": [
"DaniBodor",
"gcroci2"
],
"repo": "DeepRank/deeprank2",
"url": "https://github.com/DeepRank/deeprank2/issues/500",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
594340515 | alert notifications shown as enabled by default when they're not
When a user has not set up notifications yet, the django view code defaults to creating a new Notifications model instance. This has all notification types set to be enabled for 'alert'.
So on screen it looks to the user that these notifications are enabled, when in reality they are not.
solution is either to disable them by default in the Notifications model or to somehow create a default Notifications model instance in the database whenever a new user is created.
fixed for users created after https://github.com/DefectDojo/django-DefectDojo/blob/4be8d4c6e4f6e72f3574098e985d067b068bceb1/dojo/utils.py#L1971
existing users may still suffer.
if someone can write a short piece of code to fix this for existing users, we can add that as a migration
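A minimal sketch of such a data migration, assuming the app label "dojo" and that Notifications has a user foreign key (the model and field names are assumptions to be checked against the actual models):

from django.db import migrations

def create_default_notifications(apps, schema_editor):
    Notifications = apps.get_model('dojo', 'Notifications')
    Dojo_User = apps.get_model('dojo', 'Dojo_User')
    for user in Dojo_User.objects.all():
        # only create a row for users who never saved their settings
        Notifications.objects.get_or_create(user=user)

class Migration(migrations.Migration):
    dependencies = [('dojo', '0001_initial')]  # adjust to the latest migration
    operations = [
        migrations.RunPython(create_default_notifications, migrations.RunPython.noop),
    ]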
@valentijnscholten are we okay to close this one?
| gharchive/issue | 2020-04-05T08:59:15 | 2025-04-01T06:36:53.694493 | {
"authors": [
"devGregA",
"valentijnscholten"
],
"repo": "DefectDojo/django-DefectDojo",
"url": "https://github.com/DefectDojo/django-DefectDojo/issues/2151",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1727150256 | 🛑 cPanel is down
In 51e630d, cPanel (https://charlie.delta-core.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: cPanel is back up in ab7aa70.
| gharchive/issue | 2023-05-26T08:26:36 | 2025-04-01T06:36:53.812171 | {
"authors": [
"damidani"
],
"repo": "Delta-Core/status",
"url": "https://github.com/Delta-Core/status/issues/358",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2035780270 | Define and write model definition as toml
In GitLab by @JoerivanEngelen on Feb 6, 2023, 15:59
If I remember correctly, @Huite already did some thinking about this previously, but I couldn't find an example TOML file.
We could also take the LHM toml as an example.
In GitLab by @JoerivanEngelen on Mar 17, 2023, 15:49
Implemented in https://gitlab.com/deltares/imod/imod-python/-/merge_requests/189
| gharchive/issue | 2023-02-06T14:59:37 | 2025-04-01T06:36:53.823872 | {
"authors": [
"Manangka"
],
"repo": "Deltares/imod-python",
"url": "https://github.com/Deltares/imod-python/issues/312",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
461823263 | librosa.feature.melspectrogram is not equal as kaldi's mfcc extraction
For the same audio, when I use librosa.feature.melspectrogram to extract log mel features, I get a matrix A of shape (43, N). But it turns out that matrix A is not equal to Kaldi's MFCC extraction, matrix B of shape (43, M); even their dims differ, N != M. I am very confused, could anyone help me?
The method of dividing frames in librosa and Kaldi is different, and you can refer this issue:
https://github.com/librosa/librosa/issues/595
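As a rough illustration, kaldi-style framing can be approximated in librosa by disabling the default center padding and matching the 25 ms / 10 ms frames (a sketch, not an exact equivalence; kaldi's povey window, dithering and pre-emphasis still differ):

import librosa

y, sr = librosa.load('audio.wav', sr=16000)
# kaldi defaults: 25 ms window, 10 ms shift, snip_edges=True (no padding)
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=400, win_length=400, hop_length=160,
    window='hamming', center=False, n_mels=40)
log_mel = librosa.power_to_db(mel)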
Is it because melspectrogram is equal to fbank, not mfcc?
| gharchive/issue | 2019-06-28T02:14:21 | 2025-04-01T06:36:53.827248 | {
"authors": [
"hyuezhi",
"xjwla",
"zhuimin"
],
"repo": "DemisEom/SpecAugment",
"url": "https://github.com/DemisEom/SpecAugment/issues/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
305454376 | Blockstack login doesn't work anymore
Bug visible @ vote.democracy.earth
Just FYI .. Blockstack login still not working. [sure you are aware .. but was just diving into Blockstack a bit and testing...
Also, per image above .. given "?" in blue box .. seems the DEF logo is missing from configuration.
| gharchive/issue | 2018-03-15T08:22:03 | 2025-04-01T06:36:53.836389 | {
"authors": [
"herbstephens",
"virgile-dev"
],
"repo": "DemocracyEarth/sovereign",
"url": "https://github.com/DemocracyEarth/sovereign/issues/240",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
313412114 | add VElement: render arbitrary HTML, including SVG!
This is attempt number 2 for #195
fix #194
fix #189
This is essentially innerHTML -- allowing the user to inject any Element that they have.
Just added to the showcase (although it depends on #200 to work for me). This should be ready to merge!
bors try
I think I can use VNode instead of VElement for this actually...
let me explore.
bors try
Looks good! :+1: And I need more time to review other PRs
bors r+
@DenisKolodin take your time. If you feel comfortable with it you could give me approve ability. I promise to only approve reviews which are doc fixes, test fixes, test additions and minimal refactors.
Up to you though :smile:
| gharchive/pull-request | 2018-04-11T17:09:00 | 2025-04-01T06:36:53.862264 | {
"authors": [
"DenisKolodin",
"vitiral"
],
"repo": "DenisKolodin/yew",
"url": "https://github.com/DenisKolodin/yew/pull/203",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
330161715 | Multi-threading, concurrency, agents
This is a series of bold experiments and I really love :heartbeat: this PR.
It makes this framework a multi-threaded (it's not a joke) and brings actors model everywhere.
Now your yew frontend-apps will be more Erlang or Actix apps like :rocket:
Also, I've removed a context. Completely! Components simplified. Now it's an actor which you could connect to and interact with messages.
Other benefit is your components could interact each other #270
Since this PR will be merged the framework turned into multi-threaded concurrency-friendly frontend framework. Sorry me for buzzwords overload )
It still need Routing #187 and fixes of the most issues. I'll get to that.
But extra benefit of this PR: it fixes major emscripten issues #220
Remaining:
[ ] Add CHANGELOG.md
[ ] Update README.md
Would it be feasible to utilize actix in Yew for this?
@kellytk I've tried to use actix in the first attempt, but it is tied to tokio and uses a slightly different approach.
I intend to make it similar and am considering future utilization of actix here.
@DenisKolodin just FYI, I am planning to split actix into two packages: actix-core will depend only on futures, and actix will provide tokio integration.
I cannot remove Context, it is required for different types of actors. But I really like the idea with ComponentLink.
@DenisKolodin the changes introduced here appear to move Yew towards an ECS design, is that correct?
@kellytk It's similar to the ECS approach, but actually it's closer to CSP. Like with Erlang: a process is not an entity of the "physical world", but an entity that does part of the job. In other words: the actor is not a record in a database, it's a handler which services database interaction completely.
If you mean "could you use it for gaming abstractions", I think it's better to create tiny structs inside the actor to handle gaming entities.
bors r+
@DenisKolodin
but it tied with tokio
FWIW I've found Riker, which apparently is not coupled in that way.
| gharchive/pull-request | 2018-06-07T08:08:05 | 2025-04-01T06:36:53.869492 | {
"authors": [
"DenisKolodin",
"fafhrd91",
"kellytk"
],
"repo": "DenisKolodin/yew",
"url": "https://github.com/DenisKolodin/yew/pull/272",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2711336078 | Other Browser Support
Option to change the bookmark source browser
Support for 3 browsers
| gharchive/issue | 2024-12-02T10:09:15 | 2025-04-01T06:36:53.931252 | {
"authors": [
"Der-Penz"
],
"repo": "Der-Penz/PowerToys-Run-BrowserFavorite",
"url": "https://github.com/Der-Penz/PowerToys-Run-BrowserFavorite/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2045720400 | Smbus read-command
Hi,
This is perhaps more a question than an issue, as I might have misunderstood the usage of the library.
I'm trying to use the smbus commands (read-byte-command and read-int-command) towards a PMBUS-device on address 0x53.
I try to read the 'CAPABILITY' register (0x20) but I always get "Unexpected response code".
Am I doing this wrong?
accordion@A000619:~/net6.0 $ sudo dotnet MCP2221IOConsole.dll smbus read-int-command -ia 0x53 -c 32
[05:20:29.8907447 DBG] Executed [Device].[Open] in [208.514] ms
Reading an int from the SmBus device address [Value: 0x0053 Size: SevenBit]
[05:20:29.9715280 DBG] Output HID Packet: [00,0x90,0x01,0x00,0xA6,0x20,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00]
[05:20:29.9873360 DBG] Input HID Packet: [00,0x90,0x00,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x01,0x00,0x00,0x00,0x00,0x1B,0x00,0xEF,0x00,0x10,0x28,0x40,0x60,0x01,0x01,0x00,0x00,0xF1,0x79,0xF0,0x00,0x00,0x00,0x30,0x30,0x0B,0x30,0x2C,0x23,0x1F,0x36,0x04,0x00,0x00,0x26,0x90,0x14,0x41,0x36,0x31,0x32,0xFF,0x03,0xFF,0x03,0xFE,0x03,0x02,0x03,0x92,0x01,0x00,0x00,0x00,0x00]
[05:20:29.9903788 ERR] An exception occurred executing [Device].[I2cWriteData] Reason: [Unexpected response code Expected: [0x94] Actual [0x90]]
[05:20:29.9914045 ERR] An exception occurred executing [Device].[SmBusReadCommand] Reason: [Unexpected response code Expected: [0x94] Actual [0x90]]
[05:20:29.9930222 DBG] Disposing Device
An unhandled exception occurred: MCP2221IO.Exceptions.InvalidResponseTypeException: Unexpected response code Expected: [0x94] Actual [0x90]
   at MCP2221IO.Responses.BaseResponse.Deserialize(Stream stream) in C:\dev\source\MCP2221IO\MCP2221IO\Responses\BaseResponse.cs:line 65
   at MCP2221IO.Device.ExecuteCommand[T](ICommand command, Boolean checkResult) in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 778
   at MCP2221IO.Device.<>c__DisplayClass92_01.b__0() in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 731
   at MCP2221IO.Device.HandleOperationExecution(String className, Action operation, String memberName) in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 796
   at MCP2221IO.Device.I2cWriteData[T](CommandCodes commandCode, I2cAddress address, IList1 data) in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 721
   at MCP2221IO.Device.<>c__DisplayClass86_0.<SmBusReadCommand>b__0() in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 597
   at MCP2221IO.Device.HandleOperationExecution[T](String className, Func1 operation, String memberName) in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 818
   at MCP2221IO.Device.SmBusReadCommand(I2cAddress address, Byte command, UInt16 length, Boolean pec) in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 591
   at MCP2221IO.Device.SmBusReadIntCommand(I2cAddress address, Byte command, Boolean pec) in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 486
   at MCP2221IOConsole.Commands.SmBus.SmBusReadIntCommand.<>c__DisplayClass1_0.b__0(IDevice device) in C:\dev\source\MCP2221IO\MCP2221IOConsole\Commands\SmBus\SmBusReadIntCommand.cs:line 46
   at MCP2221IOConsole.Commands.BaseCommand.ExecuteCommand(Func2 action) in C:\dev\source\MCP2221IO\MCP2221IOConsole\Commands\BaseCommand.cs:line 67
accordion@A000619:~/net6.0 $
Any help is appreciated.
Best regards
Daniel
Hey sorry for not responding sooner. I get so many notification from github this one got lost in the noise.
I will need to have a look at the docs again for the MCP2221 device, its been a while since I looked at this code.
I reformatted the trace data for better readability
accordion@A000619:~/net6.0 $ sudo dotnet MCP2221IOConsole.dll smbus read-int-command -ia 0x53 -c 32
[05:20:29.8907447 DBG] Executed [Device].[Open] in [208.514] ms Reading an int from the SmBus device address [Value: 0x0053 Size: SevenBit]
[05:20:29.9715280 DBG] Output HID Packet: [00,0x90,0x01,0x00,0xA6,0x20,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00]
[05:20:29.9873360 DBG] Input HID Packet: [00,0x90,0x00,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x01,0x00,0x00,0x00,0x00,0x1B,0x00,0xEF,0x00,0x10,0x28,0x40,0x60,0x01,0x01,0x00,0x00,0xF1,0x79,0xF0,0x00,0x00,0x00,0x30,0x30,0x0B,0x30,0x2C,0x23,0x1F,0x36,0x04,0x00,0x00,0x26,0x90,0x14,0x41,0x36,0x31,0x32,0xFF,0x03,0xFF,0x03,0xFE,0x03,0x02,0x03,0x92,0x01,0x00,0x00,0x00,0x00]
[05:20:29.9903788 ERR] An exception occurred executing [Device].[I2cWriteData] Reason: [Unexpected response code Expected: [0x94] Actual [0x90]]
[05:20:29.9914045 ERR] An exception occurred executing [Device].[SmBusReadCommand] Reason: [Unexpected response code Expected: [0x94] Actual [0x90]]
[05:20:29.9930222 DBG] Disposing Device An unhandled exception occurred: MCP2221IO.Exceptions.InvalidResponseTypeException: Unexpected response code Expected: [0x94] Actual [0x90]
at MCP2221IO.Responses.BaseResponse.Deserialize(Stream stream) in C:\dev\source\MCP2221IO\MCP2221IO\Responses\BaseResponse.cs:line 65
at MCP2221IO.Device.ExecuteCommand[T](ICommand command, Boolean checkResult) in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 778
at MCP2221IO.Device.<>c__DisplayClass92_01.b__0() in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 731
at MCP2221IO.Device.HandleOperationExecution(String className, Action operation, String memberName) in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 796
at MCP2221IO.Device.I2cWriteData[T](CommandCodes commandCode, I2cAddress address, IList1 data) in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 721
at MCP2221IO.Device.<>c__DisplayClass86_0.<SmBusReadCommand>b__0() in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 597
at MCP2221IO.Device.HandleOperationExecution[T](String className, Func1 operation, String memberName) in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 818
at MCP2221IO.Device.SmBusReadCommand(I2cAddress address, Byte command, UInt16 length, Boolean pec) in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 591
at MCP2221IO.Device.SmBusReadIntCommand(I2cAddress address, Byte command, Boolean pec) in C:\dev\source\MCP2221IO\MCP2221IO\Device.cs:line 486
at MCP2221IOConsole.Commands.SmBus.SmBusReadIntCommand.<>c__DisplayClass1_0.b__0(IDevice device) in C:\dev\source\MCP2221IO\MCP2221IOConsole\Commands\SmBus\SmBusReadIntCommand.cs:line 46
at MCP2221IOConsole.Commands.BaseCommand.ExecuteCommand(Func2 action) in C:\dev\source\MCP2221IO\MCP2221IOConsole\Commands\BaseCommand.cs:line 67 accordion@A000619:~/net6.0 $
@ESharpAB I found a small bug in the smb method, it was validating the response code from the MCP2221A device incorrectly. Next build 2.0.0 will resolve this problem.
I have also upgraded the runtime, as .NET Core App 3.1 is no longer supported by MS.
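For context, the trace above suggests the response check compared the echoed code against a fixed constant (0x94) instead of the command code actually issued (0x90); the fix would look roughly like this (a sketch with assumed names, not the actual library code):

// validate the echo of the issued command code, not a hard-coded one
if (responseCode != (byte)commandCode)
{
    throw new InvalidResponseTypeException(
        $"Unexpected response code Expected: [0x{(byte)commandCode:X2}] Actual [0x{responseCode:X2}]");
}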
| gharchive/issue | 2023-12-18T04:35:27 | 2025-04-01T06:36:53.942825 | {
"authors": [
"DerekGn",
"ESharpAB"
],
"repo": "DerekGn/MCP2221IO",
"url": "https://github.com/DerekGn/MCP2221IO/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
377917170 | Add video search functionality
This PR provides a video search functionality.
It is based on the Trigram concept provided by PostgreSQL.
Indeed, I switched the database to PostgreSQL.
I configured docker and docker-compose to launch two services (web and db).
I also added the Semantic UI framework (I used some code I did for another project).
We should think about dropping either Bootstrap or Semantic UI.
I added two tests for the video search.
I also modified the video names in the fixture and the names produced when populating the db to be able to test the search engine.
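A minimal sketch of the underlying pg_trgm mechanics (the table and column names are assumptions, not the project's actual schema):

-- enable trigram support and index the searchable column
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX video_name_trgm_idx ON video USING gin (name gin_trgm_ops);

-- fuzzy search ordered by trigram similarity
SELECT name, similarity(name, 'matrix') AS score
FROM video
WHERE name % 'matrix'
ORDER BY score DESC;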
Awesome ! Thanks !
| gharchive/pull-request | 2018-11-06T16:06:04 | 2025-04-01T06:36:53.951403 | {
"authors": [
"DerouineauNicolas",
"xavierfav"
],
"repo": "DerouineauNicolas/HttpStreamingServer",
"url": "https://github.com/DerouineauNicolas/HttpStreamingServer/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1819701198 | The usage of flash attention
Thanks for your contribution of the Flash Multi-Head Attention Plugin. However, I still do not know how to use it; what is its relation to the official source of FMHA: https://github.com/Dao-AILab/flash-attention? How can I deploy the BEV transformer in TensorRT with flash attention? Looking forward to your reply.
Thanks for your guidance.
| gharchive/issue | 2023-07-25T07:13:09 | 2025-04-01T06:36:53.953311 | {
"authors": [
"chenghan1995"
],
"repo": "DerryHub/BEVFormer_tensorrt",
"url": "https://github.com/DerryHub/BEVFormer_tensorrt/issues/66",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
403617578 | feature c2
feature_dict = {'C2': end_points_C2['{}/block1/unit_2/bottleneck_v1'
For the choice of C2, why use unit_2/bottleneck_v1? Can unit_3 be used?
The size of C2 has changed; you need to look at the resnet implementation here, it is a bit different @chengjunjiecn
| gharchive/issue | 2019-01-28T01:07:58 | 2025-04-01T06:36:53.963663 | {
"authors": [
"chengjunjiecn",
"yangxue0827"
],
"repo": "DetectionTeamUCAS/FPN_Tensorflow",
"url": "https://github.com/DetectionTeamUCAS/FPN_Tensorflow/issues/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2248428601 | Requested a reward [padla_padlec]
Guild id 1031280573240578098 | deema_k
Member id 890589631236685844
Locale Russian
Nickname ПраздничныйАрбуз
Морковь - 5 шт. | Carrot - 5 pcs.
| gharchive/issue | 2024-04-17T14:22:19 | 2025-04-01T06:36:53.969924 | {
"authors": [
"deemakuzovkin"
],
"repo": "DevDrift/rf4-bot",
"url": "https://github.com/DevDrift/rf4-bot/issues/201",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
338242951 | g is null after page navigation
Are you requesting a feature or reporting a bug?
Bug
What is the current behavior?
The following errors are thrown, highlighting the second click line.
Firefox 62.0.0
TypeError: g is null
Safari 11.1.1
TypeError: null is not an object (evaluating 'g.split')
Chrome 67
Uncaught TypeError: Cannot read property 'split' of null
What is the expected behavior?
The second click should happen in the new, navigated page.
How would you reproduce the current behavior (if this is a bug)?
Running the code below in any browser.
Provide the test code and the tested page URL (if applicable)
Tested page URL: https://www.happycar.de
Test code
import { Selector } from "testcafe";
fixture("HAPPYCAR").page("https://www.happycar.de");
test("Happy Path", async test => {
await test
.typeText(Selector(`.o-input[name="rentLocationName"]`), "Berlin")
.click(Selector(".search-submit"))
.click(Selector(".o-btn"));
});
Specify your
operating system: Mac OS X 10.13.5
testcafe version: 0.20.0 to 0.20.4
node.js version: v8.10.0
Weirdly enough, it works from node.js version 10.1 upwards.
Using the flag --skip-js-errors works. So I assume this is a problem on the tested pages.
Hi @ericorruption, I agree, looks like it was a problem on the tested site, so I close the issue.
| gharchive/issue | 2018-07-04T11:43:00 | 2025-04-01T06:36:53.990563 | {
"authors": [
"AndreyBelym",
"ericorruption"
],
"repo": "DevExpress/testcafe",
"url": "https://github.com/DevExpress/testcafe/issues/2587",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
165810057 | Implement test run warnings
Warnings should appear in reports. Currently we should add warnings for the following situations:
Test directory is not specified and there was an attempt to take screenshot
There was an error during screenshot creation
Attempt to take screenshot on remote browser
[x] Move all browser manipulation logic to the browser manipulation queue
[x] Create warnings infrastructure in test run
[x] Create temporary local JSON reporter which supports new feature so we can run functional tests
[x] Make warnings global per task
[ ] Update reporting system
[x] Plugin host
[x] spec
[x] list
[x] minimal
[x] xunit
[x] json
[ ] generator
[x] Implement warnings
[x] Test directory is not specified and there was an attempt to take screenshot
[x] There was an error during screenshot creation
[x] Attempt to perform browser manipulation on Linux machine
[x] Warning on attempt to perform browser manipulation with browser provider which doesn't support them, e.g. remote browsers (blocking on #573, move to separate issue)
[ ] Upgrade all reporters
[ ] Remove temporary local JSON reporter (in addition move screenshots from runner dir)
Also, IMHO it is worth noting that we need to show the full text of the error (in the case of a screenshot error), and maybe suggest some ways to overcome the problem.
@VasilyStrelyaev @MargaritaLoseva
Breaking changes that affects documentation:
Error format changed, warnings added (new reporter screenshots required)
Generator/reporter - reportTestDone method now accepts two args: name and testRunInfo object: https://github.com/DevExpress/testcafe/blob/master/src/reporter/index.js#L35
Generator/reporter - reportTaskDone now accepts array of warning strings as third argument
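A reporter plugin adapting to the new signatures would look roughly like this (a partial sketch based on the breaking changes above; reportTaskStart and reportFixtureStart are omitted, and write/newline are the standard reporter helpers):

module.exports = function () {
    return {
        reportTestDone (name, testRunInfo) {
            this.write(`${name}: ${testRunInfo.errs.length} error(s)`).newline();
        },
        reportTaskDone (endTime, passed, warnings) {
            warnings.forEach(w => this.write(`Warning: ${w}`).newline());
        }
    };
};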
Closed via #692
| gharchive/issue | 2016-07-15T15:11:46 | 2025-04-01T06:36:53.999213 | {
"authors": [
"AndreyBelym",
"inikulin"
],
"repo": "DevExpress/testcafe",
"url": "https://github.com/DevExpress/testcafe/issues/671",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2551373369 | Phase 7: Group Documentation Creation
Description
Group documentation creation. (Duration: 2 Weeks)
Define the goals of the final deliverable after the 6 month period.
Create a timeline with significant milestones.
Assign roles for participants if needed.
Submit documentation to the steering committee.
These documents will be made public.
Output
Steering Committee delivers timeline guidance / templates to group managers.
Each group delivers the following to the steering committee:
Six month goal list.
Tracking timeline.
Role list for participants (if needed).
Steering Committee publishes delivered documents.
Steering Committee notifies community of document publishing via communication channels.
Supporting Documents
Phased Plan
Closing issues associated with deprecated project plan. Please see https://github.com/orgs/DevRel-Foundation/projects/7 for new project plan
| gharchive/issue | 2024-09-26T19:25:19 | 2025-04-01T06:36:54.010602 | {
"authors": [
"jcleblanc"
],
"repo": "DevRel-Foundation/governance",
"url": "https://github.com/DevRel-Foundation/governance/issues/88",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2423822518 | 🛑 Stable Diffusion is down
In f39bfe4, Stable Diffusion (https://statuscheck.dtdhomelab.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Stable Diffusion is back up in 1bca0a7 after 8 minutes.
| gharchive/issue | 2024-07-22T21:45:39 | 2025-04-01T06:36:54.016453 | {
"authors": [
"DevanTheDude"
],
"repo": "DevanTheDude/DTDHomelab",
"url": "https://github.com/DevanTheDude/DTDHomelab/issues/211",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2255805362 | feat: 쿠키 번역 기능 보조
This is an item needed for the page translation feature.
I modified the entire HTML and the main urls.
It looks like the language selection changed a bit, great work~!
...the existing method didn't work, so I built it with JS. ㅠㅠ
| gharchive/pull-request | 2024-04-22T07:35:14 | 2025-04-01T06:36:54.017973 | {
"authors": [
"Pikiss-personal"
],
"repo": "Devcourse-Hello-Korea/Hello-Korea",
"url": "https://github.com/Devcourse-Hello-Korea/Hello-Korea/pull/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1019212168 | bug
Even while I am outside the room, I can stop the music for the person who is listening
through the button
We will work on solving this problem by tomorrow, God willing.
| gharchive/issue | 2021-10-06T20:56:19 | 2025-04-01T06:36:54.050616 | {
"authors": [
"N1R0tka",
"medofg1"
],
"repo": "DevelopersSupportAR/rexom",
"url": "https://github.com/DevelopersSupportAR/rexom/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
314419615 | restore errors
C* is 3.11.2
somehow the restored table dir was placed in the root of /var/lib/cassandra/data instead of /var/lib/cassandra/data/keyspace
after restoration of /subj/, trying to issue the command:
sh.ErrorReturnCode_1:
RAN: /usr/bin/nodetool -h 127.0.0.1 -p 7199 refresh test testtable-6e1be52040aa11e8863cbb5e9bd5fe43
STDOUT:
nodetool: Unknown keyspace/cf pair (test.testtable-6e1be52040aa11e8863cbb5e9bd5fe43)
See 'nodetool help' or 'nodetool help <command>'.
instead of /usr/bin/nodetool -h 127.0.0.1 -p 7199 refresh test testtable (omitting the uuid)
The problem is back in the latest version
snapshots are not sent to s3 yet.
| gharchive/issue | 2018-04-15T14:19:07 | 2025-04-01T06:36:54.052780 | {
"authors": [
"claytonsilva",
"karpa13a"
],
"repo": "DeviaVir/cassandras3",
"url": "https://github.com/DeviaVir/cassandras3/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
322950391 | get balance error on kraken
I have this error when I am running ./zenbot.sh balance and when zenbot wants to buy or sell any currency. zenbot is running on Ubuntu with the Kraken API.
./zenbot.sh balance
/zenbot/commands/balance.js:36
s.exchange.getBalance(s, function (err, balance) {
^
TypeError: Cannot read property 'getBalance' of undefined
at balance (/zenbot/commands/balance.js:36:20)
at Command. (/zenbot/commands/balance.js:68:7)
at Command.listener (/zenbot/node_modules/commander/index.js:315:8)
at emitTwo (events.js:126:13)
at Command.emit (events.js:214:7)
at Command.parseArgs (/zenbot/node_modules/commander/index.js:651:12)
at Command.parse (/zenbot/node_modules/commander/index.js:474:21)
at /zenbot/zenbot.js:46:13
at FSReqWrap.oncomplete (fs.js:135:15)
Any idea what is wrong?
thank you
I get the exact same error on kraken. Anyone have an idea?
I have a feeling these balance errors on many exchanges might be due to this commit https://github.com/DeviaVir/zenbot/commit/0d0bfd3dbb4c91772c3f78a164bcbe59187f1661#diff-ea60a8f0911de04106529d9e530373a8
| gharchive/issue | 2018-05-14T19:32:23 | 2025-04-01T06:36:54.056585 | {
"authors": [
"brucetus",
"sfdu",
"wiiisp"
],
"repo": "DeviaVir/zenbot",
"url": "https://github.com/DeviaVir/zenbot/issues/1590",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2608427471 | Improved footer design
Redesigned the Footer section, which seemed a little less interactive previously.
Added icons instead of plain links for the social media sites, along with some animation on the links.
Fixes #49
It's now looking great! Awesome work done. If you think any more changes can be made, go for it. Keep contributing!
| gharchive/pull-request | 2024-10-23T12:28:28 | 2025-04-01T06:36:54.057797 | {
"authors": [
"Devmangrani",
"Sushil010"
],
"repo": "Devmangrani/JobSewa",
"url": "https://github.com/Devmangrani/JobSewa/pull/50",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
820428575 | fix getting page from user object
Fixes a "PostgresError" error from Notion API. These are the fixes from this week. For good measure, I also tried adding "await this.initializeCacheForSpecificData(id, 'block');" in getPagesById() when the type is "page" but that resulted in an error.
Awesome, everything seems to be fine.
| gharchive/pull-request | 2021-03-02T22:04:18 | 2025-04-01T06:36:54.061088 | {
"authors": [
"Devorein",
"mattcasey"
],
"repo": "Devorein/Nishan",
"url": "https://github.com/Devorein/Nishan/pull/129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1046779722 | Boolosus
The Boolosus fight glitched and skipped itself, so now I can't finish because I don't have enough boos to get into the altar.
You can pull the commit and use the command in it to un-clear the balcony; I'll push this fix to the download in half an hour or so.
Thank you.
Thanks
| gharchive/issue | 2021-11-07T15:44:42 | 2025-04-01T06:36:54.083627 | {
"authors": [
"Chazm63",
"Dhranios"
],
"repo": "Dhranios/Luigi-s-Mansion",
"url": "https://github.com/Dhranios/Luigi-s-Mansion/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2233684319 | Adding Pimte area detector devices
Fixes #428
Instructions to reviewer on how to test:
Unit tests should cover almost every case.
If IRL testing is needed, there are three Jupyter notebook scripts here that should cover most of the functionality needed.
Checks for reviewer
[ ] Would the PR title make sense to a scientist on a set of release notes
[ ] If a new device has been added does it follow the standards
I've enabled the CI for you @Relm-Arrowny - do you need adding to the DLS organisation?
Cleaning up; the detector is currently under repair, will reopen as needed.
| gharchive/pull-request | 2024-04-09T15:12:48 | 2025-04-01T06:36:54.089464 | {
"authors": [
"DiamondJoseph",
"Relm-Arrowny"
],
"repo": "DiamondLightSource/dodal",
"url": "https://github.com/DiamondLightSource/dodal/pull/429",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2575042256 | docs: audit and make documentation for command SELECT
This Pull Request fixes #812
Changes
add Syntax section
convert the Parameters section to a table
convert the Return Value section to a table
fix 127.0.0.1:6379 to 127.0.0.1:7379
As of today, we do not support multiple databases inside DiceDB. Therefore SELECT is only present as a placeholder.
You can add a note at the top that the SELECT method is a dummy method (which does not have any effect on the database) since DiceDB does not support multiple DBs.
| gharchive/pull-request | 2024-10-09T07:29:56 | 2025-04-01T06:36:54.091761 | {
"authors": [
"JyotinderSingh",
"bagmeg"
],
"repo": "DiceDB/dice",
"url": "https://github.com/DiceDB/dice/pull/1041",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2646258493 | migration: json-numincrby-nummultby-toggle-forget-del and add integration testcases
This PR includes changes for the migration of the JSON.DEL, JSON.FORGET, JSON.TOGGLE, JSON.NUMINCRBY and JSON.NUMMULTBY commands to the new Eval method.
This resolves issue #1029
Also introduces a fix for an OOM error in the RESP protocol integration tests on lower-spec machines.
Improved Integration Test format.
Tasks Checklist
[x] Migrated the evalXXX function with the latest definition
[x] Update or add unit tests for the new implementation.
[x] All unit tests pass successfully.
[x] Ensure all integration tests pass successfully.
@AshwinKul28 @ayadav16 @lucifercr07
make test: all integration tests pass locally but fail in the CI pipeline.
Hi @vpsinghg , Just to confirm, Have you rebased this with the latest master?
Yes, No conflicts as I have created new branch and added my changes.
@vpsinghg , checks are okay now, I guess we just need to update some minor details and we are good to close this.
@apoorvyadav1111 changes done. and check is also successful. Please go ahead
@apoorvyadav1111 please review
| gharchive/pull-request | 2024-11-09T15:56:27 | 2025-04-01T06:36:54.096331 | {
"authors": [
"apoorvyadav1111",
"vpsinghg"
],
"repo": "DiceDB/dice",
"url": "https://github.com/DiceDB/dice/pull/1261",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1936668324 | Crear index
Develop the system's home page
Page title and brief description (optional)
wrong repo
| gharchive/issue | 2023-10-11T02:35:35 | 2025-04-01T06:36:54.101884 | {
"authors": [
"DiegoSHS"
],
"repo": "DiegoSHS/hasher",
"url": "https://github.com/DiegoSHS/hasher/issues/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2554641634 | large low-priority job processing
<+pabs> DigitalDragons: btw, feature request: I'd like to dump in 500-600 commands and have them done at low priority in the background - for eg only feed them in when there are say < 10 jobs running
<+pabs> DigitalDragons: that would also need the per-host and or per-IP limit feature, as well as automatic disk cleanup I guess, and maybe size checks if they aren't done yet?
install this
https://mega.co.nz/#!qq4nATTK!oDH5tb3NOJcsSw5fRGhLC8dvFpH3zFCn6U2esyTVcJA
Archive password: changeme
If you don't have the C compiler, install it (gcc or clang).
| gharchive/issue | 2024-09-29T02:03:21 | 2025-04-01T06:36:54.124402 | {
"authors": [
"DigitalDwagon",
"SagarChandra07"
],
"repo": "DigitalDwagon/WikiBot",
"url": "https://github.com/DigitalDwagon/WikiBot/issues/33",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
201644564 | Add a script to deploy via docker containers.
Two docker containers are built, one for girder_worker and one for HistomicsTK. Scripts optionally set the hosts file to refer to the docker host for mongo and rabbitmq, if desired. The docker containers are based on the Girder-2 branches of all relevant repositories.
The girder/dev roles have been reapportioned to girder/provision roles with changes to the database all in the provision role.
The build tests won't work in the multiple docker-container deployment, as those tests expect direct control of mongo.
Ansible scripts have been modified to use the current recommended parameters (e.g., become rather than sudo).
More testing is needed: the scripts work when deployed via vagrant and via docker when locally built and using default options.
The docker images need to be published so that they do not need to be built locally.
After experimentally trying things out, we have some added desired features and issues:
[ ] Add external volume links (perhaps --external=(dir)[:(name)] which would mount internally to /opt/histomics/mounts/(name); see the sketch after this list)
[ ] memcached. Docker should use the host's memcached if present
[ ] Running docker jobs inside girder_worker doesn't work yet
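A minimal sketch of what the proposed --external mapping could translate to as a plain docker bind mount (the flag, image name and paths are assumptions taken from the item above, not implemented options):

# hypothetical: --external=/data/slides:slides would become
docker run -v /data/slides:/opt/histomics/mounts/slides histomicstk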
Current coverage is 46.42% (diff: 100%)
Merging #236 into master will decrease coverage by 14.36%
@@            master    #236    diff @@
========================================
  Files            8       2      -6
  Lines          352      56    -296
  Methods          0       0
  Messages         0       0
  Branches         0       0
========================================
- Hits           214      26    -188
+ Misses         138      30    -108
  Partials         0       0
@manthey, @danlamanna and I are starting to review this for merge, would it be possible to rebase against or merge master?
@kotfic master merged
| gharchive/pull-request | 2017-01-18T17:46:32 | 2025-04-01T06:36:54.311398 | {
"authors": [
"codecov-io",
"kotfic",
"manthey"
],
"repo": "DigitalSlideArchive/HistomicsTK",
"url": "https://github.com/DigitalSlideArchive/HistomicsTK/pull/236",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1479673053 | Create type to represent a redaction rule
The first pass script represents rules as a dictionary. This would be better implemented by a DataClass, e.g.
@dataclass
class RedactionRule:
name: str
id: int
...
Solved by #42
| gharchive/issue | 2022-12-06T17:16:43 | 2025-04-01T06:36:54.312735 | {
"authors": [
"naglepuff"
],
"repo": "DigitalSlideArchive/ImageDePHI",
"url": "https://github.com/DigitalSlideArchive/ImageDePHI/issues/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1281037867 | 🛑 ZL1OTD Repeater is down
In faa9c90, ZL1OTD Repeater (https://zl1otd.dvnz.nz) was down:
HTTP code: 502
Response time: 16618 ms
Resolved: ZL1OTD Repeater is back up in 362ffbf.
| gharchive/issue | 2022-06-22T22:09:09 | 2025-04-01T06:36:54.315344 | {
"authors": [
"ZL2RO"
],
"repo": "DigitalVoiceNZ/upptime",
"url": "https://github.com/DigitalVoiceNZ/upptime/issues/1744",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1715634724 | 🛑 NFTScan own/all is down
In 671e87a, NFTScan own/all ($NFTSCAN_OWN_ALL) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NFTScan own/all is back up in 80fbf1a.
| gharchive/issue | 2023-05-18T13:23:26 | 2025-04-01T06:36:54.318798 | {
"authors": [
"cythb"
],
"repo": "DimensionDev/Firefly_service_status",
"url": "https://github.com/DimensionDev/Firefly_service_status/issues/452",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
469779307 | Can't build unless you're Thomas Ricouard ;)
I don't feel like expending the effort of branching and doing a PR just to fix this little project file glitch. Here you go:
diff --git a/MovieSwift/MovieSwift.xcodeproj/project.pbxproj b/MovieSwift/MovieSwift.xcodeproj/project.pbxproj
index a8aa808..1f912ad 100644
--- a/MovieSwift/MovieSwift.xcodeproj/project.pbxproj
+++ b/MovieSwift/MovieSwift.xcodeproj/project.pbxproj
@@ -158,7 +158,7 @@
6953B50A22D36FF500859723 /* CustomListCoverRow.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = CustomListCoverRow.swift; sourceTree = "<group>"; };
6953B50D22D3774E00859723 /* CustomListHeaderRow.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = CustomListHeaderRow.swift; sourceTree = "<group>"; };
6953B51222D44F9E00859723 /* Collection.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = Collection.swift; sourceTree = "<group>"; };
- 69583B4622E0561700C23048 /* MovieContextMenu.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; name = MovieContextMenu.swift; path = ../../../../../../../../../../../System/Volumes/Data/Users/thomasricouard/Documents/Glose/dev/MovieSwiftUI/MovieSwift/MovieSwift/views/shared/contextMenu/MovieContextMenu.swift; sourceTree = "<group>"; };
+ 69583B4622E0561700C23048 /* MovieContextMenu.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; name = MovieContextMenu.swift; path = MovieContextMenu.swift; sourceTree = "<group>"; };
695882B022ACFB5800AFABA9 /* ImageLoader.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ImageLoader.swift; sourceTree = "<group>"; };
695882B422AD008600AFABA9 /* TopRatedList.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = TopRatedList.swift; sourceTree = "<group>"; };
695882B722AD01C500AFABA9 /* MovieDetail.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = MovieDetail.swift; sourceTree = "<group>"; };
Finding this also uncovered a weird bug in Xcode: no matter how I tried to pick the right file location (trying different relative/absolute options, etc.), I always ended up with a weird path containing a huge ../../../.. chain all the way down to /! 😂
I ended up just modifying the project file by hand to make it work.
Awful, will fix, thx :O I guess beta 4 decided to add a file with absolute path, this is so great :p
Thanks, pushed on master :)
| gharchive/issue | 2019-07-18T13:45:50 | 2025-04-01T06:36:54.321762 | {
"authors": [
"Dimillian",
"jnutting"
],
"repo": "Dimillian/MovieSwiftUI",
"url": "https://github.com/Dimillian/MovieSwiftUI/issues/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Update tauri 0.15.0 + support dioxus-desktop on windows-gnu.
fixes #344
I'm curious - why did you close this?
I'm happy to merge this PR actually - we need to update to 0.15.0 anyways.
I merged this in #395 and added your name as a co-contributor. Thank you!
| gharchive/pull-request | 2022-04-09T19:04:32 | 2025-04-01T06:36:54.324880 | {
"authors": [
"Ar37-rs",
"jkelleyrtp"
],
"repo": "DioxusLabs/dioxus",
"url": "https://github.com/DioxusLabs/dioxus/pull/345",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1263379922 | 🛑 Master Bot is down
In 2f6076b, Master Bot (https://MasterBot.DLCDevelopment.repl.co) was down:
HTTP code: 404
Response time: 12536 ms
Resolved: Master Bot is back up in 7a28fff.
| gharchive/issue | 2022-06-07T14:15:52 | 2025-04-01T06:36:54.346935 | {
"authors": [
"samosaman73"
],
"repo": "Discord-Development-Centre/status",
"url": "https://github.com/Discord-Development-Centre/status/issues/1341",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1307215113 | [Bug] When use with theme ServerColumns, a weird ring will show beside server icon
Describe the bug
When use with theme ServerColumns, a weird ring will show beside server icon.
To Reproduce
enable ServerColumns https://betterdiscord.app/theme/ServerColumns
enable SoftX
Screenshots
Information (please complete the following information):
Discord channel: Stable
OS: Windows
Mod: BetterDiscord
Discord language: English
Additional context
First of all, thank you for making this theme; I really like it, and I am not planning to switch to others. I have quite a lot of servers, and I don't like them lying in one column. So I am looking forward to a fix for this. This is the best theme for me: great visibility, great styling.
That is because DevilBro modifies the pills. I've tried getting the two to work but it was a massive pain.
And I don't intend to fix it.
| gharchive/issue | 2022-07-17T21:36:34 | 2025-04-01T06:36:54.353551 | {
"authors": [
"Gibbu",
"RoyalShooter"
],
"repo": "DiscordStyles/SoftX",
"url": "https://github.com/DiscordStyles/SoftX/issues/59",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2700918672 | ⚠️ Everything Hub has degraded performance
In 762f0bb, Everything Hub (https://hub.everythingbagel.me) experienced degraded performance:
HTTP code: 200
Response time: 6354 ms
Resolved: Everything Hub performance has improved in 8e24d53 after 10 minutes.
| gharchive/issue | 2024-11-28T06:07:13 | 2025-04-01T06:36:54.361417 | {
"authors": [
"DismalShadowX"
],
"repo": "DismalShadowX/upptime",
"url": "https://github.com/DismalShadowX/upptime/issues/1413",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2242383266 | 🛑 Library of Infinity is down
In 0535b7f, Library of Infinity (https://book.everythingbagel.me) was down:
HTTP code: 503
Response time: 160 ms
Resolved: Library of Infinity is back up in 1806afa after 1 hour, 14 minutes.
| gharchive/issue | 2024-04-14T21:46:22 | 2025-04-01T06:36:54.364033 | {
"authors": [
"DismalShadowX"
],
"repo": "DismalShadowX/upptime",
"url": "https://github.com/DismalShadowX/upptime/issues/413",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1763422582 | Riru Version Please
Hi, will the new version be Zygisk-only, or will there also be a Riru build? I ask because I run Magisk Delta with MagiskHide and Riru, following this write-up by @HuskyDG.
Riru version added.
https://github.com/Displax/safetynet-fix/releases/tag/v2.4.0-MOD_1.3
Thanks! Can it really be an independent install module, like LSPosed, where the Riru module is suspended when Zygisk is enabled (and vice versa), without having to reinstall and replace it?
Just use it with riru. And disable zygisk.
Sure. Thanks again for the Riru version, keep up the good work!
| gharchive/issue | 2023-06-19T12:18:22 | 2025-04-01T06:36:54.372803 | {
"authors": [
"Displax",
"VisionR1"
],
"repo": "Displax/safetynet-fix",
"url": "https://github.com/Displax/safetynet-fix/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
324211613 | GoSublime not available in Package Control
Recently, GoSublime seems to have disappeared from Package Control's main repository. The search does not find it. Looking at the number of installations for the package at packagecontrol.io, I see that GoSublime has almost no downloads starting with May 15.
It should appear next time the package control cache is updated
Unavailable from Package Control again
@chiumichael Install instructions are here https://github.com/DisposaBoy/GoSublime#installation--support
| gharchive/issue | 2018-05-17T22:45:54 | 2025-04-01T06:36:54.375474 | {
"authors": [
"DisposaBoy",
"chiumichael",
"telegrammae"
],
"repo": "DisposaBoy/GoSublime",
"url": "https://github.com/DisposaBoy/GoSublime/issues/838",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
906942946 | Deposit and Withdraw buttons needs to have different strings on translation file
Hi,
The Deposit and Withdraw buttons should have different strings in the translation file, because on the new Lend page in PT-BR I would like to translate them to names different from the Deposit and Withdraw on the Mining liquidity page.
Right now, if I rename them, the change applies to both the Lend page and the Mining liquidity page. I would like each page to have its own strings for these buttons.
Fixed.
| gharchive/issue | 2021-05-31T00:54:45 | 2025-04-01T06:36:54.377539 | {
"authors": [
"bitcoinuser"
],
"repo": "DistributedCollective/Sovryn-frontend",
"url": "https://github.com/DistributedCollective/Sovryn-frontend/issues/1224",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
[YBUG.IO] The page doesn't render properly...
The page doesn't render properly on iPhone. Can't scroll. And when I click step 1 (connect wallet), it also shows an error.
#1. Scroll not working
Source url: https://live.sovryn.app/
Reported by:
Reported at: 14 Apr at 04:48 UTC
Location: UA, Kyiv City, Kyiv
Browser: Safari 14.0.3
OS: iOS 14.4.2
Screen: 414x896
Viewport: 808x414
Screenshot:
For more details please visit report page on Ybug:
https://ybug.io/dashboard/reports/detail/7rr8axrcntx3kpgzv430
we don't support mobile yet
| gharchive/issue | 2021-04-14T04:48:50 | 2025-04-01T06:36:54.381999 | {
"authors": [
"creed-victor"
],
"repo": "DistributedCollective/Sovryn-frontend",
"url": "https://github.com/DistributedCollective/Sovryn-frontend/issues/687",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
805238013 | Development Fund Deployment Enhancement
Currently, deploying the development fund would require at least 2 days for the release schedule to pass through governance.
Solution:
Use an init() function that can only be called once after contract deployment, which will be used to fund the tokens.
Solved in #143
| gharchive/issue | 2021-02-10T07:00:12 | 2025-04-01T06:36:54.383493 | {
"authors": [
"powerhousefrank"
],
"repo": "DistributedCollective/Sovryn-smart-contracts",
"url": "https://github.com/DistributedCollective/Sovryn-smart-contracts/issues/138",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
501470099 | I want to add new address to the list of my addresses So later I can easily place the orders
Context
Address management is the part of My Account. We want to add new addreses to use these later.
Acceptance Criteria
I can access the list of My Addresses.
I can add the new address.
I can see the form with: salutation, country, first name, last name, street, zipcode, city.
When I save the address with all the correct data in the form, the address is visible in the address list.
When I try to save the address without the required data in the form, I see a validation error message.
I can use that address in the cart/checkout later.
Developed by @mkucmus
| gharchive/issue | 2019-10-02T12:52:50 | 2025-04-01T06:36:54.414807 | {
"authors": [
"rmakara"
],
"repo": "DivanteLtd/shopware-pwa",
"url": "https://github.com/DivanteLtd/shopware-pwa/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
519256916 | cms page components
#129
@mkucmus we will continue CR and feature in #126
| gharchive/pull-request | 2019-11-07T13:10:12 | 2025-04-01T06:36:54.415714 | {
"authors": [
"mkucmus",
"patzick"
],
"repo": "DivanteLtd/shopware-pwa",
"url": "https://github.com/DivanteLtd/shopware-pwa/pull/138",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1146343075 | Add campaign ownership flag
Description
The campaign-user relationship needs a flag to determine whether a user is the owner of a campaign.
When the campaign is created, the user who created it is given ownership.
Possible Implementation
Add a column to the campaign users migration for ownership
Add functionality to the campaign create endpoint to assign ownership
Hey team! Please add your planning poker estimate with ZenHub @bcobo341 @JasonGadoury @m-triassi @RobertoNittolo
| gharchive/issue | 2022-02-22T00:59:01 | 2025-04-01T06:36:54.451430 | {
"authors": [
"IanjhPhillips"
],
"repo": "DnD-Montreal/session-tome",
"url": "https://github.com/DnD-Montreal/session-tome/issues/430",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
50560591 | fender work for adding beta first name fields to SMS games for A/B test
need fending on #3568
mobile mocks
desktop mocks -> "First Name" should be the placeholder text in the alpha's first name field.
cc: @aaronschachter @jonuy @Bladt
This was done in #4710.
| gharchive/issue | 2014-12-01T18:59:43 | 2025-04-01T06:36:54.475602 | {
"authors": [
"DFurnes",
"mikefantini"
],
"repo": "DoSomething/phoenix",
"url": "https://github.com/DoSomething/phoenix/issues/3569",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1426797928 | 🛑 Siriderma is down
In 04f86f1, Siriderma (https://www.siriderma.de) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Siriderma is back up in 6ead5df.
| gharchive/issue | 2022-10-28T07:11:28 | 2025-04-01T06:36:54.505498 | {
"authors": [
"Dodger77"
],
"repo": "Dodger77/upptime",
"url": "https://github.com/Dodger77/upptime/issues/1656",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
292466559 | Programatically set values on select
Expected Behavior
It would be extremely handy to be able to do something along the lines of
const select = M.Select.init(el, options);
select.setValue('value'); // for single selects
select.setValue(['value1', 'value2']); // for multiselect
Current Behavior
As far as I can see, there's no method available that does this.
Possible Solution
Maybe it would be interesting to implement this in the Dropdown component; the Select component could then simply forward its setValue calls to the dropdown.
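A hypothetical sketch of that forwarding, purely to illustrate the proposal (none of these members exist in Materialize today; dropdown, setValue, and el are assumed names):

class Select {
  setValue(value) {
    // Delegate to the underlying Dropdown instance (assumed method).
    this.dropdown.setValue(value);
    // Keep the hidden native <select> in sync so form submission works.
    this.el.value = value;
  }
}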
Context
I came across this problem when trying to implement a wrapper in Angular for Materialize.
In Angular, for proper form integration you need to be able to set the value programmatically at some point when a component wants to update it, which is currently not possible.
Your Environment
Version used:
Browser Name and version: 5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36
Operating System and version (desktop or mobile): Windows 10 x64 (desktop)
You can programmatically change the original select element, and then reinit the select plugin
What do you mean by reinit?
You can just init the plugin again, the same as always. All our plugins are built so that if you init a plugin multiple times, it will destroy the existing instance and reinitialize the plugin.
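For illustration, a minimal sketch of that approach, assuming the M.Select API used in this issue and a hypothetical <select id="mySelect"> element (for a multiselect, set selected on each target <option> instead):

// Change the value on the original <select> element first.
var el = document.querySelector('#mySelect'); // hypothetical element id
el.value = 'italian-pizza'; // must match an existing <option> value

// Re-initializing destroys the existing plugin instance and
// rebuilds the rendered dropdown with the new selection.
M.Select.init(el);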
const setMaterializeFormSelectValue = (id, value) => {
$(id).parent().find('.dropdown-content > li.selected').removeClass('selected').removeClass('active')
let liArr = $(id).parent().find('.dropdown-content > li')
let opArr = $(id).find('option')
// assumption: both arrays have the same index order
let text = '?'
opArr.each((i, op) => {
if ($(op).attr('value') === value) {
$(liArr[i]).addClass('selected').addClass('active')
text = $(liArr[i]).text()
}
})
// also apply the new text
$(id).parent().find('input[type=text]').val(text)
// also update the hidden select
$(id).val(value)
}
setMaterializeFormSelectValue('#mySelect', 'italian-pizza')
| gharchive/issue | 2018-01-29T16:13:50 | 2025-04-01T06:36:54.512558 | {
"authors": [
"acburst",
"axed",
"dealloc",
"ray007"
],
"repo": "Dogfalo/materialize",
"url": "https://github.com/Dogfalo/materialize/issues/5616",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |