Dataset schema:
id: string (4 to 10 characters)
text: string (4 to 2.14M characters)
source: string (2 classes)
created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict
1470751451
gf tool install issues

Hello @joamatab, I installed the latest version and did a fresh install of KLayout, and tried to use the CLI tool to install the salt package. I ran into a couple of issues: The CLI tool is broken, does not take in any commands, and errors out. I had to manually fix it by opening the gf file and changing gf to cli everywhere. After getting this working, I ran gf tool install and it was installed in the salt directory. Opening KLayout did not show the menu item for gdsfactory. Moving the gdsfactory folder to macros rather than salt brings the menu back. Is there a reason we have to keep this under salt? Does it change its functionality in any way by moving it to macros? At least keeping it under macros brings it into the toolbar; having it under salt makes it inaccessible. Perhaps @thomasdorch knows? Originally posted by @SkandanC in https://github.com/gdsfactory/gdsfactory/issues/937#issuecomment-1332682864

If you want it installable by the package manager it needs to go in the salt folder; that's where all packages managed by the saltmine live. I believe, though, you have an error in your setup. Python macros need to go in the pymacros folder; macros is for Ruby. That might be the problem. I think the standard packages might detect this and can run it successfully anyway, but not for packages. Skandan, is it working now?

I believe this will not work. KLayout has two kinds of Python files, .py and .lym, and they get loaded differently. .py files are the "traditional" Python files and go into /python. .lym files are the XML-wrapped Python files that also hold KLayout information; these are the ones called by the macro class (I forget the exact name) and can be set to auto-execute etc. They need to be in /pymacros. I believe if they are in /python, they won't be properly executed on startup anymore.

@joamatab yes, it works now.
gharchive/issue
2022-12-01T06:20:45
2025-04-01T04:34:20.267042
{ "authors": [ "SkandanC", "joamatab", "sebastian-goeldi" ], "repo": "gdsfactory/gdsfactory", "url": "https://github.com/gdsfactory/gdsfactory/issues/943", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
844448268
RACK wiki pages sometimes open with incorrect titles

The <title> element for the "RACK Predefined Queries" wiki page is "RACK CLI" - this means that the title of a browser tab displaying this page is misleading. There may be other examples of wiki pages with incorrect titles - I've not checked exhaustively.

Upon further investigation, it would appear that GitHub is occasionally opening wiki pages with an incorrect <title> element - however, this doesn't seem to happen consistently. It may have something to do with whether I open the page directly or in a new tab, or it may be to do with the page from which I open the wiki page, but it has happened at least twice to two different wiki pages. If I can reproduce the problem, I will add a note to this issue.
gharchive/issue
2021-03-30T12:12:44
2025-04-01T04:34:20.270090
{ "authors": [ "clarissa-adelard" ], "repo": "ge-high-assurance/RACK", "url": "https://github.com/ge-high-assurance/RACK/issues/333", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
68209936
updated copyrights in comments

Fixes issue #482.

Can one of the admins verify this patch?

- [ ] Is it mergeable?
- [ ] Did it pass the tests?
- [ ] If it introduces new functionality in scripts/, is it tested? Check for code coverage.
- [ ] Is it well formatted? Look at make pep8, make diff_pylint_report, make cppcheck, and make doc output. Use make format and manual fixing as needed.
- [ ] Did it change the command-line interface? Only additions are allowed without a major version increment. Changing file formats also requires a major version number increment.
- [ ] Is it documented in the ChangeLog?
- [ ] Was a spellchecker run on the source code and documentation after changes were made?

Jenkins, ok to test

Hi @Magentashades, is this ready for review? If so please fill out the checklist (above) and ping us. thx!
gharchive/pull-request
2015-04-13T22:39:37
2025-04-01T04:34:20.310929
{ "authors": [ "Magentashades", "ctb", "ged-jenkins", "mr-c" ], "repo": "ged-lab/khmer", "url": "https://github.com/ged-lab/khmer/pull/934", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
136315620
make the packaging script more robust

Suggest adding a note in the docs about the dependency on electron-packager 0.0

@gaocegege OK, I'll merge it when I get back. Actually my scripts aren't finished yet… You look familiar, friend.

@geeeeeeeeek Just added you as a WeChat friend -,-
gharchive/pull-request
2016-02-25T08:19:06
2025-04-01T04:34:20.313241
{ "authors": [ "gaocegege", "geeeeeeeeek" ], "repo": "geeeeeeeeek/electronic-wechat", "url": "https://github.com/geeeeeeeeek/electronic-wechat/pull/9", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2267765043
083 Jr@**** Please close this issue. It doesn't serve any purpose. Thank you. Nitkarsh Chourasia.
gharchive/issue
2024-04-28T19:30:41
2025-04-01T04:34:20.314297
{ "authors": [ "NitkarshChourasia", "PoisonZl" ], "repo": "geekcomputers/Python", "url": "https://github.com/geekcomputers/Python/issues/2165", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1524239494
Weekly Journal: Week 2 of 2023 (20230108 - 20230114)

When a clown moves into the palace, he does not become a real king; rather, the palace becomes a circus.

Yu Minhong: There are many primary and secondary school teachers who read barely a few books a year and spend their whole lives teaching just what is in the textbooks, repeating it day after day. Their own knowledge is narrow, they cannot keep up with the changing times, and they are simply not capable of educating today's students well.

In "Jottings under Lamplight" (《灯下漫笔》), Mr. Lu Xun described old China: "A nation that has knelt for too long gets vertigo even when it stands up. At the mention of money and power, pupils dilate at once. At the mention of sex, excitement follows immediately. But speak of morality, people's livelihood, human nature, or conscience, and everyone falls silent: none of my business, not interested. Individuals calculating to the bone make up a bizarre tribe, and all its humiliations and disasters are of its own brewing."

When many people first encounter something new that exceeds their existing understanding, two mindsets arise easily: 1. Feeling they have discovered something remarkable and are a step ahead. 2. Greed overcoming caution. Once these two combine, they fall into an unfathomably deep trap. The first person to eat a crab survived; the first person to eat a poisonous spider died.

Not reading books does not really affect one's life, nor one's grasp of reason, because we can still learn from life and draw lessons from reality. Nor does reading necessarily amount to much; read yourself into pedantry and it can even harm your life. Life's days are long and one needs hobbies to pass the time… I have loved reading since childhood, the way some people have loved playing games since childhood. For me, reading is not a means; it is an end, a pure pleasure of the spirit. Nothing else compares to reading.
gharchive/issue
2023-01-08T00:17:45
2025-04-01T04:34:20.318578
{ "authors": [ "xingangshi" ], "repo": "geekpanshi/panshi", "url": "https://github.com/geekpanshi/panshi/issues/86", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
407412728
Error in documentation

Issue Type: Documentation Report

Summary: http://docs.drupalvm.com/en/latest/extras/drush/ contains an error: "Drupal VM automatically generates a drush alias file in ~/.drush/drupalvm.aliases.drushrc.php (for Drush < 9.0.0) and ~/.drush/sites/drupalvm.site.yml (for Drupal 9.0.0+) with an alias for every site you have defined in the apache_vhosts or nginx_vhosts variable." I assume "Drupal 9.0.0+" needs to be "Drush 9.0.0+".

Indeed it should be Drush. I created a PR to update the documentation; see https://github.com/geerlingguy/drupal-vm/pull/1895

Good catch! And thanks @Michel-Settembrino for the PR. I've now merged it in.
gharchive/issue
2019-02-06T20:18:26
2025-04-01T04:34:20.325695
{ "authors": [ "Michel-Settembrino", "geerlingguy", "hepabolu" ], "repo": "geerlingguy/drupal-vm", "url": "https://github.com/geerlingguy/drupal-vm/issues/1894", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1117899089
Drush fails after fresh spinup: Class "DOMDocument" does not exist

Issue Type: Bug Report / Support Request

Your Environment:
Vagrant 2.2.14
VirtualBox 6.1.16r140961
ansible 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/vagrant/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.17 (default, Jul 20 2020, 15:37:01) [GCC 7.5.0]

Your OS: Linux

Summary: Spinning up a new project using DrupalVM with Acquia BLT and Acquia CMS. The box provisions correctly, but all drush commands fail when I ssh into the box. The error is:

[preflight] Class "DOMDocument" does not exist
[warning] Drush command terminated abnormally.

Some initial debugging shows that this can be caused by the absence of the php-xml extension. I modified box/config.yml to include this:

```yaml
php_version: "7.4"
php_packages_extra:
  - "php{{ php_version }}-bz2"
  - "php{{ php_version }}-imagick"
  - "php{{ php_version }}-zip"
  - "php{{ php_version }}-intl"
  - "php{{ php_version }}-bcmath"
  - "php{{ php_version }}-xml"
  - imagemagick
```

and re-provisioned the box, but no success. phpinfo confirms the extension is present. Restarted PHP after re-provisioning just to be certain, but the issue still persists.

Updating with some more info: the cause for this turns out to be a PHP version mismatch. Even though 7.4 is explicitly declared in box/config.yml, the VM spins up with 8.1. Apparently, php-xml is not installed for 8.1. Manually switching to 7.4 in the VM resolves the issue: sudo update-alternatives --set php /usr/bin/php7.4. So the bug appears to be that the VM is not respecting the PHP version set in box/config.yml when provisioning.

I've been having this happen on a number of Ubuntu servers lately, even after running upgrades. It looks like the CLI gets updated to PHP 8.1, while the web UI still runs PHP 7.3 or 7.4 (whatever version is set with php_version)... hmm...

Related: https://github.com/geerlingguy/ansible-for-devops/issues/451

Can you see if maybe php-apcu is in your configuration anywhere? If so, it needs to be changed to php7.4-apcu.

Nope... not setting that anywhere in the configuration :(

Weird, it should be fixed, though I might also need to update Drupal VM to the latest role versions soon to make sure that's not the issue.

Just reporting that I also experienced issues after cloning the latest master, not touching anything in config.yml and running vagrant up. The VM installed PHP 8.1 even though 7.4 was specified. I also got return type error messages similar to this:

PHP Deprecated: Return type of Symfony\Component\Console\Helper\HelperSet::getIterator() should either be compatible with IteratorAggregate::getIterator(): Traversable, or the #[ReturnTypeWillChange] attribute should be used to temporarily suppress the notice

Thanks, helped a million.

> Can you see if maybe php-apcu is in your configuration anywhere? If so, it needs to be changed to php7.4-apcu.

Life saver. Thank you.
gharchive/issue
2022-01-28T22:21:29
2025-04-01T04:34:20.333430
{ "authors": [ "58bits", "acha5066", "dbperf", "geerlingguy", "sms20" ], "repo": "geerlingguy/drupal-vm", "url": "https://github.com/geerlingguy/drupal-vm/issues/2196", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
57716434
Document some tips and tricks for hardware setup with the Pi

There are a number of things I need to document more formally elsewhere (to save my future self and others time in hardware-related setup with the Pi and accessories)... but for now, I'll just add them here, in this issue.

A good power supply is essential

I bought a decent, name-brand 6-port power supply (2A per port, except for a couple of ports) to power all the Pis in the Dramble. I could've also gone with a PC 5V PSU and hacked the power to the Pi through GPIO pins (a lot of cluster builds do this, since it's easier to custom-wire and doesn't require a mess of micro USB cables). But either way, you need to get clean power at 2A or greater to the Pi, or you're going to run into weird issues, like random restarts, USB device issues, network flakiness, etc. If you ever run into strange issues with your Pi, check your power supply. For other Pis I've been using Samsung and Apple 2A chargers (like the one that comes with an iPad), and they have worked great for a couple of years!

Increasing current available for USB devices to 1.2A

I have an SSD that seems to require something like 300-500mA of current to function properly. Mix that with a 40mA USB keyboard and a 100-200mA WiFi dongle, and the default 600mA supplied over the Pi's bus is a bit cramped. To prevent plugging in certain medium-to-high-powered USB devices from crashing your Pi or taking down other USB devices, there's a /boot/config.txt parameter that allows you to double the default current on USB. To enable this mode:

$ sudo nano /boot/config.txt

Add the line max_usb_current=1 and save the file. Reboot the Pi. This hack is only available on the Raspberry Pi B+ and later Pis (like the Pi 2 model B). Also, I have an open question (and another) asking whether there are any real downsides to setting this value to 1. For now, I'm only setting this value for Pis where I need to power an external HDD/SSD drive.

Tips for getting WiFi to connect automatically and reliably

See this guide, which I posted to Midwestern Mac's blog: Setting up an 802.11n WiFi adapter on the Raspberry Pi. It shows how to connect using WPA Supplicant, and how to prevent the WiFi from going into standby mode.

Tips for fixing a botched microSD card

If you accidentally break your Pi by editing the wrong file or breaking configuration somewhere on the microSD card you use to boot the Pi, you can usually just pull it and mount it on another workstation, edit the file to revert the change, and pop it back into your Pi. I wrote a guide for mounting a Raspberry Pi's ext4-formatted microSD card in Ubuntu 14.04 on a Mac, and the process for other platforms is similar (use a VM, make it easy). You can also re-image the entire SD card, and that's generally what I do if I've botched things too badly (it's easy to rebuild things when the configuration's all done in Ansible :).

Moved to the wiki: https://github.com/geerlingguy/raspberry-pi-dramble/wiki/Tips-and-Tricks
gharchive/issue
2015-02-15T03:41:47
2025-04-01T04:34:20.340173
{ "authors": [ "geerlingguy" ], "repo": "geerlingguy/raspberry-pi-dramble", "url": "https://github.com/geerlingguy/raspberry-pi-dramble/issues/4", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1806329563
⚠️ Département GEGI website (prod) has degraded performance

In f6e79b2, Département GEGI website (prod) (https://www.usherbrooke.ca/genie-electrique-informatique/) experienced degraded performance:
HTTP code: 200
Response time: 1057 ms

Resolved: Département GEGI website (prod) performance has improved in 205a196.
gharchive/issue
2023-07-15T21:04:03
2025-04-01T04:34:20.342992
{ "authors": [ "mdaoustUdeS" ], "repo": "gegi-status/gegi-status.github.io", "url": "https://github.com/gegi-status/gegi-status.github.io/issues/352", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1990573277
Create gem5 v23.1 Staging branch

As discussed and agreed upon during our November developer meeting (https://github.com/orgs/gem5/discussions/482), we have scheduled the creation of our staging branch from develop for December 1st. This means gem5 developers have until December 1st to have their contributions merged into develop for inclusion in version v23.1. Contributions made after this deadline will have to wait for the v24.0 release.

Once created, the staging branch will be tested rigorously and our release procedures followed. The branch shall exist for at least 2 weeks in order to run tests and give the community time to inspect the branch. Once the staging branch is found to be ready for release, it shall be merged into the stable branch, thus officially releasing gem5 v23.1. Assuming a typical 2-week staging branch period, this release should occur on December 15th.

Contributions to the staging branch via pull request are permitted but, in the interests of ensuring new bugs are not introduced, good justification should be given as to why the change cannot wait for the following release (v24.0). Usually contributions to the staging branch are bug fixes or contributions that have no chance of changing software functionality (e.g., documentation updates, format fixes, etc.). New features or significant enhancements to gem5 should continue to be made to the develop branch during this time. The staging branch will be periodically merged into the develop branch to ensure contributions made to the staging branch are reflected in the develop branch.

I have highlighted the following, which must be included in v23.1 and will therefore be prioritized for review and inclusion into the staging branch:

- [ ] All tests (CI, Daily, Weekly, and Compiler) to be passing.

If there is anything I have missed, or you create a new PR you believe needs to be prioritized for inclusion in v23.1, please reply to this thread to let me know.

Edit: The following are PRs that we would want in the next release:
- [ ] #658
- [x] #639
- [x] arch-riscv: Add PCEvent for RISCV FS Workload kernel panic/oops #573
- [x] #636
- [x] configs,stdlib,tests: Remove get_runtime_isa() #241
- [ ] arch-riscv: adding new instruction types to RISC-V #589
- [ ] WIP: Add Clang format #362
- [x] Features/bootloader workload stdlib integration #630
- [x] #635

The following issues we would like to fix in the next release (these issues are the ones with no PR):
- [ ] Can not build tlm with ARM #591

The following PRs will unfortunately not be in the next release:
- util: Added cli tool for gem5 resources #305
- WIP: GPUFS stdlib board #352
- arch-arm: add Sve mla and mls indexed #596

OK to be cherry-picked to staging:
- [ ] util: Added script to copy resources from mongodb #510
- [ ] stdlib's ArmBoard does not support KVM #612
- [ ] stdlib: Updated release notes for version 23.1 #447
- [ ] Update RISCV kernel and refactor stdlib to use the new kernel and bootloader workload. #616

Tasks that have been completed:
- [x] Add KConfigs (#69)
- [x] #642
- [x] mem-ruby: Unused L3CacheCntrl freed #598
- [x] misc, stdlib: Update documentation to adhere to RST formatting. #631
- [x] resources,stdlib: update suites to lazy download #609
- [x] #615
- [x] Support for classic prefetchers with Ruby controllers (#502).
- [x] #634 #633
- [x] Support for classic prefetchers in Ruby #502
- [x] arch-x86: Fixes page fault for CLFLUSH on write-protected pages #592
- [x] misc,python: Add isort hook to pre-commit #431
- [x] arch-riscv: Support combination of privilege modes configuration #522
- [x] scons: Add an option to reduce memory usage of ld #601
- [x] stdlib, resources: removed deprecated if statement in obtain_resource for workload resources #611
- [x] sim,python: Restore sigint handler in python #531
- [x] dev-amdgpu: Writeback PM4 queue rptr when empty #597
- [x] arch-x86: Fix misc registers in mov instructions #593
- [x] cpu: Require BTB hit to detect branches. #493
- [x] tests: fix lulesh #600
- [x] ext,github,tests: Update DRAMSys tests to v5.0 and handle new dependencies #577
- [x] scons: Change to Kconfig build system #69
- [x] #625
- [x] misc: update x86-npb-benchmarks.py to use suites #587
- [x] misc: update gapbs example to use suites #607
- [x] #641 #487
- [x] #646 fixes compiler tests
- [x] arch-arm: Only build ArmCapstoneDisassembler when ISA is arm #553
- [x] Potential bug in TLB lookup #484: I made a PR for the issue, arch-riscv: fix tlb bug #610
- [x] Pottential Bug in low_power_sweep.py #590

Not sure if resources should be discussed here, but I'd like to upload the kernel for this new disk image as a resource: https://github.com/gem5/gem5-resources/pull/12. Alternately I can attempt to figure out how to copy it out from the disk image using packer to obtain it that way.

> Not sure if resources should be discussed here, but I'd like to upload the kernel for this new disk image as a resource: gem5/gem5-resources#12.

Resources can be added at any time. They don't need to be in prior to the staging branch, or even prior to the release. However, I've noted down that we should look into this (I'm quite bad at checking in on the gem5-resources PRs).

Adding #484 into the to-dos for 23.1. I edited the original commit message.

I edited the original comment and deleted my comment.

Hi all, I believe the staging branch hasn't been created yet, is that correct? I would guess it is due to the tests failing. Let me know if I can help out, in particular with any X86 / GPU related failures.

> Hi all, I believe the staging branch hasn't been created yet, is that correct?

Just created it! https://github.com/gem5/gem5/tree/release-staging-v23-1 There was a small delay (as always), but it's created now. I'm going to create a separate issue to track the health of the staging branch and what may need to be cherry-picked.
gharchive/issue
2023-11-13T12:33:25
2025-04-01T04:34:20.373489
{ "authors": [ "BobbyRBruce", "Harshil2107", "abmerop" ], "repo": "gem5/gem5", "url": "https://github.com/gem5/gem5/issues/558", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
2692705824
website: Add Sphinx documentation to website using GitHub Actions This PR adds a workflow file that builds the Sphinx documentation and pushes it to the gem5 website. Currently, it is set to run once a week and build documentation for the stable version of gem5. This PR also includes a Python script that adds frontmatter and makes other alterations to the HTML files generated by Sphinx so they are compatible with the gem5 website. This was tested using my own forks of the gem5 website and the gem5 repo, so some issues may come up when this workflow is run with the gem5/website repo. This is great! I'm testing to see how well copilot works here :)
gharchive/pull-request
2024-11-26T00:44:27
2025-04-01T04:34:20.375347
{ "authors": [ "erin-le", "powerjg" ], "repo": "gem5/website", "url": "https://github.com/gem5/website/pull/159", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
218305176
fix biased getRandomQuote

NaM pajaDank does this ping hemirt? NaM

Random quotes are not exactly biased. The method of randomizing is just not very good, but I'll look into fixing it, so every quote has the same chance of getting picked.

Fixed in 1935ddb187256eef94aaee558522f57eb16c698b. Now all files of a user get loaded, which isn't super efficient, which is why I will probably build a small cache in the future.
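For illustration, the unbiased approach described above (load everything, then pick uniformly) looks roughly like the following. The bot itself is written in Go, so this TypeScript sketch only mirrors the idea, and the quote-loading function is a hypothetical placeholder:

```ts
// Hypothetical sketch: uniform selection over all of a user's quotes.
// loadAllQuotes is a stand-in for however the bot reads the user's files.
async function getRandomQuote(
  user: string,
  loadAllQuotes: (u: string) => Promise<string[]>
): Promise<string | undefined> {
  const quotes = await loadAllQuotes(user); // every quote gets loaded...
  if (quotes.length === 0) return undefined;
  const i = Math.floor(Math.random() * quotes.length); // ...with an equal 1/n chance each
  return quotes[i];
}
```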
gharchive/issue
2017-03-30T19:37:11
2025-04-01T04:34:20.386275
{ "authors": [ "ThisWatcher", "gempir", "hemirt" ], "repo": "gempir/gempbotgo", "url": "https://github.com/gempir/gempbotgo/issues/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1980212839
Fixes a bug with the bitset hash

When a component type has an ID that borders a uint size (32, 64, 96), the bitset isn't big enough. I'm just adding +1 to the highestId. I tried removing the highestId find-foreach and just using ComponentRegistry.Size; however, this broke my new tests and the HashSimilarity test, as the ComponentRegistry isn't actually set up during these tests. For now, here's a fix.

Did this end up fixing https://github.com/genaray/Arch.Extended/issues/26? If so, could you add the tests from that issue?
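To make the off-by-one concrete: when a bitset is backed by 32-bit words, an ID that is an exact multiple of 32 needs one more word than a naive division suggests. Arch itself is C#; this TypeScript sketch only illustrates the arithmetic:

```ts
// Words needed so that bit `highestId` fits in a 32-bit-word-backed bitset.
function wordsNeeded(highestId: number): number {
  // Buggy sizing: Math.ceil(highestId / 32) returns 1 for highestId = 32,
  // but bit 32 lives in the *second* word.
  return Math.floor(highestId / 32) + 1; // the "+1 to the highestId"-style fix
}
// wordsNeeded(31) === 1, wordsNeeded(32) === 2, wordsNeeded(64) === 3
```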
gharchive/pull-request
2023-11-06T22:47:03
2025-04-01T04:34:20.389745
{ "authors": [ "LilithSilver", "RoyconSkylands" ], "repo": "genaray/Arch", "url": "https://github.com/genaray/Arch/pull/171", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
318401763
Axon homeostasis

Would it be possible to create a new GO term for axon homeostasis? I know we do not create terms related to diseases, but it looks like some protein-coding genes are specifically required to 1) prevent axon destruction (NMNAT2), while another protein, MYCBP2, is specifically required to degrade NMNAT2 to promote axon degradation/degeneration. After discussing with Pascale, we thought that creating 'axon homeostasis' would be helpful. Thanks, Sylvain. PMID:20126265; PMID:23082226; PMID:23665224; PMID:23610559

Response from SynGO: Hi, we're mostly focussed on synapses atm so I have not really thought about ontology models for dendrites and axons. In general, I do not favor using terms that describe phenotypes/diseases. In SynGO we use very generic terms for such cases, something like 'axon maintenance' following https://www.ncbi.nlm.nih.gov/pubmed/20126265 (SynGO maintenance terms: GO:0099559, GO:0098880, GO:0048790, GO:0099562). We're currently short on experts and annotation coverage for synapse organization and synaptogenesis terms. Although I hope we can expand to this a bit in the near future, I cannot contribute a structural solution for annotating synapse/axon/dendrite organization mechanisms that is widely applicable. Best, Frank

We have 'neuron projection maintenance'. Do we want to make an axon child? Are the mechanisms different enough?

Hi Ruth and David, I was not aware of 'neuron projection maintenance'. As I'm not very familiar with this topic and the processes described in my papers are not well defined, I would say that using 'neuron projection maintenance' is fine for the moment. However, as said, I'm not familiar with such processes. Thanks a lot for your help, Sylvain

Hi @ukemi, I do not think there is any evidence in the literature for different mechanics... (or at least not yet). The term 'neuron projection maintenance' is intended to cover both axons and dendrites, so any neuron projections, generally referred to as 'neurites' in the literature. In fact the results sections in the papers cited by @sylvainpoux also describe mostly 'neurites' rather than specifically axons (unless animal work is presented). So 'neuron projection maintenance' is definitely applicable in these studies and I would not use a more specific term if the experimental evidence comes from cell-culture-based work. For the studies done in adult animals (e.g. Figure 1 in PMID:23665224) I would be happy to use a more specific term, 'axon maintenance', as here there is no ambiguity about whether or not the described neurite is indeed an axon. So with regard to GO term specificity, I would only go as far as I am able to infer from the experimental assay. Here: cell work --> neurite, animal work --> axon. (But the mechanics of the neurite process in the cell culture may well be more or less the same as for the axon process in the animal.) @ukemi, based on the above, would you add 'axon maintenance' as a child term of 'neuron projection maintenance'? Or would you perhaps make it a broad synonym of 'neuron projection maintenance'? @paolaroncaglia, what would be your recommendation?

Side note: the second paper cited by @sylvainpoux (PMID:23082226) can be a little tricky to annotate. These studies were done in embryos and the authors themselves state in the abstract, "Our observations suggest that during embryogenesis, Nmnat2 plays an important role in axonal growth or maintenance." Figure 7 does not exactly show that axonal homeostasis/maintenance is affected: "Because NF+ axons are present in both wild-type and mutant limbs at E13.5 (Figure 7C, D), innervation appears to be initiated but then regresses by E18.5 in Nmnat2blad/blad embryos." So, without knowledge about Nmnat2 in adult animals (e.g. PMID:23665224 mentioned above), I would most probably infer here that Nmnat2 is probably required for a step in the developmental cascade (rather than maintenance) and capture this with 'GO:0031175 neuron projection development'... (And both annotations could be correct; the protein could have a role in axonal development as well as its maintenance.) cc: @RLovering

Hi @BarbaraCzub. Unless we want to specifically distinguish the maintenance of axons versus dendrites, I would not make new terms. I would make narrow synonyms, because 'axon' is narrower than 'neuron projection'. It is probably possible to make the distinction using annotation extensions.

Hi @ukemi, yes, it is possible to capture axon vs. dendrite in the annotation extensions. I'll add 'axon maintenance' as a narrow synonym then. Thank you! Hm... I wonder whether this should actually be a related synonym? (Is there any documentation that I could refer to?)

@BarbaraCzub, 'axon' is_a 'neuron projection', so to me 'axon maintenance' should be a narrow synonym of 'neuron projection maintenance'. Yes, related synonyms are always safe when in doubt. Some guidance is here: http://geneontology.org/page/ontology-structure#extras and http://wiki.geneontology.org/index.php/Curator_Guide:_General_Conventions#Synonyms

Thank you @paolaroncaglia and @ukemi for the guidance notes! I added 'axon maintenance' and 'axon homeostasis' as related synonyms, because 'neurite maintenance' had been previously added as a narrow synonym. And 'neurite' is also a narrow synonym of 'neuron projection' (https://www.ebi.ac.uk/QuickGO/term/GO:0043005). But based on the guidelines in the above links, which you both shared (thank you!), it looks like 'neurite...' should be exact, and 'axon...' should be narrow synonyms, respectively. @ukemi, can you think of a reason why 'neurite' had been added as a narrow synonym previously? Based on my understanding of the literature, a 'neurite' is exactly the same structure as a 'neuron projection', although the latter is actually very rarely used in the literature (usually researchers use the shorter term 'neurite'). Having read the guidance notes, I think the synonyms should be as follows:

- 'GO:0043005 neuron projection' --> 'neurite' [exact]
- 'GO:1990535 neuron projection maintenance' --> 'neurite maintenance' [exact], 'neurite homeostasis' [exact], 'neuron projection homeostasis' [exact], 'neuronal cell projection homeostasis' [exact], 'neuron process homeostasis' [exact], 'neuron protrusion homeostasis' [exact], 'axon maintenance' [narrow], 'axon homeostasis' [narrow]

But there are many more 'neuron projection...' terms in the ontology for which 'neurite' had been added as a 'narrow' synonym, and not exact. And e.g. 'neurite morphogenesis' is a related synonym of 'GO:0048812 neuron projection morphogenesis', and not even narrow... The guidance notes also say that the main purpose of the synonyms is to help with searches, so I am not going to address these synonym issues now. But perhaps in the future, we should consider whether we even have a need for synonym types?

Unless there are neuron projections that are not neurites. Seems possible.

There are certainly neuron types which have a primary cilium. (But in the ontology a cilium is a descendant of 'cell projection' rather than the more specific 'neuron projection'.) But, yes, a cilium would be a neuron projection other than a neurite. I did not consider this. Thank you!

@BarbaraCzub, in reply to your note "perhaps in the future, we should consider whether we even have a need for synonym types?" A couple of quick thoughts here. One is that it is important to differentiate exact synonyms from all others, because an exact synonym is to all intents and purposes equivalent to the primary name (the latter is usually chosen on the basis of usage and/or clarity). This has practical implications, so it's important that exact synonyms are really exact (e.g. 'signaling' and 'signalling' are the same, but 'ciliary transition zone' and 'CTZ' are not, as the latter may also refer to 'chemoreceptive trigger zone'; CTZ should be related, not exact). Another thought is that, among biomedical ontologies, GO is one of the few (to my knowledge) that has this richness of variety in synonym types, which ultimately adds to knowledge content and quality. Sure, it is not the most essential aspect of GO, but I like to think that it can help curators if applied thoughtfully throughout. This said, I agree that this ticket is resolved :-)
gharchive/issue
2018-04-27T13:04:19
2025-04-01T04:34:20.419012
{ "authors": [ "BarbaraCzub", "RLovering", "paolaroncaglia", "sylvainpoux", "ukemi" ], "repo": "geneontology/go-ontology", "url": "https://github.com/geneontology/go-ontology/issues/15664", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1182680472
I couldn't search for people in the search bar

From inside Person Record: Jerry Abraham, I clicked on his "tour history." It showed successfully. Then, from that window, I wanted to search for "Emily" in the search bar, but when I typed in "Emily" it just displayed a blank dropdown. I clicked "enter" and it wouldn't populate results.

This appears to be working now, except for the fact that it won't index people who are just a name entry. I opened a new ticket for that.
gharchive/issue
2022-03-27T22:12:47
2025-04-01T04:34:20.424389
{ "authors": [ "emilysusann" ], "repo": "generalludd/tour", "url": "https://github.com/generalludd/tour/issues/111", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
601114060
different view

Thank you for your good work. Would you mind telling me which boundary you are using for changing the viewpoint? (carpet, cluttered_space, dirt, glossy, indoor_lighting, scary, wood) Thank you for your help.

Thank you for liking our work. The viewpoint boundary and category-related boundaries are not released right now. We will release them in the near future.

Got it, thx.
gharchive/issue
2020-04-16T14:37:41
2025-04-01T04:34:20.441486
{ "authors": [ "250906461", "ShenYujun", "betterze" ], "repo": "genforce/higan", "url": "https://github.com/genforce/higan/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
631097942
Remove TNT spawning fix

This one is fixed in the latest pre-release.

Closing, as the notice on this mod's readme has announced discontinuation for a while now. Feel free to reopen in future if work starts again.
gharchive/issue
2020-06-04T19:42:17
2025-04-01T04:34:20.447768
{ "authors": [ "muzikbike" ], "repo": "geniiii/FarLands", "url": "https://github.com/geniiii/FarLands/issues/78", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
507460334
How to update / set geofirestore and non-geofirestore db entries in a single GeoTransaction

Hey, great library, really useful. I have a use case with transactions which I'm not sure is supported. I am using a GeoTransaction to create a new geofirestore entry in my database. But within the same transaction, I need to update other parts of my db, which are non-geofirestore entries. Here's a minimal example:

```ts
import * as geofunctions from 'geofirestore'
import * as admin from 'firebase-admin'

export const db = admin.firestore()
export const geoDb = new geofunctions.GeoFirestore(db)

export async function geoTransactionExample() {
  const geoPoint = new admin.firestore.GeoPoint(5, 5)
  const geoDocRef = db.collection('geoDataCollection').doc()
  const geoDataEntry = { coordinates: geoPoint, extra: "extra", data: "data" }

  const normalDocRef = db.collection('normalDataCollection').doc()
  const normalEntry = { normal: "normal", data: "data" }

  await geoDb.runTransaction(async (t) => {
    const geotransaction = new geofunctions.GeoTransaction(t)
    // This succeeds
    geotransaction.set(geoDocRef, geoDataEntry)
    // This fails
    geotransaction.set(normalDocRef, normalEntry)
  })
}
```

This fails on the second non-geo set operation with Error: Invalid GeoFirestore document: could not find GeoPoint, which makes sense. But is there e.g. a flag I can pass to the set operation to allow a normal non-geo set? Or is there a different approach to achieve this within a single transaction? Or is this not currently supported?

Something not currently exposed by GeoTransactions in 3.3.1 (which you will see in the next release of GeoFirestore) is the native transaction. To access the native transaction object to run a set on, you could do something like this:

```ts
await geoDb.runTransaction(async (t) => {
  const geotransaction = new geofunctions.GeoTransaction(t)
  // This succeeds
  geotransaction.set(geoDocRef, geoDataEntry)
  // This should succeed
  geotransaction['_transaction'].set(normalDocRef, normalEntry)
})
```

Note that in the next version you'll be able to access the native transaction object not like this: geotransaction['_transaction'], but like this: geotransaction.native.

@MichaelSolati, tested, works. Fantastic, thank you.
gharchive/issue
2019-10-15T20:17:36
2025-04-01T04:34:20.534516
{ "authors": [ "MichaelSolati", "dsg38" ], "repo": "geofirestore/geofirestore-js", "url": "https://github.com/geofirestore/geofirestore-js/issues/143", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1264396325
The normalizer crashes when the banchi (block number) part contains an invalid numeral expression

Filing this here because there are cases where the normalize function throws an exception. I would appreciate it if you could look into it.

Environment: macOS Monterey 12.3.1 (Intel), Node v16.15.0, normalize-japanese-addresses v2.5.6

Failing example (a string containing 「百二三」, which cannot be converted to a number): 東京都千代田区永田町百二三

Working examples: 東京都千代田区永田町百二十三, 東京都千代田区永田町百二

Exception raised: TypeError: The attribute of kanji2number() must be a Japanese numeral as integer.

@shio-phys Thank you. If 百二三 could be coerced to 123, that would broaden the range the normalizer can cover, which seems good; I'd like to attempt an improvement.

@shio-phys This has been addressed in v2.5.7; please give it a try. Thank you for the feedback!

@kamataryo Thank you for the fix! I'll try the new version right away!!
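A minimal reproduction sketch, assuming the library's exported normalize function and the published npm package name (both taken on trust here, not confirmed from the thread):

```ts
import { normalize } from '@geolonia/normalize-japanese-addresses';

// Before v2.5.7 this threw:
// TypeError: The attribute of kanji2number() must be a Japanese numeral as integer.
normalize('東京都千代田区永田町百二三')
  .then((result) => console.log(result)) // after the fix: a normalized address
  .catch((err) => console.error(err));
```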
gharchive/issue
2022-06-08T08:37:31
2025-04-01T04:34:20.553931
{ "authors": [ "kamataryo", "shio-phys" ], "repo": "geolonia/normalize-japanese-addresses", "url": "https://github.com/geolonia/normalize-japanese-addresses/issues/173", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
111634231
Better Tileserver integration with Xray support

Refactored export.sh: Xray styles are now supported directly if only an mbtiles file is in the /data directory. This allows easier testing for us.

Integration into Docker Compose: This solves #12. I initially wanted to create an additional debug viewer container, but we can kill that branch if we merge this pull request.

Nice. A very good idea to apply x-ray in case another style is not available... #12 was about the MapBox GL JS viewer (client side); you have made here a Mapnik x-ray (server side). Let's discuss #12 on Monday.
gharchive/pull-request
2015-10-15T14:53:36
2025-04-01T04:34:20.556198
{ "authors": [ "klokan", "lukasmartinelli" ], "repo": "geometalab/osm2vectortiles", "url": "https://github.com/geometalab/osm2vectortiles/pull/39", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2011084149
Improve iterators

Parts of the code use generic Iterable and Generator types, which provide the end user with few options (for example, no array methods). We should adopt a better Iterable object with more utility. We can create a separate package/git submodule for this.
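As a rough sketch of the idea (the names here are hypothetical, not the library's actual API), a thin wrapper can keep lazy iteration while exposing array-style helpers:

```ts
// Hypothetical sketch: a lazy Iterable wrapper with array-like utilities.
class Iter<T> implements Iterable<T> {
  constructor(private readonly src: Iterable<T>) {}
  [Symbol.iterator](): Iterator<T> { return this.src[Symbol.iterator](); }
  map<U>(f: (value: T) => U): Iter<U> {
    const src = this.src;
    return new Iter((function* () { for (const v of src) yield f(v); })());
  }
  filter(p: (value: T) => boolean): Iter<T> {
    const src = this.src;
    return new Iter((function* () { for (const v of src) if (p(v)) yield v; })());
  }
  toArray(): T[] { return [...this]; }
}

// Usage: new Iter([1, 2, 3]).map(x => x * 2).filter(x => x > 2).toArray() // [4, 6]
```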
gharchive/issue
2023-11-26T15:26:31
2025-04-01T04:34:20.557323
{ "authors": [ "jiricekcz" ], "repo": "geometryjs/geometry.js", "url": "https://github.com/geometryjs/geometry.js/issues/36", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1667537011
Fix bug in auto computation of zoom levels

For maps with a large aspect ratio, contextily downloads tiles as if the map were as small as its smallest dimension. It should base the zoom level on the dimension with the larger extent.

I discovered this bug when my server died attempting to render the following blue line on a map: https://sondemaps.lectrobox.com/seattle/2023/4/13-17-47.64048--123.95468.jpg This script has run fine for months, but in today's dataset the blue line coincidentally was almost perfectly horizontal. Therefore, the latitude extent of the map was very small, and because contextily uses the max of the two zoom levels, it attempted to download over a million zoom-level-19 tiles from OSM. Yay.

Thanks for the fix! Can you also update the tests that are failing because of the change?

Oops, sorry about that; fixed.
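To see why the smaller extent blows up the tile count, here is a hand-wavy TypeScript sketch of the zoom heuristic (contextily itself is Python, and this is an illustration of the idea, not its implementation):

```ts
// Roughly: at zoom z the world spans 2^z tiles, so the zoom that covers an
// angular span with O(1) tiles is ~log2(360 / span). A tiny latitude span
// therefore yields a huge zoom; basing the choice on the *larger* span fixes it.
function autoZoom(lonSpanDeg: number, latSpanDeg: number): number {
  const span = Math.max(lonSpanDeg, latSpanDeg); // the bug effectively used the min
  return Math.min(19, Math.round(Math.log2(360 / span)));
}
```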
gharchive/pull-request
2023-04-14T05:33:29
2025-04-01T04:34:20.565086
{ "authors": [ "jelson", "martinfleis" ], "repo": "geopandas/contextily", "url": "https://github.com/geopandas/contextily/pull/214", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
425618123
Remove Python 3.4 From Travis CI

Python 3.4 has reached its end of life; remove it from the build matrix so that the Travis CI tests can pass.

Shouldn't we replace it with Python 3.5? Can we test with adding 3.5 and 3.7?

Sure thing, I've added Python 3.5 as well. 3.7 was already added by a previous PR I sent in a few weeks ago.
gharchive/pull-request
2019-03-26T19:52:48
2025-04-01T04:34:20.569839
{ "authors": [ "Coop56", "sbrunner", "tomkralidis" ], "repo": "geopython/OWSLib", "url": "https://github.com/geopython/OWSLib/pull/570", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1486353164
check on fgdc contact type

Fixes #848. Also includes the distribution contact, if it exists.

@pvgenuchten please rebase; CI is now fixed in master.

Coverage decreased (-0.5%) to 59.073% when pulling fe4be6dcbc94c084d8c3a863848dd12de4daaf9b on pvgenuchten:primary-person-fgdc into 13b1443f7120c9d703adf0beb443ef2bcd86d8d4 on geopython:master.

Some flake8 errors in CI.

flake8 resolved.
gharchive/pull-request
2022-12-09T08:55:58
2025-04-01T04:34:20.572526
{ "authors": [ "coveralls", "kalxas", "pvgenuchten" ], "repo": "geopython/OWSLib", "url": "https://github.com/geopython/OWSLib/pull/849", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1899661009
The air conditioner's outdoor temperature stays at 35°C, which doesn't match what the app shows

HA version: 2023.9.2
Integration version: 0.3.20
Device type and model: Air Conditioner 22012297
App used: Midea Meiju (美的美居)

Detailed description: The app shows an outdoor temperature of 29°C, while the HA backend's outdoor temperature shows 35°C; the two values don't match.

Entity attributes from the developer tools:

```yaml
hvac_modes: off, auto, cool, dry, heat, fan_only
min_temp: 17
max_temp: 30
target_temp_step: 0.5
fan_modes: Silent, Low, Medium, High, Full, Auto
preset_modes: none, comfort, eco, boost, sleep, away
swing_modes: Off, Vertical, Horizontal, Both
current_temperature: 28.4
temperature: 27
fan_mode: Silent
preset_mode: none
swing_mode: Off
aux_heat: off
prompt_tone: true
power: true
mode: 2
target_temperature: 27
fan_speed: 1
swing_vertical: false
swing_horizontal: false
smart_eye: false
dry: false
aux_heating: false
boost_mode: false
sleep_mode: false
frost_protect: false
comfort_mode: false
eco_mode: false
natural_wind: false
temp_fahrenheit: false
screen_display: true
screen_display_new: false
full_dust: false
indoor_temperature: 28.4
outdoor_temperature: 35
indirect_wind: false
indoor_humidity: 0
breezeless: false
total_energy_consumption: 0
current_energy_consumption: 0
realtime_power: 0
fresh_air_power: false
fresh_air_fan_speed: 0
fresh_air_mode: null
fresh_air_1: null
fresh_air_2: null
icon: mdi:air-conditioner
friendly_name: 卧室空调
supported_features: 121
```

App screenshot: [image]

The logs: No response

The documentation already tells you that this is the temperature at the outdoor unit. The "outdoor temperature" your app shows comes from an internet weather platform; the two are not the same thing at all, so of course they don't match.

Understood, thanks for the reply!!!
gharchive/issue
2023-09-17T05:16:29
2025-04-01T04:34:20.616463
{ "authors": [ "georgezhao2010", "lcuwx2016" ], "repo": "georgezhao2010/midea_ac_lan", "url": "https://github.com/georgezhao2010/midea_ac_lan/issues/309", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2554988132
Add typos to CI

- [x] I agree to follow the project's code of conduct.
- [x] I added an entry to CHANGES.md if knowledge of this change could be valuable to users.

Closes #564.

A type checker for one typo :smile:. Maybe add something in the readme for what to do if there are false positives?

> A type checker for one typo 😄.

You mean a typo checker? Anyway, there were more, but https://github.com/georust/gdal/pull/563 fixed them.

> Maybe add something in the readme for what to do if there are false positives?

TBH, I'm not sure myself; I didn't really understand their docs.
gharchive/pull-request
2024-09-29T15:25:20
2025-04-01T04:34:20.620313
{ "authors": [ "ChristianBeilschmidt", "lnicola" ], "repo": "georust/gdal", "url": "https://github.com/georust/gdal/pull/565", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
84030204
Better handling for CRS: it can exist, be null, or not exist

For example, notice the CRS property in each of the following GeoJSON examples:

```json
{ "coordinates": [1.0, 2.0], "type": "Point" }
{ "coordinates": [1.0, 2.0], "type": "Point", "crs": null }
{ "coordinates": [1.0, 2.0], "type": "Point",
  "crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:OGC:1.3:CRS84" } } }
```

Right now, for all core GeoJSON types, the crs property is either Some (if it exists on an object) or None (if it does not exist on an object). There is currently no way to represent it as null. Options:

1. We could make it an Option<Option<Crs>> (the outer Option indicates if the key exists, the inner Option indicates if the value is null).

2. Create a new generic type that handles all three cases:

```rust
pub enum OptionNull<T> {
    Some(T),
    Null,
    None,
}
```

3. Add a new enum variant Null on Crs that would indicate null, e.g.:

```rust
pub enum Crs {
    Null,
    Named {
        name: String,
    },
    Linked {
        href: String,
        type_: Option<String>,
    },
}
```

4. Or we could just fold crs: null and a missing crs to the same value, essentially ignoring crs: null.

As of RFC 7946, the "crs" member has been removed. https://github.com/georust/rust-geojson/issues/53
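For illustration, the three-state distinction maps neatly onto how TypeScript separates an absent key from an explicit null; this sketch only restates the modeling question and is not the Rust crate's API:

```ts
// Sketch: "key absent" vs "key: null" vs "key: value".
interface NamedCrs  { type: 'name'; properties: { name: string } }
interface LinkedCrs { type: 'link'; properties: { href: string; type?: string } }

interface GeoJsonPoint {
  type: 'Point';
  coordinates: [number, number];
  crs?: NamedCrs | LinkedCrs | null; // undefined = missing, null = explicit null
}
```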
gharchive/issue
2015-06-02T13:46:29
2025-04-01T04:34:20.630826
{ "authors": [ "frewsxcv", "mgax" ], "repo": "georust/rust-geojson", "url": "https://github.com/georust/rust-geojson/issues/34", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2171853871
Rewrite app entity label

This PR adds a new entity label app, written in the Processor API. The goal is to make the process more efficient.

Closing in favor of #134.
gharchive/pull-request
2024-03-06T15:59:14
2025-04-01T04:34:20.712592
{ "authors": [ "joschne" ], "repo": "geovistory/toolbox-streams", "url": "https://github.com/geovistory/toolbox-streams/pull/134", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
63824130
DOCKER_TLS_VERIFY=1 will always fail the TLS check

In de.gesellix.docker.client.protocolhandler.DockerURLHandler:

if ((dockerTlsVerify && Boolean.valueOf(dockerTlsVerify) == FALSE) || "0".equals(dockerTlsVerify)) {

will always produce true and thus fail the check for TLS, because Boolean.valueOf("1") == FALSE. You should check that, if it is defined, it equals "0" (or "false" or "no" as a string, case-insensitive), rather than using Boolean.valueOf, which returns false for everything except the ignore-case string "true".

Log:
20:15:08.039 [INFO] [de.gesellix.docker.client.DockerClientImpl] using docker at 'tcp://127.0.0.1:2376'
20:15:08.074 [DEBUG] [de.gesellix.docker.client.protocolhandler.DockerURLHandler] dockerTlsVerify=1
20:15:08.078 [INFO] [de.gesellix.docker.client.protocolhandler.DockerURLHandler] assume 'http'
20:15:08.079 [DEBUG] [de.gesellix.docker.client.protocolhandler.DockerURLHandler] selected dockerHost at 'http://127.0.0.1:2376'

If you don't have the DOCKER_TLS_VERIFY environment variable defined at all, you will correctly get the certPath, etc. defined.

Thanks for reporting the issue! It has been fixed with de.gesellix:docker-client:2015-03-24T11-45-53. In case you used the gradle-docker-plugin: it has been updated, too: http://plugins.gradle.org/plugin/de.gesellix.docker/2015-03-24T14-17-57
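A sketch of the suggested check; the client itself is Groovy/Java, so this TypeScript version just spells out the logic, and the accepted "disabled" spellings are the ones the report proposes:

```ts
// TLS verification should be treated as disabled only for an explicit
// "0" / "false" / "no" (case-insensitive), never by Boolean-parsing "1".
function tlsVerifyDisabled(dockerTlsVerify: string | undefined): boolean {
  if (dockerTlsVerify === undefined || dockerTlsVerify === '') return false;
  return ['0', 'false', 'no'].includes(dockerTlsVerify.trim().toLowerCase());
}
// tlsVerifyDisabled('1') === false  (previously this path disabled TLS)
// tlsVerifyDisabled('0') === true
```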
gharchive/issue
2015-03-23T20:26:50
2025-04-01T04:34:20.740339
{ "authors": [ "gesellix", "mmannerm" ], "repo": "gesellix-docker/docker-client", "url": "https://github.com/gesellix-docker/docker-client/issues/12", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2230656884
Flickering price updates

Happens more frequently a little bit later in the video. https://github.com/get10101/10101/assets/382048/180fb149-201c-41e2-b80e-9ea2ececafe2

Yeah, this is happening in production for me.

Boy, what made us think that bid/ask are always updated together? 😬
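A sketch of the usual remedy, in TypeScript for illustration only (the app itself is not written in TypeScript): publish bid and ask as one immutable quote so subscribers never observe a half-updated pair.

```ts
// Hypothetical sketch: replace the whole quote atomically instead of
// writing bid and ask in two separate updates.
interface Quote { bid: number; ask: number; }

let current: Quote = { bid: 0, ask: 0 };
const listeners: Array<(q: Quote) => void> = [];

function publish(bid: number, ask: number): void {
  current = { bid, ask };               // one write, one consistent pair
  listeners.forEach((l) => l(current)); // observers see bid/ask together
}
```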
gharchive/issue
2024-04-08T09:04:34
2025-04-01T04:34:20.741812
{ "authors": [ "bonomat", "holzeis", "luckysori" ], "repo": "get10101/10101", "url": "https://github.com/get10101/10101/issues/2370", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1670356057
Do not call peer_manager.process_events manually I left this note a while back and running the ln-dlc-node tests now confirms that the snippet is not needed. bors r+
gharchive/pull-request
2023-04-17T03:55:05
2025-04-01T04:34:20.743002
{ "authors": [ "holzeis", "luckysori" ], "repo": "get10101/10101", "url": "https://github.com/get10101/10101/pull/437", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1594776288
Download dialog: after the process is killed during a download, the notification cannot be dismissed or tapped

[Warning: please be sure to fill in the issue template; do not take chances. Any issue found not to be filled in carefully according to the template will be closed directly.]

Problem description
Framework version (required): 13.1
Problem description (required): After the process was killed, the notification did not disappear.
Steps to reproduce (required): Open the in-app upgrade dialog, then kill the process.
Reproducible every time (required): Yes
Phone on which the problem occurs (required): Xiaomi MIX 4
Android version on which the problem occurs (required): 12

Please answer
Does it occur on some models or all models (required): Xiaomi MIX 4, Android 12
Does the problem exist in the latest version of AndroidProject (required): Yes
Have you consulted the framework documentation and still been unable to solve it (required): Yes
Has anyone raised a similar issue before (required): No
Can the problem be reproduced with the AndroidProject project (required): Yes
Does the problem occur when using the native permission API (required): No

Other
Provide the error stack trace (required if there is an error; note, do not paste obfuscated code stacks)
Provide screenshots or video (provide as needed; not mandatory)
Provide a solution (if you have already solved it; not mandatory)

Young man, following the reproduction steps you provided, I was unable to reproduce this problem. The exact steps were:
1. Open the upgrade dialog
2. Tap "Update now"
3. Check whether the notification bar shows the download progress
4. Finally, swipe the app away from the recent-tasks card
The result: the app was killed, and the download-progress notification disappeared too. My phone is a Xiaomi 12, Android 13, MIUI 14.0.4.
gharchive/issue
2023-02-22T09:41:36
2025-04-01T04:34:20.748330
{ "authors": [ "183883324", "getActivity" ], "repo": "getActivity/AndroidProject", "url": "https://github.com/getActivity/AndroidProject/issues/111", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1945856042
[Feature] nostr relays should be configurable

Feature description: The relay list for NIP-07 getRelays should be configurable in settings instead of hardcoded here: https://github.com/getAlby/lightning-browser-extension/blob/62ae77c52a65ff1c3b67701d95422c8a14096bf7/src/extension/background-script/actions/nostr/getRelays.ts#L15-L26

Describe the solution: Configurable in settings.

Describe alternatives: No response

Additional context: No response

Are you working on this? None

I second this. The existing default list is out of date as well, resulting in clients making repeated unnecessary attempts to connect to relays that don't respond for whatever reason.

I also came across this issue because NDK tries to use these relays by default, which leads to a poor experience.

@badonyx @Yeghro is there any benefit to configuring this rather than just removing this optional method and leaving it up to the app to choose its bootstrap relays? It's difficult for Alby to keep these relays up to date and choose "what is a good relay", so I'm not sure it even makes sense.

@rolznz That would be a better approach. The apps can be configured to just fetch the relay list associated with the user's pubkey. Shouldn't be too complicated.

I am not sure it is the right path to enable managing relays in the Alby extension, as currently it simply acts as a signer. If there is more nostr functionality built in later, maybe this would make more sense. And in order to fetch a relay list associated with the user's pubkey, the extension needs to have at least one relay in common with the ones the user posted their relay list to. For now I think the simpler and more effective option is to just remove this method.

It would be nice if the user could customize it in the extension... so you could bring your relays with you to multiple clients; otherwise I'm actually never sure what relays I use, whether Primal has the same ones as Nostrudel, etc.
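If configurability were implemented, the shape could be as simple as merging user-defined relays over the bootstrap defaults. A hypothetical sketch follows; the settings key, the storage helper, and the default relay URL are all made up for illustration and are not the extension's actual API:

```ts
// NIP-07 getRelays() returns a map of relay URL -> read/write policy.
type RelayPolicy = { read: boolean; write: boolean };

const DEFAULT_RELAYS: Record<string, RelayPolicy> = {
  'wss://relay.damus.io': { read: true, write: true }, // example bootstrap relay
};

async function getRelays(
  readSetting: (key: string) => Promise<string | null> // hypothetical settings store
): Promise<Record<string, RelayPolicy>> {
  const raw = await readSetting('nostr.relays'); // assumed to hold JSON
  return raw ? { ...DEFAULT_RELAYS, ...JSON.parse(raw) } : DEFAULT_RELAYS;
}
```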
gharchive/issue
2023-10-16T18:35:08
2025-04-01T04:34:20.753414
{ "authors": [ "Yeghro", "badonyx", "itstomekk", "rolznz" ], "repo": "getAlby/lightning-browser-extension", "url": "https://github.com/getAlby/lightning-browser-extension/issues/2823", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
935262206
GMail logo vertically stretched

Bug description: The GMail logo is vertically stretched in the vertical 'app bar' on the left.

Steps to reproduce: Click 'add service' and find 'Gmail'; notice that the logo is correctly shaped. Then proceed to actually add the service, and see that the logo is vertically stretched.

Expected behavior: Display the same logo version as in the services overview.

Screenshots

Environment: Operating System: Ubuntu; Ferdi Version: 5.6.0 nightly 71; Server: Using without an account; Debug information:

Fixed in https://github.com/getferdi/recipes/pull/577
gharchive/issue
2021-07-01T22:31:37
2025-04-01T04:34:20.864223
{ "authors": [ "keunes", "vraravam" ], "repo": "getferdi/ferdi", "url": "https://github.com/getferdi/ferdi/issues/1584", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
573941274
Invalid translations for Brazilian Portuguese

Currently there is a mistake with the translations. The file pt.js contains Brazilian Portuguese translations but should contain European Portuguese (Portugal).

Expected behavior: It's necessary to move the file content from pt.js to pt-BR.js and fix the Crowdin integration.

Thank you for the report. This problem should be fixed via #420
gharchive/issue
2020-03-02T12:27:46
2025-04-01T04:34:20.866673
{ "authors": [ "leandrogehlen", "vantezzen" ], "repo": "getferdi/ferdi", "url": "https://github.com/getferdi/ferdi/issues/424", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
957198344
Moved 'jss' to a runtime dependency from a dev dependency

Pre-flight Checklist

Please remember that if you are logging a bug for some service that has stopped working or is working incorrectly, please log the bug here. If you are requesting support for a new service in Ferdi, please log it here. Please remember to read the self-help documentation, in case it helps you unblock yourself for issues related to older versions of recipes that were installed on your machine. (These will get automatically upgraded when you upgrade to the newer versions of Ferdi, but to get new recipes between Ferdi releases, this documentation is quite useful.) Please consider supporting Ferdi! 👉 https://github.com/sponsors/getferdi 👉 https://opencollective.com/getferdi/donate

Please ensure you've completed all of the following.
- [x] I have read the Contributing Guidelines for this project.
- [x] I agree to follow the Code of Conduct that this project adheres to.
- [x] I have searched the issue tracker for a feature request that matches the one I want to file, without success.

Description of Change: Moved 'jss' to a runtime dependency from a dev dependency. Regenerated 'package-lock.json' from scratch.

Motivation and Context: Fixing the broken nightly build.

Screenshots

Checklist
- [x] My pull request is properly named
- [x] The changes respect the code style of the project (npm run prepare-code)
- [x] npm test passes
- [x] I tested/previewed my changes locally

Release Notes: Fixes #1711

@mhatvan, please see these changes. I will merge once all the CI builds pass.

Thank you, Vijay, great updates! Sorry for breaking things; I was not able to reproduce this because, as you said yourself, this only occurred in prod.
gharchive/pull-request
2021-07-31T08:10:12
2025-04-01T04:34:20.874828
{ "authors": [ "mhatvan", "vraravam" ], "repo": "getferdi/ferdi", "url": "https://github.com/getferdi/ferdi/pull/1712", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
414462951
Dynamic parameter could not be resolved when using a CSV file in a step

Using a special parameter (csv) to provide data to a step in a specification.

Expected behavior: No ParseError; should be able to read the data from the CSV file.

Actual behavior: Dynamic parameter table:resources/data.csv could not be resolved

Steps to reproduce:
1. Create a spec with at least 1 scenario
2. Create a step in the scenario which takes a CSV file as a special parameter (table:data.csv)
3. Run "gauge validate specs/" in the project directory

Gauge version: 1.0.4
Plugins: flash (0.0.1), html-report (4.0.7), python (0.3.5), screenshot (0.0.1), spectacle (0.1.3)

Similar to #414

@singharsh The relative path to the CSV file is with respect to your project root. So if your data.csv is inside the specs directory, the parameter value will be <table:specs/data.csv>. Say your data.csv file is inside a folder named resources under your project root; then the parameter will be <table:resources/data.csv>.

@Apoorva-GA Tried both of them and still getting ParseError. Never mind, false alarm. Got it working. My CSV was incorrect. Thanks for the help.

Hi all, I am new to Gauge automation, using Gauge/JS in VS Code. I don't know how to set the path and get the values from an xlsx file and write them into text form fields. Example: in the xlsx file, values are given like Email | Password, test@mail.com | tester. Kindly help me with the specification and step implementation. Thanks.

@Sathyan87 please read the documentation at https://docs.gauge.org/execution.html?os=macos&language=javascript&ide=vscode#external-csv-for-data-table Also it's better to ask for help at https://spectrum.chat/gauge?tab=posts
gharchive/issue
2019-02-26T07:09:18
2025-04-01T04:34:20.885639
{ "authors": [ "Apoorva-GA", "Sathyan87", "singharsh", "singhverse", "zabil" ], "repo": "getgauge/gauge", "url": "https://github.com/getgauge/gauge/issues/1354", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
627827829
Ability to pin plugin versions in manifest.json
Description
By default, when running gauge it installs the latest versions of the plugins automatically in the background. This results in unpredictable versions when there are newer versions of plugins. It would be great if we could pin the versions in the manifest per plugin, so we can have more predictable builds across multiple runs and systems.
What command(s) did you run when you found the bug?
For e.g. gauge run specs
Output, stack trace or logs related to the bug
Versions
Gauge (Output of gauge -v) All
Node.js/Java/Python/.Net/Ruby version
Operating System information All
IDE information
@haroon-sheikh - thanks for the idea - curious to know what issues you see with plugin upgrades? From the initial days, we've stayed away from pinning plugin versions and instead focused on backward compatibility. For the rare cases where backward compatibility had to be sacrificed, gauge/plugins use a version range. ex: https://github.com/getgauge/gauge-java/blob/master/java.json#L40 marks gauge-java 0.7.8 to work with minimum version 1.0.7 of gauge.
This is one of the problems:
Installed version of gauge-java(0.7.8) does not match with dependency gauge-java(0.7.6) specified in pom.xml file
That is something we included, to let users know that gauge install java would install the latest gauge-java but it wouldn't update pom.xml or build.gradle. On a related note, I think version pinning would be a global feature, rolling it into all language runners. Every language ecosystem has its own dependency management, and this would add another layer on top.
an alternative
I believe most of the problems stem from the fact that gauge-java is a fat jar, containing the types required for authoring, and the runtime required for execution. This has some effects:
- When running via Maven/Gradle, Maven/Gradle manage the dependency. It so happens that gauge checks the plugins directory, figures it has gauge-java (say, 0.7.8) and tries to communicate accordingly. But Maven/Gradle load the version mentioned in the project, so it could load 0.7.4, and this causes the runtime incompatibility.
- Any project that includes gauge-java as a dependency inherits its transitive dependencies. This is unnecessary, and can lead to version conflicts and longer build times!
I think one way out of this (and cleaner IMO) is to separate gauge-types and gauge-runtime. gauge-types would contain Step, *Hook, ScreenshotGrabber etc. /cc @getgauge/core @zabil
Version pinning is a reasonable mechanism adopted in the Ruby and Node ecosystems, as they depend on a lot of external packages which in turn have their own dependency graphs. However, Gauge and its plugins do not have many third-party dependencies. There have been a lot of issues with the recent upgrades as new versions of Gauge and its plugins moved to using gRPC and changed how reports are generated, for a cleaner code base and performance. Now that the migration is complete there will hopefully be no more breaking changes that would require version pinning. I am closing this issue for now, but please feel free to discuss this further. Will re-open if this is required after the next few releases.
I ended up having this issue today because gauge-ts updated and asks for Node 20, and my CI (GitHub Actions) runs Node 18. I could just update my Node, but not being able to pin a plugin version makes the build not so reproducible (which is an issue for me in principle and for legal reasons).
Lucky for me, I found a workaround: I download the plugin version I want as a zip and install it with gauge install my-plugin-name -f /path/to/zip/file (a sketch is shown below). I put that in a script that I run at the same time as I install gauge (npm in my case).
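A minimal sketch of that pinning workaround, run before the rest of the build. This is only an illustration: the plugin name, version and release-asset URL pattern are assumptions, so check the plugin's releases page for the real artifact name.

#!/bin/sh
# Hypothetical pinning script; adjust PLUGIN/VERSION and the URL to a real release asset.
set -e
PLUGIN=ts
VERSION=0.1.0
curl -fsSL -o "gauge-${PLUGIN}-${VERSION}.zip" \
  "https://github.com/getgauge/gauge-${PLUGIN}/releases/download/v${VERSION}/gauge-${PLUGIN}-${VERSION}.zip"
# Install from the local zip instead of letting gauge fetch the latest version.
gauge install "${PLUGIN}" -f "gauge-${PLUGIN}-${VERSION}.zip"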
gharchive/issue
2020-05-30T21:38:19
2025-04-01T04:34:20.896024
{ "authors": [ "FaustXVI", "haroon-sheikh", "sriv", "zabil" ], "repo": "getgauge/gauge", "url": "https://github.com/getgauge/gauge/issues/1656", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1563661295
Can we say that the input for an NFT Transfer occurred like this? https://github.com/getgems-io/nft-contracts/blob/debcd8516b91320fa9b23bff6636002d639e3f26/packages/contracts/nft-fixprice-sale-v3/NftFixpriceSaleV3.spec.ts#L290
Yes, you can.
gharchive/issue
2023-01-31T05:06:10
2025-04-01T04:34:20.897870
{ "authors": [ "MMYYSS", "howardpen9", "stels-cs" ], "repo": "getgems-io/nft-contracts", "url": "https://github.com/getgems-io/nft-contracts/issues/20", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
181506934
Server Error 500 when Saving Docs Maybe you guys can help me figure this out. I get seemingly random 500 Internal Server errors in Grav. This issue only affects the content writing portion of Grav. I've done test cases and isolated some triggers: Images. Removing images fixes the issue for some articles. Doesn't for others. Character count. This affects new pages. Adding more than 500 characters (estimate) is a trigger. Content. For some pages, I couldn't get around the 500 error without removing all content. Adding a single character and saving will trigger the 500 error. I created a test case spreadsheet. Here's the link: https://docs.google.com/a/kmapsystems.com/spreadsheets/d/1DoDLRpripwSY-tElDf2ub1KKU6-yw3LN_l5wRaJCPbM/edit?usp=sharing Whenever I run into an error, I log it in this spreadsheet. If you need any other info, please let me know. Thanks! Sarto Hmm this sounds very recognizable. If you're on a server, can you try replicating this issue locally, to help identify if the problem is related to a server setting, or not? It is on a server but replicating the issue locally isn't an option. The server is hosted. Do I have any other options? We did also recently update the Grav base and admin UI. I've never seen this problem so my guess is it's specific to the server, but that's just my guess, and easiest thing would be you setting up Grav in a local MAMP or similar setup, and copy the user/ folder of the live site there, so the site is replicated local, and you can see yourself if the issue is reproducible. Ok, here's an example of text that I cannot save in Grav's admin UI. https://docs.google.com/a/kmapsystems.com/document/d/1twi_oLBskT16GV5FW5vyeKOOk8U-CgKuQkezdhFvqVs/edit?usp=sharing The red highlighted text triggers errors in Grav. Maybe it's a specific word or phrase or even formatting that I'm not picking up? Can you take a look and possibly see if you can enter the text into a Grav article and save it without errors. Thanks so much! Sarto You mind just pasting the text here, or making the document publicly accessible? [toc] You can manage all active programs and drafts within the Programs tab of your Admin Console. Edit, Clone, Delete, Activate or Disable a program right from this page. Edit your Program Once you have created a Program, you can always go back and update any pages or rewards. Follow these steps to editing your program: In your Admin console, click on the Programs tab. Select the name of the program you want to edit. This should take you to the Program page. In the lower half of the page is the Edit Pages section. Select the Edit button next to the section you want to edit. !!! If you are looking to make changes to your site setup, you can access those pages in Settings. ---------------START OF ERROR---------------------------------- Clone your Program To copy an existing program in its entirety, use the Cloning option by selecting the "Clone" button under the Actions column. This will create a new draft program that is a copy of the program you selected. Other Actions Remove an active program or draft by selecting the orange "Delete Button" and confirm the deletion. You can also deactivate a program by selecting the "Move to Draft" button. This will disable all URLs and move the program to Draft Programs. Activate Program or Make Program Inactive - If you have not activated your program, this button will make it active and publicly accessible. If your program is already active, this button will "turn off" your program and make it no longer accessible. 
View Details - This takes you to the Program page. There you can view your member and referral count, edit your program pages, and view integration options. Clone - Clones your program and creates a copy. This copy will be in your drafts. Default - Only available for activated programs. Makes program your default program. Delete - This will delete your program from your account. It and all the data in it will no longer exist. Here's a txt file: Managing Programs.txt Nothing wrong, I don't see any issue here. Unless the problem is the image, which I don't have. Same issue if you remove the images from the text? Could it be related to file permissions? I remember I had some random 500 errors on pages where new or updated content didn't had the correct permissions (iirc: permissions were root:root whereas I needed grav:root) I think the issue did have something to do with permissions. I ended up contacting the server host about it and the issue was fixed. Thanks for your help and support guys, I really appreciate it! I made sure to vote for Grav for the 2016 CMS Critics Award. Good luck guys :) Good to hear that! Would be nice to know what was wrong about them, if someone else has a similar issue @ghostinhershell Do you know what the server host had to change in order to fix this? I'm having very similar issues and have tried everything I can on my end. Thanks I was able to get this sorted out with my hosting provider. They said it was a ModSecurity issue: Looks like you're tripping a ModSecurity rule when you add the content, which is blocking the request and resulting in the 500 error. The particular rule you're hitting is common among Wordpress sites, so I'll go ahead and whitelist it to prevent this from happening again. Worked like a charm!
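For anyone hitting the same thing on a server they control, the host's fix can be approximated with a ModSecurity exclusion. This is only a sketch: the rule ID 950901 is a placeholder (take the real ID from your ModSecurity audit log), and scoping the exclusion to the admin path is safer than disabling the rule site-wide.

# Hypothetical Apache + ModSecurity exclusion for the Grav admin routes.
<LocationMatch "^/admin">
    # Replace 950901 with the rule ID reported in modsec_audit.log.
    SecRuleRemoveById 950901
</LocationMatch>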
gharchive/issue
2016-10-06T19:40:43
2025-04-01T04:34:20.918805
{ "authors": [ "diegovogel", "flaviocopes", "fourroses666", "ghostinhershell", "nepomuk13", "rhukster" ], "repo": "getgrav/grav", "url": "https://github.com/getgrav/grav/issues/1094", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
359866808
Incorrect routing on duplicate slashes
URL pathnames prefixed by duplicate slashes ('//', '///', ...) are incorrectly interpreted as absolute URLs. At present, the URL https://learn.getgrav.org//FAKE_HOSTNAME/content/routing is resolved to https://learn.getgrav.org/content/routing. This is caused by the $bits = parse_url($uri); assignment in Uri.php.
The word "incorrectly" implies that there's a "correct" way to interpret those slashes. But if you look at the relevant RFCs, I can find no prescribed normalization of such slashes. Did I miss something? So URI paths containing such slashes are simply invalid, and the result of running such a URL through various tools is undefined. parse_url is a built-in PHP function. Grav can't do anything about it, short of rolling its own.
A URL starting with // is shorthand for the current protocol, which is either http:// or https://. That said, the behaviour you observed on the learn site is wrong; it should end up at a 404 page.
Confirmed this to be a bug in the Grav\Common\Uri class. The issue isn't present in other libraries.
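A sketch of one possible normalization, assuming duplicate slashes should be collapsed in path-only URIs before they reach parse_url(). This is not the actual Grav fix; scheme-relative URLs like //host/path would need separate handling.

<?php
// Collapse runs of slashes in a path-only URI so parse_url() doesn't
// mistake '//FAKE_HOSTNAME/...' for a protocol-relative URL.
function normalizeUriPath(string $uri): string
{
    return preg_replace('#/{2,}#', '/', $uri);
}

var_dump(parse_url(normalizeUriPath('//FAKE_HOSTNAME/content/routing')));
// ['path' => '/FAKE_HOSTNAME/content/routing']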
gharchive/issue
2018-09-13T12:06:50
2025-04-01T04:34:20.923628
{ "authors": [ "Perlkonig", "hundblue", "mahagr" ], "repo": "getgrav/grav", "url": "https://github.com/getgrav/grav/issues/2184", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
119987212
No routes available in the CLI :-/

protected function serve()
{
    $pages = self::getGrav()['pages'];
    $routes = $pages->routes();
    $this->output->writeln(var_export($routes, true));
}

-bash-4.1$ php bin/plugin staticgenerator log
array (
)

Any chance to be able to get the routes so I can export the site with Twig? :+1:
The CLI only has the basics to get at configuration and such. The whole of Grav is not run, so that means the pages are not processed. My suggestion is that in your method, you mimic what is being done in Grav itself and create a new Pages object, then call: $pages->init(); There might be more involved to get that all working, but all the code that Grav uses is there, so it should be easy to track down what's required.
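Following that suggestion, a hedged sketch of what the command method might look like with the pages initialized first. More setup may be needed, as noted above; the init() call is the key difference.

<?php
protected function serve()
{
    $pages = self::getGrav()['pages'];
    $pages->init(); // process pages so routes() is populated outside the web request cycle
    $routes = $pages->routes();
    $this->output->writeln(var_export($routes, true));
}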
gharchive/issue
2015-12-02T16:55:04
2025-04-01T04:34:20.925793
{ "authors": [ "CoolGoose", "rhukster" ], "repo": "getgrav/grav", "url": "https://github.com/getgrav/grav/issues/499", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2068863506
introduce a more prod ready helm chart for lago
- use helm apiVersion v2
- having separated template folders for each deployment
- adjust all templates to use most of the options when generating a new chart via helm create
- add a GitHub Action to publish the chart via GitHub Pages
- add minio as an optional chart dependency
- add some examples - see charts/lago/examples.md
- allow the use of existing secrets
- introduce dedicated ingress objects for front and api
- introduce init-containers to check if the db is available / schema exists
- add health checks whenever possible
- make the events worker optional
Hello @grafjo Thanks a lot for this PR. I know that we already have some users in production with the actual version of the chart; could we keep the values.yaml format and improve the deployments/services only? It will avoid a breaking change for them.
Hey @jdenquin I'm aware that this change is one big breaking change (or a lot of small ones :-)). How can we deal with possible breaking changes in the future? Somehow they will pop up, and some users that are using the released version of this chart will have to do some migrations. If we stick to the current values.yaml attributes / schema, even adopting the improvement of new labels used via deployment and service (following Helm standard labels) is a breaking change on the k8s side. Maybe we can have a chat about that somewhere else, e.g. the Lago Slack?
yes feel free to send me a message on Slack, or open a thread on #contributions
@grafjo I'm working on the v1 version and I'll take some of your changes!
Hi @grafjo, Thank you so much for your PR and all the awesome features you've suggested! 🙏 While we love your ideas, we're currently focused on avoiding breaking changes as some users still pull directly from the repo. We've created a separate PR that incorporates some of your proposals, while maintaining backward compatibility. We really appreciate your contribution and look forward to more collaborations in the future!
gharchive/pull-request
2024-01-06T22:31:35
2025-04-01T04:34:20.979691
{ "authors": [ "electrosenpai", "grafjo", "jdenquin" ], "repo": "getlago/lago-helm-charts", "url": "https://github.com/getlago/lago-helm-charts/pull/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2391561324
Github app to provide blast radius or test plan
Description
Develop a new GitHub app that listens to webhooks on PR creation, PR updates, and PR comments that mention the bot with a command. The bot should integrate with the Momentum app, and users who install the bot need to be signed up for the Momentum app.
Workflow
1. User signs up to Momentum and installs the app on their repo. There should be a user preference page in the UI that has a toggle to turn PR integration on GitHub on or off. (not part of this task)
2. Receive webhook: handle webhooks differently for PR update, PR create, comment with mention, and comment without mention.
PR update:
- Parse the webhook body for the branch name, repo name, and base branch name.
- Retrieve the project ID using the repo name and branch name (in case of multiple, choose the oldest one).
- Fetch the blast radius for the project using the project ID and base branch name.
- Comment on the PR with the blast radius information in a table format.
PR create:
- Parse the webhook body for the branch name, repo name, and base branch name.
- Since we need the user ID against which we need to create the project, retrieve the user ID from the project table by querying using the repo name; in the case that multiple users have linked the same repo on Momentum, choose the oldest user.
- Create a project for the branch and repo.
- Fetch the blast radius for the project and base branch name.
- Comment on the PR with the blast radius information in a table format.
Comment with mention:
- Listen for a /plan {identifier} command in the comment that mentions the bot.
- Parse the webhook body for the repo name and branch name.
- Retrieve the project ID using the repo name and branch name.
- Fetch the test plan for the specified identifier.
- Comment on the PR with the test plan information and the project ID.
Comment without mention:
- Ignore the comment.
Relevant documentation:
- Webhook events and payloads - Github
- momentum.sh docs
- any open source PR review bot
Thanks @dhirenmathur for a detailed description! I'd like to work on this!
@parthfloyd awesome, I've assigned you the issue, let me know if you need any more context!
@dhirenmathur I'm looking forward to incorporating the following changes:
[ ] Create a request_handler function which delegates function calls based on GitHub events on the webhook.
[ ] Create & integrate a function to post a comment on a GitHub PR.
[ ] Create & integrate a function to parse the comment body.
[ ] Update documentation.
Please feel free to add any feedback on this.
Sounds good overall, can you provide more detail around the flow of the request handler?
Yes. Firstly I'll fetch the event type (installation_repositories, issue_comment, pull_request) using the header X-Github-Event, then check its action and call the required function (ideally as a coroutine object). For example: for a new PR, event: pull_request, action: opened, fetching the blast radius for the branch & commenting on the PR in a table format. A minimal sketch of this dispatch follows below.
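A minimal sketch of that dispatch, using Flask purely for illustration; the route and the handle_* helpers are hypothetical placeholders, not part of the Momentum codebase.

# Hypothetical webhook dispatcher; the handle_* functions are stubs.
from flask import Flask, request

app = Flask(__name__)

def handle_pr_created(payload): ...   # create project, comment blast radius
def handle_pr_updated(payload): ...   # look up project, comment blast radius
def handle_comment(payload): ...      # parse "/plan {identifier}" if the bot is mentioned

@app.route("/webhook", methods=["POST"])
def request_handler():
    event = request.headers.get("X-GitHub-Event")
    payload = request.get_json(silent=True) or {}
    action = payload.get("action")
    if event == "pull_request" and action == "opened":
        handle_pr_created(payload)
    elif event == "pull_request" and action == "synchronize":
        handle_pr_updated(payload)
    elif event == "issue_comment" and action == "created":
        handle_comment(payload)
    return "", 204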
gharchive/issue
2024-07-04T23:57:19
2025-04-01T04:34:21.019925
{ "authors": [ "dhirenmathur", "parthfloyd" ], "repo": "getmomentum/momentum-core", "url": "https://github.com/getmomentum/momentum-core/issues/28", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2032137207
Draft section is visible in all form views in a project
Problem description
The Draft section is visible in form views in a project, so for example it's possible to start creating a draft and go from this view to Submissions.
URL of the page
https://staging.getodk.cloud/
https://test.getodk.cloud/
Steps to reproduce the problem
1. Navigate to a form in a project.
2. Go to a different tab (e.g. Submissions, Settings).
3. Try to create a draft from the chosen tab.
4. From the Draft section, go to e.g. Submissions.
Screenshot
Expected behavior
The Draft section shouldn't be visible in other parts of the form sections.
Central version shown in version.txt
staging versions:
versions:
c0bfac522d92a9224242f90a3e90df09f12ab540 (v2023.4.0-6-gc0bfac5)
 +4b5d237c5d83f7fc5c64dce79229f9edd97cff62 client (v2023.4.0-46-g4b5d237c)
 +38410ebe30f6082d9cebcd2e2bc9f38fbd07b4e2 server (v2023.4.0-48-g38410ebe)
test.cloud.getodk.org versions:
65d38c5de66dc07245632a19f3458035337f1215 (v2023.4.0)
95326b9ad66ec31c93bdb68c29f8797975d93fd2 client (v2023.4.0)
63fdf150e1ed81e3b1059050f7b1ba323931ab24 server (v2023.4.0)
Browser version
Chrome 119.0.6045.159 (64-bit), Firefox 120.0 (64-bit)
I don't think we're planning to make changes here for .5, so I'm going to go ahead and close this issue for now. @dbemke, let me know if you think we should make any changes here, or if you have other thoughts about this issue.
gharchive/issue
2023-12-08T08:06:18
2025-04-01T04:34:21.034116
{ "authors": [ "dbemke", "matthew-white" ], "repo": "getodk/central", "url": "https://github.com/getodk/central/issues/563", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1460004545
Transpiled React code detects wrong names return (0, c.BX)(a.Fragment, { children: [ (0, c.tZ)(t, { children: (0, c.tZ)(s.Z, { onSubmitError: (e) => { (0, n.Iw)( (0, i._N)("Something went wrong! [error]", { error: e }) ); }, children: w.map((e) => (0, c.tZ)(o.Z, { field: e }, e.name)), }), }), ], }); This is the simplified minified snippet for this original: https://github.com/getsentry/sentry/blob/6bb5c0b800104c70456dc9fcddc7d94d5132fe90/static/app/components/modals/createReleaseIntegrationModal.tsx#L43-L100 The name is being detected as <object>.childrenchildrenonSubmitError, whereas it should probably just be <object>.onSubmitError. The parent object is being passed as an argument to a function call at which point we should stop our search upwards the syntax tree. I'll take this one. Do you think it should be <object>.onSubmitError or maybe rather children.onSubmitError? 🤔
gharchive/issue
2022-11-22T14:40:32
2025-04-01T04:34:21.081107
{ "authors": [ "Swatinem", "kamilogorek" ], "repo": "getsentry/js-source-scopes", "url": "https://github.com/getsentry/js-source-scopes/issues/17", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
133085914
When merging issues, the assignee is (temporarily) removed To reproduce: From the issues stream, have two distinct issues On a single issue, assign a user Merge the two issues The resolved, "final" issue will appear to have removed the assigned user The user is still assigned on the backend. Refreshing the page will restore the UI properly and you will see the user assigned. lol wrong project – free contribs!
gharchive/issue
2016-02-11T20:57:40
2025-04-01T04:34:21.082893
{ "authors": [ "benvinegar" ], "repo": "getsentry/raven-js", "url": "https://github.com/getsentry/raven-js/issues/505", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
500578113
Sentry does not report errors in dockers scratch image When using sentry in a go project, which is statically compiled and executed in a docker scratch image sentry captured errors will not end up in the webinterface. Example: https://github.com/mwohlert/sentry_scratch_golang My first guess is because you're lacking SSL certificates in scratch. We document how to work around this here: https://docs.sentry.io/platforms/go/migration/#providing-ssl-certificates In raven-go, we used to bundle this by default and always use certifi for certificates, but this proved to be very controversial and we chose to just document it for the relatively rare situations where there aren't any certificates available.
gharchive/issue
2019-09-30T23:33:24
2025-04-01T04:34:21.146344
{ "authors": [ "mattrobenolt", "mwohlert" ], "repo": "getsentry/sentry-go", "url": "https://github.com/getsentry/sentry-go/issues/63", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
609578801
Update echo middleware to get Hub instance from context Modified to support getting sentry.Hub instance from parent Context to include extra fields defined in the parent context. Background -- we use sentry with echo web framework in a AWS lambda environment behind AWS API Gateway. We have top-level sentry instance defined with extra fields. The purpose of this change is to use the existing hub on context by default. @j--wong would you have a short example of how you're using this? I would like to understand what use case this is solving, because if we change the echo middleware we may want to consider the same change for other frameworks. Hi @rhcarvalho, thanks for the reply. We are building rest apis in AWS using AWS API Gateway > Lambda > Echo. We configure logging, tracing (honeycomb) and error handling/monitoring (sentry) at the Lambda layer with extra information. The key use case for us is we want that same set of extra information to flow to sentry, honeycomb and logs without having to reconfigure them with every error/log/trace. We configure a hub instance at Lambda layer and attach it to the context. We'd like Echo layer (ie., sentry-echo middleware) to use the same hub instance. Without this PR, those extra information is not preserved in sentry-echo middleware as it creates a new Hub instance internally instead of getting existing one from the parent context -- which is what I am trying to address with this PR. We configure honeycomb the same way. Honeycomb version of echo middleware looks for existing trace from parent context first, and create a new one only if there is no existing trace in parent context. As seen here: https://github.com/honeycombio/beeline-go/blob/d24142d664/wrappers/common/common.go#L45-L57 I believe this should be applicable to other web frameworks as well. I think it's sensible to use existing hub from context as default. Maybe you could introduce this behind an option like UseHubFromContext bool or something along those lines if you want to provide a toggle for this behaviour. Sorry it was a bit long, hope this all makes sense 😄 We configure [...] error handling/monitoring (sentry) at the Lambda layer with extra information. @j--wong thanks for the description. Do you have a short piece of code to demonstrate how you're putting things together? Particularly interested in the part related to sentry-go. I'd like to see precisely how you're inserting a hub in the request context before the sentry middleware. In this example code there is a demonstration of how to add a tag to all Sentry events: https://github.com/getsentry/sentry-go/blob/af3076c5259eb407873c99da142af3ee29051105/example/echo/main.go#L35-L46 With the same pattern you can configure the request's hub/scope arbitrarily. I am imagining you have a middleware that is running before Sentry's middleware, that's why you are trying to add a hub to the Echo context manually?! I am imagining you have a middleware that is running before Sentry's middleware, that's why you are trying to add a hub to the Echo context manually?! Hi @rhcarvalho, in my case, I already have a Hub instance on context with all extra fields set. I just wish the sentry-echo middleware could use that instance. Yes, I got a middleware running before sentry-echo middleware (it's lambda middlware, not echo middleware). The code looks like this: return func(next lambda.Handler) lambda.Handler { ... ... 
return func(ctx context.Context, payload []byte) ([]byte, error) { lc, _ := lambdacontext.FromContext(ctx) hub := sentry.CurrentHub().Clone() hub.Scope().SetExtras(cfg.Fields) // extra information hub.Scope().SetExtra("aws_request_id", lc.AwsRequestID) hub.Scope().SetExtra("...",...) // more lines omitted ctx = sentry.SetHubOnContext(ctx, hub) ... ... // invoke next lambda middleware; from this point forward, we have hub instance on context return next.Invoke(ctx, payload) } } The reason I configured sentry this way in Lambda layer is so that Sentry can catch lambda errors too, not just errors from Echo. The other approach I could take is, like in the example you shared, duplicate above code in another echo middleware func. So I add whatever fields I want to add after sentry middleware. I guess the key use-case question I'd like to ask is -- do you believe this is a valid/reasonable scenario for sentry/echo users to have Hub instance configured higher up in the chain before echo middleware run? To me, that's exactly my use case 😄. Maybe not many people are doing Lambda > Echo + Sentry combo for their apis. So this may not be common use case? This is a reasonable use case. I'll follow up with adding similar behavior to the other integrations. Thanks, @j--wong! This will be part of the upcoming next release. Great stuff @rhcarvalho! Thanks for that 👍
gharchive/pull-request
2020-04-30T05:00:17
2025-04-01T04:34:21.156913
{ "authors": [ "j--wong", "rhcarvalho" ], "repo": "getsentry/sentry-go", "url": "https://github.com/getsentry/sentry-go/pull/217", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
808315576
traces_sample_rate still required with traces_sampler Environment How do you use Sentry? Sentry SaaS (sentry.io) Which SDK and version? Laravel Steps to Reproduce Delete traces_sample_rate while having traces_sampler Expected Result Application should work. It's confusing, because I was wondering final trace sample rate is traces_sample_rate x traces_sampler or something similar. After many tests I investigated that even if traces_sampler is only necessary option in config you still need traces_sample_rate to send traces. Actual Result After adding traces_sampler, traces_sample_rate shouldn't be necessary Annnnd I'm moving it back because this does seem like a Laravel bug since the SDK handles this properly, sorry for the confusion 😄 @zatorck can you please show what you have set in your config? After testing with this I get tracing events to show up just fine! return [ 'dsn' => env('SENTRY_LARAVEL_DSN'), 'traces_sampler' => function (Sentry\Tracing\SamplingContext $context) { return 1.0; }, ]; It would also help to know the versions of Laravel/Lumen and the Sentry SDK you are running. Hey @stayallive. Super thanks for fast answer. Sentry and their team is awesome. Unfortunately I was too fast making this thread. After more testing I just realized that yesterday I made it in wrong way. So sorry for bothering You. Hope it's ok. Thanks Hi @zatorck, no worries it happens!
gharchive/issue
2021-02-14T22:47:24
2025-04-01T04:34:21.187378
{ "authors": [ "stayallive", "zatorck" ], "repo": "getsentry/sentry-laravel", "url": "https://github.com/getsentry/sentry-laravel/issues/453", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1579264908
Infinite loop on sentry-laravel install, asking for php-http/discovery plugin Environment Latest stable laravel installation, MacOS Ventura 13.1, macports PHP 8.0, 8.1, 8.2 Steps to Reproduce On a clean Laravel 9 install (composer create-project laravel/laravel example-app) Installing sentry (composer require sentry/sentry-laravel) Expected Result Sentry is installed Actual Result Sentry asks for a plugin to be installed (php-http/discovery plugin). If user confirms, an infinite loop begins. A video example from my console. Same happens with sentry package installed in Linux 22.04 LTS production server. composer install breaks the upgrade process because of the plugin asking to be enabled. In my opinion, no extra plugin, let alone discovery script, should be asked from the user during install. I have add this in composer.json config section. it's do not ask anymore, but it still infinite loop "allow-plugins": { "php-http/discovery": true } p/s: have auth0/auth0-php: 8.3 in composer file also cause this behavior. so i think it's not bug of sentry package. may be php-http/discovery package, which is a dependency of sentry/sentry-laravel thanks @ducla5 Actually, "php-http/discovery": true in "allow-plugins" is exactly what happens when you respond "y" to the prompt of trusting http-discovery (0:16 of the video). So, even if I follow your suggestion, composer update falls into the same infinite loop situation. @kolydart. sentry-laravel use php-http/discovery as a dependency. and version 1.15 of php-http/discovery cause this bug. downgrade php-http/discovery to "1.14.3" so you can install sentry-laravel Thanks. I will do. In the meanwhile I tried it in different environemtns (Linux x64, Linux arm) and the bug is there. So, I think this is going to affect many users. I think for the moment, it's better to add conflict in the composer.json as mentioned in the package issue "conflict": { "php-http/discovery": "1.15.0" }, For now, we can't really do much on our end besides locking the version to before 1.15, which also has its downsides.
gharchive/issue
2023-02-10T08:44:27
2025-04-01T04:34:21.194609
{ "authors": [ "cleptric", "ducla5", "kolydart" ], "repo": "getsentry/sentry-laravel", "url": "https://github.com/getsentry/sentry-laravel/issues/650", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1745624857
Unable to set IP address to {{auto}} How do you use Sentry? Sentry Saas (sentry.io) Version 3.19.1 Steps to Reproduce Follow the steps on this page to add auto IP address to user object: https://docs.sentry.io/platforms/php/enriching-events/identify-user/ \Sentry\configureScope(function (\Sentry\State\Scope $scope): void { $scope->setUser(['ip_address' => "{{auto}}"]); }); Expected Result Event is sent and the Sentry server infers the user/server's IP address. Actual Result Error message: Fatal error: Uncaught InvalidArgumentException: The "{{auto}}" value is not a valid IP address. This looks like an issue in our docs. Sentry will infer the IP address from the connection between your app and Sentry's server. This does not apply, as the connection between the app and Sentry will always be the IP of the server the PHP application is running on.
gharchive/issue
2023-06-07T10:55:28
2025-04-01T04:34:21.198408
{ "authors": [ "cleptric", "rodolfoBee" ], "repo": "getsentry/sentry-php", "url": "https://github.com/getsentry/sentry-php/issues/1546", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1634923810
feat: Add ignore_exceptions & ignore_transactions This adds two new options, namely ignore_exceptions and ignore_transactions. ignore_exceptions Make it dead simple to ignore an Exception, without the need to deal with any integrations. I actually marked IgnoreErrorsIntegration as deprecated. Additionally, we take $exception->getPrevious() into account, when applying the option. \Sentry\init([ 'ignore_exceptions' => [BadThingsHappenedException::class], ]); closes https://github.com/getsentry/sentry-php/issues/1426 ignore_transactions Same story, make it dead simple to ignore a transaction, without the need to fiddle around with the traces_sampler or any before_send_transaction stuff. \Sentry\init([ 'ignore_transactions' => ['GET /health'], ]); Got inspired after reading https://stevenwoodson.com/blog/conserving-sentry-transactions-by-ignoring-laravel-routes/. While I understand the reasoning behind these changes, I feel that the client is getting too much responsibility imho. The solution involving an integration was taken from the JS SDK and allows us to decouple the client from all these features, which improves the maintenability of the project. I also remember that we already had a discussion in the past about whether to consider "previous" exceptions, and I think that a real improvement would be to make this configurable: in fact, I may want to ignore all the exceptions, but only if they are the topmost. While I understand the reasoning behind these changes, I feel that the client is getting too much responsibility imho. The solution involving an integration was taken from the JS and Go SDKs and allows us to decouple the client from all these features, which improves the maintenability of the project. I also remember that we already had a discussion in the past about whether to consider "previous" exceptions, and I think that a real improvement would be to make this configurable: in fact, I may want to ignore all the exceptions, but only if they are the topmost. We tend to move away from event processors, as they are very opaque and confusing for people to use. Once https://github.com/getsentry/rfcs/pull/34 lands, we yet again will move more things into the Client, which is fine. @cleptric we also have that for some SDKs as well. https://docs.sentry.io/platforms/java/configuration/options/#ignored-exceptions-for-type It's a very handy utility.
gharchive/pull-request
2023-03-22T01:35:55
2025-04-01T04:34:21.205022
{ "authors": [ "cleptric", "marandaneto", "ste93cry" ], "repo": "getsentry/sentry-php", "url": "https://github.com/getsentry/sentry-php/pull/1503", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1312068882
Celery integration doesn't report uncaught exceptions when initialized as suggested Core or SDK? Platform/SDK Which part? Which one? https://docs.sentry.io/platforms/python/guides/celery/ Description I tried to set up the Celery integration by following https://docs.sentry.io/platforms/python/guides/celery/. I tried to initialize Sentry in response to the worker_process_init signal as mentioned in this passage: There are many valid ways to do this like, for example, using a signal like worker_process_init […] My code looked something like this: # myapp.celery from os import environ from celery import Celery, signals import sentry_sdk app = Celery( 'myapp', broker='redis://localhost:6379', include=['myapp.tasks'], ) def init_sentry(): sentry_sdk.init(debug=True, dsn='…') @signals.worker_process_init.connect def init_worker(**_kwargs): init_sentry() When set up this way, the Sentry initialization ran (I saw a bunch of Sentry debug output when the worker started) but uncaught exceptions in the worker were not reported to Sentry. For example, given the following task: # myapp.tasks import sentry_sdk from sentry_celery_test.celery import app @app.task def task(): sentry_sdk.capture_message('should report this') raise Exception('should also report this') Only the capture_message ended up appearing in Sentry. If I changed myapp.celery like this: --- /Users/stuart/src/sentry_celery_test/myapp/celery.py +++ #<buffer celery.py> @@ -14,7 +14,4 @@ def init_sentry(): sentry_sdk.init(debug=True, dsn=environ['SENTRY_DSN']) - -@signals.worker_process_init.connect -def init_worker(**_kwargs): - init_sentry() +init_sentry() Then the uncaught exception was reported as expected. Using the celeryd_init signal, as discussed in this relevant issue also seemed to work. Suggested Solution Not sure :-) hey @harto thanks a lot for reporting this. I finally investigated this today. You are right that worker_process_init will not work because we patch build_tracer and worker_process_init is called here after build_tracer. Both celeryd_init and worker_init will however work so I will update the docs!
gharchive/issue
2022-07-11T06:04:26
2025-04-01T04:34:21.214733
{ "authors": [ "harto", "sl0thentr0py" ], "repo": "getsentry/sentry-python", "url": "https://github.com/getsentry/sentry-python/issues/1512", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
565538717
WIP: ci: add tests for cw logs Adds tests for #618 actually, one more change before merge.
gharchive/pull-request
2020-02-14T20:22:35
2025-04-01T04:34:21.215743
{ "authors": [ "flyinbutrs" ], "repo": "getsentry/sentry-python", "url": "https://github.com/getsentry/sentry-python/pull/621", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
164584890
Multiple notifications for every instance of an error In Sentry, multiple instances of an error get aggregated. And I get just one email for all those events. Everything is good so far. However, in Slack, we get notifications for each instance of the error. That just floods the slack channel. According to this https://github.com/getsentry/sentry/issues/1403 the issue is fixed in Sentry. As I get just one email and I can see the instances aggregated in Sentry, I'm guessing it's an issue with sentry-slack? This is configurable in your Rules settings in Sentry itself. My guess is you have a rule that says to do this on every event, and not just new/regressions. sentry-slack has no control over choosing to send or not send data. The rules are just the default ones. Can you please help me identify which setting in "Rules" or "Notifications" would say that notifications should be sent for every event? Also, won't emails be using those notifications/rules settings as well? Emails don't get multiple notifications. Just Slack does.
gharchive/issue
2016-07-08T18:06:04
2025-04-01T04:34:21.244665
{ "authors": [ "mattrobenolt", "namit" ], "repo": "getsentry/sentry-slack", "url": "https://github.com/getsentry/sentry-slack/issues/51", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
796823741
Upgrade to v2.2.0 and add Readme
This version will add the whitepaper explorer + other minor improvements. Official change log:
v2.2.0
2021-01-22
- New "Fun" item for the tx containing the whitepaper and new tool to extract the whitepaper and display it
- New fee rate data on /block-analysis pages
- New minor misc peer data available in Bitcoin Core RPC v0.21+
- New gold exchange rate on homepage
- Fix for SSO token generation URL encoding (Thanks @shesek and @Kixunil)
- Fix for /peers map
- Fix for README git clone instructions (Thanks @jonasschnelli)
See https://github.com/janoside/btc-rpc-explorer/blob/master/CHANGELOG.md#v220
Ok understood, I removed the README. I've been writing it mostly for myself anyway, as I was exploring the correct environment variables to use to run it locally.
Thanks @goums!
gharchive/pull-request
2021-01-29T11:57:53
2025-04-01T04:34:21.307765
{ "authors": [ "goums", "lukechilds" ], "repo": "getumbrel/docker-btc-rpc-explorer", "url": "https://github.com/getumbrel/docker-btc-rpc-explorer/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2390089162
Switching from GHCR to DockerHub Description Switching Docker images from GHCR to DockerHub because of call caching issues Also adding options json for usability Related Issue Fixes #2 Completed successful test run via GitHub Actions. Minor change, so pushing through without review.
gharchive/pull-request
2024-07-04T06:51:41
2025-04-01T04:34:21.312282
{ "authors": [ "tefirman" ], "repo": "getwilds/ww-cell-ranger", "url": "https://github.com/getwilds/ww-cell-ranger/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
530048407
Error sending sticker. Describe the bug Error sending sticker (Store.CryptoLib.encryptE2EMedia is not a function) To Reproduce Steps to reproduce the behavior: Sending a sticker. It looks like Whatsapp have updated their website and removed (or moved) that function. I will look into it, thanks for reporting. This is now fixed in the following commit: 53229cfb67f5f5bca71dca10cbf010014ae98ef0 Check it out!
gharchive/issue
2019-11-28T18:59:12
2025-04-01T04:34:21.340930
{ "authors": [ "edgaru", "gfaraj" ], "repo": "gfaraj/super-bot", "url": "https://github.com/gfaraj/super-bot/issues/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1524067167
Update scalatest to 3.2.15
Updates org.scalatest:scalatest from 3.2.12 to 3.2.15. GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala!
Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:

updates.ignore = [
  { groupId = "org.scalatest", artifactId = "scalatest" }
]

Or, add this to slow down future updates of this dependency:

dependencyOverrides = [{
  pullRequests = { frequency = "@monthly" },
  dependency = { groupId = "org.scalatest", artifactId = "scalatest" }
}]

labels: test-library-update, early-semver-patch, semver-spec-patch, commit-count:1
Superseded by #131.
gharchive/pull-request
2023-01-07T19:00:09
2025-04-01T04:34:21.345064
{ "authors": [ "scala-steward" ], "repo": "gfc-collective/gfc-id", "url": "https://github.com/gfc-collective/gfc-id/pull/123", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
57682621
[WIP] Use associated types for device resources
Gets us closer to #416. An alternative to #517
For now I am hard coding the GlDevice as the buffer handle device parameters. Currently getting a lifetime error though:

src/device/lib.rs:203:9: 203:17 error: the associated type `<D as Device>::Buffer` may not live long enough [E0311]
src/device/lib.rs:203         self.raw.get_info()
                              ^~~~~~~~
src/device/lib.rs:203:9: 203:17 help: consider adding an explicit lifetime bound for `<D as Device>::Buffer`
src/device/lib.rs:203         self.raw.get_info()
                              ^~~~~~~~
src/device/lib.rs:202:43: 204:6 note: the associated type `<D as Device>::Buffer` must be valid for the anonymous lifetime #1 defined on the block at 202:42...
src/device/lib.rs:202     pub fn get_info(&self) -> &BufferInfo {
src/device/lib.rs:203         self.raw.get_info()
src/device/lib.rs:204     }
src/device/lib.rs:203:9: 203:17 note: ...so that the reference type `&Handle<<D as Device>::Buffer, BufferInfo>` does not outlive the data it points at
src/device/lib.rs:203         self.raw.get_info()
                              ^~~~~~~~
src/device/lib.rs:203:9: 203:17 error: the associated type `<D as Device>::Buffer` may not live long enough [E0311]
src/device/lib.rs:203         self.raw.get_info()
                              ^~~~~~~~
src/device/lib.rs:203:9: 203:17 help: consider adding an explicit lifetime bound for `<D as Device>::Buffer`
src/device/lib.rs:203         self.raw.get_info()
                              ^~~~~~~~
src/device/lib.rs:202:43: 204:6 note: the associated type `<D as Device>::Buffer` must be valid for the anonymous lifetime #1 defined on the block at 202:42...
src/device/lib.rs:202     pub fn get_info(&self) -> &BufferInfo {
src/device/lib.rs:203         self.raw.get_info()
src/device/lib.rs:204     }
src/device/lib.rs:203:9: 203:17 note: ...so that the reference type `&Handle<<D as Device>::Buffer, BufferInfo>` does not outlive the data it points at
src/device/lib.rs:203         self.raw.get_info()
                              ^~~~~~~~
error: aborting due to 2 previous errors
Could not compile `gfx_device_gl`.

Closing in favor of #564
gharchive/pull-request
2015-02-14T05:53:09
2025-04-01T04:34:21.347351
{ "authors": [ "bjz", "kvark" ], "repo": "gfx-rs/gfx-rs", "url": "https://github.com/gfx-rs/gfx-rs/pull/562", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2571213188
🛑 RGB24/BGR15 Converter is down In cc4e579, RGB24/BGR15 Converter (https://nas.gfx-pro.net/BGR15) was down: HTTP code: 404 Response time: 104 ms Resolved: RGB24/BGR15 Converter is back up in f0a2f34 after 2 days, 17 hours, 27 minutes.
gharchive/issue
2024-10-07T18:55:07
2025-04-01T04:34:21.386138
{ "authors": [ "wdeb" ], "repo": "gfxpronet/upptime", "url": "https://github.com/gfxpronet/upptime/issues/409", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
209931913
ag cannot find accent letters
Version: windows port 0.29.1-1641 (built on 2015-07-04)
System: Windows 7 Pro, language français
Issue: ag cannot find French accent letters, such as ô, é, è, ç, à, etc.
Bonjour,
Having the same problem, I've just done another port of ag for Windows that can find non-ASCII characters... But so far only in text files encoded as UTF-8. Also, it will find files that have non-ASCII characters in their pathname. The sources are there: https://github.com/JFLarvoire/the_silver_searcher
If you don't have Visual C++ for rebuilding it, I've uploaded pre-built versions in: http://jf.larvoire.free.fr/temp/ag.zip
Eventually, I plan to add a dynamic detection of the encoding (ANSI/UTF-8/Unicode), to allow finding non-ASCII characters in any kind of text file.
Jean-François
This should be an issue for the Windows port author. If it were an ag issue, it has been fixed already:

% ag ô /tmp/ag.txt
1:ô, é, è, ç, à

(Linux amd64)
Yes, this was indeed a problem in previous Windows ports, not in Unix builds. A few days ago I released a new version 2.0.0 for Windows with full support for this at last: https://github.com/JFLarvoire/the_silver_searcher/releases
As proposed in my previous post above, it dynamically detects the encoding, and can search both in files encoded in UTF-8 and in the Windows system code page (CP 1252 for West European versions of Windows). And whatever the file encoding, and whatever the current console code page, the strings found will be displayed correctly.
gharchive/issue
2017-02-24T01:07:45
2025-04-01T04:34:21.436490
{ "authors": [ "JFLarvoire", "krigstask", "xc1427" ], "repo": "ggreer/the_silver_searcher", "url": "https://github.com/ggreer/the_silver_searcher/issues/1062", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1867096013
Just a few questions
Hi and thanks so much for creating this useful tool! I had a few questions I'm hoping you can help me with:
Can you explain a bit more what requirements need to happen in order for "Setup State" to show successful? I ask because I had re-installed my app and successfully re-downloaded all data from iCloud; Import/Export showed Successful but Setup was Not Started. It wasn't until I killed and re-launched that it updated.
I have a user who has just over 100 saved objects on their iPhone in my app, but only 94 are syncing to their iPad. New projects aren't syncing between the two devices, and the status page is reading: Export Failed "CKERRORDOMAIN ERROR 2". I can't find much online about this error or what the suggested steps to fix it are. I'm not expecting you to help me troubleshoot, but I'm curious if you've come across this error before and if you've found a successful and repeatable way to address it.
Thanks so much!
Hi @thatvirtualboy , setupState is set to .succeeded(started: startDate, ended: endDate) when NSPersistentCloudKitContainer sends a SyncEvent whose type is .setup and whose startDate and endDate properties are not nil (which in theory means the setup has ended, therefore setup has succeeded). See internal func setProperties(from event: SyncEvent).
If setupState is showing .notStarted, it means that CloudKitSyncMonitor didn't receive one or more events from NSPersistentCloudKitContainer. You might have a race condition in which NSPersistentCloudKitContainer is firing events before your app is detecting them with CloudKitSyncMonitor. (NSPersistentCloudKitContainer publishes a stream of events (of type SyncEvent) telling your app what it's doing. CloudKitSyncMonitor listens to those events and turns them into state variables you can check and display; if it misses events, it won't be able to update the variables). You may just need to make sure to set up CloudKitSyncMonitor before NSPersistentCloudKitContainer, although it's been so long since I've done that that I can't remember how.
CKERRORDOMAIN ERROR 2 is a network error: the device can't connect to the network (or, really, it is running into some network-like error when CloudKit is trying to export data). Export errors like this are why I wrote CloudKitSyncMonitor, as iCloud is the "source of truth" for synced data, and the usual fix for errors like this, if it's not something simple like they're not on the Internet or a firewall is blocking outgoing traffic, is to delete and reinstall the app. Yes, that means the 6 items that are not exporting from the iPhone (assuming the iPhone is what's showing the export error) will be lost. I'll pause a moment while you gasp and go through the stages of grief. Your user might be able to go into Settings and turn iCloud Sync off and back on for your app, but I'm pretty sure the local data will be deleted anyway, not merged. With NSPersistentCloudKitContainer, it's critical that data make it up to iCloud, or it doesn't really exist.
However, I also haven't run into random export errors like that since iOS 14 (15?) or so, so make sure your users are on new OSes. NSPersistentCloudKitContainer was very buggy at first, having frequent problems with items not syncing; detecting that export error quickly was critical to avoiding data loss. I assume your users are on something newer than iOS 14, but you never know, and it makes a big difference in stability.
@ggruen Thanks Grant so much for the detailed explanation on both items! This is very helpful.
Unfortunately the export errors are still present on current iOS builds, though seemingly less frequent. At any rate, I really appreciate the reply. Thank you! You may just need to make sure to set up CloudKitSyncMonitor before NSPersistentCloudKitContainer, Does this mean that instead of just having an ObservedObject in a view, the sync monitor should also be initialized somewhere in the app delegate before it's actually later used by a view that the user navigates to? The docs don't suggest triggering some behavior before its use in a view. Thanks Hi @aehlke, Good question, and I think there may be a race condition in the design of the module in "normal" use. Basically, CloudKitSyncMonitor is usually accessed as a singleton (SyncMonitor.shared), but the SyncMonitor instance won't be initialized until that singleton is accessed. So, if CloudKitSyncMonitor is used in a view (as will usually be the case), it's possible that the view won't be initialized before NSPersistentCloudKitContainer starts sending messages, so it could miss a message. In my code, I use a view model, which is initialized very early in the startup process (I forget where and I've paused my Apple development for the time being so I can't look it up), so I didn't run into problems. If there is an issue caused by missing early messages, then you can try putting let _ = SyncMonitor.shared before NSPersistentCloudKitContainer is set up (e.g. in the app delegate) and see if that fixes it. That should cause SyncMonitor to subscribe to the notifications before NSPersistentCloudKitContainer can start doing anything. If you try this, please post your findings here. :) That's what I ended up doing, thanks
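For reference, a sketch of that workaround in app code; the model name and the exact placement are assumptions, not part of CloudKitSyncMonitor's documented API.

import CoreData
import CloudKitSyncMonitor

final class PersistenceController {
    let container: NSPersistentCloudKitContainer

    init() {
        // Touch the singleton first so it subscribes to sync events
        // before the container can emit any.
        _ = SyncMonitor.shared
        container = NSPersistentCloudKitContainer(name: "Model") // placeholder model name
        container.loadPersistentStores { _, error in
            if let error = error { fatalError("Store load failed: \(error)") }
        }
    }
}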
gharchive/issue
2023-08-25T13:50:17
2025-04-01T04:34:21.446378
{ "authors": [ "aehlke", "ggruen", "thatvirtualboy" ], "repo": "ggruen/CloudKitSyncMonitor", "url": "https://github.com/ggruen/CloudKitSyncMonitor/issues/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2088248168
Cant process the replays generated by python-sc2 I have writed some botai to play starcraft2 by python-sc2,and i find sc2reader cant process these replays. python version:3.10 sc2reader version:1.8.0 python-sc2 version: 6.5.0 code is here: import sc2reader replay = sc2reader.load_replay('Altitude LE_20230903_161615_PROTOSS_VS_BUILD_IN_AI_MediumHard_Zerg_process_-5_checked.SC2Replay') print(replay.players) here is my error: Traceback (most recent call last): File "E:\python_code\sc2data_process\process_replay\process_replay\test_sc2reader.py", line 2, in replay = sc2reader.load_replay('Altitude LE_20230903_161615_PROTOSS_VS_BUILD_IN_AI_MediumHard_Zerg_process_-5_checked.SC2Replay') File "E:\anaconda\envs\sc2env\lib\site-packages\sc2reader\factories\sc2factory.py", line 88, in load_replay return self.load(Replay, source, options, **new_options) File "E:\anaconda\envs\sc2env\lib\site-packages\sc2reader\factories\sc2factory.py", line 166, in load return self._load(cls, resource, filename=filename, options=options) File "E:\anaconda\envs\sc2env\lib\site-packages\sc2reader\factories\sc2factory.py", line 175, in _load obj = cls(resource, filename=filename, factory=self, **options) File "E:\anaconda\envs\sc2env\lib\site-packages\sc2reader\resources.py", line 302, in init self.load_all_details() File "E:\anaconda\envs\sc2env\lib\site-packages\sc2reader\resources.py", line 444, in load_all_details self.load_details() File "E:\anaconda\envs\sc2env\lib\site-packages\sc2reader\resources.py", line 389, in load_details self.region = details["cache_handles"][0].server.lower() IndexError: list index out of range here is replay file: replays.zip IndexError issues: https://github.com/ggtracker/sc2reader/issues?q=is%3Aissue+is%3Aopen+indexerror Thank you sir, but it is hard to find how to solve this problem,is this problem still existence? IndexError issues: https://github.com/ggtracker/sc2reader/issues?q=is%3Aissue+is%3Aopen+indexerror All four issues point to the same line of code. Thank you sir, but it is hard to find how to solve this problem,is this problem still existence? Right, so I think the comment linking points to a general issue that the sc2reader library assumes that these replays were generated off of actually played games. Those games have things like server details and such that are pulled into the library. Since you said that python-sc2 to simulate the games, I suspect that some of these details aren't present. A possible fix here is to see if you can bypass these lines if the cache handles aren't present. However, I'm not sure how many other places make the same assumption or if you will run into more problems down the line. If you do get it working assuming that these details are optional, you're welcome to open a PR for it!
gharchive/issue
2024-01-18T13:04:26
2025-04-01T04:34:21.458440
{ "authors": [ "StoicLoofah", "cclauss", "histmeisah" ], "repo": "ggtracker/sc2reader", "url": "https://github.com/ggtracker/sc2reader/issues/199", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
544499982
panic: Detect brain error: no Brain available panic: Detect brain error: no Brain available 具体怎么用,有再详细点的介绍吗 我回头写一个吧,这个问题是没有配置储存器
gharchive/issue
2020-01-02T09:17:05
2025-04-01T04:34:21.462731
{ "authors": [ "ghaoo", "tianxia0079" ], "repo": "ghaoo/rboot", "url": "https://github.com/ghaoo/rboot/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1420932609
Add lable DAC2 on PA5 if the pin available. I think any SITCore that PA5 available, we should add lable DAC2 as well. Without DAC2 users think there is only one DAC1 available on PA4 and less option for them Added to Pico & Flea in PR #708
gharchive/issue
2022-10-24T14:31:23
2025-04-01T04:34:21.472005
{ "authors": [ "Palomino34", "greg-norris" ], "repo": "ghi-electronics/Documentation", "url": "https://github.com/ghi-electronics/Documentation/issues/707", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
60060466
fix headings with single quotes if the heading contains a single quote, it will break here's the fix Thank you! you are welcome! ;-D
gharchive/pull-request
2015-03-06T05:15:25
2025-04-01T04:34:21.473088
{ "authors": [ "eksperimental", "ghiculescu" ], "repo": "ghiculescu/jekyll-table-of-contents", "url": "https://github.com/ghiculescu/jekyll-table-of-contents/pull/12", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
208601623
removed unbindInitialValue to allow new values keep watching When the watcher for initialValue is removed, if the value changes in the scope is not updated, so you get the latest value - 1, and that doesn't work if you are getting the initial value from a service hitting an API. @ghiden what do you think about this fix? Thanks for this solution @aoga88 =D I applied it in my project and everything worked well @ghiden you should approve this pull request ;-)
gharchive/pull-request
2017-02-18T01:54:54
2025-04-01T04:34:21.474512
{ "authors": [ "aoga88", "jeffersondev" ], "repo": "ghiden/angucomplete-alt", "url": "https://github.com/ghiden/angucomplete-alt/pull/470", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2550002018
Sysinternals - Copied from x64 and changed links Added sysinternals aarch64 [x] I have read the Contributing Guide. I see that arm64 was removed from extras bucket, prob because they changed a lot of names in the zip, now almost all files have 64a suffix. So, I changed the manifest and reviewed all binaries and shortcuts. Maybe it will help someone. I was able to install it locally now. sorry i currently doesn't have Windows on arm pc/laptop to try, but it looks good
gharchive/pull-request
2024-09-26T09:15:50
2025-04-01T04:34:21.486639
{ "authors": [ "ghishadow", "wisemanny" ], "repo": "ghishadow/scoop-aarch64", "url": "https://github.com/ghishadow/scoop-aarch64/pull/1", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
597410647
Gatsby/Docz build failures with micromodal We are using Docz and Gatsby as well as server side rendering. In certain instance the window is not yet available during server side rendering therefore our docz build fails. I would like to know if you would consider putting a wrapper around the window.micromodal code (around line 440): // Wrap the require in check for window if (typeof window !== undefined) { window.MicroModal = MicroModal; } This actually resolves the build errors. @nsahlas Thanks for reporting. SSR was not something we had considered till now. Will fix this in next release. @ghosh was this resolved with the last release? +1 for this issue. We are server rendering and I'm getting the the window is undefined error. You give simple fix. We smile. tl;dr +1 I'm still getting the "window" is not available during server side rendering. error on 'gatsby build'. Is there a way to fix this?? I'm still getting the "window" is not available during server side rendering. error on 'gatsby build'. Is there a way to fix this?? In terms of SSR, I'm getting the same error on Svelte Kit. ReferenceError: window is not defined at /node_modules/micromodal/dist/micromodal.es.js:437:1 at instantiateModule (D:\Github\conference\node_modules\vite\dist\node\chunks\dep-66eb515d.js:69030:166) Looks like Micromodal still isn't supporting SSR as it references the DOM on import. Perhaps the DOM reference can be changed to support SSR frameworks? Fixed in #443
gharchive/issue
2020-04-09T16:40:53
2025-04-01T04:34:21.500171
{ "authors": [ "benaltair", "brianyuen", "ghosh", "mumanity", "nsahlas", "spasticninja" ], "repo": "ghosh/Micromodal", "url": "https://github.com/ghosh/Micromodal/issues/311", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
853637918
Sort dependency Unblocking https://github.com/giantswarm/app-checker/pull/52 Checklist ~- [ ] Update changelog in CHANGELOG.md.~ going with a ping
gharchive/pull-request
2021-04-08T16:18:38
2025-04-01T04:34:21.538776
{ "authors": [ "tomahawk28" ], "repo": "giantswarm/app-checker", "url": "https://github.com/giantswarm/app-checker/pull/61", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
283367582
Use managed disks as default storage class Switch the default storage class to managed disks. By default Azure creates a blob and tries to attach it to the instance; this works only for VMs with unmanaged (old-style) disks, whereas we use managed-disk VMs. Indeed :)
gharchive/pull-request
2017-12-19T20:59:19
2025-04-01T04:34:21.540830
{ "authors": [ "r7vme" ], "repo": "giantswarm/azure-terraform", "url": "https://github.com/giantswarm/azure-terraform/pull/9", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
546325396
Push and use China registry towards https://github.com/giantswarm/giantswarm/issues/7435 E2E is flaky. @rossf7 sure, I can leave it for tomorrow; ping me when it's fixed so I can rebase @calvix Thanks, will do. @calvix e2e is fixed. You'll need to rebase. Nice and green :) good job @rossf7
gharchive/pull-request
2020-01-07T14:49:11
2025-04-01T04:34:21.542756
{ "authors": [ "calvix", "rossf7", "xh3b4sd" ], "repo": "giantswarm/chart-operator", "url": "https://github.com/giantswarm/chart-operator/pull/341", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2273306559
Enable cloud-config This will be added to the enable-controlplane branch; we need this to make Azure work, otherwise it breaks a lot of components. Once we have a new release for https://github.com/giantswarm/cluster/pull/166, I will update all the dependencies correctly afterwards and stop using the cluster-test catalog. For now this only demonstrates that it's working. /run cluster-test-suites Tested manually, works like a charm 🎉
gharchive/pull-request
2024-05-01T10:40:46
2025-04-01T04:34:21.549610
{ "authors": [ "njuettner" ], "repo": "giantswarm/cluster-azure", "url": "https://github.com/giantswarm/cluster-azure/pull/264", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2360196525
Reset context timeout in AfterSuite What this PR does Ensure that the timeout on the context passed in to the delete function in AfterSuite is always a value long enough to handle all cleanup needed. Checklist [x] Update changelog in CHANGELOG.md. Trigger e2e tests /run cluster-test-suites /run releases-test-suites TARGET_SUITES=./providers/capa/china,./providers/capa/standard
gharchive/pull-request
2024-06-18T15:54:10
2025-04-01T04:34:21.552391
{ "authors": [ "AverageMarcus" ], "repo": "giantswarm/cluster-test-suites", "url": "https://github.com/giantswarm/cluster-test-suites/pull/347", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2733016783
Bump cluster-standup-teardown to v1.27.4 to use lower lifecycle hooks heartbeat timeout to allow spot instances to terminate more quickly (CAPA) What this PR does Use https://github.com/giantswarm/cluster-standup-teardown/pull/173 to prevent tests from timing out because EC2 spot instances can't terminate quickly. Checklist [x] Update changelog in CHANGELOG.md. Trigger e2e tests Chicken-and-egg problem, so I'll run these on the cluster-aws PR instead. This worked for me in a locally-run test and used the correct, reduced lifecycle hook heartbeat timeout. E2E_RELEASE_VERSION=25.1.3 E2E_RELEASE_COMMIT=capa-25-1-3 ginkgo -v -r ./providers/capa/standard Therefore I'm merging this.
gharchive/pull-request
2024-12-11T13:42:20
2025-04-01T04:34:21.555132
{ "authors": [ "AndiDog" ], "repo": "giantswarm/cluster-test-suites", "url": "https://github.com/giantswarm/cluster-test-suites/pull/570", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2624246413
chart cleanup towards https://github.com/giantswarm/giantswarm/issues/31983 Instead of reviewing the PR from the changed files tab, it's probably easier to review the individual commits below, because I've grouped the changes together logically. I think this is good to review - I'm going to try running the e2e tests, but I don't think they're necessary as the diff check shows no changes have been introduced. /run cluster-test-suites /run cluster-test-suites /run cluster-test-suites /run cluster-test-suites TARGET_SUITES=./providers/capa/on-capa /run cluster-test-suites TARGET_SUITES=./providers/capv/on-capa OK, I've had enough of this PR; going to merge it now
gharchive/pull-request
2024-10-30T14:13:44
2025-04-01T04:34:21.558016
{ "authors": [ "glitchcrab" ], "repo": "giantswarm/cluster-vsphere", "url": "https://github.com/giantswarm/cluster-vsphere/pull/305", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1054630668
Add tilt support Added Tilt support to make manual testing of policies easy. I've added sample cluster.yaml and machinepool.yaml files in the e2e folder. These can be used from the Tilt UI and you get back the mutated result. Also, Tilt deploys Kyverno, so you can see the Kyverno logs from the Tilt UI. Checklist [x] Update changelog in CHANGELOG.md. I like the changes - I only now got to reviewing them. Is the failure of test-policies related? Before I review deeper: what about the merge conflict? should be fine now
gharchive/pull-request
2021-11-16T09:12:09
2025-04-01T04:34:21.567927
{ "authors": [ "MarcelMue", "fiunchinho" ], "repo": "giantswarm/kyverno-policies", "url": "https://github.com/giantswarm/kyverno-policies/pull/158", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
496684828
Use types.new_class() instead of type() in the defclass macro This will automatically support metaclasses correctly. The docs should also be updated. Doing this right is too complex for a basic macro. I'm just going to recommend using the Hebigo macros if you need advanced features. The FAQ already points out types.new_class() for metaclass trouble.
gharchive/issue
2019-09-21T18:15:51
2025-04-01T04:34:21.630242
{ "authors": [ "gilch" ], "repo": "gilch/hissp", "url": "https://github.com/gilch/hissp/issues/37", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
305503176
Publish reimg as an npm package Could you publish reimg as an npm package? PR #10 is meant to start addressing this issue. @mslawins @pcnate Sorry for being so late to do this. Honestly, I didn't realise there were issues on this repo. I published it now, if this still helps: https://www.npmjs.com/package/reimg Awesome. Thank you!
gharchive/issue
2018-03-15T10:57:29
2025-04-01T04:34:21.641039
{ "authors": [ "gillyb", "mslawins", "pcnate" ], "repo": "gillyb/reimg", "url": "https://github.com/gillyb/reimg/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
178435541
fixed some clippy warnings Just some small readability improvements Coverage increased (+0.01%) to 92.422% when pulling 808b8884baeb00f5eb59cec7fc32f7f9b3594a1a on llogiq:clippy into b6abef57e5063df026b44bf894f5f78cc02483c6 on gimli-rs:master. You're welcome.
gharchive/pull-request
2016-09-21T19:06:16
2025-04-01T04:34:21.645557
{ "authors": [ "coveralls", "llogiq" ], "repo": "gimli-rs/gimli", "url": "https://github.com/gimli-rs/gimli/pull/128", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1032215053
RampGenerator.cpp code inquiry file : RampGenerator.cpp static void _getNextCommand(const struct ramp_ro_s *ramp, const struct ramp_rw_s *rw, const struct queue_end_s *queue_end, NextCommand *command) Please tell me what you are using planning_steps for. I don't understand the code. What does 2ms mean? The code uses this formula to calculate the required speed from the step position: v = sqrt(2 * s * a). On a call to _getNextCommand(), the code is currently at some point on the ramp, e.g. sx, and needs to issue the speed for the next point sy. So the speeds are: vx = sqrt(2 * sx * a) and vy = sqrt(2 * sy * a). The problem is now to determine this next point sy at a reasonable distance from sx, and this distance is the (forward) planning_steps. So simply: sy = sx + planning_steps. planning_steps is chosen so that a typical command to the stepper queue covers 2 ms. At high speeds planning_steps is greater than 1, and for slower speeds it is equal to 1.
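A worked example with assumed numbers, using the formulas above: take an acceleration of $a = 10000$ steps/s² and a current ramp position of $s_x = 800$ steps. Then

$$v_x = \sqrt{2 \cdot s_x \cdot a} = \sqrt{2 \cdot 800 \cdot 10000} = 4000\ \text{steps/s}.$$

At 4000 steps/s, a 2 ms command covers $4000 \times 0.002 = 8$ steps, so planning_steps would be about 8 and the next planning point is $s_y = s_x + 8 = 808$ steps.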
gharchive/issue
2021-10-21T08:37:03
2025-04-01T04:34:21.651108
{ "authors": [ "gin66", "kpu0411" ], "repo": "gin66/FastAccelStepper", "url": "https://github.com/gin66/FastAccelStepper/issues/95", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
676241976
GDAL v3 breaking changes How does this library need to change to accommodate GDAL v3's change to axis ordering? I see several places that this code directly uses the "u" and "v" parameters on the transform. Would those areas be affected? Brannon, I've been away from GIS for about a decade and am completely out of the loop. Do you have a reference that explains these breaking changes? Dan There's the 2.4 to 3.0 section here: https://github.com/OSGeo/gdal/blob/master/gdal/MIGRATION_GUIDE.TXT . The link there is important as well: https://trac.osgeo.org/gdal/wiki/rfc73_proj6_wkt2_srsbarn
gharchive/issue
2020-08-10T15:55:05
2025-04-01T04:34:21.653721
{ "authors": [ "BrannonKing", "dstahlke" ], "repo": "gina-alaska/dans-gdal-scripts", "url": "https://github.com/gina-alaska/dans-gdal-scripts/issues/17", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
293628050
Mobile Profile Images Profile images are not sized correctly in mobile view due to the fact that all old profile images are small (165x165) while new ones are saved as 600x600. This makes it look weird in mobile view: https://sustainability.asu.edu/lightworks/people/ Just some CSS, Bryan! Where is the PR to fix it? :)
gharchive/issue
2018-02-01T17:59:57
2025-04-01T04:34:21.711837
{ "authors": [ "bryanbarker", "tooshel" ], "repo": "gios-asu/ASU-Web-Standards-Wordpress-Theme", "url": "https://github.com/gios-asu/ASU-Web-Standards-Wordpress-Theme/issues/344", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
342739791
DOM isn't finished rendering when initial filteringEnd is triggered I'm trying to get a list of visible items using $('.filtr-item:not(.filteredOut)') but the filteredOut class hasn't been written to the DOM when the initial filteringEnd event is triggered. The items are hidden though, opacity: 0, etc. I have to set an arbitrary timeout to ensure the class is present, like this: $filterizd.on('filteringEnd', function () { setTimeout(function () { // At this point the `filteredOut` class `should` be applied to the grid items that are hidden }, 250); }); Is there a more precise way to get a list of the filtered-out elements from filterizr? All timing issues with events should be resolved in v2.2.0 onwards. Also many new features have been added. Hello there and thank you for the wonderful library! It works wonders except for the filteringEnd callback. I have the same problem as above, but only on Blink-based browsers (Firefox and all of its clones work without a problem). What I'm trying to achieve is to remove the filtered-out items from the tabindex. My code works on all browsers, but when I press the "All" button, Blink-based browsers don't update the tabindex until I press a second time. I can fix the problem with a setTimeout of 500ms, but it's not ideal. In short, filteringEnd works well on all filters except "All", where there is a short delay (around 400-500ms). Only on Blink-based browsers. Any idea what might be the problem? I'm using v2.2.3 Here's my code: let filtrItem = document.getElementsByClassName("filtr-item"); let fL = filtrItem.length; for (let i = 0; i < fL; i++) { if (filtrItem[i].classList.contains("filteredOut")) { filtrItem[i].setAttribute("tabindex", "-1"); } else { filtrItem[i].setAttribute("tabindex", "0"); } }
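A sketch of the setTimeout-free approach for Filterizr v2.2+, where the event-timing fixes reportedly landed. The `callbacks.onFilteringEnd` option name is taken from the v2 docs; treat the exact option shape as an assumption if you are on a different version:

```js
// Run the tabindex sync only once filtering has actually finished; in
// v2.2+ the filteredOut classes should be in sync at this point, so no
// arbitrary setTimeout is needed.
const filterizr = new Filterizr('.filtr-container', {
  callbacks: {
    onFilteringEnd: function () {
      document.querySelectorAll('.filtr-item').forEach(function (el) {
        const hidden = el.classList.contains('filteredOut');
        el.setAttribute('tabindex', hidden ? '-1' : '0');
      });
    },
  },
});
```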
gharchive/issue
2018-07-19T13:57:53
2025-04-01T04:34:21.715643
{ "authors": [ "Xoxsossardoii", "giotiskl", "gkrinc" ], "repo": "giotiskl/filterizr", "url": "https://github.com/giotiskl/filterizr/issues/103", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2026671437
As the iMILO API, I need the SIRET of the prescribing organizations in order to run - front :thinking: Context and problem #732 :tada: Proposed solution Validate the SIRET when the agency-creation form is submitted in the backend - and refuse the creation if the SIRET is not valid or the company is inactive. :camera: Mockups [ ] Move the SIRET field to the first position and pre-fill the following fields as much as possible (as for companies). [ ] Show the error message directly on the SIRET field if it is not valid: "The SIRET you entered is not valid. Check your organization's SIRET on the annuaire des entreprises." I reused the Figma I had made back then for issue #638 - changing the position of the SIRET field. We just need to decide at what point we check that the Type + SIRET + address combo does not already exist - on submit? Or earlier, if possible? proto: https://www.figma.com/proto/6msAgGCn4id2P3OjI4NRxP/Site---UX-design-1er-semestre-2024?page-id=0%3A1&type=design&node-id=2-503&viewport=1254%2C725%2C0.51&t=DB2pZnEY3Lxv2mFN-1&scaling=min-zoom&mode=design New Figma file for the first half of 2024: https://www.figma.com/file/6msAgGCn4id2P3OjI4NRxP/Site---UX-design-1er-semestre-2024?type=design&node-id=2%3A503&mode=design&t=A7OTa3yDoYpRr67Q-1
gharchive/issue
2023-12-05T16:49:52
2025-04-01T04:34:21.720121
{ "authors": [ "3lodi3" ], "repo": "gip-inclusion/immersion-facile", "url": "https://github.com/gip-inclusion/immersion-facile/issues/974", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1968549274
Export Ar-Package support Export a particular Ar-package, maintaining its tree structure along with the parent packages, into a new arxml file. The exported arxml shall be importable in tools like Matlab/Simulink/TargetLink Hi @punithspal , Thanks for the CR and I think it really makes sense to have this feature in place. I think it would be better if we could do it for all packageableElements. So, basically, ArPackage as well as any elements which come directly under ArPackage. What do you think? Thanks! Hi @punithspal , I added the support for exporting the collectableElements (which means Ar-package as well as all packageableElements, which include ApplicationSwComponentType, SR interface, Signals, etc.). The changes are pushed to the branch https://github.com/girishchandranc/autosarfactory/tree/14-issue-with-xml-serialization. Following are the ways to use it: Option 1 (using the export function available on the node itself): swc = autosarfactory.get_node('/Swcs/swc1') swc.export_to_file('swc1Export.arxml', overWrite = True) Option 2 (using the export function where you pass the node): swc = autosarfactory.get_node('/Swcs/swc1') autosarfactory.export_to_file(swc, 'swc1Export.arxml', overWrite = True) Could you please check and let me know if this works for you? @punithspal further, regarding the question on importing the dictionary (datatypes and mappings): I believe they are all packageableElements, e.g. ImplementationDataTypes, ApplicationDataTypes, DataTypeMappingSet, etc. Can you please check once and let me know if that works for you? I have described the usage in my previous comment. Thanks again! The issue is successfully implemented.
gharchive/issue
2023-10-30T14:45:06
2025-04-01T04:34:21.742186
{ "authors": [ "girishchandranc", "punithspal" ], "repo": "girishchandranc/autosarfactory", "url": "https://github.com/girishchandranc/autosarfactory/issues/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2719404038
[feat] Implement toast notification system with error handling https://github.com/user-attachments/assets/6d2f7ac1-2287-416b-a0fd-3dd4f4e38195 Changes Add toast notification system using Radix UI Toast primitive Create ToastContext and provider for managing toast state Implement styled toast components with error variant Add temporary testing shortcut for development Configure toast positioning and animation behavior Testing Manual testing using temporary 'a' key shortcut Verified error variant styling and icon display Tested auto-dismiss functionality Validated toast positioning and z-index layering Indeed. I'll talk to the designer about it tomorrow.
gharchive/pull-request
2024-12-05T05:48:50
2025-04-01T04:34:21.747374
{ "authors": [ "toyamarinyon" ], "repo": "giselles-ai/giselle", "url": "https://github.com/giselles-ai/giselle/pull/170", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1471228933
Create devcontainer.json Changes Context Can you provide more context on this PR, @heatherleeann?
gharchive/pull-request
2022-12-01T12:26:23
2025-04-01T04:34:21.781433
{ "authors": [ "heatherleeann", "pedrorijo91" ], "repo": "git/git-scm.com", "url": "https://github.com/git/git-scm.com/pull/1753", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
400319786
Table fill does not accept a scheme color constant. Table fill and cell fill do not accept a scheme color constant as stated in the docs. It would be great if this feature could be enabled for fills and table borders to improve themes. This is available as of v3.0
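A hedged sketch of what the request looks like once supported in PptxGenJS v3+, per the closing comment above. The exact option shape (`fill: { color: ... }` inside cell options) is an assumption based on the v3 API:

```js
import pptxgen from "pptxgenjs";

const pptx = new pptxgen();
const slide = pptx.addSlide();

// Use a theme scheme color constant for a table cell fill instead of a
// hard-coded hex value, so the table follows the presentation theme.
slide.addTable(
  [[{ text: "Themed cell", options: { fill: { color: pptx.SchemeColor.accent1 } } }]],
  { x: 1, y: 1, w: 4 }
);

pptx.writeFile({ fileName: "scheme-fill-demo.pptx" });
```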
gharchive/issue
2019-01-17T15:07:50
2025-04-01T04:34:21.784680
{ "authors": [ "gitbrent", "robertedjones" ], "repo": "gitbrent/PptxGenJS", "url": "https://github.com/gitbrent/PptxGenJS/issues/479", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2060878019
🛑 invidious.lidarshield.cloud is down In 4eed159, invidious.lidarshield.cloud (https://invidious.lidarshield.cloud/) was down: HTTP code: 0 Response time: 0 ms Resolved: invidious.lidarshield.cloud is back up in 814158d after 58 minutes.
gharchive/issue
2023-12-30T22:57:29
2025-04-01T04:34:21.833310
{ "authors": [ "gitetsu" ], "repo": "gitetsu/invidious-instances-upptime", "url": "https://github.com/gitetsu/invidious-instances-upptime/issues/4026", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1641913995
🛑 wwg.vern.cc is down In 33b8a9f, wwg.vern.cc (https://wg.vern.cc) was down: HTTP code: 0 Response time: 0 ms Resolved: wwg.vern.cc is back up in 8fda621.
gharchive/issue
2023-03-27T11:11:08
2025-04-01T04:34:21.836231
{ "authors": [ "gitetsu" ], "repo": "gitetsu/whoogle-instances-upptime", "url": "https://github.com/gitetsu/whoogle-instances-upptime/issues/231", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1800933245
🛑 whoogle.privacydev.net is down In ebf52a8, whoogle.privacydev.net (https://whoogle.privacydev.net) was down: HTTP code: 0 Response time: 0 ms Resolved: whoogle.privacydev.net is back up in ab77e67.
gharchive/issue
2023-07-12T12:58:15
2025-04-01T04:34:21.839396
{ "authors": [ "gitetsu" ], "repo": "gitetsu/whoogle-instances-upptime", "url": "https://github.com/gitetsu/whoogle-instances-upptime/issues/734", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1828535108
🛑 whoogle.dcs0.hu is down In f735866, whoogle.dcs0.hu (https://whoogle.dcs0.hu) was down: HTTP code: 0 Response time: 0 ms Resolved: whoogle.dcs0.hu is back up in f85048b.
gharchive/issue
2023-07-31T06:59:10
2025-04-01T04:34:21.842346
{ "authors": [ "gitetsu" ], "repo": "gitetsu/whoogle-instances-upptime", "url": "https://github.com/gitetsu/whoogle-instances-upptime/issues/864", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2535353293
[GHSA-2p57-rm9w-gvfp] ip SSRF improper categorization in isPublic Updates Affected products Comments Will install @vue/devtools@6.6.4, which is a breaking change node_modules/ip @vue/devtools-electron * Depends on vulnerable versions of ip node_modules/@vue/devtools-electron @vue/devtools >=7.0.0 Depends on vulnerable versions of @vue/devtools-electron node_modules/@vue/devtools Hi @aka2024, this pull request doesn't make any changes to the advisory, so I'm closing the PR and not merging the contribution. If the lack of changes appeared in error, feel free to reopen this pull request or make a new pull request.
gharchive/pull-request
2024-09-19T05:52:04
2025-04-01T04:34:21.904809
{ "authors": [ "aka2024", "shelbyc" ], "repo": "github/advisory-database", "url": "https://github.com/github/advisory-database/pull/4821", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1926779328
Improve downloading log message Regarding GitHub Actions log output, the dot was recognized as part of the URL. Merge / deployment checklist [x] Confirm this change is backwards compatible with existing workflows. [x] Confirm the readme has been updated if necessary. [x] Confirm the changelog has been updated if necessary. The failing resolve-environment test recently had some changes. I wonder if something is wrong with the test. I think it should be unrelated as the failure is mv: cannot move 'node_modules' to '../action/node_modules': Permission denied in the prepare step. I'll re-run.
gharchive/pull-request
2023-10-04T18:42:56
2025-04-01T04:34:21.907903
{ "authors": [ "aeisenberg", "angelapwen", "hoshinotsuyoshi" ], "repo": "github/codeql-action", "url": "https://github.com/github/codeql-action/pull/1920", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2109139455
Added dependabotApiUrl as an ENV variable for the Proxy Container Context This pull request primarily introduces changes to the ProxyBuilder class and its usage in the __tests__/proxy-integration.test.ts and src/updater.ts files. The changes involve the addition of a new parameter, dependabotApiUrl, to the run method of the ProxyBuilder class and the subsequent adjustments to the method calls throughout the codebase. This is done to provide dependabotApiUrl as an ENV variable for the proxy container so that it can be used by the newly introduced metrics_client in PR to send metrics from the proxy to Dependabot-api. What are you trying to accomplish? Currently, in AWS each EC2 host (uj-worker-firecracker) runs a Datadog agent. The credentials are sourced from Secrets Manager. All update jobs that run in Firecracker on the EC2 host share the same Datadog agent for reporting. After moving to Dependabot on Actions, we'd need to run a Datadog agent per job, which could dramatically affect our Datadog billing. Also, we would need to provide the Datadog credentials as inputs to the dynamic workflow, which increases the risk that they can be extracted by unsafe code or by customers on self-hosted runners. This has already been flagged by the security team. Note: The dependabotApiUrl is passed as a parameter to Dependabot-actions from Dependabot-api https://github.com/github/dependabot-api/blob/58b6b17fc41f334c614a1e13cced292219f926e6/app/actions/run_updater/actions.rb#L56
gharchive/pull-request
2024-01-31T02:03:45
2025-04-01T04:34:21.919311
{ "authors": [ "honeyankit" ], "repo": "github/dependabot-action", "url": "https://github.com/github/dependabot-action/pull/1156", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2210083130
Reasons for exposing the GHES URL on the public internet Code of Conduct [X] I have read and agree to the GitHub Docs project's Code of Conduct What article on docs.github.com is affected? https://docs.github.com/en/enterprise-server@3.11/admin/github-actions/enabling-github-actions-for-github-enterprise-server/enabling-github-actions-with-amazon-s3-storage https://docs.github.com/en/enterprise-server@3.11/admin/configuration/hardening-security-for-your-enterprise/configuring-tls https://docs.github.com/en/enterprise-server@3.11/admin/identity-and-access-management/using-saml-for-enterprise-iam/configuring-user-provisioning-with-scim-for-your-enterprise What changes are you suggesting? What are all the reasons for exposing the GHES URL on the public internet? Here are a few reasons, which I could infer directly or indirectly while setting up GHES on AWS and going through some of the GHES docs. Reason 1) Configuring the S3 provider for GitHub Actions via OIDC, which requires exposing URLs like https://HOSTNAME/_services/token/.well-known/openid-configuration and https://HOSTNAME/_services/token/.well-known/jwks to the public internet https://docs.github.com/en/enterprise-server@3.11/admin/github-actions/enabling-github-actions-for-github-enterprise-server/enabling-github-actions-with-amazon-s3-storage Reason 2) Configuring TLS with Let's Encrypt certs. Generating and renewing them requires your GHES URL to be available on the public internet https://docs.github.com/en/enterprise-server@3.11/admin/configuration/hardening-security-for-your-enterprise/configuring-tls Are there any other reasons that I am missing? For example (maybe Reason 3): configuring SCIM (when SAML is configured with an IdP, in our case Okta) - enabling SCIM through a GitHub personal access token may require the GHES host URL to be available on the public internet, though nothing is stated in plain words here https://docs.github.com/en/enterprise-server@3.11/admin/identity-and-access-management/using-saml-for-enterprise-iam/configuring-user-provisioning-with-scim-for-your-enterprise Having these scenarios listed in a separate doc, or refining the existing affected articles, would surely help someone plan their architecture better and avoid having to rearchitect in the middle of the long journey of implementing GHES. Additional information The only thing I am suggesting here is to have something, either in the existing docs or in a separate article, listing the scenarios/configurations that only work when your GHES host URL is publicly exposed. I have already listed a few scenarios here and expect you, with your expertise, to further extend that list. @sameerjethvani-alation Thank you for raising this issue! I'll get this triaged for review :sparkles: Our team will provide feedback regarding the best next steps for this issue - thanks for your patience! 💛 Thank you @sameerjethvani-alation! 👋 As it happens we had an internal issue raised for the same topic last week, so I'm going to copy the context you added here into that internal issue, and close this one as a duplicate. We'll keep you updated here as we make progress. Thanks again for the issue!
gharchive/issue
2024-03-27T07:37:24
2025-04-01T04:34:21.929666
{ "authors": [ "isaacmbrown", "nguyenalex836", "sameerjethvani-alation" ], "repo": "github/docs", "url": "https://github.com/github/docs/issues/32243", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }