id | text | source | created | added | metadata
---|---|---|---|---|---
2284834133 | Wish to add a program that converts wind directions in degrees to compass directions.
Hi, Repo Owner,
I would like to add a program to this repo. The program converts a degree value such as 44.5 to a compass direction such as NE.
Would you allow me?
Best,
Yan
Hi, can you assign me this issue?
You both are assigned the task.
@yanliutafewa @Anxhul10
Submit a PR for review and tag me along.
It will be merged.
Thank you,
@NitkarshChourasia
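The conversion Yan describes can be sketched in Python. The function name and the 16-point compass rose below are illustrative assumptions, not code from the eventual contribution:

```python
# Illustrative sketch of the degrees-to-compass conversion proposed in
# this issue; the function name and 16-point rose are assumptions, not
# code from the repository.
COMPASS_POINTS = [
    "N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
    "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW",
]

def degrees_to_compass(degrees: float) -> str:
    """Map a wind direction in degrees (e.g. 44.5) to a compass point."""
    # Each of the 16 points covers a 22.5-degree sector centered on it.
    index = int((degrees % 360) / 22.5 + 0.5) % 16
    return COMPASS_POINTS[index]

print(degrees_to_compass(44.5))  # NE
```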
| gharchive/issue | 2024-05-08T06:58:28 | 2025-04-01T06:38:46.139815 | {
"authors": [
"Anxhul10",
"NitkarshChourasia",
"yanliutafewa"
],
"repo": "geekcomputers/Python",
"url": "https://github.com/geekcomputers/Python/issues/2180",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2297700507 | Adding a Search Engine to the repo
This PR contributes a search engine that can be used to search for documents that contain a search term
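A minimal sketch of the kind of term-based document search the PR describes might use an inverted index. The data and function names here are illustrative, not code from the PR:

```python
# Minimal inverted-index sketch: map each term to the set of document
# ids that contain it, then answer single-term queries against the map.
from collections import defaultdict

def build_index(docs):
    """docs: {doc_id: text}. Returns {term: set of doc_ids with term}."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, term):
    """Return the sorted ids of documents containing the search term."""
    return sorted(index.get(term.lower(), set()))

docs = {1: "the quick brown fox", 2: "the lazy dog", 3: "quick thinking"}
index = build_index(docs)
print(search(index, "quick"))  # [1, 3]
```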
Hi, @geekcomputers , kindly see that @NitkarshChourasia left a comment for me to squash some commits, I planned on working on that this weekend, but I see that you have merged the PR regardless. Will the squashing still be needed?
No
It is to reduce the number of commits, so they are easier to manage in the future, if needed.
Next time, try to squash them when the number of commits is large relative to the features introduced. @Xceptions
Thank you,
@NitkarshChourasia
Okay, thank you.
Looking to make more contributions soon
sure!
--
Nitkarsh Chourasia
| gharchive/pull-request | 2024-05-15T11:53:35 | 2025-04-01T06:38:46.145651 | {
"authors": [
"NitkarshChourasia",
"Xceptions"
],
"repo": "geekcomputers/Python",
"url": "https://github.com/geekcomputers/Python/pull/2196",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
824250445 | Add GitHub Action Welcome Bot
Add a GitHub Action greeting bot that greets first-time contributors with a welcome message and a reminder to follow the contributing guidelines.
@geekquad @kritikaparmar-programmer Can I look into this issue?
Sure @Aayush-hub. Go ahead!
@geekquad @kritikaparmar-programmer Waiting for my #23 PR to be merged. I will make a PR solving this issue soon after that!
Hey @Aayush-hub, you can make a new branch for the same and continue your work.
@geekquad Done :)
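One common way to implement the greeting described in this issue is GitHub's first-interaction action. The workflow below is a hedged sketch, and the greeting messages are illustrative, not taken from the eventual PR:

```yaml
# Hedged sketch of a first-time-contributor greeting workflow using
# actions/first-interaction; the messages are illustrative.
name: Greetings
on: [issues, pull_request]
jobs:
  greeting:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/first-interaction@v1
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          issue-message: "Welcome! Thanks for opening your first issue. Please follow the contributing guidelines."
          pr-message: "Welcome! Thanks for your first pull request. Please follow the contributing guidelines."
```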
| gharchive/issue | 2021-03-08T07:06:49 | 2025-04-01T06:38:46.158617 | {
"authors": [
"Aayush-hub",
"geekquad"
],
"repo": "geekquad/Image-Processing-OpenCV",
"url": "https://github.com/geekquad/Image-Processing-OpenCV/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1028365365 | 🛑 blist is down
In 4f74582, blist (https://Blistingsnip.geeks121.repl.co) was down:
HTTP code: 0
Response time: 0 ms
Resolved: blist is back up in d9d2b5a.
| gharchive/issue | 2021-10-17T15:56:15 | 2025-04-01T06:38:46.161032 | {
"authors": [
"geeks121"
],
"repo": "geeks121/upkan",
"url": "https://github.com/geeks121/upkan/issues/207",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
125690775 | nodejs: forever is not installed
Following the Node.js example, I get this error:
TASK [Check list of running Node.js apps.] *************************************
fatal: [default]: FAILED! => {"changed": false, "cmd": "forever list", "failed": true, "msg": "[Errno 2] No such file or directory", "rc": 2}
Further invocations of vagrant provision produce the same error, and, interestingly, the "Install Forever (to run our Node.js app)" task is marked as changed every time. I logged into the machine and found that forever wasn't being installed at all.
Changing the "Install Forever" task so that it uses state=present rather than state=latest fixes the problem.
However, my understanding is that state=latest should also ensure the package is installed, so maybe this is a bug in the Ansible npm module?
I am using Ansible version 2.0.0.
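The fix described above can be sketched as an Ansible task. This is a hedged reconstruction, since the book's actual playbook isn't shown here:

```yaml
# Hedged sketch of the workaround: use state=present instead of
# state=latest for the npm module (which misbehaved under Ansible 2.0.0).
- name: Install Forever (to run our Node.js app).
  npm:
    name: forever
    global: yes
    state: present  # was state=latest, which failed to install forever
```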
@lihop - Yes, state=present should be preferred, and should work fine... it looks like there's a chance the npm module has a bug in 2.0.0. Note that it's still pre-release software—you may want to report this bug in the Ansible project's queue.
Thanks for confirming @geerlingguy. I've downgraded to ansible 1.9.4 and the playbook runs fine as is. I was using Ansible 2.0.0 because that is the version shown in the "Installing Ansible" section of the book. I didn't realize the software was pre-release, so I will look into making a bug report.
Looks like the bug has already been reported here: ansible/ansible-modules-extras#1375
Thanks for the update!
| gharchive/issue | 2016-01-08T20:48:12 | 2025-04-01T06:38:46.165356 | {
"authors": [
"geerlingguy",
"lihop"
],
"repo": "geerlingguy/ansible-for-devops",
"url": "https://github.com/geerlingguy/ansible-for-devops/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
214118099 | Restarting MySQL failing: Fresh Install
Issue Type
Bug Report / Support Request
Your Environment
Vagrant 1.9.1
VirtualBox 5.1.14r112924
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
Your OS
macOS (10.12.3)
Full console output
Command
$ blt vm:nuke && blt vm
RUNNING HANDLER [geerlingguy.mysql : restart mysql] ****************************
to retry, use: --limit @/Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/playbook.retry
PLAY RECAP *********************************************************************
webny : ok=195 changed=92 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
[phingcall] /Users/justinwinter/Sites/wny/./vendor/acquia/blt/phing/tasks/vm.xml:127:129: Task exited with code 1
BUILD FAILED/Users/justinwinter/Sites/wny/./vendor/acquia/blt/phing/tasks/vm.xml:26:8: /Users/justinwinter/Sites/wny/./vendor/acquia/blt/phing/tasks/vm.xml:29:34: Execution of the target buildfile failed. Aborting.
; 22 minutes 7.38 seconds
Summary
Trying to get my VM up and running again after doing a vagrant reload --provision, and I'm seeing the above error. I deleted my vendor folder, ran a composer update, and I'm on the latest stable BLT release.
...
"acquia/blt": "8.6.14",
"geerlingguy/drupal-vm": "~4.2"
...
Any ideas why MySQL might be failing here?
@justinlevi - Is there any other output above that mysql handler line? I'm guessing something else is actually failing prior to that handler; Ansible just shows that a handler isn't run at the end if something else fails before the playbook ends. That doesn't necessarily mean the handler failed.
Here's the full output
https://gist.github.com/justinlevi/f836a497e0941d6b79abdda3e4c64aea
@justinlevi - It looks like the failure is in the mailhog step:
TASK [geerlingguy.mailhog : Download MailHog and mhsendmail binaries.] *********
failed: [webny] (item={u'url': u'https://github.com/mailhog/MailHog/releases/download/v0.2.0/MailHog_linux_amd64', u'dest': u'/opt/mailhog/mailhog'}) => {"failed": true, "item": {"dest": "/opt/mailhog/mailhog", "url": "https://github.com/mailhog/MailHog/releases/download/v0.2.0/MailHog_linux_amd64"}, "msg": "failed to create temporary content file: The read operation timed out"}
changed: [webny] => (item={u'url': u'https://github.com/mailhog/mhsendmail/releases/download/v0.2.0/mhsendmail_linux_amd64', u'dest': u'/opt/mailhog/mhsendmail'})
I've been having some github connectivity issues here and there today... maybe it's just a temporary fluke? If you want the build to succeed, you can bypass mailhog by removing it from installed_extras temporarily, and overriding the php_sendmail_path.
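The temporary workaround suggested above would look roughly like this in Drupal VM's config.yml. The remaining installed_extras entries and the sendmail path are assumptions about this particular setup, not values from the user's actual file:

```yaml
# Hedged sketch of the suggested workaround in config.yml:
# drop mailhog from installed_extras and override php_sendmail_path.
installed_extras:
  # mailhog removed from this list temporarily
  - adminer
  - pimpmylog
  - xhprof

# Point PHP at the system sendmail instead of mhsendmail (assumed path).
php_sendmail_path: "/usr/sbin/sendmail -t -i"
```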
@geerlingguy I'm not seeing the mailhog failure in my gist:
TASK [geerlingguy.mailhog : Ensure mailhog install directory exists.] **********
changed: [webny]
TASK [geerlingguy.mailhog : Download MailHog and mhsendmail binaries.] *********
changed: [webny] => (item={u'url': u'https://github.com/mailhog/MailHog/releases/download/v0.2.0/MailHog_linux_amd64', u'dest': u'/opt/mailhog/mailhog'})
changed: [webny] => (item={u'url': u'https://github.com/mailhog/mhsendmail/releases/download/v0.2.0/mhsendmail_linux_amd64', u'dest': u'/opt/mailhog/mhsendmail'})
TASK [geerlingguy.mailhog : Copy mailhog init script into place.] **************
changed: [webny]
TASK [geerlingguy.mailhog : Copy mailhog systemd unit file into place (for systemd systems).] ***
skipping: [webny]
TASK [geerlingguy.mailhog : Ensure mailhog is enabled and will start on boot.] *
changed: [webny]
Today I'm actually seeing another issue:
vagrant reload --provision
==> webny: [vagrant-hostsupdater] Removing hosts
Password:
==> webny: Attempting graceful shutdown of VM...
==> webny: Checking if box 'geerlingguy/ubuntu1404' is up to date...
==> webny: Clearing any previously set forwarded ports...
==> webny: Clearing any previously set network interfaces...
==> webny: Preparing network interfaces based on configuration...
webny: Adapter 1: nat
webny: Adapter 2: hostonly
==> webny: Forwarding ports...
webny: 22 (guest) => 2222 (host) (adapter 1)
==> webny: Running 'pre-boot' VM customizations...
==> webny: Booting VM...
==> webny: Waiting for machine to boot. This may take a few minutes...
webny: SSH address: 127.0.0.1:2222
webny: SSH username: vagrant
webny: SSH auth method: private key
==> webny: Machine booted and ready!
[webny] GuestAdditions 5.1.14 running --- OK.
==> webny: Checking for guest additions in VM...
==> webny: [vagrant-hostsupdater] Checking for host entries
==> webny: [vagrant-hostsupdater] Writing the following entries to (/etc/hosts)
==> webny: [vagrant-hostsupdater] 192.168.88.88 webny.dev # VAGRANT: 12f45ea8233192517336ff6e8539e7c6 (webny) / d55b1cbe-ec1c-4112-b951-8e998d6cec26
==> webny: [vagrant-hostsupdater] 192.168.88.88 www.webny.dev # VAGRANT: 12f45ea8233192517336ff6e8539e7c6 (webny) / d55b1cbe-ec1c-4112-b951-8e998d6cec26
==> webny: [vagrant-hostsupdater] 192.168.88.88 adminer.webny.dev # VAGRANT: 12f45ea8233192517336ff6e8539e7c6 (webny) / d55b1cbe-ec1c-4112-b951-8e998d6cec26
==> webny: [vagrant-hostsupdater] 192.168.88.88 xhprof.webny.dev # VAGRANT: 12f45ea8233192517336ff6e8539e7c6 (webny) / d55b1cbe-ec1c-4112-b951-8e998d6cec26
==> webny: [vagrant-hostsupdater] 192.168.88.88 pimpmylog.webny.dev # VAGRANT: 12f45ea8233192517336ff6e8539e7c6 (webny) / d55b1cbe-ec1c-4112-b951-8e998d6cec26
==> webny: [vagrant-hostsupdater] 192.168.88.88 dashboard.webny.dev # VAGRANT: 12f45ea8233192517336ff6e8539e7c6 (webny) / d55b1cbe-ec1c-4112-b951-8e998d6cec26
==> webny: [vagrant-hostsupdater] This operation requires administrative access. You may skip it by manually adding equivalent entries to the hosts file.
==> webny: Setting hostname...
==> webny: Configuring and enabling network interfaces...
==> webny: Exporting NFS shared folders...
==> webny: Preparing to edit /etc/exports. Administrator privileges will be required...
==> webny: Mounting NFS shared folders...
==> webny: Configuring cache buckets...
==> webny: Running provisioner: ansible...
webny: Running ansible-playbook...
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [webny]
TASK [Include OS-specific variables.] ******************************************
ok: [webny]
TASK [Define config_dir.] ******************************************************
skipping: [webny]
TASK [include_vars] ************************************************************
ok: [webny] => (item=/Users/justinwinter/Sites/wny/box/config.yml)
TASK [Define fully qualified domain name.] *************************************
ok: [webny]
TASK [Define short hostname.] **************************************************
ok: [webny]
TASK [Add hostname to /etc/hosts.] *********************************************
ok: [webny]
TASK [Configure hostname.] *****************************************************
ok: [webny]
TASK [Set the hostname for current session.] ***********************************
skipping: [webny]
TASK [include] *****************************************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/tasks/init-debian.yml for webny
TASK [Update apt cache if needed.] *********************************************
ok: [webny]
TASK [Install required dependencies.] ******************************************
ok: [webny] => (item=[u'curl', u'python-apt', u'python-pycurl', u'sudo', u'unzip', u'make'])
TASK [Configure /etc/mailname.] ************************************************
ok: [webny]
TASK [Add repository for Apache 2.4.9+ on Ubuntu 12 and 14.] *******************
ok: [webny]
TASK [Add repository for PHP 5.5, 5.6, 7.0 or 7.1.] ****************************
ok: [webny]
TASK [Add repository for PHP 5 compatibility packages.] ************************
ok: [webny]
TASK [Purge PHP version packages.] *********************************************
ok: [webny] => (item=[u'php5.5', u'php5.5-apcu', u'php5.5-cli', u'php5.5-common', u'php5.5-curl', u'php5.5-dev', u'php5.5-fpm', u'php5.5-gd', u'php5.5-imap', u'php5.5-json', u'php5.5-mbstring', u'php5.5-mcrypt', u'php5.5-opcache', u'php5.5-sqlite3', u'php5.5-xml', u'php5.5-yaml', u'php7.0', u'php7.0-apcu', u'php7.0-cli', u'php7.0-common', u'php7.0-curl', u'php7.0-dev', u'php7.0-fpm', u'php7.0-gd', u'php7.0-imap', u'php7.0-json', u'php7.0-mbstring', u'php7.0-mcrypt', u'php7.0-opcache', u'php7.0-sqlite3', u'php7.0-xml', u'php7.0-yaml', u'php7.1', u'php7.1-apcu', u'php7.1-cli', u'php7.1-common', u'php7.1-curl', u'php7.1-dev', u'php7.1-fpm', u'php7.1-gd', u'php7.1-imap', u'php7.1-json', u'php7.1-mbstring', u'php7.1-mcrypt', u'php7.1-opcache', u'php7.1-sqlite3', u'php7.1-xml', u'php7.1-yaml'])
TASK [Purge PHP packages installed by default on Ubuntu.] **********************
skipping: [webny] => (item=[])
TASK [Purge PHP modules directory.] ********************************************
skipping: [webny] => (item=/usr/lib/php5/modules)
skipping: [webny] => (item=/usr/lib/php/modules)
TASK [Define php_xhprof_html_dir.] *********************************************
skipping: [webny]
TASK [include] *****************************************************************
skipping: [webny]
TASK [Run configured pre-provision shell scripts.] *****************************
TASK [Run configured pre-provision ansible task files.] ************************
TASK [Set the PHP webserver daemon correctly when nginx is in use.] ************
skipping: [webny]
TASK [Set the correct XHProf package when PHP 5.5 or 5.6 is used.] *************
ok: [webny]
TASK [Ensure PHP version -specific workspace directory exists.] ****************
ok: [webny]
TASK [geerlingguy.repo-remi : Install remi repo.] ******************************
skipping: [webny]
TASK [geerlingguy.repo-remi : Import remi GPG key.] ****************************
skipping: [webny]
TASK [geerlingguy.firewall : Ensure iptables is installed.] ********************
ok: [webny]
TASK [geerlingguy.firewall : Flush iptables the first time playbook runs.] *****
ok: [webny]
TASK [geerlingguy.firewall : Copy firewall script into place.] *****************
ok: [webny]
TASK [geerlingguy.firewall : Copy firewall init script into place.] ************
ok: [webny]
TASK [geerlingguy.firewall : Copy firewall systemd unit file into place (for systemd systems).] ***
skipping: [webny]
TASK [geerlingguy.firewall : Ensure the firewall is enabled and will start on boot.] ***
ok: [webny]
TASK [geerlingguy.firewall : Check firewalld package is installed (on RHEL).] **
skipping: [webny]
TASK [geerlingguy.firewall : Disable the firewalld service (on RHEL, if configured).] ***
skipping: [webny]
TASK [geerlingguy.firewall : Check ufw package is installed (on Ubuntu).] ******
changed: [webny]
TASK [geerlingguy.firewall : Disable the ufw firewall (on Ubuntu, if configured).] ***
ok: [webny]
TASK [geerlingguy.git : Ensure git is installed (RedHat).] *********************
skipping: [webny] => (item=[])
TASK [geerlingguy.git : Update apt cache (Debian).] ****************************
ok: [webny]
TASK [geerlingguy.git : Ensure git is installed (Debian).] *********************
ok: [webny] => (item=[u'git', u'git-svn'])
TASK [geerlingguy.git : Ensure git's dependencies are installed (RedHat).] *****
skipping: [webny] => (item=[])
TASK [geerlingguy.git : Ensure git's dependencies are installed (Debian).] *****
skipping: [webny] => (item=[])
TASK [geerlingguy.git : Get installed version] *********************************
[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..
This feature will be removed in version 2.4. Deprecation warnings can be
disabled by setting deprecation_warnings=False in ansible.cfg.
skipping: [webny]
TASK [geerlingguy.git : Force git install if the version numbers do not match] *
skipping: [webny]
TASK [geerlingguy.git : Download git.] *****************************************
skipping: [webny]
TASK [geerlingguy.git : Expand git archive.] ***********************************
skipping: [webny]
TASK [geerlingguy.git : Build git.] ********************************************
skipping: [webny] => (item=all)
skipping: [webny] => (item=install)
TASK [geerlingguy.postfix : Ensure postfix is installed (RedHat).] *************
skipping: [webny]
TASK [geerlingguy.postfix : Ensure postfix is installed (Debian).] *************
ok: [webny]
TASK [geerlingguy.postfix : Ensure postfix is started and enabled at boot.] ****
ok: [webny]
TASK [geerlingguy.apache : Include OS-specific variables.] *********************
ok: [webny]
TASK [geerlingguy.apache : Define apache_packages.] ****************************
ok: [webny]
TASK [geerlingguy.apache : include] ********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.apache/tasks/setup-Debian.yml for webny
TASK [geerlingguy.apache : Update apt cache.] **********************************
ok: [webny]
TASK [geerlingguy.apache : Ensure Apache is installed on Debian.] **************
ok: [webny] => (item=[u'apache2', u'apache2-utils'])
TASK [geerlingguy.apache : Get installed version of Apache.] *******************
[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..
This feature will be removed in version 2.4. Deprecation warnings can be
disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [webny]
TASK [geerlingguy.apache : Create apache_version variable.] ********************
ok: [webny]
TASK [geerlingguy.apache : include_vars] ***************************************
skipping: [webny]
TASK [geerlingguy.apache : include_vars] ***************************************
ok: [webny]
TASK [geerlingguy.apache : include] ********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.apache/tasks/configure-Debian.yml for webny
TASK [geerlingguy.apache : Configure Apache.] **********************************
ok: [webny] => (item={u'regexp': u'^Listen ', u'line': u'Listen 80'})
TASK [geerlingguy.apache : Enable Apache mods.] ********************************
ok: [webny] => (item=expires.load)
ok: [webny] => (item=ssl.load)
ok: [webny] => (item=rewrite.load)
ok: [webny] => (item=proxy.load)
ok: [webny] => (item=proxy_fcgi.load)
TASK [geerlingguy.apache : Disable Apache mods.] *******************************
TASK [geerlingguy.apache : Check whether certificates defined in vhosts exist.]
TASK [geerlingguy.apache : Add apache vhosts configuration.] *******************
ok: [webny]
TASK [geerlingguy.apache : Add vhost symlink in sites-enabled.] ****************
ok: [webny]
TASK [geerlingguy.apache : Remove default vhost in sites-enabled.] *************
ok: [webny]
TASK [geerlingguy.apache : Ensure Apache has selected state and enabled on boot.] ***
ok: [webny]
TASK [geerlingguy.apache : Include OS-specific variables.] *********************
ok: [webny]
TASK [geerlingguy.apache : Define apache_packages.] ****************************
skipping: [webny]
TASK [geerlingguy.apache : include] ********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.apache/tasks/setup-Debian.yml for webny
TASK [geerlingguy.apache : Update apt cache.] **********************************
ok: [webny]
TASK [geerlingguy.apache : Ensure Apache is installed on Debian.] **************
ok: [webny] => (item=[u'apache2', u'apache2-utils'])
TASK [geerlingguy.apache : Get installed version of Apache.] *******************
[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..
This feature will be removed in version 2.4. Deprecation warnings can be
disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [webny]
TASK [geerlingguy.apache : Create apache_version variable.] ********************
ok: [webny]
TASK [geerlingguy.apache : include_vars] ***************************************
skipping: [webny]
TASK [geerlingguy.apache : include_vars] ***************************************
ok: [webny]
TASK [geerlingguy.apache : include] ********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.apache/tasks/configure-Debian.yml for webny
TASK [geerlingguy.apache : Configure Apache.] **********************************
ok: [webny] => (item={u'regexp': u'^Listen ', u'line': u'Listen 80'})
TASK [geerlingguy.apache : Enable Apache mods.] ********************************
ok: [webny] => (item=expires.load)
ok: [webny] => (item=ssl.load)
ok: [webny] => (item=rewrite.load)
ok: [webny] => (item=proxy.load)
ok: [webny] => (item=proxy_fcgi.load)
TASK [geerlingguy.apache : Disable Apache mods.] *******************************
TASK [geerlingguy.apache : Check whether certificates defined in vhosts exist.]
TASK [geerlingguy.apache : Add apache vhosts configuration.] *******************
ok: [webny]
TASK [geerlingguy.apache : Add vhost symlink in sites-enabled.] ****************
ok: [webny]
TASK [geerlingguy.apache : Remove default vhost in sites-enabled.] *************
ok: [webny]
TASK [geerlingguy.apache : Ensure Apache has selected state and enabled on boot.] ***
ok: [webny]
TASK [geerlingguy.apache-php-fpm : Enable mod_proxy_fcgi.] *********************
ok: [webny] => (item=proxy.load)
ok: [webny] => (item=proxy_fcgi.load)
TASK [geerlingguy.nginx : Include OS-specific variables.] **********************
skipping: [webny]
TASK [geerlingguy.nginx : Define nginx_user.] **********************************
skipping: [webny]
TASK [geerlingguy.nginx : Enable nginx repo.] **********************************
skipping: [webny]
TASK [geerlingguy.nginx : Ensure nginx is installed.] **************************
skipping: [webny]
TASK [geerlingguy.nginx : Add PPA for Nginx.] **********************************
skipping: [webny]
TASK [geerlingguy.nginx : Ensure nginx will reinstall if the PPA was just added.] ***
skipping: [webny]
TASK [geerlingguy.nginx : Update apt cache.] ***********************************
skipping: [webny]
TASK [geerlingguy.nginx : Ensure nginx is installed.] **************************
skipping: [webny]
TASK [geerlingguy.nginx : Update pkg cache.] ***********************************
skipping: [webny]
TASK [geerlingguy.nginx : Ensure nginx is installed.] **************************
skipping: [webny]
TASK [geerlingguy.nginx : Create logs directory.] ******************************
skipping: [webny]
TASK [geerlingguy.nginx : Remove default nginx vhost config file (if configured).] ***
skipping: [webny]
TASK [geerlingguy.nginx : Ensure nginx_vhost_path exists.] *********************
skipping: [webny]
TASK [geerlingguy.nginx : Add managed vhost config file (if any vhosts are configured).] ***
skipping: [webny]
TASK [geerlingguy.nginx : Remove managed vhost config file (if no vhosts are configured).] ***
skipping: [webny]
TASK [geerlingguy.nginx : Copy nginx configuration in place.] ******************
skipping: [webny]
TASK [geerlingguy.nginx : Ensure nginx is started and enabled to start at boot.] ***
skipping: [webny]
TASK [geerlingguy.php : Include OS-specific variables.] ************************
ok: [webny]
TASK [geerlingguy.php : Define php_packages.] **********************************
skipping: [webny]
TASK [geerlingguy.php : Define extra php_packages.] ****************************
ok: [webny]
TASK [geerlingguy.php : Define php_webserver_daemon.] **************************
ok: [webny]
TASK [geerlingguy.php : Define php_conf_paths.] ********************************
skipping: [webny]
TASK [geerlingguy.php : Define php_extension_conf_paths.] **********************
skipping: [webny]
TASK [geerlingguy.php : Define php_apc_conf_filename.] *************************
ok: [webny]
TASK [geerlingguy.php : Define php_opcache_conf_filename (Ubuntu 16.04).] ******
skipping: [webny]
TASK [geerlingguy.php : Define php_opcache_conf_filename.] *********************
ok: [webny]
TASK [geerlingguy.php : Define php_fpm_conf_path.] *****************************
skipping: [webny]
TASK [geerlingguy.php : include] ***********************************************
skipping: [webny]
TASK [geerlingguy.php : include] ***********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.php/tasks/setup-Debian.yml for webny
TASK [geerlingguy.php : Update apt cache.] *************************************
ok: [webny]
TASK [geerlingguy.php : Ensure PHP packages are installed.] ********************
ok: [webny] => (item=[u'php5.6', u'php5.6-apcu', u'php5.6-cli', u'php5.6-common', u'php5.6-curl', u'php5.6-dev', u'php5.6-fpm', u'php5.6-gd', u'php5.6-imap', u'php5.6-json', u'php5.6-mbstring', u'php5.6-mcrypt', u'php5.6-opcache', u'php5.6-sqlite3', u'php5.6-xml', u'php5.6-yaml', u'php5.6-bz2'])
TASK [geerlingguy.php : Delete APCu configuration file if this role will provide one.] ***
skipping: [webny] => (item=/etc/php/5.6/fpm/conf.d)
skipping: [webny] => (item=/etc/php/5.6/apache2/conf.d)
skipping: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : Delete OpCache configuration file if this role will provide one.] ***
skipping: [webny] => (item=/etc/php/5.6/fpm/conf.d)
skipping: [webny] => (item=/etc/php/5.6/apache2/conf.d)
skipping: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : include] ***********************************************
skipping: [webny]
TASK [geerlingguy.php : include] ***********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.php/tasks/configure.yml for webny
TASK [geerlingguy.php : Ensure configuration directories exist.] ***************
ok: [webny] => (item=/etc/php/5.6/fpm)
ok: [webny] => (item=/etc/php/5.6/apache2)
ok: [webny] => (item=/etc/php/5.6/cli)
ok: [webny] => (item=/etc/php/5.6/fpm/conf.d)
ok: [webny] => (item=/etc/php/5.6/apache2/conf.d)
ok: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : Place PHP configuration file in place.] ****************
ok: [webny] => (item=/etc/php/5.6/fpm)
ok: [webny] => (item=/etc/php/5.6/apache2)
ok: [webny] => (item=/etc/php/5.6/cli)
TASK [geerlingguy.php : include] ***********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.php/tasks/configure-apcu.yml for webny
TASK [geerlingguy.php : Check for existing APCu config files.] *****************
ok: [webny] => (item=/etc/php/5.6/fpm/conf.d)
ok: [webny] => (item=/etc/php/5.6/apache2/conf.d)
ok: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : Remove any non-role-supplied APCu config files.] *******
skipping: [webny] => (item=({'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, '_ansible_item_result': True, 'item': u'/etc/php/5.6/fpm/conf.d', u'examined': 41, 'invocation': {'module_name': u'find', u'module_args': {u'paths': [u'/etc/php/5.6/fpm/conf.d'], u'file_type': u'file', u'age': None, u'contains': u'extension(\\s+)?=(\\s+)?apc[u]?\\.so', u'recurse': False, u'age_stamp': u'mtime', u'patterns': [u'*'], u'get_checksum': False, u'use_regex': False, u'follow': False, u'hidden': False, u'size': None}}, u'matched': 1, u'msg': u''}, {u'uid': 0, u'woth': False, u'mtime': 1489500768.242277, u'inode': 3934559, u'isgid': False, u'size': 66, u'isuid': False, u'isreg': True, u'gid': 0, u'ischr': False, u'wusr': True, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'path': u'/etc/php/5.6/fpm/conf.d/20-apcu.ini', u'xusr': False, u'atime': 1489588357.212095, u'isdir': False, u'ctime': 1489500768.410277, u'isblk': False, u'wgrp': False, u'xgrp': False, u'dev': 64512, u'roth': True, u'isfifo': False, u'mode': u'0644', u'rusr': True}))
skipping: [webny] => (item=({'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, '_ansible_item_result': True, 'item': u'/etc/php/5.6/apache2/conf.d', u'examined': 3, 'invocation': {'module_name': u'find', u'module_args': {u'paths': [u'/etc/php/5.6/apache2/conf.d'], u'file_type': u'file', u'age': None, u'contains': u'extension(\\s+)?=(\\s+)?apc[u]?\\.so', u'recurse': False, u'age_stamp': u'mtime', u'patterns': [u'*'], u'get_checksum': False, u'use_regex': False, u'follow': False, u'hidden': False, u'size': None}}, u'matched': 1, u'msg': u''}, {u'uid': 0, u'woth': False, u'mtime': 1489500768.686277, u'inode': 3934560, u'isgid': False, u'size': 66, u'isuid': False, u'isreg': True, u'gid': 0, u'ischr': False, u'wusr': True, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'path': u'/etc/php/5.6/apache2/conf.d/20-apcu.ini', u'xusr': False, u'atime': 1489588406.988656, u'isdir': False, u'ctime': 1489500768.850277, u'isblk': False, u'wgrp': False, u'xgrp': False, u'dev': 64512, u'roth': True, u'isfifo': False, u'mode': u'0644', u'rusr': True}))
skipping: [webny] => (item=({'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, '_ansible_item_result': True, 'item': u'/etc/php/5.6/cli/conf.d', u'examined': 41, 'invocation': {'module_name': u'find', u'module_args': {u'paths': [u'/etc/php/5.6/cli/conf.d'], u'file_type': u'file', u'age': None, u'contains': u'extension(\\s+)?=(\\s+)?apc[u]?\\.so', u'recurse': False, u'age_stamp': u'mtime', u'patterns': [u'*'], u'get_checksum': False, u'use_regex': False, u'follow': False, u'hidden': False, u'size': None}}, u'matched': 1, u'msg': u''}, {u'uid': 0, u'woth': False, u'mtime': 1489500769.114277, u'inode': 3934561, u'isgid': False, u'size': 66, u'isuid': False, u'isreg': True, u'gid': 0, u'ischr': False, u'wusr': True, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'path': u'/etc/php/5.6/cli/conf.d/20-apcu.ini', u'xusr': False, u'atime': 1489588407.164656, u'isdir': False, u'ctime': 1489500769.310277, u'isblk': False, u'wgrp': False, u'xgrp': False, u'dev': 64512, u'roth': True, u'isfifo': False, u'mode': u'0644', u'rusr': True}))
TASK [geerlingguy.php : Ensure APCu config file is present.] *******************
ok: [webny] => (item=/etc/php/5.6/fpm/conf.d)
ok: [webny] => (item=/etc/php/5.6/apache2/conf.d)
ok: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : Remove APCu config file if APC is disabled.] ***********
skipping: [webny] => (item=/etc/php/5.6/fpm/conf.d)
skipping: [webny] => (item=/etc/php/5.6/apache2/conf.d)
skipping: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : include] ***********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.php/tasks/configure-opcache.yml for webny
TASK [geerlingguy.php : Check for existing OpCache config files.] **************
ok: [webny] => (item=/etc/php/5.6/fpm/conf.d)
ok: [webny] => (item=/etc/php/5.6/apache2/conf.d)
ok: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : Remove any non-role-supplied OpCache config files.] ****
skipping: [webny] => (item=({'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, '_ansible_item_result': True, 'item': u'/etc/php/5.6/fpm/conf.d', u'examined': 41, 'invocation': {'module_name': u'find', u'module_args': {u'paths': [u'/etc/php/5.6/fpm/conf.d'], u'file_type': u'file', u'age': None, u'contains': u'zend_extension(\\s+)?=(\\s+)?opcache\\.so', u'recurse': False, u'age_stamp': u'mtime', u'patterns': [u'*'], u'get_checksum': False, u'use_regex': False, u'follow': False, u'hidden': False, u'size': None}}, u'matched': 1, u'msg': u''}, {u'uid': 0, u'woth': False, u'mtime': 1489500771.538277, u'inode': 3934562, u'isgid': False, u'size': 303, u'isuid': False, u'isreg': True, u'gid': 0, u'ischr': False, u'wusr': True, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'path': u'/etc/php/5.6/fpm/conf.d/05-opcache.ini', u'xusr': False, u'atime': 1489588357.192095, u'isdir': False, u'ctime': 1489500771.710277, u'isblk': False, u'wgrp': False, u'xgrp': False, u'dev': 64512, u'roth': True, u'isfifo': False, u'mode': u'0644', u'rusr': True}))
skipping: [webny] => (item=({'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, '_ansible_item_result': True, 'item': u'/etc/php/5.6/apache2/conf.d', u'examined': 3, 'invocation': {'module_name': u'find', u'module_args': {u'paths': [u'/etc/php/5.6/apache2/conf.d'], u'file_type': u'file', u'age': None, u'contains': u'zend_extension(\\s+)?=(\\s+)?opcache\\.so', u'recurse': False, u'age_stamp': u'mtime', u'patterns': [u'*'], u'get_checksum': False, u'use_regex': False, u'follow': False, u'hidden': False, u'size': None}}, u'matched': 1, u'msg': u''}, {u'uid': 0, u'woth': False, u'mtime': 1489500771.962277, u'inode': 3934563, u'isgid': False, u'size': 303, u'isuid': False, u'isreg': True, u'gid': 0, u'ischr': False, u'wusr': True, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'path': u'/etc/php/5.6/apache2/conf.d/05-opcache.ini', u'xusr': False, u'atime': 1489588741.726332, u'isdir': False, u'ctime': 1489500772.130277, u'isblk': False, u'wgrp': False, u'xgrp': False, u'dev': 64512, u'roth': True, u'isfifo': False, u'mode': u'0644', u'rusr': True}))
skipping: [webny] => (item=({'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, '_ansible_item_result': True, 'item': u'/etc/php/5.6/cli/conf.d', u'examined': 41, 'invocation': {'module_name': u'find', u'module_args': {u'paths': [u'/etc/php/5.6/cli/conf.d'], u'file_type': u'file', u'age': None, u'contains': u'zend_extension(\\s+)?=(\\s+)?opcache\\.so', u'recurse': False, u'age_stamp': u'mtime', u'patterns': [u'*'], u'get_checksum': False, u'use_regex': False, u'follow': False, u'hidden': False, u'size': None}}, u'matched': 1, u'msg': u''}, {u'uid': 0, u'woth': False, u'mtime': 1489500772.378277, u'inode': 3934564, u'isgid': False, u'size': 303, u'isuid': False, u'isreg': True, u'gid': 0, u'ischr': False, u'wusr': True, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'path': u'/etc/php/5.6/cli/conf.d/05-opcache.ini', u'xusr': False, u'atime': 1489588407.164656, u'isdir': False, u'ctime': 1489500772.538277, u'isblk': False, u'wgrp': False, u'xgrp': False, u'dev': 64512, u'roth': True, u'isfifo': False, u'mode': u'0644', u'rusr': True}))
TASK [geerlingguy.php : Ensure OpCache config file is present.] ****************
ok: [webny] => (item=/etc/php/5.6/fpm/conf.d)
ok: [webny] => (item=/etc/php/5.6/apache2/conf.d)
ok: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : Remove OpCache config file if OpCache is disabled.] ****
skipping: [webny] => (item=/etc/php/5.6/cli/conf.d)
skipping: [webny] => (item=/etc/php/5.6/apache2/conf.d)
skipping: [webny] => (item=/etc/php/5.6/fpm/conf.d)
TASK [geerlingguy.php : include] ***********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.php/tasks/configure-fpm.yml for webny
TASK [geerlingguy.php : Define php_fpm_daemon.] ********************************
skipping: [webny]
TASK [geerlingguy.php : Define php_fpm_pool_conf_path.] ************************
skipping: [webny]
TASK [geerlingguy.php : Define php_fpm_pool_user.] *****************************
ok: [webny]
TASK [geerlingguy.php : Define php_fpm_pool_group.] ****************************
ok: [webny]
TASK [geerlingguy.php : Stat php_fpm_pool_conf_path] ***************************
ok: [webny]
TASK [geerlingguy.php : Ensure the default pool directory exists.] *************
skipping: [webny]
TASK [geerlingguy.php : Ensure the default pool exists.] ***********************
ok: [webny]
TASK [geerlingguy.php : Configure php-fpm pool (if enabled).] ******************
ok: [webny] => (item={u'regexp': u'^user.?=.+$', u'line': u'user = www-data'})
ok: [webny] => (item={u'regexp': u'^group.?=.+$', u'line': u'group = www-data'})
ok: [webny] => (item={u'regexp': u'^listen.?=.+$', u'line': u'listen = 127.0.0.1:9000'})
ok: [webny] => (item={u'regexp': u'^listen\\.allowed_clients.?=.+$', u'line': u'listen.allowed_clients = 127.0.0.1'})
ok: [webny] => (item={u'regexp': u'^pm\\.max_children.?=.+$', u'line': u'pm.max_children = 50'})
ok: [webny] => (item={u'regexp': u'^pm\\.start_servers.?=.+$', u'line': u'pm.start_servers = 5'})
ok: [webny] => (item={u'regexp': u'^pm\\.min_spare_servers.?=.+$', u'line': u'pm.min_spare_servers = 5'})
ok: [webny] => (item={u'regexp': u'^pm\\.max_spare_servers.?=.+$', u'line': u'pm.max_spare_servers = 5'})
TASK [geerlingguy.php : Ensure php-fpm is started and enabled at boot (if configured).] ***
ok: [webny]
TASK [geerlingguy.php : Include OS-specific variables.] ************************
ok: [webny]
TASK [geerlingguy.php : Define php_packages.] **********************************
skipping: [webny]
TASK [geerlingguy.php : Define extra php_packages.] ****************************
ok: [webny]
TASK [geerlingguy.php : Define php_webserver_daemon.] **************************
skipping: [webny]
TASK [geerlingguy.php : Define php_conf_paths.] ********************************
skipping: [webny]
TASK [geerlingguy.php : Define php_extension_conf_paths.] **********************
skipping: [webny]
TASK [geerlingguy.php : Define php_apc_conf_filename.] *************************
skipping: [webny]
TASK [geerlingguy.php : Define php_opcache_conf_filename (Ubuntu 16.04).] ******
skipping: [webny]
TASK [geerlingguy.php : Define php_opcache_conf_filename.] *********************
skipping: [webny]
TASK [geerlingguy.php : Define php_fpm_conf_path.] *****************************
skipping: [webny]
TASK [geerlingguy.php : include] ***********************************************
skipping: [webny]
TASK [geerlingguy.php : include] ***********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.php/tasks/setup-Debian.yml for webny
TASK [geerlingguy.php : Update apt cache.] *************************************
ok: [webny]
TASK [geerlingguy.php : Ensure PHP packages are installed.] ********************
ok: [webny] => (item=[u'php5.6', u'php5.6-apcu', u'php5.6-cli', u'php5.6-common', u'php5.6-curl', u'php5.6-dev', u'php5.6-fpm', u'php5.6-gd', u'php5.6-imap', u'php5.6-json', u'php5.6-mbstring', u'php5.6-mcrypt', u'php5.6-opcache', u'php5.6-sqlite3', u'php5.6-xml', u'php5.6-yaml', u'php5.6-bz2', u'php5.6-bz2'])
TASK [geerlingguy.php : Delete APCu configuration file if this role will provide one.] ***
skipping: [webny] => (item=/etc/php/5.6/fpm/conf.d)
skipping: [webny] => (item=/etc/php/5.6/apache2/conf.d)
skipping: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : Delete OpCache configuration file if this role will provide one.] ***
skipping: [webny] => (item=/etc/php/5.6/fpm/conf.d)
skipping: [webny] => (item=/etc/php/5.6/apache2/conf.d)
skipping: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : include] ***********************************************
skipping: [webny]
TASK [geerlingguy.php : include] ***********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.php/tasks/configure.yml for webny
TASK [geerlingguy.php : Ensure configuration directories exist.] ***************
ok: [webny] => (item=/etc/php/5.6/fpm)
ok: [webny] => (item=/etc/php/5.6/apache2)
ok: [webny] => (item=/etc/php/5.6/cli)
ok: [webny] => (item=/etc/php/5.6/fpm/conf.d)
ok: [webny] => (item=/etc/php/5.6/apache2/conf.d)
ok: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : Place PHP configuration file in place.] ****************
ok: [webny] => (item=/etc/php/5.6/fpm)
ok: [webny] => (item=/etc/php/5.6/apache2)
ok: [webny] => (item=/etc/php/5.6/cli)
TASK [geerlingguy.php : include] ***********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.php/tasks/configure-apcu.yml for webny
TASK [geerlingguy.php : Check for existing APCu config files.] *****************
ok: [webny] => (item=/etc/php/5.6/fpm/conf.d)
ok: [webny] => (item=/etc/php/5.6/apache2/conf.d)
ok: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : Remove any non-role-supplied APCu config files.] *******
skipping: [webny] => (item=({'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, '_ansible_item_result': True, 'item': u'/etc/php/5.6/fpm/conf.d', u'examined': 41, 'invocation': {'module_name': u'find', u'module_args': {u'paths': [u'/etc/php/5.6/fpm/conf.d'], u'file_type': u'file', u'age': None, u'contains': u'extension(\\s+)?=(\\s+)?apc[u]?\\.so', u'recurse': False, u'age_stamp': u'mtime', u'patterns': [u'*'], u'get_checksum': False, u'use_regex': False, u'follow': False, u'hidden': False, u'size': None}}, u'matched': 1, u'msg': u''}, {u'uid': 0, u'woth': False, u'mtime': 1489500768.242277, u'inode': 3934559, u'isgid': False, u'size': 66, u'isuid': False, u'isreg': True, u'gid': 0, u'ischr': False, u'wusr': True, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'path': u'/etc/php/5.6/fpm/conf.d/20-apcu.ini', u'xusr': False, u'atime': 1489588357.212095, u'isdir': False, u'ctime': 1489500768.410277, u'isblk': False, u'wgrp': False, u'xgrp': False, u'dev': 64512, u'roth': True, u'isfifo': False, u'mode': u'0644', u'rusr': True}))
skipping: [webny] => (item=({'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, '_ansible_item_result': True, 'item': u'/etc/php/5.6/apache2/conf.d', u'examined': 3, 'invocation': {'module_name': u'find', u'module_args': {u'paths': [u'/etc/php/5.6/apache2/conf.d'], u'file_type': u'file', u'age': None, u'contains': u'extension(\\s+)?=(\\s+)?apc[u]?\\.so', u'recurse': False, u'age_stamp': u'mtime', u'patterns': [u'*'], u'get_checksum': False, u'use_regex': False, u'follow': False, u'hidden': False, u'size': None}}, u'matched': 1, u'msg': u''}, {u'uid': 0, u'woth': False, u'mtime': 1489500768.686277, u'inode': 3934560, u'isgid': False, u'size': 66, u'isuid': False, u'isreg': True, u'gid': 0, u'ischr': False, u'wusr': True, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'path': u'/etc/php/5.6/apache2/conf.d/20-apcu.ini', u'xusr': False, u'atime': 1489588406.988656, u'isdir': False, u'ctime': 1489500768.850277, u'isblk': False, u'wgrp': False, u'xgrp': False, u'dev': 64512, u'roth': True, u'isfifo': False, u'mode': u'0644', u'rusr': True}))
skipping: [webny] => (item=({'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, '_ansible_item_result': True, 'item': u'/etc/php/5.6/cli/conf.d', u'examined': 41, 'invocation': {'module_name': u'find', u'module_args': {u'paths': [u'/etc/php/5.6/cli/conf.d'], u'file_type': u'file', u'age': None, u'contains': u'extension(\\s+)?=(\\s+)?apc[u]?\\.so', u'recurse': False, u'age_stamp': u'mtime', u'patterns': [u'*'], u'get_checksum': False, u'use_regex': False, u'follow': False, u'hidden': False, u'size': None}}, u'matched': 1, u'msg': u''}, {u'uid': 0, u'woth': False, u'mtime': 1489500769.114277, u'inode': 3934561, u'isgid': False, u'size': 66, u'isuid': False, u'isreg': True, u'gid': 0, u'ischr': False, u'wusr': True, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'path': u'/etc/php/5.6/cli/conf.d/20-apcu.ini', u'xusr': False, u'atime': 1489588407.164656, u'isdir': False, u'ctime': 1489500769.310277, u'isblk': False, u'wgrp': False, u'xgrp': False, u'dev': 64512, u'roth': True, u'isfifo': False, u'mode': u'0644', u'rusr': True}))
TASK [geerlingguy.php : Ensure APCu config file is present.] *******************
ok: [webny] => (item=/etc/php/5.6/fpm/conf.d)
ok: [webny] => (item=/etc/php/5.6/apache2/conf.d)
ok: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : Remove APCu config file if APC is disabled.] ***********
skipping: [webny] => (item=/etc/php/5.6/fpm/conf.d)
skipping: [webny] => (item=/etc/php/5.6/apache2/conf.d)
skipping: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : include] ***********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.php/tasks/configure-opcache.yml for webny
TASK [geerlingguy.php : Check for existing OpCache config files.] **************
ok: [webny] => (item=/etc/php/5.6/fpm/conf.d)
ok: [webny] => (item=/etc/php/5.6/apache2/conf.d)
ok: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : Remove any non-role-supplied OpCache config files.] ****
skipping: [webny] => (item=({'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, '_ansible_item_result': True, 'item': u'/etc/php/5.6/fpm/conf.d', u'examined': 41, 'invocation': {'module_name': u'find', u'module_args': {u'paths': [u'/etc/php/5.6/fpm/conf.d'], u'file_type': u'file', u'age': None, u'contains': u'zend_extension(\\s+)?=(\\s+)?opcache\\.so', u'recurse': False, u'age_stamp': u'mtime', u'patterns': [u'*'], u'get_checksum': False, u'use_regex': False, u'follow': False, u'hidden': False, u'size': None}}, u'matched': 1, u'msg': u''}, {u'uid': 0, u'woth': False, u'mtime': 1489500771.538277, u'inode': 3934562, u'isgid': False, u'size': 303, u'isuid': False, u'isreg': True, u'gid': 0, u'ischr': False, u'wusr': True, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'path': u'/etc/php/5.6/fpm/conf.d/05-opcache.ini', u'xusr': False, u'atime': 1489588357.192095, u'isdir': False, u'ctime': 1489500771.710277, u'isblk': False, u'wgrp': False, u'xgrp': False, u'dev': 64512, u'roth': True, u'isfifo': False, u'mode': u'0644', u'rusr': True}))
skipping: [webny] => (item=({'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, '_ansible_item_result': True, 'item': u'/etc/php/5.6/apache2/conf.d', u'examined': 3, 'invocation': {'module_name': u'find', u'module_args': {u'paths': [u'/etc/php/5.6/apache2/conf.d'], u'file_type': u'file', u'age': None, u'contains': u'zend_extension(\\s+)?=(\\s+)?opcache\\.so', u'recurse': False, u'age_stamp': u'mtime', u'patterns': [u'*'], u'get_checksum': False, u'use_regex': False, u'follow': False, u'hidden': False, u'size': None}}, u'matched': 1, u'msg': u''}, {u'uid': 0, u'woth': False, u'mtime': 1489500771.962277, u'inode': 3934563, u'isgid': False, u'size': 303, u'isuid': False, u'isreg': True, u'gid': 0, u'ischr': False, u'wusr': True, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'path': u'/etc/php/5.6/apache2/conf.d/05-opcache.ini', u'xusr': False, u'atime': 1489588741.726332, u'isdir': False, u'ctime': 1489500772.130277, u'isblk': False, u'wgrp': False, u'xgrp': False, u'dev': 64512, u'roth': True, u'isfifo': False, u'mode': u'0644', u'rusr': True}))
skipping: [webny] => (item=({'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, '_ansible_item_result': True, 'item': u'/etc/php/5.6/cli/conf.d', u'examined': 41, 'invocation': {'module_name': u'find', u'module_args': {u'paths': [u'/etc/php/5.6/cli/conf.d'], u'file_type': u'file', u'age': None, u'contains': u'zend_extension(\\s+)?=(\\s+)?opcache\\.so', u'recurse': False, u'age_stamp': u'mtime', u'patterns': [u'*'], u'get_checksum': False, u'use_regex': False, u'follow': False, u'hidden': False, u'size': None}}, u'matched': 1, u'msg': u''}, {u'uid': 0, u'woth': False, u'mtime': 1489500772.378277, u'inode': 3934564, u'isgid': False, u'size': 303, u'isuid': False, u'isreg': True, u'gid': 0, u'ischr': False, u'wusr': True, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'path': u'/etc/php/5.6/cli/conf.d/05-opcache.ini', u'xusr': False, u'atime': 1489588407.164656, u'isdir': False, u'ctime': 1489500772.538277, u'isblk': False, u'wgrp': False, u'xgrp': False, u'dev': 64512, u'roth': True, u'isfifo': False, u'mode': u'0644', u'rusr': True}))
TASK [geerlingguy.php : Ensure OpCache config file is present.] ****************
ok: [webny] => (item=/etc/php/5.6/fpm/conf.d)
ok: [webny] => (item=/etc/php/5.6/apache2/conf.d)
ok: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : Remove OpCache config file if OpCache is disabled.] ****
skipping: [webny] => (item=/etc/php/5.6/fpm/conf.d)
skipping: [webny] => (item=/etc/php/5.6/apache2/conf.d)
skipping: [webny] => (item=/etc/php/5.6/cli/conf.d)
TASK [geerlingguy.php : include] ***********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.php/tasks/configure-fpm.yml for webny
TASK [geerlingguy.php : Define php_fpm_daemon.] ********************************
skipping: [webny]
TASK [geerlingguy.php : Define php_fpm_pool_conf_path.] ************************
skipping: [webny]
TASK [geerlingguy.php : Define php_fpm_pool_user.] *****************************
skipping: [webny]
TASK [geerlingguy.php : Define php_fpm_pool_group.] ****************************
skipping: [webny]
TASK [geerlingguy.php : Stat php_fpm_pool_conf_path] ***************************
ok: [webny]
TASK [geerlingguy.php : Ensure the default pool directory exists.] *************
skipping: [webny]
TASK [geerlingguy.php : Ensure the default pool exists.] ***********************
ok: [webny]
TASK [geerlingguy.php : Configure php-fpm pool (if enabled).] ******************
ok: [webny] => (item={u'regexp': u'^user.?=.+$', u'line': u'user = www-data'})
ok: [webny] => (item={u'regexp': u'^group.?=.+$', u'line': u'group = www-data'})
ok: [webny] => (item={u'regexp': u'^listen.?=.+$', u'line': u'listen = 127.0.0.1:9000'})
ok: [webny] => (item={u'regexp': u'^listen\\.allowed_clients.?=.+$', u'line': u'listen.allowed_clients = 127.0.0.1'})
ok: [webny] => (item={u'regexp': u'^pm\\.max_children.?=.+$', u'line': u'pm.max_children = 50'})
ok: [webny] => (item={u'regexp': u'^pm\\.start_servers.?=.+$', u'line': u'pm.start_servers = 5'})
ok: [webny] => (item={u'regexp': u'^pm\\.min_spare_servers.?=.+$', u'line': u'pm.min_spare_servers = 5'})
ok: [webny] => (item={u'regexp': u'^pm\\.max_spare_servers.?=.+$', u'line': u'pm.max_spare_servers = 5'})
TASK [geerlingguy.php : Ensure php-fpm is started and enabled at boot (if configured).] ***
ok: [webny]
TASK [geerlingguy.php-pecl : Install PECL libaries.] ***************************
TASK [geerlingguy.composer : Set php_executable variable to a default if not defined.] ***
skipping: [webny]
TASK [geerlingguy.composer : Check if Composer is installed.] ******************
ok: [webny]
TASK [geerlingguy.composer : Download Composer installer.] *********************
skipping: [webny]
TASK [geerlingguy.composer : Run Composer installer.] **************************
skipping: [webny]
TASK [geerlingguy.composer : Move Composer into globally-accessible location.] *
skipping: [webny]
TASK [geerlingguy.composer : Update Composer to latest version (if configured).] ***
skipping: [webny]
TASK [geerlingguy.composer : Ensure composer directory exists.] ****************
ok: [webny]
TASK [geerlingguy.composer : Add GitHub OAuth token for Composer (if configured).] ***
skipping: [webny]
TASK [geerlingguy.composer : Install configured globally-required packages.] ***
ok: [webny] => (item={u'release': u'^0.3', u'name': u'hirak/prestissimo'})
TASK [geerlingguy.composer : Add composer_home_path bin directory to global $PATH.] ***
ok: [webny]
TASK [geerlingguy.composer : Add composer_project_path bin directory to global $PATH.] ***
skipping: [webny]
TASK [geerlingguy.mysql : Include OS-specific variables.] **********************
ok: [webny]
TASK [geerlingguy.mysql : Include OS-specific variables (RedHat).] *************
skipping: [webny]
TASK [geerlingguy.mysql : Define mysql_packages.] ******************************
ok: [webny]
TASK [geerlingguy.mysql : Define mysql_daemon.] ********************************
ok: [webny]
TASK [geerlingguy.mysql : Define mysql_slow_query_log_file.] *******************
ok: [webny]
TASK [geerlingguy.mysql : Define mysql_log_error.] *****************************
ok: [webny]
TASK [geerlingguy.mysql : Define mysql_syslog_tag.] ****************************
ok: [webny]
TASK [geerlingguy.mysql : Define mysql_pid_file.] ******************************
ok: [webny]
TASK [geerlingguy.mysql : Define mysql_config_file.] ***************************
ok: [webny]
TASK [geerlingguy.mysql : Define mysql_config_include_dir.] ********************
ok: [webny]
TASK [geerlingguy.mysql : Define mysql_socket.] ********************************
ok: [webny]
TASK [geerlingguy.mysql : Define mysql_supports_innodb_large_prefix.] **********
ok: [webny]
TASK [geerlingguy.mysql : include] *********************************************
skipping: [webny]
TASK [geerlingguy.mysql : include] *********************************************
included: /Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/roles/geerlingguy.mysql/tasks/setup-Debian.yml for webny
TASK [geerlingguy.mysql : Check if MySQL is already installed.] ****************
ok: [webny]
TASK [geerlingguy.mysql : Update apt cache if MySQL is not yet installed.] *****
skipping: [webny]
TASK [geerlingguy.mysql : Ensure MySQL Python libraries are installed.] ********
ok: [webny]
TASK [geerlingguy.mysql : Ensure MySQL packages are installed.] ****************
ok: [webny] => (item=[u'mysql-common', u'mysql-server'])
TASK [geerlingguy.mysql : Ensure MySQL is stopped after initial install.] ******
skipping: [webny]
TASK [geerlingguy.mysql : Delete innodb log files created by apt package after initial install.] ***
skipping: [webny] => (item=ib_logfile0)
skipping: [webny] => (item=ib_logfile1)
TASK [geerlingguy.mysql : Check if MySQL packages were installed.] *************
ok: [webny]
TASK [geerlingguy.mysql : Copy my.cnf global MySQL configuration.] *************
ok: [webny]
TASK [geerlingguy.mysql : Verify mysql include directory exists.] **************
skipping: [webny]
TASK [geerlingguy.mysql : Copy my.cnf override files into include directory.] **
TASK [geerlingguy.mysql : Create slow query log file (if configured).] *********
ok: [webny]
TASK [geerlingguy.mysql : Create datadir if it does not exist] *****************
ok: [webny]
TASK [geerlingguy.mysql : Set ownership on slow query log file (if configured).] ***
ok: [webny]
TASK [geerlingguy.mysql : Create error log file (if configured).] **************
ok: [webny]
TASK [geerlingguy.mysql : Set ownership on error log file (if configured).] ****
ok: [webny]
TASK [geerlingguy.mysql : Ensure MySQL is started and enabled on boot.] ********
ok: [webny]
TASK [geerlingguy.mysql : Get MySQL version.] **********************************
ok: [webny]
TASK [geerlingguy.mysql : Ensure default user is present.] *********************
skipping: [webny]
TASK [geerlingguy.mysql : Copy user-my.cnf file with password credentials.] ****
skipping: [webny]
TASK [geerlingguy.mysql : Disallow root login remotely] ************************
ok: [webny] => (item=DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost', '127.0.0.1', '::1'))
TASK [geerlingguy.mysql : Get list of hosts for the root user.] ****************
[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..
This feature will be removed in version 2.4. Deprecation warnings can be
disabled by setting deprecation_warnings=False in ansible.cfg.
skipping: [webny]
TASK [geerlingguy.mysql : Update MySQL root password for localhost root account (5.7.x).] ***
TASK [geerlingguy.mysql : Update MySQL root password for localhost root account (< 5.7.x).] ***
TASK [geerlingguy.mysql : Copy .my.cnf file with root password credentials.] ***
skipping: [webny]
TASK [geerlingguy.mysql : Get list of hosts for the anonymous user.] ***********
[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..
This feature will be removed in version 2.4. Deprecation warnings can be
disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [webny]
TASK [geerlingguy.mysql : Remove anonymous MySQL users.] ***********************
TASK [geerlingguy.mysql : Remove MySQL test database.] *************************
ok: [webny]
TASK [geerlingguy.mysql : Ensure MySQL databases are present.] *****************
ok: [webny] => (item={u'collation': u'utf8mb4_general_ci', u'name': u'drupal', u'encoding': u'utf8mb4'})
TASK [geerlingguy.mysql : Ensure MySQL users are present.] *********************
ok: [webny] => (item=(censored due to no_log))
TASK [geerlingguy.mysql : Ensure replication user exists on master.] ***********
skipping: [webny]
TASK [geerlingguy.mysql : Check slave replication status.] *********************
skipping: [webny]
TASK [geerlingguy.mysql : Check master replication status.] ********************
skipping: [webny]
TASK [geerlingguy.mysql : Configure replication on the slave.] *****************
skipping: [webny]
TASK [geerlingguy.mysql : Start replication.] **********************************
skipping: [webny]
TASK [geerlingguy.php-mysql : Include OS-specific variables.] ******************
ok: [webny]
TASK [geerlingguy.php-mysql : Define php_mysql_package.] ***********************
skipping: [webny]
TASK [geerlingguy.php-mysql : Install PHP MySQL dependencies (RedHat).] ********
skipping: [webny]
TASK [geerlingguy.php-mysql : Install PHP MySQL dependencies (Debian).] ********
ok: [webny]
TASK [geerlingguy.postgresql : include] ****************************************
skipping: [webny]
TASK [geerlingguy.postgresql : include] ****************************************
skipping: [webny]
TASK [geerlingguy.postgresql : include] ****************************************
skipping: [webny]
TASK [geerlingguy.postgresql : Set PostgreSQL environment variables.] **********
skipping: [webny]
TASK [geerlingguy.postgresql : Ensure PostgreSQL data directory exists.] *******
skipping: [webny]
TASK [geerlingguy.postgresql : Check if PostgreSQL database is initialized.] ***
skipping: [webny]
TASK [geerlingguy.postgresql : Ensure PostgreSQL database is initialized.] *****
skipping: [webny]
TASK [geerlingguy.postgresql : Configure global settings.] *********************
skipping: [webny] => (item={u'option': u'unix_socket_directories', u'value': u'/var/run/postgresql'})
TASK [geerlingguy.postgresql : Ensure PostgreSQL unix socket dirs exist.] ******
skipping: [webny] => (item=/var/run/postgresql)
TASK [geerlingguy.postgresql : Ensure PostgreSQL is started and enabled on boot.] ***
skipping: [webny]
TASK [geerlingguy.postgresql : Ensure PostgreSQL databases are present.] *******
skipping: [webny] => (item={u'name': u'drupal'})
TASK [geerlingguy.postgresql : Ensure PostgreSQL users are present.] ***********
skipping: [webny] => (item=(censored due to no_log))
TASK [geerlingguy.php-pgsql : Include OS-specific variables.] ******************
skipping: [webny]
TASK [geerlingguy.php-pgsql : Define php_pgsql_package.] ***********************
skipping: [webny]
TASK [geerlingguy.php-pgsql : Install PHP PostgreSQL dependencies (RedHat).] ***
skipping: [webny]
TASK [geerlingguy.php-pgsql : Install PHP PostgreSQL dependencies (Debian).] ***
skipping: [webny]
TASK [geerlingguy.drupal-console : Install Drupal Console.] ********************
skipping: [webny]
TASK [geerlingguy.drupal-console : Ensure Drupal Console is executable.] *******
skipping: [webny]
TASK [geerlingguy.drupal-console : Run Drupal Console init.] *******************
skipping: [webny]
TASK [geerlingguy.drupal-console : Update Drupal Console to latest version (if configured).] ***
skipping: [webny]
TASK [geerlingguy.drush : Install Drush.] **************************************
ok: [webny]
TASK [geerlingguy.drush : Ensure Drush is executable.] *************************
fatal: [webny]: FAILED! => {"changed": false, "failed": true, "msg": "src and dest are required for creating links"}
to retry, use: --limit @/Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/playbook.retry
PLAY RECAP *********************************************************************
webny : ok=137 changed=1 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
justinwinter in ~/Sites/wny [-b feature/NDD-868-Video-Embed*]
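For context on the failure above: the message "src and dest are required for creating links" comes from Ansible's `file` module, which refuses to create a symlink with `state: link` unless both `src` and `dest` are supplied. A minimal sketch of a valid link task follows — both paths here are illustrative assumptions, not the geerlingguy.drush role's actual values:

```yaml
# Sketch of an Ansible symlink task. The paths are assumptions for
# illustration only; the real role computes them from its variables.
- name: Ensure Drush is executable
  file:
    src: /usr/local/share/drush/drush    # assumed source path
    dest: /usr/local/bin/drush           # assumed link location
    state: link
```

So the error suggests the role's `src` (or `dest`) variable resolved to empty on this run, rather than a problem with the target system itself.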
Also, possibly unrelated, but if I comment out drush and mailhog in the config file
# - drush
# - mailhog
I now see this error:
TASK [geerlingguy.drupal : Install dependencies with composer require.] ********
failed: [webny] (item=drupal/devel:1.x-dev) => {"failed": true, "item": "drupal/devel:1.x-dev", "msg": "./composer.json has been updated Loading composer repositories with package information Installation failed, reverting ./composer.json to its original content. [RuntimeException] Failed to execute git clone --mirror 'git@github.com:ny/amber_alert_module.git' '/home/vagrant/.composer/cache/vcs/git-github.com-ny-amber-alert-module.git/' require [--dev] [--prefer-source] [--prefer-dist] [--no-progress] [--no-suggest] [--no-update] [--no-scripts] [--update-no-dev] [--update-with-dependencies] [--ignore-platform-reqs] [--prefer-stable] [--prefer-lowest] [--sort-packages] [-o|--optimize-autoloader] [-a|--classmap-authoritative] [--apcu-autoloader] [--] [<packages>]...", "stdout": "./composer.json has been updated\nLoading composer repositories with package information\n\nInstallation failed, reverting ./composer.json to its original content.\n\n \n [RuntimeException] \n Failed to execute git clone --mirror 'git@github.com:ny/amber_alert_module.git' '/home/vagrant/.composer/cache/vcs/git-github.com-ny-amber-alert-module.git/' \n \n\nrequire [--dev] [--prefer-source] [--prefer-dist] [--no-progress] [--no-suggest] [--no-update] [--no-scripts] [--update-no-dev] [--update-with-dependencies] [--ignore-platform-reqs] [--prefer-stable] [--prefer-lowest] [--sort-packages] [-o|--optimize-autoloader] [-a|--classmap-authoritative] [--apcu-autoloader] [--] [<packages>]...\n\n", "stdout_lines": ["./composer.json has been updated", "Loading composer repositories with package information", "", "Installation failed, reverting ./composer.json to its original content.", "", " ", " [RuntimeException] ", " Failed to execute git clone --mirror 'git@github.com:ny/amber_alert_module.git' '/home/vagrant/.composer/cache/vcs/git-github.com-ny-amber-alert-module.git/' ", " ", "", "require [--dev] [--prefer-source] [--prefer-dist] [--no-progress] [--no-suggest] [--no-update] [--no-scripts] [--update-no-dev] [--update-with-dependencies] [--ignore-platform-reqs] [--prefer-stable] [--prefer-lowest] [--sort-packages] [-o|--optimize-autoloader] [-a|--classmap-authoritative] [--apcu-autoloader] [--] [<packages>]...", ""]}
to retry, use: --limit @/Users/justinwinter/Sites/wny/vendor/geerlingguy/drupal-vm/provisioning/playbook.retry
Seems that our custom amber alert module is having some issues getting cloned for some reason.
However, if I run composer update manually, I don't have any issues...
@justinlevi - It looks like you're managing your dependencies via composer.json on your own, so there's no need to have Drupal VM try to manage deps for you.
In this case, I'd say set drupal_composer_dependencies: [] in your config.yml
Oh wow... I didn't know about that config setting. Was that added recently?
I did upgrade drupal-vm recently and was wondering why I'm just seeing those composer issues now.
That's been in default.config.yml for a few releases now, since 4.0 I think? Anyways—does that get you through to the end now?
Yes, this can be closed. Thank you!
TASK [geerlingguy.drush : Install Drush.] **************************************
fatal: [drupalvm]: FAILED! => {"changed": false, "failed": true, "msg": "failed to create temporary content file: ('The read operation timed out',)"}
My OS is Ubuntu 16.04.
@sudishth - That seems like a different problem besides MySQL restart failing. Can you try running vagrant provision again and see if it works? That error usually means your computer couldn't download Drush from GitHub for some reason (usually temporary).
| gharchive/issue | 2017-03-14T15:53:44 | 2025-04-01T06:38:46.193819 | {
"authors": [
"geerlingguy",
"justinlevi",
"sudishth"
],
"repo": "geerlingguy/drupal-vm",
"url": "https://github.com/geerlingguy/drupal-vm/issues/1216",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
601466785 | Adapts python plotting functionality to adhere to new tecplot output files
Hi Geir,
this PR adapts the Python code to your new Tecplot files and fixes Peter's issue.
Cheers,
Rafael
Very good. Peter will be testing it :-)
Thanks a lot for contributing.
| gharchive/pull-request | 2020-04-16T20:58:48 | 2025-04-01T06:38:46.208149 | {
"authors": [
"geirev",
"rafaeljmoraes"
],
"repo": "geirev/EnKF_seir",
"url": "https://github.com/geirev/EnKF_seir/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1877814327 | misc: Add LULESH GPU tests
This adds the LULESH tests, which currently run successfully, though they still build the binary from the gem5-resources directory rather than using a pre-built binary
Change-Id: I91c511fe92b7f9d11dfb027f435573f826bc6714
Thanks for helping to move the GPU weekly tests out of the bash script! This looks fine for GCN3. One thing I would like though is to test the Vega ISA as well. I realize that will take more resources. @mattsinc and I have been planning to discuss in detail if GCN3 can be deprecated. I would also like to see Vega as part of the ALL build, which would further reduce compilation for testing, I think (not sure if each yaml file is rebuilding).
I have many more detailed thoughts on moving to Vega but it is probably better for a discussions thread rather than this PR.
Yes, this is something we can look into! I think if we do that, we can put it into a separate PR so we can update all the GPU tests to use Vega at the same time after we ensure it works locally first, unless you have other thoughts on that.
thanks! Please let @abmerop confirm before you merge in though.
done
FYI, when this is good to go (soon, I promise), I'll do a merge commit with this one so it'll be added to develop as one commit.
Just giving a heads up before people start to notice all the touch-up commits I've been doing.
Hi @Harshil2107 , I was actually just looking at this and was wondering how this command was even working. It seems it's not. I will post a comment on what I suspect is the issue.
| gharchive/pull-request | 2023-09-01T17:45:11 | 2025-04-01T06:38:46.230868 | {
"authors": [
"BobbyRBruce",
"abmerop",
"mkjost0"
],
"repo": "gem5/gem5",
"url": "https://github.com/gem5/gem5/pull/256",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2428664642 | 02 09 Sampling
[x] Introduction
[x] SimPoint/LoopPoint
[x] SimPoint analysis
[x] SimPoint checkpointing
[x] Simpoint restoring
[x] ElFie
[x] ElFie example
[x] SMARTS
[x] SMARTS example
[x] Summary
What hasn't been done
Third-party testing of the materials and a run-through of the slides
Slides 2 and 50 are awaiting visualizations
@powerjg I removed the config files, removed the SimPoint3.2 source files, removed the checkpoints, and added -img to the image directory.
| gharchive/pull-request | 2024-07-24T23:57:48 | 2025-04-01T06:38:46.234054 | {
"authors": [
"studyztp"
],
"repo": "gem5bootcamp/2024",
"url": "https://github.com/gem5bootcamp/2024/pull/45",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2461110821 | Inconsistent documentation InformationService
Within the documentation on the main page it is written:
Information Service (information-service)
....
Limitations:
.....
for record status it will always send HTTP code **200** (OK)
But information/application.yaml says
nested-responses:
  204:
    response:
      statusCode: **204**
Running the actual image deployment-1.0.8 responds with 204 and not 200
Hi @CEiderEVIDENT,
the documentation is updated with release version 1.0.12 - only the ReadMe wasn't updated accordingly after the change from 200 to 204 in a previous API release.
--> https://github.com/gematik/epa-deployment/blob/main/README.md#information-service-information-service
Thanks for this finding and best regards,
| gharchive/issue | 2024-08-12T14:07:39 | 2025-04-01T06:38:46.237006 | {
"authors": [
"CEiderEVIDENT",
"fnoGematik"
],
"repo": "gematik/epa-deployment",
"url": "https://github.com/gematik/epa-deployment/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
169672545 | Annotated images in AmiGO
We'd like to have the ability to add annotated images that are displayable via the browser.
Discussion with @cmungall sounds like the ability to do something like a "gulp golr-load-image-annotations" where a tab delimited text file (format TBD) can be loaded into GOLR similar to how GAFs are done is the direction we'd like to head in.
Parts that I believe would need to be worked on include gulpfile changes to add the loading piece, actual code to load in GOLR, changes/addition to annotation pages in AmiGO to add the image type data, methods to display images (maybe carousel, maybe flat page with tables of images), probably other bits.
This is in conjunction with the image annotation work that @preecej is working on.
Also, time scale is pretty lax. Just creating the issue so that work/thoughts can get started and we can brainstorm on how this will work.
As a first pass we can just overload golr bioentity_association documents (the bioentity would be an image rather than gene or germplasm). We can then reuse all the existing nice faceting mechanisms (e.g. when on the leaf page I can narrow my search dynamically to monocots).
I suggest using the isa-partof closure, but we need more requirements analysis here. If I am interested in leaves generally I will want to see images of subtypes of leaves. I will probably also be interested in images of parts of leaves too, but I'd get a bit uneasy if I jump too many levels of granularity.
Realistically, the code changes would be minimal if using 3rd-party storage. Most of these points could be covered with additional handler code. Adding a page base type is similarly easy (although maybe that should be abstracted out a bit more).
Also: https://github.com/geneontology/amigo/issues/341
In reference to work being done on Planteome.
(To mark the specific use case for planteome, I'd add it to the planteome amigo tracker and have it blocked with this as the upstream issue.)
I think the first step here is to see what kind of data would be loaded.
There are two possibilities here: an overloaded GAF-like (or whatever) thing that would take a custom loader or to overlay the image data on top of already-loaded annotation data (what the current demo does).
I suspect that once we have some fairly concrete data running around, or at format/approach, the code to get to basic usability would be pretty fast.
We may want to experiment with overlays later, but this is simpler. The subject/bioentity is an image denoted by a URL that resolves to a jpg or similar (let's say a thumbnail). Just using the default amigo view, this would behave as any other bioentity. We'd then want to enhance the display a bit; I don't have strong opinions here: just showing the thumbnail, carousel, ...?
@cmungall and I were just talking about this a second ago. So, I think what we will have is some image URL that will display some term(s) so that people can see it and get a non-textual example of the term. I think GAF may be an acceptable input format, we just have to have a new object type of image. Maybe some client code that if the object type is an image to do something like make a thumbnail that links out to the source URL. In other words, instead of At5g20800 in column 2 or 3 of the GAF, have the URL. In column 12, have "image" as the object type, and then figure out how to make it look good in the browser.
The "carousel" has come up a couple of times here, and I don't quite understand--if it is a single object, what are the multiple things carouselling?
Otherwise, if we are literally treating these things as bioentities, then the code to detect an image URL ID would be very easy. Not so easy would be to have an ID component like that--it would likely throw a spanner into a lot of things. Preferable might be a standard bioentity document with an additional field that could act as the data overlay in a second loading step; population of the field would trigger the main effects.
This issue came up again in our ontology call this morning. We are discussing some very complex Plant Ontology terms related to inflorescence axes and these definitions are accompanied with nice line-art diagrams of different types of inflorescences. The textual definitions are complicated, and nuanced, but the images make it much clearer. So being able to imbed an image would go a long way in clarifying the meaning of these terms. This wouldn't require multiple images/carousel, rather just a single labeled image (from the NY crew).
Image example:
inflorescence_img_example.pdf
I think the addition is simple: add a new field, something like "auxiliary_external_reference_image" that is a remotely accessible PNG, etc. When the field is populated, AmiGO embeds whatever is at that end into the page. Simple.
Now, the hard part is to load that info into the store in the first place, which means we must descend into modifying the loader and loading a new file type (because GAF does not need a new field). That, or doing a second run over the index to populate it (like we did for the geo-spatial setup). If you want something out the door soon, the latter would be very very easy, especially if you don't have that many images.
When you say "new field" that would be a new field in what, exactly?
I think for a temporary solution, the example you outlined would work nicely. To get a sense of scale, I'd say we will likely start with just a single image for each term in the PO (actually it would be fewer, as we would not have images for some categorical terms) I could ask Dennis and his crew to gather the images that he would like to use, and deposit them somewhere on our repo (or elsewhere) with the PO:id in which they should be annotated to.
What format would be best? We can just load all the images somewhere, and provide a delimited file with an ID and a URL to the image?
A new field in the Solr schema, as defined by the amigo metadata files.
If one were to move in this direction, and I won't have time really until after the GO meeting, I would tack towards getting all of the images into S3--I think we're talking about a few thousand here? Well, thinking about it, if you have a webserver (probably apache) up for AmiGO anyways, you could always serve them out of the AmiGO static directory or apache as well.
As an experimental load format, let's say a JSON list along the lines of:
[
  {
    "index": "PO:0022008",
    "overlay": {
      "auxiliary_external_reference_image": "http://my.nifty/s3/url"
    }
  }
]
We can reuse this for other overlays in the future; I think that we can probably just reuse most of what was done for the geo-spatial here.
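If the overlay were applied in a second pass over an already-loaded index, Solr's atomic-update mechanism would be one way to do it. The sketch below is only a hedged illustration of that idea in Python: the helper name, the target core URL in the comment, and the assumption that the Solr document id equals the ontology term id are all hypothetical, not part of the proposal.

```python
import json

def overlays_to_atomic_updates(overlays):
    """Turn the proposed overlay list into Solr atomic-update documents.

    Assumes (hypothetically) that the Solr document id for an ontology
    class is the term id itself, e.g. "PO:0022008".
    """
    docs = []
    for entry in overlays:
        doc = {"id": entry["index"]}
        for field, value in entry["overlay"].items():
            # Solr's atomic "set" operation writes just this field
            # without disturbing the rest of the stored document.
            doc[field] = {"set": value}
        docs.append(doc)
    return docs

overlays = [
    {
        "index": "PO:0022008",
        "overlay": {
            "auxiliary_external_reference_image": "http://my.nifty/s3/url"
        },
    }
]

payload = json.dumps(overlays_to_atomic_updates(overlays))
# A real second-pass loader would POST `payload` to the core's /update
# handler (e.g. http://localhost:8983/solr/golr/update?commit=true).
print(payload)
```

Note that atomic updates only work when the schema stores the existing fields, so this is a sketch under that assumption rather than a drop-in loader.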
Excellent. It will take some time to get images collected, and labeled correctly, so I'll see what can be done, then we can give it a whirl sometime in the semi-near future.
You're going to be in Corvallis this summer for the GO meeting, we can connect at that point.
Thanks for the explanation.
Yes, we can touch bases at Corvallis.
If there is a non-ontology label, or multiple images for a single term, a different overlay (or even strategy) would be necessary.
Yeah, and I'm sure that is in the long-term plan for the image annotation project, but if we could get just simple descriptive images imbedded in the term page on the browser, it would clear up a lot of the complex plant anatomy terms rapidly.
Well, yes, let's call this a one-off.
But for image annotations, we should revisit the work that has gone on with geospatial:
https://github.com/geneontology/amigo/issues/341
Reiterating @austinmeier's comments about displaying line diagrams etc. to explain the anatomy terms on term detail pages. I recommend moving this up in priority.
Or, for that matter, thinking about #421, if we had a field that was essentially an overlay catch-all, a multivalued field (e.g. auxiliary_overlays) that could be loaded separately and incrementally, it could take a number of items to be rendered as needed. Each value could be like:
{
  "overlay": "auxiliary_external_reference_image",
  "index": "PO:0022008",
  "type": "image",
  "content": "http://snazzy.uri/foo/png"
}
or
{
  "overlay": "patter_description",
  "index": "PO:0022008",
  "type": "markdown",
  "content": "*** bleh"
}
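A catch-all field like that pushes the per-item dispatch into the client. Purely as a hedged sketch (the function name and the HTML shapes are made up for illustration, and a real client would run markdown through a proper converter), the renderer could branch on the declared type:

```python
def render_overlay(entry):
    """Toy renderer for one auxiliary_overlays value, branching on "type"."""
    if entry["type"] == "image":
        # Embed the remote image directly.
        return '<img src="{}"/>'.format(entry["content"])
    if entry["type"] == "markdown":
        # Placeholder: a real client would convert the markdown to HTML;
        # here the content is just wrapped so the dispatch shape is visible.
        return "<div>{}</div>".format(entry["content"])
    raise ValueError("unknown overlay type: " + entry["type"])

image_overlay = {
    "overlay": "auxiliary_external_reference_image",
    "index": "PO:0022008",
    "type": "image",
    "content": "http://snazzy.uri/foo/png",
}
print(render_overlay(image_overlay))
```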
| gharchive/issue | 2016-08-05T18:51:25 | 2025-04-01T06:38:46.287711 | {
"authors": [
"austinmeier",
"cmungall",
"elserj",
"jaiswalp",
"kltm"
],
"repo": "geneontology/amigo",
"url": "https://github.com/geneontology/amigo/issues/368",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
113725824 | Odd default namespace 2015-10-24 release
Looking in the header of gene_ontology_ext.obo
default-namespace: file:/Users/tanyaberardini/go_svn/ontology/extensions/ro_pending.obo
where it's usually
default-namespace: gene_ontology
Hi Jim,
As you've probably seen by now, Doug reported the same issue on the go-consortium mailing list, and it was fixed.
For reference, https://github.com/geneontology/go-ontology/issues/12141
Thanks for reporting.
Paola
| gharchive/issue | 2015-10-28T01:51:13 | 2025-04-01T06:38:46.292399 | {
"authors": [
"jimhu-tamu",
"paolaroncaglia"
],
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/12127",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
175071254 | NTR: homotypic vesicle fusion and secretory granule maturation
Two new terms needed for annotation of Syt4 (Synaptotagmin-IV) in PMID:16618809:
homotypic vesicle fusion ; GO:NEW1
The fusion of two vesicle membranes to form a single vesicle.
GOC:bf, GOC:PARL
PMID:16618809
(Given that you’d have to have 2x ‘results_in_fusion_of: vesicle membrane’, I’m not sure I can currently create a decent logical definition for this).
secretory granule maturation ; GO:NEW2
Steps required to transform an immature secretory vesicle into a mature secretory vesicle. Typically proceeds through homotypic membrane fusion and membrane remodelling.
is_a: secretory granule organization ; GO:0033363
developmental maturation (GO:0021700) that results_in_organization_of: secretory granule ; GO:0030141
GOC:bf, GOC:PARL
PMID:16618809
is_a: child: dense core granule maturation ; GO:1990502
Note a similar logical definition could be added to dense core granule maturation ; GO:1990502
developmental maturation (GO:0021700) that results_in_organization_of: dense core granule ; GO:0031045
Thanks.
Including @dosumis because I'm sure he must have thought about this with respect to synaptic vesicles.
My hesitation with any of the 'homotypic' terms, including the cell-cell adhesion terms, is at what level we define the two entities as being the same. In the case of cell adhesion, if two neurons bind one another, but they are distinct subtypes of neurons, is this homotypic? I have the same concerns with this term. In the ontology there is a rich substructure under membrane-bounded vesicle. At what point do we make the cut-off for what is homotypic and what is heterotypic?
For the maturation term, we have 'synaptic vesicle maturation' and 'dense core granule maturation' as is_a children of 'vesicle organization'. We also have 'secretory granule organization' as a child. It seems like there are some patterns here of which we could take advantage. 'Synaptic vesicle maturation' is defined as 'developmental maturation' results_in_organization_of SOME 'synaptic vesicle' and is asserted as a subclass of 'vesicle organization'. There is also a term for 'synaptic vesicle coating' that has no relation to the maturation term. Any insights David?
I'd be reluctant to support a homotypic/heterotypic distinction if we can avoid it - for the reasons that @ukemi outlined above.
CC @Pimmelorus - General question about dense core vesicles here. Any comments? Could you see a need for a DCV maturation term for the Synapse work?
I see the problem with the 'homotypic' wording. How about making it more explicit in the GO term? It would go alongside the more specific instances of vesicle fusion:
E.g
vesicle fusion ; GO:0006906
--[isa]vesicle fusion with vesicle ; GO:NEW
--[isa]vesicle fusion to plasma membrane ; GO:0099500
--[isa]vesicle fusion with vacuole ; GO:0051469
--[isa]vesicle fusion with endoplasmic reticulum ; GO:0048279
That looks fine. I thought you were referring to vesicles of the same type.
Me too. This looks ok.
Added:
[Term]
+id: GO:0061782
+name: vesicle fusion with vesicle
+namespace: biological_process
+def: "Fusion of the membrane of a transport vesicle with a target membrane on another vesicle." [GOC:bf, GOC:PARL, PMID:16618809]
+synonym: "vesicle to vesicle fusion" EXACT [GOC:dph]
+synonym: "vesicle-vesicle fusion" EXACT [GOC:dph]
+is_a: GO:0006906 ! vesicle fusion
+created_by: dph
+creation_date: 2016-09-06T13:29:49Z
+
Thank you! The authors were talking about two vesicles of the same type fusing, but vesicle-vesicle fusion is enough for me!
Added logical def to dense core granule term and:
[Term]
+id: GO:0061792
+name: secretory granule maturation
+namespace: biological_process
+def: "Steps required to transform an immature secretory vesicle into a mature secretory vesicle. Typically proceeds through homotypic membrane fusion and membrane remodelling." [GOC:bf, GOC:dph, GOC:PARL, PMID:16618809]
+is_a: GO:0033363 ! secretory granule organization
+intersection_of: GO:0021700 ! developmental maturation
+intersection_of: results_in_maturation_of GO:0030141 ! secretory granule
+created_by: dph
+creation_date: 2016-09-09T13:24:34Z
| gharchive/issue | 2016-09-05T13:25:28 | 2025-04-01T06:38:46.305175 | {
"authors": [
"dosumis",
"rebeccafoulger",
"ukemi"
],
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/12635",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
188204940 | Modified proteins proposal
Relevant GH tickets:
NTR: ubiquitinated protein binding
Parentage of glycoprotein binding
Question: adding new children of GO:0035064 methylated histone binding?
The proposal was not well received by annotators at the GO meeting. The annotators thought that these terms were too important biologically for them not to be in the ontology. We need to come up with alternatives.
Continue to add these terms by hand. Unsustainable?
Figure out a way to add these terms automatically. Use PRO? ChEBI? PSI-MOD? Will this lead to annotation inconsistency? Probably.
Only add a few very high-level terms. What is the specificity cut off and what happens when annotators make a case that the very specific term that they want is important.
Have 'do-not-annotate' terms that are generated automatically as in #1 below and categorize gene products by the entity that is captured in binding annotations.
We discussed three possible semantic interpretations of modified protein binding terms:
https://docs.google.com/document/d/1MAnnOfs-e2LY9MnqdCZscalbxbNUDSJ9pbMZ-f2WS9U/edit#heading=h.vzsyf1ss0k8h
Binds a protein that happens to have a modification, but not necessarily the modification site
Binds to the modified part of a protein <- Seems to be most broadly supported.
Binding to some specific protein partner is dependent on the modification state of the partner even if binding is not to the modified bit. <- Some support. DOS objection: this seems like annotating a property of the bound protein. Also specific to one binding interaction (whereas 2 is likely to be a general function)
Number 2 in this list - binds to the modified part of a protein - had the most support. Example: the SH2 domain confers binding to phospho-tyrosine in the context of a specific peptide motif: https://en.wikipedia.org/wiki/SH2_domain
For terms like this, we could add a comment that it should only be used where there is high confidence that binding is to the modification + protein (simple dependency on phosphorylation of target is not sufficient but one that localizes the domain by deletion/mutagenesis analysis of the protein is). This still leaves the question of how detailed we should get in specifying the target (see histone binding).
Darren will join us on today's meeting to present his ideas.
This is what I mentioned to a few attendees at the USC GOC meeting. I'll summarize first, then give the reasoning by way of example.
Consider that "ubiquitinated protein binding" could be logically defined as something like ("protein binding to some ubiquitinated protein"). So ("ubiquitinated protein binding to some protein") is the same as ("protein binding to some ubiquitinated protein"). Given the confusion that currently holds when using these terms, this proposal provides the benefit that annotation is simplified. No information is lost, since terms like "ubiquitinated protein binding" are useful grouping terms that can be inferred
based on the binding partner indicated.
I would suggest that "protein binding" be used as the annotate-to term. Child terms of interest (such as "glycoprotein binding") stay, and new ones minted if desirable, but be marked as do-not-annotate.
Example:
Consider the case of USP15, a protein that binds to ubiquitinated histone H2B. This protein is annotated to GO:0061649 ("ubiquitinated histone binding"). For the moment, pretend that GO:0061649 can be a child of GO:0042393 ("histone binding") and the proposed term ("ubiquitinated protein binding"). The full hierarchy (after reasoning) would be:
protein binding
|_ ubiquitinated protein binding
|  |_ ubiquitinated histone binding
|_ histone binding
   |_ ubiquitinated histone binding
Consider a possible history for the annotation of USP15, from two different labs. Lab one and lab two each publish that USP15 binds protein, but don't know which protein. You'd get:
Lab1: USP15 protein binding some "protein"
Lab2: USP15 protein binding some "protein"
Lab1 then finds that USP15 binds to a ubiquitinated protein, while lab2 discovers that it binds to some histone. Now the annotation becomes:
Lab1: USP15 ubiquitinated protein binding some "ubiquitinated protein"
Lab2: USP15 histone binding some "histone"
Lab1 now finds that the specific type of ubiquitinated protein bound is histone, and Lab2 finds that the histone is ubiquitinated. Revised annotation:
Lab1: USP15 ubiquitinated histone binding some "ubiquitinated histone"
Lab2: USP15 ubiquitinated histone binding some "ubiquitinated histone"
Note that each new bit of information resulted in, for each lab, a change to both the GO term & the target (column 17). That means (for each lab) a total of 4 changes beyond the initial annotation.
A better, easier, more scalable solution would be to annotate ONLY to the parent protein binding term. The other GO terms stay but only as grouping terms. Specificity is given by the target. Thus, the history (for Lab1 only) would become:
A: USP15 protein binding some "protein"
B: USP15 protein binding some "ubiquitinated protein"
C: USP15 protein binding some "ubiquitinated histone"
Only 2 changes were necessary. Pros of this approach include fewer changes, clear guidance for GO term to use, and it's sustainable to any level of target specificity ("ubiquitinated histone" becomes "ubiquitinated histone H2B" becomes "histone H2B ubiquitinated at position 120" (as opposed to "histone H2B ubiquitinated at position 122"). I can think of only one con, which is that term enrichment becomes useless for specific types of protein binding. Of course, term enrichment could still be achieved, but there would be some overhead in calculating the correct sub-function.
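The grouping-term inference that makes this workable could be mechanical. The following is only a hedged toy sketch in Python: the parent map is a hand-written stand-in for the real reasoned hierarchy, not actual GO or PRO data, and the function name is invented for illustration.

```python
# Hand-written stand-in for the reasoned target-type hierarchy above.
PARENTS = {
    "ubiquitinated histone": ["ubiquitinated protein", "histone"],
    "ubiquitinated protein": ["protein"],
    "histone": ["protein"],
    "protein": [],
}

def grouping_terms(target):
    """All '<X> binding' grouping terms inferable from the target alone."""
    seen, stack = set(), [target]
    while stack:
        t = stack.pop()
        if t in seen:
            continue
        seen.add(t)
        stack.extend(PARENTS[t])
    return {t + " binding" for t in seen}

# Annotating USP15 only to 'protein binding' with target "ubiquitinated
# histone" still yields every grouping term in the hierarchy:
print(sorted(grouping_terms("ubiquitinated histone")))
# → ['histone binding', 'protein binding', 'ubiquitinated histone binding',
#    'ubiquitinated protein binding']
```

This is the same closure computation a term-enrichment tool would need, which is where the overhead mentioned above would live.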
On the call http://wiki.geneontology.org/index.php/Ontology_meeting_2016-11-10 @nataled clarified that in the above
A: USP15 protein binding to some "protein"
B: USP15 protein binding to some "ubiquitinated protein"
C: USP15 protein binding to some "ubiquitinated histone"
A, B and C are annotations made based on different evidence (different papers) through time, not one annotation evolving through time (though this would be possible too, provided the original paper contained the necessary information)
Notes from call (http://wiki.geneontology.org/index.php/Ontology_meeting_2016-11-10)
DOS:
Darren's suggested pattern in OWL: GP (enables) 'protein binding' that has_input some PRO:'ubiquitinated protein'.
This corresponds to semantic interpretation 1: binds to some protein that happens to be ubiquitinated. Binding doesn't have to be dependent on modification. Are we sure we want this?
If we have the terms with logical defs in the ontology, then annotations with extensions will also work. Might be worthwhile if we want to allow more specific pro terms to be used in extensions.
Action item: We need more examples. Ask Sylvan and Pascale to come to call to present examples.
The proposal was not well received by annotators at the GO meeting.
I'd like it to be recorded that this is not true for PomBase curators.
I really like Darren's proposal. This is what we would do anyway. The reason this solution was opposed (by some) was that at the meeting it was not clear to all that PRO could add generic "ubiquitinated protein" terms.
Of course, tools would also need to catch up, but at present, I would not rely on GO to provide comprehensive lists of modified protein binding partners. The annotation would need to catch up in tandem to be realistically useful. So in this situation we have a good opportunity to make the annotations in a more sustainable way without compromising what users can already do.
For example
GO:0051219 phosphoprotein binding
currently has
1 to 25 of 148 for 99 proteins
But I agree, @dosumis, we also need to be clear in these cases whether the binding is modification-dependent (when we make these annotations using PRO IDs we translate has substrate into "active form" for our users).
gene A protein binding "active form" PR:000045540 IPI gene B
http://research.bioinformatics.udel.edu/pro/entry/PR%3A000045540/
(this isn't a real example, but I am sure we have done this for binding)
Hello,
Here's a version of the proposal, reviewed by the GO-editors and by @sylvainpoux
Background:
A number of proteins are known to specifically bind a protein that is modified, while not binding the unmodified form. The binding may occur at the site of the modification, but this is not an absolute requirement for a modification-specific binding; in some cases the binding occurs at another position on the target protein but is nevertheless dependent on the PTM.
Moreover (and very important for annotation), the information is not always complete: in some cases, the position of the binding relative to the modification is known, while in other cases, only the nature of the modification is known but not the binding site positions.
A number of domains are also known to specifically recognize and bind modified proteins, which is extremely important for recruitment of proteins and signaling (for example chromatin or DNA repair). Note that we do not always know whether these domains/proteins bind the modified part, but we usually know that they specifically recognize modified proteins.
The existence of these GO terms is extremely important for users, because it can really guide research. Moreover, resources like InterPro can generate electronic annotation for domains that bind modified proteins. Despite the usefulness of these terms, their ambiguous definitions have led to inconsistent usage. Some terms have been created in GO to reflect modified protein binding: GO:0051219 phosphoprotein binding, GO:0061649 ubiquitinated histone binding etc. The definition does not clearly state that the binding must be modification-dependent, so it can be interpreted as 'protein x modified protein binding' to 'a protein that has the potential of bearing the modification'. This usage of the term is clearly incorrect.
On the other hand, other terms are missing from the ontology and curators use existing terms incorrectly to capture the information. For example, GO:0043130 ubiquitin binding could be used both for proteins that bind ubiquitinated proteins and free ubiquitin (both cases exist and are biologically important).
Proposal:
1. Create a GO term
Modification-dependent protein binding:
Definition: Binding specifically to a protein that bears a post-translation modification.
Note: Does not bind the protein when not modified.
Annotation guidelines:
- The binding must be compared with and without the PTM in the interaction partner. If it is not shown that the binding is abolished in absence of the modification, this term cannot be used.
2. Create a limited number of child terms for most common PTMs (fewer than 10)
Phosphorylated protein binding (would replace phosphoprotein binding)
Glycosylated protein binding (would replace glycoprotein binding)
Ubiquitinated protein binding (would be a child of ubiquitin binding and we would specify that ubiquitin binding is both for binding free ubiquitin and ubiquitinated proteins)
Methylated protein binding
Acylated protein binding
Acetylated protein binding
3. Other PTMs:
To avoid unsustainable multiplication of terms, we can use 'Modified protein binding' with an ontology term in the extension (ontology to use remains to be defined). Until we choose the ontology, the parent term “modification-dependent protein binding” should be used for annotation.
Thanks, Pascale
Note: Send annotations to everyone to modify
here is list from PRO:
acetylation
ADP-ribosylation
amidation
bromination
cleavage
glycosylation
GPI-anchor
hydroxylation
methylation
phosphorylation
prenylation
sumoylation
ubiquitination
other
don't know how often bromination comes up, and the "cleavage"
Rename:
GO:0050815 phosphoserine binding -> phosphoserine residue binding
GO:0050816 phosphothreonine binding -> phosphothreonine residue binding
GO:0001784 phosphotyrosine binding -> phosphotyrosine residue binding
On the May 9, 2017 annotation call, we agreed that the meaning of these terms was consistent with these changes.
Updated the definition to "Binding specifically to a protein dependent on the presence of a post-translation modification in the target protein. "
[Term]
+id: GO:0140030
+name: modification-dependent protein binding
+namespace: molecular_function
+def: "Binding specifically to a protein dependent on the presence of a post-translation modification in the target protein." [PMID:26060076]
+comment: This term should only be used when the binding is shown to required the post-translational modification: the interaction needs to be tested with and without the PTM. The binding does not need to be at the site of the modification. It may be that the PTM causes a conformation change that allows binding of the protein to another region; this type of modification-dependent protein binding is valid for annotation to this term.
+synonym: "modified protein binding" RELATED []
+is_a: GO:0005515 ! protein binding
+created_by: pg
+creation_date: 2017-05-17T11:50:41Z
+
Fixed ubiquitin protein binding:
id: GO:0031593
-name: polyubiquitin binding
+name: polyubiquitin modification-dependent protein binding
namespace: molecular_function
-def: "Interacting selectively and non-covalently with a polymer of ubiqutin." [GOC:mah]
-synonym: "multiubiquitin binding" RELATED []
-is_a: GO:0043130 ! ubiquitin binding
+def: "Interacting selectively and non-covalently with a protein upon poly-ubiquitination of the target protein." [GOC:pg]
+is_a: GO:0140030 ! modification-dependent protein binding
[Term]
id: GO:0036435
-name: K48-linked polyubiquitin binding
+name: K48-polyubiquitin modification-dependent protein binding
namespace: molecular_function
-def: "Interacting selectively and non-covalently and non-covalently with a polymer of ubiquitin formed by linkages between lysine residues at position 48 of the ubiquitin monomers." [GOC:al, PMID:20739285]
-is_a: GO:0031593 ! polyubiquitin binding
+def: "Interacting selectively and non-covalently with a protein upon poly-ubiquitination formed by linkages between lysine residues at position 48 in the target protein." [GOC:al, PMID:20739285]
+is_a: GO:0031593 ! polyubiquitin modification-dependent protein binding
created_by: rfoulger
creation_date: 2013-09-18T14:51:06Z
[Term]
id: GO:0070530
-name: K63-linked polyubiquitin binding
+name: K63-polyubiquitin modification-dependent protein binding
namespace: molecular_function
-def: "Interacting selectively and non-covalently and non-covalently with a polymer of ubiquitin formed by linkages between lysine residues at position 63 of the ubiquitin monomers." [GOC:mah, PMID:15556404, PMID:17525341]
-is_a: GO:0031593 ! polyubiquitin binding
+def: "Interacting selectively and non-covalently with a protein upon poly-ubiquitination formed by linkages between lysine residues at position 63 in the target protein." [GOC:mah, PMID:15556404, PMID:17525341]
+is_a: GO:0031593 ! polyubiquitin modification-dependent protein binding
[Term]
id: GO:0071795
-name: K11-linked polyubiquitin binding
+name: K11-polyubiquitin modification-dependent protein binding
namespace: molecular_function
-def: "Interacting selectively and non-covalently and non-covalently with a polymer of ubiquitin formed by linkages between lysine residues at position 11 of the ubiquitin monomers." [GOC:sp, PMID:18775313]
-is_a: GO:0031593 ! polyubiquitin binding
+def: "Interacting selectively and non-covalently with a protein upon poly-ubiquitination formed by linkages between lysine residues at position 11 in the target protein." [GOC:sp, PMID:18775313]
+is_a: GO:0031593 ! polyubiquitin modification-dependent protein binding
created_by: midori
creation_date: 2010-09-02T02:11:41Z
[Term]
id: GO:0071796
-name: K6-linked polyubiquitin binding
+name: K6-polyubiquitin modification-dependent protein binding
namespace: molecular_function
-def: "Interacting selectively and non-covalently and non-covalently with a polymer of ubiquitin formed by linkages between lysine residues at position 6 of the ubiquitin monomers." [GOC:sp, PMID:17525341, PMID:20351172]
-is_a: GO:0031593 ! polyubiquitin binding
+def: "Interacting selectively and non-covalently with a protein upon poly-ubiquitination formed by linkages between lysine residues at position 6 in the target protein." [GOC:sp, PMID:17525341, PMID:20351172]
+is_a: GO:0031593 ! polyubiquitin modification-dependent protein binding
created_by: midori
creation_date: 2010-09-02T02:13:07Z
@@ -587346,7 +587345,7 @@ name: linear polyubiquitin binding
namespace: molecular_function
def: "Interacting selectively and non-covalently with a linear polymer of ubiquitin. Linear ubiquitin polymers are formed by linking the amino-terminal methionine (M1) of one ubiquitin molecule to the carboxy-terminal glycine (G76) of the next." [GOC:bf, GOC:PARL, PMID:23453807]
synonym: "M1-linked ubiquitin chain binding" EXACT [PMID:23453807]
-is_a: GO:0031593 ! polyubiquitin binding
+is_a: GO:0043130 ! ubiquitin binding
created_by: bf
creation_date: 2014-08-06T11:10:26Z
Was the duplication of "and non-covalently" intentional?
probably not :) thanks for pointing out
In fact it was in the old definition ! (as you can see by the '-' sign at the beginning of the line.
I lose. My apologies.
As discussed in
https://github.com/geneontology/go-annotation/issues/1586
I will create
'glycosylated region binding' (similar to 'proline-rich region binding'),
Proposed def: Interacting selectively and non-covalently with a glycosylated region of a protein.
Proposed parents:
protein binding + carbohydrate-derivative protein binding.
Pascale
New term:
+id: GO:0140081
+name: glycosylated region protein binding
+namespace: molecular_function
+def: "Interacting selectively and non-covalently with a glycosylated region of a protein." [GOC:pg]
+is_a: GO:0005515 ! protein binding
+is_a: GO:0097367 ! carbohydrate derivative binding
+created_by: pg
+creation_date: 2017-07-25T10:58:31Z
| gharchive/issue | 2016-11-09T09:50:20 | 2025-04-01T06:38:46.349973 | {
"authors": [
"ValWood",
"dosumis",
"hdrabkin",
"mcourtot",
"nataled",
"pgaudet",
"ukemi"
],
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/12787",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
249106560 | NTR:sexual macrocyst formation
Hello:
I would like to request a new GO term to describe the sexual reproductive phase of the Dictyostelium life cycle. This term is most commonly associated with D. discoideum, but applies to at least 3 additional species of Dictyostelids: D. giganteum, D. purpureum, and P. palladium. Please let me know if you have additional questions.
Thanks,
Bob Dodson
Name: sexual macrocyst formation
Ontology: process
Synonyms: macrocyst formation, sexual fusion
Definition: The fusion of haploid amoebae cells with matching mating types to form a larger cell, which ingests additional amoebae and forms a cellulose wall. The resulting macrocyst undergoes recombination and meiosis followed by release of haploid amoebae. An example of this process can be found in Dictyostelium discoideum.
Child of: GO:0019953 sexual reproduction
References:
PMID:16592095 Mating Types and Macrocyst Formation in Dictyostelium discoideum
PMID: 20089169 Phylogeography and sexual macrocyst formation in the social amoeba Dictyostelium giganteum.
Hi Bob !!
Here's the new term:
[Term]
+id: GO:0140084
+name: sexual macrocyst formation
+namespace: biological_process
+def: "The fusion of haploid amoebae cells with matching mating types to form a larger cell, which ingests additional amoebae and forms a cellulose wall. The resulting macrocyst undergoes recombination and meiosis followed by release of haploid amoebae. An example of this process can be found in Dictyostelium discoideum." [PMID:16592095, PMID:20089169]
+synonym: "macrocyst formation" RELATED []
+synonym: "sexual fusion" RELATED []
+is_a: GO:0019953 ! sexual reproduction
+created_by: pg
+creation_date: 2017-08-14T20:11:03Z
Thanks, Pascale
Thanks Pascale!
| gharchive/issue | 2017-08-09T17:38:06 | 2025-04-01T06:38:46.356403 | {
"authors": [
"pgaudet",
"rjdodson"
],
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/14040",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
97147306 | astrocytes are not immune cells
GO:0045321 : immune cell activation (374)
GO:0048143 : astrocyte activation (5)
Astrocytes are glial cells (only exist in the brain and
support neurons; they are not found in the blood), so
they are currently mis-classified as immune cells.
Reported by: *anonymous
Original Ticket: "geneontology/ontology-requests/1627":https://sourceforge.net/p/geneontology/ontology-requests/1627
| gharchive/issue | 2004-03-29T22:36:33 | 2025-04-01T06:38:46.358898 | {
"authors": [
"gocentral",
"linuxlovell"
],
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/1624",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
420709791 | ATP synthase parent
proton-transporting ATP synthase activity, rotational mechanism (GO:0046933)
has the parents
cation-transporting ATPase activity
is_a ATPase activity, coupled to transmembrane movement of ions, rotational mechanism
but this is ATP synthase not ATPase
There are 3 types of proton translocating ATPases with a rotational mechanism: P-type, V-type, and F-type. I think the P-type and V-type ATPases function only as ATPases. But the F-type ATPases can function either as an ATP synthase (where protons crossing a membrane drive synthesis of ATP from ADP + Pi) or as an ATPase (where ATP -> ADP + Pi provides the energy to move protons back across the membrane). The location of GO:0046933 in GO makes sense to me in terms of the biochemistry because all the protein complexes I know of that have proton-transporting ATP synthase activity, rotational mechanism are also proton-translocating ATPases. But that may not fit with the logic of the ontology structure.
Links:
https://en.wikipedia.org/wiki/Proton_ATPase
Stewart et al. (2014) Current Opinion in Structural Biology
https://doi.org/10.1016/j.sbi.2013.11.013
but to annotate the non ATP synthase direction you would use
proton-transporting ATPase activity, rotational mechanism?
(some of the exact synonyms are conflicting too)
Quick note for the assigned curator. The = sign in the definitions means that these reactions are all defined as bidirectional despite their names. At first glance this leads me to agree with @dsiegele.
But we have both reactions in GO?
I think it is really important for representing the biology that we retain this distinction.
ATP synthase creates the energy currency for the cell.
There is an old discussion about having both terms somewhere but I suspect it is so old it is on Source Forge (I can't find it on GitHub), but I seem to remember that the reverse mitochondrial reaction isn't physiologically relevant. So surely we should represent the reaction in the correct direction with appropriate parentage?
Otherwise it's really difficult to model the biology.
We represent the v-ATPase as an ATPase and the mitochondial F0-F1 as an ATP synthase?
EC and Rhea treat all chemical reactions as reversible in principle, if I remember right. @dsiegele ? This one is probably a valid example - given high enough concentrations of what we normally think of as reaction products, the system could be driven to generate what we think of as reaction substrates. Doesn't GO follow this usage, that by default the description of a molecular function is agnostic as to direction? So if a direction emerges, that can only happen at the process level? Unless there's some way to impose a "physiological direction" attribute on the function from outside of GO? @ukemi ?
Hmm, I see, so how does that work with kinase/phosphatase etc?
It takes a hell of a lot of phosphoprotein, ADP, and patience.
;)
but we don't follow the agnostic as to direction for GO in this case?
Here's an example:
GO:0004713 protein tyrosine kinase activity
Molecular Function
Definition: Catalysis of the reaction: ATP + a protein tyrosine = ADP + protein tyrosine phosphate.
Which takes us back to @ukemi 's comment above:
Quick note for the assigned curator. The = sign in the definitions means that these reactions are all defined as bidirectional despite their names.
The reverse reaction (ATP --> ADP + Pi, pumping H+) is physiologically relevant in bacteria. During fermentative growth, i.e. growth without an electron transport chain, some taxa of bacteria use this process to generate a pmf.
Here's an example:
GO:0004713 protein tyrosine kinase activity
OK I misunderstood, I think. @deustp01 you are saying that we don't specify the direction even though we annotate directionally based on the term name?
The reverse reaction (ATP --> ADP + H+) is physiologically relevant in bacteria.
There should be a term for this too.
At present it's really confusing because the entire ancestry is specified as ATPase
see the parent
GO:0019829 cation-transporting ATPase activity
Enables the transfer of a solute or solutes from one side of a membrane to the other according to the reaction: ATP + H2O + cation(out) = ADP + phosphate + cation(in).
until this term when the term name switches to ATP synthase. It should be consistent but I don't want to annotate a mitochondrial F1F0 ATP synthase as an ATPase. It would look wrong.
There is clearly a precedent for forcing directionality when it is biologically important (kinase/phophatase).
In Trypanosoma brucei, the mitochondrial F1Fo ATPase functions differently depending upon the host organism. In the insect host, it functions as an ATP synthase and generates ATP. In the mammalian bloodstream form of T. brucei, the same enzyme functions primarily as an ATPase and is required to maintain the mitochondrial membrane potential.
PMID: 28414727
also, https://doi.org/10.1111/j.1432-1033.1992.tb17278.x
That's OK, we have terms for both activities
GO:0046933 proton-transporting ATP synthase activity, rotational mechanism
or
GO:0046961 proton-transporting ATPase activity, rotational mechanism
coupled to the appropriate process and the lifecycle stage
This is a reason why it is important to represent both directions- because they represent different aspects of biology.
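For reference, the two GO terms above correspond to the same overall chemistry run in opposite physiological directions. A sketch for the mitochondrial F-type enzyme (the number n of translocated protons per ATP varies by organism, and "in"/"out" here are relative to the matrix):

```latex
% ATP synthase direction (driven by the proton-motive force):
\mathrm{ADP} + \mathrm{P_i} + n\,\mathrm{H^+_{out}} \longrightarrow \mathrm{ATP} + \mathrm{H_2O} + n\,\mathrm{H^+_{in}}
% ATPase direction (ATP hydrolysis pumps protons back across the membrane):
\mathrm{ATP} + \mathrm{H_2O} + n\,\mathrm{H^+_{in}} \longrightarrow \mathrm{ADP} + \mathrm{P_i} + n\,\mathrm{H^+_{out}}
```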
Parentage of both terms demonstrates more clearly why the parentage is incorrect:
After discussion with @ValWood and @pgaudet, we propose to add GO:0016776 'phosphotransferase activity, phosphate group as acceptor' as a parent to the 'proton-transporting ATP synthase' term and remove the ATPase parent. Not having the phosphotransferase parent seems to be missing a biologically relevant MF parent.
For curation, curators would annotate to the ATP synthase and/or ATPase MF terms and, ideally, provide the appropriate biological context to capture when the 'machine' and its subunits enable each type of MF.
We also discussed the 'proton transmembrane transporter activity' MF parent for 'proton-transporting ATP synthase. According to the definition, this MF seems to fit GO:0022803 'passive transmembrane transporter activity':
"Enables the transfer of a single solute from one side of a membrane to the other by a mechanism involving conformational change, either by facilitated diffusion or in a membrane potential dependent process if the solute is charged."
@pgaudet - does this seem correct to you?
Note that the question remains about which RHEA to xref to for each MF term and also that synonyms will need to be reassigned appropriately.
@pgaudet - does this seem correct to you?
Yes
Note that the question remains about which RHEA to xref to for each MF term and also that synonyms will need to be reassigned appropriately.
I thought we had decided on the directionality of the Rhea reaction for both (or at least for one of them - I think Rhea is missing the synthase?)
Thanks, Pascale
I thought we had decided on the directionality of the Rhea reaction for both (or at least for one of them - I think Rhea is missing the synthase?)
I think we weren't clear on exactly how 'in' and 'out' should be interpreted, but tagging @amorgat here for guidance.
| gharchive/issue | 2019-03-13T20:42:04 | 2025-04-01T06:38:46.376222 | {
"authors": [
"ValWood",
"deustp01",
"dsiegele",
"pgaudet",
"ukemi",
"vanaukenk"
],
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/17035",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
465143701 | MP modulation by symbiont of host defense-related PCD
GO:0034053
modulation by symbiont of host defense-related programmed cell death
should be
is_a
regulation of
GO:0097300 programmed necrotic cell death
@CuzickA
Is it always programmed necrotic cell death, never apoptotic?
Well this is a tricky one. In the plant community they call the death "necrotic death" whatever the mechanism, because "necrotic lesions" are what is observed.
The mechanism isn't always clear. In the cases we looked at, either we don't know, or it is referred to as PCD. Whether it is PCD or not, for plants it is necrotic cell death.
Normally, in response to a pathogen the plant activates the hypersensitive response, a host immune response - the normal defense against biotrophs (localized cell killing).
However this isn't a pathogen process, since the host is doing the activating.
However, necrotrophic fungi actively switch on the plant host hypersensitive response to kill the plant because they live on the dead tissue.
It would make sense that if the pathogen is doing the killing (implied by this particular term, "modulation by symbiont of host defense-related PCD") then it is always "necrotic".
It also seems to completely fit this definition:
GO:0097300 programmed necrotic cell death
Definition (GO:0097300 GONUTS page)
A necrotic cell death process that results from the activation of endogenous cellular processes, such as signaling involving death domain receptors or Toll-like receptors. PMID:21760595
because it is always necrotic.
clarified the comment above a little...
Does this term only apply to plants? If you want to describe 'necrotic cell death' I think we need a new term.
We want to use
GO:0034053 modulation by symbiont of host defense-related programmed cell death
to logically define a phenotype.
As far as I can see, logically
all
GO:0034053
modulation by symbiont of host defense-related programmed cell death
must be
regulation of GO:0097300 programmed necrotic cell death
which is why we are asking for the parent.
Do symbionts regulate host defenses and cause cell death that isn't necrotic? If the pathogen is causing cell death, can it be otherwise?
I don't really quite know, but ...
https://doi.org/10.1371/journal.ppat.1000478
and maybe
https://www.ncbi.nlm.nih.gov/pubmed/20191202
https://www.ncbi.nlm.nih.gov/pubmed/12766474
https://www.ncbi.nlm.nih.gov/pubmed/11595833
Hmm. OK yes this doesn't seem quite right.
We can probably use both GO:0034053 and GO:0097300 in the logical defs.
At the moment we aren't creating the logical defs, we are just noting the GO terms we think are most appropriate so they are ready. James is looking into design patterns with Nico.
@CuzickA could you note both of these GO IDs for this one. We can look closer nearer the time, but I will close this ticket.
| gharchive/issue | 2019-07-08T09:12:44 | 2025-04-01T06:38:46.385481 | {
"authors": [
"ValWood",
"mah11",
"pgaudet",
"ukemi"
],
"repo": "geneontology/go-ontology",
"url": "https://github.com/geneontology/go-ontology/issues/17597",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
291682241 | Allow for searching on IDs or accessions
Curators will want to be able to cut and paste database IDs in the Gene Product field.
This currently works in the Add Individual field in the graph editor.
@vanaukenk pasting values is now working. I just put a minimum of 2 characters for the autocomplete to be triggered.
| gharchive/issue | 2018-01-25T19:11:22 | 2025-04-01T06:38:46.402711 | {
"authors": [
"tmushayahama",
"vanaukenk"
],
"repo": "geneontology/simple-annoton-editor",
"url": "https://github.com/geneontology/simple-annoton-editor/issues/25",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2102357438 | chore: added root project to settings.gradle.kts
Review Guidance
📓 Related JIRA
https://genesisglobal.atlassian.net/browse/IP-51
🤔 What does this PR do?
Add bullets..
🚀 Where should the reviewer start?
Add file / directory pointers.
📑 How should this be tested?
npx -y @genesislcap/genx@latest init prtest -x --ref YOUR-BRANCH-NAME
✅ Checklist
[ ] I have tested my changes.
[ ] I have added tests for my changes.
[ ] I have updated the project documentation to reflect my changes.
No longer needed
| gharchive/pull-request | 2024-01-26T14:57:11 | 2025-04-01T06:38:46.406563 | {
"authors": [
"jldparker"
],
"repo": "genesiscommunitysuccess/blank-app-seed",
"url": "https://github.com/genesiscommunitysuccess/blank-app-seed/pull/118",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
280624947 | Possible refactoring idea
package main
import (
"log"
"os"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/session"
awsec2 "github.com/aws/aws-sdk-go/service/ec2"
awsiam "github.com/aws/aws-sdk-go/service/iam"
"github.com/genevievelesperance/leftovers/app"
"github.com/genevievelesperance/leftovers/aws/ec2"
"github.com/genevievelesperance/leftovers/aws/iam"
flags "github.com/jessevdk/go-flags"
)
type opts struct {
NoConfirm bool `short:"n" long:"no-confirm"`
AWSAccessKeyID string ` long:"aws-access-key-id" env:"AWS_ACCESS_KEY_ID"`
AWSSecretAccessKey string ` long:"aws-secret-access-key" env:"AWS_SECRET_ACCESS_KEY"`
AWSRegion string ` long:"aws-region" env:"AWS_REGION"`
}
type deleter interface {
Delete() error
}
func main() {
log.SetFlags(0)
var c opts
parser := flags.NewParser(&c, flags.HelpFlag|flags.PrintErrors)
_, err := parser.ParseArgs(os.Args)
if err != nil {
os.Exit(0)
}
logger := app.NewLogger(os.Stdout, os.Stdin, c.NoConfirm)
if c.AWSAccessKeyID == "" {
log.Fatal("Missing AWS_ACCESS_KEY_ID.")
}
if c.AWSSecretAccessKey == "" {
log.Fatal("Missing AWS_SECRET_ACCESS_KEY.")
}
if c.AWSRegion == "" {
log.Fatal("Missing AWS_REGION.")
}
config := &aws.Config{
Credentials: credentials.NewStaticCredentials(c.AWSAccessKeyID, c.AWSSecretAccessKey, ""),
Region: aws.String(c.AWSRegion),
}
iamClient := awsiam.New(session.New(config))
ec2Client := awsec2.New(session.New(config))
ir := iam.NewRoles(iamClient, logger)
ip := iam.NewInstanceProfiles(iamClient, logger)
sc := iam.NewServerCertificates(iamClient, logger)
vo := ec2.NewVolumes(ec2Client, logger)
for _, deletable := range []deleter{ir, ip, sc, vo} {
if err = deletable.Delete(); err != nil {
log.Fatalf("\n\n%+v - %s\n", deletable, err)
}
}
}
Thanks Matt!
| gharchive/issue | 2017-12-08T21:40:01 | 2025-04-01T06:38:46.409210 | {
"authors": [
"genevievelesperance",
"mattetti"
],
"repo": "genevievelesperance/leftovers",
"url": "https://github.com/genevievelesperance/leftovers/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
249799841 | Hand exact Jacobian to nonlinear solver
I vaguely recall seeing a way to do this once.
This way we no longer have to distinguish between the nonlinear and linearized problems in phaseflow.
Once this is done, we should be able to run all tests with mpirun, and no longer need to run tests in serial.
This is complete as of commit 35fa7f63e59d07ec46c4e723806b50ba7bd7247e
This is a pretty big deal :) I discussed it in more detail at PR #63
| gharchive/issue | 2017-08-12T09:12:59 | 2025-04-01T06:38:46.496769 | {
"authors": [
"alexanderzimmerman"
],
"repo": "geo-fluid-dynamics/phaseflow-fenics",
"url": "https://github.com/geo-fluid-dynamics/phaseflow-fenics/issues/57",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Run the container image as the non-root user app
Since .NET 8, all .NET Docker images come with a non-root user named app configured, which the image should be run as - by default, the image is still executed as the root user.
Siehe auch: https://andrewlock.net/exploring-the-dotnet-8-preview-updates-to-docker-images-in-dotnet-8/
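A minimal Dockerfile sketch of the change (base image tag and assembly name are illustrative); the official .NET 8 images pre-define the app user, so switching away from root is a one-line change:

```dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY ./publish .
# .NET 8 images ship with a non-root user named "app"; without this line
# the container still runs as root.
USER app
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

The images also export APP_UID, so USER $APP_UID can be used instead where a numeric UID is needed (e.g. for Kubernetes runAsNonRoot checks).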
@danjov Do we also want to adapt the client Node container accordingly? New separate issue?
@danjov Do we also want to adapt the client Node container accordingly? New separate issue?
@flenny Gladly, yes - and in a separate issue.
| gharchive/issue | 2023-12-04T14:09:38 | 2025-04-01T06:38:46.504598 | {
"authors": [
"danjov",
"flenny"
],
"repo": "geoadmin/suite-bdms",
"url": "https://github.com/geoadmin/suite-bdms/issues/858",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2735450077 | Setup pre-commit-ci
https://github.com/apps/pre-commit-ci
Checks working on this repo: https://results.pre-commit.ci/repo/github/813604742
Autofix automation working on this PR: https://github.com/geojupyter/jupytergis/pull/243
Although I think I prefer autofixes to be off, but that's just me! :)
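For the autofix preference, pre-commit.ci is configured from a ci: block in the repository's .pre-commit-config.yaml; a minimal sketch, assuming we want checks to run without fixup commits being pushed:

```yaml
# .pre-commit-config.yaml
ci:
  autofix_prs: false   # run the checks on PRs, but don't push autofix commits
```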
| gharchive/issue | 2024-12-12T10:09:30 | 2025-04-01T06:38:46.560361 | {
"authors": [
"martinRenou",
"mfisher87"
],
"repo": "geojupyter/jupytergis",
"url": "https://github.com/geojupyter/jupytergis/issues/247",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1660792666 | AttributeError: module 'logging' has no attribute 'isEnabledFor'
Just started seeing this error pop yesterday from my project that uses owslib. I'm not version-pinned, so I'm assuming this is related to the release from yesterday.
r = wcs.getCoverage(
File "/usr/local/lib/python3.8/dist-packages/owslib/coverage/wcs201.py", line 156, in getCoverage
if log.isEnabledFor(logging.DEBUG):
AttributeError: module 'logging' has no attribute 'isEnabledFor'
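A minimal reproduction of the mix-up behind this traceback: the name log in the failing module was apparently bound to the logging module itself rather than to a Logger instance (the module name below is taken from the traceback; the fix shown is an assumption about what the patch does):

```python
import logging

# The logging *module* has no isEnabledFor() -- only Logger instances do.
print(hasattr(logging, "isEnabledFor"))  # False

# The usual fix: bind a module-level Logger instance instead.
log = logging.getLogger("owslib.coverage.wcs201")
print(log.isEnabledFor(logging.DEBUG))  # a bool; False unless DEBUG logging is configured
```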
Hi! Is there any update on when this will be updated in the pypi released version?
| gharchive/issue | 2023-04-10T13:32:59 | 2025-04-01T06:38:46.572402 | {
"authors": [
"alesolla",
"disbr007"
],
"repo": "geopython/OWSLib",
"url": "https://github.com/geopython/OWSLib/issues/871",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
133541112 | add mailing list to Nabble
https://trac.osgeo.org/osgeo/ticket/1625
Done: http://osgeo-org.1560.x6.nabble.com/PyWPS-f5250613.html
| gharchive/issue | 2016-02-14T13:54:41 | 2025-04-01T06:38:46.591949 | {
"authors": [
"tomkralidis"
],
"repo": "geopython/pywps",
"url": "https://github.com/geopython/pywps/issues/75",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
586376758 | Use bump2version to track version changes
Overview
This PR adds bump2version (a maintained fork of bumpversion) to the repository to manage version changes. This works by setting the standard version number in setup.cfg and performing the changes based on the semantic versioning scheme. From the command line:
bump2version patch --> +0.0.1
bump2version minor --> +0.1.x # where x is reset to 0
bump2version major --> +1.x.x # where x is reset to 0
bump2version can also do things like tag versions and create commits when tagging. These are not enabled in this PR.
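A minimal setup.cfg sketch of the kind of configuration bump2version reads (the version number and file path here are illustrative, not taken from this PR):

```ini
[bumpversion]
current_version = 4.2.4
commit = False
tag = False

[bumpversion:file:pywps/__init__.py]
search = __version__ = '{current_version}'
replace = __version__ = '{new_version}'
```

Running bump2version patch would then rewrite both the [bumpversion] section and the listed file in one step.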
Related Issue / Discussion
https://github.com/geopython/pywps/issues/525
Additional Information
https://pypi.org/project/bump2version/
Contribution Agreement
(as per https://github.com/geopython/pywps/blob/master/CONTRIBUTING.rst#contributions-and-licensing)
[x] I'd like to contribute [feature X|bugfix Y|docs|something else] to PyWPS. I confirm that my contributions to PyWPS will be compatible with the PyWPS license guidelines at the time of contribution.
[ x I have already previously agreed to the PyWPS Contributions and Licensing Guidelines
Coverage remained the same at 74.26% when pulling 976fbf90b258a49fd609c5969a1912ab94e1f364 on Zeitsperre:bumpversion into 61e03fe566d9a4cdf52a6b941a404916592e5aac on geopython:master.
test failure on Python 3.7 not related to the PR.
@Zeitsperre thanks :)
| gharchive/pull-request | 2020-03-23T17:15:30 | 2025-04-01T06:38:46.597111 | {
"authors": [
"Zeitsperre",
"cehbrecht",
"coveralls"
],
"repo": "geopython/pywps",
"url": "https://github.com/geopython/pywps/pull/527",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1417487757 | Add support for Laravel's artisan test command
Laravel's artisan test command offers some styling upgrades on top of PHPUnit's. artisan test passes all arguments directly to PHPUnit.
It would be nice to add Laravel/Artisan detection and adjust the command to utilize this.
I plan to create a Pull Request with the changes in the near future.
Currently you can toggle the --testdox option from the command palette which will give you a similar output.
One way to hack it is the following:
"phpunit.prepend_cmd": ["artisan"],
"phpunit.executable": "test",
"phpunit.options":
{
"colors=never": true,
},
This prepends artisan and replaces the executable with test which gets you artisan test. A bit hacky but works.
The colors option is required because artisan prints colors codes. PHPUnit defaults to --colors=auto which disables colors when run in Sublime Text.
The problem is that artisan still prints some color codes which is similar to the same issue Pest has (see https://github.com/gerardroche/sublime-phpunit/issues/103):
It also prints a TTY warning that I don't understand yet:
Warning: TTY mode requires /dev/tty to be read/writable.
In the next version you will be able to set the executable as a list:
"phpunit.executable": ["artisan", "test"],
"phpunit.options":
{
"colors=never": true,
},
I added the boolean setting phpunit.artisan. Just set it to true to enable the Artisan test runner.
I opened an issue in the Laravel tracker for the color output issues: https://github.com/laravel/framework/issues/46759.
Please open issues about missing syntax highlighting of the output.
| gharchive/issue | 2022-10-20T23:55:11 | 2025-04-01T06:38:46.694615 | {
"authors": [
"RCady",
"gerardroche"
],
"repo": "gerardroche/sublime-phpunit",
"url": "https://github.com/gerardroche/sublime-phpunit/issues/102",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
125954964 | Make loose quadtree compatible to MSVC 2015
These are only minor changes but they fix any compiler errors when used in Visual Studio 2015 C++ Toolchain.
Thanks a lot for the fix.
| gharchive/pull-request | 2016-01-11T14:23:40 | 2025-04-01T06:38:46.695821 | {
"authors": [
"Hemofektik",
"gerazo"
],
"repo": "gerazo/loose_quadtree",
"url": "https://github.com/gerazo/loose_quadtree/pull/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
May I ask why the latest version removed the location permission group
May I ask why the latest version removed the location permission group?
The reason is that if I exposed the Permission.Group.LOCATION array constant, some people would request the background location permission even when they don't need it - I learned this from some feedback. So my original intent in removing this array was simply that, when requesting location permissions, people should avoid requesting the background location permission as far as possible, because very few apps actually need background location. Apart from map apps, I really can't think of any other app that needs this permission. Also, on Android 11 the way to request the background location permission is even more cumbersome: it has to be split into two steps to succeed.
Also, young man, the framework only removed the Permission.Group.LOCATION array constant; it does not mean the whole location permission group can no longer be requested - you just need to add the permissions to the request list one by one.
public final class Permission {
/** 获取精确位置 */
public static final String ACCESS_FINE_LOCATION = "android.permission.ACCESS_FINE_LOCATION";
/** Get approximate location */
public static final String ACCESS_COARSE_LOCATION = "android.permission.ACCESS_COARSE_LOCATION";
/** Get location in the background (requires Android 10.0 and above) */
public static final String ACCESS_BACKGROUND_LOCATION = "android.permission.ACCESS_BACKGROUND_LOCATION";
}
Hey, do you have any other questions? If not, I'll close this issue.
| gharchive/issue | 2021-04-27T05:29:38 | 2025-04-01T06:38:46.706596 | {
"authors": [
"getActivity",
"w296365959"
],
"repo": "getActivity/XXPermissions",
"url": "https://github.com/getActivity/XXPermissions/issues/74",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1488957600 | refactor: Navbar: constrain max width to same width as the content.
Related Rails app PR
https://github.com/getAlby/getalby.com/pull/272
HELP NEEDED / TODOs
[ ] review code
[x] depending on responsiveness needs, consider moving the navigation entries ("websites" etc.) a bit more to the right so they are more centered instead of leaning left.
[x] reviewer: test with an extension development setup (or pair with me / let me know how to set one up)
[x] reviewer: test responsiveness (current extension's header looks as broken to me on smaller breakpoints as with these changes but I'm not sure!?)
Describe the changes you have made in this PR
Navbar: constrain max width to same width as the content.
reason: on desktop the navigation items in the top left and right corners were easy to miss. With this change, everything is closer together and easier to notice.
Type of change
UI improvement
Screenshots of the changes [optional]
Before
After
How has this been tested?
⚠️ I only tested this with my browser's inspector. I don't have a development setup for the extension yet. Ok, I have the development setup for the extension now and everything looked good to me.
I think this is a good adjustment.
This looks a bit weird, but not too problematic I think:
Yes, there is a weird range of about 100px in display width around 1024px where the wallet icon has space on the left and is not flush with the content's left border. In the same range the negative margin on the children is a problem. I didn't investigate further since responsiveness is not a big focus. We could remove the negative margin; at full resolution I felt it looks more centered with it than without.
Hm, just checked again and it feels like I don't get the children that close to the wallet icon until a much smaller breakpoint on Firefox. Does the following GIF match what you're seeing?
Looks the same as in Firefox for me on Chrome.
| gharchive/pull-request | 2022-12-10T20:59:56 | 2025-04-01T06:38:46.712955 | {
"authors": [
"jankoegel"
],
"repo": "getAlby/lightning-browser-extension",
"url": "https://github.com/getAlby/lightning-browser-extension/pull/1855",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
577994568 | Links between services
Hi all
Is it possible to open a link in a registered service?
For example, I have a Trello service and a Slack service. If someone sends me a Trello link in Slack, it will open in an external browser by default. Is it possible to open it in the Trello service inside Ferdi?
Thanks
@kytwb I've worked a little on this; for now I need to use user.js in each service in order to modify all the links I want. For example, if I have Trello with id 'baa48d44-e29a-4a55-8709-e4fbf02a39a4' and a Slack service:
I can transform links like Card
TO:
window.open('https://trello.com/c/daUQc9Us/89-gestion-des-notifications-menu-écran', 'Ferdi: Trello', 'baa48d44-e29a-4a55-8709-e4fbf02a39a4');
Can you give me any direction on how to automatically retrieve a service.id for a given service.name? Is it possible to retrieve this kind of information in user.js?
I can create a pull request, but just to have some insights from you guys ;-)
For the moment all my code is in https://github.com/gmarec/ferdi/tree/develop-service-link
Thanks for your help
@gmarec Thank you for digging into this! 💪🙏
Can you please open a draft pull request? That way we can more easily check out the code, see the diff and collaborate within the pull request 😄 Maybe you can also comment in the diff to pin-point us to where you're missing the ~getServiceIdByServiceName helper?
@kytwb it's ok now to get serviceId by serviceName, thanks. I have opened a draft pull request. Thanks for your help. It needs documentation and a better integration than using user.js but it's working.
| gharchive/issue | 2020-03-09T15:31:52 | 2025-04-01T06:38:46.722737 | {
"authors": [
"gmarec",
"kytwb"
],
"repo": "getferdi/ferdi",
"url": "https://github.com/getferdi/ferdi/issues/451",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
593874742 | Make PAF format release also available as windows portable option
The current portable version for windows simply writes files to a temp directory. While technically this works, this does not fulfill what most users have in mind for a portable app - which is a standalone folder with program files that can be just moved and launched anywhere and it works. I use such apps to easily sync settings between machines.
I still use the old PAF version with the newer versions (just replacing the App/ferdi folder) and it works great. I know using Electron's portable target is easier and updates are also easier to solve, but the PAF format is still more flexible for advanced users, and should be at least optionally there (with caveat warnings).
I'd be happy to create a PortableApps.com Format package and host it on PortableApps.com. Or to create it and let you build/host it for both your users and ours. Or start with the former and transition to the latter as you'd like.
(I'm the creator of PortableApps.com and manage many of our app packages).
| gharchive/issue | 2020-04-04T13:56:14 | 2025-04-01T06:38:46.725561 | {
"authors": [
"JohnTHaller",
"poisonborz"
],
"repo": "getferdi/ferdi",
"url": "https://github.com/getferdi/ferdi/issues/538",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
156263233 | With multiple tags for a scenario the previous tags are being overwritten
Expected behavior
If there are multiple tags then it should get appended to the list
Actual behavior
It is getting overwritten
Steps to reproduce
Create a spec
Specification Heading
=====================
tags:top
* something1
c
-----------------------------
tags:scenario3 ppp a b c d e f g h i j k l m n o p q r s t u v w x y z
* something
tags:b
run the spec
The tag associated with c is only b
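The overwrite-vs-append distinction at the heart of this bug can be sketched with a toy snippet (illustrative Python only; Gauge's actual parser is written in Go, and the tag syntax here is simplified):

```python
def collect_tags(lines):
    # Gather tag values from multiple `tags:` lines of one scenario.
    tags = []
    for line in lines:
        values = [t.strip() for t in line[len("tags:"):].split(",") if t.strip()]
        # Buggy behavior would be: tags = values  (each line overwrites the old tags)
        tags.extend(values)  # expected behavior: later tags are appended
    return tags

print(collect_tags(["tags:scenario3, ppp", "tags:b"]))  # ['scenario3', 'ppp', 'b']
```

With the buggy assignment, only ['b'] would survive, matching the report above.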
Gauge version
Gauge version: 0.4.1.nightly-2016-05-18
Plugins
-------
html-report (2.1.1.nightly-2016-05-19)
java (0.4.1.nightly-2016-05-17)
Ideally, tags should be defined only once in a scenario. So if it's defined more than once, we should throw a parse error.
Fix should be available in nightly >= 03.06.2016
| gharchive/issue | 2016-05-23T11:55:39 | 2025-04-01T06:38:46.734272 | {
"authors": [
"apoorvam",
"sguptatw"
],
"repo": "getgauge/gauge",
"url": "https://github.com/getgauge/gauge/issues/410",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
307558167 | Missing metadata include
Quick one. Within the base template, an include:
{% include 'partials/metadata.html.twig' %}
But no such file exists in partials or elsewhere.
That is in the Grav core now.. under system/templates/partials/
BTW, this was moved there because it's useful for all themes, and rarely needs changing. You can of course override it in your theme, but don't need to provide it yourself now.
Thanks for the clarification, many thanks!
| gharchive/issue | 2018-03-22T09:09:25 | 2025-04-01T06:38:46.742808 | {
"authors": [
"rhukster",
"stephenvoisey"
],
"repo": "getgrav/grav-theme-quark",
"url": "https://github.com/getgrav/grav-theme-quark/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
433694002 | Disable tags in codemirror editor.
I only want to allow my client to make text bold in the editor. I want to hide all the other tags.
Is this possible?
How can I do this?
Anyone have an idea?
with some custom css in admin you could hide them via the css like
.grav-editor-button-italic {
display: none;
}
There's plugins that let you add custom css to admin
Thx a lot. I will test it.
| gharchive/issue | 2019-04-16T10:08:30 | 2025-04-01T06:38:46.745690 | {
"authors": [
"Memurame",
"n30nl1ght",
"ricardo118"
],
"repo": "getgrav/grav",
"url": "https://github.com/getgrav/grav/issues/2453",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
143395366 | using underscore vs dashes in page filename
How do I get Grav to use underscores (_) instead of dashes (-) in page filenames? I noticed that Grav uses dashes in the page name when there are spaces, or by default. I prefer underscores. Is there an admin panel option for this?
I see the option to put .html at the end of the page, but it seems spaces in the filename default to - instead of _. Any ideas?
@4evermaat no there's no such option at the moment. Add it to the Admin Plugin issues to be considered https://github.com/getgrav/grav-plugin-admin/issues/new
| gharchive/issue | 2016-03-25T00:36:51 | 2025-04-01T06:38:46.747867 | {
"authors": [
"4evermaat",
"flaviocopes"
],
"repo": "getgrav/grav",
"url": "https://github.com/getgrav/grav/issues/749",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
106803588 | Responsive image
This patch adds more support for responsive images, allowing you to do the following:
page.media.images|first.derivatives(320, 960, 100).sizes('(max-width: 26em) 100vw, 50vw').html()
This will output an image tag with srcset for all widths starting from 320px to 960px in steps of 100px, the image will never upscale. The example uses an image with an intrinsic width of 650px.
<img sizes="(max-width: 26em) 100vw, 50vw" style="" src="/images/a/2/d/c/0/a2dc029adaa6460cfae159f06e676439138fac17-its-havenvervoer-breed1.jpeg" srcset="/images/6/a/a/f/d/6aafde2cb0b8bca92368200556db5c48be7318b5-its-havenvervoer-breed1.jpeg 320w, /images/f/5/8/c/7/f58c75299a67eff397d1ef62731ed55e1df9ee9a-its-havenvervoer-breed1.jpeg 420w, /images/9/0/1/7/c/9017c3a9fcbe0e2f2894f377ff9460ba78d34b0d-its-havenvervoer-breed1.jpeg 520w, /images/c/e/d/4/6/ced46633f00d10c91b72dd38b3a82c39a03bb181-its-havenvervoer-breed1.jpeg 620w, /images/a/2/d/c/0/a2dc029adaa6460cfae159f06e676439138fac17-its-havenvervoer-breed1.jpeg 650w">
In the code you'll see we just remember a list of URLs and widths; this has been done to avoid memory problems when using lots of steps.
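The width selection implied by the example above (steps of 100 from 320 to 960, never upscaling past the 650px intrinsic width) can be modeled in a few lines; this is an illustrative Python sketch, not the actual PHP implementation:

```python
def derivative_widths(start, stop, step, intrinsic):
    # Candidate widths in fixed steps, dropping anything that would
    # upscale past the image's intrinsic width...
    widths = [w for w in range(start, stop + 1, step) if w < intrinsic]
    # ...and keeping the intrinsic width itself as the largest candidate.
    if intrinsic not in widths:
        widths.append(intrinsic)
    return widths

print(derivative_widths(320, 960, 100, 650))  # [320, 420, 520, 620, 650]
```

These are exactly the 320w to 650w entries in the srcset output shown above.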
Wow looks great guys. Thanks for the contribution.
Thanks, was hoping for feedback, but this is even better :smile:
I really have no way to improve upon that.. If you could provide a PR to the learn github site to update the docs that would be great.
I assume docs are in https://github.com/getgrav/grav-learn, if so I can have a look tomorrow
yes specifically here: https://github.com/getgrav/grav-learn/blob/develop/pages/02.content/06.media/docs.md#responsive-images
| gharchive/pull-request | 2015-09-16T15:54:18 | 2025-04-01T06:38:46.751319 | {
"authors": [
"attiks",
"rhukster"
],
"repo": "getgrav/grav",
"url": "https://github.com/getgrav/grav/pull/325",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
435042045 | How do I configure a third-party theme?
For example, I'd like to use Hexo themes.
Hexo themes aren't supported at the moment; the third-party themes mentioned in the description refer to themes developed by other developers. Theme conversion compatibility will be considered in the future.
OK, thank you.
| gharchive/issue | 2019-04-19T04:05:49 | 2025-04-01T06:38:46.752857 | {
"authors": [
"EryouHao",
"startaprl"
],
"repo": "getgridea/gridea",
"url": "https://github.com/getgridea/gridea/issues/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
51831787 | First tracking implementation
Intend to add the possibility to only track a list of blobs as I find it
too CPU intensive now.
re-up the request on the experimental branch and I'll merge it
| gharchive/pull-request | 2014-12-12T17:32:40 | 2025-04-01T06:38:46.845622 | {
"authors": [
"getnamo",
"vimaxus"
],
"repo": "getnamo/leap-ue4",
"url": "https://github.com/getnamo/leap-ue4/pull/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1232968299 | Sentry errors without server_name
We're seeing errors in Sentry that don't indicate a server_name, for example, this error from an ODK Cloud server. Because the error doesn't indicate things like the request URL, I'm thinking that it was thrown by the worker mechanism. I think our hope was that 9e410332ed4e65ab36c87b09c8ee8ce54937c3ad would resolve this, but it seems like these errors are still appearing in Sentry.
For the record, I'm not optimistic that #521 solves this problem.
The only tags I see on this type of error are environment, handled, level, and mechanism. I see a lot more tags on other errors (e.g., runtime, url, device).
I'm out of my depth here, but it would seem that perhaps workers don't have access to the full Sentry env? @matthew-white would it be worth it to escalate to their support team?
I also don't think that #521 solves this problem. For some of the tags you mentioned, I think it's expected that those are missing for workers, because there's no associated request (for example, url). But in other cases, it seems pretty surprising that the tag is missing (for example, runtime). I'm not sure, but it looks like this might be fixed in the latest version of the Sentry SDK: see getsentry/sentry-javascript#5190 (search for server_name).
For posterity, note that #626 effectively reverted the commit 9e410332ed4e65ab36c87b09c8ee8ce54937c3ad mentioned in the issue description.
| gharchive/issue | 2022-05-11T17:25:27 | 2025-04-01T06:38:46.867540 | {
"authors": [
"matthew-white",
"yanokwa"
],
"repo": "getodk/central-backend",
"url": "https://github.com/getodk/central-backend/issues/483",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
206327054 | Increase truncation limit on handler to 255
I am trying to debug a Job error. The problem is that Sentry is truncating the "handler" under "Additional Data". What I see in Sentry is
--- !ruby/object:Delayed::PerformableMailer
object: !ruby/class 'GeneralMailer'
method_name: :generi
This requires me to go digging outside of Sentry to find the full handler to get the ID of the troublesome entry (in args)
--- !ruby/object:Delayed::PerformableMailer
object: !ruby/class 'GeneralMailer'
method_name: :generic_message
args:
- 305294"
Hmm, we could probably even go a little bigger. The main thing is we don't want to trip the limit of a TEXT field (~60k characters, depending on DB). 1k for both the handler and last_error is probably fine.
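The cap being discussed is conceptually just a length limit; a toy sketch (illustrative Python, whereas the actual change lives in the Ruby code):

```python
def truncate(value, limit=1000):
    # Cap stored attributes (handler, last_error) at `limit` characters so
    # they stay far below a TEXT column's roughly 60k-character capacity.
    return value if len(value) <= limit else value[:limit]

print(len(truncate("a" * 1200)))  # 1000
```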
@darrennix Wanna change this PR to modify the truncation for last_error and handler to 1,000?
Done!
| gharchive/pull-request | 2017-02-08T21:17:01 | 2025-04-01T06:38:46.891689 | {
"authors": [
"darrennix",
"nateberkopec"
],
"repo": "getsentry/raven-ruby",
"url": "https://github.com/getsentry/raven-ruby/pull/633",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1504163274 | Support Dart 3
Description
https://medium.com/dartlang/the-road-to-dart-3-afdd580fbefa
I believe we just need to increase the versioning in the pubspec file, but it has to be tested.
Likely a 7.0.0 milestone.
Let's watch https://github.com/dart-lang/sdk/issues/49530 before going ahead.
Dart v3 is already part of the master channel so I am going to make it part of the v7 release.
| gharchive/issue | 2022-12-20T08:22:45 | 2025-04-01T06:38:46.911340 | {
"authors": [
"marandaneto"
],
"repo": "getsentry/sentry-dart",
"url": "https://github.com/getsentry/sentry-dart/issues/1199",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
643672053 | Documentation out of date
https://docs.sentry.io/error-reporting/configuration/?platform=php
request_bodies
This parameter controls if integrations should capture HTTP request bodies. It can be set to one of the following values:
never: request bodies are never sent.
small: only small request bodies will be captured where the cutoff for small depends on the SDK (typically 4KB)
medium: medium-sized requests and small requests will be captured. (typically 10KB)
always: the SDK will always capture the request body for as long as sentry can make sense of it
When trying to use the request_bodies option:
Fatal error: Uncaught Symfony\Component\OptionsResolver\Exception\UndefinedOptionsException: The option "request_bodies" does not exist. Defined options are: "attach_stacktrace", "before_breadcrumb", "before_send", "capture_silenced_errors", "class_serializers", "context_lines", "default_integrations", "dsn", "enable_compression", "environment", "error_types", "excluded_exceptions", "http_proxy", "in_app_exclude", "in_app_include", "integrations", "logger", "max_breadcrumbs", "max_request_body_size", "max_value_length", "prefixes", "project_root", "release", "sample_rate", "send_attempts", "send_default_pii", "server_name", "tags". in /home/***/public_html/vendor/symfony/options-resolver/OptionsResolver.php:798
Stack trace:
#0 /home/***/public_html/vendor/sentry/sentry/src/Options.php(56): Symfony\Component\OptionsResolver\OptionsResolver->resolve(Array)
#1 /home/***/public_html/vendor/sentry/sentry/src/ClientBuilder.php(115): Sentry\Options->__construct(Array)
#2 /home/***/public_html/ in /home/***/public_html/vendor/symfony/options-resolver/OptionsResolver.php on line 798
FTR, the right option is max_request_body_size.
And never should be none (#1019)
I found related getsentry/sentry-docs#1408: the problem is that the Python client indeed uses request_bodies (and never)
| gharchive/issue | 2020-06-23T09:16:08 | 2025-04-01T06:38:46.968349 | {
"authors": [
"Jean85",
"gjedeer",
"guilliamxavier"
],
"repo": "getsentry/sentry-php",
"url": "https://github.com/getsentry/sentry-php/issues/1029",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
369004185 | [2.0 branch] Tracking issue - Unified API
This is a non-exhaustive list containing todo's to refactor the current 2.0 branch to conform the unified SDK API see: https://docs.sentry.io/clientdev/unified-api/
In general, all points are up for discussion. This should be a rough guideline to hold on to while we progress through it.
[ ] Up minimum PHP version to 7.1 and use features (like: Return type declarations, nullable types, Throwable, …)
[ ] General wording changes to functions (e.g.: leaveBreadcrumb -> addBreadcrumb, Config -> Options)
[ ] Change Sentry protocol version to 7
[ ] Update SDK identifier + User Agent (sentry.php)
[ ] Context → Scope
[ ] Create Hub which takes some responsibility of the Client / ClientBuilder
[ ] Breakup Client
[ ] Move getLastEventID to Hub
[ ] Move breadcrumbs, context information into Scope
[ ] Refactor Middleware's to Integrations
[ ] Remove some public setter/getter
[ ] DSN → We no longer need path (projectRoot)
[ ] shouldCapture → beforeSend
[ ] Do not setTags in Configuration
[ ] Stacktrace should be stacktrace.values[].frames
[ ] Breadcrumbs should be breadcrumbs.values[].crumb
[ ] Remove base64 encoding
[ ] Remove sanitization
[ ] Remove docs from repo, they new live in https://github.com/getsentry/sentry-docs
[ ] Write + Update Docs https://github.com/getsentry/sentry-php/issues/650
Closing this in favor of https://github.com/getsentry/sentry-php/pull/677
| gharchive/issue | 2018-10-11T08:19:09 | 2025-04-01T06:38:46.976505 | {
"authors": [
"HazAT"
],
"repo": "getsentry/sentry-php",
"url": "https://github.com/getsentry/sentry-php/issues/670",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2367818492 | TraceableResponseForV4::getInfo must be compatible with ResponseInterface::getInfo
How do you use Sentry?
Sentry SaaS (sentry.io)
SDK version
5.0
Steps to reproduce
Executed composer update with symfony 6.4 (https://github.com/pimcore/pimcore/blob/11.x/composer.json)
Expected result
No error
Actual result
Fatal error: Declaration of Sentry\SentryBundle\Tracing\HttpClient\TraceableResponseForV4::getInfo(?string $type = null) must be compatible with Symfony\Contracts\HttpClient\ResponseInterface::getInfo(?string $type = null): mixed in /var
/www/html/vendor/sentry/sentry-symfony/src/Tracing/HttpClient/TraceableResponseForV4.php on line 15
PHP Fatal error: Declaration of Sentry\SentryBundle\Tracing\HttpClient\TraceableResponseForV4::getInfo(?string $type = null) must be compatible with Symfony\Contracts\HttpClient\ResponseInterface::getInfo(?string $type = null): mixed in
/var/www/html/vendor/sentry/sentry-symfony/src/Tracing/HttpClient/TraceableResponseForV4.php on line 15
Something is loading TraceableResponseForV4, while you should be loading just TraceableResponseForV6. This is currently loaded here: https://github.com/getsentry/sentry-symfony/blob/5de2b84421489e20c23c7678c69c3210dc7a223e/src/aliases.php#L81-L83
Is there any strange situation in your setup where all those ifs are triggered but not the correct one?
I had the same issue but fixed it via explicitly requiring symfony/http-client in my composer.json
Will be fixed by #858. The problem is that pimcore requires symfony/contracts but does not include symfony/http-client, hence you're seeing this error.
| gharchive/issue | 2024-06-22T13:16:00 | 2025-04-01T06:38:46.981316 | {
"authors": [
"JakobBruening",
"Jean85",
"Shadow-Devil",
"cleptric"
],
"repo": "getsentry/sentry-symfony",
"url": "https://github.com/getsentry/sentry-symfony/issues/855",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
744736090 | Use the hub injected in the constructor of the listeners rather than retrieving it from the SDK singleton
This PR refactors the event listeners to make them use the Hub injected into the constructor rather than retrieving it through the global functions. This eases the unit testing of those classes. Note that by default the injected hub is under the hood an instance of HubAdapter, and thus it works exactly as before by proxying each call to the current hub, which is retrieved on the fly.
@Jean85 do we really need to make the priority of the listeners configurable? I know it's a really small feature and doesn't have a big impact in the maintenance of the code, but I don't think it's useful since people can simply adjust the priority of their listeners
Listeners need to run after certain stuff, making them configurable makes the installation a lot easier. Requiring users to change their code to install ours is not fine IMHO.
What issues do you have with those?
No issue at all, I was simply questioning it because I never heard of anyone needing this, and when I tried to look around GitHub using the search feature to see who does the same, I didn't find many results. I'm totally fine with leaving this as-is if you think it's worth keeping the feature though; I have some doubts about its usefulness, but it's just my opinion 😃
The issue was requested in #49 the first time, so there's definitely a need.
Looking at that PR I think that the main issue was that the listeners at that time were all using the default priority, so people could not inject their own listeners between the Symfony ones (if there were any) and the Sentry ones. You decided to solve the issue by making the priority configurable, which is one way to solve it. The other way (it would have required a new major version of course, to avoid breaking BC) was to explicitly set the priority. Both ways are fine; as I said, I don't really have any issue with keeping things as they are now 😃
| gharchive/pull-request | 2020-11-17T13:27:54 | 2025-04-01T06:38:46.985473 | {
"authors": [
"Jean85",
"ste93cry"
],
"repo": "getsentry/sentry-symfony",
"url": "https://github.com/getsentry/sentry-symfony/pull/387",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2496945199 | Show icons also in dataview tables
When using Iconize, the icons also get displayed in front of the file link in any Dataview table. Icons set by Iconic don't. Is there a way to also show them?
Actually this seems not to be related to Dataview; it applies to all links. Icons defined by Iconize get displayed in any link, icons from Iconic do not.
I hadn't read the documentation; this behavior is expected.
Hi - it's true this feature isn't on the roadmap, but since I'm expecting a lot of "add this Iconize feature" requests over time, it'd be nice to keep this issue open so anyone can pitch in their comments.
You don't have to reply to this, but thanks for the suggestion! :)
| gharchive/issue | 2024-08-30T11:12:19 | 2025-04-01T06:38:47.152084 | {
"authors": [
"gfxholo",
"jckoester"
],
"repo": "gfxholo/iconic",
"url": "https://github.com/gfxholo/iconic/issues/22",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
2110153236 | 🛑 MediaKiller™ is down
In 43fc9cd, MediaKiller™ (https://mk.gfx-pro.net) was down:
HTTP code: 502
Response time: 1234 ms
Resolved: MediaKiller™ is back up in 42d5f8a after 26 minutes.
| gharchive/issue | 2024-01-31T13:57:06 | 2025-04-01T06:38:47.154601 | {
"authors": [
"wdeb"
],
"repo": "gfxpronet/upptime",
"url": "https://github.com/gfxpronet/upptime/issues/175",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1366295193 | 🛑 Contemporanea is down
In 3f24846, Contemporanea ($SITE_CNTM) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Contemporanea is back up in 86792cd.
| gharchive/issue | 2022-09-08T13:08:08 | 2025-04-01T06:38:47.157017 | {
"authors": [
"ggardin"
],
"repo": "ggardin/uptime-monitor",
"url": "https://github.com/ggardin/uptime-monitor/issues/372",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1456920976 | robots.txt
don't allow the path where keys are
Done - I tried with something like the snippet below but wasn't sure if it would work how I intended or not so didn't go with it in the end - decided to just Allow: /
User-agent: *
Allow: /
Allow: /changelog
Allow: /account
Allow: /settings
Disallow: /*
| gharchive/issue | 2022-11-20T14:20:15 | 2025-04-01T06:38:47.293512 | {
"authors": [
"ghostdevv"
],
"repo": "ghostdevv/short",
"url": "https://github.com/ghostdevv/short/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2087932016 | Update kubectl gs get releases
Change of review date only - no content changes
I'm thinking that by now a note regarding vintage only would make sense here.
Damn - that comment came in seconds before automerge - It would be correct. I'll do a new PR for that.
| gharchive/pull-request | 2024-01-18T10:01:08 | 2025-04-01T06:38:47.344726 | {
"authors": [
"marians",
"mproffitt"
],
"repo": "giantswarm/docs",
"url": "https://github.com/giantswarm/docs/pull/2052",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
712602830 | Use releases branch from flag when fetching release components
I missed this in https://github.com/giantswarm/kubectl-gs/pull/162
Going with a ping
| gharchive/pull-request | 2020-10-01T08:05:52 | 2025-04-01T06:38:47.348541 | {
"authors": [
"axbarsan"
],
"repo": "giantswarm/kubectl-gs",
"url": "https://github.com/giantswarm/kubectl-gs/pull/167",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2006386128 | OS Image Distribution from S3 to Customer vCenters
We should implement a mechanism to distribute the images from the common place (AWS S3 now) to all vSphere environments automatically to be able to manage multiple customers/users easily
from: https://github.com/giantswarm/roadmap/issues/2757#issuecomment-1822303696
@giantswarm/team-rocket how could that work? this would rather need to be a pull mechanism than a push because of network restrictions, right?
Yes we would need to pull the image from the central location, most likely via an operator or maybe a cronjob in the cluster and then push it to the customer's catalog
I want to bump this back up in priority - because all providers+customers are now using releases, the number of images we need to upload has increased. Previously we used to set the image template in the cluster-provider chart and it wasn't often updated; however, the release-based charts see kubernetes and/or flatcar upgrades far more often and so we have to upload images more often (especially for test installations). Additionally, the number of locations we need to upload them to is increasing with more on-prem customers coming onboard.
An operator shouldn't be too complex to write - I suggest we provide it a whitelist of images to upload (so as to avoid us uploading every single image which is built). This has the added benefit that we don't need to have the operator watch the S3 bucket - it just reconciles the existing images in the vcenter/vcd catalog against the whitelist and then just adds any which are missing.
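The whitelist reconciliation described above is, at its core, a set difference; a minimal sketch (Python, with hypothetical image names; a real operator would list the catalog via the vCenter/VCD APIs):

```python
def images_to_upload(whitelist, catalog):
    # Only whitelisted images missing from the catalog need uploading;
    # no need to watch the S3 bucket for every newly built image.
    return sorted(set(whitelist) - set(catalog))

print(images_to_upload(
    ["flatcar-kube-1.28", "flatcar-kube-1.29"],  # hypothetical image names
    ["flatcar-kube-1.28"],
))  # ['flatcar-kube-1.29']
```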
| gharchive/issue | 2023-11-22T13:48:22 | 2025-04-01T06:38:47.355254 | {
"authors": [
"gawertm",
"glitchcrab"
],
"repo": "giantswarm/roadmap",
"url": "https://github.com/giantswarm/roadmap/issues/2990",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2275104648 | Azure Arc summary
identify what Azure Arc can do and how a common architecture could look like
https://gigantic.slack.com/archives/C05U702MFS8/p1717010090855449
waiting for the call with microsoft to get a deep dive on those topics. scheduled for June 20th
| gharchive/issue | 2024-05-02T10:02:47 | 2025-04-01T06:38:47.356902 | {
"authors": [
"gawertm",
"vxav"
],
"repo": "giantswarm/roadmap",
"url": "https://github.com/giantswarm/roadmap/issues/3432",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
630341631 | Hello, i have an problem.
Error
An exception was thrown (press Ctrl+C to copy):
System.UnauthorizedAccessException: Access to the path is denied.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.WriteCore(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.FileStream.FlushWrite(Boolean calledFromFinalizer)
at System.IO.FileStream.Dispose(Boolean disposing)
at System.IO.Stream.Close()
at Gibbed.Borderlands2.SaveEdit.ShellViewModel.WriteSave(String savePath, SaveFile saveFile)
at Caliburn.Micro.Contrib.Results.DelegateResult.Execute(ActionExecutionContext context)
OK
Hello, I got it fixed, it was my ransomware protection going off again..
Closed
| gharchive/issue | 2020-06-03T21:23:09 | 2025-04-01T06:38:47.362846 | {
"authors": [
"Sydecadus"
],
"repo": "gibbed/Gibbed.Borderlands2",
"url": "https://github.com/gibbed/Gibbed.Borderlands2/issues/143",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
} |
2690857401 | Activated Achievements Not Displaying in 'Perfectionist' and 'Achievement Count
I encountered an issue where activating achievements for a game does not update their display in the Perfectionist and Achievement Count sections. Additionally, the "perfect game" status is not shown in these sections. However, the game still appears correctly listed under Games > Perfect Games.
Below are images to illustrate the issue:
This inconsistency makes it difficult to track and manage achievements. Could this be reviewed to ensure activated achievements and perfect game status are displayed consistently across all sections?
If needed, I can provide more details or additional examples.
This sounds like a problem on Steam's end, not much SAM can do about that.
This sounds like a problem on Steam's end, not much SAM can do about that.
I'll run some tests by activating achievements with a longer time gap between them to observe the behavior. This might help determine if the issue is related to timing or how the achievements are being processed by the system. Once I have results, I'll share them here.
I’ll be closing this issue because, after testing with a game using a longer time gap, I observed mixed results: the system did not work perfectly when I used a game that was already platinum. However, when testing with a game that had no achievements unlocked, the system worked as expected.
If anyone encounters similar issues, feel free to reopen the issue with more details.
| gharchive/issue | 2024-11-25T14:01:37 | 2025-04-01T06:38:47.366735 | {
"authors": [
"DutraGames",
"gibbed"
],
"repo": "gibbed/SteamAchievementManager",
"url": "https://github.com/gibbed/SteamAchievementManager/issues/448",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
} |
1348983812 | KeyColumn for Creating Query View
I found that your publish_featurestore_sqlview does not have a primary key ('<keyColumn>primary_key</keyColumn>') parameter. Is there a specific reason behind that? Do you intend for the primary key(s) to be recognized by GeoServer automatically?
Also, I want to mention that a complex query can have a primary key made by combining more than one attribute. Is there a solution for handling that?
@iamtekson I would be happy to contribute to this issue.
Please contribute if you like. Thank you @iamMHZ
I see this feature was removed in an earlier commit, and given the name it seems to have been a deliberate removal. What's the reason behind this / is there a way of reimplementing this feature? Without it I get errors later in my data pipeline, meaning I can't use this package to automate my work fully. If it's remaining removed, I would also suggest updating the documentation, which implies the argument still exists. Thanks
Hi @trodaway, thank you for raising this issue. I forget exactly why we removed that feature, but I am going to add it again and see how it impacts the rest of the code. If you want to send a PR, I am also happy to accept it.
Just released the new version v2.5.2, which solves this issue. I hope it works now.
Thanks for taking a look at this so quickly. I think the keyColumn tag is in the wrong bit of the XML though - whilst GeoServer isn't throwing an error when you POST, if I try to GET the layer via WFS I still get an error regarding the key field. From some testing, it appears (from trial & error, as I don't believe GeoServer's REST API is properly documented) that the keyColumn tag should be within the virtualTable tag, as opposed to just within the featureType. I'll try to compile a PR to fix this.
It seems write permission is limited on this repo, so I can't push / open a PR. This is the code I've got:
def publish_featurestore_sqlview(
    self,
    name: str,
    store_name: str,
    sql: str,
    key_column: Optional[str] = None,
    geom_name: str = "geom",
    geom_type: str = "Geometry",
    srid: Optional[int] = 4326,
    workspace: Optional[str] = None,
):
    """
    Parameters
    ----------
    name : str
    store_name : str
    sql : str
    key_column : str, optional
    geom_name : str, optional
    geom_type : str, optional
    workspace : str, optional
    """
    if workspace is None:
        workspace = "default"

    # issue #87
    if key_column is not None:
        key_column_xml = """
        <keyColumn>{}</keyColumn>""".format(key_column)
    else:
        key_column_xml = """"""

    layer_xml = """<featureType>
        <name>{0}</name>
        <enabled>true</enabled>
        <namespace>
            <name>{4}</name>
        </namespace>
        <title>{0}</title>
        <srs>EPSG:{5}</srs>
        <metadata>
            <entry key="JDBC_VIRTUAL_TABLE">
                <virtualTable>
                    <name>{0}</name>
                    <sql>{1}</sql>
                    <escapeSql>true</escapeSql>
                    <geometry>
                        <name>{2}</name>
                        <type>{3}</type>
                        <srid>{5}</srid>
                    </geometry>{6}
                </virtualTable>
            </entry>
        </metadata>
    </featureType>""".format(
        name, sql, geom_name, geom_type, workspace, srid, key_column_xml
    )

    # REST API url
    url = "{}/rest/workspaces/{}/datastores/{}/featuretypes".format(
        self.service_url, workspace, store_name
    )
    # headers
    headers = {"content-type": "text/xml"}
    # request
    r = requests.post(
        url,
        data=layer_xml,
        auth=(self.username, self.password),
        headers=headers,
    )
    if r.status_code == 201:
        return r.status_code
    else:
        raise GeoserverException(r.status_code, r.content)
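To see the point of the fix being discussed — that the keyColumn tag must land inside virtualTable rather than directly under featureType — here is a small standalone sketch of the XML assembly. It is illustrative only (the helper name and template are made up, and no GeoServer instance is needed):

```python
def build_virtual_table_xml(name, sql, key_column=None):
    # Hypothetical helper mirroring the template above: the optional
    # <keyColumn> entry is emitted inside <virtualTable>.
    key_column_xml = "\n  <keyColumn>{}</keyColumn>".format(key_column) if key_column else ""
    return (
        "<virtualTable>\n"
        "  <name>{0}</name>\n"
        "  <sql>{1}</sql>{2}\n"
        "</virtualTable>"
    ).format(name, sql, key_column_xml)


xml = build_virtual_table_xml("my_view", "SELECT id, geom FROM parcels", key_column="id")
print("<keyColumn>id</keyColumn>" in xml)  # True
```

When key_column is omitted, no keyColumn tag is emitted at all, matching the optional-parameter behavior of the function above.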
Yes, you are right. I will make another PR to solve this issue. Thanks again!
Thanks for sorting! I assume a 2.5.3 will be released shortly with that update?
I already released the new version!
Thanks - it just didn't come through onto conda-forge for another couple of hours.
Yes, it takes a few hours to create the build file for Conda.
| gharchive/issue | 2022-08-24T07:19:14 | 2025-04-01T06:38:47.374749 | {
"authors": [
"iamMHZ",
"iamtekson",
"trodaway"
],
"repo": "gicait/geoserver-rest",
"url": "https://github.com/gicait/geoserver-rest/issues/87",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
368490047 | Replace reqwest with hyper and futures
Provide asynchronous Future interfaces with hyper.
Thank you for the pull request, and I'm sorry for being late.
This PR conflicts with the current master and has been inactive for a long time, so I'm closing it for now.
If you want an async version of APIs, please create a new Pull Request.
Comments:
The blocking versions of APIs are still useful for some use cases, so it would be better to add new async versions of APIs instead of replacing them.
Since reqwest now supports async APIs, we don't need to switch to hyper. For now, we don't need low-level features provided by hyper, and we can keep our implementation easy and simple by using reqwest's convenient APIs.
| gharchive/pull-request | 2018-10-10T04:30:30 | 2025-04-01T06:38:47.389778 | {
"authors": [
"gifnksm",
"pbzweihander"
],
"repo": "gifnksm/oauth-client-rs",
"url": "https://github.com/gifnksm/oauth-client-rs/pull/36",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
447441632 | Adding a gig creates a new record in the db, but the new gig does not show up on the gig list.
It looks like the Foreign Key "CompanyContactID" is not being entered properly in the "Company" table.
Additionally, the "GigPosted" and "LastUpdated" columns of the Gig table are not getting real data.
I am unsure if these are the reasons why the added gig is not showing up on the gigs/index.php view. More research is needed.
fyi .... I've added gigs. I just added a couple, but I was not logged on. They all seem to have stuck. All fields are populated except GigPosted, and LastUpdated is all zeros.
If nobody is working on this issue, I'll assign myself
I think a recent pull request may have resolved this issue. @esteban-gs are you able to add a gig and then see it on the view gigs page?
It is writing to the DB, but not showing up on the list:
For me it only shows up when I search for it:
I'll assign myself.
@esteban-gs .... You said above that Gigs are being added. Did you confirm that the Foreign Key "CompanyContactID" is being set properly in the "Company" table?
If so, I think we should close this one.
Yes. It worked on my end
Closing per PR #233 and above discussion
| gharchive/issue | 2019-05-23T04:08:27 | 2025-04-01T06:38:47.399111 | {
"authors": [
"craigbpeterson",
"esteban-gs",
"thomaskise"
],
"repo": "gig-central/gig-repo-1",
"url": "https://github.com/gig-central/gig-repo-1/issues/173",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1191972510 | Request for team invites
Query all invites by username
Output the team name for each invite
Issue moved to gigaclub/TeamAPI #4 via ZenHub
| gharchive/issue | 2022-04-04T15:36:21 | 2025-04-01T06:38:47.415041 | {
"authors": [
"Feier68",
"kevin-fritsch"
],
"repo": "gigaclub/BuilderSystemAPI",
"url": "https://github.com/gigaclub/BuilderSystemAPI/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
99025092 | Suggestion: jQuery icon only
How about the jQuery logo without the text?
Oh and possibly add Microsoft Edge?
hey,
I thought about using only the jquery logomark but isn't clearly recognizable.
About Microsoft Edge:
Hmm, I still think the logomark on it's own could be useful in some cases. Edge has a different (yet suspiciously similar) logo so I think it's eligible to be included.
Added MS Edge f209440ba434258ed6d944c649804c506ad9525f
| gharchive/issue | 2015-08-04T18:03:28 | 2025-04-01T06:38:47.444085 | {
"authors": [
"gilbarbara",
"rctneil"
],
"repo": "gilbarbara/logos",
"url": "https://github.com/gilbarbara/logos/issues/80",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1052039089 | ungrab Mod4+Shift and Mod1+Shift
Hi!
Many tiling window managers use keybindings with both the Super key (Mod4) and the Shift key pressed by default (i.e. Mod4+Shift+q to close the focused window).
Currently, such key combinations are grabbed and therefore can't be used while the script is running.
I was able to solve this by ungrabbing the Shift_L key with the modifier Mod4Mask (and also Mod1Mask for the Alt key).
This should also resolve #6, since keybindings like Shift+t will still work.
Your fix works for me (I'm using i3-wm) Thanks!
| gharchive/pull-request | 2021-11-12T14:46:57 | 2025-04-01T06:38:47.467231 | {
"authors": [
"OmerBenHayun",
"simonecig"
],
"repo": "gillescastel/inkscape-shortcut-manager",
"url": "https://github.com/gillescastel/inkscape-shortcut-manager/pull/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
58788408 | Fix placeholder vertical positioning when text input element height changes
This fixes a positioning issue with responsive layouts which change the height of single line text input boxes dynamically based on window size.
Placeholder span is now positioned vertically in the middle of the input box based on outerHeights instead of basing the positioning on the input box top padding.
The resize event wasn't firing on the document object, so the handler is now bound to window instead.
How can I fix text in a textbox like on the Gmail sign-up page, where "@gmail.com" is shown on the right side of the textbox?
Can you tell me?
| gharchive/pull-request | 2015-02-24T19:10:18 | 2025-04-01T06:38:47.472471 | {
"authors": [
"9090899",
"mmajis"
],
"repo": "ginader/HTML5-placeholder-polyfill",
"url": "https://github.com/ginader/HTML5-placeholder-polyfill/pull/70",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1137580270 | Allows user to set clang-format executable
This PR allows users to set the clang-format executable. The find_program step is skipped in that case; instead, it only checks that the provided executable exists. Since that is not enough to verify that the executable works, I've set the minimum version to 9.0.0 (the version used in Ubuntu 18.04) so an error is triggered if the user did not provide a working clang-format.
I've also added more clang-format names.
PS: This is necessary for me, since my local default clang-format (v13) sometimes disagrees with the one that our check-format action uses. So far, I couldn't figure out which clang-format option is causing these differences.
@upsj Yes, that works. You have to use the absolute path for that, which I didn't check before.
| gharchive/pull-request | 2022-02-14T16:57:11 | 2025-04-01T06:38:47.481230 | {
"authors": [
"MarcelKoch"
],
"repo": "ginkgo-project/git-cmake-format",
"url": "https://github.com/ginkgo-project/git-cmake-format/pull/9",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
131333578 | Updated License
Updated so that arsey legal teams can't stop these awesome charts being used in their projects.
related to #604
@gionkunz Do you want me to update this so it shows dual licenses in the one file?
I switched to dual licensing WTFPL and MIT within Chartist 0.9.6 :smile: :+1:
| gharchive/pull-request | 2016-02-04T12:37:07 | 2025-04-01T06:38:47.482960 | {
"authors": [
"gionkunz",
"mcdonnelldean"
],
"repo": "gionkunz/chartist-js",
"url": "https://github.com/gionkunz/chartist-js/pull/605",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
225477109 | Added support for barChart - follow up to PR 11
Hi @gionkunz I modified package.json as you asked here:
https://github.com/gionkunz/chartist-plugin-pointlabels/pull/11
Can you please consider merging this? It would make me very happy 😍
Thanks!
Thanks again! :-)
| gharchive/pull-request | 2017-05-01T17:23:11 | 2025-04-01T06:38:47.484523 | {
"authors": [
"gionkunz",
"radojesrb"
],
"repo": "gionkunz/chartist-plugin-pointlabels",
"url": "https://github.com/gionkunz/chartist-plugin-pointlabels/pull/15",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
} |
1008458592 | Kruskal's Algorithm for minimum spanning tree
Description
I will explain Kruskal's algorithm, with code, for finding a minimum spanning tree.
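The planned explanation could be sketched, for instance, as the following minimal Python implementation of Kruskal's algorithm with a union-find structure (illustrative only; the function and parameter names are my own, not the contributor's submission):

```python
def kruskal_mst(n, edges):
    """Minimum spanning tree of an undirected graph via Kruskal's algorithm.

    n: number of vertices, labeled 0..n-1
    edges: iterable of (weight, u, v) tuples
    Returns (total_weight, list of chosen (u, v, weight) edges).
    """
    parent = list(range(n))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, mst = 0, []
    for w, u, v in sorted(edges):  # consider edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:               # skip edges that would form a cycle
            parent[ru] = rv        # union the two components
            total += w
            mst.append((u, v, w))
    return total, mst


# Example: square graph 0-1-2-3 with one diagonal 0-2
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]
total, mst = kruskal_mst(4, edges)
print(total, len(mst))  # 6 3
```

Sorting the edges dominates the running time, giving O(E log E) overall; the union-find operations are effectively constant-time.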
Domain
Competitive Programming
Type of Contribution
Documentation
Code of Conduct
[X] I follow Contributing Guidelines & Code of conduct of this project.
/assign
| gharchive/issue | 2021-09-27T18:24:01 | 2025-04-01T06:38:47.494320 | {
"authors": [
"AGR-12JU"
],
"repo": "girlscript/winter-of-contributing",
"url": "https://github.com/girlscript/winter-of-contributing/issues/3218",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1052870761 | Audio for Goto statement in C
Description
Here, I will be making an audio contribution on the topic of goto statements in C.
Domain
C/CPP
Type of Contribution
Audio
Code of Conduct
[X] I follow Contributing Guidelines & Code of conduct of this project.
/assign
| gharchive/issue | 2021-11-14T07:29:13 | 2025-04-01T06:38:47.496570 | {
"authors": [
"Harsh652-cpu"
],
"repo": "girlscript/winter-of-contributing",
"url": "https://github.com/girlscript/winter-of-contributing/issues/7868",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
296234037 | Support multi-index and wildcard in collection reference
In gitlab by @sfalquier on Dec 29, 2017, 10:43
Collection API should support https://www.elastic.co/guide/en/elasticsearch/reference/5.6/multi-index.html
ìndexPath should become indexPatternPath
On Collection creation, arlas-server have to check that ES types of all indeces have the same name and are consistant with other paths definitions (centroidPath, geometryPath, timestampPath, ...)
NB : explore.RawRESTService must throw an exception when indexPatternPath refers to more than one indeces
In gitlab by @sfalquier on Dec 29, 2017, 11:21
mentioned in issue #139
In gitlab by @sfalquier on Dec 29, 2017, 11:21
added ~2424198 label
In gitlab by @sylvaingaudan on Feb 7, 2018, 17:05
removed ~2424198 label
In gitlab by @sylvaingaudan on Feb 7, 2018, 17:06
changed milestone to %26
| gharchive/issue | 2018-02-11T22:46:51 | 2025-04-01T06:38:47.500937 | {
"authors": [
"elouanKeryell-Even"
],
"repo": "gisaia/ARLAS-server",
"url": "https://github.com/gisaia/ARLAS-server/issues/138",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
293880087 | Automatic Release generation and tagging
In gitlab by @sylvaingaudan on Apr 26, 2017, 12:16
In gitlab by @sylvaingaudan on Apr 26, 2017, 14:01
added ~1918909 label
In gitlab by @sylvaingaudan on Apr 28, 2017, 13:46
changed milestone to %4
In gitlab by @sfalquier on May 9, 2017, 09:13
added ~1918367 and removed ~1918909 labels
In gitlab by @sfalquier on May 12, 2017, 11:22
added ~1918368 and removed ~1918367 labels
In gitlab by @sfalquier on May 12, 2017, 11:22
assigned to @sfalquier
In gitlab by @sfalquier on May 15, 2017, 13:44
removed ~1918368 label
In gitlab by @sfalquier on May 15, 2017, 13:44
closed
| gharchive/issue | 2018-02-02T12:57:41 | 2025-04-01T06:38:47.504635 | {
"authors": [
"elouanKeryell-Even"
],
"repo": "gisaia/ARLAS-server",
"url": "https://github.com/gisaia/ARLAS-server/issues/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
147566022 | checkintool not working (VS2015 Update 2)
git-tfs version 0.25.0.0 (TFS client library 14.0.0.0 (MS)) (64-bit)
Commits visited count:11
Commits visited count:11
...
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentNullException: Value cannot be null.
Parameter name: path1
   at System.IO.Path.Combine(String path1, String path2)
   at Sep.Git.Tfs.VsCommon.TfsHelperBase.GetDialogAssembly()
   at Sep.Git.Tfs.VsCommon.TfsHelperBase.GetCheckinDialogType()
   at Sep.Git.Tfs.VsCommon.TfsHelperBase.ShowCheckinDialog(Workspace workspace, PendingChange[] pendingChanges, WorkItemCheckedInfo[] checkedInfos, String checkinComment)
   at Sep.Git.Tfs.Core.TfsWorkspace.CheckinTool(Func`1 generateCheckinComment)
   at Sep.Git.Tfs.Core.GitTfsRemote.<>c__DisplayClass2a.<CheckinTool>b__29(ITfsWorkspace workspace)
   at Sep.Git.Tfs.VsCommon.TfsHelperBase.WithWorkspace(String localDirectory, IGitTfsRemote remote, TfsChangesetInfo versionToFetch, Action`1 action)
   at Sep.Git.Tfs.Core.GitTfsRemote.WithWorkspace(TfsChangesetInfo parentChangeset, Action`1 action)
   at Sep.Git.Tfs.Core.GitTfsRemote.CheckinTool(String head, TfsChangesetInfo parentChangeset)
   at Sep.Git.Tfs.Commands.CheckinBase.PerformCheckin(TfsChangesetInfo parentChangeset, String refToCheckin)
   --- End of inner exception stack trace ---
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
   at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
   at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at Sep.Git.Tfs.Util.GitTfsCommandRunner.Run(GitTfsCommand command, IList`1 args)
   at Sep.Git.Tfs.GitTfs.Main(GitTfsCommand command, IList`1 unparsedArgs)
   at Sep.Git.Tfs.Program.Main(String[] args)
Value cannot be null.
Parameter name: path1
There's no "Microsoft.VisualStudio.TeamFoundation.TeamExplorer.Extensions" in my registry:
checkintool is one of the hackier parts of git-tfs, since it manually searches for assemblies etc. If you have a chance to fix this and submit a PR, it'd be super helpful.
Since I also did an uninstall of older Visual Studio 2013 (which could have interfered with the registries), I'll have to get my hands on a clean 2015 Update 2 install to see what the registries look like.
Well, it'd be nice if it worked in your case, too. Feel free to build it for your case first.
Reporter had a workaround and hasn't provided further information.
Looks like the issue can be closed
This should be fixed by 1123ad84a8064540bf98faa01a8d1dfeae97d9cb
Closing the issue
| gharchive/issue | 2016-04-11T21:54:12 | 2025-04-01T06:38:47.614650 | {
"authors": [
"joseph-orbis",
"siprbaum",
"spraints"
],
"repo": "git-tfs/git-tfs",
"url": "https://github.com/git-tfs/git-tfs/issues/951",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
615101334 | Using the latest Gitalk, but logging in / clicking the comment box automatically redirects to the archive page
All configuration follows the readme.
This is my repo:
https://github.com/yougikou/yougikou.github.io
The symptom is that if I am not debugging the page, clicking "Login" redirects to the blog archive page.
If I am debugging, login proceeds normally. (This behavior makes me question everything; it feels like a quantum experiment where observation determines the outcome.)
After a successful login, clicking "leave comment" again automatically redirects to the archive page.
I Googled for a long time but found no relevant information.
I've fiddled with it several times and still can't fix it, so I'm trying my luck here.
Solved it myself. Updating here.
The cause may be the JS loading order, or a problem when looking up elements in the document.
If anyone familiar with this can help add an explanation, that would be appreciated.
At first, I simply placed the div and the JS at the very bottom of the page.
After I moved the div inside the post div, everything works normally.
You solved it yourself! Great!
JS is usually placed at the end.
| gharchive/issue | 2020-05-09T05:59:43 | 2025-04-01T06:38:47.617674 | {
"authors": [
"booxood",
"yougikou"
],
"repo": "gitalk/gitalk",
"url": "https://github.com/gitalk/gitalk/issues/382",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2140959252 | [Systemctl service won't start]
Hello Benn,
When trying to start a custome service, the service won't start :
My service looks like this:
But the bash command is working on my terminal.
This behavior is happening for all my customs services.
This is my customize script:
sdm --customize \
    --extend --xmb 512 \
    --plugin network:"netman=dhcpcd|wifissid=Livebox-49F7|wifipassword=********|wificountry=FR" \
    --plugin copyfile:"filelist=/home/camanalytics/camanalytics/LampCam/images/base/files|mkdirif=yes" \
    --plugin user:"adduser=pi" \
    --plugin user:"setpassword=pi|password=pi" \
    --plugin runatboot:"script=/home/camanalytics/camanalytics/LampCam/images/base/ref_files_v2/runatboot.sh|output=/home/pi/firsboot.log" \
    --plugin raspiconfig:"spi=1" \
    --plugin apps:"apps=@myapps|name=myapps" \
    2023-05-03-raspios-bullseye-arm64-lite_base.img
I'm running v11.4
Am I missing something with the system plugin?
Thank you
Found the solution: stopping userconfig.service
sudo systemctl stop userconfig.service
The service was stopping all other services that wanted to use multi-user.target
Try NOT stopping/disabling the userconfig service, but instead use --plugin disables:piwiz. This will disable the userconfig service and a few other things that you don't need.
Also, a couple of other things:
It doesn't hurt anything, but RasPiOS comes with the pi user already in /etc/passwd (but with no password set), so you don't need to adduser=pi. It doesn't hurt, of course, to leave it there
In your camanalytics.service file, since you have specified User=root you don't need the sudo on the ExecStart
Closing as resolved.
| gharchive/issue | 2024-02-18T12:11:39 | 2025-04-01T06:38:47.623746 | {
"authors": [
"gitbls",
"pierreCAMANALYTICS"
],
"repo": "gitbls/sdm",
"url": "https://github.com/gitbls/sdm/issues/180",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
610980023 | Toggle turns into normal checkbox
I'm using bootstrap-table. If I include toggles inside the table, the toggle appears on load, but looses all styling if I apply actions to the table: Pagination or toggle columns on/off for example.
Same issue here
Same here!!
| gharchive/issue | 2020-05-01T21:34:43 | 2025-04-01T06:38:47.625287 | {
"authors": [
"AsadKhanOMS",
"aymannabil86",
"frameworker2019"
],
"repo": "gitbrent/bootstrap4-toggle",
"url": "https://github.com/gitbrent/bootstrap4-toggle/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1588940208 | 🛑 invidious.privacydev.net is down
In a59345c, invidious.privacydev.net (https://invidious.privacydev.net/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: invidious.privacydev.net is back up in 2288934.
| gharchive/issue | 2023-02-17T08:29:51 | 2025-04-01T06:38:47.741746 | {
"authors": [
"gitetsu"
],
"repo": "gitetsu/invidious-instances-upptime",
"url": "https://github.com/gitetsu/invidious-instances-upptime/issues/281",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1570924466 | 🛑 invidious.privacydev.net is down
In f8aa024, invidious.privacydev.net (https://invidious.privacydev.net/) was down:
HTTP code: 502
Response time: 721 ms
Resolved: invidious.privacydev.net is back up in 8723fa5.
| gharchive/issue | 2023-02-04T12:35:58 | 2025-04-01T06:38:47.744929 | {
"authors": [
"gitetsu"
],
"repo": "gitetsu/invidious-instances-upptime",
"url": "https://github.com/gitetsu/invidious-instances-upptime/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1934505990 | 🛑 search.mascotboi.xyz is down
In 34df006, search.mascotboi.xyz (https://search.mascotboi.xyz/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: search.mascotboi.xyz is back up in d869568 after 53 minutes.
| gharchive/issue | 2023-10-10T06:40:14 | 2025-04-01T06:38:47.747592 | {
"authors": [
"gitetsu"
],
"repo": "gitetsu/librex-instances-upptime",
"url": "https://github.com/gitetsu/librex-instances-upptime/issues/1011",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1610119577 | 🛑 buscar.weblibre.org is down
In 5b92a43, buscar.weblibre.org (https://buscar.weblibre.org/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: buscar.weblibre.org is back up in ca48566.
| gharchive/issue | 2023-03-05T09:21:58 | 2025-04-01T06:38:47.750936 | {
"authors": [
"gitetsu"
],
"repo": "gitetsu/librex-instances-upptime",
"url": "https://github.com/gitetsu/librex-instances-upptime/issues/82",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1802590675 | 🛑 whoogle.privacydev.net is down
In 6b8610e, whoogle.privacydev.net (https://whoogle.privacydev.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: whoogle.privacydev.net is back up in 7ca9b7d.
| gharchive/issue | 2023-07-13T09:21:47 | 2025-04-01T06:38:47.754013 | {
"authors": [
"gitetsu"
],
"repo": "gitetsu/whoogle-instances-upptime",
"url": "https://github.com/gitetsu/whoogle-instances-upptime/issues/742",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1803050552 | 🛑 whoogle.privacydev.net is down
In b10ed86, whoogle.privacydev.net (https://whoogle.privacydev.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: whoogle.privacydev.net is back up in ebb3fe1.
| gharchive/issue | 2023-07-13T13:36:35 | 2025-04-01T06:38:47.757117 | {
"authors": [
"gitetsu"
],
"repo": "gitetsu/whoogle-instances-upptime",
"url": "https://github.com/gitetsu/whoogle-instances-upptime/issues/745",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1664622092 | [GHSA-5pm2-9mr2-3frq] Vulnerability in the Oracle Data Provider for .NET...
Updates
Affected products
Description
References
Summary
Comments
Add more details, issue automatic dependabot notifications to vulnerable apps
I checked the readme in the nupkg and indeed it does look like that version claims to fix the issue, but is that really the only reference for this fix?
I verified it personally but I guess that isn't the kind of source you are looking for.
It is also listed here: https://www.oracle.com/security-alerts/cpujan2023.html
This is oracle's official document regarding this CVE. I'm not sure if it lists the affected & fixed versions. It was released close to the release date of the fixed package version though. This package isn't updated too regularly so that's a strong sign too.
I believe you. I was just hoping that oracle would have a better public announcement of it. I guess the source repo is likely behind some wall, so maybe I'm hoping for too much 🤷.
Either way, many thanks for the contribution 👍
| gharchive/pull-request | 2023-04-12T13:56:08 | 2025-04-01T06:38:47.804458 | {
"authors": [
"darakian",
"georg-jung"
],
"repo": "github/advisory-database",
"url": "https://github.com/github/advisory-database/pull/2058",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1472063662 | [GHSA-wcg3-cvx6-7396] Segmentation fault in time
Updates
Affected products
Comments
All versions of 0.1.x are listed as vulnerable in the advisory description.
Hi there @jhpratt! A community member has suggested an improvement to your security advisory. If approved, this change will affect the global advisory listed at github.com/advisories. It will not affect the version listed in your project repository.
This change will be reviewed by our highly-trained Security Curation Team. If you have thoughts or feedback, please share them in a comment here! If this PR has already been closed, you can start a new community contribution for this advisory
| gharchive/pull-request | 2022-12-01T22:54:35 | 2025-04-01T06:38:47.807669 | {
"authors": [
"JamieMagee",
"github"
],
"repo": "github/advisory-database",
"url": "https://github.com/github/advisory-database/pull/968",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1676456023 | Upgrade packages tests exports
This upgrades this package to fall in line with the improvements made in our most recently updated Web Component, the relative-time-element.
The development changes are:
Uses web-test-runner over karma.
Uses a slightly improved eslint config
Minor changes to TSconfig
Uses esbuild over rollup
User-facing changes are:
Emits JSX types, making it compatible with React
Reworks exports allowing for various patterns, including importing the web component without defining, or defining under different scopes or registries.
Outputs a custom elements manifest.
:wave: Hello and thanks for pinging us! This issue or PR has been added to our inbox and a Design Infrastructure first responder will review it soon.
:art: If this is a PR that includes a visual change, please make sure to add screenshots in the description or deploy this code to a lab machine with instructions for how to test.
:fast_forward: If this is a PR that includes changes to an interaction, please include a video recording in the description.
:warning: If this is urgent, please visit us in #primer on Slack and tag the first responders listed in the channel topic.
| gharchive/pull-request | 2023-04-20T10:31:42 | 2025-04-01T06:38:47.812141 | {
"authors": [
"keithamus",
"primer-css"
],
"repo": "github/auto-check-element",
"url": "https://github.com/github/auto-check-element/pull/62",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1446707731 | SSL Error during device flow
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at java.net.http/jdk.internal.net.http.HttpClientImpl.send(HttpClientImpl.java:584)
at java.net.http/jdk.internal.net.http.HttpClientFacade.send(HttpClientFacade.java:123)
at com.github.codespaces.jetbrains.services.HttpClient.sendRequest(HttpClient.kt:108)
at com.github.codespaces.jetbrains.services.codespace.client.GitHubAuthClient.getLoginDeviceCode(GitHubAuthClient.kt:51)
at com.github.codespaces.jetbrains.services.codespace.AuthApiService.getLoginDeviceCode(AuthApiService.kt:20)
at com.github.codespaces.jetbrains.gateway.connector.auth.CodespacesSsoAuthComponent.signInToGitHub(CodespacesSsoAuthComponent.kt:85)
at com.github.codespaces.jetbrains.gateway.connector.auth.CodespacesSsoAuthComponent.access$signInToGitHub(CodespacesSsoAuthComponent.kt:32)
at com.github.codespaces.jetbrains.gateway.connector.auth.CodespacesSsoAuthComponent$signInPanel$1$1$1$1.invoke(CodespacesSsoAuthComponent.kt:149)
at com.github.codespaces.jetbrains.gateway.connector.auth.CodespacesSsoAuthComponent$signInPanel$1$1$1$1.invoke(CodespacesSsoAuthComponent.kt:149)
at com.github.codespaces.jetbrains.gateway.ui.dsl.extensions.ButtonXKt$listener$1.invoke$lambda-0(ButtonX.kt:14)
at java.desktop/javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1972)
at java.desktop/javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2313)
at java.desktop/javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:405)
at java.desktop/javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:262)
at java.desktop/javax.swing.plaf.basic.BasicButtonListener.mouseReleased(BasicButtonListener.java:279)
at java.desktop/java.awt.Component.processMouseEvent(Component.java:6648)
at java.desktop/javax.swing.JComponent.processMouseEvent(JComponent.java:3392)
at java.desktop/java.awt.Component.processEvent(Component.java:6413)
at java.desktop/java.awt.Container.processEvent(Container.java:2266)
at java.desktop/java.awt.Component.dispatchEventImpl(Component.java:5022)
at java.desktop/java.awt.Container.dispatchEventImpl(Container.java:2324)
at java.desktop/java.awt.Component.dispatchEvent(Component.java:4854)
at java.desktop/java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4948)
at java.desktop/java.awt.LightweightDispatcher.processMouseEvent(Container.java:4575)
at java.desktop/java.awt.LightweightDispatcher.dispatchEvent(Container.java:4516)
at java.desktop/java.awt.Container.dispatchEventImpl(Container.java:2310)
at java.desktop/java.awt.Window.dispatchEventImpl(Window.java:2802)
at java.desktop/java.awt.Component.dispatchEvent(Component.java:4854)
at java.desktop/java.awt.EventQueue.dispatchEventImpl(EventQueue.java:781)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:730)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:724)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:399)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:86)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:97)
at java.desktop/java.awt.EventQueue$5.run(EventQueue.java:754)
at java.desktop/java.awt.EventQueue$5.run(EventQueue.java:752)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:399)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:86)
at java.desktop/java.awt.EventQueue.dispatchEvent(EventQueue.java:751)
at com.intellij.ide.IdeEventQueue.defaultDispatchEvent(IdeEventQueue.java:918)
at com.intellij.ide.IdeEventQueue.dispatchMouseEvent(IdeEventQueue.java:840)
at com.intellij.ide.IdeEventQueue._dispatchEvent(IdeEventQueue.java:763)
at com.intellij.ide.IdeEventQueue.lambda$dispatchEvent$6(IdeEventQueue.java:450)
at com.intellij.openapi.progress.impl.CoreProgressManager.computePrioritized(CoreProgressManager.java:791)
at com.intellij.ide.IdeEventQueue.lambda$dispatchEvent$7(IdeEventQueue.java:449)
at com.intellij.openapi.application.TransactionGuardImpl.performActivity(TransactionGuardImpl.java:113)
at com.intellij.ide.IdeEventQueue.performActivity(IdeEventQueue.java:624)
at com.intellij.ide.IdeEventQueue.lambda$dispatchEvent$8(IdeEventQueue.java:447)
at com.intellij.openapi.application.impl.ApplicationImpl.runIntendedWriteActionOnCurrentThread(ApplicationImpl.java:881)
at com.intellij.ide.IdeEventQueue.dispatchEvent(IdeEventQueue.java:493)
at java.desktop/java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:207)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:128)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:117)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:113)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:105)
at java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:92)
Caused by: javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at java.base/sun.security.ssl.SSLEngineInputRecord.bytesInCompletePacket(SSLEngineInputRecord.java:145)
at java.base/sun.security.ssl.SSLEngineInputRecord.bytesInCompletePacket(SSLEngineInputRecord.java:64)
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:612)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482)
at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679)
at java.net.http/jdk.internal.net.http.common.SSLFlowDelegate$Reader.unwrapBuffer(SSLFlowDelegate.java:529)
at java.net.http/jdk.internal.net.http.common.SSLFlowDelegate$Reader.processData(SSLFlowDelegate.java:433)
at java.net.http/jdk.internal.net.http.common.SSLFlowDelegate$Reader$ReaderDownstreamPusher.run(SSLFlowDelegate.java:268)
at java.net.http/jdk.internal.net.http.common.SequentialScheduler$LockingRestartableTask.run(SequentialScheduler.java:205)
at java.net.http/jdk.internal.net.http.common.SequentialScheduler$CompleteRestartableTask.run(SequentialScheduler.java:149)
at java.net.http/jdk.internal.net.http.common.SequentialScheduler$SchedulableTask.run(SequentialScheduler.java:230)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
OS Version: 10.0
OS Architecture: amd64
IDE Version: 2022.2.4
Plugin Version: 0.2.0.896
Plugin path: C:\Users\xxxxxx\AppData\Roaming\JetBrains\JetBrainsGateway2022.2\plugins\github-codespaces-gateway
Plugin ID: com.github.codespaces.jetbrains.gateway
Thanks for the report @Robothy! Do you happen to know if there is any kind of proxy that your requests are getting passed through? It looks like something may be modifying the request that's coming from your local machine.
Also do you mind updating your plugin and trying again? We have added additional logging in the latest versions, so it will make it easier for us to determine what is happening.
Thanks!
| gharchive/issue | 2022-11-13T03:14:52 | 2025-04-01T06:38:47.876038 | {
"authors": [
"Robothy",
"cmuto09"
],
"repo": "github/codespaces-jetbrains-feedback",
"url": "https://github.com/github/codespaces-jetbrains-feedback/issues/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2305076858 | build needs to be installed in advance
Code of Conduct
[X] I have read and agree to the GitHub Docs project's Code of Conduct
What article on docs.github.com is affected?
https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-pypi#updating-your-github-actions-workflow
What part(s) of the article would you like to see updated?
This workflow uses python -m build to build packages, but build is not pre-installed in the runner image. One must install build using pip install build, or combine installation and running into pipx run build.
https://github.com/github/docs/blob/a107d854397cc1a346fa011fd1d5dc0a573a2d3f/content/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-pypi.md#L61-L77
Additional information
No response
@njzjz Thanks so much for opening an issue! I'll get this triaged for review ✨
Thank you for opening this issue @njzjz! ✨ This update to the docs looks good to me. You, or anyone else, are free to open a pull request to make these changes.
I would recommend adding the step python -m pip install build before python -m build, as seen in the section Publishing to PyPI.
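The point can be sketched in a few lines (hypothetical helper names, Python used purely for illustration — the actual fix is the single workflow step recommended above):

```python
# Illustration of the issue: the 'build' frontend is not preinstalled on the
# runner, so any workflow invoking 'python -m build' must install it first.
# Helper names here are hypothetical, for demonstration only.
import importlib.util

def build_available():
    """Report whether the 'build' package is importable in this environment."""
    return importlib.util.find_spec("build") is not None

def required_steps(available):
    """Commands a workflow needs, given whether 'build' is already present."""
    steps = [] if available else ["python -m pip install build"]
    steps.append("python -m build")
    return steps
```

On a fresh runner image `build_available()` reports False, so the install step must precede the build step — exactly the change requested in this issue.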
@njzjz After looking into the issue, I've learned that changes to any files within content/actions/deployment/security-hardening-your-deployments/** are restricted due to security compliance. I'm going to transfer this open source issue into an internal issue and update the file with your changes.
Thank you again for your valued contribution 💛
| gharchive/issue | 2024-05-20T04:50:58 | 2025-04-01T06:38:47.882616 | {
"authors": [
"nguyenalex836",
"njzjz",
"sunbrye"
],
"repo": "github/docs",
"url": "https://github.com/github/docs/issues/33056",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
263087489 | response status value error! when fetch a cors request
I mean, if you build a web service on localhost (127.0.0.1)
and you fetch 127.0.0.1, the response's status is 0, not 404.
Stack Overflow is a better place for usage questions. Here's an introduction to CORS requests.
| gharchive/issue | 2017-10-05T11:16:28 | 2025-04-01T06:38:47.884455 | {
"authors": [
"aqnaruto",
"dgraham"
],
"repo": "github/fetch",
"url": "https://github.com/github/fetch/issues/567",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
167643680 | Debian build errors
I get this error while running ./docker/run_dockers.bsh debian_8. What's happening is that a package is being loaded in multiple spots:
github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm (correct)
github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm (bad)
src/github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/httputil/ntlm.go:22: cannot use c.NtlmSession (type "github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession) as type "github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession in return argument:
"github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession does not implement "github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession (wrong type for GenerateAuthenticateMessage method)
have GenerateAuthenticateMessage() (*"github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".AuthenticateMessage, error)
want GenerateAuthenticateMessage() (*"github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".AuthenticateMessage, error)
src/github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/httputil/ntlm.go:38: cannot use session (type "github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession) as type "github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession in assignment:
"github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession does not implement "github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".ClientSession (wrong type for GenerateAuthenticateMessage method)
have GenerateAuthenticateMessage() (*"github.com/github/git-lfs/obj-i586-linux-gnu/src/github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".AuthenticateMessage, error)
want GenerateAuthenticateMessage() (*"github.com/github/git-lfs/vendor/github.com/ThomsonReutersEikon/go-ntlm/ntlm".AuthenticateMessage, error)
Just realized what the problem is: the move to support Go 1.6's vendoring didn't make it into v1.2.1, and I just never ran this after those changes. @ttaylorr and I are looking at the docker files to make these changes:
Update to Go 1.6
Download git lfs to $GOPATH/src/github.com/github/git-lfs. $GOPATH can be anything, Git LFS vendors all dependencies outside of stdlib.
Wish us luck!
Fixes started in https://github.com/andyneff/git-lfs_dockers/pull/3 and https://github.com/github/git-lfs/pull/1398
| gharchive/issue | 2016-07-26T15:54:23 | 2025-04-01T06:38:47.893673 | {
"authors": [
"technoweenie"
],
"repo": "github/git-lfs",
"url": "https://github.com/github/git-lfs/issues/1395",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
85877940 | Unable to find the .exe application
Hi guys, :smile:
I tried to use the Git extension for versioning large files (v0.5.1).
I ran install.bat, but it keeps reporting "Unable to find git.exe, exiting...". :unamused:
The git-lfs.exe program is in the same folder. What's wrong :question: :question: :question:
Can somebody help me, please?
Yours,
Suriyaa Kudo :octocat:
git.exe must be in your path.
Here is the relevant snippet from install.bat:
:: Check if git.exe is in the user's path before continuing
where /q git.exe
if %errorlevel% neq 0 (ECHO Unable to find git.exe, exiting... & EXIT /b %errorlevel%)
So, I guess you don't have git installed. At least not in the way needed by install.bat.
I have Git installed.
What is the output of where /q git.exe when run in a shell?
My answers:
What is the output of where /q git.exe when run in a shell?
Git: "C:\Program Files (x86)\Git\bin\sh.exe" --login -i
Git Shell: C:\Users\Suriyaa\AppData\Local\GitHub\GitHub.appref-ms --open-shell
Which version do you have installed?
Git: 1.9.5.msysgit.0
Git Shell: 1.9.5.github.0
GitHub for Windows: "I Wear Goggles When You Are Not Here" (2.14.5.1) cc018b2
Do you have the maintained build installed or any third party software (like Github for Windows)?
Maintained build version 1.9.5 of Git for the Windows platform and GitHub for Windows (include also Git Shell)
Does it help if you adjust the path, i.e. add the directory where your git.exe is located?
No. I tried to run it from Git folder but it doesn't work. Then I tried to run it under Git\cmd and Git\lib. But it still does not work.
Since you have GitHub for Windows, it should have an option to install command line tools, which includes Git LFS. Did you try that?
No. How should I do it?
Hi guys,
I found the problem!
I tried to run install.bat but it still doesn't work.
Then I copied the git-lfs.exe file into the Git/bin folder and now it works!
Ah, thanks for the update. The Git LFS windows installer could definitely use some work.
| gharchive/issue | 2015-06-07T08:56:18 | 2025-04-01T06:38:47.903781 | {
"authors": [
"SuriyaaKudoIsc",
"michael-k",
"technoweenie"
],
"repo": "github/git-lfs",
"url": "https://github.com/github/git-lfs/issues/371",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1617818660 | [Release Radar Request] Microsoft Kiota v1.0
Open Source Project name
Microsoft Kiota
What is your project?
Kiota is a modern client code generator for OpenAPI and REST. It supports multiple languages and provides model classes, a chained API surface to build requests and much more!
Kiota differentiates itself from other generators by its simplicity and its unopinionated choices about serialization formats/libraries, HTTP clients, and authentication schemes/libraries.
Additionally, Kiota enables selecting just the parts of the API that you need to call and generates out a client for your needs.
Version
1.0.0
Date
2023-03-09
Description of breaking changes
This first major version comes with the C#/dotnet language as a stable language and Go/Java/TypeScript/PHP/Python/Ruby/CLI as preview languages. Right now the generator is available as a CLI and we're working to enable additional experiences (GUI/CI/...).
We're also working to get the other languages to a stable maturity level as well as to add more languages.
GitHub Repo
https://github.com/microsoft/kiota
Website
https://microsoft.github.io/kiota
Link to changelog
https://github.com/microsoft/kiota/blob/main/CHANGELOG.md
Social media
upcoming
Anything else to add?
No response
For information the website now moved to here https://learn.microsoft.com/openapi/kiota/
Thank you for sharing this release in this month's edition! Closing this issue.
Our pleasure! Glad you enjoyed it. For reference, here's the link to the blog post.
| gharchive/issue | 2023-03-09T18:49:11 | 2025-04-01T06:38:47.925658 | {
"authors": [
"baywet",
"mishmanners"
],
"repo": "github/release-radar",
"url": "https://github.com/github/release-radar/issues/161",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
643920645 | URL in description and README broken
The URL in the repository description and the one in the README are pointing to githubschool's copy of the game instead of yours.
Please fix both so they point to your copy of the game at https://githubschool.github.io/github-games-Rebvos/
updated
| gharchive/issue | 2020-06-23T15:06:25 | 2025-04-01T06:38:47.941083 | {
"authors": [
"Rebvos",
"githubteacher"
],
"repo": "githubschool/github-games-Rebvos",
"url": "https://github.com/githubschool/github-games-Rebvos/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
227497121 | Add things to do in Anup's hometown
Added some text
Merging to master
| gharchive/pull-request | 2017-05-09T21:02:58 | 2025-04-01T06:38:47.944125 | {
"authors": [
"asharma-art"
],
"repo": "githubschool/refactored-fiesta",
"url": "https://github.com/githubschool/refactored-fiesta/pull/27",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
656103740 | CircleCI flow out of date
The sign on and authorization flow for CircleCI is no longer in sync with what we describe on the course. This is likely causing folks to drop off and should be resolved.
This seems to be blocking the course, as the instructions diverge significantly from the learner experience.
Thank you for opening this issue, @hectorsector. Depending on the registration numbers and the maintenance that would be involved in keeping this active, do you think this would be a course we should consider sunsetting or deactivating?
I think this might be a good idea. @brianamarie should we reach out to our friends at CircleCI to see if they're interested in transitioning to their organization? I'd be glad to help them get set up with maintaining the course themselves.
I think that's a great idea, @hectorsector. I found this thread from 3 years ago when we worked with Emma, and it looks like she's the director of marketing and reachable at emma@circleci.com. 🎉
Reached out via Halp.
| gharchive/issue | 2020-07-13T20:04:33 | 2025-04-01T06:38:47.960013 | {
"authors": [
"brianamarie",
"hectorsector"
],
"repo": "githubtraining/ci-circle",
"url": "https://github.com/githubtraining/ci-circle/issues/122",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
270515423 | Is this fork dead?
There's no new releases. Should I use this, the original source of the px3 fork?
Official development takes place in upstream. Eventually upstream will be moved here in the future.
| gharchive/issue | 2017-11-02T02:48:27 | 2025-04-01T06:38:47.972145 | {
"authors": [
"NicholasJohn16",
"alehaa"
],
"repo": "gitlist-php/gitlist",
"url": "https://github.com/gitlist-php/gitlist/issues/44",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
423645089 | theme-override not working for private gitlab
I'm currently using gitpitch opensource with my own gitlab server:
git {
repo {
services = [
{
name = "GitLab2"
type = "gitlab"
site = "http://********.prive"
apibase = "http://*********.prive/api/v4/"
apitoken = "**********"
apitokenheader = "PRIVATE-TOKEN"
rawbase = "http://********.prive/"
branchdelim = "~"
default = "true"
}
]
}
}
When I tried to use theme-override : next/PITCHME.css in my PITCHME.yaml, it seems that the CSS is not loaded correctly.
After inspecting the network traffic, it seems that the PRIVATE-TOKEN header is not set for the PITCHME.css request (it is correctly set for the PITCHME.md request):
GET /*****/*****/raw/master/next/PITCHME.css HTTP/1.1
User-Agent: Java/1.8.0_181
Host: ********.prive
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive
[ ... ]
GET /*****/*****/raw/master/next/PITCHME.md?gp=90 HTTP/1.1
Cache-Control: no-cache
PRIVATE-TOKEN: ********
Host: *********.prive
Accept: */*
User-Agent: AHC/2.0
Hi Kévin, the GitPitch open-source server does not support private repos. So the token is not set on any assets fetches, including PITCHME.css. The token on the open-source server was only ever used to avoid API call limits when the open-source server was being used live on gitpitch.com.
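The behavior in Kévin's capture matches that explanation and can be modeled in a couple of lines (purely illustrative pseudologic — not GitPitch's actual server code; the function name is an assumption, the header name comes from the capture above):

```python
# Model of the observed behavior: the open-source server attaches the API
# token only to API-backed fetches (PITCHME.md), never to raw asset fetches
# (PITCHME.css). Illustrative only; not GitPitch's real implementation.
def request_headers(kind, token):
    headers = {"Accept": "*/*"}
    if kind == "api":                       # markdown fetched through the API
        headers["PRIVATE-TOKEN"] = token    # token header is set
    return headers                          # raw asset fetch: no token header
```

This is why the PITCHME.md request in the capture carries PRIVATE-TOKEN while the PITCHME.css request does not.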
For GitPitch private repository support you need either (1) a Pro subscription on gitpitch.com or (2) a license for GitPitch Enterprise that supports public and private repos within self-hosted GitHub, GitLab, and Bitbucket servers.
Oh! So it's not a bug, and it's not necessary for me to do a pull request to fix that?
Not a bug! No PR. I offer the Pro subscription service on gitpitch.com and the Enterprise server to deliver private repo support. The open-source server is public repo only.
| gharchive/issue | 2019-03-21T09:52:45 | 2025-04-01T06:38:47.989496 | {
"authors": [
"gitpitch",
"kserin"
],
"repo": "gitpitch/gitpitch",
"url": "https://github.com/gitpitch/gitpitch/issues/245",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1317748743 | 🛑 MSP-103 is down
In d9dc5d9, MSP-103 (https://msp-portal.gw103.oneitfarm.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MSP-103 is back up in 6fd12a9.
| gharchive/issue | 2022-07-26T06:14:03 | 2025-04-01T06:38:48.063273 | {
"authors": [
"gitsrc"
],
"repo": "gitsrc/upptime",
"url": "https://github.com/gitsrc/upptime/issues/579",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
65246886 | sparklines and line chart from example/theme.go don't look right on raspberry pi (should they?)
Using raspbian wheezy (2015-02-16). Cross compiled example/theme.go and ran it on the console of the pi.
Maybe the pi is missing a font?
I do not have experience with raspberry pi, but I guess the problem may be with unicode display. The sparklines and line chart rely upon unicode, so make sure your terminal and font settings are correct.
Ok, well, it looks like there are ~24 characters that I don't have: the braille ones and the blocks.
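A quick way to check which of these glyphs a terminal font actually has is to print the ranges termui draws from — braille patterns (U+2800–U+28FF) for sparklines and the lower block elements (U+2581–U+2588) for charts. A minimal Python sketch (Python here purely for brevity; termui itself is Go):

```python
# Print the Unicode ranges termui's sparklines and charts rely on.
# If any of these render as boxes or blanks, the terminal font lacks the glyphs.

def glyph_ranges():
    braille = "".join(chr(cp) for cp in range(0x2800, 0x2900))  # braille patterns
    blocks = "".join(chr(cp) for cp in range(0x2581, 0x2589))   # lower blocks 1/8..8/8
    return braille, blocks

if __name__ == "__main__":
    braille, blocks = glyph_ranges()
    print("Braille patterns (%d):" % len(braille))
    print(braille)
    print("Block elements (%d):" % len(blocks))
    print(blocks)
```

Running this on the pi's console would show immediately whether the missing ~24 characters fall in these ranges.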
| gharchive/issue | 2015-03-30T16:12:30 | 2025-04-01T06:38:48.082948 | {
"authors": [
"gizak",
"hagna"
],
"repo": "gizak/termui",
"url": "https://github.com/gizak/termui/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1717527078 | remove unnecessary into that is breaking cairo-test
Pull Request type
Please check the type of change your PR introduces:
[x] Bugfix
[ ] Feature
[ ] Code style update (formatting, renaming)
[ ] Refactoring (no functional changes, no API changes)
[ ] Build-related changes
[ ] Documentation content changes
[ ] Other (please describe):
What is the current behavior?
Issue Number: #52
Lgtm!
https://github.com/all-contributors please add @moodysalem for code, bug
@all-contributors please add @moodysalem for code, bug
| gharchive/pull-request | 2023-05-19T16:38:57 | 2025-04-01T06:38:48.086467 | {
"authors": [
"moodysalem",
"raphaelDkhn"
],
"repo": "gizatechxyz/onnx-cairo",
"url": "https://github.com/gizatechxyz/onnx-cairo/pull/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1791710141 | Feat: QuantizeLinear
Pull Request type
Please check the type of change your PR introduces:
[ ] Bugfix
[X] Feature
[ ] Code style update (formatting, renaming)
[X] Refactoring (no functional changes, no API changes)
[ ] Build-related changes
[ ] Documentation content changes
[ ] Other (please describe):
What is the current behavior?
Current quantization functions do not comply with ONNX quantization operators.
Issue Number: N/A
What is the new behavior?
implement QuantizeLinear
adding tests
Does this introduce a breaking change?
[X] Yes
[ ] No
Other information
Wouldn't it be better to have a separate saturate module with the arithmetic operations instead of having saturate_ everywhere? What do you think @raphaelDkhn? I'm fine with this approach, just thinking about modularity.
You mean a new saturation function in TensorTrait? We could do that, but it would cost more because we'd have to loop over the tensor elements.
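For reference, the ONNX QuantizeLinear semantics this PR targets reduce to y = saturate(round(x / scale) + zero_point). A Python sketch of the int8 case (illustrating the operator's math only — not Orion's Cairo implementation):

```python
# ONNX QuantizeLinear, int8 case: scale the input, round to nearest even
# (Python's round matches ONNX here), shift by zero_point, then saturate.
def quantize_linear(xs, scale, zero_point, lo=-128, hi=127):
    out = []
    for x in xs:
        q = round(x / scale) + zero_point
        out.append(max(lo, min(hi, q)))   # per-element saturation to [lo, hi]
    return out
```

For example, quantize_linear([0.0, 1.0, -1.0], 0.5, 0) gives [0, 2, -2], while out-of-range values clamp to the int8 bounds — the clamping that the per-operation saturate_ helpers implement.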
| gharchive/pull-request | 2023-07-06T14:45:32 | 2025-04-01T06:38:48.091403 | {
"authors": [
"raphaelDkhn"
],
"repo": "gizatechxyz/orion",
"url": "https://github.com/gizatechxyz/orion/pull/141",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2025627043 | Feat: erf operator
Pull Request type
Please check the type of change your PR introduces:
[ ] Bugfix
[x] Feature
[ ] Code style update (formatting, renaming)
[ ] Refactoring (no functional changes, no API changes)
[ ] Build-related changes
[ ] Documentation content changes
[ ] Other (please describe):
What is the current behavior?
Issue Number: https://github.com/gizatechxyz/orion/issues/349
What is the new behavior?
added Erf operator compatible with ONNX Erf.
Does this introduce a breaking change?
[ ] Yes
[x] No
Other information
@raphaelDkhn I modified the Erf implementation to be a conditional statement function
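For context on what an erf port involves numerically: one standard route is a rational polynomial approximation such as Abramowitz & Stegun 7.1.26 (maximum absolute error ≈ 1.5e-7). The Python sketch below illustrates the math only — it is not Orion's Cairo code, which per the comment above uses a conditional-statement formulation:

```python
import math

# Abramowitz & Stegun 7.1.26 rational approximation to erf(x).
# Maximum absolute error is about 1.5e-7 over the real line.
def erf_approx(x):
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    t = 1.0 / (1.0 + 0.3275911 * x)
    poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
           + t * (-1.453152027 + t * 1.061405429))))
    return sign * (1.0 - poly * math.exp(-x * x))
```

An approximation of this accuracy is comparable to what fixed-point implementations typically target when matching ONNX Erf.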
| gharchive/pull-request | 2023-12-05T08:43:46 | 2025-04-01T06:38:48.095926 | {
"authors": [
"HappyTomatoo"
],
"repo": "gizatechxyz/orion",
"url": "https://github.com/gizatechxyz/orion/pull/491",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |