id | text | source | created | added | metadata |
---|---|---|---|---|---|
1684260234
|
Nitpick: Change CLI command profiles to profile
I noticed that the CLI command to check your profiles is profiles:
❯ aiidalab-launch --help
Usage: aiidalab-launch [OPTIONS] COMMAND [ARGS]...
Options:
-v, --verbose Increase the output verbosity of the launcher. Use '-vv' or
'-vvv' for even more verbose output
--help Show this message and exit.
Commands:
exec Directly execute a command on an AiiDAlab instance.
logs Show the logs of a running AiiDAlab instance.
profiles Manage AiiDAlab profiles.
reset Reset an AiiDAlab instance.
start Start an AiiDAlab instance on this host.
status Show AiiDAlab instance status and entry point.
stop Stop an AiiDAlab instance on this host.
version Show the version of aiidalab-launch.
Not unreasonable, but since verdi uses profile, I vote we switch to that in the name of consistency. It's backwards incompatible, sure, but aiidalab-launch is a top-level tool that I think no scripts should rely on?
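The backwards-compatibility concern can be sketched in a few lines. aiidalab-launch is actually built on click, but as an illustration with stdlib argparse (command names here mirror the proposal; the parser itself is a stand-in, not the real CLI): register `profile` as the new name and keep `profiles` as an alias so existing invocations keep working.

```python
import argparse

parser = argparse.ArgumentParser(prog="aiidalab-launch")
subparsers = parser.add_subparsers(dest="command")

# Rename the command to `profile`, but keep the old `profiles`
# spelling as an alias so the rename is not a breaking change.
subparsers.add_parser(
    "profile", aliases=["profiles"], help="Manage AiiDAlab profiles."
)

# Both spellings now parse; argparse records whichever one was typed.
assert parser.parse_args(["profile"]).command == "profile"
assert parser.parse_args(["profiles"]).command == "profiles"
```

The alias could later be dropped after a deprecation period.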
While we're at it, verdi also shows the default profile by default:
❯ verdi profile show
Report: Profile: dev
PROFILE_UUID: 66c852a2bc4b4047a2eef0c018eadc1a
default_user_email: mbercx@gmail.com
options: {}
process_control:
backend: rabbitmq
config:
broker_host: 127.0.0.1
broker_password: guest
broker_port: 5672
broker_protocol: amqp
broker_username: guest
broker_virtual_host: ''
storage:
backend: core.psql_dos
config:
database_engine: postgresql_psycopg2
database_hostname: localhost
database_name: aiida-core_dev
database_password: foobar
database_port: 5432
database_username: mbercx
repository_uri: file:///Users/mbercx/envs/aiida-core/.aiida/repository/dev
test_profile: false
Let's do the same for aiidalab-launch:
❯ aiidalab-launch profiles show
Usage: aiidalab-launch profiles show [OPTIONS] PROFILE
Try 'aiidalab-launch profiles show --help' for help.
Error: Missing argument 'PROFILE'.
I second this!
|
gharchive/issue
| 2023-04-26T04:45:23 |
2025-04-01T04:55:51.662368
|
{
"authors": [
"mbercx",
"unkcpz"
],
"repo": "aiidalab/aiidalab-launch",
"url": "https://github.com/aiidalab/aiidalab-launch/issues/175",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1114227565
|
Do not resolve Python Path symlinks for datafile hash
Fixes #67
In some cases, the Python binaries of multiple virtual environments can
be symlinks that point to the same file. In this case, the hash of the
reentry data file is the same for these environments, since
Path.resolve() also resolves symlinks.
As sys.executable already returns the absolute path to the environment
Python binary, we can just remove .resolve() to avoid resolving the
symlinks.
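The effect can be sketched in a few lines (the paths and md5 hashing here are illustrative stand-ins, not reentry's actual implementation): two venv python symlinks pointing at the same base interpreter collapse to one hash when resolved, but stay distinct when left unresolved.

```python
import hashlib
import os
import tempfile
from pathlib import Path

# Two hypothetical virtualenv `python` binaries that are symlinks
# to the same base interpreter.
tmp = Path(tempfile.mkdtemp())
base = tmp / "python3"
base.write_bytes(b"")  # stand-in for the shared interpreter binary
venv_a = tmp / "venv-a-python"
venv_b = tmp / "venv-b-python"
os.symlink(base, venv_a)
os.symlink(base, venv_b)

def datafile_hash(executable: Path, resolve: bool) -> str:
    # Hash of the interpreter path, used to name a per-environment data file.
    path = executable.resolve() if resolve else executable.absolute()
    return hashlib.md5(str(path).encode()).hexdigest()

# With .resolve() both environments collapse onto the base interpreter,
# so they share one data file (the bug):
assert datafile_hash(venv_a, resolve=True) == datafile_hash(venv_b, resolve=True)

# Without resolving symlinks, each environment keeps its own hash (the fix):
assert datafile_hash(venv_a, resolve=False) != datafile_hash(venv_b, resolve=False)
```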
Strange, if I run the tests locally with tox they all pass. @ltalirz any ideas?
hm... is it possible that the test setup was missing a reentry scan somewhere that is now surfacing because symlinks are no longer resolved?
Codecov Report
Merging #68 (676977b) into develop (32bfd09) will increase coverage by 0.19%.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## develop #68 +/- ##
===========================================
+ Coverage 70.95% 71.14% +0.19%
===========================================
Files 7 8 +1
Lines 451 454 +3
===========================================
+ Hits 320 323 +3
Misses 131 131
Impacted Files
Coverage Δ
reentry/config.py
87.27% <100.00%> (ø)
reentry/__init__.py
100.00% <0.00%> (ø)
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Seems moving up the reentry scan did the trick, thanks @ltalirz!
version 1.3.3 released on pypi
|
gharchive/pull-request
| 2022-01-25T18:28:44 |
2025-04-01T04:55:51.705629
|
{
"authors": [
"codecov-commenter",
"ltalirz",
"mbercx"
],
"repo": "aiidateam/reentry",
"url": "https://github.com/aiidateam/reentry/pull/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
151972521
|
Update neural-networks-2.md
For now, only the table of contents has been translated.
Just wanted to use my previous branch. Please forget about this.
|
gharchive/pull-request
| 2016-04-29T21:27:52 |
2025-04-01T04:55:51.706920
|
{
"authors": [
"stat2ml"
],
"repo": "aikorea/cs231n",
"url": "https://github.com/aikorea/cs231n/pull/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1209179739
|
Add a "View DB" button on the questionnaire form
The "View DB" button expands a table.
The first row of the table holds the names of the DB tables, which act as buttons.
Below that, the attributes are shown with their data, limited to 10 rows.
Estimate: 4 h
I will display the attributes without data.
|
gharchive/issue
| 2022-04-20T05:26:13 |
2025-04-01T04:55:51.722619
|
{
"authors": [
"Nenuahule",
"germanovn"
],
"repo": "aikus/sql-exam",
"url": "https://github.com/aikus/sql-exam/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1047733343
|
Add source distribution (tar.gz)
Would it be possible to add a source distribution (tarball/tar.gz) here on github or on pypi? I would like to add this package to the conda-forge in order to make this package accessible to a wider user base.
@mahnerak @alberttorosyan I am also trying to update the outdated conda-forge aim package and would need to package aimrocks. It's easier and preferred to do that from the sdist package instead of the GH repo. Would that be possible for you to upload a sdist package to pypi please? Thank you.
@alberttorosyan would that be possible to also upload the sdist package to pypi instead of only the wheels? This is important for packaging.
Friendly ping here. A git tag or a sdist file is the only thing needed here. If you add it to your auto-release process, it should be all automatic next time.
|
gharchive/issue
| 2021-11-08T17:47:25 |
2025-04-01T04:55:51.738957
|
{
"authors": [
"d-cunningham",
"hadim"
],
"repo": "aimhubio/aimrocks",
"url": "https://github.com/aimhubio/aimrocks/issues/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
849483985
|
let's go to python3
also fixes test_pretty_j1587.py to be runnable
Transition to python3 was overdue. Nice motivation.
|
gharchive/pull-request
| 2021-04-02T21:06:46 |
2025-04-01T04:55:51.741705
|
{
"authors": [
"BenGardiner",
"salnoobie"
],
"repo": "ainfosec/pretty_j1587",
"url": "https://github.com/ainfosec/pretty_j1587/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
428948614
|
Load testing using Jmeter
Create a JMeter file to test the API Gateway and upload images of results
Commit URL: https://github.com/airavata-courses/Team-Rocket/commit/626de71c1cb56526d648091eb582b27f51077ab4
|
gharchive/issue
| 2019-04-03T20:00:02 |
2025-04-01T04:55:51.769724
|
{
"authors": [
"ATarfe",
"aravindparappil46"
],
"repo": "airavata-courses/Team-Rocket",
"url": "https://github.com/airavata-courses/Team-Rocket/issues/145",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
362903573
|
Wrapper.update() not re-render
Hello!!
Bug description:
I'm trying to test if my function is called:
in MyComponent.jsx:
import React from 'react'
class MyComponent extends React.Component {
calledFunction = () => {
console.log('I was called')
}
render () {
this.calledFunction()
return null
}
}
export default MyComponent
in my MyComponent.test.js:
import React from 'react'
import { mount } from 'enzyme'
import MyComponent from '../components/MyComponent'
describe("<MyComponent />", () => {
const wrapper = mount(<MyComponent />)
it('should call calledFunction()', async () => {
const spy = await jest.spyOn(wrapper.instance(), 'calledFunction')
await wrapper.update()
await expect(spy).toHaveBeenCalledTimes(1)
})
})
The result:
I don't understand something: I expected to see "I was called" twice, because wrapper.update() should force a re-render.
Thank you so much!
The issue is that you're using arrow functions in class fields. You can't spy on them, because by the time you install the spy, the rendered tree already has a reference to the original function.
Instead, do this:
class MyComponent extends React.Component {
calledFunction = this.calledFunction.bind(this);
calledFunction() {
console.log('I was called')
}
render () {
this.calledFunction()
return null
}
}
and things should work fine.
I tried something like:
in MyComponent.test.js
import React from 'react'
import { mount } from 'enzyme'
import MyComponent from '../components/MyComponent'
describe('<MyComponent />', () => {
it('should call calledFunction()', () => {
const wrapper = mount(<MyComponent />)
const spy = jest.spyOn(wrapper.instance(), 'calledFunction')
wrapper.forceUpdate()
expect(spy)
.toHaveBeenCalledTimes(1)
})
})
in MyComponent.jsx:
import React from 'react'
class MyComponent extends React.Component {
calledFunction = this.calledFunction.bind(this)
calledFunction() {
const str = "I was called"
console.log(str)
return str
}
render() {
this.calledFunction()
return <div />
}
}
export default MyComponent
You have these lines in the wrong order:
const wrapper = mount(<MyComponent />)
const spy = jest.spyOn(wrapper.instance(), 'calledFunction')
instead, do this:
const spy = jest.spyOn(wrapper.instance(), 'calledFunction');
const wrapper = mount(<MyComponent />);
logically got this error: ReferenceError: wrapper is not defined
oh lol, sorry. do this:
const spy = jest.spyOn(MyComponent.prototype, 'calledFunction');
const wrapper = mount(<MyComponent />);
thanks you saved my weekend man!
I found something that works.
Add these middle two lines.
wrapper.find(...Your thing).simulate('click'); //Click event on wrapper
await wrapper.instance().forceUpdate(); //THIS ONE
wrapper.update(); //AND THIS ONE
expect(....).toHaveLength(1); //Your assertion here
|
gharchive/issue
| 2018-09-23T01:09:27 |
2025-04-01T04:55:51.780514
|
{
"authors": [
"Jlevett",
"ljharb",
"mi-mazouz"
],
"repo": "airbnb/enzyme",
"url": "https://github.com/airbnb/enzyme/issues/1833",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
156268819
|
Add documentation and unit tests for index parameter for map and forEach methods
Addresses https://github.com/airbnb/enzyme/issues/366
Also added unit tests to cover the index parameter.
LGTM
|
gharchive/pull-request
| 2016-05-23T12:27:52 |
2025-04-01T04:55:51.782188
|
{
"authors": [
"ljharb",
"marcin-mazurek"
],
"repo": "airbnb/enzyme",
"url": "https://github.com/airbnb/enzyme/pull/407",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
729869586
|
React 17 peer support
NPM warning airbnb-prop-types@2.16.0 requires a peer of react@^0.14 || ^15.0.0 || ^16.0.0-alpha but none is installed. You must install peer dependencies yourself. with React 17.
Please release support for the peer dependency.
I put up #74; it's blocked by the lack of an enzyme adapter for react 17.
I put up #74; it's blocked by the lack of an enzyme adapter for react 17.
Check out https://github.com/wojtekmaj/enzyme-adapter-react-17
It might not be the official adapter, but it does work.
Is there no way we can get a version of this published that has v17 support? It doesn't seem the React 17 enzyme adapter is going to arrive anytime soon, and enzyme is only a dev dependency for this package.
What about re-writing the tests to use an alternative testing library? How much does it actually use Enzyme? I can't see much usage of it. This means we could release this without waiting for Enzyme adaptor?
I’m not particularly interested in doing that; and either way, there doesn’t exist an alternative react testing library for testing in a non-browser context.
Hello, there is now an adapter for Enzyme; I will link it here.
Can we please have an update on this? It is really important, as React 17 is growing in popularity.
https://www.npmjs.com/package/@wojtekmaj/enzyme-adapter-react-17
You can see that this package has 300,000 weekly downloads.
No, there isn’t. That’s an unofficial one.
There's enzyme-adapter-react-17 package, Can we re-open this issue again?
@Luksys5 no, because that's an unofficial package. if it worked reliably, i'd have published the official one.
def can benefit from this! thanks op
is there any version of airbnb-prop-types which depends on react 17 version
Not yet, otherwise this issue would be closed.
ok
|
gharchive/issue
| 2020-10-26T20:09:20 |
2025-04-01T04:55:51.800992
|
{
"authors": [
"Luksys5",
"Vankydwivedi",
"deavial",
"jipis",
"leepowelldev",
"ljharb",
"ljrodriguez1",
"sariza369"
],
"repo": "airbnb/prop-types",
"url": "https://github.com/airbnb/prop-types/issues/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
281852364
|
Add missing onKeyDown function to CalendarDay
Eep.
Fixes https://github.com/airbnb/react-dates/issues/899
@ljharb
Coverage decreased (-0.2%) to 85.487% when pulling 4d3b312a5367c7711a78d9e9e4c43824fef08bbf on maja-fix-on-key-down-error into 94e18523fa8559fbf68d667b6f42697d8db3cdf4 on master.
|
gharchive/pull-request
| 2017-12-13T18:34:43 |
2025-04-01T04:55:51.803386
|
{
"authors": [
"coveralls",
"majapw"
],
"repo": "airbnb/react-dates",
"url": "https://github.com/airbnb/react-dates/pull/901",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
446313285
|
[improvement] Add ability to over-write existing outputs
Background
Mentioned on Slack: https://streamalert.slack.com/archives/C3BHE2Z0S/p1558386389107200
Are you on the latest version of StreamAlert? Yes
Description
If you configure a Slack webhook, and want to change it, then trying to rerun ./manage.py output slack with the same name will result in an error. Currently, you have to manually edit conf/outputs.json to remove the output and then retry that command. It would be nice to have a way to over-write or delete the existing output.
Steps to Reproduce
Run ./manage.py output slack and configure an output.
Rerun the same command with the same output name and a new webhook URL and you'll get an error.
Desired Change
Provide a --override flag to delete the current output and add the new one.
👍
Hi @0xdabbad00 @Ryxias , I'm interested in working on this feature. Can I please work on it and submit a PR?
@0xdabbad00 Will add a PR for this feature for release-3-1-0
@jack1902 I'm not sure if you meant to say that you will be adding this feature, but just to clarify, I unfortunately do not have the time to add it.
@0xdabbad00 yeah sorry for the confusion. Coffee hadn't kicked in
@0xdabbad00 I believe this issue can be closed, as the output command has now got some new features :)
Thanks @jack1902 !
|
gharchive/issue
| 2019-05-20T21:14:11 |
2025-04-01T04:55:51.808209
|
{
"authors": [
"0xdabbad00",
"Ryxias",
"VinodKumarLogan",
"jack1902"
],
"repo": "airbnb/streamalert",
"url": "https://github.com/airbnb/streamalert/issues/940",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
569892797
|
Fix config.py to respect firehose being disabled
to: @ryandeivert
cc: @airbnb/streamalert-maintainers
related to: #1158
resolves: #1158
Background
Having just pulled down release-3-0-0 for a fresh deployment, the firehose s3 notification was being passed through even though enabled was set to False
Changes
config.py now respects enabled on the firehose config
Testing
Was able to deploy the infrastructure :)
ran ./tests/scripts/unit_tests.sh
ran ./tests/scripts/pylint.sh
Coverage increased (+0.02%) to 95.926% when pulling 168178a6984962f2dacdaba2b4c82ffae50b4b83 on jack1902:bug/firehose_config into 9718c28d2da86d09b5be487f875e339c37a8a1ea on airbnb:release-3-0-0.
Closing this PR as work has been carried out by @ryandeivert based on this PR
|
gharchive/pull-request
| 2020-02-24T14:28:49 |
2025-04-01T04:55:51.813074
|
{
"authors": [
"coveralls",
"jack1902"
],
"repo": "airbnb/streamalert",
"url": "https://github.com/airbnb/streamalert/pull/1160",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
305730015
|
Add an Express installation doc
Adds an installation doc for Express.
Thanks @thompiler!
|
gharchive/pull-request
| 2018-03-15T21:44:07 |
2025-04-01T04:55:51.813984
|
{
"authors": [
"mmcdaris"
],
"repo": "airbrake/airbrake-docs",
"url": "https://github.com/airbrake/airbrake-docs/pull/223",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
260910673
|
feat(erc20): adding support for erc20
So, I tried to add support for ERC20-Tokens in this PR.
I have not tested it using a real TX so far, but I'm opening this PR to get input on whether this is the correct way to do it.
It feels a bit hacky to add it this way; it works for this case, but considering BTC support is a longer-term goal, we probably need to refactor a bunch of stuff.
Maybe we should think about using a slim JS framework to keep the UI in sync? It works for now, but document.createElement isn't exactly pretty.
See #13
@dschoeni can you directly add all the common testnets? See the top right on MEW
|
gharchive/pull-request
| 2017-09-27T09:42:00 |
2025-04-01T04:55:51.928379
|
{
"authors": [
"dcale",
"dschoeni"
],
"repo": "airgap-it/airgap-web-signer",
"url": "https://github.com/airgap-it/airgap-web-signer/pull/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2197226226
|
Fix test coverage
Fix test coverage
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 100.00%. Comparing base (41f857f) to head (68f9d99).
Additional details and impacted files
@@ Coverage Diff @@
## main #4 +/- ##
============================================
+ Coverage 76.71% 100.00% +23.28%
============================================
Files 5 4 -1
Lines 73 70 -3
Branches 6 6
============================================
+ Hits 56 70 +14
+ Misses 14 0 -14
+ Partials 3 0 -3
|
gharchive/pull-request
| 2024-03-20T10:51:11 |
2025-04-01T04:55:51.932573
|
{
"authors": [
"codecov-commenter",
"joostlek"
],
"repo": "airgradienthq/python-airgradient",
"url": "https://github.com/airgradienthq/python-airgradient/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
766196379
|
Automate publishing releases on GitHub
This will automatically publish a release including the latest changelog on GitHub when the release lane is run.
The latest release (2.3.1) was created using this lane.
Running the whole release lane on CI whenever a prepare pr is merged would be the next step in automating the release process.
@nathan-mohemian Updated .gitignore and Fastfile. 👌
|
gharchive/pull-request
| 2020-12-14T09:29:57 |
2025-04-01T04:55:51.940011
|
{
"authors": [
"daniel-mohemian"
],
"repo": "airsidemobile/JOSESwift",
"url": "https://github.com/airsidemobile/JOSESwift/pull/249",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
248083219
|
Custom theme variables have no effect on the Datetime component
Defined in the custom theme theme.less:
@datetime-header-item-confirm-font-color: #000;
vux-loader is configured correctly and other components have picked up the theme, but the styling of the datetime component is still unchanged.
Cannot reproduce.
|
gharchive/issue
| 2017-08-04T18:30:34 |
2025-04-01T04:55:51.968652
|
{
"authors": [
"airyland",
"joeslee"
],
"repo": "airyland/vux",
"url": "https://github.com/airyland/vux/issues/1797",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
340478035
|
[Bug Report] setFocus issue with the Search component
VUX version
2.9.2
OS/Browsers version
Mainstream browsers
Vue version
2.5.17-beta.0
Code
setFocus () {
this.$refs.search.setFocus()
}
Steps to reproduce
Example:
1. Tap the search button on the home page to navigate to the search page.
2. setFocus is a method; when entering the search page, I call it in the created hook.
3. The browser throws: "'setFocus' of undefined"
What is Expected?
Is there a way to gain focus automatically when the user navigates in from another page? That way, on arriving at the search page, the search box would already be focused and they could start typing right away. The official docs seem to say focus is only gained by tapping.
What is actually happening?
The browser throws: "'setFocus' of undefined"
It seems the error goes away if I call setFocus in mounted instead.
|
gharchive/issue
| 2018-07-12T03:37:58 |
2025-04-01T04:55:51.971983
|
{
"authors": [
"kinder1995"
],
"repo": "airyland/vux",
"url": "https://github.com/airyland/vux/issues/2923",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1690058625
|
⚠️ Turnos 24x7 has degraded performance
In eabf659, Turnos 24x7 (https://reservas-dev.alternativasinteligentes.com/) experienced degraded performance:
HTTP code: 200
Response time: 994 ms
Resolved: Turnos 24x7 performance has improved in 38f7f4f.
|
gharchive/issue
| 2023-04-30T19:55:13 |
2025-04-01T04:55:51.974563
|
{
"authors": [
"AlternativasInteligentes"
],
"repo": "aisa-status/status-upptime-dev",
"url": "https://github.com/aisa-status/status-upptime-dev/issues/2007",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1694620499
|
⚠️ Turnos 24x7 has degraded performance
In 64f7509, Turnos 24x7 (https://reservas-dev.alternativasinteligentes.com/) experienced degraded performance:
HTTP code: 200
Response time: 953 ms
Resolved: Turnos 24x7 performance has improved in b9c3958.
|
gharchive/issue
| 2023-05-03T18:56:19 |
2025-04-01T04:55:51.977141
|
{
"authors": [
"AlternativasInteligentes"
],
"repo": "aisa-status/status-upptime-dev",
"url": "https://github.com/aisa-status/status-upptime-dev/issues/2041",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1706651379
|
⚠️ Turnos 24x7 has degraded performance
In ab35828, Turnos 24x7 (https://reservas-dev.alternativasinteligentes.com/) experienced degraded performance:
HTTP code: 200
Response time: 922 ms
Resolved: Turnos 24x7 performance has improved in 2dc4d7b.
|
gharchive/issue
| 2023-05-11T22:22:47 |
2025-04-01T04:55:51.979565
|
{
"authors": [
"AlternativasInteligentes"
],
"repo": "aisa-status/status-upptime-dev",
"url": "https://github.com/aisa-status/status-upptime-dev/issues/2147",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2414911883
|
Unable to build docker container
when running docker compose up -d, the following error is returned
failed to solve: process "/bin/sh -c apt-get update && apt-get install -y --no-install-recommends gcc libpq-dev && apt-get install -y libc-bin=2.36-9+deb12u6 && apt-get clean && rm -rf /var/lib/apt/lists/*" did not complete successfully: exit code: 100
Resolved this with the following change to the Dockerfile
# Update system and install required packages
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc \
libpq-dev \
libc-bin \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
...
# Upgrade all packages in the final stage to ensure the latest versions are applied
RUN apt-get update && apt-get upgrade -y \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
|
gharchive/issue
| 2024-07-18T00:49:26 |
2025-04-01T04:55:52.050882
|
{
"authors": [
"ajb426"
],
"repo": "ajb426/user_management",
"url": "https://github.com/ajb426/user_management/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
336491566
|
Broken unit test: “Should have guessed profile 'mum to one boy and one girl' was 'female', not 'male'”
Split off from https://github.com/ajdavis/twitter-gender-distribution/issues/16#issuecomment-378813452.
$ python2.7 -m unittest discover -v
test_declared_gender (tests.test_declared_gender.TestDeclaredGender) ... FAIL
======================================================================
FAIL: test_declared_gender (tests.test_declared_gender.TestDeclaredGender)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/twitter-gender-distribution/tests/test_declared_gender.py", line 50, in test_declared_gender
description, expected_gender, guess))
AssertionError: Should have guessed profile 'mum to one boy and one girl' was 'female', not 'male'
----------------------------------------------------------------------
Ran 1 test in 0.080s
FAILED (failures=1)
git bisect says that f2725606923164be1436789febbde6bf99cd1d54 is the first bad commit.
Let's say, if there is more than one gender clue in the profile text, then guess using first name as if there were no gender clues.
On Thu, Jun 28, 2018 at 1:28 PM Jackie Scholl wrote:
Yup, looks like I did that! You can definitely fix it by reverting part or all of f2725606923164be1436789febbde6bf99cd1d54, though if you do that you will break, e.g., "guy who loves his mum." I'm not sure which formulation is more common, so I don't know if it makes more sense to change the test or change (back) the priority order.
Regardless, I have no personal stake and I'm happy with whatever ends up happening :)
|
gharchive/issue
| 2018-06-28T06:36:25 |
2025-04-01T04:55:52.059108
|
{
"authors": [
"ajdavis",
"lucaswerkmeister"
],
"repo": "ajdavis/twitter-gender-distribution",
"url": "https://github.com/ajdavis/twitter-gender-distribution/issues/20",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1571664532
|
AttributeError: module 'collections' has no attribute 'MutableMapping' when using python 3.10
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 187, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/lib/python3.10/runpy.py", line 146, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "/usr/lib/python3.10/runpy.py", line 110, in _get_module_details
__import__(pkg_name)
File "/usr/lib/python3.10/site-packages/livereload/__init__.py", line 15, in <module>
from .server import Server, shell
File "/usr/lib/python3.10/site-packages/livereload/server.py", line 20, in <module>
from tornado.wsgi import WSGIContainer
File "/home/ente/.local/lib/python3.10/site-packages/tornado/wsgi.py", line 40, in <module>
from tornado import httputil
File "/home/ente/.local/lib/python3.10/site-packages/tornado/httputil.py", line 107, in <module>
class HTTPHeaders(collections.MutableMapping):
AttributeError: module 'collections' has no attribute 'MutableMapping'
Hello. I'm not actively maintaining this repo. But just by looking at the stack trace, I can see upgrading tornado would make this work.
I would happily take a pull request if you send one.
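For reference, the incompatibility is that the container ABC aliases in the plain collections module were removed in Python 3.10; modern code (including newer tornado) imports them from collections.abc instead. A minimal sketch of the modern spelling (the class below is a toy example, not tornado's HTTPHeaders):

```python
import collections.abc

# Since Python 3.10, `collections.MutableMapping` raises AttributeError;
# the ABC must be taken from `collections.abc` instead.
class CaseInsensitiveDict(collections.abc.MutableMapping):
    """Toy mapping with case-insensitive string keys."""

    def __init__(self):
        self._data = {}

    def __getitem__(self, key):
        return self._data[key.lower()]

    def __setitem__(self, key, value):
        self._data[key.lower()] = value

    def __delitem__(self, key):
        del self._data[key.lower()]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

headers = CaseInsensitiveDict()
headers["Content-Type"] = "text/html"
assert headers["content-type"] == "text/html"
assert len(headers) == 1
```

Upgrading tornado (which made this switch years ago) is the right fix here, rather than patching livereload.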
|
gharchive/issue
| 2023-02-05T23:39:11 |
2025-04-01T04:55:52.077881
|
{
"authors": [
"3nt3",
"ajitid"
],
"repo": "ajitid/live-server",
"url": "https://github.com/ajitid/live-server/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
569288190
|
[QUESTION] customizable time and message?
just wondering if you can edit this to say a specific word/command at any given time.
Do you mean time on a 24 hour clock or an interval? Right now the README explains how to modify interval time and the message, but time of day messages have not yet been implemented. I'm sure it wouldn't be too hard to do yourself if that's what you mean, but if you need help I can take a look.
Closing due to inactivity, but if you still have questions or need help with anything I'll be happy to reopen this
|
gharchive/issue
| 2020-02-22T06:02:00 |
2025-04-01T04:55:52.080341
|
{
"authors": [
"Motzumoto",
"ajmeese7"
],
"repo": "ajmeese7/spambot",
"url": "https://github.com/ajmeese7/spambot/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
146632837
|
Add gausslaguerre
Add a fast gausslaguerre.jl command based on the GLR algorithm, not asymptotics.
Closes #33.
|
gharchive/pull-request
| 2016-04-07T13:52:37 |
2025-04-01T04:55:52.099693
|
{
"authors": [
"ajt60gaibb"
],
"repo": "ajt60gaibb/FastGaussQuadrature.jl",
"url": "https://github.com/ajt60gaibb/FastGaussQuadrature.jl/pull/33",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1730077753
|
Phase out serapeum dependency
This library currently prevents loading cl-duckdb with Clasp as it doesn't compile:
https://github.com/clasp-developers/clasp/issues/1365
Only serapeum:mvlet* and serapeum:count-cpus are currently used, which doesn't warrant the inclusion of such a large library dependency.
DuckDB seems to be using std::thread::hardware_concurrency to determine the number of threads, maybe this could be exposed via the C API.
@snunez1: I've replaced the single call we have to mvlet* with let+
As a replacement for serapeum:count-cpus, perhaps cl-cpus?
This project originally used cl-cpus, but that was even less reliable than the serapeum implementation (by less reliable I mean it gave different results compared to the DuckDB defaults in more cases).
I've opened a PR where I simply open a temporary connection to DuckDB to determine their internal defaults when cl-duckdb is first loaded. This should be more future proof in case they decide to change the logic behind the defaults.
|
gharchive/issue
| 2023-05-29T05:14:49 |
2025-04-01T04:55:52.121056
|
{
"authors": [
"ak-coram"
],
"repo": "ak-coram/cl-duckdb",
"url": "https://github.com/ak-coram/cl-duckdb/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
180347474
|
Problem with set.php
Required Information
PHP version: 5.5.9
PHP Telegram Bot version: latest I think
Using MySQL database: no
Update Method: Webhook
Self-signed certificate: yes
Actual behaviour
What IS happening?
when I open set.php in my browser it returns HTTP error 500, but the webhook is correctly set.
I've tried to add
$result = Longman\TelegramBot\Request::sendMessage(['chat_id' => 'myid', 'text' => 'sample message']); //echo json_encode($result);
after
$telegram = new Longman\TelegramBot\Telegram($API_KEY, $BOT_NAME);
and the message is correctly sent.
the apache error.log say:
PHP Fatal error: Call to a member function isOk() on a non-object in /var/www/html/set.php on line 21
Then, if I browse directly to hook.php it returns
exception 'Longman\TelegramBot\Exception\TelegramException' with message 'Input is empty!' in /var/www/html/phptelebot/vendor/longman/telegram-bot/src/Telegram.php:252 Stack trace: #0 /var/www/html/hook.php(52): Longman\TelegramBot\Telegram->handle() #1 {main}
What can I try to make it work? Thank you
The code is fine.
I don't understand why you get a 500. Could it be that the Telegram servers are down? Did you check?
Then, 'Input is empty!' is correct, because with your browser you are not sending a JSON payload.
Is it possible that my server is missing something? It's an Ubuntu 14 instance I set up on CloudAtCost; what can I check?
I see you are using a custom certificate.
Are the permissions for the certificate file correct? It must be readable by the Apache user, probably www-data.
I know because if I browse to https://api.telegram.org/botMYAPI/getUpdates I get {"ok":false,"error_code":409,"description":"Conflict: can't use getUpdates method while webhook is active"}
About the custom certificate: I tried to set www-data as the group, but nothing changed; the error is still on line 21.
Well, I figured out the errors:
First, I had an old version of longman/telegram-bot (don't know why, I ran composer update a couple of times).
Then, I was trying to use www.domain.com/hook.php as the webhook, but my certificate was for domain.com ... yes, I feel so stupid. Thank you all
Great that you figured it out and that it's working now 🎉
|
gharchive/issue
| 2016-09-30T16:31:27 |
2025-04-01T04:55:52.128320
|
{
"authors": [
"KilluaFein",
"MBoretto",
"noplanman"
],
"repo": "akalongman/php-telegram-bot",
"url": "https://github.com/akalongman/php-telegram-bot/issues/305",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
183231821
|
Very minor fix: changes some methods to camelCase for style consistency
Some methods in stringutilities.py were underscored, while some were camelCased, and so I made all methods camelCase for code style consistency.
@jfsawin after changes did you test all of methods?
I triple-checked all the replacements I did to make sure they didn't change the program logic, but yes, I've also just manually confirmed that all the modified methods still work as before.
|
gharchive/pull-request
| 2016-10-15T20:41:10 |
2025-04-01T04:55:52.129868
|
{
"authors": [
"akalongman",
"jfsawin"
],
"repo": "akalongman/sublimetext-stringutilities",
"url": "https://github.com/akalongman/sublimetext-stringutilities/pull/33",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1749815602
|
Migrate Tensorflow v1 code to v2
Hey, the existing codebase is written in TensorFlow v1, which isn't well supported these days. Can I send in a patch to migrate it to v2?
@ashok-arora that would be really helpful
|
gharchive/issue
| 2023-06-09T12:41:14 |
2025-04-01T04:55:52.154787
|
{
"authors": [
"akaraspt",
"ashok-arora"
],
"repo": "akaraspt/tinysleepnet",
"url": "https://github.com/akaraspt/tinysleepnet/issues/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2292528353
|
Set app.toml
Is there a way to set values in app.toml? I need to change some preconfigured values but cannot find a way to do it.
See the Configuration section here, specifically the introduction. You can use environment variables to change any of the configuration in the files.
e.g. to change the persistent-peers config in the p2p section, for the CosmosHub blockchain, it would be:
GAIAD_P2P_PERSISTENT_PEERS=id@ip
All of those envs are common to all Cosmos chains. What I need is a way to modify app.toml, which may have different keys/values in different chains. For example, modifying Osmosis's max-gas-wanted-per-tx.
The way run.sh works, it initialises the chain, which creates a default app.toml, but there is no way to modify it afterwards.
https://github.com/akash-network/cosmos-omnibus/blob/786abe4107291f0858c4fc613828b81dcdda29f0/run.sh#L204-L217
Something like how peers are updated may work:
https://github.com/akash-network/cosmos-omnibus/blob/786abe4107291f0858c4fc613828b81dcdda29f0/run.sh#L213-L216
I understand this is not trivial, and I may be wrong about how run.sh works, but for now I could not find a way to modify the default app.toml in chains.
@mrcr0cket all Tendermint config, regardless of chain, can be configured like this. None of this is Omnibus specific, Tendermint/Cosmos SDK chains can be configured using environment variables as above. The precedence is command line arguments > env variables > config files.
An example Osmosis config I'm using is OSMOSISD_MIN_GAS_PRICE_FOR_HIGH_GAS_TX=.01
The main thing is determining the section it's in if applicable (i.e. p2p for seeds/persistent-peers, mempool, consensus etc), and the prefix which is always the binary name capitalised.
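A small, hypothetical Python helper to illustrate the naming rule described above (prefix is the binary name upper-cased, then the optional section, then the key, with dashes mapped to underscores). This is not Omnibus code, just a sketch of the convention:

```python
# Illustrative helper (not Omnibus code) encoding the naming rule from
# the thread: BINARY[_SECTION]_KEY, upper-cased, dashes -> underscores.

def config_env_var(binary, key, section=""):
    parts = [binary] + ([section] if section else []) + [key]
    return "_".join(p.upper().replace("-", "_") for p in parts)

print(config_env_var("gaiad", "persistent-peers", section="p2p"))
# GAIAD_P2P_PERSISTENT_PEERS
print(config_env_var("osmosisd", "min-gas-price-for-high-gas-tx"))
# OSMOSISD_MIN_GAS_PRICE_FOR_HIGH_GAS_TX
```

Both outputs match the env variable names quoted in the thread.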
|
gharchive/issue
| 2024-05-13T11:20:17 |
2025-04-01T04:55:52.159575
|
{
"authors": [
"mrcr0cket",
"tombeynon"
],
"repo": "akash-network/cosmos-omnibus",
"url": "https://github.com/akash-network/cosmos-omnibus/issues/803",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1648829800
|
feature request: support client-managed healthchecks (livenessProbe, readinessProbe) and imagePullPolicy
This would solve one of the important use-cases - a self-updating image through livenessProbe.
With the livenessProbe pointed at a handler that queries the releases link and compares it against the current version, the probe would start failing once a new image is available, causing a pod restart that lets the deployment self-update. This requires imagePullPolicy: Always set for this specific deployment.
Add client-managed health checks (livenessProbe, readinessProbe and their parameters) and allow setting imagePullPolicy to Always.
Refs
https://discord.com/channels/747885925232672829/1090670585555730563/1091261402729156608
https://github.com/akash-network/provider/pull/54
https://github.com/akash-network/support/issues/50
https://github.com/arno01/self-update
PoC
Dockerfile
FROM nginx
RUN echo ver1 > /usr/share/nginx/html/index.html
pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - name: container1
      #image: nginx
      image: andrey01/testimage
      imagePullPolicy: Always
      livenessProbe:
        httpGet:
          path: /health2
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
        timeoutSeconds: 15
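The self-update trick in the PoC boils down to a version comparison inside the liveness handler: report healthy while running the latest release, and start failing once a newer one exists, so the kubelet restarts the pod and, with imagePullPolicy: Always, pulls the fresh image. A hedged Python sketch of just the comparison (version strings are made up, and fetching the latest release from the releases link is left out):

```python
# Sketch of the self-update idea: a liveness check that fails once the
# latest published release is newer than the running version, so the
# pod restarts and re-pulls the image. Versions here are illustrative.

def parse_version(v: str):
    return tuple(int(x) for x in v.lstrip("v").split("."))

def liveness_ok(running: str, latest: str) -> bool:
    """Report healthy only while we are on the newest release."""
    return parse_version(running) >= parse_version(latest)

print(liveness_ok("v1.4.0", "v1.4.0"))  # True  -> probe passes
print(liveness_ok("v1.4.0", "v1.5.0"))  # False -> pod restarts, pulls new image
```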
@anilmurty would this be a good experiment for us to create a bounty for the community. Currently, this is needed for Pre-Search.
Andy has more info on it, and link attached to the convo in our discord channel with them.
Potentially, but first we need to clean up the local dev env setup (README on the GitHub repo) for the node and provider repos, so that others can get set up (we did this for Console before engaging the OSS dev community)
|
gharchive/issue
| 2023-03-31T07:30:05 |
2025-04-01T04:55:52.165013
|
{
"authors": [
"andy108369",
"anilmurty",
"bmenzalji"
],
"repo": "akash-network/support",
"url": "https://github.com/akash-network/support/issues/85",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1534321472
|
feat: Add cross-fade mechanism
What this pull request resolves
Add g.AudioUtil
Provides audio fade-in, fade-out, and cross-fade functionality
Add g.Game#onUpdate
A trigger that fires after tick progression (after g.Scene#onUpdate fires)
Add g.Util#clamp()
Add EasingFunction and EasingFinishFunctions
Verification
Verified with the following minimal content
A sample that fades in, cross-fades, and fades out when the red square is clicked
module.exports = () => {
const game = g.game;
const scene = new g.Scene({
game,
assetPaths: [
"/assets/**/*"
],
});
scene.onLoad.addOnce(() => {
const rect = new g.FilledRect({
scene,
cssColor: "red",
width: 32,
height: 32,
touchable: true,
});
rect.onPointDown.addOnce(() => {
const akiurara = game.audio.create(scene.asset.getAudio("/assets/audio/akiurara"));
const yume = game.audio.create(scene.asset.getAudio("/assets/audio/yume"));
scene.setTimeout(() => {
console.log("fadeIn");
g.AudioUtil.fadeIn(game, akiurara, 5000, 0.1);
}, 0);
scene.setTimeout(() => {
console.log("crossFade");
g.AudioUtil.crossFade(game, yume, akiurara, 5000, 0.1);
}, 6000);
scene.setTimeout(() => {
console.log("fadeOut");
g.AudioUtil.fadeOut(game, yume, 5000);
}, 12000);
rect.remove();
});
scene.append(rect);
});
game.pushScene(scene);
};
Does this contain breaking changes?
None
Discussion notes:
fade() → transitionVolume()
Use the word "fade" for "transitionVolume + convenience behaviour": in particular, fadeIn() should also call play()
A from argument is probably unnecessary
If needed, you can change the volume yourself beforehand
Taken to its conclusion, cross-fade would also end up needing two kinds of from, which is a bit awkward
Don't put the return value type into AudioUtil
|
gharchive/pull-request
| 2023-01-16T05:22:37 |
2025-04-01T04:55:52.170868
|
{
"authors": [
"xnv",
"yu-ogi"
],
"repo": "akashic-games/akashic-engine",
"url": "https://github.com/akashic-games/akashic-engine/pull/432",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
409043739
|
Update links to new website: http://www.vtki.org/
Check out the new domain http://www.vtki.org/ where the documentation is hosted.
Note: I have also added email forwarding on that domain which we will need to set up for user support per the JOSS submission guidelines.
Currently, support@vtki.org forwards to my email. Perhaps we create an actual email account under the vtki.org domain that active maintainers can access?
Thanks for doing that! I accepted the google domains request.
It's up to you if we'll create a maintainers email account under the new domain.
I'll just keep the email forwarding for now because it's free. Maybe down the road when vtki grows we can set up a maintainer's email account.
|
gharchive/pull-request
| 2019-02-12T00:14:09 |
2025-04-01T04:55:52.179451
|
{
"authors": [
"akaszynski",
"banesullivan"
],
"repo": "akaszynski/vtki",
"url": "https://github.com/akaszynski/vtki/pull/84",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
957188843
|
Fixes: safePreventDefault Prevents Scrolling With Preact
Fixes #2068
Oh, is this PR still open? I'm facing this issue as well :)
|
gharchive/pull-request
| 2021-07-31T07:05:00 |
2025-04-01T04:55:52.257542
|
{
"authors": [
"MANTENN",
"Th3Fire"
],
"repo": "akiran/react-slick",
"url": "https://github.com/akiran/react-slick/pull/2069",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2384446717
|
Support for SteamChina clients
What problem is this solving
When I start CS Demo Manager with SteamChina (the exclusive client produced for review in Chinese Mainland, for games such as Counter-Strike), it considers that Steam has not been started. The SteamChina client seems to be no different from the original Steam client, except that it is renamed to SteamChina and connects to the Perfect World server in Chinese Mainland by default. I hope support for SteamChina can be added.
Proposed solution
I hope that CS Demo Manager can also be used properly when using the SteamChina client.
Describe alternatives you've considered
Using the original Steam client may cause some connectivity issues, resulting in the inability to open demos properly.
The next version should support it.
I'm using should because I couldn't fully test it.
I installed the client in a VM and stopped going further after seeing this window asking for personal information.
The only difference seems to be the executable name - let me know if it works when the next version is released.
Thank you very much for your reply. I will test it after the next version is released.
I have tested it and it runs very well! :)
|
gharchive/issue
| 2024-07-01T18:09:34 |
2025-04-01T04:55:52.260830
|
{
"authors": [
"EventHorizonV",
"akiver"
],
"repo": "akiver/cs-demo-manager",
"url": "https://github.com/akiver/cs-demo-manager/issues/887",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2580444027
|
Bump Akka.TestKit dependency to 1.5.30 + Akka.TestKit.NUnit package to 1.5.30
Changes
This pull request bumps Akka.TestKit dependency to version 1.5.30 and removes the preview version I introduced in #136 (where Akka.TestKit.NUnit was published as 1.6.0-preview.1). Instead, both Akka.TestKit.NUnit and Akka.TestKit.NUnit3 NuGet packages use the same version as Akka.TestKit, 1.5.30.
Checklist
[x] I have reviewed my own pull request.
@Aaronontheweb this is a follow-up to my comments from the previous PR (you were probably not notified because I posted them after the PR was merged and closed).
https://github.com/akkadotnet/Akka.TestKit.NUnit/pull/136#issuecomment-2338658718
https://github.com/akkadotnet/Akka.TestKit.NUnit/pull/136#issuecomment-2353338911
Important - because Akka.TestKit.NUnit now depends on NUnit 4, it can no longer be used in projects that still have to use NUnit 3. I recommend that you also publish Akka.TestKit.NUnit3 as built by this pull request (i.e. version 1.5.30). This will allow projects that use NUnit 3 to target the latest Akka.NET.
Akka.TestKit.NUnit3 currently available from the Microsoft NuGet registry is at version 1.3.8, which is quite dated. The deprecation message "... please use Akka.TestKit.NUnit (which also supports NUnit3)" made sense up to version 1.5.24, but doesn't make sense now that Akka.TestKit.NUnit no longer supports NUnit 3.
@Aaronontheweb any feedback for the PR?
Sorry @milang - this looks good to me
|
gharchive/pull-request
| 2024-10-11T05:04:16 |
2025-04-01T04:55:52.321564
|
{
"authors": [
"Aaronontheweb",
"milang"
],
"repo": "akkadotnet/Akka.TestKit.NUnit",
"url": "https://github.com/akkadotnet/Akka.TestKit.NUnit/pull/143",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1342549
|
Newsbeuter unresponsive while running bookmark command
I have a question regarding the use of newsbeuter bookmark command option.
I am using a bash script as the bookmark command. It takes the url, parses it and downloads the targets of certain links in the page using wget. The issue I am facing is that newsbeuter becomes unresponsive until the script finishes execution. Is there any way for newsbeuter to run the bookmark script in the background so that I can still use it to, say, read other feeds?
You can make the bash script fork and run in the background.
#!/bin/bash

do_bookmarking() {
    wget "$1"
    # do other stuff here
}

exec >/dev/null 2>&1
do_bookmarking "$@" &
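The same pattern in Python, for comparison: hand the slow work to a detached child process so the bookmark hook returns immediately. The sleeping child stands in for the wget download; nothing here is newsbeuter API:

```python
# Python version of the same trick: spawn the slow work in a detached
# child process so the caller returns immediately, instead of blocking
# the UI until the download finishes.
import subprocess
import sys
import time

def bookmark(url: str) -> None:
    # The child (a 3-second sleep standing in for wget) outlives this
    # call; we deliberately do not wait() on it.
    subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(3)"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )

start = time.monotonic()
bookmark("http://example.com/article")
elapsed = time.monotonic() - start
print(elapsed < 1.0)  # True: the caller did not wait for the child
```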
|
gharchive/issue
| 2011-08-04T06:45:27 |
2025-04-01T04:55:52.344194
|
{
"authors": [
"IsaacG",
"PolevaulterDonkeyman"
],
"repo": "akrennmair/newsbeuter",
"url": "https://github.com/akrennmair/newsbeuter/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
220122824
|
URLs tree
Would be nice to have a way to group URLs into "folders", like a tree. Very useful when there are hundreds of feeds.
#487
Randomly decided to check this project and you've brought up both my ideal features for newsbeuter. Are you a QuiteRSS/RSSowl user looking for a new client?
Are you a QuiteRSS/RSSowl user looking for a new client?
Nope, just a command line lover who likes clean interfaces :)
Just as @kyyu points out, this is a duplicate of #487.
|
gharchive/issue
| 2017-04-07T06:28:14 |
2025-04-01T04:55:52.345979
|
{
"authors": [
"Minoru",
"gtr6ytuuygf",
"kyyu"
],
"repo": "akrennmair/newsbeuter",
"url": "https://github.com/akrennmair/newsbeuter/issues/529",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
769086183
|
UCF101 and HMDB51 data splits
Hi Akshita,
Thanks for sharing your code.
Would you mind sharing more .mat files? For example, the UCF101 and HMDB51 dataset splits that include 'train_loc' and 'test_seen_loc' and so on.
Thanks in advance.
In zero-shot-actions/util.py
matcontent = sio.loadmat(opt.dataroot + "/" + opt.dataset + "/" + opt.class_embedding + "_splits.mat")
trainval_loc = matcontent['trainval_loc'].squeeze() - 1
train_loc = matcontent['train_loc'].squeeze() - 1
val_unseen_loc = matcontent['val_loc'].squeeze() - 1
test_seen_loc = matcontent['test_seen_loc'].squeeze() - 1
test_unseen_loc = matcontent['test_unseen_loc'].squeeze() - 1
I have found them from gzsl-od repo.
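A side note on the `- 1` in the snippet above: the .mat splits store MATLAB-style 1-based indices, and the subtraction shifts them to Python's 0-based indexing before they are used to select samples. A pure-Python illustration with made-up data (scipy's loadmat would normally supply the index arrays):

```python
# Pure-Python illustration of the "- 1" in the snippet above: MATLAB
# indices start at 1, Python's at 0, so the split indices must be
# shifted before indexing into the feature arrays. Data is made up.

def to_zero_based(matlab_indices):
    return [i - 1 for i in matlab_indices]

clips = ["clip_a", "clip_b", "clip_c", "clip_d"]
trainval_loc = to_zero_based([1, 3])     # MATLAB indices 1 and 3
print(trainval_loc)                       # [0, 2]
print([clips[i] for i in trainval_loc])   # ['clip_a', 'clip_c']
```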
|
gharchive/issue
| 2020-12-16T16:43:33 |
2025-04-01T04:55:52.417696
|
{
"authors": [
"kaiqiangh"
],
"repo": "akshitac8/tfvaegan",
"url": "https://github.com/akshitac8/tfvaegan/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1932840485
|
Image in the karma blog is very big
Description
Describe the bug:
In the Power of Knowledge blogs, the image in the Karma blog is very big, and it also needs some margin/padding.
Screenshots
Hey,I'd like to work on this issue.
Assign this issue to me; I want to work on it.
|
gharchive/issue
| 2023-10-09T11:18:13 |
2025-04-01T04:55:52.419781
|
{
"authors": [
"Anshul439",
"Jatingupta9120",
"adityadeshpande09"
],
"repo": "akshitagupta15june/Moksh",
"url": "https://github.com/akshitagupta15june/Moksh/issues/1370",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2290867227
|
Broken of links in smaller width
Describe the bug
When we reduce the width, all the links break into two parts, which is not good UI. We should prevent the links from breaking in two.
To Reproduce
Steps to reproduce the behavior:
Go to 'https://akshitagupta15june.github.io/PetMe/'
Change the size by using developer tools in the browser
See error
Expected behavior
Links should not break; they should stay on one line
Screenshots
You can see the error
Hi maintainers pls assign me this issue thank you
|
gharchive/issue
| 2024-05-11T12:08:48 |
2025-04-01T04:55:52.425400
|
{
"authors": [
"upsaurav12"
],
"repo": "akshitagupta15june/PetMe",
"url": "https://github.com/akshitagupta15june/PetMe/issues/1511",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1330959381
|
Add favicon
The website could use an awesome favicon; currently it doesn't have one.
I would like to work on this. Can you assign me?
@narayan954
nope I can't @SushanthReddy-49
Ok @narayan954
Hi, I want to work on this issue; can you please assign it to me?
Hi can I work on it ?
I would like to work on this issue,can you please assign me
This issue is completed
|
gharchive/issue
| 2022-08-07T09:14:45 |
2025-04-01T04:55:52.427954
|
{
"authors": [
"CodeWAdi",
"Pratham-19",
"SushanthReddy-49",
"akshitagupta15june",
"narayan954",
"sambhavgupta0705"
],
"repo": "akshitagupta15june/PetMe",
"url": "https://github.com/akshitagupta15june/PetMe/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1922413426
|
Added Chatbot Functionality for the website
Fixes: #257
What Does this PR do?
This PR creates a Chatbot and also adds functionalities to redirect to contact, report & donate.
And this is the second screenshot, when you click the chatbot
Please don't create a PR without the issue being assigned
@akshitagupta15june Sister, it was my first PR and I did take help from ChatGPT, but I worked 4 hours continuously. You did not even review my work before closing it
|
gharchive/pull-request
| 2023-10-02T18:26:52 |
2025-04-01T04:55:52.430252
|
{
"authors": [
"Aloksh42",
"akshitagupta15june"
],
"repo": "akshitagupta15june/PetMe",
"url": "https://github.com/akshitagupta15june/PetMe/pull/1090",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
179880654
|
Testing in appium
Version
Tell us which versions you are using:
react-native-router-flux v3.35.0
react-native v0.33.0
When I load the app in Appium inspector, I can't see an id or name I can use for automation of the tabs. Is there an attribute I can set on the corresponding Scene tag?
Never mind, I got it working with accessibilityLabel
As far as I can tell there is no accessibility label on the left and right buttons in the navigation bar. How did you manage to click on those?
|
gharchive/issue
| 2016-09-28T20:31:45 |
2025-04-01T04:55:52.434761
|
{
"authors": [
"mmzoo",
"ptah23"
],
"repo": "aksonov/react-native-router-flux",
"url": "https://github.com/aksonov/react-native-router-flux/issues/1229",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1938706765
|
add kargo logo to readme page
Now that we're officially counting this as a Kargo sub-project, the AI-generated octopus can go and we can slap the Kargo logo on the README -- at least until we get a Kargo Render logo. (If we do.)
Correct logo can be found here: https://github.com/akuity/kargo/blob/main/kargo-logo.png
Layout should be similar to Kargo's README
@krancour I want to work on this issue, can you assign me?
@anevski-stefan all yours!
@krancour There was already one PR with these changes; the first issue asked to update and include the logo, so this one is a duplicate.
|
gharchive/issue
| 2023-10-11T20:49:50 |
2025-04-01T04:55:52.437345
|
{
"authors": [
"anevski-stefan",
"krancour"
],
"repo": "akuity/kargo-render",
"url": "https://github.com/akuity/kargo-render/issues/186",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1255998551
|
Faction Manager seems to believe I have about twice as much money as I do, and it's making Ascend fail.
As posted in the comment, I went ahead and updated the scripts to see if that would fix the problem, but it didn't make any difference
bitburnerSave_1654089680_BN9x2.zip
.
Why did you close it? Did you figure out what happened?
|
gharchive/issue
| 2022-06-01T13:24:42 |
2025-04-01T04:55:52.485272
|
{
"authors": [
"Thedakdragon",
"alainbryden"
],
"repo": "alainbryden/bitburner-scripts",
"url": "https://github.com/alainbryden/bitburner-scripts/issues/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
905553932
|
Add method partition(::AbstractMatrix, ...)
Also, relax signature of partition(::AbstractVector, ...) to allow any
vectors, not just vectors of integers.
Fix #361.
@giordano Thanks for this.
It remains to relax the signature altogether to include the tables case. As tables are defined by a trait, we need trait dispatch here. Have a look at the implementation of MLJModelInterface.matrix for a similar case: https://github.com/alan-turing-institute/MLJModelInterface.jl/blob/275beda8e90e7946cd408ab84ca59c8efc34b407/src/data_utils.jl#L26 You can use Tables.istable instead of vtrait (MLJModelInterface does not have Tables as a dep).
Ok, thanks, I relaxed the signature, so now the argument is X::Any and we check at runtime whether X is a matrix/table.
I don't think having a separate method would speed up anything: partition itself for a matrix isn't superfast (for a small matrix it takes time of the order of units of microseconds), which dwarfs the time to look up a specific method, and the X isa AbstractMatrix check should hopefully make it as good as having a dedicated method.
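For readers less familiar with Julia, the relaxed design is the same idea as a single Python entry point that branches on the runtime type instead of defining one method per input type. A rough sketch with toy "vector" and "table" inputs; the fraction argument and data shapes are illustrative, not MLJBase's API:

```python
# Rough Python analogue of the relaxed signature: one public
# partition() that accepts "anything" and checks the type at runtime.
# The toy "table" is a dict of equal-length column lists.

def partition(X, fraction=0.5):
    if isinstance(X, dict):  # table-like: split every column the same way
        n = len(next(iter(X.values())))
        k = int(n * fraction)
        train = {c: v[:k] for c, v in X.items()}
        test = {c: v[k:] for c, v in X.items()}
        return train, test
    k = int(len(X) * fraction)  # vector- or matrix-like: slice rows
    return X[:k], X[k:]

print(partition([1, 2, 3, 4]))
# ([1, 2], [3, 4])
print(partition({"a": [1, 2, 3, 4], "b": [5, 6, 7, 8]}))
# ({'a': [1, 2], 'b': [5, 6]}, {'a': [3, 4], 'b': [7, 8]})
```

As the comment argues, the branch cost is negligible next to the work of the split itself.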
|
gharchive/pull-request
| 2021-05-28T14:54:40 |
2025-04-01T04:55:52.505322
|
{
"authors": [
"ablaom",
"giordano"
],
"repo": "alan-turing-institute/MLJBase.jl",
"url": "https://github.com/alan-turing-institute/MLJBase.jl/pull/561",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
966234698
|
test_mesh_cython and test_mesh_cl failing
Two tests are failing. I initially thought that this might be related to a recent edit to the .gitignore file (ignore .vtk file types) but this does not appear to be the case. I returned to the original code and the tests still fail. I have attached details below. @bb515 do you have any idea what might be causing these tests to fail? It isn't clear to me what these tests do.
FAILED peripy/test/test_regression.py::TestRegression::test_mesh_cython - AssertionError: assert [b'BINARY', b...x00\x00', ...] == [b'BINARY', b...x00\x00', ...]
FAILED peripy/test/test_regression.py::TestRegression::test_mesh_cl - AssertionError: assert [b'BINARY', b...x00\x00', ...] == [b'BINARY', b...x00\x00', ...]
E AssertionError: assert [b'BINARY', b...x00\x00', ...] == [b'BINARY', b...x00\x00', ...]
E At index 16 diff: b'CELLS 4225 12544' != b'CELLS 4224 16768'
E Left contains 105 more items, first extra item: b'6\xe2\xeb\x1cC-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00?'
E Use -v to get the full diff
You can view the tests in peripy/test/
These tests are listed below. They make sure that the mesh is identical to the one that was given with the package.
The meshes are stored in "peripy/test/data" and you probably haven't changed anything in there, but if you gitignore .vtk files then these meshes will not get pushed to the repo, so you can't gitignore the .vtk extension. Instead, you could gitignore the individual files that you create when making a new example.
I would recommend when making a new example, creating a new directory "example3" and just gitignoring that directory.
def test_mesh_cython(self, regression_cython, data_path, tmp_path):
    """Ensure mesh file is identical."""
    model, displacements, damage, *_ = regression_cython
    path = data_path
    mesh = tmp_path / "mesh.vtk"
    expected_mesh = path / "expected_mesh.vtk"
    model.write_mesh(mesh, damage, displacements)
    assert (
        mesh.read_bytes().split(b"\n")[2:] ==
        expected_mesh.read_bytes().split(b"\n")[2:]
    )

def test_mesh_cl(self, regression_cl, data_path, tmp_path):
    """Ensure mesh file is identical."""
    model, displacements, damage, *_ = regression_cl
    path = data_path
    mesh = tmp_path / "mesh.vtk"
    expected_mesh = path / "expected_mesh_cl.vtk"
    model.write_mesh(mesh, damage, displacements)
    assert (
        mesh.read_bytes().split(b"\n")[2:] ==
        expected_mesh.read_bytes().split(b"\n")[2:]
    )
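The assertion in both tests boils down to: two files hold the same mesh if their bytes match after dropping the first two lines, which typically hold the VTK version line and title and may legitimately differ between runs. A standalone sketch of that check using temporary files (file contents are made up):

```python
# Standalone sketch of the regression check above: compare two VTK
# files byte-for-byte after skipping the first two header lines.
import pathlib
import tempfile

def same_mesh(a: pathlib.Path, b: pathlib.Path) -> bool:
    return a.read_bytes().split(b"\n")[2:] == b.read_bytes().split(b"\n")[2:]

with tempfile.TemporaryDirectory() as d:
    p1 = pathlib.Path(d, "mesh.vtk")
    p2 = pathlib.Path(d, "expected_mesh.vtk")
    p1.write_bytes(b"# vtk DataFile Version 3.0\nrun A\nDATA\n1 2 3\n")
    p2.write_bytes(b"# vtk DataFile Version 2.0\nrun B\nDATA\n1 2 3\n")
    result = same_mesh(p1, p2)  # headers differ, payload identical

print(result)  # True
```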
Fixed it. Reverted to Jim's original testing code, @pytest.mark.skip(reason="Stalling"), which skips the test on Travis if it stalls like this.
|
gharchive/issue
| 2021-08-11T08:33:31 |
2025-04-01T04:55:52.509965
|
{
"authors": [
"bb515",
"mhobbs18"
],
"repo": "alan-turing-institute/PeriPy",
"url": "https://github.com/alan-turing-institute/PeriPy/issues/103",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2285143556
|
Update README.md
Adding a "documentation" badge
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 91.14%. Comparing base (8c482e6) to head (fdf4003).
Additional details and impacted files
@@ Coverage Diff @@
## main #215 +/- ##
=======================================
Coverage 91.14% 91.14%
=======================================
Files 45 45
Lines 2393 2393
=======================================
Hits 2181 2181
Misses 212 212
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
gharchive/pull-request
| 2024-05-08T09:33:08 |
2025-04-01T04:55:52.514317
|
{
"authors": [
"codecov-commenter",
"kallewesterling"
],
"repo": "alan-turing-institute/autoemulate",
"url": "https://github.com/alan-turing-institute/autoemulate/pull/215",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1005563740
|
[DOC] Add multivariate forecasting examples to forecasting tutorial
Update by @fkiraly: examples for multivariate forecasting should be added to the forecasting notebook.
Are there any multivariate forecasting examples?
No, currently not - good point! This should be added.
I think it's a good first issue?
@kejsitake has expressed interest - assigned to track interest, revisit later.
now included by https://github.com/alan-turing-institute/sktime/pull/2041
|
gharchive/issue
| 2021-09-23T15:22:00 |
2025-04-01T04:55:52.521353
|
{
"authors": [
"eldritchhh",
"fkiraly"
],
"repo": "alan-turing-institute/sktime",
"url": "https://github.com/alan-turing-institute/sktime/issues/1446",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
925411363
|
Forecasting support for multivariate y and multiple input/output types - working prototype
This is a merge/review-ready sketch of how multivariate y could be supported for multivariate forecasters, alongside the possibility to pass np.array and pd.Series. This is based on the design in STEP no. 5, adapted to the new forecasting base class.
The key ingredients are:
converters parameterized by from, to, and as - in the convertIO module. Besides the obvious conversion functionality, the converters can be given access to a dictionary via reference in the store argument, where information for inverting lossy conversions (like from pd.DataFrame to np.array) can be stored
a new tag y_type which encodes the type of y that the private _fit, _predict, and _update assume internally - for now, it's just one type and not a list of compatible types
some logic in the public "plumbing" area of fit, predict, update, which converts inputs to the public layer to the desired input of the logic layer and back
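To make the store idea concrete, here is a toy Python sketch (illustrative only, not sktime's actual converter API): flattening a table to plain rows is lossy because the column names disappear, so the converter parks them in store, and the inverse conversion reads them back:

```python
# Toy sketch of a lossy converter with a `store` dict: table -> rows
# drops the column names, so we park them in `store`; rows -> table
# reads them back to invert the conversion. Not sktime's API.

def table_to_rows(table, store):
    store["columns"] = list(table)  # remember what the conversion loses
    return [list(row) for row in zip(*table.values())]

def rows_to_table(rows, store):
    cols = store["columns"]  # recover the parked metadata
    return {c: [row[i] for row in rows] for i, c in enumerate(cols)}

store = {}
y_inner = table_to_rows({"temp": [20, 21], "rain": [0, 3]}, store)
print(y_inner)                        # [[20, 0], [21, 3]]
print(rows_to_table(y_inner, store))  # {'temp': [20, 21], 'rain': [0, 3]}
```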
This is for discussion, and I expect major change before we merge.
It might be especially interesting to discuss the following interaction with typing:
the third argument of the converter, the "as" argument, is a scitype, and designed to have the same name as the respective typing composite type. Also see discussion in #965, #973 and #974.
Thinking of the logical continuation of this:
extend this to the other arguments, e.g., X
the individual argument checks like check_X etc should be replaced by checks of the kind check(what, as : scitype), e.g., check(y, "Series") should replace the less generic check_y.
extend this to time series classification, to support 3D numpy arrays, awkward arrays, nested data frames
A "visionary" pathway may see this working with type annotations that are scitypes, not typing machine types, e.g.,
fit(X : Series, Y : Series)
etc
_fit(X: Series, Y : UnivariateSeries)
and checks/conversions being done automatically.
pinging @chrisholder since otherwise I can't add him as reviewer
note: this fails with some of the forecasters that are not yet refactored, since it can happen that fit is overridden but predict is not, in which case the variables written in fit are not addressed.
I see two ways to address this:
wait until the re-factoring #955 is done
add some safety checks that turn off the new code in predict if it wasn't called in fit
I've found a better option no.3 - initialize the fit variables with "safe" defaults that are what they would be for all of the forecasters that haven't been refactored. We can change that later, though it might be in general a good idea to have these "safe" defaults.
ready for review
Hi @fkiraly, I had a quick look now, this seems to be a major API change, will need some more time to look into it.
sure - since it's a dev days priority, shall we try to get any questions and change requests out of the way soon-ish? Meeting later today?
what about renaming y to Y? Is it postponed or decided if we want to do it in general? See also discussion here #967
Martin Walter martin_friedrich.walter@daimler.com, Mercedes-Benz AG on behalf of Daimler TSS GmbH.
https://github.com/Daimler/daimler-foss/blob/master/LEGAL_IMPRINT.md
what about renaming y to Y? Is it postponed or decided if we want to do it in general? See also discussion here #967
I'd say variable renaming is a very invasive core interface change and will affect many files.
Therefore, it's best kept separate from this change, which is not breaking core interface contracts (as it works only on the inside).
Due to the breakage of the core interface, I think it's also worth considering not changing y to Y at all, ever.
I think it could work fine, if we depreciate it properly so that users are aware of it.
hm, what's going wrong with manylinux?
@aiwalter, re. y/Y - let's do this in a different PR.
Following the principle that a PR should either be localized in interface or localized in scope/logic. I.e., it can be broad in terms of logic while affecting interfaces minimally, or being broad in terms of the files/places it affects while having a minimal logic scope, but not both, unless it really can't be avoided.
yes sure, this would be a different PR. I was just curious :)
Is this a blocker for anyone at the moment? If I remember correctly, in our sprint planning, the goal for this was to come closer to an API design. I certainly need more time to think about this. It seems like a big change from the more familiar scikit-learn interface and I'm not sure about the consequences. Perhaps we can have a discussion session on this?
It seems like a big change from the more familiar scikit-learn interface and I'm not sure about the consequences. Perhaps we can have a discussion session on this?
How is that? Not sure whether I understand why you think this.
As you see the tests are all running, and - in addition - all outputs are the same when compared to pre-PR behaviour.
All this adds is the ability to input and receive data types that previously would have raised errors.
Is this a blocker for anyone at the moment?
I feel it's crucial to get to our self-set goal of multivariate forecasting by end of week.
I think the type conversion is necessary to have multivariate forecasting under a compatible interface.
Why: otherwise we have to decide per-estimator whether we want it to rely on pd.DataFrame or pd.Series. This either/or seems a bit unnecessary and contrived, since it would split the forecaster interface down the middle.
I think the easy way to introduce support for multivariate forecasting is to work with the existing check_y function and give it an argument for whether it handles univariate or multivariate series based on a boolean tag.
See above. This would introduce a lot of boilerplate code.
Also note that we are not talking about hypotheticals here, but a working prototype in the PR, which does not change existing interfaces, adds the capability to pass pd.DataFrame and np.array to forecasters (as attempted in #897 by more manual hacking of the check_xyz etc), and makes any future multivariate forecaster downwards compatible.
So, to summarize our discussion at the dev days:
There's no consensus on merging this - @mloning does not like this, based on arguments that are perhaps best explained by himself here. If I were to paraphrase it:
@mloning proposes an alternative solution that, instead of converting, just throws an error if one of the formats is not supported
this would mean we will have forecasters which support np.array, pd.Series, or pd.DataFrame, or combinations thereof. For example, if you pass a pd.Series to a forecaster that supports pd.DataFrame, it would not be converted; instead, an error would be thrown, and the user would have to know that they have to convert the pd.Series to a one-column pd.DataFrame if their data is univariate
In addition, @mloning proposed that we work through all forecasters and make sure they support both pd.Series and pd.DataFrame; hoping that we can use joint interface points where possible, and otherwise make any necessary conversions inside individual forecasters, rather than automated in the base class.
I disagree with this way forward, since:
working through individual forecasters has nothing to do with base class behaviour - i.e., error throw or convert
it's better to convert to the type the user expects than to throw an error and expect the user to make the conversion themselves
complex pipeline composites are much less likely to work out of the box
@mloning's suggestion would lead to an "interface split" between pd.Series and pd.DataFrame forecasters going forward with multivariate implementations, along lines that are not obvious to users
To get this resolved, I will be invoking our governance document, which we wisely wrote in anticipation of a situation like this, to get to a democratic resolution professionally 😃
https://github.com/alan-turing-institute/sktime/blob/main/GOVERNANCE.rst
As per "decision making" section, @mloning (or another core contributor) should now either, asap:
veto this PR, based on STEP05, to formally express disagreement
not veto this PR, and agree that this is merged soon
If a veto happens, core devs have 5 days (excluding week-ends) to look at this with care and cast a public vote, here in the PR. Deadline would be those 5 days after the veto.
I wrote up my view of the discussion in the hackmd file for the forecasting group of dev days: https://hackmd.io/84rpzQEiQiWQy5oQjs7FYw?both. The summary can be found at the bottom of the hack md.
I highly recommend having a look at it, because it presents a different perspective on ML's views than what FK describes above. (And of course, ML is the best person to say what his perspective is)
@TonyBagnall. I think that is broadly correct!
But, from my understanding, the fundamental disagreement is more about time. Markus did not say he outright disliked option 2, but he wanted us to take more time to consider any downsides thoroughly, whereas Franz does not think there are any downsides that require significant time and is concerned that if we do not merge it quickly, it will just continuously get pushed back. Hence (again from my understanding) Franz invoked the governance document to ensure a timely decision is made.
@Lovkush-A, it's not a good idea to post a hackmd link here when the doc is still editable. A random person or malicious bot from the internet can access this, and, for instance, replace links with malicious links.
Can whoever owns this make the hackmd doc read-only? I'll put the content on the wiki in the meanwhile.
is this (broadly) correct?
In my view, yes, @TonyBagnall.
However, the discussion with @mloning convinced me that this may be sub-optimal, because it does not allow individual instances of estimators to accept multiple types, e.g., when an estimator can handle both pd.Series and pd.DataFrame by only using joint interface points (like the imputer).
What I would hence like to change this to, ultimately, is a tag that is a "pass" list. If the input is of a type on the list, fit passes the input to _fit unchanged. If it is not, fit converts it to the first type on the list.
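As a rough sketch of that dispatch - the converter table and function names here are made up for illustration, not sktime's actual internals:

```python
import numpy as np
import pandas as pd

# Hypothetical converter table: (from_type, to_type) -> conversion function.
_CONVERTERS = {
    (pd.DataFrame, pd.Series): lambda y: y.iloc[:, 0],
    (pd.Series, pd.DataFrame): lambda y: y.to_frame(),
    (np.ndarray, pd.Series): lambda y: pd.Series(y),
}


def convert_to_inner(y, inner_types):
    """Pass y through unchanged if its type is on the pass list,
    otherwise convert it to the first (preferred) type on the list."""
    if type(y) in inner_types:
        return y
    preferred = inner_types[0]
    try:
        return _CONVERTERS[(type(y), preferred)](y)
    except KeyError:
        raise TypeError(
            f"no conversion from {type(y).__name__} to {preferred.__name__}"
        )


# A forecaster whose tag says its inner type is pd.Series only:
y_df = pd.DataFrame({"a": [1.0, 2.0, 3.0]})
y_inner = convert_to_inner(y_df, [pd.Series])
```

A y-type tag would then simply hold such a list, with the first entry acting as the preferred conversion target.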
@Lovkush-A, I think this of yours is a very interesting and relevant point:
Thought by LA later, this sounds like a reduction, of multivariate forecasting to univariate forecasting. This suggests a reduction style interface, where user creates multivariate forecaster using reduction function and a univariate forecaster object. Separate to this, can have reductions from multivariate forecasting to tabular regression
I think you indeed identify univariate forecasters correctly as a different scitype than multivariate forecaster.
The subtle point is here that I think it's a sub-scitype, and the inclusion is too "weak" to justify a separate base class.
What do I mean with sub-scitype: multivariate forecasters have the same sci-interface as univariate forecasters whenever presented with a univariate series. When presented with a multivariate series, only multivariate forecasters can do this.
Other examples of this "sub-scitype" relation are:
binary and multi-class classification
transformers that have an inverse transform vs those that don't
supervised regressors that can deal with multivariate target vs those that can't
Note that none of these have distinct base classes in scikit-learn, since the interfaces are too close (I reckon) - a key design principle of scikit-learn is to avoid the anti-pattern of (base) class proliferation. That's what I mean by "weak" sub-scitype - the added complexity of covering both cases in one base class is small.
Another interesting point: even if you have the same base class, you still can do reduction, it just doesn't mutate the general interface, it only removes the "lack of capability".
Examples are:
the "one-vs-rest-classification" reduction strategy, which creates a multi-class capable classifier out of a binary one
the "apply per output variable" reduction strategy which makes a multi-target regressor out of any regressor
I think univariate forecasting vs multivariate forecasting is precisely such a case, most similar to the multi-output regression example.
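To illustrate the analogy with multi-output regression, a minimal per-column reduction wrapper could look like this - all class and method names are hypothetical, not sktime's actual API:

```python
import pandas as pd


class ColumnwiseForecaster:
    """Make a multivariate forecaster from any univariate one by
    fitting a fresh clone per column (per-output reduction)."""

    def __init__(self, forecaster_factory):
        # factory: zero-arg callable returning a new univariate forecaster
        self.forecaster_factory = forecaster_factory
        self.forecasters_ = {}

    def fit(self, y: pd.DataFrame):
        for col in y.columns:
            forecaster = self.forecaster_factory()
            forecaster.fit(y[col])
            self.forecasters_[col] = forecaster
        return self

    def predict(self, fh) -> pd.DataFrame:
        # column-wise predictions reassembled into a DataFrame
        return pd.DataFrame(
            {col: f.predict(fh) for col, f in self.forecasters_.items()}
        )


class NaiveLast:
    """Toy univariate forecaster: repeats the last observed value."""

    def fit(self, y: pd.Series):
        self.last_ = y.iloc[-1]
        return self

    def predict(self, fh):
        return pd.Series([self.last_] * len(fh), index=fh)
```

`ColumnwiseForecaster(NaiveLast).fit(y).predict(fh)` then yields one forecast column per input column, without touching the univariate forecaster's interface.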
What I would hence like to change option no.2 to, ultimately, is a tag that is a "pass" list. If the input is of a type on the list, fit passes the input to _fit unchanged. If it is not, it converts the input to the first type on the list.
so maintain a list of allowable inputs. You will still have to prioritise conversions: if it can take two types and is passed a third, which of the two would be chosen?
have you mocked up the use cases in an example base and subclass forecasters? That may help convince others?
Standard use case 1: passed a type the subclass can handle unary and many to many
Standard use case 2: passed a type the subclass cannot handle, where subclass can handle only one type. Convert and pass
Edge case (?): passed a type the subclass cannot handle, where subclass can handle more than one type. Convert and pass
@Lovkush-A, it's not a good idea to post a hackmd link here when the doc is still editable.
My bad... I have removed the link from my comment. The description should be enough information for people in this group to find the link. (But I suppose there is still the small risk the link has already been scraped...)
The subtle point here is that it's a sub-scitype, and the inclusion is too "weak" to justify a separate base class.
Makes sense to me. Note we do not need any separate base classes to still have a reduction style interface. We can still have something like MakeMultivariate(UnivariateForecaster(), reduction_strategy)
My bad... I have removed the link from my comment.
You wouldn't have needed to, I've changed it to a static wiki page.
To answer your questions, @TonyBagnall
so maintain a list of allowable inputs. You will still have to prioritise conversions, so if it can take two types and pass a third, which of the two it allows would be chosen?
I'd say there is always a "preferred" type if it can take many; that's what conversion would be attempted to. Easy to implement by making the y_type field a list: check membership; if yes, pass unconverted; if not, convert to the first element.
have you mocked up the use cases in an example base and subclass forecasters? That may help convince others?
This PR implements the functionality already (one internal type only).
It does not change behaviour that currently does not throw an error - all tests pass - your case no.1. It's basically all of the current tests, and the example sheets.
It adds functionality where users can pass more inputs - to all forecasters. - your case no.2.
regarding no.3: there currently does not exist, to my knowledge, a forecaster that can handle more than one type. To my knowledge, @mloning's discussion of this case is purely hypothetical given the current code base.
Makes sense to me. Note we do not need any separate base classes to still have a reduction style interface. We can still have something like MakeMultivariate(UnivariateForecaster(), reduction_strategy)
@Lovkush-A, absolutely agreed! That's what #1038 should do.
so I have just discussed this with my research group, and not wanting to muddy the water, but one suggestion (from Chris, supported by Aaron) was that it could be better to achieve this effect with decorators, along the lines of the code below (won't let me upload a notebook). He can comment more himself if this is of any interest.
```python
import numpy as np
import pandas as pd


def support_df(estimator_class):
    estimator_fit = estimator_class.fit

    def fit(self, X):
        if isinstance(X, pd.DataFrame):
            print("Converting df to numpy")
            X = X.to_numpy()
        return estimator_fit(self, X)

    estimator_class.fit = fit
    return estimator_class


def support_list(estimator_class):
    estimator_fit = estimator_class.fit

    def fit(self, X):
        if isinstance(X, list):
            print("Converting list to numpy")
            X = np.array(X)
        return estimator_fit(self, X)

    estimator_class.fit = fit
    return estimator_class


@support_list
@support_df
class Example:
    def __init__(self, name: str):
        self.name = name

    def fit(self, X):
        return isinstance(X, np.ndarray)


numpy_example = np.array([1, 2, 3, 4, 5])
df_example = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
list_example = [1, 2, 3, 4, 5]

numpy_example_cls = Example("numpy")
print(numpy_example_cls.fit(numpy_example))
print(numpy_example_cls.fit(df_example))
print(numpy_example_cls.fit(list_example))  # was df_example twice; this exercises the list path
```
was it could be better to achieve this effect with decorators, along the lines of below (won't let me upload a notebook). He can comment more himself if this is of any interest.
Yes, we've also discussed decorators extensively, see the original STEP05 as well.
The problem with that lies with the extender user archetype (not the notebook/deployment user archetype):
the extender user would have to remember to add/modify decorators. That's similarly error-prone to having to remember to add the checks. It's better to keep "info about the method" localized to tags/capabilities: just one table to fill in, just one failure point, not multiple.
the functionality is not out-of-the-box, i.e., you need to add the decorators
for an extender user, decorators are much harder to debug than the nested function calls, since it's a more advanced programming concept, and obscures the traceback
This is why I believe @mloning was against decorators, and I agreed with him.
I would argue (and potentially this isn't valid, as ML is very different to web dev) that Flask is considered one of the most user-friendly web development packages in Python, and part of the reason it is seen as so user friendly is its use of decorators. Not only is it super simple to use, but it's also self-documenting. Here is an example:
```python
from flask import Flask, url_for
from markupsafe import escape

app = Flask(__name__)


@app.route('/')
def index():
    return 'index'


@app.route('/login')
def login():
    return 'login'
```
Just looking at this code with no knowledge of the application, you know from the decorators alone that there are two GET routes: '/' and '/login'.
I also think a decorator approach much better and more consistently allows people to support their own custom data types. Let's say we support numpy and dataframe as a base in sktime, like so:
```python
@dtype_support.dataframe
@dtype_support.numpy
class SomeClassifier:
    def __init__(self):
        ...
```
If someone then comes along and wants to use a pytorch tensor or something, and all the logic is in the base class, they're going to have to add a check to the top of their fit() that accounts for it. That is confusing, because if you were just looking at the code you'd wonder where the rest of the checks are.
Using decorators still lets users define their own handling of custom data types, but in a manner consistent with the rest of sktime. So, for example:
We define our own custom tensor handler data type.
```python
import torch


def dtype_support_tensor(estimator_class):
    estimator_fit = estimator_class.fit

    def fit(self, X):
        if torch.is_tensor(X):
            X = X.cpu().detach().numpy()
        return estimator_fit(self, X)

    estimator_class.fit = fit
    return estimator_class
```
And then add it to their model in a consistent manner that makes sense.
```python
@dtype_support.dataframe
@dtype_support.numpy
@dtype_support_tensor
class SomeClassifier:
    def __init__(self):
        ...
```
My biggest concern is having too many decorators - as you can see above, three is already getting a bit ugly - so you could group them together. For example, we could define 'dtype_support.univariate' and 'dtype_support.multivariate', which encompass everything @fkiraly wants (dataframes, numpy, etc.), and then when looking at our code base it's super easy to determine what the model supports.
```python
@dtype_support.multivariate
class SomeClassifier:
    def __init__(self):
        ...
```
If someone then comes along and wants to use a pytorch tensor or something.
This is now confusing, because if you were just looking at the code, wouldn't you wonder where the rest of the checks are?
No, not at the top of fit - you wouldn't touch that.
To add a new supported data type, you would add it in the converters module.
Once it is "registered" there, you can make use of pytorch tensors, and potentially write individual methods that use them natively (without conversion); while all already implemented methods would be able to use pytorch tensors by conversion (potentially inefficient, of course, but it runs).
This is much more consistent and just by glancing at my class I know it supports numpy, dataframe and tensor. It also doesn't clutter the body of the class.
In this design, you see whether the algorithm supports xyz by querying a tag - that's consistent with querying any other property.
In your design:
you have at least two sources of information about the forecaster: tags, and decorators. The decorators cannot be queried by the convenient all_estimators mechanism, but you have to sift through the code to find all estimators that can do numpy, say.
it is not clear in which sequence to add the decorators. It is also not clear how the decorators would make sure that support is automatically added. The internal logic seems speculative, and potentially highly brittle.
and then when looking at our code base it's super easy to determine what the model supports.
but that's not what the "user archetype" is willing to do! They want to call the all-estimators method, say, to be able to query all estimators with a property. They will for sure not go manually through all code files and look at the decorator block.
This is not a great way of doing it, I admit, but the point is that you can do what you like in the decorator, so there is definitely a solution for this.
@chrisholder, I agree that decorators are an attractive way of doing this - see the original STEP05 which was written in terms of decorators.
But, as said, the main argument against (brought up by @mloning and which I now follow) is the difficulty for the extender user archetype. The extension guidelines become more difficult to follow than with this suggestion, in my opinion. The less clear interface for inspection (tags vs decorators) is also an argument for the other solution, in my opinion.
So, to speed up this discussion: I think the community and core devs have had ample time to look at this - it's the longest thread in a while if you exclude code annotations; we've also had a dedicated 1+ hour meeting at the dev days. I'd explicitly brought up the opportunity for veto 2 working days ago. I'd say this plus having one more day next week satisfies the "reasonable time" requirement for the "lazy consensus".
So, if no one vetos by end of Monday, I'll merge this. FYI, @mloning - please veto if you don't like this.
(updating from main to prepare for merge - checks should pass but let's wait)
@fkiraly In the governance doc it says:
"To accept proposed changes, we require approval by one core developer (lazy consensus) and no rejection by a core developer (veto right)." - Currently there isn't even a single approval yet? So why do you think you can merge? What's worse is that you know that there is disagreement. I think your comments above are not helpful at all. Note that on GitHub, we even require two approvals (I think that needs to be updated in the governance doc).
"Core developers are expected to give reasonable time to others to give their opinion on the pull request if they’re not confident others would agree." Given that this PR is only 7 days old, that I have asked for more time repeatedly and that the dev days were happening, I don't think that you're giving reasonable time to others.
"When no consensus can be found, any core developer can call for a vote at any point during the discussion." So seems like you can just call a vote. The more important point is that I'd like someone else to review it too.
I will take some more time now to look at the PR, read through the discussions above and write some comments on it.
I've been thinking about the proposed changes in the PR over the last few days. I feel that the discussion has been a bit tense at times, so I'll try to write down my reasons here for why I'm hesitant about this PR.
In general, I think that the proposed solution in this PR is a bit over-engineered: it solves a problem at a level of generality which isn't necessary for supporting multivariate forecasting in sktime. My preference is to work with the current (less general but simpler) input validation/conversion functions.
I think it's a bit over-complicated because we only want to support a few internal data types for the target series. That is, pd.Series for univariate forecasting and pd.DataFrame for multivariate series (supporting pandas data containers of course also gives access to the underlying NumPy arrays). I don't think we want to support other types in the univariate/multivariate forecasting module, or do we?
I believe that supporting these types internally can be solved more easily with the existing input validation/conversion functions (e.g. check_y) and keyword arguments which can be set automatically in the base class based on tags. For example, in fit in the base class, we could call (inside check_y_X):
```python
y = check_y(
    y,
    enforce_univariate=has_tag(self, "univariate-only"),
    enforce_multivariate=has_tag(self, "multivariate-only"),
)
```
with one new boolean tag multivariate-only (as suggested by @aiwalter in some other PR) and a few more lines in check_y to handle the new keyword arguments. I could open a PR to see if that works in practice.
The solution in this PR adds:
non-boolean tags (not even string-valued tags, but type-valued tags, possibly with sets of types)
a new module for conversions, with functions relying on machine types, scitypes and type-valued tags
possibility to support multiple output types
My main worries are the following:
these changes may not be obvious for developers/users familiar with the PyData ecosystem, to me it departs significantly from scikit-learn in terms of input validation/conversion,
in order to understand how the data passes through the estimators (e.g. in inspecting, extending or debugging code), users and developers now have to be aware of:
the internal type (which may no longer be the same for all forecasters)
non-boolean tags
scitypes
the various conversion functions
non-boolean tags and scitypes are not self-explanatory at all I think. So it may make estimators less transparent and less extendible. In the worst case, undoing much of the work of the recent refactoring.
I also don't think that we want to support multiple output types. Wouldn't that complicate higher-level functionality such as composition (e.g. ensembling), or model evaluation (e.g. evaluate), as they would now have to handle different output types (we have seen the complications of that with the current pred_int interface)?
Do we want to support more internal types than the ones above? My preference is to stick to pd.Series and pd.DataFrames. We have seen that supporting multiple internal data types can lead to costly data copies, making it difficult to write efficient composition models, since each model may have to convert first the data it receives from the previous model (we have seen this in the classification setting).
To me, the proposed type conversions seems more relevant for the panel data setting, especially for user convenience, but even there I wouldn't want to support more internal types than we currently support if avoidable, with the exception of awkward array perhaps. But I would be happy to revisit the idea in that context.
Also note that panel and supervised forecasting probably requires a different API altogether (multiple stages of fit), so may not directly build on the univariate/multivariate forecasting API discussed in this PR.
If this PR finds broader approval from others, I'm happy to support the PR, review the code in more detail and suggest improvements (I agree with @lovkush-A that some minor things can still be improved).
Let me know what you think and if my reasons make sense. I'm happy to discuss them in more detail and I'm open to other arguments.
All I will say at this point is that it is a complex issue in code, in engineering and in toolkit strategy. I have discussed it in my group and we were not sure. This haste, trying to force a decision on a Saturday, is I think inappropriate. There is no actual urgency, and it seems like impatience is creating the impression of Franz trying to steam roller compliance or confrontation. Consensus is better. Let's talk next week, I don't want this issue to damage team relationships.
In general, I think that the proposed solution in this PR is a bit over-engineered: it solves a problem at a level of generality which isn't necessary for supporting multivariate forecasting. My preference is to work with the current (less general but simpler) input validation/conversion functions.
Disagreed, in the sense that supporting multivariate forecasting will be cumbersome, unintuitive, or both, without this. It will lead to writing of boilerplate code and user frustration with unclear interfaces.
There's a clear motivation to avoid this, so I disagree that this is "overengineered" - it is fit-for-purpose and elegant in the sense that it makes all existing forecasters ready for the multivariate extension, without breaking interface contracts.
I believe that supporting these types internally can be solved more easily with the existing input validation/conversion functions (e.g. check_y) and keyword arguments which can be set automatically in the base class based on tags. For example, in fit in the base class, we could call (inside check_y_X):
As said above: this would essentially split the forecasters into "`pd.DataFrame` only" and "`pd.Series` only" sub-types. Or it would require extensive manual rework of all forecasters. This is not desirable, so it is not a good suggestion.
these changes may not be obvious for developers/users familiar with the PyData ecosystem, to me it departs significantly from scikit-learn in terms of input validation/conversion,
The changes are "under the hood", and both the standard user archetype (in fit, predict, etc.) and the extender user archetype (in _fit, _predict) will deal with the familiar interfaces. We also avoid user frustration through rejection of `pd.Series` or `pd.DataFrame` on unclear/uninspectable criteria. I don't see a major or "significant" divergence here.
users and developers now have to be aware of:
I think your list incorrectly frames sktime design principles as problems, or incorrectly "blames" existing problems of sktime on this PR. Let me go through your list.
the internal type (which may no longer be the same for all forecasters)
It already isn't, so this is not a change this PR introduces. Search for occurrences of the enforce_numpy flag; we already have numpy vs pandas internally, and you can't blame this on this PR.
scitypes
this is a key design principle of sktime, and advanced users should already be familiar with this. See, e.g., this paper we wrote together. Or, search the code for the string "scitype".
This is nothing that this PR introduces - in fact, it follows core sktime design principles.
the various conversion functions
Look at utils/data_processing - there are a dozen or so from/to conversion functions (for panel time series data) in there, inconsistently named and with inconsistent function signatures. Developers are expected to understand this mess. This is i.m.o. ludicrous; instead there should be a single convert function, see my new PR #1061.
Here's the list of the converters for your edification:
```python
from_3d_numpy_to_2d_array,
from_3d_numpy_to_nested,
from_nested_to_2d_array,
from_2d_array_to_nested,
from_nested_to_3d_numpy,
from_nested_to_long,
from_long_to_nested,
from_multi_index_to_3d_numpy,
from_3d_numpy_to_multi_index,
from_multi_index_to_nested,
from_nested_to_multi_index,
convert_from_dictionary,
```
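For illustration, a single registry-backed convert function - a hedged sketch with made-up names, not the actual #1061 implementation - could replace the pairwise from/to functions:

```python
import numpy as np
import pandas as pd

# Hypothetical converter registry keyed by (from, to) format strings.
_REGISTRY = {}


def register_converter(from_fmt, to_fmt):
    """Decorator registering a conversion function for a format pair."""
    def wrap(fn):
        _REGISTRY[(from_fmt, to_fmt)] = fn
        return fn
    return wrap


def convert(obj, from_fmt, to_fmt):
    """Single entry point replacing the pairwise from_x_to_y functions."""
    if from_fmt == to_fmt:
        return obj
    try:
        return _REGISTRY[(from_fmt, to_fmt)](obj)
    except KeyError:
        raise NotImplementedError(
            f"no converter registered for {from_fmt} -> {to_fmt}"
        )


@register_converter("pd.Series", "np.ndarray")
def _series_to_numpy(y):
    return y.to_numpy()


@register_converter("np.ndarray", "pd.Series")
def _numpy_to_series(y):
    return pd.Series(y)
```

New data containers then plug in by registering converters, with one consistently named entry point instead of a dozen ad-hoc signatures.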
non-boolean tags
This is indeed new, but you agreed in #1013 that non-boolean tags (e.g., string tags) are fine, and they are also needed in your alternate suggestion above.
non-boolean tags and scitypes are not self-explanatory at all I think. So it may make estimators less transparent and less extendible. In the worst case, undoing much of the work of the recent refactoring.
Are they more explanatory than the mess of converters? I think yes.
Is a single tag that lists the data types supported more explanatory than 5 boolean tags? I think yes.
Does it "undo" the recent refactoring? No, because that was done with the purpose to put all the boilerplate stuff in the base forecaster, and not have all the checks and conversions in 100 places in the concrete classes, precisely as this PR suggests.
I also don't think that we want to support multiple output types.
Disagreed - I think the user wants a pd.Series forecast when they pass a pd.Series, and a pd.DataFrame forecast when they pass a pd.DataFrame. Saying "no" in case a forecaster supports multivariate is a source of user frustration. Same for numpy.arrays.
I also agree with @chrisholder that decorators would help here, especially if we want to handle output types, as it keeps the conversion functionality outside of the code of the class and things like converter_store separate from the other class attributes.
Perhaps, but that's an orthogonal point (see #973 and #974) - whether the conversions are in a decorator or explicit in the base class does not touch the point of conversions and simultaneous support of univariate and multivariate data container types.
This haste, trying to force a decision on a Saturday, is I think inappropriate. There is no actual urgency, and it seems like impatience is creating the impression of Franz trying to steam roller compliance or confrontation. Consensus is better. Let's talk next week, I don't want this issue to damage team relationships.
@TonyBagnall, sorry if there was a misunderstanding - I'm not forcing a decision on a Saturday. I do not expect anyone to work on week-ends, and I have said that either someone should veto, or accept that this is merged on Monday (end of day).
That gives enough time to decide whether to veto this.
Once there is a veto, there are 5 more working days to think this through, I think that's ample time.
My fear is that we keep the multivariate issue indefinitely off the table, due to decision making paralysis because none of the potential solutions are perfect. We have been discussing the data container issues since the beginning of sktime, and there is no silver bullet. Perfect is the enemy of good, so if we want to get somewhere with multivariate, we need to stop stalling and get going.
And, of course, friendly working relations in the team should never be in question here. I do appreciate the work that @mloning and you, @TonyBagnall, are doing; I respect and value you as persons and collaborators.
@Lovkush-A, to reply to your high-level comments:
there should be some basic unittests before any PR is merged.
Good point - the current unit test suite tests downwards compatibility, but not the new cases where the input/output data type is flexible, or multivariate cases.
This will be a bit of a project and may also involve some refactoring of the current test suite, so I'll wait for resolution of this discussion.
might need to consider the .name attribute of a pd.Series object as a candidate for the dataframe column name. If this is a concern, then I'd personally be OK with this being turned into an issue for future consideration
Good idea - in the converters, I guess? Or is there another place this would be relevant?
in discussion, you (Franz) say that you think having lists of types for the y_type tag might be better. I agree. Personally I'd be OK with this being turned into an issue for future consideration/adjustments
Yes, I think I've already decided I'd add this once the discussion is resolved positively.
[Unittests] will be a bit of a project and may also involve some refactoring of the current test suite, so I'll wait for resolution of this discussion.
I was only suggesting very basic checks for this PR - e.g. that the converter functions you created run without error on simple inputs - as opposed to testing how this interacts with everything else.
might need to consider the .name attribute of a pd.Series
Good idea - in the converters, I guess? Or is there another place this would be relevant?
My first instinct is it only matters in converters.
All, I looked at this again after having talked to @mloning and @TonyBagnall in the CC meeting and getting feedback on how this discussion felt from @mloning's perspective.
I think I did forget my good manners, regarding tone. My enthusiasm for multivariate and impatience might have gotten the better of me. Very sorry, @mloning.
I do think that disagreement, and even passionate disagreement, is a core ingredient of seeking scientific truth and engineering insight, but it needs to happen in a civil and friendly manner. I'll give it my best.
Regarding the force-merge, that would not be in line with the rules of governance, since, as per reasonable reading, the PR requires at least one approval for that. I could call a vote without such approval in principle, but I've decided not to do this right now, in the interest of giving core developers more time to review (and potentially approve or ask for amendments).
FYI, for those following this discussion: I think the new PR #1074 is meant as a counterproposal to this one.
My thoughts on Markus's concerns above.
it solves a problem at a level of generality which isn't necessary for supporting multivariate forecasting.
To me, this is the big attraction! It is not just about multivariate forecasting, but a general framework for all of sktime. E.g. one of somebody's 'frustrations' in the final dev days meeting (not to do with forecasting) would be straightforwardly solved by this input/output framework. I can't remember the example, but you might remember I said 'That would be solved by Franz's input/output suggestion' and Franz said 'I was just about to say the same thing'
I don't think we want to support other types in the univariate/multivariate forecasting module, or do we?
Unlikely no, but this input/output framework makes it easy to adapt the interface if we do want more (or less) supported types
I believe that supporting these types internally can be solved more easily with the existing input validation/conversion functions (e.g. check_y)
That might be true. On the other hand, Franz has already coded this up and it does not look too complex.
non-boolean tags (not even string-valued tags, but type-valued tags, possibly with sets of types)
If this is an issue, we could make all the tags strings, and have a function that parses the string. E.g. instead of {y_inner_type: set([pd.Series, pd.DataFrame])} could have {y_inner_type: "series;dataframe"} and have function lambda tag: set(tag.split(";"))
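The string-encoded tag idea above can be sketched in a few lines of Python (the "series"/"dataframe" names and the helper function are illustrative, not actual sktime API):

```python
# Sketch of encoding a set of supported types as a delimited string tag
# and parsing it back into a set of concrete types.
import pandas as pd

# Illustrative mapping of string names to machine types (assumption, not sktime API)
_NAME_TO_TYPE = {"series": pd.Series, "dataframe": pd.DataFrame}

def parse_type_tag(tag: str) -> set:
    """Parse e.g. "series;dataframe" into {pd.Series, pd.DataFrame}."""
    return {_NAME_TO_TYPE[name] for name in tag.split(";")}
```

This keeps all tag values string-typed while still allowing set-of-types semantics at the point of use.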
a new module for conversions, with functions relying on machine types, scitypes and type-valued tags
I like this. It means all conversions are in one place in the codebase, as opposed to being split up in various check functions spread out across sktime.
possibility to support multiple output types (not sure if I have understood this correctly?)
I think that is part of the intended design. If a user inputs a dataframe for univariate forecasting, we convert it into series, do the forecasting, then convert it back to a dataframe.
these changes may not be obvious for developers/users familiar with the PyData ecosystem
For me, the main difficulty for developers/users is understanding the actual mathematics and subtleties of time series, rather than the interface.
to me it departs significantly from scikit-learn in terms of input validation/conversion
Yup, certainly a disadvantage. But I think it is a departure in a good way. E.g. I find it annoying in sklearn that I have to manually convert things back to dataframes, and think about whether index and/or column information has been lost. (Though, maybe the issue is I should be using Pipelines more...)
in order to understand how the data passes through the estimators (e.g. in inspecting, extending or debugging code), users and developers now have to be aware of:
the internal type (which may no longer be the same for all forecasters)
non-boolean tags
scitypes
the various conversion functions
I think most of these are fundamental issues of having a comprehensive time series package.
I also don't think that we want to support multiple output types. Wouldn't that complicate higher-level functionality such as composition (e.g. ensembling), or model evaluation (e.g. evaluate)
This is certainly worth thinking about! I don't know enough about these items to judge.
It looks like @fkiraly and I have come to an agreement, you can find our suggestion below. Feedback is very welcome! And thanks again for all the above comments:
which input types do we want to support for which scitype?
univariate/multivariate series: pd.Series, pd.DataFrame, np.ndarray?
FK opinion: there should just be one concept "series" - pd.Series, pd.DataFrame, 2D np.ndarray, 1D np.ndarray
np.ndarray are interpreted as axis 0 indexed by integers
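That convention could be implemented by a converter along these lines (a sketch with a hypothetical helper name, not the actual sktime converter):

```python
import numpy as np
import pandas as pd

def ndarray_to_series(y):
    """Interpret an np.ndarray as a series with axis 0 indexed by integers.

    Sketch only: 1D arrays become pd.Series, 2D arrays become pd.DataFrame,
    both with a RangeIndex along axis 0.
    """
    if y.ndim == 1:
        return pd.Series(y, index=pd.RangeIndex(len(y)))
    if y.ndim == 2:
        return pd.DataFrame(y, index=pd.RangeIndex(len(y)))
    raise ValueError("series scitype expects a 1D or 2D array")
```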
which internal types do we want to support?
FK opinion: internally (in _fit, _predict etc without conversion), we should have "tiers" of support
low tier: pd.Series or pd.DataFrame (uni/multi)
mid tier: support both pd.Series and pd.DataFrame
high tier: supports all of these internally - target state
which output types do we want to support?
same as internal types?
FK opinion:
default is type in = type out
type out can be controlled by tag (override), this is useful when building pipelines, so conversions can happen only at start/end of long pipeline - push towards pandas internally
no strong opinion about type in = type out
ML does not like this
ML opinion:
target state: output should always be pd.DataFrame
FK would be happy with this
short/mid term: output should be pd.Series for all "univariate" tagged estimators, pd.DataFrame for all "multivariate" tagged estimators, input converted to internal format (usually pd.Series for "univariate")
FK is happy with this
how do we want to indicate whether a forecaster handles univariate data, multivariate data or both?
tags?
FK suggestions if we insist on bool:
scitype:multivariate - can handle multivariate series
requires_y_multivariate - cannot handle univariate series
univariate-only, multivariate-only
scitype:y {"univariate", "multivariate", "both"}
univariate: error if dim>1
multivariate: error if dim<2
both: no error from dim
scitype:y {"univariate", "multivariate", "multi_but_not_univariate"}
agreed on:scitype:y {"univariate", "multivariate", "both"}
edge cases
fbprophet which is univariate but requires pd.DataFrame
combined uni/multi forecasters
unit testing
new tags: test that forecasters have valid string values for the y:scitype tag {"univariate", "multivariate", "both"}
maybe just do this for all tags? registry could also record expected values
input conversion: accepts all accepted inputs: pd.Series, pd.DataFrame, 2D np.ndarray, 1D np.ndarray
simply add "variants" of valid inputs to input checks
univariate/multivariate check: error should be raised if dimension too high/low - 2D np.ndarray and pd.DataFrame on one side, 1D np.ndarray on the other side
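The dimension check described in the last bullet might look like the following sketch (function name and error messages are illustrative, not sktime's actual check):

```python
import numpy as np
import pandas as pd

def check_scitype_y(y, scitype_y: str):
    """Raise if y's dimensionality conflicts with the scitype:y tag.

    Sketch of the agreed {"univariate", "multivariate", "both"} semantics:
    pd.Series and 1D np.ndarray count as dim 1; pd.DataFrame and 2D
    np.ndarray use their number of columns.
    """
    if isinstance(y, pd.Series) or (isinstance(y, np.ndarray) and y.ndim == 1):
        n_cols = 1
    else:
        n_cols = y.shape[1]
    if scitype_y == "univariate" and n_cols > 1:
        raise ValueError("this forecaster is univariate: y must have dim 1")
    if scitype_y == "multivariate" and n_cols < 2:
        raise ValueError("this forecaster is multivariate: y must have dim >= 2")
    # scitype_y == "both": no error from dim
```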
with minor comment, we aim for "mid tier" internal type support only, for the next half year or so.
so as steps, I would now:
remove or turn off the output checks, since the solution assumes (at least for short term) that uni/multivariate forecasters return the "right thing"
add the new tag and concomitant checks
add or modify tests to check multi-input-type compatibility and the tag working properly
Sounds good to me. Does that mean @markus' draft PR is being closed and work will continue on this PR?
Yes, I closed #1074.
@fkiraly let me know if you need help with the implementation.
I'm on it, just need to deal with BaseObject first since this one needs tags.
@mloning, @aiwalter , @TonyBagnall, I think this is now ready to review.
The one failing test is due to the newly enabled doctests, which insist I fill in all the docstrings for the tests because I just changed one line.
This should implement the agreed plan, with minor departures:
I also added conversions for X, while I was at it.
I changed some of the check functions to accommodate np.ndarray as discussed - e.g., check_equal_time_index
I changed the check functionality to allow for None (e.g., when X=None) - the general philosophy is to ignore None series and let them pass.
I replaced forecaster specific checks with scitype specific checks where possible, e.g., check_X by check_series. I think this looks much cleaner and much more general now.
I introduced an mtype function which returns a string representation of the machine type. This is to avoid type valued tags, and will also help later when there are two different representations of the same machine type.
I disabled the output conversions by explicitly switching them off rather than removing them entirely, using the constant setting OUTPUT_CONVERSIONS = "off". I think that's a neat compromise for the other side of the "python in python out" ideological fence.
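The mtype function mentioned in the list above can be illustrated with a minimal sketch (the string codes and function body here are assumptions for illustration, not the actual sktime registry):

```python
import numpy as np
import pandas as pd

def mtype(obj) -> str:
    """Return a string code for the machine type of a series container.

    Sketch only: avoids type-valued tags by mapping each supported
    container to a string name; the real registry defines its own codes.
    """
    if isinstance(obj, pd.Series):
        return "pd.Series"
    if isinstance(obj, pd.DataFrame):
        return "pd.DataFrame"
    if isinstance(obj, np.ndarray):
        return "np.ndarray"
    raise TypeError(f"unsupported machine type: {type(obj)}")
```

Such string codes also leave room for two different representations of the same machine type later, as noted above.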
from verbal discussion with @mloning, @mloning reviewed this and did not like two things:
that output conversions were still implemented, just turned off - he thought the option should be removed entirely. That I highlighted as deviation from the agreed plan. @mloning's argument here is the unnecessary code complexity when looking at fit/predict. I concurred with this and agreed to remove entirely.
that I didn't remove the tag for the internal machine type, y_inner_mtype after our discussion. @mloning thought I deviated from the agreement here, though it didn't occur to me that this should have been removed (see list above). @mloning thought that the inner type should be fully determined by the y:scitype tag, and some translation between the y:scitype tag and the convert functionality should happen in fit and update. @mloning thought this was a strong point of contention. I disagreed that the tag should be removed, since I didn't think it was part of the agreement to change this part, and it would add the "glue code" to fit/update as unnecessary complexity in there.
Overall, we agreed that I can make the call to merge this PR - for instance, we can have detail discussions on the necessary tags later, and see it as a delta to this PR rather than a blocker for this PR.
The call I made is as follows:
I remove the output conversions entirely, in line with the spirit of our agreement above, and the argument of code complexity.
I'm keeping y_inner_mtype for the moment, to keep the conversion logic and the content of fit/update simpler; also, in consideration of the time series classification case and facebook prophet where the supposed 1:1 mapping of inner type and scitype is broken. Also, in consideration of potentially slowly adding geomstats like data container functionality to sktime, and development activity where it's not entirely clear yet whether this mapping would be easy to adhere to.
FYI, @aiwalter, @TonyBagnall.
@fkiraly Great effort, Franz! This is one monster of a PR.
I have seen all your comments/resolutions to my review conversations and am happy with decisions made. There was one conversation in which you gave two opposite answers to the same question (whether MvS is always pandas frames or not), but I do not think that is actionable in this PR.
FYI, @mloning: I also made changes to the test which parallel yours in #1074 - the enforce functions were not working properly.
@GuzalBulatova, @thayeylolu, this is now merged - and it should be useable in your work on multivariate forecasters.
|
gharchive/pull-request
| 2021-06-19T15:01:46 |
2025-04-01T04:55:52.628455
|
{
"authors": [
"Lovkush-A",
"TonyBagnall",
"aiwalter",
"chrisholder",
"fkiraly",
"mloning"
],
"repo": "alan-turing-institute/sktime",
"url": "https://github.com/alan-turing-institute/sktime/pull/980",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1813041348
|
Add a filter for TimestampCommitment and BlockHashCommitment
The TrivialCommitment trait includes (optional) support for filtering of the candidate data.
In the case of a TimestampCommitment, the candidate data is a Bitcoin block header. Currently the expected data (a Unix timestamp) is sought inside the whole header, not just the timestamp field. This introduces a possible attack vector. An attacker could produce a header with a different timestamp but which contains, in a different field, the 32 bits matching a different timestamp. Without filtering of the candidate data, this bogus header would be accepted as valid (in the sense that the commitment would verify successfully).
The same applies to the BlockHashCommitment (where we should filter on the Merkle root field).
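To make the attack surface concrete: a serialized Bitcoin block header is 80 bytes, with the timestamp stored as a 4-byte little-endian integer at byte offset 68. The sketch below (in Python, purely for illustration; the actual trustchain code is Rust) contrasts the vulnerable whole-header search with a check restricted to the timestamp field:

```python
import struct

TIMESTAMP_OFFSET = 68  # version(4) + prev_hash(32) + merkle_root(32)

def header_timestamp(header: bytes) -> int:
    """Extract the Unix timestamp from a serialized Bitcoin block header."""
    if len(header) != 80:
        raise ValueError("Bitcoin block header must be exactly 80 bytes")
    (ts,) = struct.unpack_from("<I", header, TIMESTAMP_OFFSET)
    return ts

def unsafe_contains_timestamp(header: bytes, expected: int) -> bool:
    """Vulnerable check: looks for the expected bytes anywhere in the header."""
    return struct.pack("<I", expected) in header

def safe_timestamp_matches(header: bytes, expected: int) -> bool:
    """Filtered check: compares only the timestamp field."""
    return header_timestamp(header) == expected
```

An attacker-controlled header with the expected bytes in, say, the version field passes the unsafe check but fails the filtered one.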
For this we'll need to:
make TimestampCommitment into a trait in trustchain-core, extending the Commitment trait
include a default implementation of the timestamp() method
also include the new() method signature in the trait (with expected_data argument of type Timestamp)
implement the trait in trustchain-ion
include an implementation of the filter() method that filters out everything from the commitment content (a Bitcoin block header) except the timestamp field.
Ok, this seems to work but the default implementation of the timestamp() method inside the TimestampCommitment trait is brittle:
/// A Commitment whose expected data is a Unix time.
pub trait TimestampCommitment : Commitment {
/// Gets the timestamp as a Unix time.
fn timestamp(&self) -> Timestamp {
self.expected_data()
.as_u64()
.unwrap()
.try_into()
.expect("Construction guarantees u32.")
}
}
It would be ok if we could enforce the condition that the expected_data must have type Timestamp, but I can't see a way to do that.
Also, I haven't yet run the implementation tests.
On the plus side, the implementation of verifiable_timestamp in IONVerifier is now a bit simpler, and the filter() is implemented.
Nice one, this is much better and fixes the issue!
I've made a couple of modifications:
Fixed test_block_hash_commitment() given revision and added failing case to test_block_timestamp_commitment() (2f7902ce02edf6858f18db243a8b9d9045784a83)
Changed TrivialCommitment and Commitment traits to take a generic (that defaults to Value) for the expected_data (556f122f1890de8ce39274001df42687d6188d3e). We can then implement these traits with the generic as a Timestamp for the BlockTimestampCommitment (e.g. here). The verify_content() method has an additional json!() on the expected data so it is always a Value.
I think this is good to go in #106 but some potential options for future:
Refactoring BlockHashCommitment to handle committing to both the timestamp and PoW hash as we can implement both Commitment<Value> and Commitment<Timestamp> (distinct generics)
We could refactor the commitment traits to have an associated type, so the candidate data can use specific relevant types instead of Value, e.g.:
trait Commitment {
    type Data;
    // ...
    fn decode_candidate_data(&self) -> fn(&[u8]) -> CommitmentResult<Self::Data>;
    // ...
}
A downside of this though would be I don't think default implementations would be possible.
Done in #108
|
gharchive/issue
| 2023-07-20T02:42:33 |
2025-04-01T04:55:52.658177
|
{
"authors": [
"sgreenbury",
"thobson88"
],
"repo": "alan-turing-institute/trustchain",
"url": "https://github.com/alan-turing-institute/trustchain/issues/104",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1621975256
|
Can not send text on shareWhatsapp.share
Hello! Sorry for my english, this is a good plugin, when i used shareWhatsapp.share, overall i can send the files to the selected phone number, but i cannot send the text, how can i solve this?
@ihzadeprian1305 which platform?
|
gharchive/issue
| 2023-03-13T17:54:05 |
2025-04-01T04:55:52.679654
|
{
"authors": [
"alann-maulana",
"ihzadeprian1305"
],
"repo": "alann-maulana/share_whatsapp",
"url": "https://github.com/alann-maulana/share_whatsapp/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1996726325
|
Link IFC results in null in revit 2023
Dynamo Revit 2.13.1.3891
Genius Loci 2023.7.13
DynamoIronPython2.7 2.1.0
In Revit 2022 it works perfectly; only in Revit 2023 it does not. What's wrong?
Hi,
It is working for me with Revit 2023. Not sure where your issue is.
HI all,
I've got the same issue too :
Not sure what is happening
My packages seem ok too :
Link to discussion added: https://forum.dynamobim.com/t/importing-ifc-files-in-bulk-with-dynamo-into-revit/102060/4
|
gharchive/issue
| 2023-11-16T12:14:26 |
2025-04-01T04:55:52.690265
|
{
"authors": [
"BertusRevit",
"Cyberflow187",
"albandechasteigner"
],
"repo": "albandechasteigner/GeniusLociForDynamo",
"url": "https://github.com/albandechasteigner/GeniusLociForDynamo/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
366457947
|
Training Deeplab
Hello, can we also train the model by writing our training code (for example, by modifying fcnTrain routine ?
Hi @sm176357, unfortunately I didn't have time to get around to it, but it would certainly be possible.
|
gharchive/issue
| 2018-10-03T17:58:49 |
2025-04-01T04:55:52.690896
|
{
"authors": [
"albanie",
"sm176357"
],
"repo": "albanie/mcnDeepLab",
"url": "https://github.com/albanie/mcnDeepLab/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
171826898
|
Xcode 8 beta 1: install Alcatraz fail
Goals
Install Alcatraz with Xcode 8 Beta 1
Expected Results
The package manager should successfully Install
Actual Results
Install fails in Terminal with this error
xcrun: error: unable to find utility "xcodebuild", not a developer tool or in PATH
Steps to Reproduce
Terminal input: curl -fsSL https://raw.githubusercontent.com/supermarin/Alcatraz/deploy/Scripts/install.sh | sh
Version Info
Alcatraz version: latest
Xcode version: 8.0 Beta 1
OSX version: OS X 10.11.6
Please see: https://github.com/alcatraz/Alcatraz/issues/475
|
gharchive/issue
| 2016-08-18T06:27:12 |
2025-04-01T04:55:52.774873
|
{
"authors": [
"jurre",
"yueqianzhang"
],
"repo": "alcatraz/Alcatraz",
"url": "https://github.com/alcatraz/Alcatraz/issues/481",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
177552297
|
Not working in Xcode8
Xcode8 install the plug-in fail
Try restarting your Mac. Has worked for some people.
i find this is useful http://www.jianshu.com/p/dc2fc2a680fc succeed!
https://github.com/alcatraz/Alcatraz/issues/73
Xcode 8 does not support loading plugins out of the box anymore.
Please see #475 for more info.
|
gharchive/issue
| 2016-09-17T03:08:21 |
2025-04-01T04:55:52.776434
|
{
"authors": [
"R0cKET",
"dbarros",
"imqiuhang",
"ixp9891",
"jurre"
],
"repo": "alcatraz/Alcatraz",
"url": "https://github.com/alcatraz/Alcatraz/issues/491",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2352059232
|
chore: add eslint rule to check just the re-exported functions have jsdocs
Pull Request Checklist
[ ] Did you add new tests and confirm existing tests pass? (yarn test)
[ ] Did you update relevant docs? (docs are found in the site folder, and guidelines for updating/adding docs can be found in the contribution guide)
[ ] Do your commits follow the Conventional Commits standard?
[ ] Does your PR title also follow the Conventional Commits standard?
[ ] If you have a breaking change, is it correctly reflected in your commit message? (e.g. feat!: breaking change)
[ ] Did you run lint (yarn lint:check) and fix any issues? (yarn lint:write)
[ ] Did you follow the contribution guidelines?
PR-Codex overview
This PR adds custom ESLint rules, configures TypeScript, and sets up linting workflows.
Detailed summary
Added custom ESLint rules for JSDoc comments on re-exported functions
Configured TypeScript for ESLint plugin
Defined linting workflows for ESLint plugin build and testing
The following files were skipped due to too many changes: yarn.lock
heads up, gonna hold off on merging this until I've got https://github.com/alchemyplatform/aa-sdk/pull/728 working and then worked my way through batches of backfills of jsdoc comments.
once this branch passes linting, I'll merge it into v4.x.x. Then I'll get the CLI merged in
|
gharchive/pull-request
| 2024-06-13T21:00:28 |
2025-04-01T04:55:52.782077
|
{
"authors": [
"moldy530"
],
"repo": "alchemyplatform/aa-sdk",
"url": "https://github.com/alchemyplatform/aa-sdk/pull/719",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1725037128
|
Include full day when going every 30 minutes
Thank you for such an awesome package!
Is there a way to render cells every 30 minutes and include the full day (from 00:00 to 23:59), I have tried to do
<Scheduler
  week={{
    startHour: 0,
    endHour: 23,
    step: 30,
  }}
/>
but that shows from 00:00 to 11:30pm (given endHour is 23 it only gets until 23:30). endHour doesn't seem to accept a float, so things like 23.5 sadly don't work.
Good point
I will look into it
After #203 it will work out of the box, just need to change endHour to 24
I tried and see a bit of a margin for exact events and ones that end at midnight. Any suggestion on why this might be happening / how to fix it?
|
gharchive/issue
| 2023-05-25T03:53:23 |
2025-04-01T04:55:52.785891
|
{
"authors": [
"aldabil21",
"ralcant"
],
"repo": "aldabil21/react-scheduler",
"url": "https://github.com/aldabil21/react-scheduler/issues/200",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
2526748866
|
🛑 BOS Fahrzeug Info is down
In 7a246c3, BOS Fahrzeug Info (https://bos-fahrzeuge.info/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: BOS Fahrzeug Info is back up in 3025c60 after 10 minutes.
|
gharchive/issue
| 2024-09-15T05:35:05 |
2025-04-01T04:55:52.791370
|
{
"authors": [
"aldo1066"
],
"repo": "aldo1066/Uptime",
"url": "https://github.com/aldo1066/Uptime/issues/42",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
482559806
|
Update for Rails 6.0
Specs passing, and still works like a charm for me locally!
Thank you so much for your work here, @leboshi .
Unfortunately, CI is not currently testing Rails 6 with this configuration. I am handling that issue in #9 .
|
gharchive/pull-request
| 2019-08-19T22:35:34 |
2025-04-01T04:55:52.799228
|
{
"authors": [
"alecdotninja",
"leboshi"
],
"repo": "alecdotninja/active_record_distinct_on",
"url": "https://github.com/alecdotninja/active_record_distinct_on/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
111996676
|
Provide bash completion
Similar to --help-man, create a --help-bash-completion flag (that is possibly hidden).
I'm assuming this could come out of e.g. the CmdGroupModel of Application.Model(). Having not looked at the source and only the API, that was just my best guess.
This will probably be an easier PR for me to start up on than the other issues I've opened. :wink:
I don't use bash, but this is something I've wanted for a while for zsh. I started, but unfortunately the documentation for zsh completion is lacking.
Definitely would be great to have.
Oh! If your zsh is recent enough, it should come with a function called bashcompinit that will create the completer for zsh. I've set that in #76.
Not sure if I should open a separate issue, but is it correct to have hidden cmds completed? If a cmd is marked as hidden from --help, maybe it should also be hidden from tab completion.
Yes, I would say it should be hidden. Commands, flags too.
I think this is largely complete, just needs some tweaks and bug fixes. Closing.
|
gharchive/issue
| 2015-10-18T02:29:24 |
2025-04-01T04:55:52.804404
|
{
"authors": [
"alecthomas",
"heewa",
"rgbkrk"
],
"repo": "alecthomas/kingpin",
"url": "https://github.com/alecthomas/kingpin/issues/75",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
244811780
|
Import from gopkg like the other examples
This was the only example not importing from gopkg. When trying multiple examples this one would not work because it was not available locally. It took me a while to understand what was happening.
Thanks!
|
gharchive/pull-request
| 2017-07-22T00:33:21 |
2025-04-01T04:55:52.805360
|
{
"authors": [
"PotHix",
"alecthomas"
],
"repo": "alecthomas/kingpin",
"url": "https://github.com/alecthomas/kingpin/pull/199",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
56861663
|
Support for flags defined with a pattern (regexp)
This PR adds the ability to define flags using a regular expression, for example:
kingpin.FlagPattern(`root\.(\d+).(name|value)`, "Specify value at given path")
Any capture defined in the pattern can be retrieved via the new Capture parser, given this:
var pos []string
vals := kingpin.FlagPattern(`matrix-(\d+)-(\d+)`, "Specify value for entry in matrix col i, row j").Capture(&pos).Strings()
passing the command line flags --matrix-0-0=foo --matrix-0-1=bar will result in:
pos == []string{"0", "0", "0", "1"}
vals == []string{"foo", "bar"}
Thank you for the PR, I appreciate the effort, but I think this is a bit too complex and thus confusing. Here is my rationale:
I have never seen dynamic flag names, and I think it would be very unexpected to encounter them.
I think that any use case for dynamic flag names could be satisfied with a custom value parser. eg. instead of --matrix-0-1=bar the value parser could accept --matrix-set=0,0:foo --matrix-set=0,1:bar. This flag would also return a matrix type directly rather than "outputting" two values.
I certainly understand the desire to keep things simple and maybe my use case is too esoteric but using parsers like you suggest would defeat the purpose. I want to be able to leverage Kingpin's built-in help generation and validators. In my use case different values at different positions can have different types.
To add more color: my use case consists of a command line tool that can build data structures from values given on the command line via flags. These data structures can be recursive and having the help and type validation taken care of by Kingpin would be awesome. Happy to provide more details around the use case if needed. Thank you for your consideration!
Using a custom parser does not preclude varying types, help, or type safety AFAICT. In fact, I'm not sure how your patch would achieve this either? As for the help, it seems like the flag help output with your patch would be the regex pattern?
var nested *MyNestedStruct = MyNestedStructFlag(kingpin.Flag("set", "Set custom struct field"))
If your underlying use case is the ability to build data structures using kingpin, then that is an interesting idea in itself. Something that reflected over a type and produced flags capable of generating an instance of that type could be quite interesting. I can't personally envisage an elegant way of doing that off the top of my head, but it might be possible.
I'm writing a code generator tool that takes in metadata that describes an API (e.g. https://www.googleapis.com/discovery/v1/apis/compute/v1/rest) and produces a command line tool that can call the APIs. So what I have is a JSON schema describing data structures and from that I need to "flatten" it so that it's possible to specify the values for each leaf. There can be data structures that are embedded in arrays or hashes and for these I need to do something like (not a real example):
client instance create --disks.0.name="first disk" --disks.1.name="second disk" --disks.0.size=200 --disks.1.size=400 --tags.foo=bar --tags.goo=baz
This would correspond to the data structures:
type Instance struct {
Disks []*Disk
Tags map[string]string
}
type Disk struct {
Name string
Size int
}
The metadata contains documentation for each field which I want to display with the command line as well. So client instance create --help would list the flags and for each the corresponding data structure field docs.
I've been able to use flag patterns to achieve all of that (I can specify the exact type of the leaves of data structures embedded in arrays and the corresponding docs).
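The dotted-path flags described above can be folded into a nested structure with a few lines; the sketch below is in Python purely for illustration (kingpin itself is Go), and keeps list indices as dict keys for simplicity:

```python
def set_path(root: dict, path: str, value):
    """Fold a dotted flag path like "disks.0.name" into nested dicts.

    Sketch only: numeric segments stay as string dict keys rather than
    becoming real list indices.
    """
    keys = path.split(".")
    node = root
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return root

def parse_flags(pairs):
    """Build a nested structure from (flag-path, value) pairs."""
    root = {}
    for path, value in pairs:
        set_path(root, path, value)
    return root
```

A real implementation would additionally type-check each leaf against the schema metadata before assignment.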
To follow up on this: after a few iterations I ended up not needing flag patterns, so good call on rejecting my PR ;)
Hehe. Okay, good to know, thanks :)
|
gharchive/pull-request
| 2015-02-06T20:11:29 |
2025-04-01T04:55:52.812602
|
{
"authors": [
"alecthomas",
"raphael"
],
"repo": "alecthomas/kingpin",
"url": "https://github.com/alecthomas/kingpin/pull/25",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1695978282
|
lockAda
lender deposits funds + asset-id of an NFT that will be the collateral
compare:
lockAda
creates a datum and populates it with form values and the PKH of the owner wallet.
[ ] https://github.com/lley154/helios-examples/blob/main/vesting/pages/index.tsx
[x] https://github.com/lley154/helios-examples/blob/main/vesting/components/LockAda.tsx
[x] does not need a Redeemer?
Lock Ada takes care of Datum creation.
[ ] I need Datum for unlocking, so I need to try unlocking first
|
gharchive/issue
| 2023-05-04T13:06:57 |
2025-04-01T04:55:52.818668
|
{
"authors": [
"aleeusgr"
],
"repo": "aleeusgr/potential-robot",
"url": "https://github.com/aleeusgr/potential-robot/issues/80",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
941659564
|
🛑 Whoogle is down
In 23a2569, Whoogle (https://s.alefvanoon.xyz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Whoogle is back up in 98c733c.
|
gharchive/issue
| 2021-07-12T04:31:38 |
2025-04-01T04:55:52.825487
|
{
"authors": [
"alefvanoon"
],
"repo": "alefvanoon/Status",
"url": "https://github.com/alefvanoon/Status/issues/109",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
990413256
|
🛑 Teddit is down
In 6e0a2f8, Teddit (https://teddit.alefvanoon.xyz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Teddit is back up in efef934.
|
gharchive/issue
| 2021-09-07T22:57:19 |
2025-04-01T04:55:52.827788
|
{
"authors": [
"alefvanoon"
],
"repo": "alefvanoon/Status",
"url": "https://github.com/alefvanoon/Status/issues/457",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
295030258
|
Great Augmentation library
I am not sure of the bugs that may be there, but to date, this is the best augmentation library I have encountered. The api is extremely readable and intuitive to understand.
It is ironic that I am opening an issue to commend the job done. :+1:
Well, no option of comments on github. Please close this issue as soon as you read it.
Cheers!
Akanimax
Thanks a lot! :blush:
Good to hear that the library helps somebody!
|
gharchive/issue
| 2018-02-07T07:14:34 |
2025-04-01T04:55:52.840647
|
{
"authors": [
"akanimax",
"aleju"
],
"repo": "aleju/imgaug",
"url": "https://github.com/aleju/imgaug/issues/95",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2689182510
|
🛑 h23 is down
In d211892, h23 (https://h23.seohost.pl:2222/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: h23 is back up in baca8b1 after 16 minutes.
|
gharchive/issue
| 2024-11-25T05:19:58 |
2025-04-01T04:55:52.856348
|
{
"authors": [
"alekssobolewski"
],
"repo": "alekssobolewski/h23",
"url": "https://github.com/alekssobolewski/h23/issues/178",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1796470464
|
fix: Update example command
Fixed example command switches with current manual
Added new working WARP key for example
I encountered switch format issues while running the example command, so I fixed them and thought it would be helpful to submit it as a PR as well. Thanks for this great script by the way.
Thanks for your contribution.
Actually you can put or omit = in arguments. I updated the example to add = to all args
About the example license, users have to get new license for their installation otherwise warp will show 'Too many devices' error.
Each license is valid for 5 devices
Wonderful, appreciate your help.
|
gharchive/pull-request
| 2023-07-10T10:35:58 |
2025-04-01T04:55:52.863671
|
{
"authors": [
"AlirezaBaratian",
"aleskxyz"
],
"repo": "aleskxyz/reality-ezpz",
"url": "https://github.com/aleskxyz/reality-ezpz/pull/46",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1329479149
|
I want help
I wanted to create and deploy aniyomi-local, but I am very new to this stuff and I am unable to deploy the fork I made of this repository on Netlify. Can you help me deploy it? If yes, please DM me on Discord (12ohit#6969) because your DMs are closed.
Nevermind, I realised that your latest commit was making the problem so I used the repository from before that commit and it worked.
Hi, 12ohit, sorry about not replying it before.
Glad to hear you managed to make it build with success 😄
Any new questions or issues you find, feel free to open a new issue or PR with fixes!
Also, @12ohit, please replace my Google Analytics tag on index.html with a custom one created by you to your fork or remove that script from the page, please.
Ok I will remove that Google Analytics tag, thanks
|
gharchive/issue
| 2022-08-05T05:47:38 |
2025-04-01T04:55:52.865907
|
{
"authors": [
"12ohit",
"alessandrojean"
],
"repo": "alessandrojean/tachi-local",
"url": "https://github.com/alessandrojean/tachi-local/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
438399064
|
Add canvas editor caret tracking example
Create a fake editor using a canvas and use js2at to expose where the caret is.
@kenlimmj since you've already built this, do you want to just land it? It's good enough for now.
Oh, you already did! Closing.
|
gharchive/issue
| 2019-04-29T16:22:36 |
2025-04-01T04:55:52.869583
|
{
"authors": [
"aleventhal"
],
"repo": "aleventhal/js2at",
"url": "https://github.com/aleventhal/js2at/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
337190472
|
unicode_type = six.text_type and define long in Python 3
This approach uses the Python porting best practice Use feature detection instead of version detection and thus avoids raising undefined name (F821) errors in flake8.
flake8 testing of https://github.com/alexa-labs/alexa-skills-kit-sdk-for-python on Python 3.6.3
$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics
./ask-sdk-core/ask_sdk_core/serialize.py:38:20: F821 undefined name 'unicode'
unicode_type = unicode
^
./ask-sdk-core/ask_sdk_core/serialize.py:49:33: F821 undefined name 'long'
'long': int if PY3 else long,
^
./ask-sdk-core/tests/unit/test_serialize.py:36:20: F821 undefined name 'unicode'
unicode_type = unicode
^
./ask-sdk-core/tests/unit/test_serialize.py:37:17: F821 undefined name 'long'
long_type = long
^
4 F821 undefined name 'unicode'
4
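The PR itself swaps in six.text_type; as a dependency-free sketch of the same feature-detection idea (not the exact patch), the Python 2-only names can be resolved through the builtins module so no bare undefined name ever appears for flake8 to flag:

```python
# Feature detection instead of version detection: look the Python 2-only
# names up in the builtins table rather than writing them bare, so flake8
# never sees an undefined name (F821) on Python 3.
import builtins  # on Python 2 this module is named __builtin__

# `unicode` exists only on Python 2; on Python 3, str is already unicode.
unicode_type = getattr(builtins, "unicode", str)

# `long` exists only on Python 2; on Python 3, int has arbitrary precision.
long_type = getattr(builtins, "long", int)
```

On Python 3 this resolves to `unicode_type is str` and `long_type is int`, matching what the explicit `if PY3 else` branches in the patch produce.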
Description
Motivation and Context
Testing
Screenshots (if appropriate)
Types of changes
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Checklist
[x] My code follows the code style of this project
[ ] My change requires a change to the documentation
[ ] I have updated the documentation accordingly
[x] I have read the README document
[ ] I have added tests to cover my changes
[x] All new and existing tests passed
License
[x] By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Hey @cclauss , thanks for the PR. Can you please look into the checklist and the license information for the PR, so that we can go ahead with the commit?
Done.
|
gharchive/pull-request
| 2018-06-30T06:05:44 |
2025-04-01T04:55:52.887847
|
{
"authors": [
"cclauss",
"nikhilym"
],
"repo": "alexa-labs/alexa-skills-kit-sdk-for-python",
"url": "https://github.com/alexa-labs/alexa-skills-kit-sdk-for-python/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
331034320
|
sudo bash startauth.sh does not work
IMPORTANT: Before you create an issue, please take a look at our Issue Reporting Guide.
Briefly summarize your issue:
I cannot find the file "startauth.sh" in my "/home/pi" directory.
Where can I get this file?
I execute the command as below.
sudo bash startauth.sh
the following issue is similar.
https://github.com/alexa/avs-device-sdk/issues/557
What is the expected behavior?
What behavior are you observing?
no such file or directory
Provide the steps to reproduce the issue, if applicable:
Tell us about your environment:
What version of the AVS Device SDK are you using?
AVS Device SDK
1.6.0
Tell us what hardware you're using:
[ ] Desktop / Laptop
[x] Raspberry Pi
[ ] Other - tell us more:
Tell us about your OS (Type & version):
[ ] Linux
[ ] MacOS
[x] Raspbian Stretch
[ ] Raspbian Jessy
[ ] Other - tell us more:
Hello apollo13yuma,
We have deprecated the authorization tools and examples in 1.6 because they did not implement any of the supported authorization methods ( Companion App, Companion Site, Code Based Linking, or On Product ). We strongly encourage you to update to a later version.
Regards,
-SWH
Hi Scotthea,
Thank you very much for your reply. I will try to update to AVS SDK 1.7.
If possible, would you please suggest how to upgrade from 1.6 to 1.7 on Raspberry Pi?
Can I follow this setup?
https://github.com/alexa/avs-device-sdk/wiki/Raspberry-Pi-Quick-Start-Guide-with-Script
Regards
Yuma
Hello Yuma,
Those steps are intended to be used for building 1.7.1 from scratch. If you do not have any work that depends upon 1.6 it would be simplest to just delete the folders created during your build of 1.6 and use the instructions you linked to.
If you really need to upgrade (e.g. you have changes to avs-device-sdk that you want to merge with 1.7.1) and already have 1.6 building, you can probably do a pull from within the avs-device-sdk folder to get the 1.7.1 source. Of course, if you have made changes to the 1.6 avs-device-sdk, you will need to merge them. Then use 'make SampleApp' in the build folder to build the 1.7.1 version of the SampeApp. Then, you can start from the Setup Your Configuration step of the Quick Start Guide to configure and authorize SampleApp. Note that where those configuration instructions refer to a sdk-build, your previous build of 1.6 probably named the folder build.
I hope that helps,
-SWH
hi SWH,
Thanks for your help. It was resolved when I used SDK 1.7.1.
However, I cannot authorize my client on the Amazon website.
https://github.com/alexa/avs-device-sdk/issues/790
|
gharchive/issue
| 2018-06-11T03:25:47 |
2025-04-01T04:55:52.902140
|
{
"authors": [
"apollo13yuma",
"scotthea-amazon"
],
"repo": "alexa/avs-device-sdk",
"url": "https://github.com/alexa/avs-device-sdk/issues/775",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
154290878
|
Using svg textwrap with polygons
Is it possible to use the textwrap feature with an SVG? The other shapes work very well, but I wanted to do this with an SVG polygon shape that looks like a text bubble. Thank you!
I've often wondered about this too - particularly if the textwrap feature could possibly be used to label streamgraphs.
Possibly a duplicate of https://github.com/d3plus/d3plus-text/issues/6
Thanks @curran! I will dig through this thread and see if I can make it work. I really appreciate the quick response.
Have you guys seen the largestRect function? It's available with d3plus.geom.largestRect:
Blog Post and Examples
Documentation
Internally, D3plus uses it for stacked area charts. Here's the code: https://github.com/alexandersimoes/d3plus/blob/master/src/viz/helpers/shapes/area.js#L50-L72
I'm going to close this issue in favor of this new issue in the 2.0 d3plus-text module.
|
gharchive/issue
| 2016-05-11T16:49:57 |
2025-04-01T04:55:52.908754
|
{
"authors": [
"curran",
"davelandry",
"ebwalker"
],
"repo": "alexandersimoes/d3plus",
"url": "https://github.com/alexandersimoes/d3plus/issues/478",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1610452016
|
✨ [REQUEST] free-games-claimer
!!! I have a real life job in parallel to this addon, and don't think I'll be able to add new addons for the moment. You can still however express your interest in case someone would do it
Which addon?
https://github.com/vogler/free-games-claimer. Supports epic games, prime gaming and GOG
Is your feature request related to a problem? Please describe.
Claim free games from multiple services.
**If a new addon, have you checked on Google that such an addon doesn't already exist?**
Yes
Describe the solution you'd like
Configure single add-on to claim all free games.
Describe alternatives you've considered
Not aware of other alternatives
there's an epic games one already.
https://github.com/alexbelgium/hassio-addons/tree/master/epicgamesfree
That one always prompts for a security code, so isn't that helpful. free-games-claimer doesn't prompt and supports more accounts.
Oh my goodness, thank you for bringing this project to my attention. It seems they are looking to add even more services to the grabber as well!
As someone who's had issues with the epicgamesfree addon in the past (was unable to get it to actually claim games because of the captcha) I would also like this if possible.
I'll look at it
Would this addon remove the epicgamesfree one? For sake of simplification
I believe so but I don't personally use the epicfreegames addon so it may
be worth putting a poll/question for them if that's an option?
On Mon, 22 May 2023, 21:14 Alexandre, @.***> wrote:
Would this addon remove the epicgamesfree one? For sake of simplification
—
Reply to this email directly, view it on GitHub
https://github.com/alexbelgium/hassio-addons/issues/747#issuecomment-1556852483,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/AUMWJOFBRJ5JCLNEXXQCERDXHMU55ANCNFSM6AAAAAAVQO7TTE
.
You are receiving this because you commented.Message ID:
@.***>
I think it makes sense to keep both. They use different methods, and epic games free tends to be much more reliable from what I'm seeing.
Is the latest working?
The latest egf is still working, and the latest container for fgc is working. I tried to set up the fgc app last night but ran into issues with it accepting creds from the config file, so need to sit down today and figure out what's going on.
In theory, values from the config file (in yaml format) are exported as env variables.
I'm not sure why, but I am getting constant stalls on start. It gets to the point that I am seeing output from the container, but it just stops sometimes moments after it starts up. Is that likely an issue with the container itself, or is that potentially a misconfiguration on the addon "wrapper" side?
Well, config.yaml is my internal system to export env variables. Not sure about config.env.
Just found a minute and was going to check if 'epic-games-free' is fixable, and was confused where the readme part was gone. Found that you renamed the whole folder, therefore it looked 'abandoned' to me.
Tried the new 'free-games-claimer' - currently shown as 'epic-games-free' - and had just one weird thing. It was not showing the noVNC server, nor anything useful in the logs. After adding a real port to the VNC port mapping, everything started fine. After removing it, it still works fine. Not sure how settings are handled internally, and whether a setting that was once overwritten is now empty/null - or at least different from what it was before.
To make every business consultant happy, here is the positive feedback: it would be nice to select which game platforms I would like to check, since, e.g., I have no use for Amazon Prime Gaming.
Keep up the good work :v:
Yeah, I'm having trouble getting novnc running here as well, and my entries in the config file don't seem to be working at all. I tried adding a vnc port in the network config section of the addon to no avail. I may need to remove and re-add or something.
As for picking and choosing which services run: technically the following command can be appended to the end of the docker run command to achieve that, leaving out what you don't want: bash -c "node epic-games; node prime-gaming; node gog". It may be possible to incorporate that into the plugin, but I'm not sure.
Latest version includes custom commands through addon options as you propose, and usage of the config.env instead of my config.yaml. The config.env uses the nomenclature KEY=VALUE instead of the previous yaml that was KEY: VALUE
Just found a minute and was going to check if 'epic-games-free' is fixable, and was confused where the readme part was gone. Found that you renamed the whole folder, therefore it looked 'abandoned' to me.
Hi - sorry this was unintended, I had pushed the folder in the free-games-claimer one on explorer :-) corrected now
Not working yet
Hi, latest version is tested to start. Could you please check the applicability of the config.env ? I've modified the format to export KEY=VALUE
Okay, I got it to run. Wrapping strings in single quotes, with the export keyword in front (though I'm surprised that can't be simulated), does work. Integers can remain unquoted with no issue.
One *major* problem at the moment is the command entry. The first command works; anything after that fails to ever run. For example, if you set node prime-gaming ; node gog ; node epic-games, the prime-gaming module does indeed run, but the gog and epic-games modules never get run.
Would "node prime-gaming" "node gog" "node epic-games" work? I'll check this evening
I tried looking into that, but it's proving difficult since I'm actually terrible with docker. My initial thought was wrapping them all in a double quote together, so it's seen as a single command. But I don't have proper knowledge of what that script being referenced actually does (I assume it's part of the original image?...) so I can't speak with any authority.
Thanks, I've pushed a new version. Now, the CMD_ARGUMENTS should be in the form : node prime-gaming; node gog; node epic-games and the config.env should be in the form KEY='VALUE'. I've also corrected the default files but as you have already installed the addon it won't probably use the new defaults.
Sorry to be the bearer of bad news:
```
/./etc/cont-init.d/99-run.sh: line 46: AZE: unbound variable
/etc/cont-init.d/99-run.sh: exiting 1
```
That's my test variable, I went too fast and need to push another version
still not working
I got it running, but then it needs me to log in via the UI. So when I try to access the noVNC port, the Chrome tab shows the loading icon for a while, then eventually says the site can't be reached.
btw, works fine for me, thanks :)
For me it works too... Perhaps it's a matter of RAM availability as well, as I think it requires quite a lot?
Hmm, I have 2.2 GB of free memory so I feel it's unlikely a memory problem. I also tried accessing from a different browser with same problems.
I'm still seeing the issue of the first module running, but no others. I'm wondering if I'm missing something since two different individuals have said things are working for them.
I also find the idea of RAM being an issue, dubious. I regularly have >4GB of ram available, but noVNC has never worked.
When I say it works for me, I mean I can access Firefox through the novnc webui, and even go on internet through it. I havent tested other functionalities. I'll look again for multi modules launch
For whatever reason none of these claimer addons work for me. This one launches successfully (no errors in log) but tying to access the webUI spins indefinitely and the addon takes 50% CPU. I don't have any weird network config, it's all typical 192.168. IPs.
Yeah, I can't get the web Ui working either. You can use the config.env to get yourself logged in though. The variables are all listed on the main repo.
I can't. I never got the text prompt. I can, however, run the very same config env file with docker outside of HA (I used to run docker stuff in HA as I didn't have other options before; now I do), and it does work there for me. No problem, thanks!
So weird. Here is my log:
Starting...
/etc/cont-init.d/00-banner.sh: executing
-----------------------------------------------------------
Add-on: Free Games Claimer (in development)
automatically claims free games on the Epic Games Store, Amazon Prime Gaming and GOG
-----------------------------------------------------------
Add-on version: 1.4
You are running the latest version of this add-on.
System: Home Assistant OS 9.5 (amd64 / qemux86-64)
Home Assistant Core: 2023.5.2
Home Assistant Supervisor: 2023.04.1
-----------------------------------------------------------
Please, share the above information when looking for help
or support in, e.g., GitHub, forums
-----------------------------------------------------------
Provided by: https://github.com/alexbelgium/hassio-addons
-----------------------------------------------------------
/etc/cont-init.d/01-custom_script.sh: executing
[10:08:57] INFO: Execute /config/addons_autoscripts/free-games-claimer.sh if existing
[10:08:57] INFO: ... script found, executing
dos2unix: converting file /config/addons_autoscripts/free-games-claimer.sh to Unix format...
/etc/cont-init.d/20-folders.sh: executing
Creating config location ...
Copying files if needed...
/etc/cont-init.d/99-run.sh: executing
[10:08:58] WARNING: The config.env file found in /config/addons_config/free_games_claimer will be used. Please customize according to https://github.com/vogler/free-games-claimer/tree/main#configuration--options and restart the add-on
[10:08:58] INFO: Starting the app with arguments node epic-games ; node prime-gaming ; node gog
Xvfb display server created screen with resolution 1280x1280
VNC is running on port 5900 (no password!)
noVNC (VNC via browser) is running on http://localhost:6080
2023-05-30 10:08:58.288 started checking epic-games
Not signed in anymore. Please login in the browser or here in the terminal.
Open http://localhost:6080 to login inside the docker container.
Login timeout is 180 seconds!
Press ESC to skip the prompts if you want to login in the browser (not possible in headless mode).
And going to http://homeassistant.local:6080/ shows
I'll do some other tests.
Btw, even with default settings (running the container straight from upstream), it's using 47.1% of the 8 GB RAM on my rpi4.
Last version pushed (1.4-test8) fully works on my system.
I uninstalled everything and clean reinstalled.
Started addon, waiting for log to complete (until "started checking epic-games")
Logged on webui on epic
Logged in epic (the addon then moved on automatically to prime-gaming)
Logged on webui on prime
...
10% ram on a 16gb system
Full log:
```
Starting...
/etc/cont-init.d/00-banner.sh: executing
-----------------------------------------------------------
Add-on: Free Games Claimer (NOT WORKING)
automatically claims free games on the Epic Games Store, Amazon Prime Gaming and GOG
-----------------------------------------------------------
Add-on version: 1.4-test8
You are running the latest version of this add-on.
System: Home Assistant OS 9.5 (amd64 / qemux86-64)
Home Assistant Core: 2023.5.2
Home Assistant Supervisor: 2023.04.1
-----------------------------------------------------------
Please, share the above information when looking for help
or support in, e.g., GitHub, forums
-----------------------------------------------------------
Provided by: https://github.com/alexbelgium/hassio-addons
-----------------------------------------------------------
/etc/cont-init.d/01-custom_script.sh: executing
[15:00:55] INFO: Execute /config/addons_autoscripts/free-games-claimer.sh if existing
[15:00:55] INFO: ... script found, executing
dos2unix: converting file /config/addons_autoscripts/free-games-claimer.sh to Unix format...
/etc/cont-init.d/20-folders.sh: executing
Creating config location ...
Copying files if needed...
/etc/cont-init.d/99-run.sh: executing
[15:00:55] WARNING: The config.env file found in /config/addons_config/free_games_claimer will be used. Please customize according to https://github.com/vogler/free-games-claimer/tree/main#configuration--options and restart the add-on
[15:00:55] INFO: Starting the app with arguments node epic-games ; node prime-gaming ; node gog
Xvfb display server created screen with resolution 1280x1280
VNC is running on port 5900 (no password!)
noVNC (VNC via browser) is running on http://localhost:6080
2023-05-30 15:00:56.100 started checking epic-games
Not signed in anymore. Please login in the browser or here in the terminal.
Open http://localhost:6080 to login inside the docker container.
Login timeout is 180 seconds!
Press ESC to skip the prompts if you want to login in the browser (not possible in headless mode).
? Enter email ‣ ✖ Enter email · timeout
Waiting for you to login in the browser.
page.waitForURL: Timeout 180000ms exceeded.
=========================== logs ===========================
waiting for navigation to "https://store.epicgames.com/en-US/free-games" until "load"
============================================================
at file:///data/epic-games.js:102:16 {
name: 'TimeoutError'
}
2023-05-30 15:07:02.376 started checking prime-gaming
Not signed in anymore.
Login timeout is 180 seconds!
Press ESC to skip the prompts if you want to login in the browser (not possible in headless mode).
? Enter email ‣ ✖ Enter email · timeout
Waiting for you to login in the browser.
```
I've created a request on the upstream repo here : https://github.com/vogler/free-games-claimer/issues/149
Hello, not working on a HAOS 10.2 amd64 / generic x86-64 installation, which makes it weirder.
Direct VNC is not reachable on port 5900, so noVNC is also not reachable on :6080 through the browser... The tab opens, keeps spinning for a while, and eventually a timeout error is displayed.
I've also added a comment in here https://github.com/vogler/free-games-claimer/issues/149
update: after trying repeatedly to connect through a direct VNC client on port 5900, I was able to connect after pressing the connect button several times in my VNC client! However, it was already at one of the later stages, during the GOG login, so I missed the logins for Epic and Prime...
Hi sorry I disabled the function to customize the nodes to be as close as possible to the base container to avoid sources of issues
Just as a side note for the env variables: is it not possible to use the built-in way of loading them? The config file appears to be loaded as described here: https://github.com/vogler/free-games-claimer/issues/143#issuecomment-1551822765
What is strange is that the config.env file is already pushed in the workdir here :
https://github.com/alexbelgium/hassio-addons/blob/2d56211be25865a38a489d476eb1c4d8128092d5/free_games_claimer/rootfs/etc/cont-init.d/99-run.sh#LL20C1-L21C37
Just realized perhaps I should do /data/data/config.env to respect the code you were showing. I'll push a new version to do that
I spent a lot of time trying to solve the VNC not starting thing - but it seems to be getting nowhere.
I also need to re-enable the option to choose which elements to run, I'll push a test10 version
config.env doesn't seem to be working, either in /data/config.env or /data/data/config.env (given that /data is the initial WORKDIR)
That's very odd... I think the only time I've ever seen the env variables work was when they needed export in front of them. But looking at the code every way you've set them up looks like they should work.
Maybe if you dropped the later set +a/-a? I wonder if that might be causing the code to ignore the set variables somehow? Very much a shot in the dark though.
When they needed export, I didn't use the embedded config.env system but instead read the file line by line and exported the variables in the bash shell.
I can redo it. set -a was to avoid error messages for undeclared variables.
Current dev status (test11) :
[ ] config.env working in test11 (variables passed)
[ ] sequential execution of CMD_ARGUMENTS (use format node gog; node prime-gaming)
[ ] succesful execution of free games claimer scripts (to confirm)
[ ] VNC & noVNC reliable usage
I did try adding export prior to the most recent test, with no luck.
If you want to do a real quick test (I will ASAP), you can try setting the size to 1024 by 768 and seeing whether the size message properly reflects that just after the image is spun up.
After testing test11 and test12 I can confirm that the environment variables do appear to be properly loading!
That said, test12 seems to still have issues with setting the command arguments. Using node epic-games ; node prime-gaming; node gog I am seeing:
```
Starting...
/etc/cont-init.d/00-banner.sh: executing
Add-on: Free Games Claimer (NOT WORKING)
automatically claims free games on the Epic Games Store, Amazon Prime Gaming and GOG
Add-on version: 1.4-test12
You are running the latest version of this add-on.
System: Home Assistant OS 10.2 (aarch64 / raspberrypi4-64)
Home Assistant Core: 2023.5.4
Home Assistant Supervisor: 2023.04.1
Please, share the above information when looking for help
or support in, e.g., GitHub, forums
Provided by: https://github.com/alexbelgium/hassio-addons
/etc/cont-init.d/01-custom_script.sh: executing
[16:18:25] INFO: Execute /config/addons_autoscripts/free-games-claimer.sh if existing
[16:18:25] INFO: ... no script found
/etc/cont-init.d/20-folders.sh: executing
Creating config location ...
Copying files if needed...
/etc/cont-init.d/99-run.sh: executing
[16:18:26] WARNING: The config.env file found in /config/addons_config/free_games_claimer will be used. Please customize according to https://github.com/vogler/free-games-claimer/tree/main#configuration--options and restart the add-on
[16:18:27] INFO: Starting the app with arguments node epic-games; node prime-gaming; node gog
Xvfb display server created screen with resolution 1024x768
VNC is running on port 5900 (no password!)
noVNC (VNC via browser) is running on http://localhost:6080
node:internal/modules/cjs/loader:1093
throw err;
^
Error: Cannot find module '/data/epic-games;'
at Module._resolveFilename (node:internal/modules/cjs/loader:1090:15)
at Module._load (node:internal/modules/cjs/loader:934:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:83:12)
at node:internal/main/run_main_module:23:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Node.js v19.9.0
[16:18:27] INFO: All actions concluded, addon will stop
```
Thanks :) I'll add automatic stop. And check sequential execution. I thought I had the error message only because I didn't have a real account to test
Latest version should work (except for VNC); it will also show exported values from config.env in blue.
Just... in case, before I run it. Are you censoring anything with the word password in the env variable? You are able to pass usernames and passwords, and keeping a log of those doesn't sound like a good idea.
Well, logs are temporary, they only appear but are not stored anywhere that I am aware of. Anyway it would still be stored locally - which is already the case from the passwords being written as plain text in the config.env. So really the key element is to keep your HA system secure. But I can remove this feature if you prefer
Apologies for the late follow-up. I've been able to test and can confirm things run. However, sequential running does not appear to work: node epic-games; node prime-gaming; node gog results in only node epic-games being run, and the log appears to confirm this with this line [23:29:23] INFO: Starting the app with arguments "node epic-games"
Hi, indeed sequential running is executed but doesn't connect... Well I'll have to look again
Bad bot
Well, it reminded me of this. Sorry, I was quite caught up in work. I'll have a look again soon.
Haha no worries, it's not like you're paid to do this, and I'm fairly certain you don't even have a use for this particular add-on.
Hi, I think I got it in version test22. Please let me know!
NoVNC still isn't working for me. But looks like it's able to claim epic games for me.
Things kinda work now.
Honestly, I think I’m going to revert to using my PC and waking it to handle this… It seems horribly slow on a Pi, and sometimes takes so long things break. I think this may just be an inherent limitation of the Pi and/or image.
VNC doesn't work for me either but can confirm that Epic and GOG will claim games without me needing to restart the addon. I'm running this on an old machine and not having any issues with the speed @KairuByte so that may just be a Pi issue.
I think VNC doesn't work due to rpi being too slow... It didn't work on my rpi4 either...
Regarding the script in itself however it seems to run fine on my rpi4 !
Closed with disclaimer that novnc does not work but that the add-on can still work using the config file
Hi, would you consider an option to mount a config directory instead of just the config file? Reason is that there are options like RECORD=1 that will record a video to help debug problems in config/record directory, but since directory isn't mounted there isn't an easy way to access the recorded file.
Another side effect would be that the browser state is also saved in there. That could be useful for uninstall / reinstall.
I am missing something here... I configured the env correctly adding discord, eg, ap, and gog but there's no way to open anything. How do I use this: "Please use config.env to execute the script"?
Hi, I don't use the addon so can't help you with the config.env content. However, I can see from your log that the config.env file is correctly loaded by the addon Getting variables from /config/addons_config/free_games_claimer/config.env
So you would have more chance trying on their repo https://github.com/vogler/free-games-claimer. From the log it seems you haven't filled a #password tag in the config.env but they would know better than me what that means.
|
gharchive/issue
| 2023-03-06T00:09:20 |
2025-04-01T04:55:53.000104
|
{
"authors": [
"KairuByte",
"MaxWinterstein",
"Stooovie",
"Teberon",
"alexbelgium",
"amosyuen",
"diamant-x",
"hacshacdgacs",
"viny182"
],
"repo": "alexbelgium/hassio-addons",
"url": "https://github.com/alexbelgium/hassio-addons/issues/747",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1222157687
|
Bazarr logo and icon
The logo of sonarr was added to bazarr instead of it's actual logo
Thanks very much!
|
gharchive/pull-request
| 2022-05-01T13:27:05 |
2025-04-01T04:55:53.003386
|
{
"authors": [
"alexbelgium",
"monkey-debugger"
],
"repo": "alexbelgium/hassio-addons",
"url": "https://github.com/alexbelgium/hassio-addons/pull/309",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
572789619
|
How to turn off the logger?
How do I turn off the logger? It eats a lot of flash memory.
you can insert:
STATS_PRINT_PERIOD = 10000000000
in the config.py
is there any way to clear the log?
or can I configure it somehow to show log for 24h?
I ran the proxy for 10 days with about 20 users, and now it takes like 10 minutes to scroll to the end.
the log size can be controlled here: https://github.com/alexbers/mtprotoproxy/blob/master/docker-compose.yml#L10
|
gharchive/issue
| 2020-02-28T14:27:31 |
2025-04-01T04:55:53.005999
|
{
"authors": [
"alexbers",
"ali-jk",
"dark0ghost"
],
"repo": "alexbers/mtprotoproxy",
"url": "https://github.com/alexbers/mtprotoproxy/issues/200",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
53309705
|
Using log from crates.io to avoid compile errors
As the in-tree module is always present and almost all crates have already
migrated to using the external log module, there are always problems
while using this crate.
Thanks!
|
gharchive/pull-request
| 2015-01-03T17:24:29 |
2025-04-01T04:55:53.049044
|
{
"authors": [
"alexcrichton",
"vhbit"
],
"repo": "alexcrichton/rust-compress",
"url": "https://github.com/alexcrichton/rust-compress/pull/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
897369448
|
Extract directory entries after all others when using Archive::unpack.
If they are extracted in order, they may fail if a directory's
permissions prevent descendant files from being written. So extracting
them after all other entries ensures that any files that need to be
created can be without being subject to the final permissions.
Closes #242.
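For context, the same two-pass extraction order can be sketched in Python's tarfile (the helper name unpack_dirs_last is hypothetical; this is not tar-rs code):

```python
import os
import tarfile

def unpack_dirs_last(archive_path, dest):
    """Sketch of the two-pass idea from this PR.

    Non-directory entries are extracted first; directory entries (which
    carry the final modes) are applied last, deepest first, so a
    directory whose permissions forbid writing cannot block the
    creation of its own descendants.
    """
    with tarfile.open(archive_path) as tar:
        members = tar.getmembers()
        dirs = [m for m in members if m.isdir()]
        others = [m for m in members if not m.isdir()]
        # Pre-create the directory tree with permissive default modes
        # so every file has somewhere to land.
        for d in dirs:
            os.makedirs(os.path.join(dest, d.name), exist_ok=True)
        for m in others:
            tar.extract(m, dest)
        # Apply directory entries last, deepest first, so restoring a
        # parent's restrictive mode cannot break a child's restore.
        for d in sorted(dirs, key=lambda m: m.name.count("/"), reverse=True):
            tar.extract(d, dest)
```

Extracting the directory entry at the end still restores its recorded mode and timestamps, matching the behavior this PR gives Archive::unpack.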
Seems reasonable to me, thanks!
|
gharchive/pull-request
| 2021-05-20T19:56:43 |
2025-04-01T04:55:53.050892
|
{
"authors": [
"afranchuk",
"alexcrichton"
],
"repo": "alexcrichton/tar-rs",
"url": "https://github.com/alexcrichton/tar-rs/pull/249",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2451631257
|
FedEx migration from WSDLS to REST API
I receive this message when enter to the developer FedEx's portal:
Caution: FedEx Web Services Tracking, Address Validation, and Validate Postal Codes WSDLS will be disabled on August 31, 2024. The SOAP based FedEx Web Services is in development containment and has been replaced with FedEx RESTful APIs. To learn more and upgrade your integration from Web Services to FedEx APIs, please visit the FedEx Developer Portal.
Will this affect the shipping rates calculation of this project?
Thank you.
We use the Quote Rates service, so we shouldn't be affected. However, the transition to the REST API for FedEx is already in our plans. We expect it to be done later this year.
Hi, just checking to see when the transition from WSDL to REST will take place? Thanks
I've added basic FedEx REST support in https://github.com/alexeybusygin/ShippingRates/pull/85
Cool, thanks! I will look at it!
|
gharchive/issue
| 2024-08-06T20:22:58 |
2025-04-01T04:55:53.065470
|
{
"authors": [
"alexeybusygin",
"elcapitanandres",
"tuhin24",
"xqiu"
],
"repo": "alexeybusygin/ShippingRates",
"url": "https://github.com/alexeybusygin/ShippingRates/issues/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
864346315
|
ldapi support
Support for using ldapi Unix socket connections. Build the connection
URI differently to suit. Added a new configuration option for specifying
the path to the socket. Don't mandate username and password if using
ldapi.
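The URI construction described above can be sketched as follows (an illustrative Python sketch, not the actual flask-simpleldap code; the function name and default socket path are assumptions). For ldapi, the Unix socket path is percent-encoded into the authority part of the URI, following OpenLDAP's ldapi convention.

```python
from urllib.parse import quote


def build_ldap_uri(scheme="ldap", host="localhost", port=389, socket_path=None):
    """Build a connection URI; ldapi embeds a percent-encoded socket path."""
    if scheme == "ldapi":
        # Every character of the path is encoded, including the slashes.
        return "ldapi://" + quote(socket_path or "/var/run/slapd/ldapi", safe="")
    return f"{scheme}://{host}:{port}"


tcp_uri = build_ldap_uri()
socket_uri = build_ldap_uri(scheme="ldapi", socket_path="/run/slapd/ldapi")
```

Because the socket path carries no host or port, username and password need not be mandated in this mode, as the PR notes.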
Ping @alexferl.
|
gharchive/pull-request
| 2021-04-21T22:18:04 |
2025-04-01T04:55:53.067056
|
{
"authors": [
"adarnimrod"
],
"repo": "alexferl/flask-simpleldap",
"url": "https://github.com/alexferl/flask-simpleldap/pull/86",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
252177138
|
Keep changelog update?
The history.md on the master branch is outdated: the latest revision is 7.2.1, but the changelog only goes up to 7.1.1. I want to know what has changed since 7.1.1, especially whether there are any breaking changes.
history.md has been updated. Thanks for pointing this out.
|
gharchive/issue
| 2017-08-23T06:44:49 |
2025-04-01T04:55:53.093514
|
{
"authors": [
"halfcrazy",
"jbielick"
],
"repo": "alexmingoia/koa-router",
"url": "https://github.com/alexmingoia/koa-router/issues/374",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2624116606
|
🛑 Alexandre Nuttinck website is down
In c0d589f, Alexandre Nuttinck website (https://alexnuttinck.dev) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Alexandre Nuttinck website is back up in 2048079 after 8 minutes.
|
gharchive/issue
| 2024-10-30T13:29:33 |
2025-04-01T04:55:53.096525
|
{
"authors": [
"alexnuttinck"
],
"repo": "alexnuttinck/status-page",
"url": "https://github.com/alexnuttinck/status-page/issues/111",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1636961024
|
Slow performance
Describe the bug
I am not sure why yet, but our requests are taking between 5 and 15 seconds most of the time.
I tried to log with the httplog gem and it stops after the connecting step; then after about 10 seconds it proceeds quickly.
Could it be related to the httparty gem using Net::HTTP? I think it supports neither HTTP/2 nor keep-alive.
I see many requests time out but looking at the API status page, it shows up as fine. I also tested with python and the code takes a while to run, but it doesn't error with a timeout at least.
FYI I see a new version came out to help address this issue: https://github.com/alexrudall/ruby-openai/releases/tag/v3.6.0
@jtoy v3.6 didn't change much. I did some research and tests and it seems to me that a big part of the slow performance is due to HTTParty's Net::HTTP usage.
Quickly hacking HTTParty I enabled keep-alive to persist the connection and it improved by around ~50%.
I think moving to another gem that doesn't rely on Net::HTTP and enabling streaming is key to unlock good performance.
I think the OpenAI API itself is unstable. But the idea of using net-http-persistent style keep-alive when available, or via adapters, sounds like it could be a good approach as well. Probably part of https://github.com/alexrudall/ruby-openai/issues/189
I agree
@jtoy v3.6 didn't change much. I did some research and tests and it seems to me that a big part of the slow performance is due to HTTParty's Net::HTTP usage.
Quickly hacking HTTParty I enabled keep-alive to persist the connection and it improved by around ~50%.
I think moving to another gem that doesn't rely on Net::HTTP and enabling streaming is key to unlock good performance.
How did you hack HTTParty? Thanks.
I think moving to another gem that doesn't rely on Net::HTTP and enabling streaming is key to unlock good performance.
Does such a gem even exist though?
https://github.com/alexrudall/ruby-openai/pull/234 adds streaming with Faraday - final reviews much appreciated 👍
I looked and #234 is now merged. We may be able to close this issue if the performance issues here are now fixed with the change to Faraday. (I only note this as a new user of this library who was checking issues and saw this since performance seems important.)
One more comment: I'm not sure which is better, but I've personally moved to http.rb for all networking requests. I'm not sure if that would handle it better. Also, in my Python code I've written retry logic; I'm not sure if it would make sense to put that in the Ruby libraries.
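The keep-alive effect discussed in this thread can be shown with a small self-contained sketch (Python's stdlib for brevity; the Ruby-side fix would go through net-http-persistent or Faraday as discussed above). One persistent connection serves several requests, avoiding a fresh TCP handshake per call, and against the real API a TLS handshake as well.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps connections alive by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass


# Local throwaway server on an ephemeral port, standing in for the remote API.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A single persistent connection handles all three requests; opening a new
# connection per request would pay the connection-setup cost three times.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
bodies = []
for _ in range(3):
    conn.request("GET", "/")
    bodies.append(conn.getresponse().read())
conn.close()
server.shutdown()
```

This is the same trade-off the thread describes: the ~50% improvement reported above came from persisting the connection rather than reconnecting per request.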
|
gharchive/issue
| 2023-03-23T06:59:13 |
2025-04-01T04:55:53.110849
|
{
"authors": [
"alexrudall",
"bf4",
"deikka",
"dep",
"gastonmorixe",
"jtoy",
"panozzaj"
],
"repo": "alexrudall/ruby-openai",
"url": "https://github.com/alexrudall/ruby-openai/issues/225",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1460233856
|
Lighthouse says that the robots.txt file isn't valid
"Search engines are unable to include your pages in search results if they don't have permission to crawl them"
Nevermind, this was due to a dev deployment.
|
gharchive/issue
| 2022-11-22T16:55:30 |
2025-04-01T04:55:53.117000
|
{
"authors": [
"FlipFloop"
],
"repo": "alextim/astro-lib",
"url": "https://github.com/alextim/astro-lib/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
386769992
|
Count() expects Int32. OpenEdge returns Int64
The following throws an exception
People.Count();
An exception occurred while reading a database value. The expected type was 'System.Int32' but the actual value was of type 'System.Int64'
Workaround for now is to always use LongCount() instead.
The following throws an exception
People.Count();
An exception occurred while reading a database value. The expected type was 'System.Int32' but the actual value was of type 'System.Int64'
Workaround for now is to always use LongCount() instead.
FYI:
You can use LongCount from Queryable class:
https://docs.microsoft.com/en-us/dotnet/api/system.linq.queryable.longcount?view=net-5.0
For me LongCount worked as expected.
Sample:
var orderItemsCount = (int)orderItemsQuery.LongCount();
|
gharchive/issue
| 2018-12-03T11:15:29 |
2025-04-01T04:55:53.121353
|
{
"authors": [
"BerndSadleder",
"alexwiese"
],
"repo": "alexwiese/EntityFrameworkCore.OpenEdge",
"url": "https://github.com/alexwiese/EntityFrameworkCore.OpenEdge/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
744241002
|
Thank you for your hard work!
This tool is really great, and I reverse-searched the URL to find the repo. Thanks a bunch, sorry for the junk issue!
Thanks for the feedback. I will add a link to the sources.
|
gharchive/issue
| 2020-11-16T22:30:34 |
2025-04-01T04:55:53.122648
|
{
"authors": [
"alezandr",
"ctjlewis"
],
"repo": "alezandr/geohash-explorer",
"url": "https://github.com/alezandr/geohash-explorer/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
162638718
|
Implement exactOnSingleWordQuery and alternativesAsExact
Solves #96.
LGTM 👍
|
gharchive/pull-request
| 2016-06-28T09:14:39 |
2025-04-01T04:55:53.123716
|
{
"authors": [
"PLNech",
"clement-leprovost"
],
"repo": "algolia/algoliasearch-client-android",
"url": "https://github.com/algolia/algoliasearch-client-android/pull/102",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
468094391
|
Make the size parameter optional, as advertised.
See: https://github.com/algoo/preview-generator/pull/100/files#r303397097
Did you finish the work on this? (It's still marked "WIP".)
Looks like you asked at the same time I removed it; IIRC it's OK.
|
gharchive/pull-request
| 2019-07-15T12:17:03 |
2025-04-01T04:55:53.183396
|
{
"authors": [
"JulienPalard",
"inkhey"
],
"repo": "algoo/preview-generator",
"url": "https://github.com/algoo/preview-generator/pull/101",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1808124546
|
Arc73 support
Hi, is there a way to use ARC73 with TEALScript?
https://github.com/algorand-devrel/TEALScript/blob/ddc491458178984abefdf893b0ecaec719342053/examples/AlgorandDBStorage/AlgorandDBStorage.algo.ts#L59
I have attempted it, but I read somewhere that the boolean type will be added soon. Perhaps it would be good to implement it in at least one example.
The feature/booleans branch is where I am working on boolean support. Hopefully should be merged in by the end of next week.
Once I have that, I can add ARC73 directly into the compiler so you don't have to manually write the method.
|
gharchive/issue
| 2023-07-17T16:23:38 |
2025-04-01T04:55:53.185259
|
{
"authors": [
"joe-p",
"scholtz"
],
"repo": "algorand-devrel/TEALScript",
"url": "https://github.com/algorand-devrel/TEALScript/issues/79",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1315637362
|
Contact Details
439451902@qq.com
What happened?
Runtime environment: Windows, VS Code
Firmware version: HaaSPython-HaaSEDUK1-v2.2.0.bin
haas-studio version: 2.3.1
Version
master (Default)
What solutions are you seeing the problem on?
eduk1_demo
Relevant log output
-
Thank you for raising this valuable question. We have a 24/7 "HaaS Baishitong" customer service system;
you can first try it to see whether it solves your problem (https://haas.iot.aliyun.com/?ask=1&f=a2cre.b82925042).
If "HaaS Baishitong" does not solve your problem, please reply "human support" and we will respond within 10 minutes on working days (10:00-12:00/14:00-18:00).
Right. Are you currently running into any problems with local updates?
Local updates work fine; I want to set up online updates but cannot find an entry point.
That feature has been taken offline for now.
May I ask what your use case for this feature is?
Are there any remaining questions?
Closing for now; please ask again if problems come up.
|
gharchive/issue
| 2022-07-23T11:32:38 |
2025-04-01T04:55:53.303489
|
{
"authors": [
"YiluMao",
"skylarCai",
"sunlrain",
"weipx666"
],
"repo": "alibaba/AliOS-Things",
"url": "https://github.com/alibaba/AliOS-Things/issues/1925",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
662651677
|
Problems running the Alink example code
/usr/local/flink-1.10.1/bin/flink run -p 1 -c com.alibaba.alink.ALSExample examples/target/alink_examples_flink-1.10_2.11-1.1-SNAPSHOT.jar
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Unable to instantiate java compiler
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:662)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:210)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:893)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:966)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:966)
Caused by: java.lang.IllegalStateException: Unable to instantiate java compiler
at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.compile(JaninoRelMetadataProvider.java:434)
at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.load3(JaninoRelMetadataProvider.java:375)
at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.lambda$static$0(JaninoRelMetadataProvider.java:109)
at org.apache.flink.calcite.shaded.com.google.common.cache.CacheLoader$FunctionToCacheLoader.load(CacheLoader.java:149)
at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3542)
at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2323)
at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2286)
at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201)
at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache.get(LocalCache.java:3953)
at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3957)
at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4875)
at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.create(JaninoRelMetadataProvider.java:475)
at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.revise(JaninoRelMetadataProvider.java:488)
at org.apache.calcite.rel.metadata.RelMetadataQuery.revise(RelMetadataQuery.java:193)
at org.apache.calcite.rel.metadata.RelMetadataQuery.getNonCumulativeCost(RelMetadataQuery.java:304)
at org.apache.calcite.plan.volcano.VolcanoPlanner.getCost(VolcanoPlanner.java:936)
at org.apache.calcite.plan.volcano.RelSubset.propagateCostImprovements0(RelSubset.java:347)
at org.apache.calcite.plan.volcano.RelSubset.propagateCostImprovements(RelSubset.java:330)
at org.apache.calcite.plan.volcano.VolcanoPlanner.addRelToSet(VolcanoPlanner.java:1828)
at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1764)
at org.apache.calcite.plan.volcano.VolcanoPlanner.setRoot(VolcanoPlanner.java:296)
at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:326)
at org.apache.flink.table.plan.Optimizer.runVolcanoPlanner(Optimizer.scala:280)
at org.apache.flink.table.plan.Optimizer.optimizeLogicalPlan(Optimizer.scala:199)
at org.apache.flink.table.plan.BatchOptimizer.optimize(BatchOptimizer.scala:56)
at org.apache.flink.table.api.internal.BatchTableEnvImpl.translate(BatchTableEnvImpl.scala:280)
at org.apache.flink.table.api.java.internal.BatchTableEnvironmentImpl.toDataSet(BatchTableEnvironmentImpl.scala:87)
at com.alibaba.alink.common.utils.DataSetConversionUtil.fromTable(DataSetConversionUtil.java:31)
at com.alibaba.alink.operator.batch.BatchOperator.getDataSet(BatchOperator.java:147)
at com.alibaba.alink.operator.batch.dataproc.SplitBatchOp.linkFrom(SplitBatchOp.java:43)
at com.alibaba.alink.ALSExample.main(ALSExample.java:23)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
... 8 more
Caused by: java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
at org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
at org.codehaus.commons.compiler.CompilerFactoryFactory.getDefaultCompilerFactory(CompilerFactoryFactory.java:79)
at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.compile(JaninoRelMetadataProvider.java:432)
... 43 more
The Java version is 1.8.0_252 and the Scala version is 2.11.12.
Could this be related to the Java version?
Maybe you can refer to this: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Flink-1-10-exception-Unable-to-instantiate-java-compiler-td38221.html
|
gharchive/issue
| 2020-07-21T06:36:55 |
2025-04-01T04:55:53.315888
|
{
"authors": [
"kulame",
"lqb11"
],
"repo": "alibaba/Alink",
"url": "https://github.com/alibaba/Alink/issues/119",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
212677059
|
Using apkpatch-1.0.3, the apatch is generated successfully, but there is no output and the patch has no effect.
No errors are reported during the whole process; the out files are as follows:
4-86112f4778c67a620082b737ae673830.apatch
diff.dex
smali
The smali directory is empty. Running the patch produces no output either; normally it would print "modify". Is this an apkpatch bug? The modified content is the activity's initUI method, declared like this:
void initUI() {}
Applying this apatch has no effect at all.
In any case, thanks to the author for the effort and for open-sourcing this.
I've got the same problem.
I don't know how to get the patch file output.
It works just fine in another project of mine.
Is it possibly because of the dependencies 'com.alibaba.mobileim:IMCore:2.0.2.1@aar' and 'com.alibaba.mobileim:IMKit:2.0.2.1@aar'?
I am Chinese and my grammar may not be right, but I am trying to get the meaning across; I am sorry if it causes any reading problems.
Looking forward to your reply.
Thank you.
Same here: no errors reported, and no "modify" output.
|
gharchive/issue
| 2017-03-08T09:09:55 |
2025-04-01T04:55:53.320168
|
{
"authors": [
"11447416",
"SaraLee12",
"panq-jack"
],
"repo": "alibaba/AndFix",
"url": "https://github.com/alibaba/AndFix/issues/336",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1201237428
|
[Survey] Feature the API of PyTorch LTC should provide if we want to keep updating and decoupling to PyTorch releases.
PyTorch LazyTensorCore currently does not provide an API for extending TSBackendImpl.
In our POC design, to support BladeDISC compilation of subgraphs, we had to rewrite TSBackendImpl and copy some of PyTorch's source code. The current implementation depends on unstable APIs and features, which will lead to maintenance effort.
Actions:
[x] Update current POC implementations to latest pytorch on master https://github.com/alibaba/BladeDISC/pull/242
and copy some of PyTorch's source codes
I believe that we never copy PyTorch source code. The important reason for adding PyTorch as a submodule is that some .h files are required to compile the pybind library, and those are not included in PyTorch's setup.py.
We have progressively updated TorchDISC with upstream torch lazy API. Some discussions and actions have been done in https://github.com/alibaba/BladeDISC/issues/234.
There are some related issues that still need discussion and survey:
How do we support the integration of DISC into the lazy tensor core? Other than the current implementation, registering a fuseGraph pass for runNoGradOptimizations in ProfilingGraphExecutorImpl might be a good choice.
The clustering of supported subgraphs @Yancey1989
|
gharchive/issue
| 2022-04-12T06:58:27 |
2025-04-01T04:55:53.326268
|
{
"authors": [
"Yancey1989",
"fortianyou"
],
"repo": "alibaba/BladeDISC",
"url": "https://github.com/alibaba/BladeDISC/issues/239",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1251972820
|
Questions on Datasets
Could you please share your considerations when choosing datasets for training?
As one option, you use COCO+LVIS. In the RITM paper, they say that LVIS is the best; however, it has the problem of being "long-tailed and therefore lacking general object categories". Could you elaborate on this problem from your experience? For example, if we wanted to achieve well over 90% IoU, wouldn't the low-quality COCO annotations be a problem?
Models trained on a combination of 8 datasets achieve better results than those trained on COCO+LVIS. Is the reason higher image diversity (meaning COCO+LVIS is not diverse enough), or that the other 6 datasets have higher-quality annotations than COCO?
How did you combine those 8 datasets? For example, did you just add 6 full datasets to the COCO+LVIS combination?
How long did it take to train on the huge combined dataset?
LVIS does not contain annotations for 'stuff'. Besides, objects in LVIS are usually small in size. Hence, COCO is necessary. And our experiments show that COCO+LVIS is better than LVIS alone (I did not record the exact numbers). Although COCO masks are not very fine, they are acceptable for training the model.
I think both, but the latter is more important. For example, ADE20K gives more diverse images. The saliency datasets are finely annotated.
We add proportional ratios like in https://github.com/alibaba/ClickSEG/blob/main/models/strongbaseline/mobilenetv2_x1_comb.py#:~:text=[0.35%2C0.10%2C0.1%2C0.2%2C0.1%2C0.15]%2C
Sorry, I have forgotten the training time. Maybe 3-5 times longer than training on COCO+LVIS.
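The proportional mixing described above can be sketched as follows (an illustration only: the ratios are the ones in the linked config, but apart from COCO, LVIS, and ADE20K the dataset names below are placeholders). Each training example is drawn from one dataset with the configured probability.

```python
import random

# Configured sampling ratios per dataset; names beyond COCO/LVIS/ADE20K
# are placeholders for the remaining datasets in the 8-dataset mix.
ratios = {
    "COCO": 0.35, "LVIS": 0.10, "ADE20K": 0.10,
    "saliency_a": 0.20, "saliency_b": 0.10, "other": 0.15,
}

rng = random.Random(0)  # fixed seed so the demo is reproducible
names = list(ratios)
weights = [ratios[n] for n in names]

# Draw a dataset name per training step; over many steps the empirical
# frequencies approximate the configured ratios.
draws = rng.choices(names, weights=weights, k=20000)
freq = {n: draws.count(n) / len(draws) for n in names}
```

The same effect could also be achieved by concatenating datasets with proportional repetition; per-sample weighted drawing keeps the mix exact regardless of dataset sizes.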
|
gharchive/issue
| 2022-05-29T19:33:52 |
2025-04-01T04:55:53.330290
|
{
"authors": [
"XavierCHEN34",
"dfrumkin"
],
"repo": "alibaba/ClickSEG",
"url": "https://github.com/alibaba/ClickSEG/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1478092784
|
Adapt CI to new deployment style
CI is currently organized around building a wheel and producing a single image containing all components.
Since we are going to introduce a different style of organizing components, the CI must be adapted accordingly.
Closed as completed.
|
gharchive/issue
| 2022-12-06T03:40:31 |
2025-04-01T04:55:53.339026
|
{
"authors": [
"siyuan0322"
],
"repo": "alibaba/GraphScope",
"url": "https://github.com/alibaba/GraphScope/issues/2306",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
481590526
|
Support a cluster source in rump mode
Currently, using a cluster as the source in rump mode is problematic; consider supporting it.
This may be because a slot MOVED operation happened on the user's source, or the nodes are misconfigured; in theory that error should not be raised here.
The user's source is 6 nodes (3 masters, 3 replicas), but source.address was configured with all 6 nodes, which does trigger this error.
We will add validation later to prevent misconfiguration.
TODO: the documentation needs to cover this in more detail.
|
gharchive/issue
| 2019-08-16T12:38:52 |
2025-04-01T04:55:53.354442
|
{
"authors": [
"suxb201",
"vinllen"
],
"repo": "alibaba/RedisShake",
"url": "https://github.com/alibaba/RedisShake/issues/149",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|