id | text | source | created | added | metadata
---|---|---|---|---|---
206765228
|
Property inheritance from extended class
Bug Report
TSLint version: 0.19.1
TypeScript version: 2.1.5
Running TSLint via: nodejs
The problem:
Using: Angular 2
I have class A, and class B extends A.
In B.component.html I use a variable story that is defined in class A.
The linter doesn't like this (see the result below).
TypeScript code being linted
with tslint.json configuration:
{
"rulesDirectory": [
"node_modules/codelyzer",
"node_modules/tslint-eslint-rules/dist/rules"
],
"rules": {
"class-name": true,
"comment-format": [
true,
"check-space"
],
"curly": true,
"eofline": true,
"forin": true,
"indent": [
true,
"spaces"
],
"ter-indent": [
true,
4
],
"label-position": true,
"max-line-length": [
true,
140
],
"member-access": false,
"member-ordering": [
true,
"static-before-instance",
"variables-before-functions"
],
"no-arg": true,
"no-bitwise": true,
"no-console": [
true,
"debug",
"info",
"time",
"timeEnd",
"trace"
],
"no-construct": true,
"no-debugger": true,
"no-duplicate-variable": true,
"no-empty": false,
"no-eval": true,
"no-shadowed-variable": true,
"no-string-literal": false,
"no-switch-case-fall-through": true,
"no-trailing-whitespace": true,
"no-unused-expression": true,
"no-use-before-declare": true,
"no-var-keyword": true,
"object-literal-sort-keys": false,
"one-line": [
true,
"check-open-brace",
"check-catch",
"check-else",
"check-whitespace"
],
"quotemark": [
true,
"single"
],
"radix": true,
"semicolon": [
true,
"never"
],
"triple-equals": [
true,
"allow-null-check"
],
"typedef-whitespace": [
true,
{
"call-signature": "nospace",
"index-signature": "nospace",
"parameter": "nospace",
"property-declaration": "nospace",
"variable-declaration": "nospace"
}
],
"variable-name": false,
"whitespace": [
true,
"check-branch",
"check-decl",
"check-operator",
"check-separator",
"check-type"
],
"directive-selector": [
true,
"attribute",
"cms",
"camelCase"
],
"component-selector": [
true,
"element",
"cms",
"kebab-case"
],
"use-input-property-decorator": true,
"use-output-property-decorator": true,
"use-host-property-decorator": true,
"no-input-rename": true,
"no-output-rename": true,
"use-life-cycle-interface": true,
"use-pipe-transform-interface": true,
"component-class-suffix": true,
"directive-class-suffix": true,
"no-access-missing-member": true,
"templates-use-public": true,
"invoke-injectable": true
}
}
Actual behavior
src/app/story/titles-and-settings/titles-and-settings.component.html[12, 22]: The method "story" that you're trying to access does not exist in the class declaration.
Expected behavior
The linter should recognize the inherited property "story" from the parent class.
You're in the wrong repository.
The issue you're searching for is over there: https://github.com/mgechev/codelyzer/issues/191
@ajafff Thank you sir, sorry for putting this in the wrong place :(
I'm closing this one.
|
gharchive/issue
| 2017-02-10T10:51:03 |
2025-04-01T06:39:57.344924
|
{
"authors": [
"Shahor",
"ajafff"
],
"repo": "palantir/tslint",
"url": "https://github.com/palantir/tslint/issues/2197",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
253388089
|
tslint:disable-next-line disables two subsequent lines
Bug Report
TSLint version: 5.7.0
TypeScript version: 2.4.2
Running TSLint via: CLI
TypeScript code being linted
/* tslint:disable-next-line:max-line-length */
const a = '11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111';
const b = '11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111';
const c = '11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111';
const d = '11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111';
const e = '11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111';
const f = '11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111';
with tslint.json configuration:
{
"rules": {
"max-line-length": [true, 50]
}
}
Actual behavior
ERROR: foo.ts[4, 1]: Exceeds maximum line length of 50
ERROR: foo.ts[5, 1]: Exceeds maximum line length of 50
ERROR: foo.ts[6, 1]: Exceeds maximum line length of 50
ERROR: foo.ts[7, 1]: Exceeds maximum line length of 50
Expected behavior
ERROR: foo.ts[3, 1]: Exceeds maximum line length of 50
ERROR: foo.ts[4, 1]: Exceeds maximum line length of 50
ERROR: foo.ts[5, 1]: Exceeds maximum line length of 50
ERROR: foo.ts[6, 1]: Exceeds maximum line length of 50
ERROR: foo.ts[7, 1]: Exceeds maximum line length of 50
No matter where I place the disable-next-line rule in the file, it always seems to affect the two subsequent lines, not just the line immediately afterwards. I don't see this behavior in the docs or referenced in any other issues — sorry if I missed something!
I can confirm that this is a bug.
What's happening here: the disabled range goes up to the start of the next line. If the failure begins at the first character in that line, the error is erroneously disabled.
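To illustrate the mechanism, here is a minimal Python sketch of the off-by-one (hypothetical names; TSLint's actual implementation differs):

def is_disabled_buggy(failure_start, range_start, next_line_start):
    # The disabled range runs from the comment to the start of the next
    # line; an inclusive upper bound also swallows a failure that begins
    # exactly at the first character of the following line.
    return range_start <= failure_start <= next_line_start

def is_disabled_fixed(failure_start, range_start, next_line_start):
    # Half-open interval: the start of the following line is excluded.
    return range_start <= failure_start < next_line_start

# A failure starting at column 1 of the line after the disabled one:
assert is_disabled_buggy(100, 40, 100)        # bug: erroneously disabled
assert not is_disabled_fixed(100, 40, 100)    # fix: reported as expected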
@ajafff Thanks for the quick fix! 🎉
Please help. When launching ng new my-project, I got this. What could be the issue, and how do I solve it? I'm learning Angular.
I've tried installing some of the other requested dependencies, with no effect.
@paulTchaa8 you'll want to file an issue on Angular on GitHub. That doesn't look like a TSLint issue.
|
gharchive/issue
| 2017-08-28T16:46:59 |
2025-04-01T06:39:57.350939
|
{
"authors": [
"JoshuaKGoldberg",
"ajafff",
"dylanpyle",
"paulTchaa8"
],
"repo": "palantir/tslint",
"url": "https://github.com/palantir/tslint/issues/3174",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1270571767
|
Redis add() without timeout expires immediately
I fixed this in flask-caching some time ago, but it looks like it's broken here as well (and since flask-caching now relies on this library, it's broken there again too): https://github.com/pallets-eco/flask-caching/pull/218
Thanks for letting me know, will write a fix this weekend :bowtie:
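For context, a minimal sketch of the class of bug being described, assuming redis-py's client API (hypothetical helpers; not cachelib's actual code):

import redis

r = redis.Redis()

def add_buggy(key, value, timeout=None):
    # Forwarding the "no timeout" sentinel into an expiring write gives
    # the key an immediate or invalid TTL, so it vanishes right away.
    return r.set(key, value, nx=True, ex=timeout or 0)

def add_fixed(key, value, timeout=None):
    if timeout is None:
        return r.set(key, value, nx=True)          # no expiry at all
    return r.set(key, value, nx=True, ex=timeout)  # expire after timeout seconds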
|
gharchive/issue
| 2022-06-14T10:10:01 |
2025-04-01T06:39:57.393597
|
{
"authors": [
"ThiefMaster",
"northernSage"
],
"repo": "pallets-eco/cachelib",
"url": "https://github.com/pallets-eco/cachelib/issues/156",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
439783230
|
option with bool type
@click.command()
@click.option('--shout/--no-shout', default=False)
def info(shout):
    pass

info()

is OK, but

@click.command()
@click.option('--shout/--no-shout', default=False, type=bool)
def info(shout):
    pass

info()

will raise a TypeError: Got secondary option for non boolean flag.
you need to use click.BOOL instead of bool, as per the docs:
https://click.palletsprojects.com/en/7.x/parameters/#parameter-types
Oops, that's not right, sorry, deleting and will hopefully post a sensible response momentarily.
I've tracked this down to
https://github.com/pallets/click/blob/c6042bf2607c5be22b1efef2e42a94ffd281434c/click/core.py#L1573
in this case type is not None, it's bool, so the condition is failing and self.is_bool_flag incorrectly gets set to False.
I can't immediately see why this is there (it just looks wrong at first glance). But blame says that line of code is 5 years old, which suggests there's more to it...
I am working on a click helper library (https://github.com/Cologler/click-anno-python), so I have done a lot of testing of click.
I think this is there because the --no- option makes this a special type of boolean flag automatically, it wouldn't make sense to assign a type to it. I guess we could check if type is none or bool.
Huh, I’d have thought it made sense if using the /--no- spelling implied type=bool, so you didn’t have to add that explicitly, but if you did, it would mean the same thing.
Taking a look at this
Checked the Discord, it looks like KP has got it
Same here. type=click.BOOL reproduces this bug, too. Temporarily removing the type resolved it.
The type argument only supports bool and not click.BOOL; should this be changed to support both?
https://github.com/kporangehat/click/blob/c7bd8d47b44816c2999a12a0b4e52162e5c6994a/click/core.py#L1609
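A minimal sketch of the check being discussed, with hypothetical names (the real Option initializer in click/core.py is more involved):

import click

def compute_is_bool_flag(secondary_opts, declared_type):
    # An option declared as '--shout/--no-shout' is boolean by
    # construction, so an explicit type should only disqualify it if the
    # type means something other than "boolean". click.BOOL is click's
    # ParamType wrapper around the builtin bool.
    if not secondary_opts:
        return False
    return declared_type in (None, bool, click.BOOL)

The buggy condition effectively required the type to be None, so passing type=bool (or type=click.BOOL) flipped is_bool_flag to False and later raised the TypeError above.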
|
gharchive/issue
| 2019-05-02T20:52:58 |
2025-04-01T06:39:57.400799
|
{
"authors": [
"Cologler",
"Ketzalkotal",
"Xevion",
"davidism",
"jab",
"xingheng"
],
"repo": "pallets/click",
"url": "https://github.com/pallets/click/issues/1287",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
487131617
|
WIP: Set permissions for atomic open_file()
Previously, when open_file() was called with atomic=True, the target
file's permissions were always set to 0600, i.e. readable and writable
by the current user. These permissions come from the temporary file
created by tempfile.mkstemp().
This commit changes an atomic open_file() call to set the permissions
of the target file. If the target file already exists, then its current
permissions are retained. Otherwise, the permissions respect the current
umask.
Fixes #1376
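A minimal sketch of the behavior the commit message describes, assuming POSIX semantics (hypothetical helper; not the PR's actual code):

import os
import stat
import tempfile

def atomic_write(path, data):
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)  # mkstemp creates mode 0600
    try:
        if os.path.exists(path):
            # Target exists: retain its current permission bits.
            mode = stat.S_IMODE(os.stat(path).st_mode)
        else:
            # New target: respect the current umask, like a plain open().
            umask = os.umask(0)
            os.umask(umask)
            mode = 0o666 & ~umask
        os.fchmod(fd, mode)
        with os.fdopen(fd, 'wb') as f:
            f.write(data)
        os.replace(tmp, path)  # atomic rename over the target
    except BaseException:
        os.unlink(tmp)
        raise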
I really appreciate the work you're putting into this, but this is way more complexity than I'm willing to add and support. Vendoring two copies of Python's tempfile module is not worth it. Does python-atomicwrites suggested in #320 have this issue? cc @untitaker
@davidism Thanks for the review. I agree that vendoring the modules is heavy-handed and not a great way forward. I'll close this pull request for now.
Note that python-atomicwrites also changes the permissions: https://github.com/untitaker/python-atomicwrites/issues/42
Atomicwrites does NOT intentionally change permissions. Different permissions come about as a result of how the file write is done.
I took a more minimalist stab at this myself in #1400. @msmolens I copied your tests from here; hope that’s fine.
@untitaker You can certainly argue that it “does not change permissions” because it internally works by creating a new file with different permissions and renaming it over the old file. But that argument is of little interest to a developer who wanted the permissions to stay the same.
I believe #1400 is a correct fix for this, and I am surprised that one exists. I would be happy to have this in atomicwrites. After all, its stated motto is to solve this rabbit hole once and for all.
We could keep the implementation in click simple and have atomicwrites as an optional dependency for "better" behavior.
|
gharchive/pull-request
| 2019-08-29T19:17:02 |
2025-04-01T06:39:57.406387
|
{
"authors": [
"andersk",
"davidism",
"msmolens",
"untitaker"
],
"repo": "pallets/click",
"url": "https://github.com/pallets/click/pull/1382",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
423972774
|
Mixed-up names in docs explaining relationships
I believe the section detailing relationships has a naming mix-up in the model definitions.
At first there is:
class Person(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50), nullable=False)
    addresses = db.relationship('Address', backref='person', lazy=True)
But further down it changes to this, where backrefs are explained in more detail:
class User(db.Model):  # this should read: Person(db.Model)
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50), nullable=False)
    addresses = db.relationship('Address', lazy='select',
                                backref=db.backref('person', lazy='joined'))
I think this is quite misleading and it's easy to think that this is a third table, which somehow connects to the two others defined earlier. It certainly had me confused for a while.
@pfabri thank you for the report. We will get this fixed in the 2.4 release, which should be coming soon.
closed in #718
|
gharchive/issue
| 2019-03-21T22:40:35 |
2025-04-01T06:39:57.409287
|
{
"authors": [
"davidism",
"pfabri",
"rsyring"
],
"repo": "pallets/flask-sqlalchemy",
"url": "https://github.com/pallets/flask-sqlalchemy/issues/706",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
308909086
|
teardown_request can't remove session's attribute
I use before_request to set a session attribute and teardown_request to remove it. But only the first request lacks the session attribute in before_request; every subsequent request still finds the attribute there. Why?
Sample code:
from flask import Flask, session

app = Flask(__name__)
app.secret_key = 'sdf'

@app.route('/')
def index():
    return """session: {}""".format(session['hello'])

@app.before_request
def before():
    if not session.get('hello'):
        session['hello'] = 'Hello!'
        print('before: add hello')
    else:
        print('beforeL: have', session['hello'])

@app.teardown_request
def teardown(exception):
    if session.get('hello'):
        session.pop('hello')
        print('teardown: delete hello')
    try:
        print(session['hello'])
    except:
        print('teardown: no have')
Output:
* Serving Flask app "app"
Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
before: add hello
teardown: delete hello
teardown: no have
127.0.0.1 - - [27/Mar/2018 18:17:35] "GET / HTTP/1.1" 200 -
beforeL: have Hello!
teardown: delete hello
teardown: no have
127.0.0.1 - - [27/Mar/2018 18:17:35] "GET /favicon.ico HTTP/1.1" 404 -
beforeL: have Hello!
teardown: delete hello
teardown: no have
127.0.0.1 - - [27/Mar/2018 18:17:35] "GET /favicon.ico HTTP/1.1" 404 -
beforeL: have Hello!
teardown: delete hello
teardown: no have
127.0.0.1 - - [27/Mar/2018 18:17:43] "GET / HTTP/1.1" 200 -
beforeL: have Hello!
teardown: delete hello
teardown: no have
127.0.0.1 - - [27/Mar/2018 18:17:46] "GET / HTTP/1.1" 200 -
beforeL: have Hello!
teardown: delete hello
teardown: no have
127.0.0.1 - - [27/Mar/2018 18:17:47] "GET / HTTP/1.1" 200 -
beforeL: have Hello!
teardown: delete hello
teardown: no have
127.0.0.1 - - [27/Mar/2018 18:17:50] "GET / HTTP/1.1" 200 -
Python version: 3.6.3
Flask version: 0.12.2
Werkzeug version: 0.12.2
The response can't be modified in teardown. Use after_request instead. Use g to store data during a request, not session.
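A minimal sketch of the suggested pattern, storing per-request data on g instead of session:

from flask import Flask, g

app = Flask(__name__)

@app.before_request
def before():
    # g lives for exactly one request and is discarded automatically,
    # so there is nothing to clean up in teardown_request.
    g.hello = 'Hello!'

@app.route('/')
def index():
    return 'g.hello: {}'.format(g.hello)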
|
gharchive/issue
| 2018-03-27T10:22:38 |
2025-04-01T06:39:57.417586
|
{
"authors": [
"MrLiupython",
"davidism"
],
"repo": "pallets/flask",
"url": "https://github.com/pallets/flask/issues/2673",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
270778465
|
Fixed typo in docs
Changed "when" to "how" as the subsequent text talks about "how to store data on the g object".
Thanks, but I'm completely rewriting the tutorial so this won't be relevant.
BTW, the PR itself was fine, thanks for making it. If you run into any other documentation fixes, please submit them! 👍
Sure :+1:
|
gharchive/pull-request
| 2017-11-02T19:25:21 |
2025-04-01T06:39:57.419646
|
{
"authors": [
"davidism",
"faheel"
],
"repo": "pallets/flask",
"url": "https://github.com/pallets/flask/pull/2509",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
716676021
|
Raise an error if rule starts or ends with a space
This PR solves https://github.com/pallets/flask/issues/3780. Once it is merged, creating routes like these will throw an error:
@bp.route("/bar ")
@bp.route(" /bar")
See #3780 for close reason.
|
gharchive/pull-request
| 2020-10-07T16:24:17 |
2025-04-01T06:39:57.421336
|
{
"authors": [
"davidism",
"lalnuo"
],
"repo": "pallets/flask",
"url": "https://github.com/pallets/flask/pull/3781",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
56785119
|
Does it support editing HTML, i.e. source-code editing?
As the title says: I've found that many editors of this kind don't support HTML editing.
No, it doesn't.
@pandao Heh, that's a pity.
@9073194 Heh, every tool has its own specialty.
|
gharchive/issue
| 2015-02-06T08:41:45 |
2025-04-01T06:39:57.446348
|
{
"authors": [
"9073194",
"pandao"
],
"repo": "pandao/editor.md",
"url": "https://github.com/pandao/editor.md/issues/5",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
3260405
|
uWSGI application not found error after install on Ubuntu 11.10
Installing on Ubuntu 11.10, after running the install script I can run /opt/panda/manage.py runserver and voila, I get a functional Panda instance running. But when I go to localhost on port 80, I get this error: "uWSGI error: Python application not found". Will start digging in the logs!
Crap, I lost track of this. Here's the log. I'm going to go ahead and try one more time presently.
+ echo 'PANDA installation beginning.'
PANDA installation beginning.
+ CONFIG_URL=https://raw.github.com/pandaproject/panda/master/setup_panda
+ echo 'DEPLOYMENT_TARGET="deployed"'
+ export DEPLOYMENT_TARGET=deployed
+ DEPLOYMENT_TARGET=deployed
+ apt-get --yes update
Ign http://us.archive.ubuntu.com oneiric InRelease
Ign http://us.archive.ubuntu.com oneiric-updates InRelease
Ign http://us.archive.ubuntu.com oneiric-backports InRelease
Ign http://extras.ubuntu.com oneiric InRelease
Ign http://security.ubuntu.com oneiric-security InRelease
Hit http://us.archive.ubuntu.com oneiric Release.gpg
Hit http://extras.ubuntu.com oneiric Release.gpg
Hit http://security.ubuntu.com oneiric-security Release.gpg
Hit http://us.archive.ubuntu.com oneiric-updates Release.gpg
Hit http://extras.ubuntu.com oneiric Release
Hit http://security.ubuntu.com oneiric-security Release
Hit http://us.archive.ubuntu.com oneiric-backports Release.gpg
Hit http://us.archive.ubuntu.com oneiric Release
Hit http://extras.ubuntu.com oneiric/main Sources
Hit http://security.ubuntu.com oneiric-security/main Sources
Hit http://us.archive.ubuntu.com oneiric-updates Release
Hit http://extras.ubuntu.com oneiric/main i386 Packages
Ign http://extras.ubuntu.com oneiric/main TranslationIndex
Hit http://us.archive.ubuntu.com oneiric-backports Release
Hit http://us.archive.ubuntu.com oneiric/main Sources
Hit http://security.ubuntu.com oneiric-security/restricted Sources
Hit http://security.ubuntu.com oneiric-security/universe Sources
Hit http://security.ubuntu.com oneiric-security/multiverse Sources
Hit http://security.ubuntu.com oneiric-security/main i386 Packages
Hit http://security.ubuntu.com oneiric-security/restricted i386 Packages
Hit http://us.archive.ubuntu.com oneiric/restricted Sources
Hit http://us.archive.ubuntu.com oneiric/universe Sources
Hit http://us.archive.ubuntu.com oneiric/multiverse Sources
Hit http://us.archive.ubuntu.com oneiric/main i386 Packages
Hit http://us.archive.ubuntu.com oneiric/restricted i386 Packages
Hit http://us.archive.ubuntu.com oneiric/universe i386 Packages
Hit http://us.archive.ubuntu.com oneiric/multiverse i386 Packages
Hit http://us.archive.ubuntu.com oneiric/main TranslationIndex
Hit http://us.archive.ubuntu.com oneiric/multiverse TranslationIndex
Hit http://us.archive.ubuntu.com oneiric/restricted TranslationIndex
Hit http://security.ubuntu.com oneiric-security/universe i386 Packages
Hit http://security.ubuntu.com oneiric-security/multiverse i386 Packages
Hit http://security.ubuntu.com oneiric-security/main TranslationIndex
Hit http://security.ubuntu.com oneiric-security/multiverse TranslationIndex
Hit http://security.ubuntu.com oneiric-security/restricted TranslationIndex
Hit http://us.archive.ubuntu.com oneiric/universe TranslationIndex
Hit http://us.archive.ubuntu.com oneiric-updates/main Sources
Hit http://us.archive.ubuntu.com oneiric-updates/restricted Sources
Hit http://us.archive.ubuntu.com oneiric-updates/universe Sources
Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Sources
Hit http://security.ubuntu.com oneiric-security/universe TranslationIndex
Hit http://us.archive.ubuntu.com oneiric-updates/main i386 Packages
Hit http://us.archive.ubuntu.com oneiric-updates/restricted i386 Packages
Hit http://us.archive.ubuntu.com oneiric-updates/universe i386 Packages
Hit http://us.archive.ubuntu.com oneiric-updates/multiverse i386 Packages
Hit http://us.archive.ubuntu.com oneiric-updates/main TranslationIndex
Hit http://us.archive.ubuntu.com oneiric-updates/multiverse TranslationIndex
Hit http://us.archive.ubuntu.com oneiric-updates/restricted TranslationIndex
Hit http://us.archive.ubuntu.com oneiric-updates/universe TranslationIndex
Hit http://security.ubuntu.com oneiric-security/main Translation-en
Hit http://security.ubuntu.com oneiric-security/multiverse Translation-en
Hit http://us.archive.ubuntu.com oneiric-backports/main Sources
Hit http://us.archive.ubuntu.com oneiric-backports/restricted Sources
Hit http://us.archive.ubuntu.com oneiric-backports/universe Sources
Hit http://us.archive.ubuntu.com oneiric-backports/multiverse Sources
Hit http://us.archive.ubuntu.com oneiric-backports/main i386 Packages
Hit http://us.archive.ubuntu.com oneiric-backports/restricted i386 Packages
Hit http://us.archive.ubuntu.com oneiric-backports/universe i386 Packages
Hit http://us.archive.ubuntu.com oneiric-backports/multiverse i386 Packages
Hit http://us.archive.ubuntu.com oneiric-backports/main TranslationIndex
Hit http://security.ubuntu.com oneiric-security/restricted Translation-en
Hit http://security.ubuntu.com oneiric-security/universe Translation-en
Hit http://us.archive.ubuntu.com oneiric-backports/multiverse TranslationIndex
Hit http://us.archive.ubuntu.com oneiric-backports/restricted TranslationIndex
Hit http://us.archive.ubuntu.com oneiric-backports/universe TranslationIndex
Hit http://us.archive.ubuntu.com oneiric/main Translation-en
Hit http://us.archive.ubuntu.com oneiric/multiverse Translation-en
Hit http://us.archive.ubuntu.com oneiric/restricted Translation-en
Hit http://us.archive.ubuntu.com oneiric/universe Translation-en
Hit http://us.archive.ubuntu.com oneiric-updates/main Translation-en
Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Translation-en
Hit http://us.archive.ubuntu.com oneiric-updates/restricted Translation-en
Hit http://us.archive.ubuntu.com oneiric-updates/universe Translation-en
Hit http://us.archive.ubuntu.com oneiric-backports/main Translation-en
Hit http://us.archive.ubuntu.com oneiric-backports/multiverse Translation-en
Hit http://us.archive.ubuntu.com oneiric-backports/restricted Translation-en
Hit http://us.archive.ubuntu.com oneiric-backports/universe Translation-en
Ign http://extras.ubuntu.com oneiric/main Translation-en_US
Ign http://extras.ubuntu.com oneiric/main Translation-en
Reading package lists...
+ apt-get --yes upgrade
Reading package lists...
Building dependency tree...
Reading state information...
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/10periodic -O /etc/apt/apt.conf.d/10periodic
2012-02-16 16:22:27 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/10periodic [230/230] -> "/etc/apt/apt.conf.d/10periodic" [1]
+ service unattended-upgrades restart
+ apt-get install --yes git openssh-server postgresql python2.7-dev libxml2-dev libxml2 libxslt1.1 libxslt1-dev nginx build-essential openjdk-6-jdk libpq-dev python-pip mercurial
Reading package lists...
Building dependency tree...
Reading state information...
build-essential is already the newest version.
git is already the newest version.
libxslt1-dev is already the newest version.
libxslt1.1 is already the newest version.
openssh-server is already the newest version.
python2.7-dev is already the newest version.
nginx is already the newest version.
python-pip is already the newest version.
libpq-dev is already the newest version.
libxml2 is already the newest version.
libxml2-dev is already the newest version.
openjdk-6-jdk is already the newest version.
postgresql is already the newest version.
mercurial is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
+ pip install uwsgi
Requirement already satisfied (use --upgrade to upgrade): uwsgi in /usr/local/lib/python2.7/dist-packages
Cleaning up...
+ ln -s /etc/init.d/ssh /etc/rc2.d/S20ssh
ln: creating symbolic link `/etc/rc2.d/S20ssh': File exists
+ ln -s /etc/init.d/ssh /etc/rc3.d/S20ssh
ln: creating symbolic link `/etc/rc3.d/S20ssh': File exists
+ ln -s /etc/init.d/ssh /etc/rc4.d/S20ssh
ln: creating symbolic link `/etc/rc4.d/S20ssh': File exists
+ ln -s /etc/init.d/ssh /etc/rc5.d/S20ssh
ln: creating symbolic link `/etc/rc5.d/S20ssh': File exists
+ wget -nv http://mirror.uoregon.edu/apache//lucene/solr/3.4.0/apache-solr-3.4.0.tgz -O /opt/apache-solr-3.4.0.tgz
2012-02-16 16:24:28 URL:http://mirror.uoregon.edu/apache//lucene/solr/3.4.0/apache-solr-3.4.0.tgz [83290310/83290310] -> "/opt/apache-solr-3.4.0.tgz" [1]
+ cd /opt
+ tar -xzf apache-solr-3.4.0.tgz
+ mv apache-solr-3.4.0 solr
+ cp -r solr/example solr/panda
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/solr.xml -O /opt/solr/panda/solr/solr.xml
2012-02-16 16:24:46 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/solr.xml [378/378] -> "/opt/solr/panda/solr/solr.xml" [1]
+ mkdir /opt/solr/panda/solr/pandadata
mkdir: cannot create directory `/opt/solr/panda/solr/pandadata': File exists
+ mkdir /opt/solr/panda/solr/pandadata/conf
mkdir: cannot create directory `/opt/solr/panda/solr/pandadata/conf': File exists
+ mkdir /opt/solr/panda/solr/pandadata/lib
mkdir: cannot create directory `/opt/solr/panda/solr/pandadata/lib': File exists
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/data_schema.xml -O /opt/solr/panda/solr/pandadata/conf/schema.xml
2012-02-16 16:24:46 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/data_schema.xml [3090/3090] -> "/opt/solr/panda/solr/pandadata/conf/schema.xml" [1]
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml -O /opt/solr/panda/solr/pandadata/conf/solrconfig.xml
2012-02-16 16:24:47 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml [4820/4820] -> "/opt/solr/panda/solr/pandadata/conf/solrconfig.xml" [1]
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/panda.jar -O /opt/solr/panda/solr/pandadata/lib/panda.jar
2012-02-16 16:24:47 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/panda.jar [1283/1283] -> "/opt/solr/panda/solr/pandadata/lib/panda.jar" [1]
+ mkdir /opt/solr/panda/solr/pandadata_test
mkdir: cannot create directory `/opt/solr/panda/solr/pandadata_test': File exists
+ mkdir /opt/solr/panda/solr/pandadata_test/conf
mkdir: cannot create directory `/opt/solr/panda/solr/pandadata_test/conf': File exists
+ mkdir /opt/solr/panda/solr/pandadata_test/lib
mkdir: cannot create directory `/opt/solr/panda/solr/pandadata_test/lib': File exists
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/data_schema.xml -O /opt/solr/panda/solr/pandadata_test/conf/schema.xml
2012-02-16 16:24:47 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/data_schema.xml [3090/3090] -> "/opt/solr/panda/solr/pandadata_test/conf/schema.xml" [1]
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml -O /opt/solr/panda/solr/pandadata_test/conf/solrconfig.xml
2012-02-16 16:24:47 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml [4820/4820] -> "/opt/solr/panda/solr/pandadata_test/conf/solrconfig.xml" [1]
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/panda.jar -O /opt/solr/panda/solr/pandadata_test/lib/panda.jar
2012-02-16 16:24:48 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/panda.jar [1283/1283] -> "/opt/solr/panda/solr/pandadata_test/lib/panda.jar" [1]
+ mkdir /opt/solr/panda/solr/pandadatasets
mkdir: cannot create directory `/opt/solr/panda/solr/pandadatasets': File exists
+ mkdir /opt/solr/panda/solr/pandadatasets/conf
mkdir: cannot create directory `/opt/solr/panda/solr/pandadatasets/conf': File exists
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/datasets_schema.xml -O /opt/solr/panda/solr/pandadatasets/conf/schema.xml
2012-02-16 16:24:48 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/datasets_schema.xml [2388/2388] -> "/opt/solr/panda/solr/pandadatasets/conf/schema.xml" [1]
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml -O /opt/solr/panda/solr/pandadatasets/conf/solrconfig.xml
2012-02-16 16:24:48 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml [4820/4820] -> "/opt/solr/panda/solr/pandadatasets/conf/solrconfig.xml" [1]
+ mkdir /opt/solr/panda/solr/pandadatasets_test
mkdir: cannot create directory `/opt/solr/panda/solr/pandadatasets_test': File exists
+ mkdir /opt/solr/panda/solr/pandadatasets_test/conf
mkdir: cannot create directory `/opt/solr/panda/solr/pandadatasets_test/conf': File exists
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/datasets_schema.xml -O /opt/solr/panda/solr/pandadatasets_test/conf/schema.xml
2012-02-16 16:24:48 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/datasets_schema.xml [2388/2388] -> "/opt/solr/panda/solr/pandadatasets_test/conf/schema.xml" [1]
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml -O /opt/solr/panda/solr/pandadatasets_test/conf/solrconfig.xml
2012-02-16 16:24:48 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/solrconfig.xml [4820/4820] -> "/opt/solr/panda/solr/pandadatasets_test/conf/solrconfig.xml" [1]
+ adduser --system --no-create-home --disabled-login --disabled-password --group solr
The system user `solr' already exists. Exiting.
+ chown -R solr:solr /opt/solr
+ touch /var/log/solr.log
+ chown solr:solr /var/log/solr.log
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/solr.conf -O /etc/init/solr.conf
2012-02-16 16:24:50 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/solr.conf [285/285] -> "/etc/init/solr.conf" [1]
+ initctl reload-configuration
+ service solr start
start: Job is already running: solr
+ adduser --system --no-create-home --disabled-login --disabled-password --group panda
The system user `panda' already exists. Exiting.
+ mkdir /var/run/uwsgi
mkdir: cannot create directory `/var/run/uwsgi': File exists
+ chown panda:panda /var/run/uwsgi
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/uwsgi.conf -O /etc/init/uwsgi.conf
2012-02-16 16:24:51 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/uwsgi.conf [385/385] -> "/etc/init/uwsgi.conf" [1]
+ initctl reload-configuration
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/nginx -O /etc/nginx/sites-available/panda
2012-02-16 16:24:51 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/nginx [239/239] -> "/etc/nginx/sites-available/panda" [1]
+ ln -s /etc/nginx/sites-available/panda /etc/nginx/sites-enabled/panda
ln: creating symbolic link `/etc/nginx/sites-enabled/panda': File exists
+ rm /etc/nginx/sites-enabled/default
rm: cannot remove `/etc/nginx/sites-enabled/default': No such file or directory
+ service nginx restart
Restarting nginx: nginx.
+ wget -nv https://raw.github.com/pandaproject/panda/master/setup_panda/pg_hba.conf -O /etc/postgresql/9.1/main/pg_hba.conf
2012-02-16 16:24:53 URL:https://raw.github.com/pandaproject/panda/master/setup_panda/pg_hba.conf [234/234] -> "/etc/postgresql/9.1/main/pg_hba.conf" [1]
+ service postgresql restart
* Restarting PostgreSQL 9.1 database server
...done.
+ sudo -u postgres psql postgres
+ echo 'CREATE USER panda WITH PASSWORD '\''panda'\'';'
ERROR: role "panda" already exists
+ sudo -u postgres createdb -O panda panda
createdb: database creation failed: ERROR: database "panda" already exists
+ cd /opt
+ git clone git://github.com/pandaproject/panda.git panda
fatal: destination path 'panda' already exists and is not an empty directory.
+ cd /opt/panda
+ pip install -r requirements.txt
Downloading/unpacking git+https://github.com/onyxfish/django-ajax-uploader.git#egg=ajaxuploader (from -r requirements.txt (line 9))
Cloning https://github.com/onyxfish/django-ajax-uploader.git to /tmp/pip-2cpisv-build
Running setup.py egg_info for package from git+https://github.com/onyxfish/django-ajax-uploader.git#egg=ajaxuploader
Downloading/unpacking git+https://github.com/onyxfish/csvkit.git#egg=csvkit (from -r requirements.txt (line 10))
Cloning https://github.com/onyxfish/csvkit.git to /tmp/pip-Y06O5d-build
Running setup.py egg_info for package from git+https://github.com/onyxfish/csvkit.git#egg=csvkit
Downloading/unpacking git+https://github.com/toastdriven/django-tastypie.git@f51b94025#egg=tastypie (from -r requirements.txt (line 11))
Cloning https://github.com/toastdriven/django-tastypie.git (to f51b94025) to /tmp/pip-fA2v7s-build
Could not find a tag or branch 'f51b94025', assuming commit.
Running setup.py egg_info for package from git+https://github.com/toastdriven/django-tastypie.git@f51b94025#egg=tastypie
Downloading/unpacking git+https://github.com/GoodCloud/django-longer-username.git@cdf0375ec5#egg=longerusername (from -r requirements.txt (line 17))
Cloning https://github.com/GoodCloud/django-longer-username.git (to cdf0375ec5) to /tmp/pip-MdHW3r-build
Could not find a tag or branch 'cdf0375ec5', assuming commit.
Running setup.py egg_info for package from git+https://github.com/GoodCloud/django-longer-username.git@cdf0375ec5#egg=longerusername
Downloading/unpacking hg+https://bitbucket.org/ericgazoni/openpyxl/@134c257abd1e#egg=openpyxl (from -r requirements.txt (line 18))
Cloning hg https://bitbucket.org/ericgazoni/openpyxl/ (to revision 134c257abd1e) to /tmp/pip-LrGgvC-build
Running setup.py egg_info for package from hg+https://bitbucket.org/ericgazoni/openpyxl/@134c257abd1e#egg=openpyxl
warning: no files found matching '*.xml' under directory 'src/openpyxl/tests/test_data'
warning: no files found matching '*.rels' under directory 'src/openpyxl/tests/test_data'
warning: no files found matching '*.xlsx' under directory 'src/openpyxl/tests/test_data'
warning: no files found matching 'CREDITS'
Downloading/unpacking hg+https://bitbucket.org/bkroeze/django-keyedcache/@fa1f452a53f7#egg=django-keyedcache (from -r requirements.txt (line 20))
Cloning hg https://bitbucket.org/bkroeze/django-keyedcache/ (to revision fa1f452a53f7) to /tmp/pip-FPerem-build
Running setup.py egg_info for package from hg+https://bitbucket.org/bkroeze/django-keyedcache/@fa1f452a53f7#egg=django-keyedcache
Downloading/unpacking hg+https://bitbucket.org/bkroeze/django-livesettings/@a413c0205048#egg=django-livesettings (from -r requirements.txt (line 21))
Cloning hg https://bitbucket.org/bkroeze/django-livesettings/ (to revision a413c0205048) to /tmp/pip-R8c8m3-build
Running setup.py egg_info for package from hg+https://bitbucket.org/bkroeze/django-livesettings/@a413c0205048#egg=django-livesettings
zip_safe flag not set; analyzing archive contents...
Installed /tmp/pip-R8c8m3-build/setuptools_hg-0.4-py2.7.egg
Downloading/unpacking django==1.3 (from -r requirements.txt (line 1))
Running setup.py egg_info for package django
Downloading/unpacking fabric==1.2.2 (from -r requirements.txt (line 2))
Running setup.py egg_info for package fabric
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no files found matching 'fabfile.py'
Downloading/unpacking psycopg2==2.4.1 (from -r requirements.txt (line 3))
Running setup.py egg_info for package psycopg2
warning: no files found matching '*.html' under directory 'doc'
warning: no files found matching '*.js' under directory 'doc'
warning: no files found matching '*' under directory 'doc/html'
no previously-included directories found matching 'doc/src/_build'
warning: no files found matching 'MANIFEST'
Downloading/unpacking django-celery==2.3.3 (from -r requirements.txt (line 4))
Running setup.py egg_info for package django-celery
no previously-included directories found matching 'bin/*.pyc'
no previously-included directories found matching 'tests/*.pyc'
no previously-included directories found matching 'docs/*.pyc'
no previously-included directories found matching 'contrib/*.pyc'
no previously-included directories found matching 'djcelery/*.pyc'
no previously-included directories found matching 'docs/.build'
no previously-included directories found matching 'examples/*.pyc'
Downloading/unpacking kombu-sqlalchemy==1.1.0 (from -r requirements.txt (line 5))
Running setup.py egg_info for package kombu-sqlalchemy
warning: no files found matching '*' under directory 'djkombu'
Downloading/unpacking python-dateutil==1.5.0 (from -r requirements.txt (line 6))
Running setup.py egg_info for package python-dateutil
Downloading/unpacking lxml==2.3.1 (from -r requirements.txt (line 7))
Running setup.py egg_info for package lxml
Building lxml version 2.3.1.
Building without Cython.
Using build configuration of libxslt 1.1.26
Building against libxml2/libxslt in the following directory: /usr/lib
Requirement already satisfied (use --upgrade to upgrade): httplib2==0.7.1 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 8))
Downloading/unpacking django-compressor==1.1 (from -r requirements.txt (line 12))
Running setup.py egg_info for package django-compressor
Downloading/unpacking BeautifulSoup==3.2.0 (from -r requirements.txt (line 13))
Running setup.py egg_info for package BeautifulSoup
Downloading/unpacking sphinx==1.0.7 (from -r requirements.txt (line 14))
Running setup.py egg_info for package sphinx
no previously-included directories found matching 'doc/_build'
Downloading/unpacking requests==0.8.3 (from -r requirements.txt (line 15))
Running setup.py egg_info for package requests
Downloading/unpacking south==0.7.3 (from -r requirements.txt (line 16))
Running setup.py egg_info for package south
Downloading/unpacking xlrd==0.7.1 (from -r requirements.txt (line 19))
Running setup.py egg_info for package xlrd
Downloading/unpacking boto==2.0 (from -r requirements.txt (line 22))
Running setup.py egg_info for package boto
Downloading/unpacking argparse==1.2.1 (from csvkit==0.4.2->-r requirements.txt (line 10))
Running setup.py egg_info for package argparse
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.orig' found anywhere in distribution
warning: no previously-included files matching '*.rej' found anywhere in distribution
no previously-included directories found matching 'doc/_build'
no previously-included directories found matching 'env24'
no previously-included directories found matching 'env25'
no previously-included directories found matching 'env26'
no previously-included directories found matching 'env27'
Downloading/unpacking sqlalchemy==0.6.6 (from csvkit==0.4.2->-r requirements.txt (line 10))
Running setup.py egg_info for package sqlalchemy
warning: no files found matching '*.jpg' under directory 'doc'
no previously-included directories found matching 'doc/build/output'
Downloading/unpacking mimeparse (from django-tastypie==0.9.11->-r requirements.txt (line 11))
Running setup.py egg_info for package mimeparse
Requirement already satisfied (use --upgrade to upgrade): pycrypto>=1.9 in /usr/lib/python2.7/dist-packages (from fabric==1.2.2->-r requirements.txt (line 2))
Downloading/unpacking paramiko>=1.7.6 (from fabric==1.2.2->-r requirements.txt (line 2))
Running setup.py egg_info for package paramiko
Downloading/unpacking django-picklefield (from django-celery==2.3.3->-r requirements.txt (line 4))
Downloading django-picklefield-0.1.9.tar.gz
Running setup.py egg_info for package django-picklefield
Downloading/unpacking celery>=2.3.1 (from django-celery==2.3.3->-r requirements.txt (line 4))
Running setup.py egg_info for package celery
no previously-included directories found matching 'tests/*.pyc'
no previously-included directories found matching 'docs/*.pyc'
no previously-included directories found matching 'contrib/*.pyc'
no previously-included directories found matching 'celery/*.pyc'
no previously-included directories found matching 'examples/*.pyc'
no previously-included directories found matching 'bin/*.pyc'
no previously-included directories found matching 'docs/.build'
no previously-included directories found matching 'docs/graffles'
no previously-included directories found matching '.tox/*'
Downloading/unpacking kombu (from kombu-sqlalchemy==1.1.0->-r requirements.txt (line 5))
Running setup.py egg_info for package kombu
Downloading/unpacking django-appconf>=0.4 (from django-compressor==1.1->-r requirements.txt (line 12))
Downloading django-appconf-0.4.1.tar.gz
Running setup.py egg_info for package django-appconf
Installed /opt/panda/build/django-appconf/versiontools-1.8.3-py2.7.egg
Downloading/unpacking Pygments>=0.8 (from sphinx==1.0.7->-r requirements.txt (line 14))
Running setup.py egg_info for package Pygments
Downloading/unpacking Jinja2>=2.2 (from sphinx==1.0.7->-r requirements.txt (line 14))
Running setup.py egg_info for package Jinja2
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no previously-included files matching '*.pyc' found under directory 'jinja2'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'jinja2'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
Downloading/unpacking docutils>=0.5 (from sphinx==1.0.7->-r requirements.txt (line 14))
Running setup.py egg_info for package docutils
warning: no files found matching 'MANIFEST'
warning: no previously-included files matching '.cvsignore' found under directory '*'
warning: no previously-included files matching '*.pyc' found under directory '*'
warning: no previously-included files matching '*~' found under directory '*'
warning: no previously-included files matching '.DS_Store' found under directory '*'
Downloading/unpacking anyjson>=0.3.1 (from celery>=2.3.1->django-celery==2.3.3->-r requirements.txt (line 4))
Downloading anyjson-0.3.1.tar.gz
Running setup.py egg_info for package anyjson
Downloading/unpacking amqplib>=1.0 (from kombu->kombu-sqlalchemy==1.1.0->-r requirements.txt (line 5))
Running setup.py egg_info for package amqplib
Installing collected packages: django, fabric, psycopg2, django-celery, kombu-sqlalchemy, python-dateutil, lxml, django-compressor, BeautifulSoup, sphinx, requests, south, xlrd, boto, ajaxuploader, argparse, sqlalchemy, csvkit, mimeparse, django-tastypie, longerusername, openpyxl, django-keyedcache, django-livesettings, paramiko, django-picklefield, celery, kombu, django-appconf, Pygments, Jinja2, docutils, anyjson, amqplib
Running setup.py install for django
changing mode of build/scripts-2.7/django-admin.py from 644 to 755
changing mode of /usr/local/bin/django-admin.py to 755
Running setup.py install for fabric
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no files found matching 'fabfile.py'
Installing fab script to /usr/local/bin
Running setup.py install for psycopg2
building 'psycopg2._psycopg' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/psycopgmodule.c -o build/temp.linux-i686-2.7/psycopg/psycopgmodule.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/green.c -o build/temp.linux-i686-2.7/psycopg/green.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/pqpath.c -o build/temp.linux-i686-2.7/psycopg/pqpath.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/utils.c -o build/temp.linux-i686-2.7/psycopg/utils.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/bytes_format.c -o build/temp.linux-i686-2.7/psycopg/bytes_format.o -Wdeclaration-after-statement
psycopg/bytes_format.c: In function ‘Bytes_Format’:
psycopg/bytes_format.c:114:24: warning: variable ‘orig_args’ set but not used [-Wunused-but-set-variable]
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/connection_int.c -o build/temp.linux-i686-2.7/psycopg/connection_int.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/connection_type.c -o build/temp.linux-i686-2.7/psycopg/connection_type.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/cursor_int.c -o build/temp.linux-i686-2.7/psycopg/cursor_int.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/cursor_type.c -o build/temp.linux-i686-2.7/psycopg/cursor_type.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/lobject_int.c -o build/temp.linux-i686-2.7/psycopg/lobject_int.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/lobject_type.c -o build/temp.linux-i686-2.7/psycopg/lobject_type.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/notify_type.c -o build/temp.linux-i686-2.7/psycopg/notify_type.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/xid_type.c -o build/temp.linux-i686-2.7/psycopg/xid_type.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_asis.c -o build/temp.linux-i686-2.7/psycopg/adapter_asis.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_binary.c -o build/temp.linux-i686-2.7/psycopg/adapter_binary.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_datetime.c -o build/temp.linux-i686-2.7/psycopg/adapter_datetime.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_list.c -o build/temp.linux-i686-2.7/psycopg/adapter_list.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_pboolean.c -o build/temp.linux-i686-2.7/psycopg/adapter_pboolean.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_pdecimal.c -o build/temp.linux-i686-2.7/psycopg/adapter_pdecimal.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_pfloat.c -o build/temp.linux-i686-2.7/psycopg/adapter_pfloat.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_qstring.c -o build/temp.linux-i686-2.7/psycopg/adapter_qstring.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/microprotocols.c -o build/temp.linux-i686-2.7/psycopg/microprotocols.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/microprotocols_proto.c -o build/temp.linux-i686-2.7/psycopg/microprotocols_proto.o -Wdeclaration-after-statement
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090102 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/typecast.c -o build/temp.linux-i686-2.7/psycopg/typecast.o -Wdeclaration-after-statement
gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions build/temp.linux-i686-2.7/psycopg/psycopgmodule.o build/temp.linux-i686-2.7/psycopg/green.o build/temp.linux-i686-2.7/psycopg/pqpath.o build/temp.linux-i686-2.7/psycopg/utils.o build/temp.linux-i686-2.7/psycopg/bytes_format.o build/temp.linux-i686-2.7/psycopg/connection_int.o build/temp.linux-i686-2.7/psycopg/connection_type.o build/temp.linux-i686-2.7/psycopg/cursor_int.o build/temp.linux-i686-2.7/psycopg/cursor_type.o build/temp.linux-i686-2.7/psycopg/lobject_int.o build/temp.linux-i686-2.7/psycopg/lobject_type.o build/temp.linux-i686-2.7/psycopg/notify_type.o build/temp.linux-i686-2.7/psycopg/xid_type.o build/temp.linux-i686-2.7/psycopg/adapter_asis.o build/temp.linux-i686-2.7/psycopg/adapter_binary.o build/temp.linux-i686-2.7/psycopg/adapter_datetime.o build/temp.linux-i686-2.7/psycopg/adapter_list.o build/temp.linux-i686-2.7/psycopg/adapter_pboolean.o build/temp.linux-i686-2.7/psycopg/adapter_pdecimal.o build/temp.linux-i686-2.7/psycopg/adapter_pfloat.o build/temp.linux-i686-2.7/psycopg/adapter_qstring.o build/temp.linux-i686-2.7/psycopg/microprotocols.o build/temp.linux-i686-2.7/psycopg/microprotocols_proto.o build/temp.linux-i686-2.7/psycopg/typecast.o -lpq -o build/lib.linux-i686-2.7/psycopg2/_psycopg.so
warning: no files found matching '*.html' under directory 'doc'
warning: no files found matching '*.js' under directory 'doc'
warning: no files found matching '*' under directory 'doc/html'
no previously-included directories found matching 'doc/src/_build'
warning: no files found matching 'MANIFEST'
Running setup.py install for django-celery
changing mode of build/scripts-2.7/djcelerymon from 644 to 755
no previously-included directories found matching 'bin/*.pyc'
no previously-included directories found matching 'tests/*.pyc'
no previously-included directories found matching 'docs/*.pyc'
no previously-included directories found matching 'contrib/*.pyc'
no previously-included directories found matching 'djcelery/*.pyc'
no previously-included directories found matching 'docs/.build'
no previously-included directories found matching 'examples/*.pyc'
changing mode of /usr/local/bin/djcelerymon to 755
Installing djcelerymon script to /usr/local/bin
Running setup.py install for kombu-sqlalchemy
warning: no files found matching '*' under directory 'djkombu'
Found existing installation: python-dateutil 1.4.1
Uninstalling python-dateutil:
Successfully uninstalled python-dateutil
Running setup.py install for python-dateutil
Running setup.py install for lxml
Building lxml version 2.3.1.
Building without Cython.
Using build configuration of libxslt 1.1.26
Building against libxml2/libxslt in the following directory: /usr/lib
building 'lxml.etree' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/libxml2 -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-i686-2.7/src/lxml/lxml.etree.o -w
gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions build/temp.linux-i686-2.7/src/lxml/lxml.etree.o -lxslt -lexslt -lxml2 -lz -lm -o build/lib.linux-i686-2.7/lxml/etree.so
building 'lxml.objectify' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/libxml2 -I/usr/include/python2.7 -c src/lxml/lxml.objectify.c -o build/temp.linux-i686-2.7/src/lxml/lxml.objectify.o -w
gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions build/temp.linux-i686-2.7/src/lxml/lxml.objectify.o -lxslt -lexslt -lxml2 -lz -lm -o build/lib.linux-i686-2.7/lxml/objectify.so
Running setup.py install for django-compressor
Running setup.py install for BeautifulSoup
Running setup.py install for sphinx
no previously-included directories found matching 'doc/_build'
Installing sphinx-build script to /usr/local/bin
Installing sphinx-quickstart script to /usr/local/bin
Installing sphinx-autogen script to /usr/local/bin
Running setup.py install for requests
Running setup.py install for south
Running setup.py install for xlrd
changing mode of build/scripts-2.7/runxlrd.py from 644 to 755
changing mode of /usr/local/bin/runxlrd.py to 755
Running setup.py install for boto
changing mode of build/scripts-2.7/sdbadmin from 644 to 755
changing mode of build/scripts-2.7/elbadmin from 644 to 755
changing mode of build/scripts-2.7/cfadmin from 644 to 755
changing mode of build/scripts-2.7/s3put from 644 to 755
changing mode of build/scripts-2.7/fetch_file from 644 to 755
changing mode of build/scripts-2.7/launch_instance from 644 to 755
changing mode of build/scripts-2.7/list_instances from 644 to 755
changing mode of build/scripts-2.7/taskadmin from 644 to 755
changing mode of build/scripts-2.7/kill_instance from 644 to 755
changing mode of build/scripts-2.7/bundle_image from 644 to 755
changing mode of build/scripts-2.7/pyami_sendmail from 644 to 755
changing mode of build/scripts-2.7/lss3 from 644 to 755
changing mode of build/scripts-2.7/cq from 644 to 755
changing mode of build/scripts-2.7/route53 from 644 to 755
changing mode of /usr/local/bin/bundle_image to 755
changing mode of /usr/local/bin/cq to 755
changing mode of /usr/local/bin/list_instances to 755
changing mode of /usr/local/bin/cfadmin to 755
changing mode of /usr/local/bin/s3put to 755
changing mode of /usr/local/bin/route53 to 755
changing mode of /usr/local/bin/taskadmin to 755
changing mode of /usr/local/bin/lss3 to 755
changing mode of /usr/local/bin/fetch_file to 755
changing mode of /usr/local/bin/elbadmin to 755
changing mode of /usr/local/bin/kill_instance to 755
changing mode of /usr/local/bin/sdbadmin to 755
changing mode of /usr/local/bin/launch_instance to 755
changing mode of /usr/local/bin/pyami_sendmail to 755
Running setup.py install for ajaxuploader
Running setup.py install for argparse
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.orig' found anywhere in distribution
warning: no previously-included files matching '*.rej' found anywhere in distribution
no previously-included directories found matching 'doc/_build'
no previously-included directories found matching 'env24'
no previously-included directories found matching 'env25'
no previously-included directories found matching 'env26'
no previously-included directories found matching 'env27'
Running setup.py install for sqlalchemy
warning: no files found matching '*.jpg' under directory 'doc'
no previously-included directories found matching 'doc/build/output'
Running setup.py install for csvkit
changing mode of build/scripts-2.7/in2csv from 644 to 755
changing mode of build/scripts-2.7/csvcut from 644 to 755
changing mode of build/scripts-2.7/csvsql from 644 to 755
changing mode of build/scripts-2.7/csvclean from 644 to 755
changing mode of build/scripts-2.7/csvstat from 644 to 755
changing mode of build/scripts-2.7/csvlook from 644 to 755
changing mode of build/scripts-2.7/csvjoin from 644 to 755
changing mode of build/scripts-2.7/csvstack from 644 to 755
changing mode of build/scripts-2.7/csvsort from 644 to 755
changing mode of build/scripts-2.7/csvgrep from 644 to 755
changing mode of build/scripts-2.7/csvjson from 644 to 755
changing mode of /usr/local/bin/in2csv to 755
changing mode of /usr/local/bin/csvclean to 755
changing mode of /usr/local/bin/csvgrep to 755
changing mode of /usr/local/bin/csvsort to 755
changing mode of /usr/local/bin/csvjoin to 755
changing mode of /usr/local/bin/csvstat to 755
changing mode of /usr/local/bin/csvlook to 755
changing mode of /usr/local/bin/csvstack to 755
changing mode of /usr/local/bin/csvcut to 755
changing mode of /usr/local/bin/csvjson to 755
changing mode of /usr/local/bin/csvsql to 755
Running setup.py install for mimeparse
Running setup.py install for django-tastypie
Running setup.py install for longerusername
Running setup.py install for openpyxl
warning: no files found matching '*.xml' under directory 'src/openpyxl/tests/test_data'
warning: no files found matching '*.rels' under directory 'src/openpyxl/tests/test_data'
warning: no files found matching '*.xlsx' under directory 'src/openpyxl/tests/test_data'
warning: no files found matching 'CREDITS'
Running setup.py install for django-keyedcache
Running setup.py install for django-livesettings
Running setup.py install for paramiko
Running setup.py install for django-picklefield
Running setup.py install for celery
no previously-included directories found matching 'tests/*.pyc'
no previously-included directories found matching 'docs/*.pyc'
no previously-included directories found matching 'contrib/*.pyc'
no previously-included directories found matching 'celery/*.pyc'
no previously-included directories found matching 'examples/*.pyc'
no previously-included directories found matching 'bin/*.pyc'
no previously-included directories found matching 'docs/.build'
no previously-included directories found matching 'docs/graffles'
no previously-included directories found matching '.tox/*'
Installing celeryctl script to /usr/local/bin
Installing celeryd script to /usr/local/bin
Installing camqadm script to /usr/local/bin
Installing celeryev script to /usr/local/bin
Installing celeryd-multi script to /usr/local/bin
Installing celerybeat script to /usr/local/bin
Running setup.py install for kombu
Running setup.py install for django-appconf
Running setup.py install for Pygments
Installing pygmentize script to /usr/local/bin
Running setup.py install for Jinja2
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no previously-included files matching '*.pyc' found under directory 'jinja2'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'jinja2'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
Running setup.py install for docutils
changing mode of build/scripts-2.7/rst2html.py from 644 to 755
changing mode of build/scripts-2.7/rst2s5.py from 644 to 755
changing mode of build/scripts-2.7/rst2latex.py from 644 to 755
changing mode of build/scripts-2.7/rst2xetex.py from 644 to 755
changing mode of build/scripts-2.7/rst2man.py from 644 to 755
changing mode of build/scripts-2.7/rst2xml.py from 644 to 755
changing mode of build/scripts-2.7/rst2pseudoxml.py from 644 to 755
changing mode of build/scripts-2.7/rstpep2html.py from 644 to 755
changing mode of build/scripts-2.7/rst2odt.py from 644 to 755
changing mode of build/scripts-2.7/rst2odt_prepstyles.py from 644 to 755
warning: no files found matching 'MANIFEST'
warning: no previously-included files matching '.cvsignore' found under directory '*'
warning: no previously-included files matching '*.pyc' found under directory '*'
warning: no previously-included files matching '*~' found under directory '*'
warning: no previously-included files matching '.DS_Store' found under directory '*'
changing mode of /usr/local/bin/rst2pseudoxml.py to 755
changing mode of /usr/local/bin/rst2man.py to 755
changing mode of /usr/local/bin/rst2xetex.py to 755
changing mode of /usr/local/bin/rst2html.py to 755
changing mode of /usr/local/bin/rst2s5.py to 755
changing mode of /usr/local/bin/rstpep2html.py to 755
changing mode of /usr/local/bin/rst2xml.py to 755
changing mode of /usr/local/bin/rst2odt.py to 755
changing mode of /usr/local/bin/rst2odt_prepstyles.py to 755
changing mode of /usr/local/bin/rst2latex.py to 755
Running setup.py install for anyjson
Running setup.py install for amqplib
Successfully installed django fabric psycopg2 django-celery kombu-sqlalchemy python-dateutil lxml django-compressor BeautifulSoup sphinx requests south xlrd boto ajaxuploader argparse sqlalchemy csvkit mimeparse django-tastypie longerusername openpyxl django-keyedcache django-livesettings paramiko django-picklefield celery kombu django-appconf Pygments Jinja2 docutils anyjson amqplib
Cleaning up...
+ mkdir /var/log/panda
mkdir: cannot create directory `/var/log/panda': File exists
+ touch /var/log/panda/panda.log
+ chown -R panda:panda /var/log/panda
+ mkdir /var/lib/panda
mkdir: cannot create directory `/var/lib/panda': File exists
+ mkdir /var/lib/panda/uploads
mkdir: cannot create directory `/var/lib/panda/uploads': File exists
+ mkdir /var/lib/panda/exports
mkdir: cannot create directory `/var/lib/panda/exports': File exists
+ mkdir /var/lib/panda/media
mkdir: cannot create directory `/var/lib/panda/media': File exists
+ chown -R panda:panda /var/lib/panda
+ sudo -u panda -E python manage.py syncdb --noinput
Syncing...
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_user_permissions
Creating table auth_user_groups
Creating table auth_user
Creating table auth_message
Creating table django_content_type
Creating table django_session
Creating table django_admin_log
Creating table django_site
Creating table south_migrationhistory
Creating table celery_taskmeta
Creating table celery_tasksetmeta
Creating table djcelery_intervalschedule
Creating table djcelery_crontabschedule
Creating table djcelery_periodictasks
Creating table djcelery_periodictask
Creating table djcelery_workerstate
Creating table djcelery_taskstate
Creating table livesettings_setting
Creating table livesettings_longsetting
Creating table panda_category
Creating table panda_taskstatus
Creating table panda_dataset_categories
Creating table panda_dataset
Creating table panda_dataupload
Creating table panda_export
Creating table panda_notification
Creating table panda_relatedupload
Creating table panda_userprofile
Installing custom SQL ...
Installing indexes ...
No fixtures found.
Synced:
> django.contrib.auth
> django.contrib.contenttypes
> django.contrib.sessions
> django.contrib.messages
> django.contrib.admin
> django.contrib.sites
> django.contrib.staticfiles
> south
> djcelery
> compressor
> livesettings
> panda
Not synced (use migrations):
- longerusername
- tastypie
(use ./manage.py migrate to migrate these)
+ sudo -u panda -E python manage.py migrate --noinput
Running migrations for longerusername:
- Migrating forwards to 0001_initial.
> longerusername:0001_initial
- Loading initial data for longerusername.
No fixtures found.
Running migrations for tastypie:
- Migrating forwards to 0001_initial.
> tastypie:0001_initial
- Loading initial data for tastypie.
No fixtures found.
+ sudo -u panda -E python manage.py loaddata panda/fixtures/init_panda.json
Installed 8 object(s) from 1 fixture(s)
+ sudo -u panda -E python manage.py collectstatic --noinput
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/gis/move_vertex_off.png'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/gis/move_vertex_on.png'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/inline-restore-8bit.png'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/inline-restore.png'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/default-bg-reverse.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector-search.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_success.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/arrow-up.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector-removeall.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/inline-delete-8bit.png'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector-remove.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tool-right_over.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/changelist-bg.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/deleted-overlay.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon-unknown.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/chooser_stacked-bg.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tool-right.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_changelink.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon-yes.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tooltag-arrowright_over.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_clock.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/nav-bg-grabber.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/changelist-bg_rtl.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_deletelink.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/inline-delete.png'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tooltag-add.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_error.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_addlink.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_alert.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_searchbox.png'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tool-left.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tooltag-add_over.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/default-bg.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector-addall.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector_stacked-remove.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tool-left_over.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/nav-bg-reverse.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector-add.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/arrow-down.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/chooser-bg.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/selector_stacked-add.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/tooltag-arrowright.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon-no.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/icon_calendar.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/nav-bg.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/img/admin/inline-splitter-bg.gif'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/dashboard.css'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/rtl.css'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/widgets.css'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/login.css'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/changelists.css'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/forms.css'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/base.css'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/css/ie.css'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/calendar.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/prepopulate.min.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/core.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/SelectFilter2.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/SelectBox.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/LICENSE-JQUERY.txt'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/timeparse.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/inlines.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/dateparse.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/inlines.min.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/urlify.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/compress.py'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/jquery.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/actions.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/getElementsBySelector.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/collapse.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/jquery.min.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/collapse.min.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/prepopulate.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/jquery.init.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/actions.min.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/admin/RelatedObjectLookups.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/admin/ordering.js'
Copying '/usr/local/lib/python2.7/dist-packages/django/contrib/admin/media/js/admin/DateTimeShortcuts.js'
Copying '/opt/panda/panda/static/panda_user_change_form.js'
Copying '/opt/panda/client/static/img/desc.gif'
Copying '/opt/panda/client/static/img/no-sort.gif'
Copying '/opt/panda/client/static/img/progress.png'
Copying '/opt/panda/client/static/img/ajax-loader.gif'
Copying '/opt/panda/client/static/img/glyphicons-halflings.png'
Copying '/opt/panda/client/static/img/asc.gif'
Copying '/opt/panda/client/static/img/panda_and_ire.png'
Copying '/opt/panda/client/static/img/glyphicons-halflings-white.png'
Copying '/opt/panda/client/static/css/reset.css'
Copying '/opt/panda/client/static/css/fileuploader.css'
Copying '/opt/panda/client/static/css/bootstrap.css'
Copying '/opt/panda/client/static/css/panda.css'
Copying '/opt/panda/client/static/css/loading.gif'
Copying '/opt/panda/client/static/js/application.js'
Copying '/opt/panda/client/static/js/utils.js'
Copying '/opt/panda/client/static/js/SpecRunner.html'
Copying '/opt/panda/client/static/js/models/datasets.js'
Copying '/opt/panda/client/static/js/models/data_uploads.js'
Copying '/opt/panda/client/static/js/models/tasks.js'
Copying '/opt/panda/client/static/js/models/users.js'
Copying '/opt/panda/client/static/js/models/related_uploads.js'
Copying '/opt/panda/client/static/js/models/categories.js'
Copying '/opt/panda/client/static/js/models/notifications.js'
Copying '/opt/panda/client/static/js/models/data.js'
Copying '/opt/panda/client/static/js/spec/mock_xhr_responses.js'
Copying '/opt/panda/client/static/js/spec/models/datasets.js'
Copying '/opt/panda/client/static/js/spec/models/tasks.js'
Copying '/opt/panda/client/static/js/spec/routers/index.js'
Copying '/opt/panda/client/static/js/spec/views/not_found.js'
Copying '/opt/panda/client/static/js/spec/views/root.js'
Copying '/opt/panda/client/static/js/lib/fileuploader.js'
Copying '/opt/panda/client/static/js/lib/jasmine-sinon.js'
Copying '/opt/panda/client/static/js/lib/jquery.tablesorter.js'
Copying '/opt/panda/client/static/js/lib/jasmine-jquery.js'
Copying '/opt/panda/client/static/js/lib/json2.js'
Copying '/opt/panda/client/static/js/lib/bootstrap.js'
Copying '/opt/panda/client/static/js/lib/backbone.js'
Copying '/opt/panda/client/static/js/lib/jquery.cookie.js'
Copying '/opt/panda/client/static/js/lib/underscore.js'
Copying '/opt/panda/client/static/js/lib/moment.js'
Copying '/opt/panda/client/static/js/lib/sinon-1.2.0.js'
Copying '/opt/panda/client/static/js/lib/backbone-tastypie.js'
Copying '/opt/panda/client/static/js/lib/bootbox.js'
Copying '/opt/panda/client/static/js/lib/jquery-1.7.1.js'
Copying '/opt/panda/client/static/js/lib/jasmine-1.1.0/jasmine_favicon.png'
Copying '/opt/panda/client/static/js/lib/jasmine-1.1.0/jasmine-html.js'
Copying '/opt/panda/client/static/js/lib/jasmine-
Durrr, that's not all of the log... however, the issue seems to be that uwsgi was already installed and needed to be restarted.
+ service uwsgi start
start: Job is already running: uwsgi
Closing as a duplicate of #440, which, if implemented, could account for restarting already running services.
|
gharchive/issue
| 2012-02-16T23:11:24 |
2025-04-01T06:39:57.466179
|
{
"authors": [
"eads"
],
"repo": "pandaproject/panda",
"url": "https://github.com/pandaproject/panda/issues/454",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
754871929
|
Fix table-short-captions
Support for Pandoc 2.10 is not working, and all the tests are broken too; they do not check anything.
This PR fixes Pandoc 2.10 support, and also fixes the tests to ensure they actually run (and updates them to the changes introduced by 7a49118).
Commit 6e1b2cc introduced support for Pandoc 2.10, but it doesn't work because it treats the new Table.caption.long object as the pre-2.10 Table.caption object, even though they are completely different lists. Table.caption.long is a list of Blocks, where each block has a content key which is a List of inlines (or it should be; a test would be needed for more complex captions?). The pre-2.10 Table.caption is a List of inlines, so the two cannot share the same manipulation code.
This PR fixes it by splitting the code into two different functions. They do follow the same structure, but it is better to keep them separate; otherwise it would require too many ifs, or code that is not clear.
For the pre-2.10 Table.caption, the code is kept as it is. For the new Table.caption.long, the outer list of Blocks is taken into account when searching for the inline Span with the short caption.
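For intuition, here is a rough dict-based model of the two shapes (Python, with hypothetical field names; the actual filter is Lua operating on the pandoc AST):

# Pre-2.10: Table.caption is a flat list of inlines.
old_caption = [{"tag": "Span", "text": "Short"}, {"tag": "Space"}]

# 2.10+: Table.caption.long is a list of Blocks, and each Block carries
# its own `content` list of inlines: one extra level of nesting.
new_caption_long = [{"content": [{"tag": "Span", "text": "Short"}]}]

def find_span(inlines):
    # Works directly on the pre-2.10 flat list.
    return next((i for i in inlines if i["tag"] == "Span"), None)

def find_span_in_blocks(blocks):
    # The 2.10+ code must unwrap the outer Blocks first.
    for block in blocks:
        span = find_span(block["content"])
        if span is not None:
            return span
    return None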
Now all tests do run and all pass, with both pre- and post-2.10 Pandoc (nix-run-version is a helper that allows me to use nixpkgs releases to run different versions of a command).
Before this PR (well, not exactly: this PR also fixes the tests, which is needed to see them failing, so this really means before the last commit fixing 2.10 support):
$ nix-run-version 19.09 pandoc make && echo "no errors"
pandoc 2.7.3
Compiled with pandoc-types 1.17.6, texmath 0.11.2.2, skylighting 0.8.2
no errors
$ nix-run-version 20.09 pandoc make && echo "no errors"
pandoc 2.10.1
Compiled with pandoc-types 1.21, texmath 0.12.0.2, skylighting 0.8.5
Output does not contain `\def\pandoctableshortcapt{} % .unlisted`.
make: *** [test] Error 1
With this PR:
$ nix-run-version 19.09 pandoc make && echo "no errors"
pandoc 2.7.3
Compiled with pandoc-types 1.17.6, texmath 0.11.2.2, skylighting 0.8.2
no errors
$ nix-run-version 20.09 pandoc make && echo "no errors"
pandoc 2.10.1
Compiled with pandoc-types 1.21, texmath 0.12.0.2, skylighting 0.8.5
no errors
$ nix-run-version unstable pandoc make && echo "no errors"
pandoc 2.11.2
Compiled with pandoc-types 1.22, texmath 0.12.0.3, skylighting 0.10.0.3,
citeproc 0.2, ipynb 0.1.0.1
no errors
@blake-riley please review.
Agh, it works with every Pandoc version I tried on my Mac. I need to figure out why the CI is failing.
Rebased to fix the CI ($(</dev/stdin) was not working so I went back to $(cat -)).
|
gharchive/pull-request
| 2020-12-02T02:13:23 |
2025-04-01T06:39:57.696153
|
{
"authors": [
"smancill"
],
"repo": "pandoc/lua-filters",
"url": "https://github.com/pandoc/lua-filters/pull/149",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
143657635
|
LuaState.checkRef > u.act will be NullReference
I don't know why, but this exception is sometimes thrown.
I think it is a bug; maybe the delegate has been GC'd.
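The suspected failure mode (a callback handed over to native code being collected because no managed reference is kept) is a classic binding bug. As a hedged illustration of the pattern, here is an analogous sketch in Python's ctypes, where the same keep-alive rule applies; slua itself is C#, and lib.register is a hypothetical native entry point:

import ctypes

CALLBACK = ctypes.CFUNCTYPE(None, ctypes.c_int)

def register_bad(lib):
    # BUG: the CFUNCTYPE wrapper is a temporary. Once this function
    # returns it can be garbage-collected while the native side still
    # holds the raw pointer, so a later invocation hits a dead object
    # (the analogue of u.act becoming a NullReference).
    lib.register(CALLBACK(lambda n: print(n)))

_keepalive = []

def register_good(lib):
    cb = CALLBACK(lambda n: print(n))
    _keepalive.append(cb)  # pin the wrapper for as long as native code may call it
    lib.register(cb)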
|
gharchive/issue
| 2016-03-26T03:00:15 |
2025-04-01T06:39:57.713096
|
{
"authors": [
"chiuan",
"pangweiwei"
],
"repo": "pangweiwei/slua",
"url": "https://github.com/pangweiwei/slua/issues/134",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2193735730
|
[Question]: gnet.run() reports the error: gnet engine is stopping with error: kevent add|clear:function not implemented
Actions I've taken before I'm here
[X] I've thoroughly read the documentations about this problem but still have no answer.
[X] I've searched the Github Issues/Discussions but didn't find any similar problems that have been solved.
[X] I've searched the internet for this problem but didn't find anything helpful.
Questions with details
I want to use gnet to run a TCP-based socket server.
I implemented the onboot, open, close, shutdown, and traffic interfaces.
On startup it reports the error: gnet engine is stopping with error: kevent add|clear:function not implemented
Code snippets (optional)
package Client

import (
    "context"
    "fmt"
    "mt_server/Common"
    "mt_server/Game"
    "mt_server/Logs"
    "sync/atomic"
    "time"

    "github.com/panjf2000/gnet/v2"
)

var (
    MtServerListener *MTServer
)

type MTServer struct {
    gnet.BuiltinEventEngine
    eng          gnet.Engine
    protoAddr    string
    multicore    bool
    connected    int32
    disconnected int32
}

func (s *MTServer) OnBoot(eng gnet.Engine) (action gnet.Action) {
    Logs.InfoF("BINDING PORT [%d]", Game.ConfigObj.Port)
    s.eng = eng
    return
}

func (s *MTServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
    atomic.AddInt32(&s.connected, 1)
    atomic.AddInt32(&Common.SocketConnectPlayer, 1)
    Game.InitDescriptor(c)
    return
}

func (s *MTServer) OnClose(c gnet.Conn, err error) (action gnet.Action) {
    atomic.AddInt32(&s.disconnected, 1)
    atomic.AddInt32(&s.connected, -1)
    atomic.AddInt32(&Common.SocketConnectPlayer, -1)
    c.Flush()
    SocketClose(c)
    return
}

func (s *MTServer) OnTraffic(c gnet.Conn) (action gnet.Action) {
    l := c.InboundBuffered()
    data, _ := c.Peek(l)
    RecvBroadcast <- Common.MsgID{Conn: c, Packet: data}
    return
}

func (s *MTServer) OnShutdown(eng gnet.Engine) {
    Game.CloseAllDescriptor()
}

func InitMtServer() {
    multicore := true
    MtServerListener = &MTServer{
        protoAddr: fmt.Sprintf("tcp://:%d", Game.ConfigObj.Port),
        multicore: multicore,
    }
    err := gnet.Run(MtServerListener, MtServerListener.protoAddr, gnet.WithMulticore(multicore))
    if err != nil {
        Logs.ErrorF("server exits with error: %v", err)
    }
}

func ListenClose() {
    ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
    gnet.Stop(ctx, MtServerListener.protoAddr)
    cancel()
}
One more note: the system is FreeBSD.
Please open a new issue using this template; there is too little of the necessary information right now.
|
gharchive/issue
| 2024-03-19T00:17:45 |
2025-04-01T06:39:57.725083
|
{
"authors": [
"dul666888",
"panjf2000"
],
"repo": "panjf2000/gnet",
"url": "https://github.com/panjf2000/gnet/issues/551",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
266879663
|
Render documentation by default
Unless it is configured otherwise, have Grove render documentation by default. This minimizes the need for configuration and gives the user the best experience right out of the box.
I thought about doing this but after surveying our 20 to 30 repos, I realized that documentation would be generated on far fewer than half of them, hence the default.
I didn't want a bunch of repos with empty elm-docs directories.
If you don't like this for your situation, you can configure Grove globally to generate documents by default and then turn them off explicitly on a case by case basis.
Ya I have it globally configured to create documentation.
It seems our case would be a good one for using the --local --docs=off option instead of having that be the default.
In the general case, developers will use the Elm documentation format since the compiler requires placeholders at least. Since those will be there by default, it makes sense to render them by default.
I'm sure now that we have this tool, moving forward documenting in the code will be our default. Long term I think it makes sense.
Having function specific documentation is helpful for libraries. But as things get more complicated, API documentation is less and less useful. A great example can be seen at nodegit.
At first the documentation seems really nice. But it's just API docs. And when I started using this library in Grove I found it sorely lacking.
What I needed for this library was how to use these functions together to accomplish larger goals. The example code they provided and their own source code were more useful to me than these extensive function-by-function docs.
Once I had that as a resource, I nearly stopped looking at the API docs.
This is why I don't plan to automatically create docs for elm-slate (when I finally get around to documenting it). It's just too complicated to expect that API docs are going to be useful. I expect that the documentation will consist of conceptual documentation, examples and, most importantly, a few reference implementations.
|
gharchive/issue
| 2017-10-19T15:11:29 |
2025-04-01T06:39:57.734972
|
{
"authors": [
"alexgig",
"cscalfani"
],
"repo": "panosoft/elm-grove",
"url": "https://github.com/panosoft/elm-grove/issues/4",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
2407340295
|
File descriptor from content URI isn't closed when disposed
Describe the bug
.close isn't called (or .use isn't used) when the file descriptor (here from a content URI) is not needed anymore when using supersampling.
Affected platforms
Select of the platforms below:
[x] Android
[x] Desktop
Versions
zoomimage version: 1.1.0-alpha01
Kotlin version: 2.0.0-RC1
Compose version(s)):1.6.8
Running Devices
Samsung Galaxy A12; One UI Core 4.1; arm64
Code sample
// Create a ZoomImage with a zoomState and set the subsampling
zoomState.subsampling.setImageSource(Uri.parse("content://..."))
Reproduction step
Create an image with subsampling using a content URI
Zoom
Unzoom
Error/stacktrace
Details
StrictMode policy violation: android.os.strictmode.LeakedClosableViolation: A resource was acquired at attached stack trace but never released. See java.io.Closeable for information on avoiding resource leaks.
at android.os.StrictMode$AndroidCloseGuardReporter.report(StrictMode.java:1987)
at dalvik.system.CloseGuard.warnIfOpen(CloseGuard.java:336)
at android.os.ParcelFileDescriptor.finalize(ParcelFileDescriptor.java:1069)
at java.lang.Daemons$FinalizerDaemon.doFinalize(Daemons.java:339)
at java.lang.Daemons$FinalizerDaemon.processReference(Daemons.java:324)
at java.lang.Daemons$FinalizerDaemon.runInternal(Daemons.java:300)
at java.lang.Daemons$Daemon.run(Daemons.java:145)
at java.lang.Thread.run(Thread.java:1012)
Caused by: java.lang.Throwable: Explicit termination method 'close' not called
at dalvik.system.CloseGuard.openWithCallSite(CloseGuard.java:288)
at dalvik.system.CloseGuard.open(CloseGuard.java:257)
at android.os.ParcelFileDescriptor.(ParcelFileDescriptor.java:206)
at android.os.ParcelFileDescriptor$2.createFromParcel(ParcelFileDescriptor.java:1129)
at android.os.ParcelFileDescriptor$2.createFromParcel(ParcelFileDescriptor.java:1120)
at android.os.storage.IStorageManager$Stub$Proxy.openProxyFileDescriptor(IStorageManager.java:3577)
at android.os.storage.StorageManager.openProxyFileDescriptor(StorageManager.java:2141)
at android.os.storage.StorageManager.openProxyFileDescriptor(StorageManager.java:2202)
at fr.theskyblockman.lifechest.vault.EncryptedContentProvider.openFile(EncryptedContentProvider.kt:159)
at android.content.ContentProvider.openAssetFile(ContentProvider.java:2138)
at android.content.ContentProvider.openTypedAssetFile(ContentProvider.java:2314)
at android.content.ContentProvider.openTypedAssetFile(ContentProvider.java:2381)
at android.content.ContentProvider$Transport.openTypedAssetFile(ContentProvider.java:562)
at android.content.ContentResolver.openTypedAssetFileDescriptor(ContentResolver.java:2034)
at android.content.ContentResolver.openAssetFileDescriptor(ContentResolver.java:1849)
at android.content.ContentResolver.openInputStream(ContentResolver.java:1525)
at com.github.panpf.zoomimage.subsampling.ContentImageSource$openSource$2.invokeSuspend(AndroidImageSource.kt:101)
at com.github.panpf.zoomimage.subsampling.ContentImageSource$openSource$2.invoke(Unknown Source:8)
at com.github.panpf.zoomimage.subsampling.ContentImageSource$openSource$2.invoke(Unknown Source:4)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:61)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext(Builders.common.kt:163)
at kotlinx.coroutines.BuildersKt.withContext(Unknown Source:1)
at com.github.panpf.zoomimage.subsampling.ContentImageSource.openSource-IoAF18A(AndroidImageSource.kt:99)
at com.github.panpf.zoomimage.subsampling.internal.BitmapFactoryDecodeHelper$getOrCreateDecoder$2.invokeSuspend(BitmapFactoryDecodeHelper.kt:80)
at com.github.panpf.zoomimage.subsampling.internal.BitmapFactoryDecodeHelper$getOrCreateDecoder$2.invoke(Unknown Source:8)
at com.github.panpf.zoomimage.subsampling.internal.BitmapFactoryDecodeHelper$getOrCreateDecoder$2.invoke(Unknown Source:4)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:61)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext(Builders.common.kt:163)
at kotlinx.coroutines.BuildersKt.withContext(Unknown Source:1)
at com.github.panpf.zoomimage.subsampling.internal.BitmapFactoryDecodeHelper.getOrCreateDecoder(BitmapFactoryDecodeHelper.kt:79)
at com.github.panpf.zoomimage.subsampling.internal.BitmapFactoryDecodeHelper.access$getOrCreateDecoder(BitmapFactoryDecodeHelper.kt:25)
at com.github.panpf.zoomimage.subsampling.internal.BitmapFactoryDecodeHelper$decodeRegion$2.invokeSuspend(BitmapFactoryDecodeHelper.kt:47)
at com.github.panpf.zoomimage.subsampling.internal.BitmapFactoryDecodeHelper$decodeRegion$2.invoke(Unknown Source:8)
at com.github.panpf.zoomimage.subsampling.internal.BitmapFactoryDecodeHelper$decodeRegion$2.invoke(Unknown Source:4)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:61)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext(Builders.common.kt:163)
at kotlinx.coroutines.BuildersKt.withContext(Unknown Source:1)
at com.github.panpf.zoomimage.subsampling.internal.BitmapFactoryDecodeHelper.decodeRegion(BitmapFactoryDecodeHelper.kt:39)
at com.github.panpf.zoomimage.subsampling.internal.TileDecoder$decode$tileBitmap$1.invokeSuspend(TileDecoder.kt:57)
at com.github.panpf.zoomimage.subsampling.internal.TileDecoder$decode$tileBitmap$1.invoke(Unknown Source:8)
at com.github.panpf.zoomimage.subsampling.internal.TileDecoder$decode$tileBitmap$1.invoke(Unknown Source:4)
at com.github.panpf.zoomimage.subsampling.internal.TileDecoder$useDecoder$2.invokeSuspend(TileDecoder.kt:75)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:104)
at kotlinx.coroutines.internal.LimitedDispatcher$Worker.run(LimitedDispatcher.kt:111)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:99)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:584)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:811)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:715)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:702)
at com.github.panpf.zoomimage.subsampling.internal.TileDecoder$decode$tileBitmap$1.invoke(Unknown Source:8)
at com.github.panpf.zoomimage.subsampling.internal.TileDecoder$decode$tileBitmap$1.invoke(Unknown Source:4)
at com.github.panpf.zoomimage.subsampling.internal.TileDecoder$useDecoder$2.invokeSuspend(TileDecoder.kt:75)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:104)
at kotlinx.coroutines.internal.LimitedDispatcher$Worker.run(LimitedDispatcher.kt:111)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:99)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:584)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:811)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:715)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:702)
This is indeed closed by use, so this may be a false positive from StrictMode
https://github.com/panpf/zoomimage/blob/6bf40159f1dbc1e21ba8dd9b28900349c4c3fa30/zoomimage-core/src/androidMain/kotlin/com/github/panpf/zoomimage/subsampling/internal/BitmapFactoryDecodeHelper.kt#L80
This seems quite strange; I have never seen a false positive from StrictMode before.
I use my own ContentProvider, so I can listen for when a file is opened and closed. When I initialized the ZoomImage, I saw a giant wall of events (the fd attached to the URI is opened more than 75 times in less than 300 ms!), which could be (and probably is) the cause of StrictMode's error. I believe this should be handled a little better. Here I use a ProxyFileDescriptor, so the OS is under a bit more strain than with a normal content URI, but it seems abusive not to keep a ParcelFileDescriptor around, as it does not store any state anyway (I do, but only for optimization).
For convenience I also add the Composable I use with ZoomImage:
@Composable
fun InteractiveImage(
    modifier: Modifier = Modifier,
    bitmap: ImageBitmap,
    uri: Uri?,
    subsample: Boolean = true,
    fileName: String,
    isFullscreen: Boolean,
    contentScale: ContentScale = ContentScale.Fit,
    onClick: () -> Unit,
) {
    val context = LocalContext.current
    val zoomState: ZoomState = rememberZoomState()
    LaunchedEffect(context, zoomState, uri, subsample) {
        if (subsample) {
            zoomState.subsampling.setImageSource(ImageSource.fromContent(context, uri!!))
        }
    }
    val painter = remember {
        BitmapPainter(bitmap)
    }
    Box(
        contentAlignment = Alignment.Center,
        modifier = if (isFullscreen) Modifier.background(Color.Black) else Modifier
    ) {
        ZoomImage(
            painter = painter,
            contentDescription = stringResource(R.string.image_alt_text, fileName),
            modifier = modifier
                .fillMaxSize(),
            contentScale = contentScale,
            onTap = {
                onClick()
            },
            zoomState = zoomState
        )
    }
}
I have reproduced the problem of opening files multiple times in a short period of time. This is caused by the failure of concurrency control. I am working hard to fix this problem.
I have not reproduced the LeakedClosableViolation problem reported by StrictMode on an API 31 emulator. Can you give me more precise development environment information, or a code sample that reproduces it on the emulator?
I use my own content URIs which, depending on the context of my app, enable me to read a file I encrypted earlier. In my class implementing ContentProvider I essentially have this method (some values need to be tweaked to make it work in another environment):
override fun openFile(uri: Uri, mode: String): ParcelFileDescriptor? {
    if (mode != "r") {
        throw UnsupportedOperationException("Only reading is supported")
    }
    val file = getFile(uri) ?: return null // Set a File object here to test
    val storageManager = context?.getSystemService(Context.STORAGE_SERVICE) as StorageManager?
        ?: throw SecurityException("No storage manager")
    val handlerThread = HandlerThread("BackgroundFileReader")
    handlerThread.start()
    val randomAccessFile = RandomAccessFile(file, "r")
    return storageManager.openProxyFileDescriptor(
        ParcelFileDescriptor.MODE_READ_ONLY, EncryptedProxyFileDescriptorCallback(
            file,
            randomAccessFile,
            handlerThread
        ), Handler(handlerThread.looper)
    )
}

class EncryptedProxyFileDescriptorCallback(
    private val file: File,
    private val randomAccessFile: RandomAccessFile,
    private val handlerThread: HandlerThread
) : ProxyFileDescriptorCallback() {
    override fun onGetSize(): Long {
        return file.length()
    }

    override fun onRead(offset: Long, size: Int, data: ByteArray?): Int {
        // Guard against a null buffer; clamp to the remaining bytes of the file.
        if (data == null) return if (offset + size > file.length()) (file.length() - offset).toInt() else size
        randomAccessFile.seek(offset)
        val result = randomAccessFile.read(data, 0, size)
        return result
    }

    init {
        Log.d("EncryptedContentProvider", "Initializing file")
    }

    override fun onRelease() {
        Log.d("EncryptedContentProvider", "Releasing file")
        handlerThread.quitSafely()
        randomAccessFile.close()
    }
}
This is the ContentProvider for the URI I have as an argument in https://github.com/panpf/zoomimage/issues/29#issuecomment-2228105630
I heavily edited the code to accept unencrypted files; normally I run decryption on the relevant part of the file, which adds more latency/processing time. I use quite large files (around 5000x8000) to do my tests.
This piece of code is part of a relatively large app running the latest versions of Kotlin, Jetpack Compose, and AGP; it does not have any networking involved.
Version 1.1.0-alpha03 attempts to address this issue; please retest.
I have retested with 1.1.0-alpha03: the file descriptor is now only created 4 times when the image is opened and the subsampling is initialized, which is totally acceptable behavior. Now the StrictMode error is gone. Bug fixed.
This is good news
|
gharchive/issue
| 2024-07-14T09:08:30 |
2025-04-01T06:39:57.748769
|
{
"authors": [
"panpf",
"theskyblockman"
],
"repo": "panpf/zoomimage",
"url": "https://github.com/panpf/zoomimage/issues/29",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
432914641
|
Update Composer dependencies (2019-04-14-00-03)
Loading composer repositories with package information
Updating dependencies
Writing lock file
Generating optimized autoload files
> DrupalProject\composer\ScriptHandler::createRequiredFiles
Visual regression test passed!
View the visual regression test report
|
gharchive/pull-request
| 2019-04-14T00:03:33 |
2025-04-01T06:39:57.785379
|
{
"authors": [
"ataylorme"
],
"repo": "pantheon-training-org/eseguinte-drupalcon-seattle-2019",
"url": "https://github.com/pantheon-training-org/eseguinte-drupalcon-seattle-2019/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
952233737
|
Update Posts “technologyvshumantrafficking”
Automatically generated by Netlify CMS
👷 Deploy Preview for rangitoto-reporter processing.
🔨 Explore the source changes: 14f4034e313ddec9f71b90fb1be1dd1467420fb1
🔍 Inspect the deploy log: https://app.netlify.com/sites/rangitoto-reporter/deploys/60fd2de1d46c3a00073a97db
|
gharchive/pull-request
| 2021-07-25T09:24:48 |
2025-04-01T06:39:57.796109
|
{
"authors": [
"pantryfight"
],
"repo": "pantryfight/rangitoto-reporter",
"url": "https://github.com/pantryfight/rangitoto-reporter/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
545859979
|
Logically dead code in requests auth.py
https://github.com/pantsbuild/pex/blob/a2b6d0a645824e5ed20e422f34482b60e0e7cdd6/pex/vendor/_vendored/pip/pip/_vendor/requests/auth.py#L223
Line 175, entdig = None
Line 222, if entdig: cannot be true
Line 223 is never reached
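For illustration, the reported pattern reduces to something like the following hypothetical Python sketch (simplified; not the vendored requests source):

def build_digest_header():
    entdig = None  # line ~175: the only assignment on this path
    header = "username=..., response=..."
    # ...nothing between here and the guard ever reassigns entdig...
    if entdig:  # line ~222: always falsy, so the branch below is dead
        header += ', digest="%s"' % entdig  # line ~223: never reached
    return header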
Closing as won't fix, as explained in #844.
|
gharchive/issue
| 2020-01-06T18:03:36 |
2025-04-01T06:39:57.812635
|
{
"authors": [
"huornlmj",
"jsirois"
],
"repo": "pantsbuild/pex",
"url": "https://github.com/pantsbuild/pex/issues/848",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2408433534
|
ExEx that registers an RLPx subprotocol
Similar to https://github.com/paradigmxyz/reth/issues/7130, but integrated in as an ExEx.
This should include a custom RLPx protocol, for example, broadcasting decoded logs or similar over p2p.
May I take this, @shekhirin?
|
gharchive/issue
| 2024-07-15T10:51:05 |
2025-04-01T06:39:57.838276
|
{
"authors": [
"loocapro",
"shekhirin"
],
"repo": "paradigmxyz/reth-exex-examples",
"url": "https://github.com/paradigmxyz/reth-exex-examples/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1977589562
|
OOM and MerkleExecute Error While Syncing
Describe the bug
I've been trying to sync a new reth node (with Lighthouse) but am running into issues syncing the node and having it killed by OOM errors.
Issues:
The process is repeatedly killed while syncing due to OOM
When running MerkleExecute, a validation error is encountered at the end of the stage checkpoint; it doesn't kill the process, but the stage doesn't seem to advance beyond it
Upon restart, the node will randomly reset to a 98.3% checkpoint in AccountHashing, even after it has been shut down and restarted at a later checkpoint
The prior stages are synced to a point far from the current head
Setup:
i5-1235U Mini PC, 32 GB memory, 4 TB SSD
Ubuntu 22.04.3 LTS
reth 0.1.0-alpha.10 + lighthouse 4.2.0
Troubleshooting Steps:
I've tried restarting this process several times, including gracefully exiting along the way to try to advance the checkpointing, but hit the issues above. I increased swap space from 2 GB to 16 GB. I also tried shutting off Lighthouse to try to let the pipeline finish. I've also monitored the resource usage and see only ~20% memory and swap usage when reth is running.
Logs:
[1] OOM Errors from grep -i oom /var/log/syslog
Nov 4 18:44:04 rethship kernel: [613255.141797] tokio-runtime-w invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
Nov 4 18:44:04 rethship kernel: [613255.141824] oom_kill_process+0x108/0x1c0
Nov 4 18:44:04 rethship kernel: [613255.141829] __alloc_pages_may_oom+0x117/0x1e0
Nov 4 18:44:04 rethship kernel: [613255.141958] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Nov 4 18:44:04 rethship kernel: [613255.141976] [ 636] 108 636 3707 192 65536 128 -900 systemd-oomd
Nov 4 18:44:04 rethship kernel: [613255.142266] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/user@1000.service/app.slice/app-org.gnome.Terminal.slice/vte-spawn-a346d30d-cd63-4709-ac50-8f1a2cbcceeb.scope,task=reth,pid=981137,uid=1000
Nov 4 18:44:04 rethship kernel: [613255.142326] Out of memory: Killed process 981137 (reth) total-vm:4376905076kB, anon-rss:27667340kB, file-rss:128kB, shmem-rss:0kB, UID:1000 pgtables:2043212kB oom_score_adj:0
Nov 4 18:44:04 rethship systemd[1]: user@1000.service: A process of this unit has been killed by the OOM killer.
Nov 4 18:44:04 rethship systemd[1728]: vte-spawn-a346d30d-cd63-4709-ac50-8f1a2cbcceeb.scope: A process of this unit has been killed by the OOM killer.
Nov 4 18:44:06 rethship kernel: [613257.682761] oom_reaper: reaped process 981137 (reth), now anon-rss:204kB, file-rss:128kB, shmem-rss:0kB
[2] See node logs below
[3] Logs upon restart
2023-11-05T01:27:13.175346Z WARN consensus::engine: Pipeline sync progress is inconsistent first_stage_checkpoint=18200983 inconsistent_stage_id=AccountHashing inconsistent_stage_checkpoint=17933127
2023-11-05T01:27:13.181836Z INFO reth::node::events: Executing stage pipeline_stages=1/13 stage=Headers from=18200983 checkpoint=100.0% eta=unknown
2023-11-05T01:27:13.183994Z INFO execute{stage=Headers}: sync::stages::headers: Target block already reached checkpoint=100.0% target=Hash(0x7f0f336dafd02579116db16639213ad274de6802821e640e7a2ca951b4ac0e5e)
2023-11-05T01:27:13.184061Z INFO reth::node::events: Stage finished executing pipeline_stages=1/13 stage=Headers block=18200983 checkpoint=100.0% eta=unknown
2023-11-05T01:27:13.184069Z INFO reth::node::events: Executing stage pipeline_stages=2/13 stage=TotalDifficulty from=18200983 checkpoint=100.0% eta=unknown
2023-11-05T01:27:13.184073Z INFO reth::node::events: Stage finished executing pipeline_stages=2/13 stage=TotalDifficulty block=18200983 checkpoint=100.0% eta=unknown
2023-11-05T01:27:13.184106Z INFO reth::node::events: Executing stage pipeline_stages=3/13 stage=Bodies from=18200983 checkpoint=100.0% eta=unknown
2023-11-05T01:27:13.184735Z INFO reth::node::events: Stage finished executing pipeline_stages=3/13 stage=Bodies block=18200983 checkpoint=100.0% eta=unknown
2023-11-05T01:27:13.184748Z INFO reth::node::events: Executing stage pipeline_stages=4/13 stage=SenderRecovery from=18200983 checkpoint=100.0% eta=unknown
2023-11-05T01:27:13.184757Z INFO reth::node::events: Stage finished executing pipeline_stages=4/13 stage=SenderRecovery block=18200983 checkpoint=100.0% eta=unknown
2023-11-05T01:27:13.184763Z INFO reth::node::events: Executing stage pipeline_stages=5/13 stage=Execution from=18200983 checkpoint=100.0% eta=unknown
2023-11-05T01:27:13.184904Z INFO execute{stage=MerkleUnwind}: sync::stages::merkle::unwind: Stage is always skipped
2023-11-05T01:27:13.184914Z INFO reth::node::events: Stage finished executing pipeline_stages=5/13 stage=Execution block=18200983 checkpoint=100.0% eta=unknown
2023-11-05T01:27:13.184919Z INFO reth::node::events: Executing stage pipeline_stages=6/13 stage=MerkleUnwind from=18200983 checkpoint=18200983 eta=unknown
2023-11-05T01:27:13.184922Z INFO reth::node::events: Stage finished executing pipeline_stages=6/13 stage=MerkleUnwind block=18200983 checkpoint=18200983 eta=unknown
2023-11-05T01:27:13.184974Z INFO reth::node::events: Executing stage pipeline_stages=7/13 stage=AccountHashing from=17933127 checkpoint=98.3% eta=unknown
2023-11-05T01:27:16.176960Z INFO reth::cli: Status connected_peers=1 stage=AccountHashing checkpoint=98.3% eta=unknown
2023-11-05T01:27:41.177238Z INFO reth::cli: Status connected_peers=16 stage=AccountHashing checkpoint=98.3% eta=unknown
Steps to reproduce
Unsure
Node logs
2023-11-04T22:21:41.943424Z INFO reth::node::events: Stage committed progress pipeline_stages=9/13 stage=MerkleExecute block=17933127 checkpoint=99.9% eta=4s
2023-11-04T22:21:43.101493Z INFO reth::node::events: Stage committed progress pipeline_stages=9/13 stage=MerkleExecute block=17933127 checkpoint=99.9% eta=3s
2023-11-04T22:21:45.066253Z INFO reth::cli: Status connected_peers=84 stage=MerkleExecute checkpoint=99.9% eta=1s
2023-11-04T22:21:45.408990Z INFO reth::node::events: Stage committed progress pipeline_stages=9/13 stage=MerkleExecute block=17933127 checkpoint=100.0% eta=0s
2023-11-04T22:21:46.032948Z WARN execute{stage=MerkleExecute}: sync::stages::merkle: Failed to verify block state root target_block=18200983 got=0x823da0e726af6e6c62fd9d0a2f8cc6270a83019cbf59a6bce386f5ba9642c3f2 expected=SealedHeader { header: Header { parent_hash: 0xf2f6cb468c897676ebc423d3e71ec1765913222a72dce83ac0b6d85ddcced07c, ommers_hash: 0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347, beneficiary: 0x4838b106fce9647bdf1e7877bf73ce8b0bad5f97, state_root: 0xbb09e5726aac4165aae76903a03f697650d4f86c9db65bfd6f8590fca9b1eb2a, transactions_root: 0xede23d510c6e60afb89e493dc41e012828c4604409f611ee2d3bbbe5b9bf9f2d, receipts_root: 0x7a32d6f19103688a4c176357e25dae09a29aef103c9a52d7f3a5bd40892500cd, withdrawals_root: Some(0x5549b704ab4127950b498748eab9700556a86712c262808ad13226db078de894), logs_bloom: 0x2a619118d0c498000b83060ac900122c6e92008118901c4538556025788140e400c614629018682106105748075237a31a421418aa070a091a3e042000740c03e6aa4019432288e64928acea4e2eb06a902606222cc68222bac02210862028b0a6e000926a9e10930faae186c04089545431821914210534a600459ca8d0090010c04023e88609b120c42f9002b011614080681129549058a4d480499a1001388aa6ed6840d82c84881e10959a589436a28404a26613561ab0a300ea08842005384c204305821404d8692155109f1254020e1caa8d000812592ac8425041a0c02452f2504c50921a2cd46608684593a398009030010ba066702c88309029c8f0, difficulty: 0x0_U256, number: 18200983, gas_limit: 29970705, gas_used: 29967592, timestamp: 1695501911, mix_hash: 0x7c474df647faacac2251c724cc264942e3a0aab0e8d7657126ad3e1c75dfa3a1, nonce: 0, base_fee_per_gas: Some(6881759407), blob_gas_used: None, excess_blob_gas: None, parent_beacon_block_root: None, extra_data: Bytes(0x546974616e2028746974616e6275696c6465722e78797a29) }, hash: 0x7f0f336dafd02579116db16639213ad274de6802821e640e7a2ca951b4ac0e5e }
2023-11-04T22:21:46.035592Z ERROR execute{stage=MerkleExecute}: sync::pipeline: Stage encountered a validation error: Block state root (0x823da0e726af6e6c62fd9d0a2f8cc6270a83019cbf59a6bce386f5ba9642c3f2) is different from expected: (0xbb09e5726aac4165aae76903a03f697650d4f86c9db65bfd6f8590fca9b1eb2a) stage=MerkleExecute bad_block=18200983
2023-11-04T22:22:10.066552Z INFO reth::cli: Status connected_peers=85 stage=MerkleExecute checkpoint=100.0% eta=unknown
2023-11-04T22:22:35.066721Z INFO reth::cli: Status connected_peers=85 stage=MerkleExecute checkpoint=100.0% eta=unknown
[...]
2023-11-04T23:43:25.067648Z INFO reth::cli: Status connected_peers=97 stage=MerkleExecute checkpoint=100.0% eta=unknown
2023-11-04T23:43:50.081770Z INFO reth::cli: Status connected_peers=96 stage=MerkleExecute checkpoint=100.0% eta=unknown
./run_reth.sh: line 4: 981137 Killed RUST_LOG=info /home/user/dev/reth/target/release/reth node --http --http.api debug,eth,net,trace,txpool,web3,rpc --http.addr 192.168.1.248
Platform(s)
Linux (x86)
What version/commit are you on?
reth 0.1.0-alpha.10 (a60dbfdd)
What database version are you on?
Current database version: 1
Local database version: 1
What type of node are you running?
Archive (default)
What prune config do you use, if any?
N/A
If you've built Reth from source, provide the full command you used
cargo build --release
Code of Conduct
[X] I agree to follow the Code of Conduct
Hi folks - has this been encountered again?
Hey @gakonst! Yeah, I keep hitting this OOM error. Happy to help triage.
I ended up clearing my state and resyncing, and I haven't had the issue again. But it definitely seems like there's some bug lurking in here. I'll leave this open for others to add in.
Following @theforager's suggestion, I also cleared the state and resynced. My node is now tracking the tip. Thanks!
I'm considering this fixed with https://github.com/paradigmxyz/reth/pull/7364 - users who upgraded from beta.4 or earlier will have to change their config like we recommend in the release notes:
https://github.com/paradigmxyz/reth/releases/tag/v0.2.0-beta.5
[stages.merkle]
-clean_threshold = 50000
+clean_threshold = 5000
|
gharchive/issue
| 2023-11-05T01:50:04 |
2025-04-01T06:39:57.849718
|
{
"authors": [
"ChrisTorresLugo",
"Rjected",
"gakonst",
"theforager"
],
"repo": "paradigmxyz/reth",
"url": "https://github.com/paradigmxyz/reth/issues/5298",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2126101903
|
reuse alloy eips constants
Resolves #6489
sorry for blocking the issue for so long 😅
I'm a rust-newbie trying to figure this out on the fly,
it's taking me some time to get familiar with this code base (& Rust itself)
very sorry for the inconvenience
all good :)
this is not critical, so no rush
happy to help
|
gharchive/pull-request
| 2024-02-08T21:30:49 |
2025-04-01T06:39:57.852205
|
{
"authors": [
"ThreeHrSleep",
"i-m-aditya",
"mattsse"
],
"repo": "paradigmxyz/reth",
"url": "https://github.com/paradigmxyz/reth/pull/6496",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2308401123
|
KeyNotFoundException is thrown when Project Settings property is used for the first time .
Describe the bug
In my extension testing I observed that when using the useProjectSettings hook, the values displayed on the very first load do not come from the projectSettings.json file; instead they come from the placeholders passed at hook initialization.
To Reproduce
Steps to reproduce the behavior:
Set up projectSettings.json with the below information
"paranextExtTesting.highlightColor_projectSetting_TC13": {
"label": "%paranextExtTesting.highlightColor%",
"description": "%paranextExtTesting.highlightColorDescription%",
"default": "Aqua",
"includeProjectTypes": ["ParatextStandard"],
"excludeProjectTypes": null
},
On the webview use the PAPI useProjectSettings hook
const [projectColor, setProjectColorInternally] = useProjectSetting(
'ParatextStandard',
'b4c501ad2538989d6fb723518e92408406e232d3',
'paranextExtTesting.highlightColor_projectSetting_TC13',
'Green',
);
You will find below exception in the main.log and warning in console.log
[2024-05-20 17:21:00.390] [warn] Tried to retrieve data immediately for Setting with selector "paranextExtTesting.highlightColor_projectSetting_TC13", but it threw. Error: System.Collections.Generic.KeyNotFoundException: The given key 'paranextExtTesting.highlightColor_projectSetting_TC13' was not present in the dictionary.
at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
at Paranext.DataProvider.Projects.ParatextProjectDataProvider.GetProjectSetting(String jsonKey) in C:\Repos\paranext-core\c-sharp\Projects\ParatextProjectDataProvider.cs:line 222
at Paranext.DataProvider.Projects.ProjectDataProvider.HandleRequest(String functionName, JsonArray args) in C:\Repos\paranext-core\c-sharp\Projects\ProjectDataProvider.cs:line 83
The UI displays the values from the hook instead of defaults from the projectSettings.json
Expected behavior
Default values from the projectSettings.json should be displayed on the UI
Matt and I discussed and decided to fix a couple things and leave one other thing for the moment. Hopefully we'll get to the other thing before long:
(Fixed) Paratext project settings throw instead of getting the default
(Fixed) Paratext project settings updates don't go out properly
Paratext project settings don't support numbers or objects #906
|
gharchive/issue
| 2024-05-21T14:07:30 |
2025-04-01T06:39:57.866403
|
{
"authors": [
"roopa0222",
"tjcouch-sil"
],
"repo": "paranext/paranext-core",
"url": "https://github.com/paranext/paranext-core/issues/904",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1008279802
|
Improved notifications
Notifications in the app are a little messy at the moment. If for instance you place a huge order that is filled in multiple parts, you will get one notification for each part in quick succession.
Instead, the app should attempt to replace existing notifications and be clever. For example, if you place a huge order you should get one to say something like "Order filling", then it would update to say "Order filling: 1000/3000", then eventually update to success saying it's filled.
Also, when we trim, the status of the trade switches to 'canceled'. Should this field refer to the stop loss or something else?
hm yeah that might be the stop loss cancellation interfering. Clearly a log bug. I'll open a new bug for that
|
gharchive/issue
| 2021-09-27T15:16:20 |
2025-04-01T06:39:57.936293
|
{
"authors": [
"Dolivent",
"pareeohnos"
],
"repo": "pareeohnos/ktrade",
"url": "https://github.com/pareeohnos/ktrade/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2248131083
|
added generation of command documentation
Generates in the GitHub summary a list of all the available commands with some information on their usage and (soon to be added) parameters.
Resolves #12
Find a working example here
|
gharchive/pull-request
| 2024-04-17T12:06:11 |
2025-04-01T06:39:57.937833
|
{
"authors": [
"Bullrich"
],
"repo": "paritytech/cmd-action",
"url": "https://github.com/paritytech/cmd-action/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
383849350
|
Clean up repo
Bump all packages to 3.0.0
All packages now require local packages, which means when we modify one, all the others immediately know of the changes.
added back the old api folder, because all tests pass there. That's most of the file changes in the file diff.
to run those tests: yarn test:api in root folder, or yarn test in api folder
in api, only converted utils/ and format/ folder to TS (related #21)
TODO:
[ ] lerna version
Pull Request Test Coverage Report for Build 250
229 of 305 (75.08%) changed or added relevant lines in 14 files are covered.
5 unchanged lines in 3 files lost coverage.
Overall coverage increased (+22.5%) to 74.99%
Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
packages/api/src/util/encode.ts | 4 | 5 | 80.0%
packages/light.js/src/rpc/utils/createRpc.ts | 2 | 3 | 66.67%
packages/api/src/format/input.ts | 45 | 53 | 84.91%
packages/api/src/format/output.ts | 149 | 215 | 69.3%
Files with Coverage Reduction | New Missed Lines | %
packages/electron/src/getParityPath.ts | 1 | 0.0%
packages/electron/src/fetchParity.ts | 1 | 0.0%
packages/contracts/src/badgereg.ts | 3 | 0.0%
Totals
Change from base Build 247: 22.5%
Covered Lines: 1338
Relevant Lines: 1794
💛 - Coveralls
|
gharchive/pull-request
| 2018-11-23T14:46:59 |
2025-04-01T06:39:57.951551
|
{
"authors": [
"amaurymartiny",
"coveralls"
],
"repo": "paritytech/js-libs",
"url": "https://github.com/paritytech/js-libs/pull/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1801126031
|
Update to separate worker binaries
Once https://github.com/paritytech/polkadot/pull/7337 is merged. This will probably require a workspace setup.
Please let me know if you run into any issues. Any help with testing before https://github.com/paritytech/polkadot/pull/7337 is merged would be awesome.
|
gharchive/issue
| 2023-07-12T14:39:51 |
2025-04-01T06:39:57.981261
|
{
"authors": [
"mrcnski",
"ordian"
],
"repo": "paritytech/pvf-checker",
"url": "https://github.com/paritytech/pvf-checker/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1056011351
|
Refactor and rename sandboxing implementation
The sandboxing code inside the client which is responsible for running the contracts of pallet-contracts needs a refactor. Currently, the different execution engines for contracts are crammed into sc-executor-common. Neither should different execution engines be in the same crate nor should runtime execution and sandbox execution be intertwined in this way. For that reason we want to refactor that code in order to have proper abstractions that allows us to move the sandboxing code to its own set of crates.
Rename
After a discussion with @pepyakin (see comments below) we agreed that we should also move away from the term "sandboxing" for this whole thing. One proposal what I also back is to use the terminology "Wasm Virtualization".
TODO
[x] #10563
[ ] Create a trait to abstract over sandboxing execution engines ("backends"); a rough sketch follows this list.
[ ] Use dynamic dispatch with that trait to make engine selection possible at runtime (command line argument).
[ ] Move sandboxing code into its own crate client/wasm-virt with one additional crate per execution engine (sc-wasm-virt-wasmi, sc-wasm-virt-wasmer). This should also move all wasm virtualization related traits that needs to be implement by the runtime executor to the new location.
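As a hedged illustration of the trait-abstraction item above, here is a minimal sketch; all names, types, and signatures are hypothetical, not the actual Substrate API:

// Hypothetical sketch only; names and signatures are illustrative.
pub enum Value { I32(i32), I64(i64) }
pub struct Error;

pub trait SandboxBackend {
    type Instance: SandboxInstance;
    /// Instantiate a wasm module inside the sandbox.
    fn instantiate(&self, wasm_code: &[u8]) -> Result<Self::Instance, Error>;
}

pub trait SandboxInstance {
    /// Invoke an exported function of the sandboxed module.
    fn invoke(&mut self, export_name: &str, args: &[Value]) -> Result<Option<Value>, Error>;
}

Engine selection at runtime would then pick a concrete backend (wasmi, wasmer) behind this interface.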
Another open question is whether we need to expand the API to allow for chunked execution so that code merkelization can be implemented. However, current thinking is that we can and should get away without execution engine support. I think it is reasonable to go forward with a non chunked version risking that we might need to maintain two APIs. Otherwise this is blocked by #9431.
I think, by carefully designing the API we can future-proof it to support potential chunked execution similar to what we have done for wasmi during our experiment.
Maybe. It adds complexity, though. For example, we would need some kind of feature discovery then because not all execution engines will support this right off the bat. As a matter of fact none will support it as we only put it in for future proofing. Just creating a new version of the API when the need arises is better in my opinion. Future proofing APIs is worth nothing when users will depend on the implementation.
Excerpt from our discussion about changing the terminology from sandoxing to a term based on virtualization, e.g. wasm virtualization API, hypervisor API, or whatever.
Specifically, I think it is just too general and does not represent what it actually does. What is usually understood by sandbox? Well, it's when some code is put into an environment where it cannot reach to anything except things that was explicitly designated to be used by that sandboxed code.
If we agree on that definition, then the "runtime" is exactly that. The thing that executes "runtime" (which would be called runtime by normal people) can be also called a sandbox.
However, virtualization is really the more appropriate term here, I feel, because it fits the analogy well. Virtualization is where you have some medium that provides virtualization facilities: a hypervisor/supervisor that virtualizes or multiplexes the underlying hardware, emulating other hardware and providing services, and a guest which consumes those services, uses the hardware through the hypervisor, and is ultimately controlled by the hypervisor.
Updated the top post with new information after a discussion I had with @0x7CFE, today.
Refined the top posting again after the picture got a bit clearer on how the final refactoring should look like.
I narrowed down the scope of this issue and created a proper task list.
What exactly do you *need* it for?
run a piece of wasm in runtime
You can still do that: Use sp_sandbox::embedded_executor. It doesn't rely on this interface but includes a wasmi into the runtime.
No one is working on this anymore.
Since we decided to go with wasmi for now and we don't really have the resources to work on that we will close it.
|
gharchive/issue
| 2021-11-17T11:21:42 |
2025-04-01T06:39:57.998010
|
{
"authors": [
"0x7CFE",
"atenjin",
"athei",
"pepyakin"
],
"repo": "paritytech/substrate",
"url": "https://github.com/paritytech/substrate/issues/10297",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1367545201
|
MaxUnlockingChunks should be configurable via Runtime
StakingLedger in pallet-staking uses a hardcoded value of MaxUnlockingChunks. Pallet-staking also has a MaxUnlockingChunks in its configuration, which is not used anywhere.
I discussed this with @kianenigma; we should remove the hard-coded value and use the value we get from Config.
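A hedged sketch of the direction (fragments only; the actual code lives in pallet-staking and may differ):

// In the pallet's Config trait:
#[pallet::constant]
type MaxUnlockingChunks: Get<u32>;

// At the use site, instead of a hard-coded constant:
ensure!(
    ledger.unlocking.len() < T::MaxUnlockingChunks::get() as usize,
    Error::<T>::NoMoreChunks
);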
Taking it
Taking it
@MrishoLukamba Feel free to pick it up.
|
gharchive/issue
| 2022-09-09T09:26:49 |
2025-04-01T06:39:58.001465
|
{
"authors": [
"Ank4n",
"MrishoLukamba"
],
"repo": "paritytech/substrate",
"url": "https://github.com/paritytech/substrate/issues/12227",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1210711490
|
Fix pallet_assets no_std compilation
Could you port this to polkadot-v0.9.19 as well? thanks
Hmm, it is included in the bin/node/runtime. So, it should already be compiled as part of the CI in wasm. Not sure why it didn't fail there :thinking:
What compiler version are you using?
@bkchr we are using: nightly-2021-11-07; somehow I cannot compile to wasm yet due to a missing vec import, weird
I found the problem. I bet you have the std feature of sp-std enabled somewhere in your project. Please check this.
Not sure if it's caused by the compiler though; it's also reporting some others:
let me check thanks
@bkchr you are awesome! fixed now
I am running into the same issue in v0.9.26
I have tried wit the sp-std/std flag enabled and disabled.
Suggestions?
@anantasty it can also be any other crate that propagates std feature to sp-std. You can use cargo tree to find out which crate enables the std feature.
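For example, something like this (standard cargo tooling; the package spec may need adjusting for your tree):

cargo tree -e features -i sp-std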
Thanks @bkchr
|
gharchive/pull-request
| 2022-04-21T08:43:57 |
2025-04-01T06:39:58.006398
|
{
"authors": [
"GopherJ",
"anantasty",
"bkchr"
],
"repo": "paritytech/substrate",
"url": "https://github.com/paritytech/substrate/pull/11256",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1236029803
|
contracts: Get rid of #[pallet::without_storage_info]
All the pallet storage now implements MaxEncodedLen and hence we can remove #[pallet::without_storage_info]. Most of the changes were replacing Vec with BoundedVec and adding a derive for MaxEncodedLen.
cumulus companion: https://github.com/paritytech/cumulus/pull/1261
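A hedged sketch of the typical shape of the change described above (type and field names are illustrative, not the actual pallet-contracts types):

use codec::{Decode, Encode, MaxEncodedLen};
use frame_support::{traits::ConstU32, BoundedVec};
use scale_info::TypeInfo;

// Before: `data: Vec<u8>` has no compile-time bound, so no MaxEncodedLen.
// After: a bounded vector makes the maximum encoded length derivable.
#[derive(Encode, Decode, MaxEncodedLen, TypeInfo)]
pub struct Thing {
    data: BoundedVec<u8, ConstU32<128>>,
}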
bot merge
bot merge
|
gharchive/pull-request
| 2022-05-14T15:26:15 |
2025-04-01T06:39:58.008439
|
{
"authors": [
"athei"
],
"repo": "paritytech/substrate",
"url": "https://github.com/paritytech/substrate/pull/11414",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1532595964
|
[DNM] Test CI
Various CI testing.
Works es expected @rcny :+1:
bot help
bot help
wrong repo
bot bench $ pallet dev pallet_balances
|
gharchive/pull-request
| 2023-01-13T16:34:24 |
2025-04-01T06:39:58.010119
|
{
"authors": [
"ggwpez"
],
"repo": "paritytech/substrate",
"url": "https://github.com/paritytech/substrate/pull/13140",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1573101368
|
track total value locked in pools onchain
could resolve #12838 as well.
[ ] fix tests
[ ] migration
The CI pipeline was cancelled due to the failure of one of the required jobs.
Job name: cargo-check-each-crate
Logs: https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/2357659
The CI pipeline was cancelled due to the failure of one of the required jobs.
Job name: test-linux-stable-int
Logs: https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/2357655
Not at the top of my TODO list, but plan to getting around to it in the next couple of weeks.
Closing in favor of https://github.com/paritytech/substrate/pull/14775
|
gharchive/pull-request
| 2023-02-06T18:49:41 |
2025-04-01T06:39:58.014029
|
{
"authors": [
"kianenigma",
"liamaharon",
"paritytech-cicd-pr"
],
"repo": "paritytech/substrate",
"url": "https://github.com/paritytech/substrate/pull/13319",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
589105987
|
[wip] Batching experiments
Opening for bot & performance testing
/bench import
/bench import
/bench import
/bench import
/bench ed25519
/bench import
/bench import
/bench ed25519
|
gharchive/pull-request
| 2020-03-27T12:39:42 |
2025-04-01T06:39:58.016466
|
{
"authors": [
"NikVolf"
],
"repo": "paritytech/substrate",
"url": "https://github.com/paritytech/substrate/pull/5434",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1384593768
|
Add new g-diffuser command "enhance"
Rescale the input image to a higher resolution and use inpainting with a constant mask of some opacity, effectively using SD for super-resolution. The same function could be aliased as a style transfer function, since it would do the same thing depending on opacity value and the prompt supplied.
An optional parameter to increment or decrement certain parameters like scale or strength during said operation, so that the longer it goes on the more insane the images get as the strength gets higher or the scale gets lower.
|
gharchive/issue
| 2022-09-24T08:51:38 |
2025-04-01T06:39:58.045304
|
{
"authors": [
"lootsorrow",
"parlance-zz"
],
"repo": "parlance-zz/g-diffuser-lib",
"url": "https://github.com/parlance-zz/g-diffuser-lib/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
190138145
|
I guess AddCookie/AddCookies doesnt work?
Hi!
I just tried to use it vs sites which use cookies to confirm language/currency, and it's just do not work, I am not sure why...
I can only confirm that same sites with standard approach (cookiejar and http.Client(jar: cookieJar)) work just fine.
Hi @MikhailKlemin
I just have the same issue, but it works well if I put AddCookies after the Post/Get.
I just have the same issue. I have tried many methods but have not found the way, so I need your help @han2015. This is my code:
func GetGoRequest() {
request := gorequest.New()
//loginCookie := GetLoginCookie()
//
//fmt.Println("go request logincookie",loginCookie)
cookie := &http.Cookie{Name: "PHPSESSID", Value: "t2cdimadecpoel6i8rcdomk1m6"}
fmt.Println("go request cookie--->", cookie)
_, _, err := request.
Get("https://test-xxxxxx.cn/api/index.php?r=detail/brokerage-flows&token=cdkqqf1407307954&recommend_id=39ee6603-e648-49b1-12f5-e803efbf383a&devopenid=88374").
Send("r":"detail/recommend-flows","token":"cdkqqf1407307954","recommend_id":"39ee6603-e648-49b1-12f5-e803efbf383a").
AddCookie(cookie).
End()
if err != nil{
fmt.Printf("%v \n", err)
}
fmt.Println(request.Header) // this is always map[]
}
Please give me some help, thank you.
@kervinson
Correct your Send call; refer to the doc (https://github.com/parnurzeal/gorequest/blame/develop/README.md#L120).
go request cookie---> PHPSESSID=t2cdimadecpoel6i8rcdomk1m6
++++++++++++++1
++++++++++++++TypeJSON
++++++++++++++5
[Get https://test-xxxxxx.cn/api/index.php?devopenid=88374&r=detail%2Fbrokerage-flows&recommend_id=39ee6603-e648-49b1-12f5-e803efbf383a&token=cdkqqf1407307954: dial tcp: lookup test-xxxxxx.cn: no such host]
[PHPSESSID=t2cdimadecpoel6i8rcdomk1m6]
I met the same problem; my code is below.
Env: Windows 10, Go 1.15, gorequest 1.6.0
package main
import (
"github.com/parnurzeal/gorequest"
"net/http"
"strings"
"log"
)
func buildCookie(cookie string) []*http.Cookie {
cookieSlice := make([]*http.Cookie, 0)
params := strings.Split(cookie, ";")
for _, line := range params {
res := strings.Split(line, "=")
cookieSlice = append(cookieSlice, &http.Cookie{Name: res[0], Value: res[1]})
}
return cookieSlice
}
func main() {
cookies := []string{"cookie1", "cookie2", "cookie3", "cookie4" /* ...more cookies elided */}
r := gorequest.New()
r = r.Set("Accept","text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8").
Set("Accept-Encoding","gzip, deflate, br").
Set("Accept-Language","zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2").
Set("Host","www.tianyancha.com").
// Set("Referer","https://www.tianyancha.com/search/ocD?base=taiyuan").
Set("User-Agent","Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0")
index := 0
req, _ := http.NewRequest("GET", "https://www.tianyancha.com/company/3192219802", nil)
req.Header.Set("Accept","text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8")
req.Header.Set("Accept-Encoding","gzip, deflate, br")
req.Header.Set("Accept-Language","zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2")
req.Header.Set("Host","www.tianyancha.com")
// Set("Referer","https://www.tianyancha.com/search/ocD?base=taiyuan").
req.Header.Set("User-Agent","Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0")
for i := 0; i<3; i ++ {
// cookies[index%len(cookies)])
r.AddCookies(buildCookie(cookies[index%len(cookies)]))
// log.Println(r.Header)
// r = r.Set("Cookie", cookies[index%len(cookies)])
response, _, _ := r.Get("https://www.tianyancha.com/company/3192219802").End()
log.Println("url, ", response.Request.URL.String())
req.Header.Set("Cookie", cookies[index%len(cookies)])
client := &http.Client{}
resp, _ := client.Do(req)
log.Println(resp.Request.URL.String())
index ++
}
}
output
2020/11/11 16:20:42 url, https://www.tianyancha.com/login?from=https%3A%2F%2Fwww.tianyancha.com%2Fcompany%2F3192219802
2020/11/11 16:20:43 https://www.tianyancha.com/company/3192219802
2020/11/11 16:20:43 url, https://www.tianyancha.com/login?from=https%3A%2F%2Fwww.tianyancha.com%2Fcompany%2F3192219802
2020/11/11 16:20:43 https://www.tianyancha.com/company/3192219802
2020/11/11 16:20:44 url, https://www.tianyancha.com/login?from=https%3A%2F%2Fwww.tianyancha.com%2Fcompany%2F3192219802
2020/11/11 16:20:44 https://www.tianyancha.com/company/3192219802
Anyway, I cannot get it to work via gorequest.
Now I've solved this problem.
It's really a misuse of cookies.
When I read the source code of gorequest, I found it clears the agent state before every new request when DoNotClearSuperAgent is left at its default of false.
So the solution is to add this code right after creating the agent, before you set cookies and start a new request.
agent := gorequest.New()
agent.SetDoNotClearSuperAgent(true)
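A minimal end-to-end sketch of the fix (URL and cookie value are placeholders):

package main

import (
	"fmt"
	"net/http"

	"github.com/parnurzeal/gorequest"
)

func main() {
	agent := gorequest.New()
	// Keep agent state (cookies, headers) across requests instead of clearing it.
	agent.SetDoNotClearSuperAgent(true)
	agent.AddCookie(&http.Cookie{Name: "PHPSESSID", Value: "placeholder"})
	resp, _, errs := agent.Get("https://example.com/").End()
	if len(errs) > 0 {
		fmt.Println(errs)
		return
	}
	fmt.Println(resp.Status)
}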
|
gharchive/issue
| 2016-11-17T19:19:15 |
2025-04-01T06:39:58.054218
|
{
"authors": [
"MikhailKlemin",
"han2015",
"kervinson",
"luckcry"
],
"repo": "parnurzeal/gorequest",
"url": "https://github.com/parnurzeal/gorequest/issues/117",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
283489592
|
Add 'Syntax' data type.
Something along these lines? References #9 .
Coverage decreased (-0.8%) to 84.566% when pulling a5e1e0c7d6a8230e750122414a46a05bbd072f6c on hit023:cleaner-imports into 0468c68f1df8cb7d07d746085e020bc219daae33 on parsonsmatt:master.
Coverage decreased (-0.8%) to 84.566% when pulling 6126353fbeb6a126cd3f3b412508f83d4bd1bcbf on hit023:cleaner-imports into 0468c68f1df8cb7d07d746085e020bc219daae33 on parsonsmatt:master.
|
gharchive/pull-request
| 2017-12-20T08:51:40 |
2025-04-01T06:39:58.077766
|
{
"authors": [
"coveralls",
"hit023"
],
"repo": "parsonsmatt/kale",
"url": "https://github.com/parsonsmatt/kale/pull/22",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
167669720
|
How to use with webpack?
I am attempting to use this feature in webpack, but I cannot figure out how to inject custom properties. This is fairly easy to do for var
let variables = {
'primary-toolbar': '#415464',
'secondary-toolbar': "#488fb4"
}
require("postcss-cssnext")({
features: {
customProperties: {
variables: variables,
'conversation-progress': `
{
margin: 0 auto;
position: fixed;
top: 50%;
left: 50%;
};
`
},
applyRule: {
}
}
})
However, when attempting to do something similar for @apply, I get the "no custom properties set declared" error.
Hi,
this has nothing to do with webpack, it's just that there's no mechanism implemented to create custom property sets from js/config.
That's what I assumed, thanks for the clarification. Since I'm using angular2, my use case isn't as simple as just plugging everything into one file. As a workaround, I'm using postcss-inject to inject :root into my components' css.
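For reference, this is roughly the custom-property-set syntax that postcss-apply expects to find in CSS, i.e. what the injected :root block needs to contain (selector and set names are illustrative):

:root {
  --toolbar-theme: {
    background-color: var(--primary-toolbar);
  };
}

.toolbar {
  @apply --toolbar-theme;
}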
@chimon2000: See https://github.com/postcss/postcss-custom-properties/issues/32#issuecomment-237363712
Refs #15
|
gharchive/issue
| 2016-07-26T17:50:40 |
2025-04-01T06:39:58.097498
|
{
"authors": [
"chimon2000",
"kevinSuttle",
"pascalduez"
],
"repo": "pascalduez/postcss-apply",
"url": "https://github.com/pascalduez/postcss-apply/issues/9",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2405014707
|
Dont render camera in Build
I see https://github.com/pastasfuture/com.hauntedpsx.render-pipelines.psx/issues/28, but I have the latest release in my project.
I changed the Graphics API for Linux to Vulkan and the errors went away, but I still get a gray screen.
My OS: Ubuntu 20.04 LTS
Ohh, well, I changed the Unity version to 2021 LTS and this works! The version doesn't matter for me, but for someone else it could be a problem.
|
gharchive/issue
| 2024-07-12T08:13:55 |
2025-04-01T06:39:58.135795
|
{
"authors": [
"ValterGames-Coder"
],
"repo": "pastasfuture/com.hauntedpsx.render-pipelines.psx",
"url": "https://github.com/pastasfuture/com.hauntedpsx.render-pipelines.psx/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2428159292
|
Update to ProtocolDataUnits with Nested PDU Support and Improved API
This pull request introduces significant enhancements to the ProtocolDataUnits Python module (as discussed in pull request #1 ). The following features have been added or improved:
Nested PDU Support:
PDUs can now contain other PDUs, allowing for more complex and hierarchical data structures.
Enhanced API:
Simplified and user-friendly API for creating PDU formats.
New helper function create_pdu_format to streamline PDU creation.
Updated Encoding/Decoding Functions:
Improved handling of various data types.
Support for nested PDUs in both encoding and decoding processes.
Comprehensive Documentation:
Detailed docstrings for the encode and decode functions to explain their functionality and usage.
Updated README:
Revised to reflect the new features and provide clear instructions for installation, usage, and contribution.
Changes Made:
Added nested_pdu method to the PDU class.
Updated encode and decode methods to handle nested PDUs.
Refactored the API for defining PDUs with create_pdu_format (see the sketch after this list).
Enhanced documentation and examples in the README.
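A rough usage sketch based only on the names mentioned in this description (create_pdu_format, nested PDUs, encode/decode); the import path, field specs, and signatures here are hypothetical and may differ from the actual module:

# Hypothetical sketch; real signatures live in ProtocolDataUnits.
from protocoldataunits import create_pdu_format  # assumed import path

header = create_pdu_format(('uint8', 'version'), ('uint8', 'msg_type'))
packet = create_pdu_format(('nested_pdu', 'header', header), ('bytes', 'payload'))

data = packet.encode({'header': {'version': 1, 'msg_type': 2}, 'payload': b'hello'})
decoded = packet.decode(data)  # round-trips back to the dict above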
Testing:
TODO: Extensive testing has to be performed to ensure the new features work as expected.
All existing tests have been updated to accommodate the new functionality.
New tests have been added for nested PDU support.
Impact:
These changes improve the flexibility and usability of the ProtocolDataUnits module, making it easier to define and work with complex PDUs.
Please review the changes and provide feedback. If everything looks good, kindly approve the pull request so we can merge it into the main branch.
A bit busy these couple of weeks with field tests. Will review by early Aug.
No rush. Good luck with the trials.
@mchitre Thank you so much for reviewing, much appreciated. Sorry I missed the notification for this for some reason. I do agree with the comments; I will make the suggested changes and send it for another review.
|
gharchive/pull-request
| 2024-07-24T17:52:29 |
2025-04-01T06:39:58.164635
|
{
"authors": [
"mchitre",
"patel999jay"
],
"repo": "patel999jay/ProtocolDataUnits",
"url": "https://github.com/patel999jay/ProtocolDataUnits/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2425398404
|
Accelerate ODE solver [What did I miss?]
Hi,
I am playing around with diffrax's ODE-solving functionality. In a nutshell, I define a simple feedforward MLP with random initialization and benchmark the runtime of using it as the temporal derivative of an ODE. I wrote the following code to record the run-time of ODE solving and got a run-time of around 3.7 sec, which seems much slower than other ODE-solver frameworks.
I am new to jax and diffrax. What did I miss in my code implemenation?
import equinox as eqx
import jax
import diffrax
import jax.numpy as jnp
import time
class MLPeqx(eqx.Module):
layers: list
activation: callable = eqx.static_field()
def __init__(self, hidden_dims):
super().__init__()
tmp_key = jax.random.split(jax.random.PRNGKey(0), len(hidden_dims) - 1)
self.layers = [eqx.nn.Linear(hidden_dims[i], hidden_dims[i + 1], key=tmp_key[i]) for i in
range(len(hidden_dims) - 1)]
self.activation = jax.nn.relu
def __call__(self, x):
for i in range(len(self.layers) - 1):
x = self.activation(self.layers[i](x))
x = self.layers[-1](x)
return x
class ODEjax(eqx.Module):
func: MLPeqx
def __init__(self, hidden_dims):
super().__init__()
self.func = MLPeqx(hidden_dims)
def __call__(self, t, y, args=None):
return self.func(y)
def solve_ode(input_x, t, func, cfg):
sol = diffrax.diffeqsolve(
diffrax.ODETerm(func),
cfg['method'],
t0=t[0],
t1=t[-1],
y0=input_x,
dt0=None,
saveat=diffrax.SaveAt(ts=t),
stepsize_controller=diffrax.PIDController(atol=cfg['atol'], rtol=cfg['rtol']),
)
return sol.ys
def run_diffrax(hidden_dims, input_x, t, num_t, cfg):
t = jnp.linspace(t[0], t[1], num_t)
func = ODEjax(hidden_dims)
y = jax.vmap(solve_ode, in_axes=(0, None, None, None))(input_x, t, func, cfg)
return y
if __name__ == '__main__':
batch_size = 128
hidden_dims = [100, 100, 100]
input_x = jax.random.normal(jax.random.PRNGKey(0), (128, 100))
start_time = time.time()
run_diffrax(hidden_dims, input_x, [0.0, 1.0], 100, {
'method': diffrax.Dopri5(),
'atol': 1e-5,
'rtol': 1e-5})
end_time = time.time()
print(f"run time = {end_time - start_time:.3f} (sec)")
jax compile times will be longer for first iteration (and are generally excluded in benchmarks)
with async dispatch you need a block until ready
With the following code I got:
import equinox as eqx
import jax
import diffrax
import jax.numpy as jnp
import time
class MLPeqx(eqx.Module):
layers: list
activation: callable = eqx.static_field()
def __init__(self, hidden_dims):
super().__init__()
tmp_key = jax.random.split(jax.random.PRNGKey(0), len(hidden_dims) - 1)
self.layers = [eqx.nn.Linear(hidden_dims[i], hidden_dims[i + 1], key=tmp_key[i]) for i in
range(len(hidden_dims) - 1)]
self.activation = jax.nn.relu
def __call__(self, x):
for i in range(len(self.layers) - 1):
x = self.activation(self.layers[i](x))
x = self.layers[-1](x)
return x
class ODEjax(eqx.Module):
func: MLPeqx
def __init__(self, hidden_dims):
super().__init__()
self.func = MLPeqx(hidden_dims)
def __call__(self, t, y, args=None):
return self.func(y)
def solve_ode(input_x, t, func, cfg):
sol = diffrax.diffeqsolve(
diffrax.ODETerm(func),
cfg['method'],
t0=t[0],
t1=t[-1],
y0=input_x,
dt0=None,
saveat=diffrax.SaveAt(ts=t),
stepsize_controller=diffrax.PIDController(atol=cfg['atol'], rtol=cfg['rtol']),
)
return sol.ys
@eqx.filter_jit
def run_diffrax(hidden_dims, input_x, t, num_t, cfg):
t = jnp.linspace(t[0], t[1], num_t)
func = ODEjax(hidden_dims)
y = jax.vmap(solve_ode, in_axes=(0, None, None, None))(input_x, t, func, cfg)
return y
batch_size = 128
hidden_dims = [100, 100, 100]
input_x = jax.random.normal(jax.random.PRNGKey(0), (128, 100))
_ = run_diffrax(hidden_dims, input_x, [0.0, 1.0], 100, {
'method': diffrax.Dopri5(),
'atol': 1e-5,
'rtol': 1e-5}).block_until_ready()
%%timeit
_ = run_diffrax(hidden_dims, input_x, [0.0, 1.0], 100, {
'method': diffrax.Dopri5(),
'atol': 1e-5,
'rtol': 1e-5}).block_until_ready()
19.4 ms ± 4.31 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Thanks so much! I have tried on my end and observed similar run-time metrics. I have another follow-up question. Say I first want to run with atol=rtol=1e-5, and later in my code I want it to run with atol=rtol=1e-4. I observe that run_diffrax runs slower again when changing from 1e-5 to 1e-4, because of recompilation (I guess). Namely,
# first time of atol=rtol=1e-5, takes ~2secs
run_diffrax(hidden_dims, input_x, [0.0, 1.0], 100, {
'method': diffrax.Dopri5(),
'atol': 1e-5,
'rtol': 1e-5}).block_until_ready()
# second time of atol=rtol=1e-5, takes ~0.008secs
run_diffrax(hidden_dims, input_x, [0.0, 1.0], 100, {
'method': diffrax.Dopri5(),
'atol': 1e-5,
'rtol': 1e-5}).block_until_ready()
# first time of atol=rtol=1e-4, takes ~2secs
run_diffrax(hidden_dims, input_x, [0.0, 1.0], 100, {
'method': diffrax.Dopri5(),
'atol': 1e-4,
'rtol': 1e-4}).block_until_ready()
# second time of atol=rtol=1e-4, takes ~0.008secs
run_diffrax(hidden_dims, input_x, [0.0, 1.0], 100, {
'method': diffrax.Dopri5(),
'atol': 1e-4,
'rtol': 1e-4}).block_until_ready()
Is this behavior expected? I wonder if there is a way to compile only once for arbitrary atol/rtol values, and always run at the millisecond level regardless of atol and rtol.
Yes, this behavior is expected. The python floats are getting marked as static by the filtering that happens before jit. You can make them not static by making them jax types (e.g. arrays).
start_time = time.time()
run_diffrax(hidden_dims, input_x, [0.0, 1.0], 100, {
'method': diffrax.Dopri5(),
'atol': jnp.array(1e-5),
'rtol': jnp.array(1e-5)}).block_until_ready()
end_time = time.time()
print(f"run time = {end_time - start_time:.3f} (sec)")
start_time = time.time()
run_diffrax(hidden_dims, input_x, [0.0, 1.0], 100, {
'method': diffrax.Dopri5(),
'atol': jnp.array(1e-5),
'rtol': jnp.array(1e-5)}).block_until_ready()
end_time = time.time()
print(f"run time = {end_time - start_time:.3f} (sec)")
start_time = time.time()
run_diffrax(hidden_dims, input_x, [0.0, 1.0], 100, {
'method': diffrax.Dopri5(),
'atol': jnp.array(1e-4),
'rtol': jnp.array(1e-5)}).block_until_ready()
end_time = time.time()
print(f"run time = {end_time - start_time:.3f} (sec)")
start_time = time.time()
run_diffrax(hidden_dims, input_x, [0.0, 1.0], 100, {
'method': diffrax.Dopri5(),
'atol': jnp.array(1e-4),
'rtol': jnp.array(1e-5)}).block_until_ready()
end_time = time.time()
print(f"run time = {end_time - start_time:.3f} (sec)")
run time = 4.057 (sec)
run time = 0.016 (sec)
run time = 0.013 (sec)
run time = 0.013 (sec)
|
gharchive/issue
| 2024-07-23T14:42:41 |
2025-04-01T06:39:58.253640
|
{
"authors": [
"lockwo",
"zhengqigao"
],
"repo": "patrick-kidger/diffrax",
"url": "https://github.com/patrick-kidger/diffrax/issues/466",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2043772276
|
State does not change the icon
I noticed that an entity from the integration does not update its icon when the status changes. It seems to me that this is because the device does not have the right device_class, or the supported_features are different.
I am using beta version 2023.11.0b5
Probably just because the icon is set statically. Will change it in the next update, but it should be fine in tile or mushroom cards anyways.
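For context, a minimal sketch of the state-dependent pattern in a Home Assistant entity (illustrative only, not the actual integration code):

from homeassistant.components.lock import LockEntity

class ExampleLock(LockEntity):
    """Illustrative only."""

    @property
    def icon(self) -> str:
        # Derive the icon from the current state instead of a static value.
        return "mdi:lock" if self.is_locked else "mdi:lock-open-variant"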
shouldn't happen on the non-beta versions, the beta branch will not be taken further, as that part will move to HA core.
|
gharchive/issue
| 2023-12-15T13:55:33 |
2025-04-01T06:39:58.259677
|
{
"authors": [
"darkkatarsis",
"zweckj"
],
"repo": "patrickhilker/tedee_hass_integration",
"url": "https://github.com/patrickhilker/tedee_hass_integration/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1910167420
|
GM Screen compatibility is broken on Chromium-based browsers
Describe the bug
Hello, thanks for the last issue resolved!
When editing a whiteboard and saving it, GM Screen preview is not updated until I hit the GM Screen refresh button.
To Reproduce
Steps to reproduce the behavior:
Go to GM Screen (with a Whiteboard page attached)
Click on Edit the journal note and edit the whiteboard then save
GM Screen doesn't show changes
Hit the refresh button of GM Screen, changes are shown within GM Screen
Expected behavior
GM Screen should show changes once they are saved (might be an event related to the note update not being fired?)
Environment (please complete the following information):
Foundry VTT: 11.309
Browser tested on chrome & brave
Module Version latest
Thx! Working perfectly 😃
|
gharchive/issue
| 2023-09-24T09:25:25 |
2025-04-01T06:39:58.272497
|
{
"authors": [
"DarKDinDoN"
],
"repo": "patrickporto/journal-whiteboard",
"url": "https://github.com/patrickporto/journal-whiteboard/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2000122622
|
Icarus Server Not Listening on Port 17777
Followed your instructions. Server is finally running after suffering through figuring out how to get rid of the Visual C++ error message. I am on a fresh Windows 11 install. The Icarus server is not visible in the "Dedicated Servers" list, and is not available by direct connect either.
netstat -an | Select-String 27015 shows the server machine is listening on the port.
netstat -an | Select-String 17777 doesn't return anything, showing the server is not listening on this port.
icarus.psm1 is set to both ports above.
I have opened ports 27015 and 17777 TCP and UDP on both routers (I have two separate networks).
Windows firewall has all ports opened for Icarus for UDP and TCP for private, public, and domain networks.
The server's console window is open and showing the server is running, but no confirmation the server is listening on what ports.
The server does not show in the "Dedicated Servers" list.
The server is not available through direct connect in Icarus.
Don't know what to do from here.
Can you post your configuration and logs ?
I managed to get it running a few minutes ago. Shortly after my original post I switched to my own startup cmd which got everything running. This time around I tried the automated script by @BananaAcid. Now the server is running just fine. Now I need to configure daily or hourly automated backups, if that's possible. So far I have to manually restart the launcher script, not quite helpful if someone wipes out the prospect world and I haven't restarted recently. Happened once already.
Automated backups and restarts are built into PowerShell GSM; they are also enabled by default in the configuration file.
They are launched by the task-scheduler in windows.
|
gharchive/issue
| 2023-11-18T00:42:05 |
2025-04-01T06:39:58.276943
|
{
"authors": [
"patrix87",
"wizzbangwa"
],
"repo": "patrix87/PowerShellGSM",
"url": "https://github.com/patrix87/PowerShellGSM/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
424599413
|
form control should have ellipsis when text overflows
According to the image below, when the text overflows, an ellipsis should be shown:
I believe it's easier to implement that using CSS's text-overflow than writing it in JavaScript.
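A minimal sketch of the CSS approach (class name illustrative):

.pf-c-form-control--ellipsis {
  overflow: hidden;
  white-space: nowrap;
  text-overflow: ellipsis;
}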
:tada: This issue has been resolved in version 2.0.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/issue
| 2019-03-24T10:53:35 |
2025-04-01T06:39:58.285557
|
{
"authors": [
"boaz0",
"patternfly-build"
],
"repo": "patternfly/patternfly-next",
"url": "https://github.com/patternfly/patternfly-next/issues/1618",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
578885408
|
fix(file-upload): removed message container, added form to error example
fixes https://github.com/patternfly/patternfly-next/issues/2797
Preview: https://patternfly-next-pr-2807.surge.sh
@mcarrano it just removes the file message/helper text at the bottom (instructions/error text) from the file upload component and relies on the file upload component being in a form for that helper text, so the helper text comes from the form component instead.
That also means:
Any updates we make to the form component's helper text (descriptive, error, success, etc) will be available when used with file upload since the file upload is no longer creating its own helper text.
The text doesn't get a blue overlay when you drag/hover the file upload component, since that text is no longer part of the file upload component.
old:
new:
Sounds good to me. Thanks for the explanation @mcoker !
Ah, I'm glad you guys made that change. I forgot to mention I needed to make .pf-c-file-upload a div instead of a form in the React component so I could get my FormGroup example to work (React will refuse to let you nest a form in another form).
:tada: This PR is included in version 2.68.4 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2020-03-10T21:59:03 |
2025-04-01T06:39:58.291843
|
{
"authors": [
"mcarrano",
"mcoker",
"mturley",
"patternfly-build",
"redallen"
],
"repo": "patternfly/patternfly-next",
"url": "https://github.com/patternfly/patternfly-next/pull/2807",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
354919420
|
Extract components from page layout
fixes https://github.com/patternfly/patternfly-next/issues/637
@andresgalante I left the JS that toggles the navigation in the demo. It's easy to remove if we want to. For the sake of consistency and for future demos, I don't think we should write javascript in our workspace, and just present the different states instead. To see the interaction/animations, maybe link to a demo of the react components in a sample application since that's officially supported. What do you think?
Deploy preview for pf-next ready!
Built with commit 994767c170f99c03907627550aec2d9db70ec898
https://deploy-preview-658--pf-next.netlify.com
@michael-coker @andresgalante I agree that best practice is to present different states. With more complex components, it's just a lot more work to do so when including very basic JS makes it much easier to understand. It doesn't seem to be a big issue while we're still in Alpha, but will be in Beta+. Insights will be using all interactive components, so that would be a good opportunity to link to a live demo when they become available.
|
gharchive/pull-request
| 2018-08-28T21:57:38 |
2025-04-01T06:39:58.295088
|
{
"authors": [
"mattnolting",
"michael-coker",
"patternfly-build"
],
"repo": "patternfly/patternfly-next",
"url": "https://github.com/patternfly/patternfly-next/pull/658",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1004527348
|
[Feature]: Import original MusicDJ MIDI files
Describe the problem you're having
I've got a bunch of MusicDJ MIDI files made on SE K750i back in 2006, however nothing can open or convert them. Edit: VLC with fluidsynth can in fact open them (thanks to some people for the tips). MusicDJ data appears to be stored in the SEM1 chunk as noted in losnoco/foo_midi#1 and confuses most MIDI decoders.
MusicDJ_files.zip
Describe the solution you'd like
I was wondering if it would eventually be possible to import the files as projects in this new MusicDJ recreation. However, since I imagine this use case is super rare I'm just leaving it as a "nice to have" in the hopes someone would be compelled enough to work on this.
Thank you!
@StepS- Good news! v1.5-beta just added this.
|
gharchive/issue
| 2021-09-22T17:07:01 |
2025-04-01T06:39:58.377492
|
{
"authors": [
"StepS-",
"pattlebass"
],
"repo": "pattlebass/Music-DJ",
"url": "https://github.com/pattlebass/Music-DJ/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2409696311
|
Add support for 8k tokens with Claude 3.5 Sonnet
Issue
Claude 3.5 Sonnet now supports a maximum output/response token length of 8,192 tokens (from 4,096 currently).
To support 8k tokens, we need to add the header "anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15" to our API calls:
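For reference, a minimal sketch using the Anthropic Python SDK (assuming its standard extra_headers pass-through; the model name is the 3.5 Sonnet identifier current at the time):

import anthropic

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=8192,  # raised limit enabled by the beta header below
    extra_headers={"anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15"},
    messages=[{"role": "user", "content": "Hello"}],
)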
How did you get past this:
official docs about the feat: https://docs.anthropic.com/en/docs/about-claude/models#model-comparison
did anybody submit a PR yet?
Thanks for trying aider and filing this issue.
Aider is already able to receive unlimited output from Sonnet. See the article below. This capability was introduced back in v0.41.0.
https://aider.chat/2024/07/01/sonnet-not-lazy.html
That said, it would be beneficial to support the new 8k output limit, mainly to somewhat reduce the cost and latency of longer LLM responses.
I added support for 8k output tokens.
The change is available in the main branch. You can get it by installing the latest version from github:
python -m pip install --upgrade git+https://github.com/paul-gauthier/aider.git
If you have a chance to try it, let me know if it works better for you.
I'm going to close this issue for now, but feel free to add a comment here and I will re-open or file a new issue any time.
I am trying to do this on the AWS Bedrock hosted version of 3.5 Sonnet. However, to my surprise, it isn't updated: if I pass this header I get an "extra headers not allowed" error.
Then, doing some searching, I found that Sonnet on Bedrock hasn't been updated since it was launched (June 2024), which is really surprising. Any idea if Amazon or Anthropic will update the models?
Is this on by default, or do I need to point Cursor to use a specific model/API?
Aider is using 8k output tokens for sonnet:
aider --sonnet --verbose
...
Model metadata:
{
"key": "claude-3-5-sonnet-20241022",
"max_tokens": 8192,
...
|
gharchive/issue
| 2024-07-15T21:44:06 |
2025-04-01T06:39:58.396837
|
{
"authors": [
"WilliamAGH",
"berwinsingh",
"bitnom",
"flamerged",
"jackyliang",
"pascalandy",
"paul-gauthier"
],
"repo": "paul-gauthier/aider",
"url": "https://github.com/paul-gauthier/aider/issues/867",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
105035609
|
"Null expectation context - missing withExpectations?" w/ ScalaTest FunSpec
Hi, I'm currently trying to create a basic test suite using ScalaTest and wanted to try my hand at mocking too (I'm quite new to Scala so am just wanting to get used to a few things and pin down which libraries I can use to perform common tasks like testing, and mocking etc).
I'm encountering a strange exception when I run my tests: one test succeeds, then all of the following tests fail. I've tried to change how this works a few times, restructuring when things are created and whether they're only created once, but I still just get exceptions.
Here are my tests:
Here's the output:
I've just encountered this as well with FunSuite.
It seems to happen when you mock stuff in before/beforeEach.
A temporary workaround is to define whatever in before in a method and calling it at the beginning of each test (in other words, behave like there's no "before" feature).
+1
+1
+1
minimal failing example:
class CatTest extends FunSuite with BeforeAndAfter with MockFactory {
var cat: Cat = _
trait Cat {
def meow(): String
}
before {
cat = stub[Cat]
(cat.meow _).when().returns("meow")
}
test("meows") {
cat.meow() === "meow"
}
test("meows again") {
cat.meow() === "meow"
}
}
working example:
class CatTest extends FunSuite with BeforeAndAfter with MockFactory {
var cat: Cat = _
trait Cat {
def meow(): String
}
before {
withExpectations {
cat = stub[Cat]
(cat.meow _).when().returns("meow")
}
}
test("meows") {
cat.meow() === "meow"
}
test("meows again") {
cat.meow() === "meow"
}
}
The example above only works because there is no assertion on the ===. This code with added asserts looks like this:
class CatTest extends FunSuite with BeforeAndAfter with MockFactory {
var cat: Cat = _
trait Cat {
def meow(): String
}
before {
withExpectations {
cat = stub[Cat]
(cat.meow _).when().returns("meow")
}
}
test("meows") {
assert(cat.meow() === "meow")
}
test("meows again") {
assert(cat.meow() === "meow")
}
}
Which fails the assertions with cat.meow() being equal to null. The problem seems to occur whenever the mock is done within before{}. The only way I have been able to get around this is using OneInstancePerTest instead of BeforeAndAfter and implementing similarly.
@barkhorn can you confirm why this bug was closed? It seems to be confirmed but not fixed?
Hi, I can't see where the bug is here, so this should remain closed.
To share mocks within a suite, i'd recommend reading up on this page: http://scalamock.org/user-guide/sharing-scalatest/
Using the OneInstancePerTest is the correct way of sharing mocks (or a Fixture, whatever style you like better).
You could kind of use withExpectations too, but not in a before call. A higher order function wrapping around it might work for you.
See here for the implementation, maybe that helps. I would suggest using a Fixture though. https://github.com/paulbutcher/ScalaMock/blob/5fd4e563143d752b9638d32e05defcd8ddb8bb82/shared/src/main/scala/org/scalamock/MockFactoryBase.scala#L47
|
gharchive/issue
| 2015-09-05T16:23:02 |
2025-04-01T06:39:58.409332
|
{
"authors": [
"SeerUK",
"TEWatson",
"Tolsi",
"Woodz",
"agile-jordi",
"barkhorn",
"eyalroth",
"mdirkse"
],
"repo": "paulbutcher/ScalaMock",
"url": "https://github.com/paulbutcher/ScalaMock/issues/117",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
514651042
|
SOLVED - Error importing androidx.core in FlutterBluePlugin.java
C:\Users\boris\AppData\Roaming\Pub\Cache\hosted\pub.dartlang.org\flutter_blue-0.6.3+1\android\src\main\java\com\pauldemarco\flutter_blue\FlutterBluePlugin.java:43: error: package androidx.core.app does not exist
import androidx.core.app.ActivityCompat;
^
C:\Users\boris\AppData\Roaming\Pub\Cache\hosted\pub.dartlang.org\flutter_blue-0.6.3+1\android\src\main\java\com\pauldemarco\flutter_blue\FlutterBluePlugin.java:44: error: package androidx.core.content does not exist
import androidx.core.content.ContextCompat;
I have this error and I've already migrated to AndroidX and refactored. What should I do? I've tested with different versions of flutter_blue too.
@borisllona, check out my comment here: https://github.com/pauldemarco/flutter_blue/issues/402#issuecomment-546425715
Or my pull request here: #416
Also, staying at 0.6.2 is not a bad idea (in pubspec.yaml).
I had to delete the .flutter-plugins and pubspec.lock files before flutter pub get.
It works for me, but I am looking forward to the next version if it comes with @rjcasson's solution.
@borisllona, check out my comment here: #402 (comment)
Or my pull request here: #416
It worked! I was starting to go crazy, updating my own app's gradle, without realizing that the library's gradle was the one that had to be changed.
Thanks so much!
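For anyone landing here, the app-side AndroidX flags also need to be present in android/gradle.properties (standard Flutter AndroidX migration settings, separate from the library-side gradle fix referenced above):

android.useAndroidX=true
android.enableJetifier=true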
@borisllona Chances are the problem is with outdated project files. Stop into the discord if you'd like some help getting things up and running.
Please follow the instructions here and let me know the results:
https://github.com/pauldemarco/flutter_blue/issues/415#issuecomment-548186492
|
gharchive/issue
| 2019-10-30T13:01:26 |
2025-04-01T06:39:58.415610
|
{
"authors": [
"borisllona",
"bus710",
"pauldemarco",
"rjcasson"
],
"repo": "pauldemarco/flutter_blue",
"url": "https://github.com/pauldemarco/flutter_blue/issues/418",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1753089839
|
Port to 1.20
What's Working
The mod builds, and Minecraft loads with it and BCLib 3.0.2
I can create a world without crashing
I can construct an Eden Portal
I can travel to the Eden Ring
I saw a discwing (didn't grab a screenshot :disappointed:)
Eden ring items show up in the creative inventory
What's Not
There's no Eden Ring tab
The Eden Ring guidebook is barely legible
What I haven't tested
Crafting
Mining / block breaking (see below)
Multiplayer
Anything having to do with gravity manipulation
What to Pay Extra Attention to
The most non-obvious changes (read: the ones I probably got wrong) were:
rewriting the guide book rendering (and I clearly messed that up pretty badly, haha!)
migrating to the tag-based system of determining which tools break which blocks, specifically the portion of the program that determines what items are mineable by what. I followed BetterEnd's example, but I was stumbling around in the dark and have no idea if I did it right, or whether I even matched the original intent when it came to hammer compatibility.
Thank you so much for creating this gorgeous mod--there's nothing like spending a day and a half poring over someone's codebase to make you appreciate the impressiveness of their body of work. If there's anything I can do to assist in the future development of this mod (including sponsoring its development via KoFi, Patreon, etc.), please let me know.
Also thank you to @Treetrain1 (whose review, testing and feedback I would greatly appreciate!) for all the work on getting the mod to 1.19.3 (especially the datagen gradle task!)
Ha! Okay, update: placing a gravity compressor crashed the game:
java.lang.IllegalArgumentException: Cannot get property class_2746{name=extended, clazz=class java.lang.Boolean, values=[true, false]} as it does not exist in Block{edenring:gravity_compressor}
I'm guessing there's a problem with the piston mixin...
Anyway, this is why this PR is likely to need a bit more attention than, say, #61
I'm closing this PR, as any port to 1.19.4+ really should be using the new tags system. I may try again, starting from @Treetrain1's 1.19.3 port, but I don't think I'll be reusing much of my own work from this PR.
|
gharchive/pull-request
| 2023-06-12T16:09:01 |
2025-04-01T06:39:58.798125
|
{
"authors": [
"OpenBagTwo"
],
"repo": "paulevsGitch/EdenRing",
"url": "https://github.com/paulevsGitch/EdenRing/pull/66",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
110106661
|
Why is fsevents mentioned twice in package.json?
It is listed both as a dependency and as an optional dependency?
Furthermore, I am unable to use npm shrinkwrap on my Ubuntu machine anymore because fsevents targets OS X machines only. It is breaking our build in our organization :(
Looking at your issues, I think fsevents has caused lots of trouble. Maybe time to drop it?
If not, how else can I solve the shrinkwrap issue then?
fsevents is an optional dependency: https://github.com/paulmillr/chokidar/blob/master/package.json#L41
If you can't do shrinkwrap, that seems like a npm issue, not brunches. NPM should always respect optionalDependencies. Please report it here: https://github.com/npm/npm/issues
https://github.com/npm/npm/issues/9865
|
gharchive/issue
| 2015-10-06T21:37:59 |
2025-04-01T06:39:58.867141
|
{
"authors": [
"binarykitchen",
"es128",
"paulmillr"
],
"repo": "paulmillr/chokidar",
"url": "https://github.com/paulmillr/chokidar/issues/363",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
504845756
|
allow watch for recursive not existing folder
Solves #872
This pull request allows users to watch specific non-existent folders.
This was achieved by introducing a new function, _mostSpecifiedExistingAncestor.
We should consider making the function public, as users might be interested in this functionality.
We need tests for that feature. Also, ensure you conform to chokidar code style e.g. { locations.
I added tests.
I've made the changes that you asked.
Ping?
If I'm reading this correctly, we'll have a huge performance degradation because of the new IO call for every file. Need to investigate this thoroughly and do some benchmarks.
maybe we should use a flag to enable or disable this functionality?
This needs a rebase. Also the curly brackets shouldn't be placed on new lines, there's an extra space in a test message and extra newlines.
|
gharchive/pull-request
| 2019-10-09T19:25:39 |
2025-04-01T06:39:58.870486
|
{
"authors": [
"XhmikosR",
"exx8",
"paulmillr"
],
"repo": "paulmillr/chokidar",
"url": "https://github.com/paulmillr/chokidar/pull/897",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
121879600
|
Question: When is Set/Map not?
I am writing some type-checking routines, isSet and isMap among others. I have them using Object#toString when toStringTag is not available, and when it is available I'm using the size getter to perform a check. This is working nicely across realms and on all environments that I have tested on. I then threw es6-shim into the mix, which has now caused me some dilemma. On ES5 environments the shim is used; it walks, talks, eats, sleeps, and even farts like a Duck :) but its DNA is not that of a Duck: Object#toString is [object Object]. Ok, so the alternative method of using size says that it is a Duck, and I thought "that's ok then". But using this method without es6-shim then fails due to bugs in old Firefox where size is a function rather than a getter. So I'm wondering what your thoughts are regarding type checkers and shims which implement a Duck in all but DNA. Should these checkers support ES shims, or should they count a shim as not a real Duck and say false? I'm not asking for code, just a perspective from shim makers as to whether isSet should return true for the Set shim in an ES5 environment, or even a Set patch in an ES6 environment where the constructor has been replaced?
You can go one of two ways - either isSet only returns true for a set that has native behavior (which is impractical to determine in JS), or, assume that if Set exists, cache the Set.prototype.size getter and use that (an old Firefox Set is not actually a Set if "size" is a non-getter).
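For example, a minimal sketch of that getter-caching approach (ES5-flavored to match the environments discussed; it assumes the size getter brand-checks its receiver, as native implementations do):

// Cache the size getter once, before anything can replace it.
var getSize = Object.getOwnPropertyDescriptor(Set.prototype, 'size').get;

function isSet(value) {
  try {
    getSize.call(value); // throws TypeError for non-Sets
    return true;
  } catch (e) {
    return false;
  }
}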
Thanks for that perspective; my head is still in a dilemma about what to do about it, and how much to check (is a buggy Set not a Set?). It seems likely that there is a need to drop Object#toString checking for Map and Set and be more duck-typed. Maybe sleeping on it will clear my view.
Yes, Object#toString is not reliable whatsoever in an environment with Symbol.toStringTag, so it's best to avoid it except as an optimization in older engines.
Yeah, I was only using it where toStringTag didn't exist, but it didn't exist in the buggy Firefox either. So this begs the question of how much of an implementation to test before you say "this is a Set/Map". Sure, I could require that es6-shim is loaded, but unlike es5-shim (which I fully recommend/require) these shims still feel like they are in their infancy, and some even require true ES5 environments. It feels a bit like a "between a rock and a hard place" situation.
es6-shim should still be used on every engine since there are tons of bugs in all of them with their implementations.
Based on what you've said, I am going back over all my work to add it in. It's going to throw up some issues I'm sure. :)
The only really reliable mechanism is to compile a list of bugs on each platform, and test for them individually. That happens to be exactly what es6-shim is doing -- it then loads a shim to fix those bugs when it detects them.
But there are certainly ES6 features which are unshimmable. Our goal is that correctly written ES6 code which uses only the shimmable subset of ES6 will run correctly on every platform. If you use unshimmable features we can't make any guarantees. (ToStringTag is an unshimmable feature.)
I'm currently moving away from toStringTag checks in all of my code, and adding es6-shim to all projects. Often it means implementing more costly code, but nothing that I'm working on needs to be so performant. :)
Closing this, I'm very happy that the shim implements Map and Set as per spec, and that my isMap and isSet are happy. I look forward to a new release that includes the MapIterator and SetIterator changes that are sitting in the master. Thanks.
|
gharchive/issue
| 2015-12-12T20:56:02 |
2025-04-01T06:39:58.878506
|
{
"authors": [
"Xotic750",
"cscott",
"ljharb"
],
"repo": "paulmillr/es6-shim",
"url": "https://github.com/paulmillr/es6-shim/issues/382",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1175470808
|
Added docker image creation
I added dockerfiles and instructions necessary to build a docker image containing:
JupyterLab
Tomorec Kernel
In the build of the Docker image, I take the current user's uid as input. I did this in order to simplify the permission issues that occur when writing to the host file system from within a Docker container.
Not sure if this is the best solution, as it makes the image specific to that user, but we can change this later and use a different method if needed.
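A hypothetical sketch of the build-arg approach; the base image, argument, and user names below are made up for illustration, not the actual Dockerfile in this PR:

FROM python:3.9-slim
# accept the host user's uid at build time so files written to bind
# mounts end up owned by the invoking user
ARG HOST_UID=1000
RUN useradd --create-home --uid ${HOST_UID} appuser
USER appuser

Built with something like: docker build --build-arg HOST_UID=$(id -u) -t tomorec .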
Please test it out and see if it works for you! I tested it myself and it works, but it really pushes my laptop to the limit in terms of resource usage. It might be necessary to modify the analysis script further to avoid this.
- Jason
I have updated the code, will make a new pull request
|
gharchive/pull-request
| 2022-03-21T14:28:50 |
2025-04-01T06:39:58.880909
|
{
"authors": [
"jasonbrudvik"
],
"repo": "paulscherrerinstitute/tomorec",
"url": "https://github.com/paulscherrerinstitute/tomorec/pull/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2590564399
|
🛑 FIT-nl is down
In b044e30, FIT-nl (https://www.fit.nl) was down:
HTTP code: 0
Response time: 0 ms
Resolved: FIT-nl is back up in 931108b after 1 hour, 52 minutes.
|
gharchive/issue
| 2024-10-16T04:19:48 |
2025-04-01T06:39:58.883458
|
{
"authors": [
"paulwolters"
],
"repo": "paulwolters/up",
"url": "https://github.com/paulwolters/up/issues/100",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
851738356
|
essense --> essence
Typo in intro.md, "essense" should be "essence".
Thanks for the fix!
|
gharchive/pull-request
| 2021-04-06T19:17:06 |
2025-04-01T06:39:58.931612
|
{
"authors": [
"abrhim",
"chrishtr"
],
"repo": "pavpanchekha/emberfox",
"url": "https://github.com/pavpanchekha/emberfox/pull/89",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1236845044
|
Laravel 9 - empty array on getPaymentMethods
Hello,
I'm integrating Paynow with a Laravel 9 project / PHP 7.4.
public function availablePaymentMethods()
{
    $client = new Client(config('paynow.api_key'), config('paynow.signature_key'), Environment::SANDBOX);
    try {
        $payment = new Payment($client);
        $paymentMethods = $payment->getPaymentMethods();
        $availablePaymentMethods = $paymentMethods->getAll();
        return response()->json($availablePaymentMethods);
    } catch (PaynowException $exception) {
        return response()->json('Error occurred', 400);
    }
}
The API returns 200, but its content is only empty elements:
[
{},
{},
{},
{},
{},
{},
{},
{},
{},
{},
{},
{},
{},
{},
{},
{},
{},
{},
{},
{},
{},
{},
{}
]
Http Client:
httpClient: Paynow\HttpClient\HttpClient {#499 ▼
#client: Symfony\Component\HttpClient\Psr18Client {#503 ▼
-client: Symfony\Component\HttpClient\CurlHttpClient {#506 ▶}
-responseFactory: Nyholm\Psr7\Factory\Psr17Factory {#504}
-streamFactory: Nyholm\Psr7\Factory\Psr17Factory {#505}
}
Have you tried to call the REST API from the postman or curl for your credentials?
@emilleszczak2, calling API directly from postman returns data correctly.
I've made it this way:
public function availablePaymentMethods(): \Illuminate\Http\JsonResponse
{
    $client = new Client(config('paynow.api_key'), config('paynow.signature_key'), Environment::SANDBOX);
    try {
        $payment = new Payment($client);
        $paymentMethods = $payment->getPaymentMethods();
        $availablePaymentMethods = $paymentMethods->getOnlyCards();
        $responseTable = array();
        foreach ($availablePaymentMethods as $availablePaymentMethod) {
            $responseTable[] = [
                'id' => $availablePaymentMethod->getId(),
                'type' => $availablePaymentMethod->getType(),
                'name' => $availablePaymentMethod->getName(),
                'description' => $availablePaymentMethod->getDescription(),
                'image' => $availablePaymentMethod->getImage(),
                'status' => $availablePaymentMethod->getStatus(),
                'authorizationType' => $availablePaymentMethod->getAuthorizationType()
            ];
        }
        return response()->json($responseTable);
    } catch (PaynowException $exception) {
        return response()->json('Error occurred: ' . $exception->getMessage(), 400);
    }
}
We can close the issue.
|
gharchive/issue
| 2022-05-16T09:04:09 |
2025-04-01T06:39:58.938265
|
{
"authors": [
"emilleszczak2",
"m-pastuszek"
],
"repo": "pay-now/paynow-php-sdk",
"url": "https://github.com/pay-now/paynow-php-sdk/issues/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1560891839
|
Some entries disappear from collection page view table
Hello!
Bug Report
After I added the draft system to this collection, I've got a problem with the view table of the collection:
12 of the 14 entries don't appear in the table:
But the view becomes OK if I sort with "-" by some field:
So this /pages?sort=-_status&page=1 will be OK, and this /pages?sort=_status&page=1 will lose 12 entries.
Other Details
Payload version: 1.5.9
For what it's worth, I was experiencing this issue and had to go all the way back to version 1.5.4 before it worked as expected. There were several versions in between 1.5.4 and 1.5.9 that did slightly different things.
Hey @Kikky and @joornby — this is now resolved and released in 1.6.1.
Note that there are some migration steps that you need to follow in order to get to the new version. See here:
https://github.com/payloadcms/payload/releases/tag/v1.6.1
@jmikrut , I've updated to 1.6.3.
After creating a new collection with these versions options, because I just need the draft system:
And when I create a new entry in this collection, edit it and click Publish, I get "1 version is found". If I edit it and click Publish again, I get "no versions found". One more time: "1 version is found". One more: "no versions found"... etc.
I don't need versions, I just need the draft system, but when a doc has no versions in the DB while versions are enabled on the collection, I can't view it in the collection's view table.
|
gharchive/issue
| 2023-01-28T15:12:18 |
2025-04-01T06:39:58.962898
|
{
"authors": [
"Kikky",
"jmikrut",
"joornby"
],
"repo": "payloadcms/payload",
"url": "https://github.com/payloadcms/payload/issues/1963",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2410751966
|
Switching locale doesn't update field UI based on certain access
Link to reproduction
https://github.com/chrisvanmook/payload/commit/c0be6a12eb7bc17b906f88832253e9d28c0a1fe6
Payload Version
3.0.0-beta.60
Node Version
v20.11.1
Next.js Version
15.0.0-canary.58
Describe the Bug
When switching locales, if you have an update access method that should disable a field based on the language, the UI doesn't update. Only after a full refresh is the expected result visible (see video).
https://github.com/user-attachments/assets/ed029167-fc44-4016-933e-e93af0486475
Reproduction Steps
Add the code from the reproduction URL
Switch from English to NL
Note that the field is not disabled
Refresh
Note that the field is now disabled, as it should be
Adapters and Plugins
db-postgres
This will be fixed in the next beta release!
|
gharchive/issue
| 2024-07-16T10:06:46 |
2025-04-01T06:39:58.966965
|
{
"authors": [
"chrisvanmook",
"paulpopus"
],
"repo": "payloadcms/payload",
"url": "https://github.com/payloadcms/payload/issues/7163",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1568101243
|
How to clear previous user paypal data?
Example:
Step 0: Initial state
App User: nil
Paypal User: nil
Step 1: User A logs into the app
App User: A
Paypal User: nil
Step 2: User A logs into their Paypal Checkout
App User: A
Paypal User: A
Step 3: User A logs out of the app
App User: nil
Paypal User: A
Step 4: User B logs into the app
App User: B
Paypal User: A
Step 5: User B goes to Paypal checkout and is able to use User A's Paypal.
App User: B
Paypal User: A
In the iOS SDK I can call Checkout.logoutUser().
How do I log out a user in the Android SDK?
@Oooooori apologies for the delayed reply.
The logout functionality is slated to be available in our next release.
Closing this issue as it's a duplicate of https://github.com/paypal/android-checkout-sdk/issues/99
The changes are already merged for programmatic logout. It will be in the next release.
|
gharchive/issue
| 2023-02-02T13:51:59 |
2025-04-01T06:39:59.002007
|
{
"authors": [
"Oooooori",
"saperi22",
"tdchow"
],
"repo": "paypal/android-checkout-sdk",
"url": "https://github.com/paypal/android-checkout-sdk/issues/195",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
213725609
|
can not detect paypal dialog closed event.
I cannot find a close-event callback for when I manually close the PayPal dialog.
This is possibly related to #250 and #211
Are you using onCancel with the iframe modal? In that case, your issue is probably due to be released with #211
I've used it, but it had no effect. I don't know if I'm doing something wrong.
My situation is like this:
When I click the checkout button, our website generates an order while showing a loading page, and the loading page is also inside the PayPal dialog. When I close the popup window, I want a close callback so I can do some operations, such as refreshing my checkout page.
This is our loading page:
After our loading page, there will show paypal's payment page
When I close the window on PayPal's payment page, a default event redirects to another page. But when I close the window on our loading page, nothing happens.
Looks like you're using the legacy v3.5 integration. If you want to use onCancel you will need to upgrade to the v4 button -- please see the docs at https://developer.paypal.com/docs/integration/direct/express-checkout/integration-jsv4/advanced-integration/
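For anyone landing on this thread, here is a hedged sketch of the v4 button shape with onCancel wired up; the amounts and the reload action are placeholders, so treat the linked docs as authoritative (assumes checkout.js v4 is loaded on the page):

paypal.Button.render({
  env: 'sandbox', // or 'production'
  payment: function (data, actions) {
    // placeholder transaction; create the real payment server-side if preferred
    return actions.payment.create({
      payment: { transactions: [{ amount: { total: '1.00', currency: 'USD' } }] }
    });
  },
  onAuthorize: function (data, actions) {
    return actions.payment.execute();
  },
  onCancel: function (data) {
    // fires when the buyer closes the PayPal window without paying
    window.location.reload();
  }
}, '#paypal-button-container');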
thanks so much! I'll try it now.
|
gharchive/issue
| 2017-03-13T10:25:25 |
2025-04-01T06:39:59.006913
|
{
"authors": [
"bluepnume",
"danigomez",
"yuuk"
],
"repo": "paypal/paypal-checkout",
"url": "https://github.com/paypal/paypal-checkout/issues/257",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
203999821
|
replace map with foreach
Thanks for your pull request. Please review the following guidelines.
[ ] Title includes issue id.
[x] Description of the change added.
[x] Commits are squashed.
[ ] Tests added.
[ ] Documentation added/updated.
[x] Also please review CONTRIBUTING.md.
The change looks good. Just need help with some of the formalities:
Please file an issue for any change
Please edit your commit to include link or reference to the issue
The PR title should reference the issue (in form of #123)
Sorry for the inconvenience. We do this for traceability (as described in CONTRIBUTING.md). Thanks!
@akara Done! PTAL.
Thanks! Build is failing. Let me go take a look what's happening there. This should not be due to this change unless there is anything unexpected. I'll get back. No action on your side.
|
gharchive/pull-request
| 2017-01-30T12:39:47 |
2025-04-01T06:39:59.012507
|
{
"authors": [
"akara",
"thefourtheye"
],
"repo": "paypal/squbs",
"url": "https://github.com/paypal/squbs/pull/382",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1376955970
|
🛑 SIGAA is down
In 62562c6, SIGAA (https://sig.unb.br/sigaa/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SIGAA is back up in a7d2141.
|
gharchive/issue
| 2022-09-18T06:01:13 |
2025-04-01T06:39:59.018649
|
{
"authors": [
"pazkero"
],
"repo": "pazkero/status.cacic.bsb.br",
"url": "https://github.com/pazkero/status.cacic.bsb.br/issues/161",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2211782891
|
🛑 Departamento is down
In b92a3bb, Departamento (http://cca.unb.br/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Departamento is back up in dddba7f after 34 minutes.
|
gharchive/issue
| 2024-03-27T19:54:39 |
2025-04-01T06:39:59.021444
|
{
"authors": [
"pazkero"
],
"repo": "pazkero/status.cacic.bsb.br",
"url": "https://github.com/pazkero/status.cacic.bsb.br/issues/2969",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1921836875
|
🛑 Departamento is down
In 175400f, Departamento (http://cca.unb.br/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Departamento is back up in 4644903 after 21 minutes.
|
gharchive/issue
| 2023-10-02T12:37:26 |
2025-04-01T06:39:59.024058
|
{
"authors": [
"pazkero"
],
"repo": "pazkero/status.cacic.bsb.br",
"url": "https://github.com/pazkero/status.cacic.bsb.br/issues/415",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1924427849
|
🛑 Departamento is down
In 1288f32, Departamento (http://cca.unb.br/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Departamento is back up in 609f089 after 12 minutes.
|
gharchive/issue
| 2023-10-03T15:44:15 |
2025-04-01T06:39:59.026472
|
{
"authors": [
"pazkero"
],
"repo": "pazkero/status.cacic.bsb.br",
"url": "https://github.com/pazkero/status.cacic.bsb.br/issues/437",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
178423497
|
iOS 10, xCode 8: Error in Success callbackId
Expected behaviour
Tell us what should happen.
Actual behaviour
Tell us what happens instead. Provide a log message if relevant
Error in Success callbackId: Sim1675580494 : TypeError: undefined is not an object (evaluating 'navigator.connection.type')
I'm seeing this behaviour on
[x] iOS device
[x] iOS simulator
[ ] Android device
[ ] Android emulator
I am using
[x] cordova
[ ] ionic
[ ] PhoneGap
[ ] PhoneGap Developer App
[ ] Intel XDK
[ ] Intel App Preview
[ ] Telerik
[ ] Other:
Hardware models
Example: Samsung Galaxy S6, iPhone 6s
OS versions
Example: Android 4.4.2, iOS 9.2
I've checked these
[ ] It happens on a fresh Cordova CLI project as well.
[ ] I'm waiting for deviceready to fire.
[ ] My JavaScript has no errors (window.onerror catches nothing).
[ ] I'm using the latest cordova library, Android SDK, Xcode, etc.
So how can we reproduce this?
Provide the used components versions (cordova, ionic, etc).
Provide the steps to reproduce the issue.
Provide files, sources if available.
Thanks. Somehow my "cordova-plugin-network-information" plugin got removed. I just added it back and now there's no issue. Sorry!
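For anyone hitting the same error, re-adding the plugin is one standard Cordova CLI command:

cordova plugin add cordova-plugin-network-information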
|
gharchive/issue
| 2016-09-21T18:17:12 |
2025-04-01T06:39:59.036771
|
{
"authors": [
"azn1viet"
],
"repo": "pbakondy/cordova-plugin-sim",
"url": "https://github.com/pbakondy/cordova-plugin-sim/issues/32",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
120592515
|
Tests for everything
This tests all the features we have right now that I could think of.
It also makes a fix for when the promise returns null, as I re-broke that in a previous commit.
So if this gets merged, can you republish a patch version?
This is great. :+1:
I released the patch as 2.3.1.
Doesn't seem to be on npm yet
Whoops. Sorry about that. It should be now.
Just FYI, I ignored the tests and example directory on npm. I'm also in the process of setting up code coverage + https://coveralls.io/. I'll probably finish that up in a couple of weeks when I have more time.
|
gharchive/pull-request
| 2015-12-05T22:30:59 |
2025-04-01T06:39:59.082593
|
{
"authors": [
"pburtchaell",
"tomatau"
],
"repo": "pburtchaell/redux-promise-middleware",
"url": "https://github.com/pburtchaell/redux-promise-middleware/pull/41",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
490024260
|
Add simple implementation of infinite list
Fixes #29 for all list views in the application
Codecov Report
Merging #41 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #41 +/- ##
======================================
Coverage 94.2% 94.2%
======================================
Files 33 33
Lines 569 569
======================================
Hits 536 536
Misses 33 33
|
gharchive/pull-request
| 2019-09-05T21:43:29 |
2025-04-01T06:39:59.087866
|
{
"authors": [
"codecov-io",
"pbylicki"
],
"repo": "pbylicki/rfhub-new",
"url": "https://github.com/pbylicki/rfhub-new/pull/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
784787065
|
Add a Japanese translation on lockedMaster
Thanks for the great app!! I love this app.
I added a Japanese translation on lockedMaster.
Sorry about the messy commit history; I screwed up the line-ending characters.
TEST
The tests pass locally.
Executed 183 of 183 specs SUCCESS in 1 sec.
Thanks.
🎉 Thanks a bunch for adding a level translation!!
|
gharchive/pull-request
| 2021-01-13T04:55:04 |
2025-04-01T06:39:59.184783
|
{
"authors": [
"HosokawaR",
"pcottle"
],
"repo": "pcottle/learnGitBranching",
"url": "https://github.com/pcottle/learnGitBranching/pull/779",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
52545345
|
cant join rooms
Noooo! I'm home and connecting through my Sprint phone hotspot. I can't get into rooms.
Turned off firewall. Your websocket link gives all "yes", looks fine.
The log:
2014-12-19 19:58:30,613 DEBUG[SoundBounce.WindowsClient.Program]: Initial startup: Main()
2014-12-19 19:58:30,694 INFO [SoundBounce.WindowsClient.SpotifyEnabledBrowser]: Cef version initialized OK.
2014-12-19 19:58:30,701 DEBUG[SoundBounce.SpotifyAPI.Spotify]: Message thread running...
2014-12-19 19:58:36,814 DEBUG[SoundBounce.SpotifyAPI.Spotify]: api_version=12
2014-12-19 19:58:36,814 DEBUG[SoundBounce.SpotifyAPI.Spotify]: api_version=12
2014-12-19 19:58:36,814 DEBUG[SoundBounce.SpotifyAPI.Spotify]: application_key_size=321
2014-12-19 19:58:36,814 DEBUG[SoundBounce.SpotifyAPI.Spotify]: cache_location=C:\Users\Tony\AppData\Local\Temp\SoundBounce_temp
2014-12-19 19:58:36,814 DEBUG[SoundBounce.SpotifyAPI.Spotify]: settings_location=C:\Users\Tony\AppData\Local\Temp\SoundBounce_temp
2014-12-19 19:58:37,223 DEBUG[SoundBounce.SpotifyAPI.Spotify]: sp_session_preferred_bitrate() to 320k succeeded!
2014-12-19 19:58:37,284 DEBUG[SoundBounce.SpotifyAPI.Spotify]: libspotify > 00:58:37.284 I [ap:1752] Connecting to AP ap.gslb.spotify.com:4070
2014-12-19 19:58:37,285 DEBUG[SoundBounce.SpotifyAPI.Spotify]: libspotify > 00:58:37.285 E [c:/Users/spotify-buildagent/BuildAgent/work/1e0ce8a77adfb2dc/client/core/network/proxy_resolver_win32.cpp:215] WinHttpGetProxyForUrl failed
2014-12-19 19:58:37,374 DEBUG[SoundBounce.SpotifyAPI.Spotify]: libspotify > 00:58:37.374 I [ap:1226] Connected to AP: 193.235.203.114:4070
2014-12-19 19:58:41,900 DEBUG[SoundBounce.SpotifyAPI.Spotify]: libspotify > 00:58:41.900 I [offline-mgr:2084] Storage has been cleaned
2014-12-19 19:58:41,900 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:41,905 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:42,753 DEBUG[SoundBounce.WindowsClient.SpotifyEnabledBrowser]: From http://cdnjs.cloudflare.com/ajax/libs/react/0.12.1/JSXTransformer.js Line 318: You are using the in-browser JSX transformer. Be sure to precompile your JSX for production - http://facebook.github.io/react/docs/tooling-integration.html#jsx
2014-12-19 19:58:42,951 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:45,587 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:45,592 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:46,943 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:47,040 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:48,542 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:49,084 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:50,110 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:50,223 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:50,887 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:51,570 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:52,235 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:52,917 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:53,638 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:54,359 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:54,938 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:55,688 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:56,397 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:57,030 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:57,386 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:57,846 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:58,147 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:58,458 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:58,924 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:58:59,129 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:59:00,917 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:59:01,024 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:59:02,186 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:59:02,353 DEBUG[SoundBounce.SpotifyAPI.Spotify]: spotify > metadata_updated
2014-12-19 19:59:03,156 DEBUG[SoundBounce.SpotifyAPI.Spotify]: libspotify > 00:59:03.156 I [offline-mgr:2032] 0 files are locked. 0 images are locked
2014-12-19 19:59:03,156 DEBUG[SoundBounce.SpotifyAPI.Spotify]: libspotify > 00:59:03.156 I [offline-mgr:2058] 0 files unlocked. 0 images unlocked
2014-12-19 19:59:03,887 DEBUG[SoundBounce.SpotifyAPI.Spotify]: Successfully closed libspotify session.
In the meantime, if someone finds themself locked out because they left a session running somewhere else -- as I seem to do every other day :angry: -- they can use the "Log out everywhere" button in their Spotify account overview https://www.spotify.com/us/accounts/overview/.
|
gharchive/issue
| 2014-12-20T00:57:53 |
2025-04-01T06:39:59.202642
|
{
"authors": [
"Tonybyte",
"bgod"
],
"repo": "pdaddyo/soundbounce",
"url": "https://github.com/pdaddyo/soundbounce/issues/88",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2457778398
|
Use python setuptools & pip to distribute pdicfg_validate script
In GitLab by @jbigot on Jul 15, 2019, 08:02
Replaced by #153
closed
This is done for zpp, just need to copy now
reopened
|
gharchive/issue
| 2019-07-15T06:02:50 |
2025-04-01T06:39:59.215572
|
{
"authors": [
"jbigot"
],
"repo": "pdidev/pdi",
"url": "https://github.com/pdidev/pdi/issues/143",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
788080842
|
Which features are a available and which are not?
Maybe I'm blind or my browser renders it badly. From the main page:
In such a list I'd expect some checked checkboxes and some unchecked checkboxes. Which raises the question: what is available?
Like (in Markdown):
[ ] unchecked
[x] checked
Hi @frankgerhardt, Thank you for pointing this out!
There's nothing wrong with your browser. We're just way behind with updating our documentation. :disappointed:
Currently @poef, @ylebre, and I are hard at work trying to complete the implementation of milestone 4 (Social web apps).
Most of the other features are fully functional, we just never got around to actually checking those boxes. (Although the code still needs cleaning up and there might be some bugs caused by hard-coded values).
You can get a slightly more detailed idea of where we are by following our progress in the Solid test-suite repo.
As our current focus is on the code, my estimate is that our documentation will remain out-of-date for another month or two.
Fixed in 03c8e961772c324b4718e0e778a2a38054b3ef6d
|
gharchive/issue
| 2021-01-18T09:11:08 |
2025-04-01T06:39:59.221573
|
{
"authors": [
"Potherca",
"frankgerhardt"
],
"repo": "pdsinterop/php-solid-server",
"url": "https://github.com/pdsinterop/php-solid-server/issues/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1381265501
|
Add new Category AlertDialog deactivates wrong button when no text is entered
The cancel button is greyed out instead of the accept button when adding a new category.
@peanut-brother I can take this on
@peanut-brother Just finished a pull request addressing this issue. Let me know what you think
|
gharchive/issue
| 2022-09-21T17:28:34 |
2025-04-01T06:39:59.240655
|
{
"authors": [
"garrettcollier",
"peanut-brother"
],
"repo": "peanut-brother/categorical-to-do",
"url": "https://github.com/peanut-brother/categorical-to-do/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
804228692
|
⚠️ Hartman Foundation has degraded performance
In 3e8752f, Hartman Foundation (https://gordonhartman.com/) experienced degraded performance:
HTTP code: 200
Response time: 2528 ms
Resolved: Hartman Foundation performance has improved in 245fe9c.
|
gharchive/issue
| 2021-02-09T05:43:50 |
2025-04-01T06:39:59.243286
|
{
"authors": [
"pearanalytics"
],
"repo": "pearanalytics/uptime",
"url": "https://github.com/pearanalytics/uptime/issues/194",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1244348895
|
Examples for accountsChanged and chainChanged events
Hi @pedrouid ,
These examples might help newcomers like me to understand WalletConnect a little bit more.
I used the https://github.com/WalletConnect/walletconnect-test-wallet repo to test my changes.
Cheers,
Tamas
I pushed an extra commit to get the checksummed address; isValid returned false for me in the case of MetaMask.
|
gharchive/pull-request
| 2022-05-22T19:46:38 |
2025-04-01T06:39:59.278278
|
{
"authors": [
"moltam89"
],
"repo": "pedrouid/web3modal-ethers-example",
"url": "https://github.com/pedrouid/web3modal-ethers-example/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
122897910
|
Add git tags and/or a changelog
Hello there :smile:
I tried to update erroz in one of our applications from 0.1.4 to 1.0.0 and it's really hard to find out which breaking change caused the major version bump. To me it's unclear which git commits were used to publish 0.1.4 and 1.0.0 to npm. Maybe you can make this a bit more clear :+1:
Ben
Sorry @benurb. That's something I wanted to start with the next release.
I think the breaking change was that I removed the toJSON function.
Additionally, I added erroz.AbstractError for comparison and customizing.
The documentation on master is for the upcoming release, which I will publish before Christmas. Just ping me if you need help upgrading :)
Hey meaku :smile:
Ok, I'll just give it a try. I'll come back to you if I need help.
Thanks!
Ben
|
gharchive/issue
| 2015-12-18T07:35:31 |
2025-04-01T06:39:59.281162
|
{
"authors": [
"benurb",
"meaku"
],
"repo": "peerigon/erroz",
"url": "https://github.com/peerigon/erroz/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1782600399
|
🛑 API MERCADO PUBLICO is down
In 5a14b9f, API MERCADO PUBLICO (https://api.mercadopublico.cl/servicios/v1/publico/ordenesdecompra.json?fecha=30122022&CodigoOrganismo=7312&ticket=E1951384-BF88-4EE5-8C33-934680281626) was down:
HTTP code: 0
Response time: 0 ms
Resolved: API MERCADO PUBLICO is back up in 2187fb0.
|
gharchive/issue
| 2023-06-30T14:12:18 |
2025-04-01T06:39:59.309236
|
{
"authors": [
"peimando"
],
"repo": "peimando/fimosis_monitor",
"url": "https://github.com/peimando/fimosis_monitor/issues/524",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1419927275
|
🛑 Beyond.pl is down
In 41f576c, Beyond.pl (http://www.beyond.pl) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Beyond.pl is back up in f465fbb.
|
gharchive/issue
| 2022-10-23T20:44:40 |
2025-04-01T06:39:59.312451
|
{
"authors": [
"pejotes"
],
"repo": "pejotes/upptime",
"url": "https://github.com/pejotes/upptime/issues/157",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
110538220
|
Updates to Get started topic
Made the bullet list into paragraphs
Add links to other topics
Add examples of what you can do with search
Take a look at the next-to-last paragraph, in particular. I wasn't as clear on the meaning of some of the bullet points as I was reworking them.
In addition:
update the URLs in the reverse topic with correct formatting (changing YOUR_KEY to search=XXXXX and removing the extra / in api/_key in URLs).
remove or fix TODO notes.
Also, why do the URLs have underscores in them?
For example: [/v1/reverse?api_key=search-XXXXXXX&point.lat=48.858268&point.lon=2.294471]
These changes are all good and the paragraphs are much better quality than the bullet points. I have a question, though. For a very high-level overview doc like this one (I think this will often be one of the very first things people who are new to Pelias read), is it useful to have some bullet points that can quickly be skimmed to get an idea of what you can do with this thing you're reading about?
Good points. Thanks, @orangejulius, for giving it a read.
Let me come up with a couple of concise bullet points covering what Search does and how to get started. If we're calling it a "get started", it should tell me that info up front. Those can then link to other topics.
Those bullet points don't have to go in this document, now that I think about it. I'd love to hear your ideas for the best place. Maybe we keep the new text here as is, and add the bullet points to the index page?
Going to merge this. I have some work to do on the index page to make the docs site consistent. Will see if the bullet points belong here or in the index file.
|
gharchive/pull-request
| 2015-10-08T20:49:09 |
2025-04-01T06:39:59.322199
|
{
"authors": [
"orangejulius",
"rmglennon"
],
"repo": "pelias/pelias-doc",
"url": "https://github.com/pelias/pelias-doc/pull/45",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2477140802
|
Formatted stats.py, added some documentation.
Git is saying that I removed a line from config_default.txt. Not sure why, but it looks fine. Anyways, please let me know if you want to change anything with the documentation. It's pretty basic at the moment.
Thank you for your contribution! The documentation updates look great and are much appreciated. However, I have some concerns about the formatting changes. Currently, the project does not have a standard formatter set up, which might explain the discrepancies between the formatting in your PR and the formatting on other systems. It’s important to ensure consistency across different environments, so I suggest we establish a standard formatter as part of our CI process in a future update.
Regarding the changes to the configuration files, it would be best to remove them from this PR. Please rebase your branch to clean up those changes. In the future, be mindful to only stage (git add) the specific files or changes you intend to include in a PR.
Once these adjustments are made, I believe we can proceed with merging the documentation updates. We can address setting up a formatter and integrating it into our CI in a separate PR
Sorry about the unexpected close; I renamed the master branch to main, which is why this closed automatically. I've also made dev the default branch, so please make new PRs against dev instead of main (the replacement for master).
Since this closed automatically I don't see a way to reopen it, so if you'd like to just make a new PR on dev with the documentation changes we can continue from there.
That sounds fine! I'll open one soon.
|
gharchive/pull-request
| 2024-08-21T06:06:50 |
2025-04-01T06:39:59.382392
|
{
"authors": [
"kurealnum",
"morr0ne",
"pennybelle"
],
"repo": "pennybelle/pbfetch",
"url": "https://github.com/pennybelle/pbfetch/pull/22",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2653553116
|
Add UIP-6: Noble Forwarding
Opening as draft pending discussion on forum: https://forum.penumbra.zone/t/uip-noble-address-forwarding/125
#10 is already claiming UIP-6; mind bumping to UIP-7?
|
gharchive/pull-request
| 2024-11-12T22:54:15 |
2025-04-01T06:39:59.417558
|
{
"authors": [
"conorsch",
"zbuc"
],
"repo": "penumbra-zone/UIPs",
"url": "https://github.com/penumbra-zone/UIPs/pull/11",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2317483490
|
specification: file organization in protocol spec
Describe your changes
Several parts of the protocol are not specified, while being referred to in other parts of the documentation. We should remove these empty patches in our protocol spec, and use this PR as a guiding reference of what sections were removed as we periodically fill them in over time. @conorsch can you help better format the docs?
This references components X and Y in the ECC audit log. Auxiliary to this is component Z, which should probably be (but currently is not) captured here since they're also spec-related changes.
This also captures supplemental spec changes based on specific spec-related comments made in the audit.
Issue ticket number and link
Checklist before requesting a review
[x] If this code contains consensus-breaking changes, I have added the "consensus-breaking" label. Otherwise, I declare my belief that there are not consensus-breaking changes, for the following reason:
@redshiftzero this doesn't have an associated issue attached to it
this is a staging area for spec-related changes, and we'll focus on addressing the rest of the spec changes later this week once phase 2 is unblocked.
|
gharchive/pull-request
| 2024-05-26T04:59:40 |
2025-04-01T06:39:59.421028
|
{
"authors": [
"TalDerei"
],
"repo": "penumbra-zone/penumbra",
"url": "https://github.com/penumbra-zone/penumbra/pull/4485",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2414647496
|
chore: update cometbft version v0.37.5 -> v0.37.9
Describe your changes
Updates the recommended CometBFT version to the latest in the 0.37.x series, and also relaxes the language in the guide, to make clear that anything in 0.37.x is acceptable.
Issue ticket number and link
N/A
Checklist before requesting a review
[x] If this code contains consensus-breaking changes, I have added the "consensus-breaking" label. Otherwise, I declare my belief that there are not consensus-breaking changes, for the following reason:
docs changes and tweaks to deploy scripts, no application code changes.
I also tacked on a version bump to the recommended penumbra versions, to match 0.79.1, which went out today https://github.com/penumbra-zone/penumbra/releases/tag/v0.79.1
|
gharchive/pull-request
| 2024-07-17T21:40:59 |
2025-04-01T06:39:59.424274
|
{
"authors": [
"conorsch"
],
"repo": "penumbra-zone/penumbra",
"url": "https://github.com/penumbra-zone/penumbra/pull/4720",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
557795824
|
Adding support for contextvars
Currently this package uses thread locals to maintain the logging context. However, that doesn't work for async code, especially ASGI servers (which may concurrently handle multiple requests with a single thread).
The way things like this are typically handled in the async world is through the use of contextvars. (They were added in 3.7, but there is a backport package for 3.6.)
Would there be any possibility of accepting a PR adding optional support for the use of contextvars to manage the logging context rather than thread locals?
I'm imagining something along the lines of adding a top level use_contextvars() call that would attempt to import contextvars (or the 3.6 backport), and if successful would replace the pylogctx.core.context threading.Local instance with a contextvars.ContextVar or similar.
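A minimal sketch of the shape this could take; the use_contextvars wiring is omitted, and every name below is hypothetical, not existing pylogctx API:

import contextvars

# hypothetical stand-in for pylogctx.core.context (a threading.local today)
_log_context = contextvars.ContextVar("pylogctx_context")

def _current():
    """Return the mutable context dict for the current thread or task."""
    try:
        return _log_context.get()
    except LookupError:
        fresh = {}
        _log_context.set(fresh)
        return fresh

def update(**fields):
    # each asyncio task inherits a copy of its parent's context,
    # so concurrent requests do not leak fields into each other
    _current().update(fields)

def as_dict():
    return dict(_current())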
Great suggestion. I need this functionality in my project
I'd definitely accept a PR that switches to contextvars :)
I'm also totally fine dropping support for old Pythons. Python 3.6 is still supported but only for a few months (23 Dec 2021). I think it's not unreasonable to imagine that 3.6 users will be stuck with the current version if we make a new one.
This should make the work easier to you :)
I just realized the original issue was old :D Sorry for the late answer !
|
gharchive/issue
| 2020-01-30T22:30:52 |
2025-04-01T06:39:59.427802
|
{
"authors": [
"124bit",
"dmontagu",
"ewjoachim"
],
"repo": "peopledoc/pylogctx",
"url": "https://github.com/peopledoc/pylogctx/issues/46",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2115499970
|
pipestat summarize: columns view should only show stats NOT objects
The columns should just show stats:
Currently, it is also showing objects (which doesn't make sense):
I spent some time on this but reverted my changes. It is a bit tricky because the summary table is tied to the column plotting (due to jinja template). I would need to take out all non-stats objects from the summary table. But these results are used here in the summary table AND for the individual record pages.
Basically, this might require more refactoring than I originally thought. I can probably tackle this issue during the same refactor as well: https://github.com/pepkit/pipestat/issues/150
This should now be solved with the above PR #182
|
gharchive/issue
| 2024-02-02T18:10:55 |
2025-04-01T06:39:59.477716
|
{
"authors": [
"donaldcampbelljr"
],
"repo": "pepkit/pipestat",
"url": "https://github.com/pepkit/pipestat/issues/148",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1996289212
|
K8SPG-430: Add labels to backup objects
CHANGE DESCRIPTION
Problem:
Labels not aligned for backup objects (repo sts, jobs, and repo PVC).
Solution:
Add missing labels
CHECKLIST
Jira
[x] Is the Jira ticket created and referenced properly?
[x] Does the Jira ticket have the proper statuses for documentation (Needs Doc) and QA (Needs QA)?
[x] Does the Jira ticket link to the proper milestone (Fix Version field)?
Tests
[x] Is an E2E test/test case added for the new feature/change?
[x] Are unit tests added where appropriate?
Config/Logging/Testability
[x] Are all needed new/changed options added to default YAML files?
[x] Are the manifests (crd/bundle) regenerated if needed?
[x] Did we add proper logging messages for operator actions?
[x] Did we ensure compatibility with the previous version or cluster upgrade process?
[x] Does the change support oldest and newest supported PG version?
[x] Does the change support oldest and newest supported Kubernetes version?
Test name               Status
demand-backup           passed
init-deploy             passed
monitoring              passed
operator-self-healing   passed
scaling                 passed
scheduled-backup        passed
self-healing            passed
start-from-backup       passed
telemetry-transfer      passed
users                   passed
custom-extensions       passed
We run 11 out of 11
commit: https://github.com/percona/percona-postgresql-operator/pull/578/commits/98519d6a16bbec9bb8cf75efe08c70e6c7b4f187
image: perconalab/percona-postgresql-operator:PR-578-98519d6a1
|
gharchive/pull-request
| 2023-11-16T08:04:38 |
2025-04-01T06:39:59.506673
|
{
"authors": [
"JNKPercona",
"inelpandzic"
],
"repo": "percona/percona-postgresql-operator",
"url": "https://github.com/percona/percona-postgresql-operator/pull/578",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1683484669
|
CLOUD-783 - Cleanup test cluster before deletion
CHANGE DESCRIPTION
Problem:
Short explanation of the problem.
Cause:
Short explanation of the root cause of the issue if applicable.
Solution:
Short explanation of the solution we are providing with this PR.
CHECKLIST
Jira
[ ] Is the Jira ticket created and referenced properly?
[ ] Does the Jira ticket have the proper statuses for documentation (Needs Doc) and QA (Needs QA)?
[ ] Does the Jira ticket link to the proper milestone (Fix Version field)?
Tests
[ ] Is an E2E test/test case added for the new feature/change?
[ ] Are unit tests added where appropriate?
Config/Logging/Testability
[ ] Are all needed new/changed options added to default YAML files?
[ ] Are the manifests (crd/bundle) regenerated if needed?
[ ] Did we add proper logging messages for operator actions?
[ ] Did we ensure compatibility with the previous version or cluster upgrade process?
[ ] Does the change support oldest and newest supported PS version?
[ ] Does the change support oldest and newest supported Kubernetes version?
Test name                  Status
async-ignore-annotations   passed
auto-config                passed
config                     passed
config-router              failure
demand-backup              passed
gr-bootstrap               failure
gr-demand-backup           passed
gr-ignore-annotations      passed
gr-init-deploy             passed
gr-one-pod                 passed
gr-scaling                 passed
gr-tls-cert-manager        passed
haproxy                    passed
init-deploy                passed
limits                     passed
monitoring                 passed
one-pod                    passed
scaling                    passed
semi-sync                  passed
service-per-pod            passed
sidecars                   passed
tls-cert-manager           passed
users                      passed
version-service            passed
We run 24 out of 24
commit: https://github.com/percona/percona-server-mysql-operator/pull/359/commits/6ef395e25c546e3ff92044065331570abc69e237
image: perconalab/percona-server-mysql-operator:PR-359-6ef395e
Test name                  Status
async-ignore-annotations   passed
auto-config                passed
config                     passed
config-router              passed
demand-backup              passed
gr-bootstrap               failure
gr-demand-backup           passed
gr-ignore-annotations      passed
gr-init-deploy             passed
gr-one-pod                 passed
gr-scaling                 passed
gr-tls-cert-manager        passed
haproxy                    passed
init-deploy                passed
limits                     passed
monitoring                 passed
one-pod                    passed
scaling                    passed
semi-sync                  passed
service-per-pod            passed
sidecars                   passed
tls-cert-manager           passed
users                      passed
version-service            passed
We run 24 out of 24
commit: https://github.com/percona/percona-server-mysql-operator/pull/359/commits/6ef395e25c546e3ff92044065331570abc69e237
image: perconalab/percona-server-mysql-operator:PR-359-6ef395e
|
gharchive/pull-request
| 2023-04-25T16:21:38 |
2025-04-01T06:39:59.550513
|
{
"authors": [
"JNKPercona",
"tplavcic"
],
"repo": "percona/percona-server-mysql-operator",
"url": "https://github.com/percona/percona-server-mysql-operator/pull/359",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1065350187
|
West Tower (西楼)
https://linux-learning.cn/self-talking/
"Jin Yu Huan" (《尽余欢》, Draining the Last of Joy)
How many rains in the jianghu; sudden gusts race swift into the night.
Shelved away in a high and perilous tower, not granted half a day of leisure.
Why not pawn the high tower, raise the cup, and drain the last of joy?
Three thousand days of this floating life; what is there that cannot be borne?
Guo Dalu was idly humming a little tune. The melody may have been around for a long time, but the lyrics were certainly his own invention.
No one but him could have made up lyrics like these.
"Mighty when you come, feeble when you go. Riding a carriage when you come, riding the wind when you go. Clanging loud when you come, all gone when you go. When you come..."
Yan Qi suddenly asked, "What are you singing?"
Guo Dalu said, "It's called the 'Coming and Going Song'. Coming and going, one comes, one goes; what goes doesn't come, and what comes doesn't go."
Yan Qi suddenly sang along to his tune: "What's let out doesn't pass, what passes isn't let out; letting and passing, one pass, one let."
Guo Dalu asked, "Letting out what?"
Yan Qi said, "Dog farts. This is called talking nonsense."
|
gharchive/issue
| 2021-11-28T14:45:06 |
2025-04-01T06:39:59.569486
|
{
"authors": [
"perfiffer"
],
"repo": "perfiffer/hexo_blog_comments",
"url": "https://github.com/perfiffer/hexo_blog_comments/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
283798845
|
Use of a debug environment variable.
Instead of defining a couple of methods with the same name 'Debug' to enable
or disable the debugging output, borrow the same P6DOC_DEBUG environment
variable.
Example of usage:
% export P6DOC_DEBUG=true
% perl6 --doc=HTML ~/your_pod_file.pod
Tested on:
% perl6 --version
This is Rakudo version 2017.11 built on MoarVM version 2017.11
implementing Perl 6.c.
This would precompile the debug flag at whatever value P6DOC_DEBUG had at module installation time, and it won't change later on, even if the P6DOC_DEBUG env var is changed. Change constant to my.
https://docs.perl6.org/language/traps#Constants_are_Compile_Time
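For illustration, a hedged Raku/Perl 6 sketch of the trap (the names are made up; the ? prefix just boolifies the env lookup):

constant DEBUG-AT-COMPILE = ?%*ENV<P6DOC_DEBUG>;  # frozen into the precompiled module
my $debug = ?%*ENV<P6DOC_DEBUG>;                  # read again at run time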
Sorry, I pushed a wrong commit in a rush; now applying the my change in place of constant.
I believe it is possible to cherry-pick only 3689fa3.
I can provide a new PR if that is cleaner.
👍 Thanks!
|
gharchive/pull-request
| 2017-12-21T07:48:46 |
2025-04-01T06:39:59.584483
|
{
"authors": [
"fluca1978",
"zoffixznet"
],
"repo": "perl6/Pod-To-HTML",
"url": "https://github.com/perl6/Pod-To-HTML/pull/29",
"license": "Artistic-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1161278679
|
ETCD - Filter out locks from query results
According to the etcd concurrency code itself (https://github.com/etcd-io/etcd/blob/8ac44ffa5fcccc7928876be4682c07f50b5e3b7e/client/v3/concurrency/mutex.go#L114), the value of a lock seems to be the empty string "".
Thus I would like to filter out empty values, hoping that it will prevent us from having this kind of issue in query-all:
error decoding the value associated with the key '<queried-key>/<one-real-element>/3477f3fdab5002b': unexpected end of JSON input
It is probably not a good idea to check the value to determine whether we encountered a lock or not. Even though I'm pretty sure that a lock's value is the empty string, one can simply store a real object as the empty string and want to query it.
Probably a better approach would be to use leases instead: if the object carries a lease, then it shouldn't be part of the result. But is there a case where we'd want to retrieve a real object (not a lock) that has a lease? I think not.
Another solution would be to not fail on Unmarshal issues and continue. This is actually better for me, as currently we have a critical failure. You can currently make everything fail with the command etcdctl put <query key>/0 "", because 0 appears lexically as the first result, and since the first result is unparsable, the query fails and never processes the rest.
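A hedged Go sketch combining both ideas (skip keys that carry a lease, and tolerate undecodable values instead of aborting); MyObject and the error handling are placeholders, while kv.Lease and clientv3.WithPrefix are real client APIs:

package main

import (
    "context"
    "encoding/json"

    clientv3 "go.etcd.io/etcd/client/v3"
)

type MyObject struct{} // placeholder for the real stored type

func queryAll(ctx context.Context, cli *clientv3.Client, prefix string) ([]MyObject, error) {
    resp, err := cli.Get(ctx, prefix, clientv3.WithPrefix())
    if err != nil {
        return nil, err
    }
    var results []MyObject
    for _, kv := range resp.Kvs {
        if kv.Lease != 0 {
            continue // concurrency locks hold a session lease; skip them
        }
        var obj MyObject
        if err := json.Unmarshal(kv.Value, &obj); err != nil {
            continue // don't let one bad value (e.g. "") kill the whole query
        }
        results = append(results, obj)
    }
    return results, nil
}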
|
gharchive/pull-request
| 2022-03-07T11:16:02 |
2025-04-01T06:39:59.604882
|
{
"authors": [
"celian-garcia"
],
"repo": "perses/common",
"url": "https://github.com/perses/common/pull/40",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1427287778
|
put the variable in a sticky header when you scroll down
This PR fixes #576.
I slightly modified the variable component to make it work. I hope you won't mind that it's modified like that.
Perhaps, if you don't want to have this behaviour by default, we could add a boolean to activate it.
That way, if someone wants to use this component outside of Perses, they won't be bothered by this behaviour.
https://user-images.githubusercontent.com/4548045/198618810-053ad10c-26c7-42cf-bb30-8150433a9368.mov
/cc @saminzadeh
Signed-off-by: Augustin Husson husson.augustin@gmail.com
As a side note, when you change a variable value it re-draws everything and closes every opened row. Perhaps there is something to fix around that. WDYT @saminzadeh?
I think this looks great! My only question is: how does this look when the Dashboard is embedded in another app where it's not full screen? I assume it should still work OK. We could always add a prop like sticky={true} if it's an issue.
As a side note, when you change a variable value it re-draws everything and closes every opened row. Perhaps there is something to fix around that. WDYT @saminzadeh?
Yeah this is a known issue. I think we need to write some optimizations around when to render.
I think this looks great! My only question is: how does this look when the Dashboard is embedded in another app where it's not full screen? I assume it should still work OK. We could always add a prop like sticky={true} if it's an issue.
ah that's a good question. I have no idea, but yeah, definitely adding a property like you suggest should fix the issue if there is one.
For what it's worth, I have been told there is an even simpler way to get a sticky header. We just need to use the CSS directive position: 'sticky' (based on https://www.w3schools.com/howto/howto_css_sticky_element.asp), but somehow it didn't work. I don't know why; maybe there is a conflict with React MUI somehow.
Here is what I tried:
<Box display={'flex'} justifyContent="space-between" position={'sticky'} top={0}>
<Stack direction={'row'} spacing={2}>
{showVariables &&
variableDefinitions.map((v) => (
<Box key={v.spec.name} display={v.spec.display?.hidden ? 'none' : undefined}>
<TemplateVariable key={v.spec.name} name={v.spec.name} />
</Box>
))}
</Stack>
</Box>
even if I put position={'sticky'} top={0} in the previous box, it doesn't work either.
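A hedged aside on the CSS puzzle above: position: sticky silently does nothing when any ancestor between the element and the scroll container clips overflow, and a sticky bar usually also needs a background and z-index so panels don't show through while scrolling. A minimal MUI sketch (prop values illustrative):

import * as React from 'react';
import { Box } from '@mui/material';

function StickyVariableList({ children }) {
  // sticky fails if an ancestor sets overflow: hidden/auto/scroll;
  // bgcolor and zIndex keep scrolling panels from bleeding through
  return (
    <Box position="sticky" top={0} zIndex={1} bgcolor="background.paper">
      {children}
    </Box>
  );
}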
@sjcobb @saminzadeh I added the optional prop variableIsSticky. By default, the variable list is not sticky, and it is explicitly set to true in the app.
Should be good to merge, right?
I noticed in other PRs we started to complete the changelog with an unreleased section. Should I do the same?
I added a changelog entry; we should be good to merge @sjcobb @saminzadeh
|
gharchive/pull-request
| 2022-10-28T13:44:34 |
2025-04-01T06:39:59.612289
|
{
"authors": [
"Nexucis",
"saminzadeh"
],
"repo": "perses/perses",
"url": "https://github.com/perses/perses/pull/703",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
308377461
|
text is not centered inside rings (d3 v4)
The text displayed inside the rings is not being centered properly for D3 version 4. This is working for version 3.
fixed in https://github.com/personality-insights/sunburst-chart/pull/38
|
gharchive/issue
| 2018-03-25T19:25:02 |
2025-04-01T06:39:59.614885
|
{
"authors": [
"soumak77"
],
"repo": "personality-insights/sunburst-chart",
"url": "https://github.com/personality-insights/sunburst-chart/issues/39",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
173675120
|
Spark does not start w/ only static file location set
I have just started learning about Spark with the intention of using it as a lightweight application server that mostly serves static files and occasionally uses templates for pages with data. I created a simple main method in my application class as follows:
public static void main(String[] args) {
staticFiles.location("/public");
}
When I start my application, Spark does not start and immediately exits with RC 0. However, if I add a route (e.g. using get()) or even add the route overview module, then Spark ignites and starts as expected.
I can also get the server to start by calling Spark.ignite(), but according to the code comment I am only supposed to call this method directly when WebSockets are used.
I am pretty sure that I'll add a route to my application eventually, so this is not really a problem here. But I'd still like to know 1) if this is a bug, or 2) if it is not a bug, what the correct way is to get Spark to ignite/start when serving only static files.
Thanks.
I just added a get method inside so the server is listening.
import static spark.Spark.*;

public class FileServer {
    public static void main(String[] args) {
        staticFiles.location("/public");
        get("/", (req, res) -> "hello world");
    }
}
Just spent a bit more time looking at the code, and the behavior seems to be by design. As with webSocket(), staticFileLocation() or externalStaticFileLocation() must be called before init() is run for the first time. Meanwhile init() is called by any of the routing methods via addRoute(), so Spark is ignited properly whenever a route is added. Otherwise init() must be manually called in the correct sequence.
The requirement of method call sequence has been documented for the WebSocket case in http://sparkjava.com/documentation.html#websockets, but it would be nice if the same could be mentioned in http://sparkjava.com/documentation.html#static-files. That means:
staticFileLocation() or externalStaticFileLocation() must be called before init() is called or routes are added
If using only static files and WebSockets (no routes), init() must be called manually.
Please consider this as a doc update. Thanks.
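For the doc update, a minimal example of the static-files-only case under those rules might look like this (class name arbitrary; staticFiles.location() and init() are the real Spark calls discussed above):

import static spark.Spark.*;

public class StaticOnlyServer {
    public static void main(String[] args) {
        // must be set before init() runs for the first time
        staticFiles.location("/public");
        // no routes are added, so ignite the server explicitly
        init();
    }
}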
Added to docs (but not currently live)
|
gharchive/issue
| 2016-08-28T23:50:35 |
2025-04-01T06:39:59.619841
|
{
"authors": [
"acwwat",
"s-rajaraman",
"tipsy"
],
"repo": "perwendel/spark",
"url": "https://github.com/perwendel/spark/issues/648",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
144221184
|
Trim or ignore long lines
If ack finds a match in e.g. a minified JS file, or other files with just one or a few very long lines, it will flood the output with the contents of the file and likely push all useful matches off-screen.
I would like an option to either ignore such matches altogether (maybe list the file name), or possibly to only show some number of characters worth of context around the match.
Ack should already be ignoring minified JS files. What JS files is your ack finding?
Minified JS was maybe a bad example because such files usually have a .min.js extension, but there are minified JS files that don't have the extension; the Vaadin/GWT UI toolkit produces some, I think.
Also, some tools produce minified HTML and XML files, which likewise cause problems with ack.
What happens when you grep these files?
I don't understand the question.
If I grep (or 'ack' or 'ack-grep', doesn't matter) these files and they have a match, I get several screenfuls of text with the matches highlighted somewhere in the mess.
I don't understand the question.
Not a trick question. Just wanted to know what grep did. For the most part, I try to keep ack and grep behaving the same.
Well, in that case, both grep and ack fill the terminal with useless amounts of text. Depending on where I run the command, that can mean thousands of rows of scrollback, if I'm unlucky enough to have multiple minified files that match.
One possibility could be being able to define characters that work as line breaks depending on file format. E.g. if you find a match in a .js file with long lines, treat semicolons like linefeeds (for the purposes of -A -B and -C switches)
E.g. if you find a match in a .js file with long lines, treat semicolons like linefeeds (for the purposes of -A -B and -C switches)
That's a level of source awareness that we don't want to get into.
It wouldn't have to be language-aware code; it could be an option, just like --type-set or --ignore-dir
This is probably only a problem with languages that can be minified in the first place, and those have to have some other statement separator, so it could be like --statement-separator or --record-separator or something similar.
A co-worker just IM'd me with the same complaint about *.js. Since not all minifiers follow the .min.js quasi-convention, there's much noise. grep doesn't do any DWIM magic, we do, so expectations of ack are higher.
an option to trim (long|all) lines to KWIC (keyword in context) with a specified window before/after the start of the match (default: screen width when piped).
For a proper KWIC index, the start of the match will be centered on screen even if the left context is less than half a screen
an option to ignore lines > NNN characters
Does grep not have an option to trim lines? I'm not seeing one.
Which grep? IDK. If GNU grep has one, it would be good to be compatible, but we can be better.
Adding this kind of option gets things pretty ugly what with highlighting the matches etc.
It sounds to me like the solution should probably lean more toward people excluding the files they know they want to ignore. Truncating result lines so they don't explode your screen is just saying "Let's work hard to make things more palatable that we don't even want to see anyway."
Truncating result lines so they don't explode your screen is just saying "Let's work hard to make things more palatable that we don't even want to see anyway."
I agree they should say --perl or --type=clojure if that's what they mean, which ignores all JS whether we can tell it's .min.js or not.
And if both the minified and full JS are in the tree, they should arrange .ackrc to ignore the minified directories. We detect .min.js if that is in use. Adding an ignore-.js rule to the .ackrc in a dir containing only minified files may help sometimes. If we consulted .gitignore it might help DWIM otherwise. Classifying files with average line length > 1024 as binary might help. But allowing users to say ignore lines > 1024 or 256 or whatever is good too.
If only the minified is available -- e.g. not shipped, or compiled from Clojure -- and they want to see where the JS calls the back end, maybe having specifically asked for --type=js or looking for all mentions of a domain-specific word, seeing statements instead of lines would help them with minified files.
( Maybe that's setting $/ aka $INPUT_RECORD_SEPARATOR $RS ? I don't think we support that ... nor can we ? Might require a preprocess filter co-routine to expand and give statement numbers as faux line numbers? That's less ugly and modular but still invasive. )
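For what it's worth, the $/ idea is easy to try outside of ack (a minimal standalone Perl filter; the pattern and variable names are purely illustrative):
use strict;
use warnings;

local $/ = ';';    # treat semicolons as record separators
while ( my $stmt = <STDIN> ) {
    chomp $stmt;   # strip the trailing separator
    print "$.: $stmt\n" if $stmt =~ /pattern/;   # $. doubles as a faux statement number
}
Running something like perl filter.pl < app.min.js would then print one matching statement per output line.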
In some cases it's actually desirable to see results within minified JS (assuming it's all that's available) to put back-end code into the context of a front-end call, for instance. Having the option to truncate excessively long lines at a specified limit or otherwise provide limited contextual results would be flexible and useful in a number of common use cases, rather than just excluding the files outright.
I was (am?) a fan of the old KWOC/KWIC formats. (I say 'was' because who really needs a lineprinter corpus index (concordance) in the 21st century! But a context index is still plausibly useful for text searching online.) The fact that --output lets me generate KWOC and nearly-KWIC output thrills me. I don't think we need --kwic-... options. Maybe I can write up a KWOC/KWIC idiom or wrapper for the documentation ...
What are KWOC/KWIC?
What are KWOC/KWIC?
https://en.wikipedia.org/wiki/Key_Word_in_Context
Closed and moved to wiki. https://github.com/petdance/ack2/wiki/Feature-requests
Could you reconsider this? It's been an issue in ack for 10+ years. I can't imagine anyone considers scrolling through pages of the following to be desired functionality, and it's a common occurrence in most codebases these days.
The simple way to do it is to add an .ackrc-compatible option that consists of a boolean flag and/or a max-width limit. You don't have to get fancy with the trimming: put the matches in the center of the buffer when the line exceeds the width. This gives context on both sides, and it's OK if the default buffer width results in a few lines of visual output (instead of thousands).
It's been an issue in ack for 10+ years.
It's been an issue with grep since the beginning of time.
I don't understand what you mean by "put the matches in the center of the buffer when the line exceeds the width."
line: the long matched line in a file
match: the text that is highlighted (substring of line)
buffer: the truncated line storage
The issue with truncating lines is that it isn't clear how to display a partial line as opposed to a full line. By making the buffer at least as large as the match, you can find the middle of the buffer by dividing the buffer length by two (and the middle of the match by dividing its length by two), and then you can put the middle of the match in the middle of the buffer. This gives equal context on either side of the match. This is a simple and good-enough way to do it, though it is not the only way.
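For illustration, the centering step is just a little arithmetic (a minimal Perl sketch, assuming a fixed 80-character buffer and a single match per line; none of this is ack's actual code):
use strict;
use warnings;

my $width = 80;
my $line  = ( 'x' x 500 ) . 'NEEDLE' . ( 'y' x 500 );

if ( $line =~ /NEEDLE/ ) {
    my $mid   = int( ( $-[0] + $+[0] ) / 2 );   # middle of the match
    my $start = $mid - int( $width / 2 );       # align match middle with buffer middle
    $start = 0 if $start < 0;
    print '...', substr( $line, $start, $width ), "...\n";
}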
Let's not get bogged down in the details of how it would be implemented internally, and keep it to the user interface.
It sounds like you're suggesting that in the case of overrunning --maxwidth that ack print out some portion of the line that has the match on it, right? Something like this?
47: ... stuff that is from the middle of the line **MATCHED TEXT** more but not to the end...
How do we handle multiple matches per line? What if you acked on a comma and there are 1000 matches on the line in your minified JavaScript?
How do we handle lines that are longer than --maxwidth that show up in the context lines when using -A, -B and -C?
I have ideas for output that I don't want to put out here yet, but I don't see a way to handle the two scenarios above and still display matches.
It sounds like you're suggesting that in the case of overrunning --maxwidth that ack print out some portion of the line that has the match on it, right? Something like this?
Yes
I think the way to think about an option like --maxwidth is as a UI nicety instead of as something that plays nicely with rigorous output parsing scripts. The way I (and I presume from the Google hits, a lot of other people) use ack is as a nicer grep: I want to know what files trigger, primarily, and then secondarily it's nice to see what line numbers and what context, but ultimately I'm going to open that file in my editor and jump to the matched keyword.
So for
How do we handle multiple matches per line? What if you acked on a comma and there are 1000 matches on the line in your minified JavaScript?
my answer would be: we could highlight only those matches that fit in the buffer, starting with the first match. Now the objection to this is that you drop some valid matches, but for the stated use case this is OK; it's only not OK if you're doing some kind of piped scripting or something.
One way to signal this to the user is to make the name obviously not script-friendly, e.g. --pretty-maxwidth or some such. Another way to make it play nicely with scripts is, possibly, to detect when you're outputting to a terminal and only do it then, say --terminal-output-maxwidth; this functionality is, I think, already built in for the coloring.
For -A, -B, -C I think the answer is to truncate the other lines as well, and that it would be fine to left-align and right-truncate, though I'm not as experienced with these options.
For -A, -B, -C I think the answer is to truncate the other lines as well, and that it would be fine to left-align and right-truncate, though I'm not as experienced with these options.
Realistically, the default buffer widths that people are going to use will be orders of magnitude smaller than the largest untruncated line, meaning that if you're displaying untruncated lines there, it's not like the output is going to be readable anyway.
Is it really meaningful to show the match on the line? Vs. just saying "There are 14 matches in this 47,320 character line", for example?
For me, yes. I use the context to determine whether it was a "desirable" match or not. The group of people I've known over the years that use ack is fairly large and all developers, and the typical use case is "someone mentioned such and such string, or it came up somehow in my work, and now I need to know where all instances of 'isInitialBlankNavigation' occur in the codebase."
So what does the sample output look like? How do we denote it's a partial line?
If this is our normal output:
t/illegal-regex.t
33-
34: return subtest "test_ack_with( $testcase: @args )" => sub {
35- my ( $stdout, $stderr ) = run_ack_with_stderr( @args );
maybe the partials look like
t/illegal-regex.t
33-
34* ... whatever this that other subtest this other thing that goes very ....
35- my ( $stdout, $stderr ) = run_ack_with_stderr( @args );
Kevin,
I'm planning to include in Ack3's cookbook section hints on displaying selective context, and may even get Andy to include features to do better at it too.
Workarounds for context in Ack 2 for KWIC/KWOC keyword indexes (with short input lines): ack2 --output can sorta do KWOC/KWIC with evil before/after vars:
- --output '$&^I$'"'"'^I|| $`' # *KWOC*
- --output '$`^I$&^I$'"'" # *pseudo KWIC*
They're nasty from shell since they mix quotes and dollar signs, and the tabs don't truly line up if width variation exceeds a tab width.
For your purpose, monster-line uglified JS/HTML/etc., I'll make a long-line version of ack-standalone and ack for Perl 'use' statements:
perl -pE 's/\n$//' ack-standalone | ack2 --output '$1 $2 $3' '(.{0,20})(\buse \w+(?:::\w+)*[^;]{0,40};?)(.{0,20})' | less
( Those are ^V^I tabs )
Note that it steps over 'use warnings;' when it immediately follows 'use strict', which may be OK because it finds each cluster. But you can get each one this way:
perl -pE 's/\n$//' ack-standalone | ack2 --output '$1 $2 $3' '(.{0,20})(\buse \w+(?:::\w+)*[^;]{0,40};?)((?=\buse)|.{0,20})' | less
Bill
Use three dots on any side that's elided. Potentially color the dots. (Putting these "outside" is fine, or you can put them inside and do the more complicated string math.) If you really want to get fancy, you can put the number of dropped chars in brackets outside any elided side.
More than 2 years later, what's the status on this? I'm also having the same issue.
No more work is being done on ack2, but a related request came in the other day on ack3, and I think it might be helpful here if it were implemented.
I welcome input on that ticket: https://github.com/beyondgrep/ack3/issues/234
I too would like some sort of feature to deal with suppressing or truncating matches from long lines. My workaround is to filter the output of ack using grep to remove results that are longer than 300 characters:
ack my-search-string | grep -vE '.{300,}'
but because using ack with a pipe turns off color by default, I usually turn that back on with a flag:
ack --color my-search-string | grep -vE '.{300,}'
It would be nice to be able to put something in my .ackrc to ignore or truncate long lines by default that I could then override on the command line if I needed to.
@stephenostermiller Please go comment on the current ticket at https://github.com/beyondgrep/ack3/issues/325
|
gharchive/issue
| 2016-03-29T10:18:13 |
2025-04-01T06:39:59.672735
|
{
"authors": [
"digeomel",
"jmhutch",
"kevinlawler",
"n1vux",
"petdance",
"stephenostermiller",
"wheany"
],
"repo": "petdance/ack2",
"url": "https://github.com/petdance/ack2/issues/596",
"license": "artistic-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
157046737
|
--pager="less -R" = command not found
If you add --pager="less -R" to your .ackrc file and then run it, you get something like this:
(venv) mbp0 AWS-43 $ ack TODO
sh: less -R: command not found
Hi @jeffmacdonald, thanks for the report! This is a duplicate of #447; you can solve the issue by omitting the quotes around less -R.
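For example, the line in your .ackrc would be just:
--pager=less -R
(Options in an .ackrc aren't parsed by a shell, so the quotes are taken literally and become part of the pager command name, which is why you see the "command not found" error.)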
I filed an issue #601 to document the options.
Thanks for the info!
|
gharchive/issue
| 2016-05-26T18:19:56 |
2025-04-01T06:39:59.676325
|
{
"authors": [
"hoelzro",
"jeffmacdonald"
],
"repo": "petdance/ack2",
"url": "https://github.com/petdance/ack2/issues/600",
"license": "artistic-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
222559967
|
Inline calls to print_line_if_context
In the main loop, if the line isn't a match, we do this:
else {
    chomp; # XXX Proper newline handling?
    print_line_if_context( $filename, $_, $., '-' );
}
If we're not doing context (the most common case), then the chomp is unnecessary and the print_line_if_context call is unnecessary.
Let's look at inlining that code, since we're not going to be making any more functionality changes in ack2, and keep this in mind in ack3.
In my basic testing, it sped things up about 4%.
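For reference, the guarded call site being considered would look something like this (a sketch only, not a true inline; $any_context is a hypothetical flag standing in for "any of -A/-B/-C is active"):
else {
    if ( $any_context ) {
        chomp; # XXX Proper newline handling?
        print_line_if_context( $filename, $_, $., '-' );
    }
}
Hoisting the check to the call site skips both the chomp and the sub-call overhead on every non-context line.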
Can't inline this.
|
gharchive/issue
| 2017-04-18T22:07:47 |
2025-04-01T06:39:59.678484
|
{
"authors": [
"petdance"
],
"repo": "petdance/ack2",
"url": "https://github.com/petdance/ack2/issues/638",
"license": "artistic-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2060647787
|
Support Leader Access
Could Support Leaders be allowed to use the add-on? It just displays an error message when I try to open it within Terrain. I've tried a full clear of the cache and cookies for Terrain first, to no avail.
Hmm, I am not sure how the support leaders work; I have tested with youth member accounts. I might need to do a screenshare with someone who has one of these accounts. I did not know there was even a support leader option. Do you know how you get them set up? I might be able to get one for my wife and test with that.
The Support Leader roles within Terrain are for District Leaders and above. We can request access to another formation for a set duration as either Read/Write or Write only. If we get Write access, we can do everything a Group Leader can within that Formation's Terrain.
This is different to just being added to a formation as an adult leader: when you get added that way, you can view members and modify roles and UC, but you can't change Patrols unless you're from that group; the Support Leader role lets you modify patrols.
It also has a 'Branch Life' section separate from Group Life where you can view all the groups you have access to in one place.
Happy to do a show-and-tell some time. Topo is having to make a few changes too, as we had it working, but a Terrain update over Xmas broke whatever was working.
Now I know what you are talking about; my DCL has been having the same issue with Topo.
Can you do me a favor and go to Developer Tools in Chrome and run some code to give me a list of the roles you have?
Screenshot is below; the steps are: 1: click the Chrome menu, 2: click More Tools, 3: click Developer Tools, 4: click the Console tab, 5: paste the code at the cursor (as shown in the yellow box) and press Enter. Then copy the information in the blue box and send it back to me.
Find attached listing. Just a tad long.
Thanks for that; it looks like it has to do with the new roles. Now that I can see the names of the permissions, like support-leader-group-readwrite, I should be able to add them to my code so it knows what to do. I will try to sort this out for the next release.
I am fairly sure I have solved this issue, but I can't test it as I don't have the role. The fix will be in the next release; I will keep this one open until I can verify that it is working after the release.
Also, apologies for not giving you the code in text; I just re-read this issue and the images and noticed I only gave you the code in a screenshot. Sorry!
Hey Pete, just tried it and it's all working perfectly.
Awesome, thanks for the update. Let me know if there are any other summary views or features that would be handy at the district, region and branch levels.
|
gharchive/issue
| 2023-12-30T06:06:40 |
2025-04-01T06:39:59.684164
|
{
"authors": [
"pete-mc",
"wilsteady"
],
"repo": "pete-mc/Summit",
"url": "https://github.com/pete-mc/Summit/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|