id (string, 4–10 chars) | text (string, 4–2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict) |
---|---|---|---|---|---|
1508379906
|
🛑 tanecni-divadlo.cz is down
In f3c5076, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 81a7ee1.
|
gharchive/issue
| 2022-12-22T18:36:48 |
2025-04-01T04:56:16.375622
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/5064",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1511219509
|
🛑 tanecni-divadlo.cz is down
In c651f04, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 383ad47.
|
gharchive/issue
| 2022-12-26T20:21:50 |
2025-04-01T04:56:16.378602
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/5352",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1512806408
|
🛑 tanecni-divadlo.cz is down
In 0430e11, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 9c49064.
|
gharchive/issue
| 2022-12-28T13:36:49 |
2025-04-01T04:56:16.381571
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/5484",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1513431602
|
🛑 Bali 2017 is down
In d5d65d3, Bali 2017 ($SITE_BALI) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Bali 2017 is back up in f7198d7.
|
gharchive/issue
| 2022-12-29T06:37:33 |
2025-04-01T04:56:16.383629
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/5541",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1515900863
|
🛑 tanecni-divadlo.cz is down
In 456beb9, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 6b6738d.
|
gharchive/issue
| 2023-01-01T23:46:48 |
2025-04-01T04:56:16.386960
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/5807",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1577453177
|
🛑 tanecni-divadlo.cz is down
In cad2e01, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in f882089.
|
gharchive/issue
| 2023-02-09T08:26:51 |
2025-04-01T04:56:16.389875
|
{
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/8547",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
807340467
|
Strange change of longitude values in test
In test:
tests/test_wps_subset_cru_ts.py::test_wps_subset_cru_ts_check_min_max[wet-1951-01-01-2005-12-15-area0]
...as follows...
data:
lat = -89.75, -39.75, 10.25, 60.25 ;
lon = -179.75, -129.75, -79.75, -29.75, 20.25, 70.25, 120.25, 170.25 ;
}
(roocs) [root@localhost flamingo]# fg
python -m pytest tests -v
'wet'
(Pdb) wps_ds[variable].coords
Coordinates:
* lat (lat) float32 10.25 60.25
* lon (lon) float32 1.0 51.0 101.0 151.0 201.0 251.0
* time (time) datetime64[ns] 1951-01-16 1951-02-15 ... 2005-11-16
(Pdb)
FAILED [ 87%]
Might relate to changes in roocs-utils and clisops - need to check.
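The wps_ds longitudes (1.0, 51.0, ..., 251.0) include values above 180, which looks like a 0..360 versus -180..180 longitude-convention mismatch somewhere in the subsetting chain. A minimal sketch of the usual normalization, assuming xarray; the helper name is hypothetical and not from the flamingo/roocs codebase:

```python
import xarray as xr

def wrap_lon_to_180(ds: xr.Dataset) -> xr.Dataset:
    """Map longitudes from the 0..360 convention to -180..180 and re-sort."""
    wrapped = ((ds["lon"] + 180) % 360) - 180
    return ds.assign_coords(lon=wrapped).sortby("lon")
```

If roocs-utils or clisops changed which convention they emit, a shim like this at the test boundary would at least make the comparison explicit.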
|
gharchive/issue
| 2021-02-12T15:49:04 |
2025-04-01T04:56:16.395699
|
{
"authors": [
"agstephens"
],
"repo": "cedadev/flamingo",
"url": "https://github.com/cedadev/flamingo/issues/4",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1991502488
|
Updating to breaking upstream changes
Issue #, if available:
N/A
Description of changes:
Just updating to some breaking API changes in 3.0
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
I'm pretty sure this actually reverts to the old (pre-3.0) APIs
Maybe my local env is messed up
|
gharchive/pull-request
| 2023-11-13T21:19:06 |
2025-04-01T04:56:16.397401
|
{
"authors": [
"aaronjeline"
],
"repo": "cedar-policy/cedar-spec",
"url": "https://github.com/cedar-policy/cedar-spec/pull/152",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2676694755
|
Add roundtrip targets for entity data
Fixes #472 by adding two new targets to check that we can always parse entity data after serializing to JSON. Local runs haven't found any failures.
These passed an overnight local test run, so I'm hopeful they won't turn up anything in the nightly dashboard tests.
|
gharchive/pull-request
| 2024-11-20T17:44:49 |
2025-04-01T04:56:16.398545
|
{
"authors": [
"john-h-kastner-aws"
],
"repo": "cedar-policy/cedar-spec",
"url": "https://github.com/cedar-policy/cedar-spec/pull/481",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
137880443
|
Improves the screen size normalization of the gizmo
Now the widget size doesn't depend on the width-height ratio and the canvas size.
It also stops reading the width and height attributes directly on the canvas (for performance).
I don't think anyone used the index triangle since it just couldn't work properly:
first, the kdtree re-orders the triangles and uses its own indexes https://github.com/cedricpinson/osgjs/blob/develop/sources/osg/KdTree.js#L92
second, I'm not sure how to handle multiple primitiveSets
|
gharchive/pull-request
| 2016-03-02T13:58:16 |
2025-04-01T04:56:16.419231
|
{
"authors": [
"stephomi"
],
"repo": "cedricpinson/osgjs",
"url": "https://github.com/cedricpinson/osgjs/pull/542",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1879346475
|
absorb rust-lorawan lorawan-device's trait impl locally
Typically, a trait implementation is not committed to the upstream repo. Was there a specific reason this one was?
This should probably be done on embassy-rs/lora-phy and not on this fork
Yea I didn't realize it was a fork. I'm just going to maintain this impl in the rust-lorawan crate for now.
|
gharchive/pull-request
| 2023-09-04T01:30:05 |
2025-04-01T04:56:16.420452
|
{
"authors": [
"lthiery",
"lucasgranberg"
],
"repo": "ceekdee/lora-phy",
"url": "https://github.com/ceekdee/lora-phy/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2277036996
|
fix bug of verifyMultiRowRootsToDataRootTupleRoot
The original _root parameter is not verified before being used in verifyMultiRowRootsToDataRootTupleRootProof, which allows arbitrary _rowRoots to be proved valid.
Summary by CodeRabbit
Refactor
Updated the verification process in the application to enhance data integrity checks.
Thanks for reporting this 🙏
Can you provide test cases that fail without this fix?
I don't have a test case; I found this problem during code review.
@zhiqiangxu I confirm the bug, would you be able to add the tests? Also, this fix should also be done for: verifySharesToDataRootTupleRoot and verifyRowRootToDataRootTupleRoot. Would you be down to updating this PR to include fix + tests for those too?
If not, I'll gladly pick this PR up and add the necessary changes :D
The last commit should already fix them. And the existing tests are updated, basically removing the redundant root parameter.
Feel free to add more tests:)
I fixed the mentioned functions, please take a look, and feel free to add more tests:)
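The pattern behind the fix (bind the root to something the verifier already trusts before using it to validate row roots) can be sketched abstractly. A toy binary-Merkle sketch in Python; the helpers are hypothetical, and this is not the namespaced-Merkle-tree scheme or the Solidity code Blobstream actually uses:

```python
import hashlib

def _h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_verify(root: bytes, leaf: bytes, proof: list) -> bool:
    """Check an inclusion proof; each step is (sibling, sibling_is_left)."""
    node = _h(leaf)
    for sibling, is_left in proof:
        node = _h(sibling, node) if is_left else _h(node, sibling)
    return node == root

def verify_row_roots(trusted_root, data_root, tuple_proof, row_roots, row_proofs):
    # Bind the data root to already-trusted state first; accepting a
    # caller-supplied root on faith is exactly the reported bug.
    if not merkle_verify(trusted_root, data_root, tuple_proof):
        return False
    # Only then check each row root against the now-verified data root.
    return all(merkle_verify(data_root, r, p)
               for r, p in zip(row_roots, row_proofs))
```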
|
gharchive/pull-request
| 2024-05-03T06:47:55 |
2025-04-01T04:56:16.503781
|
{
"authors": [
"rach-id",
"zhiqiangxu"
],
"repo": "celestiaorg/blobstream-contracts",
"url": "https://github.com/celestiaorg/blobstream-contracts/pull/307",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1167825836
|
Test for a specific size instead of multiple of the share size
In the current CLI integration test, we are checking that emitted events contain a multiple of the share size instead of checking for a specific size. We should update that test to check for a specific size.
Can close as this was completed: https://github.com/celestiaorg/celestia-app/blob/6b86e91eea1063c27c1ae461eddbe505f778df08/x/payment/client/testutil/integration_test.go#L118
|
gharchive/issue
| 2022-03-14T03:04:01 |
2025-04-01T04:56:16.505580
|
{
"authors": [
"evan-forbes"
],
"repo": "celestiaorg/celestia-app",
"url": "https://github.com/celestiaorg/celestia-app/issues/245",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2195959536
|
test: testground sanity check with knuu
Replaces https://github.com/celestiaorg/celestia-app/pull/3184
Closes #3189
Introduces the capability to customize the resource allocation for each Node and TxSimNode individually, allowing for a diverse range of resource specifications rather than applying a uniform resource allocation across all instances.
Enables running txsim as a knuu instance.
Have you had any preliminary results yet as to whether having it as a separate instance enables greater load on the network?
Not yet, was awaiting reviews on the PR before moving forward with the test. I will conduct it and share the results soon.
Migrating this PR to this one
|
gharchive/pull-request
| 2024-03-19T20:37:36 |
2025-04-01T04:56:16.508105
|
{
"authors": [
"staheri14"
],
"repo": "celestiaorg/celestia-app",
"url": "https://github.com/celestiaorg/celestia-app/pull/3194",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2042519011
|
Fix typos
Hello
I fixed several typos
Hope it helps.
We'll fix these alongside further documentation updates; we're not inclined to merge automated PRs. Thanks.
|
gharchive/pull-request
| 2023-12-14T21:18:59 |
2025-04-01T04:56:16.509410
|
{
"authors": [
"nnsW3",
"ramin"
],
"repo": "celestiaorg/celestia-node",
"url": "https://github.com/celestiaorg/celestia-node/pull/3010",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
129408353
|
[DNSResolver] Raise SocketError when there’s no connection.
In short, what I tried to fix is the following behaviour when trying to resolve a hostname when offline:
irb(main):001:0> require 'celluloid/current'
=> true
irb(main):002:0> require 'celluloid/io'
=> true
irb(main):003:0> Celluloid::IO::TCPSocket.new("www.google.com", 80)
NoMethodError: undefined method `"\u0000\u0002\u0001\u0000\u0000\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u0003www\u0006google\u0003com\u0000\u0000\u0001\u0000\u0001"' for nil:NilClass
from /Users/eloy/.gem/ruby/2.3.0/gems/celluloid-io-0.17.3/lib/celluloid/io/dns_resolver.rb:44:in `resolve'
from /Users/eloy/.gem/ruby/2.3.0/gems/celluloid-io-0.17.3/lib/celluloid/io/tcp_socket.rb:101:in `create_socket'
from /Users/eloy/.gem/ruby/2.3.0/gems/celluloid-io-0.17.3/lib/celluloid/io/tcp_socket.rb:53:in `initialize'
from /Users/eloy/.gem/ruby/2.3.0/gems/celluloid-io-0.17.3/lib/celluloid/io/socket.rb:39:in `new'
from /Users/eloy/.gem/ruby/2.3.0/gems/celluloid-io-0.17.3/lib/celluloid/io/socket.rb:39:in `new'
from (irb):3
from /Users/eloy/.rubies/ruby-2.3.0/bin/irb:11:in `<main>'
What happens is that @socket remains nil here which means that you end up sending the hostname as a NilClass#send message here.
With stdlib it raises the following exception, although that one is actually not raised by the name resolver, but by Socket:
irb(main):001:0> require 'socket'
=> false
irb(main):002:0> TCPSocket.new("www.google.com", 80)
SocketError: getaddrinfo: nodename nor servname provided, or not known
from (irb):2:in `initialize'
from (irb):2:in `new'
from (irb):2
from /Users/eloy/.rubies/ruby-2.3.0/bin/irb:11:in `<main>'
The one raised by Resolv is:
irb(main):001:0> Resolv.getaddress("www.google.com")
Resolv::ResolvError: no address for www.google.com
from /Users/eloy/.rubies/ruby-2.3.0/lib/ruby/2.3.0/resolv.rb:95:in `getaddress'
from /Users/eloy/.rubies/ruby-2.3.0/lib/ruby/2.3.0/resolv.rb:45:in `getaddress'
from (irb):1
from /Users/eloy/.rubies/ruby-2.3.0/bin/irb:11:in `<main>'
I’m not sure which one I should use, any thoughts?
probably the one from socket?
perhaps a new exception would make sense... SocketNameResolutionError < SocketError?
Sure, I can do that, and keep the same error message, yeah?
@ioquatix Like so?
Yeah that seems fine.
@tarcieri still waiting for travis issues to be fixed, boop :)
@ioquatix that was on nio4r. The test failures look like the same on master (we never got celluloid-io back to green)
ah okay I guess I'll take a look at the issues here.
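For what it's worth, the shape of the proposed fix (raise a domain-specific error instead of letting a nil flow onward and blow up later) can be sketched in Python as an illustrative analogy; the real change lives in celluloid-io's Ruby resolver:

```python
class SocketNameResolutionError(OSError):
    """Name resolution failed (offline, bad hostname, no nameservers)."""

def resolve(hostname: str, nameservers: list) -> str:
    # Fail loudly here rather than returning None and letting the caller
    # hit the Python equivalent of Ruby's NoMethodError on NilClass.
    if not nameservers:
        raise SocketNameResolutionError(
            f"cannot resolve {hostname!r}: no nameservers available")
    return nameservers[0]  # placeholder lookup for the sketch
```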
|
gharchive/pull-request
| 2016-01-28T10:21:27 |
2025-04-01T04:56:16.516881
|
{
"authors": [
"alloy",
"ioquatix",
"tarcieri"
],
"repo": "celluloid/celluloid-io",
"url": "https://github.com/celluloid/celluloid-io/pull/171",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
598273989
|
Cofactor Trait
We should have a trait for scaling by cofactor as well. We use more efficient methods for G2, and will possibly use a more efficient method for G1 on BLS12 curves as well.
Originally posted by @kobigurk in https://github.com/celo-org/bls-zexe/pull/140
We decided to focus on Celo's use case for the time being.
|
gharchive/issue
| 2020-04-11T14:12:09 |
2025-04-01T04:56:16.520854
|
{
"authors": [
"gakonst",
"kobigurk"
],
"repo": "celo-org/celo-bls-snark-rs",
"url": "https://github.com/celo-org/celo-bls-snark-rs/issues/143",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1620888097
|
Clean the README
Clean the Readme.md file
Thanks @Ankush263. This is fixed in PR #182
|
gharchive/pull-request
| 2023-03-13T07:14:01 |
2025-04-01T04:56:16.522225
|
{
"authors": [
"Ankush263",
"nestorbonilla"
],
"repo": "celo-org/celo-composer",
"url": "https://github.com/celo-org/celo-composer/pull/183",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
521557468
|
[Suggestion] A reset security PIN option should be shown; if users forgot the created security PIN, they can't change it in the app.
Frequency: 100%
App version: Pilot build v1.5.1
Repro on: Google Pixel XL (8.1), Redmi Note 7 Pro (9.0)
Repro Steps:
Launch the app.
Proceed for payment.
The user forgets the password and enters a wrong PIN; an “Incorrect PIN” error is shown.
The user is not able to change the created PIN.
Impact: The user is blocked from proceeding with the transaction.
Investigation: The user is able to change the device PIN if they forgot it.
Current Behavior: A reset security PIN option is not shown.
Expected Behavior: A reset security PIN option should be shown.
Hi @nityas, can you please confirm whether this is won't-fix for now?
|
gharchive/issue
| 2019-11-12T13:40:22 |
2025-04-01T04:56:16.525888
|
{
"authors": [
"Lss-Ankit"
],
"repo": "celo-org/celo-monorepo",
"url": "https://github.com/celo-org/celo-monorepo/issues/1673",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
422446564
|
Exemplar: Use generic interface for attachment values.
Updates #1058.
Similar to Java PR https://github.com/census-instrumentation/opencensus-java/pull/1779. Though in Go I think we can take advantage of the flexibility of interfaces, so I just used interface{} for generic attachment values. WDYT?
FYI this PR doesn't contain user-visible changes. Will merge by EOD today if there are no other comments/concerns.
|
gharchive/pull-request
| 2019-03-18T21:37:04 |
2025-04-01T04:56:16.533190
|
{
"authors": [
"songy23"
],
"repo": "census-instrumentation/opencensus-go",
"url": "https://github.com/census-instrumentation/opencensus-go/pull/1070",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1440292523
|
Fix: vulnerabilities in resolveHostName.php
Description
Fixed vulnerabilities in host form
Fixes # MON-15395
Type of change
[x] Patch fixing an issue (non-breaking change)
[ ] New functionality (non-breaking change)
[ ] Breaking change (patch or feature) that might cause side effects breaking part of the Software
Target serie
[ ] 21.04.x
[x] 21.10.x
[x] 22.04.x
[x] 22.10.x (master)
How can this pull request be tested?
Go to “Configuration > Hosts > Hosts”
Edit/Create a host
Define an “IP Address / DNS” like www.google.com
Check that the DNS name is replaced by the IP address
Checklist
Community contributors & Centreon team
[x] I have followed the coding style guidelines provided by Centreon
[ ] I have commented my code, especially new classes, functions or any legacy code modified. (docblock)
[ ] I have commented my code, especially hard-to-understand areas of the PR.
[x] I have rebased my development branch on the base branch (master, maintenance).
migrated from https://github.com/centreon/centreon/pull/12036
|
gharchive/pull-request
| 2022-11-08T14:17:41 |
2025-04-01T04:56:16.548998
|
{
"authors": [
"kduret"
],
"repo": "centreon/centreon-gha",
"url": "https://github.com/centreon/centreon-gha/pull/148",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
197814925
|
fix check nmapexist on debian
command is a shell builtin, so command -v nmap must use the shell module
Signed-off-by: Shengjing Zhu zsj950618@gmail.com
Why? Can we see the error? command should work fine since there are no pipes and/or special characters.
@leseb unlike centos, there is no /usr/bin/command in debian. command is a shell builtin.
zsj@debian ~ $ which command
command: shell built-in command
|
gharchive/pull-request
| 2016-12-28T08:24:37 |
2025-04-01T04:56:16.748317
|
{
"authors": [
"leseb",
"zhsj"
],
"repo": "ceph/ceph-ansible",
"url": "https://github.com/ceph/ceph-ansible/pull/1209",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
559621396
|
Fix rolling update playbook
Running ansible-playbook rolling_update.yml to upgrade a Ceph cluster from 14.2.6 to 14.2.7 failed due to the three issues addressed by the three commits in this PR:
Wrong dashboard.yml playbook import path;
norebalance flag remains set after upgrade;
ceph.conf.j2 template fails to render if mon_host_v1 is disabled.
Hm, the playbook syntax check is unhappy with the new import path, it cannot find the dashboard.yml playbook anymore.
It's a bit ugly, but I could create a symlink in infrastructure-playbooks pointing to ../dashboard.yml; that would make the CI happy, and would still allow rolling_update.yml to be run from the root of the repository without issues.
Thanks for the review! I've replied to your comments inline below.
Two of the three issues mentioned are wrong
Wrong dashboard.yml playbook import path;
that's because you're running the rolling_update.yml playbook from the infrastructure-playbook directory which is wrong.
You should use either the full or relative path to the playbook
$ ansible-playbook -vv -i hosts infrastructure-playbook/rolling_update.yml
or
$ ansible-playbook -vv -i /path/hosts /path/ceph-ansible/infrastructure-playbook/rolling_update.yml
Actually, I'm running it as described in infrastructure-playbooks/README.md (i.e. I copied rolling_update.yml to ceph-ansible's root directory, and ran ansible-playbook rolling_update.yml), which doesn't work. I had also tried ansible-playbook infrastructure-playbook/rolling_update.yml, but it failed too (for a different reason, I think because it didn't find my group_vars and host_vars directories in that case, but I'm not sure).
So either it's a documentation issue and the README needs to be updated, or the playbook import path does need to be updated.
norebalance flag remains set after upgrade;
This flag isn't present on master so I assume you're talking about the stable-4.0 branch (due to nautilus).
So this was already fixed by [1]
You're absolutely right, I was using the stable-4.0 branch and didn't realize that the issue didn't apply to master.
ceph.conf.j2 template fails to render if mon_host_v1 is disabled.
This looks like a valid issue
[1] 675b678
It's a documentation issue as I mentioned in https://github.com/ceph/ceph-ansible/pull/5001#issuecomment-579573413
@dsavineau I tried again running ansible-playbook infrastructure-playbook/rolling_update.yml, and I can confirm that it doesn't work. It fails with
TASK [ceph-facts : set_fact _current_monitor_address]
*******************************************************************************
Thursday 06 February 2020 07:19:56 +0000 (0:00:01.105) 0:02:20.912 *****
fatal: [mon-01]: FAILED! =>
msg: '''_monitor_addresses'' is undefined'
fatal: [mon-02]: FAILED! =>
msg: '''_monitor_addresses'' is undefined'
fatal: [mon-03]: FAILED! =>
msg: '''_monitor_addresses'' is undefined'
fatal: [osd-01]: FAILED! =>
msg: '''_monitor_addresses'' is undefined'
fatal: [osd-02]: FAILED! =>
msg: '''_monitor_addresses'' is undefined'
fatal: [osd-03]: FAILED! =>
msg: '''_monitor_addresses'' is undefined'
(I have monitor_interface defined in group_vars/all.yml.)
If instead I copy the playbook to the root of the repository and run ansible-playbook rolling_update.yml, it works fine; so I think the README is correct, it is still required to copy those playbooks, otherwise Ansible looks for group_vars and host_vars in the wrong place.
Are you seeing a different behavior?
I don't see this behavior on my side because the {group,host}_vars directories and the inventory are correctly configured.
Your {group,host}_vars directories aren't loaded because those directories should be either:
in the playbook directory
in the inventory directory
See [1] for more information
So I suppose you have your group_vars directory in the main ceph-ansible directory and the inventory file somewhere in /etc/ansible/hosts (as you're not using -i).
This is working on initial deployment because the main playbooks (site.yml and site-container.yml) are already in the root ceph-ansible directory.
See also https://github.com/ceph/ceph-ansible/issues/4968 for a similar issue
[1] https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#organizing-host-and-group-variables
|
gharchive/pull-request
| 2020-02-04T10:22:52 |
2025-04-01T04:56:16.762656
|
{
"authors": [
"BenoitKnecht",
"dsavineau"
],
"repo": "ceph/ceph-ansible",
"url": "https://github.com/ceph/ceph-ansible/pull/5035",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1279551589
|
nfs: make use of latest sidecars in the deployment
The sidecars in the NFS deployment have newer versions available, which have already been updated for the RBD and CephFS drivers. This commit updates the versions in the NFS deployment too.
Additional note for the reviewer:
the provisioner update has been taken care of as part of #3186
Signed-off-by: Humble Chirammal hchiramm@redhat.com
/retest ci/centos/k8s-e2e-external-storage/1.21
/retest ci/centos/mini-e2e-helm/k8s-1.21
/retest ci/centos/mini-e2e-helm/k8s-1.21
@Mergifyio refresh
/retest ci/centos/mini-e2e-helm/k8s-1.21
/retest ci/centos/mini-e2e-helm/k8s-1.21
@Mergifyio rebase
/retest ci/centos/mini-e2e/k8s-1.23
@Mergifyio rebase
/retest ci/centos/mini-e2e-helm/k8s-1.21
/retest ci/centos/mini-e2e-helm/k8s-1.21
@Mergifyio refresh
@Mergifyio rebase
ci/centos/mini-e2e-helm/k8s-1.23
/retest ci/centos/mini-e2e/k8s-1.21
/retest ci/centos/mini-e2e-helm/k8s-1.23
@Mergifyio refresh
ci/centos/mini-e2e-helm/k8s-1.22
ci/centos/mini-e2e-helm/k8s-1.23
/retest ci/centos/mini-e2e-helm/k8s-1.23
/retest ci/centos/mini-e2e-helm/k8s-1.23
@mergifyio rebase
/retest ci/centos/mini-e2e/k8s-1.23
/retest ci/centos/mini-e2e/k8s-1.23
|
gharchive/pull-request
| 2022-06-22T05:22:44 |
2025-04-01T04:56:16.769899
|
{
"authors": [
"Rakshith-R",
"humblec"
],
"repo": "ceph/ceph-csi",
"url": "https://github.com/ceph/ceph-csi/pull/3202",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
78895497
|
ceph_deploy/hosts/suse/install.py: SUSE zypper ref
zypper may not pull in latest packages if the repository cache is not updated
so we should do a 'zypper ref' before installing packages.
Signed-off-by: Owen Synge osynge@suse.com
This patch has caused ceph testing to break; I don't know why though.
|
gharchive/pull-request
| 2015-05-21T08:41:46 |
2025-04-01T04:56:16.771425
|
{
"authors": [
"osynge"
],
"repo": "ceph/ceph-deploy",
"url": "https://github.com/ceph/ceph-deploy/pull/294",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
570007281
|
Provide a mechanism for setting "ceph.conf" configuration values so they will be in effect during "cephadm bootstrap" phase
Nowadays, Ceph has a MON store for cluster configuration, but certain options still have to be set in ceph.conf before the cluster is bootstrapped.
One example is osd crush chooseleaf type = 0. If this is not provided on the cephadm bootstrap command line via the -c option, the initial CRUSH map created by cephadm bootstrap will have the failure domain set to "host" and there is no easy way to change that.
It's possible that there are other options like this one, which must be set via cephadm bootstrap -c in order to properly take effect.
Therefore, I am proposing that ceph-salt provide a mechanism for setting these options.
UPDATE: I found another situation where this is needed (and I think it's quite likely that there are more):
If someone needs to run cephadm bootstrap with MGR debugging turned up, the only way to do this is via cephadm bootstrap -c.
Hm, that should not end up in the ceph.conf. Instead, I'd put this into the mon store. In general there is only a very limited amount of config that should live within the ceph.conf.
@sebastian-philipp @trociny Allow me to elaborate a little.
On a single-node cluster, we typically want the failure domain to be "osd", not "host". I can achieve this by doing:
echo -en "[global]\n osd crush chooseleaf type = 0\n" > /root/ceph.conf
cephadm bootstrap -c /root/ceph.conf
After this command runs, there is a MON, a MGR, a pool, and a CRUSH map (with failure domain "host"), although there are still no OSDs at this point.
Once the CRUSH map is created, there is no easy way to modify it. Hence this issue.
Therefore, I do not understand what you mean by "put this into the mon store".
I will edit the issue description to emphasize that this mechanism would/should only be used for ceph.conf settings that must be made prior to bootstrapping, like osd crush chooseleaf type = 0
https://tracker.ceph.com/issues/44284
you're right. the workaround looks ok to me.
If someone needs to run cephadm bootstrap with MGR debugging turned up, the only way to do this is via cephadm bootstrap -c.
@smithfarm Can you please share what exactly has to be added to the ceph.conf?
If someone needs to run cephadm bootstrap with MGR debugging turned up, the only way to do this is via cephadm bootstrap -c.
@smithfarm Can you please share what exactly has to be added to the ceph.conf?
Nevermind, ceph-salt should be flexible enough to allow the user to set any option.
|
gharchive/issue
| 2020-02-24T17:29:40 |
2025-04-01T04:56:16.777804
|
{
"authors": [
"ricardoasmarques",
"sebastian-philipp",
"smithfarm"
],
"repo": "ceph/ceph-salt",
"url": "https://github.com/ceph/ceph-salt/issues/100",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1526296331
|
github: update PR template with new checklist item and helpful links
Add a new checklist item reminding contributors to run make api-update to track new APIs. Add two short paragraphs that cover some of the common things I copy and paste or need to look up when working with contributors' PRs.
@Mergifyio rebase
|
gharchive/pull-request
| 2023-01-09T20:43:50 |
2025-04-01T04:56:16.914552
|
{
"authors": [
"anoopcs9",
"phlogistonjohn"
],
"repo": "ceph/go-ceph",
"url": "https://github.com/ceph/go-ceph/pull/809",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
153824898
|
Mock zfs Provider
Issue Description
In order to complete adequate testing of #159 Status Pusher, several mock providers must be made. The scope of this card is to create a mock zfs provider. The real provider is being handled in #168.
Acceptance Criteria (How do we know when this is done?)
Functioning well enough to complete testing on #159.
Related Tickets
#159
duplicate of #168
|
gharchive/issue
| 2016-05-09T17:13:43 |
2025-04-01T04:56:16.916415
|
{
"authors": [
"pjnorton"
],
"repo": "cerana/cerana",
"url": "https://github.com/cerana/cerana/issues/169",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
217809283
|
Error with default sorting on relationship field
Hello!
I'm using version 0.8.1 (also Rails 4.2.5.1 and Ruby 2.1.10) and the following code
class A < Resource
attributes :name
has_one :b
def self.default_sort
[{field: 'b.name', direction: :asc}, {field: 'name', direction: :asc}]
end
end
class B < Resource
attributes :name
end
results in the following error when I try to make any request to A's resource.
ERROR -- : Internal Server Error: undefined method `model' for #<Class:0x000000042533a8> /vendor/bundle/ruby/2.1.0/gems/activerecord-4.2.5.1/lib/active_record/dynamic_matchers.rb:26
It happens in the apply_sort method, when it tries to get records.model.to_s:
associations = _lookup_association_chain([records.model.to_s, *model_names])
Is it a bug, or is something wrong with my code?
Also happens in Rails 5.1.1 w/ Ruby 2.3.3 with a has_one when relationship fields are added to sorts like this:
has_one :account ... def self.sortable_fields(context) super + [:"account.name"] end
|
gharchive/issue
| 2017-03-29T09:00:56 |
2025-04-01T04:56:16.934832
|
{
"authors": [
"begor",
"nbarthel"
],
"repo": "cerebris/jsonapi-resources",
"url": "https://github.com/cerebris/jsonapi-resources/issues/1015",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
103352531
|
update release command to be correct.
I had the command in the docs wrong. Had to run a different command to update the package.
:+1:
|
gharchive/pull-request
| 2015-08-26T19:59:44 |
2025-04-01T04:56:16.945278
|
{
"authors": [
"garrypolley",
"sambao21"
],
"repo": "cerner/canadarm",
"url": "https://github.com/cerner/canadarm/pull/15",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
301453812
|
[time-input] Remove Unsupported Translation Files
Issue Description
At this point we only support these i18n languages: ['ar', 'en', 'en-US', 'en-GB', 'es', 'es-US', 'es-ES', 'de', 'fi-FI', 'fr', 'fr-FR', 'pt', 'pt-BR']
In the time-input component, it seems a couple unsupported translation files were added. These should be removed until these locales are supported:
nl
sv
Issue Type
[ ] New Feature
[ ] Enhancement
[ ] Bug
[x] Other
We've recently added nl and nl-BE locales and will be adding sv in the near future.
This is a non-issue, then?
|
gharchive/issue
| 2018-03-01T15:45:58 |
2025-04-01T04:56:16.949615
|
{
"authors": [
"bjankord",
"emilyrohrbough",
"yuderekyu"
],
"repo": "cerner/terra-core",
"url": "https://github.com/cerner/terra-core/issues/1311",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
319334382
|
Add wdio tests for dynamic grid
Summary
Replaced nightwatch tests with wdio tests
Addresses https://github.com/cerner/terra-core/issues/1302
Let's move all these test examples to the new dir structure to drop the reference to nightwatch altogether
/component
/examples
/test-examples
...examples
@emilyrohrbough Updated: https://github.com/cerner/terra-core/pull/1471/commits/604ef0032d7de375d395254be59a0d8d12bf8c14
I think the generate-config script needs to be run as well
|
gharchive/pull-request
| 2018-05-01T21:27:53 |
2025-04-01T04:56:16.951917
|
{
"authors": [
"bjankord",
"emilyrohrbough"
],
"repo": "cerner/terra-core",
"url": "https://github.com/cerner/terra-core/pull/1471",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1193954509
|
cmctl as downloaded for 1.8.0 shows canary as its version
Describe the bug:
I just downloaded cmctl from this repository, and running cmctl version doesn't show the command's actual version (1.8.0). I was expecting 1.8.0 but instead it shows canary.
This is the current output
Client Version: util.Version{GitVersion:"canary", GitCommit:"", GitTreeState:"", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: &versionchecker.Version{Detected:"v1.7.1", Sources:map[string]string{"crdLabelVersion":"v1.7.1"}}
If I run the same using version 1.7.2, then this is fine and the correct version is displayed
Client Version: util.Version{GitVersion:"v1.7.2", GitCommit:"2e0bfc87d0c63c473c31a17f4c8c65e89806dc16", GitTreeState:"clean", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: &versionchecker.Version{Detected:"v1.7.1", Sources:map[string]string{"crdLabelVersion":"v1.7.1"}}
Expected behaviour:
cmctl version should show version 1.8.0.
Steps to reproduce the bug:
download cmctl from https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cmctl-linux-amd64.tar.gz
unpack the archive via tar -zxf cmctl-linux-amd64.tar.gz
run ./cmctl version
Anything else we need to know?:
Environment details::
Kubernetes version: 1.22
Cloud-provider/provisioner: bare metal
cert-manager version: 1.8.0 (client), 1.7.1 (deployed)
Install method: e.g. helm
/kind bug
Hi! That is strange, I thought I had fixed this in https://github.com/cert-manager/cert-manager/pull/4968. 😞
I will investigate now.
It seems like we can't have multiple -ldflags in a go build command (source).
The make command
make bin/cmctl/cmctl-linux-amd64
runs go build with two consecutive -ldflags. The second one "cancels" the first:
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 GOMAXPROCS= \
go build -o bin/cmctl/cmctl-linux-amd64 -trimpath \
-ldflags '-w -s -X github.com/cert-manager/cert-manager/pkg/util.AppVersion=v1.8.0-6-gd212165c8da228 -X github.com/cert-manager/cert-manager/pkg/util.AppGitCommit=d212165c8da228437e71a74c2e117ae9d62c7f24' \
-ldflags '-X "github.com/cert-manager/cert-manager/cmd/ctl/pkg/build.name=cmctl" -X "github.com/cert-manager/cert-manager/cmd/ctl/pkg/build/commands.registerCompletion=true"' cmd/ctl/main.go
The right command would look like this:
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 GOMAXPROCS= \
go build -o bin/cmctl/cmctl-linux-amd64 -trimpath \
-ldflags '-w -s -X github.com/cert-manager/cert-manager/pkg/util.AppVersion=v1.8.0-beta.0-4-g312baa4d7e0a25 -X github.com/cert-manager/cert-manager/pkg/util.AppGitCommit=312baa4d7e0a25222571f8cdb1c95356a28a70d3 -X "github.com/cert-manager/cert-manager/cmd/ctl/pkg/build.name=cmctl" -X "github.com/cert-manager/cert-manager/cmd/ctl/pkg/build/commands.registerCompletion=true"' cmd/ctl/main.go
I'll open a PR to fix this. (cc @SgtCoDFish)
bumping this issue so it doesn't auto-close.
bumping this issue so it doesn't auto-close.
@aureq Thank you for bumping - this issue totally flew under the radar and your bump helped me realise that! Thank you for getting involved :raised_hands: :grin:
|
gharchive/issue
| 2022-04-06T03:18:30 |
2025-04-01T04:56:16.964187
|
{
"authors": [
"SgtCoDFish",
"aureq",
"maelvls"
],
"repo": "cert-manager/cert-manager",
"url": "https://github.com/cert-manager/cert-manager/issues/5020",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2353477632
|
Add global image repository value to helm values to facilitate private repos and eliminate the need to maintain individual repo image paths
Is your feature request related to a problem? Please describe.
Because we use a private Artifactory repo, we are forced to change the FQDN portion of XXX.image.repository for each container image in our Helm values. The problem presents itself when there is an image name change, like the apistartup image in the current v1.15.0 release, causing manual upkeep of those image Helm values.
Describe the solution you'd like
Add to the Helm values a global image repository FQDN value that overrides all the image FQDNs.
Describe alternatives you've considered
Add an FQDN value to each XXX.image block
Environment details (remove if not applicable):
Kubernetes version: 1.29
Cloud-provider/provisioner: AWS EKS
cert-manager version: v1.15.0
Install method: helm via argocd
/kind feature
We've had a short discussion on that topic during this morning's open standup. Erik said "I find this type of top-level value useful as a user, but Go templates makes it painful to maintain".
It seems like a reasonable request.
/good-first-issue
Do we need some kind of helper chart to reduce code duplication across all charts (cert-manager, trust-manager, approver-policy, etc.)?
/remove-lifecycle stale
|
gharchive/issue
| 2024-06-14T14:14:18 |
2025-04-01T04:56:16.969825
|
{
"authors": [
"amir3ash",
"dhorner71",
"maelvls"
],
"repo": "cert-manager/cert-manager",
"url": "https://github.com/cert-manager/cert-manager/issues/7101",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
476859056
|
Reason for spawning a new process in authentication and authorization extensions?
I have been using the docker_auth authorization and authentication extensions for a project. I have observed that for each and every authentication request, a new process is spawned. This results in higher resource consumption. Is there a particular reason for implementing it like this, without letting the developer add a plugin (or at least communicate through a UNIX socket)? Introducing a plugin would avoid spawning a new process.
Thank you ..!
ext_authn/autz is meant for quick hacks / low volume usage cases.
next step currently is implementing the Authenticator or Authorizer interface in the source.
i agree that there is a place for an intermediate option for a shared library or socket interface. the reason it doesn't exist is that no one had the need or time to implement it yet. you are welcome to do so.
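The cost difference being discussed can be made concrete: a fork/exec per request versus a long-lived in-process plugin. An illustrative Python sketch with a hypothetical helper script; docker_auth itself is Go, and its Authenticator interface is what the in-process variant stands in for:

```python
import subprocess

# ext_authn style: a new process is spawned for every request.
def authenticate_subprocess(user: str, password: str) -> bool:
    result = subprocess.run(
        ["./ext_auth.sh", user],      # hypothetical external helper
        input=password.encode(),
        capture_output=True,
    )
    return result.returncode == 0

# Plugin style: the authenticator is constructed once and reused.
class StaticAuthenticator:
    def __init__(self, users: dict):
        self._users = users           # loaded once at startup

    def authenticate(self, user: str, password: str) -> bool:
        return self._users.get(user) == password  # no fork/exec per call
```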
I have implemented a new custom authentication and authorization mechanism where developers can add their own plugins to the existing docker_auth implementation. As you mentioned in the comment, I implemented the authenticator and authorizer interfaces in doing so. I did it like that to keep this backward compatible for existing users. I have tested these new authentication and authorization implementations with my authentication and authorization plugins.
Please review this PR and merge
It would be highly appreciated if you could review this quickly and merge, since I am going to use the docker_auth implementation for a production project at WSO2.
If we can get this merged, I will write the necessary documentation and examples for using the new implementation, like the docs already in your repo.
Thank you ..!
I have addressed the review comments on the PR. I had a question there. Please review those changes and let me know of any further changes.
Thank you ..!
Hi @rojer
It would be great if you can help us to get this merged and released since one of our solutions needs this improvement.
@hasinthaindrajee i assume you were talking about https://github.com/cesanta/docker_auth/pull/254
reviewed, still some work necessary to get it in.
Hi @rojer
I have done the remaining necessary fixes. Can you please review it and merge?
Hi @rojer , Thanks for helping out to get the PR merged. Any idea when we can get the repo released with this change ?
let me take a look at some other PRs and i'll push latest
built and pushed cesanta/docker_auth:latest
|
gharchive/issue
| 2019-08-05T13:26:55 |
2025-04-01T04:56:16.982345
|
{
"authors": [
"hasinthaindrajee",
"rojer",
"tharindulak"
],
"repo": "cesanta/docker_auth",
"url": "https://github.com/cesanta/docker_auth/issues/253",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2280071372
|
🛑 echo-server.madacluster.tech is down
In a1af8c7, echo-server.madacluster.tech (https://echo-server.madacluster.tech) was down:
HTTP code: 0
Response time: 0 ms
Resolved: echo-server.madacluster.tech is back up in 04ddbce after 14 minutes.
|
gharchive/issue
| 2024-05-06T05:37:24 |
2025-04-01T04:56:16.984961
|
{
"authors": [
"cesarempathy"
],
"repo": "cesarempathy/upptime",
"url": "https://github.com/cesarempathy/upptime/issues/401",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
711946857
|
Update environment variable
As per the official Android documentation, the $ANDROID_HOME environment variable is now deprecated -> ANDROID_HOME, which also points to the SDK installation directory, is deprecated.
This PR simply replaces every occurrence of $ANDROID_HOME with $ANDROID_SDK_ROOT. It also modifies the Notes section a bit to link directly to the official documentation.
Let me know
Hi @cesarferreira any news on this?
Thank you for your contribution
My pleasure. It's just a small thing 😊
|
gharchive/pull-request
| 2020-09-30T13:27:58 |
2025-04-01T04:56:16.987497
|
{
"authors": [
"cesarferreira",
"r4dixx"
],
"repo": "cesarferreira/dryrun",
"url": "https://github.com/cesarferreira/dryrun/pull/138",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
711009854
|
Redirect data.cesko.digital to GitHub
If anyone goes digging into it, it should redirect them to the GitHub repo with the documentation.
It's all a bit of a hack (serving legitimate content as an error page), but it makes sense to me, I'll add it, thanks!
@zoul, so a redirect status code apparently isn't possible. Keep 403 then, or change it to 200?
By the way, these follow-up fixes that can't be caught in advance are really fun. 😄 😕
Oh my 🤦😁 If 200 is possible, I'd go with 200, it fits better semantically. If not, let's return 403 and forget about it 😄
|
gharchive/pull-request
| 2020-09-29T11:12:45 |
2025-04-01T04:56:16.998701
|
{
"authors": [
"HormCodes",
"zoul"
],
"repo": "cesko-digital/assets",
"url": "https://github.com/cesko-digital/assets/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1046593661
|
feat: navigator adjustments
https://cesko-digital.atlassian.net/browse/LOON-273 + changes from the comments on https://github.com/cesko-digital/loono/pull/43
status: WIP (not ready for review)
@mzdm I don't want to push at all, but what's the status of this PR? There are quite a lot of changes in it, so it would be good to get it resolved as quickly as possible
Yep, by the end of the week hopefully (some necessary clean-ups are still left)
@mzdm sorry to keep nagging about this, but how do you see it? Alternatively we can just close it and say it will be done some other time; my point is just that something this odd shouldn't be left hanging here
@killalad yep, I'm closing this, I can't manage it right now
|
gharchive/pull-request
| 2021-11-06T20:46:29 |
2025-04-01T04:56:17.001134
|
{
"authors": [
"killalad",
"mzdm"
],
"repo": "cesko-digital/loono",
"url": "https://github.com/cesko-digital/loono/pull/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
126725146
|
1.6beta2 on OSX
I can't seem to get reflex to work... perhaps I'm using it incorrectly. I'm doing reflex -g * -- echo hi. I've also tried reflex -g ./* echo hi. Nothing happens when I change those files.
Thanks for the report. I'll take a look.
Hi @pkieltyka,
The way you're invoking reflex, the * is expanded by your shell, so if you have (for example) a.txt and b.txt in your current directory, reflex sees
reflex -g a.txt b.txt -- echo hi
So then if you change b.txt, nothing happens; if you change a.txt, then reflex will try to run b.txt, which is definitely not what you intended.
If you really want to run your command after any file changes, you shouldn't provide -g (or -r) -- by default, reflex considers all file changes in the current directory.
reflex echo hi
If you want to use a glob pattern with a *, you'll want to quote it:
reflex -g 'foo*' -- echo hi
Finally, use -v if you're ever not sure what reflex is doing -- it will print out exactly the configuration that it's using so you can see what's going on.
Hope this helps!
thanks it certainly helps! and it works perfectly.
|
gharchive/issue
| 2016-01-14T19:10:33 |
2025-04-01T04:56:17.004950
|
{
"authors": [
"cespare",
"pkieltyka"
],
"repo": "cespare/reflex",
"url": "https://github.com/cespare/reflex/issues/27",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
3130310
|
Inner types in prototype style
Inner types are currently only partly functional in prototype style. For instance, this doesn't work:
class Outer() {
class Inner() {
shared String s = "hello";
}
shared String test() { return Inner().s; }
}
In the generated JS code test can't access Outer.Inner() properly because it is defined inside Outer and is not exported. That could be fixed of course. But perhaps the real problem is that we mix prototype style and capture style here: while methods are defined separately and added to the prototype, inner types are defined inside the body of the outer type, creating a closure. It seems that this mixing of concepts is not completely trouble-free.
It would be more consistent, and probably work out better in the long term, to define inner types in the same way as other class members in prototype style. It would also definitely be more efficient because currently inner types are completely initialized every time an instance of the outer class is created.
Implementing this is definitely not uncomplicated either though.
After playing around with some JavaScript code I think it should be possible to move the inner types out of the class body. I'll need to refactor the generated code a bit first.
Inner types in prototype style are now added to the prototype instead of being defined in the class body. All the tests still pass, and correct code is generated for the example given above. As a nice side effect the prototype style code is now more readable because the members are grouped properly.
But there are probably some cases that still don't work, for both prototype style and capture style, in particular if inner and outer types have identical names.
Ok, this seems to work pretty well now. The problems with identical names of inner and outer types are not directly related to this, so I'll open a separate issue for that.
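The efficiency point generalizes beyond the generated JS. The same trade-off can be sketched as a Python analogy (illustrative only): a class defined inside the constructor is rebuilt for every instance, while one attached at class level, the "prototype" position, is created once:

```python
class OuterCaptureStyle:
    def __init__(self):
        class Inner:            # re-created on every OuterCaptureStyle() call
            s = "hello"
        self.Inner = Inner

    def test(self) -> str:
        return self.Inner().s

class OuterPrototypeStyle:
    class Inner:                # created once, shared by all instances
        s = "hello"

    def test(self) -> str:
        return self.Inner().s
```

The capture style still matters when Inner closes over per-instance state, which is the mixing of concepts the issue describes.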
|
gharchive/issue
| 2012-02-07T20:30:59 |
2025-04-01T04:56:17.009470
|
{
"authors": [
"ikasiuk"
],
"repo": "ceylon/ceylon-js",
"url": "https://github.com/ceylon/ceylon-js/issues/41",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
56176157
|
Wrappers for covariant Java Collections
As discussed here.
@jvasileff Thanks!
One question: why doesn't CeylonMutableXxxx extend CeylonXxxx?
Laziness, I guess :)
Should be fixed.
Thank you!
|
gharchive/pull-request
| 2015-02-01T20:12:00 |
2025-04-01T04:56:17.011396
|
{
"authors": [
"gavinking",
"jvasileff"
],
"repo": "ceylon/ceylon-sdk",
"url": "https://github.com/ceylon/ceylon-sdk/pull/343",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
186049130
|
Figure out the welcome/new user email notification situation
I want to hook into this: https://codex.wordpress.org/Plugin_API/Action_Reference/user_register
v6.3.0 - https://github.com/cferdinandi/gmt-wordpress-for-web-apps/pull/64
|
gharchive/issue
| 2016-10-29T00:40:55 |
2025-04-01T04:56:17.027096
|
{
"authors": [
"cferdinandi"
],
"repo": "cferdinandi/gmt-wordpress-for-web-apps",
"url": "https://github.com/cferdinandi/gmt-wordpress-for-web-apps/issues/63",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
277448735
|
Need help getting it work in React
Hi guys! So far I've managed to import the JS file into the page and add IDs to links.
But I still can't get it to work with the last step, adding the var scroll = new SmoothScroll('a[href*="#"]');
Can you guys help me set it up correctly in React JS?
Thanks in advance!
I'd need to see a demo link to understand what's not working/why.
Does the div you want to scroll to have an id of id?
<div id="id"></div>
@cferdinandi yes of course. See my previous message. I just don't know where and how to add
var scroll = new SmoothScroll('a[href*="#"]');
Ah. Anywhere after you've loaded/imported smooth-scroll.js. Header, footer, doesn't matter.
I don't know if you've used React; I'm doing it like so:
componentDidMount () {
const script = document.createElement("script");
script.src = "../utils/scroll.js";
script.async = true;
document.body.appendChild(script);
}
and in the console/network tab I don't see it loading.
Without a working demo I can look at, I'm afraid I've reached the limit of the debugging I can help you with. Let me know if/when you get something setup that I can look at.
BTW, this may help you out? https://stackoverflow.com/a/42848407/1293256
Alright, I was using Gatsby and managed to fix it by adding each script as a separate file. Thanks!
|
gharchive/issue
| 2017-11-28T16:27:08 |
2025-04-01T04:56:17.031648
|
{
"authors": [
"cferdinandi",
"ziyafenn"
],
"repo": "cferdinandi/smooth-scroll",
"url": "https://github.com/cferdinandi/smooth-scroll/issues/399",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1311666113
|
Configuration
I'm planning on running this for prod, how do I configure it? (i.e. different port, etc)
Added description of .env to README at 063ce36617b259a3cf7cdd70a5a083ee6ae9264f.
In case you are interested, here is a note on how to write .env in prod.
Create .env.production or .env.production.local and write the following
CK=[your consumer key for twitter API]
CS=[your consumer secret for twitter API]
TW_OAUTH_CALLBACK=[your server's URL]/twitter_callback # e.g. https://hogehoge.com/twitter_callback
MK_CALLBACK=[your server's URL]/misskey_callback # e.g. https://hogehoge.com/misskey_callback
|
gharchive/issue
| 2022-07-20T18:30:33 |
2025-04-01T04:56:17.041303
|
{
"authors": [
"ThatOneCalculator",
"cffnpwr"
],
"repo": "cffnpwr/caffe-bruncher",
"url": "https://github.com/cffnpwr/caffe-bruncher/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
107787629
|
Added testing for web-storage-proxy.js
Added testing for web-storage-proxy.js
Additions
Added tests to increase coverage to 100% across the board
Changes
Refactored code to be less complex and allow for easier testing
Updated code to utilize storage object instead of boolean to allow for storage options outside the window object
Testing
Run gulp test:unit:scripts and see green across the board for this file
Review
@KimberlyMunoz
@anselmbradford
@sebworks
Screenshots
Before
=============================== Coverage summary ===============================
Statements : 26.51% ( 321/1211 )
Branches : 21.84% ( 90/412 )
Functions : 22.49% ( 56/249 )
Lines : 26.75% ( 321/1200 )
After
=============================== Coverage summary ===============================
Statements : 28.37% ( 343/1209 )
Branches : 24.27% ( 100/412 )
Functions : 24.5% ( 61/249 )
Lines : 28.54% ( 343/1202 )
Notes
Skipping tests of internal private functions
Checklist
[ ] Changes are limited to a single goal (no scope creep)
[ ] Code can be automatically merged (no conflicts)
[ ] Code follows the standards laid out in the front end playbook
[ ] Passes all existing automated tests
[ ] New functions include new tests
[ ] New functions are documented (with a description, list of inputs, and expected output)
[ ] Placeholder code is flagged
[ ] Visually tested in supported browsers and devices
[ ] Project documentation has been updated (including the "Unreleased" section of the CHANGELOG)
LGTM, other than the one comment.
:+1:
Code comment at the top of web-storage-proxy.js needs to be updated or removed.
|
gharchive/pull-request
| 2015-09-22T19:47:54 |
2025-04-01T04:56:17.070072
|
{
"authors": [
"anselmbradford",
"jimmynotjim",
"sebworks"
],
"repo": "cfpb/cfgov-refresh",
"url": "https://github.com/cfpb/cfgov-refresh/pull/1007",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
58281531
|
Clean up the Hero and Summary code blocks
Clean up the Hero and Summary code blocks to be more semantic and remove extra containers
Additions
Created .hero_preview block to combine .hero_card-bg and .hero_card-padding styles
Added header and footer blocks to .summary block
Added a last-child hook to remove the margin from the last child within summary, no matter the element
Removals
Removed extra .hero_card-bg and .hero_card-padding blocks
Removed extra whitespace
Changes
Moved hero card url condition to outside of the markup
Swapped divs for more semantic article where applicable
Testing
Fetch branch
navigate to /blog page
navigate to /newsroom page
Review
@anselmbradford
@sebworks
Preview
No visual differences
Notes
The image container is needed to suppress loading of the images until they are needed
Fixes #257
If I can get a merge I'll integrate these changes into the Sprint-21 Hero and events list I'm currently working on.
|
gharchive/pull-request
| 2015-02-19T22:36:13 |
2025-04-01T04:56:17.075529
|
{
"authors": [
"jimmynotjim"
],
"repo": "cfpb/cfgov-refresh",
"url": "https://github.com/cfpb/cfgov-refresh/pull/262",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1262123056
|
Search type selector doesn't work properly
The "Things to search for" selector in the search sidebar doesn't seem to work properly. If you select "HTML" or "Components", the resulting search still goes to the "Links" view:
The search should modify the search_type parameter in the query string -- you can manually trigger the HTML/components search by setting ?search_type=html or ?search_type=components.
Yes! This needs to be fixed. Currently it's two separate forms. Very dumb.
Couple options I see:
Combine the forms at the HTML level by having all the form elements contained within a single <form> element and have two submit buttons. This is a pain due to the page's layout.
Continue having two separate forms but use JS to sync the forms by adding type=hidden form elements that get updated whenever an option or search term is changed.
Remove the Update button and use AJAX to update the results in real-time when sidebar options are changed. This is what iRegs does (try changing a sidebar option). There's a separate endpoint that returns partial HTML that the front-end fetches and injects into the DOM.
iRegs has a no-JS fallback that shows an Update button for when JS is disabled by the user. Our template is based on iRegs but has no AJAX functionality, hence our ugly update button.
Option 2 above is easiest to implement so I'll start on that right now.
|
gharchive/issue
| 2022-06-06T17:15:29 |
2025-04-01T04:56:17.079794
|
{
"authors": [
"chosak",
"contolini"
],
"repo": "cfpb/crawsqueal",
"url": "https://github.com/cfpb/crawsqueal/issues/14",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
182636001
|
Additional Print Styles
Made some adjustments on print styles to reduce the number of pages being printed.
Since Chrome prints with mobile styles and Firefox/IE print with desktop styles, the styles need to be flexible enough to work with both.
These aren't perfect, but they're a huge improvement, especially for Firefox/IE where a lot of text was being cut off.
Testing
Review and Print Preview the following pages:
http://localhost:8000/owning-a-home/explore-rates/
http://localhost:8000/owning-a-home/closing-disclosure/
http://localhost:8000/owning-a-home/loan-options/
http://localhost:8000/owning-a-home/loan-options/FHA-loans
http://localhost:8000/owning-a-home/process
Amazing.
|
gharchive/pull-request
| 2016-10-12T21:15:14 |
2025-04-01T04:56:17.083345
|
{
"authors": [
"KimberlyMunoz",
"contolini"
],
"repo": "cfpb/owning-a-home",
"url": "https://github.com/cfpb/owning-a-home/pull/686",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
619841493
|
Support native HTTPS in application
Currently, in order to target new clients, we route HTTPS via nginx, which strips it to HTTP before sending it to our application.
This is not very portable, so I'd like to support HTTPS natively.
The main issue with this currently is that Java wants you to use JKS, and we'd prefer to allow PEM to prevent the use of a proprietary format.
Looking into this issue, Java doesn't appear to budge with the standard server sockets, and we are forced to use JKS for the time being. I will be looking into a replacement for this in due time, as the preferred certificate format should be PEM.
|
gharchive/issue
| 2020-05-18T01:06:17 |
2025-04-01T04:56:17.100671
|
{
"authors": [
"cg0"
],
"repo": "cg0/Hirasawa-Project",
"url": "https://github.com/cg0/Hirasawa-Project/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
658832923
|
error on python setup.py install
mbp-dev:gazu-publisher des$ python setup.py install
error in gazupublisher setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers; Expected version spec in qtazu@git+https://github.com/cgwire/qtazu.git#egg=qtazu at @git+https://github.com/cgwire/qtazu.git#egg=qtazu
Thanks for your feedback!
This bug has been fixed, so it shouldn't be a problem anymore. The gazu publisher still remains WIP, so let us know if the error persists. If you want, we've also written an installation guide to make things easier: https://gazu.cg-wire.com/publisher.html
OK! Let's move on to this one!
os x mojave, 10.14.6
pyside2 installed, pip from python27
mbp-dev:gazu-publisher des$ pip install https://github.com/cgwire/gazu-publisher.git --user
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Collecting https://github.com/cgwire/gazu-publisher.git
Using cached https://github.com/cgwire/gazu-publisher.git
ERROR: Cannot unpack file /private/var/folders/my/nn47_t4d365fr1p3rj6p0r7r0000gn/T/pip-unpack-rrb_dW/gazu-publisher.git (downloaded from /private/var/folders/my/nn47_t4d365fr1p3rj6p0r7r0000gn/T/pip-req-build-ARbn8n, content-type: text/html; charset=utf-8); cannot detect archive format
ERROR: Cannot determine archive format of /private/var/folders/my/nn47_t4d365fr1p3rj6p0r7r0000gn/T/pip-req-build-ARbn8n
Can you try the command pip install git+https://github.com/cgwire/gazu-publisher.git --user, instead of pip install https://github.com/cgwire/gazu-publisher.git --user (add git+ in front of the URL)?
The mistake is in our tutorial; it was corrected, but the PR with the fix has not been merged yet, sorry for that. Let me know if it worked!
pip install -U setuptools did the trick for me, after the git+https worked just fine in --user mode.
mac mojave 10.14.6, python27, apple python build, no brew
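For reference, a minimal setup.py sketch using the PEP 508 direct-reference syntax that a sufficiently recent setuptools can parse (package metadata here is abbreviated; upgrading setuptools is what makes this parseable):
# Sketch only: parsing 'name @ url' direct references in install_requires
# requires a reasonably recent setuptools, hence `pip install -U setuptools`.
from setuptools import setup

setup(
    name="gazupublisher",
    install_requires=[
        "qtazu @ git+https://github.com/cgwire/qtazu.git#egg=qtazu",
    ],
)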
|
gharchive/issue
| 2020-07-17T04:13:15 |
2025-04-01T04:56:17.141567
|
{
"authors": [
"LedruRollin",
"ddesmond"
],
"repo": "cgwire/gazu-publisher",
"url": "https://github.com/cgwire/gazu-publisher/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
246719014
|
Serializer Interface
Hey, nice, slim library by the way, will try to use it.
First complaint: SerializerInterface should take $context as a second serialize/unserialize parameter.
Otherwise this interface can have some issues in the future.
Example case I've encountered:
I made a JsonSerializer; the json_encode and json_decode methods take additional parameters, which would be impossible to pass.
This is a great suggestion. I'm moving the serializer into my helper library, and I will definitely consider this change.
Optionally you could construct your JsonSerializer with any options you wanted to use with encoding/decoding.
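As an illustration of the suggestion (a Python analogue of the PHP interface, not this library's actual code), the context parameter would flow through per call like this:
# Illustration only: per-call options flow through the serializer interface
# instead of being fixed at construction time.
import json
from typing import Any, Optional

class JsonSerializer:
    def serialize(self, value: Any, context: Optional[dict] = None) -> str:
        # e.g. context={"indent": 2} maps onto json.dumps options
        return json.dumps(value, **(context or {}))

    def unserialize(self, data: str, context: Optional[dict] = None) -> Any:
        # e.g. context={"parse_float": str} maps onto json.loads options
        return json.loads(data, **(context or {}))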
|
gharchive/issue
| 2017-07-31T10:57:39 |
2025-04-01T04:56:17.157549
|
{
"authors": [
"arnaspet",
"chadicus"
],
"repo": "chadicus/psr-cache-redis",
"url": "https://github.com/chadicus/psr-cache-redis/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
276772919
|
unreliable channels?
Looking through the DataChannel.hpp channel types, there doesn't seem to be an unreliable option. Perhaps I am misunderstanding what WebRTC means by unreliable -- is "reliable_unordered" what I am looking for? What I was expecting unreliable to be is both unordered (of course) and with no guarantee of delivery, whereas reliable_unordered sounds to me like it would be unordered but data would be re-sent until acknowledged on the other side (which I don't care about).
Here are the explanations from https://tools.ietf.org/html/draft-ietf-rtcweb-data-protocol-08#section-5.1.
I think the "PARTIAL_RELIABLE" ones might be the what you are looking for.
DATA_CHANNEL_RELIABLE (0x00): The Data Channel provides a reliable in-order bi-directional communication.
DATA_CHANNEL_RELIABLE_UNORDERED (0x80): The Data Channel provides a reliable unordered bi-directional communication.
DATA_CHANNEL_PARTIAL_RELIABLE_REXMIT (0x01): The Data Channel provides a partially-reliable in-order bi-directional communication. User messages will not be retransmitted more times than specified in the Reliability Parameter.
DATA_CHANNEL_PARTIAL_RELIABLE_REXMIT_UNORDERED (0x81): The Data Channel provides a partial reliable unordered bi-directional communication. User messages will not be retransmitted more times than specified in the Reliability Parameter.
DATA_CHANNEL_PARTIAL_RELIABLE_TIMED (0x02): The Data Channel provides a partial reliable in-order bi-directional communication. User messages might not be transmitted or retransmitted after a specified life-time given in milliseconds in the Reliability Parameter. This life-time starts when providing the user message to the protocol stack.
DATA_CHANNEL_PARTIAL_RELIABLE_TIMED_UNORDERED (0x82): The Data Channel provides a partial reliable unordered bi-directional communication. User messages might not be transmitted or retransmitted after a specified life-time given in milliseconds in the Reliability Parameter. This life-time starts when providing the user message to the protocol stack.
Ah. I think you're right.
Still, in the WebRTC documentation and tutorials, the behavior of these unreliable channels is typically controlled by a 'maxRetries' or 'maxTimeout' style parameter. I can't seem to find where such a parameter might be set in librtcdcpp. And in the absence of such a parameter, what would the behavior of PARTIAL_RELIABLE data channels be?
In fact, scanning through the DataChannel.cpp source code, the chan_type parameter seems to be unused.
That parameter would be the reliability parameter, for which the proper API is not yet implemented in this library.
The library does not support creating a data channel yet. I have a PR for this, but I hardcoded the RELIABLE chan_type and other params like reliability. Plus it would need more tweaks to actually make it work accordingly at the SCTP layer.
I've added the chan_type and reliability features to the API: https://github.com/chadnickbok/librtcdcpp/pull/30/commits/f20cf66815448f4068a0d00f972e7603c0f133d7
|
gharchive/issue
| 2017-11-25T19:44:45 |
2025-04-01T04:56:17.163696
|
{
"authors": [
"PowerInside",
"an-kumar"
],
"repo": "chadnickbok/librtcdcpp",
"url": "https://github.com/chadnickbok/librtcdcpp/issues/34",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
273410397
|
[backport] new style rdiv function
Backport of #3615.
jenkins, test this please
LGTM!
|
gharchive/pull-request
| 2017-11-13T12:20:43 |
2025-04-01T04:56:17.178931
|
{
"authors": [
"kmaehashi",
"takagi"
],
"repo": "chainer/chainer",
"url": "https://github.com/chainer/chainer/pull/3857",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
533380764
|
Use typescript for status actions and statusReducer
This PR aims to use typescript for actions and reducers related to status.
Thank you!
|
gharchive/pull-request
| 2019-12-05T14:18:59 |
2025-04-01T04:56:17.179694
|
{
"authors": [
"gky360",
"ofk"
],
"repo": "chainer/chainerui",
"url": "https://github.com/chainer/chainerui/pull/340",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1323249314
|
Merge pull request #2 from chakkaphong/main
pull
pr
|
gharchive/pull-request
| 2022-07-30T17:52:09 |
2025-04-01T04:56:17.183749
|
{
"authors": [
"Chakkaphong9"
],
"repo": "chakkaphong/nodejs-express-mongodb-typscript-gernerator",
"url": "https://github.com/chakkaphong/nodejs-express-mongodb-typscript-gernerator/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
696733149
|
Set charge state Z0 in AddIon(...)
We should retire PRESCRIBED_FULLY_IONIZED etc and instead allow the shortcut
eqsys.n_i.addIon(name='name', Z=Z, Z0=Z0, iontype=Ions.IONS_PRESCRIBED, n=n)
where Z is the atomic number as usual, but Z0 is now the charge state (= 0 for neutral or = Z for fully ionized).
Sure, but I am not convinced that we should retire IONS_PRESCRIBED_FULLY_IONIZED et al. The notation Z0=... suggests that we are specifying just one charge state (rather than setting all others to zero), while IONS_PRESCRIBED_FULLY_IONIZED is quite verbose.
(Implementing the Z0=... interface should be very simple, though, as the IonSpecies class already has the initialize_XXX_charge_state() method for initializing ions just this way; it's just not exposed to addIon())
For sure, I agree that we should keep the current methods, also for backwards-compatibility.
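For illustration, a minimal sketch of how the Z0 shortcut could be wired into addIon() on top of the existing per-charge-state initializer (the helper and attribute names below are assumptions for illustration, not DREAM's actual internals):
# Hypothetical sketch of the proposed addIon() shortcut; only the
# addIon(name, Z, Z0, iontype, n) signature comes from the proposal above.
def addIon(self, name, Z, iontype, n=None, Z0=None):
    ion = IonSpecies(name=name, Z=Z, iontype=iontype)
    if Z0 is not None:
        # Populate only the requested charge state (0 = neutral, Z = fully
        # ionized); all other charge states get zero density.
        ion.initialize_prescribed_charge_state(Z0=Z0, n=n)
    else:
        ion.initialize_prescribed(n=n)
    self.ions.append(ion)
    return ion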
|
gharchive/issue
| 2020-09-09T11:30:31 |
2025-04-01T04:56:17.207828
|
{
"authors": [
"Embreus",
"hoppe93"
],
"repo": "chalmersplasmatheory/DREAM",
"url": "https://github.com/chalmersplasmatheory/DREAM/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
116564095
|
Mobile examples are not building
It fails on some missing npm modules like webpack and graphql declared in package.json in subdirectories. It should be buildable out of the box.
If you're referring to relay-mobile-examples, then you do need graphQL and webpack, because Relay uses those to generate/validate relay-graphql queries.
Fixed with #17
|
gharchive/issue
| 2015-11-12T14:52:06 |
2025-04-01T04:56:17.229589
|
{
"authors": [
"chandu0101",
"mkotsbak"
],
"repo": "chandu0101/sri",
"url": "https://github.com/chandu0101/sri/issues/12",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1132093967
|
Fix snapshot conflict
Summary
https://github.com/channel-io/bezier-react/pull/715 was merged, but the changes from https://github.com/channel-io/bezier-react/pull/665 caused the tests that ran after the merge to fail; this PR fixes that error.
Detail
Updated the snapshots
:tada: This PR is included in version 1.0.0-next-v1.93 :tada:
The release is available on:
npm package (@next-v1 dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2022-02-11T08:43:26 |
2025-04-01T04:56:17.233591
|
{
"authors": [
"ch-builder",
"guswnsxodlf"
],
"repo": "channel-io/bezier-react",
"url": "https://github.com/channel-io/bezier-react/pull/719",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1262701455
|
Update SectionLabel's help to accept Tooltip's allowHover
Summary
Adds allowTooltipHover: boolean to the help prop type of the SectionLabel component, so that the allowHover behavior of the Help tooltip can be configured when using SectionLabel.
https://user-images.githubusercontent.com/6940439/172295300-46fe61e8-f6d4-4a54-a779-84dc5fc39957.mp4
Details
Related Desk issue: https://github.com/channel-io/ch-desk-web/issues/10023
When the Help tooltip contains clickable elements and similar cases, the tooltip needs to be hoverable; this is solved by adding the option to SectionLabel.help's props.
Browser Compatibility
Please make sure to check OS / engine compatibility.
Windows
[ ] Chrome - Blink
[ ] Edge - Blink
[ ] Firefox - Gecko (Option)
macOS
[x] Chrome - Blink
[ ] Edge - Blink
[ ] Safari - WebKit
[ ] Firefox - Gecko (Option)
References
Instead of adding a prop, what do you think about making allowHover = true the default?
The opposite case (not wanting the tooltip's allowHover for SectionLabel's help) gives me slight pause, but I can't think of a case where tooltip hover must be blocked 😅 I think that's fine!
Additionally, since the tooltipContent we currently accept ends up in TooltipProps anyway, I also considered accepting tooltipProps instead, spreading it, and deprecating tooltipContent, as shown below. Admittedly this problem goes away if we simply make allowHover always true as discussed above, but I'd appreciate your thoughts on this approach 🙇
As-is:
interface SectionLabelHelpProps extends Partial<IconInfo> {
iconSize?: IconSize
tooltipContent: React.ReactNode
}
To-be:
interface SectionLabelHelpProps extends Partial<IconInfo> {
iconSize?: IconSize
tooltipProps: Partial<TooltipProps> & Pick<TooltipProps, 'content'>
/**
* @deprecated
*/
tooltipContent: React.ReactNode
}
If we want to broaden the options for specifying an inner component's props, rather than adding them to the interface, it seems better to solve this through composition:
interface SectionLabelProps {
...
tooltip: { /* preset props for tooltip */ } | React.ReactNode
}
:tada: This PR is included in version 1.0.0-next-v1.148 :tada:
The release is available on:
npm package (@next-v1 dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2022-06-07T04:18:58 |
2025-04-01T04:56:17.242386
|
{
"authors": [
"Dogdriip",
"ch-builder",
"inhibitor1217"
],
"repo": "channel-io/bezier-react",
"url": "https://github.com/channel-io/bezier-react/pull/815",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
200365006
|
Blueprint middleware interferes with global middleware
Creating some middleware for a blueprint seems to interfere with the main router unexpectedly. E.g.
from sanic import Sanic, Blueprint
from sanic.exceptions import NotFound
app = Sanic(__name__)
key_blueprint = Blueprint('key', url_prefix='/foobar/')
@app.route('/')
async def hello_world(request):
return "Hello World! Your API is working!"
@key_blueprint.middleware('request')
async def authentication(request):
raise NotFound("This blueprint is not ready yet!")
app.blueprint(key_blueprint)
app.run()
And then I run
$ curl 127.0.0.1:8000/
Error: This blueprint is not ready yet!%
I would have expected the blueprint middleware 'authentication' not to interfere with an endpoint that is not prefixed with /foobar/.
Known issue. A discussion is going on: #37
Going to close this in favor of #37
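In the meantime, one possible workaround (a sketch under the assumption that the blueprint's url_prefix is known up front, not an official fix) is to make the blueprint middleware bail out for requests outside its prefix:
from sanic.exceptions import NotFound

@key_blueprint.middleware('request')
async def authentication(request):
    # Workaround sketch: blueprint middleware currently runs globally, so
    # skip requests that don't target this blueprint's prefix. (request.url
    # carried the path in Sanic of this era; adjust for newer versions.)
    if not request.url.startswith('/foobar/'):
        return None  # returning None lets normal request handling continue
    raise NotFound("This blueprint is not ready yet!")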
|
gharchive/issue
| 2017-01-12T13:20:24 |
2025-04-01T04:56:17.244618
|
{
"authors": [
"nasfarley88",
"seemethere",
"yoloseem"
],
"repo": "channelcat/sanic",
"url": "https://github.com/channelcat/sanic/issues/290",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1296986991
|
Rewrite v-sanitize into vue3/nuxt3
Closed: #265
Hey!
I started working on the rewrite, couldn't test the directive yet, but the $sanitize function should already work. I ran into issues running sanitize-html under Vite. A possible solution would be to replace sanitize-html with dompurify.
Are you open to that idea? Should I prototype a rewrite with dompurify? You can check it out here: https://github.com/cure53/DOMPurify
ToDo
[ ] test directive in vue 3
[ ] write tests
[ ] test / update nuxt implementation for nuxt 3
[ ] add typings extending vue
[ ] fix compatibility issues with Vite and sanitize-html
[ ] add editorconfig or eslint or any other form of keeping codestyle consistent
Also, the typings are not yet finished, specifically declaring the existence of $sanitize on the Vue instance.
Yes, it's nice. Let's go ahead and try that to see if it works fine; then we can release it for Vue 3 and Nuxt 3.
But another plugin already exists here: https://github.com/LeSuisse/vue-dompurify-html, so it's better to avoid creating the same thing.
@chantouchsek I use the package you linked as a replacement for v-sanitize, 'till I update the implementation :D. The problem with that one is that it only implements a directive; there is no sanitize function passed to the context, and no native Nuxt implementation.
Ok, great 👍
@chantouchsek heya! Could you please look into my tsconfig and sanitize.ts files? I am pretty new to TS and I don't know where in the documentation to look for things like extending Vue or Nuxt types so they include $sanitize, so I don't know where to add that in the nuxt3/vue3 version.
I added some basic tests for now and an example environment where you can run the package in dev mode, so you can test it properly without needing another Vue app.
Could you please check out my version and maybe throw a PR with proper TS settings? After that I'd love to finish writing more tests and give you the code for a PR.
Thanks.
@truesteps what editor/IDE are you using? It seems you made changes to lots of unrelated files. Can you check and revert all of them first? Then I will look into it again. Thanks.
I'm using PhpStorm. I added a .editorconfig and reformatted the entire project; feel free to modify the .editorconfig to your liking and I'll reformat it again :)
The idea behind that came from the fact that usually no two developers have the same formatting setup, so I added it to ensure all code will look the same in the future.
@truesteps no, don't use PhpStorm to write JS or TS; you should use WebStorm instead.
@chantouchsek PhpStorm is the same as WebStorm, except it has support for PHP; nothing else is different. .editorconfig is supported by all editors, not just the IntelliJ family of products from JetBrains.
Anyway, if you want me to revert the formatting changes, I'll do that :)
No, I don't accept that format.
what format do you wish to use? Please define it so I can implement it. Thanks
Thanks, the current format is fine.
|
gharchive/pull-request
| 2022-07-07T07:57:56 |
2025-04-01T04:56:17.256622
|
{
"authors": [
"chantouchsek",
"truesteps"
],
"repo": "chantouchsek/v-sanitize",
"url": "https://github.com/chantouchsek/v-sanitize/pull/284",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2680745965
|
docs: Running list of Eng / Design improvements for 2025
Engineering
[ ] Custom examples of inherited functions #890
[ ] Autogeneration of data model
[ ] Python scripts or notebooks generation for examples and tutorials
[ ] Versioned docs
Improving Design
[ ] Improved API docs styling
[ ] Accordion / admonition styling
[ ] Add ability to link and copy a link to specific questions in the doc site FAQ (addressing #419)
Effort estimation:
Engineering
Custom examples of inherited functions (1h)
Autogeneration of data model (on hold, waiting for eng suggestions)
Python scripts or notebooks generation for examples and tutorials (10h)
Versioned docs (10h)
Improving Design
Improved API docs styling
I believe this is about the "attributes in table" issue. (10h)
Accordion / admonition styling (5h)
Add ability to link and copy a link to specific questions in the doc site FAQ (addressing https://github.com/chanzuckerberg/cryoet-data-portal/issues/419) - this may be an engineering issue, waiting on confirmation
|
gharchive/issue
| 2024-11-21T20:03:30 |
2025-04-01T04:56:17.263221
|
{
"authors": [
"dgmccart",
"melissawm"
],
"repo": "chanzuckerberg/cryoet-data-portal",
"url": "https://github.com/chanzuckerberg/cryoet-data-portal/issues/1350",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1156317030
|
add support for npe2-enabled features
Summary
We will add support on the napari hub for npe2, inspecting plugins for high priority metadata, presenting the metadata on plugin pages, letting users filter by the plugin metadata, and providing the metadata to napari through the napari hub API.
Outcome
Users will be able to see human-readable plugin names, filter by plugin type, and filter by file extensions that are supported for reading and writing. napari developers will have access to this metadata through the napari hub API. napari plugin developers will be able to preview this metadata on the Plugin Preview Page.
How will it work
When new plugins are detected on PyPI, on the backend we will need to download the Python package and inspect the npe2 manifest file for relevant metadata. For legacy plugins, we will attempt to infer this information using the logic in npe2 convert.
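As a rough sketch of that inspection step (hypothetical helper, assuming the manifest ships as napari.yaml inside the package; the real backend logic may differ), one could unpack a plugin wheel and read the bundled npe2 manifest:
# Hypothetical sketch: download a plugin wheel and read its npe2 manifest.
import io
import urllib.request
import zipfile

import yaml  # PyYAML

def read_npe2_manifest(wheel_url: str) -> dict:
    data = urllib.request.urlopen(wheel_url).read()
    with zipfile.ZipFile(io.BytesIO(data)) as wheel:
        # Assumes an npe2-enabled plugin bundles its manifest as napari.yaml.
        name = next(n for n in wheel.namelist() if n.endswith("napari.yaml"))
        return yaml.safe_load(wheel.read(name))

# manifest = read_npe2_manifest("https://files.pythonhosted.org/.../plugin.whl")
# display_name = manifest.get("display_name")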
Assets
PRD
Designs
Tech Spec
Stories
[x] https://github.com/chanzuckerberg/napari-hub/issues/237
[x] #467
[x] https://github.com/chanzuckerberg/napari-hub/issues/425
[ ] #468
@lauramarcos to QA the following functionality:
Test new npe2 filters on search page: https://staging.napari-hub.org
Test PyPI link on plugin page
Verify plugin metadata matches what is being filtered
|
gharchive/issue
| 2022-03-02T03:55:47 |
2025-04-01T04:56:17.283533
|
{
"authors": [
"codemonkey800",
"neuromusic"
],
"repo": "chanzuckerberg/napari-hub",
"url": "https://github.com/chanzuckerberg/napari-hub/issues/434",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2553412922
|
Draft relaxed schema compliance
Context
There are emerging requirements for reusing the cellxgene-schema CLI's schema + validator for scenarios that are more relaxed than CELLxGENE Discover's current requirements.
Relaxation
The following sections blue-sky possible approaches to documenting relaxed requirements; however, the solution should be driven by concrete scenarios and not theory.
Fine Granularity: Per Schema variant
A limited number of schema variants could be documented such as the "cross modality schema". schema_reference could be reused for the curator to define the preferred schema for validation.
Fine Granularity: Per Metadata field
For each metadata field, the schema defines separate requirements for strict and relaxed. Generally, relaxed will indicate that the field MUST NOT be present, but it's also possible to relax other requirements.
uns (Dataset Metadata)
relaxed
Key
relaxed
Annotator
Curator MAY annotate.
Value
list[str]. str values MUST match one or more of the values in the set:
"obs['cell_type_ontology_term_id']"
"obs['development_stage_ontology_term_id']"
...
If present, relaxed validation MUST be performed on the specified metadata field.
Concrete example: If the assay is silver tier Visium Spatial Gene Expression then assuming that cell_type_ontology_term_id defined its relaxed validation as:
cell_type_ontology_term_id MUST NOT be present in obs
"cell_type_onotlogy_term_id" MUST be annotated in uns['relaxed']
Then the silver tier dataset would simply meet those requirements.
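A sketch of what that annotation could look like with anndata (the relaxed key and its value format are this draft's proposal, not the current schema):
import anndata as ad

adata = ad.read_h5ad("visium_silver_tier.h5ad")  # hypothetical file

# Proposed annotation: list the metadata fields to validate leniently.
adata.uns["relaxed"] = ["obs['cell_type_ontology_term_id']"]

# Under relaxed validation, the listed column must then be absent from obs.
assert "cell_type_ontology_term_id" not in adata.obs.columns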
Coarse Granularity: Per Dataset
The schema documents a relaxed subset of the current required fields. This subset may not include cell_type_ontology_term_id or perhaps development_stage_ontology_term_id. If a current required field is not included in the relaxed subset, then it MUST NOT be present in the dataset.
Curators annotate whether strict or relaxed validation is desired.
uns (Dataset Metadata)
strict
Key
strict
Annotator
Curator MUST annotate.
Value
bool. This MUST be True for strict validation and MUST be False for relaxed validation.
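Continuing the anndata sketch from the fine-grained example above, the coarse-grained variant reduces to a single flag (again the draft's proposal, not the current schema):
# Draft proposal sketch: one dataset-level switch instead of per-field lists.
adata.uns["strict"] = False  # False requests relaxed validation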
References
Strict and Relaxed Mode
Categories of AIRR Schema Fields
Compliance with the MiAIRR Data Standard
Compliance to the MiAIRR Data Standard is currently a binary state, i.e., a dataset either is or is not compliant; there are not "grades" of compliance. However, additional requirements for specific use cases might be defined in the future.
I'd prefer not to overload "relaxed" to mean anything besides "MUST NOT contain". If we want to "relax" in some other way, it should probably be a new schema variant or additional flag.
I like the idea of using a combination of schema_reference to point to variant schemas, and uns.relaxed to point to which requirements to ignore in that given schema reference.
We may have dependent columns that need to be relaxed, like tissue_type and tissue_ontology_term_id. Just wanted to note that we'll have to account for that dependency either by logging an error if tissue_ontology_term_id is relaxed and tissue_type is not, or automatically relaxing dependent columns of relaxed columns.
|
gharchive/issue
| 2024-09-27T17:28:59 |
2025-04-01T04:56:17.295460
|
{
"authors": [
"brianraymor",
"nayib-jose-gloria"
],
"repo": "chanzuckerberg/single-cell-curation",
"url": "https://github.com/chanzuckerberg/single-cell-curation/issues/1025",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1813177502
|
🛑 de-4 (redglobuli) is down
In 7e53436, de-4 (redglobuli) (https://de-4-mirror.chaotic.cx/no-failover/chaotic-aur/lastupdate) was down:
HTTP code: 503
Response time: 59 ms
Resolved: de-4 (redglobuli) is back up in 659bad6.
|
gharchive/issue
| 2023-07-20T05:16:19 |
2025-04-01T04:56:17.324386
|
{
"authors": [
"Chaotic-Temeraire"
],
"repo": "chaotic-aur/chaotic-uptimes",
"url": "https://github.com/chaotic-aur/chaotic-uptimes/issues/3002",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1919018597
|
🛑 Silky.Network - us-ca is down
In 3de1ce6, Silky.Network - us-ca (https://sjc-us-mirror.silky.network) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Silky.Network - us-ca is back up in c030206 after 2 minutes.
|
gharchive/issue
| 2023-09-29T10:30:05 |
2025-04-01T04:56:17.326854
|
{
"authors": [
"Chaotic-Temeraire"
],
"repo": "chaotic-aur/chaotic-uptimes",
"url": "https://github.com/chaotic-aur/chaotic-uptimes/issues/5056",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1931191960
|
🛑 br (UFSCar Alcateia) is down
In 01370bc, br (UFSCar Alcateia) (mirror.ufscar.br) was down:
HTTP code: 0
Response time: 0 ms
Resolved: br (UFSCar Alcateia) is back up in 5f94546 after 6 minutes.
|
gharchive/issue
| 2023-10-07T05:05:18 |
2025-04-01T04:56:17.329193
|
{
"authors": [
"Chaotic-Temeraire"
],
"repo": "chaotic-aur/chaotic-uptimes",
"url": "https://github.com/chaotic-aur/chaotic-uptimes/issues/5209",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
144288484
|
Specify if jvm_opts sets memory usage per core or per virtual machine
It is not clear to me from the documentation how jvm_opts is passed to the JVM. How should I set this option with respect to my total core and memory availability?
Thank you!
Andrew
Andrew;
The memory specifications are per-core and automatically get scaled to the cores allocated to a process. Sorry this was not more clear -- I added additional documentation on it:
https://bcbio-nextgen.readthedocs.org/en/latest/contents/parallel.html#tuning-core-and-memory-usage
Let us know if you have any other questions and thanks much for the helpful feedback.
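To illustrate the scaling (illustrative arithmetic only, not bcbio's actual code): a per-core -Xmx2000m on a 4-core job ends up as roughly an 8 GB heap for that process:
# Illustration only: bcbio treats jvm_opts memory as per-core and scales it
# by the cores allocated to the process (this function name is made up).
def scaled_xmx(per_core_mb: int, cores: int) -> str:
    return "-Xmx%dm" % (per_core_mb * cores)

print(scaled_xmx(2000, 4))  # -> -Xmx8000m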
|
gharchive/issue
| 2016-03-29T14:49:22 |
2025-04-01T04:56:17.356552
|
{
"authors": [
"6C3C41",
"chapmanb"
],
"repo": "chapmanb/bcbio-nextgen",
"url": "https://github.com/chapmanb/bcbio-nextgen/issues/1292",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
169095385
|
gatk bcbio error
Hi! I'm having problems with the latest dev version of bcbio. The error message is found below. Any ideas?
//Pär
CalledProcessError: Command 'set -o pipefail; export PATH=/home/genetik/bcbio/share/bcbio/anaconda/bin:$PATH && /home/genetik/bcbio/share/bcbio/anaconda/bin/gatk-framework -Xms750m -Xmx2000m -XX:+UseSerialGC -U LENIENT_VCF_PROCESSING --read_filter BadCigar --read_filter NotPrimaryAlignment realign recal -o /home/genetik/calling/haloplex/160708_M00568_0174_000000000-ARHBP/calling/work/bamprep/16-4623/chr1/tx/tmp0dMrI7/16-4623-sort-chr1_0_25870287-prep-prealign.bam
##### ERROR ------------------------------------------------------------------------------------------
##### ERROR A USER ERROR has occurred (version 3.6-24-g59fd391):
##### ERROR
##### ERROR This means that one or more arguments or inputs in your command are incorrect.
##### ERROR The error message below tells you what is the problem.
##### ERROR
##### ERROR If the problem is an invalid argument, please check the online documentation guide
##### ERROR (or rerun your command with --help) to view allowable command-line arguments for this tool.
##### ERROR
##### ERROR Visit our website and forum for extensive documentation and answers to
##### ERROR commonly asked questions https://www.broadinstitute.org/gatk
##### ERROR
##### ERROR Please do NOT post this error to the GATK forum unless you have really tried to fix it yourself.
##### ERROR
##### ERROR MESSAGE: Invalid argument value 'realign' at position 6.
##### ERROR Invalid argument value 'recal' at position 7.
##### ERROR ------------------------------------------------------------------------------------------
' returned non-zero exit status 1
I'm pasting a little bit more of the output, if necessary.
max: 127595531
Between block sizes:
min: 287
5%: 344.75
25%: 1003.25
median: 2365.5
75%: 15080.5
95%: 142929.25
99%: 24990570.5
max: 43839113
[2016-08-03T10:00Z] Timing: hla typing
[2016-08-03T10:00Z] Resource requests: freebayes, gatk, gatk-haplotype, picard, platypus, samtools; memory: 2.00, 3.50, 3.50, 3.50, 2.00, 2.00; cores: 16, 1, 1, 1, 16, 16
[2016-08-03T10:00Z] Configuring 16 jobs to run, using 1 cores each with 3.50g of memory reserved for each job
[2016-08-03T10:00Z] Timing: alignment post-processing
[2016-08-03T10:00Z] multiprocessing: piped_bamprep
[2016-08-03T10:00Z] GATK pre-alignment ('chr1', 0, 25870287) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr1', 25880402, 55505727) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr1', 55509505, 78381828) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr1', 78383240, 116244057) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr1', 116245531, 156085075) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr1', 156095984, 201328393) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr1', 201328740, 218520399) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr1', 218536665, 236850109) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr1', 236881147, 249250621) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr2', 0, 21226216) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr2', 21227130, 105977901) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr2', 105979731, 179392485) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr2', 179392990, 220283772) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr2', 220284806, 243199373) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr3', 30664680, 46899961) : 16-4623
[2016-08-03T10:00Z] GATK pre-alignment ('chr3', 0, 30648479) : 16-4623
[2016-08-03T10:00Z] ##### ERROR ------------------------------------------------------------------------------------------
[2016-08-03T10:00Z] ##### ERROR A USER ERROR has occurred (version 3.6-24-g59fd391):
[2016-08-03T10:00Z] ##### ERROR
[2016-08-03T10:00Z] ##### ERROR This means that one or more arguments or inputs in your command are incorrect.
[2016-08-03T10:00Z] ##### ERROR The error message below tells you what is the problem.
[2016-08-03T10:00Z] ##### ERROR
[2016-08-03T10:00Z] ##### ERROR If the problem is an invalid argument, please check the online documentation guide
[2016-08-03T10:00Z] ##### ERROR (or rerun your command with --help) to view allowable command-line arguments for this tool.
[2016-08-03T10:00Z] ##### ERROR
[2016-08-03T10:00Z] ##### ERROR Visit our website and forum for extensive documentation and answers to
[2016-08-03T10:00Z] ##### ERROR commonly asked questions https://www.broadinstitute.org/gatk
[2016-08-03T10:00Z] ##### ERROR
[2016-08-03T10:00Z] ##### ERROR Please do NOT post this error to the GATK forum unless you have really tried to fix it yourself.
[2016-08-03T10:00Z] ##### ERROR
[2016-08-03T10:00Z] ##### ERROR MESSAGE: Invalid argument value 'realign' at position 6.
[2016-08-03T10:00Z] ##### ERROR Invalid argument value 'recal' at position 7.
[2016-08-03T10:00Z] ##### ERROR ------------------------------------------------------------------------------------------
[2016-08-03T10:00Z] Uncaught exception occurred
Traceback (most recent call last):
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/provenance/do.py", line 21, in run
_do_run(cmd, checks, log_stdout)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/provenance/do.py", line 95, in _do_run
raise subprocess.CalledProcessError(exitcode, error_msg)
CalledProcessError: Command 'set -o pipefail; export PATH=/home/genetik/bcbio/share/bcbio/anaconda/bin:$PATH && /home/genetik/bcbio/share/bcbio/anaconda/bin/gatk-framework -Xms750m -Xmx2000m -XX:+UseSerialGC -U LENIENT_VCF_PROCESSING --read_filter BadCigar --read_filter NotPrimaryAlignment realign recal -o /home/genetik/calling/haloplex/160708_M00568_0174_000000000-ARHBP/calling/work/bamprep/16-4623/chr1/tx/tmpxQ5oLh/16-4623-sort-chr1_25880402_55505727-prep-prealign.bam
Pär;
Sorry about the issue, I'm confused as to what happened here. It looks like the command lines are completely messed up, you're getting realign recal instead of the expected parameters. I'm not sure what would cause this. Would you be able to run this with a single core (-n 1) so we can see exactly where it fails? Would you also be able to share your configuration so I can try to reproduce here? Thanks much for the help debugging.
Hi!
Please find attached the complete output from a run using -n 16 (bcbio_output.txt) and the output from -n 1. There are gzip: Broken pipe warnings. I've checked the input fastq files using gzip -t and they seem OK. Also, I upgraded from GATK 3.5 to GATK 3.6; maybe this could have something to do with it?
It appears to me that the error has to do with the realignment step (?). I do this because I use the alignment when assessing found variants, and using realignment has not been an issue before.
cheers,
Pär
complete -n 16 output:
bcbio_output.txt
-n 1 output:
[2016-08-05T11:12Z] System YAML configuration: /home/genetik/bcbio/share/bcbio/galaxy/bcbio_system.yaml
[2016-08-05T11:12Z] Resource requests: bwa, sambamba, samtools; memory: 2.00, 2.00; cores: 16, 16, 16
[2016-08-05T11:12Z] Configuring 1 jobs to run, using 1 cores each with 2.00g of memory reserved for each job
[2016-08-05T11:12Z] run local -- checkpoint passed: multicore
[2016-08-05T11:12Z] Timing: organize samples
[2016-08-05T11:12Z] multiprocessing: organize_samples
[2016-08-05T11:12Z] Using input YAML configuration: /home/genetik/calling/haloplex/160708_M00568_0174_000000000-ARHBP/calling/config/calling.yaml
[2016-08-05T11:12Z] Checking sample YAML configuration: /home/genetik/calling/haloplex/160708_M00568_0174_000000000-ARHBP/calling/config/calling.yaml
[2016-08-05T11:12Z] Testing minimum versions of installed programs
[2016-08-05T11:13Z] Timing: alignment preparation
[2016-08-05T11:13Z] multiprocessing: prep_align_inputs
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] multiprocessing: disambiguate_split
[2016-08-05T11:13Z] Timing: alignment
[2016-08-05T11:13Z] multiprocessing: process_alignment
[2016-08-05T11:13Z] Aligning lane 16-4623 with bwa aligner
[2016-08-05T11:13Z] Aligning lane 16-4653 with bwa aligner
[2016-08-05T11:13Z] Aligning lane 16-4669 with bwa aligner
[2016-08-05T11:13Z] Aligning lane 16-4670 with bwa aligner
[2016-08-05T11:13Z] Aligning lane 16-4672 with bwa aligner
[2016-08-05T11:13Z] Aligning lane 16-4884 with bwa aligner
[2016-08-05T11:13Z] Timing: callable regions
[2016-08-05T11:13Z] multiprocessing: prep_samples
[2016-08-05T11:13Z] multiprocessing: postprocess_alignment
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] Resource requests: ; memory: 1.00; cores: 1
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 1.00g of memory reserved for each job
[2016-08-05T11:13Z] multiprocessing: combine_sample_regions
[2016-08-05T11:13Z] Identified 66 parallel analysis blocks
Block sizes:
min: 1661458
5%: 9448122.5
25%: 19335508.25
median: 30527962.0
75%: 51787653.75
95%: 97588388.25
99%: 121930600.3
max: 127595531
Between block sizes:
min: 287
5%: 344.75
25%: 1003.25
median: 2365.5
75%: 15080.5
95%: 142929.25
99%: 24990570.5
max: 43839113
[2016-08-05T11:13Z] Timing: hla typing
[2016-08-05T11:13Z] Resource requests: freebayes, gatk, gatk-haplotype, picard, platypus, samtools; memory: 2.00, 3.50, 3.50, 3.50, 2.00, 2.00; cores: 16, 1, 1, 1, 16, 16
[2016-08-05T11:13Z] Configuring 1 jobs to run, using 1 cores each with 3.50g of memory reserved for each job
[2016-08-05T11:13Z] Timing: alignment post-processing
[2016-08-05T11:13Z] multiprocessing: piped_bamprep
[2016-08-05T11:13Z] GATK pre-alignment ('chr1', 0, 25870287) : 16-4623
[2016-08-05T11:13Z] ##### ERROR ------------------------------------------------------------------------------------------
[2016-08-05T11:13Z] ##### ERROR A USER ERROR has occurred (version 3.6-24-g59fd391):
[2016-08-05T11:13Z] ##### ERROR
[2016-08-05T11:13Z] ##### ERROR This means that one or more arguments or inputs in your command are incorrect.
[2016-08-05T11:13Z] ##### ERROR The error message below tells you what is the problem.
[2016-08-05T11:13Z] ##### ERROR
[2016-08-05T11:13Z] ##### ERROR If the problem is an invalid argument, please check the online documentation guide
[2016-08-05T11:13Z] ##### ERROR (or rerun your command with --help) to view allowable command-line arguments for this tool.
[2016-08-05T11:13Z] ##### ERROR
[2016-08-05T11:13Z] ##### ERROR Visit our website and forum for extensive documentation and answers to
[2016-08-05T11:13Z] ##### ERROR commonly asked questions https://www.broadinstitute.org/gatk
[2016-08-05T11:13Z] ##### ERROR
[2016-08-05T11:13Z] ##### ERROR Please do NOT post this error to the GATK forum unless you have really tried to fix it yourself.
[2016-08-05T11:13Z] ##### ERROR
[2016-08-05T11:13Z] ##### ERROR MESSAGE: Invalid argument value 'realign' at position 6.
[2016-08-05T11:13Z] ##### ERROR Invalid argument value 'recal' at position 7.
[2016-08-05T11:13Z] ##### ERROR ------------------------------------------------------------------------------------------
[2016-08-05T11:13Z] Uncaught exception occurred
Traceback (most recent call last):
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/provenance/do.py", line 21, in run
_do_run(cmd, checks, log_stdout)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/provenance/do.py", line 95, in _do_run
raise subprocess.CalledProcessError(exitcode, error_msg)
CalledProcessError: Command 'set -o pipefail; unset JAVA_HOME && export PATH=/home/genetik/bcbio/share/bcbio/anaconda/bin:$PATH && /home/genetik/bcbio/share/bcbio/anaconda/bin/gatk-framework -Xms750m -Xmx2000m -XX:+UseSerialGC -U LENIENT_VCF_PROCESSING --read_filter BadCigar --read_filter NotPrimaryAlignment realign recal -o /home/genetik/calling/haloplex/160708_M00568_0174_000000000-ARHBP/calling/work/bamprep/16-4623/chr1/tx/tmp8gCxnZ/16-4623-sort-chr1_0_25870287-prep-prealign.bam
##### ERROR ------------------------------------------------------------------------------------------
##### ERROR A USER ERROR has occurred (version 3.6-24-g59fd391):
##### ERROR
##### ERROR This means that one or more arguments or inputs in your command are incorrect.
##### ERROR The error message below tells you what is the problem.
##### ERROR
##### ERROR If the problem is an invalid argument, please check the online documentation guide
##### ERROR (or rerun your command with --help) to view allowable command-line arguments for this tool.
##### ERROR
##### ERROR Visit our website and forum for extensive documentation and answers to
##### ERROR commonly asked questions https://www.broadinstitute.org/gatk
##### ERROR
##### ERROR Please do NOT post this error to the GATK forum unless you have really tried to fix it yourself.
##### ERROR
##### ERROR MESSAGE: Invalid argument value 'realign' at position 6.
##### ERROR Invalid argument value 'recal' at position 7.
##### ERROR ------------------------------------------------------------------------------------------
' returned non-zero exit status 1
Traceback (most recent call last):
File "/home/genetik/bcbio/bin/bcbio_nextgen.py", line 226, in <module>
main(**kwargs)
File "/home/genetik/bcbio/bin/bcbio_nextgen.py", line 43, in main
run_main(**kwargs)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/pipeline/main.py", line 43, in run_main
fc_dir, run_info_yaml)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/pipeline/main.py", line 87, in _run_toplevel
for xs in pipeline(config, run_info_yaml, parallel, dirs, samples):
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/pipeline/main.py", line 148, in variant2pipeline
samples = region.parallel_prep_region(samples, run_parallel)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/pipeline/region.py", line 138, in parallel_prep_region
"piped_bamprep", _add_combine_info, file_key, ["config"])
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/distributed/split.py", line 59, in parallel_split_combine
split_output = parallel_fn(parallel_name, split_args)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/distributed/multi.py", line 28, in run_parallel
return run_multicore(fn, items, config, parallel=parallel)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/distributed/multi.py", line 86, in run_multicore
for data in joblib.Parallel(parallel["num_jobs"])(joblib.delayed(fn)(x) for x in items):
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/joblib/parallel.py", line 800, in __call__
while self.dispatch_one_batch(iterator):
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/joblib/parallel.py", line 658, in dispatch_one_batch
self._dispatch(tasks)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/joblib/parallel.py", line 566, in _dispatch
job = ImmediateComputeBatch(batch)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/joblib/parallel.py", line 180, in __init__
self.results = batch()
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/joblib/parallel.py", line 72, in __call__
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/utils.py", line 51, in wrapper
return apply(f, *args, **kwargs)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/distributed/multitasks.py", line 120, in piped_bamprep
return bamprep.piped_bamprep(*args)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/variation/bamprep.py", line 142, in piped_bamprep
_piped_bamprep_region(data, region, out_file, tmp_dir)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/variation/bamprep.py", line 120, in _piped_bamprep_region
_piped_bamprep_region_gatk(data, region, prep_params, out_file, tmp_dir)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/variation/bamprep.py", line 89, in _piped_bamprep_region_gatk
prep_params)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/variation/bamprep.py", line 62, in _piped_realign_gatk
do.run(cmd, "GATK pre-alignment {0}".format(region), data)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/provenance/do.py", line 21, in run
_do_run(cmd, checks, log_stdout)
File "/home/genetik/bcbio/share/bcbio/anaconda/lib/python2.7/site-packages/bcbio/provenance/do.py", line 95, in _do_run
raise subprocess.CalledProcessError(exitcode, error_msg)
subprocess.CalledProcessError: Command 'set -o pipefail; unset JAVA_HOME && export PATH=/home/genetik/bcbio/share/bcbio/anaconda/bin:$PATH && /home/genetik/bcbio/share/bcbio/anaconda/bin/gatk-framework -Xms750m -Xmx2000m -XX:+UseSerialGC -U LENIENT_VCF_PROCESSING --read_filter BadCigar --read_filter NotPrimaryAlignment realign recal -o /home/genetik/calling/haloplex/160708_M00568_0174_000000000-ARHBP/calling/work/bamprep/16-4623/chr1/tx/tmp8gCxnZ/16-4623-sort-chr1_0_25870287-prep-prealign.bam
##### ERROR ------------------------------------------------------------------------------------------
##### ERROR A USER ERROR has occurred (version 3.6-24-g59fd391):
##### ERROR
##### ERROR This means that one or more arguments or inputs in your command are incorrect.
##### ERROR The error message below tells you what is the problem.
##### ERROR
##### ERROR If the problem is an invalid argument, please check the online documentation guide
##### ERROR (or rerun your command with --help) to view allowable command-line arguments for this tool.
##### ERROR
##### ERROR Visit our website and forum for extensive documentation and answers to
##### ERROR commonly asked questions https://www.broadinstitute.org/gatk
##### ERROR
##### ERROR Please do NOT post this error to the GATK forum unless you have really tried to fix it yourself.
##### ERROR
##### ERROR MESSAGE: Invalid argument value 'realign' at position 6.
##### ERROR Invalid argument value 'recal' at position 7.
##### ERROR ------------------------------------------------------------------------------------------
' returned non-zero exit status 1
Pär -- thanks for the additional details, this helps a lot to identify the bug. This was a problem with the transition to using GATK 3.6 (which has realignment in the freely available version). There was a bug in the code that used realignment but not recalibration. If you update to the latest development version it should work cleanly for you now. Sorry about the problem and thank you for the help tracking it down.
Great, thanks!
|
gharchive/issue
| 2016-08-03T10:09:24 |
2025-04-01T04:56:17.367209
|
{
"authors": [
"chapmanb",
"parlar"
],
"repo": "chapmanb/bcbio-nextgen",
"url": "https://github.com/chapmanb/bcbio-nextgen/issues/1497",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1106088013
|
Box Characters not showing
When viewing the font "Source Sans Pro", multiple Box Characters such as U+251C are not listed in the Characters Pane. The same Box Characters are verified as present in the font by viewing it in Windows Character Map under the Unicode Subrange "Box Characters".
I became aware of the issue because I was using Source Sans Pro in Notepad++ and was able to enter the characters but not able to browse them in Character Map UWP.
I hope this can be resolved because Character Map UWP is infinitely superior to Windows Character Map but it does need to be accurate for me to use it.
Do you know what version of Source Sans Pro you're using? The ones I have installed show fine 🤔
|
gharchive/issue
| 2022-01-17T17:05:07 |
2025-04-01T04:56:17.375778
|
{
"authors": [
"JohnnyWestlake",
"edwinbradford"
],
"repo": "character-map-uwp/Character-Map-UWP",
"url": "https://github.com/character-map-uwp/Character-Map-UWP/issues/188",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
719106371
|
After a video is renamed on Android, the old path is still read, and playing with that path raises an error
Caching is a serious problem: after deleting a video from the gallery, its path and thumbnail are still read.
Android or iOS?
Android
|
gharchive/issue
| 2020-10-12T06:47:47 |
2025-04-01T04:56:17.390421
|
{
"authors": [
"Zuozihao",
"charlesYun"
],
"repo": "charlesYun/photo_album_manager",
"url": "https://github.com/charlesYun/photo_album_manager/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
328022552
|
Add a basic contributing guide.
Hello @charleskorn,
thanks for your contribution to open source. This tool has made my life easier.
It's not much, but I'd like to start a contributing guide to make it easier for others to collaborate.
Best regards.
Great idea, thanks @pameck!
|
gharchive/pull-request
| 2018-05-31T07:21:57 |
2025-04-01T04:56:17.391643
|
{
"authors": [
"charleskorn",
"pameck"
],
"repo": "charleskorn/batect",
"url": "https://github.com/charleskorn/batect/pull/6",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1310223570
|
Preload Language Data Once, then Set in Engine
In my program I am using 10 or so TesseractEngines to work in parallel. One of the issues I am seeing (that could be an enhancement) is that each time you create a new TesseractEngine instance, it has to reload the training data.
It would be great if you could just load it once and "send it into" the engine. The eng language file for v4.0 is ~22MB, but when you make 10 engines, you're then putting ~220MB in memory, way more than you need.
This would be a great improvement for parallelization.
I now realize that it is an issue for the cpp library and not this one, sorry!
|
gharchive/issue
| 2022-07-20T00:19:08 |
2025-04-01T04:56:17.393170
|
{
"authors": [
"alexgwalley"
],
"repo": "charlesw/tesseract",
"url": "https://github.com/charlesw/tesseract/issues/618",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2376162528
|
Error on creating TesseractEngine
Hi!
I'm starting with Tesseract and have this code, that runs very well on Windows:
using (var engine = new TesseractEngine("tessdata", "por"))
{
var image = Pix.LoadFromFile(filePath);
var page = engine.Process(image);
text = page.GetText();
}
But I need run this one on Linux, most specifically on Mint distribution, and I use in this form:
using (var engine = new TesseractEngine("./tessdata", "por"))
{
var image = Pix.LoadFromFile(filePath);
var page = engine.Process(image);
text = page.GetText();
}
And I receive this error in the 'new TesseractEngine' line:
Unhandled exception. System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
---> System.ArgumentNullException: Value cannot be null. (Parameter 'path1')
at System.ArgumentNullException.Throw(String paramName)
at System.IO.Path.Combine(String path1, String path2)
at InteropDotNet.LibraryLoader.InternalLoadLibrary(String baseDirectory, String platformName, String fileName)
at InteropDotNet.LibraryLoader.CheckExecutingAssemblyDomain(String fileName, String platformName)
at InteropDotNet.LibraryLoader.LoadLibrary(String fileName, String platformName)
at InteropRuntimeImplementer.LeptonicaApiSignaturesInstance.LeptonicaApiSignaturesImplementation..ctor(LibraryLoader loader)
at System.RuntimeMethodHandle.InvokeMethod(Object target, Void** arguments, Signature sig, Boolean isConstructor)
at System.Reflection.MethodBaseInvoker.InvokeDirectByRefWithFewArgs(Object obj, Span`1 copyOfArgs, BindingFlags invokeAttr)
--- End of inner exception stack trace ---
at System.Reflection.MethodBaseInvoker.InvokeDirectByRefWithFewArgs(Object obj, Span`1 copyOfArgs, BindingFlags invokeAttr)
at System.Reflection.MethodBaseInvoker.InvokeWithOneArg(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture)
at InteropDotNet.InteropRuntimeImplementer.CreateInstance[T]()
at Tesseract.Interop.LeptonicaApi.Initialize()
at Tesseract.Interop.TessApi.Initialize()
at Tesseract.Interop.TessApi.get_Native()
at Tesseract.TesseractEngine..ctor(String datapath, String language, EngineMode engineMode, IEnumerable`1 configFiles, IDictionary`2 initialOptions, Boolean setOnlyNonDebugVariables)
at Tesseract.TesseractEngine..ctor(String datapath, String language)
at TesteConversaoPdfParaImagem.Program.ReadImage(String prefix, String filePath, String resultFileName, Boolean isSingleblock) in C:\Users\carlo\source\repos\TesteConversaoPdfParaImagem\TesteConversaoPdfParaImagem\Program.cs:line 127
at TesteConversaoPdfParaImagem.Program.Main(String[] args) in C:\Users\carlo\source\repos\TesteConversaoPdfParaImagem\TesteConversaoPdfParaImagem\Program.cs:line 53
Can you help me with this, please?
Thanks a lot, guys! :)
System.ArgumentNullException: Value cannot be null. (Parameter 'path1')
at System.ArgumentNullException.Throw(String paramName)
at System.IO.Path.Combine(String path1, String path2)
I am no expert but, it's definitely about your path :D.
I would Console.WriteLine(filePath); to see what the result of your Path.Combine is. Probably you have null there.
Yesterday I discovered the same error on the project that I just started. When I debug from Visual Studio there is no issue. When I publish and run it somewhere else, I get the same error message.
I discovered that I get the error when I enable the checkbox "Produce single file". When it's enabled I get the error. When it's disabled, I don't get the error.
In both situations the tessdata folder is in the same location (on the same level as my console executable). Also, in both situations I confirmed my code was pointing to the right location by adding a console message:
string tesseractDataPath = Path.Combine(AppContext.BaseDirectory, "tessdata");
Console.WriteLine($"Tesseract data folder: {tesseractDataPath}");
using var ocrEngine = new TesseractEngine(tesseractDataPath, "nld+eng", EngineMode.Default);
Hi guys!
I found the error. It happens in the class 'InteropDotNet.LibraryLoader', at line 86:
var baseDirectory = Path.GetDirectoryName(executingAssembly.Location);
The code executingAssembly.Location returns null, and that is the cause of the crash. I created a handler class called 'EnvironmentUtils.cs' in 'Tesseract.Internal' with this code:
using System;
using System.IO;
using System.Reflection;
namespace Tesseract.Internal
{
internal static class EnvironmentUtils
{
public static string AppPath(Assembly assembly)
{
string appPath = Path.GetDirectoryName(Path.GetDirectoryName(assembly?.Location));
if (!string.IsNullOrWhiteSpace(appPath))
return appPath;
return AppPath();
}
public static string AppPath()
{
string appPath;
appPath = Directory.GetCurrentDirectory();
if (!string.IsNullOrWhiteSpace(appPath))
return appPath;
appPath = Environment.CurrentDirectory;
if (!string.IsNullOrWhiteSpace(appPath))
return appPath;
appPath = Path.Combine(AppContext.BaseDirectory, "tessdata");
if (!string.IsNullOrWhiteSpace(appPath))
return appPath;
appPath = Path.GetDirectoryName(Path.GetDirectoryName(Assembly.GetEntryAssembly()?.Location));
if (!string.IsNullOrWhiteSpace(appPath))
return appPath;
appPath = AppDomain.CurrentDomain.BaseDirectory;
if (!string.IsNullOrWhiteSpace(appPath))
return appPath;
throw new ArgumentNullException("Application path not found");
}
}
}
And that problem is solved. But after this, the error changed to this:
Unhandled exception. System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.DllNotFoundException: Failed to find library "libleptonica-1.82.0.so" for platform x64.
at InteropDotNet.LibraryLoader.LoadLibrary(String fileName, String platformName) in D:\git\tesseract\src\Tesseract\Internal\InteropDotNet\LibraryLoader.cs:line 57
at InteropRuntimeImplementer.LeptonicaApiSignaturesInstance.LeptonicaApiSignaturesImplementation..ctor(LibraryLoader loader)
at System.RuntimeMethodHandle.InvokeMethod(Object target, Void** arguments, Signature sig, Boolean isConstructor)
at System.Reflection.MethodBaseInvoker.InvokeDirectByRefWithFewArgs(Object obj, Span`1 copyOfArgs, BindingFlags invokeAttr)
--- End of inner exception stack trace ---
at System.Reflection.MethodBaseInvoker.InvokeDirectByRefWithFewArgs(Object obj, Span`1 copyOfArgs, BindingFlags invokeAttr)
at System.Reflection.MethodBaseInvoker.InvokeWithOneArg(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture)
at InteropDotNet.InteropRuntimeImplementer.CreateInstance[T]() in D:\git\tesseract\src\Tesseract\Internal\InteropDotNet\InteropRuntimeImplementer.cs:line 45
at Tesseract.Interop.LeptonicaApi.Initialize() in D:\git\tesseract\src\Tesseract\Interop\LeptonicaApi.cs:line 563
at Tesseract.Interop.TessApi.Initialize() in D:\git\tesseract\src\Tesseract\Interop\BaseApi.cs:line 583
at Tesseract.Interop.TessApi.get_Native() in D:\git\tesseract\src\Tesseract\Interop\BaseApi.cs:line 372
at Tesseract.TesseractEngine..ctor(String datapath, String language, EngineMode engineMode, IEnumerable`1 configFiles, IDictionary`2 initialOptions, Boolean setOnlyNonDebugVariables) in D:\git\tesseract\src\Tesseract\TesseractEngine.cs:line 181
at Tesseract.TesseractEngine..ctor(String datapath, String language) in D:\git\tesseract\src\Tesseract\TesseractEngine.cs:line 37
at TesteConversaoPdfParaImagem.Program.ReadImage(String prefix, String filePath, String resultFileName, Boolean isSingleblock) in C:\Users\carlo\source\repos\TesteConversaoPdfParaImagem\TesteConversaoPdfParaImagem\Program.cs:line 147
at TesteConversaoPdfParaImagem.Program.Extract(String pdfName, String readedFilePath) in C:\Users\carlo\source\repos\TesteConversaoPdfParaImagem\TesteConversaoPdfParaImagem\Program.cs:line 68
at TesteConversaoPdfParaImagem.Program.Main(String[] args) in C:\Users\carlo\source\repos\TesteConversaoPdfParaImagem\TesteConversaoPdfParaImagem\Program.cs:line 14
Next week I expect to return and look at this new problem.
Thanks for your help :)
Thanks, I'll see if I can find some time this evening to put in a fix. If I don't, feel free to create a pull request with the fix and I'll merge it in. Sorry I haven't been all that active lately; pretty much all my free time is taken up with home renos/repairs.
|
gharchive/issue
| 2024-06-26T20:00:04 |
2025-04-01T04:56:17.420128
|
{
"authors": [
"Vernizze",
"Yanik39",
"charlesw",
"mitchuhl"
],
"repo": "charlesw/tesseract",
"url": "https://github.com/charlesw/tesseract/issues/673",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1288642810
|
[Feature] Support for non-uint8 and/or non-RGB images
Following up on #15, supporting, say, uint16 images and grayscale images would allow more widespread utility of the package.
Support for grayscale and common dtypes added
|
gharchive/issue
| 2022-06-29T12:21:32 |
2025-04-01T04:56:17.421974
|
{
"authors": [
"charliebudd",
"tvercaut"
],
"repo": "charliebudd/torch-content-area",
"url": "https://github.com/charliebudd/torch-content-area/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
733979047
|
Modify the log_dir and log_dir_pretrain
What files should be put in the paths log_dir and log_dir_pretrain?
Hi Yao1up, these directories are the parent directories where you store the models, please check the following lines.
https://github.com/charliememory/Disentangled-Person-Image-Generation/blob/e4703860bb1b351050ce50f339985ff0811f1d64/run_market_train.sh#L19
https://github.com/charliememory/Disentangled-Person-Image-Generation/blob/e4703860bb1b351050ce50f339985ff0811f1d64/run_market_train.sh#L48
|
gharchive/issue
| 2020-11-01T14:24:33 |
2025-04-01T04:56:17.423718
|
{
"authors": [
"Yao1up",
"charliememory"
],
"repo": "charliememory/Disentangled-Person-Image-Generation",
"url": "https://github.com/charliememory/Disentangled-Person-Image-Generation/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1746819015
|
v0.0.271 pipeline failing on homebrew
Not sure if this belongs here, but the pipeline is failing, so my brew and pre-commit versions are out of sync (select = ["ALL"] user here). I have since uninstalled the homebrew version and installed using pipx. Just FYI.
It looks like this is because they still need to upgrade to the latest version of Rust: https://github.com/Homebrew/homebrew-core/pull/132584
Thank you so much @chenrui333.
|
gharchive/issue
| 2023-06-07T23:21:42 |
2025-04-01T04:56:17.426026
|
{
"authors": [
"ColemanDunn",
"charliermarsh",
"madkinsz"
],
"repo": "charliermarsh/ruff",
"url": "https://github.com/charliermarsh/ruff/issues/4946",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1714906531
|
Avoid autofixing within nested f-strings
Summary
The permitted grammar within a nested f-string is really hard to work with. For example, there's typically no way to include quotation marks, unless one of the surrounding strings is triple-quoted? Or something like that. So, again, for example, our rule that flags usages of str() can run into problems when it sees f"{f'{str()}'}" -- there's no way to rewrite str() as "" or ''.
I've turned off that specific rule in nested f-strings, but this PR goes one step further, and just avoids attempting to autofix within nested f-strings altogether. This is a pretty bad fix, since we'll still end up using unparse_expr within nested f-strings to format some diagnostics, which means we'll show users code and suggestions that won't quite work. But, our options are pretty limited here.
Would this require that we go through every rule and add a check for ctx.in_nested_f_string(), and use the appropriate applicability method if so?
Would this require that we go through every rule and add a check for ctx.in_nested_f_string(), and use the appropriate applicability method if so?
Hmm yeah. That sounds annoying and very fragile
One note to add here is that most of these will be valid syntax with PEP 701 implemented, currently targeted for Python 3.12; i.e., we could very ambitiously allow autofixes in f-strings for projects with a minimum Python version of 3.12.
Originally posted by @konstin in https://github.com/charliermarsh/ruff/issues/4324#issuecomment-1540775994
|
gharchive/pull-request
| 2023-05-18T02:59:56 |
2025-04-01T04:56:17.430225
|
{
"authors": [
"MichaReiser",
"T-256",
"charliermarsh"
],
"repo": "charliermarsh/ruff",
"url": "https://github.com/charliermarsh/ruff/pull/4488",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
161272672
|
spanGaps does not work well
I am using v2.1.6
I tried to create a continuous line when there is a null point in the middle.
However, it does not work well in some situations.
For example
{
label: "A random label",
data: [20, 30, 80, 90, 50, null, 30],
spanGaps: true
}
{
label: "A random label",
data: [20, 30, 80, 90, null, 20, 30],
spanGaps: true
}
@etimberg I would like to work on this issue; any recommendations on where to start, file-wise, if no one else is working on it already?
Looking at it now, this should not be too hard to fix. It only occurs when the line tension is set to something that is not equal to 0, which leads me to believe it is an issue with the way the if statements are set up in this code.
else if (point._view.tension === 0) {
ctx.lineTo(point._view.x, point._view.y);
} else {
// Line between points
ctx.bezierCurveTo(
previousPoint._view.controlPointNextX,
previousPoint._view.controlPointNextY,
point._view.controlPointPreviousX,
point._view.controlPointPreviousY,
point._view.x,
point._view.y
);
}
I will continue looking into this and try to fix it :)
@nmac143 feel free to work on this :)
@etimberg Hi, v2.2.0 did not fix this issue. I updated to v2.2.0, but still same.
@eemikula thanks! works well now
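For anyone pinned to an affected 2.x build, here is a minimal workaround sketch (assuming a canvas element with id 'chart'; the id and labels are placeholders, not from the original thread). Setting lineTension to 0 forces the ctx.lineTo() branch quoted above, so spanGaps bridges the null point:
import Chart from 'chart.js';
// Workaround sketch: lineTension: 0 avoids the bezier branch where the bug was observed.
const chart = new Chart('chart', {
  type: 'line',
  data: {
    labels: ['a', 'b', 'c', 'd', 'e', 'f', 'g'],
    datasets: [{
      label: 'A random label',
      data: [20, 30, 80, 90, null, 20, 30],
      spanGaps: true,   // draw a continuous line across the gap
      lineTension: 0,   // straight segments instead of bezier curves
    }],
  },
});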
|
gharchive/issue
| 2016-06-20T19:18:49 |
2025-04-01T04:56:17.440595
|
{
"authors": [
"Hongbo-Miao",
"etimberg",
"nmac143"
],
"repo": "chartjs/Chart.js",
"url": "https://github.com/chartjs/Chart.js/issues/2812",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
168778866
|
options.events property works weird
I want to make my chart static, not showing any tooltips and/or hover highlighting.
So I set options.events = [] for the chart. This doesn't work:
https://jsfiddle.net/wcfp91bh/
If I set options.events = ["fakeEvent"], it works better but still shows tooltips for the "click" event:
https://jsfiddle.net/h7g8hngn/
Idk, but seems it shouldn't work like this.
Any comments?
You can set the global option like this:
Chart.defaults.global.tooltips.enabled = false;
Then it will work. You can try it.
@rain-js thanks for this tip. I know it.
The problem is not in the tooltips but in any event-based behavior. I don't want lines and points to be highlighted and hovered either.
If I do add tooltips.enabled = false to the chart's options, you can see that this won't help to achieve what I want. That's best seen on a Doughnut:
https://jsfiddle.net/7ewnxsu6/
I'm not sure why options.events = [] doesn't work properly. As described in the docs, this assignment should remove any reaction of the chart to mouse/touch or other events. Obviously it doesn't, which looks like wrong behavior. Also, I don't understand why options.events = ["fake"] actually works for some events (the mouseover event) but not all (the click event).
@demoalex uh, i got it. You also can set the global option like this.
Chart.defaults.global.events = [];
Then it will remove any reaction of chart to mouse/touch or other events.
Try it.
@demoalex You are right.
options.events = [ ]
It won't work. I think it's a bug. @etimberg
but you also can set the global option:
Chart.defaults.global.events = [];
It can solve your problem right now.
Yeah, this looks like a bug. I think it might be in the code that merges properties from the global config to the local config.
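To summarize the workaround in one runnable sketch (Chart.js 2.x assumed; 'myChart' is a placeholder canvas id, not from the thread) - set the global defaults before constructing any chart:
import Chart from 'chart.js';
// Disable all event handling and tooltips globally, before any chart is created.
Chart.defaults.global.events = [];
Chart.defaults.global.tooltips.enabled = false;
// This doughnut is now fully static: no hover highlighting, no tooltips.
const chart = new Chart('myChart', {
  type: 'doughnut',
  data: { labels: ['A', 'B'], datasets: [{ data: [60, 40] }] },
});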
|
gharchive/issue
| 2016-08-02T01:21:49 |
2025-04-01T04:56:17.447045
|
{
"authors": [
"demoalex",
"etimberg",
"rain-js"
],
"repo": "chartjs/Chart.js",
"url": "https://github.com/chartjs/Chart.js/issues/3075",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
328908230
|
Remove moment from public API
We should remove moment from the public API and just use timestamps and Dates
We should also consider dropping support for Moment entirely since it's purely worse than Luxon. I would especially like to get rid of Moment since it lacks timezone support and is very large
I've been switching everything over to Luxon, but cannot switch web apps that use Chart.js since Chart.js needs Moment anyway. How much of Moment does Chart.js actually use? And could it be replaced with a (much smaller) utility?
@jazoom you may be interested in https://github.com/chartjs/Chart.js/pull/5522
@benmccann Thanks for pointing that out. I searched the issues but didn't find that one for some reason.
@benmccann any idea what the hold-up is for accepting #5522 pull request?
The reviewers are all busy and doing this just as a volunteer effort on top of their day jobs
OK, thanks for clarifying, just wondering if there was some deeper refactoring required or if it somehow hinged on a future major release. Looking forward to its acceptance so that moment.js stops bloating production builds!
Hi, I don't know Luxon but you could consider dayjs?
In my app I just did a search and replace moment -> dayjs and everything worked fine.
@AoDev did the same. It wasn't a simple search and replace, but I was able to migrate the whole app from moment to dayjs within 15 minutes (the only issue I encountered: the en-gb locale, which worked in moment, has to be renamed to en in dayjs).
Now I need to get rid of the moment dependency of chartjs
date-fns would be a good option since you can require/import just the functions you need without bundling the entire library... I am able to get chart.js to work fine using chartjs-plugin-datalabels, but I have to add moment as an external in the webpack config to prevent it from getting bundled (because of chart.js). This is more of a nuisance than it sounds since I'm using create-react-app.
Perhaps using dayjs, which offers the same functionalities as moment.js, but with a fraction of the bundle size, would be a better option IMHO
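To illustrate the swap described above, a sketch of the moment-to-dayjs migration (assuming a dayjs version that ships the en-gb locale as a separate module; unlike moment, the locale must be imported explicitly before use):
import dayjs from 'dayjs';
import 'dayjs/locale/en-gb';
// Activate the British English locale; with moment this was bundled by default.
dayjs.locale('en-gb');
// The chainable formatting API matches moment for common cases.
const label = dayjs('2018-06-04').format('DD MMM YYYY'); // "04 Jun 2018"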
|
gharchive/issue
| 2018-06-04T04:57:06 |
2025-04-01T04:56:17.452956
|
{
"authors": [
"AoDev",
"benmccann",
"darkmavis1980",
"edclement",
"gonzochic",
"jazoom",
"shongololo"
],
"repo": "chartjs/Chart.js",
"url": "https://github.com/chartjs/Chart.js/issues/5542",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
485250460
|
JavaScript warnings: reference to undefined property + test for equality misstyped? (latest release)
WARNINGS:
SyntaxError: test for equality (==) mistyped as assignment (=)?
Chart.bundle.js:1000:37
SyntaxError: test for equality (==) mistyped as assignment (=)?
Chart.bundle.js:1010:38
SyntaxError: test for equality (==) mistyped as assignment (=)?
Chart.bundle.js:1016:37
SyntaxError: test for equality (==) mistyped as assignment (=)?
Chart.bundle.js:1022:41
SyntaxError: test for equality (==) mistyped as assignment (=)?
Chart.bundle.js:1092:34
SyntaxError: test for equality (==) mistyped as assignment (=)?
Chart.bundle.js:1095:33
SyntaxError: test for equality (==) mistyped as assignment (=)?
Chart.bundle.js:1220:44
SyntaxError: test for equality (==) mistyped as assignment (=)?
Chart.bundle.js:1222:43
ReferenceError: reference to undefined property "abbr"
Chart.bundle.js:13692:21
ReferenceError: reference to undefined property "id"
Chart.bundle.js:7197:7
ReferenceError: reference to undefined property "devicePixelRatio"
Chart.bundle.js:8484:3
ReferenceError: reference to undefined property "devicePixelRatio"
Chart.bundle.js:8537:3
ReferenceError: reference to undefined property "xAxes"
Chart.bundle.js:8561:3
ReferenceError: reference to undefined property "bodyFontFamily"
Chart.bundle.js:7523:3
ReferenceError: reference to undefined property "_data"
Chart.bundle.js:3181:7
ReferenceError: reference to undefined property "fontSize"
Chart.bundle.js:2642:14
ReferenceError: reference to undefined property "label"
Chart.bundle.js:4648:5
ReferenceError: reference to undefined property "fontColor"
Chart.bundle.js:18630:20
ReferenceError: reference to undefined property "lineCap"
Chart.bundle.js:18657:5
In javascript console of firefox 68.0.2 64bit on Ubuntu 18.04.
Using: https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.8.0/Chart.bundle.js
See live demo here:
http://community-registry.ff-hamm.de/
Charts are loading, but I can see WARNINGS (22) and DEBUG (1) messages in the Firefox JavaScript Console. See screenshot:
Here I tested in a private tab in Firefox, to be sure that it is not a browser plugin causing trouble...
I can't reproduce the warnings?
Same errors on Firefox 70.0.1 on Windows 10.
These problems do not occur in Chrome 78.0.3904.70 on Windows 10.
Did you try to reproduce it with Firefox?
@kurkle @benmccann
If I load https://www.chartjs.org/samples/latest/charts/pie.html on Firefox for Linux I do not see any issues in the console
Regarding my earlier comment (https://github.com/chartjs/Chart.js/issues/6488#issuecomment-524934609), Chart.js 2.9.2 will have a fix for the chartjs-color syntax issues.
I am using 2.8.0; I will test 2.9.1.
Same issue on 2.9.1; I will wait for 2.9.2.
I updated to 2.9.3. And my Firefox is now a 70.0.1.
SyntaxErrors dropped from 8 to 6 entries.
"reference to undefined property" has increased from 12 to 13 entries.
But this do not relate to 2.9.3. With 2.9.1 the numbers are the same.
So your color fix did not fixed that.
I recommend adding "sourceMappingURL" to your minified JavaScript file and providing a source map. This makes debugging easier and would enable your browser to provide real-world line numbers. See: https://developer.mozilla.org/de/docs/Tools/Debugger/How_to/Use_a_source_map for details.
This may help you to fix the "SyntaxErrors":
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Errors/Equal_as_assign
As your script wants to be executed in strict mode.
I also recommend using ESLint to lint your JavaScript source code.
@christian-weiss when I view http://community-registry.ff-hamm.de/ in Firefox 70.0.1 on Linux I do not see any warnings in the console.
Also, when I look at line 7 character 179 I do not see any reference to moment as your screenshot suggests
We do use eslint
Strangely, I can see it on 3 systems (2x Windows 10, 1x Ubuntu 19.10), always on the latest Firefox, even weeks later.
I cannot reproduce it on 2 other systems (1x Ubuntu, 1x macOS), and not even with Ubuntu 19.10 from osboxes.org (VirtualBox).
So I guess it is something system-specific - but it is not related to plugins (tested in private tabs), not to Firefox profiles (fresh profile), not to cache - strange.
I'll close the ticket, as I will ignore these warnings now (the investigation already took a lot of time). If I find something more, I may re-open the issue in the future.
|
gharchive/issue
| 2019-08-26T13:55:42 |
2025-04-01T04:56:17.465533
|
{
"authors": [
"benmccann",
"christian-weiss",
"kurkle"
],
"repo": "chartjs/Chart.js",
"url": "https://github.com/chartjs/Chart.js/issues/6488",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
724195640
|
Bower package for 2.9.4 release is not including dist folder and files
Expected Behavior
Using bower install for 2.9.4 should install distribution package
e.g. include "dist" folder and files
Current Behavior
The package installs the GitHub repo code
Steps to Reproduce
Run "bower install" with 2.9.4 release as a dependency
Context
Existing builds fail as the 'dist' folder and expected files are missing
Can this problem be fixed as soon as possible?
bower.json is missing
Hello,
Same problem here.
Our project uses angular-chart.js dependency, and angular-chart.js bower.json is too permissive (version 2.X of Chart.js).
Chart.js version 2.9.3 is OK, but version 2.9.4 fails, so our web application crashes.
Can we help you to solve the problem ?
@f3cp @HereWeR @pblanchardie @fresnault I believe I have fixed this issue by repushing the v2.9.4 tag here on GitHub. I can see the dist folder & bower.json file. Can you confirm that this works for you?
@etimberg It works for us after cleaning the bower cache. Thank you for fixing this so quickly.
|
gharchive/issue
| 2020-10-19T01:36:26 |
2025-04-01T04:56:17.470390
|
{
"authors": [
"HereWeR",
"etimberg",
"f3cp",
"fresnault",
"pblanchardie"
],
"repo": "chartjs/Chart.js",
"url": "https://github.com/chartjs/Chart.js/issues/7927",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
398462100
|
Make moment optional from our UMD builds
Create a rollup plugin altering the UMD header to wrap optional dependencies in a try/catch, which allows loading moment only when the dependency is installed.
Since AMD loaders are asynchronous, 'moment' needs to be explicitly loaded before 'chart.js' so when 'chart.js' requires moment, it's already loaded and returns synchronously (at least with requirejs).
require(['moment'], function() {
  require(['chartjs'], function(Chart) {
    new Chart('chart', {
      //...
    });
  });
});
Still need to write a bit of docs and do more testing.
UMD wrapper in dist/Chart.js:
Before:
(function (global, factory) {
typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory(require('moment')) :
typeof define === 'function' && define.amd ? define(['moment'], factory) :
(global.Chart = factory(global.moment));
}(this, (function (moment) { 'use strict';
After:
(function (global, factory) {
typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory(function() { try { return require('moment'); } catch(e) { } }()) :
typeof define === 'function' && define.amd ? define(['require'], function(require) { return factory(function() { try { return require('moment'); } catch(e) { } }()); }) :
(global.Chart = factory(global.moment));
}(this, (function (moment) { 'use strict';
Related to #5960
Fixes #4303
Please see the latest docs for details: https://github.com/chartjs/Chart.js/blob/master/docs/getting-started/integration.md
I use chartjs with angular-cli, so I build the angular app with ng build --prod. And angular-cli does not let me configure webpack or Rollup. So how can I configure the angular project to exclude moment?
Looks like it is impossible to use externals with angular-cli.
@lsn793 I'm not familiar with angular / angular-cli, can you share a/your project that uses chart.js?
@lsn793 Can you also try removing the package after install and seeing if complication will succeed in that case? https://github.com/chartjs/Chart.js/pull/5978#issuecomment-453769800
@benmccann I tried what you advised but got an error while building:
WARNING in ./node_modules/chart.js/dist/Chart.js
Module not found: Error: Can't resolve 'moment' in 'C:\Users\Serg\dev\calc-weight\node_modules\chart.js\dist'
And the chart does not work.
@simonbrunel https://github.com/lsn793/calc-weight
My workaround for this problem in Angular is to use file replacement. In your angular.json, modify your production environment (and any other environment from which you want to exclude Moment) as follows:
"production": {
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.prod.ts"
},
{
"replace": "node_modules/moment/moment.js",
"with": "src/environments/support/moment.js"
}
]
}
Where src/environments/support/moment.js is an empty module exporting null.
It's a bit of a hack, as file replacements aren't really intended for this, but it seems to be the only way to get it to work.
FWIW I would also propose making a breaking change that changes Moment to a peer dependency, releasing it as v3.0.0, and making v4 the future "big" release.
I understand the desire to make the version numbers correspond to "important" releases with lots of features, but I'm not sure that desire should override making the correct technical solution. In the end, a version number is just a version number.
|
gharchive/pull-request
| 2019-01-11T21:41:40 |
2025-04-01T04:56:17.478903
|
{
"authors": [
"benmccann",
"jonrimmer",
"lsn793",
"simonbrunel"
],
"repo": "chartjs/Chart.js",
"url": "https://github.com/chartjs/Chart.js/pull/5978",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
910331812
|
Fix docs global fonts property matching current API
This PR refactors example code, documentation in general/fonts.
This is the correct syntax for the current API; what you are trying to change it to is the old V2 syntax. If you look at this fiddle, for example, you see that the default fontSize is set correctly: https://jsfiddle.net/Leelenaleee/caqxh04t/2/
Thanks for the explanation! Pardon me.
|
gharchive/pull-request
| 2021-06-03T10:02:41 |
2025-04-01T04:56:17.481144
|
{
"authors": [
"DPS0340",
"LeeLenaleee"
],
"repo": "chartjs/Chart.js",
"url": "https://github.com/chartjs/Chart.js/pull/9227",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1819344688
|
[BUG] Asking in Chinese produces output mixing Chinese and English
Problem Description
Describe the problem in a clear and concise manner.
This is a model issue; you need to set up the prompt properly and choose a better model.
|
gharchive/issue
| 2023-07-25T00:44:09 |
2025-04-01T04:56:17.484811
|
{
"authors": [
"hepengyue",
"zRzRzRzRzRzRzR"
],
"repo": "chatchat-space/Langchain-Chatchat",
"url": "https://github.com/chatchat-space/Langchain-Chatchat/issues/926",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1837215580
|
⚠️ Chatwoot has degraded performance
In bf1ef9a, Chatwoot (https://app.chatwoot.com) experienced degraded performance:
HTTP code: 503
Response time: 3247 ms
Resolved: Chatwoot performance has improved in 1783589.
|
gharchive/issue
| 2023-08-04T19:04:42 |
2025-04-01T04:56:17.515480
|
{
"authors": [
"vishnu-narayanan"
],
"repo": "chatwoot/status",
"url": "https://github.com/chatwoot/status/issues/95",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1789133202
|
Started getting "savepoint transactional_testing_X does not exist" errors since v1
I'm not quite sure what happened, but exactly the same code throws an error in v1 and doesn't in v0.5.0, using prisma v4.15.0.
Happened in two different projects that use this library.
This is the code that builds the tests:
import { TestingModule, TestingModuleBuilder } from '@nestjs/testing';
import { AuthenticationService } from '../../../src/user/authentication/authentication.service';
import { ConfigService } from '@nestjs/config';
import { PostgreSqlContainer } from 'testcontainers';
import { PrismaService } from '../../../src/prisma/prisma.service';
import { PrismaTestingHelper } from '@chax-at/transactional-prisma-testing';
import { S3Service } from '../../../src/aws/s3/s3.service';
import { getRandomFreePort } from './get-random-free-port';
import { prismaReset } from './prisma-reset';
import { prismaSeed } from './prisma-seed';
export const buildTest = async (
testID: string,
testModuleFixture: TestingModuleBuilder,
) => {
const pgPort = await getRandomFreePort();
const dbUrl = `postgresql://${testID}:${testID}@localhost:${pgPort}/${testID}?schema=public`;
const container = await new PostgreSqlContainer()
.withExposedPorts({ container: 5432, host: pgPort })
.withName(`${testID}-${pgPort}`)
.withDatabase(testID)
.withUsername(testID)
.withPassword(testID)
.start();
await prismaReset(dbUrl);
let prismaService = new PrismaService({
datasources: { db: { url: dbUrl } },
});
await prismaService.$connect();
const prismaTestingHelper = new PrismaTestingHelper(prismaService);
prismaService = prismaTestingHelper.getProxyClient();
const moduleFixture: TestingModule = await testModuleFixture
.overrideProvider(ConfigService)
.useValue({
get: (value: string) =>
value === 'DATABASE_URL' ? dbUrl : process.env[value],
})
.overrideProvider(PrismaService)
.useValue(prismaService)
.compile();
const app = moduleFixture.createNestApplication();
prismaService = app.get(PrismaService);
await Promise.all([app.init(), prismaSeed(dbUrl)]);
return {
app,
prismaService,
beforeEach: async () => {
await prismaTestingHelper.startNewTransaction({ timeout: 10000 });
},
afterEach: async () => {
prismaTestingHelper.rollbackCurrentTransaction();
},
afterAll: async () => {
await prismaService.$disconnect();
await app.close();
await container.stop();
},
};
};
Is it possible that you don't (completely) await all functions that call queries in your test? e.g. something like this
async function firstTest() {
// ...
const user = userService.loadUserFromDb(3); // note the missing "await"
}
async function secondTest() {
// This test will fail now
}
where userService.loadUserFromDb looks like this:
public async loadUSerFromDb(id: number) {
await this.prismaService.user.findUnique({where: { id }});
}
I did publish a new pre-release version which improves the robustness of savepoint handling and logs a warning if the case mentioned above is detected. You can install it using
npm i -D @chax-at/transactional-prisma-testing@1.0.1-rc.1
This version fixes my own reproduction at least (and logs a warning); please tell me if it's working for you as well (or at least whether it throws a different error).
The fix I mentioned above landed in 1.1.0, if you have time, you can try it out. Otherwise, you can stay on 0.6.0 which doesn't have implicit query transaction/savepoints (in case you don't need it and don't want to deal with debugging).
I'll close the issue for now, assuming it's fixed - but please let me know if you're still encountering any issues with the new version! (I don't know if you can re-open the issue, but just comment here or open another issue in this case if re-opening doesn't work)
|
gharchive/issue
| 2023-07-05T09:18:08 |
2025-04-01T04:56:17.520662
|
{
"authors": [
"DimosthenisK",
"Valerionn"
],
"repo": "chax-at/transactional-prisma-testing",
"url": "https://github.com/chax-at/transactional-prisma-testing/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
608411621
|
Allow for alternative color architectures (specifically base16)
Is your feature request related to a problem? Please describe.
I'm a happy user of "cheat" and have found its author to be a super guy, very responsive, and working hard on this great project. One thing I'd personally love to see is for it to use my terminal's current color configuration. Specifically, the excellent Base16 project.
Describe the solution you'd like
Rather than using a "chroma" theme, I'd like that cheat just use the colors currently provided by the shell. Using the base16-shell project, I can easily use a single theme for use in iTerm, Vim or any other CLI. I'd LOVE to be able to view my sheets using this theme as well.
I realize it's probably not a super urgent priority, but it's the ONE thing that keeps me from really enjoying using "cheat" as much as I would like.
Hi, @zeitchef
Thanks for the kind words. I'm glad to hear that you're enjoying cheat!
Regarding your request:
First of all, it's entirely sensible. I'm not afflicted with this personally because I use solarized-dark for everything, but I'd be hugely annoyed if I had your problem.
Rather than using a "chroma" theme, I'd like that cheat just use the colors currently provided by the shell.
Unfortunately, there's some complexity here. There's really no generalized way for cheat to use your shell colors. Rather, each color must be assigned on a per-token basis (keyword, variable, etc) by chroma's parser.
I can think of three possible solutions:
First and best solution
While it would not solve the problem in the general case, you might consider opening a pull-request against chroma which implements a base16 color scheme. Were you to succeed in that, I'd simply need to update cheat's chroma dependency, and then two projects would be improved, rather than one :slightly_smiling_face:
(For what it's worth, I corresponded with chroma's maintainer while refactoring cheat into Go, and can state that he's a pleasure to work with and a wonderful maintainer.)
Second, disgusting solution
bat appears to support the base16 color theme. If you have it installed on your system - and if you're only interested in bash syntax highlighting - you could create a cheat shell function, something like this:
#!/bin/bash
cheat () {
  # Use "command" to call the real cheat executable instead of this
  # function, avoiding infinite recursion; quote "$1" to handle spaces.
  command cheat "$1" | bat --theme=base16 --language=bash --style=plain
}
If you were to source that function (perhaps in a .bashrc file or somewhere), typing cheat would then invoke that function (rather than the executable directly), which would in turn pipe cheat's output through bat.
That's gross, but it might work?
Third solution
It may also be possible for me to provide users with a mechanism for manually passing color values to chroma's parser. This would probably require some work, but it would solve the problem in the most general case. (I'll look into this when I have some time.)
I hope that's helpful. Feel free to share any feedback here.
Hi @chrisallenlane,
I took your advice and submitted a pull request to the chroma project, which was just merged. While this solves my immediate issue, I do love the idea of a more robust way of customizing colors for cheat.
Thanks for your work!
Hi, @zeitchef
That's fantastic. Thanks for making both projects better! :confetti_ball:
I see that your work was merged on Oct 29, but the current latest release of chroma was published on Sept 22. That being the case, your work won't be available in cheat until the next version of chroma is released.
The chroma maintainer is excellent, however, so this likely won't be long. Likewise, I have "dependabot" configured on cheat/cheat, so a PR will be submitted automatically once the upstream dependencies are updated.
So, this feature won't be available in cheat yet, but it likely will shortly :slightly_smiling_face:
Thanks!
Tagging as "blocked" while we await the upstream release.
@zeitchef, your changes have been released with cheat@4.2.0:
https://github.com/cheat/cheat/releases/tag/4.2.0
Thanks again!
|
gharchive/issue
| 2020-04-28T15:31:28 |
2025-04-01T04:56:17.548061
|
{
"authors": [
"chrisallenlane",
"zeitchef"
],
"repo": "cheat/cheat",
"url": "https://github.com/cheat/cheat/issues/558",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2571697061
|
Restructure Preprocessing Example Analysis
In this PR we restructure the example analysis for preprocessing to have GitHub-contained example data.
This follows the precedents for example analysis set in #8 and #9.
Notes:
For this PR review I have removed the example files; once the review is done I will add them back. If included, they would prevent a reviewer from seeing the differences.
Some of the input ND2 files were too large to include in this repo, so we will have to come up with a different method to make these accessible to outside users.
@mat10d This is ready for review, let me know if you have any questions!
Yep the goal of this does make sense and I remember seeing in the previous example file that the cell patterns lined up for part of the images (these are the cells that have overlapping data for SBS and PH image iirc). I am now thinking that maybe this isn't something we need to do here, and instead could be done in the merge/hash step. However, just showing a table of all of the ph and sbs tiles doesn't really make much sense. The goal is to show ones that will align between 10x and 20x, and then essentially display a set of matching ones so that you can see the pattern of cells aligning between the two tiles you selected. Would you recommend we just leave the grid views out of these new examples that don't have that much data? Especially if we want to move this to the merge/hash step? If so I can just remove it from this example analysis eval notebook.
Yeah, let's just exclude it for now.
Done in https://github.com/cheeseman-lab/OpticalPooledScreens/pull/12/commits/bef2bd2a5c5a0829458aff49dd878f6f6458a513
Removed these examples in https://github.com/cheeseman-lab/OpticalPooledScreens/pull/12/commits/bef2bd2a5c5a0829458aff49dd878f6f6458a513
|
gharchive/pull-request
| 2024-10-07T23:11:25 |
2025-04-01T04:56:17.658198
|
{
"authors": [
"mat10d",
"roshankern"
],
"repo": "cheeseman-lab/OpticalPooledScreens",
"url": "https://github.com/cheeseman-lab/OpticalPooledScreens/pull/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
218592054
|
upload failed for cookbooks/audit because missing "compat_resource"
knife cookbook upload audit -o ./chef-cookbooks
Uploading audit [2.4.0]
ERROR: Cookbook audit depends on cookbooks which are not currently
ERROR: being uploaded and cannot be found on the server.
ERROR: The missing cookbook(s) are: 'compat_resource' version '>= 0.0.0'
Cookbook version
audit [2.4.0]
Chef-client version
Chef Development Kit Version: 1.2.22
chef-client version: 12.18.31
kitchen version: 1.15.0
Platform Details
Red Hat Enterprise Linux Server release 7.0 (Maipo)
Scenario:
knife cookbook upload audit -o ./chef-cookbooks
...
ERROR: Cookbook audit depends on cookbooks which are not currently
ERROR: being uploaded and cannot be found on the server.
ERROR: The missing cookbook(s) are: 'compat_resource' version '>= 0.0.0'
Steps to Reproduce:
as above
Expected Result:
Upload with no error, or to have an updated 'compat_resource' version in the audit cookbook
Actual Result:
Failed as "ERROR: The missing cookbook(s) are: 'compat_resource' version '>= 0.0.0' "
Hello, @benlu36.
This is expected behavior. knife cookbook upload does not solve any cookbook dependencies, and as you discovered, the audit cookbook depends on the compat_resource cookbook.
There are two ways to solve this:
You can download the compat_resource cookbook manually and knife cookbook upload it first, or.
You can use Berkshelf to upload both the compat_resource cookbook and the audit cookbook.
I would recommend using Berkshelf as it will make sure the right versions of the right dependency cookbooks are downloaded and then uploaded to your Chef Server.
From the audit cookbook directory itself, you should be able to do a berks install and then a berks upload and it will upload the cookbooks to your Chef Server for you.
I hope this helps!
~Adam
Thanks, @adamleff !
Below is what I get:
pwd
/root/chef-repo/chef-cookbooks/audit
berks install
W, [2017-03-31T14:12:59.449649 #6460] WARN -- : You are setting a key that conflicts with a built-in method Hashie::Mash#frozen? defined in Kernel. This can cause unexpected behavior when accessing the key via as a property. You can still access the key via the #[] method.
W, [2017-03-31T14:12:59.450116 #6460] WARN -- : You are setting a key that conflicts with a built-in method VariaModel::Attributes#frozen? defined in Kernel. This can cause unexpected behavior when accessing the key via as a property. You can still access the key via the #[] method.
W, [2017-03-31T14:12:59.501178 #6460] WARN -- : You are setting a key that conflicts with a built-in method VariaModel::Attributes#default defined in Hash. This can cause unexpected behavior when accessing the key via as a property. You can still access the key via the #[] method.
Resolving cookbook dependencies...
Fetching 'audit' from source at .
Fetching 'test_helper' from source at test/cookbooks/test_helper
Fetching cookbook index from https://supermarket.chef.io...
W, [2017-03-31T14:13:09.637216 #6460] WARN -- : You are setting a key that conflicts with a built-in method Hashie::Mash#zip defined in Enumerable. This can cause unexpected behavior when accessing the key via as a property. You can still access the key via the #[] method.
W, [2017-03-31T14:13:21.839512 #6460] WARN -- : You are setting a key that conflicts with a built-in method Hashie::Mash#fetch defined at /opt/chefdk/embedded/lib/ruby/gems/2.3.0/gems/hashie-3.5.1/lib/hashie/mash.rb:141. This can cause unexpected behavior when accessing the key via as a property. You can still access the key via the #[] method.
Using audit (2.4.0) from source at .
Installing compat_resource (12.16.3)
Installing mingw (2.0.0)
Installing build-essential (8.0.0)
Installing ohai (5.0.2)
Installing git (6.0.0)
Installing dmg (3.1.0)
Installing seven_zip (2.0.2)
Using test_helper (0.1.0) from source at test/cookbooks/test_helper
Installing windows (3.0.4)
Installing yum-epel (2.1.1)
Yeah, that's a lot of warnings you can ignore - they're fixed in the upcoming ChefDK release.
So, you ran berks install and it completed successfully! Great! This downloaded all the dependencies, etc. to your local berkshelf. Now if you do a berks upload it should upload everything it needs to your Chef Server.
berks upload --> works; it uploaded all cookbooks to the Chef server.
Thanks, @adamleff !
Hi,
It looks like you might be new to Chef. The problem you are seeing is that knife cookbook upload is expecting that all dependencies of the cookbook are handled. You can't upload a cookbook to the Chef Server without uploading its dependencies. This isn't a problem specific to this cookbook, it's something that you need to understand and deal with for any cookbook that you use. The metadata.rb file will indicate what dependencies there are. You can either manually handle the dependencies by downloading them from the supermarket and uploading them, or you can use a tool like berkshelf to handle the dependencies. If you look at the Berksfile that is included you'll see that it will pull from the public supermarket https://supermarket.chef.io.
High-five, @iennae 🙂
|
gharchive/issue
| 2017-03-31T19:15:20 |
2025-04-01T04:56:17.686034
|
{
"authors": [
"adamleff",
"benlu36",
"iennae"
],
"repo": "chef-cookbooks/audit",
"url": "https://github.com/chef-cookbooks/audit/issues/204",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
207497294
|
Timeout on large file downloads from S3 with s3_file
Cookbook version
4.2
Chef-client version
12.13.37
Platform Details
Ubuntu 14.04 on AWS
We're trying to download a relatively large text file (3GB), but it results in a 403 Forbidden exception. Looking at the logs, it takes almost 3 minutes just to decide that the local and remote files do not match; this is most likely down to loading the existing file to calculate an MD5 hash.
[2017-02-10T12:20:45+00:00] INFO: Processing aws_s3_file[/home/my_user/very_large_file.txt] action create (mycookbooks::corenlp line 43)
[2017-02-10T12:23:20+00:00] INFO: Remote and local files do not match, running create operation.
[2017-02-10T12:23:20+00:00] INFO: Processing remote_file[/home/myuser/very_large_file.txt] action create (/var/chef/runs/1/local-mode-cache/cache/cookbooks/aws/providers/s3_file.rb line 40)
[2017-02-10T12:25:55+00:00] INFO: HTTP Request Returned 403 Forbidden
Options to fix this, not limited to:
Calculate the presigned URL just before attempting the download (it seems it's not used in calculating the hash, but I may be wrong)
Allow overriding of the link expiry (currently it's set to 300 seconds)
Allow for skipping of md5 comparison step when using :create action
I think the first option makes the most sense.
I've gone ahead and created a pull request for the first option, which does fix this particular issue for me; however, it could still cause issues.
In the event that the first attempt to download takes > 5 minutes, then fails, and the resource specifies a number of retries, the second retry would still result in a 403 Forbidden error.
This issue also raises the question of whether the way the MD5 hash is calculated is suitable, as I know the file on S3 hasn't changed, yet we have to wait for the file to be redownloaded on every deployment. We can work around this, but I wonder if it could still be worth having an option to skip the MD5 hash check.
Closed by #274 - thanks for reporting and the PR to fix it!
|
gharchive/issue
| 2017-02-14T12:05:57 |
2025-04-01T04:56:17.690890
|
{
"authors": [
"markunsworth",
"smurawski"
],
"repo": "chef-cookbooks/aws",
"url": "https://github.com/chef-cookbooks/aws/issues/273",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
222299546
|
Bump cookstyle and foodcritic deps
We need a release of ChefDK in current that has the latest cookstyle and foodcritic. Right now, if someone does the right thing with the amazon platform for Chef 13, Travis goes red since the old Foodcritic blows up.
Signed-off-by: Tim Smith tsmith@chef.io
This will need to get rebased once #1226 is merged which should fix the foodcritic failure
|
gharchive/pull-request
| 2017-04-18T04:38:36 |
2025-04-01T04:56:17.696777
|
{
"authors": [
"tas50"
],
"repo": "chef/chef-dk",
"url": "https://github.com/chef/chef-dk/pull/1229",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
160560085
|
Documentation for "chef provision" command cookbook dependencies
I've seen a few approaches as to where your provision cookbook should go (for use with the chef provision command). However, I don't understand where you should put its dependencies.
It seems that chef provision runs a form of chef-solo that sets the cookbook folder to the directory just above the cookbook you're provisioning from. However, provision cookbooks seem to live just about everywhere, from .delivery/deploy-cookbook to ./provision at the top level of a cookbook or the chef-repo itself.
[devops@codecan app_server_cookbook]$ bundle exec chef provision ticket_120805691 --policy-name app_server
Frame number: 0/22
From: /home/devops/workspace/chris/app_server/provision/metadata.rb @ line 11 Chef::Mixin::FromFile#from_file:
6: long_description 'Installs/Configures provision'
7: version '0.1.0'
8: # Not quite sure how to depend on other cookbooks
9: # in a provisioning recipe yet
10: require 'pry-byebug' ; binding.pry
=> 11: depends 'dnsimple'
[1] pry(#<Chef::Cookbook::Metadata>)> Chef::Config
=> {:log_location=>#<IO:<STDOUT>>, :solo=>true, :cookbook_path=>"/home/devops/workspace/chris/app_server", :color=>true, :diff_disabled=>true, :use_policyfile=>false, :policy_name=>nil, :policy_group=>nil, :deployment_group=>nil, :file_staging_uses_destdir=>true}
[2] pry(#<Chef::Cookbook::Metadata>)> quit
Installing Cookbook Gems:
Compiling Cookbooks...
Error: Could not find cookbook(s) to satisfy run list ["recipe[provision::default]"] in /home/devops/workspace/chris/app_server
Reason: (Chef::Exceptions::CookbookNotFound) Cookbook dnsimple not found. If you're loading dnsimple from another cookbook, make sure you configure the dependency in your metadata
What are others doing? There doesn't appear to be any documentation or much direction on using the "chef provision" command with complex provision cookbooks with dependencies.
https://github.com/cassianoleal/cookbook-deploy_key is a library cookbook I'd like to use during a provisioning run, possibly creating a deploy key per repo + node combination and then destroying them
https://github.com/target/f5-bigip-cookbook would be nice to add nodes to a load balancer
https://github.com/dnsimple/chef-dnsimple for adding nodes to dns
I thought about dropping this into ./provision/metadata.rb:
name 'provision'
maintainer 'The Authors'
maintainer_email 'wolfpack@vulk.coop'
license 'all_rights'
description 'Installs/Configures provision'
long_description 'Installs/Configures provision'
version '0.1.0'
# Not quite sure how to depend on other cookbooks
# in a provisioning recipe yet
if Chef::Config.has_key?(:cookbook_path)
  this_dir = File.dirname(__FILE__)
  if File.exist?(File.join(this_dir, 'vendor', 'cookbook_artifacts'))
    Chef::Config[:cookbook_path] << File.join(this_dir, 'vendor', 'cookbook_artifacts')
    require 'pry-byebug'; binding.pry
    # trying to get this to work, but I still get Chef::Exceptions::CookbookNotFound for dnsimple
  else
    Chef::Log.fatal "This provisioning cookbook has dependencies!!!"
    Chef::Log.fatal "Run: chef export ./provision/Policyfile.rb ./provision/vendor"
  end
else
  Chef::Log.warn 'Did not contain cookbook_path'
end
depends 'dnsimple'
But still no love
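For what it's worth, a hypothetical ./provision/Policyfile.rb covering the cookbooks above might look like the following; only the repository references come from the links listed earlier, and the cookbook names are illustrative guesses:

# Declare the provision cookbook's own run list plus its dependencies,
# pulling library cookbooks from the public Supermarket or GitHub.
name 'provision'
default_source :supermarket
run_list 'provision::default'

cookbook 'dnsimple'
cookbook 'deploy_key', github: 'cassianoleal/cookbook-deploy_key'
cookbook 'f5-bigip', github: 'target/f5-bigip-cookbook'

chef export would then vendor everything under ./provision/vendor, which is what the metadata.rb hack above expects to find.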
Closing and moving to https://github.com/chef/chef-dk/issues/901 since the related ChefRunner code called from the chef provision command is at https://github.com/chef/chef-dk/blob/master/lib/chef-dk/command/provision.rb#L253
|
gharchive/issue
| 2016-06-16T01:39:59 |
2025-04-01T04:56:17.702229
|
{
"authors": [
"hh"
],
"repo": "chef/chef-provisioning",
"url": "https://github.com/chef/chef-provisioning/issues/524",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
372549122
|
knife status --run-list -l shows invalid/incorrect output
Description
When running knife status -l --run-list I get an empty field between commas where the run-list/roles should be.
ChefDK Version
Chef Development Kit Version: 2.4.17
chef-client version: 13.6.4
delivery version: master (73ebb72a6c42b3d2ff5370c476be800fee7e5427)
berks version: 6.3.1
kitchen version: 1.19.2
inspec version: 1.45.13
Platform Version
Ubuntu 16.04.3 LTS
Replication Case
Run knife status -m --run-list and observe the 3rd "field" being the role.
Run knife status -l --run-list and observe the 3rd "field" being empty (where the role should be).
Fixed in #8415
|
gharchive/issue
| 2018-10-22T14:39:15 |
2025-04-01T04:56:17.705287
|
{
"authors": [
"dragon788",
"vsingh-msys"
],
"repo": "chef/chef",
"url": "https://github.com/chef/chef/issues/7764",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
705343074
|
Gate requires in chef-utils and chef-config as well
Update the cop config to handle those as well
Signed-off-by: Tim Smith tsmith@chef.io
busted unit tests
Hmm, looks like tests are failing due to uninitialized constant ChefUtils::DSL, which may be related.
Fixed things, but by excluding specs and using ChefUtils::CANARY.
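For context, the gating pattern referenced here is roughly the following, sketched for chef-utils; whether chef-config exposes an analogous constant is an assumption on my part:

# Skip the redundant require when the library is already loaded.
require "chef-utils" unless defined?(ChefUtils::CANARY)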
|
gharchive/pull-request
| 2020-09-21T06:58:03 |
2025-04-01T04:56:17.707023
|
{
"authors": [
"lamont-granquist",
"mwrock",
"tas50"
],
"repo": "chef/chef",
"url": "https://github.com/chef/chef/pull/10451",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
187002129
|
Fix converge_if_changed
Description
sensitive is a property of the resource, not the provider.
Issues Resolved
Fixes exceptions using converge_if_changed
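A hedged sketch of the idea, not the exact diff: inside the provider, the flag has to be read off the resource, since the provider itself doesn't respond to sensitive.

# Hypothetical helper mirroring the fix: read `sensitive` from the
# resource that actually owns the property, not from the provider.
def converge_description(new_resource, property)
  if new_resource.sensitive
    "set #{property} (suppressed sensitive value)"
  else
    "set #{property} to #{new_resource.send(property).inspect}"
  end
end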
This could really use a spec so it doesn't regress; it needs a DCO sign-off as well.
@axos88 please sign the DCO (https://github.com/chef/chef/blob/master/CONTRIBUTING.md#developer-certification-of-origin-dco) and then we're good to go. cc @chef/client-core
According to that doc obvious fixes don't need DCO, and this is pretty obvious, is it not? I'd rather avoid amending and force-pushing.
obvious fix actually requires you to say that it is one, which you've now done :)
Huh. Thought I did. Anyhoo, glad that it's okay this way.
|
gharchive/pull-request
| 2016-11-03T08:41:56 |
2025-04-01T04:56:17.709702
|
{
"authors": [
"axos88",
"lamont-granquist",
"thommay"
],
"repo": "chef/chef",
"url": "https://github.com/chef/chef/pull/5508",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
435056523
|
Remove chef-* binstubs from chef gem
This is for the relicensing effort.
Note that this PR leaves the knife and ohai binstubs still in the
gem(s) while that discussion is still ongoing.
Just for the record, I'm not a fan of this change. We've never officially supported gem install chef as an install style, but this will make it entirely unavailable. This will make installing Chef in unusual environments (RasPis, etc.) more difficult, and will undoubtedly frustrate some number of people who built their own install systems around gem install chef.
That said, I have no energy to argue this point; I just wanted the side effects listed so people will have an easier time finding the change that broke things for them.
closed in favor of #8413
|
gharchive/pull-request
| 2019-04-19T05:41:29 |
2025-04-01T04:56:17.712027
|
{
"authors": [
"coderanger",
"lamont-granquist"
],
"repo": "chef/chef",
"url": "https://github.com/chef/chef/pull/8397",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
70630960
|
Move to oc_erchef 1.8.2
Changelog-Entry: Improved performance fo the cookbook_verions endpoint
for large installations
@stevendanna of instead of fo in the Changelog-Entry. Also, is it ChangeLog or Changelog? Does it matter?
@mmzyk Thanks, typos on both fronts. It is ChangeLog-Entry. Perhaps I it should be case insensitive?
@stevendanna case insensitivity is probably good, that way it's one less thing to think about.
changelog-entry is a bit easier to type and requires less thinking. ;)
:heart: thanks for pulling this in to Chef Server so quickly
|
gharchive/pull-request
| 2015-04-24T08:55:09 |
2025-04-01T04:56:17.755066
|
{
"authors": [
"irvingpop",
"mmzyk",
"stevendanna"
],
"repo": "chef/opscode-omnibus",
"url": "https://github.com/chef/opscode-omnibus/pull/760",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|