id (string, lengths 4 to 10) | text (string, lengths 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict) |
---|---|---|---|---|---|
499945966
|
Choose from a list of keywords
Hello,
Thanks for this. It's working quite well, but is there a way to get it to choose only from a pre-defined list of keywords?
Hello, I would like to ask whether you have made any progress on the idea you mentioned; I am also very interested and would like to ask for your advice.
Looking forward to your reply, thanks a lot!
|
gharchive/issue
| 2019-09-29T17:31:53 |
2025-04-01T06:39:37.885312
|
{
"authors": [
"RileyChing",
"regstuff"
],
"repo": "minimaxir/gpt-2-keyword-generation",
"url": "https://github.com/minimaxir/gpt-2-keyword-generation/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1781382776
|
Support unsafe function schemas
LLMs for code are capable of reasoning beyond just what is merely executable [^1][^2]. Therefore, I suggest allowing users to provide free-form function schemas that aren't necessarily strictly following the JSON Schema format.
[^1]: Souza, Beatriz, and Michael Pradel. "LExecutor: Learning-Guided Execution." arXiv preprint arXiv:2302.02343 (2023).
[^2]: https://www.youtube.com/watch?v=YIYlkCbIxqc&t=2664s
Function input implementation is handled on a model-by-model basis, which is why the current schema code is in chatgpt.py.
@minimaxir, what do you mean by "model-by-model basis"?
This implementation of structured I/O to ChatGPT is specific to ChatGPT.
Okay, but why not allow users to provide schemas that violate the JSON Schema format?
No point, unless there is evidence that it actually improves generation quality.
To keep the library simple, I am not adding things for the sake of adding things.
|
gharchive/issue
| 2023-06-29T19:38:40 |
2025-04-01T06:39:37.889064
|
{
"authors": [
"keyboardAnt",
"minimaxir"
],
"repo": "minimaxir/simpleaichat",
"url": "https://github.com/minimaxir/simpleaichat/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
382212821
|
Missing file irange in mininet.util
from mininet.topo import Topo raises an error
This line raises an error when run, and it appears that irange, which is often called by Topo, is missing from mininet.util. How can this be resolved so that Topo can be used to create a topology?
This can be closed, I think.
|
gharchive/issue
| 2018-11-19T13:23:37 |
2025-04-01T06:39:37.891498
|
{
"authors": [
"Pryanga306",
"cheriimoya"
],
"repo": "mininet/mininet",
"url": "https://github.com/mininet/mininet/issues/843",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2130951482
|
minio/minio-go/v7@v7.0.67/utils.go:627:67: undefined: tls.CertificateVerificationError with Go 1.19.
https://github.com/minio/minio-go/pull/1921 and more specifically https://github.com/minio/minio-go/commit/76a41461fe5124fb9b646615c6abafcd1d41c7c2 caused minio-go to no longer build with Go 1.19.
https://endoflife.date/go go1.19 is EOLed
@harshavardhana I guess a mention on https://github.com/minio/minio-go/releases/tag/v7.0.67 would have avoided the surprise on our side. I'm fine with dropping support for EOL Go versions but was thrown off by go 1.17 in the go.mod.
go1.17 was kept for migration purposes; using the new types was a mistake.
Agreed! We ran into this issue as well. If we're still going to declare go1.17 in go.mod, then it's better to fix the code; otherwise, better to update it to 1.20.
Upgrading the Go version to 1.20 solved it for me.
|
gharchive/issue
| 2024-02-12T20:43:23 |
2025-04-01T06:39:37.896993
|
{
"authors": [
"haorenfsa",
"harshavardhana",
"mahbubzulkarnain",
"simondeziel"
],
"repo": "minio/minio-go",
"url": "https://github.com/minio/minio-go/issues/1931",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2548351081
|
Consolidate configurations
Change application files to be more meaningful
Remove excess ones
Update files to reflect the new default (SDS40 and what is live) rather than the historic SDS at 50
Update test cases where there are specific issues
Add documentation to explain how developers should use the application profiles going forward.
Make all operational commencement dates consistent in all files (some reflect speculative future dates, that have now passed).
NOTE: There is still an outstanding test case failure to be addressed. Pushed up for a partial review.
|
gharchive/pull-request
| 2024-09-25T15:46:37 |
2025-04-01T06:39:38.052170
|
{
"authors": [
"joelstobart-moj"
],
"repo": "ministryofjustice/calculate-release-dates-api",
"url": "https://github.com/ministryofjustice/calculate-release-dates-api/pull/855",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1407518167
|
OIDC - Complete current OIDC
Outstanding OIDC conversions:
Modernisation Platform repo:
terraform/
environments (Excluding the files in environments directory as those run off a different set of keys)
core-vpc
core-network-services
bootstrap
delegate-access
single-sign-on
secure-baseline
github
pagerduty
modernisation-platform-account
Modernisation Platform AMI Builds
modernisation-platform
teams
I think terratests in modules should continue using the testing-test ci credentials since adding an OIDC provider in the testing-test account would prevent OIDC module from being tested.
https://github.com/ministryofjustice/modernisation-platform/issues/2040#issuecomment-1275821857 <-- core-security-production
picking up the core-vpc account
OIDC refactor implementation for the core-vpc account: https://github.com/ministryofjustice/modernisation-platform/pull/2551
I'm looking at delegate access and the environments as they both run off privileged keys in the root account
`Modernisation Platform AMI Builds
[ ] modernisation-platform
[ ] teams`
Should be split off into a different story.
OIDC refactor implementation for the core-vpc-test-deployment (I have missed it in the previous PR)
and the core-network-services-deployment workflows: https://github.com/ministryofjustice/modernisation-platform/pull/2560
OIDC refactor implementation for the modernisation-platform-account: https://github.com/ministryofjustice/modernisation-platform/pull/2567
https://github.com/ministryofjustice/modernisation-platform/pull/2571 for single-sign-on and secure-baselines
Moved delegate-access and environments out to https://github.com/ministryofjustice/modernisation-platform/issues/2568
Moved the github one to https://github.com/ministryofjustice/modernisation-platform/pull/2570 (#2570)
|
gharchive/issue
| 2022-10-13T10:05:53 |
2025-04-01T06:39:38.069480
|
{
"authors": [
"SteveLinden",
"davidkelliott",
"ewastempel",
"julialawrence",
"seanprivett"
],
"repo": "ministryofjustice/modernisation-platform",
"url": "https://github.com/ministryofjustice/modernisation-platform/issues/2399",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1416583254
|
Slackbot to open frequently used sites ie GH users, GH team, pingdom users, from the ask channel
Background
We run lots of services. We often need to open multiple sites to do some tasks. It would be nice to be able to open a link via the #ask channel to reduce hunting for various links/bookmarks. It would reduce the number of clicks.
Approach
Investigate options that might be available.
Acceptance Criteria
[ ] Options for review by team
Reference
How to write good user stories
Looked at the Tefter app. It would do this, however we won't proceed with that option as there is a cost.
Added links to a shortcut bar in the main slack channel. Revisit this need if we need something more sophisticated.
Done
|
gharchive/issue
| 2022-10-20T13:01:05 |
2025-04-01T06:39:38.072507
|
{
"authors": [
"AntonyBishop"
],
"repo": "ministryofjustice/operations-engineering",
"url": "https://github.com/ministryofjustice/operations-engineering/issues/1670",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1424382912
|
Sw 5910 pagination tests
Changed pagination element from Jen's design to follow GDS component standards
https://design-system.service.gov.uk/components/pagination/#:~:text=Components-,Pagination,-Help users navigate
Codecov Report
Base: 47.52% // Head: 47.73% // Increases project coverage by +0.20% :tada:
Coverage data is based on head (a0d97a6) compared to base (c035eb4).
Patch coverage: 100.00% of modified lines in pull request are covered.
Additional details and impacted files
@@ Coverage Diff @@
## main #242 +/- ##
==========================================
+ Coverage 47.52% 47.73% +0.20%
==========================================
Files 13 13
Lines 505 507 +2
==========================================
+ Hits 240 242 +2
Misses 240 240
Partials 25 25
Impacted Files | Coverage Δ
internal/sirius/get_page_details.go | 92.59% <100.00%> (+0.28%) :arrow_up:
:umbrella: View full report at Codecov.
Hi Kate, took a look at this today and fixed the CSS issue on the pagination buttons.
Turns out there was a class on the button that needed to be on the parent <li> element, govuk-pagination__item--current. Also changed the template logic to just not render the next/prev when it doesn't need to, rather than use visibility:hidden.
One more scenario to look at on Monday and make sure is tested.
Locally if I filter by 50 per page and go to page 2, I can't go back to page 1.
http://localhost:8888/supervision/workflow/?change-team=13&page=1&tasksPerPage=50&xsrfToken=&assignTeam=0&assignCM=&tasksPerPage=50
|
gharchive/pull-request
| 2022-10-26T16:58:29 |
2025-04-01T06:39:38.082155
|
{
"authors": [
"codecov-commenter",
"kate-49",
"mattmachell"
],
"repo": "ministryofjustice/opg-sirius-supervision-workflow",
"url": "https://github.com/ministryofjustice/opg-sirius-supervision-workflow/pull/242",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
566962638
|
removed govuk link on viewer home page
Purpose
To remove the link to the LPA service on the viewer home page, as research showed that some users were unnecessarily following the link, which took them away from the service and interrupted their journey
Fixes UML-484
Approach
Removed the link so that 'lasting power of attorney' is just text
Checklist
[x] I have performed a self-review of my own code
[ ] I have updated documentation (Confluence/GitHub wiki/tech debt doc) where relevant
[ ] I have added tests to prove my work
[ ] The product team have tested these changes
Coverage remained the same at 78.523% when pulling 33116c8ebfd63c288c42b4a807d49f8d95437d86 on UML-484-Remove-lpa-link-on-viewer-start-page into 38f35dfce6471e146257ed9dba3b341387c567b1 on master.
|
gharchive/pull-request
| 2020-02-18T15:15:52 |
2025-04-01T06:39:38.085854
|
{
"authors": [
"GemTay",
"coveralls"
],
"repo": "ministryofjustice/opg-use-an-lpa",
"url": "https://github.com/ministryofjustice/opg-use-an-lpa/pull/234",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1816544264
|
Update Terraform terraform-aws-modules/vpc/aws to v5
This PR contains the following updates:
Package | Type | Update | Change
terraform-aws-modules/vpc/aws (source) | module | major | 2.78.0 -> 5.1.0
Release Notes
terraform-aws-modules/terraform-aws-vpc (terraform-aws-modules/vpc/aws)
v5.1.0
Compare Source
Features
Add support for creating a security group for VPC endpoint(s) (#962) (802d5f1)
v5.0.0
Compare Source
⚠ BREAKING CHANGES
Bump Terraform AWS Provider version to 5.0 (#941)
Features
Bump Terraform AWS Provider version to 5.0 (#941) (2517eb9)
4.0.2 (2023-05-15)
Bug Fixes
Add dns64 routes (#924) (743798d)
4.0.1 (2023-04-07)
Bug Fixes
Add missing private subnets to max subnet length local (#920) (6f51f34)
v4.0.2
Compare Source
v4.0.1
Compare Source
v4.0.0
Compare Source
⚠ BREAKING CHANGES
Support enabling NAU metrics in "aws_vpc" resource (#838)
Features
Support enabling NAU metrics in "aws_vpc" resource (#838) (44e6eaa)
v3.19.0
Compare Source
Features
Add public and private tags per az (#860) (a82c9d3)
Bug Fixes
Use a version for to avoid GitHub API rate limiting on CI workflows (#876) (2a0319e)
3.18.1 (2022-10-27)
Bug Fixes
Update CI configuration files to use latest version (#850) (b94561d)
v3.18.1
Compare Source
v3.18.0
Compare Source
Features
Added ability to specify CloudWatch Log group name for VPC Flow logs (#847) (80d6318)
v3.17.0
Compare Source
Features
Add custom subnet names (#816) (4416e37)
3.16.1 (2022-10-14)
Bug Fixes
Prevent an error when VPC Flow log log_group and role is not created (#844) (b0c81ad)
v3.16.1
Compare Source
v3.16.0
Compare Source
Features
Add IPAM IPv6 support (#718) (4fe7745)
v3.15.0
Compare Source
Features
Add IPAM IPv4 support (#716) (6eddcad)
3.14.4 (2022-09-05)
Bug Fixes
Remove EC2-classic deprecation warnings by hardcoding classiclink values to null (#826) (736931b)
3.14.3 (2022-09-02)
Bug Fixes
Allow security_group_ids to take null values (#825) (67ef09a)
3.14.2 (2022-06-20)
Bug Fixes
Compact CIDR block outputs to avoid empty diffs (#802) (c3fd156)
3.14.1 (2022-06-16)
Bug Fixes
Declare data resource only for requested VPC endpoints (#800) (024fbc0)
v3.14.4
Compare Source
v3.14.3
Compare Source
v3.14.2
Compare Source
v3.14.1
Compare Source
v3.14.0
Compare Source
Features
Change to allow create variable within specific vpc objects (#773) (5913d7e)
v3.13.0
Compare Source
Features
Made it clear that we stand with Ukraine (acb0ae5)
v3.12.0
Compare Source
Features
Added custom route for NAT gateway (#748) (728a4d1)
3.11.5 (2022-01-28)
Bug Fixes
Addresses persistent diff with manage_default_network_acl (#737) (d247d8e)
3.11.4 (2022-01-26)
Bug Fixes
Fixed redshift_route_table_ids outputs (#739) (7c8df92)
3.11.3 (2022-01-13)
Bug Fixes
Update tags for default resources to correct spurious plan diffs (#730) (d1adf74)
3.11.2 (2022-01-11)
Bug Fixes
Correct for_each map on VPC endpoints to propagate endpoint maps correctly (#729) (19fcf0d)
3.11.1 (2022-01-10)
Bug Fixes
update CI/CD process to enable auto-release workflow (#711) (57ba0ef)
v3.11.5
Compare Source
v3.11.4
Compare Source
v3.11.3
Compare Source
v3.11.2
Compare Source
v3.11.1
Compare Source
v3.11.0
Compare Source
feat: Add tags to VPC flow logs IAM policy (#706)
v3.10.0
Compare Source
fix: Enabled destination_options only for VPC Flow Logs on S3 (#703)
v3.9.0
Compare Source
feat: Added timeout block to aws_default_route_table resource (#701)
v3.8.0
Compare Source
feat: Added support for VPC Flow Logs in Parquet format (#700)
docs: Fixed docs in simple-vpc
chore: Updated outputs in example (#690)
Updated pre-commit
v3.7.0
Compare Source
feat: Add support for naming and tagging subnet groups (#688)
v3.6.0
Compare Source
feat: Added device_name to customer gateway object. (#681)
v3.5.0
Compare Source
fix: Return correct route table when enable_public_redshift is set (#337)
v3.4.0
Compare Source
fix: Update the terraform to support new provider signatures (#678)
v3.3.0
Compare Source
docs: Added ID of aws_vpc_dhcp_options to outputs (#669)
fix: Fixed mistake in separate private route tables example (#664)
fix: Fixed SID for assume role policy for flow logs (#670)
v3.2.0
Compare Source
feat: Added database_subnet_group_name variable (#656)
v3.1.0
Compare Source
chore: Removed link to cloudcraft
chore: Private DNS cannot be used with S3 endpoint (#651)
chore: update CI/CD to use stable terraform-docs release artifact and discoverable Apache2.0 license (#643)
v3.0.0
Compare Source
refactor: remove existing vpc endpoint configurations from base module and move into sub-module (#635)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
VPC community module requires code to be refactored.
Suggest pinning module locally to current version.
2.78.0 -> 5.1.0
|
gharchive/pull-request
| 2023-07-22T02:20:33 |
2025-04-01T06:39:38.154199
|
{
"authors": [
"smjmoj",
"staff-infrastructure-moj"
],
"repo": "ministryofjustice/staff-device-shared-services-infrastructure",
"url": "https://github.com/ministryofjustice/staff-device-shared-services-infrastructure/pull/77",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2036496668
|
When logging in with Discord you should be redirected back to where you started
Click into corporations and try to apply to a corporation; you'll be forced to log in. This passes a next parameter, but we lose it across all the redirects. We should push this into the Django session (or preserve it somehow) so that we can redirect back to the original page after logging in.
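A rough sketch of the session-based approach described above (the view functions, URL name, and session key are illustrative assumptions, not the project's actual code):
from django.shortcuts import redirect

def login_start(request):
    # remember where the user came from before handing off to Discord OAuth
    request.session["post_login_redirect"] = request.GET.get("next", "/")
    return redirect("discord_oauth_start")  # hypothetical URL name

def login_callback(request):
    # ...complete the Discord OAuth exchange and log the user in here...
    return redirect(request.session.pop("post_login_redirect", "/"))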
Completed
|
gharchive/issue
| 2023-12-11T20:45:25 |
2025-04-01T06:39:38.162051
|
{
"authors": [
"bearthatcares"
],
"repo": "minmatarfleet/minmatar.org",
"url": "https://github.com/minmatarfleet/minmatar.org/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
569557494
|
Ansible 2.2+ cannot run playbook
Steps to Reproduce
Install the latest version of Ansible (2.9)
Force vagrant to reprovision by running vagrant provision
Expected Behavior
Ansible will successfully run through the playbook
Actual Behavior
Ansible fails with the message ERROR! 'sudo' is not a valid attribute for a Play.
Details
sudo has been deprecated since Ansible 2.0 and was removed in 2.2. It was replaced with become. (See this Stack Overflow post for more details.)
Output
Vagrant/Ansible error:
eryn@eryn-XPS-13-9370:~/Code/sessionizer/vagrant$ vagrant provision
==> default: Running provisioner: ansible...
Vagrant has automatically selected the compatibility mode '2.0'
according to the Ansible version installed (2.9.4).
Alternatively, the compatibility mode can be specified in your Vagrantfile:
https://www.vagrantup.com/docs/provisioning/ansible_common.html#compatibility_mode
default: Running ansible-playbook...
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --limit="default" --inventory-file=/home/eryn/Code/sessionizer/vagrant/.vagrant/provisioners/ansible/inventory -v ansible/development.yml
Using /etc/ansible/ansible.cfg as config file
ERROR! 'sudo' is not a valid attribute for a Play
The error appears to be in '/home/eryn/Code/sessionizer/vagrant/ansible/development.yml': line 1, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Set up base development environment
^ here
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
Version data:
eryn@eryn-XPS-13-9370:~/Code/sessionizer/vagrant$ ansible --version
ansible 2.9.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/eryn/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Oct 8 2019, 14:14:10) [GCC 5.4.0 20160609]
The solution to this is to use become: instead of sudo:. I can submit a PR if there's interest in updating the project to work with newer versions of Ansible.
It sounds like the general feeling on Slack is to move towards Docker over Vagrant, but if you've got a quick PR handy that allows the VM to start up properly, I don't see a reason not to do it!
We've done away with Vagrant/Docker and gone back to basics: a plain ol' RoR app with a PostgreSQL dependency.
refs #316
|
gharchive/issue
| 2020-02-23T20:59:48 |
2025-04-01T06:39:38.167092
|
{
"authors": [
"eryno",
"tonyc",
"unsay"
],
"repo": "minnestar/sessionizer",
"url": "https://github.com/minnestar/sessionizer/issues/245",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2568064376
|
🛑 Matrix is down
In 9b8cfd1, Matrix (https://matrix.mint.lgbt:8448/_matrix/federation/v1/version) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Matrix is back up in ccfa4ac after 46 minutes.
|
gharchive/issue
| 2024-10-05T13:40:54 |
2025-04-01T06:39:38.213731
|
{
"authors": [
"lunaisnotaboy"
],
"repo": "mint-lgbt/status",
"url": "https://github.com/mint-lgbt/status/issues/3120",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2699502081
|
🛑 Redlib is down
In be58881, Redlib (https://redlib.mint.lgbt) was down:
HTTP code: 404
Response time: 967 ms
Resolved: Redlib is back up in e2c713c after 23 minutes.
|
gharchive/issue
| 2024-11-27T18:49:30 |
2025-04-01T06:39:38.216144
|
{
"authors": [
"lunaisnotaboy"
],
"repo": "mint-lgbt/status",
"url": "https://github.com/mint-lgbt/status/issues/4133",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1619867832
|
🛑 Matrix is down
In 2637f43, Matrix (https://matrix.mint.lgbt:8448/_matrix/federation/v1/version) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Matrix is back up in f65af5c.
|
gharchive/issue
| 2023-03-11T03:45:37 |
2025-04-01T06:39:38.218551
|
{
"authors": [
"lunaisnotaboy"
],
"repo": "mint-lgbt/status",
"url": "https://github.com/mint-lgbt/status/issues/510",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1482689473
|
Rework undo for DeltaDataCollection v2
This is a continuation of work started in #577, but as a separate PR to avoid painful rebase. This time based on master.
I'll try to formulate the problem once more. I need deltas and undos to be associative, meaning a sequence like (DB <- Delta) <- Undo should be equal to DB <- (Delta <- Undo), where <- is the merge operation. The previous approach, where undo carried no information and there was only an Erase operation, didn't satisfy this property.
This PR implements undo operation through separate delta type DataDeltaUndo, which can be applied to a delta from another collection or to a DB. Thus the information is not lost for undo, allowing us to flush delta to db at any moment and still be able to undo it later.
Moreover, in an arbitrary chain of deltas and undos they can be merged in different orders and still produce the same result, e.g.:
Delta + Delta + Undo + Undo
(Delta + Delta) + (Undo + Undo)
Delta + ((Delta + Undo) + Undo)
Delta + (Delta + (Undo + Undo))
What was the original motivation for the separation into three types? I may be missing something.
The idea to distinguish DeltaDataUndo from DeltaData was introduced to make the code more explicit. Indeed undo can be represented by a simple delta, but that would make user intention unclear. For example, we want to be able to apply additional rules for undo (right now applying delta over undo is forbidden for simplicity).
I agree that it looks duplicated, maybe an undo wrapper over DeltaData is better. Will play with it.
Regarding DeltaDataCollection. It's just a container for deltas. A map of arbitrary keys to either DeltaData or DeltaDataUndo. It is also useful because it encapsulates the rules of undo creation: undo is only created on delta+delta merge (not undo merge or delta+undo).
Wondering if we can get away with something a bit simpler.
I believe that your main concern is the reimplementation of operations for undo. Otherwise, the current approach is more restrictive in terms of how undo can be created and used.
The idea to distinguish DeltaDataUndo from DeltaData was introduced to make the code more explicit. Indeed undo can be represented by a simple delta, but that would make user intention unclear. For example, we want to be able to apply additional rules for undo (right now applying delta over undo is forbidden for simplicity). I agree that it looks duplicated, maybe an undo wrapper over DeltaData is better. Will play with it.
Regarding DeltaDataCollection. It's just a container for deltas. A map of arbitrary keys to either DeltaData or DeltaDataUndo. It is also useful because it encapsulates the rules of undo creation: undo is only created on delta+delta merge (not undo merge or delta+undo).
Sorry I meant whether DeltaData, DeltaDataUndo and DeltaMapElement could be somehow streamlined. I don't have a problem with DeltaDataCollection, that one just slipped in by accident.
I gave the issue of composition in presence of errors some thought and this is what I ended up with. Please let me know your opinion.
Specification
Before diving into the implementation, let me first lay down what we want to achieve accompanied with some high-level observations. Hopefully, we will be able to derive the correct implementation from that later on. Let us ignore for the moment the issue of representing missing values. For now, assume we are working with a value of type T that is always present and we only have the Modify operation, no Create or Delete. This apparent limitation will be reviewed later.
Fundamentally, a delta Delta<T> represents a function of type Fn(T) -> Result<T>. That is, a function that takes some data point of type T and either returns a new data point of type T or fails. We can find a concrete data type to use to represent deltas efficiently later. For now, to lay out semantics of deltas, how should deltas behave when applied to a value and how should they compose together, it is instructive to think of them as if they were implemented as the function type above.
The Delta type has a method Delta::apply taking Self and a T and producing a Result<T> which actually turns the delta into its function representation |x| my_delta.apply(x) : impl Fn(T) -> Result<T>. This is basically your combine_delta_with_data. The apply function for the "primitive" create/modify/delete operations is obvious enough, it just performs given operation.
Let us now look into how should deltas combine. Since deltas represent functions, combining deltas should behave like function composition (i.e. applying one function after another). However, the usual notion of function composition compose(f, g) { |x| f(g(x)) } does not quite cut it, because the result of f is a Result<T> and g expects a T as its argument. We need a modified version of function composition for fallible functions which looks something like compose_fallible(f, g) { |x| Ok(f(g(x)?)?) }. Here we take f and g satisfying Fn(T) -> Result<T> and we get one function that also satisfies Fn(T) -> Result<T>.
So for delta composition to be well behaved, the following three should all be equivalent functions:
|x| Ok(d1.apply(d0.apply(x)?)?)
|x| combine(d0, d1).apply(x)
compose_fallible(|x1| d1.apply(x1), |x0| d0.apply(x0))
In the above, the convention is that d0, d1, etc. are deltas Delta<T>, whereas x, x0, x1, etc. are values of type T.
Since function composition is associative (including the fallible version, assuming no side effects), making delta composition in effect implement function composition will make delta composition associative too once it is applied to data.
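As a quick illustration of this fallible composition, here is a throwaway Python sketch (not project code) that treats a delta as a function returning the new value, or None on mismatch:
def compose_fallible(f, g):
    # apply g first, then f, propagating failure (None) like the `?` operator
    def composed(x):
        y = g(x)
        return None if y is None else f(y)
    return composed

def modify(old, new):
    # the Modify(old, new) delta as a function: succeed only if x == old
    return lambda x: new if x == old else None

d0, d1 = modify(1, 2), modify(2, 3)
assert compose_fallible(d1, d0)(1) == 3     # apply d0, then d1
assert compose_fallible(d1, d0)(5) is None  # old/new mismatch propagates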
Implementation
Here is one representation of the Delta type that seems to satisfy the above requirements:
enum Delta<T> {
    Modify(T, T), // Require the old value to be equal to the first component, update it to the second
    Mismatch,     // There was an old/new mismatch somewhere along the way
}
The semantics are defined in terms of apply as follows:
fn apply(self, x: T) -> Option<T> {
    match self {
        Delta::Modify(old, new) if x == old => Some(new),
        _ => None, // we have Delta::Mismatch or the old value does not match
    }
}
Using Option in place of Result since there is only one error state. Supporting multiple error types is a bit tricky as discussed below.
The delta composition goes like this:
fn combine<T>(d0: Delta<T>, d1: Delta<T>) -> Delta<T> {
    match (d0, d1) {
        (Delta::Modify(x0, x1), Delta::Modify(x2, x3)) if x1 == x2 => Delta::Modify(x0, x3),
        _ => Delta::Mismatch, // either one of the deltas is a mismatch or the equality check above fails
    }
}
I think this satisfies the specification outlined in the previous section. Proof left as an exercise for the reader :laughing:
Handling missing values
The above does not handle creating and deleting values. That functionality is easily recovered by using Delta<Option<T>>.
Create(x) becomes Delta::Modify(None, Some(x))
Modify(x0, x1) becomes Delta::Modify(Some(x0), Some(x1))
Delete(x) becomes Delta::Modify(Some(x), None)
The remaining combination Delta::Modify(None, None) amounts to failing unless the value is absent. This is different from a no-op delta (which is not representable with the current iteration of the Delta type, see below)
Note that since Delta<U> has various desirable properties (associativity of the combine operation) for all U, Delta<Option<T>> has them too by setting U = Option<T>.
Undo
The notion of inverting something is captured by the mathematical structure group. To make Delta into a group, we need two extra ingredients: the identity element and the inverse operation.
Identity
We add the Noop arm as the identity element:
enum Delta<T> {
    Noop,
    Modify(T, T),
    Mismatch,
}
The Delta::Noop behaves like the identity function when applied to data element x. Composing Noop with any other delta d gives d (no matter whether noop is lhs or rhs).
It may be sensible to keep separate types for deltas with the noop arm and for deltas without it. When put in a map, the noop case can be represented by key not being present. The new Delta is isomorphic to Option<OldDelta>.
Inverse
Inverse has to satisfy combine(d.inverse(), d) == combine(d, d.inverse()) == Delta::Noop. The following does that:
Noop => Noop,
Modify(old, new) => Modify(new, old),
Mismatch => Mismatch,
Distinguishing "forward" and "backward" deltas at type level
Also touched upon in one of the other comments was that it may be useful to distinguish using types whether a delta applies new changes or undoes previously applied changes.
My preferred solution would be to add a phantom type that captures whether the intention is for the delta to be applied forward or backward. Inverse turns the forward annotation to backward and vice versa. The combine operation requires the directions to be the same but does not care otherwise. It is not clear to me what the resulting annotation should be when mixing directions, that's why I suggest for combine to work only on deltas going in the same direction. This could be revisited later.
Tradeoffs
There are two things that I identified that we lose by employing this approach.
Less granular errors
Due to a more general representation, the various error cases (like double creation, modifying non-existing value, deleted value mismatch, etc) have been collapsed into one. Is this a serious issue?
Seems we could change Mismatch to Mismatch(expected_value, actual_value) for a richer error reporting, and the original error cases (like deleting a non-existing value, etc.) could be extracted from that. However, it gets more complicated once inverses enter the scene. Since we want the first error encountered to be propagated and inverting changes which error we encounter first, we would have to track two errors, one for the case when we apply the operations in the specified order and one for backwards order. At least all errors in the middle can still be dropped. Not sure it's worth investing the effort and extra complexity into this.
Early exit less convenient
Using Result or Option allows us to use the ? operator when something goes wrong to exit early. When working with deltas, we often want to exit early as soon as we encounter a Delta::Mismatch. A nice principled way to do it is to implement std::ops::Try for Delta but it is currently unstable. An ugly option is to use a Result<(T, T), MismatchError> to represent Delta or have Delta easily convertible to it, so ? can be used on it directly.
The combine operation requires both arguments to be evaluated no matter what which may result in inefficiency. We can provide a version of combine that evaluates rhs lazily:
fn combine_lazy<T>(d0: Delta<T>, d1: impl FnOnce() -> Delta<T>) -> Delta<T> {
    if matches!(d0, Delta::Mismatch) { return Delta::Mismatch }
    combine(d0, d1())
}
// use combine_lazy(a, || expensive_operation(b, c)) instead of combine(a, expensive_operation(b, c))
I don't particularly like any of these options apart from the one that requires unstable features.
Conclusion
I hope I have not missed something obvious. This seems to work in my head :rofl:.
There is a number of advantages to the use of well-established mathematical structures. They are well researched so for example any result that someone has proven about groups also applies to deltas if we make deltas into a group. For some reason I suspect the crypto people here are familiar with groups in particular. Also the well-established structures tend to be very well behaved, predictable and composable.
Inverse has to satisfy combine(d.inverse(), d) == combine(d, d.inverse()) == Delta::Noop. The following does that:
Noop => Noop,
Modify(old, new) => Modify(new, old),
Mismatch => Mismatch,
Turns out this is not correct. The Modify case does not satisfy the inverse law as it gives a Modify(old, old) instead of a Noop. The Modify(old, old) is correct according to the original spec where we require applying deltas one by one to be equivalent to combining deltas and applying the combined delta.
There is something called Inverse semigroup which seems to fit the use case here much better. It has a weaker notion of inverse where combine(d, d.inverse(), d) == d. The following inverse operation, formulated on the original Delta without the extra identity element, satisfies that:
Modify(old, new) => Modify(new, old),
Mismatch => Mismatch,
Also we don't have to artificially introduce the identity element to make this formulation work, so the whole thing becomes somewhat simpler.
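For reference, the standard defining identities of an inverse semigroup (writing \circ for combine and d^{-1} for the inverse) can be stated as:
d \circ d^{-1} \circ d = d, \qquad d^{-1} \circ d \circ d^{-1} = d^{-1}
with d^{-1} required to be the unique element satisfying both equations. The Modify(old, new) => Modify(new, old) rule above satisfies both identities under the combine defined earlier.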
I like the formulation and I think it's worth giving a try. I'd like to expand on this and say that the elements you mentioned and the semi-group seem to form a symmetric/hermitian transformation in Hilbert space. One can see that a data (even empty data) is a column vector (A, B) that represents the data of an account from state A to state B. The transformation function is a hermitian/symmetric matrix multiplication that moves the data among these states. If we represent the data in these states, they're guaranteed to be reversible. I think if we want to go nuts we can have a full mathematical formulation of this.
Added more tests and some fixes. New tests fail which shows that the current approach has problems.
Closing this PR in favor of #613. All comments are addressed there.
|
gharchive/pull-request
| 2022-12-07T19:35:18 |
2025-04-01T06:39:38.248385
|
{
"authors": [
"TheQuantumPhysicist",
"azarovh",
"iljakuklic"
],
"repo": "mintlayer/mintlayer-core",
"url": "https://github.com/mintlayer/mintlayer-core/pull/586",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1901200723
|
live version Switching
Is your feature request related to a problem? Please describe.
Mostly, when I update my CV, I would like to update all language versions and compile them all.
Switching versions by editing the source seems unnatural.
Describe the solution you'd like
Pass the desired languages as command-line arguments, or change varLanguage into a list.
Describe alternatives you've considered
Furthermore, it feels more reasonable to nest the language entry in a file, i.e. like metadata.typ.
I realize that it is a language limitation that each compile can only output one PDF, and it also depends on whether the command line can pass arguments to the template.
A very interesting proposal, I see your point here.
I will be happy to implement this if Typst expands this functionality: https://github.com/typst/typst/issues/295
In the meantime, you might want to write a simple bash script that changes the variable in metadata.typ with commands like sed and compiles the file, so that you can execute the script once and get the PDFs for all versions.
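For example, here is a rough Python equivalent of that sed-and-compile loop (the file names, the exact varLanguage pattern, and the language list are assumptions about the template, and the typst CLI is assumed to be on PATH):
import re
import subprocess

languages = ["en", "fr"]  # hypothetical list of versions to build

for lang in languages:
    src = open("metadata.typ", encoding="utf-8").read()
    # swap the language variable, assuming a line like `#let varLanguage = "en"`
    src = re.sub(r'#let varLanguage = "[^"]*"', f'#let varLanguage = "{lang}"', src)
    open("metadata.typ", "w", encoding="utf-8").write(src)
    subprocess.run(["typst", "compile", "cv.typ", f"cv_{lang}.pdf"], check=True)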
I made a minor change to support this feature. Please have a look at my repo for the use of multi-version support, and at my template repo for more information.
Let me know if you'd like to pull my changes, and I'll raise a PR.
I took a look but I don't quite understand your repo -- so you added a varVersion and that's all?
You can submit a PR if you want though. It would be clearer to review altogether.
Follow up discussion will be in https://github.com/mintyfrankie/brilliant-CV-Submodule/pull/9
|
gharchive/issue
| 2023-09-18T15:29:28 |
2025-04-01T06:39:38.255309
|
{
"authors": [
"HernandoR",
"mintyfrankie"
],
"repo": "mintyfrankie/brilliant-CV",
"url": "https://github.com/mintyfrankie/brilliant-CV/issues/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
428068720
|
Fix sandbox vulnerability
Related issue: (issue link)
#568
1. Changes (a clear and accurate description of the updated functionality)
MIP.sandbox.strict.MIP.sandbox now points to MIP.sandbox.strict
MIP.sandbox.strict.MIP.util only exposes platform\customStorage\jsonParse\string
2. Scope of impact (what functionality this change affects once live)
All sites that use mip-script may be affected
3. Self-test checklist
Full unit test coverage
4. Scenarios and cases to cover
[ ] Covered opening MIP pages via sf
[ ] Verified MIP page behaviour in the high-speed (极速) service
5. Devices and browsers used for self-testing
[ ] Covered iOS phones
[ ] Covered Android phones
[ ] Covered iPhone-layout browsers (e.g. QQ, UC, Chrome, Safari, stock Android browser)
[ ] Covered the Baidu mobile app (手百)
Pull Request Test Coverage Report for Build 1059
0 of 0 changed or added relevant lines in 0 files are covered.
2 unchanged lines in 1 file lost coverage.
Overall coverage decreased (-0.03%) to 94.542%
Files with Coverage Reduction | New Missed Lines | %
src/util/templates.js | 2 | 94.52%
Totals
Change from base Build 1029: -0.03%
Covered Lines: 3754
Relevant Lines: 3890
💛 - Coveralls
|
gharchive/pull-request
| 2019-04-02T07:36:01 |
2025-04-01T06:39:38.272070
|
{
"authors": [
"clark-t",
"coveralls"
],
"repo": "mipengine/mip2",
"url": "https://github.com/mipengine/mip2/pull/569",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
57454856
|
Kill chrome driver before running a test
Running a test with the local chrome driver will fail when the previous test is interrupted.
To make the test process more stable, first kill the chrome driver before running the test.
Solved in commit 0f13b38a360107459bd57d155634675e73a12528 in a slightly different way.
Each instance of the driver will get a unique process name; this way it's possible to start multiple instances of the Chrome Driver.
|
gharchive/issue
| 2015-02-12T12:37:11 |
2025-04-01T06:39:38.282471
|
{
"authors": [
"lazytesting"
],
"repo": "mirabeau-nl/WbTstr.Net",
"url": "https://github.com/mirabeau-nl/WbTstr.Net/issues/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
673347174
|
libpcap: Add patch to fix build target dependency
This is because the Debian patch replaced the 'libpcap.so' target with the '$(SHARDLIB)' target in order to set the soname to 0.8.
Here are details about libpcap soname:
https://people.debian.org/~rfrancoise/libpcap-faq.html
Thanks!
Could you propose this fix also to meta-debian?
|
gharchive/pull-request
| 2020-08-05T08:13:11 |
2025-04-01T06:39:38.284358
|
{
"authors": [
"hiraku-wfs",
"mozoko"
],
"repo": "miraclelinux/meta-emlinux",
"url": "https://github.com/miraclelinux/meta-emlinux/pull/74",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
935794910
|
Compose Settings from multiple BaseSettings classes
to approach this issue: https://github.com/miracum/ahd2fhir/blob/master/ahd2fhir/kafka_setup.py#L48
Brilliant, thank you so much. Looks much cleaner! I guess technically it's a breaking change though, since the external configuration needs to be changed (including the readme.md). But personally, I am all for it.
Could you also update the readme.md's section on Kafka with the new settings names as part of this PR? https://github.com/miracum/ahd2fhir#kafka-settings
Other than that looks good to me.
Hey, glad that you like it! I'll update the README.
I noticed that pydantic's BaseSettings supports only one level of prefix nesting (see here).
A fix would be to set the full prefix on the second-level objects:
class KafkaProducerSettings(BaseSettings):
    compression_type: str = "gzip"

    class Config:
        env_prefix = "kafka_producer_"
Seems reasonable to me!
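For illustration, a minimal sketch of composing several BaseSettings classes this way (pydantic v1 API; KafkaConsumerSettings and its group_id field are made-up examples):
from pydantic import BaseSettings

class KafkaProducerSettings(BaseSettings):
    compression_type: str = "gzip"

    class Config:
        env_prefix = "kafka_producer_"

class KafkaConsumerSettings(BaseSettings):
    group_id: str = "ahd2fhir"

    class Config:
        env_prefix = "kafka_consumer_"

class Settings(BaseSettings):
    # each nested settings object reads its own flat prefix from the environment
    producer: KafkaProducerSettings = KafkaProducerSettings()
    consumer: KafkaConsumerSettings = KafkaConsumerSettings()

# with KAFKA_PRODUCER_COMPRESSION_TYPE=snappy set before import,
# Settings().producer.compression_type == "snappy"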
|
gharchive/pull-request
| 2021-07-02T13:56:16 |
2025-04-01T06:39:38.288662
|
{
"authors": [
"chgl",
"uklft"
],
"repo": "miracum/ahd2fhir",
"url": "https://github.com/miracum/ahd2fhir/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1525271998
|
Implement cloud functions for sending email
Currently, the Contact page uses Netlify forms to send emails. A more vendor-neutral option would be preferable.
With the rise of cloud functions, I think a simple JavaScript function to send mail using nodemailer or something like that could be easily implemented. This could then be run by one of the many hosting providers, as most of them offer free functions (Netlify, Vercel; Supabase offers functions too).
The added value of having an edge function handling emails as opposed to Netlify forms:
automated response back to sender
discord notification
custom email templates
Expected workflow (sketched in code after this list):
user submits contact request
an email from noreply@mirceanton.com is sent to the user to notify them that the request went through
an email is sent to contact@mirceanton.com with the details
(optional) a discord notification sent to me about it
user is redirected to some thank you page with a button to go back to the site
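A language-neutral sketch of that workflow (the issue proposes a JavaScript/nodemailer cloud function; Python's smtplib is used here purely for illustration, and the SMTP connection is assumed to be configured elsewhere):
from email.message import EmailMessage
import smtplib

def handle_contact_request(name, sender_email, message, smtp: smtplib.SMTP):
    # 1. acknowledge the sender from the no-reply address
    ack = EmailMessage()
    ack["From"], ack["To"] = "noreply@mirceanton.com", sender_email
    ack["Subject"] = "Your contact request went through"
    ack.set_content(f"Hi {name}, thanks for reaching out. We'll get back to you soon.")

    # 2. forward the details to the contact inbox
    fwd = EmailMessage()
    fwd["From"], fwd["To"] = "noreply@mirceanton.com", "contact@mirceanton.com"
    fwd["Subject"] = f"Contact request from {name}"
    fwd.set_content(message)

    for msg in (ack, fwd):
        smtp.send_message(msg)
    # 3. (optional) post a Discord webhook notification here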
|
gharchive/issue
| 2023-01-09T10:05:13 |
2025-04-01T06:39:38.309295
|
{
"authors": [
"mirceanton"
],
"repo": "mirceanton/mirceanton.com",
"url": "https://github.com/mirceanton/mirceanton.com/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
635755550
|
Minor improvements in efficiency
To make the overlap faster in general, you should add a flag for whether you want the indexes lexsorted at the end. This will be the biggest speed win.
In general, my PR is not going to improve the efficiency much, but for the case where you overlap a few intervals against a giant reference, I think my implementation will be much faster. This is because you avoid concatenating and transposing all the self-overlaps. If you check the overlap of a few intervals against a full gencode GFF, 99.99% of the results will be self overlaps.
Another possible win is to exclude all the intervals in the subject that come before the first query start or after the last query end. In general, this is not going to improve the speed much (or at all) but for the cases where you overlap a few hundred intervals against a giant interval set, it might matter a lot. This I did not do, btw, but you'd just:
keep = (starts2 <= ends1.max()) & (ends2 >= starts1.min())
starts2, ends2 = starts2[keep], ends2[keep]
I don't care if you accept this PR or not, I was mostly playing around with the code to understand the algorithm better :) I like it a lot.
oh, just saw that. I'm going to look into it right now! (hopefully i didn't reinvent the wheel with my most recent commit)
Following your observation and advice, I introduced the flag to make sorting optional. Thank you!!
Re: self-overlaps, I had to work on that too, because of a major slowdown that @mimakaev ran into (he reported it on Slack). Please take a look at the latest commits in the develop branch!
|
gharchive/pull-request
| 2020-06-09T21:01:35 |
2025-04-01T06:39:38.319698
|
{
"authors": [
"endrebak",
"golobor"
],
"repo": "mirnylab/bioframe",
"url": "https://github.com/mirnylab/bioframe/pull/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
679098244
|
Insufficient Stock Error for Digital Product Type
What I'm trying to achieve
I want to check out a digital product type irrespective of the shipping address country and without worrying about inventory.
Currently I want to offer an online e-learning course which users can buy from anywhere in the world.
I don't want to set up any warehouse or inventory, as that doesn't make sense for a digital product type.
Now when I check out on the Storefront, it throws an error saying "Not enough Quantity", as the Saleor core checkout fails with an InsufficientStock exception in check_stock_quantity(variant, country, quantity).
I have also verified these boolean fields: product_type.is_digital = True, product_type.is_shipping_required = False.
Steps to reproduce the problem
Create Digital Product Type.
Create Product with Digital Product type and don't assign quantity on any warehouse.
Checkout the product on Storefront
What I expected to happen
I should be able to checkout without any InsufficientStock exception for Digital Type of products.
Screenshots
https://prnt.sc/tzj3vw
System information
Operating system:
Linux
ProductVariant model has a field track_inventory. Can you see what value is set for that field in your product variant? When it's set to false than the quantity checks should be bypassed in checkout and it would work as you expect. This value defaults to true at the model level, but maybe it should be changed in case of digital products.
Hi @maarcingebala, yes, track_inventory is set to false.
The problem I was able to figure out is that the InsufficientStock exception is thrown before the variant.track_inventory check.
Current code (function check_stock_quantity in saleor/warehouse/availability.py):
stocks = Stock.objects.get_variant_stocks_for_country(country_code, variant)
if not stocks:
    raise InsufficientStock(variant)  # Exception is thrown here
if variant.track_inventory and quantity > _get_available_quantity(stocks):
    raise InsufficientStock(variant)
This should fix the problem:
if variant.track_inventory:
    stocks = Stock.objects.get_variant_stocks_for_country(country_code, variant)
    if not stocks:
        raise InsufficientStock(variant)
    if quantity > _get_available_quantity(stocks):
        raise InsufficientStock(variant)
Kindly advise whether this change is enough or whether it requires more changes.
At first glance, it looks like it should solve the issue, but it would need to be tested. If we apply this change and creating an order works well for digital products, we would also have to test that order fulfillment works well and is not trying to decrease stock for these products (it shouldn't, but it will be good to test).
Can you provide a PR with this fix and maybe some test for that? If not I'll add it to our tasks list but cannot promise when it will be tackled.
@maarcingebala I will create the PR with some tests.
@maarcingebala Created PR: https://github.com/mirumee/saleor/pull/6018
For order fulfillment, the test case saleor/order/tests/test_fulfullments_actions.py::test_create_fulfillments_with_variant_without_inventory_tracking_and_without_stock is passing successfully, so I haven't created any new test case.
Great! We'll take a look at the changes and will get back if anything more is needed.
|
gharchive/issue
| 2020-08-14T11:35:22 |
2025-04-01T06:39:38.336858
|
{
"authors": [
"maarcingebala",
"neerajgupta2407"
],
"repo": "mirumee/saleor",
"url": "https://github.com/mirumee/saleor/issues/6001",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
191679184
|
Consider requiring PostgreSQL
We don't want to rush it but the team seems to be convinced that requiring PostgreSQL may be a good idea. It would allow us to use hstore type columns instead of the current generic JSON field (which really is a text field and cannot be filtered or aggregated upon).
Compatibility: We require Django 1.8 and that's when most of django.contrib.postgres appeared. Not sure if there are good reasons to use Saleor with another database given that we perform most optimizations working with PostgreSQL. One downside is being unable to test locally with SQLite which is often expected. With good Docker support we should be able to make it a non-issue.
Requiring PostgreSQL for deployment would be great, but as for losing the ability to use SQLite in my development environment I'm not so keen.
Have you tried to use docker to get a working PostgreSQL instance for your project?
I haven't yet, I haven't needed to deploy anywhere.
I mean to try and use Docker for local development. docker-compose is a great tool that can get you started in no time.
@mikeres0 For dev I like to run postgres in docker. For example:
docker run --name saleor-postgres -p 6432:5432 -e POSTGRES_PASSWORD=postgres -e POSTGRES_USER=postgres -e POSTGRES_DB=saleor -d postgres
It creates a working DB on port 6432. If I mess it up or just want a clean DB, I remove the container and create a new one.
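For local development against that container, the matching Django database settings would look roughly like this (values taken from the docker run command above):
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "saleor",
        "USER": "postgres",
        "PASSWORD": "postgres",
        "HOST": "localhost",
        "PORT": "6432",
    }
}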
@patrys @krzysztofwolski I'll give this a go guys, cheers
|
gharchive/issue
| 2016-11-25T11:13:51 |
2025-04-01T06:39:38.340481
|
{
"authors": [
"krzysztofwolski",
"mikeres0",
"patrys"
],
"repo": "mirumee/saleor",
"url": "https://github.com/mirumee/saleor/issues/633",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
274426044
|
Fix very slow category index render with many discounts for #1314
This PR has some improvements for #1314
Codecov Report
Merging #1318 into master will increase coverage by 0.02%.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #1318 +/- ##
=========================================
+ Coverage 70.37% 70.4% +0.02%
=========================================
Files 117 117
Lines 6158 6164 +6
Branches 786 788 +2
=========================================
+ Hits 4334 4340 +6
Misses 1661 1661
Partials 163 163
Impacted Files | Coverage Δ
saleor/discount/models.py | 89.36% <100%> (+0.35%) :arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 86d0f84...bb86cdc. Read the comment docs.
This looks like a candidate for prefetch_related instead of a secondary caching layer. With prefetch_related calls to relation.all() become very cheap.
OK, I'll experiment and see what happens. The thing is the real issue is not the SQL queries, at least according to django debug toolbar. It shows SQL queries take ~200ms. All this time is spent creating python objects or something like that.
The first call to sale.categories.all() will make a second DB query to actually fetch the categories. Subsequent calls to sale.categories.all() will use this already fetched data, and there will be no queries to the database.
But, in this particular case when there are a lot of discounts, products and product variants even using this cached data to construct new QuerySet objects adds up. The algorithm calculating the discounts for products has exponential time which is the main reason why there is such a drastic slowdown with not so much data.
This is also confirmed by the fact that once I added caching to (products/categories).all() the next thing that popped up was the pgettext call translating the message in NotApplicable constructor:
raise NotApplicable(
    pgettext(
        'Voucher not applicable',
        'Discount not applicable for this product'))
I'm closing this PR for now, as we're rebuilding the storefront using GraphQL and React and we want to tackle optimization at the API level.
|
gharchive/pull-request
| 2017-11-16T08:03:56 |
2025-04-01T06:39:38.350054
|
{
"authors": [
"codecov-io",
"maarcingebala",
"patrys",
"zaro"
],
"repo": "mirumee/saleor",
"url": "https://github.com/mirumee/saleor/pull/1318",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
653451849
|
Trouble setting up the project
Steps to reproduce on Ubuntu L18.04:
use git to clone the project locally
create variables.env in the root directory, setting HOST to localhost:3000 (since my Node.js server starts on 3000)
change the window.location.hostname variable to localhost:3000 as well
change the proxy address to localhost:3000
This results in a bad request on Ubuntu with Node.js v10 and a "Request Header Too Long" error on Windows with Node.js v14.
Am I doing something wrong?
The project is based on create-react-app and it's running on port 3000, so you can't have that and Node.js on the same port.
Also, HOST should be your local IP on port 5000 if you choose that port for Node.js (it's already set to run on port 5000).
HOST=http://192.168.0.21:5000
and put same thing for window.location.hostname
and you should run npm run dev in the root directory
Hi, I have solved the issue by correcting my MongoDB connection URL. Thank you for clarifying.
|
gharchive/issue
| 2020-07-08T16:55:40 |
2025-04-01T06:39:38.354090
|
{
"authors": [
"MarchingVoxels",
"misa-j"
],
"repo": "misa-j/social-network",
"url": "https://github.com/misa-j/social-network/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
110035108
|
Can't use filefield with S3 storage backend
Using this setting:
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
I'm getting
NotImplementedError at /forms/submit/
This backend doesn't support absolute paths.
I wonder if it's really necessary to return the filepath in the handle_uploaded_files() function, since a StoredUploadedFile field is save in form.cleaned_data for later sending and saving.
From what I can remember, I needed the files later to be used as an attachment when sending submissions via email. See: https://github.com/mishbahr/djangocms-forms/blob/master/djangocms_forms/forms.py#L357
You seem to approach the uploaded files from the fields, while when using a FormView as you do, the files are available in form.files already. Any reason why you took this approach?
I'd like to rewrite it a bit, but don't want to overlook something.
I'm more than happy for you to refactor bits.
Just looking through the code.. can't really remember why I did it this way.
I slightly refactored the email attachment logic last night. Now we get the files from request.FILES. Please try this, and let me know.
— Mishbah
Hi mishbahr. Thank you for your work. I was doing the same this morning, and since I'm trying to improve my programming skills, I would appreciate it if you had a look at it. I've made a new pull request, PR 15, but it doesn't automerge with the changes you've done.
I think my patch specifically solves the problem I had with an S3 storage class which doesn't provide a .path method. So in my case, your solution wouldn't work. I'm going to comment on your code to open up discussion. Again, I'm not an experienced contributor, but I'd very much like to improve. Let me know what you think.
Hmm, looks like we both did a very similar thing. I removed the bit where I was trying to access the file path, and as for attachments, I'm looping through request.FILES instead of form.files as per the doc here: https://docs.djangoproject.com/en/1.8/topics/http/file-uploads/#basic-file-uploads
Please try using my latest commit and see if it works with S3 Storage. If not .. I'll try to resolve the merge conflict. Thanks
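For context, a minimal sketch of the path-free approach being discussed (not the plugin's actual code; the helper name and signature are made up). Attaching straight from request.FILES avoids storage .path entirely, so S3-backed storages work too:
from django.core.mail import EmailMessage

def send_submission_email(request, subject, body, from_email, recipients):
    email = EmailMessage(subject, body, from_email, recipients)
    for uploaded in request.FILES.values():
        uploaded.seek(0)  # rewind so we don't attach an empty bytestring
        email.attach(uploaded.name, uploaded.read(), uploaded.content_type)
    email.send()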
UPDATE: I have just tried my code using django-storages-redux==1.3 with:
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
It uploads to the S3 bucket just fine. However the email attachment seems to be 0 bytes :-(
Yes. I fixed this. One moment.
It's because Django's file object returns an empty bytestring when the file object is not open yet. I'm not sure if this is expected behaviour. It should probably return an exception rather than just a zero length bytestring. the #16 PR runs for me now.
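A minimal sketch of that constraint when attaching uploads (purely illustrative, not the plugin's actual code; subject and addresses are placeholders): each file is (re)opened before read(), otherwise read() may return an empty bytestring.
```python
from django.core.mail import EmailMessage

def build_email(request):
    email = EmailMessage(
        subject="Form submission",          # placeholder values
        body="See attached files.",
        to=["recipient@example.com"],
    )
    for uploaded in request.FILES.values():
        uploaded.open()                     # reopen/rewind the underlying file before reading
        email.attach(uploaded.name, uploaded.read(), uploaded.content_type)
    return email
```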
btw, I wondered why you create a hash for file names at all. Is that because they wouldn't get overwritten? Doesn't Django create a new filename anyway if it already exists? I also noticed that you generate a hash only once for every file field, which means filenames uploaded within the same form would have the same hash. Intended?
Just committed fully working code :-)
As far as using a hash in the filename goes -- it's not based on the filename but rather just a UUID appended to it. So I'm hoping it will be unique per file.
If you'd like it to be unique per file, you should probably put it within the for loop for every file field in handle_uploaded_files. No?
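A small sketch of that per-file variant (names are illustrative only; the real handle_uploaded_files does more than this): a fresh UUID is generated inside the loop, once per uploaded file.
```python
import os
import uuid

def unique_name(original_name):
    stem, ext = os.path.splitext(original_name)
    return "%s_%s%s" % (stem, uuid.uuid4().hex, ext)

def handle_uploaded_files(files):
    # files is expected to be request.FILES (or an equivalent mapping)
    saved = []
    for field_name, uploaded in files.items():
        saved.append((field_name, unique_name(uploaded.name)))
    return saved
```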
btw, I've sent you a Google Chat invitation. Thought that would make it easier to talk. Couldn't see your name in the django-cms chatroom either. My nickname is TrioTorus.
You are correct!
Fixed in 88dda1fea8f61c61686e40058f58307ab58d0d31
Thanks
Happy to help. I'm delighted you made this plugin, and now it works with S3!
I'll push it to PyPi :-)
|
gharchive/issue
| 2015-10-06T15:25:48 |
2025-04-01T06:39:38.373859
|
{
"authors": [
"driesdesmet",
"mishbahr"
],
"repo": "mishbahr/djangocms-forms",
"url": "https://github.com/mishbahr/djangocms-forms/issues/14",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1909660245
|
Pass obs normalizer to defend policy
Update the train_against_easy.py script to pass the observation normalizer to the DefendGen function correctly.
Haha I literally just made a branch to address this 5 minutes ago!
|
gharchive/pull-request
| 2023-09-23T01:25:24 |
2025-04-01T06:39:38.421682
|
{
"authors": [
"PatP15",
"makaimann"
],
"repo": "mit-ll-trusted-autonomy/pyquaticus",
"url": "https://github.com/mit-ll-trusted-autonomy/pyquaticus/pull/17",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1901560474
|
changing environment variable to correct name
The huggingface token should be under HUGGING_FACE_HUB_TOKEN, not HF_TOKEN for huggingface authentication. Changes are made to reflect this.
This PR is not complete. I missed a couple changes. Will tear this down and add a new one.
|
gharchive/pull-request
| 2023-09-18T18:57:26 |
2025-04-01T06:39:38.428276
|
{
"authors": [
"julius-heitkoetter"
],
"repo": "mit-submit/A2rchi",
"url": "https://github.com/mit-submit/A2rchi/pull/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
117659117
|
Update to latest CA bundles
Closes #63
LGTM
|
gharchive/pull-request
| 2015-11-18T19:15:58 |
2025-04-01T06:39:38.436782
|
{
"authors": [
"mitchellh",
"sethvargo"
],
"repo": "mitchellh/vagrant-installers",
"url": "https://github.com/mitchellh/vagrant-installers/pull/66",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2180372073
|
[FALSE-POSITIVE] saudi-k.com
Domains or links
Please list any domains and links listed here which you believe are a false positive.
saudi-k.com
More Information
How did you discover your web site or domain was listed here?
Website was hacked
Have you requested removal from other sources?
Please include all relevant links to your existing removals / whitelistings.
sophos
avira
webroot
eset
Netcraft
:exclamation:
We understand being listed on a Phishing Database like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
Send a Pull Request for faster removal
Users who understand github and creating Pull Requests can assist us with faster removals by sending a PR to mitchellkrogza/phishing repository, on the falsepositive.list file
https://github.com/mitchellkrogza/phishing/blob/main/falsepositive.list
Please include the same above information to help speed up the whitelisting process.
Dear Team,
Please update on this request.
Thank you.
@funilrys @mitchellkrogza
|
gharchive/issue
| 2024-03-11T22:29:23 |
2025-04-01T06:39:38.449347
|
{
"authors": [
"bilalag"
],
"repo": "mitchellkrogza/Phishing.Database",
"url": "https://github.com/mitchellkrogza/Phishing.Database/issues/841",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
316572651
|
menshealth.com
Match found in list.34.hosts.ubuntu101.co.za.domains:
menshealth.com
www.menshealth.com
As this domain is also blocked by a well-known list, I chose not to whitelist it.
|
gharchive/issue
| 2018-04-22T11:50:55 |
2025-04-01T06:39:38.451339
|
{
"authors": [
"funilrys",
"xxcriticxx"
],
"repo": "mitchellkrogza/Ultimate.Hosts.Blacklist",
"url": "https://github.com/mitchellkrogza/Ultimate.Hosts.Blacklist/issues/301",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
240436156
|
host=localhost not possible with docker mac
Not sure if anyone else has experienced this same issue.
I am using the docker-compose modeldb installation on a mac, and unable to connect the docker-machine to localhost. Here's another forum discussing the issue: https://forums.docker.com/t/using-localhost-for-to-access-running-container/3148
As a workaround, I used this script in the [path to modeldb directory]; it find/replaces all the instances of localhost in the routing files.
sed -i -e 's/localhost/<docker-machine address>/g' server/src/main/resources/reference.conf
sed -i -e 's/localhost/<docker-machine address>/g' server/src/main/resources/reference-docker.conf
sed -i -e 's/localhost/<docker-machine address>/g' server/src/main/resources/reference-test.conf
sed -i -e 's/localhost/<docker-machine address>/g' client/syncer.json
sed -i -e 's/localhost/<docker-machine address>/g' frontend/util/check_thrift.js
sed -i -e 's/localhost/<docker-machine address>/g' frontend/util/thrift.js
sed -i -e 's/localhost/<docker-machine address>/g' client/python/modeldb/basic/Structs.py
For <docker-machine address>, do: $ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Running tcp://192.168.99.100:2376 v17.05.0-ce
After that I was able to add new projects to the docker/modeldb.
I can branch and submit a pull request. Or, is there a better way? Please advise.
@justinanderson
Hi @soellingeraj, it sounds like you're using the older Docker Toolbox instead of Docker for Mac. If your system supports it, I recommend switching to Docker for Mac. It doesn't use virtualbox anymore and so removes many of the headaches like port forwarding involved with Docker development on a Mac.
If you use Docker for Mac, it will forward all exposed container ports to localhost automatically. docker-compose up should leave you with a web server reachable at http://localhost:3000/ and a docker-machine ls that looks like this:
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Stopped Unknown
If you need to run Docker Toolbox, I suggest setting up port forwarding for the virtualbox VM used by docker-machine, as detailed in this Stack Overflow answer.
thank you. that worked.
you can close this one.
|
gharchive/issue
| 2017-07-04T14:25:56 |
2025-04-01T06:39:38.457301
|
{
"authors": [
"justinanderson",
"mpvartak",
"soellingeraj"
],
"repo": "mitdbg/modeldb",
"url": "https://github.com/mitdbg/modeldb/issues/258",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
167885633
|
Integrate mitmproxy contentviews
first PR for this issue, can you tell me more about the content generator?
Content views:
https://github.com/mitmproxy/mitmproxy/blob/d97fe767dc7b8ea47f0e170c6f002c506f606d57/mitmproxy/contentviews.py#L621-L632
https://github.com/mitmproxy/mitmproxy/blob/d97fe767dc7b8ea47f0e170c6f002c506f606d57/mitmproxy/contentviews.py#L117-L135
The content generator yields lists of (style, text) tuples, where each list represents a single line.
In a nutshell, a generator is a sequence you can iterate over. The main difference compared to a list is that generators are computed lazily. See: https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/
>>> import pprint
>>> from mitmproxy import contentviews
>>> view = contentviews.ViewURLEncoded()
>>> description, generator = view(b"foo=42&bar=43&baz=44")
>>> description
'URLEncoded form'
>>> generator
<generator object format_dict at 0x041DE1E0>
>>> lines = list(generator)
>>> pprint.pprint(lines)
[[('header', 'foo: '), ('text', '42')],
[('header', 'bar: '), ('text', '43')],
[('header', 'baz: '), ('text', '44')]]
The style is normally just "text", but it can also be "highlight", "offset" or "header". This is probably best solved by giving the span a matching className.
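A rough sketch of that idea (not mitmweb's actual code): each line of (style, text) tuples becomes a row of spans, with the style name used as the CSS class.
```python
import html

def lines_to_html(lines):
    out = []
    for line in lines:
        spans = "".join(
            f'<span class="{style}">{html.escape(text)}</span>'
            for style, text in line
        )
        out.append(f"<div>{spans}</div>")
    return "\n".join(out)
```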
I am struggling with the sticky contentview options; currently it's not working, but the rest should be fine.
Can you resolve the merge conflicts on this please?
This looks good. Let's get this in a mergable state ASAP so that we can move that into master and do smaller iterations there! :smiley:
|
gharchive/pull-request
| 2016-07-27T15:37:32 |
2025-04-01T06:39:38.466144
|
{
"authors": [
"cle1000",
"mhils"
],
"repo": "mitmproxy/mitmproxy",
"url": "https://github.com/mitmproxy/mitmproxy/pull/1441",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1002143718
|
asadiqbal08/Update xPro references
fixes: #10
Updated .html and .scss from xPro references.
@arslanashraf7 I guess there is another story for that https://github.com/mitodl/mitxonline-theme/issues/12
@arslanashraf7 I guess for point#1, there is another story for that #12
@asadiqbal08 , I didn't notice that ticket exists. But point#2,3 I think still belongs to this PR.
|
gharchive/pull-request
| 2021-09-21T09:57:48 |
2025-04-01T06:39:38.486301
|
{
"authors": [
"arslanashraf7",
"asadiqbal08"
],
"repo": "mitodl/mitxonline-theme",
"url": "https://github.com/mitodl/mitxonline-theme/pull/24",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1150775071
|
Epic: Support Program Requirements
We already have some initial support for programs in the form of a Program and ProgramEnrollment and want to expand on that to support ecommerce specifically for MicroMasters programs more thoroughly.
[x] A Program should be able to specify required Courses
[x] A Program should be able to specify elective Courses
[x] Program progress / completion tracking
[x] Program Certificates
Far Future
[ ] A Course should be able to specify prerequisite Courses
I checked all the boxes except for the "far future" one.
|
gharchive/issue
| 2022-02-25T19:24:53 |
2025-04-01T06:39:38.488944
|
{
"authors": [
"pdpinch",
"rhysyngsun"
],
"repo": "mitodl/mitxonline",
"url": "https://github.com/mitodl/mitxonline/issues/443",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
438057449
|
The RPC server is unavailable.
Hello,
I like this project very much. I don't know how to solve this problem. I hope you can help me.
look at it
As you can see, I opened the RPC service, but I still couldn't perform this operation.
Another problem is that my server uses redhunter; when I load pstools, it prompts that the permissions aren't enough, and although I've tried to modify the permissions, it doesn't work.
If there is a systemic language problem, I think I was wrong from the beginning.
Look forward to your reply
If this issue still exists after upgrading to the most recent version of Caldera, feel free to re-open this issue or create a new one. Until then, this issue is being closed due to lack of activity.
|
gharchive/issue
| 2018-10-11T02:33:28 |
2025-04-01T06:39:38.531974
|
{
"authors": [
"ArtificialErmine",
"qq854051086"
],
"repo": "mitre/adversary",
"url": "https://github.com/mitre/adversary/issues/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1500367721
|
Switch to a node-based IR
Building on PR #41, this change refactors the logic in op.cpp, eval_llvm.cpp and eval_cuda.cpp to switch to a node-based IR.
In particular:
The very large generic function jit_var_new_op() was transformed into a set of operation-specific functions like jit_var_add() and jit_var_fma(). These produce abstract IR representations, whose codegen-specific bits are now part of the corresponding backends (a toy sketch of the idea follows below). Splitting the generic function leads to code that is easier to understand and likely easier to compile as well.
The somewhat wordy pattern jit[c]_var_new_x was changed to the shorter jit[c]_var_x, throughout the codebase where x is usually an action.
It is now impossible to create statement-type variables (string-based IR) through the public API. However, internally, there are still a bunch of places that create such variables (CUDA textures, printf, ray tracing, ...). Eventually it will be nice to get rid of these as well, but let's do it in another PR (this one is already way too large)
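A purely conceptual sketch of the node-based approach (written in Python only for brevity; the actual implementation is C++): the IR node records the operation and its operands, and each backend decides how to render it, instead of the variable carrying a pre-rendered statement string. The emitted strings below are illustrative, not Dr.Jit's real output.
```python
from dataclasses import dataclass

@dataclass
class Node:
    op: str        # e.g. "add", "fma"
    args: tuple    # operand names or literals, kept abstract here

def llvm_render(n):
    if n.op == "add":
        a, b = n.args
        return "%%out = fadd float %s, %s" % (a, b)
    raise NotImplementedError(n.op)

def cuda_render(n):
    if n.op == "add":
        a, b = n.args
        return "add.f32 %%out, %s, %s;" % (a, b)
    raise NotImplementedError(n.op)

print(llvm_render(Node("add", ("%a", "%b"))))   # same node, backend-specific text
print(cuda_render(Node("add", ("%a", "%b"))))
```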
The PR rolls in a few other minor changes:
Dr.Jit must know the LLVM version to generate the right set of intrinsics (which change between versions). To do this properly, it now tries harder to detect the version. When all else fails, it uses a fallback that resolves symbols known to only exist in specific LLVM versions, which is good enough for inferring the major version.
It removes the managed and managed-read-only CUDA memory types that we never used. They are only poorly supported on Windows and have a bunch of performance pitfalls.
It improves the formatting of some routines (dr.whos(), dr.graphviz()) that had bit-rotted somewhat.
Analogous to the LLVM backend, the CUDA and OptiX backends were reorganized into parts related to dynamic API resolution ({cuda,optix}_api.{h,cpp}) and the rest (_core.cpp)
I renamed eval_{cuda,llvm}.cpp to {cuda,llvm}_eval.cpp to follow the naming convention of the other files.
There was quite a bit of complex code in the memory allocation cache for propagating allocations between different threads that is no longer needed now that each device has a central queue. All of that could be removed, which will hopefully make it easier to support additional backends in the future.
It turns out that LLVM still had a per-thread queue, which wasn't consistent with how the CUDA backend works. I changed it so that there is also a global queue that different threads submit to. (The minor performance benefits of a per-thread queue are outweighed by the difficulty of getting it to work correctly when mitsuba loads scenes in parallel)
(Updated.)
I did an interactive rebase to squash some sub-commits and integrated your feedback. I added one more minor change (https://github.com/mitsuba-renderer/drjit-core/pull/52/commits/a3c7e5bef28e06dfe79e42dba3917645992241ab) to change the printf_async operation from textual IR into abstract IR.
|
gharchive/pull-request
| 2022-12-16T14:55:48 |
2025-04-01T06:39:38.540385
|
{
"authors": [
"wjakob"
],
"repo": "mitsuba-renderer/drjit-core",
"url": "https://github.com/mitsuba-renderer/drjit-core/pull/52",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
98051764
|
Read/Write arbitrary string.
Use for checking the original type information.
Do not support.
|
gharchive/issue
| 2015-07-29T23:39:32 |
2025-04-01T06:39:38.544807
|
{
"authors": [
"mitsuse"
],
"repo": "mitsuse/serial-go",
"url": "https://github.com/mitsuse/serial-go/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1522620438
|
ERROR: Detected a package name collision
This bug happened to me when I added qthttpserver to an installation. Without qthttpserver everything installs fine.
List cmd:
$ aqt list-qt linux desktop --modules 6.4.2 gcc_64
debug_info qt3d qt5compat qtcharts qtconnectivity qtdatavis3d qthttpserver qtimageformats qtlanguageserver qtlottie qtmultimedia qtnetworkauth qtpdf qtpositioning qtquick3d qtquick3dphysics qtquicktimeline qtremoteobjects qtscxml qtsensors qtserialbus qtserialport qtshadertools qtspeech qtvirtualkeyboard qtwaylandcompositor qtwebchannel qtwebengine qtwebsockets qtwebview
Install cmd:
$ aqt install-qt -O /opt/Qt linux desktop 6.4.2 gcc_64 --m qt5compat qtcharts qthttpserver qtimageformats qtlottie qtmultimedia qtquicktimeline qtspeech qtwebchannel qtwebsockets qthttpserver
INFO : aqtinstall(aqt) v3.1.0 on Python 3.10.6 [CPython GCC 11.3.0]
WARNING : Specified Qt version is unknown: 6.4.2.
ERROR : Detected a package name collision
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/aqt/installer.py", line 177, in run
args.func(args)
File "/usr/local/lib/python3.10/dist-packages/aqt/installer.py", line 400, in run_install_qt
qt_archives: QtArchives = retry_on_bad_connection(
File "/usr/local/lib/python3.10/dist-packages/aqt/helper.py", line 165, in retry_on_bad_connection
return function(base_url)
File "/usr/local/lib/python3.10/dist-packages/aqt/installer.py", line 401, in
lambda base_url: QtArchives(
File "/usr/local/lib/python3.10/dist-packages/aqt/archives.py", line 300, in init
self._get_archives()
File "/usr/local/lib/python3.10/dist-packages/aqt/archives.py", line 365, in _get_archives
self.get_archives_base(f"qt{self.version.major}{self._version_str()}{self._arch_ext()}", self._target_packages())
File "/usr/local/lib/python3.10/dist-packages/aqt/archives.py", line 361, in _target_packages
target_packages.add(module, package_names)
File "/usr/local/lib/python3.10/dist-packages/aqt/archives.py", line 94, in add
assert package_name not in self._packages_to_modules, "Detected a package name collision"
AssertionError: Detected a package name collision
ERROR : aqtinstall(aqt) v3.1.0 on Python 3.10.6 [CPython GCC 11.3.0]
Working dir: /
Arguments: ['/usr/local/bin/aqt', 'install-qt', '-O', '/opt/Qt', 'linux', 'desktop', '6.4.2', 'gcc_64', '--m', 'qt5compat', 'qtcharts', 'qthttpserver', 'qtimageformats', 'qtlottie', 'qtmultimedia', 'qtquicktimeline', 'qtspeech', 'qtwebchannel', 'qtwebsockets', 'qthttpserver'] Host: uname_result(system='Linux', node='4a4d7c6111e3', release='5.15.0-53-generic', version='#59-Ubuntu SMP Mon Oct 17 18:53:30 UTC 2022', machine='x86_64')
===========================PLEASE FILE A BUG REPORT===========================
You have discovered a bug in aqt.
Please file a bug report at https://github.com/miurahr/aqtinstall/issues
Please remember to include a copy of this program's output in your report.
This is definitely a CLI bug, but there's an easy workaround. You have the module qthttpserver listed twice in your command. If you remove one of the duplicates, your command should work properly.
#633 should fix this, if you want to try it. You should be able to put qthttpserver into the module list as many times as you want and it should still work.
The CI runs are failing right now due to an issue with tox; I'm not sure what the problem is, but it exists in the master branch as well.
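For reference, a sketch of the de-duplication idea that avoids the collision assertion (whether #633 implements exactly this is not shown here): drop repeated module names while preserving order.
```python
def dedupe(modules):
    seen = set()
    out = []
    for m in modules:
        if m not in seen:
            seen.add(m)
            out.append(m)
    return out

print(dedupe(["qt5compat", "qthttpserver", "qtcharts", "qthttpserver"]))
# ['qt5compat', 'qthttpserver', 'qtcharts']
```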
Hi!
I tried the fix and yes it is working now.
|
gharchive/issue
| 2023-01-06T13:46:34 |
2025-04-01T06:39:38.555018
|
{
"authors": [
"ddalcino",
"vkuznetsovgn"
],
"repo": "miurahr/aqtinstall",
"url": "https://github.com/miurahr/aqtinstall/issues/632",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1144843634
|
Add explanation of --config flag in CLI docs
This adds the -c | --config flag to the CLI section of the documentation.
This problem was discovered while investigating #488. It does not attempt to fix that issue.
Thank you for the fix. merged.
|
gharchive/pull-request
| 2022-02-19T20:52:22 |
2025-04-01T06:39:38.556371
|
{
"authors": [
"ddalcino",
"miurahr"
],
"repo": "miurahr/aqtinstall",
"url": "https://github.com/miurahr/aqtinstall/pull/491",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
89256071
|
Rollback metrics versions
The new version has problems through AWS ELBs.
Thanks!
|
gharchive/pull-request
| 2015-06-18T10:29:45 |
2025-04-01T06:39:38.567781
|
{
"authors": [
"calumlean",
"neilprosser"
],
"repo": "mixradio/mr-clojure",
"url": "https://github.com/mixradio/mr-clojure/pull/16",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
312981793
|
Links to URLs ending in brackets are truncated
URLs like https://en.wikipedia.org/wiki/Set_(mathematics) which end with a bracket seem to get truncated by Mistletoe. It seems to grab a URL by searching for the first ) character, omitting brackets and any following characters. For instance:
$ mistletoe
mistletoe [version 0.5.4] (interactive)
Type Ctrl-D to complete input, or Ctrl-C to exit.
>>> [link](https://en.wikipedia.org/wiki/Set_(mathematics))
...
<p><a href="https://en.wikipedia.org/wiki/Set_%28mathematics">link</a>)
</p>
I'm not sure if this is a serious issue---after all, we can encode the bracket in the URL as %29. For what it's worth, GitHub's flavour of markdown seems to be happy with URLs ending in (at least one) bracket. The string [link](https://en.wikipedia.org/wiki/Set_(mathematics)) results in link, and all is well.
Ah, should be fixed in 2706db3. The problem was caused by a lazy matching character in the regex, because I was trying to solve the problem of multiple links in a paragraph. Disallowing whitespaces (as I should have done from the beginning, derp..) in URLs bypasses this problem.
mistletoe [version 0.5.5] (interactive)
Type Ctrl-D to complete input, or Ctrl-C to exit.
>>> [link](https://en.wikipedia.org/wiki/Set_(mathematics))
...
<p><a href="https://en.wikipedia.org/wiki/Set_%28mathematics%29">link</a>
</p>
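A toy illustration of why the regex change helps (these are not mistletoe's real patterns): a lazy match stops at the first ')', while a greedy "no whitespace" match backtracks just enough to leave the final ')' for the closing delimiter.
```python
import re

lazy = re.compile(r"\[(?P<text>[^\]]*)\]\((?P<url>.*?)\)")
no_ws = re.compile(r"\[(?P<text>[^\]]*)\]\((?P<url>\S*)\)")

s = "[link](https://en.wikipedia.org/wiki/Set_(mathematics))"
print(lazy.search(s).group("url"))   # https://en.wikipedia.org/wiki/Set_(mathematics
print(no_ws.search(s).group("url"))  # https://en.wikipedia.org/wiki/Set_(mathematics)
```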
This fix will be included in the next pypi release, so please use the dev branch if you need it working now. Thanks for the bug report!
|
gharchive/issue
| 2018-04-10T15:44:32 |
2025-04-01T06:39:38.619861
|
{
"authors": [
"DMRobertson",
"miyuchina"
],
"repo": "miyuchina/mistletoe",
"url": "https://github.com/miyuchina/mistletoe/issues/32",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
111958099
|
add reporter feature
add three reporters
Related #23
status ok
|
gharchive/pull-request
| 2015-10-17T11:30:52 |
2025-04-01T06:39:38.621823
|
{
"authors": [
"mizunashi-mana"
],
"repo": "mizunashi-mana/node-sonparser",
"url": "https://github.com/mizunashi-mana/node-sonparser/pull/27",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
173073978
|
Question: why would you need to store state that isn't in the url?
Should the user expect to get a different page when they visit a url each time?
Sure. See for example the Pinterest example in React Router.
|
gharchive/issue
| 2016-08-24T22:34:16 |
2025-04-01T06:39:38.624345
|
{
"authors": [
"ntucker",
"taion"
],
"repo": "mjackson/history",
"url": "https://github.com/mjackson/history/issues/353",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1859570494
|
Track metadata about the day
As a stats keeper, I want to track details about the day, so that I can provide context to the launch outputs.
For example, tracking what is submitted in the VGS Ops Data Return (stats) for each day (weather, staffing, awards)
Will be worth adding the MT state as well, but unsure if we are reporting this back on the GUR anymore?
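A minimal sketch of what such a per-day record might hold (the field names below are guesses, not the project's actual schema):
```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DaySummary:
    day: date
    weather: str = ""
    staffing: str = ""
    awards: list = field(default_factory=list)
    mt_state: str = ""   # MT serviceability, if it is still reported on the GUR
```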
|
gharchive/issue
| 2023-08-21T15:02:14 |
2025-04-01T06:39:38.643008
|
{
"authors": [
"CameronMacG",
"mjennings061"
],
"repo": "mjennings061/viking-log-keeper",
"url": "https://github.com/mjennings061/viking-log-keeper/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
333547931
|
Command line error: No input files found
Hello!
The compilation works, but the watch does not.
My version :
"mjml": "^4.0.5"
I'm on windows with Git Bash.
hey @lefuturiste, thanks for reporting. It's a known issue, we're going to fix it very soon in v4.1.0.
I'm closing this one to the benefit of the already-opened issue: https://github.com/mjmlio/mjml/issues/1171
Oh, now I know that I must have the file "index.mjml" in the directory before entering this command?
|
gharchive/issue
| 2018-06-19T07:21:51 |
2025-04-01T06:39:38.648330
|
{
"authors": [
"klimov-rv",
"lefuturiste",
"ngarnier"
],
"repo": "mjmlio/mjml",
"url": "https://github.com/mjmlio/mjml/issues/1246",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
244666976
|
MJ-column issue when mobile rendering is used
Hello there,
I've been trying to set up a template with some buttons with a text overlay, and I came up with the following code.
<mj-column width="25%">
<mj-image width="133px" src="http://static.evensi.com/business-square.png" />
<mj-text align="center"><h3 class="footer-interest-text-box"><a href="https://twitter.com/RecastAI" style="margin-top: -70px;" class="footer-interest-text">Business</a></h3></mj-text>
</mj-column>
The css classes declared at the top are properly used, but the issue I'm experiencing is related to the layout in the mobile view. The 25% width of the column is somewhat ignored and not coded in the final HTML. As a result, the screen renders a list of boxes (I have 6 of them, as per the above), piled up vertically in an inelegant way.
Can anybody help me figure out what's wrong?
Thanks
Hi @Evensier
You should take a look at https://mjml.io/documentation/#mjml-group it keeps columns inside mj-group as inline in mobile
I'm closing this issue, feel free to reopen if it doesn't work for you
I've tried that already, even before posting this issue, but things are getting overlapped with the design above.
mj-table could work for this, if I'm understanding the issue correctly
You might need to do 2 groups of 50/50:
mj-group
mj-column x2 50%
mj-group
mj-column x2 50%
The image could be too wide to allow you 4 columns on mobile
Ok sort of getting what I need, although the mj-group doesn't expose any css-class property. Any idea on how to add some style there?
mj-group does support css-class like every other element in the body
@iRyusa true, but for whatever reason the MJML compiler adds some inline style that overrides the class behaviour.
Any idea on how to avoid that?
I've created a fiddle here with part of the code I've written. Would love some advice to understand what I'm doing wrong
https://jsfiddle.net/syLdxd1e/
Thanks for the help.
|
gharchive/issue
| 2017-07-21T13:21:22 |
2025-04-01T06:39:38.656791
|
{
"authors": [
"Evensier",
"dalefish",
"iRyusa"
],
"repo": "mjmlio/mjml",
"url": "https://github.com/mjmlio/mjml/issues/752",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2158700425
|
[FIX] Watch wrong files on CLI
Should fix #2823 soon
@iRyusa mind releasing a patch for this? Thank you for your work! :pray:
Should be available in 5.x branch
|
gharchive/pull-request
| 2024-02-28T11:06:06 |
2025-04-01T06:39:38.658282
|
{
"authors": [
"dargmuesli",
"iRyusa"
],
"repo": "mjmlio/mjml",
"url": "https://github.com/mjmlio/mjml/pull/2838",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
303825397
|
Question about the adapters pattern
Considering my current understanding of the adapters pattern - waiting to stand corrected - we're going to define adapter mapping trees.
Will it exclusively go in one way, ie:
from django.db.models import Person
# add returns a clone of the payload, so we're instanciating 2 payloads here
# because the FormView adapter's post_add() will add(Form adapter, clone=False)
# if it's not already there ?
p = Payload.factory(instance=Person()).add('django.views.ModelFormView')
# executing a step returns a clone of the payload, but we don't care:
# we have an adapter mapping on data, a request, and we want a response
assert p.steps.response(request=request).response
The tutorial demonstrates how the above would be possible, but it might look like this (not tested code, obviously clumsy):
import Person model
make a payload with an empty instance,
because it's a model, factory will add the django model adapter, which has post_add():
introspect the payload,
map payload.instance._meta.fields to keys of the Payload corresponding to field names
with the appropriate adapters for each field
ie. payload.map.name.adapters == [StringAdapter(max_length=255)]
add validate and clean steps
add the modelformview adapter on the person payload
add the form adapter on the personpayload,
introspect the payload, and map keys corresponding form fields to model fields
ie. payload.map.name.adapters == [StringAdapter(max_length=255), TextFieldAdapter(label="my verbose name")]
add validate and clean steps on the person payload
add a render step
add the template adapter on the person,
which adds a render step with a default template name
that will be able to see other adapter's render outputs
add a response step which to orchestrate other steps,
but needs payload.request, and sets payload.response
in a clone as usual when a step is executed
unless clone=False, for calling steps from within steps
execute the response step by adding request to the payload
modelformview adapter response will try to execute all prior steps,
if no errors are added by clean() step then process() step will save
if there are no errors, response() will: if errors were added at the validate step, show the form again; otherwise redirect to the detail view; if errors were added during process, who knows, and honestly I leave it up to you what the default behaviour will be, since it should be so easy to override not only the method but also the default adapter registered for ModelFormView!
Or, will it allow building a nested adapter map, and then be able to generate a model class with another adapter?
class Hobby(adapters.Payload):
    name = Payload(adapters=[StringAdapter()])

class Person(adapters.Payload):
    hobbies = Payload(map=[HobbyAdapter()])

    class Meta:
        adapters = [OnlyAllowHobbiesToBe('archery', 'django', 'music')]
p = Person().add('django.db.models.Model')
# custom step by django model adapter, optinal, sets payload.model if not already present
p.steps.modelize().model
Another possibility is to make everything an adapter, which can have adapters that know about their parent; in which case steps are also adapters, they just orchestrate the adapters in the mapping structure, and defining a step is just defining a method which may depend on previously executed methods.
Sorry if this doesn't make any sense; please correct me ;)
If that makes sense to you, then you probably understand why I consider this worth a million dollars in terms of refactoring and code reusability.
Closing this for now; it's not supported.
|
gharchive/issue
| 2018-03-09T12:28:03 |
2025-04-01T06:39:38.670083
|
{
"authors": [
"jpic"
],
"repo": "mjtamlyn/django-adapters",
"url": "https://github.com/mjtamlyn/django-adapters/issues/39",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
678419423
|
Refactor duplicated code in tests, remove duplicated references from test projects
Fix #158, merge after #188
:x: Build ConfuserEx 407 failed (commit https://github.com/mkaring/ConfuserEx/commit/59830d5eb4 by @KvanTTT)
:x: Build ConfuserEx 409 failed (commit https://github.com/mkaring/ConfuserEx/commit/4a8592482f by @KvanTTT)
:x: Build ConfuserEx 410 failed (commit https://github.com/mkaring/ConfuserEx/commit/93caef267e by @KvanTTT)
:x: Build ConfuserEx 411 failed (commit https://github.com/mkaring/ConfuserEx/commit/48f4be13a2 by @KvanTTT)
|
gharchive/pull-request
| 2020-08-13T13:01:56 |
2025-04-01T06:39:38.678783
|
{
"authors": [
"AppVeyorBot",
"KvanTTT"
],
"repo": "mkaring/ConfuserEx",
"url": "https://github.com/mkaring/ConfuserEx/pull/189",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2098490953
|
bug: ModuleNotFoundError: No module named 'mkdocstrings_handlers'
Description of the bug
When I try to build the docs I get "ModuleNotFoundError: No module named 'mkdocstrings_handlers'". This is a fresh installation and a new, empty mkdocs project, and I still get this error as soon as I add a class or a function.
To Reproduce
```
pip3 install mkdocs
pip3 install mkdocstring
mkdocs new docs
cd docs
mkdocs build
```
Full traceback
INFO - Cleaning site directory
INFO - Building documentation to directory:
/home/sam/Documents/project/project/has/docs/site
WARNING - A relative path to 'subfolder/functions.md' is included in the
'nav' configuration, which is not found in the documentation
files.
WARNING - A relative path to 'functions.md' is included in the 'nav'
configuration, which is not found in the documentation files.
ERROR - Error reading page 'index.md': No module named
'mkdocstrings_handlers'
Traceback (most recent call last):
File "/home/sam/.local/bin/mkdocs", line 8, in <module>
sys.exit(cli())
File "/home/sam/.local/lib/python3.9/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/home/sam/.local/lib/python3.9/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/home/sam/.local/lib/python3.9/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/sam/.local/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/sam/.local/lib/python3.9/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/home/sam/.local/lib/python3.9/site-packages/mkdocs/__main__.py", line 286, in build_command
build.build(cfg, dirty=not clean)
File "/home/sam/.local/lib/python3.9/site-packages/mkdocs/commands/build.py", line 322, in build
_populate_page(file.page, config, files, dirty)
File "/home/sam/.local/lib/python3.9/site-packages/mkdocs/commands/build.py", line 175, in _populate_page
page.render(config, files)
File "/home/sam/.local/lib/python3.9/site-packages/mkdocs/structure/pages.py", line 271, in render
self.content = md.convert(self.markdown)
File "/home/sam/.local/lib/python3.9/site-packages/markdown/core.py", line 357, in convert
root = self.parser.parseDocument(self.lines).getroot()
File "/home/sam/.local/lib/python3.9/site-packages/markdown/blockparser.py", line 117, in parseDocument
self.parseChunk(self.root, '\n'.join(lines))
File "/home/sam/.local/lib/python3.9/site-packages/markdown/blockparser.py", line 136, in parseChunk
self.parseBlocks(parent, text.split('\n\n'))
File "/home/sam/.local/lib/python3.9/site-packages/markdown/blockparser.py", line 158, in parseBlocks
if processor.run(parent, blocks) is not False:
File "/home/sam/.local/lib/python3.9/site-packages/mkdocstrings/extension.py", line 124, in run
html, handler, data = self._process_block(identifier, block, heading_level)
File "/home/sam/.local/lib/python3.9/site-packages/mkdocstrings/extension.py", line 195, in _process_block
handler = self._handlers.get_handler(handler_name, handler_config)
File "/home/sam/.local/lib/python3.9/site-packages/mkdocstrings/handlers/base.py", line 459, in get_handler
module = importlib.import_module(f"mkdocstrings_handlers.{name}")
File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 984, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'mkdocstrings_handlers'
Expected behavior
Since I put a class in index.md, I expect the site to be generated correctly.
Environment information
python3 -m mkdocstrings.debug # | xclip -selection clipboard
System: Linux-5.10.0-26-amd64-x86_64-with-glibc2.31
Python: cpython 3.9.2
Environment variables:
PYTHONPATH: :/home/sam/Documents/Project/project
Installed packages:
mkdocstrings v0.24.0
Additional context
Duplicate of #623, #647
You mean I have to install mkdocstrings-python with pip?
Yes :)
|
gharchive/issue
| 2024-01-24T15:13:47 |
2025-04-01T06:39:38.720164
|
{
"authors": [
"cldtech",
"pawamoy"
],
"repo": "mkdocstrings/mkdocstrings",
"url": "https://github.com/mkdocstrings/mkdocstrings/issues/648",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
251364946
|
share buttons not working
hello
thank you for your great work and for sharing it
I installed your plugin on my website; copying the link and the embed code work perfectly, but not the social buttons: I tried them all
here's a URL to check https://videos.arabeevideo.com/watch/p127-royal-enfield-cont
is it possible to add custom embed code?
You can pass custom embed code as embedCode property in the plugin options object.
Would you be kind and provide an example of share button implementation (fb or smth)? 😸
Take a look at https://neuron-digital.github.io/wjplayer/examples/mp4.html
I'm not very comfortable with js programming nor with videojs, but it seems like this is a customized player with many advanced features.
I took a look into the source code; it seems you packed plugins together. Is it possible to add other plugins, like playlist or context menu, to your player?
@tunmsk videojs has that already. He improved it with his own plugin.
Check on https://github.com/videojs/video.js
|
gharchive/issue
| 2017-08-18T21:37:36 |
2025-04-01T06:39:38.725417
|
{
"authors": [
"kenanbalija",
"mkhazov",
"tunmsk"
],
"repo": "mkhazov/videojs-share",
"url": "https://github.com/mkhazov/videojs-share/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
462203336
|
Insert result set resulting in "pyodbc.ProgrammingError: ('String data, right truncation: length 8 buffer 4', 'HY000')" error.
I am using pyodbc version 4.0.26. Inserting a result set (containing a decimal result) with "fast_executemany" set to True results in a "pyodbc.ProgrammingError: ('String data, right truncation: length 8 buffer 4', 'HY000')" error.
Please provide a minimum reproducible example and/or an ODBC trace, along with the name and version of the ODBC driver you are using.
SQL.LOG
Hi, I have attached the ODBC trace.
Below are the versions of Software I am using
Python Version: 3.7
pyodbc version: 4.0.26
DB: Teradata
Driver: Teradata 15.10.01.03
OS: Win10
The previous version of the issue title indicates that you are using "fastexecutemany". Does the error go away if you use fast_executemany=False (the default)? If so, then the Teradata ODBC driver may simply not support "parameter arrays", the internal ODBC feature that allows fast_executemany=True to work.
Yes. It works when fast_executemany = False. I have used fast_executemany = True on the same Teradata driver for other tables, and it worked. I am having the issue for this one table. Could it be an issue with the data I am inserting? The data I am inserting is extracted from a VSAM file (Mainframe).
The error
pyodbc.ProgrammingError: ('String data, right truncation: length 8 buffer 4', 'HY000')
clearly indicates that a string parameter value is overflowing the space allocated to it. Are you passing the decimal parameter values as strings? If so, can you try passing those parameter values as Decimal instead of str, e.g., Decimal('3.14') instead of '3.14'?
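A hedged sketch of that suggestion (DSN, table and column names below are placeholders, not from the original report): convert the string values to Decimal before the fast_executemany insert.
```python
from decimal import Decimal
import pyodbc

conn = pyodbc.connect("DSN=teradata_dsn")
cur = conn.cursor()
cur.fast_executemany = True

rows = [("A", "3.14"), ("B", "2.71")]                      # values arriving as strings
rows = [(name, Decimal(value)) for name, value in rows]    # pass Decimal, not str
cur.executemany("INSERT INTO t (name, amount) VALUES (?, ?)", rows)
conn.commit()
```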
Yes. I am passing decimal parameter values as string. I passed those values as decimal. It worked. Thanks so much for your inputs.
You're welcome. Glad to hear that you got it working. You can close this issue now.
|
gharchive/issue
| 2019-06-28T20:52:24 |
2025-04-01T06:39:38.738774
|
{
"authors": [
"dvarna",
"gordthompson"
],
"repo": "mkleehammer/pyodbc",
"url": "https://github.com/mkleehammer/pyodbc/issues/579",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
}
|
238410570
|
open method not visible when using WKZombie in Objective C project
Hi, after adding WKZombie to my podfile and adding @import WKZombie;, I can invoke some methods on WKZombie.shareInstance, e.g. dump, setTimeoutSeconds, but the essential open method is not visible for some reason.
Would you have any hints on how to solve this?
thanks.
Hi @billylo1 Thanks for reporting this. You're right, there seems to be an issue with the extensions not being visible in Objective C code. I'll look into it.
Thanks! Looking forward to it!
Sorry for the confusion! I forgot that Swift generic functions are not supported by Objective-C. So this is the correct behaviour. However, you should be able to add a Swift file to your Objective-C project (Mix and Match) and use WKZombie in there. Hope that helps!
I did try that. Importing the WKZombie-swift.h to make WKZombie visible to the objective C code. But I can't find a way to invoke "open". Are you able to make it work in Xcode?
Just add a Swift file (e.g. Test.swift) to your project and simply add "import WKZombie". Create a class/function, do all the headless browsing there and hand the result back to your Objective-C code.
|
gharchive/issue
| 2017-06-25T22:16:21 |
2025-04-01T06:39:38.742322
|
{
"authors": [
"billylo1",
"mkoehnke"
],
"repo": "mkoehnke/WKZombie",
"url": "https://github.com/mkoehnke/WKZombie/issues/65",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2428372889
|
Refactor cardano-db-sync code and add tests
Closes #67
~Note: this PR breaks the cardanow-ts nix package: in particular the checkPhase is no longer passing; it seems the mocking is not working as expected~
how is the derivation build not failing if tests are failing? I don't see where you disabled them, and if I run nix build locally I see that some tests are executed
the first draft had that, but I managed to fix that so we are currently testing things in CI, good catch
probably not the end of the world since we are talking about tests anyway but is the as unknown as Mock thing unavoidable? Or perhaps it's something commonly done in Typescript? (forgive my ignorance)
Not sure honestly, I'm not a TS expert either; I'll drop a todo commit so we can look into this better later
The idea is that we start docker during the tests because we need a database? I can't understand how it's mocked otherwise. Perhaps this is why the derivation check phase fails? (running docker in a derivation sandbox may be non trivial)
No we are not, we are only mocking the TS code, docker is not involved
|
gharchive/pull-request
| 2024-07-24T20:01:11 |
2025-04-01T06:39:38.753828
|
{
"authors": [
"albertodvp"
],
"repo": "mlabs-haskell/cardanow",
"url": "https://github.com/mlabs-haskell/cardanow/pull/80",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2064147211
|
[Bug] Unable to recognize vocab.json, merges.txt tokenizer format
🐛 Bug
according to #31, mlc-llm should already support the vocab.json, merges.txt tokenizer format.
But when I try to run inference with CausalLM/72B-preview-llamafied-qwen-llamafy, I run into an error saying the tokenizer cannot be found
>>> from mlc_chat import ChatModule
>>> cm = ChatModule(model="/home/alphaarea/models/CausalLM-7B-DPO-alpha-q0f16", model_lib_path="/home/alphaarea/models/CausalLM-7B-DPO-alpha-q0f16/CausalLM-7B-DPO-alpha-q0f16-cuda.so")
[2024-01-03 14:04:58] INFO model_metadata.py:55: Total memory usage: 9917.13 MB (Parameters: 5462.51 MB. KVCache: 1024.00 MB. Temporary buffer: 3430.62 MB)
[2024-01-03 14:04:58] INFO model_metadata.py:64: To reduce memory usage, tweak `prefill_chunk_size`, `context_window_size` and `sliding_window_size`
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/alphaarea/.conda/envs/mlc-llm-20240103/lib/python3.11/site-packages/mlc_chat/chat_module.py", line 774, in __init__
self._reload(self.model_lib_path, self.model_path, user_chat_config_json_str)
File "/home/alphaarea/.conda/envs/mlc-llm-20240103/lib/python3.11/site-packages/mlc_chat/chat_module.py", line 988, in _reload
self._reload_func(lib, model_path, app_config_json)
File "tvm/_ffi/_cython/./packed_func.pxi", line 332, in tvm._ffi._cy3.core.PackedFuncBase.__call__
File "tvm/_ffi/_cython/./packed_func.pxi", line 263, in tvm._ffi._cy3.core.FuncCall
File "tvm/_ffi/_cython/./packed_func.pxi", line 252, in tvm._ffi._cy3.core.FuncCall3
File "tvm/_ffi/_cython/./base.pxi", line 182, in tvm._ffi._cy3.core.CHECK_CALL
File "/home/alphaarea/.conda/envs/mlc-llm-20240103/lib/python3.11/site-packages/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
raise py_err
File "/workspace/mlc-llm/cpp/llm_chat.cc", line 1532, in mlc::llm::LLMChatModule::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
File "/workspace/mlc-llm/cpp/llm_chat.cc", line 553, in mlc::llm::LLMChat::Reload(tvm::runtime::TVMArgValue, tvm::runtime::String, tvm::runtime::String)
File "/workspace/mlc-llm/cpp/tokenizers.cc", line 63, in mlc::llm::TokenizerFromPath(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
tvm._ffi.base.TVMError: Traceback (most recent call last):
3: mlc::llm::LLMChatModule::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
at /workspace/mlc-llm/cpp/llm_chat.cc:1532
2: mlc::llm::LLMChat::Reload(tvm::runtime::TVMArgValue, tvm::runtime::String, tvm::runtime::String)
at /workspace/mlc-llm/cpp/llm_chat.cc:553
1: mlc::llm::TokenizerFromPath(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
at /workspace/mlc-llm/cpp/tokenizers.cc:63
0: _ZN3tvm7runtime6deta
File "/workspace/mlc-llm/cpp/tokenizers.cc", line 63
TVMError: Cannot find any tokenizer under: /home/alphaarea/models/CausalLM-7B-DPO-alpha-q0f16
Here is the mlc-chat-config.json generated by mlc_chat gen_config, which looks like it already recognizes the tokenizer file. But it doesn't work in practice
{
...
"tokenizer_files": [
"vocab.json",
"merges.txt",
"tokenizer_config.json"
],
"version": "0.1.0"
}
To Reproduce
Several of CausalLM's models use the same tokenizer format, which can be reproduced by downloading the smallest model CausalLM/7B-DPO-alpha
convert, gen_config and compile
MODEL_PATH='/home/alphaarea/models/CausalLM-7B-DPO-alpha'
MLC_QUANT='q0f16'
MLC_DEV='cuda'
MLC_SHARDS=4
MODEL_ARCH='llama'
MODEL_TEMP='gpt2'
MODEL_NAME=${MODEL_PATH##*/}
MODEL_OUTPUT=$MODEL_PATH'-'$MLC_QUANT
MODEL_LIB=$MODEL_NAME'-'$MLC_QUANT'-'$MLC_DEV'.so'
mlc_chat convert_weight --quantization $MLC_QUANT --model-type $MODEL_ARCH --output $MODEL_OUTPUT $MODEL_PATH
mlc_chat gen_config --quantization $MLC_QUANT --model-type $MODEL_ARCH --conv-template $MODEL_TEMP --tensor-parallel-shards $MLC_SHARDS --output $MODEL_OUTPUT $MODEL_PATH
mlc_chat compile --device $MLC_DEV --output $MODEL_OUTPUT/$MODEL_LIB $MODEL_OUTPUT/mlc-chat-config.json
run in python
from mlc_chat import ChatModule
cm = ChatModule(model="/yourpath/CausalLM-7B-DPO-alpha-q0f16", model_lib_path="/yourpath/CausalLM-7B-DPO-alpha-q0f16/CausalLM-7B-DPO-alpha-q0f16-cuda.so")
Expected behavior
TVMError: Cannot find any tokenizer under: /yourpath/CausalLM-7B-DPO-alpha-q0f16
Environment
Platform: CUDA
Operating system: Ubuntu 22.04.3 LTS
Device: Tesla P100
How you installed MLC-LLM: python3 -m pip install --pre -U -f https://mlc.ai/wheels mlc-chat-nightly-cu121 mlc-ai-nightly-cu121
How you installed TVM-Unity: pip
Python version: 3.11
GPU driver version: 545.23.08
CUDA/cuDNN version: 12.1
Currently we support the following tokenizers:
SentencePiece
HuggingFace
RWKV world
Byte-level BPE
See the tokenizer finding logic here for more details: https://github.com/mlc-ai/mlc-llm/blob/main/cpp/tokenizers.cc. The tokenizer-related files that exist (vocab.json, merges.txt and tokenizer_config.json) don't match any of the patterns in our tokenizer detection, and that's why an error is reported.
Not super familiar with the tokenizer part - could you share which tokenizer it is? Please feel free to contribute if you are interested!
I've figured it out: the models I've been able to run without problems in the past have the tokenizer_class LlamaTokenizer or LlamaTokenizerFast in tokenizer_config.json, but CausalLM-72B uses GPT2Tokenizer. And I had a similar problem when I tried Nous-Capybara-34B, whose tokenizer_class is YiTokenizer.
Does HuggingFace tokenizers support currently mean only LlamaTokenizer and LlamaTokenizerFast?
In the future, will the common tokenizers supported by transformers.AutoTokenizer be more easily supported by mlc-llm?
I do think fundamentally we support full HuggingFace tokenizers because we compile its full source in rust: https://github.com/mlc-ai/tokenizers-cpp with some wrapping logic:
Rust wrapper: https://github.com/mlc-ai/tokenizers-cpp/blob/main/rust/src/lib.rs#L106-L130
Expose Rust wrapper in C++: https://github.com/mlc-ai/tokenizers-cpp/blob/main/src/huggingface_tokenizer.cc#L84
It would be awesome if you'd love to contribute adding the related wrappers to tokenizers-cpp!
Ah I just noticed that GPT2Tokenizer is actually a byte-level BPE tokenizer, which is supported already. We only need to figure out what the missing file added_tokens.json is used for
I saw a blog explaining this file. Let me know if it's helpful! https://blog.rfox.eu/en/Programming/How_to_run_your_own_LLM_GPT.html
I'll see if there's a way of generating an added_tokens.json during gen_config, just like how we currently convert tokenizer.model to tokenizer.json there.
https://github.com/mlc-ai/mlc-llm/blob/main/python/mlc_chat/interface/gen_config.py#L132-L153
Meanwhile we might have to do it manually.
@CharlieFRuan I tried to add an added_tokens.json with an empty json string "{}", but got another error from our tokenizer wrapper complaining about merges.txt:
thread '<unnamed>' panicked at 'Invalid merges.txt file.', src/lib.rs:63:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
fatal runtime error: failed to initiate panic, error 5
This line in our tokenizer wrapper tries to split each line in merges.txt into two pieces, separated by " ", which fails at the last line, which only contains:
å
Note that the merges.txt file is different from GPT2's official one.
OK I know how to make this work now! @alphaarea There are two things you will want to patch up (a rough sketch follows below):
Add an added_tokens.json which contains an empty json object: "{}"
Replace the truncated merges.txt with Qwen's official one.
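A rough sketch of those two patch-up steps (the paths below are placeholders, not real locations):
```python
import json
import pathlib
import shutil

model_dir = pathlib.Path("CausalLM-7B-DPO-alpha-q0f16")

# 1. An added_tokens.json containing just an empty JSON object.
(model_dir / "added_tokens.json").write_text(json.dumps({}))

# 2. Overwrite the truncated merges.txt with a full copy obtained separately
#    (for example from a llamafied Qwen tokenizer repo).
shutil.copy("downloads/qwen-merges.txt", model_dir / "merges.txt")
```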
Many thanks, I successfully ran CausalLM/7B-DPO-alpha following your way. English and Chinese both run well.
I also compared the merges.txt from CausalLM/7B-DPO-alpha and vonjack/Qwen-LLaMAfied-HFTok-7B-Chat. The front half of them is exactly the same, but CausalLM/7B-DPO-alpha's merges.txt is shorter and looks like it has an incomplete ending: the last line only contains the single character å. I deleted that line and tried to run it, and I can't believe it works well. It seems no different from using the merges.txt from vonjack/Qwen-LLaMAfied-HFTok-7B-Chat.
Is this due to an error in the file provided by CausalLM itself?
When I used the same approach on CausalLM/72B-preview-llamafied-qwen-llamafy, it didn't have the same effect: it outputs broken characters, and it looks like the tokenizer still has an error.
So I ran the test all over again, and I noticed a warning when starting convert_weight
[2024-01-06 11:57:23] WARNING utils.py:25: Unused extern parameters: model.layers.0.self_attn.k_proj.bias, model.layers.0.self_attn.o_proj.bias, model.layers.0.self_attn.q_proj.bias, model.layers.0.self_attn.v_proj.bias, model.layers.1.self_attn.k_proj.bias...
(Incomplete, the warning is very long)
Is this the reason why the model outputs broken characters? Other than that, I haven't encountered any other warning messages.
CausalLM is not a popular model; I'm not sure if there's something wrong with the model itself.
If the maintainers determine that these errors are caused by CausalLM's non-standard llama model itself, please close the issue.
|
gharchive/issue
| 2024-01-03T14:38:43 |
2025-04-01T06:39:38.783079
|
{
"authors": [
"CharlieFRuan",
"alphaarea",
"junrushao"
],
"repo": "mlc-ai/mlc-llm",
"url": "https://github.com/mlc-ai/mlc-llm/issues/1533",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1842528965
|
[Bug] Unsupported gpu architecture 'compute_89'
🐛 Bug
I got this error while building the model.
Compilation error:
nvcc fatal : Unsupported gpu architecture 'compute_89'
I didn't encounter it before. After I commented out https://github.com/mlc-ai/mlc-llm/pull/686, the error was resolved.
What's your cuda version? I guess we are trying to build sm89 while your local nvcc does not support it.
We should not enable fatbin by default cc @MasterJH5574
The problem should be solved if you upgrade to CUDA 11.8. Meanwhile, we will fix it and turn it off by default
Thanks @Cydia2018 for reporting! With #716 we will be good to go.
#716 gets merged. Please open an issue again if the issue persists :-)
|
gharchive/issue
| 2023-08-09T05:43:15 |
2025-04-01T06:39:38.787753
|
{
"authors": [
"Cydia2018",
"Hzfengsy",
"MasterJH5574"
],
"repo": "mlc-ai/mlc-llm",
"url": "https://github.com/mlc-ai/mlc-llm/issues/710",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1699840927
|
Detect SVG when XML declaration is missing
Hello,
I'm currently working on a project that relies heavily on file-format to handle images according to their media type.
I've just finished the implementation of SVG support in our app and, while testing, the crate failed to detect that our logo is an SVG. The main reason is that it lacks the <?xml declaration, so the code never reaches this part.
After looking at our (source) SVGs, I found that there are many cases:
some of them contain the <?xml declaration;
some of them contain an xmlns attribute on <svg>;
The key point of this issue is that, as per this doc, <?xml is optional unless the encoding is not UTF-8 or UTF-16.
While I made a patch for our needs (and I'm willing to finish up the PR), I wanted to open this issue to talk about other SVG versions and how they can be treated better for wider / more general use.
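A rough sketch of the detection idea (Python here purely for illustration; the crate itself is Rust and the real patch is more careful): accept a file as SVG when an <svg root element appears near the start, with or without a leading <?xml declaration.
```python
def looks_like_svg(data, probe=1024):
    head = data[:probe].lstrip()
    if head.startswith(b"<?xml"):
        head = head.split(b"?>", 1)[-1].lstrip()
    # Very naive: real detection would also skip comments and a possible DOCTYPE.
    return head.startswith(b"<svg") or b"<svg" in head

print(looks_like_svg(b'<svg xmlns="http://www.w3.org/2000/svg"></svg>'))  # True
```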
Hello,
Thanks for your PR!
Indeed, some XML-based formats such as SVG may not be detected. After some research, it turns out that XML 1.0 has an optional declaration whereas with XML 1.1 it is mandatory.
Ideally, I will have to deal with all XML-based formats when they do not have an XML declaration. I will comment directly on your PR for SVG.
Hi,
The patch will be available in version 0.17 later this week.
Hi,
I'm a bit late because I also wanted to resolve #21 for version 0.17.
I'm going to do a bit of code review, the release should arrive very soon!
Version 0.17.0 published including this fix!
Thanks!
|
gharchive/issue
| 2023-05-08T08:59:32 |
2025-04-01T06:39:38.841109
|
{
"authors": [
"mmalecot",
"petru-tazz"
],
"repo": "mmalecot/file-format",
"url": "https://github.com/mmalecot/file-format/issues/22",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
667603139
|
samsung UE40F6400 volume control
Hi everyone, how do I turn on volume control? Thank you.
Volume is implemented but doesn't show up at the moment. It however shows up in the homebridge accessories tab for me. I still have to figure out why iOS isn't showing the speaker service.
Hi, I also see it as an accessory in Homebridge but not on iOS. Thank you.
I have a theory that iOS requires some other characteristics, like CurrentMediaState, before it will show the volume characteristics. Unfortunately, I haven't found any more documentation concerning this topic in the homebridge docs or Apple's HomeKit documentation yet.
I just figured out that I can control the volume with the iPhone hardware buttons when in control center -> remote -> tv selected on top... I still don't know how to toggle mute in the remote app however. The source for this info also says that the home app just doesn't show the tv speaker accessory like other apps do. I might add an option to add a "lightbulb" accessory or so to be able to control the volume/mute directly in the home app like other plugins do.
The native accessory of Apple TV 4 has volume control... maybe you can investigate on that side...
Gastón
Maybe it will help:
https://developer.apple.com/documentation/mediaplayer/mpvolumeview
Gastón
In node-red, linked service "TelevisionSpeaker" and Characteristic:
{
"VolumeControlType":1,
"VolumeSelector":true
}
So I see it in the control center, with output {"RemoteKey":4} UP; {"RemoteKey":5} DOWN; {"RemoteKey":7} RIGHT; {"RemoteKey":6} LEFT
In case you can use it.
Now it works!!!
Nice... A more fine-grained volume control is not accessible in iOS yet, unfortunately. However, it is theoretically implemented in this plugin. I hope iOS 14 will bring volume controls for TVs.
|
gharchive/issue
| 2020-07-29T07:01:09 |
2025-04-01T06:39:38.867708
|
{
"authors": [
"ggroel",
"mmende",
"simoneras"
],
"repo": "mmende/homebridge-samsungtv-control2",
"url": "https://github.com/mmende/homebridge-samsungtv-control2/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1032438137
|
GoReleaser with Homebrew
Adding .goreleaser.yml setup to use Homebrew
Codecov Report
Merging #3 (b8a35cb) into main (c383555) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #3 +/- ##
=======================================
Coverage 86.11% 86.11%
=======================================
Files 1 1
Lines 108 108
=======================================
Hits 93 93
Misses 9 9
Partials 6 6
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c383555...b8a35cb. Read the comment docs.
|
gharchive/pull-request
| 2021-10-21T12:31:40 |
2025-04-01T06:39:38.879336
|
{
"authors": [
"codecov-commenter",
"mmiranda"
],
"repo": "mmiranda/markdown-index",
"url": "https://github.com/mmiranda/markdown-index/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2116916176
|
Lecture Request: Separate course or module for installing a Kubernetes cluster
Your Workstation
Windows 10 Laptop, 16 GB RAM, 8 core i7 CPU
What happened?
I would like to see a course on Kubernetes installation covering various scenarios: bare metal, cloud, virtualization, single master, multi-master, managing multiple clusters, etc.
Relevant log output
No response
Hello
Please see the following:
https://github.com/kodekloudhub/certified-kubernetes-administrator-course/tree/master/kubeadm-clusters
https://github.com/kodekloudhub/certified-kubernetes-administrator-course/tree/master/managed-clusters/eks
For general questions not directly related to this repo, please use our forum here https://community.kodekloud.com/
|
gharchive/issue
| 2024-02-04T06:58:39 |
2025-04-01T06:39:38.897151
|
{
"authors": [
"fireflycons",
"pugazhendhiramakrishnan08121985"
],
"repo": "mmumshad/kubernetes-the-hard-way",
"url": "https://github.com/mmumshad/kubernetes-the-hard-way/issues/330",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
648642117
|
Week 3 walkthrough 4
This PR marks the completion of Step 4 of the Walkthrough for Week 3. Further commits on this branch will reflect feedback directly on this PR and feedback given on the Week 3 Step 3 PR.
*Now that Week 3 Step 3's PR has been approved, I am requesting review of this PR.
Sounds good! I'll do that & open a new pull request in reply
|
gharchive/pull-request
| 2020-07-01T03:26:58 |
2025-04-01T06:39:38.898491
|
{
"authors": [
"mmurray22"
],
"repo": "mmurray22/my-portfolio",
"url": "https://github.com/mmurray22/my-portfolio/pull/17",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1201931030
|
Plot not show
Hi, I am running MNE (v1.0) in VS Code (Jupyter notebook) on macOS with an M1 chip. I wanted to plot the raw data in the new qt-browser, but it only popped up the window and no time series was shown.
And the error message showed ImportError("Unable to load OpenGL library", *err.args)
Hello @ts-mindyourmind, how did you install mne-qt-browser? Could you please paste the output of
import mne
mne.sys_info()
I assume you didn't install the PyOpenGL package? It's recommended on macOS.
can you try with
raw.plot(scalings="auto")
?
Alex
Did you install PyOpenGL?
Also your mne-qt-browser is slightly outdated, the latest version is 0.3.0; the latest MNE version is 1.0.1
Yes, I have installed PyOpenGL, and I upgraded mne and mne-qt-browser just now. Still not working.
I still appreciate your help!
Okay, I will switch to a new environment and try again.
Thanks again!
@ts-mindyourmind Could your problem be resolved? If so, I will close this issue.
Yes, it has been solved. Thank you!
|
gharchive/issue
| 2022-04-12T14:24:33 |
2025-04-01T06:39:38.913960
|
{
"authors": [
"agramfort",
"hoechenberger",
"marsipu",
"ts-mindyourmind"
],
"repo": "mne-tools/mne-qt-browser",
"url": "https://github.com/mne-tools/mne-qt-browser/issues/115",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2329895720
|
Update README.md branch1
Task name:
Description:
Documentation changes required: (Y/N)
Added unit tests: (Y/N)
Attached related issue: (Y/N)
Checklist:
[ ] Task version was bumped - please check instruction how to do it
[ ] Checked that applied changes work as expected
All good
|
gharchive/pull-request
| 2024-06-02T22:40:09 |
2025-04-01T06:39:38.939611
|
{
"authors": [
"moadmct"
],
"repo": "moadmct/azure-pipelines-tasks",
"url": "https://github.com/moadmct/azure-pipelines-tasks/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
300993913
|
Subgraphs
Define components of graph
graph {
edge 'A', 'B'
subgraph {
edge 'C', 'D'
subgraph {
edge 'X', 'Y'
}
}
edge 'Y', 'Z'
}
When a vertex or edge is added it is added to parent subgraphs and main graph (may need listeners)
graphs are maps
subgraph is a graph
subgraph can have subgraphs
if vertex or edge is missing an entry it will check all parent graphs for entry and return found value
vertex or edge can be in multiple subgraphs
subgraph is always same type as main graph (share type variable)
subgraphs can be named
If a named subgraph is used inside other subgraphs, what is the behavior? How does Graphviz do it?
Graph {
subgraph {
color = 'blue'
edge 'A', 'B'
}
}
Edges and vertices will be blue.
In graphviz
If a default attribute is defined using a node, edge, or graph statement, or by an attribute assignment not attached to a node or edge, any object of the appropriate type defined afterwards will inherit this attribute value. This holds until the default attribute is set to a new value, from which point the new value is used. Objects defined before a default attribute is set will have an empty string value attached to the attribute once the default attribute definition is made.
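To illustrate the propagation behaviour listed above (vertices and edges added in a subgraph also appearing in the enclosing graphs), here is a minimal Groovy sketch; it is only an illustration of the idea, not the library's API, and all names are made up.
class Subgraph {
    def parent                 // enclosing graph or subgraph, null for the root
    Map vertices = [:]
    Map edges = [:]

    def vertex(String name) {
        vertices[name] = vertices[name] ?: [name: name]
        parent?.vertex(name)   // propagate to enclosing graphs
        vertices[name]
    }

    def edge(String from, String to) {
        vertex(from)
        vertex(to)
        String key = from + '-' + to
        edges[key] = edges[key] ?: [from: from, to: to]
        parent?.edge(from, to) // propagate to enclosing graphs
        edges[key]
    }
}

def root = new Subgraph()
def inner = new Subgraph(parent: root)
inner.edge('C', 'D')
assert root.edges.containsKey('C-D')   // the edge is visible in the main graph too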
|
gharchive/issue
| 2018-02-28T11:46:31 |
2025-04-01T06:39:38.943274
|
{
"authors": [
"moaxcp"
],
"repo": "moaxcp/graph-dsl",
"url": "https://github.com/moaxcp/graph-dsl/issues/109",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2501626713
|
debug maestro-e2e-output not being present when tests fail
The artifacts stopped appearing.
Looks like it's caused by https://github.com/mobile-dev-inc/maestro/pull/2007. Sigh, GitHub Actions, the hopeless abomination.
|
gharchive/pull-request
| 2024-09-02T22:06:56 |
2025-04-01T06:39:38.981350
|
{
"authors": [
"bartekpacia"
],
"repo": "mobile-dev-inc/maestro",
"url": "https://github.com/mobile-dev-inc/maestro/pull/2007",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1861373234
|
Comments from Maxx Dekkers
Comments received from Maxx Dekkers (SEMIC Group), Email on July 21th, 2023:
Property: Visibility status
Could dct:accessRights not be used here? From the definition of this property at DCMI combined with the use of the EU vocabulary Access right it seems that that Dublin Core property would be able to serve the need.
Property: Data Model Schema
Could dct:conformsTo not be used here? The definition of DCAT includes a usage note for this property that seems to align quite well with the definition of your mobilitydcapap:dataModelSchema.
My general comment here is that if you define ‘local’ properties where you could possibly reuse existing properties, you understand that those data will not be understandable to others outside your domain. So if you share catalogue records with a more general DCAT(-AP) implementation, the access restrictions are no longer maintained. For the distributions the information provided for dataModelSchema will be lost while it could still be useful for the user.
Property: Legal Framework
In the section on Controlled Vocabularies, there is a link to https://eur-lex.europa.eu/eli-register/eu_publications_office.html as the mandatory controlled vocabulary for the property legal framework. However, that link does not point to a controlled set of terms identifying particular legal documents, but points to information on the description schema for legal documents so I think it doesn’t fit there. Maybe the only thing necessary is to add to the usage note of the property that it is recommended to use ELI to refer to legislation whenever possible. By the way, ELI is not only used for European legislation; many countries already use it for national legislation, see https://eur-lex.europa.eu/eli-register/implementation.html.
Class: Assessment
One problem I see here is that you have mappings from two different semantic definitions to the same property (oa:hasBody) – although they really only differ in the expression of the information. First of all, this approach makes it impossible for an application that receives the data to distinguish between them, unless it looks at the encoding. Secondly, there are quite some restrictions on the use of a Literal as value on oa:hasBody. So, no language tag and only plain text allowed. Otherwise you’re encouraged to use oa:TextualBody. So, you could have a single property oa:hasBody with range rdfs:Resource with the usage note that, in case you don’t have a URL to point to, textual information can be included using the Embedded Textual Body construction, which allows you to specify text formats and languages which might be relevant for multilingual purposes.
Regarding 2:
Peter Lubrich:
In fact, "dct:conformsTo" might be used "to indicate the model, schema, ontology, view or profile that this representation of a dataset conforms to".
However, we introduced multiple properties under the class Distribution, that describe the technical format:
format dct:format
data model mobilitydcatap:dataModel
data model version mobilitydcatap:dataModelVersion
data model schema mobilitydcatap:dataModelSchema
grammar mobilitydcatap:grammar
-> Such differentiation is very specific to the transportation domain, and we want to have such clear differentiation !
-> We could now replace each of our proprietary properties above ("mobilitydcat-ap:...") with "dct:conformsTo".
-> But then we would lose our intended differentiation !
-> On the other side, Maxx is right, any information from our proprietary properties might get lost when exchanging metadata with non-transportation portals!
Your opinions?
Regarding 3:
Peter Lubrich:
The question here is: Is the ELI system a controlled vocabulary or not?
Either way, we want to have it used for our property "mobilitydcatap:legalFramework".
Suggestion:
Add a hint in the usage note next to the property, linking to ELI, as suggested by Maxx.
Still list the ELI in our section "Controlled vocabularies to be used", (so it doesn't get ignored), but also mention here that this is not a real Controlled Vocab!
Regarding the "visibilty status":
The idea behind it was that data descriptions could be unfinished, on hold or in general not published.
For statistics on datasets this field has a huge benefit for the NAPs.
For operating the API for data exchange, this is the filter to distinguish "sendable" or not.
But it is not necessary to have it in the DCAT-AP Profile if only published data descriptions are exchanged.
Regarding 2: data format "dct:conformsTo"
The information around the data format, encoding, used schema and so on is very important for data users and data services!!
Therefore we should keep this differentiation.
If we exchange metadata with other portals, it is up to the harvesting portal how they handle the additional information.
Regarding 4:
I agree with Maxx' suggestion: we will only use one single property oa:hasBody, and make an additional usage note about (optional)textual information
Regarding 1:
I responded to Maxx as follows:
_Well, when you look closely at the EU vocabulary for "Access right", it seems to control the access to content data, whereas the metadata is exchanged in any case. In contrast, we wanted to control the access/visibility of metadata. So, "Access right" does not seem to be the right replacement for our proposed property.
However, the only options for our property are "true" (= metadata is exchanged) or "false" (= metadata is not exchanged). The latter is not relevant, as this metadata stays (temporarily) within the data platform. In this sense, the "metadata visibility" is not information to be exchanged, but rather platform-internal information. So, we will give up the "Visibility status" for now.
We may re-introduce it at a later time, as there are some use cases with "limited" or "restricted" metadata visibility (in transportation, much (meta)data is considered non-open!). This means that only selected receivers can see the metadata, or that some receivers can only see partial metadata. We will discuss this later._
Regarding 1:
We got a response by Makx as follows:
I understand you want to look at this later. As this information is not going to be exchanged but rather used for the data platform itself, it could indeed be considered at a later stage. However, contrary to what you wrote, the CatalogRecord gives information about the metadata, so asserting dct:accessRights on CatalogRecord will give the visibility of the metadata, not of the content. To describe the visibility of the content, you would use the property dct:accessRights on dcat:Dataset.
-> Conclusion: for mobilityDCAT-AP v1.0, the proposal is to take out the "visibility status" property. We might consider this again for v2.0, when we have a clearer picture of the use cases for restricted metadata visibility.
Regarding 2:
We got a response by Makx as follows:
Making these properties subproperties of dct:conformsTo indeed allows other DCAT implementations to understand what the general meaning of these properties is, so this makes sense. One additional comment is that, if we understand correctly, the dataModelVersion and dataModelSchema properties describe characteristics of the data model and not of the distribution, so it would be more correct to define those as properties of a separate entity, for example a class mobilitydcatap:DataModel, which could be a subclass of dct:Standard.
-> Conclusion: I really like the proposal to introduce a new class "mobilitydcatap:DataModel" (a small sketch follows below).
This class will be the range of the property "mobilitydcatap:dataModel" (so far, it has the generic range "skos:Concept").
This class will be a sub-class of the class "dct:Standard".
This class has two optional properties:
"owl:versionInfo" (formerly proposed as the proprietary property "mobilitydcatap:dataModelVersion")
"mobilitydcatap:dataModelSchema" (as a sub-property of "dct:conformsTo")
Regarding 3:
We got a response by Makx as follows:
In the work on High-Value Datasets, a property dcatap:applicableLegislation (applied to the DCATAP HVD extension here) was defined that has the same meaning as your property mobilitydcatap:legalFramework. You could use the more general property from the dcatap namespace.
-> Conclusion: we change the property from "mobilitydcatap:legalFramework" to "dcatap:applicableLegislation".
My conclusion for topics 1,2,3 above would also result in a modified UML diagram as follows.
For example, note the new class "mobilitydcatap:dataModel" in the upper-right corner.
I took over all proposals under "conclusions" above for points 1, 2, 3 and 4.
|
gharchive/issue
| 2023-08-22T12:36:58 |
2025-04-01T06:39:39.002111
|
{
"authors": [
"BWitsch",
"peterlubrich"
],
"repo": "mobilityDCAT-AP/mobilityDCAT-AP",
"url": "https://github.com/mobilityDCAT-AP/mobilityDCAT-AP/issues/12",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1855980086
|
🛑 PizzaRiing Web is down
In f9f6a0f, PizzaRiing Web (https://pizzariing.uy) was down:
HTTP code: 0
Response time: 0 ms
Resolved: PizzaRiing Web is back up in 06c1dbc.
|
gharchive/issue
| 2023-08-18T03:20:28 |
2025-04-01T06:39:39.005357
|
{
"authors": [
"mobilitysol"
],
"repo": "mobilitysol/monitorweb",
"url": "https://github.com/mobilitysol/monitorweb/issues/1588",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1659475245
|
🛑 RequetePizza Web is down
In 6c2525d, RequetePizza Web (https://requetepizza.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: RequetePizza Web is back up in b24d2b8.
|
gharchive/issue
| 2023-04-08T10:09:46 |
2025-04-01T06:39:39.008290
|
{
"authors": [
"mobilitysol"
],
"repo": "mobilitysol/monitorweb",
"url": "https://github.com/mobilitysol/monitorweb/issues/821",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
651623115
|
[18.09 backport] bridge: disable IPv6 router advertisements
Please consider backporting this fix for CVE-2020-13401. Thanks!
Signed-off-by: Samuel Karp skarp@amazon.com
(cherry picked from commit 153d0769a1181bf591a9637fd487a541ec7db1e6)
Signed-off-by: Sam Whited sam@samwhited.com
18.09 is no longer maintained
I know, sadly we still have to use the bump_18.09 branch at work and don't have a way to migrate off of it yet.
/cc @adamparco
|
gharchive/pull-request
| 2020-07-06T15:19:42 |
2025-04-01T06:39:39.017300
|
{
"authors": [
"AkihiroSuda",
"SamWhited",
"thaJeztah"
],
"repo": "moby/libnetwork",
"url": "https://github.com/moby/libnetwork/pull/2570",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1679668445
|
Linux jumpbox doesn't have Az CLI installed
After logging on to the Linux jumpbox using Bastion, az --version failed. It should have been installed, since the init script is set up to install it. Perhaps the init script is not working as expected.
Fixed the issue: replaced the bash script for the Linux VM configuration with a cloud-init script. The Linux VM needs to be deleted before redeploying the resources.
Closed the issue.
|
gharchive/issue
| 2023-04-22T19:25:08 |
2025-04-01T06:39:39.140525
|
{
"authors": [
"mocelj",
"utkarshayachit"
],
"repo": "mocelj/azbatch-starter-connectivity",
"url": "https://github.com/mocelj/azbatch-starter-connectivity/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
263490610
|
normalize suite and test titles
This has bugged me since forever.
Some suites and tests have titles like #foo or .foo and some which correspond to a function have parens (#foo()) and some don't.
Let's make sure this is consistent across the tests. I propose doing away with any leading or trailing punctuation.
Elsewhere I've seen # used to distinguish instance methods from static methods and parentheses used to distinguish methods from properties or around arguments where multiple overloads on different arguments are available. If we don't have any of that stuff, just non-overloaded instance methods, then we should be good getting rid of most of that syntax.
That's a JSDoc convention. It'll be further confused by the private field syntax on the horizon.
imo we should be decoupling the tests from the implementation as much as possible. The goal being that refactors won’t result in a bunch of broken tests. I’ll try to come up with an example of what that looks like.
|
gharchive/issue
| 2017-10-06T15:46:51 |
2025-04-01T06:39:39.143095
|
{
"authors": [
"ScottFreeCode",
"boneskull"
],
"repo": "mochajs/mocha",
"url": "https://github.com/mochajs/mocha/issues/3056",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
272903018
|
First unit test always slower than the others.
Prerequisites
[x] Checked that your issue isn't already filed by cross referencing issues with the common mistake label
[x] Checked next-gen ES issues and syntax problems by using the same environment and/or transpiler configuration without Mocha to ensure it isn't just a feature that actually isn't supported in the environment in question or a bug in your code.
[x] 'Smoke tested' the code to be tested by running it outside the real test suite to get a better sense of whether the problem is in the code under test, your usage of Mocha, or Mocha itself
[x] Ensured that there is no discrepancy between the locally and globally installed versions of Mocha. You can find them with:
node node_modules/.bin/mocha --version(Local) and mocha --version(Global). We recommend avoiding the use of globally installed Mocha.
Description
For some reason the first unit test of my test suite is always slower than the others. When executing the tests in my console I get something like:
√ unit test A (483ms)
√ unit test B
But in the code if I change the unit test B to above the unit test A, I get this:
√ unit test B (470ms)
√ unit test A
For some reason the first unit test always gets slower, and because of that I think the reason for the slowness is not my code, but something in Mocha. At the same time I have other test suites that are testing other code and they work fine, so I'm confused. Maybe it is not Mocha, but since I'm not sure, I need to ask if you have an idea of what could be happening.
The test is something like this:
let target = require('...');
describe('Module of unit tests', function () {
this.timeout(1000);
before(function () {
...
target = proxyquire('...', {
'node-chartist': sinon.stub().resolves('...'),
'ws': function(){
return {
'close': function(){ /*Do nothing*/ },
'send': function(){ /*Do nothing*/ },
'on': function(arg, callback){
...
}
};
}
});
});
//Warning happens here
it('unit test A', function () {
...
target();
...
});
//If this unit test goes above unit test A, this will be the one to get the warning.
it('unit test B', function () {
...
target();
...
});
})
Steps to Reproduce
I tried to reproduce in other projects without success, so I doubt you will be able to do it, but what I'm doing is:
Execute the test suite with unit test A above.
Execute the test suite with unit test B above.
Expected behavior: [What you expect to happen]
Don't get any warning about the time in both cases.
Actual behavior: [What actually happens]
The first unit test gets always a warning about the time.
Reproduces how often: [What percentage of the time does it reproduce?]
Around 90% of the times.
Versions
node v6.11.4
npm 3.10.10
mocha 4.0.1
sinon 4.1.2
chai 3.5.0
proxyquire 1.8.0
Additional Information
I used Fiddler to make sure that while executing the unit tests no network request was being made to the outside, to confirm that the delay is not caused by any network request.
I also debugged the code that the unit test is testing and I really don't see any reason for the delay in any case.
One thing worth trying is copying everything that's common to the two tests, except for any assertions, into a before hook to see if simply running the same sort of stuff in another place makes the first place it runs of any sort slower, rather than the first test specifically.
It's possible that the code, even if it's not necessarily slow in general, is initializing something the first time that then gets saved in some way (e.g. Node's require cache, or filesystem-level caches of data from the disk, or a reuse optimization built into some library code), or that the JavaScript engine looks for optimizations in the code after it runs once, or something like that. You could also put it outside the testsuite altogether, although that's less likely to work -- for a few types of caches, that would have more chance of the cache running out somewhere in between loading the test files and actually running this particular file's tests (on the other hand, if a before hook worked and outside the testsuite didn't, that might narrow down what sort of caching or optimization is responsible...).
(And on a completely different note, more workaround than solution -- for anyone who just wants to suppress the time warning, there's the slow option to go with the timeout option.)
Hi ScottFreeCode,
Thanks for the response, it was very useful. The problem was happening because I am making some stubs with proxyquire, and by default the npm modules are loaded even when stubbed. In my case I have a module called node-chartist that was being loaded during the first unit test, and that is why it was slower than the others.
To solve this problem I had to use the noCallThru() method of proxyquire, which makes proxyquire not load any of the original dependencies.
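For illustration, a minimal sketch of the change (the module path and stubbed return values are just placeholders):
const proxyquire = require('proxyquire').noCallThru(); // don't load the real modules at all
const sinon = require('sinon');

const target = proxyquire('../src/my-module', {
  'node-chartist': sinon.stub().resolves('<svg></svg>'),
  'ws': function () {
    return { close() {}, send() {}, on() {} };
  }
});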
Thanks for the help.
Kind regards,
Daniel Serrão
Glad I could help you get that figured out! Let us know if there's anything else you need.
|
gharchive/issue
| 2017-11-10T11:29:24 |
2025-04-01T06:39:39.154182
|
{
"authors": [
"ScottFreeCode",
"danielserrao"
],
"repo": "mochajs/mocha",
"url": "https://github.com/mochajs/mocha/issues/3100",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
827603135
|
Unable to update mocha to 8.3.1: fsevents@patch - Cannot apply hunk #1
Prerequisites
[x] Checked that your issue hasn't already been filed by cross-referencing issues with the faq label
[x] Checked next-gen ES issues and syntax problems by using the same environment and/or transpiler configuration without Mocha to ensure it isn't just a feature that actually isn't supported in the environment in question or a bug in your code.
[x] 'Smoke tested' the code to be tested by running it outside the real test suite to get a better sense of whether the problem is in the code under test, your usage of Mocha, or Mocha itself
[x] Ensured that there is no discrepancy between the locally and globally installed versions of Mocha. You can find them with: node node_modules/.bin/mocha --version(Local) and mocha --version(Global). We recommend that you not install Mocha globally.
Description
I'm trying to update mocha from version 8.2.1 to 8.3.1. Mocha is installed as dev dependency.
Steps to Reproduce
Set version 8.3.1 in package.json and run yarn or use upgrade-interactive or use yarn add mocha --dev.
Expected behavior:
Mocha 8.3.1 should be installed.
Actual behavior: [What actually happens]
Error occurred:
➤ YN0066: │ fsevents@patch:fsevents@npm%3A2.3.2#builtin<compat/fsevents>::version=2.3.2&hash=127e8e: Cannot apply hunk #1 (set enableInlineHunks for details)
Versions
The output of mocha --version and node node_modules/.bin/mocha --version: 8.2.1
The output of node --version: 12.18.3
Your operating system
name and version: Windows 10
architecture (32 or 64-bit): 64-bit
Your shell (e.g., bash, zsh, PowerShell, cmd): Powershell
Your browser and version (if running browser tests): -
Any third-party Mocha-related modules (and their versions): yarn 2.3.3.
Any code transpiler (e.g., TypeScript, CoffeeScript, Babel) being used (and its version): -
I searched for this error and found many issues from 2020, but all of them were resolved somehow. So not sure why it occurred for me especially on Windows.
@juergba you might be right about optional. It looks like the chain is: mocha requires chokidar and chokidar requires fsevents, but optionally.
I have no idea why yarn 2 tries to install the optional dependency and why the --ignore-optional flag does not work, as per this issue.
I'll raise another issue with the yarn team.
Thanks.
Update: it was a yarn issue. It looks like the issue was fixed in yarn 2.4.1. We updated yarn to 2.4.0 and the issue was still there, but after updating yarn to 2.4.1 everything is fine.
Hope this will help someone.
|
gharchive/issue
| 2021-03-10T11:44:55 |
2025-04-01T06:39:39.164894
|
{
"authors": [
"DJ-Glock"
],
"repo": "mochajs/mocha",
"url": "https://github.com/mochajs/mocha/issues/4602",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
201416219
|
Coverage for node tests
This PR does the following:
Add nyc, istanbul-combine and coveralls as dependencies
Adds an environment switch in the Makefile on COVERAGE=true, where coverage gathered from the test-node target and sub-targets
Adds an npm coverage script that runs make test with COVERAGE=true for local coverage collection
Adds coverage collection on travis for node 7
Posts coverage report to coveralls
closes #2620 #2351
Thanks to @c089 for getting the basic setup in the Makefile working
Changes Unknown when pulling 820d61639a95e808d58ea73f0860f7e139b2b7da on Munter:coverage-report into ** on mochajs:master**.
|
gharchive/pull-request
| 2017-01-17T21:53:08 |
2025-04-01T06:39:39.168131
|
{
"authors": [
"Munter",
"coveralls"
],
"repo": "mochajs/mocha",
"url": "https://github.com/mochajs/mocha/pull/2672",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1271145747
|
How do we mock flow/shared flow?
Description
Hello,
I'd like to test that myDependency2.someFun() is called when receiving a value from myDependency1.myFlow which is a Flow<Unit>.
class MyClass(
private val coroutineScope: CoroutineScope,
private val myDependency1: MyDependency1,
private val myDependency2: MyDependency2,
) : MyClassInterface, CoroutineScope by coroutineScope {
init {
launch(coroutineContext) {
myDependency1.myFlow.collect {
myDependency2.someFun()
}
}
}
}
Attempt
@Mock
private val myDependency1 = mock(MyDependency1::class)
@Mock
private val myDependency2 = mock(MyDependency2::class)
private val myFlow = MutableSharedFlow<Unit>()
...
"my test" {
given(myDependency1).getter(myDependency1::myFlow)
.whenInvoked()
.thenReturn(myFlow)
val myClass = MyClass(this, myDependency1, myDependency2)
myFlow.emit(Unit)
verify(myDependency2).function(myDependency2::someFun)
.wasInvoked(exactly = 1.time)
}
Result (error)
A mock of type MyDependency2 was not invoked the expected number of times.
Expected 1 invocations of someFun()
Actual: 0
No invocation on the mock were recorded.
Turns out this was not an issue about Mockative.
Closing this issue.
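(The thread does not say what the actual cause was. For anyone hitting the same symptom, a common reason is that the coroutine launched in MyClass.init never gets a chance to subscribe and collect before the verification runs. A purely hypothetical sketch using kotlinx-coroutines-test; everything outside the original snippet is an assumption:)
import kotlinx.coroutines.test.advanceUntilIdle
import kotlinx.coroutines.test.runTest
import kotlin.test.Test

@Test
fun someFunIsCalledWhenMyFlowEmits() = runTest {
    given(myDependency1).getter(myDependency1::myFlow)
        .whenInvoked()
        .thenReturn(myFlow)

    // TestScope is a CoroutineScope, so it can back MyClass directly.
    MyClass(this, myDependency1, myDependency2)
    advanceUntilIdle()   // let the collector in MyClass.init subscribe first

    myFlow.emit(Unit)
    advanceUntilIdle()   // let the collector process the emitted value

    verify(myDependency2).function(myDependency2::someFun)
        .wasInvoked(exactly = 1.time)
}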
|
gharchive/issue
| 2022-06-14T17:47:39 |
2025-04-01T06:39:39.173844
|
{
"authors": [
"AlexandreBrown"
],
"repo": "mockative/mockative",
"url": "https://github.com/mockative/mockative/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
200792663
|
Wrong experiment annotation for Modelica.Mechanics.Translational.Examples.Accelerate
Reported by beutlich on 2 May 2012 11:28 UTC
Model Modelica.Mechanics.Translational.Examples.Accelerate has experiment keyword for Diagram annotation by mistake. See attached diff file for a simple fix.
Migrated-From: https://trac.modelica.org/Modelica/ticket/734
Comment by dietmarw on 2 May 2012 11:37 UTC
Thanks, it is already fixed in trunk in 8ae903717121f7689a300c0c9d66e2eee3820845.
Comment by beutlich on 2 May 2012 11:47 UTC
The fix should also be included in the /maintenance/3.2 branch.
Comment by dietmarw on 2 May 2012 11:58 UTC
Good point! Fix applied in 2aa9f4f0743210f1cc8e136b3ec95e03db13a1d5 for maintenance/3.2
|
gharchive/issue
| 2017-01-14T09:24:58 |
2025-04-01T06:39:39.210415
|
{
"authors": [
"modelica-trac-importer"
],
"repo": "modelica/Modelica",
"url": "https://github.com/modelica/Modelica/issues/734",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
698692316
|
Using modelmapper 2.3.8 with java.lang.ClassNotFoundException
I am running with org.modelmapper:modelmapper:2.3.8 from Maven Central at https://search.maven.org/search?q=g:org.modelmapper
When I attempt to run the ModelMapper inside an assembler (see below),
public class AccountAssembler implements DtoEntityAssembler<AccountEntity, AccountDto> {
private static final AddressAssembler addressAssembler = new AddressAssembler();
private static final PhoneAssembler phoneAssembler = new PhoneAssembler();
public AccountEntity toEntity(AccountDto account) {
ModelMapper mapper = new ModelMapper();
mapper.createTypeMap(AccountDto.class, AccountEntity.class)
.addMapping(AccountDto::getAccountId, AccountEntity::setAccountId) // Integer
.addMapping(AccountDto::getAddress, AccountEntity::setAddress)
.addMapping(AccountDto::getBookerAccountKey, AccountEntity::setBookerAccountKey) // String
.addMapping(AccountDto::getEmail, AccountEntity::setEmail) // String
.addMapping(AccountDto::getKey, AccountEntity::setKey) // String
.addMapping(AccountDto::getMap, AccountEntity::setMap) // String
.addMapping(AccountDto::getName, AccountEntity::setName) // String
.addMapping(AccountDto::getWebsite, AccountEntity::setWebsite); // String
return mapper.map(account, AccountEntity.class);
}
public AccountDto toDto(AccountEntity account) {
ModelMapper mapper = new ModelMapper();
mapper.createTypeMap(AccountEntity.class, AccountDto.class)
.addMapping(AccountEntity::getAccountId, AccountDto::setAccountId)
.addMapping(AccountEntity::getAddress, (dest, v) -> dest.setAddress(addressAssembler.toDto((AddressEntity) v)))
.addMapping(AccountEntity::getBookerAccountKey, AccountDto::setBookerAccountKey)
.addMapping(AccountEntity::getEmail, AccountDto::setEmail)
.addMapping(AccountEntity::getKey, AccountDto::setKey)
.addMapping(AccountEntity::getMap, AccountDto::setMap)
.addMapping(AccountEntity::getName, AccountDto::setName)
.addMapping(AccountEntity::getWebsite, AccountDto::setWebsite);
return mapper.map(account, AccountDto.class);
}
}
I get the following root cause in my stack trace:
Caused by: java.lang.ClassNotFoundException: sun.reflect.ReflectionFactory not found by modelmapper [18]
I am using the Apache Felix framework. From my research, I understand that sun.reflect is not publicly available. I don't know how to work around this currently. Are there any ideas?
I am currently running Java 8 and want to upgrade to the most recent Java version at a future time. I will remain running within a Java framework like Felix or similar.
Any ideas how to get around this would be greatly appreciated.
I think this issue is related to this one: https://github.com/modelmapper/modelmapper/issues/426
Can you check whether the latest modelmapper can reproduce this issue? Thanks!
I am not the thread owner. Nevertheless, I cannot reproduce the issue in the latest release (2.4.0). I was able to completely remove the jdk.unsupported attribute.
Thank you!
Thanks for the feedback! I will close the issue. Please feel free to reopen this issue or create a new one if this issue was still reproducible.
|
gharchive/issue
| 2020-09-11T01:22:17 |
2025-04-01T06:39:39.242195
|
{
"authors": [
"chhsiao90",
"elmer25",
"mladBlum"
],
"repo": "modelmapper/modelmapper",
"url": "https://github.com/modelmapper/modelmapper/issues/561",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1892061720
|
(zju_15) Provide 1 new style model
Provide 1 style model compatible with MajicmixRealistic_v6: Jacket in Snow Mountain (雪山羽绒服风).
Portrait images used for training
Jacket in Snow Mountain
I tested this style, the effect is pretty good. Can we continue to tune the prompt and parameters a bit, the multiplier_style can be lower.
My prompt and parameters:
'multiplier_style': 0.6,
'multiplier_human': 0.9,
'add_prompt_style': '1 girl, close-up, fur, ((jacket)), shirt, pants, winter, (bright sunny day, snow mountain, alpine slopes, snow), gyaru, fashion, trendy, gentle hair'
I tested this style, the effect is pretty good. Can we continue to tune the prompt and parameters a bit, the multiplier_style can be lower.
My prompt and parameters: 'multiplier_style': 0.6, 'multiplier_human': 0.9, 'add_prompt_style': '1 girl, close-up, fur, ((jacket)), shirt, pants, winter, (bright sunny day, snow mountain, alpine slopes, snow), gyaru, fashion, trendy, gentle hair'
Wow, your tune results works obviously better. Let us update those prompts and parameters after merge.
I tested this style, the effect is pretty good. Can we continue to tune the prompt and parameters a bit, the multiplier_style can be lower.
My prompt and parameters: 'multiplier_style': 0.6, 'multiplier_human': 0.9, 'add_prompt_style': '1 girl, close-up, fur, ((jacket)), shirt, pants, winter, (bright sunny day, snow mountain, alpine slopes, snow), gyaru, fashion, trendy, gentle hair'
Wow, your tune results works obviously better. Let us update those prompts and parameters after merge.
Hi @iotang, if there's a better parameter & prompt, could you change it first and update your style showcase?
I updated the parameters and prompts, and the result is as shown below. The human LoRA may make the results different.
The style image has been updated too.
|
gharchive/pull-request
| 2023-09-12T09:19:19 |
2025-04-01T06:39:39.249838
|
{
"authors": [
"hehaha68",
"iotang",
"sunbaigui"
],
"repo": "modelscope/facechain",
"url": "https://github.com/modelscope/facechain/pull/220",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2354849181
|
Can Swift implement DPO/SimPO for multimodal models?
As the title says, can Swift implement DPO/SimPO for multimodal models such as qwen-vl, internvl-v1.5, etc.?
Currently, following the LLM DPO format, I get an error:
AttributeError: 'Seq2SeqTrainingArguments' object has no attribute 'model_init_kwargs'
If I were to add it myself, is there any documentation I could refer to?
support
|
gharchive/issue
| 2024-06-15T13:15:35 |
2025-04-01T06:39:39.251928
|
{
"authors": [
"delian11",
"hjh0119"
],
"repo": "modelscope/swift",
"url": "https://github.com/modelscope/swift/issues/1145",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1327572841
|
🛑 Plex is down
In c526e50, Plex ($PLEX_URL) was down:
HTTP code: 404
Response time: 446 ms
Resolved: Plex is back up in 10b2cda.
|
gharchive/issue
| 2022-08-03T17:31:32 |
2025-04-01T06:39:39.254011
|
{
"authors": [
"modem7"
],
"repo": "modem7/Status",
"url": "https://github.com/modem7/Status/issues/1605",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1783608313
|
🛑 Dozzle is down
In 540e29e, Dozzle ($DOZZLE_URL) was down:
HTTP code: 404
Response time: 381 ms
Resolved: Dozzle is back up in f8f68d3.
|
gharchive/issue
| 2023-07-01T07:06:25 |
2025-04-01T06:39:39.256065
|
{
"authors": [
"modem7"
],
"repo": "modem7/Status",
"url": "https://github.com/modem7/Status/issues/3685",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
570148594
|
Can't execute any Modin function (ValueError: tuple is not allowed for map key)
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
Modin version (modin.__version__): '0.7.0
Python version: 3.7.1
Code we can use to reproduce:
import os
os.environ["MODIN_ENGINE"] = "dask"
import modin.pandas as pd
import numpy as np
df = pd.DataFrame([1,2,3])
Describe the problem
I am a new user who just followed your installation page to install Modin and tried it out a little. But I found that I couldn't even execute a simple function through your package (ValueError: tuple is not allowed for map key). I installed the packages with the following commands.
pip install -U modin # -U for upgrade in case you have an older version
pip install modin[dask] # Install Modin dependencies and Dask to run on Dask
p.s. By the way, every time I imported the package (import modin.pandas as pd), the jupyter notebook would give me the following warning. I couldn't really figure out the cause.
UserWarning: The Dask Engine for Modin is experimental.
UserWarning:
Failed to start diagnostics server on port 8787. [WinError 10013] An attempt to access the socket was denied because of insufficient access permissions. ## This means the access is denied because the access permissions are insufficient
Source code / logs
UserWarning: Distributing <class 'list'> object. This may take some time.
distributed.protocol.core - CRITICAL - Failed to deserialize
Traceback (most recent call last):
File "C:\Users\ADMIN\Anaconda3\lib\site-packages\distributed\protocol\core.py", line 114, in loads
header = msgpack.loads(header, use_list=False, **msgpack_opts)
File "msgpack\_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
ValueError: tuple is not allowed for map key
distributed.core - ERROR - tuple is not allowed for map key
Traceback (most recent call last):
File "C:\Users\ADMIN\Anaconda3\lib\site-packages\distributed\core.py", line 297, in handle_comm
msg = yield comm.read()
File "C:\Users\ADMIN\Anaconda3\lib\site-packages\tornado\gen.py", line 1133, in run
value = future.result()
File "C:\Users\ADMIN\Anaconda3\lib\site-packages\tornado\gen.py", line 1141, in run
yielded = self.gen.throw(*exc_info)
File "C:\Users\ADMIN\Anaconda3\lib\site-packages\distributed\comm\tcp.py", line 206, in read
deserializers=deserializers)
File "C:\Users\ADMIN\Anaconda3\lib\site-packages\tornado\gen.py", line 1133, in run
value = future.result()
File "C:\Users\ADMIN\Anaconda3\lib\site-packages\tornado\gen.py", line 326, in wrapper
yielded = next(result)
File "C:\Users\ADMIN\Anaconda3\lib\site-packages\distributed\comm\utils.py", line 79, in from_frames
res = _from_frames()
File "C:\Users\ADMIN\Anaconda3\lib\site-packages\distributed\comm\utils.py", line 65, in _from_frames
deserializers=deserializers)
File "C:\Users\ADMIN\Anaconda3\lib\site-packages\distributed\protocol\core.py", line 114, in loads
header = msgpack.loads(header, use_list=False, **msgpack_opts)
File "msgpack\_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
ValueError: tuple is not allowed for map key
distributed.protocol.core - CRITICAL - Failed to deserialize
Traceback (most recent call last):
File "C:\Users\ADMIN\Anaconda3\lib\site-packages\distributed\protocol\core.py", line 114, in loads
header = msgpack.loads(header, use_list=False, **msgpack_opts)
File "msgpack\_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
ValueError: tuple is not allowed for map key
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-4-1f9265289ca2> in <module>
4 import modin.pandas as pd
5 import numpy as np
----> 6 df = pd.DataFrame([1,2,3])
7 # data_set = pd.read_csv(data_path+'/'+data_file)
~\Anaconda3\lib\site-packages\modin\pandas\dataframe.py in __init__(self, data, index, columns, dtype, copy, query_compiler)
84 data=data, index=index, columns=columns, dtype=dtype, copy=copy
85 )
---> 86 self._query_compiler = from_pandas(pandas_df)._query_compiler
87 else:
88 self._query_compiler = query_compiler
~\Anaconda3\lib\site-packages\modin\pandas\utils.py in from_pandas(df)
21 from .dataframe import DataFrame
22
---> 23 return DataFrame(query_compiler=BaseFactory.from_pandas(df))
24
25
~\Anaconda3\lib\site-packages\modin\data_management\factories.py in from_pandas(cls, df)
26 @classmethod
27 def from_pandas(cls, df):
---> 28 return cls._determine_engine()._from_pandas(df)
29
30 @classmethod
~\Anaconda3\lib\site-packages\modin\data_management\factories.py in _from_pandas(cls, df)
30 @classmethod
31 def _from_pandas(cls, df):
---> 32 return cls.io_cls.from_pandas(df)
33
34 @classmethod
~\Anaconda3\lib\site-packages\modin\engines\base\io\io.py in from_pandas(cls, df)
12 @classmethod
13 def from_pandas(cls, df):
---> 14 return cls.query_compiler_cls.from_pandas(df, cls.frame_cls)
15
16 @classmethod
~\Anaconda3\lib\site-packages\modin\backends\pandas\query_compiler.py in from_pandas(cls, df, data_cls)
59 @classmethod
60 def from_pandas(cls, df, data_cls):
---> 61 return cls(data_cls.from_pandas(df))
62
63 index = property(_get_axis(0), _set_axis(0))
~\Anaconda3\lib\site-packages\modin\engines\base\frame\data.py in from_pandas(cls, df)
1181 new_columns = df.columns
1182 new_dtypes = df.dtypes
-> 1183 new_frame, new_lengths, new_widths = cls._frame_mgr_cls.from_pandas(df, True)
1184 return cls(
1185 new_frame,
~\Anaconda3\lib\site-packages\modin\engines\base\frame\partition_manager.py in from_pandas(cls, df, return_dims)
307 for j in range(0, len(df.columns), col_chunksize)
308 ]
--> 309 for i in range(0, len(df), row_chunksize)
310 ]
311 if not return_dims:
~\Anaconda3\lib\site-packages\modin\engines\base\frame\partition_manager.py in <listcomp>(.0)
307 for j in range(0, len(df.columns), col_chunksize)
308 ]
--> 309 for i in range(0, len(df), row_chunksize)
310 ]
311 if not return_dims:
~\Anaconda3\lib\site-packages\modin\engines\base\frame\partition_manager.py in <listcomp>(.0)
305 [
306 put_func(df.iloc[i : i + row_chunksize, j : j + col_chunksize].copy())
--> 307 for j in range(0, len(df.columns), col_chunksize)
308 ]
309 for i in range(0, len(df), row_chunksize)
~\Anaconda3\lib\site-packages\modin\engines\dask\pandas_on_dask_futures\frame\partition.py in put(cls, obj)
135 """
136 client = _get_global_client()
--> 137 return cls(client.scatter(obj, hash=False))
138
139 @classmethod
~\Anaconda3\lib\site-packages\distributed\client.py in scatter(self, data, workers, broadcast, direct, hash, maxsize, timeout, asynchronous)
1872 broadcast=broadcast, direct=direct,
1873 local_worker=local_worker, timeout=timeout,
-> 1874 asynchronous=asynchronous, hash=hash)
1875
1876 @gen.coroutine
~\Anaconda3\lib\site-packages\distributed\client.py in sync(self, func, *args, **kwargs)
671 return future
672 else:
--> 673 return sync(self.loop, func, *args, **kwargs)
674
675 def __repr__(self):
~\Anaconda3\lib\site-packages\distributed\utils.py in sync(loop, func, *args, **kwargs)
275 e.wait(10)
276 if error[0]:
--> 277 six.reraise(*error[0])
278 else:
279 return result[0]
~\Anaconda3\lib\site-packages\six.py in reraise(tp, value, tb)
691 if value.__traceback__ is not tb:
692 raise value.with_traceback(tb)
--> 693 raise value
694 finally:
695 value = None
~\Anaconda3\lib\site-packages\distributed\utils.py in f()
260 if timeout is not None:
261 future = gen.with_timeout(timedelta(seconds=timeout), future)
--> 262 result[0] = yield future
263 except Exception as exc:
264 error[0] = sys.exc_info()
~\Anaconda3\lib\site-packages\tornado\gen.py in run(self)
1131
1132 try:
-> 1133 value = future.result()
1134 except Exception:
1135 self.had_exception = True
~\Anaconda3\lib\site-packages\tornado\gen.py in run(self)
1139 if exc_info is not None:
1140 try:
-> 1141 yielded = self.gen.throw(*exc_info)
1142 finally:
1143 # Break up a reference to itself
~\Anaconda3\lib\site-packages\distributed\client.py in _scatter(self, data, workers, broadcast, direct, local_worker, timeout, hash)
1734 client=self.id,
1735 broadcast=broadcast,
-> 1736 timeout=timeout)
1737
1738 out = {k: Future(k, self, inform=False) for k in data}
~\Anaconda3\lib\site-packages\tornado\gen.py in run(self)
1131
1132 try:
-> 1133 value = future.result()
1134 except Exception:
1135 self.had_exception = True
~\Anaconda3\lib\site-packages\tornado\gen.py in run(self)
1139 if exc_info is not None:
1140 try:
-> 1141 yielded = self.gen.throw(*exc_info)
1142 finally:
1143 # Break up a reference to itself
~\Anaconda3\lib\site-packages\distributed\core.py in send_recv_from_rpc(**kwargs)
578 try:
579 comm = yield self.live_comm()
--> 580 result = yield send_recv(comm=comm, op=key, **kwargs)
581 except (RPCClosed, CommClosedError) as e:
582 raise e.__class__("%s: while trying to call remote method %r"
~\Anaconda3\lib\site-packages\tornado\gen.py in run(self)
1131
1132 try:
-> 1133 value = future.result()
1134 except Exception:
1135 self.had_exception = True
~\Anaconda3\lib\site-packages\tornado\gen.py in run(self)
1139 if exc_info is not None:
1140 try:
-> 1141 yielded = self.gen.throw(*exc_info)
1142 finally:
1143 # Break up a reference to itself
~\Anaconda3\lib\site-packages\distributed\core.py in send_recv(comm, reply, serializers, deserializers, **kwargs)
455 yield comm.write(msg, serializers=serializers, on_error='raise')
456 if reply:
--> 457 response = yield comm.read(deserializers=deserializers)
458 else:
459 response = None
~\Anaconda3\lib\site-packages\tornado\gen.py in run(self)
1131
1132 try:
-> 1133 value = future.result()
1134 except Exception:
1135 self.had_exception = True
~\Anaconda3\lib\site-packages\tornado\gen.py in run(self)
1139 if exc_info is not None:
1140 try:
-> 1141 yielded = self.gen.throw(*exc_info)
1142 finally:
1143 # Break up a reference to itself
~\Anaconda3\lib\site-packages\distributed\comm\tcp.py in read(self, deserializers)
204 msg = yield from_frames(frames,
205 deserialize=self.deserialize,
--> 206 deserializers=deserializers)
207 except EOFError:
208 # Frames possibly garbled or truncated by communication error
~\Anaconda3\lib\site-packages\tornado\gen.py in run(self)
1131
1132 try:
-> 1133 value = future.result()
1134 except Exception:
1135 self.had_exception = True
~\Anaconda3\lib\site-packages\tornado\gen.py in wrapper(*args, **kwargs)
324 try:
325 orig_stack_contexts = stack_context._state.contexts
--> 326 yielded = next(result)
327 if stack_context._state.contexts is not orig_stack_contexts:
328 yielded = _create_future()
~\Anaconda3\lib\site-packages\distributed\comm\utils.py in from_frames(frames, deserialize, deserializers)
77 res = yield offload(_from_frames)
78 else:
---> 79 res = _from_frames()
80
81 raise gen.Return(res)
~\Anaconda3\lib\site-packages\distributed\comm\utils.py in _from_frames()
63 return protocol.loads(frames,
64 deserialize=deserialize,
---> 65 deserializers=deserializers)
66 except EOFError:
67 if size > 1000:
~\Anaconda3\lib\site-packages\distributed\protocol\core.py in loads(frames, deserialize, deserializers)
112
113 header = frames.pop()
--> 114 header = msgpack.loads(header, use_list=False, **msgpack_opts)
115 keys = header['keys']
116 headers = header['headers']
msgpack\_unpacker.pyx in msgpack._cmsgpack.unpackb()
ValueError: tuple is not allowed for map key
Hi @HiIamJeff, thanks for the report!
This is related to https://github.com/dask/distributed/issues/3491. The fix is to pip install msgpack<1.0.
Thanks Devin. It works like a charm! Much appreciated.
|
gharchive/issue
| 2020-02-24T21:10:58 |
2025-04-01T06:39:39.268385
|
{
"authors": [
"HiIamJeff",
"devin-petersohn"
],
"repo": "modin-project/modin",
"url": "https://github.com/modin-project/modin/issues/1104",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
621341247
|
[Boards] test
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
Modin installed from (source or binary):
Modin version:
Python version:
Exact command to reproduce:
Test from Github
New test from Github
|
gharchive/issue
| 2020-05-19T23:09:36 |
2025-04-01T06:39:39.272344
|
{
"authors": [
"aregm"
],
"repo": "modin-project/modin",
"url": "https://github.com/modin-project/modin/issues/1465",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
654902423
|
self._query_compiler.columns RecursionError: maximum recursion depth exceeded while calling a Python object
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
Modin version (modin.__version__):
Python version:
Code we can use to reproduce:
Describe the problem
Source code / logs
Hi @deepalib-cuelogic thanks for posting!
We need more information to reproduce and fix this issue. Can you share the code that produced this error? Thanks!
Closing due to lack of information/reproducer.
|
gharchive/issue
| 2020-07-10T16:39:23 |
2025-04-01T06:39:39.275234
|
{
"authors": [
"deepalib-cuelogic",
"devin-petersohn",
"pyrito"
],
"repo": "modin-project/modin",
"url": "https://github.com/modin-project/modin/issues/1706",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1160587978
|
Create physics module
Closes #40
Codecov Report
Merging #52 (14f50af) into main (71779df) will increase coverage by 99.96%.
The diff coverage is 99.52%.
@@ Coverage Diff @@
## main #52 +/- ##
=========================================
+ Coverage 0 99.96% +99.96%
=========================================
Files 0 38 +38
Lines 0 3148 +3148
=========================================
+ Hits 0 3147 +3147
- Misses 0 1 +1
Impacted Files | Coverage Δ
crates/modor/src/actions.rs | 100.00% <ø> (ø)
crates/modor/src/entities.rs | 100.00% <ø> (ø)
crates/modor/src/system_runner.rs | 100.00% <ø> (ø)
crates/modor_physics/src/lib.rs | 100.00% <ø> (ø)
crates/modor/src/testing.rs | 99.20% <95.23%> (ø)
...rates/modor_physics/src/components/acceleration.rs | 100.00% <100.00%> (ø)
crates/modor_physics/src/components/position.rs | 100.00% <100.00%> (ø)
crates/modor_physics/src/components/scale.rs | 100.00% <100.00%> (ø)
crates/modor_physics/src/components/velocity.rs | 100.00% <100.00%> (ø)
crates/modor_physics/src/entities/delta_time.rs | 100.00% <100.00%> (ø)
... and 33 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 71779df...14f50af. Read the comment docs.
|
gharchive/pull-request
| 2022-03-06T11:39:35 |
2025-04-01T06:39:39.293368
|
{
"authors": [
"Nicolas-Ferre",
"codecov-commenter"
],
"repo": "modor-engine/modor",
"url": "https://github.com/modor-engine/modor/pull/52",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2584194633
|
🛑 Plus Repo is down
In ca7cd8b, Plus Repo (https://repo.plus) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Plus Repo is back up in 4d931eb after 17 minutes.
|
gharchive/issue
| 2024-10-13T19:01:18 |
2025-04-01T06:39:39.367681
|
{
"authors": [
"moechs"
],
"repo": "moechs/upptime",
"url": "https://github.com/moechs/upptime/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2502186539
|
Add support for Tomcat 10
With Tomcat 10, the expression engine (within el-api.jar) moved to the jakarta.el package, so the existing Tomcat payload no longer works on Tomcat 10.
We need to create a new controller that handles these cases. The actual change would be minimal.
Basically, a simple change of "javax.el.ELProcessor" to "jakarta.el.ELProcessor".
//prepare payload that exploits unsafe reflection in org.apache.naming.factory.BeanFactory
ResourceRef ref = new ResourceRef("javax.el.ELProcessor", null, "", "",
true, "org.apache.naming.factory.BeanFactory", null);
ref.add(new StringRefAddr("forceString", "x=eval"));
ref.add(new StringRefAddr("x", payload));
Hello! I would like to contribute to this issue. Could you please assign it to me? :)
Hi! It's already done, see d46724f677330653463b748ee7e284a94be58c0e.
|
gharchive/issue
| 2024-09-03T07:59:47 |
2025-04-01T06:39:39.375693
|
{
"authors": [
"h0ng10",
"hyeok-kong",
"theGEBIRGE"
],
"repo": "mogwailabs/rogue-jndi-ng",
"url": "https://github.com/mogwailabs/rogue-jndi-ng/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
34322982
|
npm module requires 'lame' module, but it's not listed as a dependency
When you try to require('timbre') it throws the following error.
Warning: Cannot find module 'lame' from '/Users/jorgesilva/Sites/2014/clickOnJorge/node_modules/timbre' Use --force to continue.
This seems to be because this package is not listed as a dependency in the package.json.
Should be as simple as:
npm install lame --save
If y'all are getting this error when using timbre.js with browserify, you can use the --ignore-missing flag to skip the unresolved requires for lame, ogg, and vorbis.
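For example (editor's illustration; the entry and output file names are placeholders, not from this thread):
browserify app.js --ignore-missing -o bundle.js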
|
gharchive/issue
| 2014-05-26T18:20:12 |
2025-04-01T06:39:39.401196
|
{
"authors": [
"data-doge",
"thejsj"
],
"repo": "mohayonao/timbre.js",
"url": "https://github.com/mohayonao/timbre.js/issues/18",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2317102020
|
Documents doubts and improvements requests
Hi,
First, thanks for the initiative. I also used the old nestjs-zod and had no idea it was fully discontinued.
Does your generator use the same structure as nest-zod-prisma?
It would be nice to have better documentation on how to use it.
Hi, I am very sorry I didn't see your Issue
Yes, it uses the same generator, but I have applied some improvements described in the docs, such as nullable and nullish, the repeated Enum import in schemas, and so on.
I actually would love to write a good description, but I really don't have a lot of time right now, so I would appreciate any PR.
If you need any more information about the package, you can reach me in a Discord call and I would love to explain whatever is foggy for you so you can PR here.
appreciate it!
@yangricardo
@timseriakov
Though I think it is clearer with these changes I made to the README.
Please check that so I can close this
|
gharchive/issue
| 2024-05-25T16:05:32 |
2025-04-01T06:39:39.408283
|
{
"authors": [
"mohrazzak",
"yangricardo"
],
"repo": "mohrazzak/better-nestjs-zod-prisma",
"url": "https://github.com/mohrazzak/better-nestjs-zod-prisma/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2089370123
|
Add an example to show how to run pending tasks in an interval
Split from https://github.com/moka-rs/moka/issues/349#issuecomment-1819114103
Hi. It does not have to be the run_pending_tasks method, but you need to call some of the cache methods, such as get, get_with, insert, or remove, to drive the eviction_listener.
Before v0.12.0, Moka had its own global thread pool to periodically run pending tasks. Some users did not like it, so we removed it. You will find more details on when pending tasks (internal maintenance tasks) are executed here: https://github.com/moka-rs/moka/blob/main/MIGRATION-GUIDE.md#the-maintenance-tasks
But I want the eviction_listener() to be called exactly at the time of eviction, not some time later.
Can you please help me understand what I should do?
You can spawn a thread and make it call run_pending_tasks at some interval (e.g. 0.1 secs). If you need a code sample, I could write one for you.
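A minimal sketch of that approach (editor's illustration, not code from this thread; it assumes the moka 0.12 sync Cache API, and the key/value types, TTL, and 100 ms interval are made up):

use std::{sync::Arc, thread, time::Duration};
use moka::{notification::RemovalCause, sync::Cache};

fn main() {
    let cache: Cache<String, String> = Cache::builder()
        .time_to_live(Duration::from_secs(2))
        .eviction_listener(|key: Arc<String>, value: String, cause: RemovalCause| {
            // Fires when the periodic maintenance below notices the expired entry.
            println!("evicted {key} = {value} ({cause:?})");
        })
        .build();

    // The sync cache is cheap to clone; the clone shares the same underlying store.
    let maintenance = cache.clone();
    thread::spawn(move || loop {
        maintenance.run_pending_tasks();           // drive expiration + listener delivery
        thread::sleep(Duration::from_millis(100)); // polling interval (illustrative)
    });

    cache.insert("k".to_string(), "v".to_string());
    thread::sleep(Duration::from_secs(3)); // listener should fire roughly 2 s after insert
}

With this in place the listener is invoked within about one polling interval of the actual expiry, instead of waiting for the next get/insert.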
CC: @unikzforce
Thanks
Hi! Would it be good to add this behavior to the public docs where creating an eviction listener is described? I feel it'd be good to say explicitly that the listener is not called automatically but is tied to certain cache operations, and perhaps bring the section from https://github.com/moka-rs/moka/blob/main/MIGRATION-GUIDE.md#the-maintenance-tasks into the public docs as well?
|
gharchive/issue
| 2024-01-19T01:10:19 |
2025-04-01T06:39:39.428547
|
{
"authors": [
"agiuliano",
"tatsuya6502",
"unikzforce"
],
"repo": "moka-rs/moka",
"url": "https://github.com/moka-rs/moka/issues/379",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
353147707
|
Problem getting started
Hi,
This looks awesome and I'd love getting it up and running.
Please tell me if I'm missing something.
I've compiled it using make under Linux. Everything checks out OK and I can properly discover my Tellsticknet on the network by running the command python3 -m tellsticknet -vv discover
That's about it. I want to use it in conjunction with Home Assistant using the mqtt option. However, I run into the following problems:
Using the provided example configuration, the command python3 -m tellsticknet -vv devices throws the following error:
18-08-22 23:45.09 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet_api/tellsticknet/tellsticknet.conf
18-08-22 23:45.09 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet_api/tellsticknet/.tellsticknet.conf
18-08-22 23:45.09 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet.conf
- Door
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/homeassistant/tellsticknet_api/tellsticknet/__main__.py", line 162, in <module>
for e in (e for e in read_config() if e['class'] == 'command'):
File "/home/homeassistant/tellsticknet_api/tellsticknet/__main__.py", line 162, in <genexpr>
for e in (e for e in read_config() if e['class'] == 'command'):
KeyError: 'class'
Neither commenting out the class setting nor deleting it resolves the issue.
2. Trying to turn on a light (using the house and unit from telldus live), python3 -m tellsticknet -vv send protocol=arctech model=selflearning house=5092673 unit=1 cmd=turnon throws the following error:
18-08-22 23:59.29 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet.conf
method not found
It doesn't matter if cmd is on, ON, turnon, turnoff, etc.; same result.
Trying to connect to a local mqtt broker using python3 -m tellsticknet -vv mqtt with .config/mosquitto_pub containing the following information:
-h localhost
-p 1883
-username test
-pw test
results in the following error:
18-08-23 00:07.58 DEBUG (MainThread) [tellsticknet.mqtt] Connecting
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/homeassistant/tellsticknet_api/tellsticknet/__main__.py", line 208, in <module>
run(config, host=host)
File "/home/homeassistant/tellsticknet_api/tellsticknet/mqtt.py", line 574, in run
port=int(credentials['port']))
File "/usr/local/lib/python3.6/dist-packages/paho_mqtt-1.3.1-py3.6.egg/paho/mqtt/client.py", line 768, in connect
return self.reconnect()
File "/usr/local/lib/python3.6/dist-packages/paho_mqtt-1.3.1-py3.6.egg/paho/mqtt/client.py", line 927, in reconnect
sock.do_handshake()
File "/usr/lib/python3.6/ssl.py", line 1068, in do_handshake
self._sslobj.do_handshake()
File "/usr/lib/python3.6/ssl.py", line 689, in do_handshake
self._sslobj.do_handshake()
ConnectionResetError: [Errno 104] Connection reset by peer
I can sucessfully publish to the mqtt broker using Node-red.
I'm suspecting that I'm missing something crucial here since you've got it running just fine.
Looking forward to your response.
Happy to hear you want to use my code, thanks for testing and finding bugs!
This is fixed here.
I noticed that I actually never implemented specifying params on the command line this way. Clarified it now. If you have a valid config file you should be able to do tellsticknet send livingroom on, tellsticknet send kitchen dim 50, etc.
I believe this is because the code currently only supports connecting to the MQTT broker using SSL. I'm running my broker with SSL enabled on port 8883 and it works. So your options are to enable non-SSL in the client code (should not be too hard), or to enable SSL in your MQTT broker.
Thank you for the quick response! Are Nexa switches supported? Trying to turn on a Nexa switch using the device name throws an error about not finding "nexa.encode".
I can provide you a debug log when I get home.
Here is the log as promised:
18-08-23 20:35.09 INFO (MainThread) [tellsticknet.discovery] Discovering tellstick devices ...
18-08-23 20:35.09 INFO (MainThread) [tellsticknet.discovery] Found TellStickNet device with firmware 17 at 192.168.1.216
18-08-23 20:35.09 DEBUG (MainThread) [tellsticknet.controller] creating controller with address 192.168.1.216 (ACCA5400218D)
18-08-23 20:35.09 DEBUG (SenderThread) [tellsticknet.controller] Waiting for command forever
18-08-23 20:35.09 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet_api/tellsticknet/tellsticknet.conf
18-08-23 20:35.09 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet_api/tellsticknet/.tellsticknet.conf
18-08-23 20:35.09 DEBUG (MainThread) [__main__] checking for config file /home/homeassistant/tellsticknet.conf
18-08-23 20:35.09 DEBUG (MainThread) [tellsticknet.controller] Sending time 1
18-08-23 20:35.09 DEBUG (MainThread) [tellsticknet.protocol] Encoding for protocol nexa
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/homeassistant/tellsticknet_api/tellsticknet/__main__.py", line 203, in <module>
controller.execute(device, method, param=param)
File "/home/homeassistant/tellsticknet_api/tellsticknet/controller.py", line 135, in execute
self._execute(device, method, param)
File "/home/homeassistant/tellsticknet_api/tellsticknet/controller.py", line 115, in _execute
packet = encode(**device, method=method, param=param)
File "/home/homeassistant/tellsticknet_api/tellsticknet/protocol.py", line 314, in encode
return protocol.encode(**device)
AttributeError: module 'tellsticknet.protocols.nexa' has no attribute 'encode'
Should python3 -m tellsticknet -vv listen generate something more than this?
18-08-23 20:37.59 INFO (MainThread) [tellsticknet.discovery] Discovering tellstick devices ...
18-08-23 20:37.59 INFO (MainThread) [tellsticknet.discovery] Found TellStickNet device with firmware 17 at 192.168.1.216
18-08-23 20:37.59 DEBUG (MainThread) [tellsticknet.controller] creating controller with address 192.168.1.216 (ACCA5400218D)
18-08-23 20:37.59 DEBUG (SenderThread) [tellsticknet.controller] Waiting for command forever
18-08-23 20:37.59 DEBUG (MainThread) [tellsticknet.controller] Listening for signals from 192.168.1.216
18-08-23 20:37.59 INFO (MainThread) [tellsticknet.controller] Registering self as listener for device at 192.168.1.216
18-08-23 20:37.59 DEBUG (MainThread) [tellsticknet.controller] Sending packet to controller 192.168.1.216:42314 <b'B:reglistener'>
Have you specified protocol, model, house, unit in tellsticknet.conf?
Like:
controller: abc123
name: Sovrum
component: light
protocol: arctech
model: selflearning
unit: 15
house: 45213512
---
... etc
You can find out what parameters to use by starting tellsticknet -vv listen and then pressing buttons on your Nexa controller. Then you should see decoded packets displayed in the console.
Great! I got it working with one of my Nexa switches. The one I got working is just a regular power outlet. I've got another Nexa device, meant to be mounted behind a regular wall switch, which doesn't work. It's called "self-learning Pro" in Telldus Live. Any idea?
Have you tried Jula's Anslut?
Sorry for all the questions. When I get the time I'll try to sniff out the packets sent from Telldus Live to the Tellstick and compare them.
Ok, I ended up programming it myself using Node-RED. Analysing the packets sent using tcpdump, I found that Nexa Pro and Jula are the same and very similar to Nexa (built-in arctech). The Nexa (arctech) house code is 26 bits, and Jula's/Nexa Pro codes are 26 bits and have to end in 10. Telldus Live had a problem early on where Jula's Anslut wouldn't work if you didn't pick a house code which ended in 10 (binary). It seems like if the house code doesn't end correctly, it just zero-pads it to 24 bits and adds 10 to the end of it, making it 26 bits (see the sketch below).
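A small sketch of that padding rule as I read the comment above (editor's illustration in Python, the project's language; this is not code from tellsticknet and the function name is made up):

def jula_house_code(house: int) -> int:
    # Keep codes that already end in binary '10'.
    if (house & 0b11) == 0b10:
        return house
    # Otherwise zero-pad the original value to 24 bits and append '10',
    # producing a 26-bit code, as described in the comment above.
    return ((house & 0xFFFFFF) << 2) | 0b10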
Closing for now. Feel free to provide suggestions for changes as PRs.
|
gharchive/issue
| 2018-08-22T22:21:46 |
2025-04-01T06:39:39.472159
|
{
"authors": [
"mannerydhe",
"molobrakos"
],
"repo": "molobrakos/tellsticknet",
"url": "https://github.com/molobrakos/tellsticknet/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2605718482
|
Weird Decimal Conversion Issue
Here's a simple scenario. I have the following Json:
{"Amount": 11.11}
When I run "Pretty-print current JSON file", the number changes from 11.11 to 11.109999999999999.
This is not a bug per se, but rather an unfortunate consequence of my fix to #78.
Since loss of precision (that issue) means unrecoverable and unnecessary loss of information, whereas your issue is merely an unnecessarily ugly string representation of a number, I will leave this unfixed.
If you're confused, I recommend that you Google "floating point imprecision" to help you understand why some imprecision is unavoidable.
It seems fundamentally wrong that a plugin that is supposed to "pretty print" JSON ends up manipulating the JSON data. That means I can't use and trust your plugin's output, because this is absolutely not just an "ugly string representation." 11.11 and 11.1099999xxx are completely different numbers, and "close enough" does not work in the real world of business and science.
@KeanuTang
I recommend that you stop wasting your breath chiding me for this issue. I will not fix it myself, because as I already explained I don't believe that the alternative solutions are acceptable or feasible to implement, as they involve (a) massive refactoring and loss of performance to pivot from double-precision floating point numbers to decimal numbers, (b) loss of precision when pretty-printing other numbers and thus a regression on #78, or (c) implementing my own (undoubtedly bug-ridden) algorithm that somehow fixes your issue while still using doubles.
You are welcome to submit a PR that would fix this issue, but don't be surprised if I reject it because I don't like the tradeoffs you made.
There are other JSON plugins you can try if you don't like this one. There is a strong likelihood that they will have the same issue that mine does, because almost everyone uses double-precision floating point number to represent real numbers because, again, the alternative solutions are much harder to use and less performant.
The reason I said this is not a bug in my original comment is that 11.11 is not "close enough" to 11.109999999999999 as far as double-precision floating point numbers are concerned; it is equal. By contrast, the earlier algorithm I was using was changing the string representation in a way that would be parsed to a non-equal double. Decimals are not a reasonable substitute because the maximum value for a decimal is far less (7.9e28) than the maximum value for a double (1.8e308).
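A tiny illustration of that equality (editor's sketch in C#, not the plugin's actual code; the exact output of the last line depends on the runtime's default formatting):

using System;

double a = 11.11;
double b = 11.109999999999999;
Console.WriteLine(a == b);            // True: both literals parse to the same IEEE-754 double
Console.WriteLine(a.ToString("G17")); // "11.109999999999999" -- 17 significant digits always round-trip
Console.WriteLine(b.ToString());      // "11.11" on .NET Core 3.0+ (shortest round-trippable form)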
Any alternative with no rounding errors for any number of reasonable size would require me to fish for some third-party library that provides a higher-precision numeric specification while still having comparable performance and memory usage to doubles and an equally generous range of acceptable values, and then tediously change every double parameter, return type, and variable initialization in the entire plugin to the different type.
@KeanuTang
I hope that JSON Viewer meets your needs, and I'm sorry if I came across as a jerk. But be warned; that plugin also uses doubles to represent real numbers, and I have no idea how their double-to-decimal and decimal-to-double algorithms compare to the one I'm using. They may be strictly better, or it may just make a different set of tradeoffs than the one I'm using. Regardless, since they use the same data type, they are subject to the same fundamental limitations as my plugin.
The only thing I know for sure is that if I try to implement these conversion algorithms myself, I will almost certainly do a worse job than the standard library and introduce many subtle bugs. If you think you can do a better job than the C# standard library, or point me to a library that you are extremely confident does better than the standard library, I would consider a fix.
@KeanuTang
This commit should address this issue. If you follow my instructions for downloading an unreleased version, you can test out this fix and see if it is to your liking.
As I noted in the changelog, this fix comes at the cost of noticeably worse performance when reformatting very large files (say, several megabytes or more). At some point I will see if I can implement a more performant solution.
Please let me know if you have any thoughts. If you are satisfied with this fix, I will be including it in v8.2 of JsonTools, which I aim to release in the next few weeks.
I must extend my sincerest apology to you for effectively gaslighting you and trying to trivialize what was in fact a real and substantial issue with my plugin.
JsonTools version 8.2, incorporating a fix for this issue, is now live. I will soon submit a PR to include v8.2 in the plugins manager.
|
gharchive/issue
| 2024-10-22T14:58:51 |
2025-04-01T06:39:39.482456
|
{
"authors": [
"KeanuTang",
"molsonkiko"
],
"repo": "molsonkiko/JsonToolsNppPlugin",
"url": "https://github.com/molsonkiko/JsonToolsNppPlugin/issues/81",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
619849912
|
[feat] [while spinning] Show the names in plain text
Hello 👋
Thanks for the great work!
Is it possible to shuffle the names above the wheel in plain text?
Because when we add 200+ names, you won't be able to read the names from the wheel.
https://youtu.be/yL5clbrvmyY?t=480
Thanks
Thank you for the proposal, Asim!
Could you add some more details? I don't think I understand, but I would like to.
Here's an example
https://drive.google.com/file/d/1Ewh_kkJ98dipRevxCilRH2a8KeqUK7iT/view?usp=drivesdk
Ah, very good, now I get it. Thank you for sending this! I think it looks great. Will add it to the list of things to build next.
The Wheel has a method getNameAtPointer().
A naive approach would be to call a function at an interval and update the DOM innerText; depending on the interval, the value will not be in exact sync with the entry at the pointer. Also, updating the DOM innerText too frequently will have performance implications (a sketch of this approach follows after this comment).
A slightly better-performing approach would be to raise events as the value at the pointer changes; however, this will still have issues when the wheel rotates at high speed, unless values are only updated below a certain speed.
The correct way to solve this would be to use the same animation approach used to spin the wheel. That way both the name at the pointer and the displayed name will always be in sync, as the browser paints them.
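A minimal sketch of the naive polling approach described above (editor's illustration in plain JavaScript; the wheel object with getNameAtPointer() is mentioned in this thread, but the #current-name element and the 100 ms interval are made-up placeholders):

// Poll the wheel at a fixed interval and mirror the entry under the pointer
// into a plain-text element. The text can lag the wheel by up to one interval.
const currentNameEl = document.querySelector('#current-name'); // assumed element
setInterval(() => {
  const name = wheel.getNameAtPointer();
  if (currentNameEl.innerText !== name) {
    currentNameEl.innerText = name; // only touch the DOM when the value changes
  }
}, 100);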
Excellent analysis, johnberry09! Thank you.
|
gharchive/issue
| 2020-05-18T01:39:04 |
2025-04-01T06:39:39.510521
|
{
"authors": [
"AsimNet",
"johnberry09",
"momander"
],
"repo": "momander/wheel-spinner",
"url": "https://github.com/momander/wheel-spinner/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
130237768
|
Possible to get week number relative to month?
week() returns the week number relative to the year; what I'd like to get is the week number relative to the month. Is that possible?
It's not built-in, but basically you can subtract the week number of the start of the month from the week number of the date in question.
function weekOfMonth(m) {
return m.week() - moment(m).startOf('month').week() + 1;
}
Note that week function is locale specific, so in some cases you might want to use isoWeek instead. (See the docs).
If someone wants to add this to moment, a PR with the above function (or similar) and related unit tests would be appreciated.
Tracking in PR #2965. Thanks!
What about adding this ability to the format function?
moment("2018-12-31") -> weekOfMonth = -48
In some years week() will return 1 for the last days of the year, see #4019. So it messes up the calculations as @Knjaz89 mentioned.
Therefore the right way to calculate the weekOfMonth is:
function weekOfMonth(date) {
let weekInYearIndex = date.week();
if (date.year() !== date.weekYear()) {
weekInYearIndex = date.clone().subtract(1,'week').week() + 1;
}
  const weekIndex = weekInYearIndex - moment(date).startOf('month').week() + 1;
  return weekIndex;
}
weekOfMonth(moment('2018-12-31T00:00:00.000Z')); // return 6
weekOfMonth(moment('2019-01-01T00:00:00.000Z')); // return 1
Up until 2020 the week index number was difficult to calculate, but it was settled with the help of @eitanfr.
Thank you.
I did it in this way:
const weekOfMonth = (date) => {
const dayInMonth = moment(date).date();
return Math.floor(dayInMonth / 7);
}
For 02.01.2020 the weekOfMonth above returns -52, so I fixed that case:
function getWeekOfMonth(dateObj) {
const date = m(dateObj);
const weekInYear = date.isoWeek();
const result = weekInYear - date.startOf('month').isoWeek();
return result < 0 ? weekInYear : result;
}
Awesome (y)
FYI @M7Arman doesn't work for 2020-08-30, your function reports week 4, but we're in week 6.
@michaelhayman you mean we are in week 5 ?
This doesn't work if the last week of the month is shared with the next year, e.g. 31 Dec 2020 is a Thursday and it is considered week 1 in the new year.
this was my solution to the problem:
function getWeekIndexInMonth(day) {
  const startOfMonth = moment(day).startOf('month');
  const endOfMonth = moment(day).endOf('month');
  let currentMomentDate = moment(startOfMonth);
  const weeks = [];
  while (currentMomentDate.isBefore(endOfMonth)) {
    weeks.push(currentMomentDate.week());
    currentMomentDate.add(1, "weeks").startOf("week");
  }
  return weeks.indexOf(day.week());
}
function weekOfMonth(m) {
return m.week() - moment(m).startOf('month').week() + 1;
}
Just FYI, if I don't set the locale I get unexpected results.
Here's an example where I don't set the locale and I get unexpected results (negative week-of-month values):
import moment from "moment";
function weekOfMonth(m) {
return m.week() - moment(m).startOf('month').week() + 1;
}
const m = moment();
m.set({year: 2021, month: 11, date: 1}); // (months are 0-based, days are 1-based)
const decNumDays = m.daysInMonth();
console.log("Number of days in December:", decNumDays);
const result = [];
for (let date = 1; date <= decNumDays; date++) {
m.set('date', date);
result.push({
dayName: m.format("dddd"),
dayOfMonth: date,
weekOfMonth: weekOfMonth(m)
})
}
console.log(result)
Result:
[
{ dayName: 'Wednesday', dayOfMonth: 1, weekOfMonth: 1 },
{ dayName: 'Thursday', dayOfMonth: 2, weekOfMonth: 1 },
{ dayName: 'Friday', dayOfMonth: 3, weekOfMonth: 1 },
{ dayName: 'Saturday', dayOfMonth: 4, weekOfMonth: 1 },
{ dayName: 'Sunday', dayOfMonth: 5, weekOfMonth: 2 },
{ dayName: 'Monday', dayOfMonth: 6, weekOfMonth: 2 },
{ dayName: 'Tuesday', dayOfMonth: 7, weekOfMonth: 2 },
{ dayName: 'Wednesday', dayOfMonth: 8, weekOfMonth: 2 },
{ dayName: 'Thursday', dayOfMonth: 9, weekOfMonth: 2 },
{ dayName: 'Friday', dayOfMonth: 10, weekOfMonth: 2 },
{ dayName: 'Saturday', dayOfMonth: 11, weekOfMonth: 2 },
{ dayName: 'Sunday', dayOfMonth: 12, weekOfMonth: 3 },
{ dayName: 'Monday', dayOfMonth: 13, weekOfMonth: 3 },
{ dayName: 'Tuesday', dayOfMonth: 14, weekOfMonth: 3 },
{ dayName: 'Wednesday', dayOfMonth: 15, weekOfMonth: 3 },
{ dayName: 'Thursday', dayOfMonth: 16, weekOfMonth: 3 },
{ dayName: 'Friday', dayOfMonth: 17, weekOfMonth: 3 },
{ dayName: 'Saturday', dayOfMonth: 18, weekOfMonth: 3 },
{ dayName: 'Sunday', dayOfMonth: 19, weekOfMonth: 4 },
{ dayName: 'Monday', dayOfMonth: 20, weekOfMonth: 4 },
{ dayName: 'Tuesday', dayOfMonth: 21, weekOfMonth: 4 },
{ dayName: 'Wednesday', dayOfMonth: 22, weekOfMonth: 4 },
{ dayName: 'Thursday', dayOfMonth: 23, weekOfMonth: 4 },
{ dayName: 'Friday', dayOfMonth: 24, weekOfMonth: 4 },
{ dayName: 'Saturday', dayOfMonth: 25, weekOfMonth: 4 },
{ dayName: 'Sunday', dayOfMonth: 26, weekOfMonth: -47 },
{ dayName: 'Monday', dayOfMonth: 27, weekOfMonth: -47 },
{ dayName: 'Tuesday', dayOfMonth: 28, weekOfMonth: -47 },
{ dayName: 'Wednesday', dayOfMonth: 29, weekOfMonth: -47 },
{ dayName: 'Thursday', dayOfMonth: 30, weekOfMonth: -47 },
{ dayName: 'Friday', dayOfMonth: 31, weekOfMonth: -47 }
]
And here's an example where I get correct results when I set the locale:
import moment from "moment";
moment.locale('en-au');
function weekOfMonth(m) {
return m.week() - moment(m).startOf('month').week() + 1;
}
const m = moment();
m.set({year: 2021, month: 11, date: 1}); // (months are 0-based, days are 1-based)
const decNumDays = m.daysInMonth();
console.log("Number of days in December:", decNumDays);
const result = [];
for (let date = 1; date <= decNumDays; date++) {
m.set('date', date);
result.push({
dayName: m.format("dddd"),
dayOfMonth: date,
weekOfMonth: weekOfMonth(m)
})
}
console.log(result)
Result:
[
{ dayName: 'Wednesday', dayOfMonth: 1, weekOfMonth: 1 },
{ dayName: 'Thursday', dayOfMonth: 2, weekOfMonth: 1 },
{ dayName: 'Friday', dayOfMonth: 3, weekOfMonth: 1 },
{ dayName: 'Saturday', dayOfMonth: 4, weekOfMonth: 1 },
{ dayName: 'Sunday', dayOfMonth: 5, weekOfMonth: 2 },
{ dayName: 'Monday', dayOfMonth: 6, weekOfMonth: 2 },
{ dayName: 'Tuesday', dayOfMonth: 7, weekOfMonth: 2 },
{ dayName: 'Wednesday', dayOfMonth: 8, weekOfMonth: 2 },
{ dayName: 'Thursday', dayOfMonth: 9, weekOfMonth: 2 },
{ dayName: 'Friday', dayOfMonth: 10, weekOfMonth: 2 },
{ dayName: 'Saturday', dayOfMonth: 11, weekOfMonth: 2 },
{ dayName: 'Sunday', dayOfMonth: 12, weekOfMonth: 3 },
{ dayName: 'Monday', dayOfMonth: 13, weekOfMonth: 3 },
{ dayName: 'Tuesday', dayOfMonth: 14, weekOfMonth: 3 },
{ dayName: 'Wednesday', dayOfMonth: 15, weekOfMonth: 3 },
{ dayName: 'Thursday', dayOfMonth: 16, weekOfMonth: 3 },
{ dayName: 'Friday', dayOfMonth: 17, weekOfMonth: 3 },
{ dayName: 'Saturday', dayOfMonth: 18, weekOfMonth: 3 },
{ dayName: 'Sunday', dayOfMonth: 19, weekOfMonth: 4 },
{ dayName: 'Monday', dayOfMonth: 20, weekOfMonth: 4 },
{ dayName: 'Tuesday', dayOfMonth: 21, weekOfMonth: 4 },
{ dayName: 'Wednesday', dayOfMonth: 22, weekOfMonth: 4 },
{ dayName: 'Thursday', dayOfMonth: 23, weekOfMonth: 4 },
{ dayName: 'Friday', dayOfMonth: 24, weekOfMonth: 4 },
{ dayName: 'Saturday', dayOfMonth: 25, weekOfMonth: 4 },
{ dayName: 'Sunday', dayOfMonth: 26, weekOfMonth: 5 },
{ dayName: 'Monday', dayOfMonth: 27, weekOfMonth: 5 },
{ dayName: 'Tuesday', dayOfMonth: 28, weekOfMonth: 5 },
{ dayName: 'Wednesday', dayOfMonth: 29, weekOfMonth: 5 },
{ dayName: 'Thursday', dayOfMonth: 30, weekOfMonth: 5 },
{ dayName: 'Friday', dayOfMonth: 31, weekOfMonth: 5 }
]
|
gharchive/issue
| 2016-02-01T02:48:47 |
2025-04-01T06:39:39.539356
|
{
"authors": [
"Assem-Hafez",
"InstanceOfMichael",
"Knjaz89",
"M7Arman",
"behrangsa",
"bonesoul",
"daniaaalarshad",
"eitanfr",
"michaelhayman",
"mj1856",
"samslow"
],
"repo": "moment/moment",
"url": "https://github.com/moment/moment/issues/2934",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
345161377
|
Fix inconsistent output on new year
Comparing last week of year N and first week of year N+1 produced wrong result
Coverage remained the same at 94.647% when pulling 9844b4476ef1a63f9ece2dde061d685aa75cc1fb on kylekatarnls:patch-2 into 2e2a5b35439665d4b0200143d808a7c26d6cd30f on moment:develop.
@kylekatarnls - please add a test case documenting what you are trying to change.
You get inconsistency on year overlap:
moment('2017-12-31').locale('ja').calendar(moment('2018-01-03'))
// "日曜日 00:00"
moment('2018-01-06').locale('ja').calendar(moment('2018-01-09'))
// "先週土曜日 00:00"
Both should return "先週土曜日 00:00"
You can see in this PR that the week comparison is currently done with <, which fails for 52 (2017) < 1 (2018).
|
gharchive/pull-request
| 2018-07-27T09:44:50 |
2025-04-01T06:39:39.543223
|
{
"authors": [
"coveralls",
"kylekatarnls",
"marwahaha"
],
"repo": "moment/moment",
"url": "https://github.com/moment/moment/pull/4719",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2270445171
|
chore(main): release momento 0.4.9
:robot: I have created a release beep boop
0.4.9 (2024-04-30)
Bug Fixes
remove token and specify email for the redundant tag (#168) (c119376)
This PR was generated with Release Please. See documentation.
:robot: Release is at https://github.com/momentohq/client-sdk-ruby/releases/tag/momento/v0.4.9 :sunflower:
|
gharchive/pull-request
| 2024-04-30T03:50:53 |
2025-04-01T06:39:39.546567
|
{
"authors": [
"momento-github-actions-machine-user"
],
"repo": "momentohq/client-sdk-ruby",
"url": "https://github.com/momentohq/client-sdk-ruby/pull/169",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1586814519
|
chore: add heartbeat support
Bumping the SDK version for heartbeat support. Interrupted will now happen for this class of timeout instead of a stream read error.
Here's a fresh subscription receiving a heartbeat on subscribe:
$ momento -p alpha topic subscribe asd --cache roflmao --verbose
[2023-02-16T00:42:56Z DEBUG momento::utils::user] Token already expired at: 2022-06-02 23:54:38 UTC
[2023-02-16T00:42:56Z DEBUG momento::utils::user] No session found in .momento_session profile...
[2023-02-16T00:42:56Z DEBUG rustls::anchors] add_parsable_certificates processed 166 valid and 0 invalid certs
[2023-02-16T00:42:56Z DEBUG hyper::client::connect::dns] resolving host="cache.cell-alpha-dev.preprod.a.momentohq.com"
[2023-02-16T00:42:56Z DEBUG hyper::client::connect::http] connecting to 54.186.170.81:443
[2023-02-16T00:42:56Z DEBUG hyper::client::connect::http] connected to 54.186.170.81:443
[2023-02-16T00:42:56Z DEBUG rustls::client::hs] No cached session for DnsName(DnsName(DnsName("cache.cell-alpha-dev.preprod.a.momentohq.com")))
[2023-02-16T00:42:56Z DEBUG rustls::client::hs] Not resuming any session
[2023-02-16T00:42:56Z DEBUG rustls::client::hs] Using ciphersuite Tls13(Tls13CipherSuite { suite: TLS13_AES_256_GCM_SHA384, bulk: Aes256Gcm })
[2023-02-16T00:42:56Z DEBUG rustls::client::tls13] Not resuming
[2023-02-16T00:42:56Z DEBUG rustls::client::tls13] TLS1.3 encrypted extensions: [Protocols([PayloadU8([104, 50])])]
[2023-02-16T00:42:56Z DEBUG rustls::client::hs] ALPN protocol is Some(b"h2")
[2023-02-16T00:42:56Z DEBUG h2::client] binding client connection
[2023-02-16T00:42:56Z DEBUG h2::client] client connection bound
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_write] send frame=Settings { flags: (0x0), enable_push: 0, initial_window_size: 2097152, max_frame_size: 16384 }
[2023-02-16T00:42:56Z DEBUG h2::proto::connection] Connection; peer=Client
[2023-02-16T00:42:56Z DEBUG tower::buffer::worker] service.ready=true message=processing request
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_write] send frame=WindowUpdate { stream_id: StreamId(0), size_increment: 5177345 }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_write] send frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_write] send frame=Data { stream_id: StreamId(1) }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_write] send frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
[2023-02-16T00:42:56Z DEBUG rustls::client::tls13] Ticket saved
[2023-02-16T00:42:56Z DEBUG rustls::client::tls13] Ticket saved
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_read] received frame=Settings { flags: (0x0), header_table_size: 4096, max_concurrent_streams: 100, initial_window_size: 1048576, enable_connect_protocol: 0 }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_write] send frame=Settings { flags: (0x1: ACK) }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_read] received frame=WindowUpdate { stream_id: StreamId(0), size_increment: 983041 }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_read] received frame=Settings { flags: (0x1: ACK) }
[2023-02-16T00:42:56Z DEBUG h2::proto::settings] received settings ACK; applying Settings { flags: (0x0), enable_push: 0, initial_window_size: 2097152, max_frame_size: 16384 }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_read] received frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
[2023-02-16T00:42:56Z DEBUG h2::codec::framed_read] received frame=Data { stream_id: StreamId(1) }
[2023-02-16T00:42:56Z DEBUG momento::preview::topics] received a heartbeat
Here's what happens (--verbose) now on a subscription timeout, regardless of whether something was published:
[2023-02-16T00:45:12Z DEBUG momento::preview::topics] received a heartbeat
[2023-02-16T00:46:12Z DEBUG h2::codec::framed_read] received frame=Reset { stream_id: StreamId(1), error_code: NO_ERROR }
[2023-02-16T00:46:12Z DEBUG tonic::codec::decode] decoder inner stream error: Status { code: Unknown, message: "error reading a body from connection: stream error received: not a result of an error", source: Some(hyper::Error(Body, Error { kind: Reset(StreamId(1), NO_ERROR, Remote) })) }
[2023-02-16T00:46:12Z DEBUG momento::response::error] translating raw status to error: Status { code: Unknown, message: "error reading a body from connection: stream error received: not a result of an error", source: Some(hyper::Error(Body, Error { kind: Reset(StreamId(1), NO_ERROR, Remote) })) }
The subscription ended: the request was interrupted by the server without an error
detail: TonicStatus(Status { code: Unknown, message: "error reading a body from connection: stream error received: not a result of an error", source: Some(hyper::Error(Body, Error { kind: Reset(StreamId(1), NO_ERROR, Remote) })) })
Rebased for the restructure - it's a small enough PR that I just force-pushed.
|
gharchive/pull-request
| 2023-02-16T00:49:23 |
2025-04-01T06:39:39.549455
|
{
"authors": [
"kvcache"
],
"repo": "momentohq/momento-cli",
"url": "https://github.com/momentohq/momento-cli/pull/259",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|