id | text | source | created | added | metadata
---|---|---|---|---|---|
1123192863
|
Experiencing memory leaks when using RDS Token assembly
When the application generates an RDS authentication token to use as the password when connecting to an RDS database, we experience memory leaks in our application. We are using the assembly AWSSDK.RDS, Version=3.3.0.0 for now rather than the latest version because of package dependencies in other projects. To confirm this, we tried switching GenerateAuthToken off, used a static password instead, and deployed the change; I could see there is still a slow decline, but it is much slower than before and on a measurable scale. So I believe the RDS token code accounts for the vast majority of the problem. It is just a toggle option, and when we switch it off it works fine and we don't experience any memory issues. Attaching the metrics graph to show the difference from when the static-password change was made and deployed. Please let me know in case you need any more information from my side to look into this issue. Looking forward to your support.
This is the toggle option code -
if (applicationDbContextPostgreSqlSettings.GenerateAuthToken.HasValue && applicationDbContextPostgreSqlSettings.GenerateAuthToken.Value)
{
    builder.Password = RDSAuthTokenGenerator.GenerateAuthToken(
        applicationDbContextPostgreSqlSettings.RegionEndpoint,
        applicationDbContextPostgreSqlSettings.Host,
        applicationDbContextPostgreSqlSettings.Port.Value,
        applicationDbContextPostgreSqlSettings.Username);
}
else
{
    SecureString password = applicationDbContextPostgreSqlSettings.Password;
    password.Decrypt(plainText =>
    {
        builder.Password = plainText;
    });
}
Hi @archanasharma3,
Good afternoon.
Could you please share a sample application so that we can reproduce the issue? The code snippet above doesn't provide enough data points to support a memory leak caused by the call to RDSAuthTokenGenerator.GenerateAuthToken(). To troubleshoot further, we need to investigate the application code and see if it's trying to generate tokens continuously.
Thanks,
Ashish
Hello, as requested, attaching the Startup and DbContext related files. Please let me know if any other files are needed for the investigation.
ApplicationDbContext.txt
Startup.txt
ApplicationDbContextPostgreSql.txt
Hi @archanasharma3,
Good morning.
Unfortunately, I do not see any logic in RDSAuthTokenGenerator that might be causing the memory leaks. I examined your Startup.txt and here are my observations:
You are using services.AddScoped<IApplicationDbContext, ApplicationDbContextPostgreSql>();. Per Microsoft's official documentation on dependency injection in ASP.NET Core, the AddScoped method registers the service with a scoped lifetime, the lifetime of a single request. This means every request from any user gets a new ApplicationDbContextPostgreSql instance, which could end up calling Amazon.RDS.Util.RDSAuthTokenGenerator.GenerateAuthToken() every time. One optimization would be to cache the generated token (you would, however, need to add logic to refresh expired tokens); a sketch of that idea follows these observations.
I do notice that many services are registered at application startup. You might want to revisit those services to see if any of them could be leaking memory.
Could you try developing a sample application with minimal logic that invokes Amazon.RDS.Util.RDSAuthTokenGenerator.GenerateAuthToken(), first with AddScoped() and then with AddSingleton()? Also try upgrading to the latest AWS SDK version; any identified fixes would be pushed on top of the latest version.
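A minimal sketch of the caching idea, assuming RDS auth tokens stay valid for roughly 15 minutes (the class name, field names, and refresh margin below are illustrative, not part of the SDK):
using System;
using Amazon;
using Amazon.RDS.Util;

// Illustrative token cache; not part of the AWS SDK. Tokens are assumed to be
// valid for ~15 minutes, so we regenerate a little early to stay on the safe side.
public sealed class CachedRdsTokenProvider
{
    private readonly object _lock = new object();
    private readonly TimeSpan _refreshAfter = TimeSpan.FromMinutes(10);
    private string _token;
    private DateTime _generatedAtUtc;

    public string GetToken(RegionEndpoint region, string host, int port, string username)
    {
        lock (_lock)
        {
            if (_token == null || DateTime.UtcNow - _generatedAtUtc > _refreshAfter)
            {
                _token = RDSAuthTokenGenerator.GenerateAuthToken(region, host, port, username);
                _generatedAtUtc = DateTime.UtcNow;
            }
            return _token;
        }
    }
}
Registering such a provider with AddSingleton() (while keeping the DbContext itself scoped) would avoid generating a new token on every request.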
Thanks,
Ashish
|
gharchive/issue
| 2022-02-03T15:02:03 |
2025-04-01T04:56:06.340840
|
{
"authors": [
"archanasharma3",
"ashishdhingra"
],
"repo": "aws/aws-sdk-net",
"url": "https://github.com/aws/aws-sdk-net/issues/1973",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2262170660
|
V4 Development: General strategy on how to fill in missing APIs
Describe the feature
I'm currently fiddling around with some performance improvements, and for some cases I need to somehow polyfill certain missing APIs or attributes. For example, in order to implement over-renting when doing stack allocations, it might be beneficial to add SkipLocalsInit. Another example is the missing encoding extension methods that accept ReadOnlySpan.
Several of these cases can be polyfilled by adding corresponding attributes or methods into the utils, and that might be the team's preferred approach. On the other hand, there are libraries like Polyfill or PolySharp that to some extent implement those workarounds already.
In order to move forward, it would be great if the strategy for the v4 development branch were documented somewhere. Additionally, the team could set LangVersion to a reasonable language version that the team wants to allow, so that contributors have clear guidelines on what to use and what not to use.
Use Case
Contributing to v4 development, although these are questions the team has to answer for the v4 branch regardless of whether anyone contributes.
Proposed Solution
No response
Other Information
No response
Acknowledgements
[ ] I may be able to implement this feature request
[ ] This feature might incur a breaking change
AWS .NET SDK and/or Package version used
v4
Targeted .NET Platform
all
Operating System and version
all
To add an example, here is a new URL encode method I quickly spiked: https://github.com/danielmarbach/aws-sdk-net/commit/81d85869f3f096d522966b7c93a1cf862c74cb27. Already there I'm missing proper GetBytes and GetString overloads for netstandard that I would have to shim, I don't have SkipLocalsInit, and I have a discrepancy between TryGetValue and GetValueOrDefault.
@danielmarbach Agreed, we need to get this documented. We haven't really talked much about V4 yet other than the blog post announcing the drop in support for .NET Framework 3.5. We're not trying to keep it a secret, but we are taking care of some internal infrastructure changes right now to support both V3 and V4. After that I was hoping to be more proactively public about V4. That being said, don't let that stop you from contributing to V4. The main thing about V4 is that it will be evolutionary, not revolutionary, so there will only be breaking changes when necessary to make a better product.
To your immediate question about polyfills: the SDK is still going to take a hard rule of avoiding external dependencies. The SDK sits too low in most people's application stacks, and dependencies we take on can cause collisions with the versions customers use, or we could be forced to make breaking changes due to dependencies we don't control.
Using pragmas to target newer versions and leaving older versions using stock implementations is perfectly fine in the SDK. I'm not very interested in having pragmas with special implementations in older targets. I don't want to build up code we have to maintain for targets people are moving away from.
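As a rough illustration of the pragma approach (a hypothetical helper, not actual SDK code; the target symbol is just an example), newer targets can call the span-based overload directly while older targets fall back to the stock array-based API:
using System;
using System.Text;

internal static class EncodingHelpers
{
    // Hypothetical helper: span-based UTF-8 decoding on modern targets,
    // array-based fallback on older targets such as netstandard2.0.
    public static string Utf8GetString(ReadOnlySpan<byte> bytes)
    {
#if NETCOREAPP3_1_OR_GREATER
        return Encoding.UTF8.GetString(bytes);
#else
        return Encoding.UTF8.GetString(bytes.ToArray());
#endif
    }
}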
@normj
Both Polyfill and PolySharp are source-only dependencies, meaning that customers wouldn't even know that the SDK uses them.
Absolutely what @Dreamescaper said. Essentially those are development-time dependencies and will not be dependencies of the shipped product at all. So the risk here is probably more of a supply-chain risk than anything else (which obviously requires consideration, I get that).
I didn't realize the libraries were source-only dependencies. As you can tell, I haven't ever tried them before. Let me play around with them to get more familiar with them and then I will come back.
|
gharchive/issue
| 2024-04-24T20:56:58 |
2025-04-01T04:56:06.350726
|
{
"authors": [
"Dreamescaper",
"danielmarbach",
"normj"
],
"repo": "aws/aws-sdk-net",
"url": "https://github.com/aws/aws-sdk-net/issues/3298",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2019677187
|
feat: Add support for encoded data to Value
Description of changes:
Add getEncodedBytes() to Value and all its child classes so values can be recreated after encryption.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Commits are showing up wrong, closing PR.
|
gharchive/pull-request
| 2023-11-30T23:28:43 |
2025-04-01T04:56:06.400350
|
{
"authors": [
"m271828"
],
"repo": "aws/c3r",
"url": "https://github.com/aws/c3r/pull/431",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1363763979
|
fake change. DO NOT MERGE
Issue #, if available:
Description of changes:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
/hold
|
gharchive/pull-request
| 2022-09-06T20:12:12 |
2025-04-01T04:56:06.401650
|
{
"authors": [
"kschumy"
],
"repo": "aws/eks-anywhere-build-tooling",
"url": "https://github.com/aws/eks-anywhere-build-tooling/pull/1267",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2504133044
|
Fix prometheus/prometheus version in the README
Description of changes:
#3683 updated the prometheus/prometheus version in the README which caused the validate-generated-presubmit prow job to fail on the open PRs like #3694 and #3706 with the following error:
Error: Generated files, UPSTREAM_PROJECTS.yaml README.md release/staging-build.yml release/checksums-build.yml batch-build.yml, do not match expected. Please run 'make generate' to update
diff --git a/projects/prometheus/prometheus/README.md b/projects/prometheus/prometheus/README.md
index cf45a32e..dadbcd2d 100644
--- a/projects/prometheus/prometheus/README.md
+++ b/projects/prometheus/prometheus/README.md
@@ -1,4 +1,4 @@
-
+

## **Prometheus**
make: *** [validate-generated] Error 1
This PR reverts that change to match the expected version in the GIT_TAG file.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
/approve
/lgtm
|
gharchive/pull-request
| 2024-09-04T01:59:39 |
2025-04-01T04:56:06.405263
|
{
"authors": [
"abhay-krishna",
"sp1999"
],
"repo": "aws/eks-anywhere-build-tooling",
"url": "https://github.com/aws/eks-anywhere-build-tooling/pull/3714",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1527948512
|
Add validation for management cluster referenced in workload cluster spec to be valid
Issue #, if available:
Description of changes:
Added additional validations to ensure that we can retrieve the EKS-A cluster object from the management cluster, and to confirm that the referenced cluster is indeed a management cluster by checking that its management cluster name references itself.
Testing (if applicable):
unit tests and testing these scenarios functionally
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
/approve
|
gharchive/pull-request
| 2023-01-10T20:14:46 |
2025-04-01T04:56:06.407445
|
{
"authors": [
"vivek-koppuru"
],
"repo": "aws/eks-anywhere",
"url": "https://github.com/aws/eks-anywhere/pull/4596",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2131474301
|
Release EKS Go: v1.20.14-15
Increment release file to publish new EKS Go artifacts for v1.20.14-15
/lgtm
/approve
|
gharchive/pull-request
| 2024-02-13T05:08:13 |
2025-04-01T04:56:06.408586
|
{
"authors": [
"xdu31",
"zafs23"
],
"repo": "aws/eks-distro-build-tooling",
"url": "https://github.com/aws/eks-distro-build-tooling/pull/1339",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1085419574
|
feat(go) - research using generics for improved UX
:rocket: Feature Request
Affected Languages
[ ] TypeScript or Javascript
[ ] Python
[ ] Java
[ ] .NET (C#, F#, ...)
[x] Go
General Information
JSII Version: 1.49.0
Platform: *
[x] This feature might incur a breaking change
Description
Go 1.18 is scheduled to release as stable in Feb 2022 with support for generic types. This new feature allows us to potentially take an alternative approach to supporting optional values, enabling type casting, and various runtime improvements.
Proposed Solution
Optional Types
Definition of some wrapper type like:
type Optional[T any] struct {
    val     T
    defined bool
}
This would allow generating code that uses Optional[Type] instead of *Type everywhere. To the user this would have largely similar semantics, since all types, regardless of whether they are optional or required in TypeScript, would have to use this wrapper type in order to uphold the backwards-compatibility guarantees detailed in the RFC. It may, however, emerge as the more idiomatic solution once generics are commonly used in Go, and values can be defined inline instead of using a utility function like jsii.String or needing to allocate a variable first.
Additionally this could simplify reflection code within the Go runtime where currently some relatively complicated logic exists around inspecting and dereferencing pointers during reflection.
Type Casting
We may be able to define a function, returning a generic type representing a parent or child class (in typescript) to support downcasting as needed for escape hatches.
Runtime Improvements
Changing JSII runtime functionality to use generics:
// from
func (c *Client) Get(props GetProps) (response GetResponse, err error)
// to
func (c *Client) Get[A any](props GetProps) (response GetResponse, err error, val A)
Could potentially give stronger compile time guarantees for the runtime library and generated code and reduce the usage of unsafe pointer manipulation.
I extensively prototyped in the https://github.com/aws/jsii/tree/rmuller/explore/go1.18-generics branch.
Here are some of the high-level findings (pending a more comprehensive writeup):
Methods cannot introduce new type arguments (they may of course use the type parameters of the type they are a member of)
We can design a "marker" interface that can be used in any place where a value is optional (nil-able):
type Option[T any] interface {
    // This is merely a marker function to facilitate
    // getting the underlying value type through reflection
    Unwrap__() T
}
All generated types simply implement this interface (T is the implementing type)
This means a signature that accepts any T can be updated to accept Optional[T] without breaking
Unfortunately any given type can only implement the marker ONCE (as it'd otherwise require multiple identically-named methods to be implemented, which is impossible)
Structs (in the jsii sense) and enums are passed by-value when required, so they simply cannot be nil.
We still need to use wrappers for primitive types (string, float64, etc...), but instead of being pointers to these types, they can be "real" aliases:
type String string
// String is a valid value for Option[String]
func (s String) Unwrap__() String { return s }
These cannot be nil when required.
They implement Option[T]
These types have perhaps a more "go native" feel to them.
The marker allows runtime validation that structs have values for all required (and nil-able) fields they have
We can streamline the runtime APIs that currently accept reflect.Type so they are instead generic functions:
// Before
RegisterStruct(reflect.TypeOf((*StructType)(nil)).Elem())
// After:
RegisterStruct[StructType]()
We can implement a decent UnsafeCast feature using generics, where the experience could look like so:
original := SomeFunction() // A union of structs
downCasted := jsii.UnsafeCast[SpecificStruct](original)
These definitely look like some significant improvements that at first glance appear to make the Go experience easier to get started with and understand. We should work towards providing a working experience for users to try and provide feedback on though if possible without rewriting too much of our current implementation, if only to provide further validation without committing to an entirely new approach.
Additional notes in https://github.com/aws/aws-cdk-rfcs/pull/397
|
gharchive/issue
| 2021-12-21T04:17:22 |
2025-04-01T04:56:06.426275
|
{
"authors": [
"MrArnoldPalmer",
"RomainMuller"
],
"repo": "aws/jsii",
"url": "https://github.com/aws/jsii/issues/3276",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1623488132
|
FileNotFoundException: Could not load file or assembly ...
Hi,
I get the error "System.IO.FileNotFoundException: Could not load file or assembly" during the assessment, but Visual Studio's build is successful and the files exist in the related folders. I can see the related files after starting a build in Visual Studio 2022, but there are no files in the folder after starting an assessment with Porting Assistant.
This solution contains Silverlight and ASP.NET Web Application projects. Does Porting Assistant for .NET support these?
Log;
[2023-03-14 15:41:07 ERR] (1.9.2) PortingAssistant.Client.Analysis.PortingAssistantAnalysisHandler: Error while resolving reference Microsoft.CodeAnalysis.MetadataImageReference System.IO.FileNotFoundException: Could not load file or assembly '...\bin\Debug\XYZ.dll'. The system cannot find the file specified. File name: '...\bin\Debug\XYZ.dll'
at System.Reflection.AssemblyName.nGetFileInformation(String s)
at System.Reflection.AssemblyName.GetAssemblyName(String assemblyFile)
at Codelyzer.Analysis.Build.ExternalReferenceLoader.LoadFromCompilation(HashSet`1 projectReferenceNames)
Thank you for opening this issue! We've opened a ticket to track and investigate this issue internally.
|
gharchive/issue
| 2023-03-14T13:30:50 |
2025-04-01T04:56:06.430593
|
{
"authors": [
"mrkdeng",
"orcunhanay"
],
"repo": "aws/porting-assistant-dotnet-ui",
"url": "https://github.com/aws/porting-assistant-dotnet-ui/issues/786",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
443552702
|
Adds proof harness for aws_nospec_mask function
Signed-off-by: Felipe R. Monteiro felisous@amazon.com
Description of changes:
Updates implementation of aws_nospec_mask function with pre- and post-conditions;
Adds a proof harness for aws_nospec_mask function;
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
@karkhaz @danielsn @Baha @dlohar @mmuesly any comments?
|
gharchive/pull-request
| 2019-05-13T19:01:33 |
2025-04-01T04:56:06.461168
|
{
"authors": [
"feliperodri"
],
"repo": "awslabs/aws-c-common",
"url": "https://github.com/awslabs/aws-c-common/pull/349",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
426430830
|
"cdk diff" returns error code 1
Calling cdk diff from npm fails because the command does not return 0.
package.json
"scripts": {
"diff": "cdk diff"
}
Running npm run diff fails.
cdk diff; echo $? shows the error code returned is 1.
A workaround to get rid of the npm ERR! ... output:
package.json
"scripts": {
"diff": "cdk diff || true"
}
Is this enough of an issue to document? Or are we going to try to make it return 0?
It returns 0 if there are no differences as documented here:
https://github.com/awslabs/aws-cdk/blob/7001f7723cde611f0d8cd9c182f92120b09579c3/packages/aws-cdk/bin/cdk.ts#L69
and here https://docs.aws.amazon.com/CDK/latest/userguide/tools.html
As much as this is a nice and pure way to express that "there is a diff", it doesn't seem to be what people expect. This is the 3rd or 4th time someone has fallen into this little pit. I'd argue that we can add a --fail flag for those brave souls who are interested in using cdk diff to check if there is a diff, and change the default behavior to what people most expect.
For scripts which rely on the error code from cdk diff, there is no way to know if this is due to diffs or some other error (e.g. invalid python code). The flag seems quite appropriate. This is how git does it as well.
git diff --name-only
echo $? # returns 0
git diff --name-only --exit-code
echo $? # returns 1 if there are diffs
Any updates?
Any updates?
Will open a PR for this.
When running through a pipeline (i.e. Jenkins), I'd actually expect a different result. If there's a diff that's cool and I want to move on to the next stage of the pipeline and deploy it, so an exit code of 1 is actually not good here. If there's an ERROR, like someone committed bad code to the repo which causes cdk diff to actually throw an error, like invalid syntax or anything that would cause it to NOT deploy... That seems like an error code of 1 or 2 or something else, no?
Expected results would be:
exit 0 -- No diff and the expected pre-compilation or whatever didn't find any errors in synthesizing.
exit 1 -- You have an error, I could not synth due to any number of reasons, invalid syntax, etc etc.
exit 2 -- Synth worked, code looks clean, and there is a diff that I would apply if you were to deploy.
With those, I can create really functional pipeline steps. With exit 0, I can skip the deploy stage entirely because there's no need. With exit 1, I can shoot an email to the Dev who last committed to the repo and broke the pipeline with bad syntax or something. With exit 2, I know something changed and I move on to apply it.
With either a blanket return of 0, or forcing me now to use "|| true" ... I actually get nothing. I end up having to create an Approval stage where the pipeline sits until someone looks at the output of the Diff to see if it was a real change or if the CDK is now broken and wouldn't deploy if we wanted it to.
Suggestions / Thoughts?
@timwelch what if there was no CloudFormation diff, but you updated the CloudFormation tags? I think that's the only scenario I see that isn't covered in what you mentioned.
I didn't know the tags were an issue, but I also don't know the full inner workings of CDK and/or CloudFormation. It actually sounds like another instance of CDK / CFN not showing that changes are needed.
A key example I have of that is in an ASG, when you modify the update_policy....
self.asg = autoscaling.AutoScalingGroup(
    self,
    rekor.id_str(self.app_name + '-ASG'),
    role=self.role,
    vpc=self.vpc,
    instance_type=ec2.InstanceType(self.env_data['instance_type']),
    machine_image=self.ami,
    vpc_subnets=self.private_subnets,
    # desired_capacity=0,
    max_capacity=2,
    # min_capacity=0,
    user_data=self.userdata,
    key_name=self.keypair.name,
    associate_public_ip_address=False,
    security_group=self.sg,
    cooldown=core.Duration.seconds(180),
    ignore_unmodified_size_properties=True,
    notifications=None if not self.notifications_sns_topic else [
        autoscaling.NotificationConfiguration(
            topic=self.notifications_sns_topic,
            scaling_events=autoscaling.ScalingEvents.ALL)],
    update_policy=autoscaling.UpdatePolicy.rolling_update(
        max_batch_size=1,
        min_instances_in_service=1,
        suspend_processes=[
            autoscaling.ScalingProcess.HEALTH_CHECK,
            autoscaling.ScalingProcess.REPLACE_UNHEALTHY,
            autoscaling.ScalingProcess.AZ_REBALANCE,
            autoscaling.ScalingProcess.ALARM_NOTIFICATION,
            autoscaling.ScalingProcess.SCHEDULED_ACTIONS
        ]
    )
)
You can modify the update policy there ALL day long and the CDK will show NO diff. It's only when you add a useless Output ...
core.CfnOutput(self, "force_asg_update_policy", value="true")
That the CDK diff will show a change (the output) and then you can deploy the CDK and it will actually update the stack. IF you DO NOT create/update an Output, or any other resource, the CDK DEPLOY will just say there's nothing to deploy and NOT deploy anything, even though the update_policy was modified...
So, my point is... if Tags work that way, then it's not a one-off case, and just getting some kind of return codes like I mentioned could get us further in a direction of happiness for pipelines, while working to come up with solutions for these other random cases.
I would be curious if there could be a use-case for pulling down the current CFN Template and diff it with the synthesized CFN? ... I can clearly see the update_policy / rolling_update changes in the cdk.out folder where it synthesizes what would be pushed. So there could be a comparison and even an error or warning that ... hey, though CDK diff says nothing changed... there's actually a discrepancy between the current / live CFN and what we would be pushing in place.
Running into a similar scenario with building our pipelines. I really like your suggestions.
|
gharchive/issue
| 2019-03-28T11:04:36 |
2025-04-01T04:56:06.484601
|
{
"authors": [
"Doug-AWS",
"abelmokadem",
"cal5barton",
"eladb",
"isubuz",
"jogold",
"timwelch",
"xapou"
],
"repo": "awslabs/aws-cdk",
"url": "https://github.com/awslabs/aws-cdk/issues/2111",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
425175674
|
feat(VpcNetwork): support subnet groups in subnetConfiguration
Closes #2087
Adds functionality to add subnet groups which reserve a block of IP space within a VPC.
Pull Request Checklist
[x] Testing
Unit test added (prefer not to modify an existing test, otherwise, it's probably a breaking change)
CLI change?: coordinate update of integration tests with team
cdk-init template change?: coordinated update of integration tests with team
[x] Docs
jsdocs: All public APIs documented
README: README and/or documentation topic updated
[x] Title and Description
Change type: title prefixed with fix, feat will appear in changelog
Title: use lower-case and doesn't end with a period
Breaking?: last paragraph: "BREAKING CHANGE: <describe what changed + link for details>"
Issues: Indicate issues fixed via: "Fixes #xxx" or "Closes #xxx"
[x] Sensitive Modules (requires 2 PR approvers)
IAM Policy Document (in @aws-cdk/aws-iam)
EC2 Security Groups and ACLs (in @aws-cdk/aws-ec2)
Grant APIs (only if not based on official documentation with a reference)
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license.
I think I'm in favor of a simpler solution, see the ticket for further discussion.
|
gharchive/pull-request
| 2019-03-26T00:41:11 |
2025-04-01T04:56:06.491351
|
{
"authors": [
"darcoli",
"rix0rrr"
],
"repo": "awslabs/aws-cdk",
"url": "https://github.com/awslabs/aws-cdk/pull/2090",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
436082627
|
New rule on SAGEMAKER_TRAINING_JOB_NETWORK_ISOLATION_ENABLED
New rule to verify that the Amazon SageMaker training jobs have Network Isolation enabled.
Please provide comments on the gherkin if any.
Description:
Check whether Network Isolation is enabled for all Amazon SageMaker Training Jobs.
Trigger:
Periodic
Reports on:
AWS::::Account
Scenarios:
Scenario: 1
Given: No Amazon SageMaker training jobs exist
Then: Return NOT_APPLICABLE
Scenario: 2
Given: At least one Amazon SageMaker training job exists
And: EnableNetworkIsolation is set to False for at least one Amazon SageMaker training job
Then: Return NON_COMPLIANT with annotation "Network isolation is not enabled for the Amazon SageMaker Training Job(s): <Training job names>."
Scenario: 3
Given: At least one Amazon SageMaker training job exists
And: EnableNetworkIsolation is set to True for all Amazon SageMaker training jobs
Then: Return COMPLIANT
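A rough sketch of the evaluation logic these scenarios describe, assuming a Lambda-backed periodic rule using boto3 (the helper name and return convention are illustrative, not the actual rule code):
import boto3

sagemaker = boto3.client("sagemaker")

def evaluate_compliance():
    # Scenario 1: no Amazon SageMaker training jobs exist
    job_names = []
    for page in sagemaker.get_paginator("list_training_jobs").paginate():
        job_names.extend(job["TrainingJobName"] for job in page["TrainingJobSummaries"])
    if not job_names:
        return "NOT_APPLICABLE", None

    # Scenario 2: at least one job has EnableNetworkIsolation set to False
    non_compliant = [
        name for name in job_names
        if not sagemaker.describe_training_job(TrainingJobName=name).get("EnableNetworkIsolation", False)
    ]
    if non_compliant:
        annotation = ("Network isolation is not enabled for the Amazon SageMaker "
                      "Training Job(s): {}.".format(", ".join(non_compliant)))
        return "NON_COMPLIANT", annotation

    # Scenario 3: all jobs have EnableNetworkIsolation set to True
    return "COMPLIANT", None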
Just added a space in "at least". Good to go!
|
gharchive/issue
| 2019-04-23T09:16:13 |
2025-04-01T04:56:06.496780
|
{
"authors": [
"jongogogo",
"kritijha"
],
"repo": "awslabs/aws-config-rules",
"url": "https://github.com/awslabs/aws-config-rules/issues/208",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
929674341
|
wr.s3.read_parquet() does not read the partition column
Describe the bug
The following table has three columns: day, month, year, where year is the partition key.
! aws s3 ls s3://ocean-qualiu/spark/test_wr_2_test_parquet_write_a/
PRE year=2018/
PRE year=2019/
PRE year=2020/
When running the following Python code to read the table path, pandas shows a dataframe with only day and month.
PS. Redshift Spectrum and Athena were able to recognize the partition column as a column.
wr.s3.read_parquet("s3://ocean-qualiu/spark/test_wr_2_test_parquet_write_a/").head()
Environment
Package Version
alabaster 0.7.12
anaconda-client 1.7.2
anaconda-project 0.8.3
argh 0.26.2
asn1crypto 1.4.0
astroid 2.4.2
astropy 4.0
atomicwrites 1.3.0
attrs 20.3.0
Automat 20.2.0
autopep8 1.4.4
autovizwidget 0.17.1
awscli 1.18.197
awswrangler 2.8.0
Babel 2.8.0
backcall 0.1.0
backports.shutil-get-terminal-size 1.0.0
bcrypt 3.2.0
beautifulsoup4 4.8.2
bitarray 1.2.1
bkcharts 0.2
bleach 3.2.1
bokeh 1.4.0
boto 2.49.0
boto3 1.16.37
botocore 1.19.37
Bottleneck 1.3.2
cached-property 1.5.2
certifi 2020.11.8
cffi 1.14.0
chardet 3.0.4
Click 7.0
cloudpickle 1.3.0
clyent 1.2.2
colorama 0.4.3
contextlib2 0.6.0.post1
cryptography 2.8
cycler 0.10.0
Cython 0.29.15
cytoolz 0.10.1
dask 2.11.0
decorator 4.4.1
defusedxml 0.6.0
diff-match-patch 20181111
dill 0.3.3
distributed 2.11.0
distro 1.5.0
docker 4.4.0
docker-compose 1.27.4
dockerpty 0.4.1
docopt 0.6.2
docutils 0.15.2
entrypoints 0.3
environment-kernels 1.1.1
et-xmlfile 1.0.1
fastcache 1.1.0
filelock 3.0.12
flake8 3.7.9
Flask 1.1.1
fsspec 0.6.2
future 0.18.2
gevent 1.4.0
glob2 0.7
gmpy2 2.0.8
google-pasta 0.2.0
greenlet 0.4.15
h5py 2.10.0
hdijupyterutils 0.17.1
HeapDict 1.0.1
html5lib 1.0.1
hypothesis 5.5.4
idna 2.10
imageio 2.6.1
imagesize 1.2.0
importlib-metadata 3.1.0
intervaltree 3.0.2
ipykernel 5.1.4
ipyparallel 6.3.0
ipython 7.12.0
ipython-genutils 0.2.0
ipywidgets 7.5.1
isort 4.3.21
itsdangerous 1.1.0
jdcal 1.4.1
jedi 0.14.1
jeepney 0.4.2
Jinja2 2.11.1
jmespath 0.10.0
joblib 0.14.1
json5 0.9.1
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 5.3.4
jupyter-console 6.1.0
jupyter-core 4.6.1
jupyterlab 1.2.6
jupyterlab-server 1.0.6
keyring 21.1.0
kiwisolver 1.1.0
lazy-object-proxy 1.4.3
libarchive-c 2.8
lief 0.9.0
llvmlite 0.34.0
locket 0.2.0
lxml 4.6.2
MarkupSafe 1.1.1
matplotlib 3.1.3
mccabe 0.6.1
mistune 0.8.4
mkl-fft 1.0.15
mkl-random 1.1.0
mkl-service 2.3.0
mock 4.0.1
more-itertools 8.2.0
mpmath 1.1.0
msgpack 0.6.1
multipledispatch 0.6.0
nb-conda 2.2.1
nb-conda-kernels 2.2.4
nbconvert 5.6.1
nbformat 5.0.4
networkx 2.4
nltk 3.4.5
nose 1.3.7
notebook 6.0.3
numba 0.51.2
numexpr 2.7.1
numpy 1.19.4
numpydoc 0.9.2
nvidia-ml-py 10.418.84
olefile 0.46
opencv-python 4.2.0.32
openpyxl 3.0.3
packaging 20.7
pandas 1.1.5
pandocfilters 1.4.2
paramiko 2.7.2
parso 0.5.2
partd 1.1.0
path 13.1.0
pathlib2 2.3.5
pathtools 0.1.2
patsy 0.5.1
pep8 1.7.1
pexpect 4.8.0
pg8000 1.19.5
pickleshare 0.7.5
Pillow 7.2.0
pip 20.3
pkginfo 1.5.0.1
plotly 4.13.0
pluggy 0.13.1
ply 3.11
prometheus-client 0.7.1
prompt-toolkit 3.0.3
protobuf 3.14.0
protobuf3-to-dict 0.1.5
psutil 5.6.7
psycopg2 2.7.5
ptyprocess 0.6.0
py 1.8.1
py3cli 1.0.1
py4j 0.10.7
pyarrow 2.0.0
pyasn1 0.4.8
pycodestyle 2.5.0
pycosat 0.6.3
pycparser 2.19
pycrypto 2.6.1
pycurl 7.43.0.5
pydocstyle 4.0.1
pyfiglet 0.7
pyflakes 2.1.1
pyfunctional 1.4.2
pygal 2.4.0
Pygments 2.5.2
pykerberos 1.2.1
pylint 2.6.0
PyMySQL 1.0.2
PyNaCl 1.4.0
pyodbc 4.0.0-unsupported
pyOpenSSL 19.1.0
pyparsing 2.4.7
pyrsistent 0.15.7
PySocks 1.7.1
pyspark 2.3.4
pytest 5.3.5
pytest-arraydiff 0.3
pytest-astropy 0.8.0
pytest-astropy-header 0.1.2
pytest-doctestplus 0.5.0
pytest-openfiles 0.4.0
pytest-remotedata 0.3.2
python-dateutil 2.8.1
python-dotenv 0.15.0
python-jsonrpc-server 0.3.4
python-language-server 0.31.7
pytz 2021.1
PyWavelets 1.1.1
pyxdg 0.26
PyYAML 5.3.1
pyzmq 18.1.1
QDarkStyle 2.8
QtAwesome 0.6.1
qtconsole 4.6.0
QtPy 1.9.0
redshift-connector 2.0.881
requests 2.25.0
requests-kerberos 0.12.0
retrying 1.3.3
rope 0.16.0
rsa 4.5
Rtree 0.9.3
ruamel-yaml 0.15.87
s3fs 0.4.2
s3transfer 0.3.3
sagemaker 2.19.0
sagemaker-pyspark 1.4.1
scikit-image 0.16.2
scikit-learn 0.22.1
scipy 1.4.1
scramp 1.4.0
seaborn 0.10.0
SecretStorage 3.1.2
Send2Trash 1.5.0
setuptools 50.3.2
simplegeneric 0.8.1
singledispatch 3.4.0.3
six 1.15.0
sklearn 0.0
smclarify 0.1
smdebug-rulesconfig 1.0.0
snowballstemmer 2.0.0
sortedcollections 1.1.2
sortedcontainers 2.1.0
soupsieve 1.9.5
sparkmagic 0.15.0
Sphinx 3.3.1
sphinxcontrib-applehelp 1.0.1
sphinxcontrib-devhelp 1.0.1
sphinxcontrib-htmlhelp 1.0.2
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.2
sphinxcontrib-serializinghtml 1.1.3
sphinxcontrib-websupport 1.2.0
spyder 4.0.1
spyder-kernels 1.8.1
SQLAlchemy 1.3.13
statsmodels 0.11.0
sympy 1.5.1
tables 3.6.1
tabulate 0.8.7
tblib 1.6.0
terminado 0.8.3
testpath 0.4.4
texttable 1.6.3
toml 0.10.2
toolz 0.10.0
tornado 6.0.3
tqdm 4.42.1
traitlets 4.3.3
typed-ast 1.4.1
ujson 1.35
unicodecsv 0.14.1
urllib3 1.25.11
watchdog 0.10.2
wcwidth 0.1.8
webencodings 0.5.1
websocket-client 0.57.0
Werkzeug 1.0.0
wheel 0.34.2
widgetsnbextension 3.5.1
wrapt 1.11.2
wurlitzer 2.0.0
xlrd 1.2.0
XlsxWriter 1.2.7
xlwt 1.3.0
yapf 0.28.0
zict 1.0.0
zipp 3.4.0
WARNING: You are using pip version 20.3; however, version 21.1.2 is available.
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/python3/bin/python -m pip install --upgrade pip' command.
To Reproduce
Create a partitioned parquet dataset with Data Wrangler.
Read the parquet files with wr.s3.read_parquet(table path)
P.S. Please do not attach files as it's considered a security risk. Add code snippets directly in the message body as much as possible.
You must add dataset=True as a parameter to the call. From the read_parquet docs:
dataset (bool) – If True read a parquet dataset instead of simple file(s) loading all the related partitions as columns.
Example:
import awswrangler as wr
import pandas as pd
wr.s3.to_parquet(
df=pd.DataFrame({
"col": [1, 2, 3],
"col2": ["A", "A", "B"]
}),
path="s3://bucket/prefix",
dataset=True,
partition_cols=["col2"]
)
df1 = wr.s3.read_parquet("s3://bucket/prefix/")
print(df1.head())
col
0 1
1 2
2 3
df2 = wr.s3.read_parquet("s3://bucket/prefix/", dataset=True)
print(df2.head())
col col2
0 1 A
1 2 A
2 3 B
|
gharchive/issue
| 2021-06-24T22:53:38 |
2025-04-01T04:56:06.545285
|
{
"authors": [
"jaidisido",
"samliuq"
],
"repo": "awslabs/aws-data-wrangler",
"url": "https://github.com/awslabs/aws-data-wrangler/issues/764",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1411276411
|
feat(docs): update docs when creating a new Layer ARN
Issue number: #1549
Summary
This PR automates the publishing of the layer version in our documentation for every release.
This will reduce maintenance overhead and make the publishing of the new layer version faster.
Changes
Please provide a summary of what's being changed
Update action-lint to support multiple nested workflow calls https://github.blog/changelog/2022-08-22-github-actions-improvements-to-reusable-workflows-2/
Added arm64 layers to docs
Added method to republish docs after a new published Layer
User experience
Please share what the user experience looks like before and after this change
After this change, the new layer version will be updated in the documentation automatically, as soon as the layer is created and published.
Checklist
If your change doesn't seem to apply, please leave them unchecked.
[x] Meet tenets criteria
[x] I have performed a self-review of this change
[x] Changes have been tested
[x] Changes are documented
[x] PR title follows conventional commit semantics
Is this a breaking change?
RFC issue number:
Checklist:
[ ] Migration process documented
[ ] Implement warnings (if it can live side by side)
Acknowledgment
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Disclaimer: We value your time and bandwidth. As such, any pull requests created on non-triaged issues might not be successful.
View rendered docs/index.md
Note: update the rest of the SAM sample code on docs/index.md
Note: remove "Can't find our Lambda Layer for your preferred AWS region?" because we are now publishing in all regions.
Docs looks excellent. Perhaps let's just increase md-grid (where the content is) to 90vw to give a bit more room, otherwise the layer per region table will look too tight
Consider it approved after these last changes to ease maintenance.
|
gharchive/pull-request
| 2022-10-17T09:56:25 |
2025-04-01T04:56:06.556875
|
{
"authors": [
"heitorlessa",
"rubenfonseca"
],
"repo": "awslabs/aws-lambda-powertools-python",
"url": "https://github.com/awslabs/aws-lambda-powertools-python/pull/1610",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1860442475
|
ci: add release new version action
Description
Add release new version action
General Checklist
[ ] Added new tests to cover change, if needed
[ ] Security oriented best practices and standards are followed (e.g. using input sanitization, principle of least privilege, etc)
[ ] Documentation update for the change if required
[x] PR title conforms to conventional commit style
[ ] If breaking change, documentation/changelog update with migration instructions
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Codecov Report
Patch and project coverage have no change.
Comparison is base (6b3b527) 99.65% compared to head (6762a45) 99.65%.
Additional details and impacted files
@@ Coverage Diff @@
## main #12 +/- ##
=======================================
Coverage 99.65% 99.65%
=======================================
Files 24 24
Lines 882 882
Branches 151 151
=======================================
Hits 879 879
Misses 3 3
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
gharchive/pull-request
| 2023-08-22T02:34:42 |
2025-04-01T04:56:06.562404
|
{
"authors": [
"codecov-commenter",
"zhu-xiaowei"
],
"repo": "awslabs/clickstream-web",
"url": "https://github.com/awslabs/clickstream-web/pull/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
786460459
|
Use custom NDK version instead of action default NDK
PyTorch Android native build only works on NDK version 20.
Action uses version 22 by default.
Have you tested the android pipeline?
I tested it on my local repo https://github.com/stu1130/djl/runs/1705801591?check_suite_focus=true you can see it only failed on upload
|
gharchive/pull-request
| 2021-01-15T01:26:47 |
2025-04-01T04:56:06.568832
|
{
"authors": [
"lanking520",
"stu1130"
],
"repo": "awslabs/djl",
"url": "https://github.com/awslabs/djl/pull/531",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
801816749
|
Fix spelling error in ListDataset docstring
Issue #, if available: No issue created
Description of changes: I found a very small spelling error in the ListDataset documentation and fixed it.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Thank you!
|
gharchive/pull-request
| 2021-02-05T03:44:36 |
2025-04-01T04:56:06.570270
|
{
"authors": [
"flrs",
"lostella"
],
"repo": "awslabs/gluon-ts",
"url": "https://github.com/awslabs/gluon-ts/pull/1311",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1044438166
|
Add Offerings to InstanceType to hold zone and capacity type, remove OS from InstanceType, promote capacity-type label
1. Issue, if available:
Part of https://github.com/awslabs/karpenter/issues/371
2. Description of changes:
OS is not an instance type level concept - this seems to have been shoehorned in and causes unnecessary confusion. Removing it until we figure out proper support for OS constraints.
When we do ICE caching, we need to be able to filter out combinations of zone and capacity type when we see ICEs. Since packing otherwise does not care about them, it seemed preferable to put a slice of these tuples ("Offerings") inside InstanceType rather than having to permute all the possible combinations in bin packing (i.e. if there are 3 zones and 2 capacity types, with this Offering refactoring we can still consider just a single InstanceType for an actual instance type, rather than creating six InstanceType instances in the status quo to cover the zone/capacity-type permutations); a sketch of the resulting shape appears after the checklist below.
Added a note on tailing logs to the dev guide. Maybe obvious to those of you very familiar with K8s, but was not obvious to me =P
In talking to Ellis, decided to bite the bullet and re-name the capacity-type label to make it a first class citizen rather than being relegated to the AWS cloud provider. This simplifies some of the scheduler code a little and honestly just makes more sense.
I'm not entirely sure if I need to add additional unit tests at this time - I think we're covered with what's there but open to feedback.
3. Does this change impact docs?
[x] Yes, PR includes docs updates
[ ] Yes, issue opened: link to issue
[ ] No
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
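A minimal sketch of the shape described in the Offerings change above (the field and type names here are illustrative, not necessarily the exact ones used in the codebase):
package cloudprovider

// Offering is one (zone, capacity type) combination in which an instance type
// can actually be launched; ICE caching can filter entries out of this slice.
type Offering struct {
    Zone         string // e.g. "us-west-2a"
    CapacityType string // e.g. "spot" or "on-demand"
}

// InstanceType describes an instance type once and carries all of its
// offerings, instead of being duplicated per zone/capacity-type permutation.
type InstanceType struct {
    Name      string
    Offerings []Offering
}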
I appreciate you including docs changes, it's helping me keep the thread.
|
gharchive/pull-request
| 2021-11-04T07:38:34 |
2025-04-01T04:56:06.574685
|
{
"authors": [
"eptiger",
"geoffcline"
],
"repo": "awslabs/karpenter",
"url": "https://github.com/awslabs/karpenter/pull/780",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1026915213
|
Cleanups
Some minor cleanups I ran into while working on the extables fixes.
Boilerplate disclamer follows:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Had a braino in the top commit, so dropped it.
|
gharchive/pull-request
| 2021-10-14T23:36:55 |
2025-04-01T04:56:06.576140
|
{
"authors": [
"minipli-oss"
],
"repo": "awslabs/ktf",
"url": "https://github.com/awslabs/ktf/pull/221",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1967863859
|
Fargate support
Tell us more about this new feature.
At present, mountpoint-s3 cannot be used with the Fargate runtime because it doesn't support privileged mode. There have been numerous discussions on s3-fargate integration over the past few years without any tangible efforts. Implementing this feature would greatly benefit many existing AWS customers. Are you planning to work on it?
Thanks for the feature request, Damian! I can definitely see the benefits of this integration.
I don't have anything to share on Fargate support right now. We are tracking interest for Fargate support in #450. I will close this issue in favor of that one.
|
gharchive/issue
| 2023-10-30T09:09:22 |
2025-04-01T04:56:06.578057
|
{
"authors": [
"dannycjones",
"kalupad"
],
"repo": "awslabs/mountpoint-s3",
"url": "https://github.com/awslabs/mountpoint-s3/issues/586",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
663976990
|
docs: Add security comment to ISSUE_TEMPLATE
Resolved issues:
Internal
Description of changes:
Copy the security reporting wording from the README to the issue template.
Also, Update to newer GitHub template format.
Call-outs:
There is the option of creating distinct issue templates for Bugs vs Features, for now, just went with Custom to reduce confusion and duplication.
Testing:
Added to my fork.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Codecov Report
Merging #2167 into master will decrease coverage by 0.00%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #2167 +/- ##
==========================================
- Coverage 81.09% 81.09% -0.01%
==========================================
Files 248 248
Lines 18497 18497
==========================================
- Hits 15001 15000 -1
- Misses 3496 3497 +1
Looks like you would have to rebase your PR to obtain the latest fix for sidetrail https://github.com/awslabs/s2n/pull/2170.
|
gharchive/pull-request
| 2020-07-22T18:54:35 |
2025-04-01T04:56:06.583077
|
{
"authors": [
"codecov-commenter",
"dougch",
"ttjsu-aws"
],
"repo": "awslabs/s2n",
"url": "https://github.com/awslabs/s2n/pull/2167",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1632157732
|
[Bug] TRE: deployment fails when deploying the TRE solution with the devops profile and shared AMI parameters
Describe the bug
When deploying the TRE solution with the devops profile and enableAmiSharing parameters, the deployment fails.
The Reason:
The TRE deployment creates 2 IAM roles that are required for AMI sharing with the devops account, but these roles are only created after the enableAmiSharing step runs, so that step fails because the IAM roles are missing.
Possible Resolution:
Create IAM Roles workflowLoopRunnerRole & apiHandlerRole before running the main/solution/prepare-devops-account
Workaround:
Deploy the TRE environment without the devops profile and enableAmiSharing parameters.
After a successful deployment, set the devops profile and enableAmiSharing = true parameters in the <stage.yml> and run the deploy again.
To Reproduce
Steps to reproduce the behavior:
Deploy environment from scratch :
Create new environment <stage.yml> set enableAmiSharing = true with devops profile
Run ./scripts/environment-deploy
See error
Expected behavior
The deployment prepares the devops account as part of the deployment.
Versions (please complete the following information):
Release Version installed [5.2.7]
Thank you for reaching out. Your request has been added to our backlog for research computing solutions. Our team curates these requests for fit with our solutions vision on a regular basis. Please watch this space for new updates. If you require immediate assistance, please reach out to your AWS account team. Please note that security issues should be reported directly to AWS Security at aws-security@amazon.com.
|
gharchive/issue
| 2023-03-20T13:50:00 |
2025-04-01T04:56:06.588693
|
{
"authors": [
"eldar557",
"kpark277"
],
"repo": "awslabs/service-workbench-on-aws",
"url": "https://github.com/awslabs/service-workbench-on-aws/issues/1156",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1874888771
|
fix: scripts not finding the right one
Description
Please include a summary of the changes and the related issue. Please also include relevant motivation and context. List any dependencies that are required for this change.
Fixes # (issue)
Type of change
Please delete options that are not relevant.
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
We identified new issues on unchanged lines of code. Navigate to the Amazon CodeGuru Reviewer console to view the recommendations to fix them.
|
gharchive/pull-request
| 2023-08-31T06:46:58 |
2025-04-01T04:56:06.591086
|
{
"authors": [
"NingLu",
"alvindaiyan"
],
"repo": "awslabs/stable-diffusion-aws-extension",
"url": "https://github.com/awslabs/stable-diffusion-aws-extension/pull/178",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2175890157
|
aws spring cloud 3.x: there is no queue stop timeout, which is creating problems by increasing build timeout.
Type: Feature
Is your feature request related to a problem? Please describe.
public void stop(String logicalQueueName) {
    stopQueue(logicalQueueName);
    try {
        if (isRunning(logicalQueueName)) {
            Future<?> future = this.scheduledFutureByQueue.remove(logicalQueueName);
            if (future != null) {
                future.get(this.queueStopTimeout, TimeUnit.MILLISECONDS);
            }
        }
    }
    catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    catch (ExecutionException | TimeoutException e) {
        getLogger().warn("Error stopping queue with name: '" + logicalQueueName + "'", e);
    }
}
There used to be a queueStopTimeout in the previous version that let us shut the queue down by interrupting the thread when the application shuts down. In v3:
@Override
public void stop() {
    if (!this.isRunning) {
        return;
    }
    logger.debug("Stopping container {}", this.id);
    synchronized (this.lifecycleMonitor) {
        this.isRunning = false;
        doStop();
    }
    logger.info("Container {} stopped", this.id);
}
Please advise whether this exists; if there is no such implementation, do you see a "hacky" way?
Another hacky thing I had to do: https://github.com/awspring/spring-cloud-aws/issues/1055
@maciejwalkowiak
@VaibhavTheVicar , please take a look at the ContainerOptions.
You'll find listenerShutdownTimeout and acknowledgementShutdownTimeout there which you can configure to achieve your goal. Set to Duration.ZERO if you don't want any wait.
Let me know if that works for you.
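For illustration, a sketch of how that configuration might look when building the listener container factory (assuming the builder-style setup from the Spring Cloud AWS 3.x docs; adapt to however your listeners are actually wired up):
import java.time.Duration;

import io.awspring.cloud.sqs.config.SqsMessageListenerContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.services.sqs.SqsAsyncClient;

@Configuration
public class SqsContainerConfig {

    // Sketch: shorten (or remove) the wait applied when listener containers stop,
    // so application shutdown is not held up by the default timeouts.
    @Bean
    SqsMessageListenerContainerFactory<Object> defaultSqsListenerContainerFactory(SqsAsyncClient sqsAsyncClient) {
        return SqsMessageListenerContainerFactory
                .builder()
                .sqsAsyncClient(sqsAsyncClient)
                .configure(options -> options
                        .listenerShutdownTimeout(Duration.ZERO)
                        .acknowledgementShutdownTimeout(Duration.ZERO))
                .build();
    }
}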
Closing due to lack of feedback.
|
gharchive/issue
| 2024-03-08T11:52:47 |
2025-04-01T04:56:06.595347
|
{
"authors": [
"VaibhavTheVicar",
"tomazfernandes"
],
"repo": "awspring/spring-cloud-aws",
"url": "https://github.com/awspring/spring-cloud-aws/issues/1077",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
852046371
|
events_unbound io_ring_exit_work task kworker blocked for more than 120 seconds
Follow up on the issue from #326 - but this is only triggered when SQPOLL is enabled.
This is running with
https://git.kernel.dk/cgit/linux-block/commit/?h=poll-multiple&id=283fc84eaeb1031e1f2389e4e365e44cd4398b9c
I have one ring that is only deallocated by process exit; the rest are "properly" closed.
[ 633.583870] ------------[ cut here ]------------
[ 633.583873] WARNING: CPU: 4 PID: 6485 at fs/io_uring.c:8611 io_ring_exit_work+0xe6/0x500
[ 633.583880] Modules linked in: binfmt_misc dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua isst_if_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd cryptd rapl ppdev parport_pc ena parport serio_raw i2c_piix4 sch_fq_codel drm i2c_core ip_tables x_tables autofs4
[ 633.583896] CPU: 4 PID: 6485 Comm: kworker/u16:77 Not tainted 5.12.0-rc6-5.13-uring-210407 #1
[ 633.583898] Hardware name: Amazon EC2 c5.2xlarge/, BIOS 1.0 10/16/2017
[ 633.583899] Workqueue: events_unbound io_ring_exit_work
[ 633.583902] RIP: 0010:io_ring_exit_work+0xe6/0x500
[ 633.583904] Code: 8b bb f0 fd ff ff e8 99 7d ff ff 4c 8d ab 40 fe ff ff 31 d2 31 f6 4c 89 e7 e8 b6 e9 ff ff 48 8b 05 7f 03 29 01 49 39 c7 79 02 <0f> 0b be 0c 00 00 00 4c 89 ef e8 5b 96 7c 00 48 85 c0 74 d4 4c 8d
[ 633.583906] RSP: 0018:ffffb30e01bebdd0 EFLAGS: 00010297
[ 633.583908] RAX: 00000001000144f7 RBX: ffff97d63d8095a8 RCX: 0000000000000008
[ 633.583909] RDX: 0000000000000001 RSI: ffff97d63d809088 RDI: ffff97d63d8095a0
[ 633.583910] RBP: ffffb30e01bebe50 R08: 00000093848a6640 R09: 0000000000000000
[ 633.583911] R10: ffffffff9126bfa0 R11: 0000000000000000 R12: ffff97d63d809000
[ 633.583912] R13: ffff97d63d8093e8 R14: 0000000000000000 R15: 00000001000144f3
[ 633.583913] FS: 0000000000000000(0000) GS:ffff97d90bd00000(0000) knlGS:0000000000000000
[ 633.583915] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 633.583916] CR2: 000000c00033d000 CR3: 0000000115010004 CR4: 00000000007706e0
[ 633.583919] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 633.583920] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 633.583920] PKRU: 55555554
[ 633.583921] Call Trace:
[ 633.583925] ? __switch_to+0x192/0x460
[ 633.583930] ? __switch_to_asm+0x36/0x70
[ 633.583934] process_one_work+0x220/0x3c0
[ 633.583938] worker_thread+0x4d/0x3f0
[ 633.583940] kthread+0x114/0x150
[ 633.583943] ? process_one_work+0x3c0/0x3c0
[ 633.583945] ? kthread_park+0x90/0x90
[ 633.583947] ret_from_fork+0x22/0x30
[ 633.583950] ---[ end trace 5225fbee45c60487 ]---
[ 2840.166730] show_signal: 36 callbacks suppressed
[ 2840.166735] traps: swift-nioPackag[143299] trap invalid opcode ip:7f1d15b3c7d2 sp:7ffc59b93ea0 error:0 in libswiftCore.so[7f1d159ea000+525000]
[ 3626.789652] INFO: task kworker/u16:15:5478 blocked for more than 120 seconds.
[ 3626.792583] Tainted: G W 5.12.0-rc6-5.13-uring-210407 #1
[ 3626.795536] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 3626.798758] task:kworker/u16:15 state:D stack: 0 pid: 5478 ppid: 2 flags:0x00004000
[ 3626.798763] Workqueue: events_unbound io_ring_exit_work
[ 3626.798769] Call Trace:
[ 3626.798772] __schedule+0x2de/0x890
[ 3626.798777] ? check_preempt_wakeup+0xfd/0x210
[ 3626.798781] schedule+0x4f/0xc0
[ 3626.798783] schedule_timeout+0x202/0x290
[ 3626.798786] ? ttwu_do_activate+0xb5/0x3f0
[ 3626.798789] wait_for_completion+0x94/0x100
[ 3626.798792] io_ring_exit_work+0x18c/0x500
[ 3626.798793] ? io_uring_del_task_file+0xc0/0xc0
[ 3626.798798] process_one_work+0x220/0x3c0
[ 3626.798801] worker_thread+0x4d/0x3f0
[ 3626.798803] kthread+0x114/0x150
[ 3626.798805] ? process_one_work+0x3c0/0x3c0
[ 3626.798807] ? kthread_park+0x90/0x90
[ 3626.798809] ret_from_fork+0x22/0x30
[ 3626.798816] INFO: task kworker/u16:29:5506 blocked for more than 120 seconds.
[ 3626.801729] Tainted: G W 5.12.0-rc6-5.13-uring-210407 #1
[ 3626.804603] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 3626.807777] task:kworker/u16:29 state:D stack: 0 pid: 5506 ppid: 2 flags:0x00004000
[ 3626.807779] Workqueue: events_unbound io_ring_exit_work
[ 3626.807782] Call Trace:
[ 3626.807783] __schedule+0x2de/0x890
[ 3626.807786] ? enqueue_entity+0x115/0x660
[ 3626.807789] schedule+0x4f/0xc0
[ 3626.807791] schedule_timeout+0x202/0x290
[ 3626.807793] ? ttwu_do_activate+0xb5/0x3f0
[ 3626.807797] wait_for_completion+0x94/0x100
[ 3626.807799] io_ring_exit_work+0x18c/0x500
[ 3626.807803] ? io_uring_del_task_file+0xc0/0xc0
[ 3626.807806] process_one_work+0x220/0x3c0
[ 3626.807808] worker_thread+0x4d/0x3f0
[ 3626.807811] kthread+0x114/0x150
[ 3626.807813] ? process_one_work+0x3c0/0x3c0
[ 3626.807815] ? kthread_park+0x90/0x90
[ 3626.807817] ret_from_fork+0x22/0x30
[ 3626.807835] INFO: task kworker/u16:98:138912 blocked for more than 120 seconds.
[ 3626.810838] Tainted: G W 5.12.0-rc6-5.13-uring-210407 #1
[ 3626.813798] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 3626.816976] task:kworker/u16:98 state:D stack: 0 pid:138912 ppid: 2 flags:0x00004000
[ 3626.816978] Workqueue: events_unbound io_ring_exit_work
[ 3626.816980] Call Trace:
[ 3626.816981] __schedule+0x2de/0x890
[ 3626.816983] ? enqueue_entity+0x115/0x660
[ 3626.816986] schedule+0x4f/0xc0
[ 3626.816987] schedule_timeout+0x202/0x290
[ 3626.816989] ? ttwu_do_activate+0xb5/0x3f0
[ 3626.816992] wait_for_completion+0x94/0x100
[ 3626.816994] io_ring_exit_work+0x18c/0x500
[ 3626.816996] ? io_uring_del_task_file+0xc0/0xc0
[ 3626.816999] process_one_work+0x220/0x3c0
[ 3626.817000] worker_thread+0x4d/0x3f0
[ 3626.817002] kthread+0x114/0x150
[ 3626.817004] ? process_one_work+0x3c0/0x3c0
[ 3626.817005] ? kthread_park+0x90/0x90
[ 3626.817007] ret_from_fork+0x22/0x30
[ 3626.817015] INFO: task kworker/u16:129:227858 blocked for more than 120 seconds.
[ 3626.820039] Tainted: G W 5.12.0-rc6-5.13-uring-210407 #1
[ 3626.822968] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 3626.826156] task:kworker/u16:129 state:D stack: 0 pid:227858 ppid: 2 flags:0x00004000
[ 3626.826159] Workqueue: events_unbound io_ring_exit_work
[ 3626.826161] Call Trace:
[ 3626.826162] __schedule+0x2de/0x890
[ 3626.826164] ? enqueue_entity+0x115/0x660
[ 3626.826168] schedule+0x4f/0xc0
[ 3626.826170] schedule_timeout+0x202/0x290
[ 3626.826172] ? ttwu_do_activate+0xb5/0x3f0
[ 3626.826176] wait_for_completion+0x94/0x100
[ 3626.826179] io_ring_exit_work+0x18c/0x500
[ 3626.826182] ? io_uring_del_task_file+0xc0/0xc0
[ 3626.826185] process_one_work+0x220/0x3c0
[ 3626.826187] worker_thread+0x4d/0x3f0
[ 3626.826189] kthread+0x114/0x150
[ 3626.826191] ? process_one_work+0x3c0/0x3c0
[ 3626.826192] ? kthread_park+0x90/0x90
[ 3626.826194] ret_from_fork+0x22/0x30
[ 3626.826200] INFO: task kworker/u16:145:240639 blocked for more than 120 seconds.
[ 3626.829201] Tainted: G W 5.12.0-rc6-5.13-uring-210407 #1
[ 3626.833454] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 3626.839410] task:kworker/u16:145 state:D stack: 0 pid:240639 ppid: 2 flags:0x00004000
[ 3626.839413] Workqueue: events_unbound io_ring_exit_work
[ 3626.839415] Call Trace:
[ 3626.839416] __schedule+0x2de/0x890
[ 3626.839418] ? enqueue_entity+0x115/0x660
[ 3626.839420] schedule+0x4f/0xc0
[ 3626.839422] schedule_timeout+0x202/0x290
[ 3626.839424] ? ttwu_do_activate+0xb5/0x3f0
[ 3626.839427] wait_for_completion+0x94/0x100
[ 3626.839429] io_ring_exit_work+0x18c/0x500
[ 3626.839431] ? io_uring_del_task_file+0xc0/0xc0
[ 3626.839433] process_one_work+0x220/0x3c0
[ 3626.839435] worker_thread+0x4d/0x3f0
[ 3626.839436] kthread+0x114/0x150
[ 3626.839439] ? process_one_work+0x3c0/0x3c0
[ 3626.839440] ? kthread_park+0x90/0x90
[ 3626.839442] ret_from_fork+0x22/0x30
[ 3626.839446] INFO: task kworker/u16:157:262491 blocked for more than 120 seconds.
[ 3626.845169] Tainted: G W 5.12.0-rc6-5.13-uring-210407 #1
[ 3626.849467] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 3626.855352] task:kworker/u16:157 state:D stack: 0 pid:262491 ppid: 2 flags:0x00004000
[ 3626.855354] Workqueue: events_unbound io_ring_exit_work
[ 3626.855357] Call Trace:
[ 3626.855358] __schedule+0x2de/0x890
[ 3626.855362] ? enqueue_entity+0x115/0x660
[ 3626.855365] schedule+0x4f/0xc0
[ 3626.855367] schedule_timeout+0x202/0x290
[ 3626.855369] ? ttwu_do_activate+0xb5/0x3f0
[ 3626.855374] wait_for_completion+0x94/0x100
[ 3626.855376] io_ring_exit_work+0x18c/0x500
[ 3626.855378] ? io_uring_del_task_file+0xc0/0xc0
[ 3626.855381] process_one_work+0x220/0x3c0
[ 3626.855384] worker_thread+0x4d/0x3f0
[ 3626.855386] kthread+0x114/0x150
[ 3626.855389] ? process_one_work+0x3c0/0x3c0
[ 3626.855390] ? kthread_park+0x90/0x90
[ 3626.855392] ret_from_fork+0x22/0x30
[ 3626.855397] INFO: task kworker/u16:169:274169 blocked for more than 120 seconds.
[ 3626.861101] Tainted: G W 5.12.0-rc6-5.13-uring-210407 #1
[ 3626.865360] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 3626.871249] task:kworker/u16:169 state:D stack: 0 pid:274169 ppid: 2 flags:0x00004000
[ 3626.871252] Workqueue: events_unbound io_ring_exit_work
[ 3626.871254] Call Trace:
[ 3626.871255] __schedule+0x2de/0x890
[ 3626.871257] ? check_preempt_wakeup+0xfd/0x210
[ 3626.871259] schedule+0x4f/0xc0
[ 3626.871261] schedule_timeout+0x202/0x290
[ 3626.871263] ? ttwu_do_activate+0xb5/0x3f0
[ 3626.871265] wait_for_completion+0x94/0x100
[ 3626.871267] io_ring_exit_work+0x18c/0x500
[ 3626.871269] ? io_uring_del_task_file+0xc0/0xc0
[ 3626.871272] process_one_work+0x220/0x3c0
[ 3626.871273] worker_thread+0x4d/0x3f0
[ 3626.871275] kthread+0x114/0x150
[ 3626.871277] ? process_one_work+0x3c0/0x3c0
[ 3626.871278] ? kthread_park+0x90/0x90
[ 3626.871280] ret_from_fork+0x22/0x30
[ 3626.871283] INFO: task kworker/u16:171:274174 blocked for more than 120 seconds.
[ 3626.877023] Tainted: G W 5.12.0-rc6-5.13-uring-210407 #1
[ 3626.881287] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 3626.887192] task:kworker/u16:171 state:D stack: 0 pid:274174 ppid: 2 flags:0x00004000
[ 3626.887196] Workqueue: events_unbound io_ring_exit_work
[ 3626.887198] Call Trace:
[ 3626.887199] __schedule+0x2de/0x890
[ 3626.887201] ? check_preempt_wakeup+0xfd/0x210
[ 3626.887203] schedule+0x4f/0xc0
[ 3626.887207] schedule_timeout+0x202/0x290
[ 3626.887209] ? ttwu_do_activate+0xb5/0x3f0
[ 3626.887212] wait_for_completion+0x94/0x100
[ 3626.887216] io_ring_exit_work+0x18c/0x500
[ 3626.887218] ? io_uring_del_task_file+0xc0/0xc0
[ 3626.887221] process_one_work+0x220/0x3c0
[ 3626.887222] worker_thread+0x4d/0x3f0
[ 3626.887225] kthread+0x114/0x150
[ 3626.887228] ? process_one_work+0x3c0/0x3c0
[ 3626.887229] ? kthread_park+0x90/0x90
[ 3626.887232] ret_from_fork+0x22/0x30
[ 3626.887236] INFO: task iou-sqp-278149:278150 blocked for more than 120 seconds.
[ 3626.892947] Tainted: G W 5.12.0-rc6-5.13-uring-210407 #1
[ 3626.897266] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 3626.903175] task:iou-sqp-278149 state:D stack: 0 pid:278150 ppid: 1 flags:0x00004002
[ 3626.903177] Call Trace:
[ 3626.903178] __schedule+0x2de/0x890
[ 3626.903180] schedule+0x4f/0xc0
[ 3626.903182] io_uring_cancel_sqpoll+0xdb/0x110
[ 3626.903184] ? wait_woken+0x80/0x80
[ 3626.903187] io_sqpoll_cancel_cb+0x24/0x30
[ 3626.903188] io_run_task_work_head+0x28/0x50
[ 3626.903191] io_sq_thread+0x4ef/0x720
[ 3626.903194] ? wait_woken+0x80/0x80
[ 3626.903195] ? recalc_sigpending+0x1c/0x60
[ 3626.903198] ? io_submit_sqes+0x14c0/0x14c0
[ 3626.903200] ret_from_fork+0x22/0x30
[ 3626.903204] INFO: task NIO-ELT-1-#0:278151 blocked for more than 120 seconds.
[ 3626.907496] Tainted: G W 5.12.0-rc6-5.13-uring-210407 #1
[ 3626.911758] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 3626.917652] task:NIO-ELT-1-#0 state:D stack: 0 pid:278151 ppid: 1 flags:0x00024000
[ 3626.917654] Call Trace:
[ 3626.917655] __schedule+0x2de/0x890
[ 3626.917657] ? free_pages_and_swap_cache+0xb9/0xd0
[ 3626.917662] schedule+0x4f/0xc0
[ 3626.917664] schedule_timeout+0x202/0x290
[ 3626.917667] wait_for_completion+0x94/0x100
[ 3626.917669] __io_uring_files_cancel+0x17d/0x2e0
[ 3626.917672] ? blk_finish_plug+0x26/0x40
[ 3626.917676] ? io_uring_cancel_sqpoll+0x110/0x110
[ 3626.917678] do_exit+0xc0/0xaf0
[ 3626.917681] ? exit_to_user_mode_prepare+0x3d/0x1a0
[ 3626.917685] __x64_sys_exit+0x1b/0x20
[ 3626.917687] do_syscall_64+0x38/0x90
[ 3626.917690] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 3626.917695] RIP: 0033:0x7f1b2570a6c6
[ 3626.917697] RSP: 002b:00007f1b22077e80 EFLAGS: 00000246 ORIG_RAX: 000000000000003c
[ 3626.917699] RAX: ffffffffffffffda RBX: 00007f1b22078700 RCX: 00007f1b2570a6c6
[ 3626.917700] RDX: 000000000000003c RSI: 00000000007fb000 RDI: 0000000000000000
[ 3626.917701] RBP: 00007f1b21878000 R08: 0000000000000006 R09: 0000000100000000
[ 3626.917702] R10: 00007f1b257054e0 R11: 0000000000000246 R12: 00007ffd3c3f149e
[ 3626.917704] R13: 00007ffd3c3f149f R14: 00007ffd3c3f14a0 R15: 00007f1b22077f40
I was thinking it's because of the not-yet-applied https://github.com/isilence/linux/commit/42c74362eb3997a543b648260fe2ad8d2266b786, but actually it looks different.
There is a very similar syzbot report; it makes sense to solve that first. I'll let you know when it's sorted out.
Ok, super - let me know and I'll retest then.
What I see in logs:
userspace task1: exit() -> cancel -> waits sqpoll1 to cancel (via task work)
sqpoll1: io_uring_cancel_sqpoll() -> hangs in req cancel (probably, unable to cancel some requests)
exit_work1: short on refs as well, maybe same reason as for sqpoll1
exit_workN: waits for some task to task_work_run(), maybe for userspace task1
It may very well be a poll request we forgot to put somewhere. I need to take care of 5.12 first, but can you run a branch for some debug output? It's on top of Jens' patch. Logs may overflow, so it's better to dmesg > file from the beginning of a run.
https://github.com/isilence/linux/commits/poll_bugs_test
Sure, here is the output from the run - I didn't get the usual bells in the console, but there is info in the output about the usual hangs (and basically each process just hung hard, taking the shell with it, when the process tried to exit, 100%). For the record, run with:
commit ed68098faf1f661231a7e2cc3485861e09425baa (HEAD -> poll_bugs_test, origin/poll_bugs_test)
dmesg.txt
@hassila, do you poll pipes or std[in,out,err]? If so, do you have poll requests having both read and write in its poll mask?
Sockets - read, write and both at times - it's a big test suite...
@hassila, can you try the new branch below? The last patch, which you were also CC'ed on and which was sent by email, fixes one kind of SQPOLL-related hang. Let's see if it's yours.
https://github.com/isilence/linux/tree/sqpoll_hangs_tests
Nope, not quite - this time processes get stuck with CPUs running; attaching screenshot and dmesg output.
dmesg.txt
Are you sure you used the right kernel version? That branch doesn't have debug printk()'s like the one below.
What's uname -r?
[ 605.598521] p 0000000000000000 sqd 0000000000000000 ctx 0000000023460f9d | exit work hang 1
Ouch. Sorry, mea culpa. I had two lines in my grub config to easily switch between kernels and managed to update the commented out one... Instance rebooting now, retest in progress.
Ok, now with the right kernel - no CPU usage this time, but my processes hang on exit. dmesg:
dmesg.txt
And missing from that dmesg.txt - this came right after:
[ 364.310769] R13: 00007ffd338fe2ff R14: 00007ffd338fe300 R15: 00007f0cb4807f40
[ 502.613079] ------------[ cut here ]------------
[ 502.613082] WARNING: CPU: 2 PID: 8 at fs/io_uring.c:8592 io_ring_exit_work.cold+0x0/0x16
[ 502.613089] Modules linked in: binfmt_misc dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua isst_if_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd cryptd rapl ppdev parport_pc parport ena i2c_piix4 serio_raw sch_fq_codel drm i2c_core ip_tables x_tables autofs4
[ 502.613104] CPU: 2 PID: 8 Comm: kworker/u16:0 Not tainted 5.12.0-rc6-5.13-uring-210413 #1
[ 502.613106] Hardware name: Amazon EC2 c5.2xlarge/, BIOS 1.0 10/16/2017
[ 502.613107] Workqueue: events_unbound io_ring_exit_work
[ 502.613111] RIP: 0010:io_ring_exit_work.cold+0x0/0x16
[ 502.613114] Code: a7 6a ff ff 48 83 c8 ff e9 7b 31 87 ff 48 c7 c7 90 8b d8 b0 c6 05 c4 af e0 00 01 e8 8b 6a ff ff b8 02 00 00 00 e9 f7 4e 87 ff <0f> 0b 48 c7 c7 00 96 d8 b0 4c 89 e6 e8 70 6a ff ff e9 21 43 88 ff
[ 502.613115] RSP: 0018:ffffa63f8004bdd0 EFLAGS: 00010283
[ 502.613117] RAX: 000000010000c531 RBX: ffff904ed5fd5df0 RCX: 0000000000000009
[ 502.613119] RDX: 0000000000000001 RSI: ffff904ed5fd58c8 RDI: ffff904ed5fd5de8
[ 502.613120] RBP: ffffa63f8004be50 R08: 0000000000000000 R09: 0000000000000000
[ 502.613121] R10: 000000000000000f R11: 0000000000000000 R12: ffff904ed5fd5800
[ 502.613121] R13: ffff904ed5fd5c30 R14: 0000000000000000 R15: 000000010000c52c
[ 502.613123] FS: 0000000000000000(0000) GS:ffff9051cbc80000(0000) knlGS:0000000000000000
[ 502.613124] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 502.613125] CR2: 00007faab237d9dc CR3: 0000000107c80002 CR4: 00000000007706e0
[ 502.613128] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 502.613129] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 502.613130] PKRU: 55555554
[ 502.613131] Call Trace:
[ 502.613135] ? __switch_to+0x192/0x460
[ 502.613140] ? __switch_to_asm+0x36/0x70
[ 502.613143] process_one_work+0x220/0x3c0
[ 502.613147] worker_thread+0x4d/0x3f0
[ 502.613149] kthread+0x114/0x150
[ 502.613152] ? process_one_work+0x3c0/0x3c0
[ 502.613154] ? kthread_park+0x90/0x90
[ 502.613156] ret_from_fork+0x22/0x30
[ 502.613159] ---[ end trace ff7f99fcda4828df ]---
[ 502.613160] task failed NIO-ELT-0-#3
[ 502.665076] task failed NIO-ELT-0-#3
[ 502.717070] task failed NIO-ELT-0-#3
[ 502.769067] task failed NIO-ELT-0-#3
[ 502.821066] task failed NIO-ELT-0-#3
[ 502.873065] task failed NIO-ELT-0-#3
[ 502.925064] task failed NIO-ELT-0-#3
[ 502.977064] task failed NIO-ELT-0-#3
[ 503.029062] task failed NIO-ELT-0-#3
[ 503.081061] task failed NIO-ELT-0-#3
[ 503.133060] task failed NIO-ELT-0-#3
[ 503.185058] task failed NIO-ELT-0-#3
[ 503.237059] task failed NIO-ELT-0-#3
[ 503.289056] task failed NIO-ELT-0-#3
[ 503.341055] task failed NIO-ELT-0-#3
[ 503.393055] task failed NIO-ELT-0-#3
[ 503.445053] task failed NIO-ELT-0-#3
[ 503.497052] task failed NIO-ELT-0-#3
[ 503.549052] task failed NIO-ELT-0-#3
[ 503.601050] task failed NIO-ELT-0-#3
[ 503.653048] task failed NIO-ELT-0-#3
[ 503.705047] task failed NIO-ELT-0-#3
[ 503.757046] task failed NIO-ELT-0-#3
[ 503.809045] task failed NIO-ELT-0-#3
[ 503.861045] task failed NIO-ELT-0-#3
[ 503.913043] task failed NIO-ELT-0-#3
[ 503.965042] task failed NIO-ELT-0-#3
[ 504.017041] task failed NIO-ELT-0-#3
[ 504.069039] task failed NIO-ELT-0-#3
[ 504.121039] task failed NIO-ELT-0-#3
[ 504.173038] task failed NIO-ELT-0-#3
[ 504.225036] task failed NIO-ELT-0-#3
[ 504.277034] task failed NIO-ELT-0-#3
[ 504.329033] task failed NIO-ELT-0-#3
[ 504.381032] task failed NIO-ELT-0-#3
[ 504.433031] task failed NIO-ELT-0-#3
[ 504.485030] task failed NIO-ELT-0-#3
[ 504.537028] task failed NIO-ELT-0-#3
[ 504.589027] task failed NIO-ELT-0-#3
[ 504.641026] task failed NIO-ELT-0-#3
[ 504.693027] task failed NIO-ELT-0-#3
[ 504.745024] task failed NIO-ELT-0-#3
[ 504.797024] task failed NIO-ELT-0-#3
[ 504.849023] task failed NIO-ELT-0-#3
[ 504.901022] task failed NIO-ELT-0-#3
[ 504.953021] task failed NIO-ELT-0-#3
[ 505.005018] task failed NIO-ELT-0-#3
[ 505.057017] task failed NIO-ELT-0-#3
[ 505.109014] task failed NIO-ELT-0-#3
[ 505.161014] task failed NIO-ELT-0-#3
[ 505.213014] task failed NIO-ELT-0-#3
[ 505.265012] task failed NIO-ELT-0-#3
[ 505.317011] task failed NIO-ELT-0-#3
[ 505.369010] task failed NIO-ELT-0-#3
[ 505.421010] task failed NIO-ELT-0-#3
[ 505.473008] task failed NIO-ELT-0-#3
[ 505.525007] task failed NIO-ELT-0-#3
[ 505.577006] task failed NIO-ELT-0-#3
[ 505.629006] task failed NIO-ELT-0-#3
[ 505.681006] task failed NIO-ELT-0-#3
[ 505.733004] task failed NIO-ELT-0-#3
[ 505.785003] task failed NIO-ELT-0-#3
[ 505.837001] task failed NIO-ELT-0-#3
[ 505.889000] task failed NIO-ELT-0-#3
[ 505.941000] task failed NIO-ELT-0-#3
[ 505.992998] task failed NIO-ELT-0-#3
[ 506.044997] task failed NIO-ELT-0-#3
[ 506.096996] task failed NIO-ELT-0-#3
[ 506.148994] task failed NIO-ELT-0-#3
... repeating ...
Blood, how many of them are in there... Don't worry, 5.13 hasn't even started yet, we'll get it nailed; I was focusing on things that are more time-critical.
@hassila, the last patch set you've been CC'ed on did go in, and the problem it solves looks pretty much like yours. Can you test it out?
https://git.kernel.dk/cgit/linux-block/log/?h=for-5.13/io_uring
Sure @isilence - tomorrow or Thursday latest I think - will let you know.
Perfect, thanks Joakim
Ok, I've now run tests for 20 minutes hammering it using f2a48dd09b8e933f59570692e1382b81d4fddc49 - it's been rock solid with no dmesg output. It definitely would have had issues with earlier versions, so it seems you nailed it, well done! I think we can close this. Thanks!
|
gharchive/issue
| 2021-04-07T06:11:53 |
2025-04-01T04:56:06.627089
|
{
"authors": [
"hassila",
"isilence"
],
"repo": "axboe/liburing",
"url": "https://github.com/axboe/liburing/issues/327",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
962266397
|
utilizing wq_fd together with IORING_SETUP_ATTACH_WQ returns EINVAL
We are trying to utilize multiple threads to submit and reap I/O events. As suggested, we initialize the first ring as normal and it works. But when we set wq_fd together with IORING_SETUP_ATTACH_WQ on the second ring, it returns the error EINVAL. Does this feature even work at the moment?
#define IO_URING_DEPTH 4
struct io_uring_params params;
memset(&params, 0, sizeof(params));
params.flags |= IORING_SETUP_SQPOLL;
params.sq_thread_idle = __SQ_THRD_IDLE;
if(__first_ring_fd_set) {
/* this allows multiple user io_urings to utilize the same async
* backend. Thus it should utilize the same SQLPOLL thread as well */
params.wq_fd = cntxt->ring.ring_fd;
params.flags = IORING_SETUP_ATTACH_WQ;
}
if((rc = io_uring_queue_init_params(IO_URING_QDEPTH, &cntxt->ring, &params))!=0) {
rc = -1;
goto cleanup;
}
if((rc = io_uring_register_files(&cntxt->ring, &fd, 1))!=0) {
rc = -1;
goto cleanup;
}
What kernel? And are the rings created from the same task?
Thanks for the response. This is being done on Linux 5.4 on Amazon Linux (AWS).
5.4 is too old to support that feature. Basically -EINVAL here means "some flag you set in the ring setup is not supported".
What is the oldest Linux version that supports that feature? We really need to get this working as soon as we can.
5.6 would be the oldest, but honestly if you're swapping kernels, I'd recommend using the latest stable. Even 5.6 is pretty old in that regard, that's almost 2 years ago.
We can utilize Linux kernel 5.10. Hopefully it will work well enough there. Do we have to download a specific version of the liburing code for that kernel? Or can we simply download the most recent version of liburing, and it will know that we are working on kernel 5.10 and support the features that it can?
Basically, we create the first ring single-threaded and the remaining rings are created concurrently... hopefully there is no issue with concurrently creating/destroying rings. If there is, we can isolate those code points as they are quite infrequent.
regards
5.10 seems reasonable. The current release of liburing will work fine, there are really no library dependencies for this feature.
Thanks so much for the quick response. We will download the most recent version of liburing from GitHub... and I will close this issue.
I spent the whole of yesterday upgrading to Linux 5.10 in AWS, and I also downloaded the latest version of liburing from GitHub and rebuilt everything from scratch, and I still get EINVAL when utilizing wq_fd together with IORING_SETUP_ATTACH_WQ.
The code is as indicated in the first comment. And this is what I see with uname -r:
5.10.50-44.132.amzn2.x86_64
We are doing this in Docker containers. However, the host is also running the same kernel. Is it possible that this is not working because we are utilizing Docker containers?
Any help is greatly appreciated.
Could very well be that it's not working since it's in a Docker container, I really have very little idea about that. I'd recommend cloning the liburing repo and building it all, then running test/shared-wq from there. That's a basic functionality test. If that works, then you are likely doing something wrong. If it doesn't, then likely Docker is doing something funky.
It sounds strange that it should be an issue. We are also utilizing this in combination with IORING_SETUP_SQPOLL.
Is it known to work together with IORING_SETUP_ATTACH_WQ?
Alright, I will rebuild and run. So far we were running in Docker, and without ATTACH_WQ some of our tests pass.
Yes, it'll work with SQPOLL, in fact liburing also has a test case for that, test/sq-poll-share
When rebuilding liburing I do run into the following problem:
io_uring_register.c:509:10: warning: implicit declaration of function 'memfd_create'; did you mean timer_create'? [-Wimplicit-function-declaration]
memfd = memfd_create("uring-shmem-test", 0);
^~~~~~~~~~~~
timer_create
/tmp/cc91MvfX.o: In function `test_shmem':
After a bit of looking around, it seems that including <syscall.h> together with utilizing SYS_memfd_create makes it work, and additionally all the tests appear to pass. I ran:
source ./test/runtests.sh
So the tests seem to pass ...
So inside the Docker container, all the tests pass. But when we run this in the product, it fails with EINVAL. Is there any way we can debug this further to see what may be causing it to return EINVAL? Is there some system call it is attempting which returns EINVAL? If so, we can poke around to see what may be causing it to fail.
Should we fall back on strace to see why it failed? Let me try that.
Looking at your example:
if(__first_ring_fd_set) {
/* this allows multiple user io_urings to utilize the same async
* backend. Thus it should utilize the same SQLPOLL thread as well */
params.wq_fd = cntxt->ring.ring_fd;
params.flags = IORING_SETUP_ATTACH_WQ;
}
if((rc = io_uring_queue_init_params(IO_URING_QDEPTH, &cntxt->ring, &params))!=0) {
rc = -1;
goto cleanup;
}
it's not clear to me what is being set in params->wq_fd - just to be clear, that one should be set to the ring fd of the original ring, and I can't tell if that's what you're doing as cnxt appears to be the new ring being registered. But it could just be that you're not including all of the code, and cnxt is allocated after that setting and it's the original ring to begin with?
In any case, if the test cases work, then you're doing something wrong in your code. -EINVAL is a horrible error code, but there's really nothing else that covers it. It'll get set for any invalid field in the params or setup.
io_uring_queue_init_params() boils down to an io_uring_setup(), and then later ring mmaps if that succeeds. If you strace, you're most likely seeing io_uring_setup() return -EINVAL due to some invalid parameter.
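For reference, a minimal sketch of the corrected two-ring setup might look like the following. It is not the reporter's actual code: the helper name and queue depth are illustrative, and it assumes the first ring was already initialized with IORING_SETUP_SQPOLL on a kernel that supports IORING_SETUP_ATTACH_WQ (5.6+, ideally 5.10+ as discussed above).
#include <liburing.h>
#include <string.h>
/* Attach 'second' to the async backend of 'first'.
 * Returns 0 on success, a negative errno on failure. */
int setup_attached_ring(struct io_uring *first, struct io_uring *second,
                        unsigned sq_thread_idle_ms)
{
    struct io_uring_params params;
    memset(&params, 0, sizeof(params));
    /* OR the flags together; overwriting them would drop IORING_SETUP_SQPOLL. */
    params.flags = IORING_SETUP_SQPOLL | IORING_SETUP_ATTACH_WQ;
    params.sq_thread_idle = sq_thread_idle_ms;
    /* wq_fd must be the fd of the ORIGINAL ring, not the ring being created. */
    params.wq_fd = first->ring_fd;
    return io_uring_queue_init_params(8, second, &params);
}
The two details that most commonly produce -EINVAL here are overwriting params.flags instead of OR-ing IORING_SETUP_ATTACH_WQ into them, and passing the fd of the new ring rather than the original one in wq_fd.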
You are right about that one. That is a copy paste error on my part. Sorry to bother you. Let me try to address that one.
Thanks so much for your help. I'm really sorry for taking up your time for what is my bug. Hopefully this fix will work. Regards.
It's Friday, wasting a bit of time on this is healthier than drinking beer ;-)
We are hoping that SQPOLL will set up one thread which will be polling for the SQEs. Our application, on the other hand, has potentially 100 threads, each of which has a tiny SQE/CQE ring of length 4/8. All threads are actively submitting and reaping... hopefully we should be able to leverage the random I/O aspects of SSDs with that. If you are interested, I can keep you posted on any performance reads we see.
We are looking at doing random i/o on massive databases. In the ballpark of a few TBs.
You have done a great service to the community by providing this library. Incidentally, we also intend to look closely at SPDK, which utilizes a user-mode polling model. In some usage scenarios that might offer some further advantages.
Definitely, keep us updated. If completion polling ends up being interesting as well, you can use IOPOLL with SQPOLL, fwiw.
Sorry, I read some articles which indicate that IOPOLL should not be combined with SQPOLL... I don't know what reasoning was provided... If you search for "io_uring samsung performance" on Google, you will find the PDF document that talks about it... It also compares it with SPDK.
Don't know why they suggest not combining IOPOLL with SQPOLL.
Below is the link; they suggest not to utilize IOPOLL with SQPOLL:
https://www.usenix.org/sites/default/files/conference/protected-files/vault20_slides_lund.pdf
After the fix I get the error ENXIO... probably some incorrect fd was sent in... let me check...
Here is what I am finding about ENXIO:
The test sq-poll-share.c does not utilize io_uring_register_files() to register fds. From what I can see, we have to utilize this API in order to register files for SQPOLL. If we do not utilize this API, we can't even make the most basic test work. So there is something amiss.
Incidentally, we have a separate test that utilizes SQPOLL and it works very well, and we follow nearly the same steps.
It's when we try to go multi-threaded that we run into issues. At the moment I am getting the error ENXIO when invoking
io_uring_queue_init_params().
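As a side note on the registered-files point above, here is a minimal, illustrative sketch of using fixed (registered) files with SQPOLL; the function name, queue depth and idle time are assumptions for illustration, not taken from this project. The key detail is that on kernels before 5.11, submissions on an SQPOLL ring must reference registered files by index with IOSQE_FIXED_FILE set.
#include <fcntl.h>
#include <liburing.h>
/* Read 'len' bytes from 'path' through an SQPOLL ring using a registered file.
 * Returns the CQE result (bytes read or a negative errno), or -1 on setup failure. */
int sqpoll_fixed_file_read(const char *path, char *buf, unsigned len)
{
    struct io_uring ring;
    struct io_uring_params params = {0};
    struct io_uring_sqe *sqe;
    struct io_uring_cqe *cqe;
    int fd, res;
    params.flags = IORING_SETUP_SQPOLL;
    params.sq_thread_idle = 2000;      /* ms before the SQ thread goes to sleep */
    if (io_uring_queue_init_params(8, &ring, &params) < 0)
        return -1;
    fd = open(path, O_RDONLY);
    if (fd < 0 || io_uring_register_files(&ring, &fd, 1) < 0)
        return -1;
    sqe = io_uring_get_sqe(&ring);
    /* With registered files, the fd argument is the index into the table, not the raw fd. */
    io_uring_prep_read(sqe, 0, buf, len, 0);
    sqe->flags |= IOSQE_FIXED_FILE;    /* required with SQPOLL before kernel 5.11 */
    io_uring_submit(&ring);
    io_uring_wait_cqe(&ring, &cqe);
    res = cqe->res;
    io_uring_cqe_seen(&ring, cqe);
    io_uring_queue_exit(&ring);
    return res;
}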
Closing the issue for now. It looks like the sq-poll-share.c test is not working on Linux for us. We have to understand that, and hopefully that will resolve this issue.
|
gharchive/issue
| 2021-08-05T23:22:14 |
2025-04-01T04:56:06.647152
|
{
"authors": [
"akseg73",
"axboe"
],
"repo": "axboe/liburing",
"url": "https://github.com/axboe/liburing/issues/398",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2148129946
|
link to axelarscan when deploying tokens
When deploying a token, link to axelarscan through the Initiated/Confirmed status, instead of just the top right buttons
Not sure if I understand this correctly, but we already have multiple links to axelarscan. Feel free to reopen the issue if I misunderstood this 🙇🏻
|
gharchive/issue
| 2024-02-22T03:56:46 |
2025-04-01T04:56:06.649387
|
{
"authors": [
"canhtrinh",
"npty"
],
"repo": "axelarnetwork/axelarjs",
"url": "https://github.com/axelarnetwork/axelarjs/issues/226",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
180169775
|
Limit Sign up form width
4 grid columns
Fixed in commit https://github.com/axelpale/tresdb/commit/218bfcc5fa14bc1f446138746e23d8a6e7bfb4c0
|
gharchive/issue
| 2016-09-29T21:37:48 |
2025-04-01T04:56:06.650468
|
{
"authors": [
"axelpale"
],
"repo": "axelpale/tresdb",
"url": "https://github.com/axelpale/tresdb/issues/48",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1338028374
|
Changing backticks to (double) quotes
Hi,
Are there any plans to support converting backticks to quotes when the user deletes a variable? I noticed in the GIF (see below) from the VSCode plugin that it has the option to go both ways. Would be great to have both.
Very nice and simple plugin btw, very useful.
Yeah, I'd like to implement this feature, but I need to think more about how to implement it to avoid problems like these: meganrogge/template-string-converter#43
Wasn't aware of those issues. I think there should still be a branch that has it implemented as simply as possible, and have people test it out. Most probably it will not have some of those issues the VSCode plugin has, but some other issues might arise.
Awesome, thanks!
@axelvc I tried using the feature but can't manage to make it work. Can you give me a simple explanation or add it to the README? I appreciate your effort a lot 💪🏼
Can you give me a simple explanation or add it to the README?
You need to enable the option in the setup function. That should be enough
require('template-string').setup {
remove_template_string = true,
}
I made it work by deleting the character with x in normal mode. Is that the way it's supposed to work?
The feature should work in Normal and Insert mode, but there are some cases where the plugin doesn't remove the backticks. You can see them here: https://github.com/axelvc/template-string.nvim/pull/4#issue-1342876268
If you've found a new issue, feel free to report it by opening an issue.
|
gharchive/issue
| 2022-08-13T18:38:41 |
2025-04-01T04:56:06.655187
|
{
"authors": [
"axelvc",
"camiloaromero23",
"kristijanhusak"
],
"repo": "axelvc/template-string.nvim",
"url": "https://github.com/axelvc/template-string.nvim/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
720804228
|
Windows Universal Platform (UWP) Support?
Can you add Windows Universal Platform (UWP)?
It was available on older Cocos2d-x 3 versions
Yes, we have plan to support UWP: https://github.com/c4games/engine-x/projects/2
Useful for the Xbox One/X platform.
Yes, and also the Windows Store. In the past Cocos2d-x supported UWP; I think it had support in version 3.15.
Maybe this helps to convert/add UWP support to adxe:
https://github.com/Microsoft/DesktopBridgeToUWP-Samples
Desktop Conversion Extensions is a bridge that enables you to convert your classic desktop application (like Win32, Windows Forms, and WPF) or game to a Universal Windows Platform (UWP) app or game.
It works for desktop applications; Xbox still requires the full implementation.
I confirm, WinRT support that allows producing the UWP format was available in 3.15 and is easy to copy/paste for support in 3.17.
There is a problem with the audio player as of 3.17, but taking advantage of the unified OpenAL the project has, it might be easier to debug caching problems. There is also no gamepad support.
I will share my cocos2d-x 4.0 UWP port and start working on a fork. I just don't have a few prebuilt libraries from the dependencies/thirdparty folder, so it's stuck at the linking phase. I think I have the Lua thing built already.
This one is it:
https://github.com/Greentwip/time-trio
UWP Port of 4.0 version.
The problem is some libraries need to be compiled from source. I've already compiled ANGLE, Lua, OpenSSL I guess? But Bullet and some others need their prebuilt binaries. I'll try to follow up with the adxe structure; right now I don't know how to disable GLFW and purely add the missing UWP files using OpenGL.
Probably not UWP (I'm almost done with the port), but you can try this Xbox port I've finished compiling; you'll just need the open Microsoft GDK (or sign up for the internal Microsoft GDK):
https://github.com/balancedwolf/adxe
See: CMakeGDKXboxOne.cmake and CMakeLists.txt from the time-trio repo to build your game.
@hal99,
Hope this is still active and part of axys 1.1
@halx99
please add labels:
pinned
help wanted
https://github.com/Greentwip/cocos2d-x
XBOX one port
axmol UWP screenshot:
Amazing!
Time to clean my XBOX One ;)
|
gharchive/issue
| 2020-10-13T20:37:53 |
2025-04-01T04:56:06.665181
|
{
"authors": [
"Obg1",
"aismann",
"appakabar",
"balancedwolf",
"bensgigi",
"halx99"
],
"repo": "axmolengine/axmol",
"url": "https://github.com/axmolengine/axmol/issues/232",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2723883363
|
Refactor dist init and split out dist migrate
dist init still works as on main
dist migrate does the migration part of dist init
dist init's migration functionality is now implemented in terms of dist migrate
This shows dist migrate on its own:
And this shows that dist migrate is only a migration, and explicitly does not do the re-init part:
Once #1612 is merged, I'll rebase this off main and those changes won't be in this PR's diff.
|
gharchive/pull-request
| 2024-12-06T20:39:01 |
2025-04-01T04:56:06.668253
|
{
"authors": [
"duckinator"
],
"repo": "axodotdev/cargo-dist",
"url": "https://github.com/axodotdev/cargo-dist/pull/1611",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
383594548
|
bitwise ops across arrays
[1 0 1][1 0 0] &
[1 0 1][1 0 0] |
Implemented in #78
|
gharchive/issue
| 2018-11-22T16:05:29 |
2025-04-01T04:56:06.690656
|
{
"authors": [
"nick-paul"
],
"repo": "aya-lang/aya",
"url": "https://github.com/aya-lang/aya/issues/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1750968825
|
Transfer of ViennaRNA PyPI ownership
Dear Ayaan Hossain,
I wonder whether you are willing to transfer ownership and maintainership of the PyPI ViennaRNA package project to us as the upstream development team.
You haven't updated this repository since May 2022, and it seems you even provide a sort of pre-release. From what I can see at the moment, there is no easy way to tell whether you include the official upstream releases in your PyPI package or whether you patch them to some extent.
We believe it makes much more sense if we integrate submissions and updates to PyPI into our regular release workflow instead. This will ensure that all updates that are released with new versions of the ViennaRNA Package find their way into the PyPI package within a reasonable time. At the same time it would avoid any confusions of our users who don't know that the PyPI package is not maintained by us.
Please let me know about your decision.
Best,
Ronny
Hi Ronny, thanks for reaching out. Yes, I usually only try to maintain the latest version. I last updated the 2.5.0 version and since then there's been 2.5.1 and now 2.6.0. I can consider transferring ownership of the name in PyPI so your team can push your releases to PyPI, but I was wondering about the following:
Currently, I compile the source on the user's machine and this installs all the relevant SWIG interface code to the user. Afterwards, I delete the compiled files, and the python interface is the only thing that remains. Will you make sure the release you push to PyPI just installs the python interface?
There is no good way to specify the parameter files, other than the user somehow having to download them separately and point to them via the RNA.read-parameter_file function (apologies if I get the name wrong; it's mentioned in the README of this repo). Can you make sure that when the user installs the package, it is as easy as specifying a string to load the parameters?
The python interface is now just a SWIG interface to the underlying C code. Will your team spend some time to flesh out a more curated python interface? It'd help if you have a series of documents explaining all the different interface functions, their proper parameters, usage, purpose, return type etc.
Let me know, what you think.
Ayaan
@RaumZeit @ayaanhossain
I trust you are both doing well. As a user and enthusiast of the ViennaRNA project, I have been closely following your conversation regarding the Python package support.
Firstly, let me express my heartfelt gratitude for the valuable work you both have done. Ayaan, your initiative to package ViennaRNA for PyPI and maintain it has been of immense value. Ronny, your willingness to take over the PyPI package to provide official updates is a testament to the commitment of the original development team towards the project and its users.
I noticed that it's been a week since your last exchange. As an interested party and a user of the ViennaRNA package, I would like to encourage you both to continue this important conversation. Your ongoing dialogue on this topic is vital for the future development and support of the ViennaRNA Python package.
Furthermore, I am excited to see the integration of GitHub actions into your project pipelines. It would be great to see the Python package become an integral part of the official CI/CD. This would not only ensure the efficient and effective maintenance of the project but would also solidify the trust of the user community in the ViennaRNA package.
Thank you both for your dedication and commitment to this project. I am sure many users like myself eagerly await the outcomes of your ongoing discussion and collaboration.
Looking forward to the continued growth and success of the ViennaRNA project.
Best regards,
Abed
Hi @ayaanhossain and @AH-Merii ,
I'm pretty busy these days with giving lectures at the university, so I can't dedicate too much time to this project during the next two weeks or so. But, we are still willing to proceed on that matter.
So let me first answer the questions @ayaanhossain mentioned:
Sure, for the PyPI version, we will only install the Python interface and no additional other parts of the ViennaRNA Package, That should be easily possible, since we also do that for our binary packages that we distribute for several linux distros.
While we ship energy parameter files, those are mostly intended to be used by our executable programs. In our library, we already provide dedicated API functions that load these parameters independently without the need for these files. So, in essence, we already include all parameter files into our library. They have just not been exposed as strings so far but required the call of high-level functions that would load these parameters, e.g. RNA.params_load_Andronescu2007() would load the Andronescu et al. 2007 parameter set.
For the next release, I even added support to actually export these parameters as strings that represent the content of the parameter files. Long story short, no parameter files are required for the Python interface, everything is already compiled into the library and available through the API.
While we use SWIG to create the Python bindings for the ViennaRNA C-library we add an object oriented layer that transforms many of our C functions to methods of the respective objects in Python. I understand that this renders the available reference manual documentation somewhat useless for the Python user. However, with version 2.6.0 we introduced an automatic conversion of the Doxygen documentation into Python docstrings that, for the most part, recognizes these transformations from functions to methods. This conversion and the creation of docstrings finally allowed us to provide a Python API documentation which will be growing in the future. You can already find the latest documentation at Read the Docs
As for the github actions and the creation of the PyPI packages, this is exactly the way we want to go. We want to fully automate the process such that whenever we create a new release, this will trigger a github action that then uploads the required artifacts to PyPI.
That's all for now from my side and I'll be more focused on bringing this project forward in about 2 weeks from now.
Cheers,
Ronny
I forgot to mention that for point 3. we plan to eventually merge both of the created documentations into one.
I'm already in the process to transform our doxygen output to sphinx-generated html websites via the breathe bridge. But this is a tedious task that requires many changes in the way we currently document the API. But once this is done, a merge of both the C-API with all its extra descriptions and example codes and the Python documentation we extract from the generated docstrings should be easy to accomplish.
@RaumZeit
Thank you for taking the time to provide such a detailed update amidst your busy schedule. It's truly appreciated!
I'm incredibly excited about the upcoming changes you've outlined for the ViennaRNA Python package. The improvements in the PyPI version, API documentation, and the automated CI/CD practices sound very promising.
Looking forward to these developments in the coming weeks. 😁
@AH-Merii @ayaanhossain:
Dear Ayaan and Abed,
I started to refactor the Python interface build process for the ViennaRNA Package and everything seems good so far. I use setuptools to build RNAlib and the SWIG extension and prepare/compile the sdist and bdist_wheel packages. Now that I wanted to test whether everything works correctly by pushing it to test.pypi.org, I realized that the ViennaRNA project at test.pypi.org is taken by @AH-Merii. I tried reaching out via email a few days ago and again today, but didn't get a response yet. That's why I write here now. Would you, @AH-Merii, be so kind as to add me to the list of maintainers for your project at test.pypi.org? This would allow me to generate an API token to upload the artifacts. Otherwise, I'd need to create a package with a different package name, which would render testing much more tedious.
As soon as everything looks right, I start the github actions integration via cibuildwheel to build binary packages for Linux, macosx and hopefully windows. Once that also works out as expected, I'd need access to the ViennaRNA project at pypi.org from @ayaanhossain .
Just as a side note, I intend to keep the
import ViennaRNA
thing next to our official
import RNA
Both will actually load the same set of functions, classes and variables. Also, the RNA/__init__.py now is an actual package description file instead of a copy of the RNA.py generated by swig. So, the swig wrapper (RNA.py) itself is now an actual module (RNA.RNA) and part of the RNA package.
For future purposes, I think splitting the library into smaller sub packages is the way to go, e.g.:
from ViennaRNA import plotting
and things alike. But this requires some more restructuring in the SWIG interfaces and a shared library for libRNA. So, this change will likely come not earlier than version 3.0.0.
Thanks for your understanding and commitment!
Best,
Ronny
Hi Ronny, thanks for the initiative. I'm happy to see that you're focusing on a nice python interface for the RNAlib functions. I have added you to the pypi package for ViennaRNA, please feel free to start playing around with it. I have added you as a Maintainer, and once you confirm that you've received the invite, I will promote you to full owner. Best, Ayaan
Thanks a lot to the both of you!
I very much appreciate your trust in me/us working on the ViennaRNA Package.
The current state for the PyPI packages is that my pipeline runs fine for Linux and MacOS. For the latter, I was also able to build arm64 wheels today, so they will be included as well. What doesn't work is support for PyPy, since our C Extension doesn't seem compatible with their C layer. Windows builds need some more changes to the source code and will most likely follow in the future, once I find the time to rewrite the upstream code to support building with MSVC. I'll test upload of the sdist and wheel packages to test.pypi.org later this week.
I'll let you know once the tests for test.pypi.org are successful.
Best,
Ronny
Well, @ayaanhossain:
My pipelines are set up and working, publishing to both TestPyPI and PyPI works (they show version 2.6.0 now) and will be triggered automatically whenever a new release is issued on github. The good thing is that most users might not even have to compile the package, since the new build pipeline already provides Python 3.8 - 3.11 builds for Linux and MacOS X. As mentioned earlier, I'll have a look at Windows builds as well, but this might take some more time as it requires more changes in our code base....
Thanks again for everything! I'll have a look at the content of your git repo later to see which parts might be interesting to keep for the https://viennarna-python.readthedocs.io site. Also I try to add more documentation/examples on the energy parameter settings via the API.
Best,
Ronny
P.S.: you can make me owner of the PyPI project now
P.P.S.: The GitHub Actions workflow that builds and publishes everything is in our development branch and will be accessible once the next release is pushed to GitHub, just in case you are interested. The same is true for the setup.py, setup.cfg and pyproject.toml files I used to create the Python packages (although they are already included in the sdist packages on PyPI).
Congrats Ronny! I've promoted you to owner on the PyPI. Thank you and good luck! Hope the new python interface is really pythonic and easy to use for design and evaluation of RNA sequences. Ayaan
|
gharchive/issue
| 2023-06-10T13:19:30 |
2025-04-01T04:56:06.716068
|
{
"authors": [
"AH-Merii",
"RaumZeit",
"ayaanhossain"
],
"repo": "ayaanhossain/ViennaRNA",
"url": "https://github.com/ayaanhossain/ViennaRNA/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
267295060
|
evaluate clang build with qt
At the time of r13, Qt was able to build only with gcc/r10 - check compatibility.
ImageMagick7 was tested with clang/gcc on real hardware (armeabi-v7a) and found to be working with Qt 5.9.2.
A detailed evaluation, including a performance comparison, will be a separate task inside the IMTest project.
|
gharchive/issue
| 2017-10-20T20:23:55 |
2025-04-01T04:56:06.718506
|
{
"authors": [
"ayaromenok"
],
"repo": "ayaromenok/Android_ImageMagick7",
"url": "https://github.com/ayaromenok/Android_ImageMagick7/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2612598952
|
🛑 Obase is down
In 56879c9, Obase (https://www.obase.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Obase is back up in 28175ee after 10 minutes.
|
gharchive/issue
| 2024-10-24T21:35:01 |
2025-04-01T04:56:06.720840
|
{
"authors": [
"aydgn"
],
"repo": "aydgn/upptime",
"url": "https://github.com/aydgn/upptime/issues/223",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
198082
|
Cannot Mock Class which has Array parameter in constructor
Code :
public class Service{
public Service(Command[] commands) { }
}
public abstract class Command { }
var mock = MockRepository.GenerateMock<Service>(new Command[] { }); // This fails
This is just a syntax issue. You passed an empty object array as a parameter - so the exception stating that there is no parameterless constructor is totally right.
Just write:
var mock = MockRepository.GenerateMock<Service>(new object[] { new Command[] { } });
Thanks. I figured it out some time ago.
|
gharchive/issue
| 2010-05-18T07:58:58 |
2025-04-01T04:56:06.723004
|
{
"authors": [
"alaendle",
"ashishs"
],
"repo": "ayende/rhino-mocks",
"url": "https://github.com/ayende/rhino-mocks/issues/8",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2698131999
|
Feature: sort interface properties, object keys and array values if code has annotation
Describe the rule
There is existing package that implements similar feature: https://github.com/ronparkdev/eslint-plugin-annotation
I personally think it would be great if eslint-plugin-perfectionist provided this rule since it's also related to sorting.
Code example
Additional comments
No response
Validations
[X] Read the docs.
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
Currently, you can use ESLint comments to define plugin rule settings.
/*
eslint perfectionist/sort-objects: [
'error',
{
type: 'alphabetical',
order: 'asc',
partitionByNewLine: true
}
]
*/
Using annotations looks beautiful and convenient. But I'm not sure it will be in demand.
I would like to hear more opinions from other users.
I understand the use case for sort-array-includes: it would allow users to sort arrays that do not have .includes, if they wish.
However, I'm not sure to understand the use case with the other rules such as sort-objects: all objects get sorted if the rule is active, why would an annotation be needed?
@sardor01 Perhaps the ESLint comment solves your problem?
|
gharchive/issue
| 2024-11-27T10:58:16 |
2025-04-01T04:56:06.738231
|
{
"authors": [
"azat-io",
"hugop95",
"sardor01"
],
"repo": "azat-io/eslint-plugin-perfectionist",
"url": "https://github.com/azat-io/eslint-plugin-perfectionist/issues/400",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
244676896
|
Version info in 1.2.0.3 is 1.2.0.0
Download https://github.com/azerg/NppBplistPlugin/releases/download/1.2.0.3/NppBplistPlugin_x86.zip
Unzip to C:\Downloads\NppBplistPlugin.dll, found:
Size: 2054656 Byte (1.96 MB)
Modified: 2017-01-27 21:34
Version: 1.2.0.0
MD5: db2d3ea89a526b9f315d3b0053770a93
SHA1: 6427201900041159d11d488e976b9a85dbda5be9
SHA256: 0c7f53d480d199c0cda37596747873d3e7e6365f3225abfa23e7a47edf0d3a44
CRC32: 77e20d7a
Also, in the x64 build, the version info says 1.2.0.0.
Thanks. Will recompile with updated version info soon 🏎
fixed in 1.2.0.4
|
gharchive/issue
| 2017-07-21T13:57:45 |
2025-04-01T04:56:06.758181
|
{
"authors": [
"azerg",
"sunjw"
],
"repo": "azerg/NppBplistPlugin",
"url": "https://github.com/azerg/NppBplistPlugin/issues/5",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
649668609
|
[Daily Question] - 2020-07-01 - The TypeScript compiler is written in TypeScript, so which came first, the compiler or TS?
The TypeScript compiler is written in TypeScript, for example:
https://github.com/microsoft/TypeScript/blob/7b942b4fa875f2877a90d201cf146e6196b0c07b/src/compiler/scanner.ts
So how does this compiler, written in TS, get compiled?
TypeScript came first. The TS compiler evolved roughly like this:
Write a first-version compiler V1 in another language (for example JS)
Write a second-version compiler V2 in TS; its TS source needs to be compiled by V1 first
Write a third-version compiler V3 in TS; its TS source needs to be compiled by V2 first
At this point, bootstrapping is complete
|
gharchive/issue
| 2020-07-02T06:20:35 |
2025-04-01T04:56:06.901924
|
{
"authors": [
"azl397985856"
],
"repo": "azl397985856/fe-interview",
"url": "https://github.com/azl397985856/fe-interview/issues/135",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1779778719
|
ParamsSpec: Store Mapping Functions Separately
Enables the developer to not have to pass in params specs for items that had mapping functions.
Currently ParamsSpec and related types -- ParamsSpecFieldless, ValueSpec, FieldWiseSpec (generated by proc macro) -- hold MappingFns within their type.
This forces users to respecify the params specs with mapping functions in subsequent CmdCtx::builder_* instantiations.
This change is to:
Get developers to instantiate a Map<MappingFnK, Box<dyn MappingFn>>, and pass that to CmdCtx::builder_*.
ParamsSpec and related types will hold the MappingFnK.
Developers should define the MappingFnK, an all-unit-variant enum with sensible names to identify the mapping function.
Considering the value of this.
With It
Developers
Define an enum to name the function keys:
enum MappingFunctions {
BucketNameFromBucketState,
IamPolicyArnFromIamPolicyState,
}
Define the mappings:
let mapping_functions = {
let mut mapping_functions = MappingFunctions::new();
mapping_functions.insert(BucketNameFromBucketState, S3BucketState::bucket_name);
// ..
mapping_functions
};
Pass MappingFunctions to CmdCtxBuilder, for each code instantiation (may be just one):
cmd_ctx_builder.with_mapping_functions(mapping_functions);
Not have to call .with_item_params::<TheItem>(..) in subsequent calls.
Users
Get runtime error if the mapping function type doesn't match, but it should be caught by tests.
Framework Maintainers
MappingFunctions map will have magic logic to store the function argument types and return type.
Error reporting when types are mismatched.
Without It
Developers
Define the item params spec:
// First execution
let s3_object_params_spec = S3ObjectParams::<WebApp>::field_wise_spec()
.with_file_path(web_app_path_local)
.with_object_key(object_key)
.with_bucket_name_from_map(S3BucketState::bucket_name)
.build();
// Subsequent executions
let s3_object_params_spec = S3ObjectParams::<WebApp>::field_wise_spec()
.with_bucket_name_from_map(S3BucketState::bucket_name)
.build();
Pass the item params spec to CmdCtxBuilder, for every separate code instantiation:
cmd_ctx_builder
.with_item_params::<S3ObjectItem<WebApp>>(
item_id!("s3_object"),
s3_object_params_spec,
)
This is somewhat of an inconvenience, because if this isn't done, the user / developer will have a runtime error, which looks like this:
peace_rt_model::params_specs_mismatch
× Item params specs do not match with the items in the flow.
help: The following items either have not had a params spec provided previously,
or had contained a mapping function, which cannot be loaded from disk.
So the params spec needs to be provided to the command context for:
* s3_object
When the closure passed to with_*_from_map doesn't have the argument type specified, or mismatches, the compilation error is still unclear. https://github.com/rust-lang/rust/pull/119888 will allow us to return a useful compilation error.
Users
No runtime error, because it will be caught at compile time.
Framework Maintainers
Error messages / diagnostics showing which CmdCtx is missing which item spec for which field, should be made clear.
|
gharchive/issue
| 2023-06-28T21:08:05 |
2025-04-01T04:56:06.914507
|
{
"authors": [
"azriel91"
],
"repo": "azriel91/peace",
"url": "https://github.com/azriel91/peace/issues/156",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1997841202
|
🛑 Amelia Bot is down
In 807b2ca, Amelia Bot (https://ameliabot-discord.uzumekiulee.repl.co/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Amelia Bot is back up in f0cede8 after 4 minutes.
|
gharchive/issue
| 2023-11-16T21:43:04 |
2025-04-01T04:56:06.917312
|
{
"authors": [
"azrielbsi"
],
"repo": "azrielbsi/monitor",
"url": "https://github.com/azrielbsi/monitor/issues/253",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1287559388
|
Unable to set timestamp for vm_extension and vmss_extension custom_script
It seems there is a bug in both vm_extension and vmss_extension custom_script. A function is used (toint) which does not exist in Terraform; only tonumber exists. The reason it was not seen before is that try suppresses the error.
settings = jsonencode(
{
- "fileUris" : local.fileuris,
- "timestamp" : try(toint(var.extension.timestamp), 12345678)
+ fileUris = local.fileuris,
+ timestamp = try(tonumber(var.extension.timestamp), 1234568)
}
)
I will raise a PR to fix this issue.
Assigning to you while waiting for the PR? ;)
|
gharchive/issue
| 2022-06-28T15:54:40 |
2025-04-01T04:56:06.919726
|
{
"authors": [
"arnaudlh",
"wasfree"
],
"repo": "aztfmod/terraform-azurerm-caf",
"url": "https://github.com/aztfmod/terraform-azurerm-caf/issues/1231",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
119693201
|
Remove ypromise
URL : http://azu.github.io/promises-book/#promise-library
It feels rather dated at this point. There is no actual problem, but it does not feel actively maintained (since there have been no spec changes you could say it is fine, but it is hard to call it recommended).
I want to add one replacement.
[x] Remove the ypromise dependency -> use getify/native-promise-only
[x] Update the text introducing ypromise
There is no particular reason to use native-promise-only instead of es6-promises, but
Promises have kept changing subtly since ES6, and NPO's code is easier to read.
My hunch is that core-js and the es6-shim family are the strictest about the current spec. That involves considerations like species, which seems out of scope for a Promise polyfill.
|
gharchive/issue
| 2015-12-01T11:00:14 |
2025-04-01T04:56:06.922772
|
{
"authors": [
"azu"
],
"repo": "azu/promises-book",
"url": "https://github.com/azu/promises-book/issues/250",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1803566574
|
🛑 azuma doa: Mailing Service is down
In c8eb444, azuma doa: Mailing Service (https://pie.azuma-health.tech/health/doa-mailing) was down:
HTTP code: 0
Response time: 0 ms
Resolved: azuma doa: Mailing Service is back up in 52baf2f.
|
gharchive/issue
| 2023-07-13T18:38:41 |
2025-04-01T04:56:06.929311
|
{
"authors": [
"info-azuma-health-tech"
],
"repo": "azuma-healthtech-public/uptime-pie",
"url": "https://github.com/azuma-healthtech-public/uptime-pie/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
260855568
|
Documentation or interface goodies for setting configurations on external IPC connections
Need a way to specify in code how to change configuration parameters for IPC connection to another node
Need a way to achieve same functionality listed above over the CLI
Either the above functionality, or explicitly do it in code + add docs on how to manipulate these internal conf params + allow users of library to create their own gen_server to handle this functionality in a more generic way for other gen_servers.
Resolved.
|
gharchive/issue
| 2017-09-27T05:58:29 |
2025-04-01T04:56:06.930820
|
{
"authors": [
"kevinwilson541"
],
"repo": "azuqua/clusterluck",
"url": "https://github.com/azuqua/clusterluck/issues/42",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
312751825
|
Custom config?
How can I use pretty-quick with a custom eslint-prettier (.eslintrc) config?
My committed code is different from my development environment... :sweat_smile:
This would require prettier-eslint support, so I'll close this as a duplicate of #22.
|
gharchive/issue
| 2018-04-10T02:32:40 |
2025-04-01T04:56:06.933466
|
{
"authors": [
"azz",
"juandc"
],
"repo": "azz/pretty-quick",
"url": "https://github.com/azz/pretty-quick/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2573037093
|
Using an external vector in selections was deprecated in tidyselect 1.1.0.
In perform_bootstrap_ts(): tidyr::pivot_wider(names_from = temporal_col_name,
Warning: Using an external vector in selections was deprecated in tidyselect 1.1.0.
## ℹ Please use `all_of()` or `any_of()` instead.
## # Was:
## data %>% select(temporal_col_name)
##
## # Now:
## data %>% select(all_of(temporal_col_name))
fixed in #13
|
gharchive/issue
| 2024-10-08T12:16:47 |
2025-04-01T04:56:06.935024
|
{
"authors": [
"wlangera"
],
"repo": "b-cubed-eu/indicator-uncertainty",
"url": "https://github.com/b-cubed-eu/indicator-uncertainty/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
429201434
|
Source map does not resolve correctly for imported vulnerable contracts
We have a 2 contract setup, in which the main contract inherits a vulnerable contract:
vulnerableImport.sol
pragma solidity 0.5.0;
import "./imported.sol";
contract C is Imported {}
imported.sol
pragma solidity 0.5.0;
contract Imported {
function f() public {
selfdestruct(msg.sender);
}
}
Scanning finds the correct selfdestruct problem:
$ sabre ./vulnerableImport.sol
✔ Compiled with solc v0.5.0 successfully
[ ] Analyzing C(node:30670) V8: :3 Invalid asm.js: Invalid member of stdlib
./vulnerableImport.sol
6:2 error The contract can be killed by anyone https://smartcontractsecurity.github.io/SWC-registry/docs/SWC-106
✖ 1 problem (1 error, 0 warnings)
However the source mapping does not point to the correct line number. Reported location is 6:2 but line 6 does not exist in the vulnerableImport.sol file.
Observation
Doing a few more tests I found an interesting behavior. Adding an extra empty line in the vulnerable contract (before the vulnerable code) will increase 6:2 to 6:3.
Yeah, it looks like there is something wrong with it. I will try to find out the reason and fix it if no one has started to solve this problem yet.
@cleanunicorn -- Thanks for the example. I'll take a quick look at it and see how I can resolve the source map issue. The problem with imports and their source map linking is something that I haven't debugged yet. Unfortunately solc doesn't support resolving imports out of the box. So we're handling it within sabre, hence all such issues 😅
I have exactly the same problem in mythos ☺
|
gharchive/issue
| 2019-04-04T10:28:36 |
2025-04-01T04:56:06.938991
|
{
"authors": [
"Skyge",
"cleanunicorn",
"eswarasai"
],
"repo": "b-mueller/sabre",
"url": "https://github.com/b-mueller/sabre/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1316036426
|
Crashes when hovering over a map
crash-2022-07-24_18.21.52-client.txt
This is probably due to an incompatibility with another mod. I will look into fixing these incompatibilities in the coming weeks
I did not comply with a changed API so my mod does not recognize custom tooltip data. I will change that as soon as I am able to get back to my PC, which will probably be in a week.
|
gharchive/issue
| 2022-07-24T23:24:16 |
2025-04-01T04:56:06.942192
|
{
"authors": [
"DewGaming",
"b0iizz"
],
"repo": "b0iizz/minecraft-advancednbttooltip",
"url": "https://github.com/b0iizz/minecraft-advancednbttooltip/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
88565329
|
New User: React-Bootstrap
Please fill the following out:
Name: React-Bootstrap
Homepage url: http://react-bootstrap.github.io/
Brand Guidelines/Licensing: MIT
Logo:
Thanks!
|
gharchive/issue
| 2015-06-15T23:19:48 |
2025-04-01T04:56:06.990213
|
{
"authors": [
"mtscout6"
],
"repo": "babel/babel.github.io",
"url": "https://github.com/babel/babel.github.io/issues/336",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
176150140
|
Improve parsing error message (T6710)
Issue originally made by @markelog
Ref https://github.com/babel/babel/issues/3774
Input code
require("babylon").parse("2++");
Description
Will throw an error:
"SyntaxError: Assigning to rvalue (1:0)"
This is pretty cryptic; parsers like esprima, or any other JS engine, will show a more meaningful message: "Invalid left-hand side expression in postfix operation"
Moving to https://github.com/babel/babylon/issues/119
Moved where?
.. sigh closed this one.
Yey!
|
gharchive/issue
| 2016-09-10T02:24:07 |
2025-04-01T04:56:07.012408
|
{
"authors": [
"hzoo",
"markelog"
],
"repo": "babel/babylon",
"url": "https://github.com/babel/babylon/issues/119",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
178459064
|
add relative information about add-module-exports to support ordinary…
… lib, fixes #362
As asked here https://github.com/babel/generator-babel-boilerplate/issues/362
Tell me if I should put it somewhere else.
done. Hope you like it.
Thanks @gabrielstuff ! I'll merge this in once CI passes :)
Well, no idea what's up with Node 5. I'm gonna merge this and fix that in a subsequent PR.
|
gharchive/pull-request
| 2016-09-21T20:51:08 |
2025-04-01T04:56:07.014589
|
{
"authors": [
"gabrielstuff",
"jmeas"
],
"repo": "babel/generator-babel-boilerplate",
"url": "https://github.com/babel/generator-babel-boilerplate/pull/413",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
204192507
|
Wish the animation could be turned off and the pop-up could show up in the center of the window
thx
I agree with removing the animation. There's a lot of redrawing happening while it opens up, and it makes the extension feel slow, since you can't start using it instantly.
Ideally, the animation would be an option disabled by default.
I don't believe the animation can be customized by chrome extensions, as per: https://stackoverflow.com/questions/43867226/is-it-possible-to-customize-my-chrome-extension-popup-windows-opening-animation?rq=1
That being said, I'm just learning the chrome extension APIs now...
I imagine it's unlikely they will add support to the API, closing this in the meantime.
|
gharchive/issue
| 2017-01-31T02:58:47 |
2025-04-01T04:56:07.026432
|
{
"authors": [
"babyman",
"jgthms",
"markwatson",
"spjspjspj"
],
"repo": "babyman/quick-tabs-chrome-extension",
"url": "https://github.com/babyman/quick-tabs-chrome-extension/issues/164",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1561475270
|
🛑 SM Service is down
In 881698d, SM Service (http://smservice.de) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SM Service is back up in d5f35fe.
|
gharchive/issue
| 2023-01-29T22:06:28 |
2025-04-01T04:56:07.031246
|
{
"authors": [
"thomasrehm"
],
"repo": "bachmannschumacher/upptime",
"url": "https://github.com/bachmannschumacher/upptime/issues/177",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1653347697
|
🛑 SM Service is down
In 146853a, SM Service (http://smservice.de) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SM Service is back up in db0959f.
|
gharchive/issue
| 2023-04-04T07:32:18 |
2025-04-01T04:56:07.033535
|
{
"authors": [
"thomasrehm"
],
"repo": "bachmannschumacher/upptime",
"url": "https://github.com/bachmannschumacher/upptime/issues/292",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1589565997
|
🔌 Plugin: Apply git patch
🔖 Summary
For templates that publish to existing repos with publish:github:pull-request, some files might already exist that we'd like to modify, e.g. adding a workflow to .circleci/config.yml. At the moment, if a directory pulled by fetch:template contains that file, it will fully replace the existing file, which makes it a bit painful to merge.
It seems like it might be relatively low-effort to implement a plugin that git-applies a provided patch before / during a publish:*.
The biggest question in my head is whether this could be implemented as fully separate from publish:* or if it would need to patch those actions. That would probably have a big impact on the overall level of effort
🌐 Project website (if applicable)
No response
✌️ Context
No response
👀 Have you spent some time to check if this plugin request has been raised before?
[X] I checked and didn't find similar issue
🏢 Have you read the Code of Conduct?
[X] I have read the Code of Conduct
Are you willing to submit PR?
No, but I'm happy to collaborate on a PR with someone else
This seems reasonable; I would suggest that this be a separate scaffolder action, however. I also wonder whether this is possible in the current setup: we might need to break up the bigger actions into smaller steps so that you can apply the git patch after the git repo has been created but before the push to the remote.
|
gharchive/issue
| 2023-02-17T15:51:08 |
2025-04-01T04:56:07.069769
|
{
"authors": [
"benjdlambert",
"zdufour-asp"
],
"repo": "backstage/backstage",
"url": "https://github.com/backstage/backstage/issues/16429",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
965604957
|
Status field does not work
Hi, I added the status field in the entity, but I got an error below.
I don't know if I can get functionality like the below; we want a status column to show whether our service is down or up.
Hi!
The status field is not meant to ever be written in a YAML file. It is a generated readonly field, where some errors that can happen inside the processing loops of the catalog are surfaced.
We have been considering a general mechanism for getting errors into entities from "the outside", e.g. via an API or similar but there is no such thing in place yet.
|
gharchive/issue
| 2021-08-11T00:36:01 |
2025-04-01T04:56:07.072522
|
{
"authors": [
"chuliang014",
"freben"
],
"repo": "backstage/backstage",
"url": "https://github.com/backstage/backstage/issues/6783",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
815437094
|
catalog-plugin: Hide "Create Component" button using config
Hey 👋🏻 We're deploying Backstage internally but we're not currently using the scaffolder functionality. Because of this we'd like to remove the "Create Component" button from the catalog plugin home page.
This PR makes it possible to hide the button using config.
Config missing or true:
Config false:
:heavy_check_mark: Checklist
[x] A changeset describing the change and affected packages. (more info)
Tugboat has finished building the preview for this pull request!
Link:
https://pr4677-bbxsvcojki1xtmghlmof1nv5lwkfeyqc.tugboat.qa
Dashboard:
https://dashboard.tugboat.qa/603647f4f285af3e6596e855
👍 We also wanna get rid of it in some cases.
As it's using external route refs, how about supporting something like "optional" external route refs?
Should we generalize this all the way to instead a flag readonly: true?
This does not tie it into a specific implementation (the words "show", "create", "component", and "link" all tell a story about deeper specifics). Also, a flag like this could hide the Create link in the sidebar, and help the demo site be instructive about "hey you can't do that because it's intentionally disabled" etc
Should we generalize this all the way to instead a flag readonly: true?
That sounds perfect to me 👌 I can update this PR to do that if you'd like?
I think so. @Fox32 agree?
Oh and one more thing. You need to add this new parameter to the config.d.ts file in the a same package, and mark it @visibility frontend
I think so. @Fox32 agree?
I think that would be good for now
Wanna throw in that the intention is to implement this by making the external component creation route optional, and signal this that way instead, removing the need for config.
Tbh optional external route refs shouldn't be that tricky so could take a look at implementing them if we wanna avoid having to deprecate and remove this in the future?
That makes sense. So the button would only be present if the Create route was registered at all?
Ended up taking a stab at optional external routes, so that we're able to avoid the need for config.
Shipped in #4700, which would supersede this PR
Amazing! Thank you. I'll close this one.
|
gharchive/pull-request
| 2021-02-24T12:34:59 |
2025-04-01T04:56:07.081374
|
{
"authors": [
"Fox32",
"Rugvip",
"backstage-service",
"freben",
"robinjmurphy"
],
"repo": "backstage/backstage",
"url": "https://github.com/backstage/backstage/pull/4677",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1132579453
|
catalog-model: Remove serializeEntityRef
Removes serializeEntityRef, which has been deprecated for almost a year; long live stringifyEntityRef.
rebased (didn't fix test yet tho)
|
gharchive/pull-request
| 2022-02-11T13:12:12 |
2025-04-01T04:56:07.082650
|
{
"authors": [
"freben",
"jhaals"
],
"repo": "backstage/backstage",
"url": "https://github.com/backstage/backstage/pull/9479",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
356731062
|
Telegram don’t start
ubuntu@ip-172-31-36-140:~$ sudo TelegramTUI/./telegramTUI
Traceback (most recent call last):
  File "/usr/lib/python3.4/configparser.py", line 1116, in _unify_values
    sectiondict = self._sections[section]
KeyError: 'telegram_api'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "TelegramTUI/./telegramTUI", line 2, in <module>
    from src.ui import App
  File "/home/ubuntu/TelegramTUI/src/ui.py", line 2, in <module>
    from src.MainForm import MainForm
  File "/home/ubuntu/TelegramTUI/src/MainForm.py", line 4, in <module>
    from src.telegramApi import client
  File "/home/ubuntu/TelegramTUI/src/telegramApi.py", line 200, in <module>
    client = TelegramApi()
  File "/home/ubuntu/TelegramTUI/src/telegramApi.py", line 20, in __init__
    api_id = config.get('telegram_api', 'api_id')
  File "/usr/lib/python3.4/configparser.py", line 754, in get
    d = self._unify_values(section, vars)
  File "/usr/lib/python3.4/configparser.py", line 1119, in _unify_values
    raise NoSectionError(section)
configparser.NoSectionError: No section: 'telegram_api'
ubuntu@ip-172-31-36-140:~$
Did you install all the dependencies?
Please send me the traceback in UTF-8.
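For reference, the traceback above fails inside configparser rather than in a missing dependency: the config file TelegramTUI reads contains no [telegram_api] section. A minimal sketch of the failing lookup, where the section and api_id names are taken from the traceback and the config file path is only an assumption:
import configparser

config = configparser.ConfigParser()
config.read("config.ini")  # assumed path; TelegramTUI reads its own config file

try:
    # Same lookup as src/telegramApi.py line 20 in the traceback above.
    api_id = config.get("telegram_api", "api_id")
except configparser.NoSectionError:
    # The reported failure: the file was read (or is missing entirely),
    # but it has no [telegram_api] section.
    print("Config is missing the [telegram_api] section")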
|
gharchive/issue
| 2018-09-04T09:41:32 |
2025-04-01T04:56:07.091363
|
{
"authors": [
"Antoxa223",
"bad-day"
],
"repo": "bad-day/TelegramTUI",
"url": "https://github.com/bad-day/TelegramTUI/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1324304335
|
optional text in cucumber expression does not match
Current behavior
Omitted optional text in cucumber expressions do not match.
Expected behavior
Omitted optional text in cucumber expressions should match.
Test code to reproduce
I assume /this is a cucumber expression/ as indicated in the preprocessor docs and defined in the cucumber docs.
Then(/I am on the start page (again)/, () => {});
does not match
Then I am on the start page
Step implementation missing for "I am on the start page".
(However, it does match Then I am on the start page again.)
Versions
Cypress: 10.3.1
Preprocessor: 12.0.0
Node: 14.17.0
Checklist
[x] I've read the FAQ.
[x] I've read Instructions for logging issues.
[x] I'm not using cypress-cucumber-preprocessor@4.3.1 (package name has changed and it is no longer the most recent version, see #689).
I assume /this is a cucumber expression/
The docs are inaccurate, cucumber expressions are strings.
Additionally, if you were to write a step Then("I am on the start page (again)", () => {});, then I think the last space would be required.
Save your opinion of it for the authors.
https://github.com/cucumber/cucumber-expressions
And the rest of the docs look fine.
Save your opinion of it for the authors.
https://github.com/cucumber/cucumber-expressions
Their specification does not imply if my use case should work or not. Common sense tells me that it should. Sorry, but may I ask if it is their or your code we are talking about? Just to clarify.
And the rest of the docs look fine.
You say “cucumber expressions are strings”
The docs says “A step definition’s expression can either be a regular expression or a cucumber expression. The examples in this section use cucumber expressions.”
My IDE says /^a table step$/ (from that page) is not a string but a regular expression.
You say “the rest of the docs look fine”
Although I’m new to Typescript, I think one of those four statements must be wrong.
Given, When and Then accept both strings and regular expressions. Strings are automatically turned into regular expressions; this is what's referred to as "cucumber expressions". This is handled by https://github.com/cucumber/cucumber-expressions, i.e. it is not my invention.
So /^a table step$/ is a regular expression but not a cucumber expression, right?
So /^a table step$/ is a regular expression but not a cucumber expression, right?
That's right.
The thing I do not understand is: Do you use some original cucumber code to parse those expressions? Something you cannot touch? Or do you just reject my let’s call it request for improvement?
You can find multiple cucumber libs listed as dependencies in package.json, including @cucumber/cucumber-expressions.
Thanks for your information! One last question, if I may: I am thinking about workarounds. Is it possible for me to transform the string before it goes to cucumber-expressions? I mean without needing to fork this project?
Sure, like this
Given(transform("I am on the start page (again)"), function () {
...
});
Thank you! My workaround now looks like this: (“Dann” is German for “Then”.)
interface IStepDefinitionBody<T extends unknown[]> {
(this: Mocha.Context, ...args: T): void;
}
function toCucumberExpOrRegExp(description: string | RegExp) {
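// Move the space into the optional group: 'erscheint (wieder)' becomes 'erscheint( wieder)',
// so the cucumber expression still matches when the optional text is omitted.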
return typeof description === 'string' ? description.replace(' (', '( ') : description;
}
export const Dann = function (description: string | RegExp, implementation: IStepDefinitionBody<any>) {
return Then(toCucumberExpOrRegExp(description), implementation);
};
Usage:
Dann('erscheint (wieder) die Startseite', function () {
// ...
});
I wonder if I am losing anything by having implementation: IStepDefinitionBody<any> as parameter instead of implementation: IStepDefinitionBody<T>. Everything seems to be just fine, though.
No, I don't think so. It's not possible to infer the types of the args of a step-def-body statically. I might have replaced any with unknown and require each dev to explicitly state each expected type, although it's not going to give you any runtime guarantees.
Good point! I replaced implementation: IStepDefinitionBody<any> with implementation: IStepDefinitionBody<unknown[]>.
|
gharchive/issue
| 2022-08-01T12:00:29 |
2025-04-01T04:56:07.106507
|
{
"authors": [
"badeball",
"iomedico-beyer"
],
"repo": "badeball/cypress-cucumber-preprocessor",
"url": "https://github.com/badeball/cypress-cucumber-preprocessor/issues/784",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
227877452
|
Greenkeeper badge
https://greenkeeper.io/
https://github.com/greenkeeperio/greenkeeper/issues/416
The badge could say "dependencies: monitored", similar to the david-dm badge.
I assume you'd want a badge that shows something like "dependencies: up to date" like the david-dm one. Does Greenkeeper provide an API for that?
The badge could say "dependencies: monitored",
If all you want is "dependencies: monitored", you could simply use a static badge:
https://img.shields.io/badge/dependencies-monitored-green.svg
@Daniel15 I don't think there currently is, but there's this: https://github.com/greenkeeperio/greenkeeper/issues/422
I'm marking all the needs-upstream-help issues closed.
If there's new information from the upstream service, please post in the thread. If they're actionable by Shields, a maintainer will reopen the issue.
If anyone wants to follow up with these vendors, feel free to do that! Even though the unresolved issues are closed, they are easy to find:
https://github.com/badges/shields/issues?q=label%3Aneeds-upstream-help+is%3Aclosed
|
gharchive/issue
| 2017-05-11T04:49:18 |
2025-04-01T04:56:07.113464
|
{
"authors": [
"Daniel15",
"paulmelnikow",
"stevenvachon"
],
"repo": "badges/shields",
"url": "https://github.com/badges/shields/issues/991",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
356277127
|
fix wercker examples
refs #1994
Still need to think about how to improve the base class, but let's fix this one example as a first step.
Warnings
:warning:
This PR modified service code for wercker but not its test code. That's okay so long as it's refactoring existing code.
Messages
:book:
:sparkles: Thanks for your contribution to Shields, @chris48s!
Generated by :no_entry_sign: dangerJS
|
gharchive/pull-request
| 2018-09-02T12:08:22 |
2025-04-01T04:56:07.116493
|
{
"authors": [
"chris48s",
"shields-ci"
],
"repo": "badges/shields",
"url": "https://github.com/badges/shields/pull/2044",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
611223527
|
update production hosting/deploy docs
I attempted to update the table of who has access to what, but as I was doing it I realised the table we made in the document last week doesn't match up with the selection of invites I received and I don't know who else received what so I've ignored that section for now.
@paulmelnikow - do you feel like having a go at that table, or we can review it in the next call if you want..
Messages
:book:
:sparkles: Thanks for your contribution to Shields, @chris48s!
:book:
Thanks for contributing to our documentation. We :heart: our documentarians!
Generated by :no_entry_sign: dangerJS against 86ce59ac0cc52476c92ac3025844f87c173c370a
Thanks for taking this on! I can have a go at the access table in a follow-on.
|
gharchive/pull-request
| 2020-05-02T16:17:58 |
2025-04-01T04:56:07.119453
|
{
"authors": [
"chris48s",
"paulmelnikow",
"shields-ci"
],
"repo": "badges/shields",
"url": "https://github.com/badges/shields/pull/5013",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
128957222
|
Some minor fixes
This is the stuff that was supposed to go into the branch linusu-fixes before I started messing with the options :)
From my side this LGTM. @badunk could you please review?
Looks great!
Thanks for adding standard
Do you want this merged in before #16?
Yeah, we might as well, it makes it easier to see what was specific to the other one. I'll merge this one and rebase the other on latest master :+1:
|
gharchive/pull-request
| 2016-01-26T22:05:58 |
2025-04-01T04:56:07.123533
|
{
"authors": [
"LinusU",
"badunk"
],
"repo": "badunk/multer-s3",
"url": "https://github.com/badunk/multer-s3/pull/17",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
269638276
|
Update usingcurl-uploads.md
fix typo
I disagree. curl provides a progress meter. Still does. Why use past tense there?
You're right!
|
gharchive/pull-request
| 2017-10-30T15:16:50 |
2025-04-01T04:56:07.131683
|
{
"authors": [
"alawvt",
"bagder"
],
"repo": "bagder/everything-curl",
"url": "https://github.com/bagder/everything-curl/pull/34",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2642394603
|
Tech Option 1 Example
Try edf file I/O with eyelinkio.
Thoughts on eyelinkio:
Pros:
It directly handles edf, the raw Eyelink output format, and frees users from using the edf2asc function.
It's fairly straightforward to use. Using this library and a couple of other common libraries (numpy, pandas, and matplotlib), I was able to plot a heatmap of gaze data (see the sketch after the Cons list below).
It specializes in edf I/O, making it more lightweight than some other libraries; MNE, for example, would require many more dependencies than it does.
Cons:
It doesn't have the best documentation and examples of use – I had to look at their source code to figure out how output data were organized. But, this is something we could take care of for our users.
It's still under development, so it could be unreliable.
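As a rough illustration of the heatmap step mentioned in the Pros above: the plotting part below is plain numpy/matplotlib, while the loading step is left as a commented placeholder because this issue does not show eyelinkio's exact API (treat those names as assumptions, not documented calls).
import numpy as np
import matplotlib.pyplot as plt

def plot_gaze_heatmap(x, y, bins=64):
    # Bin the gaze samples into a 2D histogram and render it as a heatmap.
    heat, xedges, yedges = np.histogram2d(x, y, bins=bins)
    plt.imshow(heat.T, origin="lower",
               extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]],
               aspect="auto", cmap="hot")
    plt.colorbar(label="sample count")
    plt.xlabel("gaze x (px)")
    plt.ylabel("gaze y (px)")
    plt.show()

if __name__ == "__main__":
    # Hypothetical loading step: the real eyelinkio calls are not shown in this issue,
    # so a line like `edf = eyelinkio.read_edf("recording.edf")` is a placeholder here.
    x, y = np.random.rand(2, 1000)  # stand-in gaze data so the sketch runs end to end
    plot_gaze_heatmap(x, y)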
|
gharchive/issue
| 2024-11-07T22:56:02 |
2025-04-01T04:56:07.135228
|
{
"authors": [
"BrendaQiu"
],
"repo": "baharsener/visualEyes",
"url": "https://github.com/baharsener/visualEyes/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
477274255
|
Nuxt?
Is it possible to create some sort of documentation on how the Nuxt integration works? Is it necessary to create a plugin? Something? It's quite confusing. I read that Hooper works OK with Nuxt, but I'm experiencing some issues: the carousel does not render the slides I want it to render, and I don't know where the issue is because my code appears to match the one in the docs. 😞
I can attach code if needed.
Nothing special, it works out of the box.
Code snippet:
<template>
<hooper :settings="hooperSettings" ref="carousel">
<slide>
<div v-for="index in 5" :key="index">
{{index}}
</div>
</slide>
</hooper>
</template>
<script>
import { Hooper, Slide } from 'hooper'
import 'hooper/dist/hooper.css'
export default {
name: 'Div',
components: {
Hooper,
Slide
},
data() {
return {
hooperSettings: {
itemsToShow: 1,
itemsToSlide: 1
}
}
},
methods: {
slidePrev () {
this.$refs.carousel.slidePrev();
},
slideNext () {
this.$refs.carousel.slideNext();
}
}
}
</script>
Like @rensite said, hooper supports SSR out of the box. No need to create a plugin for it.
You might've missed importing the css, as Hooper requires some minimal CSS to work. But you will have to provide your own styles to adjust height, width, colors.
This is a sample I uploaded recently to Codesandbox: https://codesandbox.io/s/hooper-nuxt-765ot
Ok, thanks. I modified my slider so it's like the example but it still renders whatever the slides it wants, not the ones I want it to render, any ideas on why this would be happening?
Ok, thanks. I modified my slider so it's like the example but it still renders whatever the slides it wants, not the ones I want it to render, any ideas on why this would be happening?
Could you attach the code?
<Slide :style="{width: pWidth, height: pHeight}"
v-for="(option, index) in options"
:index="index"
:key="index"
>
<SinglePhotoForSlider :pWidth="pWidth" :pHeight="pHeight" class="photo" :option="(option.img || option.path) + '&fit=min&q=60'" />
</Slide>
<Navigation slot="hooper-addons"></Navigation>
Here it is
😔
Looks like the loop issue. Try to move v-for="(option, index) in options" :index="index" :key="index" into SinglePhotoForSlider as follows:
<Slide :style="{width: pWidth, height: pHeight}">
<SinglePhotoForSlider v-for="(option, index) in options" :index="index" :key="index" :pWidth="pWidth" :pHeight="pHeight" class="photo" :option="(option.img || option.path) + '&fit=min&q=60'" />
</Slide>
Hope the snippet above solves the problem ;)
I'm afraid that doesn't work. Images appear one on top of another (vertically). I've also tried adding the v-for to a div inside the slide, and outside it. Nothing works. :S For some reason it does not pick up the settings from the data...
Could you create a sandbox example or provide your live project with the broken hooper carousel? Gonna take a look at it. I assume it may also be a CSS issue.
Here: https://codesandbox.io/s/codesandbox-nuxt-t6q72
This is not representative of what is happening to me: I can see the photos, but I cannot use hooper settings to modify anything. I don't know why the sandbox is behaving like that (probably I forgot something), but at least you can check the code. 🤦🏼♀️
I had the same problem and it's because the hooper CSS is missing. Inspect the page and search for the .hooper class; if you can't find it, that's the cause.
In my case, it wasn't adding the CSS because I'm using nuxt-purgecss, adding whitelistPatterns: [/hooper/], fixed the problem.
I hope it helps!
I'm afraid CSS is not the problem; it is imported and I'm not using PurgeCSS or similar. The hooper class is there. Thanks anyway, I'll keep investigating. 😔
Related issue #51
Thanks @saerose
I think I figured it out. I have the sliders inside a component that has a v-show; I changed it to v-if and now it's working.
|
gharchive/issue
| 2019-08-06T09:36:06 |
2025-04-01T04:56:07.154578
|
{
"authors": [
"Abdelrahman3D",
"alexon1234",
"logaretm",
"rensite",
"rubenmoya",
"saerose"
],
"repo": "baianat/hooper",
"url": "https://github.com/baianat/hooper/issues/117",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2155564875
|
[Feature] New-connection tests currently do not tear connections down promptly, so the number of established connections keeps growing
Is there a configuration option to tear the connection down quickly after it is established? For example, actively sending an RST packet.
Configure cps and do not configure cc; then each connection sends one request and is closed after the response.
If you like dperf, please give it a star. Thanks.
No, there isn't.
SYN -> SYN+ACK -> Request -> Reply -> RST, like this?
Either is fine; ideally it would be configurable, optionally sending a few packets after the connection is established and then an RST.
Tearing down via FIN is currently too slow, so the connection table fills up easily and it is hard to observe new-connection test performance over a longer period.
I tried a change myself, modifying the following in tcp_server_process_data and tcp_client_process_data:
if (sk->keepalive == 0) {
tx_flags |= TH_FIN; // changed to tx_flags |= TH_RST;
}
Try https://github.com/pengjianzhang/dperf
On the client, configure: fast_close
Does it support configuring how many packets are sent before fast_close?
See the keepalive configuration.
How did the testing go?
RST works.
/root/dperf-main/src/config.c:1128:15: error: #if with no expression
#if KNI_ENABLE
// There is a compile error here; it should be changed to #ifdef KNI_ENABLE
Please give more details.
|
gharchive/issue
| 2024-02-27T03:17:03 |
2025-04-01T04:56:07.168168
|
{
"authors": [
"pengjianzhang",
"zhangyalin"
],
"repo": "baidu/dperf",
"url": "https://github.com/baidu/dperf/issues/419",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
565136320
|
Failed to receive response from service [get_access_token]
logid=794248 Failed to receive response from service [get_access_token] remote_server=127.0.0.1:6379 latency=43us error_msg=[E112]Not connected to 127.0.0.1:6379 yet, server_id=1 [R1][E112]Not connected to 127.0.0.1:6379 yet, server_id=1
Which configuration file needs to be changed for this? It worked fine before; this problem suddenly appeared.
logid=794248 Failed to receive response from service [get_access_token]
This means the request to the get_access_token service in backend.conf failed; you need to investigate the issue with that service.
|
gharchive/issue
| 2020-02-14T06:40:14 |
2025-04-01T04:56:07.171739
|
{
"authors": [
"ShaneTian",
"twoyoungtwosimple"
],
"repo": "baidu/unit-uskit",
"url": "https://github.com/baidu/unit-uskit/issues/31",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
111963034
|
Memory Chart Shows Incorrect MB
Hi,
Thank you for sharing the Memcached Dashboard. You did a wonderful job.
In the memory chart, Total memory is showing 2 MB, but the actual value is 2048 MB, i.e. 2 GB.
Number of bytes this server is allowed to use for storage is shown correctly.
Not sure what you mean?
I have allocated 2048 MB for memcached, but the chart is showing 2 MB as total memory.
can you add a screenshot?
Added a screenshot. Please check it.
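For reference, memcached reports its storage limit via the limit_maxbytes stat, in bytes. A minimal sketch of the conversion involved; the idea that the chart divides by 1024 once too often is only a guess at the cause, not a confirmed diagnosis:
# limit_maxbytes as memcached would report it for a 2048 MB allocation
limit_maxbytes = 2048 * 1024 * 1024

total_mb = limit_maxbytes / (1024 * 1024)    # correct conversion: 2048.0 MB
over_divided = limit_maxbytes / (1024 ** 3)  # one division too many: 2.0, matching the chart
print(total_mb, over_divided)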
|
gharchive/issue
| 2015-10-17T13:22:23 |
2025-04-01T04:56:07.178609
|
{
"authors": [
"bainternet",
"indra452"
],
"repo": "bainternet/Memchaced-Dashboard",
"url": "https://github.com/bainternet/Memchaced-Dashboard/issues/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2134948116
|
acodeX-server: command not found
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
Go to '...'
Click on '....'
Scroll down to '....'
See error
Expected behavior
Expected to open AcodeX server.
Screenshots
If applicable, add screenshots to help explain your problem.
Smartphone (please complete the following information):
Device: [LG VELVET]
OS: [Android 13]
Termux [from f-droid]
Version ?
Where did you install Termux from?
f-droid
RESOLVED! I reinstalled and everything is OK.
I have the same problem. Did you reinstall from the same page?
|
gharchive/issue
| 2024-02-14T18:37:41 |
2025-04-01T04:56:07.184101
|
{
"authors": [
"8376459",
"Aleeeksanders",
"bajrangCoder"
],
"repo": "bajrangCoder/acode-plugin-acodex",
"url": "https://github.com/bajrangCoder/acode-plugin-acodex/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
183513897
|
Example doesn't compile
I'm on 64bit Linux with latest OF of_v0.9.3_linux64_release.
I've tried compiling with/out -std=c++11
In file included from src/ofApp.h:31:0,
from src/main.cpp:26:
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:169:10: error: ‘thread’ in namespace ‘std’ does not name a type
std::thread _thread;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:227:18: error: ‘mutex’ in namespace ‘std’ does not name a type
mutable std::mutex mutex;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:57:7: error: ‘bool ofx::Video::IPVideoGrabber::isFrameNew() const’ marked override, but does not override
bool isFrameNew() const override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:59:7: error: ‘bool ofx::Video::IPVideoGrabber::isInitialized() const’ marked override, but does not override
bool isInitialized() const override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:60:7: error: ‘bool ofx::Video::IPVideoGrabber::setPixelFormat(ofPixelFormat)’ marked override, but does not override
bool setPixelFormat(ofPixelFormat pixelFormat) override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:61:16: error: ‘ofPixelFormat ofx::Video::IPVideoGrabber::getPixelFormat() const’ marked override, but does not override
ofPixelFormat getPixelFormat() const override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:66:12: error: conflicting return type specified for ‘virtual ofPixels& ofx::Video::IPVideoGrabber::getPixels()’
ofPixels& getPixels() override;
^
In file included from ../../../libs/openFrameworks/ofMain.h:15:0,
from src/ofApp.h:29,
from src/main.cpp:26:
../../../libs/openFrameworks/types/ofBaseTypes.h:106:14: error: overriding ‘T* ofBaseHasPixels_<T>::getPixels() [with T = unsigned char]’
virtual T * getPixels()=0;
^
In file included from src/ofApp.h:31:0,
from src/main.cpp:26:
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:67:18: error: ‘const ofPixels& ofx::Video::IPVideoGrabber::getPixels() const’ marked override, but does not override
const ofPixels& getPixels() const override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:72:13: error: ‘ofTexture& ofx::Video::IPVideoGrabber::getTexture()’ marked override, but does not override
ofTexture& getTexture() override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:73:19: error: ‘const ofTexture& ofx::Video::IPVideoGrabber::getTexture() const’ marked override, but does not override
const ofTexture& getTexture() const override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:75:7: error: ‘bool ofx::Video::IPVideoGrabber::isUsingTexture() const’ marked override, but does not override
bool isUsingTexture() const override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:78:26: error: ‘std::vector<ofTexture>& ofx::Video::IPVideoGrabber::getTexturePlanes()’ marked override, but does not override
std::vector<ofTexture>& getTexturePlanes() override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:79:32: error: ‘const std::vector<ofTexture>& ofx::Video::IPVideoGrabber::getTexturePlanes() const’ marked override, but does not override
const std::vector<ofTexture>& getTexturePlanes() const override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:82:10: error: ‘void ofx::Video::IPVideoGrabber::draw(float, float) const’ marked override, but does not override
void draw(float x, float y) const override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:83:7: error: ‘void ofx::Video::IPVideoGrabber::draw(float, float, float, float) const’ marked override, but does not override
void draw(float x, float y, float w, float h) const override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:84:7: error: ‘void ofx::Video::IPVideoGrabber::draw(const ofPoint&) const’ marked override, but does not override
void draw(const ofPoint& point) const override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:85:7: error: ‘void ofx::Video::IPVideoGrabber::draw(const ofRectangle&) const’ marked override, but does not override
void draw(const ofRectangle& rect) const override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:91:11: error: ‘float ofx::Video::IPVideoGrabber::getWidth() const’ marked override, but does not override
float getWidth() const override;
^
../../../addons/ofxIpVideoGrabber/src/IPVideoGrabber.h:92:11: error: ‘float ofx::Video::IPVideoGrabber::getHeight() const’ marked override, but does not override
float getHeight() const override;
Can you please try the develop branch?
Sorry, I didn't mention it, but I've tried both master & develop.
Hm. Develop works for me on linux on the master branch of openFrameworks. I'm not sure what the reason is for 0.9.3 not working on Master. I'll definitely take a look at it, but it may be in a week or so.
Hey there, I just downloaded a fresh 0.9.4 release (and also installed the dependencies), cloned the master branch of ofxIpVideoGrabber in the addons folder, compiled it and it just ran using the make files included in the example.
I'm on Ubuntu 16.04.1.
|
gharchive/issue
| 2016-10-17T20:07:06 |
2025-04-01T04:56:07.194768
|
{
"authors": [
"bakercp",
"zzart"
],
"repo": "bakercp/ofxIpVideoGrabber",
"url": "https://github.com/bakercp/ofxIpVideoGrabber/issues/24",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
924164309
|
WIP: Refactor api calls
[ ] Test
Refactor api calls
|
gharchive/pull-request
| 2021-06-17T17:16:30 |
2025-04-01T04:56:07.196667
|
{
"authors": [
"ungarson"
],
"repo": "baking-bad/bcd",
"url": "https://github.com/baking-bad/bcd/pull/284",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2467196343
|
question on align identity > 90%
Hi Sigalign team,
Thanks for the amazing library for sequence alignment. I have a question related to aligning highly similar sequences against a reference. If the read always has identity above some threshold, e.g. 90%, to the reference genomes/sequences, would it still be accurate compared to heuristics such as minimap2 (what is the minimum threshold for SigAlign to be accurate)? A second question is whether it provides overlap alignment (that is, gaps extending at both ends of query and reference are not penalized), like the semi-global alignment in usearch/vsearch.
Thanks,
Jianshu
Hi Jianshu!
Thank you for your interest in SigAlign.
Question 1: accuracy in highly similar sequences
1) On "high" similarity
For SigAlign, "high" similarity generally means around 98-99%, not >90%.
SigAlign is particularly suited for sequences with high accuracy, such as those from Illumina NovaSeq (99.86%) and MiSeq (98.79%) (calculated from "read mapping" tests in SigAlign's paper).
This doesn’t mean SigAlign only finds sequences with >98% identity. In the paper's tests, SigAlign can detect <97% identity for MiSeq data, but it generally performs best with targets in the >98% range.
2) On accuracy
SigAlign is non-heuristic, meaning it:
Finds all alignments within the defined scope (see details in the Wiki).
Provides consistent results for a single genome regardless of reference composition.
However, SigAlign doesn’t guarantee finding the "biological truth" (e.g., exact origin of a read), so we can't claim it is more accurate than other aligners like minimap2.
That said, our paper shows SigAlign demonstrates high sensitivity and precision (when filtering for the highest-scoring alignments) with simulated MiSeq data.
3) Use case suggestions
SigAlign is expected to perform well with:
similar length to MiSeq (200-500 bp)
similar accuracy to MiSeq (>98% identity)
complex references containing diverse genomes
low repeat region (e.g., bacterial genomes)
Question 2: Semi-global alignment
Yes, SigAlign supports semi-global alignment.
(Semi-global mode may offer lower memory usage and faster speeds compared to local alignment)
|
gharchive/issue
| 2024-08-15T02:22:19 |
2025-04-01T04:56:07.234018
|
{
"authors": [
"baku4",
"jianshu93"
],
"repo": "baku4/sigalign",
"url": "https://github.com/baku4/sigalign/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1852256730
|
Adding 80RATE/20WETH pool to weighted allowlist
Description
Please include a summary of the change and if relevant which issue is fixed. Please also include relevant motivation and context.
Type of change
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] Dependency changes
[ ] Code refactor / cleanup
[ ] Documentation or wording changes
[ ] Other
How should this be tested?
This PR is adding the 80RATE/20WETH pool to the weighted pool allow list. Test by ensuring the UI allows provisioning liquidity to it.
Please provide instructions so we can test. Please also list any relevant details for your test configuration.
[x] Test A: Add liquidity to the 80RATE/20WETH pool, assert all works as intended.
Visual context
Please provide any relevant visual context for UI changes or additions. This could be static screenshots or a loom screencast.
Checklist:
[x] I have performed a self-review of my own code
[x] I have requested at least 1 review (If the PR is significant enough, use best judgement here)
[x] I have commented my code where relevant, particularly in hard-to-understand areas
[x] If package-lock.json has changes, it was intentional.
[x] The base of this PR is master if hotfix, develop if not
@oddaf Hey. If you are not already in contact with Balancer, please DM me on telegram to get a group fired up: https://t.me/tritium_vlk
You should be good to go on the pool in a few minutes.
|
gharchive/pull-request
| 2023-08-15T22:57:42 |
2025-04-01T04:56:07.312809
|
{
"authors": [
"Tritium-VLK",
"oddaf"
],
"repo": "balancer/frontend-v2",
"url": "https://github.com/balancer/frontend-v2/pull/3981",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
121400051
|
[Request] Changelog
Having a changelog would be much appreciated in case we want to upgrade our current version of Waterline. There could be new features or something that behaves differently from the previous version. Looking at commits, it's pretty much impossible to tell what belongs where between releases.
Done! Changelog
@particlebanana much appreciated
|
gharchive/issue
| 2015-12-10T04:26:32 |
2025-04-01T04:56:07.315908
|
{
"authors": [
"moisesrodriguez",
"particlebanana"
],
"repo": "balderdashy/waterline",
"url": "https://github.com/balderdashy/waterline/issues/1247",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
695231547
|
Invalid events in API Inspector
Description
I recently updated from 1.7 to 1.9 and after capturing a frame I saw that some events in the API Inspector window were marked as "Indirect sub-commands", which made reading details (like function parameters) impossible. The issue seems to be caused by vkCmdDrawIndirectCount and is visible in subsequent command buffers. I checked 1.8 and it's also affected.
Steps to reproduce
I've sent a simple application that reproduces the issue in an email.
Capture any frame from the application
Select vkCmdDraw(3, 1) from Colour Pass # 2
API Inspector shows EID:22, Event:"Indirect sub-command"
Environment
RenderDoc version: 1.8+
Operating System: Windows 10
Graphics API: Vulkan
I think it may have been broken on 1.7 just in a different way. A command buffer with an indirect count draw was re-ordering later commands in itself but not later commands in other command buffers. In older versions I was reserving all the data needed for indirect-count draws and then removing the excess, and in 1.8 I changed it to not reserve space since for very large maxCounts that's wasteful. In your case the draw was drawing exactly maxcount so in 1.7 nothing was re-ordering.
That commit should avoid the re-order entirely and fix the problem.
|
gharchive/issue
| 2020-09-07T15:52:05 |
2025-04-01T04:56:07.319461
|
{
"authors": [
"baldurk",
"luki1129"
],
"repo": "baldurk/renderdoc",
"url": "https://github.com/baldurk/renderdoc/issues/2042",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
316271171
|
Problems with JDWP connection
Description
I've tried connecting render doc to an Android Device (specifically, Samsung Galaxy Tab S3). Android 7.0 is installed on that device.
The renderdoc apk was installed correctly and starts up properly when selecting the device as a replay context.
However, after I select the apk that I want to debug in the 'Executable Path' box and hit 'Launch', it would say that it couldn't connect to the debugger ('debuggable' is set to true in the AndroidManifest).
There's no other application running that makes use of the adb other than RenderDoc.
To see what was causing this, I opened the Diagnostic Log File using the 'Help->View Diagnostic Log File' option.
When I was searching for potential errors, this line struck my interest:
RDOC 000320: [15:31:38] jdwp.cpp( 399) - Error - Couldn't make JDWP connection
RDOC 000320: [15:31:38] android.cpp( 217) - Error - Failed to inject using JDWP
Could you maybe tell me what would cause this error?
Something to note:
I tried the same procedure with an Samsung Galaxy S8 device and everything worked fine so this seems to be an Android or device specific setting.
If you need more informations, please don't hesitate to contact me.
The content of the whole log file:
RDOC 000320: [15:31:10] core.cpp( 298) - Log - RenderDoc v1.0 64-bit Release (5ef2d0bb5e86a37ba6c250987e9dc8c33c26435c) loaded in replay application
RDOC 010120: [15:31:10] core.cpp( 298) - Log - RenderDoc v1.0 64-bit Release (5ef2d0bb5e86a37ba6c250987e9dc8c33c26435c) loaded in replay application
QTRD 000320: [15:31:10] qrenderdoc.cpp( 87) - Log - QRenderDoc initialising.
RDOC 007420: [15:31:10] core.cpp( 298) - Log - RenderDoc v1.0 64-bit Release (5ef2d0bb5e86a37ba6c250987e9dc8c33c26435c) loaded in replay application
QTRD 000320: [15:31:12] ToolWindowManager.cpp( 817) - Warning - invalid splitter encountered
RDOC 000320: [15:31:12] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e forward tcp:39970 localabstract:renderdoc_39920'
RDOC 000320: [15:31:12] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e forward tcp:38970 localabstract:renderdoc_38920'
RDOC 000320: [15:31:12] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell getprop ro.product.manufacturer'
RDOC 000320: [15:31:13] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell getprop ro.product.model'
RDOC 000320: [15:31:13] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell setprop debug.vulkan.layers :'
RDOC 000320: [15:31:16] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell setprop debug.vulkan.layers :'
RDOC 000320: [15:31:20] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e forward tcp:39970 localabstract:renderdoc_39920'
RDOC 000320: [15:31:20] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e forward tcp:38970 localabstract:renderdoc_38920'
RDOC 000320: [15:31:26] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell pm list packages -3'
RDOC 000320: [15:31:27] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e forward tcp:39970 localabstract:renderdoc_39920'
RDOC 000320: [15:31:27] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e forward tcp:38970 localabstract:renderdoc_38920'
RDOC 000320: [15:31:28] android_patch.cpp( 497) - Log - Checking that APK is debuggable
RDOC 000320: [15:31:28] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell dumpsys package com.flaregames.monsterio'
RDOC 000320: [15:31:28] android_patch.cpp( 460) - Log - Checking for root access on 6ec8ea1eb10f312e
RDOC 000320: [15:31:28] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e root'
RDOC 000320: [15:31:28] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell whoami'
RDOC 000320: [15:31:28] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell test -e /system/xbin/su && echo found'
RDOC 000320: [15:31:32] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell cmd package resolve-activity -c android.intent.category.LAUNCHER com.flaregames.monsterio'
RDOC 000320: [15:31:32] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e forward --remove tcp:39021'
RDOC 000320: [15:31:32] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell am force-stop com.flaregames.monsterio'
RDOC 000320: [15:31:33] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell setprop debug.vulkan.layers VK_LAYER_RENDERDOC_Capture'
RDOC 000320: [15:31:33] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell mkdir -p /sdcard/Android/data/com.flaregames.monsterio'
RDOC 000320: [15:31:33] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell setprop debug.rdoc.RENDERDOC_CAPTUREOPTS ababaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabcbaa'
RDOC 000320: [15:31:34] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell pm path com.flaregames.monsterio'
RDOC 000320: [15:31:34] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell ls /data/app/com.flaregames.monsterio-1//lib//libVkLayer_GLES_RenderDoc.so'
RDOC 000320: [15:31:34] android.cpp( 173) - Log - No library found in /data/app/com.flaregames.monsterio-1//lib//libVkLayer_GLES_RenderDoc.so for com.flaregames.monsterio - assuming injection is required.
RDOC 000320: [15:31:34] android.cpp( 185) - Log - Setting up to launch the application as a debugger to inject.
RDOC 000320: [15:31:34] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell am start -S -D com.flaregames.monsterio/com.keenflare.monsterio.MonsterioActivity'
RDOC 000320: [15:31:35] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e forward tcp:39970 localabstract:renderdoc_39920'
RDOC 000320: [15:31:35] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e forward tcp:38970 localabstract:renderdoc_38920'
RDOC 000320: [15:31:35] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell ps -A | grep com.flaregames.monsterio'
RDOC 000320: [15:31:36] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell ps -A | grep com.flaregames.monsterio'
RDOC 000320: [15:31:36] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell ps -A | grep com.flaregames.monsterio'
RDOC 000320: [15:31:36] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell ps -A | grep com.flaregames.monsterio'
RDOC 000320: [15:31:37] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e shell ps -A | grep com.flaregames.monsterio'
RDOC 000320: [15:31:37] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e forward tcp:39970 localabstract:renderdoc_39920'
RDOC 000320: [15:31:37] android_tools.cpp( 283) - Log - COMMAND: C:\Users\fklinge\AppData\Local\Android\Sdk\platform-tools\adb.exe '-s 6ec8ea1eb10f312e forward tcp:38970 localabstract:renderdoc_38920'
RDOC 000320: [15:31:38] jdwp.cpp( 399) - Error - Couldn't make JDWP connection
RDOC 000320: [15:31:38] android.cpp( 217) - Error - Failed to inject using JDWP
Environment
RenderDoc build: v1.0 (build from 5ef2d0bb)
Operating System: Windows 7 64-Bit
API: OpenGL ES 3.0
Device: Samsung Galaxy Tab S3
This looks like the bug that's been reported a couple of times where some android devices don't support ps -A, so renderdoc can't determine the PID of the apk and can't set up the JDWP connection from there.
Can you try a recent nightly build? This has been fixed and I now have a fallback that tries just plain ps as well.
Yup, that fixed it. Thanks a lot!
|
gharchive/issue
| 2018-04-20T13:37:16 |
2025-04-01T04:56:07.345850
|
{
"authors": [
"FelixK15",
"baldurk"
],
"repo": "baldurk/renderdoc",
"url": "https://github.com/baldurk/renderdoc/issues/962",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2117280227
|
Add standard docusaurus-builder docs
Adding product-os/docusaurus-builder docs to balena-sound, removing the custom builder.
Needs #654 before this PR is ready.
Staging Link for the new balena-sound docs: https://052b7988.balena-sound.pages.dev/
The Docs PR is ready to review @alanb128 @rahul-thakoor @klutchell
|
gharchive/pull-request
| 2024-02-04T19:58:46 |
2025-04-01T04:56:07.347778
|
{
"authors": [
"vipulgupta2048"
],
"repo": "balena-io-experimental/balena-sound",
"url": "https://github.com/balena-io-experimental/balena-sound/pull/655",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1248087255
|
Getting Started guides CLI installer shows incorrect platform instructions
When you navigate to https://www.balena.io/docs/learn/getting-started/raspberrypi3/nodejs/#install-the-balena-cli on a Linux machine,
the Linux tab is selected but the page currently shows the Mac instructions by default.
@vipulgupta2048 what machine are you seeing that on, and also can you paste your user-agent string?
@nucleardreamer Here you go https://whatismybrowser.com/w/K98S5KD
|
gharchive/issue
| 2022-05-25T13:19:44 |
2025-04-01T04:56:07.356447
|
{
"authors": [
"nucleardreamer",
"vipulgupta2048"
],
"repo": "balena-io/docs",
"url": "https://github.com/balena-io/docs/issues/2297",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
632028720
|
Balena-engine.socket is created as a directory
Description
We occasionally find the "heartbeat" is red on the dashboard. Running diagnostics, we see the engine is not running, the supervisor is not running, and often it cannot connect to Docker Hub. Further investigation shows
balena-engine.socket loaded failed failed Docker Socket for the API
Restarting the balena engine via systemctl restart balena-engine fails, and journalctl -xu shows that the socket is nonfunctional (sorry - lost the lines before I could copy and paste them). Then looking at /var/run/balena-engine.socket shows that it is created as a directory and not a file.
Removing the directory and restarting the balena-engine usually restores everything to operating condition.
This seems to only occur on 2.48 systems.
**Environment**
Device type: Raspberry Pi (v1 / Zero / Zero W)
OS version: balenaOS 2.48.0+rev1
Supervisor version: 10.8.0
Can you share some more details on the containers you're running? Are you using any supervisor labels?
We've seen this on systems running a single Alpine-based container, and on systems hosting 3 containers (2 Alpine, 1 Debian). It's only seen on systems running host OS 2.48.
We are not using any supervisor labels.
FWIW, we had a conversation with one of Balena's tech support folks named Hugh who also saw this and may have filed a bug report.
Sorry this took some time to come back to you. Did anything change regarding this issue?
Are you interacting with the docker/balena socket at all in your container?
It would be useful to have a minimal reproduction for me to try this myself. I think your single-container deploy would work for that; any chance you could share it with me?
I haven't created a minimal reproduction but it happens quite often. When I notice, I typically remove the directory and everything recovers. I suspect it might self-recover also, not sure. Next time it happens I can post to support and grant access and ask them to forward to you.
sounds good, thanks. I can take care of connecting your support request to this ticket o/
Hi Robert, I have logged the site demonstrating this issue via Intercom and asked them to forward it to you.
@etcetc it looks like we found the issue internally. I will keep you posted when we have a solution for this o/
Good news thanks.
This Supervisor PR is in the works and will fix this by moving to mounts instead of binds for volume configs.
This issue may be avoided by upgrading to Supervisor v14+.
|
gharchive/issue
| 2020-06-05T22:43:11 |
2025-04-01T04:56:07.371144
|
{
"authors": [
"cywang117",
"etcetc",
"robertgzr"
],
"repo": "balena-os/balena-engine",
"url": "https://github.com/balena-os/balena-engine/issues/220",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
406272754
|
Build error fetching balena_git (rename issue?)
balena_git.bb:do_fetch error below (also log attached)
on build command
balena-yocto-scripts/build/barys -m var-som-mx6
Could this be a resin-to-balena rename problem?
WARNING: balena-17.12.0-dev+git60400e2cd0eb10b72f3c1bc12befb6e358d1cb1f-r0 do_fetch: Failed to fetch URL git://github.com/resin-os/balena.git;branch=17.12-resin;destsuffix=git/src/import, attempting MIRRORS if available
ERROR: balena-17.12.0-dev+git60400e2cd0eb10b72f3c1bc12befb6e358d1cb1f-r0 do_fetch: Fetcher failure: Fetch command export PSEUDO_DISABLED=1; export DBUS_SESSION_BUS_ADDRESS="unix:abstract=/tmp/dbus-Zx6XwJTIDB"; export SSH_AUTH_SOCK="/run/user/1000/keyring/ssh"; export PATH="/opt/balena-variscite/build/tmp/sysroots-uninative/x86_64-linux/usr/bin:/opt/balena-variscite/layers/poky/scripts:/opt/balena-variscite/build/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/balena/17.12.0-dev+git60400e2cd0eb10b72f3c1bc12befb6e358d1cb1f-r0/recipe-sysroot-native/usr/bin/arm-poky-linux-gnueabi:/opt/balena-variscite/build/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/balena/17.12.0-dev+git60400e2cd0eb10b72f3c1bc12befb6e358d1cb1f-r0/recipe-sysroot/usr/bin/crossscripts:/opt/balena-variscite/build/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/balena/17.12.0-dev+git60400e2cd0eb10b72f3c1bc12befb6e358d1cb1f-r0/recipe-sysroot-native/usr/sbin:/opt/balena-variscite/build/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/balena/17.12.0-dev+git60400e2cd0eb10b72f3c1bc12befb6e358d1cb1f-r0/recipe-sysroot-native/usr/bin:/opt/balena-variscite/build/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/balena/17.12.0-dev+git60400e2cd0eb10b72f3c1bc12befb6e358d1cb1f-r0/recipe-sysroot-native/sbin:/opt/balena-variscite/build/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/balena/17.12.0-dev+git60400e2cd0eb10b72f3c1bc12befb6e358d1cb1f-r0/recipe-sysroot-native/bin:/opt/balena-variscite/layers/poky/bitbake/bin:/opt/balena-variscite/build/tmp/hosttools"; export HOME="/home/roberto"; LANG=C git -c core.fsyncobjectfiles=0 clone --bare --mirror git://github.com/resin-os/balena.git /opt/balena-variscite/build/downloads/git2/github.com.resin-os.balena.git --progress failed with exit code 128, output:
Cloning into bare repository '/opt/balena-variscite/build/downloads/git2/github.com.resin-os.balena.git'...
fatal: unable to connect to github.com:
github.com[0: 140.82.118.3]: errno=No route to host
github.com[1: 140.82.118.4]: errno=No route to host
ERROR: balena-17.12.0-dev+git60400e2cd0eb10b72f3c1bc12befb6e358d1cb1f-r0 do_fetch: Fetcher failure for URL: 'git://github.com/resin-os/balena.git;branch=17.12-resin;destsuffix=git/src/import'. Unable to fetch URL from any source.
ERROR: balena-17.12.0-dev+git60400e2cd0eb10b72f3c1bc12befb6e358d1cb1f-r0 do_fetch: Function failed: base_do_fetch
ERROR: Logfile of failure stored in: /opt/balena-variscite/build/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/balena/17.12.0-dev+git60400e2cd0eb10b72f3c1bc12befb6e358d1cb1f-r0/temp/log.do_fetch.57893
ERROR: Task (/opt/balena-variscite/build/../layers/meta-resin/meta-resin-common/recipes-containers/balena/balena_git.bb:do_fetch) failed with exit code '1'
log.do_fetch.txt
Yes, the issue is still present.
I think the root of the problem is this URI in balena_git.bb
git://github.com/resin-os/balena.git;branch=${BALENA_BRANCH};destsuffix=git/src/import
This path also gives a 'not found' error with git...
|
gharchive/issue
| 2019-02-04T10:50:57 |
2025-04-01T04:56:07.384244
|
{
"authors": [
"robertdax"
],
"repo": "balena-os/balena-variscite",
"url": "https://github.com/balena-os/balena-variscite/issues/84",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
874044577
|
Parallelize Leviathan Multi-Suite testing among multiple workers
This PR adds support for running test suites in parallel across multiple available workers. It also includes improvements to the client cloud helpers, code readability, and testing performance.
@vipulgupta2048 depending on how things are going on your end, we could merge this (after I quickly test it), and then make the changes we discussed yesterday in a subsequent PR?
@rcooke-warwick Opened #416 for that
@vipulgupta2048 this is ready to be merged once we merge https://github.com/balena-os/leviathan/pull/402 and https://github.com/balena-os/leviathan/pull/412
Tested running against an app with one and with two testbots (RPi3 as DUT) and it's working great
@balena-ci rebase
@balena-ci rebase
@balena-ci I self-certify!
|
gharchive/pull-request
| 2021-05-02T21:06:46 |
2025-04-01T04:56:07.387528
|
{
"authors": [
"rcooke-warwick",
"robertgzr",
"vipulgupta2048"
],
"repo": "balena-os/leviathan",
"url": "https://github.com/balena-os/leviathan/pull/393",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1536177873
|
Client: Drop config NPM package
Converted the convention to accept a JS config file, making the config npm package unnecessary.
Signed-off-by: Vipul Gupta (@vipulgupta2048) vipul@balena.io
@balena-ci I self-certify!
|
gharchive/pull-request
| 2023-01-17T10:55:49 |
2025-04-01T04:56:07.389130
|
{
"authors": [
"vipulgupta2048"
],
"repo": "balena-os/leviathan",
"url": "https://github.com/balena-os/leviathan/pull/941",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1292308711
|
Pi Zero W and Pi 3A+ both connect to the cloud but return "nothing is listening on port 80" errors on the public URL.
I have tried to get the Balena-cam working today on both a Pi 3A+ and a Pi Zero W via the "Deploy with Balena" route but while both connect to my account / the cloud, each one yields the following error when trying to access the public IP address to view the camera:
"We encountered an error when reaching this balena.io device:
(ID)
tunneling socket could not be established: 500
One possible reason is because nothing is listening on port 80 on the device."
I have tried the current builds (as of today, 03/07/22) for the Pi Zero and both 32 and 64 bit versions for the Pi 3. In all cases I am using the official camera module and the hardware in each case works from Pi OS.
The other devices I currently have running other balena tasks are all still connecting, so I don't think it's a connectivity issue on my end or a service issue at balena.
Here is the log from the Pis on startup:
Supervisor starting
Applying configuration change {"SUPERVISOR_POLL_INTERVAL":"900000","SUPERVISOR_DELTA":"1","SUPERVISOR_DELTA_REQUEST_TIMEOUT":"59000","SUPERVISOR_DELTA_VERSION":"3"}
Applying boot config: {"gpu_mem_1024":"448","gpu_mem_256":"192","gpu_mem_512":"256","start_x":"1","avoid_warnings":"1","disable_splash":"1","dtparam":["i2c_arm=on","spi=on","audio=on"],"gpu_mem":"16"}
Applied boot config: {"gpu_mem_1024":"448","gpu_mem_256":"192","gpu_mem_512":"256","start_x":"1","avoid_warnings":"1","disable_splash":"1","dtparam":["i2c_arm=on","spi=on","audio=on"],"gpu_mem":"16"}
Applying boot config: {}
Applied configuration change {"SUPERVISOR_POLL_INTERVAL":"900000","SUPERVISOR_DELTA":"1","SUPERVISOR_DELTA_REQUEST_TIMEOUT":"59000","SUPERVISOR_DELTA_VERSION":"3"}
Applied boot config: {}
Applying configuration change {"SUPERVISOR_VPN_CONTROL":"true"}
Applied configuration change {"SUPERVISOR_VPN_CONTROL":"true"}
Rebooting
Supervisor starting
Creating network 'default'
Supervisor starting
Can someone please offer me some assistance with this?
Same here! I tried this with "Deploy with Balena" and ended up the same as you.
|
gharchive/issue
| 2022-07-03T13:26:54 |
2025-04-01T04:56:07.409599
|
{
"authors": [
"GOTO-GOSUB",
"PlesnivejSejra"
],
"repo": "balenalabs/balena-cam",
"url": "https://github.com/balenalabs/balena-cam/issues/105",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
774779525
|
UPNP : All servers use the same default UUID
When running multiple UPnP servers on the same network (e.g. in standalone mode), only one is visible to clients; which server is visible is a race condition. Checking the gmediarender code shows that a default hard-coded ID is used when none is specified.
https://github.com/hzeller/gmrender-resurrect/blob/master/src/main.c#L83
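As a rough illustration of the idea (this is not the actual balenaSound fix, and Java is used only for the sketch): a per-device UUID can be derived deterministically from something unique to the device, such as its hostname, so two renderers on the same network never advertise the same ID.
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

// Sketch: derive a stable, device-specific UUID instead of relying on the
// hard-coded default, so multiple UPnP renderers on one network stay distinct.
public class DeviceUuid {
    public static void main(String[] args) throws Exception {
        String hostname = InetAddress.getLocalHost().getHostName();
        UUID deviceUuid = UUID.nameUUIDFromBytes(hostname.getBytes(StandardCharsets.UTF_8));
        // This value would then be passed to the renderer instead of the default ID.
        System.out.println("device uuid: " + deviceUuid);
    }
}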
Fixed by #381.
balenaSound v3.4.6
|
gharchive/issue
| 2020-12-25T20:07:24 |
2025-04-01T04:56:07.411621
|
{
"authors": [
"JodyGoldberg",
"tmigone"
],
"repo": "balenalabs/balena-sound",
"url": "https://github.com/balenalabs/balena-sound/issues/380",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
664275064
|
Change the Titles of All Modules in API Docs to Denote the GitHub Org
Description:
We need to change the titles of all the modules in the API Docs to denote the GitHub org (i.e., whether it is "ballerina" or "ballerinax").
For example, they should be:
Module: ballerina/auth
Module: ballerinax/docker
This should be done in both 1.2.x & Swan Lake releases.
Related Issues (optional):
https://github.com/ballerina-platform/ballerina-lang/issues/24908
https://github.com/ballerina-platform/ballerina-dev-website/issues/1097
Suggested Labels (optional):
Suggested Assignees (optional):
Closing as this issue is fixed.
|
gharchive/issue
| 2020-07-23T07:58:16 |
2025-04-01T04:56:07.427648
|
{
"authors": [
"praneesha",
"shehan360"
],
"repo": "ballerina-platform/ballerina-lang",
"url": "https://github.com/ballerina-platform/ballerina-lang/issues/24914",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
748205801
|
Incorrect/incomplete variable types with "Create Local Variable" code actions for iterators
Description:
$title.
Steps to reproduce:
For
public function main() {
int[] x = [1, 2, 3];
x.iterator();
x.iterator().next();
}
adds
public function main() {
int[] x = [1, 2, 3];
object {function () returns {| any| value |}|()} iteratorResult = x.iterator();
function () nextResult = x.iterator().next();
}
but it should be
public function main() {
int[] x = [1, 2, 3];
object {public function next() returns record {| int value; |}?;} iteratorResult = x.iterator();
record {| int value; |}? nextResult = x.iterator().next();
}
Related issues:
https://github.com/ballerina-platform/ballerina-lang/issues/24379
Affected Versions:
slp5
Blocked due to https://github.com/ballerina-platform/ballerina-lang/issues/29017
|
gharchive/issue
| 2020-11-22T09:39:35 |
2025-04-01T04:56:07.431161
|
{
"authors": [
"MaryamZi",
"rasika"
],
"repo": "ballerina-platform/ballerina-lang",
"url": "https://github.com/ballerina-platform/ballerina-lang/issues/27059",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1016307946
|
Document This code action for class is suggested within object method body
Description:
Consider the source:
class Student {
public function getName() returns string {
return "name";
}
}
Put the cursor on the return statement and you will see the Document This code action. Click on it, and the documentation will be added to the class Student.
Steps to reproduce:
See description
Affected Versions:
SL Beta 3 RC5
@nadeeshaan I can still see the Document This code action on the return statement on 2201.0.3 RC1. Shall we reopen this?
We actually merged this one for master. It should be available with a later minor release, though.
|
gharchive/issue
| 2021-10-05T13:07:54 |
2025-04-01T04:56:07.434335
|
{
"authors": [
"IMS94",
"nadeeshaan"
],
"repo": "ballerina-platform/ballerina-lang",
"url": "https://github.com/ballerina-platform/ballerina-lang/issues/33072",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1559214750
|
[Bug]: Invalid error recovery in regex
Description
public function main() {
string:RegExp x7 = re `\p{sc=``}`;
}
The above code is recovered as
public function main() {
string:RegExp x7 = re `\p{sc= MISSING[] MISSING[)]` MISSING[;]`}`;
}
The MISSING[)] node is not valid for this recovery operation.
Steps to Reproduce
No response
Affected Version(s)
Master
OS, DB, other environment details and versions
No response
Related area
-> Compilation
Related issue(s) (optional)
No response
Suggested label(s) (optional)
No response
Suggested assignee(s) (optional)
No response
The expected node is MISSING[}] instead of MISSING[)].
|
gharchive/issue
| 2023-01-27T05:52:35 |
2025-04-01T04:56:07.438033
|
{
"authors": [
"SasinduDilshara"
],
"repo": "ballerina-platform/ballerina-lang",
"url": "https://github.com/ballerina-platform/ballerina-lang/issues/39393",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1600985597
|
[Bug]: debugger breakpoints are not working on windows
Description
$subject seems to be a regression issue after fixing debugging support for bala sources with https://github.com/ballerina-platform/ballerina-lang/pull/37896
Steps to Reproduce
No response
Affected Version(s)
No response
OS, DB, other environment details and versions
No response
Related area
Related issue(s) (optional)
No response
Suggested label(s) (optional)
No response
Suggested assignee(s) (optional)
No response
Closing with https://github.com/ballerina-platform/ballerina-lang/pull/39739
|
gharchive/issue
| 2023-02-27T11:18:41 |
2025-04-01T04:56:07.441654
|
{
"authors": [
"NipunaRanasinghe"
],
"repo": "ballerina-platform/ballerina-lang",
"url": "https://github.com/ballerina-platform/ballerina-lang/issues/39738",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
324378320
|
Result sets returned from mysql stored procedure are empty
Description:
$subject, due to [1]. It seems it is not possible to have the streaming behavior while keeping several result sets. When debugging, just before calling "getMoreResults" on the statement, calling "next" on the previously obtained result set returned "true"; just after calling "getMoreResults", calling "next" on the previously obtained result set returned "false".
If the condition at [1] is removed, the result set is not empty.
However, when the result sets are returned as OUT params there is no such issue.
[1] https://github.com/ballerina-platform/ballerina-lang/blob/master/stdlib/ballerina-database/src/main/java/org/ballerinalang/database/sql/actions/AbstractSQLAction.java#L453
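For reference, a minimal JDBC sketch of the behavior described above (this is not the Ballerina connector code; the connection URL, credentials, and column name are taken from this issue's example or invented for illustration). The no-arg getMoreResults() tells the driver to close the current ResultSet, while Statement.KEEP_CURRENT_RESULT asks it to keep that result set open while moving to the next one.
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class MultiResultSetSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/testdb", "user", "pass");
             CallableStatement stmt = conn.prepareCall("{call SelectPersonData()}")) {
            if (stmt.execute()) {
                ResultSet rs = stmt.getResultSet();
                // The no-arg form, stmt.getMoreResults(), closes rs first, which matches
                // the empty-table symptom described above.
                // Keeping rs readable while advancing requires driver support for this flag:
                stmt.getMoreResults(Statement.KEEP_CURRENT_RESULT);
                while (rs.next()) {
                    System.out.println(rs.getString("firstName"));
                }
            }
        }
    }
}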
Steps to reproduce:
Call a mysql stored procedure which returns a result set. The returned table would be empty.
eg:
Stored procedure
CREATE PROCEDURE SelectPersonData()
READS SQL DATA
BEGIN
SELECT firstName FROM Customers where registrationID = 1;
END
Ballerina code
type ResultCustomers {
string FIRSTNAME,
};
function testCallProcedureWithResultSet(string jdbcUrl, string userName, string password) returns (string) {
endpoint jdbc:Client testDB {
url: jdbcUrl,
username: userName,
password: password,
poolOptions: { maximumPoolSize: 1 }
};
string firstName;
var ret = check testDB->call("{call SelectPersonData()}", [ResultCustomers]);
table[] dts;
match ret {
table[] dtsRet => dts = dtsRet;
() => firstName = "error";
}
while (dts[0].hasNext()) {
ResultCustomers rs = check <ResultCustomers>dts[0].getNext();
firstName = rs.FIRSTNAME;
}
testDB.stop();
return firstName;
}
Affected Versions:
0.970.1
OS, DB, other environment details and versions:
mysql
https://bugs.mysql.com/bug.php?id=90653
|
gharchive/issue
| 2018-05-18T11:46:13 |
2025-04-01T04:56:07.446388
|
{
"authors": [
"Manuri"
],
"repo": "ballerina-platform/ballerina-lang",
"url": "https://github.com/ballerina-platform/ballerina-lang/issues/8643",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
424753775
|
Fix minor issues in Auth stdlib
Purpose
Fixes minor issues in the Auth stdlib encountered during the review process. Includes an auth validation improvement and a JWT cache performance improvement.
Codecov Report
:exclamation: No coverage uploaded for pull request base (master@746d242). Click here to learn what that means.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #14658 +/- ##
=========================================
Coverage ? 64.01%
Complexity ? 523
=========================================
Files ? 2276
Lines ? 99341
Branches ? 13215
=========================================
Hits ? 63594
Misses ? 30737
Partials ? 5010
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 746d242...06a29ea. Read the comment docs.
|
gharchive/pull-request
| 2019-03-25T07:11:30 |
2025-04-01T04:56:07.451747
|
{
"authors": [
"ayomawdb",
"codecov-io"
],
"repo": "ballerina-platform/ballerina-lang",
"url": "https://github.com/ballerina-platform/ballerina-lang/pull/14658",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
604531448
|
Improve lang lib table tests
Purpose
$subject
Fixes #22663
Approach
N/A
Samples
N/A
Remarks
N/A
Check List
[x] Read the Contributing Guide
[ ] Updated Change Log
[ ] Checked Tooling Support (#)
[x] Added necessary tests
[x] Unit Tests
[ ] Spec Conformance Tests
[ ] Integration Tests
[ ] Ballerina By Example Tests
[ ] Increased Test Coverage
[ ] Added necessary documentation
[ ] API documentation
[ ] Module documentation in Module.md files
[ ] Ballerina By Examples
Codecov Report
:exclamation: No coverage uploaded for pull request base (new-table-impl@16c5cf8). Click here to learn what that means.
The diff coverage is n/a.
@@ Coverage Diff @@
## new-table-impl #22830 +/- ##
=================================================
Coverage ? 14.59%
=================================================
Files ? 51
Lines ? 1411
Branches ? 219
=================================================
Hits ? 206
Misses ? 1189
Partials ? 16
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 16c5cf8...5362c88. Read the comment docs.
|
gharchive/pull-request
| 2020-04-22T07:43:23 |
2025-04-01T04:56:07.461469
|
{
"authors": [
"codecov-io",
"pcnfernando"
],
"repo": "ballerina-platform/ballerina-lang",
"url": "https://github.com/ballerina-platform/ballerina-lang/pull/22830",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
728118406
|
[VCoDeS] Windows: file system server connector
Description:
Refer https://github.com/ballerinalang/ballerina/issues/3025 for more details.
You can get the latest directory listener here. I have checked this sample on Windows. The directory listener returned the following correct path name:
C:\Users\ashak\OneDrive\Documents\test.bal
But the above BBE prints the path with extra slashes because log applies additional validation to escape invalid characters. The output of log:printInfo:
time = 2021-07-02T16:15:40.885+05:30 level = INFO module = "" message = "Modify: C:\\Users\\ashak\\OneDrive\\Documents\\test.bal"
@daneshk @MadhukaHarith92 I think we need to improve the invalid-character escaping implementation according to the OS
@kalaiyarasiganeshalingam this is because the backslash is a special character and we are escaping special characters in the message when printing it to the console.
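A tiny sketch of the escaping behavior being described (not the actual log module code; the path is the one from this issue): doubling each backslash in the message before printing reproduces the C:\\Users\\... output shown above.
public class EscapeSketch {
    public static void main(String[] args) {
        String path = "C:\\Users\\ashak\\OneDrive\\Documents\\test.bal"; // path reported by the listener
        String escaped = path.replace("\\", "\\\\");                     // escape special characters for output
        System.out.println("message = \"Modify: " + escaped + "\"");     // prints Modify: C:\\Users\\ashak\\...
    }
}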
File listener works fine. Hence, closing this issue.
|
gharchive/issue
| 2017-10-18T12:28:54 |
2025-04-01T04:56:07.468432
|
{
"authors": [
"anupama-pathirage",
"daneshk",
"kalaiyarasiganeshalingam"
],
"repo": "ballerina-platform/ballerina-standard-library",
"url": "https://github.com/ballerina-platform/ballerina-standard-library/issues/115",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
945276625
|
Set Target Code Coverage Value
Description:
Currently, there's no strict rule for code coverage in Standard Library modules. Since we are now at 80% coverage in all the modules, we can now set the baseline as 80% all the time.
To make sure the 80% coverage is maintained, each repo has to bring its code coverage to 80% and make the codecov/project check a required check.
Let's mark each item in this issue once the task is done for that repo. (This is independent of the level).
[x] io
[x] java.arrays
[x] random
[x] regex
[x] time
[x] url
[x] xmldata
[x] crypto
[x] log
[x] os
[x] task
[x] xslt
[x] cache
[x] file
[x] ftp
[x] mime
[x] nats
[x] stan
[x] tcp
[x] udp
[x] uuid
[x] auth
[x] email
[x] jwt
[x] oauth2
[x] http
[x] graphql
[x] grpc
[x] websocket
[x] websub
[x] websubhub
[x] kafka
[x] rabbitmq
[x] sql
[x] java.jdbc
[x] mysql
[x] postgresql
[x] mssql
Closing since all the modules completed the task.
|
gharchive/issue
| 2021-07-15T11:20:39 |
2025-04-01T04:56:07.476482
|
{
"authors": [
"ThisaruGuruge"
],
"repo": "ballerina-platform/ballerina-standard-library",
"url": "https://github.com/ballerina-platform/ballerina-standard-library/issues/1660",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
962380980
|
Handle internal messages without repetitions
Description:
The stub generation of the following protobuf file has repeated record values. Those repetitions should handle gracefully.
Proto file
syntax = "proto3";
option go_package = "apis/";
message FileInfo {
message Observability {
string id = 1 [json_name = "observabilityId"];
string latest_version = 2;
}
Observability observability = 1;
}
message Log {
message Observability {
string id = 1 [json_name = "observabilityId"];
string latest_version = 2;
}
Observability observability = 1;
}
Generated stub
public type FileInfo record {|
Observability observability = {};
|};
public type Observability record {|
string id = "";
string latest_version = "";
|};
public type Log record {|
Observability observability = {};
|};
public type Observability record {|
string id = "";
string latest_version = "";
|};
Steps to reproduce:
Generate stub file using the aforementioned proto file.
Affected Versions:
Ballerina SL beta 2
OS, DB, other environment details and versions:
Java 11
macOS
Suggested stub:
public type FileInfo record {|
FileInfo_Observability observability = {};
|};
public type FileInfo_Observability record {|
string id = "";
string latest_version = "";
|};
public type Log record {|
Log_Observability observability = {};
|};
public type Log_Observability record {|
string id = "";
string latest_version = "";
|};
@daneshk , @BuddhiWathsala WDYT
+1 @MadhukaHarith92. We can follow the same approach as internal enums.
|
gharchive/issue
| 2021-08-06T04:17:11 |
2025-04-01T04:56:07.481001
|
{
"authors": [
"BuddhiWathsala",
"MadhukaHarith92"
],
"repo": "ballerina-platform/ballerina-standard-library",
"url": "https://github.com/ballerina-platform/ballerina-standard-library/issues/1736",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1194058392
|
Add support to respond error type using http:Caller
Description:
We support resource functions returning error types, which are converted to an error response with the error message. Similarly, we will allow http:Caller to respond with an error type.
service on new http:Listener(9090) {
resource function get returnedError() returns error {
return error("return new error");
}
resource function get respondedError(http:Caller caller) returns error? {
check caller->respond(error("respond with new error"));
}
}
Related Issues:
Part of Move security error reporting to a default error interceptor #2823
We also have an http:CallerInfo annotation which is used to specify the respond type of the caller->respond() method.
Since there is a limitation in specifying the typedesc as the generic error type (specific error types are allowed), at the moment we only support respondType being http:ResponseMessage|http:Error. So when we use the annotation and respond with an error, we can only use an error of type http:Error
Related lang bug https://github.com/ballerina-platform/ballerina-spec/issues/797
|
gharchive/issue
| 2022-04-06T05:31:32 |
2025-04-01T04:56:07.486470
|
{
"authors": [
"TharmiganK",
"chamil321"
],
"repo": "ballerina-platform/ballerina-standard-library",
"url": "https://github.com/ballerina-platform/ballerina-standard-library/issues/2832",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|