Columns:
- id: string (lengths 4 to 10)
- text: string (lengths 4 to 2.14M)
- source: string (2 classes)
- created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
- added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
- metadata: dict
430649367
Relicense to MIT-0 By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. (FP for missing NOTICE file deletion)
gharchive/pull-request
2019-04-08T20:43:03
2025-04-01T06:37:59.691716
{ "authors": [ "jpeddicord" ], "repo": "aws-samples/aws-geotagging-logs", "url": "https://github.com/aws-samples/aws-geotagging-logs/pull/1", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
824604243
Init container isn't started for webserver + TLS handshake error in secret-inject Hi there! I'm following the instructions but getting only the webserver container in my webserver-78578795c6-6l2mx pod. No init container is present there. In my secret-inject-87fd4b8bb-v6jvp pod logs I see the http: TLS handshake error from 192.168.183.197:43904: remote error: tls: bad certificate exception on each webserver deploy attempt. Any ideas how to debug / fix it? Hello, that can typically occur when there is a cert mismatch between the admission controller and the mutating webhook configuration. I doubt the certs have expired (since they are valid for 3650 days). Is this a new clean-state deployment? Verify that you don't have an existing secret named "secret-inject-tls" which was created as part of a previous Helm deployment. I've just completed a full k8s redeploy and am still getting exactly the same. Worth mentioning that the other sample deploy that uses IAM OIDC, a role and a policy to reach S3 via the Amazon EKS Pod Identity Webhook works just fine. I have k8s v1.19, nodePool v1.19.6-20210302, CNI v1.7.9-eksbuild.1 if it matters. I have the same issue. 2021/03/11 00:13:02 http: TLS handshake error from ********: remote error: tls: bad certificate 2021/03/11 00:13:02 http: TLS handshake error from ********: remote error: tls: bad certificate K8s details: kubectl version --short Client Version: v1.19.3 Server Version: v1.19.6-eks-49a6c0 @shashankbansal6 and @kagarlickij we're testing this against 1.19 now. Perhaps something changed in that release. I'm now also having this issue after upgrading to 1.19. I believe we need to add a SAN to the certificate. @kagarlickij @shashankbansal6 This should be fixed. @amit0701 Can you please confirm where you added the SAN? I don't see any commits to the repo and I'm using the multi-secret branch. We're not actively maintaining that branch @themattkeating. You could ask @dnascimento if he has plans to update the certificate. Unfortunately the master branch only supports a single secret, which is almost useless in the real world. @jicowan If you could explain which certificate was updated, I'm happy to do a PR for this multi-secret branch. I just wanted to know more about what the fix was. The changes are in the gh-pages branch: https://github.com/aws-samples/aws-secret-sidecar-injector/commits/gh-pages
gharchive/issue
2021-03-08T14:08:26
2025-04-01T06:37:59.706057
{ "authors": [ "amit0701", "jicowan", "kagarlickij", "shashankbansal6", "themattkeating" ], "repo": "aws-samples/aws-secret-sidecar-injector", "url": "https://github.com/aws-samples/aws-secret-sidecar-injector/issues/43", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
834544234
fix(core): Fixing SSM-Document-Share Fixing the number of accountIds passed to ssm modify-document-permission Fixing the number of accounts passed to GuardDuty createMembers, updateMembers and deleteMembers Adding security-hub-excl-regions and not enabling Security Hub in those regions By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. Can we add a parameter to disable Security Hub / GuardDuty / mach for an OU like sandbox/dev for cost optimizations? For SH, see both: #497 and #516, leaning to #497 as we've received similar feedback from other customers (also waiting to see what SH has for a roadmap). GD is enabled using AWS Orgs for enablement so we don't have the capability to control per OU, unless GD adds the feature :-). We do not have any plans to switch back to manual, per-account GD enablement. I strongly discourage disabling GuardDuty in any OU or account and strongly believe it should be enabled across your entire AWS footprint. If something bad happens in any account, including Sandbox, you still need to know. GuardDuty is a cloud native/cloud aware Intrusion Detection Service. Given Sandbox often has more freedoms and fewer security controls, the utilization of GD in Sandbox is even more important. GD is one of the best tools to alert you that something unusual is occurring and allow you to act/remediate in the most timely manner. For the meagre savings you will achieve, I strongly encourage cost optimizing elsewhere :-). Love all the great feedback you've been providing!
gharchive/pull-request
2021-03-18T08:52:39
2025-04-01T06:37:59.709989
{ "authors": [ "Brian969", "naveenkoppula", "rverma-nsl" ], "repo": "aws-samples/aws-secure-environment-accelerator", "url": "https://github.com/aws-samples/aws-secure-environment-accelerator/pull/666", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
791401509
Error: Failed to choose a D3D11 Adapter. I looked at the logs, I saw this: A D3D11-compatible GPU (Feature Level 11.0, Shader Model 5.0) is required to run the engine. I think the default instance doesn't have a compatible GPU. I don't know how to proceed. Thus, the build is not running. Please note the webserver does start: What instance are you running? I was using g4ad.xlarge. Even though it was not necessary I installed the vGaming drivers instead of workstation drivers that are default. That was only to get RTX, not necessary for streaming unless your project has RTX ON. I am using the default values of the CloudFormation template. So the instance is g4dn.xlarge. I remember getting similar errors when running RTX-stuff on the default instance. Very interesting @sebrk. Can you tell me how to install those drivers? Maybe that'll help me solve my issue. @kashman-amzn I think this shows a pattern with the default configuration. BTW, I am trying to deploy the pixel streaming demo, nothing special. Do you have RTX on? instructions here: https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/install-nvidia-driver.html#nvidia-gaming-driver @sebrk Amazing!, you fixed it! I am not sure if I required RTX or not but this was the solution. I followed the steps you linked until step 3 and that was enough. Do you know how to automate this process or put it in the CloudFormation template? Great! You probably had a bad case of drivers from the start. I have not automated this but googling around should help you find information on it. @kashman-amzn could you modify the CloudFormation template to include the aforementioned drivers? Thank you for your collaboration on this issue and working through and identifying the need for the NVIDIA gaming driver. The GRID drivers are installed as part of the NICE DCV installation, and works for many UE4 projects but we will modify the CloudFormation to switch to the gaming driver. Again thanks for digging into this. Excellent @kashman-amzn! @kashman-amzn thank you very much! When do you think we'll have the new CloudFormation? @kashman-amzn I saw your recent commit. If I wanted to use the NVIDIA gaming driver what steps should I take to modify the CloudFormation template? I am planning to get to this but it may not happen until next week. One way you can do this is to add a silent install of the latest NVIDIA EC2 gaming driver in the bootstrap script. You could also execute it in the CFn script by adding the file to the download items and executing the silent install. Documentation on NVIDIA driver installation can be found here: https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/install-nvidia-driver.html I have added the installation of the .NET Framework and run the UE4PrereqSetup that is bundled with the build by the engine. This should ensure current and future dependencies are resolved before running the pixel streaming build. As far as the NVIDIA gaming driver, I have added a note on how to install that driver if desired, but with the prerequisite installed the Unreal Engine Pixel Streaming demo that failed before now succeeds and streams as expected at 60fps. Hey @kashman-amzn, I used the latest scripts and I still get the same issue trying to deploy the pixel streaming demo. Are you sure you were able to deploy it? I was able to deploy it by changing the DiskSize to 45GB. That is great. I will be increasing the default storage value to 50GB so that larger sized projects will work by default.
gharchive/issue
2021-01-21T18:39:34
2025-04-01T06:37:59.727574
{ "authors": [ "kashman-amzn", "sebrk", "yardenshoham" ], "repo": "aws-samples/deploying-unreal-engine-pixel-streaming-server-on-ec2", "url": "https://github.com/aws-samples/deploying-unreal-engine-pixel-streaming-server-on-ec2/issues/2", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
2057342181
Fix/claude 2.1 support image generation All of the models do output the JSON format itself, but there are cases where extra wording appears before and after it, so this adds logic that searches for { and } and extracts only the part in between. A different implementation will be considered.
gharchive/pull-request
2023-12-27T12:55:17
2025-04-01T06:37:59.729026
{ "authors": [ "kazuhitogo" ], "repo": "aws-samples/generative-ai-use-cases-jp", "url": "https://github.com/aws-samples/generative-ai-use-cases-jp/pull/269", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
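The brace-extraction approach described in the pull request above is simple enough to sketch. The snippet below is purely illustrative (the project's actual implementation is not shown in this record, and the helper name and error handling are invented): it keeps only the span between the first { and the last } of a model response, and it assumes a single JSON object per response.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// extractJSON is a hypothetical helper mirroring the approach described above:
// the model wraps the JSON payload in extra wording, so keep only the text
// between the first '{' and the last '}'.
func extractJSON(raw string) (string, error) {
	start := strings.Index(raw, "{")
	end := strings.LastIndex(raw, "}")
	if start == -1 || end == -1 || end < start {
		return "", errors.New("no JSON object found in model output")
	}
	return raw[start : end+1], nil
}

func main() {
	out, err := extractJSON(`Sure, here is the result: {"prompt": "a red bicycle"} Let me know if you need more.`)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // {"prompt": "a red bicycle"}
}
```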
2134363858
Fix GraalVM build Description of changes: temp downgrade spring-data-commons due to https://github.com/spring-projects/spring-data-commons/issues/3025 By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. Thank you for the PR, looks good! Thank you! It worked here!
gharchive/pull-request
2024-02-14T13:31:08
2025-04-01T06:37:59.730865
{ "authors": [ "deki", "jjeanjacques10", "smoell" ], "repo": "aws-samples/java-on-aws", "url": "https://github.com/aws-samples/java-on-aws/pull/264", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
1776122339
updated workshop ami Updated AWS AMI for Rancher v2.7.4 (release notes) Tested and verified compatibility with the workshop By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. The current workshop AMI (ami-08cdaea7883b8b88a) still exists and is public until the merge is approved, then it will be deprecated.
gharchive/pull-request
2023-06-27T05:32:46
2025-04-01T06:37:59.732986
{ "authors": [ "zackbradys" ], "repo": "aws-samples/rancher-on-aws-workshop", "url": "https://github.com/aws-samples/rancher-on-aws-workshop/pull/47", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
735697498
RAPIDS.ai custom image example Issue #, if available: Description of changes: Added a rapids.ai custom image example. Will be great for users to make use of a readymade image with https://rapids.ai/start.html By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. Testing completed successfully: Performed local testing as suggested in DEVELOPMENT.md Performed testing on studio with two types of GPU instances -- g4dn and p3.8xl... Both currently work Performed additional testing with RAPIDS example, importing various libraries like cuml and testing out the nvidia-smi command used both docker build and push, as well as sm-docker from studio to create and test images with the same Dockerfile, followed by testing as updated in the rapids/README.md Thanks for the contribution!
gharchive/pull-request
2020-11-03T23:16:22
2025-04-01T06:37:59.737003
{ "authors": [ "jaipreet-s", "w601sxs" ], "repo": "aws-samples/sagemaker-studio-custom-image-samples", "url": "https://github.com/aws-samples/sagemaker-studio-custom-image-samples/pull/2", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
2155684779
feat: Update zk netplan render to handle docker bridge network interface Problem: We have noticed when docker has a network interface the zookeeper netplan render is confused and can't correctly pick the ENI ifname. For example with the following network interfaces, the zookeeper_user_data will pick up both br-146bd07d2fbf and eth1 then render into /etc/netplan/99_config.yaml, and the netplan is broken. ip -j link show | jq . [ { "ifindex": 3, "ifname": "br-146bd07d2fbf", "flags": [ "NO-CARRIER", "BROADCAST", "MULTICAST", "UP" ], "mtu": 1500, "qdisc": "noqueue", "operstate": "DOWN", "linkmode": "DEFAULT", "group": "default", "link_type": "ether", "address": "02:42:01:07:1e:1c", "broadcast": "ff:ff:ff:ff:ff:ff" }, { "ifindex": 4, "ifname": "eth1", "flags": [ "BROADCAST", "MULTICAST" ], "mtu": 1500, "qdisc": "noop", "operstate": "DOWN", "linkmode": "DEFAULT", "group": "default", "txqlen": 1000, "link_type": "ether", "address": "0e:f1:0b:65:44:d3", "broadcast": "ff:ff:ff:ff:ff:ff" } ] Solution: We are adding in additional filtering when finding the INTFERFACE_NAME, so it filter out any docker bridge network interface, which start with "br-". By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. Hi Paul, Thanks for the fix. Nice work. 👍
gharchive/pull-request
2024-02-27T05:20:22
2025-04-01T06:37:59.739769
{ "authors": [ "chn217", "sydefz" ], "repo": "aws-solutions/scalable-analytics-using-apache-druid-on-aws", "url": "https://github.com/aws-solutions/scalable-analytics-using-apache-druid-on-aws/pull/8", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
270853997
Can't resolve 'amazon-cognito-auth-js' This is happening on a regular create react app, not related to typescript like #39. In package.json: "main": "lib/index.js", "module": "es/index.js", Are nowhere to be found in amazon-cognito-auth, so I used: import { CognitoAuth } from 'amazon-cognito-auth-js/dist/amazon-cognito-auth'; To work around the incorrect values in package.json for now Is your project using React structure? If yes, this is the correct way to do: import { CognitoAuth } from 'amazon-cognito-auth-js/dist/amazon-cognito-auth'; Still encounter this error following the documentation. This issue should not be closed. If what you said is the right way to import, then please correct the documentation so that we do not waste time searching through all the past issues. @jaxondu The documentation shows how to use SDK generally. For specific case such as using React structure, import { CognitoAuth } from 'amazon-cognito-auth-js/dist/amazon-cognito-auth'; should be the correct way. And this is the common sense for importing when using React structure. @yuntuowang Sorry I'm using VueJS. It is not common sense to have ES import in this manner. You must have export it wrongly. Hi @jaxondu, I just updated the README.md for the ES modules import: // ES Modules, e.g. transpiling with Babel import {CognitoAuth} from 'amazon-cognito-auth-js/dist/amazon-cognito-auth'; If you have any other concerns, feel free to post here, thanks! I have updated this workaround: import { CognitoAuth } from 'amazon-cognito-auth-js/dist/amazon-cognito-auth'; in our README.md. which programming language is this https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_ListUserPools.html Can you point to any sample code which uses this? I'm trying to use this in a lambda function. @yuntuowang I have a problem in my Ionic+angular+typescript Mobile Project. I want to import {CognitoAuth} from 'amazon-cognito-auth-js/dist/amazon-cognito-auth' But it does not recognize it?
gharchive/issue
2017-11-03T01:04:42
2025-04-01T06:37:59.746447
{ "authors": [ "deeperid", "engharb", "jaxondu", "sam-jg", "sunsoori", "yuntuowang" ], "repo": "aws/amazon-cognito-auth-js", "url": "https://github.com/aws/amazon-cognito-auth-js/issues/42", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
337816933
Projection Expression parsing bug Hi, I am using Expression Builder to create the Expression for QueryInput. I noticed that Expression Builder Projection delimits with comma and space, eg. "#1, #2". As a result, when there is more than 1 projection, the output is incorrect. Below is an example of QueryOutput.Items [ { " #2": { "B": null, "BOOL": null, "BS": null, "L": null, "M": null, "N": null, "NS": null, "NULL": null, "S": "0f3028a2-ee87-447e-9a7a-ee2d94798934", "SS": null }, "ID": { "B": null, "BOOL": null, "BS": null, "L": null, "M": null, "N": null, "NS": null, "NULL": null, "S": "9bbef19476623ca56c17da75fd57734dbf82530686043a6e491c6d71befe8f6e", "SS": null } } ] The value of expr.Names() is { "#0": "RepoID", "#1": "ID", "#2": "TokenID" } The value of expr.Projection() is #1, #2 The same Expression Builder works without DAX. Edit: A workaround I am currently using is to specify ProjectionExpression without using Expression Builder This will be fixed in the next release
gharchive/issue
2018-07-03T09:30:02
2025-04-01T06:37:59.914578
{ "authors": [ "anandsas", "darylnwk" ], "repo": "aws/aws-dax-go", "url": "https://github.com/aws/aws-dax-go/issues/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
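The workaround mentioned at the end of the issue above, specifying ProjectionExpression by hand rather than taking it from the Expression Builder, can be sketched roughly as follows. This is an assumption-laden illustration, not the reporter's actual code: the table name, key condition, and key value are made up, and only the request construction is shown (client setup and the Query call are omitted). The attribute names reuse the values shown in expr.Names() in the report.

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// buildQueryInput hand-rolls the projection instead of using expr.Projection(),
// so the expression is "#1,#2" with no space after the comma; the ", "
// delimiter is what the DAX client mishandled in the report above.
func buildQueryInput() *dynamodb.QueryInput {
	return &dynamodb.QueryInput{
		TableName:              aws.String("Tokens"), // hypothetical table name
		KeyConditionExpression: aws.String("#0 = :repo"),
		ProjectionExpression:   aws.String("#1,#2"),
		ExpressionAttributeNames: map[string]*string{
			"#0": aws.String("RepoID"),
			"#1": aws.String("ID"),
			"#2": aws.String("TokenID"),
		},
		ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
			":repo": {S: aws.String("example-repo")}, // hypothetical key value
		},
	}
}
```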
538156275
Use the SDK with C++ Builder Hi, I'm trying to use the sdk in my project in C++ builder (Embarcadero C++ builder 10.3). I tried to create a new project in C++ builder and create exactly the same structure of the current SDK directory there. But I got never-ending issues. Is there any other way to do this? Any help/guidance appreciated. Thank you I'd need to see build logs to know what specific issues you're talking about and give advice. Since you're just getting started on your project, I'd suggest that you pick up the new aws-iot-device-sdk-cpp-v2 instead of this SDK. The V2 SDK is where AWS will be putting its development and support efforts going forward.
gharchive/issue
2019-12-16T02:50:45
2025-04-01T06:37:59.925902
{ "authors": [ "Preemali", "graebm" ], "repo": "aws/aws-iot-device-sdk-cpp", "url": "https://github.com/aws/aws-iot-device-sdk-cpp/issues/176", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1308635952
Create release Hi there, Is there a standard periodic release process for this package or are releases initiated manually by the maintainers? I had a PR merged in https://github.com/aws/aws-lambda-go/issues/452 which I'd like to use. I see the last release was created 18 days ago. Looking at the history it doesn't seem like there's a regular cadence, could you please create a release so that the above changes can be used? done! https://github.com/aws/aws-lambda-go/releases/tag/v1.33.0 Much appreciated.
gharchive/issue
2022-07-18T21:07:04
2025-04-01T06:37:59.933535
{ "authors": [ "DanielBauman88", "bmoffatt" ], "repo": "aws/aws-lambda-go", "url": "https://github.com/aws/aws-lambda-go/issues/453", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1437651508
local runner container has python2.7 environment not 3.7 The local runner could not find my constraints, so I tried debugging and found that the local runner container has a Python 2.7 env, not 3.7. $ docker exec -it e177d2495fd0 /bin/bash [airflow@e177d2495fd0 ~]$ python Python 2.7.18 (default, May 25 2022, 14:30:51) [GCC 7.3.1 20180712 (Red Hat 7.3.1-15)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.path ['', '/usr/lib/python27.zip', '/usr/lib64/python2.7', '/usr/lib64/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk', '/usr/lib64/python2.7/lib-old', '/usr/lib64/python2.7/lib-dynload', '/usr/lib64/python2.7/site-packages', '/usr/lib/python2.7/site-packages'] >>> You need to run python3, not python, to access version 3.7.10 (which Airflow is using). @killingtime-sc perfect. thx
gharchive/issue
2022-11-07T01:54:16
2025-04-01T06:37:59.943378
{ "authors": [ "Haebuk", "killingtime-sc" ], "repo": "aws/aws-mwaa-local-runner", "url": "https://github.com/aws/aws-mwaa-local-runner/issues/181", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
1989301885
Is it possible to change the version? I want to test it on mwaa 2.6.3. Now, 2.7.2 is the default setting, so I would like to lower the version being checked. There are tags you can check out. git checkout v2.6.3 @erg Thank you so much ! mr.erg! it's really helpful for me!! haha I'll add this in the README as it might not be clear. Closing this issue
gharchive/issue
2023-11-12T08:22:20
2025-04-01T06:37:59.945244
{ "authors": [ "JeongPock2y", "erg", "mayushko26" ], "repo": "aws/aws-mwaa-local-runner", "url": "https://github.com/aws/aws-mwaa-local-runner/issues/326", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
2406700589
Updating selectedItems in Table component does not update Describe the bug I am using the aws-northstar Table component. I am trying to update the selected items in the table based on some state that is being controlled outside of the table component. When I update the selected state, I can see it is updated, however the table component ignores it and does not update the selected checkbox items in the table. Note: if I swap the aws-northstar Table component for the @cloudscape-design/components Table component, it works fine. Versions @aws-northstar/ui v1.1.13 To Reproduce CodeSandbox link to reproduce the problem Steps to reproduce the behavior See CodeSandbox example. Steps to reproduce: Click on checkbox items inside the table. The "selectedItems" state updates successfully. Click on the test button. Expected behaviour Clicking on test button should override the selectedItems state and select the 3rd component in the table. Actual behaviour The React state for "selectedItems" is updated successfully, however the Table component ignores it and keeps the current selection. Additional context Using the @cloudscape-design/components Table works as expected. Swap the "Table" component in the imports in the CodeSandbox to see the difference in behavior. Hi @adriantadros Thanks for reporting the bug. Northstar's V2 Table component leverages Cloudscape's Collection hooks on handling the selectedItems updates. The input selectedItems will be passed to the defaultSelectedItems of the hooks params: Items selected on the initial render. The items are matched by trackBy if defined and by reference otherwise. It will not updated with the props change after the initial rendering. Sorry for the confusion. We will update the docs to reflect that.
gharchive/issue
2024-07-13T04:48:50
2025-04-01T06:37:59.954035
{ "authors": [ "adriantadros", "jessieweiyi" ], "repo": "aws/aws-northstar", "url": "https://github.com/aws/aws-northstar/issues/1044", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
809874240
feat: Add Popover component Issue #, if available: https://github.com/aws/aws-northstar/issues/66 Description of changes: Added a Popover component that can be used to display some content on top of another. By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. :tada: This PR is included in version 1.0.32 :tada: The release is available on: npm package (@latest dist-tag) GitHub release Your semantic-release bot :package::rocket:
gharchive/pull-request
2021-02-17T05:43:46
2025-04-01T06:37:59.958269
{ "authors": [ "abrenaut", "cogwirrel" ], "repo": "aws/aws-northstar", "url": "https://github.com/aws/aws-northstar/pull/87", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2436420720
IoT Core MQTT with SQS and .NET Framework 4.8 intermittently drops payload attribute Describe the bug When ingesting MQTT under load to a single device, where the device sends a message, then the server sends a message over and over, and an SQS payload topic rule is used: SELECT * as payload, topic() as topic, traceid() as traceId, clientId() as clientId, principal() as principalId, sourceip() as ipAddress FROM 'device/+/mo/#' Where the .NET Framework CDK is used to read SQS messages, an intermittent SQS message is received without a payload attribute as shown here: Body: { "topic": "device/NT-00999/mo/data/38", "traceId": "3e090b84-30a8-2b9d-e8b5-9163864624dd", "clientId": "NT-00999", "principalId": "981307f9ed56b5ebede3f576ac86b027dba5517f8f5eec4e2a7a3d7cd9cc6ce5", "ipAddress": "207.189.176.142" } A similar problem was originally observed when using a base64 MQTT payload, where the payload delivered would intermittently have the \n as the starting character of the payload and be missing a trailing length character. From here, I switched back to JSON payloads, where the symptom seems to be a complete loss of the payload attribute. Expected Behavior The payload attribute should be present and correct with all packets, as it is with most packets. Current Behavior The payload attribute is dropped when parsing incoming messages here and as shown in the log snipit above: var client = m_sqsClient; var request = new ReceiveMessageRequest { QueueUrl = queueUrl, AttributeNames = attributeNames, MaxNumberOfMessages = maxNumberOfMessages, VisibilityTimeout = visibilityTimeout, WaitTimeSeconds = waitTimeSeconds, }; while (true) { var response = await client.ReceiveMessageAsync(request); if (response.Messages.Count > 0) { Reproduction Steps Using .NET Framework 8, with AWSSDK version 3.7.400.1 (and previous versions). Client 1 sends a publish to topic A, at which time client 2 sends a response to topic B, and client 1 sends a response to topic C. From here, there are back to back messages that include client 2 sending a message on topic B and client 1 sending a response to topic C. After around 30-40 messages, the SQS message received via the .NET SDK is missing the payload attribute. IoT Payload Rule: SELECT * as payload, topic() as topic, traceid() as traceId, clientId() as clientId, principal() as principalId, sourceip() as ipAddress FROM 'device/+/mo/#' delivered SQS message: Body: { "topic": "device/NT-00999/mo/data/38", "traceId": "3e090b84-30a8-2b9d-e8b5-9163864624dd", "clientId": "NT-00999", "principalId": "981307f9ed56b5ebede3f576ac86b027dba5517f8f5eec4e2a7a3d7cd9cc6ce5", "ipAddress": "207.189.176.142" } Additionally, and separate, when using a base64 payload, sending a single message from client 2 on topic B results in a malformed base64 encoded payload with the rule: SELECT encode(*, 'base64') as payload, topic() as topic, traceid() as traceId, clientId() as clientId, principal() as principalId, sourceip() as ipAddress FROM 'device/+/mo/#' Here, the payload starts with a '\n' character and is missing a trailing length character delimiter. Possible Solution Assuming a race condition is present in the IoT Core rule processing, or more likely, on the .NET client SDK receiving the SQS message. Additional Information/Context No response AWS .NET SDK and/or Package version used AWSSDK 3.7.400.1 Targeted .NET Platform .NET Framework 4.8 Operating System and version Windows 11 Pro If the SQS message is not deleted, upon re-delivery, it also is missing the payload block again. 
@nfvelado Good afternoon. Could you please share the following: Complete minimal reproducible code, rather than a code snippet. The snippet you shared is just a basic code form receiving SQS messages. List of all packages used in your project, specifically IoT Core MQTT Was the issue happening earlier or started happening after package upgrade? If yes, please share the list of packages used before and after the error started happening. Logs, if any. Thanks, Ashish Hi Ashish, thanks for the quick response. Issue was happening before and after the SDK upgrade. I meant .NET Framework SDK, not CDK above. Query above is set as an IoT Core message routing payload rule that forwards to an SQS queue. Please close for the time being, until I can find a way to reproduce more consistently with a better trace. Thanks, Nick
gharchive/issue
2024-07-29T21:34:15
2025-04-01T06:38:00.064759
{ "authors": [ "ashishdhingra", "nfvelado" ], "repo": "aws/aws-sdk-net", "url": "https://github.com/aws/aws-sdk-net/issues/3405", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
724471587
Renaming account profile throws error Describe the bug Opening AWS Explorer in Visual Studio and trying to edit one of the existing profile's name results in an Object reference not set to an instance of an object error, as seen in the screenshot below. The profile name is however successfully renamed, it's only that one has to close and reopen Visual Studio in order to view the change. To Reproduce Open AWS Explorer from Visual Studio's View menu Select a profile and click the "Edit Profile" icon Change the profile name to something else Enter the account number and click "Ok" Expected behavior The profile to be renamed silently. Screenshots Computer (please complete the following information): Windows Version: Windows 10 1909 Visual Studio Version: 16.5.0 Preview 5 AWS Toolkit for Visual Studio Version: 1.18.1.0 Thank you for reporting this @luckerby - I can confirm that I was able to reproduce this for a basic shared credentials profile (not a .NET Credentials profile) This issue has been addressed in more recent versions of the Toolkit (I believe version 1.21.0.0 and newer).
gharchive/issue
2020-10-19T10:09:52
2025-04-01T06:38:00.084237
{ "authors": [ "awschristou", "luckerby" ], "repo": "aws/aws-toolkit-visual-studio", "url": "https://github.com/aws/aws-toolkit-visual-studio/issues/127", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
413539039
SerializationError: failed decoding REST JSON error response I'm running AWS X-Ray daemon version 3.0.0 on my Windows 7 machine with the following options... ./xray_windows.exe -o -n us-east-1 -l debug ... and see SerializationErrors such as the following during idle periods. 2019-02-22T13:38:03-05:00 [Debug] Failed to send telemetry 1 record(s). Re-queue records. SerializationError: failed decoding REST JSON error response caused by: invalid character '<' looking for beginning of value 2019-02-22T13:39:04-05:00 [Debug] Send 2 telemetry record(s) 2019-02-22T13:40:04-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:41:04-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:42:04-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:43:04-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:44:04-05:00 [Debug] Failed to send telemetry 1 record(s). Re-queue records. SerializationError: failed decoding REST JSON error response caused by: invalid character '<' looking for beginning of value 2019-02-22T13:45:04-05:00 [Debug] Send 2 telemetry record(s) 2019-02-22T13:46:04-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:47:04-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:48:05-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:49:05-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:50:05-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:51:05-05:00 [Debug] Failed to send telemetry 1 record(s). Re-queue records. SerializationError: failed decoding REST JSON error response caused by: invalid character '<' looking for beginning of value 2019-02-22T13:52:05-05:00 [Debug] Send 2 telemetry record(s) 2019-02-22T13:53:05-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:54:05-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:55:06-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:56:06-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:57:06-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:58:06-05:00 [Debug] Send 1 telemetry record(s) 2019-02-22T13:59:06-05:00 [Debug] Failed to send telemetry 1 record(s). Re-queue records. SerializationError: failed decoding REST JSON error response caused by: invalid character '<' looking for beginning of value 2019-02-22T14:00:06-05:00 [Debug] Failed to send telemetry 2 record(s). Re-queue records. SerializationError: failed decoding REST JSON error response caused by: invalid character '<' looking for beginning of value 2019-02-22T14:01:06-05:00 [Debug] Send 3 telemetry record(s) Oddly enough, this doesn't seem to interfere with tracing. Is there some way to get more information?! Is this something that I should not worry about (since it's a debug message)? Hi @tmuldoon , Thank for posting the issue. The above debug message wont affect the daemon's ability to send X-Ray traces. The daemon send's telemetry records to X-Ray service to notify daemon health. From the above logs, the telemetry record receiving error are sent in the following interval. Example Send 3 telemetry record(s). X-Ray daemon uses aws-go-sdk version 1.14.1. I checked the git repo for aws-go-sdk and found that similar issue is present in the latest version of the aws-go-sdk : issue. I will work with AWS GO SDK team to understand the scenario. Please stay tuned. Thanks, Yogi I am experiencing the same error in the x-ray daemon log. Coincidentally no traces in X-Ray console, so seems like the problem affects x-ray monitoring. 
Hi @dagb, Can you help me understand the application environment and the X-Ray SDK you are using, and confirm the traces generated by the app are sent to the X-Ray daemon address? By default the X-Ray SDK sends traces to UDP localhost:2000 and the daemon also listens on the same address. The above issue happens only when the daemon is idle (not receiving any traces for a longer time) and ideally shouldn't affect tracing abilities. Thanks, Yogi Both the emitting app and the X-Ray daemon run in ECS. I have built the X-Ray daemon docker image with https://s3.dualstack.eu-west-1.amazonaws.com/aws-xray-assets.eu-west-1/xray-daemon/aws-xray-daemon-linux-3.x.zip Port config of the X-Ray daemon container: ENTRYPOINT ["/usr/bin/xray", "-b", "0.0.0.0:2000", "-t", "0.0.0.0:2000", "-n", "eu-west-1", "-o", "-l", "debug"] EXPOSE 2000/udp EXPOSE 2000/tcp The emitting app has AWS_XRAY_DAEMON_ADDRESS=x-ray.local:2000 This was working for almost 4 days, then suddenly started to log errors (debug level), and no traces in the X-Ray console. Will it help to restart the X-Ray container as a short-term mitigation? In addition I want to add that there is a test environment (different AWS account) using the same X-Ray daemon docker image. No errors, and traces appear in the console.
Hi @dagb, Thank you for letting me know the setup. There hasn't been any daemon update recently. From the description, the traces were sent for almost 4 days and suddenly no traces appeared in the X-Ray console. One possibility is that the DNS address of the X-Ray daemon "x-ray.local:2000" might have changed but this is not reflected on the SDK UDP emitter. Can you check the X-Ray SDK logs to see if there are any errors? We have a feature request in the backlog for the Java SDK to periodically resolve the hostname. This is more of an SDK issue and not related to this daemon issue. One solution I think is to restart the X-Ray container so the X-Ray SDK resolves the DNS for the daemon address and the segments are received by the X-Ray daemon. Please feel free to open an issue if you still face the problem on the Java SDK so we can do a fast follow-up. Thanks, Yogi Hi @yogiraj07, My bad: this was caused by subnet (mis)configuration in ECS. I'm still a bit puzzled why it worked and then stopped working, but at least I got to clean up some mess. Thanks for your feedback and directions. The original concern in this issue has been resolved with the AWS SDK for Go v1.19.34. Resolving this issue to clear up some clutter. I have opened a new issue for tracking the occasional UnmarshalError when calling PutTelemetryRecord.
2019-02-22T19:10:49
2025-04-01T06:38:00.102651
{ "authors": [ "dagb", "srprash", "tmuldoon", "yogiraj07" ], "repo": "aws/aws-xray-daemon", "url": "https://github.com/aws/aws-xray-daemon/issues/22", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
284280126
Adding unit test for feature operations Unit test for each feature on for feature operations. Codecov Report :exclamation: No coverage uploaded for pull request base (master@59e7c24). Click here to learn what that means. The diff coverage is 65.58%. @@ Coverage Diff @@ ## master #20 +/- ## ========================================= Coverage ? 49.45% ========================================= Files ? 47 Lines ? 3114 Branches ? 469 ========================================= Hits ? 1540 Misses ? 1309 Partials ? 265 Impacted Files Coverage Δ lib/feature-operations/scripts/user-files-ops.js 92.1% <100%> (ø) lib/feature-operations/scripts/hosting-ops.js 88.88% <100%> (ø) lib/feature-operations/scripts/analytics-ops.js 94.87% <100%> (ø) lib/feature-operations/scripts/lib/mh-yaml-lib.js 53.55% <14.28%> (ø) lib/feature-operations/scripts/cloud-api-ops.js 67.01% <75%> (ø) ...ture-operations/scripts/lib/function-generation.js 8.43% <8.43%> (ø) lib/feature-operations/scripts/user-signin-ops.js 83.82% <84.84%> (ø) lib/feature-operations/scripts/database-ops.js 78.04% <88.42%> (ø) Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update 59e7c24...537a144. Read the comment docs.
gharchive/pull-request
2017-12-23T02:19:45
2025-04-01T06:38:00.114938
{ "authors": [ "codecov-io", "elorzafe" ], "repo": "aws/awsmobile-cli", "url": "https://github.com/aws/awsmobile-cli/pull/20", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1329228651
Update CHECKSUMS Issue #, if available: #1132 Description of changes: By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /hold
gharchive/pull-request
2022-08-04T22:18:44
2025-04-01T06:38:00.130941
{ "authors": [ "YoyoTT" ], "repo": "aws/eks-anywhere-build-tooling", "url": "https://github.com/aws/eks-anywhere-build-tooling/pull/1140", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1551029892
Sets compute type for arm code build instances Issue #, if available: Description of changes: In the staging build we default to the giant instance type for amd which does not exist for arm. We have to specifically set the arm compute type to the normal large. By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /retest /cherrypick release-0.14 /cherrypick release-0.14
gharchive/pull-request
2023-01-20T15:19:58
2025-04-01T06:38:00.133198
{ "authors": [ "abhay-krishna", "jaxesn" ], "repo": "aws/eks-anywhere-build-tooling", "url": "https://github.com/aws/eks-anywhere-build-tooling/pull/1758", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1569082028
Pin kustomize version to v4.5.7 Kustomize released a new version v5.0.0 today which caused some issues in generating manifests, specifically setting creationTimestamp to "null" (with quotes). So pinning to an older version to avoid this. By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /test etcdadm-bootstrap-provider-tooling-presubmit /cherrypick release-0.14
gharchive/pull-request
2023-02-03T02:01:19
2025-04-01T06:38:00.135292
{ "authors": [ "abhay-krishna" ], "repo": "aws/eks-anywhere-build-tooling", "url": "https://github.com/aws/eks-anywhere-build-tooling/pull/1809", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1841979146
Support dev release image builds from prod image-builder This feature was added in #2312 but inadvertently removed in the refactor done in #2361. By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /lgtm /approve /cherrypick release-0.16 /cherrypick release-0.17
gharchive/pull-request
2023-08-08T19:50:03
2025-04-01T06:38:00.136957
{ "authors": [ "abhay-krishna" ], "repo": "aws/eks-anywhere-build-tooling", "url": "https://github.com/aws/eks-anywhere-build-tooling/pull/2368", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1234320328
Fix typo Issue #, if available: Description of changes: Fix typo Testing (if applicable): By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /approve /cherrypick release-0.9
gharchive/pull-request
2022-05-12T18:06:46
2025-04-01T06:38:00.138662
{ "authors": [ "jaxesn", "mdsgabriel" ], "repo": "aws/eks-anywhere", "url": "https://github.com/aws/eks-anywhere/pull/2145", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1832124029
Make tinkerbell airgapped tests to dynamically reserve hardware Issue #, if available: Add dynamic hardware reservation logic to be extended to airgapped tinkerbell tests. Description of changes: Testing (if applicable): Documentation added/planned (if applicable): By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /approve
gharchive/pull-request
2023-08-01T22:37:34
2025-04-01T06:38:00.140617
{ "authors": [ "rahulbabu95" ], "repo": "aws/eks-anywhere", "url": "https://github.com/aws/eks-anywhere/pull/6376", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2143267508
Skip snow e2e tests Issue #, if available: Description of changes: Testing (if applicable): Documentation added/planned (if applicable): By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /approve /cherrypick release-0.19
gharchive/pull-request
2024-02-19T23:08:05
2025-04-01T06:38:00.142554
{ "authors": [ "abhay-krishna", "cxbrowne1207" ], "repo": "aws/eks-anywhere", "url": "https://github.com/aws/eks-anywhere/pull/7634", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2284758403
Add CCM patches 1.28-1.30 Issue #, if available: Description of changes: By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /lgtm /approve
gharchive/pull-request
2024-05-08T06:09:32
2025-04-01T06:38:00.144440
{ "authors": [ "xdu31", "zafs23" ], "repo": "aws/eks-distro", "url": "https://github.com/aws/eks-distro/pull/3000", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1088400950
add yaml struct tags for generated golang structs :rocket: Feature Request Affected Languages [ ] TypeScript or Javascript [ ] Python [ ] Java [ ] .NET (C#, F#, ...) [x] Go General Information JSII Version: 1.46.0 Platform: N/A [x] I may be able to implement this feature request [ ] This feature might incur a breaking change Description In Go, JSII generates structs with json struct tags : type ProviderConfig struct { Data *string `json:"data"` } It would be nice to have yaml struct tags too. While marshaling/unmarshaling this would help. Proposed Solution type ProviderConfig struct { Data *string `json:"data" yaml:"data"` } Notes If you could point me to the file which does this I may be able to write a PR. I see no problem with this, but I'm wondering what your use case is? The JSON struct tags are generated to help marshaling/demarshalling as JSII types are serialized to JSON when passed to the kernel via stdin, and deserialized when received from the kernel via stdout. Generally the struct tags aren't meant to be seen by consumers of generated JSII Go modules and are an implementation detail, though there may be some uses I'm not considering. Generally the struct tags aren't meant to be seen by consumers of generated JSII Go modules and are an implementation detail, though there may be some uses I'm not considering. When we create a construct library in cdk8s using TS, since the generated structs in golang only have JSON struct tags, I cannot use those structs to marshal YAML data into it. Lets look at an interface: export interface CustomInterface { readonly config: Config; } export interface Config { readonly containerName: string; readonly containerImage: string } The generated structs for this would be : type struct CustomInterface { Config Config `json:"config"` } type struct Config { ContainerName string `json:"containerName"` ContainerImage string `json:"containerImage"` } The application I'm building marshalls data into the CustomInterface struct from a YAML config which would look like this for our CustomInterface: config: containerName: test containerImage: test-image:latest Then it takes those values and then creates a new container definition in cdk8s. podSpec := &k8s.PodSpec{ Containers: &[]*k8s.Container{ { Name: jsii.String(config.containerName), Image: jsii.String(config.containerImage), }, }, } Sort of like helm without the madness : ) The problem here is that the camelCase is not being respected by the Marshaller, thats why yaml struct tags are necessary. I hope I was able to convey my use case. gotcha, so your application accepts some yaml configuration loaded and deserialized and passed as args to cdk8s constructs? Seems straightforward enough and I don't see any downside to enabling easier yaml deserialization. 👍🏻 gotcha, so your application accepts some yaml configuration loaded and deserialized and passed as args to cdk8s constructs? yep you nailed it.
gharchive/issue
2021-12-24T14:15:01
2025-04-01T06:38:00.152617
{ "authors": [ "Hunter-Thompson", "MrArnoldPalmer" ], "repo": "aws/jsii", "url": "https://github.com/aws/jsii/issues/3293", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
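To make the use case in the thread above concrete, here is a small self-contained sketch (not code from the thread): it assumes the widely used gopkg.in/yaml.v3 package, which by default matches YAML keys against lowercased field names, so a camelCase key such as containerName is only picked up when an explicit yaml tag is present, which is exactly the motivation for the feature request.

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Config mirrors the generated struct from the thread, with the proposed
// yaml tags added alongside the existing json tags.
type Config struct {
	ContainerName  string `json:"containerName" yaml:"containerName"`
	ContainerImage string `json:"containerImage" yaml:"containerImage"`
}

type CustomInterface struct {
	Config Config `json:"config" yaml:"config"`
}

func main() {
	raw := []byte("config:\n  containerName: test\n  containerImage: test-image:latest\n")

	var ci CustomInterface
	if err := yaml.Unmarshal(raw, &ci); err != nil {
		panic(err)
	}
	// Without the yaml tags these fields would stay empty, because the
	// camelCase keys would not match the lowercased default field names.
	fmt.Println(ci.Config.ContainerName, ci.Config.ContainerImage) // test test-image:latest
}
```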
2189442810
CRD troubleshooting guide needs updating for Beta CRDs Description How can the docs be improved? https://karpenter.sh/docs/troubleshooting/#helm-error-when-installing-the-karpenter-crd-chart The CRDs in the examples in the troubleshooting guide use the Alpha names. This guide should be updated to use the Beta CRD names (ec2nodeclasses, etc.) instead, for fast copy/paste troubleshooting. Example 1 becomes: kubectl label crd ec2nodeclasses.karpenter.k8s.aws nodepools.karpenter.sh nodeclaims.karpenter.sh app.kubernetes.io/managed-by=Helm --overwrite Example 2 becomes: kubectl annotate crd ec2nodeclasses.karpenter.k8s.aws nodepools.karpenter.sh nodeclaims.karpenter.sh meta.helm.sh/release-name=karpenter-crd --overwrite kubectl annotate crd ec2nodeclasses.karpenter.k8s.aws nodepools.karpenter.sh nodeclaims.karpenter.sh meta.helm.sh/release-namespace=karpenter --overwrite @jsamuel1 Would you be interested in opening a PR to make this change?
gharchive/issue
2024-03-15T20:45:09
2025-04-01T06:38:00.155701
{ "authors": [ "jonathan-innis", "jsamuel1" ], "repo": "aws/karpenter-provider-aws", "url": "https://github.com/aws/karpenter-provider-aws/issues/5876", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2722334820
Use NodeRepair featureGate Description How can the docs be improved? Hello, I try to use the featureGate NodeRepair, this is correctly enable on my controller pod (FEATURE_GATES: SpotToSpotConsolidation=false,NodeRepair=true), but that doesn't work like I've expected. We just need to enable the feature gate for work ? I pretty sure we need to wait 30min before the node was considerate as unready but I don't find any documentation on this point. Also it's look on node object or nodeclaims.karpenter.sh ? Because my node is ready but not my node claim: ╰─➤ k get node ip-10-34-6-212.eu-west-3.compute.internal NAME STATUS ROLES AGE VERSION ip-10-34-6-212.eu-west-3.compute.internal Ready <none> 129m v1.30.6-eks-94953ac ╰─➤ k get nodeclaims.karpenter.sh std-linux-cpu-h4g4s NAME TYPE CAPACITY ZONE NODE READY AGE std-linux-cpu-h4g4s m7i.8xlarge spot eu-west-3a ip-10-34-6-212.eu-west-3.compute.internal Unknown 129m Regards, Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request If you are interested in working on this issue or have submitted a pull request, please leave a comment The node repair feature works by looking at the node readiness. Karpenter only act against nodes that are not ready for 30 min. The nodeclaim being unknown is a known bug that are planning on pushing to fix. The issue there is the status of how karpenter is marking the status rather then any issue with the nodeclaim Hello, With 1.1.1 I've still have Unknown if my node still have the statupTaint ╰─➤ k get nodeclaims.karpenter.sh std-linux-core-pnpgb NAME TYPE CAPACITY ZONE NODE READY AGE std-linux-core-pnpgb t3a.medium spot eu-west-3a ip-10-157-24-78.eu-west-3.compute.internal Unknown 55m ╰─➤ k get node ip-10-157-24-61.eu-west-3.compute.internal NAME STATUS ROLES AGE VERSION ip-10-157-24-61.eu-west-3.compute.internal Ready <none> 4d8h v1.30.6-eks-94953ac ╰─➤ kubectl get nodes ip-10-157-24-61.eu-west-3.compute.internal -o json | jq '.spec.taints | length' 1 The node repair feature works by looking at the node readiness So they have no possibility to automatically destroy the nodeclaim if it's not at True ?
gharchive/issue
2024-12-06T07:59:34
2025-04-01T06:38:00.160880
{ "authors": [ "engedaam", "fe80" ], "repo": "aws/karpenter-provider-aws", "url": "https://github.com/aws/karpenter-provider-aws/issues/7491", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1456427346
fix: add missing DELETE method for validating webhook Fixes # Description Minor fix for validation.webhook.karpenter.k8s.aws webhook. It seems like DELETE method for karpenter.k8s.aws group was removed in #2754. I see no reason for the deletion, and think it was a mistake. How was this change tested? Does this change impact docs? [ ] Yes, PR includes docs updates [ ] Yes, issue opened: # [ ] No Release Note By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. I don't see the usecase for validating delete requests. Can't remember, but probably why I removed it. Does this break something? I don't see the usecase for validating delete requests. Can't remember, but probably why I removed it. Does this break something? I am currently running two EKS clusters with karpenter. I used helm to install karpenter, but directly installed it using helm cli on one cluster and argocd installed on the another. On both clusters, deployed webhooks have DELETE method unlike the template. It is fine on cluster where I used helm cli, but on cluster with argocd it seems argocd is constantly re-syncing the webhook. This makes karpenter pods emit reconcile errors every few second and no node is spawned. I still don't understand why deployed webhooks get DELETE method back again, but so far I believe this PR could fix current problem. I think this is due to knative's webhook reconciliation behavior. Fix SGTM
gharchive/pull-request
2022-11-19T12:01:04
2025-04-01T06:38:00.166423
{ "authors": [ "ellistarn", "skdltmxn" ], "repo": "aws/karpenter", "url": "https://github.com/aws/karpenter/pull/2891", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1888655235
Getting the client certificate with rustls mTLS Security issue notifications Not a security issue Problem: This is more of a question, but is there a way in s2n-quic to get the client certificate from the connection that was accepted? I'd like to extract some information from the certificate of the client that the server accepted, but I can't find a way to do it. Right now I'm using a channel and sending that info to a receiver that the server polls when a new connection has been accepted. That seems very fragile, as I can't ascertain whether the next value in the channel is associated with the connection that was accepted (but that may be true anyway). Is it possible to somehow get access to the client's TLS certificate after a connection has been received? There's currently not an API for this, unfortunately. I think the best solution would be to wire up an event to the TLS provider that emits the client's certificate. You would then handle that event and store it on the connection's event context. This value can then be queried by the application using the Connection::query_event_context function. Any plans for this?
gharchive/issue
2023-09-09T09:09:35
2025-04-01T06:38:00.169569
{ "authors": [ "altonen", "camshaft", "thynson" ], "repo": "aws/s2n-quic", "url": "https://github.com/aws/s2n-quic/issues/1957", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1875027924
Pod not starting with EKS v20230825 What happened: When running a pod with a private image with a ruby/rails application we got the following error message on start : Last State: Terminated Reason: StartError Message: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "rails": cannot run executable found relative to current directory: unknown Exit Code: 128 Started: Thu, 01 Jan 1970 01:00:00 +0100 Finished: Wed, 30 Aug 2023 16:55:15 +0200 We notice some strange things : unknown current working dir Started at epoch 0 Our image define a PATH environment variable to add a relative path like ENV PATH=bin:$PATH (and WORKDIR is also correctly set on out docker image). Adding .spec.workingDir on pod definition doesn’t change the behavior. What you expected to happen: To run a pod normally without any error. The same pod definition run perfectly on v1.26.6-eks-a5565ad. How to reproduce it (as minimally and precisely as possible): Run the same pod definition on v1.26.7-eks-8ccc7ba  / ami-0c8f13e2e3c8de829 node. Anything else we need to know?: Environment: AWS Region: eu-west-3 Instance Type(s): Spot - with karpenter with the following constraints : requirements: - key: topology.kubernetes.io/zone operator: In values: - eu-west-3a - eu-west-3b - eu-west-3c - key: kubernetes.io/arch operator: In values: - amd64 - key: karpenter.k8s.aws/instance-size operator: NotIn values: - nano - micro - small - medium - large - xlarge - metal - key: karpenter.k8s.aws/instance-category operator: In values: - m - r - c - key: karpenter.sh/capacity-type operator: In values: - spot - on-demand - key: kubernetes.io/os operator: In values: - linux EKS Platform version (use aws eks describe-cluster --name <name> --query cluster.platformVersion): eks.5 Kubernetes version (use aws eks describe-cluster --name <name> --query cluster.version): 1.26 AMI Version: AMI Release v20230825 Kernel (e.g. uname -a): 5.10.186-179.751.amzn2.x86_64 Release information (run cat /etc/eks/release on a node): BASE_AMI_ID="ami-0f2b325398f933a81" BUILD_TIME="Fri Aug 25 20:12:27 UTC 2023" BUILD_KERNEL="5.10.186-179.751.amzn2.x86_64" ARCH="x86_64" This looks related to a restriction introduced in go 1.19 os.Exec that executables in the PWD have to be referenced with “./“: https://tip.golang.org/doc/go1.19#os-exec-path Does that fix your issue? The latest release contains a runc that was compiled with a newer version of golang (1.20.x), to address some CVE's that weren't going to be fixed in 1.18.x. That's where the behavior change is coming from. AMI release runc go v20230816 1.1.7-1.amzn2 1.18.6 v20230825 1.1.7-3.amzn2 1.20.7 GODEBUG=execerrdot=0 is how you get back older behavior until you can fix things. There's some deeper details in https://github.com/golang/proposal/blob/master/design/56986-godebug.md as well. runc and containerd are Amazon Linux 2 packages, so https://alas.aws.amazon.com/alas2.html go version $FILE will give you the version of go used to build $FILE as well 👍 Hi, Sorry, I was OoO. I confirm that comes from GO security update. Simply replace : ENV PATH=bin:$PATH to ENV PATH=/my/absolute/path/bin:$PATH fix the issue. Cheers
gharchive/issue
2023-08-31T08:21:10
2025-04-01T06:38:00.210706
{ "authors": [ "cartermckinnon", "dims", "jBouyoud" ], "repo": "awslabs/amazon-eks-ami", "url": "https://github.com/awslabs/amazon-eks-ami/issues/1410", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
1338327429
Add raw table log for iptables to log collector script Description of changes: The command `iptables --list` outputs the result of the filter table by default:

```
iptables --list --help
iptables v1.8.4
(snip)
--table	-t table	table to manipulate (default: `filter')
```

However, the result of the filter table is already captured in the file "iptables-filter.txt", so the files "iptables.txt" and "iptables-filter.txt" contain the same result. I believe that the table we want is the raw table, therefore I added it. The raw table is used by security groups for pods. Testing Done For this verification, I added the following rules to the raw table:

```
sudo iptables -t raw -I PREROUTING -p tcp --dport 8888 -j TRACE
sudo iptables -t raw -I OUTPUT -p tcp --dport 8888 -j TRACE
```

I executed the following commands:

```
curl -O https://raw.githubusercontent.com/hiraken-w/amazon-eks-ami/7ad03c22ff99ba06cecc52c7c52ee1a931b99aa4/log-collector-script/linux/eks-log-collector.sh
sudo bash eks-log-collector.sh
```

As a result, the raw table was output successfully:

```
cat iptables-raw.txt
Chain PREROUTING (policy ACCEPT 1406 packets, 255K bytes)
 pkts bytes target prot opt in  out source     destination
    0     0 TRACE  tcp  --  *   *   0.0.0.0/0  0.0.0.0/0    tcp dpt:8888

Chain OUTPUT (policy ACCEPT 774 packets, 85475 bytes)
 pkts bytes target prot opt in  out source     destination
    0     0 TRACE  tcp  --  *   *   0.0.0.0/0  0.0.0.0/0    tcp dpt:8888
=======
Total Number of Rules: 2
```

Please open a new PR if this change is still relevant; apologies for the radio silence.
gharchive/pull-request
2022-08-14T19:12:50
2025-04-01T06:38:00.214015
{ "authors": [ "cartermckinnon", "hiraken-w" ], "repo": "awslabs/amazon-eks-ami", "url": "https://github.com/awslabs/amazon-eks-ami/pull/989", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
662694952
Add support for Hong Kong (ap-east-1) Region Is it possible to add support for the Hong Kong (ap-east-1) region? https://github.com/olddriver4/Codedeploy-Script
gharchive/issue
2020-07-21T07:31:15
2025-04-01T06:38:00.241388
{ "authors": [ "ericli0209", "olddriver4" ], "repo": "awslabs/aws-codedeploy-plugin", "url": "https://github.com/awslabs/aws-codedeploy-plugin/issues/108", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1434467756
Maintenance: Refresh SAM examples Summary The repo contains SAM examples of how to use the libraries. These examples are one of the most visited pages in the repository, and since they were released the project and its documentation have evolved. The examples should be updated to reflect that. Why is this needed? The SAM samples were first created several months ago and in the meantime several updates have happened, so it's time to refresh them. Which area does this relate to? Other Solution Below is a list of changes, in no particular order:

Updates to README
- [ ] Change the section that reads "You will need to have a valid AWS Account in order to deploy these resources" to a "Note" with GitHub-specific markdown (see here), so that it's better highlighted.
- [ ] Link documentation about Layers in the Layers section of the README
- [ ] Modify the X-Ray section of the docs to point to the CloudWatch Traces section instead
- [ ] Remove the section that talks about/lists all IDEs (see image in details below)

![Screenshot 2022-11-03 at 11 26 03](https://user-images.githubusercontent.com/7353869/199697804-a53d6c01-fbbf-4b42-96f3-c4df42f82c35.png)

Maintenance
- [ ] Update Powertools dependencies to the latest available at time of PR
- [ ] Update all devDependencies to the latest available & test that everything works
- [ ] Move code back to examples/lambda-functions (where it was originally) & symlink (how it was originally)

Code
- [ ] Update one Lambda function to use Middy middleware usage (see docs)
- [ ] Update the second Lambda to use class-based/decorator usage (see docs)
- [ ] Update the third Lambda (manual usage - like it is now) and make sure all features are represented (see docs)
- [ ] Move Powertools instances creation into a shared file & import it throughout the other files (see example here, and the sketch after this list)
- [ ] Make sure all functions are minified, have source maps, use `NODE_OPTIONS: --enable-source-maps` ([here](https://docs.aws.amazon.com/lambda/latest/dg/typescript-exceptions.html)), and use `AWS_NODEJS_CONNECTION_REUSE_ENABLED = 1` (here)
- [ ] Move DynamoDB client instantiation into a separate shared file, reuse it across functions (see example here) & apply tracing to it
- [ ] Move from aws-sdk (v2) to @aws-sdk/client-dynamodb (v3 - docs) - keep DocumentClient
- [ ] Add an example of an HTTP call (i.e. fake retrieving data at some point) at the beginning of each function, use phin to make requests, make a request to https://httpbin.org/#/Dynamic_data/get_uuid and add the returned uuid to the logs (with this), to the main segment as an annotation (see docs), and as metadata to metrics (using addMetadata).
- [ ] All functions should log the incoming event: middleware/decorator via parameter, manual via a logger.info(...)

Acknowledgment
- [X] This request meets Lambda Powertools Tenets
- [ ] Should this be considered in other Lambda Powertools languages? i.e. TypeScript, Java

@dreamorosi as discussed you can assign this to me, I will start working on this @dreamorosi I have just created a PR covering the mentioned changes
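For the "move Powertools instances creation into a shared file" item above, a minimal sketch of what such a shared module could look like follows. This is an illustration only — the file path, service name, and namespace are placeholders, not values from the actual examples:

```ts
// commons/powertools.ts — single place to create the Powertools instances
import { Logger } from '@aws-lambda-powertools/logger';
import { Metrics } from '@aws-lambda-powertools/metrics';
import { Tracer } from '@aws-lambda-powertools/tracer';

const serviceName = 'sam-example'; // placeholder service name

export const logger = new Logger({ serviceName, logLevel: 'INFO' });
export const metrics = new Metrics({ serviceName, namespace: 'samExample' });
export const tracer = new Tracer({ serviceName });
```

Each Lambda handler would then import the same instances instead of creating its own, whether it uses the Middy middleware, decorators, or manual calls. A plain-handler sketch (handler name and metadata key are hypothetical):

```ts
// get-item.ts — reusing the shared instances
import { logger, metrics } from './commons/powertools';

export const handler = async (event: unknown): Promise<void> => {
  logger.info('Incoming event', { event });
  metrics.addMetadata('handler', 'get-item');
};
```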
gharchive/issue
2022-11-03T10:59:27
2025-04-01T06:38:00.255378
{ "authors": [ "bpauwels", "dreamorosi" ], "repo": "awslabs/aws-lambda-powertools-typescript", "url": "https://github.com/awslabs/aws-lambda-powertools-typescript/issues/1140", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
78674027
Does aws-sdk-xamarin have dynamodb mapper Hey @tawalke, does this library (aws-sdk-xamarin) have the concept of a dynamodb mapper or an alternative? http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/datamodeling/DynamoDBMapper.html? Cheers, Jason @jasonmc86 Not quite but you should be able to use the Document API: http://blogs.aws.amazon.com/net/post/Tx2R0WG46GQI1JI/-span-class-matches-DynamoDB-span-Series-Document-Model Let me know if this provides what you want with JSON.NET serializing to an object and back out. Yep document api and newtonsoft worked together perfectly to give me what i needed thanks :+1: Excellent, glad to hear it!
gharchive/issue
2015-05-20T19:59:25
2025-04-01T06:38:00.258874
{ "authors": [ "findly-jasonmcmonagle", "jasonmc86", "tawalke" ], "repo": "awslabs/aws-sdk-xamarin", "url": "https://github.com/awslabs/aws-sdk-xamarin/issues/29", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1301110171
updates This is still WIP. Do not close the request. AWS CodeBuild CI Report CodeBuild project: githubautobuild-for-cdk-v2 Commit ID: d149e2f12a6a0d38666f2361234813cd7c8987f8 Result: SUCCEEDED Build Logs (available for 30 days) Powered by github-codebuild-logs, available on the AWS Serverless Application Repository AWS CodeBuild CI Report CodeBuild project: githubautobuild-for-cdk-v1 Commit ID: d149e2f12a6a0d38666f2361234813cd7c8987f8 Result: SUCCEEDED Build Logs (available for 30 days) Powered by github-codebuild-logs, available on the AWS Serverless Application Repository
gharchive/pull-request
2022-07-11T19:09:59
2025-04-01T06:38:00.276517
{ "authors": [ "aws-solutions-constructs-team", "gijayah213" ], "repo": "awslabs/aws-solutions-constructs", "url": "https://github.com/awslabs/aws-solutions-constructs/pull/729", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1816179022
Pipeline creation fails in DataProcessing Summary Pipeline creation fails in the DataProcessing stack creation. Steps to reproduce Followed the exact same steps as per the workshop, created a project from the web console, used Kinesis as the sink; after putting in all the config, on creation of the pipeline, the data processing stack fails. What is the current bug behavior? Pipeline creation fails. What is the expected correct behavior? The pipeline should get created. Relevant logs and/or screenshots

```
Received response status [FAILED] from custom resource. Message returned: Socket timed out without establishing a connection within 5000 ms
Logs: /aws/lambda/Clickstream-DataProcessin-GlueTablePartitionSyncer-LHI331oLoIf1
    at Timeout._onTimeout (/var/task/index.js:13076:30)
    at listOnTimeout (node:internal/timers:559:17)
    at processTimers (node:internal/timers:502:7)
(RequestId: bd2a66f8-2797-41d4-8323-fb5c0c0b0ea6)
```

Possible fixes Not sure. This is :bug: Bug Report Hi @snjkumar23 , the data processing stack provisions a Glue data catalog to hold the processed clickstream data. It requires access to the Glue endpoint, and a job runs periodically to create the partitions of the Glue tables. This function is placed in the private subnets you configured for the ingestion module, and the web console does some network checking of the NAT gateway if you specify private subnets with a NAT gateway, or isolated subnets with the VPC endpoints required by the solution. However, there is a known edge case: the private subnets have a NAT gateway and an internet route through it, but the VPC also configures VPC endpoints — for example, a Glue endpoint — and the security group of the existing Glue endpoint does not allow access from the Lambda function created by the solution. You can check your VPC endpoint for Glue and update the inbound rule of its security group to allow requests from the Lambda function. Then click the Retry action on the pipeline detail page to resume provisioning of the pipeline. Let us know if it helps. Thanks. Issue resolved — the VPC endpoint security group had to allow the Lambda function to access it.
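The fix described above is usually applied by editing the Glue VPC endpoint's security group in the console, but if that endpoint happens to be managed with AWS CDK, the same inbound rule could be expressed roughly as below. This is only a sketch under assumptions — the security-group IDs, construct IDs, and stack name are placeholders, and the real environment may reference the Lambda's security group differently:

```ts
import { App, Stack } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

const app = new App();
const stack = new Stack(app, 'ClickstreamGlueEndpointPatch'); // hypothetical stack

// Look up the two existing security groups by ID (placeholder IDs).
const glueEndpointSg = ec2.SecurityGroup.fromSecurityGroupId(stack, 'GlueEndpointSg', 'sg-0123456789abcdef0');
const syncerLambdaSg = ec2.SecurityGroup.fromSecurityGroupId(stack, 'PartitionSyncerSg', 'sg-0fedcba9876543210');

// Allow HTTPS (the Glue API) from the partition-syncer Lambda to the interface endpoint.
glueEndpointSg.addIngressRule(
  syncerLambdaSg,
  ec2.Port.tcp(443),
  'Allow Glue API calls from the clickstream GlueTablePartitionSyncer Lambda',
);
```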
gharchive/issue
2023-07-21T17:57:03
2025-04-01T06:38:00.282370
{ "authors": [ "snjkumar23", "zxkane" ], "repo": "awslabs/clickstream-analytics-on-aws", "url": "https://github.com/awslabs/clickstream-analytics-on-aws/issues/87", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2389022919
RAG pattern with LangChain or LlamaIndex Community Note Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request If you are interested in working on this issue or have submitted a pull request, please leave a comment What is the outcome that you are trying to reach? Describe the solution you would like Describe alternatives you have considered Additional context I would like to work on this
gharchive/issue
2024-07-03T16:23:50
2025-04-01T06:38:00.285478
{ "authors": [ "darrenlin", "vara-bonthu" ], "repo": "awslabs/data-on-eks", "url": "https://github.com/awslabs/data-on-eks/issues/567", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
926387294
EKS Launch Template now has multiple mentions of "NODE_TAINT" The awk on quickinstall.sh line 161 fails because there are multiple lines with NODE_TAINTS= in the launch template now. Adding -m1 limits the match to the first hit, making the awk replacement work again. Without this, the daemon fails to add/update the taint in main.py line 206, as there are no taints defined on the node. Error in the logs:

```
Registering game server
The game server is already registered
The instance is HEALTHY
Updating game server health
Claiming game server
The instance has already been claimed
Changing status to utilized
Traceback (most recent call last):
  File "main.py", line 347, in <module>
    main()
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "main.py", line 322, in main
    initialize_game_server(GameServerGroupName=game_server_group_name, GameServerId=game_server_id, InstanceId=instance_id)
  File "main.py", line 207, in initialize_game_server
    taints.append(taint)
AttributeError: 'NoneType' object has no attribute 'append'
```

Thanks @syvanen. cc @trevorrobertsjr Fixed by #12
gharchive/issue
2021-06-21T16:18:16
2025-04-01T06:38:00.288679
{ "authors": [ "jicowan", "syvanen", "trevorrobertsjr" ], "repo": "awslabs/fleetiq-adapter-for-agones", "url": "https://github.com/awslabs/fleetiq-adapter-for-agones/issues/11", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1318766946
feat: audio alert for thresholds Summary Add an audio player feature in @iot-app-kit/core to allow developers to play sounds in App Kit. Also add an audio alert feature that plays an audible alert whenever a threshold is breached by a visible new data point. Note: This PR is complete and ready for review but it should not be opened or merged in until the following bugs are fixed and merged into main to prevent incorrect behaviors of audio alerts: Breaking Blockers https://github.com/awslabs/synchro-charts/issues/150 points constantly rerender and cause audio alerts to play over and over again even though there's no new points. Make sure to update Synchro Chart package version when this is fixed. https://github.com/awslabs/iot-app-kit/issues/153 points constantly rerender and cause audio alerts to play over and over again even though there's no new points. Non-breaking blocker https://github.com/awslabs/synchro-charts/issues/153 This is an issue where thresholds aren't properly calculated as breaching, however audio alerts are meant to reflect what's displayed on Synchro Charts. If a threshold isn't displayed as being breached then no audio alerts are played, which follows the correct behavior as audio alerts. When this issue is fixed, audio alerts will follow along with the correct behavior as well without any necessary additional changes. Legal This project is available under the Apache 2.0 License. is this safe to merge in with the outstanding bugs you have outlined in the overview? is this safe to merge in with the outstanding bugs you have outlined in the overview? No it is not, the bugs must be fixed beforehand otherwise the behavior of audio alerts would not be as expected. What is the plan to merge this PR in? Please recreate this pull request off the new head, a history rewrite was pushed. Closing.
gharchive/pull-request
2022-07-26T21:23:02
2025-04-01T06:38:00.294138
{ "authors": [ "TheEvilDev", "diehbria", "janezhang10", "tracfren" ], "repo": "awslabs/iot-app-kit", "url": "https://github.com/awslabs/iot-app-kit/pull/179", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
920882677
remove go-report-card Issue #, if available: N/A Description of changes: Remove go-report-card because it seems to be unstable and giving us an F :( By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. Let's hold off. How long has this been failing? I expect it might be fixed soon. We can hold off and see, it’s been over 2 weeks I believe. Wow 2 weeks is a long time!
gharchive/pull-request
2021-06-15T00:45:31
2025-04-01T06:38:00.296339
{ "authors": [ "bwagner5", "ellistarn" ], "repo": "awslabs/karpenter", "url": "https://github.com/awslabs/karpenter/pull/453", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2182042001
Add client_config_default function for each client configuration Issues: Addresses #81. Description of changes: The goal of this PR is to simply the steps needed to setup a simple client. It adds a new function which returns a default client config with an in-memory group state storage backend. The new GroupStateStorageAdapter struct allows us to use any mls-rs group state storage, so we could easily surface the Sqlite storage as well now. Since I was working with the storage trait, I also took the opportunity to flatten the errors a little. Right now, we don't seem to have a use case for separate errors, so I feel this makes things a little simpler overall. Testing: New Python integration test. By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 and MIT license. Codecov Report Attention: Patch coverage is 73.33333% with 20 lines in your changes are missing coverage. Please review. Project coverage is 89.48%. Comparing base (2d92710) to head (5576dd3). Files Patch % Lines mls-rs-uniffi/src/config/group_state.rs 64.10% 14 Missing :warning: mls-rs-uniffi/src/config.rs 81.25% 6 Missing :warning: Additional details and impacted files @@ Coverage Diff @@ ## main #111 +/- ## ========================================== - Coverage 89.51% 89.48% -0.03% ========================================== Files 173 173 Lines 31261 31299 +38 ========================================== + Hits 27983 28009 +26 - Misses 3278 3290 +12 :umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.
gharchive/pull-request
2024-03-12T16:02:27
2025-04-01T06:38:00.307324
{ "authors": [ "codecov-commenter", "mgeisler" ], "repo": "awslabs/mls-rs", "url": "https://github.com/awslabs/mls-rs/pull/111", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
515184994
Add TLS 1.3 Finished Key labels and Finished Verify MAC calculation Issue # (if available): https://github.com/awslabs/s2n/projects/6#card-28412053 Description of changes: This PR adds necessary TLS 1.3 Finished Key labels and MAC functions needed to compute "Server Finished" and "Client Finished" in TLS 1.3 By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. Codecov Report :exclamation: No coverage uploaded for pull request head (tls13_finished_keys@a976d2f). Click here to learn what that means. The diff coverage is n/a.
gharchive/pull-request
2019-10-31T05:53:27
2025-04-01T06:38:00.311953
{ "authors": [ "codecov-io", "zz85" ], "repo": "awslabs/s2n", "url": "https://github.com/awslabs/s2n/pull/1223", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
574249606
Use key and factors methods instead of direct RSA accessors Please note that while we are transitioning from travis-ci to AWS CodeBuld, some tests are run on each platform. Non-AWS contributors will temporarily be unable to see CodeBuild results. We apologize for the inconvenience. Issue # (if available): N/A Description of changes: By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. Codecov Report :exclamation: No coverage uploaded for pull request head (rsa-accessors@b84c74d). Click here to learn what that means. The diff coverage is n/a.
gharchive/pull-request
2020-03-02T21:09:48
2025-04-01T06:38:00.314894
{ "authors": [ "andrew-kaufman", "codecov-io" ], "repo": "awslabs/s2n", "url": "https://github.com/awslabs/s2n/pull/1611", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
639406713
Adds a proof harness for s2n_stuffer_extract_blob Please note that while we are transitioning from travis-ci to AWS CodeBuild, some tests are run on each platform. Non-AWS contributors will temporarily be unable to see CodeBuild results. We apologize for the inconvenience. Resolved issues: N/A. Description of changes: Adds a proof harness for the s2n_stuffer_extract_blob function; Adds a pre- and post-conditions to the s2n_stuffer_extract_blob function; Call-outs: N/A. Testing: N/A. By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. Looks like that memcpy -> s2n_blob_init change introduced a memory leak: Looks like that memcpy -> s2n_blob_init change introduced a memory leak: I reverted the change :) we can still keep the memcpy_check and have a valid blob. @camshaft no more memory leak, ready for another round! Looking at the function itself, does it make sense to do a free followed by alloc? Shouldn't that just be realloc? Looking at the function itself, does it make sense to do a free followed by alloc? Shouldn't that just be realloc? Nice catch! Implemented as suggested! @danielsn
gharchive/pull-request
2020-06-16T06:29:51
2025-04-01T06:38:00.319837
{ "authors": [ "camshaft", "danielsn", "feliperodri" ], "repo": "awslabs/s2n", "url": "https://github.com/awslabs/s2n/pull/2025", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1706597349
[Feature] Type Inference of primitives and module selection API change Currently, we have the following schedule operations logic for gpt2. # code snippet from slapo/model_schedule/gpt2.py ... attn_op = [] for idx in range(model_config.num_hidden_layers): sub_sch = sch[attn_path.replace("N", str(idx))] with init_empty_weights(enable=delay_init): new_mod = Attention(**init_config) attn_op.append(new_mod.module.attn_op_name) sub_sch.replace(new_mod) cnt += 1 ... Issues of this code snippet from my view: the primitive function replace cannot provide type inference features: difficult to know the options for the primitive and not possible to get the doc string for the primitive. selection for sub graph is not intuitive due to the concept of the sub schedule. Treating the schedule as a dictionary/hash table is not that intuitive to me. For a single model, it is natural to me that we have a single schedule for this model. The schedule can only affect part of the model, and consider them as a list of tuples, e.g., (module_part_id, "replace_with", new_module_obj). This can also facility debugging the schedule, e.g., removing entries of in the schedule to disable schedules. Recommend to changes the APIs to the following ... for idx in range(model_config.num_hidden_layers): sub_module = slapo.select(model, "transformer.h."+str(idx)) with init_empty_weights(enable=delay_init): new_mod = Attention(**init_config) cur_schedule = slapo.replace(cur_schedule, sub_module, new_mod) cnt += 1 ... And the select method can be further improved to consider fuzzy match ... sub_modules = slapo.select("transformer.h.*") with init_empty_weights(enable=delay_init): new_mod = Attention(**init_config) cur_schedule = slapo.replace(cur_schedule, sub_modules, new_mod) cnt = len(sub_modules) ... My two cents. For the first point, it seems not related to the API. You still need to register each primitive anyways. The current registration may not be intuitive to new developers, but I believe a good tutorial could largely solve this issue. After all developers don't need to know how the registration works, but only need to where to find the primitive implementation and their doc strings. For the second point, I'm not sure I got it. To me, sch["transformer.h."+str(idx)] is more intuitive: This is similar as the schedule language of Halide and TVM. This is similar as the way to access submodules in a PyTorch model. On the other hand, slapo.select("transformer.h"+str(idx)) confuses me because I cannot tell which model/schedule I'm working on by looking at this statement. I could think of two ways to maintain the current working schedule: Global variable. This is unsafe and could be a mess. Context manager. So you may need to use something like with slapo.schedule(model): to wrap an entire schedule logic. This might be working, but it seems not worth to spend engineering efforts on this refactoring. For fuzzy matching, there's nothing preventing us from implementing sub_schs = sch["transformer.h.*"]. Sorry not intend to have a global variable. But to have all primitive in a functional mode. And for the context manager, I didn't intend to change it. Basically, what I was recommending is to make the operation as explicit as possible. I think the current API is fine for users who are familiar with the coding style of Halide/TVM. We do not need to change the interface. VSCode is able to capture the data type of __getitem__, so writing sch[op].replace(...) can still prompt the correct hints for programmers. 
In order to achieve that, what we need to do is change the implementation of those primitives. I would suggest explicitly exposing the primitives to the programmers, but using separate files to implement the different primitives, just like what PyTorch does -- having a top-level interface and then dispatching the nn.module op to the nn.functional implementation. This way couples the concept of schedule and module/graph. I don't think it's a good way to manage the schedules. This kind of hierarchical organization of the schedule also increases the difficulty of debugging in some sense. And limiting the user base to the Halide/TVM community is not good in my view. We should expect more MLEs to be able to use Slapo to adopt different optimized kernels and distributed strategies.

> And limiting the user base to the Halide/TVM community is not good in my view. We should expect more MLEs to be able to use Slapo to adopt different optimized kernels and distributed strategies.

I think the point is we need fine-grained control for different optimizations. That is why we need to design the schedule in a hierarchical way and apply the schedule to a specific submodule. For MLEs, I don't expect them to use these low-level primitives. The best way for them is to directly generate an optimized model, so we need to add more automation to it. Then they only need to call some APIs like .autoshard() or .fuse_all() to accomplish complicated optimizations. Our decoupled primitives are actually specifying the design space, which provides a good interface for compilers to further optimize.
gharchive/issue
2023-05-11T21:41:08
2025-04-01T06:38:00.329353
{ "authors": [ "chhzh123", "comaniac", "zarzen" ], "repo": "awslabs/slapo", "url": "https://github.com/awslabs/slapo/issues/91", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
821652962
Endpoint Discovery Design and implement dynamic endpoint discovery. See also: * Endpoint discovery * /seps/accepted/shared/endpoint-discovery/ * /seps/accepted/shared/sts_regionlization/ Needs more investigation to determine whether this is still relevant.
gharchive/issue
2021-03-04T01:07:49
2025-04-01T06:38:00.331416
{ "authors": [ "ianbotsf", "kggilmer" ], "repo": "awslabs/smithy-kotlin", "url": "https://github.com/awslabs/smithy-kotlin/issues/146", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1357824358
Convert time extension not working for non-smithy crates DateTimeExt works for aws_smithy_types but not for other crates like EC2.

```rust
use aws_smithy_types_convert::date_time::DateTimeExt;
use chrono::Utc;
use std::{convert::TryFrom, time::SystemTime};

fn main() {
    aws_smithy_types::date_time::DateTime::from_secs(5).to_chrono_utc(); // works

    let ec2_time = aws_sdk_ec2::types::DateTime::from_secs(5);
    ec2_time.to_chrono_utc(); // doesn't work; no method found

    // works
    let sys_time = SystemTime::try_from(ec2_time).unwrap();
    let chrono_time = chrono::DateTime::<Utc>::from(sys_time);
}
```

Minimum working example repo here --> https://github.com/allan2/aws-convert-time I think this should work for you if you change the 0.48 versions to 0.47 to be consistent with what the EC2 crate is depending on:

```toml
aws-smithy-types-convert = { version = "0.47", features = ["convert-chrono"] }
aws-smithy-types = "0.47"
aws-sdk-ec2 = "0.17"
```

Thanks John. That works. Is there any way to show a warning when versions are mismatched?

> Is there any way to show a warning when versions are mismatched?

Not an easy way that I'm aware of, unfortunately. You can setup cargo-deny to disallow or warn on duplicate dependencies, but it will likely complain about many other transitive dependencies as well. I think this is a problem that will go away once we reach 1.0 since we'll have stability guarantees among all these crates. Until then, the versions.toml file in aws-sdk-rust tracks which versions pair together for any given release (for a specific release, look at that file at the release tag).
gharchive/issue
2022-08-31T20:18:43
2025-04-01T06:38:00.335273
{ "authors": [ "allan2", "jdisanti" ], "repo": "awslabs/smithy-rs", "url": "https://github.com/awslabs/smithy-rs/issues/1688", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2105020067
chore: change version of pymoo This hopefully fixes the unit tests By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. If you rebase now, the CI should go through
gharchive/pull-request
2024-01-29T09:19:17
2025-04-01T06:38:00.336494
{ "authors": [ "aaronkl", "mseeger" ], "repo": "awslabs/syne-tune", "url": "https://github.com/awslabs/syne-tune/pull/806", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
851321562
SimpleStorageResource.getURL should not encode s3 object key slash character From spring-cloud-aws created by pilak: spring-cloud/spring-cloud-aws#768 Type: Bug Component: S3 Describe the bug When trying to get SimpleStorageResource URL it also encodes the slashes in the object name. But in AWS S3 slashes represents folders and may not be encoded, as it is not the same URL anymore. Sample @Test public void getUri_escapes_characters() throws Exception { AmazonS3 s3 = mock(AmazonS3.class); when(s3.getRegion()).thenReturn(Region.US_West_2); SimpleStorageResource resource = new SimpleStorageResource(s3, "bucketName", "some/[objectName]", new SyncTaskExecutor()); assertThat(resource.getURI()) .isEqualTo(new URI("https://s3.us-west-2.amazonaws.com/bucketName/some/%5BobjectName%5D")); } throws org.opentest4j.AssertionFailedError: Expecting: <https://s3.us-west-2.amazonaws.com/bucketName/some%2F%5BobjectName%5D> to be equal to: <https://s3.us-west-2.amazonaws.com/bucketName/some/%5BobjectName%5D> but was not. I think this is a mistake of me, because I retried with a public AWS content URI and both worked well with encoded and decoded slashes as long as only the objectName is encoding. I don't why it didn't worked at previous try... Well I apologize for this erroneous report @pilak no worries 👍
gharchive/issue
2021-04-06T11:16:51
2025-04-01T06:38:00.341871
{ "authors": [ "maciejwalkowiak", "pilak" ], "repo": "awspring/spring-cloud-aws", "url": "https://github.com/awspring/spring-cloud-aws/issues/110", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
402381468
ETags don't reflect modified content The interception appears to happen after the ETag header has already been set for the response. Consequently, the ETag doesn't reflect the changed output, which can cause browser caching issues--even if the content is wildly different from one page load to the next, the server is telling the browser, "Nah, nothing changed since the last time you loaded this resource." It's pretty simple (even if a little inefficient) to regenerate the ETag with another library (e.g., https://www.npmjs.com/package/etag) right before calling send(). But the whole ETag thing is a hazard that most users are unlikely to recognize. Would be great to have a built-in option, or at least a warning in the docs/examples. We got hit by an ETag issue where every other request would fail to intercept. This was made even more difficult to diagnose because of the Disable Cache checkbox being checked in Chrome DevTools; the problem would go away when DevTools was open. The issue we were having was that the interceptor would encounter an empty body because of the 304 Not Modified statusCode. Our solution was to check the status code in the isInterceptable method:

```js
isInterceptable() {
  // Don't intercept cached responses (etag)
  return res.statusCode !== 304
},
```

Alternatively, you can check the body in the intercept method:

```js
intercept(body, send) {
  if (!body) {
    send(body)
  } else {
    const stuff = JSON.parse(body)
    // do something...
    send(JSON.stringify(stuff))
  }
},
```

Finally, if you don't want to write any of these checks, you can disable the server cache entirely via a config:

```js
app.set('etag', false)
```
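The first comment's suggestion — recomputing the ETag with the etag package right before calling send() — isn't shown in the thread, so here is a rough, untested sketch of that idea. It follows the interceptor shape used above; the content-type check and the injected string are arbitrary placeholders, and express-interceptor ships no TypeScript types, so the parameter annotations are manual assumptions:

```ts
import express from 'express';
import interceptor from 'express-interceptor';
import etag from 'etag';

const app = express();

app.use(
  interceptor((_req, res) => ({
    // Placeholder condition: only touch HTML responses.
    isInterceptable: () => /text\/html/.test(res.get('Content-Type') ?? ''),
    intercept: (body: string, send: (b: string) => void) => {
      const modified = body.replace('</body>', '<!-- injected --></body>');
      // Recompute the ETag from the modified body so caches treat it as new content.
      res.set('ETag', etag(modified));
      send(modified);
    },
  })),
);

app.get('/', (_req, res) => {
  res.send('<html><body>hello</body></html>');
});

app.listen(3000);
```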
gharchive/issue
2019-01-23T18:52:16
2025-04-01T06:38:00.385311
{ "authors": [ "Soviut", "phillanier" ], "repo": "axiomzen/express-interceptor", "url": "https://github.com/axiomzen/express-interceptor/issues/46", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2747215549
Think more about SourcePos They were created when we were still using antlr, but they play important roles in the backend. Now that we're all-in grammar-kit, maybe the information stored in source pos can be simplified, and we might use text range for error report. The LSP makes use of them many times. Maybe we can make the lines & columns lazy, but they cannot be removed. However, Span and LineColSpan can be removed.
gharchive/issue
2024-12-18T09:03:09
2025-04-01T06:38:00.429638
{ "authors": [ "ice1000" ], "repo": "aya-prover/aya-dev", "url": "https://github.com/aya-prover/aya-dev/issues/1234", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
310371837
CP210X serial kernel module Could you please add CONFIG_USB_SERIAL_CP210X=m to the kernel config? This builds the cp210x kernel module which is needed by the HUSBZ-1 Zigbee/Zwave USB stick. CONFIG_USB_SERIAL_CP210X [=m] is already set in the recent pre-release kernels.
gharchive/issue
2018-04-02T01:35:35
2025-04-01T06:38:00.450650
{ "authors": [ "jbrnd", "xalius" ], "repo": "ayufan-rock64/linux-build", "url": "https://github.com/ayufan-rock64/linux-build/issues/150", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
495044925
How to print out predicted sequence in inference time? Dear @ayumiymk, Thank you for this repo, In inference time, the script print out the accuracy, so I want it to print out also the actual predicted sequence. Is it stored in lib/evaluators.py pred_list variable? Yes, you are right! Thank you. In main.py there was an evaluation right before the training loop: evaluator.evaluate(). So from your answer, the predicted sequence output at this step is supposed to be garbage, right? (In my case it outputs all random Latin characters in a Japanese/Chinese text recognition problem) I have done some Japanese/Chinese ocr with CRNN+CTC and the outputs at the very first steps were all blank characters (which is reasonable when using CTC loss). When we are using the Attention decoder, is the above-mentioned phenomenon normal? For attention-based methods, it is normal to output random characters at the very first steps. I really appreciate the support. Thank you @ayumiymk.
gharchive/issue
2019-09-18T07:39:12
2025-04-01T06:38:00.453668
{ "authors": [ "ayumiymk", "tumbleintoyourheart" ], "repo": "ayumiymk/aster.pytorch", "url": "https://github.com/ayumiymk/aster.pytorch/issues/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1642962439
🛑 Hacker News is down In 5883cd6, Hacker News (https://news.ycombinator.com) was down: HTTP code: 0 Response time: 0 ms Resolved: Hacker News is back up in 2d37475.
gharchive/issue
2023-03-27T23:16:01
2025-04-01T06:38:00.566316
{ "authors": [ "azhaganandhan" ], "repo": "azhaganandhan/uptime", "url": "https://github.com/azhaganandhan/uptime/issues/36", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2056814500
Add Time sensitive networking (TSN) support. Features added to NetX Duo: Generic link layer support Virtual local area network (VLAN) Multiple registration protocol (MRP) Multiple VLAN registration protocol (MVRP) Multiple Stream registration protocol (MSRP) Stream reservation protocol (SRP) Credit based shaper (CBS/Qav) Time aware shaper (TAS/Qbv) Frame preemption (FPE/Qbu) could you also add tsn test cases and pipeline in this PR?
gharchive/pull-request
2023-12-27T01:32:34
2025-04-01T06:38:00.576404
{ "authors": [ "TiejunMS", "bo-ms" ], "repo": "azure-rtos/netxduo", "url": "https://github.com/azure-rtos/netxduo/pull/226", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1041186470
Detect git when in git worktree Fixes #141 @azz Can this be merged? Sorry I haven't been monitoring repositories recently. Merged & should be released soon. Thanks for the contribution. :tada: This PR is included in version 3.1.3 :tada: The release is available on: npm package (@latest dist-tag) GitHub release Your semantic-release bot :package::rocket:
gharchive/pull-request
2021-11-01T13:41:47
2025-04-01T06:38:00.579623
{ "authors": [ "alber70g", "azz" ], "repo": "azz/pretty-quick", "url": "https://github.com/azz/pretty-quick/pull/142", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1064878973
chore: release 0.1.4 :robot: I have created a release *beep* *boop* 0.1.4 (2021-11-27) Features is_anagram (a128298) all_equal (94248e3) all_unique (1d0a9af) average (28d132b) capitalize every word (dc64add) converts a string to camelcase (6c5bbe1) find_multiples (ec846a9) hex_to_rgb (be4bd7c) rgb_to_hex (e7fa861) Bug Fixes all_equal regression (f914abf) all_unique regression (b4bd938) clippy (6654754) remove main.rs, this project is more suitable for library (f7c2452) Continuous Integration add release-please (f8c8643) add release-please (c7d326c) This PR was generated with Release Please. See documentation. Codecov Report Merging #3 (5c0525b) into master (b200f6e) will not change coverage. The diff coverage is n/a. @@ Coverage Diff @@ ## master #3 +/- ## ========================================= Coverage 100.00% 100.00% ========================================= Files 4 4 Lines 33 33 ========================================= Hits 33 33 Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update b200f6e...5c0525b. Read the comment docs.
gharchive/pull-request
2021-11-27T01:55:44
2025-04-01T06:38:00.593652
{ "authors": [ "azzamsa", "codecov-commenter" ], "repo": "azzamsa/30-seconds-of-rust", "url": "https://github.com/azzamsa/30-seconds-of-rust/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
776103081
can't add options to "cwd" I must be stupid or something but I can't get "max_dir_size" to work. My code looks like this:

```
{ "segments": [ "ssh", "cwd" { max_dir_size:2 } "git", "hg", "jobs", "root" ] } }
```

My error is:

```
[powerline-bash] Config file (/data/data/com.termux/files/home/.config/powerline-shell/config.json) could not be decoded! Error: Expecting ',' delimiter: line 4 column 11 (char 39)
```

Setting aside the issue that your configuration as shown is invalid JSON, the configuration options for individual segments belong outside of the segments property, like so:

```json
{
  "segments": [
    "ssh",
    "cwd",
    "git",
    "hg",
    "jobs",
    "root"
  ],
  "cwd": {
    "max_dir_size": 2
  }
}
```
gharchive/issue
2020-12-29T21:53:42
2025-04-01T06:38:00.600490
{ "authors": [ "Dan1jel", "minelminel" ], "repo": "b-ryan/powerline-shell", "url": "https://github.com/b-ryan/powerline-shell/issues/510", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1248420328
When They are Talking Compound

Compound / 复利 ("compound interest") / 利滚利 ("interest rolling on interest") / 驴打滚 ("snowballing")

We can maximize the interest earned by shortening the compounding interval. (PS: there is an upper bound.)

Compound interest and the natural logarithm base e

The base of the natural logarithm / Euler's number e. Suppose we have the expression

$\begin{align} (1 + \frac{1}{n} )^n = x \tag{1} \end{align}$

When $n=1$, $x=2$. As $n \rightarrow \infty$, we get $x = e = 2.7 1828 1828 45 90 45 ...$

That is, $e = \lim_{n \rightarrow \infty} (1+\frac{1}{n})^n$

Taylor expansion: $e = \frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + ...$

Compound interest

Simple interest settles the interest once at the end of the year, while compounding splits the year into several settlement periods. This gives the formula:

$\begin{align} PV \times ( 1 + \frac{r}{n} ) ^ n = FV \tag{2} \end{align}$

$PV$ (Present Value) is the initial value, $r$ is the nominal interest rate, $n$ is the number of periods, and $FV$ (Future Value) is the final value.

Suppose we have a principal of 100 and the bank quotes a 10% interest rate.
Monthly compounding ($n=12$): $100 \times ( 1 + \frac{0.1}{12} )^{12} = 110.47$
Daily compounding ($n=365$): $100 \times ( 1 + \frac{0.1}{365} )^{365} = 110.51$
Continuous compounding ($n \rightarrow \infty$): the latter part of $(2)$ becomes $(1 + \frac{r}{n} )^n \rightarrow e ^ r$, so there is an upper bound.

Postscript: if we define the fixed per-period interest/return rate as $i$ (interest), then $(2)$ simplifies to

$\begin{align} PV \times ( 1 + i ) ^ n = FV \tag{3} \end{align}$

which gives the rate-of-return formula

$\begin{align} \sqrt[n]{ ( \frac{FV}{PV} ) } - 1 \tag{4} \end{align}$

The lies of compound interest

Uncertainty: the world is random. Compound interest offers an illusory certainty; a judgment of "certainty" is, in essence, just a kind of belief.
Continuity: the continuity of time does not make events occur continuously.
Symmetry of returns: in the principal–agent mechanism of wealth, rights and responsibilities are asymmetric.
"The agent only thinks about how to prolong the game as much as possible, so as to collect more performance fees, and does not think about the principal's overall return." — Taleb, Skin in the Game
Not understanding expected value leads to an asymmetry between probability and payoff; under fat tails, emphasizing probability while ignoring payoff causes even bigger problems; under fat tails, a tiny deviation in estimating the actual distribution can produce a huge payoff error. Because of the nonlinearity, market participants' probability-forecast errors and final payoff errors follow completely different distributions: the probability-forecast error is a statistic between 0 and 1, so its distribution is thin-tailed, while the payoff error distribution is fat-tailed. — Taleb

Reality is uneven: part of uncertainty is precisely the "unevenness" of distributions.
Normal distribution / power-law distribution / fat-tailed distribution.
Power-law distribution: a power law is a functional relation between two quantities in which a relative change in one quantity causes a proportional change in the other raised to some power, independent of the initial values; one quantity behaves as a power of the other.
Fat-tailed distribution: a probability distribution with larger skewness or kurtosis than the normal or exponential distribution.

Forecasting, betting, and deciding are all fortune-telling.
Bayesian: keep re-judging according to the current situation; play each hand without memory; don't mind contradicting yourself; dare to update yourself.

Long-termism.
Law of large numbers: the more samples there are, the higher the probability that their arithmetic mean is close to the expected value.

The compound-interest myth: an anti-intellectual sale of "smartness".
Doing the right things, versus over-emphasizing doing things right.
The most useful place for the compounding effect is in investing in your own growth, building habits, thinking, and practice.

Refs
Compound interest - Wikipedia
李永乐 (Li Yongle) on the base of the natural logarithm e - YouTube
富人投資 | Will the compounding effect really not make you rich? - YouTube
The Lie of Compound Interest - WeChat Official Account
Correct time: 220122
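To make the "upper bound" claim concrete, the continuous-compounding limit for the same example follows directly from formula (2): as $n \rightarrow \infty$,

$FV = 100 \times e^{0.1} \approx 110.52$

so no compounding frequency can push the 10%-rate example above roughly 110.52, consistent with the monthly (110.47) and daily (110.51) values shown above.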
gharchive/issue
2022-05-25T17:17:14
2025-04-01T06:38:00.633425
{ "authors": [ "bGZo" ], "repo": "bGZo/blog", "url": "https://github.com/bGZo/blog/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
976408870
Fatal Python error: Segmentation fault when calling HLDAModel.make_doc().get_topics() Hi, thank you for your great work! I have been experimenting with the HLDA model, and whenever I try to get the topics of a document, my Notebook kernel crashes. System: MacOS 10.15.7, Python 3.9.4 Code: hlda = tp.HLDAModel.load(some_model) article: list[str] = Article.get(xxxx).doc doc = hlda.make_doc(article) doc.get_topics(top_n=10) Stack Trace Crashed Thread: 0 Dispatch queue: com.apple.main-thread Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000000 Exception Note: EXC_CORPSE_NOTIFY VM Regions Near 0: --> __TEXT 0000000105a9a000-0000000105d12000 [ 2528K] r-x/r-x SM=COW /Users/USER/*/*.9 Thread 0 Crashed:: Dispatch queue: com.apple.main-thread 0 libsystem_kernel.dylib 0x00007fff7358233a __pthread_kill + 10 1 libsystem_pthread.dylib 0x00007fff7363ee60 pthread_kill + 430 2 libsystem_c.dylib 0x00007fff7349993e raise + 26 3 libsystem_platform.dylib 0x00007fff736335fd _sigtramp + 29 4 ??? 000000000000000000 0 + 0 5 _tomotopy_avx2.cpython-39-darwin.so 0x00000001064c5a4e tomoto::TopicModel<Eigen::Rand::ParallelRandomEngineAdaptor<unsigned int, Eigen::Rand::MersenneTwister<long long vector[4], 312, 156, 31, 13043109905998158313ull, 29, 6148914691236517205ull, 17, 8202884508482404352ull, 37, 18444473444759240704ull, 43, 6364136223846793005ull>, 8>, 4ul, tomoto::IHLDAModel, tomoto::HLDAModel<(tomoto::TermWeight)0, Eigen::Rand::ParallelRandomEngineAdaptor<unsigned int, Eigen::Rand::MersenneTwister<long long vector[4], 312, 156, 31, 13043109905998158313ull, 29, 6148914691236517205ull, 17, 8202884508482404352ull, 37, 18444473444759240704ull, 43, 6364136223846793005ull>, 8>, tomoto::IHLDAModel, void, tomoto::DocumentHLDA<(tomoto::TermWeight)0>, tomoto::ModelStateHLDA<(tomoto::TermWeight)0> >, tomoto::DocumentHLDA<(tomoto::TermWeight)0>, tomoto::ModelStateHLDA<(tomoto::TermWeight)0> >::getTopicsByDoc(tomoto::DocumentBase const*, bool) const + 14 6 _tomotopy_avx2.cpython-39-darwin.so 0x00000001064c5a83 tomoto::TopicModel<Eigen::Rand::ParallelRandomEngineAdaptor<unsigned int, Eigen::Rand::MersenneTwister<long long vector[4], 312, 156, 31, 13043109905998158313ull, 29, 6148914691236517205ull, 17, 8202884508482404352ull, 37, 18444473444759240704ull, 43, 6364136223846793005ull>, 8>, 4ul, tomoto::IHLDAModel, tomoto::HLDAModel<(tomoto::TermWeight)0, Eigen::Rand::ParallelRandomEngineAdaptor<unsigned int, Eigen::Rand::MersenneTwister<long long vector[4], 312, 156, 31, 13043109905998158313ull, 29, 6148914691236517205ull, 17, 8202884508482404352ull, 37, 18444473444759240704ull, 43, 6364136223846793005ull>, 8>, tomoto::IHLDAModel, void, tomoto::DocumentHLDA<(tomoto::TermWeight)0>, tomoto::ModelStateHLDA<(tomoto::TermWeight)0> >, tomoto::DocumentHLDA<(tomoto::TermWeight)0>, tomoto::ModelStateHLDA<(tomoto::TermWeight)0> >::getTopicsByDocSorted(tomoto::DocumentBase const*, unsigned long) const + 35 7 _tomotopy_avx2.cpython-39-darwin.so 0x0000000106894f89 Document_getTopics(DocumentObject*, _object*, _object*) + 185 8 python 0x0000000105b21a45 cfunction_call + 69 9 python 0x0000000105ae0757 _PyObject_MakeTpCall + 375 10 python 0x0000000105bc8340 call_function + 624 11 python 0x0000000105bc5452 _PyEval_EvalFrameDefault + 28002 12 python 0x0000000105bc9134 _PyEval_EvalCode + 2852 13 python 0x0000000105bbe620 PyEval_EvalCode + 64 14 python 0x0000000105c0e90d pyrun_file + 333 15 python 0x0000000105c0c9c9 PyRun_SimpleFileExFlags + 729 16 python 0x0000000105c2b973 Py_RunMain + 2067 17 python 
0x0000000105c2bea3 pymain_main + 403 18 python 0x0000000105c2befb Py_BytesMain + 43 19 libdyld.dylib 0x00007fff7343acc9 start + 1 Please let me know if there is anything else I can do to help with debugging, thank you. Update 1: get_topic_dist() also crashes. Hi @dennishylau Thank you for reporting a bug. When you run make_doc(), it creates a document without any topic assignment, so calling get_topics() in this situation will not give you a proper result. You need to call infer() to estimate the distribution of topics in the doc before calling get_topics():

```python
doc = hlda.make_doc(article)
hlda.infer(doc)  # doc should be inferred first
doc.get_topics(top_n=10)
```

I'll fix the crashes when calling get_topics() or get_topic_dist() and add a warning message to call infer first. Hi @bab2min, wow thank you for the prompt reply! That makes perfect sense, what a silly mistake on my end. Closing this issue now, have a nice day!
gharchive/issue
2021-08-22T15:55:39
2025-04-01T06:38:00.639547
{ "authors": [ "bab2min", "dennishylau" ], "repo": "bab2min/tomotopy", "url": "https://github.com/bab2min/tomotopy/issues/140", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2584195786
EX1_python_file EX1_python_file EX1_python_file
gharchive/pull-request
2024-10-13T19:03:48
2025-04-01T06:38:00.640765
{ "authors": [ "babakyousefian" ], "repo": "babakyousefian/Advanced-Database-Homework-1", "url": "https://github.com/babakyousefian/Advanced-Database-Homework-1/pull/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
115961525
Add Olapic logo Add Olapic as a user @thejameskyle sorry, I closed the previous pr and send another more clean You probably want to center the logo in the svg @thejameskyle it's in the center Needs rebase @thejameskyle done and ready to merge
gharchive/pull-request
2015-11-09T20:43:21
2025-04-01T06:38:00.645299
{ "authors": [ "loverajoel", "thejameskyle" ], "repo": "babel/babel.github.io", "url": "https://github.com/babel/babel.github.io/pull/556", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
183188122
Possible conflict with transform-react-constant-elements? Using this with transform-react-constant-elements produces a cryptic error: Module build failed: Error: /Users/...path../: File/Program node, we can't possibly find a statement parent to this Everything's ok without transform-react-constant-elements. Full stack trace: at NodePath.getStatementParent (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/path/family.js:43:11) at PathHoister.getAttachmentPath (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/path/lib/hoister.js:110:26) at PathHoister.run (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/path/lib/hoister.js:160:25) at NodePath.hoist (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/path/modification.js:263:18) at PluginPass.JSXElement (/Users/sid/Dev/bridg-email-designer/node_modules/babel-plugin-transform-react-constant-elements/lib/index.js:39:16) at newFn (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/visitors.js:276:21) at NodePath._call (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/path/context.js:76:18) at NodePath.call (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/path/context.js:48:17) at NodePath.visit (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/path/context.js:105:12) at TraversalContext.visitQueue (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/context.js:150:16) at TraversalContext.visitSingle (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/context.js:108:19) at TraversalContext.visit (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/context.js:192:19) at Function.traverse.node (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/index.js:114:17) at NodePath.visit (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/path/context.js:115:19) at TraversalContext.visitQueue (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/context.js:150:16) at TraversalContext.visitMultiple (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/context.js:103:17) at TraversalContext.visit (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/context.js:190:19) at Function.traverse.node (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/index.js:114:17) at NodePath.visit (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/path/context.js:115:19) at TraversalContext.visitQueue (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/context.js:150:16) at TraversalContext.visitSingle (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/context.js:108:19) at TraversalContext.visit (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/context.js:192:19) at Function.traverse.node (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/index.js:114:17) at NodePath.visit (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/path/context.js:115:19) at TraversalContext.visitQueue (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/context.js:150:16) at TraversalContext.visitSingle (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/context.js:108:19) at TraversalContext.visit (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/context.js:192:19) at Function.traverse.node 
(/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/index.js:114:17) at NodePath.visit (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/path/context.js:115:19) at TraversalContext.visitQueue (/Users/sid/Dev/bridg-email-designer/node_modules/babel-traverse/lib/context.js:150:16) @ ./components/App.jsx 1:25-58 @ ./index.jsx Duplicate #194
gharchive/issue
2016-10-15T04:30:20
2025-04-01T06:38:00.669628
{ "authors": [ "boopathi", "f0rr0" ], "repo": "babel/babili", "url": "https://github.com/babel/babili/issues/195", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
506712635
Can I use Laravels Eloquent with the index model? Hi, If I have two models, Event.php and Dates.php. They have a hasMany and belongsTo relationship. Is it possible to create an Event index model that includes the dates database for searching? I tried doing something like protected $mapping = [ 'properties' => [ 'id' => [ 'type' => 'integer', 'index' => false ], 'title' => [ 'type' => 'text', 'analyzer' => 'english' ], 'dates' => [ 'properties' => [ 'date' => [ 'type' => 'date', ], 'event_id' => [ 'type' => 'integer', ] ] ], but I couldn't see the dates model. Currently the only way I can see to filter by date range is like so $dates = Date::search('*') ->whereBetween('date', ['2000-10-18 00:00:00','2019-10-18 00:00:00']) ->get(); $events = Event::whereIn('id', $dates->pluck('event_id'))->get(); return response()->json($events); I would love to be able to do $dates = Event::search('*') ->whereBetween('dates.date', ['2000-10-18 00:00:00','2019-10-18 00:00:00']) ->get(); Is this possible? after some playing around I was able to get it loaded by using protected $with = ['dates']; when I do http://127.0.0.1:9200/event/_search?pretty=true I can now see the dates. However I am still at a loss on how to search them. $dates = Event::search('*') ->whereBetween('dates.date', ['2000-10-18 00:00:00','2019-10-18 00:00:00']) ->get(); Doesn't work even though I can see that it is there source": { "id": 3, "description": "Omnis et rem aut et et. Aut eos ipsum rem. Ex ut tempora ut pariatur sint. Fugit quam rerum illo cupiditate.", "approved": 1, "created_at": "2019-10-13 17:33:26", "updated_at": "2019-10-13 17:33:26", "dates": [ { "id": 1, "event_id": 3, "date": "2004-06-15 17:28:36", "created_at": "2019-10-13 17:33:52", "updated_at": "2019-10-13 17:33:52" } ] I realized I was searching incorrectly and it works now!
gharchive/issue
2019-10-14T15:06:10
2025-04-01T06:38:00.674184
{ "authors": [ "chrisgrim" ], "repo": "babenkoivan/scout-elasticsearch-driver", "url": "https://github.com/babenkoivan/scout-elasticsearch-driver/issues/282", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2083179773
🛑 SM Service is down In 78f2ca8, SM Service (http://smservice.de) was down: HTTP code: 0 Response time: 0 ms Resolved: SM Service is back up in 8f9c0f2 after 14 minutes.
gharchive/issue
2024-01-16T07:32:32
2025-04-01T06:38:00.681062
{ "authors": [ "thomasrehm" ], "repo": "bachmannschumacher/upptime", "url": "https://github.com/bachmannschumacher/upptime/issues/2462", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1291491317
Allow GraphQL service components to query via GraphiQL in Definitions screen of API Docs Context I did some digging into this problem and I found a way to reproduce it Go to https://demo.backstage.io/catalog/default/api/starwars-graphql/definition Add the following query and hit play button { allFilms { edges { node { id } } } } You'll get the following error. { "errors": { "message": "Cannot use 'in' operator to search for 'subscribe' in null", "stack": "TypeError: Cannot use 'in' operator to search for 'subscribe' in null\n at $r (https://demo.backstage.io/static/550.ae2c2479.chunk.js:80:297)\n at https://demo.backstage.io/static/550.ae2c2479.chunk.js:49:169" } } This error is actually GraphiQL's attempt to parse response from the default fetcher. This is happening because GraphQlDefinition doesn't use the fetchers that are configured in apis.tsx. Feature Suggestion My understanding of the use case here is to allow users to query the API. Api Docs seems to be setup to use GraphiQL for documentation but not for querying. GraphiQL plugin can query using fetchers. Possible Implementation We could allow querying from GraphQLDefinitionWidget which would require providing a way to reuse or configure fetchers. Alternatively, we could change how we should documentation for GraphQL to no use GraphiQL. There may be some confusion here and I completely understand why. The definition in apis.ts for graphQlBrowseApiRef is meant for the @backstage/plugin-graphiql plugin. Although the @backstage/plugin-api-docs plugin uses the same component (https://github.com/graphql/graphiql), they are not connected in any way. Introspection and querying would be nice in the api-docs version. However, discussed in #5590, doing so in a generic way that works across all catalog api's and org envs might be tricky to cover sufficiently. That said, having a programmatic interface similar or possibly shared across the graphiql and api-docs plugins might be the way to go here. There could be a shared package with the graphql endpoint config, etc. which requires programmatic wiring in the app or backend (possibly proxy in some cases?). There would need to be some gluing metadata to match a catalog api entity to an appropriate endpoint config though. I think this issue should be reopened because users keep bumping into it. Our team is going to be looking into the GraphQL docs situation internally starting next week. There might be something we can contribute back once we have a closer look at this. @taras - Do you have any particular ideas/thoughts on this? In my mind there are 2 things here. There is documentation (usually best done with static docs generators) & there is the interactive usage of the schemas (typically through introspection). Graphiql v2 was released and is much more modular so was thinking of investigating what it offers over the current v1. Beyond that, it's an open road. @andrewthauer I've been keeping an eye on GraphQL documentation options. I haven't found anything. Did your look at the new GraphiQL docs yield anything? @andrewthauer I've been keeping an eye on GraphQL documentation options. I haven't found anything. Did your look at the new GraphiQL docs yield anything? @taras - Unfortunately, plans changed and we haven't looked into this further. For pure GraphQL documentation there is now https://magidoc.js.org. However, it's not a react component so it would likely require building static sites in CI, etc. and then having a link out to the static site. 
However, I haven't tried it out so perhaps there might be a nicer way. I kind of think there are 2 personas here. One where someone wants to interact with the GraphQL schema using introspection and the other is a more traditional API documentation approach. Perhaps there is a need for both given the GraphQL SDL itself isn't really tailored to do both really well (or at all) like OpenAPI can.
Did this ever make it into the RFC? I do agree that it's currently confusing that it uses Graphiql to give the illusion that your query is going to work. I'd almost rather see it just render the schema in a decent format if getting it to run actual queries is too big of a hill to climb.
I don't think there is an official RFC, but there have been a few related GH issues on the general subject and improvements of the GraphQL API Docs experience. I think we should probably split this into a couple of issues? For example:
a) Remove the confusing GraphiQL query building and execution and just have a pure "docs" view. I started playing around with GraphiQL v2 and it appears you can compose just the docs part without the rest of the editor. However, it's a bit sparse and looks funny initially. Maybe some styling would help. Another option is to use something like https://magidoc.js.org/ which is purpose built for this, but since it's a static site generator it doesn't integrate with react in any way and would need to be built & hosted externally and linked to.
b) Allow API Docs of type graphql to be interactive similar to OpenAPI. Since the GraphQL SDL spec doesn't support these annotations out of the box we'd need to define a way to do this. Possible options are through custom annotations on the SDL, having an annotation configuration on the related entity to define the connection, through code, through app-config.yaml or a combination. Technically not hard, but a proposal would be required to handle the appropriate use cases and requirements (connection types, auth, etc.)
@andrewthauer Thanks for your thoughtful feedback. I like A for now. I don't know about you, but our GraphQL APIs expose a playground off each API and there isn't a place to drop that link on the definition page. So, a definition page that 1) links to the playground for the API (if available) and 2) lists out the spec for browsing would clear up any confusion.
So we already have the Raw view of the SDL. Would be really nice if it was syntax highlighted though. Might be something to also investigate. A few potentially immediate options I see are:
- Replace the graphiql page with a component that can read some annotation that links to a dedicated docs site (e.g. graphql.org/docs-site-url: https://some.place.hosting/graphql/api/).
- Look at upgrading to graphiql v2 and only showing the Docs part.
- A combination of the 2, or possibly providing a pick and choose model between the existing view and a new view.
Technically removing the graphiql view and adding support for a custom annotation shouldn't be too difficult. The challenge being this would be a breaking change from a feature/user perspective potentially. Granted it might not be that big of a deal.
Technically you can override the entire component now, so another approach could be to provide a new component that links to docs as the default or optionally configurable to replace the current view. I'd love to contribute this, but given my focus, I definitely won't have the time in the short to medium term.
Any alternative solutions to this? I would like to see our developers running queries from the GraphiQL interface rather than using that interface for just showing the schema.
Since the GraphQL SDL itself does not contain provisions for auth, endpoints etc. like OpenAPI, this would require something custom; entities themselves may need to provide this information or it would have to be configured outside the API spec.
Might be worth looking at the new graphql-voyager plugin. I haven't tried it myself but am curious if it could be adapted to the API docs plugin in a more general way. I think the problem again being that it requires introspection and this could require auth which could need to be handled differently for each API.
@andrewthauer the voyager might be a good option. I was setting it up earlier and there is one wrinkle. It requires voyager.worker.js to be added to the public directory. It's in node_modules. I don't know if there is an API for adding files from node_modules to the public directory from a plugin.
@taras @andrewthauer seeing that this issue has gone stale and closed, is there a different plan for GraphQL APIs to be executed from within Backstage?
@shrutiyer do you mean to experiment with a GraphQL API in API Docs?
Correct. Similar to the "Try it out" feature within swagger/openapi specs. Is there any feature planned to do the same for GraphQL APIs?
Part of the challenge is that it's unclear how this should be solved. Each backend usually needs credentials and we don't have a way to provide these. We discussed removing the "try it" and replacing it with just documentation but it wasn't a priority for anyone.
@taras @andrewthauer It seems we cannot query the GraphQL APIs in Backstage and can only view the schema with the existing code? If I integrate @backstage/plugin-graphiql, will this allow me to query the GraphQL endpoints from the Backstage UI?
@shalinichennoju - The graphiql plugin will let you query through introspection when configured for each endpoint. However, this is a separate UI and plugin from the api-docs plugin and not based on the catalog or API entities.
Started to work on some enhancements for this one. First, for plugin-graphiql, I added a function to GraphQLEndpoints.ts to allow it to query all APIs with type=graphql and then create the endpoints automatically. This depends on an annotation for the API entity (backstage.io/api-graphql-url). Then in GraphiQLPage.ts, I call this function after the GraphQLBrowserAPI.getEndpoints(). With this approach, when I enable the graphiql plugin, I can play with the APIs that are defined as API entities. Another change I made is to replace the tab with a Select, and also to set another annotation, backstage.io/api-graphql-url-auth-url, to remind users where to get the token for querying this GraphQL API. The last change I made is to upgrade the graphiql plugin to 3.0.5, which is the latest; I am still working on some details of this change, and would like to raise a PR for these enhancements. https://github.com/backstage/backstage/pull/19312
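To make the "link out to an external docs site" option above concrete, here is a minimal sketch of what such a replacement component could look like. It assumes a hypothetical annotation name (graphql.org/docs-site-url, as floated earlier in this thread); only useEntity and the standard entity annotation shape are existing Backstage APIs, everything else is illustrative.

```tsx
import React from 'react';
import { useEntity } from '@backstage/plugin-catalog-react';

// Hypothetical annotation key; pick whatever your org standardizes on.
const DOCS_URL_ANNOTATION = 'graphql.org/docs-site-url';

export const GraphQLDocsLink = () => {
  const { entity } = useEntity();
  const docsUrl = entity.metadata.annotations?.[DOCS_URL_ANNOTATION];

  if (!docsUrl) {
    // Fall back to whatever is rendered today (e.g. the raw SDL view).
    return <span>No external GraphQL documentation configured for this API.</span>;
  }

  return (
    <a href={docsUrl} target="_blank" rel="noopener noreferrer">
      Open GraphQL documentation
    </a>
  );
};
```

Such a component could then be wired in wherever the current GraphQL definition widget is overridden, keeping the introspection-based views as separate, opt-in alternatives.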
gharchive/issue
2022-07-01T14:35:43
2025-04-01T06:38:00.764389
{ "authors": [ "SujithChowdari", "andrewthauer", "jtreher", "shalinichennoju", "shrutiyer", "taras", "younky-yang" ], "repo": "backstage/backstage", "url": "https://github.com/backstage/backstage/issues/12392", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1448805551
🐛 Bug Report: Scaffolding Backstage with yarn 3.2.4 does not work 📜 Description I did not have yarn installed on my machine, so I followed the yarn installation instructions which recommend getting the latest stable version, which was 3.2.4 in my case. When I tried to scaffold Backstage with npx @backstage/create-app I would get an error saying Error: Could not execute command yarn install The workaround was to downgrade to yarn 1.22.19 👍 Expected behavior Installation would work 👎 Actual Behavior with Screenshots The terminal says Error: Could not execute command yarn install 👟 Reproduction steps Install yarn 3.2.4 or greater Run npx @backstage/create-app 📃 Provide the context for the Bug. No response 🖥️ Your Environment Apple M1 Pro, macOS Monterey node 16.18.1 yarn 3.2.4 👀 Have you spent some time to check if this bug has been raised before? [X] I checked and didn't find similar issue 🏢 Have you read the Code of Conduct? [X] I have read the Code of Conduct Are you willing to submit PR? No response
I forgot to mention: I think it's totally fine for the project to not support yarn 3, but it might be worth stating which version of yarn should be used in the docs. I was following this guide.
Oh we do support yarn 3 for sure, I am personally on 3.2.3 and also on an M1 mac. Now I'm wondering if just 3.2.4 is problematic or if there's something else amiss in your env. I saw that you posted a gist with some output but it seems to be incomplete; was there no more information at the end of that?
Oh, I don't have the output anymore, but after listing the missing dependencies it just says Error: Could not execute command yarn install
Projects can be migrated to Yarn 3, but the template itself only supports Yarn classic at the moment. We can update the template, but for now I'm just suggesting this clarification in the docs: #14683
gharchive/issue
2022-11-14T21:43:24
2025-04-01T06:38:00.773425
{ "authors": [ "Rugvip", "freben", "robdodson" ], "repo": "backstage/backstage", "url": "https://github.com/backstage/backstage/issues/14622", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1874570784
🚀 Feature: Automatically filter Cloud Build workflow runs 🔖 Feature description Automatically filter Cloud Build history to the specified component name or source name. 🎤 Context Using the Google Cloud Build plugin, it currently shows Cloud Build history in the CI/CD page starting from the most recent build. If you do many builds in the specified project-slug, then the history will show these builds. It does not make sense to show builds not related to the component. Also, there appears to be no way to filter based on source using the filter at the upper right of this page. ✌️ Possible Implementation There should be a way to automatically filter or only show the results that match the component that has the CI/CD entityContent. Either by component name (default?), or something added to a component annotation (google.com/cloudbuild-source-name: myapp). 👀 Have you spent some time to check if this feature request has been raised before? [X] I checked and didn't find similar issue 🏢 Have you read the Code of Conduct? [X] I have read the Code of Conduct Are you willing to submit PR? No, but I'm happy to collaborate on a PR with someone else
I wonder if you could do something like this:

async listWorkflowRuns(options) {
  const workflowRuns = await fetch(
    `https://cloudbuild.googleapis.com/v1/projects/${encodeURIComponent(
      options.projectId
    )}/builds`,
    {
      headers: new Headers({
        Accept: "*/*",
        Authorization: `Bearer ${await this.getToken()}`
      })
    }
  );
  const builds = await workflowRuns.json();
  const { entity } = useEntity();
  const filter = builds.filter(b => b.REPO_NAME === entity.metadata.name);
  return filter;
}

I noticed projects.builds.list has a filter parameter, however it does not seem to work (INVALID_ARGUMENT): https://cloud.google.com/build/docs/api/reference/rest/v1/projects.builds/list?
Yep sounds like a good addition, but it might indeed need a new annotation or potentially configurable way of filtering things. Leaving this one open for contributions
This is still very much desired. I figured out how to use the filter parameter properly. I believe the below should be a viable option:
Requires useEntity import: import { useEntity } from '@backstage/plugin-catalog-react';

async listWorkflowRuns(options: {
  projectId: string;
}): Promise<ActionsListWorkflowRunsForRepoResponseData> {
  const { entity } = useEntity();
  const entityName = entity.metadata.name;
  const cbFilter = "substitutions.REPO_NAME=" + entityName;
  const workflowRuns = await fetch(
    `https://cloudbuild.googleapis.com/v1/projects/${encodeURIComponent(
      options.projectId,
    )}/builds?filter=${encodeURIComponent(
      cbFilter,
    )}`,
    {
      headers: new Headers({
        Accept: '*/*',
        Authorization: `Bearer ${await this.getToken()}`,
      }),
    },
  );
  const builds: ActionsListWorkflowRunsForRepoResponseData = await workflowRuns.json();
  return builds;
}

Assuming that the above would work, it uses the "substitutions.REPO_NAME" filter parameter with entity.metadata.name as the value. I am sure this could be expanded upon so that the entity name is used by default unless an annotation is specified (annotation TBD).
So for example, let's say your entity name is "MyLegitProject", then the listWorkflowRuns function would fetch a URL like below: https://cloudbuild.googleapis.com/v1/projects/<project-slug>/builds?filter=substitutions.REPO_NAME%3DMyLegitProject
I've tested the substitutions.REPO_NAME filter using "Try this Method" on the projects.builds.list method page: https://cloud.google.com/build/docs/api/reference/rest/v1/projects.builds/list
At least in my use case, this would be exactly what I need. However, other use cases may require defining what repo name is used in the filter (different from the entity name), or possibly a completely separate filter which could get complicated quickly. I am happy to work with anyone to see this through to completion.
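Building on the comments above, a small sketch of how the annotation-based default could look. The google.com/cloudbuild-source-name annotation is the hypothetical one proposed in this issue (the plugin does not read it today); only substitutions.REPO_NAME and useEntity are taken from what was verified above. Computing the filter string in the component layer also avoids calling a React hook from inside the API client class.

```ts
import { useEntity } from '@backstage/plugin-catalog-react';

// Hypothetical annotation proposed in this issue; falls back to the entity name.
const CLOUDBUILD_SOURCE_ANNOTATION = 'google.com/cloudbuild-source-name';

export function useCloudBuildFilter(): string {
  const { entity } = useEntity();
  const repoName =
    entity.metadata.annotations?.[CLOUDBUILD_SOURCE_ANNOTATION] ??
    entity.metadata.name;
  // Matches the filter syntax verified with "Try this Method" above.
  return `substitutions.REPO_NAME=${repoName}`;
}

// The API client would then append it to the list call, e.g.
//   .../builds?filter=${encodeURIComponent(filter)}
```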
gharchive/issue
2023-08-31T00:54:03
2025-04-01T06:38:00.782553
{ "authors": [ "Rugvip", "cl-christschantz" ], "repo": "backstage/backstage", "url": "https://github.com/backstage/backstage/issues/19685", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2441859766
🐛 Bug Report: Error while upgrading to 1.24.0 at the beginning of the migration 📜 Description I am currently running Backstage 1.23.3, and I want to update it to 1.24.0. I have noticed the breaking change concerning the backend and am trying to do the migration. I have upgraded all the versions in package.json using https://backstage.github.io/upgrade-helper/?from=1.23.3&to=1.24.0 and modified my backend index.ts file like so:

import { createBackend } from '@backstage/backend-defaults';

const backend = createBackend();
backend.add(import('@backstage/plugin-app-backend/alpha'));
backend.start();

But I have a console error:
Failed to instantiate service 'core.httpRouter' for 'app' because the factory function threw an error, Error: Failed to instantiate service 'core.auth' for 'app' because the factory function threw an error, Error: ENOENT: no such file or directory, scandir '/Users/xxxx/kry/code/backstage/node_modules/@backstage/backend-defaults/migrations/auth'
and in fact this directory doesn't exist. The backend-defaults module is version 0.2.18. And I have this error when I go to http://localhost:3000/ : 👍 Expected behavior It should work as I have followed all the steps in https://backstage.io/docs/backend-system/building-backends/migrating/#overview 👎 Actual Behavior with Screenshots see the description above 👟 Reproduction steps have a 1.23.3 version and try to update it to 1.24.0 📃 Provide the context for the Bug. No response 🖥️ Your Environment No response 👀 Have you spent some time to check if this bug has been raised before? [X] I checked and didn't find similar issue 🏢 Have you read the Code of Conduct? [X] I have read the Code of Conduct Are you willing to submit PR? None
Hmm, I think that it might be better for you to upgrade to the latest version by using yarn backstage-cli versions:bump and go from there. The issue is most likely that by changing the package.json values manually, there's no de-duping of dependencies, which could lead to incompatibilities in the framework.
Yes, I had the same issue while upgrading from 1.23 to 1.24. So I upgraded to 1.29 and followed all the release notes for migrations. Here are a few points for your reference:
- A few plugins moved to community plugins.
- Migrate to the new auth system (remove cookie-auth and s2s auth).
- Migrate all backend plugins to use the new auth system.
- Remove loggerToWinstonLogger and use LoggerService.
- Remove getVoidLogger and use mockServices for testing.
- Migrate from PluginDatabaseManager to DatabaseService.
- Migrate from TaskRunner to SchedulerServiceTaskRunner.
- Migrate from PluginEndpointDiscovery to DiscoveryService.
- TokenManager to AuthService and HttpAuthService.
- Remove getBearerTokenFromAuthorizationHeader and use httpAuth.credentials.
- From BackstageIdentityResponse to PolicyQueryUser for permission policy.
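For the service-migration bullets in the last comment, the mechanical part in each backend plugin is mostly swapping the option types; a rough sketch of the "after" shape follows (the exact import paths and names should be double-checked against the migration guide for the release you target, so treat them as assumptions):

```ts
import {
  LoggerService,
  DatabaseService,
  DiscoveryService,
  AuthService,
  HttpAuthService,
} from '@backstage/backend-plugin-api';

export interface RouterOptions {
  logger: LoggerService; // was: winston Logger via loggerToWinstonLogger(...)
  database: DatabaseService; // was: PluginDatabaseManager
  discovery: DiscoveryService; // was: PluginEndpointDiscovery
  auth: AuthService; // was: TokenManager
  httpAuth: HttpAuthService; // was: getBearerTokenFromAuthorizationHeader(...)
}
```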
gharchive/issue
2024-08-01T08:47:49
2025-04-01T06:38:00.794251
{ "authors": [ "benjdlambert", "toughthomas", "zeshanziya" ], "repo": "backstage/backstage", "url": "https://github.com/backstage/backstage/issues/25876", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1884313912
backend-system: export all features as default exports Hey, I just made a Pull Request!
This changes feature exports (plugin/module/service factories) from named exports to default exports. There are a couple of reasons for this change, but the primary driver for it is to support declarative integration for the new backend system. We currently discover all exports and include them automatically. This is quite fragile, since it means we won't be able to export optional features without additions to the system, and in general it risks the future evolution of the system. More importantly though, this is a problem for service factories, since we want service factories to be installable via discovery too. The problem is that any app depending on @backstage/backend-app-api would install all of the default factories, which in turn means you won't be able to override them. By only considering default exports we can instead create an explicit service factory export to be installed in the app.
Worth noting that we still have the idea of "presets" as a potential new feature for the backend system, which essentially ends up being a collection of features. This could help out in situations where a single package needs to export multiple features, but it's something we'll visit in the future.
Now in addition to this change and the impact on feature discovery, we also introduce a new pattern for feature installation in code. Rather than a separate import with the installation happening afterwards in backend.add(...), it's now possible to pass an import of a package with a default package export straight to backend.add(...). This new pattern will be what we encourage for all backend setups implemented with code. For example: backend.add(import('@backstage/plugin-catalog-backend'));
CC @davidfestal, this'll have an impact on dynamic plugins
:heavy_check_mark: Checklist [x] A changeset describing the change and affected packages. (more info) [ ] Added or updated documentation [x] Tests for new functionality and regression tests for bug fixes [ ] Screenshots attached (for UI changes) [x] All your commits have a Signed-off-by line in the message. (more info)
CC @davidfestal, this'll have an impact on dynamic plugins
I'll try to have a look at the impact asap.
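For reference, a minimal sketch of what a plugin package's entry point looks like under this convention, assuming the usual createBackendPlugin API from @backstage/backend-plugin-api (the plugin id and logic here are purely illustrative):

```ts
// plugins/example-backend/src/index.ts
import {
  coreServices,
  createBackendPlugin,
} from '@backstage/backend-plugin-api';

export const examplePlugin = createBackendPlugin({
  pluginId: 'example',
  register(env) {
    env.registerInit({
      deps: { logger: coreServices.logger },
      async init({ logger }) {
        logger.info('example plugin is initializing');
      },
    });
  },
});

// The default export is what backend.add(import('...')) picks up.
export default examplePlugin;
```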
gharchive/pull-request
2023-09-06T15:57:17
2025-04-01T06:38:00.800268
{ "authors": [ "Rugvip", "davidfestal" ], "repo": "backstage/backstage", "url": "https://github.com/backstage/backstage/pull/19822", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2142291607
permissions: migrate to new auth system and accept credentials On top of #23054, this works towards supporting the new auth services. Added a couple of tests for the credentials cases now too
gharchive/pull-request
2024-02-19T12:45:25
2025-04-01T06:38:00.801515
{ "authors": [ "Rugvip" ], "repo": "backstage/backstage", "url": "https://github.com/backstage/backstage/pull/23055", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
807317079
Adding DCO file and updating contributing.md with details Hey, I just made a Pull Request! Adding DCO file and DCO information to CONTRIBUTING.md as PR ahead of DCO going live on 03/01/21. :heavy_check_mark: Checklist [ ] A changeset describing the change and affected packages. (more info) [x] Added or updated documentation [ ] Tests for new functionality and regression tests for bug fixes [ ] Screenshots attached (for UI changes) Tugboat has finished building the preview for this pull request! Link: https://pr4513-fyyn98ggkmxc4jxnbtsmttlfgtz6y9ja.tugboat.qa Dashboard: https://dashboard.tugboat.qa/60269c2afcb22db6f55441ed I already played around a bit with it in my latest PRs. Maybe some tips: Forgot to add it to the latest commit? Amend it: git commit --amend --signoff Forgot it on multiple commits in your branch? Rebase the last n commits and add the signoff (here last two commits): git rebase --signoff HEAD~2 If you have already pushed you branch to a remote, you might have to force push: git push -f. @Fox32 Thanks! I expect us writing this comment on a lot of PRs starting next month. Perhaps the DCO bot should suggest these to a new contributor, IMO. Tugboat has finished building the preview for this pull request! Link: https://pr4513-fyyn98ggkmxc4jxnbtsmttlfgtz6y9ja.tugboat.qa Dashboard: https://dashboard.tugboat.qa/60269c2afcb22db6f55441ed
gharchive/pull-request
2021-02-12T15:18:00
2025-04-01T06:38:00.810009
{ "authors": [ "Fox32", "OrkoHunter", "backstage-service", "leemills83" ], "repo": "backstage/backstage", "url": "https://github.com/backstage/backstage/pull/4513", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1323016801
Error when no nav and docs_dir are specified I was trying to build my monorepo documentation with the following in the general mkdocs.yml:

site_name: "Example"
site_description: "Description Here"
docs_dir: ./docs
plugins:
  - monorepo
nav:
  - Home: "index.md"
  - Subnav:
    - index.md
    - index.md
  - Hello: "!include project-a/mkdocs.yml"

and the following mkdocs.yml in a subfolder of the monorepo:

site_name: "test"
site_description: "This is a subdomain site."
plugins:
  - monorepo

And I got this error: The file path .... does not contain a valid 'nav' key in the YAML file and the docs folder is not the default one, i.e. docs. Please include the nav key to indicate how your documentation should be presented in the navigation, or include a 'docs_dir' to indicate that automatic nav generation should be used. I personally find it useless to specify the key docs_dir when it is the default one, i.e. docs. I would like to have the automatic nav generation without having to specify the docs_dir key Solved with https://github.com/backstage/mkdocs-monorepo-plugin/pull/82
gharchive/issue
2022-07-30T13:01:33
2025-04-01T06:38:00.814210
{ "authors": [ "dariocurr" ], "repo": "backstage/mkdocs-monorepo-plugin", "url": "https://github.com/backstage/mkdocs-monorepo-plugin/issues/81", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1594110927
Allow cross-referencing in nav Sometimes we want to add a page under different subsites (or the main site) in a subsite's nav. While linking with ../ may work, mkdocs can't properly process it as it can't find the page. It shows an error like:
WARNING - A relative path to 'test/../cross.md' is included in the 'nav' configuration, which is not found in the documentation files
And the navigation title will be None. This PR fixes it by applying normpath to the nav link.
okay updated
Seems like a test needs to be updated.
@haneul Can you go ahead and update the test? I can go ahead and merge this once that's resolved :)
gharchive/pull-request
2023-02-21T21:16:58
2025-04-01T06:38:00.816413
{ "authors": [ "agentbellnorm", "bih", "haneul" ], "repo": "backstage/mkdocs-monorepo-plugin", "url": "https://github.com/backstage/mkdocs-monorepo-plugin/pull/94", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
57983922
"Warning: unrecognized cop RSpec/DescribeClass" on running rubocop Some debugging info: $ cat Gemfile.lock | grep rubocop rubocop (0.29.0) rubocop-rspec (1.2.1) rubocop rubocop-rspec $ cat .rubocop.yml require: rubocop-rspec inherit_from: .rubocop_todo.yml AllCops: RunRailsCops: true Exclude: - 'bin/**/*' - 'vendor/**/*' - 'app/controllers/resource_sites_controller.rb' Documentation: Enabled: false $ cat .rubocop_todo.yml # This configuration was generated by `rubocop --auto-gen-config` # on 2015-02-12 16:50:06 -0500 using RuboCop version 0.29.0. # The point is for the user to remove these configuration records # one by one as the offenses are removed from the code base. # Note that changes in the inspected code, or installation of new # versions of RuboCop, may require this file to be generated again. # Offense count: 16 # Configuration parameters: AllowSafeAssignment. Lint/AssignmentInCondition: Enabled: false # Offense count: 5 Lint/Debugger: Enabled: false # Offense count: 1 # Configuration parameters: AlignWith, SupportedStyles. Lint/DefEndAlignment: Enabled: false # Offense count: 4 # Configuration parameters: AlignWith, SupportedStyles. Lint/EndAlignment: Enabled: false # Offense count: 1 Lint/Eval: Enabled: false # Offense count: 1 Lint/HandleExceptions: Enabled: false # Offense count: 2 Lint/ParenthesesAsGroupedExpression: Enabled: false # Offense count: 1 Lint/RescueException: Enabled: false # Offense count: 1 Lint/ShadowingOuterLocalVariable: Enabled: false # Offense count: 1 Lint/UselessAccessModifier: Enabled: false # Offense count: 23 Lint/UselessAssignment: Enabled: false # Offense count: 45 Metrics/AbcSize: Max: 87 # Offense count: 9 # Configuration parameters: CountComments. Metrics/ClassLength: Max: 210 # Offense count: 7 Metrics/CyclomaticComplexity: Max: 9 # Offense count: 1809 # Configuration parameters: AllowURI, URISchemes. Metrics/LineLength: Max: 301 # Offense count: 40 # Configuration parameters: CountComments. Metrics/MethodLength: Max: 97 # Offense count: 1 Metrics/PerceivedComplexity: Max: 9 # Offense count: 5 RSpec/DescribeClass: Enabled: false # Offense count: 109 RSpec/DescribedClass: Enabled: false # Offense count: 211 # Configuration parameters: CustomTransform, IgnoredWords. RSpec/ExampleWording: Enabled: false # Offense count: 2 # Configuration parameters: CustomTransform. RSpec/FilePath: Enabled: false # Offense count: 15 RSpec/InstanceVariable: Enabled: false # Offense count: 1 RSpec/MultipleDescribes: Enabled: false # Offense count: 10 # Configuration parameters: Include. Rails/HasAndBelongsToMany: Enabled: false # Offense count: 2 # Configuration parameters: Include. Rails/Output: Enabled: false # Offense count: 3 # Configuration parameters: Include. Rails/Validation: Enabled: false # Offense count: 11 Style/AccessorMethodName: Enabled: false # Offense count: 1 # Cop supports --auto-correct. # Configuration parameters: EnforcedHashRocketStyle, EnforcedColonStyle, EnforcedLastArgumentHashStyle, SupportedLastArgumentHashStyles. Style/AlignHash: Enabled: false # Offense count: 3 # Cop supports --auto-correct. # Configuration parameters: EnforcedStyle, SupportedStyles. Style/AlignParameters: Enabled: false # Offense count: 8 # Configuration parameters: IndentWhenRelativeTo, SupportedStyles, IndentOneStep. Style/CaseIndentation: Enabled: false # Offense count: 13 # Configuration parameters: EnforcedStyle, SupportedStyles. 
Style/ClassAndModuleChildren: Enabled: false # Offense count: 2 Style/EachWithObject: Enabled: false # Offense count: 3 Style/EmptyElse: Enabled: false # Offense count: 7 # Configuration parameters: AllowedVariables. Style/GlobalVars: Enabled: false # Offense count: 63 # Configuration parameters: MinBodyLength. Style/GuardClause: Enabled: false # Offense count: 16 # Configuration parameters: MaxLineLength. Style/IfUnlessModifier: Enabled: false # Offense count: 13 # Cop supports --auto-correct. Style/Lambda: Enabled: false # Offense count: 3 # Cop supports --auto-correct. Style/LineEndConcatenation: Enabled: false # Offense count: 1 # Cop supports --auto-correct. # Configuration parameters: EnforcedStyle, SupportedStyles. Style/MultilineOperationIndentation: Enabled: false # Offense count: 2 Style/MultilineTernaryOperator: Enabled: false # Offense count: 4 # Configuration parameters: EnforcedStyle, MinBodyLength, SupportedStyles. Style/Next: Enabled: false # Offense count: 8 # Configuration parameters: NamePrefix, NamePrefixBlacklist. Style/PredicateName: Enabled: false # Offense count: 1 # Configuration parameters: EnforcedStyle, SupportedStyles. Style/RaiseArgs: Enabled: false # Offense count: 1 # Configuration parameters: MaxSlashes. Style/RegexpLiteral: Enabled: false # Offense count: 1 # Cop supports --auto-correct. Style/SelfAssignment: Enabled: false # Offense count: 2 # Cop supports --auto-correct. # Configuration parameters: EnforcedStyle, SupportedStyles. Style/SignalException: Enabled: false # Offense count: 1 # Configuration parameters: Methods. Style/SingleLineBlockParams: Enabled: false # Offense count: 3 # Cop supports --auto-correct. # Configuration parameters: IgnoredMethods. Style/SymbolProc: Enabled: false # Offense count: 5 # Cop supports --auto-correct. # Configuration parameters: ExactNameMatch, AllowPredicates, AllowDSLWriters, Whitelist. Style/TrivialAccessors: Enabled: false # Offense count: 1 Style/UnlessElse: Enabled: false # Offense count: 3 # Configuration parameters: EnforcedStyle, SupportedStyles. Style/VariableName: Enabled: false # Offense count: 2 # Cop supports --auto-correct. # Configuration parameters: WordRegex. Style/WordArray: MinSize: 5 Looks like the rubocop-rspec cops are successfully being ignored (commenting out the disabling of them causes cop failures), so the warning is very odd. RSpec/DescribeClass is failing because it is the first rubocop-rspec being encountered. In other words, if I reordered the declarations of rubocop-rspec's cops in the TODO file, the very first rubocop-rspec cop will appear in the warning. Please let me know if more information is needed! @geniou Any idea what's up here? @dleve123 sorry for the delay. I tied to reproduce the problem but I had no success. It is strange because when the rubocop_todo.yml file was created rubocop-rspec seams to be there. Did you start rubocop with bundle - e.g. bundle exec rubocop? I'm just guessing. @geniou Thanks for helping out! We are not running rubocop in the context of our bundle. That could definitely be the explanation. Will amend our build script and get back to you! @geniou - I work with @dleve123 and it appears we are running it within our bundle; however, I just noticed something else. When the RSPec cops are moved into the original .rubocop.yml file and not inherited from .rubocop_todo.yml it appears to no longer produce the warning. 
Just figured out a semi-solution - for anyone looking at this in the future, or if you think this is actually a bug and should be fixed, or if this tells us the problem with our setup... Adding require: rubocop-rspec to the .rubocop_todo.yml prevents the warning message, even though require: rubocop-rspec is already at the top of our .rubocop.yml before inheriting the todo.
@GolfyMcG thanks, that worked for me
Is this still an issue for anybody?
@nijikon we have not changed our .rubocop.yml or .rubocop_todo.yml files so I can't say for sure
No problems here anymore. Rubocop 0.37.2, rubocop-rspec 1.4.0. However, when running with --only there is still a warning.
$ rubocop --only RSpec/AnyInstance
Running via Spring preloader in process 12894
Warning: unrecognized cop RSpec/DescribeClass found in /path/to/.rubocop_todo.yml
Warning: unrecognized cop RSpec/DescribedClass found in /path/to/.rubocop_todo.yml
Warning: unrecognized cop RSpec/FilePath found in /path/to/.rubocop_todo.yml
Warning: unrecognized cop RSpec/InstanceVariable found in /path/to/.rubocop_todo.yml
Warning: unrecognized cop RSpec/NotToNot found in /path/to/.rubocop_todo.yml
I'm running Rails 4.2.1 and Rubocop 0.37.2 and getting Warning: unrecognized parameter Rails:Enable found in .rubocop.yml with a ton of An error occurred while RSpec/FilePath cop was inspecting [file].
require: rubocop-rspec
AllCops:
  Exclude:
    - 'bin/**/*'
    - 'config/**/*'
    - 'db/**/*'
    - 'spec/spec_helper.rb'
    - 'spec/rails_helper.rb'
    - 'Rakefile'
Documentation:
  Enabled: false
Metrics/LineLength:
  Max: 200
Metrics/ClassLength:
  Max: 500
Metrics/ModuleLength:
  Exclude:
    - 'spec/**/*_spec.rb'
Rails:
  Enable: true
RSpec/DescribeClass:
  Exclude:
    - spec/requests/**/*
    - spec/routes/**/*
    - spec/support/**/*
Style/TrivialAccessors:
  Exclude:
    - spec/**/*
@kWhittington it should be Enabled: true, not Enable: true
@andyw8 that was just a typo in the comment, I've updated my sample .rubocop.yml. I'm having the problem with:
Rails:
  Enabled: true
inside of .rubocop.yml with either Rubocop v0.35.1 or 0.37.2
@nijikon we're still seeing the warning Warning: unrecognized cop Rspec/DescribeClass found in .../.rubocop.yml with the following entry in rubocop.yml:
Rspec/DescribeClass:
  Enabled: false
rubocop-rspec (1.4.0) rubocop (0.33.0)
@kowal: Try RSpec/DescribeClass instead of Rspec/DescribeClass
@andyw8 yup that was a typo :) thx!
Is anyone still experiencing this issue? It seems like the require: recommendation resolves the problems being described. If anyone follows up saying they are having the same problem I will reopen
It works using require: rubocop-rspec in both .rubocop.yml and .rubocop_todo.yml. But I'd argue that it's not ideal, since .rubocop_todo.yml is an auto-generated file that gets overwritten frequently (at least in our project, with every new rubocop release).
@aried3r that's currently out of rubocop-rspec's control. See https://github.com/bbatsov/rubocop/issues/3414
Same, adding require: rubocop-rspec to the top of the file fixed it
gharchive/issue
2015-02-17T21:20:16
2025-04-01T06:38:00.840948
{ "authors": [ "Dorian", "GolfyMcG", "andyw8", "aried3r", "backus", "dleve123", "geniou", "john-griffin", "kWhittington", "kowal", "nijikon" ], "repo": "backus/rubocop-rspec", "url": "https://github.com/backus/rubocop-rspec/issues/32", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
289767657
Successor for Bacon.fromBinder In #703 it was suggested that this version of Bacon.fromEvent should be added:
Bacon.fromEvent(
  (listener) => window.addEventListener('scroll', listener, {passive: true}),
  (listener) => window.removeEventListener('scroll', listener, {passive: true}),
  () => window.scrollX
);
It has the virtue of being more readable than Bacon.fromBinder. From my point of view, it also looks like it should be returning a Property because the third function argument is used to get a current value. The initial value of the Property could be set at the time of creation by calling the third function. Might as well close this now that #710 is merged.
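For reference, a rough equivalent using the API that existed before #710, as a sketch (it does not cover the { passive: true } listener options that motivated the proposal, which is the part the new signature adds):

```ts
import * as Bacon from 'baconjs';

// EventStream of scroll events, mapped to the value of interest and turned
// into a Property seeded with the current value.
const scrollX = Bacon.fromEvent(window, 'scroll')
  .map(() => window.scrollX)
  .toProperty(window.scrollX);

scrollX.onValue(x => console.log('scrollX is now', x));
```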
gharchive/issue
2018-01-18T20:38:07
2025-04-01T06:38:00.846173
{ "authors": [ "raimohanska", "steve-taylor" ], "repo": "baconjs/bacon.js", "url": "https://github.com/baconjs/bacon.js/issues/704", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2204220675
Most easy issue Hello, in the examples there is a mistake in the EchoBot constructor.
EchoBot() : Bot("BOT_TOKEN_FROM_BOT_FATHER") { } (what is currently there)
EchoBot(const std::string& token) : Bot(token) { } (what it must be)
Thanks for your good work. Thank you for catching this! Fixed.
gharchive/issue
2024-03-24T07:16:14
2025-04-01T06:38:00.849073
{ "authors": [ "OMRKiruha", "baderouaich" ], "repo": "baderouaich/tgbotxx", "url": "https://github.com/baderouaich/tgbotxx/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
124033691
Github badges are down Currently github badges give a vendor | unresponsive after few seconds ex: https://img.shields.io/github/tag/strongloop/express.svg -> That's odd! It means that the request took more than 20 seconds. It seems fine now; I'll try to see what happened. indeed it's good now it seems shall I close the issue, or wait you investigate? Looks like it might be back, at least once for me. @remexre back as in live, or back as in down? Back down for at least 4 or 5 minutes. They didn't load at all when embedded, but when I went to the URL directly, I got the vendor unresponsive message. Ditto for cURLing them. Last I checked, they were up again, so intermittent weirdness? They're down again for me. Down again right now :/ Down again, timing out. Failed to load resource: the server responded with a status of 522 (OK) https://img.shields.io/imagelayers/image-size/_/ubuntu/latest.svg Failed to load resource: the server responded with a status of 522 (OK) https://img.shields.io/imagelayers/layers/_/ubuntu/latest.svg Failed to load resource: the server responded with a status of 522 (OK) https://img.shields.io/david/dev/strongloop/express.svg Failed to load resource: the server responded with a status of 522 (OK) https://img.shields.io/gem/u/raphink.svg Failed to load resource: the server responded with a status of 522 (OK) Sorry, Linux crashed. I took about 20 hours to notice. https://github.com/MDXDave/ModernWebif/blob/master/README.md Release version not recognized or image not shown due to timeout Feel free to open a new issue if this issue recurs.
gharchive/issue
2015-12-28T08:23:42
2025-04-01T06:38:00.859277
{ "authors": [ "AdrieanKhisbe", "AriaFallah", "MDXDave", "davej", "espadrine", "lots0logs", "paulmelnikow", "pydsigner", "remexre" ], "repo": "badges/shields", "url": "https://github.com/badges/shields/issues/617", "license": "cc0-1.0", "license_type": "permissive", "license_source": "bigquery" }
1221860310
Minor amendments to life/stay-sane.md I believe you meant "code of conduct" in the first edit. The other is just grammar, I believe "misled" is past tense of mislead. Thanks for writing this Daniel. Thanks!
gharchive/pull-request
2022-04-30T15:46:13
2025-04-01T06:38:00.868122
{ "authors": [ "bagder", "cosimo" ], "repo": "bagder/uncurled", "url": "https://github.com/bagder/uncurled/pull/39", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1940070999
fixed issue #8666 Issue Reference fixed issue #8666 @shivendra-webkul It is working fine.
gharchive/pull-request
2023-10-12T14:01:04
2025-04-01T06:38:00.869045
{ "authors": [ "Amitk-webkul", "shivendra-webkul" ], "repo": "bagisto/bagisto", "url": "https://github.com/bagisto/bagisto/pull/8667", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1960752036
Fixed Issue #8777 Issue Reference #8777 @amit-webkul It is working fine. Conflicts
gharchive/pull-request
2023-10-25T07:38:54
2025-04-01T06:38:00.870095
{ "authors": [ "Amitk-webkul", "amit-webkul", "jitendra-webkul" ], "repo": "bagisto/bagisto", "url": "https://github.com/bagisto/bagisto/pull/8781", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2325384957
fixed #9801 filters overlapping headers issue Issue Reference fixed #9801 filters overlapping headers issue Description I have fixed the issue where filters were overlapping the headers.
@hussaincoder23 Thank you so much for your contribution. We really appreciate your efforts. After checking and testing the PR, we found that the top 2 locales in the locale dropdown are getting hidden behind the header. Kindly check the video for reference; we would be really happy if you could fix that issue as well. Thanks Video https://github.com/bagisto/bagisto/assets/92721837/ecc596bf-62da-4fa3-93b8-24e4e991882a
Ok, fixing that locale dropdown issue as well @ashishkumar-webkul
@hussaincoder23 Thank you so much for your contribution. We really appreciate your efforts and dedication. However, we are currently avoiding the use of inline CSS as we are using Tailwind CSS. Please adjust the code accordingly.
@suraj-webkul updated the PR and used only Tailwind classes.
@hussaincoder23 Please check, there are some conflicts in the PR; please resolve the conflicts.
@ashishkumar-webkul fixed the language locale dropdown hidden behind header issue
@hussaincoder23 As I have checked and verified the issue, now it is working fine. Please check the video for reference. Video https://github.com/bagisto/bagisto/assets/92721837/18e8e8e8-5e86-4ced-baed-6e5057b0c0e5 Thank you so much for your contribution, looking forward to more.
Checked and found that the issue has been fixed now and is working fine. Please check the video for reference. Video https://github.com/bagisto/bagisto/assets/92721837/ac04958d-897d-480c-9983-b34c46cec9b9
gharchive/pull-request
2024-05-30T11:08:50
2025-04-01T06:38:00.876030
{ "authors": [ "ashishkumar-webkul", "hussaincoder23", "suraj-webkul" ], "repo": "bagisto/bagisto", "url": "https://github.com/bagisto/bagisto/pull/9849", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
324085701
Question: Custom messages per rule and field Versions: VueJs: 2.5.16 Vee-Validate: 2.0.9 Assumptions Vue SSR I have an input component Scoped form Question: How would I do the following? I'm trying to make the required message for firstName be: Please enter your first name. and for city: Please enter the city. Steps To Reproduce: So far I have:
const dictionary = {
  en: {
    messages: {
      required: (field) => {
        if (field === 'firstName') {
          return `Please enter your ${field}.`
        } else if (field === 'city') {
          return `Please enter the ${field}.`
        }
        return `Please enter a ${field}.`
      },
    },
  },
}

Object.keys(veeRules).forEach(k => Validator.extend(k, veeRules[k]))
Validator.localize('en', veeEn)
Vue.use(VeeValidate, { inject: false, dictionary })

But this doesn't seem correct.
You could assign specific error messages to specific fields:
const dictionary = {
  en: {
    custom: {
      firstName: {
        required: 'Please enter your first name.'
      },
      city: {
        required: 'Please enter the city.'
      }
    }
  }
}

Object.keys(veeRules).forEach(k => Validator.extend(k, veeRules[k]))
Validator.localize('en', veeEn)
Vue.use(VeeValidate, { inject: false, dictionary })

Hi @logaretm I'm trying this out but I keep getting: "Can't find variable: veeRules". I'm obviously doing something wrong. Please can you give me pointers?
@elujoba I know that you fixed it, but more info is here: https://github.com/baianat/vee-validate/issues/1329
@logaretm Thanks a million!
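If the goal is also to have friendlier field names inside otherwise generic messages, vee-validate 2.x dictionaries take an attributes map alongside custom; a small sketch building on the answer above (the attributes key is part of the v2 dictionary format, but double-check it against the exact version in use):

```ts
import Vue from 'vue';
import VeeValidate from 'vee-validate';

const dictionary = {
  en: {
    // Display names used when a generic message interpolates the field name.
    attributes: {
      firstName: 'first name',
      city: 'city',
    },
    // Per-field, per-rule overrides, as shown above.
    custom: {
      firstName: { required: 'Please enter your first name.' },
      city: { required: 'Please enter the city.' },
    },
  },
};

Vue.use(VeeValidate, { inject: false, dictionary });
```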
gharchive/issue
2018-05-17T15:56:41
2025-04-01T06:38:00.882910
{ "authors": [ "elujoba", "g3rd", "logaretm" ], "repo": "baianat/vee-validate", "url": "https://github.com/baianat/vee-validate/issues/1330", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
335927258
Errors field doesn't work anymore Versions vee-validate: 2.1.0.beta3 vue: 2.#.# In beta2, errors.first stopped working with spaces in the name (this bug is already reported), and in beta3, errors.any and errors[i].msg stopped working. Example:
<div v-if="errors.any()" style="color:red;">
  <br>
  Existen campos pendientes de corrección
  <ul v-for="e in errors.items" style="padding-left:15px;">
    <li> -{{e.msg}} </li>
  </ul>
</div>
try it with beta.4 https://jsfiddle.net/v9u2deyo/3/ both issues seem to be fine.
No, it's not working, and it's not a "question". Your current hotfix didn't solve this bug. Regards
I just created an example for you using your code and no issues there, the name with spaces issue has been fixed as well. So it could be something else. Create a live example and I would check it out for you.
gharchive/issue
2018-06-26T17:48:26
2025-04-01T06:38:00.886576
{ "authors": [ "logaretm", "somosarado" ], "repo": "baianat/vee-validate", "url": "https://github.com/baianat/vee-validate/issues/1433", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
242222903
RC7 breaks .initial validation in combination with v-if on input (+ what is the cause) Versions: VueJs: 2.3.4 Vee-Validate: 2.0.0-rc.7 Description: In RC6 and earlier, if you had an input field with v-validate.initial and used a v-if condition on it, then when the condition became true, the validation would run as expected and you can display errors on the now-visible input(s). However, with RC7, this behaviour is broken. When the v-if condition becomes true, the validation does not seem to run (visibly, that is) and no errors are displayed. Steps To Reproduce: I have two identical JSFiddles. RC6 version: https://jsfiddle.net/qhxn2tpm/4/ RC7 version: https://jsfiddle.net/zw3h7ac8/2/ In both versions, after you click on the "Click me!" button, the input element is rendered and because of the .initial modifier you expect the validation to run and display an error becomes the input is empty and the validation rule is required. With RC6 this is indeed the case. With RC7, no error is displayed. Investigation I managed to narrow this change down to a specific commit: https://github.com/baianat/vee-validate/commit/ba8fb89dd9355761ef35c70077ccdd0acd8a5d03 "ensure no null scopes" The code is a bit difficult to follow, but what seems to be happening is the following (prior to the commit!): bind is called. Listener is attached. The scope is "__global__". Validate is called because .initial is set, the validation fails because the required rule is present and the input is empty. The error is inserted in the ErrorBag. inserted is called. The scope here is null. The null scope is of course different from the instance scope "__global__" The field is still resolved, however, because the scope defaults to "__global__" in _resolveField _moveFieldScope dutifully moves the field from scope "__global__" to null. update is called. update finds that the validation is not cached yet (Note: you could probably cache it sooner since the field is already added and the validation has even run, right?) update computes the scope to be the string "null". I think this is because the null was written back to data-vv-scope. _createField is called since the scope doesn't exist, but it just copies the "new" field over the existing one. Since a new field was created, or so it thinks, it tries to remove the errors for the old scope, but again it uses the string "null" so the errors are not removed from the ErrorBag because they were not inserted with that scope. As you can see, the old behaviour relied on a bunch of bugs since the scope can be "__global__", null and the string "null" in different parts of the code. This should be fixed so it's consistent. But there is more: The code in update seems a bit strange as it checks a cache that should probably be already made, and then if the value is not cached (which is always the case on the first update) then it discards validation errors and re-creates the field. The only reason that this did not screw up validation before is actually because it tried to discard the errors of a non-existing scope and because the field was re-created in its existing position. So there seem to be two bugs, but bug nr1 actually counter-acted the second bug in RC6 and previous :) The change in RC7 (partially) fixed the first bug, so now bug nr 2 takes effect and the validation breaks in the case where .initial is used as it discards the validation errors when re-creating the field. 
I contemplated trying to fix this myself, but I do not understand all the code and these issues touch quite some functions and behaviour, so I'd probably do something silly.
I thought I had fixed the null issues with scoping; primarily I needed the inserted lifecycle to actually check for scopes correctly. For example, the form attribute is never set until then. But again I recently found out that inserted is not a guaranteed hook, meaning it doesn't get called in some situations, which is critical, so I went back to the bind hook in rc.6. The code is actually patchy and hacky ATM, that is why PR #616 is going to resolve most of these issues: basically it treats fields as isolated objects, and they will no longer be identified by name and scope, but by a unique ID. Also it would allow fields to be more flexible and even allow them to change names and scopes dynamically without breaking validation. Also the code is much simpler than the current one and is better tested. So I guess I need to finish up the PR, and check if it resolves the issue here instead of patching it one more time for rc.8. I currently have the following todos before merging:
- 100% coverage (almost there)
- Target field validation (confirmed, after, before), I have a good draft but haven't tested it yet.
Can confirm this issue as well. Thanks for working on the fix so quickly! PR #616 seems like a very good idea. Great work!
I've tested it again after the PR #616 merge and it seems to work fine. https://jsfiddle.net/zw3h7ac8/4/
Nice, thanks!
gharchive/issue
2017-07-12T00:31:42
2025-04-01T06:38:00.900755
{ "authors": [ "Endlessline", "ThomHurks", "logaretm" ], "repo": "baianat/vee-validate", "url": "https://github.com/baianat/vee-validate/issues/632", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
191864576
Fix sync timeout for random write Calling Sync directly should not return a Timeout build build build
gharchive/pull-request
2016-11-27T14:36:07
2025-04-01T06:38:00.922807
{ "authors": [ "cyshi", "yvxiang" ], "repo": "baidu/bfs", "url": "https://github.com/baidu/bfs/pull/648", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
453876047
ISSUE-78 add grpc support for brpc Motivation Add GRPC support for BRPC in java project Status Now, this PR has implemented the following features: GrpcServer boot and deploy service(classical GRPC service) Bind BRPC service to GRPC server and the service which was written by BRPC can be invoked by the GRPC client @Kewei-Wang-Kevin Thanks for your pull request. I have some suggestions, I don't know if it works. Could you implement Protocol interface like http protocols which is the package protocol/http, and reuse RpcServer/RpcClient/LoadBalance/ConnectionPool/NamingService etc? @Kewei-Wang-Kevin Thanks for your pull request. I have some suggestions, I don't know if it works. Could you implement Protocol interface like http protocols which is the package protocol/http, and reuse RpcServer/RpcClient/LoadBalance/ConnectionPool/NamingService etc? In the beginning of my working, I also hope that I can reuse existed code as much as possible, but after I investigative, I found that it is really hard to reuse the protocol part, which is the entry for all RPC process starts, here is the reason: the interacting mechanism between GRPC and other RPC protocols in BRPC has a significant difference. the basic assumption of other RPC protocol is one request with one response, here is the sequence: Client->Server:Request Server-->>Client:Response under this order, we can use a standard process that you have implemented in BRPC, but for GRPC, the sequence is: Client->Server:Request(Http2/MAGIC,SETTINGS) Server-->>Client:Response(Http2/ SETTINGS) Client->Server:Request(Http2/Header Frame) Server-->Client:Response(Http2/OK) Client->ServerRequest(Http2/Data Frame) Server-->Client:Response(Http2/Data Frame)//GRPC will also got response here you should notice the whole RPC has been split into several parts, which can't match the current way in BRPC. And here is the network package that I captured by Wireshark it means that if you want to support GRPC in netty, you need to support HTTP2 first, and then, based on the support of Http2, you can start supporting GRPC. which is reinventing the wheel. So under such a circumstance, it is hard to reuse the protocol part. and I choose to use grpc's internal API to make it work with BRPC. The implementation of your own business logic based on BRPC and GRPC is also different. that's the reason why I created a new kind of server called GrpcServer, for BRPC, you prefer to define a Service interface, and then implement it, and you will use a kind of Proxy technology make a delegate such JDK Proxy or cglib of your service. but in GRPC, the philosophy is use generated code instead of any reflection and bytecode wave things. so when you are using GRPC, you need to extend from the base class and rewrite methods, and it means that the perform of registeService become different as well Although we have some difficulties in reuse protocol part and server part in BRPC, it is still possible to reuse other fundamental code in BRPC project, for example, in the server side, I have reused the NamingService for service registration, and I think in the client side, I can also reuse the LoadBalance and Service discovery functions. How do you think about that? @Kewei-Wang-Kevin current implementation can talk with grpc effectively, but I think it is better to implement http2 and grpc protocol, so that it is consistent with other protocols in brpc. grpc Stub client can be implemented by java proxy, please see brpc standard protocol. 
Since there are too many commits in this pr, I will close this pr and create a new pr for grpc implementation
gharchive/pull-request
2019-06-09T08:30:57
2025-04-01T06:38:00.931674
{ "authors": [ "Kewei-Wang-Kevin", "wenweihu86" ], "repo": "baidu/brpc-java", "url": "https://github.com/baidu/brpc-java/pull/81", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
158844723
Some small code optimizations. Apart from the Java 1.8 upgrade, the other changes have no side effects on the project; they are mainly optimizations around performance, multithreading, and basic object unboxing.
@xwarrior Thanks for the submission. Considering that some parts of your changes are not entirely appropriate, we suggest you keep revising, as follows:
1. "Upgrade to Java 1.8": please do not make this change. We develop against Java 1.6 to make sure more users can use the project conveniently; users can adjust the version themselves if needed.
2. "Add double-checked locking": these changes are actually all inside synchronized blocks, so double-checked locking is unnecessary there.
3. The other changes are fine.
gharchive/pull-request
2016-06-07T05:55:32
2025-04-01T06:38:00.933238
{ "authors": [ "dsfan", "xwarrior" ], "repo": "baifendian/harpc", "url": "https://github.com/baifendian/harpc/pull/2", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
338378378
Logging out and keep baking Running: Bakechain 0.2.0 OS: Win 10 1803 x64 It would benefit security if one could log out and keep baking. I have a dedicated machine for baking, and it is Windows-locked, but I would feel safer if the wallet was also locked. Thank you for your hard work, Stephen!
The software needs to be unlocked in order to have access to your private key for baking
Hello, and thanks Stephen for all that you have contributed. Please pardon my noobish questions or comments. My computer has to be on at all times to bake, correct? I can open a new tezbox wallet with my bakechain seed word and password because bakechain is not a spendable wallet, correct? I also need a tutorial on how to read the tz scanner. If there are any tutorials out there please let me know. Again, thanks a bunch, love your products!
Correct - I'm unaware of any tutorials on tzscan
gharchive/issue
2018-07-04T21:31:50
2025-04-01T06:38:00.946853
{ "authors": [ "aah180", "gaia", "stephenandrews" ], "repo": "bakechain/bakechain", "url": "https://github.com/bakechain/bakechain/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
825756444
Support for Tiled Resources in D3D12 Description I understand this might be a big ask, but I was wondering if there were any plans to support tiled resources in D3D12? We've never really had renderdoc support in this engine and I was trying to get it to work, tiled resources is a core feature so nothing we can really disable. It fails to report support for TILED_RESOURCES and even if I let it run, it fails in the call to CreateReservedResource. Thanks! Environment RenderDoc version: 1.12 Operating System: Windows 10 Graphics API: D3D12
That commit adds support for capturing and replaying sparse resources on D3D12. It's only lightly tested since I couldn't really get a hold of many non-trivial sparse resource samples (on any API), possibly because I think IHVs recommend against using them so it's a rarely used feature. Especially on D3D12 this may be an issue since the D3D12 API for sparse resources is... awful. One thing to note: the memory overhead for capturing such resources is proportional to their total virtual size not how much physical memory is currently mapped. This isn't a problem if you're using sparse resources to defrag/remap and they're mostly always bound anyway, but it may be if you are reserving a lot more memory than is available on the GPU or system.
@baldurk you are a total machine. I was not expecting this at all, I was eager to try it out, booted up our game and hey presto it loaded and ran! I could also take a capture but during replay renderdoc crashes, accessing a null list in d3d12_initstate.cpp
// transition to copy dest
if(!barriers.empty())
  list->ResourceBarrier((UINT)barriers.size(), &barriers[0]);
This is the callstack
renderdoc.dll!D3D12ResourceManager::Apply_InitialState(ID3D12DeviceChild * live, const D3D12InitialContents & data) Line 1181 C++
renderdoc.dll!ResourceManager::ApplyInitialContents() Line 1351 C++
renderdoc.dll!WrappedID3D12Device::ApplyInitialContents() Line 1291 C++
renderdoc.dll!WrappedID3D12Device::ReadLogInitialisation(RDCFile * rdc, bool storeStructuredBuffers) Line 3837 C++
renderdoc.dll!ReplayController::PostCreateInit(IReplayDriver * device, RDCFile * rdc) Line 2042 C++
renderdoc.dll!ReplayController::CreateDevice(RDCFile * rdc, const ReplayOptions & opts) Line 2009 C++
renderdoc.dll!CaptureFile::OpenCapture(const ReplayOptions & opts, std::function<void __cdecl(float)> progress) Line 364 C++
qrenderdoc.exe!ReplayManager::run(int proxyRenderer, const QString & capturefile, const ReplayOptions & opts, std::function<void __cdecl(float)> progress) Line 450 C++
If that's not enough, I can try to move the necessary strings to get a capture to you when I'm back at work on Monday. Seriously thank you so much for the effort, it's quite game-changing.
You're lucky that I had already implemented sparse resources for vulkan and was already planning on rolling in D3D12 too at the same time :). I'm not surprised that there are bugs but please open it as a new issue, I don't want this to become a mega-issue for every tiled resource bug. For that one it sounds like you hit a device removed earlier which is why the list is NULL, you could double check in the diagnostic log. Yes I'll definitely need either a repro case or a lot more information so that I can tell what's wrong/repro it locally in isolation - e.g. you could run with debug layers enabled to see if it points out any problems.
Of course, I will do. I'll get more details tomorrow and open a new issue. Thanks!
gharchive/issue
2021-03-09T11:17:39
2025-04-01T06:38:00.958289
{ "authors": [ "baldurk", "redorav" ], "repo": "baldurk/renderdoc", "url": "https://github.com/baldurk/renderdoc/issues/2203", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
475553154
Send value from UI to Server

First, I have to say I'm a real noob in Rust.
I want to send additional value to my Linux board when clicking the submit button.
So, I added input form in index.html easily but I wonder how to send it to the board.
Actually, my board will do something with the delivered value after connecting with WiFi.
Could you please help me about that?
Plus, how can I build the source? Fortunately, I know how to use Docker.
My Linux board's architecture is ARM 64, and the OS is Debian 9.

We have an example of doing that through Python: https://github.com/balena-io-playground/wifi-connect-api/
Going to add some documentation now at the README of that repo.

I added additional information to the README. Please let me know if you have any questions.

@majorz Thank you so much! I'll try

@majorz I've tried several times but it doesn't work.
It seems to be working fine but my phones and laptop could not find the AP (WiFi-connect).

Serving Flask app "web-app" (lazy loading)
Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
Debug mode: off
Running on http://0.0.0.0:80/ (Press CTRL+C to quit)

I did docker build/run as below:

docker build --tag wifi-connect .
docker run -p 80:80 -p 45454:45454 wifi-connect

And I had to change some words in Dockerfile.template as below, because these variables cannot be found:
%%RESIN_MACHINE_NAME%%
%%RESIN_ARCH%%

FROM balenalib/generic-aarch64-python:3.7.2-20190327
RUN curl -Ls https://github.com/resin-io/resin-wifi-connect/releases/download/$WIFI_CONNECT_VERSION/wifi-connect-$WIFI_CONNECT_VERSION-linux-aarch64.tar.gz | tar -xvz -C /tmp/download

@theruin0000 Docker will need --cap-add=NET_ADMIN (or --privileged) and --network=host, as those are required by dnsmasq. Additionally you will need to mount the DBus socket in Docker. I need to look up how that is done on a non-balenaOS environment.
Do you actually need Docker? Probably the best way is to download wifi-connect and put it in /usr/local/sbin on the OS directly. And follow the other steps from the Dockerfile.template but apply them on the OS. What do you think?

Do you mean following Dockerfile.template step by step manually? Okay, I'll try :)

Yes, it is only a few steps like download and copy wifi-connect, clone the repo, install Python requirements, and run the application.

@majorz I just found it works well :) So thanks for your help!
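Pulling the hints from this thread together, a rough docker run invocation for a non-balenaOS host might look like the sketch below. The DBus socket path is an assumption on my part (a typical Debian location), not something confirmed in the thread, and the image tag matches the docker build command used above.

# sketch only: NET_ADMIN capability and host networking for dnsmasq,
# plus the host's system DBus socket so wifi-connect can reach NetworkManager
docker run --rm \
  --cap-add=NET_ADMIN \
  --network=host \
  -v /var/run/dbus:/var/run/dbus \
  wifi-connect

With --network=host the -p port mappings are no longer needed, and --privileged can stand in for --cap-add=NET_ADMIN if the capability alone turns out not to be enough.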
gharchive/issue
2019-08-01T08:43:23
2025-04-01T06:38:00.980767
{ "authors": [ "majorz", "theruin0000" ], "repo": "balena-io/wifi-connect", "url": "https://github.com/balena-io/wifi-connect/issues/305", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1598325670
Update layers/meta-balena digest to c6a25b2

This PR contains the following updates:

Package: layers/meta-balena | Update: digest | Change: 80ca81f -> c6a25b2

Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.

[ ] If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

Can one of the admins verify this patch?
@resin-jenkins add to whitelist
@resin-jenkins ok to test
@resin-jenkins test this please
lgtm
@balena-ci I self-certify!
gharchive/pull-request
2023-02-24T10:13:21
2025-04-01T06:38:00.986275
{ "authors": [ "alexgg", "balena-ci", "floion", "resin-jenkins" ], "repo": "balena-os/balena-fsl-arm", "url": "https://github.com/balena-os/balena-fsl-arm/pull/361", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
354074569
Add negative test cases for ballerina checkpointing

Description:
Ballerina test cases only cover the positive part of ballerina checkpointing. We need to add the following.

Ballerina services without interruptible annotation so persistence should not be activated.

Checkpoint support is removed.
gharchive/issue
2018-08-26T07:45:26
2025-04-01T06:38:01.004794
{ "authors": [ "hasithaa", "warunalakshitha" ], "repo": "ballerina-platform/ballerina-lang", "url": "https://github.com/ballerina-platform/ballerina-lang/issues/10191", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
481451407
Enable the disabled test with lang-lib change

Description:
There are several test-modules that have been commented out from the testng.xml. There should be more than ~5200 unit tests running, but currently only ~4500 are running as of now.

There are no tests excluded due to "brokenOnLangLibChange".
So we can close this issue after removing the "brokenOnLangLibChange" label.
gharchive/issue
2019-08-16T05:46:19
2025-04-01T06:38:01.006094
{ "authors": [ "SupunS", "hasithaa" ], "repo": "ballerina-platform/ballerina-lang", "url": "https://github.com/ballerina-platform/ballerina-lang/issues/17844", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
580370530
Function Documentation code action not working

Description:
Consider the following source snippet

function myFunction(record {int x; string b;} hello) {
}

After executing the add documentation code action, the following source is generated causing syntax errors.

function myFunction(# Description
#
# + x - x Parameter Description
# + b - b Parameter Description
record {int x; string b;} hello) {
}

Affected Versions: v1.2.0-SNAPSHOT at least

This is fixed in the latest master
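For contrast, a sketch of what the generated documentation would be expected to look like -- this is my reading of the intended behaviour, not taken from the actual fix: the comment block should sit above the function and describe the parameter itself rather than the record's fields.

# Description
#
# + hello - hello Parameter Description
function myFunction(record {int x; string b;} hello) {
}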
gharchive/issue
2020-03-13T04:54:14
2025-04-01T06:38:01.008020
{ "authors": [ "nadeeshaan", "rasika" ], "repo": "ballerina-platform/ballerina-lang", "url": "https://github.com/ballerina-platform/ballerina-lang/issues/21743", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }