id | text | source | created | added | metadata
---|---|---|---|---|---
stringlengths 4-10 | stringlengths 4-2.14M | stringclasses 2 values | timestamp[s] 2001-05-16 21:05:09 to 2025-01-01 03:38:30 | stringdate 2025-04-01 04:05:38 to 2025-04-01 07:14:06 | dict
2578724790 | When I run uv pip sync requirements.txt, the environment is not updated.
Windows 11.
But the project uses pyproject.toml.
So I think the command uv add -r requirements.txt can be used instead.
Yeah I think you're looking for uv add here as you mentioned above.
| gharchive/issue | 2024-10-10T12:36:47 | 2025-04-01T06:37:58.258417 | {
"authors": [
"Super1Windcloud",
"charliermarsh"
],
"repo": "astral-sh/uv",
"url": "https://github.com/astral-sh/uv/issues/8088",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2139931115 | Add benchmark of pixi with uv
Summary
Adds Pixi as one of the tools benchmarked against uv.
Pixi can generate multi-platform lock files; here, lock files are limited to the platform the benchmark is executed on.
Pixi can install packages from conda and/or PyPI; here, packages are installed from PyPI.
Still in draft as generating lock-files currently fails due to https://github.com/prefix-dev/pixi/issues/817
This seems redundant now that they're retiring rip in favor of uv.
> This seems redundant now that they're retiring rip in favor of uv.
For anyone else like me wondering what you’re referring to: https://prefix.dev/blog/uv_in_pixi
Very ecosystem-conscious move by the Prefix team! 👏
| gharchive/pull-request | 2024-02-17T10:19:09 | 2025-04-01T06:37:58.261716 | {
"authors": [
"erlend-sh",
"olivier-lacroix",
"zanieb"
],
"repo": "astral-sh/uv",
"url": "https://github.com/astral-sh/uv/pull/1581",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2251831636 | Revert "Rewrite uv-auth (#2976)"
This reverts commit c0efeeddf6d738991d8f3149168ce57c52073f4e.
As an alternative to the in-progress fix at https://github.com/astral-sh/uv/pull/3130, we could revert the pull request at #2976.
#3130 instead.
| gharchive/pull-request | 2024-04-19T00:24:33 | 2025-04-01T06:37:58.263174 | {
"authors": [
"zanieb"
],
"repo": "astral-sh/uv",
"url": "https://github.com/astral-sh/uv/pull/3131",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2393633447 | fix(sequencer)!: store native asset ibc->trace mapping in init_chain
Summary
We need to store the native asset's ibc-to-trace mapping in the state; otherwise, queries for the native asset using the ID will fail. For example, get_bridge_account_info currently fails when the asset is the native asset.
Changes
store native asset ibc->trace mapping in init_chain
also enforce that the native asset is in "trace" form, as otherwise we won't be able to map the asset from ibc to trace form, since we don't know the trace form.
Breaking changes
This is unfortunately breaking, since the ibc->trace mapping is stored in app state.
Added an exclamation mark, as in fix(sequencer)!, because this is breaking.
| gharchive/pull-request | 2024-07-06T17:11:39 | 2025-04-01T06:37:58.272419 | {
"authors": [
"SuperFluffy",
"noot"
],
"repo": "astriaorg/astria",
"url": "https://github.com/astriaorg/astria/pull/1242",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1613970454 | Any way to make it also remove the external stylesheet from the HTML?
Hi, thank you for building this tool. I was wondering how we could remove something like <link href=/_astro/index.66b179d0.css rel=stylesheet> from the built HTML when it has already been inlined?
Critters only inlines a small portion of the CSS, the one that is displayed above-the-fold. The rest is needed by your website.
@nikolaxhristov maybe this feature makes sense when all your styles are critical
> @nikolaxhristov maybe this feature makes sense when all your styles are critical
True, this is the issue to track it https://github.com/nikolaxhristov/critters/issues/3 and it is related to https://github.com/nikolaxhristov/critters/issues/2 which needs to get fixed first.
@nikolaxhristov
> Critters only inlines a small portion of the CSS, the one that is displayed above-the-fold. The rest is needed by your website. It is recommended to leave those intact.
Actually, that is a false assumption, isn't it? The critters readme states:
> It also means Critters inlines all CSS rules used by your document, rather than only those needed for above-the-fold content.
@Suven Yes, but it also means that critters includes more CSS than just that above-the-fold.
@Suven I don't always give the most correct assumptions :D
Given that Critters inlines everything needed for the initial render, the only case where the external CSS is still needed is for client-side-hydrated components, right? 🤔 Maybe a config option for this plugin to remove the link tags would be nice, or am I overlooking something?
If you agree I would try and see if I am able to provide a PR.
All PRs are warm-welcomed.
| gharchive/issue | 2023-03-07T18:15:31 | 2025-04-01T06:37:58.277715 | {
"authors": [
"RodrigoTomeES",
"Suven",
"gnomeria",
"nikolaxhristov"
],
"repo": "astro-community/astro-critters",
"url": "https://github.com/astro-community/astro-critters/issues/73",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2009952504 | Upsert does not remove successfully upserted documents
https://github.com/astronomer/ask-astro/blob/515f3386c4eac8aa4ddcdc3ad12c46b52e4aad8a/airflow/include/tasks/extract/utils/weaviate/ask_astro_weaviate_hook.py#L328C9-L340C1
If no errors occur in upsert, this will never be called. It needs to move into its own function that is called after rollback, and also at line 405 if there are no errors.
For suggested fix https://github.com/mpgreg/ask-astro/blob/ed65a354013b5ce2170f98448bd510ed3c4201be/airflow/include/utils/weaviate/hooks/weaviate.py#L350
https://github.com/mpgreg/ask-astro/blob/ed65a354013b5ce2170f98448bd510ed3c4201be/airflow/include/utils/weaviate/hooks/weaviate.py#L301
https://github.com/mpgreg/ask-astro/blob/ed65a354013b5ce2170f98448bd510ed3c4201be/airflow/include/utils/weaviate/hooks/weaviate.py#L464-L489
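A minimal sketch of the restructuring suggested above (the function, result shape, and client attribute below are assumptions for illustration, not the actual ask-astro hook code):

```python
def _delete_successfully_upserted(self, upsert_results: list) -> None:
    """Remove objects that were upserted successfully.

    Extracted into its own function so the same cleanup can run both
    from the rollback path and after the final batch when no errors occur.
    """
    for result in upsert_results:
        # assumed result shape: {"status": ..., "uuid": ..., "class_name": ...}
        if result.get("status") == "success":
            self.client.data_object.delete(
                uuid=result["uuid"], class_name=result["class_name"]
            )
```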
@mpgreg Isn't this section doing the rollback: https://github.com/astronomer/ask-astro/blob/main/airflow/include/tasks/extract/utils/weaviate/ask_astro_weaviate_hook.py#L328?
| gharchive/issue | 2023-11-24T16:15:41 | 2025-04-01T06:37:58.281128 | {
"authors": [
"mpgreg",
"sunank200"
],
"repo": "astronomer/ask-astro",
"url": "https://github.com/astronomer/ask-astro/issues/174",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1553916656 | Add support for "source" select/exclude to DbtDag & DbtTaskGroup parsers
See dbt docs on cli usage examples here
$ dbt run --select source:snowplow+ # run all models that select from Snowplow sources
Ultimately, in our parsers, we should be able to have a new parameter that looks something like this:
# (Either the select or exclude parameter would be specified with the snowplow source - not both)
jaffle_shop = DbtTaskGroup(
...
select={'sources': ['snowplow+']} # run all models that select from Snowplow sources
exclude={'sources': ['snowplow+']} # run all models except those that select from Snowplow sources
)
To complement this: as of Cosmos 1.x, this functionality only works with LoadMode.DBT_LS. We should also support it when the DAG/TaskGroup uses LoadMode.DBT_MANIFEST or LoadMode.CUSTOM.
| gharchive/issue | 2023-01-23T22:14:08 | 2025-04-01T06:37:58.289499 | {
"authors": [
"chrishronek",
"tatiana"
],
"repo": "astronomer/astronomer-cosmos",
"url": "https://github.com/astronomer/astronomer-cosmos/issues/93",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2119838010 | Add support for InvocationMode.DBT_RUNNER for local execution mode
Description
This PR adds dbtRunner programmatic invocation for ExecutionMode.LOCAL. I decided not to add a new execution mode (e.g. ExecutionMode.LOCAL_DBT_RUNNER) and all of its child operators, but instead added an additional config, ExecutionConfig.invocation_mode, where InvocationMode.DBT_RUNNER can be specified. This way, users who are already using local execution mode can switch to dbt runner and see performance improvements.
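Based on that description, opting in would look roughly like this (a sketch; import paths and the required project/profile arguments may differ by Cosmos version):

```python
from cosmos import DbtDag, ExecutionConfig, ProjectConfig
from cosmos.constants import ExecutionMode, InvocationMode

dag = DbtDag(
    dag_id="jaffle_shop_local_runner",
    project_config=ProjectConfig("/path/to/dbt/project"),  # placeholder path
    # profile configuration omitted for brevity
    execution_config=ExecutionConfig(
        execution_mode=ExecutionMode.LOCAL,
        # use programmatic dbtRunner invocation instead of a dbt subprocess
        invocation_mode=InvocationMode.DBT_RUNNER,
    ),
)
```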
With dbtRunnerResult it is easy to know whether the dbt run was successful, and logs do not need to be parsed but are still logged in the operator.
Performance Testing
After #827 was added, I modified it slightly to use the postgres adapter instead of sqlite, because the latest dbt-core supported by the sqlite adapter is 1.4, while programmatic invocation requires >=1.5.0. I got the following results comparing subprocess to dbt runner for 10 models:
InvocationMode.SUBPROCESS:
Ran 10 models in 23.77661895751953 seconds
NUM_MODELS=10
TIME=23.77661895751953
InvocationMode.DBT_RUNNER:
Ran 10 models in 8.390100002288818 seconds
NUM_MODELS=10
TIME=8.390100002288818
So InvocationMode.DBT_RUNNER is almost 3x faster, and it can speed up DAG runs when there are many models that execute relatively quickly, since there seems to be a 1-2 s speed-up per task.
One thing I found while working on this is that a manifest is stored in the result if you parse a project with the runner, and can be reused in subsequent commands to avoid reparsing. This could be a useful way for caching the manifest if we use dbt runner for dbt ls parsing and could speed up the initial render as well.
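For reference, the reuse pattern described here looks roughly like this with dbt's programmatic API (dbt-core >= 1.5):

```python
from dbt.cli.main import dbtRunner

runner = dbtRunner()
# parse once; the invocation result carries the project's Manifest
res = runner.invoke(["parse"])
manifest = res.result

# hand the manifest to a new runner so subsequent commands skip re-parsing
cached_runner = dbtRunner(manifest=manifest)
cached_runner.invoke(["ls"])
```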
At first I thought it would be easy to have this also work for virtualenv execution, since I assumed the entire execute method ran inside the virtualenv. That is not the case: the virtualenv operator creates a virtualenv and then passes the executable path to a subprocess. It may be possible to make this work for virtualenv, but that is better suited for a follow-up PR.
Related Issue(s)
closes #717
Breaking Change?
None
Checklist
[x] I have made corresponding changes to the documentation (if required)
[x] I have added tests that prove my fix is effective or that my feature works - added unit tests and integration tests.
@jlaneve I'm closing this PR and opening up #850 because I couldn't update the GH action and have it run with the updates here on my forked branch.
| gharchive/pull-request | 2024-02-06T02:57:18 | 2025-04-01T06:37:58.296651 | {
"authors": [
"jbandoro"
],
"repo": "astronomer/astronomer-cosmos",
"url": "https://github.com/astronomer/astronomer-cosmos/pull/836",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1210894621 | Fix failing example_emr_eks_pi_job example DAG
The example_emr_eks_pi_job DAG is failing as part of our integration test runs.
According to the investigation logs, the shell-script BashOperator has errors that are not being raised, letting the task state show as successful.
Understand the DAG, debug the issue across its tasks, and make it work.
I have set the shell script to raise on failure in this PR: https://github.com/astronomer/astronomer-providers/pull/259
However, the CloudFormation template is not succeeding, with the failure reason below:
AWS::EKS::Nodegroup/ManagedNodeGroup: CREATE_FAILED – "Resource handler returned message: \"[Issue(Code=AsgInstanceLaunchFailures, Message=You've reached your quota for maximum Fleet Requests for this account. Launching EC2 instance failed., ResourceIds=[eks-ng-a0731f88-2ec026b0-8620-cc3d-82e3-dee3ed01e00a]), Issue(Code=NodeCreationFailure, Message=Instances failed to join the kubernetes cluster, ResourceIds=[DUMMY_11448ab7-f2d5-42a7-a6cc-0d74ccb2e8e9, DUMMY_1bb6f31e-92c8-43d5-9ca3-8e815cbff947, DUMMY_2a5d16f9-291c-4d58-bf35-2eb5c0db88f5, DUMMY_4e4b490f-ef51-4d5b-87a5-3a5141ac6f9c, DUMMY_77a3e8d7-3af0-49e6-afa5-7c5ebc9a0eeb, DUMMY_7e88602e-c7f8-4d6a-943b-5f5078a4eb3f, DUMMY_a129f26a-1a81-4b5e-9c84-2285d8734a8f, DUMMY_abb67e19-d841-4bd0-ad05-5418f73149d1, DUMMY_ae850f0d-0693-4772-9d9b-4f4d442bab0e, DUMMY_ff345f8f-3468-4f2b-b8cf-ac1560c4150d])] (Service: null, Status Code: 0, Request ID: null, Extended Request ID: null)\" (RequestToken: 98ef1611-2341-fd34-ef08-4e8970c34e47, HandlerErrorCode: GeneralServiceException)"
Adding @bharanidharan14 as assignee too, as he is trying this locally.
@dstandish suggested trying a smaller instance, but we are still facing the same issue. He further suggested checking with @danielhoherd and/or speaking to AWS. I will connect with @danielhoherd and check if he can help us here. In parallel, I have created an AWS case for our account: https://us-east-1.console.aws.amazon.com/support/home?region=us-east-2#/case/?displayId=9973194951&language=en
I don't have any special knowledge about this. The error seems pretty clear though: "You've reached your quota for maximum Fleet Requests for this account." My first step would be to make that request that @pankajkoti made.
We've run into this kind of thing in prod-cloud in GCP, and requesting an increase is the only solution.
We are not sending the VIRTUAL_CLUSTER_ID to the example_delete_eks_cluster_and_role_policies shell script from the DAG. Due to this, if a virtual cluster already exists the delete shell script isn't cleaning up properly.
[2022-05-02, 17:42:13 UTC] {subprocess.py:74} INFO - Running command: ['bash', '-c', 'sh $AIRFLOW_HOME/dags/example_delete_eks_cluster_and_role_policies.sh '] [2022-05-02, 17:42:13 UTC] {subprocess.py:85} INFO - Output: [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters] [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - To see help text, you can run: [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - aws help [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - aws <command> help [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - aws <command> <subcommand> help [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - aws: error: argument --id: expected one argument
aws emr-containers delete-virtual-cluster --id $VIRTUAL_CLUSTER_ID
VIRTUAL_CLUSTER_ID comes through as none at runtime.
create_emr_virtual_cluster_func should delete existing virtual clusters and then create one.
[2022-05-02, 17:42:10 UTC] {example_emr_eks_containers_job.py:60} ERROR - Error while creating EMR virtual cluster Traceback (most recent call last): File "/usr/local/airflow/dags/example_emr_eks_containers_job.py", line 50, in create_emr_virtual_cluster_func response = client.create_virtual_cluster( File "/usr/local/lib/python3.9/site-packages/botocore/client.py", line 395, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/local/lib/python3.9/site-packages/botocore/client.py", line 725, in _make_api_call raise error_class(parsed_response, operation_name) botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the CreateVirtualCluster operation: A virtual cluster already exists in the given namespace
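A rough boto3 sketch of the delete-then-create behavior suggested above (the cluster name, EKS cluster ID, and namespace are placeholders, not the example DAG's actual code):

```python
import boto3

client = boto3.client("emr-containers")

def create_emr_virtual_cluster(name, eks_cluster_name, namespace):
    # remove any virtual cluster already registered for this provider,
    # so CreateVirtualCluster won't fail with "already exists"
    existing = client.list_virtual_clusters(
        containerProviderId=eks_cluster_name,
        containerProviderType="EKS",
        states=["RUNNING"],
    )
    for cluster in existing.get("virtualClusters", []):
        client.delete_virtual_cluster(id=cluster["id"])

    return client.create_virtual_cluster(
        name=name,
        containerProvider={
            "id": eks_cluster_name,
            "type": "EKS",
            "info": {"eksInfo": {"namespace": namespace}},
        },
    )
```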
| gharchive/issue | 2022-04-21T11:24:01 | 2025-04-01T06:37:58.304260 | {
"authors": [
"danielhoherd",
"pankajkoti",
"phanikumv"
],
"repo": "astronomer/astronomer-providers",
"url": "https://github.com/astronomer/astronomer-providers/issues/257",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
492345303 | write_ds9 silently overwrites output file
https://github.com/astropy/regions/blob/35bab74340b12053942d2bc6ebd071dce340c605/regions/io/ds9/write.py#L62
This behavior is not always desirable. I propose you add an overwrite keyword like what astropy unified I/O does.
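A minimal sketch of the proposed keyword, mirroring the astropy unified I/O convention (hypothetical; not the actual regions implementation):

```python
import os

def write_ds9(regions, filename, overwrite=False):
    # refuse to clobber an existing file unless explicitly allowed
    if os.path.exists(filename) and not overwrite:
        raise OSError(f"{filename} already exists; pass overwrite=True to replace it.")
    output = ds9_objects_to_string(regions)  # serializer assumed from the linked write.py
    with open(filename, "w") as fh:
        fh.write(output)
```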
Added in v0.5
| gharchive/issue | 2019-09-11T16:35:36 | 2025-04-01T06:37:58.373001 | {
"authors": [
"larrybradley",
"pllim"
],
"repo": "astropy/regions",
"url": "https://github.com/astropy/regions/issues/298",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
476623590 | Enhancement of Convolution Networks
This PR
Corrects a few mistakes in the docs for Convolution Networks
Adds data_format capability. The user can now use channels_first and channels_last tensors. This option can be passed to the forward function; if not passed, it is picked up from the hyperparameters.
Handles the difference in data_format requirements between the convolution network and mask_sequences [this was an existing issue that had gone unnoticed].
Makes other_conv_kwargs and other_pool_kwargs accept a list as well as a dict. If a dict, the same properties are applied to all layers; if a list, individual layers get their own kwargs (a sketch of this normalization appears below).
Fixes a bug: when num_dense_layers < 0, in_features of the logits layer equals the out_features of the Flatten layer. Adds a test case for this.
This PR closes #136
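A sketch of how the list-or-dict kwargs normalization could work (a hypothetical helper, not the exact PR code):

```python
def _layer_kwargs_list(value, num_layers, name):
    # a dict applies the same kwargs to every layer; a list supplies
    # per-layer kwargs and must therefore match the layer count
    if isinstance(value, dict):
        return [value] * num_layers
    if isinstance(value, (list, tuple)):
        if len(value) != num_layers:
            raise ValueError(f"{name} must have {num_layers} entries, got {len(value)}")
        return list(value)
    raise TypeError(f"{name} must be a dict or a list of dicts")

# e.g.: conv_kwargs = _layer_kwargs_list(other_conv_kwargs, num_conv_layers, "other_conv_kwargs")
```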
Is the new issue about time_major=False included in this PR?
> Is the new issue about time_major=False included in this PR?
Yes, it is included. I have updated the description of this PR to mention this issue.
| gharchive/pull-request | 2019-08-05T01:59:34 | 2025-04-01T06:37:58.386844 | {
"authors": [
"AvinashBukkittu",
"ZhitingHu"
],
"repo": "asyml/texar-pytorch",
"url": "https://github.com/asyml/texar-pytorch/pull/138",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1243432022 | docs: update to make styling more consistent
I have updated the README by adding icons beside two headings, i.e. Environment and How to setup.
@naman-tiwari Hi! Please apply my and Missy's suggestions, and then we will accept and merge it :)
| gharchive/pull-request | 2022-05-20T17:26:38 | 2025-04-01T06:37:58.396118 | {
"authors": [
"magicmatatjahu",
"naman-tiwari"
],
"repo": "asyncapi/design-system",
"url": "https://github.com/asyncapi/design-system/pull/34",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1694933072 | sumneko_lua lags the editor
As stated above, sumneko_lua (lua_ls) lags the editor. Somehow it really lags mini plugins?
Not just mini but also noice.nvim?
Otherwise, diagnostics lag the editor when leaving?
nope, fixed in 50aa8ccb8cfcec2aef2fca202cd1b7aa1dca450e
| gharchive/issue | 2023-05-03T22:27:05 | 2025-04-01T06:37:58.397457 | {
"authors": [
"asyncedd"
],
"repo": "asyncedd/dots.nvim",
"url": "https://github.com/asyncedd/dots.nvim/issues/96",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1153283479 | Restructure
Remove netstandard 2.1
Restructure solution structure
Remove Proto.Remote.GrpcNet and Proto.Remote.GrpcCore in favor of just Proto.Remote containing all grpcnet code
Also, any idea why some tests consistently fail? or maybe just CI acting up
> Also, any idea why some tests consistently fail? or maybe just CI acting up
I did rewrite the tests to run concurrently. Might need higher timeouts because of that, since they might get CPU throttled
I've disabled everything else but the two failing tests; they still fail.
Very unclear why. One just seems to get stuck, not doing anything?
| gharchive/pull-request | 2022-02-27T15:06:39 | 2025-04-01T06:37:58.399421 | {
"authors": [
"mhelleborg",
"rogeralsing"
],
"repo": "asynkron/protoactor-dotnet",
"url": "https://github.com/asynkron/protoactor-dotnet/pull/1488",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1841096233 | Fixes #208
Changes the function argument type in get_instance(self, num: int) in the ExampleManager class from 'int' to 'str'. This allows loading instance files with characters in the filename, e.g. 'instance1c.rddl' in the MountainCar domain.
Hi,
Thanks for the fix.
We should really start thinking about moving the competition domains over to rddlrepository soon.
What do you mean exactly by moving the competition domains over to rddlrepository?
There are some inconsistencies due to the fix, e.g. in the README.md EnvInfo.get_instance(0) still uses an int.
I could offer to fix all the inconsistencies, and additionally I would rename the argument from num to name, so it becomes def get_instance(self, name: str):.
Let me know what you think and whether it would help you.
Indeed, there are still some incompatibilities. Originally, we intended (and still do) for instance numbers to be integers, following custom in defining rddl instances. The 'c' was appended to the instance to denote that it is part of the official IPPC probabilistic planning competition we held earlier this year. This means those domain files with 1c, 2c will eventually be removed from pyRDDLGym.
That said, there is no reason why users should not be allowed to use either integer or string numbering going forward. There is currently a major design overhaul of the pyRDDLGym front end underway to enhance user friendliness, so I think a number of these changes will eventually need to be incorporated as part of that overhaul.
If you would like to help, please feel free to contribute a PR. Note we are currently waiting on #211, which is a big change to the front end, so hopefully it would not conflict. If you think any aspect of the front end can be further improved, please feel free to suggest improvements or make a PR :)
| gharchive/pull-request | 2023-08-08T11:10:38 | 2025-04-01T06:37:58.403674 | {
"authors": [
"GMMDMDIDEMS",
"mike-gimelfarb"
],
"repo": "ataitler/pyRDDLGym",
"url": "https://github.com/ataitler/pyRDDLGym/pull/209",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
713989530 | Out-of-memory issue when allocating a http request header buffer
Trying to set up a 512-byte buffer for reading the HTTP request headers, and the allocation fails.
There could be an IDF setting that can be used to suppress the abort() call and instead just return nullptr.
stack:
0x40089235: esp_system_abort at C:/espressif/esp-idf/components/esp_system/system_api.c:106
0x4008fbfd: abort at C:/espressif/esp-idf/components/newlib/abort.c:46
0x400d5d13: __cxa_end_catch at C:/espressif/esp-idf/components/cxx/cxx_exception_stubs.cpp:13
0x40195331: operator new(unsigned int) at /builds/idf/crosstool-NG/.build/HOST-x86_64-w64-mingw32/xtensa-esp32-elf/src/gcc/libstdc++-v3/libsupc++/new_op.cc:54
0x400f18de: std::vector<unsigned char, std::allocator<unsigned char> >::_M_default_append(unsigned int) at c:\espressif\tools\xtensa-esp32-elf\esp-2020r2-8.2.0\xtensa-esp32-elf\xtensa-esp32-elf\include\c++\8.2.0\ext/new_allocator.h:111
\-> inlined by: ?? at c:\espressif\tools\xtensa-esp32-elf\esp-2020r2-8.2.0\xtensa-esp32-elf\xtensa-esp32-elf\include\c++\8.2.0\bits/alloc_traits.h:436
\-> inlined by: ?? at c:\espressif\tools\xtensa-esp32-elf\esp-2020r2-8.2.0\xtensa-esp32-elf\xtensa-esp32-elf\include\c++\8.2.0\bits/stl_vector.h:296
\-> inlined by: std::vector<unsigned char, std::allocator<unsigned char> >::_M_default_append(unsigned int) at c:\espressif\tools\xtensa-esp32-elf\esp-2020r2-8.2.0\xtensa-esp32-elf\xtensa-esp32-elf\include\c++\8.2.0\bits/vector.tcc:604
0x4011686a: http::HttpRequestFlow::start_request() at c:\espressif\tools\xtensa-esp32-elf\esp-2020r2-8.2.0\xtensa-esp32-elf\xtensa-esp32-elf\include\c++\8.2.0\bits/stl_vector.h:827
\-> inlined by: http::HttpRequestFlow::start_request() at r:\code\esp32commandstation\build/../components/HttpServer/src/HttpRequestFlow.cpp:83
0x401194d1: StateFlowBase::run() at r:\code\esp32commandstation\build/../components/OpenMRNLite/src/executor/StateFlow.cpp:63 (discriminator 4)
0x401192dd: ExecutorBase::entry() at r:\code\esp32commandstation\build/../components/OpenMRNLite/src/executor/Executor.cpp:324
0x401a7985: OSThread::start(void*) at r:\code\esp32commandstation\build/../components/OpenMRNLite/src/os/OS.hxx:193
0x40125703: os_thread_start at r:\code\esp32commandstation\build/../components/OpenMRNLite/src/os/os.c:391
Partially addressed via https://github.com/atanisoft/HttpServer/commit/35230a02be32ccdada5f41c5d00489f1c07f9b83
| gharchive/issue | 2020-10-03T02:54:59 | 2025-04-01T06:37:58.405930 | {
"authors": [
"TrainzLuvr",
"atanisoft"
],
"repo": "atanisoft/HttpServer",
"url": "https://github.com/atanisoft/HttpServer/issues/6",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1021933011 | 🛑 Wedding HTTPS is down
In fb66068, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in 8155511.
| gharchive/issue | 2021-10-10T09:42:46 | 2025-04-01T06:37:58.436225 | {
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/2434",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1348223905 | 🛑 Wedding HTTPS is down
In 6706d4b, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in 57d8397.
| gharchive/issue | 2022-08-23T16:34:24 | 2025-04-01T06:37:58.438342 | {
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/8107",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
950252348 | 🛑 Wedding HTTPS is down
In 1990d12, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in 1300034.
| gharchive/issue | 2021-07-22T02:46:02 | 2025-04-01T06:37:58.440425 | {
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/843",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1472457890 | 🛑 Wedding HTTPS is down
In 866cc4b, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in c82bdea.
| gharchive/issue | 2022-12-02T07:55:33 | 2025-04-01T06:37:58.442722 | {
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/9479",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1314695302 | Add monadic utility functions
This PR adds some monadic utility functions to improve the readability of code written using MTL-style effects.
As a side note: while writing the code I noticed wartremover getting really angry with the monadic unless and when because it kept inferring the type Any; we'll have to keep an eye on this, and I think we'll have to disable this wart to keep the code readable:
when (updatedOrder needsMoreOf product) (emit(MissingProduct(product)))
// would turn into
when[<type annotation>] (updatedOrder needsMoreOf product) (emit(MissingProduct(product)))
// The type annotation is quite ugly and breaks the flow of the sentence when reading the code
| gharchive/pull-request | 2022-07-22T08:53:53 | 2025-04-01T06:37:58.457187 | {
"authors": [
"giacomocavalieri"
],
"repo": "atedeg/mdm",
"url": "https://github.com/atedeg/mdm/pull/95",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2223992924 | Integration stopped working after update to HA 2024.4.0
Hi,
After updating to HA 2024.4.0, all integration entities except the state became unavailable.
I see these suspicious errors in the logs; not sure whether they are related:
Logger: homeassistant
Source: util/async_.py:35
First occurred: 23:34:13 (1 occurrences)
Last logged: 23:34:13
Error doing job: Task exception was never retrieved
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 629, in async_add_entities
await add_func(coros, entities, timeout)
File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 535, in _async_add_and_update_entities
tasks = [create_eager_task(coro) for coro in coros]
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/util/async_.py", line 35, in create_eager_task
loop=loop or get_running_loop(),
^^^^^^^^^^^^^^^^^^
RuntimeError: no running event loop
Logger: py.warnings
Source: runner.py:189
First occurred: 23:34:13 (1 occurrences)
Last logged: 23:34:13
/usr/local/lib/python3.12/threading.py:299: RuntimeWarning: coroutine 'EntityPlatform._async_add_entity' was never awaited def __enter__(self):
Thanks!
I can confirm this bug after updating to 2024.0.
Debug output:
This happens on both of my servers (Dell and ASRock). It looks like only the SDR list items are affected.
The power state and the power switch are available.
1.6.0 brought back the sensors for me, thank you. Hope you'll sort out how to play with those bloody APIs :)
| gharchive/issue | 2024-04-03T21:48:22 | 2025-04-01T06:37:58.462520 | {
"authors": [
"35gh",
"corgan2222"
],
"repo": "ateodorescu/home-assistant-ipmi",
"url": "https://github.com/ateodorescu/home-assistant-ipmi/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
564559260 | Move ExtensionAttribute into a separate assembly
At this point there are a lot of libraries which employ an ExtensionAttribute definition hack to support .NET 2.0, which leads to some well-known problems when System.Runtime.CompilerServices.ExtensionAttribute gets redefined in more than one assembly. For example, let's consider the following situation I faced recently: my project referenced both LinqBridge and Json.NET, which has its own built-in *internal* copy of LinqBridge (with ExtensionAttribute defined *internal* too). This resulted in an "error CS0656: Missing compiler required member 'System.Runtime.CompilerServices.ExtensionAttribute..ctor'" and made me recompile Json.NET to reference LinqBridge explicitly.
So, what about moving the ExtensionAttribute definition into a completely separate assembly, maybe even a completely separate project? AFAIR, LinqBridge is the most notable project which uses this tricky technique, and I hope other libraries would eventually switch to that assembly instead of defining their own copy of the attribute class, so that there would be only one assembly with ExtensionAttribute for .NET 2.0, providing the "standard" implementation of the hack.
See also:
http://stackoverflow.com/questions/11025100
http://devhawk.net/2012/06/20/ambiguous-extensionattribute-errors/
Original issue reported on code.google.com by firegura...@gmail.com on 20 Dec 2013 at 3:18
You should take up this issue with Json.NET. The idea of LINQBridge was to
provide .NET 3.5-isms for .NET 2.0, which, besides LINQ, includes extension
methods. In fact, the ExtensionAttribute was internalized in 1.1 and then
deliberately brought back in 1.2 (see issue #10).
Original comment by azizatif on 20 Dec 2013 at 6:10
Changed state: WontFix
> The idea of LINQBridge was to provide .NET 3.5-isms for .NET 2.0, which, besides LINQ, includes extension methods.
LINQ is the most well-known application of extension methods, but not the only one. It may be a good idea to split "providing extension methods for .NET 2.0" from "providing LINQ to Objects for .NET 2.0", because they are *different* language features, where the latter is based on the former. There may be other libraries that would like to use the extension methods facility but have nothing to do with LinqBridge. If they all declare their own ExtensionAttribute, they may become unusable together with each other.
> In fact, the ExtensionAttribute was internalized in 1.1 and then
> deliberately brought back in 1.2 (see issue #10 ).
I didn't mean it should be internalized. It should be public, but moved away from the LinqBridge assembly into a separate DLL that would be shipped with it. Something like this:
* LinqBridge.dll references ExtensionAttribute.dll
* LibraryThatNeedsExtensionMethodsWithLinq.dll references LinqBridge.dll
* LibraryThatNeedsExtensionMethodsButNotLinq.dll references ExtensionAttribute.dll
Original comment by firegura...@gmail.com on 20 Dec 2013 at 8:14
I understand where you're coming from. You can also extend the same argument to
Action and Func delegates. In any event, this would change the direction with
which LINQBridge was conceived. Today, it is a project in twilight and the only
effort made would be, I reckon, towards a showstopper bug. Anything else would
require considerable resources that are scarce, like volunteered free time.
That said, LINQBridge is open source and if you are confident it needs to make
the split, it can be forked (under the same license), changed and re-published
on NuGet under an alternate Id.
Original comment by azizatif on 21 Dec 2013 at 6:38
> You can also extend the same argument to Action and Func delegates.
This would be overkill. Moreover, multiple Action and Func delegates may be declared in different namespaces, one for each library that wants them, and they would still be compatible with each other, unlike ExtensionAttribute, which MUST reside in the system namespace.
> LINQBridge is open source and if you are confident it needs to make
> the split, it can be forked (under the same license), changed and
> re-published on NuGet under an alternate Id.
The key point was to create a common assembly for everyone who wants to employ extension methods in .NET 2.0, so that assembly must come from an authoritative source like LinqBridge. Even if I fork the project, it will not convince anybody to use that assembly.
> Today, it is a project in twilight
Sad but true. Almost no one cares about .NET 2.0, but some libraries still try to support it. If there were a common and widely accepted way of using extension methods that prevents conflicts between these libraries (caused by ExtensionAttribute), their authors would not simply drop that support because they got tired of bug reports from users of the old runtime.
Anyway, thank you for your answers. I'm also sorry for my not very
perfect language.
Original comment by firegura...@gmail.com on 21 Dec 2013 at 8:51
| gharchive/issue | 2020-02-13T09:45:19 | 2025-04-01T06:37:58.479240 | {
"authors": [
"atifaziz"
],
"repo": "atifaziz/LINQBridge",
"url": "https://github.com/atifaziz/LINQBridge/issues/28",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2685931371 | [Activity 09] Comparative analysis of models
Insert the text sections Obtenção de dados (data acquisition), Preparação de dados (data preparation), and Seleção de modelos (model selection) into the notebook notebooks/02-comparative_analysis.ipynb, following the content covered in class:
[x] A cross-validation method presented in class (holdout, k-fold, Monte Carlo) must be used;
[x] Present at least four models: a baseline to serve as comparison for at least three (or more) experiments with models from different families. At least two metrics must be used to compare the models;
[x] So that the results can be evaluated, they must be summarized using explanatory tables and/or charts;
@omadson @carlos-stefano @fiuzatayna
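A sketch of what the requested comparison could look like (scikit-learn, the toy dataset, and the model choices below are illustrative assumptions, not the team's actual data or code):

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer  # stand-in for the project's dataset
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# one baseline plus three models from different families
models = {
    "baseline": DummyClassifier(strategy="most_frequent"),
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(random_state=42),
    "knn": KNeighborsClassifier(),
}

cv = KFold(n_splits=5, shuffle=True, random_state=42)  # k-fold cross-validation
metrics = ["accuracy", "f1_macro"]  # two comparison metrics

results = {name: cross_validate(model, X, y, cv=cv, scoring=metrics)
           for name, model in models.items()}
# summarize mean scores per model in a table
summary = pd.DataFrame(
    {name: {met: scores[f"test_{met}"].mean() for met in metrics}
     for name, scores in results.items()}
).T
print(summary)
```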
| gharchive/issue | 2024-11-23T12:19:44 | 2025-04-01T06:37:58.522174 | {
"authors": [
"laayrd"
],
"repo": "atlantico-academy/equipe5-2024.3",
"url": "https://github.com/atlantico-academy/equipe5-2024.3/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
261904038 | Interpolate task progress
Expected Behaviour
The progress of tasks update every tick according to the speed of the machines.
Actual Behaviour
At the moment, the progress of tasks will only update at a specific interval (10 ticks) instead of per tick.
Implemented in #21
| gharchive/issue | 2017-10-01T09:33:33 | 2025-04-01T06:37:58.523500 | {
"authors": [
"fabianishere"
],
"repo": "atlarge-research/opendc-simulator",
"url": "https://github.com/atlarge-research/opendc-simulator/issues/5",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
437790084 | Introduce editable combobox
Introduce an editable combobox that allows the user to type and filter before choosing an item from a list.
delimiter for Concatenate/Split transformation
transformation name
field name
What else?
out of date
| gharchive/issue | 2019-04-26T18:19:00 | 2025-04-01T06:37:58.531382 | {
"authors": [
"igarashitm"
],
"repo": "atlasmap/atlasmap",
"url": "https://github.com/atlasmap/atlasmap/issues/895",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1046783273 | init.py
change
from storages.backends import s3boto
to
from storages.backends.s3boto3 import S3Boto3Storage
replace
s3boto.S3BotoStorage
with
S3Boto3Storage
and update the repository please, it's outdated.
> change
> from storages.backends import s3boto
> to
> from storages.backends.s3boto3 import S3Boto3Storage
> replace
> s3boto.S3BotoStorage
> with
> S3Boto3Storage
> and update the repository please, it's outdated.
Pull request(s) are welcome!
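For clarity, the requested change amounts to this import swap (a sketch; the surrounding django-s3-cache code is omitted):

```python
# before (boto2-based backend, since removed from django-storages):
# from storages.backends import s3boto
# storage_class = s3boto.S3BotoStorage

# after (boto3-based backend):
from storages.backends.s3boto3 import S3Boto3Storage

storage_class = S3Boto3Storage
```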
| gharchive/issue | 2021-11-07T15:59:25 | 2025-04-01T06:37:58.547318 | {
"authors": [
"atodorov",
"bruns6077"
],
"repo": "atodorov/django-s3-cache",
"url": "https://github.com/atodorov/django-s3-cache/issues/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
124083240 | [ Keyboard usability enhancement ] Search box for Tree View
Hi,
Often a project is made up of tens, if not hundreds or even thousands, of files and directories; it would be quite handy to have a box to search (as-you-type) for them.
Cheers
Thanks for the suggestion! This looks like a duplicate of https://github.com/atom/tree-view/issues/159 – feel free to subscribe there for updates.
You can already live search for files in your project in Atom already though, via the fuzzy-finder: https://atom.io/docs/latest/getting-started-atom-basics#opening-a-file-in-a-project
Also, please take a look at the Contributing guide for a guide on submitting bug reports (including searching for duplicates first).
| gharchive/issue | 2015-12-28T15:49:46 | 2025-04-01T06:37:58.570163 | {
"authors": [
"i90rr",
"mnquintana"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/10202",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
136541875 | Feature Request: Distribute Atom through the Mac App Store
IANAL. Please can anyone confirm whether the Atom MIT license is compatible with the Mac App Store?
Now that Electron 0.34.0 supports the Mac App Store https://github.com/atom/electron/blob/master/docs/tutorial/mac-app-store-submission-guide.md, is it possible to submit Atom to the Mac App Store so that the app auto-updates?
As of Atom 1.0 to 1.6, the OS X auto-update framework is borked as per https://github.com/atom/atom/issues/2860
Thanks!
To my knowledge, there is no restriction against open source software on the Mac App Store. On the other hand, an application on the Mac App Store is restricted to a sandbox that precludes reading from and writing to files outside certain areas. Since editing files anywhere on your system is one of the features of Atom, until that requirement changes I don't believe we'll be pursuing publishing on the Mac App Store.
Thanks for your feedback!
| gharchive/issue | 2016-02-25T23:13:55 | 2025-04-01T06:37:58.573316 | {
"authors": [
"landocloud",
"lee-dohm"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/10976",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
200811203 | Build Failed
Prerequisites
[x] Put an X between the brackets on this line if you have done all of the following:
Reproduced the problem in Safe Mode: http://flight-manual.atom.io/hacking-atom/sections/debugging/#using-safe-mode
Followed all applicable steps in the debugging guide: http://flight-manual.atom.io/hacking-atom/sections/debugging/
Checked the FAQs on the message board for common solutions: https://discuss.atom.io/c/faq
Checked that your issue isn't already filed: https://github.com/issues?utf8=✓&q=is%3Aissue+user%3Aatom
Checked that there is not already an Atom package that provides the described functionality: https://atom.io/packages
Description
I want to install Atom by building the source code in a Linux environment, but the build fails.
Steps to Reproduce
git clone https://github.com/atom/atom.git
cd atom
npm config set python /.tool/bin/python -g
script/build
wait for a few minutes after printing "Installing apm" on the screen.
the error will be displayed.
Expected behavior: [What you expect to happen]
Install successfully
Actual behavior: [What actually happens]
The error info is as follows:
Node: v6.9.4
Npm: v4.1.1
Installing script dependencies
Installing apm
module.js:327
throw err;
^
Error: Cannot find module '../build/Release/git.node'
at Function.Module._resolveFilename (module.js:325:15)
at Function.Module._load (module.js:276:25)
at Module.require (module.js:353:17)
at require (internal/module.js:12:17)
at Object.<anonymous> (/.tool/atom/apm/node_modules/atom-package-manager/node_modules/git-utils/lib/git.js:8:16)
at Object.<anonymous> (/.tool/atom/apm/node_modules/atom-package-manager/node_modules/git-utils/lib/git.js:371:4)
at Module._compile (module.js:409:26)
at Object.Module._extensions..js (module.js:416:10)
at Module.load (module.js:343:32)
at Function.Module._load (module.js:300:12)
child_process.js:506
throw err;
^
Error: Command failed: /.tool/atom/apm/node_modules/atom-package-manager/bin/apm --loglevel=error install
at checkExecSyncError (child_process.js:483:13)
at Object.execFileSync (child_process.js:503:13)
at module.exports (/.tool/atom/script/lib/install-atom-dependencies.js:15:16)
at Object.<anonymous> (/.tool/atom/script/bootstrap:28:1)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.require (module.js:497:17)
Reproduces how often: [What percentage of the time does it reproduce?]
After getting the error, I run a clean and build again, and still get the same error.
Versions
OS: RHEL 5.7
Python version: v2.6.9
Node version: v6.9.4
Npm version: v4.1.1
Additional Information
Any additional information, configuration or data that might be necessary to reproduce the issue.
Is the version of python you have installed at /.tool/bin/python v2 or v3?
It's v2, v2.6.9
| gharchive/issue | 2017-01-14T15:44:24 | 2025-04-01T06:37:58.582152 | {
"authors": [
"hucan7",
"lee-dohm"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/13620",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
232823091 | Rename file should replace the tab with the renamed file.
Prerequisites
[X] Put an X between the brackets on this line if you have done all of the following:
...
Description
Rename file should replace the tab with the renamed file.
Steps to Reproduce
Right click a file in tab or file tree sidebar,
Click rename
Enter the new name
Expected behavior:
The active tab should be the newly renamed file.
Actual behavior:
The active tab remains the old, un-renamed file, and I have to close it and open the renamed file.
Versions
Atom: 1.16.0
OS: ubuntu 16.04
Thanks for the report! Can you re-open this in https://github.com/atom/tree-view if there's no existing issue already so we have it in the right place? Also, can you clarify your steps to reproduce a bit? I'm not quite sure if you're renaming the file you currently have open or if you're renaming some other file.
This is a duplicate of atom/tree-view#264
| gharchive/issue | 2017-06-01T09:45:32 | 2025-04-01T06:37:58.586453 | {
"authors": [
"q4w56",
"rsese"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/14692",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
325136963 | Display Progress of Download of a package
Summary
It would be a good idea to show a progress bar of how much has been downloaded when installing a package via apm or the Atom UI.
Motivation
When installing a package via apm or the Atom UI, it sometimes takes a long time, which makes me wonder: is it really downloading something, or did it crash?
Additional context
It would be helpful for many people. I hope this feature will be added in an upcoming release.
Thanks for contributing!
We noticed that this looks like a duplicate of https://github.com/atom/apm/issues/148 so you can subscribe there if you'd like.
Because we treat our issues list as the Atom team's backlog, we close duplicates to focus our work and not have to touch the same chunk of code for the same reason multiple times. This is also why we may mark something as duplicate that isn't an exact duplicate but is closely related.
For information on how to use GitHub's search feature to find out if something is a duplicate before filing, see the How Can I Contribute? section of the Atom CONTRIBUTING guide.
| gharchive/issue | 2018-05-22T04:27:28 | 2025-04-01T06:37:58.590215 | {
"authors": [
"bauripalash",
"rsese"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/17383",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
442166552 | Auto Indent Empty Line Above
In Geany, notice that when an indented line is sent to a line below, indentation is added to the newline above so that indented text can quickly be added to that newline:
In Atom, the indentation must be added manually:
My feature request is for the geany-like behavior to be implemented by default in atom.
Thanks for the suggestion! For future issues, please fill out the issue template - the information and format of the templates are super helpful for us when triaging issues.
And as mentioned in the template, the team is currently very unlikely to prioritize feature requests right now - but with Atom's customizability and with community packages, you can often get the functionality you need without requesting changes to Atom itself. In this particular case, I poked around and found this package that looks like it can help:
https://atom.io/packages/atom-cursor-indent
With your example and this package:
Since this functionality is already provided by a package, we'll go ahead and close this issue.
| gharchive/issue | 2019-05-09T10:31:25 | 2025-04-01T06:37:58.593798 | {
"authors": [
"doakey3",
"rsese"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/19288",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
91619174 | Incorrect Arch Linux installation instructions
Hi,
I know I could go about this with a pull request but it's only a single word addition to the Linux.md file so I felt a developer with write permissions to the GitHub repo could just read this and make the change. At the moment the Linux.md file reads this in its Arch Linux dependency installation instructions:
sudo pacman -S gconf base-devel git nodejs libgnome-keyring python2
export PYTHON=/usr/bin/python2 before building Atom.
The amendment I propose is to add npm after nodejs in the first of these lines, in accordance with the official Node.js wiki installation instructions. I have tested this on a 32-bit Manjaro Linux virtual machine (which is as close to Arch as I can effectively work with; Arch is over my head): without npm in this line, script/build later generated errors stating that it could not find npm.
I hope this helps,
Brenton
As this particular problem got fixed with #8101, shouldn't this be closed?
Thanks @Narrat :).
| gharchive/issue | 2015-06-28T17:33:27 | 2025-04-01T06:37:58.597842 | {
"authors": [
"50Wliu",
"Narrat",
"fusion809"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/7511",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
112675988 | Atom Beta isn't Side by Side on Windows
Expected
According to the beta info page Atom Beta should "run side-by-side with Atom stable". To me that reads like I should be able to have both the beta and stable installed (and even running) simultaneously on the same system (like with Chrome's stable & beta releases) and there should be separate links to run each build.
Actual
On Windows (7 x64), when I install the beta (1.1.0-beta1), it replaces the context menu "Open with Atom" shortcut and the start menu shortcuts with the beta so there is no apparent way to run stable without reinstalling.
Reinstalling stable works but it replaces all the shortcuts with links to the stable version so there is no way to run the beta without re-installing.
/cc @maxbrunsfeld @nathansobo
cc @raelyard
Ok, I had to stop work on this for the time being because of some PathLength issues on windows. We need to upgrade our build infrastructure to use npm 3, which does a better job of reducing path nesting. Until that time, I'm reopening this.
With the file associations and shell integration, having the beta executable named something different (atombeta.exe, for example) would also make this work better. Thoughts?
having the beta executable named something different - atombeta.exe for example - would also make this work better.
:+1: Ah, I didn't realize that. We already rename a bunch of stuff based on the channel (stable vs beta) in the build scripts, so this would just be one more.
Now that the shell integration is moving out of the installer and into Atom settings, we should make sure we use productName instead of atom.exe everywhere when we do that work in #5901.
Included in the forthcoming shell integration options.
@damieng what's the process/timeline for getting both installed side by side on windows?
It's checked into master now on both atom/atom and atom/settings-view and should ship in Atom 1.10.
Reopening because I am not able to install Atom beta side by side with Atom stable on Windows 7. Installing beta removes the stable install. See below for a full list of what I did to try and install Atom 1.12.0-beta4 side by side with Atom 1.11.2.
I can test it out on Windows 10 later if required to see if stable and beta side by side works there.
Originally reported in: https://github.com/atom/atom/issues/13016 so not alone seeing this.
Windows 7 - Trying to install stable and beta side by side.
Uninstalled Atom 1.12-beta4 using Programs and Features found in the Control panel.
Removed the .atom folder from %USERPROFILE%.
Removed the atom folder from %localappdata%. It contained update.exe and .dead.
Expected: Atom to be uninstalled. Actual: the Atom pin is still on the taskbar; clicking it gives an error that it might have been moved, renamed or deleted, asking me to remove the pin.
Unpinned Atom from the taskbar by clicking Yes on the dialog.
Installed Atom 1.11.2 using AtomSetup.exe downloaded from atom.io.
Expected and Actual: Atom 1.11.2 starts after install, all settings lost.
Answered yes on the telemetry consent and unchecked the Show On Start option for the welcome guide.
Opened the settings-view using Ctrl+,
Navigated to the System tab and checked all three options.
Pinned Atom to the taskbar.
Closed Atom 1.11.2.
Checked %localappdata\atom it contains the app-1.11.2 folder.
Installed Atom 1.12.0-beta4 using AtomSetup.exe downloaded from atom.io.
Atom 1.12.0-beta4 starts.
Atom does not ask for telemetry consent or show the welcome guide.
Atom 1.11.2 is not in %localappdata%\atom. Only app-1.12.0-beta4.
Atom is still pinned to the taskbar and can open from there but it has the icon from stable.
Open with Atom context menu works, icon from beta and opens Atom 1.12.0-beta4.
Checked the System tab; all three options are checked.
Atom Beta is expected to use the same config and settings as regular Atom right now so I wouldn't expect it to show the welcome guide or telemetry consent.
The real problem right now is that beta and non-beta on Windows share the same setup-id in Squirrel so one is considered an upgrade to the other.
One option for now would be to use the beta zip and unpack that somewhere. That should allow you to run side-by-side. If you want the beta to also use a separate config then you should be able to create an empty .atom folder in the folder above where you unpacked the beta, e.g.
c:\apps\atombeta
c:\apps\.atom
I see exactly the same happening as @Ben3eeE is describing. While installing any version of Atom (release, beta or dev) the previous Atom installation is being removed from %LOCALAPPDATA%\atom\.
Windows 7 EN 64-bits
Just installed Atom 1.18 beta (Windows 10, x64) and it has deleted the stable Atom version (1.17). If the issue is not going to be fixed, it would be good to remove the wrong message on the Atom beta page that the beta can be used side-by-side.
Heya Vlad, no it works in other platforms as far as the side-by-side install. It's just in Windows where this is currently an issue. side-by-side installs work fine in UNIX and UNIX-like environments. They're still working on it. In the meantime, refer to @damieng 's suggestion:
> Atom Beta is expected to use the same config and settings as regular Atom right now so I wouldn't expect it to show the welcome guide or telemetry consent.
> The real problem right now is that beta and non-beta on Windows share the same setup-id in Squirrel so one is considered an upgrade to the other.
> One option for now would be to use the beta zip and unpack that somewhere. That should allow you to run side-by-side. If you want the beta to also use a separate config then you should be able to create an empty .atom folder in the folder above where you unpacked the beta, e.g.
> c:\apps\atombeta
> c:\apps\.atom
Hi all, I've written the first version of a tool designed to solve this problem: https://github.com/atom/avm
Here's how to get started:
npm install -g atom-version-manager
## Install the stable version:
avm switch stable
## Switch to the beta
avm switch beta
The initial run of these commands will take a while as it downloads and installs Atom, but from then on switching between the two will be very fast (i.e. 2-3 seconds or so). Let me know if this helps!
At a minimum, it would be nice to mention something about this on the atom beta website. It's really annoying to discover after the fact that it blew away my stable atom install.
Tons of people are wasting a lot of time on this. As everyone is suggesting, please remove the side-by-side sales pitch from the site, at least on the Windows side. From the looks of this thread it hasn't worked in years. It's causing a lot of confusion and cursing.
| gharchive/issue | 2015-10-21T20:49:29 | 2025-04-01T06:37:58.616702 | {
"authors": [
"Ben3eeE",
"MartyGentillon",
"MethodGrab",
"MorganMarshall",
"benogle",
"calebmeyer",
"damieng",
"jerone",
"maxbrunsfeld",
"mnquintana",
"paulcbetts",
"sonokamome",
"vvs"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/9247",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
163072903 | Indexing Project..0
Whenever I start the Fuzzy Finder, it gives a screen stating: Indexing project..0.
How do I get it to work properly?
https://github.com/atom/fuzzy-finder/issues/205?
+1
Closing as a duplicate of #205 - feel free to subscribe there for updates.
| gharchive/issue | 2016-06-30T03:20:17 | 2025-04-01T06:37:58.619418 | {
"authors": [
"50Wliu",
"Ben3eeE",
"Postem1",
"sunnyvempati"
],
"repo": "atom/fuzzy-finder",
"url": "https://github.com/atom/fuzzy-finder/issues/226",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1315966772 | ESP32: add support to CMake buildsystem used on esp-idf >= 4.x
These changes are made under both the "Apache 2.0" and the "GNU Lesser General
Public License 2.1 or later" license terms (dual license).
SPDX-License-Identifier: Apache-2.0 OR LGPL-2.1-or-later
tested:
[x] network driver
[x] socket driver
| gharchive/pull-request | 2022-07-24T17:40:53 | 2025-04-01T06:37:58.652958 | {
"authors": [
"bettio"
],
"repo": "atomvm/AtomVM",
"url": "https://github.com/atomvm/AtomVM/pull/333",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1790126110 | functional tests occasionally failing
Describe the bug
Seeing messages like this in logs
SEVERE|2023-07-05 19:04:35.767602|LocalSecondary (@alice🛠)|exception in llookup:Could not read value from box. Maybe your box is corrupted.
For example in this PR, test run 1 failed but test run 2 succeeded (though the warnings about the hive box maybe being corrupted are still visible even when the test run succeeds).
I suspect there is a race condition in how and when the hive boxes are cleaned up / recreated in the functional test pack.
Steps to reproduce
run the functional tests
Expected behavior
functional tests should pass or fail consistently
Setting to P0 because (a) worrying messages are worrying and (b) it happens frequently, which makes it a big thief of time.
| gharchive/issue | 2023-07-05T19:11:18 | 2025-04-01T06:37:58.674802 | {
"authors": [
"gkc"
],
"repo": "atsign-foundation/at_client_sdk",
"url": "https://github.com/atsign-foundation/at_client_sdk/issues/1084",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1290780210 | Update TLS connection to optionally output TLS keys to file
Is your feature request related to a problem? Please describe.
Update TLS connection to optionally output TLS keys to file. This allows you to "see" inside the TLS packets using Wireshark and diagnose issues.
Describe the solution you'd like
Open to suggestions on the implementation, but it would be nice just to include a dev low-level library that includes this feature, so testing can be done by including a dev library in pubspec.yaml. This would dump the TLS keys in the directory where the binary is being run.
The additional lines of code required to do this are:
In at_lookup_impl.dart
var secConConnect = SecurityContext();
var keyfile = File('keysfile');
secConConnect.setTrustedCertificates('caroot/rootcacert.pem');
var secureSocket = await SecureSocket.connect(host, int.parse(port), context: secConConnect, keyLog: (line) => keyfile.writeAsStringSync(line, mode: FileMode.append));
And in monitor_client.dart
var secConConnect = SecurityContext();
var keyfile = File('keysfile');
secConConnect.setTrustedCertificates('caroot/rootcacert.pem');
var secureSocket = await SecureSocket.connect(host, int.parse(port), context: secConConnect, keyLog: (line) => keyfile.writeAsStringSync(line, mode: FileMode.append));
replacing the secureSocket connection with no SecurityContent()
It would be nice to abstract SecureSocket.connect so only one change would affect both lines of code, and then that abstraction could perhaps be used in the secondary server code as well.
Describe alternatives you've considered
I did consider pushing all the way through via command-line options or by adding method options, but that I think holds the danger of leaving it in place before going to a prod build. But I'm open to those or other ideas.
Additional context
Screen shot of the resulting Wireshark diagnostics
Flowchart for proposed changes
flowchart TD
A[Start] --> B[CreateSecureSocket]
B --> C[Read preference]
C --> D{decryptPackets?}
D -->|No| E[Create secure socket without security context]
D -->|Yes| F[Create secure socket with security context]
E --> G[End]
F --> G
https://api.flutter.dev/flutter/dart-io/SecureSocket/connect.html
files to modify on client side
at_client/lib/src/manager/monitor.dart
at_lookup/lib/src/at_lookup_impl.dart
wireshark TLS decryption
https://wiki.wireshark.org/TLS#tls-decryption
The above-listed PRs contain the implementation for creating sockets with a security context and the changes necessary to support this new implementation.
What needs to be worked on further: unit tests in at_client_sdk need minor changes to the way they mock the MonitorOutboundConnectionFactory class, as this class uses a new implementation to create secure sockets.
When testing this I only see one TLS connection dumping the keys: the monitor connection. We need to dump keys for all connections. The check for the rootcacerts file also does not error if the file is not there. Plus, if the keys file is not there it does not get opened (it is being opened in append mode)
Ok I have spotted the problem in at_libraries and posted a branch with a 'fix'
@srieteja see what you think
I am not too sure how the monitor connection gets picked up, as I see nowhere in monitor_client.dart where the code uses the TLS-dumping socket connection. Does that happen somewhere else now?
The branch is `tlsdump`. I tested sshnp using
dependency_overrides:
at_lookup:
git:
url: https://github.com/atsign-foundation/at_libraries.git
path: at_lookup
ref: tlsdump
The monitor uses the SecureSocketUtil in 'at_client/lib/src/manager/monitor.dart'. This was a more feasible place to do this as we needed access to the AtClientPreferences. @cconstab
Is my fix OK, @srieteja? If so I will raise a PR
@cconstab I was able to understand the problem but I was unable to understand the fix. Perhaps you could send me your TLS keys file with the fix (so that I could understand the diff) or we could jump on a quick call? Hope I'm not interrupting your weekend :)
Yup it's just a single line change to remove the bool. The rest is just editor noise
Yes. The thing that is bugging me is that even though the false statement was removed, the default value for decryptPackets is false in the SecureSocketConfig class. I'm just trying to understand how removing the false statement affects the functioning.
@cconstab I just debugged it and understood your fix and was able to observe the bug. Also if it's okay with you I would like to push a commit into your branch resolving the case with rootcerts availability check.
That's great thanks
When testing this I only see one TLS connection dumping the keys: the monitor connection. We need to dump keys for all connections. The test for the rootcacerts file also does not error if the file is not there. Plus, if the keys file is not there it does not get opened (it is being opened in append mode).
@cconstab sorry for the delay, I forgot to push the change into the branch. I put in a fix to throw an error when the certs don't exist. Regarding the other thing, I used append mode instead of write, as append does not overwrite data from previous sessions, and append does create a new file when a file does not already exist.
I think this has been completed but waiting for @cconstab to approve the PR
Ok I have tested this an approved the PR ... Thanks folks!
Once PR is merged and published, we can close this ticket
| gharchive/issue | 2022-07-01T01:06:00 | 2025-04-01T06:37:58.687602 | {
"authors": [
"VJag",
"cconstab",
"gkc",
"murali-shris",
"srieteja"
],
"repo": "atsign-foundation/at_libraries",
"url": "https://github.com/atsign-foundation/at_libraries/issues/189",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
261499750 | SimplifyType is too expensive
Now that the dust is starting to settle after changing values to be backed by bytes, SimplifyType is rising to the top of a lot of profiles.
The algorithm is unavoidably costly. I think a good approach here is to try to avoid recomputing a simplified type that we've already computed a bunch (e.g. when creating large collections)
Thinking about this more, I think I can imagine three approaches to mitigating the cost of simplify type:
Find some way to implement it which is less costly in terms of the number of memory allocations
Memoize the work so that we don't keep recomputing the same simplified type for the same (or very similar) input types
Avoid doing it at all if the type doesn't need to be simplified
I think maybe a good first place to start is (3) above. I think there's probably a 60-80% base case here, which is that SimplifyType gets called every time the sequence chunker produces a new node in a prolly tree. What gets simplified is all of the elements of the sequence. It's probably more often than not that
a) Each element is exactly the same type
b) Any given element is already Simplified
I think simply detecting this case and avoiding the call to SimplifyType is a good place to start and will help with the majority of cases.
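To make (2) and (3) concrete, here is a toy sketch (in Python purely for brevity; noms itself is Go, and every name here is hypothetical) of memoizing simplified types and short-circuiting when all elements already share one type:
from functools import lru_cache

@lru_cache(maxsize=None)
def simplify_type(union):
    # approach (2): memoize, so a repeated union costs one cache lookup
    return tuple(sorted(union))

def element_type(elements):
    types = {type(e).__name__ for e in elements}
    if len(types) == 1:
        # approach (3): every element shares an already-simple type,
        # so skip the expensive simplification entirely
        return types.pop()
    return simplify_type(frozenset(types))

print(element_type([1, 2, 3]))      # fast path: 'int'
print(element_type([1, "a", 2.5]))  # slow path: ('float', 'int', 'str')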
| gharchive/issue | 2017-09-29T00:32:23 | 2025-04-01T06:37:58.695148 | {
"authors": [
"rafael-atticlabs"
],
"repo": "attic-labs/noms",
"url": "https://github.com/attic-labs/noms/issues/3747",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
295497691 | Bot logins before joining any team ?
If a bot tries to log in before it joins any team, there will be an error in MattermostAPIv4.load_initial_data
Should mattermost_bot allow this bot to log in?
A possible way to do that :
 def load_initial_data(self):
     self.teams = self.get('/users/me/teams')
+    self.teams_channels_ids = {}
+    if len(self.teams) == 0:
+        return
     self.default_team_id = self.teams[0]['id']
-    self.teams_channels_ids = {}
     for team in self.teams:
         self.teams_channels_ids[team['id']] = []
         # get all channels belonging to each team
         for channel in self.get_channels(team['id']):
             self.teams_channels_ids[team['id']].append(channel['id'])
Or should mattermost_bot throw a defined exception?
Need to check if not having a default_team_id in this scenario would be a problem. Have you had a chance to test this proposal @seLain?
The default_team_id is necessary for MattermostAPI (APIv3) when calling webhooks. In MattermostAPIv4 there is not much usage of it, since the APIv4 webhook is not supported in mattermost_bot yet.
Maybe we should postpone this issue and think about it more globally. Along with the deprecation of APIv3, MattermostAPIv4 does not have to extend MattermostAPI (APIv3) anymore. Further, more enhancements are needed for mattermost_bot to support APIv4. At that time, I believe this issue will be resolved as a consequence.
After all, a bot must be added to at least one team. Otherwise it can do almost nothing. This issue can be easily skipped this way in normal cases. :stuck_out_tongue:
Totally agree. We should update the documentation to specify that the bot user must be added to at least one team prior to actually running the bot.
| gharchive/issue | 2018-02-08T12:41:31 | 2025-04-01T06:37:58.702482 | {
"authors": [
"attzonko",
"seLain"
],
"repo": "attzonko/mattermost_bot",
"url": "https://github.com/attzonko/mattermost_bot/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
185245136 | Remove auto-generated comment :)
I guess the generated docs files are quite modified compared to what sphinx autogenerates, so we can remove this :)
Apparently I end all sentences with smileys right now :)
Right, makes sense!
| gharchive/pull-request | 2016-10-25T21:57:49 | 2025-04-01T06:37:58.776358 | {
"authors": [
"benjaoming",
"eliasdorneles"
],
"repo": "audreyr/cookiecutter-pypackage",
"url": "https://github.com/audreyr/cookiecutter-pypackage/pull/263",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1518212717 | Merge develop to serenity (#9)
Update ci.yml
fix deployment ts (#8)
Update ci.yml
Co-authored-by: kienvc vuchikien269@gmail.com
@doquockhanhan : We'll merge next week
| gharchive/pull-request | 2023-01-04T02:56:04 | 2025-04-01T06:37:58.787722 | {
"authors": [
"hoangthanh212"
],
"repo": "aura-nw/verify-contract",
"url": "https://github.com/aura-nw/verify-contract/pull/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
117822322 | fix(tests): Fixed ReferenceError in two tests
Fixed missing done parameters
thanks :+1:
| gharchive/pull-request | 2015-11-19T14:14:24 | 2025-04-01T06:37:58.797749 | {
"authors": [
"Mordred",
"zewa666"
],
"repo": "aurelia/animator-css",
"url": "https://github.com/aurelia/animator-css/pull/31",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
178300629 | [Bug] When using 2 config files, bundle fail
I using skeleton-esnext 1.0.0
JSPM Version: 0.16.45
bundle.js
var gulp = require('gulp');
var bundler = require('aurelia-bundler');
var bundles = require('../bundles.js');
var config = {
force: true,
baseURL: '.',
configPath: [
'./jspm.config.js',
'./jspm.config-extend.js'
],
injectionConfigPath: './jspm.config.js',
bundles: bundles.bundles
};
gulp.task('bundle', ['build'], function () {
return bundler.bundle(config);
});
gulp.task('unbundle', function () {
return bundler.unbundle(config);
});
when run gulp export or gulp bundle, get the error:
[17:51:03] 'bundle' errored after 864 ms [17:51:03] Error on fetch for npm:jquery@3.1.0.js at file:///Users/xujin/My_Projects/NodeJS/antme-web-client/npm:jquery@3.1.0.js Error: ENOENT: no such file or directory, open '/Users/xujin/My_Projects/NodeJS/antme-web-client/npm:jquery@3.1.0.js' at Error (native)
jspm.config.js
System.config({
defaultJSExtensions: true,
transpiler: false,
paths: {
"*": "dist/*",
"github:*": "jspm_packages/github/*",
"npm:*": "jspm_packages/npm/*"
},
map: {
"aurelia-animator-css": "npm:aurelia-animator-css@1.0.1",
"aurelia-bootstrapper": "npm:aurelia-bootstrapper@1.0.0",
"aurelia-framework": "npm:aurelia-framework@1.0.3",
"aurelia-history-browser": "npm:aurelia-history-browser@1.0.0",
"aurelia-http-client": "npm:aurelia-http-client@1.0.0",
"aurelia-i18n": "npm:aurelia-i18n@1.1.2",
"aurelia-loader-default": "npm:aurelia-loader-default@1.0.0",
"aurelia-logging-console": "npm:aurelia-logging-console@1.0.0",
"aurelia-pal-browser": "npm:aurelia-pal-browser@1.0.0",
"aurelia-polyfills": "npm:aurelia-polyfills@1.1.1",
"aurelia-router": "npm:aurelia-router@1.0.3",
"aurelia-templating-binding": "npm:aurelia-templating-binding@1.0.0",
"aurelia-templating-resources": "npm:aurelia-templating-resources@1.0.0",
"aurelia-templating-router": "npm:aurelia-templating-router@1.0.0",
"bluebird": "npm:bluebird@3.4.6",
"i18next": "npm:i18next@3.4.3",
"i18next-xhr-backend": "npm:i18next-xhr-backend@1.2.0",
"intl": "npm:intl@1.2.5",
"jquery": "npm:jquery@3.1.0",
"lodash": "npm:lodash@4.16.1",
"moment": "npm:moment@2.15.0",
"text": "github:systemjs/plugin-text@0.0.8",
"github:jspm/nodelibs-assert@0.1.0": {
"assert": "npm:assert@1.4.1"
},
"github:jspm/nodelibs-buffer@0.1.0": {
"buffer": "npm:buffer@3.6.0"
},
"github:jspm/nodelibs-process@0.1.2": {
"process": "npm:process@0.11.9"
},
"github:jspm/nodelibs-util@0.1.0": {
"util": "npm:util@0.10.3"
},
"github:jspm/nodelibs-vm@0.1.0": {
"vm-browserify": "npm:vm-browserify@0.0.4"
},
"npm:assert@1.4.1": {
"assert": "github:jspm/nodelibs-assert@0.1.0",
"buffer": "github:jspm/nodelibs-buffer@0.1.0",
"process": "github:jspm/nodelibs-process@0.1.2",
"util": "npm:util@0.10.3"
},
"npm:aurelia-animator-css@1.0.1": {
"aurelia-metadata": "npm:aurelia-metadata@1.0.0",
"aurelia-pal": "npm:aurelia-pal@1.0.0",
"aurelia-templating": "npm:aurelia-templating@1.1.0"
},
"npm:aurelia-binding@1.0.4": {
"aurelia-logging": "npm:aurelia-logging@1.0.0",
"aurelia-metadata": "npm:aurelia-metadata@1.0.0",
"aurelia-pal": "npm:aurelia-pal@1.0.0",
"aurelia-task-queue": "npm:aurelia-task-queue@1.0.0"
},
"npm:aurelia-bootstrapper@1.0.0": {
"aurelia-event-aggregator": "npm:aurelia-event-aggregator@1.0.0",
"aurelia-framework": "npm:aurelia-framework@1.0.3",
"aurelia-history": "npm:aurelia-history@1.0.0",
"aurelia-history-browser": "npm:aurelia-history-browser@1.0.0",
"aurelia-loader-default": "npm:aurelia-loader-default@1.0.0",
"aurelia-logging-console": "npm:aurelia-logging-console@1.0.0",
"aurelia-pal": "npm:aurelia-pal@1.0.0",
"aurelia-pal-browser": "npm:aurelia-pal-browser@1.0.0",
"aurelia-polyfills": "npm:aurelia-polyfills@1.1.1",
"aurelia-router": "npm:aurelia-router@1.0.3",
"aurelia-templating": "npm:aurelia-templating@1.1.0",
"aurelia-templating-binding": "npm:aurelia-templating-binding@1.0.0",
"aurelia-templating-resources": "npm:aurelia-templating-resources@1.0.0",
"aurelia-templating-router": "npm:aurelia-templating-router@1.0.0"
},
"npm:aurelia-dependency-injection@1.0.0": {
"aurelia-metadata": "npm:aurelia-metadata@1.0.0",
"aurelia-pal": "npm:aurelia-pal@1.0.0"
},
"npm:aurelia-event-aggregator@1.0.0": {
"aurelia-logging": "npm:aurelia-logging@1.0.0"
},
"npm:aurelia-framework@1.0.3": {
"aurelia-binding": "npm:aurelia-binding@1.0.4",
"aurelia-dependency-injection": "npm:aurelia-dependency-injection@1.0.0",
"aurelia-loader": "npm:aurelia-loader@1.0.0",
"aurelia-logging": "npm:aurelia-logging@1.0.0",
"aurelia-metadata": "npm:aurelia-metadata@1.0.0",
"aurelia-pal": "npm:aurelia-pal@1.0.0",
"aurelia-path": "npm:aurelia-path@1.0.0",
"aurelia-task-queue": "npm:aurelia-task-queue@1.0.0",
"aurelia-templating": "npm:aurelia-templating@1.1.0"
},
"npm:aurelia-history-browser@1.0.0": {
"aurelia-history": "npm:aurelia-history@1.0.0",
"aurelia-pal": "npm:aurelia-pal@1.0.0"
},
"npm:aurelia-http-client@1.0.0": {
"aurelia-pal": "npm:aurelia-pal@1.0.0",
"aurelia-path": "npm:aurelia-path@1.0.0"
},
"npm:aurelia-i18n@1.1.2": {
"aurelia-binding": "npm:aurelia-binding@1.0.4",
"aurelia-dependency-injection": "npm:aurelia-dependency-injection@1.0.0",
"aurelia-event-aggregator": "npm:aurelia-event-aggregator@1.0.0",
"aurelia-loader": "npm:aurelia-loader@1.0.0",
"aurelia-logging": "npm:aurelia-logging@1.0.0",
"aurelia-pal": "npm:aurelia-pal@1.0.0",
"aurelia-templating": "npm:aurelia-templating@1.1.0",
"aurelia-templating-resources": "npm:aurelia-templating-resources@1.0.0",
"i18next": "npm:i18next@3.4.3",
"intl": "npm:intl@1.2.5"
},
"npm:aurelia-loader-default@1.0.0": {
"aurelia-loader": "npm:aurelia-loader@1.0.0",
"aurelia-metadata": "npm:aurelia-metadata@1.0.0",
"aurelia-pal": "npm:aurelia-pal@1.0.0"
},
"npm:aurelia-loader@1.0.0": {
"aurelia-metadata": "npm:aurelia-metadata@1.0.0",
"aurelia-path": "npm:aurelia-path@1.0.0"
},
"npm:aurelia-logging-console@1.0.0": {
"aurelia-logging": "npm:aurelia-logging@1.0.0"
},
"npm:aurelia-metadata@1.0.0": {
"aurelia-pal": "npm:aurelia-pal@1.0.0"
},
"npm:aurelia-pal-browser@1.0.0": {
"aurelia-pal": "npm:aurelia-pal@1.0.0"
},
"npm:aurelia-polyfills@1.1.1": {
"aurelia-pal": "npm:aurelia-pal@1.0.0"
},
"npm:aurelia-route-recognizer@1.0.0": {
"aurelia-path": "npm:aurelia-path@1.0.0"
},
"npm:aurelia-router@1.0.3": {
"aurelia-dependency-injection": "npm:aurelia-dependency-injection@1.0.0",
"aurelia-event-aggregator": "npm:aurelia-event-aggregator@1.0.0",
"aurelia-history": "npm:aurelia-history@1.0.0",
"aurelia-logging": "npm:aurelia-logging@1.0.0",
"aurelia-path": "npm:aurelia-path@1.0.0",
"aurelia-route-recognizer": "npm:aurelia-route-recognizer@1.0.0"
},
"npm:aurelia-task-queue@1.0.0": {
"aurelia-pal": "npm:aurelia-pal@1.0.0"
},
"npm:aurelia-templating-binding@1.0.0": {
"aurelia-binding": "npm:aurelia-binding@1.0.4",
"aurelia-logging": "npm:aurelia-logging@1.0.0",
"aurelia-templating": "npm:aurelia-templating@1.1.0"
},
"npm:aurelia-templating-resources@1.0.0": {
"aurelia-binding": "npm:aurelia-binding@1.0.4",
"aurelia-dependency-injection": "npm:aurelia-dependency-injection@1.0.0",
"aurelia-loader": "npm:aurelia-loader@1.0.0",
"aurelia-logging": "npm:aurelia-logging@1.0.0",
"aurelia-metadata": "npm:aurelia-metadata@1.0.0",
"aurelia-pal": "npm:aurelia-pal@1.0.0",
"aurelia-path": "npm:aurelia-path@1.0.0",
"aurelia-task-queue": "npm:aurelia-task-queue@1.0.0",
"aurelia-templating": "npm:aurelia-templating@1.1.0"
},
"npm:aurelia-templating-router@1.0.0": {
"aurelia-dependency-injection": "npm:aurelia-dependency-injection@1.0.0",
"aurelia-logging": "npm:aurelia-logging@1.0.0",
"aurelia-metadata": "npm:aurelia-metadata@1.0.0",
"aurelia-pal": "npm:aurelia-pal@1.0.0",
"aurelia-path": "npm:aurelia-path@1.0.0",
"aurelia-router": "npm:aurelia-router@1.0.3",
"aurelia-templating": "npm:aurelia-templating@1.1.0"
},
"npm:aurelia-templating@1.1.0": {
"aurelia-binding": "npm:aurelia-binding@1.0.4",
"aurelia-dependency-injection": "npm:aurelia-dependency-injection@1.0.0",
"aurelia-loader": "npm:aurelia-loader@1.0.0",
"aurelia-logging": "npm:aurelia-logging@1.0.0",
"aurelia-metadata": "npm:aurelia-metadata@1.0.0",
"aurelia-pal": "npm:aurelia-pal@1.0.0",
"aurelia-path": "npm:aurelia-path@1.0.0",
"aurelia-task-queue": "npm:aurelia-task-queue@1.0.0"
},
"npm:bluebird@3.4.6": {
"process": "github:jspm/nodelibs-process@0.1.2"
},
"npm:buffer@3.6.0": {
"base64-js": "npm:base64-js@0.0.8",
"child_process": "github:jspm/nodelibs-child_process@0.1.0",
"fs": "github:jspm/nodelibs-fs@0.1.2",
"ieee754": "npm:ieee754@1.1.6",
"isarray": "npm:isarray@1.0.0",
"process": "github:jspm/nodelibs-process@0.1.2"
},
"npm:i18next@3.4.3": {
"process": "github:jspm/nodelibs-process@0.1.2"
},
"npm:inherits@2.0.1": {
"util": "github:jspm/nodelibs-util@0.1.0"
},
"npm:intl@1.2.5": {
"process": "github:jspm/nodelibs-process@0.1.2"
},
"npm:process@0.11.9": {
"assert": "github:jspm/nodelibs-assert@0.1.0",
"fs": "github:jspm/nodelibs-fs@0.1.2",
"vm": "github:jspm/nodelibs-vm@0.1.0"
},
"npm:util@0.10.3": {
"inherits": "npm:inherits@2.0.1",
"process": "github:jspm/nodelibs-process@0.1.2"
},
"npm:vm-browserify@0.0.4": {
"indexof": "npm:indexof@0.0.1"
}
}
});
jspm.config-extend.js
System.config({
paths: {
"libs/*": "libs/*"
}
});
The problem seems to be caused by jspm.config.js being overridden by jspm.config-extend.js.
@XuJinNet If possible, point me to a GitHub repo that I can use to reproduce this issue. It will be easier for me to debug. Thanks for reporting.
@ahmedshuhel please visit https://github.com/XuJinNet/aurelia-skeleton-esnext
Thank you !!!!
Looking at it.
@XuJinNet Does your app run with gulp serve? I am unable to run it locally. You have config.js in index.html but you don't have any such file in the project root. Before bundling we need to have a working application.
@ahmedshuhel fixed, please pull, thanks
Looking at it.
@XuJinNet I can bundle your app without any problem. I just had to remove an invalid entry from your bundle config. So, now it looks like this:
var gulp = require('gulp');
var bundler = require('aurelia-bundler');
var bundles = require('../bundles.js');
var config = {
force: true,
baseURL: '.',
configPath: './jspm.config.js',
injectionConfigPath: './jspm.config.js',
bundles: bundles.bundles
};
gulp.task('bundle', ['build'], function() {
return bundler.bundle(config);
});
gulp.task('unbundle', function() {
return bundler.unbundle(config);
});
Simply, configPath: './jspm.config.js' should be your configPath as you are only using that in your index.html here. You are not using jspm.config-extended.js in index.html or anywhere in the app, thus you should not use it in the bundler config. Thank you.
@ahmedshuhel Sorry, I need the file jspm.config-extended.js; I am using it in my project. I fixed the repo, please pull, thanks.
Could you please point me to where you have used the jquery that you configured in https://github.com/XuJinNet/aurelia-skeleton-esnext/blob/master/jspm.config-extend.js ? Then again, I suppose having two config files is not the issue here. It's about the correctness of the SystemJS config. Try importing jquery and using it in your code somewhere and see if your application runs/works, and then if the bundling fails please report back.
@ahmedshuhel OK, please pull, thanks.
https://github.com/XuJinNet/aurelia-skeleton-esnext/blob/master/src/welcome.js
Looking at it.
| gharchive/issue | 2016-09-21T09:58:44 | 2025-04-01T06:37:58.812475 | {
"authors": [
"XuJinNet",
"ahmedshuhel"
],
"repo": "aurelia/bundler",
"url": "https://github.com/aurelia/bundler/issues/145",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1656332861 | Call internationalization
Add internationalization configuration as a parameter of both the check-for-updates and update methods.
Implemented by e3daf30e81aac6c506dfb080e207c6509613d663.
| gharchive/issue | 2023-04-05T21:43:29 | 2025-04-01T06:37:58.841570 | {
"authors": [
"aureliano"
],
"repo": "aureliano/caravela",
"url": "https://github.com/aureliano/caravela/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
438390215 | Use erc721-balance to quickly query badges
This PR integrates erc721-balance by @vrde.
Here's a page that explains what it does: https://vrde.github.io/erc721-benchmark/
TL;DR: It queries ERC721 tokens super quickly by taking advantage of batch calls against the JSON-RPC API. This shows especially when you have lots of badges.
Note: Code is very alpha and may contain bugs. Surely @vrde would be interested to iterate with you on this :)
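For readers unfamiliar with the trick, here is a rough illustration of the batching idea (sketched in Python only for illustration; the library itself is JavaScript, and the endpoint and token address are placeholders): JSON-RPC 2.0 lets you POST an array of requests in one round trip.
# Illustrative only: JSON-RPC 2.0 batching sends many calls in one HTTP
# round trip. The endpoint and token address below are placeholders.
import requests

token = "0x0000000000000000000000000000000000000000"
batch = [
    {"jsonrpc": "2.0", "id": i, "method": "eth_call",
     "params": [{"to": token, "data": data}, "latest"]}
    # 0x18160ddd = totalSupply(), 0x06fdde03 = name()
    for i, data in enumerate(["0x18160ddd", "0x06fdde03"])
]
response = requests.post("https://rpc.example.org", json=batch, timeout=10)
print(response.json())  # one result object per batched call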
erc721-balance@0.0.1 contains a bug where accounts with a balance of 0 cannot be retrieved. Fixed it here: https://github.com/vrde/erc721-balance/pull/2/files Let's wait for this to be published on npm before merging this PR.
| gharchive/pull-request | 2019-04-29T16:01:33 | 2025-04-01T06:37:58.887298 | {
"authors": [
"TimDaub"
],
"repo": "austintgriffith/burner-wallet",
"url": "https://github.com/austintgriffith/burner-wallet/pull/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2355734645 | Incompatible types in typeclass instances are not caught by the checker
While looking into why the standard library wasn't building correctly for me, I found that readByte for StandardInput
was trying to call an incompatible mono in the generated C code.
In the source I found that the implementations of the readByte and writeByte functions in the instances for StandardInput and StandardError respectively did not have the correct types in the function definition.
The typeclass definition demands that the instance type parameter matches the stream type
-- standard/src/IO/IO.aui
typeclass ByteInputStream(T: Type) is
generic [R: Region]
method readByte(stream: &![T, R]): Option[Nat8];
end;
But the implementation does not follow this
-- standard/src/IO/Terminal.aum
instance ByteInputStream(StandardInput) is
generic [R: Region]
method readByte(stream: &![StandardOutput, R]): Option[Nat8] is
let stdin: Address[Nat8] := getStdin();
let res: Int32 := fgetc(stdin);
if res = EOF then
return None();
else
return toNat8(res);
end if;
end;
end;
I have created a PR to fix the discrepancy in this code, but surely this should be getting caught by the type checker?
Yes this is absolutely a bug in the type checker. Thanks for reporting and fixing the code!
| gharchive/issue | 2024-06-16T12:32:00 | 2025-04-01T06:37:58.889692 | {
"authors": [
"eudoxia0",
"tim-de"
],
"repo": "austral/austral",
"url": "https://github.com/austral/austral/issues/600",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1973400838 | 🛑 repl.co is down
In b05e2cb, repl.co (https://hello-repl.auteen.repl.co) was down:
HTTP code: 404
Response time: 599 ms
Resolved: repl.co is back up in af7eddb after 45 minutes.
| gharchive/issue | 2023-11-02T02:39:58 | 2025-04-01T06:37:58.892891 | {
"authors": [
"auteen"
],
"repo": "auteen/autoreplit",
"url": "https://github.com/auteen/autoreplit/issues/1053",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1983523625 | ESD-32354: Add disable_self_service_change_password to AD connection options
🔧 Changes
Adds support for disable_self_service_change_password on AD Connection Options.
📚 References
https://github.com/auth0/terraform-provider-auth0/issues/870
🔬 Testing
📝 Checklist
[x] All new/changed/fixed functionality is covered by tests (or N/A)
[x] I have added documentation for all new/changed functionality (or N/A)
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (ba83c16) 94.80% compared to head (09ffb2c) 94.81%.
Additional details and impacted files
@@ Coverage Diff @@
## main #308 +/- ##
=======================================
Coverage 94.80% 94.81%
=======================================
Files 46 46
Lines 8916 8921 +5
=======================================
+ Hits 8453 8458 +5
Misses 361 361
Partials 102 102
| Files | Coverage Δ |
| --- | --- |
| management/connection.go | 72.50% <ø> (ø) |
| management/management.gen.go | 100.00% <100.00%> (ø) |
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
| gharchive/pull-request | 2023-11-08T12:44:07 | 2025-04-01T06:37:58.913995 | {
"authors": [
"codecov-commenter",
"sergiught"
],
"repo": "auth0/go-auth0",
"url": "https://github.com/auth0/go-auth0/pull/308",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2092852228 | Extra +s are being appended to formData's strings
When sending in formData with some structure as { key: "this is a test" }, all of the spaces in the string are replaced with + after being processed.
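For context, this is standard application/x-www-form-urlencoded behaviour rather than something specific to this library: spaces are encoded as + on the wire and must be decoded back. A quick demonstration of the encoding rule (Python here, purely as an illustration):
from urllib.parse import urlencode, parse_qs

encoded = urlencode({"key": "this is a test"})
print(encoded)            # key=this+is+a+test  (spaces become '+')
print(parse_qs(encoded))  # {'key': ['this is a test']}  (decoding restores them)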
Hi! Sorry this took a while! You can now pass convertPluses: true as a config option to form and softForm to enable this behaviour. If it were on by default, it would interfere with actual plus signs. If you want to avoid this altogether you can use enctype="multipart/form-data" in your form, which will encode strings properly.
Thanks!
No prob :) I also added file upload support in 1.5.0
| gharchive/issue | 2024-01-22T00:21:39 | 2025-04-01T06:37:58.928602 | {
"authors": [
"LeoDog896",
"miunau"
],
"repo": "auth70/bodyguard",
"url": "https://github.com/auth70/bodyguard/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
971265351 | fix: resolve namespace cfg snapshot timestamps uniquly per namespace
The changes herein ensure that namespace config snapshots are resolved and propagated uniquely per namespace. namespace config changes can happen at different times between any two different namespaces, and therefore we must propagate the snapshot timestamps independently in the request context.
Fixes #36 .
Codecov Report
Merging #37 (9579d47) into master (e914a98) will increase coverage by 1.87%.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## master #37 +/- ##
==========================================
+ Coverage 76.07% 77.94% +1.87%
==========================================
Files 8 8
Lines 1028 1147 +119
==========================================
+ Hits 782 894 +112
- Misses 182 189 +7
Partials 64 64
| Impacted Files | Coverage Δ |
| --- | --- |
| internal/access-controller.go | 75.87% <100.00%> (+1.51%) :arrow_up: |
| internal/namespace.go | 100.00% <100.00%> (ø) |
| internal/tree.go | 100.00% <0.00%> (ø) |
| internal/hashring.go | 100.00% <0.00%> (ø) |
| internal/client-router.go | 100.00% <0.00%> (ø) |
| internal/healthchecker.go | 100.00% <0.00%> (ø) |
| internal/relation-tuple.go | 100.00% <0.00%> (ø) |
| internal/namespace-manager/postgres/manager.go | 57.21% <0.00%> (+1.47%) :arrow_up: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e914a98...9579d47. Read the comment docs.
Sweet. I'll merge tonight and cut a patch release for it. Thanks!
| gharchive/pull-request | 2021-08-16T00:43:03 | 2025-04-01T06:37:58.955412 | {
"authors": [
"codecov-commenter",
"jon-whit"
],
"repo": "authorizer-tech/access-controller",
"url": "https://github.com/authorizer-tech/access-controller/pull/37",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1580478005 | extraPodAnnotations doesn't apply to the migration pod
I deployed the following in a namespace that has istio injection enabled
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
name: dev
spec:
config:
datastoreEngine: postgres
extraPodAnnotations:
sidecar.istio.io/inject: "false"
secretName: dev-spicedb-config
I was trying to add the annotation with extraPodAnnotations to the generated pods so that the sidecar doesn't get injected
Istio adds a sidecar to every pod created in that namespace. Because the sidecar keeps running, the pod never reaches the Completed stage, preventing the operator from progressing further (and creating the spicedb pods)
Since this migration pod is also generated by the operator, shouldn't the additional annotations apply to the migration pod as well?
I know I can either deploy the spicedb cluster to a namespace that doesn't inject sidecars, or configure the Istio operator to never inject anything into pods that match the labels for spicedb, but it may be simpler, and perhaps more consistent, to apply the same annotations to the migration pod as well.
Another possible solution could be for the operator to only check the status of the migration container within the pod, instead of checking the pod status before moving on
Right now, extraPodLabels only applies to the deployment pods, not the jobs.
There's a PR in progress (https://github.com/authzed/spicedb-operator/pull/135) that attempts to address this generically - that would look something like:
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
name: dev
spec:
config:
datastoreEngine: postgres
secretName: dev-spicedb-config
patches:
- kind: Job
patch:
spec:
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
Thanks for the very quick response (once again, since you did the same earlier this morning). Looking forward to it.
While the PR #135 looks exciting, until the work there has completed, consider merging PR #147.
@ecordell any plans to cut a new release with the #147 merged in?
| gharchive/issue | 2023-02-10T23:22:21 | 2025-04-01T06:37:58.960444 | {
"authors": [
"Bhashit",
"ecordell",
"thomasklein94"
],
"repo": "authzed/spicedb-operator",
"url": "https://github.com/authzed/spicedb-operator/issues/146",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1390470616 | SpiceDb crashing unexpectedly
I came across a weird behaviour for SpiceDB - it crashed with an exit code 2.
Here is the tail of the log:
{"level":"error","module":"pgx","args":[],"err":"ERROR: relation \"relation_tuple_transaction\" does not exist (SQLSTATE 42P01)","pid":7266,"sql":"\n\tSELECT COALESCE(\n\t\t(SELECT MIN(id) FROM relation_tuple_transaction WHERE timestamp >= TO_TIMESTAMP(FLOOR(EXTRACT(EPOCH FROM NOW() AT TIME ZONE 'utc') * 1000000000 / 5000000000) * 5000000000 / 1000000000) AT TIME ZONE 'utc'),\n\t\t(SELECT MAX(id) FROM relation_tuple_transaction)\n\t),\n\t5000000000 - CAST(EXTRACT(EPOCH FROM NOW() AT TIME ZONE 'utc') * 1000000000 as bigint) % 5000000000;","time":"2022-09-28T11:52:42Z","message":"Query"}
panic: interface conversion: interface {} is nil, not decimal.Decimal
goroutine 786 [running]:
github.com/authzed/spicedb/internal/datastore/common/revisions.(*CachedOptimizedRevisions).OptimizedRevision(0xc000b087c0, {0x2117760?, 0xc000e74280?})
/home/runner/work/spicedb/spicedb/internal/datastore/common/revisions/optimized.go:72 +0x4d3
github.com/authzed/spicedb/internal/datastore/proxy.hedgingProxy.OptimizedRevision.func1({0x2117760?, 0xc000e74280?}, 0x0?)
/home/runner/work/spicedb/spicedb/internal/datastore/proxy/hedging.go:179 +0x65
created by github.com/authzed/spicedb/internal/datastore/proxy.newHedger.func1
/home/runner/work/spicedb/spicedb/internal/datastore/proxy/hedging.go:78 +0x27b
paw-marketplace-spicedb-1 exited with code 2
It is being run inside a docker-compose like so:
pg-spicedb:
image: postgres:14-alpine
healthcheck:
test: [ 'CMD-SHELL', 'pg_isready -U postgres -d spicedb' ]
interval: 10s
timeout: 5s
retries: 5
labels:
- 'traefik.enable=false'
expose:
- '5432'
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: spicedb
volumes:
- pg-spicedb-data:/var/lib/postgresql/data
spicedb:
image: quay.io/authzed/spicedb:v1.9.0
depends_on:
- pg-spicedb
labels:
- 'traefik.enable=false'
ports:
- 50051:50051
volumes:
- ./libs/authz/schema.local.yml:/schema.yml
environment:
- SPICEDB_GRPC_PRESHARED_KEY=localadmin
- SPICEDB_GRPC_ENABLED=1
- SPICEDB_GRPC_ADDR=:50051
- SPICEDB_GRPC_NO_TLS=1
- SPICEDB_METRICS_ENABLED=0
- SPICEDB_DASHBOARD_ENABLED=0
- SPICEDB_DATASTORE_ENGINE=postgres
- SPICEDB_DATASTORE_CONN_URI=postgresql://postgres:postgres@pg-spicedb:5432/spicedb?sslmode=disable
- SPICEDB_DATASTORE_BOOTSTRAP_FILES=/schema.yml
- SPICEDB_DATASTORE_BOOTSTRAP_OVERWRITE=1
- SPICEDB_TELEMETRY_ENDPOINT=
command: serve
👋🏻 It seems like you are running v1.9.0. In that version, if an error happened while checking for optimized revisions, it was not checked and the subsequent type assertion would panic:
https://github.com/authzed/spicedb/blob/ae4552ed89f0561f71893da2feeb7feb1e767e71/internal/datastore/common/revisions/optimized.go#L71-L72
You can see that this was fixed in https://github.com/authzed/spicedb/pull/740, which is part of release v1.12.0.
Feel free to reopen if that does not fix your problem! 🙇🏻
| gharchive/issue | 2022-09-29T08:29:16 | 2025-04-01T06:37:58.964665 | {
"authors": [
"jakub-lesniak-mck",
"vroldanbet"
],
"repo": "authzed/spicedb",
"url": "https://github.com/authzed/spicedb/issues/850",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1653071654 | Surround URL replacements option with quotes
Internal ticket: https://app.clickup.com/t/864ea29wc
https://github.com/autifyhq/autify-cli/pull/389
To support --url-replacements option with space delimiter we need to surround the arg with quotes.
I confirmed it works well on autify-cli with this commit https://github.com/autifyhq/autify-cli/pull/389/commits/84b25fd9d2fe19e536e948b4abe2948cbfea8629
The escaping looks good. Then, it reminds us that the URLs containing , will be broken (also CircleCI integrations as well).
Then, it reminds us that the URLs containing , will be broken (also CircleCI integrations as well).
True, we need another delimiter to pass multiple replacement options via CI integrations.
I will raise an internal ticket but will not deal with it here, since it'll also be a breaking change.
| gharchive/pull-request | 2023-04-04T02:46:31 | 2025-04-01T06:37:58.967902 | {
"authors": [
"mtsmfm",
"riywo"
],
"repo": "autifyhq/actions-web-test-run",
"url": "https://github.com/autifyhq/actions-web-test-run/pull/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2335016551 | Adsk Contrib - Add cmake option OCIO_HAS_BUILTIN_YAML_CONFIGS
Adding a CMake option to remove the built-in YAML-based configs (CGConfig and StudioConfig). When this option is turned off, the tests that rely on the built-in configs will also be removed.
Yes, when the YAML switch is added this will be controlled by it too. These PRs are for rolling in all the previous work in separable pieces.
| gharchive/pull-request | 2024-06-05T06:29:32 | 2025-04-01T06:37:58.986849 | {
"authors": [
"cozdas"
],
"repo": "autodesk-forks/OpenColorIO",
"url": "https://github.com/autodesk-forks/OpenColorIO/pull/4",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1579444107 | The file extension loaded into the viewer is not supported
I have a 2022 Revit model (.rvt).
I want to know whether the problem comes from the version of the software or from the formats that are supported by the app.
I used the same Revit file as in the README.
The file extension loaded into the viewer is not supported
@petrbroz
Thank you.
Solved.
https://gist.github.com/salahelfarissi/784796c339ea39ec917f919db6f203fc
| gharchive/issue | 2023-02-10T10:38:47 | 2025-04-01T06:37:58.988783 | {
"authors": [
"salahelfarissi"
],
"repo": "autodesk-platform-services/aps-iot-extensions-demo",
"url": "https://github.com/autodesk-platform-services/aps-iot-extensions-demo/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
324601167 | prune vendor using pruning rules
Before prune:
$ du -sh vendor/
121M vendor/
After prune:
$ du -sh vendor/
41M vendor/
Thanks @jmrodri You're going to have to show me how to do this :)
| gharchive/pull-request | 2018-05-19T04:53:23 | 2025-04-01T06:37:59.012349 | {
"authors": [
"dymurray",
"jmrodri"
],
"repo": "automationbroker/sbcli",
"url": "https://github.com/automationbroker/sbcli/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1956589271 | Doitpay Go SDK
Implementation:
docs/ is still WIP
Crazy, 16K 🤣
In the GitHub Action, add a job for running the unit tests
| gharchive/pull-request | 2023-10-23T08:05:52 | 2025-04-01T06:37:59.016692 | {
"authors": [
"kervinch",
"reza-putra"
],
"repo": "automotechnologies/doitpay-go",
"url": "https://github.com/automotechnologies/doitpay-go/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
641490562 | How could I disable verbose output
chatty(false) is a no-op now. Is there a way to disable console output?
Thanks for asking. I have made an effort with recent releases to reduce console output by default. Do you find this is still too verbose?
Yes, we have a lot of modules (around 300) and a lot of plugins, and they all want to print something
Now your plugin makes a lot of noise in the logs
It makes the logs hard to read, and I prefer to look for your plugin's advice only in files
Thanks for the response. Would you like the ability to suppress all output from this plugin?
It would be enough for me.
I'm not sure I want to make this an extension method (like chatty), but I'm also not totally opposed to it. What about using a system property to disable it? You could do it via the command line or in gradle.properties.
For me, it would be enough. Feel free to make any design decision you think fits for your vision
Note to self, I think I only need to disable logging in the AdviceSubprojectAggregationTask class. This is one of only two places where logger.quiet is used. The other is FailOrWarnTask, which runs just once per invocation of buildHealth, compared to once per subproject for the Advice... task.
I have resolved this by adding a system property the plugin will respond to. If you add
systemProp.dependency.analysis.silent=true
to gradle.properties, then logging will be greatly reduced. You may also use -Ddependency.analysis.silent=true on the command line.
This has not yet been published, but is available as a snapshot if you want to test.
| gharchive/issue | 2020-06-18T19:18:53 | 2025-04-01T06:37:59.021312 | {
"authors": [
"autonomousapps",
"sboishtyan"
],
"repo": "autonomousapps/dependency-analysis-android-gradle-plugin",
"url": "https://github.com/autonomousapps/dependency-analysis-android-gradle-plugin/issues/202",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2578844536 | ignoreKtx(true) is not working
Plugin version
2.1.4
Gradle version
8.10.2
JDK version
23
(Optional) Kotlin and Kotlin Gradle Plugin (KGP) version
2.0.20
(Optional) Android Gradle Plugin (AGP) version
8.6.2
(Optional) reason output for bugs relating to incorrect advice
./gradlew sha:vide:reason --id androidx.lifecycle:lifecycle-runtime:2.8.6
:shared:video-player
\--- io.coil-kt:coil-base:2.7.0
\--- androidx.lifecycle:lifecycle-runtime:2.8.6
Source: developDebug, main
--------------------------
(no usages)
./gradlew sha:vide:reason --id libs.androidx.lifecycle.runtime
Shortest path from :shared:video-player to androidx.lifecycle:lifecycle-runtime-ktx:2.8.6 (libs.androidx.lifecycle.runtime) for nowsecureReleaseUnitTestRuntimeClasspath:
:shared:video-player
\--- androidx.lifecycle:lifecycle-runtime-ktx:2.8.6
Source: developDebug, main
--------------------------
(no usages)
Describe the bug
The ktx dependency is proposed to be removed and the normal dependency is proposed to be added even though we have
dependencyAnalysis {
structure {
ignoreKtx(true) // default is false
}
}
In the root folder.
Expected behavior
ignoreKtx is respected
Additional context
The plugin is applied in the root folder and then applied in every module. The ignoreKtx option is applied in the root file.
Thanks for the issue. Do you have a reproducer?
Sorry for the delay, let me try in small project.
| gharchive/issue | 2024-10-10T13:23:07 | 2025-04-01T06:37:59.026378 | {
"authors": [
"autonomousapps",
"emartynov"
],
"repo": "autonomousapps/dependency-analysis-gradle-plugin",
"url": "https://github.com/autonomousapps/dependency-analysis-gradle-plugin/issues/1283",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1229909868 | Add definition of "Script Plugin"
https://kotlinlang.slack.com/archives/C19FD9681/p1652110468534199?thread_ts=1652107016.526049&cid=C19FD9681
Oops, @martinbonnin was wrong, it is there. My fault for not double-checking though 😂
https://github.com/autonomousapps/gradle-glossary#script-plugin
CLOSING
| gharchive/issue | 2022-05-09T15:36:11 | 2025-04-01T06:37:59.028189 | {
"authors": [
"handstandsam"
],
"repo": "autonomousapps/gradle-glossary",
"url": "https://github.com/autonomousapps/gradle-glossary/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
792446713 | Compiling for Windows?
Hello,
I made a trimmed-down version, and building for Linux is easy. I'm having issues with building on Windows. Can you post directions online?
Thank you!
I figured it out
| gharchive/issue | 2021-01-23T05:34:27 | 2025-04-01T06:37:59.029448 | {
"authors": [
"Jah-On"
],
"repo": "autopilot-rs/autopy",
"url": "https://github.com/autopilot-rs/autopy/issues/67",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1157837646 | chore: sync issue and PR templates
Sync files that will be added in https://github.com/autowarefoundation/autoware/pull/56.
pre-commit.ci run
@xmfcx cc @mitsudome-r I'll explain about this CI here.
If we change the settings like this,
and run the workflow sync-files,
this kind of PR is created. It enables us to sync files between repositories easily.
https://github.com/autowarefoundation/autoware.core/pull/12
This time I've run it manually, the workflow usually runs automatically every day.
This time I've run it manually, the workflow usually runs automatically every day.
Why not run it with every commit?
Every commit in which repository? :thinking:
Anyway, we don't have to run this workflow so frequently. I believe daily execution is enough.
If necessary, you can run it anytime you like using workflow_dispatch. (What I've used this time.)
Every commit in which repository? 🤔 Anyway, we don't have to run this workflow so frequently. I believe daily execution is enough. If necessary, you can run it anytime you like using workflow_dispatch. (What I've used this time.)
Ah, this repository doesn't have the ability to get event notifications from https://github.com/autowarefoundation/autoware-github-actions/tree/main/sync-files,
so we cannot trigger this when changes to that repo occur, right?
But I think it's a bit complex for this use case.
| gharchive/pull-request | 2022-03-03T00:33:57 | 2025-04-01T06:37:59.087415 | {
"authors": [
"kenji-miyake",
"xmfcx"
],
"repo": "autowarefoundation/autoware.core",
"url": "https://github.com/autowarefoundation/autoware.core/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1660395707 | feat(obstacle_cruise_planner): implement slow down planner
Description
Implemented a slow-down planner, inserting slow-down points into the trajectory where it passes close to dynamic/static obstacles.
Related links
launcher PR: https://github.com/autowarefoundation/autoware_launch/pull/288
Tests performed
Planning simulator works well.
TODO
[x] scenario sim: https://evaluation.tier4.jp/evaluation/reports/50ce7861-d8b3-5ef9-87b1-f68421f860a8?project_id=prd_jt
Notes for reviewers
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
[x] I've confirmed the contribution guidelines.
[x] The PR follows the pull request guidelines.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
[ ] The PR follows the pull request guidelines.
[ ] The PR has been properly tested.
[ ] The PR has been reviewed by the code owners.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
[ ] There are no open discussions or they are tracked via tickets.
[ ] The PR is ready for merge.
After all checkboxes are checked, anyone who has write access can merge the PR.
slow down virtual wall
| gharchive/pull-request | 2023-04-10T07:52:41 | 2025-04-01T06:37:59.094674 | {
"authors": [
"takayuki5168"
],
"repo": "autowarefoundation/autoware.universe",
"url": "https://github.com/autowarefoundation/autoware.universe/pull/3339",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1873161267 | fix(goal_planner): fix goal search for narrow shoulder lane
Description
Fix goal search for narrow shoulder lanes, considering patterns where the bound of the vehicle at the pull-over lane's center line is outside the bound of the lanelet.
before (the is-in-lane check is skipped)
after
Related links
Tests performed
psim
Notes for reviewers
Interface changes
Effects on system behavior
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
[x] I've confirmed the contribution guidelines.
[x] The PR follows the pull request guidelines.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
[ ] The PR follows the pull request guidelines.
[ ] The PR has been properly tested.
[ ] The PR has been reviewed by the code owners.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
[ ] There are no open discussions or they are tracked via tickets.
[ ] The PR is ready for merge.
After all checkboxes are checked, anyone who has write access can merge the PR.
@kosuke55
could you give me time to check this PR?
If it's urgent I will take a look briefly
@kyoichi-sugahara
OK, thanks!!
| gharchive/pull-request | 2023-08-30T08:11:52 | 2025-04-01T06:37:59.103163 | {
"authors": [
"kosuke55",
"kyoichi-sugahara"
],
"repo": "autowarefoundation/autoware.universe",
"url": "https://github.com/autowarefoundation/autoware.universe/pull/4816",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2580605880 | fix(docker): install CUDA development drivers in development containers
Description
Resolved https://github.com/autowarefoundation/autoware/issues/5219
The cause of #5219 is that, due to the changes in #5159, only the CUDA runtime drivers are now being installed in both the development containers and the runtime containers, though the development containers require the CUDA development drivers to be installed.
https://github.com/autowarefoundation/autoware/blob/main/ansible/roles/cuda/tasks/main.yaml#L28-L50
This PR installs CUDA development drivers on development containers.
cc @marioney
Tests performed
https://github.com/youtalk/autoware/actions/runs/11288251647
https://github.com/youtalk/autoware/actions/runs/11288252539
Effects on system behavior
Not applicable.
Interface changes
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
[x] I've confirmed the contribution guidelines.
[x] The PR follows the pull request guidelines.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
[ ] The PR follows the pull request guidelines.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
[ ] There are no open discussions or they are tracked via tickets.
After all checkboxes are checked, anyone who has write access can merge the PR.
I hope it was a self-hosted runner problem. Please let me merge so it runs on a GitHub-hosted runner.
| gharchive/pull-request | 2024-10-11T07:03:20 | 2025-04-01T06:37:59.109799 | {
"authors": [
"youtalk"
],
"repo": "autowarefoundation/autoware",
"url": "https://github.com/autowarefoundation/autoware/pull/5332",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1565418732 | feat(autoware_adapi_v1_msgs): add vehicle status msgs
Signed-off-by: tkhmy tkh.my.p@gmail.com
Description
Create msgs for vehicle status
Related links
https://github.com/autowarefoundation/autoware/issues/3232
https://github.com/autowarefoundation/autoware-documentation/pull/312
Tests performed
Notes for reviewers
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
[ ] I've confirmed the contribution guidelines.
[ ] The PR follows the pull request guidelines.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
[ ] The PR follows the pull request guidelines.
[ ] The PR has been properly tested.
[ ] The PR has been reviewed by the code owners.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
[ ] There are no open discussions or they are tracked via tickets.
[ ] The PR is ready for merge.
After all checkboxes are checked, anyone who has write access can merge the PR.
@isamu-takagi I think we will need to add the maximum velocity. Do you think it is better to put it in this place?
@tkhmy If it means the limit value of the hardware, I think that it can be provided as vehicle information. If it is a config value, it seems better to consider making it a planning API, including a setting service.
@tkhmy If it means the limit value of hardware, I think that it can be provided as vehicle information. If it is a config value, it seems better to consider making it as a planning API including setting service.
@isamu-takagi it should be the velocity limit. Ya, I think we should put it in the planning API instead of the vehicle API
@mitsudome-r @isamu-takagi @yukkysaito @kenji-miyake
Hi, I created the message for visualization the vehicle status.
Can you help to review it?
Thank you!
@kenji-miyake @yukkysaito
I think it's okay except for the typo (int8). Do you have any other comments?
@isamu-takagi I think @mitsudome-r needs to get agreements with other AWF members (at least @xmfcx ).
@mitsudome-r @xmfcx Could you check this PR?
| gharchive/pull-request | 2023-02-01T04:37:43 | 2025-04-01T06:37:59.119131 | {
"authors": [
"isamu-takagi",
"kenji-miyake",
"tkhmy"
],
"repo": "autowarefoundation/autoware_adapi_msgs",
"url": "https://github.com/autowarefoundation/autoware_adapi_msgs/pull/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
673842927 | Unifying the Reporting API: without RaceBuffer for now
Closes #179
Based on #185 , but replaces the RaceBuffer with a basic ring buffer to get us by until the RaceBuffer work settles.
:+1:
| gharchive/pull-request | 2020-08-05T20:58:11 | 2025-04-01T06:37:59.120347 | {
"authors": [
"ZackPierce",
"jonlamb-gh"
],
"repo": "auxoncorp/modality-probe",
"url": "https://github.com/auxoncorp/modality-probe/pull/188",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
413595813 | Add ReSwitched Silver™
one real addition to one meme addition is a good ratio right
Can you make it an embed?
thanks I hate it
| gharchive/pull-request | 2019-02-22T21:59:33 | 2025-04-01T06:37:59.158127 | {
"authors": [
"ThatNerdyPikachu",
"aveao",
"leo60228"
],
"repo": "aveao/robocop-ng",
"url": "https://github.com/aveao/robocop-ng/pull/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2229047515 | Improve clarity of 1. Your budget and expectations screen
[ ] It is unclear to the players what values are constant every round: The facilitator can explain that aspect by adding an * to the applicable values and a note below.
I do not think it is wise to hard-code this. Future versions of the game could have changes in these values as a result of new measures (e.g., increased living costs). Therefore, I would not like to change this. Also, the (*) gives extra clutter on the screen.
| gharchive/issue | 2024-04-06T03:12:32 | 2025-04-01T06:37:59.195566 | {
"authors": [
"averbraeck",
"vjcortesa"
],
"repo": "averbraeck/housinggame-player",
"url": "https://github.com/averbraeck/housinggame-player/issues/51",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
888284501 | Auto save of new asset not working
When auto-save tries to save an asset, it "appears" to save because the save button is disabled, but the asset is not really saved anywhere. Furthermore, Save As is not enabled to save it somewhere else.
This ended up being a problem with the data model which has now been resolved. Save As will be fixed in a different issue.
| gharchive/issue | 2021-05-11T19:15:15 | 2025-04-01T06:37:59.196645 | {
"authors": [
"mvsoder"
],
"repo": "avereon/xenon",
"url": "https://github.com/avereon/xenon/issues/202",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
433933023 | Load/Save Employee PINs to file
Done When: the pins for every employee can be loaded and saved to file
30 min: planning and discussion with group about what is needed for this and how to approach it. I started coding it but realized I need to merge unfinished work to really get going
1.5 hr: did work on saving to file
1.5 hrs: Fixing issues and getting I/O to work
| gharchive/issue | 2019-04-16T18:51:04 | 2025-04-01T06:37:59.198045 | {
"authors": [
"JohnHunter809"
],
"repo": "averma1/RockstarRestaurant",
"url": "https://github.com/averma1/RockstarRestaurant/issues/89",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1765383713 | Adding Wikipedia Scrapper
Aim
What is the objective of the Script
Web scraping is basically a technique or process in which large amounts of data from a huge number of websites are passed through web-scraping software coded in a programming language, and as a result, structured data is extracted
Details
What features will your script have?
Web scraping is an automatic process of extracting information from the web.
It allows scraping based on JavaScript and Python frameworks.
It is compatible with almost all websites and retrieves the necessary information (see the sketch below)
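A minimal sketch of the scraping flow described above (assuming the requests and beautifulsoup4 packages are installed; the page and selectors are only examples, not part of the final script):
# Fetch a Wikipedia page and extract structured bits of it.
import requests
from bs4 import BeautifulSoup

url = "https://en.wikipedia.org/wiki/Web_scraping"
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

title = soup.find("h1").get_text(strip=True)  # page title
paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")[:3]]

print(title)
for p in paragraphs:
    print("-", p[:80])  # first 80 chars of each of the first paragraphs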
@avinashkranjan and @1e9abhi1e10 plz assign this issue to me
Go ahead @Shreya111111
@1e9abhi1e10 @Yashbhadiyadra Plz review the PR https://github.com/avinashkranjan/Amazing-Python-Scripts/pull/1867 fixes https://github.com/avinashkranjan/Amazing-Python-Scripts/issues/1849 ..
| gharchive/issue | 2023-06-20T13:11:24 | 2025-04-01T06:37:59.205674 | {
"authors": [
"Shreya111111",
"Yashbhadiyadra"
],
"repo": "avinashkranjan/Amazing-Python-Scripts",
"url": "https://github.com/avinashkranjan/Amazing-Python-Scripts/issues/1849",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1747189336 | Vulnerability Assessment and Scanning Script
Aim
The objective of Vulnerability Assessment and Scanning is to identify and assess vulnerabilities within a system, network, or application. This process involves using various tools and techniques to systematically identify weaknesses that could be exploited by attackers.
The main objective is to discover potential vulnerabilities within a system, network, or application. This includes identifying security weaknesses in configurations, software, services, or infrastructure components that could be exploited by malicious individuals.
Once vulnerabilities are identified, they need to be evaluated and prioritized based on their severity and potential impact on the system's security. This allows security teams to focus their efforts on addressing the most critical vulnerabilities first.
By conducting vulnerability assessments and scanning, organizations can gain insights into their security weaknesses and take appropriate measures to strengthen their overall security posture, reduce the risk of exploitation, and protect their systems and data from potential attacks.
Details
Vulnerability scanning script using the popular open-source tool Nmap.
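As a rough illustration of the idea rather than the final script, Nmap can be driven from Python with subprocess and its XML output parsed. The flags shown are standard (-sV for service/version detection, -oX - for XML on stdout), and the scanned host below is Nmap's own designated test target:

```python
import subprocess
import xml.etree.ElementTree as ET

def scan_host(target: str) -> list[dict]:
    """Run an Nmap service scan and return the open ports with service info."""
    # -sV probes open ports for service/version info; -oX - writes XML to stdout.
    result = subprocess.run(
        ["nmap", "-sV", "-oX", "-", target],
        capture_output=True, text=True, check=True,
    )
    root = ET.fromstring(result.stdout)
    findings = []
    for port in root.iter("port"):
        state = port.find("state")
        service = port.find("service")
        if state is not None and state.get("state") == "open":
            findings.append({
                "port": port.get("portid"),
                "protocol": port.get("protocol"),
                "service": service.get("name") if service is not None else "unknown",
                "version": service.get("version", "") if service is not None else "",
            })
    return findings

if __name__ == "__main__":
    # scanme.nmap.org is explicitly provided by the Nmap project for test scans.
    for finding in scan_host("scanme.nmap.org"):
        print(finding)
```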
Do I want to work on this:
[x] Yes
[ ] No
Please assign me this issue under gssoc 23 @avinashkranjan
Go Ahead @Abhinavcode13
| gharchive/issue | 2023-06-08T06:43:29 | 2025-04-01T06:37:59.208857 | {
"authors": [
"Abhinavcode13",
"avinashkranjan"
],
"repo": "avinashkranjan/Pentesting-and-Hacking-Scripts",
"url": "https://github.com/avinashkranjan/Pentesting-and-Hacking-Scripts/issues/231",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1638137364 | JSON-API: No Authorization with valid token sent
Bug report, debug log and your config file (FULL LOGS ARE MANDATORY)
Steps to reproduce
API Authentication: on
Internet API Access: off
Local Admin API Authentication: on
Local API Authentication: off
curl http://localhost:8090/json-rpc -H 'Content-Type: application/json' -H 'Authorization : token [valid token]' -d '{"command":"config","subcommand":"getconfig"}'
What is expected?
output of current config
What is actually happening?
result of curl request:
{
"command": "config",
"error": "No Authorization",
"success": false,
"tan": 0
}
When Local Admin API Authentication is disabled, request will succeed.
Maybe my curl request or my understanding of the security system is wrong, but please have a look at this:
JsonAPI::handleConfigCommand() checks for _adminAuthorized
void JsonAPI::handleConfigCommand(const QJsonObject& message, const QString& command, int tan)
{
...
else if (subcommand == "getconfig")
{
if (_adminAuthorized)
sendSuccessDataReply(QJsonDocument(_hyperhdr->getQJsonConfig()), full_command, tan);
else
sendErrorReply("No Authorization", command, tan);
}
API::isTokenAuthorized() is not setting _adminAuthorized but _authorized:
bool API::isTokenAuthorized(const QString& token)
{
(_authManager->thread() != this->thread())
? QMetaObject::invokeMethod(_authManager, "isTokenAuthorized", Qt::BlockingQueuedConnection, Q_RETURN_ARG(bool, _authorized), Q_ARG(QString, token))
: _authorized = _authManager->isTokenAuthorized(token);
return _authorized;
}
System
HyperHDR Server:
Build: master (GitHub-bc24df7/a9a00f9-1678986833)
Build time: Mar 23 2023 09:35:26
Git Remote: https://github.com/awawa-dev/HyperHDR.git
Version: 20.0.0.0beta0
UI Lang: en (BrowserLang: de)
UI Access: default
Avail Capt: Linux (V4L2)
Database: read/write
HyperHDR Server OS:
Distribution: Raspbian GNU/Linux 11 (bullseye)
Architecture: arm
CPU Model: ARMv6-compatible processor rev 7 (v6l)
CPU Type: Raspberry Pi Zero W Rev 1.1
CPU Revision: 9000c1
CPU Hardware: BCM2835
Kernel: linux (6.1.19+ (WS: 32))
Qt Version: 5.15.2
Browser: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:109.0) Gecko/20100101 Firefox/111.0
When Local Admin API Authentication is disabled, request will succeed.
When enabled, you first need to ask the user (a dialog will appear on the HyperHDR page if it is open) to authorize the token before using it (authorize>requestToken). I still can't remember whether it can be done with http://ip:8090/json-rpc or the JSON port (default 19444).
Hi,
the token was created directly in the Web UI and I can see it in Token Management. There is a requestToken subcommand, and according to the docs a popup should appear in the Web UI to accept/reject the token request. But there seems to be no difference in token creation.
Hyperion.ng has the same problem; there is an old (05/2021) issue about this topic, but I don't expect that to get fixed. This is one of the reasons I am trying to switch to HyperHDR, because I hope you are more motivated and/or skilled and have a better coding style.
A request to
http://ip:8090/json-rpc?request={%22command%22:%22authorize%22,%22subcommand%22:%22requestToken%22,%22comment%22:%22testtokentest%22}
results in:
{
"command": "authorize",
"error": "Command not implemented",
"success": false,
"tan": 0
}
BTW: All references to _authorized are in API.cpp, and I couldn't find any code checking _authorized; it is only ever set. This may be caused by the coding style of the hyperion.ng developers; maybe they removed it, I don't know.
A simple fix for the issue would be to set _adminAuthorized = _authorized; at the end of API::isTokenAuthorized(), but I don't know whether this would break something else in the security system.
wbr
_adminAuthorized is different from _authorized and requires user interaction: just creating a token is not enough. If you don't want the user to interfere, the local admin option should be disabled (but it's enabled by default and the user has to disable it manually). Even if I redesign it to work over HTTPS POST (HTTP & GET requests are too risky), user interaction will still be required.
A request to
....
results in:
Yes, I checked it: it's disabled for HTTP/HTTPS requests, so only the JSON API RPC port can be used.
Some more tests:
http://192.168.32.107:8090/json-rpc?request={"command":"config","subcommand":"getconfig"}
{
"command": "config",
"error": "No Authorization",
"success": false,
"tan": 0
}
http://192.168.32.107:8090/json-rpc?request={"command":"authorize","subcommand":"login","token":"911faca7-0d11-4e06-9424-e46c6c6784b0"}
{
"command": "authorize",
"error": "Command not implemented",
"success": false,
"tan": 0
}
curl --request POST http://localhost:8090/json-rpc -H 'Content-Type: application/json' --data-raw '{"command":"config","subcommand":"getconfig"}'
{
"command": "config",
"error": "No Authorization",
"success": false,
"tan": 0
}
curl --request POST http://localhost:8090/json-rpc -H 'Content-Type: application/json' --data-raw '{"command":"authorize","subcommand":"login","token":"911faca7-0d11-4e06-9424-e46c6c6784b0"}'
{
"command": "authorize",
"error": "Command not implemented",
"success": false,
"tan": 0
}
curl --request POST http://localhost:8090/json-rpc -H 'Content-Type: application/json' -H 'Authorization : token 911faca7-0d11-4e06-9424-e46c6c6784b0' --data-raw '{"command":"config","subcommand":"getconfig"}'
{
"command": "config",
"error": "No Authorization",
"success": false,
"tan": 0
}
echo '{"command":"config","subcommand":"getconfig"}' | nc localhost 19444
{"command":"config","error":"No Authorization","success":false,"tan":0}
waiting for input... need to CTRL-C
echo '{"command":"authorize","subcommand":"login","token":"911faca7-0d11-4e06-9424-e46c6c6784b0"}' | nc localhost 19444
{"command":"authorize-login","success":true,"tan":0}
waiting for input... need to CTRL-C
Looks like the JSON API can only be used via raw access to port 19444, but there is no way to use it in shell scripts (I would like to switch the V4L device input from a shell script triggered by lirc irexec: executing getconfig, using jq to change the value, and executing setconfig to save the config)
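A rough workaround sketch in Python, under two assumptions that match the nc examples above but are not confirmed by the docs: that the authorization state lives only as long as a single connection, and that replies are newline-terminated JSON. The token is a placeholder:

```python
import json
import socket

HOST, PORT = "localhost", 19444
TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder: use your own token

def send(sock: socket.socket, payload: dict) -> dict:
    """Send one JSON-RPC message and read one newline-terminated reply."""
    sock.sendall((json.dumps(payload) + "\n").encode())
    reply = b""
    while not reply.endswith(b"\n"):
        chunk = sock.recv(4096)
        if not chunk:
            break
        reply += chunk
    return json.loads(reply)

with socket.create_connection((HOST, PORT)) as sock:
    # Log in and run the actual command over the same connection.
    print(send(sock, {"command": "authorize", "subcommand": "login", "token": TOKEN}))
    print(send(sock, {"command": "config", "subcommand": "getconfig"}))
```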
wbr
Just change the token if it exists in your config for this example to work: a message will pop up on the HyperHDR page when you execute the command.
OK, there is a login subcommand, let's try this:
echo '{"command":"authorize","subcommand":"login","token":"d9f2c817-9b8e-4358-9133-995e611b09ab"}' | websocat -n1 ws://localhost:8090
{"command":"authorize-login","success":true,"tan":0}
Token must be longer than 36 chars https://github.com/awawa-dev/HyperHDR/blob/6a6b29dfa970ff1c0b1b4d46192d27deaedab70c/sources/api/JsonAPI.cpp#L1531
otherwise, for 36 chars, it triggers a method that won't unlock admin access, only ordinary authorization
https://github.com/awawa-dev/HyperHDR/blob/6a6b29dfa970ff1c0b1b4d46192d27deaedab70c/sources/api/JsonAPI.cpp#L1541
OK, silly question: how do I get a valid token longer than 36 bytes to unlock admin access?
wbr
Did you check the 'auth' table for the user's token? Logging in with a password also returns it: https://github.com/awawa-dev/HyperHDR/blob/6a6b29dfa970ff1c0b1b4d46192d27deaedab70c/sources/api/JsonAPI.cpp#L1562
Hi,
yes, there is another entry in the auth table
user = Hyperhdr
password = 4e0d2fa2cf3741d8999b884f5b77dcbe70c1978abbc9ee656fe1046eb08b788fac4db5903453735f77302358fce50ef51bbe5ea43a132c8d6fc713cf1f8d5860
token = df7d84dabef464722b5253b4187489e82260def11b16e05cafae27958265946d169ad062767754d17f13a4e424b0c35e8cc08849b25b02199d4597581752a90f
salt = 510c2bc6125aab9a53aa2dd91f655a1903093ff02d335bb565fd32186bad47eb7c73cd597196a78ef6731ad5003ef29271eae5c847de914e96db0c0e283084df
comment =
portal_token =
id =
created_at = 2023-03-23T15:25:33Z
last_use = 2023-03-29T22:11:56Z
looks like the token is hashed too; let's try to log in with the password
echo '{"command":"authorize","subcommand":"login","password":"hyperhdr"}' | websocat -n1 ws://localhost:8090
{"command":"authorize-login","info":{"token":"df7d84dabef464722b5253b4187489e82260def11b16e05cafae27958265946d169ad062767754d17f13a4e424b0c35e8cc08849b25b02199d4597581752a90f"},"success":true,"tan":0}
the hashed token is longer than 36 bytes, so maybe we are logged in now?
echo '{"command":"config","subcommand":"getconfig"}' | websocat -n1 ws://localhost:8090
{"command":"config","error":"No Authorization","success":false,"tan":0}
Maybe the token has to be included in further requests
echo '{"command":"config","subcommand":"getconfig","token":"df7d84dabef464722b5253b4187489e82260def11b16e05cafae27958265946d169ad062767754d17f13a4e424b0c35e8cc08849b25b02199d4597581752a90f"}' | websocat -n1 ws://localhost:8090
{"command":"config","error":"Errors during specific message validation, please consult the HyperHDR Log","success":false,"tan":0}
WEBSOCKET : <ERROR> While validating schema against json data of 'JsonRpc@::1':[root].token: no schema definition
OK key "token" is not defined in schema-config.json, let's try to login with token
echo '{"command":"authorize","subcommand":"login","token":"df7d84dabef464722b5253b4187489e82260def11b16e05cafae27958265946d169ad062767754d17f13a4e424b0c35e8cc08849b25b02199d4597581752a90f"}' | websocat -n1 ws://localhost:8090
{"command":"authorize-login","success":true,"tan":0}
echo '{"command":"config","subcommand":"getconfig"}' | websocat -n1 ws://localhost:8090
{"command":"config","error":"No Authorization","success":false,"tan":0}
Any more ideas? The code flow looks like this; there are no errors in the log (--debug option):
void JsonAPI::handleAuthorizeCommand(const QJsonObject& message, const QString& command, int tan)
...
if (subc == "login")
...
if (token.length() > 36)
{
if (API::isUserTokenAuthorized(token))
...
bool API::isUserTokenAuthorized(const QString& userToken)
{
bool res;
QMetaObject::invokeMethod(_authManager, "isUserTokenAuthorized", Qt::BlockingQueuedConnection, Q_RETURN_ARG(bool, res), Q_ARG(QString, DEFAULT_CONFIG_USER), Q_ARG(QString, userToken));
if (res)
{
_authorized = true;
_adminAuthorized = true;
// Listen for ADMIN ACCESS protected signals
connect(_authManager, &AuthManager::newPendingTokenRequest, this, &API::onPendingTokenRequest, Qt::UniqueConnection);
}
return res;
}
bool AuthManager::isUserAuthorized(const QString& user, const QString& pw)
{
if (isUserAuthBlocked())
return false;
if (!_authTable->isUserAuthorized(user, pw))
{
setAuthBlock(true);
return false;
}
return true;
}
bool AuthTable::isUserAuthorized(const QString& user, const QString& pw)
{
if (userExist(user) && (calcPasswordHashOfUser(user, pw) == getPasswordHashOfUser(user)))
{
updateUserUsed(user);
return true;
}
return false;
}
wbr
| gharchive/issue | 2023-03-23T19:05:26 | 2025-04-01T06:37:59.283015 | {
"authors": [
"Thinner77",
"awawa-dev"
],
"repo": "awawa-dev/HyperHDR",
"url": "https://github.com/awawa-dev/HyperHDR/issues/536",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
343468087 | Add a "good-thomas double butterfly" algorithm impl
When I added the good-thomas implementation a while back, I tried it with very large sizes, and saw that it didn't really stand up to mixed-radix.
Something I didn't try is very small sizes. It turns out that if both child FFTs are small enough to be butterflies, Good-Thomas greatly outperforms mixed-radix. So the planner now does Good-Thomas double butterfly instead of mixed-radix double butterfly whenever possible.
I also tried using the main Good-Thomas Algorithm instead of Mixed Radix when sizes are less than a few thousand, but I got mixed results. Some benchmarks were improved, while others were worse. If we did something like FFTW where we measured performance as a part of the planning process, it would be worth it to test mixed radix vs good-thomas performance for given sizes, but it seems to be too unreliable to do it all the time.
I also tried another stab at computing the reordering indexes on the fly, and I got a version that's faster than the original, but it's still slower than precomputing them, both at small and large sizes.
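For background on why these reordering indexes exist and why Good-Thomas can win when the two sizes are coprime: the prime-factor (Good-Thomas) split re-indexes a length-$N = N_1 N_2$ DFT via the Chinese Remainder Theorem, which removes the twiddle-factor multiplications that mixed-radix needs between its two stages. A textbook sketch of the index maps (general background, not code from this PR): the input is permuted as

$$n = (n_1 N_2 + n_2 N_1) \bmod N, \qquad 0 \le n_1 < N_1, \; 0 \le n_2 < N_2,$$

and the output index is read back through its CRT residues $k_1 = k \bmod N_1$, $k_2 = k \bmod N_2$, so the transform factors into two plain nested DFTs with no twiddles in between:

$$X_k = \sum_{n_2=0}^{N_2-1} \Big( \sum_{n_1=0}^{N_1-1} x_n \, e^{-2\pi i \, n_1 k_1 / N_1} \Big) e^{-2\pi i \, n_2 k_2 / N_2}.$$

The price is exactly these input/output permutations, which is why precomputing the reordering indexes pays off.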
Comparing benchmarks of before and after:
PLANNED -- BEFORE
test complex_composite_20736 ... bench: 470,106 ns/iter (+/- 24,886)
test complex_composite_24028 ... bench: 2,118,454 ns/iter (+/- 96,817)
test complex_composite_24576 ... bench: 361,290 ns/iter (+/- 30,788)
test complex_composite_30270 ... bench: 1,164,039 ns/iter (+/- 151,710)
test complex_composite_32192 ... bench: 2,957,133 ns/iter (+/- 287,858)
test complex_prime_0019 ... bench: 262 ns/iter (+/- 58)
test complex_prime_0151 ... bench: 4,154 ns/iter (+/- 302)
test complex_prime_1009 ... bench: 29,750 ns/iter (+/- 2,597)
test complex_prime_2017 ... bench: 75,501 ns/iter (+/- 6,779)
test complex_primepower_160801 ... bench: 10,726,836 ns/iter (+/- 821,831)
test complex_primepower_44521 ... bench: 2,604,360 ns/iter (+/- 89,333)
PLANNED -- AFTER
test complex_composite_20736 ... bench: 454,925 ns/iter (+/- 131,175)
test complex_composite_24028 ... bench: 1,946,914 ns/iter (+/- 115,707)
test complex_composite_24576 ... bench: 353,485 ns/iter (+/- 16,025)
test complex_composite_30270 ... bench: 1,079,107 ns/iter (+/- 52,330)
test complex_composite_32192 ... bench: 2,719,065 ns/iter (+/- 114,287)
test complex_prime_0019 ... bench: 254 ns/iter (+/- 11)
test complex_prime_0151 ... bench: 3,924 ns/iter (+/- 186)
test complex_prime_1009 ... bench: 27,943 ns/iter (+/- 1,231)
test complex_prime_2017 ... bench: 74,977 ns/iter (+/- 3,572)
test complex_primepower_160801 ... bench: 9,487,384 ns/iter (+/- 238,325)
test complex_primepower_44521 ... bench: 2,468,669 ns/iter (+/- 106,513)
And comparing Good-Thomas vs mixed-radix at various sizes, showing that Good-Thomas is much better than mixed-radix at small sizes:
test good_thomas_0002_3 ... bench: 51 ns/iter (+/- 14)
test good_thomas_0003_4 ... bench: 72 ns/iter (+/- 6)
test good_thomas_0004_5 ... bench: 148 ns/iter (+/- 7)
test good_thomas_0007_32 ... bench: 1,666 ns/iter (+/- 708)
test good_thomas_0032_27 ... bench: 13,751 ns/iter (+/- 743)
test good_thomas_0256_243 ... bench: 1,915,772 ns/iter (+/- 121,952)
test good_thomas_2048_2187 ... bench: 264,304,653 ns/iter (+/- 2,765,125)
test good_thomas_2048_3 ... bench: 71,788 ns/iter (+/- 5,187)
test good_thomas_butterfly_0002_3 ... bench: 32 ns/iter (+/- 7)
test good_thomas_butterfly_0003_4 ... bench: 50 ns/iter (+/- 3)
test good_thomas_butterfly_0004_5 ... bench: 109 ns/iter (+/- 45)
test good_thomas_butterfly_0007_32 ... bench: 1,544 ns/iter (+/- 431)
MIXED RADIX:
test mixed_radix_0002_3 ... bench: 81 ns/iter (+/- 25)
test mixed_radix_0003_4 ... bench: 103 ns/iter (+/- 7)
test mixed_radix_0004_5 ... bench: 180 ns/iter (+/- 51)
test mixed_radix_0007_32 ... bench: 1,861 ns/iter (+/- 115)
test mixed_radix_0032_27 ... bench: 14,030 ns/iter (+/- 768)
test mixed_radix_0256_243 ... bench: 1,735,521 ns/iter (+/- 106,055)
test mixed_radix_2048_2187 ... bench: 193,542,181 ns/iter (+/- 3,073,271)
test mixed_radix_2048_3 ... bench: 75,990 ns/iter (+/- 5,625)
test mixed_radix_butterfly_0002_3 ... bench: 43 ns/iter (+/- 13)
test mixed_radix_butterfly_0003_4 ... bench: 64 ns/iter (+/- 16)
test mixed_radix_butterfly_0004_5 ... bench: 119 ns/iter (+/- 8)
test mixed_radix_butterfly_0007_32 ... bench: 1,684 ns/iter (+/- 126)
Nice work. Thanks!
As an aside, you may have noticed I haven't been putting much time into this project lately. At this point, I think you have a more complete understanding of the code and you seem to have a good vision for pushing this project forward. Would you be willing to assume ownership of this project? You're already a collaborator, so what this means is I would just defer to you for PR decisions and you would manage the version hosted on crates.io.
Could you add me as an owner on the crate? My username is ejmahler
@awelkie Have you had a chance to look at this?
At the PR or adding you as an owner? I sent the crates.io invitation 5 days ago when you asked, let me know if you didn't get it. I skimmed the PR and it looks fine, but as I said I'll defer to you. You should have permissions to merge pull requests, right?
The crates.io account. I didn't notice that you had sent an invite. I just accepted it! thanks
| gharchive/pull-request | 2018-07-23T02:34:54 | 2025-04-01T06:37:59.292544 | {
"authors": [
"awelkie",
"ejmahler"
],
"repo": "awelkie/RustFFT",
"url": "https://github.com/awelkie/RustFFT/pull/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
159319702 | Added GitHub Student Developer Pack
A very useful link for people who use GitHub: they offer free private repositories for 2 years, the full Unreal Engine for free, discounts, and several other tools.
The page is in English
I saw! But the link is very good, and I felt I should share it. I could write an article about it explaining step by step how to use it. Do you know of some place where I could do that? A blog or something like that.
You can write on Medium
theSkilled, you need a .edu.br email to be eligible. Just sign up for the program and register with that email.
| gharchive/pull-request | 2016-06-09T03:06:13 | 2025-04-01T06:37:59.309138 | {
"authors": [
"danielschmitz",
"jonatasleon",
"theSkilled"
],
"repo": "awesome-br/awesome-br.github.io",
"url": "https://github.com/awesome-br/awesome-br.github.io/pull/294",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
665785163 | Add zoxide
zoxide is a blazing fast autojump alternative written in Rust. It's orders of magnitude faster than other autojumpers, cross-shell, and extremely feature-rich.
First, you didn't disclose that it was your project. Please do so next time. With 4.3k stars I think it is fine for the popular-enough part. One last thing though: this tool is neither written in bash (which could be fine) nor is it bash-related; well, it is no more related to bash than to zsh or fish. I think zoxide would be perfect for an awesome-cli list or something like this. We may have included tools like this in the past. I don't know (who wants to check?). Finally, I don't have a clear answer to give. Who has an opinion to share?
This tool is neither written in bash (which could be fine) nor is it bash-related
IMHO, this is one of the strengths of the project. Because it's not linked to a shell, it can work across other applications like ranger/nnn/vim/emacs without a problem, which adds so much more value to the tool as a shell plugin.
I wouldn't say it isn't bash related, though. It has first-class support for bash, and the fact that it is able to support other shells at the same time shouldn't diminish its value as a bash plugin.
We may have included tools like this in the past. I don't know (who wants to check?).
I just checked the Command Line Productivity section. Almost all of the tools are written in other languages:
aliases is written in Rust, supports only bash right now but mentions zsh under future plans
bashhub is written in Python, supports bash+zsh
commacd is written in POSIX shell, supports bash+zsh
hstr is written in C, supports bash+zsh
qfc is written in Python, supports bash+zsh
The only tools that actually are written in Bash were:
bashmarks
has
sshrc (although the link is broken)
Would it be more relevant on awesome-cli-apps or awesome-cli? Did awesome-zsh or awesome-fish include it?
Would it be more relevant on awesome-cli-apps or awesome-cli?
zoxide is, at the end of the day, a shell plugin. It needs to be set up on your shell. It comes with a CLI, but that CLI is basically used to set up the shell plugin.
Did awesome-zsh or awesome-fish include it?
It's included in awesome-zsh. awesome-fish didn't include it because they require every tool to be written in Fish rather than for Fish.
I am fine with merging stuff not written in bash if it is relevant to bash in some way.
If you're trying to curate a list of high-quality Bash plugins, I don't see why you would want to exclude a plugin simply because it supports other shells.
Consider the starship project. I use it on Bash, because it's the best prompt I could find for Bash. Does it matter to me, as a user, that it's not written in Bash, or that it works on zsh as well? Absolutely not.
I haven't understood why you say zoxide isn't relevant to Bash. It's a Bash plugin. The fact that it's also a zsh plugin doesn't change the fact that it's a Bash plugin.
Finally, 5/8 projects I checked in the list were not written in Bash. Would you want to remove those from the README, for no fault other than the fact that they support other shells?
The list was not maintained much and thus did not have real contribution guidelines. I am in favor of adopting the philosophy of the awesome-fish list. However, I could also envision some middle ground where we have a category of enhancements that are not "pure" bash.
| gharchive/pull-request | 2020-07-26T14:01:34 | 2025-04-01T06:37:59.342684 | {
"authors": [
"Knusper",
"ajeetdsouza",
"aloisdg"
],
"repo": "awesome-lists/awesome-bash",
"url": "https://github.com/awesome-lists/awesome-bash/pull/53",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1019579499 | Not support MV3 native ES background worker
Blocked by https://github.com/w3c/ServiceWorker/issues/1356; otherwise we need to emit import statements, which is not easy in a webpack bundle.
I think that maybe it already works with this, regardless of manifest version:
Yep: https://github.com/pixiebrix/pixiebrix-extension/pull/7472
I'm unsure if your comment relates to this issue, but I'll write down what I recently found as a memo.
Firstly, Chrome supports service workers with native ES modules now, you need to write your manifest like this:
"background": {
"service_worker": "background.worker.js",
"type": "module"
},
Then the service worker can use import or export statements, not importScripts.
Currently, we use JSONP-style (the default of webpack) to load chunks, this way works well, so even if this issue is not resolved, this plugin is ok to use.
// JSONP-style chunk format
"use strict";
(globalThis["webpackChunk"] = globalThis["webpackChunk"] || []).push(...)
As an ES Module enthusiast, I hope we can support ES Module as the chunk format, and also as the chunk loading format.
// ES Module chunk format
export const id = 232;
export const ids = [232];
export const modules = {
/***/ 232:
/***/ (() => {
/***/ })
};
If the chunk format becomes ESM, the only way to load it is static import or dynamic import.
Dynamic import already works to load JSONP-style chunks in the content script (https://github.com/awesome-webextension/webpack-target-webextension#content-script); the problem is that dynamic import is not allowed in an ES Module service worker, even if the dynamic import is called before the first event loop ends. This requires me to emit not dynamic imports, but static imports. Webpack currently only supports loading those chunks via dynamic import (see __webpack_require__.f.j if you're interested).
This becomes harder, but not impossible. I think it needs a lot of time to figure out how to make this work. I can emit import statements at the top level, but I need to know their file names; unluckily, file names are generated by a runtime function (__webpack_require__.u), and it's very hard to do that at compile time.
Since everything works well today, this is not very urgent to fix.
Then the service worker can use import or export statements, not importScripts.
That's very good to know. Parcel forced me to use that syntax but I never investigated it. I'll start using it in webpack too, because importScripts breaks "Pause on uncaught errors" in the dev tools: the debugger pauses on importScripts, regardless of the actual position and source maps.
dynamic import
That didn't seem to work for me 🤷♂️ I got the same error as before
Changing it to import statement fixed it.
So technically now I'm using a native ES background worker in webpack-target-webextension. This issue is fixed? Maybe it just needs some documentation?
Native ES service worker works; dynamic import does not.
While using this plugin, compiled ES modules (including dynamic import) work after bundling, loaded by importScripts. A native ES service worker does not work with this plugin currently (the screenshot you gave).
I just took another look at this. This is not possible until there is ES Module support in content scripts.
there is ES Module support in content scripts
I think you meant "in background workers" right?
there is ES Module support in content scripts
I think you meant "in background workers" right?
No; to share chunks between the background and the content script, they must use a format that both environments support. Right now ES Module is only supported in the background worker and not in the content script, so there is no point in investigating this.
What do you mean? I definitely am using both static and dynamic ESM in content scripts:
https://robwu.nl/crxviewer/?crx=https%3A%2F%2Fchromewebstore.google.com%2Fdetail%2Frefined-github%2Fhlepfoohegkhhmjieoechaddaejaokhf%3Fhl%3Den
See content-script.js.
There are some tricks though:
you need to use import(chrome.runtime.getURL('actual-content-script.js')) in your entry point in order to later be able to use import statements
all JS files need to be in web_accessible_resources
hmm wait, you're right, an extra file will be enough to load ESM in a content script.
| gharchive/issue | 2021-10-07T03:52:41 | 2025-04-01T06:37:59.362281 | {
"authors": [
"Jack-Works",
"fregante"
],
"repo": "awesome-webextension/webpack-target-webextension",
"url": "https://github.com/awesome-webextension/webpack-target-webextension/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
445452994 | Improve GInputBox's behaviour with a long title
Before:
After:
A solid improvement :+1:
| gharchive/pull-request | 2019-05-17T13:51:20 | 2025-04-01T06:37:59.375747 | {
"authors": [
"awesomekling",
"rburchell"
],
"repo": "awesomekling/serenity",
"url": "https://github.com/awesomekling/serenity/pull/51",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
419577216 | Where do I add permissions when using amplify-cli?
** Which Category is your question related to? **
API/Function
** What AWS Services are you utilizing? **
Lambda/DynamoDB
** Provide additional details e.g. code snippets **
I have a lambda that gathers data from DynamoDB that i created using the cli.
When I execute it, I get a permission error:
User: blah is not authorized to perform: dynamodb:DescribeTable
This is easy enough to fix: I added the required DynamoDB permissions through the console, but this feels bad. I could edit the CloudFormation template, adding the permissions there before pushing the changes up. But the template is generated, so that also feels bad.
Any idea where I should be doing this? I can't find any documentation around it.
@bwobbones You could probably modify the CloudFormation file at amplify/backend/function/<function-name>/cloudformation-file.json to add the permissions.
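For illustration only, a statement of roughly this shape could be added to the Lambda execution role's policy in that file; the actions, region, account ID, and table name are placeholders to adapt:

```json
{
  "Effect": "Allow",
  "Action": [
    "dynamodb:DescribeTable",
    "dynamodb:GetItem",
    "dynamodb:Query"
  ],
  "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable"
}
```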
Thanks @kaustavghosh06, do you know if that file will be overwritten if I push changes?
@gregevari What do you mean by if push changes? I don't beleive this file is changed dynamically by the CLI after it's created.
@kaustavghosh06 pushing with amplify push. I think you cover my question though, thanks!
| gharchive/issue | 2019-03-11T09:27:15 | 2025-04-01T06:37:59.397853 | {
"authors": [
"bwobbones",
"gregevari",
"kaustavghosh06"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/issues/1019",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
499044946 | add-graphql-datasource is not showing list of secrets
Describe the bug
No keys available to select
To Reproduce
Steps to reproduce the behavior:
Create new project, amplify init, amplify add api
Create RDS instance (Aurora PostgreSQL compatible with PostgreSQL 10.7, serverless)
Execute amplify api add-graphql-datasource, try to complete quiz.
See error
Expected behavior
Being able to use RDS as datasource.
Screenshots
If I just press enter, this happens
Additional context
OS: Windows 10
amplify -v: 3.10.0
@TrueLecter
Did you select the "serverless" mode when you created the RDS database?
It is required to select the "serverless" mode, as stated in our documentation.
Yes. All clusters were created with the serverless template. The role is shown as serverless in the console as well.
@TrueLecter
Did you create the password for the "master" user, and then use the query editor to create a database in the newly created cluster?
I will send a PR to guard against those scenarios, and print out error messages.
OK, so I was finally able to connect to the database in the Query Editor. However, now I'm getting the next issue:
I was also getting an error stating that there was no database named . After that I created a database with the same name as the username and started receiving an issue about an unrecognized configuration parameter.
This seems to still be the case (CLI ver. 4.12.0). Is there any info on when amplify will support Postgres?
(in fact, I did not find any documentation that it was not supported beyond this issue...)
Is there any update, please?
| gharchive/issue | 2019-09-26T18:26:08 | 2025-04-01T06:37:59.405518 | {
"authors": [
"Genkilabs",
"TrueLecter",
"UnleashedMind",
"stefanotauriello"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/issues/2423",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1290646313 | chore: use correct commit when publishing git tag
Description of changes
This changes the prerelease scripts to make sure we are publishing tags and releases linked to the correct commit SHA.
Issue #, if available
Description of how you validated changes
Checklist
[x] PR description included
[ ] yarn test passes
[ ] Tests are changed or added
[ ] Relevant documentation is changed or added (and PR referenced)
[ ] New AWS SDK calls or CloudFormation actions have been added to relevant test and service IAM policies
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Codecov Report
:exclamation: No coverage uploaded for pull request base (dev@5ea0b9a). Click here to learn what that means.
The diff coverage is n/a.
@@ Coverage Diff @@
## dev #10677 +/- ##
======================================
Coverage ? 47.37%
======================================
Files ? 669
Lines ? 33066
Branches ? 6673
======================================
Hits ? 15665
Misses ? 15723
Partials ? 1678
:mega: Codecov can now indicate which changes are the most critical in Pull Requests. Learn more
| gharchive/pull-request | 2022-06-30T21:12:26 | 2025-04-01T06:37:59.410934 | {
"authors": [
"codecov-commenter",
"danielleadams"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/pull/10677",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1522517255 | Cloning repository fails trying to read the cache
Before opening, please confirm:
[X] I have checked to see if my question is addressed in the FAQ.
[X] I have searched for duplicate or closed issues.
[X] I have read the guide for submitting bug reports.
[X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
[X] I have removed any sensitive information from my code snippets and submission.
App Id
d31qkmp0h12ar6
AWS Region
us-west-2
Amplify Hosting feature
Build settings, Monorepo
Describe the bug
I'm using a Monorepo configuration for this application and try to make the deploy using a custom amplify.yml file.
Currently the main issue is that the checkout job fails with the following 2 relevant warnings:
2023-01-06T12:25:44.542Z [INFO]: # Retrieving environment cache...
2023-01-06T12:25:44.612Z [WARNING]: ! Unable to write cache: {"code":"ERR_BAD_REQUEST","message":"Request failed with status code 404"})}
2023-01-06T12:26:00.536Z [INFO]: # Retrieving cache...
2023-01-06T12:26:00.536Z [INFO]: # Retrieved cache
2023-01-06T12:26:40.089Z [ERROR]: !!! TypeError: m.indexOf is not a function
2023-01-06T12:26:40.168Z [INFO]: # Starting environment caching...
2023-01-06T12:26:40.168Z [INFO]: # Environment caching completed
After that the build step fails.
I've tested the following configuration changes:
Node 14 & 16
Overwrite the amplify CLI version (multiple versions and the same as we are using locally)
Removed the caches from the amplify.yml file declaration
Removed the test step
Reconnect the repository multiple times (as is the indication that appears on the interface)
Expected behavior
We expect to see the checkout to work as expected as the indications are warnings and not errors.
Reproduction steps
This happens when triggering a build or pushing a new change from gitlab integration.
Build Settings
No response
Log output
2023-01-06T12:25:44.414Z [INFO]: # Switching to commit: 4a5a6f7d9966f36b966e056f491b0948b4bea32a
2023-01-06T12:25:44.458Z [INFO]: Agent pid 159
2023-01-06T12:25:44.458Z [INFO]: Identity added: /root/.ssh/git_rsa (/root/.ssh/git_rsa)
Note: switching to '4a5a6f7d9966f36b966e056f491b0948b4bea32a'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to false
HEAD is now at 4a5a6f7d9 Testing configuration
2023-01-06T12:25:44.530Z [INFO]: Successfully cleaned up Git credentials
2023-01-06T12:25:44.531Z [INFO]: # Checking for Git submodules at: /codebuild/output/src955501831/src/creator-web/.gitmodules
2023-01-06T12:25:44.542Z [INFO]: # Retrieving environment cache...
2023-01-06T12:25:44.612Z [WARNING]: ! Unable to write cache: {"code":"ERR_BAD_REQUEST","message":"Request failed with status code 404"})}
2023-01-06T12:25:44.612Z [INFO]: ---- Setting Up SSM Secrets ----
2023-01-06T12:25:44.612Z [INFO]: SSM params {"Path":"/amplify/d35m6bal8x8kl9/citesting/","WithDecryption":true}
2023-01-06T12:25:44.666Z [INFO]: # Defaulting to Node version 16
2023-01-06T12:25:54.411Z [INFO]: # Node version 16 is available for installation
2023-01-06T12:25:54.502Z [INFO]: # Installing Node version 16
2023-01-06T12:26:00.450Z [INFO]: # Now using Node version 16
2023-01-06T12:26:00.531Z [INFO]: No live updates for this build run
2023-01-06T12:26:00.536Z [INFO]: # Retrieving cache...
2023-01-06T12:26:00.536Z [INFO]: # Retrieved cache
2023-01-06T12:26:40.089Z [ERROR]: !!! TypeError: m.indexOf is not a function
2023-01-06T12:26:40.168Z [INFO]: # Starting environment caching...
2023-01-06T12:26:40.168Z [INFO]: # Environment caching completed
Terminating logging...
Additional information
This application is configured on the amplify console to use other application backend.
Hi @esteban-serfe 👋🏽 thanks for raising this issue. It's possible that there could be an issue with your amplify.yml file and we are misrepresenting the error. Could you please share the file so we can make sure it is configured correctly?
Hi @hloriana
This is the amplify.yml file from the last build.
The issue appears within the students application, not the creators.
version: 1
applications:
- appRoot: creators
backend:
phases:
preBuild:
commands:
# Run the lint over the lambdas functions > check #97113"
#- npm i
#- npm run lint:lambdas
- if [[ ! -v ENV ]]; then export ENV=${USER_BRANCH}; fi;
- if [[ ! -v ENV && -z "$ENV" ]]; then export ENV=${AWS_BRANCH}; fi
- echo "Using $ENV as the Backend environment"
#- whereis jq
#- yum update && yum install -y jq
#- aws cloudformation wait stack-update-complete --stack-name $(amplify env get --name $ENV --json | jq '.awscloudformation.StackId')
build:
commands:
- amplifyPush --simple
frontend:
phases:
preBuild:
commands:
- nvm use $VERSION_NODE_14
- export NODE_OPTIONS=\"--max-old-space-size=8192\"
#- npm ci --no-audit
#- echo "=== Running Lint to validate it works as expected ==="
#- "npm run lint"
- npm i --no-audit --production
build:
commands:
- npm run build
artifacts:
baseDirectory: build
files:
- "**/*"
cache:
paths:
- node_modules/**/*
test:
artifacts:
baseDirectory: cypress
configFilePath: "**/mochawesome.json"
files:
- "**/*.png"
- "**/*.mp4"
- "report/mochawesome-report/**/*"
phases:
preTest:
commands:
- echo "=== Install cypress reporters and friends ==="
- npm install --no-audit wait-on pm2 mocha mochawesome mochawesome-merge mochawesome-report-generator
- echo '=== Configure the environment data sample ==='
- 'echo "{ \"CYPRESS_TEST_USER_CREATOR\": \"${CYPRESS_TEST_USER_CREATOR}\", \"CYPRESS_TEST_PASS_CREATOR\": \"${CYPRESS_TEST_PASS_CREATOR}\" }" > cypress.env.json'
- npx pm2 start npm -- start
- npx wait-on http://localhost:3000/
test:
commands:
- 'npx cypress run --reporter mochawesome --reporter-options "reportDir=cypress/report/mochawesome-report,overwrite=false,html=false,json=true,timestamp=mmddyyyy_HHMMss" --config video=false'
postTest:
commands:
- echo "=== Generate tests output by merging ==="
- "npx mochawesome-merge cypress/report/mochawesome-report/mochawesome*.json > cypress/report/mochawesome.json"
- "npx pm2 kill"
- appRoot: students/everprep-students
backend:
phases:
build:
commands:
# - set
- true
frontend:
phases:
preBuild:
commands:
- nvm use $VERSION_NODE_14
- export NODE_OPTIONS=\"--max-old-space-size=4096\"
- echo "=== Install dependencies ==="
- npm ci --no-audit --production
# - if [[ ! -v ENV ]]; then export ENV=${USER_BRANCH}; fi;
# - if [[ ! -v ENV && -z "$ENV" ]]; then export ENV=${AWS_BRANCH}; fi
# - echo "Using $ENV as the Backend environment"
# - export AWSCLOUDFORMATIONCONFIG='{"configLevel":"project","accessKeyId":"$AWS_ACCESS_KEY_ID","secretAccessKey":"$AWS_SECRET_ACCESS_KEY","region":"$AWS_REGION"}'
# - export AMPLIFY='{"envName":"$ENV","appId":"$AWS_APP_ID","defaultEditor":"code"}'
# - export PROVIDERS='{"awscloudformation":$AWSCLOUDFORMATIONCONFIG}'
# - export CODEGEN='{"generateCode":false,"generateDocs":false}'
# - export REACTCONFIG='{"SourceDir":"src","DistributionDir":"build","BuildCommand":"npm run-script build","StartCommand":"npm run-script start"}'
# - export FRONTEND='{"frontend":"javascript","framework":"react","config":$REACTCONFIG}'
# - export PORT=3001
# - amplify pull $ENV --yes --amplify ${AMPLIFY} --providers ${PROVIDERS} --frontend ${FRONTEND}
# - test -f src/aws-exports.js
build:
commands:
- npm run build
artifacts:
baseDirectory: build
files:
- "**/*"
# cache:
# paths:
# - node_modules/**/*
# test:
# artifacts:
# baseDirectory: cypress
# configFilePath: "**/mochawesome.json"
# files:
# - "**/*.png"
# - "**/*.mp4"
# - "report/mochawesome-report/**/*"
# phases:
# preTest:
# commands:
# - "npm install --no-audit wait-on pm2 mocha@5.2.0 mochawesome mochawesome-merge mochawesome-report-generator"
# - "npx pm2 start npm -- start"
# - "npx wait-on http://localhost:3001"
# test:
# commands:
# - 'npx cypress run --reporter mochawesome --reporter-options \"reportDir=cypress/report/mochawesome-report,overwrite=false,html=false,json=true,timestamp=mmddyyyy_HHMMss\" --config video=false'
# postTest:
# commands:
# - "npx pm2 kill"
# - "npx mochawesome-merge cypress/report/mochawesome-report/mochawesome*.json > cypress/report/mochawesome.json"
| gharchive/issue | 2023-01-06T12:35:13 | 2025-04-01T06:37:59.442058 | {
"authors": [
"esteban-serfe",
"hloriana"
],
"repo": "aws-amplify/amplify-hosting",
"url": "https://github.com/aws-amplify/amplify-hosting/issues/3228",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1845937280 | while deploying my nextjs project with sentry on aws amplify build getting failed
Before opening, please confirm:
[X] I have searched for duplicate or closed issues and discussions.
[X] I have read the guide for submitting bug reports.
[X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
JavaScript Framework
Next.js
Amplify APIs
Not applicable
Amplify Categories
Not applicable
Environment information
# Put output below this line
Describe the bug
I am using Next.js 11.1.3. When I added Sentry error tracking to my Next.js project, building and running it locally works fine, but when I deploy the project on AWS Amplify the build fails with the following error:
'> Build error occurred\n' +
'Error: spawn ENOMEM\n' +
' at ChildProcess.spawn (node:internal/child_process:420:11)\n' +
' at spawn (node:child_process:733:9)\n' +
' at fork (node:child_process:169:10)\n' +
' at ChildProcessWorker.initialize (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:141:45)\n' +
' at new ChildProcessWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:132:10)\n' +
' at WorkerPool.createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:44:12)\n' +
' at new BaseWorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/base/BaseWorkerPool.js:135:27)\n' +
' at new WorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:30:1)\n' +
' at new Worker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/index.js:167:26)\n' +
' at createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:15:28)\n' +
' at new Worker1 (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:21:9)\n' +
' at /codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:432:31\n' +
' at async Span.traceAsyncFn (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/telemetry/trace/trace.js:60:20)\n' +
' at async Object.build [as default] (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:77:25) {\n' +
' errno: -12,\n' +
" code: 'ENOMEM',\n" +
" syscall: 'spawn'\n" +
'}',
failed: true,
timedOut: false,
isCanceled: false,
killed: false
}
2023-08-10T16:47:33.927Z [ERROR]:
300s › darxjey0xrl5c › Error: Command failed with exit code 1: node_modules/.bin/next build
To Reproduce
Below are my AWS Amplify build logs
'> Build error occurred\n' +
'Error: spawn ENOMEM\n' +
' at ChildProcess.spawn (node:internal/child_process:420:11)\n' +
' at spawn (node:child_process:733:9)\n' +
' at fork (node:child_process:169:10)\n' +
' at ChildProcessWorker.initialize (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:141:45)\n' +
' at new ChildProcessWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:132:10)\n' +
' at WorkerPool.createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:44:12)\n' +
' at new BaseWorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/base/BaseWorkerPool.js:135:27)\n' +
' at new WorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:30:1)\n' +
' at new Worker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/index.js:167:26)\n' +
' at createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:15:28)\n' +
' at new Worker1 (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:21:9)\n' +
' at /codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:432:31\n' +
' at async Span.traceAsyncFn (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/telemetry/trace/trace.js:60:20)\n' +
' at async Object.build [as default] (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:77:25) {\n' +
' errno: -12,\n' +
" code: 'ENOMEM',\n" +
" syscall: 'spawn'\n" +
'}',
failed: true,
timedOut: false,
isCanceled: false,
killed: false
}
2023-08-10T16:47:33.927Z [ERROR]:
300s › darxjey0xrl5c › Error: Command failed with exit code 1: node_modules/.bin/next build
Expected behavior
The build of my Next.js project, which includes Sentry error tracking, should not fail.
Reproduction steps
After deploying my Next.js Sentry project on AWS Amplify, the build fails.
Code Snippet
// Put your code below this line.
**next.config.js File**
```javascript
const { withSentryConfig } = require("@sentry/nextjs");
const nextConfig = {
...nextConfigurations,
productionBrowserSourceMaps: true,
sentry:{
widenClientFileUpload: true,
transpileClientSDK: true,
hideSourceMaps: true,
disableLogger: true,
}
};
const sentryWebpackPluginOptions = {
org: process.env.NEXT_PUBLIC_SENTRY_ORG_NAME,
project: process.env.NEXT_PUBLIC_SENTRY_PROJECT_NAME,
authToken: process.env.NEXT_PUBLIC_SENTRY_AUTH_TOKEN,
sourceMapFilename: '[name].[hash].js.map',
silent: true,
};
module.exports = withSentryConfig(nextConfig, sentryWebpackPluginOptions);
```

**sentry.client.config.js**
```javascript
import * as Sentry from "@sentry/nextjs";
import { ContextLines } from "@sentry/integrations";
Sentry.init({
dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
tracesSampleRate: 0.2,
replaysSessionSampleRate: 0.1,
replaysOnErrorSampleRate: 0.1,
integrations: [
new Sentry.Replay(),
new ContextLines({
frameContextLines: 7,
}),
],
});
```

**app.js**
```javascript
Sentry.init({
dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
integrations: [
new BrowserTracing()
],
tracesSampleRate: 0.2,
});
```

So this setup works well for me locally: when I build locally, the source maps are uploaded to my Sentry dashboard and I also get proper stack traces. But when I try to deploy on AWS Amplify, the build fails with the error below:
'> Build error occurred\n' +
'Error: spawn ENOMEM\n' +
' at ChildProcess.spawn (node:internal/child_process:420:11)\n' +
' at spawn (node:child_process:733:9)\n' +
' at fork (node:child_process:169:10)\n' +
' at ChildProcessWorker.initialize (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:141:45)\n' +
' at new ChildProcessWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:132:10)\n' +
' at WorkerPool.createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:44:12)\n' +
' at new BaseWorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/base/BaseWorkerPool.js:135:27)\n' +
' at new WorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:30:1)\n' +
' at new Worker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/index.js:167:26)\n' +
' at createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:15:28)\n' +
' at new Worker1 (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:21:9)\n' +
' at /codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:432:31\n' +
' at async Span.traceAsyncFn (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/telemetry/trace/trace.js:60:20)\n' +
' at async Object.build [as default] (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:77:25) {\n' +
' errno: -12,\n' +
" code: 'ENOMEM',\n" +
" syscall: 'spawn'\n" +
'}',
failed: true,
timedOut: false,
isCanceled: false,
killed: false
}
2023-08-10T16:47:33.927Z [ERROR]:
300s › darxjey0xrl5c › Error: Command failed with exit code 1: node_modules/.bin/next build
I am not getting why I am getting this problem only when i deploy it on aws amplify locally everything working fine.
### Log output
<details>
// Put your logs below this line
Build error occurred
Error: spawn ENOMEM
at ChildProcess.spawn (node:internal/child_process:420:11)
at spawn (node:child_process:733:9)
at fork (node:child_process:169:10)
at ChildProcessWorker.initialize (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:141:45)
at new ChildProcessWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:132:10)
at WorkerPool.createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:44:12)
at new BaseWorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/base/BaseWorkerPool.js:135:27)
at new WorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:30:1)
at new Worker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/index.js:167:26)
at createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:15:28)
at new Worker1 (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:21:9)
at /codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:432:31
at async Span.traceAsyncFn (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/telemetry/trace/trace.js:60:20)
at async Object.build [as default] (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:77:25) {
errno: -12,
code: 'ENOMEM',
syscall: 'spawn'
}
info - Using webpack 5. Reason: Enabled by default https://nextjs.org/docs/messages/webpack5
info - Checking validity of types...
info - Creating an optimized production build...
info - Using external babel configuration from /codebuild/output/src489982623/src/...-nextjs/.babelrc
info - Collecting page data...
at makeError (/root/.//node_modules/execa/lib/error.js:60:11)
at handlePromise (/root/.//node_modules/execa/index.js:118:26)
at runMicrotasks ()
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Builder.build (/root/.//node_modules/@sls-next/lambda-at-edge/dist/build.js:377:13)
at async NextjsComponent.build (/root/.//node_modules/@sls-next/-component/dist/component.js:165:13)
at async NextjsComponent.default (/root/.//node_modules/@sls-next/-component/dist/component.js:22:13)
at async fn (/root/.npm/_npx/780a6c1398234b48/node_modules/@/template/utils.js:280:41)
at async Promise.all (index 0)
at async executeGraph (/root/.npm/_npx/780a6c1398234b48/node_modules/@/template/utils.js:294:3)
at async Template.default (/root/.npm/_npx/780a6c1398234b48/node_modules/@/template/.js:67:38)
at async Object.runComponents (/root/.npm/_npx/780a6c1398234b48/node_modules/@/cli/src/index.js:222:17) {
shortMessage: 'Command failed with exit code 1: node_modules/.bin/next build',
command: 'node_modules/.bin/next build',
escapedCommand: '"node_modules/.bin/next" build',
exitCode: 1,
signal: undefined,
signalDescription: undefined,
stdout: 'info - Using webpack 5. Reason: Enabled by default https://nextjs.org/docs/messages/webpack5\n' +
'info - Checking validity of types...\n' +
'info - Creating an optimized production build...\n' +
'info - Using external babel configuration from /codebuild/output/src489982623/src/...-nextjs/.babelrc\n' +
'info - Collecting page data...',
stderr: '\n' +
'warn - As of Tailwind CSS v2.2, lightBlue has been renamed to sky.\n' +
'warn - Update your configuration file to silence this warning.\n' +
'\n' +
'warn - As of Tailwind CSS v3.0, warmGray has been renamed to stone.\n' +
'warn - Update your configuration file to silence this warning.\n' +
'\n' +
'warn - As of Tailwind CSS v3.0, trueGray has been renamed to neutral.\n' +
'warn - Update your configuration file to silence this warning.\n' +
'\n' +
'warn - As of Tailwind CSS v3.0, coolGray has been renamed to gray.\n' +
'warn - Update your configuration file to silence this warning.\n' +
'\n' +
'warn - As of Tailwind CSS v3.0, blueGray has been renamed to slate.\n' +
'warn - Update your configuration file to silence this warning.\n' +
'(node:4289) [DEP_WEBPACK_CHUNK_HAS_ENTRY_MODULE] DeprecationWarning: Chunk.hasEntryModule: Use new ChunkGraph API\n' +
'(Use node --trace-deprecation ... to show where the warning was created)\n' +
'(node:4289) [DEP_WEBPACK_CHUNK_ADD_MODULE] DeprecationWarning: Chunk.addModule: Use new ChunkGraph API\n' +
'warn - Compiled with warnings\n' +
'\n' +
'./components/dashB/FooterDashboard.js\n' +
"Attempted import error: 'support_id' is not exported from '../../constants' (imported as 'support_id').\n" +
'\n' +
'./node_modules/typescript/lib/typescript.js\n' +
"Module not found: Can't resolve 'perf_hooks' in '/codebuild/output/src489982623/src/...-nextjs/node_modules/typescript/lib'\n" +
'\n' +
'./node_modules/typescript/lib/typescript.js\n' +
'Critical dependency: the request of a dependency is an expression\n' +
'\n' +
'./node_modules/typescript/lib/typescript.js\n' +
'Critical dependency: the request of a dependency is an expression\n' +
'\n' +
'./node_modules/engine.io-client/node_modules/ws/lib/buffer-util.js\n' +
"Module not found: Can't resolve 'bufferutil' in '/codebuild/output/src489982623/src/...-nextjs/node_modules/engine.io-client/node_modules/ws/lib'\n" +
'\n' +
'./node_modules/engine.io-client/node_modules/ws/lib/validation.js\n' +
"Module not found: Can't resolve 'utf-8-validate' in '/codebuild/output/src489982623/src/...-nextjs/node_modules/engine.io-client/node_modules/ws/lib'\n" +
'\n' +
'./components/dashB/FooterDashboard.js\n' +
"Attempted import error: 'support_id' is not exported from '../../constants' (imported as 'support_id').\n" +
'\n' +
'./node_modules/next/dist/server/load-components.js\n' +
'Critical dependency: the request of a dependency is an expression\n' +
'\n' +
'./node_modules/next/dist/server/load-components.js\n' +
'Critical dependency: the request of a dependency is an expression\n' +
'\n' +
'./node_modules/next/dist/server/load-components.js\n' +
'Critical dependency: the request of a dependency is an expression\n' +
'\n' +
'./node_modules/next/dist/server/require.js\n' +
'Critical dependency: the request of a dependency is an expression\n' +
'\n' +
'./node_modules/next/dist/server/require.js\n' +
'Critical dependency: the request of a dependency is an expression\n' +
'\n' +
'./node_modules/next/dist/server/require.js\n' +
'Critical dependency: the request of a dependency is an expression\n' +
'\n' +
'./node_modules/typescript/lib/typescript.js\n' +
'Critical dependency: the request of a dependency is an expression\n' +
'\n' +
'./node_modules/typescript/lib/typescript.js\n' +
'Critical dependency: the request of a dependency is an expression\n' +
'\n' +
'\n' +
'> Build error occurred\n' +
'Error: spawn ENOMEM\n' +
' at ChildProcess.spawn (node:internal/child_process:420:11)\n' +
' at spawn (node:child_process:733:9)\n' +
' at fork (node:child_process:169:10)\n' +
' at ChildProcessWorker.initialize (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:141:45)\n' +
' at new ChildProcessWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:132:10)\n' +
' at WorkerPool.createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:44:12)\n' +
' at new BaseWorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/base/BaseWorkerPool.js:135:27)\n' +
' at new WorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:30:1)\n' +
' at new Worker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/index.js:167:26)\n' +
' at createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:15:28)\n' +
' at new Worker1 (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:21:9)\n' +
' at /codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:432:31\n' +
' at async Span.traceAsyncFn (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/telemetry/trace/trace.js:60:20)\n' +
' at async Object.build [as default] (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:77:25) {\n' +
' errno: -12,\n' +
" code: 'ENOMEM',\n" +
" syscall: 'spawn'\n" +
'}',
</details>
### aws-exports.js
_No response_
### Manual configuration
_No response_
### Additional configuration
_No response_
### Mobile Device
_No response_
### Mobile Operating System
_No response_
### Mobile Browser
_No response_
### Mobile Browser Version
_No response_
### Additional information and screenshots
_No response_
@cwomack ok, but can you share the URL where you shared this issue? How can I see it there?
@gauravsapkal1 Did you find any solution to this issue?
| gharchive/issue | 2023-08-10T17:53:22 | 2025-04-01T06:37:59.486074 | {
"authors": [
"gauravsapkal1",
"himanshu-mobstac"
],
"repo": "aws-amplify/amplify-hosting",
"url": "https://github.com/aws-amplify/amplify-hosting/issues/3640",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2074173852 | BUG: SSG Build failed : Failed to find the deploy-manifest.json file in the build output
Before opening, please confirm:
[X] I have checked to see if my question is addressed in the FAQ.
[X] I have searched for duplicate or closed issues.
[X] I have removed any sensitive information from my code snippets and submission.
Amplify Hosting feature
Deployments
Is your feature request related to a problem? Please describe:
Building an SSG Next.js site gives a build error.
Describe how you'd like this feature to work
https://github.com/aws-amplify/amplify-hosting/issues/3853
Getting the same error for a Next.js SSG deployment:
2024-01-10T10:41:58.716Z [INFO]: ## Completed Frontend Build
2024-01-10T10:41:58.721Z [INFO]: ## Build completed successfully
2024-01-10T10:41:58.722Z [INFO]: # Starting caching...
2024-01-10T10:41:58.732Z [INFO]: # Creating cache artifact...
2024-01-10T10:42:33.446Z [INFO]: # Created cache artifact
2024-01-10T10:42:33.544Z [INFO]: # Uploading cache artifact...
2024-01-10T10:42:37.786Z [INFO]: # Uploaded cache artifact
2024-01-10T10:42:37.881Z [INFO]: # Caching completed
2024-01-10T10:42:37.891Z [ERROR]: !!! CustomerError: Failed to find the deploy-manifest.json file in the build output. Please verify that it exists within the "baseDirectory" specified in your buildSpec. If it's not there, we will also check the .amplify-hosting directory as a fallback. When using a framework adapter for hosting on Amplify, double-check that the adapter settings are correct.
2024-01-10T10:42:37.892Z [INFO]: # Starting environment caching...
2024-01-10T10:42:37.893Z [INFO]: # Environment caching completed
Terminating logging...
This is actually a bug, but the "bug" option was not there when I clicked on "New Issue".
@jitendra-koodo 👋 This repository only accepts new feature requests for AWS Amplify Hosting. For technical support, we encourage you to open a case with AWS technical support if you have an AWS support plan. If you do not have an active AWS support plan, we encourage you to leverage our Amplify community Discord server where community members and staff try to help each other with Amplify.
Where are we supposed to report the bugs?
They don't actually care, @jitendra-koodo. They do what they want. They lie about ISR, and it's still included in their docs as if it works exactly like Vercel's infrastructure, but it does not. Their customer service at AWS Web Services is a joke; they just want to upgrade you to premium by ticking you off with their lack of expertise in the first-tier support.
Hi @jitendra-koodo 👋 , if you have an AWS support plan we encourage you to report bugs by creating a support case. If you do not have an active AWS support plan, we encourage you to leverage our community Discord server where you can ask questions by creating a new thread in the amplify-help channel and community members and staff will try to answer your queries.
| gharchive/issue | 2024-01-10T11:32:38 | 2025-04-01T06:37:59.496478 | {
"authors": [
"Jay2113",
"PythonCircuit",
"arundna",
"jitendra-koodo"
],
"repo": "aws-amplify/amplify-hosting",
"url": "https://github.com/aws-amplify/amplify-hosting/issues/3897",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
609953768 | @auth: API_KEY is Supported or not
We have a Postgres database and we have selected API_KEY as the authentication mode, and the API type is GraphQL.
Thanks in advance.
Hi @vishaldroisys can you explain your use case in more detail? Are you trying to make a call directly to the Postgres database or through APIGateway or AppSync with data resolvers?
Closing due to inactivity. Feel free to re-open if you're still experiencing the same issue.
| gharchive/issue | 2020-04-30T13:01:04 | 2025-04-01T06:37:59.498175 | {
"authors": [
"drochetti",
"lawmicha",
"vishaldroisys"
],
"repo": "aws-amplify/amplify-ios",
"url": "https://github.com/aws-amplify/amplify-ios/issues/410",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
544408850 | Attempt to fix circle ci builds for API
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
This PR only targets API -- working on DataStore unit tests in a separate PR
LGTM! Can probably combine this with https://github.com/aws-amplify/amplify-ios/pull/282
| gharchive/pull-request | 2020-01-01T23:09:52 | 2025-04-01T06:37:59.499624 | {
"authors": [
"iartemiev",
"wooj2"
],
"repo": "aws-amplify/amplify-ios",
"url": "https://github.com/aws-amplify/amplify-ios/pull/281",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1777713640 | ❗❗Liveness: Camera preview not centered on screen (Fixed in version 1.0.2)
Which UI component?
Liveness
Describe the bug
Upgrading Material3 UI (androidx.compose.material3:material3) from 1.0.1 to 1.1.0 caused an issue where the camera preview was top-aligned on the screen during the liveness challenge instead of center-aligned. This results in the camera preview not aligning to the face oval.
Impacted Amplify UI Liveness Versions:
1.0.0 (Only if the customer upgraded the androidx.compose.material3:material3 dependency to 1.1.0+)
1.0.1
As a workaround for these versions, you can force downgrade the material3 lib by adding the snippet below to the app build.gradle.
configurations.all {
    resolutionStrategy {
        // Pin material3 to the last known-good version; remove once on Amplify UI Liveness 1.0.2+
        force('androidx.compose.material3:material3:1.0.1')
    }
}
Amplify UI Liveness 1.0.2 includes a fix to ensure the camera is properly centered on the screen, and is able to render correctly with 'androidx.compose.material3:material3:1.1.0' or later.
Closing, notification has been given.
| gharchive/issue | 2023-06-27T20:37:53 | 2025-04-01T06:37:59.525545 | {
"authors": [
"tjleing",
"tylerjroach"
],
"repo": "aws-amplify/amplify-ui-android",
"url": "https://github.com/aws-amplify/amplify-ui-android/issues/52",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2197757142 | WS1004
CloudFormation Lint Version
cfn-lint 0.86.0
What operating system are you using?
WSL2 Ubuntu on Windows
Describe the bug
cfn-lint incorrectly reports:
[cfn-lint] WS1004: Lambda function xxx does not have a corresponding log group with a Retention property
When there is a !Ref to a log group with a retention property.
Expected behavior
When there is an explicit !Ref to a defined log group with the RetentionInDays property set then WS1004 should not be flagged.
Reproduction template
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Resources:
LambdaProxyFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: src/
FunctionName: MyFunction
Handler: index.handler
LoggingConfig:
LogFormat: JSON
LogGroup: !Ref LambdaProxyLogGroup
MemorySize: 512
PackageType: Zip
ReservedConcurrentExecutions: 1
Runtime: nodejs18.x
Timeout: 10
Tracing: Active
LambdaProxyLogGroup:
DeletionPolicy: Retain
UpdateReplacePolicy: Retain
Type: AWS::Logs::LogGroup
Properties:
LogGroupClass: STANDARD
RetentionInDays: 180
cfn-lint itself does not vend WS1004. Seems to be from this popular rule pack:
https://awslabs.github.io/serverless-rules/rules/
https://github.com/awslabs/serverless-rules/issues
| gharchive/issue | 2024-03-20T14:59:07 | 2025-04-01T06:37:59.588884 | {
"authors": [
"PatMyron",
"shawnbucholtz"
],
"repo": "aws-cloudformation/cfn-lint",
"url": "https://github.com/aws-cloudformation/cfn-lint/issues/3103",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
2365123398 | Since 1.3.0 Update Template Takes 30+ Minutes or fails to finish linting (redux of old issue #2874)
CloudFormation Lint Version
1.3.0
What operating system are you using?
Windows/AWS Serverless
Describe the bug
An issue that was previously fixed has cropped back up. Below is a copy/paste from the last ticket on this matter; the symptoms exhibited are exactly the same, and on our previous version, 0.80.1, it was working fine. The more conditions there are in this particular template, the longer it takes to execute. The growth seems exponential: the first condition is instantaneous and subsequent ones take longer and longer.
A template that used to lint in 1.5 seconds now doesn't complete or completes in around 30 minutes.
This is an issue that was previously fixed in #2874
Expected behavior
To complete linting in a reasonable time frame for supported configurations and properly lint the values inside the conditions. In this case I would prefer under 1 minute of execution, but in versions of cfn-lint before 0.76.0 it completes in about 1.5 seconds and properly lints the values inside the conditions.
Reproduction template
#CORP::EC2::EC2::MODULE
AWSTemplateFormatVersion: 2010-09-09
Parameters:
name:
Description: The name tag for the EC2 instance.
Type: String
Default: ""
imageId:
Description: Base AMI for the EC2 instance.
Type: AWS::EC2::Image::Id
Default: ""
instanceType:
Description: EC2 instance type.
Type: String
Default: r5.large
subnetId:
Description: The ID of the subnet to launch the instance into.
Type: String
Default: ""
osVolumeSize:
Description: Size of the volume that has the OS.
Type: Number
Default: 120
securityGroupIds:
Description: The IDs of the security groups.
Type: List<AWS::EC2::SecurityGroup::Id>
Default: ""
instanceKeyName:
Type: String
Description: Key pair name
Default: kpnamegoeshere
ebsOptimized:
Type: String
Description: Indicates whether the instance is optimized for Amazon EBS I/O.
Default: false
numberOfAdditionalVolumes:
Description: Additional volumes in addition to OS volume. Max is 25.
Type: Number
MaxValue: 25
Default: 0
volume2Size:
Description: Size of volume 2
Type: Number
Default: 40
volume3Size:
Description: Size of volume 3
Type: Number
Default: 40
volume4Size:
Description: Size of volume 4
Type: Number
Default: 40
volume5Size:
Description: Size of volume 5
Type: Number
Default: 40
volume6Size:
Description: Size of volume 6
Type: Number
Default: 40
volume7Size:
Description: Size of volume 7
Type: Number
Default: 40
volume8Size:
Description: Size of volume 8
Type: Number
Default: 40
volume9Size:
Description: Size of volume 9
Type: Number
Default: 40
volume10Size:
Description: Size of volume 10
Type: Number
Default: 40
volume11Size:
Description: Size of volume 11
Type: Number
Default: 40
volume12Size:
Description: Size of volume 12
Type: Number
Default: 40
volume13Size:
Description: Size of volume 13
Type: Number
Default: 40
volume14Size:
Description: Size of volume 14
Type: Number
Default: 40
volume15Size:
Description: Size of volume 15
Type: Number
Default: 40
volume16Size:
Description: Size of volume 16
Type: Number
Default: 40
volume17Size:
Description: Size of volume 17
Type: Number
Default: 40
volume18Size:
Description: Size of volume 18
Type: Number
Default: 40
volume19Size:
Description: Size of volume 19
Type: Number
Default: 40
volume20Size:
Description: Size of volume 20
Type: Number
Default: 40
volume21Size:
Description: Size of volume 21
Type: Number
Default: 40
volume22Size:
Description: Size of volume 22
Type: Number
Default: 40
volume23Size:
Description: Size of volume 23
Type: Number
Default: 40
volume24Size:
Description: Size of volume 24
Type: Number
Default: 40
volume25Size:
Description: Size of volume 25
Type: Number
Default: 40
volume26Size:
Description: Size of volume 26
Type: Number
Default: 40
Conditions:
hasVolume26: !Equals
- !Ref numberOfAdditionalVolumes
- 25
hasVolume25: !Or
- Condition: hasVolume26
- !Equals
- !Ref numberOfAdditionalVolumes
- 24
hasVolume24: !Or
- Condition: hasVolume25
- !Equals
- !Ref numberOfAdditionalVolumes
- 23
hasVolume23: !Or
- Condition: hasVolume24
- !Equals
- !Ref numberOfAdditionalVolumes
- 22
hasVolume22: !Or
- Condition: hasVolume23
- !Equals
- !Ref numberOfAdditionalVolumes
- 21
hasVolume21: !Or
- Condition: hasVolume22
- !Equals
- !Ref numberOfAdditionalVolumes
- 20
hasVolume20: !Or
- Condition: hasVolume21
- !Equals
- !Ref numberOfAdditionalVolumes
- 19
hasVolume19: !Or
- Condition: hasVolume20
- !Equals
- !Ref numberOfAdditionalVolumes
- 18
hasVolume18: !Or
- Condition: hasVolume19
- !Equals
- !Ref numberOfAdditionalVolumes
- 17
hasVolume17: !Or
- Condition: hasVolume18
- !Equals
- !Ref numberOfAdditionalVolumes
- 16
hasVolume16: !Or
- Condition: hasVolume17
- !Equals
- !Ref numberOfAdditionalVolumes
- 15
hasVolume15: !Or
- Condition: hasVolume16
- !Equals
- !Ref numberOfAdditionalVolumes
- 14
hasVolume14: !Or
- Condition: hasVolume15
- !Equals
- !Ref numberOfAdditionalVolumes
- 13
hasVolume13: !Or
- Condition: hasVolume14
- !Equals
- !Ref numberOfAdditionalVolumes
- 12
hasVolume12: !Or
- Condition: hasVolume13
- !Equals
- !Ref numberOfAdditionalVolumes
- 11
hasVolume11: !Or
- Condition: hasVolume12
- !Equals
- !Ref numberOfAdditionalVolumes
- 10
hasVolume10: !Or
- Condition: hasVolume11
- !Equals
- !Ref numberOfAdditionalVolumes
- 9
hasVolume9: !Or
- Condition: hasVolume10
- !Equals
- !Ref numberOfAdditionalVolumes
- 8
hasVolume8: !Or
- Condition: hasVolume9
- !Equals
- !Ref numberOfAdditionalVolumes
- 7
hasVolume7: !Or
- Condition: hasVolume8
- !Equals
- !Ref numberOfAdditionalVolumes
- 6
hasVolume6: !Or
- Condition: hasVolume7
- !Equals
- !Ref numberOfAdditionalVolumes
- 5
hasVolume5: !Or
- Condition: hasVolume6
- !Equals
- !Ref numberOfAdditionalVolumes
- 4
hasVolume4: !Or
- Condition: hasVolume5
- !Equals
- !Ref numberOfAdditionalVolumes
- 3
hasVolume3: !Or
- Condition: hasVolume4
- !Equals
- !Ref numberOfAdditionalVolumes
- 2
hasVolume2: !Or
- Condition: hasVolume3
- !Equals
- !Ref numberOfAdditionalVolumes
- 1
Resources:
Instance:
Type: AWS::EC2::Instance
Properties:
DisableApiTermination: true
ImageId: !Ref imageId
InstanceType: !Ref instanceType
KeyName: !Ref instanceKeyName
IamInstanceProfile: ssmEc2DelegatedRole
Tenancy: default
SubnetId: !Ref subnetId
EbsOptimized: !Ref ebsOptimized
Tags:
- Key: Name
Value: !Ref name
SecurityGroupIds: !Ref securityGroupIds
BlockDeviceMappings:
- DeviceName: /dev/sda1
Ebs:
Encrypted: true
VolumeSize: !Ref osVolumeSize
VolumeType: gp3
- !If
- hasVolume2
- DeviceName: /dev/sdb
Ebs:
Encrypted: true
VolumeSize: !Ref volume2Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume3
- DeviceName: /dev/sdc
Ebs:
Encrypted: true
VolumeSize: !Ref volume3Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume4
- DeviceName: /dev/sdd
Ebs:
Encrypted: true
VolumeasdfSize: !Ref volume4Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume5
- DeviceName: /dev/sde
Ebs:
Encrypted: true
VolumeSize: !Ref volume5Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume6
- DeviceName: /dev/sdf
Ebs:
Encrypted: true
VolumeSize: !Ref volume6Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume7
- DeviceName: /dev/sdg
Ebs:
Encrypted: true
VolumeSize: !Ref volume7Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume8
- DeviceName: /dev/sdh
Ebs:
Encrypted: true
VolumeSize: !Ref volume8Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume9
- DeviceName: /dev/sdi
Ebs:
Encrypted: true
VolumeSize: !Ref volume9Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume10
- DeviceName: /dev/sdj
Ebs:
Encrypted: true
VolumeSize: !Ref volume10Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume11
- DeviceName: /dev/sdk
Ebs:
Encrypted: true
VolumeSize: !Ref volume11Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume12
- DeviceName: /dev/sdl
Ebs:
Encrypted: true
VolumeSize: !Ref volume12Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume13
- DeviceName: /dev/sdm
Ebs:
Encrypted: true
VolumeSize: !Ref volume13Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume14
- DeviceName: /dev/sdn
Ebs:
Encrypted: true
VolumeSize: !Ref volume14Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume15
- DeviceName: /dev/sdo
Ebs:
Encrypted: true
VolumeSize: !Ref volume15Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume16
- DeviceName: /dev/sdp
Ebs:
Encrypted: true
VolumeSize: !Ref volume16Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume17
- DeviceName: /dev/sdq
Ebs:
Encrypted: true
VolumeSize: !Ref volume17Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume18
- DeviceName: /dev/sdr
Ebs:
Encrypted: true
VolumeSize: !Ref volume18Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume19
- DeviceName: /dev/sds
Ebs:
Encrypted: true
VolumeSize: !Ref volume19Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume20
- DeviceName: /dev/sdt
Ebs:
Encrypted: true
VolumeSize: !Ref volume20Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume21
- DeviceName: /dev/sdu
Ebs:
Encrypted: true
VolumeasdfSize: !Ref volume21Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume22
- DeviceName: /dev/sdv
Ebs:
Encrypted: true
VolumeSize: !Ref volume22Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume23
- DeviceName: /dev/sdw
Ebs:
Encrypted: true
VolumeSize: !Ref volume23Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume24
- DeviceName: /dev/sdx
Ebs:
Encrypted: true
VolumeSize: !Ref volume24Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume25
- DeviceName: /dev/sdy
Ebs:
Encrypted: true
VolumeSize: !Ref volume25Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
- !If
- hasVolume26
- DeviceName: /dev/sdz
Ebs:
Encrypted: true
VolumeSize: !Ref volume26Size
VolumeType: gp3
- !Ref 'AWS::NoValue'
I see this has been merged, and after running 1.3.2 I can confirm it does indeed finish linting very quickly.
Thank you so much for your continued support on this.
| gharchive/issue | 2024-06-20T19:37:27 | 2025-04-01T06:37:59.636094 | {
"authors": [
"randybasrs"
],
"repo": "aws-cloudformation/cfn-lint",
"url": "https://github.com/aws-cloudformation/cfn-lint/issues/3356",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
974888641 | Auto-generate controllers when new runtime released
Regenerate all service controllers to runtime v0.13.0 and code-gen v0.13.0 to include recent bug fixes.
Now up and running!
| gharchive/issue | 2021-08-19T17:25:31 | 2025-04-01T06:37:59.654242 | {
"authors": [
"RedbackThomson",
"vijtrip2"
],
"repo": "aws-controllers-k8s/community",
"url": "https://github.com/aws-controllers-k8s/community/issues/908",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
993861572 | apigatewayv2 - DomainNameConfigurations missing in DomainNameObservation
For apigatewayv2, the DomainNameConfigurations field is missing from the DomainNameObservation struct.
https://github.com/aws/aws-sdk-go/blob/v1.37.10/service/apigatewayv2/api.go#L12137-L12158
at the moment:
type DomainNameObservation struct {
APIMappingSelectionExpression *string `json:"apiMappingSelectionExpression,omitempty"`
DomainName *string `json:"domainName,omitempty"`
}
what we need:
type DomainNameObservation struct {
APIMappingSelectionExpression *string `json:"apiMappingSelectionExpression,omitempty"`
DomainName *string `json:"domainName,omitempty"`
DomainNameConfigurations []*DomainNameConfiguration `json:"domainNameConfigurations,omitempty"`
}
we have one Issue in Crossplane Provider-AWS for this: https://github.com/crossplane/provider-aws/issues/826
Hi, thanks for pointing it out.
APIGatewayv2 controller is currently at aws-sdk-go v1.35.5 https://github.com/aws-controllers-k8s/apigatewayv2-controller/blob/main/apis/v1alpha1/ack-generate-metadata.yaml#L8
We are working on upgrading it but have some problems with how controller-gen handles maps of maps, which were introduced after v1.35.5.
apigatewayv2 controller now updated to v1.37.10
https://github.com/aws-controllers-k8s/apigatewayv2-controller/blob/main/apis/v1alpha1/ack-generate-metadata.yaml#L8
| gharchive/issue | 2021-09-11T15:03:44 | 2025-04-01T06:37:59.658688 | {
"authors": [
"haarchri",
"vijtrip2"
],
"repo": "aws-controllers-k8s/community",
"url": "https://github.com/aws-controllers-k8s/community/issues/951",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1958141905 | Updated submodules/lacework-control-tower-cfn.
Updated submodules/lacework-control-tower-cfn. Removed extraneous test parameters from lacework-control-tower-cfn cfn-abi-control-tower-integration.template.yaml.
/do-e2e-tests
@jefferyfry tests fail due to issues with the below resources from the ControlTower submodule:
LaceworkAuthFunction
LaceworkSetupFunction
LaceworkAccountFunction
All 3 failed due to S3 permissions or an incorrect reference to the key.
Resource handler returned message: "Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist. (
@kkvinjam Added missing lambda zip files for ControlTower submodule.
| gharchive/pull-request | 2023-10-23T22:43:39 | 2025-04-01T06:37:59.661416 | {
"authors": [
"jefferyfry",
"kkvinjam"
],
"repo": "aws-ia/cfn-abi-lacework-polygraph",
"url": "https://github.com/aws-ia/cfn-abi-lacework-polygraph/pull/72",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2147751009 | Feature request: allow to pass custom logger for warning and debug logs
Use case
The Metrics utility emits some warning logs to notify customers that certain expected conditions are not being met. This is the case when a namespace is not specified or when the publishStoredMetrics() method is called on an empty buffer.
Currently, customers have no way of suppressing these warnings, and some customers have reported wanting to do so (#2036). I think this is a fair ask, and whatever implementation we settle on in this issue will also be reused for other utilities that emit either warnings or debug logs (Idempotency, Tracer, and Parameters).
Solution/User Experience
We could define a new type/interface in the commons package and expose it:
interface UtilityLogger {
trace?: (...content: any[]) => void;
debug: (...content: any[]) => void;
info: (...content: any[]) => void;
warn: (...content: any[]) => void;
error: (...content: any[]) => void;
}
From there, customers can use it as a reference to create their own logger and pass it to the Metrics utility.
In the example below I'm making the warn method a no-op to disable the warnings entirely, but customers can write their own custom implementation to decide whether the logs are emitted or not.
import { Metrics } from '@aws-lambda-powertools/metrics';
import type { UtilityLogger } from '@aws-lambda-powertools/commons/types';
const myLogger: UtilityLogger = {
debug: console.debug,
info: console.info,
warn: () => {}, // no-op - but customers can add their own logic
error: console.error,
};
const metrics = new Metrics({
namespace: 'serverlessAirline',
serviceName: 'orders',
logger: myLogger,
});
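As a further illustration of the "own logic" mentioned in the comment above, a customer-defined logger could gate the warnings behind a flag instead of silencing them outright. This is only a sketch against the proposed UtilityLogger interface; the SUPPRESS_POWERTOOLS_WARNINGS environment variable is a hypothetical name, not an existing Powertools setting:
import { Metrics } from '@aws-lambda-powertools/metrics';
import type { UtilityLogger } from '@aws-lambda-powertools/commons/types';
// Hypothetical toggle, read once at init time.
const suppressWarnings = process.env.SUPPRESS_POWERTOOLS_WARNINGS === 'true';
const conditionalLogger: UtilityLogger = {
  debug: console.debug,
  info: console.info,
  // Forward to console.warn only while the toggle is off.
  warn: (...content: any[]) => {
    if (!suppressWarnings) console.warn(...content);
  },
  error: console.error,
};
const metrics = new Metrics({
  namespace: 'serverlessAirline',
  serviceName: 'orders',
  logger: conditionalLogger,
});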
Customers should also be able to pass an instance of Powertools Logger if they wish to do so:
import { Metrics } from '@aws-lambda-powertools/metrics';
import { Logger } from '@aws-lambda-powertools/logger';
const logger = new Logger({
serviceName: 'orders',
logLevel: 'ERROR',
});
const metrics = new Metrics({
namespace: 'serverlessAirline',
serviceName: 'orders',
logger,
});
My main concern with this is avoiding confusion and conveying clearly that this is only a logger that will be used for debug and warning logs, but not to emit the EMF metrics themselves.
The Metrics utility maintains its own Console object that logs the metrics using console.log (notice that the log() method is not part of the suggested interface). This is needed for the Metrics utility to work with the Advanced Logging Configuration feature.
Alternative solutions
If my memory serves me right, the AWS Lambda Node.js managed runtime treats warnings emitted via process.emitWarning(warning[, options]) as errors, rendering this method unviable.
As part of this issue, however, we should still test this option just in case I'm wrong.
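A minimal probe for that test could look like the sketch below: deploy it as a Lambda handler and check whether the runtime surfaces the call at WARN or ERROR level. The warning text, type, and code are illustrative placeholders, not strings Powertools emits today:
// Deployed as-is; no imports needed since process is a Node.js global.
export const handler = async (): Promise<void> => {
  // If the managed runtime reports this as an error rather than a warning,
  // the process.emitWarning() approach is unviable, as suspected above.
  process.emitWarning('Namespace should be defined, default used', {
    type: 'PowertoolsWarning', // illustrative
    code: 'POWERTOOLS_WARNING', // illustrative
  });
};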
Other alternatives that I'm not inclined to consider would go along the lines of adding a doNotWarnOnEmptyMetrics option to suppress these warnings.
Acknowledgment
[X] This feature request meets Powertools for AWS Lambda (TypeScript) Tenets
[ ] Should this be considered in other Powertools for AWS Lambda languages? i.e. Python, Java, and .NET
Future readers
Please react with 👍 and your use case to help us understand customer demand.
@heitorlessa & @am29d would love your opinion on this and especially your point of view on the concern I share at the end of the "Solution/User Experience" section.
Also please let me know if anything is not clear, happy to clarify & expound on any detail. Thanks!
| gharchive/issue | 2024-02-21T21:32:57 | 2025-04-01T06:37:59.669178 | {
"authors": [
"dreamorosi"
],
"repo": "aws-powertools/powertools-lambda-typescript",
"url": "https://github.com/aws-powertools/powertools-lambda-typescript/issues/2126",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
2368866002 | Customer support Streamlit application with Guardrails
Issue #, if available:
#210
Description of changes:
Features:
CloudFormation Template:
Defines a guardrail for the customer support chatbot.
Filters out harmful content and protects sensitive information.
Configures Content Policy, Sensitive Information Policy, Topic Policy, and Word Policy.
Python Scripts:
deploy_guardrails_infra.sh: Bash script to deploy the CloudFormation stack and retrieve the guardrail identifier.
streamlit_guardrails_app.py: Streamlit app to interact with the chatbot, using the guardrails set up in the CloudFormation stack.
Requirements:
Added requirements.txt to install necessary Python packages, including Streamlit.
Documentation:
Detailed README with step-by-step instructions to set up the environment, deploy the CloudFormation stack, and run the Streamlit app.
Included links to relevant resources and documentation for further reference.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Please redo this PR following the new layout and the process for creating the required files. We request all files to have a presence on the Cookbook / Recipes website, which is fronting this GitHub repo now. Some of the mandatory sections of the notebook need to include:
Overview
What we are demonstrating
What use case
What you will learn
The architectural pattern and why we selected it, with a diagram
The libraries to install
What model we chose and why
Every cell needs to have a markup
This PR needs to go to responsibleai / use-cases
| gharchive/pull-request | 2024-06-23T19:54:58 | 2025-04-01T06:37:59.680510 | {
"authors": [
"mccartni-aws",
"rsgrewal-aws"
],
"repo": "aws-samples/amazon-bedrock-samples",
"url": "https://github.com/aws-samples/amazon-bedrock-samples/pull/211",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
2112867750 | Error: Security Constraints Not Satisfied!
❯ sam deploy --guided
Configuring SAM deploy
======================
Looking for config file [samconfig.toml] : Not found
Setting default arguments for 'sam deploy'
=========================================
Stack Name [sam-app]: s3Uploader
AWS Region [us-east-1]: us-east-2
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]: Y
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]: y
#Preserves the state of previously provisioned resources when an operation fails
Disable rollback [y/N]: y
UploadRequestFunction has no authentication. Is this okay? [y/N]: n
Error: Security Constraints Not Satisfied!
It shouldn't be prompting me at all.
| gharchive/issue | 2024-02-01T16:03:39 | 2025-04-01T06:37:59.683642 | {
"authors": [
"NorseGaud"
],
"repo": "aws-samples/amazon-s3-presigned-urls-aws-sam",
"url": "https://github.com/aws-samples/amazon-s3-presigned-urls-aws-sam/issues/27",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
979533520 | CFN stack name as argument
It seems the CFN stack name is required.
Issue #, if available: Without the CFN stack name, the Amplify frontend build fails.
Description of changes: Added the CFN stack name as an argument to the setup_config.py script.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Hi @shankarr-code! Thank you for providing this contribution. I am going to close this PR as the issue addressed by this PR is fixed in #29. More specifically, #29 will default to using the broadcast-monitoring stack name when an argument is not provided. Additionally, it allows for an environment variable (with a custom stack name) to be defined for CI/CD builds.
| gharchive/pull-request | 2021-08-25T19:02:09 | 2025-04-01T06:37:59.685827 | {
"authors": [
"abest0",
"shankarr-code"
],
"repo": "aws-samples/automating-livestream-video-monitoring",
"url": "https://github.com/aws-samples/automating-livestream-video-monitoring/pull/28",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
1662510953 | Embedded stack arn:aws:cloudformation
Hi Team,
We are deploying the Compute Optimizer dashboard. While running the CloudFormation template on the Data Collection account, we are getting the below error:
Embedded stack arn:aws:cloudformation:ap-south-1:xxxxxx:stack/OptimizationDataCollectionStack1-ComputeOptimizerModule-xxxxxx was not successfully created: The following resource(s) failed to create: [ReplicaitonBucketsStackSet].
We are creating it using the following stack:
https://wellarchitectedlabs.com/cost/300_labs/300_optimization_data_collection/2_deploy_main_resources/
The stack and StackSet created on the Management account ran successfully, without any issue.
Can you please suggest what the probable issue could be?
Thanks
Shweta
Can you check the deleted stacks in CloudFormation for the stack whose name starts with "StackSet-" and check the failure reason?
Most probably, you need to add another exception in your SCP.
There are 2 roles involved:
https://github.com/awslabs/aws-well-architected-labs/blob/master/static/Cost/300_Optimization_Data_Collection/Code/module-compute-optimizer.yaml#L132
https://github.com/awslabs/aws-well-architected-labs/blob/master/static/Cost/300_Optimization_Data_Collection/Code/module-compute-optimizer.yaml#L86
Hello Shweta, were you able to allow these roles and deploy the stack?
Please comment / reopen if still an issue
| gharchive/issue | 2023-04-11T13:40:02 | 2025-04-01T06:37:59.690634 | {
"authors": [
"10shweta",
"iakov-aws"
],
"repo": "aws-samples/aws-cudos-framework-deployment",
"url": "https://github.com/aws-samples/aws-cudos-framework-deployment/issues/516",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |