Dataset fields:
id: string (length 4 to 10)
text: string (length 4 to 2.14M)
source: string (2 classes)
created: timestamp[s] (range 2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: string date (range 2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict
327337214
Incorrect requirement for azure.mgmt.compute

It seems that the version requirement for azure-common~=1.1 in azure.mgmt.compute is too broad: new versions of the package reference azure.profiles, which was introduced in azure-common 1.1.9. For example, if I try to install the latest version (4.0.0rc2) of azure.mgmt.compute in an environment which already has a previous, older version of azure-common, the installation completes successfully but I can't import anything from the new package.

Steps to reproduce: from a clean environment, install an older version of azure-common, then install the latest azure.mgmt.compute from source.

```
pip install "azure-common<=1.1.8"
git clone https://github.com/Azure/azure-sdk-for-python.git
pip install azure-sdk-for-python/azure-mgmt-compute
```

In a Python console, import the new package.

```
>>> import azure.mgmt.compute
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/ageorgou/anaconda/envs/test-azure/lib/python3.6/site-packages/azure/mgmt/compute/__init__.py", line 8, in <module>
    from .compute_management_client import ComputeManagementClient
  File "/Users/ageorgou/anaconda/envs/test-azure/lib/python3.6/site-packages/azure/mgmt/compute/compute_management_client.py", line 16, in <module>
    from azure.profiles import KnownProfiles, ProfileDefinition
ModuleNotFoundError: No module named 'azure.profiles'
```

Expected behaviour is for the import to work, as it successfully does with newer (>=1.1.9) versions of azure-common.

Hi @ageorgou, yes, you're right, I should write ~=1.1;>=1.1.9. I didn't notice since I do a fresh env each time, but on update it won't work. Will fix it asap. Thanks!

Hi @lmazuel, yes, I only noticed this when updating an existing installation. I was looking at this again and realised that it happens in other packages too. A quick search (grep "azure.profiles") shows:

azure-mgmt-compute
azure-mgmt-resource
azure-mgmt-storage
azure-mgmt-network
azure-mgmt-containerregistry

The last two already require specific versions. I have submitted a PR for the other three in case that makes things easier, but please feel free to ignore it if not!

Hi @ageorgou, I released storage this morning including the fix, so your PR conflicts :/. If you could remove storage from your PR, I'll merge it. Thanks!

Hi @lmazuel, done now.
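For readers who want to apply the same kind of pin in their own packages, a minimal setup.py sketch is below; the package name and the exact version specifier are illustrative only and not necessarily what the SDK fix shipped with.

```python
# Hypothetical setup.py fragment illustrating the tighter constraint discussed above.
from setuptools import find_packages, setup

setup(
    name="example-azure-mgmt-package",  # placeholder name, not a real SDK package
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        # Require a version of azure-common that already ships azure.profiles
        # (introduced in 1.1.9), instead of the broader azure-common~=1.1.
        "azure-common>=1.1.9,<2.0",
    ],
)
```

The point of the tighter lower bound is that pip will then upgrade an already-installed, older azure-common instead of leaving it in place and letting the import fail at runtime.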
gharchive/issue
2018-05-29T14:07:48
2025-04-01T06:36:45.407320
{ "authors": [ "ageorgou", "lmazuel" ], "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/2644", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1423159972
Failing to find Subscription ID when targeting AzureUSGovernment Tenant Package Name: MLClient Package Version: SDK V2 Operating System: AML Compute Instance STANDARD_DS11_V2 Python Version: Python 3.10 Describe the bug After initializing an instance of the MLClient module, executing any of it's methods results in the error below. To Reproduce Steps to reproduce the behavior: Pre-requirements: Have an AzureUSGovernment tenant and subscrition Have an AML Workspace created, along with a Compute Instance Have a Service Principal created in the above subscription, and given a "Contributor" role assignment to the AML Workspace Run a notebook in AML using the compute instance, and updating the placeholder environment variables: from azure.ai.ml.entities import AmlCompute from azure.ai.ml import MLClient from azure.identity import DefaultAzureCredential, AzureAuthorityHosts, EnvironmentCredential import traceback # Set ENV Variables os.environ["AZURE_CLIENT_SECRET"] = "<value>" os.environ["AZURE_CLIENT_ID"] = "<value>" os.environ["AZURE_TENANT_ID"] = "<value>" os.environ["AZURE_AUTHORITY_HOST"] = AzureAuthorityHosts.AZURE_GOVERNMENT credentials = DefaultAzureCredential( interactive_browser_tenant_id=os.environ["AZURE_TENANT_ID"], authority=AzureAuthorityHosts.AZURE_GOVERNMENT ) ml_client = MLClient( credential=credentials, subscription_id="<value>", resource_group_name="<value>", workspace_name="<value>", cloud="AzureUSGovernment", ) # Name assigned to the compute cluster cpu_compute_target = "cpu-cluster-2" try: # let's see if the compute target already exists cpu_cluster = ml_client.compute.get(cpu_compute_target) print( f"You already have a cluster named {cpu_compute_target}, we'll reuse it as is." ) except Exception: print("Creating a new cpu compute target...") # Let's create the Azure ML compute object with the intended parameters cpu_cluster = AmlCompute( name=cpu_compute_target, # Azure ML Compute is the on-demand VM service type="amlcompute", # VM Family size="STANDARD_DS3_V2", # Minimum running nodes when there is no job running min_instances=0, # Nodes in cluster max_instances=4, # How many seconds will the node running after the job termination idle_time_before_scale_down=180, # Dedicated or LowPriority. The latter is cheaper but there is a chance of job termination tier="Dedicated", ) # Now, we pass the object to MLClient's create_or_update method cpu_cluster = ml_client.compute.begin_create_or_update(cpu_cluster) print( f"AMLCompute with name {cpu_cluster.name} is created, the compute size is {cpu_cluster.size}" ) Expected behavior The above code should result in either a new CPU Cluster being created, or printing out the message You already have a cluster named {cpu_compute_target}, we'll reuse it as is." Screenshots The actual behavior is an error: ResourceNotFoundError: (SubscriptionNotFound) The subscription 'xxxxxxxxxxxxxxxx' could not be found. Code: SubscriptionNotFound Message: The subscription 'xxxxxxxxxxxxxxxx' could not be found. The stack trace is: Creating a new cpu compute target... --------------------------------------------------------------------------- ResourceNotFoundError Traceback (most recent call last) Input In [8], in <cell line: 13>() 13 try: 14 # let's see if the compute target already exists ---> 15 cpu_cluster = ml_client_6.compute.get(cpu_compute_target) 16 print( 17 f"You already have a cluster named {cpu_compute_target}, we'll reuse it as is." 
18 ) File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_telemetry/activity.py:169, in monitor_with_activity.<locals>.monitor.<locals>.wrapper(*args, **kwargs) 168 with log_activity(logger, activity_name or f.__name__, activity_type, custom_dimensions): --> 169 return f(*args, **kwargs) File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_operations/compute_operations.py:75, in ComputeOperations.get(self, name) 67 """Get a compute resource 68 69 :param name: Name of the compute (...) 72 :rtype: Compute 73 """ ---> 75 response, rest_obj = self._operation.get( 76 self._operation_scope.resource_group_name, 77 self._workspace_name, 78 name, 79 cls=get_http_response_and_deserialized_from_pipeline_response, 80 ) 81 # TODO: Remove warning logging after 05/31/2022 (Task 1776012) File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/core/tracing/decorator.py:83, in distributed_trace.<locals>.decorator.<locals>.wrapper_use_tracer(*args, **kwargs) 82 if span_impl_type is None: ---> 83 return func(*args, **kwargs) 85 # Merge span is parameter is set, but only if no explicit parent are passed File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_restclient/v2022_01_01_preview/operations/_compute_operations.py:577, in ComputeOperations.get(self, resource_group_name, workspace_name, compute_name, **kwargs) 576 if response.status_code not in [200]: --> 577 map_error(status_code=response.status_code, response=response, error_map=error_map) 578 error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response) File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/core/exceptions.py:105, in map_error(status_code, response, error_map) 104 error = error_type(response=response) --> 105 raise error ResourceNotFoundError: (SubscriptionNotFound) The subscription '50ff9458-6372-4522-8227-327043deaef5' could not be found. Code: SubscriptionNotFound Message: The subscription '50ff9458-6372-4522-8227-327043deaef5' could not be found. During handling of the above exception, another exception occurred: ResourceNotFoundError Traceback (most recent call last) Input In [8], in <cell line: 13>() 24 cpu_cluster = AmlCompute( 25 name=cpu_compute_target, 26 # Azure ML Compute is the on-demand VM service (...) 37 tier="Dedicated", 38 ) 40 # Now, we pass the object to MLClient's create_or_update method ---> 41 cpu_cluster = ml_client_6.compute.begin_create_or_update(cpu_cluster) 43 print( 44 f"AMLCompute with name {cpu_cluster.name} is created, the compute size is {cpu_cluster.size}" 45 ) File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_telemetry/activity.py:169, in monitor_with_activity.<locals>.monitor.<locals>.wrapper(*args, **kwargs) 166 @functools.wraps(f) 167 def wrapper(*args, **kwargs): 168 with log_activity(logger, activity_name or f.__name__, activity_type, custom_dimensions): --> 169 return f(*args, **kwargs) File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_operations/compute_operations.py:116, in ComputeOperations.begin_create_or_update(self, compute, **kwargs) 107 @monitor_with_activity(logger, "Compute.BeginCreateOrUpdate", ActivityType.PUBLICAPI) 108 def begin_create_or_update(self, compute: Compute, **kwargs: Any) -> LROPoller: 109 """Create a compute 110 111 :param compute: Compute definition. (...) 
114 :rtype: LROPoller 115 """ --> 116 compute.location = self._get_workspace_location() 117 compute._set_full_subnet_name(self._operation_scope.subscription_id, self._operation_scope.resource_group_name) 119 compute_rest_obj = compute._to_rest_object() File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_operations/compute_operations.py:308, in ComputeOperations._get_workspace_location(self) 307 def _get_workspace_location(self) -> str: --> 308 workspace = self._workspace_operations.get(self._resource_group_name, self._workspace_name) 309 return workspace.location File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/core/tracing/decorator.py:83, in distributed_trace.<locals>.decorator.<locals>.wrapper_use_tracer(*args, **kwargs) 81 span_impl_type = settings.tracing_implementation() 82 if span_impl_type is None: ---> 83 return func(*args, **kwargs) 85 # Merge span is parameter is set, but only if no explicit parent are passed 86 if merge_span and not passed_in_parent: File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_restclient/v2022_01_01_preview/operations/_workspaces_operations.py:615, in WorkspacesOperations.get(self, resource_group_name, workspace_name, **kwargs) 612 response = pipeline_response.http_response 614 if response.status_code not in [200]: --> 615 map_error(status_code=response.status_code, response=response, error_map=error_map) 616 error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response) 617 raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat) File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/core/exceptions.py:105, in map_error(status_code, response, error_map) 103 return 104 error = error_type(response=response) --> 105 raise error Additional context I looked through the source code in _azure_environments.py file and also the _ml_client.py file to infer what environment variables and values I needed to pass into the MLClient constructor. However, something doesn't appear to be working correctly. Here is an example for running a notebook in Non-public cloud: https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/multicloud-configuration.ipynb You may need to pass the cloud name in the kwargs for MLClient. # NOTE: cloud parameter is required in kwargs to signal mlclient to connect to the appropriate endpoints in Azure. kwargs = {"cloud": "AzureChinaCloud"} ml_client = MLClient(credential, subscription_id, resource_group, **kwargs) Hi @harneetvirk , unfortunately I am still facing the same error. Below is my latest attempt. First I make sure that the authentication works (replacing the placeholder values with the correct values) from azure.ai.ml.entities import AmlCompute from azure.ai.ml import MLClient from azure.identity import DefaultAzureCredential, AzureAuthorityHosts # Set ENV Variables os.environ["AZURE_USERNAME"] = "xxxxxxxxx" os.environ["AZURE_PASSWORD"] = "xxxxxxxxx" os.environ["AZURE_TENANT_ID"] = "xxxxxxxxxx" kwargs = {"cloud": "AzureUSGovernment"} credentials = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT) credentials.get_token("https://management.usgovcloudapi.net/.default") This results in a successful authentication and token retrieval. But then running the below command, after putting in the correct subscription ID, using the ML Client errors out. 
ml_client = MLClient( credential=credentials, subscription_id="xxxxxxxxx", resource_group_name="xxxxxxxx", workspace_name="xxxxxxx", **kwargs, ) print(ml_client) # Get a list of workspaces in a resource group for ws in ml_client.workspaces.list(): print(ws.name, ":", ws.location, ":", ws.description) error is ResourceNotFoundError: (SubscriptionNotFound) The subscription '50ff9458-6372-4522-8227-327043deaef5' could not be found. Code: SubscriptionNotFound Message: The subscription '50ff9458-6372-4522-8227-327043deaef5' could not be found. I can perform the following in an Azure US Gov ML Studio notebook targeting Python 3.10 - SDK V2 (default Compute) and it works. Seems like there is a default version installed of azure-ai-ml at 2.4.1 that might be causing some issues. ---------CELL 1--------- pip list | grep azure Output: azure-ai-ml 2.4.1 azure-common 1.1.28 azure-core 1.22.1 azure-identity 1.10.0 azure-mgmt-core 1.3.0 azure-ml 2.3.1 azure-storage-blob 12.9.0 azure-storage-file-share 12.7.0 Note: you may need to restart the kernel to use updated packages. ---------CELL 2--------- pip uninstall azure-ai-ml azure-ml -y Output: Found existing installation: azure-ai-ml 2.4.1 Uninstalling azure-ai-ml-2.4.1: Successfully uninstalled azure-ai-ml-2.4.1 Found existing installation: azure-ml 2.3.1 Uninstalling azure-ml-2.3.1: Successfully uninstalled azure-ml-2.3.1 Note: you may need to restart the kernel to use updated packages. ---------CELL 3--------- pip install --pre azure-ai-ml Output: Collecting azure-ai-ml Downloading azure_ai_ml-1.0.0-py3-none-any.whl (4.0 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.0/4.0 MB 55.9 MB/s eta 0:00:00:00:010:01 Collecting azure-storage-blob<13.0.0,>=12.10.0 Downloading azure_storage_blob-12.14.1-py3-none-any.whl (383 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 383.2/383.2 kB 28.9 MB/s eta 0:00:00 Requirement already satisfied: azure-mgmt-core<2.0.0,>=1.3.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (1.3.0) Requirement already satisfied: azure-common<2.0.0,>=1.1 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (1.1.28) Requirement already satisfied: msrest>=0.6.18 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (0.6.21) Collecting strictyaml<=1.6.1 Downloading strictyaml-1.6.1.tar.gz (137 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 137.7/137.7 kB 11.9 MB/s eta 0:00:00 Preparing metadata (setup.py) ... 
done Requirement already satisfied: azure-core!=1.22.0,<2.0.0,>=1.8.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (1.22.1) Requirement already satisfied: pydash<6.0.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (4.9.0) Requirement already satisfied: pyyaml<7.0.0,>=5.1.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (6.0) Requirement already satisfied: colorama<=0.4.4 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (0.4.4) Collecting azure-storage-file-datalake<13.0.0 Downloading azure_storage_file_datalake-12.9.1-py3-none-any.whl (238 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 238.8/238.8 kB 23.0 MB/s eta 0:00:00 Requirement already satisfied: tqdm<=4.63.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (4.63.0) Requirement already satisfied: pyjwt<3.0.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (2.3.0) Requirement already satisfied: marshmallow<4.0.0,>=3.5 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (3.17.0) Requirement already satisfied: typing-extensions<5.0.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (3.10.0.2) Requirement already satisfied: azure-storage-file-share<13.0.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (12.7.0) Requirement already satisfied: isodate in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (0.6.1) Requirement already satisfied: jsonschema<5.0.0,>=4.0.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (4.13.0) Requirement already satisfied: six>=1.11.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-core!=1.22.0,<2.0.0,>=1.8.0->azure-ai-ml) (1.16.0) Requirement already satisfied: requests>=2.18.4 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-core!=1.22.0,<2.0.0,>=1.8.0->azure-ai-ml) (2.28.1) Requirement already satisfied: cryptography>=2.1.4 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-storage-blob<13.0.0,>=12.10.0->azure-ai-ml) (37.0.4) Collecting msrest>=0.6.18 Downloading msrest-0.7.1-py3-none-any.whl (85 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 85.4/85.4 kB 10.9 MB/s eta 0:00:00 Collecting azure-core!=1.22.0,<2.0.0,>=1.8.0 Downloading azure_core-1.26.0-py3-none-any.whl (178 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 178.9/178.9 kB 21.4 MB/s eta 0:00:00 Collecting typing-extensions<5.0.0 Downloading typing_extensions-4.4.0-py3-none-any.whl (26 kB) Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from jsonschema<5.0.0,>=4.0.0->azure-ai-ml) (0.18.1) Requirement already satisfied: attrs>=17.4.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from jsonschema<5.0.0,>=4.0.0->azure-ai-ml) (22.1.0) Requirement already satisfied: packaging>=17.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from marshmallow<4.0.0,>=3.5->azure-ai-ml) (21.3) Requirement already satisfied: requests-oauthlib>=0.5.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from msrest>=0.6.18->azure-ai-ml) (1.3.1) Requirement already satisfied: certifi>=2017.4.17 in 
/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from msrest>=0.6.18->azure-ai-ml) (2022.6.15) Requirement already satisfied: python-dateutil>=2.6.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from strictyaml<=1.6.1->azure-ai-ml) (2.8.2) Requirement already satisfied: cffi>=1.12 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from cryptography>=2.1.4->azure-storage-blob<13.0.0,>=12.10.0->azure-ai-ml) (1.15.1) Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from packaging>=17.0->marshmallow<4.0.0,>=3.5->azure-ai-ml) (3.0.9) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from requests>=2.18.4->azure-core!=1.22.0,<2.0.0,>=1.8.0->azure-ai-ml) (1.26.11) Requirement already satisfied: charset-normalizer<3,>=2 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from requests>=2.18.4->azure-core!=1.22.0,<2.0.0,>=1.8.0->azure-ai-ml) (2.1.0) Requirement already satisfied: idna<4,>=2.5 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from requests>=2.18.4->azure-core!=1.22.0,<2.0.0,>=1.8.0->azure-ai-ml) (3.3) Requirement already satisfied: oauthlib>=3.0.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from requests-oauthlib>=0.5.0->msrest>=0.6.18->azure-ai-ml) (3.2.0) Requirement already satisfied: pycparser in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from cffi>=1.12->cryptography>=2.1.4->azure-storage-blob<13.0.0,>=12.10.0->azure-ai-ml) (2.21) Building wheels for collected packages: strictyaml Building wheel for strictyaml (setup.py) ... done Created wheel for strictyaml: filename=strictyaml-1.6.1-py3-none-any.whl size=123931 sha256=7f10357971c55b3c29d2dbee29b816db58830ad42ac89258d28bd87636c1f5a7 Stored in directory: /home/azureuser/.cache/pip/wheels/fb/ca/49/3c5046dee736c4c938048ce89b236b1643ea83178517b5f88a Successfully built strictyaml Installing collected packages: typing-extensions, strictyaml, azure-core, msrest, azure-storage-blob, azure-storage-file-datalake, azure-ai-ml Attempting uninstall: typing-extensions Found existing installation: typing-extensions 3.10.0.2 Uninstalling typing-extensions-3.10.0.2: Successfully uninstalled typing-extensions-3.10.0.2 Attempting uninstall: azure-core Found existing installation: azure-core 1.22.1 Uninstalling azure-core-1.22.1: Successfully uninstalled azure-core-1.22.1 Attempting uninstall: msrest Found existing installation: msrest 0.6.21 Uninstalling msrest-0.6.21: Successfully uninstalled msrest-0.6.21 Attempting uninstall: azure-storage-blob Found existing installation: azure-storage-blob 12.9.0 Uninstalling azure-storage-blob-12.9.0: Successfully uninstalled azure-storage-blob-12.9.0 Successfully installed azure-ai-ml-1.0.0 azure-core-1.26.0 azure-storage-blob-12.14.1 azure-storage-file-datalake-12.9.1 msrest-0.7.1 strictyaml-1.6.1 typing-extensions-4.4.0 Note: you may need to restart the kernel to use updated packages. 
---------CELL 4--------- import logging import requests import os from azure.ai.ml import MLClient from azure.identity import AzureAuthorityHosts, DefaultAzureCredential from azure.ai.ml.entities import Workspace subscription_id = "YOUR_VALUE_HERE" resource_group = "YOUR_VALUE_HERE" workspace_name = "YOUR_VALUE_HERE" logging.basicConfig() logging.getLogger().setLevel(logging.DEBUG) try: credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT) except Exception as ex: raise ex print(credential) try: kwargs = {"cloud": "AzureUSGovernment"} ml_client = MLClient(credential, subscription_id, resource_group, **kwargs) except Exception as ex: raise ex print(ml_client) # Get a list of workspaces in a resource group for ws in ml_client.workspaces.list(): print(ws.name, ":", ws.location, ":", ws.description) Could you please try to login to azure cli from the same machine from where you are running the notebooks and set the default subscription? az cloud set -n AzureUSGovernment az account set -s <SUBSCRIPTION-ID> This should set the default subscription for you on the machine. I have tried the following code snippet in Government cloud, and this is working with azure-ai-ml==1.0.0. Had to update the call to create compute by appending .result() for LRO poller. from azure.ai.ml.entities import AmlCompute from azure.ai.ml import MLClient from azure.identity import DefaultAzureCredential, AzureAuthorityHosts import traceback # Enter details of your subscription subscription_id = "SOME-SUBSCRITION-ID-IN-GOVT-CLOUD" resource_group = "test-rg-221005" workspace_name = "est-usgovvirginia" credentials = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT) kwargs = {"cloud": "AzureUSGovernment"} ml_client = MLClient(credential, subscription_id, resource_group, **kwargs) ml_client = MLClient( credential=credentials, subscription_id=subscription_id, resource_group_name=resource_group, workspace_name=workspace_name, cloud="AzureUSGovernment", ) # Name assigned to the compute cluster cpu_compute_target = "cpu-cluster-3" try: # let's see if the compute target already exists cpu_cluster = ml_client.compute.get(cpu_compute_target) print( f"You already have a cluster named {cpu_compute_target}, we'll reuse it as is." ) except Exception: print("Creating a new cpu compute target...") # Let's create the Azure ML compute object with the intended parameters cpu_cluster = AmlCompute( name=cpu_compute_target, # Azure ML Compute is the on-demand VM service type="amlcompute", # VM Family size="STANDARD_DS3_V2", # Minimum running nodes when there is no job running min_instances=0, # Nodes in cluster max_instances=4, # How many seconds will the node running after the job termination idle_time_before_scale_down=180, # Dedicated or LowPriority. The latter is cheaper but there is a chance of job termination tier="Dedicated", ) # Now, we pass the object to MLClient's create_or_update method cpu_cluster = ml_client.compute.begin_create_or_update(cpu_cluster).result() print( f"AMLCompute with name {cpu_cluster.name} is created, the compute size is {cpu_cluster.size}" ) Returned the following: Creating a new cpu compute target... 
AMLCompute with name cpu-cluster-3 is created, the compute size is STANDARD_DS3_V2 @adrian-gonzalez : I created a new Conda environment and installed the GA version of SDK (1.0.0) and tried running the notebook in usgovvirginia region to repro this issue but unfortunately, I was able to able to reproduce this issue after I configure the cloud name and account using CLI. az cloud set -n AzureUSGovernment az account set -s <SUBSCRIPTION-ID> By default, the SDK or CLI tries to connect to public cloud. Hi @harneetvirk - Please read through our previous comments. The steps to reproduce the issue lies with azure-ai-ml v2.4.1. it is this version that is the default when we are creating an instance of Azure Machine Learning, and therefore preventing our team from using this python package. azure-ai-ml v2.4.1 is having some known issues and will be removed from CI from the next release and will be replaced with azure-ai-ml 1.0.0 from pypi. To unblock, please install azure-ai-ml 1.0.0 from pypi. Thank you @harneetvirk . We can do that as a temporary workaround. When is the next release slated for? Can you confirm if once azure-ai-ml v2.4.1 is removed from CI, whether newly created AML instance won't have this issue moving forward? Just want to be sure that from a developer experience, that teams creating new AML instances don't have to always manually downgrade the auzre-ai-ml package to 1.0.0 @xiangyan99 can you or @harneetvirk provide guidance on the remaining above questions? I would like to confirm that the issue is resolved and that the steps to reproduce no longer result in the issue prior to closing this issue out. @adrian-gonzalez if you don't mind, could you open a new issue with the questions? Thanks. @xiangyan99 That doesn't seem efficient, are you sure we want to go that route? It'll effectively a copy of this issue to not duplicate the steps to reproduce and discussions. Happy to do that if that's the approach /unresolve @xiangyan99 can you or @harneetvirk provide guidance on the remaining above questions? I would like to confirm that the issue is resolved and that the steps to reproduce no longer result in the issue prior to closing this issue out. The new CI image has been released with SDK v2 package installed from pypi. Please create a new Compute Instance. Thanks @harneetvirk!
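Distilling the working configuration above into a minimal sketch (not the exact code from the thread): the two pieces that matter for a US Government tenant are the credential authority and the cloud argument on MLClient, together with installing azure-ai-ml 1.0.0 or later from PyPI rather than the preview 2.4.1 build. All resource identifiers below are placeholders.

```python
# Minimal sketch for targeting AzureUSGovernment with azure-ai-ml >= 1.0.0.
from azure.ai.ml import MLClient
from azure.identity import AzureAuthorityHosts, DefaultAzureCredential

# Point the credential at the Government authority instead of public Azure AD.
credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)

ml_client = MLClient(
    credential=credential,
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
    cloud="AzureUSGovernment",  # routes ARM calls to the Gov endpoints instead of public cloud
)

# Long-running operations return an LROPoller; call .result() to block until completion.
for workspace in ml_client.workspaces.list():
    print(workspace.name, workspace.location)
```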
gharchive/issue
2022-10-25T21:51:05
2025-04-01T06:36:45.457843
{ "authors": [ "adrian-gonzalez", "harneetvirk", "kottofy", "xiangyan99" ], "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/27038", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1487792278
azure-monitor-opentelemetry-exporter not compatible with latest opentelemetry-[api/sdk] 1.15.0

Package Name: azure-monitor-opentelemetry-exporter
Package Version: 1.0.0b10
Operating System: Linux
Python Version: 3.10

Describe the bug: not compatible with opentelemetry-[api/sdk] 1.15.0

To Reproduce — steps to reproduce the behavior:

```
pip install azure-monitor-opentelemetry-exporter
python3.10
from azure.monitor.opentelemetry.exporter import AzureMonitorLogExporter, AzureMonitorTraceExporter
```

see:

```
.tox/py310-sqlalchemy14-integration/lib/python3.10/site-packages/gmo/core/util/telemetry/__init__.py:8: in <module>
    from azure.monitor.opentelemetry.exporter import AzureMonitorLogExporter, AzureMonitorTraceExporter
.tox/py310-sqlalchemy14-integration/lib/python3.10/site-packages/azure/monitor/opentelemetry/exporter/__init__.py:7: in <module>
    from azure.monitor.opentelemetry.exporter.export.logs._exporter import AzureMonitorLogExporter
.tox/py310-sqlalchemy14-integration/lib/python3.10/site-packages/azure/monitor/opentelemetry/exporter/export/logs/_exporter.py:8: in <module>
    from opentelemetry.sdk._logs.severity import SeverityNumber
E   ModuleNotFoundError: No module named 'opentelemetry.sdk._logs.severity'
```

Expected behavior: import to work

Screenshots: If applicable, add screenshots to help explain your problem.

Additional context: Add any other context about the problem here.

Thank you for your feedback @jabbera. We will investigate asap and get back to you.

The PR is merged. However, since this is an ongoing issue, let's keep this open until the next release. I am looking into whether we can do a release before January. In the meantime, use OTel 1.14.

There's another issue: when using the fixed version of the exporter with OTel 1.15, I consistently get the following warning:

```
...\site-packages\werkzeug\serving.py:716: ResourceWarning: unclosed <socket.socket fd=1304, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>
  self.socket = socket.fromfd(fd, address_family, socket.SOCK_STREAM)
ResourceWarning: Enable tracemalloc to get the object allocation traceback
```

I've determined that no commit in exporter 1.0.0b11 has caused this. It's too early to know, but it seems to be an issue with how opentelemetry-instrumentation-flask==0.36b0 or opentelemetry-instrumentation-wsgi==0.36b0 use Werkzeug. I've confirmed the fixed version of the exporter does still send telemetry correctly. However, since the new version needs to be pinned to 1.15, we'll be encouraging people to use 1.15, which seems to have some issue that needs to be addressed.

I am not able to release the exporter as is because of the memory issue with OTel 1.15. Instead, we can pin the exporter to 1.12<=x<=1.14, before the module path was changed. That way, we can avoid the memory allocation issue as well as the severity import breaking change.

My release PR is approved, but my understanding is others have the exclusive permissions to merge it and trigger the release pipeline. https://github.com/Azure/azure-sdk-for-python/pull/27958

@jabbera Please use newly released 1.0.0b11. It blocks OTel 1.15. Resolving issue.
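While waiting for a compatible exporter release, a small guard like the following can fail fast at startup instead of hitting the ModuleNotFoundError at import time. The version boundaries come from the thread above (exporter 1.0.0b10/b11 expect opentelemetry-sdk 1.14 or earlier); the check itself is only a sketch and assumes the packaging library is installed.

```python
# Sketch: refuse to start if the installed opentelemetry-sdk is outside the
# range the Azure Monitor exporter currently supports.
from importlib.metadata import version

from packaging.version import Version  # 'packaging' is assumed to be installed

otel_sdk = Version(version("opentelemetry-sdk"))
if otel_sdk >= Version("1.15.0"):
    raise RuntimeError(
        "opentelemetry-sdk %s detected; azure-monitor-opentelemetry-exporter "
        "1.0.0b10/b11 expects opentelemetry-sdk <= 1.14 (see the pin discussed above). "
        "Install opentelemetry-sdk==1.14.0 or wait for a compatible exporter release."
        % otel_sdk
    )

from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
```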
gharchive/issue
2022-12-10T02:06:06
2025-04-01T06:36:45.469410
{ "authors": [ "jabbera", "jeremydvoss", "kashifkhan" ], "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/27900", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2736699898
Do Azure Cosmos DB full text functions (FullTextContains, FullTextScore, etc.) not support cross partition queries?

Hi, I'm trying to understand if Full Text Search (on Azure Cosmos DB Python SDK) isn't supported across partitions. I'm led to believe this as a result of the following:

When I run this query to get a few documents from my Azure Cosmos DB collection using Full Text Search, where I provide the partition key (no cross partition querying enabled):

```
summary_nodes = self.cosmos.query_items(
    query="""
        SELECT c.id, c.text
        FROM c
        ORDER BY RANK FullTextScore(c.text, @text)
    """,
    partition_key="cbd31f95-a3a6-4a85-ba8a-67925980d37c",
    parameters=[
        {"name": "@text", "value": text},
    ],
    verbose=True,
)
print(summary_nodes)
```

I get the desired output:

```
[{'id': 'cbd31f95-a3a6-4a85-ba8a-67925980d37c', 'text': "The letter dated 11th October 2024...."}]
```

However, when I run the exact same query without providing the partition key with cross partition querying enabled:

```
summary_nodes = self.cosmos.query_items(
    query="""
        SELECT c.id, c.text
        FROM c
        ORDER BY RANK FullTextScore(c.text, @text)
    """,
    parameters=[
        {"name": "@text", "value": text},
    ],
    verbose=True,
    enable_cross_partition_query=True,
)
print(summary_nodes)
```

I'm met with the following error:

```
{"code":"BadRequest","message":"One of the input values is invalid.\r\nActivityId: 9c5f1c20-e0eb-4dcd-b965-c404a2af537b, Windows/10.0.20348 cosmos-netstandard-sdk/3.18.0"}
```

I've set up all indexing policies + full text policies correctly (as evident by the working code snippet). Here are the policies for reference:

```
full_text_policy = {
    "defaultLanguage": "en-US",
    "fullTextPaths": [{"path": "/text", "language": "en-US"}],
}

# Reference: https://learn.microsoft.com/en-us/azure/cosmos-db/index-policy
indexing_policy = {
    "indexingMode": "consistent",
    "automatic": True,
    "includedPaths": [
        {"path": "/*"},
    ],
    "excludedPaths": [{"path": '/"_etag"/?', "path": "/vector/*"}],
    "fullTextIndexes": [
        {"path": "/text"},
    ],
    "vectorIndexes": [
        {"path": "/vector", "type": "quantizedFlat"},
    ],
}
```

In the above code snippets, I've used my own self.cosmos.query_items function, which is a function that shadows the Azure Cosmos Python SDK's container.query_items function, as follows:

```
def query_items(
    self,
    query: str,
    partition_key: str | None = None,
    verbose=False,
    parameters: List[Dict[str, object]] | None = None,
    **kwargs,
):
    try:
        items = list(
            self.container.query_items(
                query=query,
                parameters=parameters,
                partition_key=partition_key,
                **kwargs,
            )
        )
        if verbose:
            print("[AZCOSMOSDB]\tFound {0} items".format(len(items)))
        return items
    except exceptions.CosmosHttpResponseError as e:
        if verbose:
            print("[AZCOSMOSDB]\tCannot query items.")
            print(e.http_error_message)
```

Looking forward to any clarification - thank you for your time!

@simorenoh - can you take a look?

Hi @sachintha180, thank you for opening this issue. This is actually a known gap in the FTS query feature for the service currently - parametrized cross partition queries using Order By Rank will not work. We are currently working on fixing this. However, you can still get cross partition queries to work with FTS by sending the entire query directly, i.e. query="SELECT c.id, c.text FROM c ORDER BY RANK FullTextScore(c.text, ['text-here'])", or using string formatting directly using either %s or Python's .format string method on the query before sending it.

Do let me know if this answers your question or if there's anything else you need help with - I can also ping in this issue again once we have merged the fix on our end if you'd like.

It definitely does! I was a bit hesitant in using string formatting due to SQL injection, but I'll write a few checks to overcome this prior to formatting the query. Thank you very much, looking forward to a ping when the issue is merged. Thank you for your time!
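A hedged sketch of the suggested workaround using the azure-cosmos SDK directly: the search text is embedded into the query string rather than passed as a parameter, so a token sanitization step stands in for the injection checks the reporter mentions. The sanitization shown is only a placeholder, not a complete defense, and the container object is assumed to be an already-created ContainerProxy.

```python
# Sketch of the workaround: build the full-text query without parameters so it
# can run cross-partition until parametrized FTS queries are supported.
def full_text_search(container, text):
    # Placeholder input validation; real code should validate `text` rigorously,
    # since it is interpolated into the query string rather than parametrized.
    safe_text = text.replace("'", "")

    query = (
        "SELECT c.id, c.text FROM c "
        "ORDER BY RANK FullTextScore(c.text, ['{0}'])".format(safe_text)
    )

    return list(
        container.query_items(query=query, enable_cross_partition_query=True)
    )
```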
gharchive/issue
2024-12-12T19:19:33
2025-04-01T06:36:45.477327
{ "authors": [ "Pilchie", "sachintha180", "simorenoh" ], "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/38857", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
282202846
[Monitor] Add Public Preview APIs of Metric Baseline Generated from RestAPI PR: https://github.com/Azure/azure-rest-api-specs/pull/2049 Codecov Report Merging #1710 into master will decrease coverage by 0.03%. The diff coverage is 41.39%. @@ Coverage Diff @@ ## master #1710 +/- ## ========================================== - Coverage 55.33% 55.29% -0.04% ========================================== Files 4202 4215 +13 Lines 100158 100436 +278 ========================================== + Hits 55422 55540 +118 - Misses 44736 44896 +160 Impacted Files Coverage Δ ...itor/azure/mgmt/monitor/models/retention_policy.py 62.5% <0%> (-8.93%) :arrow_down: ...iagnostic_settings_category_resource_collection.py 66.66% <0%> (-13.34%) :arrow_down: ...nitor/azure/mgmt/monitor/models/metric_settings.py 50% <0%> (-5.56%) :arrow_down: ...tor/azure/mgmt/monitor/models/autoscale_profile.py 45.45% <0%> (-4.55%) :arrow_down: ...onitor/azure/mgmt/monitor/models/scale_capacity.py 55.55% <0%> (-6.95%) :arrow_down: ...t-monitor/azure/mgmt/monitor/models/time_window.py 55.55% <0%> (-6.95%) :arrow_down: ...mt-monitor/azure/mgmt/monitor/models/recurrence.py 62.5% <0%> (-8.93%) :arrow_down: ...-monitor/azure/mgmt/monitor/models/sms_receiver.py 50% <0%> (-5.56%) :arrow_down: ...-monitor/azure/mgmt/monitor/models/scale_action.py 50% <0%> (-5.56%) :arrow_down: ...mt-monitor/azure/mgmt/monitor/models/scale_rule.py 62.5% <0%> (-8.93%) :arrow_down: ... and 81 more Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update 0c191e5...ae48b11. Read the comment docs.
gharchive/pull-request
2017-12-14T18:52:36
2025-04-01T06:36:45.493280
{ "authors": [ "AutorestCI", "codecov-io" ], "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/1710", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
327930993
[AutoPR network/resource-manager] reverted methods removal Created to sync https://github.com/Azure/azure-rest-api-specs/pull/3163 (message created by the CI based on PR content) This PR has been merged into https://github.com/Azure/azure-sdk-for-python/pull/2376
gharchive/pull-request
2018-05-30T23:11:16
2025-04-01T06:36:45.495628
{ "authors": [ "AutorestCI" ], "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/2668", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1876049881
Schedule operations rewire for feature and custom Description Please add an informative description that covers that changes made by the pull request and link all relevant issues. If an SDK is being regenerated based on a new swagger spec, a link to the pull request containing these swagger spec changes has been included above. All SDK Contribution checklist: [ ] The pull request does not introduce [breaking changes] [ ] CHANGELOG is updated for new features, bug fixes or other significant changes. [ ] I have read the contribution guidelines. General Guidelines and Best Practices [ ] Title of the pull request is clear and informative. [ ] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page. Testing Guidelines [ ] Pull request includes test coverage for the included changes. API change check APIView has identified API level changes in this PR and created following API reviews. azure-ai-ml
gharchive/pull-request
2023-08-31T18:14:08
2025-04-01T06:36:45.500832
{ "authors": [ "azure-sdk", "nemanjarajic" ], "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/31903", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1909657110
[AutoRelease] t2-cosmosdb-2023-09-23-46388(can only be merged by SDK owner) https://github.com/Azure/sdk-release-request/issues/4551 Live test success https://dev.azure.com/azure-sdk/internal/_build?definitionId=984 BuildTargetingString azure-mgmt-cosmosdb Skip.CreateApiReview true issue link:https://github.com/Azure/sdk-release-request/issues/4551
gharchive/pull-request
2023-09-23T01:14:29
2025-04-01T06:36:45.503202
{ "authors": [ "azure-sdk" ], "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/32202", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2653742988
add test samples tracking https://github.com/Azure/azure-sdk-for-python/issues/38197 TODO ~add to markdown/html/csv reports~ update power BI template @microsoft-github-policy-service rerun
gharchive/pull-request
2024-11-13T00:32:21
2025-04-01T06:36:45.504804
{ "authors": [ "kristapratico" ], "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/38502", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
412673365
[AutoPR datafactory/resource-manager] [Datafactory] Support create pipeline run with recovery mode Created to sync https://github.com/Azure/azure-rest-api-specs/pull/5239 (message created by the CI based on PR content) Codecov Report Merging #4396 into restapi_auto_datafactory/resource-manager will increase coverage by 1.16%. The diff coverage is 52.49%. @@ Coverage Diff @@ ## restapi_auto_datafactory/resource-manager #4396 +/- ## ============================================================================= + Coverage 52.28% 53.44% +1.16% ============================================================================= Files 10470 10284 -186 Lines 226922 215872 -11050 ============================================================================= - Hits 118640 115380 -3260 + Misses 108282 100492 -7790 Impacted Files Coverage Δ ...ry/azure/mgmt/datafactory/models/vertica_source.py 62.5% <ø> (ø) :arrow_up: ...ory/azure/mgmt/datafactory/models/impala_source.py 62.5% <ø> (ø) :arrow_up: ...y/azure/mgmt/datafactory/models/mongo_db_source.py 62.5% <ø> (ø) :arrow_up: ...zure/mgmt/datafactory/models/service_now_source.py 62.5% <ø> (ø) :arrow_up: .../azure/mgmt/datafactory/models/file_system_sink.py 62.5% <ø> (ø) :arrow_up: .../datafactory/models/document_db_collection_sink.py 62.5% <ø> (ø) :arrow_up: ...ctory/azure/mgmt/datafactory/models/xero_source.py 62.5% <ø> (ø) :arrow_up: ...tory/azure/mgmt/datafactory/models/drill_source.py 62.5% <ø> (ø) :arrow_up: ...ctory/azure/mgmt/datafactory/models/oracle_sink.py 62.5% <ø> (ø) :arrow_up: ...zure/mgmt/datafactory/models/azure_table_source.py 55.55% <ø> (ø) :arrow_up: ... and 617 more Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update 51a2757...a6e1d5b. Read the comment docs. This PR has been merged into https://github.com/Azure/azure-sdk-for-python/pull/4381
gharchive/pull-request
2019-02-20T23:12:50
2025-04-01T06:36:45.521429
{ "authors": [ "AutorestCI", "codecov-io" ], "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/4396", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
746456167
Simplify ConsistencyLevel and CosmosStruct

@thomastaylor312 and I are looking into simplifying the cosmos crate. This PR contains changes to two types that are indicative of the larger changes we would like to make. In general these changes favor concrete types over traits. Some additional changes I'd like to make include, but are not limited to:

Remove the CosmosClient trait and rename CosmosStruct to CosmosClient.
Change the hyper_client field of CosmosStruct to client and make it a Box<dyn Client>. This is somewhat dependent on other work to abstract the client usage so implementations are not dependent on a particular http implementation.
Consolidate all the clients into one client, CosmosClient (aka CosmosStruct).

This is great work ❤️. I think we probably want to merge https://github.com/Azure/azure-sdk-for-rust/pull/79 first since many modifications are probably going to create conflicts.
gharchive/pull-request
2020-11-19T10:38:52
2025-04-01T06:36:45.525087
{ "authors": [ "MindFlavor", "rylev" ], "repo": "Azure/azure-sdk-for-rust", "url": "https://github.com/Azure/azure-sdk-for-rust/pull/88", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1724841839
Add default users to prod release plans

Share Release Plans created in PROD with the v-team so we can help diagnose problems and look at behaviors, especially while we do our initial rollout. Ideally, Release Plans shouldn't have any permissions and anyone can access them.

Both options, adding default users to all release plans or removing the permissions, would take the same amount of work. Since the future goal is to remove those permissions, I would suggest starting to remove them. I have a question: in the Release Plans list view, what should be the filter? Only show the release plans for products from the selected service, or show all of them? How could we avoid having too many records that are not relevant for the user?

Great question. Having all of them will be too much noise. Filtering by the selected service will work for our users (but not for the admins). @ccbarragan how is this handled in the new UI?

As a first step, I modified the list of the Release Plans and now it shows all the release plans for the selected Service. Also, I removed all permissions restrictions in the Release Planner App; however, we must do the same for all other apps, since there is logic in them relying on users' permissions. To avoid this, the current user gets added as an owner of the release plan whenever they click the link to open any Readiness App. Opening this GitHub issue to keep track of those changes: https://github.com/Azure/azure-sdk-tools/issues/6369

I forgot to summarize the changes that were made here. This is what changed in the Release Planner App:

Brand new list of the release plans: shows all the release plans for the selected service, not only the ones that are owned by the user. Searchable by Name, Lifecycle stage, Product name, API type, Created by, and Status. UI enhancements: responsive, cleaner, and leans toward the new proposed UI for the App.
Added deeplink support for release plans: release plans can be directly opened from a URL. In the new "Summary" screen, added an option to copy a URL to easily share the current release plan.
Added a warning in the Permissions section for a Release Planner, saying that permissions are getting deprecated in future versions and there's no need to manually add users there anymore.
gharchive/issue
2023-05-24T23:01:11
2025-04-01T06:36:45.531105
{ "authors": [ "JonathanCrd", "maririos" ], "repo": "Azure/azure-sdk-tools", "url": "https://github.com/Azure/azure-sdk-tools/issues/6237", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1356178460
SDK Review Meeting - Tracking Azure Communication Services

This meeting was created by Jorge Garcia Hirota. It will be used to track the conversation in the informational session for the Azure Communication Services service. Detailed meeting information and documents provided can be accessed here.

Cancelled by: Jorge Garcia Hirota
gharchive/issue
2022-08-30T18:59:49
2025-04-01T06:36:45.532863
{ "authors": [ "azure-sdk" ], "repo": "Azure/azure-sdk", "url": "https://github.com/Azure/azure-sdk/issues/4768", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
643224361
Spark 3 support

As Apache Spark 3.0.0 is now released, what is the timetable to support Spark 3 and Scala 2.12? I'm getting the following exception when running on databricks 7.0.

```
at com.microsoft.azure.sqldb.spark.config.SqlDBConfigBuilder.<init>(SqlDBConfigBuilder.scala:31)
at com.microsoft.azure.sqldb.spark.config.Config$.apply(Config.scala:254)
at com.microsoft.azure.sqldb.spark.config.Config$.apply(Config.scala:235)
at line0419139fc8114231985e78f2bf75c46d25.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-802367102285758:15)
at line0419139fc8114231985e78f2bf75c46d25.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-802367102285758:67)
at line0419139fc8114231985e78f2bf75c46d25.$read$$iw$$iw$$iw$$iw.<init>(command-802367102285758:69)
at line0419139fc8114231985e78f2bf75c46d25.$read$$iw$$iw$$iw.<init>(command-802367102285758:71)
at line0419139fc8114231985e78f2bf75c46d25.$read$$iw$$iw.<init>(command-802367102285758:73)
at line0419139fc8114231985e78f2bf75c46d25.$read$$iw.<init>(command-802367102285758:75)
at line0419139fc8114231985e78f2bf75c46d25.$read.<init>(command-802367102285758:77)
at line0419139fc8114231985e78f2bf75c46d25.$read$.<init>(command-802367102285758:81)
at line0419139fc8114231985e78f2bf75c46d25.$read$.<clinit>(command-802367102285758)
at line0419139fc8114231985e78f2bf75c46d25.$eval$.$print$lzycompute(<notebook>:7)
at line0419139fc8114231985e78f2bf75c46d25.$eval$.$print(<notebook>:6)
at line0419139fc8114231985e78f2bf75c46d25.$eval.$print(<notebook>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:745)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1021)
```

Thank you for your questions and ideas. There are no plans to support Spark 3.0.0 with this connector. Consider evaluating the Apache Spark Connector for SQL Server and Azure SQL, which is a newer connector. They are already tracking the request for Spark 3.0.0 support in the new connector.

I am closing this issue as there are no plans to address this request in this connector.
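As a companion to the recommendation above, here is a hedged PySpark sketch of writing through the newer Apache Spark Connector for SQL Server and Azure SQL on Spark 3.x. The format identifier is believed to be the connector's documented one, but verify it against the connector's own docs; all connection values are placeholders.

```python
# Sketch: writing a DataFrame with the newer SQL connector on Spark 3.x.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-connector-sketch").getOrCreate()
df = spark.createDataFrame([(1, "example")], ["id", "name"])

(df.write
    .format("com.microsoft.sqlserver.jdbc.spark")  # assumed format id of the new connector
    .mode("append")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;databaseName=<database>")
    .option("dbtable", "dbo.example_table")
    .option("user", "<user>")
    .option("password", "<password>")
    .save())
```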
gharchive/issue
2020-06-22T16:54:28
2025-04-01T06:36:45.536707
{ "authors": [ "arvindshmicrosoft", "lotsahelp", "tkasu" ], "repo": "Azure/azure-sqldb-spark", "url": "https://github.com/Azure/azure-sqldb-spark/issues/80", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
365993420
When output=json is used for copy, some errors aren't outputted in a JSON friendly way

Which version of the AzCopy was used? 10.0.2-preview

Which platform are you using? (ex: Windows, Mac, Linux) Windows

What command did you run? I ran

```
copy "C:\Users\marayerm\Desktop\*" "https://redacted.blob.core.windows.net/one/?REDACTED" --overwrite=false --follow-symlinks --recursive --fromTo=LocalBlob --include "New Text Document.txt;" --output=json
```

when there is no file at C:\Users\marayerm\Desktop\New Text Document.txt

What problem was encountered? Although output=json was specified, the output I received was

```
failed to perform copy command due to error: cannot start job due to error: nothing can be uploaded, please use --recursive to upload directories.
```

which is not JSON. I have seen other situations where this happens, but this is the easiest to reproduce. Basically, if output=json is used, then all output should be formatted as JSON objects.

How can we reproduce the problem in the simplest way? Try to do a copy where the source does not exist.

Have you found a mitigation/solution? No.

@MRayermannMSFT thanks for reporting this issue! I've logged it to be fixed.

Another scenario where this happens is when uploading an empty folder.

This also happens if you do not have permissions to read the contents of the blob container/file system.

Fixed in 10.0.8.
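For consumers stuck on an older azcopy build, a small Python sketch like the following can tolerate the stray plain-text errors while still treating well-formed lines as JSON; the azcopy arguments are illustrative only.

```python
# Sketch: consume `azcopy ... --output=json` line by line, falling back to
# plain-text handling when a line is not valid JSON (pre-10.0.8 behavior).
import json
import subprocess

proc = subprocess.run(
    ["azcopy", "copy", "C:/source/*", "https://<account>.blob.core.windows.net/one", "--output=json"],
    capture_output=True,
    text=True,
)

for line in proc.stdout.splitlines():
    if not line.strip():
        continue
    try:
        event = json.loads(line)  # normal case: one JSON object per line
    except json.JSONDecodeError:
        print("non-JSON output from azcopy:", line)  # e.g. the fatal error quoted above
        continue
    print("parsed event:", event)
```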
gharchive/issue
2018-10-02T17:06:15
2025-04-01T06:36:45.541174
{ "authors": [ "MRayermannMSFT", "zezha-msft" ], "repo": "Azure/azure-storage-azcopy", "url": "https://github.com/Azure/azure-storage-azcopy/issues/70", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
254112916
Enabling TransferRequestHandler for all requests (this will include static content; the reason we do this is to enable routes with a period in them, i.e. with extensions).

Look at issue #969. Essentially, Functions currently does not allow paths to have a period in them, for instance /index.html. Proxies do allow users to have such paths. With Proxies merging with Functions, this creates a problem. To enable such paths, we need to ensure that the managed modules are executed for such paths. There are a couple of ways of enabling this, as mentioned in the issue. The change made here is a way of ensuring that TransferRequestHandler is invoked for all requests.

@omkarmore83, thanks for having already signed the Contribution License Agreement. Your agreement was validated by .NET Foundation. We will now review your pull request. Thanks, .NET Foundation Pull Request Bot
gharchive/pull-request
2017-08-30T19:41:47
2025-04-01T06:36:45.543700
{ "authors": [ "dnfclas", "omkarmore83" ], "repo": "Azure/azure-webjobs-sdk-script", "url": "https://github.com/Azure/azure-webjobs-sdk-script/pull/1848", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
194855679
Containerize the volume plugin

Sure you're aware of this, but I thought I'd voice my interest after using this plugin on a few machines and really liking it. Documented here: https://docs.docker.com/engine/extend/plugin_api/ It has plugins running as "special" docker containers, brought up before docker proper and shut down after docker stops. Not sure of the impact of how they work on the fact you're using SMB tools/permissions/privileges. I think you're already using some of the ideas, so it might not be an issue. It would make installing the plugin very simple and much cleaner, however, when plugins come out of experimental.

@gjonespf we tried to containerize the driver; however, the main problem is that the driver executes mount commands to mount SMB shares on Linux. When we containerize, the mounts are restricted to the container's namespace and therefore do not appear on the Linux host, and thus are not available to containers using the volumes. Last time I checked, this was a limitation of docker (or the Linux kernel). If the situation has changed, we can take a look at containerizing the driver. BTW, in the docs you sent, I don't see a "running as special docker containers" reference. Could you please quote?

Indeed, I wondered if this may be an issue. I would suggest that issue (containerized mounts) should be upstream on docker, as I'd expect having mount tools working correctly for plugins would be pretty critical for anything doing volume plugins based on mount. I raised the suggestion more as an easier way for people to install/update this driver. That being said, I'm at a loss to find much on docker plugin infra at all, and the current suggestion (on the page I linked) is to run them outside of containers (as you're already doing). Re: "special" containers - I believe I read it in some Weave documentation; I'm unable to find much on plugins as containers tbh. Also unsure as to how these fit into the existing Kubernetes plugin infrastructure. Oh, for reference, here is how the Weave team is doing it. Looks like they've got separate Docker/CNI plugins. https://github.com/weaveworks/weave/tree/master/plugin

"Would suggest that issue (containerized mounts) should be upstream on docker" - It's here: https://github.com/docker/docker/issues/10088 https://github.com/docker/docker/issues/14630 https://github.com/docker/docker/issues/17034 Looks like it is merged now. Perhaps we should give it a try. This was the only reason we could not containerize the plugin back in the day.

Everything I read pretty much implies to me that volume plugins are still very experimental.

Volume plugin support has been out for pretty long and the API has gone through many revisions; I think things have settled down at this point and we have a stable mechanism. I'm changing the title to reflect the latest here.

Nice work, yeah, those items look a good reflection. Experimental - agreed, it's been around for a year or two, I just wasn't seeing much documentation to help you out. I expect if the mounts work, then it's "just" a case of sorting out how to build a plugin API wrapper to handle mounting etc. commands.

@ahmetb hi, is this likely to happen please? I note that the "docker-for-azure" project has implemented this with their "cloudstor" plugin (https://docs.docker.com/docker-for-azure/persistent-data-volumes/) but that is only (officially) available if using that project, and it is closed source with minimal docs. Enabling mounting Azure storage using a docker plugin would be very useful. Otherwise it severely limits the use of Docker on Azure.

@markvr, as @ahmetb is now working on Kubernetes, I would expect that it's not high on his priority list, and I don't blame him. I'm still keen on this, but have started looking at other avenues to solve the same problem. Docker plugin infra is finally maturing, so I would suggest it's getting easier for anyone to jump in and try this stuff out.

I would have hoped that Microsoft could have more than one person supporting their products. I like the open source approach they've taken with a lot of things, but they seem to just abandon a lot of them as well, which makes it impossible to know if we can rely on them for production systems.
gharchive/issue
2016-12-11T20:52:56
2025-04-01T06:36:45.572107
{ "authors": [ "ahmetalpbalkan", "gjonespf", "markvr", "powareverb" ], "repo": "Azure/azurefile-dockervolumedriver", "url": "https://github.com/Azure/azurefile-dockervolumedriver/issues/78", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1094475031
Data property on Microsoft.KeyVault/vaults/keys defined as string but labeled with any The Microsoft.KeyVault/vaults/keys resource type has a property called "data" within the "release_policy" object. In the swagger, it's defined as a string. But, it gets generated as "any". This log entry may be relevant. The Swagger definition indicates that the data property should be base64-encoded bytes, and I'm not sure which Autorest type that would be translated into. The any type is used as a fallback when a field's type is not recognized. It should be type string (it is just encoded string). Examples here https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-functions-string#examples-2 The Bicep autorest plugin uses Autorest's ModelerFour framework, which applies a binary schema to base64 encoded strings. This seems appropriate for SDK generation, where the SDK would be responsible for taking a byte buffer and converting it to the expected wire format, but is probably not exactly what we want in Bicep. @jlichwa If a user wants to supply a policy today in a Bicep or ARM JSON file, do they need to provide a base64-encoded string? I can put together a test if you don't know. Yes, it needs to be a base64-encoded string.
gharchive/issue
2022-01-05T15:30:00
2025-04-01T06:36:45.577046
{ "authors": [ "jackrichins", "jeskew", "jlichwa", "tfitzmac" ], "repo": "Azure/bicep-types-az", "url": "https://github.com/Azure/bicep-types-az/issues/579", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
687254123
Support to split template into multiple files? Have you thought about supporting splitting one bicep files into multiple files within the same directory and merging them on build? I find this a very neat feature in terraform to make larger templates with many resources easier to read - without using modules, that is a different use case for me. Like: storageaccounts.bicep sqldb.bicep keyvault.bicep bicep build . ==> my_combined_arm_template.json Some use cases I would use splitting: logic app templates everything which is enter in template as escaped JSON string. E.g. Log Analytivs workbooks. Splitting and loading bicep files would be very useful feature. As well as functions to load external files and preparing payloads using the bicep language. Similar to terraform: local variables file function yaml and json decode/encode Also it would allow for modularity. Sure, merging local files is a good feature. Take it one step further and support remote git files. That way we can get versioning too. Versioning is a must if you use the bicep file like a module. Seconding this and also liking to add that this would be nice with modules. For readability it would be nice to seperate a module into a folder with resource.bicep params.bicep variables.bicep outputs.bicep It's a bit unfortunate, ARM JSON files interpreters don't support natively JSON Pointer (RFC 6901) (while we could probably imagine a pre-processor addind it!). I had recently to compose a lot of JSON "definition" files (of my own) into a single object structure before loading them in memory. And JSON Pointer definitely helped me managing complexity while being able to check consistency as part of my unit tests. BTW I was already toying with ARM JSON files generation a few years ago leveraging Jinja2 :) But as it's only "syntactic macros", a lot of semantic issues are not detected as they should and are with something like Bicep... This would be great if we could split out Bicep files like that! Any news on this? Like I said in issue #7726 I would then also like to add parameters and expressions (to do some string manipulation if needed) to add re-use value (predefined template files for several resources with all the properties that are always the same in your organization) Good Ask....For me I want parameters in a file and remaining code in another file. is it possible? I have n number of resources to create, so it will be good and traceable if we have different file param and different file for resources/modules When using Terraform, this template was the initial setup of other organizations', or tenants', spoke subscription. It deployed a core set of resources to integrate with the common services/hub subscription. They're split mostly by resource type plus Data Sources (Existing resources in Bicep), outputs, providers, and variables (Bicep Parameters). It was deployed by the same team every time. Modules are overkill for the resource types with multiple instances because they had different requirements for configurations such as 3 different Key Vaults: 1 for Customer Managed Keys, 1 for Certificates, and another for Secrets. They each had a specific set of access for the service account creating them and their secrets/keys/certs and different network restrictions. I simplified it down to a handful of variables/parameters that were required for each to basically just the name and location. The *_permissions were the same for each tenant. 
It's also easier for the team, which I've left, to maintain or update than as a single file which would be around 2000 lines of code. @alex-frankel Is this still being considered? One of our customers is hitting the limit and wants to know if there are any solutions. @lddeiva - it's not being actively considered. Having said that, I don't think if we implemented this it would address the limit you are talking about. I am assuming you are talking about the 4MB template size limit? We have a separate work item to try to get that limit raised. Would be very nice to split up NSG rules, Sentinel rules or Firewall rules (1 rule = 1 file). Also Azure workbooks, as already mentioned.
gharchive/issue
2020-08-27T13:43:05
2025-04-01T06:36:45.587339
{ "authors": [ "JayDoubleu", "Lddeiva", "SPSamL", "TazzyMan", "alex-frankel", "enbridgeint", "jfe7", "jikuja", "sebader", "sebbrochet", "takekazuomi", "thebeautiful", "thebenwaters" ], "repo": "Azure/bicep", "url": "https://github.com/Azure/bicep/issues/363", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1278734445
Bicep intellisense partially working Bicep version version 0.4.613 Describe the bug typing res brings up intellisense options for resources I cannot get intellisense to give me in between the quotes. Not specific to keyvaults. typing resource kv '' should return Microsoft.Something/@date Microsoft.KeyVault/vaults@2019-09-01 To Reproduce install reinstall the vscode extension Additional context Once I get Microsoft.KeyVault/vaults@2019-09-01 loaded I get intellisense options for accesspolicies enablePurgeProtection enableRbacAuthorization and others. @dairta, you're on a very old version of the Bicep extension. Could you try and repro this on the latest? (0.7.4) uninstalling and reinstalling the 0.7.4 version still shows bicep --version Bicep CLI version 0.4.613m My terminal launches in pwsh v7 I've looked at this and it's what I am experiencing on a new install. https://github.com/Azure/bicep/issues/1780 The vs code extension and the bicep CLI are separate installs. Can you look at the version in VS code? Also, separately, you also have two different versions of the bicep CLI installed. If that's not intention/desired, you can follow this to make sure you only have one: https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/installation-troubleshoot#multiple-versions-of-bicep-cli-installed
gharchive/issue
2022-06-21T17:05:33
2025-04-01T06:36:45.593311
{ "authors": [ "alex-frankel", "anthony-c-martin", "dairta" ], "repo": "Azure/bicep", "url": "https://github.com/Azure/bicep/issues/7317", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1445673403
Bicept validation is wrong for Event Hub Namespace When adding a new property to a "Microsoft.EventHub/namespaces@2022-01-01-preview" resource I get the following error: The property "minimumTlsVersion" is not allowed on objects of type "EHNamespaceProperties". Permissible properties include "clusterArmId", "encryption", "privateEndpointConnections". Per Microsoft's documentation this is a valid property. https://learn.microsoft.com/en-us/azure/templates/microsoft.eventhub/namespaces?pivots=deployment-language-bicep Can you share the bicep code you are using? Is this throwing an error in VS code or when you attempt to deploy the bicep file? Closing due to no response
gharchive/issue
2022-11-11T15:57:22
2025-04-01T06:36:45.595632
{ "authors": [ "SDanehy", "alex-frankel" ], "repo": "Azure/bicep", "url": "https://github.com/Azure/bicep/issues/8987", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1894330923
Build workflow improvements Add action to log preview install scripts Summarize dotnet test results into single comment rather than 4 separate comments Split VSCode into build & test jobs. This means you get a working VSCode package even if there are lint or test failures. Microsoft Reviewers: Open in CodeFlow Checks are all passing - there are just some required checks which have been renamed, which explains why the PR shows this:
gharchive/pull-request
2023-09-13T11:31:12
2025-04-01T06:36:45.598448
{ "authors": [ "anthony-c-martin" ], "repo": "Azure/bicep", "url": "https://github.com/Azure/bicep/pull/11829", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1644477505
Add support for providing environment var/values in a .env file read by dotenv.net Discussed in https://github.com/Azure/data-api-builder/discussions/1361 Originally posted by glaucia86 March 25, 2023 Hi! I would like to propose that we can make use of the .env file in the dab.config.json file. As it already happens with the files generated by the SWA CLI: staticwebapp.database.config.json (example: HERE) Because the connection string is very exposed, where it has extremely sensitive data such as: login, password and database name. This issue is to add support for .env so that I can just put my sensitive data into the .env, if I don't need the complexity of having multiple configuration files. This is separate from the current @env() feature where we need to set environment variables on the system. Two libraries for this option: https://github.com/tonerdo/dotnet-env https://github.com/bolorundurowb/dotenv.net Looking forward to this! :)
gharchive/issue
2023-03-28T18:37:18
2025-04-01T06:36:45.603291
{ "authors": [ "Aniruddh25", "yorek" ], "repo": "Azure/data-api-builder", "url": "https://github.com/Azure/data-api-builder/issues/1374", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1594003996
chore: cherry pick for v1.0.0 What this PR does / why we need it: Cherry pick main changes since v1.0.0-rc.2 for v1.0.0 release. (cherry picked #608 #618 #621 #620 #622 #628 #631 #632 #635) Which issue(s) this PR fixes (optional, using fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when the PR gets merged): Fixes # Special notes for your reviewer: closing, fixed in PR #640
gharchive/pull-request
2023-02-21T19:39:10
2025-04-01T06:36:45.610277
{ "authors": [ "ashnamehrotra" ], "repo": "Azure/eraser", "url": "https://github.com/Azure/eraser/pull/639", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2167930745
No documented way to deploy to consumption based linux functions (e.g. Python) By now I spend several days trying to figure out how to deploy a Python function to a consumption based plan using this action and I have come to the conclusion that this is not possible. However, there is no warning or any indication about this. Please add either a documentation how to do it or a warning that it is not possible. Setup Python is supported only on a Linux-based hosting plan when it's running in Azure. source Function App Operating System:Linux App Service Plan Pricing plan: Y1 (consumption based) Runtime version: 4.28.4.4 FUNCTIONS_WORKER_RUNTIME: python FUNCTIONS_EXTENSION_VERSION: ~4 Action configuration - name: 'Run Azure Functions Action' uses: Azure/functions-action@v1 id: fa with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} slot-name: ${{ env.AZURE_FUNCTIONAPP_SLOT }} package: ${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }} publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }} respect-funcignore: true scm-do-build-during-deployment: true enable-oryx-build: true The action is exactly in line with the example. The slot-name has been added or removed but is irrelevant for the behaviour. Repository structure The repository contains a Hello world application using exactly the setup of the official documentation Behaviour In essence the problem boils down to the possible states of the WEBSITE_RUN_FROM_PACKAGE environment variable in the Azure Function. Without WEBSITE_RUN_FROM_PACKAGE A newly created Azure Function does not have the environment variable set. Therefore, the execution of the azure function fails. Error: Failed to deploy web package to App Service. Error: Execution Exception (state: PublishContent) (step: Invocation) Error: When request Azure resource at PublishContent, zipDeploy : Failed to use /home/runner/work/_temp/temp_web_package_26238559607[44](XXXX)894.zip as ZipDeploy content Error: Package deployment using ZIP Deploy failed. Refer logs for more details. Error: Deployment Failed! Btw. there is no indication where to find the logs that the error message is referring to in case of the failure. The link to the xxx.scm.azurewebsites.net logs is only displayed in case of a successful deployment . With WEBSITE_RUN_FROM_PACKAGE = 1 In this case the GitHub action runs through without an error. However, the deployed function is not visible in the Azure Portal. This is somewhat expected behaviour as the documentation clearly states that Linux Consumption based Functions need to set the value to a URL: External package URL is the only supported deployment method for Azure Functions running on Linux in the Consumption plan Source Other source With WEBSITE_RUN_FROM_PACKAGE = The expected solution is to set the WEBSITE_RUN_FROM_PACKAGE to a URL that is created during the deployment of the function using this GitHub Action e.g. the value of the SCM_RUN_FROM_PACKAGE variable. However, in this scenario the GitHub Action also fails: Error: Execution Exception (state: PublishContent) (step: Invocation) Error: When request Azure resource at PublishContent, zipDepoy : WEBSITE_RUN_FROM_PACKAGE in your function app is set to an URL. Please remove WEBSITE_RUN_FROM_PACKAGE app setting from your function app. Error: Deployment Failed! The suggested fix to remove the variable does unfortunately not work as stated above. With WEBSITE_RUN_FROM_PACKAGE = anything else For the sake of completeness I also tried to run the action with something else than the official options. 
As expected, the function fails with: Error: Failed to deploy web package to App Service. Error: Execution Exception (state: PublishContent) (step: Invocation) Error: When request Azure resource at PublishContent, zipDeploy : Failed to use /home/runner/work/_temp/temp_web_package_5530634294070984.zip as ZipDeploy content Error: Package deployment using ZIP Deploy failed. Refer logs for more details. Error: Deployment Failed! Conclusion Given that none of the options results in a working Azure function I have to assume that it is just not possible to use this action for consumption based Linux functions. I would be more than happy if you could highlight my mistake. WEBSITE_RUN_FROM_PACKAGE = 1 is not support of Linux app on Consumption plan. Documented here: https://learn.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package WEBSITE_RUN_FROM_PACKAGE = URL will only work with this action if you are using service principle and not publish profile. Also, it will not support remote build. If you want remote build, then used publish profile and pass remote build parameters with this action. Also, remove WEBSITE_RUN_FROM_PACKAGE app setting from your app. Dear @patelchandni thanks a lot for summarizing the behaviour of WEBSITE_RUN_FROM_PACKAGE again. Could you provide a working example of how to deploy a consumption based linux function with this action? I have the same problem and I can't find a way to deploy in consumption mode (sku=Y1).
gharchive/issue
2024-03-04T22:52:00
2025-04-01T06:36:45.621975
{ "authors": [ "mxmo0rhuhn", "patelchandni", "ricardolimadb" ], "repo": "Azure/functions-action", "url": "https://github.com/Azure/functions-action/issues/218", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1691106648
Update dmr client to GA DTDL parser Update dmr client to GA DTDL parser 1.0.52 and added a new test case according to Changes from Version 2 guideline. Resolving issue https://github.com/Azure/iot-plugandplay-models-tools/issues/197.
gharchive/pull-request
2023-05-01T17:37:42
2025-04-01T06:36:45.624513
{ "authors": [ "Elsie4ever", "digimaun" ], "repo": "Azure/iot-plugandplay-models-tools", "url": "https://github.com/Azure/iot-plugandplay-models-tools/pull/198", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
374231054
add weight to lightgbm in our use case we need to include sample weight for model training, which is not available for LightGBMClassifier. This has been resolved with PR #426, the fix will be in the next release of mmlspark v0.15. You can test out the fix with the build here: --packages com.microsoft.ml.spark:mmlspark_2.11:0.14.dev30+2.gb5960fb and --repositories https://mmlspark.azureedge.net/maven
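For illustration only, a rough PySpark sketch of how a per-sample weight column could be passed once the fix is available; the import path and the weightCol parameter name are assumptions based on the PR referenced above, not verified against a released API.

```python
# Illustrative sketch only: the import path and the weightCol parameter are
# assumptions based on the fix referenced above, not a verified release API.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from mmlspark import LightGBMClassifier  # path may differ between mmlspark versions

spark = SparkSession.builder.getOrCreate()

# "weight" holds one sample weight per row.
df = spark.createDataFrame(
    [(0.0, 1.0, 2.0, 1.0), (1.0, 3.0, 4.0, 0.5)],
    ["label", "f1", "f2", "weight"],
)
df = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)

model = LightGBMClassifier(
    labelCol="label", featuresCol="features", weightCol="weight"
).fit(df)
```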
gharchive/issue
2018-10-26T05:22:25
2025-04-01T06:36:45.632125
{ "authors": [ "imatiach-msft", "stevekuo4" ], "repo": "Azure/mmlspark", "url": "https://github.com/Azure/mmlspark/issues/411", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1262351679
[BUG] Array of objects is not being evaluated correctly Describe the bug When trying to use a wildcard to evaluate an array of objects to validate if an object key exists in each array entry the rules are not evaluated properly: { "name": "resourceName", "type": "Microsoft.Resources/deployments", "apiVersion": "2021-04-01", "resourceGroup": "resourceGroup", "properties": { "mode": "Incremental", "templateLink": { "id": "<Template-Link>" }, "parameters": { "secretsObject": { "value": { "secrets": [ { "secretValue": "secret-value-1" }, { "secretValue": "secret-value-2" } ] } } } } } Rule: "evaluation": { "resourceType": "Microsoft.Resources/deployments", "allOf": [ { "path": "properties.parameters.secretsObject.value.secrets[*].secretName", "exists": true } ] } Expected behavior It should fail without the need to add a specific evaluation to each array entry: "path": "properties.parameters.secretsObject.value.secrets[0].secretName" "path": "properties.parameters.secretsObject.value.secrets[1].secretName" Reproduction Steps Run the tool against a template like the example above. Environment No response Hi @lucas-lelis, thanks for the feedback. Since wildcards are evaluated based on whether or not the full path resolves to an existing property, I believe the issue here is that the rule isn't returning anything to evaluate since the property in question doesn't exist, so it's skipped. This is somewhat mentioned in the rule authoring guide, but is not very clear on what's happening. "When a wildcard is used, zero or more paths in the template will be found that match path. If zero paths are found, the operator in the Evaluation is skipped, as there is nothing to evaluate" Can you try making a small modification to your rule and let us know if this works for you? "evaluation": { "resourceType": "Microsoft.Resources/deployments", "allOf": [ { "path": "properties.parameters.secretsObject.value.secrets[*]", "allOf": [ { "path" : "secretName", "exists": true } ] } ] } The additional operator used for evaluating the remaining path after the wildcard isn't very intuitive, so I've created #247 to help with this. Thx for the feedback @JohnathonMohr ! It did work as expected with those changes.
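To make the maintainer's explanation concrete, here is a small, purely illustrative Python sketch of the wildcard behavior described above (zero matches means the check is skipped; otherwise the nested check runs per array element). This is not Template Analyzer's actual implementation, and the helper names are invented.

```python
# Conceptual sketch only (not Template Analyzer's code): `secrets[*]` expands to
# each array element, and the nested "exists" check runs per element; if nothing
# matches the wildcard path, the check is skipped.
def resolve_wildcard(obj, path):
    """Yield every value reached by a path like 'a.b[*].c' (very simplified)."""
    head, _, rest = path.partition(".")
    if not head:
        yield obj
        return
    if head.endswith("[*]"):
        items = obj.get(head[:-3], []) if isinstance(obj, dict) else []
        for item in items:
            yield from resolve_wildcard(item, rest)
    elif isinstance(obj, dict) and head in obj:
        yield from resolve_wildcard(obj[head], rest)

def all_have(obj, wildcard_path, required_key):
    matches = list(resolve_wildcard(obj, wildcard_path))
    if not matches:  # zero matches: nothing to evaluate, so the check is skipped
        return True
    return all(isinstance(m, dict) and required_key in m for m in matches)

secrets_value = {"secrets": [{"secretValue": "secret-value-1"},
                             {"secretValue": "secret-value-2"}]}
print(all_have(secrets_value, "secrets[*]", "secretName"))  # False -> rule fails as intended
```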
gharchive/issue
2022-06-06T20:09:41
2025-04-01T06:36:45.687624
{ "authors": [ "JohnathonMohr", "lucas-lelis" ], "repo": "Azure/template-analyzer", "url": "https://github.com/Azure/template-analyzer/issues/246", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2404557876
chore: repository governance Repository governance update This PR was automatically created by the AVM Team hive-mind using the grept governance tool. We have detected that some files need updating to meet the AVM governance standards. Please review and merge with alacrity. Grept config source: git::https://github.com/Azure/Azure-Verified-Modules-Grept.git//terraform Thanks! The AVM team :heart: Superseded by #32
gharchive/pull-request
2024-07-12T03:11:28
2025-04-01T06:36:45.690094
{ "authors": [ "mbilalamjad" ], "repo": "Azure/terraform-azurerm-avm-res-compute-hostgroup", "url": "https://github.com/Azure/terraform-azurerm-avm-res-compute-hostgroup/pull/31", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2278793502
chore: repository governance Repository governance update This PR was automatically created by the AVM Team hive-mind using the grept governance tool. We have detected that some files need updating to meet the AVM governance standards. Please review and merge with alacrity. Grept config source: git::https://github.com/Azure/Azure-Verified-Modules-Grept.git//terraform Thanks! The AVM team :heart: Superseded by #32
gharchive/pull-request
2024-05-04T06:40:34
2025-04-01T06:36:45.692342
{ "authors": [ "mbilalamjad" ], "repo": "Azure/terraform-azurerm-avm-res-desktopvirtualization-hostpool", "url": "https://github.com/Azure/terraform-azurerm-avm-res-desktopvirtualization-hostpool/pull/31", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2499149176
chore: repository governance Repository governance update This PR was automatically created by the AVM Team hive-mind using the grept governance tool. We have detected that some files need updating to meet the AVM governance standards. Please review and merge with alacrity. Grept config source: git::https://github.com/Azure/Azure-Verified-Modules-Grept.git//terraform Thanks! The AVM team :heart: Superseded by #67
gharchive/pull-request
2024-09-01T01:53:12
2025-04-01T06:36:45.694491
{ "authors": [ "segraef" ], "repo": "Azure/terraform-azurerm-avm-res-network-privatednszone", "url": "https://github.com/Azure/terraform-azurerm-avm-res-network-privatednszone/pull/66", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2525985105
[TCGC] update references for #1463 Update references after merged #1463
All changed packages have been documented.
:white_check_mark: @azure-tools/typespec-azure-resource-manager
:white_check_mark: @azure-tools/typespec-client-generator-core
Show changes
@azure-tools/typespec-azure-resource-manager - feature ✏️ x-ms-skip-url-encoding should be replaced with allowReserved
@azure-tools/typespec-client-generator-core - breaking ✏️
1. The kind for unknown renamed from any to unknown.
2. The values property in SdkUnionType renamed to variantTypes.
3. The values property in SdkTupleType renamed to valueTypes.
4. The example types for parameter, response and SdkType has been renamed to XXXExampleValue to emphasize that they are values instead of the example itself.
5. The @format decorator is no longer able to change the type of the property.
@azure-tools/typespec-client-generator-core - fix ✏️ Fix naming logic for anonymous model wrapped by HttpPart
@azure-tools/typespec-client-generator-core - breaking ✏️ no longer export the SdkExampleValueBase
You can try these changes here 🛝 Playground 🌐 Website 📚 Next docs
gharchive/pull-request
2024-09-14T04:26:34
2025-04-01T06:36:45.704379
{ "authors": [ "ArcturusZhang", "azure-sdk" ], "repo": "Azure/typespec-azure", "url": "https://github.com/Azure/typespec-azure/pull/1541", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
738334495
Extension issue Issue Type: Bug Extension Name: vscode-kubernetes-tools Extension Version: 1.2.1 OS Version: Windows_NT x64 10.0.17763 VSCode version: 1.51.0 :warning: We have written the needed data into your clipboard. Please paste! :warning: Should be fixed in 1.2.3 - please reopen if not
gharchive/issue
2020-11-07T23:09:10
2025-04-01T06:36:45.707467
{ "authors": [ "KYZITEMELOS93", "itowlson" ], "repo": "Azure/vscode-kubernetes-tools", "url": "https://github.com/Azure/vscode-kubernetes-tools/issues/837", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
312125756
Add a drill-down for Pods created by a Deployment. This goes straight from Deployment -> Pods Another option would be: Deployment -> ReplicaSet -> Pods If we'd rather, easy to do, let me know. Comments addressed, please re-check. @testforstephen comment addressed, please re-check. Thanks!
gharchive/pull-request
2018-04-06T21:31:54
2025-04-01T06:36:45.709719
{ "authors": [ "brendandburns" ], "repo": "Azure/vscode-kubernetes-tools", "url": "https://github.com/Azure/vscode-kubernetes-tools/pull/162", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
625578786
Fixes typo in RenewResponse action in WsTrust 1.3 There was a copy-paste error in one of the WsTrust 1.3 actions. @gislikonrad I rebased on dev and squashed before I saw your pr, sorry. It is hard to see what your PR was now. Can you re-submit. I merged my branch. Now you should be able to see the change. Do you still want me to resubmit the PR? @gislikonrad thanks for catching this! Sure
gharchive/pull-request
2020-05-27T10:35:12
2025-04-01T06:36:45.712203
{ "authors": [ "brentschmaltz", "gislikonrad" ], "repo": "AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet", "url": "https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/pull/1424", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
425632081
Jpwilli/locking Adds locking to cache operations Reverts projects to .net4.5 Upgrades projects to netcoreapp2.1 (redundant with Mark's other PR) Adds tests using System; please add license blurb Refers to: src/Shared/CrossPlatLock.cs:1 in d756837. using System; license Refers to: tests/Microsoft.Identity.Extensions.Msal.UnitTests/MockTokenCache.cs:1 in d756837.
gharchive/pull-request
2019-03-26T20:25:43
2025-04-01T06:36:45.721487
{ "authors": [ "JPWilli", "MarkZuber" ], "repo": "AzureAD/microsoft-authentication-extensions-for-dotnet", "url": "https://github.com/AzureAD/microsoft-authentication-extensions-for-dotnet/pull/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
167794905
Cannot open the PPDF file, which is protected by RMS Android SDK Protect a PDF file to PPDF file by RMS Android SDK Open this PPDF by RMS Linux SDK 3 Get the error message : This version is not supported. Check the code, we found that the following functions have an exception error ProtectedFileStream::Acquire PS: Please use "catch (const rmscore::exceptions::RMSException& ex)" to get the exception
@prvijay , please take a look at this issue. Thank you. @raeitan for investigation Any update? Hey, I've updated the SDK to support pfile version 3 in the dev branch, please have a look and see if it's working for you. I'll need to run more tests before merging into master. Thank you for pointing to the issue.
2016-07-27T08:04:57
2025-04-01T06:36:45.768096
{ "authors": [ "Kammy6679", "prvijay", "raeitan" ], "repo": "AzureAD/rms-sdk-for-cpp", "url": "https://github.com/AzureAD/rms-sdk-for-cpp/issues/119", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1110804368
Review of Develop It's time to merge into master. All this code needs review. I was thinking once every two weeks, but it doesn't really matter so long as master builds and works
gharchive/pull-request
2022-01-21T19:13:40
2025-04-01T06:36:45.792245
{ "authors": [ "cppcooper" ], "repo": "BCCF-UBCO-AD/Orthanc-TMI", "url": "https://github.com/BCCF-UBCO-AD/Orthanc-TMI/pull/93", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
945506291
PR for Warning label track hours issue 45(This issue is based off of branch: track_volunteer_hours_issue42) This PR fixes issue #45 Summary: The issue asked that there be a way for an admin to tell that a participant that has checked into or rsvp to an event be flagged if they have not completed all prerequisites for the event. In order to achieve this, we queried the database and pushed on to a list of all the eventIDs that were set as prerequisites to the program the participant is attending. We then queried the database using the eventIDS and pushed onto a list all the users who attended any of the prerequisite events, using this list we then checked how many times a user name appeared and created a conditional that required the user to appear as many times as the length of the eventIDs list. If the user met the amount they would stay on the list, if the user did not they would be removed from the list. After deleting duplicates we then passed the list to the HTML and used jinja to flag any users who appeared on the track hours page who were not on the list of eligible participants. To Test: There is a test function in the test suite labeled test_warning.py. Running this should pass. You can also visit the URL: http://<YOURIP>/<PROGRAMID>/<EVENTID>/track_hours. This URL will display a table with every user set to attend or currently at the event, if any of them have not met the requirements for the event you will see a red warning icon. Hovering over this will display a ToolTip that will tell you that the user has not completed the prerequisites. Notes: This branch is forked off over the branch listed in the title. If any changes are made to the listed branch this will have to be updated. There are conflicts to resolve
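A minimal, hypothetical Python sketch of the counting logic described above; function and variable names are invented for illustration, and the real celts code queries its database models rather than plain lists.

```python
# Rough sketch of the prerequisite check described above (illustrative names only,
# not the actual celts models or queries): a participant is eligible when they
# attended every prerequisite event, i.e. their name appears once per event.
from collections import Counter

def eligible_participants(prereq_event_ids, attendance_rows):
    """attendance_rows: iterable of (event_id, username) pairs."""
    counts = Counter(
        username for event_id, username in attendance_rows
        if event_id in prereq_event_ids
    )
    required = len(prereq_event_ids)
    return {user for user, n in counts.items() if n >= required}

def flag_missing_prereqs(checked_in_users, prereq_event_ids, attendance_rows):
    ok = eligible_participants(prereq_event_ids, attendance_rows)
    # Users not in `ok` would get the warning icon/tooltip on the track-hours page.
    return [user for user in checked_in_users if user not in ok]

# Tiny example
rows = [(1, "alice"), (2, "alice"), (1, "bob")]
print(flag_missing_prereqs(["alice", "bob"], {1, 2}, rows))  # ['bob']
```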
gharchive/pull-request
2021-07-15T15:27:52
2025-04-01T06:36:45.813974
{ "authors": [ "BrianRamsay", "tylerpar99" ], "repo": "BCStudentSoftwareDevTeam/celts", "url": "https://github.com/BCStudentSoftwareDevTeam/celts/pull/47", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
228432579
Display start time and last updated in action service table See description. Opening branch feature/display-time to address. @wshands Should I change out attributes for the timestamps, or should I just add two more columns? Attributes added, columns rearranged to resemble file browser order per request of klearned. Current debate: leave all eleven columns, remove other items, or consolidate information?
gharchive/issue
2017-05-12T23:48:11
2025-04-01T06:36:45.820411
{ "authors": [ "alex-hancock" ], "repo": "BD2KGenomics/dcc-dashboard", "url": "https://github.com/BD2KGenomics/dcc-dashboard/issues/22", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
227172261
Use chunked transfer in AWS job store importing Resolves #1515 @cket i tested this branch and it still raised the [Errno 104] Connection reset by peer Exception Thanks @arkal for testing these changes as well! The 10 Gb import/export test was removed and will be added as an issue.
gharchive/pull-request
2017-05-08T20:57:35
2025-04-01T06:36:45.822007
{ "authors": [ "arkal", "cket", "ejacox" ], "repo": "BD2KGenomics/toil", "url": "https://github.com/BD2KGenomics/toil/pull/1669", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
687990276
added black and flake8 linters The black code formatter in Python is an opinionated tool that formats your code in the best way possible and Flake8 is a powerful tool that checks our code’s compliance to PEP8. This will format and enhance the code quality whenever there is a PR in the repository or any changes done to .py file. Fixes: #22 @RC99 I am participating through BITSoC. Please review.
gharchive/pull-request
2020-08-28T10:53:48
2025-04-01T06:36:45.964517
{ "authors": [ "iamrajiv" ], "repo": "BITSoC/EmotionRecog", "url": "https://github.com/BITSoC/EmotionRecog/pull/24", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1062991121
🛑 Repo BSrE is down In 466163b, Repo BSrE (https://gitlab-bsre.bssn.go.id/users/sign_in) was down: HTTP code: 0 Response time: 0 ms Resolved: Repo BSrE is back up in 5f79828.
gharchive/issue
2021-11-24T23:14:43
2025-04-01T06:36:46.020952
{ "authors": [ "BSrE-ID" ], "repo": "BSrE-ID/monitor", "url": "https://github.com/BSrE-ID/monitor/issues/90", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
271369241
ItemGroupFromSeparatedList build task should accept an empty string and return an empty ItemGroup ItemGroupFromSeparatedList build task should accept an empty string and return an empty ItemGroup This work item was migrated from CodePlex CodePlex work item ID: '6054' Assigned to: 'tfabraham' Vote count: '0' [UnknownUser@2/18/2010] Resolved with changeset 36776. [UnknownUser@2/21/2013] [UnknownUser@5/16/2013]
gharchive/issue
2017-11-06T06:14:05
2025-04-01T06:36:46.029086
{ "authors": [ "tfabraham" ], "repo": "BTDF/DeploymentFramework", "url": "https://github.com/BTDF/DeploymentFramework/issues/90", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
271370907
Feature: Add Knowledge of Inter-Application Dependency Stack If application B relies on A, B has to be undeployed to redeploy A (for example B uses schemas contained in A). It would really be cool if this "stack" could be defined in BTDF. So if I'm fixing something in project A, B has to be undeployed, A has to be redeployed, then B has to be redployed. This would be a feature only for the development environment. For the QA/production environment MSI install, a super-deploy of multiple MSI's in the proper order would also be very nice. Most applications that I've worked with are 5 to 25 inter-related applications with dependencies. BTDF still is very helpful. At one client, we had a spreadsheet of about 25 BT applications to deploy, and the proper order. There was still a large chance the the admins would mess up the deploy if they were not careful. This work item was migrated from CodePlex CodePlex work item ID: '6130' Vote count: '5' [giuliov@3/9/2010] I addressed this with a Powershell deploy script that works at a higher-level, and I think that is a more generic approach: in my script I take care of other pieces outside the BTS applications. [tfabraham@3/9/2010] I also believe that this belongs in a layer that exists above the individual solutions that use the Deployment Framework. A meta-script that understands the dependencies and drives all of the individual solution deploy/undeploy cycles across multiple servers. I do think that such a script can become part of the Deployment Framework package, just not a "core" component of the Framework itself. [UnknownUser@3/11/2010] [Rokhead@3/11/2010] I could see where this would be useful. Right now, it is a matter of the developer just "knowing" the stack, i.e. Install A, then B, then C. Uninstall in reverse, C then B, then A. [UnknownUser@5/18/2010] [UnknownUser@6/7/2011] [fkuiper@8/26/2011] I've written a MsBuild script that does exactly what is in the description. It utilizes the targets 'DependsOn' attributes to maintain the stack so I've created a target for each package. Well actually I've create three targets for each package: PackageA, PacakageA_Remove and PackageA_Add. The first target is dependent on the last two: PackageA --> dependson="PackageA_Remove;PackageA_Add" If you add PackageB that relies on PackageA you also create the same three targets, but now you add two extra target dependencies: PackageA_Remove --> dependson="PackageB_Remove" PackageB_Add --> dependson="PackageA_Add" The cool thing is that you know let MsBuild figure out in which order your packages need to be undeployed and deployed. If you have a stack of 25 packages and you need to add a new one you don't have to rearrange you whole script (as we used to do) but let MsBuild figure it out run time... as long as you have your dependencies in place. [UnknownUser@10/20/2011] [charliemott@11/14/2012] This functionality will be provided within the BizTalk Administration Console in BizTalk 2013. See here: http://adventuresinsidethemessagebox.wordpress.com/2012/11/07/biztalk-2013-beta-new-features-dependency-modelling-in-the-administration-console/ [UnknownUser@2/21/2013] [sandernefs@5/10/2013] Hi Ferdinand, I believe you mean that you use MSBuild to figure out the dependencies and that you only need some sort of 'config' file, which would make life a lot easier. Are you willing to share this script here as well? 
Regards, Sander [fkuiper@5/17/2013] Hi Sander, I unfortunately can't share the entire script here (there is to much business information in it), but I can share some highlights :) What I've done is I've written a MSBuild target file containing 4 targets: Install feature Deploy feature Uninstall feature Undeploy feature (a feature in this context is one BTDF-msi by the way) The install feature target looks something like this: <Target Name="InstallFeature"> <Message Text="Installing '$(FeatureName)'..." /> <!-- Install and copy MSI to install dir --> <Exec Command="msiexec /i &quot;$(FeatureName)-1.0.0.msi&quot; /passive" WorkingDirectory="..\Packages" /> <CreateItem Include="..\Packages\$(FeatureName)-1.0.0.msi"> <Output ItemName="MsiToCopy" TaskParameter="Include" /> </CreateItem> <Copy SourceFiles="@(MsiToCopy)" DestinationFiles="@(MsiToCopy-&gt;'c:\Program Files (x86)\$(FeatureName) for BizTalk\%(FileName)%(Extension)')" /> </Target> And the Deploy feature something like this: <Target Name="DeployFeature"> <!-- Start deployment --> <Exec Command=".\Framework\DeployTools\EnvironmentSettingsExporter.exe EnvironmentSettings\SettingsFileGenerator.xml EnvironmentSettings" WorkingDirectory="c:\Program Files (x86)\$(FeatureName) for BizTalk\1.0\Deployment\" Condition="Exists('c:\Program Files (x86)\$(FeatureName) for BizTalk\1.0\Deployment\EnvironmentSettings\SettingsFileGenerator.xml')" /> <MsBuild Projects="c:\Program Files (x86)\$(FeatureName) for BizTalk\1.0\Deployment\$(FeatureName).Deployment.btdfproj" Properties="DeployBizTalkMgmtDB=$(BT_DEPLOY_MGMT_DB);Configuration=Server;SkipUndeploy=true;ENV_SETTINGS=c:\Program Files (x86)\$(FeatureName) for BizTalk\1.0\Deployment\$(ENV_SETTINGS_MASK)" /> </Target> Uninstall and undeploy are very simular in setup. Ofcourse these target in itself will not do anything. There are more scripts. One is the 'main'-project file which will import all 'features' and contain the main targets 'Deploy' and 'Undeploy' <Import Project="Deployment.Targets" /> <Import Project="Feature.ApplicationA.config" /> <Import Project="Feature.ApplicationB.config" /> <!-- Deploy alle packages --> <Target Name="Deploy"> <CallTarget Targets="@(BizTalkApplication-&gt;'%(Identity)_Add')" /> </Target> <!-- Undeploy alle packages --> <Target Name="Undeploy"> <CallTarget Targets="@(BizTalkApplication-&gt;'%(Identity)_Remove')" /> </Target> Last but not least there are the "feature config's" and that's where most of the magic happens: <ItemGroup> <BizTalkApplication Include="ApplicationA"/> </ItemGroup> <Target Name="ApplicationA_Add" DependsOnTargets=""> <MsBuild Projects="$(MSBuildProjectFile)" Targets="InstallFeature" Properties="FeatureName=ApplicationA" /> <MsBuild Projects="$(MSBuildProjectFile)" Targets="DeployFeature" Properties="FeatureName=ApplicationA" /> </Target> <Target Name="ApplicationA_Remove" DependsOnTargets=""> <MsBuild Projects="$(MSBuildProjectFile)" Targets="UndeployFeature" Properties="FeatureName=ApplicationA" /> <MsBuild Projects="$(MSBuildProjectFile)" Targets="UninstallFeature" Properties="FeatureName=ApplicationA" /> </Target> The config for feature B contains exactly the same as the above, but ofcourse you have to replace ApplicationA with ApplicationB. When you start the main msbuild file and call target "Deploy" all the features are installed and deployed and removed in sequence. 
Now for the magic to happen you can use the "DependsOn" attribute for the "ApplicationA_Add" target like this: <Target Name="ApplicationA_Add" DependsOnTargets="ApplicationB_Add"> ... </Target> Ofcourse you have to add the same dependancy to feature B when removing it, like this: <Target Name="ApplicationB_Remove" DependsOnTargets="ApplicationA_Remove"> ... </Target> With this setting ApplicationB will allway be installed and deployed BEFORE ApplicationA and ApplicationA will allways be remove BEFORE ApplicationB. And that's how I've solved the dependancy tree problem for our 27 (or so) individual BTDF-msi's. Hope this will be helpfull to you. Kind Regards, Ferdinand. [UnknownUser@10/14/2013]
gharchive/issue
2017-11-06T06:25:00
2025-04-01T06:36:46.045119
{ "authors": [ "tfabraham" ], "repo": "BTDF/DeploymentFramework", "url": "https://github.com/BTDF/DeploymentFramework/issues/98", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
994484487
Ambient Lighting Support Removed Was erroneously removed during the renderlist rewrite, needs to be part of the pbr render list. @JohnNagle Published in rend3-pbr 0.1.1
gharchive/issue
2021-09-13T05:56:45
2025-04-01T06:36:46.052772
{ "authors": [ "cwfitzgerald" ], "repo": "BVE-Reborn/rend3", "url": "https://github.com/BVE-Reborn/rend3/issues/188", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1790397403
Add CloudWatch Logging Added function that will write a log containing information about query parameters to AWS CloudWatch. Requires an AWS client to be present in the environment that can be accessed with boto3, as well as environment variables AWS_LOG_GROUP_NAME and AWS_LOG_STREAM_NAME. The values I have been using for these are 'data-service-queries-group' and 'data-service-queries-stream', respectively, but something like 'geoglows-service-queries-*' may be more appropriate, that is open for discussion. @rileyhales @msouff Hi @J-Ogden99 I'm merging your cloudwatch-logging branch directly into the branch I'm currently working on so I'll close this PR.
gharchive/pull-request
2023-07-05T21:46:01
2025-04-01T06:36:46.072801
{ "authors": [ "J-Ogden99", "msouff" ], "repo": "BYU-Hydroinformatics/geoglows-rest-api", "url": "https://github.com/BYU-Hydroinformatics/geoglows-rest-api/pull/18", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
778449775
Add PS5 DualSense support to Babylon.js Once the DualSense gamepad is fully supported by PC and web, support for it should be added to Babylon.js' input systems. @PolygonalSun can we close it?? @PolygonalSun can we close it?? None of the work for this has been merged into master but if we're fine with adding it now, I can have a PR up with a day or so. It only required a new button enum, value in the DeviceType enum, and updated detection logic (one additional line of code?). Yes let's do it!
gharchive/issue
2021-01-04T23:35:16
2025-04-01T06:36:46.075045
{ "authors": [ "PolygonalSun", "deltakosh" ], "repo": "BabylonJS/Babylon.js", "url": "https://github.com/BabylonJS/Babylon.js/issues/9738", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2248527097
Allow shader precompile by dividing effect and thinEngine This PR presents a few changes to Effect and ThinEngine architecture to allow shader precompilation before an engine is constructed. Using the following code will allow you to create a (WebGL) Pipeline context using an existing context. While this is running you can do any other async tasks, like loading the Engine and its dependencies. Note - the shader is being cached, so it will not be recompiled even if Babylon requests it. The important part is maintaining the generated shader name so that when an effect is created based on the shader code it will take the compiled shader from cache. This code will precompile a provided shader:

import { generatePipelineContext } from '@babylonjs/core/Materials/effect.functions';
import { _preparePipelineContext, _stateObject, createPipelineContext } from '@babylonjs/core/Engines/thinEngine.functions';

export async function compileShader(
    id: string,
    context: WebGL2RenderingContext | WebGLRenderingContext,
    options: {
        vertex: string;
        fragment: string;
    },
): Promise<void> {
    await generatePipelineContext(
        {
            shaderNameOrContent: {
                vertexSource: options.vertex,
                fragmentSource: options.fragment,
            },
            key: id,
        },
        context,
        createPipelineContext,
        _preparePipelineContext,
    );
}

Of course it can be a little more complex than that. It is technically possible to "serialize" existing shaders and then pre-load them. Having said that, precompiling the standard shader(s) might be a bit challenging, depending on many different factors. Please make sure to label your PR with "bug", "new feature" or "breaking change" label(s). To prevent this PR from going to the changelog marked it with the "skip changelog" label.
WebGL2 visualization test reporter: https://babylonsnapshots.z22.web.core.windows.net/refs/pull/14996/merge/testResults/webgl2playwright/index.html
Visualization tests for WebGPU (Experimental) Important - these might fail sporadically. This is an optional test. https://babylonsnapshots.z22.web.core.windows.net/refs/pull/14996/merge/testResults/webgpuplaywright/index.html
Visualization tests for WebGL 1 have failed. If some tests failed because the snapshots do not match, the report can be found at https://babylonsnapshots.z22.web.core.windows.net/refs/pull/14996/merge/testResults/webgl1/index.html If tests were successful afterwards, this report might not be available anymore.
https://babylonsnapshots.z22.web.core.windows.net/refs/pull/14996/merge/testResults/webgpuplaywright/index.html Visualization tests for WebGPU (Experimental) Important - these might fail sporadically. This is an optional test. https://babylonsnapshots.z22.web.core.windows.net/refs/pull/14996/merge/testResults/webgpuplaywright/index.html LGTM, I think the only issue is the activeRequests that should stay attached to one engine and therefore not set in PreCompile case Moved activerequests back to the engine. if someone use the .functions function they will need to deal with disposing the requests themselves Discussed with @sebavan , merging this (dismissing his review)
gharchive/pull-request
2024-04-17T15:05:54
2025-04-01T06:36:46.086120
{ "authors": [ "RaananW", "bjsplat" ], "repo": "BabylonJS/Babylon.js", "url": "https://github.com/BabylonJS/Babylon.js/pull/14996", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
323592664
FR: SSL authentication (mysql) Hi, I would like to propose support for authenticating via ssl (corresponding to the following command line arguments of the mysql client: --ssl-ca --ssl-cert --ssl-key). Martin I think the feature was implemented in version 2.0.1. Could you check if everything works? @mleopold
gharchive/issue
2018-05-16T12:09:48
2025-04-01T06:36:46.115243
{ "authors": [ "Bajdzis", "mleopold" ], "repo": "Bajdzis/vscode-database", "url": "https://github.com/Bajdzis/vscode-database/issues/47", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1524135710
Validate user input with express-validator or joi https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs/forms I would like to try to solve this issue, but could you tell me which form you need to validate (file path) Hello @martinyis, Thank you for your interest in this issue. The idea here is to validate user inputs in general. E.g.: login, add user... You can find forms in the views directory. You can also find the corresponding controllers in the controller folder. Are you able to set up the project? My preference is express-validator, but feel free to choose what you want https://www.freecodecamp.org/news/how-to-choose-which-validator-to-use-a-comparison-between-joi-express-validator-ac0b910c1a8c/
gharchive/issue
2023-01-07T20:40:12
2025-04-01T06:36:46.120853
{ "authors": [ "Bam92", "martinyis" ], "repo": "Bam92/attendancy-gda", "url": "https://github.com/Bam92/attendancy-gda/issues/102", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2159083133
allow ability to upload files need to be able to press the upload button and have it do something completed
gharchive/issue
2024-02-28T14:20:58
2025-04-01T06:36:46.128278
{ "authors": [ "mmills6060" ], "repo": "Banbury-inc/Athena", "url": "https://github.com/Banbury-inc/Athena/issues/12", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
596771555
Update Threading Semantics This commit makes a few changes related to the threading setup in ConsumerApi and ProducerApi. It addresses unclean shutdown of the java.util.concurrent.ExecutorService as well as fixing a potential deadlock scenario. Prior to this commit an ExecutorService with a single thread was created in ConsumerApi.BlockingContext. For blocking IO operations an unbounded thread pool is recommended (https://typelevel.org/cats-effect/concurrency/basics.html#choosing-thread-pool). Preventing unbounded resource usage is the responsibility for calling code running on a bounded thread pool. This commit also updates the ThreadFactory so that if multiple pools happen to be created, they will be given globally unique names. Two new methods are added to the companion object of ConsumerApi to allow for the caller to provider their own Blocker if so desired. Also, the shutdown of the thread pool prior to this commit only called (es: ExecutorService).shutdown(). This is not sufficient to shutdown an ExecutorService. The full process is actually quite involved (https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/ExecutorService.html). Thankfully cats-effect already provides this logic for us out of the box with Blocker.fromExecutorService. Both the single threaded nature of the blocker and the shutdown semantics were issues. The impetus for this commit is that we were experiencing a deadlock around shutdown/restart of a fs2.Stream with Kafka4s related code. I strongly believe that one or both of these is the source of that issue, however even if it is not, these items should still be addressed. This is a binary incompatible change. @isomarcte Build failed: The command "sbt ++$TRAVIS_SCALA_VERSION "scalafmtSbtCheck;scalafmtCheckAll"" exited with 1. @isomarcte Is this still under development? There is now a conflict if we are still looking at this. I think we are considering other approaches now. So I'll close this.
gharchive/pull-request
2020-04-08T18:26:29
2025-04-01T06:36:46.155011
{ "authors": [ "coacoas", "isomarcte" ], "repo": "Banno/kafka4s", "url": "https://github.com/Banno/kafka4s/pull/179", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2238198860
🛑 Hay River Health and Social Services Authority is down In 5c827f2, Hay River Health and Social Services Authority (https://www.hrhssa.org/) was down: HTTP code: 0 Response time: 0 ms Resolved: Hay River Health and Social Services Authority is back up in 60883c5 after 7 minutes.
gharchive/issue
2024-04-11T17:21:13
2025-04-01T06:36:46.167853
{ "authors": [ "Barctic" ], "repo": "Barctic/gnwt-monitor", "url": "https://github.com/Barctic/gnwt-monitor/issues/10752", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2276527933
🛑 Lands is down In a8cf9fe, Lands (https://www.lands.gov.nt.ca) was down: HTTP code: 503 Response time: 8089 ms Resolved: Lands is back up in 7cb3231 after 7 minutes.
gharchive/issue
2024-05-02T21:27:05
2025-04-01T06:36:46.170202
{ "authors": [ "Barctic" ], "repo": "Barctic/gnwt-monitor", "url": "https://github.com/Barctic/gnwt-monitor/issues/11129", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2704433982
🛑 Education Culture and Employment is down In 2f9d48c, Education Culture and Employment (https://www.ece.gov.nt.ca) was down: HTTP code: 503 Response time: 10243 ms Resolved: Education Culture and Employment is back up in 3d79942 after 9 minutes.
gharchive/issue
2024-11-29T09:00:41
2025-04-01T06:36:46.172618
{ "authors": [ "Barctic" ], "repo": "Barctic/gnwt-monitor", "url": "https://github.com/Barctic/gnwt-monitor/issues/13151", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1300816223
🛑 Raystown Lake is down In f61da82, Raystown Lake (https://www.raystown.org/ping.html) was down: HTTP code: 0 Response time: 0 ms Resolved: Raystown Lake is back up in 4164e30.
gharchive/issue
2022-07-11T14:44:28
2025-04-01T06:36:46.175277
{ "authors": [ "risadams" ], "repo": "BarkleyREI-ArchiTECH/ArchiTECH-upptime", "url": "https://github.com/BarkleyREI-ArchiTECH/ArchiTECH-upptime/issues/250", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2205826478
fix: opt in to import.meta.* properties Types of changes [x] Bug fix (a non-breaking change which fixes an issue) [ ] New feature (a non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to change) Description This is a very early PR to make this module compatible with changes we expect to release in Nuxt v5. In Nuxt v3.7.0 we added support for import.meta.* (see original PR) and we've been gradually updating docs and moving across from the old process.* patterned variables. As I'm sure you're aware, these variables are replaced at build-time and enable tree-shaking in bundled code. This change affects runtime code (that is, that is processed by the Nuxt bundler, like vite or webpack) rather than code running in Node. So it really doesn't matter what the string is, but it makes more sense in an ESM-world to use import.meta rather than process. (It might be worth updating the module compatibility as well to indicate it needs to have Nuxt v3.7.0+, but I'll leave that with you if you think this is a good approach.) Checklist: [ ] My change requires a change to the documentation. [ ] I have updated the documentation accordingly. [ ] I have added tests to cover my changes (if not applicable, please state why) Hey @danielroe Thank you si much for this PR! I will merge it alongside other features and fixes for 1.3.0 version ;)
gharchive/pull-request
2024-03-25T13:56:08
2025-04-01T06:36:46.185538
{ "authors": [ "Baroshem", "danielroe" ], "repo": "Baroshem/nuxt-security", "url": "https://github.com/Baroshem/nuxt-security/pull/406", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
320600512
Compose a contact information block Links to social networks (Facebook or Instagram, others), phone number or email address, etc. Perhaps not every link in the world, but only those that are really relevant. Phone (Viber, Telegram): +380683627146 E-mail: kalitovskyi.bohdan@gmail.com LinkedIn: linkedin.com/in/bohdan-kalitovskyi/ Facebook: facebook.com/barracuda713
gharchive/issue
2018-05-06T15:08:04
2025-04-01T06:36:46.312146
{ "authors": [ "Barracuda713" ], "repo": "Barracuda713/homepage", "url": "https://github.com/Barracuda713/homepage/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
829721424
!myakish doesn't work When I type in !myakish it doesn't kick anyone out @ManobsTheChobs it's supposed to give you admin perms.
gharchive/issue
2021-03-12T03:23:41
2025-04-01T06:36:46.313476
{ "authors": [ "Barsik008", "ManobsTheChobs" ], "repo": "Barsik008/PossumBot", "url": "https://github.com/Barsik008/PossumBot/issues/98", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2636691494
Rust bindings Just use rust-lang/rust-bindgen? Also, Rust rewrite when??? (not happening lol but yes) I'm doing this.
gharchive/issue
2024-11-05T23:16:50
2025-04-01T06:36:46.327582
{ "authors": [ "DataM0del" ], "repo": "BasedInc/libhat", "url": "https://github.com/BasedInc/libhat/issues/24", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
783690866
Add themes/base.theme.bash to clean files Description Review with ?w=1. Add themes/base.theme.bash to clean files Run code formatter for themes/base.theme.bash Fix shellcheck warnings for themes/base.theme.bash Fix shellcheck header script to consider multiple lines Motivation and Context I've picked this file because it is the file with the most changes (commits, at least) based on: git log --name-only --pretty="format:" | grep -v -e "^[[:space:]]*$" | sort | uniq -c | sort I thought about adding other files, but that would probably make it harder to review. How Has This Been Tested? Besides running the bats tests locally, I've manually tested the most significant changes in isolation (checking that they result in the same effect as before). Screenshots (if appropriate): Types of changes [ ] Bug fix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to change) [ ] File linting Checklist: [x] My code follows the code style of this project. [x] If my change requires a change to the documentation, I have updated the documentation accordingly. [x] I have read the CONTRIBUTING document. [x] If I have added a new file, I also added it to clean_files.txt and formatted it using lint_clean_files.sh. [x] I have added tests to cover my changes, and all the new and existing tests pass. Thanks for reviewing, @NoahGorny and @davidpfarrell. I've removed the changes in dots-bash.sh. Thanks, @NoahGorny. So, I can invest some more time in this cleanup task, but perhaps just prioritizing files that have many changes isn't the best way to do it, right? Are there any specific areas where you think it would be more relevant now? I think that going after the core files, like the file you fixed here, is a great idea. Just make sure there are no pending PRs for the file, and lint away :smile: For hints, I think bash_it.sh is a challenging file worth cleaning up, and the lib directory also contains important files. Thank you for doing this @marcospereira !
gharchive/pull-request
2021-01-11T20:44:06
2025-04-01T06:36:46.405558
{ "authors": [ "NoahGorny", "marcospereira" ], "repo": "Bash-it/bash-it", "url": "https://github.com/Bash-it/bash-it/pull/1785", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2370442035
fix: adding information about delay in tokens in list and search Description Documenting the slight delay in token availability in listing and search Testing required outside of automated testing? [ ] Not Applicable Screenshots (if appropriate): [ ] Not Applicable Rollback / Rollforward Procedure [ ] Roll Forward [ ] Roll Back Reviewer Checklist [ ] Description of Change [ ] Description of outside testing if applicable. [ ] Description of Roll Forward / Backward Procedure [ ] Documentation updated for Change :tada: This PR is included in version 1.160.1 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
gharchive/pull-request
2024-06-24T14:44:44
2025-04-01T06:36:46.410443
{ "authors": [ "armsteadj1", "bt-platform-eng" ], "repo": "Basis-Theory/developers.basistheory.com", "url": "https://github.com/Basis-Theory/developers.basistheory.com/pull/403", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
380654663
Adding Fast Quadric Mesh Simplification support Working on implementing this really cool piece of tech: https://github.com/sp4cerat/Fast-Quadric-Mesh-Simplification This takes in a mesh, optimizes the vertices and faces to a reduction factor, and gives you a resulting mesh. Note that due to the way meshes are loaded into Godot, multi-material meshes will always have seam issues. But in combination with game optimization that turns a mesh into a single-material mesh with all textures baked into one, this will be a powerful automatic LOD tool. Lots of work left to be done. Need to remove duplicate vertices to solve seam issues. Have to add texture coordinates, etc. UVs and seams work fine now; the last thing to do is normals :) OK, I have normals and tangents working, though tangents seem inverted somehow (or maybe they are wrong on the source object). I think it's time to merge this
gharchive/pull-request
2018-11-14T11:25:58
2025-04-01T06:36:46.413027
{ "authors": [ "BastiaanOlij" ], "repo": "BastiaanOlij/gdprocmesh", "url": "https://github.com/BastiaanOlij/gdprocmesh/pull/25", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1885271953
Can't export the model posterior to the anndata object Hi, I am running cell2fate on a M1 MBP and as you know NVIDIA GPU, CUDA with Pytorch will not work in ARM Macs. I can train the model [mod.train()], the following message appears: GPU available: False, used: False TPU available: False, using: 0 TPU cores IPU available: False, using: 0 IPUs But when I export the model posterior to the anndata object with adata = mod.export_posterior(adata), I get the following issue: AssertionError Traceback (most recent call last) Cell In[17], line 1 ----> 1 adata = mod.export_posterior(adata) File ~/opt/anaconda3/envs/cell2fate_env/lib/python3.9/site-packages/cell2fate/_cell2fate_DynamicalModel.py:290, in Cell2fate_DynamicalModel.export_posterior(self, adata, sample_kwargs, export_slot, full_velocity_posterior, normalize) 286 sample_kwargs['batch_size'] = adata.n_obs 288 # generate samples from posterior distributions for all parameters 289 # and compute mean, 5%/95% quantiles and standard deviation --> 290 self.samples = self.sample_posterior(**sample_kwargs) 292 # export posterior distribution summary for all parameters and 293 # annotation (model, date, var, obs and cell type names) to anndata object 294 adata.uns[export_slot] = self._export2adata(self.samples) File ~/opt/anaconda3/envs/cell2fate_env/lib/python3.9/site-packages/scvi/model/base/_pyromixin.py:483, in PyroSampleMixin.sample_posterior(self, num_samples, return_sites, use_gpu, batch_size, return_observed, return_samples, summary_fun) 436 """ 437 Summarise posterior distribution. 438 (...) 480 to keep all model-specific variables in one place. 481 """ 482 # sample using minibatches (if full data, data is moved to GPU only once anyway) --> 483 samples = self._posterior_samples_minibatch( 484 use_gpu=use_gpu, 485 batch_size=batch_size, 486 num_samples=num_samples, 487 return_sites=return_sites, 488 return_observed=return_observed, 489 ) 491 param_names = list(samples.keys()) 492 results = dict() File ~/opt/anaconda3/envs/cell2fate_env/lib/python3.9/site-packages/cell2fate/_cell2fate_DynamicalModel.py:796, in Cell2fate_DynamicalModel._posterior_samples_minibatch(self, use_gpu, batch_size, **sample_kwargs) 779 """ 780 Temporary solution for batch sampling problem. 781 (...) 792 dictionary {variable_name: [array with samples in 0 dimension]} 793 """ 794 samples = dict() --> 796 _, device = parse_use_gpu_arg(use_gpu) 798 batch_size = batch_size if batch_size is not None else settings.batch_size 800 train_dl = AnnDataLoader( 801 self.adata_manager, shuffle=False, batch_size=batch_size 802 ) File ~/opt/anaconda3/envs/cell2fate_env/lib/python3.9/site-packages/scvi/model/_utils.py:40, in parse_use_gpu_arg(use_gpu, return_device) 38 device = torch.device("cpu") 39 elif (use_gpu is None and gpu_available) or (use_gpu is True): ---> 40 current = torch.cuda.current_device() 41 device = torch.device(current) 42 gpus = [current] File ~/opt/anaconda3/envs/cell2fate_env/lib/python3.9/site-packages/torch/cuda/init.py:481, in current_device() 479 def current_device() -> int: 480 r"""Returns the index of a currently selected device.""" --> 481 _lazy_init() 482 return torch._C._cuda_getDevice() File ~/opt/anaconda3/envs/cell2fate_env/lib/python3.9/site-packages/torch/cuda/init.py:210, in _lazy_init() 206 raise RuntimeError( 207 "Cannot re-initialize CUDA in forked subprocess. 
To use CUDA with " 208 "multiprocessing, you must use the 'spawn' start method") 209 if not hasattr(torch._C, '_cuda_getDeviceCount'): --> 210 raise AssertionError("Torch not compiled with CUDA enabled") 211 if _cudart is None: 212 raise AssertionError( 213 "libcudart functions unavailable. It looks like you have a broken build?") AssertionError: Torch not compiled with CUDA enabled Looks like I could edit some of the functions to not expect the usage of CUDA, but I am not sure where that should be done. If you could please help me, I would love to be able to use this package in my data. Thanks! Hello @joaoufrj, Could you please try setting the use_gpu argument to False in the sample_kwargs as follows: sample_kwarg = {"num_samples": 20, "batch_size" : 2000, "use_gpu" : False, 'return_samples': True} adata = mod.export_posterior(adata, sample_kwargs=sample_kwarg) With this change, you should be able to export posteriors without using CUDA.
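For completeness, this is roughly how that workaround fits together, a sketch only: `mod` and `adata` are the objects already created above, and the torch call is just a sanity check that no CUDA device is visible on the machine.

import torch

# On an Apple-silicon Mac there is no CUDA device, so this is False,
# and CPU sampling is used for the posterior export.
use_gpu = torch.cuda.is_available()

sample_kwargs = {
    "num_samples": 20,
    "batch_size": 2000,
    "use_gpu": use_gpu,
    "return_samples": True,
}
# `mod` and `adata` are the trained cell2fate model and AnnData object
# from the snippet above.
adata = mod.export_posterior(adata, sample_kwargs=sample_kwargs)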
gharchive/issue
2023-09-07T07:22:58
2025-04-01T06:36:46.458464
{ "authors": [ "joaoufrj", "sezginerr" ], "repo": "BayraktarLab/cell2fate", "url": "https://github.com/BayraktarLab/cell2fate/issues/7", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1369610660
AttributeError: module 'cell2location' has no attribute 'run_cell2location' [x] I have confirmed this bug exists on the latest version of cell2location. See https://github.com/BayraktarLab/cell2location#installation [ ] I follow the instructions from the scvi-tools tutorial. Note: Please read this guide detailing how to provide the necessary information for us to reproduce your bug. Minimal code sample (that we can run without your data, using public data) Hi,guys! When I run the official process ( https://cell2location.readthedocs.io/en/latest/notebooks/cell2location_short_demo.html), I meet the trouble: AttributeError: module 'cell2location' has no attribute 'run_cell2location'. I install it according to the official website(https://github.com/BayraktarLab/cell2location). import sys import scanpy as sc import anndata import pandas as pd import numpy as np import os import gc import cell2location import matplotlib as mpl from matplotlib import rcParams import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings('ignore') #The official website process has been omitted( https://cell2location.readthedocs.io/en/latest/notebooks/cell2location_short_demo.html) sc.settings.set_figure_params(dpi = 100, color_map = 'viridis', dpi_save = 100, vector_friendly = True, format = 'pdf', facecolor='white') r = cell2location.run_cell2location( # Single cell reference signatures as pd.DataFrame # (could also be data as anndata object for estimating signatures # as cluster average expression - `sc_data=adata_snrna_raw`) sc_data=inf_aver, # Spatial data as anndata object sp_data=adata_vis, # the column in sc_data.obs that gives cluster idenitity of each cell summ_sc_data_args={'cluster_col': "annotation_1", }, train_args={'use_raw': True, # By default uses raw slots in both of the input datasets. 
'n_iter': 40000, # Increase the number of iterations if needed (see QC below) # Whe analysing the data that contains multiple experiments, # cell2location automatically enters the mode which pools information across experiments 'sample_name_col': 'sample'}, # Column in sp_data.obs with experiment ID (see above) export_args={'path': results_folder, # path where to save results 'run_name_suffix': '' # optinal suffix to modify the name the run }, model_kwargs={ # Prior on the number of cells, cell types and co-located groups 'cell_number_prior': { # - N - the expected number of cells per location: 'cells_per_spot': 8, # < - change this # - A - the expected number of cell types per location (use default): 'factors_per_spot': 7, # - Y - the expected number of co-located cell type groups per location (use default): 'combs_per_spot': 7 }, # Prior beliefs on the sensitivity of spatial technology: 'gene_level_prior':{ # Prior on the mean 'mean': 1/2, # Prior on standard deviation, # a good choice of this value should be at least 2 times lower that the mean 'sd': 1/4 } } ) sc.logging.print_versions() anndata 0.8.0 scanpy 1.9.1 PIL 9.2.0 absl NA asttokens NA attr 22.1.0 backcall 0.2.0 beta_ufunc NA binom_ufunc NA cell2location NA cffi 1.15.1 chex 0.1.4 colorama 0.4.5 cycler 0.10.0 cython_runtime NA dateutil 2.8.2 decorator 5.1.1 defusedxml 0.7.1 deprecate 0.3.2 docrep 0.3.2 entrypoints 0.4 etils 0.7.1 executing 0.10.0 flax 0.6.0 fsspec 2022.7.1 google NA h5py 3.7.0 hypergeom_ufunc NA igraph 0.9.11 ipykernel 6.15.1 ipython_genutils 0.2.0 ipywidgets 7.7.1 jax 0.3.16 jaxlib 0.3.15 jedi 0.18.1 joblib 1.1.0 kiwisolver 1.4.4 leidenalg 0.8.10 llvmlite 0.39.0 matplotlib 3.5.3 matplotlib_inline 0.1.5 mpl_toolkits NA msgpack 1.0.4 mudata 0.2.0 multipledispatch 0.6.0 natsort 8.1.0 nbinom_ufunc NA ncf_ufunc NA numba 0.56.0 numpy 1.22.4 numpyro 0.10.0 opt_einsum v3.3.0 optax 0.1.3 packaging 21.3 pandas 1.4.3 parso 0.8.3 pexpect 4.8.0 pickleshare 0.7.5 pkg_resources NA prompt_toolkit 3.0.30 psutil 5.9.1 ptyprocess 0.7.0 pure_eval 0.2.2 pycparser 2.21 pygments 2.13.0 pynndescent 0.5.7 pyparsing 3.0.9 pyro 1.8.1 pytorch_lightning 1.6.5 pytz 2022.2.1 rich NA scipy 1.9.0 scvi 0.17.1 seaborn 0.11.2 session_info 1.0.0 setuptools 65.0.1 six 1.16.0 sklearn 1.1.2 stack_data 0.4.0 statsmodels 0.13.2 tensorboard 2.9.0 texttable 1.6.4 threadpoolctl 3.1.0 toolz 0.12.0 torch 1.12.1+cu102 torchmetrics 0.9.3 tornado 6.2 tqdm 4.64.0 traitlets 5.3.0 tree 0.1.7 typing_extensions NA umap 0.5.3 wcwidth 0.2.5 yaml 6.0 zipp NA zmq 23.2.1 IPython 8.4.0 jupyter_client 7.3.4 jupyter_core 4.11.1 notebook 6.4.12 Python 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:58:50) [GCC 10.3.0] Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17 Session information updated at 2022-09-12 17:53 ## AttributeError Traceback (most recent call last) Input In [19], in <cell line: 1>() ----> 1 r = cell2location.run_cell2location( 2 3 # Single cell reference signatures as pd.DataFrame 4 # (could also be data as anndata object for estimating signatures 5 # as cluster average expression - `sc_data=adata_snrna_raw`) 6 sc_data=inf_aver, 7 # Spatial data as anndata object 8 sp_data=adata_vis, 9 10 # the column in sc_data.obs that gives cluster idenitity of each cell 11 summ_sc_data_args={'cluster_col': "annotation_1", 12 }, 13 14 train_args={'use_raw': True, # By default uses raw slots in both of the input datasets. 
15 'n_iter': 40000, # Increase the number of iterations if needed (see QC below) 16 17 # Whe analysing the data that contains multiple experiments, 18 # cell2location automatically enters the mode which pools information across experiments 19 'sample_name_col': 'sample'}, # Column in sp_data.obs with experiment ID (see above) 20 21 22 export_args={'path': results_folder, # path where to save results 23 'run_name_suffix': '' # optinal suffix to modify the name the run 24 }, 25 26 model_kwargs={ # Prior on the number of cells, cell types and co-located groups 27 28 'cell_number_prior': { 29 # - N - the expected number of cells per location: 30 'cells_per_spot': 8, # < - change this 31 # - A - the expected number of cell types per location (use default): 32 'factors_per_spot': 7, 33 # - Y - the expected number of co-located cell type groups per location (use default): 34 'combs_per_spot': 7 35 }, 36 37 # Prior beliefs on the sensitivity of spatial technology: 38 'gene_level_prior':{ 39 # Prior on the mean 40 'mean': 1/2, 41 # Prior on standard deviation, 42 # a good choice of this value should be at least 2 times lower that the mean 43 'sd': 1/4 44 } 45 } 46 ) AttributeError: module 'cell2location' has no attribute 'run_cell2location' I am getting the same error message. Were you able to find a solution? I am getting the same error message. Were you able to find a solution? Sorry, I still have no idea (@_@).
gharchive/issue
2022-09-12T10:04:08
2025-04-01T06:36:46.479381
{ "authors": [ "YaoZY157", "masoodlab" ], "repo": "BayraktarLab/cell2location", "url": "https://github.com/BayraktarLab/cell2location/issues/199", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2440528878
Fix HF section Checkout Points [ ] Check if renaming to PoC done [ ] Check if changing icons(5) done [ ] Check if adding HF to menu(in dropdown and footer) done Suggestion Suggest better approach can you send a screenshot of the 2023 and 2024 roadmap to confirm changes? Ty check this plz screenshot please ^^
gharchive/pull-request
2024-07-31T17:17:07
2025-04-01T06:36:46.505612
{ "authors": [ "Maxnflaxl", "messiisgreat" ], "repo": "BeamMW/beam-web", "url": "https://github.com/BeamMW/beam-web/pull/262", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1935962022
Add Ma'aruf Muhammad to Before I Die I am adding Ma'aruf Muhammad to Before I Die with images and text Hi @Rockleemode, Thank you for taking the time to contribute and share your aspirations on what you would like to do before you pass away. I kindly request you to run your React app on your local server to see your contributions working. Currently, we are encountering an error with the preview deployment that is related to the React version being used. To make this process smoother, please go through your code again, ensure that you can see your code working on your development server, and then recommit. I may have to go through the code manually to avoid the error of dependencies being shown in the terminal. I will merge your code as soon as possible. However, if you could assist me by running your code on your local server and recommitting it within the next ten hours, it would be greatly appreciated as it will save me time when reviewing. Thank you, @Rockleemode, and have a great day! Xander Yes, please go ahead @Rockleemode. Thank you for your patience, and I apologize for having to ask. I think after several pull requests we merged, the package.json file may have recently changed from main, and this is causing an issue when deploying for new contributors, as I'm now seeing the same issue in another pull request and will need to dive further into it. For now, recommit and we will see if this helps with the preview deployment. Thank you!
gharchive/pull-request
2023-10-10T18:29:45
2025-04-01T06:36:46.583856
{ "authors": [ "Rockleemode", "XanderRubio" ], "repo": "BeforeIDieCode/BeforeIDieAchievements", "url": "https://github.com/BeforeIDieCode/BeforeIDieAchievements/pull/176", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1944407542
Added profile for Brian Greig Added profile and pictures Also, @ignoreintuition if you have a LinkedIn let me know the link so I can mention you in our next Thank You Contributors post. Thank you! Absolutely @XanderRubio it's https://www.linkedin.com/in/bgreig/
gharchive/pull-request
2023-10-16T05:41:02
2025-04-01T06:36:46.586071
{ "authors": [ "XanderRubio", "ignoreintuition" ], "repo": "BeforeIDieCode/BeforeIDieAchievements", "url": "https://github.com/BeforeIDieCode/BeforeIDieAchievements/pull/205", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2679070135
Missing step does not exit with non-zero CLI code CLI output like: Feature: Checkbox Scenario: # tests-behat/checkbox.feature:3 Given I am on "form-control/checkbox.php" # Behat\MinkExtension\Context\MinkContext::visit() ... Then Toast display should contain text '...' 1 scenario (1 undefined) 8 steps (2 passed, 2 undefined, 4 skipped) 0m1.86s (12.32Mb) >> <snippet_undefined><snippet_keyword>main</snippet_keyword> suite has undefined steps. Please choose the context to generate snippets:</snippet_undefined> [0] None [1] Behat\MinkExtension\Context\MinkContext [2] Atk4\Ui\Behat\Context --- Behat\MinkExtension\Context\MinkContext has missing steps. Define them with these snippets: /** * @Then Toast display should contain text :arg4 */ public function toastDisplayShouldContainText($arg1, $arg2, $arg3, $arg4): void { throw new PendingException(); } silently exits with a zero CLI exit code, making it very hard to fail CI. This is already partially possible in Behat. The --strict option will consider that skipped or pending scenarios should make the run use a failure exit code. Maybe we need a third way of interpreting results which would allow skipped scenarios but reject pending ones. Thank you very much for mentioning the --strict option - it does exactly what I want. I personally would make it the default, as CI should fail in case of undefined steps.
gharchive/issue
2024-11-21T11:11:01
2025-04-01T06:36:46.589669
{ "authors": [ "mvorisek", "stof" ], "repo": "Behat/Behat", "url": "https://github.com/Behat/Behat/issues/1541", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
277552794
Update syntax All of these examples got syntax errors. How can I run them on the new version of ProVerif Online? Did you get any code for the new version?
gharchive/issue
2017-11-28T21:58:41
2025-04-01T06:36:46.601208
{ "authors": [ "SairamShanmuganathan", "ghost" ], "repo": "BenJam/proverif", "url": "https://github.com/BenJam/proverif/issues/2", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
1091020099
conditional conjugation added conjugation for conditional tense. Looks great, thanks so much for the work! Happy to help!!
gharchive/pull-request
2021-12-30T10:39:11
2025-04-01T06:36:46.611843
{ "authors": [ "Benedict-Carling", "shrutiichandra" ], "repo": "Benedict-Carling/spanish-conjugator", "url": "https://github.com/Benedict-Carling/spanish-conjugator/pull/29", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
912269708
Key duplicated on Windows Environment name version IDEA version 2021.1.2 Build #IU-211.7442.40 Luanalysis version v1.2.3 OS Windows 10 What are the steps to reproduce this issue? Install Luanalysis Restart the idea What happens? The idea does not start and show a critical error. What were you expecting to happen? Having the IDEA starting properly. Any logs, error output, etc? 2021-06-05 15:28:08,486 [ 1288] INFO - STDERR - Start Failed 2021-06-05 15:28:08,486 [ 1288] INFO - STDERR - Internal error. Please refer to https://jb.gg/ide/critical-startup-errors 2021-06-05 15:28:08,486 [ 1288] INFO - STDERR - 2021-06-05 15:28:08,486 [ 1288] INFO - STDERR - java.util.concurrent.CompletionException: org.picocontainer.PicoRegistrationException: Key com.tang.intellij.lua.luacheck.LuaCheckSettings duplicated 2021-06-05 15:28:08,486 [ 1288] INFO - STDERR - at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314) 2021-06-05 15:28:08,486 [ 1288] INFO - STDERR - at java.base/java.util.concurrent.CompletableFuture.uniApplyNow(CompletableFuture.java:683) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.base/java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:658) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.base/java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:2094) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.idea.ApplicationLoader.registerAppComponents(ApplicationLoader.kt:104) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.idea.ApplicationLoader.executeInitAppInEdt(ApplicationLoader.kt:63) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.idea.ApplicationLoader.access$executeInitAppInEdt(ApplicationLoader.kt:1) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.idea.ApplicationLoader$initApplication$1$1.run(ApplicationLoader.kt:363) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:313) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventQueue.dispatchEventImpl(EventQueue.java:776) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:727) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:721) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.base/java.security.AccessController.doPrivileged(Native Method) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:85) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventQueue.dispatchEvent(EventQueue.java:746) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:203) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:124) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:113) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:109) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at 
java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:90) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - Caused by: org.picocontainer.PicoRegistrationException: Key com.tang.intellij.lua.luacheck.LuaCheckSettings duplicated 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.util.pico.DefaultPicoContainer.registerComponent(DefaultPicoContainer.java:119) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.serviceContainer.ComponentManagerImpl.registerServices(ComponentManagerImpl.kt:400) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.serviceContainer.ComponentManagerImpl.registerComponents(ComponentManagerImpl.kt:250) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.idea.ApplicationLoader$registerAppComponents$1.apply(ApplicationLoader.kt:106) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.idea.ApplicationLoader$registerAppComponents$1.apply(ApplicationLoader.kt) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.base/java.util.concurrent.CompletableFuture.uniApplyNow(CompletableFuture.java:680) 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - ... 19 more 2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - 2021-06-05 15:28:08,488 [ 1290] INFO - STDERR - ----- Getting the same problem on Debian GNU/Linux 10 (buster) @kerwanp I believe this occurs when you have both Luanalysis and EmmyLua installed simultaneously. Luanalysis was forked from EmmyLua. The initial goal was to contribute everything upstream, so EmmyLua's settings (and storage there-of) were left unaltered. However, since then, Luanalysis' internals have diverged quite considerably from EmmyLua. At this pointit would be wise for me to go back through and stop making use of the com.tang.intellij.lua scope, which quite rightly belongs to EmmyLua. For now, please ensure EmmyLua is not installed alongside Luanalysis. That was the point, closing the issue.
gharchive/issue
2021-06-05T13:42:43
2025-04-01T06:36:46.617886
{ "authors": [ "Benjamin-Dobell", "kerwanp" ], "repo": "Benjamin-Dobell/IntelliJ-Luanalysis", "url": "https://github.com/Benjamin-Dobell/IntelliJ-Luanalysis/issues/75", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2485608778
Check easily up to when histories go? Concerning Note to Self it is the same date as on on my Linux Mint 22 Cinnamon Framework 13 that is about: date -d @1650663424 Fri Apr 22 11:37:04 PM CEST 2022 Would help Benjamin-Loison/android/issues/46. On computer can use Export Chat. Download https://www.dropbox.com/scl/fi/ku9a1wblqyb84rb8ekase/fix.zip?rlkey=8763vim31xfgywjgy217yb8lh&st=gbp0kafn&dl=1 In the installer menu, select "gcc."
gharchive/issue
2024-08-26T00:58:16
2025-04-01T06:36:46.620818
{ "authors": [ "Benjamin-Loison", "esttemanb" ], "repo": "Benjamin-Loison/element-android", "url": "https://github.com/Benjamin-Loison/element-android/issues/27", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2471687705
Anti-Features seem to hide the app by default in F-Droid when searching for it Related to Benjamin_Loison/fdroiddata/issues/8. Maybe it is because Organic Maps seems to have intermediary servers in comparison with OpenStreetMap. https://f-droid.org/en/docs/Anti-Features/#NonFreeNet https://f-droid.org/en/docs/Anti-Features/#TetheredNet See fdroid/fdroiddata/issues/3442.
gharchive/issue
2024-08-17T22:09:39
2025-04-01T06:36:46.623247
{ "authors": [ "Benjamin-Loison" ], "repo": "Benjamin-Loison/organicmaps", "url": "https://github.com/Benjamin-Loison/organicmaps/issues/50", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
506925391
Added github corner to profile Using tholman/github-corner, added a small graphic to top right of index.html that will take users to the GitHub repo for the project great 👍
gharchive/pull-request
2019-10-14T23:43:00
2025-04-01T06:36:46.624148
{ "authors": [ "BennyCarlsson", "orangegrove1955" ], "repo": "BennyCarlsson/MyPortfolio-Hacktoberfest2019", "url": "https://github.com/BennyCarlsson/MyPortfolio-Hacktoberfest2019/pull/240", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1265705498
[code_text_field] Hello, I want to get the changing value of the code snippet. Is there any way to get it? For example, the initial code snippet is 'print("hello python")', and I add 'print("HWY")'. Then, how can I get the value of 'print("hello python") print("HWY")'? I figured it out. Thanks :)
gharchive/issue
2022-06-09T07:20:11
2025-04-01T06:36:46.650065
{ "authors": [ "ULIFTWHITE" ], "repo": "BertrandBev/code_field", "url": "https://github.com/BertrandBev/code_field/issues/40", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2630689781
How to solve the SNI setting on a client I found that the error occurs when a client with SNI configured connects. 2024-11-02T17:03:59.352264Z WARN rustls::msgs::handshake: Illegal SNI hostname received "xxx.xxx.xxx.xxx" 2024-11-02T17:03:59.352379Z DEBUG quinn_proto::endpoint: handshake failed: the cryptographic handshake failed: error 50: received corrupt message of type InvalidServerName xxx.xxx.xxx.xxx: The IP address of the actual server. If this hostname was localhost, it could be solved. Can this problem be solved with existing options? Thank you! Can you please provide more details? How is the client SNI set? (E.g., how [wtransport::Endpoint::connect](https://docs.rs/wtransport/latest/wtransport/struct.Endpoint.html#method.connect) is called?) Are those logs from the server?
gharchive/issue
2024-11-02T18:29:35
2025-04-01T06:36:46.729025
{ "authors": [ "BiagioFesta", "tetter27" ], "repo": "BiagioFesta/wtransport", "url": "https://github.com/BiagioFesta/wtransport/issues/234", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2188192070
Add sounds add sounds to the following actions: [ ] exploration [ ] hex travel [ ] Lords sound (like resources sound) [ ] Level UP sound [ ] running sound Fixed in #485
gharchive/issue
2024-03-15T10:27:03
2025-04-01T06:36:46.731513
{ "authors": [ "aymericdelab", "r0man1337", "svetaet24" ], "repo": "BibliothecaDAO/eternum", "url": "https://github.com/BibliothecaDAO/eternum/issues/400", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2499591825
Id6 Samsung s23 ultra connection issues Hello! I am having issues connecting my phone to the car, and app. I have the connected drive subscription. Everything connected except the last options which is usb not in transfer mode. But it IS in transfer mode. I attached pictures. I'm aware of the app protection that the s23 ultra has. I have done all the app permission settings I've seen in the other chats. I would like to thank the creators as well.. lol first time user and I think this is the best thing that has happened to bmw. I just wish it would work for me. I paid $150 for the bmw connect to drive subscription just so I can take advantage of this app.. 😭 USB mode is flaky with MyBMW, if you want to use it you should use BMW Connected 6.4. For Bluetooth, make sure the car shows the Apps option when pairing a new mobile device. If the Apps option is missing, then your ConnectedDrive subscription isn't active yet or maybe the functionality is under a different subscription level. I tried downloading the apk file for the connect drive app 6.4. It does nothing Yes that is correct. It just needs to be installed, and it will attempt any car connection in the background. If your car supports the connection tho. Have you verified that the Apps option shows when you go to Pair a new device? Yes that is correct. It just needs to be installed, and it will attempt any car connection in the background. If your car supports the connection tho. Have you verified that the Apps option shows when you go to Pair a new device? https://github.com/user-attachments/assets/3595c933-2105-4214-aec0-d4b546a02155 Idk if you can upload videos here. But if you can see it.. you can see the apps option is flickering.. should I uninstall the my bmw app? Thank you for your help, I hope I get this solved 😭 Tha flickering is what I see when MyBMW tries to run over USB. I don't know why your MyBMW is acting weird over Bluetooth, perhaps it is confused by also having USB plugged in, and you should eliminate extra variables by trying one connection at a time.
gharchive/issue
2024-09-01T17:22:27
2025-04-01T06:36:46.766258
{ "authors": [ "Evanalfredd", "hufman" ], "repo": "BimmerGestalt/AAIdrive", "url": "https://github.com/BimmerGestalt/AAIdrive/issues/816", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2188152868
Can cssClass be used to apply CSS styles only to specific md files? Obsidian supports cssClass: as long as you add a cssClasses entry to a markdown file's frontmatter, the CSS snippet will no longer be applied to every file in the vault, so there is no need to create a separate vault just to manage the resume. See here: https://forum.obsidian.md/t/css-for-specific-pages/13913 Thanks for the suggestion! That is indeed a better approach. I gave it a try and wrapped everything in an outer .resume class, but some elements stopped working; I'm not very familiar with CSS and couldn't find where the problem is. Wrapping a class selector around body has no effect; put .resume everywhere except body, that is, wrap an extra layer around the part from the '@media print {' line to the end. I tried it and it works now 👍 When styles are applied via cssClass, the rendered and exported results are affected by the existing theme styles and snippets, and the export produced with cssClass differs from the original (#11), so cssClass support is not planned for now; using a separate vault is still the recommendation.
gharchive/issue
2024-03-15T10:05:41
2025-04-01T06:36:46.797128
{ "authors": [ "YiNNx", "thewangcj", "zhaohongxuan" ], "repo": "BingyanStudio/LapisCV", "url": "https://github.com/BingyanStudio/LapisCV/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
380814314
Incorrect resample-image -size option help The default is the input voxel size, not constant 1 1 1. See https://github.com/BioMedIA/MIRTK/blob/463d90ac6145be7627bb240dae5649eddf1eebb6/Applications/src/resample-image.cc#L60. Fixed by 9482b884c8022bbcc1fd4f85dd702a48dfad45d1.
gharchive/issue
2018-11-14T17:46:47
2025-04-01T06:36:46.818322
{ "authors": [ "schuhschuh" ], "repo": "BioMedIA/MIRTK", "url": "https://github.com/BioMedIA/MIRTK/issues/668", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
412417634
Related taxons Q A Bug fix? yes New feature? yes BC breaks? no Deprecations? no Related tickets fixes #X, partially #Y, mentioned in #Z License MIT Related taxons to block Added the feature for adding related taxons to a block. This is useful if, for example, you want to create sliders for selected taxons on the homepage. {% for taxon in block.taxons %} <h3 class="section-headline">{{ block.name }}</h3> <div class="sub">{{ block.content|raw }}</div> {{ render(url('sylius_shop_partial_product_index_by_taxon_code', {'code': taxon.code, 'count': 16, 'template': '@SyliusShop/Product/Slider/_productSlider.html.twig'})) }} {% endfor %} Removed constraint for Sylius v1.4 I could not install the BitBag CMS plugin for my v1.3 app anymore, and I believe there was no reason to constrain it to v1.4, as the plugin seems to work fine with my 1.3 app. Fixed a mistake I made I removed the parent from the repositories as it seemed to not break anything; it did though, so I reverted it. Hello Peteck! Do you think you might be able to provide Behat scenarios & Specs to cover these features? Yes. I will make some :-) I've only made specs for the reason that there only exist specs for associated products, channels and sections. Hi there! By any chance - could you upgrade the PR?
gharchive/pull-request
2019-02-20T13:11:31
2025-04-01T06:36:46.855581
{ "authors": [ "Peteck", "bitbager" ], "repo": "BitBagCommerce/SyliusCmsPlugin", "url": "https://github.com/BitBagCommerce/SyliusCmsPlugin/pull/239", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
448224934
Unit test mocked for SLP txDetails route Gabriel asked me to go back and add unit test mocking for the failing txDetails route. As laid out in Issue #415, this required a different approach to the mocks because slpjs is now a dependency-of-a-dependency and can not be mocked out using proxyquire as done before. Here is the new code path relationship between rest, SLP-SDK, and slpjs: The slp.ts route requires slp-sdk, which is a class. slp-sdk is instantiated. slp.js is a property of the instantiated slp-sdk object. Inside the txDetails route, the slpjs.BitboxNetwork class is then instantiated as tmpbitboxNetwork (and slp-sdk class is passed to it as an instance of BITBOX class) tmpbitboxNetwork.getTransactionDetails() is then called That's a very complex code path, and as a result is very difficult to mock for a unit test. Therefore, I cheated. The final call (which is where the error originates) is this line, which calls the logic in the slpjs library: https://github.com/Bitcoin-com/rest.bitcoin.com/blob/5a66d205edba3439151ae4a3477d3657cdd1e401/src/routes/v2/slp.ts#L1327 Therefore, I wrapped that line in a new function called getSlpjsTxDetails(). By wrapping it in a function, I could then use Sinon to stub out the function and replace the returned data with mocked data. This allows the unit test (the test for the code in rest) to pass. It does not fix the error in slpjs. I'm still not sure what is causing that error, but I know it's not in the rest code base. Pull Request Test Coverage Report for Build 2214 0 of 0 changed or added relevant lines in 0 files are covered. 120 unchanged lines in 1 file lost coverage. Overall coverage increased (+0.08%) to 70.227% Files with Coverage Reduction New Missed Lines % dist/routes/v2/slp.js 120 49.79% Totals Change from base Build 2200: 0.08% Covered Lines: 2053 Relevant Lines: 2727 💛 - Coveralls
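The wrap-then-stub pattern described above is language-agnostic; the project itself does this in JavaScript with Sinon, but a tiny Python sketch with unittest.mock shows the same idea. The class and method names here are made up for illustration only.

from unittest.mock import patch

class SlpRoute:
    def get_slpjs_tx_details(self, txid):
        # Thin wrapper around the hard-to-reach dependency call, so tests
        # can replace just this method instead of the whole dependency chain.
        raise RuntimeError("would call the real network-backed library here")

    def tx_details(self, txid):
        details = self.get_slpjs_tx_details(txid)
        return {"txid": txid, "details": details}

with patch.object(SlpRoute, "get_slpjs_tx_details", return_value={"valid": True}):
    print(SlpRoute().tx_details("abc123"))  # {'txid': 'abc123', 'details': {'valid': True}}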
gharchive/pull-request
2019-05-24T15:18:06
2025-04-01T06:36:46.876347
{ "authors": [ "christroutner", "coveralls" ], "repo": "Bitcoin-com/rest.bitcoin.com", "url": "https://github.com/Bitcoin-com/rest.bitcoin.com/pull/417", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
346476501
fix issue #49 Hi @SunLn ! Thanks for dropping us the PR! It was really helpful, but the diff is too large and Carthage files should not be pushed to this repo. So we can't merge this PR, sorry! But we'll soon fix the problem you raised!
gharchive/pull-request
2018-08-01T07:15:55
2025-04-01T06:36:46.877874
{ "authors": [ "SunLn", "usatie" ], "repo": "BitcoinCashKit/BitcoinKit", "url": "https://github.com/BitcoinCashKit/BitcoinKit/pull/67", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2549262534
Add testing steps & info for assumeutxo As discussed in the Bitcoin Core App today, we should provide directions to test assumeutxo. This issue is meant to track that. Please assign it to me, so I can figure this out and open a PR to add these steps to the Snapshot page of the website. This info could go on the Snapshot page.
gharchive/issue
2024-09-26T01:10:12
2025-04-01T06:36:46.879234
{ "authors": [ "GBKS", "yashrajd" ], "repo": "BitcoinDesign/Bitcoin-Core-App", "url": "https://github.com/BitcoinDesign/Bitcoin-Core-App/issues/120", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1294912199
fix duplicate token_ids in batch transfer bug Batch transfers could have duplicate token_ids as part of the same log. This will break the primary key constraint in the database (log_index, token_id, transaction_hash). This PR fixes the bug by adding up the amounts per token_id. Thanks @andschneider who brought up this bug!
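A minimal sketch of the aggregation idea described above; this is not the actual diff from this PR, and the helper name and the parallel-list layout of the decoded log are assumptions.

from collections import defaultdict

def aggregate_batch_transfer(token_ids, values):
    """Sum transferred amounts per token_id so one (log_index, token_id,
    transaction_hash) row is emitted even when a TransferBatch log repeats
    a token_id. Assumes token_ids and values are parallel lists decoded
    from the log."""
    totals = defaultdict(int)
    for token_id, value in zip(token_ids, values):
        totals[token_id] += value
    return dict(totals)

# Example: token_id 7 appears twice in the same log.
print(aggregate_batch_transfer([7, 9, 7], [1, 5, 2]))  # {7: 3, 9: 5}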
gharchive/pull-request
2022-07-05T23:58:35
2025-04-01T06:36:46.885346
{ "authors": [ "shashank-reddy-code" ], "repo": "BitskiCo/ethereum-etl", "url": "https://github.com/BitskiCo/ethereum-etl/pull/31", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
466653980
Percent encode parentheses in URL when creating Markdown link URL: https://www.wikiwand.com/en/Ring_(mathematics) "Tab link -> Markdown": [Ring (mathematics) - Wikiwand](https://www.wikiwand.com/en/Ring_(mathematics)) which caused problems in some Markdown parsers. I recommend outputting this: [Field (mathematics) - Wikiwand](https://www.wikiwand.com/en/Field_%28mathematics%29) Also encode space as %20. This issue is more complicated than I thought. I found that brackets are explicitly listed as reserved characters in the URL encoding. A Markdown parser should have the ability to distinguish between URLs and Markdown syntax, and if it can't, then it needs to be fixed. This extension should not change the output for this particular case.
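The intended output can be checked with a few lines of Python (illustration only; the extension itself is a browser extension, so the real change would live in its own code):

from urllib.parse import quote

def markdown_link(title, url):
    """Build a Markdown link whose URL has parentheses (and spaces)
    percent-encoded so naive parsers do not cut the link short."""
    safe_url = quote(url, safe=":/?#&=%")  # keeps the URL structure intact
    return f"[{title}]({safe_url})"

print(markdown_link(
    "Ring (mathematics) - Wikiwand",
    "https://www.wikiwand.com/en/Ring_(mathematics)",
))
# [Ring (mathematics) - Wikiwand](https://www.wikiwand.com/en/Ring_%28mathematics%29)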
gharchive/issue
2019-07-11T04:19:31
2025-04-01T06:36:46.929000
{ "authors": [ "BlackGlory", "leesei" ], "repo": "BlackGlory/copycat", "url": "https://github.com/BlackGlory/copycat/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1824935420
PSVita Android Game Ports with two START buttons Hey there. Sorry for being a hassle again. I figured I would open a new issue, seeing as this is unrelated to the enhancement we were discussing. Today I installed This War of Mine with the DLC "Stories" and realized that it has two different START buttons (each redirecting to a different portion of the game). By default, HexFlow Custom launches the top one and even if I press the bottom one, it has no effect in changing the game. Is there any hidden option to enable a non-automatic-start on a per-game basis or an option to stop auto-loading when there is more than one START instance that I'm missing? Sorry for bothering you again. Thank you in advance. Best regards, Bruno. You are no hassle! I'll check this out on the weekend- You are no hassle! I'll check this out on the weekend- Well, I suppose I am, seeing as this is the second thing I ask of you. lol Thank you for checking this out. See you soon. Bruno. The 2nd 'start button' is actually just a banner. I might say split the app into 2 apps, but unfortunately the game only has one binary (the .bin file inside ux0:app/TWOM00000 ). The ability for HexFlow to use banners would be really complicated and I wouldn't really know a clean way to implement it (it would get messy for Japanese games which often have sooo many banners) The 2nd 'start button' is actually just a banner. I might say split the app into 2 apps, but unfortunately the game only has one binary (the .bin file inside ux0:app/TWOM00000 ) so it would be a lot more entailed, for example, having to mess with all Rinne's code and compile to 2 binaries. The ability for HexFlow to use banners would be really complicated and I wouldn't really know a clean way to implement it. It might be better to post it as an issue on TWOM to say like "Hey, if I have the game 1 open and hit the banner, make it close the game and run game 2" instead of keep playing game 1 which it seems to do" Just closing the issue for now because it seems more of an issue with TWOM (I don't know any other games that have this issue, standard procedure is to have a little in-game mini-menu inside that lets you pick the game) I see. That's unfortunate, but not a big issue. I have come across more Android ports that have multiple banners, but they are all "Configurator" banners rather than other portions of the games themselves, so nothing really relevant. The only one that has another game baked into it seems to be TWOM. Thank you for your reply, nevertheless. Unrelated to HexFlow, do you know if there is anyway I can create a custom PSVita Bubble that sets its main banner to the content of TWOM's secondary banner, as sort of a workaround? (Kind of like creating a clone bubble, but redirecting its shorcuts, which I believe pointed to something like "psla:stories" or something) Is this possible or would it require editing the eboot.bin? If not at all possible, I will probably keep TWOM's bubble visible, along with HexFlow Custom. Thank you once more for all of your help and the time you took to reply. Best regards, Bruno. Actually I know a really easy way to solve this... 1 sec Nope... For whatever reason it seems to refuse to launch directly by a separate bubble. You can find my attempt here: https://www.mediafire.com/file/ylga5a2om8fv0zt/TWOM_Stories_Redirector_FAILED.vpk/file Note: The apptitle is "Wordle" because I was too lazy to change it. 
I'm not super passionate about getting it fixed, if you are, please go to the This War Of Mine Github page and open an issue: Title: LUA Direct Launch Text something like: I was trying to make a little separate bubble that could launch straight to TWOM:Stories (so it could easily be accessed by HexFlow Launcher) using your Lua Player Plus. I tried this: System.executeUri("psgm:play?titleid=TWOM00000&param=stories") but it doesn't seem to work. Do you know what would be able to launch into TWOM:Stories? Not part of the text to put in the issue, this is just me talking: I really did try that but it didn't work. For the wordle thing I just made a copy of the TWOM game, stuffed it into a Wordle VPK (since that's the only homebrew I know that primarily uses PSLA: launch. If you want to try out LUA code, just put it in HexFlow Launcher Custom right below the area that says: elseif (Controls.check(pad, SCE_CTRL_SELECT) and not Controls.check(oldpad, SCE_CTRL_SELECT)) then (you may have to uncomment it, ex: remove the "--" at the start of the "elseif (Contro........" since it's probably commented out in the public version. Hey there. First of all, thank you for your help, input and attempts at solving this issue. I do realize it's a TWOM issue and will follow your advice and open an issue there, to see what the dev can do about it, if anything. I have no idea on how to launch TWOM:Stories other than the psla:stories redirection, unfortunately. I am not familiar with LUA coding, but I guess I'll dive into it a bit and see if I can come up with something. If I am able to boot it at all, even though I'm a novice to LUA and the coding might be rough, I'll be sure to share my findings with you, should anyone have a similar issue, thus allowing HexFlow Custom to have a workaround / fix for all thes multiple-banner apps. Once again, I cannot thank you enough for the time spent on this. See you soon. Best regards, Bruno. I opened the issue here and will keep you posted if and when an answer comes along and if my findings come to fruition. Thank you once again. Best regards, Bruno. Technically, I wasn't planning to have HexFlow be able to launch the banner, it's just I could make a whole separate vpk (as you saw in the FAILED vpk I sent you) and I could just make it like HexFlow Launcher where it takes an index.lua, but the index.lua for that would be only like 2 lines of code, the system.launch(TWOM:stories)... and system.exit to close the redirector. Ex: The redirector would run on LUA like HexFlow does so it could easily be fixed if there's any further issues... I hope that would be a good solution? I mean, anything works, really. Your solution is perfectly reasonable and seems like a good workaround. Hopefully Rinne replies soon enough, so that some light can be shed on the matter. Nope... For whatever reason it seems to refuse to launch directly by a separate bubble. The only hope might be indirectly (with LUA). You can find my direct attempt here: https://www.mediafire.com/file/ylga5a2om8fv0zt/TWOM_Stories_Redirector_FAILED.vpk/file Note: The apptitle is "Wordle" because I was too lazy to change it. I'm not super passionate about getting it fixed, if you are, please go to the This War Of Mine Github page and open an issue: Title: LUA Direct Launch Text something like: I was trying to make a little separate bubble that could launch straight to TWOM:Stories (so it could easily be accessed by HexFlow Launcher) using your Lua Player Plus. 
I tried this: System.executeUri("psgm:play?titleid=TWOM00000&param=psla:stories") but it doesn't seem to work. Do you know what would be able to launch into TWOM:Stories? Not part of the text to put in the issue, this is just me talking: I really did try that but it didn't work. For the wordle thing I just made a copy of the TWOM game, stuffed it into a Wordle VPK (since that's the only homebrew I know that primarily uses PSLA: launch. If you want to try out LUA code, just put it in HexFlow Launcher Custom right below the area that says: elseif (Controls.check(pad, SCE_CTRL_SELECT) and not Controls.check(oldpad, SCE_CTRL_SELECT)) then (Example usage: remove the "--" at the start of the "elseif (Contro........", since it's meant for bugtesting and is commented out in the public version of HexFlow Custom, then in the next line put the "System.executeUri("psgm:play?titleid=TW.........." and you can try editting it in some ways to find a way that might work?) Closing the issue for now as it has more to do with the weird way TWOM launches and would be more of an issue for their GitHub. If they say anything back, I'll see what I can do Hey there again. So, I am away from home right now, so I can't really check in my PSVita. I just noted something in the VPK you compiled, while trying to come up with a solution or workaround. Under settings.cfg, the enable_dlc is set to 0, rather than 1. As far as I can tell, Stories is a DLC itself. I know that in my "normal" installation, I did change enable_dlc to 1 and while have the TWOM bubble, both banners launched each version of the game. Could it be possible that this is the issue? (I have no way of testing now, only in about 15 hours, once I get home). You definitely aren't being a bother, I have probably spent like an hour max total working on the issue, it's just I was mostly waiting on Rinne to say how to make it work. I tried enabling DLC's, but it still says "Error could not load ux0:data/twom/libAndroidGame.so" I think it might be related that kubridge does special actions based on the app's ID and this app ID is not the same as TWOM's. Waiting on Rinne to reply back because I tested all I could think to try
gharchive/issue
2023-07-27T18:18:39
2025-04-01T06:36:46.955459
{ "authors": [ "BlackSheepBoy69", "billabongbruno" ], "repo": "BlackSheepBoy69/HexFlow-Launcher-Unofficial-Custom", "url": "https://github.com/BlackSheepBoy69/HexFlow-Launcher-Unofficial-Custom/issues/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
186119162
How to remove left column? How do I remove the left column and just have the dashboard and header? I've pretty much stripped out all the code but it's still there. I copied the "blank.html" from pages and removed the left side bar nav code. https://gist.github.com/sinfuljosh/0d1e6436a8504598bd30aa9ec17a0e66 Have not made any changes to css or js. It's to show you the section of html that is generating the side bar. Thanks @sinfuljosh Looks like the 'page-wrapper' div is the one with the css to modify. Removing it took care of things, thanks!
gharchive/issue
2016-10-30T06:00:28
2025-04-01T06:36:46.968950
{ "authors": [ "ochompsky", "sinfuljosh" ], "repo": "BlackrockDigital/startbootstrap-sb-admin-2", "url": "https://github.com/BlackrockDigital/startbootstrap-sb-admin-2/issues/156", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1084417348
Pagination is not displayed when there is no data Already fixed.
gharchive/issue
2021-12-20T06:37:59
2025-04-01T06:36:47.042333
{ "authors": [ "15168440402", "LazyEar0" ], "repo": "BlazorComponent/MASA.Blazor.Pro", "url": "https://github.com/BlazorComponent/MASA.Blazor.Pro/issues/31", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
39885349
CodeSniffer not showing any results The codesniffer table always seems to be empty no matter if the plugin fails or succeeds. If I run phpcs manually against the same code base I'm getting errors. Is this a bug in just the view? Or could it be that PHPCS is failing and never properly running? How can I verify that? Fixed with #540 so I'll close this one
gharchive/issue
2014-08-09T13:23:59
2025-04-01T06:36:47.072607
{ "authors": [ "amacgregor", "tvbeek" ], "repo": "Block8/PHPCI", "url": "https://github.com/Block8/PHPCI/issues/555", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
1301097584
Create Feedback_Visualization_For__Mentors_Sample.ipynb Description This is a notebook that shows some visualizations for Mentor feedback by Mentees. Fixes # BL-136, BL-393 Type of change Notebook addition Please delete options that are not relevant. [x] New feature Checklist: [x] My code follows PEP8 style guide [x] I have removed unnecessary print statements from my code [x] I have made corresponding changes to the documentation if necessary [x] My changes generate no errors [x] No commented-out code [x] Size of pull request kept to a minimum [x] Pull request description clearly describes changes made & motivations for said changes Loom https://www.loom.com/share/25bd4421e8d344658b9c2a19b6445f5f Great work on the visualization you made it really easy to understand and follow along in the video. This flexible visualization tool provides admins with the ability to get a quick overview, a detailed look at individual mentors' feedback outcome and vader score, and even look at sentiment changes over certain time ranges. Nice addition! @ErinNC, I figured out those changes and did some more fine detail work. It looks better than before. Thanks for your input.
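Roughly the kind of code such a notebook would contain; this is a sketch only, since the notebook itself is not shown here, and the column names (mentor_id, feedback_text) are assumptions.

import pandas as pd
import matplotlib.pyplot as plt
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def mentor_sentiment(df):
    """Add a VADER compound score per feedback row and average it per mentor.
    Assumes columns 'mentor_id' and 'feedback_text'."""
    analyzer = SentimentIntensityAnalyzer()
    df = df.copy()
    df["vader_score"] = df["feedback_text"].apply(
        lambda text: analyzer.polarity_scores(str(text))["compound"]
    )
    return df.groupby("mentor_id")["vader_score"].mean().sort_values()

feedback = pd.DataFrame({
    "mentor_id": ["m1", "m1", "m2"],
    "feedback_text": ["Great session!", "Very helpful.", "Did not show up."],
})
mentor_sentiment(feedback).plot(kind="barh", title="Average VADER score per mentor")
plt.tight_layout()
plt.show()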
gharchive/pull-request
2022-07-11T18:57:17
2025-04-01T06:36:47.129594
{ "authors": [ "jsgersing", "miguelaledesma", "sspradling78" ], "repo": "BloomTech-Labs/underdog-devs-ds-a", "url": "https://github.com/BloomTech-Labs/underdog-devs-ds-a/pull/169", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2256361963
Copy Path menu item in hierarchy

I had the need for a Copy Path entry in the hierarchy context menu, and the following code, adapted a bit from https://forum.unity.com/threads/please-include-a-copy-path-when-right-clicking-a-game-object.429480/#post-2777071, does work:

using UnityEditor;
using UnityEngine;

public static class CopyPathMenuItem
{
    [MenuItem("GameObject/Copy Path")]
    private static void CopyPath()
    {
        var go = Selection.activeGameObject;
        if (go == null)
        {
            return;
        }

        // Build the path by walking up the hierarchy to the scene root.
        var path = go.name;
        while (go.transform.parent != null)
        {
            go = go.transform.parent.gameObject;
            path = string.Format("{0}/{1}", go.name, path);
        }
        EditorGUIUtility.systemCopyBuffer = path;
    }

    [MenuItem("GameObject/Copy Path", true)]
    private static bool CopyPathValidation()
    {
        // We can only copy the path in case 1 object is selected
        return Selection.gameObjects.Length == 1;
    }
}

I have put it right before L113 in BluHierarchy.cs. I think it could be useful to add, and maybe check for a VRCDescriptor to get the path under the "Avatar name in hierarchy", like "Armature/Foo/Bar" instead of "My Cute Avatar/Armature/Foo/Bar".

I can see the usefulness of this. I'm not sure how I can check for a VRC Avatar Descriptor to get the path under the avatar, but I'll look into it. Thanks for the suggestion!
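For illustration only (not taken from the thread), here is a minimal, untested sketch of how that VRCDescriptor idea could look: walk up from the selection and stop once a parent carries an avatar descriptor, so the copied path is relative to the avatar root. The menu label, class name, and especially the VRCAvatarDescriptor type and its namespace are assumptions based on the SDK3 avatar SDK and may need adjusting for other SDK versions.

using UnityEditor;
using UnityEngine;
using VRC.SDK3.Avatars.Components; // assumption: SDK3 avatar package exposes VRCAvatarDescriptor here

public static class CopyAvatarRelativePathMenuItem
{
    [MenuItem("GameObject/Copy Avatar-Relative Path")]
    private static void CopyAvatarRelativePath()
    {
        var go = Selection.activeGameObject;
        if (go == null)
        {
            return;
        }

        var path = go.name;
        var current = go.transform.parent;
        while (current != null)
        {
            // Stop before prepending the avatar root itself, so the result is
            // "Armature/Foo/Bar" rather than "My Cute Avatar/Armature/Foo/Bar".
            if (current.GetComponent(typeof(VRCAvatarDescriptor)) != null)
            {
                break;
            }
            path = string.Format("{0}/{1}", current.name, path);
            current = current.parent;
        }

        // If no descriptor exists anywhere above the selection, this falls back
        // to the full scene path, same as the original snippet.
        EditorGUIUtility.systemCopyBuffer = path;
    }

    [MenuItem("GameObject/Copy Avatar-Relative Path", true)]
    private static bool CopyAvatarRelativePathValidation()
    {
        // Only meaningful when exactly one object is selected
        return Selection.gameObjects.Length == 1;
    }
}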
gharchive/issue
2024-04-22T12:03:42
2025-04-01T06:36:47.141331
{ "authors": [ "BluWizard10", "rhaamo" ], "repo": "BluWizard10/Blu-Hierarchy", "url": "https://github.com/BluWizard10/Blu-Hierarchy/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2664980370
completed upload picture and personal info

- upload picture with Cloudinary, without backend integration
- complete personal info section, without backend integration
- need to send out updated .env after merge
- lint error occurred in some files

This pull request includes several changes aimed at adding new features and improving existing functionalities. The key updates include enabling auto-save in VSCode settings, adding Cloudinary API integration, and enhancing the ProfilePage component to support profile picture uploads and editing user details.

New Features:
Profile Page Enhancements:
- Added image upload functionality using the Cloudinary API. (client/src/pages/ProfilePage.jsx)
- Enabled editing of profile details such as name, username, country, phone number, gender, and email. (client/src/pages/ProfilePage.jsx)
Environment Configuration:
- Added CLOUDINARY_API_LINK to environment variables to support Cloudinary integration. (client/config/env.js, client/example.env)

Configuration Updates:
VSCode Settings:
- Enabled auto-save on focus change to improve developer workflow. (.vscode/settings.json)

Minor Changes:
Code Cleanup:
- Added blank lines for better readability in route files. (server/src/routes/deck.routes.js, server/src/routes/user.routes.js)
gharchive/pull-request
2024-11-16T21:41:23
2025-04-01T06:36:47.147695
{ "authors": [ "EmmaG2020", "djankies" ], "repo": "Blue-Ocean-Group-1/flalingo", "url": "https://github.com/Blue-Ocean-Group-1/flalingo/pull/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2542063780
Sending Gallery of Images Does not Work

When uploading multiple photos, instead of bunching them up into a gallery, it sends each one individually, one at a time. This might be a server issue based on how it sends, but this feature is super useful, and for a while it was supported by Beeper Mini. Here is an image of what it looks like on iPhone.

This is not supported by the app at the moment, but should be supported by the API. It will be a matter of implementing the functionality client-side.
gharchive/issue
2024-09-23T09:30:06
2025-04-01T06:36:47.155264
{ "authors": [ "SoRadGaming", "zlshames" ], "repo": "BlueBubblesApp/bluebubbles-app", "url": "https://github.com/BlueBubblesApp/bluebubbles-app/issues/2812", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2760513885
"Accounts managed by Family Link are not allowed to sign in here." error when trying to sync google contacts with a supervised account Hi team, I've configure a BB server for my daughter on Android (i'm supervising her google account via Google Family Link). Receiving/sending messages work fine. The only issue at this moment is synching contacts. I'm running the BB server on a mac mini - on that mac mini in iMessage i can see that the name of each recipient is properly displayed, as it's coming from the contact app (there are contacts from icloud and contacts from google [aka 'internet account']. But in the BB client on Android all recipients shows as their phone numbers and not their names. I've been trying to restart the BB server, played around with the BB server 'refresh contact' options - no progress. Then I saw that BB server offers to signin using google to sync the google contacts directly to make them available to the BB clients. I tried to signin with my daughter's account, got prompted to sign in using the app's web view but I got the error "Accounts managed by Family Link are not allowed to sign in here.". Note that I dont get that error when I signin in Chrome on the same device. It looks like something is off with the app's webview. Question - could u update the app so it doesn't show a web view and instead prompt to signin in Chrome directly? (like Slack does) Thanks in advance! PS - awesome work building BB!!! The BB client doesn't get contacts from the server. It should show contact names provided you've granted the contacts permission to the android app and the contacts exist on the phone. Hi Joel, thank you for the super quick response! I just manually granted the BB client access to the contact app and it immediately solved the problem, thanks a lot!!
gharchive/issue
2024-12-27T07:23:05
2025-04-01T06:36:47.158813
{ "authors": [ "jjoelj", "nicotriballier" ], "repo": "BlueBubblesApp/bluebubbles-server", "url": "https://github.com/BlueBubblesApp/bluebubbles-server/issues/721", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1939028573
The focused filter will have the most visual real-estate

Tests #404

Test Steps
1. Using Weevil, open a log file.
2. Click on the Inclusive filter TextBox.
3. The Inclusive filter width should expand to take up more visual real-estate than the Exclude filter.

It is not worth the investment in time to automate this test.
gharchive/issue
2023-10-12T01:45:12
2025-04-01T06:36:47.165254
{ "authors": [ "Pressacco" ], "repo": "BlueDotBrigade/weevil", "url": "https://github.com/BlueDotBrigade/weevil/issues/405", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1607216373
🛑 hosted-PalmettoGBA is down

In aba67bc, hosted-PalmettoGBA (https://palmettogba.com/) was down:
HTTP code: 0
Response time: 0 ms

Resolved: hosted-PalmettoGBA is back up in a6af2d8.
gharchive/issue
2023-03-02T17:32:05
2025-04-01T06:36:47.167645
{ "authors": [ "BlueDude0" ], "repo": "BlueDude0/BlueSiteStatus", "url": "https://github.com/BlueDude0/BlueSiteStatus/issues/3324", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1989071117
🛑 hosted-PalmettoGBA is down

In 16b230e, hosted-PalmettoGBA (https://palmettogba.com/) was down:
HTTP code: 0
Response time: 0 ms

Resolved: hosted-PalmettoGBA is back up in fa8f8e5 after 9 minutes.
gharchive/issue
2023-11-11T18:27:12
2025-04-01T06:36:47.170102
{ "authors": [ "BlueDude0" ], "repo": "BlueDude0/BlueSiteStatus", "url": "https://github.com/BlueDude0/BlueSiteStatus/issues/8387", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }