id | text | source | created | added | metadata
---|---|---|---|---|---
843844128
|
Mechanism for Mono to Clone Swift PM Repos
The goal here is for us to have fine-grained, independent version control for SwiftPM modules the way we do in other SDKs and with CocoaPods while maintaining the benefits of a mono-repo.
To facilitate this, we will have a per-module Package.swift and CHANGELOG.md. Each module will be represented in manifest.yml with its path in the mono-repo and its mirror repo name.
Whenever the mono-repo is updated, we can update the mirror repo with:
python3 eng/scripts/sync_repo.py <MODULE>
This will copy that branch to the root of the mirror, which can then be targeted by SwiftPM. In the current model, users target the "AzureSDK" package and then specify individual modules, whereas with this change, the target will be the module itself.
The current example of this is AzureCore, which will mirror to azure-sdk-for-ios-core.
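The sync step can be sketched roughly like this (a minimal Python sketch, not the actual sync_repo.py; the function name and directory layout are assumptions for illustration):

```python
import shutil
from pathlib import Path

def sync_module(mono_root: str, module_path: str, mirror_root: str) -> None:
    """Copy one module's subtree from the mono-repo to the root of its
    mirror repo, replacing the mirror's previous contents (except .git)."""
    src = Path(mono_root) / module_path
    dst = Path(mirror_root)
    # Clear the mirror's working tree while preserving its git history.
    for entry in dst.iterdir():
        if entry.name == ".git":
            continue
        if entry.is_dir():
            shutil.rmtree(entry)
        else:
            entry.unlink()
    # Copy the module so Package.swift lands at the mirror's root,
    # which is where SwiftPM expects to find it.
    for entry in src.iterdir():
        if entry.is_dir():
            shutil.copytree(entry, dst / entry.name)
        else:
            shutil.copy2(entry, dst / entry.name)
```

The key design point is that the module's own Package.swift ends up at the mirror repo root, so SwiftPM users can target the mirror directly.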
This PR seems to have two strategies in play. One strategy that has been discussed is having a copy of all the modules in a separate repository (synchronized from the mono-repo). But there also appears to be a reference to an azure-sdk-for-ios-core repository as well, which makes me think that this is just breaking out core separately, to be managed as a separate repo (including the engineering system, etc.).
@mitchdenny thanks for the catch. I updated the reference to SwiftPM-AzureCore. Also, @weshaggard I updated the sync_repo.py script so it no longer requires any metadata file.
@weshaggard @mitchdenny this PR should be ready for review.
Another thing that needs to be worked out is the tagging strategy when a release goes out. Is the plan just to replace the contents of the repo and then commit and tag? So we could end up with something like this:
commit1: 1.0.0-beta.11
commit2: 1.0.0
commit3: 1.1.0-beta.1
commit4: 1.0.1
There would be corresponding tags on the monorepo like:
commit1: AzureCore_1.0.0-beta.11
commit2: AzureCore_1.0.0 (albeit not anytime soon)
commit3: AzureCore_1.1.0-beta.1
commit4: AzureCore_1.0.1
However, if that were the sequence in both repos, I don't see how either would be any less confusing.
I think there will also need to be some content in the README that explains how and why these repos are done this way. Imagine a scenario where someone inherits an iOS code-base, goes to look at their dependencies, and sees one of our swiftpm-* links. They may not have worked with the Azure SDK before, so they wouldn't know why we have these satellite repos.
I think, most likely, they wouldn't think twice about the SwiftPM repo but would be surprised to learn of the existence of the mono-repo. There would be no reference to azure-sdk-for-ios in their files, but if they wanted to file a bug or contribute, the README would direct them to the mono-repo. But you are right, we should call this out in some fashion somewhere.
|
gharchive/pull-request
| 2021-03-29T21:52:10 |
2025-04-01T04:54:45.636924
|
{
"authors": [
"mitchdenny",
"tjprescott"
],
"repo": "Azure/azure-sdk-for-ios",
"url": "https://github.com/Azure/azure-sdk-for-ios/pull/789",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1668723736
|
Deprecation of Service Bus Processor Stop API
Today Service Bus Processor exposes two APIs that complete the message pumping - stop and close.
The mainline use case is - the application wants the Processor to pump the message forever. When/If the application shuts down, it calls close API to dispose of the Processor.
As the engineering team recently reevaluated past design choices, we identified that the stop API involves considerable complexity; supporting it correctly carries a reasonable (allocation, coordination) overhead and adds engineering cost. Additionally, the mainline use case above (which does not use stop) pays the price of this overhead.
The engineering team concluded that the mainline use case will greatly benefit from deprecating and removing stop, saving engineering and maintenance costs.
Going forward, starting a Processor that was previously stopped is not recommended, and this feature may be deprecated in the future. The recommendation is to close the Processor instance and create a new one to restart processing.
The April 2023 version (v7.13.4) of the Service Bus SDK includes a warning-level log message to communicate this upcoming change; this is the PR.
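The close-and-recreate recommendation can be illustrated with a toy model (plain Python stand-ins for illustration only, not the actual Service Bus SDK types):

```python
class Processor:
    """Toy stand-in for a message processor: pump messages, then dispose."""
    def __init__(self, handler):
        self.handler = handler
        self.closed = False

    def process(self, message):
        if self.closed:
            raise RuntimeError("cannot process on a disposed processor")
        self.handler(message)

    def close(self):
        self.closed = True


def restart(old: Processor, handler) -> Processor:
    # Instead of stop()/start() on one instance, dispose the old
    # processor and build a fresh one -- the recommended pattern.
    old.close()
    return Processor(handler)
```

The point of the pattern is that a disposed instance is never revived; all post-restart work goes through a brand-new object, which is what removes the stop/restart coordination overhead.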
We run an async "short-lived" client for receiving messages and have tested recreating clients using close(). But this does not behave gracefully today. This is a list of my top messages from the logs when we call close(), after a few days in production with version 7.13.4 of the Java lib (I guess some of the messages could be ignored, but still...):
Cannot perform operation 'abandoned' on a disposed receiver.
The receiver didn't receive the disposition acknowledgment due to receive link closure.
java.lang.InterruptedException
Cannot subscribe. Processor is already terminated
Cannot perform operations on a disposed set.
Delivery not on receive link.
Cannot perform operation 'renewMessageLock' on a disposed receiver
Maybe a combination of first calling stop(), and then close() a short while later, could be a good choice?
We definitely need a way for in-flight messages to complete just before close; otherwise it's very easy to hit edge cases where the work of processing a message completes, but the complete call then fails because the processor was stopped/closed.
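A graceful shutdown along those lines would stop accepting new messages, drain in-flight handlers, and only then release resources. A minimal sketch of that pattern (illustrative Python, not the SDK's implementation; names are assumptions):

```python
import concurrent.futures
import threading

class GracefulPump:
    """Sketch of a pump that drains in-flight handlers before closing."""
    def __init__(self, handler, workers=4):
        self.handler = handler
        self.pool = concurrent.futures.ThreadPoolExecutor(max_workers=workers)
        self.accepting = True
        self.lock = threading.Lock()
        self.in_flight = []

    def submit(self, message) -> bool:
        with self.lock:
            if not self.accepting:
                return False            # refuse new work once shutdown begins
            fut = self.pool.submit(self.handler, message)
            self.in_flight.append(fut)
            return True

    def close(self, timeout=30.0):
        with self.lock:
            self.accepting = False      # 1. stop taking new messages
        concurrent.futures.wait(self.in_flight, timeout=timeout)  # 2. drain
        self.pool.shutdown()            # 3. release resources
```

Because the drain happens before resources are released, a handler that settles (completes/abandons) its message mid-shutdown does not race against a disposed receiver.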
|
gharchive/issue
| 2023-04-14T18:00:39 |
2025-04-01T04:54:45.643824
|
{
"authors": [
"MT-Jacobs",
"anuchandy",
"trefsahl"
],
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/issues/34464",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1889588308
|
[QUERY] RequestRetryOptions does not seem to work with Spring Boot
Query/Question
My app is on Spring Boot 3.1.0 and uses the Spring Cloud Azure package.
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>spring-cloud-azure-starter-storage-blob</artifactId>
<version>5.5.0</version>
</dependency>
This Java app uses multiple threads to upload images (JPEG, PNG) into Azure Blob Storage. Each thread uploads one image, but I intermittently face upload timeouts.
How do I determine when there is a timeout? I set the timeout to 30 seconds because I think uploading a tiny image should not take more than 30 seconds. Please see my attached code below.
To cushion this timeout issue, I want to use retry logic to fail over if the upload exceeds 30 seconds.
Why is this not a Bug or a feature Request?
Now I have already introduced retry logic as in the following code, but I CANNOT see the retry log records. When I intentionally set the image upload timeout to a very small value, e.g. 10 ms, with a retry count of 3, I still cannot see three retry executions in the log!
@Bean
public BlobServiceClient getBlobServiceClient() {
RequestRetryOptions requestRetryOptions = new RequestRetryOptions(RetryPolicyType.EXPONENTIAL, 3, 1, 10000L, 20000L, null);
return new BlobServiceClientBuilder().
retryOptions(requestRetryOptions).
connectionString("******").buildClient();
}
@Async("taskExecutor")
public CompletableFuture<String> migrateImageToAzureStorage(Image image, BlobContainerClient forecastOrObsContainer, byte[] imageData) {
String imageAzureBlobName = image.getImageId().toString() + guessFileExtension(imageData);
try {
BlobClient blobClient = forecastOrObsContainer.getBlobClient(imageAzureBlobName);
try (ByteArrayInputStream imageStream = new ByteArrayInputStream(imageData)) {
blobClient.uploadWithResponse(new BlobParallelUploadOptions(imageStream).setTier(AccessTier.HOT).setRequestConditions(new BlobRequestConditions()),
Duration.ofMillis(10), // intentionally tiny timeout to try to force retries
Context.NONE);
LogUtil.info("Uploaded Image "+image.getImageId()+" to Azure Blob Storage HOT Tier as " + blobClient.getBlobUrl());
}
image.setObjectStoreUrl(blobClient.getBlobName());
return CompletableFuture.completedFuture(blobClient.getBlobName());
} catch (IOException e) {
LogUtil.error("Failed for closing stream when UPLOAD image "+image.getImageId()+" for Azure as "+imageAzureBlobName, e);
} catch (IllegalStateException e) {
LogUtil.error("Timeout for UPLOADING image "+image.getImageId()+" for Azure ", e);
} catch (Exception e) {
LogUtil.error("Failed for uploading image "+image.getImageId()+" for Azure ", e);
}
// return a failed future
CompletableFuture<String> failedFuture = new CompletableFuture<>();
failedFuture.completeExceptionally(new MigrationException("Failed for uploading image "+image.getImageId()+" for Azure "));
return failedFuture;
}
Setup (please complete the following information if applicable):
OS: [e.g. iOS]
IDE: [e.g. IntelliJ]
Information Checklist
Kindly make sure that you have added all the following information above and checked off the required fields; otherwise we will treat the issue as an incomplete report.
[x] Query Added
[x] Setup information Added
@alzimmermsft
Thanks for your detailed explanation.
I removed the timeout setting from BlobClient and assigned it to the RequestRetryOptions. Now it works; the following is the pseudo code. Please have a look.
But may I check several things with you:
Could you please share some official documents describing the timeout difference between BlobClient and RequestRetryOptions?
I may be missing something, because I found it hard to catch the Azure upload timeout exception after the retry count runs out, e.g. what is the particular exception type? So I have to use a crude way of matching the exception string, which you can see in my code.
It seems I have to add HttpLogOptions for the Azure-related logs to be emitted, e.g. the following logs.
It also seems I have to fetch these logs by "az.sdk.message", but I will still miss some of them, e.g. the second log in this instance. Is there any way to fetch all of these logs at once?
{"@timestamp":"2023-09-12T02:24:29.712Z","ecs.version":"1.2.0","log.level":"INFO","message":"{\"az.sdk.message\":\"HTTP request\",\"method\":\"PUT\",\"url\":\"https://host.blob.core.windows.net/forecastimage/1565385274.jpeg\",\"tryCount\":\"3\",\"contentLength\":236608141}","process.thread.name":"parallel-4","log.logger":"com.azure.storage.blob.implementation.BlockBlobsImpl$BlockBlobsService.upload"}
{"@timestamp":"2023-09-12T02:24:33.717Z","ecs.version":"1.2.0","log.level":"WARN","message":"[id: ***********, L:/********:***** - R:host.blob.core.windows.net/********:*****] Last write attempt timed out; force-closing the connection.","process.thread.name":"reactor-http-kqueue-3","log.logger":"io.netty.handler.ssl.SslHandler"}
@Bean
public BlobServiceClient getBlobServiceClient() {
RequestRetryOptions requestRetryOptions = new RequestRetryOptions(RetryPolicyType.FIXED, retryMaxCount, tryTimeoutInMs/1000, retryDelayInMs, maxRetryDelayInMs, null);
HttpLogOptions httpLogOptions = new HttpLogOptions().setLogLevel(HttpLogDetailLevel.BASIC);
BlobServiceClient blobServiceClient =
new BlobServiceClientBuilder().httpLogOptions(httpLogOptions).retryOptions(requestRetryOptions).connectionString(connectStr).buildClient();
return blobServiceClient;
}
@Async("taskExecutor")
public CompletableFuture<String> migrateImageToAzureStorage(Image image, BlobContainerClient forecastOrObsContainer, byte[] imageData) {
String imageAzureBlobName = image.getImageId().toString() + guessFileExtension(imageData);
// record the size of each image and represent at the log
double imageSizeInMB = imageData.length / 1024.0 / 1024.0;
try {
BlobClient blobClient = forecastOrObsContainer.getBlobClient(imageAzureBlobName);
try (ByteArrayInputStream imageStream = new ByteArrayInputStream(imageData)) {
blobClient.uploadWithResponse(new BlobParallelUploadOptions(imageStream).setTier(AccessTier.HOT).setRequestConditions(new BlobRequestConditions()),
null,
Context.NONE);
LogUtil.info("Uploaded Image "+image.getImageId()+", Size(MB): " +imageSizeInMB+" to Azure Blob Storage HOT Tier as " + blobClient.getBlobUrl());
}
image.setObjectStoreUrl(blobClient.getBlobName());
return CompletableFuture.completedFuture(blobClient.getBlobName());
} catch (IOException e) {
LogUtil.error("Failed for closing stream when UPLOAD image "+image.getImageId()+", Size(MB): " +imageSizeInMB+ " for Azure as "+imageAzureBlobName, e);
} catch (Exception e) {
// crude way to check the azure blob upload timeout exception
if(e.getMessage().contains("Did not observe any item or terminal signal within ")) {
LogUtil.error("Timeout for UPLOADING image "+image.getImageId()+", Size(MB): " +imageSizeInMB+" for Azure ", e);
} else {
LogUtil.error("Exception happened during uploading image "+image.getImageId()+", Size(MB): " +imageSizeInMB+" for Azure ", e);
}
}
// return a failed future
CompletableFuture<String> failedFuture = new CompletableFuture<>();
failedFuture.completeExceptionally(new MigrationException("Failed for uploading image "+image.getImageId()+", Size(MB): " +imageSizeInMB+" for Azure "));
return failedFuture;
}
@alzimmermsft could you please have a look at the above message when you are available? Thank you so much.
Could you please share some official documents describing the timeout difference between BlobClient and RequestRetryOptions?
This is something that wasn't well documented and I'm working on adding this in a few places. So far, I have a PR opened adding this documentation to azure-core's README: https://github.com/Azure/azure-sdk-for-java/pull/36710/files#diff-b8dc45bc6fad5f70e59b49cda551e56ae6668c49fa0fd026c09b74537b9abed1R161
@ibrahimrabab could you look at porting this documentation to the Storage READMEs as well?
@JonathanGiles where is the best place to put documentation like this in https://learn.microsoft.com/en-us/azure/developer/java/sdk/?
I may be missing something, because I found it hard to catch the Azure upload timeout exception after the retry count runs out, e.g. what is the particular exception type? So I have to use a crude way of matching the exception string, which you can see in my code.
Thanks for making note of this. Looking at it, this is a bit tricky and something the SDKs need to clean up. Depending on which timeout triggered, different exceptions will be thrown.
If the timeout happened at the apiCall(Duration timeout) level, it will throw an IllegalStateException. If the timeout happened at the HttpPipeline or HttpClient layer, it will throw a TimeoutException.
For the purposes of your application, since you removed the usage of apiCall(Duration timeout), I'd check for TimeoutException.
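The distinction between the per-call timeout and the pipeline's per-try timeout can be modeled in a few lines. This is an illustrative Python sketch of the two semantics (function names are assumptions, not SDK APIs); it shows why a tiny per-call timeout surfaces one failure with no visible retries, while a per-try timeout is retried by the pipeline:

```python
class TryTimeout(Exception):
    """A single attempt exceeded its per-try budget (retryable)."""

def upload_with_retries(attempt, max_tries):
    """Per-try timeout semantics: each timed-out attempt is retried
    until max_tries is exhausted, then the last error surfaces."""
    for n in range(1, max_tries + 1):
        try:
            return attempt(n)
        except TryTimeout:
            if n == max_tries:
                raise

def upload_with_call_timeout(attempt):
    """Per-call timeout semantics: one overall deadline wraps the whole
    operation, so when it fires the caller sees one error and no retries."""
    try:
        return attempt(1)
    except TryTimeout as e:
        raise TimeoutError("overall call deadline exceeded") from e
```

Under the per-call model the deadline fires once and the operation is over, which matches the reporter's observation of never seeing three retry executions in the log.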
@JonathanGiles @srnagar @lmolkova we should do a review on the exceptions thrown in timeout scenarios to make sure they are standardized.
It seems I have to add HttpLogOptions for the Azure-related logs to be emitted, e.g. the following logs.
It also seems I have to fetch these logs by "az.sdk.message", but I will still miss some of them, e.g. the second log in this instance. Is there any way to fetch all of these logs at once?
I don't fully understand the question about fetching all the logs at one time. If you're asking how to fetch all Azure SDK logs I would recommend doing that based on the log.logger included in the log and look for com.azure.* loggers.
This is something that wasn't well documented and I'm working on adding this in a few places. So far, I have a PR opened adding this documentation to azure-core's README: https://github.com/Azure/azure-sdk-for-java/pull/36710/files#diff-b8dc45bc6fad5f70e59b49cda551e56ae6668c49fa0fd026c09b74537b9abed1R161
@ibrahimrabab could you look at porting this documentation to the Storage READMEs as well? @JonathanGiles where is the best place to put documentation like this in https://learn.microsoft.com/en-us/azure/developer/java/sdk/?
I'll ping you this week to discuss Alan - but generally speaking this seems like good content for the troubleshooting push I'm doing over at learn.microsoft.com
@JonathanGiles @srnagar @lmolkova we should do a review on the exceptions thrown in timeout scenarios to make sure they are standardized.
@alzimmermsft Please file an issue and start the ball rolling on this! Loop us in ASAP.
@JonathanGiles @alzimmermsft
Thank you so much for the detailed information and your valuable time.
Closed, question was answered, and documentation was updated to be clearer.
|
gharchive/issue
| 2023-09-11T03:38:22 |
2025-04-01T04:54:45.662078
|
{
"authors": [
"JonathanGiles",
"alzimmermsft",
"ibrandes",
"kensinzl"
],
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/issues/36689",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2120094243
|
[BUG] Spring Cloud Azure 5.9.0 update requires JDK 21
Describe the bug
Updating to Spring Cloud Azure 5.9.0 caused a build failure complaining about the class file version. I wasn't able to find any information about a JDK 21 requirement in the release notes, so I assume this is not intentional. Please kindly let me know in case I missed something! Thank you!
Exception or Stack Trace
Caused by: java.lang.UnsupportedClassVersionError: com/azure/spring/cloud/autoconfigure/implementation/context/AzureGlobalConfigurationEnvironmentPostProcessor has been compiled by a more recent version of the Java Runtime (class file version 65.0), this version of the Java Runtime only recognizes class file versions up to 61.0
at java.base/java.lang.ClassLoader.defineClass1(Native Method)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1017)
at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:150)
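The class file versions in the error map directly to JDK releases: major version 65 is Java 21, and 61 is Java 17 (per the JVM specification). A small sketch that reads the major version from a .class header (the helper names are illustrative):

```python
import struct

# Class-file major versions per the JVM spec: Java 17 -> 61, Java 21 -> 65.
MAJOR_TO_JDK = {61: 17, 65: 21}

def class_file_jdk(data: bytes) -> int:
    """Return the minimum JDK implied by a .class file's major version.

    Layout of the first 8 bytes: magic (4), minor_version (2), major_version (2),
    all big-endian.
    """
    magic, _minor, major = struct.unpack(">IHH", data[:8])
    assert magic == 0xCAFEBABE, "not a class file"
    # For modern releases, major = 44 + Java feature version.
    return MAJOR_TO_JDK.get(major, major - 44)
```

Running this against the offending AzureGlobalConfigurationEnvironmentPostProcessor class would report 21, confirming the artifact was compiled for a newer runtime than the JDK 17 build environment.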
To Reproduce
Steps to reproduce the behavior:
Check out https://github.com/nagyesta/lowkey-vault-example/commit/f14417508a6cb433636455f9b115c21fa89e59d7
Build using JDK 17
Observe errors
Code Snippet
N/A
Expected behavior
The project is working with JDK 17
Screenshots
N/A
Setup (please complete the following information):
OS: any
IDE: any
Library/Libraries:
"com.azure.spring:spring-cloud-azure-starter-keyvault-secrets:5.9.0"
"com.azure.spring:spring-cloud-azure-starter-keyvault:5.9.0"
Java version: 17 (below 21)
App Server/Environment: any
Frameworks: Spring Boot
Additional context
Affects this PR: https://github.com/nagyesta/lowkey-vault-example/pull/274
Information Checklist
Kindly make sure that you have added all the following information above and checked off the required fields; otherwise we will treat the issue as an incomplete report.
[x] Bug Description Added
[x] Repro Steps Added
[x] Setup information Added
Meanwhile, Microsoft Azure Functions do not support Java 21 :)
@saragluna @vcolin7 could you please investigate this apparent regression (requiring JDK 21 to run)?
Yes, I'm looking into this.
Sorry for the inconvenience, we will release a hotfix ASAP.
Spring Cloud Azure 5.9.1 is now released.
Looks like 5.9.1 works well. Thank you
|
gharchive/issue
| 2024-02-06T06:57:49 |
2025-04-01T04:54:45.672689
|
{
"authors": [
"Netyyyy",
"OleksandrShkurat",
"joshfree",
"nagyesta",
"saragluna",
"vcolin7"
],
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/issues/38661",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
527469154
|
Add support for groupby queries
The Java SDKs currently do not support GROUP BY queries. Support for these should be added.
Added support for groupby in v4.1.0 https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/cosmos/azure-cosmos/CHANGELOG.md#410-2020-06-25
|
gharchive/issue
| 2019-11-22T23:53:02 |
2025-04-01T04:54:45.674300
|
{
"authors": [
"mbhaskar"
],
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/issues/6524",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
154264001
|
vm createOrUpdate not working
When I try to, e.g., add a data disk to a VM, I get an error that the createOption for the OS disk is required and cannot be null. When I try to set the createOption of the OS disk to DiskCreateOptionTypes.EMPTY, I get "error":{"code":"PropertyChangeNotAllowed","target":"osDisk.createOption","message":"Changing property 'osDisk.createOption' is not allowed."}
The issue seems to be that DataDisk.createOption and OSDisk.createOption do not get filled in on retrieval of a virtual machine. I changed the field to a String and adapted the setter and getter methods, which seems to work.
https://github.com/Azure/azure-rest-api-specs/issues/275
Not sure if that issue is related to mine. In my case, createOption is not filled in when I get() a virtual machine. When I try to update the VM, I get "is required and cannot be null" because the field was not set on get().
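The failure mode can be sketched with plain dictionaries: a field the service requires is not returned on GET, so echoing it back as null breaks the subsequent update. This is an illustrative Python sketch of a defensive read-modify-write (the helper name and the "FromImage" value are assumptions for the example, not the SDK's actual fix):

```python
def prepare_update(current: dict, changes: dict) -> dict:
    """Merge an update into the retrieved VM model, re-applying fields the
    service requires but did not return (e.g. osDisk.createOption)."""
    merged = {**current, **changes}
    os_disk = dict(merged.get("osDisk", {}))
    # If the GET response dropped createOption, restore a known value
    # instead of sending null, which the service rejects.
    if os_disk.get("createOption") is None:
        os_disk["createOption"] = "FromImage"  # value used at creation time
    merged["osDisk"] = os_disk
    return merged
```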
This is fixed in beta2. Please verify.
|
gharchive/issue
| 2016-05-11T14:54:22 |
2025-04-01T04:54:45.676695
|
{
"authors": [
"MSSedusch",
"jianghaolu",
"selvasingh"
],
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/issues/698",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
695566268
|
Azure spring data cosmos version schema
The former Spring version schema design would require two separate artifacts to support Spring Data 2.2.x and 2.3.x, but if we let the Spring BOM manage our versions we could use one artifact to support both.
The dependency management section in a pom file could also affect the transitive versions: https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#bill-of-materials-bom-poms.
/azp run java - cosmos - tests
/azp run java - cosmos - tests
|
gharchive/pull-request
| 2020-09-08T05:49:40 |
2025-04-01T04:54:45.679106
|
{
"authors": [
"saragluna"
],
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/14892",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
790475657
|
Sync eng/common directory with azure-sdk-tools for PR 1327
Sync eng/common directory with azure-sdk-tools for PR https://github.com/Azure/azure-sdk-tools/pull/1327 See eng/common workflow
/check-enforcer evaluate
/check-enforcer evaluate
|
gharchive/pull-request
| 2021-01-21T00:05:23 |
2025-04-01T04:54:45.681249
|
{
"authors": [
"azure-sdk",
"benbp"
],
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/18713",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
806008464
|
Fixing checkstyle breaks from upgrade to 8.40
Fixing indentation in IdentityClient
/azp run java - anomalydetector - ci
|
gharchive/pull-request
| 2021-02-11T01:23:28 |
2025-04-01T04:54:45.682167
|
{
"authors": [
"conniey"
],
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/19166",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1200376084
|
Sync stack
Incorporating feedback:
rename synchronous -> sync
rename getContent -> getBodyAsBinaryData
rename setContent -> setBody(BinaryData)
API change check for com.azure:azure-core
API changes have been detected in com.azure:azure-core. You can review API changes here
API change check for com.azure:azure-core-http-jdk-httpclient
API changes have been detected in com.azure:azure-core-http-jdk-httpclient. You can review API changes here
API changes
- public JdkHttpClientProvider()
+ public JdkHttpClientProvider()
API change check for com.azure:azure-core-http-netty
API changes have been detected in com.azure:azure-core-http-netty. You can review API changes here
API changes
- public NettyAsyncHttpClientProvider()
+ public NettyAsyncHttpClientProvider()
API change check for com.azure:azure-core-http-okhttp
API changes have been detected in com.azure:azure-core-http-okhttp. You can review API changes here
API changes
- public OkHttpAsyncClientProvider()
+ public OkHttpAsyncClientProvider()
- public OkHttpAsyncHttpClientBuilder followRedirects(boolean followRedirects)
API change check for com.azure:azure-core-test
API changes have been detected in com.azure:azure-core-test. You can review API changes here
API change check for com.azure:azure-core-tracing-opentelemetry
API changes have been detected in com.azure:azure-core-tracing-opentelemetry. You can review API changes here
API changes
+ @Override public HttpResponse processSync(HttpPipelineCallContext context, HttpPipelineNextPolicy next)
API change check for com.azure:azure-storage-common
API changes have been detected in com.azure:azure-storage-common. You can review API changes here
API changes
+ @Override public HttpResponse processSync(HttpPipelineCallContext context, HttpPipelineNextPolicy next)
+ @Override public HttpResponse processSync(HttpPipelineCallContext context, HttpPipelineNextPolicy next)
+ @Override public HttpResponse processSync(HttpPipelineCallContext context, HttpPipelineNextPolicy next)
+ @Override public HttpResponse processSync(HttpPipelineCallContext context, HttpPipelineNextPolicy next)
+ @Override public HttpResponse processSync(HttpPipelineCallContext context, HttpPipelineNextPolicy next)
API change check for com.azure:azure-storage-blob
API changes are not detected in this pull request for com.azure:azure-storage-blob
API change check for com.azure:azure-storage-blob-batch
API changes are not detected in this pull request for com.azure:azure-storage-blob-batch
API change check for com.azure:azure-storage-blob-nio
API changes are not detected in this pull request for com.azure:azure-storage-blob-nio
|
gharchive/pull-request
| 2022-04-11T18:31:36 |
2025-04-01T04:54:45.694057
|
{
"authors": [
"azure-sdk",
"kasobol-msft"
],
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/28187",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1279174943
|
Bug Fix: Passing clientOptions to HttpPipelineBuilder in buildPipeline
resolves #28783
API change check
API changes are not detected in this pull request.
|
gharchive/pull-request
| 2022-06-21T22:17:07 |
2025-04-01T04:54:45.695285
|
{
"authors": [
"azure-sdk",
"ibrahimrabab"
],
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/29588",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
415806076
|
[AutoPR mariadb/resource-manager] Update MariaDB api version
Created to sync https://github.com/Azure/azure-rest-api-specs/pull/5280
This PR has been merged into https://github.com/Azure/azure-sdk-for-java/pull/2387
|
gharchive/pull-request
| 2019-02-28T20:20:40 |
2025-04-01T04:54:45.697138
|
{
"authors": [
"AutorestCI"
],
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/2987",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2237229238
|
[Automation] Generate SDK based on TypeSpec 0.15.8
[Automation] Generate SDK based on TypeSpec 0.15.8
/check-enforcer override
|
gharchive/pull-request
| 2024-04-11T08:51:14 |
2025-04-01T04:54:45.698226
|
{
"authors": [
"azure-sdk",
"weidongxu-microsoft"
],
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/39664",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2759216207
|
[Automation] Generate Fluent Lite from Swagger security#package-composite-v3
[Automation] Generate Fluent Lite from Swagger security#package-composite-v3
API change check
API changes are not detected in this pull request.
|
gharchive/pull-request
| 2024-12-26T03:09:26 |
2025-04-01T04:54:45.699360
|
{
"authors": [
"azure-sdk"
],
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/43617",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
744134206
|
Confused about @azure/ms-node-auth vs @azure/identity vs MSAL.js
Package Name: @azure/identity
Package Version: All
Is the bug related to documentation in
[x] README.md
[x] source code documentation
[x] SDK API docs on https://docs.microsoft.com
Describe the bug
It is unclear what the difference is between @azure/ms-node-auth, @azure/identity, and MSAL.js. I do not know which lib to use when. If there is a doc that explains it, I cannot find one.
Expected behavior
A doc explaining each library and when they should be used with examples.
Thanks for reporting @southpolesteve
@jonathandturner, @daviwil, @sadasant,
I was thinking that at the very least, we should do a couple of things here
Update the readme for @azure/identity with notes on
how it integrates with MSAL
the fact that it uses v2 of AAD apis
that not all credential classes can be used in the browser
any scenario where one should use MSAL.js directly instead of @azure/identity
Resolve #11359
See what improvements we can do to Authenticate with the Azure management modules for JavaScript
@southpolesteve Can you list the top app types or identity issues, from your perspective, a customer is trying to figure out when asking this question?
@diberry I think you covered it well.
Maybe your first case captures this, but I think it can be broken in two:
I am writing an app that needs many social logins (FB, GitHub, etc) only for identity purposes
I am writing an app that specifically needs AAD login to access Azure. Like an internal app that lets my employees configure things that might be Azure resources under the hood.
I also think there is a "just make it work any way you can" case for developers. I won't put a web app in production with VS Code-based login, but if it helps me get the app working or improves development, I want that to be possible.
@southpolesteve Hello Steve! I'll be going through your feedback and making an update to our documentation as soon as possible. I'll get in contact with you in case I'm missing something. Thank you for submitting this issue!
@sadasant @ramya-rao-a Do you want to be included on Dev Center changes or localize this issue to your own SDK content?
@diberry I'd like to be included in the Dev Center changes! If anything, for exposure.
Hi again! Just to mention that I'll provide more information next week.
@southpolesteve :
I believe that the questions @diberry is working on will be helpful for you! Let me answer some of the other things you mention here.
It is unclear what the difference is between @azure/ms-node-auth, @azure/identity, and MSAL.js
@azure/ms-node-auth is the older authentication library! We continue to maintain it, but we're not adding new features. We're planning better integration with it to help users move toward our newer library.
@azure/identity is our newer library! This is the one that should be used if you want to authenticate any of our clients with the Azure services.
MSAL.js is a library that connects to the system-specific keychains to provide the same authentication experience across environments. @azure/identity uses MSAL.js under the hood!
When should each library be used?
Even though all of them can be used, @azure/identity is the library that should be used with the Azure SDK for JS clients.
@ramya-rao-a :
Update the readme for @azure/identity
I've made an issue! https://github.com/Azure/azure-sdk-for-js/issues/12669 . I'll follow up this week.
See what improvements we can do to Authenticate with the Azure management modules for JavaScript.
I'll take a look and I'll make some notes!
While I move ahead with the readme update etc, how else can I be useful here? Please let me know if I'm missing something!
On MSAL:
MSAL does offer several features that are not yet available in our SDK, but we will be adding support as soon as possible. These include more control over the caching and storing of the credentials. However, we're working as closely as possible with the MSAL team, so we should be able to level up with them in a couple of months, as far as I understand.
It's in our interest to encourage people to use the @azure/identity library as much as possible, instead of the possible alternatives, since direct customer feedback will help us make this experience better for everyone.
@sadasant thanks for the explainer, I've been wondering the differences for awhile! What is the recommendation for providing browser-based authentication (e.g. for webapps needing a credential)? @azure/identity has a browser method that works well, but the package is only supported by a small number of Azure service libraries. I'm using @azure/arm-resources and @azure/arm-compute which require the older msRest.ServiceClientCredentials type of credential object.
@seanknox You can use the @azure/ms-rest-browserauth package when working with @azure/arm-resources and @azure/arm-compute packages for authentication needs in the browser. The readmes on these packages should have a code snippet that shows this:
Code snippet in readme for compute
Code snippet in readme for resources
@ramya-rao-a @azure/ms-rest-browserauth requires creating an AD app to authenticate users. Is that the only option for browser authentication, or is there another way can users authenticate directly to Microsoft auth, like @azure/identity's InteractiveBrowserCredential method?
@seanknox Hello hello! I wonder if a credential like @azure/identity's DeviceCodeCredential can work for you. Would that be useful? In ms-rest-nodeauth we have interactiveLoginWithAuthResponse, which is similar.
I understand that this wouldn't be on the browser though. Would it be possible to move to @azure/identity instead?
@azure/ms-rest-browserauth requires creating an AD app to authenticate users. Is that the only option for browser authentication, or is there another way can users authenticate directly to Microsoft auth, like @azure/identity's InteractiveBrowserCredential method?
@seanknox All credential classes in @azure/identity make use of the client ID and therefore require you to create an app registration. The ones that don't take one explicitly default to the client ID corresponding to the Azure CLI. So yes, the recommended way is to create an app registration and pass the clientId when creating the credential.
@sadasant The packages @azure/arm-resources and @azure-arm-compute do not support @azure/identity. So, @seanknox won't be able to use it.
Why don't they support @azure/identity? I'm interested in making it work.
If it makes sense, is it because of Continuous Access Evaluation (CAE) challenge based authentication? I believe this is important for ARM resources. We're adding support for CAE this month.
This has nothing to do with CAE
All the management plane packages (the ones dealing with resource management) at the moment are auto generated. The generated code works with the credentials from @azure/ms-rest-nodeauth and @azure/ms-rest-browserauth. They are of a different shape than the TokenCredential interface which is implemented by all the credentials in the @azure/identity package. We do have a feature request to update the code generator to generate code that will work with the credentials from @azure/identity as well. But it will take a while to update the code generator and re-generate over 100 management plane packages.
@seanknox Please log an issue in the repo for @azure/ms-rest-browserauth for more on that package
We have https://github.com/Azure/azure-sdk-for-js/issues/12669 tracking improvements to the @azure/identity package which we will tackle this month.
We are independently tracking other efforts to improve documentation around auth. So, closing this issue.
Thanks for your patience everyone
Then how about the @azure/msal-browser library? This lib uses PublicClientApplication to achieve browser login and call Graph or other web APIs. What distinguishes it from @azure/identity?
@leolumicrosoft,
@azure/identity contains multiple credential classes, all following the TokenCredential interface. You would need to use these credentials when using our newer set of libraries.
When in browser, the only credential that applies from @azure/identity is the InteractiveBrowserCredential which at the moment uses the msal package. We are in the process of moving to use @azure/msal-browser instead. See #13155 and #13263
You are free to use @azure/msal-browser directly as long as you create your own credential class that follows the interface expected by the client constructor in the Azure package that you are using
The client constructors in the new JS packages require a credential that follows the TokenCredential interface
The client constructors in the rest of the JS packages in this repo require a credential that follows the ServiceClientCredential interface. An example can be found at Authenticating with an existing token
Thank you, @ramya-rao-a, for the detailed reply. I've started to understand more of the AD-related JavaScript SDKs.
I'll highlight a few points I learned through experimentation, as they might be helpful to people who have just started exploring the AD authentication topic. Please correct anything that is inaccurate.
DefaultAzureCredential in @azure/identity is meant to be used in backend service code, or in a locally running application, to get a credential. It is not used in frontend code such as JavaScript in HTML.
Browser JavaScript code can use @azure/msal-browser or the earlier @azure/ms-node-auth. @azure/msal-browser uses the auth code flow, which allows stricter control of protected resource access, while @azure/ms-node-auth uses the implicit flow.
Azure services expose RESTful endpoints which need a token. There are two types of tokens. One is for the resource management (control plane) REST API, such as listing all the storage accounts in your Azure account; this token can be obtained through credential.getToken("https://management.azure.com/.default").
The other type of RESTful endpoint needs a token scoped to each specific service, for example:
azure keyvault: credential.getToken("https://vault.azure.net/.default")
azure digitaltwin: credential.getToken("https://digitaltwins.azure.net/.default")
Both 3 and 4 can be tested using Postman after you get the token by using DefaultAzureCredential in a simple locally run script with "az login".
These points may be very basic, but it still took me three days to get a clearer insight into them.
|
gharchive/issue
| 2020-11-16T19:55:08 |
2025-04-01T04:54:45.742652
|
{
"authors": [
"diberry",
"leolumicrosoft",
"ramya-rao-a",
"sadasant",
"seanknox",
"southpolesteve"
],
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/issues/12565",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
746139786
|
Remove core-tracing dependency in core-auth
The @azure/core-auth package was meant to be a lightweight one holding the types and interfaces to be used by anyone trying to implement the credentials used in our latest packages. In #11359, we are discussing using it in our older code generator so that the older packages can make use of @azure/identity as well. Since this package has a dependency on @azure/core-tracing only for types, we now end up pulling in an unnecessary tracing dependency as well.
This issue is to consider removing the dependency on core-tracing from core-auth and instead duplicating the two types we pull in, i.e. SpanOptions and SpanContext.
cc @xirzec, @joheredi
I'm good with duplicating.
|
gharchive/issue
| 2020-11-19T00:12:57 |
2025-04-01T04:54:45.745976
|
{
"authors": [
"ramya-rao-a",
"xirzec"
],
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/issues/12612",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1147384101
|
[Identity] Should we add support for tenantId on the ManagedIdentityCredential?
On getToken, the ManagedIdentityCredential can receive a tenant Id since the GetTokenOptions type supports it. However, the ManagedIdentityCredential does not send the tenant Id on the outgoing requests. Should we support this?
@sadasant I believe you cannot support tenant ID on managed identity; since this is calling a localhost endpoint, that is not how the flow is designed to work.
The update we did in the Python doc is to say that when you implement the "get_token" protocol, you may silently ignore tenant_id if you can't do anything with it, as this parameter should be seen as a hint of how to get a valid token (it's designed with the Key Vault challenge in mind), not as a requirement that it has to be this tenant_id. This means if the hint doesn't apply to this implementation, it's safe to ignore it.
@lmazuel oh ok! Gotcha.
So, my approach to solve this issue will be just to add tests. Thank you!
I've been beating my head against the wall on this. From what I'm reading if I want to use ManagedIdentityCredential I can't specify the tenant ID, which I thought I could do. However, when I do, I always pull the managed identity from the default tenant and not the tenant that I specify. Which makes sense now.
My understanding is that when I have a multi-tenant app (Azure Function) that uses a system managed identity, that a Managed Identity will be created in other tenants when an admin consents to the access in those tenants. How then do I consume the managed identity in the other tenant, so that my multi-tenant function can access the Graph API of the other tenant? Everything I try just returns an access token for the tenant that is hosting the app.
My code looks like this:
var credential = new DefaultAzureCredential();
var token = credential.GetToken(
    new Azure.Core.TokenRequestContext(
        new[] { "https://graph.microsoft.com/.default" }, null, null, "<Tenant B ID>"));
var accessToken = token.Token;
|
gharchive/issue
| 2022-02-22T21:26:59 |
2025-04-01T04:54:45.750281
|
{
"authors": [
"appleoddity",
"lmazuel",
"sadasant"
],
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/issues/20498",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1268057931
|
Missing H1 for Quantum Area
The following page on Docs are using H2 header elements in place of an H1 title header. Docs throws a warning when there is no H1 element at the top of the page.
https://github.com/MicrosoftDocs/azure-docs-sdk-node/blob/main/docs-ref-services/preview/quantum-jobs-readme.md
This appears to have been imported from the following readme file: sdk/quantum/quantum-jobs/README.md
The readme file should be modified to use H1 title headers and re-imported to Docs, or the import process needs to be modified to change the headers.
Label prediction was below confidence level 0.6 for Model:ServiceLabels: 'Storage:0.23352472,Azure.Core:0.13711227,Docs:0.04791413'
Tracking in #22206
|
gharchive/issue
| 2022-06-10T22:01:27 |
2025-04-01T04:54:45.753179
|
{
"authors": [
"azure-sdk",
"v-alje",
"xirzec"
],
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/issues/22208",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1735013004
|
OpenAI: GetChatCompletionsOptions.n - can n have a more obvious name
The property n ("The number of chat completions choices that should be generated for a chat completions response") needs a more obvious name, such as totalAllowedChatCompletions.
Please update the source.
As unfortunate as it is, this name is used in the API: https://platform.openai.com/docs/api-reference/chat/create#chat/create-n and the SDK is using the same names.
/cc @bterlson @johanste
@deyaaeldeen How did that get past API review board?
This API is owned by OpenAI.
Ok.
|
gharchive/issue
| 2023-05-31T20:59:31 |
2025-04-01T04:54:45.755885
|
{
"authors": [
"deyaaeldeen",
"diberry",
"johanste"
],
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/issues/26064",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
723275478
|
[Service Bus] "message" word added to "createBatch", "CreateBatchOptions" and "tryAdd"
PR for #11878
/azp run js - servicebus - tests
/azp run js - servicebus - tests
|
gharchive/pull-request
| 2020-10-16T14:28:28 |
2025-04-01T04:54:45.757280
|
{
"authors": [
"HarshaNalluru",
"mohsin-mehmood",
"ramya-rao-a"
],
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/pull/11887",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
927675996
|
[core] - added changelog entries for recent changes
I forgot the changelogs. I always forget the changelogs 🤷
/check-enforcer override
|
gharchive/pull-request
| 2021-06-22T22:16:26 |
2025-04-01T04:54:45.758099
|
{
"authors": [
"maorleger"
],
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/pull/15902",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1009526170
|
Post release automated changes for eventgrid releases
Post release automated changes for azure-arm-eventgrid
@qiaoza Please take a look at the merge conflicts in this PR
close this one as eventgrid has already been GAed
|
gharchive/pull-request
| 2021-09-28T09:46:13 |
2025-04-01T04:54:45.759136
|
{
"authors": [
"azure-sdk",
"qiaozha",
"ramya-rao-a"
],
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/pull/17906",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1639023108
|
imagebuilder release
https://github.com/Azure/sdk-release-request/issues/3930
API change check
APIView has identified API level changes in this PR and created following API reviews.
azure-arm-imagebuilder
|
gharchive/pull-request
| 2023-03-24T09:23:47 |
2025-04-01T04:54:45.760718
|
{
"authors": [
"azure-sdk",
"kazrael2119"
],
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/pull/25361",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1886536919
|
Move identity variable setting into test-resources-pre.ps1
Updating the identity live tests to remove logic and env setting from yaml in favor of a dedicated keyvault config and powershell script. This will make it possible to improve local and sovereign cloud testing, and makes cross-language config updates easier.
Related: https://github.com/Azure/azure-sdk-for-net/pull/38473
API change check
API changes are not detected in this pull request.
Working through some testing issues
Live tests: https://dev.azure.com/azure-sdk/internal/_build/results?buildId=3080889&view=results
|
gharchive/pull-request
| 2023-09-07T20:40:04 |
2025-04-01T04:54:45.763657
|
{
"authors": [
"azure-sdk",
"benbp"
],
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/pull/27049",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
380068032
|
[AutoPR hdinsight/resource-manager] [HDInsight] - Support KV URL
Created to sync https://github.com/Azure/azure-rest-api-specs/pull/4449
This PR has been merged into https://github.com/Azure/azure-sdk-for-js/pull/515
|
gharchive/pull-request
| 2018-11-13T05:40:10 |
2025-04-01T04:54:45.765236
|
{
"authors": [
"AutorestCI"
],
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/pull/475",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
577557562
|
[Add methods:] Add method to retrieve results of completed train/analyze LROs
Per .NET guidelines: https://azure.github.io/azure-sdk/dotnet_introduction.html#dotnet-longrunning
Outstanding design issues here: https://github.com/Azure/azure-sdk-for-python/pull/9963#discussion_r388289970
Constructors have been added to Operation classes to resume LRO.
|
gharchive/issue
| 2020-03-08T20:07:42 |
2025-04-01T04:54:45.766940
|
{
"authors": [
"annelo-msft"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/issues/10408",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
618215288
|
Azure QnA maker fails to create a knowledge base by raising ExtractionFailure error code
Describe the bug
Azure QnA Maker SDK fails to create a knowledge base from a URL powered by an Azure HTTP-triggered function, even though the function is publicly available and only accepts GET requests. The response of the function is a pure string returned by "return new OkObjectResult(responseString)".
Expected behavior
Knowledgebase should be created from the given URL.
Actual behavior (include Exception or Stack Trace)
The following error message is produced:
Unsupported / Invalid url(s). Failed to extract Q&A from the source
To Reproduce
Run the following code snippet in a console application:
var createKbDto = new CreateKbDTO
{
    Name = request.Name,
    QnaList = new List<QnADTO>(),
    Urls = new List<string>
    {
        "https://caccea77.ngrok.io/api/jobdescription/facade/2034/43D672205B3106BE3273C60FE423C632"
    }
};
var createKb = await client.Knowledgebase.CreateAsync(createKbDto);
var createdOp = await MonitorOperationAsync(client, createKb);
return GetKbId(createdOp);
Environment:
Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker" Version="1.1.0"
IDE and version : Visual Studio 16.5.4
Environment:
.NET Core SDK (reflecting any global.json):
Version: 3.1.201
Commit: b1768b4ae7
Runtime Environment:
OS Name: Windows
OS Version: 10.0.18363
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\3.1.201\
Host (useful for support):
Version: 3.1.3
Commit: 4a9f85e9f8
.NET Core SDKs installed:
2.1.802 [C:\Program Files\dotnet\sdk]
2.2.207 [C:\Program Files\dotnet\sdk]
2.2.402 [C:\Program Files\dotnet\sdk]
3.0.100-rc1-014190 [C:\Program Files\dotnet\sdk]
3.1.100-preview3-014645 [C:\Program Files\dotnet\sdk]
3.1.201 [C:\Program Files\dotnet\sdk]
.NET Core runtimes installed:
Microsoft.AspNetCore.All 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.17 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.17 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.0.0-rc1.19457.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.0-preview3.19555.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.17 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.0.0-rc1-19456-20 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.0-preview3.19553.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.0.0-rc1-19456-20 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.0-preview3.19553.2 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
@dfulcer
Hi Dean,
To be able to reproduce the problem please feed the following URL to QnAMaker's SDK:
https://caccea77.ngrok.io/api/jobdescription/facade/2034/43D672205B3106BE3273C60FE423C632
Feeding this URL to QnAMaker portal results in the same error message too:
Unsupported / Invalid url(s). Failed to extract Q&A from the source
Does the web content produced by the URL above lack anything that QnAMaker expects? What causes QnAMaker to reject the content?
/cc @milad-simcoeai @miladghafoori
@dfulcer @jsquire
Any updates on this ticket? The application insights is quite silent in terms of logging what goes wrong.
/cc @milad-simcoeai
I am just the source of initial triage in this case. Unfortunately, that means that I don't have any insight to offer.
@jsquire Just wondering, who's the lead on QnA Maker, so we can have them engaged as quickly as possible? It's a showstopper and I'm sure there are or will be other folks experiencing the same problem.
With regret, I do not know. This is not a library which the Azure SDK team owns at this point. The QnA Maker service team will need to assist, but beyond that I don't have insight. Each Azure service team has their own triage process once an issue has been identified and tagged. In this case, it would appear that they've identified @dfulcer as the point of contact.
If this is a show stopping issue, I'd recommend opening an Azure support ticket. That will be a more formal and expedient route for support with a proper escalation path. That would ensure that someone is actively working to engage the proper folks for attention.
My apologies that I don't have a better answer for you.
This thread has come to the team now :(
This is related to the QnAMaker extraction logic, not specific to the SDK. As per the error, the provided URL's content didn't meet our extraction standard, so no QnAs could be generated. However, I see that the provided URL doesn't exist anymore. Please close the thread if it's too late, or share a valid URL so that we can investigate. Thanks!
|
gharchive/issue
| 2020-05-14T13:05:53 |
2025-04-01T04:54:45.785661
|
{
"authors": [
"Arash-Sabet",
"jsquire",
"rokulka"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/issues/12075",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
863115900
|
[BUG] BlobContainerClient.CreateIfNotExistsAsync returns a null Response object when container already exists.
Describe the bug
The bug happened here:
https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/storage/Azure.Storage.Blobs/src/BlobContainerClient.cs#L1080
BlobContainerClient.CreateIfNotExistsAsync() returns default or null if said container already exists.
Expected behavior
It should return a Response object with the proper error code
Actual behavior (include Exception or Stack Trace)
It returns null
Environment:
Azure.Storage.Blobs 12.8.1
Hosting platform or OS and .NET runtime version:
.NET SDK (reflecting any global.json):
Version: 5.0.201
Commit: a09bd5c86c
Runtime Environment:
OS Name: Windows
OS Version: 10.0.19041
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\5.0.201\
Host (useful for support):
Version: 5.0.4
Commit: f27d337295
.NET SDKs installed:
1.0.4 [C:\Program Files\dotnet\sdk]
2.1.500 [C:\Program Files\dotnet\sdk]
2.1.812 [C:\Program Files\dotnet\sdk]
2.2.207 [C:\Program Files\dotnet\sdk]
3.1.202 [C:\Program Files\dotnet\sdk]
5.0.201 [C:\Program Files\dotnet\sdk]
.NET runtimes installed:
Microsoft.AspNetCore.All 2.1.6 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.24 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.26 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.6 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.24 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.26 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.13 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 5.0.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 1.0.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 1.1.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.6 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.24 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.26 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.13 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 5.0.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.13 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 5.0.4 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
IDE and version : Microsoft Visual Studio Community 2019 Version 16.9.2
Hi,
CreateIfNotExists returning null or default is the expected result if the container already exists. A Response<BlobContainerInfo> cannot be returned in that case because it would imply the container was successfully created. Users expect that if they receive a null or default response from this API, the container already exists (and that we don't throw an exception). If the container does not exist and was created, a Response<BlobContainerInfo> will be returned. (If we were to stop returning default or null when the container already exists, this would be a breaking change.)
If you're looking for this method to throw an exception upon seeing a BlobErrorCode of ContainerAlreadyExists, please use the regular Create method.
|
gharchive/issue
| 2021-04-20T18:36:53 |
2025-04-01T04:54:45.792807
|
{
"authors": [
"Arkatufus",
"amnguye"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/issues/20537",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1348532951
|
Explore multiple calls to GZip WriteTo and TryComputeLength methods
To be completed before GA on Sept 29
Can we call WriteTo and then continue to add JSON?
Add test for write to the stream, get the length, and then write some more
Are gzip streams appendable?
https://gist.github.com/KrzysztofCwalina/f94e76a50c78968fe9c7b3df99a73eed
|
gharchive/issue
| 2022-08-23T20:54:18 |
2025-04-01T04:54:45.794671
|
{
"authors": [
"nisha-bhatia"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/issues/30691",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1882605895
|
[BUG] ObjectDisposedException when mocking HttpResponseMessage content via a System.Net.Http.DelegatingHandler on the underlying HTTP client.
Library name and version
Azure.ResourceManager.Resources 1.6.0, Azure.ResourceManager 1.7.0, Azure.Core 1.34.0
Describe the bug
I'm encountering an ObjectDisposedException in our unit test environment that uses a DelegatingHandler to mock out response content. See the repro steps for relevant code snippets.
After debugging, I discovered the following:
In Azure.Core.Pipeline.HttpClientTransport::ProcessAsync(HttpMessage message, bool async), the StringContent gets read as a MemoryStream and is passed to the SDK's response abstraction PipelineResponse which is an IDisposable that when disposed, will dispose the underlying HttpResponseMessage and its content.
In Azure.Core.Pipeline.ResponseBodyPolicy, if the content stream is a non-seekable stream and message.BufferResponse is true, a setter message.Response.ContentStream = bufferedStream is called. This invokes the overridden setter in Azure.Core.Pipeline.PipelineResponse which nulls the Content on HttpResponseMessage
This step does not occur in the unit test setup because responseContentStream.CanSeek is true for a MemoryStream
In Azure.ResourceManager.Resources.ArmDeploymentResource::UpdateAsync(...) the HttpMessage is disposed after CreateOrUpdateAtScopeAsync is done, which will dispose the StringContent on the response since it was not nulled by the previous point.
The SDK then wraps the response with a ResourcesArmOperation which leads to the disposed exception.
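The disposal chain in points 1 and 4 can be reproduced with just System.Net.Http: disposing the HttpResponseMessage disposes its content, which closes the underlying stream, and a later Length read throws the same ObjectDisposedException. A self-contained sketch:

```csharp
using System;
using System.IO;
using System.Net.Http;

// A response whose content was never nulled out (the seekable-MemoryStream
// path described above, where ResponseBodyPolicy skips the buffered swap).
var response = new HttpResponseMessage { Content = new StringContent("{\"status\":\"Succeeded\"}") };
Stream body = response.Content.ReadAsStreamAsync().GetAwaiter().GetResult();
Console.WriteLine(body.Length > 0);  // True: stream is readable before disposal

// Disposing the message disposes its Content, which closes the cached stream...
response.Dispose();

// ...so a later Length access (as in NextLinkOperationImplementation) throws.
try { _ = body.Length; }
catch (ObjectDisposedException) { Console.WriteLine("ObjectDisposedException"); }
```

This is why nulling HttpResponseMessage.Content after buffering (the non-seekable path) avoids the crash: the buffered stream is no longer owned by the disposed message.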
Expected behavior
Be able to mock a long running operation's intermediate response at the HTTP layer via DelegatingHandler.
Actual behavior
System.ObjectDisposedException: Cannot access a closed Stream.
at System.IO.__Error.StreamIsClosed()
at System.IO.MemoryStream.get_Length()
at Azure.Core.NextLinkOperationImplementation.IsFinalState(Response response, HeaderSource headerSource, Nullable`1& failureState, String& resourceLocation)
at Azure.Core.NextLinkOperationImplementation.Create(HttpPipeline pipeline, RequestMethod requestMethod, Uri startRequestUri, Response response, OperationFinalStateVia finalStateVia, Boolean skipApiVersionOverride, String apiVersionOverrideValue)
at Azure.Core.NextLinkOperationImplementation.Create[T](IOperationSource`1 operationSource, HttpPipeline pipeline, RequestMethod requestMethod, Uri startRequestUri, Response response, OperationFinalStateVia finalStateVia, Boolean skipApiVersionOverride, String apiVersionOverrideValue)
at Azure.ResourceManager.Resources.ResourcesArmOperation`1..ctor(IOperationSource`1 source, ClientDiagnostics clientDiagnostics, HttpPipeline pipeline, Request request, Response response, OperationFinalStateVia finalStateVia, Boolean skipApiVersionOverride, String apiVersionOverrideValue)
at Azure.ResourceManager.Resources.ArmDeploymentResource.<UpdateAsync>d__20.MoveNext()
Reproduction Steps
The mock delegating handler:
public class MockRequestHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // ...
        var response = new HttpResponseMessage();
        response.Content = new StringContent(/* mock json */);
        // it does NOT call base.SendAsync()
        // ...
        return Task.FromResult(response);
    }
}
The http client:
var webRequestHandler = new WebRequestHandler { AllowAutoRedirect = false };
var httpClient = new HttpClient(
handler: HttpClientFactory.CreatePipeline(innerHandler: webRequestHandler, handlers: delegatingHandlers), // mock handler goes here
disposeHandler: true);
The ArmClient is initialized as follows:
var armClientOptions = new ArmClientOptions
{
// ...
Transport = new HttpClientTransport(httpClient) // http client with the delegating handler
};
var armClient = new ArmClient(tokenCredential, default, armClientOptions);
The ArmClient call:
var armDeploymentSdkResource = armClient.GetArmDeploymentResource(/* ResourceIdentifier */);
var deploymentOperation = await armDeploymentSdkResource
.UpdateAsync(WaitUntil.Started, deploymentRequestInput, this.CancellationToken);
Environment
Windows 11
System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription = ".NET Framework 4.8.9167.0"
JetBrains Rider 2023.2.1
Thank you for your feedback. Tagging and routing to the team member best able to assist.
//cc: @m-nash, @annelo-msft
I think this may turn out to be the same root cause as #38219, with what we currently suspect is the root cause discussed here.
Thanks, @jsquire! I was planning to spend some time looking at https://github.com/Azure/azure-sdk-for-net/issues/38219 today, so I'll look at this one as well while I'm doing that.
@kalbert312, thanks for a really nice investigation and repro case! I have confirmed that this is the same issue as we're looking at in https://github.com/Azure/azure-sdk-for-net/issues/38219. I'm going to close it as a duplicate, but I'm also tagging the other one as Azure.Core, and will try to turn around a fix soon. Thanks for reporting this!
Reopening this as no-longer a duplicate of the first one.
|
gharchive/issue
| 2023-09-05T19:33:45 |
2025-04-01T04:54:45.805521
|
{
"authors": [
"annelo-msft",
"jsquire",
"kalbert312"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/issues/38505",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2448562442
|
[BUG] token retrieval intermittently stuck in BearerTokenAuthenticationPolicy in Azure.Core 1.40.0
Library name and version
Azure.Core 1.40.0
Describe the bug
Same as #44817. However, the "fix" in #44882 only works if a CancellationToken is passed on and cancellation is requested.
We're using ASP.NET Core Data Protection with Azure.Extensions.AspNetCore.DataProtection.Blobs. This is building an Azure.Core.HttpMessage with default cancellation token. So, it is never cancelled and requests hang indefinitely.
I know Azure.Core 1.42.0 is the latest version, but it seems the fix in #44882 would not work in this scenario. The workaround using a CancellationToken does not solve the real issue. There seems to be a specific scenario in which the CurrentTokenTcs never gets a result.
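The shape of the hang can be sketched without any Azure packages: a caller that blocks synchronously on a token task with CancellationToken.None has no escape hatch if that task is never completed. Here tokenTcs is a hypothetical stand-in for CurrentTokenTcs:

```csharp
using System;
using System.Threading.Tasks;

// Stand-in for CurrentTokenTcs: a completion source that, in the failure
// scenario described above, never receives a result.
var tokenTcs = new TaskCompletionSource<string>(TaskCreationOptions.RunContinuationsAsynchronously);

// Production code waits indefinitely (no cancellation to unblock it); the wait
// is bounded here only so the sketch can observe the hang instead of deadlocking.
bool completed = tokenTcs.Task.Wait(TimeSpan.FromMilliseconds(200));
Console.WriteLine(completed);  // False: nothing ever completed the TCS
```

With a real CancellationToken the wait can at least be aborted; with the default token, the only way out is for the token-refresh path to complete or fault the TCS.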
Several of our applications deadlock in this situation:
[Managed to Native Transition]
System.Private.CoreLib.dll!System.Threading.ManualResetEventSlim.Wait(int millisecondsTimeout = -1, System.Threading.CancellationToken cancellationToken) Line 264 C#
System.Private.CoreLib.dll!System.Threading.Tasks.Task.SpinThenBlockingWait(int millisecondsTimeout, System.Threading.CancellationToken cancellationToken) Line 2386 C#
System.Private.CoreLib.dll!System.Threading.Tasks.Task.InternalWaitCore(int millisecondsTimeout, System.Threading.CancellationToken cancellationToken) Line 2354 C#
System.Private.CoreLib.dll!System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task task = Id = 50, Status = WaitingForActivation, Method = "{null}", Result = "{Not yet computed}") Line 51 C#
System.Private.CoreLib.dll!System.Runtime.CompilerServices.TaskAwaiter<Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.AccessTokenCache.AuthHeaderValueInfo>.GetResult() Line 173 C#
[Waiting on Async Operation, double-click or press enter to view Async Call Stacks]
Azure.Core.dll!Azure.Core.Pipeline.TaskExtensions.EnsureCompleted<Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.AccessTokenCache.AuthHeaderValueInfo>(System.Threading.Tasks.Task<Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.AccessTokenCache.AuthHeaderValueInfo> task = Id = 50, Status = WaitingForActivation, Method = "{null}", Result = "{Not yet computed}") Line 33 C#
Azure.Core.dll!Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.AccessTokenCache.TokenRequestState.GetCurrentHeaderValue(bool async = false, bool checkForCompletion = false, System.Threading.CancellationToken cancellationToken = IsCancellationRequested = false) Line 438 C#
[Resuming Async Method]
System.Private.CoreLib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start<Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.AccessTokenCache.TokenRequestState.d__19>(ref Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.AccessTokenCache.TokenRequestState.d__19 stateMachine = {Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.AccessTokenCache.TokenRequestState.d__19}) Line 55 C#
Azure.Core.dll!Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.AccessTokenCache.GetAuthHeaderValueAsync(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, Azure.Core.TokenRequestContext context = {Azure.Core.TokenRequestContext}, bool async = false) Line 207 C#
Azure.Core.dll!Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.AuthenticateAndAuthorizeRequest(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, Azure.Core.TokenRequestContext context = {Azure.Core.TokenRequestContext}) Line 172 C#
Azure.Storage.Blobs.dll!Azure.Storage.StorageBearerTokenChallengeAuthorizationPolicy.AuthorizeRequestInternal(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, bool async = false) Line 69 C#
Azure.Storage.Blobs.dll!Azure.Storage.StorageBearerTokenChallengeAuthorizationPolicy.AuthorizeRequest(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}) Line 52 C#
Azure.Core.dll!Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.ProcessAsync(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline = "System.ReadOnlyMemory[4]", bool async = false) Line 127 C#
Azure.Core.dll!Azure.Core.Pipeline.BearerTokenAuthenticationPolicy.Process(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline = "System.ReadOnlyMemory[4]") Line 61 C#
Azure.Core.dll!Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(Azure.Core.HttpMessage message, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline) Line 47 C#
Azure.Core.dll!Azure.Core.Pipeline.HttpPipelineSynchronousPolicy.Process(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline = "System.ReadOnlyMemory[5]") Line 40 C#
Azure.Core.dll!Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(Azure.Core.HttpMessage message, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline) Line 47 C#
Azure.Core.dll!Azure.Core.Pipeline.RedirectPolicy.ProcessAsync(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline = "System.ReadOnlyMemory[6]", bool async = false) Line 50 C#
Azure.Core.dll!Azure.Core.Pipeline.RedirectPolicy.Process(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline = "System.ReadOnlyMemory[6]") Line 198 C#
Azure.Core.dll!Azure.Core.Pipeline.RetryPolicy.ProcessAsync(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline = "System.ReadOnlyMemory[7]", bool async = false) Line 85 C#
Azure.Core.dll!Azure.Core.Pipeline.RetryPolicy.Process(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline = "System.ReadOnlyMemory[7]") Line 59 C#
Azure.Core.dll!Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(Azure.Core.HttpMessage message, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline) Line 47 C#
Azure.Core.dll!Azure.Core.Pipeline.HttpPipelineSynchronousPolicy.Process(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline = "System.ReadOnlyMemory[8]") Line 40 C#
Azure.Core.dll!Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(Azure.Core.HttpMessage message, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline) Line 47 C#
Azure.Core.dll!Azure.Core.Pipeline.HttpPipelineSynchronousPolicy.Process(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline = "System.ReadOnlyMemory[9]") Line 40 C#
Azure.Core.dll!Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(Azure.Core.HttpMessage message, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline) Line 47 C#
Azure.Core.dll!Azure.Core.Pipeline.HttpPipelineSynchronousPolicy.Process(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline = "System.ReadOnlyMemory[10]") Line 40 C#
Azure.Core.dll!Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(Azure.Core.HttpMessage message, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline) Line 47 C#
Azure.Core.dll!Azure.Core.Pipeline.HttpPipelineSynchronousPolicy.Process(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, System.ReadOnlyMemory<Azure.Core.Pipeline.HttpPipelinePolicy> pipeline = "System.ReadOnlyMemory[11]") Line 40 C#
Azure.Core.dll!Azure.Core.Pipeline.HttpPipeline.Send(Azure.Core.HttpMessage message = {Azure.Core.HttpMessage}, System.Threading.CancellationToken cancellationToken = IsCancellationRequested = false) Line 174 C#
Azure.Storage.Blobs.dll!Azure.Storage.Blobs.BlobRestClient.Download(string snapshot = null, string versionId = null, int? timeout = null, string range = "bytes=0-268435455", string leaseId = null, bool? rangeGetContentMD5 = null, bool? rangeGetContentCRC64 = null, string encryptionKey = null, string encryptionKeySha256 = null, Azure.Storage.Blobs.Models.EncryptionAlgorithmTypeInternal? encryptionAlgorithm = null, System.DateTimeOffset? ifModifiedSince = null, System.DateTimeOffset? ifUnmodifiedSince = null, string ifMatch = null, string ifNoneMatch = "0x8DC8A29E41EA7D4", string ifTags = null, System.Threading.CancellationToken cancellationToken = IsCancellationRequested = false) Line 175 C#
Azure.Storage.Blobs.dll!Azure.Storage.Blobs.Specialized.BlobBaseClient.StartDownloadAsync(Azure.HttpRange range = {Azure.HttpRange}, Azure.Storage.Blobs.Models.BlobRequestConditions conditions = {Azure.Storage.Blobs.Models.BlobRequestConditions}, Azure.Storage.DownloadTransferValidationOptions validationOptions = {Azure.Storage.DownloadTransferValidationOptions}, long startOffset = 0, bool async = false, System.Threading.CancellationToken cancellationToken = IsCancellationRequested = false) Line 1737 C#
Azure.Storage.Blobs.dll!Azure.Storage.Blobs.Specialized.BlobBaseClient.DownloadStreamingInternal(Azure.HttpRange range = {Azure.HttpRange}, Azure.Storage.Blobs.Models.BlobRequestConditions conditions = {Azure.Storage.Blobs.Models.BlobRequestConditions}, Azure.Storage.DownloadTransferValidationOptions transferValidationOverride = {Azure.Storage.DownloadTransferValidationOptions}, System.IProgress progressHandler = null, string operationName = "BlobBaseClient.DownloadStreaming", bool async = false, System.Threading.CancellationToken cancellationToken = IsCancellationRequested = false) Line 1561 C#
Azure.Storage.Blobs.dll!Azure.Storage.Blobs.PartitionedDownloader.DownloadTo(System.IO.Stream destination = {System.IO.MemoryStream}, Azure.Storage.Blobs.Models.BlobRequestConditions conditions = {Azure.Storage.Blobs.Models.BlobRequestConditions}, System.Threading.CancellationToken cancellationToken) Line 307 C#
Azure.Storage.Blobs.dll!Azure.Storage.Blobs.Specialized.BlobBaseClient.StagedDownloadAsync(System.IO.Stream destination = {System.IO.MemoryStream}, Azure.Storage.Blobs.Models.BlobRequestConditions conditions = {Azure.Storage.Blobs.Models.BlobRequestConditions}, System.IProgress progressHandler = null, Azure.Storage.StorageTransferOptions transferOptions = {Azure.Storage.StorageTransferOptions}, Azure.Storage.DownloadTransferValidationOptions transferValidationOverride = null, bool async = false, System.Threading.CancellationToken cancellationToken = IsCancellationRequested = false) Line 2893 C#
Azure.Storage.Blobs.dll!Azure.Storage.Blobs.Specialized.BlobBaseClient.DownloadTo(System.IO.Stream destination = {System.IO.MemoryStream}, Azure.Storage.Blobs.Models.BlobRequestConditions conditions = {Azure.Storage.Blobs.Models.BlobRequestConditions}, Azure.Storage.StorageTransferOptions transferOptions = {Azure.Storage.StorageTransferOptions}, System.Threading.CancellationToken cancellationToken = IsCancellationRequested = false) Line 2677 C#
Azure.Extensions.AspNetCore.DataProtection.Blobs.dll!Azure.Extensions.AspNetCore.DataProtection.Blobs.AzureBlobXmlRepository.GetLatestData() Line 199 C#
Azure.Extensions.AspNetCore.DataProtection.Blobs.dll!Azure.Extensions.AspNetCore.DataProtection.Blobs.AzureBlobXmlRepository.GetAllElements() Line 57 C#
Microsoft.AspNetCore.DataProtection.dll!Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager.GetAllKeys() Unknown
Microsoft.AspNetCore.DataProtection.dll!Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingProvider.CreateCacheableKeyRingCore(System.DateTimeOffset now = {System.DateTimeOffset}, Microsoft.AspNetCore.DataProtection.KeyManagement.IKey keyJustAdded = null) Unknown
Microsoft.AspNetCore.DataProtection.dll!Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingProvider.Microsoft.AspNetCore.DataProtection.KeyManagement.Internal.ICacheableKeyRingProvider.GetCacheableKeyRing(System.DateTimeOffset now = {System.DateTimeOffset}) Unknown
Microsoft.AspNetCore.DataProtection.dll!Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingProvider.GetCurrentKeyRingCore(System.DateTime utcNow, bool forceRefresh) Unknown
Microsoft.AspNetCore.DataProtection.dll!Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.Protect(byte[] plaintext = {byte[949538]}) Unknown
Expected behavior
Blob storage call is executed
Actual behavior
The call is stuck and never completes
Reproduction Steps
Hard to reproduce
Environment
@christothes: Would you please take a look and offer your thoughts?
@christothes : Maybe it's due to the fact there's a single BlobRestClient with a single HttpPipeline containing the same BearerTokenAuthenticationPolicy instance? I can see 2 concurrent Download calls on the same BlobRestClient instance.
I'm not too familiar with this code, but is it thread-safe? Concurrent calls on https://github.com/christothes/azure-sdk-for-net/blob/6c76e7d2374473c7be189f7cf35d641024a0d164/sdk/core/Azure.Core/src/Pipeline/BearerTokenAuthenticationPolicy.cs#L256 could modify the same state. Although with the lock, it would be one after the other. However, does https://github.com/christothes/azure-sdk-for-net/blob/6c76e7d2374473c7be189f7cf35d641024a0d164/sdk/core/Azure.Core/src/Pipeline/BearerTokenAuthenticationPolicy.cs#L203-L205 still work if the 2nd call also changed the state before this code is executed?
#45223
My bad, i missed this subtle change: https://github.com/Azure/azure-sdk-for-net/commit/93512b14ca1b6d40dde499bfb1e74440779dae5f?diff=split&w=0#diff-f6d09d34c9aed3acf5957c27473cfdc4a1d52f0219b8a25339e0abb206a89209R412.
This issue can be closed as I've only seen it with Azure.Core 1.40.0. So it might already be fixed.
|
gharchive/issue
| 2024-08-05T13:26:26 |
2025-04-01T04:54:45.835871
|
{
"authors": [
"MarcWils",
"jsquire",
"timaiv"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/issues/45351",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
544032828
|
Either update or remove use of LoggingExtensions
Throughout the Storage API, we call logging extensions such as LogMethodEnter, LogMethodExit, LogException, etc. Currently these extensions don't do anything, as they are behind a Conditional compilation attribute that isn't enabled, e.g.:
[Conditional("EnableLoggingHelpers")]
public static void LogMethodExit(
    this HttpPipeline pipeline,
    string className,
    [CallerMemberName] string member = default,
    string message = "")
    => LogTrace(pipeline, $"EXIT METHOD {className} {member}\n{message}");
If we want this logging, we should define EnableLoggingHelpers and do any other updates that are needed. If we don't need this, we can delete this file and remove all calls to these methods.
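The no-op behavior comes from how [Conditional] works: call sites are erased at compile time unless the named symbol is defined, so the helper bodies compile but are never invoked. A minimal self-contained sketch (not the Storage code itself):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Calls to a [Conditional] method are erased at compile time unless the named
// symbol is defined — the method body still compiles, but no call sites remain.
// #define EnableLoggingHelpers   // defining this at the top of the file would keep the calls

var log = new List<string>();

[Conditional("EnableLoggingHelpers")]
static void LogMethodExit(List<string> sink, string className, string member) =>
    sink.Add($"EXIT METHOD {className} {member}");

LogMethodExit(log, "BlobClient", "Upload");  // erased: symbol is not defined
Console.WriteLine(log.Count);  // 0
```

So enabling the helpers is purely a build-flag decision; removing them instead eliminates the dead call sites from the source.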
I vote we remove the LoggingExtensions.
Hi @JoshLove-msft, we deeply appreciate your input into this project. Regrettably, this issue has remained inactive for over 2 years, leading us to the decision to close it. We've implemented this policy to maintain the relevance of our issue queue and facilitate easier navigation for new contributors. If you still believe this topic requires attention, please feel free to create a new issue, referencing this one. Thank you for your understanding and ongoing support.
|
gharchive/issue
| 2019-12-30T22:42:12 |
2025-04-01T04:54:45.839314
|
{
"authors": [
"JimSuplizio",
"JoshLove-msft",
"seanmcc-msft"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/issues/9274",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
607958463
|
Performed Code Cleanup
Removed redundant code
Standardized naming and style
Fixed comments
Fixed warnings
Can one of the admins verify this patch?
As the owners of this version of the client library, @nemakam, @shankarsama and team would be the authoritative voice for feedback here.
@axisc and @shankarsama - Would you please be so kind as to provide feedback to @tstepanski and advise if these changes are something that you'd like to consider or if we should look to close out the PR?
Hi @tstepanski. Thank you for your contribution, and I'm sorry that you haven't received any feedback. Unfortunately, it does not look as if the Service Bus team would like to consider these changes at this point in time. I'm going to close this out, since there hasn't been any recent activity or engagement. Please feel free to reopen if you'd like to continue working on these changes.
|
gharchive/pull-request
| 2020-04-28T01:32:34 |
2025-04-01T04:54:45.841981
|
{
"authors": [
"azuresdkci",
"jsquire",
"tstepanski"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/11630",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
999722907
|
Swallow NotImplementedException in EventSource name deduplication logic
Fixes: https://github.com/Azure/azure-sdk-for-net/issues/24055
Hi.
I am part of the now closed report #24055 submitted by Muhammet Sahin, and we find ourselves blocked from publishing our Xamarin-based mobile app to the App Store due to this.
All worked/works fine for some reason while testing the same build of the app from AppCenter, but as we now move to the next phase and add it to the App Store and TestFlight, this issue emerged...
In what timeframe can we get access to a fix for testing in our app?
Best regards
Thomas Odell Balkeståhl
In addition, there is a nightly feed: pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-net/nuget/v3/index.json, where you can get the latest set of packages and test the fix.
Hi.
We have tried to downgrade, but it got even worse: now the app won't even start at all. We get 2-3 crashes at start and then it is dead.
Do you know if the deployment via TestFlight/App Store affects this in any way? Anything that 'manipulates' the versions of the packages?
Most likely, it is as you say, that the bug was introduced in August, but we had a working app in AppCenter (Microsoft) and the exact same build published on TestFlight triggered the bug.
Update:
We have now managed to get our app running. It was due to the bug in azure.storage/azure.core, but also in relation to the experimental flags in Xamarin.
https://docs.microsoft.com/en-us/xamarin/xamarin-forms/internals/experimental-flags
(With 'we' I'm referring to our big hero @muhammetsahin who managed to solve it with no blame to himself)
|
gharchive/pull-request
| 2021-09-17T21:09:38 |
2025-04-01T04:54:45.846590
|
{
"authors": [
"Candelit",
"pakrym"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/24097",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1227238427
|
Add EntityId to secrets, keys, and certificates
Fixes #28564
API change check for Azure.Security.KeyVault.Certificates
API changes have been detected in Azure.Security.KeyVault.Certificates. You can review API changes here
API changes
+ public string EntityId { get; }
API change check for Azure.Security.KeyVault.Keys
API changes have been detected in Azure.Security.KeyVault.Keys. You can review API changes here
API changes
+ public string EntityId { get; }
API change check for Azure.Security.KeyVault.Secrets
API changes have been detected in Azure.Security.KeyVault.Secrets. You can review API changes here
API changes
+ public string EntityId { get; }
Waiting for 7.4-preview.1 to deploy so I can record tests and write assertions.
|
gharchive/pull-request
| 2022-05-05T22:33:12 |
2025-04-01T04:54:45.851354
|
{
"authors": [
"azure-sdk",
"heaths"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/28566",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1586729774
|
Use ArrayBackedPropertyBag for PipelineRequest
Benchmarks:
| Categories | count | Method | Mean | Error | StdDev | Ratio | Gen 0 | Allocated |
|------------------------------ |------ |------------------ |------------:|---------:|---------:|------:|-------:|----------:|
| CreateHttpRequestMessage | 2 | HttpRequestHeader | 656.1 ns | 2.51 ns | 2.35 ns | 1.00 | 0.0048 | 496 B |
| CreateHttpRequestMessage | 2 | ArrayBackedHeader | 470.6 ns | 0.74 ns | 0.58 ns | 0.72 | 0.0048 | 496 B |
| | | | | | | | | |
| CreateHttpRequestMessage | 3 | HttpRequestHeader | 727.0 ns | 2.69 ns | 2.52 ns | 1.00 | 0.0048 | 496 B |
| CreateHttpRequestMessage | 3 | ArrayBackedHeader | 608.6 ns | 2.65 ns | 2.35 ns | 0.84 | 0.0076 | 776 B |
| | | | | | | | | |
| CreateHttpRequestMessage | 8 | HttpRequestHeader | 1,287.3 ns | 4.97 ns | 4.40 ns | 1.00 | 0.0114 | 1,104 B |
| CreateHttpRequestMessage | 8 | ArrayBackedHeader | 1,180.9 ns | 14.22 ns | 13.30 ns | 0.92 | 0.0134 | 1,384 B |
| | | | | | | | | |
| CreateHttpRequestMessage | 16 | HttpRequestHeader | 2,490.9 ns | 6.50 ns | 5.43 ns | 1.00 | 0.0267 | 2,736 B |
| CreateHttpRequestMessage | 16 | ArrayBackedHeader | 2,504.4 ns | 6.96 ns | 5.81 ns | 1.01 | 0.0305 | 3,016 B |
| | | | | | | | | |
| CreateHttpRequestMessage | 32 | HttpRequestHeader | 4,374.9 ns | 12.06 ns | 10.07 ns | 1.00 | 0.0381 | 4,120 B |
| CreateHttpRequestMessage | 32 | ArrayBackedHeader | 5,410.5 ns | 21.57 ns | 19.12 ns | 1.24 | 0.0458 | 4,656 B |
| | | | | | | | | |
| CreateHttpRequestMessageTwice | 2 | HttpRequestHeader | 1,324.0 ns | 5.69 ns | 5.04 ns | 1.00 | 0.0114 | 1,256 B |
| CreateHttpRequestMessageTwice | 2 | ArrayBackedHeader | 950.8 ns | 4.91 ns | 4.35 ns | 0.72 | 0.0095 | 1,032 B |
| | | | | | | | | |
| CreateHttpRequestMessageTwice | 3 | HttpRequestHeader | 1,506.9 ns | 5.77 ns | 5.40 ns | 1.00 | 0.0134 | 1,336 B |
| CreateHttpRequestMessageTwice | 3 | ArrayBackedHeader | 1,137.6 ns | 0.91 ns | 0.80 ns | 0.76 | 0.0134 | 1,312 B |
| | | | | | | | | |
| CreateHttpRequestMessageTwice | 8 | HttpRequestHeader | 2,855.4 ns | 15.61 ns | 14.60 ns | 1.00 | 0.0305 | 2,888 B |
| CreateHttpRequestMessageTwice | 8 | ArrayBackedHeader | 2,187.8 ns | 2.24 ns | 1.75 ns | 0.77 | 0.0267 | 2,528 B |
| | | | | | | | | |
| CreateHttpRequestMessageTwice | 16 | HttpRequestHeader | 5,452.2 ns | 15.40 ns | 12.86 ns | 1.00 | 0.0687 | 6,792 B |
| CreateHttpRequestMessageTwice | 16 | ArrayBackedHeader | 4,419.1 ns | 29.19 ns | 27.31 ns | 0.81 | 0.0610 | 5,792 B |
| | | | | | | | | |
| CreateHttpRequestMessageTwice | 32 | HttpRequestHeader | 9,906.1 ns | 11.67 ns | 9.11 ns | 1.00 | 0.1068 | 10,840 B |
| CreateHttpRequestMessageTwice | 32 | ArrayBackedHeader | 8,834.6 ns | 20.36 ns | 15.90 ns | 0.89 | 0.0916 | 8,816 B |
| | | | | | | | | |
| MultipleReads | 8 | HttpRequestHeader | 2,976.5 ns | 11.14 ns | 9.30 ns | 1.00 | 0.0229 | 2,472 B |
| MultipleReads | 8 | ArrayBackedHeader | 1,447.9 ns | 2.72 ns | 2.55 ns | 0.49 | 0.0153 | 1,472 B |
| | | | | | | | | |
| MultipleReads | 16 | HttpRequestHeader | 5,971.1 ns | 19.34 ns | 17.14 ns | 1.00 | 0.0534 | 5,448 B |
| MultipleReads | 16 | ArrayBackedHeader | 3,195.2 ns | 24.38 ns | 22.80 ns | 0.54 | 0.0305 | 3,168 B |
| | | | | | | | | |
| MultipleReads | 32 | HttpRequestHeader | 11,225.7 ns | 40.46 ns | 35.87 ns | 1.00 | 0.0916 | 9,520 B |
| MultipleReads | 32 | ArrayBackedHeader | 7,953.4 ns | 37.65 ns | 35.22 ns | 0.71 | 0.0458 | 4,936 B |
The first scenario, CreateHttpRequestMessage, is the base one: we create a request and send it directly to the socket. With fewer than 16 headers, ArrayBackedPropertyBag is faster.
The second scenario, CreateHttpRequestMessageTwice, simulates the retry case. Here, even with 32 headers the benefit is about 10%.
The third scenario, MultipleReads, simulates the Azure.Storage case where headers are read repeatedly to create a signature.
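The idea behind the array-backed approach can be sketched in a few lines. This is a simplified model, not the actual Azure.Core ArrayBackedPropertyBag: keys and values sit in flat arrays, and lookups are a case-insensitive linear scan, which is cheaper than hashing at the small header counts shown above:

```csharp
using System;

// Simplified model of an array-backed header bag: flat parallel arrays plus a
// linear scan, which beats dictionary hashing for small header counts.
var keys = new string[8];
var values = new string[8];
int count = 0;

void Set(string key, string value)
{
    for (int i = 0; i < count; i++)
        if (string.Equals(keys[i], key, StringComparison.OrdinalIgnoreCase))
        { values[i] = value; return; }           // overwrite in place
    if (count == keys.Length)
    {                                            // grow geometrically when full
        Array.Resize(ref keys, count * 2);
        Array.Resize(ref values, count * 2);
    }
    keys[count] = key; values[count] = value; count++;
}

string Get(string key)
{
    for (int i = 0; i < count; i++)
        if (string.Equals(keys[i], key, StringComparison.OrdinalIgnoreCase))
            return values[i];
    return null;
}

Set("x-ms-client-request-id", "abc");
Set("Content-Type", "application/json");
Console.WriteLine(Get("content-type"));  // application/json
```

The linear scan also makes the MultipleReads scenario cheap: repeated reads touch a small contiguous array instead of walking a header collection.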
API change check
API changes are not detected in this pull request.
It would be good if this PR description started with a note explaining what this PR does, as opposed to starting with a benchmark table :-)
But I am not surprised this improves perf. The headers collection is pretty inefficient. We should send the BCL team this benchmark and the scenarios we have (changing header values) that make the BCL headers collection suboptimal. Maybe this can be fixed in the BCL so that we don't have to write code like in this PR.
This PR appears to have caused test flakiness in the .NET Live tests pipeline - Beginning on 2/17, we have seen intermittent test failures in our live batch tests - https://dev.azure.com/azure-sdk/internal/_build/results?buildId=2203247&view=ms.vss-test-web.build-test-results-tab&runId=39465973&resultId=100295&paneView=debug
Based on the timing, it appears it was caused by this commit.
@AlexanderSher @amnguye
|
gharchive/pull-request
| 2023-02-15T23:18:12 |
2025-04-01T04:54:45.857444
|
{
"authors": [
"AlexanderSher",
"KrzysztofCwalina",
"azure-sdk",
"seanmcc-msft"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/34195",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1735996754
|
Increment version for signalr releases
Increment package version after release of Azure.ResourceManager.SignalR
API change check
API changes are not detected in this pull request.
|
gharchive/pull-request
| 2023-06-01T10:31:41 |
2025-04-01T04:54:45.858798
|
{
"authors": [
"azure-sdk"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/36776",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
265360827
|
[KeyVault] [Do Not Merge] Adding ECC key support
Description
These changes introduce ECC key support to the Key Vault SDK. They correspond to the swagger update https://github.com/Azure/azure-rest-api-specs/pull/1724.
This checklist is used to make sure that common guidelines for a pull request are followed.
[x] I have read the contribution guidelines.
[x] The pull request does not introduce breaking changes.
General Guidelines
[ ] Title of the pull request is clear and informative.
[ ] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page.
Testing Guidelines
[ ] Pull request includes test coverage for the included changes.
SDK Generation Guidelines
[ ] If an SDK is being regenerated based on a new swagger spec, a link to the pull request containing these swagger spec changes has been included above.
[ ] The generate.cmd file for the SDK has been updated with the version of AutoRest, as well as the commitid of your swagger spec or link to the swagger spec, used to generate the code.
[ ] The *.csproj and AssemblyInfo.cs files have been updated with the new version of the SDK.
closing since this will be merged into the KvDev branch instead. https://github.com/Azure/azure-sdk-for-net/pull/3815
|
gharchive/pull-request
| 2017-10-13T17:21:02 |
2025-04-01T04:54:45.865072
|
{
"authors": [
"schaabs"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/3782",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1876465694
|
[AzureMonitorExporter] resolve AOT warnings
Related: #37734.
This PR mitigates AOT warnings in the Azure.Monitor.OpenTelemetry.Exporter library.
Azure.Monitor.OpenTelemetry.Exporter has 14 totals warnings
AzureMonitorExporterEventSource.cs
IL2026:RequiresUnreferencedCodeAttribute
Using member 'System.Diagnostics.Tracing.EventSource.WriteEvent(Int32,Object[])' which has 'RequiresUnreferencedCodeAttribute' can break functionality when trimming application code. EventSource will serialize the whole object graph. Trimmer will not safely handle this case because properties may be trimmed. This can be suppressed if the object is a primitive type.
The fix is to decorate these methods with [UnconditionalSuppressMessage("ReflectionAnalysis", "IL2026:RequiresUnreferencedCode", Justification = "Parameters to this method are primitive and are trimmer safe.")]
AzureMonitorStatsbeat.GetVmMetadataResponse & IngestionResponseHelper.GetErrorsFromResponse
IL2026:RequiresUnreferencedCodeAttribute
Using member 'System.Text.Json.JsonSerializer.Deserialize(String,JsonSerializerOptions)' which has 'RequiresUnreferencedCodeAttribute' can break functionality when trimming application code. JSON serialization and deserialization might require types that cannot be statically analyzed. Use the overload that takes a JsonTypeInfo or JsonSerializerContext, or make sure all of the required types are preserved.
IL3050:RequiresDynamicCodeAttribute
Using member 'System.Text.Json.JsonSerializer.Deserialize(String,JsonSerializerOptions)' which has 'RequiresDynamicCodeAttribute' can break functionality when AOT compiling. JSON serialization and deserialization might require types that cannot be statically analyzed and might need runtime code generation. Use System.Text.Json source generation for native AOT applications.
The fix is to use source generation.
Models\StackFrame
IL2026:RequiresUnreferencedCodeAttribute
Using member 'System.Diagnostics.StackFrame.GetMethod()' which has 'RequiresUnreferencedCodeAttribute' can break functionality when trimming application code. Metadata for the method might be incomplete or removed.
The fix is to decorate with UnconditionalSuppressMessage . GetMethod() may return null. In this case we will fall back to ToString().
LogsHelper.GetProblemId
IL2026:RequiresUnreferencedCodeAttribute
Using member 'System.Diagnostics.StackFrame.GetMethod()' which has 'RequiresUnreferencedCodeAttribute' can break functionality when trimming application code. Metadata for the method might be incomplete or removed.
The fix is to decorate with UnconditionalSuppressMessage . GetMethod() may return null. In this case we will fall back to ToString().
API change check
API changes are not detected in this pull request.
@vitek-karas, @Yun-Ting, @m-redding Please help with this review :)
|
gharchive/pull-request
| 2023-09-01T00:14:33 |
2025-04-01T04:54:45.873043
|
{
"authors": [
"TimothyMothra",
"azure-sdk"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/38459",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1891359488
|
Azure Deployment Manager is being decommissioned. Remove its NET SDK through this PR
Contributing to the Azure SDK
Please see our CONTRIBUTING.md if you are not familiar with contributing to this repository or have questions.
For specific information about pull request etiquette and best practices, see this section.
@ArthurMa1978 I remember there is a decommission process to follow?
@rohantagaru can you provide official announcement of this deprecation?
API change check
API changes are not detected in this pull request.
From @rohantagaru: this RP was removed from the CLI last year, https://github.com/Azure/azure-cli-extensions/pull/4653
|
gharchive/pull-request
| 2023-09-11T22:20:39 |
2025-04-01T04:54:45.876552
|
{
"authors": [
"ArthurMa1978",
"archerzz",
"azure-sdk",
"rohantagaru"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/38614",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
439357390
|
Add support for schema create or update for Swagger, WSDL and OpenApi
Fix based on REST Spec https://github.com/Azure/azure-rest-api-specs/pull/5824
@dsgouda the build failure is related to an EventHub test.
@weshaggard Please take a look at the failures.
@dsgouda I wanted to add an extension to one of the models for better usability. Give me an hour.
@dsgouda I re-queued the failing test leg but I don't believe it is related to the changes in this PR so if it fails again merge anyway.
@jsquire looks like another EventHubs test reliability failure
Failed Microsoft.Azure.EventHubs.Tests.ServiceFabricProcessor.OptionsTests.RuntimeInformationTest
Error Message:
Assert.True() Failure
Expected: True
Actual: False
Stack Trace:
at Microsoft.Azure.EventHubs.Tests.ServiceFabricProcessor.OptionsTests.RuntimeInformationTest() in D:\a\1\s\sdk\eventhub\Microsoft.Azure.EventHubs\tests\ServiceFabricProcessor\OptionsTests.cs:line 191
I added a note about this test in issue https://github.com/Azure/azure-sdk-for-net/issues/5995, and if we see it keep failing then I'll disable it.
@dsgouda I have pushed my changes. Feel free to merge it as soon as CI passes or fails with EventHub test failure.
@dsgouda can you merge this.
|
gharchive/pull-request
| 2019-05-01T22:47:20 |
2025-04-01T04:54:45.880192
|
{
"authors": [
"dsgouda",
"solankisamir",
"weshaggard"
],
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/6038",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
685039671
|
[Schema Registry] Cross language design review
Ongoing discussion:
Design: API
what should the method return?
- Schema:
- content
- SchemaProperties
- SchemaProperties
- id
- version
- ...
Design: API: SRAvroSerializer.serialize(..., schema):
what's the expected type of the parameter schema?
if string/bytes, then is it the SDK's duty to normalize -- remove whitespace from the input (\n, \t, etc.) -- or should the service handle it?
Laurent: good with bytes and string for p1
Design: Naming
response type(object)
SchemaId: in JS it's called SchemaIdResponse; should we consider a better/different name? what's the naming convention here
Schema: same to the question above
content/string/schema
Option: all return the same object type e.g.: SchemaProperties
parameter name
Design: SchemaProperties dict mixin support?
Impl: Encoding
big/small endian problem to id/format identifier
Shall we use struct.pack/unpack to construct payload?
Ask service team/other languages how they impl this
Impl: Dependency
which packages are required for user (dependency)?
Laurent: postpone aio implementation later, only do the sync avro serializer now
Others
generate schema from class/type? the input being an object, is it pythonic?
future discussion, not now, but protobuf probably need to support this
Eng: Doc
Doc auto generation, need to ask Scott
Samples and sample readme release to official ms website
API reference
Finished discussion:
Impl: Parsing in sr and avsr
should we remove all the space (regular expression "\s") in the schema string user passes into our sdk?
it's the service's duty
Design: Typing
serialization type: string vs enum vs both
class a(str, Enum)
auto register schema for SR Serializer?
data collected and moved into onenote page, will spawn separate issues for each task
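The struct.pack/unpack and endianness questions in the notes above can be sketched in Python. The 4-byte big-endian format identifier and the 32-byte schema ID length used here are illustrative assumptions, not the service's actual wire format:

```python
import struct

# Hypothetical format identifier value; the real value is service-defined.
FORMAT_ID = 0

def pack_payload(schema_id: bytes, body: bytes) -> bytes:
    # ">I" forces big-endian (network byte order) for the 4-byte identifier,
    # which sidesteps the big/little-endian ambiguity raised in the notes.
    return struct.pack(">I", FORMAT_ID) + schema_id + body

def unpack_payload(payload: bytes, id_len: int = 32):
    # Read the identifier back with the same big-endian format string.
    (fmt,) = struct.unpack_from(">I", payload, 0)
    schema_id = payload[4:4 + id_len]
    body = payload[4 + id_len:]
    return fmt, schema_id, body
```

The key point is that both sides must agree on the byte order; using an explicit `>` prefix in the format string makes that agreement visible in the code.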
|
gharchive/issue
| 2020-08-24T23:02:43 |
2025-04-01T04:54:45.894245
|
{
"authors": [
"yunhaoling"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/issues/13301",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
868202955
|
Chat Samples are not able to run in the pipeline
Package Name: azure-communication-chat
Describe the bug
We want to make the Python ACS samples run in the pipeline to add an extra layer of safety when we do our releases. We refactored our TNM, Identity and SMS samples so they can run in the pipeline; however, the chat samples are not able to work because of some special environment variables they need in order to run successfully.
We are seeing references to an AZURE_COMMUNICATION_SERVICE_ENDPOINT env variable in the samples. This env variable doesn't exist in the pipeline, so it should be removed from the samples or added to the key vault of the resource we use to test, to avoid any inconsistencies with the env variables the Chat Client needs in order to initialize.
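A minimal guard for the samples could look like the sketch below. The variable name comes from the issue itself; the fail-fast behavior and error message are assumptions for illustration:

```python
import os

def get_chat_endpoint() -> str:
    # The samples reference AZURE_COMMUNICATION_SERVICE_ENDPOINT; in the
    # pipeline this variable is absent, so fail fast with a clear message
    # instead of crashing later during client construction.
    endpoint = os.environ.get("AZURE_COMMUNICATION_SERVICE_ENDPOINT")
    if not endpoint:
        raise RuntimeError(
            "Set AZURE_COMMUNICATION_SERVICE_ENDPOINT or add it to the "
            "test resource's key vault before running the chat samples."
        )
    return endpoint
```

Failing early with an actionable message makes the missing-variable case obvious in pipeline logs rather than surfacing as an unrelated client error.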
@juancamilor
This is the PR I opened to add this feature https://github.com/Azure/azure-sdk-for-python/pull/18234
If you go to the pipeline logs you can see exactly where are the errors that need to be addressed.
@LuChen-Microsoft FYI
|
gharchive/issue
| 2021-04-26T21:05:41 |
2025-04-01T04:54:45.897537
|
{
"authors": [
"jbeauregardb",
"juancamilor"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/issues/18314",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1175833167
|
[Test Proxy] Remove custom default matcher setup in proxy_startup
Per https://github.com/Azure/azure-sdk-for-python/pull/23148, we now call set_custom_default_matcher within proxy_startup.py in order to preserve backwards compatibility and ignore headers that we now omit from recordings. Eventually, once recordings are free from these headers, we should remove this call and use the default matcher upon startup.
Note: at this point, we should also revert any set_custom_default_matcher calls that set bodiless matching to set_bodiless_matcher. The linked PR has details about this change as well.
This is tracked by https://github.com/Azure/azure-sdk-for-python/issues/34897.
|
gharchive/issue
| 2022-03-21T19:17:29 |
2025-04-01T04:54:45.900175
|
{
"authors": [
"mccoyp"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/issues/23592",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1231918217
|
Broken links in Azure Resources libraries for Python
The links to the packages are broken.
For example, clicking azure.mgmt.resources.features takes you to
https://docs.microsoft.com/en-us/python/api/azure.mgmt.resource.features
instead of
https://docs.microsoft.com/en-us/python/api/azure-mgmt-resource/azure.mgmt.resource.features
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: e19ddc50-3cc0-eded-49ed-e1f2552a421c
Version Independent ID: 391c4753-337c-8a9f-fa58-2c6fc95cb6df
Content: Azure Resources libraries for Python
Content Source: docs-ref-services/latest/azure.mgmt.resource.md
Service: resources
Product: azure
Technology: azure
GitHub Login: @lisawong19
Microsoft Alias: ramyar
Hi @scbedd could you help merge the fix PR or address proper person to merge it? Thanks!
@scbedd can you review this when you get a chance?
Github auto-closed the issue when I merged the PR. Re-opening until the change is actually visible on docs.ms.
The issue was fixed already.
|
gharchive/issue
| 2022-05-11T02:29:03 |
2025-04-01T04:54:45.906592
|
{
"authors": [
"msyyc",
"rguptar",
"scbedd"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/issues/24387",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
370655181
|
No module named 'azure.storage'
Hello, I have tried the instructions to install and reinstall the azure package.
However, I am still getting ModuleNotFoundError: No module named 'azure.storage' every time.
following are the results of: pip freeze
azure==4.0.0
azure-applicationinsights==0.1.0
azure-batch==4.1.3
azure-common==1.1.16
azure-cosmosdb-nspkg==2.0.2
azure-cosmosdb-table==1.0.5
azure-datalake-store==0.0.34
azure-eventgrid==1.2.0
azure-graphrbac==0.40.0
azure-keyvault==1.1.0
azure-loganalytics==0.1.0
azure-mgmt==4.0.0
azure-mgmt-advisor==1.0.1
azure-mgmt-applicationinsights==0.1.1
azure-mgmt-authorization==0.50.0
azure-mgmt-batch==5.0.1
azure-mgmt-batchai==2.0.0
azure-mgmt-billing==0.2.0
azure-mgmt-cdn==3.0.0
azure-mgmt-cognitiveservices==3.0.0
azure-mgmt-commerce==1.0.1
azure-mgmt-compute==4.3.1
azure-mgmt-consumption==2.0.0
azure-mgmt-containerinstance==1.2.0
azure-mgmt-containerregistry==2.2.0
azure-mgmt-containerservice==4.2.2
azure-mgmt-cosmosdb==0.4.1
azure-mgmt-datafactory==0.6.0
azure-mgmt-datalake-analytics==0.6.0
azure-mgmt-datalake-nspkg==2.0.0
azure-mgmt-datalake-store==0.5.0
azure-mgmt-datamigration==1.0.0
azure-mgmt-devspaces==0.1.0
azure-mgmt-devtestlabs==2.2.0
azure-mgmt-dns==2.1.0
azure-mgmt-eventgrid==1.0.0
azure-mgmt-eventhub==2.1.0
azure-mgmt-hanaonazure==0.1.1
azure-mgmt-iotcentral==0.1.0
azure-mgmt-iothub==0.5.0
azure-mgmt-iothubprovisioningservices==0.2.0
azure-mgmt-keyvault==1.1.0
azure-mgmt-loganalytics==0.2.0
azure-mgmt-logic==3.0.0
azure-mgmt-machinelearningcompute==0.4.1
azure-mgmt-managementgroups==0.1.0
azure-mgmt-managementpartner==0.1.0
azure-mgmt-maps==0.1.0
azure-mgmt-marketplaceordering==0.1.0
azure-mgmt-media==1.0.0
azure-mgmt-monitor==0.5.2
azure-mgmt-msi==0.2.0
azure-mgmt-network==2.2.1
azure-mgmt-notificationhubs==2.0.0
azure-mgmt-nspkg==3.0.2
azure-mgmt-policyinsights==0.1.0
azure-mgmt-powerbiembedded==2.0.0
azure-mgmt-rdbms==1.4.0
azure-mgmt-recoveryservices==0.3.0
azure-mgmt-recoveryservicesbackup==0.3.0
azure-mgmt-redis==5.0.0
azure-mgmt-relay==0.1.0
azure-mgmt-reservations==0.2.1
azure-mgmt-resource==2.0.0
azure-mgmt-scheduler==2.0.0
azure-mgmt-search==2.0.0
azure-mgmt-servicebus==0.5.2
azure-mgmt-servicefabric==0.2.0
azure-mgmt-signalr==0.1.1
azure-mgmt-sql==0.9.1
azure-mgmt-storage==2.0.0
azure-mgmt-subscription==0.2.0
azure-mgmt-trafficmanager==0.50.0
azure-mgmt-web==0.35.0
azure-nspkg==3.0.2
azure-servicebus==0.21.1
azure-servicefabric==6.3.0.0
azure-servicemanagement-legacy==0.20.6
azure-storage==0.33.0
azure-storage-blob==1.3.1
azure-storage-common==1.3.0
azure-storage-file==1.3.1
azure-storage-nspkg==3.0.0
azure-storage-queue==1.3.0
Could anyone help on this?
Thanks a lot!
Hi @irisava
Could you confirm the version of Python, version of pip, platform (Windows, Ubuntu, etc.), and the exact command used to install.
Thank you
Hello @lmazuel
Thank you for the reply!
Following is the info of my working environment:
Python 3.6.5
pip 18.1 from ...\appdata\local\programs\python\python36-32\lib\site-packages\pip (python 3.6)
Windows 10
Last command used in cmd: pip install azure-storage
azure-storage and azure-storage-blob/file/queue are incompatible and cannot work together. azure-storage is actually the deprecated old version of the three packages azure-storage-blob/file/queue
Please just use azure-storage-blob/file/queue, or just use azure-storage, but not both. azure-storage-blob/file/queue is recommended if you don't have an existing code base.
Thank you,
Closing for inactivity, since I believe I addressed the initial question. If this is still a problem, feel to open a new issue in the storage repo:
https://github.com/Azure/azure-storage-python
Thanks,
|
gharchive/issue
| 2018-10-16T14:56:55 |
2025-04-01T04:54:45.925921
|
{
"authors": [
"irisava",
"lmazuel"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/issues/3623",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2457836376
|
which program will work? (Program which can upload a text file to vector index of Azure AI search )
I'm looking for a basic Python program that will upload a text file to a vector index of Azure AI Search, but
everything I try gives me an error, and I can't find the required version number of the azure-search-documents package or a working Python program anywhere.
I guess this is probably because development moves so quickly that the published samples haven't been verified against the current packages; which program will work?
GitHub
Azure/azure-search-vector-samples
Azure/azure-sdk-for-python
Hi @TaisukeIto! Sorry to hear about your experience with the AI Search library - it's a rapidly growing service so sometimes the API changes on newer versions and samples become outdated fast.
One of our most popular samples uses the AI Search library and should have accurate behavior - here's what I found for a quick search of the SearchClient.upload_files() method: link. It looks like this sample is on 11.6.0b1.
The API reference is also here if you're curious about the details.
If you're still running into errors, please post the specific error you're running into.
|
gharchive/issue
| 2024-08-09T12:42:07 |
2025-04-01T04:54:45.929675
|
{
"authors": [
"TaisukeIto",
"rohit-ganguly"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/issues/36833",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2740255948
|
Hey! I've been using Cash App to send money and spend using the Cash App Card. Try it using my code and we’ll each get $5. GQ4N8C8
https://cash.app/app/GQ4N8C8
Hello
|
gharchive/issue
| 2024-12-15T03:33:52 |
2025-04-01T04:54:45.931167
|
{
"authors": [
"carlos2martinize"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/issues/38885",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
246149965
|
Initial Container Instance
FYI @yolo3301 @derekbekoe
Didn't look at the diff in detail yet at the time of the PR, but naming and packaging should be OK.
Codecov Report
Merging #1330 into master will increase coverage by <.01%.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #1330 +/- ##
==========================================
+ Coverage 56.04% 56.04% +<.01%
==========================================
Files 2691 2692 +1
Lines 71246 71247 +1
==========================================
+ Hits 39932 39933 +1
Misses 31314 31314
Impacted Files | Coverage Δ
...zure-mgmt-containerinstance/azure/mgmt/__init__.py | 100% <100%> (ø)
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f477734...3bf348f. Read the comment docs.
|
gharchive/pull-request
| 2017-07-27T19:50:50 |
2025-04-01T04:54:45.937870
|
{
"authors": [
"codecov-io",
"lmazuel"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/1330",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
800839653
|
Revert "Communication identity api redesign (#16420)"
This reverts commit 30b917b2b377e7fadbd66c478209b3ab7427ca78.
/azp run python - communication - tests
|
gharchive/pull-request
| 2021-02-04T01:14:49 |
2025-04-01T04:54:45.939221
|
{
"authors": [
"lsundaralingam"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/16511",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1365665318
|
[Videoanalyzer] Fixed cspell typos in videoanalyzer
Description
Fix https://github.com/Azure/azure-sdk-for-python/issues/22681
All SDK Contribution checklist:
[X] The pull request does not introduce [breaking changes]
[X] CHANGELOG is updated for new features, bug fixes or other significant changes.
[X] I have read the contribution guidelines.
General Guidelines and Best Practices
[X] Title of the pull request is clear and informative.
[X] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page.
Testing Guidelines
[X] Pull request includes test coverage for the included changes.
CI check here: https://dev.azure.com/azure-sdk/public/_build/results?buildId=1847868&view=results
I'll merge as soon as it's green. Thanks again! 😸
/check-enforcer override
|
gharchive/pull-request
| 2022-09-08T07:14:59 |
2025-04-01T04:54:45.944122
|
{
"authors": [
"kristapratico",
"syso-jxx"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/26087",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1432275678
|
Added Sampler Factory
Description
Please add an informative description that covers the changes made by the pull request and link all relevant issues.
If an SDK is being regenerated based on a new swagger spec, a link to the pull request containing these swagger spec changes has been included above.
All SDK Contribution checklist:
[ ] The pull request does not introduce [breaking changes]
[ ] CHANGELOG is updated for new features, bug fixes or other significant changes.
[ ] I have read the contribution guidelines.
General Guidelines and Best Practices
[ ] Title of the pull request is clear and informative.
[ ] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page.
Testing Guidelines
[ ] Pull request includes test coverage for the included changes.
Make sure to add a changelog entry
|
gharchive/pull-request
| 2022-11-02T00:19:10 |
2025-04-01T04:54:45.948525
|
{
"authors": [
"jeremydvoss",
"lzchen"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/27236",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1552371136
|
[AutoRelease] t2-cognitiveservices-2023-01-23-44734(can only be merged by SDK owner)
https://github.com/Azure/sdk-release-request/issues/3679
Live test success
https://dev.azure.com/azure-sdk/internal/_build?definitionId=976
BuildTargetingString
azure-mgmt-cognitiveservices
Skip.CreateApiReview
true
issue link:https://github.com/Azure/sdk-release-request/issues/3679
|
gharchive/pull-request
| 2023-01-23T01:07:56 |
2025-04-01T04:54:45.950681
|
{
"authors": [
"azure-sdk"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/28445",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1629680779
|
Black 22.3.0 Azure-Core
Nothing fancy, just use the latest black, so the Typing PRs don't fail on black, while not making those PRs full of uninteresting lines.
OK, so we don't need this PR; it turns out I didn't notice we have a black config file, which is why I had so many diffs.
|
gharchive/pull-request
| 2023-03-17T17:27:29 |
2025-04-01T04:54:45.951879
|
{
"authors": [
"lmazuel"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/29436",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
344962272
|
[AutoPR keyvault/resource-manager] KV multiapi Readme
Created to sync https://github.com/Azure/azure-rest-api-specs/pull/3416
This PR has been merged into https://github.com/Azure/azure-sdk-for-python/pull/2927
|
gharchive/pull-request
| 2018-07-26T18:47:29 |
2025-04-01T04:54:45.953302
|
{
"authors": [
"AutorestCI"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/3014",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1803689117
|
[ml] Update docstrings to meet guidelines and fix example paths
…ines
Description
Please add an informative description that covers the changes made by the pull request and link all relevant issues.
If an SDK is being regenerated based on a new swagger spec, a link to the pull request containing these swagger spec changes has been included above.
All SDK Contribution checklist:
[ ] The pull request does not introduce [breaking changes]
[ ] CHANGELOG is updated for new features, bug fixes or other significant changes.
[ ] I have read the contribution guidelines.
General Guidelines and Best Practices
[ ] Title of the pull request is clear and informative.
[ ] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page.
Testing Guidelines
[ ] Pull request includes test coverage for the included changes.
API change check
APIView has identified API level changes in this PR and created following API reviews.
azure-ai-ml
|
gharchive/pull-request
| 2023-07-13T20:06:20 |
2025-04-01T04:54:45.958257
|
{
"authors": [
"azure-sdk",
"diondrapeck"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/31137",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1836973254
|
vnext issue creator script
Resolves https://github.com/Azure/azure-sdk-for-python/issues/29344
We pin the versions of type checkers/linters in this repo so that we don't see any surprises in CI when new versions are released. To keep up with the latest, we periodically bump the pinned version of these checkers and then go through the process of getting all libraries clean for that version. We would like to improve this process by 1) giving early notice of when a version bump will happen and what errors need to be fixed in a given library and 2) standardize when we do version bumps (e.g., quarterly, the Monday after release week).
The idea behind this PR is to give library owners an early heads up of what checks are failing with the next version of the type checkers/linters and provide a deadline / merge date for when that version will be merged. It adds a script which will create GH issues if a client library is failing a vnext check for pylint, mypy, or pyright and will run as part of the test-weekly pipeline. If a library fails a vnext check, the script will either create an issue (if one doesn't exist) or update the issue with the latest dates/links to builds.
Example issue: https://github.com/Azure/azure-sdk-for-python/issues/31463
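The create-or-update flow described above (create a GH issue if one doesn't exist, otherwise update it) can be sketched as a pure decision helper. The title format and dict shape here are hypothetical, not the actual script's:

```python
def plan_issue_action(existing_issues, library, check):
    """Decide whether to create a new vnext issue or update an existing one.

    existing_issues: iterable of dicts with a "title" key (hypothetical shape
    mirroring the GitHub issues API response).
    Returns ("update", issue) if an open issue already tracks this
    library/check pair, otherwise ("create", new_title).
    """
    title = f"[vnext] {check} failures in {library}"
    for issue in existing_issues:
        if issue.get("title") == title:
            # An issue already exists: refresh its dates/build links.
            return ("update", issue)
    # No matching issue: open a new one with the canonical title.
    return ("create", title)
```

Keeping the decision logic separate from the GitHub API calls makes it easy to unit-test without network access, which matters for a script that runs in a weekly pipeline.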
@kristapratico No complaints about the code of this PR at all!
From a strategery point of view, I'm trying to get our common code under azure-sdk-tools instead of adding new scripts/code to tox folder.
Reason being:
Makes it super easy to re-use the code here that submits a new issue
Has a place to run tests out of if you add them
Yes you could put a test file right alongside this under tox/, but that would get awkward pretty quick 😂
That being said, I'm not going to block on that.
@scbedd Ah thanks for pointing that out, I meant to mention that I wasn't sure if this was a great place for the script to live. Were you thinking under tools/azure-sdk-tools/ci_tools? I'm happy to move it in this PR.
@kristapratico absolutely have some suggestions!
tools/azure-sdk-tools/ci_tools/gh <-- access through ci_tools.gh
or
tools/azure-sdk-tools/gh_tools/ <-- would need to create a new top level, so it would probably just be gh_tools or whatever you come up with.
Both work. Arguably creating an issue isn't tightly bound to CI, so there are arguments for making them their own namespace.
|
gharchive/pull-request
| 2023-08-04T15:40:50 |
2025-04-01T04:54:45.964271
|
{
"authors": [
"kristapratico",
"scbedd"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/31474",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2288619741
|
[Key Vault] Add support for pre-backup and pre-restore operations
Description
Resolves https://github.com/Azure/azure-sdk-for-python/issues/35252. This adds client-facing support for pre-backup and pre-restore methods, for checking whether a full backup or full restore operation can be performed.
As a draft, this PR doesn't include tests because the feature is unavailable by default in the service. Once the feature can be easily enabled, tests and samples will be added.
All SDK Contribution checklist:
[x] The pull request does not introduce [breaking changes]
[x] CHANGELOG is updated for new features, bug fixes or other significant changes.
[x] I have read the contribution guidelines.
General Guidelines and Best Practices
[x] Title of the pull request is clear and informative.
[x] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page.
Testing Guidelines
[ ] Pull request includes test coverage for the included changes.
API change check
APIView has identified API level changes in this PR and created following API reviews.
azure-keyvault-administration
|
gharchive/pull-request
| 2024-05-09T23:24:45 |
2025-04-01T04:54:45.970335
|
{
"authors": [
"azure-sdk",
"mccoyp"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/35569",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2589980334
|
fixed warning for aio and get call function tools for stream within t…
…he SDK
Description
Get rid of the warning in the aio AgentOperation. To do that, I copied the AgentsOperation from sync to async/aio and modified it accordingly.
Call functions within the SDK for streaming, instead of asking developers to call them in their own code. I will do the same for non-streaming.
Please add an informative description that covers the changes made by the pull request and link all relevant issues.
If an SDK is being regenerated based on a new swagger spec, a link to the pull request containing these swagger spec changes has been included above.
All SDK Contribution checklist:
[ ] The pull request does not introduce [breaking changes]
[ ] CHANGELOG is updated for new features, bug fixes or other significant changes.
[ ] I have read the contribution guidelines.
General Guidelines and Best Practices
[ ] Title of the pull request is clear and informative.
[ ] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page.
Testing Guidelines
[ ] Pull request includes test coverage for the included changes.
Give some time to review :)
|
gharchive/pull-request
| 2024-10-15T21:34:24 |
2025-04-01T04:54:45.975243
|
{
"authors": [
"howieleung",
"jhakulin"
],
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/37913",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1779134327
|
Management namespace approval improvements for new UI
Today the management namespace approval is a manual process. Teams have to create a GitHub issue and follow the guidance on this wiki page to initiate the process. https://dev.azure.com/azure-sdk/internal/_wiki/wikis/internal.wiki/821/Naming-for-new-initial-management-or-client-libraries-(new-SDKs)
Questions:
Should a dedicated team of architects be responsible for approving the management plane namespaces?
Should the archboard use APIView to approve the namespaces? This is how it is done for data plane.
Are teams blocked from releasing new initial SDKs if they do not have approval for namespace for management plane? This is implemented for data plane.
If we are going to continue with the current process, then we should create the GitHub issue for the user in the Release Planner. The user can enter the suggested names of the namespaces and we have all of the other information needed to create the GitHub issue for them using the template - https://github.com/Azure/azure-sdk-pr/issues/new?assignees=kyle-patterson%2C+ronniegeraghty&labels=architecture%2C+board-review%2C+mgmt-namespace-review&projects=&template=adp_mgmt_namespace_review.md&title=Board+Review%3A+Management+Plane+Namespace+Review+<client+library+name>
All of the questions have been covered either in docs or are already implemented in the SDK release app.
The only remaining one is
If we are going to continue with the current process, then we should create the GitHub issue for the user in the Release Planner. The user can enter the suggested names of the namespaces and we have all of the other information needed to create the GitHub issue for them using the template - https://github.com/Azure/azure-sdk-pr/issues/new?assignees=kyle-patterson%2C+ronniegeraghty&labels=architecture%2C+board-review%2C+mgmt-namespace-review&projects=&template=adp_mgmt_namespace_review.md&title=Board+Review%3A+Management+Plane+Namespace+Review+<client+library+name>
which will be covered by https://github.com/Azure/azure-sdk-tools/issues/4601
|
gharchive/issue
| 2023-06-28T14:52:03 |
2025-04-01T04:54:45.982797
|
{
"authors": [
"ladonnaq",
"maririos"
],
"repo": "Azure/azure-sdk-tools",
"url": "https://github.com/Azure/azure-sdk-tools/issues/6431",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1505118097
|
Board Review: Event Grid System Events
Thank you for submitting this review request. Thorough review of your client library ensures that your APIs are consistent with the guidelines and the consumers of your client library have a consistently good experience when using Azure.
The Architecture Board reviews Track 2 libraries only. If your library does not meet this requirement, please reach out to Architecture Board before creating the issue.
Please reference our review process guidelines to understand what is being asked for in the issue template.
To ensure consistency, all Tier-1 languages (C#, TypeScript, Java, Python) will generally be reviewed together. In expansive libraries, we will pair dynamic languages (Python, TypeScript) together, and strongly typed languages (C#, Java) together in separate meetings.
For Tier-2 languages (C, C++, Go, Android, iOS), the review will be on an as-needed basis.
Before submitting, ensure you adjust the title of the issue appropriately.
Note that the required material must be included before a meeting can be scheduled.
Contacts and Timeline
Responsible service team: API Management, DataBox
Main contacts: @JoshLove-msft
Expected code complete date: 1/6/23
Expected release date: 1/13/23
About the Service
Link to the service REST APIs:
https://github.com/Azure/azure-rest-api-specs/pull/21771
https://github.com/Azure/azure-rest-api-specs/pull/21945
.NET
APIView Link: To be added
Java
APIView Link: To be added
Python
APIView Link: To be added
TypeScript
APIView Link: To be added
Scheduled for Jan 12th, from 2:05PM - 4PM PST.
As per email, I cannot attend this meeting since I'm based in Belgium.
Can we move this to 10 PM CET / 1AM PT please? I'm OK to stay up for the meeting then.
Even I can't attend this meeting since I'm based in India.
Can we move it to 12 AM IST / 10:30 AM PST ?
@tomkerkhove & @aakash049 the review session time can be moved up, just let me get an agreed upon time. Would 10:05AM - 12PM PST work for you both?
I can check but allocating 2h in the evening is a bit much as it feels like the discussion will not take that much given it's 2 different topics. Can we split them or have some indication?
I can potentially do 10:30-11:30 but I think this is still too late for @aakash049 who is in India.
From the context in the issue description it looks like the two topics of this meeting will be 11 new events for API Management and an Event Grid system topic for DataBox. @aakash049 & @tomkerkhove, can you let me know which part you're interested in, and I'll add info to the review session stating which topic should go first and second. Since @aakash049 is in India and it will be latest for them, we can arrange it, so their topic is covered from 10-11AM PST and @tomkerkhove's topic is covered from 11AM-12PM PST. Could that work for you both?
It's not ideal because that is my 8 PM but I'll make it work :) I'm joining for the 11 new events for API Management
I'll be discussing about Event Grid System topic for Databox, 10-11 AM PST works for me.
Okay, thanks for being flexible. I'll speak with the architects now to confirm the time. There is a chance they could do 9AM-11AM, but the normal morning time slot is 10-12. I'll keep you posted.
Scheduled for 1/12 from 9:05AM - 11AM PST
Thanks! I'll join at 10 AM PST to represent APIM
@ronniegeraghty is this scheduled for 10-12 or 9-11? It sounds like both service reps are available from 10 onward.
Adding a note here to reflect the update in the meeting invite.
The review session is scheduled to take place between 9:05AM - 11AM PST.
@aakash049 will be going first for the DataBox related topic from 9:05AM - 9:35AM PST.
Then, @tomkerkhove will be going for the API Management related topic from 9:35 - 10:05AM PST
Recording (MS INTERNAL ONLY)
|
gharchive/issue
| 2022-12-20T19:10:23 |
2025-04-01T04:54:45.995185
|
{
"authors": [
"JoshLove-msft",
"aakash049",
"ronniegeraghty",
"tg-msft",
"tomkerkhove"
],
"repo": "Azure/azure-sdk",
"url": "https://github.com/Azure/azure-sdk/issues/5282",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
516298485
|
Documentation - Indexes Should Account for Renamed Packages
Right now, we generate github.io landing pages purely based off of what is present within the repo.
We know that packages will be renamed (most recent example being azure-storage-fileshare), so we may need to account for this. Depending on the level of investment to github.io docs, this may be important.
The most straightforward way I can think of is to always include locations in the index for packages that we've published to blob storage before.
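A minimal sketch of that idea (hypothetical function and data shapes, not the actual doc tooling): build the index from the union of the packages currently in the repo and a persisted record of everything previously published, so renamed packages keep their entries.

```python
def build_index_entries(current_packages, published_history):
    """Union of packages present in the repo and packages ever published.

    Renamed packages (e.g. azure-storage-fileshare's predecessor) drop out of
    `current_packages` but remain in `published_history`, so their
    landing-page entries are preserved.
    """
    entries = dict(published_history)   # name -> doc location, historical first
    entries.update(current_packages)    # the repo wins for packages still present
    return sorted(entries.items())

current = {"azure-storage-fileshare": "docs/fileshare"}
history = {"azure-storage-file": "docs/file", "azure-storage-fileshare": "docs/old"}
# -> [('azure-storage-file', 'docs/file'), ('azure-storage-fileshare', 'docs/fileshare')]
print(build_index_entries(current, history))
```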
CC @kaerm
This is super rare and we don't lose any history/data related to docs without this. Cutting this feature since we'll never prioritize it enough to do it. The amount of work needed doesn't match the benefit given how rare this will occur.
|
gharchive/issue
| 2019-11-01T19:13:11 |
2025-04-01T04:54:45.998265
|
{
"authors": [
"kurtzeborn",
"scbedd"
],
"repo": "Azure/azure-sdk",
"url": "https://github.com/Azure/azure-sdk/issues/761",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
599796638
|
Add blog for JS abort controller
Let me know any comments. Edits allowed from maintainers, so feel free to fix any minor issues you like 😀
All feedback addressed, let me know if there are more suggestions!
Was the second file "how q" added by accident? Might want to remove it from the PR.
that extra file is very strange! I'll slice it out of the history.
Well clearly I messed up my filter-branch.
Recreating PR.
@jongio Thanks so much for the feedback, I addressed most of this feedback over in #1240.
I'm open to changing the intro paragraph to a bullet list and also interested in how to improve the point about separation of concerns between signal and controller.
|
gharchive/pull-request
| 2020-04-14T19:10:22 |
2025-04-01T04:54:46.000621
|
{
"authors": [
"adrianhall",
"bterlson"
],
"repo": "Azure/azure-sdk",
"url": "https://github.com/Azure/azure-sdk/pull/1226",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1917725361
|
Need -whatif flag for Azcopy Sync tool
What command did you run?
Note: Please remove the SAS to avoid exposing your credentials. If you cannot remember the exact command, please retrieve it from the beginning of the log file.
NA - feature request
What problem was encountered?
Please add a -whatif flag to the AzCopy sync tool to estimate the exact outcome of the command. This is especially useful when dealing with a large number of files and using the --delete-destination and --recursive flags.
How can we reproduce the problem in the simplest way?
NA
Have you found a mitigation/solution?
No
This sounds like our --dry-run flag, though it doesn't match this exact functionality at this point.
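For reference, a hedged sketch of how --dry-run combines with sync today (the destination URL is a placeholder); it previews the planned copy/delete actions without executing them, even if it may not match the requested -whatif semantics exactly:

```shell
# Preview what the sync would do, including deletions, without executing it.
# "<container-URL-with-SAS>" is a placeholder for your destination URL + SAS token.
azcopy sync "/data/local-dir" "<container-URL-with-SAS>" \
    --recursive \
    --delete-destination=true \
    --dry-run
```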
|
gharchive/issue
| 2023-09-28T14:54:02 |
2025-04-01T04:54:46.005384
|
{
"authors": [
"adreed-msft",
"dinu99"
],
"repo": "Azure/azure-storage-azcopy",
"url": "https://github.com/Azure/azure-storage-azcopy/issues/2389",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
245890530
|
Allow directly loading a WebJobs DLL
Resolves https://github.com/Azure/azure-webjobs-sdk-script/issues/1508
Allow Functions to directly load and consume a WebJobs DLL that may come from precompiled tooling.
The function.json has a new "configurationSource" : "attributes" flag in it.
This builds on several previous fixes:
This skips the InvokerBase and ILGeneration path. This builds on some previous changes to move non-invocation responsibility (logging, metrics, return values, etc) out of the invoker path.
Recent fix to billing: https://github.com/Azure/azure-webjobs-sdk-script/issues/578
It builds on [FunctionName] and Return value support from the SDK.
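As an illustrative sketch (the binding and assembly names below are hypothetical, not from this PR), a precompiled function's function.json with the new flag would look roughly like:

```json
{
  "configurationSource": "attributes",
  "scriptFile": "bin\\MyPrecompiledFunctions.dll",
  "entryPoint": "MyNamespace.MyFunctions.Run",
  "bindings": []
}
```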
Can we add something to the docs on this change. Seems pretty significant.
|
gharchive/pull-request
| 2017-07-27T00:12:06 |
2025-04-01T04:54:46.015983
|
{
"authors": [
"MikeStall",
"dallancarr"
],
"repo": "Azure/azure-webjobs-sdk-script",
"url": "https://github.com/Azure/azure-webjobs-sdk-script/pull/1717",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2354762447
|
[Feature Request]: add module for ADF Linked Services
Description
Currently there is no module for the ADF Linked Services resource: https://docs.microsoft.com/en-us/azure/templates/microsoft.datafactory/factories/linkedservices?pivots=deployment-language-bicep
Hey @clintgrove,
I just migrated this issue over from CARML. Please take a look and triage if still relevant :)
I am working on this and expect to raise a PR today or tomorrow, 25th June 2024.
|
gharchive/issue
| 2022-09-06T06:36:43 |
2025-04-01T04:54:46.029954
|
{
"authors": [
"AlexanderSehr",
"clintgrove",
"tyconsulting"
],
"repo": "Azure/bicep-registry-modules",
"url": "https://github.com/Azure/bicep-registry-modules/issues/2414",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2621667024
|
[AVM Module Issue]: Virtual Network Gateway - WAF and APRL alignment
Check for previous/existing GitHub issues
[x] I have checked for previous/existing GitHub issues
Issue Type?
Feature Request
Module Name
avm/res/network/virtual-network-gateway
(Optional) Module Version
No response
Description
We've been asked to ensure module defaults alignment for WAF and APRL for several modules. For the Virtual Network Gateway module can we please update the following default.
For Public IPs used by the gateway, set the zone configuration to all zones [1, 2, 3] as the default value.
Superseding #3247
(Optional) Correlation Id
No response
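A sketch of the requested default on the gateway's Public IP (the resource name, API version, and location parameter are illustrative, not from this module):

```bicep
resource gatewayPip 'Microsoft.Network/publicIPAddresses@2023-09-01' = {
  name: 'pip-vgw'
  location: location
  sku: {
    name: 'Standard'
  }
  // WAF/APRL alignment: default the Public IP to all availability zones.
  zones: [
    '1'
    '2'
    '3'
  ]
  properties: {
    publicIPAllocationMethod: 'Static'
  }
}
```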
Hey @fabmas,
Please triage this issue when you get the chance 🙂
|
gharchive/issue
| 2024-10-29T15:47:45 |
2025-04-01T04:54:46.033520
|
{
"authors": [
"AlexanderSehr",
"jtracey93"
],
"repo": "Azure/bicep-registry-modules",
"url": "https://github.com/Azure/bicep-registry-modules/issues/3661",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1192354947
|
AzureML: Datastore created with no subscription_id and resource_group
Bicep version
Bicep CLI version 0.4.1318 (ee0d808f35)
Describe the bug
Deploying a datastore through Bicep results in a datastore without subscription_id and resource_group set in the Azure ML workspace. The workspace works correctly but doesn't have the direct link to the blob storage.
To Reproduce
Simply create a datastore resource with:
resource datastore 'Microsoft.MachineLearningServices/workspaces/datastores@2021-03-01-preview'
Can you share the full code sample that you deployed?
@stan-sz - do you happen to know anything about this one?
The code that I deployed is something like the following code:
resource datastore 'Microsoft.MachineLearningServices/workspaces/datastores@2021-03-01-preview' = {
name: '${workspace_name}/dstr_preproc'
properties: {
contents: {
contentsType: 'AzureBlob'
accountName: ext_storage_reference.name
containerName: 'bscont-preproc'
credentials: {
credentialsType: 'AccountKey'
secrets: {
key: listKeys(ext_storage_reference.id, '2019-06-01').keys[0].value
secretsType: 'AccountKey'
}
}
endpoint: environment().suffixes.storage // https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/bicep-functions-deployment
protocol: 'https'
}
}
}
The issue is still there also with the stable version of 2022-05-01
resource pipeDatastore 'Microsoft.MachineLearningServices/workspaces/datastores@2022-05-01' = {
name: '${workspace_name}/dstr_preproc'
properties: {
datastoreType: 'AzureBlob'
accountName: ext_storage_reference_tmp.name
containerName: 'bscont--preproc'
credentials: {
credentialsType: 'AccountKey'
secrets: {
key: listKeys(ext_storage_reference_tmp.id, '2019-06-01').keys[0].value
secretsType: 'AccountKey'
}
}
endpoint: environment().suffixes.storage // https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/bicep-functions-deployment
protocol: 'https'
}
}
Using the latest API version 2022-06-01-preview everything works, since we can now specify subscriptionId and resourceGroup:
resource tmpPipeDatastore 'Microsoft.MachineLearningServices/workspaces/datastores@2022-06-01-preview' = {
name: '${workspace_name}/dstr_preproc'
properties: {
datastoreType: 'AzureBlob'
accountName: ext_storage_reference_tmp.name
containerName: 'bscont-preproc'
subscriptionId: env.subcription_id
resourceGroup: rg_name
credentials: {
credentialsType: 'AccountKey'
secrets: {
key: listKeys(ext_storage_reference_tmp.id, '2019-06-01').keys[0].value
secretsType: 'AccountKey'
}
}
endpoint: environment().suffixes.storage // https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/bicep-functions-deployment
protocol: 'https'
}
}
The links are created correctly, but they end up in a strange "not found" error.
Here, for example, I clicked on the container link.
This is an issue with the Azure ML Resource Provider. Can you open an Azure support case, so this can be routed to the Azure ML team?
|
gharchive/issue
| 2022-04-04T21:55:09 |
2025-04-01T04:54:46.040313
|
{
"authors": [
"VELCpro",
"alex-frankel"
],
"repo": "Azure/bicep",
"url": "https://github.com/Azure/bicep/issues/6407",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1528680893
|
Azure Spring Apps / App Deployment : API missing to get relativePath
Bicep version
az bicep version
Bicep CLI version 0.11.1 (030248df55)
Describe the bug
I have a Bicep snippet to create an Azure Spring Apps / App Deployment:
// https://learn.microsoft.com/en-us/azure/templates/microsoft.appplatform/2022-11-01-preview/spring/apps/deployments?pivots=deployment-language-bicep#usersourceinfo-objects
resource adminserverappdeployment 'Microsoft.AppPlatform/Spring/apps/deployments@2022-11-01-preview' = {
name: 'default'
parent: adminserverapp
sku: {
name: azureSpringAppsSkuName
}
….
source: {
version: deploymentVersion
type: 'Jar' // Jar, Container or Source: https://learn.microsoft.com/en-us/azure/templates/microsoft.appplatform/2022-11-01-preview/spring/apps/deployments?pivots=deployment-language-bicep#usersourceinfo
jvmOptions: '-Xms512m -Xmx1024m -Dspring.profiles.active=mysql,key-vault,cloud'
// https://learn.microsoft.com/en-us/rest/api/azurespringapps/apps/get-resource-upload-url?tabs=HTTP#code-try-0
// should be a link to a BLOB storage
relativePath: 'https://stasapetcliasa.blob.core.windows.net/petcliasa-blob/asa-spring-petclinic-admin-server-2.6.6.jar'
runtimeVersion: 'Java_11'
}
}
}
There is this API whose result provides the relativePath field, but how do we get that result in Bicep?
Without this value, it looks like there is no way to create a Deployment with Bicep, which is what my customer is asking for.
To Reproduce
See snippet above
Additional context
This is a showstopper for my customer, who wants to use Bicep only, WITHOUT any extra steps in a script.
Ask: this get-resource-upload-url API should be callable through Bicep
@alex-frankel
The AppPlatform/Spring RP team is looking into this
Hi @ezYakaEagle442
If you need to create a new deployment, you can fill the relativePath with the placeholder <default>.
@description('The instance name of the Azure Spring Cloud resource')
param springCloudInstanceName string
param location string = resourceGroup().location
resource springCloudInstance 'Microsoft.AppPlatform/Spring@2022-11-01-preview' = {
name: springCloudInstanceName
location: location
sku: {
name: 'S0'
tier: 'Standard'
}
properties: {
}
}
resource apiGatewayApp 'Microsoft.AppPlatform/Spring/apps@2022-11-01-preview' = {
name: 'api-gateway'
parent: springCloudInstance
}
resource apiGatewayDeploymentApp 'Microsoft.AppPlatform/Spring/apps/deployments@2022-11-01-preview' = {
name: 'default'
parent: apiGatewayApp
sku: {
name: 'S0'
}
properties: {
active: true
source: {
relativePath: '<default>'
type: 'Jar'
}
deploymentSettings: {
resourceRequests: {
cpu: '1'
memory: '2Gi'
}
}
}
}
If you need a real storage location that can be used to upload artifacts and pass to the deployment, you need a POST call to the app's getResourceUploadUrl action. You need to leverage the Deployment Script support in bicep to do this.
create a user assigned identity
assign Contributor role for the identity to the target resource group that contains the Azure Spring Apps instance
Add the following snippet to get the URL.
resource getUploadUrl 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
name: 'get-upload-url'
location: location
kind: 'AzureCLI'
identity: {
type: 'UserAssigned'
userAssignedIdentities: {
// replace the ??xxx?? placeholder below with your identity properties
'${resourceId('??your-identity-group??', 'Microsoft.ManagedIdentity/userAssignedIdentities', '??your identity name??')}': {}
}
}
properties: {
forceUpdateTag: utcValue
azCliVersion: '2.40.0'
timeout: 'PT30M'
scriptContent: 'az rest --method post --url ${apiGatewayApp.id}/getResourceUploadUrl?api-version=2022-11-01-preview'
retentionInterval: 'P1D'
}
}
// you can get the url and path using the following assignment
var relativePath = getUploadUrl.properties.outputs.relativePath
var uploadUrl = getUploadUrl.properties.outputs.uploadUrl
However, if you want to do a real deployment using Bicep (upload the JAR and then patch the deployment), it's not a good idea IMO, as Bicep is ARM-template oriented. You would need to write further deployment scripts that call curl to upload your JAR. Reference: https://blog.soft-cor.com/uploading-large-files-to-an-azure-file-share-using-a-shell-script-and-standard-linux-commands/
|
gharchive/issue
| 2023-01-11T09:11:38 |
2025-04-01T04:54:46.048857
|
{
"authors": [
"allxiao",
"ezYakaEagle442",
"stephaniezyen"
],
"repo": "Azure/bicep",
"url": "https://github.com/Azure/bicep/issues/9515",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2417566715
|
Warning: Unable to fetch all az cli versions
Warning: Unable to fetch all az cli versions, please report it as an issue on https://github.com/Azure/CLI/issues. Output: ***
"name": "azure-cli",
"tags": [
"0.10.0",
"0.10.1",
"0.10.10",
"0.10.11",
"0.10.12",
"0.10.13",
"0.10.14",
"0.10.2",
"0.10.3",
"0.10.4",
"0.10.5",
"0.10.6",
"0.10.7",
"0.10.8",
"0.9.10",
"0.9.13",
"0.9.14",
"0.9.15",
"0.9.16",
"0.9.17",
"0.9.18",
"0.9.19",
"0.9.2",
"0.9.20",
"0.9.4",
"0.9.5",
"0.9.6",
"0.9.7",
"0.9.8",
"0.9.9",
"2.0.24",
"2.0.26",
"2.0.27",
"2.0.28",
"2.0.29",
"2.0.31",
"2.0.32",
"2.0.34",
"2.0.37",
"2.0.38",
"2.0.41",
"2.0.42",
"2.0.43",
"2.0.44",
"2.0.45",
"2.0.46",
"2.0.47",
"2.0.49",
"2.0.50",
"2.0.51",
"2.0.52",
"2.0.53",
"2.0.54",
"2.0.55",
"2.0.56",
"2.0.57",
"2.0.58",
"2.0.59",
"2.0.60",
"2.0.61",
"2.0.62",
"2.0
Hi @vn0siris, this is a duplicate issue of #153, which has been fixed in #154. You can point to master branch as a temporary workaround. I will inform you once the new version is released.
|
gharchive/issue
| 2024-07-18T23:34:48 |
2025-04-01T04:54:46.057738
|
{
"authors": [
"MoChilia",
"vn0siris"
],
"repo": "Azure/cli",
"url": "https://github.com/Azure/cli/issues/155",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1242057622
|
Handle expires_on in int format
Unmarshal the value into an interface{} and perform the proper
conversion depending on the underlying type.
Thank you for your contribution to Go-AutoRest! We will triage and review it as soon as we can.
As part of submitting, please make sure you can make the following assertions:
[ ] I've tested my changes, adding unit tests if applicable.
[ ] I've added Apache 2.0 Headers to the top of any new source files.
Fixes https://github.com/Azure/go-autorest/issues/696
Looking at the related issue, it looks to me that expires_on, at least in that example, isn't the number of seconds from now but probably from the Unix epoch. I need to take a closer look.
Everywhere except App Service, expires_on is epoch seconds, either as a number or a string.
And App Service is a time-stamp, correct?
OK I did a little digging. Token.Expires() already treats ExpiresOn as Unix time. It does mean though that our handling of expires_on in date-time format is incorrect at present.
|
gharchive/pull-request
| 2022-05-19T16:57:32 |
2025-04-01T04:54:46.067915
|
{
"authors": [
"chlowell",
"jhendrixMSFT"
],
"repo": "Azure/go-autorest",
"url": "https://github.com/Azure/go-autorest/pull/698",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2720541981
|
Signed version of kubelogin.exe
Can we get a signed version of kubelogin.exe? We are only allowed to use signed binaries and executables in our production systems.
ack. I think the upcoming publishing update should be able to address it
|
gharchive/issue
| 2024-12-05T14:17:39 |
2025-04-01T04:54:46.120646
|
{
"authors": [
"kurian-dm",
"weinong"
],
"repo": "Azure/kubelogin",
"url": "https://github.com/Azure/kubelogin/issues/566",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1854677886
|
Disabling PR Workflow for Forked PRs
Using trigger pull_request_target allows workflows from forked repos to get access to the secrets and GitHub tokens of this repository, as specified here.
Until testing is fixed, we are removing the capability to run tests on forked PRs to resolve a security issue. After testing is enabled for the GitHub Action, we will enable the workflow accordingly.
Note: Support for running this workflow on forked branches will be added after proper investigation. There were 2 approaches of fixing this:
Either support running only on non-forked branches, e.g. the Functions GitHub Action.
Make the workflow work explicitly, e.g. the SQL Deploy GitHub Action.
We have picked the first approach and will investigate the second approach further.
|
gharchive/pull-request
| 2023-08-17T09:59:21 |
2025-04-01T04:54:46.123532
|
{
"authors": [
"mitsha-microsoft"
],
"repo": "Azure/load-testing",
"url": "https://github.com/Azure/load-testing/pull/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
366896506
|
Try running MMLSPARK in YARN mode
Hi, We are trying to use mmlspark in a Cloudera environment using pyspark through terminal [1] and Cloudera Data Science Workbench (CDSW) [2].
All our efforts have failed and we wonder if this option is possible. The only way we've gotten it working is to use pyspark without YARN, and even then we get another error [3].
*We also tried to run on a Google Cloud Dataproc cluster with the same error [4]
[1] Terminal
pyspark2 --master local --deploy-mode yarn --packages Azure:mmlspark:0.14,com.microsoft.ml.lightgbm:lightgbmlib:2.1.250,com.jcraft:jsch:0.1.54,com.microsoft.cntk:cntk:2.4,io.spray:spray-json_2.11:1.3.2,org.openpnp:opencv:3.4.2-0
[2] CDSW
from pyspark.sql import SparkSession

warehouseLocation = "/prod/bcp/edv/mesapymesh/datain"
jarsLocation = "/home/cdsw/"
spark = SparkSession \
    .builder.appName("SparkML") \
    .config("spark.sql.warehouse.dir", warehouseLocation) \
    .config("spark.jars.ivy", jarsLocation) \
    .config("spark.jars.packages", "Azure:mmlspark:0.14,com.microsoft.ml.lightgbm:lightgbmlib:2.1.250,com.jcraft:jsch:0.1.54,com.microsoft.cntk:cntk:2.4,io.spray:spray-json_2.11:1.3.2,org.openpnp:opencv:3.4.2-0") \
    .enableHiveSupport() \
    .getOrCreate()
[3] Error running in local mode
[4] Google Cloud Dataproc error
Code
pyspark --packages Azure:mmlspark:0.14
Log
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/10/04 17:31:53 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/cticbigdata/.ivy2/jars/Azure_mmlspark-0.14.jar added multiple times to distributed cache.
18/10/04 17:31:53 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/cticbigdata/.ivy2/jars/io.spray_spray-json_2.11-1.3.2.jar added multiple times to distributed cache.
18/10/04 17:31:53 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/cticbigdata/.ivy2/jars/com.microsoft.cntk_cntk-2.4.jar added multiple times to distributed cache.
18/10/04 17:31:53 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/cticbigdata/.ivy2/jars/org.openpnp_opencv-3.2.0-1.jar added multiple times to distributed cache.
18/10/04 17:31:53 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/cticbigdata/.ivy2/jars/com.jcraft_jsch-0.1.54.jar added multiple times to distributed cache.
18/10/04 17:31:53 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/cticbigdata/.ivy2/jars/com.microsoft.ml.lightgbm_lightgbmlib-2.1.250.jar added multiple times to distributed cache.
ivysettings.xml file not found in HIVE_HOME or HIVE_CONF_DIR,/etc/hive/conf.dist/ivysettings.xml will be
...
Using Python version 2.7.9 (default, Sep 25 2018 20:42:16)
SparkSession available as 'spark'.
import mmlspark
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named mmlspark
[5] Notes
CDH 5.12
Spark 2.2.0 CDH / Spark 2.2.1 GCD
Python 2.7.13 / 3.6
Thanks in advance for your help. If you need more information or something else I will be checking for news.
Hi,
You should only need the option
"--packages Azure:mmlspark:0.14"
The other options shouldn't matter. MMLSpark should work anywhere where spark is deployed, it shouldn't matter what cluster you are using. Having said that, I've only tested it on Azure Databricks and HDInsight. If you want to meet over skype I could try and debug it with you, but I don't have access to a cloudera workbench unfortunately :(.
Thank you, Ilya
@antoniocachuan
Can you also try on spark 2.3? We only support spark 2.3 now (older versions support 2.2).
You can also send me an email at mmlspark-support@microsoft.com if you want to diagnose your issue.
@imatiach-msft
Thanks for your answer. At the moment it is not possible to test it on Spark 2.3. I also tried with "--packages Azure:mmlspark:0.14" on a Google Cloud Dataproc cluster, with the same results.
PS: I really appreciate your help, just emailed you.
Regards,
Antonio C.
@antoniocachuan the strange thing is, I don't see any errors anywhere. It looks like you retrieved the jar, so you would think it would just work. I'm not quite sure what the problem might be. We could try adding the Python files manually from the zip to see if anything fails. Otherwise, when using --packages it should just pick up the Python files and import them. My guess is that something in the import step is failing, but that might not be the case because I don't see an error anywhere.
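One hedged way to try that manual step (the jar path matches the ivy cache seen in the logs above; the internal layout of the jar is an assumption): extract the bundled Python package and hand it to pyspark explicitly.

```shell
# Assumption: the mmlspark jar bundles the Python package under mmlspark/.
JAR=~/.ivy2/jars/Azure_mmlspark-0.14.jar
unzip -o "$JAR" "mmlspark/*" -d /tmp/mmlspark-py
(cd /tmp/mmlspark-py && zip -r mmlspark.zip mmlspark)
pyspark --master yarn \
    --packages Azure:mmlspark:0.14 \
    --py-files /tmp/mmlspark-py/mmlspark.zip
```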
@imatiach-msft I tried in CDH running --packages using the Scala API and it works; now I am getting an error related to issue #335. I could also test adding the Python files manually.
spark2-shell --master yarn --packages Azure:mmlspark:0.14,com.microsoft.ml.lightgbm:lightgbmlib:2.1.250
Error #335
Caused by: java.lang.UnsatisfiedLinkError: /data/06/yarn/nm/usercache/s16746/appcache/application_1538195866523_1970/container_e100_1538195866523_1970_01_000002/tmp/mml-natives2115945537512894448/lib_lightgbm.so: /lib64/libm.so.6: version GLIBC_2.23 not found (required by /data/06/yarn/nm/usercache/s16746/appcache/application_1538195866523_1970/container_e100_1538195866523_1970_01_000002/tmp/mml-natives2115945537512894448/lib_lightgbm.so)
@antoniocachuan I also encountered a similar 'No module named mmlspark' problem, but after I compiled the source code of mmlspark-0.15 and installed the 'mmlspark-0.15-py2.py3-none-any.whl' package into my Ubuntu 16.04 environment, the problem was gone!
@antoniocachuan hello, have you solved this issue? Can you give me some advice? Thanks.
|
gharchive/issue
| 2018-10-04T17:37:41 |
2025-04-01T04:54:46.141622
|
{
"authors": [
"antoniocachuan",
"imatiach-msft",
"kunguang",
"vinglogn"
],
"repo": "Azure/mmlspark",
"url": "https://github.com/Azure/mmlspark/issues/386",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1087622473
|
Support arrays in property definitions
The Plug and Play documentation states that the definition of arrays in properties is not supported yet (https://docs.microsoft.com/en-us/azure/iot-develop/concepts-modeling-guide).
Since it is now finally possible to have arrays in the device twin though (https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-device-twins#tags-and-properties-format), it should also be possible to define arrays in component properties.
this is something we are targeting for DTDL v3. Stay tuned for upcoming updates.
DTDL v3 has been published as preview, with support for arrays in properties.
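For illustration (the interface id and property name below are made up), an array-typed Property under the DTDL v3 context can be sketched as:

```json
{
  "@context": "dtmi:dtdl:context;3",
  "@id": "dtmi:com:example:Thermostat;1",
  "@type": "Interface",
  "contents": [
    {
      "@type": "Property",
      "name": "setPointHistory",
      "writable": true,
      "schema": {
        "@type": "Array",
        "elementSchema": "double"
      }
    }
  ]
}
```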
Can you close this issue?
|
gharchive/issue
| 2021-12-23T11:18:15 |
2025-04-01T04:54:46.153356
|
{
"authors": [
"K2CanDo",
"rido-min"
],
"repo": "Azure/opendigitaltwins-dtdl",
"url": "https://github.com/Azure/opendigitaltwins-dtdl/issues/124",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2334291710
|
Bump sidecar image
Bump sidecar image to resolve CVEs
/azp run pr-e2e-arc
|
gharchive/pull-request
| 2024-06-04T20:11:44 |
2025-04-01T04:54:46.154561
|
{
"authors": [
"keithmattix",
"nshankar13"
],
"repo": "Azure/osm-azure",
"url": "https://github.com/Azure/osm-azure/pull/190",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
471177688
|
Should be portalfx instead of portalf in the breaking changes link
Should be portalfx instead of portalf in the breaking changes link
Docs Build status updates of commit e65372f:
:white_check_mark: Validation status: passed
File
Status
Preview URL
Details
portal-sdk/generated/downloads.md
:bulb:Suggestion
View
Details
portal-sdk/generated/downloads.md
[Suggestion] Missing attribute: author. Add the current author's GitHub ID.
[Suggestion] Missing attribute: title. Add a title string to show in search engine results.
For more details, please refer to the build report.
Note: If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report.
@nickharris please take a look
My bad I only changed display link :S
|
gharchive/pull-request
| 2019-07-22T16:29:42 |
2025-04-01T04:54:46.165015
|
{
"authors": [
"mikekinsman",
"ppgovekar"
],
"repo": "Azure/portaldocs",
"url": "https://github.com/Azure/portaldocs/pull/244",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1409451985
|
feat(#60): include CodeQL@v2 workflow
Include CodeQL workflow to security scan
Changes are added to resolve the conflicts, so closing it here.
|
gharchive/pull-request
| 2022-10-14T14:35:47 |
2025-04-01T04:54:46.165975
|
{
"authors": [
"BALAGA-GAYATRI",
"jbenaventem"
],
"repo": "Azure/powershell",
"url": "https://github.com/Azure/powershell/pull/61",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
816619812
|
Possibility to inject key-vault values as environment variables with or without k8s secrets
Describe the solution you'd like
Many applications have a native ability to read options/parameters from environment variables, e.g. .NET Core. Right now, to get that behavior we need to:
Mount the files that won't be used later
Add secrets as k8s secrets that won't be used later
Map k8s secrets to env.variables
For many applications there is not much need for all of these and it would be a nice feature to just allow injection of variables without the need to configure either mounts or secrets.
Anything else you would like to add:
It would be a cool additional feature if injecting as an env variable could also update k8s secrets (like what happens now during the mount, if configured), with auto-rotation enabled restarting pods to inject new environment variables with the renewed secrets when a secret changes.
Or perhaps it's possible to evaluate the injected variables instead of creating a k8s secret to determine if pod should be restarted.
That would allow for completely pain-free secret update, like e.g. database password update.
Are we able to use Deployment env.valueFrom.secretKeyRef along with SecretProviderClass secretObjects already?
Mount the files that won't be used later
Even with the ability to use env.valueFrom.secretKeyRef, there is still a requirement to mount the files, otherwise the synced secrets aren't created. That's a bit unfortunate; it makes it more complicated to build a chart that can use regular secrets OR the CSI driver in different environments. Each deployment needs to be modified to mount the volume from the CSI driver.
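The pattern under discussion can be sketched as a config fragment; all names below (my-spc, my-synced-secret, db-password) are placeholders, not values from this thread:

```yaml
# Sketch only; resource names and keys are placeholders.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: my-spc
spec:
  provider: azure
  secretObjects:              # sync the mounted file into a k8s Secret
    - secretName: my-synced-secret
      type: Opaque
      data:
        - objectName: db-password
          key: db-password
---
# Deployment container spec (fragment): as noted above, the pod must still
# mount the CSI volume, otherwise the synced Secret is never created.
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: my-synced-secret
        key: db-password
```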
Is there any way to expose a vault secret directly as a pod's env variables? My client does not allow using k8s secrets, since base64 encoding is still effectively clear text. E.g. some DB and sensitive data store credentials really need to be hidden from k8s admins/developers.
|
gharchive/issue
| 2021-02-25T16:58:13 |
2025-04-01T04:54:46.170542
|
{
"authors": [
"ilya-git",
"kpkool",
"ltouro",
"ms1111"
],
"repo": "Azure/secrets-store-csi-driver-provider-azure",
"url": "https://github.com/Azure/secrets-store-csi-driver-provider-azure/issues/412",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2350534622
|
Nextjs + Authjs v5 Callback Error with Duende Identity Server provider in a Azure Static Web App
Describe the bug
After the user is successfully logged in on my duende identity server v6, it tries to redirect the user to the specified callback url (following the Auth.js documentation here).
Here's an example redirect url:
https://mydomain/api/auth/callback/duende-identity-service?code=3D96EBB5721191CD18CDBBFE5FFD818017E4C59F09214552AE74636913DB21B6-1&scope=openid profile email&session_state=G2eqKlEXsiOGOl0W5zNmDk7MloXu18w1M3YapSqv7qI.E986B1AED0AC76CFE51406215F9A08F6&iss=myidentityserver
Instead of retrieving the session and redirecting the user back to the "dashboard", my SWA returns a 302 status code with this weird URL as the location: https://1dd4069c374e:8080/api/auth/error?error=Configuration
On localhost the redirect works as expected.
To Reproduce
Can't give many details of the code as it's private, but I will try to create a mock application to replicate this behaviour.
auth.config.ts
export default {
providers: [
DuendeIDS6Provider({
id: 'duende-identity-service', // default id duende-identityserver6!!
name: 'Duende Identity Service',
clientId: process.env.AUTH_DUENDE_IDENTITY_SERVER6_ID!,
clientSecret: process.env.AUTH_DUENDE_IDENTITY_SERVER6_SECRET!,
issuer: process.env.AUTH_DUENDE_IDENTITY_SERVER6_ISSUER,
}),
],
} satisfies NextAuthConfig;
middleware.ts
const intlMiddleware = createMiddleware({
defaultLocale,
localePrefix,
locales,
pathnames,
});
const authMiddleware = auth(
(req: NextRequest & { auth: Session | null }): Response | void => {
const session = req.auth;
// Handle session
return intlMiddleware(req);
},
);
const middleware = (req: NextRequest) => {
// some validations
if (isAuthPage) {
return (authMiddleware as any)(req);
}
if (isPublicPage) {
return intlMiddleware(req);
}
return (authMiddleware as any)(req);
};
export const config = {
matcher: ['/((?!api|_next/static|_next/image|favicon.ico|.*.swa).*)/'],
};
export default middleware;
staticwebapp.config.json
{
"forwardingGateway": {
"allowedForwardedHosts": [
"mydomain"
]
}
}
Expected behavior
Location should be https://mydomain/dashboard
Actual response:
Device info (if applicable):
OS: Windows
Browsers: Brave, Firefox, Chrome, Edge
Version: Latest
@OsoThevenin This was happening to me too. I spent quite a while working through the code and realised the way AuthJS was setting the hostname was a bit odd. The weird url is actually the HOST of the server.
I can't recall exactly what helped me work around the issue but it was either setting the AUTH_URL or the AUTH_REDIRECT_PROXY_URL to the actual domain i.e. "https:///api/auth"
See this issue https://github.com/nextauthjs/next-auth/issues/10928#issuecomment-2121092912
@OsoThevenin This was happening to me too. I spent quite a while working through the code and realised the way AuthJS was setting the hostname was a bit odd. The weird url is actually the HOST of the server.
I can't recall exactly how I worked around the issue but it was either setting the AUTH_URL or the AUTH_REDIRECT_PROXY_URL to the actual domain i.e. "https:///api/auth"
See this issue nextauthjs/next-auth#10928 (comment)
Definitely, this helped fix the issue. Thanks a lot ❤️
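For later readers, the workaround above boils down to an environment fragment like this; the domain is a placeholder, and which of the two variables applies depends on your Auth.js setup:

```ini
# .env on the Static Web App — sketch, values are placeholders
AUTH_URL=https://mydomain/api/auth
# or, when routing through a redirect proxy:
# AUTH_REDIRECT_PROXY_URL=https://mydomain/api/auth
```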
|
gharchive/issue
| 2024-06-13T08:33:50 |
2025-04-01T04:54:46.224834
|
{
"authors": [
"OsoThevenin",
"alasdairmackenzie"
],
"repo": "Azure/static-web-apps",
"url": "https://github.com/Azure/static-web-apps/issues/1492",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
934445651
|
AzureStaticWebApp step fails immediately
For the life of me, I can't get this AzureStaticWebApp pipeline step to succeed.
I have a react app that I created through the standard create-react-app npx command. I tried following a bunch of official and unofficial tutorials, but nothing I've tried has worked. I've seen many tutorials get you to use the GitHub Actions flow, but I want to set up an Azure DevOps CI/CD pipeline for this react project.
The one thing that I suspect might be causing some issues is that I am using a self-hosted agent (vsts-agent-win-x64-2.188.3) to run the pipelines, since I don't have access to the hosted parallelism.
To Reproduce
Steps to reproduce the behavior:
I run my pipeline
AzureStaticWebApp step fails immediately
GitHub Actions or Azure Pipelines workflow YAML file
trigger:
- main
pool:
name: Default
steps:
- checkout: self
submodules: true
- task: Npm@1
displayName: 'npm install'
inputs:
verbose: false
- task: Npm@1
displayName: 'npm run build'
inputs:
command: custom
verbose: false
customCommand: 'run build'
- task: PublishBuildArtifacts@1
displayName: 'Publish Artifact: drop'
inputs:
PathtoPublish: build
- task: AzureStaticWebApp@0
inputs:
app_location: '/build'
azure_static_web_apps_api_token: '$(deployment_token)'
Output of AzureStaticWebApp step
2021-07-01T04:36:39.8899695Z ##[section]Starting: AzureStaticWebApp
2021-07-01T04:36:39.9036032Z ==============================================================================
2021-07-01T04:36:39.9036303Z Task : Deploy Azure Static Web App
2021-07-01T04:36:39.9036657Z Description : [PREVIEW] Build and deploy an Azure Static Web App
2021-07-01T04:36:39.9036859Z Version : 0.187.1
2021-07-01T04:36:39.9037027Z Author : Microsoft Corporation
2021-07-01T04:36:39.9037214Z Help : https://aka.ms/swadocs
2021-07-01T04:36:39.9037436Z ==============================================================================
2021-07-01T04:36:40.1961223Z ##[section]Finishing: AzureStaticWebApp
staticwebapp.config.json file
{
"navigationFallback": {
"rewrite": "/index.html"
}
}
Expected behavior
The AzureStaticWebApps step completes and deploys my react project to my Azure Static Web App
Screenshots
Here is what my entire pipeline looks like
Any help would be greatly appreciated! Thanks
Is the VM running the pipeline a Windows machine? We've seen this in the past if the VM is not capable of running the task startup script.
Hi miwebst,
I am running the Agent on my own local machine, which is Windows 10.
Did the issue that you are referring to get resolved? If so, how did they resolve it?
Thanks
I FINALLY FIXED MY ISSUE!!!!
When you mentioned that it might be a Windows problem, I used my Ubuntu VM:
Distributor ID: Ubuntu
Description: Ubuntu 20.04.1 LTS
Release: 20.04
Codename: focal
Then I had to install npm and docker.io:
sudo apt install npm
sudo apt-get install docker.io
Then I had to setup docker for my user on the machine:
https://www.digitalocean.com/community/questions/how-to-fix-docker-got-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket
After that I was able to run my pipeline with the following YAML:
trigger:
- main
pool:
name: Default
steps:
- checkout: self
submodules: true
- task: AzureStaticWebApp@0
inputs:
app_location: '/'
output_location: 'build'
azure_static_web_apps_api_token: '$(deployment_token)'
I hope this helps some people out :)
I would have to double check, but I think installing docker would fix my Windows issue. I will reply with my results
|
gharchive/issue
| 2021-07-01T06:17:14 |
2025-04-01T04:54:46.233968
|
{
"authors": [
"kevinprescottwong-Dev",
"miwebst"
],
"repo": "Azure/static-web-apps",
"url": "https://github.com/Azure/static-web-apps/issues/501",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2646708755
|
chore: repository governance
Repository governance update
This PR was automatically created by the AVM Team hive-mind using the grept governance tool.
We have detected that some files need updating to meet the AVM governance standards.
Please review and merge with alacrity.
Grept config source: git::https://github.com/Azure/Azure-Verified-Modules-Grept.git//terraform
Thanks! The AVM team :heart:
Superseded by #81
|
gharchive/pull-request
| 2024-11-10T01:35:51 |
2025-04-01T04:54:46.236210
|
{
"authors": [
"segraef"
],
"repo": "Azure/terraform-azurerm-avm-ptn-alz-management",
"url": "https://github.com/Azure/terraform-azurerm-avm-ptn-alz-management/pull/80",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2337850213
|
[AVM Module Issue]: "Missing required argument" that is not declared as 'required'.
Check for previous/existing GitHub issues
[X] I have checked for previous/existing GitHub issues
Issue Type?
I'm not sure
(Optional) Module Version
No response
(Optional) Correlation Id
No response
Description
This simple code uses only the parameters that are declared as "Required" in the documentation, but Terraform (plan operation) still shows an error that it's not enough. I expected the module to generate an admin password for me and provide it as output. I have no intention of using key vaults; it's a simple test scenario.
module "avm-res-compute-virtualmachine" {
source = "Azure/avm-res-compute-virtualmachine/azurerm"
name = module.naming.windows_virtual_machine.name_unique
resource_group_name = azurerm_resource_group.this.name
location = azurerm_resource_group.this.location
virtualmachine_sku_size = "Standard_A2_v2"
zone = 1
}
The error (and follow-up errors):
Error: Missing required argument
with module.avm-res-compute-virtualmachine.azurerm_key_vault_secret.admin_password[0],
on .terraform\modules\avm-res-compute-virtualmachine\main.authentication.tf line 35, in resource "azurerm_key_vault_secret" "admin_password":
35: key_vault_id = var.admin_credential_key_vault_resource_id
The argument "key_vault_id" is required, but no definition was found.
Error: Attempt to get attribute from null value
on .terraform\modules\avm-res-compute-virtualmachine\main.windows_vm.tf line 135, in resource "azurerm_windows_virtual_machine" "this":
135: offer = local.source_image_reference.offer
This value is null, so it does not have any attributes.
Error: Attempt to get attribute from null value
on .terraform\modules\avm-res-compute-virtualmachine\main.windows_vm.tf line 136, in resource "azurerm_windows_virtual_machine" "this":
136: publisher = local.source_image_reference.publisher
local.source_image_reference is null
This value is null, so it does not have any attributes.
Error: Attempt to get attribute from null value
on .terraform\modules\avm-res-compute-virtualmachine\main.windows_vm.tf line 137, in resource "azurerm_windows_virtual_machine" "this":
137: sku = local.source_image_reference.sku
local.source_image_reference is null
This value is null, so it does not have any attributes.
Error: Attempt to get attribute from null value.
on .terraform\modules\avm-res-compute-virtualmachine\main.windows_vm.tf line 138, in resource "azurerm_windows_virtual_machine" "this":
138: version = local.source_image_reference.version
local.source_image_reference is null
This value is null, so it does not have any attributes.
I encountered this issue when providing UN + PW for the admin.
Referencing a KV seems tightly coupled to the parameter generate_admin_password_or_ssh_key, which defaults to true. Since I'm providing UN+PW, I disable that flag and the KV requirement is avoided.
module "avm-onprem-mgmt-vm" {
source = "Azure/avm-res-compute-virtualmachine/azurerm"
name = module.on_prem_naming.virtual_machine.name
location = azurerm_resource_group.onprem.location
resource_group_name = azurerm_resource_group.onprem.name
admin_username = var.username
admin_password = var.password
generate_admin_password_or_ssh_key = false
virtualmachine_sku_size = var.vmsize
zone = null
virtualmachine_os_type = "Windows"
source_image_reference = {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2022-datacenter-azure-edition-hotpatch"
version = "latest"
}
network_interfaces = {
mgmt_nic = {
name = module.on_prem_naming.network_interface.name
location = azurerm_resource_group.onprem.location
resource_group_name = azurerm_resource_group.onprem.name
ip_configurations = {
mgmt_ipconfig = {
name = "mgmt-ipconfig"
subnet_id = module.avm-onprem-mgmt-subnet.resource_id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = null
primary = true
}
}
}
}
}
https://github.com/Azure/terraform-azurerm-avm-res-compute-virtualmachine/blob/20917bef881cc8e346864c8b159d4546d27dccb2/main.authentication.tf#L32
@wplj and @eamreyes - Release 0.15.0 removes the requirement for the key vault id, and moves the id to a single interface for all of the generated secret (password or ssh key) configuration items. (Also deprecates the old inputs for removal in a future release). It also cleans up the inputs so that it can be deployed with only required inputs. This is now also tested in the minimal example. Finally, please be aware there are breaking changes in the release so please review the release notes when you move to 0.15.
|
gharchive/issue
| 2024-06-06T09:56:32 |
2025-04-01T04:54:46.242808
|
{
"authors": [
"eamreyes",
"jchancellor-ms",
"wplj"
],
"repo": "Azure/terraform-azurerm-avm-res-compute-virtualmachine",
"url": "https://github.com/Azure/terraform-azurerm-avm-res-compute-virtualmachine/issues/85",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2272764834
|
chore: repository governance
Repository governance update
This PR was automatically created by the AVM Team hive-mind using the grept governance tool.
We have detected that some files need updating to meet the AVM governance standards.
Please review and merge with alacrity.
Grept config source: git::https://github.com/Azure/Azure-Verified-Modules-Grept.git//terraform
Thanks! The AVM team :heart:
Superseded by #23
|
gharchive/pull-request
| 2024-05-01T01:33:59 |
2025-04-01T04:54:46.245297
|
{
"authors": [
"mbilalamjad"
],
"repo": "Azure/terraform-azurerm-avm-res-web-hostingenvironment",
"url": "https://github.com/Azure/terraform-azurerm-avm-res-web-hostingenvironment/pull/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2504778789
|
Broken Wiki links
Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
If you are interested in working on this issue or have submitted a pull request, please leave a comment
Versions
terraform: 1.7.0
azure provider: 3.65.0
module: 6.1.0
Description
Describe the bug
Wiki links for connectivity with custom settings are still broken on some sub-pages. Further to the issue I previously raised (#1094), which was closed by this PR, the following pages contain links to the same and need fixing:
Examples.md
[Examples]-Deploy-Connectivity-Resources.md
[Examples]-Deploy-Multi-Region-Networking-With-Custom-Settings.md
[Examples]-Deploy-Virtual-WAN-Multi-Region-With-Custom-Settings.md
[Examples]-Deploy-Virtual-WAN-Resources.md
[Examples]-Deploy-using-multiple-module-declarations-with-orchestration.md
[Examples]-Deploy-using-multiple-module-declarations-with-remote-state.md
Broken links are for both the hub and spoke and VWAN custom settings pages.
Steps to Reproduce
Navigate to the pages above
click links for connectivity with custom settings
get redirected to 'Home'
Screenshots
Additional context
Would really like to be able to share a PR with the fixes for this but still unable to contribute - is it possible to be accepted as a contributor to raise PRs?
Hey @sissonsrob,
You can indeed submit a PR by forking this repo and then making your changes and submitting via a pull request. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/getting-started/about-collaborative-development-models#fork-and-pull-model
Thanks @jtracey93 - unsure why I couldn't raise the PR last time but I have now raised this PR which fixes the other pages affected by this.
Note - the fork was from my other account
Hope it helps
|
gharchive/issue
| 2024-09-04T09:23:22 |
2025-04-01T04:54:46.253758
|
{
"authors": [
"jtracey93",
"sissonsrob"
],
"repo": "Azure/terraform-azurerm-caf-enterprise-scale",
"url": "https://github.com/Azure/terraform-azurerm-caf-enterprise-scale/issues/1126",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2273473780
|
Bug Report : Continuous Destroy and then Create of azapi_resource diag_settings
Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
If you are interested in working on this issue or have submitted a pull request, please leave a comment
Continuous Destroy and then Create of azapi_resource diag_settings
The module successfully deploys the Cloud Adoption Framework. However, when doing a plan with no changes to the parameters in the module, we are getting 10 adds and 10 destroys of the resource "azapi_resource" "diag_settings".
module.enterprise_scale.azapi_resource.diag_settings["/providers/Microsoft.Management/managementGroups/testtenant-sandboxes"] must be replaced
-/+ resource "azapi_resource" "diag_settings" {
~ id = "/providers/Microsoft.Management/managementGroups/testtenant-sandboxes/providers/Microsoft.Insights/diagnosticSettings/toLA" -> (known after apply)
- location = "global" -> null # forces replacement
name = "toLA"
~ output = jsonencode({}) -> (known after apply)
# (7 unchanged attributes hidden)
}
Plan: 10 to add, 0 to change, 10 to destroy.
Help regarding resolution of this issue will be much appreciated.
fixed by #968
|
gharchive/issue
| 2024-05-01T13:06:02 |
2025-04-01T04:54:46.258119
|
{
"authors": [
"Keetika-Yogendra",
"matt-FFFFFF"
],
"repo": "Azure/terraform-azurerm-caf-enterprise-scale",
"url": "https://github.com/Azure/terraform-azurerm-caf-enterprise-scale/issues/939",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1308490517
|
Move common http test utilities from testutils to common4j test fixtures
Description: Move common http test utilities from testutils to common4j test fixtures
This is to break https://github.com/AzureAD/microsoft-authentication-library-common-for-android/pull/1770 into multiple smaller PRs.
Would be nice to update the description with what is being changed here
Would be nice to update the description with what is being changed here
I've added the title to description as well. That's basically all there is to this PR. I'm not sure what more I'd put for a simple PR like this that's just relocated some files.
Would be nice to update the description with what is being changed here
I've added the title to description as well. That's basically all there is to this PR. I'm not sure what more I'd put for a simple PR like this that's just relocated some files.
Something like why we are moving the classes will add some context, when we look back at the PR later.
Would be nice to update the description with what is being changed here
I've added the title to description as well. That's basically all there is to this PR. I'm not sure what more I'd put for a simple PR like this that's just relocated some files.
Something like why we are moving the classes will add some context, when we look back at the PR later.
Added this:
Why are we moving these classes? We are moving these classes to test fixtures because test fixtures is where they truly belong. The primary purpose of test fixtures is to be able to share test code across modules.
Test Fixtures is a concept that @p3dr0rv had introduced to the team some time ago, and I think I also covered it again in one of my recent brown-bags as well.
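For context, in Gradle the feature this comment refers to is enabled per module; the sketch below assumes the Groovy DSL, and the module name is an assumption:

```groovy
// build.gradle fragment (sketch): the producer module enables the plugin
// so its shared test helpers become consumable by other modules.
plugins {
    id 'java-test-fixtures'
}
// A consumer module would then depend on them like this:
// dependencies {
//     testImplementation testFixtures(project(':common4j'))
// }
```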
|
gharchive/pull-request
| 2022-07-18T19:56:57 |
2025-04-01T04:54:46.269990
|
{
"authors": [
"iamgusain",
"shahzaibj"
],
"repo": "AzureAD/microsoft-authentication-library-common-for-android",
"url": "https://github.com/AzureAD/microsoft-authentication-library-common-for-android/pull/1797",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
809931330
|
Merge Release/1.6.2 back to Master
Merge Release/1.6.2 back to Master
Type of change
[ ] Feature work
[ ] Bug fix
[ ] Documentation
[x] Engineering change
[ ] Test
[ ] Logging/Telemetry
Risk
[ ] High – Errors could cause MAJOR regression of many scenarios. (Example: new large features or high level infrastructure changes)
[ ] Medium – Errors could cause regression of 1 or more scenarios. (Example: somewhat complex bug fixes, small new features)
[x] Small – No issues are expected. (Example: Very small bug fixes, string changes, or configuration settings changes)
Additional information
@jasoncoolmax Any ETA on this release?
@jasoncoolmax Any ETA on this release?
I am doing the release now :)
|
gharchive/pull-request
| 2021-02-17T07:23:50 |
2025-04-01T04:54:46.273615
|
{
"authors": [
"jasoncoolmax",
"jbzdarkid"
],
"repo": "AzureAD/microsoft-authentication-library-common-for-objc",
"url": "https://github.com/AzureAD/microsoft-authentication-library-common-for-objc/pull/948",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
626565213
|
audience/resource for token acquisition
Library
[ ] msal@1.x.x or @azure/msal@1.x.x
[x] @azure/msal-browser@2.x.x
[ ] @azure/msal-angular@0.x.x
[ ] @azure/msal-angular@1.x.x
[ ] @azure/msal-angularjs@1.x.x
Description
Trying to figure out how to acquire a token for the AppConfiguration API and coming up short.
The API documentation talks about requesting a resource, and even though AuthenticationParameters has a field for it, I get the error:
AADSTS901002: The 'resource' request parameter is not supported.
I tried using scopes for it, but it's not clear to me how to correctly set up *.azconfig.io as a scope - AppConfiguration is not listed as an API I can request permissions for.
scope will be something like https://{myconfig}.azconfig.io/.default
in your resource map you'll do [ 'https://{myconfig}.azconfig.io/', [ 'https://{myconfig}.azconfig.io/.default' ]],
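As a sketch of how that scope string could be built and then used with @azure/msal-browser (the store name "myconfig" is a placeholder, and the MSAL call is shown only in a comment):

```typescript
// Hedged sketch: helper that builds the App Configuration scope string.
// "myconfig" below is a placeholder store name, not a real endpoint.
function appConfigScope(storeName: string): string {
  return `https://${storeName}.azconfig.io/.default`;
}

// With @azure/msal-browser you would then request (sketch, not verified here):
//   await pca.acquireTokenSilent({ scopes: [appConfigScope("myconfig")], account });

console.log(appConfigScope("myconfig")); // → https://myconfig.azconfig.io/.default
```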
@ranjanmicrosoft that's what I thought, but then my other problem:
ServerError: invalid_client: AADSTS650057: Invalid resource. The client has requested access to a resource which is not listed in the requested permissions in the client's application registration. Client app ID: 7e327720-2c2b-4516-a52b-d255e3834907(avs-capman-dev). Resource value from request: https://*.azconfig.io. Resource app ID: 35ffadb3-7fc1-497e-b61b-381d28e744cc. List of valid resources from app registration: 00000003-0000-0000-c000-000000000000.
How do I add the *.azconfig.io URI to my app definition? It has to be registered somewhere because the manifest takes a GUID, not the URI:
"requiredResourceAccess": [
{
"resourceAppId": "00000003-0000-0000-c000-000000000000",
"resourceAccess": [
{
"id": "e1fe6dd8-ba31-4d61-89e7-88639da4683d",
"type": "Scope"
}
]
}
],
"samlMetad
Closing this as it looks like it's being handled in https://github.com/Azure/AppConfiguration/issues/338.
|
gharchive/issue
| 2020-05-28T14:37:15 |
2025-04-01T04:54:46.288795
|
{
"authors": [
"et1975",
"jmckennon",
"ranjanmicrosoft"
],
"repo": "AzureAD/microsoft-authentication-library-for-js",
"url": "https://github.com/AzureAD/microsoft-authentication-library-for-js/issues/1722",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
336462296
|
Wiki documentation is non-existent
I get that we are all busy, but when all MS documentation points to MSAL as the way to go for SPA apps, documentation needs to be a first-class citizen.
@aszalacinski - I apologize that you are not able to find what you are looking for. We do have the documentation and we are currently working on improving it. Could you please explain why do you say that it's non-existent?
@nehaagrawal the wiki on this GitHub repository does not show any useful information. Every item links back to Home. The only link that works is Register your app with AAD, which links you to the Microsoft website.
@aszalacinski I have fixed the wiki. Please check.
|
gharchive/issue
| 2018-06-28T03:43:13 |
2025-04-01T04:54:46.290932
|
{
"authors": [
"aszalacinski",
"nartc",
"nehaagrawal"
],
"repo": "AzureAD/microsoft-authentication-library-for-js",
"url": "https://github.com/AzureAD/microsoft-authentication-library-for-js/issues/337",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1306312635
|
After successful login and redirection, the execution of the routing guard is interrupted.
Core Library
MSAL.js v2 (@azure/msal-browser)
Core Library Version
2.26.0
Wrapper Library
Not Applicable
Wrapper Library Version
None
Description
Hi,
I have a sign-in page. When you visit the website without authentication, you are redirected to this page.
When I click sign in, go through the login process, successfully authenticate, and jump back to my website, there seems to be a problem with the routing guard, which sends me back to the sign-in page.
I need to land on the home page after successful authentication, but I don't know how to implement that.
In fact, the login is successful: when I visit the website again, I can get the logged-in user's information. It seems the routing guard is not waiting for handleRedirectPromise to finish.
The source code is here: vue3-sample-app.zip.
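A guard helper along the lines of what the poster seems to need would wait for handleRedirectPromise before checking accounts; this is a sketch against a minimal structural type, not code from the attached sample:

```typescript
// Minimal structural stand-in for PublicClientApplication; the helper name
// isAuthenticated is an assumption, not an MSAL API.
type MinimalPca = {
  handleRedirectPromise: () => Promise<unknown>;
  getAllAccounts: () => unknown[];
};

async function isAuthenticated(pca: MinimalPca): Promise<boolean> {
  // Process any in-flight redirect response first, so accounts are populated.
  await pca.handleRedirectPromise().catch(() => null);
  return pca.getAllAccounts().length > 0;
}
```

A vue-router beforeEach could then await isAuthenticated(...) and only redirect to /signin when it resolves to false.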
MSAL Configuration
No response
Relevant Code Snippets
No response
Identity Provider
No response
Source
External (Customer)
@zico209 Can you please provide your configuration and your routing guard implementation so I can better assist you? Have you seen our Vue3 sample, which implements a routing Guard here?
@zico209 Can you please provide your configuration and your routing guard implementation so I can better assist you? Have you seen our Vue3 sample, which implements a routing Guard here?
https://github.com/AzureAD/microsoft-authentication-library-for-js/files/9122801/vue3-sample-app.zip
This is my demo project modified on vue3 simple. All details are here.
@zico209 Thanks! A couple things I noticed:
By default the library will redirect the user back to the page which started the login flow (in your case /signin) after hitting the specified redirectUri. Sounds like you don't want this behavior, so to disable it you can set the navigateToLoginRequestUrl flag to false in your auth config (authConfig.ts -> msalConfig -> auth). Alternatively, you can set the redirectStartPage parameter on the login request (also located in authConfig.ts) to tell MSAL to redirect to any page you want after login is complete.
You are using the home route as your redirectUri and also configuring that route to use the Guard. This is not advisable. We recommend setting your redirectUri to a page which does not require the user to be authenticated, and then if needed, have that page redirect the user to where they need to be using the methods mentioned in point 1.
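The two options described above can be sketched as configuration fragments; the client ID and URLs are placeholders, and only the relevant fields are shown:

```typescript
// Hedged sketch of the two options; values are placeholders.
const msalConfig = {
  auth: {
    clientId: "<client-id>",
    redirectUri: "https://myapp.example/auth-redirect", // a route without the guard
    navigateToLoginRequestUrl: false, // option 1: don't return to /signin
  },
};

const loginRequest = {
  scopes: ["openid", "profile"],
  redirectStartPage: "https://myapp.example/", // option 2: explicit post-login page
};

console.log(msalConfig.auth.navigateToLoginRequestUrl, loginRequest.redirectStartPage);
```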
@tnorling Thank you! This is really helpful.
|
gharchive/issue
| 2022-07-15T17:35:48 |
2025-04-01T04:54:46.298679
|
{
"authors": [
"tnorling",
"zico209"
],
"repo": "AzureAD/microsoft-authentication-library-for-js",
"url": "https://github.com/AzureAD/microsoft-authentication-library-for-js/issues/5011",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1379554933
|
Is it possible to have multiple application logins?
Core Library
MSAL.js v2 (@azure/msal-browser)
Core Library Version
2.28.3
Wrapper Library
MSAL React (@azure/msal-react)
Wrapper Library Version
1.4.7
Public or Confidential Client?
Public
Description
Application usage scenario:
Landing page -> Azure AD B2C login -> some functionalities -> Azure AD B2B login -> all functionalities
After authenticating with Azure AD B2C, the user gets access to application. After the initial login only some functionalities
are available. To be granted access to all the functionalities, user must authenticate with Azure AD B2B this time.
Is this scenario possible with single instance of PCA?
MSAL Configuration
No response
Relevant Code Snippets
No response
Identity Provider
Azure B2C Custom Policy
Source
External (Customer)
@grgicpetar
After authenticating with Azure AD B2C, the user gets access to application. After the initial login only some functionalities
are available. To be granted access to all the functionalities, user must authenticate with Azure AD B2B this time.
To be clear, do you want users to authenticate directly against AAD (i.e. MSAL -> AAD, as opposed to via B2C, i.e. MSAL -> B2C -> AAD)?
You can technically change authorities on a per request basis, so assuming the answer to the above question is yes, then you may be able to achieve what you describe, although it may be easier if you maintain two PCA instances. Unfortunately, we do not have sample that demonstrates this scenario, as far as I know.
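A two-instance setup could be sketched like this; all client IDs and authorities are placeholders, and the MSAL calls appear only in comments:

```typescript
// Hedged sketch: one config per client ID, each intended to back its own
// PublicClientApplication instance.
const b2cConfig = {
  auth: {
    clientId: "<client-id-1>",
    authority: "https://tenant.b2clogin.com/tenant.onmicrosoft.com/B2C_1_signin",
    knownAuthorities: ["tenant.b2clogin.com"],
  },
};

const aadConfig = {
  auth: {
    clientId: "<client-id-2>",
    authority: "https://login.microsoftonline.com/<tenant-id>",
  },
};

// With @azure/msal-browser (sketch):
//   const b2cPca = new PublicClientApplication(b2cConfig);
//   const aadPca = new PublicClientApplication(aadConfig);
// Sign in with b2cPca first; trigger aadPca.loginRedirect(...) only when the
// additional functionality is requested.

console.log(b2cConfig.auth.clientId !== aadConfig.auth.clientId); // → true
```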
@jasonnutter
I am aware that I can change authorities on a per request basis, but what I need to do is change ClientId on a second authorization request.
I believe this should explain my use case more clearly:
Landing page -> Azure AD B2C login (Client ID 1) -> some functionalities -> Azure AD B2B login (Client ID 2) -> all functionalities.
As far as I know, I didn't see any example where ClientId can be changed using a single instance of MSAL.
@grgicpetar HI grgicpetar.I want to know how you solved it in the end because I also encountered the same scenario.
Hi @tangyinhao123, multiple instances did indeed work. It just feels weird to use since you can use only one instance through useMsal().
Hi @tangyinhao123, multiple instances did indeed work. It just feels weird to use since you can use only one instance through useMsal().
Thanks @grgicpetar Is there an example for me to refer to, because I put it in index.tsx and re-instantiate it every time it is called? My code:

// Initialize client side tracing
initializeAppInsights();
// Initialize Icons
initializeIcons();

const RootComponent = () => {
  const [instances, setInstance] = useState<PublicClientApplication | null>(null);

  // Inject some global styles
  mergeStyles({
    ":global(body,html,#root)": {
      margin: 0,
      padding: 0,
      height: "100vh",
    },
  });

  React.useEffect(() => {
    const session = sessionStorage.getItem("clientId");
    if (session) {
      if (session === msalConfig.auth.clientId) {
        setInstance(new PublicClientApplication(msalConfig));
      } else {
        setInstance(new PublicClientApplication(pmeConfig));
      }
    }
  }, []);

  document.title = "OfferStore Portal";

  const handAccountType = (atype: string) => () => {
    if (atype === "pme") {
      setInstance(new PublicClientApplication(pmeConfig));
      sessionStorage.setItem("clientId", pmeConfig.auth.clientId);
    } else if (atype === "ms") {
      setInstance(new PublicClientApplication(msalConfig));
      sessionStorage.setItem("clientId", msalConfig.auth.clientId);
    }
  };

  return instances != null ? (
    <MsalProvider instance={instances}>{/* app content */}</MsalProvider>
  ) : (
    <Stack style={{ marginTop: "50px" }} tokens={stackTokens}>
      <PrimaryButton onClick={handAccountType("ms")} text="Microsoft Account" />
      <PrimaryButton onClick={handAccountType("pme")} text="PME Account" />
    </Stack>
  );
};

ReactDOM.render(<RootComponent />, document.getElementById("root"));
@tangyinhao123
1.)
Define two PCA first in some config file:
import { Configuration, PublicClientApplication } from "@azure/msal-browser";
export const msalConfig1: Configuration = {
...
};
export const msalConfig2: Configuration = {
...
};
export const pca1 = new PublicClientApplication(msalConfig1);
export const pca2 = new PublicClientApplication(msalConfig2);
2.)
Provide the first one through the MsalProvider; this one you can use through the useMsal() hook.
...
The second one you can use by explicitly importing pca2 in files.
|
gharchive/issue
| 2022-09-20T14:51:05 |
2025-04-01T04:54:46.314950
|
{
"authors": [
"grgicpetar",
"jasonnutter",
"tangyinhao123"
],
"repo": "AzureAD/microsoft-authentication-library-for-js",
"url": "https://github.com/AzureAD/microsoft-authentication-library-for-js/issues/5230",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
655089237
|
Add script to automate publish msal-core files to CDN
Script to automate uploading msal-core generated files to the CDN. Requires a .env file in the msal-core folder that includes environment variables with SAS keys for the CDN.
Coverage remained the same at 80.998% when pulling 4f844f4a5a2fabdb4a0b8e1edcc0e8bb3e9f27b9 on automate-cdn-core into 0f352a074dff709304d87e3543c89472a1bcf875 on dev.
|
gharchive/pull-request
| 2020-07-10T23:49:31 |
2025-04-01T04:54:46.317586
|
{
"authors": [
"coveralls",
"jasonnutter"
],
"repo": "AzureAD/microsoft-authentication-library-for-js",
"url": "https://github.com/AzureAD/microsoft-authentication-library-for-js/pull/1930",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
181334883
|
update sample to use passport-azure-ad 3.0.0
Will merge after the release of version 3.0.0.
For the comments I made on the WebApp-OpenIDConnect-NodeJS PR that apply to all samples, please ensure we're making those changes across samples. Afterwards, :shipit:
|
gharchive/pull-request
| 2016-10-06T06:32:26 |
2025-04-01T04:54:46.318732
|
{
"authors": [
"lovemaths",
"polita"
],
"repo": "AzureADQuickStarts/WebApp-OpenIDConnect-NodeJS",
"url": "https://github.com/AzureADQuickStarts/WebApp-OpenIDConnect-NodeJS/pull/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
129810542
|
Consistently capitalize GitHub
"GitHub" is its correct capitalization, so here's a PR. :sunny:
(This tool looks cool - good work, btw!)
Thanks!
|
gharchive/pull-request
| 2016-01-29T16:08:45 |
2025-04-01T04:54:46.326152
|
{
"authors": [
"ChrisBAshton",
"issyl0"
],
"repo": "BBC-News/wraith",
"url": "https://github.com/BBC-News/wraith/pull/375",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
264296690
|
Segfault when GUI closed and popout map used
There is a core dump in rqt_rover_gui when the GUI is closed while the popout map frame is open. It is related to a std::pair assignment.
Fixed in pr #78
|
gharchive/issue
| 2017-10-10T16:38:45 |
2025-04-01T04:54:46.359004
|
{
"authors": [
"gmfricke",
"wfvining"
],
"repo": "BCLab-UNM/SwarmBaseCode-ROS",
"url": "https://github.com/BCLab-UNM/SwarmBaseCode-ROS/issues/71",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
109214059
|
static addChild definition of a node does what Encapsulate claims to do by default
If A is a job with dynamically added children, and we do

a, b = A(), B()
a.addChild(b)

then b should run in parallel with the dynamically added children unless A().encapsulate() is specified. Instead, b appears to run only after all of a's dynamically added children have completed.
@arkal, I slightly tweaked your comment. I hope I didn't change the semantics.
yep. I'll properly format any new issues I find.
Here's the sample code if you want to reproduce it
from __future__ import print_function
from toil.job import Job
import argparse
import time


def f(job):
    '''
    DOCSTRING
    '''
    with open('test_toil.txt', 'w', 0) as outfile:
        print('F', sep='', file=outfile)
    #job.addChildJobFn(h)
    #job.addChildJobFn(i)
    return 'F'


def g(job):
    '''
    DOCSTRING
    '''
    with open('test_toil.txt', 'a', 0) as outfile:
        print('G', sep='', file=outfile)


def h(job):
    '''
    DOCSTRING
    '''
    time.sleep(5)  # So this will end after G and I
    with open('test_toil.txt', 'a', 0) as outfile:
        print('H', sep='', file=outfile)
    return 'H'


def i(job):
    '''
    DOCSTRING
    '''
    with open('test_toil.txt', 'a', 0) as outfile:
        print('I', sep='', file=outfile)
    return 'I'


def j(job, my_rv):
    '''
    DOCSTRING
    '''
    with open('test_toil.txt', 'a', 0) as outfile:
        print(my_rv, sep='', file=outfile)


def test_1():
    '''
    DOCSTRING
    '''
    parser = argparse.ArgumentParser()
    parser.add_argument('-f', dest='txt', default='txt')
    Job.Runner.addToilOptions(parser)
    params = parser.parse_args()
    F = Job.wrapJobFn(f).encapsulate()
    G = Job.wrapJobFn(g)
    F.addChild(G)
    Job.Runner.startToil(F, params)


def test_2():
    '''
    DOCSTRING
    '''
    parser = argparse.ArgumentParser()
    parser.add_argument('-f', dest='dummy', default='dummy')
    Job.Runner.addToilOptions(parser)
    params = parser.parse_args()
    F = Job.wrapJobFn(f).encapsulate()
    J = Job.wrapJobFn(j, F.rv())
    F.addChild(J)
    Job.Runner.startToil(F, params)


if __name__ == '__main__':
    test_1()  # Expect FIGH or FGIH, get FIHG
    test_2()  # TypeError
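To make the expected ordering concrete without running Toil, here is a toy, Toil-free sketch of the scheduling semantics the issue argues for (the queue model and job names are illustrative assumptions, not Toil internals): when a job finishes, its dynamically added and statically added children become runnable together.

```python
from collections import deque

def run(root, dynamic, static):
    """Toy schedule: a finished job's dynamically added and
    statically added children are released at the same time."""
    order = []
    queue = deque([root])
    while queue:
        job = queue.popleft()
        order.append(job)
        # Both kinds of children become runnable together:
        queue.extend(dynamic.get(job, []) + static.get(job, []))
    return order

# F dynamically adds H and I while running; G was added statically.
# Under the expected semantics, G is runnable alongside H and I.
print(run('F', {'F': ['H', 'I']}, {'F': ['G']}))  # ['F', 'H', 'I', 'G']
```

In the report above, the observed behavior instead delays G until all of F's dynamically added children have completed.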
|
gharchive/issue
| 2015-10-01T01:53:14 |
2025-04-01T04:54:46.362042
|
{
"authors": [
"arkal",
"hannes-ucsc"
],
"repo": "BD2KGenomics/toil",
"url": "https://github.com/BD2KGenomics/toil/issues/445",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2119687161
|
Tuesday 06-02-2024 roll call
Roll Call!
Leave a comment with these two things:
A summary of your study using only emojis
Something you'd like to share, anything goes! (within respect)
emoji cheat sheet
Good morning!!!
Hello!
Good Morning!
Good morning
Hello
hey hey hey
hi
hi
good morning 👋
morning
✌️
hello
|
gharchive/issue
| 2024-02-06T00:01:53 |
2025-04-01T04:54:46.367676
|
{
"authors": [
"AdilCodeBX",
"Agnieszka-Dzwolak",
"Dnyandeo33",
"SowmyaPuttaswamygowda",
"ahlamboudali",
"dspodina",
"emrahhko",
"enteryana",
"rathiNamrata",
"richellepintucan",
"rodicailciuc",
"rohma19",
"samirm00"
],
"repo": "BF-FrontEnd-class-2024/home",
"url": "https://github.com/BF-FrontEnd-class-2024/home/issues/162",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1915880195
|
add help command
this command could list all commands one can call from our package
probably have to rename this command to something else, since help is built into cmd. Not sure whether that command name can be reused.
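A minimal sketch of what such a listing command could look like (the command names and registry dict are hypothetical, not the actual ERKER2Phenopackets API); to sidestep the built-in help, the entry point is called commands here:

```python
# Hypothetical command registry -- names and descriptions are assumptions.
COMMANDS = {
    "map": "Map an ERKER dataset to phenopackets",
    "validate": "Validate generated phenopackets",
}

def list_commands():
    """Return one 'name - description' line per registered command."""
    return ["%s - %s" % (name, desc) for name, desc in sorted(COMMANDS.items())]

if __name__ == "__main__":
    # A 'commands' entry point would simply print this listing.
    for line in list_commands():
        print(line)
```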
|
gharchive/issue
| 2023-09-27T15:54:16 |
2025-04-01T04:54:46.427472
|
{
"authors": [
"frehburg"
],
"repo": "BIH-CEI/ERKER2Phenopackets",
"url": "https://github.com/BIH-CEI/ERKER2Phenopackets/issues/160",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1881995306
|
Implement _map_chunk
This method should implement the logic to map a chunk (subset) of the dataset to phenopackets.
implement a helper method _map_instance to map a single row of the dataset to one phenopacket
Include:
Patient description:
[x] #60
[x] #62
Genotyping:
[x] #64
[x] #65
Phenotyping:
[x] #67
Debug
|
gharchive/issue
| 2023-09-05T13:33:15 |
2025-04-01T04:54:46.430179
|
{
"authors": [
"frehburg"
],
"repo": "BIH-CEI/ERKER2Phenopackets",
"url": "https://github.com/BIH-CEI/ERKER2Phenopackets/issues/57",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|