id | text | source | created | added | metadata
---|---|---|---|---|---
638336300
|
No Contest Outcome + Rules - Sports templates
Change all of the ones listed below (including mentions in the rules) to No Contest, with the exception of NFL (which would change to Tie/No Contest):
No Winner
Tie/No Winner (For NFL moneyline only)
Unofficial Game/Cancelled
No Winner/Event Cancelled
updated - closing
|
gharchive/issue
| 2020-06-14T11:47:02 |
2025-04-01T04:54:43.777853
|
{
"authors": [
"Chwy5"
],
"repo": "AugurProject/augur",
"url": "https://github.com/AugurProject/augur/issues/8038",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1963279335
|
rating value 0
What if I want a rating value of 0 when I click?
Hello, it is not possible to send 0 as the value by clicking on the component. And if you click outside the component, it has no access to that event and cannot deduce that the value should be 0.
You need to manage this outside the component (set the default value to 0 and/or add a button that resets/applies the value 0 to the component).
In most cases you don't want 0 in the component, but if your case requires it, you'll have to handle it in your project ;)
|
gharchive/issue
| 2023-10-26T11:11:21 |
2025-04-01T04:54:43.781055
|
{
"authors": [
"Aurion72",
"TokyConstellation"
],
"repo": "Aurion72/nuxt-rating",
"url": "https://github.com/Aurion72/nuxt-rating/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1293577657
|
How to use it?
How do I use this library?
The readme is probably a good hint
|
gharchive/issue
| 2022-07-04T21:26:16 |
2025-04-01T04:54:43.843688
|
{
"authors": [
"Auties00",
"danillo10"
],
"repo": "Auties00/WhatsappWeb4j",
"url": "https://github.com/Auties00/WhatsappWeb4j/issues/154",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
114398209
|
Make some InlineAutoDataAttribute constructor overloads protected
InlineAutoDataAttribute has a constructor overload that takes an AutoDataAttribute as an argument.
As #457 implicitly points out, there's no way to use this constructor overload when using the [InlineAutoData] attribute to annotate tests.
This constructor overload only exists to enable a derived class to supply a custom, derived instance of AutoDataAttribute, thereby customizing the behaviour of the derived attribute. Since this constructor overload only exists to enable inheritance, we should consider changing its accessibility level to protected.
This is a breaking change, so can only be done in AutoFixture 4.
Note that this ought to be done for both AutoFixture.Xunit and AutoFixture.Xunit2.
I'm going to take a look at this, and (hopefully) open a Pull Request over the next couple of days.
I think we can now close this, since #462 is merged.
Addressed by 5ac109164aa212c5963d620d3a4a166ebbe1096f
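The inheritance-only constructor idea above can be sketched with a TypeScript analog. All names here are hypothetical stand-ins: AutoFixture itself is C#, and its real API differs.

```typescript
// Analog of the pattern: the constructor that takes the customization
// dependency is protected, so it exists only to enable inheritance.
class AutoData {
  create(): number[] {
    return [1, 2, 3]; // stand-in for auto-generated test values
  }
}

class InlineAutoData {
  protected constructor(private readonly autoData: AutoData) {}

  // Public entry point, mirroring the overload that stays usable
  // when annotating tests directly.
  static standard(): InlineAutoData {
    return new InlineAutoData(new AutoData());
  }

  values(): number[] {
    return this.autoData.create();
  }
}

// A derived class can still supply a custom AutoData through the
// protected constructor, customizing behaviour via inheritance only.
class CustomAutoData extends AutoData {
  create(): number[] {
    return [42];
  }
}

class CustomInlineAutoData extends InlineAutoData {
  constructor() {
    super(new CustomAutoData());
  }
}
```

Outside code cannot call the protected constructor directly, which is exactly the accessibility change proposed for the C# attribute.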
|
gharchive/issue
| 2015-10-31T08:34:04 |
2025-04-01T04:54:43.848543
|
{
"authors": [
"moodmosaic",
"ploeh"
],
"repo": "AutoFixture/AutoFixture",
"url": "https://github.com/AutoFixture/AutoFixture/issues/458",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1658307925
|
DQ issue: dq_report is unable to identify IDs that need to be removed, and Fix_dq should remove them
The algorithm needs to remove ID-like features.
Hi @GDGauravDutta 👍
Thanks for trying out pandas_dq. I have made some bug fixes already. Can you please upgrade and then check?
pip install pandas_dq --ignore-installed --no-cache-dir
If you see version 1.7 or higher, then you have the right version.
Thanks
Auto Vimal team
It now removes ID-like features.
|
gharchive/issue
| 2023-04-07T03:19:54 |
2025-04-01T04:54:43.855232
|
{
"authors": [
"AutoViML",
"GDGauravDutta"
],
"repo": "AutoViML/pandas_dq",
"url": "https://github.com/AutoViML/pandas_dq/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
468260195
|
Handle CTRL-C gracefully in WP CLI
Description
We need to handle when a user cancels a long-running WP CLI command
Forward the Ctrl-C character (x03) to the command runner
Call an endpoint in the API to mark the command as cancelled (and do any other cleanup)
There are a few cases we need to test to make sure that the command cancel logic is run correctly on Ctrl-C/SIGINT:
Standard commands
Interactive command (such as user create --prompt, help)
Pseudo shell commands
wp shell
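The cancellation flow described above can be sketched roughly as follows. The runner and API interfaces here are hypothetical stand-ins for illustration, not the actual Automattic/vip code.

```typescript
// Sketch of the Ctrl-C/SIGINT handling: forward the ETX character to the
// command runner, then tell the API to mark the command cancelled.
const CTRL_C = "\x03"; // the Ctrl-C (ETX) character mentioned above

interface CommandRunner {
  write(data: string): void; // sends input to the remote command
}

interface CancelApi {
  markCancelled(commandId: string): void; // server-side cleanup hook
}

function handleSigint(
  commandId: string,
  runner: CommandRunner,
  api: CancelApi
): void {
  runner.write(CTRL_C);         // forward Ctrl-C to the running command
  api.markCancelled(commandId); // mark cancelled and trigger cleanup
}
```

In a real CLI this would be registered via `process.on("SIGINT", ...)`, and the interactive and `wp shell` cases listed above would need the same forwarding path.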
|
gharchive/issue
| 2019-07-15T18:00:40 |
2025-04-01T04:54:44.105600
|
{
"authors": [
"dchymko"
],
"repo": "Automattic/vip",
"url": "https://github.com/Automattic/vip/issues/416",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
278980994
|
SignalR throws an error when reconnecting, even when it succeeds
Hi,
I have an issue: when I connect to the server with SignalR, lock the screen, and unlock it, HubConnection.error is fired with the error "NSURLErrorDomain" - code: 4294966291, but the library is able to reconnect successfully.
The library https://github.com/DyKnow/SignalR-ObjC doesn't have the same behaviour.
When the connection is lost, it tries to reconnect, and only if the reconnect attempt fails does it throw an error; that seems like the right behaviour to me.
The problem is that when an error happens I show a different screen to the user, but this isn't actually an error, only a temporary loss of connectivity, and I don't have a reliable way to differentiate this error from the others.
Is this a bug?
Here is the log it prints when unlocking the screen in case it helps.
[] nw_socket_get_input_frames recvmsg(fd 4, 1024 bytes): [57] Socket is not connected
[] nw_socket_get_input_frames recvmsg(fd 9, 1024 bytes): [57] Socket is not connected
[] nw_socket_get_input_frames recvmsg(fd 11, 1024 bytes): [57] Socket is not connected
[] nw_socket_get_input_frames recvmsg(fd 12, 1024 bytes): [57] Socket is not connected
[] nw_socket_get_input_frames recvmsg(fd 13, 1024 bytes): [57] Socket is not connected
[] nw_endpoint_handler_add_write_request [2.1 107.22.161.209:443 failed socket-flow (satisfied)] cannot accept write requests
[] nw_endpoint_handler_add_write_request [5.1 13.74.158.5:443 failed socket-flow (satisfied)] cannot accept write requests
[] tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument
[] tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument
[] nw_endpoint_handler_add_write_request [7.1 13.74.158.5:443 failed socket-flow (satisfied)] cannot accept write requests
[] nw_endpoint_handler_add_write_request [6.1 13.74.158.5:443 failed socket-flow (satisfied)] cannot accept write requests
[] nw_endpoint_handler_add_write_request [4.1 13.74.158.5:443 failed socket-flow (satisfied)] cannot accept write requests
[] tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument
[] tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument
[] tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument
I've tried to reproduce the issue described above, but no such luck.
The HubConnection.error callback is not invoked after the screen has been unlocked. I don't know what I'm doing wrong.
By the way, there is a workaround: you can handle the "lock screen" / "unlock screen" events manually:
private func subscribeAppLifeCycleNotifications()
{
let center = NotificationCenter.default
center.addObserver(self, selector: #selector(connect), name: .UIApplicationWillEnterForeground, object: nil)
center.addObserver(self, selector: #selector(disconnect), name: .UIApplicationDidEnterBackground, object: nil)
}
@objc private func connect()
{
// your code to connect
}
@objc private func disconnect()
{
// your code to disconnect
}
|
gharchive/issue
| 2017-12-04T12:36:41 |
2025-04-01T04:54:44.318269
|
{
"authors": [
"4brunu",
"vldalx"
],
"repo": "AutosoftDMS/SignalR-Swift",
"url": "https://github.com/AutosoftDMS/SignalR-Swift/issues/24",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1991361817
|
CSS variables contain unsupported values
none is being produced as a CSS value in the Design Tokens variables. This value is unsupported for multiple properties and should be 0 (assuming that none is the default/unchanged value in Figma).
Current examples:
--*-letter-spacing: none
--*-paragraph-spacing: none
text-case (CSS text-transform) has a none value. This is valid, but it should probably be initial instead.
Gotcha. We'll double back with UX to confirm 0 is what is desired for those.
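As a hedged illustration of the proposed fix (the token names below are placeholders, not Availity's actual token names), the generated variables might become:

```css
:root {
  /* before: --placeholder-letter-spacing: none;  (invalid for letter-spacing) */
  --placeholder-letter-spacing: 0;
  --placeholder-paragraph-spacing: 0;
  /* text-transform: none is valid, but initial better signals "unset" */
  --placeholder-text-case: initial;
}
```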
|
gharchive/issue
| 2023-11-13T19:47:46 |
2025-04-01T04:54:44.320412
|
{
"authors": [
"LauRoxx",
"brycehowitson"
],
"repo": "Availity/element",
"url": "https://github.com/Availity/element/issues/138",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2076260910
|
OnPlatform should allow multiple values in each branch for collection setters
To Reproduce
Steps to reproduce the behavior:
<Application.Styles>
<OnPlatform>
<On Options="Android">
<themes:MaterialTheme BaseTheme="Dark" PrimaryColor="Purple" SecondaryColor="Lime" />
</On>
<On Options="Default">
<FluentTheme /> <!-- will fail after this line -->
<StyleInclude Source="avares://Avalonia.Controls.DataGrid/Themes/Fluent.xaml"/>
<StyleInclude Source="avares://XPos/Assets/Icons.axaml" />
</On>
</OnPlatform>
</Application.Styles>
Environment
OS: any
Avalonia-Version: 11.0.6
I've had the same problem. I solved it like this:
<Application.Styles>
<OnPlatform>
<On Options="Android">
<themes:MaterialTheme BaseTheme="Dark" PrimaryColor="Purple" SecondaryColor="Lime" />
</On>
<On Options="Default">
<Styles>
<FluentTheme />
<StyleInclude Source="avares://Avalonia.Controls.DataGrid/Themes/Fluent.xaml"/>
<StyleInclude Source="avares://XPos/Assets/Icons.axaml" />
</Styles>
</On>
</OnPlatform>
</Application.Styles>
|
gharchive/issue
| 2024-01-11T10:33:57 |
2025-04-01T04:54:44.323296
|
{
"authors": [
"maxkatz6",
"workgroupengineering"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/14172",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2361365164
|
D3D11CreateDevice+Hardware crashes and fallback doesn't work
@Sewer56 looks like this PR breaks Angle/DX on my machine:
[OpenGL]Unable to initialize ANGLE-based rendering with DirectX11 : 'System.ArgumentException: Value does not fall within the expected range.
at Avalonia.Win32.DirectX.DirectXUnmanagedMethods.D3D11CreateDevice(IntPtr adapter, D3D_DRIVER_TYPE DriverType, IntPtr Software, UInt32 Flags, D3D_FEATURE_LEVEL[] pFeatureLevels, UInt32 FeatureLevels, UInt32 SDKVersion, IntPtr& ppDevice, D3D_FEATURE_LEVEL& pFeatureLevel, IntPtr* ppImmediateContext)
at Avalonia.Win32.OpenGl.Angle.AngleWin32EglDisplay.CreateD3D11Device(IDXGIAdapter1 chosenAdapter, D3D_FEATURE_LEVEL[] featureLevels) in E:\Work\Projects\AvaloniaCopy1\src\Windows\Avalonia.Win32\OpenGl\Angle\AngleWin32EglDisplay.cs:line 175
at Avalonia.Win32.OpenGl.Angle.AngleWin32EglDisplay.CreateD3D11Display(Win32AngleEglInterface egl) in E:\Work\Projects\AvaloniaCopy1\src\Windows\Avalonia.Win32\OpenGl\Angle\AngleWin32EglDisplay.cs:line 106
at Avalonia.Win32.OpenGl.Angle.D3D11AngleWin32PlatformGraphics.TryCreate(Win32AngleEglInterface egl) in E:\Work\Projects\AvaloniaCopy1\src\Windows\Avalonia.Win32\OpenGl\Angle\D3D11AngleWin32PlatformGraphics.cs:line 77'
[OpenGL]Unknown requested PlatformApi 'DirectX11'
There are a couple of problems:
D3D11CreateDevice+HARDWARE fails here with an invalid argument. I don't really know why.
D3D11CreateDevice+SOFTWARE fails too, but UNKNOWN works fine.
D3D11CreateDevice return type should be "int" (non-void), and PreserveSig = false should be removed, if you want to implement fallback logic.
Before each fallback, there should be a warning log message, so developers can easier find this problem. Something like:
Logger.TryGet(LogEventLevel.Warning, LogArea.Win32Platform)?.Log(null, "Unable to create hardware ID3D11Device, error code = {ErrorCode}", $"0x{result:X}")
Originally posted by @maxkatz6 in https://github.com/AvaloniaUI/Avalonia/issues/16035#issuecomment-2177788039
This is technically fixed by
https://github.com/AvaloniaUI/Avalonia/pull/16063
And the feedback was implemented in:
https://github.com/Sewer56/Avalonia/commit/ec04ffacc56a5484c10ece72e9f1685d606bc9c0
As per https://github.com/AvaloniaUI/Avalonia/pull/16035#issuecomment-2177884713, I'm not sure what the current course of action should be.
|
gharchive/issue
| 2024-06-19T05:39:55 |
2025-04-01T04:54:44.327813
|
{
"authors": [
"Sewer56",
"maxkatz6"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/16062",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2437591212
|
FontWeight.ExtraBold is rendered heavier since 11.1
Describe the bug
I'm using FontWeight.ExtraBold in an application. After upgrading from 11.0.10 to 11.1.1, the text is rendered way heavier, and is harder to read.
To Reproduce
See attached solution. Set all avalonia package versions to 11.0.10 to see the old behavior. Upgrade to 11.1.1 or (at this moment) latest preview version to see the changed rendering.
AvaloniaApplication1Changed.zip
Expected behavior
I didn't expect any changes, or I would have expected a breaking change notice in the release notes somewhere.
Old behavior:
New behavior:
Avalonia version
11.1.1
OS
Windows
Additional context
No response
I'm using Avalonia version 11.1.3 and encountered a similar issue. I'm not sure if it's the same bug. From the image, it seems that the font weights 600 and 700 are swapped, and the same issue appears between 800 and 900 as well.
<Grid RowDefinitions="Auto Auto Auto Auto Auto Auto Auto Auto Auto">
<Grid.Styles>
<Style Selector="TextBlock">
<Setter Property="FontFamily" Value="Arial"></Setter>
<Setter Property="FontSize" Value="28"></Setter>
</Style>
</Grid.Styles>
<TextBlock Grid.Row="0" Grid.Column="0" FontWeight="100">FontWeight 100</TextBlock>
<TextBlock Grid.Row="1" Grid.Column="0" FontWeight="200">FontWeight 200</TextBlock>
<TextBlock Grid.Row="2" Grid.Column="0" FontWeight="300">FontWeight 300</TextBlock>
<TextBlock Grid.Row="3" Grid.Column="0" FontWeight="400">FontWeight 400</TextBlock>
<TextBlock Grid.Row="4" Grid.Column="0" FontWeight="500">FontWeight 500</TextBlock>
<TextBlock Grid.Row="5" Grid.Column="0" FontWeight="600">FontWeight 600</TextBlock>
<TextBlock Grid.Row="6" Grid.Column="0" FontWeight="700">FontWeight 700</TextBlock>
<TextBlock Grid.Row="7" Grid.Column="0" FontWeight="800">FontWeight 800</TextBlock>
<TextBlock Grid.Row="8" Grid.Column="0" FontWeight="900">FontWeight 900</TextBlock>
</Grid>
|
gharchive/issue
| 2024-07-30T11:21:48 |
2025-04-01T04:54:44.333586
|
{
"authors": [
"genment",
"mterwoord"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/16537",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2742395908
|
Sorting issues on datagrid with user control as data template
Describe the bug
I have a datagrid with templated columns, one of which being a user control.
That user control doesn't seem to sort properly with the rest of the row:
Sample project demonstrating the issue:
Avalonia.Samples-main - gridbug.zip
To Reproduce
From the sample project, fiddle a few times with the tag column header sorting.
Expected behavior
No response
Avalonia version
11.2.2
OS
Windows
Additional context
No response
The issue seems to stem from the fact that I'm adding the textboxes programmatically in the TagManagerUserControl UC. The column sorts correctly when using an ItemsControl.
Avalonia.Samples-main - gridbug.zip
[^Demo project using itemscontrol]
Why do you need to add controls programmatically? I highly suggest you use DataTemplates. If not, at least add them in an OnDataContextChanged override to ensure the correct DataContext is available.
|
gharchive/issue
| 2024-12-16T13:27:09 |
2025-04-01T04:54:44.338365
|
{
"authors": [
"giacarrea",
"timunie"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/17785",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
505782526
|
Carousel gestures
v0.8.999
I'm working on adding a swipe gesture to the Carousel control, but I have discovered that the control overrides OnKeyDown and OnPointerPressed.
/// <inheritdoc/>
protected override void OnKeyDown(KeyEventArgs e)
{
// Ignore key presses.
}
/// <inheritdoc/>
protected override void OnPointerPressed(PointerPressedEventArgs e)
{
// Ignore pointer presses.
}
The OnPointerPressed override prevents gestures from working on the Carousel control.
Is there a good reason why these methods are overridden like this?
I can't remember 100%, but I think it's to prevent the default keyboard behavior from SelectingItemsControl from happening. It could be that we should either:
Move the keyboard behavior to a derived class. This would mean that keyboard handling would be implemented in ListBox, ComboBox etc separately
Simply set e.Handled in these events in Carousel
|
gharchive/issue
| 2019-10-11T10:55:20 |
2025-04-01T04:54:44.341026
|
{
"authors": [
"aguahombre",
"grokys"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/3098",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
930206197
|
Clipping Geometry on Apple M1
Geometry clipping does not work correctly on Apple M1
I think something is wrong with SkiaGpuRenderTarget. Everything is OK if With(new AvaloniaNativePlatformOptions { UseGpu = false }) is added in Program.cs.
We have the same issue. It hurts. @vmelnikov thanks for the solution!
I think an update of the SkiaSharp skia version to 90+ (for example) will fix this... The current version is 80; it was published before the M1 Mac release, AFAIK. A later skia version may fix this issue. Let's wait for a new release from @mattleibow
@Mikolaytis Sadly the latest version, 2.88.0-preview.127 on .net6.0 native M1, doesn't fix the problem
@BAndysc wow, thanks a lot for the very useful information. Any thoughts on what could fix it?
I know there are two clipping options in Skia (antialiased and basic). Maybe let's try switching that option and check it out?
Any news on that?
Investigating this now. Minimal repro:
<Border Background="Yellow" Width="200" Height="200" ClipToBounds="True">
<Image Height="48" Margin="0,200,0,0">
<Image.Source>
<DrawingImage>
<GeometryDrawing Brush="Red" Geometry="F1M14.707,4.707L6,13.414 1.293,8.707 2.707,7.293 6,10.586 13.293,3.293z" />
</DrawingImage>
</Image.Source>
</Image>
Output on M1 Mac:
Definitely looks to be a Skia issue. If I add the following code at this line: https://github.com/AvaloniaUI/Avalonia/blob/master/src/Skia/Avalonia.Skia/DrawingContextImpl.cs#L185
Canvas.Clear(SKColors.Gold);
(i.e. immediately before drawing the geometry)
I get the following output. You can see that the .Clear() call uses the correct clip but the geometry is just ignoring it:
|
gharchive/issue
| 2021-06-25T14:04:01 |
2025-04-01T04:54:44.346634
|
{
"authors": [
"BAndysc",
"Mikolaytis",
"grokys",
"iMonZ",
"vmelnikov"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/6144",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1143426618
|
Rare exception in RelayCommand
Describe the bug
Sometimes RelayCommand crashes the application with the following stack trace.
Unhandled Exception: System.InvalidOperationException: Call from invalid thread
at Avalonia.Threading.Dispatcher.VerifyAccess() + 0x4c
at Avalonia.AvaloniaObject.GetValue[T](StyledPropertyBase`1) + 0x2e
at Avalonia.Controls.MenuItem.CanExecuteChanged(Object, EventArgs) + 0x30
at System.Reactive.AnonymousSafeObserver`1.OnNext(T) + 0x28
at System.Reactive.Subjects.FastImmediateObserver`1.EnsureActive(Int32) + 0x334
at System.Reactive.Subjects.ReplaySubject`1.ReplayBase.OnNext(T) + 0xc0
at System.Reactive.Linq.ObservableImpl.CombineLatest`3._.SecondObserver.OnNext(TSecond) + 0x7d
at System.Reactive.Subjects.FastImmediateObserver`1.EnsureActive(Int32) + 0x334
at System.Reactive.Subjects.ReplaySubject`1.ReplayBase.OnNext(T) + 0xc0
at System.Reactive.SafeObserver`1.WrappingSafeObserver.OnNext(TSource) + 0x2b
at System.Reactive.ObserveOnObserverLongRunning`1.Drain() + 0xa6
at AvaloniaCoreRTDemo!<BaseAddress>+0x1693985
at AvaloniaCoreRTDemo!<BaseAddress>+0x1693d69
To Reproduce
I do not have reliable repro steps. This happens occasionally and is non-deterministic. It seems to be easier to repro on the first or maybe second launch, and then it cannot be reproduced.
Clone https://github.com/teobugslayer/AvaloniaCoreRTDemo on Linux
Run application using dotnet run
Enter text in text box
Open Help menu
Press Exit. Crash! Boom! Bang! happens here.
Expected behavior
The application should reliably not crash.
Desktop (please complete the following information):
OS: Linux Gentoo (but seems to be unrelated)
Version 0.10.8
Additional context
Add any other context about the problem here.
Any updates on this? I am getting a very similar thing with Avalonia 11 whenever I call CanExecuteChanged?.Invoke(this, EventArgs.Empty) on my implementation of ICommand. Though in my case it calls into Avalonia.Controls.dll!Avalonia.Controls.Button.CanExecuteChanged(object sender, System.EventArgs e) and throws at Avalonia.Base.dll!Avalonia.Threading.Dispatcher.VerifyAccess.__ThrowVerifyAccess|16_0().
Shouldn't Avalonia take care of automatically delegating all such calls to the UI thread by queuing them up, instead of trying to execute them on the calling thread?
No, you need to use Dispatcher.UIThread.Post if you want to invalidate the command from an async task or similar
No, you need to use Dispatcher.UIThread.Post
Is this true of raising the CollectionChanged event too? Or is this only particular to ICommand?
All events that should be consumed by the UI. CollectionChanged and also PropertyChanged
@kant2002 can you double-check if the Dispatcher also helps in your situation? If so, we can close this as by design.
All events that should be consumed by the UI. CollectionChanged and also PropertyChanged
Thanks for the quick response. This is rather unfortunate and, I am not sure, correct. How would any "observable" collection (like the built-in System.Collections.ObjectModel.ObservableCollection, or any custom one) know anything at all about the UI thread or Avalonia? They wouldn't - in fact, ObservableCollection simply raises an event whenever it changes, and it is up to the event subscriber, e.g. an Avalonia ListBox, to process the event on the correct thread.
Does it then follow that all collection or observable-object manipulations that could raise either CollectionChanged or PropertyChanged events subscribed to by Avalonia must be done on the UI thread? This approach could easily exhaust the UI thread. Wouldn't it be better if Avalonia, at some lower level, automatically dispatched the processing of such events to the UI thread? Or am I misunderstanding something?
That's why .Add or .Remove should only happen on the UI thread. The same goes for other UI libs like WPF. If you look into async collections, I can say that DynamicData (shipped with ReactiveUI) can handle this via .ObserveOn
https://www.reactiveui.net/docs/handbook/collections/
|
gharchive/issue
| 2022-02-18T17:07:43 |
2025-04-01T04:54:44.355858
|
{
"authors": [
"fitdev",
"kant2002",
"timunie"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/7644",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1713994323
|
Fixes Issue #6263
What does the pull request do?
What is the current behavior?
What is the updated/expected behavior with this PR?
How was the solution implemented (if it's not obvious)?
Checklist
[ ] Added unit tests (if possible)?
[ ] Added XML documentation to any related classes?
[ ] Consider submitting a PR to https://github.com/AvaloniaUI/Documentation with user documentation
Breaking changes
Obsoletions / Deprecations
Fixed issues
Fixes issue #6263
You can test this PR using the following package version. 11.0.999-cibuild0034924-beta. (feed url: https://pkgs.dev.azure.com/AvaloniaUI/AvaloniaUI/_packaging/avalonia-all/nuget/v3/index.json) [PRBUILDID]
You can test this PR using the following package version. 11.0.999-cibuild0035053-beta. (feed url: https://pkgs.dev.azure.com/AvaloniaUI/AvaloniaUI/_packaging/avalonia-all/nuget/v3/index.json) [PRBUILDID]
You can test this PR using the following package version. 11.0.999-cibuild0035206-beta. (feed url: https://pkgs.dev.azure.com/AvaloniaUI/AvaloniaUI/_packaging/avalonia-all/nuget/v3/index.json) [PRBUILDID]
Just encountered a similar problem in #11626, which I think this PR should also fix, but I'm kinda not sure about always scheduling the scroll on the dispatcher, though it might not be a problem in practice. Need a little time to think about potential fixes.
|
gharchive/pull-request
| 2023-05-17T14:05:21 |
2025-04-01T04:54:44.362248
|
{
"authors": [
"avaloniaui-team",
"grokys",
"workgroupengineering"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/pull/11418",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
607023156
|
Allow for showing child windows
What does the pull request do?
Adds a concept and implementation of window ownership.
What is the current behavior?
WindowBase.Owner is simply ignored.
What is the updated/expected behavior with this PR?
You can use Window.ShowChild on desktop platforms to show child windows.
On Win32 you can open child windows, and closing the parent window will close all child windows.
On X11 and OSX, while child windows are open you won't be able to close the parent window. This might be addressed in the future.
How was the solution implemented (if it's not obvious)?
Checklist
[ ] Added unit tests (if possible)?
[ ] Added XML documentation to any related classes?
[ ] Consider submitting a PR to https://github.com/AvaloniaUI/Avaloniaui.net with user documentation
Depends: https://github.com/AvaloniaUI/Avalonia/pull/3867
What happens when you try to show as a child with a parent that is not visible?
Good question, I've checked WPF and this PR - both just show child window normally. WPF has one quirk - you must show parent window at least once before assigning it as Owner.
WPF Specification:
If an owner window is minimized, all its owned windows are minimized as well.
If an owned window is minimized, its owner is not minimized.
If an owner window is maximized, both the owner window and its owned windows are restored.
An owner window can never cover an owned window.
Owned windows that were not opened using ShowDialog are not modal. The user can still interact with the owner window.
If you close an owner window, its owned windows are also closed.
If an owned window was opened by its owner window using Show, and the owner window is closed, the owned window's Closing event is not raised.
WPF Specification Win32:
[x] If an owner window is minimized, all its owned windows are minimized as well.
[ ] If an owned window is minimized, its owner is not minimized.
[ ] If an owner window is maximized, both the owner window and its owned windows are restored.
[x] An owner window can never cover an owned window.
[ ] Owned windows that were not opened using ShowDialog are not modal. The user can still interact with the owner window.
[ ] If you close an owner window, its owned windows are also closed.
[ ] If an owned window was opened by its owner window using Show, and the owner window is closed, the owned window's Closing event is not raised.
WPF Specification X11:
[x] If an owner window is minimized, all its owned windows are minimized as well.
[ ] If an owned window is minimized, its owner is not minimized.
[ ] If an owner window is maximized, both the owner window and its owned windows are restored.
[x] An owner window can never cover an owned window.
[ ] Owned windows that were not opened using ShowDialog are not modal. The user can still interact with the owner window.
[ ] If you close an owner window, its owned windows are also closed.
[ ] If an owned window was opened by its owner window using Show, and the owner window is closed, the owned window's Closing event is not raised.
This is actually more complicated to get right cross-plat than anticipated and our shutdown/window close might need more work to support everything. Might revisit in the future.
I suggested to move the window bookkeeping into the windowing platform in the past. Maybe that makes things easier.
What part of your work didn't work out well?
Main issues were with closing parent windows and the cancellation of such. WPF, for instance, won't invoke any Closing callbacks when you close a parent window. For us, we wanted to make sure that we invoke Closing handlers for dialogs, which causes issues since Window.Close will just dispose the platform implementation, which on X11 and Win32 will just destroy the current window. On Win32 it will kill child windows but won't notify about closing; on X11 it will just leave child windows open.
I've tried to come up with a solution for this, but in the end I am not sure what the correct behavior is across the different platforms.
Thanks for your summary
bringing this back to life!
@danwalmsley @kekekeks Reworked this on top of the new API.
@MarchingCube wow much simpler now :) will test in the morning.
Final Tests:
[ ] Windows
[ ] OSX
[ ] Linux
|
gharchive/pull-request
| 2020-04-26T14:16:49 |
2025-04-01T04:54:44.375371
|
{
"authors": [
"Gillibald",
"MarchingCube",
"Sorien",
"danwalmsley"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/pull/3833",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
749559781
|
Add keyboard navigation to slider
cc @danwalmsley
Is there an ETA for this PR?
|
gharchive/pull-request
| 2020-11-24T10:02:03 |
2025-04-01T04:54:44.376627
|
{
"authors": [
"jmacato",
"workgroupengineering"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/pull/5100",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
347393958
|
Chinese version
Do you mind if I translate your infographics to Chinese?
If you really want to learn this, language is not a barrier.
Try Google Translate.
This is good; we can do this.
Hi, can I join the translation?
Keep going, brother, I believe in you.
How about creating a separate repo for the Chinese version?
Those willing to help with translation, please pick an infographic, translate it, and post it here.
@zhyongquan What do you say regarding this?
Hi, can I join the translation? @zhyongquan
Can I join in ?
Respect
Maybe I can give a hand.
Can I join in translating?
@Avik-Jain, so many people want to join the translation team that it's better to create a Chinese repo. I also need them to review the translations. Is that OK?
@Aaaaaaada @MOC99 @BelindaLI
Thanks everyone for your support. I've already translated a few pages, but they still need proofreading: I'm unsure about some technical terms, and the Python code also needs debugging. Later I'll create a Chinese repository and post the address here. We can divide the work, with each person translating a few pages and submitting them via pull request, then proofread each other's work. How does that sound?
It's a nice project. Good idea! If a Chinese repository is created later, I'd like to contribute as well.
@zhyongquan Keep it up, brother. No problem, this is great.
Sure, no problem. Looking forward to it, excitedly rubbing my hands~ Or how about we create a group chat? It would make dividing the work and communicating easier. @zhyongquan
Sounds good to me @zhyongquan. Creating a group is a good idea @Aaaaaaada
@zhyongquan Can I join the translation? I'm also self-studying machine learning, and I even created a 100-days-of-ML group around this repo, which has attracted some members who use it to study.
@zhyongquan I'd like to join the translation team~
@zhyongquan I have made a new repo. Sent you an invite. Kindly Check it out and update the repository.
@zhyongquan we can do it together,I'm glad to join in this project!
@zhyongquan I hope to join the translation team.
I'd like to join~
I also wanna join
1Chain4asCYNnLVbvG6pgCLGBrtzh4Lx4b. Ok tq plese add thiss
I wanna join too
I also want to join
@wengJJ The independent variable, dependent variable, observed values, and predicted values in the figure weren't translated.
@noeagles At the time I felt there was no need to translate that deep, but now I realize this is a beginners' tutorial, so I'll fix it tomorrow.
@goelo I'm interested in your study group—may I join it?
Is the Chinese repo up yet?
I'd like to join the group too.
https://github.com/MachineLearning100/100-Days-Of-ML-Code
@Avik-Jain we have a team at https://github.com/MachineLearning100/100-Days-Of-ML-Code. We have translated almost all of your content. Can you add a link to this Chinese version in the readme?
@KerwinChan @HeYDwane3 Add me on WeChat (xiaoxin5683yy) and I'll invite you. The group isn't limited to ML check-ins—anyone who keeps up their studies can check in.
Very nice~~
Great
That's great.
Win a $1,250 Cash Prize!
Hey! I saw this prize and thought of you! All you have to do is click on
the link to see why I think you’d love to win it — and then enter for a
chance to make it yours:
http://www.sweepon.com/sweepstakes/win-a-1-250-cash-prize-25508?utm_campaign=socialshare&utm_medium=email&utm_source=email
Thanks for sharing!
@cyancity
You can regard the zh version as a cache, which makes it unnecessary to visit Google Translate every time.
@Avik-Jain @zhyongquan where can we get the templates for info-graphics?
@samarthshukla I don't have a template; we use Photoshop to modify the pictures.
Thanks for sharing
Hope I can do some help for this project
May I join the group? That way I can check in daily, and also ask everyone when there's something I don't understand.
interesting ~~
It feels great that so many people are learning together.
I post one entry on Facebook every day. Many of you may be in mainland China and can't see Facebook, so I'll share today's content here.
"Day 17: Review"
Today the author asks everyone to review the past two weeks of study. And then he just ran off.
But I think I'm the one who learned the most—I've shared here quite a few times, which amounts to quite a few rounds of review practice. I hope everyone rehearses along in their heads too.
Feel free to discuss any questions at any time.
<What is machine learning>
I'll just keep coughing and keep recalling these two weeks.
The author's Day 1 was about preparing for data cleaning, which I don't think is a great start—he never told everyone what artificial intelligence and machine learning actually are.
How are these new AI/ML methods and achievements different from past technological development?
To borrow from Sebastian Thrun's conversation with Chris Anderson: in the past, if we wanted to program a cooking robot to handle four or five recipes, we might need a pile of IFs—
"if the temperature exceeds 80 degrees and the surface turns red, lower the temperature to 60 degrees", "if the red persists, set the temperature to 45 degrees"... That could take millions of lines of code and still be hard to debug.
AI/ML works on a different idea: you feed the machine data and let the system read that data many times, correcting its inaccuracies and errors. See Figure 1.
In Prof. Hung-yi Lee's terms, "ability" comes from innate genes plus later learning. An otter is born to "build dams" when it hears flowing water—that's like IFs. But a child can hear adults talking, make sounds, self-correct, and learn a language—different environments yield different languages. That is absorbing, trying, correcting, and learning from heard audio data. (Why do I feed myself English data every day and still can't speak English? Cries.)
Anyway, although the author didn't mention any of the above, on Day 15 he watched this—did you watch it too? Great content; just the slides alone are enough.
https://bloomberg.github.io/foml/#home
<Basic tools, basic procedure>
On Days 1–2 the author mentions using Python for machine learning, with numpy and pandas as the basic tools, but I suggest adding matplotlib as well.
The rest is: read the data with pd.read_csv, handle missing values, split into training and test data, standardize feature scales, and do One Hot Encoding.
Days 2–3 cover regression; besides introducing how values are estimated, they also hand over the procedure we'll keep using:
1.) As on Day 1: import libraries, load the data, split the data, clean the data, handle missing values; since this isn't a classification problem, skip One Hot Encoding.
2.) Second, process it with a model from the sklearn package
3.) Train with .fit
4.) Predict with .predict
The above pairs with the framework Prof. Hung-yi Lee describes—three steps (A):
1.) Look at the data, think about the problem, and pick a set of functions (the whole set is called the model). Above, that means calling LinearRegression from sklearn.
2.) Find the equation that decides how good or bad these functions are—for example, which gives the smaller error on the data, 2x + 2 or 2.3x + 2? In the author's steps 1–4 above, this step is hidden inside the sklearn model in step 2.), so you don't see it at .fit time. Once you get to deep learning with Keras, TensorFlow, etc., you'll have to spell out the loss function explicitly when fitting.
3.) Training—that's .fit
B) With the three steps done, go predict. The fine points of prediction can wait for later!
Remember: a written model must be .compile'd, then trained with .fit, and finally used to predict with .predict. If the model is taken straight from an API, no .compile is needed (the regression above is taken directly from sklearn, so no compile at all).
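The fit/predict workflow described above can be sketched in a few lines. Here is a minimal illustration with made-up toy data—plain NumPy solving the same least-squares problem a `LinearRegression.fit` would, not the course's actual code:

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise (illustrative only)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=x.size)

# Steps 1-2: choose the model family y' = w*x + b.
# Step 3 (.fit): solve the least-squares problem in closed form.
A = np.column_stack([x, np.ones_like(x)])
w, b = np.linalg.lstsq(A, y, rcond=None)[0]

# Step 4 (.predict): apply the learned parameters to a new input.
y_pred = w * 5.0 + b
print(round(w, 1), round(b, 1))  # close to the true 2.0 and 1.0
```

With sklearn, the same steps would be `model = LinearRegression()`, `model.fit(X, y)`, `model.predict(X_new)`.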
From Day 3 on, the author goes somewhat in circles learning Logistic Regression and SVM, with KNN mixed in along the way; I can't quite tell how well he actually learned it.
My posts from the past few days all gave basic explanations and supplemented the author's material—those with time can go back and take a look.
Let me add a figure (Figure 2); if you can understand it, you've really mastered this.
That is, for classification you use a Cross Entropy loss function; when we minimize the cost function mathematically via partial derivatives, the system can reach optimal or very good results. (Be clear about why RMSE won't do!)
In the end we design y' = sigmoid(w*x + b). We input x, get y', and check whether this computed y' differs much from the answer y. If it does, we adjust w and b, compute y' again, and see if it improved... and repeat.
This is not random guessing—there is a method, called Gradient Descent, which is the minimization via partial derivatives mentioned above.
When learning machine learning, spending time on Gradient Descent is unavoidable.
When studying Gradient Descent, be clear about what we differentiate with respect to and over which domain we optimize (we are minimizing the Loss, solving for w and b).
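As a concrete sketch of that loop—sigmoid, cross-entropy gradients, repeated updates of w and b—here is an illustrative NumPy version on a tiny made-up dataset (not the course's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny 1-D dataset: class 0 below zero, class 1 above zero
x = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])
y = np.array([0, 0, 0, 1, 1, 1], dtype=float)

w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    y_hat = sigmoid(w * x + b)
    # Gradients of the mean cross-entropy loss w.r.t. w and b
    dw = np.mean((y_hat - y) * x)
    db = np.mean(y_hat - y)
    w -= lr * dw        # the "adjust w, b, try again" step
    b -= lr * db

preds = (sigmoid(w * x + b) > 0.5).astype(int)
```

The gradients `dw` and `db` come out of exactly the partial derivatives of the cross-entropy loss mentioned above.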
SVM takes the view that cross entropy is still not good enough—better to use hinge loss, which makes the space sparse, so the result depends on only a few points. Those points are the support vectors (hence the name SVM).
See my elementary explanation: https://goo.gl/VBuQis
A quick look at the SVM derivation is enough, but the applications are very interesting: it's not just a classifier with a simple derivation—it can also use a kernel function to project into higher dimensions. Very fun, and very easy to implement with sklearn; see https://goo.gl/wsTLLt
And Figure 3: originally, as in the left plot, two features in 2D space cannot be separated into classes with a single straight line, but with
SVC(kernel ='poly', degree = 3)
it becomes the right plot—a clean cut. Happy.
Take a look at KNN—it's not hard: measure the distances from the sample to be classified to the existing points, take the nearest few, and assign whichever class appears most among them. For example, among points of known classes (1, 2, 3), find the 10 points nearest to a test point: 7 of class 1, 0 of class 2, 3 of class 3—so I judge it to be class 1. Case closed.
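That KNN rule fits in a few lines of NumPy. A toy sketch (made-up clusters, k=3; sklearn's KNeighborsClassifier does the same thing with more options):

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    # Distance from the query to every stored point
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    # Majority vote among the k nearest neighbours
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

train_x = np.array([[0, 0], [0, 1], [1, 0],   # class 0 cluster
                    [5, 5], [5, 6], [6, 5]])  # class 1 cluster
train_y = np.array([0, 0, 0, 1, 1, 1])
label = knn_predict(train_x, train_y, np.array([0.5, 0.5]))
```

A query near the first cluster gets label 0; one near the second gets label 1—exactly the majority vote described above.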
Also, on Day 8 the author covers the mathematical derivation of Logistic Regression; take a look when you have time, it's not too hard. If you have more time, I recommend looking up the approach that starts from probability and naive Bayes and derives Maximum Likelihood (ML)—also brilliant, even though it doesn't yield more accurate results.
Warming up on statistics and information theory should help when you learn GANs later; even if the derivations go over your head, you can still pick up the concepts and applications.
I get the feeling I'm mostly writing this for myself...
#ML_100days
Started Deep learning Specialization on Coursera | Day 17
Completed the whole Week 1 and Week 2 on a single day. Learned Logistic regression as a Neural Network.
@PatrickRuan You're welcome to submit contributions to the Chinese version: https://github.com/MachineLearning100/100-Days-Of-ML-Code
Do we have a study group (qun) on Wechat? I may not have bandwidth to
contribute, but would like to join the study group to keep up progress.:)
Thanks
Hi, @Avik-Jain , I am following your 100-Days-Of-ML-Code project, and it is very helpful for me as a beginner—thank you! I find the illustration pictures you created very beautiful, and I am curious how you create them. With Python or some other platform? They don't seem to be made with traditional Photoshop.
Would you share your experience of creating such beautiful illustration pictures with us? Thank you very much!
|
gharchive/issue
| 2018-08-03T13:21:02 |
2025-04-01T04:54:44.425790
|
{
"authors": [
"Aaaaaaada",
"AnnaXJGe",
"Avik-Jain",
"BeeSeeChain",
"BelindaLI",
"Genepsy",
"Great1414",
"HeYDwane3",
"JuneRR",
"KerwinChan",
"MOC99",
"PatrickRuan",
"Wjshang",
"addy086",
"an1006634493",
"andrewzrant",
"cyancity",
"eruisi",
"goelo",
"heliuphy",
"jiangwei007",
"kmalloc8",
"lichunhong2010",
"noeagles",
"samarthshukla",
"wengJJ",
"wizardforcel",
"wuchao5460",
"wxrapha",
"yunxiaoyin",
"yyong119",
"zhangyanqi92",
"zhyongquan"
],
"repo": "Avik-Jain/100-Days-Of-ML-Code",
"url": "https://github.com/Avik-Jain/100-Days-Of-ML-Code/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
921950210
|
Tag Repo
Description
How about supporting Tag Repo? I think it's a decent feature, but I never see anyone using it.
URL where the problem occurred
https://www.nicovideo.jp/my/follow/tag/大丈夫だ、問題ない/tagrepo
Environment
Chrome
From
NNS from GForm
#133
|
gharchive/issue
| 2021-06-16T00:51:03 |
2025-04-01T04:54:44.445337
|
{
"authors": [
"AyumuNekozuki"
],
"repo": "AyumuNekozuki/niconico-darkmode",
"url": "https://github.com/AyumuNekozuki/niconico-darkmode/issues/135",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
943292866
|
BMI calculator using Flask
Description
A simple BMI calculator.
Checklist
[x] I've been assigned an issue related to this PR.
[x] I've used beautifiers.
[x] I've added my Project's name and description to Index.md
[x] I've made a README.md file for my Project.
[x] The README.md file of my project contains Project title, Description, Use of project, Set up, Stack used and Output (Screenshots).
Related Issues or Pull Requests number
Fixes #184
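For reference, the core of such a calculator is one formula: BMI = weight in kilograms divided by height in metres squared. A minimal sketch (hypothetical helper, independent of this PR's actual code):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) / height (m) squared."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / (height_m ** 2)

# In a Flask view this would typically be called with form values, e.g.:
#   value = bmi(float(request.form["weight"]), float(request.form["height"]))
print(round(bmi(70, 1.75), 2))
```

The Flask route then just renders this value back into the template.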
Hey, @Ayushparikh-code please add the level label as well. Thanks.
|
gharchive/pull-request
| 2021-07-13T12:24:37 |
2025-04-01T04:54:44.448628
|
{
"authors": [
"tanvi355"
],
"repo": "Ayushparikh-code/Web-dev-mini-projects",
"url": "https://github.com/Ayushparikh-code/Web-dev-mini-projects/pull/201",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
258251859
|
feat: add authorize func
Checklist
[x] npm test passes
[x] tests and/or benchmarks are included
[x] documentation is changed or added
[x] commit message follows commit guidelines
Affected core subsystem(s)
Description of change
enable authorization_code mode
enable options for authenticate/token method
update document
update to 2.0.1
Codecov Report
Merging #9 into master will decrease coverage by 16.56%.
The diff coverage is 11.76%.
@@ Coverage Diff @@
## master #9 +/- ##
===========================================
- Coverage 92.75% 76.19% -16.57%
===========================================
Files 3 3
Lines 69 84 +15
===========================================
Hits 64 64
- Misses 5 20 +15
| Impacted Files | Coverage Δ | |
| - | - | - |
| lib/server.js | 75.38% <11.76%> (-22.62%) | :arrow_down: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8db48f9...c69d33a. Read the comment docs.
|
gharchive/pull-request
| 2017-09-16T19:06:34 |
2025-04-01T04:54:44.458966
|
{
"authors": [
"codecov-io",
"thonatos"
],
"repo": "Azard/egg-oauth2-server",
"url": "https://github.com/Azard/egg-oauth2-server/pull/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2086768270
|
chore(docs): Update lsp install instructions
Remove the recommendation to install nargo for syntax highlighting.
Add recommendation to install the LSP extensions and point it to aztec-nargo.
closes #4098
Merged Palla's PR on removing some tech debt around noir, which updated the doc and has caused a merge conflict.
My recommendation would be to merge from master and once again search for nargo.
Otherwise LGTM!!!!
@critesjosh fixed your merge conflicts - and enabling auto merge!
|
gharchive/pull-request
| 2024-01-17T18:40:41 |
2025-04-01T04:54:44.462784
|
{
"authors": [
"critesjosh",
"rahul-kothari"
],
"repo": "AztecProtocol/aztec-packages",
"url": "https://github.com/AztecProtocol/aztec-packages/pull/4110",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2206295381
|
feat: Throw by default when awaiting a tx that reverted
A tx that returns from a tx.wait() is then guaranteed to have succeeded. Otherwise, we were having txs that failed silently, since we were not manually checking the receipt status after every action.
Benchmark results
No metrics with a significant change found.
Detailed results
All benchmarks are run on txs on the Benchmarking contract on the repository. Each tx consists of a batch call to create_note and increment_balance, which guarantees that each tx has a private call, a nested private call, a public call, and a nested public call, as well as an emitted private note, an unencrypted log, and public storage read and write.
This benchmark source data is available in JSON format on S3 here.
Values are compared against data from master at commit 933145e8 and shown if the difference exceeds 1%.
L2 block published to L1
Each column represents the number of txs on an L2 block published to L1.
| Metric | 8 txs | 32 txs | 64 txs |
| - | - | - | - |
| l1_rollup_calldata_size_in_bytes | 676 | 676 | 676 |
| l1_rollup_calldata_gas | 6,400 | 6,364 (-1%) | 6,424 |
| l1_rollup_execution_gas | 585,733 | 585,697 | 585,757 |
| l2_block_processing_time_in_ms | 1,286 (-1%) | 4,573 (-3%) | 8,634 (-6%) |
| note_successful_decrypting_time_in_ms | 174 (-6%) | 519 (-1%) | 976 (+1%) |
| note_trial_decrypting_time_in_ms | 77.6 (-14%) | 33.4 (-16%) | 112 (+18%) |
| l2_block_building_time_in_ms | 13,560 (-3%) | 50,103 (-3%) | 99,629 (-2%) |
| l2_block_rollup_simulation_time_in_ms | 7,757 (-3%) | 27,227 (-3%) | 54,004 (-2%) |
| l2_block_public_tx_process_time_in_ms | 5,783 (-2%) | 22,828 (-2%) | 45,532 (-2%) |
L2 chain processing
Each column represents the number of blocks on the L2 chain where each block has 16 txs.
| Metric | 5 blocks | 10 blocks |
| - | - | - |
| node_history_sync_time_in_ms | 14,037 | 26,844 (+1%) |
| note_history_successful_decrypting_time_in_ms | 1,307 (+4%) | 2,520 (+4%) |
| note_history_trial_decrypting_time_in_ms | 132 (+46%) | 207 (+75%) |
| node_database_size_in_bytes | 18,616,400 | 34,869,328 |
| pxe_database_size_in_bytes | 29,859 | 59,414 |
Circuits stats
Stats on running time and I/O sizes collected for every circuit run across all benchmarks.
| Circuit | circuit_simulation_time_in_ms | circuit_input_size_in_bytes | circuit_output_size_in_bytes |
| - | - | - | - |
| private-kernel-init | 237 (+1%) | 44,377 | 28,214 |
| private-kernel-ordering | 210 (+1%) | 52,880 | 14,296 |
| base-parity | 4,559 (-4%) | 128 | 311 |
| root-parity | 1,704 (+6%) | 1,244 | 311 |
| base-rollup | 17,757 (-1%) | 165,760 | 861 |
| root-rollup | 49.8 | 4,359 | 725 |
| private-kernel-inner | 308 (+1%) | 73,794 | 28,214 |
| public-kernel-app-logic | 126 (+1%) | 35,251 | 28,217 |
| public-kernel-tail | 170 (+1%) | 40,928 | 28,217 |
| merge-rollup | 8.21 (-11%) | 2,568 | 861 |
Tree insertion stats
The duration to insert a fixed batch of leaves into each tree type.
| Metric | 1 leaves | 16 leaves | 64 leaves | 128 leaves | 512 leaves | 1024 leaves | 2048 leaves | 4096 leaves | 32 leaves |
| - | - | - | - | - | - | - | - | - | - |
| batch_insert_into_append_only_tree_16_depth_ms | 9.99 | 15.8 (-1%) | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_16_depth_hash_count | 16.8 | 31.6 | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_16_depth_hash_ms | 0.582 | 0.489 (-1%) | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_32_depth_ms | N/A | N/A | 47.3 (+2%) | 71.5 (-1%) | 230 | 448 (+1%) | 836 (-3%) | 1,665 (-3%) | N/A |
| batch_insert_into_append_only_tree_32_depth_hash_count | N/A | N/A | 96.0 | 159 | 543 | 1,055 | 2,079 | 4,127 | N/A |
| batch_insert_into_append_only_tree_32_depth_hash_ms | N/A | N/A | 0.484 (+1%) | 0.441 (-1%) | 0.420 | 0.419 (+1%) | 0.399 (-3%) | 0.400 (-3%) | N/A |
| batch_insert_into_indexed_tree_20_depth_ms | N/A | N/A | 53.4 (-3%) | 105 (-1%) | 329 (-3%) | 666 (+1%) | 1,263 (-3%) | 2,528 (-3%) | N/A |
| batch_insert_into_indexed_tree_20_depth_hash_count | N/A | N/A | 104 | 207 | 691 | 1,363 | 2,707 | 5,395 | N/A |
| batch_insert_into_indexed_tree_20_depth_hash_ms | N/A | N/A | 0.475 (-3%) | 0.474 (-1%) | 0.448 (-3%) | 0.457 (+1%) | 0.439 (-3%) | 0.440 (-2%) | N/A |
| batch_insert_into_indexed_tree_40_depth_ms | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 61.2 |
| batch_insert_into_indexed_tree_40_depth_hash_count | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 109 |
| batch_insert_into_indexed_tree_40_depth_hash_ms | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 0.534 |
Miscellaneous
Transaction sizes based on how many contract classes are registered in the tx.
| Metric | 0 registered classes | 1 registered classes |
| - | - | - |
| tx_size_in_bytes | 14,966 | 494,914 |
Transaction processing duration by data writes.
| Metric | 0 new note hashes | 1 new note hashes |
| - | - | - |
| tx_pxe_processing_time_ms | 2,377 (+1%) | 1,407 (+1%) |
| Metric | 0 public data writes | 1 public data writes |
| - | - | - |
| tx_sequencer_processing_time_ms | 14.4 (-12%) | 732 (+1%) |
|
gharchive/pull-request
| 2024-03-25T17:24:24 |
2025-04-01T04:54:44.515802
|
{
"authors": [
"AztecBot",
"spalladino"
],
"repo": "AztecProtocol/aztec-packages",
"url": "https://github.com/AztecProtocol/aztec-packages/pull/5431",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2497303271
|
chore: fix a bunch of generics issues in aztec-nr
This PR removes a bunch of unnecessary generics from the aztec-nr codebase as this is becoming a hard error in new versions of nargo.
Benchmark results
Metrics with a significant change:
avm_simulation_time_ms (Token:mint_public): 370 (+632%)
avm_simulation_time_ms (Token:transfer_public): 35.0 (+54%)
Detailed results
All benchmarks are run on txs on the Benchmarking contract on the repository. Each tx consists of a batch call to create_note and increment_balance, which guarantees that each tx has a private call, a nested private call, a public call, and a nested public call, as well as an emitted private note, an unencrypted log, and public storage read and write.
This benchmark source data is available in JSON format on S3 here.
Proof generation
Each column represents the number of threads used in proof generation.
| Metric | 1 threads | 4 threads | 16 threads | 32 threads | 64 threads |
| - | - | - | - | - | - |
| proof_construction_time_sha256_ms | 5,753 | 1,585 (+1%) | 712 | 774 (+2%) | 773 (-1%) |
| proof_construction_time_sha256_30_ms | 11,449 | 3,109 (+1%) | 1,380 | 1,429 (-1%) | 1,475 (+1%) |
| proof_construction_time_sha256_100_ms | 43,998 | 11,789 (-2%) | 5,461 | 5,407 (-2%) | 5,800 (+2%) |
| proof_construction_time_poseidon_hash_ms | 79.0 (+1%) | 34.0 | 34.0 | 57.0 | 88.0 |
| proof_construction_time_poseidon_hash_30_ms | 1,528 | 421 | 202 (-1%) | 229 | 267 (-1%) |
| proof_construction_time_poseidon_hash_100_ms | 5,637 | 1,509 | 672 | 735 (-1%) | 750 (+1%) |
L2 block published to L1
Each column represents the number of txs on an L2 block published to L1.
| Metric | 4 txs | 8 txs | 16 txs |
| - | - | - | - |
| l1_rollup_calldata_size_in_bytes | 4,356 | 7,876 | 14,884 |
| l1_rollup_calldata_gas | 50,208 | 93,008 | 178,144 |
| l1_rollup_execution_gas | 845,542 | 1,579,238 | 3,364,430 |
| l2_block_processing_time_in_ms | 258 (+2%) | 457 (+4%) | 849 (+6%) |
| l2_block_building_time_in_ms | 11,441 (+1%) | 22,373 | 44,715 (+1%) |
| l2_block_rollup_simulation_time_in_ms | 11,441 (+1%) | 22,373 | 44,715 (+1%) |
| l2_block_public_tx_process_time_in_ms | 9,767 (+1%) | 20,653 | 42,974 (+1%) |
L2 chain processing
Each column represents the number of blocks on the L2 chain where each block has 8 txs.
| Metric | 3 blocks | 5 blocks |
| - | - | - |
| node_history_sync_time_in_ms | 3,025 | 3,964 (+4%) |
| node_database_size_in_bytes | 12,640,336 | 16,695,376 |
| pxe_database_size_in_bytes | 16,254 | 26,813 |
Circuits stats
Stats on running time and I/O sizes collected for every kernel circuit run across all benchmarks.
| Circuit | simulation_time_in_ms | witness_generation_time_in_ms | input_size_in_bytes | output_size_in_bytes | proving_time_in_ms |
| - | - | - | - | - | - |
| private-kernel-init | 96.0 (+4%) | 394 (-1%) | 21,735 | 44,860 | N/A |
| private-kernel-inner | 191 (+7%) | 700 (-1%) | 72,544 | 45,007 | N/A |
| private-kernel-reset-tiny | 312 (-1%) | 716 | 65,593 | 44,846 | N/A |
| private-kernel-tail | 167 | 136 | 50,644 | 52,257 | N/A |
| base-parity | 5.59 (-1%) | N/A | 160 | 96.0 | N/A |
| root-parity | 35.8 (+1%) | N/A | 73,948 | 96.0 | N/A |
| base-rollup | 2,974 (+1%) | N/A | 189,136 | 664 | N/A |
| block-root-rollup | 41.5 | N/A | 58,205 | 2,448 | N/A |
| public-kernel-setup | 84.6 | N/A | 105,085 | 71,222 | N/A |
| public-kernel-app-logic | 97.6 | N/A | 104,911 | 71,222 | N/A |
| public-kernel-tail | 861 | N/A | 390,582 | 16,414 | N/A |
| private-kernel-reset-small | 310 | N/A | 66,341 | 45,629 | N/A |
| private-kernel-tail-to-public | 668 | 632 (+3%) | 455,400 | 1,825 | N/A |
| public-kernel-teardown | 84.2 | N/A | 105,349 | 71,222 | N/A |
| merge-rollup | 19.9 | N/A | 38,174 | 664 | N/A |
| undefined | N/A | N/A | N/A | N/A | 78,438 (-1%) |
Stats on running time collected for app circuits
| Function | input_size_in_bytes | output_size_in_bytes | witness_generation_time_in_ms |
| - | - | - | - |
| ContractClassRegisterer:register | 1,344 | 11,731 | 345 |
| ContractInstanceDeployer:deploy | 1,408 | 11,731 | 18.2 (-1%) |
| MultiCallEntrypoint:entrypoint | 1,920 | 11,731 | 407 (+1%) |
| FeeJuice:deploy | 1,376 | 11,731 | 391 (+1%) |
| SchnorrAccount:constructor | 1,312 | 11,731 | 74.2 (+1%) |
| SchnorrAccount:entrypoint | 2,336 | 11,731 | 394 |
| Token:privately_mint_private_note | 1,280 | 11,731 | 106 (+3%) |
| FPC:fee_entrypoint_public | 1,344 | 11,731 | 28.9 (+4%) |
| Token:transfer | 1,312 | 11,731 | 227 (-2%) |
| Benchmarking:create_note | 1,344 | 11,731 | 86.5 (-1%) |
| SchnorrAccount:verify_private_authwit | 1,280 | 11,731 | 27.7 |
| Token:unshield | 1,376 | 11,731 | 520 |
| FPC:fee_entrypoint_private | 1,376 | 11,731 | 690 (-1%) |
AVM Simulation
Time to simulate various public functions in the AVM.
| Function | time_ms | bytecode_size_in_bytes |
| - | - | - |
| FeeJuice:_increase_public_balance | 56.7 (-2%) | 8,174 |
| FeeJuice:set_portal | 11.2 (-6%) | 4,055 |
| Token:constructor | 81.5 (-2%) | 29,082 |
| FPC:constructor | 55.0 (-2%) | 18,940 |
| FeeJuice:mint_public | 46.1 (+12%) | 6,522 |
| Token:mint_public | :warning: 370 (+632%) | 12,704 |
| Token:assert_minter_and_mint | 319 (-1%) | 8,467 |
| AuthRegistry:set_authorized | 38.6 (-21%) | 4,194 |
| FPC:prepare_fee | 235 (-2%) | 6,747 |
| Token:transfer_public | :warning: 35.0 (+54%) | 39,863 |
| FPC:pay_refund | 52.6 (-20%) | 9,398 |
| Benchmarking:increment_balance | 1,224 | 7,263 |
| Token:_increase_public_balance | 42.7 (+2%) | 8,686 |
| FPC:pay_refund_with_shielded_rebate | 63.3 (+2%) | 9,881 |
Public DB Access
Time to access various public DBs.
| Function | time_ms |
| - | - |
| get-nullifier-index | 0.155 (-2%) |
Tree insertion stats
The duration to insert a fixed batch of leaves into each tree type.
| Metric | 1 leaves | 16 leaves | 64 leaves | 128 leaves | 256 leaves | 512 leaves | 1024 leaves |
| - | - | - | - | - | - | - | - |
| batch_insert_into_append_only_tree_16_depth_ms | 2.19 (+1%) | 3.95 (+2%) | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_16_depth_hash_count | 16.8 | 31.7 | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_16_depth_hash_ms | 0.114 | 0.112 (+2%) | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_32_depth_ms | N/A | N/A | 11.3 (+2%) | 17.7 (+2%) | 31.0 (+1%) | 59.9 (+3%) | 117 (+4%) |
| batch_insert_into_append_only_tree_32_depth_hash_count | N/A | N/A | 95.9 | 159 | 287 | 543 | 1,055 |
| batch_insert_into_append_only_tree_32_depth_hash_ms | N/A | N/A | 0.108 (+1%) | 0.103 (+2%) | 0.101 (+1%) | 0.103 (+2%) | 0.105 (+4%) |
| batch_insert_into_indexed_tree_20_depth_ms | N/A | N/A | 14.8 (+2%) | 25.9 (+2%) | 44.3 (+2%) | 87.9 (+8%) | 164 (+2%) |
| batch_insert_into_indexed_tree_20_depth_hash_count | N/A | N/A | 109 | 207 | 355 | 691 | 1,363 |
| batch_insert_into_indexed_tree_20_depth_hash_ms | N/A | N/A | 0.113 (+2%) | 0.104 (+1%) | 0.108 (+2%) | 0.109 (+8%) | 0.103 |
| batch_insert_into_indexed_tree_40_depth_ms | N/A | N/A | 16.9 (+2%) | N/A | N/A | N/A | N/A |
| batch_insert_into_indexed_tree_40_depth_hash_count | N/A | N/A | 132 | N/A | N/A | N/A | N/A |
| batch_insert_into_indexed_tree_40_depth_hash_ms | N/A | N/A | 0.108 (+2%) | N/A | N/A | N/A | N/A |
Miscellaneous
Transaction sizes based on how many contract classes are registered in the tx.
| Metric | 0 registered classes | 1 registered classes |
| - | - | - |
| tx_size_in_bytes | 64,779 | 668,997 |
Transaction size based on fee payment method
| Metric | |
| - | |
|
gharchive/pull-request
| 2024-08-30T13:50:31 |
2025-04-01T04:54:44.669834
|
{
"authors": [
"AztecBot",
"TomAFrench"
],
"repo": "AztecProtocol/aztec-packages",
"url": "https://github.com/AztecProtocol/aztec-packages/pull/8295",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
202696061
|
Missing assembly reference or using directive
This occurred for a lot of namespaces, such as 'Group' and 'Azure'. I've restored the NuGet packages and rebuilt the solutions, but it didn't solve the problem.
Please get the latest version, it works well for me.
|
gharchive/issue
| 2017-01-24T00:58:45 |
2025-04-01T04:54:44.678567
|
{
"authors": [
"VitorX",
"yfan183"
],
"repo": "Azure-Samples/active-directory-dotnet-graphapi-web",
"url": "https://github.com/Azure-Samples/active-directory-dotnet-graphapi-web/issues/49",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1936737653
|
Terraform only supports service principal authorization for azure
This issue is for a: (mark with an x)
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Minimal steps to reproduce
Using the "JJ" task very similar to this commit https://github.com/Azure-Samples/azure-devops-terraform-oidc-ci-cd/commit/2fd009feb533d9eaf08470416cfbd223adbf1433 but we're using Terraform cloud as our backend state store
Also set the runAzLogin: true parameter to the "JJ" tasks
Set user assigned managed identity with Contributor role to resource group
Set user assigned managed identity federation with Azure DevOps project
Terraform that's using the AzureRM resource provider
Any log messages given by the failure
Log message failure on pipeline run
##[error]Terraform only supports service principal authorization for azure
Expected/desired behavior
Terraform operates with managed identity federated credentials
OS and Version?
Linux (ubuntu-latest azure devops)
Versions
Version 1.0.6 of the Charles Zipp extension (which seems to be equal to the JJ extension as linked repo in marketplace is the same)
Terraform CLI 1.6.1
Terraform AzureRM 3.75.0
Mention any other details that might be useful
Logs from terraform plan stage
/opt/hostedtoolcache/terraform/1.6.1/x64/terraform version
2023-10-11T03:22:01.957Z [INFO] Terraform version: 1.6.1
2023-10-11T03:22:01.957Z [DEBUG] using github.com/hashicorp/go-tfe v1.34.0
2023-10-11T03:22:01.957Z [DEBUG] using github.com/hashicorp/hcl/v2 v2.18.1
2023-10-11T03:22:01.957Z [DEBUG] using github.com/hashicorp/terraform-svchost v0.1.1
2023-10-11T03:22:01.957Z [DEBUG] using github.com/zclconf/go-cty v1.14.1
2023-10-11T03:22:01.957Z [INFO] Go runtime version: go1.21.1
2023-10-11T03:22:01.957Z [INFO] CLI args: []string{"/opt/hostedtoolcache/terraform/1.6.1/x64/terraform", "version"}
2023-10-11T03:22:01.957Z [TRACE] Stdout is not a terminal
2023-10-11T03:22:01.957Z [TRACE] Stderr is not a terminal
2023-10-11T03:22:01.957Z [TRACE] Stdin is not a terminal
2023-10-11T03:22:01.957Z [DEBUG] Attempting to open CLI config file: /home/AzDevOps/.terraformrc
2023-10-11T03:22:01.957Z [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2023-10-11T03:22:01.958Z [DEBUG] ignoring non-existing provider search directory terraform.d/plugins
2023-10-11T03:22:01.958Z [DEBUG] ignoring non-existing provider search directory /home/AzDevOps/.terraform.d/plugins
2023-10-11T03:22:01.958Z [DEBUG] ignoring non-existing provider search directory /home/AzDevOps/.local/share/terraform/plugins
2023-10-11T03:22:01.958Z [DEBUG] ignoring non-existing provider search directory /usr/local/share/terraform/plugins
2023-10-11T03:22:01.958Z [DEBUG] ignoring non-existing provider search directory /usr/share/terraform/plugins
2023-10-11T03:22:01.958Z [INFO] CLI command args: []string{"version"}
Terraform v1.6.1
on linux_amd64
+ provider registry.terraform.io/hashicorp/azurerm v3.75.0
##[error]Terraform only supports service principal authorization for azure
##[error]Terraform only supports service principal authorization for azure
Finishing: terraform plan
@jaredfholgate that did the trick...need to get dependabot updating our yaml azure devops tasks in use...but also noticed, we were using @0 because that's what the readme has https://marketplace.visualstudio.com/items?itemName=JasonBJohnson.azure-pipelines-tasks-terraform
Maybe the main resolution here is for us to send a PR to the repo to update the readme so the docs guide us to @1 by default now.
Thanks for the quick response
Closing this for now since it is not related to this repo. If you continue to have issues, please raise a new issue in the task repo here: https://github.com/jason-johnson/azure-pipelines-tasks-terraform/issues
Thanks
Good point. Yes I think the docs need to be updated over there.
I added this issue and will work on it when I get some free time: https://github.com/jason-johnson/azure-pipelines-tasks-terraform/issues/381
|
gharchive/issue
| 2023-10-11T03:43:59 |
2025-04-01T04:54:44.688414
|
{
"authors": [
"damienpontifex",
"jaredfholgate"
],
"repo": "Azure-Samples/azure-devops-terraform-oidc-ci-cd",
"url": "https://github.com/Azure-Samples/azure-devops-terraform-oidc-ci-cd/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
992398461
|
Add GovCloud regions to Sample SDK
https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/e1ccf3156c94f0b1a668946fc7dd41a64af1230e/samples/js/browser/index.html#L30
Thanks!
@noahsw Please see the linked JS PR. Adding those regions to the sample won't, by itself, enable access via JS SDK. The linked PR, which should be in version 1.19 (due by mid-October), will help. I will add these regions to the sample to coincide with the 1.19 release.
@noahsw JS Speech SDK version 1.19.0 has been released, with support for usgov regions. Thanks again for writing this issue up!
Thx @glharper but I'm not seeing any changes to https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/e1ccf3156c94f0b1a668946fc7dd41a64af1230e/samples/js/browser/index.html#L30. I was hoping to see the GovCloud regions in the dropdown.
@noahsw I have a PR now for this file. The structure of the sample has changed, so the file path you linked to will be deleted at some point.
|
gharchive/issue
| 2021-09-09T16:10:46 |
2025-04-01T04:54:44.692101
|
{
"authors": [
"glharper",
"noahsw"
],
"repo": "Azure-Samples/cognitive-services-speech-sdk",
"url": "https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1252",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1264976427
|
Missing components directory
Was the components directory left out of this repo intentionally? It's required to run locally.
fixed
|
gharchive/issue
| 2022-06-08T16:03:27 |
2025-04-01T04:54:44.692980
|
{
"authors": [
"kendallroden",
"safari137"
],
"repo": "Azure-Samples/container-apps-store-api-microservice",
"url": "https://github.com/Azure-Samples/container-apps-store-api-microservice/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
670094323
|
must gather race condition
Saw this in e2e run:
[Admin API] Must gather action
should return information collected from a cluster cluster
/data/vsts-agent/_work/3/s/gopath/src/github.com/Azure/ARO-RP/test/e2e/adminapi_mustgather.go:17
STEP: triggering the mustgather action
time="2020-07-31T18:26:06Z" level=info msg="read request" func="middleware.Log.func1.1()" file="pkg/frontend/middleware/log.go:110" client_principal_name= client_request_id= component=access correlation_id= request_id=c75b3db9-4ddd-4c75-bf44-b11fba357a3e request_method=POST request_path=/admin/subscriptions/46626fc5-476d-41ad-8c76-2ec49c6994eb/resourcegroups/v4-e2e-rg-v33411082-eastus/providers/microsoft.redhatopenshift/openshiftclusters/v4-e2e-v33411082/mustgather request_proto=HTTP/1.1 request_remote_addr="127.0.0.1:57258" request_user_agent=Go-http-client/1.1 resource_group=v4-e2e-rg-v33411082-eastus resource_id=/subscriptions/46626fc5-476d-41ad-8c76-2ec49c6994eb/resourcegroups/v4-e2e-rg-v33411082-eastus/providers/microsoft.redhatopenshift/openshiftclusters/v4-e2e-v33411082 resource_name=v4-e2e-v33411082 subscription_id=46626fc5-476d-41ad-8c76-2ec49c6994eb
time="2020-07-31T18:26:08Z" level=info msg="403: Forbidden: pods/must-gather: pods \"must-gather\" is forbidden: error looking up service account openshift-must-gather-v6tf4/default: serviceaccount \"default\" not found" func="frontend.reply()" file="pkg/frontend/frontend.go:375" client_principal_name= client_request_id= component=access correlation_id= request_id=c75b3db9-4ddd-4c75-bf44-b11fba357a3e resource_group=v4-e2e-rg-v33411082-eastus resource_id=/subscriptions/46626fc5-476d-41ad-8c76-2ec49c6994eb/resourcegroups/v4-e2e-rg-v33411082-eastus/providers/microsoft.redhatopenshift/openshiftclusters/v4-e2e-v33411082 resource_name=v4-e2e-v33411082 subscription_id=46626fc5-476d-41ad-8c76-2ec49c6994eb
time="2020-07-31T18:26:08Z" level=info msg="sent response" func="middleware.Log.func1.1.1()" file="pkg/frontend/middleware/log.go:101" body_read_bytes=0 body_written_bytes=255 client_principal_name= client_request_id= component=access correlation_id= duration=2.2390684419999998 request_id=c75b3db9-4ddd-4c75-bf44-b11fba357a3e request_method=POST request_path=/admin/subscriptions/46626fc5-476d-41ad-8c76-2ec49c6994eb/resourcegroups/v4-e2e-rg-v33411082-eastus/providers/microsoft.redhatopenshift/openshiftclusters/v4-e2e-v33411082/mustgather request_proto=HTTP/1.1 request_remote_addr="127.0.0.1:57258" request_user_agent=Go-http-client/1.1 resource_group=v4-e2e-rg-v33411082-eastus resource_id=/subscriptions/46626fc5-476d-41ad-8c76-2ec49c6994eb/resourcegroups/v4-e2e-rg-v33411082-eastus/providers/microsoft.redhatopenshift/openshiftclusters/v4-e2e-v33411082 resource_name=v4-e2e-v33411082 response_status_code=403 subscription_id=46626fc5-476d-41ad-8c76-2ec49c6994eb
• Failure [2.244 seconds]
[Admin API] Must gather action
/data/vsts-agent/_work/3/s/gopath/src/github.com/Azure/ARO-RP/test/e2e/adminapi_mustgather.go:14
should return information collected from a cluster cluster [It]
/data/vsts-agent/_work/3/s/gopath/src/github.com/Azure/ARO-RP/test/e2e/adminapi_mustgather.go:17
Expected
<int>: 403
to equal
<int>: 200
/data/vsts-agent/_work/3/s/gopath/src/github.com/Azure/ARO-RP/test/e2e/adminapi_mustgather.go:24
The problem is that after creating the openshift-must-gather-XXX namespace, we don't wait for the existence of the default service account before creating the must-gather pod.
cc @ehashman
It's not a serious bug, but it makes the e2es flaky so would be good to fix as a priority.
|
gharchive/issue
| 2020-07-31T18:30:10 |
2025-04-01T04:54:44.810138
|
{
"authors": [
"jim-minter"
],
"repo": "Azure/ARO-RP",
"url": "https://github.com/Azure/ARO-RP/issues/912",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1677094890
|
Test cases for GA SupportedVMSizes
Which issue this PR addresses:
Jira: https://issues.redhat.com/browse/ARO-2883
What this PR does / why we need it:
Test plan for issue:
Is there any documentation that needs to be updated for this PR?
Test cases for https://github.com/Azure/ARO-RP/pull/2798
cc @cadenmarchese
I don't think comparing the json bytes is a good idea, because validate.Supported**VmSizes is a map and a map is not ordered.
I think we should avoid comparing in a way that depends on the order of the map.
https://go.dev/ref/spec#Map_types
A map is an unordered group of elements of one type, called the element type, indexed by a set of unique keys of another type, called the key type.
If you use *map[api.VMSize]api.VMSizeStruct for wantResponse, you can compare wantResponse and its response directly with validateResponse, which uses deep.Equal and not dependent on the order.
type test struct {
name string
query string
wantStatusCode int
wantResponse *map[api.VMSize]api.VMSizeStruct
wantError string
}
I wish to use the `map[api.VMSize]api.VMSizeStruct` for `wantResponse` so it's a lot easier to compare and I don't need `json.MarshalIndent`, but the API returns a []byte response as you can see here: https://github.com/Azure/ARO-RP/blob/master/pkg/frontend/admin_supportvmsizes_list.go#L34
Yes, the API returns []byte response in json format.
but validateResponse unmarshals the response if wantResponse can't be cast to []byte.
https://github.com/Azure/ARO-RP/blob/master/pkg/frontend/shared_test.go#L262
so when you use *map[api.VMSize]api.VMSizeStruct as wantResponse's type, validateResponse unmarshals the response and uses deep.Equal to compare.
I looked over some APIs that return []byte and their tests.
They just use the pointer of struct as a wantResponse's type and don't marshal it.
https://github.com/Azure/ARO-RP/blob/master/pkg/frontend/admin_openshiftversion_list_test.go
https://github.com/Azure/ARO-RP/blob/master/pkg/frontend/asyncoperationsstatus_get_test.go
Now I see what you meant, made the changes!
|
gharchive/pull-request
| 2023-04-20T17:01:17 |
2025-04-01T04:54:44.820148
|
{
"authors": [
"SrinivasAtmakuri",
"bitoku"
],
"repo": "Azure/ARO-RP",
"url": "https://github.com/Azure/ARO-RP/pull/2863",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1470189872
|
Add newer images for azure monitor metrics
What type of PR is this?
/kind feature
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #
Requirements:
[ ] uses conventional commit messages
[ ] includes documentation
[ ] adds unit tests
[x] tested upgrade from previous version
Special notes for your reviewer:
Release note:
none
We were thinking of slimming down the VHD size to speed up VM creation; to my knowledge this image isn't a key component blocking cluster startup.
|
gharchive/pull-request
| 2022-11-30T20:37:48 |
2025-04-01T04:54:44.824265
|
{
"authors": [
"haitch",
"vishiy"
],
"repo": "Azure/AgentBaker",
"url": "https://github.com/Azure/AgentBaker/pull/2466",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
493741899
|
Update README.md
New update in diagnostic settings and README.md file
Hi @tarosler , can you pls merge it to the master? Thanks
|
gharchive/pull-request
| 2019-09-15T14:10:33 |
2025-04-01T04:54:44.830187
|
{
"authors": [
"v-liatba"
],
"repo": "Azure/Azure-Security-Center",
"url": "https://github.com/Azure/Azure-Security-Center/pull/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1043359454
|
Pulse secure parser for Azure Sentinel
For the standard log format with syslog messages as below, the regex used in the pulseconnectsecure parser (the function developed by @shainw and @acnccd) is not matching:
'2021-11-03 01:20:46 - ive - [1.2.3.4] Jane, Mitchel(Employee)[] - Received OCSP response from '1.1.1.1' with url 'http://abc.com', user: 'Jane, Mitchel' serial number: 'D8:DB:12:8A:DE:00:00:00:04:B1:8D''
Current regex : (\d{4}-\d{2}-\d{2})\s(\d{2}:\d{2}:\d{2})\s(\S+)\s(\S+)\s(\S+)\s[(\S+)]\s(\S+,)\s(\S+)((.)?)[(.)]\s-\s(.*)
Can someone help me with a regex which matches all the below syslog messages:
'2021-11-03 01:20:46 - ive - [1.2.3.4] Jane, Mitchel(Employee)[] - Received OCSP response from '1.1.1.1' with url 'http://abc.com', user: 'Jane, Mitchel' serial number: 'D8:DB:12:8A:DE:00:00:00:04:B1:8D''
2020-05-01 05:36:14 - ive - [10.0.0.0] user100(ABC Realm)[Personal_PC No RDP] - WebRequest ok : Host: sample.abc.com, Request: GET /Citrix/XDSWeb/dample/js/ctxs.webui.min_0204820BD028.js HTTP/1.1
2020-05-01 00:07:21 - ive - [127.0.0.1] System()[] - User Accounts modified. Removed username ABC\user34 from authentication server ABC Active Directory.
Hey.. we are looking into this issue and would get back to you asap... Thanks!!!
Hi @annanra ,
We have updated the regex of the parser to match the syslog messages shared by you. https://github.com/Azure/Azure-Sentinel/pull/3887. Can you please follow below steps and save the function with a different name and update us if the parser is working fine now?
Open Log Analytics Workspace
Open new Query window
Copy and Paste the updated query from the PulseConnectSecure.txt file provided in the https://github.com/Azure/Azure-Sentinel/pull/3887/files.
In the query window, on the second line of the query, enter the hostname(s) of your Pulse Connect Secure device(s) and any other unique identifiers for the logstream.
// For example: | where Computer in ("server1, server2") and Facility == "local7"
Click on Save button and select as Function from drop down by specifying function name and alias as PulseConnectSecure_Test.
Run the query to validate data is being received and parsed.
Let us know, if the query works fine. Thanks!!
Hi @annanra, Closing this issue as there is no response on this for more than a month. Please re-open or create a new one, if the issue persists and needs help. Thanks.
|
gharchive/issue
| 2021-11-03T10:19:13 |
2025-04-01T04:54:44.838371
|
{
"authors": [
"annanra",
"ritika-msft",
"v-rucdu"
],
"repo": "Azure/Azure-Sentinel",
"url": "https://github.com/Azure/Azure-Sentinel/issues/3370",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
183736901
|
Warning: Error in winDialog: winDialog() cannot be used non-interactively
Hi,
I am trying to run IDEAR.rmd and get the following error: "Warning: Error in winDialog: winDialog() cannot be used non-interactively". Any ideas?
Regards,
Amit
session Info:
sessionInfo()
R version 3.3.1 (2016-06-21)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C LC_TIME=English_United States.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] rmarkdown_1.0
loaded via a namespace (and not attached):
[1] Rcpp_0.12.6 digest_0.6.10 mime_0.5 R6_2.1.3 xtable_1.8-2 magrittr_1.5 evaluate_0.9
[8] stringi_1.1.1 miniUI_0.1.1 shinyjs_0.7 tools_3.3.1 stringr_1.0.0 shiny_0.13.2 httpuv_1.3.3
[15] yaml_2.1.13 rsconnect_0.4.3 htmltools_0.3.5
the rmarkdown log is listed below:
Loading required package: shiny
Listening on http://127.0.0.1:3994
|.. | 3%
inline R code fragments
|.... | 6%
label: unnamed-chunk-1 (with options)
List of 3
$ echo : logi FALSE
$ message: logi FALSE
$ warning: logi FALSE
processing file: IDEAR.rmd
Quitting from lines 23-240 (IDEAR.rmd)
Warning: Error in winDialog: winDialog() cannot be used non-interactively
Stack trace (innermost first):
105: winDialog
104: eval [#29]
103: eval
102: withVisible
101: withCallingHandlers
100: handle
99: evaluate_call
98: evaluate
97: in_dir
96: block_exec
95: call_block
94: process_group.block
93: process_group
92: withCallingHandlers
91: process_file
90: knitr::knit
89:
88: do.call
87: contextFunc
86: .getReactiveEnvironment()$runWith
85: shiny::maskReactiveContext
84: reactive reactive({
out <- rmd_cached_output(file, encoding)
output_dest <- out$dest
if (out$cached) {
if (nchar(out$resource_folder) > 0) {
shiny::addResourcePath(basename(out$resource_folder),
out$resource_folder)
}
return(out$shiny_html)
}
if (!file.exists(dirname(output_dest))) {
dir.create(dirname(output_dest), recursive = TRUE, mode = "0700")
}
resource_folder <- knitr_files_dir(output_dest)
perf_timer_reset_all()
dependencies <- list()
shiny_dependency_resolver <- function(deps) {
dependencies <<- deps
list()
}
output_opts <- list(self_contained = FALSE, copy_resources = TRUE,
dependency_resolver = shiny_dependency_resolver)
message("\f")
args <- merge_lists(list(input = reactive_file(), output_file = output_dest,
output_dir = dirname(output_dest), output_options = output_opts,
intermediates_dir = dirname(output_dest), runtime = "shiny",
envir = new.env()), render_args)
result_path <- shiny::maskReactiveContext(do.call(render,
args))
if (!dir_exists(resource_folder))
dir.create(resource_folder, recursive = TRUE)
shiny::addResourcePath(basename(resource_folder), resource_folder)
dependencies <- append(dependencies, list(create_performance_dependency(resource_folder)))
write_deps <- base::file(file.path(resource_folder, "shiny.dep"),
open = "wb")
on.exit(close(write_deps), add = TRUE)
serialize(dependencies, write_deps, ascii = FALSE)
if (!isTRUE(out$cacheable)) {
shiny::onReactiveDomainEnded(shiny::getDefaultReactiveDomain(),
function() {
unlink(result_path)
unlink(resource_folder, recursive = TRUE)
})
}
shinyHTML_with_deps(result_path, dependencies)
})
73: doc
72: shiny::renderUI
71: func
70: output$reactivedoc
3:
2: do.call
1: rmarkdown::run
Hi, thanks for the feedback. To run IDEAR, you should open the Run-IDEAR.R in RStudio, and click Source. Was this how you ran it and got the error?
I ran IDEAR.rmd directly in RStudio, as I was getting an error when running Run-IDEAR.R in RStudio. I have opened a separate issue https://github.com/Azure/Azure-TDSP-Utilities/issues/4.
Thanks for the quick response.
Regards,
Amit
Hi, Amitkb3,
For issue #4, we have figured out a possible cause of the error, and provided a way to run IDEAR correctly in RStudio. Basically, you need to (1) Open Run-IDEAR.r in RStudio; (2) Click Source to launch IDEAR. DO NOT execute the code line by line, or select all lines and click Run.
For this issue #3, running IDEAR.rmd directly will fail as expected. IDEAR.rmd should be triggered by Run-IDEAR.r in the way described above.
Hope it helps.
Let us know if you run into any other issue.
|
gharchive/issue
| 2016-10-18T16:24:59 |
2025-04-01T04:54:44.856771
|
{
"authors": [
"amitkb3",
"hangzh-msft"
],
"repo": "Azure/Azure-TDSP-Utilities",
"url": "https://github.com/Azure/Azure-TDSP-Utilities/issues/3",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2269270689
|
Check for previous/existing GitHub issues/module proposals
[X] I have checked for previous/existing GitHub issues/module proposals.
Check this module doesn't already exist in the module indexes
[X] I have checked for that this module doesn't already exist in the module indexes; or I'm proposing the module to be migrated from CARML/TFVM.
Bicep or Terraform?
Terraform
Module Classification?
Pattern Module
Module Name
avm-ptn-azure-ipam
Module Details
A terraform pattern to deploy the https://azure.github.io/ipam/#/ container as a web app with supporting resources.
Supported deployment models are the default public deployment or a virtual network integrated web app that can use service routes or private endpoints to access resources.
Do you want to be the owner of this module?
No
Module Owner's GitHub Username (handle)
No response
(Optional) Secondary Module Owner's GitHub Username (handle)
ChrisChapman-gh
We (BJSS) have most of the terraform written which we can submit as a pull request from a fork.
This should get the module most of the way there.
We're also happy to contribute to the maintenance.
@ChrisChapman-gh thanks for the proposal. Give us some time to search for a module owner.
@prjelesi I'm happy to own and to work with BJSS on this
That would be awesome, thanks Paul.
Hi @pagyP
Thanks for requesting/proposing to be an AVM module owner!
We just want to confirm you agree to the below pages that define what module ownership means:
Team Definitions & RACI
Shared Specification (Bicep & Terraform)
Module Support
Any questions or clarifications needed, let us know!
If you agree, please just reply to this issue with the exact sentence below (as this helps with our automation 👍):
"I CONFIRM I WISH TO OWN THIS AVM MODULE AND UNDERSTAND THE REQUIREMENTS AND DEFINITION OF A MODULE OWNER"
Thanks,
The AVM Core Team
#RR
"I CONFIRM I WISH TO OWN THIS AVM MODULE AND UNDERSTAND THE REQUIREMENTS AND DEFINITION OF A MODULE OWNER"
Hi @prjelesi - What needs to happen next? can we get a repo spun up so I can fork it and start some PRs?
Cheers Chris
Hi @pagyP
Thanks for confirming that you wish to own this AVM module and understand the related requirements and responsibilities!
Before starting development, please ensure ALL the following requirements are met.
Please use the following values explicitly as provided in the module index page:
For your module:
ModuleName - for naming your module
TelemetryIdPrefix - for your module's telemetry
For your module's repository:
Repo name and folder path are defined in RepoURL
Create GitHub teams for module owners and contributors and grant them permissions as outlined here.
Grant permissions for the AVM core team and PG teams on your GitHub repo as described here.
Check if this module exists in the other IaC language. If so, collaborate with the other owner for consistency. 👍
You can now start the development of this module! ✅ Happy coding! 🎉
Please respond to this comment and request a review from the AVM core team once your module is ready to be published! Please include a link pointing to your PR, once available. 🙏
Any further questions or clarifications needed, let us know!
Thanks,
The AVM Core Team
Hi @prjelesi - What needs to happen next? can we get a repo spun up so I can fork it and start some PRs? Cheers Chris
Hi @ChrisChapman-gh, @pagyP will initiate the repo creation process, and when it is created you will be able to fork and work on it.
Let me know if you need any help to move forward.
@ChrisChapman-gh I've initiated the repository creation, just pending approval now.
@ChrisChapman-gh repo should now be available https://github.com/Azure/terraform-azurerm-avm-ptn-azure-ipam (thanks for your patience), please fork and submit a PR as per https://azure.github.io/Azure-Verified-Modules/contributing/terraform/review/
@ChrisChapman-gh are you still planning/able to contribute to this? Let me know either way please., thanks.
Hi Paul
Yes absolutely – had some leave and then straight onto a new project which has taken a lot of time to get up and running.
I should have some time soon to fork and contribute.
Cheers
Chris
Is there any update on this module? I am prepping to deploy IPAM and would like to include this in my pipeline
|
gharchive/issue
| 2024-04-29T14:58:36 |
2025-04-01T04:54:44.877823
|
{
"authors": [
"ChrisChapman-gh",
"bordera-randy",
"pagyP",
"prjelesi"
],
"repo": "Azure/Azure-Verified-Modules",
"url": "https://github.com/Azure/Azure-Verified-Modules/issues/913",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
386561807
|
Blobs with backward slashes in their names are stored with incorrect name
Blobs which have name like "Path\To\My\Blob" are created as "Blob" instead.
@michaelkruglos Please try with Azurite V3 has solid support for blob creation! Will close this issue.
|
gharchive/issue
| 2018-12-02T13:53:27 |
2025-04-01T04:54:44.879599
|
{
"authors": [
"XiaoningLiu",
"michaelkruglos"
],
"repo": "Azure/Azurite",
"url": "https://github.com/Azure/Azurite/issues/125",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
536923903
|
[blob-storage]BlobClient.getProperties not returning archiveStatus
Which service(blob, file, queue, table) does this issue concern?
blob
Which version of the Azurite was used?
3.3.0-preview
Where do you get Azurite? (npm, DockerHub, NuGet, Visual Studio Code Extension)
npm
What's the Node.js version?
v12.13.0
What problem was encountered?
Unit test failure:
BlobClient.beginCopyFromURL with rehydrate priority
it.only("beginCopyFromURL with rehydrate priority", async () => {
recorder.skip("browser");
const newBlobURL = containerClient.getBlobClient(recorder.getUniqueName("copiedblobrehydrate"));
const initialTier = BlockBlobTier.Archive;
const result = await (await newBlobURL.beginCopyFromURL(blobClient.url, {
tier: initialTier,
rehydratePriority: "Standard"
})).pollUntilDone();
assert.ok(result.copyId);
delay(1 * 1000);
const properties1 = await blobClient.getProperties();
const properties2 = await newBlobURL.getProperties();
assert.deepStrictEqual(properties1.contentMD5, properties2.contentMD5);
assert.deepStrictEqual(properties2.copyId, result.copyId);
assert.deepStrictEqual(properties2.copySource, blobClient.url);
assert.equal(properties2.accessTier, initialTier);
await newBlobURL.setAccessTier(BlockBlobTier.Hot);
const properties3 = await newBlobURL.getProperties();
assert.equal(properties3.archiveStatus!.toLowerCase(), "rehydrate-pending-to-hot");
});
assert.equal(properties3.archiveStatus!.toLowerCase(), "rehydrate-pending-to-hot") throws an error because properties3.archiveStatus is undefined.
/**
* For blob storage LRS accounts, valid values are
* rehydrate-pending-to-hot/rehydrate-pending-to-cool. If the blob is being rehydrated and is not
* complete then this header is returned indicating that rehydrate is pending and also tells the
* destination tier.
*/
archiveStatus?: string;
Do we support LRS accounts?
Steps to reproduce the issue?
Re-run the test-case.
Have you found a mitigation/solution?
N/A
debug.log
debug.log
request_id
b1b8441a-d563-496d-ba8a-a40650d74307
Should add "archiveStatus" for get properties options.
Per the REST doc, valid values are rehydrate-pending-to-hot/rehydrate-pending-to-cool. Since Azurite doesn't have this pending status, this won't be fixed.
@XiaoningLiu
Would you please help to close it?
|
gharchive/issue
| 2019-12-12T11:35:41 |
2025-04-01T04:54:44.885053
|
{
"authors": [
"XiaoningLiu",
"blueww",
"ljian3377"
],
"repo": "Azure/Azurite",
"url": "https://github.com/Azure/Azurite/issues/292",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
917493447
|
Suggestion: Group Monitored Items in one structure in standalone publisher with possibility of trigger
For our use cases we need a set of data points sampled and published in the same message.
Additionally, it would be great if all the nodes were published when a certain node changes, for example a boolean switching from false to true.
Grouping
We would like to be able to group nodes in the publishednodes.json config file of the standalone publisher.
The configuration could then look like the following with just a new section OpcNodeGroups:
[
{
"EndpointUrl": "opc.tcp://host.docker.internal:49322/",
"UseSecurity": false,
"OpcNodes": [
{
"Id": "ns=2;s=Machine1.Status",
"OpcSamplingInterval": 1000,
"OpcPublishingInterval": 5000,
"DisplayName": "Machine1 Status"
},
{
"Id": "ns=2;s=Machine1.Speed",
"OpcSamplingInterval": 500,
"OpcPublishingInterval": 2000,
"DisplayName": "Machine1 Speed"
}
],
"OpcNodeGroups": [
{
"GroupId": "Machine1",
"OpcSamplingInterval": 1000,
"OpcPublishingInterval": 2000,
"nodes": [
{
"Id": "ns=2;s=Machine1.Pressure",
"DisplayName": "Pressure"
},
{
"Id": "ns=2;s=Machine1.Temperature",
"DisplayName": "Temperature"
},
{
"Id": "ns=2;s=Machine1.TorqueA",
"DisplayName": "TorqueA"
},
{
"Id": "ns=2;s=Machine1.TorqueB",
"DisplayName": "TorqueB"
},
{
"Id": "ns=2;s=Machine1.OrderNumber",
"DisplayName": "OrderNumber"
},
{
"Id": "ns=2;s=Machine1.PartNumber",
"DisplayName": "PartNumber"
},
{
"Id": "ns=2;s=Machine1.PartFinished",
"DisplayName": "PartFinished"
}
]
}
]
}
]
The resulting message might then look like the following sample. The messages associated with the Group would then have a GroupId.
[
{
"NodeId": "nsu=MYServer;s=Machine1.Status",
"ApplicationUri": "urn:MYSERVER:UA%20Server",
"DisplayName": "Machine1 Status",
"Value": {
"Value": 389,
"SourceTimestamp": "2020-11-20T15:39:52.2521132Z"
}
},
{
"NodeId": "nsu=MYServer;s=Machine1.Speed",
"ApplicationUri": "urn:MYSERVER:UA%20Server",
"DisplayName": "Machine1 Speed",
"Value": {
"Value": -298,
"SourceTimestamp": "2020-11-20T15:39:52.2521132Z"
}
},
{
"GroupId": "Machine1",
"NodeId": "nsu=MYServer;s=Machine1.Pressure",
"ApplicationUri": "urn:MYSERVER:UA%20Server",
"DisplayName": "Pressure",
"Value": {
"Value": -298,
"SourceTimestamp": "2020-11-20T15:39:52.2521132Z"
}
},
{
"GroupId": "Machine1",
"NodeId": "nsu=MYServer;s=Machine1.Speed",
"ApplicationUri": "urn:MYSERVER:UA%20Server",
"DisplayName": "Speed",
"Value": {
"Value": -298,
"SourceTimestamp": "2020-11-20T15:39:52.2521132Z"
}
},
{
"GroupId": "Machine1",
"NodeId": "nsu=MYServer;s=Machine1.TorqueA",
"ApplicationUri": "urn:MYSERVER:UA%20Server",
"DisplayName": "TorqueA ",
"Value": {
"Value": -298,
"SourceTimestamp": "2020-11-20T15:39:52.2521132Z"
}
},
{
"GroupId": "Machine1",
"NodeId": "nsu=MYServer;s=Machine1.TorqueB",
"ApplicationUri": "urn:MYSERVER:UA%20Server",
"DisplayName": "TorqueB",
"Value": {
"Value": -298,
"SourceTimestamp": "2020-11-20T15:39:52.2521132Z"
}
},
{
"GroupId": "Machine1",
"NodeId": "nsu=MYServer;s=Machine1.OrderNumber",
"ApplicationUri": "urn:MYSERVER:UA%20Server",
"DisplayName": "OrderNumber",
"Value": {
"Value": -298,
"SourceTimestamp": "2020-11-20T15:39:52.2521132Z"
}
},
{
"GroupId": "Machine1",
"NodeId": "nsu=MYServer;s=Machine1.PartNumber",
"ApplicationUri": "urn:MYSERVER:UA%20Server",
"DisplayName": "PartNumber",
"Value": {
"Value": -298,
"SourceTimestamp": "2020-11-20T15:39:52.2521132Z"
}
},
{
"GroupId": "Machine1",
"NodeId": "nsu=MYServer;s=Machine1.PartFinished",
"ApplicationUri": "urn:MYSERVER:UA%20Server",
"DisplayName": "PartFinished",
"Value": {
"Value": -298,
"SourceTimestamp": "2020-11-20T15:39:52.2521132Z"
}
}
]
Trigger for Group
It would be great if all the nodes were published when a certain node changes, for example a boolean switching from false to true.
Therefore a Trigger in the OpcNodeGroup would be required, which defines the node used as the trigger.
[
{
"EndpointUrl": "opc.tcp://host.docker.internal:49322/",
"UseSecurity": false,
"OpcNodes": [
{
"Id": "ns=2;s=Machine1.Status",
"OpcSamplingInterval": 1000,
"OpcPublishingInterval": 5000,
"DisplayName": "Machine1 Status"
},
{
"Id": "ns=2;s=Machine1.Speed",
"OpcSamplingInterval": 500,
"OpcPublishingInterval": 2000,
"DisplayName": "Machine1 Speed"
}
],
"OpcNodeGroups": [
{
"GroupId": "Machine1",
"OpcSamplingInterval": 1000,
"OpcPublishingInterval": 2000,
"Trigger": {
"Id": "ns=2;s=Machine1.PartFinished",
"Trigger": {
"Type": "Once",
"Value": true
}
},
"nodes": [
{
"Id": "ns=2;s=Machine1.Pressure",
"DisplayName": "Pressure"
},
{
"Id": "ns=2;s=Machine1.Temperature",
"DisplayName": "Temperature"
},
{
"Id": "ns=2;s=Machine1.TorqueA",
"DisplayName": "TorqueA"
},
{
"Id": "ns=2;s=Machine1.TorqueB",
"DisplayName": "TorqueB"
},
{
"Id": "ns=2;s=Machine1.OrderNumber",
"DisplayName": "OrderNumber"
},
{
"Id": "ns=2;s=Machine1.PartNumber",
"DisplayName": "PartNumber"
},
{
"Id": "ns=2;s=Machine1.PartFinished",
"DisplayName": "PartFinished"
}
]
}
]
}
]
This is just a proposal /feature request. I'm glad to see other options and possibilities.
Thanks,
Martin
OPC UA PubSub supports datasets, which are equivalent to the subscription (per endpoint, per tag inside endpoint, batches of 1000). They are part of the same network message (size permitting).
Regarding triggering, this has been requested as "polled" mode, where a set of nodes are sent on a trigger. This has nothing to do with the way OPC UA subscriptions work, but rather would leverage something like the OPC Twin and a scheduler.
A poll mode issue feature request has been added as #1934.
|
gharchive/issue
| 2020-11-30T10:04:43 |
2025-04-01T04:54:44.893481
|
{
"authors": [
"marcschier",
"martin-weber"
],
"repo": "Azure/Industrial-IoT",
"url": "https://github.com/Azure/Industrial-IoT/issues/1212",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1380389596
|
In the case of parallel execution, one succeeds, but all others fail.
While provisioning of a container app is in progress, other provisioning requests will fail.
We need guidance or a solution for this issue.
Could you please comment on how to reproduce it, and share any logs or screenshots?
If the Container App is in the process of provisioning, the following error will occur when the GitHub action is executed.
Looks like retry logic is needed; further investigation is needed to decide which layer the logic should be implemented in.
I think the workaround would be to implement retry. In addition, I think the final design needs to be decided in a separate discussion.
@azure/core-rest-pipeline already has retry logic as described at the following documentation.
https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/core/core-rest-pipeline/documentation/pipeline.md
But the status code 409 in this case is not included in the retry-target response codes, as far as I can tell from reading the following code.
https://github.com/Azure/azure-sdk-for-js/blob/82996230773ab8295f06b17a8b6f449f9d9f2a8c/sdk/core/core-rest-pipeline/src/retryStrategies/exponentialRetryStrategy.ts#L81-L94
Ref: https://github.com/Azure/azure-sdk-for-js/issues/23298
As guidance, at a minimum, the following should be recommended:
Run only one workflow at a time
Ref: Workflow syntax for GitHub Actions
The guidance is about GitHub Actions workflows' concurrency.
On the other hand, this issue can happen when the action runs during any update from the Azure portal, CLI, PowerShell, or REST API.
@k-in
Could you please try the action horihiro/aca-preview@v0.2.0-alpha2, which retries 10 times at maximum, and let me know the result?
Released horihiro/aca-preview@v0.2.0-alpha3 that exposes max retry count as an input max-retries.
https://github.com/horihiro/aca-preview/blob/5d75583459e1a53ed6ebf3f8d956e0144bb7eb68/src/main.ts#L127-L137
By design, currently,
|
gharchive/issue
| 2022-09-21T06:26:56 |
2025-04-01T04:54:44.927002
|
{
"authors": [
"horihiro",
"k-in",
"koudaiii"
],
"repo": "Azure/aca-review-apps",
"url": "https://github.com/Azure/aca-review-apps/issues/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
202978156
|
kubectl should be extracted from the hyperkube image
Say you're working with or testing managed disks. It's vital that kubectl is at the exact same revision as the running apiserver.
If we extracted hyperkube (and thus kubectl) from the container, we could place it on the host. We'd also get to eliminate the extra kubectlVersion parameter/variable we have now.
Should be possible to do this with something roughly like: docker run -v /usr/local/bin:/host <hyperkube_spec> cp /hyperkube /host/hyperkube && hyperkube --symlinks (note the -v mount has to come before the image name).
Thinking out loud:
We need to streamline the entire process. This will happen not only when testing beta features (similar to managed disks) but also when people upgrade a cluster to versions that will break the existing kubectl. Maybe we can create a script toolkit as a docker image that has a bunch of scripts, one of which can extract kubectl from a hyperkube spec. Also consider the reverse: upgrade, oops, now I want my old kubectl back.
Closing this P2 as it is nice to have, but it can currently be solved in alternate ways. We can consider re-opening if we get requests for this.
|
gharchive/issue
| 2017-01-25T00:11:21 |
2025-04-01T04:54:44.929768
|
{
"authors": [
"anhowe",
"colemickens",
"khenidak"
],
"repo": "Azure/acs-engine",
"url": "https://github.com/Azure/acs-engine/issues/208",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
373691948
|
Fix potential nil pointer dereference when VM tags are empty
What this PR does / why we need it:
In setting up a regular Jenkins upgrade test with VM tags removed, I found a logging-related panic that this fixes.
If applicable:
[ ] documentation
[ ] unit tests
[ ] tested backward compatibility (ie. deploy with previous version, upgrade with this branch)
Release note:
NONE
/lgtm
|
gharchive/pull-request
| 2018-10-24T21:52:13 |
2025-04-01T04:54:44.932557
|
{
"authors": [
"jackfrancis",
"mboersma"
],
"repo": "Azure/acs-engine",
"url": "https://github.com/Azure/acs-engine/pull/4117",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1974671558
|
add crd webhooks
Description
Adds crd webhooks.
Type of change
Please delete options that are not relevant.
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[x] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
How Has This Been Tested?
Tested locally and unit tested. Will be e2e tested in a future PR.
Checklist:
[x] My code follows the style guidelines of this project
[x] I have performed a self-review of my code
[x] I have commented my code, particularly in hard-to-understand areas
[x] I have made corresponding changes to the documentation
[x] My changes generate no new warnings
[x] I have added tests that prove my fix is effective or that my feature works
[x] New and existing unit tests pass locally with my changes
[x] Any dependent changes have been merged and published in downstream modules
Pull Request Test Coverage Report for Build 6735804858
224 of 498 (44.98%) changed or added relevant lines in 5 files are covered.
5 unchanged lines in 1 file lost coverage.
Overall coverage decreased (-5.7%) to 78.448%
Changes Missing Coverage
Covered Lines
Changed/Added Lines
%
pkg/webhook/webhook.go
85
127
66.93%
pkg/webhook/cert.go
24
86
27.91%
pkg/webhook/nginxingress.go
96
180
53.33%
pkg/controller/controller.go
0
86
0.0%
Files with Coverage Reduction
New Missed Lines
%
pkg/controller/controller.go
5
20.34%
Totals
Change from base Build 6722886340:
-5.7%
Covered Lines:
2355
Relevant Lines:
3002
💛 - Coveralls
/ok-to-test sha=399b4f
/ok-to-test sha=399b54f
|
gharchive/pull-request
| 2023-11-02T16:58:06 |
2025-04-01T04:54:44.945758
|
{
"authors": [
"OliverMKing",
"coveralls"
],
"repo": "Azure/aks-app-routing-operator",
"url": "https://github.com/Azure/aks-app-routing-operator/pull/123",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
289013895
|
[Discussion] ARRAffinity cookie is changing to become 'HttpOnly'
Discussion thread for https://github.com/Azure/app-service-announcements/issues/12
Hi, re. the SecureOnly flag; without this, anyone using Testing in Production will cause a penetration test to fail, irrespective of the lack of sensitive data in the cookie. Are there any plans to introduce this?
Hi, Is there any way that we can append the secure tag for ARRAffinity ? I saw this post on msdn blog ( https://blogs.msdn.microsoft.com/webtopics/2018/05/14/securing-the-arraffinity-cookie/ ) I tried that solution for an ASP.NET Core web app but is not working, the ARRAffinity is still without the secure tag.
I'd also like to chime in with the same. I agree that it is not a real "risk", but marking it as secure is good practice since it is using HTTPS and should cause no issues.
Just came across this, actually across #12. I needed this for a scenario where I was handling OAuth from a 3rd-party which absolutely didn't understand how OAuth should be implemented, so the 3rd-party was making calls to our servers as part of authentication. That way of course any affinity was lost, and I needed to pass affinity around somehow in the OAuth state. I had problems figuring out where to get WEBSITE_INSTANCE_ID from.
Hint for anybody who doesn't know where to read it from: Read WEBSITE_INSTANCE_ID from IConfiguration (which is available via DI in .NET Core) like config["WEBSITE_INSTANCE_ID"].
|
gharchive/issue
| 2018-01-16T18:39:50 |
2025-04-01T04:54:44.965410
|
{
"authors": [
"DzonnyDZ",
"alincosmin7",
"davidebbo",
"glaidler",
"kentongray"
],
"repo": "Azure/app-service-announcements-discussions",
"url": "https://github.com/Azure/app-service-announcements-discussions/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
724366080
|
setting custom health probe via portal gets reset automatically after some time
Describe the bug
I have deployed Elasticsearch in AKS using the link https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html,
and exposed (public ip / domain) using ingress app gateway.
With ingress, a health probe is automatically created for Elasticsearch, but since Elasticsearch is by default protected with basic auth it returns a 401 status code, which causes the backend health check to fail.
To resolve this, I tried manually editing the health probe from the portal and set it to 200-399,401, which resolves the issue for some time (it gets reset to 200-399 automatically after a while).
@dipakyadav By default AGIC assumes full control of the appgw, hence any manual changes you make in the application gateway will be reverted by AGIC whenever any of these actions happens (new Pod launch, new ingress rule addition, removal of ingress rules, etc.). You have two options here: one is defining a ProhibitedTarget (you have to launch AGIC in shared mode for this); another is editing the health check of the elasticsearch deployment (editing the existing liveness/readiness probe).
Reference link for Azure Ingress Prohibited target
https://github.com/Azure/application-gateway-kubernetes-ingress/tree/master/crds
This issue shall be closed.
Thanks @vishal8k , will check
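For reference, a prohibited-target manifest might look roughly like the sketch below. This is only an illustration — the hostname and resource name are hypothetical, and the exact CRD schema should be verified against the repo linked above before use:

```yaml
# Hypothetical sketch: tell AGIC (running in shared mode) not to manage
# configuration for this host, so manual probe edits on it are left alone.
apiVersion: "appgw.ingress.k8s.io/v1"
kind: AzureIngressProhibitedTarget
metadata:
  name: prohibit-elasticsearch   # hypothetical name
spec:
  hostname: es.example.com       # hypothetical hostname served by Elasticsearch
```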
|
gharchive/issue
| 2020-10-19T07:46:49 |
2025-04-01T04:54:44.969279
|
{
"authors": [
"dipakyadav",
"vishal8k"
],
"repo": "Azure/application-gateway-kubernetes-ingress",
"url": "https://github.com/Azure/application-gateway-kubernetes-ingress/issues/1024",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1730314799
|
Fix bug with "PasswordBoxes-Must-Have-Min-Length" test
Resolved #742 .
Added test case.
Hello @StartAutomating Brian Moore has directed me to you for your help. Here is a description of what we need, in addition to this being merged.
Get the PR merged and a new release performed.
Get Partner Center to uptake the release with the fix.
|
gharchive/pull-request
| 2023-05-29T08:18:16 |
2025-04-01T04:54:44.971248
|
{
"authors": [
"edburns",
"galiacheng"
],
"repo": "Azure/arm-ttk",
"url": "https://github.com/Azure/arm-ttk/pull/743",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
758202809
|
fix e2e test
for #666
/azp run
|
gharchive/pull-request
| 2020-12-07T06:25:04 |
2025-04-01T04:54:44.979873
|
{
"authors": [
"changlong-liu",
"qiaozha"
],
"repo": "Azure/autorest.az",
"url": "https://github.com/Azure/autorest.az/pull/673",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1724485010
|
autorest is adding u8 at the end of the file
Before asking the question:
[x] have you checked the faq, the documentation in the docs folder and couldn't find the information there
[x] have you checked existing issues for a similar question?
We are encountering an issue while using .NET 6 with the AutoRest code generation utility [cli version: 3.3.2; node: v16.14.2],
with the config below:
input-file: openapi.json
project-folder: .
output-folder: $(project-folder)/Generated
clear-output-folder: true
csharp: true
public-clients: true
skip-csproj: true
generation1-convenience-client: true
it is adding u8 at the end of many generated string literals,
ex:
writer.WritePropertyName("contents"u8);
writer.WritePropertyName("protectedFiles"u8);
does anyone know why that's the case? It is causing errors for our build, as we currently do not support C# version 11 since we are still on .NET 6. I have tried using many different versions of autorest but it still appends the u8 at the end of the strings.
This was recently added since it provides a performance improvement in serialization. We can add a flag to turn this off, which you can set in your autorest.md, but we will probably keep the default for this to be on.
@AlexanderSher do you mind picking this up?
This was recently added since it provides a performance improvement in serialization. We can add a flag to turn this off, which you can set in your autorest.md, but we will probably keep the default for this to be on.
@AlexanderSher do you mind picking this up?
is there a specific flag that can be used to force the use of .net6? or csharp v10 instead of v11 ?
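One possible workaround — untested here, and assuming the build machine's SDK ships a C# 11 compiler — is to raise only the language version while keeping the .NET 6 target, since u8 literals produce a ReadOnlySpan<byte>, which exists on net6.0:

```xml
<!-- Hypothetical workaround sketch: keep targeting net6.0 but allow C# 11 syntax
     so the generated "..."u8 literals compile. -->
<PropertyGroup>
  <TargetFramework>net6.0</TargetFramework>
  <LangVersion>11</LangVersion>
</PropertyGroup>
```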
|
gharchive/issue
| 2023-05-24T17:04:53 |
2025-04-01T04:54:44.985835
|
{
"authors": [
"jamesfan1",
"joaobarraca",
"m-nash"
],
"repo": "Azure/autorest.csharp",
"url": "https://github.com/Azure/autorest.csharp/issues/3433",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2170317466
|
[Microsoft Generator CSharp] Migrate All Unbranded Test Projects
As a follow up to https://github.com/Azure/autorest.csharp/issues/4200, all of the remaining unbranded test projects should be migrated to the new generator and should serve as the validation when making changes to the generator code.
I want to consider the trade-offs of having only cadl-ranch tests, with very few exceptions.
|
gharchive/issue
| 2024-03-05T23:22:44 |
2025-04-01T04:54:44.987522
|
{
"authors": [
"jorgerangel-msft",
"m-nash"
],
"repo": "Azure/autorest.csharp",
"url": "https://github.com/Azure/autorest.csharp/issues/4333",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1606209290
|
skip the validation for external projects
Fixes https://github.com/Azure/autorest.csharp/issues/3179
Description
Add your description here!
Checklist
To ensure a quick review and merge, please ensure:
[ ] The PR has a understandable title and description explaining the why and what.
[ ] The PR is opened in draft if not ready for review yet.
If opened in draft, please allocate sufficient time (24 hours) after moving out of draft for review
[ ] The branch is recent enough to not have merge conflicts upon creation.
Ready to Land?
[ ] Build is completely green
Submissions with test failures require tracking issue and approval of a CODEOWNER
[ ] At least one +1 review by a CODEOWNER
[ ] All -1 reviews are confirmed resolved by the reviewer
Override/Marking reviews stale must be discussed with CODEOWNERS first
@m-nash I made a few changes, could you take a look again?
@m-nash latest regen is here: https://github.com/Azure/azure-sdk-for-net/pull/35727
|
gharchive/pull-request
| 2023-03-02T06:51:34 |
2025-04-01T04:54:44.992556
|
{
"authors": [
"ArcturusZhang"
],
"repo": "Azure/autorest.csharp",
"url": "https://github.com/Azure/autorest.csharp/pull/3182",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
337349119
|
Release Notes for pre-beta
BEFORE YOU BEGIN: IMPORTANT NOTES
There are a lot of what I consider to be "minor" issues in here. My priority at this point has been to get enough generated code in place to get the complete end-to-end scenarios functioning, and I can worry about the 'details' once we start thoroughly testing.
Modifying the generation is actually really, really trivial for me, so I consider the vast majority of the rest of it 'pretty minor'
I have code that's not exposed where auth and other things are handled for generating support for Azure ARM resources, and that's not going to help you.
If this doesn't work for you
Don't panic; this is pre-beta, and it works great for certain scenarios, but they may not be yours
If you can't get it to work with your stuff
Wait till I get back from vacation. I can't really help before I get back.
If you try to use petstore.json for testing
You will be smacked with a newspaper, and I shall not give you a treat.
If you don't think the cmdlets are very good
I should remind you that Jeffrey Snover reviewed this and he thinks a bit differently. You will have to wait till I get back for me to document why this works the way it does. Trust me when I say, I went to great lengths to design things a certain way, and everything has purpose and reason.
Caveats and known issues:
PLEASE IGNORE the errors like:
(they are actually harmless)
Error occurred in handler for 'ReadFile' in session 'session_5':
Error: Could not read 'obj/test.txt'.
at QuickDataSource.ReadStrict (C:\Users\garretts\.autorest\@microsoft.azure_autorest-core@2.0.4280\node_modules\@microsoft.azure\autorest-core\dist\lib\data-store\data-store.js:26:19)
at <anonymous>
Error occurred in handler for 'ReadFile' in session 'session_7':
Error: Could not read 'ContainerRegistryManagement.private.csproj'.
at QuickDataSource.ReadStrict (C:\Users\garretts\.autorest\@microsoft.azure_autorest-core@2.0.4280\node_modules\@microsoft.azure\autorest-core\dist\lib\data-store\data-store.js:26:19)
at <anonymous>
Bugs
there are a lot of little bugs. I haven't published my backlog yet.
WHAT IS MISSING FROM THIS BUILD
auth support
documentation and explanations.
field/class/property/method descriptions. Some are there, but I gotta go thru and do good pass everywhere.
Handlers for Retry/etc --
Persisting/editing names/parameters/etc for generated commands
use of proxy settings/etc
no support for streams, duration, or arrays at the root/base level of the request/response
Getting Started
Requires:
node 8.11.3 (stay away from node 10 for now)
https://nodejs.org/dist/v8.11.3/node-v8.11.3-x64.msi
autorest 2.0.4280+
npm install -g autorest
dotnet 2.0.0 sdk - either install the one from the .net page or use
npm install -g dotnet-sdk-2.0.0
powershell core 6
https://github.com/PowerShell/PowerShell/releases/download/v6.0.2/PowerShell-6.0.2-win-x64.zip
install the autorest.incubator plugin :
autorest --reset
autorest "--use=@microsoft.azure/autorest.incubator@preview"
Usage
autorest --use="@microsoft.azure/autorest.incubator@preview" --powershell --output-folder=output-folder --input-file=path-or-url-to-swagger-file
sample run:
# run these from powershell core.
# Windows Powershell is not ok.
# The cmdlets can work in both, but be patient
autorest --use="@microsoft.azure/autorest.incubator@preview" --powershell --input-file=https://github.com/Azure/azure-rest-api-specs/blob/master/specification/redis/resource-manager/Microsoft.Cache/stable/2018-03-01/redis.json
--output-folder=./generated
# will generate the cmdlets into the output-folder
cd ./generated
# now we do one last step (build proxies and import the module)
./generate-proxies.ps1 -test
# Now you're in a new pwsh instance with the cmdlets
#example:
get-operation -verbose -debug
One quick comment - we're working through this - small typo on npm install, should be - npm i -g dotnet-sdk-2.0.0 not npm i -g dotnet-sdk.2.0.0
$ sudo npm install -g dotnet-sdk.2.0.0
npm WARN notice Due to a recent security incident, all user tokens have been invalidated. Please see https://status.npmjs.org/incidents/dn7c1fgrr7ng for more details. To generate a new token, visit https://www.npmjs.com/settings/~/tokens
or run "npm login".
npm ERR! code E404
npm ERR! 404 Not Found: dotnet-sdk.2.0.0@latest
don't think it works on centos7
Trying the powershell autorest generation code here: https://github.com/Azure/autorest/tree/master/Samples/1a-code-generation-minimal
$ autorest --use="@microsoft.azure/autorest.incubator@preview" --powershell
AutoRest code generation utility [version: 2.0.4280; node: v10.5.0]
(C) 2018 Microsoft Corporation.
https://aka.ms/autorest
There is a new version of AutoRest available (2.0.4282).
> You can install the newer version with with npm install -g autorest@latest
Loading AutoRest core '/Users/alex.guo/.autorest/@microsoft.azure_autorest-core@2.0.4280/node_modules/@microsoft.azure/autorest-core/dist' (2.0.4280)
Loading AutoRest extension '@microsoft.azure/autorest.incubator' (preview->1.0.86)
Loading AutoRest extension '@microsoft.azure/autorest.csharp' (~2.2.51->2.2.67)
Loading AutoRest extension '@microsoft.azure/autorest.modeler' (2.3.50->2.3.50)
WARNING (UndefinedTypeWithSchema): The schema 'User' with an undefined type and decalared properties is a bit ambigious. This has been auto-corrected to 'type:object'
- swagger-document:1:0
WARNING (UndefinedTypeWithSchema): The schema 'Category' with an undefined type and decalared properties is a bit ambigious. This has been auto-corrected to 'type:object'
- swagger-document:1:0
WARNING (UndefinedTypeWithSchema): The schema 'Pet' with an undefined type and decalared properties is a bit ambigious. This has been auto-corrected to 'type:object'
- swagger-document:1:0
WARNING (UndefinedTypeWithSchema): The schema 'Tag' with an undefined type and decalared properties is a bit ambigious. This has been auto-corrected to 'type:object'
- swagger-document:1:0
WARNING (UndefinedTypeWithSchema): The schema 'Order' with an undefined type and decalared properties is a bit ambigious. This has been auto-corrected to 'type:object'
- swagger-document:1:0
WARNING (TypeFileNotValid): The schema type 'file' is not a OAI standard type. This has been auto-corrected to 'type:string' and 'format:binary'
- swagger-document:1:0
Error: Format 'Binary' not implemented.
at SchemaDefinitionResolver.resolveTypeDeclaration (/Users/alex.guo/.autorest/@microsoft.azure_autorest.incubator@1.0.86/node_modules/@microsoft.azure/autorest.incubator/src/csharp/schema/schema-resolver.ts:80:19)
at nameStuffRight (/Users/alex.guo/.autorest/@microsoft.azure_autorest.incubator@1.0.86/node_modules/@microsoft.azure/autorest.incubator/src/csharp/namer.ts:146:49)
Error: Inputs missing.
at process (/Users/alex.guo/.autorest/@microsoft.azure_autorest.incubator@1.0.86/node_modules/@microsoft.azure/autorest.incubator/src/csharp/lowlevel-generator/main.ts:20:13)
Error: Inputs missing.
at Object.processCodeModel (/Users/alex.guo/.autorest/@microsoft.azure_autorest.incubator@1.0.86/node_modules/@microsoft.azure/autorest.incubator/src/common/process-code-model.ts:18:13)
Error: Inputs missing.
at processRequest (/Users/alex.guo/.autorest/@microsoft.azure_autorest.incubator@1.0.86/node_modules/@microsoft.azure/autorest.incubator/src/powershell/powershell-generator.ts:17:13)
And yes, we realize that testing with petstore is a slap on the nose, but given we were having some challenges, we decided to start with the basics as a sanity check
@mallochine : "no support for streams, duration, or arrays at the root/base level of the request/response"
I'm not that familiar with the petstore example (linked in 1a-code-generation-minimal). What is a stream exactly? Is it where 'readme.md' is read into stdin? Should I actually have done --input-file="a.json" or something similar?
If yes to that last question, then I actually still run into the same code generation problem. So, please elaborate on what a 'stream' is?
No; the swagger file has format: binary somewhere in it.
That ends up being a stream. Not yet supported.
Ahha! We've got those all over the place in our code, which means a nice "e-brake" for us here :(
Are you back from PTO @fearthecowboy ? Or more apt question - is there another drop that might support binary in the works?
Does it make sense for us to dig into the code and submit a PR, or just leave you to it at this point? No rush, we want to help you out here as much as we can, as I strongly believe that AutoRest is going to be our true north for both traditional client binaries (for other languages) and PowerShell (from the sounds of it)
Spacing between comments and previous code would be great,
/// <summary>The URI for the proxy server to use</summary>
[System.Management.Automation.Parameter(Mandatory = false, DontShow= true, HelpMessage = "The URI for the proxy server to use")]
public System.Uri Proxy {get;set;}
/// <summary>Credentials for a proxy server to use for the remote call</summary>
[System.Management.Automation.Parameter(Mandatory = false, DontShow= true, HelpMessage = "Credentials for a proxy server to use for the remote call")]
[System.Management.Automation.ValidateNotNull]
public System.Management.Automation.PSCredential ProxyCredential {get;set;}
to
/// <summary>The URI for the proxy server to use</summary>
[System.Management.Automation.Parameter(Mandatory = false, DontShow= true, HelpMessage = "The URI for the proxy server to use")]
public System.Uri Proxy {get;set;}
/// <summary>Credentials for a proxy server to use for the remote call</summary>
[System.Management.Automation.Parameter(Mandatory = false, DontShow= true, HelpMessage = "Credentials for a proxy server to use for the remote call")]
[System.Management.Automation.ValidateNotNull]
public System.Management.Automation.PSCredential ProxyCredential {get;set;}
Yes. Agreed. that's been driving me crazy.
For the ItemGroup > PackageReferences - what are your thoughts about adding Rosyln analyzers in there? When we were handcrafting our cmdlets, we found that using the in-built dotnet analyzers did help give us a wee bit of sanity right on build. Before we did that, we ended up chasing half working code in some areas, that various "stuff" caught. I'm happy to send in a PR if you'd like with my ideas @fearthecowboy
I'm talking about this part here: https://github.com/Azure/autorest.incubator/blob/a86ff8df11778f385e41243d90e7169ebe6e3347/src/powershell/powershell-generator.ts#L79
Yeah; we should have it conditional based on a configuration setting.
ie, add one of these: https://github.com/Azure/autorest.incubator/blob/a86ff8df11778f385e41243d90e7169ebe6e3347/src/powershell/project.ts#L304
this.enableRoslynAnalyzer = await service.GetValue('enable-roslyn-analyzer') || false;
and then emit it when project.enableRoslynAnalyzer is set.
(trying to get things more configuration driven)
ok, I'll give it a crack if I can break away a few cycles over the next few days. That seems like a sane approach.
The only tricky-ish bit is that we're statically creating the csproj by basically echo'ing a static config into a file with .writefile at the moment, which itself is fine.
Do we need to change that to be more like the bit right below it in code (the psd1 bit), where it has some conditional logic?
I could change it over to be this in the PR, but wanted to get your thoughts first
https://github.com/Azure/autorest.incubator/blob/a86ff8df11778f385e41243d90e7169ebe6e3347/src/powershell/powershell-generator.ts#L108
Yeah, it'd probably a good idea to move to the text-emitter style; it makes it easy to exclude stuff.
|
gharchive/issue
| 2018-07-02T02:28:59 |
2025-04-01T04:54:45.013003
|
{
"authors": [
"JonKohler",
"deathly809",
"fearthecowboy",
"mallochine"
],
"repo": "Azure/autorest.incubator",
"url": "https://github.com/Azure/autorest.incubator/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1953229115
|
[typespec-python] when response body is a string, we shouldn't call json on it
typespec here defines our response body as a string: https://github.com/kristapratico/azure-rest-api-specs/blob/azopenai-python/specification/cognitiveservices/OpenAI.Inference/routes.tsp#L131
Generated code tries to call json() on response here and it fails because it's not json:
https://github.com/kristapratico/azure-rest-api-specs/blob/azopenai-python/specification/cognitiveservices/OpenAI.Inference/tsp-output/%40azure-tools/typespec-python/azure/openai/operations/_operations.py#L1335
Not entirely sure, but I think the code should be something like:
deserialized = _deserialize(
str, response.text()
)
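To illustrate why the generated call fails: calling a JSON parser on a plain-text body raises, while keeping the body as text works. This is a standalone sketch with a made-up body, not the actual generated client code:

```python
import json

body = "hello, plain text"  # hypothetical non-JSON response body

try:
    deserialized = json.loads(body)  # roughly what the generated code does today
except json.JSONDecodeError:
    deserialized = body  # what a string-typed response should fall back to

print(deserialized)
```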
Will discuss about it in scrum meeting
|
gharchive/issue
| 2023-10-19T23:55:24 |
2025-04-01T04:54:45.016197
|
{
"authors": [
"kristapratico",
"msyyc"
],
"repo": "Azure/autorest.python",
"url": "https://github.com/Azure/autorest.python/issues/2200",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
816560803
|
BluePrint configure resourceGroup tags from parameter
I am trying to configure tags for ResourceGroups from a blueprint and I am experiencing an issue.
In the example below I want to pass the Tags as an object parameter:
{
"properties": {
"type": "Microsoft.Blueprint/blueprints",
"description": "Management Blueprint",
"targetScope": "subscription",
"parameters": {
"tags": {
"type": "object",
"metadata": {
"displayName": "Enter the Tags that need to be configured"
},
"defaultValue": {
"tags1": "value1",
"tags2": "value2"
}
}
},
"resourceGroups": {
"ResourceGroup1": {
"name": "resourceGroup01",
"location": "westeurope",
"metadata": {
"displayName": "resourceGroup01"
},
"dependsOn": [],
"tags": "[parameters('tags')]"
}
}
}
}
However, the above template returns the error message below:
Import-AzBlueprintWithArtifact: Can't deserialize the JSON file '/management/Blueprint.json'. 'Error converting value "[parameters('tags')]" to type 'System.Collections.Generic.IDictionary`2[System.String,System.String]'. Path 'tags', line 26, position 38.'
When I configure the Tags on the ResourceGroup and use a string parameter as input, it works fine:
{
"properties": {
"type": "Microsoft.Blueprint/blueprints" ,
"description": "Management Blueprint",
"targetScope": "subscription",
"parameters": {
"tagsvalue": {
"type": "string",
"defaultValue": "value1"
}
},
"resourceGroups": {
"ResourceGroup1": {
"name": "resourceGroup01",
"location": "westeurope",
"metadata": {
"displayName": "resourceGroup01"
},
"dependsOn": [],
"tags": {
"Tags1": "[parameters('tagsvalue')]"
}
}
}
}
}
Is this a bug or is it not possible to forward an object to the Tags part?
+1
I don't think we support object parameters. I think the tag values need to be passed individually. Have you tried that?
Hi @pkhabazi
The reason I believe you are getting the error message 'Error converting value "[parameters('tags')]" to type 'System.Collections.Generic.IDictionary`2[System.String,System.String]' is because the "[parameters('tags')]" expression within the resourceGroups section does not actually get substituted with the "tags" object you defined in the parameters section. The ARM interpreter is passing a literal string "[parameters('tags')]" to the function that attempts conversion to a System.Collections.Generic.IDictionary<String,String> object and that is why it fails.
So in your first example, you are actually trying to pass a string to the tags element while it is expecting an object. In your second example, you are passing an object. Keep in mind that in the second example, the "[parameters('tagsvalue')]" string is also not getting replaced with "value1" when you run the Import-AzBlueprintWithArtifact command. It just sets the value of the "Tags1" key to the "[parameters('tagsvalue')]" string.
VS Code's syntax highlighting for ARM templates is misleading in this case. The color coding for the "[parameters('tags')]" expression in the resourceGroups section should be represented in the standard red denoting a string.
|
gharchive/issue
| 2021-02-25T15:49:26 |
2025-04-01T04:54:45.021781
|
{
"authors": [
"DennisR73",
"alex-frankel",
"felipebbc",
"pkhabazi"
],
"repo": "Azure/azure-blueprints",
"url": "https://github.com/Azure/azure-blueprints/issues/52",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
650784426
|
Azure DevOps CLI Create Service Connection with Installation Token
Extension name: Azure DevOps
Description
How do I create a GitHub service connection using an InstallationToken (the installed GitHub app) instead of an OAuth flow/PAT?
I can see the following service connection in one of my existing orgs, by running:
az devops service-endpoint list
[
{
"authorization": {
"scheme": "InstallationToken"
},
"createdBy": {
...
},
"data": {
"AvatarUrl": "https://avatars3.githubusercontent.com/u/63518284?v=4",
"pipelinesSourceProvider": "github"
},
...
"type": "GitHub",
"url": "https://github.com"
}
]
But how do I create such a connection using the DevOps CLI? It only seems to support OAuth or entering a PAT when creating a GitHub connection.
devops
Bump. Not supported?
This would be really useful. At the moment using terraform for devops isn't really possible without this
As a workaround, I've tried using az devops service-endpoint create --org [ORG] -p [PROJECT] --service-endpoint-configuration CustomServiceEndpoint.json with the file below, without much luck. When I access the project build information I get the error
An error occurred while fetching the pipeline. TF400864: The Item specified by the key '26501285-261b-409c-b7ce-154f1da82d74' does not exist. where the key 26501285... is the actual project id
{
"description": "",
"administratorsGroup": null,
"authorization": {
"parameters": {
"accessToken": null
},
"scheme": "InstallationToken"
},
"createdBy": null,
"data": {
"AvatarUrl": "https://avatars2.githubusercontent.com/u/2916417?v=4",
"pipelinesSourceProvider": "github"
},
"name": "SomeName",
"type": "GitHub",
"url": "https://github.com",
"readersGroup": null,
"groupScopeId": null,
"serviceEndpointProjectReferences": null,
"operationStatus": null,
"isReady": true,
"isShared": false,
"owner": "Library"
}
I don't have the specs for this JSON, and since I cannot create it by hand I cannot follow https://docs.microsoft.com/en-gb/azure/devops/cli/service-endpoint?view=azure-devops#create-service-endpoint-using-a-configuration-file to get the JSON.
If anyone figures out this JSON schema, drop a message here.
This process is terrible and not really fit for production, but you can sort of manage it via terraform. I haven't tried creating one via the cli. You can create an OAuth service connection in terraform:
resource "azuredevops_serviceendpoint_github" "organisation" {
project_id = azuredevops_project.platform.id
service_endpoint_name = "some_org"
description = ""
auth_oauth {
oauth_configuration_id = "000000000-0000-0000-0000-000000000000"
}
}
Which you then have to go and manually authorize, as the azuredevops_resource_authorization resource doesn't work. You can then go to your pipeline and "convert" the oauth service endpoint to an "app connection", like this:
This creates a new service connection named "my connection (1)" or whatever you called it. You can then import this new object to terraform with terraform import azuredevops_serviceendpoint_github.s projectid/service-endpoint-id and use it for any new pipelines for that project. I haven't come across a scenario where you'd need multiple github connections per project, so it seems to be a one-time operation.
resource "azuredevops_serviceendpoint_github" "s" {
project_id = azuredevops_project.platform.id
service_endpoint_name = "my connection (1)"
description = ""
}
|
gharchive/issue
| 2020-07-03T22:36:43 |
2025-04-01T04:54:45.029656
|
{
"authors": [
"admin-simeon",
"b0bu",
"crmitchelmore",
"saamorim",
"yonzhan"
],
"repo": "Azure/azure-cli-extensions",
"url": "https://github.com/Azure/azure-cli-extensions/issues/1971",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
423223589
|
This page is incorrect
The examples are different from the definitions.
Also, --group-name is now --name.
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 01f8fe72-48f6-2e30-0741-d83bafc5d438
Version Independent ID: 32ed6908-2c71-3c17-683b-d07eb08d4590
Content: az account management-group
Content Source: latest/docs-ref-autogen/ext/managementgroups/account/management-group.yml
Service: azure
GitHub Login: @rloutlaw
Microsoft Alias: routlaw
Yeah, I see this too. Various commands are mixing up the az account management-group with the az cli extension for managementgroups.
Please remove this page, or reference the correct CLI https://docs.microsoft.com/en-us/cli/azure/account/management-group?view=azure-cli-latest -- this caused much confusion.
Thank all for the findings. I will remove this extension from this repo.
|
gharchive/issue
| 2019-03-20T12:25:23 |
2025-04-01T04:54:45.034874
|
{
"authors": [
"KevinBrooke",
"bebattis",
"jiasli",
"kylecweeks"
],
"repo": "Azure/azure-cli-extensions",
"url": "https://github.com/Azure/azure-cli-extensions/issues/585",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
429272929
|
AZ image copy between different EA subscription ( same location/Different location) not working.
If the issue is to do with Azure CLI 2.0 in-particular, create an issue here at Azure/azure-cli
Extension name (the extension in question)
az image copy
Description of issue (in as much detail as possible)
Throws an error while copying the image from one EA subscription to another. Tried with the same location and also a different location.
command failed: ['/opt/az/bin/python3', '-m', 'azure.cli', 'group', 'create', '--name', 'image-copy-rg', '--location', 'southindia', '--output', 'json', '--subscription', '****', '--tags', 'created_by=image-copy-extension']
output: ERROR: Operation failed with status: 'Forbidden'. Details: 403 Client Error: Forbidden for url: https://management.azure.com/subscriptions//resourcegroups/image-copy-rg?api-version=2018-05-01
The command failed with an unexpected error. Here is the traceback:
Command '['/opt/az/bin/python3', '-m', 'azure.cli', 'group', 'create', '--name', 'image-copy-rg', '--location', 'southindia', '--output', 'json', '--subscription', '**********', '--tags', 'created_by=image-copy-extension']' returned non-zero exit status 1.
Traceback (most recent call last):
File "/opt/az/lib/python3.6/site-packages/knack/cli.py", line 206, in invoke
cmd_result = self.invocation.execute(args)
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/init.py", line 351, in execute
raise ex
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/init.py", line 409, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/init.py", line 402, in _run_job
six.reraise(sys.exc_info())
File "/opt/az/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
result = cmd_copy(params)
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/init.py", line 171, in call
return self.handler(args, kwargs)
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/init.py", line 451, in default_command_handler
return op(command_args)
File "/home/jaya/.azure/cliextensions/image-copy-extension/azext_imagecopy/custom.py", line 104, in imagecopy
target_subscription)
File "/home/jaya/.azure/cliextensions/image-copy-extension/azext_imagecopy/custom.py", line 193, in create_resource_group
run_cli_command(cli_cmd)
File "/home/jaya/.azure/cliextensions/image-copy-extension/azext_imagecopy/cli_utils.py", line 35, in run_cli_command
raise ex
File "/home/jaya/.azure/cliextensions/image-copy-extension/azext_imagecopy/cli_utils.py", line 21, in run_cli_command
cmd_output = check_output(cmd, stderr=STDOUT, universal_newlines=True)
File "/opt/az/lib/python3.6/subprocess.py", line 336, in check_output
kwargs).stdout
File "/opt/az/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/opt/az/bin/python3', '-m', 'azure.cli', 'group', 'create', '--name', 'image-copy-rg', '--location', 'southindia', '--output', 'json', '--subscription', '', '--tags', 'created_by=image-copy-extension']' returned non-zero exit status 1.
@jayaakshayan the command failed because the temporary resource group creation failed - see the first few lines of your message.
Does the user running the image copy command have permissions to create new resource groups?
If not, a quick workaround is to pre-create the group manually just like this command tries to do:
command failed: ['/opt/az/bin/python3', '-m', 'azure.cli', 'group', 'create', '--name', 'image-copy-rg', '--location', 'southindia', '--output', 'json', '--subscription', '*************', '--tags', 'created_by=image-copy-extension']
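A sketch of that workaround, using the group name, location, and tag from the log above (the extra policy-required tag is a placeholder, since the exact tag your policy enforces is not shown here):

```shell
# Pre-create the temporary resource group the extension expects, in the
# target subscription, with whatever tags your Azure Policy requires.
az group create \
  --name image-copy-rg \
  --location southindia \
  --subscription "<target-subscription-id>" \
  --tags created_by=image-copy-extension "<policy-required-tag>=<value>"
```

With the group already in place, `az image copy` should skip the group creation that was being denied.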
Thanks Tamir, it was a policy-level restriction that disallowed creating resource groups without a tag. I removed the policy for the specific resource group, which sorted out the issue.
|
gharchive/issue
| 2019-04-04T13:15:41 |
2025-04-01T04:54:45.048403
|
{
"authors": [
"jayaakshayan",
"tamirkamara"
],
"repo": "Azure/azure-cli-extensions",
"url": "https://github.com/Azure/azure-cli-extensions/issues/622",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
894949262
|
[Front Door] Add update func for backend and fix backend_host_header
Fix issues:
https://github.com/Azure/azure-cli/issues/17270
https://github.com/Azure/azure-cli/issues/17269
This checklist is used to make sure that common guidelines for a pull request are followed.
General Guidelines
[ ] Have you run azdev style <YOUR_EXT> locally? (pip install azdev required)
[ ] Have you run python scripts/ci/test_index.py -q locally?
For new extensions:
[ ] My extension description/summary conforms to the Extension Summary Guidelines.
About Extension Publish
There is a pipeline to automatically build, upload and publish extension wheels.
Once your PR is merged into master branch, a new PR will be created to update src/index.json automatically.
The precondition is to put your code inside this repo and upgrade the version in the PR but do not modify src/index.json.
Front Door
Please update extension version and history before release.
|
gharchive/pull-request
| 2021-05-19T03:20:32 |
2025-04-01T04:54:45.053063
|
{
"authors": [
"00Kai0",
"kairu-ms",
"yonzhan"
],
"repo": "Azure/azure-cli-extensions",
"url": "https://github.com/Azure/azure-cli-extensions/pull/3394",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1631352179
|
serviceconnector-passwordless update dependency
This checklist is used to make sure that common guidelines for a pull request are followed.
Related command
General Guidelines
[x] Have you run azdev style <YOUR_EXT> locally? (pip install azdev required)
[x] Have you run python scripts/ci/test_index.py -q locally?
For new extensions:
[ ] My extension description/summary conforms to the Extension Summary Guidelines.
About Extension Publish
There is a pipeline to automatically build, upload and publish extension wheels.
Once your pull request is merged into main branch, a new pull request will be created to update src/index.json automatically.
You only need to update the version information in file setup.py and historical information in file HISTORY.rst in your PR but do not modify src/index.json.
serviceconnector
Hi @jsntcy, could you help merge the PR. It's an urgent fix.
|
gharchive/pull-request
| 2023-03-20T04:11:18 |
2025-04-01T04:54:45.057405
|
{
"authors": [
"xfz11",
"yonzhan"
],
"repo": "Azure/azure-cli-extensions",
"url": "https://github.com/Azure/azure-cli-extensions/pull/6038",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2041039711
|
Add new params to support auto binding
This checklist is used to make sure that common guidelines for a pull request are followed.
Related command
Add argument --bind-service-registry to spring app create.
Add argument --bind-application-configuration-service to spring app create.
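A hypothetical usage sketch of the two new flags from this PR (resource names are placeholders, and the exact accepted values depend on the released extension):

```shell
az spring app create \
  --resource-group <rg> \
  --service <spring-instance> \
  --name <app-name> \
  --bind-service-registry <service-registry-name-or-id> \
  --bind-application-configuration-service <acs-name-or-id>
```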
General Guidelines
[ ] Have you run azdev style <YOUR_EXT> locally? (pip install azdev required)
[x] Have you run python scripts/ci/test_index.py -q locally? (pip install wheel==0.30.0 required)
For new extensions:
[ ] My extension description/summary conforms to the Extension Summary Guidelines.
About Extension Publish
There is a pipeline to automatically build, upload and publish extension wheels.
Once your pull request is merged into main branch, a new pull request will be created to update src/index.json automatically.
You only need to update the version information in file setup.py and historical information in file HISTORY.rst in your PR but do not modify src/index.json.
Thank you for your contribution! We will review the pull request and get back to you soon.
Please fix CI issues.
|
gharchive/pull-request
| 2023-12-14T06:58:37 |
2025-04-01T04:54:45.062238
|
{
"authors": [
"moarychan",
"yonzhan"
],
"repo": "Azure/azure-cli-extensions",
"url": "https://github.com/Azure/azure-cli-extensions/pull/7083",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
490515129
|
Failure in running az vmss update on diskprofile
This is autogenerated. Please review and update as needed.
Describe the bug
Command Name
az vmss update
Errors:
pop from empty list
Traceback (most recent call last):
Temp\pip-install-qxmmnr17\knack\knack\cli.py, ln 206, in invoke
azure\cli\core\commands\__init__.py, ln 603, in execute
azure\cli\core\commands\__init__.py, ln 661, in _run_jobs_serially
azure\cli\core\commands\__init__.py, ln 654, in _run_job
Local\Temp\pip-install-qxmmnr17\six\six.py, ln 693, in reraise
azure\cli\core\commands\__init__.py, ln 631, in _run_job
azure\cli\core\commands\__init__.py, ln 305, in __call__
azure\cli\core\commands\arm.py, ln 525, in handler
azure\cli\core\commands\arm.py, ln 806, in set_properties
azure\cli\core\commands\arm.py, ln 968, in _get_name_path
IndexError: pop from empty list
To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
Put any pre-requisite steps here...
az vmss update -g {} -n {} --set {} {} {}
Expected Behavior
Environment Summary
Windows-10-10.0.18362-SP0
Python 3.6.6
Shell: cmd.exe
azure-cli 2.0.72
Additional Context
The same works from PowerShell
PS C:\Users\naragati> $vmss = Get-AzVmss -ResourceGroupName "inmobilab" -VMScaleSetName "navininmboi"
PS C:\Users\naragati> $vmss.VirtualMachineProfile.StorageProfile.OsDisk.DiskSizeGB = 512
PS C:\Users\naragati> $vmss | Update-AzVmss
ResourceGroupName : inmobilab
Sku :
Name : Standard_DS1_v2
Tier : Standard
Capacity : 2
UpgradePolicy :
Mode : Manual
VirtualMachineProfile :
OsProfile :
ComputerNamePrefix : navininmb
AdminUsername : navin
LinuxConfiguration :
DisablePasswordAuthentication : False
ProvisionVMAgent : True
StorageProfile :
ImageReference :
Publisher : Canonical
Offer : UbuntuServer
Sku : 16.04-LTS
Version : latest
OsDisk :
Caching : ReadWrite
CreateOption : FromImage
DiskSizeGB : 512
ManagedDisk :
StorageAccountType : Premium_LRS
NetworkProfile :
NetworkInterfaceConfigurations[0] :
Name : navininmboiNic
Primary : True
EnableAcceleratedNetworking : False
NetworkSecurityGroup :
Id : /subscriptions/8d2d94f8-2e1c-428f-9e66-d36fd0b60f42/resourceGroups/inmobilab/p
roviders/Microsoft.Network/networkSecurityGroups/navininmboinsg
DnsSettings :
IpConfigurations[0] :
Name : navininmboiIpConfig
Subnet :
Id : /subscriptions/8d2d94f8-2e1c-428f-9e66-d36fd0b60f42/resourceGroups/inmobilab/p
roviders/Microsoft.Network/virtualNetworks/inmobi/subnets/default
PublicIPAddressConfiguration :
Name : pub1
IdleTimeoutInMinutes : 15
PrivateIPAddressVersion : IPv4
EnableIPForwarding : False
Priority : Regular
ProvisioningState : Succeeded
Overprovision : True
DoNotRunExtensionsOnOverprovisionedVMs : False
UniqueId : 78e029f1-e242-42ac-9c62-a4fa2fe1b388
SinglePlacementGroup : True
PlatformFaultDomainCount : 5
Id : /subscriptions/8d2d94f8-2e1c-428f-9e66-d36fd0b60f42/resourceGroups/inmobilab/p
roviders/Microsoft.Compute/virtualMachineScaleSets/navininmboi
Name : navininmboi
Type : Microsoft.Compute/virtualMachineScaleSets
Location : centralus
Tags : {}
@qwordy please take a look and respond.
Thanks for your feedback.
Could you please run az vmss update --name {} -g {} --set {} {} {} --debug and paste the output here
Hi, could you provide more detailed info? You can paste the full command here except sensitive information like resource group or name, so that I can diagnose it.
Hi,
I’ve updated the outputs in the github page on same day.
Regards,
Navin
This is the same command that I ran the other day for my internal lab
C:\Users\naragati>az vmss update -g inmobilab -n navininmboi --set VirtualMachineProfile.StorageProfile.OsDisk.DiskSizeGB = 512
Do you have a space between DiskSizeGB and 512?
It's the problem. I see the same error if I insert space between key and value.
Space is used to separate multiple key-value pairs in --set.
However, I think the error message is confusing. I plan to make it more user-friendly.
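Concretely, the key and value must be joined by `=` with no surrounding spaces, because spaces separate multiple key=value pairs in `--set`. A sketch using the names from this thread:

```shell
# Wrong: the spaces make "=" and "512" separate positional tokens,
# which triggers the "pop from empty list" error.
#   az vmss update -g inmobilab -n navininmboi \
#     --set VirtualMachineProfile.StorageProfile.OsDisk.DiskSizeGB = 512

# Right: no spaces around "=". Multiple pairs would be space-separated,
# e.g. --set a.b=1 c.d=2
az vmss update -g inmobilab -n navininmboi \
  --set VirtualMachineProfile.StorageProfile.OsDisk.DiskSizeGB=512
```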
Thanks for the feedback, yeah if the error message can be user friendly it would help in self-serving.
Add to Sprint 75. Make error message more understandable.
The user's problem has been solved. However, we'll improve user experience by providing more accurate error information in future release. Close the issue.
I have made the error message more clear. PR merged.
|
gharchive/issue
| 2019-09-06T20:44:13 |
2025-04-01T04:54:45.086252
|
{
"authors": [
"naragati",
"qwordy",
"yonzhan"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/issues/10464",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
705226223
|
az vmss update can't set osDisk caching to None
This is autogenerated. Please review and update as needed.
Describe the bug
These commands work:
$ az vmss update --set virtualMachineProfile.storageProfile.osDisk.caching=ReadOnly
# outputs
...
"storageProfile": {
"osDisk": {
"caching": "ReadOnly"
...
$ az vmss update --set virtualMachineProfile.storageProfile.osDisk.caching=ReadWrite
# outputs
...
"storageProfile": {
"osDisk": {
"caching": "ReadWrite"
...
This one does not. It returns successfully but will not change the value of the field to "None".
$ az vmss update --set virtualMachineProfile.storageProfile.osDisk.caching=None
# outputs
...
"storageProfile": {
"osDisk": {
"caching": "ReadWrite"
...
If I try to PATCH the VMSS directly and then update all instances via the CLI, the operation succeeds. So it's not an API level issue, the bug is in CLI or Python SDK.
ace@ace-vm:~$ cat patch.json
{
"properties": {
"virtualMachineProfile": {
"storageProfile": {
"osDisk": {
"caching": "None"
}
}
}
}
}
ace@ace-vm:~$
ace@ace-vm:~$ az rest --method patch --uri "${VMSS_RESOURCE_ID}?api-version=2020-06-01" --body "${PATCH}"
...
"storageProfile": {
"osDisk": {
"caching": "None"
...
Command Name
az vmss update
Errors:
To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
Put any pre-requisite steps here...
Create a VMSS with ReadWrite/ReadOnly OS disk caching, then try this:
az vmss update -g {} -n {} --set virtualMachineProfile.storageProfile.osDisk.caching=None
Expected Behavior
It should change the caching mode to None, as demonstrated via the patch.
Environment Summary
Linux-5.4.0-37-generic-x86_64-with-debian-bullseye-sid
Python 3.6.10
Installer: DEB
azure-cli 2.11.1
Extensions:
kusto 0.1.0 (dev) /home/ace/code/azure-cli-extensions/src/kusto
aks-preview 0.4.62 (dev) /home/ace/code/azure-cli-extensions/src/aks-preview
Additional Context
Hi @qwordy, could you please help take a look at whether this is a partial patch issue? Thanks.
You can't set it to None through PUT. None means don't change. It keeps the old value. This is designed by service.
@qwordy can you please explain the sample output for my PATCH? 'None' does NOT mean "don't change". It is a valid value from the service perspective?
I disagree that this works as intended. Please see my workaround for this bug: https://github.com/alexeldeib/azbench/blob/3d59017d1a0c3dcb5a6a9a06088ff825da5145e3/scripts/cluster.sh#L73-L88
It is a known issue in Azure CLI. The default implementation of update command in CLI is using PUT. We are migrating it to PATCH. It is a long term plan.
|
gharchive/issue
| 2020-09-21T01:10:30 |
2025-04-01T04:54:45.093538
|
{
"authors": [
"alexeldeib",
"qwordy",
"yungezz"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/issues/15217",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1904340363
|
when scripting az cli with the shell, special chars should be stripped from inputs.
Describe the bug
While trying to run the following script to extract the id for each address, the command fails with the error below.
for i in `cat ~/Downloads/addresses.csv`
do
az ad user show --id $i --query "id" --output tsv
done
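The failing ids in the debug log end with a stray carriage return (`\r`, URL-encoded as `%0D`), which is what a CSV saved with Windows CRLF line endings produces. A minimal sketch of stripping it before the loop (sample file contents are stand-ins for the real addresses.csv):

```shell
# Stand-in for a CSV saved with Windows (CRLF) line endings.
printf 'user1@example.com\r\nuser2@example.com\r\n' > addresses.csv

# Strip carriage returns so each id is clean before it reaches az.
for i in $(tr -d '\r' < addresses.csv)
do
  # In the real script this line would be:
  #   az ad user show --id "$i" --query "id" --output tsv
  echo "id: $i"
done
```

The CLI could arguably strip such control characters itself (as the issue title suggests), but until then, sanitizing the input in the shell avoids the malformed Graph URL.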
Related command
az ad user show --id $i --query "id" --output tsv
Errors
DEBUG: cli.knack.cli: Command arguments: ['ad', 'user', 'show', '--id', 'removed#EXT#@removed.onmicrosoft.com\r', '--query', 'id', '--output', 'tsv', '--debug']
DEBUG: cli.knack.cli: __init__ debug log:
Cannot enable color.
DEBUG: cli.knack.cli: Event: Cli.PreExecute []
DEBUG: cli.knack.cli: Event: CommandParser.OnGlobalArgumentsCreate [<function CLILogging.on_global_arguments at 0x101385fc0>, <function OutputProducer.on_global_arguments at 0x101416b90>, <function CLIQuery.on_global_arguments at 0x101484040>]
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPreCommandTableCreate []
DEBUG: cli.azure.cli.core: Modules found from index for 'ad': ['azure.cli.command_modules.role']
DEBUG: cli.azure.cli.core: Loading command modules:
DEBUG: cli.azure.cli.core: Name Load Time Groups Commands
DEBUG: cli.azure.cli.core: role 0.005 17 61
DEBUG: cli.azure.cli.core: Total (1) 0.005 17 61
DEBUG: cli.azure.cli.core: These extensions are not installed and will be skipped: ['azext_ai_examples', 'azext_next']
DEBUG: cli.azure.cli.core: Loading extensions:
DEBUG: cli.azure.cli.core: Name Load Time Groups Commands Directory
DEBUG: cli.azure.cli.core: Total (0) 0.000 0 0
DEBUG: cli.azure.cli.core: Loaded 17 groups, 61 commands.
DEBUG: cli.azure.cli.core: Found a match in the command table.
DEBUG: cli.azure.cli.core: Raw command : ad user show
DEBUG: cli.azure.cli.core: Command table: ad user show
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPreCommandTableTruncate [<function AzCliLogging.init_command_file_logging at 0x101f2dab0>]
DEBUG: cli.azure.cli.core.azlogging: metadata file logging enabled - writing logs to '/Users/anpatel/.azure/commands/2023-09-20.03-21-30.ad_user_show.83281.log'.
INFO: az_command_data_logger: command args: ad user show --id {} --query {} --output {} --debug
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPreArgumentLoad [<function register_global_subscription_argument.<locals>.add_subscription_parameter at 0x101f4a440>]
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPostArgumentLoad []
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPostCommandTableCreate [<function register_ids_argument.<locals>.add_ids_arguments at 0x10200fe20>, <function register_cache_arguments.<locals>.add_cache_arguments at 0x10200ff40>]
DEBUG: cli.knack.cli: Event: CommandInvoker.OnCommandTableLoaded []
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPreParseArgs []
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPostParseArgs [<function OutputProducer.handle_output_argument at 0x101416c20>, <function CLIQuery.handle_query_parameter at 0x1014840d0>, <function register_ids_argument.<locals>.parse_ids_arguments at 0x10200feb0>]
DEBUG: cli.azure.cli.core.util: Retrieving token for resource https://graph.microsoft.com/
DEBUG: cli.azure.cli.core.auth.persistence: build_persistence: location='/Users/anpatel/.azure/msal_token_cache.json', encrypt=False
DEBUG: cli.azure.cli.core.auth.binary_cache: load: /Users/anpatel/.azure/msal_http_cache.bin
DEBUG: urllib3.util.retry: Converted retries value: 1 -> Retry(total=1, connect=None, read=None, redirect=None, status=None)
DEBUG: msal.authority: openid_config = {'token_endpoint': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/oauth2/v2.0/token', 'token_endpoint_auth_methods_supported': ['client_secret_post', 'private_key_jwt', 'client_secret_basic'], 'jwks_uri': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/discovery/v2.0/keys', 'response_modes_supported': ['query', 'fragment', 'form_post'], 'subject_types_supported': ['pairwise'], 'id_token_signing_alg_values_supported': ['RS256'], 'response_types_supported': ['code', 'id_token', 'code id_token', 'id_token token'], 'scopes_supported': ['openid', 'profile', 'email', 'offline_access'], 'issuer': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/v2.0', 'request_uri_parameter_supported': False, 'userinfo_endpoint': 'https://graph.microsoft.com/oidc/userinfo', 'authorization_endpoint': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/oauth2/v2.0/authorize', 'device_authorization_endpoint': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/oauth2/v2.0/devicecode', 'http_logout_supported': True, 'frontchannel_logout_supported': True, 'end_session_endpoint': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/oauth2/v2.0/logout', 'claims_supported': ['sub', 'iss', 'cloud_instance_name', 'cloud_instance_host_name', 'cloud_graph_host_name', 'msgraph_host', 'aud', 'exp', 'iat', 'auth_time', 'acr', 'nonce', 'preferred_username', 'name', 'tid', 'ver', 'at_hash', 'c_hash', 'email'], 'kerberos_endpoint': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/kerberos', 'tenant_region_scope': 'NA', 'cloud_instance_name': 'microsoftonline.com', 'cloud_graph_host_name': 'graph.windows.net', 'msgraph_host': 'graph.microsoft.com', 'rbac_url': 'https://pas.windows.net'}
DEBUG: msal.application: Broker enabled? False
DEBUG: cli.azure.cli.core.auth.msal_authentication: UserCredential.get_token: scopes=('https://graph.microsoft.com//.default',), claims=None, kwargs={}
DEBUG: msal.application: Cache hit an AT
DEBUG: msal.telemetry: Generate or reuse correlation_id: 7c59f099-ad96-4f7d-afe5-a06dbf4f139f
INFO: cli.azure.cli.core.util: Request URL: 'https://graph.microsoft.com/v1.0/users/adityaagupta31_gmail.com%23EXT%23@onevmw.onmicrosoft.com%0D'
INFO: cli.azure.cli.core.util: Request method: 'GET'
INFO: cli.azure.cli.core.util: Request headers:
INFO: cli.azure.cli.core.util: 'User-Agent': 'python/3.10.13 (macOS-13.5.2-arm64-arm-64bit) AZURECLI/2.52.0 (HOMEBREW)'
INFO: cli.azure.cli.core.util: 'Accept-Encoding': 'gzip, deflate'
INFO: cli.azure.cli.core.util: 'Accept': '*/*'
INFO: cli.azure.cli.core.util: 'Connection': 'keep-alive'
INFO: cli.azure.cli.core.util: 'x-ms-client-request-id': 'deea7171-f2d6-4cda-b072-9ff09eaace8b'
INFO: cli.azure.cli.core.util: 'CommandName': 'ad user show'
INFO: cli.azure.cli.core.util: 'ParameterSetName': '--id --query --output --debug'
INFO: cli.azure.cli.core.util: 'Authorization': 'Bearer eyJ0eXAiOiJKV...'
INFO: cli.azure.cli.core.util: Request body:
INFO: cli.azure.cli.core.util: None
DEBUG: urllib3.connectionpool: Starting new HTTPS connection (1): graph.microsoft.com:443
DEBUG: urllib3.connectionpool: https://graph.microsoft.com:443 "GET /v1.0/users/adityaagupta31_gmail.com%23EXT%23@onevmw.onmicrosoft.com%0D HTTP/1.1" 400 324
INFO: cli.azure.cli.core.util: Response status: 400
INFO: cli.azure.cli.core.util: Response headers:
INFO: cli.azure.cli.core.util: 'Content-Type': 'text/html; charset=us-ascii'
INFO: cli.azure.cli.core.util: 'Date': 'Wed, 20 Sep 2023 07:21:30 GMT'
INFO: cli.azure.cli.core.util: 'Connection': 'close'
INFO: cli.azure.cli.core.util: 'Content-Length': '324'
INFO: cli.azure.cli.core.util: Response content:
INFO: cli.azure.cli.core.util: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid URL</h2>
<hr><p>HTTP Error 400. The request URL is invalid.</p>
</BODY></HTML>
DEBUG: cli.azure.cli.core.azclierror: Traceback (most recent call last):
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/requests/models.py", line 971, in json
return complexjson.loads(self.text, **kwargs)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/knack/cli.py", line 233, in invoke
cmd_result = self.invocation.execute(args)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 663, in execute
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 726, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 718, in _run_job
return cmd_copy.exception_handler(ex)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/commands.py", line 51, in graph_err_handler
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 697, in _run_job
result = cmd_copy(params)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 333, in __call__
return self.handler(*args, **kwargs)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py", line 363, in handler
show_exception_handler(ex)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/arm.py", line 429, in show_exception_handler
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py", line 361, in handler
return op(**command_args)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/custom.py", line 1866, in show_user
return client.user_get(upn_or_object_id)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/_msgrpah/_graph_client.py", line 304, in user_get
result = self._send("GET", "{}".format(_get_user_url(id_or_upn)))
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/_msgrpah/_graph_client.py", line 55, in _send
raise GraphError(ex.response.json()['error']['message'], ex.response) from ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/requests/models.py", line 975, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
ERROR: cli.azure.cli.core.azclierror: The command failed with an unexpected error. Here is the traceback:
ERROR: az_command_data_logger: The command failed with an unexpected error. Here is the traceback:
ERROR: cli.azure.cli.core.azclierror: Expecting value: line 1 column 1 (char 0)
Traceback (most recent call last):
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/requests/models.py", line 971, in json
return complexjson.loads(self.text, **kwargs)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/knack/cli.py", line 233, in invoke
cmd_result = self.invocation.execute(args)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 663, in execute
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 726, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 718, in _run_job
return cmd_copy.exception_handler(ex)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/commands.py", line 51, in graph_err_handler
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 697, in _run_job
result = cmd_copy(params)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 333, in __call__
return self.handler(*args, **kwargs)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py", line 363, in handler
show_exception_handler(ex)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/arm.py", line 429, in show_exception_handler
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py", line 361, in handler
return op(**command_args)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/custom.py", line 1866, in show_user
return client.user_get(upn_or_object_id)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/_msgrpah/_graph_client.py", line 304, in user_get
result = self._send("GET", "{}".format(_get_user_url(id_or_upn)))
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/_msgrpah/_graph_client.py", line 55, in _send
raise GraphError(ex.response.json()['error']['message'], ex.response) from ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/requests/models.py", line 975, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
ERROR: az_command_data_logger: Expecting value: line 1 column 1 (char 0)
Traceback (most recent call last):
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/requests/models.py", line 971, in json
return complexjson.loads(self.text, **kwargs)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/knack/cli.py", line 233, in invoke
cmd_result = self.invocation.execute(args)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 663, in execute
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 726, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 718, in _run_job
return cmd_copy.exception_handler(ex)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/commands.py", line 51, in graph_err_handler
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 697, in _run_job
result = cmd_copy(params)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 333, in __call__
return self.handler(*args, **kwargs)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py", line 363, in handler
show_exception_handler(ex)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/arm.py", line 429, in show_exception_handler
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py", line 361, in handler
return op(**command_args)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/custom.py", line 1866, in show_user
return client.user_get(upn_or_object_id)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/_msgrpah/_graph_client.py", line 304, in user_get
result = self._send("GET", "{}".format(_get_user_url(id_or_upn)))
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/_msgrpah/_graph_client.py", line 55, in _send
raise GraphError(ex.response.json()['error']['message'], ex.response) from ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/requests/models.py", line 975, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
To check existing issues, please visit: https://github.com/Azure/azure-cli/issues
DEBUG: cli.knack.cli: Event: Cli.PostExecute [<function AzCliLogging.deinit_cmd_metadata_logging at 0x101f2dcf0>]
INFO: az_command_data_logger: exit code: 1
INFO: cli.__main__: Command ran in 0.396 seconds (init: 0.115, invoke: 0.281)
Issue script & Debug output
DEBUG: cli.knack.cli: Command arguments: ['ad', 'user', 'show', '--id', 'removed#EXT#@removed.onmicrosoft.com\r', '--query', 'id', '--output', 'tsv', '--debug']
DEBUG: cli.knack.cli: __init__ debug log:
Cannot enable color.
DEBUG: cli.knack.cli: Event: Cli.PreExecute []
DEBUG: cli.knack.cli: Event: CommandParser.OnGlobalArgumentsCreate [<function CLILogging.on_global_arguments at 0x101385fc0>, <function OutputProducer.on_global_arguments at 0x101416b90>, <function CLIQuery.on_global_arguments at 0x101484040>]
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPreCommandTableCreate []
DEBUG: cli.azure.cli.core: Modules found from index for 'ad': ['azure.cli.command_modules.role']
DEBUG: cli.azure.cli.core: Loading command modules:
DEBUG: cli.azure.cli.core: Name Load Time Groups Commands
DEBUG: cli.azure.cli.core: role 0.005 17 61
DEBUG: cli.azure.cli.core: Total (1) 0.005 17 61
DEBUG: cli.azure.cli.core: These extensions are not installed and will be skipped: ['azext_ai_examples', 'azext_next']
DEBUG: cli.azure.cli.core: Loading extensions:
DEBUG: cli.azure.cli.core: Name Load Time Groups Commands Directory
DEBUG: cli.azure.cli.core: Total (0) 0.000 0 0
DEBUG: cli.azure.cli.core: Loaded 17 groups, 61 commands.
DEBUG: cli.azure.cli.core: Found a match in the command table.
DEBUG: cli.azure.cli.core: Raw command : ad user show
DEBUG: cli.azure.cli.core: Command table: ad user show
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPreCommandTableTruncate [<function AzCliLogging.init_command_file_logging at 0x101f2dab0>]
DEBUG: cli.azure.cli.core.azlogging: metadata file logging enabled - writing logs to '/Users/anpatel/.azure/commands/2023-09-20.03-21-30.ad_user_show.83281.log'.
INFO: az_command_data_logger: command args: ad user show --id {} --query {} --output {} --debug
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPreArgumentLoad [<function register_global_subscription_argument.<locals>.add_subscription_parameter at 0x101f4a440>]
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPostArgumentLoad []
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPostCommandTableCreate [<function register_ids_argument.<locals>.add_ids_arguments at 0x10200fe20>, <function register_cache_arguments.<locals>.add_cache_arguments at 0x10200ff40>]
DEBUG: cli.knack.cli: Event: CommandInvoker.OnCommandTableLoaded []
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPreParseArgs []
DEBUG: cli.knack.cli: Event: CommandInvoker.OnPostParseArgs [<function OutputProducer.handle_output_argument at 0x101416c20>, <function CLIQuery.handle_query_parameter at 0x1014840d0>, <function register_ids_argument.<locals>.parse_ids_arguments at 0x10200feb0>]
DEBUG: cli.azure.cli.core.util: Retrieving token for resource https://graph.microsoft.com/
DEBUG: cli.azure.cli.core.auth.persistence: build_persistence: location='/Users/anpatel/.azure/msal_token_cache.json', encrypt=False
DEBUG: cli.azure.cli.core.auth.binary_cache: load: /Users/anpatel/.azure/msal_http_cache.bin
DEBUG: urllib3.util.retry: Converted retries value: 1 -> Retry(total=1, connect=None, read=None, redirect=None, status=None)
DEBUG: msal.authority: openid_config = {'token_endpoint': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/oauth2/v2.0/token', 'token_endpoint_auth_methods_supported': ['client_secret_post', 'private_key_jwt', 'client_secret_basic'], 'jwks_uri': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/discovery/v2.0/keys', 'response_modes_supported': ['query', 'fragment', 'form_post'], 'subject_types_supported': ['pairwise'], 'id_token_signing_alg_values_supported': ['RS256'], 'response_types_supported': ['code', 'id_token', 'code id_token', 'id_token token'], 'scopes_supported': ['openid', 'profile', 'email', 'offline_access'], 'issuer': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/v2.0', 'request_uri_parameter_supported': False, 'userinfo_endpoint': 'https://graph.microsoft.com/oidc/userinfo', 'authorization_endpoint': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/oauth2/v2.0/authorize', 'device_authorization_endpoint': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/oauth2/v2.0/devicecode', 'http_logout_supported': True, 'frontchannel_logout_supported': True, 'end_session_endpoint': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/oauth2/v2.0/logout', 'claims_supported': ['sub', 'iss', 'cloud_instance_name', 'cloud_instance_host_name', 'cloud_graph_host_name', 'msgraph_host', 'aud', 'exp', 'iat', 'auth_time', 'acr', 'nonce', 'preferred_username', 'name', 'tid', 'ver', 'at_hash', 'c_hash', 'email'], 'kerberos_endpoint': 'https://login.microsoftonline.com/b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0/kerberos', 'tenant_region_scope': 'NA', 'cloud_instance_name': 'microsoftonline.com', 'cloud_graph_host_name': 'graph.windows.net', 'msgraph_host': 'graph.microsoft.com', 'rbac_url': 'https://pas.windows.net'}
DEBUG: msal.application: Broker enabled? False
DEBUG: cli.azure.cli.core.auth.msal_authentication: UserCredential.get_token: scopes=('https://graph.microsoft.com//.default',), claims=None, kwargs={}
DEBUG: msal.application: Cache hit an AT
DEBUG: msal.telemetry: Generate or reuse correlation_id: 7c59f099-ad96-4f7d-afe5-a06dbf4f139f
INFO: cli.azure.cli.core.util: Request URL: 'https://graph.microsoft.com/v1.0/users/adityaagupta31_gmail.com%23EXT%23@onevmw.onmicrosoft.com%0D'
INFO: cli.azure.cli.core.util: Request method: 'GET'
INFO: cli.azure.cli.core.util: Request headers:
INFO: cli.azure.cli.core.util: 'User-Agent': 'python/3.10.13 (macOS-13.5.2-arm64-arm-64bit) AZURECLI/2.52.0 (HOMEBREW)'
INFO: cli.azure.cli.core.util: 'Accept-Encoding': 'gzip, deflate'
INFO: cli.azure.cli.core.util: 'Accept': '*/*'
INFO: cli.azure.cli.core.util: 'Connection': 'keep-alive'
INFO: cli.azure.cli.core.util: 'x-ms-client-request-id': 'deea7171-f2d6-4cda-b072-9ff09eaace8b'
INFO: cli.azure.cli.core.util: 'CommandName': 'ad user show'
INFO: cli.azure.cli.core.util: 'ParameterSetName': '--id --query --output --debug'
INFO: cli.azure.cli.core.util: 'Authorization': 'Bearer eyJ0eXAiOiJKV...'
INFO: cli.azure.cli.core.util: Request body:
INFO: cli.azure.cli.core.util: None
DEBUG: urllib3.connectionpool: Starting new HTTPS connection (1): graph.microsoft.com:443
DEBUG: urllib3.connectionpool: https://graph.microsoft.com:443 "GET /v1.0/users/adityaagupta31_gmail.com%23EXT%23@onevmw.onmicrosoft.com%0D HTTP/1.1" 400 324
INFO: cli.azure.cli.core.util: Response status: 400
INFO: cli.azure.cli.core.util: Response headers:
INFO: cli.azure.cli.core.util: 'Content-Type': 'text/html; charset=us-ascii'
INFO: cli.azure.cli.core.util: 'Date': 'Wed, 20 Sep 2023 07:21:30 GMT'
INFO: cli.azure.cli.core.util: 'Connection': 'close'
INFO: cli.azure.cli.core.util: 'Content-Length': '324'
INFO: cli.azure.cli.core.util: Response content:
INFO: cli.azure.cli.core.util: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid URL</h2>
<hr><p>HTTP Error 400. The request URL is invalid.</p>
</BODY></HTML>
DEBUG: cli.azure.cli.core.azclierror: Traceback (most recent call last):
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/requests/models.py", line 971, in json
return complexjson.loads(self.text, **kwargs)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/knack/cli.py", line 233, in invoke
cmd_result = self.invocation.execute(args)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 663, in execute
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 726, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 718, in _run_job
return cmd_copy.exception_handler(ex)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/commands.py", line 51, in graph_err_handler
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 697, in _run_job
result = cmd_copy(params)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 333, in __call__
return self.handler(*args, **kwargs)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py", line 363, in handler
show_exception_handler(ex)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/arm.py", line 429, in show_exception_handler
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py", line 361, in handler
return op(**command_args)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/custom.py", line 1866, in show_user
return client.user_get(upn_or_object_id)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/_msgrpah/_graph_client.py", line 304, in user_get
result = self._send("GET", "{}".format(_get_user_url(id_or_upn)))
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/_msgrpah/_graph_client.py", line 55, in _send
raise GraphError(ex.response.json()['error']['message'], ex.response) from ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/requests/models.py", line 975, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
ERROR: cli.azure.cli.core.azclierror: The command failed with an unexpected error. Here is the traceback:
ERROR: az_command_data_logger: The command failed with an unexpected error. Here is the traceback:
ERROR: cli.azure.cli.core.azclierror: Expecting value: line 1 column 1 (char 0)
Traceback (most recent call last):
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/requests/models.py", line 971, in json
return complexjson.loads(self.text, **kwargs)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/knack/cli.py", line 233, in invoke
cmd_result = self.invocation.execute(args)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 663, in execute
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 726, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 718, in _run_job
return cmd_copy.exception_handler(ex)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/commands.py", line 51, in graph_err_handler
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 697, in _run_job
result = cmd_copy(params)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 333, in __call__
return self.handler(*args, **kwargs)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py", line 363, in handler
show_exception_handler(ex)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/arm.py", line 429, in show_exception_handler
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py", line 361, in handler
return op(**command_args)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/custom.py", line 1866, in show_user
return client.user_get(upn_or_object_id)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/_msgrpah/_graph_client.py", line 304, in user_get
result = self._send("GET", "{}".format(_get_user_url(id_or_upn)))
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/_msgrpah/_graph_client.py", line 55, in _send
raise GraphError(ex.response.json()['error']['message'], ex.response) from ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/requests/models.py", line 975, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
ERROR: az_command_data_logger: Expecting value: line 1 column 1 (char 0)
Traceback (most recent call last):
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/requests/models.py", line 971, in json
return complexjson.loads(self.text, **kwargs)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/knack/cli.py", line 233, in invoke
cmd_result = self.invocation.execute(args)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 663, in execute
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 726, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 718, in _run_job
return cmd_copy.exception_handler(ex)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/commands.py", line 51, in graph_err_handler
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 697, in _run_job
result = cmd_copy(params)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/__init__.py", line 333, in __call__
return self.handler(*args, **kwargs)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py", line 363, in handler
show_exception_handler(ex)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/arm.py", line 429, in show_exception_handler
raise ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/core/commands/command_operation.py", line 361, in handler
return op(**command_args)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/custom.py", line 1866, in show_user
return client.user_get(upn_or_object_id)
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/_msgrpah/_graph_client.py", line 304, in user_get
result = self._send("GET", "{}".format(_get_user_url(id_or_upn)))
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/role/_msgrpah/_graph_client.py", line 55, in _send
raise GraphError(ex.response.json()['error']['message'], ex.response) from ex
File "/opt/homebrew/Cellar/azure-cli/2.52.0_1/libexec/lib/python3.10/site-packages/requests/models.py", line 975, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
To check existing issues, please visit: https://github.com/Azure/azure-cli/issues
DEBUG: cli.knack.cli: Event: Cli.PostExecute [<function AzCliLogging.deinit_cmd_metadata_logging at 0x101f2dcf0>]
INFO: az_command_data_logger: exit code: 1
INFO: cli.__main__: Command ran in 0.396 seconds (init: 0.115, invoke: 0.281)
Expected behavior
I am currently using Python in the script to strip the \r from the string:
for i in `cat ~/Downloads/addresses.csv`
do
export LI=$i
export LI2=$(python3 -c 'import os; print(os.getenv("LI").strip())')
export SID=$(az ad user show --id $LI2 --query "id" --output tsv)
echo "$i" ; echo " $SID"
done
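A lighter-weight variant of the workaround above can strip the carriage return with POSIX `tr` instead of shelling out to Python for each line. This is a hedged sketch, not the reporter's script: the sample addresses.csv contents and the commented-out az call are placeholders.

```shell
# Sketch only: same idea as the Python workaround above, using tr to drop
# the Windows carriage return. File contents and az invocation are placeholders.
printf 'user1@example.com\r\nuser2@example.com\r\n' > addresses.csv  # sample input

while IFS= read -r line; do
  upn=$(printf '%s' "$line" | tr -d '\r')   # strip the CR so the UPN is clean
  echo "clean: $upn"
  # az ad user show --id "$upn" --query id --output tsv
done < addresses.csv
```

Quoting "$upn" in the az call also avoids URL-encoding stray characters — the %0D visible in the request URL in the debug output above is the unstripped \r.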
Environment Summary
azure-cli 2.52.0
core 2.52.0
telemetry 1.1.0
Extensions:
account 0.2.5
aks-preview 0.5.152
application-insights 0.1.19
interactive 0.5.3
serviceconnector-passwordless 0.3.8
spring 1.14.0
Dependencies:
msal 1.24.0b1
azure-mgmt-resource 23.1.0b2
Additional context
No response
Thank you for opening this issue, we will look into it.
|
gharchive/issue
| 2023-09-20T07:25:54 |
2025-04-01T04:54:45.108689
|
{
"authors": [
"anishp55",
"yonzhan"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/issues/27434",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2111357182
|
az eventgrid system-topic event-subscription create - unable to handle/validate webhook endpoint with multiple query parameters correctly.
Describe the bug
When using az eventgrid system-topic event-subscription create to create an event subscription, the webhook endpoint validation does not send any query parameters beyond the first one.
Repro:
Create a system topic with az eventgrid system-topic create command to create a Microsoft.Storage.StorageAccounts topic
set $endpoint = "https://${functionAppUrl}/runtime/webhooks/blobs?functionName=Host.Functions.MyFunctionEventTrigger&code=$functionKey"
run az eventgrid system-topic event-subscription create --name scanned-images-blob-created --system-topic-name scanned-items-blobs-topic --endpoint-type "WebHook" --resource-group $resourceGroup --endpoint $endpoint
The code is the system key to the Azure function. The command errors out with "'code' is not recognized as an internal or external command,
operable program or batch file."
Related command
az eventgrid system-topic event-subscription create
Errors
'code' is not recognized as an internal or external command,
operable program or batch file.
If I switch code and functionName around, it will say:
'functionName ' is not recognized as an internal or external command,
operable program or batch file.
Issue script & Debug output
'code' is not recognized as an internal or external command,
operable program or batch file.
Expected behavior
It should validate the webhook endpoint using the entire URL, including all query parameters.
This works when using portal.azure.com
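Whatever the server-side validation does, the "'code' is not recognized" message shown above is characteristic of the shell splitting the command at the unquoted '&' before az ever sees the full URL. A hedged sketch of passing the endpoint quoted — app name, key, and resource names below are placeholders, not values from this report:

```shell
# Hedged sketch, placeholder values throughout: quoting the endpoint keeps
# the '&'-joined query string as one argument instead of letting the shell
# treat 'code=...' as a second command.
functionAppUrl="myapp.azurewebsites.net"
functionKey="<system-key>"
endpoint="https://${functionAppUrl}/runtime/webhooks/blobs?functionName=Host.Functions.MyFunctionEventTrigger&code=${functionKey}"

# az eventgrid system-topic event-subscription create \
#   --name scanned-images-blob-created \
#   --system-topic-name scanned-items-blobs-topic \
#   --resource-group "$resourceGroup" \
#   --endpoint-type WebHook \
#   --endpoint "$endpoint"

printf '%s\n' "$endpoint"   # the full URL, query string intact
```

In cmd.exe the equivalent is wrapping the URL value in double quotes; in PowerShell, an '&' inside a quoted string is likewise literal.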
Environment Summary
azure-cli 2.56.0
core 2.56.0
telemetry 1.1.0
Extensions:
aks-preview 0.5.149
fleet 0.2.7
Dependencies:
msal 1.24.0b2
azure-mgmt-resource 23.1.0b2
Python location 'C:\Program Files\Microsoft SDKs\Azure\CLI2\python.exe'
Extensions directory 'C:\Users\johnsontseng.azure\cliextensions'
Python (Windows) 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)]
Additional context
No response
Thank you for opening this issue, we will look into it.
|
gharchive/issue
| 2024-02-01T02:27:09 |
2025-04-01T04:54:45.117868
|
{
"authors": [
"vtjc2002",
"yonzhan"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/issues/28286",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
745458553
|
{Error Improvement} Error category and error message refining
Description
This PR provides the following error improvements.
Suppress UnknownError in error messages.
Categorize the uncommon 4xx HTTP response errors as UnclassifiedUserFaults instead of UnknownError.
Fix the untidy error message issue in HttpOperationError #15872
Testing Guide
For the untidy error message in HttpOperationError, just type az account management-group show --name non-existing for testing.
Previously,
code: AuthorizationFailed - , The client 'test@azuresdkteam.onmicrosoft.com' with object id '6d97229a-391f-473a-893f-f0608b592d7b' does not have authorization to perform action 'Microsoft.Management/managementGroups/read' over scope '/providers/Microsoft.Management/managementGroups/non-existing' or the scope is invalid. If access was recently granted, please refresh your credentials.
Now,
AuthorizationFailed: The client 'test@azuresdkteam.onmicrosoft.com' with object id '6d97229a-391f-473a-893f-f0608b592d7b' does not have authorization to perform action 'Microsoft.Management/managementGroups/read' over scope '/providers/Microsoft.Management/managementGroups/non-existing' or the scope is invalid. If access was recently granted, please refresh your credentials.
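The before/after pair above boils down to a prefix change; a minimal illustrative sketch of the two layouts (not the actual azure-cli code — the message text is shortened):

```shell
# Illustrative only -- reproduces the two message layouts quoted above,
# not azure-cli's implementation.
code="AuthorizationFailed"
message="The client does not have authorization to perform the action."
old_style="code: ${code} - , ${message}"   # previous, untidy layout
new_style="${code}: ${message}"            # new layout
printf '%s\n%s\n' "$old_style" "$new_style"
```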
History Notes
[Component Name 1] BREAKING CHANGE: az command a: Make some customer-facing breaking change.
[Component Name 2] az command b: Add some customer-facing feature.
This checklist is used to make sure that common guidelines for a pull request are followed.
[x] The PR title and description has followed the guideline in Submitting Pull Requests.
[x] I adhere to the Command Guidelines.
[x] I adhere to the Error Handling Guidelines.
@jiasli @evelyn-ys for awareness
|
gharchive/pull-request
| 2020-11-18T08:46:43 |
2025-04-01T04:54:45.123930
|
{
"authors": [
"houk-ms",
"yonzhan"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/15963",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1408880268
|
{Synapse} Update artifact version to 0.14.0
Related command
Update azure-synapse-artifacts to 0.14.0 version
Description
Fix icm339842304 https://portal.microsofticm.com/imp/v3/incidents/details/339842304/home
Testing Guide
History Notes
This checklist is used to make sure that common guidelines for a pull request are followed.
[x] The PR title and description has followed the guideline in Submitting Pull Requests.
[x] I adhere to the Command Guidelines.
[x] I adhere to the Error Handling Guidelines.
Synapse
/azp run
/azp run
|
gharchive/pull-request
| 2022-10-14T07:11:39 |
2025-04-01T04:54:45.128839
|
{
"authors": [
"kevinzz6",
"wangzelin007",
"yonzhan"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/24204",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
275545256
|
Fixes for Resource modules
This checklist is used to make sure that common guidelines for a pull request are followed.
General Guidelines
[ ] The PR has modified HISTORY.rst describing any customer-facing, functional changes. Note that this does not include changes only to help content. (see Modifying change log).
Command Guidelines
[ ] Each command and parameter has a meaningful description.
[ ] Each new command has a test.
(see Authoring Command Modules)
View a preview at https://prompt.ws/r/Azure/azure-cli/4945
This is an experimental preview for @microsoft.com users.
(It may take a minute or two for your instance to be ready)
Email feedback to 'azfeedback' with subject 'Prompt Feedback'.
|
gharchive/pull-request
| 2017-11-21T00:29:46 |
2025-04-01T04:54:45.133098
|
{
"authors": [
"azuresdkci",
"tjprescott"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/4945",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
941139942
|
removed lock for version command
Reason for Change:
Issue Fixed:
Requirements:
[ ] uses conventional commit messages
[ ] includes documentation
[ ] adds unit tests
Notes:
/azp run
|
gharchive/pull-request
| 2021-07-10T01:04:14 |
2025-04-01T04:54:45.138456
|
{
"authors": [
"tamilmani1989"
],
"repo": "Azure/azure-container-networking",
"url": "https://github.com/Azure/azure-container-networking/pull/929",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
60549863
|
Edit hdinsight-hbase-provision-vnet.md
Edit complete.
On line 23, in the phrase "virtual network integration," if "virtual network" refers to the Azure service rather than the generic term, it should be capitalized as "Virtual Network."
Per naming guidelines, I changed instances of "PowerShell" by itself to "Azure PowerShell." Please make sure that all mentions are technically accurate and shouldn't be "Windows PowerShell" instead.
Some of the UI text that appears in all caps in screenshots is in title case in text. It would be better if capitalization of UI elements mentioned in text consistently matched what the UI shows.
The link "Use Azure Blob storage with Hadoop in HDInsight" goes to a page titled "Query big data from Hadoop-compatible Blob storage for analysis in HDInsight." Please confirm that this is the right link.
The link "Analyze Twitter sentiment with HBase in HDInsight" didn't work when I tried it.
Please make sure that the "Hadoop Command Line" screenshot doesn't reveal any potentially sensitive information.
In the text "For more information on name resolution in Azure virtual networks..." (line 314), please confirm that "virtual networks" (plural generic term) is accurate and shouldn't be "Virtual Network" (Azure service name).
The last cmdlet mentioned, Get-AzureHDInsightCluster (line 353), should be formatted like the cmdlet mentioned earlier--for the sake of consistency.
Hi @ShawnJackson, I'm your friendly neighborhood Azure Pull Request Bot (You can call me AZPRBOT). Thanks for your contribution!
It looks like you're working at Microsoft (v-shawja). If you're full-time, we DON'T require a contribution license agreement.
If you are a vendor, or work for Microsoft Open Technologies, DO please sign the electronic contribution license agreement. It will take 2 minutes and there's no faxing! https://cla.azure.com.
TTYL, AZPRBOT;
|
gharchive/pull-request
| 2015-03-10T18:51:22 |
2025-04-01T04:54:45.144242
|
{
"authors": [
"ShawnJackson",
"azurecla"
],
"repo": "Azure/azure-content",
"url": "https://github.com/Azure/azure-content/pull/3033",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
105451324
|
Update machine-learning-azure-ml-netsharp-reference-guide.md
Explained auto option and default activation functions. Used bold for keywords and emphasis, italics for names. Added acknowledgements.
Hi @jeannt, I'm your friendly neighborhood Azure Pull Request Bot (You can call me AZPRBOT). Thanks for your contribution!
It looks like you're working at Microsoft (jeannt). If you're full-time, we DON'T require a contribution license agreement.
If you are a vendor, or work for Microsoft Open Technologies, DO please sign the electronic contribution license agreement. It will take 2 minutes and there's no faxing! https://cla.azure.com.
TTYL, AZPRBOT;
|
gharchive/pull-request
| 2015-09-08T19:16:51 |
2025-04-01T04:54:45.147055
|
{
"authors": [
"azurecla",
"jeannt"
],
"repo": "Azure/azure-content",
"url": "https://github.com/Azure/azure-content/pull/4448",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1269003015
|
System.ObjectDisposedException: Cannot access a closed Stream
Following our migration to cosmos SDK v3 and usage of feed range API, we started to see this exception occasionally in our traces:
Error while copying content to a stream. ---> System.ObjectDisposedException: Cannot access a closed Stream.
at System.IO.Stream.CopyToAsync(Stream destination, Int32 bufferSize, CancellationToken cancellationToken)
at System.Net.Http.DelegatingStream.CopyToAsync(Stream destination, Int32 bufferSize, CancellationToken cancellationToken)
at System.Net.Http.StreamToStreamCopy.CopyAsync(Stream source, Stream destination, Int32 bufferSize, Boolean disposeSource, CancellationToken cancellationToken)
It is coming from cosmos client when trying to pull feed ranges, see full stack trace attached:
at Microsoft.Azure.Cosmos.GatewayStoreClient.d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.Cosmos.GatewayStoreModel.d__9.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.Cosmos.Routing.PartitionKeyRangeCache.d__11.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.Azure.Cosmos.Routing.PartitionKeyRangeCache.d__8.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.Cosmos.Routing.PartitionKeyRangeCache.d__6.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.Cosmos.ContainerCore.d__35.MoveNext()
Looks like you forgot to attach the full callstack (the one in the bug description doesn't show what API on container is called and from where). Can you please attach the full callstack?
docdb sdkv3 stack trace.txt
System.Net.Http.HttpRequestException: Error while copying content to a stream. ---> System.ObjectDisposedException: Cannot access a closed Stream.
at System.IO.Stream.CopyToAsync(Stream destination, Int32 bufferSize, CancellationToken cancellationToken)
at System.Net.Http.DelegatingStream.CopyToAsync(Stream destination, Int32 bufferSize, CancellationToken cancellationToken)
at System.Net.Http.StreamToStreamCopy.CopyAsync(Stream source, Stream destination, Int32 bufferSize, Boolean disposeSource, CancellationToken cancellationToken)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Net.Http.HttpContent.<CopyToAsyncCore>d__44.MoveNext()
--- End of inner exception stack trace ---
at System.Net.Http.HttpContent.<CopyToAsyncCore>d__44.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.Cosmos.GatewayStoreClient.<BufferContentIfAvailableAsync>d__13.MoveNext()
Looks like the issue is related to System.Net.Http. In this flow, the application received the HTTP response and our SDK code is asking the HttpResponseMessage to copy the Stream content to a MemoryStream:
https://github.com/Azure/azure-cosmos-dotnet-v3/blob/8154849cdf441b3c9669550b64344be7e4c84ad3/Microsoft.Azure.Cosmos/src/GatewayStoreClient.cs#L228-L239
The ownership of the source Stream is on the System.Net space, why was it disposed is something we don't quite know but it's not controlled by the SDK code.
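The failure mode (one component copying from a stream that a different owner has already disposed) is easy to reproduce in miniature. The following Python sketch is not the SDK's actual code, only an analogue of the `Stream.CopyToAsync` path in the trace, showing the same class of error:

```python
import io

def copy_stream(source, dest, chunk_size=8192):
    """Chunked copy, analogous to Stream.CopyToAsync in the .NET trace above."""
    total = 0
    while True:
        chunk = source.read(chunk_size)  # raises if source was closed/disposed
        if not chunk:
            break
        dest.write(chunk)
        total += len(chunk)
    return total

source = io.BytesIO(b"http response body")
dest = io.BytesIO()
source.close()  # simulates the HTTP layer disposing the response stream early

try:
    copy_stream(source, dest)
    raised = False
except ValueError:  # Python's analogue of ObjectDisposedException
    raised = True

print(raised)
```

The copy loop itself is correct; the error comes entirely from whoever closed the source stream first, which mirrors the point being made about the `System.Net` ownership above.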
Which .NET Framework / NET Standard implementation version are you running on?
.NET 4.6.2
@giladl99-eng What is the lifetime of your CosmosClient? Are there any events that would call its Dispose? The only relationship I can see if, when the CosmosClient is disposed the HttpClient is disposed. I wonder if this occurs while a response Content is being accessed, if this might the reason.
Hi Matias,
We use cosmos client as singleton, dispose happens when our service shuts down. We saw those exceptions frequently with no correlation to process termination.
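For reference, the lifetime being described above is one shared client instance per process, disposed only at shutdown. A sketch of that singleton pattern, shown here in Python with a stand-in client type (the real one is the .NET `CosmosClient`):

```python
import threading

class FakeCosmosClient:
    """Stand-in for the real client; in practice it holds expensive connections."""
    def __init__(self, endpoint):
        self.endpoint = endpoint
        self.disposed = False
    def dispose(self):
        self.disposed = True

_client = None
_lock = threading.Lock()

def get_client(endpoint="https://example.documents.azure.com"):
    """Double-checked lazy singleton: create once, reuse everywhere."""
    global _client
    if _client is None:
        with _lock:
            if _client is None:
                _client = FakeCosmosClient(endpoint)
    return _client

assert get_client() is get_client()  # same instance on every call
```

Disposing the shared instance anywhere other than process shutdown would also dispose its underlying `HttpClient`, which is exactly the interaction being probed for in this thread.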
We were able to root cause and find the source of this problem, it was affecting only Bounded Staleness/Strong accounts and it was related to the handling of quorum responses.
Incorrect linking. This is not the case that was fixed and found. This Issue's stack trace and error is not related to the issue found and fixed.
The issue we found and fixed was related to Bounded Staleness/Strong barrier requests on the TCP stack, this stack trace is for an HTTP operation obtaining metadata information and the disposing is tied to the HttpClient Stream management.
|
gharchive/issue
| 2022-06-13T07:09:47 |
2025-04-01T04:54:45.169659
|
{
"authors": [
"FabianMeiswinkel",
"ealsur",
"giladl99-eng"
],
"repo": "Azure/azure-cosmos-dotnet-v3",
"url": "https://github.com/Azure/azure-cosmos-dotnet-v3/issues/3263",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
955749815
|
Investigate/ fix failures in live tests
https://dev.azure.com/ms/azure-devops-cli-extension/_build/results?buildId=205118&view=logs&j=15d4e81f-d43d-597d-0244-b3aa26f78abd
[x] #1173
[x] #1174
[x] #1175
[x] #1176
[x] #1177
[x] #1179
[x] #1180
[x] #1181
[x] #1182
Adding new / remaining issues
[ ] #1192
[ ] #1193
[ ] #1194
[ ] #1195
|
gharchive/issue
| 2021-07-29T11:37:57 |
2025-04-01T04:54:45.175466
|
{
"authors": [
"gauravsaralMs",
"roshan-sy"
],
"repo": "Azure/azure-devops-cli-extension",
"url": "https://github.com/Azure/azure-devops-cli-extension/issues/1165",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
166476044
|
DevTest Labs artifact - Windows AD domain join
Added DevTest Labs artifact to join a Windows VM to a specified Active Directory Domain. Requires that the Windows operating system can resolve the domain name and the domain controller via DNS.
Hi @jamesbannan, I'm your friendly neighborhood Azure Pull Request Bot (You can call me AZPRBOT). Thanks for your contribution!
In order for us to evaluate and accept your PR, we ask that you sign a contribution license agreement. It's all electronic and will take just minutes. I promise there's no faxing. https://cla.azure.com.
TTYL, AZPRBOT;
@jamesbannan, Thanks for signing the contribution license agreement so quickly! Actual humans will now validate the agreement and then evaluate the PR.
Thanks, AZPRBOT;
|
gharchive/pull-request
| 2016-07-20T01:39:18 |
2025-04-01T04:54:45.178327
|
{
"authors": [
"azurecla",
"jamesbannan"
],
"repo": "Azure/azure-devtestlab",
"url": "https://github.com/Azure/azure-devtestlab/pull/113",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
393916916
|
Update CLI extensions available doc
Update CLI extensions available doc.
Triggered by Azure/azure-cli-extensions - TRAVIS_BUILD_ID=471919847
https://github.com/Azure/azure-cli-extensions/commit/3bcb0df1477e27762f6756fd1cdd3da0ef2a93bf
OPS Build status updates of commit dc05bf5:
:clock10: Preparing: average preparing time is 57 sec(s)
OPS Build status updates of commit dc05bf5:
:clock10: Incremental building: average incremental building time is 12 min(s) 18 sec(s)
OPS Build status updates of commit dc05bf5:
:white_check_mark: Validation status: passed
| File | Status | Preview URL | Details |
| --- | --- | --- | --- |
| docs-ref-conceptual/azure-cli-extensions-list.md | :white_check_mark: Succeeded | View (azure-cli-latest) View (azure-cli-2017-03-09-profile) View (azure-cli-2018-03-01-hybrid) | |
For more details, please refer to the build report.
Note: If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report.
|
gharchive/pull-request
| 2018-12-24T18:06:25 |
2025-04-01T04:54:45.186131
|
{
"authors": [
"VSC-Service-Account",
"azuresdkci"
],
"repo": "Azure/azure-docs-cli-python",
"url": "https://github.com/Azure/azure-docs-cli-python/pull/1217",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
806745205
|
Docs.MS Release Updates for Azure.Analytics.Synapse.Spark
Update docs metadata and targeting for release of Azure.Analytics.Synapse.Spark
Docs Build status updates of commit 627868e:
:warning: Validation status: warnings
| File | Status | Preview URL | Details |
| --- | --- | --- | --- |
| api/overview/azure/analytics.synapse.spark-readme-pre.md | :warning: Warning | View | Details |

Details for api/overview/azure/analytics.synapse.spark-readme-pre.md:
- Line 2, Column 1: [Warning-ms-prod-and-service] Only one of the following attributes can exist: 'ms.prod', 'ms.service'. Use ms.prod for on-premise products, or ms.service for cloud services.
- Line 9, Column 16: [Warning-ms-prod-technology-invalid] Invalid value for 'ms.technology': 'azure' is not valid with 'ms.prod' value 'azure'.
- Line 11, Column 13: [Warning-ms-service-subservice-invalid] Invalid value for 'ms.service': 'synapseanalytics'.
- Line 2, Column 1: [Suggestion-description-missing] Missing required attribute: 'description'.
- Line 138, Column 1: [Suggestion-table-syntax-invalid] Table syntax is invalid. Ensure your table includes a header and is surrounded by empty lines. NOTE: This Suggestion will become a Warning on 1/29/21.
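For anyone chasing the same warnings in their own readme front matter, a shape that addresses them looks roughly like the following. This is illustrative only: the valid `ms.service` value has to come from the docs allow-list (`synapse-analytics` here is a guess), and the description text is a placeholder.

```yaml
---
title: Azure Synapse Spark client library for .NET
description: One-line summary of the package (fills the description-missing suggestion)
ms.service: synapse-analytics  # cloud service: set ms.service and drop ms.prod/ms.technology
---
```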
For more details, please refer to the build report.
If you see build warnings/errors with permission issues, it might be due to single sign-on (SSO) enabled on Microsoft's GitHub organizations. Please follow instructions here to re-authorize your GitHub account to Docs Build.
Note: Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the broken link report.
Note: Your PR may contain errors or warnings unrelated to the files you changed. This happens when external dependencies like GitHub alias, Microsoft alias, cross repo links are updated. Please use these instructions to resolve them.
For any questions, please:
- Try searching in the Docs contributor and Admin Guide
- See the frequently asked questions
- Post your question in the Docs support channel
|
gharchive/pull-request
| 2021-02-11T21:11:35 |
2025-04-01T04:54:45.198833
|
{
"authors": [
"azure-sdk",
"openpublishbuild"
],
"repo": "Azure/azure-docs-sdk-dotnet",
"url": "https://github.com/Azure/azure-docs-sdk-dotnet/pull/1846",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
709395508
|
Docs.MS Release Updates for azure-core-amqp
Update docs metadata and targeting for release of azure-core-amqp
Docs Build status updates of commit 6f29f44:
:clock10: Full build: the average full build time is 94 min(s) 47 sec(s), this is based on the last 802 build(s) for this repository.
Docs Build status updates of commit 6f29f44:
:warning: Validation status: warnings
| File | Status | Preview URL | Details |
| --- | --- | --- | --- |
| legacy/docs-ref-autogen/overview/EventHubs/Client.yml | :warning: Warning | View (azure-java-legacy) | Details |
| docs-ref-services/core-amqp-readme-pre.md | :white_check_mark: Succeeded | View (azure-java-stable) View (azure-java-preview) View (azure-java-legacy) | |
| package.json | :white_check_mark: Succeeded | | |

Details for legacy/docs-ref-autogen/overview/EventHubs/Client.yml:
- [Warning-DuplicateUids] Uid(azure.java.sdk.landingpage.services.eventhub.Client) has already been defined in docs-ref-services/messaging-eventhubs-readme.md.
For more details, please refer to the build report.
If you see build warnings/errors with permission issues, it might be due to single sign-on (SSO) enabled on Microsoft's GitHub organizations. Please follow instructions here to re-authorize your GitHub account to Docs Build.
Note: Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the broken link report.
Note: Your PR may contain errors or warnings unrelated to the files you changed. This happens when external dependencies like GitHub alias, Microsoft alias, cross repo links are updated. Please use these instructions to resolve them.
For any questions, please:
- Try searching in the Docs contributor and Admin Guide
- See the frequently asked questions
- Post your question in the Docs support channel
|
gharchive/pull-request
| 2020-09-26T01:40:37 |
2025-04-01T04:54:45.211268
|
{
"authors": [
"azure-sdk",
"opbld32",
"opbld34"
],
"repo": "Azure/azure-docs-sdk-java",
"url": "https://github.com/Azure/azure-docs-sdk-java/pull/1277",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
460115609
|
Fix import links in readme.md
The current readme.md references another project for the go get instructions.
Good catch. Obviously, that was a bit of copypasta.
Sure thing, happens to all of us!
|
gharchive/pull-request
| 2019-06-24T22:00:49 |
2025-04-01T04:54:45.213039
|
{
"authors": [
"devigned",
"elsesiy"
],
"repo": "Azure/azure-event-hubs-go",
"url": "https://github.com/Azure/azure-event-hubs-go/pull/116",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1099618272
|
OpenAPI functions RenderOAuth2Redirect, RenderOpenApiDocument, RenderSwaggerDocument, and RenderSwaggerUI missing when published to folder
Published my function app to a folder so I can zip it and put it on a share and I noticed that the functions created by Microsoft.Azure.WebJobs.Extensions.OpenApi are not there. I see the functions when debugging.
Did I miss a step somewhere?
Using V3 Functions SDK and VS2019.
Hi there, I am also rather confused by this.
When running via debug, the additional RenderOAuth2Redirect, RenderOpenApiDocument, RenderSwaggerDocument, and RenderSwaggerUI endpoints are present.
When publishing the Azure Function (via func tasks in VSCode) it does deploy successfully however as @gitmadness has noted our shiny OpenAPI pages got lost somewhere.
Assuming that it might be some hidden magic, I have tried visiting:
{functionUri}/swagger.json?code={code}
{functionUri}/swagger/swagger.json?code={code}
{functionUri}/swagger/ui?code={code}
...all to no avail.
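When probing for hidden routes like this, it can help to enumerate the candidates in one place rather than typing them by hand. A small sketch; the host, key, and paths are placeholders taken from the post, not known-good endpoints:

```python
base = "https://myfuncapp.azurewebsites.net/api/"  # hypothetical function app host
code = "<function-key>"                            # placeholder, as in the post
candidates = ["swagger.json", "swagger/swagger.json", "swagger/ui"]

urls = [f"{base}{path}?code={code}" for path in candidates]
for url in urls:
    print(url)  # feed each to curl or a browser
```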
Justin Yoo over at MSFT has created some extensions to allow publishing of these as functions but I am not sure whether these work with the current landscape we're using (dotnet6, inproc, v4 runtime - posting this primarily for @gitmadness's benefit as this may help them).
Blog post describing the implementation: https://devkimchi.com/2019/02/02/introducing-swagger-ui-on-azure-functions/
Repository containing the libs: https://github.com/aliencube/AzureFunctions.Extensions
I have run into some implementation issues with the above, like the GetSpecVersion() and GetExtension() methods: they are used in the example without any prefixes, suggesting they should be present somewhere, but they aren't, nor can I find them in the assembly anywhere. Some more search-fu later and I landed on this GitHub issue.
Ok so I made some headway on this, started a fresh test project using the AlienCube nuget mentioned previously and got it working quite happily.
Dotnet 6 IoC, v4 runtime
Builds ok
Debugs locally ok with all the right endpoints
Deploys to Azure FunctionApp ok
Has all the right endpoints published, and they all work
To be brutally honest with myself I just RTFM with a fresh test project and it worked 🤷 I think my test yesterday must have had some conflicting config from all the headscratching trying to get it working.
Given the nature of the lineage of these nugets, I tried porting the same working config to use the Microsoft.Azure.Functions.Worker.Extensions.OpenApi nuget, all works flawlessly... apart from no OpenApi Functions published to the Function App on deployment.
No offense to anybody, but I suspect the current release here is borked. With the above workaround it's no biggie (for me) now though - and porting the same config to this repo's way of doing it is actually quite trivial, technical debt can be paid off at some point with a single paycheque 😁
@gitmadness feel free to give me a yell if you can't get the AlienCube way of doing it working, happy to help.
@gitmadness @JohnGe0rge Thanks for the issue! I assume that you're using the "in-proc" worker of Azure Functions, via the Microsoft.Azure.WebJobs.Extensions.OpenApi package. Because of the characteristics of the "in-proc" worker, those endpoints show up on your local machine but are hidden (encapsulated) when deployed to Azure.
Just FYI – The Aliencube one is no longer maintained, and this official extension has more features than that.
|
gharchive/issue
| 2022-01-11T21:09:36 |
2025-04-01T04:54:45.241303
|
{
"authors": [
"JohnGe0rge",
"gitmadness",
"justinyoo"
],
"repo": "Azure/azure-functions-openapi-extension",
"url": "https://github.com/Azure/azure-functions-openapi-extension/issues/347",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
326958256
|
Update jquery and jquery-ui sources
jquery-3.3.1 and jquery-ui 1.12.1
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.

:x: RaoulHolzer sign now

You have signed the CLA already but the status is still pending? Let us recheck it.
|
gharchive/pull-request
| 2018-05-28T09:21:28 |
2025-04-01T04:54:45.307579
|
{
"authors": [
"RaoulHolzer",
"msftclas"
],
"repo": "Azure/azure-mobile-apps-net-server",
"url": "https://github.com/Azure/azure-mobile-apps-net-server/pull/245",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
671852243
|
Cannot enable CMK on a pre-existing cosmosdb account
I created a cosmosdb account by using the REST API body:
{
"kind": "GlobalDocumentDB",
"location": "westus2",
"properties": {
"consistencyPolicy": {
"defaultConsistencyLevel": "BoundedStaleness",
"maxStalenessPrefix": 200,
"maxIntervalInSeconds": 10
},
"locations": [
{
"locationName": "westus2",
"failoverPriority": 0
}
],
"databaseAccountOfferType": "Standard",
"ipRules": [],
"isVirtualNetworkFilterEnabled": false,
"enableAutomaticFailover": true,
"capabilities": [],
"virtualNetworkRules": [],
"enableMultipleWriteLocations": false,
"enableFreeTier": false
},
"tags": {}
}
then I am trying to enable CMK on this cosmosdb using the PATCH RESP API with the body
{
  "properties": {
    "keyVaultKeyUri": "https://<vault-name>.vault.azure.net/keys/<key-name>"
  }
}
I got an error:
Code="BadRequest" Message="Updating KeyVaultKeyUri is not supported\r\nActivityId: 5f367e7d-92ee-42a7-966b-fb12e53ab69a, Microsoft.Azure.Documents.Common/2.11.0"
hi @ArcturusZhang, enabling CMK on existing accounts is not supported. This is something we are looking into to be supported in the future. Please consider supporting this feature request at https://feedback.azure.com/forums/263030-azure-cosmos-db.
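Since customer-managed keys could only be configured at account creation time, the workaround was to provision a new account with `properties.keyVaultKeyUri` set in the initial PUT body. An abridged sketch, reusing the same placeholders as the question (the Key Vault access for the Cosmos DB identity still has to be granted separately; check the service docs for the exact requirements):

```json
{
  "kind": "GlobalDocumentDB",
  "location": "westus2",
  "properties": {
    "databaseAccountOfferType": "Standard",
    "locations": [
      { "locationName": "westus2", "failoverPriority": 0 }
    ],
    "keyVaultKeyUri": "https://<vault-name>.vault.azure.net/keys/<key-name>"
  }
}
```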
|
gharchive/issue
| 2020-08-03T07:10:31 |
2025-04-01T04:54:45.398230
|
{
"authors": [
"ArcturusZhang",
"wmengmsft"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/issues/10323",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
237287734
|
VirtualNetworkGateway.json QOL fixes
PR exclusively for general QOL fixes for VirtualNetworkGateway.json
This checklist is used to make sure that common issues in a pull request are addressed. This will expedite the process of getting your pull request merged and avoid extra work on your part to fix issues discovered during the review process.
PR information
[ ] The title of the PR is clear and informative.
[ ] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For information on cleaning up the commits in your pull request, see this page.
[ ] Except for special cases involving multiple contributors, the PR is started from a fork of the main repository, not a branch.
[ ] If applicable, the PR references the bug/issue that it fixes.
[ ] Swagger files are correctly named (e.g. the api-version in the path should match the api-version in the spec).
Quality of Swagger
[ ] I have read the contribution guidelines.
[ ] My spec meets the review criteria:
[ ] The spec conforms to the Swagger 2.0 specification.
[ ] The spec follows the guidelines described in the Swagger checklist document.
[ ] Validation tools were run on swagger spec(s) and have all been fixed in this PR.
Hi There,
I am the AutoRest Linter Azure bot. I am here to help. My task to analyze the situation from the AutoRest linter perspective. Please review the below analysis result:
File: arm-network/2017-06-01/swagger/virtualNetworkGateway.json
Before the PR: Warning(s): 22 Error(s): 11
After the PR: Warning(s): 22 Error(s): 11
Thanks for your co-operation.
@dsgouda Normalized provisioning state across entire JSON
@azuresdkci Test this please
Hi There,
I am the AutoRest Linter Azure bot. I am here to help. My task is to analyze the situation from the AutoRest linter perspective. Please review the below analysis result:
File: arm-network/2017-06-01/swagger/virtualNetworkGateway.json
Before the PR: Warning(s): 22 Error(s): 11
After the PR: Warning(s): 22 Error(s): 11
Know more about AutoRest Linter Guidelines.
Send feedback and make AutoRest Linter Azure Bot smarter day by day!
Thanks for your co-operation.
@dsgouda
are we good to merge here?
We are good, but I need to check why the travis build is failing, will investigate now.
CI is reporting a failure to generate the SDK for Ruby; I get a feeling it may have to do with using old command-line args for AutoRest. I will post my findings soon.
@henry416 looks like you are updating provisioningState enum only here and not in the other json files which are a part of the corresponding composite. When generating the SDK, AutoRest expects to have exactly one unique x-ms-enum extension or the exact same definition repeated (like you have in virtualNetworkGateway.json), either update all definitions for ProvisioningState or undo this particular change. FYI The definition for ProvisioningState in networkWatcher.json has an additional enum value.
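Concretely, "the exact same definition repeated" means every file in the composite carries an identical block along these lines. This is illustrative, not taken from the PR; the enum list must be whatever superset the composite actually needs, including networkWatcher.json's extra value:

```json
"provisioningState": {
  "type": "string",
  "readOnly": true,
  "description": "The provisioning state of the resource.",
  "enum": [ "Succeeded", "Updating", "Deleting", "Failed" ],
  "x-ms-enum": {
    "name": "ProvisioningState",
    "modelAsString": true
  }
}
```

With `modelAsString: true`, AutoRest generates an extensible string-backed enum, so adding a value later is not a breaking change; that is the usual reason to prefer it for provisioning states.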
ping @henry416
@salameer I am aware of this; however, I am busy with other issues at the moment. I will take time to deal with this when they are finished.
Sounds Good @henry416
But please note that we'll have to close this PR by the end of next week (July 7th) if it is not updated, and please feel free to open a new PR after that if you're unable to make these changes by that time.
Thanks,
Samer
Closing due to no response.
|
gharchive/pull-request
| 2017-06-20T17:30:31 |
2025-04-01T04:54:45.415781
|
{
"authors": [
"azuresdkci",
"dsgouda",
"henry416",
"salameer"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/pull/1331",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
929327471
|
Adding swagger json and examples for Compute Diagnostic Resource Provider - preview
MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow.
Changelog
Please ensure to add changelog with this PR by answering the following questions.
What's the purpose of the update?
[x] new service onboarding
[x] new API version
[ ] update existing version for new feature
[ ] update existing version to fix swagger quality issue in s360
[ ] Other, please clarify
When you are targeting to deploy new service/feature to public regions? Please provide date, or month to public if date is not available yet.
When you expect to publish swagger? Please provide date, or month to public if date is not available yet.
If it's an update to existing version, please select SDKs of specific language and CLIs that require refresh after swagger is published.
[ ] SDK of .NET (need service team to ensure code readiness)
[ ] SDK of Python
[ ] SDK of Java
[ ] SDK of Js
[ ] SDK of Go
[ ] PowerShell
[ ] CLI
[ ] Terraform
[ ] No, no need to refresh for updates in this PR
Contribution checklist:
[x] I commit to follow the Breaking Change Policy of "no breaking changes"
[x] I have reviewed the documentation for the workflow.
[x] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix?
If any further question about AME onboarding or validation tools, please view the FAQ.
ARM API Review Checklist
[x] Ensure to check this box if one of the following scenarios applies to the updates in the PR, so that the label "WaitForARMFeedback" will be added automatically to involve ARM API Review. Failure to comply may result in delays for manifest application. Note this does not apply to data plane APIs; "removals" and "adding a new property" no longer require ARM API review.
Adding new API(s)
Adding a new API version
[ ] Ensure to copy the existing version into new directory structure for first commit (including refactoring) and then push new changes including version updates in separate commits. This is required to review the changes efficiently.
Adding a new service
[x] Please ensure you've reviewed following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from ARM API Review board.
[x] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them.
Breaking Change Review Checklist
If there are following updates in the PR, ensure to request an approval from Breaking Change Review Board as defined in the Breaking Change Policy.
[ ] Removing API(s) in stable version
[ ] Removing properties in stable version
[ ] Removing API version(s) in stable version
[ ] Updating API in stable or public preview version with Breaking Change Validation errors
[ ] Updating API(s) in public preview over 1 year (refer to Retirement of Previews)
Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Additional details on the process and office hours are on the Breaking Change Wiki.
Please follow the link to find more details on PR review process.
As discussed offline, please implement below APIs
POST /subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/diagnostics/diskInspection/run
GET /subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/diagnostics/diskInspection
{
id: xxx,
name: diskInspection,
properties: {
supportedResourceTypes: [ "VMs"]
}
}
GET /subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/diagnostics
{
value: [
{
id: xxx,
name: diskInspection,
properties: {
supportedResourceTypes: [ "VMs"]
}
}
]
}
/azp run
Closing this as created a new PR with changes
Hi, @ansahdev. The PR has been closed for a long time and its related branch still exists. Please tell me if you still need this branch, or I will delete it in 14 days.
|
gharchive/pull-request
| 2021-06-24T15:06:39 |
2025-04-01T04:54:45.433481
|
{
"authors": [
"ArcturusZhang",
"JackTn",
"RamyasreeChakka",
"ansahdev"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/pull/14976",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
958267207
|
Added new API version 2021-07-01 with the existing swagger json files for ResourceGuard feature
copied the existing swagger json files (2021-06-01 for recoveryservicesbackup and 2021-01-01 for dataprotection) to create the base swagger json files with API version 2021-07-01.
Will add ResourceGuard related changes (which are already checked-in in preview folder with API version 2021-02-01-preview) on top of this base swagger json files.
MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow.
Changelog
Add a changelog entry for this PR by answering the following questions:
What's the purpose of the update?
[ ] new service onboarding
[ ] new API version
[ ] update existing version for new feature
[ ] update existing version to fix swagger quality issue in s360
[ ] Other, please clarify
When are you targeting to deploy the new service/feature to public regions? Please provide the date or, if the date is not yet available, the month.
When do you expect to publish the swagger? Please provide the date or, if the date is not yet available, the month.
If updating an existing version, please select the specific language SDKs and CLIs that must be refreshed after the swagger is published.
[ ] SDK of .NET (need service team to ensure code readiness)
[ ] SDK of Python
[ ] SDK of Java
[ ] SDK of Js
[ ] SDK of Go
[ ] PowerShell
[ ] CLI
[ ] Terraform
[ ] No refresh required for updates in this PR
Contribution checklist:
[ ] I commit to follow the Breaking Change Policy of "no breaking changes"
[ ] I have reviewed the documentation for the workflow.
[ ] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix?
If any further question about AME onboarding or validation tools, please view the FAQ.
ARM API Review Checklist
Applicability: :warning:
If your changes encompass only the following scenarios, you should SKIP this section, as these scenarios do not require ARM review.
Change to data plane APIs
Adding new properties
All removals
Otherwise your PR may be subject to ARM review requirements. Complete the following:
[ ] Check this box if any of the following apply to the PR so that label “WaitForARMFeedback” will be added automatically to begin ARM API Review. Failure to comply may result in delays to the manifest.
Adding a new service
Adding new API(s)
Adding a new API version
-[ ] To review changes efficiently, ensure you copy the existing version into the new directory structure for first commit (including refactoring) and then push new changes, including version updates, in separate commits.
[ ] Ensure you've reviewed following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from ARM API Review board.
[ ] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them.
Breaking Change Review Checklist
If any of the following scenarios apply to the PR, request approval from the Breaking Change Review Board as defined in the Breaking Change Policy.
[ ] Removing API(s) in a stable version
[ ] Removing properties in a stable version
[ ] Removing API version(s) in a stable version
[ ] Updating API in a stable or public preview version with Breaking Change Validation errors
[ ] Updating API(s) in public preview over 1 year (refer to Retirement of Previews)
Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Additional details on the process and office hours are on the Breaking Change Wiki.
Please follow the link to find more details on PR review process.
Hi @deymadhumanti, the default branch is now main; I have updated the PR to be based on the main branch.
Hi, closing this PR since I am unable to push the changes from my local repo to it. Raised a new PR with these changes: https://github.com/Azure/azure-rest-api-specs/pull/15514. Please review the new PR: https://github.com/Azure/azure-rest-api-specs/pull/15514
Hi, closing this PR since I am unable to push the changes from my local repo to it. Raised a new PR with these changes: https://github.com/Azure/azure-rest-api-specs/pull/15515. Please review the new PR: https://github.com/Azure/azure-rest-api-specs/pull/15515
|
gharchive/pull-request
| 2021-08-02T15:36:48 |
2025-04-01T04:54:45.452669
|
{
"authors": [
"deymadhumanti",
"zhenglaizhang"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/pull/15475",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
252360272
|
[Traffic Manager] Adding new preview api-version for Traffic Manager. This adds two new features to Traffic Manager: RealUserMetricsKey, and HeatMap.
This checklist is used to make sure that common issues in a pull request are addressed. This will expedite the process of getting your pull request merged and avoid extra work on your part to fix issues discovered during the review process.
PR information
[X] The title of the PR is clear and informative.
[X] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For information on cleaning up the commits in your pull request, see this page.
[X] Except for special cases involving multiple contributors, the PR is started from a fork of the main repository, not a branch.
[X] If applicable, the PR references the bug/issue that it fixes.
[X] Swagger files are correctly named (e.g. the api-version in the path should match the api-version in the spec).
Quality of Swagger
[X] I have read the contribution guidelines.
[X] My spec meets the review criteria:
[X] The spec conforms to the Swagger 2.0 specification.
[X] The spec follows the guidelines described in the Swagger checklist document.
[ ] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR.
@hrkulkarMsft,
Thanks for your contribution as a Microsoft full-time employee or intern. You do not need to sign a CLA.
Thanks,
Microsoft Pull Request Bot
@ravbhatnagar this is new new API version, please review and sign-off.
@hrkulkarMsft please update readme.md file to have a tag for this new api-version.
Should the readme.md still try to build the non-preview API? Or should I add it?
openapi-type: arm
tag: package-2017-09-preview
Tag: package-2017-09-preview
These settings apply only when --tag=package-2017-09-preview is specified on the command line.
input-file:
- Microsoft.Network/2017-09-01-preview/trafficmanager.json
@hrkulkarMsft, the readme file seems correct; CI is failing because the swagger refers to HeatMap-GET.json while the file on disk is named HeatMap-Get.json. Please use the same casing in both places.
If you already have autorest installed I would recommend to run it locally against your swagger to catch linter errors, and fix them.
Assuming you are in the root folder of local clone of rest-api-spec, you can run this:
autorest --validation --azure-validator --message-format=json --input-file=./specification/trafficmanager/resource-manager/Microsoft.Network/2017-09-01-preview/trafficmanager.json
Thanks, my validator wasn't throwing that warning because git hadn't renamed the case remotely to match my local branch.
For Operations API -- our parent RP (Microsoft.Network) implements this should we still reference it in our Swagger?
For the Operations API there is no need to reference the parent network swagger. @ravbhatnagar please note that the linter shows an RPCViolation saying the Operations API is missing, but the same operation is implemented in the parent RP https://github.com/Azure/azure-rest-api-specs/tree/current/specification/network/resource-manager.
To repro semantic and model validation locally
install oav
npm install -g oav
run
oav validate-spec ./specification/trafficmanager/resource-manager/Microsoft.Network/2017-09-01-preview/trafficmanager.json
oav validate-example ./specification/trafficmanager/resource-manager/Microsoft.Network/2017-09-01-preview/trafficmanager.json
Thanks Anu, I was able to successfully run these validations this time.
@hrkulkarMsft regarding the TrackedResourceListByImmediateParent RPCViolation - you mentioned that this particular resource could be very large; if I understand correctly, the number of HeatMap child resources associated with an instance of a traffic manager profile resource can be huge. If that is the case, this could be a paged collection, right? That can be expressed using the x-ms-pageable extension, https://github.com/Azure/azure-rest-api-specs/blob/master/documentation/creating-swagger.md#Paging-x-ms-pageable.
@anuchandy Sorry, I should have been more specific. The payload can return close to 4MB in our preview version -- this will be right under the ARM limit, keeping it from needing pagination. If we decide to have multiple heatMap resources at some point (i.e. a history of a Profile's heatMap), I would imagine that a List operation could get large.
@hrkulkarMsft thanks for clarifying, got it: so there will be only one HeatMap child instance associated with a traffic manager profile instance, i.e. this is not a collection. I would let Gaurav comment on this. @ravbhatnagar we have an RPCViolation warning in this case, TrackedResourceListByImmediateParent; please share your thoughts.
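For reference, the x-ms-pageable extension discussed above attaches to a list operation roughly as follows. This is a sketch only: the operation ID, schema reference, and property names here are illustrative, not taken from the actual Traffic Manager spec.

```json
"get": {
  "operationId": "HeatMaps_ListByProfile",
  "responses": {
    "200": {
      "schema": { "$ref": "#/definitions/HeatMapListResult" }
    }
  },
  "x-ms-pageable": {
    "nextLinkName": "nextLink"
  }
}
```

Generated clients then follow the `nextLink` property in each response page until it is absent, so large collections never need to fit in a single 4MB payload.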
@ravbhatnagar : Submitted a new iteration in the review with the changes we discussed.
• {heatMapsType}: can now only be “default” in value.
• Removed endpoint-identifying properties.
• Added query parameters.
• Removed App Rum to get HeatMap through for now, and will open separate review.
I’ll take note of the general API feedback to fix, but for this iteration would it be possible to only fix the HeatMap implementation, and then update the existing API comments later?
@hrkulkarMsft - sounds good.
Ruby codegen is failing https://travis-ci.org/Azure/azure-rest-api-specs/jobs/268142263, checking with the codegen owners.
Hey, is there anything I need to do to unblock codegen?
hi @hrkulkarMsft sorry for the delay. No action required from your side now, we are tracking the code-gen issue here https://github.com/Azure/azure-sdk-for-ruby/issues/944. Merging this PR.
@ravbhatnagar as i mentioned above though there is a RPCViolation OperationsAPIImplementation reported by linter, the API is implemented in the parent Network swagger.
Cool, thanks Anu! Changed the PR to better reflect these changes since the other feature was moved to a separate PR.
No modification for AutorestCI/azure-sdk-for-node
No modification for AutorestCI/azure-sdk-for-python
|
gharchive/pull-request
| 2017-08-23T17:34:40 |
2025-04-01T04:54:45.471753
|
{
"authors": [
"AutorestCI",
"anuchandy",
"hrkulkarMsft",
"msftclas",
"ravbhatnagar"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/pull/1580",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1247009997
|
Search API's in marketplace catalog
MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow.
Changelog
Add a changelog entry for this PR by answering the following questions:
What's the purpose of the update?
[x] new service onboarding
[ ] new API version
[ ] update existing version for new feature
[ ] update existing version to fix swagger quality issue in s360
[ ] Other, please clarify
When are you targeting to deploy the new service/feature to public regions? Please provide the date or, if the date is not yet available, the month - July 2022
When do you expect to publish the swagger? Please provide the date or, if the date is not yet available, the month - already published
If updating an existing version, please select the specific language SDKs and CLIs that must be refreshed after the swagger is published.
[ ] SDK of .NET (need service team to ensure code readiness)
[ ] SDK of Python
[ ] SDK of Java
[ ] SDK of Js
[ ] SDK of Go
[ ] PowerShell
[ ] CLI
[ ] Terraform
[ ] No refresh required for updates in this PR
Contribution checklist:
[x] I commit to follow the Breaking Change Policy of "no breaking changes"
[x] I have reviewed the documentation for the workflow.
[x] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix?
If any further question about AME onboarding or validation tools, please view the FAQ.
ARM API Review Checklist
Applicability: :warning:
If your changes encompass only the following scenarios, you should SKIP this section, as these scenarios do not require ARM review.
Change to data plane APIs
Adding new properties
All removals
Otherwise your PR may be subject to ARM review requirements. Complete the following:
[x] Check this box if any of the following apply to the PR so that the labels "ARMReview" and "WaitForARMFeedback" will be added by the bot to kick off ARM API Review. Failing to check this box in the following scenarios may result in delays to the ARM manifest review and deployment.
Adding a new service
Adding new API(s)
Adding a new API version
- [ ] To review changes efficiently, ensure you are using OpenAPIHub to initialize the PR for adding a new version. For more details, refer to the wiki.
[x] Ensure you've reviewed following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from ARM API Review board.
[ ] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them.
Breaking Change Review Checklist
If any of the following scenarios apply to the PR, request approval from the Breaking Change Review Board as defined in the Breaking Change Policy.
[ ] Removing API(s) in a stable version
[ ] Removing properties in a stable version
[ ] Removing API version(s) in a stable version
[ ] Updating API in a stable or public preview version with Breaking Change Validation errors
[ ] Updating API(s) in public preview over 1 year (refer to Retirement of Previews)
Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Additional details on the process and office hours are on the Breaking Change Wiki.
Please follow the link to find more details on PR review process.
Opened 2 separate pull requests - one for data plane and one for resource manager
|
gharchive/pull-request
| 2022-05-24T19:05:24 |
2025-04-01T04:54:45.488536
|
{
"authors": [
"gregoks"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/pull/19207",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1271469447
|
Remove storage arm constraint
Adding a small swagger transform to work around some invalid client platform validation that is blocking Storage Explorer customers.
Changelog
Add a changelog entry for this PR by answering the following questions:
What's the purpose of the update?
[ ] new service onboarding
[ ] new API version
[ ] update existing version for new feature
[ ] update existing version to fix swagger quality issue in s360
[x] Other, please clarify
When are you targeting to deploy the new service/feature to public regions? Please provide the date or, if the date is not yet available, the month.
When do you expect to publish the swagger? Please provide the date or, if the date is not yet available, the month.
If updating an existing version, please select the specific language SDKs and CLIs that must be refreshed after the swagger is published.
[ ] SDK of .NET (need service team to ensure code readiness)
[ ] SDK of Python
[ ] SDK of Java
[x] SDK of Js
[ ] SDK of Go
[ ] PowerShell
[ ] CLI
[ ] Terraform
[ ] No refresh required for updates in this PR
Contribution checklist:
[ ] I commit to follow the Breaking Change Policy of "no breaking changes"
[ ] I have reviewed the documentation for the workflow.
[ ] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix?
If any further question about AME onboarding or validation tools, please view the FAQ.
ARM API Review Checklist
Applicability: :warning:
If your changes encompass only the following scenarios, you should SKIP this section, as these scenarios do not require ARM review.
Change to data plane APIs
Adding new properties
All removals
Otherwise your PR may be subject to ARM review requirements. Complete the following:
[ ] Check this box if any of the following apply to the PR so that the labels "ARMReview" and "WaitForARMFeedback" will be added by the bot to kick off ARM API Review. Failing to check this box in the following scenarios may result in delays to the ARM manifest review and deployment.
Adding a new service
Adding new API(s)
Adding a new API version
- [ ] To review changes efficiently, ensure you are using OpenAPIHub to initialize the PR for adding a new version. For more details, refer to the wiki.
[ ] Ensure you've reviewed following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from ARM API Review board.
[ ] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them.
Breaking Change Review Checklist
If any of the following scenarios apply to the PR, request approval from the Breaking Change Review Board as defined in the Breaking Change Policy.
[ ] Removing API(s) in a stable version
[ ] Removing properties in a stable version
[ ] Removing API version(s) in a stable version
[ ] Updating API in a stable or public preview version with Breaking Change Validation errors
[ ] Updating API(s) in public preview over 1 year (refer to Retirement of Previews)
Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Additional details on the process and office hours are on the Breaking Change Wiki.
Please follow the link to find more details on PR review process.
Ugh trying to figure out why so many commits got pulled in
Ah my branch was still on master was the problem
|
gharchive/pull-request
| 2022-06-14T23:32:39 |
2025-04-01T04:54:45.503792
|
{
"authors": [
"xirzec"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/pull/19451",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1382215290
|
[Hub Generated] Review request for Microsoft.DataProtection to add version preview/2022-09-01-preview
This is a PR generated at OpenAPI Hub. You can view your work branch via this link.
ARM API Information (Control Plane)
Azure 1st Party Service can try out the Shift Left experience to initiate API design review from ADO code repo. If you are interested, may request engineering support by filling in with the form https://aka.ms/ShiftLeftSupportForm.
Changelog
Add a changelog entry for this PR by answering the following questions:
What's the purpose of the update?
[ ] new service onboarding
[x] new API version
[ ] update existing version for new feature
[ ] update existing version to fix swagger quality issue in s360
[ ] Other, please clarify
When are you targeting to deploy the new service/feature to public regions? Please provide the date or, if the date is not yet available, the month.
When do you expect to publish the swagger? Please provide the date or, if the date is not yet available, the month.
By default, Azure SDKs of all languages (.NET/Python/Java/JavaScript for both management-plane SDK and data-plane SDK, Go for management-plane SDK only ) MUST be refreshed with/after swagger of new version is published. If you prefer NOT to refresh any specific SDK language upon swagger updates in the current PR, please leave details with justification here.
Contribution checklist (MS Employees Only):
[x] I commit to follow the Breaking Change Policy of "no breaking changes"
[x] I have reviewed the documentation for the workflow.
[x] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix?
If any further question about AME onboarding or validation tools, please view the FAQ.
ARM API Review Checklist
Applicability: :warning:
If your changes encompass only the following scenarios, you should SKIP this section, as these scenarios do not require ARM review.
Change to data plane APIs
Adding new properties
All removals
Otherwise your PR may be subject to ARM review requirements. Complete the following:
[x] Check this box if any of the following apply to the PR so that the label "ARMReview" and "WaitForARMFeedback" will be added by bot to kick off ARM API Review. Missing to check this box in the following scenario may result in delays to the ARM manifest review and deployment.
Adding a new service
Adding new API(s)
Adding a new API version
- [ ] To review changes efficiently, ensure you copy the existing version into the new directory structure for first commit and then push new changes, including version updates, in separate commits. You can use OpenAPIHub to initialize the PR for adding a new version. For more details refer to the wiki.
[x] Ensure you've reviewed following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from ARM API Review board.
[x] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them.
Breaking Change Review Checklist
If you have any breaking changes as defined in the Breaking Change Policy, request approval from the Breaking Change Review Board.
Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Additional details on the process and office hours are on the Breaking Change Wiki.
NOTE: To update API(s) in public preview for over 1 year (refer to Retirement of Previews)
Please follow the link to find more details on PR review process.
Most of the changes in this API have already been reviewed and approved by ARM team in a PR raised in swagger's private repo -
https://github.com/Azure/azure-rest-api-specs-pr/pull/7480
/azp run unifiedPipeline
"state": {
What's the difference between On and AlwaysOn? If they are different, please add descriptions for the enum values to clarify (x-ms-enum allows per-value descriptions).
Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:6661 in 3d96f0c. [](commit_id = 3d96f0cc1dd10bc8960dec57b23c1e7ff75063bf, deletion_comment = False)
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/backupVaults/{vaultName}/deletedBackupInstances/{backupInstanceName}/undelete": {
ARM soft-delete pattern uses restore for this action.
Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:2560 in 3d96f0c. [](commit_id = 3d96f0cc1dd10bc8960dec57b23c1e7ff75063bf, deletion_comment = False)
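Under the ARM soft-delete pattern referenced above, the undelete operation would instead be exposed as a restore action on the deleted resource. The following path is a sketch of what that would look like, not the final spec:

```json
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/backupVaults/{vaultName}/deletedBackupInstances/{backupInstanceName}/restore": {
  "post": { }
}
```

Using the standard action name keeps the API consistent with how other ARM resource providers surface recovery of soft-deleted resources.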
"state": {
What's the difference between On and AlwaysOn? If they are different, please add descriptions for the enum values to clarify (x-ms-enum allows per-value descriptions).
Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:6661 in 3d96f0c. [](commit_id = 3d96f0c, deletion_comment = False)
Added description for enum values. Thanks for the suggestion
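For context, per-value descriptions via x-ms-enum look roughly like the sketch below. The enum values and description strings here are illustrative, not the actual ones from this spec:

```json
"state": {
  "type": "string",
  "enum": [ "On", "AlwaysOn" ],
  "x-ms-enum": {
    "name": "State",
    "modelAsString": true,
    "values": [
      { "value": "On", "description": "The feature is enabled and can be turned off." },
      { "value": "AlwaysOn", "description": "The feature is enabled and cannot be disabled." }
    ]
  }
}
```

The `values` array lets each enum member carry its own description, which code generators surface as documentation on the generated enum type.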
/azp run unifiedPipeline
/azp run unifiedPipeline
"state": {
Nicely done.
In reply to: 1260293048
Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:6661 in 3d96f0c. [](commit_id = 3d96f0cc1dd10bc8960dec57b23c1e7ff75063bf, deletion_comment = False)
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/backupVaults/{vaultName}/deletedBackupInstances/{backupInstanceName}/undelete": {
Makes sense.
In reply to: 1260293405
Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:2560 in 3d96f0c. [](commit_id = 3d96f0cc1dd10bc8960dec57b23c1e7ff75063bf, deletion_comment = False)
"state": {
Recommend adding per-value descriptions for this property as well (not blocking ARM signoff).
Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:5505 in b503351. [](commit_id = b503351dcd921a9006e589736180b86a71603623, deletion_comment = False)
"resourceGuardOperationRequests": {
Description would be helpful.
Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:7157 in b503351. [](commit_id = b503351dcd921a9006e589736180b86a71603623, deletion_comment = False)
@amchandn - Signed off for ARM with comments.
"state": {
Recommend adding per-value descriptions for this property as well (not blocking ARM signoff).
Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:5505 in b503351. [](commit_id = b503351, deletion_comment = False)
thanks for the suggestion. We'll plan and take up this exercise to add enum descriptions for all enum values in our swagger in upcoming versions.
"resourceGuardOperationRequests": {
Description would be helpful.
Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:7157 in b503351. [](commit_id = b503351, deletion_comment = False)
thanks for the suggestion. We'll plan and take up this exercise to add all missing descriptions in our swagger in upcoming versions.
/azp run
|
gharchive/pull-request
| 2022-09-22T10:28:15 |
2025-04-01T04:54:45.531887
|
{
"authors": [
"amchandn",
"jianyexi",
"mentat9"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/pull/20823",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1427962888
|
Add MaintenanceWindow into MaintenanceConfigurationProperties
ARM API Information (Control Plane)
MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow.
Azure 1st Party Service can try out the Shift Left experience to initiate API design review from ADO code repo. If you are interested, may request engineering support by filling in with the form https://aka.ms/ShiftLeftSupportForm.
Changelog
Add a changelog entry for this PR by answering the following questions:
What's the purpose of the update?
[ ] new service onboarding
[ ] new API version
[ ] update existing version for new feature
[ ] update existing version to fix swagger quality issue in s360
[ ] Other, please clarify
When are you targeting to deploy the new service/feature to public regions? Please provide the date or, if the date is not yet available, the month.
When do you expect to publish the swagger? Please provide the date or, if the date is not yet available, the month.
By default, Azure SDKs of all languages (.NET/Python/Java/JavaScript for both management-plane SDK and data-plane SDK, Go for management-plane SDK only ) MUST be refreshed with/after swagger of new version is published. If you prefer NOT to refresh any specific SDK language upon swagger updates in the current PR, please leave details with justification here.
Contribution checklist (MS Employees Only):
[ ] I commit to follow the Breaking Change Policy of "no breaking changes"
[ ] I have reviewed the documentation for the workflow.
[ ] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix?
If any further question about AME onboarding or validation tools, please view the FAQ.
ARM API Review Checklist
Applicability: :warning:
If your changes encompass only the following scenarios, you should SKIP this section, as these scenarios do not require ARM review.
Change to data plane APIs
Adding new properties
All removals
Otherwise your PR may be subject to ARM review requirements. Complete the following:
[ ] Check this box if any of the following apply to the PR so that the label "ARMReview" and "WaitForARMFeedback" will be added by bot to kick off ARM API Review. Missing to check this box in the following scenario may result in delays to the ARM manifest review and deployment.
Adding a new service
Adding new API(s)
Adding a new API version
[ ] To review changes efficiently, ensure you copy the existing version into the new directory structure for first commit and then push new changes, including version updates, in separate commits. You can use OpenAPIHub to initialize the PR for adding a new version. For more details refer to the wiki.
[ ] Ensure you've reviewed following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from ARM API Review board.
[ ] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them.
Breaking Change Review Checklist
If you have any breaking changes as defined in the Breaking Change Policy, request approval from the Breaking Change Review Board.
Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Additional details on the process and office hours are on the Breaking Change Wiki.
NOTE: To update API(s) in public preview for over 1 year (refer to Retirement of Previews)
Please follow the link to find more details on PR review process.
The errors flagged by Swagger ModelValidation are irrelevant to the changes in this PR. I'll help bypass the check after the identifier issue is fixed.
|
gharchive/pull-request
| 2022-10-28T23:49:14 |
2025-04-01T04:54:45.546717
|
{
"authors": [
"FumingZhang",
"wenxuan0923"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/pull/21337",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1610821829
|
updated resource id in examples Azure Monitor for SAP
updated resource id in examples Azure Monitor for SAP for API version 2021-12-01-preview
ARM API Information (Control Plane)
MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow.
Azure 1st Party Service can try out the Shift Left experience to initiate API design review from ADO code repo. If you are interested, may request engineering support by filling in with the form https://aka.ms/ShiftLeftSupportForm.
Changelog
Add a changelog entry for this PR by answering the following questions:
What's the purpose of the update?
[ ] new service onboarding
[ ] new API version
[ ] update existing version for new feature
[ ] update existing version to fix swagger quality issue in s360
[ ] Other, please clarify
When are you targeting to deploy the new service/feature to public regions? Please provide the date or, if the date is not yet available, the month.
When do you expect to publish the swagger? Please provide the date or, if the date is not yet available, the month.
By default, Azure SDKs of all languages (.NET/Python/Java/JavaScript for both management-plane SDK and data-plane SDK, Go for management-plane SDK only ) MUST be refreshed with/after swagger of new version is published. If you prefer NOT to refresh any specific SDK language upon swagger updates in the current PR, please leave details with justification here.
Contribution checklist (MS Employees Only):
[ ] I commit to follow the Breaking Change Policy of "no breaking changes"
[ ] I have reviewed the documentation for the workflow.
[ ] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix?
If any further question about AME onboarding or validation tools, please view the FAQ.
ARM API Review Checklist
Applicability: :warning:
If your changes encompass only the following scenarios, you should SKIP this section, as these scenarios do not require ARM review.
Change to data plane APIs
Adding new properties
All removals
Otherwise your PR may be subject to ARM review requirements. Complete the following:
[ ] Check this box if any of the following apply to the PR so that the label "ARMReview" and "WaitForARMFeedback" will be added by bot to kick off ARM API Review. Missing to check this box in the following scenario may result in delays to the ARM manifest review and deployment.
Adding a new service
Adding new API(s)
Adding a new API version
[ ] To review changes efficiently, ensure you copy the existing version into the new directory structure for first commit and then push new changes, including version updates, in separate commits. You can use OpenAPIHub to initialize the PR for adding a new version. For more details refer to the wiki.
[ ] Ensure you've reviewed following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from ARM API Review board.
[ ] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them.
Breaking Change Review Checklist
If you have any breaking changes as defined in the Breaking Change Policy, request approval from the Breaking Change Review Board.
Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Additional details on the process and office hours are on the Breaking Change Wiki.
NOTE: To update API(s) in public preview for over 1 year (refer to Retirement of Previews)
Please follow the link to find more details on PR review process.
@M2skills , can you revert the changes on the root folder files and also resolve the conflicts?
@M2skills , feel free to re-open this PR if needed.
|
gharchive/pull-request
| 2023-03-06T07:23:50 |
2025-04-01T04:54:45.562397
|
{
"authors": [
"M2skills",
"raych1"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/pull/22918",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2496559112
|
New API for communication call settings [Private preview]
Data Plane API Specification Update Pull Request
[!TIP]
Overwhelmed by all this guidance? See the Getting help section at the bottom of this PR description.
Introducing a new provisioning API for ACS Calling that will allow customers to store call-related settings at the ACS resource level and the ACS participant level.
TypeSpec link
PR review workflow diagram
Please understand this diagram before proceeding. It explains how to get your PR approved & merged.
API Info: The Basics
Most of the information about your service should be captured in the issue that serves as your API Spec engagement record.
Link to API Spec engagement record issue:
Is this review for (select one):
[x] a private preview
[ ] a public preview
[ ] GA release
Change Scope
This section will help us focus on the specific parts of your API that are new or have been modified. Please share a link to the design document for the new APIs, a link to the previous API Spec document (if applicable), and the root paths that have been updated.
Design Document:
Previous API Spec Doc: N/A
Updated paths: N/A
Viewing API changes
For convenient view of the API changes made by this PR, refer to the URLs provided in the table
in the Generated ApiView comment added to this PR. You can use ApiView to show API versions diff.
Suppressing failures
If one or multiple validation error/warning suppression(s) is detected in your PR, please follow the
Swagger-Suppression-Process
to get approval.
❔Got questions? Need additional info?? We are here to help!
Contact us!
The Azure API Review Board is dedicated to helping you create amazing APIs. You can read about our mission and learn more about our process on our wiki.
💬 Teams Channel
💌 email
Click here for links to tools, specs, guidelines & other good stuff
Tooling
Open API validation tools were run on this PR. Go here to see how to fix errors
Spectral Linting
Guidelines & Specifications
Azure REST API Guidelines
OpenAPI Style Guidelines
Azure Breaking Change Policy
Helpful Links
Schedule a data plane REST API spec review
Getting help
First, please carefully read through this PR description, from top to bottom.
If you don't have permissions to remove or add labels to the PR, request write access per aka.ms/azsdk/access#request-access-to-rest-api-or-sdk-repositories
To understand what you must do next to merge this PR, see the Next Steps to Merge comment. It will appear within few minutes of submitting this PR and will continue to be up-to-date with current PR state.
For guidance on fixing this PR CI check failures, see the hyperlinks provided in given failure
and https://aka.ms/ci-fix.
If the PR CI checks appear to be stuck in queued state, please add a comment with contents /azp run.
This should result in a new comment denoting a PR validation pipeline has started and the checks should be updated after few minutes.
If the help provided by the previous points is not enough, post to https://aka.ms/azsdk/support/specreview-channel and link to this PR.
fix https://github.com/Azure/azure-rest-api-specs/issues/30399
From our prior conversations, this is only for private preview.
If we're going with the new Swagger route, then we should include the TypeSpec sources.
|
gharchive/pull-request
| 2024-08-30T08:19:31 |
2025-04-01T04:54:45.580344
|
{
"authors": [
"DominikMe",
"jiriburant"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/pull/30381",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2613845819
|
nsp async association api spec changes
Choose a PR Template
Switch to "Preview" on this description then select one of the choices below.
Click here to open a PR for a Data Plane API.
Click here to open a PR for a Control Plane (ARM) API.
@shublnu please read the following Contributor License Agreement(CLA). If you agree with the CLA, please reply with the following information.
@microsoft-github-policy-service agree [company="{your company}"]
Options:
(default - no company specified) I have sole ownership of intellectual property rights to my Submissions and I am not making Submissions in the course of work for my employer.
@microsoft-github-policy-service agree
(when company given) I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer. By signing below, the defined term “You” includes me and my employer.
@microsoft-github-policy-service agree company="Microsoft"
Contributor License Agreement
@microsoft-github-policy-service agree company="Microsoft"
Choose a PR Template description and add a description to this pr to have the Purpose of PR and due diligence sections added
The first commit needs to be an exact copy of the previous API version. All new changes should only be added in the subsequent commits.
This allows the reviewer to get a clear understanding of the actual changes being introduced. With the way the PR is raised now, it is not possible for the reviewer to tell what the changes are. Please either abandon the PR and raise another one following the recommendation, or create a new set of commits on this PR following the recommendation. If you are doing the latter option, please indicate which commit is the exact copy of the previous version.
|
gharchive/pull-request
| 2024-10-25T11:31:57 |
2025-04-01T04:54:45.586180
|
{
"authors": [
"razvanbadea-msft",
"shublnu"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/pull/31233",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
559480911
|
New Put API for updating vault security config
Latest improvements:
MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow.
Contribution checklist:
[x] I have reviewed the documentation for the workflow.
[x] Validation tools were run on swagger spec(s) and have all been fixed in this PR.
[ ] The OpenAPI Hub was used for checking validation status and next steps.
ARM API Review Checklist
[ ] Service team MUST add the "WaitForARMFeedback" label if the management plane API changes fall into one of the below categories.
adding/removing APIs.
adding/removing properties.
adding/removing API-version.
adding a new service in Azure.
Failure to comply may result in delays for manifest application. Note this does not apply to data plane APIs.
[ ] If you are blocked on ARM review and want to get the PR merged urgently, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them.
Please follow the link to find more details on API review process.
You don't have permission to trigger SDK Automation.
Please add yourself to Azure group from opensource portal if you are MSFT employee,
or please ask reviewer to add comment *** /openapibot sdkautomation ***.
Please ask tih@microsoft.com (or NullMDR in github) for additional help.
/azp run automation - sdk
/azp run automation - sdk
/azp run automation - sdk
Can one of the admins verify this patch?
|
gharchive/pull-request
| 2020-02-04T04:29:13 |
2025-04-01T04:54:45.593676
|
{
"authors": [
"AutorestCI",
"azuresdkci",
"chandrasekarendran"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/pull/8292",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
610388877
|
Update mountTarget type definition
Latest improvements:
MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow.
Contribution checklist:
[x] I have reviewed the documentation for the workflow.
[x] Validation tools were run on swagger spec(s) and have all been fixed in this PR.
[ ] The OpenAPI Hub was used for checking validation status and next steps.
ARM API Review Checklist
[ ] Service team MUST add the "WaitForARMFeedback" label if the management plane API changes fall into one of the below categories.
adding/removing APIs.
adding/removing properties.
adding/removing API-version.
adding a new service in Azure.
Failure to comply may result in delays for manifest application. Note this does not apply to data plane APIs.
[ ] If you are blocked on ARM review and want to get the PR merged urgently, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them.
Please follow the link to find more details on API review process.
/azp run automation - sdk
URGENT: this is a bug fix that was already applied in other API versions; see PR 9078.
Can one of the admins verify this patch?
|
gharchive/pull-request
| 2020-04-30T21:11:42 |
2025-04-01T04:54:45.600217
|
{
"authors": [
"AutorestCI",
"audunn",
"azuresdkci"
],
"repo": "Azure/azure-rest-api-specs",
"url": "https://github.com/Azure/azure-rest-api-specs/pull/9294",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1809926144
|
azure-sdk-for-c-arduino, can't see telemetry data from devices in AZURE,ESP32
Describe the bug
can't see telemetry data from devices in AZURE,ESP32
To Reproduce
example from libarary
https://github.com/Azure/azure-sdk-for-c-arduino/blob/main/examples/Azure_IoT_Hub_ESP32/readme.md
Expected behavior
It looks like the device is not sending payloads
Additional context
see my discussion details on stackoverflow
https://stackoverflow.com/questions/76687945/cant-see-telemetry-data-from-devices-in-azure-esp32
In the end result, I'm interested in data transmission via cellular communication, ESP32 and SIM7000 module connected to AZURE IoT HUB
Setup :
OS: [Windows10]
IDE: ARDUINO
Version of the Library used: Last available
[ ] Bug Description Added
[ ] Repro Steps Added
[ ] Setup information Added
Problem solved in https://github.com/Azure/azure-sdk-for-c/issues/2611
|
gharchive/issue
| 2023-07-18T13:20:34 |
2025-04-01T04:54:45.605166
|
{
"authors": [
"MIKHANYA"
],
"repo": "Azure/azure-sdk-for-c",
"url": "https://github.com/Azure/azure-sdk-for-c/issues/2605",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1379680093
|
Split test capture step out from build step
Currently the builds run gtest_discover_tests as a final step of the tests. This step should be split out into its own step to speed up the builds and help isolate points of failure.
Related issue tracking resolving the failures:
https://github.com/Azure/azure-sdk-for-cpp/issues/1607
Having isolated the point of failure to gtest_discover_tests, splitting the step out is pointless and would actually lengthen the build process: since it is part of the CMake files, the only way to control whether it executes is to run the CMake configure/build twice, which lengthens the overall build.
We cannot run only the discovery step on its own, but we can increase the discovery timeout from the current 5 s (which is too short) to something more appropriate.
To quote the cmake gtest documentation for discovery timeout "Most test executables will enumerate their tests very quickly, but under some exceptional circumstances, a test may require a longer timeout. The default is 5. " the exceptional circumstances are not elaborated further.
The current timeout implementation does not work because it lives in a macro that nobody calls, guarded by an environment variable that nobody sets, so it never actually executes.
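For reference, the timeout can be raised per target through gtest_discover_tests's DISCOVERY_TIMEOUT option — a sketch only, where the target name and the 120 s value are illustrative placeholders, not the repo's actual settings:

```cmake
include(GoogleTest)
# Raise the discovery timeout from the 5 s default; "azure-core-test" is a
# hypothetical target name and 120 is an arbitrary example value.
gtest_discover_tests(azure-core-test
  DISCOVERY_TIMEOUT 120
)
```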
|
gharchive/issue
| 2022-09-20T16:21:25 |
2025-04-01T04:54:45.608102
|
{
"authors": [
"RickWinter",
"ahsonkhan",
"gearama"
],
"repo": "Azure/azure-sdk-for-cpp",
"url": "https://github.com/Azure/azure-sdk-for-cpp/issues/3951",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
794032922
|
Setting up keyvault live test
Setting up KeyVault live testing
/azp run cpp - keyvault
/azp run cpp - keyvault
/azp run cpp - keyvault
/azp run cpp - keyvault
/azp run cpp - keyvault
/azp run cpp - keyvault
|
gharchive/pull-request
| 2021-01-26T08:23:54 |
2025-04-01T04:54:45.610619
|
{
"authors": [
"danieljurek",
"vhvb1989"
],
"repo": "Azure/azure-sdk-for-cpp",
"url": "https://github.com/Azure/azure-sdk-for-cpp/pull/1465",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1268372925
|
armsubscription.Subscription: missing Tags
Bug Report
import path: /sdk/resourcemanager/subscription/armsubscription
SDK version: latest
go version: go version go1.18.3 darwin/arm64
Subscription struct doesn't include Tags:
type Subscription struct {
// The authorization source of the request. Valid values are one or more combinations of Legacy, RoleBased, Bypassed, Direct
// and Management. For example, 'Legacy, RoleBased'.
AuthorizationSource *string `json:"authorizationSource,omitempty"`
// The subscription policies.
SubscriptionPolicies *Policies `json:"subscriptionPolicies,omitempty"`
// READ-ONLY; The subscription display name.
DisplayName *string `json:"displayName,omitempty" azure:"ro"`
// READ-ONLY; The fully qualified ID for the subscription. For example, /subscriptions/00000000-0000-0000-0000-000000000000.
ID *string `json:"id,omitempty" azure:"ro"`
// READ-ONLY; The subscription state. Possible values are Enabled, Warned, PastDue, Disabled, and Deleted.
State *SubscriptionState `json:"state,omitempty" azure:"ro"`
// READ-ONLY; The subscription ID.
SubscriptionID *string `json:"subscriptionId,omitempty" azure:"ro"`
}
https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/subscription/armsubscription/zz_generated_models.go
@mblaschke Thanks for your feedback. The return body of Get operation of subscription does not contains tag for now. As this is a feature request, I'll involve service team to have a look.
Adding Service team to look into this feature request.
according to https://docs.microsoft.com/en-us/rest/api/resources/subscriptions/get it returns the tags
This is not really a feature as the old sdk also provides subscription tags.
The Swagger default tag is still using the old 2016 api-version. @anyone from the service team, please help with the upgrade.
@tadelesh I will check with the Service team offline and update this github thread.
You might want to use all the models under “resourcemanager/resources” instead of “resourcemanager” (the latter might have the old version; not sure why it hasn’t been removed).
Please give it a try with https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/resources/armsubscriptions/zz_generated_models.go (I do see “tags” in subscription struct) instead of https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/subscription/armsubscription/zz_generated_models.go (without tags).
@Grayer123 Is the package of subscription/armsubsciption replaced by resources/armsubscriptions totally? If so, we could deprecate the former one and let the customer use the later one.
@tadelesh Got a confirmation from @Grayer123 that it should be replaced.
@navba-MSFT Thanks. @mblaschke You could use package github.com/Azure/azure-sdk-for-go/resources/armsubscriptions.
|
gharchive/issue
| 2022-06-11T21:07:02 |
2025-04-01T04:54:45.618236
|
{
"authors": [
"Grayer123",
"mblaschke",
"navba-MSFT",
"tadelesh"
],
"repo": "Azure/azure-sdk-for-go",
"url": "https://github.com/Azure/azure-sdk-for-go/issues/18395",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2484295038
|
Enhancement: combine event hub in-memory and blob checkpointing feature
Feature Request
This is regarding the checkpointing feature in the Event Hub Go client. I'm wondering if we can combine the benefits of in-memory checkpoints and persistent checkpointing to blob storage. Persisting to blob storage on every checkpoint has resulted in slower performance for my team's application (which requires low-latency processing of telemetry). In-memory persistence, on the other hand, means data will be lost when the application dies or restarts.
I propose adding an option to only persist to blob storage after a time interval or after a certain number of checkpoints.
I propose adding an option to only persist to blob storage after a time interval or after a certain number of checkpoints.
We've generally left this pattern up to the application writer. Generally your app wants full visibility when you bypass certain areas of safety.
Now, your application is in full control of when UpdateCheckpoint is called, so if you want to do a "only write after 'x' time" style pattern it's a simple update to wrap the update with your own logic:
if time.Since(lastCheckpointTime) > arbitraryDuration {
if err := partitionClient.UpdateCheckpoint(context.TODO(), events[len(events)-1], nil); err != nil {
return err
}
lastCheckpointTime = time.Now()
}
The compromise, as you mention, is whether you're okay with possibly reprocessing events if you lose a processor instance and have to start again.
Closing as we're not going to add this to the SDK at this time.
|
gharchive/issue
| 2024-08-24T06:45:34 |
2025-04-01T04:54:45.621750
|
{
"authors": [
"ifeify",
"richardpark-msft"
],
"repo": "Azure/azure-sdk-for-go",
"url": "https://github.com/Azure/azure-sdk-for-go/issues/23370",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1428534246
|
[Release] sdk/resourcemanager/trafficmanager/armtrafficmanager/2.0.0-beta.1
https://github.com/Azure/sdk-release-request/issues/3321
@Alancere Please help to change to minor version.
|
gharchive/pull-request
| 2022-10-30T01:05:40 |
2025-04-01T04:54:45.623393
|
{
"authors": [
"azure-sdk",
"tadelesh"
],
"repo": "Azure/azure-sdk-for-go",
"url": "https://github.com/Azure/azure-sdk-for-go/pull/19450",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
283709264
|
Update Changelog, ACS reorg and delete extra services for v12
Thank you for your contribution to the Azure-SDK-for-Go! We will triage and review it as quickly as we can.
As part of your submission, please make sure that you can make the following assertions:
[ ] I'm not making changes to Auto-Generated files which will just get erased next time there's a release.
If that's what you want to do, consider making a contribution here: https://github.com/Azure/autorest.go
[ ] I've tested my changes, adding unit tests where applicable.
[ ] I've added Apache 2.0 Headers to the top of any new source files.
[ ] I'm submitting this PR to the dev branch, or I'm fixing a bug that warrants its own release and I'm targeting the master branch.
[ ] If I'm targeting the master branch, I've also added a note to CHANGELOG.md.
[ ] I've mentioned any relevant open issues in this PR, making clear the context for the contribution.
#861
|
gharchive/pull-request
| 2017-12-20T22:04:47 |
2025-04-01T04:54:45.627393
|
{
"authors": [
"vladbarosan"
],
"repo": "Azure/azure-sdk-for-go",
"url": "https://github.com/Azure/azure-sdk-for-go/pull/921",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
806609609
|
Sync eng/common directory with azure-sdk-tools for PR 1386
Sync eng/common directory with azure-sdk-tools for PR https://github.com/Azure/azure-sdk-tools/pull/1386 See eng/common workflow
/check-enforcer reset
|
gharchive/pull-request
| 2021-02-11T17:48:36 |
2025-04-01T04:54:45.629244
|
{
"authors": [
"azure-sdk",
"weshaggard"
],
"repo": "Azure/azure-sdk-for-ios",
"url": "https://github.com/Azure/azure-sdk-for-ios/pull/704",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|