id stringlengths 4 to 10 | text stringlengths 4 to 2.14M | source stringclasses 2 values | created timestamp[s] 2001-05-16 21:05:09 to 2025-01-01 03:38:30 | added stringdate 2025-04-01 04:05:38 to 2025-04-01 07:14:06 | metadata dict |
---|---|---|---|---|---|
430294898
|
Shared links of my website on WhatsApp are not opening in Smart WebView
I want to open a shared link or URL of my website in the Smart WebView app. The intent chooser pops up when a link is clicked, but after selecting the Smart WebView app nothing happens. Please help, and sorry for my bad English.
I'm going to assume that you've already added your intended web address in manifest data filter > host.
If the screen goes white, it probably means there is some port or file request issue, like a mix of HTTP and HTTPS requests. If not, can you look into the logs for more details on the issue?
I have the same issue with WhatsApp and Telegram. When I click on my app's URLs in those applications, nothing happens.
Here is my setting:
The only link the application can handle from WhatsApp is "https://my.site.com/". When I click on other URLs such as "https://my.site.com/blah", nothing happens.
Note that I tested the problematic URLs with my Samsung SMS app and had no problem there.
My Android version is Pie.
Does it open links inside the app's internal browser (Chrome tab), or does it force-open your default browser?
It always tries to open such links (my.site.com) with my app. One hint: on the first click on a URL, it seems that it tries to launch MainActivity, but maybe the activity crashes and fails to load.
@ATSGIT do you record any logs? I'd need to look into it.
No, I have no log. I think the problem may relate to how the activity handles the received intent.
I don't know what to look into or how, unless I have some log data. I'm not seeing this problem on my end.
Can you share the log next time you use Android Studio?
intent filter in my app:
<activity
    android:name="com.xmpl.my.MainActivity"
    android:screenOrientation="portrait"> <!-- remove or alter as your apps requirement -->
    <intent-filter android:label="@string/app_name">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data
            android:scheme="https"
            android:host="my.website.com"
            android:pathPrefix="/" />
        <!-- if you want only a specific directory from your website to be opened in the app through external links -->
    </intent-filter>
</activity>
Logcat log on Android 7 when clicking on a link:
08-28 17:41:43.490 1590-1924/system_process I/ActivityManager: START u0 {act=android.intent.action.VIEW dat=https://my.website.com/dashboard cmp=android/com.android.internal.app.ResolverActivity (has extras)} from uid 10009 on display 0
--------- beginning of main
08-28 17:41:43.490 1590-1664/system_process W/AudioTrack: AUDIO_OUTPUT_FLAG_FAST denied by client
08-28 17:41:43.505 3059-3059/system:ui D/ResolverActivity: sinceTime=1565788303505
08-28 17:41:43.510 2446-2941/com.android.mms D/Mms: cancelNotification
08-28 17:41:43.515 2446-3137/com.android.mms D/Mms: cancelNotification
08-28 17:41:43.522 2446-2446/com.android.mms D/Mms: cancelNotification
08-28 17:41:43.536 1205-1264/? D/gralloc_ranchu: gralloc_alloc: Creating ashmem region of size 7753728
08-28 17:41:43.668 2446-2446/com.android.mms W/IInputConnectionWrapper: showStatusIcon on inactive InputConnection
08-28 17:41:43.669 1205-2127/? D/gralloc_ranchu: gralloc_alloc: Creating ashmem region of size 7753728
08-28 17:41:43.702 1590-1620/system_process I/ActivityManager: Displayed android/com.android.internal.app.ResolverActivity: +200ms (total +60s754ms)
08-28 17:41:44.733 1205-1267/? D/gralloc_ranchu: gralloc_alloc: Creating ashmem region of size 7753728
08-28 17:41:46.462 1590-1664/system_process W/AudioTrack: AUDIO_OUTPUT_FLAG_FAST denied by client
08-28 17:41:46.464 1590-1611/system_process I/PackageManager: Setting last chosen activity com.xmpl.my/.MainActivity for user 0:
08-28 17:41:46.464 1590-1611/system_process I/PackageManager: Action: "android.intent.action.VIEW"
08-28 17:41:46.464 1590-1611/system_process I/PackageManager: Category: "android.intent.category.DEFAULT"
08-28 17:41:46.464 1590-1611/system_process I/PackageManager: Scheme: "https"
08-28 17:41:46.464 1590-1611/system_process I/PackageManager: Authority: "my.website.com": -1
08-28 17:41:46.464 1590-1611/system_process I/PackageManager: Path: "PatternMatcher{PREFIX: /}"
08-28 17:41:46.468 1590-1924/system_process I/ActivityManager: START u0 {act=android.intent.action.VIEW dat=https://my.website.com/dashboard flg=0x3000000 cmp=com.xmpl.my/.MainActivity (has extras)} from uid 10009 on display 0
08-28 17:41:46.971 1590-1615/system_process W/ActivityManager: Activity pause timeout for ActivityRecord{9820d1d u0 android/com.android.internal.app.ResolverActivity t34 f}
08-28 17:41:46.975 2836-2836/com.xmpl.my V/FA: onActivityCreated
08-28 17:41:46.975 2836-2836/com.xmpl.my W/READ_PERM =: android.permission.READ_EXTERNAL_STORAGE
08-28 17:41:46.975 2836-2836/com.xmpl.my W/WRITE_PERM =: android.permission.WRITE_EXTERNAL_STORAGE
08-28 17:41:47.509 2446-2934/com.android.mms D/Mms: cancelNotification
08-28 17:41:47.512 2446-3140/com.android.mms D/Mms: cancelNotification
08-28 17:41:47.521 2446-2931/com.android.mms D/EGL_emulation: eglMakeCurrent: 0xa222b0a0: ver 2 0
08-28 17:41:47.528 3059-3059/system:ui I/Choreographer: Skipped 62 frames! The application may be doing too much work on its main thread.
08-28 17:41:47.569 1590-1931/system_process W/InputMethodManagerService: Focus gain on non-focused client com.android.internal.view.IInputMethodClient$Stub$Proxy@26a3c5c3 (uid=10009 pid=2446)
08-28 17:41:47.671 2446-2931/com.android.mms D/EGL_emulation: eglMakeCurrent: 0xa222b0a0: ver 2 0
08-28 17:41:47.707 2446-2446/com.android.mms D/Mms: cancelNotification
08-28 17:41:48.014 2446-2942/com.android.mms D/Mms: cancelNotification
08-28 17:41:48.020 2446-3142/com.android.mms D/Mms: cancelNotification
08-28 17:41:48.023 2446-2446/com.android.mms D/Mms: cancelNotification
I debugged the app and found that the problem is related to the following logic in MainActivity:
if (!isTaskRoot()) {
finish();
return;
}
isTaskRoot() returns false for all external links except the root URL "https://www.example.com/", so when such a link is clicked, the activity exits.
I found a solution:
Uri exturi = getIntent().getData();
if (exturi != null) {
    // Launched from an external link: open the shared URL.
    String link = exturi.toString();
    aswm_view(link, false);
} else {
    // Normal launch: open the default start page.
    aswm_view(ASWV_URL, false);
}
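For context, a minimal sketch of how that deep-link handling could sit next to the isTaskRoot() guard in MainActivity. The class layout and stubs are assumptions for illustration; only aswm_view and ASWV_URL come from the Smart WebView code referenced above:
```java
import android.net.Uri;
import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;

public class MainActivity extends AppCompatActivity {

    // Stand-ins for the Smart WebView helpers referenced in the comments above.
    private static final String ASWV_URL = "https://my.website.com/";
    private void aswm_view(String url, boolean newWindow) { /* load url in the WebView */ }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Non-null only when the activity was launched from an external link.
        Uri extUri = getIntent().getData();

        // Only bail out for duplicate launches that carry no deep link,
        // so external links are no longer swallowed by finish().
        if (!isTaskRoot() && extUri == null) {
            finish();
            return;
        }

        if (extUri != null) {
            aswm_view(extUri.toString(), false); // open the shared URL
        } else {
            aswm_view(ASWV_URL, false);          // fall back to the default start page
        }
    }
}
```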
I Found Solution
Uri exturi = getIntent().getData();
if(exturi!=null){
String link = exturi.toString();
aswm_view(link, false);
}else{
aswm_view(ASWV_URL, false);
}
|
gharchive/issue
| 2019-04-08T07:24:58 |
2025-04-01T04:35:01.170633
|
{
"authors": [
"ATSGIT",
"RahimSayyed",
"mgks",
"natsirasrafi"
],
"repo": "mgks/Android-SmartWebView",
"url": "https://github.com/mgks/Android-SmartWebView/issues/86",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1569406607
|
🛑 Traffic Dashboard Mediahuis Nederland is down
In da12da7, Traffic Dashboard Mediahuis Nederland (https://traffic-tmg.mediahuis.nl/api/system/status) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Traffic Dashboard Mediahuis Nederland is back up in 1cd32f7.
|
gharchive/issue
| 2023-02-03T08:09:39 |
2025-04-01T04:35:01.208350
|
{
"authors": [
"JoranDox"
],
"repo": "mh-data-science/upptime",
"url": "https://github.com/mh-data-science/upptime/issues/1001",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1463814251
|
🛑 Traffic Dashboard Mediahuis België is down
In d7c993b, Traffic Dashboard Mediahuis België (https://traffic.mediahuis.be/api/system/status) was down:
HTTP code: 503
Response time: 601 ms
Resolved: Traffic Dashboard Mediahuis België is back up in 97ab690.
|
gharchive/issue
| 2022-11-24T21:10:38 |
2025-04-01T04:35:01.210909
|
{
"authors": [
"JoranDox"
],
"repo": "mh-data-science/upptime",
"url": "https://github.com/mh-data-science/upptime/issues/298",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
31103234
|
Installation on Windows
I installed the plugin on my Windows machine and I got this error message when I try to run any Vagrant command:
Vagrant failed to initialize at a very early stage:
The plugins failed to load properly. The error message given is
shown below.
Permission denied - puppet
Do I have to install Puppet and/or the librarian-puppet gem on my Windows machine to make this work?
Is anyone able to reproduce this on a clean vagrant box? It sounds like an environment setup issue.
I think this is old and can be closed? I recently wiped out vagrant (and removed C:\HashiCorp\ along with ~/.vagrant.d/gems/ and ~/.vagrant.d/plugins.json) and reinstalled it along with this plug-in on Windows 8.1 Pro 64 bit. Everything works well.
|
gharchive/issue
| 2014-04-08T19:54:55 |
2025-04-01T04:35:01.212789
|
{
"authors": [
"jhoblitt",
"mlt",
"pablocrivella"
],
"repo": "mhahn/vagrant-librarian-puppet",
"url": "https://github.com/mhahn/vagrant-librarian-puppet/issues/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2666747822
|
UDP Proxy not functioning
Hi All,
It appears that there is some issue with the PROXY protocol implementation for UDP, as my server receives this response when trying to proxy:
[2024-11-18 00:00:04 Local] [192.168.200.1:37825] [UDPPROXY] System.ArgumentNullException: Value cannot be null. (Parameter 'address')
at System.ArgumentNullException.Throw(String paramName)
at DnsServerCore.Dns.DnsServer.ReadUdpRequestAsync(Socket udpListener, DnsTransportProtocol protocol)
[2024-11-18 00:00:04 Local] [192.168.200.1:37825] [UDPPROXY] System.IO.InvalidDataException: The stream does not contain PROXY protocol header.
at TechnitiumLibrary.Net.ProxyProtocol.ProxyProtocolStream.CreateAsServerAsync(Stream baseStream, CancellationToken cancellationToken) in Z:\Technitium\Projects\TechnitiumLibrary\TechnitiumLibrary.Net\ProxyProtocol\ProxyProtocolStream.cs:line 138
at DnsServerCore.Dns.DnsServer.ReadUdpRequestAsync(Socket udpListener, DnsTransportProtocol protocol) in Z:\Technitium\Projects\DnsServer\DnsServerCore\Dns\DnsServer.cs:line 409
But when I use nginx, it works fine. I did raise this with the provider and they also confirmed my findings
https://github.com/TechnitiumSoftware/DnsServer/issues/1107
TCP works fine for proxying as I'm also utilizing that, but I would like to cover both sides of it.
What I see on the Caddy side when this is run are the errors below.
{"level":"debug","ts":1731887445.6253898,"logger":"layer4","msg":"matching","remote":"172.253.247.61:39349","error":"consumed all prefetched bytes","matcher":"layer4.matchers.dns","matched":false}
{"level":"debug","ts":1731887445.6254132,"logger":"layer4","msg":"prefetched","remote":"172.253.247.61:39349","bytes":48}
{"level":"debug","ts":1731887445.6254213,"logger":"layer4","msg":"matching","remote":"172.253.247.61:39349","matcher":"layer4.matchers.dns","matched":true}
{"level":"debug","ts":1731887446.644327,"logger":"layer4","msg":"matching","remote":"172.68.209.70:19458","error":"consumed all prefetched bytes","matcher":"layer4.matchers.dns","matched":false}
{"level":"debug","ts":1731887446.64437,"logger":"layer4","msg":"prefetched","remote":"172.68.209.70:19458","bytes":48}
{"level":"debug","ts":1731887446.6443822,"logger":"layer4","msg":"matching","remote":"172.68.209.70:19458","matcher":"layer4.matchers.dns","matched":true}
{"level":"debug","ts":1731887446.6450343,"logger":"layer4","msg":"matching","remote":"172.68.209.70:19483","error":"consumed all prefetched bytes","matcher":"layer4.matchers.dns","matched":false}
{"level":"debug","ts":1731887446.6450565,"logger":"layer4","msg":"prefetched","remote":"172.68.209.70:19483","bytes":52}
{"level":"debug","ts":1731887446.6450644,"logger":"layer4","msg":"matching","remote":"172.68.209.70:19483","matcher":"layer4.matchers.dns","matched":true}
This is what I have set up for the block
layer4 {
    tcp/:53 {
        @tdns dns
        route @tdns {
            proxy {
                upstream tcp/ext-dns:538
                proxy_protocol v2
            }
        }
    }
    udp/:53 {
        @udns dns
        route @udns {
            proxy {
                proxy_protocol v2
                upstream {
                    dial udp/ext-dns:538
                }
            }
        }
    }
}
And this is what I set up with nginx to confirm that it is definitely an issue with caddy-l4:
stream {
    log_format proxy '$remote_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time "$upstream_addr" '
                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';
    upstream dns {
        server ext-dns:538;
    }
    server {
        listen 53 udp;
        proxy_pass dns;
        proxy_protocol on;
    }
    access_log /config/log/nginx/access-dns.log proxy buffer=32k;
}
cc/ @WeidiDeng
I think the layer4 proxy handler just does not intend to support proxy protocol with udp.
I am not sure how proxy protocol over udp is supposed to work, but it probably has to send the header in front of each packet instead of only once.
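To illustrate that last point, here is a minimal, self-contained sketch (not caddy-l4 or Technitium code) of the PROXY protocol v2 framing that would have to be prepended to every forwarded UDP datagram for an IPv4 association:
```go
package main

import (
	"encoding/binary"
	"net"
)

// proxyV2Header builds a PROXY protocol v2 header for an IPv4/UDP association.
func proxyV2Header(src, dst *net.UDPAddr) []byte {
	// 12-byte protocol signature, then version/command 0x21 (v2, PROXY)
	// and family/transport 0x12 (AF_INET, DGRAM).
	hdr := []byte{
		0x0D, 0x0A, 0x0D, 0x0A, 0x00, 0x0D, 0x0A, 0x51, 0x55, 0x49, 0x54, 0x0A,
		0x21, 0x12,
	}
	addr := make([]byte, 2+4+4+2+2)
	binary.BigEndian.PutUint16(addr[0:2], 12) // 12 bytes of addresses and ports follow
	copy(addr[2:6], src.IP.To4())
	copy(addr[6:10], dst.IP.To4())
	binary.BigEndian.PutUint16(addr[10:12], uint16(src.Port))
	binary.BigEndian.PutUint16(addr[12:14], uint16(dst.Port))
	return append(hdr, addr...)
}

func main() {
	src := &net.UDPAddr{IP: net.IPv4(192, 168, 200, 1), Port: 37825}
	dst := &net.UDPAddr{IP: net.IPv4(192, 168, 200, 2), Port: 53}
	payload := []byte{0x12, 0x34} // stand-in for a DNS query
	// Each datagram sent upstream would carry header + payload,
	// rather than the header being sent only once per "connection".
	_ = append(proxyV2Header(src, dst), payload...)
}
```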
|
gharchive/issue
| 2024-11-17T23:54:31 |
2025-04-01T04:35:01.251643
|
{
"authors": [
"erroltuparker",
"mohammed90",
"ydylla"
],
"repo": "mholt/caddy-l4",
"url": "https://github.com/mholt/caddy-l4/issues/269",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
397603286
|
Update npm/node babel syntax
https://github.com/mhulse/node-boilerplate-cli/commit/fc26211282d0daf1592a04876bf7d6d0d52cac35
I forgot, but I'm not using babel or my CLI boiler for this proj. Closing.
|
gharchive/issue
| 2019-01-09T23:18:11 |
2025-04-01T04:35:01.254187
|
{
"authors": [
"mhulse"
],
"repo": "mhulse/kludgy",
"url": "https://github.com/mhulse/kludgy/issues/32",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
204252612
|
using webpack context path
Problem
If I use webpack.LoaderOptionsPlugin(), there is an independent "context" field.
new webpack.LoaderOptionsPlugin({
options: {
context: path.join(APP_DIR, "app/assets"),
postcss: function(webpack) {
return [plugins]
}
}
})
I can modify that context, and postcss uses it.
But if I use postcss.config.js, there is no option for an independent context.
PostCSS will just use webpack's context, not its own context.
Maybe postcss.config.js could also handle that context, like:
module.exports = (ctx) => ({
context: path.join(APP_DIR, "app/assets"),
plugins: {
...
}
})
module.exports = (ctx) => {
  console.log(ctx.webpack) // Webpack Loader API => this.resourcePath etc.
}
@FourwingsY The webpack context is on ctx.webpack. What do you mean exactly by its own context? Maybe I'm missing the point here :)
@michael-ciniawsky Sorry, I might be misusing postcss-smart-import. Thanks for your example!
@FourwingsY I highly recommend using postcss-import v9+ instead of postcss-smart-import, because postcss-loader is then able to watch your @import files automatically:
import imports from 'postcss-import'

postcss([ imports() ]).process('index.css', options)
  .then((result) => {
    result.messages
      .filter((msg) => msg.type === 'dependency')
      .forEach((dependency) => loader.addDependency(dependency.file)) // @import's
  })
|
gharchive/issue
| 2017-01-31T10:14:29 |
2025-04-01T04:35:01.264983
|
{
"authors": [
"FourwingsY",
"michael-ciniawsky"
],
"repo": "michael-ciniawsky/postcss-load-config",
"url": "https://github.com/michael-ciniawsky/postcss-load-config/issues/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
433005301
|
Add option to force opening links in new browser window
#669
Added an option to the config file, force_new_browser_window, which when set opens links in a new window instead of a new tab in the running browser.
[Note] The option works with old config files. If the config file doesn't have this option defined, the old behavior (open in a new tab) is used.
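The general idea maps onto Python's standard webbrowser module, where new=1 requests a new browser window and new=2 a new tab; a minimal sketch of that behavior (not rtv's actual code, only the config name force_new_browser_window comes from this PR):
```python
import webbrowser

def open_link(url, force_new_browser_window=False):
    # new=1 asks the browser for a new window, new=2 for a new tab.
    webbrowser.open(url, new=1 if force_new_browser_window else 2)

open_link("https://www.reddit.com/r/python", force_new_browser_window=True)
```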
Thanks for the PR. I made a few requests for minor changes.
Thank you for the review, @michael-lazar! I addressed the requested changes. I created separate commits because I'm not sure if one can do a force push here.
Merged, thank you!
|
gharchive/pull-request
| 2019-04-14T17:11:46 |
2025-04-01T04:35:01.267143
|
{
"authors": [
"michael-lazar",
"pabloariasal"
],
"repo": "michael-lazar/rtv",
"url": "https://github.com/michael-lazar/rtv/pull/674",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
116696556
|
index.php/Vichaya - error message is not clear
Test Steps
Entered invalid email or password
Received error message "wrong details"
Problem: Error message is not clear or specific.
Suggested Solution: Change message to "Incorrect username or password."
Done
|
gharchive/issue
| 2015-11-13T04:07:49 |
2025-04-01T04:35:01.268659
|
{
"authors": [
"VALVeisG0D",
"chalkboard3000"
],
"repo": "michael-munoz/WePlayCards",
"url": "https://github.com/michael-munoz/WePlayCards/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1662269804
|
Static "Mirror" build without the editor tools
Awesome thing! I'm playing around with it a lot and I like the simplicity of its abstractions. Would it be possible to have a static build of the site without the editor tools? As I understand it, we ship the whole thing, editor and all, even though that code will only be used by the editors.
Implementation ideas
Multiple SvelteKit adapters output the editor plus a separate static build?
Push the whole static build to S3 storage.
Can we "cache" the static build awaiting the dynamic app parts somehow?
Check if the user is logged in during page load, and async-await the code-split editor functionality only if they are.
I'm not sure if this is a good idea, since I quite like the "embedded" approach and don't want the complexity of another static build. It takes this in the direction of the other CMSs, which I don't like. But I also don't like shipping a lot of code for editing when only a fraction of the visitors will be able to use it anyway.
Yeah, as you say the challenge is to avoid additional infrastructure and code branches so we don't end up with complex build/publishing workflows again. Still a "static mirror" may be interesting for high traffic websites, so I'm keen to have a solution for that at some point.
However, short-term a good first step would be await import so that the editor-related bits are only loaded when logged in. Would you be interested in working on a PR that adds that? :)
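A rough sketch of that await import idea (hypothetical file name and user check, just to illustrate the code splitting):
```js
// Only download the editor bundle for visitors who are actually logged in;
// the bundler turns the dynamic import() into a separate, lazily loaded chunk.
let Editor = null;

export async function enableEditing(currentUser) {
  if (!currentUser) return;                               // anonymous visitors never fetch editor code
  const module = await import('./EditorToolbar.svelte');  // hypothetical path
  Editor = module.default;
}
```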
Yes that solution sounds like the least complex and still adds value. I can give it a shot :)
Well done! Thanks a lot.
This solution is balanced with added complexity and value IMO, closing issue.
|
gharchive/issue
| 2023-04-11T11:16:55 |
2025-04-01T04:35:01.273272
|
{
"authors": [
"michael",
"nilskj"
],
"repo": "michael/editable-website",
"url": "https://github.com/michael/editable-website/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
20190700
|
repo.write throws "Update is not a fast forward" after first successful create
Using the example from the docs, I can successfully write to my repo.
// example write
repo.write('master', 'path/to/file', 'YOUR_NEW_CONTENTS', 'YOUR_COMMIT_MESSAGE', function(err) {});
But only the first save seems to work. The others return a 422 status with the following error. I did notice this prose ticket...wondering what the workaround there was?
{
"message": "Update is not a fast forward",
"documentation_url": "http://developer.github.com/v3"
}
How are you doing callbacks? If you have something like
repo.write('master', 'hurr.txt', 'YAY!', 'Initial', function(err){});
repo.write('master', 'yay.txt', 'HURR!', 'Second', function(err){});
It will fail because both write calls will be against the same SHA, meaning one will succeed and the other will fail.
Could you try a pattern more like:
repo.write('master', 'hurr.txt', 'YAY!', 'Initial', function(err){
repo.write('master', 'hurr.txt', 'YAY!', 'Initial', function(err){
console.log('donezo');
});
});
...With the latest version of the library? Thanks!
Closing as duplicate of #85.
|
gharchive/issue
| 2013-09-27T19:41:59 |
2025-04-01T04:35:01.276161
|
{
"authors": [
"aendrew",
"thebigspoon"
],
"repo": "michael/github",
"url": "https://github.com/michael/github/issues/78",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
701631570
|
Add images in spanish to Maya notebook
Raising this issue so that we don't forget... Also change the links for the profiles in Spanish...
Closed by #42, right?
right!
|
gharchive/issue
| 2020-09-15T05:52:38 |
2025-04-01T04:35:01.277204
|
{
"authors": [
"alxogm",
"michaelJwilson"
],
"repo": "michaelJwilson/DESI-HighSchool",
"url": "https://github.com/michaelJwilson/DESI-HighSchool/issues/41",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
619900893
|
More flexibilty with shellcode execution
To incorporate this tool into my payload build pipelines, I wanted more flexibility with the XLM code you define in PayloadType.cs (for example, if I want to add sandbox checks or some other XLM code to execute before the thread is created). My pull request adds a --preamble flag, which simply takes whatever XLM you define in Docs/preamble.txt and adds it in front of the code which creates the thread.
Yup, this is a reasonable update / adds some extra functionality from the offense side. Merging it in - cheers!
|
gharchive/pull-request
| 2020-05-18T04:34:51 |
2025-04-01T04:35:01.339595
|
{
"authors": [
"kafkaesqu3",
"michaelweber"
],
"repo": "michaelweber/Macrome",
"url": "https://github.com/michaelweber/Macrome/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2402230963
|
Cannot get location every time on iOS
Sometimes my app needs to get the user's current location, but it is not successful every time.
Geolocation.getCurrentPosition(
  ({ coords: { latitude, longitude, accuracy } }) => {
    setUserLoc({
      ...userloc,
      latitude,
      longitude,
      accuracy,
    });
    Geolocation.watchPosition(
      ({ coords: { latitude, longitude, accuracy } }) => {
        setUserLoc({
          ...userloc,
          latitude,
          longitude,
          accuracy,
        });
      },
      error => {
        console.log('watch user position: ', error);
        sendToSlack(
          'Error Watching Location on _getCurrentPosition App.js',
          error,
        );
      },
      {
        enableHighAccuracy: true,
        timeout: 20000,
        maximumAge: 5000,
        distanceFilter: 10,
      },
    );
    setTimeout(() => Geolocation.clearWatch(), 300000);
  },
  error => {
    console.log('error accessing location: ', error);
  },
  { enableHighAccuracy: true, timeout: 20000, maximumAge: 5000 },
);
I think it only succeeds the first time. After that it returns an error like this:
{
"code": 3,
"message": "Unable to fetch location within 20.0s.",
"PERMISSION_DENIED": 1,
"POSITION_UNAVAILABLE": 2,
"TIMEOUT": 3
}
I have set NSLocationAlwaysUsageDescription and NSLocationAlwaysAndWhenInUseUsageDescription in my info.plist
Please help
I'm having the same issue; I have to close and reopen the app completely for it to work again.
++
++
same here .... but for me 8/10 fails now :(
getCurrentPosition never returns any data, even though the permission is granted.
@michalchudziak please investigate this ASAP; this is extremely frustrating and we are blocked from releasing our app just because of this... this is the official plugin, right?
Thanks for reporting this problem. It should be solved in 3.4.0 release.
I believe the issue is persisting for me ;(
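For what it's worth, a common mitigation for the intermittent code-3 timeouts described above (just a sketch of a retry, not an official fix from this library) is to fall back to a low-accuracy request when the high-accuracy one times out:
```js
import Geolocation from '@react-native-community/geolocation';

// Try a high-accuracy fix first; if it times out (error.code === 3),
// retry once with enableHighAccuracy disabled, which usually resolves faster.
function getPositionWithFallback(onSuccess, onError) {
  Geolocation.getCurrentPosition(
    onSuccess,
    error => {
      if (error.code !== 3) {
        onError(error);
        return;
      }
      Geolocation.getCurrentPosition(onSuccess, onError, {
        enableHighAccuracy: false,
        timeout: 20000,
        maximumAge: 10000,
      });
    },
    { enableHighAccuracy: true, timeout: 20000, maximumAge: 5000 },
  );
}
```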
|
gharchive/issue
| 2024-07-11T04:38:52 |
2025-04-01T04:35:01.347761
|
{
"authors": [
"CostasCF",
"atlasagencyca",
"gulsher7",
"michalchudziak",
"mr-sudo",
"muslimmuda15",
"petertoth-dev"
],
"repo": "michalchudziak/react-native-geolocation",
"url": "https://github.com/michalchudziak/react-native-geolocation/issues/323",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1658829475
|
Not available in Greece for purchase
Hi, great little app,
I want to buy it, but it is not available in the Windows Store from Greece. If I go through the web link to the store, I can see the app but there is no option to purchase or download it, and if I search for it in my store no results are returned.
It would be good if you could make it available in Greece or provide alternative channels to purchase it.
It's a temporary situation in all the countries. It may take a few days to get available again. Please be patient 🙂
|
gharchive/issue
| 2023-04-07T13:53:20 |
2025-04-01T04:35:01.349337
|
{
"authors": [
"michalleptuch",
"psycho-bits"
],
"repo": "michalleptuch/calendar-flyout",
"url": "https://github.com/michalleptuch/calendar-flyout/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
272915119
|
How do I add a local image?
How do I paste a local image? Neither of the two methods below works:


see #90
|
gharchive/issue
| 2017-11-10T12:19:30 |
2025-04-01T04:35:01.350457
|
{
"authors": [
"PetersonZhao",
"michalyao"
],
"repo": "michalyao/evermonkey",
"url": "https://github.com/michalyao/evermonkey/issues/89",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
936651773
|
PK sampling and loss
Hello Dr. Luo, we recently reproduced your group's training code and do have a few questions and points of confusion. We hope you can give us some pointers. Thank you!
1. Why does the classification loss not use a margin-based loss such as CosFace? Is this related to the ID distribution of the training samples?
2. The PK sampling in each epoch traverses almost all images. Compared with iterating over IDs and randomly sampling K images each time, which approach is better? Or should different PK sampling strategies be chosen for different training data distributions?
3. Training on a single V100 indeed gives very high scores, but switching to DP training on multiple 2080Ti cards causes a noticeable drop. Is this because of SyncBN? The drop still seems larger than expected.
4. For the UDA method in post-processing, how much does it improve mAP? When we used UDA in the competition, it did not seem to gain us many points.
Thanks for your advice!
1. In our work we have always preferred the plain classification loss; we have tuned its performance quite well, and margin-based losses have all performed worse than the plain classification loss in our experiments.
2. Traversing all images performs slightly better, unless you hit an extremely long-tailed dataset, in which case the latter sampling is a bit more balanced.
3. For multi-GPU training, switching to SyncBN helps; in addition, it is best to switch the sampler to a DDP sampler as well.
4. UDA performance depends on the quality of the pseudo-labels; we fused a lot of information to improve the pseudo-label quality.
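For readers unfamiliar with PK sampling, a toy sketch (not this repository's implementation) of the "iterate over IDs and randomly draw K images per ID" variant discussed in point 2:
```python
import random
from collections import defaultdict

def pk_batches(labels, P=16, K=4, seed=0):
    """Yield batches of P identities x K images each (toy PK sampler)."""
    rng = random.Random(seed)
    by_id = defaultdict(list)
    for idx, pid in enumerate(labels):
        by_id[pid].append(idx)

    ids = list(by_id)
    rng.shuffle(ids)
    for start in range(0, len(ids) - P + 1, P):
        batch = []
        for pid in ids[start:start + P]:
            pool = by_id[pid]
            # Draw K images per identity, with replacement if the ID has fewer than K.
            picks = rng.sample(pool, K) if len(pool) >= K else rng.choices(pool, k=K)
            batch.extend(picks)
        yield batch

# Example: three identities with unbalanced image counts.
labels = [0, 0, 0, 1, 1, 2, 2, 2, 2, 2]
for batch in pk_batches(labels, P=2, K=2):
    print(batch)
```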
|
gharchive/issue
| 2021-07-05T03:44:54 |
2025-04-01T04:35:01.382187
|
{
"authors": [
"NickChen31",
"michuanhaohao"
],
"repo": "michuanhaohao/AICITY2021_Track2_DMT",
"url": "https://github.com/michuanhaohao/AICITY2021_Track2_DMT/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1206763392
|
feat: add umi-child
add umi-child
Dynamically change the basename.
Added prefixCls to keep the host application's antd styles from overriding the sub-application's antd styles; the mfsu feature was turned off, because prefixCls has no effect when mfsu is enabled (still to be fixed).
Added a SOCKET_SERVER configuration so that HMR works correctly for umi-child when running as a sub-application.
Through the host application's proxy configuration, developers can use umi's mock feature normally during development.
@bailicangdu please review this PR.
|
gharchive/pull-request
| 2022-04-18T07:21:58 |
2025-04-01T04:35:01.408066
|
{
"authors": [
"JoMartinezZhu"
],
"repo": "micro-zoe/micro-app-demo",
"url": "https://github.com/micro-zoe/micro-app-demo/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
383955966
|
grpcurl -plaintext localhost:36641 list
Hello,
I developed a service with go-micro, how ever I got some problems when I tried with grpcurl.
grpcurl -plaintext localhost:36641 list
Failed to list services: rpc error: code = Internal desc = transport: received the unexpected content-type "text/plain"
Is there something that needs to be configured to make grpcurl work with a service developed with go-micro?
Go micro does not support grpc by default at the moment. You have to either include the plugins or use go-grpc
Hello,
I added the gRPC client in code as in the example:
import (
"github.com/micro/go-micro"
"github.com/micro/go-plugins/client/grpc"
)
func main() {
service := micro.NewService(
micro.Name("greeter"),
micro.Client(grpc.NewClient()),
)
}
then grpcurl returned the same error.
Is it possible to make grpc work with go-micro?
I am planning to replace the REST API with gRPC for mobile app development; the Android part calls methods defined in the proto file via gRPC. Is this possible if I develop with go-micro?
ok, I changed it to server code, and it works.
grpcurl -protoset *.protoset -plaintext localhost:44917 list
thanks
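For reference, a minimal sketch of what "changed it to server code" might look like (assuming the go-plugins gRPC server/client packages of that go-micro generation; exact import paths depend on your versions):
```go
package main

import (
	"log"

	"github.com/micro/go-micro"
	grpcclient "github.com/micro/go-plugins/client/grpc"
	grpcserver "github.com/micro/go-plugins/server/grpc"
)

func main() {
	// Serve gRPC so tools like grpcurl can talk to the service,
	// and use the gRPC client for outgoing calls as well.
	service := micro.NewService(
		micro.Name("greeter"),
		micro.Server(grpcserver.NewServer()),
		micro.Client(grpcclient.NewClient()),
	)
	service.Init()

	if err := service.Run(); err != nil {
		log.Fatal(err)
	}
}
```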
|
gharchive/issue
| 2018-11-24T03:42:36 |
2025-04-01T04:35:01.423202
|
{
"authors": [
"asim",
"seaguest"
],
"repo": "micro/go-micro",
"url": "https://github.com/micro/go-micro/issues/337",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2150550298
|
splitOn adds rowTree instead of manipulating existing one
Describe the bug
After splitting the data, if update_rowTree is TRUE, splitOn() adds a new rowTree to all TreeSummarizedExperiment objects in the returned list instead of manipulating the existing rowTree.
if( class(x) == "SimpleList" ){
x <- SimpleList(lapply(x, addTaxonomyTree))
}
Thanks!
Can you have a quick try to see if you can fix the bug?
See https://github.com/microbiome/mia/pull/487/files#diff-86741c1d199054c30214beca158175167a1913b4e6a77e89f89b625713d04c5bR296-R298
I believe you can just
replace addTaxonomyTree with .agglomerate_trees.
Add some tests to the unit tests. Check that these elements in the returned list have length(tree$tip.labels) == nrow(tse) when update_rowTree = TRUE.
Ask if unclear, do not spend too much time if it seems that the problem is harder to fix than this
I just made a commit referencing this issue.
I replaced addTaxonomyTree with .agglomerate_trees and added tests. The tests all passed but for each TreeSummarizedExperiment in the returned list, I received the following warning from the agglomerate_tree function :
'keep.nodes' does specify all the tips from 'tree'. The tree is not agglomerated..
When I ran the test without replacing addTaxonomyTree, some tests failed so I think the new function works but I don't know if the tests I added are relevant.
|
gharchive/issue
| 2024-02-23T07:43:27 |
2025-04-01T04:35:01.431038
|
{
"authors": [
"TuomasBorman",
"thpralas"
],
"repo": "microbiome/mia",
"url": "https://github.com/microbiome/mia/issues/494",
"license": "Artistic-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
91027334
|
Is this working?
Hi! I have OmniROM 4.4.4. I removed ugUnifiedNlp, restarted, installed this GmsCore, installed it as a system app (using Titanium Backup), restarted again, and enabled GCM from the ug settings. The network location part is working, but applications which require GCM aren't working (for example, I'm trying out TextSecure and it refuses to register me, claiming I don't have Google Play services installed). Am I missing some crucial information here? Just to note, I do have Xposed installed.
Edit: I used apk from http://files.brnmod.rocks/apps/GmsCore/Latest/
Some logcat output:
u0_a69@p880:/ $ su
root@p880:/ # logcat | grep gms
W/PackageManager( 539): Not granting permission android.permission.INSTALL_LOCATION_PROVIDER to package com.google.android.gms (protectionLevel=18 flags=0x88be45)
W/PackageManager( 539): Unknown permission android.permission.ACCESS_COARSE_UPDATES in package com.google.android.gms
W/PackageManager( 539): Not granting permission android.permission.INSTALL_LOCATION_PROVIDER to package com.google.android.gms (protectionLevel=18 flags=0x88be45)
W/PackageManager( 539): Unknown permission android.permission.ACCESS_COARSE_UPDATES in package com.google.android.gms
I/ActivityManager( 539): START u0 {act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=com.google.android.gms/org.microg.gms.ui.SettingsActivity} from pid 1441
I/ActivityManager( 539): Killing 752:com.google.android.gms/u0a30 (adj 1): remove task
W/ActivityManager( 539): Scheduling restart of crashed service com.google.android.gms/org.microg.nlp.location.LocationServiceV2 in 1000ms
W/ActivityManager( 539): Scheduling restart of crashed service com.google.android.gms/org.microg.nlp.geocode.GeocodeServiceV1 in 11000ms
I/ActivityManager( 539): Start proc com.google.android.gms for service com.google.android.gms/org.microg.nlp.location.LocationServiceV2: pid=5404 uid=10030 gids={50030, 3003}
I/ActivityManager( 539): START u0 {cmp=com.google.android.gms/org.microg.nlp.ui.SettingsActivity} from pid 3920
I/ActivityManager( 539): START u0 {act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=com.google.android.gms/org.microg.gms.ui.SettingsActivity} from pid 1441
I/ActivityManager( 539): Killing 5404:com.google.android.gms/u0a30 (adj 1): remove task
W/ActivityManager( 539): Scheduling restart of crashed service com.google.android.gms/org.microg.nlp.location.LocationServiceV2 in 1000ms
W/ActivityManager( 539): Scheduling restart of crashed service com.google.android.gms/org.microg.nlp.geocode.GeocodeServiceV1 in 10999ms
I/ActivityManager( 539): Start proc com.google.android.gms for service com.google.android.gms/org.microg.nlp.location.LocationServiceV2: pid=6625 uid=10030 gids={50030, 3003}
W/ActivityManager( 539): Permission Denial: Accessing service ComponentInfo{com.google.android.gms/org.microg.gms.gcm.PushRegisterService} from pid=972, uid=10022 requires com.google.android.c2dm.permission.RECEIVE
^C
130|root@p880:/ #
Well, yeah, I installed FakeStore too, and now apps "see" GCM but still refuse to work. For example, TextSecure allows me to start the registration process, but it cannot finish it, telling me there is no network connection (no firewall). The application "Heavens-Above" crashes (it can work without Play Services, but it notifies about them missing). I don't know how else to test.
You need to reinstall applications after you installed GmsCore so they can be granted the required permission.
There seem to be issues with the device checkin implementation. Can you please start "logcat | grep GmsCheckin" and then open the dialer and dial "*#*#CHECKIN#*#*"? If you see a message like "W/GmsCheckinSvc( 5853): java.io.IOException: Internal server error." you're affected by the problem, and a lot of Google services will probably not work or might stop working in the future. I am working on a fix.
I even installed the real Play Store, and I managed to sign in. But it won't refresh applications; it says: no internet connection.
Here's the output of the checkin test:
u0_a69@p880:/ $ su
root@p880:/ # logcat | grep GmsCheckin
V/tinyclipboardmanager( 1810): modifyClip(null, logcat | grep GmsCheckin, 0)
D/GmsCheckinTrigger( 752): Trigger checkin: Intent { act=android.provider.Telephony.SECRET_CODE dat=android_secret_code://2432546 flg=0x10 cmp=com.google.android.gms/org.microg.gms.checkin.TriggerReceiver }
D/GmsCheckinClient( 752): -- Request --
D/GmsCheckinClient( 752): CheckinRequest{androidId=0, digest=1-da39a3ee5e6b4b0d3255bfef95601890afd80709, checkin=Checkin{build=Build{fingerprint=LG/omni_p880/p880:4.4.4/KTU84P/560:userdebug/test-keys, hardware=x3, brand=LG, radio=LGP880AT-00-V20b-EUR-XXX-JUL-17-2013, bootloader=unknown, time=1432136615, packageVersionCode=7099448, device=p880, sdkVersion=19, model=LG-P880, manufacturer=LGE, product=omni_p880, otaInstalled=false}, lastCheckinMs=0, event=[], stat=[], requestedGroup=[]}, macAddress=[], accountCookie=[[pejakm@gmail.com], DQAAAAsBAADcnfcKTLt72ZbbKPYct07z0tU8YSV9iXAG1CfEQDPynspdkbykXj5Gg0V_57U1JwuQGl1OtXO8KCCf8tLtp-FresOTa9jnQl6RfeG7PKKQkEMj6rOHxqtGhKdTp9C3vOTFG3BrNGbzMSY14f_NYQpBnCJVVGW2ygYQS3HfSxWOd9tIn25KNvtDYW9IKWFBbojaVHj-np5FOeKhejyBuHaNkEoAG2FF8CPtbH59rp0Ljr0dhlyjFa69yyVpds-l5LTqnBqvF3jdKGkNGyJDOIxtgq-qA8tjk5oXeO0IEXxGHzk52UnP2AUmRpg1LziiPqKYPdVlEwH2nC25NuSASDQgt_PFeJ3Tv8ZP8TnC-BWIPQ], version=3, otaCert=[], deviceConfiguration=DeviceConfig{touchScreen=3, keyboardType=1, navigation=1, screenLayout=268435554, hasHardKeyboard=false, hasFiveWayNavigation=false, densityDpi=320, glEsVersion=131072, sharedLibrary=[android.test.runner, javax.obex, com.android.nfc_extras, com.android.future.usb.accessory, com.android.location.provider, com.android.media.remotedisplay], availableFeature=[android.hardware.wifi, android.hardware.location.network, android.hardware.nfc, android.hardware.bluetooth_le, android.hardware.location, android.software.input_methods, android.hardware.sensor.gyroscope, android.hardware.screen.landscape, android.hardware.screen.portrait, android.hardware.wifi.direct, android.hardware.usb.accessory, android.hardware.camera.any, android.hardware.bluetooth, android.hardware.touchscreen.multitouch.distinct, android.hardware.microphone, android.hardware.sensor.light, android.hardware.camera.autofocus, android.software.live_wallpaper, android.software.app_widgets, android.hardware.camera.flash, android.hardware.telephony, android.software.sip, android.hardware.touchscreen.multitouch.jazzhand, android.hardware.touchscreen.multitouch, android.hardware.sensor.compass, android.hardware.faketouch, android.hardware.camera, android.software.home_screen, android.software.sip.voip, android.hardware.sensor.proximity, android.hardware.location.gps, android.hardware.telephony.gsm, android.software.device_admin, android.hardware.nfc.hce, android.hardware.camera.front, android.hardware.sensor.accelerometer, com.nxp.mifare, android.hardware.touchscreen], nativePlatform=[armeabi-v7a, armeabi], widthPixels=720, heightPixels=1280, locale=[, ca, da, fa, ja, ka, nb, de, ne, af, bg, th, fi, hi, si, vi, sk, uk, el, nl, pl, sl, tl, am, km, rm, in, mn, ko, lo, ro, ar, fr, hr, sr, tr, cs, es, ms, et, it, lt, pt, hu, ru, zu, lv, sv, iw, sw, hy, az, en_CA, fr_CA, lo_LA, uk_UA, en_GB, in_ID, et_EE, ka_GE, ar_EG, en_SG, km_KH, th_TH, fi_FI, sl_SI, zh_HK, si_LK, sk_SK, hy_AM, zh_CN, hi_IN, en_IN, mn_MN, vi_VN, ro_RO, ne_NP, hr_HR, sr_RS, en_US, es_US, lt_LT, pt_PT, en_AU, hu_HU, lv_LV, zh_TW, ms_MY, az_AZ, en_NZ, af_ZA, zu_ZA, nl_BE, fr_BE, de_DE, sv_SE, bg_BG, de_CH, rm_CH, fr_CH, it_CH, tl_PH, de_LI, da_DK, iw_IL, nl_NL, pl_PL, nb_NO, ja_JP, pt_BR, fr_FR, el_GR, fa_IR, ko_KR, tr_TR, ca_ES, es_ES, de_AT, am_ET, it_IT, ru_RU, cs_CZ, sw_TZ, en, bn_BD, ky_KG, mk_MK, ur_PK, my_MM, ta_IN, te_IN, ml_IN, kn_IN, mr_IN, gl_ES, eu_ES, is_IS, kk_KZ, uz_UZ], glExtension=[GL_EXT_bgra, GL_EXT_texture_compression_dxt1, GL_EXT_texture_compression_s3tc, GL_EXT_texture_format_BGRA8888, GL_EXT_unpack_subimage, 
GL_EXT_debug_marker, GL_EXT_debug_label, GL_NV_texture_npot_2D_mipmap, GL_OES_byte_coordinates, GL_OES_compressed_ETC1_RGB8_texture, GL_OES_compressed_paletted_texture, GL_OES_draw_texture, GL_OES_extended_matrix_palette, GL_OES_EGL_image, GL_OES_EGL_image_external, GL_OES_EGL_sync, GL_OES_fbo_render_mipmap, GL_OES_fixed_point, GL_OES_framebuffer_object, GL_OES_matrix_get, GL_OES_matrix_palette, GL_OES_point_size_array, GL_OES_point_sprite, GL_OES_query_matrix, GL_OES_read_format, GL_OES_rgb8_rgba8, GL_OES_single_
W/GmsCheckinSvc( 752): java.io.IOException: Internal server error.
W/GmsCheckinSvc( 752): at org.microg.gms.checkin.CheckinClient.request(CheckinClient.java:57)
W/GmsCheckinSvc( 752): at org.microg.gms.checkin.CheckinManager.checkin(CheckinManager.java:55)
W/GmsCheckinSvc( 752): at org.microg.gms.checkin.CheckinService.onHandleIntent(CheckinService.java:50)
W/GmsCheckinSvc( 752): at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:65)
W/GmsCheckinSvc( 752): at android.os.Handler.dispatchMessage(Handler.java:102)
W/GmsCheckinSvc( 752): at android.os.Looper.loop(Looper.java:136)
W/GmsCheckinSvc( 752): at android.os.HandlerThread.run(HandlerThread.java:61)
^C
130|root@p880:/ #
So I guess I'm affected by this bug. :(
Yes, Google changed something in their servers I need to adopt, all µg users will be affected as well sooner or later.
If I can help somehow, please, let me know.
This should be fixed in the latest version. Can you please try again with the latest build from http://files.brnmod.rocks/apps/GmsCore/Latest/ ?
Right away, will report back soon.
Whatever I try, I only get "This app requires Google Play services installed".
Can you update again? And remember to reboot after updating
Since commit https://github.com/microg/android_packages_apps_GmsCore/commit/cd9d38b70b39ccd3b7f739be04602188b6cf3fe7, gcm works just fine. TextSecure works like a charm on two CM 12.1 devices, one patched, one with the xposed module by thermatk.
Awesome. I'll close this one then.
After @dajoe's comment I realized that my current rom is not patched. I'm gonna try with the Xposed module and report back.
Sorry, can't use Xposed. Where can I find required patch?
It is in the root of the repo. https://github.com/microg/android_packages_apps_GmsCore/blob/master/android_frameworks_base%2BFAKE_PACKAGE_SIGNATURE.patch
Thanks!
@mar-v-in I finally got my hands on a patched ROM, and I cannot get this to work. Checkin does not report the previous error, I have installed FakeStore, installed microG to root, but I can't get TextSecure to work. Registration with the server succeeds, but after that it won't start, claiming there is no network connection. I'm not sure how to test other apps; Viber and WhatsApp seem to work like they used to (I reinstalled them).
@pejakm can you provide some logs? Have you tried following the instructions at https://github.com/thermatk/FakeGApps/blob/master/README.md?
From my experience, if registration with the Textsecure Server works, this is an indication that gcm is setup correctly. Maybe this is an issue with Textsecure itself.
I use the app Push Notification Tester from play store to test gcm functionality.
@bonanza123 Like I've said, patched rom, no need for Xposed modules. Compiled and installed FakeStore from microg repo, but I didn't try placing it in system partition.
@dajoe Thanks, I'm gonna try with that.
I'll post logs as soon as I "catch" anything useful.
The second try with TextSecure was successful. :) I upgraded it, and now it works! This is awesome!
I added my gmail account and installed YouTube app, and everything works!
|
gharchive/issue
| 2015-06-25T17:31:25 |
2025-04-01T04:35:01.455510
|
{
"authors": [
"bonanza123",
"dajoe",
"mar-v-in",
"nu11point3r",
"pejakm"
],
"repo": "microg/android_packages_apps_GmsCore",
"url": "https://github.com/microg/android_packages_apps_GmsCore/issues/16",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2142197615
|
Add StreamingFileUploadRequest
This PR contains an implementation of StreamingFileUploadRequest and at the same time fixes #423, where the Azure implementation calls getInputStream of the StreamingFileUploadRequest twice.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
johanraedemaeker seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.
Thanks @johanra Can you sign the CLA?
@sdelamo CLA is signed now
Hi Guys,
Wouldn't it make sense to extend this PR to allow streaming uploads to other cloud providers too? Right now it adds the abstraction, but implementation is limited to Azure only.
Hi Guys, Wouldn't it make sense to extend this PR to allow streaming uploads to other cloud providers too? Right now it adds the abstraction, but implementation is limited to Azure only.
I have the same question, specifically so we can use with AWS
Hi!
Any chance of closing this ticket any time soon? We're waiting for the feature and need to decide whether to wait or work around this limitation.
@johanra Thanks for the contribution
@wgruszczyk @marcinfigiel @scprek I am going to merge this and it will be part of 4.7.0. We can add support to other cloud vendors later.
|
gharchive/pull-request
| 2024-02-19T11:54:56 |
2025-04-01T04:35:01.524746
|
{
"authors": [
"CLAassistant",
"johanra",
"marcinfigiel",
"scprek",
"sdelamo",
"wgruszczyk"
],
"repo": "micronaut-projects/micronaut-object-storage",
"url": "https://github.com/micronaut-projects/micronaut-object-storage/pull/424",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
840389744
|
[Trademarks & Specifications] Compatible Products Page
Approach for listing compatible implementations of the specifications
1 page
[microprofile-wg] [BALLOT] Replace MicroProfile Compatibility Logo with.pdf
MPWG drops the compatible program.
|
gharchive/issue
| 2021-03-25T00:18:18 |
2025-04-01T04:35:01.529789
|
{
"authors": [
"aeiras"
],
"repo": "microprofile/microprofile-wg",
"url": "https://github.com/microprofile/microprofile-wg/issues/60",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1841137273
|
Fixes reference to ppTenantId for PullPowerPlatformChanges
Same bug as in #10 but for PullPowerPlatformChanges
has been fixed in the AL-Go repo
|
gharchive/pull-request
| 2023-08-08T11:38:32 |
2025-04-01T04:35:01.571649
|
{
"authors": [
"freddydk",
"y0m0"
],
"repo": "microsoft/AL-Go-Actions",
"url": "https://github.com/microsoft/AL-Go-Actions/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
807072770
|
[Event Request] codeunit 6620 "Copy Document Mgt."-OnBeforeInvoice
Hi, would it be possible to get an event OnBeforeInvoice in codeunit 6620 "Copy Document Mgt.", like this:
var
OldSalesHeader: Record "Sales Header";
IsHandled: Boolean;
IsHandled:=false;
OnBeforeInvoice(IsHandled);
if not IsHandled then begin
Invoice := false;
Ship := false;
end;
[IntegrationEvent(false, false)]
local procedure OnBeforeInvoice(var IsHandled: Boolean)
begin
end;
Thanks!
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. We will update this issue with information about availability. Please do not reply to this, as we do not monitor closed issues. If you have follow-up questions or requests, please create a new issue where you reference this one.
|
gharchive/issue
| 2021-02-12T09:16:27 |
2025-04-01T04:35:01.573409
|
{
"authors": [
"JesperSchulz",
"Pepito-lab"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/11062",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
831640308
|
[Event Request] Table 36 "Sales Header" - OnBeforeUpdateSalesLinesByFieldNo
Hello. In table 36 Sales Header, in the procedure UpdateSalesLinesByFieldNo, I would like to use the event publisher OnBeforeUpdateSalesLinesByFieldNo to add my own code, but I need the parameter xRec in this event publisher. Like this:
procedure UpdateSalesLinesByFieldNo(ChangedFieldNo: Integer; AskQuestion: Boolean; CurrFieldNo : Integer)
begin
IsHandled := false;
OnBeforeUpdateSalesLinesByFieldNo(Rec, ChangedFieldNo, AskQuestion, IsHandled, xRec);
if IsHandled then
exit;
With this event integration :
[IntegrationEvent(false, false)]
local procedure OnBeforeUpdateSalesLinesByFieldNo(var SalesHeader: Record "Sales Header"; ChangedFieldNo: Integer; var AskQuestion: Boolean; var IsHandled: Boolean; xSalesHeader: Record "Sales Header")
begin
end;
Do you think this could be possible?
Thank you for your work.
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. We will update this issue with information about availability. Please do not reply to this, as we do not monitor closed issues. If you have follow-up questions or requests, please create a new issue where you reference this one.
|
gharchive/issue
| 2021-03-15T09:58:38 |
2025-04-01T04:35:01.575735
|
{
"authors": [
"JesperSchulz",
"Magicrevette"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/11668",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1106861170
|
[Event Request] OnBeforeCheckAutosearchForDuplicates in Codeunit 5060 DuplicateManagement
Hello,
please add an event OnBeforeCheckAutosearchForDuplicates in Codeunit 5060 DuplicateManagement. It will allow calling the duplicate check procedure independently of the "Autosearch for Duplicates" setting in Marketing Setup.
procedure DuplicateExist(Cont: Record Contact): Boolean
var
DuplCont: Record "Contact Duplicate";
IsHandled: Boolean;
begin
IsHandled := false;
OnBeforeCheckAutosearchForDuplicates(Cont, IsHandled);
if not IsHandled then begin
RMSetup.Get();
if not RMSetup."Autosearch for Duplicates" then
exit(false);
end;
DuplCont.FilterGroup(-1);
DuplCont.SetRange("Contact No.", Cont."No.");
DuplCont.SetRange("Duplicate Contact No.", Cont."No.");
DuplCont.FilterGroup(0);
DuplCont.SetRange("Separate Contacts", false);
exit(DuplCont.Find('=<>'));
end;
[IntegrationEvent(false, false)]
local procedure OnBeforeCheckAutosearchForDuplicates(Contact: Record Contact; var IsHandled: Boolean)
begin
end;
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. Please do not reply to this, as we do not monitor closed issues. If you have follow-up questions or requests, please create a new issue where you reference this one.
|
gharchive/issue
| 2022-01-18T12:38:50 |
2025-04-01T04:35:01.577546
|
{
"authors": [
"HaukeSchuldt",
"JesperSchulz"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/15918",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1452974043
|
[Event Request] Page 5708 "Get Shipment Lines" - Fix wrong publisher
It seems that the previous request introduced a problem:
https://github.com/microsoft/ALAppExtensions/issues/16605
Handled pattern should return a Result.
Current state:
trigger OnQueryClosePage(CloseAction: Action) Result: Boolean
var
IsHandled: Boolean;
begin
IsHandled := false;
OnBeforeOnQueryClosePage(CloseAction, Result, IsHandled, Rec);
if IsHandled then
exit;
if CloseAction in [ACTION::OK, ACTION::LookupOK] then
CreateLines();
end;
Desired state:
trigger OnQueryClosePage(CloseAction: Action) Result: Boolean
var
IsHandled: Boolean;
begin
IsHandled := false;
OnBeforeOnQueryClosePage(CloseAction, Result, IsHandled, Rec);
if IsHandled then
exit(Result);
if CloseAction in [ACTION::OK, ACTION::LookupOK] then
CreateLines();
end;
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. Please do not reply to this, as we do not monitor closed issues. If you have follow-up questions or requests, please create a new issue where you reference this one.
Build ID: 49620.
|
gharchive/issue
| 2022-11-17T09:07:02 |
2025-04-01T04:35:01.579957
|
{
"authors": [
"JesperSchulz",
"miljance"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/21036",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1564466460
|
[Event Request] Report 12176 "Suggest Customer Bills" - Function "CreateLine" - New Event OnBeforeInsertCustomerBillLine
Hello, we need the ability to extend the function "CreateLine" in report 12176 "Suggest Customer Bills" (IT localization) as follows:
procedure CreateLine()
var
LookupCustBillLine: Record "Customer Bill Line";
SEPADirectDebitMandate: Record "SEPA Direct Debit Mandate";
begin
LookupCustBillLine.SetCurrentKey("Customer Entry No.");
LookupCustBillLine.SetRange("Customer Entry No.", CustLedgEntry."Entry No.");
PaymentsCalc();
if not LookupCustBillLine.FindFirst() then
if (MaxAmount = 0) or
(TotalPayments + CustLedgEntry."Remaining Amount" <= MaxAmount)
then begin
CustomerBillLine.Init();
CustomerBillLine."Customer Bill No." := CustBillHeader."No.";
CustomerBillLine."Line No." := NextLineNo;
NextLineNo := NextLineNo + 10000;
CustomerBillLine."Customer No." := CustLedgEntry."Customer No.";
CustomerBillLine."Temporary Cust. Bill No." := CustLedgEntry."Bank Receipt Temp. No.";
CustomerBillLine."Document Type" := CustLedgEntry."Document Type";
CustomerBillLine."Document No." := CustLedgEntry."Document No.";
CustomerBillLine."Document Occurrence" := CustLedgEntry."Document Occurrence";
CustomerBillLine."Document Date" := CustLedgEntry."Document Date";
CustomerBillLine.Amount := CustLedgEntry."Remaining Amount";
CustomerBillLine."Due Date" := CustLedgEntry."Due Date";
CustomerBillLine."Cumulative Bank Receipts" := CustLedgEntry."Cumulative Bank Receipts";
if SEPADirectDebitMandate.Get(CustLedgEntry."Direct Debit Mandate ID") then
CustomerBillLine."Customer Bank Acc. No." := SEPADirectDebitMandate."Customer Bank Account Code"
else
CustomerBillLine."Customer Bank Acc. No." := CustLedgEntry."Recipient Bank Account";
CustomerBillLine."Customer Entry No." := CustLedgEntry."Entry No.";
CustomerBillLine."Direct Debit Mandate ID" := CustLedgEntry."Direct Debit Mandate ID";
//new-s
OnBeforeInsertCustomerBillLine(CustomerBillLine, CustLedgEntry, CustBillHeader);
//new-e
CustomerBillLine.Insert();
if MaxAmount > 0 then
PaymentsCalc();
end;
end;
[IntegrationEvent(false, false)]
local procedure OnBeforeInsertCustomerBillLine(var CustomerBillLine: Record "Customer Bill Line"; CustLedgEntry: Record "Cust. Ledger Entry"; CustomerBillHeader: Record "Customer Bill Header")
begin
end;
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. Please do not reply to this, as we do not monitor closed issues. If you have follow-up questions or requests, please create a new issue where you reference this one.
Build ID: 53382.
|
gharchive/issue
| 2023-01-31T14:53:43 |
2025-04-01T04:35:01.582850
|
{
"authors": [
"FabioC34",
"JesperSchulz"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/21960",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1713271182
|
[Event Request] - OnBeforeGetLocation_CheckWarehouse table 5902 "Service Line"
Dear ALAppExtensions Team:
Please consider adding a new event publisher in table 5902 "Service Line"
NEW PUBLISHER:
[IntegrationEvent(false, false)]
local procedure OnBeforeGetLocation_CheckWarehouse(var ServiceLine: Record "Service Line"; var IsHandled: Boolean)
begin
end;
The code in procedure "CheckWarehouse" in table 5902 "Service Line" would be modified as follows:
OLD CODE:
local procedure CheckWarehouse()
var
Location2: Record Location;
WhseSetup: Record "Warehouse Setup";
ShowDialog: Option " ",Message,Error;
DialogText: Text[100];
begin
GetLocation("Location Code");
if "Location Code" = '' then begin
.
.
.
NEW CODE:
local procedure CheckWarehouse()
var
Location2: Record Location;
WhseSetup: Record "Warehouse Setup";
ShowDialog: Option " ",Message,Error;
DialogText: Text[100];
IsHandled: Boolean;
begin
IsHandled := false;
OnBeforeGetLocation_CheckWarehouse(Rec, IsHandled);
if IsHandled then
exit;
GetLocation("Location Code");
if "Location Code" = '' then begin
.
.
.
Thanks in advance!
Availability update: We will publish a fix for this issue in the next update for release 22.
Build ID to track: 56846.
|
gharchive/issue
| 2023-05-17T07:11:07 |
2025-04-01T04:35:01.587393
|
{
"authors": [
"JesperSchulz",
"mcarmenvalera"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/23430",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2611492680
|
Get Source of Tenant Media Table
Describe the request
Hello,
We currently have a Tenant Media table with a total size of ~78 gigabytes. We would like to know where this data is coming from.
We created a solution for this in which we went through every field whose field type was either Media or MediaSet;
however, we only get ~46 gigabytes back in our calculation.
At this point we have tried many options, like finding the orphans of these two tables, among other things.
What I would like to know is whether it is possible to use the Media Id of a record in the Tenant Media table to find the record it is used in (e.g. an image on an item). As far as I know this is not possible.
Any other suggestions or tips are welcome
Additional context
N/A
Sorry, we'd like to help, but this just isn't the right team to ask. Try posting your question to the following resources instead:
Business Central Community
mibuso forum
Viva Engage
This GitHub repository handles issues with extending published apps, and extensibility in general, for the latest release of Business Central.
If you're running out of DB space, this would also warrant a support case of course. With a support case, we could look into the specifics of your environment and help you. Without access to your environment, it's hard to guess.
|
gharchive/issue
| 2024-10-24T12:51:08 |
2025-04-01T04:35:01.591290
|
{
"authors": [
"JesperSchulz",
"Kenozakesy"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/27534",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2621552893
|
[Event Request] codeunit 7110 "Analysis Report Management" - OnBeforeCalcSalesAmount
Describe the request
Could you please add the event OnBeforeCalcSalesAmount in codeunit 7110 "Analysis Report Management"?
local procedure CalcSalesAmount(var ItemStatisticsBuf: Record "Item Statistics Buffer"; Invoiced: Boolean): Decimal
var
IsHandled: Boolean;
Result: Decimal;
begin
OnBeforeCalcSalesAmount(ItemStatisticsBuf, Invoiced, IsHandled, Result);
if IsHandled then
exit(Result);
[...]
end;
[IntegrationEvent(false, false)]
local procedure OnBeforeCalcSalesAmount(var ItemStatisticsBuf: Record "Item Statistics Buffer"; Invoiced: Boolean; var IsHandled: Boolean; var Result: Decimal)
begin
end;
Additional context
I would like to calculate the "Sales Amount (Actual)" filtering the Value Entry by "Reason Code".
Internal work item: AB#556645
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. Please do not reply to this, as we do not monitor closed issues. If you have follow-up questions or requests, please create a new issue where you reference this one.
Build ID: 26527.
|
gharchive/issue
| 2024-10-29T15:10:48 |
2025-04-01T04:35:01.593983
|
{
"authors": [
"JesperSchulz",
"angelocitro"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/27558",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
491017515
|
Expose procedure in Table "Option Lookup Buffer".FormatOption
Hi,
Could you please expose "Option Lookup Buffer".FormatOption? We need it for extension development.
Thanks!
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. We will update this issue with information about availability.
|
gharchive/issue
| 2019-09-09T10:38:42 |
2025-04-01T04:35:01.595618
|
{
"authors": [
"bc-ghost",
"radovandokic"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/4162",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
510693126
|
[Event Extension Request] Codeunit 12 - OnCustUnrealizedVATOnBeforeInitGLEntryVAT
Please extend the integration event OnCustUnrealizedVATOnBeforeInitGLEntryVAT in Codeunit 12.
[Integration]
- LOCAL PROCEDURE OnCustUnrealizedVATOnBeforeInitGLEntryVAT@236(VAR GenJournalLine@1000 : Record 81;VAR VATEntry@1001 : Record 254;VAR VATAmount@1002 : Decimal;VAR VATBase@1003 : Decimal;VAR VATAmountAddCurr@1004 : Decimal;VAR VATBaseAddCurr@1005 : Decimal;VAR IsHandled@1006 : Boolean);
+ LOCAL PROCEDURE OnCustUnrealizedVATOnBeforeInitGLEntryVAT@236(VAR GenJournalLine@1000 : Record 81;VAR VATEntry@1001 : Record 254;VAR VATAmount@1002 : Decimal;VAR VATBase@1003 : Decimal;VAR VATAmountAddCurr@1004 : Decimal;VAR VATBaseAddCurr@1005 : Decimal;VAR SalesVATUnrealAccount@1010 : Code[20];VAR IsHandled@1006 : Boolean);
BEGIN
END;
Call for function:
LOCAL PROCEDURE CustUnrealizedVAT@16(GenJnlLine@1015 : Record 81;VAR CustLedgEntry2@1000 : Record 21;SettledAmount@1001 : Decimal);
VAR
[...]
BEGIN
IsHandled := FALSE;
OnBeforeCustUnrealizedVAT(GenJnlLine,CustLedgEntry2,SettledAmount,IsHandled);
IF IsHandled THEN
EXIT;
[...]
IF VATEntry2.FINDSET THEN BEGIN
LastConnectionNo := 0;
REPEAT
[...]
IF VATPart > 0 THEN BEGIN
[...]
IsHandled := FALSE;
- OnCustUnrealizedVATOnBeforeInitGLEntryVAT(GenJnlLine,VATEntry2,VATAmount,VATBase,VATAmountAddCurr,VATBaseAddCurr,IsHandled);
+ OnCustUnrealizedVATOnBeforeInitGLEntryVAT(GenJnlLine,VATEntry2,VATAmount,VATBase,VATAmountAddCurr,VATBaseAddCurr,SalesVATUnrealAccount,IsHandled);
IF NOT IsHandled THEN
InitGLEntryVAT(GenJnlLine,SalesVATUnrealAccount,SalesVATAccount,-VATAmount,-VATAmountAddCurr,FALSE);
GLEntryNo :=
InitGLEntryVATCopy(GenJnlLine,SalesVATAccount,SalesVATUnrealAccount,VATAmount,VATAmountAddCurr,VATEntry2);
PostUnrealVATEntry(GenJnlLine,VATEntry2,VATAmount,VATBase,VATAmountAddCurr,VATBaseAddCurr,GLEntryNo);
END;
UNTIL VATEntry2.NEXT = 0;
InsertSummarizedVAT(GenJnlLine);
END;
END;
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. We will update this issue with information about availability.
|
gharchive/issue
| 2019-10-22T14:24:45 |
2025-04-01T04:35:01.597891
|
{
"authors": [
"BE-Ouaali",
"bc-ghost"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/4611",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
710157637
|
[Event Request] OnBeforeInsertTransferHeader in report "Create Subcontr. Return Order"
Hello, please add the following event in the "Create Subcontr. Return Order" report (Italian localization).
[IntegrationEvent(false, false)]
local procedure OnBeforeInsertTransferHeader(ToLocationCode: Code[10]; var TransferHeader: Record "Transfer Header"; Vendor: Record Vendor; var IsHandled: Boolean)
begin
end;
It should be raised in the InsertTransferHeader procedure, as follows
procedure InsertTransferHeader(ToLocationCode: Code[10])
var
IsHandled: Boolean;
begin
OnBeforeInsertTransferHeader(ToLocationCode, TransferHeader, Vendor, IsHandled);
if IsHandled then
exit;
[ ... ]
end;
Thank you.
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. We will update this issue with information about availability.
|
gharchive/issue
| 2020-09-28T10:49:37 |
2025-04-01T04:35:01.599704
|
{
"authors": [
"bc-ghost",
"ghost"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/8904",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1905778586
|
{Freeway-Bug}[Adaptive Cards- Home- Inputs]: Talkback not announcing role and position of the menu/list items(Red, Green, blue).
Target Platforms
Android
SDK Version
2023.03.31.1
Application Name
Adaptive Card
Problem Description
Test Environment
Device: S21+ 5G
Screen Reader: Talkback
AdaptiveCard Android App version: 2023.03.31.1
Repro Steps:
Turn on Talkback.
Open Adaptive Cards Visualizer Application.
Home Page will opened.
Now Navigate to 'Sample card' button and select it.
Now navigate to "choose sample" expands/collapse control and select "Inputs" list item.
Now Navigate to Visual tab and select it.
Now navigate to Inputs card using swipe gesture and expand "What color do you want?" dropdown button.
Now Navigate to menu items and observe the issue.
Actual Result:
Talkback is not announcing the role and position of the menu/list items (Red, Green, Blue). Talkback announces something like "ticked, Red in list".
Expected Result:
Talkback should announce the role and the position of the menu/list items (Red, Green, Blue). Talkback should announce something like "ticked, Red 1 of 3, list/menu item".
User Experience:
Knowing the role for a control helps users know what will happen when that object has focus and is selected, as well as what keyboard shortcuts to use when interacting with the control. When the component’s state/value/property is not defined, assistive technology users are often blocked from using the control or knowing what to expect when interacting with the component.
WCAG Reference:
https://www.w3.org/WAI/WCAG21/Understanding/name-role-value
"Have feedback to share on Bugs ? Please help fill Trusted Tester Bug Feedback (office.com)
Screenshots
https://user-images.githubusercontent.com/89897257/230354375-911c9118-1a15-4e25-b17b-df11759a8e3b.mp4
Card JSON
NA
Sample Code Language
No response
Sample Code
No response
@Bhaskar301101 changes have been merged. Please test the demo app.
Closing this issue here; as of now the issue still repros and is Sev 3, so we have created a new bug:
https://github.com/microsoft/AdaptiveCards-Mobile/issues/89
|
gharchive/issue
| 2023-04-06T10:43:00 |
2025-04-01T04:35:01.608783
|
{
"authors": [
"Bhaskar301101",
"ardlank",
"vagpt"
],
"repo": "microsoft/AdaptiveCards-Mobile",
"url": "https://github.com/microsoft/AdaptiveCards-Mobile/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1221637321
|
[Adaptive Cards Main Body]: Incorrect heading level is defined for 'MADE FOR' heading as 'H2' on Adaptive Cards page.
https://microsoft.visualstudio.com/OS/_workitems/edit/33897073
Closing this issue as per the comment mentioned in the original bug.
|
gharchive/issue
| 2022-04-29T22:41:39 |
2025-04-01T04:35:01.610335
|
{
"authors": [
"carlos-zamora",
"vagpt"
],
"repo": "microsoft/AdaptiveCards",
"url": "https://github.com/microsoft/AdaptiveCards/issues/7247",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1898865243
|
[BUG]
https://github.com/microsoft/ApplicationInsights-JS/blob/main/extensions/applicationinsights-perfmarkmeasure-js/README.md
The npm install command should be:
npm install --save @microsoft/applicationinsights-perfmarkmeasure-js @microsoft/applicationinsights-web
instead of:
npm install --save @microsoft/applicationinsights-applicationinsights-perfmarkmeasure-js @microsoft/applicationinsights-web
57 error code E404
58 error 404 Not Found - GET https://registry.npmjs.org/@microsoft%2Fapplicationinsights-applicationinsights-perfmarkmeasure-js - Not found
59 error 404
60 error 404 '@microsoft/applicationinsights-applicationinsights-perfmarkmeasure-js@*' is not in this registry.
61 error 404
61 error 404 Note that you can also install from a
62 error 404 tarball, folder, http url, or git url.
63 verbose exit 1
64 timing npm Completed in 1410ms
65 verbose unfinished npm timer reify 1694803010221
66 verbose unfinished npm timer reify:loadTrees 1694803010226
67 verbose code 1
yikes! Nice catch
|
gharchive/issue
| 2023-09-15T18:01:36 |
2025-04-01T04:35:01.615870
|
{
"authors": [
"MSNev",
"zhucalvin"
],
"repo": "microsoft/ApplicationInsights-JS",
"url": "https://github.com/microsoft/ApplicationInsights-JS/issues/2150",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
548230670
|
Not possible to use public types in Common (shared classes) outside of Web SDK
If you are reporting bug/issue, please provide detailed Repro instructions.
Repro Steps
Try to use StringUtilities in AspNetCore SDK
Actual Behavior
StringUtilities is defined in PerformanceCollector, DependencyCollector, and WindowsServer.
So the build fails, not being able to resolve the type.
The typical way to work around it is to add an alias on the assembly and use extern alias in the code to force a specific assembly usage.
However, it does not work well with NuGet packages (https://github.com/NuGet/Home/issues/4989) and is broken with VS2017/fresh msbuild.
So my suggestions are:
never declare public types in common shared items
move common stuff to base SDK whenever possible
Just posting an update.
This continues to be an annoyance. I spoke to Sergey a couple of months ago and we agreed that common helper methods could be added to the base SDK.
I'm going to close this issue because Liudmila's suggestions have become our best practices.
|
gharchive/issue
| 2018-07-23T21:21:22 |
2025-04-01T04:35:01.619264
|
{
"authors": [
"TimothyMothra",
"lmolkova"
],
"repo": "microsoft/ApplicationInsights-dotnet",
"url": "https://github.com/microsoft/ApplicationInsights-dotnet/issues/1633",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
440514924
|
ADLS Gen2 "Folder Statistics" in Azure Storage Explorer
Is your feature request related to a problem? Please describe.
Can't show "Folder Statistics" in Azure Storage Explorer
Describe the solution you'd like
ADLS Gen2 "Folder Statistics" in Azure Storage Explorer
Describe alternatives you've considered
Get "Folder Statistics" in ACL, PowerShell or API.
Duplicate of #964
|
gharchive/issue
| 2019-05-06T01:40:44 |
2025-04-01T04:35:01.622722
|
{
"authors": [
"JackyChiou",
"MRayermannMSFT"
],
"repo": "microsoft/AzureStorageExplorer",
"url": "https://github.com/microsoft/AzureStorageExplorer/issues/1349",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
916333440
|
[Task] Refactor RCP to be generic and use internal API
Story: #118
Description
Refactor the current RCP to be generic (no delete, service and workspace specialization) and use the internal API rather than the DB directly to perform the deployment.
closed
|
gharchive/issue
| 2021-06-09T15:22:16 |
2025-04-01T04:35:01.624119
|
{
"authors": [
"jjcollinge"
],
"repo": "microsoft/AzureTRE",
"url": "https://github.com/microsoft/AzureTRE/issues/234",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
923494813
|
[Task] FW Add a service bus queue for shared services update
Feature: #23
Add a new service bus queue for the updates to the shared services (TF)
Update API and processor functions app settings with queue name
Not needed in this design as sessions are planned, and will apply to all resource updates - #1245
|
gharchive/issue
| 2021-06-17T06:09:13 |
2025-04-01T04:35:01.625426
|
{
"authors": [
"damoodamoo",
"deniscep"
],
"repo": "microsoft/AzureTRE",
"url": "https://github.com/microsoft/AzureTRE/issues/287",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1561926375
|
Consolidate http status assertion in E2Es
What is being addressed
On some of the E2E tests, an assertion was made on the HTTP status code without logging the message that came with it. This made it hard to understand the exact error.
How is this addressed
Build a single method to handle all HTTP status checks that also prints out the response text in case of an error.
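As a rough, illustrative sketch only (not the project's actual code; the helper name and response type are assumptions, matching any httpx/requests-style client), such a method could look like this in Python:
import logging
LOGGER = logging.getLogger(__name__)
def assert_status(response, expected_status: int, context: str = "") -> None:
    # Log the response body on mismatch so the real error message shows up
    # in the test output, then assert on the status code as before.
    if response.status_code != expected_status:
        LOGGER.error("%s: expected HTTP %s, got %s, body: %s",
                     context, expected_status, response.status_code, response.text)
    assert response.status_code == expected_status, (
        f"{context}: expected {expected_status}, got {response.status_code}: {response.text}")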
/test-extended
/test-force-approve
|
gharchive/pull-request
| 2023-01-30T07:58:36 |
2025-04-01T04:35:01.627206
|
{
"authors": [
"tamirkamara"
],
"repo": "microsoft/AzureTRE",
"url": "https://github.com/microsoft/AzureTRE/pull/3143",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
570459179
|
[Accessibility][Name, Role, Value - Create Node]: Role of Paste button is not correct in add node section.
User Experience
Users who rely on AT might get confused if the controls available in the same menu have different roles.
Test Environment
Browser: Microsoft Edge Canary Version: 82.0.425.0 (Official build)
OS build: 2004 (19564.1005)
Screen Reader: Narrator
URL: Bot Framework Composer
Prerequisite Here
Repro Steps:
Open the above-mentioned URL in Edge and set the zoom to 400%.
Select 'Design Flow' from left navigation.
Now, navigate to the flow chart.
Navigate to "+" button and activate it.
Navigate to 'Paste' available in the list and observe the issue.
Actual:
The narrator announces the role of 'Paste' as button, whereas all the other options have the role of menu item. The narrator announces the 2nd element, i.e. 'Send a response', as menu item 2 of 8.
Expected:
The role of 'Paste' should be the same as the others: 'Paste' should have the role of menu item, and the narrator should announce it as 'Paste' menu item 1 of 8.
MAS Impacted: MAS4.1.2
Attachment:
MAS4.1.2_Role of paste button.zip
.png uploaded for ease of access, but note that there is also a video in the .zip
addressed in https://github.com/microsoft/BotFramework-Composer/pull/2126
As cross-checked, the issue is fixed in the master version, so we are good to close the issue.
|
gharchive/issue
| 2020-02-25T10:16:36 |
2025-04-01T04:35:01.637463
|
{
"authors": [
"ashish315",
"corinagum",
"cwhitten"
],
"repo": "microsoft/BotFramework-Composer",
"url": "https://github.com/microsoft/BotFramework-Composer/issues/2076",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
605948562
|
Visual: Banner color update from black to blue
In the visual alignment review of the FUSE products and the deltas with the Fluent style guide, Composer did very well. There were a few areas where we're unnecessarily different:
Banner color (currently black), and
Icon in the upper left: we have our custom icon, while Fluent apps have the 9-grid which, on click, brings up a pane with 1-click access to related products.
Number 2 does not seem a reasonable one to introduce at this time.
Could we do 1?
@DesignPolice - could you include a pointer to the design?
Hey Chris here is the update on this...
We can just change the color in Phase 1 for now, but I have added Phase 2 for reference, for when we are ready to use the waffle menu, add Search, and take off the icon.
Figma Link
https://www.figma.com/file/LlTlh5vXwq91zjGnFtrUUR6l/Composer-master-design-spec-UI-library?node-id=1929%3A13
Screen Shot
Thanks @DesignPolice and @cwhitten - that would be great (Phase 1 for now).
addressed via https://github.com/microsoft/BotFramework-Composer/pull/2779
|
gharchive/issue
| 2020-04-24T00:04:30 |
2025-04-01T04:35:01.642012
|
{
"authors": [
"DesignPolice",
"cwhitten",
"mareekuh"
],
"repo": "microsoft/BotFramework-Composer",
"url": "https://github.com/microsoft/BotFramework-Composer/issues/2755",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
582111424
|
fix: add disk names
Description
The bug only exists on Windows because of hard disk partitions.
The parent of the root disk will be 'This PC'. Go to 'This PC' and it will show all existing disk names.
Note that this is a corner case; the Path module does not recognize 'This PC' as a valid path.
Task Item
Closes #2257
Screenshots
Looks way better now.
@liweitian please address conflicts
done
|
gharchive/pull-request
| 2020-03-16T08:49:33 |
2025-04-01T04:35:01.644937
|
{
"authors": [
"boydc2014",
"cwhitten",
"liweitian"
],
"repo": "microsoft/BotFramework-Composer",
"url": "https://github.com/microsoft/BotFramework-Composer/pull/2274",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
838837476
|
feat: Add skill configuration to bot project settings
Description
Add skill configuration to bot project settings
Task Item
#minor
Screenshots
@tdurnford lint is failing on:
Error: [typecheck:client] src/pages/botProject/skill-configuration/AllowedCallers.tsx(90,9): error TS2322: Type '(event: FormEvent<HTMLInputElement | HTMLTextAreaElement>, newValue: string) => void' is not assignable to type '(event: FormEvent<HTMLInputElement | HTMLTextAreaElement>, newValue?: string | undefined) => void'.
|
gharchive/pull-request
| 2021-03-23T15:18:27 |
2025-04-01T04:35:01.647114
|
{
"authors": [
"cwhitten",
"tdurnford"
],
"repo": "microsoft/BotFramework-Composer",
"url": "https://github.com/microsoft/BotFramework-Composer/pull/6522",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
501449335
|
the confirmed open URL is not opening in web browser
After I have configured the auth using GenericOauth2, when I click login the following message appears:
When I confirm, I expect a new webpage to be opened to sign in, something like this:
But what is happening is that a new Emulator window is opened, but with a missing controls screen:
Can anybody help with how to solve this issue?
Hi @mouhannadali ,
This might be due to the fact that in version 4.5.2 NodeJS integration is enabled in all Electron windows. Which means that if we open a new Emulator window such as the one in your example, and there are conflicts between JS libraries used in the window and globals used by the NodeJS runtime, then it can cause an error and a blank screen.
We had a recent PR (#1839) that addressed this, but it is not in a stable release yet. This change will be in the next stable release, v4.6.0, but if you don't want to wait, you can try using the latest nightly Emulator build.
Thanks, it is working now :)
|
gharchive/issue
| 2019-10-02T12:09:20 |
2025-04-01T04:35:01.651267
|
{
"authors": [
"mouhannadali",
"tonyanziano"
],
"repo": "microsoft/BotFramework-Emulator",
"url": "https://github.com/microsoft/BotFramework-Emulator/issues/1891",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
605917205
|
[Tracking] Add support for deleteActivity and updateActivity
Adding support for these methods in WebChat will bring support for these methods to the emulator, which is important to allow developers to truly test their applications which may rely on these methods!
According to @compulim this would involve a few lines here:
https://github.com/Microsoft/BotFramework-WebChat/blob/master/packages/core/src/reducers/activities.js#L83
Any news on this feature? We need this feature badly.
We also need to update activities in transcript. Any news on it?
Thanks
Any update on this feature???
Hello, thank you for the feedback. I conferred with our team on this and unfortunately we do not have a short term ETA for this feature, it is a large work item and needs to be evaluated against other priorities. We’ve captured this in our planning backlog however and will evaluate it during upcoming planning milestones. Our apologies for any inconvenience here.
Hi @dmvtech,
Can you please keep this issue open for tracking? At least we will get to know in which quarter this feature is expected to release.
regards,
sujit gaikwad
@compulim, looking at WebChat repo I don't see the support there. Can you confirm?
We do support it in some channels that allow it, for example Telegram.
Not supported.
Please add support for updating activities in the bot emulator.
For me, I need this functionality for removing an activity if it contains PII.
Definitely won't work for everybody, but we ended up sending our own custom event activity to tell our custom WebChat implementation to hide "deleted" activities.
I would like this functionality to delete sensitive data after a short period of time.
Any update on this?
Any update? This would be very useful for an LLM implementation, updating its streamed message response via WebChat.
Any update on this?
|
gharchive/issue
| 2018-11-21T21:54:17 |
2025-04-01T04:35:01.656884
|
{
"authors": [
"AidanStrong",
"BeeMaia",
"Fulll3",
"Godrules500",
"JordyGit",
"MichaelHaydenAtMicrosoft",
"ShalithaCell",
"Sujit-Gaikwad",
"arturl",
"benbrown",
"jayadrathamondal",
"jrpavoncello",
"lordscarlet"
],
"repo": "microsoft/BotFramework-Services",
"url": "https://github.com/microsoft/BotFramework-Services/issues/205",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
569917562
|
Normal https://webchat.botframework.com stopped working in IE 10. syntax error webchat-es5.js
I am using an SDK v4 bot embedded in a SharePoint 2016 page using the iframe webchat https://webchat.botframework.com/embed/.......... URL. It stopped working in IE 10. I remember it worked 3 weeks ago. Please help, as we have put the bot in production.
Are there errors in the browsers developer tools console? If so, can you please share those?
The error was one line.
Syntax error at webchat-es5.js
Please help. I checked the below code too. It still did not work. My production bot is impacted. I want it to work in IE 10.
Running into the same issue. Getting similar syntax error / exception in console logs as nvikey. Please help as this is impacting production.
I got to know something from F12 -> Network in IE browser.
All URLs from botframework.com, like conversations, activity, JS and CSS files, are getting redirected to something called gateway.zscloud.net. I don't know what it is or whether it is related to the bot.
If it is some proxy, does botframework.com support requests from such proxies?
Webchat is not supported with IE10. Per Bill's answer here, you can try hosting your own build of webchat (es5) to see if you can remedy/work around the issue. Additionally, you can message @p-nagpal to put your bot in as an exclusion to see if that would remedy the issue.
Does it work without issue in IE11?
Yes, webchat iframe works without issue in IE 11.
Other channels like the Direct Line channel do not work for us, because conversation and other CDN files from the botframework domain are not working with the Zscaler proxy in our organization. Do botframework URLs support the Zscaler proxy?
There are no specific implementations around Zscaler. You can try hosting your own webchat instead of using the CDN.
As IE10 is not supported, I am closing this issue.
|
gharchive/issue
| 2020-02-24T15:07:37 |
2025-04-01T04:35:01.664543
|
{
"authors": [
"dmvtech",
"ksuchy",
"nvikey"
],
"repo": "microsoft/BotFramework-WebChat",
"url": "https://github.com/microsoft/BotFramework-WebChat/issues/2940",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1667126856
|
[NODEJS]: Update nodejs to 18.9.0
Merge Checklist
All boxes should be checked before merging the PR (just tick any boxes which don't apply to this PR)
[x] The toolchain has been rebuilt successfully (or no changes were made to it)
[x] The toolchain/worker package manifests are up-to-date
[x] Any updated packages successfully build (or no packages were changed)
[x] Packages depending on static components modified in this PR (Golang, *-static subpackages, etc.) have had their Release tag incremented.
[x] Package tests (%check section) have been verified with RUN_CHECK=y for existing SPEC files, or added to new SPEC files
[x] All package sources are available
[x] cgmanifest files are up-to-date and sorted (./cgmanifest.json, ./toolkit/scripts/toolchain/cgmanifest.json, .github/workflows/cgmanifest.json)
[x] LICENSE-MAP files are up-to-date (./SPECS/LICENSES-AND-NOTICES/data/licenses.json, ./SPECS/LICENSES-AND-NOTICES/LICENSES-MAP.md, ./SPECS/LICENSES-AND-NOTICES/LICENSE-EXCEPTIONS.PHOTON)
[x] All source files have up-to-date hashes in the *.signatures.json files
[x] sudo make go-tidy-all and sudo make go-test-coverage pass
[x] Documentation has been updated to match any changes to the build system
[x] Ready to merge
Summary
What does the PR accomplish, why was it needed?
Update nodejs to 18.9.0
Change Log
Change: Update nodejs to 18.9.0
Does this affect the toolchain?
YES/NO
NO
Test Methodology
Pipeline build id: 344350
Pipeline build id: 344353
It looks like prometheus and reaper both have a BuildRequires on nodejs. Can you please include these packages in your buddy build to ensure they still build properly.
Tested those packages and they passed.
|
gharchive/pull-request
| 2023-04-13T21:05:35 |
2025-04-01T04:35:01.673143
|
{
"authors": [
"anphel31",
"rikenm1"
],
"repo": "microsoft/CBL-Mariner",
"url": "https://github.com/microsoft/CBL-Mariner/pull/5295",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1934166196
|
ValueError: mutable default <class 'cyberbattle.simulation.model.FirewallConfiguration'> for field firewall is not allowed: use default_factory
I am getting this issue when I try to clone the repo and run the test with the following command: "python -m cyberbattle.agents.baseline.run --training_episode_count 5 --eval_episode_count 3 --iteration_count 100 --rewardplot_width 80 --chain_size=4 --ownership_goal 0.2"
Issue:
File "/home/landon/miniconda3/envs/cybersim/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'cyberbattle.simulation.model.FirewallConfiguration'> for field firewall is not allowed: use default_factory
Can someone provide the resolution for this?
I have the same issue.
Quick fix is to patch the env.yaml file to use python=3.9 instead of >=3.9. Fixed in PR #113
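For context, a minimal sketch of what Python 3.11 is objecting to and the default_factory pattern it asks for (the NodeInfo class name here is only an assumption based on the error message, not copied from the repo):
from dataclasses import dataclass, field

@dataclass
class FirewallConfiguration:
    incoming_rules: list = field(default_factory=list)

@dataclass
class NodeInfo:
    # On Python 3.11, `firewall: FirewallConfiguration = FirewallConfiguration()`
    # raises the ValueError above because the default instance is mutable/unhashable;
    # a default_factory constructs a fresh instance per object instead.
    firewall: FirewallConfiguration = field(default_factory=FirewallConfiguration)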
Duplicate of #111
|
gharchive/issue
| 2023-10-10T02:17:21 |
2025-04-01T04:35:01.678677
|
{
"authors": [
"IpadLi",
"blumu",
"virenderdhiman"
],
"repo": "microsoft/CyberBattleSim",
"url": "https://github.com/microsoft/CyberBattleSim/issues/112",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1601201752
|
better eval sampler for val or test dataset
uses DistributedSampler instead of SequentialSampler for validation sets to avoid duplicating the datasets on the GPUs
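As a minimal illustrative sketch (a hypothetical helper, not the code changed in this PR), the idea is to shard the validation set across ranks with torch's DistributedSampler so each GPU evaluates only its own slice:
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def build_eval_dataloader(dataset, batch_size, world_size, rank):
    # Each rank gets a distinct shard of the validation set instead of the
    # whole dataset being evaluated redundantly on every GPU.
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank,
                                 shuffle=False, drop_last=False)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)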
@conglongli looks like the nv-transformers-v100 tests are failing. Not sure if this is related to the PR.
Can you comment on that?
Seems to be related to #2909. Should be fixed soon.
|
gharchive/pull-request
| 2023-02-27T13:34:40 |
2025-04-01T04:35:01.681532
|
{
"authors": [
"mayank31398",
"tjruwase"
],
"repo": "microsoft/DeepSpeed",
"url": "https://github.com/microsoft/DeepSpeed/pull/2907",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2126909858
|
QUESTION: Is it possible to authenticate using managed identity in the Azure pipeline?
I am trying to figure out whether it is possible to use a "workload identity federation service connection" to authenticate instead of providing a secret/certificate.
Does anyone have any experience with this topic?
I have used the YAML file from this website https://doitpsway.com/how-to-easily-backup-your-azure-environment-using-entraexporter-and-azure-devops-pipeline and modified it to add a task to it; refer to this website https://gotoguy.blog/2023/09/15/connect-to-microsoft-graph-in-azure-devops-pipelines-using-workload-identity-federation/ for the workload identity federation and the task needed to make it work.
|
gharchive/issue
| 2024-02-09T11:03:13 |
2025-04-01T04:35:01.691755
|
{
"authors": [
"berchouinard",
"ztrhgf"
],
"repo": "microsoft/EntraExporter",
"url": "https://github.com/microsoft/EntraExporter/issues/63",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2003972179
|
Add: Static docs site using sveltejs
Need to add a workflow to publish on GitHub Pages.
Thank you for the contribution. However, if we're going to publish a docs site, we would use Material for MkDocs.
This is the sort of thing that should probably be discussed in an Issue before a lot of work is put into a PR.
|
gharchive/pull-request
| 2023-11-21T10:37:06 |
2025-04-01T04:35:01.693079
|
{
"authors": [
"bill-long",
"rizwan3d"
],
"repo": "microsoft/EventLogExpert",
"url": "https://github.com/microsoft/EventLogExpert/pull/264",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1008448281
|
Offline Support for Fluid
Scenarios
There are three customer scenarios for Fluid Offline support:
Create and edit a new container while offline, upload on connection
Load and edit a cached container while offline, synchronize on connection
Best-effort recovery of edits after rude termination
Prior Work
The offline creation scenario is supported today using the detached container
workflow, which allows the application to retrieve, store, and rehydrate a
summary of the container's contents while offline. This rehydrated summary
may then later be attached to the service when connected.
It is the application's responsibility to store the summary, as well as to
provide an implementation of IDetachedBlobStorage, which stores
blobs uploaded while the container is in a detached state.
When connected to the service, the Fluid runtime already tolerates transient
network failures by caching locally pending/unsequenced ops and resubmitting
them during reconnection.
Finally, the runtime team has prototyped the ability to retrieve the set of
pending ops and provide them to a rehydrated container to address the best-effort
recovery scenario (#4597).
Stashed Ops design
A new 'closeAndGetPendingLocalState()' API is added to IContainer which allows
the application to close the container and retrieve the pending/unsequenced
ops from the PendingStateManager.
As in the detached container workflow, the application is responsible for persisting
the returned pending ops in local storage, along with the summary and IDetachedBlobStorage
as before.
When rehydrating the container from local storage, the application provides the
pending ops to the loader. The pending ops are used for two purposes:
The pending ops are re-applied to the DDS on top of the last summary to restore
the client's local state.
The pending ops are used to initialize the queue of outgoing messages in the
PendingStateManager.
In order to re-apply pending ops, a new 'applyStashedOp' API is added to the DDS.
One nuance is that because local op metadata is not serializable, the 'applyStashedOp'
API must re-create it for the runtime. This requires a minor amount of per-DDS work.
A second nuance is that some pending/unacknowledged ops may have been in flight at the time the application extracted the pending ops list. This is mostly handled by our
time the the application extracted the pending ops list. This is mostly handled by our
existing reconnection logic in PendingStateManager, with the caveat that we need to
remember our original client ID so that the PendingStateManager can recognize ACKs for
previously sent ops during the "catch up phase".
Known Limitations
The ability to reconnect relies on the retention policies of the service. In particular,
if the op history has been discarded, the client cannot determine which in-flight ops were
received prior to going offline. Attempting reconnection in this state is risky, as resending
an already sent op will corrupt the document.
Ops have higher storage overhead than summaries. Keeping the list of ops for long offline sessions
may increase pressure on Fluid and DDSes to compress the op representation (#2676), support
coalescing of ops (#1912), or page-out the PendingStateManager's queue to local storage as it
grows large.
Similarly, replaying/uploading long lists of pending ops will impact load time for
offline documents. Longer term, we may want to consider enabling DDSes to summarize
their locally pending state in order to accelerate loading and/or synthesize their pending
ops from their locally pending state (e.g., MergeTree can do this.)
A limitation of our current design is that PWAs and browser tab instances will not synchronize
with each other while offline. Rectifying this would require introducing a local intermediate
ordering service that provides a partial ordering of local changes in advance of the Fluid
service's total ordering. Such a change would require modifying DDSes to use version vectors
instead of a single seq#, which is out of scope.
Finally, we've speculated in the past that the application may want to preserve app-specific
metadata on ops, as well as fall back on interactive 3-way diffs when reconnecting a very stale
document. While we currently do not know of anything that would preclude an application with a
custom DDS from building these experiences, it's currently an unknown.
Remaining Work
The remaining work for the runtime team is to ready the stashed ops work for production,
as well as provide the applyStashedOps implementation for the built-in DDSes. Work to plumb
the storage/retrieval of stashed ops through the container implementation and implement
applyStashedOps for proprietary DDSes will be funded by our partners.
Some minor aspects of the prototype we may want to revisit while preparing it for production:
Currently, retrieving the pending/unsequenced ops also closes the container. This is problematic
for the 'best-effort recovery' scenario, which will want to periodically checkpoint the user's
work even while connected.
Closing the container may not be sufficient to be robust against duplicate op transmission. For example,
if a client rehydrates/reconnects the container twice, it will not recognize the ops sent during the first
rehydration as ACKs for its locally pending ops. (e.g., consider the scenario where an app attempts to
rehydrate/reconnect, but is rudely terminated before reconnection completes.) One fix is to introduce a
special op that is sent before reconnection that announces the client's previous dehydrated ID so that the
client can recognize itself.
As mentioned in the previous section, the queue of pending ops may grow large during long offline
sessions. Providing an IDetachedBlobStorage-like callback interface as messages are queued/dequeued
would allow the application to persist the queue incrementally.
If possible, it would be nice to pass IDetachedBlobStorage and pending ops through the
rehydrateContainer(), giving users a single entry point for loading an offline container.
I think @wes-carlson also produced a document (back in Spring) with his notes on where we are, gaps, items to be completed, etc. - would be great to reference it here.
As for SPO deleting ops - it does indeed cause a limitation where offline only works for 30 days. This is possible to address, as SPO keeps "file versions" around, and we have been talking about the need for other scenarios to be able to scan that history and extract information (blobs that are already collected in the latest snapshots, ops that are already deleted, etc.). This is likely doable; however, you run into another problem - the GC design (as of today) has the same limitation, assuming that it will not allow any sessions (offline or not) to last over 30 days in order to correctly collect content. With the offline support discussed here, the notion of "session" (in the GC world) changes to include all offline sessions (i.e. all sessions that originated at some point by loading a specific snapshot version and continued making online/offline progress). I think there is no easy way out here and we would need to live with that limitation, until we come up with other ways to merge files (3-way merge) that would allow us to remove any of these limitations. @agarwal-navin - FYI
Thanks Vlad, I forgot about that
here is the document I wrote
stashed ops
completed work
We get stashed ops from Container.closeAndGetPendingLocalState(). Stashed ops are supplied to a new container as an argument on Loader.resolve(). ContainerRuntime/PendingStateManager will give stashed ops to DDS via SharedObject.applyStashedOp() when their reference sequence number is observed. If we see an ACK for a stashed op, we treat it as a local ACK. UnACKed stashed ops are resubmitted at time of "connected" event.
challenges
Stashed ops are matched against ACKs from the previous container by client ID and client sequence number. This has a couple of consequences:
When we get stashed ops from a container, we must close the container at the same time. Otherwise, the container could reconnect with a new client ID and resubmit the ops. This would mean we couldn't match our stashed ops to the successfully submitted ops, resulting in duplicate ops submitted, which can cause document corruption e.g. in the case of attach ops
We (and our partners) should be careful to treat stashed ops as "consumable", i.e., once stashed ops are given to a container, they should not be given to another (unless we know it never reached "connected" event), since the second container could not match stashed ops to ops resubmitted by the first container.
In order to apply stashed ops, the relevant data stores and DDS's must be loaded. This means applyStashedOp() must be async, but process() and setConnectionState() on ContainerRuntime are synchronous. We work around this by listening directly to DeltaManager "op" events and pausing between ops.
In order to apply stashed ops, we need to load from a summary at or before the reference sequence number of the first stashed op (i.e. the lowest one). SPO wants to delete old summaries though, which would affect our ability to apply old stashed ops.
Serialized stashed ops may be kept by host for arbitrary amount of time and given to newer version loader, which could cause compatibility issues
remaining work
Implement applyStashedOpCore() on all DDS's. Currently only implemented on map (except for ValueType ops).
Need the ability to load from a summary at or before specified sequence number.
Loader should only return container after it is connected when loaded with stashed ops.
Support uploadBlob(). When BlobHandles are attached, they submit a "BlobAttach" op which tells storage that the blob is attached and it should not be GC'd. However, stashed BlobAttach ops may refer to already deleted blobs, so we need to either save pending blobs and reupload them or discard these stale ops somehow.
Make sure that we connect as "write" when we have stashed ops.
Currently "pause" headers are ignored since we need to connect in order to apply and resubmit stashed ops. It might be nice to pause container after "connected" event when pause is requested (is this still possible? didn't we remove pause()?)
hot-swap context reload will definitely not work currently but I think this is not supported anymore and will be removed sometime?
Some of it is no longer accurate, for example, BlobAttach ops are no longer sent when the handle is attached, but are sent right after the blob is uploaded and before the promise returned by uploadBlob() is resolved. It would still be important to make sure this is handled correctly.
|
gharchive/issue
| 2021-09-27T18:12:39 |
2025-04-01T04:35:01.713818
|
{
"authors": [
"DLehenbauer",
"vladsud",
"wes-carlson"
],
"repo": "microsoft/FluidFramework",
"url": "https://github.com/microsoft/FluidFramework/issues/7594",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1133197748
|
Aliases should be considered only when getRootDataStore API is used
The getDataStore function in dataStores.ts always checks whether the given id is an alias. However, we should only do this if the data store is requested via the getRootDataStore API, since that should be the only entry point for aliased data stores. All other data stores should be retrieved via their handles.
However, currently root data stores can still be requested via the request / resolveHandle APIs, so we need to handle those scenarios.
https://dev.azure.com/fluidframework/internal/_workitems/edit/358
closed as duplicate ^
|
gharchive/issue
| 2022-02-11T22:22:16 |
2025-04-01T04:35:01.717106
|
{
"authors": [
"agarwal-navin",
"andre4i"
],
"repo": "microsoft/FluidFramework",
"url": "https://github.com/microsoft/FluidFramework/issues/9109",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1183708095
|
Summarizer: track number of non-system ops we are summarizing.
We record opsSinceLastSummary in summary telemetry.
I've opened #8932 to record size of ops summarized.
We would also benefit from reporting how many of these ops are system ops vs. not system ops.
This will help us understand
if we need to adjust frequency of summaries, as we might be summarizing too often when we do not need to
As part of this work, we should actually consider counting only non-system ops when deciding if "idle" & "maxTime" triggers should fire.
Hey @vladsud, what do you mean by system ops vs. non-system ops?
Is it messages vs signals? Or FluidDataStoreOp vs Attach, Alias, BlobAttach?
See isSystemMessage()
FluidDataStoreOp, Attach, Alias, BlobAttach - these are all internal types; they will show up (at the level of the loader) as the "op" type.
Everything else is a system op
Perfect, thank you!
Telemetry was added, follow-up https://github.com/microsoft/FluidFramework/issues/9793
|
gharchive/issue
| 2022-03-28T16:44:18 |
2025-04-01T04:35:01.720861
|
{
"authors": [
"andre4i",
"vladsud"
],
"repo": "microsoft/FluidFramework",
"url": "https://github.com/microsoft/FluidFramework/issues/9637",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1557416021
|
Move AttributionKey and AttributionInfo to runtime-definitions
Description
Moves two attribution interfaces to @fluidframework/runtime-definitions.
These two attribution primitives are better suited for a definitions package, as they'll be referenced by various layers of attribution-related code.
I'm not sure this is exactly the right definitions package to move these bits of code to. I'm fine with moving them somewhere else if the reviewer thinks it's better. Just wanted to clean up the duplication sooner rather than later.
|
gharchive/pull-request
| 2023-01-25T23:44:11 |
2025-04-01T04:35:01.722304
|
{
"authors": [
"Abe27342"
],
"repo": "microsoft/FluidFramework",
"url": "https://github.com/microsoft/FluidFramework/pull/13803",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
915767597
|
More formally expose IClient override in Container and use in tinylicious-client
fluid-static wants clients to connect in "write" mode by default instead of doing the "read" mode to "write" mode switch for odsp specifically. There already is a way to do this through loader options, but it's undocumented. On the assumption that this functionality is intentional and not a hidden API, formalize access to this API and use it in tinylicious-client here. If we actually don't want anyone using it I can look at some other solutions 🙃
Would love to understand why we want to connect as "write".
If this is to "simplify" dev story (of developers not needing to deal with transitions), then there is no way to avoid it, as service is free to switch back to "read" whenever it feels so (currently every 15 minutes), and also any loss of connection for any reason (without unacked ops) would result in same
Ah, I see, you are overriding the default connection, so you will always get write connection.
Still curious about the goals/reasons here, as this is against what storage would want (though I think we can persuade ODSP it's the right move if we have a good story to tell :))
Hopefully I can hit the important points (and hopefully @skylerjokiel or someone else can correct me if I'm wrong) - as I understand it, the initial connection as "read" was for ODSP-specific reasons (such as how a "write" connection modifies the file and audit trail, etc., which is undesirable if all you're doing is viewing it). Part of why we want this is to simplify the developer story (like you noted) around audience usage. Then FRS is becoming the focus for 3P over ODSP, so it makes sense not to force assumptions about ODSP specifically within a service-agnostic Container, or at least allow developers to opt out of them when it makes sense. So it is less about working with ODSP and more about working with non-ODSP services.
We should move this discussion to some other forum and involve ODSP folks.
Yes, change was mostly driven from ODSP, but it was collective work with Tanvir (he actually implemented most of it), so it's not service specific. The two key reasons were
COGS - we do not want scenarios of hundred users reading documents result in a lot of ops (it's not just join & leave, it's continuous noops as well, and it does show up in FRS scalability testing as well)
Last edited - we want files to reflect the real user who last modified them. It's not ideal (a leave op would change it), but it's better than nothing.
closing per discussion. connecting as "write" is a heavy-handed answer to a narrower group of problems around audience and client identification
|
gharchive/pull-request
| 2021-06-09T04:45:37 |
2025-04-01T04:35:01.727731
|
{
"authors": [
"heliocliu",
"vladsud"
],
"repo": "microsoft/FluidFramework",
"url": "https://github.com/microsoft/FluidFramework/pull/6388",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1279976295
|
[ci] [R-package] Add paste_linter
Description
Added paste_linter.
Lint warnings
[1] Total linting issues found: 11
[[1]]
/home/atis/Projects/LightGBM/build_r.R:73:7: warning: [paste] toString(.) is more expressive than paste(., collapse = ", "). Note als$
, paste0(unrecognized_args, collapse = ", ")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[[2]]
/home/atis/Projects/LightGBM/build_r.R:420:5: warning: [paste] toString(.) is more expressive than paste(., collapse = ", "). Note al$
, paste0(c_api_symbols, collapse = ", ")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[[3]]
/home/atis/Projects/LightGBM/R-package/R/lgb.Booster.R:827:9: warning: [paste] toString(.) is more expressive than paste(., collapse $
, paste(names(additional_params), collapse = ", ")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[[4]]
/home/atis/Projects/LightGBM/R-package/R/lgb.Booster.R:1134:9: warning: [paste] toString(.) is more expressive than paste(., collapse$
, paste(data_names, collapse = ", ")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[[5]]
/home/atis/Projects/LightGBM/R-package/R/lgb.Booster.R:1148:9: warning: [paste] toString(.) is more expressive than paste(., collapse$
, paste(eval_names, collapse = ", ")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[[6]]
/home/atis/Projects/LightGBM/R-package/R/lgb.convert_with_rules.R:23:30: warning: [paste] toString(.) is more expressive than paste(.$
col_detail_string <- paste0(
^~~~~~~
[[7]]
/home/atis/Projects/LightGBM/R-package/R/lgb.Dataset.R:474:13: warning: [paste] toString(.) is more expressive than paste(., collapse$
, paste0(sQuote(.INFO_KEYS()), collapse = ", ")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[[8]]
/home/atis/Projects/LightGBM/R-package/R/lgb.Dataset.R:526:13: warning: [paste] toString(.) is more expressive than paste(., collapse$
, paste0(sQuote(.INFO_KEYS()), collapse = ", ")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[[9]]
/home/atis/Projects/LightGBM/R-package/R/lgb.Predictor.R:227:26: warning: [paste] toString(.) is more expressive than paste(., collap$
, paste(class(data)
^~~~~~~~~~~~~~~~~
[[10]]
/home/atis/Projects/LightGBM/R-package/tests/testthat/test_Predictor.R:440:21: warning: [paste] paste0(...) is better than paste(...,$
row.names(X) <- paste("rname", seq(1L, nrow(X)), sep = "")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[[11]]
/home/atis/Projects/LightGBM/R-package/tests/testthat/test_Predictor.R:455:21: warning: [paste] paste0(...) is better than paste(...,$
row.names(X) <- paste("rname", seq(1L, nrow(X)), sep = "")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Suggested change example
> a <- list("abc", "def")
> paste0("Unrecognized arguments: ", paste0(a, collapse = ", "))
[1] "Unrecognized arguments: abc, def"
> paste0("Unrecognized arguments: ", toString(a))
[1] "Unrecognized arguments: abc, def"
Issue
Contributes to #5303
[x] paste_linter()
I don't believe the failing check-docs build is related to the changes in this PR.
Warning, treated as error:
Invalid configuration value found: 'language = None'. Update your configuration to a valid langauge code. Falling back to 'en' (English).
make: *** [Makefile:20: html] Error 2
(build log)
One of the maintainers here will look into this as soon as possible. Sorry for the inconvenience.
@CuriousCorrelation in the future, please don't use force-pushing in this project. We use squash merging (all commits get collapsed into one upon merge), and having individual commits on a PR makes it easier for maintainers to understand what changes you made in response to review comments.
Ah sorry about that and thanks for letting me know. I will keep that in mind.
I'm manually re-running the check-docs job now that #5322 has been merged. Hopefully that fixes it. Sorry for the inconvenience!
Sure no problem. Any particular way I should do that?
Should I
fetch latest on fork-master -> merge -> pull remote -> checkout branch -> merge master -> push
add new origin 'upstream' -> fetch upstream master -> rebase upstream/master -> force push
Any particular way I should do that?
You don't need to force push. From the root of your fork, do the following.
# set a remote alias for microsoft/LightGBM if you don't already have one
git remote add upstream https://github.com/microsoft/LightGBM.git
# pull the latest upstream changes into your fork's "master" branch
git checkout master
git pull upstream master
# merge "master" into your feature branch, creating a merge commit
git checkout lint/paste_linter
git merge master
# push that merge commit
git push origin lint/paste_linter
@jameslamb I'd love to! I'll open a PR as soon as I can.
|
gharchive/pull-request
| 2022-06-22T11:15:17 |
2025-04-01T04:35:01.735925
|
{
"authors": [
"CuriousCorrelation",
"jameslamb"
],
"repo": "microsoft/LightGBM",
"url": "https://github.com/microsoft/LightGBM/pull/5320",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2456148734
|
MIEngine: Disable individual info threads call for each thread created
Changes to disable info thread for each created thread to expedite GPU
debugging. GPUs are likely to have 1000s of threads and calling thread
info for each thread created is very time consuming.
We call SetThreadInfoFromResultValue from CollectThreadInfo when we
get thread context.
Signed-off-by: intel-rganesh rakesh.ganesh@intel.com
There are at least parts of this change that we don't want for CPU debugging -- with this change we will no longer raise up thread create events to the rest of the debugger when GDB raises a thread create, which would be a user-visible change at the least, and I am not sure if it would break anything.
For GPU debugging, I am also a bit worried this change isn't enough -- we will still wind up creating thread objects in the UI for all the GPU threads. Is that actually usable in either VS or VS Code? For example in the VS Code call stack window, or in VS in the parallel stacks, or threads window? I would have thought you would just want to show a small number (maybe just 1 virtual thread?) with some custom UI to select the thread.
For GPU debugging, the changes are to ensure we do not call thread-info for each thread. The thread created event is still called.
Any NewThread should be added during GetThreadContext.
I do see the Thread window and Parallel Stacks working fine (although it is slow).
Our use case is to show all the stopped GPU threads (similar to Threads in VS) and lanes, and to be able to switch to different GPU threads/lanes and inspect variables.
The views also function when we exit the kernel code and step in the CPU.
|
gharchive/pull-request
| 2024-08-08T15:57:35 |
2025-04-01T04:35:01.743887
|
{
"authors": [
"gregg-miskelly",
"intel-rganesh"
],
"repo": "microsoft/MIEngine",
"url": "https://github.com/microsoft/MIEngine/pull/1460",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2511636701
|
Autocode
Using Copilot is very practical and helps my day-to-day work a lot. One single point that would be a good improvement is an on-screen prompt like "do you want me to help you" when you open a new line to code, because sometimes I open a line to think about a solution while I look at the code, and Copilot's automatic suggestion pushes my code down the page... it disrupts the flow of thought a bit.
Thank you for sharing - join our GitHub Foundations series in Portuguese in October: https://aka.ms/brasil/githubfoundations
|
gharchive/issue
| 2024-09-07T12:59:48 |
2025-04-01T04:35:01.745516
|
{
"authors": [
"cyz",
"priscillatrevizan"
],
"repo": "microsoft/Mastering-GitHub-Copilot-for-Paired-Programming",
"url": "https://github.com/microsoft/Mastering-GitHub-Copilot-for-Paired-Programming/issues/75",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
486927283
|
Flag enum "Handedness" not binary compared (there's probably more)
Not really a bug, but since Handedness is a flag enum, the code could be refactored so that it properly does a binary comparison.
I've seen things like value == any || value == both, where any already includes both.
@alexees what code are you referring to? Any and Both are actually different.
Both only includes Left | Right
Any includes Other | Both
I began to wonder when I saw this:
https://github.com/microsoft/MixedRealityToolkit-Unity/blob/95f16ac53bd19b2dd36f0a3d7fcee6bd5d0af001/Assets/MixedRealityToolkit/Providers/BaseInputDeviceManager.cs#L69-L71
Could possibly replace many of these comparisons with HasFlag
Maybe a simpler line would simply be
pointerProfile.Handedness.HasFlag(controllingHand)?
Oh nice.
You're finding bit check all over the place but nobody mentions this method :)
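For illustration only (the project code is C#; this just sketches the same pitfall with Python's enum.Flag, using made-up values), equality compares the exact bit pattern while a bitwise test, like HasFlag, matches any overlap:
from enum import Flag

class Handedness(Flag):
    NONE = 0
    LEFT = 1
    RIGHT = 2
    OTHER = 4
    BOTH = LEFT | RIGHT
    ANY = LEFT | RIGHT | OTHER

value = Handedness.BOTH
print(value == Handedness.ANY)       # False: equality needs the exact same flags
print(bool(value & Handedness.ANY))  # True: binary comparison, any overlap counts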
|
gharchive/issue
| 2019-08-29T12:27:17 |
2025-04-01T04:35:01.748794
|
{
"authors": [
"Alexees",
"Troy-Ferrell",
"keveleigh"
],
"repo": "microsoft/MixedRealityToolkit-Unity",
"url": "https://github.com/microsoft/MixedRealityToolkit-Unity/issues/5821",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
462413776
|
ControllerFinder now registers its interface with the InputSystem
Changes
Fixes: #5144
ControllerFinder is capable of searching for controllers on its own, but it should also be capable of receiving controller-detection events in case a handedness is requested for which there is no controller yet.
Registering for the corresponding events completes the set of necessary calls.
In other words, the interface implementation was already there; it was just not listening at all.
/azp run mrtk_pr
Thanks! Change looks good.
|
gharchive/pull-request
| 2019-06-30T14:56:17 |
2025-04-01T04:35:01.750805
|
{
"authors": [
"Alexees",
"wiwei"
],
"repo": "microsoft/MixedRealityToolkit-Unity",
"url": "https://github.com/microsoft/MixedRealityToolkit-Unity/pull/5145",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1369351075
|
setup.sh errors
We are running into some issues trying to run the bash setup.sh script in cloudshell. It appears that a folder path might be missing. Could you please let me know if I am missing something?
This issue is resolved. Please be sure to pull down the latest from the repo and try again.
Thanks
|
gharchive/issue
| 2022-09-12T06:31:05 |
2025-04-01T04:35:01.755046
|
{
"authors": [
"Watto1234",
"genegc"
],
"repo": "microsoft/OpenEduAnalytics",
"url": "https://github.com/microsoft/OpenEduAnalytics/issues/135",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
545467229
|
Add compiler error if parameters have been modified
Suggested by John Schanck. He was doing some experiments with FrodoKEM code with different parameters and ran into some unexpected errors. Turns out it was because there were some implicit assumptions in the code that certain values were multiples of 8. This doesn't try to fix that, just raises a compile-time error if that particular assumption is violated.
I limited the error detection to NBAR. When it is modified to a value that is not a multiple of 8, the code emits a compilation error. Thanks to John for pointing out this potential issue.
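As a rough sketch of the kind of guard described (the exact macro placement and message in the PR may differ), a preprocessor check like the following fails the build whenever NBAR is not a multiple of 8:
/* Compile-time guard: refuse to build if the implicit "multiple of 8" assumption is violated. */
#if (NBAR % 8 != 0)
#error "NBAR must be a multiple of 8"
#endif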
|
gharchive/pull-request
| 2020-01-05T21:11:16 |
2025-04-01T04:35:01.760975
|
{
"authors": [
"dstebila",
"patricklonga"
],
"repo": "microsoft/PQCrypto-LWEKE",
"url": "https://github.com/microsoft/PQCrypto-LWEKE/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1519877791
|
Update filter delegation validation to validate without generating warnings
In order to better understand the challenges that users have in using functions and data sources that can't be delegated today, we need to collect failure telemetry without causing breaking UI changes.
This change adds the ability for filter delegation validation to swallow warnings.
✅ No public API change.
|
gharchive/pull-request
| 2023-01-05T01:47:52 |
2025-04-01T04:35:01.763059
|
{
"authors": [
"LucGenetier",
"ian-legler"
],
"repo": "microsoft/Power-Fx",
"url": "https://github.com/microsoft/Power-Fx/pull/960",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1349160786
|
Dox42
When submitting a connector, please make sure that you follow the requirements below, otherwise your PR might be rejected. We want to make sure you have a well-built connector and a smooth certification experience, and that your users are happy :)
If this is your first time submitting to GitHub and you need some help, please sign up for this session.
[x] I attest that the connector doesn't exist on the Power Platform today. I've verified by checking the pull requests in GitHub and by searching for the connector on the platform or in the documentation.
[x] I attest that the connector works and I verified by deploying and testing all the operations.
[x] I attest that I have added detailed descriptions for all operations and parameters in the swagger file.
[x] I attest that I have added response schemas to my actions, unless the response schema is dynamic.
[x] I validated the swagger file, apiDefinition.swagger.json, by running paconn validate command.
[x] If this is a certified connector, I confirm that apiProperties.json has a valid brand color and doesn't use an invalid brand color, #007ee5 or #ffffff. If this is an independent publisher connector, I confirm that I am not submitting a connector icon.
If you are an Independent Publisher, you must also attest to the following to ensure a smooth publishing process:
[ ] I have named this PR after the pattern of "Connector Name (Independent Publisher)" ex: HubSpot Marketing (Independent Publisher)
[ ] Within this PR markdown file, I have pasted screenshots that show: 3 unique operations (actions/triggers) working within a Flow. This can be in one flow or part of multiple flows. For each one of those flows, I have pasted in screenshots of the Flow succeeding.
[ ] Within this PR markdown file, I have pasted in a screenshot from the Test operations section within the Custom Connector UI.
[ ] If the connector uses OAuth, I have provided detailed steps on how to create an app in the readme.md.
Hello Microsoft Team, the Policy of my custom connector has stopped working for some reason. I have made one change that removes JSON that pointed to an outdated action name, but I don't know if that will fix the problem.
The policy is essential for the connector's functionality, so I need this to work.
Hello @dox42diegoschleis,
I hope you are doing well.
Congratulations, your pull request is approved and merged. Please submit the files to ISV Studio or update your submission using your latest commit ID. Please allow us up to 1-2 weeks to review your submission, and our engineers will notify you in the “Activity Control” area in ISV Studio. Please make sure that the icon meets all the requirements: its size is 230x230, the icon background is not white or transparent, and it matches the iconBrandColor property in the apiProperties.json file.
After making the submission in ISV Studio, please attach an intro.md file in the activity control section. This file is required and it's different from readme.md. You can find an example of intro.md file on this page at step 6.
Please create an environment in the Preview region. You will use this environment later to test your connector after Microsoft is done with the functional verification for your connector.
We expect all tests to be completed within 1-2 weeks.
If your connector passes all tests, the deployment process begins, and it typically takes up to 3 to 4 weeks to deploy your connector to all of our regions.
Please let me know if you have any questions.
Thank you very much for working with us.
Hello @dox42diegoschleis,
Unfortunately, these changes are considered breaking changes. Could you please roll back the changes in a separate PR?
I suggest scheduling a meeting for discussing your problem and finding the right solution.
Please send us an email to ConnectorPartnerMgmtTeam@service.microsoft.com, describe the problem that you're having and share your availability.
We will schedule a meeting and send you an invite.
Let us know if you have any questions.
@v-EgorKozhadei The problem I reported actually got fixed with this PR, unless you are referring to another problem
|
gharchive/pull-request
| 2022-08-24T09:44:39 |
2025-04-01T04:35:01.774874
|
{
"authors": [
"dox42diegoschleis",
"v-EgorKozhadei"
],
"repo": "microsoft/PowerPlatformConnectors",
"url": "https://github.com/microsoft/PowerPlatformConnectors/pull/1906",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
840668954
|
Translation/consistency issue with FancyZones in Swedish
ℹ Computer information
PowerToys version: 0.33.1
PowerToy utility: FancyZones
Language: Swedish
📝 Provide where the issue is / 📷 Screenshots
Are there any useful screenshots? WinKey+Shift+S and then just paste them directly into the form
❌ Actual phrase(s)
"Öppen layoutredigeringsprogrammet"
✔️ Expected phrase(s)
"Öppna layoutredigeringsprogrammet"
ℹ Why is the current translation wrong
"Öppen" is an adjective, "Öppna" is a verb.
To be consistent, the button above the translation issue says "Starta layoutredigeringsprogrammet", why would you use both "Starta" (Start) and "Öppna" (Open)? Are they different in the English version?
In English, "open" can be used as a verb and an adjective, but I assume this is different in Swedish.
source: https://english.stackexchange.com/questions/141547/opened-vs-open#:~:text=6 Answers&text=The word open can be,%2C future%2C or present tense.&text=If you did the action,Or%2C you could run away.
@nmcc1212 that's likely the reason for the erroneous translation. And to my second point: even if the translation error is corrected, the two sentences next to each other, meaning the same thing, use different wording (although grammatically correct), and you probably want to decide on one of them. I haven't looked at the English strings, but as a comparison, the first sentence would say "Start layout editing program" and the second would say "Open layout editing program". Both are correct and sensible, but they do not give a sense of consistency.
@nmcc1212 I'm sure you noticed the typo that happened when you made the change? There's a "t" missing at the end.
Fixed this, but I don't think it will update the PR now that it's closed.
English
FancyZones_HotkeyEditorControl.Header = Open layout editor
FancyZones_LaunchEditorButtonControl.Text = Launch layout editor
Swedish
FancyZones_HotkeyEditorControl.Header = Öppen layoutredigeringsprogrammet (Öppna layoutredigeringsprogrammet)
FancyZones_LaunchEditorButtonControl.Text = Starta layoutredigeringsprogrammet
Sorry, have to submit translation issues through a process.
I'll create the formal internal ask.
https://ceapex.visualstudio.com/CEINTL/_workitems/edit/402187
This will push in for 0.37 and in Monday's loc drop
This is now fixed in 0.37 https://github.com/microsoft/PowerToys/releases/download/v0.37.0/PowerToysSetup-0.37.0-x64.exe
|
gharchive/issue
| 2021-03-25T08:02:44 |
2025-04-01T04:35:01.783466
|
{
"authors": [
"GenerAhl",
"crutkas",
"enricogior",
"nmcc1212"
],
"repo": "microsoft/PowerToys",
"url": "https://github.com/microsoft/PowerToys/issues/10431",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1078847071
|
Mouse Highlighter doesn't show up over Zoom when sharing a window
Description of the new feature / enhancement
When the PowerToys mouse Highlighter is on and the user is sharing a specific window over Zoom, the highlight should show up on the other end of the Zoom.
If only a specific window is shared over a Zoom meeting, the highlight isn't visible to the other meeting participants. In that case, the broadcaster (= the PowerToys user) can see the highlight when he clicks his mouse, but the participants on the other end of the Zoom conversation cannot see the highlight.
On the other hand, if the broadcaster shares his entire screen (rather than just a specific window) over Zoom, the highlight can be seen by the viewers.
It should work regardless of whether the entire screen or just a single window is being shared.
Scenario when this would be used?
When giving a demonstration over a Zoom, for example, it's especially important for the viewers to know when the demonstrator is clicking his mouse. Mouse Highlighter is perfect for that, but it appears not to work over Zoom when sharing a single window.
Supporting information
No response
Hm, maybe zoom window-share uses screen info that is below the layer of Mouse Highlighter?
This would be a won't-fix for the same reason as #14205.
Please share your full screen instead of sharing a single window to use Mouse Highlighter over Zoom.
@Jon999999 we are treated like an external application and you shared a single app. Almost nothing we can do since we're not directly built into the OS.
|
gharchive/issue
| 2021-12-13T18:26:35 |
2025-04-01T04:35:01.787170
|
{
"authors": [
"Jay-o-Way",
"Jon999999",
"crutkas",
"franky920920"
],
"repo": "microsoft/PowerToys",
"url": "https://github.com/microsoft/PowerToys/issues/14995",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1171494808
|
Onenote desktop won't pin (always on top) consistently on Windows 11.
Microsoft PowerToys version
0.56.2
Running as admin
[X] Yes
Area(s) with issue?
Always on Top
Steps to reproduce
Open the latest version of onenote desktop.
Open a note.
pin the note. -> CTRL+WIN+T
open edge
full screen on edge.
grab edge onenote! (edit) from its titlebar and move the edge onenote! (edit) window slightly left and right.
turn off fullscreen on edge
turn on fullscreen on edge
onenote isn't there.
✔️ Expected Behavior
Open the latest version of onenote desktop.
Open a note.
pin the note. -> CTRL+WIN+T
open edge
full screen on edge. Up to this point onenote behaves as expected and is on top of the edge window.
grab edge onenote! (edit) from its titlebar and move the edge onenote! (edit) window slightly left and right.
turn off fullscreen on edge
turn on fullscreen on edge
onenote should be pinned on top while edge is on fullscreen
❌ Actual Behavior
always on top borders active , but the onenote desktop window is not behaving as always on top.
Other Software
No response
every other always on top utility that I have tried (github) behave in the same way, except for deskpins.
/bugreport
cc: @yuyoyuppe
Hi there! We need a bit more information to really debug this issue. Can you add a "Report Bug" zip file here? Right-click on our system tray icon and just go to Report Bug. Then drag the zip file from your desktop onto the GitHub comment box in this issue. Thanks!
PowerToysReport_2022-03-19-18-32-32.zip
I can reproduce this
Reproducible for me. OneNote is losing its topmost attribute for some reason. In my case it happens occasionally when OneNote is maximized or when it loses focus.
|
gharchive/issue
| 2022-03-16T20:06:51 |
2025-04-01T04:35:01.794612
|
{
"authors": [
"Aaron-Junker",
"ChristosChristofidis",
"SeraphimaZykova",
"franky920920"
],
"repo": "microsoft/PowerToys",
"url": "https://github.com/microsoft/PowerToys/issues/17097",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1261459460
|
PowerRename needs prepend/append modes, and counters
Description of the new feature / enhancement
Often when renaming files you don't actually want to replace anything, just append a date or index to either end.
Scenario when this would be used?
Say I have a folder of TV show episodes sorted by creation date. I want to prepend the season and episode number to each file. They are all in the same season, so I just need to prepend S01EXX where XX is a 2-digit counter that starts at 1.
Supporting information
Ideally I should be able to configure where this counter starts from and how it steps, for example what if I want it to start at 5 and go up in multiples of 5? 5, 10, 15, 20, 25, etc. The "step" part is not as important as having a counter at all, though. It should also allow me to specify leading 0 digits, so that I can format 42 as 042 by saying I want it to be 3-digits wide.
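A minimal sketch, not PowerRename's implementation, of the counter semantics requested above (a configurable start, step, and zero-padded width):
#include <cstdio>

int main() {
    const int start = 5, step = 5, width = 3;           // e.g. 005, 010, 015, ...
    for (int i = 0; i < 5; ++i) {
        std::printf("%0*d\n", width, start + i * step);  // zero-pad to the requested width
    }
}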
Good point. Issue is a duplicate of #17522 though.
Duplicate of #17522
For the position of the numbers: One work-around could be to first let PowerRename do the work (which will put the numbers at the end of the file name) and then do another run to move or format the numbers. https://docs.microsoft.com/nl-nl/windows/powertoys/powerrename#examples-of-regular-expressions
I really want a "prepend/append text" operation, wouldn't it make more sense just to build that out and make counters part of that?
Disclaimer: not familiar with the enumerations feature. Couldn't figure out how to use it
|
gharchive/issue
| 2022-06-06T07:25:13 |
2025-04-01T04:35:01.798709
|
{
"authors": [
"Jay-o-Way",
"NSExceptional"
],
"repo": "microsoft/PowerToys",
"url": "https://github.com/microsoft/PowerToys/issues/18628",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1703391710
|
Is it possible to consolidate all PowerToys context menu items into a single sub-menu?
Description of the new feature / enhancement
I want this feature because the context menu is already cluttered. Additionally, PowerToys options also appear scattered throughout the menu.
Scenario when this would be used?
For example PowerRename, File Locksmith, etc...
Supporting information
No response
/dup https://github.com/microsoft/PowerToys/issues/21719
|
gharchive/issue
| 2023-05-10T08:35:35 |
2025-04-01T04:35:01.800742
|
{
"authors": [
"crutkas",
"fatihbahceci"
],
"repo": "microsoft/PowerToys",
"url": "https://github.com/microsoft/PowerToys/issues/25932",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1833822724
|
Peek: Option to use it as a default viewer that opens on doubleclicking files of certain type
Description of the new feature / enhancement
Absolutely love Peek and how near instant it is. I can see myself using it as a default file viewer for pictures, just opening them normally by double clicking and not even bothering with the KB shortcuts.
Scenario when this would be used?
I can absolutely picture myself using it as a default picture viewer, operated with mouse only.
Supporting information
No response
I totally agree - the possibility to associate PowerToys Peek as the default file viewer for image files would be great!
I've found no other tool that starts that quickly and has such an unobtrusive UI! 👏
I came looking for this. Even if you manually set the Peek UI exe as the default, it doesn't work; nothing happens when you double-click a file.
+1
+2
+3
|
gharchive/issue
| 2023-08-02T19:56:01 |
2025-04-01T04:35:01.804290
|
{
"authors": [
"DORANDKOR",
"FolkSong",
"Merlin2001",
"Zhnigo",
"heliole",
"hrvojegolcic",
"sandroshu"
],
"repo": "microsoft/PowerToys",
"url": "https://github.com/microsoft/PowerToys/issues/27776",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2254273417
|
FancyZones: Shortcuts for individual zones
Description of the new feature / enhancement
User would be able to define a shortcut for individual zones. Pressing the shortcut will then move the currently active window to that zone.
Scenario when this would be used?
Currently, you can move windows with shortcuts only if you override the Windows' default Win + arrow shortcut, and even then, you have to move the windows one zone at a time. This is tedious if you have for example three monitors.
Shortcuts for individual zones would allow a single shortcut to place a window in any zone, and also, you could retain the default Win+arrow shortcut, if you wish to use it as it is.
A practical usecase for me is having a browser open in the left monitor, then opening a new browser window and wanting it on my right monitor.
Supporting information
No response
duplicate
I closed this since it seems to be a common feature request, although every request I can find is closed as a duplicate, and none of them have been answered as to whether this is not going to be done, is being worked on, or what.
|
gharchive/issue
| 2024-04-20T00:18:43 |
2025-04-01T04:35:01.806809
|
{
"authors": [
"Catriks"
],
"repo": "microsoft/PowerToys",
"url": "https://github.com/microsoft/PowerToys/issues/32543",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
623313496
|
Ability to sync (1-way, 2-way, etc.) any particular folder selected by the user over to OneDrive
PowerToy request - need the ability to sync a particular folder on the Desktop or in the Documents folder. Currently the whole Documents and Desktop folders can be synced through the OneDrive app, but not a single folder.
There should also be a setting to enable 1-way sync, 2-way sync, or "1-way sync modified".
1-way sync: a folder on the desktop, when synced to OneDrive, will only send data to OneDrive. Any changes made on OneDrive should not be reflected in the folder on the desktop. The same folder in OneDrive should just keep a copy of the data of the folder I selected on the desktop.
2-way sync: the normal sync, where changes made in OneDrive are reflected in the folder on the desktop.
"1-way sync modified (on steroids)": a folder on the desktop, when synced to OneDrive, will only send data to OneDrive. Any changes made on OneDrive should not be reflected in the folder on the desktop. If any file or folder inside the desktop folder is deleted, OneDrive should not delete its own copy; it should just somehow mark that file as deleted on the user's computer. If the person then wants to delete those files they can, or else keep them. If any new files are added to the user's folder on the desktop, those are synced and added on OneDrive.
I don't know if this is possible somehow at present.
@raghavk92, I feel like SyncToy / robocopy would do this, which is tracked in #26. If not, please reopen the issue.
|
gharchive/issue
| 2020-05-22T16:09:35 |
2025-04-01T04:35:01.809496
|
{
"authors": [
"crutkas",
"raghavk92"
],
"repo": "microsoft/PowerToys",
"url": "https://github.com/microsoft/PowerToys/issues/3531",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
916249876
|
Keep monacoPreviewPane up with master
Summary of the Pull Request
What is this about:
What is included in the PR:
Push the changes from master into monacoPreviewPane
How does someone test / validate:
Quality Checklist
[ ] Linked issue: #xxx
[ ] Communication: I've discussed this with core contributors in the issue.
[ ] Tests: Added/updated and all pass
[ ] Installer: Added/updated and all pass
[ ] Localization: All end user facing strings can be localized
[ ] Docs: Added/ updated
[ ] Binaries: Any new files are added to WXS / YML
[ ] No new binaries
[ ] YML for signing for new binaries
[ ] WXS for installer for new binaries
Contributor License Agreement (CLA)
A CLA must be signed. If not, go over here and sign the CLA.
Does someone know how I can solve this merge conflict?
@crutkas
I'll close this and make a new one when the next version of PT is released.
|
gharchive/pull-request
| 2021-06-09T14:08:02 |
2025-04-01T04:35:01.815637
|
{
"authors": [
"Aaron-Junker"
],
"repo": "microsoft/PowerToys",
"url": "https://github.com/microsoft/PowerToys/pull/11675",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
653116486
|
[FancyZones] Hold Ctrl to select any number of zones
Summary of the Pull Request
Hold the Ctrl key to select any number of zones. I think the Ctrl key is a good choice. Relevant issue: #3337
Right now this feature is not controlled by an option because, if you don't want to use it, just don't press Ctrl, everything stays the same.
References
#3337, probably a few more
PR Checklist
[x] Applies to #3337
[x] CLA signed. If not, go over here and sign the CLA
[x] Tests added/passed
[x] Requires documentation to be updated
[x] I've discussed this with core contributors already. If not checked, I'm ready to accept this work might be rejected in favor of a different grand plan. Issue number where discussion took place: #xxx
Detailed Description of the Pull Request / Additional comments
Validation Steps Performed
Manually tested in single and multiple monitor scenarios
@ivan100sic
would it be too complicated to deselect a zone when moving the window back, so that the zone is no longer "selected" (I hope I explained myself)?
That's doable but it would require a significant change in logic - I would do it by recording the earliest selected zone index set (let's call it S) when Ctrl was pressed, and if the cursor is currently selecting the zone index set T, I'd compute the minimum bounding rectangle of S and T and highlight it. That would give the same results as what you mentioned, and it may be more intuitive for users.
I think we can go ahead with the current implementation and then revisit it later on.
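A minimal sketch, under assumed types (this is not FancyZones' actual code), of the idea described above: take the zone set S recorded when Ctrl was pressed and the currently hovered zone set T, and highlight the minimum bounding rectangle of both.
#include <algorithm>
#include <climits>
#include <vector>

struct Rect { int left, top, right, bottom; };  // assumed zone geometry

// Smallest rectangle covering every zone index in S and in T.
Rect boundingRect(const std::vector<Rect>& zones,
                  const std::vector<int>& S, const std::vector<int>& T) {
    Rect r{INT_MAX, INT_MAX, INT_MIN, INT_MIN};
    auto extend = [&](int idx) {
        const Rect& z = zones[idx];
        r.left   = std::min(r.left,   z.left);
        r.top    = std::min(r.top,    z.top);
        r.right  = std::max(r.right,  z.right);
        r.bottom = std::max(r.bottom, z.bottom);
    };
    for (int i : S) extend(i);
    for (int i : T) extend(i);
    return r;
}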
|
gharchive/pull-request
| 2020-07-08T08:54:47 |
2025-04-01T04:35:01.820478
|
{
"authors": [
"enricogior",
"ivan100sic"
],
"repo": "microsoft/PowerToys",
"url": "https://github.com/microsoft/PowerToys/pull/4850",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2666675094
|
Same version number as the Linux-ProcDump
Currently the Linux-ProcDump has version 3.3 from May 28.
And the Mac-ProcDump has version 1.0 from November 11.
So the Mac-ProcDump binary is newer and includes the newest changes.
I think it will be hard to compare the versions if they remain different in the future. Which will be newer: 7.3 of the Mac-ProcDump or 10.5 of the Linux-ProcDump?
So I hope that the Linux and Mac binaries will in the future be published at the same time with the same version number.
For example, the next version of both the Linux-ProcDump and the Mac-ProcDump could be 4.0 or so.
Edit:
I think the best way would be to rename the ProcDump-for-Linux repo to ProcDump-for-Unix.
The binaries for Linux and Mac could then both be published there.
And then they could be published at the same time with the same version number.
Thank you for your feedback! We decided to use version 1.0 for the Mac release for similar reasons as when we launched the Linux version. While the Windows version was already at version 9 or 10 at the time, the Linux and Mac versions are conceptually the same tool—a trigger-based dump generator—but have distinct feature sets and may never achieve full feature parity. For example, the Linux version includes unique triggers not found in Windows, and the Mac version will likely follow a similar path. Aligning the version numbers could create confusion about the availability of features across platforms, so we opted for a fresh start with 1.0 to clearly differentiate the platform-specific capabilities.
|
gharchive/issue
| 2024-11-17T22:57:03 |
2025-04-01T04:35:01.823690
|
{
"authors": [
"MarioHewardt",
"theuserbl"
],
"repo": "microsoft/ProcDump-for-Mac",
"url": "https://github.com/microsoft/ProcDump-for-Mac/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
772717817
|
[* Duration] Extract duration units when preceded by ordinal numbers
Duration units like "week", "month", "year"... should be recognized as duration entities when preceded by ordinal numbers.
For example:
"the first week is dedicated to exercises" --> "week" (duration)
@LionbridgeCSII, can you take a stab at this and see if there may be any side-effects?
@tellarin, yes there are some interferences with DatePeriod and now some patterns are not correctly handled in certain languages. I am working on fixing these issues.
Please raise a flag if this becomes too unwieldy.
|
gharchive/issue
| 2020-12-22T07:36:04 |
2025-04-01T04:35:01.827064
|
{
"authors": [
"LionbridgeCSII",
"tellarin"
],
"repo": "microsoft/Recognizers-Text",
"url": "https://github.com/microsoft/Recognizers-Text/issues/2417",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1525514205
|
[IT DateTimeV2] Fix for time expression [minutes] minuti alle [hour]
Fix for Italian time expressions such as "5 minuti alle quattro".
References: Time in Italian
Hello @tellarin, could you please take a look at this pull request as well as #3061, #3063, #3065, and #3066? Thank you in advance.
@andrew-gradinari Thanks for the ping. We'll review them shortly.
@tellarin There are some examples using "e|meno":
TimeExtractor.json
https://github.com/microsoft/Recognizers-Text/blob/b80b58ce49d7049d7f3ca4293de5bfb972c43750/Specs/DateTime/Italian/TimeExtractor.json#L195-L324
TimeParser.json
https://github.com/microsoft/Recognizers-Text/blob/b80b58ce49d7049d7f3ca4293de5bfb972c43750/Specs/DateTime/Italian/TimeParser.json#L444-L673
DateTimeModel.json
https://github.com/microsoft/Recognizers-Text/blob/b80b58ce49d7049d7f3ca4293de5bfb972c43750/Specs/DateTime/Italian/DateTimeModel.json#L1470-L1521
Anyway, I have added a few more examples to DateTimeModel.json. I would appreciate it if you could take a look.
|
gharchive/pull-request
| 2023-01-09T12:33:40 |
2025-04-01T04:35:01.830731
|
{
"authors": [
"andrew-gradinari",
"tellarin"
],
"repo": "microsoft/Recognizers-Text",
"url": "https://github.com/microsoft/Recognizers-Text/pull/3067",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
961106226
|
The result of the coefficients of my polynomial after decryption is incorrect.
Hello, I am a new user of this library. My main goal is to convert text to binary ("H" = 010010000) and pack it into a polynomial, then apply computations on these polynomials. You will find my code below. Please let me know if anyone has another solution to propose.
#include "seal/seal.h"
#include <bitset>
#include <iostream>
#include <string>
using namespace std;
using namespace seal;
// print_parameters comes from SEAL's examples helper (examples.h)
string TextToBinaryString(string words) {
string binaryString = "";
for (char& _char : words) {
binaryString +=bitset<8>(_char).to_string();
}
return binaryString;
}
int main ()
{
//we specify the type of scheme
EncryptionParameters parms(scheme_type::bfv);
// we give the prams n = 2048
size_t poly_modulus_degree = 2048;
parms.set_poly_modulus_degree(poly_modulus_degree);
// we give the ciphertext modulo
parms.set_coeff_modulus(CoeffModulus::BFVDefault(poly_modulus_degree));
// we give p=1024 the plaintext modulo
parms.set_plain_modulus(1024);
// we construct a SEALContext object.
SEALContext context(parms);
cout << "Set encryption parameters and print" << endl;
print_parameters(context);
cout << "Parameter validation (success): " << context.parameter_error_message() << endl;
cout << endl;
cout << "~~~~~~ we encrypt and decrypt x. ~~~~~~" << endl;
//generate the keys
KeyGenerator keygen(context);
SecretKey secret_key = keygen.secret_key();
PublicKey public_key;
keygen.create_public_key(public_key);
Evaluator evaluator11(context);
Encryptor encryptor11(context, public_key);
string plaintext = "H";
cout << "Convert the text to binary string " << TextToBinaryString(plaintext)<< endl; // to get 010010000 in binary
Plaintext p1(TextToBinaryString(plaintext).length()+ 1);
for (int i=0 ; i < TextToBinaryString(plaintext).length()+1 ; i++){
*p1.data(i) = TextToBinaryString(plaintext)[i];
}
cout << "The construct polynomial " << p1.to_string() << endl;
// The resilt is : The construct polynomial 30x^8 + 30x^7 + 30x^6 + 30x^5 + 31x^4 + 30x^3 + 30x^2 + 31x^1 + 30
// and i wan to get 0x^8 + 0x^7 + 0x^6 + 0x^5 + 1x^4 + 0x^3 + 0x^2 + 1x^1 + 0
1 - I am aware that I am filling the polynomial with characters from a string. I usually apply modulo 2 to get coefficients equal to 1 and 0. How can I get the correct result?
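As a side note not taken from this thread: the 30/31 coefficients appear because the loop stores the characters '0' and '1' themselves (ASCII 48 and 49, which to_string displays in hex as 30 and 31), not their numeric values. A minimal sketch of one way to store the bit values instead, reusing the same plaintext string and helper as above:
// Hypothetical fix sketch: subtract '0' so each coefficient becomes 0 or 1.
string bits = TextToBinaryString(plaintext);
Plaintext p2(bits.length());
for (size_t i = 0; i < bits.length(); i++) {
    *p2.data(i) = static_cast<uint64_t>(bits[i] - '0');
}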
Next, let's look at my second issue. I try to encrypt a polynomial of degree 1025 (knowing that poly_modulus_degree = 2048), then calculate its square. The decrypted result is wrong. Let's see the code below.
Plaintext ptxt("1x^1025 + 1");
Ciphertext ctxt;
encryptor11.encrypt(ptxt, ctxt);
evaluator11.square(ctxt, ctxt);
Decryptor decryptor11(context, secret_key);
Plaintext decypt;
decryptor11.decrypt(ctxt, decypt);
cout << "The decryption is " << decypt.to_string() << endl;
// The decryption is 2x^1025 + 3FFx^2 + 1
// The result I want is 2x^1025 + x^2 + 1
2 - Why am I not getting the correct decryption?
In addition, when I encrypt a polynomial with a coefficient of 1025 (knowing that the plaintext modulus is 1024), the decryption is a polynomial whose coefficients appear reduced modulo 1000 rather than 1024.
Plaintext ptxt("1025x^1025 + 1"); // after encryption using the code below, decryption is "25x^1025 + 1"
3 - The correct result should be "1x^1025 + 1" because the plaintext modulus is 1024. Why??
Thank you in advance
Why "1x^1025 + 1" square is "2x^1025 + 3FFx^2 + 1"
That is because by-the-definition. The BFV scheme is working over some polynomial ring ZZ_t[X]/(X^N + 1) in which t and N is the parameters.
The square of 1x^1025 + 1 equals to x^2050 + 2x^1025 + 1. Then taking the modulo (X^2048 + 1, t).
x^2050 mod X^2048 + 1 gives -x^2 and negative coefficients say a in SEAL is represented as t - |a|
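Spelled out as a short worked example (restating the reply above; recall that Plaintext::to_string prints coefficients in hexadecimal, which the 3FF already shows):
\[ x^{2050} = x^{2048} \cdot x^{2} \equiv -x^{2} \pmod{x^{2048} + 1} \]
\[ (x^{1025} + 1)^{2} = x^{2050} + 2x^{1025} + 1 \equiv 2x^{1025} - x^{2} + 1 \pmod{x^{2048} + 1} \]
\[ -1 \equiv 1024 - 1 = 1023 = \mathrm{0x3FF} \pmod{1024} \]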
Thanks for your comment, I got it.
Is there a function in SEAL to convert the hex coefficients to decimal coefficients?
And is the polynomial "1025x^1025 + 1" equal to "1x^1025 + 1", because of taking the modulo 1024 (see the third problem)?
Thank you
Yes, if your computation is modulo 1000, you should call parms.set_plain_modulus(1000).
|
gharchive/issue
| 2021-08-04T21:49:54 |
2025-04-01T04:35:01.837157
|
{
"authors": [
"WeiDaiWD",
"badr007-01",
"fionser"
],
"repo": "microsoft/SEAL",
"url": "https://github.com/microsoft/SEAL/issues/376",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1947965755
|
using the proxy with AAD
Hi, thanks for your good work!
I have an AAD (now Entra ID) tenant without an on-premises connection, and I am trying to understand: is there any scenario in which one can have SPID users log in to AAD as guests by using their SPID identity and leveraging this proxy?
Thanks!
Hi @Badjin-bit , sorry for the late reply.
When using Entra ID, you can't let guest users authenticate via SPID.
You could let them authenticate via SPID if they are "members" and you already have a federated domain with ADFS. In this case, on ADFS, you can customize the authentication process and let them log in with SPID, then "map" the SPID user to an AD user via their fiscalNumber.
|
gharchive/issue
| 2023-10-17T18:03:11 |
2025-04-01T04:35:01.839374
|
{
"authors": [
"Badjin-bit",
"fume"
],
"repo": "microsoft/SPID-and-Digital-Identity-Enabler",
"url": "https://github.com/microsoft/SPID-and-Digital-Identity-Enabler/issues/66",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
410367453
|
Is project still maintained?
Is this project still maintained? It seems like there are a number of outstanding pull requests waiting to be merged, such as #154 which fixes the build errors and has been needing a merge since Oct 2018.
If the maintainer(s) don't have time to work on this project that is totally understandable, but please make a note of that so everyone knows not to use this codebase.
Thanks
/cc @bowdenk7
If the maintainer(s) don't have time to work on this project that is totally understandable
Really??? Not for me. This is an important repo.
...guess not?
Yes - this is confusing. Internally this project is still using Typescript 2.7.2 which is quite dated. I'm not sure it even compiles under Typescript 3+
It looks pretty likely the project is EOL, unless someone wants to fork & maintain it.
Apologies everyone - I'm a bad person...
I just merged an update that updates a bunch of dependencies including TypeScript to v3.4.5. I'll be further rehydrating this repo over the next week or so to get it back to being useful again.
Great to hear, no apologies necessary! Thanks for maintaining this :+1:
It's likely that work on this repo will always come in spurts of activity (I've just handled about ~30 PRs, and I'm slowly looking at issues too), but I don't think it's going to be totally unmaintained unless we decide to do that with the updates to the website/docs structure. So I'm going to close this issue as "Yes, it is still maintained".
|
gharchive/issue
| 2019-02-14T15:54:23 |
2025-04-01T04:35:01.852103
|
{
"authors": [
"MikeMitterer",
"bowdenk7",
"jocull",
"mchelen",
"musab",
"orta"
],
"repo": "microsoft/TypeScript-Node-Starter",
"url": "https://github.com/microsoft/TypeScript-Node-Starter/issues/179",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
432448921
|
The type of private fields in a class cannot be used in generic
TypeScript Version: 3.4.1
Search Terms:
generic, class, private field
Code
class Foo<T> {
private a!: T; // when it is public, no error occurs below
}
type Bar<T extends Record<string, Foo<any>>> = T[keyof T]["a"]; // ts(2536) occurs, while autocomplete works well.
type A = Foo<number>["a"]; // it works always!
Expected behavior:
Not treating it as an error. If it is inteneded, at least the error message should be more helpful.
Actual behavior:
ts(2536) occurs.
Playground Link
Related Issues:
@sandersn not 100% convinced it's a bug per se, but it well might be. The error is indeed weird
This isn’t a bug, but I improved the error message to make it clear that it’s on purpose:
Private or protected member 'a' cannot be accessed on a type parameter.
|
gharchive/issue
| 2019-04-12T08:31:24 |
2025-04-01T04:35:01.857000
|
{
"authors": [
"RyanCavanaugh",
"andrewbranch",
"worudso"
],
"repo": "microsoft/TypeScript",
"url": "https://github.com/microsoft/TypeScript/issues/30882",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
460175058
|
Persist type casts/annotations
Search Terms
persist type annotation cast
Suggestion
On occasion, the same property must have its type narrowed multiple times in a single function.
Option 1 — Simply persist a type cast (until reassignment). This might be a breaking change, though I can't think of a situation where it would break existing code.
Option 2 — A new type of statement in the form of expression is type, which would implicitly perform the cast for the remainder of the scope (unless reassigned).
Use Cases
It would be great to avoid casting the same property multiple times over. It can make code easier to read while still providing the necessary type information in a clear manner.
Examples
A partial sample of some code I am using in writing a Babel plugin:
const namespace = (attribute.name as JSXNamespacedName)
.namespace
.name
.toLowerCase();
const property = t.stringLiteral(
(attribute.name as JSXNamespacedName).name.name,
);
could become
attribute.name is JSXNamespacedName;
const namespace = attribute.name.namespace.name.toLowerCase();
const property = t.stringLiteral(attribute.name.name.name);
with option 2. With option 1, the second cast could simply be omitted.
Similar code examples surely wouldn't be hard to come by — any time a type assertion is used multiple times in a scope.
Checklist
My suggestion meets these guidelines:
[x] This wouldn't be a breaking change in existing TypeScript/JavaScript code (option 2, option 1 unclear)
[x] This wouldn't change the runtime behavior of existing JavaScript code
[x] This could be implemented without emitting different JS based on the types of the expressions
[x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
[x] This feature would agree with the rest of TypeScript's Design Goals.
Two reasons this would be a breaking change:
Upcasts are allowed (though you could argue that logic shouldn't apply)
Downcasts can cause inference to make further code more restrictive than before
function fn(s: number | string) {
let k = s as string;
let j = s;
j = 10; // Error if implemented as proposed
}
Duplicate #10421
Gotcha. Though intuitively, if I'm saying that s is in fact a string in your example, I would also expect j to be set to a string.
Agreed on the dupe, closing.
|
gharchive/issue
| 2019-06-25T02:20:01 |
2025-04-01T04:35:01.863439
|
{
"authors": [
"RyanCavanaugh",
"jhpratt"
],
"repo": "microsoft/TypeScript",
"url": "https://github.com/microsoft/TypeScript/issues/32078",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1339760693
|
class merge generic parameters
Suggestion
class Component<Props extends {}> {
public constructor (props: Props) {
Object.assign(this, props);
}
}
class App extends Component<{ name: string }> {
public init() {
// Property 'name' does not exist on type 'App'.ts(2339)
this.name
}
}
🔍 Search Terms
List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily and help provide feedback.
✅ Viability Checklist
My suggestion meets these guidelines:
[x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
[x] This wouldn't change the runtime behavior of existing JavaScript code
[x] This could be implemented without emitting different JS based on the types of the expressions
[x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
[x] This feature would agree with the rest of TypeScript's Design Goals.
⭐ Suggestion
📃 Motivating Example
💻 Use Cases
Is there a way to make the above code work?
Duplicate of #26792.
|
gharchive/issue
| 2022-08-16T03:30:55 |
2025-04-01T04:35:01.868108
|
{
"authors": [
"MartinJohns",
"lzxb"
],
"repo": "microsoft/TypeScript",
"url": "https://github.com/microsoft/TypeScript/issues/50313",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
479883961
|
checkJs: require JSDoc type argument for Array, Object, and Promise in noImplicitAny
Fixes #32766
As discussed in that issue, when noImplicitAny is set in checkJs, this change makes it an error to not specify a type argument in the jsdoc annotations for /** @type {Array} */, /** @type {Object} */, and /** @type {Promise} */ the same way it is for all other jsdoc parameterized types.
If not noImplicitAny, nothing changes and the type argument(s) will continue to default to any for these three special cases.
Also as discussed in https://github.com/microsoft/TypeScript/issues/32766#issuecomment-520487842, dropping the Object -> any mapping will be a bigger change than the other two types, though
it's arguably what the author wants if they've set noImplicitAny and
it's unclear how many strict checkJs projects are out there that will break with this change
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
:x: brendankenny sign now
You have signed the CLA already but the status is still pending? Let us recheck it.
Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
this is apparently different than I expected :) Closing while I get this resolved.
actually that was easy :)
I think making Object actually mean Object (rather than any) is what people want when writing fresh JSDoc for TypeScript's consumption. And there don't seem to be many people to break anyway.
|
gharchive/pull-request
| 2019-08-12T22:49:22 |
2025-04-01T04:35:01.874184
|
{
"authors": [
"brendankenny",
"msftclas",
"sandersn"
],
"repo": "microsoft/TypeScript",
"url": "https://github.com/microsoft/TypeScript/pull/32829",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2221626097
|
Update an error message to match TypeScript’s behaviour which allows spreading non objects in objects
Fixes #57957
@microsoft-github-policy-service agree
To help with PR housekeeping, I'm going to close this PR since it's pretty old now.
|
gharchive/pull-request
| 2024-04-02T23:39:29 |
2025-04-01T04:35:01.875470
|
{
"authors": [
"geryogam",
"sandersn"
],
"repo": "microsoft/TypeScript",
"url": "https://github.com/microsoft/TypeScript/pull/58046",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
185659992
|
/tmp never cleared
Ubuntu deletes all files in /tmp when booting. Other distributions (eg. RedHat) have a cronjob tmpwatch that deletes these files or even use tmpfs to mount /tmp.
WSL does not 'boot', nor does it have anacron running. The Windows Disk Cleanup program does not consider the files in %USERPROFILE%/AppData/Local/lxss/rootfs/tmp. Hence these are never deleted (automatically).
My suggestion is to clean /tmp when the last lxss process closes. It already terminates background processes when there is no more bash session. At that point no process can access files in /tmp anymore and by Filesystem Hierarchy Standard files cannot be expected to be kept.
@Meinersbur I wonder if they're going to make it actual in-RAM storage like it is on Linux. If that's the plan, they might not do it this way.
@fpqc FYI -- /tmp is not in-RAM on my Ubuntu 16.04 VM. It's not mounted at all; it's just a directory under /, which is a regular ext4 filesystem.
It's not only /tmp; all the files that you delete under bash are left on the HDD.
I'm using npm, and it updates files quite often. My npm folder is clean if you look from bash, but all the files are on the HDD if I look in the lxss folder.
@S-ed I believe it does clean them up after you end the Linux Instance (right now this means close all instances of bash.exe)
@fpqc nope, still there, even after windows reboot.
@sed On Drvfs? On Lxfs (outside of /mnt)? Did you touch them with Windows tools?
@fpqc
C:\Users\S-ed\AppData\Local\lxss\home\s-ed\.npm
C:\Users\S-ed\AppData\Local\lxss\root\.npm
Did nothing (outside of bash).
Try it Yourself, use:
sudo npm install -g express
sudo npm uninstall -g express
@S-ed please READ THIS - we STRONGLY recommend you do NOT spelunk into the Linux filesystem using Windows apps and tools. https://blogs.msdn.microsoft.com/commandline/2016/11/17/do-not-change-linux-files-using-windows-apps-and-tools/
@bitcrazed Yeah. I've kinda learned this lesson last year.
The issue was that bash was calling the Windows version of npm instead of the Linux one.
(IIRC it's fixed now)
The ask in the original post here was that WSL /tmp is not being scrubbed, but should be, like Real Ubuntu. The OP then went and made the terribly unfortunate mistake of using the words "Windows Disk cleanup" and mentioning win32 paths. All hope was pretty much lost at that point.
On recent versions of Ubuntu /tmp is cleaned by tmpreaper, which of course does not run on WSL because there is no cron. More here.
This is a dupe of #511, or, alternately, a feature request (with the 'init' label) if one subscribes to the idea that WSL init should do some systemd things like #2530.
@therealkenc Yep. I thought it's caused by the same issue as the /tmp one. but it wasn't.
@therealkenc I think there is more than one way to implement this. Running tmpreaper would be one possibility, but not the only one. Considering that "Windows Disk cleanup" is a Windows component that should be aware of special paths, I don't think it's terribly unfortunate. One could install a Disk Cleanup Handler that runs e.g. bash -c "rm -rf /tmp/*". However, I would not consider it the best option.
Considering that "Windows Disk cleanup" is a Windows component that should be aware of special paths
There, you used the words again. Can't say I didn't try to help you out....
Yeah, no. We won't be relying on WinDiskCleanup to non-deterministically clean tmpfs in WSL - that's something we do when terminating a session.
Just got off a team-sync call where we briefly discussed some of the init challenges and what we might be able to do. Will be meeting post-turkey-coma to discuss in more detail.
Has anything come of this?
Hey @all. We've certainly discussed it, but we've yet to figure-out a sensible, universal/cross-distro solution that would work for everyone.
When should we clean tmp?
Before we start a new session?
After a session closes?
Periodically in the background (requires daemon support - coming in spring 2018 update)
How often? On every new session start? Once a day/week/month?
Should we nuke everything? If not, what filter criteria should we use? Anything older than a week? Month? year?
As one of our dev's asks "sorry I know you just downloaded a 10gb file but it's gone now, download it again"
As @Meinersbur points out above: There isn't even much consistency in the Linux distros themselves about when tmp gets cleared:
Ubuntu deletes all files in /tmp when booting. Other distributions (eg. RedHat) have a cronjob tmpwatch that deletes these files or even use tmpfs to mount /tmp.
I think its best we leave decisions about when and how to nuke your tmp folder to the user and/or distro. Though we'd love to hear suggestions from y'all if you think you have a universal solution.
I didn't realize different distros did it differently. I was always under the impression that everyone wiped it on bootup/shutdown depending on if it was tmpfs or not.
So I'm all for as soon as the Windows session ends, clear /tmp.
Periodically in the background (requires daemon support - coming in spring 2018 update)
Would that be the systemd daemon we're talking about as mentioned in #2530? If so, how's #1579 and #2533 looking? The latter issue specifically would resolve this issue for Ubuntu (which is still the primary official distro used). Not sure if the former is a dependency.
Or are we referring to the background task support?
I would advocate that WSL itself ideally shouldn't do this. I think it should be the responsibility of the distro userspace, because that's how it works on regular Linux.
If WSL doesn't yet support whatever mechanism a given distro uses, I would propose tweaking its image to do something equivalent using kernel and init surfaces that WSL does currently surface.
Are there any WSL distro maintainers on this bugtracker these days? Do they have any thoughts about what the best experience is for their users?
I don't personally see a need to rush this. But if there is a need, +1 to "wipe on session termination" as a short-term hack. I think the lack of awareness here that different distros do this differently is an argument that people probably won't care hugely if the behavior changes slightly as WSL becomes able to use distros' built-in /tmp-clearing logic.
People who feel strongly about it can start cron in their .bashrc, guarding to check if it is running already. As with sshd, rsyslogd, and whatever their fancy. There is no blocking surface for tmpreaper (or, call me surprised if there is).
I keep meaning to make a run at systemd like I did with Chrome but just never get around to it because like you implied it just isn't that blocking. Well that and because systemd is broken by design and making it work feels like enabling (in the mental health sense).
@bitcrazed
From my perspective - wiping on exit is good for security reasons.
Even if the WSL session crashed, you could always rerun it and it will be cleaned on exit.
Let users set this setting.
Make it possible for users to enable/disable cleaning, on launch/on exit, periodically/on size limit.
Everyone is different; there is no solution that would fit all users.
Also, agree with @aseering on making this by the distro means, if this possible.
I also agree that WSL should support whatever distros require for their mechanisms to work to clear /tmp. From a security perspective (as well as a storage perspective), I'd always like to see this data auto-removed and leave a nice little log entry that you could look at if you really wanted confirming the auto-removal worked.
It's part of the userspace that's pulled down with the image. Just open WSL, cd / and ls. You'll see a /tmp just as you observed from discovering the Windows path. Just rm -rf it for now or create a script to do that for you.
I agree that whether /tmp is cleaned on boot or not is distro policy.
But in order to change this behavior from the WSL side, how about adding a config setting in /etc/wsl.conf?
config setting in /etc/wsl.conf ?
Absolutely.
It should be made general as possible though, to address Rich's tough questions. It should probably be able to run arbitrary scripts, be it tmpreaper or tmpwatch (which are the same thing forked over a spat nearly 20 years ago), or any other script for that matter. Maybe there could be some kind of flexible syntax in /etc/wsl.conf for scheduling when the scripts are run. Users should probably be able to set the year, the month of the year, day of the week, hour, minute. Perhaps there could be some kind of special character, asterisk maybe, to indicating running the script every week/day/hour/etc instead of a specified one. There could also be a different character, question mark maybe, to indicate running it only at startup.
Something like that.
Since #344 was indeed addressed, following suit.
@therealkenc #344 did not address this issue on Ubuntu; the /tmp directory still grows indefinitely on a default Ubuntu WSL instance.
I've just answered this on StackOverflow: https://superuser.com/questions/1170939/simulate-reboot-to-clear-tmp-on-the-windows-linux-subsystem/1656653#1656653
The best approach is just to make sure the /tmp directory is mounted appropriately - as a tmpfs partition. I've checked in WSL1 this works, and there's no reason this won't work in WSL2.
There are a few other directories that should equally be mounted this way, but I'm not in the right place to find the correct list.
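As a sketch of what "mounted appropriately" can look like, below is the classic fstab entry for a tmpfs /tmp. Whether a given WSL version picks this up depends on /etc/fstab processing being enabled (the [automount] mountFsTab setting in wsl.conf), which this thread does not cover.
# /etc/fstab: mount /tmp as an in-memory tmpfs so its contents vanish when the instance stops
tmpfs   /tmp    tmpfs   rw,nosuid,nodev   0   0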
Here is my interim workaround:
Define a task in Windows Task Scheduler that runs the following command when my user logs in:
c:\windows\system32\wsl.exe --user root --distribution ubuntu /etc/init-wsl
(you know where this is going...)
In the Ubuntu distribution, create an executable file /etc/init-wsl, for example:
#!/bin/sh
tmpreaper 96 /tmp
(my own version of this file also starts dbus, avahi-daemon and xrdp so I have a stable DNS name to connect to....)
Hey there.
I've run into the same problem — the /tmp directory is not cleared. I'm using Ubuntu 20.04. If I'm right, the distro should automatically clear /tmp on boot due to the policies described in /usr/lib/tmpfiles.d/tmp.conf (see man tmpfiles.d), which are probably copied from systemd's sources due to its installation process.
But suddenly my /usr/lib/tmpfiles.d/tmp.conf looks like this:
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
# See tmpfiles.d(5) for details
# Clear tmp directories separately, to make them easier to override
D /tmp 1777 root root -
#q /var/tmp 1777 root root 30d
- in the first rule means “never”, the second line is even better — it's commented. So, this file actually defines the current behavior, and this is probably the reason why my /tmp is never cleared.
I'm sure that I didn't touch this file by myself. However it looks very strange.
So, I'm here to ask those of you who encounter the same problem on Ubuntu 18.04 or newer. Would you mind to check your /usr/lib/tmpfiles.d/tmp.conf? Does it look like the one from systemd's repo or like mine?
Ah, nevermind.
It looks like systemd-tmpfiles (which should analyze the config I posted above) is not fired up during the boot. Probably because it should be started by systemd, but according to #994 systemd does not work :-(
As an alternative, I guess, it should be possible to run this command on cron, or use tmpreaper.
Why was this issue closed? I'm not sure the issue is resolved. Why couldn't WSL just delete the /tmp directory on termination?!
I second this. /tmp is not cleaned on Ubuntu 22.04
FYI if you enable the new systemd support in WSL2, afterwards you can enable tmp.mount to get a proper tmpfs /tmp mount when WSL initializes, like so:
sudo systemctl enable /usr/share/systemd/tmp.mount
After the above and restarting WSL, you can check if it worked via mount | grep /tmp and you should see a line like tmpfs on /tmp type tmpfs (rw,nosuid,nodev).
You can obviously confirm further by "touching" a file (ie touch /tmp/potato) then exit WSL (wsl --shutdown) and then go back in a wsl shell and the file should be gone (ie ls /tmp/potato should fail.)
[Credit of this knowledge goes to https://github.com/microsoft/WSL/issues/6999#issuecomment-1316014788]
FYI if you enable the new systemd support in WSL2, afterwards you can enable tmp.mount to get a proper tmpfs /tmp mount when WSL initializes, like so:
sudo systemctl enable /usr/share/systemd/tmp.mount
One potential caveat: doing this replaces your /tmp mount; it doesn't clear data in your existing (old) mount.
In my case, I had to disable /usr/share/systemd/tmp.mount, purge the old data, then re-enable it.
# old
$ du -sh /mnt/wslg/distro/tmp/
515G /mnt/wslg/distro/tmp/
# new
$ du -sh /tmp/ 2>/dev/null
185M /tmp/
|
gharchive/issue
| 2016-10-27T12:53:18 |
2025-04-01T04:35:01.910860
|
{
"authors": [
"AgDude",
"Angeldude",
"DarthSpock",
"Meinersbur",
"S-ed",
"aradalvand",
"aseering",
"bitcrazed",
"bobhy",
"darkvertex",
"fpqc",
"igoradamenko",
"nesl247",
"rfc2119",
"rgmz",
"rigarash",
"therealkenc",
"xploSEoF"
],
"repo": "microsoft/WSL",
"url": "https://github.com/microsoft/WSL/issues/1278",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1163179102
|
"The parameter is incorrect" when launching wsl
Version
Microsoft Windows [Version 10.0.19044.1586]
WSL Version
[X] WSL 2
[ ] WSL 1
Kernel Version
5.10.60.1
Distro Version
Ubuntu 20.04
Other Software
No response
Repro Steps
Open any wsl terminal or try running wsl from powershell.
Expected Behavior
It should work and allow execution as normal.
Actual Behavior
The parameter is incorrect.
[process exited with code 4294967295 (0xffffffff)]
I have tried using the .wslconfig + guiApplications fix as specified in other issues, but that did not change my results. After some effort with this, my PC restarted and WSL was working fine again, but now, a few days later, I am back to having this issue. I also noticed that, according to the wslconfig specs, the guiApplications setting isn't even recognized if you aren't on Windows 11.
Notable is that an Ubuntu 18.04 instance installed from the Microsoft Store running on WSL v1 will work as expected.
Diagnostic Logs
wsl.zip
I also tried rolling back to kernel version 4.19.128 by running wsl --update --rollback, but the same results are occurring.
I was able to find a temporary workaround and a clearer cause for this. I read https://github.com/microsoft/WSL/issues/6085#issuecomment-774243448 and put together that my system had stopped working when I was on a different network with only Wi-Fi, but had been working when I was at home connected to an Ethernet dock. I disabled my Wi-Fi network adapter and restarted my PC, which then allowed Hyper-V to create its virtual Ethernet adapter (which I assume had been failing); WSL2 started up without a hitch, and I could re-enable the adapter.
To narrow down the steps for reproducing:
start the machine with only a wifi adapter enabled (my specific adapter is Intel(R) Wireless-AC 9560)
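If anyone wants to script the workaround, a hedged PowerShell equivalent (the adapter name is an assumption — check Get-NetAdapter; run elevated):
Disable-NetAdapter -Name "Wi-Fi" -Confirm:$false   # keep the machine wired-only for the next boot
Restart-Computer
# once WSL is up again:
Enable-NetAdapter -Name "Wi-Fi"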
I have the same problem
wsl --install
wsl --update
all works fine, but after I installed the WSL preview I can't start wsl
The parameter is incorrect. [process exited with code 4294967295 (0xffffffff)]
PS C:\Users\xxx> wsl.exe -l -v
NAME STATE VERSION
* Ubuntu Stopped 2
in EventLog I have this
The program dllhost.exe version 10.0.22000.1 stopped interacting with Windows and was closed. To see if more information about the problem is available, check the problem history in the Security and Maintenance control panel.
Process ID: e98
Start Time: 01d88df2825a5685
Termination Time: 4294967295
Application Path: C:\Windows\System32\dllhost.exe
Report Id: 79432537-c548-4a35-8f53-5c2639bbfafe
Faulting package full name: MicrosoftCorporationII.WindowsSubsystemForLinux_0.61.8.0_x64__8wekyb3d8bbwe
Faulting package-relative application ID: App
Hang type: Quiesce
Getting this error as of today. Pretty frustrating that it's been around for more than a year with no obvious recourse.
Same, my machine restarted last night — I'm assuming a Windows update... what the hell Microsoft, I have a production environment all set up that is now screwed up after working for a year.
The parameter is incorrect.
Error code: Wsl/Service/CreateInstance/CreateVm/E_INVALIDARG
+1
The parameter is incorrect.
Error code: Wsl/Service/CreateInstance/CreateVm/HCS/E_INVALIDARG
Same here. This is pretty lame.
Getting this as of today. Yesterday it did just work normally.
I had a similar issue after changing the IP of a NAT network which I'm using for one of my Hyper-V VMs. Once the change was done, my Default Network for Hyper-V disappeared and I could no longer work with Ubuntu in WSL2. To fix it, I tried many suggestions: a simple network reset + reboot, and completely removing Hyper-V, Virtual Machine Platform and Windows Subsystem for Linux, then reinstalling them after another reboot once the network reset was done. The error message, however, stayed. I also tried just creating a NAT switch called WSL, or an external switch using my WiFi network, but that didn't help either. I even tried upgrading to Windows 11 via the upgrade assistant, hoping the new OS would come with a fully working network setup, but that was also useless.
In the end I saw an error in the event log pointing to the NetNat service not being able to start. This then led me to an SO article pointing to a potential registry issue with the settings under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nsi\
Therefore, I tried resetting my network more thoroughly, as suggested in the "one or more network protocols are missing" thread, and did a complete reset using the following commands as admin:
# release / renew ip + de/reregister dns
ipconfig /release
ipconfig /flushdns
ipconfig /renew
ipconfig /registerdns
# network stack reset
netsh winsock reset catalog
netsh int ipv4 reset reset.log
netsh int ipv6 reset reset.log
netsh int ip reset all ip.log
netsh int ip reset ipreset.log
netsh int tcp reset tcp.log
For some netsh commands I got "Access is denied. Resetting, OK!" errors, which I fixed by granting access permissions in Regedit to all subkeys (e.g. HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Nsi\{eb004a00-9b1a-11d4-9123-0050047759bc}\26) for Everyone (Full Access), as mentioned in the article and described there in more detail. (Back up your full HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Nsi\ key first by exporting it to a file, to be extra safe!)
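For the backup itself, reg export from an elevated prompt does the job (the output path is just an example):
reg export "HKLM\SYSTEM\ControlSet001\Control\Nsi" C:\nsi-backup.reg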
Once the permissions were set, the netsh commands above worked without complaining about access.
...
I got the "Parameter is incorrect" after updating WSL with "wsl --update"
I followed the ipconfig and netsh commands proposed by clawsl and then restarted the PC. WSL would still give me the same error message.
I then did another "wsl --update" and that solved the problem
|
gharchive/issue
| 2022-03-08T21:53:13 |
2025-04-01T04:35:01.927531
|
{
"authors": [
"7-of-9",
"KatamariJr",
"PlasticCog",
"VaporumCoin",
"ferchor2003",
"hubertsvk",
"lagset",
"tyingq",
"whatcolor1"
],
"repo": "microsoft/WSL",
"url": "https://github.com/microsoft/WSL/issues/8129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
745572078
|
Translated 1-intro-to-programming-languages into Dutch
Quizzes and assignments are not translated yet.
@FloorD do you want to review?
@jlooper could you merge?
sorry, @FloorD and @marcduiker JUST now saw this was ready, merging! thank you!
|
gharchive/pull-request
| 2020-11-18T11:20:11 |
2025-04-01T04:35:01.930051
|
{
"authors": [
"FloorD",
"jlooper",
"marcduiker"
],
"repo": "microsoft/Web-Dev-For-Beginners",
"url": "https://github.com/microsoft/Web-Dev-For-Beginners/pull/61",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
828482772
|
"OpenQA.Selenium.WebDriverException" exception thrown sometimes when trying to locate elements by WinAppDriver
Sometimes my automated application runs fast, and sometimes it takes a very long time to locate elements and eventually throws this exception:
OpenQA.Selenium.WebDriverException
HResult=0x80131500
Message=The HTTP request to the remote WebDriver server for URL http://127.0.0.1:4723/session/C31E5E62-EB73-4392-A659-063F46FE1F11/element timed out after 60 seconds.
Source=Appium.Net
StackTrace:
at OpenQA.Selenium.Appium.Service.AppiumCommandExecutor.Execute(Command commandToExecute)
at OpenQA.Selenium.Remote.RemoteWebDriver.Execute(String driverCommandToExecute, Dictionary`2 parameters)
at OpenQA.Selenium.Appium.AppiumDriver`1.Execute(String driverCommandToExecute, Dictionary`2 parameters)
at OpenQA.Selenium.Remote.RemoteWebDriver.FindElement(String mechanism, String value)
at OpenQA.Selenium.Appium.AppiumDriver`1.FindElement(String by, String value)
at OpenQA.Selenium.Appium.AppiumDriver`1.FindElementByAccessibilityId(String selector)
at ...
This exception was originally thrown at this call stack:
[External Code]
Inner Exception 1:
WebException: The request was aborted: The operation has timed out.
private WindowsDriver<WindowsElement> GetCurrentDriver()
{
    var appiumOptions = new AppiumOptions();
    appiumOptions.AddAdditionalCapability("app", "Root"); // use root to gain access to the entire desktop
    appiumOptions.AddAdditionalCapability("deviceName", "WindowsPC");
    string appSetting = _configurationDriver.Configuration["WindowsApplicationDriverUrl"];
    var remoteAddress = new Uri(appSetting);
    var driver = new WindowsDriver<WindowsElement>(remoteAddress, appiumOptions);
    driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(30);
    return driver;
}
I found the answer here
https://github.com/microsoft/WinAppDriver/issues/196
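Assuming the fix is the same one usually suggested for this timeout — passing a longer command timeout when constructing the driver so slow element lookups aren't cut off at the default 60 seconds — a minimal sketch (the 5-minute value is only an example):
var driver = new WindowsDriver<WindowsElement>(
    remoteAddress,
    appiumOptions,
    TimeSpan.FromMinutes(5)); // per-command HTTP timeout instead of the 60-second default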
|
gharchive/issue
| 2021-03-10T23:41:46 |
2025-04-01T04:35:01.934746
|
{
"authors": [
"zahraazezo"
],
"repo": "microsoft/WinAppDriver",
"url": "https://github.com/microsoft/WinAppDriver/issues/1467",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
493164993
|
FindElementsByTagname doesn't work in this case
I'm trying to enter a date in the Date picker control.
Please observe the following code, which makes sure I am using the correct tag name: I can locate an element by its value (name), but using that element's tag name, I can't find it with the FindElementsByTagName method.
Edit: workaround using FindElementsByClassName("LoopingSelectorItem") works...
var picker = App.Session.FindElementByAccessibilityId("DatePickerFlyoutPresenter");
var list = picker.FindElementByAccessibilityId("MonthLoopingSelector");
Console.WriteLine(list.TagName); // ControlType.List
var monthByName = list.FindElementByName("september");
var monthsByTag = list.FindElementsByTagName(monthByName.TagName); // ControlType.ListItem
Console.WriteLine(monthsByTag.Count); // 0...
The tag name's first letter should always be a capital letter. E.g. if Inspect shows the tag name "list", then it should be given as session.FindElementByTagName("List").
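A minimal sketch of that suggestion applied to the snippet above (whether WinAppDriver expects "ListItem" or the full "ControlType.ListItem" string is worth testing):
var monthsByTag = list.FindElementsByTagName("ListItem"); // capitalized, per the suggestion above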
It is... "list" is a code variable, not a literal tag name.
|
gharchive/issue
| 2019-09-13T06:52:26 |
2025-04-01T04:35:01.936911
|
{
"authors": [
"Exoow",
"anunay1"
],
"repo": "microsoft/WinAppDriver",
"url": "https://github.com/microsoft/WinAppDriver/issues/863",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1960911404
|
Drag and drop in elevated process is still not supported
Describe the bug
COMException: Drag start failed. If this is an elevated process, drag/drop is currently not supported there
Steps to reproduce the bug
Create an elevated process
Drag or drop any element
Expected behavior
No response
Screenshots
No response
NuGet package version
Windows App SDK 1.4.2: 1.4.231008000
Packaging type
Packaged (MSIX)
Windows version
Windows 11 version 22H2 (22621, 2022 Update)
IDE
Visual Studio 2022
Additional context
No response
Closing this as duplicate to track this here: microsoft/microsoft-ui-xaml#7690
|
gharchive/issue
| 2023-10-25T09:06:26 |
2025-04-01T04:35:01.940426
|
{
"authors": [
"codendone",
"zhuxb711"
],
"repo": "microsoft/WindowsAppSDK",
"url": "https://github.com/microsoft/WindowsAppSDK/issues/3921",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
627482008
|
Hide unused scrollbars in RadioButtonsPage
Description
Add VerticalScrollBarVisibility="Hidden" to hide the scrollbars.
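Roughly, the change boils down to something like this on the sample's ScrollViewer (the surrounding markup is assumed):
<ScrollViewer VerticalScrollBarVisibility="Hidden">
    <!-- RadioButtons sample content -->
</ScrollViewer>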
Motivation and Context
Closes #455
How Has This Been Tested?
Tested manually
Screenshots (if appropriate):
Types of changes
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Yes, if users try to use the app at its minimum width, scrolling is necessary:
The question is: should we show the scrollbars or not? It seems a bit strange that we don't show them; on the other hand, if users disable "auto hide scrollbars" they might see them all the time, even though they're not needed in 99% of cases.
To simplify the sample, what if we removed the last color option and also deleted the ScrollViewer? The colors aren't the important part of the sample, so maybe we can take a "less is more" approach?
Great idea! Removed the scrollviewer and one color entry for each so we don't need to scroll anymore.
|
gharchive/pull-request
| 2020-05-29T19:17:09 |
2025-04-01T04:35:01.944717
|
{
"authors": [
"chingucoding",
"stmoy"
],
"repo": "microsoft/Xaml-Controls-Gallery",
"url": "https://github.com/microsoft/Xaml-Controls-Gallery/pull/458",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2269343381
|
[Feature Request]: Allow each message to be sent to a subset of agents in a group chat
Is your feature request related to a problem? Please describe.
It's not always necessary for every agent in a group chat to receive every message. Allowing messages to be sent to a subset of agents can help each agent focus on the necessary info.
Describe the solution you'd like
[ ] Similar to allowed_or_disallowed_speaker_transitions, add an `allowed_or_disallowed_receiver` dict which constrains the receivers in GroupChat.
[ ] Similar to speaker_selection_method, add a receiver_selection_method. Start with one option "all", which sends a message to every agent in the constrained receiver list of a sender.
[ ] Add a receiver_selection_method: "auto". Similar to the speaker selection method, use LLM to decide a subset of receivers in the constrained list for each message.
[ ] Support custom receiver_selection_method, similar to custom speaker_selection_method.
Additional context
No response
@sonichi what do you think about using the nomenclature "tags", where messages are tagged and each agent can set which tags they want to receive messages for. By default everyone gets the "all" tag.
|
gharchive/issue
| 2024-04-29T15:29:00 |
2025-04-01T04:35:01.954581
|
{
"authors": [
"jtoy",
"sonichi"
],
"repo": "microsoft/autogen",
"url": "https://github.com/microsoft/autogen/issues/2544",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2080536757
|
[Core] Update documentation for function call compatibility
Why are these changes needed?
Additional documentation and test changes for function call compatibility of different API versions.
Related issue number
#1227
Checks
[x] I've included any doc changes needed for https://microsoft.github.io/autogen/. See https://microsoft.github.io/autogen/docs/Contribute#documentation to build and test documentation locally.
[ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
[x] I've made sure all auto checks have passed.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (cf5e21c) 31.45% compared to head (a1322bc) 19.03%.
Additional details and impacted files
@@ Coverage Diff @@
## main #1237 +/- ##
===========================================
- Coverage 31.45% 19.03% -12.43%
===========================================
Files 32 32
Lines 4393 4393
Branches 1026 1081 +55
===========================================
- Hits 1382 836 -546
- Misses 2902 3461 +559
+ Partials 109 96 -13
| Flag | Coverage Δ |
| --- | --- |
| unittests | 19.03% <ø> (-12.39%) :arrow_down: |
Flags with carried forward coverage won't be shown. Click here to find out more.
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
@ekzhu could you fix the conflict and resolve the conversations?
There is still a conflict.
|
gharchive/pull-request
| 2024-01-14T01:09:32 |
2025-04-01T04:35:01.963106
|
{
"authors": [
"codecov-commenter",
"ekzhu",
"sonichi"
],
"repo": "microsoft/autogen",
"url": "https://github.com/microsoft/autogen/pull/1237",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1656920635
|
Add an option to reset votes
It would be nice to have an option to reset the currently revealed votes in order to revote a work item after discussing it.
Example:
A selects 1
B selects 5
A and B state why they selected their estimates.
After this step, everyone votes again in order to get to a solution (this is currently not possible).
The only workaround is to switch the WI before saving and returning, but that does not seem like an optimal solution.
Only the scrum master (or session creator) should be able to reset the votes.
Good suggestion
It would be nice to be able to toggle off the auto reset functionality in favor of this manual reset.
|
gharchive/issue
| 2023-04-06T08:30:55 |
2025-04-01T04:35:01.965304
|
{
"authors": [
"AminTi",
"DenLynA",
"michvllni"
],
"repo": "microsoft/azure-boards-estimate",
"url": "https://github.com/microsoft/azure-boards-estimate/issues/252",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|