id | text | source | created | added | metadata
---|---|---|---|---|---|
2762228639
|
Add install step in test workflow
Closes #97
@jwallwork23 perhaps we could use pip3-autoremove or similar to uninstall both the packages and their dependencies to properly test the installation. But this seems like an overkill maybe...
As in before calling make install_dev?
|
gharchive/pull-request
| 2024-12-29T12:21:19 |
2025-04-01T06:39:32.728413
|
{
"authors": [
"ddundo",
"jwallwork23"
],
"repo": "mesh-adaptation/docs",
"url": "https://github.com/mesh-adaptation/docs/pull/98",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1127300832
|
Fixed mobile responsiveness of home page
Signed-off-by: Arpit Mohapatra arpit.mohapatra19@gmail.com
Description
This PR fixes #671
Notes for Reviewers
Fixed Screenshots
Signed commits
[x] Yes, I signed my commits.
I think you left this part @marpit19
oops!! I'll fix that too
@YashKamboj could you mention the dimensions, because on my machine it's responsive
@marpit19 It's not showing for me now either 😅. If you are here, why don't you make all these buttons uniform, with equal width?
I guess someone could review and push this through for now. I have some office work, so if I get any free time I will do it for sure.
|
gharchive/pull-request
| 2022-02-08T14:04:18 |
2025-04-01T06:39:32.760023
|
{
"authors": [
"YashKamboj",
"marpit19"
],
"repo": "meshery/meshery.io",
"url": "https://github.com/meshery/meshery.io/pull/672",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1085979432
|
Update remote_provider.go
Description
Codecov Report
Merging #4804 (f486cf1) into master (7797e28) will decrease coverage by 6.79%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #4804 +/- ##
==========================================
- Coverage 32.01% 25.21% -6.80%
==========================================
Files 61 61
Lines 5148 5148
==========================================
- Hits 1648 1298 -350
- Misses 3074 3532 +458
+ Partials 426 318 -108
Flag | Coverage Δ
---|---
gointegrationtests | ?
unittests | 25.21% <ø> (ø)
Flags with carried forward coverage won't be shown. Click here to find out more.
Impacted Files | Coverage Δ
---|---
mesheryctl/internal/cli/root/system/start.go | 1.15% <0.00%> (-58.47%) :arrow_down:
mesheryctl/internal/cli/root/system/restart.go | 2.50% <0.00%> (-47.50%) :arrow_down:
mesheryctl/internal/cli/root/system/status.go | 0.00% <0.00%> (-47.44%) :arrow_down:
mesheryctl/internal/cli/root/system/update.go | 1.05% <0.00%> (-34.74%) :arrow_down:
mesheryctl/internal/cli/root/system/stop.go | 1.78% <0.00%> (-23.22%) :arrow_down:
mesheryctl/internal/cli/root/system/check.go | 1.38% <0.00%> (-22.23%) :arrow_down:
mesheryctl/internal/cli/root/system/logs.go | 3.29% <0.00%> (-20.88%) :arrow_down:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 90a4476...f486cf1. Read the comment docs.
I'll put this PR on-hold pending @sudo-NithishKarthik's investigations.
Yes, @leecalcote I think Nithish has figured out a way in Meshery-Cloud. Let's try that out first.
Yup let's try that out, I was just fixing the golangci-lint check for now.
|
gharchive/pull-request
| 2021-12-21T15:57:45 |
2025-04-01T06:39:32.776523
|
{
"authors": [
"codecov-commenter",
"leecalcote",
"sayantan1413",
"warunicorn19"
],
"repo": "meshery/meshery",
"url": "https://github.com/meshery/meshery/pull/4804",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
206092296
|
[Question] Mesos version and kafka framework version
Hi everyone,
I want to deploy Kafka on mesos and wonder which version of mesos works well with the kafka framework (v0.9.5.1 or v0.10.0.0-rc0)? Any recommendation for a production deployment? I'm also deploying Spark on mesos to process data. If anyone has a production deployment of spark and kafka on mesos, could you show me which versions you are running? It would be really useful to me.
Thanks in advance
I'd recommend using the latest version of mesos you can, but pretty much any version you can find should work fine. We actually run the scheduler on a 0.22.1 cluster, as well as a bunch of 1.0.2 clusters, but again, I'd highly recommend at least 0.28.0.
Thanks for your recommendation, I will use the latest mesos 1.1.0. By the way, how stable is the kafka v0.10.0.0-rc0 release?
Stable, we've been running it for about a month now in production without issues. However, #271 was just reported and fixed, I'm about to release rc1 to address it, I'd recommend going to that version.
0.10.0.0-rc1 is out now: https://github.com/mesos/kafka/releases
Thank you, your work is amazing.
|
gharchive/issue
| 2017-02-08T04:17:06 |
2025-04-01T06:39:32.842290
|
{
"authors": [
"CBR09",
"steveniemitz"
],
"repo": "mesos/kafka",
"url": "https://github.com/mesos/kafka/issues/273",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
241662369
|
send kill-task and receive malformed request
I tried to implement the kill-task code, but I got the malformed request information in err;
code:
events.Handle(executor.Event_KILL, events.HandlerFunc(func(e *executor.Event) error {
	log.Println(e.GetKill())
	// Attempts to send the kill event back to the agent via the HTTP client.
	resp, err := state.cli.Do(httpcli.With(e.GetKill()))
	if resp != nil {
		defer resp.Close()
	}
	log.Println(err)
	if err != nil && err != io.EOF {
		log.Println(err)
	} else {
		log.Println("disconnected")
	}
	return err
}))
Please give me some advice, thanks for your reply;
What do you expect the above code to do? A custom executor that receives a kill event should kill the task identified by the event. It looks to me like your code above is attempting to reflect the kill event back to mesos, which is incorrect. At first glance, I'm not even sure how the above code compiles.
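For illustration, a KILL handler closer to what the explanation above describes might look like this sketch; state.tasks, newStatus, and update are assumed helpers (in the spirit of mesos-go's example executor), not the library's prescribed API:
```go
events.Handle(executor.Event_KILL, events.HandlerFunc(func(e *executor.Event) error {
	taskID := e.GetKill().TaskID
	// Stop whatever process backs this task (application-specific).
	if proc, ok := state.tasks[taskID.Value]; ok {
		proc.Kill()
	}
	// Report the terminal state to the agent instead of reflecting the kill event back.
	status := newStatus(state, taskID)      // assumed helper building a mesos.TaskStatus
	status.State = mesos.TASK_KILLED.Enum() // terminal state for a killed task
	return update(state, status)            // assumed helper issuing the UPDATE call
}))
```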
See the Mesos docs for more details regarding the v1 executor API: https://github.com/apache/mesos/blob/1da4e45b8077b9046bba1a7ed15be5e344a14a91/docs/executor-http-api.md#kill
please re-open (and provide additional details, logs, etc) if this is still a problem for you.
|
gharchive/issue
| 2017-07-10T10:07:17 |
2025-04-01T06:39:32.845053
|
{
"authors": [
"guanjunding",
"jdef"
],
"repo": "mesos/mesos-go",
"url": "https://github.com/mesos/mesos-go/issues/307",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
179339111
|
Add compile dependency protobuf-3
Issue: cassandra-commons:compileJava build failed for missing protobuf
JIRA: https://dcosjira.atlassian.net/browse/CASSANDRA-26
Can one of the admins verify this patch?
@mesosphere-ci: retest this please
|
gharchive/pull-request
| 2016-09-26T21:15:18 |
2025-04-01T06:39:32.860815
|
{
"authors": [
"cooldoger",
"gabrielhartmann",
"mesosphere-ci"
],
"repo": "mesosphere/dcos-cassandra-service",
"url": "https://github.com/mesosphere/dcos-cassandra-service/pull/264",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
192749971
|
Add marathon-style constraints for cassandra nodes
Removes prior placement_strategy option entirely, as it was effectively a no-op anyway: could never behave as anything other than NODE, due to the matching port across nodes
Note that constraints must be of the form 'a:b,x:y:z', NOT the json array form. This is due to how the cassandra service currently handles configurations. See integration tests for examples.
LGTM!
|
gharchive/pull-request
| 2016-12-01T04:08:16 |
2025-04-01T06:39:32.862177
|
{
"authors": [
"mohitsoni",
"nickbp"
],
"repo": "mesosphere/dcos-cassandra-service",
"url": "https://github.com/mesosphere/dcos-cassandra-service/pull/333",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
245197108
|
[INFINITY-2019] Change "principle" to "service-account" - are you sure?
Do we really want to do this? "principle" is used in many places including docs/tests....
cc: @gabrielhartmann
Yes, product requested this change as it brings our docs/api in line with the rest of the product.
I am not convinced about converting the "principle" keyword to the "service account" keyword.
|
gharchive/pull-request
| 2017-07-24T20:19:58 |
2025-04-01T06:39:32.863872
|
{
"authors": [
"benclarkwood",
"mbdcos"
],
"repo": "mesosphere/dcos-commons",
"url": "https://github.com/mesosphere/dcos-commons/pull/1307",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
103453205
|
Refactor the application create/edit modal data handling
The data handling structures of the application modal for creating and editing the application configuration are in a bad and incomprehensible condition.
We need the following in a central and meaningful place:
[ ] Function to transform the model into the modal form input attributes
[ ] Function to transform the modal form input attributes into the application model structure
[ ] Central and reusable application modal validation that is relational to the form attributes, but validates the model itself
[ ] Collapsible panels that are treated correctly even if they are closed
[ ] Let the code be expressive
https://github.com/mesosphere/marathon-ui/pull/158
https://github.com/mesosphere/marathon/issues/2148
Also see https://github.com/mesosphere/marathon/issues/2038
Please check whether behaviour as described in https://mesosphere.atlassian.net/browse/MARATHON-300 is fixed with refactoring. Otherwise create a new bug and update Jira issue.
@mwasn This is fixed with this refactoring in 0.12.0. The constraints field isn't empty after an error on the cmd field.
Please don't refer to links that are not open to everybody, thank you.
|
gharchive/issue
| 2015-08-27T08:14:17 |
2025-04-01T06:39:32.874574
|
{
"authors": [
"aldipower",
"mwasn",
"pierlo-upitup"
],
"repo": "mesosphere/marathon",
"url": "https://github.com/mesosphere/marathon/issues/2105",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
135273477
|
Specific role required by app is not matched by default resource role: "*"
(using playa-mesos with latest marathon and mesos)
After installing marathon-lb using dcos cli:
dcos package install marathon-lb
I noticed that marathon indicates an insufficient resources error. After some investigation, the reason seems to be that marathon-lb requires the role 'public_slave', but my mesos slave's resources are declared with the default role: *
After looking in the marathon code, I noticed that in that situation the resources are indeed not matched. In the following line, the role set will then contain only roles declared on the app, skipping the defaults (config.defaultAcceptedResourceRolesSet), which include the default * role.
https://github.com/mesosphere/marathon/blob/master/src/main/scala/mesosphere/mesos/TaskBuilder.scala#L30
I'm not sure if this is maybe by design? Perhaps the specific role the app requires just has to be available in mesos (and not in the pool of unassigned resources)?
If this is indeed a bug, I would be happy to provide a PR.
Hi @PiotrTrzpil thank you for your interest! The described behaviour is by design. Marathon only considers resource offers with roles defined in acceptedResourceRoles. "If you do not specify this [list], Marathon considers all resource offers with roles that have been configured by the --default_accepted_resource_roles command line flag"¹ This enables users that might want to "reserve" a share of their resources for the role "production" to ensure that other tasks do not consume all resources.
Example 1: "acceptedResourceRoles": [ "production", "*" ] Tasks of this app definition are launched either on "production" or "*" resources.
Example 2: "acceptedResourceRoles": [ "public" ] Tasks of this app definition are launched only on "public" resources.
If you have any further questions please don't hesitate to ask!
/source https://mesosphere.github.io/marathon/docs/rest-api.html (1,2)
|
gharchive/issue
| 2016-02-21T23:10:00 |
2025-04-01T06:39:32.879914
|
{
"authors": [
"PiotrTrzpil",
"orlandohohmeier"
],
"repo": "mesosphere/marathon",
"url": "https://github.com/mesosphere/marathon/issues/3290",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
777657205
|
mix.bbclass: Fix mix release erts path
Without this change building a mix release does not work on meta-erlang master, since the erts path is wrong. The directory erts- is now appended to it.
Tested with erlang 23.2.1 and elixir 1.11.2
I am not sure if this also works for other elixir/erlang versions and if this is the correct approach. At least this works for me.
Hello Christian,
I will take a look and test during this week.
Thanks.
|
gharchive/pull-request
| 2021-01-03T15:13:53 |
2025-04-01T06:39:32.893480
|
{
"authors": [
"chrta",
"joaohf"
],
"repo": "meta-erlang/meta-erlang",
"url": "https://github.com/meta-erlang/meta-erlang/pull/64",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1678082619
|
Fix flutter gallery
Update to the flutter gallery revision recommended by @ardera in the flutter-pi repository.
With the old SRCREV, flutter gallery does not build for flutter 3.7.x because of some issues with translation files.
This is the only recipe from flutter-test-apps that currently cannot be built with flutter 3.7.x .
I have a script to find the latest working flutter gallery version btw: https://gist.github.com/ardera/4d9807e1cb05b4e26be2133668197768
@Markus43 hi Markus. Thanks for the contribution. Currently I have a backlog of things to do in meta-flutter. This is one.
@ardera I'll take a look at your script when I'm back in the office next week. One approach is to test the SDK version range in the pubspec.yaml. It is a generally annoying problem for maintenance. I think there is a case to support auto-rolling of recipes. The flutter watch repo already monitors stable channel changes. It could be used to stage an auto update. An auto update would still require some form of test pass, which leads to some kind of headless screen grab scenario for evaluating app health. I would love to avoid the busy work and come up with some reliable automation.
@ardera I added the pattern as an option to workspace-automation:
https://github.com/meta-flutter/workspace-automation/pull/2
Usage pattern:
cd app/gallery
$FLUTTER_WORKSPACE/flutter_workspace.py --find-working-commit
|
gharchive/pull-request
| 2023-04-21T08:09:49 |
2025-04-01T06:39:32.897705
|
{
"authors": [
"Markus43",
"ardera",
"jwinarske"
],
"repo": "meta-flutter/meta-flutter",
"url": "https://github.com/meta-flutter/meta-flutter/pull/280",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2652082144
|
:bug: Add CustomDeploy to metal3machinetemplate webhook
This mirrors the code in metal3machine webhook. We are loosening the requirement on Image: when using CustomDeploy, Image is optional.
Ref: #1501
/test metal3-centos-e2e-integration-test-main metal3-ubuntu-e2e-integration-test-main
/cc @lentzi90
|
gharchive/pull-request
| 2024-11-12T12:45:17 |
2025-04-01T06:39:32.994244
|
{
"authors": [
"honza",
"tuminoid"
],
"repo": "metal3-io/cluster-api-provider-metal3",
"url": "https://github.com/metal3-io/cluster-api-provider-metal3/pull/2085",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1385634767
|
🐛 Remediation: fix nil panic when setting unhealthy annotation on host
What this PR does / why we need it:
Fix nil panic when setting unhealthy annotation on host
/ok-to-test
/lgtm
@furkatgofurov7 @kashifest Hi, looking at OWNERS, it seems I need an approval by one of you, can you have a look please? Thanks! 🙂
/test-ubuntu-integration-main
thanks Furkat 🙂
/cherry-pick release-1.2
|
gharchive/pull-request
| 2022-09-26T08:16:45 |
2025-04-01T06:39:32.996538
|
{
"authors": [
"furkatgofurov7",
"mboukhalfa",
"slintes"
],
"repo": "metal3-io/cluster-api-provider-metal3",
"url": "https://github.com/metal3-io/cluster-api-provider-metal3/pull/744",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1453662220
|
🌱 Uplift CAPI to v1.3.0-rc.0 in main branch
What this PR does / why we need it:
xref: https://github.com/kubernetes-sigs/cluster-api/releases/tag/v1.3.0-rc.0
/hold
until https://github.com/metal3-io/metal3-dev-env/pull/1106 is merged
/hold cancel
/test-ubuntu-integration-main
/cc @smoshiur1237 @Rozzii
|
gharchive/pull-request
| 2022-11-17T16:48:07 |
2025-04-01T06:39:32.999836
|
{
"authors": [
"furkatgofurov7"
],
"repo": "metal3-io/ip-address-manager",
"url": "https://github.com/metal3-io/ip-address-manager/pull/170",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1877695488
|
Compatibility issues with 222
Bug Description
./gradlew runPluginVerifier
Plugin com.metalbear.mirrord:3.54.1 against IU-222.4554.10: 3 compatibility problems, some of which may be caused by absence of optional dependency in the target IDE IU-222.4554.10. 12 usages of experimental API. 2 usages of internal API
...
#Invocation of unresolved method com.intellij.javascript.nodejs.execution.NodeTargetRun.getEnvData() : EnvironmentVariablesData
Method com.metalbear.mirrord.products.nodejs.NodeRunConfigurationExtension.createLaunchSession$1.addNodeOptionsTo(com.intellij.javascript.nodejs.execution.NodeTargetRun targetRun) : void contains an *invokevirtual* instruction referencing an unresolved method com.intellij.javascript.nodejs.execution.NodeTargetRun.getEnvData() : com.intellij.execution.configuration.EnvironmentVariablesData. This can lead to **NoSuchMethodError** exception at runtime.
#Abstract method com.intellij.execution.process.ProcessListener.onTextAvailable(ProcessEvent arg0, Key arg1) : void is not implemented
Concrete class com.metalbear.mirrord.products.idea.IdeaRunConfigurationExtension.attachToProcess$1 inherits from com.intellij.execution.process.ProcessListener but doesn't implement the abstract method onTextAvailable(ProcessEvent arg0, Key arg1) : void. This can lead to **AbstractMethodError** exception at runtime.
#Abstract method com.intellij.execution.process.ProcessListener.startNotified(ProcessEvent arg0) : void is not implemented
Concrete class com.metalbear.mirrord.products.idea.IdeaRunConfigurationExtension.attachToProcess$1 inherits from com.intellij.execution.process.ProcessListener but doesn't implement the abstract method startNotified(ProcessEvent arg0) : void. This can lead to **AbstractMethodError** exception at runtime.
I think the plugin verifier still shows this one compatibility problem when run on 222:
Compatibility problems (1):
#Invocation of unresolved method com.intellij.javascript.nodejs.execution.NodeTargetRun.getEnvData() : EnvironmentVariablesData
Method com.metalbear.mirrord.products.nodejs.NodeRunConfigurationExtension.createLaunchSession$1.addNodeOptionsTo(com.intellij.javascript.nodejs.execution.NodeTargetRun targetRun) : void contains an *invokevirtual* instruction referencing an unresolved method com.intellij.javascript.nodejs.execution.NodeTargetRun.getEnvData() : com.intellij.execution.configuration.EnvironmentVariablesData. This can lead to **NoSuchMethodError** exception at runtime.
Should we reopen the issue or is that not a problem for us?
It's not a problem since we wrap it in a try/catch
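For illustration, a minimal Kotlin sketch of that guard (hypothetical names; not the actual plugin code):
```kotlin
// Guard a platform call that may not resolve on older IDE builds such as 222:
val envData = try {
    targetRun.envData // compiles against NodeTargetRun.getEnvData() on newer SDKs
} catch (e: NoSuchMethodError) {
    null // older platform build: skip env handling gracefully
}
```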
|
gharchive/issue
| 2023-09-01T16:13:26 |
2025-04-01T06:39:33.013532
|
{
"authors": [
"aviramha",
"t4lz"
],
"repo": "metalbear-co/mirrord-intellij",
"url": "https://github.com/metalbear-co/mirrord-intellij/issues/129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1571509010
|
Custom Dns Resolving Fix
Updated version of #992
closes #989
bors r+
|
gharchive/pull-request
| 2023-02-05T16:28:14 |
2025-04-01T06:39:33.014719
|
{
"authors": [
"DmitryDodzin"
],
"repo": "metalbear-co/mirrord",
"url": "https://github.com/metalbear-co/mirrord/pull/1011",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
721426021
|
Missing translations to French
Follow-on from https://github.com/metanorma/metanorma-iso/issues/418:
Please use /local_bibdata/i18nyaml/i18n_reference_number for string value for "Reference Number" in ISO cover.
Please use /local_bibdata/i18nyaml/i18n_all_rights_reserved for string value for "All rights reserved" in ISO (back?) cover.
Please use /local_bibdata/i18nyaml/i18n_price_based_on for string value for "Price based on % pages" in ISO back cover. Note convention of % for placeholder. (Ronald would prefer Liquid conventions, but Liquid conventions are useless for XSLT.)
Please use /local_bibdata/i18nyaml/i18n_deprecated for string value for "DEPRECATED"
Please use /local_bibdata/i18nyaml/i18n_admonition for the titles for admonitions:
<i18n_admonition><i18n_danger>Danger</i18n_danger><i18n_warning>Avertissement</i18n_warning><i18n_caution>Attention</i18n_caution><i18n_important>Important</i18n_important><i18n_safety_precautions>Précautions de Sécurité</i18n_safety_precautions></i18n_admonition>
I've created a French language boilerplate. Note that COPYRIGHT PROTECTED DOCUMENT is now a clause title, and not to be supplied by you: I've had to refactor boilerplate/copyright-statement.
For Presentation XML, see https://github.com/metanorma/metanorma-iso/issues/418
Done. PDF example for rice-fr.presentation.xml from #418 (with manually moved i18nyaml tags to localized-strings structure):
rice-fr.presentation.pdf
|
gharchive/issue
| 2020-10-14T12:43:56 |
2025-04-01T06:39:33.034398
|
{
"authors": [
"Intelligent2013",
"opoudjis"
],
"repo": "metanorma/metanorma-iso",
"url": "https://github.com/metanorma/metanorma-iso/issues/428",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1916651565
|
Add to_h method
We store parts of the ID in an index file. We need to get these parts from the pubid with a to_h method and recreate the PubID from the hash:
> hash = pubid.to_h
{
  code: "800-121",
  publisher: "NIST",
  revision: "2",
  serie: "NIST SP"
}
> Pubid::Nist::Identifier.new(**hash)
#<Pubid::Nist::Identifier:0x...
  @code="800-121",
  @publisher=#<Pubid::Nist::Publisher:0x0000000109b619d0 @publisher="NIST">,
  @revision="2",
  @serie=#<Pubid::Nist::Serie:0x0000000109b62150 @parsed=nil, @serie="NIST SP">,
  @supplement=nil,
  @update=nil>
@andrew2net we already have a method doing the same: #get_params, but #to_h will better reflect the purpose of the method, so I'm renaming it.
@mico we need to document all the Pubid methods. You can use https://yardoc.org so we automatically get docs like this https://www.rubydoc.info/gems/relaton-bib/1.16.2/RelatonBib/Affiliation
When using "typed stages" with input like:
{ type: "tr", number: 1, publisher: "ISO", stage: :dtr }
#to_h returns:
{:number=>1, :publisher=>"ISO", :stage=>#<Pubid::Core::Stage:0x000000010a7e31b8 @config=#<Pubid::Core...>{:long=>"Technical Report", :short=>"TR"}}>, @stages=["40.00"]>>, :type=>"tr", :typed_stage=>"DTR"}
which cannot be used to rebuild the identifier again, because of the typed_stage parameter that the constructor does not accept; the "typed stage" should be applied through the stage parameter.
@mico can we create a Pubid::Core::Stage#to_h method? We need to serialize a Pubid as a Hash, store it in a YAML file, and be able to recreate the Pubid from the Hash. If it's impossible to create the stage parameter in the Pubid constructor, let's add a Pubid#from_hash parser method.
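A minimal Ruby sketch of the round trip being requested, assuming the stage flattens to its short code; the attribute readers and from_hash here are assumptions, not the gem's actual API:
```ruby
class Pubid::Core::Identifier
  # Serialize only constructor-compatible values, so the hash round-trips.
  def to_h
    h = { type: type, number: number, publisher: publisher }
    h[:stage] = stage.to_s if stage # e.g. "DTR", re-parseable via :stage
    h
  end

  def self.from_hash(hash)
    new(**hash) # typed stages re-applied through :stage, not :typed_stage
  end
end
```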
|
gharchive/issue
| 2023-09-28T03:48:34 |
2025-04-01T06:39:33.049065
|
{
"authors": [
"andrew2net",
"mico"
],
"repo": "metanorma/pubid-core",
"url": "https://github.com/metanorma/pubid-core/issues/22",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1888566829
|
Support long notation references
This gem parses short notation references like NIST SP 800-67r1 and NIST SP 800-57pt1, but fails to parse the long notations NIST SP 800-67 Rev. 1 and NIST SP 800-57 Part 1
> Pubid::Nist::Identifier.parse "NIST SP 800-67 Rev.1"
Failed to match sequence (OLD_STAGE? (' ' / '.') REPORT_NUMBER PARTS{0, } DRAFT? STAGE? TRANSLATION?) at line 1 char 8.
cause: Failed to match sequence (OLD_STAGE? (' ' / '.') REPORT_NUMBER PARTS{0, } DRAFT? STAGE? TRANSLATION?) at line 1 char 8.
`- Don't know what to do with ".1" at line 1 char 12.
We need to parse the long notations as well.
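A minimal parslet sketch of what accepting both notations could look like (rule names here are assumptions, not the gem's actual grammar):
```ruby
require "parslet"

class RevisionParser < Parslet::Parser
  rule(:digits)   { match('\d').repeat(1) }
  rule(:space?)   { str(" ").maybe }
  # Accept the short "r1" as well as the long "Rev. 1" / "Revision 1" forms:
  rule(:rev_tag)  { str("r") | ((str("Rev.") | str("Revision")) >> space?) }
  rule(:revision) { rev_tag >> digits.as(:revision) }
  root(:revision)
end

RevisionParser.new.parse("Rev. 1") # => {:revision=>"1"@5}
```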
@mico users can utilize long notation references to fetch any document. So yes, we need to parse long notation for other series.
As per the original pubid blog post we stated the 4 different pubid outputs: short, abbreviated, long and machine readable.
We should be able to parse all 4 types.
@ronaldtse we cannot parse long and abbreviated formats right now, examples for NIST FIPS 140-3:
National Institute of Standards and Technology Federal Information Processing Standards Publication 140-3 (long)
Natl. Inst. Stand. Technol. Federal Inf. Process. Stds. 140-3 (abbreviated)
How can we confirm that the long and abbreviated series names in https://github.com/metanorma/pubid-nist/blob/main/series.yaml are correct?
Should we reconsider what the "long" and "abbreviated" formats are, in light of updates from pubs-export?
Examples:
NIST SP 800-40 Version 2
NIST SP 800-27 Revision (r)
NIST SP 800-26 Rev. 1
NIST SP 800-57 Part 2 Rev. 1
Identifier parts using "long" or "abbreviated" format, but publisher and series using short form.
|
gharchive/issue
| 2023-09-09T04:09:43 |
2025-04-01T06:39:33.054868
|
{
"authors": [
"andrew2net",
"mico",
"ronaldtse"
],
"repo": "metanorma/pubid-nist",
"url": "https://github.com/metanorma/pubid-nist/issues/194",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
260277489
|
Initialization order
While working on integrating the current version of JSTP into Impress, I have discovered that impress.applications needs to be initialized before calling impress.jstp.createServer() in impress.js:446.
It's not needed anymore to integrate jstp@1, but I still think that applications should be loaded before starting servers. @tshemsedinov what do you think?
Initialization order was refactored here https://github.com/metarhia/impress/pull/935/commits/f3bdbc4732f668413973818dcbfe120f9196c411 and in a few commits in this PR: https://github.com/metarhia/impress/pull/935
|
gharchive/issue
| 2017-09-25T13:30:57 |
2025-04-01T06:39:33.062017
|
{
"authors": [
"nechaido",
"tshemsedinov"
],
"repo": "metarhia/impress",
"url": "https://github.com/metarhia/impress/issues/780",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
363102328
|
Include certbot to deploy scripts
Closes #735
Landed in 6c58f97
I've landed this but note that Closes tag should be:
Closes: https://github.com/metarhia/impress/issues/735
not just Closes #735
|
gharchive/pull-request
| 2018-09-24T11:14:46 |
2025-04-01T06:39:33.063675
|
{
"authors": [
"o-rumiantsev",
"tshemsedinov"
],
"repo": "metarhia/impress",
"url": "https://github.com/metarhia/impress/pull/870",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
505115182
|
Custom Column Filtering Bug
Describe the bug
When a user creates a custom column, like CONCAT( [State] , [City] ) as DIMENSION_1, the filtering function is enabled for it, but it doesn't appear to work.
However, when you click the 'filtering' button, change the matcher to 'Regular Expression' in filter edit mode, then type something with a pipe (|) and click the apply button,
it shows the dimension values as in the screenshots below, and the filter also works properly.
Desktop (please complete the following information):
OS: [e.g. Windows]
Browser [chrome]
http://52.231.184.135:8180/app/v2/workbook/cabcade7-d87d-4242-839c-c5c9de84966c
@yhk6190 :) I think there are two problems here.
No filtering candidate list for user-defined fields
Regular expression filter not working
I'll check this out.
related with #2719
|
gharchive/issue
| 2019-10-10T08:24:53 |
2025-04-01T06:39:33.091135
|
{
"authors": [
"kyungtaak",
"yhk6190"
],
"repo": "metatron-app/metatron-discovery",
"url": "https://github.com/metatron-app/metatron-discovery/issues/2698",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
462473834
|
#2131 metadata user view
Description
Adds the metadata user view.
LNB: catalog list, catalog search
User view main screen
Metadata list screen when viewing catalog details
Related Issue :
#2131
How Has This Been Tested?
LNB > click the explore data screen
On entering the screen, verify that the user main screen shows the metadata list sorted by popularity, last modified, and favorites
Verify that the catalog list displays correctly in the LNB
Verify that keyword search works when searching catalogs in the LNB (verify the keyword changes color)
Verify that clicking a catalog in the LNB opens that catalog's detail screen
Need additional checks?
Types of changes
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Checklist:
[x] I have read the CONTRIBUTING document.
[x] My code follows the code style of this project.
[x] My change requires a change to the documentation.
[ ] I have added tests to cover my changes.
Additional Context
run build
deploy to 3
|
gharchive/pull-request
| 2019-07-01T01:38:37 |
2025-04-01T06:39:33.097061
|
{
"authors": [
"brandon-wonjune",
"ufoscw"
],
"repo": "metatron-app/metatron-discovery",
"url": "https://github.com/metatron-app/metatron-discovery/pull/2289",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
687725870
|
#3346 fix map chart with multi datasources
Description
fix map chart with multi datasources
Related Issue : #3346
How Has This Been Tested?
Create a map view chart after selecting multi datasources.
Go to the dashboard and make sure it looks normal.
Go to the map view chart edit screen and make sure it looks normal.
Need additional checks?
Types of changes
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Checklist:
[ ] I have read the CONTRIBUTING document.
[ ] My code follows the code style of this project.
[ ] My change requires a change to the documentation.
[ ] I have added tests to cover my changes.
Additional Context
deploy to 1
|
gharchive/pull-request
| 2020-08-28T05:32:53 |
2025-04-01T06:39:33.101778
|
{
"authors": [
"minhyun2"
],
"repo": "metatron-app/metatron-discovery",
"url": "https://github.com/metatron-app/metatron-discovery/pull/3353",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
442583792
|
[MRG] Be compatible with newer scikit-learn
Scikit-learn just released its latest version, 0.21, which does not support python < 3.5, so I think we can update the travis script to download 0.20.3 for python 3.4 and python 2.7, but still use the newest release for python 3.6 (so that we test the behaviour on the new scikit-learn). What do you think?
Also, there were a few changes that needed to be made to work with scikit-learn's new version, which this PR addresses.
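A sketch of the Travis matrix described above (an outline under those assumptions, not the PR's actual diff):
```yaml
matrix:
  include:
    - python: "2.7"
      env: SKLEARN_SPEC="scikit-learn==0.20.3"
    - python: "3.4"
      env: SKLEARN_SPEC="scikit-learn==0.20.3"
    - python: "3.6"
      env: SKLEARN_SPEC="scikit-learn"  # newest release
install:
  - pip install numpy scipy "$SKLEARN_SPEC"
```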
Yes, @perimosocordiae here's the issue: https://github.com/metric-learn/metric-learn/issues/166
Thanks for the review @perimosocordiae, I just addressed your comments
+1 to merge when the travis results come back.
The results are green, merging.
|
gharchive/pull-request
| 2019-05-10T07:46:48 |
2025-04-01T06:39:33.198143
|
{
"authors": [
"perimosocordiae",
"terrytangyuan",
"wdevazelhes"
],
"repo": "metric-learn/metric-learn",
"url": "https://github.com/metric-learn/metric-learn/pull/199",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
610256860
|
AMR: Tetrahedron element is not marked for refinement
We are trying to use AMR for a paper we are working on. We want to use tetrahedral meshes and we want to test the convergence rates of a FEM, the runtime, and the scalability. For these reasons, we start from an initial mesh, serially refine it a few times, create a parallel mesh, and do parallel refinement a few times before switching to AMR. However, if we do any sort of serial refinement we encounter the error below.
Verification failed: (tet->GetRefinementFlag() != 0) is false:
--> TETRAHEDRON element is not marked for refinement.
... in function: void mfem::Mesh::Bisection(int, mfem::HashTable<mfem::Hashed2> &)
... in file: mesh/mesh.cpp
This issue might seem similar to #947, #1028 and #1119. However, it still exists in MFEM 4.1.1. The error can be reproduced by taking the steps below;
Modify ex6p.cpp slightly, move mesh->UniformRefinement(); to the line above the if (mesh->NURBSext) condition.
Compile ex6p and run using any of the inline-tet or beam-tet.
Two interesting observations:
MFEM does not throw analogous errors if ex6p is used with 2-dimensional meshes of any kind (beam-quad, beam-tri, inline-quad, inline-tri and star.mesh are the ones I tried) or 3-dimensional hexahedral meshes (beam-hex, inline-hex).
After the modification, ex6p with inline-wedge does not even throw an error; it segfaults when AMR is attempted.
It looks like I do not have the good habit of reading documentation. Sorry for bothering you. In case anyone else encounters this problem: there are currently two uniform refinement algorithms implemented for tetrahedral meshes. They are called Algorithm A and Algorithm B. The default is Algorithm A; however, if it is used, the mesh has to be finalized before local refinement can be attempted (this is stated in the doxygen pages). If Algorithm B is used, there is no issue and no need for finalization.
So what is the solution? Either the line
mesh->UniformRefinement();
can be replaced with
mesh->UniformRefinement(1); // Use Algorithm B
or the line
mesh->FinalizeTopology(); // According to the documentation this is necessary but
// everything works okay if it is not there for this example.
mesh->Finalize(true); // refine parameter is set to true
can be added after mesh->EnsureNCMesh().
FinalizeTopology is also called at the end of Finalize if may_change_topology is TRUE, so it seems to me that there is some redundancy in some cases. But I guess it is an implementation choice. The important thing seems to be the function MarkForRefinement and/or MarkTetMeshForRefinement, which are called during Finalize if the parameter refine is set to true, which explains the error we got: we never marked any of those elements for refinement to begin with.
Obviously this is not a bug and obviously it is my fault as I did not read the documentation/code properly. However, I also didn’t expect this exceptional case for tetrahedral meshes. In my mind, it wasn’t any different than a hexahedral mesh. So the average user will very likely encounter this problem. That is why I wrote this long explanation.
Thanks a lot for your time and sorry for any inconvenience I caused.
Hi @aasivas, sorry about the confusion.
Let me add a few comments about tet refinement in mfem which may help explain some of the issues you encountered:
There are two ways to refine individual tets: using octa-section (a tet is directly split into 8 small tets), and using bisection (the tet is split into 2 tets three times to get 8 small tets).
Restricted to the faces, octa-section and bisection produce the following triangle refinements, respectively:
    *            *
   / \          /|\
  *---*        * | *
 / \ / \      / \|/ \
*---*---*    *---*---*
Octa-section will typically give rise to better shaped refined tets than bisection will.
For uniform refinement, octa-section is used by algorithm A (the default) and bisection is used by algorithm B.
Local conforming refinement uses bisection and needs tets to be marked for refinement (this marking is lost if one uses algorithm A). Algorithm B is basically this local refinement algorithm applied to all elements.
Local non-conforming refinement uses octa-section.
Tet bisection cannot be used with mixed meshes, e.g meshes with tets and prisms.
|
gharchive/issue
| 2020-04-30T17:27:05 |
2025-04-01T06:39:33.302186
|
{
"authors": [
"aasivas",
"v-dobrev"
],
"repo": "mfem/mfem",
"url": "https://github.com/mfem/mfem/issues/1453",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
731871655
|
ParMixedBilinearForm system matrix and linear system
I see that the class ParBilinearForm includes the two functions
using BilinearForm::FormLinearSystem;
using BilinearForm::FormSystemMatrix;
whereas ParMixedBilinearForm does not.
Seems like those two functions allow the ParBilinearForm class to form the linear system and system matrix using an operator type, rather than just an operator handle, as input. Since ParMixedBilinearForm does not include
using MixedBilinearForm::FormRectangularLinearSystem;
using MixedBilinearForm::FormRectangularSystemMatrix;
the linear system and system matrix can only be formed using an operator handle, rather than an operator type, as input. Is there a reason why this functionality of ParBilinearForm and ParMixedBilinearForm is not consistent? If not, could they be made consistent? I would like to form a rectangular system matrix using an operator type as input, rather than an operator handle.
The versions of FormSystemMatrix/FormLinearSystem that work with a given Operator type are basically a thin convenience wrapper around the OperatorHandle version, as can be seen here: https://github.com/mfem/mfem/blob/de4a24b0ff7e723dfbdd7f6c0920ed372b98995b/fem/bilinearform.hpp#L436-L450
Adding similar wrappers for FormRectangularSystemMatrix/FormRectangularLinearSystem should be fairly straightforward, I think.
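Such a wrapper might look like the following sketch, mirroring the FormSystemMatrix wrapper linked above (a sketch under that assumption, not MFEM's actual code):
```cpp
template <typename OpType>
void FormRectangularSystemMatrix(const Array<int> &trial_tdof_list,
                                 const Array<int> &test_tdof_list,
                                 OpType &A)
{
   OperatorHandle Ah;
   FormRectangularSystemMatrix(trial_tdof_list, test_tdof_list, Ah);
   OpType *A_ptr = Ah.Is<OpType>();
   MFEM_VERIFY(A_ptr, "invalid OpType used");
   A.MakeRef(*A_ptr);
}
```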
Great, thanks, having those wrappers would be quite useful to me.
Hi @alecampos2009, are you comfortable proposing a PR with the above modification?
Yes, supposedly it is just a couple of lines so I can go ahead a propose a PR with the above modification, as well as test it with the case that sparked my interest in these wrappers.
I added the wrappers but run into some difficulties.
I added the lines
using MixedBilinearForm::FormRectangularSystemMatrix;
using MixedBilinearForm::FormRectangularLinearSystem;
in the ParMixedBilinearForm class, so as to be consistent with the wrappers in the ParBilinearForm class. I also had to change the function
void FormRectangularSystemMatrix(const Array<int> &trial_tdof_list, const Array<int> &test_tdof_list, OperatorHandle &A) in the MixedBilinearForm class, so that the appropriate definition is used by ParMixedBilinearForm. I then tested this new functionality in my own code as well as in ex5p.cpp, by including the following lines
HypreParMatrix test;
bVarf->FormRectangularSystemMatrix(Array<int>(), Array<int>(), test);
This unfortunately gave me a segfault. I'm not quite sure I've implemented things correctly, or how one would go about tracking down this segfault.
Seems like I'm unable to push my branch to the repo to share and discuss the changes I've made. The error I get has code 403. Is this because I am a member of mfem/users only and not mfem/collaborators?
Yes, that sounds like the reason: I believe @tzanio or @v-dobrev have to add you to the mfem/collaborators group to have push access to the repo. I suppose an alternative would be to create a fork of the mfem repo and open a PR from a branch in that fork.
Sorry for the delay @alecampos2009, you should be able to push now.
@alecampos2009, were you able to resolve this?
Yes, we were able to figure out the issue (see PR https://github.com/mfem/mfem/pull/1940). Will wrap this up after I get back from my holiday vacation.
Excellent, I'll close this issue then. Feel free to reopen if necessary.
|
gharchive/issue
| 2020-10-28T23:35:00 |
2025-04-01T06:39:33.311891
|
{
"authors": [
"alecampos2009",
"cjvogl",
"tzanio",
"v-dobrev"
],
"repo": "mfem/mfem",
"url": "https://github.com/mfem/mfem/issues/1850",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
178103141
|
Config NPM_DEPENDENCIES vs. SYSTEM_CONFIG_DEV vs. SYSTEM_BUILDER_CONFIG
Minko, I was going to update this wiki page (https://github.com/mgechev/angular2-seed/wiki/Add-PrimeNG) to replace the step that involves editing seed.config.ts, so edits are made to project.config.ts instead.
My first instinct was to simply add the library to NPM_DEPENDENCIES in project.config.ts. However, that made me realize that NPM_DEPENDENCIES might not be the place to do this.
For "plain old" external JavaScript libraries, NPM_DEPENDENCIES seems to make sense. However, for libraries like ngrx and primeng, I'm less certain.
I'm sure that the problem is my limited understanding of exactly how SystemJS plays into the build process (and possibly even the AoT compile process).
Would adding node modules like ngrx or primeng to SYSTEM_CONFIG_DEV.packageConfigPaths and SYSTEM_BUILDER_CONFIG.packages be a better option? Would this allow for tree-shaking or other optimizations that aren't available if an external .js bundle file is just stuck into NPM_DEPENDENCIES?
Hopefully this makes sense. I just want to be sure I understand the impact of the different options before I go edit that wiki page (and maybe even add one covering ngrx).
NPM_DEPENDENCIES is used in rare cases, mostly when you need to inject scripts which expose a global variable or there's a CSS file that you want to include in your project (bootstrap.css for instance).
SYSTEM_CONFIG_DEV is used for configuring SystemJS. In some cases the base configuration provided is enough, however, sometimes you may need to add custom mapping for your dependencies.
In the perfect world SYSTEM_CONFIG_DEV and SYSTEM_BUILDER_CONFIG should be the same, however, sometimes we want to include a minified version of a dependency in our production build or the SystemJS builder requires slightly different configuration.
In short - almost all modules should reside in SYSTEM_CONFIG_DEV and SYSTEM_BUILDER_CONFIG, except ones which expose globals; those should be in NPM_DEPENDENCIES.
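In project.config.ts that might look roughly like the sketch below; the primeng paths are assumptions for illustration, not verified configuration:
```ts
import { SeedConfig } from './seed.config';

export class ProjectConfig extends SeedConfig {
  constructor() {
    super();
    // Global-style dependency (plain CSS / exposes a global): inject it.
    this.NPM_DEPENDENCIES = [
      ...this.NPM_DEPENDENCIES,
      { src: 'bootstrap/dist/css/bootstrap.min.css', inject: true },
    ];
    // Module dependency: resolved by SystemJS in dev, bundled in prod.
    this.SYSTEM_CONFIG_DEV.packageConfigPaths.push('/node_modules/primeng/package.json');
    this.SYSTEM_BUILDER_CONFIG.packages['primeng'] = { defaultExtension: 'js' };
  }
}
```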
@mgechev should add this to wiki best practice.
|
gharchive/issue
| 2016-09-20T16:01:49 |
2025-04-01T06:39:33.355969
|
{
"authors": [
"brian428",
"geminiyellow",
"mgechev"
],
"repo": "mgechev/angular-seed",
"url": "https://github.com/mgechev/angular-seed/issues/1364",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
184266303
|
Add staging support as documented
https://github.com/mgechev/angular-seed#environment-configuration says staging is currently supported. This PR adds that.
I'd say that it's better to drop the line from README because there's no staging.ts file in tools/env.
How do you feel about adding a staging.ts to tools/env? Seems like > 80/20 rule for production apps.
To that end, how do you add an env other than prod or dev, since https://github.com/mgechev/angular-seed/blob/master/tools/config/seed.config.ts#L600 only supports those 2? README.md says it's as easy as adding a <stage>.ts but this code won't allow that. Not my intention to come off rude here, just want to make this already awesome project better.
getEnvironment gets the build type (environment), not the configuration environment. The configuration environment should be selected with --config-env. Probably we should rename this private function to avoid additional confusion.
I'm ok with both options. Maybe it's better to introduce staging.ts in tools/env because this will help in understanding the difference between the --env and --config-env flags.
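A staging.ts mirroring the existing dev.ts/prod.ts shape could be as small as this sketch (the field values are placeholders):
```ts
import { EnvConfig } from './env-config.interface';

const StagingConfig: EnvConfig = {
  ENV: 'STAGING',
  API: 'https://staging.mydomain.org', // placeholder endpoint
};

export = StagingConfig;
```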
Ah gotcha, I'll update this PR with what's discussed here and you can thumbs up or down.
Ok, now looking closer at the code I'm confused. The inline docs say getEnvironment uses the value of --config-env. The code, however, sets the build type by looking for .prod in the gulp task name. My guess is it's just a doc typo, but please confirm.
FWIW I'm doing all this work because I have created a gulp task plugin for angular-seed that gzips assets, adds cache headers (for CloudFront), and invalidates the CloudFront cache. I just want to make sure I understand the angular-seed plumbing here so that I a) get my implementation correct and b) make sure it's flexible for real-world production apps with teams of devs
Which gets to my last Q I need answered before I do the re-factor discussed above.
I want to be able to invoke gulp tasks like publish.prod and publish.staging and publish.ryanteammember. I want my publish task plugin to be able to read a config var out of project.config.ts like so:
constructor(){
...
this.PUBLISH_TASK_CONFIG.s3['prod'] = {
region: 'us-east-1',
params: {
Bucket: 'www.mydomain.org'
}
};
this.PUBLISH_TASK_CONFIG.s3['staging'] = {
region: 'us-east-1',
params: {
Bucket: 'staging.mydomain.org'
}
};
this.PUBLISH_TASK_CONFIG.s3['ryanteammember'] = {
region: 'us-east-1',
params: {
Bucket: 'ryan.mydomain.org'
}
};
}
When I call publish.staging the "build type" will be dev however I want the value of Config.APP_DEST to be dist/prod.
Questions:
How do I force a build type of prod when my task name only contains publish.staging? I could force Config.ENV to be ENVIRONMENTS.PRODUCTION in my task (example below), however it's too late in the bootstrap process
gulp.task('publish.staging', (done: any) => {
  Config.ENV = ENVIRONMENTS.PRODUCTION;
  return runSequence('publish', done);
});
When running a Gulp task, how do I get the value of --config-env? I dump the Config object from tools/config and don't see it defined. I can hardcode grabbing assets from the dist/prod dir; however, I need a way to look up the correct index in Config.PUBLISH_TASK_CONFIG.s3. I had been using Config.ENV to do this, however I now know that won't work per above...
Does this make sense? If not I can put my plugin up on github so you have a full concrete example.
I got a workaround. Details of my plugin published at doapp-ryanp/angular-seed-awspublish-task
Let me know if you think the way I set Config.ENV in the gulpfile.ts::gulp.task('awspublish.staging... will cause unintended consequences. Thanks.
Yes, the correct way is to use Config.ENV. getEnvironment got that complicated because we were trying to keep some backwards compatibility. Probably we can simplify it now. I'll open a PR and mention you there so you can take a look.
@doapp-ryanp here's a PR which aims to introduce better semantics of the configuration properties https://github.com/mgechev/angular-seed/pull/1519. There might be still some inconsistencies in the docs but in general I think this brings significant improvement.
|
gharchive/pull-request
| 2016-10-20T15:43:33 |
2025-04-01T06:39:33.368218
|
{
"authors": [
"doapp-ryanp",
"mgechev"
],
"repo": "mgechev/angular-seed",
"url": "https://github.com/mgechev/angular-seed/pull/1511",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
124484584
|
App_spec tests are not running at all
Currently app tests are always showing as succeeded, because code inside of .then(rootTC => { ... }) never runs.
Home/Service tests work as expected.
How to fix that?
Seems related to https://github.com/mgechev/angular2-seed/issues/333 ?
I'll close it for now, let me know if I'm wrong and the issues are not related to one another.
|
gharchive/issue
| 2015-12-31T16:34:31 |
2025-04-01T06:39:33.370354
|
{
"authors": [
"SIGAN",
"mgechev"
],
"repo": "mgechev/angular2-seed",
"url": "https://github.com/mgechev/angular2-seed/issues/332",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
158338982
|
HomeComponent not rendered
I will try to explain the problem as best I can. I have cloned the repo and started to integrate my logic inside the repository with only one component, i.e. HomeComponent, which is assigned to the default route. When I load my application using the command 'gulp serve.dev' I am able to see the header and footer, which are part of app.component, but the home component is not visible; it seems it's not loaded. There is no error in the console, but the home component's HTML is still not visible. I have also used <router-outlet> inside app.component.html but nothing gets loaded in it
Make sure you're including the non-rendered components in the directives property within the @Component decorator.
@mgechev I am using HomeComponent in the Route and I have declared the route as this
@Routes([
{ path: '/', component: HomeComponent}
])
This is the configuration of the seed itself, and it seems to work.
What is your implementation of HomeComponent?
import {Component} from '@angular/core';
@Component({
moduleId: module.id,
selector: 'sd-home',
templateUrl: 'home.component.html',
styleUrls: ['home.component.css']
})
export class HomeComponent{
constructor(){
//OnLoad();
console.log('Loaded');
}
}
Loaded message is not printed in console.
What's the content of home.component.html?
It's all HTML with Bootstrap CSS applied; would you like me to post that as well?
I was previously using a normal Angular 2 app before moving to the seed project; there the HTML was working.
Most likely the issue is not related to the seed itself, since as you said it was initially working. This is how the AppComponent should look like:
@Component({
moduleId: module.id,
selector: 'sd-app',
viewProviders: [NameListService, HTTP_PROVIDERS],
templateUrl: 'app.component.html',
directives: [ROUTER_DIRECTIVES, NavbarComponent, ToolbarComponent]
})
@Routes([
{
path: '/',
component: HomeComponent
},
{
path: '/about',
component: AboutComponent
}
])
export class AppComponent {}
Most likely you're missing declaration of directives that you're using somewhere. For further reading I'd recommend you the dev guide here.
I have been able to reproduce the issue in the seed. If I remove the Navbar component from the app component's HTML along with the Toolbar component, then the home component is not rendered.
|
gharchive/issue
| 2016-06-03T10:35:39 |
2025-04-01T06:39:33.375839
|
{
"authors": [
"mgechev",
"zeeshanjan82"
],
"repo": "mgechev/angular2-seed",
"url": "https://github.com/mgechev/angular2-seed/issues/958",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1145031483
|
Current MP build failing because of problem with PATO
PATO imports an outdated definition of synovial cells, which causes an illegal equivalency that breaks the current release.
Solution:
[x] Need to make new PATO release (https://github.com/pato-ontology/pato/issues/478)
[ ] We should probably think of moving MP to the newer BASE infrastructure like we have with CL and PATO, but no resources right now.
I think it's fine now
|
gharchive/issue
| 2022-02-20T14:13:41 |
2025-04-01T06:39:33.398356
|
{
"authors": [
"matentzn"
],
"repo": "mgijax/mammalian-phenotype-ontology",
"url": "https://github.com/mgijax/mammalian-phenotype-ontology/issues/3521",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1423422833
|
🛑 Merchant APP (Stroy API) is down
In 37c1f2c, Merchant APP (Stroy API) (https://preprod-apim-gw.wavemoney.io/v2/merchant-api/merchant-tutorial-stories/1) was down:
HTTP code: 500
Response time: 173 ms
Resolved: Merchant APP (Stroy API) is back up in feb1cdb.
|
gharchive/issue
| 2022-10-26T04:32:56 |
2025-04-01T06:39:33.409033
|
{
"authors": [
"mgmgpyaesonewin"
],
"repo": "mgmgpyaesonewin/strapi-upptime",
"url": "https://github.com/mgmgpyaesonewin/strapi-upptime/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2164974824
|
🛑 Traffic Dashboard Mediahuis België is down
In 6304075, Traffic Dashboard Mediahuis België (https://traffic.mediahuis.be/api/system/status) was down:
HTTP code: 503
Response time: 15615 ms
Resolved: Traffic Dashboard Mediahuis België is back up in 153bd7e after 15 minutes.
|
gharchive/issue
| 2024-03-02T20:52:47 |
2025-04-01T06:39:33.439958
|
{
"authors": [
"JoranDox"
],
"repo": "mh-data-science/upptime",
"url": "https://github.com/mh-data-science/upptime/issues/3046",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1557863252
|
🛑 Traffic Dashboard Mediahuis België is down
In a414e43, Traffic Dashboard Mediahuis België (https://traffic.mediahuis.be/api/system/status) was down:
HTTP code: 503
Response time: 417 ms
Resolved: Traffic Dashboard Mediahuis België is back up in 1c2623c.
|
gharchive/issue
| 2023-01-26T09:42:43 |
2025-04-01T06:39:33.442479
|
{
"authors": [
"JoranDox"
],
"repo": "mh-data-science/upptime",
"url": "https://github.com/mh-data-science/upptime/issues/904",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1510822275
|
Update Dinos “nanotyrannus”
Automatically generated by Netlify CMS
👷 Deploy Preview for quizzical-galileo-bf682e processing.
Name
Link
🔨 Latest commit
ed92938004643636b55157dc5e27e5387195d713
🔍 Latest deploy log
https://app.netlify.com/sites/quizzical-galileo-bf682e/deploys/63a974b90731c80007c1253a
|
gharchive/pull-request
| 2022-12-26T10:17:27 |
2025-04-01T06:39:33.445265
|
{
"authors": [
"mhaack"
],
"repo": "mhaack/mias-dino-facts",
"url": "https://github.com/mhaack/mias-dino-facts/pull/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1043742412
|
CTRL+D to duplicate node
This change makes it possible to duplicate nodes using CTRL+D. It would be a nice addition for usability.
Hi @ArthyChaux
I appreciate the time you spent
I have two main points regarding this PR:
The shortcut Ctrl + D is already used for Snapshot Creation.
I can't find any prominent advantage for this in terms of usability, over simple node copy-pasting
Thank you
|
gharchive/pull-request
| 2021-11-03T15:26:08 |
2025-04-01T06:39:33.546367
|
{
"authors": [
"ArthyChaux",
"mhgolkar"
],
"repo": "mhgolkar/Arrow",
"url": "https://github.com/mhgolkar/Arrow/pull/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
173264340
|
Actually set tls.Config.PreferServerCipherSuites
It was set by default on the caddy-internal config object, and even
checked for conflicts, but it was never actually reflected on the
tls.Config.
This will have user-visible changes: a client that prefers, say, AES-CBC
but also supports AES-GCM would have used AES-CBC before this, and will
use AES-GCM after.
This is desirable and important behavior, because if for example the
server wanted to support 3DES, but only if it was strictly necessary,
it would have had no way of doing so with PreferServerCipherSuites
false, as the client preference would have won.
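The 3DES scenario above, as a plain crypto/tls sketch (illustrative only, not Caddy's actual configuration code):
```go
package main

import "crypto/tls"

func serverTLSConfig() *tls.Config {
	return &tls.Config{
		// With this set, the server's ordering below wins over the client's.
		PreferServerCipherSuites: true,
		CipherSuites: []uint16{
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, // preferred
			tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,   // only if strictly necessary
		},
	}
}
```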
Whoops. 😐 🌴
Could you do me one more quick favor and just add it to the test case in config_test.go? (I know that we need to greatly expand these tests but I want to at least start with the things that bit us.)
Nevermind, I want to work on these tests anyway. Merging now, as discussed on Slack. Thanks!!
|
gharchive/pull-request
| 2016-08-25T17:33:36 |
2025-04-01T06:39:33.560624
|
{
"authors": [
"FiloSottile",
"mholt"
],
"repo": "mholt/caddy",
"url": "https://github.com/mholt/caddy/pull/1070",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1957274063
|
Audit fixes and postcss version bump
Summary of the changes
Fixes an audit issue and bumps postcss to 8.4.31, as recommended by dependabot
Related issue
N/A
Does it need updating at the root level too?
Does it need updating at the root level too?
no, it's already 8.4.31 there
|
gharchive/pull-request
| 2023-10-23T14:12:10 |
2025-04-01T06:39:33.586087
|
{
"authors": [
"MI6-255",
"ad9242"
],
"repo": "mi6/ic-ui-kit",
"url": "https://github.com/mi6/ic-ui-kit/pull/1212",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1966590376
|
Artifacts expansion needs a 4th edition explanation in the help
Currently the other expansions are covered.
It's there now.
|
gharchive/issue
| 2023-10-28T13:10:22 |
2025-04-01T06:39:33.590837
|
{
"authors": [
"micahstairs",
"ultimatefiend"
],
"repo": "micahstairs/bga-innovation",
"url": "https://github.com/micahstairs/bga-innovation/issues/1411",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
156770034
|
onPageChange
Hi guys, just wondering if something like this is available in the current API. I know you can use it in the directive like (pageChange)="p = $event", but it could prove quite handy to be able to detect an onChange event with the page number etc.
Hi,
I'm afraid I don't quite understand what you are asking for. Could you provide some example code that you are suggesting?
I'm closing this issue because further data was needed but was not provided. If you want to continue the thread, feel free to re-open.
|
gharchive/issue
| 2016-05-25T14:47:48 |
2025-04-01T06:39:33.602422
|
{
"authors": [
"RicardoVaranda",
"michaelbromley"
],
"repo": "michaelbromley/ng2-pagination",
"url": "https://github.com/michaelbromley/ng2-pagination/issues/34",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1582779085
|
Docs could be improved
Your README file doesn't include the .query method at the end of the connection.get example; the expression doesn't return the proper type without it.
Okay, here is the problem:
should you have to call .query() or .exec() on a command, or should they just always call .query() for you?
On the one side you have one function call less to worry about; on the other side you are more flexible with your return value.
Since I could not decide which to choose, I kind of gave up on that question until there was some interest in this package, so I could get some input from others. That's why I did not update the docs. So what do you think is more viable?
I lean more towards the commands already calling .query() for simplicity’s sake.
Personally I like having the choice between either query or exec, as I might want to call exec for something like subscribe, where I don't really care about the return value, and query elsewhere
|
gharchive/issue
| 2023-02-13T17:36:28 |
2025-04-01T06:39:33.752715
|
{
"authors": [
"W-Lawless",
"michaelvanstraten"
],
"repo": "michaelvanstraten/swifty-redis",
"url": "https://github.com/michaelvanstraten/swifty-redis/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2330988431
|
Fix clockid_t compile error
I was having compile errors when running make:
/home/will/Desktop/micro-ros-project/micro_ros_raspberrypi_pico_sdk/pico_uart_transport.c:11:19: error: unknown type name 'clockid_t'
11 | int clock_gettime(clockid_t unused, struct timespec *tp)
| ^~~~~~~~~
make[2]: *** [CMakeFiles/micro-ros-project.dir/build.make:90: CMakeFiles/micro-ros-project.dir/micro_ros_raspberrypi_pico_sdk/pico_uart_transport.c.obj] Error 1
I found that I needed to add #include <time.h> in pico_uart_transport.c.
I am compiling on a Raspberry Pi 5 running Ubuntu 24.04 LTS with ROS2 Jazzy.
That makes sense, thanks for the patch!
@mergify backport humble iron rolling
|
gharchive/pull-request
| 2024-06-03T12:13:42 |
2025-04-01T06:39:33.805829
|
{
"authors": [
"pablogs9",
"willbsp"
],
"repo": "micro-ROS/micro_ros_raspberrypi_pico_sdk",
"url": "https://github.com/micro-ROS/micro_ros_raspberrypi_pico_sdk/pull/1195",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
489609901
|
In Chapter 1, user-web section, a 404 error occurs
Followed the tutorial and executed the following in order:
micro --registry=consul --api_namespace=mu.micro.book.web api --handler=web
user-srv go run main.go
user-web go run main.go
Consul Services
There are two web instances? Could one of them actually be offline already? Chapter 1 does not add a TTL check, so instances will not deregister on their own. Try the request a few more times; if some succeed and some fail, that could be the reason.
Closing the issue. If any other problem remains unresolved, please reopen this issue or open another one.
|
gharchive/issue
| 2019-09-05T08:39:00 |
2025-04-01T06:39:33.813233
|
{
"authors": [
"ginkgohat",
"printfcoder"
],
"repo": "micro-in-cn/tutorials",
"url": "https://github.com/micro-in-cn/tutorials/issues/114",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
646441855
|
[FEATURE REQUEST] Get all the comment for specific reference id
Is your feature request related to a problem? Please describe.
Users should be able to get all comments for a particular referenceId (i.e. user).
Describe the solution you'd like
A controller to get all comments for a specific reference Id.
Endpoint: /comment/refs/:{refId}
Describe alternatives you've considered
No alternative
Additional context
No
Time Duration to finish
50 mins
Please ensure you use the updated docs at https://comments-microservice.herokuapp.com/
|
gharchive/issue
| 2020-06-26T18:35:46 |
2025-04-01T06:39:33.820973
|
{
"authors": [
"Alao-Abiodun",
"dave-ok"
],
"repo": "microapidev/comment-microapi",
"url": "https://github.com/microapidev/comment-microapi/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2351936170
|
check if classes associated with alternative_identifiers can use ANY alternative_identifiers
Use the examples data file framework
As in, could a Biosample have a neon_study_identifiers annotation?
id: nmdc:bsm-99-dtTMNb
part_of:
- nmdc:sty-00-abc123
env_broad_scale:
has_raw_value: ENVO:00002030
term:
id: ENVO:00002030
env_local_scale:
has_raw_value: ENVO:00002169
term:
id: ENVO:00002169
env_medium:
has_raw_value: ENVO:00005792
term:
id: ENVO:00005792
alternative_identifiers:
- generic:123
linkml-validate \
--schema nmdc_schema/nmdc_materialized_patterns.yaml \
--target-class Biosample src/data/valid/Biosample-minimal-with-alternative-identifiers.yaml
No issues found
id: nmdc:bsm-99-dtTMNb
part_of:
- nmdc:sty-00-abc123
env_broad_scale:
has_raw_value: ENVO:00002030
term:
id: ENVO:00002030
env_local_scale:
has_raw_value: ENVO:00002169
term:
id: ENVO:00002169
env_medium:
has_raw_value: ENVO:00005792
term:
id: ENVO:00005792
gold_study_identifiers:
- gold:Gs123
linkml-validate \
--schema nmdc_schema/nmdc_materialized_patterns.yaml \
--target-class Biosample src/data/valid/Biosample-minimal-with-study-id.yaml
No issues found
@aclum @cmungall ^
Is this a linkml bug? If it is allowed why doesn't it show up in the documentation as an inherited slot?
|
gharchive/issue
| 2024-06-13T19:47:40 |
2025-04-01T06:39:33.824386
|
{
"authors": [
"aclum",
"turbomam"
],
"repo": "microbiomedata/nmdc-schema",
"url": "https://github.com/microbiomedata/nmdc-schema/issues/2066",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1373745449
|
trim contents nmdc_schema
Filter contents of nmdc_schema to include only files that are useful when pushed to the PyPI repo.
Update contents after latest regeneration of artifacts as in v7.0.0
Eventually I think the process needs to be updated so that we don't have this folder altogether. The package should be built and published by poetry on the fly.
This PR was pre-emptive. Closing it; didn't realize it affected a lot of other downstream scripts.
|
gharchive/pull-request
| 2022-09-15T00:40:36 |
2025-04-01T06:39:33.826170
|
{
"authors": [
"sujaypatil96"
],
"repo": "microbiomedata/nmdc-schema",
"url": "https://github.com/microbiomedata/nmdc-schema/pull/463",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1387891038
|
Make a doc generator as OS-independent
Right now it looks like generated docs comes with a Linux line separator.
It would be great if generated docs would use a System.lineSeparator() instead of hard-coded \n.
This is a screenshot of commit message to confirm:
NOTE: The conversion from Windows line separator is done on commit either way.
It is only the problem during development, when these files are regenerated but without changes.
In RC1, file generation has changed to use a handlebar template rather than toString() from internal classes.
I don't have a Windows machine, so I cannot test it, but can you try with the latest snapshot (or even RC1)? As far as a quick IntelliJ search shows, there is no \n used in the current code base.
Thank you @ttddyy , for looking into this.
No, it still generates them with an LF symbol.
In the end I do this though:
task filterMetricsDocsContent(type: Copy) {
dependsOn generateObservabilityDocs
from generatedDocsDir
include '_*.adoc'
into generatedDocsDir
rename { filename -> filename.replace '_', '' }
filter { line -> line.replaceAll('org.springframework.integration', 'o.s.i') }
}
which creates for me new edited files and those are already created with a proper CRLF symbol.
Technically I don't care about the generated files right now, since I don't push them into the repo; rather, they are used as a resource for the target docs (HTML, PDF) generation alongside my existing adoc content.
So, from a high level, we can just ignore this or close it altogether to avoid extra noise in the backlog.
But it would be great if Handlebars could output its result with an OS-dependent line separator instead of the one compiled into the jar with those templates.
@artembilan I have updated the code to replace the line delimiter while loading the template files.
This replaces the separator in the template file with the one used by the running OS, via System.lineSeparator().
So, it should replace LF with CRLF on Windows.
Please try the latest snapshot.
That works.
Thank you, @ttddyy !
Fixed via: https://github.com/micrometer-metrics/micrometer-docs-generator/commit/c9ec96279bcbe3bb599bc436302fe266c4a1fdbd
|
gharchive/issue
| 2022-09-27T14:45:00 |
2025-04-01T06:39:33.850873
|
{
"authors": [
"artembilan",
"ttddyy"
],
"repo": "micrometer-metrics/micrometer-docs-generator",
"url": "https://github.com/micrometer-metrics/micrometer-docs-generator/issues/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1318136074
|
R2DBC can not use the same MySQL container of Flyway
Issue description
Here is my configuration:
test-resources:
containers:
mysql:
image-name: mysql:8
datasources:
default:
db-type: mysql
r2dbc:
datasources:
default:
db-type: mysql
url: r2dbc:tc:mysql:///test?TC_IMAGE_TAG=8
flyway:
datasources:
default:
db-type: mysql
enabled: true
url: jdbc:tc:mysql:8:///test
baseline-on-migrate: true
Testcontainers will start another container after the Flyway migration has finished
Log:
19:44:56.876 [Test worker] INFO o.t.utility.RyukResourceReaper - Ryuk started - will monitor and terminate Testcontainers containers on JVM exit
19:44:56.877 [Test worker] INFO o.testcontainers.DockerClientFactory - Checking the system...
19:44:56.877 [Test worker] INFO o.testcontainers.DockerClientFactory - ✔︎ Docker server version should be at least 1.6.0
19:44:56.879 [Test worker] INFO 🐳 [mysql:8] - Creating container for image: mysql:8
19:44:57.161 [Test worker] INFO 🐳 [mysql:8] - Container mysql:8 is starting: e59d231fb6043f7afe2f93af348ead7b9775f1d67bd172940daba0b3878320f0
19:44:57.531 [Test worker] INFO 🐳 [mysql:8] - Waiting for database connection to become available at jdbc:mysql://wingman:49255/test using query 'SELECT 1'
19:45:08.572 [Test worker] INFO 🐳 [mysql:8] - Container is started (JDBC URL: jdbc:mysql://wingman:49255/test)
19:45:08.572 [Test worker] INFO 🐳 [mysql:8] - Container mysql:8 started in PT11.69292S
19:45:08.596 [Test worker] INFO o.f.c.i.d.base.BaseDatabaseType - Database: jdbc:mysql://wingman:49255/test (MySQL 8.0)
19:45:08.691 [Test worker] INFO o.f.core.internal.command.DbValidate - Successfully validated 1 migration (execution time 00:00.035s)
19:45:08.743 [Test worker] INFO o.f.c.i.s.JdbcTableSchemaHistory - Creating Schema History table `test`.`flyway_schema_history` ...
19:45:08.827 [Test worker] INFO o.f.core.internal.command.DbMigrate - Current version of schema `test`: << Empty Schema >>
19:45:08.833 [Test worker] INFO o.f.core.internal.command.DbMigrate - Migrating schema `test` to version "1.0.0 - create robot"
19:45:08.890 [Test worker] INFO o.f.core.internal.command.DbMigrate - Successfully applied 1 migration to schema `test`, now at version v1.0.0 (execution time 00:00.068s)
19:45:09.533 [testcontainers-r2dbc-0] INFO 🐳 [mysql:8] - Creating container for image: mysql:8
19:45:09.687 [testcontainers-r2dbc-0] INFO 🐳 [mysql:8] - Container mysql:8 is starting: 3439720017e448b1c4f730999b8e8eeba4a7b717f5dda1c7819a57b1b16de078
19:45:09.991 [testcontainers-r2dbc-0] INFO 🐳 [mysql:8] - Waiting for database connection to become available at jdbc:mysql://wingman:49256/test using query 'SELECT 1'
19:45:20.920 [testcontainers-r2dbc-0] INFO 🐳 [mysql:8] - Container is started (JDBC URL: jdbc:mysql://wingman:49256/test)
19:45:20.921 [testcontainers-r2dbc-0] INFO 🐳 [mysql:8] - Container mysql:8 started in PT11.38907S
Thanks!
You need to remove the URLs from your configuration in order for the test resources to kick in. Can you try with:
test-resources:
containers:
mysql:
image-name: mysql:8
datasources:
default:
db-type: mysql
r2dbc:
datasources:
default:
db-type: mysql
flyway:
datasources:
default:
db-type: mysql
enabled: true
baseline-on-migrate: true
?
I removed all the URLs and tried several times, and it still doesn't work.
Output logs:
Caused by: java.lang.IllegalStateException: Unable to create a ConnectionFactory for 'ConnectionFactoryOptions{options={}}'. Available drivers: [ mysql ]
at io.r2dbc.spi.ConnectionFactories.get(ConnectionFactories.java:143)
at io.micronaut.r2dbc.R2dbcConnectionFactoryBean.connectionFactory(R2dbcConnectionFactoryBean.java:68)
at io.micronaut.context.BeanDefinitionDelegate.build(BeanDefinitionDelegate.java:161)
at io.micronaut.context.DefaultBeanContext.resolveByBeanFactory(DefaultBeanContext.java:2333)
... 90 more
After some debugging, I found the defaultUrl in io.micronaut.r2dbc.DefaultBasicR2dbcProperties#newConnectionFactoryOptionsBuilder is null.
My project url is https://github.com/linweiyu21/mn-testcontainers-test
Thank you for your help!
There were several issues in your reproducer:
it wasn't using micronaut-test-resources, so no way it would start containers for you
it was adding too many dependencies, probably no harm but I removed them
it was missing the JDBC pool, so Flyway wouldn't be invoked
the database configuration can live in the main application.yml file, simplifying configuration
See https://github.com/linweiyu21/mn-testcontainers-test/pull/1
Thank you so much!
|
gharchive/issue
| 2022-07-26T11:58:46 |
2025-04-01T06:39:33.925441
|
{
"authors": [
"linweiyu21",
"melix"
],
"repo": "micronaut-projects/micronaut-test-resources",
"url": "https://github.com/micronaut-projects/micronaut-test-resources/issues/65",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
531045492
|
[CHALLENGE SUBMISSION] Week 1 - Challenge 1
Please provide the following information for your submission:
What is your name? (First, Last)
Matthew Leibowitz
Where is your GitHub Repo? (With your challenge solution)
https://github.com/mattleibow/25-days-of-serverless/tree/implementations
What Challenge is this for? (Challenge number between 1 and 25)
1
(Optional) Anything else we should know? e.g., language used, location, blog post?
We checked out your repository. We're thrilled to say your solution is now listed in our Hall of Fame
Thank you for your contributions!
This PR is now closed! 🎉
|
gharchive/pull-request
| 2019-12-02T10:48:41 |
2025-04-01T06:39:33.985953
|
{
"authors": [
"mattleibow",
"simonaco"
],
"repo": "microsoft/25-days-of-serverless",
"url": "https://github.com/microsoft/25-days-of-serverless/pull/70",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1674154556
|
Error: IncrementVersionNumber action failed. Error: Settings file ..AL-Go\settings.json is malformed. Cannot bind argument to parameter 'object' because it is null.. Stacktrace: at
I am stuck at creating a release. I have followed the workshop, the videos and the resolution of similar issues (#352), but I do not seem to find the answer to my issue. Maybe a bug? Am I missing something?
Help appreciated.
Getting this error:
Error: IncrementVersionNumber action failed. Error: Settings file ..AL-Go\settings.json is malformed. Cannot bind argument to parameter 'object' because it is null.. Stacktrace: at , D:\a_actions\microsoft\AL-Go-Actions\preview\IncrementVersionNumber\IncrementVersionNumber.ps1: line 89 at , D:\a_actions\microsoft\AL-Go-Actions\preview\IncrementVersionNumber\IncrementVersionNumber.ps1: line 61 at , D:\a_temp\0d13274b-fec2-469c-9d68-815f7ea71e88.ps1: line 2 at , : line 1
Hi, it seems like your settings.json isn't correctly formatted.
From experience it's most often a , in the last property. If you can't find the issue, it would be helpful to see the settings.json.
It is a one app project.
I leave you the code in the settings.json file:
{ "country": "mx", "appFolders": [], "testFolders": [], "bcptTestFolders": [], "insiderSasTokenSecretName": "InsiderSasToken", "licenseFileUrlSecretName": "LicenseFileUrl", "configPackages": ["STANDARD"], "doNotRunBcptTests": true }
What are you entering as new version number?
Okay, your json is valid. I've found an oddity.
https://github.com/microsoft/AL-Go/blob/b90607c6ca55e8d764a1723ca93774858197da7c/Actions/IncrementVersionNumber/IncrementVersionNumber.ps1#L89
Error Message Template:
Settings file $project\$ALGoSettingsFile is malformed
Your Error Message:
Settings file ..AL-Go\settings.json is malformed
This leads me to the conclusion that you have specified the wrong project.
The error message should be Settings file .\.AL-Go\settings.json is malformed
@freddydk I have tried with +0.1, 1.1, +0.2, 1.2
I have used Preview AL-Go and Current. I have tried using only the IncrementVersionNumber workflow, with the same results.
The last run did indeed prompt
IncrementVersionNumber action failed. Error: Settings file .\.AL-Go\settings.json is malformed. Cannot bind argument to parameter 'object' because it is null.. Stacktrace: at <ScriptBlock>, D:\a\_actions\microsoft\AL-Go-Actions\preview\IncrementVersionNumber\IncrementVersionNumber.ps1: line 89 at <ScriptBlock>, D:\a\_actions\microsoft\AL-Go-Actions\preview\IncrementVersionNumber\IncrementVersionNumber.ps1: line 61 at <ScriptBlock>, D:\a\_temp\27af9152-6953-4ed4-8234-e21e780b42a5.ps1: line 2 at <ScriptBlock>, <No file>: line 1
Any chance you could add me to the repo, so that I can have a look?
My GitHub username is freddydk.
Thanks
The repo you invited me to is empty???
I am sorry, I invited you to the wrong repo. I have now corrected that and sent you the right invite.
You have an empty line at the beginning of the settings file - try to remove that.
The InitializeWorkflow actually should check all settings files for validity and report whether or not they are valid, but apparently there is a bug in that, which causes it to skip the test - will fix that.
This indeed resolved the issue. Thanks very much @freddydk !
|
gharchive/issue
| 2023-04-19T04:31:53 |
2025-04-01T06:39:33.995244
|
{
"authors": [
"DanielMagMat",
"freddydk",
"jonaswre"
],
"repo": "microsoft/AL-Go",
"url": "https://github.com/microsoft/AL-Go/issues/477",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
431478204
|
Error trying to personalize/design a page
Describe the bug
Trying to personalize page "Report Selection - Sales" throws an error "Personalisation could not be enabled due to technical issues. Try again later, or contact your system administrator."
Information from docker:
TimeGenerated : 4/10/2019 3:25:36 PM
EntryType : Error
Message : Server instance: NAV
Category: Extensions
ClientSessionId: 2d2363d3-0ac5-42f3-af03-081d13535427
ClientActivityId: 2f78868b-5ab0-4eb0-bf2d-e10801a00888
ServerSessionUniqueId: 1fc22ca6-6d29-4ae8-94a2-64618a92caee
ServerActivityId: 028b801f-a39e-46af-b4be-ee60827b42b3
EventTime: 04/10/2019 12:25:36
Message (NavDesignerCompilerException): RootException:
NavDesignerCompilerException
A package with publisher 'Microsoft', name 'Test', and a version compatible with '11.0.0.0' could not be loaded.
ExceptionStackTrace:
at Microsoft.Dynamics.Nav.Runtime.Designer.NavDesignerCompiler.CompilePersonalizationObjects(IEnumerable`1 objectSyntaxList, IEnumerable`1 references, CompilationOptions compilationOptions, NavAppPackageMetadataOutputter outputter, IFileSystem fileSystem)
at Microsoft.Dynamics.Nav.Runtime.Designer.NavPersonalizationDesignerExtension.Compile()
at Microsoft.Dynamics.Nav.Runtime.Designer.NavPersonalizationDesignerExtension..ctor(INavPageCustomizationRepository pageCustomizationRepository, INavDesignerCompiler compiler)
at Microsoft.Dynamics.Nav.Runtime.Designer.NavDesignerExtensionFactory.CreatePersonalizationDesignerExtension()
at Microsoft.Dynamics.Nav.Runtime.Designer.NavDesignerManagementTasks.StartDesigner(DesignerLevels designerLevel, ConfigurationDesignerMode configurationMode)
CallerStackTrace:
at Microsoft.Dynamics.Nav.Runtime.Designer.NavDesignerManagementTasks.StartDesigner(DesignerLevels designerLevel, ConfigurationDesignerMode configurationMode)
at Microsoft.Dynamics.Nav.Runtime.Apps.NavAppDiagnosticSession.<>c__DisplayClass1_0`1.<SendTraceTagOnFailure>b__0()
at Microsoft.Dynamics.Nav.Runtime.Apps.NavAppDiagnosticSession.SendTraceTagOnFailure(Action operation, String additionalInfo, Func`2 exceptionMap, String callerName)
at Microsoft.Dynamics.Nav.Runtime.Apps.NavAppDiagnosticSession.SendTraceTagOnFailure[T](Func`1 operation, String additionalInfo, Func`2 exceptionMap, String callerName)
at Microsoft.Dynamics.Nav.Service.NSDesigner.HandleStartDesignerAction(StartDesignerRequest action, NavSession session)
at Microsoft.Dynamics.Nav.Service.NSDesigner.InvokeAction(NavSession session, DesignerRequest request)
at SyncInvokeInvokeDesignerAction(Object , Object[] , Object[] )
at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.RunInTransactionCombinator(ServiceOperation innerOperation, NSServiceBase serviceInstance, MethodBase syncMethod, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.<>c__DisplayClass27_1.<Combine>b__1(NSServiceBase serviceInstance, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.TransientErrorRetryCombinator(ServiceOperation innerOperation, NSServiceBase serviceInstance, MethodBase syncMethod, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.<>c__DisplayClass27_1.<Combine>b__1(NSServiceBase serviceInstance, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.ErrorMappingCombinator(ServiceOperation innerOperation, NSServiceBase serviceInstance, MethodBase syncMethod, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.<>c__DisplayClass27_1.<Combine>b__1(NSServiceBase serviceInstance, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.PushPopCombinator(ServiceOperation innerOperation, NSServiceBase serviceInstance, MethodBase syncMethod, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.<>c__DisplayClass27_1.<Combine>b__1(NSServiceBase serviceInstance, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationTracer.TraceScopeCombinator(Category telemetryCategory, ServiceOperation innerOperation, NSServiceBase serviceInstance, MethodBase syncMethod, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.<>c__DisplayClass27_1.<Combine>b__1(NSServiceBase serviceInstance, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.<>c__DisplayClass9_0.<PerformanceCounterCombinator>b__0()
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.PerformanceCounterCombinator(ServiceOperation innerOperation, NSServiceBase serviceInstance, MethodBase syncMethod, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.<>c__DisplayClass27_1.<Combine>b__1(NSServiceBase serviceInstance, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.InitClientTelemetryIdsCombinator(ServiceOperation innerOperation, NSServiceBase serviceInstance, MethodBase syncMethod, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.<>c__DisplayClass27_1.<Combine>b__1(NSServiceBase serviceInstance, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.TlsClearCombinator(ServiceOperation innerOperation, NSServiceBase serviceInstance, MethodBase syncMethod, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.<>c__DisplayClass27_1.<Combine>b__1(NSServiceBase serviceInstance, Object[] inputs, Object[]& outputs)
at Microsoft.Dynamics.Nav.Service.ServiceOperationInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
at System.ServiceModel.Dispatcher.MessageRpc.Wrapper.Resume(Boolean& alreadyResumedNoLock)
at System.ServiceModel.Dispatcher.ThreadBehavior.ResumeProcessing(IResumeMessageRpc resume)
at Microsoft.Dynamics.Nav.Runtime.NavSynchronizationContext.<>c__DisplayClass1_0.<ClearThreadLocalStorageDelegate>b__0(Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
at System.Threading.ThreadPoolWorkQueue.Dispatch()
ProcessId: 11104
Tag: 00001OO
ThreadId: 148
CounterInformation:
If I try to design (Design->More->Add field "Usage") web client crashes and message "Sorry, your session expired. Please refresh the page to continue." appears.
Information from docker:
TimeGenerated : 4/10/2019 3:32:27 PM
EntryType : Error
Message : Server instance: NAV
Category: Extensions
ClientSessionId: 2d2363d3-0ac5-42f3-af03-081d13535427
ClientActivityId: bd3f1e1d-8050-4ae4-be13-a065839bc83a
ServerSessionUniqueId: 1fc22ca6-6d29-4ae8-94a2-64618a92caee
ServerActivityId: 028b801f-a39e-46af-b4be-ee60827b42b3
EventTime: 04/10/2019 12:32:27
Message: The execution of the code block at HandleAddPageFieldAction had one or more unhandled exceptions. Additional info: NSDesigner
ProcessId: 11104
Tag: 00000GQ
ThreadId: 125
CounterInformation:
Versions:
AL Language:
Name: AL Language
Id: microsoft.al
Description: AL development tools for Dynamics 365 Business Central
Version: 3.0.106655
Publisher: Microsoft
Business Central:
Version: Platform 14.0.31139.0 + Application 31223 (W1 14.1)
Any update on the above issue? I am using Build No. 14.0.29530.0 and I am getting the same issue as well.
Hi @Tonygithub17, which symbols are currently published on your server? The error is about the Test symbols being missing on your server.
Are they published on your server? Did you have them published and at some point have unpublished them?
Hi Qutreson, this is a fresh instance installed using Docker.
What do you get when running Get-NavContainerAppInfo -SymbolsOnly ?
Hi Qutreson,
here is the result.
Any update on this issue?
Same issue here for BC14.1.
More observations:
It works OK when Test app was not removed.
It doesn't work when the Test app was never present (e.g. in BCoP converted DB).
Importing Test.app makes the personalization working.
The EnableSymbolsLoading option in New-NavContainer removes the Test app automatically - I consider that correct; however, non-working personalization and designer are a nasty side effect (@freddydk).
Quite interestingly - it also works when the first personalization is done before removing the Test app. Then even once the Test app is removed the personalization still works.
My guess: It almost looks like the personalization is a hidden app which is initiated with the Test app in place. It is perhaps auto removed after the first successful personalization when the unused dependent apps including Test app are removed from the personalization app (if that is how it is implemented). In such a case I would vote to leave the Test app from the initial dependencies. (Or actually where/how is the personalization information stored?)
I had the similar observation in BCoP in full install. And I would prefer the personalizations/designer to work even if the Test app is not present at all.
I cannot find any nice workarounds for the scenario with EnableSymbolsLoading and uploaded tests.
Any updates here? Still unable to use personalize after moving to 14.1.
Ivan,
as a workaround, it should be enough to temporarily publish Test.app (it can be found in the ModernDev folder on the installation DVD) with the -PackageType SymbolsOnly option, and do a personalization once. After that, the Test app can be unpublished and personalization will continue to work.
Thanks @phenno1,
That seemed to work. I'm not sure why the original issue happened in the first place, but the Test app symbols solved it, at least for now. Also, the weird thing is it doesn't happen for everyone, even if the environments are created with the same parameters.
With the switch to AL, this should not be a problem. Considering that a mitigation for the problem has been found, I will close this issue.
|
gharchive/issue
| 2019-04-10T12:35:03 |
2025-04-01T06:39:34.009924
|
{
"authors": [
"AndriusAndrulevicius",
"Tonygithub17",
"atoader",
"ichladil",
"ivandjordjevic",
"phenno1",
"qutreson"
],
"repo": "microsoft/AL",
"url": "https://github.com/microsoft/AL/issues/4877",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1276785754
|
Getting an error when sending a GET request with content.
trigger OnAction()
var
httpclients: HttpClient;
httpcontents: HttpContent;
httpheader: HttpHeaders;
httprequest: HttpRequestMessage;
httpresponse: HttpResponseMessage;
reponsetxt: Text;
begin
httpcontents.clear;
httpcontents.WriteFrom('{"SiteIDs":[4438],"StartDate":"2022-06-15T12:47:04.030757+01:00","EndDate":"2022-06-15T12:47:04.030757+01:00"}');
httpcontents.GetHeaders(httpHeader);
httpheader.Clear();
httpheader.Add('Content-Type', 'application/json');
httpheader.Add('GUID', string1);
httpheader.Add('DevID', string2);
httpheader.Add('CompanyId', string3);
httprequest.GetHeaders(httpheader);
httprequest.Content := httpcontents;
httprequest.Method('GET');
httprequest.SetRequestUri(Url);
httpclients.Clear();
httpclients.Send(httprequest, httpresponse);
if httpresponse.Content().ReadAs(reponsetxt) then
Message('%1\%2', reponsetxt, httpresponse.HttpStatusCode);
end;
Error:
System.Net.ProtocolViolationException: Cannot send a content-body with this verb-type.
at System.Net.HttpWebRequest.CheckProtocol(Boolean onRequestStream)
at System.Net.HttpWebRequest.BeginGetRequestStream(AsyncCallback callback, Object state)
at System.Net.Http.HttpClientHandler.StartGettingRequestStream(RequestState state)
at System.Net.Http.HttpClientHandler.PrepareAndStartContentUpload(RequestState state)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Dynamics.Nav.Runtime.NavHttpClient.<>c__DisplayClass48_0.b__0(CancellationToken ct)
at Microsoft.Dynamics.Nav.Types.NavCancellationToken.RunActionWithCancellationToken[T](Func`2 func)
at Microsoft.Dynamics.Nav.Runtime.NavHttpClient.SendWithTelemetry(HttpRequestMessage requestMessage, String endpoint)
at Microsoft.Dynamics.Nav.Runtime.NavHttpClient.<>c__DisplayClass47_0.b__1()
at Microsoft.Dynamics.Nav.Types.NavThread.<>c__DisplayClass44_0.b__0()
at Microsoft.Dynamics.Nav.Types.NavThread.RunExternalAction[T](Func`1 action)
at Microsoft.Dynamics.Nav.Runtime.TrappableHttpOperationExecutor.<>c__DisplayClass0_0.b__0()
at Microsoft.Dynamics.Nav.Runtime.TrappableOperationExecutor.Execute(DataError errorLevel, Func`1 operation, Action`2 processNativeException)
Seems a duplicate of https://github.com/microsoft/AL/issues/5961
Sending a GET request with a body is a controversial topic. As @dnpb correctly linked in https://github.com/microsoft/AL/issues/5961, you can see we are using just a wrapper and we can't do much about the failure.
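For reference, a minimal workaround sketch in AL, assuming the endpoint also accepts POST (this assumption is ours, not confirmed by the service; variable names follow the snippet above):
httpcontents.WriteFrom('{"SiteIDs":[4438],"StartDate":"2022-06-15T12:47:04.030757+01:00","EndDate":"2022-06-15T12:47:04.030757+01:00"}');
httprequest.Content := httpcontents;
httprequest.Method('POST'); // GET with a content body triggers ProtocolViolationException in the underlying .NET stack
httprequest.SetRequestUri(Url);
httpclients.Send(httprequest, httpresponse);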
|
gharchive/issue
| 2022-06-20T11:30:19 |
2025-04-01T06:39:34.017077
|
{
"authors": [
"RavinderKumarGitHub",
"dnpb",
"nndobrev"
],
"repo": "microsoft/AL",
"url": "https://github.com/microsoft/AL/issues/7085",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
825509510
|
[Event Change] Database::"Job Journal Line" - Function CheckItemAvailable - OnBeforeCheckItemAvailable
We like to request the following Event:
Object: Database::"Job Journal Line"
Function: CheckItemAvailable
Publisher Event: OnBeforeCheckItemAvailable
Reason: add an extra Parameter
Where is the event placed:
Event Code:
[IntegrationEvent(false, false)]
local procedure OnBeforeCheckItemAvailable(var JobJournalLine: Record "Job Journal Line"; var ItemJournalLine: Record "Item Journal Line"; var CheckedAvailability: Boolean)
begin
end;
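For context, a minimal sketch of how an extension could subscribe once the extra parameter ships (codeunit number and name are illustrative; the signature follows the request above):
codeunit 50100 "Job Jnl. Availability Sub"
{
    [EventSubscriber(ObjectType::Table, Database::"Job Journal Line", 'OnBeforeCheckItemAvailable', '', false, false)]
    local procedure HandleOnBeforeCheckItemAvailable(var JobJournalLine: Record "Job Journal Line"; var ItemJournalLine: Record "Item Journal Line"; var CheckedAvailability: Boolean)
    begin
        // custom availability handling; set CheckedAvailability when the check was already performed
    end;
}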
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. We will update this issue with information about availability. Please do not reply to this, as we do not monitor closed issues. If you have follow-up questions or requests, please create a new issue where you reference this one.
|
gharchive/issue
| 2021-03-09T08:20:50 |
2025-04-01T06:39:34.019904
|
{
"authors": [
"JesperSchulz",
"acadonAG-dev"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/11538",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
997958124
|
[EventRequest] Table 5093 "Opportunity Entry".UpdateOppFromOpp
Please add new event
procedure UpdateOppFromOpp(var Opp: Record Opportunity)
begin
Opp.TestField(Closed, false);
DeleteAll();
Init;
Validate("Opportunity No.", Opp."No.");
"Sales Cycle Code" := Opp."Sales Cycle Code";
"Contact No." := Opp."Contact No.";
"Contact Company No." := Opp."Contact Company No.";
"Salesperson Code" := Opp."Salesperson Code";
"Campaign No." := Opp."Campaign No.";
//---------------------------------------OnUpdateOppFromOppOnBeforeStartWizard2:BEGIN
OnUpdateOppFromOppOnBeforeStartWizard2(Opp, Rec);
//---------------------------------------OnUpdateOppFromOppOnBeforeStartWizard2:END
StartWizard2;
end;
//---------------------------------------OnUpdateOppFromOppOnBeforeStartWizard2:BEGIN
[IntegrationEvent(false, false)]
local procedure OnUpdateOppFromOppOnBeforeStartWizard2(Opp: Record Opportunity; var OpportunityEntry: Record "Opportunity Entry")
begin
end;
//---------------------------------------OnUpdateOppFromOppOnBeforeStartWizard2:END
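A minimal subscriber sketch for the requested event (codeunit number and name are illustrative):
codeunit 50101 "Opp. Entry Wizard Sub"
{
    [EventSubscriber(ObjectType::Table, Database::"Opportunity Entry", 'OnUpdateOppFromOppOnBeforeStartWizard2', '', false, false)]
    local procedure HandleOnBeforeStartWizard2(Opp: Record Opportunity; var OpportunityEntry: Record "Opportunity Entry")
    begin
        // adjust the opportunity entry before the wizard is started
    end;
}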
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. Please do not reply to this, as we do not monitor closed issues. If you have follow-up questions or requests, please create a new issue where you reference this one.
|
gharchive/issue
| 2021-09-16T08:56:35 |
2025-04-01T06:39:34.021769
|
{
"authors": [
"JesperSchulz",
"fridrichovsky"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/14346",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1237303617
|
[Event Request] codeunit 99000845 "Reservation Management" - OnBeforeMakeRoomForReservation
Please add new event.
procedure MakeRoomForReservation(var ReservEntry: Record "Reservation Entry")
var
ReservEntry2: Record "Reservation Entry";
TotalQuantity: Decimal;
IsHandled: Boolean;
begin
IsHandled := false;
OnBeforeMakeRoomForReservation(ReservEntry, IsHandled);
if IsHandled then
exit;
[IntegrationEvent(false, false)]
local procedure OnBeforeMakeRoomForReservation(var ReservEntry: Record "Reservation Entry"; var IsHandled: boolean)
begin
end;
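A minimal subscriber sketch, assuming the event is published as requested (codeunit number and name are illustrative):
codeunit 50102 "Reservation Mgt. Sub"
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Reservation Management", 'OnBeforeMakeRoomForReservation', '', false, false)]
    local procedure HandleOnBeforeMakeRoomForReservation(var ReservEntry: Record "Reservation Entry"; var IsHandled: Boolean)
    begin
        // set IsHandled := true to replace the standard behavior entirely
    end;
}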
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. Please do not reply to this, as we do not monitor closed issues. If you have follow-up questions or requests, please create a new issue where you reference this one.
Build ID: 40744.
|
gharchive/issue
| 2022-05-16T15:07:43 |
2025-04-01T06:39:34.023856
|
{
"authors": [
"JesperSchulz",
"miljance"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/18032",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1525305561
|
[Event Request] codeunit 80 "Sales-Post" - InvoiceRounding
In https://github.com/microsoft/ALAppExtensions/issues/21662, we requested a new function on SalesLine: GetVATBaseDiscountPercentage.
Can you please integrate this new function for all references to SalesHeader."VAT Base Discount %" in InvoiceRounding?
This would allow us to overrule the SalesHeader."VAT Base Discount %" value with zero in case a certain sales line is to be excluded from any discounts.
As-is
Validate(
"Unit Price",
Round(
InvoiceRoundingAmount /
(1 + (1 - SalesHeader."VAT Base Discount %" / 100) * "VAT %" / 100),
Currency."Amount Rounding Precision"));
To-be
Validate(
"Unit Price",
Round(
InvoiceRoundingAmount /
(1 + (1 - SalesLine.GetVATBaseDiscountPercentage(SalesHeader) / 100) * "VAT %" / 100),
Currency."Amount Rounding Precision"));
@ZepposBE
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. Please do not reply to this, as we do not monitor closed issues. If you have follow-up questions or requests, please create a new issue where you reference this one.
Build ID: 52774.
|
gharchive/issue
| 2023-01-09T10:26:09 |
2025-04-01T06:39:34.026877
|
{
"authors": [
"JesperSchulz",
"fvet"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/21664",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1813302992
|
[Event Request] report 5850 "Copy Invt. Document" OnLookupDocNoOnBeforeValidateDocNo
🕮 Describe the request
Could you please add the following event for extensibility of the Enum "Invt. Doc. Document Type From"?
Please Include the sender in the event request.
local procedure LookupDocNo()
begin
case DocType of
DocType::Receipt,
DocType::Shipment:
begin
FromInvtDocHeader.FilterGroup := 2;
FromInvtDocHeader.SetRange("Document Type", ConvertInvtDocumentTypeFrom(DocType));
if InvtDocHeader."Document Type" = DocType then
FromInvtDocHeader.SetFilter("No.", '<>%1', InvtDocHeader."No.");
FromInvtDocHeader.FilterGroup := 0;
FromInvtDocHeader."Document Type" := ConvertInvtDocumentTypeFrom(DocType);
FromInvtDocHeader."No." := DocNo;
case DocType of
DocType::Receipt:
if PAGE.RunModal(PAGE::"Invt. Receipts", FromInvtDocHeader, FromInvtDocHeader."No.") = ACTION::LookupOK then
DocNo := FromInvtDocHeader."No.";
DocType::Shipment:
if PAGE.RunModal(PAGE::"Invt. Shipments", FromInvtDocHeader, FromInvtDocHeader."No.") = ACTION::LookupOK then
DocNo := FromInvtDocHeader."No.";
end;
end;
DocType::"Posted Receipt":
begin
FromInvtRcptHeader."No." := DocNo;
if PAGE.RunModal(0, FromInvtRcptHeader) = ACTION::LookupOK then
DocNo := FromInvtRcptHeader."No.";
end;
DocType::"Posted Shipment":
begin
FromInvtShptHeader."No." := DocNo;
if PAGE.RunModal(0, FromInvtShptHeader) = ACTION::LookupOK then
DocNo := FromInvtShptHeader."No.";
end;
end;
//+EVENT
OnLookupDocNoOnBeforeValidateDocNo(InvtDocHeader, DocType, DocNo);
//+EVENT
ValidateDocNo();
end;
[IntegrationEvent(true, false)] //Please Include the sender
local procedure OnLookupDocNoOnBeforeValidateDocNo(var InvtDocumentHeader: Record "Invt. Document Header"; InvtDocDocumentTypeFrom: Enum "Invt. Doc. Document Type From"; var FromDocNo: Code[20])
begin
end;
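A minimal subscriber sketch for the requested event; since the publisher includes the sender, the report instance arrives as the first parameter (codeunit number and name are illustrative):
codeunit 50103 "Copy Invt. Doc. Sub"
{
    [EventSubscriber(ObjectType::Report, Report::"Copy Invt. Document", 'OnLookupDocNoOnBeforeValidateDocNo', '', false, false)]
    local procedure HandleOnLookupDocNo(var Sender: Report "Copy Invt. Document"; var InvtDocumentHeader: Record "Invt. Document Header"; InvtDocDocumentTypeFrom: Enum "Invt. Doc. Document Type From"; var FromDocNo: Code[20])
    begin
        // handle the lookup for custom enum values added to "Invt. Doc. Document Type From"
    end;
}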
Availability update: We will publish a fix for this issue in the next update for release 22.
Build ID to track: 58993.
|
gharchive/issue
| 2023-07-20T06:57:21 |
2025-04-01T06:39:34.029797
|
{
"authors": [
"JesperSchulz",
"pri-kise"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/24168",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1874848110
|
[Event Request] Codeunit 5940 ServContractManagement
The following events are required for this object:
[IntegrationEvent(false, false)]
local procedure OnCreateServiceLedgerEntryOnBeforeServLedgEntryInsert(var ServiceLedgerEntry: Record "Service Ledger Entry"; ServiceContractHeader: Record "Service Contract Header"; ServiceContractLine: Record "Service Contract Line"; ServHeader2: Record "Service Header")
begin
end;
[IntegrationEvent(false, false)]
local procedure OnInsertMultipleServLedgEntriesOnBeforeServLedgEntryInsert(var ServiceLedgerEntry: Record "Service Ledger Entry"; ServiceContractHeader: Record "Service Contract Header"; ServiceContractLine: Record "Service Contract Line"; var NonDistrAmount: array[4] of Decimal; var IsHandled: Boolean; ServHeader: Record "Service Header")
begin
end;
[IntegrationEvent(false, false)]
local procedure OnPostPartialServLedgEntryOnBeforeServLedgEntryInsert(var ServLedgEntry: Record "Service Ledger Entry"; ServContractLine: Record "Service Contract Line"; ServHeader: Record "Service Header")
begin
end;
[IntegrationEvent(false, false)]
local procedure OnBeforeCalcCurrencyAmountRoundingPrecision(ServiceContractHeader: Record "Service Contract Header"; var ServiceLedgerEntry: Record "Service Ledger Entry"; var InvRoundedAmount: array[4] of Decimal; var InvAmount: array[4] of Decimal; AmountType: Option ,Amount,DiscAmount,UnitPrice,UnitCost; Currency: Record Currency; ServHeader2: Record "Service Header"; var IsHandled: Boolean)
begin
end;
[IntegrationEvent(false, false)]
local procedure OnAfterSetDiscountAmount(ServiceContractHeader: Record "Service Contract Header"; var ServiceLedgerEntry: Record "Service Ledger Entry"; var InvRoundedAmount: array[4] of Decimal; var InvAmount: array[4] of Decimal; AmountType: Option ,Amount,DiscAmount,UnitPrice,UnitCost; Currency: Record Currency; ServHeader2: Record "Service Header")
begin
end;
[IntegrationEvent(false, false)]
local procedure OnCreateCreditLineOnBeforeCreateDim(var ServLine2: Record "Service Line"; var IsHandled: Boolean)
begin
end;
[IntegrationEvent(false, false)]
local procedure OnCreateAllServLinesOnAfterCreateServiceLine(var ServContractLine: Record "Service Contract Line"; ServHeader: Record "Service Header"; InvoiceFrom: Date; InvoiceTo: Date)
begin
end;
[IntegrationEvent(false, false)]
local procedure OnBeforeOnFindServContractLine2(var ServContractLine2: Record "Service Contract Line")
begin
end;
[IntegrationEvent(false, false)]
local procedure OnAfterSetHideValidationDialog(var ServiceContractHeader: Record "Service Contract Header"; Cust: Record Customer; var CustCheckCrLimit: Codeunit "Cust-Check Cr. Limit"; var IsHandled: Boolean)
begin
end;
[IntegrationEvent(true, false)]
local procedure OnServLedgEntryToServiceLineOnBeforeDimSet(var ServLine: Record "Service Line"; ServiceLedgerEntry: Record "Service Ledger Entry"; ServHeader: Record "Service Header"; var IsHandled: Boolean)
begin
end;
local procedure TransferQuoteToOrderLines(var ServiceQuoteLine: Record "Service Line"; var ServiceQuoteHeader: Record "Service Header"; var ServiceOrderLine: Record "Service Line"; var ServiceOrderHeader: Record "Service Header")
var
ItemCheckAvail: Codeunit "Item-Check Avail.";
IsHandled: Boolean;
begin
ServiceQuoteLine.Reset();
ServiceQuoteLine.SetRange("Document Type", ServiceQuoteHeader."Document Type");
ServiceQuoteLine.SetRange("Document No.", ServiceQuoteHeader."No.");
ServiceQuoteLine.SetRange(Type, ServiceQuoteLine.Type::Item);
ServiceQuoteLine.SetFilter("No.", '<>%1', '');
if ServiceQuoteLine.FindSet() then
repeat
//EVENT CHANGE REQUEST
IsHandled := false;
OnBeforeTransferQuoteLineToOrderLineLoop(ServiceQuoteLine, ServiceQuoteHeader, ServiceOrderLine, ServiceOrderHeader, IsHandled);
if not IsHandled then begin
//EVENT CHANGE REQUEST
ServiceOrderLine := ServiceQuoteLine;
ServiceOrderLine.Validate("Reserved Qty. (Base)", 0);
ServiceOrderLine."Line No." := 0;
if GuiAllowed then
if ItemCheckAvail.ServiceInvLineCheck(ServiceOrderLine) then
ItemCheckAvail.RaiseUpdateInterruptedError;
end;
until ServiceQuoteLine.Next() = 0;
end;
if AddingNewLines then
DueDate := InvFrom;
for Index := 1 to CountOfEntryLoop do begin
SetServLedgEntryAmounts(
ServLedgEntry, InvRoundedAmount,
NonDistrAmount[AmountType::Amount] / (NoOfPayments + 1 - Index),
NonDistrAmount[AmountType::UnitPrice] / (NoOfPayments + 1 - Index),
NonDistrAmount[AmountType::UnitCost] / (NoOfPayments + 1 - Index),
NonDistrAmount[AmountType::DiscAmount] / (NoOfPayments + 1 - Index),
AmountRoundingPrecision);
ServLedgEntry."Cost Amount" := ServLedgEntry."Charged Qty." * ServLedgEntry."Unit Cost";
NonDistrAmount[AmountType::Amount] -= ServLedgEntry."Amount (LCY)";
NonDistrAmount[AmountType::UnitPrice] -= ServLedgEntry."Unit Price";
NonDistrAmount[AmountType::UnitCost] -= ServLedgEntry."Unit Cost";
NonDistrAmount[AmountType::DiscAmount] -= ServLedgEntry."Contract Disc. Amount";
ServLedgEntry."Entry No." := NextEntry;
UpdateServLedgEntryAmount(ServLedgEntry, ServHeader);
ServLedgEntry."Posting Date" := DueDate;
ServLedgEntry.Prepaid := true;
//EVENT CHANGE REQUEST
IsHandled := false;
OnInsertMultipleServLedgEntriesOnBeforeServLedgEntryInsert(ServLedgEntry, ServContractHeader, ServContractLine, NonDistrAmount, IsHandled, ServHeader);
if IsHandled then
exit;
//EVENT CHANGE REQUEST
ServLedgEntry.Insert();
NextEntry += 1;
DueDate := CalcDate('<1M>', DueDate);
local procedure PostPartialServLedgEntry(var InvAmountRounded: array[4] of Decimal; ServContractLine: Record "Service Contract Line"; ServHeader: Record "Service Header"; InvFrom: Date; InvTo: Date; DueDate: Date; AmtRoundingPrecision: Decimal) YearContractCorrection: Boolean
begin
OnBeforePostPartialServLedgEntry(ServLedgEntry, ServContractLine);
ServLedgEntry."Service Item No. (Serviced)" := ServContractLine."Service Item No.";
ServLedgEntry."Item No. (Serviced)" := ServContractLine."Item No.";
ServLedgEntry."Serial No. (Serviced)" := ServContractLine."Serial No.";
if IsYearContract(ServContractLine."Contract Type", ServContractLine."Contract No.") then begin
YearContractCorrection := true;
CalcServLedgEntryAmounts(ServContractLine, InvAmountRounded);
ServLedgEntry."Entry No." := NextEntry;
UpdateServLedgEntryAmount(ServLedgEntry, ServHeader);
end else begin
YearContractCorrection := false;
SetServLedgEntryAmounts(
ServLedgEntry, InvAmountRounded,
-CalcContractLineAmount(ServContractLine."Line Amount", InvFrom, InvTo),
-CalcContractLineAmount(ServContractLine."Line Value", InvFrom, InvTo),
-CalcContractLineAmount(ServContractLine."Line Cost", InvFrom, InvTo),
-CalcContractLineAmount(ServContractLine."Line Discount Amount", InvFrom, InvTo),
AmtRoundingPrecision);
ServLedgEntry."Entry No." := NextEntry;
UpdateServLedgEntryAmount(ServLedgEntry, ServHeader);
end;
ServLedgEntry."Posting Date" := DueDate;
ServLedgEntry.Prepaid := true;
OnPostPartialServLedgEntryOnBeforeServLedgEntryInsert(ServLedgEntry, ServContractLine, ServHeader); //EVENT CHANGE REQUEST
ServLedgEntry.Insert();
NextEntry := NextEntry + 1;
exit(YearContractCorrection);
end;
if ServLedgEntry.Get(LastEntry) and (not YearContractCorrection)
then begin
//EVENT REQUEST
IsHandled := false;
OnBeforeCalcCurrencyAmountRoundingPrecision(ServContractHeader, ServLedgEntry, InvRoundedAmount, InvAmount, AmountType, Currency, ServHeader2, IsHandled);
if not IsHandled then
//EVENT REQUEST
ServLedgEntry."Amount (LCY)" := ServLedgEntry."Amount (LCY)" + InvRoundedAmount[AmountType::Amount] -
Round(InvAmount[AmountType::Amount], Currency."Amount Rounding Precision");
ServLedgEntry."Unit Price" := ServLedgEntry."Unit Price" + InvRoundedAmount[AmountType::UnitPrice] -
Round(InvAmount[AmountType::UnitPrice], Currency."Unit-Amount Rounding Precision");
ServLedgEntry."Cost Amount" := ServLedgEntry."Cost Amount" + InvRoundedAmount[AmountType::UnitCost] -
Round(InvAmount[AmountType::UnitCost], Currency."Amount Rounding Precision");
SetServiceLedgerEntryUnitCost(ServLedgEntry);
ServLedgEntry."Contract Disc. Amount" :=
ServLedgEntry."Contract Disc. Amount" - InvRoundedAmount[AmountType::DiscAmount] +
Round(InvAmount[AmountType::DiscAmount], Currency."Amount Rounding Precision");
ServLedgEntry."Discount Amount" := ServLedgEntry."Contract Disc. Amount";
OnAfterSetDiscountAmount(ServContractHeader, ServLedgEntry, InvRoundedAmount, InvAmount, AmountType, Currency, ServHeader2); //EVENT REQUEST
CalcServLedgEntryDiscountPct(ServLedgEntry);
UpdateServLedgEntryAmount(ServLedgEntry, ServHeader2);
ServLedgEntry.Modify();
end;
end;
if ServHeader2."Currency Code" <> '' then begin
ServLine2.Validate("Unit Price", AmountToFCY(CreditUnitPrice, ServHeader2));
ServLine2.Validate("Line Amount", AmountToFCY(CreditAmount, ServHeader2));
end else begin
ServLine2.Validate("Unit Price", CreditUnitPrice);
ServLine2.Validate("Line Amount", CreditAmount);
end;
ServLine2.Description := LineDescription;
ServLine2."Contract No." := ServContract."Contract No.";
ServLine2."Service Item No." := ServItemNo;
ServLine2."Appl.-to Service Entry" := ServLedgEntryNo;
ServLine2."Unit Cost (LCY)" := CreditCost;
ServLine2."Posting Date" := PeriodStarts;
if ApplyDiscAmt then
ServLine2.Validate("Line Discount Amount", DiscAmount);
//EVENT REQUEST
IsHandled := false;
OnCreateCreditLineOnBeforeCreateDim(ServLine2, IsHandled);
if not IsHandled then
ServLine2.CreateDimFromDefaultDim(0);
//EVENT REQUEST
OnBeforeServLineInsert(ServLine2, ServHeader2, ServContract);
ServLine2.Insert();
ShouldCraeteServiceApplyEntry := ServiceApplyEntry <> 0;
OnCreateAllServLinesOnAfterCalcShouldCraeteServiceApplyEntry(ServHeader, ServContractToInvoice, ServContractLine, PartInvoiceFrom, PartInvoiceTo, ServiceApplyEntry, ShouldCraeteServiceApplyEntry);
if ShouldCraeteServiceApplyEntry then
CreateServiceLine(
ServHeader, "Contract Type", "Contract No.",
CountLineInvFrom(false, ServContractLine, InvoiceFrom), InvoiceTo, ServiceApplyEntry, false);
OnCreateAllServLinesOnAfterCreateServiceLine(ServContractLine, ServHeader, InvoiceFrom, InvoiceTo); //EVENT REQUEST
until ServContractLine.Next() = 0;
repeat
if not TempServItem.Get(ServContractLine."Service Item No.") then begin
ServItem.Get(ServContractLine."Service Item No.");
TempServItem := ServItem;
TempServItem.Insert();
end;
ServContractLine2.Reset();
ServContractLine2.SetCurrentKey("Service Item No.", "Contract Status");
ServContractLine2.SetRange("Service Item No.", ServContractLine."Service Item No.");
ServContractLine2.SetFilter("Contract Status", '<>%1', ServContractLine."Contract Status"::Cancelled);
ServContractLine2.SetRange("Contract Type", ServContractLine."Contract Type"::Contract);
ServContractLine2.SetFilter("Contract No.", '<>%1', ServContractLine."Contract No.");
OnBeforeOnFindServContractLine2(ServContractLine2); //EVENT REQUEST
if ServContractLine2.Find('-') then
repeat
GetAffectedItemsOnContractChange(
ServContractLine2."Contract No.", TempServContract, TempServItem,
true, ServContractLine."Contract Type"::Contract);
until ServContractLine2.Next() = 0;
if "Customer No." <> NewCustomertNo then begin
if ServMgtSetup."Register Contract Changes" then
ContractChangeLog.LogContractChange(
"Contract No.", 0, FieldCaption("Customer No."), 0, "Customer No.", NewCustomertNo, '', 0);
"Customer No." := NewCustomertNo;
CustCheckCrLimit.OnNewCheckRemoveCustomerNotifications(RecordId, true);
Cust.Get(NewCustomertNo);
SetHideValidationDialog(true);
//EVENT REQUEST
IsHandled := false;
OnAfterSetHideValidationDialog(ServContractHeader, Cust, CustCheckCrLimit, IsHandled);
if not IsHandled then begin
//EVENT REQUEST
if Cust."Bill-to Customer No." <> '' then
Validate("Bill-to Customer No.", Cust."Bill-to Customer No.")
else
Validate("Bill-to Customer No.", Cust."No.");
"Responsibility Center" := UserMgt.GetRespCenter(2, Cust."Responsibility Center");
UpdateShiptoCode;
CalcFields(
Name, "Name 2", Address, "Address 2",
"Post Code", City, County, "Country/Region Code");
CustCheckCrLimit.ServiceContractHeaderCheck(ServContractHeader);
end;
end;
TotalServLine."Unit Price" += "Unit Price";
TotalServLine."Line Amount" += -ServiceLedgerEntry."Amount (LCY)";
if (ServiceLedgerEntry."Amount (LCY)" <> 0) or (ServiceLedgerEntry."Discount %" > 0) then
if ServHeader."Currency Code" <> '' then begin
Validate("Unit Price",
AmountToFCY(TotalServLine."Unit Price", ServHeader) - TotalServLineLCY."Unit Price");
Validate("Line Amount",
AmountToFCY(TotalServLine."Line Amount", ServHeader) - TotalServLineLCY."Line Amount");
end else begin
Validate("Unit Price");
Validate("Line Amount", -ServiceLedgerEntry."Amount (LCY)");
end;
TotalServLineLCY."Unit Price" += "Unit Price";
TotalServLineLCY."Line Amount" += "Line Amount";
//EVENT REQUEST
IsHandled := false;
OnServLedgEntryToServiceLineOnBeforeDimSet(ServLine, ServiceLedgerEntry, ServHeader, IsHandled);
if IsHandled then
exit;
//EVENT REQUEST
"Shortcut Dimension 1 Code" := ServiceLedgerEntry."Global Dimension 1 Code";
"Shortcut Dimension 2 Code" := ServiceLedgerEntry."Global Dimension 2 Code";
"Dimension Set ID" := ServiceLedgerEntry."Dimension Set ID";
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. Please do not reply to this, as we do not monitor closed issues. If you have follow-up questions or requests, please create a new issue where you reference this one.
Build ID: 12404.
|
gharchive/issue
| 2023-08-31T06:10:51 |
2025-04-01T06:39:34.039667
|
{
"authors": [
"JesperSchulz",
"SimonasDargis"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/24695",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2340574395
|
Add parameter on event OnBeforeInsertGlobalGLEntry into codeunit 12 "Gen. Jnl.-Post Line"
Describe the request
The code should be like this:
procedure FinishPosting(GenJournalLine: Record "Gen. Journal Line") IsTransactionConsistent: Boolean
var
CostAccountingSetup: Record "Cost Accounting Setup";
TransferGlEntriesToCA: Codeunit "Transfer GL Entries to CA";
IsTransactionConsistentExternal: Boolean;
begin
OnBeforeFinishPosting(GenJournalLine, TempGLEntryBuf);
IsTransactionConsistent :=
(BalanceCheckAmount = 0) and (BalanceCheckAmount2 = 0) and
(BalanceCheckAddCurrAmount = 0) and (BalanceCheckAddCurrAmount2 = 0);
IsTransactionConsistentExternal := IsTransactionConsistent;
GlobalGLEntry.Consistent(IsTransactionConsistent);
OnAfterSettingIsTransactionConsistent(GenJournalLine, IsTransactionConsistentExternal);
IsTransactionConsistent := IsTransactionConsistent and IsTransactionConsistentExternal;
if TempGLEntryBuf.FindSet() then begin
repeat
TempGLEntryPreview := TempGLEntryBuf;
TempGLEntryPreview.Insert();
GlobalGLEntry := TempGLEntryBuf;
if AddCurrencyCode = '' then begin
GlobalGLEntry."Additional-Currency Amount" := 0;
GlobalGLEntry."Add.-Currency Debit Amount" := 0;
GlobalGLEntry."Add.-Currency Credit Amount" := 0;
end;
if (GlobalGLEntry."Posting Date" < FiscalYearStartDate) or GenJournalLine."Closing Balance Sheet" then
GlobalGLEntry."Official Date" := NormalDate(WorkDate())
else
GlobalGLEntry."Official Date" := NormalDate(GlobalGLEntry."Posting Date");
GlobalGLEntry.Positive := GlobalGLEntry.Amount > 0;
GlobalGLEntry."Prior-Year Entry" := GlobalGLEntry."Posting Date" < FiscalYearStartDate;
OnBeforeInsertGlobalGLEntry(GlobalGLEntry, GenJournalLine, GLReg, FiscalYearStartDate);
GlobalGLEntry.Insert(true);
SourceCodeSetup.Get();
InsertGLBookEntry(GlobalGLEntry);
OnAfterInsertGlobalGLEntry(GlobalGLEntry, TempGLEntryBuf, NextEntryNo, GenJournalLine);
GlobalGLEntry.CopyLinks(GenJournalLine);
until TempGLEntryBuf.Next() = 0;
GLReg."To VAT Entry No." := NextVATEntryNo - 1;
GLReg."To Entry No." := GlobalGLEntry."Entry No.";
UpdateGLReg(IsTransactionConsistent, GenJournalLine);
SetGLAccountNoInVATEntries();
end;
GlobalGLEntry.Consistent(IsTransactionConsistent);
if CostAccountingSetup.Get() then
if CostAccountingSetup."Auto Transfer from G/L" then
TransferGlEntriesToCA.GetGLEntries();
OnFinishPostingOnBeforeResetFirstEntryNo(GlobalGLEntry, NextEntryNo, FirstEntryNo);
FirstEntryNo := 0;
if IsTransactionConsistent then
UpdateAppliedCVLedgerEntries();
OnAfterFinishPosting(GlobalGLEntry, GLReg, IsTransactionConsistent, GenJournalLine);
end;
[IntegrationEvent(false, false)]
local procedure OnBeforeInsertGlobalGLEntry(var GlobalGLEntry: Record "G/L Entry"; GenJournalLine: Record "Gen. Journal Line"; GLRegister: Record "G/L Register"; FiscalYearStartDate: Date)
begin
end;
Additional context
This parameter is useful for anyone who wants to use it in the OnBeforeInsertGlobalGLEntry event.
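As a sketch of why the extra parameter is useful, a subscriber in an extension could consume it like this (the subscriber body below is a hypothetical example, not part of the request):
[EventSubscriber(ObjectType::Codeunit, Codeunit::"Gen. Jnl.-Post Line", 'OnBeforeInsertGlobalGLEntry', '', false, false)]
local procedure HandleOnBeforeInsertGlobalGLEntry(var GlobalGLEntry: Record "G/L Entry"; GenJournalLine: Record "Gen. Journal Line"; GLRegister: Record "G/L Register"; FiscalYearStartDate: Date)
begin
    // Hypothetical custom logic that needs the fiscal year start date,
    // e.g. marking entries posted into the prior fiscal year.
    if GlobalGLEntry."Posting Date" < FiscalYearStartDate then
        GlobalGLEntry.Description := CopyStr('PRIOR-YEAR ' + GlobalGLEntry.Description, 1, MaxStrLen(GlobalGLEntry.Description));
end;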
Internal work item: AB#538369
Availability update: We will publish a fix for this issue in the next update for release 24.
Build ID to track: 21537.
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. Please do not reply to this, as we do not monitor closed issues. If you have follow-up questions or requests, please create a new issue where you reference this one.
Build ID: 21545.
|
gharchive/issue
| 2024-06-07T14:14:34 |
2025-04-01T06:39:34.045242
|
{
"authors": [
"JesperSchulz",
"NBTPaolinoMattia"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/26625",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2359565030
|
page 305 "Reservation Worksheet" - SetCurrentWkshBatchName
Describe the request
We would like to open the page 305 "Reservation Worksheet" for a specific Wksh. Batch Name. Therefore, a function should be added which allows setting the value of the global CurrentWkshBatchName variable.
Suggested function:
procedure SetCurrentWkshBatchName(NewCurrentWkshBatchName: Code[10])
begin
CurrentWkshBatchName := NewCurrentWkshBatchName;
ReservationWorksheetMgt.SetName(CurrentWkshBatchName, Rec);
end;
Additional context
The aim is to be able to launch page 305 "Reservation Worksheet" from AL code and preset the 'Batch Name' (and record filters).
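For illustration, with the suggested function in place, the page could then be launched from AL roughly like this (the helper below is hypothetical):
local procedure OpenReservationWorksheetForBatch(NewBatchName: Code[10])
var
    ReservationWorksheetPage: Page "Reservation Worksheet";
begin
    // Preset the batch name before running the page
    ReservationWorksheetPage.SetCurrentWkshBatchName(NewBatchName);
    ReservationWorksheetPage.Run();
end;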
Internal work item: AB#538786
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. Please do not reply to this, as we do not monitor closed issues. If you have follow-up questions or requests, please create a new issue where you reference this one.
Build ID: 27840.
Availability update: We will publish a fix for this issue in the next update for release 25.
Build ID to track: 27926.
|
gharchive/issue
| 2024-06-18T10:54:25 |
2025-04-01T06:39:34.049085
|
{
"authors": [
"JesperSchulz",
"fvet"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/26690",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
498170057
|
[Event Request] codeunit 12173
Hi AL team,
Please add a new publisher event in Codeunit 12173, in the OnRun trigger, like this:
All parameters should be passed by var.
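The inline example ("like this:") appears to have been an attachment that was not preserved in this archive. As a rough sketch of the requested shape (the event name, record type, and parameter list are assumptions; only the by-var requirement comes from the request):
[IntegrationEvent(false, false)]
local procedure OnBeforeOnRun(var GenJnlLine: Record "Gen. Journal Line"; var IsHandled: Boolean)
begin
end;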
This is necessary to manage a particular withholding system in Italy, which has a percentage and amount of 0.
If possible, change the function visibility to External on the function "PostTax".
Thanks
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. We will update this issue with information about availability.
|
gharchive/issue
| 2019-09-24T09:04:30 |
2025-04-01T06:39:34.051540
|
{
"authors": [
"bc-ghost",
"giann85"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/4336",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
561015902
|
Missing RunPageView on pages while filtering on Ledger tables
We have found that pages 99000778, 99000773, 99000846 are missing RunPageView while filtering and running ledger entry pages, leading to slow performance (see actions for detail). Please address it in all supported versions of Dynamics NAV. Our proposed changes are in the attachment, with "//K3 Changes" as reference.
action("Ledger E&ntries")
{
ApplicationArea = Manufacturing;
Caption = 'Ledger E&ntries';
Image = CustomerLedger;
Promoted = true;
PromotedCategory = Category6;
RunObject = Page "Capacity Ledger Entries";
//K3 Changes -
RunPageView = SORTING("Order Type", "Order No.", "Order Line No.", "Routing No.", "Routing Reference No.", "Operation No.", "Last Output Line");
//K3 Changes +
RunPageLink = "Order Type" = CONST(Production),
"Order No." = FIELD("Order No.");
ShortCutKey = 'Ctrl+F7';
ToolTip = 'View the history of transactions that have been posted for the selected record.';
}
Thanks, we will make this change for the new version. Doing the same for older versions is possible using a hotfix request, which you can create here:
• On premises - https://support.microsoft.com/en-us/supportforbusiness/productselection?sapId=93d37907-ad94-d591-22e9-593cfa09dd3f
• On line – contact your partner, or if you are a partner open a support incident through Partner Center - https://partner.microsoft.com/en-us/pcv/dashboard/overview
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. We will update this issue with information about availability.
|
gharchive/issue
| 2020-02-06T13:33:56 |
2025-04-01T06:39:34.056604
|
{
"authors": [
"AlexanderYakunin",
"Rama-M",
"bc-ghost"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/5954",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
713425831
|
[Event Request] Database::"Assembly Line" - Function CopyFromResource - OnBeforeCopyFromResource
We like to request the following Event:
Object: Database::"Assembly Line"
Function: CopyFromResource
Publisher Event: OnBeforeCopyFromResource
Reason: To implement custom business logic
Where is the event placed:
local procedure CopyFromResource()
begin
OnBeforeCopyFromResource(Rec);
GetItemResource;
Resource.TestField("Gen. Prod. Posting Group");
"Gen. Prod. Posting Group" := Resource."Gen. Prod. Posting Group";
"Inventory Posting Group" := '';
Description := Resource.Name;
"Description 2" := Resource."Name 2";
Event Code:
[IntegrationEvent(false, false)]
local procedure OnBeforeCopyFromResource(var AssemblyLine: Record "Assembly Line")
begin
end;
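For context, a subscriber hooking the requested event could look roughly like this (the body is a hypothetical placeholder for the custom business logic mentioned above):
[EventSubscriber(ObjectType::Table, Database::"Assembly Line", 'OnBeforeCopyFromResource', '', false, false)]
local procedure HandleOnBeforeCopyFromResource(var AssemblyLine: Record "Assembly Line")
begin
    // Hypothetical: adjust defaults before the standard resource fields are copied
end;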
Thanks for reporting this. We agree, and we’ll publish a fix asap, either in an update for the current version or in the next major release. We will update this issue with information about availability.
|
gharchive/issue
| 2020-10-02T07:52:42 |
2025-04-01T06:39:34.060526
|
{
"authors": [
"acadonAG-dev",
"bc-ghost"
],
"repo": "microsoft/ALAppExtensions",
"url": "https://github.com/microsoft/ALAppExtensions/issues/9031",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
681610113
|
Pagination of list items
What platform is your feature request related to? (Delete other platforms).
.NET HTML
Android
iOS (Swift/Objective C)
JavaScript
Is your feature request related to a problem? Please describe.
When there are many list items, we want to show the list with pagination.
Describe the solution you'd like
I suggest text with link and icon image.
Describe alternatives you've considered
A text link could be an alternative.
Additional context
This is not on our roadmap, but we are considering a carousel feature that may help with your scenario. Please track #6438 for updates.
|
gharchive/issue
| 2020-08-19T07:21:54 |
2025-04-01T06:39:34.064004
|
{
"authors": [
"RebeccaAnne",
"animia"
],
"repo": "microsoft/AdaptiveCards",
"url": "https://github.com/microsoft/AdaptiveCards/issues/4615",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1906960614
|
[BUG] Large amount of CustomEvent: {"isTrusted": false} logged in AppInsights
Description/Screenshot
I am working on a React WebApp which uses both LaunchDarkly and AppInsights. Both seem to have a concept of Custom Events.
We started having large numbers of exceptions logged in our AppInsights regarding "untrusted Custom Events", and it always seems to happen next to LaunchDarkly event calls. I have yet to find anything regarding untrusted custom events in either the LaunchDarkly or AppInsights documentation.
The errorSrc being window.onerror hints at the error coming from an external script, but I still wanted to check if this behavior is known from AppInsights.
Steps to Reproduce
OS/Browser: Windows 10 / Chrome 116.0 and 117.0, Edge 117.0, Firefox 117.0
SDK Version [e.g. 22]: Using @microsoft/applicationinsights-web: 3.0.2
How you initialized the SDK: n/a
Expected behavior
Have a way to trust custom events, if that's the case?
Additional context
Hi @MSNev, thank you for the quick reply.
I highly doubt that this exception comes from the AppInsights SDK. I mostly wanted to confirm that the CustomEvent exception was not an Application Insights Custom Event and was indeed something else, since the concept of Custom Events can be found at multiple places.
Thanks again.
|
gharchive/issue
| 2023-09-21T13:15:29 |
2025-04-01T06:39:34.074331
|
{
"authors": [
"emileturcotte"
],
"repo": "microsoft/ApplicationInsights-JS",
"url": "https://github.com/microsoft/ApplicationInsights-JS/issues/2161",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
452101021
|
NPM Package @microsoft/applicationinsights-react-native is not compiled
I am setting up Application Insights on a mobile app to do some frontend related logging.
When creating the React Native bundle, I am encountering the following error:
error: bundling failed: Error: While trying to resolve module @microsoft/applicationinsights-react-native from file C:\Projects\MobileApp\universalapp\src\services\AIService.js, the package C:\Projects\MobileApp\universalapp\node_modules\@microsoft\applicationinsights-react-native\package.json was successfully found. However, this package itself specifies a main module field that could not be resolved (C:\Projects\MobileApp\universalapp\node_modules\@microsoft\applicationinsights-react-native\dist-esm\index.js. Indeed, none of these files exist:
C:\Projects\MobileApp\universalapp\node_modules\@microsoft\applicationinsights-react-native\dist-esm\index.js(.native||.android.js|.native.js|.js|.android.json|.native.json|.json|.android.ts|.native.ts|.ts|.android.tsx|.native.tsx|.tsx)
C:\Projects\MobileApp\universalapp\node_modules\@microsoft\applicationinsights-react-native\dist-esm\index.js\index(.native||.android.js|.native.js|.js|.android.json|.native.json|.json|.android.ts|.native.ts|.ts|.android.tsx|.native.tsx|.tsx)
at ResolutionRequest.resolveDependency (C:\Projects\MobileApp\universalapp\node_modules\metro\src\node-haste\DependencyGraph\ResolutionRequest.js:65:15)
at DependencyGraph.resolveDependency (C:\Projects\MobileApp\universalapp\node_modules\metro\src\node-haste\DependencyGraph.js:283:16)
at Object.resolve (C:\Projects\MobileApp\universalapp\node_modules\metro\src\lib\transformHelpers.js:261:42)
at dependencies.map.result (C:\Projects\MobileApp\universalapp\node_modules\metro\src\DeltaBundler\traverseDependencies.js:399:31)
at Array.map ()
at resolveDependencies (C:\Projects\MobileApp\universalapp\node_modules\metro\src\DeltaBundler\traverseDependencies.js:396:18)
at C:\Projects\MobileApp\universalapp\node_modules\metro\src\DeltaBundler\traverseDependencies.js:269:33
at Generator.next ()
at asyncGeneratorStep (C:\Projects\MobileApp\universalapp\node_modules\metro\src\DeltaBundler\traverseDependencies.js:87:24)
at _next (C:\Projects\MobileApp\universalapp\node_modules\metro\src\DeltaBundler\traverseDependencies.js:107:9)
DELTA [android, dev] ....../index.js ▓▓░░░░░░░░░░░░░░ 14.7% (447/1167), failed.
My dependencies in package.json are
"dependencies": {
"@microsoft/applicationinsights-react-native": "^1.0.0",
"@microsoft/applicationinsights-web": "^2.0.0",
"react-native-device-info": "^2.0.2"
}
When I check the folders in C:\Projects\MobileApp\universalapp\node_modules\@microsoft, I can see that all of the folders, i.e. applicationinsights-web, contain dist and dist-esm folders with compiled code within them (also the other regular folders such as src and node_modules).
However, the applicationinsights-react-native folder only has src and node_modules folders.
To resolve this, I have to navigate to the C:\Projects\MobileApp\universalapp\node_modules\@microsoft\applicationinsights-react-native directory and run the tsc command to have the TypeScript compile.
With this, I can at least get the dist-esm folder which contains the missing files the bundler was missing. With this my app compiles and runs without any further issue.
However, I am maintaining and distributing this app though the Appcenter, which will run into the same bundler issue since I can not perform additional file operations on there.
I'd like to request that the NPM package be updated with the properly compiled TypeScript.
Thanks for any assistance!
It's been republished as 1.0.1 with the latest RN versions. You can also roll back to 1.0.0-rc1.
|
gharchive/issue
| 2019-06-04T16:46:40 |
2025-04-01T06:39:34.083318
|
{
"authors": [
"Vstoy001",
"markwolff"
],
"repo": "microsoft/ApplicationInsights-JS",
"url": "https://github.com/microsoft/ApplicationInsights-JS/issues/915",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
603518707
|
BodyTrackingException, Missing Library
Hi all
I am a newbie regarding Kinect. I created a Visual Studio project and was able to use my Kinect device. Then I added the body tracking stuff. At runtime I get the following exception:
K4AdotNet.BodyTracking.BodyTrackingException
HResult=0x80131500
Message=Version 1.0.0 of Body Tracking runtime is expected.
Source=K4AdotNet
In my project file I have:
<ItemGroup>
<PackageReference Include="K4AdotNet" Version="1.3.5" />
<PackageReference Include="Microsoft.Azure.Kinect.BodyTracking" Version="1.0.0" />
<PackageReference Include="Microsoft.Azure.Kinect.Sensor" Version="1.3.0" />
</ItemGroup>
I also installed the Kinect BT package version 1.0.0 (https://docs.microsoft.com/bs-latn-ba/azure/Kinect-dk/body-sdk-download). The sample body tracking application (executable) is working fine!
Any ideas?
Creating a new project from scratch solved the issue.
|
gharchive/issue
| 2020-04-20T20:36:56 |
2025-04-01T06:39:34.086616
|
{
"authors": [
"daveherzig"
],
"repo": "microsoft/Azure-Kinect-Sensor-SDK",
"url": "https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1186",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
478836783
|
Nothing happens after clicking 'Upload/Download' for one blob container
Storage Explorer Version: 1.9.0
Branch: master
Build: 20190809.2
Platform/OS: Linux Ubuntu 19.04/Windows 10/macOS High Sierra
Architecture: ia32/x64
Regression From: Previous release 1.9.0
Steps to reproduce:
Expand one non-adls gen2 account -> 'Blob Containers'
Open one blob container -> Click 'Upload Files...'.
Check the results.
Expected Experience:
The Upload dialog pops up.
Actual Experience:
Nothing happens.
More info:
This issue DOES NOT reproduce for file shares.
This issue DOES NOT reproduce for ADLS Gen2 blob container.
This issue also reproduces when dragging files to one blob container.
This issue doesn't reproduce on build master/20190810.5, so we are closing it.
|
gharchive/issue
| 2019-08-09T07:20:39 |
2025-04-01T06:39:34.091366
|
{
"authors": [
"v-xianya",
"v-xuanzh"
],
"repo": "microsoft/AzureStorageExplorer",
"url": "https://github.com/microsoft/AzureStorageExplorer/issues/1633",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
497467888
|
A11y_AzureToolsStorageExplorer_Disk_DiskInfoTable_ScreenReader: NVDA does not announce the table name; it only announces it as a table.
Check out Accessibility Insights! - Identify accessibility bugs before check-in and make bug fixing faster and easier.”
GitHubTags: #A11y_AzureToolsStorageExplorer; #A11yMAS; #A11YTCS; #DesktopApp; #MAS1.3.1;#NVDA; #Win10;
Environment Details:
Desktop App: Windows Version: Windows10
NVDA Version: 2019.1.1
Steps to Reproduce:
Launch Storage Explorer
Navigate to "Open Connect dialog" pane and Sign in to Azure.
Go to "Manage Accounts" pane and Select the subscriptions you will use. Once selected, click on "Apply" button.
Navigate to "Disks" of the selected subscription.
Navigate to "A11ytest" and click on it.
Navigate to Table under the tool bar.
Verify that all the elements of "DiskInfoTable" Screen are accessible and MAS Compliant.
Actual
NVDA does not announce the table name; it only announces it as a table.
The same issue is observed with JAWS for the below scenario:
Blob Container Table:
Launch Storage Explorer
Navigate to "Open Connect dialog" pane and Sign in to Azure.
Go to "Manage Accounts" pane and Select the subscriptions you will use. Once selected, click on "Apply" button.
Navigate to "Blob Containers" of the selected subscription, right click and hit enter on "Create Blob container" to Create a new Blob container.
Once completed, Navigate to "Upload" button and Upload a blob/file by using "Upload Files" control.
Navigate to Table present in right side of the storage explorer.
Verify that all the elements of "Table" screen are accessible and MAS Compliant.
Expected
NVDA should announce the table name as "Disk info table".
Recommendations:
Refer below link which is repository of bug fixes code snippets:
https://microsoft.sharepoint.com/teams/msenable/mas/Pages/browse-fixes.aspx
User Impact:
Screen reader users are unable to know which table they are reading.
MAS Reference:
MAS1.3.1:
https://microsoft.sharepoint.com/:w:/r/teams/msenable/_layouts/15/WopiFrame.aspx?sourcedoc={54f28d1f-a2d1-4dcd-84e1-5c9b87e8aba4}
Attachment for Reference
Verified this issue on: Azure Tools Storage Explorer (Version: 1.12.0)
The issue no longer reproduces, hence closing this bug.
GitHubTags:#BM-TCS-StorageExplorer-Win32-Sep2019;
|
gharchive/issue
| 2019-09-24T05:41:15 |
2025-04-01T06:39:34.101086
|
{
"authors": [
"Radha2019",
"mstechie",
"vinodbk197"
],
"repo": "microsoft/AzureStorageExplorer",
"url": "https://github.com/microsoft/AzureStorageExplorer/issues/1889",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
548156831
|
On Cosmos DB - Execute Query: Add ability to execute selected text only like SSMS has.
Please add the ability to execute just the selected statement rather than always executing the entire contents of the query window.
Suppose I have two statements in the query window:
SELECT * FROM c WHERE c.Type = 'Fruit'
SELECT * FROM c WHERE c.Type = 'Vegetable'
In the current version of Azure Storage Explorer v1.11.2 on Windows, both statements will run when clicking "Execute Query", resulting in a 400 Bad Request message indicating "Syntax error, incorrect syntax near 'SELECT'". In applications like SQL Server Management Studio (SSMS) you can select/highlight a line or lines of query code and only that code will execute.
The current work around to having multiple queries in one query window is to comment out all other queries except for the one you want to run. Then, to run another query, you must uncomment the desired query and comment out the previous query.
Azure Storage Explorer support for Azure Cosmos DB has been discontinued as of v1.24.0. We recommend that users use Azure Cosmos DB's Data Explorer, which can be found in the Azure Portal or as a stand-alone version at http://cosmos.azure.com/, for the best Azure Cosmos DB experience. You can also find additional Azure Cosmos DB tooling here: Azure Cosmos DB Tools
|
gharchive/issue
| 2020-01-10T15:56:58 |
2025-04-01T06:39:34.105319
|
{
"authors": [
"JPhillipBrooks",
"aliuy"
],
"repo": "microsoft/AzureStorageExplorer",
"url": "https://github.com/microsoft/AzureStorageExplorer/issues/2537",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
727187278
|
It would be better to use a faster way to remove the added tags in the upload dialog
Storage Explorer Version: 1.16.0-dev
Build: 20201022.3
Branch: main
Platform/OS: Windows 10/ Linux Ubuntu 16.04/ MacOS Catalina
Architecture: ia32/x64
Regression From: Not a regression
Steps to reproduce:
Expand one storage account -> Blob Containers.
Select one blob container (make sure it supports tags)-> Open one upload dialog -> Add several tags.
Try to remove the added tags.
Expected Experience:
It would be better to use a faster way to remove the added tags in the upload dialog.
Actual Experience:
The tags can be removed when both Key and Value are empty.
I changed to use explicit "X" buttons to handle removing entries.
This issue is fixed on build 20201023.5, so closing it.
|
gharchive/issue
| 2020-10-22T08:54:31 |
2025-04-01T06:39:34.110049
|
{
"authors": [
"JasonYeMSFT",
"v-xianya"
],
"repo": "microsoft/AzureStorageExplorer",
"url": "https://github.com/microsoft/AzureStorageExplorer/issues/3703",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1081451957
|
Have standalone test framework recover more gracefully from hangs
[x] Add timeouts to individual tests
[x] Allow tests to run in parallel
[ ] Ensure test results can be retrieved even if some tests hang (write results to temp file, then assemble test results file when all tests finish, even after timeout)
Cutting out running tests in parallel. Introduces way too much complexity into tests, especially existing ones which may already rely on the fact tests are run sequentially.
Merged a change for tests timing out. Tried to do a not huge change for partial results. Didn't like what I came up with. I think for now just punting this to future is fine.
|
gharchive/issue
| 2021-12-15T19:46:05 |
2025-04-01T06:39:34.112031
|
{
"authors": [
"MRayermannMSFT",
"craxal"
],
"repo": "microsoft/AzureStorageExplorer",
"url": "https://github.com/microsoft/AzureStorageExplorer/issues/5268",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1114695699
|
An error "Provider not found for operation 'Azure.Storage.Table.createEntity' " occurs when adding/editing one entity
Storage Explorer Version: 1.23.0-dev
Build Number: 20220125.10
Branch: main
Platform/OS: Windows 10/Linux Ubuntu 20.04/MacOS Big Sur 11.6.1
How Found: From running test case
Regression From: Previous release (1.22.0)
Steps to Reproduce
Expand one storage account -> Tables.
Open one table -> Click 'Edit' -> Click 'Update'.
Check no error occurs.
Expected Experience
No error occurs.
Actual Experience
An error "Provider not found for operation 'Azure.Storage.Table.creatEntity' " occurs.
We had some trouble with the component build yesterday, so we weren't able to get the necessary update out in time. That should be fixed now, so please reverify with the next build.
|
gharchive/issue
| 2022-01-26T07:06:51 |
2025-04-01T06:39:34.116032
|
{
"authors": [
"craxal",
"v-xianya"
],
"repo": "microsoft/AzureStorageExplorer",
"url": "https://github.com/microsoft/AzureStorageExplorer/issues/5372",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
569479349
|
[Accessibility][Screen Readers-Filter Dialog] : Focus should move to the chart section after selecting any dialog.
User Experience
Users will find it difficult to navigate through the application if the focus order is incorrect. Here, focus should move to the flow chart section after selecting any of the dialogs; otherwise the user would have to navigate through all the dialogs, which is time-consuming.
Test Environment
Browser: Microsoft Edge Canary Version: 82.0.425.0 (Official build)
OS build: 2004 (19564.1005)
Screen Reader: Narrator
URL: Bot Framework Composer
Prerequisite Here
Repro Steps:
Open the above mentioned URL in EDGE and set the zoom at 400%.
Select 'Design Flow' from left navigation.
Now, select any dialog available in the filter dialog pane.
Observe the issue.
Actual:
When a dialog gets selected and the user presses the Tab key, focus moves to the next dialog available in the list instead of the flow chart section.
Expected:
On selecting any of the dialogs, the focus should move to the flow chart of the dialog so that the user knows about the opened chart.
Note:
Same issue found throughout the page: on activating any of the nodes, the property pane gets opened but the focus remains on the chart, and when the user presses Tab a new property pane gets opened. Due to this, keyboard users will not be able to reach the property pane. I suggest a role be assigned to the node, or a shortcut be defined to reach the property pane directly.
MAS2.4.3_Property pane.zip
MAS Impacted: MAS2.4.3
Attachment:
MAS2.4.3_Focus should move to the first interactive element.zip
@cwhitten As checked, the issue is not fixed yet; moreover, the focus now does not go inside the chart section with the Tab key. On pressing Tab from 'Show Code' the focus should go inside the tree pane, but the focus stays stuck on the whole pane. Also, when we use the mouse to move focus inside the pane, the keyboard focus gets trapped inside the chart section.
Please refer to the videos for the same: "Issue.zip"
I downloaded the below master build file, last updated on 23rd April.
Test Environment
Browser: Microsoft Edge Dev {Version 84.0.488.1 (Official build) dev (64-bit)}
OS build: 2004 (19562.1000)
Screen Reader: Narrator
URL: Bot Framework Composer
Prerequisite Here
I think what's going on here is a confusion of two different issues. The original one was about the focus moving into the visual editor after selecting a dialog, and that has in fact been fixed.
However, there's a separate problem where Tab and Shift-Tab no longer work inside the visual editor because those keys have been forcibly remapped in ways that break at least Narrator - the visual-editor extension has a whole KeyboardZone component that overwrites the usual Tab button behavior with its own.
@beyackle The original issue is not fixed yet! Please check and validate.
I think there's some confusion here over the word "select". I had been interpreting it to mean "move to and hit Enter", selecting a dialog for display in the main window, and that case does move focus over. However, moving through the list of dialogs (without hitting Enter, just going back and forth through the list) does in fact use Tab instead of the arrow keys, and you are correct that this is unexpected behavior, given how things like the sidebar nav menu use the arrow keys.
I'll go ahead and fix that, because that is a problem; is that also the behavior you were referring to?
Yup! Sounds cool. The user should be able to navigate with the arrow keys, and once Enter is hit the focus should jump to the opened chart.
#Nonbenchmark;
|
gharchive/issue
| 2020-02-23T11:15:47 |
2025-04-01T06:39:34.128156
|
{
"authors": [
"Amit8527510735",
"ashish315",
"beyackle"
],
"repo": "microsoft/BotFramework-Composer",
"url": "https://github.com/microsoft/BotFramework-Composer/issues/2056",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
601779786
|
a11y: add keyboard shortcut for copy, cut, paste
Description
Add keyboard shortcut for copy, cut, paste
Task Item
fix #2082
Screenshots
@alanlong9278 I think this is good, however there are some pretty gnarly conflicts to fix-up as this PR is becoming stale. Can you please address conflicts?
Approved once conflicts are addressed.
@cwhitten Fixed the conflicts; the announcement will be added in #2756 if this PR is merged before it. Could you approve it now?
|
gharchive/pull-request
| 2020-04-17T07:58:33 |
2025-04-01T06:39:34.131046
|
{
"authors": [
"alanlong9278",
"cwhitten"
],
"repo": "microsoft/BotFramework-Composer",
"url": "https://github.com/microsoft/BotFramework-Composer/pull/2694",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
850602788
|
fix: detect "Get started" dependencies in all projects not just current project
Description
adjust the mechanism used to determine if a project needs LUIS or QNA to support multi-bot projects. This moves code from the header component into a recoil selector.
Task Item
fixes #6649
Coverage decreased (-0.008%) to 51.292% when pulling 29a96043837a81e87613503343237d884be99911 on benbrown/rootbot into f2b91af7b0e79de9761fc94aaabc4a469b2b2d1f on main.
|
gharchive/pull-request
| 2021-04-05T19:28:10 |
2025-04-01T06:39:34.133546
|
{
"authors": [
"benbrown",
"coveralls"
],
"repo": "microsoft/BotFramework-Composer",
"url": "https://github.com/microsoft/BotFramework-Composer/pull/6686",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
489420609
|
[#1754] Fixed log panel timestamp color on light theme.
#1754
===
(Colors taken from VS Code color palette)
Coverage remained the same at 65.588% when pulling 16da70ec718c479981914ee758c56897d2889a35 on toanzian/acc-#1754 into f819ab8979635d7e7fb60ba788ecdc2d722842e7 on master.
|
gharchive/pull-request
| 2019-09-04T21:59:44 |
2025-04-01T06:39:34.136560
|
{
"authors": [
"coveralls",
"tonyanziano"
],
"repo": "microsoft/BotFramework-Emulator",
"url": "https://github.com/microsoft/BotFramework-Emulator/pull/1831",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
470739006
|
There are bubble border color and other style settings that are not working
The border color and border width are not working
bubbleBackground: '#353535',
bubbleBorderWidth: 0,
bubbleBorderStyle: 'none',
bubbleFromUserBorderWidth: 5,
bubbleBorderColor: 'green',
bubbleBorderRadius: 0,
bubbleFromUserBorderColor: 'black',
bubbleFromUserTextColor: 'white',
bubbleTextColor: 'white',
sendBoxTextColor: 'white',
bubbleFromUserBackground: '#353535',
Can you fill out the bug form? What versions of Web Chat are you using?
@compulim bubbleBorderWidth is new and is not in 4.5 yet. What version is current?
Current production version is 4.5, which was released 11 days ago.
bubbleBorderWidth is in #2137, which is merged 4 days ago, and is in the development build of 4.6.
Late October is so late... wish it was earlier; we have a production release coming up...
If you need these changes immediately, please see our documentation on pointing to our MyGet feed for latest bits.
Is it working now?
Cannot set bubbleFromUserBorderColor: 'rgb(89,101,109)' in style options.
Only the default #e6e6e6 is working.
Using version 4.5.3 of Web Chat.
I am using it in the below-mentioned way and it is working fine. I am also using 4.5.3 only.
bubbleFromUserBackground: 'rgba(0, 255, 0, .1)',
|
gharchive/issue
| 2019-07-21T03:55:29 |
2025-04-01T06:39:34.141832
|
{
"authors": [
"alokraj68",
"compulim",
"corinagum",
"narik1989",
"premaarya",
"xtianus79"
],
"repo": "microsoft/BotFramework-WebChat",
"url": "https://github.com/microsoft/BotFramework-WebChat/issues/2207",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2006017272
|
Add Initial Mariner Image Modifier (EMU) Files
Merge Checklist
All boxes should be checked before merging the PR (just tick any boxes which don't apply to this PR)
[x] The toolchain has been rebuilt successfully (or no changes were made to it)
[x] The toolchain/worker package manifests are up-to-date
[x] Any updated packages successfully build (or no packages were changed)
[x] Packages depending on static components modified in this PR (Golang, *-static subpackages, etc.) have had their Release tag incremented.
[x] Package tests (%check section) have been verified with RUN_CHECK=y for existing SPEC files, or added to new SPEC files
[x] All package sources are available
[x] cgmanifest files are up-to-date and sorted (./cgmanifest.json, ./toolkit/scripts/toolchain/cgmanifest.json, .github/workflows/cgmanifest.json)
[x] LICENSE-MAP files are up-to-date (./SPECS/LICENSES-AND-NOTICES/data/licenses.json, ./SPECS/LICENSES-AND-NOTICES/LICENSES-MAP.md, ./SPECS/LICENSES-AND-NOTICES/LICENSE-EXCEPTIONS.PHOTON)
[x] All source files have up-to-date hashes in the *.signatures.json files
[x] sudo make go-tidy-all and sudo make go-test-coverage pass
[x] Documentation has been updated to match any changes to the build system
[x] Ready to merge
Summary
Add initial Mariner Image modifier(EMU) files
Change Log
Created chroot interface and a dummy implementation for in-OS scenarios
Added the SSHPubKeys to the user struct as requested by the Trident crew, and added it to the ProvisionUserSSHCerts function accordingly
Does this affect the toolchain?
NO
Test Methodology
Inside imagemodifier dir, run go build -o imagemodifier .
Will do an end-to-end test in trident to validate we can set up users correctly.
Naive question -- what's the difference between an image modifier and an image customizer? Without additional context, they sound like quite the same thing.
Image customizer is an officially supported CLI tool for customizing images (official releases, guaranteed backward compatibility, public documentation... etc.), while image modifier is more of a utility tool/wrapper on the customization lib that will be leveraged for ad-hoc image customization scenarios and other efforts under development. Hence the separation, to allow for a more fluidly changing interface without breaking the official tool.
LGTM
One comment though: what's our logging strategy here? How do we ensure we have enough logging to investigate customization failures?
The logging is the same as MIC's; e.g., when passing a YAML into it, it verifies the content and throws an error when any step goes wrong while creating a new user. The logging of the invocation of this binary resides in the Trident repo.
The image modifier does not create a chroot env; it makes in-OS changes without an input (base) image and does not output an image.
Perhaps we should name it "OS modifier" instead of "image modifier", since it isn't modifying an image file.
Makes sense. As it makes live changes to an active OS, I will update all the references to OS modifier.
|
gharchive/pull-request
| 2023-11-22T10:22:19 |
2025-04-01T06:39:34.153648
|
{
"authors": [
"cwize1",
"elainezhao96",
"reubeno",
"romoh"
],
"repo": "microsoft/CBL-Mariner",
"url": "https://github.com/microsoft/CBL-Mariner/pull/6824",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
762695836
|
AccessViolationException when accessing Windows.Devices.Enumeration.Pnp.PnpObject.Properties.TryGetValue
On occasion, when accessing the properties of enumerated PnpObjects, I get this exception:
Exception thrown at 0x00007FF82FDFDBDE (combase.dll) in Agent.exe: 0xC0000005: Access violation reading location 0x0000027CCFCED034.
With native code debugging turned on, the unmanaged part that gets called by TryGetValue and leads to the exception is:
combase.dll!WindowsStringHasEmbeddedNull(HSTRING__ * string, int * hasEmbedNull) Line 287
at onecore\com\combase\winrt\string\string.cpp(287)
[Inline Frame] Windows.Devices.Enumeration.dll!Windows::Internal::String::HasEmbeddedNull() Line 237
at onecore\internal\com\inc\windowsstringp.h(237)
Windows.Devices.Enumeration.dll!Windows::Internal::String::GetLpcwstr(const wchar_t * * ppsz) Line 248
at onecore\internal\com\inc\windowsstringp.h(248)
[Inline Frame] Windows.Devices.Enumeration.dll!Windows::Internal::StringReference::GetLpcwstr(const wchar_t * *) Line 577
at onecore\internal\com\inc\windowsstringp.h(577)
Windows.Devices.Enumeration.dll!PropertyStoreServer::Find(HSTRING__ * key, unsigned long * index) Line 189
at onecoreuap\base\devices\rtenum\dllsrv\propertystore.cpp(189)
Windows.Devices.Enumeration.dll!PropertyStoreServer::HasKey(HSTRING__ * key, unsigned char * found) Line 628
at onecoreuap\base\devices\rtenum\dllsrv\propertystore.cpp(628)
My TFM:
net5.0-windows10.0.19041.0
@v-tbert, thanks for the report. Unfortunately, there's not enough information to root cause this issue, and we haven't seen it before. Can you provide a repro project?
@Scottj1s Messaged you directly with an example project. Let me know if you have any questions.
@v-tbert, @ujjwalchadha and I cannot repro this issue. I was able to repro it very reliably 2 weeks ago, which makes me think maybe something changed on the back end? Can you retry?
Not sure what backend you are referring to?
I just repro'd it with my production code though. It's watching PnP devices, so sometimes it doesn't happen right away. Sometimes it happens upon resuming after sleep or just (un)plugging things.
Thanks Tim - I can repro with your steps (turning on/off bluetooth headphones).
A related issue is an incorrect implementation of GetRuntimeClassName, which is reporting Windows.Devices.Enumeration.DevicePropertyStore. That type is not defined in metadata, so it results in a System.TypeLoadException: 'Unable to find a type named 'Windows.Devices.Enumeration.DevicePropertyStore''.
However, cswinrt should be resilient and attempt a dynamic cast. In your repro, that was failing with:
System.InvalidCastException: 'Invalid cast from 'WinRT.IInspectable' to 'ABI.System.Collections.Generic.IEnumerable`1[System.Collections.Generic.KeyValuePair`2[System.String,System.Object]]'.'
I couldn't definitively pinpoint it, but we think this was resolved with some work on covariant support. You can take advantage of that with a framework reference override to the latest Windows SDK projection. I've confirmed this works with your repro. Add these lines to your csproj:
<ItemGroup>
<FrameworkReference Update="Microsoft.Windows.SDK.NET.Ref" RuntimeFrameworkVersion="10.0.19041.12" />
<FrameworkReference Update="Microsoft.Windows.SDK.NET.Ref" TargetingPackVersion="10.0.19041.12" />
</ItemGroup>
I checked out the .NET page and it indicates 5.0.102 is included in Visual Studio 16.8.4, which I am already running. If I view the properties of Microsoft.Windows.SDK.NET.Ref in VS, it shows version 10.0.19041.12. It fails reliably with the access violation. However, if I manually add the lines you mention to the csproj file it DOES seem to resolve the access violation. If I understood correctly though, I shouldn't need to add them since I am already on the newer version?
If I disable "Enable Just My Code" I still see TypeLoadException for DevicePropertyStore
@manodasanW , any idea why an explicit *.12 framework reference would work, but not a .NET 5.0.102 reference?
@v-tbert, regarding the TypeLoadException, feel free to open an internal Windows issue on this header:
onecoreuap/base/Devices/rtenum/dllsrv/PropertyStore.h
These lines are in error:
InspectableClass(L"Windows.Devices.Enumeration.DevicePropertyStore", BaseTrust);
InspectableClass(L"Windows.Devices.Enumeration.DevicePropertyStoreIterator", BaseTrust);
Neither DevicePropertyStore nor DevicePropertyStoreIterator are actual runtime classes, which causes cswinrt to fail to find their ABI helpers via reflection, resulting in type load exceptions. These should instead report their default interfaces - IIterable/IIterator specializations.
@v-tbert This is indeed strange, as you shouldn't need an explicit reference if you got .NET 5.0.102. Without the FrameworkReference update, when the issue repros, can you run and debug your app and check in the Modules view in VS (under Debug -> Windows -> Modules) what version of Microsoft.Windows.SDK.NET.dll and WinRT.Runtime.dll is loaded in the process. This should help confirm that there is indeed a difference and determine which one is being used. A build log of your project (msbuild /bl) would also be useful.
Closing due to inactivity
Sorry this fell off my radar. I updated my repro project and it reliably fails with the loaded module:
Version: 10.00.19041.14
Path: C:\Users\v-tbert\Desktop\Bug Repro MMRAgent\MMRAgent\bin\Debug\net5.0-windows10.0.19041.0\Microsoft.Windows.SDK.NET.dll
Team, see updated internal "Bug Repro MMRAgent.zip"
I don't have a self-contained repro currently, but it looks like something else broke in 1.2.2. Now when retrieving the PnP property System.Devices.ContainerId it doesn't always return a Guid as it should. It will often return a string array instead. Any idea why that might be happening? https://docs.microsoft.com/en-us/windows/win32/properties/props-system-devices-containerid
@v-tbert I am unable to repro this issue after the previous few fixes we have made for other issues, and it seems like it has been fixed as a part of those. Can you confirm whether this is fixed for you with the latest release of cswinrt?
@v-tbert I just realized the build with fixes is not released yet. This should be fixed after the upcoming cswinrt release. Let us know if it is not.
@v-tbert this should be fixed with the latest .NET 5 SDK version (.NET SDK 5.0.402 or .NET SDK 5.0.208). Feel free to reopen if you are still encountering the issue.
|
gharchive/issue
| 2020-12-11T17:58:11 |
2025-04-01T06:39:34.185833
|
{
"authors": [
"Scottj1s",
"angelazhangmsft",
"manodasanW",
"ujjwalchadha",
"v-tbert"
],
"repo": "microsoft/CsWinRT",
"url": "https://github.com/microsoft/CsWinRT/issues/635",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
450664273
|
multiple instances of the same connector need to authenticate one by one
We have a data connector which connects to our GraphQL API. The authentication is done by our own OAuth2 identity provider. The overall solution is working so far. Currently we are fetching the data with Web.Contents, where we pass a full query to the data source. But as we add multiple single queries to the reports (multiple instances of our data connector), we need to reauthenticate every instance individually every time our session expires. This makes for a really bad UX for the user.
so my questions are:
is there a way to authenticate a data connector globally?
is there a better way to customize the query to our data source than the argument to our contents function?
[DataSource.Kind="GIA.Integration.PowerBI", Publish="GIA.Integration.PowerBI.Publish"]
shared GIA.Integration.PowerBI.Contents = (Url as text) =>
let
response = Web.Contents(Url),
body = Json.Document(response)
in
body;
GIA.Integration.PowerBI = [
Authentication = [
OAuth = [
StartLogin=StartLogin,
FinishLogin=FinishLogin,
Refresh=Refresh,
Logout=Logout
]
],
TestConnection = (dataSourcePath) => {"GIA.Integration.PowerBI.Contents", dataSourcePath},
Label = Extension.LoadString("DataSourceLabel")
];
With the current definition, it looks like the Url is being used to distinguish one data source from another -- which means that every distinct Url (minus query parts) would require its own credentials. Is that the problem you're describing? If so, one solution is to annotate the function to indicate that the Url shouldn't be used to distinguish data sources. You can do that with something like
shared GIA.Integration.PowerBI.Contents = Value.ReplaceType((Url as text) => …, type function (Uri as (type text meta [DataSource.Path=false])) as any)
That is, annotate the parameter type with the metadata DataSource.Path=false.
Hello @CurtHagenlocher
Thanks for your reply, it's very helpful. But what if we want to do the same thing for a date parameter but not URL? Seems like this trick does not work then.
Generally, we have OAuth2 authorization and our connector takes a date range as parameters (dateFrom, dateTo). And we don't want to force a user to authorize every time he/she changes the date range.
Any ideas how to do that?
Thanks in advance.
hey @alex-v-ivanov
You could:
make those parameters optional/nullable
annotate both of those parameters with the DataSource.Path = false meta record
I think that there's another way where you could make your connector work like a pseudo singleton, but I'm not sure that's the best way to go
Oh, sorry, it was my mistake,
I wrote Datasource.Path = false instead of DataSource.Path = false (S must be capital).
So, this way works for any data.
Thanks all!
Hi All,
I tried the same trick, i.e. made most of my connector parameters optional/nullable and also added the meta DataSource.Path = false, but when I publish it and then try to update the dataset (mapped to the connector) parameter to a new value, I get "Update parameter failed". It works fine on Desktop.
Any help appreciated
Cheers,
Amit
|
gharchive/issue
| 2019-05-31T07:50:42 |
2025-04-01T06:39:34.192963
|
{
"authors": [
"CurtHagenlocher",
"OneCyrus",
"alex-v-ivanov",
"amitkar1980",
"migueesc123"
],
"repo": "microsoft/DataConnectors",
"url": "https://github.com/microsoft/DataConnectors/issues/219",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2486237095
|
[CCL] fix condition issue in ccl.py
The previous condition check is not right; it would cause this condition to always be True.
I wanted to check this on XPU and HPU,
XPU: https://github.com/microsoft/DeepSpeed/actions/runs/10586606972/job/29335722768
HPU seems to have another issue, debugging that now.
|
gharchive/pull-request
| 2024-08-26T08:12:58 |
2025-04-01T06:39:34.195999
|
{
"authors": [
"YizhouZ",
"loadams"
],
"repo": "microsoft/DeepSpeed",
"url": "https://github.com/microsoft/DeepSpeed/pull/6443",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
918878833
|
Add suggested solution/url to CLI analyze output
Is your feature request related to a problem? Please describe.
CLI outputs the following format:
/opt/atlassian/pipelines/agent/build/header.php:21:31:21:38 [Moderate] DS137138 Insecure URL
I'm missing details on the error or a link to get info on a proposed solution.
Describe the solution you'd like
It would be nice to see a proposed solution or a link for more info, similar to the information provided by the VS Code plugin.
Check the guidance file here for the error number: https://github.com/microsoft/DevSkim/tree/main/guidance
#297 will inform the user to visit that url for guidance if there were any issues detected.
|
gharchive/issue
| 2021-06-11T15:16:01 |
2025-04-01T06:39:34.198451
|
{
"authors": [
"gfs",
"michaelw85"
],
"repo": "microsoft/DevSkim",
"url": "https://github.com/microsoft/DevSkim/issues/296",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
439923295
|
Direct3D9 Specification
Hi!
It'd be really cool if you could add the Direct3D9 spec on here. There's lots of undocumented and confusing behavior in d3d9 -- a sm1-3 specification would also be nice.
Unfortunately we don't have specs for earlier (<= D3D9 era) APIs in a form that could easily be published, so this isn't likely to be something we're ever able to add here.
The regular API documentation for D3D9 (https://docs.microsoft.com/en-us/windows/desktop/direct3d9/dx9-graphics) is the closest thing we have available.
Sorry!
"APIs in a form that could easily be published" Is it under NDA or what? I mean it is old as s***, what is the problem?
Concerning today's leak of all of Windows XP, maybe at least add DirectX 8?
|
gharchive/issue
| 2019-05-03T07:33:16 |
2025-04-01T06:39:34.201075
|
{
"authors": [
"Joshua-Ashton",
"ValZapod",
"shawnhar"
],
"repo": "microsoft/DirectX-Specs",
"url": "https://github.com/microsoft/DirectX-Specs/issues/10",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1401231964
|
Incorrect NuGet Package Information?
Hello, the NuGet package Project URL points to Azure AppConfiguration. I was going to do a PR but I don't know what the icon should be:
https://github.com/microsoft/FeatureManagement-Dotnet/blob/main/src/Microsoft.FeatureManagement.AspNetCore/Microsoft.FeatureManagement.AspNetCore.csproj
A bit more reading and I see why, with the Azure App Configuration capability, but it's a bit confusing. It seems feature flags could be standalone without Azure App Configuration, and I'd normally expect a package link to direct me to the source repository for that package.
|
gharchive/issue
| 2022-10-07T14:04:27 |
2025-04-01T06:39:34.213998
|
{
"authors": [
"verifiedcoder"
],
"repo": "microsoft/FeatureManagement-Dotnet",
"url": "https://github.com/microsoft/FeatureManagement-Dotnet/issues/200",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1272434615
|
Milan size limits 1
PropertyTree DDS: Summary / Operation Compression
Description
Implementation of a new CompressedCommPropertyTree DDS, extended from the PropertyTree DDS, which compresses summaries and deltas.
PR Checklist
[ ] I have updated the documentation accordingly.
[x] I have added tests to cover my changes.
[x] All new and existing tests passed.
[x] My code follows the code style of this project.
[x] I ran the lint checks which produced no new errors nor warnings for my changes.
[x] I have checked to ensure there aren't other open Pull Requests for the same update/change.
Does this introduce a breaking change?
[ ] Yes
[x] No
Testing
The unit test was updated to perform the full test also over the new CompressedCommPropertyTree DDS.
It should be started in
experimental/PropertyDDS/packages/property-dds
by
npm run test
@tylerbutler @DLehenbauer Since initiating the current pull request we have been continuously receiving workflow error reports. Please advise.
https://github.com/dstanesc/FluidFramework/actions/runs/2531789440
The workflow is not valid. .github/workflows/merge-commits-to-next-branch.yml (Line: 35, Col: 9): Unexpected symbol: '$COMMIT_INFO'. Located at position 4 within expression: ( ($COMMIT_INFO).label == 'queued' ) .github/workflows/merge-commits-to-next-branch.yml (Line: 47, Col: 9): Unexpected symbol: '$COMMIT_INFO'. Located at position 3 within expression: ( $COMMIT_INFO.label == 'queued' && needs.create-branch.result == 'failure' )
@tylerbutler - Possibly because it's waiting for a maintainer to approve running the workflow? (I just approved it.)
@dstanesc Sorry for the hassle! I think this is related to some in-progress work we're doing to make it easier to manage releases within the repo. There were some syntax errors in early versions of the workflows, so my guess is your PR is based on one of those earlier commits. The best workaround for now is to manually disable the workflows in your fork using the GitHub UI. See the Actions tab. We're making changes so that these workflows won't run on forks in the future.
@DLehenbauer I think this was separate from the ADO workflow permissions, but they are necessary too, so thanks for approving!
Thank you @DLehenbauer @tylerbutler for the super fast reaction. Confirm all workflows disabled in our fork.
I think enabling compression using a specific compression algorithm for just one DDS is not ideal.
I'd love to see compression being done in a way where the service could do nothing (i.e., the service can round-trip the compressed payload and things continue to work as is). But the service should also have a chance to decompress / recompress these blobs as it sees fit (i.e. understand they are compressed with a specific algorithm). This has the following advantages:
Containers (at least in ODSP, I think same is for FRS) are loaded in small number of requests (one for ODSP, excluding media), and storage does solid gzip compression of whole payload (through built into http compression). Same for actual storage - individual blobs are stored in smaller number of Azure blobs, and compression is done on solid volume. It's more efficient not to do compression of compressed data - storage might want to decompress individual blobs before compressing whole file.
For staging, or maybe forever, we might prefer long term storage to not be impacted by our choices of compression algorithm. I.e. if we learn that we want to switch it half a year from now, we can do so with zero back compat impact. Compression stays part of transport layer (similar to compression in http) - it can be changed later (adding new compression algorithms, deprecating old ones) and it does not impact data at rest.
As for how it's implemented in runtime, I'd love to see one of the following:
It's applied uniformly to all blobs in the container, likely by building an adapter over IDocumentStorageService.
It's done similarly to how Kafka does it, i.e. giving customers a choice (per DDS instance) to control what compression algorithm is used, so the customer controls the balance of performance vs. extra memory footprint vs. bandwidth.
No matter what choices are made, we need to collect data like
impact on bundle size after tree shaking and minification.
expected impact on CPU and memory, and gain in bandwidth. Both compression & decompression. Using our example payloads.
Find reasonable cut-offs. Usually it does not make sense to compress really tiny payloads.
When it comes to experimentation, I'd start with something that does not necessarily have the best compression semantics but has minimal impact on bundle size, as many of our customers are sensitive to that. And then slowly find a way to add more algorithms in a way that does not impact all users (likely by using bundle chunking; not sure if that means some work on Fluid Framework or is fully controlled by consumers).
@vladsud
I think enabling compression with a specific compression algorithm for just one DDS is not ideal.
The reason for this implementation is short-term support for summaries bigger than 30 MB. In discussion with @DLehenbauer we explored how to quickly achieve some progress here (besides the long-term tasks such as incremental summaries). In our use cases we are fighting for each additional MB of the summary. The idea behind this is the following:
offer a PropertyTree-DDS-like structure which supports as big a summary size as possible
keep the original PropertyTree DDS working as before so that the already stored summaries remain functional
lower the risk of regression by placing the implementation outside the existing code as much as possible
keep it simple
I fully agree that the final compression approach should be common to all DDSes. It is in our plan (we should build the road-map next week when we are in Seattle) and is considered a longer-term task due to its complexity.
@milanro, as long as it does not affect existing scenarios in production, I think that's a fine approach. I believe PropertyDDS is only used for experiments - might be worth confirming with Daniel here.
@milanro - I understand the intent now, thanks.
@RRuiter and his team are the other stakeholders in Property DDS, so we'll want their input on how to do this with minimal impact.
@RRuiter @ruiterr, we already went through this code together and did the code review on June, 15th
@DLehenbauer @milanro is this PR still relevant? Should we close it?
|
gharchive/pull-request
| 2022-06-15T15:52:40 |
2025-04-01T06:39:34.229381
|
{
"authors": [
"DLehenbauer",
"dstanesc",
"milanro",
"taylorsw04",
"tylerbutler",
"vladsud"
],
"repo": "microsoft/FluidFramework",
"url": "https://github.com/microsoft/FluidFramework/pull/10688",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1648378507
|
Container loader deprecations
Description
The following types in the @fluidframework/container-loader package are not used by, or necessary to use, our public API, so they will be removed from the exports in the next major release:
IContainerLoadOptions
IContainerConfig
IPendingContainerState
ISerializableBlobContents
@wes-carlson this is basically what you were trying to do in : https://github.com/microsoft/FluidFramework/pull/13916/
|
gharchive/pull-request
| 2023-03-30T21:55:22 |
2025-04-01T06:39:34.232537
|
{
"authors": [
"anthony-murphy"
],
"repo": "microsoft/FluidFramework",
"url": "https://github.com/microsoft/FluidFramework/pull/14891",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1797747683
|
Refactor Azure Client: Replace common-definitions with core-interfaces
This pull request updates the Azure client code by replacing the dependency @fluidframework/common-definitions with @fluidframework/core-interfaces.
Investigate how re-exported APIs affect type tests.
This was only supposed to contain the azure client changes
|
gharchive/pull-request
| 2023-07-10T23:15:12 |
2025-04-01T06:39:34.233816
|
{
"authors": [
"RishhiB"
],
"repo": "microsoft/FluidFramework",
"url": "https://github.com/microsoft/FluidFramework/pull/16305",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2085102442
|
Loader and some driver changes for Limited Data Virtualization
Description
1.) Add an interface to represent the snapshot tree with group IDs. This is where the snapshot tree could be partial and missing some of the children.
```ts
export interface ISnapshotTree2 extends ISnapshotTree {
    groupID?: string;
    trees: { [path: string]: ISnapshotTree2 };
}
```
2.) Add an interface to support passing all the snapshot contents from the storage service to the loader. This represents the partial snapshot and contains the above snapshot tree.
```ts
export interface IPartialSnapshotWithContents {
    snapshotTree: ISnapshotTree2;
    blobsContents: Map<string, ArrayBuffer>;
    ops: ISequencedDocumentMessage[];
    /**
     * Sequence number of the snapshot
     */
    sequenceNumber: number | undefined;
    /**
     * Sequence number for the latest op/snapshot for the file.
     */
    latestSequenceNumber: number | undefined;
    couldBePartialSnapshot: true;
}
```
3.) Add some APIs to differentiate between full and partial snapshot contents.
4.) Changes in the ODSP driver to support both full and partial snapshots. Cache the blobs and root tree only when there is a full snapshot.
5.) Changes in the Loader to support both full and partial snapshots, fetch them from storage, and pass them to the runtime layer.
I would really like a design here. Specifically, we already have a number of snapshot formats, so it would be good to understand what they all are and how this fits in. I'd also like to know more about things like groupId: how it is set, whether it can be changed, etc. Lastly, the current proposal doesn't appear to be serializable, and I don't think we want a new snapshot format that cannot be serialized.
|
gharchive/pull-request
| 2024-01-16T22:59:26 |
2025-04-01T06:39:34.238344
|
{
"authors": [
"anthony-murphy",
"jatgarg"
],
"repo": "microsoft/FluidFramework",
"url": "https://github.com/microsoft/FluidFramework/pull/19256",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
807847801
|
[Sample classification task] azure_dataset_id definition conflict between GlaucomaPublicExt.py and GlaucomaPublic.py
When following the sample classification task instructions to create the classification model configuration, we are instructed to specify azure_dataset_id in the constructor.
In the definition of GlaucomaPublic, though, these lines:
https://github.com/microsoft/InnerEye-DeepLearning/blob/eb5f931f200c253f8a3336fcb01cdaa58c6207be/InnerEye/ML/configs/classification/GlaucomaPublic.py#L12-L13
https://github.com/microsoft/InnerEye-DeepLearning/blob/eb5f931f200c253f8a3336fcb01cdaa58c6207be/InnerEye/ML/configs/classification/GlaucomaPublic.py#L23-L39
override the parameter, causing the experiment to fail if the user has not defined a dataset called glaucoma_public_dataset.
Expected behavior(s):
The parameter is not overridden.
or
Documentation specifies to call the dataset glaucoma_public_dataset.
Good catch, thanks for reporting! Update coming soon.
|
gharchive/issue
| 2021-02-13T23:10:21 |
2025-04-01T06:39:34.243981
|
{
"authors": [
"JacopoTeneggi",
"ant0nsc"
],
"repo": "microsoft/InnerEye-DeepLearning",
"url": "https://github.com/microsoft/InnerEye-DeepLearning/issues/397",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1846519461
|
Update-mgdevicemanagementmanageddevice -devicecategory
Does the cmdlet work?
I can't make it work with -BodyParameter or -DeviceCategory; every time it results in an error.
```powershell
$Params = @{ DeviceCategory = @{ "DisplayName" = "$NewCategory" } } | ConvertTo-Json
try {
    # Change the category
    Update-MgDeviceManagementManagedDevice -ManagedDeviceId $Device -BodyParameter $Params
}
catch {
    # catch added to complete the truncated snippet
    Write-Error $_
}
```
I cannot make it work either with the built-in cmdlets.
```text
Update-MgBetaDeviceManagementManagedDevice -ManagedDeviceId $myIntuneDevice.Id -DeviceCategory $category

Update-MgBetaDeviceManagementManagedDevice : Cannot apply PATCH to navigation property 'deviceCategory' on entity type 'microsoft.management.services.api.managedDevice'.
Status: 400 (BadRequest)
ErrorCode: ModelValidationFailure
Date: 2023-10-03T17:24:51

Update-MgBetaDeviceManagementManagedDeviceCategory -ManagedDeviceId $myIntuneDevice.Id -Id $category.Id

Update-MgBetaDeviceManagementManagedDeviceCategory :
Status: 404 (NotFound)
ErrorCode: UnknownError
Date: 2023-10-03T17:03:16
```
The only way to make it work is like this:
```powershell
$body = @{ '@odata.id' = "https://graph.microsoft.com/beta/deviceManagement/deviceCategories/$($category.Id)" }
Invoke-MgGraphRequest -Method PUT -Uri "beta/deviceManagement/managedDevices/$($myIntuneDevice.Id)/deviceCategory/`$ref" -Body $body
```
This one works for me:
```powershell
Update-MgBetaDeviceManagementManagedDeviceCategory -ManagedDeviceId $Device -BodyParameter $BodyParams
```
with the new category in the body parameter.
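For example (the exact $BodyParams was not shared in this thread; the body shape below is a guess):
```powershell
# Hypothetical body -- a deviceCategory resource carrying the new display name
$BodyParams = @{
    displayName = $NewCategory
} | ConvertTo-Json

Update-MgBetaDeviceManagementManagedDeviceCategory -ManagedDeviceId $Device -BodyParameter $BodyParams
```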
|
gharchive/issue
| 2023-08-11T09:44:04 |
2025-04-01T06:39:34.250131
|
{
"authors": [
"SamSepiolWarden",
"clienart-bmx",
"geoced"
],
"repo": "microsoft/Intune-PowerShell-SDK",
"url": "https://github.com/microsoft/Intune-PowerShell-SDK/issues/145",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
452823475
|
Adding MacOS 10.14 jobs to the azure-pipelines.yml file
This also fixes the build scripts to properly opt out of the first time experience.
macOS looks to currently hit an assert: The active test run was aborted. Reason: Assertion failed: (isa<X>(Val) && "cast<Ty>() argument of incompatible type!"), function cast, file /tmp/llvm-5.0.0.src/include/llvm/Support/Casting.h, line 255.
|
gharchive/pull-request
| 2019-06-06T04:14:10 |
2025-04-01T06:39:34.251351
|
{
"authors": [
"tannergooding"
],
"repo": "microsoft/LLVMSharp",
"url": "https://github.com/microsoft/LLVMSharp/pull/103",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
668824328
|
[fix] Locale independent model load/save
Hello,
Issue
With the revert of its PR, this issue has again become a concern in Java: https://github.com/microsoft/LightGBM/issues/2890.
At the moment LightGBM uses strtod() and C++ streams to read/write the floating-point numbers in the model file.
The issue is that Java changes the process locale away from the standard "C" locale at initialization (see https://github.com/microsoft/LightGBM/pull/2891). Current evidence shows this is true for Java providers that use the C API through SWIG (I suspect MMLSpark's LightGBM might be affected by this too), and even for users of the LightGBM Python API that have their code wrapped by JEP.
For production scenarios, ensuring the correctness of scores is of paramount importance, and right now there are no such guarantees for LGBM providers that use Java (see the issue above).
Request for Change
After assessing alternatives, and seeing that the last fix was reverted, the robust solution would be to switch the model reads/writes of floating-point numbers to process-locale-independent methods.
Implementation proposal
Replace floating point read/writes in model save/load with calls to C++ streams (we already use them anyway for some parameters). We imbue such streams with the "C" locale, to force the same locale for reads and writes. This fixes our bug.
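A minimal sketch of that idea (illustrative helper names, not LightGBM's actual functions):
```cpp
#include <locale>
#include <sstream>
#include <string>

// Serialize a double through a stream pinned to the classic "C" locale,
// so the output never depends on the process locale (',' vs '.').
std::string DoubleToModelString(double value) {
  std::ostringstream out;
  out.imbue(std::locale::classic());
  out.precision(17);  // enough digits to round-trip an IEEE-754 double
  out << value;
  return out.str();
}

// Parse it back with the same fixed locale, so reads always match writes.
double ModelStringToDouble(const std::string& text) {
  std::istringstream in(text);
  in.imbue(std::locale::classic());
  double value = 0.0;
  in >> value;
  return value;
}
```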
It might be slightly slower to read/write the model, but it will be correct every time, independently of process locale settings.
According to the benchmark https://tinodidriksen.com/2011/05/cpp-convert-string-to-double-speed/ it would result in a ~3x slower floating-point conversion (reusing streams). I doubt it's actually 3x slower (see tricks to make it faster http://cplusplus.bordoon.com/speeding_up_string_conversions.html). Besides, we're already using streams for some numbers, thus the model read/write slowdown wouldn't reach that value.
Is that an issue?
Can I get your thoughts on this? Good, bad? Secondary effects I might be missing? Would such PR be welcome? @guolinke @StrikerRUS @imatiach-msft et al.
Thank you!
Not enough? Future - Alternative libraries
In the future, if needed we can upgrade to use different libraries like Boost.Spirit or David M. Gay's library which is more accurate than glibc (current implementation). If we do so, we can have rounding differences (that in principle are small).
Side note on floating-point conversion precision and David Gay's library
LightGBM is compiled with GCC, which uses glibc, and according to these links:
https://www.exploringbinary.com/how-glibc-strtod-works/
https://www.exploringbinary.com/how-strtod-works-and-sometimes-doesnt/
Converting decimal strings to doubles correctly and efficiently is a surprisingly complex task; fortunately, David M. Gay did this for us when he wrote this paper and released this code over 20 years ago. (He maintains this reference copy of strtod() to this day.) His code appears in many places, including in the Python, PHP, and Java programming languages, and in the Firefox, Chrome, and Safari Web browsers.
Many programming environments implement their string to double conversions with David Gay’s strtod(); glibc, the GNU C Library, does not.
David Gay's implementation https://www.ampl.com/netlib/fp/dtoa.c of to/from string/floating point conversion is even more accurate than we're using now. It is a single file that can be compiled and configured through preprocessor flags, in our case to ignore locale processing altogether (resulting in the "C" locale behaviour for reads and writes).
Speaking about float2str conversion in saving models, XGBoost recently adopted Ryū algorithm (https://github.com/dmlc/xgboost/pull/5772) and numpy uses Dragon4 algorithm (https://github.com/numpy/numpy/pull/9941).
Before I commit to working on this, can someone help direct me to the files where the model load/save code is found?
Searching for the code... (read only if at a loss)
The fix we had posted in the past https://github.com/microsoft/LightGBM/pull/2891/files required changes in: LGBM_BoosterCreateFromModelfile, LGBM_BoosterSaveModel, LGBM_BoosterSaveModelToString and LGBM_BoosterDumpModel in the C API.
These new implementations will need to change the core code, which means fixes must be made in the functions called therein, namely Booster::Booster (constructor), Booster::SaveModelToFile, Booster::SaveModelToString and Booster::DumpModel.
Reading model files seems to be implemented in:
boosting_->LoadModelFromString(model_str, len) which leads me to GBDT::LoadModelFromString in gbdt_model_text.cpp.
In turn, that uses both ArrayToString and ArrayToStringFast (I don't know why we have and use both to save the model - legacy?). Common::DoubleToStr is also used, which relies on snprintf (not locale-independent).
Conclusion and questions
I got lost on the virtual function pointers, though; I was expecting to see DART and the others implement model read/write.
Can I thus assume model text-file reads/writes are implemented in gbdt_model_text.cpp's methods alone?
If I change that file and the required methods in Common like DoubleToStr, everything should work?
If I touch DoubleToStr will this impact other code? (I can always leave it alone and make a new method)
What about DART, and other boosting modes, all of them still use gbdt_model_text.cpp to implement read and writes?
Thank you so much :)
Gently ping @guolinke for two questions above about the code location.
sorry for missing this thread...
@AlbertoEAF
I remember ArrayToStringFast is only for integers. So for model_to_string, changing DoubleToStr should be enough. As for string_to_model, there are several functions, like https://github.com/microsoft/LightGBM/blob/adfc9f5c61ad3d11b9768ed0b9104297434ce822/include/LightGBM/utils/common.h#L513-L518 and https://github.com/microsoft/LightGBM/blob/adfc9f5c61ad3d11b9768ed0b9104297434ce822/include/LightGBM/utils/common.h#L566-L574
yeah, all models share the same model input/output codes.
Thanks so much! I will start working on it ;)
By the way @StrikerRUS , regarding such libraries, I think I'll go with Ryü, I've been reading up a bit on it. Both came up after David Gay's implementations and Ryü came after Dragon4. Dragon4 has a fallback algorithm for some numbers which is slower.
I also saw even faster implementations but I don't want to get too close to the edge as this is supposed to be stable.
I was thinking of using this C++ implementation of Ryü, https://github.com/expnkx/fast_io, as it provides a single header to include, but it requires C++20 while LGBM is only C++11.
This leaves only the original implementation with a C interface: https://github.com/ulfjack/ryu. I hope it's easy to add to our build chain; I still have to figure that one out.
@AlbertoEAF BTW, is it possible to have a two-stage solution? Like a script to localize/de-localize the model?
LightGBM itself could always read/write the current format.
But when users want to read the content of the model file, we could localize it.
Hello @guolinke , I'm not sure what you mean. Why would we want to transform the model content to different locales?
It's cleaner to have it store and load always with the same format, we don't need models in different locales.
Also, I believe it would be even trickier as you would have to know precisely the locale with which the model was written to read it correctly, which can be tricky to know in some corner cases.
Finally, as LightGBM itself does not allow "changing locale" during its execution, we would have to create an external tool, and that tool would need to parse the model file structure to read it and then rewrite it with exactly the same structure - lots of code duplication and complexity.
@AlbertoEAF I misunderstood the locale problem.
you are right, the model should be locale-independent.
By the way, maybe there were changes in the meantime, because in the v3.0.0 code both ArrayToStringFast and ArrayToString operate on doubles: ArrayToString only handles doubles, while ArrayToStringFast is a template that processes arrays of any type. I believe there's no reason to have both anymore.
Second point: instead of using Ryu, I was thinking of using fmt https://fmt.dev/latest/index.html, as its integration is much easier and it is still ~10x faster than the standard libraries (current state). Besides being the foundation of C++20's new std::format standard library, it is also very well maintained, uses only C++11 features, and has explicit support for GCC 4.8 (https://fmt.dev/latest/index.html). What do you think? It also seems much easier to use than Ryu.
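For reference, a minimal illustration of the fmt usage in question (my sketch, not code from this thread):
```cpp
#include <fmt/core.h>
#include <string>

int main() {
  // fmt's default "{}" formatting for doubles is locale-independent
  // (always '.' as the decimal separator) and produces the shortest
  // string that round-trips back to the same double.
  std::string s = fmt::format("{}", 0.25);  // "0.25" in any process locale
  return 0;
}
```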
If I'm mistaken please correct me! :)
Wow, I replaced the threshold, leaf_value and leaf_weight ArrayToString calls with ArrayToStringFast in src/io/tree.cpp's Tree::ToString() and had surprising results. To test, I read a reference model file and wrote it back (left = v3.0.0 code, right = modified code):
Whilst ArrayToStringFast produces numbers with less precision, which saves space, the tree sizes also changed. Why is that?!
@AlbertoEAF I'm totally OK with using fmt library! It looks quite popular based on number of GitHub stars and we can use it as a git submodule (like boosts' compute).
Hah, just noticed that treelite project is using fmt library. And treelite in turn is used in rapidsai's project for tree models:
https://github.com/dmlc/treelite/blob/1c495c52a6dff8413f1b66f2f0ecc4e390473c5e/cmake/ExternalLibs.cmake#L10-L14
https://github.com/rapidsai/cuml
Yeah @StrikerRUS , even integer to string conversions might benefit from it, not only floating point conversions with shortest representation. LightGBM already has a custom Atoi function built to improve performance I believe, but it seems not even that will beat this:
http://www.zverovich.net/2020/06/13/fast-int-to-string-revisited.html
@guolinke @StrikerRUS I've hit an issue with the LightGBM model (the same as the one above with the image) which I don't understand, and I really need your help. Any ideas about what might be causing this are greatly appreciated!
Issue
When I switch the call that turns floating-point parameters into strings in the model write, I get numbers which sometimes have a redundant trailing 0 in the scientific notation, which shouldn't matter - I just posted a bug at fmtlib: https://github.com/fmtlib/fmt/issues/1873. See it to understand what I mean.
The big issue, though, is that for each of the two trees where one parameter changed, the tree_size in the model grew by +1 as well! Any ideas?!
Diffs in the output model files produced by LightGBM. Each image reports the line difference between the "unpatched"/normal LightGBM code (in red) and the patch (in green), followed by the differences in items from such lines, also in red/green:
In this case the model written with the patched code results in two trees where one parameter has a trailing 0, which results in these tree-size changes:
and that's what I don't understand. Why do the tree sizes change because of the written floating point parameters?
Source code references
If you want to take a look at the difference in the code that led to this issue reported here and in https://github.com/fmtlib/fmt/issues/1873, the code change is this one: https://github.com/feedzai/LightGBM/pull/2/commits/da3d8f2397d87fcb784c854564a830763a24048d.
The images above were generated with the two commits:
red: The "base" code which works exactly like LightGBM v3.0.0 - https://github.com/feedzai/LightGBM/pull/2/commits/f65512fa9753172cc9e0728c4be67dda8a4f2fa8
green: The patched code - https://github.com/feedzai/LightGBM/pull/2/commits/da3d8f2397d87fcb784c854564a830763a24048d
PR's in progress (still in very early development):
LightGBM PR: https://github.com/feedzai/LightGBM/pull/2
Provider which uses LightGBM and where I have all test resources and functional and integration test code: https://github.com/feedzai/feedzai-openml-java/pull/53
@guolinke The issue described above can be the root cause of our previous issue where maintainers of the Julia package had to remove the tree_sizes field to load a model created on another machine: #3137.
To further reinforce the "bug" origin, I added code to strip away trailing 0's in the decimal digits of numbers, and it already works properly - i.e., there are no differences in the output models! (This is the temporary hack I proposed on the https://github.com/fmtlib/fmt/issues/1873).
This commit https://github.com/feedzai/LightGBM/pull/2/commits/c34c43da599b1bf60884d3576970103c26cc6204 adds that code (which is not a permanent solution as it's a hack which reduces speed and is not 'pretty').
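The idea of the hack can be pictured like this (my reconstruction, not the commit's actual code):
```cpp
#include <string>

// Strip redundant trailing zeros from a formatted double,
// e.g. "1.2300e-05" -> "1.23e-05" and "5.0" -> "5".
std::string StripTrailingZeros(const std::string& s) {
  const std::size_t e = s.find_first_of("eE");
  std::string mantissa = (e == std::string::npos) ? s : s.substr(0, e);
  const std::string exponent = (e == std::string::npos) ? "" : s.substr(e);
  if (mantissa.find('.') != std::string::npos) {
    while (!mantissa.empty() && mantissa.back() == '0') mantissa.pop_back();
    if (!mantissa.empty() && mantissa.back() == '.') mantissa.pop_back();
  }
  return mantissa + exponent;
}
```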
I was already able to replace all write calls except DoubleToStr again due to divergences in fmt and snprintf. However there's a hack to fix it too until they fix it in the library: truncating the output string to the maximum number of significant digits: https://github.com/fmtlib/fmt/issues/1875
@AlbertoEAF the tree-size is the string size of the tree model content. We use it for multi-threaded loading of tree models.
Ah! So it's no bug! I get it now, shouldn't be a problem then, thanks!
Hello @guolinke @StrikerRUS I have good news, I've completed the code changes and it's passing all the tests!
Instead of relying on Ryu, I actually used fmt to write values to strings, as it provides similar performance and its API is much more in line with the rest of the LightGBM codebase. To do string->double parsing I am using this library: https://github.com/lemire/fast_double_parser which provides maximal floating-point string parsing resolution and matches the standard library parsing outputs bit by bit.
Hence, transforming data to strings and string arrays should now be faster, and parsing strings to doubles with StringToArray should now be significantly faster as well.
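As an illustration of the parsing side (a sketch against fast_double_parser's documented API, not this PR's code):
```cpp
#include <cstdio>

#include "fast_double_parser.h"

int main() {
  double value = 0.0;
  // parse_number returns a pointer just past the parsed number,
  // or nullptr if the input is not a valid double.
  const char* end = fast_double_parser::parse_number("2.718281828459045", &value);
  if (end == nullptr) {
    std::fprintf(stderr, "parse error\n");
    return 1;
  }
  std::printf("parsed %.17g\n", value);
  return 0;
}
```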
I'd like to streamline the code more by merging Common's __StringToTHelperFast and __StringToTHelper; however, Common::Atof seems to diverge from the standard library's stringstreams and fast_double_parser's resolution. Those two are equivalent and provide maximal parsing resolution.
Common::Atof string->double parsing diverges around the last 2 decimal places. See some examples:
And seems to have less resolution:
As such I had to leave both __StringToTHelperFast and __StringToTHelper separate to maintain the current LightGBM behaviour without affecting read/write floating-point resolution.
Common::Atof string->double parsing diverges around the last 2 decimal places. See some examples:
Looks like the difference is in the 18th+ digit in all examples. And according to IEEE 754, a double gives only the first 15-17 significant digits. So, I believe it is OK to have inconsistent digits after the 17th place.
Hmm that's interesting, in that case we could replace Atof and further simplify the codebase :)
Thanks Nikita! I'll run some tests tomorrow and see if the scores don't change.
I remember that atof is used for precision-insensitive conversions (it is faster), like split_gain.
If the new solution is faster and more precise, we can switch to it.
Fixed via #3405.
Thanks a lot @AlbertoEAF !
Thanks a lot for all the help @StrikerRUS @jameslamb ;)
|
gharchive/issue
| 2020-07-30T14:53:30 |
2025-04-01T06:39:34.295518
|
{
"authors": [
"AlbertoEAF",
"StrikerRUS",
"guolinke"
],
"repo": "microsoft/LightGBM",
"url": "https://github.com/microsoft/LightGBM/issues/3267",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2177133364
|
Smart assistant step counter (智能助手 运动计步)
@kant @jameslamb
We're reporting this, please stop.
|
gharchive/issue
| 2024-03-09T08:21:45 |
2025-04-01T06:39:34.300408
|
{
"authors": [
"AB123456789Pp",
"jameslamb"
],
"repo": "microsoft/LightGBM",
"url": "https://github.com/microsoft/LightGBM/issues/6355",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1972124975
|
[python-package] Accept numpy generators as random_state
This PR adds support for numpy generators as possible inputs for random_state. Generators are the recommended way of drawing random numbers from numpy, while RandomState is deprecated. Lots of other software such as SciPy have moved towards Generator and/or allow both (example: scipy.sparse.random).
Added a compat entry and quoted type hints for running with older numpy.
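Illustrative usage once this is supported (a sketch, not code from the PR):
```python
import numpy as np
import lightgbm as lgb

# numpy's recommended Generator API, instead of the legacy RandomState
rng = np.random.default_rng(42)

# random_state now also accepts a Generator (in addition to int / RandomState)
clf = lgb.LGBMClassifier(random_state=rng)
```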
I'm not sure why the linter is complaining about seemingly unrelated files. This is what I found in the logs:
```text
python-package/lightgbm/basic.py:805: error: Incompatible return value type (got "tuple[Any, list[str], list[str] | list[int], list[list[Any]]]", expected "tuple[Any, list[str], list[str], list[list[Any]]]") [return-value]
python-package/lightgbm/basic.py:2939: error: Array constructor argument 1 of type "map[int]" is not convertible to the array element type "Iterable[c_char_p]" [misc]
python-package/lightgbm/basic.py:2953: error: Array constructor argument 1 of type "map[int]" is not convertible to the array element type "Iterable[c_char_p]" [misc]
python-package/lightgbm/basic.py:4619: error: Array constructor argument 1 of type "map[int]" is not convertible to the array element type "Iterable[c_char_p]" [misc]
python-package/lightgbm/basic.py:4633: error: Array constructor argument 1 of type "map[int]" is not convertible to the array element type "Iterable[c_char_p]" [misc]
python-package/lightgbm/basic.py:4843: error: Array constructor argument 1 of type "map[int]" is not convertible to the array element type "Iterable[c_char_p]" [misc]
python-package/lightgbm/basic.py:4859: error: Array constructor argument 1 of type "map[int]" is not convertible to the array element type "Iterable[c_char_p]" [misc]
python-package/lightgbm/engine.py:294: error: Incompatible types in assignment (expression has type "list[tuple[str, str, float, bool]] | list[tuple[str, str, float, bool, float]]", variable has type "list[tuple[str, str, float, bool]]") [assignment]
Found 8 errors in 2 files (checked 9 source files)
```
None of those files are being modified here.
Those mypy errors don't actually cause the lint CI job to fail:
https://github.com/microsoft/LightGBM/blob/aeafccfbfb5c223d33b61ebe0f1e8b5592249151/.ci/lint-python.sh#L19-L22
That job is failing on this PR for reasons unrelated to this PR, caused by the latest release of {lintr}, which will be fixed once we merge #6180.
|
gharchive/pull-request
| 2023-11-01T10:44:27 |
2025-04-01T06:39:34.304101
|
{
"authors": [
"david-cortes",
"jameslamb"
],
"repo": "microsoft/LightGBM",
"url": "https://github.com/microsoft/LightGBM/pull/6174",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
547142124
|
Improved Solution File Handling
This change makes a lot of improvements to how the MSB4U solution file is generated and, more importantly, to what information it tries to bring forward when overwriting this file. This means you can now modify and add items to the solution file and have MSB4U preserve them.
This change addresses:
#86
nit: consider moving these variables into a wrapper struct/class to decrease the number of arguments in this function call
Refers to: Source/MSBuildTools.Unity/Packages/com.microsoft.msbuildforunity/Editor/ProjectGenerator/Scripts/Exporters/TemplatedProjectExporter.cs:56 in 283a6d5.
|
gharchive/pull-request
| 2020-01-08T22:34:08 |
2025-04-01T06:39:34.306392
|
{
"authors": [
"andreiborodin",
"chrisfromwork"
],
"repo": "microsoft/MSBuildForUnity",
"url": "https://github.com/microsoft/MSBuildForUnity/pull/96",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1050327723
|
Traversal SDK net6 support
Hi!
In dotnet-affected, we create MSBuild Traversal SDK projects using this code:
```csharp
var projectRootElement = @"<Project Sdk=""Microsoft.Build.Traversal/3.0.3""></Project>";
var stringReader = new StringReader(projectRootElement);
var xmlReader = new XmlTextReader(stringReader);
var root = ProjectRootElement.Create(xmlReader);
var project = new Project(root);
```
This code works fine on .NET 5 but fails on .NET 6. I suspect it may be because Microsoft.Build.Traversal needs to support .NET 6?
The stacktrace looks like this:
```text
Microsoft.Build.Exceptions.InvalidProjectFileException : SDK Resolver Failure: "The SDK resolver "NuGetSdkResolver" failed while attempting to resolve the SDK "Microsoft.NET.Sdk". Exception: "System.ArgumentNullException: Value cannot be null. (Parameter 'path')
at System.IO.Directory.GetParent(String path)
at Microsoft.Build.NuGetSdkResolver.GlobalJsonReader.GetMSBuildSdkVersions(SdkResolverContext context)
at Microsoft.Build.NuGetSdkResolver.NuGetSdkResolver.TryGetNuGetVersionForSdk(String id, String version, SdkResolverContext context, Object& parsedVersion)
at Microsoft.Build.NuGetSdkResolver.NuGetSdkResolver.Resolve(SdkReference sdkReference, SdkResolverContext resolverContext, SdkResultFactory factory)
at Microsoft.Build.BackEnd.SdkResolution.SdkResolverService.ResolveSdk(Int32 submissionId, SdkReference sdk, LoggingContext loggingContext, ElementLocation sdkReferenceLocation, String solutionPath, String projectPath, Boolean interactive, Boolean isRunningInVisualStudio)"" /home/runner/.nuget/packages/microsoft.build.traversal/3.0.3/Sdk/Sdk.props
Stack Trace:
at Microsoft.Build.Shared.ProjectErrorUtilities.ThrowInvalidProject(String errorSubCategoryResourceName, IElementLocation elementLocation, String resourceName, Object[] args)
at Microsoft.Build.Shared.ProjectErrorUtilities.ThrowInvalidProject[T1](IElementLocation elementLocation, String resourceName, T1 arg0)
at Microsoft.Build.Evaluation.Evaluator`4.ExpandAndLoadImportsFromUnescapedImportExpressionConditioned(String directoryOfImportingFile, ProjectImportElement importElement, List`1& projects, SdkResult& sdkResult, Boolean throwOnFileNotExistsError)
at Microsoft.Build.Evaluation.Evaluator`4.ExpandAndLoadImports(String directoryOfImportingFile, ProjectImportElement importElement, SdkResult& sdkResult)
at Microsoft.Build.Evaluation.Evaluator`4.EvaluateImportElement(String directoryOfImportingFile, ProjectImportElement importElement)
at Microsoft.Build.Evaluation.Evaluator`4.PerformDepthFirstPass(ProjectRootElement currentProjectOrImport)
at Microsoft.Build.Evaluation.Evaluator`4.EvaluateImportElement(String directoryOfImportingFile, ProjectImportElement importElement)
at Microsoft.Build.Evaluation.Evaluator`4.PerformDepthFirstPass(ProjectRootElement currentProjectOrImport)
at Microsoft.Build.Evaluation.Evaluator`4.Evaluate()
at Microsoft.Build.Evaluation.Evaluator`4.Evaluate(IEvaluatorData`4 data, ProjectRootElement root, ProjectLoadSettings loadSettings, Int32 maxNodeCount, PropertyDictionary`1 environmentProperties, ILoggingService loggingService, IItemFactory`2 itemFactory, IToolsetProvider toolsetProvider, ProjectRootElementCacheBase projectRootElementCache, BuildEventContext buildEventContext, ISdkResolverService sdkResolverService, Int32 submissionId, EvaluationContext evaluationContext, Boolean interactive)
at Microsoft.Build.Evaluation.Project.ProjectImpl.Reevaluate(ILoggingService loggingServiceForEvaluation, ProjectLoadSettings loadSettings, EvaluationContext evaluationContext)
at Microsoft.Build.Evaluation.Project.ProjectImpl.ReevaluateIfNecessary(ILoggingService loggingServiceForEvaluation, ProjectLoadSettings loadSettings, EvaluationContext evaluationContext)
at Microsoft.Build.Evaluation.Project.ProjectImpl.ReevaluateIfNecessary(EvaluationContext evaluationContext)
at Microsoft.Build.Evaluation.Project.ProjectImpl.Initialize(IDictionary`2 globalProperties, String toolsVersion, String subToolsetVersion, ProjectLoadSettings loadSettings, EvaluationContext evaluationContext)
at Microsoft.Build.Evaluation.Project..ctor(ProjectRootElement xml, IDictionary`2 globalProperties, String toolsVersion, String subToolsetVersion, ProjectCollection projectCollection, ProjectLoadSettings loadSettings, EvaluationContext evaluationContext)
at Microsoft.Build.Evaluation.Project..ctor(ProjectRootElement xml)
at Affected.Cli.Formatters.TraversalProjectOutputFormatter.Format(IEnumerable`1 projects) in /home/runner/work/dotnet-affected/dotnet-affected/src/dotnet-affected/Infrastructure/Formatters/TraversalProjectOutputFormatter.cs:line 23
at Affected.Cli.Tests.Formatters.TraversalFormatterTests.Using_single_project_should_contain_project() in /home/runner/work/dotnet-affected/dotnet-affected/src/dotnet-affected.Tests/Formatters/TraversalFormatterTests.cs:line 25
--- End of stack trace from previous location ---
```
Full logs can be found here
Link to source code
dotnet info
.NET SDK (reflecting any global.json):
Version: 6.0.100
Commit: 9e8b04bbff
Runtime Environment:
OS Name: ubuntu
OS Version: 20.04
OS Platform: Linux
RID: ubuntu.20.04-x64
Base Path: /home/runner/work/dotnet-affected/dotnet-affected/eng/.dotnet/sdk/6.0.100/
Host (useful for support):
Version: 6.0.0
Commit: 4822e3c3aa
.NET SDKs installed:
3.1.415 [/home/runner/work/dotnet-affected/dotnet-affected/eng/.dotnet/sdk]
5.0.403 [/home/runner/work/dotnet-affected/dotnet-affected/eng/.dotnet/sdk]
6.0.100 [/home/runner/work/dotnet-affected/dotnet-affected/eng/.dotnet/sdk]
Any guidance is appreciated!
Thanks!
Leo.
Looks like an MSBuild bug reported here: https://github.com/dotnet/msbuild/issues/7035
Looks like the same issue.
I've tried updating MSBuild to v17, which targets .NET 6, just in case, but I get the same error.
Based on the stack trace, I suspect that if you save the project to disk first it might work. It seems like the path to the project is null, which causes an exception later on.
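An untested sketch of that workaround (the temp-file path here is my invention, not from this thread):
```csharp
var root = ProjectRootElement.Create(xmlReader);

// Give the in-memory project a physical location so the NuGet SDK resolver
// has a non-null path to walk when it searches for global.json.
root.FullPath = Path.Combine(Path.GetTempPath(), "traversal.proj");
root.Save();

var project = new Project(root);
```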
I'm going to close this in favor of https://github.com/NuGet/Home/issues/11376
Hi @jeffkl , sorry for the dumb question, but has this been released yet?
I'm not sure what I need to update to get these changes; is this included in .NET 6.0.1?
@leonardochaia the bug was fixed in the next release which will be in Visual Studio 17.1 and .NET 6.0.200 which should be available in early February. You can also use preview versions if you need the functionality right away. https://github.com/dotnet/installer#installers-and-binaries
|
gharchive/issue
| 2021-11-10T21:31:15 |
2025-04-01T06:39:34.314716
|
{
"authors": [
"jeffkl",
"leonardochaia"
],
"repo": "microsoft/MSBuildSdks",
"url": "https://github.com/microsoft/MSBuildSdks/issues/309",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2409373165
|
[Bug - Partner Center] Certification fails for MV3 extension having both background.service_worker and background.script defined in manifest
The issue
Google/Chrome, Mozilla/Firefox and Apple/Safari all allow MV3 webextensions to define both background.service_worker and background.scripts in the manifest. In such cases the browser will use only one of them, preferring the service_worker if the browser supports it. This is documented here: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/manifest.json/background#browser_support
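For illustration, the cross-browser pattern in question looks roughly like this (trimmed sketch, not the extension's full manifest):
```json
{
  "manifest_version": 3,
  "background": {
    "service_worker": "background/flickr_fixr.js",
    "scripts": ["background/flickr_fixr.js"],
    "type": "module"
  }
}
```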
While making the latest version of my Flickr Fixr webextension, I didn't even consider that there could be a problem when uploading to MS Partner Center. I have tested the extension by manually installing it in Edge, and it works perfectly.
However when I upload to Partner Center, I get "package failed certification" error:
The following checks failed:
Package acceptance validation error: The background.scripts field cannot be used with manifest version 3. Use the background.service_worker field instead. Error code: background: { "scripts": [ "background/flickr_fixr.js" ], "service_worker": "background/flickr_fixr.js", "type": "module" } Line: 38 Column: 17
To Reproduce
I'm guessing the issue can be reproduced by uploading any MV3 webextension that has both background.service_worker and background.scripts defined in the manifest. But you could do it with mine:
Steps to reproduce the behavior:
Download version 2.5.0 of Flickr Fixr from https://github.com/StigNygaard/Stigs_Flickr_Fixr/releases/tag/2.5.0 (FlickrFixr250.zip)
Upload FlickrFixr250.zip as a webextension in Partner Center
Expected behavior
I had hoped/assumed it would accept the extension. Maybe with a warning, but not an error stopping me from publishing it.
Screenshots
Desktop
Probably not relevant, but I logged into Partner Center using Firefox 128 on Windows 11.
Additional context
https://discourse.mozilla.org/t/mv3-background-scripts-vs-serviceworkers-ms-edge-didnt-get-the-memo/133613
https://blog.mozilla.org/addons/2024/03/13/manifest-v3-manifest-v2-march-2024-update/
https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/manifest.json/background#browser_support
https://github.com/StigNygaard/Stigs_Flickr_Fixr
https://github.com/StigNygaard/Stigs_Flickr_Fixr/blob/master/WebExtension/manifest.json
Current Flickr Fixr available for Edge: https://microsoftedge.microsoft.com/addons/detail/ieinimkepkfmmpeakdgnoikimokffneh
Okay, I see I'm not the first one reporting this:
https://github.com/microsoft/MicrosoftEdge-Extensions/issues/136
https://github.com/microsoft/MicrosoftEdge-Extensions/issues/165
Hi @StigNygaard, we want to express our gratitude for your understanding and patience. Please be assured that similar feedback has already been forwarded to our engineering team, and they are diligently working on it. Your patience during this process is greatly appreciated.
We understand the importance of this issue and once we have an update or need further information, we will get back to you promptly. We appreciate your contribution to improving the platform.
Bonus question/comment. A bit off-topic, but only a bit: it is still about cross-browser flexibility in the manifest file and what Partner Center will allow me to upload...
I have another extension which requests the two related permissions "menus" and "contextMenus". Safari recognizes "menus" as an alias for "contextMenus", and in Firefox "menus" represents a superset of "contextMenus", adding support for finding the exact DOM element a context menu was opened from. Chromium browsers do not recognize the "menus" permission.
My extension takes advantage of the extra feature (in Firefox) of "menus" when available, but also works without the extra feature. I use it by eventually redefining contextMenus like this:
```js
globalThis.browser ??= chrome;
if (browser.menus?.getTargetElement) {
    browser.contextMenus = browser.menus;
}
```
So the question is, would Partner Center accept a manifest defining permissions something like this (with both "menus" and "contextMenus"):
"permissions": [
"scripting",
"contextMenus",
"menus",
"storage"
],
I can install the extension manually in Edge. It works fine.
Hey @StigNygaard, thank you for providing additional information. We will respond with an update once we hear back from our team. We appreciate your patience in the meantime.
It sounds like Microsoft plans to fix this, but is there any ETA?
Hey @StigNygaard, we wanted to inform you that the feature you inquired about is currently on our backlog. We will update you once we have an update on this feature.
We appreciate your understanding and patience.
|
gharchive/issue
| 2024-07-15T18:36:21 |
2025-04-01T06:39:34.376306
|
{
"authors": [
"ManikanthMSFT",
"StigNygaard"
],
"repo": "microsoft/MicrosoftEdge-Extensions",
"url": "https://github.com/microsoft/MicrosoftEdge-Extensions/issues/172",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|