id | text | source | created | added | metadata
---|---|---|---|---|---
725575374 | Remove Back Link from Feedback thank you example
What is the context of this PR?
Remove the Back Link from the Feedback thank-you example, as it will not be on the page when implemented in the runner
How to review
Check that the Feedback examples look and work as they should
We are not planning on removing the back link from the pattern at the moment
| gharchive/pull-request | 2020-10-20T13:15:25 | 2025-04-01T06:37:20.437141 | {
"authors": [
"rmccar"
],
"repo": "ONSdigital/design-system",
"url": "https://github.com/ONSdigital/design-system/pull/1110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2408442312 | update ons-grid--flex to ons-grid-flex
What is the context of this PR?
Fixes #3259. This PR renames the utility class from ons-grid--flex to ons-grid-flex.
Please note that this may be a breaking change.
How to Resolve the Breaking Change
Search your codebase for any occurrences of ons-grid--flex.
Replace all instances with the new class name ons-grid-flex.
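For illustration, here is a minimal, hypothetical codemod that performs the rename across a checkout; the search root and file extensions are assumptions, so adjust them for your project before running it:

# Hypothetical bulk rename of the utility class across a checkout.
# The search root and file extensions are assumptions, not part of this PR.
from pathlib import Path

OLD, NEW = "ons-grid--flex", "ons-grid-flex"

for path in Path(".").rglob("*"):
    if path.is_file() and path.suffix in {".html", ".njk", ".scss", ".js", ".md"}:
        text = path.read_text(encoding="utf-8")
        if OLD in text:
            path.write_text(text.replace(OLD, NEW), encoding="utf-8")
            print(f"updated {path}")

Running a script like this with version control in place makes it easy to review the resulting diff before committing.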
How to review this PR
Verify that all instances of ons-grid--flex have been renamed to ons-grid-flex.
Ensure that all references in the documentation, examples, and tests have been updated accordingly.
Confirm that all tests pass.
Checklist
This needs to be completed by the person raising the PR.
[x] I have selected the correct Assignee
[x] I have linked the correct Issue
The PR description needs updating. Also, as you have now started using the Sass syntax in this file, I was wondering if we want to restructure the rest of the file using Sass syntax.
| gharchive/pull-request | 2024-07-15T10:55:57 | 2025-04-01T06:37:20.441041 | {
"authors": [
"precious-onyenaucheya-ons",
"rmccar"
],
"repo": "ONSdigital/design-system",
"url": "https://github.com/ONSdigital/design-system/pull/3268",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
337058918 | Feature/instance to code relation
What
Update end to end tests to import the CPIH codelist into neo4j. This allows the dimension importer to create relationships between the instance node and the codes that it relates to.
How to review
Review changes / test
End to end test using these changes:
https://github.com/ONSdigital/dp-api-tests/pull/94
https://github.com/ONSdigital/dp-dimension-importer/pull/37
https://github.com/ONSdigital/dp-dataset-exporter-xlsx/pull/68
https://github.com/ONSdigital/dp-dataset-exporter/pull/43
https://github.com/ONSdigital/dp-download-service/pull/29
Who can review
Anyone
Can you increase the default timeout to 30 or 45 seconds, as I cannot get the instance to complete in time? Other than that, seems good 👍
| gharchive/pull-request | 2018-06-29T16:18:27 | 2025-04-01T06:37:20.444313 | {
"authors": [
"CarlHembrough",
"mattrout92"
],
"repo": "ONSdigital/dp-api-tests",
"url": "https://github.com/ONSdigital/dp-api-tests/pull/94",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
452231721 | Work around a glitch in the ASE DDR model
The DDR simulation model's Avalon ready/enable protocol deliberately uses clock offsets, which wreak havoc even in the Qsys-supplied bridges. Adding a small delay to the simulated DDR clock passed to the AFU causes the AFU to sample signals when they are stable.
This appears to have been the cause of random failures in hello_mem_afu simulation.
Tests: http://sw-pert.altera.com/pert/pert.php?test_run_id=3410442
| gharchive/pull-request | 2019-06-04T22:15:31 | 2025-04-01T06:37:20.484888 | {
"authors": [
"michael-adler"
],
"repo": "OPAE/opae-sdk",
"url": "https://github.com/OPAE/opae-sdk/pull/1300",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
915693799 | DocsArchive-15.0.0.0 Auto archive content
Please don't merge this PR until target archive repo PR https://github.com/OPS-E2E-PPE/docs-archive-test-target/pull/896 is merged into live branch.
Auto archive content to https://github.com/opstest2/docs-archive-test-target.git
Docs Build status updates of commit 18a80dc:
:x: Validation status: errors
Please follow instructions here which may help to resolve issue.
Status: :x: Error
Details: [Error-GitBranchDeletedOrForcePushed] Cannot sync git repo to specified commit because branch Release_Archive_master_2021-06-09-01-48-33 has been deleted or has been force pushed
For more details, please refer to the build report.
If you see build warnings/errors with permission issues, it might be due to single sign-on (SSO) enabled on Microsoft's GitHub organizations. Please follow instructions here to re-authorize your GitHub account to Docs Build.
Note: Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the broken link report.
Note: Your PR may contain errors or warnings unrelated to the files you changed. This happens when external dependencies like GitHub alias, Microsoft alias, cross repo links are updated. Please use these instructions to resolve them.
For any questions, please try searching the docs.microsoft.com contributor guides or post your question in the Docs support channel.
| gharchive/pull-request | 2021-06-09T01:48:46 | 2025-04-01T06:37:20.520724 | {
"authors": [
"opstest2",
"v-alji"
],
"repo": "OPS-E2E-PPE/docs-archive-test-source",
"url": "https://github.com/OPS-E2E-PPE/docs-archive-test-source/pull/2000",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1145207025 | DocsArchive-15.0.0.0 Auto archive content
Please don't merge this PR until target archive repo PR https://github.com/OPS-E2E-PPE/docs-archive-test-target_82c94236-9731-4b30-ba51-24cc2fbf2b05/pull/1 is merged into live branch.
Auto archive content to https://github.com/opstest2/docs-archive-test-target_82c94236-9731-4b30-ba51-24cc2fbf2b05.git
Docs Build status updates of commit f8f79df:
:x: Validation status: errors
Please follow instructions here which may help to resolve issue.
Status: :x: Error
Details: [Error: GitBranchDeletedOrForcePushed] Cannot sync git repo to specified commit because branch Release_Archive_main_2022-02-21-01-17-20 has been deleted or has been force pushed
For more details, please refer to the build report.
If you see build warnings/errors with permission issues, it might be due to single sign-on (SSO) enabled on Microsoft's GitHub organizations. Please follow instructions here to re-authorize your GitHub account to Docs Build.
Note: Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the broken link report.
Note: Your PR may contain errors or warnings unrelated to the files you changed. This happens when external dependencies like GitHub alias, Microsoft alias, cross repo links are updated. Please use these instructions to resolve them.
For any questions, please try searching the docs.microsoft.com contributor guides or post your question in the Docs support channel.
| gharchive/pull-request | 2022-02-21T01:18:06 | 2025-04-01T06:37:20.529790 | {
"authors": [
"opstest2",
"v-alji"
],
"repo": "OPS-E2E-PPE/docs-archive-test-source",
"url": "https://github.com/OPS-E2E-PPE/docs-archive-test-source/pull/2549",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
891617495 | DocsArchive-15.0.0.0 Auto archive content
Auto archive content from https://github.com/opstest2/docs-archive-test-source.git
Docs Build status updates of commit fc4fbe9:
:x: Validation status: errors
Please follow instructions here which may help to resolve issue.
Status: :x: Error
Details: [Error-RuningBuildFailed] Some unexpected errors happened when running build, please open a ticket in https://aka.ms/SiteHelp and include the error report for our team to troubleshoot
For more details, please refer to the build report.
If you see build warnings/errors with permission issues, it might be due to single sign-on (SSO) enabled on Microsoft's GitHub organizations. Please follow instructions here to re-authorize your GitHub account to Docs Build.
Note: Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the broken link report.
Note: Your PR may contain errors or warnings unrelated to the files you changed. This happens when external dependencies like GitHub alias, Microsoft alias, cross repo links are updated. Please use these instructions to resolve them.
For any questions, please try searching the docs.microsoft.com contributor guides or post your question in the Docs support channel.
| gharchive/pull-request | 2021-05-14T05:23:34 | 2025-04-01T06:37:20.537831 | {
"authors": [
"opstest2",
"v-alji"
],
"repo": "OPS-E2E-PPE/docs-archive-test-target",
"url": "https://github.com/OPS-E2E-PPE/docs-archive-test-target/pull/811",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
498023910 | DocsArchive-1.0.0.0 Auto archive content
Please don't merge this PR until target archive repo PR https://github.com/OPS-E2E-PPE/docs-archive-test-target/pull/57 is merged into live branch.
Auto archive content to https://github.com/OPS-E2E-PPE/docs-archive-test-target.git
Docs Build status updates of commit 816be5d:
:x: Validation status: errors
Please follow instructions here which may help to resolve issue.
Status: :x: Error
Details: [Error] Cannot sync git repo to specified commit because branch Archive_master_2019-09-25-11-15-07 has been deleted or has been force pushed: fatal: Couldn't find remote ref Archive_master_2019-09-25-11-15-07
For more details, please refer to the build report.
Note: If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report.
| gharchive/pull-request | 2019-09-25T03:15:59 | 2025-04-01T06:37:20.543727 | {
"authors": [
"VSC-Service-Account",
"v-alji"
],
"repo": "OPS-E2E-PPE/docs-archive-test",
"url": "https://github.com/OPS-E2E-PPE/docs-archive-test/pull/124",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
188821995 | Add more warnings to Jenkins
I think we need to further clean up the code to get rid of warnings. Thus I suggest we add -Werror and additional warning detection to Jenkins CI. At the moment I get things like this:
[1059/1294] Building CXX object DataTransferKit/packages/Operators/test/CMakeFiles/dtk_hex_test_reference.dir/reference_implementation/DTK_ReferenceHexMesh.cpp.o
In file included from ../../DataTransferKit/packages/Interface/src/Client/DTK_EntitySet.hpp:47:0,
from ../../DataTransferKit/packages/Interface/src/OperatorVector/DTK_FunctionSpace.hpp:46,
from ../../DataTransferKit/packages/Operators/test/reference_implementation/DTK_ReferenceHexMesh.hpp:45,
from ../../DataTransferKit/packages/Operators/test/reference_implementation/DTK_ReferenceHexMesh.cpp:41:
../../DataTransferKit/packages/Interface/src/Client/DTK_EntityIterator.hpp:91:28: warning: ‘virtual DataTransferKit::EntityIterator DataTransferKit::EntityIterator::operator++(int)’ was hidden [-Woverloaded-virtual]
    virtual EntityIterator operator++( int );
                           ^~~~~~~~
In file included from ../../DataTransferKit/packages/Operators/test/reference_implementation/DTK_ReferenceHexMesh.cpp:50:0:
../../DataTransferKit/packages/Adapters/BasicGeometry/src/DTK_BasicEntitySet.hpp:83:21: warning: by ‘virtual DataTransferKit::EntityIterator& DataTransferKit::BasicEntitySetIterator::operator++()’ [-Woverloaded-virtual]
    EntityIterator &operator++() override;
                    ^~~~~~~~
One thing, though, is that some of Trilinos packages like STK are more warning-prone. I think the linear algebra portion, though, is mostly warnings free.
Remember that we also get warnings from TPLs when we build, which we have no control over. Would this flag cause the build to fail because of a warning from a TPL?
I usually have the following in my configure script:
SYSTEM_HEADERS=""
for header_dir in `echo $CPATH | sed 's/:/\ /g'`; do
    SYSTEM_HEADERS="$SYSTEM_HEADERS -isystem $header_dir"
done
...
-D CMAKE_CXX_FLAGS="-g -Wall -Wextra -Wno-unused-parameter $CXX_FLAGS_COMPILER_SPECIFIC $SYSTEM_HEADERS"
This way the compiler would ignore the headers coming from the things in your $CPATH (or whatever you put there). So TPLs should be fine. Most issues I see come from Trilinos packages.
I think this is a good idea then as long as we can configure this to just fail on any DTK warnings.
The problem is not so much the TPL but the other packages in Trilinos.
Fixed in #134.
| gharchive/issue | 2016-11-11T18:33:49 | 2025-04-01T06:37:20.552391 | {
"authors": [
"aprokop",
"dalg24",
"sslattery"
],
"repo": "ORNL-CEES/DataTransferKit",
"url": "https://github.com/ORNL-CEES/DataTransferKit/issues/130",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2495555254 | Refactor bnpy data clustering apps
Currently the data clustering implementation in the bnpy-based apps is functional, but it can be slow to converge when too much training data is added. It can also be awkward to re-cluster after the model is updated. In order to have automatic region of interest selection in Peregrine, the clustering implementation should be sped up. Some ideas for enhancements:
use a representative sample for training instead of the entire dataset
instead of training and clustering for each part sequentially, create training data based on all parts first and then cluster
create a Bnpy subclass of the base App class to streamline the apps themselves
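As a rough illustration of the first two ideas, the sketch below pools features from all parts and then draws a fixed-size random sample before clustering; the function names, array shapes, and sample size are assumptions rather than Myna's actual API:

# Hypothetical sketch: pool features from all parts, then subsample once
# before handing the data to bnpy. Names and sizes are assumptions.
import numpy as np

def representative_sample(data: np.ndarray, max_points: int = 50_000, seed: int = 0) -> np.ndarray:
    """Return a random subset of rows so training stays tractable."""
    if len(data) <= max_points:
        return data
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(data), size=max_points, replace=False)
    return data[idx]

# training_data = representative_sample(np.vstack([part_features(p) for p in parts]))  # part_features is hypothetical
# model = train_bnpy_model(training_data)  # hypothetical training call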
Removing this from the Peregrine demonstration milestone, because of the capability implemented in #25. Updating the bnpy app is still planned, but lower priority for the moment.
| gharchive/issue | 2024-08-29T20:52:30 | 2025-04-01T06:37:20.554838 | {
"authors": [
"gknapp1"
],
"repo": "ORNL-MDF/Myna",
"url": "https://github.com/ORNL-MDF/Myna/issues/24",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1873574286 | misspelling of accordion in readme
Readme has been updated thanks!
| gharchive/issue | 2023-08-30T12:27:05 | 2025-04-01T06:37:20.599606 | {
"authors": [
"cblanquera",
"ezerssss"
],
"repo": "OSSPhilippines/frui",
"url": "https://github.com/OSSPhilippines/frui/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2291425994 | Upgrade dependency
Upgrade jsonwebtoken@8.5.1 to version 9.0.2
Please ignore this PR and close it. Generated by TestIM
| gharchive/pull-request | 2024-05-12T17:23:40 | 2025-04-01T06:37:20.622458 | {
"authors": [
"PashaPal1974"
],
"repo": "OX-Security-Demo/Multi-currency-management",
"url": "https://github.com/OX-Security-Demo/Multi-currency-management/pull/1376",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
811357778 | Sceptile and TRs
I have been testing some TRs and TMs from gen 8 since the recent patch and discovered an issue. Sceptile is unable to learn Dragon Pulse through the Dragon Pulse TR. However, it learned Focus Blast through TRs, which means Dragon Pulse is glitched for Sceptile - likely other pokemon as well.
I'm unsure if the screenshot will show, but I'll try anyway.
This is working as intended. Sceptile didn't learn Dragon Pulse by TM in Gen 7, it was a tutor move. It still learns it by tutor as of now.
Odd, the website PokemonDB shows that Sceptile can learn Dragon Pulse via TR in the Crown Tundra DLC. Are DLC mons not considered gen 8 mons?
This will be fixed in the next day or two.
| gharchive/issue | 2021-02-18T18:35:24 | 2025-04-01T06:37:20.626505 | {
"authors": [
"ConversaBC",
"Vohras2"
],
"repo": "ObliqueNET/Server",
"url": "https://github.com/ObliqueNET/Server/issues/1011",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
290265006 | Incorrect Mega Medicham textures
https://puu.sh/z6kWa/05f9da5705.png
Sorry for the bad angle but it would appear that Mega Medicham is using the same textures as Sylveon, at least to some degree, as indicated by the out of place pinks and blues on the model.
Fixed
| gharchive/issue | 2018-01-21T11:18:32 | 2025-04-01T06:37:20.627636 | {
"authors": [
"Hunter1220",
"Rasgnarok"
],
"repo": "ObliqueNET/Terra",
"url": "https://github.com/ObliqueNET/Terra/issues/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2213755239 | Configuration upgrades
It was not possible to correctly set the rate. This has been resolved.
The cache of config items was not being updated correctly, this has been resolved.
Can't really claim much JS expertise, but it looks fine. Hope it works too 😄
| gharchive/pull-request | 2024-03-28T17:25:17 | 2025-04-01T06:37:20.629907 | {
"authors": [
"ajgosl",
"ulrikpedersen"
],
"repo": "Observatory-Sciences/aravis-detector",
"url": "https://github.com/Observatory-Sciences/aravis-detector/pull/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1724791563 | SLatissima parameters do not show
This snippet
https://github.com/OceanBioME/OceanBioME.jl/blob/1c5598ec482c97f92008250d6071c3d491a3e5c1/docs/src/appendix/params/SLatissima.md#L4-L5
doesn't seem to be doing anything.
Oh yeah, that doesn't work anymore. I'd quite like to document the default parameters somehow but haven't worked out how to do it yet.
| gharchive/issue | 2023-05-24T21:57:43 | 2025-04-01T06:37:20.631251 | {
"authors": [
"jagoosw",
"navidcy"
],
"repo": "OceanBioME/OceanBioME.jl",
"url": "https://github.com/OceanBioME/OceanBioME.jl/issues/98",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2453406892 | Add FieldSet.from_a_grid_dataset() method
Parcels has specific methods for loading B-grid and C-grid datasets, but not for A-grids. This is because A-grid (linear) interpolation is default, so FieldSet.from_netcdf() automatically assumes an A-grid.
However, for consistency and to help new users, it might be nice to add an explicit FieldSet.from_a_grid_dataset() method to Parcels. This method would then simply call from_netcdf().
Note that while this is done, it might also be good to update a few mentions of from_bgrid() and from_cgrid() in the docstrings to the correct from_b_grid_dataset() and from_c_grid_dataset()
Hi @erikvansebille , I'd like to work on this enhancement. Could you please assign this issue to me?
Thank you!
Thanks @KOMPALALOKESH, for wanting to pick this up! I've assigned this Issue to you.
I think the key starting point is to make a new method in parcels/fieldset.py that returns a cls.from_netcdf(), a bit like how fieldset.from_b_grid() is structured but without many (any?) adjustments.
It would also be nice to add a unit test to tests/test_fieldset.py.
Good luck, and let us know if you need any help!
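For reference, a minimal sketch of what such a method could look like, following the guidance above; the parameter list is an assumption based on this discussion, not the final Parcels API:

# Hypothetical sketch for parcels/fieldset.py; the surrounding class body is
# elided and the signature is an assumption based on the discussion above.
class FieldSet:
    # ... existing FieldSet implementation ...

    @classmethod
    def from_a_grid_dataset(cls, filenames, variables, dimensions, **kwargs):
        """Load a FieldSet from an A-grid dataset.

        A-grid (linear) interpolation is already the default in from_netcdf(),
        so this wrapper exists purely for consistency with
        from_b_grid_dataset() and from_c_grid_dataset().
        """
        return cls.from_netcdf(filenames, variables, dimensions, **kwargs)

A matching unit test in tests/test_fieldset.py could then simply assert that the new method returns the same FieldSet as an equivalent from_netcdf() call.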
@KOMPALALOKESH if you're interested and want more than this issue, "Enable pyupgrade on ruff linting" from #1620 is well defined and might be interesting from a technical standpoint if you haven't worked with Ruff before. Its a nice Python tool, and modern Python is consolidating towards it for handling QAQC.
@erikvansebille , This PR Link introduces the FieldSet.from_a_grid_dataset() method in "parcels/fieldset.py" to handle A-grid datasets. A corresponding unit test has been added in "tests/test_fieldset.py" to validate this functionality.
Please review the changes and the test case. If everything is in order, kindly merge.
| gharchive/issue | 2024-08-07T12:49:44 | 2025-04-01T06:37:20.637180 | {
"authors": [
"KOMPALALOKESH",
"VeckoTheGecko",
"erikvansebille"
],
"repo": "OceanParcels/parcels",
"url": "https://github.com/OceanParcels/parcels/issues/1642",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1173176413 | Fix double email_user/confirm due to language render
Add a persistent (per refreshed session) flag to skip any actions sparked by future renders after it has already been authenticated.
Tested as much as I can, but do try at kp in case I broke something.
We patched it up on the backend, but this should help anyway... will double-check later tomorrow.
Yeah, this will prevent it from jumping to SignUp flow page due to the re-render.
(couldn't really replicate an actual problem -- had to force-render it to get the double confirm. Maybe only happens in certain locale).
| gharchive/pull-request | 2022-03-18T04:27:25 | 2025-04-01T06:37:20.721120 | {
"authors": [
"infinite-persistence",
"tzarebczan"
],
"repo": "OdyseeTeam/odysee-frontend",
"url": "https://github.com/OdyseeTeam/odysee-frontend/pull/1141",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1145340854 | Pure client side Graph example
Hi All,
I have been studying the various examples provided here; however, I'm unable to see any examples which provide a pure client-side app which makes Graph calls. Is there a reason for this?
It seems even the hello world with no back end still hosts a "simpleAuth" back-end application. I'm looking for an example which simply uses the authorization code flow with PKCE to get a token and call the MS Graph. Is there some technical reason why this is not available?
No such sample seems to be available.
@AnthonyPhan - As per my understanding, the standard and recommended way is to do the authentication on the server side and then call the APIs from the client side.
However, you can implement the authorization code flow on the client side and make the Graph API calls using the generated token on the client side.
@AnthonyPhan - Please let us know if you need any further details or shall we close this issue?
@v-chetsh it looks like the "simple-auth" service is being deprecated, and as of a recent update of teamsfx the standard hello world uses a complete client-side implementation of the authorization code flow with PKCE, as suggested by this post.
Here is a further reference:
https://stackoverflow.com/q/70388145/3095420
| gharchive/issue | 2022-02-21T05:25:40 | 2025-04-01T06:37:20.729237 | {
"authors": [
"AnthonyPhan",
"Prasad-MSFT",
"v-chetsh"
],
"repo": "OfficeDev/Microsoft-Teams-Samples",
"url": "https://github.com/OfficeDev/Microsoft-Teams-Samples/issues/254",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1726294584 | Deployment form mentions that custom domains are recommended
The documentation correctly states the advantages of having Azure Front Door for Company Communicator; however, the deployment form we get when executing the default ARM deployment template states that custom domains are recommended. Could you please review that?
Hi @cristianoag ,
Thanks for highlighting. We will check on this.
Hi @cristianoag ,
In the new release, the above-mentioned correction has been taken care of. Please let us know if you have any concerns.
| gharchive/issue | 2023-05-25T18:29:30 | 2025-04-01T06:37:20.731487 | {
"authors": [
"cristianoag",
"gsv022"
],
"repo": "OfficeDev/microsoft-teams-apps-company-communicator",
"url": "https://github.com/OfficeDev/microsoft-teams-apps-company-communicator/issues/1054",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2020258793 | Deploy failed with"There was a conflict. Conflict with existing ScmType: ExternalGit"
Hi, I deployed CC 5.5 in another tenant, but it failed. I found the closed issue #1078 and followed the steps (disconnect and reconnect, wait for the log to show success (Active), and redeploy), but it still fails with "There was a conflict. Conflict with existing ScmType: ExternalGit".
I went back to the Deployment Center and External Git showed "Error fetching information"; however, after I disconnected and connected again, the redeploy always failed with "There was a conflict. Conflict with existing ScmType: ExternalGit".
I am sure there is no underscore (_) or space in any of the field values, including the subscription & resource group.
@gsv022 It's working, and the deployment succeeded. Thanks for your support.
Hi @reterfreeman ,
Thanks for the update. Please feel free to log an issue if anything comes up w.r.t. the standard version of CC.
| gharchive/issue | 2023-12-01T07:14:12 | 2025-04-01T06:37:20.735486 | {
"authors": [
"gsv022",
"reterfreeman"
],
"repo": "OfficeDev/microsoft-teams-apps-company-communicator",
"url": "https://github.com/OfficeDev/microsoft-teams-apps-company-communicator/issues/1245",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2156345494 | Author App is partially available to users if published.
Hi Team, In the recent version (5.5.1) of the Company Communicator, I deployed both the Author and User apps via the Teams admin console. In earlier versions, if a non-authorized UPN user attempted to add the Author app, it would get added in Teams but would not open, displaying an authorization error. However, in version 5.5.1, users can add the app and also view the card design module, which was previously unavailable. Despite this, they still cannot send messages from the Author app. Can we enforce the same level of restriction as in the earlier version, where non-authorized users were completely restricted from accessing the app?
Below are the screenshots of the old version author app added by random users after publishing it in Teams admin via Manage Apps.
Below is the screenshot of the new version 5.5.1 author app added by random users after publishing it in Teams Admin via Manage Apps.
Hi @Sheshank-TCS ,
Thanks for raising the query. Normally, in any version, unauthorized users won't have access to the CC Author app unless their UPN is added to the UPN list. Regarding the error, check with that specific user on the browser version of Teams so that the user will get the relevant error when they try to access the application.
Hi @gsv022, I have validated on the Teams web version as well and it is still available. Although the message cannot be sent, it is not giving any access-denied error. Please suggest how this can be fixed.
Hi @Sheshank-TCS ,
Is it happening in New Teams as well?
Yes, this is happening with New Teams, Old Teams, and the web version. However, the old Company Communicator app is restricted everywhere for users who are not Authors.
Hi @Sheshank-TCS ,
Please share your mail id so that we can discuss the issue internally.
sheshank.dhoot@outlook.com you can use this one.
I have sent a mail. Please check and respond
Hi Sai, thanks for your response; I have replied to it.
I am facing the same issue; is there any update on this issue?
Hey Chris, Still waiting to hear from the team.
Hi @Sheshank-TCS ,
We have tried in different tenants and were not able to replicate the issue. Please deploy one more instance and see if the issue persists.
Keep us posted
Working towards the implementation; will keep you posted.
Sure, please keep us posted
Hi Sai, just a query: have you validated setting up a second instance when an existing instance already exists? Maybe if you are setting up a single new instance it does not give the error.
Hi @Sheshank-TCS ,
We will check out this scenario and let you know.
@gsv022 We tried setting up the new instance and the Function app failed. Please suggest whether I should manually sync the resource from the Deployment Center again.
Hi @Sheshank-TCS ,
To recover the failed resources, go to the Azure portal --> go to the resource group (where the deployment has been done) --> select prep-function (or any failed resource) from the list of resources --> go to Deployment Center --> click on Settings. Below is the screenshot for reference.
If the resource is not pointing to the intended repository and is showing empty, please connect to the external Git and point it to the official repository (as above).
Once the status turns to Success (Active) in the logs section, you are good to proceed with the remaining steps of the deployment.
Please do the same for other failed resource as well.
After performing the above steps, don't attempt to redeploy again.
We have successfully deployed the latest app version 5.5.2 and the issue has been resolved; it now shows the disclaimer message as expected when non-authorized members add the app. Thank you, Sai, for all the help and support; appreciate it.
| gharchive/issue | 2024-02-27T11:26:21 | 2025-04-01T06:37:20.748249 | {
"authors": [
"Sheshank-TCS",
"chrischowsos",
"gsv022"
],
"repo": "OfficeDev/microsoft-teams-apps-company-communicator",
"url": "https://github.com/OfficeDev/microsoft-teams-apps-company-communicator/issues/1354",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2691287759 | Issue with Adding Sharepoint Extension.
When adding the SharePoint app, the documentation seems to be out of date for the new SharePoint experience. In the final step, "do you trust eoc-extention...", the new SharePoint has 2 options:
"Only enable this app" and "Enable this app and add it to all sites"
What is the correct choice?
@fajr365 , thank you for bringing this to our attention.
We will update our documentation in the upcoming releases. The correct option will be "Enable this app and add it to all sites".
I hope this clarifies your query. If you have any further questions or issues, please feel free to post here.
| gharchive/issue | 2024-11-25T16:02:40 | 2025-04-01T06:37:20.751002 | {
"authors": [
"fajr365",
"v-ajaysahu"
],
"repo": "OfficeDev/microsoft-teams-emergency-operations-center",
"url": "https://github.com/OfficeDev/microsoft-teams-emergency-operations-center/issues/297",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1179317022 | Problem with Page about Date
I would like to report a problem with this page.
Suggestion 1:
As I am looking for information on working with dates in office script, the text has a problem with the link in this paragraph:
"The next sample reads a date that's stored in Excel and translates it to a JavaScript Date object. It uses the date's numeric serial number as input for the JavaScript Date."
The link is directing to the Now function page.
Suggestion 2
The script shown can be simplified to the code below, because this snippet from the original (let excelDateValue = dateRange.getValue() as number;) returns an error.
function main(workbook: ExcelScript.Workbook) {
  // Read a date at cell A1 from Excel.
  let dateRange = workbook.getActiveWorksheet().getRange("A1").getValue();
  // Convert the Excel date to a JavaScript Date object.
  let javaScriptDate = new Date(Math.round((dateRange - 25569) * 86400 * 1000));
  console.log(javaScriptDate);
}
Thank you!
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 98c58cbf-aaf7-3d3a-48f4-ce9cebcf331c
Version Independent ID: 74821d34-eab6-ec0f-7a9f-55aca81975d9
Content: Basic scripts for Office Scripts in Excel on the web - Office Scripts
Content Source: docs/resources/samples/excel-samples.md
Product: excel
Technology: scripts
GitHub Login: @o365devx
Microsoft Alias: o365devx
Thanks for pointing out this issue @aletrovato. I'll get the page updated soon.
How amazing the speed of the response.
Thank you so much Alex!!!
Hi @aletrovato,
First, thank you for raising this issue. I have reached out to the product team to see about adding convenience methods to Office Scripts to make working with dates easier and more intuitive.
I wanted to follow up about your two suggestions. For the NOW function link, I had originally linked to that page because it contained the best description of the Excel date serial number. Is that link confusing? Should we just remove it?
For the sample not working, I could not reproduce that issue. Could you please provide a screenshot of the error or a copy of the text that is produced?
Thank you,
Hi Alex!
Your idea to develop a more practical way to work with dates is very welcome. When I tested today some scripts for date handling I ran into the problem of using the Date library (I didn't know it yet). The conversion of DataSerial types to standard JavaScript is confusing, perhaps it could also be an item in the Office Script help pages. I have written articles, produced a course and several classes for this on Youtube. I am trying to raise the flag and train users here to migrate to WEB and it would be of great help.
Suggestion 1 NOW - The redirect got confusing, if I may suggest, it would be interesting to remove the link or include the information that the NOW page illustrates how the date will look after formatting.
Suggestion 2: Solved. In the morning, when testing the code, getValue() was not recognized as a valid statement.
If I can help with anything, I am at your disposal.
Thank you very much!
| gharchive/issue | 2022-03-24T10:34:25 | 2025-04-01T06:37:20.791944 | {
"authors": [
"AlexJerabek",
"aletrovato"
],
"repo": "OfficeDev/office-scripts-docs",
"url": "https://github.com/OfficeDev/office-scripts-docs/issues/476",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
607635831 | Chrome
Actionable message support fails for web-based Outlook in Chrome, but not in Firefox or Safari. It's well known that much Microsoft client-side code has problems with Chrome (not sure why, my own code works fine), but there should be an asterisk on this page for that.
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: fc039d3b-b1e6-79ae-3b83-6e90d7d8af94
Version Independent ID: f72d624f-36b9-bab1-9c14-e9fc1e81082b
Content: What are actionable messages in Office 365? - Outlook Developer
Content Source: docs/actionable-messages/index.md
Product: outlook
Technology: o365-connectors
GitHub Login: @jasonjoh
Microsoft Alias: jasonjoh
@jmussman Actionable messages are supported in Chrome. Do you have a repro of your failure you can share with us?
You are correct. I rebooted since I posted that, and when I went back to take a screenshot for you it's magically working. My bad then, sorry 😊 I guess welcome to the wonderful world of so many things impacting so many others nowadays… I have no idea what could have caused that.
| gharchive/issue | 2020-04-27T15:21:20 | 2025-04-01T06:37:20.812316 | {
"authors": [
"jasonjoh",
"jmussman"
],
"repo": "OfficeDev/outlook-dev-docs",
"url": "https://github.com/OfficeDev/outlook-dev-docs/issues/862",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2146988218 | Update tutorials for deprecated circuit draw style #725
Technical discussion for #725
Update tutorials for deprecated circuit draw style #725
| gharchive/issue | 2024-02-21T15:07:50 | 2025-04-01T06:37:20.829600 | {
"authors": [
"OkuyanBoga"
],
"repo": "OkuyanBoga/hc-qiskit-machine-learning",
"url": "https://github.com/OkuyanBoga/hc-qiskit-machine-learning/issues/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2573729785 | 🛑 Helix Intranet API | Dev is down
In 1f8c904, Helix Intranet API | Dev (https://helix-biogen-institute-intranet-backend.onrender.com/status) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Helix Intranet API | Dev is back up in 4b86748 after 7 minutes.
| gharchive/issue | 2024-10-08T16:52:17 | 2025-04-01T06:37:20.847915 | {
"authors": [
"OluwaninsolaAO"
],
"repo": "OluwaninsolaAO/uptime-monitoring",
"url": "https://github.com/OluwaninsolaAO/uptime-monitoring/issues/144",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2583019693 | 🛑 Helix Intranet API | Dev is down
In 06e35eb, Helix Intranet API | Dev (https://helix-biogen-institute-intranet-backend.onrender.com/status) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Helix Intranet API | Dev is back up in b6d5847 after 7 minutes.
| gharchive/issue | 2024-10-12T13:31:19 | 2025-04-01T06:37:20.850483 | {
"authors": [
"OluwaninsolaAO"
],
"repo": "OluwaninsolaAO/uptime-monitoring",
"url": "https://github.com/OluwaninsolaAO/uptime-monitoring/issues/205",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
224051909 | Status of this LS compared to omnisharp-node-client?
Hi @david-driscoll . Glad to see this moving forward and the node-client being dropped! I'd like to use it ASAP as a replacement for omnisharp-node-client in the Eclipse IDE integration. However, before doing it, I'd like to know whether it's already better than the omnisharp-node-client or if you think it's too early to switch (some features may be missing with the LS compared to previous one?).
Thanks in advance
@mickaelistria too early to switch for sure... this is the infrastructure for the work. I'm going to packagify this and use it in OmniSharp.
The initial go forward plan is to add a new switch to OmniSharp --lsp that will kick it over into "lsp mode". I hope to get that done over the next month or so (gotta just find a day to crush out some code really). Then for third parties you'll just have to download the correct release from the https://github.com/OmniSharp/omnisharp-roslyn repository, and be able to use LSP natively.
Work has started! https://github.com/OmniSharp/omnisharp-roslyn/pull/969
| gharchive/issue | 2017-04-25T07:53:57 | 2025-04-01T06:37:20.863365 | {
"authors": [
"david-driscoll",
"mickaelistria"
],
"repo": "OmniSharp/csharp-language-server-protocol",
"url": "https://github.com/OmniSharp/csharp-language-server-protocol/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
757328697 | [info request] Hosting the language server code in Blazor?
We're using this library to provide LSP support for Bicep. We have a browser demo at https://aka.ms/bicepdemo which calls into the compiler code directly using Blazor/WASM, using Microsoft.JSInterop to compile, emit diagnostics, get semantic tokens, and pass the results back to the Monaco editor, but we're not using the language server for this - instead we've created a few custom functions for it.
The monaco-languageclient library can be used to hook monaco up to a language server which would provide much of the functionality that VSCode offers in a browser. It would be extremely cool to be able to simply use the LSP in a browser without the need for a back-end server.
I'm curious as to whether anyone has tried to run this server code via Blazor before. I've been experimenting with it and am able to get the initialize request/response negotiation to take place, but I don't see the client/registerCapability request come through from the server. I suspect there may be some sort of message pump that needs to run, but am not at all familiar with the Reactive library that's being used. Any pointers that you can give me would be awesome!
Here's an example of the code changes I've been experimenting with to hook this up to monaco: main...antmarti/experiment/monaco_lsp
I was thinking about this a week or two ago, in theory things should "just work" because the language server is just a .NET Standard library. This sounds fun, so I'll take a look at your code and see if I can get it running.
I'm curious as to whether anyone has tried to run this server code via Blazor before. I've been experimenting with it and am able to get the initialize request/response negotiation to take place, but I don't see the client/registerCapability request come through from the server. I suspect there may be some sort of message pump that needs to run, but am not at all familiar with the Reactive library that's being used. Any pointers that you can give me would be awesome!
If there is a message pump at play, one of the challenges might be that the pump uses a dedicated thread. I don't think Blazor WASM supports threads yet. Certain BCL methods that interact with threading will "spin" and fry your CPU :)
Thanks,
I was thinking about this a week or two ago, in theory things should "just work" because the language server is just a .NET Standard library. This sounds fun, so I'll take a look at your code and see if I can get it running.
Thanks! If you need any pointers in getting it running, let me know. It should just work if you clone the repo and run:
cd src/playground
npm i
npm start
I've stuck some haphazard logging in which should get written out to the browser console.
For input the process scheduler runs on the thread pool, which I think should be fine.
https://github.com/OmniSharp/csharp-language-server-protocol/blob/master/src/JsonRpc/InputHandler.cs#L86
For output however... I think it by default runs a dedicated thread.
https://github.com/OmniSharp/csharp-language-server-protocol/blob/6371386ee90dde8a1c419f3dc8fc94e0c1bc35f0/src/JsonRpc/OutputHandler.cs#L58
Okay here's a possible quick fix. When setting up the server... try this. IScheduler should be System.Reactive.Scheduling.IScheduler (or something like that)
options.Services.AddSingleton<IScheduler>(TaskPoolScheduler.Default);
In theory that should kick the output handler to use the IScheduler provided in the container, based on how I think DryIoc will pick constructors.
@ryanbrandenburg @NTaylorMullen @TylerLeonhardt thoughts, should I just move to use the task pool scheduler for handing input/output? At the time a dedicated thread "made sense" but honestly it probably doesn't matter.
Input is already on the task pool and working fine.
Output isn't however it does ensure ordering, so we shouldn't there shouldn't really be any big problems.
@ryanbrandenburg @NTaylorMullen @TylerLeonhardt thoughts, should I just move to use the task pool scheduler for handing input/output? At the time a dedicated thread "made sense" but honestly it probably doesn't matter.
Having a dedicated thread has been risky because if something doesn't ConfigureAwait(false) and blocks you're doomed. We've actually encountered that issue once or twice in VS (as I'm sure you recall) so relying on the task pool scheduler doesn't sound awful. Are there any other drawbacks?
All that said, for extra background info, we run Razor's language server in-proc in VS today, which I presume from a quick glance at this thread is the kind of thing being attempted here.
Here's where we create our own abstraction to start the spinup of the O# framework bits in VS: https://github.com/dotnet/aspnetcore-tooling/blob/feb060660bf14c9da3f284a72fe5f86390d3ab65/src/Razor/src/Microsoft.VisualStudio.LanguageServerClient.Razor/RazorLanguageServerClient.cs#L126-L129
And here's our actual abstraction that can rely on in-proc or out of proc streams: https://github.com/dotnet/aspnetcore-tooling/blob/feb060660bf14c9da3f284a72fe5f86390d3ab65/src/Razor/src/Microsoft.AspNetCore.Razor.LanguageServer/RazorLanguageServer.cs#L72-L75
Okay here's a possible quick fix. When setting up the server... try this. IScheduler should be System.Reactive.Scheduling.IScheduler (or something like that)
options.Services.AddSingleton<IScheduler>(TaskPoolScheduler.Default);
I gave this a go, but didn't see any observable difference in behavior.
I did notice that the Reactive.Wasm library I'm trying to use to replace the scheduler doesn't appear to be doing what it's meant to in .NET 5 - in particular these checks no longer seem to work:
https://github.com/reactiveui/Reactive.Wasm/blob/a226dc0bb4f010c248568eb67b0b8e5b768358f5/src/System.Reactive.Wasm/Internal/WasmScheduler.cs#L136-L137
https://github.com/reactiveui/Reactive.Wasm/blob/a226dc0bb4f010c248568eb67b0b8e5b768358f5/src/System.Reactive.Wasm/Internal/WasmPlatformEnlightenmentProvider.cs#L27-L28
In theory if they were working, I should be able to do:
options.Services.AddSingleton<IScheduler>(WasmScheduler.Default);
I'll see if I can fix up the above checks locally and get that working.
All that said for extra background info, we run Razor's language server in-proc in VS today which I presume from the quick glance at this thread similar types of things are trying to be acheived.
Thanks for the pointers! We have the language server running as a standalone exe, which we use for VSCode integration, but as an experiment, I'm trying to see if we can also host the language server fully in a web browser, using Blazor/WASM without a backend - I think that's where the complexity is mostly coming from. Is that something your team has attempted by any chance?
I think that's where the complexity is mostly coming from. Is that something your team has attempted by any chance?
Ah, ya I can definitely imagine that being difficult 😄. No we haven't tried that but I can just imagine how the threading models may make things more difficult in addition to things like file watchers
@david-driscoll I just got an end-to-end working with a very hacky change here:
https://github.com/OmniSharp/csharp-language-server-protocol/blob/6371386ee90dde8a1c419f3dc8fc94e0c1bc35f0/src/JsonRpc/OutputHandler.cs#L67
Instead of adding the message to the queue, I just sent it directly with:
ProcessOutputStream(value, CancellationToken.None).Wait();
So I think that definitely confirms that it's something to do with the scheduler. Interestingly, I noticed that in the version of the language server we're using (0.18.3) it is using TaskPoolScheduler.Default rather than EventLoopScheduler.
I'm going to try and get a more solid PoC together by replacing IOutputHandler in the IoC container.
interesting!
options.Services.AddSingleton<IScheduler>(ImmediateScheduler.Instance); also appears to work.
@anthony-c-martin are you on slack or msteams?
So I'm running into an error Request client/registerCapability failed with message: i.languages.registerDocumentSemanticTokensProvider is not a function. Seems the monaco editor doesn't support semantic tokenization yet. However, disabling that, things seem to work.
Here's my branch for you reference from:
https://github.com/Azure/bicep/compare/Azure:antmarti/experiment/monaco_lsp...david-driscoll:davidd/experiment/monaco_lsp
Couple notes: I was building locally with the latest version of the library (0.19.0-beta.1) so the C# changes are the changes required based on the breaking changes I've documented.
Also I was able to simplify the interop a little bit by using StreamMessageReader/StreamMessageWriter and a Duplex stream. This writes all the expected header information, so you don't have to serialize on the Blazor side, instead you just write the bytes directly into the pipe. Sending from the server to client also happens similarly.
I've created this PR #458 so we can configure the schedulers specifically.
So I'm running into an error Request client/registerCapability failed with message: i.languages.registerDocumentSemanticTokensProvider is not a function. Seems the monaco editor doesn't support semantic tokenization yet. However, disabling that things seem to work.
Here's my branch for you reference from:
Azure/bicep@Azure:antmarti/experiment/monaco_lsp...david-driscoll:davidd/experiment/monaco_lsp
Couple notes: I was building locally with the latest version of the library (0.19.0-beta.1) so the C# changes are the changes required based on the breaking changes I've documented.
Also I was able to simplify the interop a little bit by using StreamMessageReader/StreamMessageWriter and a Duplex stream. This writes all the expected header information, so you don't have to serialize on the Blazor side, instead you just write the bytes directly into the pipe. Sending from the server to client also happens similarly.
This is AMAZING, thank you so much for your help! I ran into the same issue with the language client - looks like semantic support has only been added to a preview version, and that they haven't yet picked up the latest LSP spec. For now, since we've already implemented our own semantic token handler anyway, I've reverted back to using this for now until the language client has actual support for it. I've picked up a bunch of your changes and updated my branch: main...antmarti/experiment/monaco_lsp.
I've pushed a demo of this here: https://bicepdemo.z22.web.core.windows.net/experiment/lsp/index.html
This has me thinking of making a Blazor Component that uses the monaco editor... but to try to make as much as possible of it actually live in C# and use the LanguageClient for interacting with it....
Other than the annoying part of converting the monaco api into C#... ugh.
Right now I don't think I have the bandwidth to tie monaco and blazor together. I might spike something out next weekend. I looked at https://github.com/microsoft/monaco-editor/blob/master/monaco.d.ts and while I'm sure I could... that's a lot of code to keep in sync, so I would want to build out some sort of tool to integrate the two together.
I found this project, and posted an issue there https://github.com/canhorn/EventHorizon.Blazor.TypeScript.Interop.Generator/issues/31 to see what might be needed to support generating interop with monaco.d.ts, as I'm just not prepared for the maintenance that would entail.
In the meantime there is recent activity on https://github.com/TypeFox/monaco-languageclient updating it to the latest version (that would include semantic tokens), you might be able to pin to the latest master branch and see if that works (I have not tried).
@anthony-c-martin are you on slack or msteams?
I'm on Teams - antmarti@microsoft.com
Shoot now I want to run the PowerShell language server in Blazor!
I think you can, you'll just have to do something similar to the bicep solution using monaco + monaco-languageclient, it totally works, there might be some issues if you use the file system APIs but those can always be fixed.
@TylerLeonhardt feel free to reach out if you'd like any pointers for the Bicep code!
I'm skeptical the PowerShell API will "just work" in Blazor WASM but worth a shot. @anthony-c-martin how did you "start the language server" in Blazor WASM? I'd love to take a peek at how the language server is hooked up to Monaco Editor.
For context, I've used the Monaco-languageclient before, but only their stdio option where the language server was running in a separate process on the machine.
I'm skeptical the PowerShell API will "just work" in Blazor WASM but worth a shot. @anthony-c-martin how did you "start the language server" in Blazor WASM? I'd love to take a peak at how the language server is hooked up to Monaco Editor.
[credit goes to @david-driscoll for a lot of this code]
Here's where the server is being initialized:
https://github.com/Azure/bicep/blob/16d7eb7fd5a92dadf6704f1c49e9246cf52b4da9/src/Bicep.Wasm/Interop.cs#L43-L58
The Server class is our own, but is really a thin wrapper around the Omnisharp Server class. The important pieces here are initializing the input/output pipes, and overriding the scheduler with ImmediateScheduler.Instance.
Here's the C# method that the JS code invokes to send data from client to server:
https://github.com/Azure/bicep/blob/16d7eb7fd5a92dadf6704f1c49e9246cf52b4da9/src/Bicep.Wasm/Interop.cs#L62
Here's where the C# code invokes the JS code to send data from server to client:
https://github.com/Azure/bicep/blob/16d7eb7fd5a92dadf6704f1c49e9246cf52b4da9/src/Bicep.Wasm/Interop.cs#L78
Here's the JS code to setup the send/receive with the server:
https://github.com/Azure/bicep/blob/16d7eb7fd5a92dadf6704f1c49e9246cf52b4da9/src/playground/src/helpers/lspInterop.ts#L24-L34
On startup I'm initializing the Blazor code from JS and setting the interop variable which can be used to invoke Blazor code with the following:
https://github.com/Azure/bicep/blob/16d7eb7fd5a92dadf6704f1c49e9246cf52b4da9/src/playground/src/helpers/lspInterop.ts#L5-L14
If you follow through the TS code, you should be able to see how the above is hooked into monaco-languageclient. I'm probably going to try and refine this code at some point to see if I can clean up the use of globals, and also to see if I can use a webworker to run the Blazor code.
Right now I don't think I have the bandwidth to tie monaco and blazor together. I might spike something out next weekend. I looked at https://github.com/microsoft/monaco-editor/blob/master/monaco.d.ts and while I'm sure I could... that's a lot of code to keep in sync, so I would want to build out some sort of tool to integrate the two together.
Out of interest, what are the benefits of implementing the translation layer between LSP & monaco's "custom LSP" in C# vs relying on monaco-languageclient to do it? I quite like the clean separation of having the TS code handle the translation and communicating with the C# code via LSP.
I just think it would be pretty cool to have a fully featured wrapper for monaco from the C# side. The added extra would make it it easier to consume using the client.
Probably because then @david-driscoll could guarantee that the monaco language client was up-to-date on the LSP spec.
Going to pin this issue for any passers by as it is truly a cool feature.
FWIW I think we need one of https://github.com/dotnet/aspnetcore/issues/17730 or https://github.com/dotnet/aspnetcore/issues/5475 to really unlock the power of this, because at the moment synchronous dotnet code locks up the UI thread, which feels a little janky when typing.
There's also this project which I haven't really investigated that might work as a stopgap: https://github.com/Tewr/BlazorWorker
I think a web worker would be perfect. Your UI (TypeScript) starts the worker, and you interop with the worker using postmessage. The worker then just has to interop with the language server.
Thank you guys, you helped me a lot to understand some ideas. I'm trying to build a small POC of a Blazor- and Monaco-based C# code editor with code completion. However, I cannot get code completion to work.
What I've done:
in JS created monaco-editor and configured monaco-languageclient (in the same way as Bicep's playground)
in wasm created an interop class:
public class Interop
{
    private LanguageServer languageServer;
    private readonly IJSRuntime jsRuntime;
    private readonly PipeWriter inputWriter;
    private readonly PipeReader outputReader;

    public Interop(IJSRuntime jsRuntime)
    {
        this.jsRuntime = jsRuntime;
        var inputPipe = new Pipe();
        var outputPipe = new Pipe();
        inputWriter = inputPipe.Writer;
        outputReader = outputPipe.Reader;
        languageServer = LanguageServer.PreInit(opts =>
        {
            opts.WithInput(inputPipe.Reader);
            opts.WithOutput(outputPipe.Writer);
            opts.Services.AddSingleton<IScheduler>(ImmediateScheduler.Instance);
        });
        Task.Run(() => RunAsync(CancellationToken.None));
        Task.Run(() => ProcessInputStreamAsync());
    }

    public async Task RunAsync(CancellationToken cancellationToken)
    {
        await languageServer.Initialize(cancellationToken);
        await languageServer.WaitForExit;
    }

    [JSInvokable]
    public async Task SendLspDataAsync(string jsonContent)
    {
        var cancelToken = CancellationToken.None;
        Console.WriteLine("jsonContent");
        Console.WriteLine(jsonContent);
        await inputWriter.WriteAsync(Encoding.UTF8.GetBytes(jsonContent)).ConfigureAwait(false);
    }

    private async Task ProcessInputStreamAsync()
    {
        do
        {
            var result = await outputReader.ReadAsync(CancellationToken.None).ConfigureAwait(false);
            var buffer = result.Buffer;
            Console.WriteLine("ProcessInputStreamAsync");
            await jsRuntime.InvokeVoidAsync("ReceiveLspData", Encoding.UTF8.GetString(buffer.Slice(buffer.Start, buffer.End)));
            outputReader.AdvanceTo(buffer.End, buffer.End);
            // Stop reading if there's no more data coming.
            if (result.IsCompleted && buffer.IsEmpty)
            {
                break;
            }
            // TODO: Add cancellation token
        } while (!CancellationToken.None.IsCancellationRequested);
    }
}
Code completion doesn't work, because I haven't registered CodeCompletionHandler. I don't understand which one to use, because in Bicep you use a custom completion handler, in my POC I would like to use O# completion handler.
Do you have any hints on how to implement it?
I've made a solution, that compiles C# project into single-file UMD library: https://github.com/Elringus/DotNetJS
Tried to use the server with it, but not sure how to deal with input/output. Console.STD won't work, obviously. Can we somehow run the server via websocket?
Tried to use the server with it, but not sure how to deal with input/output. Console.STD won't work, obviously. Can we somehow run the server via websocket?
Nice, I'll check that library out!
Here's how I've been doing things in my experimental branch - using a simple send/receive method to pass JSONRPC back and forth from JS <-> C#:
C#: https://github.com/Azure/bicep/blob/4a4d193eff0492043ff350419c8c9693ad6d63b6/src/Bicep.Wasm/LspWorker.cs
JS: https://github.com/Azure/bicep/blob/4a4d193eff0492043ff350419c8c9693ad6d63b6/src/playground/src/helpers/lspInterop.ts
Not the most elegant/performant, but it works well enough for now. Being able to have client-side Blazor host a websocket would make this a lot nicer.
@Elringus I'm currently trying to use your library to get our OmniSharp-based Language Server to run in an VS Code Web extension. This sounds very similar to what you want to achieve. May I ask if you already managed to get that working? My current status is that I can run the language server in a Blazor project (thanks to the information in this thread), but in the web extension the server never finishes initialization.
@Skleni I've switched to Microsoft's reference LSP implementation in JS (https://github.com/microsoft/vscode-languageserver-node), while reusing the existing language-specific C# code via DotNetJS:
— this way we can get up-to-date LSP implementation and native webworker transport layer out of the box, while keeping all the handlers logic in C#.
Regarding VS Code, there were 2 issues with this workflow, but they're both solved in insiders stream now and should become available in the main stream in February:
https://github.com/microsoft/vscode/issues/138413
https://github.com/microsoft/vscode/issues/138780
| gharchive/issue | 2020-12-04T18:56:59 | 2025-04-01T06:37:20.910565 | {
"authors": [
"Elringus",
"NTaylorMullen",
"Skleni",
"TylerLeonhardt",
"anthony-c-martin",
"david-driscoll",
"rynowak",
"s-KaiNet"
],
"repo": "OmniSharp/csharp-language-server-protocol",
"url": "https://github.com/OmniSharp/csharp-language-server-protocol/issues/456",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
70746878 | Fixed recursive require() between dock-view and omnisharp-atom, fixed a few (annoying) compiler errors. Fixes #139.
Works a treat!
| gharchive/pull-request | 2015-04-24T17:28:19 | 2025-04-01T06:37:20.913229 | {
"authors": [
"david-driscoll",
"nosami"
],
"repo": "OmniSharp/omnisharp-atom",
"url": "https://github.com/OmniSharp/omnisharp-atom/pull/142",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1085389109 | Add MSBuild project to solution and apply the change to Roslyn workspace as a unit
I often hit 'Failed to add project to workspace: ' error in OmniSharp log when opening one of internal projects. I only see this issue on Linux (Ubuntu 20.04) and not on Windows. I always have omnisharp.enableMsBuildLoadProjectsOnDemand set to "true". The problem occurs often but not all the time.
In both success and failure cases I see the following sequence of events (shortened to highlight only important parts):
[info]: OmniSharp.MSBuild.ProjectManager
Queue project update for '/home/<path_to_project>/ProjectName.csproj'
[info]: OmniSharp.MSBuild.ProjectManager
Loading project: /home/<path_to_project>/ProjectName.csproj
[dbug]: OmniSharp.Stdio.Host
************ Request ************
{
"Type": "request",
"Seq": 7,
"Command": "/filesChanged",
"Arguments": [
{
"FileName": "/home/<path_to_project>/obj/Debug/net6.0/linux-x64/ProjectName.AssemblyInfo.cs",
"changeType": "Change"
}
]
}
[dbug]: OmniSharp.Roslyn.BufferManager
Adding transient file for /home/<path_to_project>/obj/Debug/net6.0/linux-x64/ProjectName.AssemblyInfo.cs
[info]: OmniSharp.MSBuild.ProjectManager
Successfully loaded project file '/home/<path_to_project>/ProjectName.csproj'.
[info]: OmniSharp.MSBuild.ProjectManager
Adding project '/home/<path_to_project>/ProjectName.csproj'
In the case of the failure though the sequence is followed by:
[fail]: OmniSharp.MSBuild.ProjectManager
Failed to add project to workspace: '/home/<path_to_project>/ProjectName.csproj'
...
[fail]: OmniSharp.MSBuild.ProjectManager
Could not locate project in workspace: /home/<path_to_project>/ProjectName.csproj
The documentation for "bool Workspace.TryApplyChanges(Solution newSolution)" states: “… The specified solution must be one that originated from this workspace. If it is not, or the workspace has been updated since the solution was obtained from the workspace, then this method returns false …"
ProjectManager.AddProject clearly passes a solution that originated from the same workspace. Therefore, the workspace somehow gets changed between adding the project to the solution and applying the changes to the workspace. The theory is that the file change notification received for ProjectName.AssemblyInfo.cs causes the workspace change in ProjectManager.OnDirectoryFileChanged. Hence the lock added around these two places. My testing shows that the change does address the issue I'm seeing.
@filipw, @JoeRobich, appreciate if you could take a look.
| gharchive/pull-request | 2021-12-21T03:08:38 | 2025-04-01T06:37:20.916506 | {
"authors": [
"dmgonch"
],
"repo": "OmniSharp/omnisharp-roslyn",
"url": "https://github.com/OmniSharp/omnisharp-roslyn/pull/2314",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
143109612 | Intellisense appears after typing an open or close brace and autoselects "Action"
This causes subsequent return presses on the keyboard to insert the text "Action" instead of moving to a new line.
@agocke, are you still seeing this problem? I've been unable to reproduce it.
@DustinCampbell I think this was a bad interaction between extensions. I'll try and repro and re-open if I find anything.
thanks! No rush here. The idea that two extensions were interacting badly sounds like a good theory to me.
| gharchive/issue | 2016-03-23T23:58:47 | 2025-04-01T06:37:20.918708 | {
"authors": [
"DustinCampbell",
"agocke"
],
"repo": "OmniSharp/omnisharp-vscode",
"url": "https://github.com/OmniSharp/omnisharp-vscode/issues/121",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2284782452 | 🎯[Feat] Develop schedule assignee designation feature
Feature Summary
A member whose role is PM can designate multiple assignees.
An assignee can update the progress of their assigned schedule via a checkbox (schedule status update) or by direct input.
Work Description
Allow a member whose role is PM to add or modify assignee information for a schedule in the stakeholders table.
Allow the assignee of a schedule to update the progress (or status) of that schedule.
Implement a UI in the frontend where the PM can designate assignees for a schedule and an assignee can update the progress of their own schedule.
Related Issues
Issue #33: Develop project member role/permission feature
Checklist
[ ] Code for the new feature completed
[ ] Testing completed
[ ] Documentation completed
[ ] Review requested
Since a schedule needs an author, an add-author method should be added to the stakeholders so that when a schedule is registered, the actor (PM or PL) is stored as its author.
| gharchive/issue | 2024-05-08T06:26:33 | 2025-04-01T06:37:20.922174 | {
"authors": [
"CJC0512"
],
"repo": "OmokNoonE/PPM-backend",
"url": "https://github.com/OmokNoonE/PPM-backend/issues/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1587885865 | Document Sets are not visible when filtering on file extension in typesAndSources in configuration schema
Issue:
A user is required to pick a file of a certain extension in the workflow. We pass the extension in the 'filters' parameter in 'typesAndSources' in the Picker Configuration Schema. The filtering seems to work fine: I receive the files with the required extension and the folders in the SharePoint site. The issue is that Document Sets are not visible, which prevents the user from picking files of that extension inside the Document Sets.
The only possible options we can put in filters is string array with values like:
- photo
- video
- audio
- folder
- file
- extensions prefixed with '.'
Possible Solutions:
Treat the shared documents as folders
Maybe add another parameter to filter based on the 2 content types, i.e. document sets and folder.
Add Document Sets in the filters options
Document Reference:
https://learn.microsoft.com/en-us/onedrive/developer/controls/file-pickers/v8-schema?view=odsp-graph-online
We will review this request with the engineering team.
Hi @patrick-rodgers @avirallariva, what was the outcome of this?
Hey @nicolasiscoding, did you ever get updates on this outside of this thread? If not, I can work with the team to provide some. Thanks!
| gharchive/issue | 2023-02-16T15:25:25 | 2025-04-01T06:37:20.956587 | {
"authors": [
"JCrew0",
"avirallariva",
"nicolasiscoding",
"patrick-rodgers"
],
"repo": "OneDrive/samples",
"url": "https://github.com/OneDrive/samples/issues/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1559554478 | How to update ProgressDialog?
I want to update the ProgressDialog progress. This is my code; where is it wrong? The progress dialog does not update and stays at 0.
private var job: Job = Job()
override val coroutineContext: CoroutineContext
    get() = IO + job
fun getCirculars() {
    swipRefresh.isRefreshing = true
    itemsData.clear()
    val dialog = ProgressDialog(mContext)
    dialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL)
    dialog.setIndeterminate(true)
    dialog.setCancelable(false)
    dialog.setCanceledOnTouchOutside(false)
    dialog.max = 100
    dialog.isIndeterminate = true
    dialog.show()
    launch {
        val operation = async {
            val doc: Document = Jsoup.connect(url)
                .timeout(0)
                .maxBodySize(0)
                .ignoreHttpErrors(true)
                .sslSocketFactory(CommonHelper.trustServer())
                .get()
            val table: Elements = doc.select("table[class=\"table table-striped table-hover\"]")
            for (myTable in table) {
                val rows: Elements = myTable.select("tr")
                withContext(Dispatchers.Main) {
                    dialog.isIndeterminate = false
                }
                for (i in 1 until rows.size) {
                    withContext(Dispatchers.Main) {
                        dialog.progress = (i / rows.size) * 100
                    }
                }
            }
        }.await()
        withContext(Dispatchers.Main) {
            // update UI
            dialog.dismiss()
            swipRefresh.isRefreshing = false
        }
    }
}
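One detail worth flagging in the snippet above, as an aside: (i / rows.size) is integer division in Kotlin, so it evaluates to 0 for every i smaller than rows.size and the progress bar never moves. Computing the percentage before dividing avoids that, e.g.:
dialog.progress = (i * 100) / rows.size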
If you still have doubts about the lib's usage please join our official Telegram Group and ask us there. I'd prefer to keep the repository Issues page only for actual lib issues/bugs rather than for generic questions about the usage of the libs..
| gharchive/issue | 2023-01-27T10:55:51 | 2025-04-01T06:37:21.336007 | {
"authors": [
"BlackMesa123",
"ghost1372"
],
"repo": "OneUIProject/oneui-design",
"url": "https://github.com/OneUIProject/oneui-design/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2235741852 | Support Flavors for MacOs
Is your feature request related to a problem? Please describe.
When creating a project with flavors, they are configured for Android and iOS only. There is no macOS configuration in flavorizr.
Describe the solution you'd like
Add MacOS to flavorizr configuration
Planned in version 1.1.1(26)
| gharchive/issue | 2024-04-10T14:28:16 | 2025-04-01T06:37:21.354641 | {
"authors": [
"cozvtieg9"
],
"repo": "Onix-Systems/onix-flutter-project-generator",
"url": "https://github.com/Onix-Systems/onix-flutter-project-generator/issues/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
482734214 | Summarizing address notation does not yet work with overlapping ranges of house numbers
Follow-up to https://github.com/OnroerendErfgoed/inventaris/issues/2430
With overlapping ranges of multiple location elements, the address summary is not yet created correctly.
Example:
line 1: range 1-11, line 2: range 5-9 results in 1-5, 5-7, 7-9, 9-11 because the duplicates are not removed there. That is something within https://github.com/OnroerendErfgoed/housenum-be-r-parser/issues/ that needs to be fixed.
Originally posted by @Wim-De-Clercq in https://github.com/OnroerendErfgoed/inventaris/issues/2430#issuecomment-522915844
Does anything also need to happen in the inventaris code for this? I have not created a ticket there yet.
After a bugfix release of this, one line needs to change in inventaris to bump the version of the housenum-be-r parser.
TODO:
I think that if, before the sort here https://github.com/OnroerendErfgoed/housenum-be-r-parser/blob/d598e9ee1e60058d4d81a2c73b8af26e7cbb8e27/housenumparser/merger.py#L148
we first remove the duplicates using a set, it will work.
Also add a test with the above ranges and check that the existing tests do not change behaviour.
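In Python terms the suggested change boils down to something like the following (variable name illustrative, and assuming the house-number elements are hashable and comparable):
elements = sorted(set(elements))
so that the duplicates are dropped before the ranges are merged.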
aha - is there a connection with https://github.com/OnroerendErfgoed/inventaris/issues/2625 ?
No, that is part of https://github.com/OnroerendErfgoed/inventaris/issues/2430 But that issue is part of release 1.1.0 and therefore not yet deployed.
| gharchive/issue | 2019-08-20T09:01:17 | 2025-04-01T06:37:21.370885 | {
"authors": [
"Wim-De-Clercq",
"astridvanhumbeeck",
"axd1967"
],
"repo": "OnroerendErfgoed/housenum-be-r-parser",
"url": "https://github.com/OnroerendErfgoed/housenum-be-r-parser/issues/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2580847102 | Addition of Persistent Storage for UserData | ChatBot Enhancement | Optimisation of Recurrence Of Questioning
I want to add persistent storage to optimise the user-data flow for the chatbot.
For this, I am going to use sessionStorage / localStorage to store the user data, which optimises the chatbot flow by avoiding repeated questioning.
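A minimal sketch of the idea (the storage key and data shape are placeholders, not the final implementation):
const STORAGE_KEY = 'fitflex_chatbot_userdata';
function saveUserData(data) {
    // persist answers so the chatbot does not ask the same questions again on the next visit
    localStorage.setItem(STORAGE_KEY, JSON.stringify(data));
}
function loadUserData() {
    const raw = localStorage.getItem(STORAGE_KEY);
    return raw ? JSON.parse(raw) : null;
}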
Contributor Information
Please check if you are a contributor from:
[X] GSSoC-ext
[X] Hacktoberfest
@Shariq2003
| gharchive/issue | 2024-10-11T09:01:22 | 2025-04-01T06:37:21.406404 | {
"authors": [
"Shariq2003",
"jeevan10017"
],
"repo": "Open-Code-Crafters/FitFlex",
"url": "https://github.com/Open-Code-Crafters/FitFlex/issues/165",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1858308682 | TimeOut errors on bad internet connection/strange firewall
We cannot connect to the MSColab Server on a guest WiFi with very long connection times (>2s), which may or may not be caused by a deep-inspecting firewall.
The timeout should be configurable to provide a work around.
Also, the current timeouts are defined as "magic numbers" in a wide range of places; the values should be collected into a single place (as mentioned above, defined in the JSON configuration).
If your ISP is throttling certain types of traffic or has poor peering with other networks, VPN will mask the traffic and the result is faster.
https://www.cloudwards.net/vpn-internet-speed/#:~:text=Most of the time%2C you,your connection is capable of
| gharchive/issue | 2023-08-20T23:37:33 | 2025-04-01T06:37:21.423581 | {
"authors": [
"ReimarBauer",
"joernu76"
],
"repo": "Open-MSS/MSS",
"url": "https://github.com/Open-MSS/MSS/issues/1915",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2201912507 | Loading local Dataset saved via save_to_disk fails
Please check that this issue hasn't been reported before.
[X] I searched previous Bug Reports didn't find any similar reports.
Expected Behavior
I create some local dataset that I saved via calling ds.save_to_disk('some_path')
If I then configure Axolotl to load a dataset from save_path such as
datasets:
  - path: tmp_dataset
    ...
I would expect it to load much like loading from a remote huggingface repo.
Current behaviour
The loaded dataset includes the contents of state.json rather than the actual dataset and everything fails.
Steps to reproduce
Save some dataset to disk via save_to_disk
Configure Axolotl to read from that dataset
See Axolotl crash
Config yaml
datasets:
  - path: local_dataset_saved_via_save_to_disk
    type:
      field_instruction: prompts
      field_output: responses
      field_system: system
      format: '[INST] {instruction} [/INST]'
      no_input_format: '[INST] {instruction} [/INST]'
      system_prompt: ''
Possible solution
In src/axolotl/utils/data.py I see that for local directory paths, datasets are loaded via load_dataset. load_from_disk seems to be the new preferred solution (this is also a comment). I manually hacked this in and everything worked without a problem.
Would switching that to load_from_disk break anything? It seems like a reasonable quick fix (which I can do)
Which Operating Systems are you using?
[X] Linux
[ ] macOS
[ ] Windows
Python Version
3.10
axolotl branch-commit
main/4e69aa4
Acknowledgements
[X] My issue title is concise, descriptive, and in title casing.
[X] I have searched the existing issues to make sure this bug has not been reported yet.
[X] I am using the latest version of axolotl.
[X] I have provided enough information for the maintainers to reproduce and diagnose the issue.
I've had to work around this for a while. You'll get:
ValueError: You are trying to load a dataset that was saved using save_to_disk. Please use load_from_disk instead.
The workaround is to manually delete "state.json" from the dataset directory, and axolotl will then read it. But other HF-compatible tools expect "state.json" so this is a hassle
I do sometimes hate HuggingFace libraries. I assumed they'd solve cases like this but thems the beans.
I think the right fix is to use the current loading behavior when data_files are specified but back off to load_from_disk when a local directory is specified without data_files. That's what my PR does and it works at least for my case. load_from_disk is pretty limited though; it can't do much else but read from the one directory.
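A rough sketch of that backoff, with the helper name and surrounding wiring assumed rather than copied from data.py:
import os
from datasets import load_dataset, load_from_disk

def load_local_dataset(path, data_files=None):
    # Directories written by Dataset.save_to_disk() contain a state.json;
    # if the user pointed at one of those and gave no explicit data_files,
    # use load_from_disk, otherwise keep the existing load_dataset behaviour.
    if data_files is None and os.path.isdir(path) and os.path.exists(os.path.join(path, "state.json")):
        return load_from_disk(path)
    return load_dataset(path, data_files=data_files)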
Thanks for PR @fozziethebeat . Sorry I didn't get time to properly review.
This got in faster than I expected! Thanks for the suggestions on testing, it made everything smoother~
| gharchive/issue | 2024-03-22T07:59:24 | 2025-04-01T06:37:21.478494 | {
"authors": [
"NanoCode012",
"anttttti",
"fozziethebeat"
],
"repo": "OpenAccess-AI-Collective/axolotl",
"url": "https://github.com/OpenAccess-AI-Collective/axolotl/issues/1430",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2236799445 | Switch to parallel FFD bin packing algorithm (closes #1492)
Description
Replace the existing sample packing algorithm with a parallel implementation of first-fit-decreasing.
Motivation and Context
I noticed recently that we could get denser sample packing with a different algorithm. Looking into it more, FFD performs just as well and is much faster than the heuristic I had 😅.
We can run FFD in parallel without losing much performance by packing samples in groups rather than all at once. On an i9-14900k, it takes 2.2s to pack 1M samples with 99.7% efficiency (current multipack.py is 91.7% in 0.32s.)
I removed the length estimates around packing in favor of just counting the batches, but let me know if I should add that back in. Two new config options are added: sample_packing_group_size controls the number of samples packed by each process, and sample_packing_bin_size sets the number of samples that can be placed in one pack (may need to be increased for large context lengths).
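For readers unfamiliar with the algorithm, a toy sketch of first-fit-decreasing applied per group; the real implementation is parallel and vectorised, and all names here are illustrative only:
def ffd_pack(lengths, bin_capacity):
    # first-fit-decreasing: place each sample (longest first) into the first bin it fits
    bins = []  # each bin is [remaining_capacity, list_of_sample_indices]
    for idx in sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True):
        for b in bins:
            if lengths[idx] <= b[0]:
                b[0] -= lengths[idx]
                b[1].append(idx)
                break
        else:
            bins.append([bin_capacity - lengths[idx], [idx]])
    return [b[1] for b in bins]

def pack_in_groups(lengths, bin_capacity, group_size):
    # packing group-by-group keeps the sort cheap and lets groups be packed in parallel
    packs = []
    for start in range(0, len(lengths), group_size):
        group = lengths[start:start + group_size]
        packs.extend([[start + i for i in pack] for pack in ffd_pack(group, bin_capacity)])
    return packs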
How has this been tested?
Tests have been updated to verify that packing is correct. Training appears to run the same, just with fewer steps.
It seems reasonable that sorting the items in FFD would interfere with shuffling between epochs, but I haven't been able to find any evidence of that being the case. Testing against a few similarity metrics shows that even when we do the packing at once in one group, shuffling still generates a mostly new set of packs.
Screenshots
Some performance checks below for 1M items.
I removed the length estimates around packing in favor of just counting the batches, but let me know if I should add that back in.
I need to do some checking, but the estimates exist due to different processes getting different splits of data, so the actual count of packed samples can vary from process to process. When this happens, you get one process thinking it needs to run another step, but another process thinking it's done and they get out of sync. The estimate was the most sane way I could come up with having each process come up with a deterministic length. I'm open to other ideas to working around this.
Could we generate all the packs, and then evenly split those up (like in the updated multipack.py)? I think each rank should then get an exact number of batches and stay in sync.
Could we generate all the packs, and then evenly split those up (like in the updated multipack.py)? I think each rank should then get an exact number of batches and stay in sync.
Perhaps we could do something like dispatch_batches=True to only run the packing on rank 0. I'm not 100% certain of the implications though
Hey, this is very interesting. Should there be some full run comparisons to make sure that there is no loss in performance?
Perhaps we could do something like dispatch_batches=True to only run the packing on rank 0. I'm not 100% certain of the implications though
Gotcha, for now I'll keep this PR simple by leaving the packing estimates in. Ready for another look.
Hey, this is very interesting. Should there be some full run comparisons to make sure that there is no loss in performance?
Yeah definitely, once the code is greenlit/finalized I'll rent an instance to test it in a distributed setup.
Hey @dsesclei we cherry picked and merged your fixes in #1619. Thanks! Would love to give you a shoutout if you're on twitter or discord and could share your handle. thanks!
Thanks for getting this in Wing! No handles to give, but I appreciate it
Thanks @dsesclei, I ended up having to revert the change b/c the loss was off by an order of magnitude. I need to dig into what the multipack sampler is outputting another time to see if there is something obvious that it is doing differently
Oh gotcha, I'll look into it
| gharchive/pull-request | 2024-04-11T02:36:19 | 2025-04-01T06:37:21.488610 | {
"authors": [
"NanoCode012",
"dsesclei",
"winglian"
],
"repo": "OpenAccess-AI-Collective/axolotl",
"url": "https://github.com/OpenAccess-AI-Collective/axolotl/pull/1516",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
65616351 | List transactions
It should show all transactions concerning an address, with the Open Assets balance added or deducted (for example https://www.coinprism.info/address/akV48Tav8nWZZgPWgHzXuQgMdM5jVJ53ros), or get the Open Assets details of a transaction based on its txid.
I may be wrong, but I don't think Bitcoind has an RPC call that does that. Though we could implement that for the chain.com provider.
It would be very helpful if its implementable via chain.com api.
| gharchive/issue | 2015-04-01T06:31:34 | 2025-04-01T06:37:21.493254 | {
"authors": [
"Flavien",
"hackable"
],
"repo": "OpenAssets/colorcore",
"url": "https://github.com/OpenAssets/colorcore/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2310107657 | Cannot launch a HTTP inject
The HTTP injector does not work correctly
critical : seen with product team
Issue transferred to the proper repository.
| gharchive/issue | 2024-05-20T13:11:28 | 2025-04-01T06:37:21.497531 | {
"authors": [
"SamuelHassine",
"guillaumejparis"
],
"repo": "OpenBAS-Platform/injectors-python",
"url": "https://github.com/OpenBAS-Platform/injectors-python/issues/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
376161204 | Add the 'coin' to the 'payment' notification so that the coins icon can be displayed.
Otherwise, it's just a generic icon, which just looks weird:
done
| gharchive/issue | 2018-10-31T21:08:50 | 2025-04-01T06:37:21.527857 | {
"authors": [
"cpacia",
"rmisio"
],
"repo": "OpenBazaar/openbazaar-go",
"url": "https://github.com/OpenBazaar/openbazaar-go/issues/1272",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
453202437 | IPFS secio handshake patch likely not forward compatible
In the code segment below when establishing a new outgoing connection s.remotePeer is set to the peerID of the node we're trying to connect to. Ultimately this will either be an old style (hashed) peerID or a new style inline key depending on the actual ID that is used when we try to make a connection.
So far so good. Also peer.IDFromPublicKey is currently programmed to return an old style (hashed) peerID since we set AdvancedEnableInlining to false.
OK . Now take a look at the secio handshake... this is their raw code without any modifications:
// get peer id
actualRemotePeer, err := peer.IDFromPublicKey(s.remote.permanentPubKey)
if err != nil {
    return err
}
switch s.remotePeer {
case actualRemotePeer:
    // All good.
case "":
    // No peer set. We're accepting a remote connection.
    s.remotePeer = actualRemotePeer
default:
    // Peer mismatch. Bail.
    s.insecure.Close()
    log.Debugf("expected peer %s, got peer %s", s.remotePeer, actualRemotePeer)
    return ErrWrongPeer
}
If we ever enable inline keys, old nodes which have not upgraded, when trying to connect to a new style peerID, will pass in a new style ID into s.remotePeer but actualRemotePeer will still be an old style key. Hence s.remotePeer != actualRemotePeer and the handshake will fail.
If an upgraded node tries to connect to a non-upgraded peerID then peer.IDFromPublicKey will return a new style peerID while s.remotePeer is an old style key. Hence it will fail to connect.
So to make our current release forward compatible with a future release using inline keys I've done:
// get peer id
actualRemotePeer, err := peer.IDFromPublicKey(s.remote.permanentPubKey)
if err != nil {
    return err
}
switch s.remotePeer {
case actualRemotePeer:
    // All good.
case "":
    // No peer set. We're accepting a remote connection.
    s.remotePeer = actualRemotePeer
default:
    pubkeyBytes, err := s.remote.permanentPubKey.Bytes()
    if err != nil {
        return err
    }
    oldMultihash, err := mh.Sum(pubkeyBytes, mh.SHA2_256, 32)
    if err != nil {
        return err
    }
    oldStylePeer, err := peer.IDB58Decode(oldMultihash.B58String())
    if err != nil {
        return err
    }
    if s.remotePeer != oldStylePeer {
        // Peer mismatch. Bail.
        s.insecure.Close()
        log.Debugf("expected peer %s, got peer %s", s.remotePeer, actualRemotePeer)
        return ErrWrongPeer
    }
}
But looking at it, s.remotePeer would be an inline peerID and oldStylePeer would be an old style key, so I think this is screwed up and would prevent old nodes from connecting to new nodes. I think our current release should be a newStylePeer and compare it to s.remotePeer rather than an oldStylePeer.
And then a subsequent release with inline keys should use the code snippet above. (edited)
This will likely mean we need to fix this for next release and then push out the timeline for updating to the inline keys until at least enough people upgrade to this next release.
Thinking through this problem with annotations around the code:
primaryTest, err := peer.IDFromPublicKey(s.remote.permanentPubKey)
// *snip*
switch s.remotePeer {
case primaryTest:
    // (Primary Case) All good.
case "":
    // Ignore
default:
    // (Backup Case)
    // preparation and checking *snipped*
    if s.remotePeer != oldStylePeer {
        // Failure Case *snip*
    }
}
Scenarios:
OldNode (hashed) connecting to OldNode (hashed, inline off):
remote (hashed) == primaryTest (hashed)... success in Primary Case
OldNode (hashed) connecting to NewNode (inline, inline on):
remote (hashed) != primaryTest (inline)... remote (hashed) == oldStyle (hashed) success in Backup
NewNode (inline) connecting to OldNode (hashed, inline off):
remote (inline) != primaryTest (hashed)... remote (inline) != oldStyle (hashed) ...always fails
NewNode (inline) connecting to NewNode (inline, inline on)
remote (inline) == primaryTest (inline)... success in Primary Case
I see the problem you're describing... it seems the backup test is always stuck producing the hashed peerID when we seem to want the backupTest to use the other algo opposite from the primaryTest.
I think the way forward for us will be a function ipfs.AlternativeIDFromPublicKey(crypto.PubKey) (peer.ID, error) which looks at the state of AdvancedEnableInlining and applies the opposite algorithm than what IDFromPublicKey applies. And then we can use that function inside of default until we've completed our inline key migration completely. Does that seem reasonable, @cpacia?
That seems like a good approach. Better than when I was suggesting.
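For illustration, a rough sketch of what such a helper could look like; the inline (identity) multihash constant and the exact package layout are assumptions, not the final API:
// AlternativeIDFromPublicKey derives the peer ID using the algorithm opposite to
// the one peer.IDFromPublicKey currently applies, so old- and new-style peers can
// still recognise each other during the inline-key migration.
func AlternativeIDFromPublicKey(pk crypto.PubKey) (peer.ID, error) {
    pkBytes, err := pk.Bytes()
    if err != nil {
        return "", err
    }
    if peer.AdvancedEnableInlining {
        // IDFromPublicKey is inlining, so the alternative is the old hashed form.
        hash, err := mh.Sum(pkBytes, mh.SHA2_256, 32)
        if err != nil {
            return "", err
        }
        return peer.ID(hash), nil
    }
    // IDFromPublicKey is hashing, so the alternative is the inline (identity) form.
    hash, err := mh.Sum(pkBytes, mh.ID, -1)
    if err != nil {
        return "", err
    }
    return peer.ID(hash), nil
}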
| gharchive/issue | 2019-06-06T19:34:11 | 2025-04-01T06:37:21.536346 | {
"authors": [
"cpacia",
"placer14"
],
"repo": "OpenBazaar/openbazaar-go",
"url": "https://github.com/OpenBazaar/openbazaar-go/issues/1626",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
170228125 | Irregularly, but steadily getting IPNS pinned errors.
I don't know what the exact steps to reproduce are, because it doesn't always happen. But, it has been happening enough to multiple developers that I think it requires attention. Anyhow, often times, after wiping your data and starting with a fresh server, POSTs on the profile are failing with a 'not pinned' error:
Then, a subsequent fetch returns a 404, whereas a subsequent PUT fails with 'Profile already exists. Use PUT':
Once again, I think the crux of the problem is that the Profile is erroneously 404ing when the file does indeed exist. It shouldn't 404 in this case, otherwise the client has no reliable way of knowing whether to onboard or not and whether to save a profile via PUT or POST.
The unpin error was because you were making successive puts before the previous publish completed. I moved the unpin operation to after the publish.
| gharchive/issue | 2016-08-09T17:46:17 | 2025-04-01T06:37:21.539040 | {
"authors": [
"cpacia",
"rmisio"
],
"repo": "OpenBazaar/openbazaar-go",
"url": "https://github.com/OpenBazaar/openbazaar-go/issues/73",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
676721914 | Remove non conforming SHO oid from config
~The urn:oid:1.3.6.1.4.1.1466.115.121.1.15 attribute was previously released by default when the SHO was in the ARP. This oid is not valid and was added for historic reasons. It's not clear which SP's still rely on this attribute. So by default the attribute is not released by EB, but can be enabled using the 'eb.arp_remove_non_conforming_sho_attribute' feature flag.~
Simply removing the alias from the config was a more pragmatic solution. Federations relying on the alias can manually put it back if need be.
See: https://www.pivotaltracker.com/story/show/164237578
Yes, that should work too; the only drawback is that you'd have to keep track of an alternative configs/attributes.json file in your deploy scripts. If that works for you, I'd love to revert this change!
PS: the build 72 build breaks on a visual regression test that times out. It ran on the 74 build, so this should not be a blocker.
We already ship our own version of the attributes.json config file in https://github.com/OpenConext/OpenConext-deploy/blob/master/roles/engineblock/files/attributes.json
At least it's then out of the EB product. Of course everyone using OpenConext-deploy will still have it. But we can consider to move the SURF-specific items in attributes.json to our ansible environments, we can do that separately.
| gharchive/pull-request | 2020-08-11T09:25:31 | 2025-04-01T06:37:21.622560 | {
"authors": [
"MKodde",
"thijskh"
],
"repo": "OpenConext/OpenConext-engineblock",
"url": "https://github.com/OpenConext/OpenConext-engineblock/pull/877",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2250321437 | ERROR:root:<class 'ImportError'>: cannot import name 'spawn' from 'pexpect' (C:\Users\muhammad_anas\.virtualenvs\OpenDevin-59qZ_dP-\Lib\site-packages\pexpect\__init__.py)
Describe the bug
File "C:\Users\muhammad_anas\.virtualenvs\OpenDevin-59qZ_dP-\Lib\site-packages\pexpect\pxssh.py", line 23, in
from pexpect import ExceptionPexpect, TIMEOUT, EOF, spawn
ERROR:root:<class 'ImportError'>: cannot import name 'spawn' from 'pexpect' (C:\Users\muhammad_anas\.virtualenvs\OpenDevin-59qZ_dP-\Lib\site-packages\pexpect\__init__.py)
Setup and configuration
Current version:
commit 426f3871235ce7e98ef60c0122f5de91c4974547 (HEAD -> main, origin/main, origin/HEAD)
Author: sp.wack <83104063+amanape@users.noreply.github.com>
Date: Wed Apr 17 20:55:17 2024 +0300
setup env for controlled integration tests with redux (#1180)
My operating system: Windows 10
My environment vars and other configuration (be sure to redact API keys):
My model and agent (you can see these settings in the UI):
Model:
Agent:
Commands I ran to install and run OpenDevin:
uvicorn opendevin.server.listen:app --port 3000
Steps to Reproduce:
1.
2.
3.
Logs, error messages, and screenshots:
Additional Context
Deduping with https://github.com/OpenDevin/OpenDevin/issues/1156
Setting SANDBOX_TYPE=exec may fix this
| gharchive/issue | 2024-04-18T10:24:35 | 2025-04-01T06:37:21.646781 | {
"authors": [
"muhammad-anas087",
"rbren"
],
"repo": "OpenDevin/OpenDevin",
"url": "https://github.com/OpenDevin/OpenDevin/issues/1205",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2419977214 | (test) test_runtime_client.py to test _execute_bash()
What is the problem that this fixes or functionality that this introduces? Does it fix any open issues?
Enhancements to RuntimeClient's method _execute_bash. Comes with new unit test file.
Belongs to #3031
Give a summary of what the PR does, explaining any non-trivial design decisions
Improved parsing of pexpect output/prompt, with first iteration of interactive prompt detection.
There's one test commented out at the end if someone else wants to try to get it working.
Some notes:
running the original "runtime_build.py" via command line produced an image of 23+ GB(!) in size
version in this PR tries to "fix" that (down to < 4GB), but something's still not right with the setup, I think
added tweaks so images for both Ubuntu 22.04 and 24.04 can be generated (2 packages have different setups: libgl1-mesa-glx and libasound2)
Ugh, I messed up a line in the dockerfile generation, doh!
| gharchive/pull-request | 2024-07-19T21:45:34 | 2025-04-01T06:37:21.650775 | {
"authors": [
"tobitege"
],
"repo": "OpenDevin/OpenDevin",
"url": "https://github.com/OpenDevin/OpenDevin/pull/3040",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1837774848 | The detail of data generation pipeline
Dear OpenDriveLab Team,
Thank you for sharing your outstanding work with the community.
Will you publish the details of your data generation pipeline? For example, how you manage dynamic objects when merging multiple LiDAR frames in your data pipeline.
Thanks in advance.
Thanks for your interest. In order to ensure the iteration of the dataset, we currently do not have plans to release the code for data processing. However, for the process of occupancy generation, please refer to OccNet (https://github.com/OpenDriveLab/OccNet). We use box annotations to accumulate foreground objects and background point clouds separately.
| gharchive/issue | 2023-08-05T13:57:28 | 2025-04-01T06:37:21.654787 | {
"authors": [
"ZhouYunsong-SJTU",
"secret104278"
],
"repo": "OpenDriveLab/OpenScene",
"url": "https://github.com/OpenDriveLab/OpenScene/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1614398420 | hd: add NBodyMod regression tests
This is ready for merging.
Feature or improvement description
Flexible platform capability was added to OpenFAST v2.6.0 in HydroDyn. This included a new flag NBodyMod with three methods of handling multiple potential flow (WAMIT) bodies. No test cases were added at that time.
This was a set of test cases developed by @mattEhall some time ago.
Related issue, if one exists
#1480 fixed a bug in NBodyMod = 1
Impacted areas of the software
Testing of HydroDyn NBodyMod options only
Test results, if applicable
New test cases have been added. These are based on the OC4Semi test case, but treat the floating platform as 4 separate bodies (center column, and 3 corner columns):
NBodyMod1 -- a single set of WAMIT files includes coupling terms between each body (PtfmRefxt/yt/zt/ztRot should match XBODY(1)/(2)/(3)/(4) in WAMIT and NBody should match NBODY in WAMIT)
NBodyMod2 -- 4 separate WAMIT bodies neglecting couplings between each body and NBODY=1 with XBODY=0 in WAMIT (PtfmRefxt/yt/zt/ztRot may differ from XBODY(1)/(2)/(3)/(4) in WAMIT)
NBodyMod3 -- 4 separate WAMIT bodies neglecting couplings between each body and NBODY=1 with XBODY=/0 in WAMIT (PtfmRefxt/yt/zt/ztRot should match XBODY(1)/(2)/(3)/(4) in WAMIT)
@luwang00, could you review this?
I've gone through the run files briefly. This is probably not critical for verification, but it makes more sense physically to set PropPot to TRUE for Member 1, the central column, which is already modeled as a potential-flow body.
The surge, heave, and pitching moments all agree relatively well between the three models (the sway, roll, and yaw moments are very small and show differences).
Lu also ran some quick tests that showed relatively close agreement between the models.
| gharchive/pull-request | 2023-03-07T23:56:09 | 2025-04-01T06:37:21.686029 | {
"authors": [
"andrew-platt",
"luwang00"
],
"repo": "OpenFAST/openfast",
"url": "https://github.com/OpenFAST/openfast/pull/1483",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
86592613 | FxPayment and NotionalExchange
FxPayment and NotionalExchange have the same functionality. Review these objects and decide which one we use.
FxPayment was replaced by Payment
FX now uses the shared Payment pricer. Notional exchange does not.
| gharchive/issue | 2015-06-09T13:01:51 | 2025-04-01T06:37:21.702276 | {
"authors": [
"jodastephen",
"yukiiwashita"
],
"repo": "OpenGamma/Strata",
"url": "https://github.com/OpenGamma/Strata/issues/309",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
91286659 | Merge Analytics into Strata
Take those parts of the Analytics repo that are being used and include them in Strata.
Fixed by a06ec1c3e5d0ed48c64d42a38f3cb32087141479
| gharchive/issue | 2015-06-26T15:58:28 | 2025-04-01T06:37:21.703101 | {
"authors": [
"jodastephen"
],
"repo": "OpenGamma/Strata",
"url": "https://github.com/OpenGamma/Strata/issues/354",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
320725349 | Feature Request: filter pair-end sequences match a pattern
Hi, I have two paired FASTQ files in hand and want to filter out sequences that match a given pattern from both files at the same time. For example, filter out sequences, and their paired-end mates, which contain "AATGCTACGTGAC".
You can specify "AATGCTACGTGAC" as the adapter, and use -l to require a minimum read length.
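For example, a sketch of the invocation the comment above describes (option names per the fastp documentation; the length threshold is just a placeholder to tune):
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz \
      --adapter_sequence AATGCTACGTGAC --adapter_sequence_r2 AATGCTACGTGAC -l 50
Reads are trimmed at the matching sequence, and -l then drops reads that end up shorter than the threshold, keeping the paired outputs in sync.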
| gharchive/issue | 2018-05-07T08:53:17 | 2025-04-01T06:37:21.704204 | {
"authors": [
"sfchen",
"shaoangwen"
],
"repo": "OpenGene/fastp",
"url": "https://github.com/OpenGene/fastp/issues/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1830345323 | feat: add labels
🔍 What type of PR is this?
/kind documentation
/kind feature
👀 What this PR does / why we need it:
[x] My pull request adheres to the code style of this project
[x] My code requires changes to the documentation
[x] I have updated the documentation as required
[x] All the tests have passed
🅰 Which issue(s) this PR fixes:
Fixes OpenIMSDK/Open-IM-Server#406
/create tag v1.1.1 "this is comment"
/create tag v1.1.1
/create tag v1.1.1 comment
/create tag v1.1.1 "comment"
| gharchive/pull-request | 2023-08-01T02:25:46 | 2025-04-01T06:37:21.864741 | {
"authors": [
"cubxxw"
],
"repo": "OpenIMSDK/chat",
"url": "https://github.com/OpenIMSDK/chat/pull/102",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2154562734 | fix!: remove methods from public interface of WindowsSessionUser
What was the problem/requirement? (What/Why)
Two problems, really:
The WindowsSessionUser class exposes a lot of functionality that is for internal use in the class constructor, and I don't think it should be in the public interface, e.g. "validate_username_password".
The regex-based validation of username & domain strikes me as not the way that this validation should be done. We should just be checking that the username/domain exists to the host. Two cases: a. We have a password; the logon check will fail if the caller uses bad information. b. We don't have a password; we only allow this when the user is the process user.
What was the solution? (How)
Move methods from public to private by prefixing with an underscore.
Remove the username & domain validation logic. It's handled by the logon & current-user checks that already exist.
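A rough sketch of the logon-based check (using pywin32; the method name, error handling, and logon type chosen here are assumptions, not the actual implementation):
from typing import Optional
import pywintypes
import win32security

def _validate_credentials(user: str, password: Optional[str], domain: Optional[str]) -> None:
    if password is None:
        # With no password we only allow the current process user; that is checked elsewhere.
        return
    try:
        # A successful logon proves that the user/domain/password combination is valid on this host.
        handle = win32security.LogonUser(
            user,
            domain,
            password,
            win32security.LOGON32_LOGON_INTERACTIVE,
            win32security.LOGON32_PROVIDER_DEFAULT,
        )
        handle.Close()
    except pywintypes.error as exc:
        raise RuntimeError(f"Could not logon as {user}: {exc}") from exc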
What is the impact of this change?
Tidier code and interfaces. This is motivated by a change that I have upcoming.
How was this change tested?
The unit tests handle these cases; they've been updated as needed. I added some skipifs while in there to remove irrelevant platform-specific xfails.
Was this change documented?
N/A
Is this a breaking change?
BREAKING CHANGE
BadUserNameException and BadDomainNameException have been removed.
Many methods of WindowsSessionUser have been made private.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
However, I do wonder about the potential computational cost of this, and whether it will be obvious to anyone creating a WindowsSessionUser that a logon will be attempted, etc.
Good observation in that it might be surprising. The computational cost seems to be essentially nil; it's pretty quick, and I don't anticipate the check being an issue at the scale that we're expecting -- logins spread out by seconds/minutes/hours/days, rather than microseconds.
An option to make the check optional is a two-way door, so we can add it later if the need arises.
| gharchive/pull-request | 2024-02-26T16:11:44 | 2025-04-01T06:37:21.888405 | {
"authors": [
"ddneilson"
],
"repo": "OpenJobDescription/openjd-sessions-for-python",
"url": "https://github.com/OpenJobDescription/openjd-sessions-for-python/pull/91",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1413140546 | Copy button on output codeblock in technical deep dive
There is a copy button but as this is an example output it shouldn't be there.
Another one:
Another one in:
"Augmenting the existing Jakarta RESTful Web Services annotations with OpenAPI annotations"
Error message missing in:
"Consuming the secured RESTful APIs by JWT"
Another copyblock on output:
"Deploying the microservice to Kubernetes"
| gharchive/issue | 2022-10-18T12:29:34 | 2025-04-01T06:37:21.899196 | {
"authors": [
"jakub-pomykala"
],
"repo": "OpenLiberty/cloud-hosted-guides",
"url": "https://github.com/OpenLiberty/cloud-hosted-guides/issues/2241",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1106566875 | Add http client for Java
Problem
The openlineage-java lib. does not currently provide an HTTP client to emit run events.
Closes: #430
Solution
Add OpenLineageClient that implements the HTTP endpoint /api/v1/lineage to emit run events. Note, the HTTP client was heavily inspired by the marquez-java lib.
import io.openlineage.client.OpenLineageClient;
// (1) Create a new OpenLineageClient instance. Note: by default, the environment variables
// OPENLINEAGE_URL and OPENLINEAGE_API_KEY are used. To set the URL and API key
// manually, use the constructor OpenLineageClient(BaseUrl, ApiKey)
OpenLineageClient client = new OpenLineageClient();
// (2) Define a simple OpenLineage START or END event
OpenLineage.Run startOrEndRun = ...
// (3) Emit OpenLineage event
client.emit(startOrEndRun)
Other changes in this PR include:
Add rule in .gitignore to ignore generated OL models
Add spotless to format Java code
Checklist
[x] You've signed-off your work
[x] Your pull request title follows our guidelines
[ ] Your changes are accompanied by tests (if relevant)
[ ] Your change contains a small diff and is self-contained
[x] You've updated any relevant documentation (if relevant)
[ ] You've updated the CHANGELOG.md with details about your change under the "Unreleased" section (if relevant, depending on the change, this may not be necessary)
[ ] You've versioned the core OpenLineage model or facets according to SchemaVer (if relevant)
Are we going to support arbitrary query params, as Spark HTTP client now does? https://github.com/OpenLineage/OpenLineage/pull/425
Are we going to support arbitrary query params, as Spark HTTP client now does? #425
Yeah, I think we should. Should the params be configured when creating the client, or provided as an option when emitting an event?
client.emit(event, queryParams)
@wslulciuc I think client.emit(event) should be an overload of client.emit(event, queryParams) that fills in default query params from environment.
Yeah, I think we should. Should the params be configured when creating the client, or provide it as an option when emitting an event?
Personally, I think setting the query params at construction time is the way to go. I don't imagine the query params changing from one emit call to another.
@mobuchowski / @collado-mike: I've documented the following the in the README.md, but here's the approach I recommend we use to configure the client with query params appended on each HTTP request:
URI uri = new URIBuilder("http://localhost:5000")
    .addParameter("param0", "value0")
    .addParameter("param1", "value2")
    .build();
OpenLineageClient client = Clients.newClient(uri.toURL());
| gharchive/pull-request | 2022-01-18T07:33:46 | 2025-04-01T06:37:23.236678 | {
"authors": [
"collado-mike",
"mobuchowski",
"wslulciuc"
],
"repo": "OpenLineage/OpenLineage",
"url": "https://github.com/OpenLineage/OpenLineage/pull/480",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1776498097 | update installer win
Describe the change
Make user aware of Windows Defender when using our installer.
(this PR is against develop -- should it be against staging? That seems outdated, but the PR pretext says so...)
PR checklist
[ ] I have added description of the change I'm proposing in the OpenMS Documentation.
[ ] I have read and followed OpenMS Documentation Contributing guidelines.
[ ] I have attached a screenshot of the relevant area after this change.
[ ] CHANGELOG.md is updated.
[ ] I have added my name in CONTRIBUTING.md.
Yes staging is obsolete. Develop is the main branch I believe.
| gharchive/pull-request | 2023-06-27T09:39:01 | 2025-04-01T06:37:23.271552 | {
"authors": [
"cbielow",
"greengypsy"
],
"repo": "OpenMS/OpenMS-docs",
"url": "https://github.com/OpenMS/OpenMS-docs/pull/201",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
567477967 | Provision objects count
Description
The provision template precisely defines what the cluster will look like: which objects and how many of them. That requires knowing the cluster sizing (number of hosts) precisely. E.g., in this case we create a cluster of just 2 hosts:
hosts:
- reserved_cpu: 100
im_mad: kvm
vm_mad: kvm
provision:
hostname: "myhost1"
- reserved_cpu: 100
im_mad: kvm
vm_mad: kvm
provision:
hostname: "myhost2"
It would be nice if objects could have a parameter to indicate the number of such objects to create; the example above could then be reduced to:
hosts:
  - reserved_cpu: 100
    im_mad: kvm
    vm_mad: kvm
    provision:
      hostname: "myhost<%= @a_suitable_unique_identifier_of_object %>"
    count: 2
or, ideally could be also parameterized via user inputs (#4216):
inputs:
  hosts_count: 2
hosts:
  - reserved_cpu: 100
    im_mad: kvm
    vm_mad: kvm
    provision:
      hostname: "myhost<%= @a_suitable_unique_identifier_of_object %>"
    count: "<%= @inputs['hosts_count'] %>"
Just an idea, needs to be designed (e.g., couldn't the "count" name be confused with regular object template parameters?).
Use case
Provision templates, which can be easily customized without the need to change / hack the base template itself.
Progress Status
[ ] Branch created
[ ] Code committed to development branch
[ ] Testing - QA
[ ] Documentation
[ ] Release notes - resolved issues, compatibility, known issues
[ ] Code committed to upstream release/hotfix branches
[ ] Documentation committed to upstream release/hotfix branches
PRs to merge in master:
code
docs
tests
| gharchive/issue | 2020-02-19T10:43:22 | 2025-04-01T06:37:23.417838 | {
"authors": [
"al3xhh",
"vholer"
],
"repo": "OpenNebula/one",
"url": "https://github.com/OpenNebula/one/issues/4217",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
595714820 | Service role terminate action is not working
Description
When a role is terminated, the server is returning an error.
To Reproduce
Try to terminate a role.
Expected behavior
The VMs in the role should be terminated and the role information should be updated properly.
Details
Affected Component: OneFlow
Version: development.
Progress Status
[ ] Branch created
[ ] Code committed to development branch
[ ] Testing - QA
[ ] Documentation
[ ] Release notes - resolved issues, compatibility, known issues
[ ] Code committed to upstream release/hotfix branches
[ ] Documentation committed to upstream release/hotfix branches
PRs to merge in master:
code: https://github.com/OpenNebula/one/pull/4492
tests: https://github.com/OpenNebula/development/pull/916
| gharchive/issue | 2020-04-07T09:10:11 | 2025-04-01T06:37:23.423182 | {
"authors": [
"OpenNebulaSupport",
"al3xhh"
],
"repo": "OpenNebula/one",
"url": "https://github.com/OpenNebula/one/issues/4491",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2515630840 | PWA app
Description :
Pecha.org need to integrate pwa (progressive web apps) so that user can experience the website similar to mobile app.
Implementation steps :
[x] create manifest.json
[x] create service_worker.js
[x] update html to link manifest.json and register service_worker.js
[x] Check Manifest and Service Worker status in dev tool
[x] Test the "Add to Home Screen" feature in mobile device
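For the registration step in the checklist above, the standard pattern looks roughly like this (the file path and scope are assumptions about this project's layout):
// register the service worker once the page has loaded
if ('serviceWorker' in navigator) {
    window.addEventListener('load', function () {
        navigator.serviceWorker.register('/service_worker.js', { scope: '/' })
            .then(function (reg) { console.log('Service worker registered with scope:', reg.scope); })
            .catch(function (err) { console.error('Service worker registration failed:', err); });
    });
}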
@lobsam it is solved, can you please test in your mobile in dev.pecha.org:
@Lungsangg "error: base.html could not be found"
The above bug is fixed. The PWA install button will appear for 10 seconds and, if the user does not install, it will disappear until the page is reloaded.
If the PWA is installed, the button won't appear again.
@Lungsangg
The install button is disappearing after a few seconds; it needs to stay visible for 10-20 seconds.
The current install button needs to be redesigned; it should include:
Pecha Logo
Cross icon: so the user can dismiss the install button if they do not want to install
| gharchive/issue | 2024-09-10T07:26:56 | 2025-04-01T06:37:23.434759 | {
"authors": [
"Lungsangg",
"kaldan007",
"lobsam"
],
"repo": "OpenPecha/pecha.org-roadmap",
"url": "https://github.com/OpenPecha/pecha.org-roadmap/issues/101",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1664785822 | ipp-usb needs to be restarted after Pantum M6559NW is reconnected
Hello,
I'm experiencing an issue with a Pantum M6559NW, where the scanner and printer functions stop working after disconnecting and reconnecting the device. Checking the logs of the ipp-usb execution, I found the following:
> HTTP[008]: GET http://localhost:60000/eSCL/ScannerCapabilities
USB[1]: connection allocated, 1 in use: --- a-- ---
HTTP[008]: connection 1 allocated
! USB[1]: send: libusb_bulk_transfer: Input/Output Error
! HTTP[008]: libusb_bulk_transfer: Input/Output Error
USB[1]: connection released, 0 in use: --- --- ---
! ESCL: eSCL: Get "http://localhost:60000/eSCL/ScannerCapabilities": libusb_bulk_transfer: Input/Output Error
[...]
> HTTP[009]: POST http://localhost:60000/ipp/print
> HTTP[009]: request body: got 173 bytes; EOF
> HTTP[009]: body is small (173 bytes), prefetched before sending
USB[2]: connection allocated, 1 in use: --- --- a--
HTTP[009]: connection 2 allocated
! USB[2]: send: libusb_bulk_transfer: Input/Output Error
! HTTP[009]: libusb_bulk_transfer: Input/Output Error
USB[2]: connection released, 0 in use: --- --- ---
> HTTP[009]: POST http://localhost:60000/ipp/print
! HTTP[009]: libusb_bulk_transfer: Input/Output Error
To try to solve the issue, I set the max-usb-interfaces to 1. While this did work, I noticed that it resulted in the loss of the job cancelling feature, which is required to prevent the printer from remaining in the 'printing' state after the document is printed (as detailed in this issue).
Since limiting the USB interfaces to 1 is not a feasible solution if you want to implement the aforementioned workaround, I tried re-run ipp-usb when the printer is reconnected. This solution works well and provides a similar effect without the need to limit the USB interfaces.
I tested both the ipp-usb_0.9.23-1+53.1_amd64.deb version and a compiled one from the master branch, and observed same behavior in both cases.
Please let me know if you need any more information, or if there is anything I can contribute.
Thanks in advance!
Hi @gustingonzalez,
It looks like if you connect the printer while ipp-usb is running, ipp-usb begins device initialization immediately, when the device is not ready yet. Whereas if you connect the printer and then start (or restart) the ipp-usb daemon, the device has enough time to initialize itself before the first request comes.
Could you please play a little bit with the device quirks parameters? The most promising is probably init-delay and, maybe, init-reset.
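For reference, a quirks entry along these lines could be dropped into the ipp-usb quirks directory; the model-name glob and the parameter values here are only guesses to adapt, and the exact syntax should be checked against the ipp-usb quirks documentation:
[Pantum M6559*]
  init-delay = 5s
  init-reset = soft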
| gharchive/issue | 2023-04-12T15:27:41 | 2025-04-01T06:37:23.442164 | {
"authors": [
"alexpevzner",
"gustingonzalez"
],
"repo": "OpenPrinting/ipp-usb",
"url": "https://github.com/OpenPrinting/ipp-usb/issues/66",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
537161503 | ODS file cannot be imported, causes ODF Toolkit exceptions
Describe the bug
Some ODS files cannot be imported, but open just fine in LibreOffice.
To Reproduce
Steps to reproduce the behavior:
Try to Import one of the test files provided such as test.ods file.
Spinner is shown with "Updating Preview...".
Error is shown in console about NOT_FOUND_ERR.
Current Results
12:59:10.360 [ org.mortbay.log] /command/core/importing-controller (17ms)
org.w3c.dom.DOMException: NOT_FOUND_ERR: An attempt is made to reference a node in a context where it does not exist.
at org.apache.xerces.dom.ParentNode.internalInsertBefore(Unknown Source)
at org.apache.xerces.dom.ParentNode.insertBefore(Unknown Source)
at org.odftoolkit.odfdom.pkg.OdfElement.insertBefore(OdfElement.java:491)
at org.odftoolkit.odfdom.doc.table.OdfTable.appendColumn(OdfTable.java:1092)
at org.odftoolkit.odfdom.doc.table.OdfTable.appendColumns(OdfTable.java:1123)
at org.odftoolkit.odfdom.doc.table.OdfTableRow.getCellByIndex(OdfTableRow.java:254)
at com.google.refine.importers.OdsImporter$1.getNextRowOfCells(OdsImporter.java:174)
at com.google.refine.importers.TabularImportingParserBase.readTable(TabularImportingParserBase.java:120)
at com.google.refine.importers.OdsImporter.parseOneFile(OdsImporter.java:185)
at com.google.refine.importers.ImportingParserBase.parseOneFile(ImportingParserBase.java:118)
at com.google.refine.importers.ImportingParserBase.parse(ImportingParserBase.java:89)
at com.google.refine.importing.ImportingUtilities.previewParse(ImportingUtilities.java:961)
at com.google.refine.importing.DefaultImportingController.doUpdateFormatAndOptions(DefaultImportingController.java:174)
at com.google.refine.importing.DefaultImportingController.doPost(DefaultImportingController.java:93)
at com.google.refine.commands.importing.ImportingControllerCommand.doPost(ImportingControllerCommand.java:68)
at com.google.refine.RefineServlet.service(RefineServlet.java:189)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
Expected behavior
Preview and import should work for this test.ods file and perhaps others as well.
Screenshots
OpenRefine cannot open test.ods and just waits:
LibreOffice does open it just fine, but OpenRefine shows Console Error:
Desktop (please complete the following information):
OS: Windows 10
Browser Version: Firefox
JRE or JDK Version: \openjdk-13.0.1_windows-x64_bin\jdk-13.0.1
OpenRefine (please complete the following information):
Version: OpenRefine 3.3-beta
Datasets
These test files were used in: https://github.com/chainsawriot/readODS/tree/master/tests/testdata
Here is the zip of all of the ODS test files from that readODS project:
ODS_TEST_FILES_1.zip
Additional context
Add any other context about the problem here.
This needs to be fixed upstream, in the odftoolkit library.
Yeap, I know. But we always create our own good issue(s) to track the areas of OpenRefine that break. :-)
Same problem here! Any updates on this particular problem?
We might be able to improve user experience here by catching relevant exceptions in the right places. If this only happens when computing the value of some cell, it would be nice if we could be able to import the rest of the table. Perhaps some of these exceptions can be meaningfully converted to our own EvalError.
This needs to be fixed upstream, in the odftoolkit library.
It would be useful to have a link to the upstream bug, so that we can track it.
| gharchive/issue | 2019-12-12T19:10:47 | 2025-04-01T06:37:23.564697 | {
"authors": [
"runnwerth",
"tfmorris",
"thadguidry",
"wetneb"
],
"repo": "OpenRefine/OpenRefine",
"url": "https://github.com/OpenRefine/OpenRefine/issues/2243",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
864547329 | Conditionalize CI steps requiring secrets
Fixes #3677.
Follow-up to #3680 which this PR builds on.
The problem with #3680 (I think?) is that it disables secrets (so, coverage reporting and the Cypress dashboard) even for PRs made by org members / from within the repository. We want a way to still enable those as easily as possible when we see well-meaning PRs. One option is to add a label on the PR to mark it as trusted. I would also like to check if the author of the PR is an org member (in which case we would not have to label the PR manually) but I haven't found the syntax for that yet.
I am merging this without review as this change on the CI cannot be tested without being merged. Happy to revert if there are concerns.
It looks like the syntax I tried to use to introduce conditions is incorrect, so I reverted this change.
| gharchive/pull-request | 2021-04-22T05:13:26 | 2025-04-01T06:37:23.566905 | {
"authors": [
"wetneb"
],
"repo": "OpenRefine/OpenRefine",
"url": "https://github.com/OpenRefine/OpenRefine/pull/3839",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1219467783 | #3515 notify user about disk space availability
Fixes #3515
Changes proposed in this pull request:
addition to pom deps to include oshi-core from here.
a suggestive action area for system information
command for retrieving system information
an ajax request that communicates with the servlet for the command stated above.
Hi @antoine2711 @wetneb , I wish to ask for your guidance on this.
The approach I followed to come up with a solution for this was to set up a GetSystemInfoCommand which has a doGet method for responding to requests which require the system information.
Using the oshi tool suggested in this, I was able to return a serialised object containing info like the RAM, hostname, etc.
On the frontend, I used ajax to request for this information from the command stated above. This information has to be updated so I set a 1000ms interval using the setInterval function for repeatedly requesting the data.
All this was done on an isolated html + js file so I can see the results without affecting much of the codebase.
My problem is that the logs are almost completely filled with calls to the command I created, and integrating it into an open project may create some problems. Is there some better approach you think I could use, to maybe subscribe to the command so I get the data in real time, or to avoid the logging?
@wetneb @antoine2711
From the discussion yesterday, I went through the RefineServlet again and realised the commands are already checked for the value returned by logRequests(). I just updated the new command then by overriding the default logRequests() to return false.
This solves the issue I had previously.
@elroykanye the ball is in your camp on this PR too :)
@elroykanye Where are we at on this PR? Almost ready to merge? Ready? Needs heavy testing by all on all platforms?
Closing per inactivity.
| gharchive/pull-request | 2022-04-28T23:54:52 | 2025-04-01T06:37:23.572834 | {
"authors": [
"elroykanye",
"thadguidry",
"wetneb"
],
"repo": "OpenRefine/OpenRefine",
"url": "https://github.com/OpenRefine/OpenRefine/pull/4816",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
472171975 | view of ev3 block in simulation not working, change background not working
write an ev3 program
open sim
click "EV3" button
screen is white (press Ctrl and - to zoom out and see that the brick is outside of the screen?)
changing the background in the sim and pressing the "EV3" button generates many different errors (I expect for the same reason as above). Sometimes many EV3 bricks appear on the screen.
the same behavior on both chrome and firefox
already fixed in #61
| gharchive/issue | 2019-07-24T09:20:15 | 2025-04-01T06:37:23.582126 | {
"authors": [
"bjost2s",
"rbudde"
],
"repo": "OpenRoberta/openroberta-lab",
"url": "https://github.com/OpenRoberta/openroberta-lab/issues/202",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
525712888 | Add romanian translation
Describe the feature you'd like
Add the romanian translations from the Open Roberta Translation Sheet to the Open Roberta Lab.
Additional context
Link to the translation sheet (read only), over which the .json file can be generated.
https://docs.google.com/spreadsheets/d/18lcfyYL2UsNJEWJYEcxSujWGMCSKYDomif2Xmvz-RFM/edit?usp=sharing
Romanian translation is available to be selected and changes accordingly.
| gharchive/issue | 2019-11-20T10:28:49 | 2025-04-01T06:37:23.584195 | {
"authors": [
"boonto",
"philippmaurer"
],
"repo": "OpenRoberta/openroberta-lab",
"url": "https://github.com/OpenRoberta/openroberta-lab/issues/325",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
13597707 | Simplify get_version()
Can we simplify the get_version function and the VERSION object? At the moment it is a 5-tuple, but it can easily be a string like '1.4b1' so we do not have to repeat the composition work in every plugin.
@normanjaeckel is this an issue for the 2.0 release?
| gharchive/issue | 2013-04-24T16:39:36 | 2025-04-01T06:37:23.679795 | {
"authors": [
"normanjaeckel",
"ostcar"
],
"repo": "OpenSlides/OpenSlides",
"url": "https://github.com/OpenSlides/OpenSlides/issues/619",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
53278226 | Merge stable branch
This is to show the merge commit. If it is ok, I will push it directly into master branch.
@normanjaeckel you can push this into the master branch right now. There will be a lot of conflicts in #1381 and #1380 that have to be manually resolved, so it will be best if I merge them into master.
| gharchive/pull-request | 2015-01-02T21:11:20 | 2025-04-01T06:37:23.680867 | {
"authors": [
"normanjaeckel",
"ostcar"
],
"repo": "OpenSlides/OpenSlides",
"url": "https://github.com/OpenSlides/OpenSlides/pull/1385",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
278575839 | Issues with 'cpi_offset' parameter
Both the House bill and the Senate bill adopt chained-CPI indexing. While working on the reform file upload option and TB's GUI input work, I realized that there might be some issues with how this parameter works.
To realize chained-CPI indexing for year 2018, the cpi_offset parameter needs to be set to -0.0025 in year 2017. However, both reforms (House and Senate) actually take effect in 2018. In this case, to implement either reform in TB's GUI, users have to always begin with year 2017. So each input field other than cpi_offset has to have an extra *, indicating no change in 2017's law. Also, the result table would build an extra column for year 2017 with no change whatsoever. Does this really make sense?
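To make the timing concrete, a reform expressed as data has to date the indexing change one year earlier than everything else, roughly like this (a sketch only; the parameter names and value format are illustrative, not exact Tax-Calculator reform-file syntax):
# Illustrative sketch only -- not exact Tax-Calculator reform-file syntax.
tcja_style_reform = {
    2017: {"cpi_offset": [-0.0025]},        # indexing change has to start one year early
    2018: {"some_other_provision": [0.0]},  # every other provision starts in 2018
}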
A couple solutions to simplify the issue:
Come up with some ways to allow users to edit previous year parameter on TB's GUI page, just like what we have done to allow future year edits.
Modify how the cpi_offset parameter works in tax-calculator by simply implementing the chained-CPI indexing one year ahead of the reform year specified.
@martinholmer @MattHJensen @hdoupe
@GoFroggyRun said we could:
Modify how the cpi_offset parameter works in Tax-Calculator by simply implementing the chained-CPI indexing one year ahead of the reform year specified.
I view this as a very bad idea and I'm not even sure how it would work. Tax-Calculator has no problem with a reform that starts before the ten-year budget window. It is the design of TaxBrain that is the problem. When I started on this project, TaxBrain required all reform provisions to start in the first budget year (that is, there was no * capability to specify delayed implementation of a reform provision).
@GoFroggyRun said we could:
Come up with some ways to allow users to edit previous year parameter on TaxBrain's GUI page, just like what we have done to allow future year edits.
And @martinholmer said:
Tax-Calculator has no problem with a reform that starts before the ten-year budget window. It is the design of TaxBrain that is the problem.
One way to think about how to redesign TaxBrain is to focus on one particular limitation of TaxBrain: the concept of "Start Year" has two conceptually distinct meanings. In TaxBrain, "Start Year" means the first year of the reform and it also means the first year of the ten-year budget window (for which tax results are shown). It seems to me that those are distinct concepts and we don't necessarily want them to be equal in all cases.
So, why not have two start years on the TaxBrain GUI page? One would be the "Reform Start Year" and the other would be the "Output Start Year".
@hdoupe @MattHJensen
So, why not have two start years on the TaxBrain GUI page? One would be the "Reform Start Year" and the other would be the "Output Start Year". Or maybe there are better labels for these two distinct start years.
@martinholmer's proposal might be best, and that option would certainly be an improvement from where we are now because it would at least enable users to generate 10 years of output for the TCJA -- currently they can not.
But it would not solve one of the problems @GoFroggyRun mentioned "each input field other than cpi_offset has to have an extra *, indicating no change in 2017's law" and it would create a fairly significant source of user error, I imagine. Users might change the Output Start Year and forget to change the Reform Start Year or similar.
So I wonder if anyone is aware of users having encountered a problem like the one we face now, where the user wants a different reform start year than output start year? If not, the simplest solution might be to just change the definition of the cpi_offset parameter so that changing the 2018 value of cpi_offset changes the 2018 value of parameters, but I don't know how difficult it would be to implement this in tax-calculator.
One advantage of separating the reform start year and the output start year (@martinholmer's proposal) is that 5 years from now, say, it would be very easy to see how the TCJA, implemented way back in 2017/2018, affects the forward-looking 10-year budget outlook.
So actually, I think my preference would be to do both: separate the Output Start Year from the Reform Start year and redefine cpi_offset in tax-calculator. The redefinition of cpi_offset should be a higher priority because it would solve immediate problems for users analyzing TCJA, and the separation of Output Start Year and Reform Start year could happen on a slower time frame.
--
(An alternative option (which I am not terribly fond of but will present for others) would be to add a new input character that indicates that the preceding value is for a year previous to the start year. For example, if we chose ; for such a parameter and the start year is 2018, then -0.0025; would indicate that the cpi indexing is set in 2017.)
I agree with @martinholmer's proposal of having a start year and an output start year.
I also agree with @MattHJensen that the cpi_offset parameter should be redefined in Tax-Calculator if this is possible. It doesn't make much sense to me to have a parameter not have any effect until a year after it's activated.
@hdoupe said:
I also agree with @MattHJensen that the cpi_offset parameter should be redefined in Tax-Calculator if this is possible. It doesn't make much sense to me to have a parameter not have any effect until a year after it's activated.
I have no idea how to "redefine" the cpi_offset parameter given the way Tax-Calculator does price-inflation and wage-growth indexing. The current indexing logic was built in at the very beginning of the project before I got involved with Tax-Calculator.
I agree with @martinholmer, @MattHJensen and @hdoupe that having a start year and an output year on TaxBrain would be a huge improvement. We definitely want to incorporate such an enhancement after things cool down a bit.
My apologies that my initial suggestion was not clear enough. Indeed, after looking into the price-inflation and wage-growth indexing mechanisms planted in tax-calculator, I agree with @martinholmer that modifying that logic is a bad idea. And I have no idea how to do that either.
In fact, what I have in mind, instead of dealing with that convoluted logic, is to take a roundabout approach. After reforms are read into tax-calculator, but before implementing any of them, are we able to apply some special treatment to the cpi_offset parameter in the way that, if it were specified in year n, it would be processed as n-1 in the calculator? I understand this might not be an elegant solution, but, to avoid dealing with any indexing logic, it would seem a reasonable one. Also, given I am not very familiar with the indexing logic in tax-calculator, it is very likely that my proposal is implausible.
@martinholmer, does the approach make sense to you? Would that allow 1-year lag in specifying cpi_offset-related reforms without touching any of the indexing logic?
It seems like the consensus is that we need both a parameter (or reform) start year and an output start year. The interface would then look something like this:
I think that we would only have to make changes on the TaxBrain side. Instead of passing the reform start year to TC, we would send the output start year as the start_year parameter in tbi.py.
While we are adding this functionality, I think we should allow the user to select a start year (output and reform) for any year up to 2027. The final year of output would still be 2027 or output start_year + 9, but the user would have an option to show fewer years. The output tables already allow us to do this:
@GoFroggyRun and I were discussing implementation issues with adding an output start year. We circled back to the idea of adding a special reverse character. Here's our conversation:
From @GoFroggyRun
Also, having separate reform year and output year is not the only solution: the CPI offset parameter still has to be specified in 2017, and users will have to add an extra * for each reform provision other than CPI offset parameter, which can be annoying and confusing. An alternative approach would be to allow for previous year GUI input edits (for example in start year 2018, we allow users to specify 2017 and previous parameter values in some format). This alternative approach doesn’t involve separating reform year and start year, and would better solve the problem, at least for the moment, in my opinion. I’m not sure, however, how difficult this approach is.
What do you think about this alternative?
From me:
I agree that separating the reform year and output years is not the only solution. However, I think it gives us some flexibility that we may want in the future.
I’m not a huge fan of adding a reverse parameter that pushes the following parameter back a year. The nice part about the GUI interface is that you don’t need to know how to program in order to use it. So, I’m wary of adding another special character and creating our own little programming language.
On the other hand, adding a reverse character seems like a pretty simple addition. Further, if we implement this character we could implement the TCJA in a pretty straightforward way: Enter cpi_offset as "<,-0.0025" and fill in the other parameters like usual (I'm thinking "<" would be a good character, but I'm open to other suggestions)
If we add this parameter, we should only allow it to be used as the first character in the string. We don’t want to implement some function that has to figure out what this means: 7000,,,8000,<,<,10000,<,* etc.
From @GoFroggyRun:
HANK: I agree that separating the reform year and output years is not the only solution. However, I think it gives us some flexibility that we may want in the future.
SEAN: Right. I agree. This is definitely something nice to have.
HANK: I’m not a huge fan of adding a reverse parameter that pushes the following parameter back a year. The nice part about the GUI interface is that you don’t need to know how to program in order to use it. So, I’m wary of adding another special character and creating our own little programming language.
On the other hand, adding a reverse character seems like a pretty simple addition. Further, if we implement this character we could implement the TCJA in a pretty straight forward way: Enter cpi_offset as “<,-0.0025” and fill in the other parameters like usual (I’m thinking “<” would be a good character, but I’m open to other suggestions)
SEAN: Me neither haha. But it seems to me that this is an easy way to deal with the special case for parameters like CPI offset --- hopefully we won't have too many of them. If having such addition is simple, the only thing we need to worry about is to come up with some symbol straightforward yet special enough. Let's move the discussion to Github and see if others have any better ideas.
HANK: If we add this parameter, we should only allow it to be used as the first character in the string. We don’t want to implement some function that has to figure out what this means: 7000, * , * , 8000,<,<,10000,<,* etc.
SEAN: This is exactly what I have in mind as well.
HANK: Do you mind if I move the last two comments to github #763?
SEAN: Not at all.
cc @MattHJensen @martinholmer @MaxGhenis
I guess the only question that remains regarding "reverse editing" is what the syntax should look like.
The <, symbol @hdoupe suggested is a good one. If <, were adopted, the "reverse editing" would look something like this (just a random example):
-0.001 <, -0.0025 <, *, 0, *, *
or, if we were using <,<,
-0.001 <,< -0.0025 <,< *, 0, *, *
@hdoupe Is this what you were thinking? What do you think of the <,< symbol?
@GoFroggyRun asked
@hdoupe Is this what you were thinking? What do you think of the <,< symbol?
Sort of. I think we should impose some strict rules on how this symbol can be used so that we can keep everything simple.
It can only be used at the beginning of the string.
It can only send a parameter back one year (this rule could be relaxed fairly easily)
For example, if you set the start year as 2018 and the cpi_offset parameter to "<,-0.0025", then this sets the cpi_offset to -0.0025 in 2017.
Implementing this is pretty straightforward. In fact, I just put together a prototype. I'll open a PR in a few minutes.
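A rough sketch of how such a leading marker could be handled (a hypothetical helper for illustration, not the actual TaxBrain prototype; the function name and return shape are assumptions):
REVERSE_CHAR = "<"

def split_reverse_marker(raw_field, start_year):
    # Return (effective_start_year, values) for a GUI field like "<,-0.0025".
    # A single leading "<" shifts the values back one year; per the rules above,
    # it is only honored at the very beginning of the string.
    parts = [p.strip() for p in raw_field.split(",")]
    if parts and parts[0] == REVERSE_CHAR:
        return start_year - 1, parts[1:]
    return start_year, parts

print(split_reverse_marker("<,-0.0025", 2018))  # -> (2017, ['-0.0025'])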
I think adding a reverse parameter and the ability to specify a different output year adds significant flexibility to TaxBrain. Consider a reform that goes into effect in 2018, but the vast majority of its parameters do not take effect until 2020. You could set the reform year to 2020 and the output year to 2018. You could then use the "<" character to enter the parameters that take effect in 2018 and simply enter the other parameters without having to use a bunch of "*" characters to get them up to 2020.
I guess the argument against this character is that if you are going to learn how to use this character then wouldn't it be easier just to write a json file?
I think adding a reverse parameter and the ability to specify a different output year adds significant flexibility to TaxBrain.
Definitely agreed.
I guess the argument against this character is that if you are going to learn how to use this character then wouldn't it be easier just to write a json file?
I don't think so -- there are significant other benefits of the GUI, such as being able to view documentation, current-law values, and the reform all in one place.
I don't think so -- there are significant other benefits of the GUI, such as being able to view documentation, current-law values, and the reform all in one place.
Ok, I see. That makes sense.
| gharchive/issue | 2017-12-01T19:46:19 | 2025-04-01T06:37:23.753093 | {
"authors": [
"GoFroggyRun",
"MattHJensen",
"hdoupe",
"martinholmer"
],
"repo": "OpenSourcePolicyCenter/PolicyBrain",
"url": "https://github.com/OpenSourcePolicyCenter/PolicyBrain/issues/763",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
479987319 | Add Debug Toolbar to Dev
This sets up development dependencies so we're not installing testing libraries into production environments and installs the Django Debug Toolbar in development, configuring it so it's always running but with its panels disabled. I've disabled the panels to cope with the prohibitive performance penalty some of the panels bring, but they can be toggled on in the UI easily.
Updated as per comments and rebased into existing changes.
| gharchive/pull-request | 2019-08-13T06:48:47 | 2025-04-01T06:37:23.793997 | {
"authors": [
"ghickman"
],
"repo": "OpenTechFund/opentech.fund",
"url": "https://github.com/OpenTechFund/opentech.fund/pull/1406",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
189136182 | including total commands count within robot run emit
Small PR including 'commands_total': len(robot._commands) within the robot.run 'command-run' notification
Coverage decreased (-0.003%) to 94.882% when pulling cdbafee3e8dcdeadd9287a4780c1e31f69c0de8f on 258-s-protocol-command-emits-include-total-commands-to-calculate into c390ef87fd3f634cfb0db631b14ce52e409d7c25 on master.
| gharchive/pull-request | 2016-11-14T15:10:12 | 2025-04-01T06:37:23.838451 | {
"authors": [
"andySigler",
"coveralls"
],
"repo": "OpenTrons/opentrons-api",
"url": "https://github.com/OpenTrons/opentrons-api/pull/107",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1829374280 | 🛑 Text-to-Speech - Mimic3 - Ziggyai is down
In 823fbff, Text-to-Speech - Mimic3 - Ziggyai (https://mimic3.ziggyai.online/status) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Text-to-Speech - Mimic3 - Ziggyai is back up in 8a5377f.
| gharchive/issue | 2023-07-31T14:52:32 | 2025-04-01T06:37:23.845127 | {
"authors": [
"goldyfruit"
],
"repo": "OpenVoiceOS/status",
"url": "https://github.com/OpenVoiceOS/status/issues/178",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
514134817 | Force compilation to fix tests
Fixes issues reported by @abcoathup in #1241.
I'm not sure why the test script was only compiling in the CI before. I suspect it was because it doesn't work without rm -rf-ing the build/contracts directory first... And this wasn't necessary in the CI since it's a clean environment.
I'm not sure why the test script was only compiling in the CI before
I think the reason was to make tests faster by avoiding the compilation step. But I prefer having to run truffle test manually (instead of npm t) rather than having tests fail for someone new to the project due to missing steps.
| gharchive/pull-request | 2019-10-29T18:33:11 | 2025-04-01T06:37:23.879480 | {
"authors": [
"frangio",
"spalladino"
],
"repo": "OpenZeppelin/openzeppelin-sdk",
"url": "https://github.com/OpenZeppelin/openzeppelin-sdk/pull/1271",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
175888202 | ring template and flat ele damage to attacks
It's not possible to add "cold, fire or lightning to attacks" for any rare ring template.
Yep, can confirm that these affixes are missing in ring templates.
Also some other affixes are missing too (checked '+ to Evasion', '% of Physical Attack Damage Leeched as Life/Mana'). Not counting new ones from essences, just default from http://poeaffix.net/
Hard to implement or just forgot to put them in templates?
Affixes are added to templates manually, so it's almost always the case that I haven't bothered to include them. I've generally been adding them as people request them, so I'll put these ones on (minus the leech one for now, since PoB doesn't do leech calculations (yet!)).
Much appreciated as always :)
| gharchive/issue | 2016-09-08T23:14:40 | 2025-04-01T06:37:23.881811 | {
"authors": [
"Openarl",
"dein0s",
"mey1R"
],
"repo": "Openarl/PathOfBuilding",
"url": "https://github.com/Openarl/PathOfBuilding/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
211387701 | Lioneye's fall
The jewel seems to not affect the passive nodes in range
It seems to be working; are there specific nodes that don't appear to be converted? Note that conversion jewels don't currently change the node descriptions to reflect the converted stats.
restarted my program and now it treats them as jewel conversion, thx though, works great
| gharchive/issue | 2017-03-02T13:15:09 | 2025-04-01T06:37:23.883263 | {
"authors": [
"Openarl",
"Teriderol"
],
"repo": "Openarl/PathOfBuilding",
"url": "https://github.com/Openarl/PathOfBuilding/issues/205",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1199864236 | feat: Development environment with nix-shell as alternative to DEV_SETUP.md
Overview
The DEV_SETUP.md is quite complicated, with many manual steps, especially if someone only wants to work on part of opentrons. This feature request (which I could make a PR for) would be to add a development environment with nix-shell as an alternative.
Implementation details
A section of DEV_SETUP.md or a replacement would be:
Install Nix (or use Nix docker)
Install docker, then run docker run -it -p 8080:8080 -v $(pwd)/:/workdir nixos/nix from opentrons mono-repo
run cd workdir then nix-shell
use this file shell.nix
{ pkgs ? import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/715dc137b08213aabbbe0965b78ab938e5d8d3b7.tar.gz") {}
}:
pkgs.mkShell {
  buildInputs = [
    pkgs.nodejs-14_x
    pkgs.yarn
    pkgs.libudev0-shim
    pkgs.python37
    pkgs.gnumake
    pkgs.curl
    pkgs.openssh
    pkgs.git
  ];
}
the development environment would be ready, leaving a dev to just run
cd protocol-designer; make dev
and see it just work
Design
Similar to
https://nixos.org/download.html#nix-install-docker
https://nixos.org/guides/declarative-and-reproducible-developer-environments.html
https://dev.to/edbentley/nix-for-frontend-developers-64g
Acceptance criteria
The shell.nix file above lets me hack on the protocol designer already; but it doesn't support the root make setup command yet due to lack of support for the usb-detection npm package so far.
Hi @barakplasma, this is an interesting proposal! While I don't think we have the resources to test and support an "officially blessed" Docker/NixOS-based development environment, if you were to create a GitHub repository or Gist with your nix-shell configuration + setup instructions, I think that would be fantastic. I'd definitely be down to link to those instructions under an "Alternative Community Setups" section or something similar
Sure, I can make an external repo for this contribution
@mcous I opened a repo here with instructions on how to use nix-shell for dev setup (at least on the protocol designer).
I'll make a PR after I use this alternate setup a bit longer
| gharchive/issue | 2022-04-11T11:55:52 | 2025-04-01T06:37:23.890401 | {
"authors": [
"barakplasma",
"mcous"
],
"repo": "Opentrons/opentrons",
"url": "https://github.com/Opentrons/opentrons/issues/9927",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
259893299 | CodeSchool API Endpoints for CRUD & Rake Task for seeding from code_schools.yml
Description of changes
#76
Created CRUD API endpoints for the CodeSchool model. Also added a Location model, because some of the CodeSchools had multiple locations.
Also created rake task for seeding data.
rake schools:populate
Reads from the ./config/code_schools.yml and creates records in the database.
@hpjaj could you take a look at the API Documentation I've added? Besides this, I'll remove the configuration for Raven I added in ./config/application.rb and change the host: back to host: operationcode-psql in ./config/database.yml, and I think it'll be close to done.
Is there anything else you can see that I've missed?
@hpjaj I've worked on the requests you had.
@hpjaj I've committed and pushed the changes you requested =).
@PrimeTimeTran - There ended up being a couple issues with this PR. I wanted to chat with you about them.
Are you a member of the Operation Code Slack channel? If yes, what is your @ name?
If no, you can join by going to https://operationcode.org/profile and clicking Enter our Slack channel.
Once you are in Slack, my handle is @ harry.
@hpjaj I see, sorry I missed them =(.
No, I'm not. ok I'll join =)
| gharchive/pull-request | 2017-09-22T18:05:46 | 2025-04-01T06:37:23.895989 | {
"authors": [
"PrimeTimeTran",
"hpjaj"
],
"repo": "OperationCode/operationcode_backend",
"url": "https://github.com/OperationCode/operationcode_backend/pull/164",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
576196095 | IntelliJ Ultimate 2019.3.3 gitflow null pointer exception
I'm submitting a ...
[x] bug report
[ ] feature request
[ ] puppy => You're not submitting a puppy. I already have one and he's adorable
What is the current behavior?
IntelliJ IDEA 2019.3.3 (Ultimate version)
I just open the Tasks dialog (Jira integration), but a null pointer exception occurs:
java.lang.NullPointerException
at gitflow.ui.GitflowTaskDialogPanelProvider.getOpenTaskPanel(GitflowTaskDialogPanelProvider.java:26)
at com.intellij.tasks.ui.TaskDialogPanelProvider.getOpenTaskPanel(TaskDialogPanelProvider.java:42)
at com.intellij.tasks.ui.TaskDialogPanelProvider.lambda$getOpenTaskPanels$0(TaskDialogPanelProvider.java:24)
at com.intellij.util.containers.ContainerUtil.mapNotNull(ContainerUtil.java:2169)
at com.intellij.tasks.ui.TaskDialogPanelProvider.getOpenTaskPanels(TaskDialogPanelProvider.java:23)
at com.intellij.tasks.actions.OpenTaskDialog.<init>(OpenTaskDialog.java:96)
at com.intellij.tasks.actions.GotoTaskAction.lambda$showOpenTaskDialog$0(GotoTaskAction.java:116)
at com.intellij.openapi.application.TransactionGuardImpl$2.run(TransactionGuardImpl.java:309)
at com.intellij.openapi.application.impl.LaterInvocator$FlushQueue.doRun(LaterInvocator.java:441)
at com.intellij.openapi.application.impl.LaterInvocator$FlushQueue.runNextEvent(LaterInvocator.java:424)
at com.intellij.openapi.application.impl.LaterInvocator$FlushQueue.run(LaterInvocator.java:407)
at java.desktop/java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:313)
at java.desktop/java.awt.EventQueue.dispatchEventImpl(EventQueue.java:776)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:727)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:721)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:85)
at java.desktop/java.awt.EventQueue.dispatchEvent(EventQueue.java:746)
at com.intellij.ide.IdeEventQueue.defaultDispatchEvent(IdeEventQueue.java:908)
at com.intellij.ide.IdeEventQueue._dispatchEvent(IdeEventQueue.java:781)
at com.intellij.ide.IdeEventQueue.lambda$dispatchEvent$8(IdeEventQueue.java:424)
at com.intellij.openapi.progress.impl.CoreProgressManager.computePrioritized(CoreProgressManager.java:698)
at com.intellij.ide.IdeEventQueue.dispatchEvent(IdeEventQueue.java:423)
at java.desktop/java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:203)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:124)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:113)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:109)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:90)
Is this a bug? Sorry about that. If so give me explicit details how to reproduce:
What is the expected behavior?
The tasks dialog should just open.
What is the motivation / use case for changing the behavior?
Please tell me about your environment:
Gitflow4idea version: 0.7.1
Gitflow version: 0.4.1
IntelliJ Help -> about > click copy icon and paste here. Should look like this:
IntelliJ IDEA 2019.3.3 (Ultimate Edition)
Build #IU-193.6494.35, built on February 11, 2020
Licensed to jiho kim
Subscription is active until March 4, 2021
For educational use only.
Runtime version: 11.0.5+10-b520.38 x86_64
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
macOS 10.15.3
GC: ParNew, ConcurrentMarkSweep
Memory: 1979M
Cores: 16
Registry:
Non-Bundled Plugins: Gitflow, Lombook Plugin, com.atlassian.bitbucket.references, com.nvinayshetty.DTOnator, com.paperetto.dash, org.jetbrains.plugins.vue
Other information (e.g. detailed explanation, stacktrace, related issues, suggestions how to fix, links for me to have context, words of praise, pictures of puppies (again with the puppy??))
Update your GitFlow version https://github.com/OpherV/gitflow4idea/blob/develop/GITFLOW_VERSION.md
| gharchive/issue | 2020-03-05T11:48:02 | 2025-04-01T06:37:23.910396 | {
"authors": [
"blundell",
"joek8901"
],
"repo": "OpherV/gitflow4idea",
"url": "https://github.com/OpherV/gitflow4idea/issues/282",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1438617104 | Sensitive configuration in the OpenSearch Dashboards configuration
I'm looking into implementing OpenID Connect authentication. In this scenario the IdP client secret should be configured in Kibana: https://opensearch.org/docs/latest/security-plugin/configuration/openid-connect/#configuration-example
The only way I can see to inject values from a secret is using the env: section with secretKeyRef, but unfortunately it seems that the dashboards security plugin doesn't support reading configuration from the environment.
I tried using the keystore as well but that generated an error that opensearch_security.openid.client_secret is not a supported key, and it seems to only be mounted into the OpenSearch nodes anyway.
Hi @albgus.
AFAIK you can use environment variables inside the dashboards.yml config. So you should be able to inject the sensitive information as environment variables from a secret via the env option and then reference them in the additionalConfig.
Something like:
# ...
dashboards:
  env:
    - name: OPENID_CLIENT_SECRET
      valueFrom:
        secretKeyRef: ...
  additionalConfig:
    opensearch_security.openid.client_secret: "${OPENID_CLIENT_SECRET}"
Can you try that?
I tried using the keystore as well [...] and it seems to only be mounted into the OpenSearch nodes anyway.
Correct, the keystore is only used in the opensearch pods, not in dashboards.
Alright, didn't realize that the config file supported variable substitution as well. It seems that this will work. Thanks!
| gharchive/issue | 2022-11-07T16:12:08 | 2025-04-01T06:37:23.918349 | {
"authors": [
"albgus",
"swoehrl-mw"
],
"repo": "Opster/opensearch-k8s-operator",
"url": "https://github.com/Opster/opensearch-k8s-operator/issues/350",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1469436516 | In Markdown you can wrap a code block with six backticks (three on each side), e.g. for the stack trace in the wiki's log output
Just like this:
java.lang.Throwable : Test
at com.optilog.test main(test.java:8)
Caused By: java.lang.Exception
at com.optilog.test main(test.java:8)
The configuration file of Optilog Server (? seemingly) can be written like this in Markdown:
{
"printInfo": true,
"printError": true,
"printPath": "D:\\Program\\logs", \\这是文件输出路径
"printWarn": true,
"printDebug": true,
"printFatal": true,
"consoleInfo": true,
"consoleDebug": true,
"consoleError": true,
"consoleWarn": true,
"consoleFatal": true,
"socketNumber": 65535
}
Thanks for the reminder awa
| gharchive/issue | 2022-11-30T10:56:15 | 2025-04-01T06:37:23.920115 | {
"authors": [
"OptiJava",
"ZhuRuoLing"
],
"repo": "OptiJava/Optilog-Client",
"url": "https://github.com/OptiJava/Optilog-Client/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1988684016 | How to fix the git clone
Recently I used GitHub "https://github.com/OptimalScale/LMFlow" to clone the repo onto my desktop computer. I ran the command "git clone XXXXX" on my "D://" drive, and then:
Error downloading object: assets/multimodal-chatbot-demo.gif (2062965): Smudge error: Error downloading assets/multimodal-chatbot-demo.gif (206296519e7892d65cacc48c7e98c6743301b74c29401d57e325197bd6e41cac): batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
Thanks for your interest in LMFlow! It was caused by running out of Git LFS quota. We have added a data package to increase the quota. You may try again to see if the problem occurs. Thanks 😄
| gharchive/issue | 2023-11-11T03:14:29 | 2025-04-01T06:37:23.921944 | {
"authors": [
"research4pan",
"youngcraft"
],
"repo": "OptimalScale/LMFlow",
"url": "https://github.com/OptimalScale/LMFlow/issues/676",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1274997303 | Documentation - Homogenize documentation structure for components
Each component documentation page should be built using the following structure:
Component Name (only displayed on standalone documentation)
(the introductory description paragraph of the component, found in the DSM)
(Description only displayed on standalone documentation)
Page Summary
(a list of internal anchors for the headers of this component page in order to access directly to a specific section)
Specifications references
(a list of useful references for this component)
DSM link (only displayed on standalone documentation)
Material Design link
Javadoc link
Accessibility
(contains a link to the accessibility website that is required to build this component)
Variants
(A list of all the variants of this component)
Example of variants for buttons: text button, text button with icon, outlined button, outlined button with icon, contained button, contained button with icon, toggle button
For each variant:
Title
Description
Screenshots (light and dark)
« Implementation in XML » or « Implementation in Jetpack Compose » depending on the documentation displayed
Component Specific Tokens
(contains a list of all the available component specific tokens: background color, foreground color, variant specifics, …)
updates to the doc have been confirmed
| gharchive/issue | 2022-06-17T12:54:20 | 2025-04-01T06:37:23.937218 | {
"authors": [
"B3nz01d",
"paulinea"
],
"repo": "Orange-OpenSource/ods-android",
"url": "https://github.com/Orange-OpenSource/ods-android/issues/189",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
242066094 | ContentPart names should be sanitized upon creation. Fixes #7770.
Added the ToSafeName() sanitization already used in CreatePart to CreatePartPOST.
This ensures that only valid technical names can be used.
Shouldn't it be a validation message instead, if the ToSafeName result is different?
I'll decline this PR because I found that creating and editing both Parts and Types lack some validation against special characters and the change in this PR needs more work (I'll commit the fixes soon).
| gharchive/pull-request | 2017-07-11T14:22:44 | 2025-04-01T06:37:23.939744 | {
"authors": [
"BenedekFarkas",
"Xceno",
"sebastienros"
],
"repo": "OrchardCMS/Orchard",
"url": "https://github.com/OrchardCMS/Orchard/pull/7771",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2216085933 | No biome sounds on Fabric server 1.20.4
Hello
Firstly, the mod is really nice, well done!
I've been using it for weeks on a vanilla server with friends and everything was working just fine.
Recently, we started a new adventure on a Fabric server and biomes don't produce ambiance sounds any more. Yet, some sounds are working, like crows, owls...
We installed VoiceChat, Lithium and AudioPlayer, but we also tried to restart the server with no mods or datapacks (sound related) and it doesn't fix anything. I have the mod in my own mods folder, it is working on solo maps and on other vanilla servers but not on the Fabric one...
I've tried to look if anyone was having that type of issue with Fabric but I didn't find anything.
What can we try that may fix this issue ?
Thank you!
Grogro
Which version of DS are you using? And is the server 100% Fabric as opposed to some other server flavor (like paper, bungiecord, etc.)? I have seen this issue with non-Fabric servers, as well as times when the internal sequence of connecting to a server is out of order.
Hello Ore.
Thanks for your fast reply.
The F3 says "fabric-loader-0.15.7-1.20.4/fabric/Fabric" so I guess the server is 100% Fabric.
Fabric API is 0.96.11.
I'm using DS Fabric 0.3.3.
We also use AudioPlayer 1.8.9 and VoiceChat 2.5.9.
Everything is in 1.20.4.
Grogro
| gharchive/issue | 2024-03-29T23:32:28 | 2025-04-01T06:37:23.993294 | {
"authors": [
"OreCruncher",
"jesuisgrogro"
],
"repo": "OreCruncher/DynamicSurroundingsFabric",
"url": "https://github.com/OreCruncher/DynamicSurroundingsFabric/issues/104",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
998835729 | Error: config file /ql/config/check.json5 has an invalid format, please check the format at https://verytoolz.com/json5-validator.html
Error: config file /ql/config/check.json5 has an invalid format, please check the format at https://verytoolz.com/json5-validator.html. What could be the reason for this?
@cc892786825 Paste your /ql/config/check.json5 into that website to check the format; it means the format you edited in the config file is wrong.
The JSON5 syntax is linked in the project readme.
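If you want to check the file locally instead of (or before) using that website, a quick sketch, assuming the third-party json5 package is installed (pip install json5):
# Quick local format check for /ql/config/check.json5
# (assumes the third-party json5 package: pip install json5)
import json5

try:
    with open("/ql/config/check.json5", encoding="utf-8") as f:
        json5.load(f)
    print("check.json5 format looks OK")
except ValueError as err:
    print(f"check.json5 format error: {err}")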
| gharchive/issue | 2021-09-17T01:40:49 | 2025-04-01T06:37:24.010067 | {
"authors": [
"cc892786825",
"night-raise"
],
"repo": "Oreomeow/checkinpanel",
"url": "https://github.com/Oreomeow/checkinpanel/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
304927354 | Switch to standard alertify npm module
@micahalcorn had to patch the old alertify to work with our webpack, which necessitated us creating our own fork of the library. See commit https://github.com/OriginProtocol/demo-dapp/commit/b3dd1babcac01e1c6f7ecca039bb0fd40b510dba
I'd vote that we just pick one of the standard supported npm packages and run with it rather than have the messiness of maintaining our own fork.
@ryana @joshfraser any opinions on this?
(Micah's patch: https://github.com/micahalcorn/alertify.js/commit/fe25d03893b0a10697127df2b1d90718be47089e )
I would agree. It's worth noting that there was at least some desire for uniformity between the DApp and company website (see #46). I'm happy to update both to a newer version and/or tweak the styles in an effort to offer the same UX, if necessary.
We should have a longer conversation around dependencies on the engineering call tomorrow, but we should try and keep this app really simple with as few third-party dependencies as possible.
It feels a bit silly to have to maintain a whole separate package around this when we can replicate this functionality ourselves with a few lines of CSS and JS. It feels like we're probably making this way too complicated.
@micahalcorn What did we end up implementing on this, and can I close this one out? :)
@wanderingstan nothing yet. I have a partially-baked replacement, but I've had it on the back burner while focusing on the UI. And it would really be best to have Matt & Aure settle on the alert/message/notification UX distinction before merging a solution to this issue.
Done via #145 ✅
| gharchive/issue | 2018-03-13T20:40:30 | 2025-04-01T06:37:24.016181 | {
"authors": [
"joshfraser",
"micahalcorn",
"wanderingstan"
],
"repo": "OriginProtocol/demo-dapp",
"url": "https://github.com/OriginProtocol/demo-dapp/issues/98",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
294076893 | Show warning when no listing type selected
On create page
http://localhost:3000/create
If no listing type is selected and the user hits Next, nothing happens. We should show a message asking them to select a type.
Related to #46
I think this is already fixed. Will confirm.
fixed.
| gharchive/issue | 2018-02-03T02:46:08 | 2025-04-01T06:37:24.018013 | {
"authors": [
"wanderingstan"
],
"repo": "OriginProtocol/origin-dapp",
"url": "https://github.com/OriginProtocol/origin-dapp/issues/78",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2174220721 | Disable minting with LSTs
Changes:
Disables minting with LSTs
Adds a new _mint internal method, since super.mint() won't be possible because it's external, and we would also have to remove the nonReentrant modifier from the inherited contract
Removes payable from fallback() since neither VaultCore nor VaultAdmin has a receive() method, and we don't expect the Vault to hold ETH directly either
Deployment:
OETHVaultCore implementation (upgraded): 0x9535413B1B9862D0123A36F55e9bf20EBA4b152e
Deployer: 0x58890a9cb27586e83cb51d2d26bbe18a1a647245
Governance Proposal:
Proposal ID: 72116514921051679346398237778682113450913991391551128830137727748559915078301
Proposal Tx: 0xbfdd55ea8376042a6899c9df47ce0e62a7308cb62b48b89c6e9832d16183aea0
Proposer: 0x6a6D776120f7e4a8dba5F6bF49b85cb340Cfe241
If you made a contract change, make sure to complete the checklist below before merging it in master.
Refer to our documentation for more details about contract security best practices.
Contract change checklist:
[ ] Code reviewed by 2 reviewers.
[ ] Copy & paste code review security checklist below this checklist.
[ ] Unit tests pass
[ ] Slither tests pass with no warning
[ ] Echidna tests pass if PR includes changes to OUSD contract (not automated, run manually on local)
I've verified that:
✅ the governance proposal with id 72....8301 matches the deploy script
✅ OETH vaultCore is upgraded to the new implementation
✅ The published code of the new implementation matches the one in this PR
Deploy review:
[x] All deployed contracts are listed in the deploy PR's description
[x] Deployed contract's verified code (and all dependencies) match the code in master
[x] The transactions that interacted with the newly deployed contract match the deploy script.
[x] Governance proposal matches the deploy script
[ ] Smoke tests pass after fork test execution of the governance proposal (will do as a part of code review)
Smoke fork tests check out.
from world import *
NEW_IMPL = "0x9535413b1b9862d0123a36f55e9bf20eba4b152e"
VAULT_CORE_PROXY = "0x39254033945AA2E4809Cc2977E7087BEE48bd7Ab"
proxy = Contract.from_explorer(VAULT_CORE_PROXY, as_proxy_for=VAULT_CORE_PROXY)
proxy.upgradeTo(NEW_IMPL, {'from': TIMELOCK})
WETH_WHALE = "0x2fEb1512183545f48f6b9C5b4EbfCaF49CfCa6F3"
weth.approve(oeth_vault_core, 1e70, {'from': WETH_WHALE})
oeth_vault_core.mint(WETH, 1e18, 1e18, {'from': WETH_WHALE})
print(oeth.balanceOf(WETH_WHALE))
print(oeth.balanceOf(WETH_WHALE) / 1e18)
STETH_BAGS = "0x5fEC2f34D80ED82370F733043B6A536d7e9D7f8d"
steth.approve(oeth_vault_core, 1e70, {'from': STETH_BAGS})
oeth_vault_core.mint(steth, 1e18, 0, {'from': STETH_BAGS})
oeth_vault_core.redeemAll(0, {'from': WETH_WHALE})
| gharchive/pull-request | 2024-03-07T16:09:52 | 2025-04-01T06:37:24.028026 | {
"authors": [
"DanielVF",
"shahthepro",
"sparrowDom"
],
"repo": "OriginProtocol/origin-dollar",
"url": "https://github.com/OriginProtocol/origin-dollar/pull/1993",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2113697949 | 🛑 CrackingShare is down
In b07b683, CrackingShare (https://crackingshare.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: CrackingShare is back up in d2100f2 after 13 hours, 14 minutes.
| gharchive/issue | 2024-02-01T23:18:02 | 2025-04-01T06:37:24.049253 | {
"authors": [
"OsintUK"
],
"repo": "OsintUK/Up-or-Down",
"url": "https://github.com/OsintUK/Up-or-Down/issues/1574",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1088596499 | Cloud bastion
Homework #5 completed
• Main homework
• The task with *
In the process, the following was done:
• Created 2 virtual machines: bastion and someinternalhost
• Configured the ssh config to connect to the hosts via aliases
• Installed Pritunl, generated a certificate
How to verify it works:
• Available via the Pritunl link (sslip.io)
PR checklist
• Set the label with the homework number
• Set the label with the homework topic
For some reason the checks are not passing: it cannot find the files although they exist.
I don't know whether it could have been affected by the fact that I merged the previous homework after this pull request.
| gharchive/pull-request | 2021-12-25T12:40:32 | 2025-04-01T06:37:24.061001 | {
"authors": [
"NikolayGrinin"
],
"repo": "Otus-DevOps-2021-11/NikolayGrinin_infra",
"url": "https://github.com/Otus-DevOps-2021-11/NikolayGrinin_infra/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1268666380 | khapinskiy
done
Hello!)
The repositories have been created
The invitation to the organization has been sent to your email; please try to accept it as soon as possible =)
| gharchive/pull-request | 2022-06-12T18:37:36 | 2025-04-01T06:37:24.062450 | {
"authors": [
"biomack",
"mrgreyves"
],
"repo": "Otus-DevOps-2022-05/students",
"url": "https://github.com/Otus-DevOps-2022-05/students/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
944396593 | Don't inject unicodedata2 into sys.modules
I noticed charset_normalizer meddles with sys.modules, causing this:
>>> import charset_normalizer
>>> import unicodedata
>>> unicodedata
<module 'unicodedata2' from '.../site-packages/unicodedata2.cpython-39-darwin.so'>
This PR fixes that by using a fairly standard try: except ImportError: guard instead of the sys.modules hook.
>>> import charset_normalizer
>>> import unicodedata
>>> unicodedata
<module 'unicodedata' from '.../python3.9/lib-dynload/unicodedata.cpython-39-darwin.so'>
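For reference, the guard pattern in question looks roughly like this (a sketch of the approach, not the exact diff in this PR):
# Prefer unicodedata2 when it is installed, fall back to the stdlib module,
# without ever touching sys.modules.
try:
    import unicodedata2 as unicodedata
except ImportError:
    import unicodedata

print(unicodedata.unidata_version)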
Hi @akx
Thanks for your report and your PR proposal.
Why do you say that PR "fixes" when this behavior is well documented and is expected?
I would be open to change that, considering recent events.
I have some further questions regarding this patch.
Why did you install unicodedata2 and not expect it to be used instead of your CPython unicodedata distribution?
Removing the charset_normalizer.hook and replacing it with a plain import won't be useful unless we create some intermediary compat-like solution.
It would be nicer to propose something that both keeps the backward compatibility AND "fix" your concern.
Thanks.
Hi @Ousret.
As far as I can see, it's certainly not documented that simply importing charset_normalizer while having unicodedata2 installed will make it impossible to access unicodedata (unless you had imported unicodedata beforehand).
Sure, it's documented that charset_normalizer will internally use unicodedata2 if it's available, and that's fine, but I don't expect it to mess with the global state of my Python interpreter. If I wanted to patch in unicodedata2 for all unicodedatas in my interpreter, I'd want to be explicit about that. (Explicit is better than implicit.)
My proposal is exactly the same mechanism that patches in charset_normalizer for chardet in requests (which is how I stumbled upon this library in the first place):
https://github.com/psf/requests/blob/a1a6a549a0143d9b32717dbe3d75cd543ae5a4f6/requests/compat.py#L11-L14
I really do not understand why having more up-to-date unicodedata is messing with your global Python environment. Anyway.
Like I said earlier, I am open to change that behavior, but I must be convinced that using your method/patch actually does what is expected.
I really do not understand why having more up-to-date unicodedata is messing with your global Python environment.
It's a matter of principle. If I do import unicodedata, I want unicodedata, not another module just because I had happened to import an unrelated module (charset_normalizer) before importing unicodedata.
It's maybe slightly far-fetched, but without this patch, you can't easily write a program that compares the differences between unicodedata and unicodedata2 unless you're specific about your import order!
I must be convinced that using your method/patch actually does what is expected.
Well, I'm pretty sure the CI suite for charset_normalizer would show that it doesn't break things. Can you enable running the GitHub workflows for this PR?
I need another person's opinion on that. @sethmlarson What do you think of that?
Unfortunately, in the current state the test suite does not prove without a doubt that unicodedata2 is correctly used. I am working on it.
How is that the right thing for us?
import unicodedata2 as unicodedata
"a".isalpha()
What I am concerned about is methods like .isalpha() from a str instance. How do you point them to unicodedata2?
import unicodedata2 as unicodedata
"a".isalpha()
What I am concerned about is methods like .isalpha() from a str instance. How do you point them to unicodedata2?
As mentioned in https://github.com/Ousret/charset_normalizer/pull/57#discussion_r669703707 , I don't believe having unicodedata2 installed will help at all with str.isalpha(), etc. calls. In other words, you'd need to have your own def isalpha(s) sort of implementation that'd consult unicodedata2's tables, and doing that in pure Python will likely be slow.
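For illustration, such a replacement might look roughly like this (a naive sketch: it ignores edge cases and is far slower than the C implementation):
# Naive sketch of an isalpha() that consults unicodedata2's tables directly.
try:
    import unicodedata2 as unicodedata
except ImportError:
    import unicodedata

def isalpha(s):
    # Alphabetic characters fall in the Unicode "Letter" categories (Lu, Ll, Lt, Lm, Lo).
    return bool(s) and all(unicodedata.category(ch).startswith("L") for ch in s)

print(isalpha("a"), isalpha("1"))  # True False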
@Ousret Don't know much about the unicodedata module or whether it even interacts with isalpha, from a brief reading of CPython it doesn't seem like it interacts with this static table: https://github.com/python/cpython/blob/bb3e0c240bc60fe08d332ff5955d54197f79751c/Python/pyctype.c
If you chase down the implementation of str.isalpha it ends up checking flags in this table, at least from my reading.
If this is the case we shouldn't be injecting unicodedata2 as it doesn't modify str.isalpha behavior.
@sethmlarson
whether it even interacts with isalpha, from a brief reading of CPython it doesn't seem like it interacts with this static table: https://github.com/python/cpython/blob/bb3e0c240bc60fe08d332ff5955d54197f79751c/Python/pyctype.c
That's the non-Unicode table. See https://github.com/Ousret/charset_normalizer/pull/57#discussion_r669703707 for the Unicode isalpha chase.
Codecov Report
Merging #57 (a30cf9f) into master (929f13c) will decrease coverage by 0.15%.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## master #57 +/- ##
==========================================
- Coverage 84.46% 84.31% -0.16%
==========================================
Files 12 11 -1
Lines 1062 1058 -4
==========================================
- Hits 897 892 -5
- Misses 165 166 +1
Impacted Files                    Coverage Δ
charset_normalizer/__init__.py    100.00% <ø> (ø)
charset_normalizer/utils.py       76.25% <100.00%> (+0.52%) :arrow_up:
charset_normalizer/api.py         82.65% <0.00%> (-1.16%) :arrow_down:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 929f13c...a30cf9f. Read the comment docs.
Thanks, @akx @sethmlarson for your inputs.
I am okay with merging this. I don't know when a new tag will be available; it would be wise to wait for any remarks or concerns beforehand.
Do you have any more concerns @akx ?
@Ousret I don't have any further concerns regarding this PR :) Thank you for your consideration!
You are welcome, thanks for your contribution as well.
Now available under https://github.com/Ousret/charset_normalizer/releases/tag/2.0.2
I just found this while trying to debug a problem, and I just wanted to make absolutely clear that the old way causes problems. With charset_normalizer 2.0.0 (and 2.0.1), I saw this behavior:
No import:
❯ python
Python 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 05:59:23)
[Clang 11.1.0 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> compile(r's="$\N{DEGREE FAHRENHEIT}$"', 'foo.py', 'exec')
<code object <module> at 0x7fde000c4810, file "foo.py", line 1>
>>>
With import charset_normalizer:
❯ python
Python 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 05:59:23)
[Clang 11.1.0 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import charset_normalizer
>>> compile(r's="$\N{DEGREE FAHRENHEIT}$"', 'foo.py', 'exec')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "foo.py", line 1
SyntaxError: (unicode error) \N escapes not supported (can't load unicodedata module)
>>>
With the fix merged here in 2.0.2 (and the latest 2.0.7), the problem disappears. (I have an entirely different issue to solve where conda-forge is still giving me 2.0.0 🤷♂️ )
Thanks for the report. Indeed, I have seen this weird behavior where conda-forge serves 2.0.0 by default instead of the latest release. I do not know why, or who to reach to get it explained. https://anaconda.org/conda-forge/charset-normalizer/files The download trend suggests something is wrong. Could it be that the requests version requirement ~=2.0.* for this lib is applied in the wrong way?
Incorrect dependencies for requests were exactly the reason: https://github.com/conda-forge/requests-feedstock/pull/48
| gharchive/pull-request | 2021-07-14T12:51:48 | 2025-04-01T06:37:24.108312 | {
"authors": [
"Ousret",
"akx",
"codecov-commenter",
"dopplershift",
"sethmlarson"
],
"repo": "Ousret/charset_normalizer",
"url": "https://github.com/Ousret/charset_normalizer/pull/57",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2558084482 | Update go.mod
Fix for Darwin/macOS, as it won't build without it.
As stated here: https://github.com/golang/go/issues/65568
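The diff itself isn't shown here; fixes like this usually come down to bumping the Go directive and/or a dependency such as golang.org/x/sys in go.mod so the project compiles on darwin. The module path and versions below are placeholders, not the actual change from this PR:

module github.com/OutSystems/cloud-connector // assumed module path

go 1.21 // placeholder Go directive

require (
    golang.org/x/sys v0.17.0 // placeholder version; newer x/sys releases fix darwin build breakage
)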
| gharchive/pull-request | 2024-10-01T03:23:11 | 2025-04-01T06:37:24.110624 | {
"authors": [
"cjohannsen81"
],
"repo": "OutSystems/cloud-connector",
"url": "https://github.com/OutSystems/cloud-connector/pull/90",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
496771707 | Queue name in the Prometheus metrics
Hi, thank you for this project. I have a question about the exposed metrics.
I'm trying the exporter locally; without running any tasks I can check the celery_tasks_total metric in the Prometheus dashboard and I can see my tasks:
celery_tasks_total{instance="celery_exporter:9540",job="celery_exporter",name="info",namespace="celery",queue="default",state="STARTED"}
The queue, as you can see, is default; for this task the value is right because I have changed my default queue name from celery to default. The problem is that all the other tasks also report the queue as default, even though each task has its own queue with the same name as the task:
celery_tasks_total{instance="celery_exporter:9540",job="celery_exporter",name="process",namespace="celery",queue="default",state="STARTED"}
As you can see, here the queue should be process for my task process but it's default, that's also the case for other tasks.
Also, when I delay new tasks to be processed, these metrics stay the same with their values of 0, and new metrics show up for every task, but this time with a queue value of undefined and the correct number of tasks:
celery_tasks_total{instance="celery_exporter:9540",job="celery_exporter",name="info",namespace="celery",queue="undefined",state="STARTED"}
So what could be wrong here? I have my queues defined in the task_queues Celery option, and for all the tasks except the default one I'm setting the queue argument of the @app.task() decorator.
Have you configured the workers with the task_send_sent_event option (http://docs.celeryproject.org/en/latest/userguide/configuration.html#task-send-sent-event) or are you launching the exporter with the --enable-events flag?
It seems like the exporter is not capturing events containing queue info.
@MRoci I'm not using --enable-events but I have task_send_sent_event and worker_send_task_events set to True on the workers, I have tested this configuration again and got the same results, the queue is default for tasks that don't use the default queue and sometimes undefined.
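For reference, a minimal sketch of the setup described in this thread — the broker URL and task body are placeholders, while the queue names and settings come from the comments above:

from celery import Celery
from kombu import Queue

app = Celery("app", broker="redis://localhost:6379/0")

app.conf.task_default_queue = "default"   # default queue renamed from "celery" to "default"
app.conf.task_queues = (
    Queue("default"),
    Queue("process"),                     # each task gets a queue with the same name
)
# Settings the exporter relies on to see queue information in task events
app.conf.task_send_sent_event = True
app.conf.worker_send_task_events = True

@app.task(name="process", queue="process")
def process():
    ...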
Are you able to add a test reproducing the issue in a PR so we could take a look at it?
@SharpEdgeMarshall Yes I will when I get some free time.
So, looks like this never happened.
@mdawar were you able to fix that?
@orkenstein sorry I haven't had the time to find a fix and I no longer use this exporter because I moved to RQ from Celery.
| gharchive/issue | 2019-09-22T12:16:59 | 2025-04-01T06:37:24.117396 | {
"authors": [
"MRoci",
"SharpEdgeMarshall",
"mdawar",
"orkenstein"
],
"repo": "OvalMoney/celery-exporter",
"url": "https://github.com/OvalMoney/celery-exporter/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
289054636 | Huawei Y7, android 7.0 black screen on launch
Huawei Y7, android 7.0
Fullscreen = true gives this result right after the Unity splash screen. If false, there are no problems, but there is no status bar either.
@gromilQaaaa any news?
I don't have the phone in question at hand, but if you share your project and a logcat of a development build, I'll try to check it out.
Based on the garbage, most likely the buffer is not being properly cleared.
No answer
| gharchive/issue | 2018-01-16T21:01:37 | 2025-04-01T06:37:24.119640 | {
"authors": [
"Over17",
"gromilQaaaa"
],
"repo": "Over17/UnityShowAndroidStatusBar",
"url": "https://github.com/Over17/UnityShowAndroidStatusBar/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2065427164 | re-rendering
When the ColorChanger button is clicked to change the color, it triggers a re-render of the DeepChatReact component. This re-render, in turn, causes the Chat component to also be re-rendered, resulting in the loss of its internal state, such as any unfinished text in the input box. However, if the DeepChatReact component is used without passing props, like <DeepChatReact />, the unfinished text in the input box is preserved. This behavior suggests that the re-rendering of DeepChatReact due to prop changes is impacting the state of the chat.
import React, { useState } from 'react';
import { DeepChat as DeepChatReact } from 'deep-chat-react';
const ColorChanger = ({ onChangeColor }) => {
const getRandomColor = () => {
const letters = '0123456789ABCDEF';
let color = '#';
for (let i = 0; i < 6; i++) {
color += letters[Math.floor(Math.random() * 16)];
}
return color;
};
const handleChangeColor = () => {
const newColor = getRandomColor();
onChangeColor(newColor);
};
return (
<button
className="btn-round mr-1"
color="neutral"
target="_blank"
outline
onClick={handleChangeColor}
>
Change Color
</button>
);
};
const Chat = ({ index, onClose }) => {
const [color, setColor] = useState('lightblue');
const componentStyle = {
width: '98%',
height: '90%',
position: "fixed",
bottom: "1%",
borderRadius: "10px",
zIndex: 10,
left: "1%",
backgroundColor: color,
display: 'flex',
flexDirection: 'column',
alignItems: 'center',
boxShadow: "0 0 10px rgba(0, 0, 0, 0.15)"
};
return (
<div style={componentStyle}>
<DeepChatReact request={"request"} stream={{ simulation: 31 }}/>
<ColorChanger onChangeColor={setColor} />
</div>
);
};
export default Chat;
Hi @easonoob.
I'm not 100% sure what you mean by the comment or what the question is, could you perhaps elaborate on the issue.
If you are referring to the fact that React re-renders the component when you use useState, I have discussed this topic in the following issue.
Let me know if you need further help. Thanks!
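For illustration only — this sketch is not from the linked issue, and it reuses the prop names from the repro above — one way to keep the chat's internal state across color changes is to give DeepChatReact stable prop and element identities, e.g. with useMemo:

import React, { useMemo, useState } from 'react';
import { DeepChat as DeepChatReact } from 'deep-chat-react';

const Chat = () => {
  const [color, setColor] = useState('lightblue');

  // Created once, so DeepChatReact receives the exact same references
  // on every re-render triggered by the color change.
  const stream = useMemo(() => ({ simulation: 31 }), []);
  const chat = useMemo(
    () => <DeepChatReact request={"request"} stream={stream} />,
    [stream]
  );

  const randomColor = () =>
    '#' + Math.floor(Math.random() * 0xffffff).toString(16).padStart(6, '0');

  return (
    <div style={{ backgroundColor: color }}>
      {chat}
      <button onClick={() => setColor(randomColor())}>Change Color</button>
    </div>
  );
};

export default Chat;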
thanks, #61
I'm going to close this issue as the conversation on this topic should be continued in the following issue. Thanks!
| gharchive/issue | 2024-01-04T10:50:41 | 2025-04-01T06:37:24.139820 | {
"authors": [
"OvidijusParsiunas",
"easonoob"
],
"repo": "OvidijusParsiunas/deep-chat",
"url": "https://github.com/OvidijusParsiunas/deep-chat/issues/87",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1402188995 | added backend code and modified code Structure
File structure modified and code made easier to read.
@PARKHI277 #6
| gharchive/pull-request | 2022-10-09T07:54:07 | 2025-04-01T06:37:24.178648 | {
"authors": [
"Ayushpanditmoto"
],
"repo": "PARKHI277/My-blogwebsite-using-Nodejs",
"url": "https://github.com/PARKHI277/My-blogwebsite-using-Nodejs/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |