id | text | source | created | added | metadata
---|---|---|---|---|---
374678454
|
Replaced stub with an article
Replaced stub with an article about setting font-family.
[x] I have read freeCodeCamp's contribution guidelines.
[x] My pull request has a descriptive title (not a vague title like Update index.md)
[x] My pull request targets the master branch of freeCodeCamp.
[x] None of my changes are plagiarized from another source without proper attribution.
[x] My article does not contain shortened URLs or affiliate links.
If your pull request closes a GitHub issue, replace the XXXXX below with the issue number.
Closes #XXXXX
It seems that similar changes have already been accepted earlier for this article you're editing, sorry about that. 😓
If you feel you have more to add, please feel free to open up a new PR.
Thanks again! 😊
If you have any questions, feel free to reach out through Gitter or by commenting below. 💬
|
gharchive/pull-request
| 2018-10-27T19:10:52 |
2025-04-01T04:34:18.476416
|
{
"authors": [
"RandellDawson",
"ambarytl"
],
"repo": "freeCodeCamp/freeCodeCamp",
"url": "https://github.com/freeCodeCamp/freeCodeCamp/pull/30584",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
374729584
|
Updated Rules of PHP variables
[ ] I have read freeCodeCamp's contribution guidelines.
[ ] My pull request has a descriptive title (not a vague title like Update index.md)
[ ] My pull request targets the master branch of freeCodeCamp.
[ ] None of my changes are plagiarized from another source without proper attribution.
[ ] My article does not contain shortened URLs or affiliate links.
If your pull request closes a GitHub issue, replace the XXXXX below with the issue number.
Closes #XXXXX
Thank you for opening this pull request.
This is a standard message notifying you that we’ve reviewed your pull request and have decided not to merge it. We would welcome future pull requests from you.
Thank you and happy coding.
|
gharchive/pull-request
| 2018-10-28T08:20:00 |
2025-04-01T04:34:18.479524
|
{
"authors": [
"RILEYCWUOJO",
"RandellDawson"
],
"repo": "freeCodeCamp/freeCodeCamp",
"url": "https://github.com/freeCodeCamp/freeCodeCamp/pull/30948",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
375655891
|
removed space
[ ] I have read freeCodeCamp's contribution guidelines.
[ ] My pull request has a descriptive title (not a vague title like Update index.md)
[ ] My pull request targets the master branch of freeCodeCamp.
[ ] None of my changes are plagiarized from another source without proper attribution.
[ ] My article does not contain shortened URLs or affiliate links.
If your pull request closes a GitHub issue, replace the XXXXX below with the issue number.
Closes #XXXXX
Hi,
Thanks for this pull request (PR).
Unfortunately, we are marking this PR invalid for hacktoberfest. We believe it does not follow the recommended guidelines:
Quality Standards
In line with Hacktoberfest value #2 (Quantity is fun, Quality is key), we have provided examples of the quality standards we encourage. This applies mainly to beginners.
PRs that are automated e.g. scripted opening PRs to remove whitespace / optimize images.
PRs that are disruptive e.g. taking someone else's branch/commits and making a PR.
PRs that are regarded by a project maintainer as a hindrance vs. helping.
Something that's clearly an attempt to simply +1 your PR count for October.
Learn more at https://hacktoberfest.digitalocean.com/details
|
gharchive/pull-request
| 2018-10-30T19:28:22 |
2025-04-01T04:34:18.484205
|
{
"authors": [
"FazilUdupi",
"RandellDawson"
],
"repo": "freeCodeCamp/freeCodeCamp",
"url": "https://github.com/freeCodeCamp/freeCodeCamp/pull/32862",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
375824701
|
Wrap syntax example in ```cpp
Wrapped syntax example in ```cpp for proper formatting
[x] I have read freeCodeCamp's contribution guidelines.
[x] My pull request has a descriptive title (not a vague title like Update index.md)
[x] My pull request targets the master branch of freeCodeCamp.
[x] None of my changes are plagiarized from another source without proper attribution.
[x] My article does not contain shortened URLs or affiliate links.
If your pull request closes a GitHub issue, replace the XXXXX below with the issue number.
Closes #XXXXX
@Nirajn2311 CI has ended. Removing label
@UberschallSamsara Thank you for your contribution to the page! 👍
We're happy to accept these changes, and look forward to future contributions. 📝
|
gharchive/pull-request
| 2018-10-31T07:00:08 |
2025-04-01T04:34:18.487409
|
{
"authors": [
"UberschallSamsara",
"thecodingaviator"
],
"repo": "freeCodeCamp/freeCodeCamp",
"url": "https://github.com/freeCodeCamp/freeCodeCamp/pull/33287",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1756311957
|
fix(docs): correct typos
Checklist:
[x] I have read and followed the contribution guidelines.
[x] I have read and followed the how to open a pull request guide.
[x] My pull request targets the main branch of freeCodeCamp.
[x] I have tested these changes either locally on my machine, or GitPod.
Simply typos - no issue was created.
A handful of typos were corrected on two pages: Debug outgoing emails locally and Set up freeCodeCamp on Windows (WSL).
The changes to how-to-catch-outgoing-emails-locally.md include:
Correction of api to API
Correction of one instance of mailhog to MailHog
The changes to how-to-setup-wsl.md include:
Correction of debian to Debian
Correction of distro(s) to distribution(s)
Correction of dockerhub to Docker Hub
Small grammatical change to a note (mainly the addition of apostrophes to denote slang)
Hey @raisedadead,
Did we miss a bunch of NPM -> PNPM in the WSL-setup document?
Not sure; I have been on macOS primarily for over a year.
|
gharchive/pull-request
| 2023-06-14T08:09:58 |
2025-04-01T04:34:18.493669
|
{
"authors": [
"Sembauke",
"daviesa2",
"raisedadead"
],
"repo": "freeCodeCamp/freeCodeCamp",
"url": "https://github.com/freeCodeCamp/freeCodeCamp/pull/50694",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2595104976
|
fix(tests): wait for api to respond during e2e tests
fix(tests): wait for server to accept honesty
fix(tests): wait for code to be saved
Checklist:
[x] I have read and followed the contribution guidelines.
[x] I have read and followed the how to open a pull request guide.
[x] My pull request targets the main branch of freeCodeCamp.
[x] I have tested these changes either locally on my machine, or GitPod.
Some of our tests are flaky because they have hidden race conditions. They were implicitly assuming that since a request has been made, it will have finished by the time the effect of the request is checked. This generally isn't much of a problem when the two servers are on the same machine, but it's pretty bad when they're not.
Related to https://github.com/freeCodeCamp/freeCodeCamp/pull/56728
This seems fine, but is there not a way to spy on a request?
Oh, sure, but that makes the test depend on details of the api. Also, e2e tests should act like a user and users are not typically spying on api requests.
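A minimal sketch of the waiting pattern described above, assuming a Playwright-style e2e test (the page URL, endpoint, and selectors here are illustrative, not taken from the PR):
```ts
import { test, expect } from '@playwright/test';

// Hypothetical test: instead of assuming the save request has finished,
// start waiting for the API response before triggering it, then await it
// before asserting on its effect.
test('saved code survives a reload', async ({ page }) => {
  await page.goto('/learn/some-challenge'); // illustrative URL

  const saved = page.waitForResponse(
    (res) => res.url().includes('/save-challenge') && res.ok() // illustrative endpoint
  );
  await page.getByRole('textbox').fill('console.log("hi");');
  await saved; // the server has now actually handled the request

  // Only now is it safe to check the effect of the request.
  await page.reload();
  await expect(page.getByText('console.log("hi");')).toBeVisible();
});
```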
|
gharchive/pull-request
| 2024-10-17T15:41:40 |
2025-04-01T04:34:18.497800
|
{
"authors": [
"ojeytonwilliams"
],
"repo": "freeCodeCamp/freeCodeCamp",
"url": "https://github.com/freeCodeCamp/freeCodeCamp/pull/56730",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2281984564
|
Hero Mdbook collection:filename.md not working
Situation
When we set the following links:
[Link Example](developers/grid3_supported_flists.md)
[Link Example](developers/grid3_supported_flists#introduction.md)
[Link Example](../developers/grid3_supported_flists.md)
[Link Example](../developers/grid3_supported_flists#introduction.md)
[Link Example](developers:grid3_supported_flists)
[Link Example](developers:grid3_supported_flists#introduction.md)
This is the output:
[Link Example](dashboard:grid3_supported_flists.md)
[Link Example](dashboard:grid3_supported_flists#introduction.md)
[Link Example](dashboard:grid3_supported_flists.md)
[Link Example](dashboard:grid3_supported_flists#introduction.md)
[Link Example](grid3_supported_flists.md)
[Link Example](grid3_supported_flists#introduction.md#introduction.md)
Conclusion
None of those links work. We can see that in all cases the collection should be developers, but it gets changed to dashboard.
On the last two, we can see that when using collection:filename.md, we lose the collection name.
I think collection:filename is not working correctly.
@despiegk this should be the priority, along with #417
Once those 2 issues are fixed, hero mdbook can be used in production and we'll have functioning mdbooks!
can you provide steps to reproduce? (i.e. commands to run on which input)
@MarioBassem I'll show some steps to reproduce:
clone repo: https://git.ourworld.tf/tfgrid/info_tfgrid in ~/code/git.ourworld.tf/tfgrid (should be root, sudo su)
cd into info_tfgrid
checkout the branch development_test_toc
build the book with mdbook: hero mdbook -p $(pwd)/heroscript/test
Go to the hero mdbook directory built: cd ~/hero/var/mdbuild/test
Serve the book: mdbook serve --port 3003
You can see that the file Cloud TOC has the URL cloud/cloud_toc,
but if you go on the first page (test page) and click on either of those links,
it leads to test/cloud_toc and not cloud/cloud_toc. For this reason, the file ends up under Additional Pages.
Expected behaviour: it should go to cloud/cloud_toc.
Let me know if you need more info! Thanks!
will be resolved when we add unlisted pages
|
gharchive/issue
| 2024-05-07T00:05:16 |
2025-04-01T04:34:18.580460
|
{
"authors": [
"MarioBassem",
"Mik-TF",
"despiegk"
],
"repo": "freeflowuniverse/crystallib",
"url": "https://github.com/freeflowuniverse/crystallib/issues/428",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
47245082
|
Leaderboard UI signout breaks ANE
When you sign out from the leaderboard UI (top right > settings > sign out user) the application does not receive the sign out event and still assumes you're signed in.
I have the same problem signing out through the Google Play Game Services achievements UI.
|
gharchive/issue
| 2014-10-30T08:45:07 |
2025-04-01T04:34:18.790868
|
{
"authors": [
"ErikSom",
"loonychewy"
],
"repo": "freshplanet/ANE-Google-Play-Game-Services",
"url": "https://github.com/freshplanet/ANE-Google-Play-Game-Services/issues/31",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
395602301
|
Do not wait for the iframe to load when an error occurs
Fixes #9
👍
:tada: This PR is included in version 1.2.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2019-01-03T14:56:39 |
2025-04-01T04:34:18.861064
|
{
"authors": [
"DAreRodz",
"frontibotito",
"luisherranz"
],
"repo": "frontity/instademo.frontity.io",
"url": "https://github.com/frontity/instademo.frontity.io/pull/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2571166582
|
Night theme
Mostly tested on API 33. On API 28 the app would crash sometimes on the music player citing something about an ad size - it should not be related to this branch.
I added night theme colors. Updated tint of icons, color of text, and backgrounds. Updated custom styles for toolbars, etc.
I also added a UI for adding Dark Theme in preferences, but at this point, I don't think I can make it work on my own. Maybe you could add it to preferences. For now it works if a user changes to Dark Theme from the phone settings. Let me know what you think, and if you want it.
While rebasing your branch I encountered some errors so I had to rebase it manually to this other branch.
https://github.com/frostwire/frostwire/commits/night-theme-rebased-by-gubatron
I'll create a second pull request with the rebased branch and close this one for now.
We'll keep this branch for safety.
|
gharchive/pull-request
| 2024-10-07T18:33:51 |
2025-04-01T04:34:18.864778
|
{
"authors": [
"gubatron",
"marcelinkaaa"
],
"repo": "frostwire/frostwire",
"url": "https://github.com/frostwire/frostwire/pull/1019",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
206996944
|
Implement persistent red-black tree
Forms the basis of:
Set (could probably also be handled by the HashMap)
Sorted Set
Sorted Map
Done.
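For illustration, a minimal sketch of the path-copying idea behind such a persistent tree (hypothetical code, not collectable's actual implementation; a plain binary search tree insert with hard-coded number keys, omitting the red-black rebalancing and the comparator a real implementation would use):
```ts
// Insertion copies only the nodes on the path from the root to the
// insertion point; every other node is shared between versions.
interface TreeNode<V> {
  readonly key: number;
  readonly value: V;
  readonly left: TreeNode<V> | null;
  readonly right: TreeNode<V> | null;
}

function insert<V>(root: TreeNode<V> | null, key: number, value: V): TreeNode<V> {
  if (root === null) return { key, value, left: null, right: null };
  if (key < root.key) {
    // Copy this node, share the untouched right subtree.
    return { ...root, left: insert(root.left, key, value) };
  }
  if (key > root.key) {
    // Copy this node, share the untouched left subtree.
    return { ...root, right: insert(root.right, key, value) };
  }
  // Same key: replace the value, share both subtrees.
  return { ...root, value };
}

// Both versions stay usable: v1 is not mutated by the second insert.
const v1 = insert(null, 2, 'two');
const v2 = insert(v1, 1, 'one');
console.log(v1.left, v2.left?.value); // null 'one'
```
A sorted set or sorted map can then wrap such a tree with a comparator, which is why the issue lists them as direct beneficiaries.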
|
gharchive/issue
| 2017-02-11T18:02:52 |
2025-04-01T04:34:18.866410
|
{
"authors": [
"axefrog"
],
"repo": "frptools/collectable",
"url": "https://github.com/frptools/collectable/issues/20",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
225989671
|
Return empty array as requested by PHPDoc ...
... followup functions like processICalendarChange expect an array as well. Returning null here will result in a type error
Argument 3 passed to Sabre\CalDAV\Schedule\Plugin::processICalendarChange() must be of the type array, null given
Is anybody working on the broken tests?
I don't think so
let me see what I can do these days ...
I just made a PR to fix the tests. I am waiting to get some "approvals" to keep the project clean :)
https://github.com/fruux/sabre-dav/pull/965
Please rebase
Th
|
gharchive/pull-request
| 2017-05-03T13:41:14 |
2025-04-01T04:34:18.870675
|
{
"authors": [
"DeepDiver1975",
"staabm",
"tbille"
],
"repo": "fruux/sabre-dav",
"url": "https://github.com/fruux/sabre-dav/pull/960",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2152642195
|
Clear previous query from searchbox when adding subsequent characters
I need some clarification on this.
From my understanding, the query textbox should be cleared after someone adds a character.
What about the case where someone starts typing, then closes the window or clicks outside its border (maybe unintentionally)? Should the query textbox also be cleared?
Yeah, I'd say just clear it every time it's opened; it'll be simplest.
An easy way to implement this would be to set the modal to keepMounted=false https://mui.com/material-ui/api/modal/
I saw your comment too late. I just reset them manually when the window closes. I'll check your solution now.
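A minimal sketch of the keepMounted idea mentioned above (hypothetical component names, not the actual Genshin Optimizer code; MUI's Modal unmounts its children when closed unless keepMounted is set, so uncontrolled local state is discarded on close):
```tsx
import { useState } from 'react';
import { Box, Modal, TextField } from '@mui/material';

// Because keepMounted is left false (Modal's default), QueryBox is
// unmounted whenever the modal closes, so its local query state is
// discarded and the search box is empty on the next open.
function QueryBox() {
  const [query, setQuery] = useState('');
  return <TextField value={query} onChange={(e) => setQuery(e.target.value)} />;
}

export function SearchModal({ open, onClose }: { open: boolean; onClose: () => void }) {
  return (
    <Modal open={open} onClose={onClose} keepMounted={false}>
      <Box sx={{ p: 2, bgcolor: 'background.paper' }}>
        <QueryBox />
      </Box>
    </Modal>
  );
}
```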
|
gharchive/issue
| 2024-02-25T07:37:26 |
2025-04-01T04:34:18.873346
|
{
"authors": [
"HassnHamada",
"StainAE86",
"frzyc",
"nguyentvan7"
],
"repo": "frzyc/genshin-optimizer",
"url": "https://github.com/frzyc/genshin-optimizer/issues/1541",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2152959155
|
Add a toggle to disable using teammate's artifacts when optimizing
Add a toggle to the optimize tab in team, to disallow using artifacts on other teammate loadouts.
I'm working on this issue.
I'll also look into the problem causing negative numbers to appear in "Artifact Configuration" since they are probably in the same module/directory.
Sounds good, let me know if you hit any blockers, this is definitely more complicated logic than tasks you've worked on before.
|
gharchive/issue
| 2024-02-25T21:57:58 |
2025-04-01T04:34:18.875090
|
{
"authors": [
"HassnHamada",
"frzyc"
],
"repo": "frzyc/genshin-optimizer",
"url": "https://github.com/frzyc/genshin-optimizer/issues/1548",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
105788273
|
[#FLAT-370] Show footer on all pages
https://sourceclear.atlassian.net/browse/FLAT-370
:+1:
|
gharchive/pull-request
| 2015-09-10T11:33:37 |
2025-04-01T04:34:18.876397
|
{
"authors": [
"ArthurZaharov",
"VladimirMikhailov"
],
"repo": "fs/security-headers",
"url": "https://github.com/fs/security-headers/pull/13",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1153826420
|
fix: Discord invite links that don't expire
New Discord link: https://discord.gg/NJKM4yFUmg
Looks good!! ☺️
Could you approve? Then I can merge 😄
Thx, could you approve? Then I can merge 😄
Did it 😄
|
gharchive/pull-request
| 2022-02-28T09:01:53 |
2025-04-01T04:34:18.902448
|
{
"authors": [
"daniel-vera-g",
"dustinsommerfeld"
],
"repo": "fsiwi-hka/iwi-website",
"url": "https://github.com/fsiwi-hka/iwi-website/pull/106",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
281525760
|
Allow "resign" move
From what I can see, leela-zero could always output "resign" as a move. But looking at Pattern MOVE, that was never understood as a legal move by LeelaWatcher.
Probably, the earlier networks were not prone to resigning, but now they are, which leads to:
255 (resign) Move:resign
oh noes!!!
java.lang.RuntimeException: BAD MOVE: resign
at leelawatcher.parser.AutoGtpOutputParser.parseMove(AutoGtpOutputParser.java:129)
at leelawatcher.parser.AutoGtpOutputParser.lambda$start$0(AutoGtpOutputParser.java:90)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Never mind, I hadn't noticed Issue #13.
|
gharchive/issue
| 2017-12-12T20:26:26 |
2025-04-01T04:34:18.912669
|
{
"authors": [
"vrpolak"
],
"repo": "fsparv/LeelaWatcher",
"url": "https://github.com/fsparv/LeelaWatcher/issues/16",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
433334136
|
Update quickstart.fsx
Documentation currently refers to "AzureStorageProvider.fsx" but from looking at the Nuget package files, I think the file referenced should be "StorageTypeProvider.fsx"
Thank you! The only problem at the moment is that the docs generation is broken :-( Once it's fixed I'll rerelease (we might even move to another blog / docs platform).
|
gharchive/pull-request
| 2019-04-15T15:06:04 |
2025-04-01T04:34:18.913723
|
{
"authors": [
"ajwillshire",
"isaacabraham"
],
"repo": "fsprojects/AzureStorageTypeProvider",
"url": "https://github.com/fsprojects/AzureStorageTypeProvider/pull/127",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
}
|
412639713
|
Allow setting the backing classes for UI elements
This small change allows setting ViewBuilders properties and enables me to replace the implementation of the underlying Xamarin.Forms classes used by Fabulous.
For example, if I need to override a virtual method on MasterDetailPage, this change enables running:
```fsharp
type MasterDetailPageWithoutToolbar() =
    inherit Xamarin.Forms.MasterDetailPage()
    override __.ShouldShowToolbarButton() = false

Fabulous.DynamicViews.ViewBuilders.CreateFuncMasterDetailPage <- fun () ->
    upcast(new MasterDetailPageWithoutToolbar())
```
This will cause the MasterDetailPageWithoutToolbar class to be loaded when using the standard:
View.MasterDetailPage()
Ok, I think it could be a good addition.
It's a bit of an advanced case, but this will complete our extension points (like ViewProto for default ViewElements).
And it's the same way that Fabulous uses its internal controls instead of the default implementation.
Do you think you can add some documentation on it?
I think we should stress the fact that it's not the recommended way for custom controls, only good for overridden controls like your example.
Sure thing, will add some docs. Thanks for the feedback!
Retriggering CI
Looks like that only retriggered AppVeyor. The Travis build link still says it's from 8 hours ago.
just needs some time.
Awesome it worked :)
Could you run .\build.cmd (or .\build.sh on macOS) to generate the new Xamarin.Forms.Core.fs based on your modification?
Built and uploaded!
Do you feel this could fit nicely in your release plans for 0.33.0?
Thanks!
Yes, it will be released in 0.33.0.
Awesome, looking forward to it 👍
Might be good to add some docs for ViewProto too? I couldn't spot any on a quick glance.
Were you thinking we should document each member in type ViewProtos() or just a doc for the type?
I noticed some items are assigned in type ViewProtos(), but quite a few are left empty, such as ProtoContentPage and ProtoMasterDetailPage.
ViewProtos members seem to only be called from internal Update methods in the generated Xamarin.Forms.Core.fs file, so I'm not sure if they are intended to be used from outside of that file.
Ah maybe it's not what I thought it was. If that's the case, it needs to be marked as internal.
Nevertheless, I think it would be doable to propose things like this
```fsharp
let view model dispatch =
    View.ContentPage(
        content = View.StackLayout(
            children = [
                View.Label(text="Hello world")
            ]
        )
    )

type App() as app =
    inherit Application()
    ViewBuilders.DefaultLabel <- View.Label(fontFamily="Lobster-Regular")
    let runner =
        Program.mkProgram init update view
        |> Program.runWithDynamicView app
```
|
gharchive/pull-request
| 2019-02-20T21:35:58 |
2025-04-01T04:34:18.923920
|
{
"authors": [
"SergejDK",
"TimLariviere",
"dsyme",
"sdaves"
],
"repo": "fsprojects/Fabulous",
"url": "https://github.com/fsprojects/Fabulous/pull/342",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
263548158
|
.net core version of System.Net.Mail is used when targeting .net461
Description
I want to add Suave.Experimental and target net461, but using a .NET SDK-based project.
When referencing the System.Net.Mail.MailAddress class, a .NET Core package System.Net.Mail is added instead of using the standard System.dll 4.0.
Repro steps
https://github.com/theimowski/repro-paket-system.net.mail
dotnet restore
dotnet build
dotnet run
Expected behavior
want to print:
System.Net.Mail.MailAddress, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
Actual behavior
prints:
System.Net.Mail.MailAddress, System.Net.Mail, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
Known workarounds
¯\_(ツ)_/¯
TBH I'm not even sure if this is Paket-related, or rather F#-specific.
Same happens if I don't use Paket on F# project: https://github.com/theimowski/repro-paket-system.net.mail/tree/fsharp_nopaket
However, if I use C# without Paket https://github.com/theimowski/repro-paket-system.net.mail/tree/csharp , I get a build error upon dotnet build:
Program.cs(9,54): error CS0433: The type 'MailAddress' exists in both 'System.Net.Mail, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' and 'System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'
Any guidance would be helpful here
I think this is something interesting and might be related to what happened before regarding assembly versioning and Suave.
Ok maybe it is something entirely different, you can try to
debug via msbuild https://github.com/KirillOsenkov/MSBuildStructuredLog and try to figure out what compilation flags are given to the C# and F# compiler and why
Open an issue on dotnet-sdk/dotnet-cli with the nopaket and csharp repro. Maybe someone there immediately knows what's up
This looks like you might have hit some strange incompat between a netstandard and an existing net461 api.
|
gharchive/issue
| 2017-10-06T19:26:46 |
2025-04-01T04:34:18.931032
|
{
"authors": [
"matthid",
"theimowski"
],
"repo": "fsprojects/Paket",
"url": "https://github.com/fsprojects/Paket/issues/2829",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
493615550
|
Paket remove doesn't remove clitool packages from dependencies file
Description
paket remove doesn't remove clitool packages from dependencies file.
Repro steps
Run this repro script. It uses paket in magic mode, but you can edit it to set paket=paket if you have it installed globally.
Observe the script output/paket.dependencies file: the clitool dependency is not removed.
Example
Could Not Find c:\paket-remove-repro\paket.dependencies
Could Not Find c:\paket-remove-repro\paket.lock
Paket version 5.219.0
Saving file c:\paket-remove-repro\paket.dependencies
c:\paket-remove-repro\.paket\paket.targets
Downloading file from https://github.com/fsprojects/Paket/releases/download/5.219.0/paket.targets to c:\paket-remove-repro\.paket\paket.targets
c:\paket-remove-repro\.paket\paket.bootstrapper.exe
Downloading file from https://github.com/fsprojects/Paket/releases/download/5.219.0/paket.bootstrapper.exe to c:\paket-remove-repro\.paket\paket.bootstrapper.exe
Performance:
- Runtime: 2 seconds
------------------------------
Empty: paket.dependencies
------------------------------
source https://www.nuget.org/api/v2
------------------------------
Paket version 5.219.0
Adding Invoke-Build to c:\paket-remove-repro\paket.dependencies into group Main
Resolving packages for group Main:
- Invoke-Build 5.5.3
Locked version resolution written to c:\paket-remove-repro\paket.lock
Dependencies files saved to c:\paket-remove-repro\paket.dependencies
- Creating model and downloading packages.
- Installing for projects
Performance:
- Resolver: 2 seconds (1 runs)
- Runtime: 132 milliseconds
- Blocked (retrieving package details): 190 milliseconds (1 times)
- Blocked (retrieving package versions): 2 seconds (1 times)
- Disk IO: 24 milliseconds
- Average Request Time: 1 second
- Number of Requests: 2
- Runtime: 3 seconds
------------------------------
Package added: paket.dependencies
------------------------------
source https://www.nuget.org/api/v2
clitool Invoke-Build
-------------------------------
Paket version 5.219.0
Removing Invoke-Build from c:\paket-remove-repro\paket.dependencies (group Main)
Dependencies files saved to c:\paket-remove-repro\paket.dependencies
Skipping resolver for group Main since it is already up-to-date
c:\paket-remove-repro\paket.lock is already up-to-date
- Creating model and downloading packages.
- Installing for projects
Performance:
- Disk IO: 5 milliseconds
- Runtime: 967 milliseconds
------------------------------
Package removed: paket.dependencies
------------------------------
source https://www.nuget.org/api/v2
clitool Invoke-Build
-------------------------------
Expected behavior
Paket should remove packages from paket.dependencies no matter what type they are.
Actual behavior
Paket remove doesn't remove clitool packages from dependencies file. Probably other non-NuGet package types also. See: How to add/remove github dependancy with paket?
Known workarounds
None, barring manually editing the paket.dependencies file. This makes automated scenarios fragile.
Might be related to #3140.
|
gharchive/issue
| 2019-09-14T11:54:31 |
2025-04-01T04:34:18.935778
|
{
"authors": [
"beatcracker",
"inosik"
],
"repo": "fsprojects/Paket",
"url": "https://github.com/fsprojects/Paket/issues/3654",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
120652361
|
Peek definition
[ ] Add a setting
[ ] Should it show metadata for external symbols?
My answer is yes for all questions. How do you solve the 3rd issue?
I'm not sure the third one should be implemented. The nice thing about the current impl is that it opens instantly, compared to 2+ seconds for ordinary file opening. I'm afraid that if we make it a full-fledged buffer, it will become equally slow.
Ok, let's finish the first two items. The last one should only be done if file opening is fast enough.
I'm not sure I'm ready to port Go to metadata code, it's tightly coupled with UI in GoToDefinition.fs. I'd rather leave it to implement in the future.
Fair enough.
The new Peek API isn't available on VS 2013. I've got this error when invoking Alt + F12 on VS 2013.
System.ComponentModel.Composition.CompositionException: The composition produced a single composition error, with 2 root causes. The root causes are provided below. Review the CompositionException.Errors property for more detailed information.
1) The export 'Microsoft.VisualStudio.Editor.Implementation.PeekResultFactory (ContractName="Microsoft.VisualStudio.Language.Intellisense.IPeekResultFactory")' is not assignable to type 'Microsoft.VisualStudio.Language.Intellisense.IPeekResultFactory'.
Resulting in: Cannot set import 'FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider..ctor (Parameter="peekResultFactory", ContractName="Microsoft.VisualStudio.Language.Intellisense.IPeekResultFactory")' on part 'FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider'.
Element: FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider..ctor (Parameter="peekResultFactory", ContractName="Microsoft.VisualStudio.Language.Intellisense.IPeekResultFactory") --> FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider
Resulting in: Cannot get export 'FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider (ContractName="Microsoft.VisualStudio.Language.Intellisense.IPeekableItemSourceProvider")' from part 'FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider'.
Element: FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider (ContractName="Microsoft.VisualStudio.Language.Intellisense.IPeekableItemSourceProvider") --> FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider
2) The export 'Microsoft.VisualStudio.Text.Implementation.TextDocumentFactoryService (ContractName="Microsoft.VisualStudio.Text.ITextDocumentFactoryService")' is not assignable to type 'Microsoft.VisualStudio.Text.ITextDocumentFactoryService'.
Resulting in: Cannot set import 'FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider..ctor (Parameter="textDocumentFactoryService", ContractName="Microsoft.VisualStudio.Text.ITextDocumentFactoryService")' on part 'FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider'.
Element: FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider..ctor (Parameter="textDocumentFactoryService", ContractName="Microsoft.VisualStudio.Text.ITextDocumentFactoryService") --> FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider
Resulting in: Cannot get export 'FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider (ContractName="Microsoft.VisualStudio.Language.Intellisense.IPeekableItemSourceProvider")' from part 'FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider'.
Element: FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider (ContractName="Microsoft.VisualStudio.Language.Intellisense.IPeekableItemSourceProvider") --> FSharpVSPowerTools.Logic.VS2015.PeekableItemSourceProvider
at System.ComponentModel.Composition.Hosting.CompositionServices.GetExportedValueFromComposedPart(ImportEngine engine, ComposablePart part, ExportDefinition definition)
at System.ComponentModel.Composition.Hosting.CatalogExportProvider.GetExportedValue(CatalogPart part, ExportDefinition export, Boolean isSharedPart)
at System.ComponentModel.Composition.Hosting.CatalogExportProvider.CatalogExport.GetExportedValueCore()
at System.ComponentModel.Composition.Primitives.Export.get_Value()
at System.ComponentModel.Composition.ExportServices.GetCastedExportedValue[T](Export export)
at System.ComponentModel.Composition.ExportServices.<>c__DisplayClass4`2.<CreateStronglyTypedLazyOfTM>b__1()
at System.Lazy`1.CreateValue()
at System.Lazy`1.LazyInitValue()
at System.Lazy`1.get_Value()
at Microsoft.VisualStudio.Text.Utilities.GuardedOperations.InvokeMatchingFactories[TExtensionInstance,TExtensionFactory,TMetadataView](IEnumerable`1 lazyFactories, Func`2 getter, IContentType dataContentType, Object errorSource)
Could we only enable the feature for VS2015+? It's even better to show it clearly in the setting e.g. Peek Definition (VS2015+).
I know, that is why I put it into the Logic.VS2015 project. Maybe just hide the setting or make it "disabled" (gray and non-interactive) in VS2013?
We should make sure that Alt+F12 on VS2013 does nothing. A clear label is OK. Of course, graying out the setting on VS 2013 is better :-).
Huh. I tried to fit this trick https://github.com/fsprojects/VisualFSharpPowerTools/blob/master/src/FSharpVSPowerTools.Logic/NavigateToItem.fs#L277-L295, but the case is not the same because there's no IPeekableItemSourceProvider in VS2013 at all, so we cannot add a fake implementation in Logic and then select the right one at runtime. Any ideas?
Try this first https://github.com/fsprojects/VisualFSharpPowerTools/blob/master/src/FSharpVSPowerTools/Commands/UnionPatternMatchCaseGeneratorSmartTaggerProvider.cs#L55-L57. It might be enough.
Done both disabling the setting and the trick with returning null (your last comment).
In VS 2015 everything works.
About the bug: it seems only Highlight refs stops working after a single Peek Definition is invoked.
I think I understand why the bug happens: peek definition seems to reuse already open buffers and, when I close the peek def window, its Dispose is called, which disposes all our timers => most features stop working.
So, it seems we should use ITextView.Properties.GetOrCreateSingletonProperty instead of ITextBuffer.GetOrCreateSingletonProperty. Will check shortly.
No, it has not helped :(
Oh, no https://github.com/dotnet/roslyn/blob/a4e375b95953e471660e9686a46893c97db70b0e/src/EditorFeatures/Core/Shared/Extensions/ITextViewExtensions.PerSubjectBufferProperty.cs
It will be a long story.
The link looks scary :(
It throws errors on VS2013; we need another way.
Have fixed the bug where all the features stopped working after opening and closing a Peek Def view.
I have no idea how to get rid of the exception in VS 2013. I suggest:
stop supporting it
create different VSIXs for 2013 and 2015+
Let's merge this after addressing the last two comments. I'll try to disable the feature for VS2013 after merging the PR.
I think we should improve it significantly:
To fix the issue with OpenDocumentTracker, it should keep the number of views opened for a given buffer and count it down when a view is closing. When the counter reaches zero, remove the buffer from the opened-docs map.
The root cause of the fact that no features work inside a Peek Def view is that DTE returns the main document as active over here https://github.com/fsprojects/VisualFSharpPowerTools/blob/master/src/FSharpVSPowerTools.Logic/VSUtils.fs#L252, and then this case does not match if the Peek view contains a document different from the one opened in the main view https://github.com/fsprojects/VisualFSharpPowerTools/blob/master/src/FSharpVSPowerTools.Logic/VSUtils.fs#L262
Yes, we should eventually get rid of the ActiveDocument thing. It has never worked well.
|
gharchive/pull-request
| 2015-12-06T19:03:55 |
2025-04-01T04:34:18.959125
|
{
"authors": [
"dungpa",
"vasily-kirichenko"
],
"repo": "fsprojects/VisualFSharpPowerTools",
"url": "https://github.com/fsprojects/VisualFSharpPowerTools/pull/1286",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
535611717
|
issue or user-error... handling apache logs with grok
I'm not sure if I am doing it wrong, but if I use the standard %{COMMONAPACHELOG} match I get several hundred or more hits in my Grafana legend, and not (as I expected) all the different fields within this match.
I want to be able to just count, for instance, response codes or URLs.
config file:
```yaml
global:
  config_version: 2
input:
  type: file
  path: /var/log/httpd/access_log
  readall: false # true = read from the beginning of the file; false = start at the end and read only new lines.
grok:
  patterns_dir: ./patterns
  additional_patterns:
    - 'became supported'
metrics:
  - type: counter
    name: apache_access_log
    help: Metrics for the access_log
    match: '%{COMMONAPACHELOG:commonapache}'
    labels:
      response: '{{.commonapache}}'
server:
  host:
  port: 9144
```
Could you provide an example log line and a list of fields you expect? I'll have a look.
Here is the pattern I want to use and the log I have
patterns
COMMONAPACHELOG %{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
access_log example
10.251.108.48 - - [12/Dec/2019:08:12:57 +0100] "POST /zabbix/api_jsonrpc.php HTTP/1.1" 200 122 "-" "-"
10.251.108.48 - - [12/Dec/2019:08:12:57 +0100] "POST /zabbix/api_jsonrpc.php HTTP/1.1" 200 101 "-" "-"
localhost - - [12/Dec/2019:08:12:58 +0100] "GET /server-status/?auto HTTP/1.1" 200 439 "-" "Go-http-client/1.1"
10.225.200.101 - - [12/Dec/2019:08:12:58 +0100] "POST /zabbix/jsrpc.php?output=json-rpc HTTP/1.1" 200 64 "http://lsrv2289.linux.rabobank.nl/zabbix/maintenance.php?cancel=1&sid=cdbcd9e43b83bf85" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36 OPR/65.0.3467.48"
10.225.229.40 - - [12/Dec/2019:08:12:59 +0100] "POST /zabbix/jsrpc.php?output=json-rpc HTTP/1.1" 200 66 "http://lsrv2289.linux.rabobank.nl/zabbix/zabbix.php?action=dashboard.view" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 10.0; WOW64; Trident/7.0; Touch; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.30729; .NET CLR 3.5.30729; InfoPath.3; Tablet PC 2.0)"
10.251.100.166 - - [12/Dec/2019:08:12:59 +0100] "POST /zabbix/api_jsonrpc.php HTTP/1.1" 200 122 "-" "-"
10.251.100.166 - - [12/Dec/2019:08:12:59 +0100] "POST /zabbix/api_jsonrpc.php HTTP/1.1" 200 101 "-" "-"
10.246.22.130 - - [12/Dec/2019:08:13:00 +0100] "POST /zabbix/jsrpc.php?output=json-rpc HTTP/1.1" 200 64 "http://lsrv2289.linux.rabobank.nl/zabbix/maintenance.php?groupid=0" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36"
10.225.200.101 - - [12/Dec/2019:08:13:01 +0100] "POST /zabbix/jsrpc.php?output=json-rpc HTTP/1.1" 200 66 "http://lsrv2289.linux.rabobank.nl/zabbix/hosts.php?filter_host=7000&filter_dns=&filter_ip=&filter_port=&filter_set=1" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36 OPR/65.0.3467.48"
localhost - - [12/Dec/2019:08:13:03 +0100] "GET /server-status/?auto HTTP/1.1" 200 440 "-" "Go-http-client/1.1"
Your example is almost right, but you need to list the individual labels in the metric definition like this:
```yaml
- type: counter
  name: apache_access_log
  help: Metrics for the access_log
  match: '%{COMMONAPACHELOG:commonapache}'
  labels:
    clientip: '{{.clientip}}'
    ident: '{{.ident}}'
    auth: '{{.auth}}'
    timestamp: '{{.timestamp}}'
    request: '{{.request}}'
    httpversion: '{{.httpversion}}'
    rawrequest: '{{.rawrequest}}'
    response: '{{.response}}'
    bytes: '{{.bytes}}'
```
Keep in mind that for each combination of label values the Prometheus server will create a new time series. If the Prometheus server runs out of memory, you should consider removing things like timestamp from the labels because for each timestamp a new time series will be created in the Prometheus server.
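For example, a lower-cardinality variant of the metric above would keep only the fields you actually want to count by, yielding one time series per (verb, response) pair (a sketch in the same grok_exporter config format, not from the original thread):
```yaml
- type: counter
  name: apache_access_log
  help: Metrics for the access_log
  match: '%{COMMONAPACHELOG:commonapache}'
  labels:
    # Low-cardinality fields only; drop clientip, timestamp, request, bytes.
    verb: '{{.verb}}'
    response: '{{.response}}'
```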
I think I am still doing something wrong.
If I implement this I get way too much data.
If I look at the metrics file after 2 minutes, it looks like this:
# TYPE apache_access_log counter
apache_access_log{auth="-",bytes="",clientip="127.0.0.1",httpversion="1.0",ident="-",rawrequest="",request="*",response="200"} 28
apache_access_log{auth="-",bytes="1005",clientip="10.225.210.239",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 1
apache_access_log{auth="-",bytes="1006",clientip="10.225.210.239",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 6
apache_access_log{auth="-",bytes="1008",clientip="10.225.210.239",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 10
apache_access_log{auth="-",bytes="101",clientip="10.251.100.157",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 23
apache_access_log{auth="-",bytes="101",clientip="10.251.100.166",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 23
apache_access_log{auth="-",bytes="101",clientip="10.251.108.48",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 23
apache_access_log{auth="-",bytes="101",clientip="10.251.108.53",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 23
apache_access_log{auth="-",bytes="1018",clientip="10.225.210.239",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 2
apache_access_log{auth="-",bytes="106515",clientip="10.251.100.81",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 3
apache_access_log{auth="-",bytes="1077",clientip="10.225.210.239",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 4
apache_access_log{auth="-",bytes="1079",clientip="10.225.210.239",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 1
apache_access_log{auth="-",bytes="107989",clientip="10.251.100.81",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 1
apache_access_log{auth="-",bytes="1080",clientip="10.225.210.239",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 3
apache_access_log{auth="-",bytes="1087",clientip="10.225.210.239",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 1
apache_access_log{auth="-",bytes="1089",clientip="10.225.210.239",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 3
apache_access_log{auth="-",bytes="119201",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225830&itemids%5B32374%5D=32374&itemids%5B32458%5D=32458&itemids%5B32542%5D=32542&itemids%5B32626%5D=32626&itemids%5B32710%5D=32710&itemids%5B32794%5D=32794&itemids%5B32962%5D=32962&itemids%5B32878%5D=32878&itemids%5B33046%5D=33046&itemids%5B33130%5D=33130&itemids%5B33214%5D=33214&itemids%5B33298%5D=33298&itemids%5B33382%5D=33382&itemids%5B33466%5D=33466&itemids%5B33550%5D=33550&itemids%5B33634%5D=33634&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503335957",response="200"} 1
apache_access_log{auth="-",bytes="119348",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225830&itemids%5B32374%5D=32374&itemids%5B32458%5D=32458&itemids%5B32542%5D=32542&itemids%5B32626%5D=32626&itemids%5B32710%5D=32710&itemids%5B32794%5D=32794&itemids%5B32962%5D=32962&itemids%5B32878%5D=32878&itemids%5B33046%5D=33046&itemids%5B33130%5D=33130&itemids%5B33214%5D=33214&itemids%5B33298%5D=33298&itemids%5B33382%5D=33382&itemids%5B33466%5D=33466&itemids%5B33550%5D=33550&itemids%5B33634%5D=33634&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503245962",response="200"} 1
apache_access_log{auth="-",bytes="119371",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225830&itemids%5B32374%5D=32374&itemids%5B32458%5D=32458&itemids%5B32542%5D=32542&itemids%5B32626%5D=32626&itemids%5B32710%5D=32710&itemids%5B32794%5D=32794&itemids%5B32962%5D=32962&itemids%5B32878%5D=32878&itemids%5B33046%5D=33046&itemids%5B33130%5D=33130&itemids%5B33214%5D=33214&itemids%5B33298%5D=33298&itemids%5B33382%5D=33382&itemids%5B33466%5D=33466&itemids%5B33550%5D=33550&itemids%5B33634%5D=33634&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503425963",response="200"} 1
apache_access_log{auth="-",bytes="119413",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225830&itemids%5B32374%5D=32374&itemids%5B32458%5D=32458&itemids%5B32542%5D=32542&itemids%5B32626%5D=32626&itemids%5B32710%5D=32710&itemids%5B32794%5D=32794&itemids%5B32962%5D=32962&itemids%5B32878%5D=32878&itemids%5B33046%5D=33046&itemids%5B33130%5D=33130&itemids%5B33214%5D=33214&itemids%5B33298%5D=33298&itemids%5B33382%5D=33382&itemids%5B33466%5D=33466&itemids%5B33550%5D=33550&itemids%5B33634%5D=33634&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503305963",response="200"} 1
apache_access_log{auth="-",bytes="119621",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225830&itemids%5B32374%5D=32374&itemids%5B32458%5D=32458&itemids%5B32542%5D=32542&itemids%5B32626%5D=32626&itemids%5B32710%5D=32710&itemids%5B32794%5D=32794&itemids%5B32962%5D=32962&itemids%5B32878%5D=32878&itemids%5B33046%5D=33046&itemids%5B33130%5D=33130&itemids%5B33214%5D=33214&itemids%5B33298%5D=33298&itemids%5B33382%5D=33382&itemids%5B33466%5D=33466&itemids%5B33550%5D=33550&itemids%5B33634%5D=33634&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503395955",response="200"} 1
apache_access_log{auth="-",bytes="119632",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225830&itemids%5B32374%5D=32374&itemids%5B32458%5D=32458&itemids%5B32542%5D=32542&itemids%5B32626%5D=32626&itemids%5B32710%5D=32710&itemids%5B32794%5D=32794&itemids%5B32962%5D=32962&itemids%5B32878%5D=32878&itemids%5B33046%5D=33046&itemids%5B33130%5D=33130&itemids%5B33214%5D=33214&itemids%5B33298%5D=33298&itemids%5B33382%5D=33382&itemids%5B33466%5D=33466&itemids%5B33550%5D=33550&itemids%5B33634%5D=33634&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503455969",response="200"} 1
apache_access_log{auth="-",bytes="119800",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225830&itemids%5B32374%5D=32374&itemids%5B32458%5D=32458&itemids%5B32542%5D=32542&itemids%5B32626%5D=32626&itemids%5B32710%5D=32710&itemids%5B32794%5D=32794&itemids%5B32962%5D=32962&itemids%5B32878%5D=32878&itemids%5B33046%5D=33046&itemids%5B33130%5D=33130&itemids%5B33214%5D=33214&itemids%5B33298%5D=33298&itemids%5B33382%5D=33382&itemids%5B33466%5D=33466&itemids%5B33550%5D=33550&itemids%5B33634%5D=33634&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503365965",response="200"} 1
apache_access_log{auth="-",bytes="119970",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225830&itemids%5B32374%5D=32374&itemids%5B32458%5D=32458&itemids%5B32542%5D=32542&itemids%5B32626%5D=32626&itemids%5B32710%5D=32710&itemids%5B32794%5D=32794&itemids%5B32962%5D=32962&itemids%5B32878%5D=32878&itemids%5B33046%5D=33046&itemids%5B33130%5D=33130&itemids%5B33214%5D=33214&itemids%5B33298%5D=33298&itemids%5B33382%5D=33382&itemids%5B33466%5D=33466&itemids%5B33550%5D=33550&itemids%5B33634%5D=33634&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503275954",response="200"} 1
apache_access_log{auth="-",bytes="122",clientip="10.251.100.157",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 23
apache_access_log{auth="-",bytes="122",clientip="10.251.100.166",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 23
apache_access_log{auth="-",bytes="122",clientip="10.251.108.48",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 23
apache_access_log{auth="-",bytes="122",clientip="10.251.108.53",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 23
apache_access_log{auth="-",bytes="124",clientip="10.238.43.50",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 4
apache_access_log{auth="-",bytes="124",clientip="10.246.22.7",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 8
apache_access_log{auth="-",bytes="124",clientip="10.246.23.155",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 8
apache_access_log{auth="-",bytes="127353",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225741&itemids%5B123520%5D=123520&itemids%5B123535%5D=123535&itemids%5B123550%5D=123550&itemids%5B123565%5D=123565&itemids%5B123580%5D=123580&itemids%5B123592%5D=123592&itemids%5B123602%5D=123602&itemids%5B123612%5D=123612&itemids%5B123622%5D=123622&itemids%5B123632%5D=123632&itemids%5B123642%5D=123642&itemids%5B123652%5D=123652&itemids%5B123662%5D=123662&itemids%5B123672%5D=123672&itemids%5B123490%5D=123490&itemids%5B123505%5D=123505&itemids%5B123685%5D=123685&itemids%5B123700%5D=123700&itemids%5B123713%5D=123713&itemids%5B123723%5D=123723&itemids%5B123942%5D=123942&itemids%5B123738%5D=123738&itemids%5B123750%5D=123750&itemids%5B123763%5D=123763&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503315765",response="200"} 1
apache_access_log{auth="-",bytes="127406",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225741&itemids%5B123520%5D=123520&itemids%5B123535%5D=123535&itemids%5B123550%5D=123550&itemids%5B123565%5D=123565&itemids%5B123580%5D=123580&itemids%5B123592%5D=123592&itemids%5B123602%5D=123602&itemids%5B123612%5D=123612&itemids%5B123622%5D=123622&itemids%5B123632%5D=123632&itemids%5B123642%5D=123642&itemids%5B123652%5D=123652&itemids%5B123662%5D=123662&itemids%5B123672%5D=123672&itemids%5B123490%5D=123490&itemids%5B123505%5D=123505&itemids%5B123685%5D=123685&itemids%5B123700%5D=123700&itemids%5B123713%5D=123713&itemids%5B123723%5D=123723&itemids%5B123942%5D=123942&itemids%5B123738%5D=123738&itemids%5B123750%5D=123750&itemids%5B123763%5D=123763&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503285772",response="200"} 1
apache_access_log{auth="-",bytes="127526",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225741&itemids%5B123520%5D=123520&itemids%5B123535%5D=123535&itemids%5B123550%5D=123550&itemids%5B123565%5D=123565&itemids%5B123580%5D=123580&itemids%5B123592%5D=123592&itemids%5B123602%5D=123602&itemids%5B123612%5D=123612&itemids%5B123622%5D=123622&itemids%5B123632%5D=123632&itemids%5B123642%5D=123642&itemids%5B123652%5D=123652&itemids%5B123662%5D=123662&itemids%5B123672%5D=123672&itemids%5B123490%5D=123490&itemids%5B123505%5D=123505&itemids%5B123685%5D=123685&itemids%5B123700%5D=123700&itemids%5B123713%5D=123713&itemids%5B123723%5D=123723&itemids%5B123942%5D=123942&itemids%5B123738%5D=123738&itemids%5B123750%5D=123750&itemids%5B123763%5D=123763&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503255766",response="200"} 1
apache_access_log{auth="-",bytes="127682",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225741&itemids%5B123520%5D=123520&itemids%5B123535%5D=123535&itemids%5B123550%5D=123550&itemids%5B123565%5D=123565&itemids%5B123580%5D=123580&itemids%5B123592%5D=123592&itemids%5B123602%5D=123602&itemids%5B123612%5D=123612&itemids%5B123622%5D=123622&itemids%5B123632%5D=123632&itemids%5B123642%5D=123642&itemids%5B123652%5D=123652&itemids%5B123662%5D=123662&itemids%5B123672%5D=123672&itemids%5B123490%5D=123490&itemids%5B123505%5D=123505&itemids%5B123685%5D=123685&itemids%5B123700%5D=123700&itemids%5B123713%5D=123713&itemids%5B123723%5D=123723&itemids%5B123942%5D=123942&itemids%5B123738%5D=123738&itemids%5B123750%5D=123750&itemids%5B123763%5D=123763&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503435767",response="200"} 1
apache_access_log{auth="-",bytes="127862",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225741&itemids%5B123520%5D=123520&itemids%5B123535%5D=123535&itemids%5B123550%5D=123550&itemids%5B123565%5D=123565&itemids%5B123580%5D=123580&itemids%5B123592%5D=123592&itemids%5B123602%5D=123602&itemids%5B123612%5D=123612&itemids%5B123622%5D=123622&itemids%5B123632%5D=123632&itemids%5B123642%5D=123642&itemids%5B123652%5D=123652&itemids%5B123662%5D=123662&itemids%5B123672%5D=123672&itemids%5B123490%5D=123490&itemids%5B123505%5D=123505&itemids%5B123685%5D=123685&itemids%5B123700%5D=123700&itemids%5B123713%5D=123713&itemids%5B123723%5D=123723&itemids%5B123942%5D=123942&itemids%5B123738%5D=123738&itemids%5B123750%5D=123750&itemids%5B123763%5D=123763&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503345773",response="200"} 1
apache_access_log{auth="-",bytes="128085",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225741&itemids%5B123520%5D=123520&itemids%5B123535%5D=123535&itemids%5B123550%5D=123550&itemids%5B123565%5D=123565&itemids%5B123580%5D=123580&itemids%5B123592%5D=123592&itemids%5B123602%5D=123602&itemids%5B123612%5D=123612&itemids%5B123622%5D=123622&itemids%5B123632%5D=123632&itemids%5B123642%5D=123642&itemids%5B123652%5D=123652&itemids%5B123662%5D=123662&itemids%5B123672%5D=123672&itemids%5B123490%5D=123490&itemids%5B123505%5D=123505&itemids%5B123685%5D=123685&itemids%5B123700%5D=123700&itemids%5B123713%5D=123713&itemids%5B123723%5D=123723&itemids%5B123942%5D=123942&itemids%5B123738%5D=123738&itemids%5B123750%5D=123750&itemids%5B123763%5D=123763&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503375767",response="200"} 1
apache_access_log{auth="-",bytes="128134",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225741&itemids%5B123520%5D=123520&itemids%5B123535%5D=123535&itemids%5B123550%5D=123550&itemids%5B123565%5D=123565&itemids%5B123580%5D=123580&itemids%5B123592%5D=123592&itemids%5B123602%5D=123602&itemids%5B123612%5D=123612&itemids%5B123622%5D=123622&itemids%5B123632%5D=123632&itemids%5B123642%5D=123642&itemids%5B123652%5D=123652&itemids%5B123662%5D=123662&itemids%5B123672%5D=123672&itemids%5B123490%5D=123490&itemids%5B123505%5D=123505&itemids%5B123685%5D=123685&itemids%5B123700%5D=123700&itemids%5B123713%5D=123713&itemids%5B123723%5D=123723&itemids%5B123942%5D=123942&itemids%5B123738%5D=123738&itemids%5B123750%5D=123750&itemids%5B123763%5D=123763&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503465773",response="200"} 1
apache_access_log{auth="-",bytes="128162",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225741&itemids%5B123520%5D=123520&itemids%5B123535%5D=123535&itemids%5B123550%5D=123550&itemids%5B123565%5D=123565&itemids%5B123580%5D=123580&itemids%5B123592%5D=123592&itemids%5B123602%5D=123602&itemids%5B123612%5D=123612&itemids%5B123622%5D=123622&itemids%5B123632%5D=123632&itemids%5B123642%5D=123642&itemids%5B123652%5D=123652&itemids%5B123662%5D=123662&itemids%5B123672%5D=123672&itemids%5B123490%5D=123490&itemids%5B123505%5D=123505&itemids%5B123685%5D=123685&itemids%5B123700%5D=123700&itemids%5B123713%5D=123713&itemids%5B123723%5D=123723&itemids%5B123942%5D=123942&itemids%5B123738%5D=123738&itemids%5B123750%5D=123750&itemids%5B123763%5D=123763&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503405762",response="200"} 1
apache_access_log{auth="-",bytes="1295",clientip="10.225.209.44",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/zabbix.php?action=widget.status.view&sid=d87f23c14b792a9b&upd_counter=4500&pmasterid=dashboard",response="200"} 1
apache_access_log{auth="-",bytes="1295",clientip="10.225.209.44",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/zabbix.php?action=widget.status.view&sid=d87f23c14b792a9b&upd_counter=4501&pmasterid=dashboard",response="200"} 1
apache_access_log{auth="-",bytes="1295",clientip="10.225.209.44",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/zabbix.php?action=widget.status.view&sid=d87f23c14b792a9b&upd_counter=4502&pmasterid=dashboard",response="200"} 1
apache_access_log{auth="-",bytes="1295",clientip="10.225.209.44",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/zabbix.php?action=widget.status.view&sid=d87f23c14b792a9b&upd_counter=4503&pmasterid=dashboard",response="200"} 1
apache_access_log{auth="-",bytes="13739",clientip="10.251.100.81",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 1
apache_access_log{auth="-",bytes="15139",clientip="10.251.100.81",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 1
apache_access_log{auth="-",bytes="156282",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225713&itemids%5B32367%5D=32367&itemids%5B32451%5D=32451&itemids%5B32535%5D=32535&itemids%5B32619%5D=32619&itemids%5B32703%5D=32703&itemids%5B32787%5D=32787&itemids%5B32871%5D=32871&itemids%5B32955%5D=32955&itemids%5B33039%5D=33039&itemids%5B33123%5D=33123&itemids%5B33207%5D=33207&itemids%5B33291%5D=33291&itemids%5B33375%5D=33375&itemids%5B33459%5D=33459&itemids%5B33543%5D=33543&itemids%5B33627%5D=33627&itemids%5B33711%5D=33711&itemids%5B33810%5D=33810&itemids%5B33909%5D=33909&itemids%5B34008%5D=34008&itemids%5B34107%5D=34107&itemids%5B34206%5D=34206&itemids%5B34305%5D=34305&itemids%5B34404%5D=34404&itemids%5B34503%5D=34503&itemids%5B34554%5D=34554&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503355975",response="200"} 1
apache_access_log{auth="-",bytes="156434",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225713&itemids%5B32367%5D=32367&itemids%5B32451%5D=32451&itemids%5B32535%5D=32535&itemids%5B32619%5D=32619&itemids%5B32703%5D=32703&itemids%5B32787%5D=32787&itemids%5B32871%5D=32871&itemids%5B32955%5D=32955&itemids%5B33039%5D=33039&itemids%5B33123%5D=33123&itemids%5B33207%5D=33207&itemids%5B33291%5D=33291&itemids%5B33375%5D=33375&itemids%5B33459%5D=33459&itemids%5B33543%5D=33543&itemids%5B33627%5D=33627&itemids%5B33711%5D=33711&itemids%5B33810%5D=33810&itemids%5B33909%5D=33909&itemids%5B34008%5D=34008&itemids%5B34107%5D=34107&itemids%5B34206%5D=34206&itemids%5B34305%5D=34305&itemids%5B34404%5D=34404&itemids%5B34503%5D=34503&itemids%5B34554%5D=34554&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503385967",response="200"} 1
apache_access_log{auth="-",bytes="156690",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225713&itemids%5B32367%5D=32367&itemids%5B32451%5D=32451&itemids%5B32535%5D=32535&itemids%5B32619%5D=32619&itemids%5B32703%5D=32703&itemids%5B32787%5D=32787&itemids%5B32871%5D=32871&itemids%5B32955%5D=32955&itemids%5B33039%5D=33039&itemids%5B33123%5D=33123&itemids%5B33207%5D=33207&itemids%5B33291%5D=33291&itemids%5B33375%5D=33375&itemids%5B33459%5D=33459&itemids%5B33543%5D=33543&itemids%5B33627%5D=33627&itemids%5B33711%5D=33711&itemids%5B33810%5D=33810&itemids%5B33909%5D=33909&itemids%5B34008%5D=34008&itemids%5B34107%5D=34107&itemids%5B34206%5D=34206&itemids%5B34305%5D=34305&itemids%5B34404%5D=34404&itemids%5B34503%5D=34503&itemids%5B34554%5D=34554&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503325969",response="200"} 1
apache_access_log{auth="-",bytes="156952",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225713&itemids%5B32367%5D=32367&itemids%5B32451%5D=32451&itemids%5B32535%5D=32535&itemids%5B32619%5D=32619&itemids%5B32703%5D=32703&itemids%5B32787%5D=32787&itemids%5B32871%5D=32871&itemids%5B32955%5D=32955&itemids%5B33039%5D=33039&itemids%5B33123%5D=33123&itemids%5B33207%5D=33207&itemids%5B33291%5D=33291&itemids%5B33375%5D=33375&itemids%5B33459%5D=33459&itemids%5B33543%5D=33543&itemids%5B33627%5D=33627&itemids%5B33711%5D=33711&itemids%5B33810%5D=33810&itemids%5B33909%5D=33909&itemids%5B34008%5D=34008&itemids%5B34107%5D=34107&itemids%5B34206%5D=34206&itemids%5B34305%5D=34305&itemids%5B34404%5D=34404&itemids%5B34503%5D=34503&itemids%5B34554%5D=34554&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503265968",response="200"} 1
apache_access_log{auth="-",bytes="157045",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225713&itemids%5B32367%5D=32367&itemids%5B32451%5D=32451&itemids%5B32535%5D=32535&itemids%5B32619%5D=32619&itemids%5B32703%5D=32703&itemids%5B32787%5D=32787&itemids%5B32871%5D=32871&itemids%5B32955%5D=32955&itemids%5B33039%5D=33039&itemids%5B33123%5D=33123&itemids%5B33207%5D=33207&itemids%5B33291%5D=33291&itemids%5B33375%5D=33375&itemids%5B33459%5D=33459&itemids%5B33543%5D=33543&itemids%5B33627%5D=33627&itemids%5B33711%5D=33711&itemids%5B33810%5D=33810&itemids%5B33909%5D=33909&itemids%5B34008%5D=34008&itemids%5B34107%5D=34107&itemids%5B34206%5D=34206&itemids%5B34305%5D=34305&itemids%5B34404%5D=34404&itemids%5B34503%5D=34503&itemids%5B34554%5D=34554&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503295977",response="200"} 1
apache_access_log{auth="-",bytes="157178",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225713&itemids%5B32367%5D=32367&itemids%5B32451%5D=32451&itemids%5B32535%5D=32535&itemids%5B32619%5D=32619&itemids%5B32703%5D=32703&itemids%5B32787%5D=32787&itemids%5B32871%5D=32871&itemids%5B32955%5D=32955&itemids%5B33039%5D=33039&itemids%5B33123%5D=33123&itemids%5B33207%5D=33207&itemids%5B33291%5D=33291&itemids%5B33375%5D=33375&itemids%5B33459%5D=33459&itemids%5B33543%5D=33543&itemids%5B33627%5D=33627&itemids%5B33711%5D=33711&itemids%5B33810%5D=33810&itemids%5B33909%5D=33909&itemids%5B34008%5D=34008&itemids%5B34107%5D=34107&itemids%5B34206%5D=34206&itemids%5B34305%5D=34305&itemids%5B34404%5D=34404&itemids%5B34503%5D=34503&itemids%5B34554%5D=34554&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503415976",response="200"} 1
apache_access_log{auth="-",bytes="157908",clientip="10.225.208.92",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/chart.php?period=21600&stime=20211214225713&itemids%5B32367%5D=32367&itemids%5B32451%5D=32451&itemids%5B32535%5D=32535&itemids%5B32619%5D=32619&itemids%5B32703%5D=32703&itemids%5B32787%5D=32787&itemids%5B32871%5D=32871&itemids%5B32955%5D=32955&itemids%5B33039%5D=33039&itemids%5B33123%5D=33123&itemids%5B33207%5D=33207&itemids%5B33291%5D=33291&itemids%5B33375%5D=33375&itemids%5B33459%5D=33459&itemids%5B33543%5D=33543&itemids%5B33627%5D=33627&itemids%5B33711%5D=33711&itemids%5B33810%5D=33810&itemids%5B33909%5D=33909&itemids%5B34008%5D=34008&itemids%5B34107%5D=34107&itemids%5B34206%5D=34206&itemids%5B34305%5D=34305&itemids%5B34404%5D=34404&itemids%5B34503%5D=34503&itemids%5B34554%5D=34554&type=0&batch=1&updateProfile=0&profileIdx=&profileIdx2=&width=1228&sid=d9ae886446fd4aac&screenid=&curtime=1576503445968",response="200"} 1
apache_access_log{auth="-",bytes="15847",clientip="10.251.100.81",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 1
apache_access_log{auth="-",bytes="16555",clientip="10.251.100.81",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 2
apache_access_log{auth="-",bytes="16594",clientip="10.251.100.81",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 1
apache_access_log{auth="-",bytes="16725",clientip="10.251.100.81",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 1
apache_access_log{auth="-",bytes="16867",clientip="10.251.100.81",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 1
apache_access_log{auth="-",bytes="17306",clientip="10.251.100.81",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 1
apache_access_log{auth="-",bytes="17774",clientip="10.251.100.81",httpversion="1.1",ident="-",rawrequest="",request="/zabbix/api_jsonrpc.php",response="200"} 2
and so on.
So every time the line is a little different it shows up as a separate item.
The main thing I want to do is count responses and maybe sources...
What am I missing?
p.s.: In Grafana it also says: Time series is monotonically increasing. Fix by adding rate().
Hi, using rate() on counter metrics is a good idea, because if you restart the grok_exporter all metrics will be reset to 0, and the rate() function makes up for that.
As for your many metrics: You see a time series for each combination of label values. If you remove things like bytes, request, rawrequest, clientip, and timestamp from your labels you will see a lot less different time series:
- type: counter
name: apache_access_log
help: Metrics for the access_log
match: '%{COMMONAPACHELOG:commonapache}'
labels:
ident: '{{.ident}}'
auth: '{{.auth}}'
httpversion: '{{.httpversion}}'
response: '{{.response}}'
If you want to keep the labels but not see them in your dashboard you can also use a Prometheus query to filter them out:
sum without (bytes, request, rawrequest, clientip, timestamp) (apache_access_log)
However, filtering them out with a Prometheus query means that the time series are still stored in the Prometheus server, so if you run out of memory it's better to remove the labels from the grok_exporter config.
Okay... I think I got it working a little, but I was expecting to get more data from the log lines.
I will look into that further.
I already tested some patterns myself before this and came up with:
HTTP200 (\s200\s)
HTTP400 (\s400\s)
HTTP404 (\s404\s)
HTTP500 (\s500\s)
When using this in my grok yml file like here:
- type: counter
  name: apache_access_log_200
  help: Metrics for the access_log
  match: '%{HTTP200:resp200}'
  labels:
    response: '{{.resp200}}'
It also gives the same data as the apache log pattern.
My feeling is that the apache pattern gives me too much overhead. Is my assumption correct?
And no, it's not working as expected... the graph shows a much higher or far too low count for the response codes...
All I want is the response code count per minute from my access_log and my ssl_request_log.
I have another script that pushes this to Zabbix and that works fine, but I want it with Prometheus...
All I want is a list of all response codes counted per minute...
Maybe my approach is wrong and I'm better off using another exporter?
If you are only interested in counting HTTP response codes, you can indeed simplify the match pattern. For example, it looks like the HTTP response code is always the next thing after the second " symbol, so you could configure your counter like this:
metrics:
- type: counter
name: apache_access_log
help: Metrics for the access_log
match: '([^"]*"){2} %{NUMBER:response} '
labels:
response: '{{.response}}'
In Prometheus, the rate() function gives you the per-second increase rate of a counter metric. If you want the increase rate over the last minute, just multiply it by 60:
60*rate(apache_access_log[1m])
Hope that helps!
Yes! This really helped. Great!
I now have two groks running, counting http and https response codes.
Is there any idea when grok will be able to handle multiple files?
And again, thanks for the great help.
I released 1.0.0.RC1, which supports multiple log files. CONFIG.md contains updated information on how to configure that.
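For reference, a multi-file input in the new config format looks roughly like this (the paths are illustrative; see CONFIG.md for the authoritative syntax):
input:
    type: file
    paths:
    - /var/log/httpd/access_log
    - /var/log/httpd/ssl_request_log
    readall: false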
|
gharchive/issue
| 2019-12-10T09:47:40 |
2025-04-01T04:34:19.069749
|
{
"authors": [
"fstab",
"waardd"
],
"repo": "fstab/grok_exporter",
"url": "https://github.com/fstab/grok_exporter/issues/74",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2244186088
|
Experimental setup for vanilla Gaussians
Great work, thanks for open-sourcing it!
I'd like to run a comparison against vanilla Gaussian splatting. Which configuration options do I need to change to degrade the model back to the original Gaussians?
Thanks again!
Hi, you can try setting t_init to infinity and setting velocity_lr, scaling_t_lr, and t_lr_init to 0 to degrade to 3D Gaussians. Beyond that, though, many details (such as the config and the loss) differ from the original Gaussians, so it may be more convenient to port the dataloader to the original Gaussian codebase.
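A sketch of what those overrides might look like in a config file (the parameter names come from the reply above; the actual file layout in PVG may differ):
t_init: 1.0e9      # effectively infinite lifespan, i.e. static Gaussians
velocity_lr: 0.0   # freeze per-Gaussian velocity
scaling_t_lr: 0.0  # freeze temporal scaling
t_lr_init: 0.0     # freeze the temporal center learning rate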
Thank you very much!
|
gharchive/issue
| 2024-04-15T16:55:53 |
2025-04-01T04:34:19.078239
|
{
"authors": [
"Fumore",
"Korace0v0"
],
"repo": "fudan-zvg/PVG",
"url": "https://github.com/fudan-zvg/PVG/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2636415221
|
chore(main): release pixels 0.27.0
:robot: I have created a release beep boop
0.27.0 (2024-11-05)
Features
pixels: add initial ButtonGroup (826054d)
This PR was generated with Release Please. See documentation.
:robot: Created releases:
pixels-v0.27.0
:sunflower:
|
gharchive/pull-request
| 2024-11-05T20:33:47 |
2025-04-01T04:34:19.088198
|
{
"authors": [
"fuf-cavendish"
],
"repo": "fuf-stack/uniform",
"url": "https://github.com/fuf-stack/uniform/pull/492",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2325651603
|
Run static analysis with minimum supported version of Flutter SDK in CI
To avoid issues such as #144.
Duplicate of #229.
|
gharchive/issue
| 2024-05-30T13:09:26 |
2025-04-01T04:34:19.101089
|
{
"authors": [
"fujidaiti"
],
"repo": "fujidaiti/smooth_sheets",
"url": "https://github.com/fujidaiti/smooth_sheets/issues/145",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1903842107
|
Add tag with pre-alignment score to unaligned reads
I set --pre-align-min-score 200 and had a read fail to align; however, the read aligned with minimap2. When I dropped the threshold to 100, the read aligned.
It would be useful to know the pre-alignment scores of reads that are expected to align but fail to do so, so that --pre-align-min-score can be adjusted intelligently.
I don't see any reason why AS couldn't be used.
How about the XS or xs tag, with the latter as per the README:
the sub-optimal alignment score, practically the maximum of any pre-alignment and secondary chain
then one could get the xs tag for all reads
perfect
|
gharchive/issue
| 2023-09-19T23:10:47 |
2025-04-01T04:34:19.111390
|
{
"authors": [
"jdidion",
"nh13"
],
"repo": "fulcrumgenomics/stitch",
"url": "https://github.com/fulcrumgenomics/stitch/issues/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1159339990
|
🛑 NGR System Status is down
In 08d266a, NGR System Status (https://www.ngr.com.au/myngr-system-status/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NGR System Status is back up in d27283f.
|
gharchive/issue
| 2022-03-04T07:37:56 |
2025-04-01T04:34:19.129782
|
{
"authors": [
"SG2019"
],
"repo": "fullprofile/agridigital-status-monitor",
"url": "https://github.com/fullprofile/agridigital-status-monitor/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2733534024
|
fullstory_event_types -> fullstory_events_types
fullstory_event_types doesn't work; it looks like it should be fullstory_events_types (with an s), as used in the sql files
@sabrina-li thanks for this fix, I think the docs actually read better, can we change the SQL to use fullstory_event_types? We'll release a new version after this is merged!
I wonder if it makes sense to support both in code, and note a deprecation in README? Since it's a breaking change.
Was thinking we could eventually drop the support when bumping the major version when bigger changes are at play? WDYT?
|
gharchive/pull-request
| 2024-12-11T17:14:06 |
2025-04-01T04:34:19.134763
|
{
"authors": [
"huttotw",
"sabrina-li"
],
"repo": "fullstorydev/dbt_fullstory",
"url": "https://github.com/fullstorydev/dbt_fullstory/pull/36",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
483822678
|
capture_io not working
Hey there! It seems that Optimus parse output is preventing capture_ioto work correctly.
I have this test
test "main with help arg" do
optimus = Optimus.new!(
name: "py",
description: "CLI",
version: "0.1",
author: "Who am I",
about: "something",
allow_unknown_args: true,
parse_double_dash: true
)
exec_help_arg = fn ->
optimus |> Optimus.parse!(["--help"]) |> IO.inspect
end
assert capture_io(exec_help_arg) == "No key value pairs"
end
When I try to run it, there is nothing in the output; it's like the assert can't be executed.
If I run the function outside the capture I can see the output
CLI 0.1
Who am I
something
USAGE:
py ...
py --version
py --help
So I'm wondering if there is some kind of weird character that is breaking it.
I'm using
Erlang/OTP 22 [erts-10.4.4] [source] [64-bit] [smp:12:12] [ds:12:12:10] [async-threads:1] [hipe] [dtrace]
Elixir 1.9.1 (compiled with Erlang/OTP 20)
Thanks!
Hello!
That's happening because Optimus.parse! halts the node in case of errors.
One should use Optimus.parse for functions intended for tests.
Closing due to inactivity
|
gharchive/issue
| 2019-08-22T07:38:49 |
2025-04-01T04:34:19.145465
|
{
"authors": [
"mustela",
"savonarola"
],
"repo": "funbox/optimus",
"url": "https://github.com/funbox/optimus/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1470678071
|
Why to use torch.log after obtaining output_proposals?
Why is torch.log used in the third-to-last line of the figure?
Same question.
Same question.
The log function maps the four coordinates back to their original values: since the four coordinates have been passed through a sigmoid, the log here acts as an inverse sigmoid. The author provides an equivalent inverse_sigmoid function in misc.py.
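For reference, the inverse_sigmoid helper in misc.py looks roughly like this (details may differ slightly between versions):
import torch

def inverse_sigmoid(x, eps=1e-5):
    # Clamp into [0, 1], then apply the logit so that
    # inverse_sigmoid(sigmoid(y)) == y in a numerically safe range.
    x = x.clamp(min=0, max=1)
    x1 = x.clamp(min=eps)
    x2 = (1 - x).clamp(min=eps)
    return torch.log(x1 / x2)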
Same question.
Got it, thanks!
|
gharchive/issue
| 2022-12-01T05:00:57 |
2025-04-01T04:34:19.147776
|
{
"authors": [
"Pujin-sysu",
"capsule2077"
],
"repo": "fundamentalvision/Deformable-DETR",
"url": "https://github.com/fundamentalvision/Deformable-DETR/issues/175",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
62934188
|
Concurrency safety
Hello!
One of the todos in your readme is "Concurrency safety by utilizing PayPal-Request-Id". Do you think the current implementation might be unsafe for use in webapps with high transaction volume? Apart from that, great library!
It may or may not, depending on PayPal API performance. But according to the documentation, using Paypal-Request-Id will decrease the risk of duplicate transactions (https://developer.paypal.com/docs/api/). I would suggest you do some tests in your environment and find out.
Thanks, I'll look into it. Thought it was more to do with the library itself and maybe some race conditions.
|
gharchive/issue
| 2015-03-19T09:53:26 |
2025-04-01T04:34:19.149731
|
{
"authors": [
"nazwa",
"pengux"
],
"repo": "fundary/paypal",
"url": "https://github.com/fundary/paypal/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
448555711
|
Update scope
Add openid to the scope so it can be used with other OpenID Connect Providers such as https://hub.docker.com/r/qlik/simple-oidc-provider
Nice one, thanks!
|
gharchive/pull-request
| 2019-05-26T11:11:01 |
2025-04-01T04:34:19.192614
|
{
"authors": [
"funkypenguin",
"mdbraber"
],
"repo": "funkypenguin/traefik-forward-auth",
"url": "https://github.com/funkypenguin/traefik-forward-auth/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
440700413
|
_posts/0000-01-02-funnysoo07.md
No comment
I don't know what I did wrong in this section. Please help me.
|
gharchive/pull-request
| 2019-05-06T13:29:07 |
2025-04-01T04:34:19.193718
|
{
"authors": [
"funnysoo07"
],
"repo": "funnysoo07/github-slideshow",
"url": "https://github.com/funnysoo07/github-slideshow/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
350532049
|
Support using UNSAFE_componentWillMount
Upgrades dependencies to latest.
Rename and support using UNSAFE_componentWillMount, as React now suggests using this as a stopgap.
!merge
Might be good to link to https://reactjs.org/blog/2018/03/27/update-on-async-rendering.html#gradual-migration-path in the description?
|
gharchive/pull-request
| 2018-08-14T17:52:09 |
2025-04-01T04:34:19.209315
|
{
"authors": [
"KevinGrandon",
"mlmorg"
],
"repo": "fusionjs/fusion-react",
"url": "https://github.com/fusionjs/fusion-react/pull/149",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
377928184
|
IOS crash when use Voice.start('en-US')
It works fine on Android, but on iOS it crashes when I use the function Voice.start('en-US').
Has anyone encountered this problem yet? I can't find an error log; please help.
Xcode 10.
iOS 12.
React native: 0.57.3
Sorry, I sent this to the wrong repo
|
gharchive/issue
| 2018-11-06T16:28:59 |
2025-04-01T04:34:19.223538
|
{
"authors": [
"pekubu"
],
"repo": "futurice/react-native-audio-toolkit",
"url": "https://github.com/futurice/react-native-audio-toolkit/issues/121",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
141115228
|
Eslint
Adds automatic code linting using ESLint. The current .eslintrc contains baseline rules for general JavaScript, React, and ES6 best practices. All the linting rules can be customized by modifying the .eslintrc file and changing the number value for each rule (0 = ignore, 1 = warn, 2 = error).
Linting can be run manually with npm run lint or will run automatically before publishing to npm.
I made 1 breaking change to the API (super minor) in that MortarJS.Flatten was renamed to MortarJS.flatten.
This affects the generator. The change for the generator was just pushed to master, but has not been npm published yet. That will need to be done when this branch is merged into master.
|
gharchive/pull-request
| 2016-03-15T22:17:47 |
2025-04-01T04:34:19.234843
|
{
"authors": [
"Kyle-Mendes",
"walterbm"
],
"repo": "fuzz-productions/Mortar-JS",
"url": "https://github.com/fuzz-productions/Mortar-JS/pull/15",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
860427351
|
Potential security problem(s)
Hi, could you please create a new draft security advisory and
invite me to it?
Any other private communication channel would also be fine.
I think I've found a potential security problem.
My disclosures always follow Github's 90-day disclosure policy (I'm not an employee of Github, I just like their policy).
I have created a draft security advisory and invited you. Please check.
|
gharchive/issue
| 2021-04-17T14:50:18 |
2025-04-01T04:34:19.261227
|
{
"authors": [
"fxbin",
"intrigus-lgtm"
],
"repo": "fxbin/bubble-fireworks",
"url": "https://github.com/fxbin/bubble-fireworks/issues/96",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1229648562
|
Carpet-Fixes Incompatibility with Easy-Magic
Fabric 1.18.2
Log of Issue: https://pastebin.com/z2iEcU5W
I don't have much else information for this one, it appears that Easy-Magic is the issue. I don't see any other mods that are causing this issue.
Fixed in these releases:
22w14a
1.18.x
|
gharchive/issue
| 2022-05-09T12:42:00 |
2025-04-01T04:34:19.291551
|
{
"authors": [
"RoseTheFoxGit",
"fxmorin"
],
"repo": "fxmorin/carpet-fixes",
"url": "https://github.com/fxmorin/carpet-fixes/issues/62",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
484720174
|
Enhancement: Disable Xdebug as early as possible
This PR
[x] disables Xdebug as early as possible on Travis
💁♂ We currently don't need it, and this potentially speeds up builds. For reference, see https://docs.travis-ci.com/user/languages/php/#disabling-preinstalled-php-extensions.
@pimjansen
Unless I've missed something, it appears we are currently not collecting code coverage. We can easily keep Xdebug enabled for a single build (rather than all builds, unless we have different execution paths for different PHP versions) when we decide to collect and display it (perhaps using something like https://codecov.io).
What do you think?
Thank you, @fzaninotto and @pimjansen!
|
gharchive/pull-request
| 2019-08-23T21:32:08 |
2025-04-01T04:34:19.296461
|
{
"authors": [
"localheinz"
],
"repo": "fzaninotto/Faker",
"url": "https://github.com/fzaninotto/Faker/pull/1758",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
86741181
|
Latitude and Longitude added
Latitude and Longitude added
See #387: latitude should not be localized, use a localLatitude instead.
|
gharchive/pull-request
| 2015-06-09T21:28:36 |
2025-04-01T04:34:19.297564
|
{
"authors": [
"fzaninotto",
"glorand"
],
"repo": "fzaninotto/Faker",
"url": "https://github.com/fzaninotto/Faker/pull/601",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2026772166
|
Fix #324: exception thrown when using GeocentricProjector
Initialize the origin to a non-default one to avoid the exception thrown by IOHandler::handleDefaultProjector.
See #324
Thank you for the fix, looks good :+1:
|
gharchive/pull-request
| 2023-12-05T17:28:26 |
2025-04-01T04:34:19.298618
|
{
"authors": [
"immel-f",
"mantkiew"
],
"repo": "fzi-forschungszentrum-informatik/Lanelet2",
"url": "https://github.com/fzi-forschungszentrum-informatik/Lanelet2/pull/325",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1396629005
|
EF: efficient py interface
The current solution goes point by point. Either passing Q or a list of points (compatible with open3d?)
list of 3d points from a given pose
|
gharchive/issue
| 2022-10-04T17:41:14 |
2025-04-01T04:34:19.312057
|
{
"authors": [
"g-ferrer"
],
"repo": "g-ferrer/mrob",
"url": "https://github.com/g-ferrer/mrob/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
394092909
|
Queries
Hi Gaurav,
I am an automation engineer and I need voice automation in one of my projects. I have a query regarding how this VoiceAutomation Server and Client work. I cloned this repository and tried to run it. I ran only the function textToSpeechTest(), and in the command prompt, i.e. in the server window, I get this URL:
https://api.voicerss.org/?key=******&c=WAV&hl=en-us&src=Hello
Questions
Every time I run the test, I need to hit this URL to check the output. Is this the correct way, or is there another way to get the output results?
I am not getting the logs below, nor do I see how to get the filename, voice name, etc.
LOG.info(voice.getFilename());
LOG.info(voice.getText());
LOG.info(voice.getVoiceName());
LOG.info(voice.getVoiceLanguage().toString());
Thanks,
Mangesh
Hey,
Are you able to execute this sample Voice Demo ? https://github.com/g-tiwari/GoogleVoiceTest
Detailed description of usage are available at https://github.com/g-tiwari/VoiceAutomationClient/blob/master/README.md
Please find answer to your queries below -
Yes, if you are passing text as test data, it uses this service every time to convert it into speech.
Ideally, it should print this info, but make sure to use it in such a way that the object is already available.
Also, a local voice file playback feature is coming up in the next version. As of now this whole setup works only on Mac; I am going to push my changes very soon.
Please re-open if you did not get your question answered
Yes, as of now I have only executed the VoiceTest.java file (the textToSpeechTest function). I executed this on a Windows machine. For #2 I will try again.
I also have a question: how can we validate the voice responses? For example, if I give input to my voice assistant and it replies, how can I validate that it gives the correct response? I mean, how can we write these scenarios or scripts?
Hey Mangesh,
I have pushed a new version of VoiceAutomationServer, please download that and try to execute it. I hope it should work fine.
Hi Gaurav,
It's working now and I am getting the logs too. I am still unclear on my question below:
How can we validate the voice responses? For example, if I give input to my voice assistant and it replies, how can I validate that it gives the correct response? How can we write these scenarios or scripts? It's basically an Alexa-like application integrated with the web application.
Yes, the response is based on voice input, and we also have a chatbot for text input and response. Do you know any other tool for chatbot automation? I've heard about Botium.
|
gharchive/issue
| 2018-12-26T06:25:15 |
2025-04-01T04:34:19.325206
|
{
"authors": [
"g-tiwari",
"mangeshkhaire14"
],
"repo": "g-tiwari/VoiceAutomationClient",
"url": "https://github.com/g-tiwari/VoiceAutomationClient/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
80497430
|
NaN instead of numbers in Firefox.
Link to JSfiddle - https://jsfiddle.net/8dg5w5we/ . Please, open in Firefox.
Your date string is invalid, resulting in an invalid Date object being passed to the date-picker component: http://www.ecma-international.org/ecma-262/5.1/#sec-15.9.1.15
Thank you. You really saved my day!
No problem, glad I could help!
|
gharchive/issue
| 2015-05-25T11:04:57 |
2025-04-01T04:34:19.327357
|
{
"authors": [
"BaNdErOzZz",
"eralha"
],
"repo": "g00fy-/angular-datepicker",
"url": "https://github.com/g00fy-/angular-datepicker/issues/110",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
165117215
|
Fixed Quest - Rainbow Badge (Celadon City)
Celadon City to Lavender Town
Get Rainbow Badge + buy Lemonade for a future quest
Tested and worked pretty fast and well :+1:
Done, all "Fresh Water" strings are now "Lemonade".
After buying Lemonade, I get stuck in a loop that goes from Celadon Mart 5 to Celadon Mart 6 to Celadon Mart 5 ...
|
gharchive/pull-request
| 2016-07-12T16:08:17 |
2025-04-01T04:34:19.328814
|
{
"authors": [
"Emuuung",
"Rympex",
"atemerino"
],
"repo": "g0ldPRO/Questing.lua",
"url": "https://github.com/g0ldPRO/Questing.lua/pull/26",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
}
|
218471533
|
Update README to be ready for 1.0 release
Make sure docker build example is valid.
Update release list.
https://github.com/g8os/core0/blob/0.12.0/README.md
|
gharchive/issue
| 2017-03-31T11:09:58 |
2025-04-01T04:34:19.346578
|
{
"authors": [
"muhamadazmy",
"zaibon"
],
"repo": "g8os/core0",
"url": "https://github.com/g8os/core0/issues/111",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
141132122
|
Biodata
Scaffolding for tests for the biodata endpoints, specifically biosamples for now. Need tests for get by ID, searching readgroupsets, and callsets.
Everything works, but the biodata in test_data is now contained in a directory structure, something we agreed upon at the beginning of this compliance project that we weren't going to do.
Let's discuss a consistent solution to this before going forward.
+1 modulo flattening out the biosample and individual json files in test_data.
@macieksmuga The directory is flat again and the test files are named for their types. https://github.com/ga4gh/compliance/pull/179/commits/fb831810d2e6aba0f007f0c165dd7b74feb7554a#diff-140b3c2404155d50dab3b256bd9396d4R1
+1
Closing in favor of https://github.com/ga4gh/compliance/pull/195
|
gharchive/pull-request
| 2016-03-16T00:05:10 |
2025-04-01T04:34:19.364976
|
{
"authors": [
"david4096",
"macieksmuga"
],
"repo": "ga4gh/compliance",
"url": "https://github.com/ga4gh/compliance/pull/179",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
145209124
|
Ownership is quirky for manual registration of alternate tool names for existing tools
[HTTP 400] Bad Request: User does not own the container quay.io/collaboratory/seqware-bwa-workflow. You can only add Quay repositories that you own or are part of the organization
When attempting to re-add seqware-bwa-workflow under an alternate name, although I definitely own collaboratory.
Note: This occurs on production as well
I couldn't reproduce this error in staging. It seems to have been fixed.
|
gharchive/issue
| 2016-04-01T15:22:59 |
2025-04-01T04:34:19.367472
|
{
"authors": [
"agduncan94",
"denis-yuen"
],
"repo": "ga4gh/dockstore",
"url": "https://github.com/ga4gh/dockstore/issues/182",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
198549045
|
tool convert dies on missing output
Thanks for submitting your feedback to Dockstore.
Please fill in the appropriate section depending on whether you want to submit a feature request or a bug report.
Feature Request
When there is no outputs: field in the CWL file, dockstore tool convert cwl2json dies.
Desired behaviour
Throw an error that says the outputs field is missing in the CWL file
Actual behaviour
java.lang.NullPointerException
Environment (Browser or OS and Dockstore version)
Dockstore 1.1
Similar to #570
Create a JUnit test + a better error message for this one as well.
Looks good now on staging.dockstore.org
$ dockstore tool convert cwl2json --cwl Dockstore.cwl
problems running command: cwltool --non-strict --validate Dockstore.cwl
stderr for command:
/usr/local/bin/cwltool 1.0.20170217172322
Resolved 'Dockstore.cwl' to 'file:///home/dyuen/dockstore_tools/dockstore-tool-bamstats/Dockstore.cwl'
Tool definition failed validation:
Dockstore.cwl:3:1: Object `Dockstore.cwl` is not valid because
tried `CommandLineTool` but
missing required field `outputs`
Thanks @k-cao
|
gharchive/issue
| 2017-01-03T19:26:43 |
2025-04-01T04:34:19.371084
|
{
"authors": [
"Jeltje",
"denis-yuen"
],
"repo": "ga4gh/dockstore",
"url": "https://github.com/ga4gh/dockstore/issues/564",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
153093828
|
Sql merge update
Brings sql repo branch up to date with master. Some minor merge mess management here, but there shouldn't be any real changes.
OK, I think we can merge this fairly quickly as it's just a merge update. @david4096, @dcolligan or @macieksmuga, can you take a quick look at merge if you're happy/online?
testing now...
Passes compliance with the exception of the two reads tests as before. Was this merge expected to incorporate Danny's enhancement for searching across multiple ReadGroups within a single ReadGroupSet?
Yes, I think that went in a while ago, didn't it? This is just merging new commits from master into the sql repo branch, I think something fairly major would need to go wrong if we lost commits like that along the way.
Hmm. I still keep getting the server error:
[...]
File "/Users/maciek/Documents/dev/maciek-ga4gh-server/ga4gh/backend.py", line 160, in _search
return self._parentContainer.getReadAlignments(
AttributeError: 'HtslibReadGroupSet' object has no attribute 'getReadAlignments'
when I issue the following multi-readgroup request:
curl -XPOST -H"Content-Type:application/json" -d'{"readGroupIds": ["WyJicmNhMSIsInJncyIsIkhHMDAwOTYiLCJTUlIwNjI2MzQiXQ", "WyJicmNhMSIsInJncyIsIkhHMDAwOTYiLCJTUlIwNjI2MzUiXQ", "WyJicmNhMSIsInJncyIsIkhHMDAwOTYiLCJTUlIwNjI2NDEiXQ"], "referenceId": "WyJoZzM3IiwicmVmX2JyY2ExIl0", "start": 0, "end": 150, "pageSize": null, "pageToken": null}' localhost:8000/reads/search
OK, looks like we might have another problem not covered by the unit tests that your tests are provoking. This probably got lost in the merge somewhere, and I assumed it was working since no test cases failed. Good catch @macieksmuga, thanks.
The issues doesn't actually affect this particular merge though, so I think we should go ahead with this PR ASAP. I'll put a block on the sql repo PR until we've solved the problem.
Fine with me. +1 for merging in this PR. I'll create an issue for it so we don't lose this case.
|
gharchive/pull-request
| 2016-05-04T19:22:27 |
2025-04-01T04:34:19.375029
|
{
"authors": [
"jeromekelleher",
"macieksmuga"
],
"repo": "ga4gh/server",
"url": "https://github.com/ga4gh/server/pull/1208",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
112927896
|
G2P '/genotypephenotype/search' endpoint
A continuation of the conversation from https://github.com/ga4gh/server/pull/607
extended CLI genotypephenotype-search method to take json strings as arguments
sparql perfomance improvements
split g2p tests out of test_views.py to test_g2p.py; also fixed some style issues
optimize and correct queries
modified filesystem backend to expect g2pDatasets in ga4gh-example-data
simplify by removing GeneotypePhenotypeIterator
This PR is paired with sibling pull requests:
compliance: https://github.com/ga4gh/compliance/pull/115
schema: https://github.com/ga4gh/schemas/pull/432
Note: the commit history is not as clean as I'd like. Consecutive commits have been squashed, however the merges from develop intersperse our commit history.
Edit: If necessary, I can drop and recreate the repo
Nit: Some files have gained executable permissions --- please remove these.
It's looking good @bwalsh, thanks for the updates. I still think it's worthwhile defining a simulator for G2P data. It can be a stupid as a box of rocks, and doesn't have to generate data that's in any way realistic. However, if we have simulated data this will make testing so, so much easier as we'll be able to fit in to the existing frameworks and not have to worry about the existence of the test data.
|
gharchive/pull-request
| 2015-10-23T02:07:32 |
2025-04-01T04:34:19.379679
|
{
"authors": [
"bwalsh",
"jeromekelleher"
],
"repo": "ga4gh/server",
"url": "https://github.com/ga4gh/server/pull/770",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1092052791
|
Dynamically set Auth Token
Hi,
Since the Auth token is defined per device on the Blynk side it would make sense that we could set this property dynamically.
But this would require a connection per auth token if I understand correctly.
Something isn't right with this design.
Thanks.
I'm not sure that this is a feasible suggestion.
First of all, I think it's a bad idea to run Blynk code on the physical devices. It's much better to run MQTT code on the devices and leave Node-Red to deal with the communication with the Blynk server, in which case dynamic auth tokens wouldn't be used.
If you do run Blynk code on the devices then you still have the option to use static provisioning of auth tokens.
If you do want to use the Edgent sketches and do dynamic provisioning then being able to dynamically set the auth token in the Node-Red connection has no value that I can see. It would only be useful if the device could send it's auth token to Node-Red and this be used to update the auth token stored against that connection, but this is a chicken and egg situation - you cant get a connection to the device without knowing it's auth token, and without a connection the device cant send it's auth token.
This means that it will always be necessary to open the Blynk web console, copy the auth token from the devoice once it's been provisioned, and paste this into the current Node-Red connection dialogue box.
Having the ability to inject this auth token seems to have no benefits.
If auth tokens were still sent via e-mail then it would be a different matter, as it would be possible to use Node-Red to parse the data from the email and inject it into the connection, but that isn't how it works now.
Pete.
Hi,
Thanks for your reply, maybe I am missing something ?
Blynk code is not running on the device the device communicates with a "Hub" which then forwards the data to Node-Red via MQTT.
My understanding of the model in the new version of Blynk is that you have a template defining the datastreams and then one or more devices using this template. In order to target (write to) a specific device (as defined in the Blynk console) you need the device-specific auth token. So while you can build common logic in Node-RED, you still need a connection per device. I have worked around this by using the REST API directly with HTTP nodes.
Serge
From the point of view of this contrib, Blynk Legacy and Blynk IoT are exactly the same - with the exception that a Template ID is needed for IoT.
In Legacy you went into the app and created a Project, then added one or more devices. Each device has it's own auth token, and you pasted that auth token into the Node-Red connection to "fool" Blynk into thinking that Node-Red was a physical device such as a NodeMCU, as opposed to a virtual hub handling the communications.
With Blynk IoT, you create a Template (similar to the Project in Legacy), and define your datastreams. You then create a Device from this Template, copy the auth token assigned to that Device, along with the Template_ID, and paste these into the Node-Red connection.
When you deploy the Node-Red flow the new Device will appear online.
Although I have maybe 25 physical devices, I have only one Blynk Device set-up. I can use all 255 datastreams to attach Blynk widgets, and I use groups of widgets for communications that relate to specific physical devices.
This has the advantage that widgets relating to multiple physical devices can be displayed on the same app dashboard, which is something you could do anyway with Legacy, but not with IoT.
The only thing to watch out for is that if you have the Basic (Free) subscription you are limited to 30 widgets per device. The Plus plan limit is 80 widgets per device and Pro plan is 255 per device.
Pete.
Sure, but I need to group devices in organisations, so I don't have the luxury of having one device and scoping the data via other means. I understand where you are coming from, and I have found an alternative way of achieving what I needed. Kind thanks for taking the time to respond, and feel free to close this issue.
Serge
Technically it is not possible, because each device opens a connection to the server to send data.
It's not like sending data via an http API, here is an open bidirectional channel for any device.
Of course you can create as many "configuration nodes" as you want. So you can simulate more than one device with a single installation of node-red
Best Regards
Gabriele
|
gharchive/issue
| 2022-01-02T16:22:47 |
2025-04-01T04:34:19.403436
|
{
"authors": [
"Peterkn2001",
"gablau",
"ssozonoff"
],
"repo": "gablau/node-red-contrib-blynk-iot",
"url": "https://github.com/gablau/node-red-contrib-blynk-iot/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
724968627
|
Update LINKS.md
Sites to practice your favorite language, algorithms, and data structures.
|
gharchive/pull-request
| 2020-10-19T20:55:54 |
2025-04-01T04:34:19.501561
|
{
"authors": [
"michelbernardods"
],
"repo": "gabrielcmarinho/links-uteis",
"url": "https://github.com/gabrielcmarinho/links-uteis/pull/95",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1094033827
|
Ability to check if docker is installed on system via docker.is_installed()
Can we add the ability to check if Docker is installed on the system, to trigger custom workflows around this check, just like we have docker.compose.is_installed()?
Also, is there a way to stop python-on-whales from downloading Docker binaries in case it does not find them on the system?
That should be possible! I'll work on it :)
Out of curiosity, what is your workflow/use case? It may help me shape the API
Thanks @gabrieldemarmiesse .
Currently our scenario is a Python package: we want to install our Docker Compose application, and before we run docker-compose up (from the Python client), we have to make a series of checks, like whether Docker is installed, whether docker-compose is installed, minimum memory and CPU constraint checks (mostly related to the client environment), etc. This library has been really helpful for us to put these checks in place. Awesome work on this 👍🏼
I made a PR, you should be able to use
from python_on_whales import get_docker_client_binary_path
if get_docker_client_binary_path() is None:
print("WARNING docker client binary not installled!!!!")
raise RuntimeError
....
# continue the program normally here
Would that work for you? I already made the pull request.
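For completeness, a pre-flight sketch combining this check with the compose check mentioned earlier in the thread might look like this (docker.compose.up and its detach argument are from the library's documented API):
from python_on_whales import docker, get_docker_client_binary_path

# Hypothetical pre-flight checks before starting the compose application
if get_docker_client_binary_path() is None:
    raise RuntimeError("Docker client binary not found")
if not docker.compose.is_installed():
    raise RuntimeError("docker compose is not available")
docker.compose.up(detach=True)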
|
gharchive/issue
| 2022-01-05T06:45:25 |
2025-04-01T04:34:19.505378
|
{
"authors": [
"akash-jain-10",
"gabrieldemarmiesse"
],
"repo": "gabrieldemarmiesse/python-on-whales",
"url": "https://github.com/gabrieldemarmiesse/python-on-whales/issues/293",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
31960043
|
Make gameState snapshots (like game saves)
Hi there,
I know you are very conservative with the game and its core features; that's why I'm opening this issue to find out whether this feature request might have a chance of being accepted and whether it's worth my time implementing it. Feedback welcome :)
Introduction:
While playing a lot (I really mean, a lot) this game, this is now quite common for me to reach 4096 tile and try to reach the next tile level (that I never reached).
I find now quite "boring" to raise tiles to that level where I now fail. And this is very time consuming to reach that level where I find suspense, tension and interest.
What I'm looking for:
I'm looking for a "save" button like in every major video game where I could take a snapshot of my current grid, and be able in the future to start with that snapshot. That way, I could save my game with the 4096 tile and continue from there in my upcoming games.
Questions:
Obviously, there are a lot of questions around this feature:
- score: would it be computed / saved? I think it's better not to save it when starting from a saved game. I'd be glad to play some "not scored" games that start straight from a difficult point.
- saves: some UI/UX points would need particular attention.
Thanks
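For what it's worth, a minimal sketch of such a snapshot on top of the game's existing serialization might look like this (GameManager.serialize() and the "gameState" storage key are assumptions based on the public 2048 source):
// Save the current board and score as a named snapshot
function saveSnapshot(gameManager) {
  localStorage.setItem("2048-snapshot", JSON.stringify(gameManager.serialize()));
}

// Restore: copy the snapshot into the slot GameManager reads on startup
function loadSnapshot() {
  var snapshot = localStorage.getItem("2048-snapshot");
  if (!snapshot) return;
  localStorage.setItem("gameState", snapshot);
  location.reload(); // setup() resumes from "gameState"
}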
Hello, I've been working on a fork of 2048 implementing some features that have been requested but will probably not make it into the official game and thought you'd be interested to know that I included your idea. I'd appreciate any feedback that you can give about my implementation/the fork in general.
A demonstration can be found here:
http://javascriptftw.github.io/2048/index.html
Thanks for your time!
You would have to make this on your own (will not be merged). Read 'contributing.md'.
But not a bad idea...
It is on my own. A lot of the stuff that I have/want to put into my fork would never make it into the base game. The goal was to create something with the features that were cool but probably wouldn't make it into the game. Nevertheless I may make a fork that can be merged in the future, who knows?
|
gharchive/issue
| 2014-04-22T10:07:57 |
2025-04-01T04:34:19.515718
|
{
"authors": [
"JavascriptFTW",
"guillaumepotier",
"seanfrasure"
],
"repo": "gabrielecirulli/2048",
"url": "https://github.com/gabrielecirulli/2048/issues/172",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2113977311
|
🛑 Trends is down
In b9ca11c, Trends (https://trends.gab.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Trends is back up in c3a40c4 after 13 hours, 17 minutes.
|
gharchive/issue
| 2024-02-02T03:37:18 |
2025-04-01T04:34:19.525026
|
{
"authors": [
"gadmln"
],
"repo": "gadmln/gabstatus",
"url": "https://github.com/gadmln/gabstatus/issues/814",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
121483056
|
Call Dispatch Multiple Times
Is this ok?
export function getSpecAcct(aId)
{
return dispatch => {
ResponsiveCtrl.getSpecAcct(aId,function(r, e) {
dispatch(showAccount(r));
dispatch(updatePath('/comppage'));
});
}
}
Yes!
|
gharchive/issue
| 2015-12-10T13:23:57 |
2025-04-01T04:34:19.550386
|
{
"authors": [
"banderson5144",
"gaearon"
],
"repo": "gaearon/redux-thunk",
"url": "https://github.com/gaearon/redux-thunk/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2680774443
|
Refactor setting Facility ID on Compliance Work Entry
It should work like the FCE does.
✨✨ Here's an AI-assisted sketch of how you might approach this issue saved by @dougwaldron using Copilot Workspace v0.26
|
gharchive/issue
| 2024-11-21T20:17:07 |
2025-04-01T04:34:19.552940
|
{
"authors": [
"dougwaldron"
],
"repo": "gaepdit/air-web",
"url": "https://github.com/gaepdit/air-web/issues/189",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
1409731161
|
fastrlp license?
Sorry, this is more of a question than anything:
Looks like Cargo.toml uses fastrlp@0.1.3 which happens to be GPL-3.0-only WITH Classpath-exception-2.0. Wouldn't that cause an issue in this Apache-2.0 / MIT project?
Hmm. FastRLP used to be Apache until 4 months ago https://crates.io/crates/fastrlp/versions until 0.1.2. I don't want to change our license. Could we fork fastrlp from 0.1.2 and call it a day?
IANAL, so not sure about how it works with the already released current v0.17.0. But for future releases maybe worth just linking to 0.1.2 (or an alternative package)? Btw, I only found out about this from using cargo deny. Perhaps worth integrating the same here to avoid similar issues in the future?
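For reference, a minimal cargo-deny license policy that would catch this might look like the following (the allow list is illustrative):
# deny.toml (minimal sketch)
[licenses]
# only allow licenses compatible with this project's Apache-2.0 / MIT licensing
allow = ["MIT", "Apache-2.0"]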
|
gharchive/issue
| 2022-10-14T18:53:33 |
2025-04-01T04:34:19.581074
|
{
"authors": [
"gakonst",
"gd87429"
],
"repo": "gakonst/ethers-rs",
"url": "https://github.com/gakonst/ethers-rs/issues/1785",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2370853492
|
Mobile Debug View in Code Editor Error
Describe the bug
I was playing around and ran into issues on a second project: the mobile debug view in the left-side code editor was not working, but running it in a separate tab was fine.
Project Link: https://galacean.antgroup.com/editor/project/46500008
Screenshots
Desktop:
OS: Windows
Browser: Chrome
Version: 125.0.6422.176 (Official Build) (64-bit)
Galacean Version: 1.2.0-beta.4
Error from Logs:
preview.js:1352
Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'name')
at euL._getAssetVirtualPath (preview.js:1352:6404)
at euL._buildIR (preview.js:1352:4792)
at async preview.js:1352:7235
at async euL.buildPackage (preview.js:1352:2626)
at async eu0.prebuild (preview.js:1548:6389)
at async m (preview.js:1552:1685)
at async preview.js:1552:1972
I suspected it was because of this, but I'm not sure why new projects aren't created with these by default; the first project came with all these shaders, meshes and materials.
Note: image was taken from other working projects
Hi @Oskang09, thanks for your issue. It seems like a bug in the Galacean editor, and we have fixed it recently. Could you try it again?
Hi @MrKou47, yeah, it's working fine right now. Since the issue was fixed, I'll close it.
|
gharchive/issue
| 2024-06-24T18:30:54 |
2025-04-01T04:34:19.586239
|
{
"authors": [
"MrKou47",
"Oskang09"
],
"repo": "galacean/engine",
"url": "https://github.com/galacean/engine/issues/2134",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1088449361
|
add hypothesis tests for settle_price
Add hypothesis tests for omnipool_amm.settle_price demonstrating that it finds a price at which both trades can be executed, respecting the swap invariant
done: 132d9dac21948de701d85a53d42b5d6e7150b005
|
gharchive/issue
| 2021-12-24T16:40:13 |
2025-04-01T04:34:19.588754
|
{
"authors": [
"poliwop"
],
"repo": "galacticcouncil/HydraDX-simulations",
"url": "https://github.com/galacticcouncil/HydraDX-simulations/issues/23",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1317199910
|
Add Sentry error handling to the indexer
Changes
Adds Sentry to the indexer
Adds Sentry to the indexer servers
Adds Sentry logging hook that can be enabled via logger.AddHook(sentryutil.SentryLoggerHook) (only enabled for the indexer atm)
Log levels > INFO get automatically reported as an error to Sentry
Log levels <= INFO get automatically reported as a breadcrumb to an error that gets reported (see the sketch after this list)
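A minimal sketch of a logrus hook with that level mapping (the real sentryutil implementation in this repo may differ):
package sentryutil

import (
	"github.com/getsentry/sentry-go"
	"github.com/sirupsen/logrus"
)

// SentryLoggerHook forwards log entries to Sentry: entries more severe
// than INFO become events, everything else becomes a breadcrumb.
type SentryLoggerHook struct{}

func (SentryLoggerHook) Levels() []logrus.Level { return logrus.AllLevels }

func (SentryLoggerHook) Fire(entry *logrus.Entry) error {
	// In logrus, a numerically smaller level is more severe than InfoLevel.
	if entry.Level < logrus.InfoLevel {
		sentry.CaptureMessage(entry.Message)
	} else {
		sentry.AddBreadcrumb(&sentry.Breadcrumb{Message: entry.Message})
	}
	return nil
}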
Follow ups
Doesn't handle tracing of the indexer yet, i.e. traces for writes/reads of objects to GCP or timing of RPC calls or DB writes/reads, but we do get HTTP traces for server requests because of the Tracing middleware
Ship ship ship iiiiiiiiiiit 🚢 🚢 🚢
|
gharchive/pull-request
| 2022-07-25T18:21:01 |
2025-04-01T04:34:19.651622
|
{
"authors": [
"jarrel-b",
"radazen"
],
"repo": "gallery-so/go-gallery",
"url": "https://github.com/gallery-so/go-gallery/pull/453",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
897961997
|
UPM Authentication using .upmconfig.toml
Example and explanation about how to authenticate with a private UPM registry by creating an .upmconfig.toml file in the home folder.
This feature was added here: https://github.com/game-ci/unity-builder/pull/211 - but as far as I can tell it was not documented.
Changes
...
Checklist
[x] Read the contribution guide and accept the code of conduct
[ ] Readme (updated or not needed)
[ ] Tests (added, updated or not needed)
We should probably start an advanced section. I try to do it if time permits. Will merge this for now.
|
gharchive/pull-request
| 2021-05-21T11:38:03 |
2025-04-01T04:34:19.679109
|
{
"authors": [
"ChristianTellefsen",
"webbertakken"
],
"repo": "game-ci/documentation",
"url": "https://github.com/game-ci/documentation/pull/174",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
90434687
|
adds test for payload client/daemon version validation
@jsingle cc @thieman
@jsingle @thieman this is updated
:rocket: assuming you've tested it
|
gharchive/pull-request
| 2015-06-23T16:37:44 |
2025-04-01T04:34:19.682024
|
{
"authors": [
"jsingle",
"zacharym"
],
"repo": "gamechanger/dusty",
"url": "https://github.com/gamechanger/dusty/pull/281",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
92661219
|
Add BINARY constant
@jsingle That was easy
:boom:
|
gharchive/pull-request
| 2015-07-02T15:09:11 |
2025-04-01T04:34:19.682974
|
{
"authors": [
"jsingle",
"thieman"
],
"repo": "gamechanger/dusty",
"url": "https://github.com/gamechanger/dusty/pull/341",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
52865273
|
Prison 2 compatibility
Can you add support for prison 2
Here is a link to the plugin: http://dev.bukkit.org/bukkit-plugins/mcprison/
http://dev.bukkit.org/bukkit-plugins/mcprison/?comment=592
http://dev.bukkit.org/bukkit-plugins/mcprison/?comment=593
http://dev.bukkit.org/bukkit-plugins/mcprison/?comment=594
:disappointed:
I've decided that instead of letting go of the plugin, I will still update it for each new Minecraft version, and fix any critical bugs.
|
gharchive/issue
| 2014-12-25T14:54:48 |
2025-04-01T04:34:19.725211
|
{
"authors": [
"RayZz-",
"SirFaizdat",
"games647"
],
"repo": "games647/ScoreboardStats",
"url": "https://github.com/games647/ScoreboardStats/issues/41",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
41036798
|
ReferenceError: e is not defined
I'm afraid there is a bug in distil.js. I can't tell why using moonshine to distil my Lua file fails like this:
D:\lua>moonshine distil hello.lua
C:\Users\w00222107\AppData\Roaming\npm\node_modules\moonshine\bin\commands\distil.js:154
if (errPart[1] != 'luac') throw e;
^
ReferenceError: e is not defined
at C:\Users\w00222107\AppData\Roaming\npm\node_modules\moonshine\bin\commands\distil.js:154:36
at ChildProcess.exithandler (child_process.js:641:7)
at ChildProcess.EventEmitter.emit (events.js:98:17)
at maybeClose (child_process.js:743:16)
at Socket. (child_process.js:956:11)
at Socket.EventEmitter.emit (events.js:95:17)
at Pipe.close (net.js:466:12)
#13 :smile:
Fixed in https://github.com/gamesys/moonshine/commit/0ef4f829036fb444f31fdaea7a2efba840004490.
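A sketch of the likely shape of the fix (see the commit above for the actual change); the point is that the exec() callback's error parameter is what must be thrown:
// `command` is a placeholder for the luac invocation built by distil.js
exec(command, function (err, stdout, stderr) {
	if (err) {
		var errPart = ('' + err).split(':');
		if (errPart[1] != 'luac') throw err; // was `throw e`, but `e` was never defined
	}
});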
|
gharchive/issue
| 2014-08-25T06:50:23 |
2025-04-01T04:34:19.728441
|
{
"authors": [
"alanwangfan",
"goto-bus-stop",
"paulcuth"
],
"repo": "gamesys/moonshine",
"url": "https://github.com/gamesys/moonshine/issues/23",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
141923374
|
Addition of query strings
@cdeil let me know what you think.
http://gammapy.github.io/web-experiments/?cat=2FHL&source=5
Works great!
Exactly what I had in mind.
Thanks!
I left two minor inline comments.
When those are addressed, merge this?
@cdeil
Sure let me rebase this PR
Thank you!
|
gharchive/pull-request
| 2016-03-18T16:56:58 |
2025-04-01T04:34:19.753890
|
{
"authors": [
"cdeil",
"kepta"
],
"repo": "gammapy/web-experiments",
"url": "https://github.com/gammapy/web-experiments/pull/10",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1449099513
|
🛑 Tokocrypto is down
In 4b8df6c, Tokocrypto (https://tokocrypto.com) was down:
HTTP code: 451
Response time: 58 ms
Resolved: Tokocrypto is back up in 90907b2.
|
gharchive/issue
| 2022-11-15T03:27:57 |
2025-04-01T04:34:19.756419
|
{
"authors": [
"gammarinaldi"
],
"repo": "gammarinaldi/watcher",
"url": "https://github.com/gammarinaldi/watcher/issues/79",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
451205630
|
How to enable autocomplete with language-babel
I'm new to Atom. I installed language-babel and would like to use the autocomplete feature. It detects the correct file type, but autocomplete doesn't work. Which settings do I have to apply so that autocomplete for JSX tags works?
I have the same issue.
Use Ctrl+E on Windows.
Ctrl+E works on Mac too, thanks! It's quite an awkward shape to put your hand into though, is there a way to change the keymapping for this? I looked in the Atom keymapping settings but only found:
'atom-text-editor':
'ctrl-e': 'editor:move-to-end-of-line'
It would be great to get autocomplete working on 'Enter' similar to standard HTML files.
My impression is that this feature doesn't work for most people. In my opinion this bug makes this package almost completely useless.
I have the same problem.
Ctrl-E works fine for now; it'd be much better if Emmet and autocomplete worked in this package.
|
gharchive/issue
| 2019-06-02T15:47:43 |
2025-04-01T04:34:19.770758
|
{
"authors": [
"Bowfish",
"johann1301h",
"jonohewitt",
"lmtX10ded",
"nehal-backspace",
"wzamites"
],
"repo": "gandm/language-babel",
"url": "https://github.com/gandm/language-babel/issues/529",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
144368117
|
Add "es" to allowed fileTypes
Hi! It would be nice to recognize .es files with language-babel. Babel already supports this extension, and GitHub just merged a PR to highlight it as JavaScript.
@deepsweet Thanks for the PR. This is now in 2.17.0.
|
gharchive/pull-request
| 2016-03-29T19:50:20 |
2025-04-01T04:34:19.772416
|
{
"authors": [
"deepsweet",
"gandm"
],
"repo": "gandm/language-babel",
"url": "https://github.com/gandm/language-babel/pull/159",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
35051198
|
Contain within parent
How can I contain the sticky element within its parent so it doesn't cover my footer when I scroll down?
+1 for this. It would be awesome if there were a way to contain the element within a wrapper, or perhaps to specify which wrapper element to use. Maybe there's an additional option like
parent: "#myWrapper",
containInParent: true
Thanks, this fix works perfectly
|
gharchive/issue
| 2014-06-05T12:13:16 |
2025-04-01T04:34:19.827885
|
{
"authors": [
"arnoldbird",
"microcipcip",
"noxbriones"
],
"repo": "garand/sticky",
"url": "https://github.com/garand/sticky/issues/106",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1270315183
|
reproduce tool: AttributeError if crash causes hang
This may happen in some weirdo apps that take forever to run:
[*] BugBane reproduce tool
Reproducing with env vars: {'UBSAN_OPTIONS': 'print_stacktrace=1:allocator_may_return_null=1', 'ASAN_OPTIONS': 'allocator_may_return_null=1', 'LANG': 'C', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'}
Checking spec 'AFL++:./out ./basic/app:app01 ./basic/app:app02 ./basic/app:app03 ./basic/app:app04 ./basic/app:app05 ./basic/app:app06 ./basic/app:app07 ./basic/app:app08 ./basic/app:app09 ./basic/app:app10 ./basic/app:app11 ./basic/app:app12 ./basic/app:app13 ./basic/app:app14 ./basic/app:app15 ./asan/app:app16'...
Checking path ./out (AFL++)
Will search for crashes and hangs in directory 'app01' for app './basic/app'
Sample masks: './out/app01/crashes/id*' (crashes) and './out/app01/hangs/id*' (hangs)
Will search for crashes and hangs in directory 'app02' for app './basic/app'
Sample masks: './out/app02/crashes/id*' (crashes) and './out/app02/hangs/id*' (hangs)
Traceback (most recent call last):
File "/usr/bin/bb-reproduce", line 33, in <module>
sys.exit(load_entry_point('bugbane', 'console_scripts', 'bb-reproduce')())
File "/bugbane/bugbane/tools/reproduce/main.py", line 161, in main
results = harvester.collect_fuzzing_results()
File "/bugbane/bugbane/tools/reproduce/harvester.py", line 122, in collect_fuzzing_results
self.collect_one_spec_fuzzing_results(fuzzer_type, sync_dir, build_specs)
File "/bugbane/bugbane/tools/reproduce/harvester.py", line 170, in collect_one_spec_fuzzing_results
binary_path, crashes_path_mask, hangs_path_mask
File "/bugbane/bugbane/tools/reproduce/reproducers/default_reproducer.py", line 48, in run_binary_on_samples
cards.extend(self.run_binary_on_crashes(binary_path, crashes_mask))
File "/bugbane/bugbane/tools/reproduce/reproducers/default_reproducer.py", line 59, in run_binary_on_crashes
card = self.run(cmd, binary_path, sample, self.one_run_try)
File "/bugbane/bugbane/tools/reproduce/reproducers/default_reproducer.py", line 100, in run
verdict, output = run_method(cmd, self.run_env)
File "/bugbane/bugbane/tools/reproduce/reproducers/default_reproducer.py", line 120, in one_run_try
output = output.decode(errors="replace")
AttributeError: 'NoneType' object has no attribute 'decode'
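A minimal sketch of a defensive fix in Python (illustrative only: the helper name is mine, and it assumes a hung or killed target leaves the captured output as None, which is what the traceback suggests):
def safe_decode(output):
    # A hung or forcibly terminated target can leave stdout/stderr
    # uncaptured (None), so guard before calling .decode().
    if output is None:
        return ""
    return output.decode(errors="replace")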
resolved
|
gharchive/issue
| 2022-06-14T06:26:35 |
2025-04-01T04:34:19.829456
|
{
"authors": [
"fuzzah"
],
"repo": "gardatech/bugbane",
"url": "https://github.com/gardatech/bugbane/issues/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2698863883
|
Add minimal examples to examples page
Please don't merge. I need to do the following on Vercel:
[ ] rename chat-clerk to clerk
[ ] add demo for passkey
You can mark a PR as draft to prevent accidental merges while you're working on stuff
|
gharchive/pull-request
| 2024-11-27T15:04:44 |
2025-04-01T04:34:19.830947
|
{
"authors": [
"bensleveritt",
"trishalim"
],
"repo": "garden-co/jazz",
"url": "https://github.com/garden-co/jazz/pull/891",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2528137362
|
Sync prioritisation MVP
As empirically proven by @gu.dorsi's experimentation, syncing a big BinaryCoStream tends to stall concurrently syncing smaller CoValues. Having a system where sync messages are sent out according to different update priorities significantly helps alleviate this issue.
To keep it easy to provide different transport implementations for CoJSON Peers (websocket, storage peers, etc.) we want to limit their implementation to the "dumb" task of just sending messages.
The prioritisation should therefore happen in cojson itself, the suggested way to go about this is:
Converting PeerState in cojson/src/sync.ts into a class that has the priority queue as internal state, dispatching from the priority queue into its outgoing: OutgoingSyncQueue
the naming of the latter might need to be changed to distinguish the priority queue from the dumb queue of outgoing messages that we get from the peer definition
The priority (numerical, 0 = lowest prio, ∞ = highest prio) is assigned to a whole CoValue on loading or creation, it generates NewContentMessages with the according priority to be taken into account by the priority queue of each peer
Accounts & Groups have the highest default priority (3) because other CoValues depend on them for being readable
BinaryCoStreams have the lowest default priority (1) because they are most likely to have large chunks that might stall everything else
All other CoValues have priority (2)
For now we just use the default priorities before we expose them for customisation to the user
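A toy Python sketch of the dispatch idea (the real implementation would be TypeScript inside cojson's PeerState; all names below are illustrative, not the actual API):
import heapq
import itertools

class PrioritizedOutbox:
    # Toy model of a peer outbox: higher-priority CoValue updates
    # (Accounts/Groups = 3) are dispatched before lower-priority ones
    # (BinaryCoStreams = 1), so a large upload cannot stall small CoValues.
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-break within a priority

    def push(self, priority, message):
        # heapq is a min-heap, so negate the priority to pop highest first.
        heapq.heappush(self._heap, (-priority, next(self._counter), message))

    def drain(self, send):
        while self._heap:
            _, _, message = heapq.heappop(self._heap)
            send(message)  # the "dumb" transport layer just sends

outbox = PrioritizedOutbox()
outbox.push(1, "binary chunk")  # BinaryCoStream content
outbox.push(3, "group update")  # Account/Group content
outbox.push(2, "comap update")  # any other CoValue
outbox.drain(print)             # group update, comap update, binary chunk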
Let's go with Option 3 + the priority schema from Option 1
|
gharchive/issue
| 2024-09-16T10:58:17 |
2025-04-01T04:34:19.849080
|
{
"authors": [
"aeplay"
],
"repo": "gardencmp/jazz",
"url": "https://github.com/gardencmp/jazz/issues/396",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1175416459
|
Improve the hibernation doc
/area documentation
Fixes https://github.com/gardener/gardener/issues/5606
Release note:
NONE
/invite @n-boshnakov
|
gharchive/pull-request
| 2022-03-21T13:48:54 |
2025-04-01T04:34:19.874791
|
{
"authors": [
"ialidzhikov"
],
"repo": "gardener/gardener",
"url": "https://github.com/gardener/gardener/pull/5621",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1376090374
|
[GEP-20] Add Topology Spread Constraints for gardener-resource-manager
How to categorize this PR?
/area high-availability
/area control-plane
/kind enhancement
What this PR does / why we need it:
This PR adds Pod Topology Spread Constraints to the gardener-resource-manager of shoot control-planes and replaces the formerly used podAntiAffinity.
Which issue(s) this PR fixes:
Fixes parts of #6529
Special notes for your reviewer:
Since the Pod Topology Webhook is not applied to GRM itself, we need to work around the rolling update issue https://github.com/kubernetes/kubernetes/issues/98215 directly in the deployment procedure.
Release note:
The `gardener-resource-manager` deployment was changed from pod anti-affinity to [Topology Spread Constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/). Non-HA shoot clusters will still have the `gardener-resource-manager` pods being scheduled on different nodes on a best-effort basis. For HA clusters, the Topology Spread Constraints make sure that a distribution across nodes (single-zone) and zones (multi-zonal) is guaranteed, in order to tolerate failures in these domains.
FYI: PR depends on https://github.com/gardener/gardener/pull/6684 and thus remains in draft mode for the time being.
/test pull-gardener-integration
/milestone v1.56
|
gharchive/pull-request
| 2022-09-16T15:14:15 |
2025-04-01T04:34:19.880319
|
{
"authors": [
"shafeeqes",
"timuthy"
],
"repo": "gardener/gardener",
"url": "https://github.com/gardener/gardener/pull/6685",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1989095968
|
/bin/garn: line 2: warning: setlocale: LC_ALL: cannot change locale (C.UTF-8)
Hello,
I'm trying garn on macOS, I'm seeing this warning every time I run garn command
$HOME/.nix-profile/bin/garn: line 2: warning: setlocale: LC_ALL: cannot change locale (C.UTF-8): No such file or directory
This is the result of running locale
LANG=""
LC_COLLATE="C"
LC_CTYPE="UTF-8"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
Does this happen for garn build and garn check as well as garn enter? And do you see the logs the same number of times?
Does this happen for garn build and garn check as well as garn enter? And do you see the logs the same number of times?
Yes, it happens with all commands.
executing head -n 5 $HOME/.nix-profile/bin/garn gives this result:
#! /nix/store/zzpm4317hn2y29rm46krsasaww9wxb1k-bash-5.2-p15/bin/bash -e
export LC_ALL='C.UTF-8'
PATH=${PATH:+':'$PATH':'}
PATH=${PATH/':''/nix/store/553wiycn7i2xbfykssl97smg1pwlgfg0-cabal2json/bin'':'/':'}
PATH='/nix/store/553wiycn7i2xbfykssl97smg1pwlgfg0-cabal2json/bin'$PATH
It looks like this environment variable was changed from LANG to LC_ALL in ae69820356d91edf255d8f5e05b1ee2dee778886. LANG was originally introduced in 3e52fd00e7965f47c109e36d3611c4d37305e731.
I've removed the variables completely in 665f623c133b4923bf2d3e98d3ccea5abd0b4975, but haven't opened a PR yet since I'm not sure of the ramifications. In my limited testing though I am still able to nix profile install garn and everything seems to work well both inside and outside of the garn repo -- the environment variable isn't set on my system. LANG is set, but calling garn with LANG='' garn also seems to have no noticeable impact on it. Can anyone see any reason not to remove these?
Fixed in #422. Will be released in v0.0.17.
|
gharchive/issue
| 2023-11-11T19:44:37 |
2025-04-01T04:34:19.891386
|
{
"authors": [
"alexdavid",
"jkarni",
"khaledez",
"soenkehahn"
],
"repo": "garnix-io/garn",
"url": "https://github.com/garnix-io/garn/issues/403",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2648611761
|
Remove Inactive Sites
This PR is:
[ ] Adding a new domain
[ ] Updating existing domain
[ ] Changing domain name
[x] Removing existing domain from list
[ ] Website code changes (darktheme.club site)
[ ] Other not listed
I noticed several sites that were inactive. This PR removes those.
Thanks for going through the entries!
|
gharchive/pull-request
| 2024-11-11T08:46:45 |
2025-04-01T04:34:19.902633
|
{
"authors": [
"garritfra",
"nelsonfigueroa"
],
"repo": "garritfra/darktheme.club",
"url": "https://github.com/garritfra/darktheme.club/pull/190",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
230337075
|
Input is not in the GZIP format
Hello, I am using the latest Gatling to record traffic towards an API which currently has a JSON endpoint and a gRPC endpoint.
When I run it, I immediately get the error below.
Is the Gatling recorder able to cope with gRPC payloads?
23:09:40.559 [WARN ] i.n.c.DefaultChannelPipeline - An exception '{}' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:
io.netty.handler.codec.compression.DecompressionException: Input is not in the GZIP format
at io.netty.handler.codec.compression.JdkZlibDecoder.readGZIPHeader(JdkZlibDecoder.java:253)
at io.netty.handler.codec.compression.JdkZlibDecoder.decode(JdkZlibDecoder.java:153)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.embedded.EmbeddedChannel.writeInbound(EmbeddedChannel.java:227)
at io.netty.handler.codec.http.HttpContentDecoder.decode(HttpContentDecoder.java:231)
at io.netty.handler.codec.http.HttpContentDecoder.decodeContent(HttpContentDecoder.java:153)
at io.netty.handler.codec.http.HttpContentDecoder.decode(HttpContentDecoder.java:145)
at io.netty.handler.codec.http.HttpContentDecoder.decode(HttpContentDecoder.java:46)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1228)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1039)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)
AFAIK, gRPC runs on top of HTTP/2, so the answer would be no.
|
gharchive/issue
| 2017-05-22T09:32:43 |
2025-04-01T04:34:19.917051
|
{
"authors": [
"ademaria",
"slandelle"
],
"repo": "gatling/gatling",
"url": "https://github.com/gatling/gatling/issues/3300",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
257299022
|
Graphite: NPE on shutdown
I'm now experiencing this issue as well.
Setup:
Win 8.1
Gatling 2.2.5
Async websockets checks are used, together with http
Test target is on the same machine
As you can see in the screenshot below this happens after the simulation is completed.
The last thing the test does is
exec(
  exec(ws("End Async Check", socketName).reconciliate),
  exec(ws("Close WebSocket", socketName).close)
)
It does not reproduce every time though, but pretty often.
I managed to put a breakpoint on the line that throws this error, I don't know the issue though.
Below is a screenshot with variable values, let me know if there is anything I can try, I will upgrade my gatling version soon but as I see the changelog this is not fixed
This is a duplicate of #3052 but it is closed and it seems no one is looking at it :)
Have a look at the stacktrace: this crash happens in the Graphite DataWriter when Gatling shuts down. It has nothing to do with HTTP nor WebSockets. I suspect you use something like maxDuration that forcefully closes Gatling, and can cause some race conditions on shut down.
The HTML reports are still being generated, so this is more an inconvenience than a critical issue, and I don't think I or the other Gatling Corp people will invest time in this.
Then, we'd love a contribution!
I suspect one should just try to catch the exception in the TcpSender and ignore it. Gatling is being shutdown and there's nothing to do to try to recover.
Yes, I have the Graphite connection enabled (however, InfluxDB is not started).
And you are right the test seems to work correctly without any issues, except for this log message.
Thank you for your hint, I'll see if I can find time to try and create a pull request.
Closing due to lack of activity. Moreover, such issue wasn't reported since then, so it's possible it was fixed in Akka (who knows).
Not fixed yet; it's so confusing, as the test runs OK but there is an error.
|
gharchive/issue
| 2017-09-13T08:34:08 |
2025-04-01T04:34:19.922387
|
{
"authors": [
"KrauseStefan",
"owlmylove",
"slandelle"
],
"repo": "gatling/gatling",
"url": "https://github.com/gatling/gatling/issues/3352",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1224738855
|
Structure Builder
Your Idea
You can add schematics that alto clef can get the resources for and build.
Discord username (if you want updates)
Robby#8302
Developer Notes (leave this blank)
Difficulty: __
Estimated Time Required: __
Once development starts.
Already on the to-do list. In the meantime, check the main page for Meloweh's Extra Features Release, which has this feature already in the works; however, it's still on 1.17.X.
Yea like Dabaski said, this would require significantly changing altoclef. Meloweh I believe has his own baritone for that release. I would love to see structure support, but it is a little out there when it comes to the roadmap
|
gharchive/issue
| 2022-05-03T22:41:24 |
2025-04-01T04:34:20.009177
|
{
"authors": [
"Dabaski",
"JamesGreen31",
"asdasdasdasdoof"
],
"repo": "gaucho-matrero/altoclef",
"url": "https://github.com/gaucho-matrero/altoclef/issues/269",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
274110827
|
Sync kerbals between clients
Kerbals are not in sync with the other players.
Ideally when a player "takes" a kerbal it should not be available to be used by other players
High priority as when 2 vessels share the same kerbals the last one that starts flying will see the kerbal with noise...
Finished :)
|
gharchive/issue
| 2017-11-15T10:45:05 |
2025-04-01T04:34:20.024606
|
{
"authors": [
"gavazquez"
],
"repo": "gavazquez/LunaMultiPlayer",
"url": "https://github.com/gavazquez/LunaMultiPlayer/issues/40",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1030111178
|
Changing the code block style
I'd like to remove the three dots above the code block and change the scrollbar's default style. Can I modify these directly in my project?
Hi, the three decorative dots in the top-left corner of the code block imitate a macOS application window; the code-image generator Carbon uses this style too. Personally I think it looks quite nice, so this theme will keep using it. However, you can customize that style in your own project's .vuepress/styles/index.scss file.
As for the scrollbar's default style, besides customizing it in .vuepress/styles/index.scss, you can also submit your ideas via a PR.
OK, thanks.
|
gharchive/issue
| 2021-10-19T10:04:54 |
2025-04-01T04:34:20.026489
|
{
"authors": [
"LwRuan",
"gavinliu6"
],
"repo": "gavinliu6/vuepress-theme-mix",
"url": "https://github.com/gavinliu6/vuepress-theme-mix/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2431888797
|
I hope it will support Vue3 as well
Good Job!
I hope it will support Vue3 as well
Coming very soon!
Hi @HelloZhu. You can now scaffold a plugin from a Vue template. If you try it and run into any issues, please let me know. Thanks!
|
gharchive/issue
| 2024-07-26T10:03:16 |
2025-04-01T04:34:20.027681
|
{
"authors": [
"HelloZhu",
"gavinmcfarland"
],
"repo": "gavinmcfarland/plugma",
"url": "https://github.com/gavinmcfarland/plugma/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
171019082
|
Write unit tests for Exchange Adapter authenticated request methods
Tricky, but really should be done.
Done. Ready for next release.
Released.
|
gharchive/issue
| 2016-08-13T17:40:38 |
2025-04-01T04:34:20.034257
|
{
"authors": [
"gazbert"
],
"repo": "gazbert/bxbot",
"url": "https://github.com/gazbert/bxbot/issues/30",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1985598762
|
Backport #351 to citadel: fix dartsim inertia matrix rotation
🦟 Bug fix
Backport #351 to citadel, better late than never (https://github.com/gazebosim/gz-physics/pull/351#issuecomment-1201385754)
Summary
Original commit message:
dartsim: fix handling inertia matrix pose rotation (#351)
When loading a model from SDF, the moment of inertia matrix is currently applying any rotations in the //inertial/pose two times, since the rotations are applied explicitly, but they are already applied in math::Inertial::Moi.
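A small numpy illustration of the double rotation described above (purely illustrative; the actual fix lives in the dartsim plugin's C++ SDF-loading code):
import numpy as np

# Moment-of-inertia matrix in its principal frame, and the rotation R
# from //inertial/pose (a 45-degree yaw here).
moi = np.diag([1.0, 2.0, 3.0])
c = s = np.sqrt(0.5)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])

correct = R @ moi @ R.T    # math::Inertial::Moi already applies this
buggy = R @ correct @ R.T  # applying the rotation explicitly a second time

print(correct)  # off-diagonal coupling of -0.5, as expected at 45 degrees
print(buggy)    # equivalent to a 90-degree yaw of moi: x/y moments swapped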
Checklist
[X] Signed all commits for DCO
[ ] Added tests
[ ] Updated documentation (as needed)
[ ] Updated migration guide (as needed)
[ ] Consider updating Python bindings (if the library has them)
[ ] codecheck passed (See contributing)
[ ] All tests passed (See test coverage)
[ ] While waiting for a review on your PR, please help review another open pull request to support the maintainers
Note to maintainers: Remember to use Rebase-and-Merge.
FYI @j-rivero the ign_physics-ci-pr_any-homebrew-amd64 build is using the wrong formula name: ign-physics2 instead of ignition-physics2
+ brew install ign-physics2 --only-dependencies
Warning: No available formula with the name "ign-physics2". Did you mean gz-physics8, gz-physics7 or gz-physics6?
==> Searching for similarly named formulae and casks...
==> Formulae
osrf/simulation/gz-physics8
osrf/simulation/gz-physics7
osrf/simulation/gz-physics6
To install osrf/simulation/gz-physics8, run:
brew install osrf/simulation/gz-physics8
Build step 'Execute shell' marked build as failure
@osrf-jenkins run tests
I could add ign-* aliases for citadel and fortress formulae if that's simpler than fixing the release-tools logic
I could add ign-* aliases for citadel and fortress formulae if that's simpler than fixing the release-tools logic
Lets try to fix the problem in the right way https://github.com/gazebo-tooling/release-tools/pull/1068
@osrf-jenkins run tests please
thanks @j-rivero! the homebrew build is working now
|
gharchive/pull-request
| 2023-11-09T13:13:09 |
2025-04-01T04:34:20.041627
|
{
"authors": [
"j-rivero",
"scpeters"
],
"repo": "gazebosim/gz-physics",
"url": "https://github.com/gazebosim/gz-physics/pull/568",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
877303715
|
Occurrence details: The field description in the detailed view of an image
In the detailed view of an image (e.g. this record https://hp-nhm-rotterdam.gbif-staging.org/data?entity=2570097743&filter=eyJtdXN0Ijp7InRheG9uS2V5IjpbMjMwMDQ3OV19fQ%3D%3D, after clicking on the image), the field 'description' that is included in the extension file multimedia is not displayed (compare same record on GBIF: https://www.gbif.org/occurrence/2570097743). Could this field be included in the hosted portal view?
closed by https://github.com/gbif/gbif-web/commit/d3c1505acabd9a20efb6069232515d7806e6404e
|
gharchive/issue
| 2021-05-06T09:38:35 |
2025-04-01T04:34:20.060921
|
{
"authors": [
"MortenHofft",
"langeveldNMR"
],
"repo": "gbif/hosted-portals",
"url": "https://github.com/gbif/hosted-portals/issues/154",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1101660717
|
Issue "log4j ManagerFactory unable to create manager" when upgrading the IPT to 2.5.5
When I try to point it at the previous data folder (version 2.3.5), I receive this error message:
ManagerFactory [org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory@29bb5711] unable to create manager for [/var/ipt_rmca_data/logs/debug.log] with data [org.apache.logging.log4j.core.appender.rolling.RollingFileManager$FactoryData@6b57e7fb[pattern=/var/ipt_rmca_data/logs/debug.log.%i, append=true, bufferedIO=true, bufferSize=8192, policy=CompositeTriggeringPolicy(policies=[OnStartupTriggeringPolicy, SizeBasedTriggeringPolicy(size=10485760)]), strategy=DefaultRolloverStrategy(min=1, max=7, useMax=true), advertiseURI=null, layout=%-5p %d{dd-MMM-yyyy HH:mm:ss} [%c] - %m%n, filePermissions=null, fileOwner=null]]
The data folder was previously used by version 2.3.5 (the IPT probably had the log4j jars replaced last year).
The data folder has the “tomcat” owner (which is the same owner as for /var/lib/tomcat/webapps) and /var/ipt_rmca_data/logs/debug.log is now set with a 777 permission.
If you're using Debian or Ubuntu, it could be the security sandboxing settings: https://ipt.gbif.org/manual/en/ipt/2.5/faq#i-get-the-following-error-the-data-directory-directory-is-not-writable-what-should-i-do ("On systems with security sandboxing...").
Check the permissions on /var/ipt_rmca_data/logs (the directory itself, e.g. ls -l /var/ipt_rmca_data) as the log manager's first action will be to rename debug.log to debug.log.1, then create a new debug.log.
related #1616
Hi, the problem still occurs, even after putting 777 permissions on the log folder.
Which OS is this? Debian, Ubuntu, Red Hat, CentOS, something else?
It's an Ubuntu 20.04.3 LTS
You probably need to change the security sandboxing settings: https://ipt.gbif.org/manual/en/ipt/2.5/faq#i-get-the-following-error-the-data-directory-directory-is-not-writable-what-should-i-do ("On systems with security sandboxing...").
Yes, that was the issue. Thanks !
The old datasets are recognized and I can log in to the IPT.
I noticed another, minor issue. For a few minutes after the installation, I couldn't log in to the IPT. It was not even displaying the red toolbox mentioning that a wrong password had been entered (like in the image below). But this resolved itself after about 15 to 20 minutes...
Glad to hear it. I have split this part of the FAQ into a new question.
https://ipt.gbif.org/manual/en/ipt/2.5/faq#sandboxing
An IPT with many datasets (hundreds) can take a few minutes to start up, but I don't know what would have caused login to be delayed.
|
gharchive/issue
| 2022-01-13T12:38:02 |
2025-04-01T04:34:20.070986
|
{
"authors": [
"MattBlissett",
"ftheeten",
"mike-podolskiy90"
],
"repo": "gbif/ipt",
"url": "https://github.com/gbif/ipt/issues/1726",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
74316164
|
Send user auto-email on password reset
Currently the admin is responsible for reseting the password, then notifying the user
of the new password.
How about automating it?
What version of the provider software are you using?
Version 2.0.3-r3672
Original issue reported on code.google.com by kyle.braak on 2012-01-04 15:58:57
No thumbs up on this issue since 2012 so closing as won't fix.
|
gharchive/issue
| 2015-05-08T10:30:25 |
2025-04-01T04:34:20.072837
|
{
"authors": [
"kbraak"
],
"repo": "gbif/ipt",
"url": "https://github.com/gbif/ipt/issues/827",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2101621648
|
add endpoint_attrs param to simplify_graph to flexibly relax strictness
Resolves #625
This PR:
adds endpoint_attrs param to simplification.simplify_graph function to flexibly relax strictness
deprecates strict param in simplification.simplify_graph function in favor of new endpoint_attrs param
improves docstrings and comments in the simplification module
@csebastiao would you like to test this out with a specific case?
@gboeing I have tested it on my side with the two examples (the toy graph and the Copenhagen custom bicycle attribute) and both return the expected behavior.
It's all good for me.
Thanks. Can you provide a code snippet for testing your use case?
Oh sure, sorry about that. Here is the one for the toy graph, made by hand:
import networkx as nx
import shapely
import osmnx as ox
G = nx.Graph()
G.add_node(1, x=1, y=1)
G.add_node(2, x=2, y=1)
G.add_node(3, x=2.5, y=1.5)
G.add_node(4, x=3, y=2.5)
G.add_node(5, x=3.5, y=3.5)
G.add_node(6, x=3, y=4)
G.add_node(7, x=3, y=5)
G.add_node(8, x=3.5, y=5.5)
G.add_node(9, x=3.5, y=6.5)
G.add_node(10, x=4, y=7)
G.add_node(11, x=5, y=8)
G.add_node(12, x=6, y=8)
G.add_node(13, x=6.5, y=8.5)
G.add_node(14, x=7, y=9)
G.add_node(15, x=7.5, y=8.5)
G.add_node(16, x=8, y=8)
G.add_node(17, x=7.5, y=7.5)
G.add_node(18, x=7, y=7)
G.add_node(19, x=6.5, y=7.5)
# add length and osmid just for the osmnx function to work
for i in range(1, 19):
    G.add_edge(i, i + 1, length=1, osmid=i)
G.add_node(20, x=4, y=4)
G.add_node(21, x=4, y=5)
G.add_edge(5, 20, length=1, osmid=20)
G.add_edge(20, 21, length=1, osmid=21)
G.add_edge(21, 8, length=1, osmid=22)
G.add_edge(19, 12, length=1, osmid=23)
# give three value of color to see the discrimination for an attribute
for i in range(2, 8):
    G.edges[i, i + 1]["color"] = 1
G.edges[1, 2]["color"] = 2
G.edges[5, 20]["color"] = 1
G.edges[20, 21]["color"] = 1
G.edges[21, 8]["color"] = 1
for i in range(8, 11):
    G.edges[i, i + 1]["color"] = 3
for i in range(11, 19):
    G.edges[i, i + 1]["color"] = 2
G.edges[19, 12]["color"] = 2
G = nx.MultiDiGraph(G)
# add crs for the ox_plot_graph to work
G.graph["crs"] = "epsg:4326"
ec = ox.plot.get_edge_colors_by_attr(G, "color", cmap="Set1")
ox.plot_graph(
G,
figsize=(12, 8),
bgcolor="w",
node_color="black",
node_size=30,
edge_color=ec,
edge_linewidth=3,
)
G_simple = ox.simplify_graph(G, endpoint_attrs=["color"])
ec_s = ox.plot.get_edge_colors_by_attr(G_simple, "color", cmap="Set1")
ox.plot_graph(
G_simple,
figsize=(12, 8),
bgcolor="w",
node_color="black",
node_size=30,
edge_color=ec_s,
edge_linewidth=3,
)
Which can be asserted with
assert len(G_simple) == len(ox.simplify_graph(G)) + 2
One way to see it on an actual OSMnx graph is with this basic example:
import osmnx as ox
protected_dict = {}
protected_dict["sidewalk:left:bicycle"] = "yes"
protected_dict["sidewalk:left:right"] = "yes"
protected_dict["cycleway:left"] = ["shared_lane", "shared_busway", "track"]
protected_dict["cycleway:right"] = ["shared_lane", "shared_busway", "track"]
protected_dict["cycleway:both"] = "lane"
protected_dict["cycleway"] = ["shared_lane", "shared_busway", "opposite_lane", "opposite"]
protected_dict["bicycle"] = ["designated", "yes", "official", "use_sidepath"]
protected_dict["highway"] = ["cycleway", "bridleway"]
protected_dict["cyclestreet"] = "yes"
protected_dict["bicycle_road"] = "yes"
for val in protected_dict:
    if val not in ox.settings.useful_tags_way:
        ox.settings.useful_tags_way += [val]
G = ox.graph_from_place("Frederiksberg Municipality, Denmark", simplify=False)
for edge in G.edges:
    for key in list(protected_dict.keys()):
        if key in list(G.edges[edge].keys()):
            if G.edges[edge][key] in protected_dict[key]:
                G.edges[edge]["cycling"] = 1
                break
            else:
                G.edges[edge]["cycling"] = 0
        else:
            G.edges[edge]["cycling"] = 0
G_s = ox.simplify_graph(G, endpoint_attrs=None)
G_ns = ox.simplify_graph(G, endpoint_attrs=["osmid"])
G_attr = ox.simplify_graph(G, endpoint_attrs=["cycling"])
ec = ox.plot.get_edge_colors_by_attr(G_attr, "cycling", cmap="RdYlGn")
ox.plot_graph(
G_attr,
figsize=(12, 8),
bgcolor="w",
node_color="black",
node_size=10,
edge_color=ec,
edge_linewidth=1,
)
To take a specific node (even though, with ever-updating OSM, I don't know if the exact values will stay), one can look at node 1262273553:
G_attr.edges(1262273553, data=True)
It only exists when discriminating with the cycling (or osmid) attribute:
assert 1262273553 not in G_s
assert 1262273553 in G_attr
This should be the case as there are only two edges (1262273553, 9740065276, 0) and (1262273553, 9917054947, 0) with a different value for the cycling attribute, because for the latter we have 'highway': 'cycleway' and 'bicycle': 'designated' and for the former 'highway': 'footway'.
endpoint_attrs looks very useful, thanks for working on this!
I believe this is ready to merge.
Note that #1145 additionally adds a node_attrs_include param to the simplify_graph function to further flexibly relax graph simplification strictness. It also renames the endpoint_attrs param to edge_attrs_differ for consistent and clear naming, given the new param.
|
gharchive/pull-request
| 2024-01-26T05:32:51 |
2025-04-01T04:34:20.177419
|
{
"authors": [
"EwoutH",
"csebastiao",
"gboeing"
],
"repo": "gboeing/osmnx",
"url": "https://github.com/gboeing/osmnx/pull/1117",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
301963163
|
Add bearing analysis functions
I added two functions:
One gives a rounded version of each edge's bearing, to more easily group edges of the same or perpendicular bearing, mainly for plotting purposes.
The other collects the desired bearing(s), letting you choose which bearings to plot and how.
I also made a Jupyter notebook example for them that I will submit to the examples repository.
I tried to follow the commenting and docstring conventions as much as I could, but if you have any suggestions I would be very grateful.
Thank you
Coverage decreased (-1.0%) to 92.833% when pulling c10eb1d0bf105a1ed4764f892c481857c8bc3303 on Gbachant:add-bearing-analysis into 07de8cdb4032f145a69c2dcff0a8f258ba9ffb7a on gboeing:master.
@Gbachant thanks for contributing to OSMnx! Can you tell me a little more about your additions? In particular, I want to make sure they are general enough for inclusion into the package itself, rather than just being demonstrated as a standalone example. It looks like the key feature:
for u, v, a in G.edges(data=True):
    a['rounded_bearing'] = int(round(a['bearing']))
    a['modulo_bearing'] = a['rounded_bearing'] % 90
gives your graph integer bearings and modulo of these integers. This is useful for binning, but as it's only a couple lines of code, it seems easy to leave as standalone in an example notebook. I'm less clear what's happening with the add_search_bearings function. What are they being added to? It looks like it just returns a list of bearings. The code inside the function doesn't have any comments so it's hard to see what it's doing and why.
|
gharchive/pull-request
| 2018-03-03T03:15:01 |
2025-04-01T04:34:20.184092
|
{
"authors": [
"Gbachant",
"coveralls",
"gboeing"
],
"repo": "gboeing/osmnx",
"url": "https://github.com/gboeing/osmnx/pull/135",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
182509475
|
Jackson serialisation using JsonTypeInfo needs improving
The JsonTypeInfo annotation causes problems with generic types/type erasure. We have a temporary solution that adds a class property to Element and ElementSeed, but it isn't very nice.
See comment on pull request - https://github.com/gchq/Gaffer/pull/505/files
The changes have caused the "class" field to be missed off again - so we need a different fix.
|
gharchive/issue
| 2016-10-12T12:11:49 |
2025-04-01T04:34:20.200297
|
{
"authors": [
"p013570"
],
"repo": "gchq/Gaffer",
"url": "https://github.com/gchq/Gaffer/issues/455",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1590852361
|
Gh 2890 cache service static instance bug
Related Issue
Resolve #2890
Codecov Report
:exclamation: No coverage uploaded for pull request base (v2-alpha@5f22ae6).
The diff coverage is n/a.
@@ Coverage Diff @@
## v2-alpha #2893 +/- ##
===========================================
Coverage ? 73.62%
Complexity ? 228
===========================================
Files ? 19
Lines ? 728
Branches ? 57
===========================================
Hits ? 536
Misses ? 143
Partials ? 49
|
gharchive/pull-request
| 2023-02-19T20:59:25 |
2025-04-01T04:34:20.204186
|
{
"authors": [
"GCHQDev404",
"codecov-commenter"
],
"repo": "gchq/Gaffer",
"url": "https://github.com/gchq/Gaffer/pull/2893",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
239117907
|
Add pylint/PEP8 checks to GitHub Actions to validate the python shell
Travis should run some code analysis checks to determine whether the Python shell is ready for release, e.g. pylint/PEP8.
Currently there are lots of inconsistent uses of " and ' for strings, as well as lines that are very long. A PEP8 linter would flag these.
Fixed by https://github.com/gchq/gaffer-tools/pull/1028
|
gharchive/issue
| 2017-06-28T10:19:09 |
2025-04-01T04:34:20.205663
|
{
"authors": [
"p013570",
"t92549"
],
"repo": "gchq/gaffer-tools",
"url": "https://github.com/gchq/gaffer-tools/issues/137",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1679583725
|
B15: How to Re-Trigger Apple Authorization?
Yesterday, I got the Apple Authorization popup on my iPhone to re-auth iCloud3 v3. However, I was not in a place where I could enter the code into HA so I just hit Ok and Ok. Now, I cannot sort out how to get the Auth to re-trigger on my iPhone.
I've followed the instructions from the log:
That gives me a place to enter the code, but it doesn't seem to trigger the prompt on the iPhone. I've tried restarting iCloud3 v3 and HA, but that doesn't seem to trigger it either.
Event Log > Actions > Request Apple-ID Verification Code
I don't have that option:
Event Log Actions > Reset iCloud Interface. I changed the item description in beta 16, which has not been released yet.
That worked, thanks.
|
gharchive/issue
| 2023-04-22T15:20:09 |
2025-04-01T04:34:20.214138
|
{
"authors": [
"Snuffy2",
"gcobb321"
],
"repo": "gcobb321/icloud3_v3",
"url": "https://github.com/gcobb321/icloud3_v3/issues/108",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2686118297
|
🛑 PasteBin is down
In bb3373b, PasteBin (https://pastebin.com) was down:
HTTP code: 403
Response time: 73 ms
Resolved: PasteBin is back up in 8cf0e91 after 16 minutes.
|
gharchive/issue
| 2024-11-23T15:10:23 |
2025-04-01T04:34:20.249376
|
{
"authors": [
"gdm257"
],
"repo": "gdm257/upptime",
"url": "https://github.com/gdm257/upptime/issues/678",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
386181477
|
Ask the user to authorize Siri
Some devices silently fail to invoke Siri intents unless permission was
granted.
Does this need to be wrapped in logic to detect the iOS version? I think I have an old iPad around somewhere that doesn't support SiriKit, I may test the behavior on there later.
|
gharchive/pull-request
| 2018-11-30T13:10:46 |
2025-04-01T04:34:20.250454
|
{
"authors": [
"chrisy"
],
"repo": "gdombiak/OctoPod",
"url": "https://github.com/gdombiak/OctoPod/pull/148",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2016784241
|
Feat : Add Dynamic Theming Example
Changes
Added Dynamic Theming Example code.
https://github.com/gdsc-ys/color-flutter/assets/75557859/2f3a4896-2cbd-4956-af03-0c4f934e3044
[Example]
Dynamic Theme Example using Provider
Dynamic Theme Example using GetX
[Library]
The boilerplate code needed when implementing a Dynamic Theme via Provider is provided as GDSCThemeManager.
Background
In Flutter, unlike React, Provider is not a first-party feature but a third-party one; it has to be installed via pub add provider before it can be used.
In the Flutter ecosystem, the most widely used state-management libraries are, in order, GetX, Provider, and BLoC.
Given this, providing Provider as a dependency could reduce flexibility when using the library, so this PR removes the library's dependency on the Provider package and instead provides an example for it.
However, among the boilerplate code that comes with using Provider, the parts that do not depend on the Provider package are exported from the library (GDSCThemeManager).
I'll read through the rest of the code carefully tomorrow! I'm still busy with work, so it will be hard to look at it in detail today...
Please open README.md changes as a separate PR!
|
gharchive/pull-request
| 2023-11-29T15:00:09 |
2025-04-01T04:34:20.260631
|
{
"authors": [
"ANTARES-KOR",
"whatisyourname0"
],
"repo": "gdsc-ys/color-flutter",
"url": "https://github.com/gdsc-ys/color-flutter/pull/4",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|