id (string, lengths 4-10) | text (string, lengths 4-2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
535903866 | Try rack cas
This branch removes devise and devise_cas_authenticatable in favor of the rack-cas gem. The devise_cas_authenticatable gem relies on the ruby-cas client gem, which is no longer maintained and produces deprecation warnings in Rails 6.
Aside from the initial setup for rack-cas, the branch introduces 4 new methods into the Application Controller:
authenticate_user! - directs visitors to either the login or the unregistered page
set_current_user - updates the login stats for a user and sets a @current_user, if available
get_current_user - gets the user from a CAS session
current_user - helper method for use in views
In order to maintain the user statistics that Devise's trackable module provided, the update_login_stats method has been introduced on the User model. This method gets called in the set_current_user method. A sessions controller was also introduced to handle closing these out before sending the user to the CAS logout path.
The authenticated root we used with Devise has been replicated using a condition in StaticPagesController#home.
The test suite has been updated to reflect these changes and should be passing.
To handle visitors who may be authenticated at the central level by CAS but are not registered in the app, an unregistered path has been introduced. This replicates devise_cas_authenticatable's unregistered method but provides a custom layout.
In manual testing, a registered user should see no difference in their experience with the app. An unregistered user should now get the custom unregistered page.
Definitely appreciate the additional eyes on this!! :)
I think this is good to go. I manually resolved conflicts in the gem lock file.
| gharchive/pull-request | 2019-12-10T18:22:00 | 2025-04-01T06:38:21.242039 | {
"authors": [
"CraigJZ",
"dcollie2"
],
"repo": "dcollie2/enrollchat",
"url": "https://github.com/dcollie2/enrollchat/pull/113",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
218010885 | Doc changes for installing DCOS v1.9.0 on Google GCE
Description
Urgency
[ ] Blocker
[ ] High
[x] Medium
Requirements
Test all commands and procedures.
Build content locally and test for formatting/links.
Add redirects to dcos-website/redirect-files.
Change all affected versions (e.g. 1.7, 1.8, and 1.9).
See the contribution guidelines.
Can one of the admins verify this patch?
Please can somebody review the changes?
I think the dcos_installer_filename usage is just confusing here. If you change it to dcos_generate_config.sh everywhere it'll be easier to follow and copy/paste.
Alright, this is fine, but it needs to be rebased.
Can one of the admins verify this patch?
Hello @karlkfi, I'm not sure I've followed the correct process here
Sorry for the slow turnaround! I thought I had already merged this...
| gharchive/pull-request | 2017-03-29T21:12:44 | 2025-04-01T06:38:21.247416 | {
"authors": [
"ajazam",
"karlkfi",
"mesosphere-ci"
],
"repo": "dcos/dcos-docs",
"url": "https://github.com/dcos/dcos-docs/pull/999",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
194692367 | Prevent stop action for scheduler tasks
Scheduler tasks don't like to be stopped so we hide the stop button if one is selected.
@leemunroe - should we hide this button or disable it? If we disable it, I presume we'd also want a tooltip to explain why.
https://mesosphere.atlassian.net/browse/DCOS-10693
@MatApple Good point. I think disabled with a tooltip is a better UX in this case. Stopping a scheduler task is not supported.
Thanks @leemunroe - will make the appropriate changes.
@jfurrow - I'm having trouble getting the tooltip to work with the "Stop" button. I've tried wrapping the button with the Tooltip and also having the Tooltip as the immediate child of the button. Either way, the button doesn't like it and the Tooltip doesn't work correctly - especially with the "disabled" attribute on the button. Any suggestions? Thanks
@mesosphere-ci retest this please
@MatApple Functionally looks good but why does the disabled state have a grey background and hover state? It makes it stand out more 😪
We should remove the background if disabled. And no hover state needed (i.e. no underline).
@leemunroe - completely agree, the button is weird. Talked to @ashenden about this. Instead of adding custom CSS, we want to handle this by updating the button styles for the disabled state in CNVS.
| gharchive/pull-request | 2016-12-09T20:43:48 | 2025-04-01T06:38:21.253805 | {
"authors": [
"MatApple",
"leemunroe"
],
"repo": "dcos/dcos-ui",
"url": "https://github.com/dcos/dcos-ui/pull/1566",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
247547067 | fix(TasksView): disable restart from tasksViews
Disable restart in TasksView when the service is an SDK service; this is part of #2343
Closes DCOS-16564
Checklist
[x] Did you add a JIRA issue in a commit message or as part of the branch name?
[ ] Did you add new unit tests?
[ ] Did you add new integration tests?
[ ] If this is a regression, did you write a test to catch this in the future?
Nice catch @bstavroulakis @weblancaster 👍
| gharchive/pull-request | 2017-08-02T23:00:13 | 2025-04-01T06:38:21.256256 | {
"authors": [
"MatApple",
"weblancaster"
],
"repo": "dcos/dcos-ui",
"url": "https://github.com/dcos/dcos-ui/pull/2350",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
440718440 | DCOS-52747 Disable package installation until "manager" requirement is satisfied
https://jira.mesosphere.com/browse/DCOS-52747
⚠️ depends on https://github.com/dcos/dcos-ui/pull/3864, you might want to get that merged first.
Some packages (currently only kubernetes-cluster) have a manager that must be
installed before they can be used. Currently the API restricts the installation
of kubernetes-cluster if kubernetes is not installed yet: you have to go through
the whole installation process in the UI only to be notified afterwards that it
did not work. As a remedy, a check has been implemented that renders an infobox
with a nice warning and disables the reviewAndRun button, so we no longer
waste a lot of work in the first place when we don't know about
dependencies upfront.
Review
The functionality has been implemented in two steps (commits). The first commit (I mean the one that starts with refactor) refactors geReviewAndRunButtons -> renderReviewAndRunButton in the PackageDetailTab, as I cannot deal with lets somehow... It also does some dummy changes that cause huge whitespace changes.
The second commit should contain all the logic related to dependency checking. It should make it possible to get an overview of the approach taken.
Testing
Try to install kubernetes-cluster on a cluster that does not have a kubernetes service running to see the warning. Install kubernetes and confirm that the message disappears (it should disappear in realtime as soon as the kubernetes task reaches the state "TASK_RUNNING").
To make the warning show up programmatically you can make hasUnresolvedDependency return true.
@TattdCodeMonkey I did notice that we're checking that the manager package exists, but not the status of it. So if it's deploying you will get an error when trying to run the package with the dependency.
We're currently looking for a TASK_RUNNING, which was the best I could come up with. There seems to be no accurate service status for the kubernetes package yet, so this currently is the best we can do, right? Would it be OK for you if I opened an issue that reminds us to implement a more sophisticated check once a package with a manager implements those statuses?
I'll happily implement more/different checks though if you have something in mind!
Finding it disquieting that the cosmos system test failed here. Could be relevant! 👀 I'll watch it
Frustrating because they pass locally
@pierrebeitz Cypress can't seem to complete mesos stream proxy requests so all the universe system tests fail with this change. So in order to preserve the universe system tests, I've added a commit that modifies the PackageDetailsTab to consult the DCOSStore (marathon groups endpoint) to check for an installed kubernetes package.
Unfortunately, this is a worse solution because "kubernetes" could appear to be satisfied even if it has failed to start. This can result in the failure to install kubernetes-cluster. I don't think this is a huge deal because in the end we give the user an appropriate error message after kubernetes-cluster fails to install.
Another alternative is to disable universe system tests but I opted to not do this.
Another thought I had was that there's not enough information to figure out if "manager dependencies" are truly satisfied. Would it require that all or any tasks of that package be running? For kubernetes, this is not an issue because there's only one task.
The button looks disabled, but I can still click it. Is this intentional?
@brandonc , I approve of your current solution.
The button looks disabled, but I can still click it. Is this intentional?
no, not at all! should be fixed now!
Why do we have "kubernetes" in quotation marks?
Apparently I worked with a wrong design doc in the beginning. Removed them!
I ran "kubernetes", tried to install "kubernetes-cluster", but got this error. Maybe we need to show the infobox in case "kubernetes" isn't configured correctly?
That seems to be an error from the server. Any idea on how to find out upfront whether kubernetes is configured correctly?
Are there designs for the infobox in this case? I think gray is a sort of easily ignored color; I think yellow or red are better.
@mperrotti Talked to design; they want it to be gray. Here's what I consider the design doc: https://mesosphere.invisionapp.com/share/2TRTTM5SZQX#/screens/361260526_k8cluster-Details-Page-Disabled
@pierrebeitz , thank you for addressing my feedback and for the quick response.
I have one concern though. If our logic for checking the dependency is somewhat incorrect, we prevent the user from installing a certain package. I think we should just display a warning. We have a server error in case the user insists on trying to install the package.
Also, the tooltip message isn't in the message catalog and cannot be translated.
I have one concern though. If our logic for checking the dependency is somewhat incorrect, we prevent the user from installing a certain package. I think we should just display a warning. We have a server error in case the user insists on trying to install the package.
I really admire that idea! We need to talk to design about this.
Also, the tooltip message isn't in the message catalog and cannot be translated.
@GeorgiSTodorov updated the fixup-commit
@GeorgiSTodorov @TattdCodeMonkey the ServiceTree().getLabels method flattens out those running labels so we should be able to detect it. The only thing we can't detect is whether or not the underlying task requirement is satisfied. But I think that is a limitation of the API.
@pierrebeitz I think this is ready to go
I'm merging this because the "installed" requirement seems like the correct one to focus on: there are many scenarios we can't detect, and this is meant as a helpful shortcut for someone who definitely does not have kubernetes installed before installing kubernetes-cluster.
:tada: This PR is included in version 2.96.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2019-05-06T14:08:50 | 2025-04-01T06:38:21.272009 | {
"authors": [
"GeorgiSTodorov",
"brandonc",
"mesosphere-ci",
"pierrebeitz"
],
"repo": "dcos/dcos-ui",
"url": "https://github.com/dcos/dcos-ui/pull/3865",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
391481694 | [master] Automated bump DC/OS UI v2.40.8
This bump contains the following JIRAs:
DCOS-14285 DCOS-46169
@mesosphere-mergebot bump-ee
@mesosphere-mergebot label [Ready for Review]
| gharchive/pull-request | 2018-12-16T16:13:56 | 2025-04-01T06:38:21.273971 | {
"authors": [
"juliangieseke"
],
"repo": "dcos/dcos",
"url": "https://github.com/dcos/dcos/pull/3978",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
406498341 | [1.12][DCOS-45428] Bump cosmos to latest version
High-level description
What features does this change enable? What bugs does this change fix?
Backport of #4409
Corresponding DC/OS tickets (obligatory)
These DC/OS JIRA ticket(s) must be updated (ideally closed) in the moment this PR lands:
DCOS-45428 Cosmos decodes URL parameter and it breaks resource links
Related tickets (optional)
Other tickets related to this change:
DCOS_OSS- Foo the Bar so it stops Bazzing.
Checklist for all PRs
[x] Added a comprehensible changelog entry to CHANGES.md or explain why this is not a user-facing change: No user visible changes.
[x] Included a test which will fail if code is reverted but test is not. If there is no test please explain here: No test included in dcos integration tests. However, package installation for dcos-ui would fail if code is reverted.
[x] Read the DC/OS contributing guidelines
[x] Followed relevant code rules Rules for Packages and Systemd
Checklist for component/package updates:
If you are changing components or packages in DC/OS (e.g. you are bumping the sha or ref of anything underneath packages), then in addition to the above please also include:
[x] Change log from the last version integrated (this should be a link to commits for easy verification and review): View diff
[x] Test Results: link to CI job test results for component
[x] Code Coverage (if available): link to code coverage report
PLEASE FILL IN THE TEMPLATE ABOVE / DO NOT REMOVE ANY SECTIONS ABOVE THIS LINE
Instructions and review process
What is the review process and when will my changes land?
All PRs require 2 approvals using GitHub's pull request reviews.
Reviewers should be:
Developers who understand the code being modified.
Developers responsible for code that interacts with or depends on the code being modified.
It is best to proactively ask for 2 reviews by @mentioning the candidate reviewers in the PR comments area. The responsibility is on the developer submitting the PR to follow-up with reviewers and make sure a PR is reviewed in a timely manner. Once a PR has 2 ship-it's, no red reviews, and all tests are green it will be included in the next train.
@mesosphere-mergebot bump-ee
@mesosphere-mergebot label Ready For Review
| gharchive/pull-request | 2019-02-04T20:09:53 | 2025-04-01T06:38:21.283604 | {
"authors": [
"takirala"
],
"repo": "dcos/dcos",
"url": "https://github.com/dcos/dcos/pull/4422",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
433274398 | [master] Bump Mesos to nightly 1.8.x 85462fc
High-level description
This is a routine bump to the latest Mesos and mesos-modules.
Related JIRA Issues
Checklist for all PRs
[ ] Included a test which will fail if code is reverted but test is not. If there is no test please explain here:
[x] Read the DC/OS contributing guidelines
[x] Followed relevant code rules Rules for Packages and Systemd
Checklist for component/package updates:
If you are changing components or packages in DC/OS (e.g. you are bumping the sha or ref of anything underneath packages), then in addition to the above please also include:
[x] Changelog:
https://github.com/apache/mesos/compare/0c503b01d3a9428ec9db35d09da5e237d737c570...85462fc183a60ae18d85729bccb1fffb59aa572c
[ ] Test Results: [link to CI job test results for component]
[ ] Code Coverage (if available): [link to code coverage report]
@mesosphere-mergebot bump-ee
| gharchive/pull-request | 2019-04-15T13:12:00 | 2025-04-01T06:38:21.288275 | {
"authors": [
"mesosphere-teamcity"
],
"repo": "dcos/dcos",
"url": "https://github.com/dcos/dcos/pull/5118",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
667928335 | exhibitor: bump
High-level description
What features does this change enable? What bugs does this change fix?
Corresponding DC/OS tickets (required)
D2IQ-ID JIRA title / short description.
Related tickets (optional)
D2IQ-ID JIRA title / short description.
@mesosphere-mergebot test teamcity/dcos/build/dcos teamcity/dcos/build/tox
@mesosphere-mergebot test all
@mesosphere-mergebot test all
@mesosphere-mergebot test all
| gharchive/pull-request | 2020-07-29T15:19:09 | 2025-04-01T06:38:21.291697 | {
"authors": [
"jkoelker"
],
"repo": "dcos/dcos",
"url": "https://github.com/dcos/dcos/pull/7490",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1715054367 | Improve suggested tags
We want to add improved weighting that gives better suggestions.
In order of bias where tags are seen:
Bottle
Brand/Distillery
Region
Country
Added randomization to the list so at least selection bias goes away. Currently this is only weighted by Bottle, and likely requires tags to be materialized/indexed to do more.
| gharchive/issue | 2023-05-18T06:10:47 | 2025-04-01T06:38:21.293536 | {
"authors": [
"dcramer"
],
"repo": "dcramer/peated",
"url": "https://github.com/dcramer/peated/issues/20",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
274775694 | [WIP] Added decode tx page and view raw link
For issues #190 and #136. The rpc api has no "getrawblock" function so that can't be added, but the view raw link works as intended now, showing the txid and hex, while the view decoded link shows the json output. The decode tx page works as intended as well but doesn't look so good for now.
Just pushed up a bit more styling for this https://github.com/dcrdata/dcrdata/tree/raw-tx
@gozart1 thanks looks pretty good right now. @chappjc this is ready for merge.
Might be nice to pass the rejection reason out to the UI.
In the server log I see:
Rejected transaction 005969df2bc92ac33b97255521762ec4d69a27d6188e958c3246f01856d78259: transaction already exists
But the UI currently just shows
Error: Could not send hex
Try decoding it first to see if it's valid
Added that
We need to be cautious with accepting and processing external data. Some thoughts:
broadcast needs to be rate limited (per IP and server-wide)
websocket input needs to somehow stop receiving data after X bytes (otherwise an attacker can send an infinite data stream, eating bandwidth and eventually filling up the server's memory)
the html form should screen the data before sending, but an attacker would just ignore the form and send directly to the websocket
the decode handler needs to have a final check on the size of the data before decoding it
Rate limiting may not be needed as the node won't broadcast junk or duplicates, and txns cost $$.
Took a shot at abusing this. My initial attempt was passing 460,000 valid hex characters to decode transaction. It completely crippled dcrdata. Local network request, just major CPU usage, forever.
I assume it's at least as costly to do this to send transaction.
I started with some valid transaction, like this one:
01000000020000000000000000000000000000000000000000000000000000000000000000ffffffff00ffffffffc0c343c6fb9ed34f26dd6a2bc7e233f22f82b4da4f363a7c5d437f7a555d30c60000000001ffffffff0400000000000000000000266a2463769d4b15965779ec9e542f888848426ec49dd80c0d175f3500000000000000b8df020000000000000000000000086a06010005000000a6e567000000000000001abb76a914a92c9ac541dd5ac40c630e6659ecce25e9fdd70e88aca53471d90100000000001abb76a91454ee8f3f4ceb3dfbdf38b3f55ec34c318c55b65c88ac0000000000000000028ffa46080000000000000000ffffffff020000bd1f92d10100000082d702000600000091483045022100f317885cfed85eb7355ea25a6fee0125b5fcd81afdcad91071a8e8ecc991f3ee02201683600dedd352d88a2ae0e11a5550ddfde68f87d53f9e072963f984f3bb8b1d014751210306559376b41006b6e16341b003c8d957cd3974ec665cea4e932826d9e7f1c7c72102f536ba2b34501eb3d3522c4397272d99803223087034a04f687ac6424c5aa8b652ae
Then I pasted that in the text box, holding down CTRL+V for a few seconds, then select all, and paste for another couple of seconds. 😆
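For anyone who wants to reproduce this without the clipboard gymnastics, here is a hypothetical shell one-liner that generates a comparable all-hex payload (230,000 random bytes encode to 460,000 hex characters); the file name is made up:
# build a single line of 460,000 valid hex characters (repro helper)
head -c 230000 /dev/urandom | xxd -p | tr -d '\n' > payload.hex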
We also need to limit the length of the hex logged: Received decodetx signal for hex: .
But what I was seeing is the process chugging away with multiple CPU cores after printing the offending hex.
It seems easy to clog up the websocket.
2017-11-21 17:25:27.351 [ERR] SQLT: SendRawTransaction failed: -1: Rejected transaction 24a719667306c2284af9cb564be3178417bd4906da09746d3f73ed34df94b2be: transaction already exists
2017-11-21 17:25:27.351 [DBG] EXPR: Failed to encode WebSocketMessage decodedtx: write tcp 127.0.0.1:7777->127.0.0.1:44582: i/o timeout
2017-11-21 17:25:27.921 [DBG] EXPR: Failed to encode WebSocketMessage 3: write tcp 127.0.0.1:7777->127.0.0.1:44582: i/o timeout
2017/11/21 17:25:27 "GET http://127.0.0.1:7777/explorer/decodetx/ws HTTP/1.1" from 127.0.0.1:44582 - 000 0B in 1m48.000770061s
I'm just clicking both buttons like a maniac.
Going in the right direction, but it still chokes. Try this: https://pastebin.com/raw/FRAfMiqb
@RogueElement nice job with this. I'm just being picky on this because of the attack surface it exposes.
| gharchive/pull-request | 2017-11-17T07:28:14 | 2025-04-01T06:38:21.302332 | {
"authors": [
"RogueElement",
"chappjc",
"gozart1"
],
"repo": "dcrdata/dcrdata",
"url": "https://github.com/dcrdata/dcrdata/pull/276",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
1955139913 | 🛑 Danbs is down
In 0ee1e52, Danbs (https://danbs.net) was down:
HTTP code: 404
Response time: 171 ms
Resolved: Danbs is back up in 5a6f73d after 6 minutes.
| gharchive/issue | 2023-10-20T23:47:54 | 2025-04-01T06:38:21.339470 | {
"authors": [
"ddanbs"
],
"repo": "ddanbs/upptime",
"url": "https://github.com/ddanbs/upptime/issues/12581",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2000883674 | 🛑 Danbs is down
In ec89686, Danbs (https://danbs.net) was down:
HTTP code: 404
Response time: 234 ms
Resolved: Danbs is back up in 48d105e after 13 minutes.
| gharchive/issue | 2023-11-19T14:54:59 | 2025-04-01T06:38:21.342092 | {
"authors": [
"ddanbs"
],
"repo": "ddanbs/upptime",
"url": "https://github.com/ddanbs/upptime/issues/13698",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2001628114 | 🛑 Danbs is down
In 5043183, Danbs (https://danbs.net) was down:
HTTP code: 404
Response time: 3785 ms
Resolved: Danbs is back up in a15b34d after 15 minutes.
| gharchive/issue | 2023-11-20T08:29:04 | 2025-04-01T06:38:21.344391 | {
"authors": [
"ddanbs"
],
"repo": "ddanbs/upptime",
"url": "https://github.com/ddanbs/upptime/issues/13724",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2092408214 | 🛑 Danbs is down
In f3e6b13, Danbs (https://danbs.net) was down:
HTTP code: 404
Response time: 169 ms
Resolved: Danbs is back up in 8d055a7 after 37 minutes.
| gharchive/issue | 2024-01-21T03:57:03 | 2025-04-01T06:38:21.346665 | {
"authors": [
"ddanbs"
],
"repo": "ddanbs/upptime",
"url": "https://github.com/ddanbs/upptime/issues/16019",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2101570441 | 🛑 Danbs is down
In c45545c, Danbs (https://danbs.net) was down:
HTTP code: 404
Response time: 146 ms
Resolved: Danbs is back up in 26941a9 after 13 minutes.
| gharchive/issue | 2024-01-26T04:15:17 | 2025-04-01T06:38:21.348894 | {
"authors": [
"ddanbs"
],
"repo": "ddanbs/upptime",
"url": "https://github.com/ddanbs/upptime/issues/16185",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1525828379 | 🛑 Danbs is down
In 98fdee2, Danbs (https://danbs.net) was down:
HTTP code: 404
Response time: 195 ms
Resolved: Danbs is back up in 7494774.
| gharchive/issue | 2023-01-09T15:33:23 | 2025-04-01T06:38:21.351168 | {
"authors": [
"ddanbs"
],
"repo": "ddanbs/upptime",
"url": "https://github.com/ddanbs/upptime/issues/2577",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1527886491 | 🛑 Danbs is down
In 09b4627, Danbs (https://danbs.net) was down:
HTTP code: 404
Response time: 146 ms
Resolved: Danbs is back up in e8a976b.
| gharchive/issue | 2023-01-10T19:19:22 | 2025-04-01T06:38:21.353670 | {
"authors": [
"ddanbs"
],
"repo": "ddanbs/upptime",
"url": "https://github.com/ddanbs/upptime/issues/2616",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
710257252 | https://remiguieditor.daviderosa.repl.co/ no module named remi
Hello, just letting you know that https://remiguieditor.daviderosa.repl.co/ is not loading :)
@awesomebytes Thank you a lot for the info, I will fix it now
@awesomebytes Fixed thank you ;-)
It seems that because of recent repl.it updates it is not possible to run the editor in a stable way. I removed that link from the readme. In the future I will look for a different solution.
| gharchive/issue | 2020-09-28T13:17:34 | 2025-04-01T06:38:21.360663 | {
"authors": [
"awesomebytes",
"dddomodossola"
],
"repo": "dddomodossola/remi",
"url": "https://github.com/dddomodossola/remi/issues/407",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
78104789 | Item typehint is strictly array in Writers
So, if the item can be an array, object or anything then shouldn't we remove the array typehint from the Writer interface? Because currently it only accepts arrays.
In other words: do we have to convert the item to an array when it arrives at the Writer?
It's not really easy to remove this, because some writers accept only arrays. Some months ago I removed the array type hints from the converters/items.
For example, the ExcelWriter uses phpexcel to create the spreadsheet, and I have no idea how we can extract the properties.
In this case, shouldn't we make the assumption that by the end of the process (aka the workflow gets to the writer part) the item should be converted into an array?
I think we should clarify this (in the documentation) so that everyone knows why this is.
See this library: https://github.com/plumphp/plum/
It is inspired by data-import and it says it accepts any kind of data.
I am leaving this open as a reminder for documentation.
See this library: https://github.com/plumphp/plum/
Interesting, I didn’t know about plumphp. I’ll contact Florian and see whether it makes sense to combine our efforts.
In this case, shouldn't we make the assumption that by the end of the process (aka the workflow gets to the writer part) the item should be converted into an array?
I think we should clarify this (in the documentation) so that everyone knows why this is.
Agreed. I decided to have arrays as the data element format: most readers can output it, most writers can handle it (and some, such as PHPExcel, require it).
| gharchive/issue | 2015-05-19T14:11:36 | 2025-04-01T06:38:21.365608 | {
"authors": [
"Baachi",
"ddeboer",
"sagikazarmark"
],
"repo": "ddeboer/data-import",
"url": "https://github.com/ddeboer/data-import/issues/208",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
191293447 | Cardinalities missing in XMI export
There are a few cardinalities in the content that are not included in the XMI output. One example is the target cardinality 2..2 of the property maps in AgentSimilarityPair.
some more examples on: https://ddi-alliance.atlassian.net/browse/DMT-108
As I wrote in DMT-108, is this just a matter of adding 2..2 to the list of cardinalities?
deployed https://github.com/ddialliance/lion/commit/6b7b1c7e1ec9f5bb2124430b2064edb0d919b6a1 to production. @OliverHopt is this resolved?
Solved
| gharchive/issue | 2016-11-23T14:56:10 | 2025-04-01T06:38:21.372279 | {
"authors": [
"OliverHopt",
"borsna"
],
"repo": "ddialliance/lion",
"url": "https://github.com/ddialliance/lion/issues/61",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
125231707 | Foreman in production: how to preload rails app?
I've been toying with a kind of rails app preloader to share memory between process types. In a rails environment, one uses rails to preload the code. In theory, a foreman engine could access the environment.rb file and use it to preload the app before forking the different web/worker/etc. processes. This would therefore not be much different from what unicorn and puma (in cluster mode) do.
The caveat is that foreman seems to fork shell commands (which will later load separate instances of the VM). Therefore, this:
class Foreman::Engine::RailsCLI < Foreman::Engine::CLI
  def startup
    super
    require File.expand_path('../../config/environment', __FILE__)
  end
end
is not enough.
Has anyone ever played around with such a thing? This is a rails-specific solution, so I don't hope to see it in foreman (or should I? Ideally every framework could create its own foreman engine and script).
I can envision that instead of Process.spawn, one would use fork.
I'd prefer not to add things like this to foreman proper. If you want to combine processes together do it one level before foreman as a single process.
Well, I also agree that this isn't a foreman concern. Nevertheless, I was eyeing a workflow that could be "lobbied" into frameworks like rails, i.e. a kind of foreman engine that each framework could implement and reuse. I'm posting my POC for Rails here:
# bin/foreman-rails
#!/usr/bin/env ruby
require 'foreman/cli'

module Foreman
  class Engine::RailsCLI < Engine::CLI
    def startup
      super
      # preload the Rails app once, before the process types are forked
      require File.expand_path('../../config/environment', __FILE__)
    end

    def register(name, command, options={})
      options[:env] ||= env
      options[:cwd] ||= File.dirname(command.split(" ").first)
      process = RailsProcess.new(command, options)
      @names[process] = name
      @processes << process
    end
  end

  class RailsCLI < CLI
    no_tasks do
      def engine
        @engine ||= begin
          engine_class = Engine::RailsCLI
          engine = engine_class.new(options)
          engine
        end
      end
    end
  end

  class RailsProcess < Process
    def run(options={})
      env = @options[:env].merge(options[:env] || {})
      output = options[:output] || $stdout
      runner = "#{Foreman.runner}".shellescape
      final_command = expanded_command(env)
      Dir.chdir(cwd) do
        # fork instead of Process.spawn, so the preloaded app is shared
        fork do
          env.each do |k, v|
            ENV[k] ||= v
          end
          log_args = output.is_a?(IO) ? [output] : [output, 'w']
          $stdout.reopen(*log_args)
          $stderr.reopen(*log_args)
          argv = final_command.split(/\s+/).reject { |s| %w(bundle exec).include?(s) }
          executable = argv.shift
          ARGV.clear
          argv.each do |v|
            ARGV << v
          end
          load Bundler.which(executable)
        end
      end
    end
  end
end

Foreman::RailsCLI.start
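For completeness, hypothetical usage, assuming the script above is saved as bin/foreman-rails in the app root and a standard Procfile is present:
$ chmod +x bin/foreman-rails            # make the script executable
$ bin/foreman-rails start -f Procfile   # same CLI surface as `foreman start`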
This works... mostly. Basically, by loading rails environment.rb, I'm eager-loading/kickstarting the initialization process, which is interpreted as "web process is starting" by some dependencies that make decisions based on whether the current process is the web process or not (I have problems with sidekiq because of that). This is mainly because Rails itself doesn't provide proper hooks to signal this, IMO. The memory savings, however, would be huge if such a process could be implemented.
I've created this discussion, but sadly there wasn't a lot of follow-up; maybe I'll reopen it as an issue.
| gharchive/issue | 2016-01-06T18:16:17 | 2025-04-01T06:38:21.379908 | {
"authors": [
"TiagoCardoso1983",
"ddollar"
],
"repo": "ddollar/foreman",
"url": "https://github.com/ddollar/foreman/issues/596",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2031371881 | set -o nounset causes unbound variable warnings
With source ~/.hishtory/config.sh in .bashrc.
$> set -u
-bash: HISHTORY_AT_PROMPT: unbound variable
-bash: HISHTORY_FIRST_PROMPT: unbound variable
Reason:
https://github.com/ddworken/hishtory/blob/cc123854a02374a7e4ee7fc87a974b19566fa142/client/lib/config.sh#L10
Solutions: https://stackoverflow.com/questions/7832080/test-if-a-variable-is-set-in-bash-when-using-set-o-nounset
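For reference, the usual fix from the linked answer is to expand the variables with a default so they are defined even under set -o nounset. A minimal sketch follows; everything beyond the two variable names in the error output is an assumption:
# ${VAR:-} yields "" when VAR is unset, so the test is safe under `set -u`
if [ -n "${HISHTORY_AT_PROMPT:-}" ]; then
  unset HISHTORY_AT_PROMPT
  # ... record the finished command ...
fi
: "${HISHTORY_FIRST_PROMPT:=1}"   # or give it an explicit initial value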
This should be fixed! If you run hishtory update you'll get the latest version with the fix. If you're still experiencing this issue (or run into anything else!) please reopen this so I can take another look.
Seems to be working; thanks for the quick fix @ddworken!
| gharchive/issue | 2023-12-07T19:14:25 | 2025-04-01T06:38:21.384114 | {
"authors": [
"ddworken",
"mustafa0x"
],
"repo": "ddworken/hishtory",
"url": "https://github.com/ddworken/hishtory/issues/142",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
785927113 | feat: 🎸 Support passing through tags
Adds more flexibility: the ability to attach tags to all used resources that support tag metadata, which is useful for billing/project grouping etc.
Released in v0.5.2.
| gharchive/pull-request | 2021-01-14T11:52:08 | 2025-04-01T06:38:21.420211 | {
"authors": [
"maael",
"ofhouse"
],
"repo": "dealmore/terraform-aws-next-js",
"url": "https://github.com/dealmore/terraform-aws-next-js/pull/34",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
214950448 | Error: decode() argument 1 must be string, not None when running textract.process
I am having trouble running convertToText with a UTF-8 file...
text = textract.process("1.pdf", method='pdfminer')
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/local/lib/python2.7/dist-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "td-net.py", line 178, in close
textData = convertToText(path,self.date) # convert pdf to text after download
File "td-net.py", line 239, in convertToText
text = textract.process("data/pdf/{1}/{0}.pdf".format(path,sDate), method='pdfminer')
File "/usr/local/lib/python2.7/dist-packages/textract/parsers/init.py", line 58, in process
return parser.process(filename, encoding, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/textract/parsers/utils.py", line 46, in process
unicode_string = self.decode(byte_string)
File "/usr/local/lib/python2.7/dist-packages/textract/parsers/utils.py", line 65, in decode
return text.decode(result['encoding'])
TypeError: decode() argument 1 must be string, not None
That's odd. Looks like chardet could not determine an encoding for your file 1.pdf. Can you try running chardet 1.pdf to see what the output looks like?
I wonder if this is related to #133 somehow...
This is exactly the problem I was having.
I just pinned chardet to 2.1.1 to address #107. I think this will likely address your issue as well. Try pulling from the latest master on github to see if that fixes it. I'm going to close this, but feel free to reopen if it remains a problem.
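For context, one hypothetical way to pull that unreleased fix (assuming pip with git support):
pip install --upgrade git+https://github.com/deanmalmgren/textract.git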
Hello,
I have this issue. I went back to 2.1.1 and now I get another error: ModuleNotFoundError: No module named 'universaldetector', which happens because chardet 2.1.1 is too old. What should I do?
| gharchive/issue | 2017-03-17T09:31:28 | 2025-04-01T06:38:21.430038 | {
"authors": [
"SwedishBotMafia",
"deanmalmgren",
"frostchick",
"mstanojevic118"
],
"repo": "deanmalmgren/textract",
"url": "https://github.com/deanmalmgren/textract/issues/135",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
254905209 | Cluttered README
I think this project is awesome. It combines so many blacklists to create a comprehensive ad blocking list. Unfortunately, when I was reading the README, I found it to be rather confusing and hard to navigate. Maybe it could be split up into separate files with the stats placed somewhere else?
The script is still in an "active development state"
I have some major additions and changes on the horizon, and for now, the stats help me visualize the information at-a-glance.
I like the stats being there, and that is probably going to stay. However, I'll take any suggestions regarding the rest of the main README.md
| gharchive/issue | 2017-09-03T22:33:39 | 2025-04-01T06:38:21.433217 | {
"authors": [
"InnovativeInventor",
"deathbybandaid"
],
"repo": "deathbybandaid/piholeparser",
"url": "https://github.com/deathbybandaid/piholeparser/issues/68",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
} |
383633503 | fatal error:: configuration file not found at /root/instapy-config.json
After lots of trial and error, I have finally made it to the very last step...but now I can't get past this error message:
user123@computer123:/usr/share/wordlists$ sudo instagram-py -u username123 -pl rockyou.txt.gz
Instagram-Py 2.0.7 , Slick Instagram brute force command line tool.
Copyright (C) 2018 The Future Shell , Antony Jr.
[+] Started @ 2018-11-22 07:45:28.707465
fatal error:: configuration file not found at /root/instapy-config.json
python instagram-py -dc
AnalJesus
Just move the instagram-config.json file to /root:
cp instagram-config.json /root/
| gharchive/issue | 2018-11-22T18:42:04 | 2025-04-01T06:38:21.439038 | {
"authors": [
"Hacker-spe",
"TheMercyless1",
"analjesus"
],
"repo": "deathsec/instagram-py",
"url": "https://github.com/deathsec/instagram-py/issues/16",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
737737421 | DBZ-1720 Move CI to GH Actions
@Naros We need to configure a secret for WEBHOOK_URL for Gitter.
@ani-sha Hi, this looks very nice! I have just a few comments for improvements:
Make sure that the order in the paths field is the same for all workflows: first the changed module and then the dependencies. Ideally the dependencies will be visually separated, for example via a comment. IIUC the deps block should be the same for all workflows.
Would it be possible to use a matrix for jobs that execute more than once per connector?
Would it be possible to add one more workflow that will trigger a new docs build when documentation is changed?
Great work!
Thanks, @jpechane for the suggestions.
In which case would we need to run the connectors more than once? I believe a job is executed once for each entry in the matrix.
I will surely be adding a workflow for docs. I also had a discussion with @Naros; we thought of implementing this after fixing GH Actions for the website.
@ani-sha See for example MongoDB connector - it is keyed by version.mongo.server Maven property
@ani-sha See for example MongoDB connector - it is keyed by version.mongo.server Maven property
Yep, I can create a matrix depending on the versions for mongodb. Anything for postgres?
@ani-sha For postgres there is version.postgres.server. Unfortunately profile names are changed as well so it needs to be somehow woven together.
@jpechane I tried using a matrix for mongodb locally, but it fails the build with this error.
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Debezium Parent POM 1.4.0-SNAPSHOT:
[INFO]
[INFO] Debezium Checkstyle Rules .......................... FAILURE [ 2.882 s]
[INFO] Debezium IDE Formatting Rules ...................... SKIPPED
[INFO] Debezium Revapi Rules .............................. SKIPPED
[INFO] Debezium Parent POM ................................ SKIPPED
[INFO] Debezium API ....................................... SKIPPED
[INFO] Debezium Core ...................................... SKIPPED
[INFO] Debezium Assembly Descriptors ...................... SKIPPED
[INFO] Debezium Embedded .................................. SKIPPED
[INFO] Debezium Connector for MongoDB ..................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 58.448 s
[INFO] Finished at: 2020-11-09T09:46:02Z
[INFO] ------------------------------------------------------------------------
Error: Unknown lifecycle phase "4.2". You must specify a valid lifecycle phase or a goal in the format <plugin-prefix>:<goal> or <plugin-group-id>:<plugin-artifact-id>[:<plugin-version>]:<goal>. Available lifecycle phases are: validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy, pre-clean, clean, post-clean, pre-site, site, post-site, site-deploy. -> [Help 1]
Error:
Error: To see the full stack trace of the errors, re-run Maven with the -e switch.
Error: Re-run Maven using the -X switch to enable full debug logging.
Error:
Error: For more information about the errors and possible solutions, please read the following articles:
Error: [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/LifecyclePhaseNotFoundException
Error: Process completed with exit code 1.
@ani-sha Could you please show the maven command you use?
@ani-sha Could you please show the maven command you use?
mvn clean install -B -pl debezium-connector-mongodb -am -Passembly -Dcheckstyle.skip=true -Dformat.skip=true -Drevapi.skip -Dversion.mongo.server= ${{ matrix.version-mongo-server }} -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn
@ani-sha Try removing the space between = and ${ in -Dversion.mongo.server= ${{ matrix.version-mongo-server }}
@ani-sha Try removing the space between = and ${ in -Dversion.mongo.server= ${{ matrix.version-mongo-server }}
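For reference, the corrected invocation with the space removed would be:
mvn clean install -B -pl debezium-connector-mongodb -am -Passembly -Dcheckstyle.skip=true -Dformat.skip=true -Drevapi.skip -Dversion.mongo.server=${{ matrix.version-mongo-server }} -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn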
@jpechane Overcoming the first error throws a new error.
[INFO] Tests run: 113, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO]
[INFO] --- maven-jar-plugin:3.0.2:jar (default-jar) @ debezium-connector-mongodb ---
[INFO] Building jar: /home/runner/work/debezium/debezium/debezium-connector-mongodb/target/debezium-connector-mongodb-1.4.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-source-plugin:3.1.0:test-jar-no-fork (attach-test-sources) @ debezium-connector-mongodb ---
[INFO] Building jar: /home/runner/work/debezium/debezium/debezium-connector-mongodb/target/debezium-connector-mongodb-1.4.0-SNAPSHOT-test-sources.jar
[INFO]
[INFO] --- maven-jar-plugin:3.0.2:test-jar (test-jar) @ debezium-connector-mongodb ---
[INFO] Building jar: /home/runner/work/debezium/debezium/debezium-connector-mongodb/target/debezium-connector-mongodb-1.4.0-SNAPSHOT-tests.jar
[INFO]
[INFO] --- maven-assembly-plugin:3.1.1:single (default) @ debezium-connector-mongodb ---
[INFO] Building tar: /home/runner/work/debezium/debezium/debezium-connector-mongodb/target/debezium-connector-mongodb-1.4.0-SNAPSHOT-plugin.tar.gz
[INFO] Building zip: /home/runner/work/debezium/debezium/debezium-connector-mongodb/target/debezium-connector-mongodb-1.4.0-SNAPSHOT-plugin.zip
[INFO]
[INFO] --- docker-maven-plugin:0.31.0:build (start) @ debezium-connector-mongodb ---
[INFO]
[INFO] --- docker-maven-plugin:0.31.0:start (start) @ debezium-connector-mongodb ---
[INFO] DOCKER> Pulling from library/mongo
[INFO] DOCKER> Digest: sha256:efc408845bc917d0b7fd97a8590e9c8d3c314f58cee651bd3030c9cf2ce9032d
[INFO] DOCKER> Status: Downloaded newer image for mongo:4
[INFO] DOCKER> Pulled mongo:4 in 8 seconds
Error: DOCKER> Error occurred during container startup, shutting down...
Error: DOCKER> I/O Error [Unable to pull 'debezium/mongo-initiator:4' : {"message":"manifest for debezium/mongo-initiator:4 not found: manifest unknown: manifest unknown"} (Not Found: 404)]
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Debezium Parent POM 1.4.0-SNAPSHOT:
[INFO]
[INFO] Debezium Checkstyle Rules .......................... SUCCESS [01:31 min]
[INFO] Debezium IDE Formatting Rules ...................... SUCCESS [ 0.778 s]
[INFO] Debezium Revapi Rules .............................. SUCCESS [ 0.084 s]
[INFO] Debezium Parent POM ................................ SUCCESS [01:10 min]
[INFO] Debezium API ....................................... SUCCESS [ 31.257 s]
[INFO] Debezium Core ...................................... SUCCESS [01:06 min]
[INFO] Debezium Assembly Descriptors ...................... SUCCESS [ 0.057 s]
[INFO] Debezium Embedded .................................. SUCCESS [ 30.882 s]
[INFO] Debezium Connector for MongoDB ..................... FAILURE [ 37.975 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 06:31 min
[INFO] Finished at: 2020-11-09T10:13:45Z
[INFO] ------------------------------------------------------------------------
Error: Failed to execute goal io.fabric8:docker-maven-plugin:0.31.0:start (start) on project debezium-connector-mongodb: I/O Error: Unable to pull 'debezium/mongo-initiator:4' : {"message":"manifest for debezium/mongo-initiator:4 not found: manifest unknown: manifest unknown"} (Not Found: 404) -> [Help 1]
Error:
Error: To see the full stack trace of the errors, re-run Maven with the -e switch.
Error: Re-run Maven using the -X switch to enable full debug logging.
Error:
Error: For more information about the errors and possible solutions, please read the following articles:
Error: [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Error:
Error: After correcting the problems, you can resume the build with the command
Error: mvn <args> -rf :debezium-connector-mongodb
Error: Process completed with exit code 1.
@ani-sha It seems that 4.0 was converted to 4 only when passed to the mvn command
@ani-sha It seems that 4.0 was converted to 4 only when passed to the mvn command
Yes. 4.0 was provided in the matrix, but it took 4.
@jpechane Fixed the matrix for mongodb.
@ani-sha Nice! Do you think you'll manage to do it for postgres as well? It might be a bit more complicated as two things are updated
@ani-sha Nice! Do you think you'll manage to do it for postgres as well? It might be a bit more complicated as two things are updated
Well the first issue I am facing right now is with the dependencies in the postgres-connector. So I am not able to run or test anything locally for postgres.
@ani-sha Could you please share the full error message in the log? This should work the same way as MongoDB does.
@ani-sha Could you please share the full error message in the log? This should work the same way as MongoDB does.
Error: Errors:
Error: PostgresConnectorIT.shouldResumeStreamingFromSlotPositionForCustomSnapshot:1524->waitForSnapshotToBeCompleted:2378->AbstractConnectorTest.waitForSnapshotToBeCompleted:1035 » ConditionTimeout
[INFO]
Error: Tests run: 219, Failures: 0, Errors: 1, Skipped: 3
[INFO]
[INFO]
[INFO] --- docker-maven-plugin:0.31.0:stop (stop) @ debezium-connector-postgres ---
07:14:53.469 postgresLOG: received smart shutdown request
07:14:53.469 postgresLOG: autovacuum launcher shutting down
07:14:53.469 postgresFATAL: terminating autovacuum process due to administrator command
07:14:53.759 postgresLOG: shutting down
07:14:53.829 postgresLOG: database system is shut down
[INFO] DOCKER> [debezium/postgres-server-test-database:latest]: Stop and removed container 0f9e4630a416 after 0 ms
[INFO]
[INFO] --- maven-source-plugin:3.1.0:jar-no-fork (attach-sources) @ debezium-connector-postgres ---
[INFO] Building jar: /home/runner/work/debezium/debezium/debezium-connector-postgres/target/debezium-connector-postgres-1.4.0-SNAPSHOT-sources.jar
[INFO]
[INFO] --- maven-checkstyle-plugin:3.1.1:checkstyle (check-style) @ debezium-connector-postgres ---
[INFO] Starting audit...
Audit done.
[INFO]
[INFO] --- maven-failsafe-plugin:3.0.0-M3:verify (verify) @ debezium-connector-postgres ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Debezium Parent POM 1.4.0-SNAPSHOT:
[INFO]
[INFO] Debezium Checkstyle Rules .......................... SUCCESS [ 2.313 s]
[INFO] Debezium IDE Formatting Rules ...................... SUCCESS [ 0.282 s]
[INFO] Debezium Revapi Rules .............................. SUCCESS [ 0.069 s]
[INFO] Debezium Parent POM ................................ SUCCESS [ 1.406 s]
[INFO] Debezium API ....................................... SUCCESS [ 5.895 s]
[INFO] Debezium Core ...................................... SUCCESS [01:22 min]
[INFO] Debezium Assembly Descriptors ...................... SUCCESS [ 0.107 s]
[INFO] Debezium Embedded .................................. SUCCESS [ 14.299 s]
[INFO] Debezium Connector for PostgreSQL .................. FAILURE [12:15 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14:02 min
[INFO] Finished at: 2020-11-11T07:14:55Z
[INFO] ------------------------------------------------------------------------
Error: Failed to execute goal org.apache.maven.plugins:maven-failsafe-plugin:3.0.0-M3:verify (verify) on project debezium-connector-postgres: There are test failures.
Error:
Error: Please refer to /home/runner/work/debezium/debezium/debezium-connector-postgres/target/failsafe-reports for the individual test results.
Error: Please refer to dump files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
@ani-sha This looks like an intermittent failure issue so you can ignore it for now. But if it is reliably reproducible on your machine then we are going to borrow it to find the root cause :-)
@ani-sha This looks like an intermittent failure issue so you can ignore it for now. But if it is reliably reproducible on your machine then we are going to borrow it to find the root cause :-)
OK sure; for postgres we would be using two matrices, one being version.postgres.server with [9.6, 10]. What would the other (plugin) matrix contain?
It seems to me that -Dversion.postgres.server=9.6-devel could be removed and the matrix can be made out of different profile settings:
assembly
assembly,wal2json
assembly,postgres-10,pgoutput
The strings above will then be passed as -P{...} to the Maven command, as sketched below.
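A sketch of the three invocations such a matrix would generate; every flag other than the profile list is borrowed from the MongoDB job above for illustration:
mvn clean install -B -pl debezium-connector-postgres -am -Passembly -Dcheckstyle.skip=true -Dformat.skip=true -Drevapi.skip
mvn clean install -B -pl debezium-connector-postgres -am -Passembly,wal2json -Dcheckstyle.skip=true -Dformat.skip=true -Drevapi.skip
mvn clean install -B -pl debezium-connector-postgres -am -Passembly,postgres-10,pgoutput -Dcheckstyle.skip=true -Dformat.skip=true -Drevapi.skip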
@ani-sha I think there is no need to delay this later - looks good and let's give it a try!
@ani-sha I think there is no need to delay this later - looks good and let's give it a try!
Absolutely! 🚀
| gharchive/pull-request | 2020-11-06T12:55:59 | 2025-04-01T06:38:21.460805 | {
"authors": [
"ani-sha",
"jpechane"
],
"repo": "debezium/debezium",
"url": "https://github.com/debezium/debezium/pull/1935",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1854901789 | DBZ-6803 Add REPEAT function for MySQL
https://issues.redhat.com/browse/DBZ-6803
Upstream - https://github.com/antlr/grammars-v4/pull/3667
@ani-sha Applied, thanks
| gharchive/pull-request | 2023-08-17T12:26:41 | 2025-04-01T06:38:21.463419 | {
"authors": [
"ani-sha",
"jpechane"
],
"repo": "debezium/debezium",
"url": "https://github.com/debezium/debezium/pull/4794",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
340908914 | DBZ-720 Updating required permissions for connector user for snapshotting
@jpechane Small update regarding grants for initial snapshotting.
Btw. this might make us re-think whether the flashback query is the best way to do the initial snapshot. We might also get away with reading within a transaction using the right isolation level. I found the "AS OF SCN ..." approach quite attractive, though, so I went for it. We can re-evaluate later on, but I wanted to bring it to your attention.
@gunnarmorling Applied, thanks!
| gharchive/pull-request | 2018-07-13T07:18:48 | 2025-04-01T06:38:21.464914 | {
"authors": [
"gunnarmorling",
"jpechane"
],
"repo": "debezium/oracle-vagrant-box",
"url": "https://github.com/debezium/oracle-vagrant-box/pull/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2372927258 | Fetch NFT prices and use them to calculate treasury total
https://docs.moralis.io/web3-data-api/evm/reference/get-nft-sale-prices
@tomstuart123 from a product perspective, do users want to see NFTs in their Safes count toward their total balance?
Just logging here
Do we still want to do this?
| gharchive/issue | 2024-06-25T15:00:55 | 2025-04-01T06:38:21.533738 | {
"authors": [
"Da-Colon",
"adamgall",
"mudrila",
"tomstuart123"
],
"repo": "decentdao/decent-interface",
"url": "https://github.com/decentdao/decent-interface/issues/2051",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2036254447 | Feat/billboard
Introduced Billboard feature as Explorer/Assets/DCL/Billboard
Locate at
Nullables enabled
Covered with tests
For demo created scenes
Assets/DCL/Billboard/Demo/BillboardDemoTest.unity - shows multiple
Assets/DCL/Billboard/Demo/BillboardPlayground.unity - makes possible to tweak options in runtime and to see a difference
IDemoWorld for easy reuse
Looks very cool, man 💪 I have just 3 suggestions:
It would be nice to have all SDK-related components under one root folder. Maybe Explorer/Assets/DCL/Scenes/Billboard or Explorer/Assets/DCL/SDKScenes/Billboard
Since you already have a nice integration test environment, can you then assemble a Performance Test (for 200 and 500 entities)? We already have Unity Performance Testing imported in the project.
Please, rename the header of the PR
| gharchive/pull-request | 2023-12-11T18:12:38 | 2025-04-01T06:38:21.551590 | {
"authors": [
"NickKhalow",
"popuz"
],
"repo": "decentraland/unity-explorer",
"url": "https://github.com/decentraland/unity-explorer/pull/192",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1140563433 | Messages and generators notation update for readability
First attempt to close issue #54 (see the issue for other options).
This PR proposes to pass the generators directly to functions instead of their indexes, with the goal of increasing spec readability.
IMO this simplifies notation. For example, using indexes in SpkGen would require notation for 5 different lists of indexes, which can get confusing. Passing the generators as a list requires notation for only 2 lists of indexes (and one less input argument in SpkGen). Also:
Passing the messages as a map between the message and the index of the generator will still require a lot of notation for indexes.
Mentioning that those generators are not necessarily the L first elements from the global (or not) generators list also preserves the flexibility required by the blind signatures.
Mentioning that implementations may choose to pass the indexes of the generators instead and pointing to a reference implementation or perhaps a more detailed explanation in the Appendix IMO will be enough to address the efficiency of the applications concerns, while keeping the spec more readable.
Also, changes in this PR use the terminology from PR #62 in some places, but I will update it elsewhere after that PR is merged.
Discussed on WG call 21st of Feb, awaiting review from other WG members
@BasileiosKal can you please update this PR to resolve the conflicts?
Multiple approvals, PR open for 2 weeks and discussed on WG call, massive improvement in notation across the spec, merging
| gharchive/pull-request | 2022-02-16T20:41:58 | 2025-04-01T06:38:21.554893 | {
"authors": [
"BasileiosKal",
"tplooker"
],
"repo": "decentralized-identity/bbs-signature",
"url": "https://github.com/decentralized-identity/bbs-signature/pull/64",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
642709116 | Feature extraction
Hi, thanks for the great codebase.
Could you kindly provide the code to extract features from custom videos using pre-trained models?
@Finspire13 Sorry for the late reply. You can easily modify the config files (e.g. remove the cls head) and test_video.py or test_recognizer.py to extract features.
| gharchive/issue | 2020-06-22T02:50:22 | 2025-04-01T06:38:21.585148 | {
"authors": [
"Finspire13",
"limbo0000"
],
"repo": "decisionforce/TPN",
"url": "https://github.com/decisionforce/TPN/issues/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
777788040 | Thumbnails not loading
I encountered three different problems:
All thumbnails in a folder are visible
No thumbnail in a folder is visible
In the same folder, some photos have thumbnails while others do not.
In the case of a failing thumbnail, a dark grey square is shown instead.
The failing thumbnails are for common formats like JPG and PNG, or for something else?
Do they fail for files that no longer exist but are still registered in the media store?
On Mon, Jan 4, 2021 at 12:24 PM Donkey-Doug notifications@github.com wrote:
I encountered three different problems:
All thumbnails in a folder are visible
No thumbnail in a folder is visible
In the same folder, some photos have thumbnails while others do not.
In case for the failing thumbnail, a dark grey square is shown instead.
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub
https://github.com/deckerst/aves/issues/30, or unsubscribe
https://github.com/notifications/unsubscribe-auth/ADKBEXNTBWXLWFMIU35LAMDSYEYGRANCNFSM4VSNNUMQ
.
The failing thumbnails are for common formats like JPG and PNG, or for
something else?
Do they fail for files that no longer exist but are still registered in the
media store?
On Mon, Jan 4, 2021 at 12:24 PM Donkey-Doug notifications@github.com
wrote:
I encountered three different problems:
All thumbnails in a folder are visible
No thumbnail in a folder is visible
In the same folder, some photos have thumbnails while others do not.
In case for the failing thumbnail, a dark grey square is shown instead.
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub
https://github.com/deckerst/aves/issues/30, or unsubscribe
https://github.com/notifications/unsubscribe-auth/ADKBEXNTBWXLWFMIU35LAMDSYEYGRANCNFSM4VSNNUMQ
.
I tried to find a pattern in the failing thumbnails before reporting the issue, but failed to find any.
Some of the thumbnails that were not visible earlier started to show. Apparently it takes very long for some thumbnails to become visible.
Thanks for the update. Indeed some apps delete files without properly removing them from the Media Store.
Ideally, these broken files should be handled more gracefully by Aves. When the app detects such a file, it could even suggest to fix the situation by removing them from the Media Store.
| gharchive/issue | 2021-01-04T03:24:11 | 2025-04-01T06:38:21.600870 | {
"authors": [
"Donkey-Doug",
"deckerst"
],
"repo": "deckerst/aves",
"url": "https://github.com/deckerst/aves/issues/30",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1243867370 | Update Japanese translations
Fixed Japanese mistranslation and HTML tag mistakes.
Nice catch, thanks.
| gharchive/pull-request | 2022-05-21T04:39:36 | 2025-04-01T06:38:21.601856 | {
"authors": [
"HiSubway",
"deckerst"
],
"repo": "deckerst/aves",
"url": "https://github.com/deckerst/aves/pull/256",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1203342787 | Use CompilerConfig model in Scratch model
A little refactor.
Make sure to not break the existing create scratch endpoint's interface (#148 would help here!)
@ethteck did we get anywhere with this
I've been working on it
This isn't needed anymore since we store Presets in the backend, which are essentially CompilerConfig
| gharchive/issue | 2022-04-13T13:58:32 | 2025-04-01T06:38:21.634969 | {
"authors": [
"bates64",
"ethteck",
"nanaian"
],
"repo": "decompme/decomp.me",
"url": "https://github.com/decompme/decomp.me/issues/439",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
163422760 | Hook up fee estimation to transaction creation.
This change requires a newer version of the wallet's RPC API.
Fixes #10.
Fixes #11.
Two issues: you see Estimated Remaining Balance at 1.0 when it should be 0, and the crash of course.
If priority checks were run at all that means the fee was not high enough. I'll look through it for bugs.
Rebased over master and fixed a magnitude error with the default fee. Shouldn't see any more tx rejected errors for low priority.
Coins were unconfirmed, and therefore not usable to fund a transaction. The same transaction went through after change received the required number of block confirmations. We should display the spendable balance next to the selected account, instead of the user relying on the total balance of all accounts in the corner.
So I am OK with this going in. It isn't perfect but certainly works much better.
Just fixing up an issue in the transaction authoring code where it could create dust change outputs. Will merge after that is fixed.
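For readers unfamiliar with the term, a toy sketch of the kind of dust check involved (illustrative only, not the wallet's actual policy code):

package main

import "fmt"

// isDust reports whether a change output is worth less than roughly what it
// would cost to spend later (a common "value < 3x spend fee" style rule).
func isDust(changeAtoms, estSpendFeeAtoms int64) bool {
	return changeAtoms < 3*estSpendFeeAtoms
}

func main() {
	change, spendFee := int64(500), int64(200)
	if isDust(change, spendFee) {
		// fold the would-be change into the fee instead of creating the output
		fmt.Println("omit change output; add", change, "atoms to the fee")
	}
}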
| gharchive/pull-request | 2016-07-01T15:44:56 | 2025-04-01T06:38:21.640596 | {
"authors": [
"jrick",
"marcopeereboom"
],
"repo": "decred/Paymetheus",
"url": "https://github.com/decred/Paymetheus/pull/57",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
796045473 | multi: stratum improvements.
This extends client read/write timeouts, improves mining.subscribe response handling, and makes other stratum-related improvements.
Deployed to https://pool.jholdstock.uk/
Successfully mining blocks, but still seeing a lot of errors in the pool log:
2021-02-10 10:12:34.351 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:25:12.450 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:26:33.480 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:31:27.851 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:31:34.448 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:31:41.004 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:31:47.273 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:32:11.731 [INF] POOL: Mined work 0000000ce28b245f67ae1103ec8619fce82b41a623d697e4e2689924557e29cb confirmed by connected block #618036
| gharchive/pull-request | 2021-01-28T14:18:22 | 2025-04-01T06:38:21.644954 | {
"authors": [
"dnldd",
"jholdstock"
],
"repo": "decred/dcrpool",
"url": "https://github.com/decred/dcrpool/pull/304",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
583136740 | QSC initial implementation
This includes an initial implementation of the QSC algorithm. It does not support Byzantine behavior.
@nkcr I had to change the address iterator thing as we need the length of the set in the Call functions. Let me know what you think about the Take function.
@nkcr comments fixed.
| gharchive/pull-request | 2020-03-17T16:24:31 | 2025-04-01T06:38:21.651162 | {
"authors": [
"Gilthoniel"
],
"repo": "dedis/fabric",
"url": "https://github.com/dedis/fabric/pull/14",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2251885817 | rename GetVTapGroupconfiguration() to GetAgentGroupConfig()
I didn't change this function name; it involves EE-related issues, so let's change it together later.
Originally posted by @sharang in https://github.com/deepflowio/deepflow/pull/6155#discussion_r1570328745
https://github.com/deepflowio/deepflow/pull/6523
| gharchive/issue | 2024-04-19T01:25:14 | 2025-04-01T06:38:21.654319 | {
"authors": [
"SongZhen0704",
"sharang"
],
"repo": "deepflowio/deepflow",
"url": "https://github.com/deepflowio/deepflow/issues/6161",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2379029645 | USER GUIDE + REFERENCE: Groovy LivenessScopes
Groovy equivalent of https://deephaven.io/core/docs/conceptual/liveness-scope-concept/#how-to-create-a-liveness-scope and the associated reference docs.
The user guide is complete; the reference is not.
| gharchive/issue | 2024-06-27T20:21:54 | 2025-04-01T06:38:21.660521 | {
"authors": [
"elijahpetty"
],
"repo": "deephaven/deephaven-docs-community",
"url": "https://github.com/deephaven/deephaven-docs-community/issues/251",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
235776133 | AbstractModule Fixes
Sonnet changes required that the parameters to the constructor be named parameters.
This pushes those changes in. Also added a Python gitignore file :)
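For context, the change means constructing the module with keyword arguments, roughly like this (the config keys follow the repo's train script; treat the values as illustrative):

import dnc

access_config = {"memory_size": 16, "word_size": 16, "num_reads": 1, "num_writes": 1}
controller_config = {"hidden_size": 64}
output_size = 10

# named arguments, as required by newer Sonnet versions
dnc_core = dnc.DNC(
    access_config=access_config,
    controller_config=controller_config,
    output_size=output_size,
)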
I signed it!
Your changes worked for me, thanks @jramapuram!!
Thanks!
| gharchive/pull-request | 2017-06-14T06:35:46 | 2025-04-01T06:38:21.670684 | {
"authors": [
"dm-jrae",
"jnwei",
"jramapuram"
],
"repo": "deepmind/dnc",
"url": "https://github.com/deepmind/dnc/pull/14",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
484450547 | Add standalone binary to released files
After https://github.com/deepmind/kapitan/pull/323 we have a process to generate a standalone binary of kapitan, and it's built in Travis too.
The produced binary lives in dist/
It would be great to also include it in the files released by Travis on each release.
E.g., in .travis.yml we already do that with CHANGELOG.md:
deploy:
  - provider: releases
    api_key:
      secure: blabla=
    file: CHANGELOG.md
    prerelease: $PRERELEASE
    on:
      tags: true
      repo: deepmind/kapitan
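A sketch of how the deploy section could be extended to include the binary (the path/name under dist/ is an assumption):

deploy:
  - provider: releases
    api_key:
      secure: blabla=
    file:
      - CHANGELOG.md
      - dist/kapitan  # assumed name of the standalone binary
    prerelease: $PRERELEASE
    on:
      tags: true
      repo: deepmind/kapitan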
@uberspot
I may be misunderstanding, but is this issue already addressed in #349?
Yes it is. :) I just made an issue to track that work. I think the binary is now on the releases page on GitHub and every build works fine on Travis, so I'd consider this issue resolved for now. :)
| gharchive/issue | 2019-08-23T10:13:47 | 2025-04-01T06:38:21.673322 | {
"authors": [
"uberspot",
"yoshi-1224"
],
"repo": "deepmind/kapitan",
"url": "https://github.com/deepmind/kapitan/issues/345",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1730306758 | data.body("...").xpos does not match data.qpos
Hi,
I'm a maintainer of Gymnasium-Robotics and I'm trying to use MuJoCo to develop the v5 revision of the Gymnasium/MuJoCo RL environments https://github.com/Farama-Foundation/Gymnasium-Robotics/pull/104.
I'm looking for some help with understanding why the following unit test fails.
data.qpos[0] == data.body("torso").xpos[0] is true
but after stepping the MuJoCo model
data.qpos[0] == data.body("torso").xpos[0] is False
Here is a model which explains my question:
The Ant.xml in Gymnasium/MuJoCo/Ant environment
https://github.com/Farama-Foundation/Gymnasium/blob/main/gymnasium/envs/mujoco/assets/ant.xml
Here is a unit test illustrating my question:
def test_ant_com():
    env = gym.make('Ant-v4')  # `env` contains `data : MjData` and `model : MjModel`
    env.reset()  # randomly initializes the `data.qpos` and `data.qvel`
    x_position_before = env.unwrapped.data.qpos[0]
    x_position_before_com = env.unwrapped.data.body("torso").xpos[0]
    assert(x_position_before == x_position_before_com)  # This succeeds

    random_control = env.action_space.sample()
    _, _, _, _, info = env.step(random_control)  # This calls mujoco.mj_step(env.model, env.data, nstep=env.frame_skip)

    x_position_after = env.unwrapped.data.qpos[0]
    x_position_after_com = env.unwrapped.data.body("torso").xpos[0]
    assert(x_position_after == x_position_after_com)  # This fails
Note: this is also the case for other body.xpos & body.xquat values.
Is this normal/expected?
$ pip list | grep mujoco
mujoco 2.3.3
Hi!
Some clarifications are in order.
A clarification for future readers: This model's DoFs start with a body-centered free joint, therefore the first 3 elements of qpos have the same semantics as the first 3 elements of xpos. They are generally not the same thing.
A clarification for us: is this an old test that is newly breaking or a new test?
Regarding the test: It should fail. If it ever passed that is surprising.
Explanation: The purpose of mj_step is to advance the state (qpos and qvel, in this case). It does this and nothing more. xpos is a derived quantity that is computed from qpos during mj_step (step 2 in the first link above), but at the end of mj_step, qpos gets updated. The only reason it passes after your Reset call is that (presumably) mj_forward or something similar was called at the end of the Reset. So your options are:
Compare the current qpos to the xpos measured after the previous step.
Call mj_kinematics (or the full mj_forward) after the step, and then compare.
Does this make sense?
this is a new test
reset() does call mj_forward
calling mj_forward after step() does indeed resolve the issue
def test_ant_com():
    env = gym.make('Ant-v5', frame_skip=5)  # `env` contains `data : MjData` and `model : MjModel`
    env.reset()  # randomly initializes `data.qpos` and `data.qvel`, calls mujoco.mj_forward(env.model, env.data)
    x_position_before = env.unwrapped.data.qpos[0]
    x_position_before_com = env.unwrapped.data.body("torso").xpos[0]
    assert(x_position_before == x_position_before_com), "before failed"  # This succeeds

    random_control = env.action_space.sample()
    _, _, _, _, info = env.step(random_control)  # This calls mujoco.mj_step(env.model, env.data, nstep=env.frame_skip)
    mujoco.mj_forward(env.unwrapped.model, env.unwrapped.data)  # <-- This is new

    x_position_after = env.unwrapped.data.qpos[0]
    x_position_after_com = env.unwrapped.data.body("torso").xpos[0]
    assert(x_position_after == x_position_after_com), "after failed"  # This succeeds now
Can you explain the difference between xpos and qpos?
My current understanding:
qpos is part of the state (https://mujoco.readthedocs.io/en/latest/computation.html#physics-state)
and xpos is, from what I can tell, the kinematic approximation of the body frames' positions
Thanks!
FYI mj_kinematics is enough; other than performance, there is no harm in mj_forward.
Yes qpos is the joint configuration. xpos is the global Cartesian position of the body frames.
One last thing (and the reason I created the unit test, in the first place)
If you wanted to calculate the displacement of the torso body after mj_step would you do
Option A:
# note `env` holds `data` and `model`
x_position_before = env.data.body("torso").xpos[0]
mujoco.mj_step(env.model, env.data, nstep=env.frame_skip)
# Note: we do not call `mj_kinematics`
x_position_after = env.data.body("torso").xpos[0]
dx = x_position_after - x_position_before # displacement
Option B:
# note `env` holds `data` and `model`
x_position_before = env.data.qpos[0]
mujoco.mj_step(env.model, env.data, nstep=env.frame_skip)
x_position_after = env.data.qpos[0]
dx = x_position_after - x_position_before # displacement
We currently use option A for Ant-v2, Ant-v3, Ant-v4.
Could you confirm that option B is a more accurate way of getting dx (displacement)? (I am considering updating Ant-v5 to use option B.)
Thanks!
Both options are equally accurate, but the second one is more up to date (by 1 timestep).
You might legitimately now ask "why would I want a delayed measurement if I can get one that is more up to date?" There are sometimes good reasons for this. For example imagine that you want to compute some value that is a function of your dx and some contact force. Forces are only determined during the step and could not be computed now since they depend on the controls. I.e. contact forces are inherently linked not to a state but to a state transition. So while it is possible to get some values (i.e. functions only of position and velocity) w.r.t. the current timestep, if you want all your measurements (including force/acc related quantities) to be correctly "synced", you have to pay the price of a delay of 1 timestep. In your case you may not care, but in general this can be important.
Hope this makes sense.
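A minimal sketch of that trade-off in the Python bindings (assuming a local ant.xml with the free-jointed torso from this thread):

import mujoco

model = mujoco.MjModel.from_xml_path("ant.xml")  # assumed local model file
data = mujoco.MjData(model)

x_before = data.qpos[0]
mujoco.mj_step(model, data)   # advances qpos/qvel; forces in `data` belong to this transition
dx = data.qpos[0] - x_before  # displacement over the step, synced with those forces

# if you also want xpos up to date with the new qpos, recompute kinematics:
mujoco.mj_kinematics(model, data)
print(data.qpos[0], data.body("torso").xpos[0])  # equal again for this model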
| gharchive/issue | 2023-05-29T08:12:48 | 2025-04-01T06:38:21.688480 | {
"authors": [
"Kallinteris-Andreas",
"yuvaltassa"
],
"repo": "deepmind/mujoco",
"url": "https://github.com/deepmind/mujoco/issues/889",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
447949533 | Failed to find .build.info file at path: /home/x/StarCraftII/.build.info
Tried to install pysc2 on Ubuntu 18.04 with the StarCraft II Linux package 4.7.1 and the map package Ladder 2019 Season 1. When running the sample code zerg_agent.py, it gives the error below. I am a newbie. Could you give any suggestions? Thanks.
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
I0524 10:06:19.386617 140062606419776 sc_process.py:110] Launching SC2: /home/d/StarCraftII/Versions/Base70154/SC2_x64 -listen 127.0.0.1 -port 24676 -dataDir /home/d/StarCraftII/ -tempDir /tmp/sc-9q1vetas/ -displayMode 0 -windowwidth 640 -windowheight 480 -windowx 50 -windowy 50
I0524 10:06:19.390377 140062606419776 remote_controller.py:163] Connecting to: ws://127.0.0.1:24676/sc2api, attempt: 0, running: True
Version: B70326 (SC2.2018Season4)
Build: Nov 27 2018 03:26:30
Command Line: '"/home/d/StarCraftII/Versions/Base70154/SC2_x64" -listen 127.0.0.1 -port 24676 -dataDir /home/d/StarCraftII/ -tempDir /tmp/sc-9q1vetas/ -displayMode 0 -windowwidth 640 -windowheight 480 -windowx 50 -windowy 50'
Starting up...
Startup Phase 1 complete
Fatal Error:
Failed to find .build.info file at path: /home/d/StarCraftII/.build.info
Terminating...
W0524 10:06:20.393357 140062606419776 remote_controller.py:160] SC2 isn't running, so bailing early on the websocket connection.
I0524 10:06:20.393687 140062606419776 sc_process.py:201] Shutdown gracefully.
I0524 10:06:20.393838 140062606419776 sc_process.py:182] Shutdown with return code: -15
Traceback (most recent call last):
File "zerg_agent.py", line 45, in
app.run(main)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 300, in run
_run_main(main, args)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "zerg_agent.py", line 27, in main
visualize=True) as env:
File "/usr/local/lib/python3.6/dist-packages/pysc2/env/sc2_env.py", line 276, in init
self._launch_sp(map_inst, interfaces[0])
File "/usr/local/lib/python3.6/dist-packages/pysc2/env/sc2_env.py", line 351, in _launch_sp
want_rgb=interface.HasField("render"))]
File "/usr/local/lib/python3.6/dist-packages/pysc2/run_configs/platforms.py", line 208, in start
want_rgb=want_rgb, extra_args=extra_args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pysc2/run_configs/platforms.py", line 97, in start
self, exec_path=exec_path, version=version, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pysc2/lib/sc_process.py", line 116, in init
self._host, self._port, self, timeout_seconds=timeout_seconds)
File "/usr/local/lib/python3.6/dist-packages/pysc2/lib/remote_controller.py", line 143, in init
sock = self._connect(host, port, proc, timeout_seconds)
File "/usr/local/lib/python3.6/dist-packages/pysc2/lib/stopwatch.py", line 201, in _stopwatch
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pysc2/lib/remote_controller.py", line 174, in _connect
raise ConnectError("Failed to connect to the SC2 websocket. Is it up?")
pysc2.lib.remote_controller.ConnectError: Failed to connect to the SC2 websocket. Is it up?
I0524 10:06:20.450495 140062606419776 sc2_env.py:656] Environment Close
Did you download SC2 4.7.1 from https://github.com/Blizzard/s2client-proto#downloads ?
.build.info is the last unzipped file; maybe the file that you got was broken.
HTTP request sent, awaiting response... 200 OK
Length: 3321311758 (3.1G) [application/zip]
Saving to: ‘SC2.4.7.1.zip’
Really thankful, ethan052.
The .build.info file was missing when unzipping.
Thanks again; closing the issue.
THANKS
| gharchive/issue | 2019-05-24T02:21:05 | 2025-04-01T06:38:21.702234 | {
"authors": [
"ethan052",
"lightning20",
"mschen97"
],
"repo": "deepmind/pysc2",
"url": "https://github.com/deepmind/pysc2/issues/272",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2034306434 | Potential bug: EOS token mismatch
While executing the following code:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = "6,7"

from vllm import LLM, SamplingParams
from datasets import load_dataset
from transformers import AutoTokenizer

N_SHOTS = 0

def build_dataset_deepseek_coder_33b():
    dataset = load_dataset("mbpp", split="test")
    tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-33b-instruct")
    prompt_template = "You are an expert Python programmer, and here is your task: {task}. Your code should pass these tests: \n\n{tests}\n"
    examples = [{"role": "system", "content": "You are a helpful AI programming assistant."}]
    if N_SHOTS > 0:
        example_dataset = load_dataset("mbpp", split="prompt")
        for x in example_dataset[:N_SHOTS]:
            examples += [
                {"role": "user", "content": prompt_template.format(task=x["text"], tests="\n".join(x["test_list"]))},
                {"role": "assistant", "content": "```python\n" + x["code"] + "\n```"}
            ]

    def map_func(example):
        task_id = f'MBPP/{example["task_id"]}'
        content = prompt_template.format(task=example["text"], tests="\n".join(example["test_list"]))
        if N_SHOTS == 0:
            content += "\nCode should be written in a markdown codeblock and NO explanation is required. Talk is easy, show me the code!"
        message = [{
            "role": "user", "content": content
        }]
        prompt = tokenizer.apply_chat_template(examples + message, tokenize=False)
        return {
            "task_id": task_id,
            "prompt": prompt
        }

    dataset = dataset.map(map_func, remove_columns=dataset.column_names)
    return dataset

dataset = build_dataset_deepseek_coder_33b()

llm = LLM(model="deepseek-ai/deepseek-coder-33b-instruct",
          tensor_parallel_size=2,
          gpu_memory_utilization=0.8,
          max_model_len=8192)
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=1024)

prompts = dataset['prompt'][:10]
outputs = llm.generate(prompts, sampling_params)
print(outputs[0].outputs[0].text)
I got the response with trailing <EOT> tokens.
def remove_Occ(s, c):
    first_index = s.find(c)
    last_index = s.rfind(c)
    if first_index != -1 and last_index != -1:
        if first_index == last_index:
            return s[:first_index] + s[first_index + 1:]
        else:
            return s[:first_index] + s[first_index + 1:last_index] + s[last_index + 1:]
    return s
assert remove_Occ("hello","l") == "heo"
assert remove_Occ("abcda","a") == "bcd"
assert remove_Occ("PHP","P") == "H"
<|EOT|>
<|EOT|>
<|EOT|>
<|EOT|>
<|EOT|>
<|EOT|>
<|EOT|>
<|EOT|>
<|EOT|>
...
<|EOT|>
<|EOT|>
<|EOT|>
I found that the model generates the <EOT> token as end of sequence, but the EOS token is set to <|end▁of▁sentence|>. This behavior may be due to the latest update on Hugging Face, which changed the EOS token.
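Until the tokenizer config is fixed upstream, one possible workaround is to pass the stray token as an explicit stop string to vLLM (a sketch; the token string is taken from the output above):

from vllm import SamplingParams

# stop generation at the stray token until the EOS config is corrected
sampling_params = SamplingParams(
    temperature=0.8,
    top_p=0.95,
    max_tokens=1024,
    stop=["<|EOT|>"],
)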
Thanks! We have fixed it.
How can this problem be solved?
I have changed the tokenizer_config.json.
| gharchive/issue | 2023-12-10T10:18:03 | 2025-04-01T06:38:21.742037 | {
"authors": [
"pkuzqh",
"rookielxy",
"txy6666yr"
],
"repo": "deepseek-ai/DeepSeek-Coder",
"url": "https://github.com/deepseek-ai/DeepSeek-Coder/issues/74",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
832294261 | Commenting the aggregates
testing
Hey @Jeffkw213 thanks for starting to contribute to this repo.
But may I ask: what are you testing here? : )
Please give more details on what you are trying to achieve in this PR.
Oh. I'm just learning new things about GitHub and how to contribute to public projects.
And it's for school.
| gharchive/pull-request | 2021-03-16T00:06:05 | 2025-04-01T06:38:21.744030 | {
"authors": [
"Jeffkw213",
"Timoeller"
],
"repo": "deepset-ai/COVID-QA",
"url": "https://github.com/deepset-ai/COVID-QA/pull/117",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
47694119 | Adding a seed to faker to provide for repeatable data sequences
Introduction
Faker has proved quite useful to my Python testing library, but I find that I need to be able to replicate the test sequences. Fortunately it is based on the Python random library, and all random requires to reproduce its random sequences exactly is to use the same "seed" value each time.
faker/init.py
I added a new method to the class, Faker.reset(self, seed), which reseeds the random number generator if the seed value is an integer (for anything else including None, it resets the generator with a random value based on time of day). I also modified Faker.__init__(self, seed) to take an optional seed and call the .reset() method with it (or with None).
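A minimal sketch of the described reset logic (illustrative, not the PR's exact code):

import random

class Faker:
    def __init__(self, seed=None):
        self.reset(seed)

    def reset(self, seed=None):
        # integers give repeatable sequences; anything else (including None)
        # reseeds from the current time / OS entropy
        if isinstance(seed, int):
            random.seed(seed)
        else:
            random.seed()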
tests/test_api.py
There is a new test, test_seed(), which tests resetting the seed.
Comments
A call to random.seed() with an integer argument is sufficient to reset the random number generator. If you call the seed again with the same exact value, all subsequent random calls will be reproduced identically. EVERY single call must be exact, and in the same order and with the same arguments. Since faker calls the random number generator on most method calls, the sequence of faker calls must also be the same. Changing even one of them will create a new order.
For instance:
>>> import faker # with new changes
>>> f = faker.Faker(1234)
>>> f.name()
u'Vita Kertzmann'
>>> f.city()
u'Feiltown'
>>> f.state()
u'LA'
>>> f.seed(1234)
>>> f.name()
u'Vita Kertzmann'
>>> f.state()
u'AL'
>>> f.city()
u'New Art'
Enjoy!
Curious if this is planned to be pulled into master. I've been using this patch and it's nifty to have controllable fake data with this.
Glad to see someone is using it :-) I was surprised that it never got into master since it's pretty simple and I think pretty useful as well. But c'est la vie, it's here for anyone who needs it. Happy Holidaze!
| gharchive/pull-request | 2014-11-04T09:40:34 | 2025-04-01T06:38:21.765383 | {
"authors": [
"bleurose",
"jalcine"
],
"repo": "deepthawtz/faker",
"url": "https://github.com/deepthawtz/faker/pull/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
524083172 | [Bug] cannot load ffprobe on OSX
Description
Spleeter cannot load ffprobe on OSX. ffmpeg was installed using Homebrew, on Python 3.7.5.
ffmpeg and ffprobe both launch the program when entered in the terminal, so they're definitely linked correctly...
Step to reproduce
$ brew install ffmpeg
$ pip3 install spleeter
$ spleeter separate -i 'its_not_fair.mp3' spleeter:2stems -o splits
Installed using pip3
Run as user
Got WARNING:spleeter:ffprobe error (see stderr output for detail) error
Output
$ spleeter separate -i 'its_not_fair.mp3' spleeter:2stems -o splits
INFO:spleeter:Loading audio b'spleeter:2stems' from 0.0 to 600.0
INFO:spleeter:Loading audio b'its_not_fair.mp3' from 0.0 to 600.0
WARNING:spleeter:ffprobe error (see stderr output for detail)
INFO:spleeter:Audio data loaded successfully
Environment
OS
MacOS
Installation type
pip
So I'm an idiot and the problem was actually that I forgot the '-p' flag, so the pretrained models never downloaded.
spleeter separate -i its_not_fair.mp3 -p spleeter:2stems -o itsnotfair works fine.
| gharchive/issue | 2019-11-18T01:15:56 | 2025-04-01T06:38:21.769624 | {
"authors": [
"inci90"
],
"repo": "deezer/spleeter",
"url": "https://github.com/deezer/spleeter/issues/111",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1928605093 | Create a library of helper functions that make working with Pepr easier.
feature request
We keep re-building helper functions in each module, and I think we can generate a set of well-tested ones that are easy for people to use.
Critical needs across capabilities:
create a secret class that hides the base64 implementation and properly manages whether you get a buffer or a string (a rough sketch appears below the code block).
create a class to checksum a deployment/statefulset/daemonset to restart the pods per the app's configuration.
Relates to: #245 #279
Could be worth adding it to the PeprValidateRequest class as something like request.getContainers() or at least as a helper function you can import. Thoughts?
// Returns all containers in the pod
export function containers(request: PeprValidateRequest<a.Pod>) {
  return [
    ...(request.Raw.spec?.containers || []),
    ...(request.Raw.spec?.initContainers || []),
    ...(request.Raw.spec?.ephemeralContainers || []),
  ];
}
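And for the first wishlist item above, a rough TypeScript sketch of a secret helper that hides the base64 handling (the shape is an assumption, not Pepr's API):

// illustrative sketch of a Secret data wrapper; not Pepr's actual API
class SecretData {
  constructor(private data: Record<string, string> = {}) {}

  // accepts either a raw string or a Buffer and stores it base64-encoded
  set(key: string, value: string | Buffer): void {
    const raw = Buffer.isBuffer(value) ? value.toString("utf8") : value;
    this.data[key] = Buffer.from(raw, "utf8").toString("base64");
  }

  // always returns a decoded string, hiding the base64 round-trip
  get(key: string): string | undefined {
    const encoded = this.data[key];
    return encoded === undefined
      ? undefined
      : Buffer.from(encoded, "base64").toString("utf8");
  }
}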
| gharchive/issue | 2023-10-05T15:52:46 | 2025-04-01T06:38:21.772267 | {
"authors": [
"bdw617",
"cmwylie19"
],
"repo": "defenseunicorns/pepr",
"url": "https://github.com/defenseunicorns/pepr/issues/299",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1567249674 | Add wait to injection method so if the cluster is slow it can catch up
Description
As the title says. Tbh I'd prefer to actually parse the error or something like that, but I don't know how to find out what the error type of the serviceAccount error is.
Related Issue
Fixes #1327
Type of change
[X] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Other (security config, docs update, etc)
Checklist before merging
[X] Test, docs, adr added or updated as needed
[X] Contributor Guide Steps followed
Actually maybe it's better to check for serviceAccount existence after namespace creation...
Ended up implementing this instead!
| gharchive/pull-request | 2023-02-02T03:17:28 | 2025-04-01T06:38:21.775694 | {
"authors": [
"corang"
],
"repo": "defenseunicorns/zarf",
"url": "https://github.com/defenseunicorns/zarf/pull/1328",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
635120706 | Additional email address for OKC
craig.freeman@okc.gov must be added to Oklahoma City as he directly oversees the city budget
on it now
Upon further review, it looks like we are already sending Craig Freeman an email through the City Manager email address citymanager@okc.gov . https://www.okc.gov/government/city-manager/about-the-city-manager @mahrer I think we can close this issue
thanks for checking @ctneal91, gonna close this
I hear what you're saying, but I've received direct correspondence from craig.freeman@okc.gov within the last week and I fear that the other email is an older one from a former city manager.
| gharchive/issue | 2020-06-09T05:23:24 | 2025-04-01T06:38:21.817112 | {
"authors": [
"caduckett",
"ctneal91",
"todd-m"
],
"repo": "defund12/defund12.org",
"url": "https://github.com/defund12/defund12.org/issues/1054",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
635071975 | Add workflow for running tests
This PR takes the work in #1019 and moves running the tests into our Actions workflow 😄 I'm still new-ish to creating Actions so lmk if anything needs to be changed!
This should be merged into #1019 and if approved I believe we can go ahead and delete the dockerfile from test/markdown/
@avimoondra could you give this a look?
I personally feel a bit more inclined to use Docker here, as @avimoondra has already set it up. Docker is easier to maintain long-term and is a bit more extensible. And ideally the CI pipeline will use the same workflow as local development. Setting up pip correctly can be a huge barrier to entry for new engineers, and I feel like Docker really simplifies the process. WDYT?
I'm fine with going the other route! Closing :)
| gharchive/pull-request | 2020-06-09T02:58:26 | 2025-04-01T06:38:21.819295 | {
"authors": [
"bingbongle",
"emplums"
],
"repo": "defund12/defund12.org",
"url": "https://github.com/defund12/defund12.org/pull/1034",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
96886871 | fix(deisctl): exit when stop/start fails
This fixes issue #3880, where a failed start or stop command will hang
deisctl. The fix simply prints out an error message and exits from the
resolution loop.
LGTM, though some unit tests would be nice
Now with three times the line count! And tests! And I fixed the typo @mboersma noted.
:heart: tests :heart:
code LGTM
Code LGTM.
| gharchive/pull-request | 2015-07-23T19:19:06 | 2025-04-01T06:38:21.830072 | {
"authors": [
"bacongobbler",
"mboersma",
"technosophos"
],
"repo": "deis/deis",
"url": "https://github.com/deis/deis/pull/4095",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
After running composer require dekuan/dedid, Class 'dekuan\dedid\CDId' not found
Class 'dekuan\dedid\CDId' not found
Note: other packages I installed do not have this problem.
This may not have been a Composer problem originally, but something similar happened recently, so to help later readers: because this dependency does not conform to the PSR-0/PSR-4 specification, Composer 2.0 no longer puts non-conforming class libraries into the autoload. Note for later readers: if you want to use this dependency, you must use Composer 1.x, not 2.x. See the explanation in the upgrade section of the official Composer docs (link).
Also, I would still suggest the author fix this, since 2.0+ is presumably the trend going forward. @liuqixing @dekuan
| gharchive/issue | 2017-12-06T07:29:55 | 2025-04-01T06:38:21.842818 | {
"authors": [
"findmark",
"tomener"
],
"repo": "dekuan/dedid",
"url": "https://github.com/dekuan/dedid/issues/9",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1747222511 | Bug? Signed in without email verification
Testing locally, I accidentally discovered a way to login without ever clicking on (or even receiving) the confirmation email:
Configure your .env with incorrect SMTP login credentials
Start the dev server via npm run dev
Fill in the sign up fields in the browser and click "Sign Up"
Notice the server crashes. Console logs incorrectly report the email was sent successfully before subsequently failing with an SMTP authentication error, e.g.
E-mail sent successfully!
log: {"level":"info","method":"GET","path":"/auth/verify/resend-email-a0455918-de53-4979-9aa4-3d72a06e491b","status":200,"timeInMs":1552,"user":"mrbogus@example.com","userId":"AhXN3CVU2QsGzOP","referer":"/auth/verify/email"}
log: {"level":"info","method":"GET","path":"/","status":200,"timeInMs":4056,"user":"mrbogus@example.com","userId":"AhXN3CVU2QsGzOP","referer":"/auth/verify/email"}
E-mail sent successfully!
log: {"level":"info","method":"GET","path":"/auth/verify/resend-email-a0455918-de53-4979-9aa4-3d72a06e491b","status":200,"timeInMs":654,"user":"mrbogus@example.com","userId":"AhXN3CVU2QsGzOP","referer":"/auth/verify/resend-email-a0455918-de53-4979-9aa4-3d72a06e491b"}
/src/lib/server/email-send.ts:78
throw new Error(`Error sending email: ${JSON.stringify(err)}`);
^
Error: Error sending email: {"code":"EAUTH","response":"535 Incorrect authentication data","responseCode":535,"command":"AUTH PLAIN"}
at eval (/src/lib/server/email-send.ts:78:19)
Restart the server with npm run dev
Return to the browser and click on the "If you did not receive the email, [click here] to resend it" link
Notice the server crashes again with an SMTP Auth error
Restart the server with npm run dev
Return to the browser and visit http://localhost:5173/dashboard
Notice under Protected Area, it says "If you are seeing this page, you are logged in."
Visit http://localhost:5173/profile and notice that you can see your profile and make changes to it.
Thanks for the reply. Do you have any concerns that someone could use this behavior to create an exploit that would allow them to register without using a valid email address?
If you configure your smtp settings with the correct info, can you replicate this trouble? It would concern me if correctly configuring allowed users to sign in.
Also, I don't really understand how you can be verified even if the send fails. When a user signs up, verified is set to false. The only way to verify the user is to visit this page https://github.com/delay/sveltekit-auth-starter/blob/main/src/routes/auth/verify/email-[token]/%2Bpage.server.ts so that verified can be set to true. Then you would see the protected page; without visiting that page with the correct token, I am not sure how that could happen unless you changed verified to true in the sign-up function.
Oops, you are right, thanks for sending this bug report… The problem is that the resend-verification-email flow incorrectly sets verified to true. Thanks very much for reporting this! I thought it was just a configuration issue, but actually it is a bad bug!
It should be fixed now. Thanks once again for reporting this trouble! And sorry for not checking this out better after your first report. Thanks so much for the follow-up question, because it enabled me to think more about the problem and determine it shouldn't be happening whether the server was misconfigured or not.
No worries, I could have been more explicit in the initial report. Thanks for the quick fix and for putting together this example 🙂
| gharchive/issue | 2023-06-08T07:07:31 | 2025-04-01T06:38:21.851244 | {
"authors": [
"IndependentCreator",
"delay"
],
"repo": "delay/sveltekit-auth-starter",
"url": "https://github.com/delay/sveltekit-auth-starter/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
954990585 | Ansible Lint test failing on control plane
Describe the bug
The Ansible lint workflow is failing on control_plane
To Reproduce
See https://github.com/dellhpc/omnia/runs/3183237513?check_suite_focus=true
Expected behavior
Ansible lint workflow should pass
List of errors
[201] Trailing whitespace
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:53
snmp_enabled: false
[601] Don't compare to literal True/False
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:107
- ethernet_switch_support == true or ethernet_switch_support == false
[601] Don't compare to literal True/False
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:114
- ib_switch_support == true or ib_switch_support == false
[601] Don't compare to literal True/False
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:121
- powervault_support == true or powervault_support == false
[201] Trailing whitespace
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:340
[201] Trailing whitespace
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:342
stat:
[201] Trailing whitespace
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:346
[201] Trailing whitespace
control_plane/roles/control_plane_common/tasks/package_installation.yml:21
[201] Trailing whitespace
control_plane/roles/control_plane_common/tasks/password_config.yml:39
cobbler_password | length < 1 or
[201] Trailing whitespace
control_plane/roles/control_plane_device/tasks/check_prerequisites.yml:35
when: backup_map.stat.exists == true
[601] Don't compare to literal True/False
control_plane/roles/control_plane_device/tasks/check_prerequisites.yml:35
when: backup_map.stat.exists == true
[601] Don't compare to literal True/False
control_plane/roles/control_plane_device/tasks/check_prerequisites.yml:72
- mngmnt_network_container_status == true
[601] Don't compare to literal True/False
control_plane/roles/control_plane_device/tasks/configure_mngmnt_network_container.yml:26
when: mngmnt_network_container_status == true and mngmnt_network_container_config_status == false
[601] Don't compare to literal True/False
control_plane/roles/control_plane_device/tasks/configure_mngmnt_network_container.yml:44
when: mngmnt_network_container_config_status == false
[601] Don't compare to literal True/False
control_plane/roles/control_plane_device/tasks/main.yml:42
when: (not mngmnt_network_container_image_status) or ( backup_map_status == true)
[201] Trailing whitespace
control_plane/roles/control_plane_ib/tasks/check_prerequisites.yml:35
when: infiniband_backup_map.stat.exists == true
[601] Don't compare to literal True/False
control_plane/roles/control_plane_ib/tasks/check_prerequisites.yml:35
when: infiniband_backup_map.stat.exists == true
[601] Don't compare to literal True/False
control_plane/roles/control_plane_ib/tasks/check_prerequisites.yml:72
- infiniband_container_status == true
[601] Don't compare to literal True/False
control_plane/roles/control_plane_ib/tasks/configure_infiniband_container.yml:26
when: infiniband_container_status == true and infiniband_container_config_status == false
[601] Don't compare to literal True/False
control_plane/roles/control_plane_ib/tasks/main.yml:38
when: (not infiniband_container_image_status) or ( infiniband_backup_map_status == true)
[306] Shells that use pipes should set the pipefail option
control_plane/roles/control_plane_repo/tasks/install_dsu.yml:30
Task/Handler: Execute bootstrap.cgi
[601] Don't compare to literal True/False
control_plane/roles/control_plane_repo/tasks/validate_idrac_vars.yml:23
- firmware_update_required == true or firmware_update_required == false
[206] Variables should have spaces before and after: {{ var_name }}
control_plane/roles/control_plane_sm/tasks/create_pod.yml:46
replace: " image: 'localhost/{{sm_docker_image_name}}:{{ sm_docker_image_tag }}'"
[208] File permissions unset or incorrect
control_plane/roles/control_plane_sm/tasks/pre_requisites.yml:44
Task/Handler: Copy opensm configuration file
[201] Trailing whitespace
control_plane/roles/provision_cobbler/tasks/check_prerequisites.yml:35
when: backup_map.stat.exists == true
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/check_prerequisites.yml:35
when: backup_map.stat.exists == true
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/main.yml:63
when: (not cobbler_image_status) or ( backup_map_status == true)
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/main.yml:67
when: (not cobbler_image_status) and (host_mapping_file == true) or ( backup_map_status == true)
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:33
Task/Handler: Remove blank spaces
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:51
Task/Handler: Count the hostname
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:57
Task/Handler: Count the ip
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:63
Task/Handler: Count the macs
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:69
Task/Handler: Check for duplicate hostname
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:75
Task/Handler: Check for duplicate ip
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:81
Task/Handler: Check for duplicate mac
[602] Don't compare to empty string
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:115
when: hostname_result.stdout != ""
[206] Variables should have spaces before and after: {{ var_name }}
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:121
shell: diff {{ role_path }}/files/new_host_mapping_file.csv {{role_path}}/files/backup_host_mapping_file.csv| tr -d \>|tr -d \<| grep -E -- ', & :| '
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:123
when: backup_map_status == true
[602] Don't compare to empty string
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:128
when: diff_output.stdout!= ""
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:147
when: (not cobbler_image_status) or (new_node_status == true)
[208] File permissions unset or incorrect
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:150
Task/Handler: Create a backup file
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:166
when: ( cobbler_container_status == true ) and ( new_node_status == true )
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:171
when: ( cobbler_container_status == true ) and ( new_node_status == true )
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:176
when: ( cobbler_container_status == true ) and ( new_node_status == true )
[208] File permissions unset or incorrect
control_plane/roles/provision_cobbler/tasks/mount_iso.yml:20
Task/Handler: Create iso directory
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/mount_iso.yml:43
when: mount_check == true
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/provision_password.yml:29
Task/Handler: Encrypt cobbler password
[201] Trailing whitespace
control_plane/roles/webui_awx/tasks/awx_configuration.yml:29
[206] Variables should have spaces before and after: {{ var_name }}
control_plane/roles/webui_awx/tasks/awx_configuration.yml:132
loop: "{{ scheduled_templates}}"
[306] Shells that use pipes should set the pipefail option
control_plane/roles/webui_awx/tasks/configure_settings.yml:23
Task/Handler: Get AWX admin password
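For reference, the most frequent rule categories above have mechanical fixes; illustrative sketches (tasks and paths are made up):

# [601] compare truthiness, not literals
when: backup_map.stat.exists          # instead of: backup_map.stat.exists == true

# [602] test emptiness with length
when: hostname_result.stdout | length > 0   # instead of: hostname_result.stdout != ""

# [306] set pipefail for piped shell tasks
- name: Count the hostname
  shell: |
    set -o pipefail
    awk -F',' '{print $2}' mapping.csv | wc -l
  args:
    executable: /bin/bash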
@lwilson We are using ansible lint 5.1.2 and we are not observing these linting issues.
GitHub has lint version ansible-lint==4.2.0 in https://github.com/ansible/ansible-lint-action
Can you upgrade the lint version?
@sujit-jadhav there is currently a PR pending to resolve this: https://github.com/ansible/ansible-lint-action/pull/48.
@lwilson If the team goes back to 4.2.0, many issues will come up and the team will end up spending more time. I think until the pending PR for upgrading the lint version is resolved, we should disable the lint check. I have requested that the lint check be performed compulsorily before a PR is created on GitHub.
The new lint appears to be working except for 2 errors in tools/olm.yml. I will work to correct those errors.
| gharchive/issue | 2021-07-28T15:38:14 | 2025-04-01T06:38:21.865828 | {
"authors": [
"lwilson",
"sujit-jadhav"
],
"repo": "dellhpc/omnia",
"url": "https://github.com/dellhpc/omnia/issues/437",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
925604400 | Added .to_pandas to deltalake python
Description
Many users that work in Python use pandas in their daily work. Adding a .to_pandas method makes it super easy for users to read a Delta table into a pandas DataFrame.
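Usage would look like this (a short sketch):

from deltalake import DeltaTable

dt = DeltaTable("path/to/delta-table")
df = dt.to_pandas()  # reads the Delta table into a pandas DataFrame
print(df.head())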
Related Issue(s)
Documentation
Added documentation to the .to_pandas method. Will update the docs if the proposed change looks fine.
Ignoring the type check there works for me :) I will merge this after https://github.com/delta-io/delta-rs/pull/296 to address the new clippy error.
So if we're sourcing from a bronze table in Delta format, converting to pandas, using pandas transforms, and then writing back to Delta into a silver table (for example), does that Delta table still store and retain the pandas transforms in its history?
@adityamcodes To ask a question, could you please open an issue or discussion rather than commenting on an old pull request?
| gharchive/pull-request | 2021-06-20T13:52:39 | 2025-04-01T06:38:21.874667 | {
"authors": [
"adityamcodes",
"bramrodenburg",
"houqp",
"wjones127"
],
"repo": "delta-io/delta-rs",
"url": "https://github.com/delta-io/delta-rs/pull/294",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2344506300 | Fix test and protocol description about delta sharing streaming rpc internal
Fix test and protocol description about delta sharing streaming rpc internal
thanks for the fix, lgtm!
| gharchive/pull-request | 2024-06-10T17:52:17 | 2025-04-01T06:38:21.875875 | {
"authors": [
"linzhou-db",
"pranavsuku-db"
],
"repo": "delta-io/delta-sharing",
"url": "https://github.com/delta-io/delta-sharing/pull/497",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
536136739 | Allow multiple UPDATE actions in Delta Lake MERGE INTO statement
A common practice in slowly changing dimension (SCD) load patterns is to soft-delete records rather than hard delete them. This is often done by setting a flag marking the record as deleted. This in itself is easy to achieve; however, sometimes deleted records reappear in the source system and therefore need to be re-inserted (effectively a special kind of update). This requires two WHEN MATCHED clauses with different conditions and attributes to be UPDATEd. A workaround for some scenarios is using a CASE statement, but this makes the logic unintuitive and much harder to read and maintain. It would be extremely useful if we could use UPDATE more than once in a WHEN MATCHED clause.
Issue discussed with @tdas on Delta Lake Slack channel where he suggested I raise this to be tracked.
https://delta-users.slack.com/archives/CGK79PLV6/p1575927346351000
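For concreteness, the desired pattern looks roughly like this (illustrative schema and condition names):

MERGE INTO dim AS t
USING updates AS s
  ON t.key = s.key
WHEN MATCHED AND s.op = 'delete' THEN
  UPDATE SET t.is_deleted = true
WHEN MATCHED AND t.is_deleted = true THEN
  UPDATE SET t.is_deleted = false, t.attr = s.attr
WHEN MATCHED THEN
  UPDATE SET t.attr = s.attr
WHEN NOT MATCHED THEN
  INSERT (key, attr, is_deleted) VALUES (s.key, s.attr, false)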
This is the API in Apache Spark, therefore we may be able to support this with Spark 3.0.
@brkyvz would this restriction be lifted with this improvement - https://github.com/apache/spark/pull/28875 ? thx
We are investigating this right now. :)
This has been fixed by https://github.com/delta-io/delta/commit/13c9c6ee9ee6e6921d59e940243f5eabbee3841e
Hi,
Could you please show how the re-appearance of soft-deletes is handled in a single MERGE statement? Appreciate your response.
| gharchive/issue | 2019-12-11T04:32:19 | 2025-04-01T06:38:21.881066 | {
"authors": [
"Tagar",
"ananthtony",
"brkyvz",
"gerardwolf",
"tdas",
"zsxwing"
],
"repo": "delta-io/delta",
"url": "https://github.com/delta-io/delta/issues/268",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Exported pages do not include database links
If an exported page contains a database, the link to the database's location is not included.
Resolved in version 0.0.5.
| gharchive/issue | 2022-01-12T02:39:44 | 2025-04-01T06:38:21.882063 | {
"authors": [
"delta1037"
],
"repo": "delta1037/notion-dump-kernel",
"url": "https://github.com/delta1037/notion-dump-kernel/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1338357144 | Create apiIMPROVED.py
Better
Full of bugs, not stable. Duplicated functions, please don't rename anything.
Please don't make a copy of the original code file, just update the same code file in your branch and create the pull request. In this way we can see which line of code is changed, and we can understand your change better.
| gharchive/pull-request | 2022-08-14T21:47:06 | 2025-04-01T06:38:21.924548 | {
"authors": [
"deluchen",
"itz-winter",
"whutermeloon"
],
"repo": "deluchen/fll",
"url": "https://github.com/deluchen/fll/pull/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1298071604 | Figure out why ic_cdk::api::stable::stable_write is not panicking when we try and write out of bounds
[x] Create a bare-bones Rust example (see the sketch below)
[ ] Figure out where our API is breaking down
If the issue still presents itself, open a forum post
If it gets updated, we need to update the stable_write tests
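A bare-bones repro sketch (assuming ic-cdk's stable memory API with u32 byte offsets and 64 KiB pages; names are illustrative):

use ic_cdk::api::stable::{stable_size, stable_write};

#[ic_cdk_macros::update]
fn write_out_of_bounds() {
    let pages = stable_size();        // currently allocated stable memory, in 64 KiB pages
    let offset = pages * 65536;       // first byte past the allocated region
    stable_write(offset, &[1, 2, 3]); // expected to trap, since no page covers this range
}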
This has been resolved
| gharchive/issue | 2022-07-07T20:41:26 | 2025-04-01T06:38:21.926217 | {
"authors": [
"bdemann",
"lastmjs"
],
"repo": "demergent-labs/azle",
"url": "https://github.com/demergent-labs/azle/issues/481",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1416458589 | Improved the resubmission to a contribution section in the docs
Status
Ready/In Progress/In Hold (Reason for hold)
Related Issues
fixes: CIAC-3793.
related: CIAC-4347
Description
When contributing to an existing pack, the user can open a contribution PR from the UI and set the contribution's name (title) to whatever he likes (the selection of the existing pack he contributes to happens in the redirected contribution form).
When resubmitting changes from the UI to those kinds of PRs (contributions to existing packs from the UI), the user must set the contribution's name (title) to the display name of the existing pack he contributed to (the name should be the exact display name of the pack); otherwise, instead of updating the already existing PR, a new PR will be opened.
In this PR I'm trying to improve the instructions of the resubmission section so that users will pay attention to this known limitation.
Screenshots
Paste here any images that will help the reviewer
@ShahafBenYakir - thanks for reviewing. I'll try to explain better:
First I would like to mention that this change in content docs is temporary (I will revert it once we fix this open dev-bug).
The original resubmission section in the docs included the following sentence:
.
I think that this sentence alone wasn't clear enough - It is not clear enough to the users that in order to resubmit a change to an open contribution PR of an existing pack, the title\name of the contribution must be as the pack's display name.
In this PR I tried to improve the explanation here so that contributors will pay attention to this existing limitation in the resubmission flow.
@ShahafBenYakir, @dansterenson, @darkushin - If you can think of better wordings for that please add your suggestions.
I added this doc change due to an issue that was solved in this PR.
PR is not relevant anymore - Closing it.
| gharchive/pull-request | 2022-10-20T11:35:14 | 2025-04-01T06:38:21.933140 | {
"authors": [
"ShacharKidor"
],
"repo": "demisto/content-docs",
"url": "https://github.com/demisto/content-docs/pull/1210",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1216878878 | trigger sane pdf reports build
Status
Ready/In Progress/In Hold (Reason for hold)
Related Content Pull Request
Related PR: link to the PR at demisto/content
Related Issues
Related: link to the issue
Description
A few sentences describing the overall goals of the pull request's commits.
Docker Image Ready - Dev
Docker automatic build at CircleCI has deployed your docker image: devdemisto/sane-pdf-reports:1.0.0.28997
It is available now on docker hub at: https://hub.docker.com/r/devdemisto/sane-pdf-reports/tags
Get started by pulling the image:
docker pull devdemisto/sane-pdf-reports:1.0.0.28997
Docker Metadata
Image Size: 667.82 MB
Image ID: sha256:30a7cac4ecc4ed652606becd6e4f9acacb096221c90e248677e2bc516ec2ae50
Created: 2022-04-27T06:52:36.390981922Z
Arch: linux/amd64
Command: ["python3"]
Environment:
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LANG=C.UTF-8
GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568
PYTHON_VERSION=3.9.6
PYTHON_PIP_VERSION=21.1.3
PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/a1675ab6c2bd898ed82b1f58c486097f763c74a9/public/get-pip.py
PYTHON_GET_PIP_SHA256=6665659241292b2147b58922b9ffe11dda66b39d52d8a6f3aa310bc1d60ea6f7
DOCKER_IMAGE=devdemisto/sane-pdf-reports:1.0.0.28997
Labels:
org.opencontainers.image.authors:Demisto <containers@demisto.com>
org.opencontainers.image.revision:4eeecc53cc5d7a9dc2e719a265923090a1686977
org.opencontainers.image.version:1.0.0.28997
| gharchive/pull-request | 2022-04-27T06:48:18 | 2025-04-01T06:38:22.016327 | {
"authors": [
"dc-builder",
"jochman"
],
"repo": "demisto/dockerfiles",
"url": "https://github.com/demisto/dockerfiles/pull/7553",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
870085871 | doc: fix typos and format yml example
Hey @kevinslin! I found this project and approach to note taking after listening to you on FLOSS Weekly. Awesome work!
After going through some of the wiki docs (which are very helpful, btw 🙂 ), I found a few small spots that needed some attention.
Thanks.
Awesome, thanks for the corrections :)
| gharchive/pull-request | 2021-04-28T15:37:38 | 2025-04-01T06:38:22.029013 | {
"authors": [
"jasonsjones",
"kevinslin"
],
"repo": "dendronhq/dendron-site",
"url": "https://github.com/dendronhq/dendron-site/pull/87",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
102227567 | IE issues
There seems to be an issue with IE. Menu.js sometimes loads correctly but frequently does not. When this occurs, all named links appear as one big link. Clicking makes reveal try to load all slides on top of one another.
Can you let me know what version of Windows and IE you are using? I've tested on IE9 and IE11 under Windows 7 and can't see the issue.
Thanks for the quick response. I've tried IE11 on 7 (actually Server 2008 R2) and XP
I think this was to do with the page load rather than any inherent problem with menu.js. Slimming down my site and loading the .js earlier seems to have solved the issue.
| gharchive/issue | 2015-08-20T20:17:06 | 2025-04-01T06:38:22.030668 | {
"authors": [
"cavie78",
"denehyg"
],
"repo": "denehyg/reveal.js-menu",
"url": "https://github.com/denehyg/reveal.js-menu/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
182650992 | HW: 17.10.2016 Gradle. Flavors
Commit link.
good
| gharchive/issue | 2016-10-12T22:34:13 | 2025-04-01T06:38:22.031741 | {
"authors": [
"ilya-shknaj",
"zagart"
],
"repo": "deniotokiari/training-epam-2016",
"url": "https://github.com/deniotokiari/training-epam-2016/issues/112",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
183009188 | HW 14 10 2016 Backend
https://github.com/Garasjuk/EPAMtraning2016/tree/master/BackendApplication
Please provide a link where I can see all the changes on one page.
https://github.com/Garasjuk/EPAMtraning2016/commit/80ee042a891fb6f66ca8b96ea88a2417c8ff5d5b
| gharchive/issue | 2016-10-14T09:54:17 | 2025-04-01T06:38:22.033504 | {
"authors": [
"Garasjuk",
"ilya-shknaj"
],
"repo": "deniotokiari/training-epam-2016",
"url": "https://github.com/deniotokiari/training-epam-2016/issues/119",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1652240527 | Updating Product Ads to version 3
New endpoints for product ads; see ad_api/api/sp/product_ads_v3.py:
adding the new endpoints to product_ads_v3.py
adding the @Utils.deprecated decorator to v2 of product ads (a sketch of such a decorator follows this list)
changes to the docs
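The decorator's implementation isn't included in this PR diff; as a rough sketch, a deprecation decorator like @Utils.deprecated typically looks something like this (the warning message is my assumption):
import functools
import warnings

def deprecated(func):
    # Emit a DeprecationWarning whenever a deprecated v2 method is called.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        warnings.warn(
            f"{func.__name__} is deprecated; use the v3 endpoint instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return func(*args, **kwargs)
    return wrapper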
Great job @HamzaChx
Great!
I will push some changes in the rst documents:
/Users/hanuman/Documents/PycharmProjects/python-amazon-ad-api-dev/source/sp/product_ads_v3.rst:2: WARNING: Title underline too short.
Product Ads
WARNING: autodoc: failed to import function 'ProductAdsV3.create_campaigns' from module 'ad_api.api.sp'; the following exception was raised:
No module named 'ad_api.api.sp.ProductAdsV3'
WARNING: autodoc: failed to import function 'ProductAdsV3.edit_campaigns' from module 'ad_api.api.sp'; the following exception was raised:
No module named 'ad_api.api.sp.ProductAdsV3'
WARNING: autodoc: failed to import function 'ProductAdsV3.list_campaigns' from module 'ad_api.api.sp'; the following exception was raised:
No module named 'ad_api.api.sp.ProductAdsV3'
WARNING: autodoc: failed to import function 'ProductAdsV3.delete_campaigns' from module 'ad_api.api.sp'; the following exception was raised:
No module named 'ad_api.api.sp.ProductAdsV3'
The methods were wrong, as they belonged to the Campaigns endpoint; they need to be updated to the specific methods of the current endpoint.
I will also push the updated __init__.py that allows importing the new endpoints:
from .campaigns import Campaigns
from .campaigns_v3 import CampaignsV3
from .ad_groups import AdGroups
from .ad_groups_v3 import AdGroupsV3
from .product_ads import ProductAds
from .product_ads_v3 import ProductAdsV3
from .bid_recommendations import BidRecommendations
from .keywords import Keywords
from .negative_keywords import NegativeKeywords
from .campaign_negative_keywords import CampaignNegativeKeywords
from .suggested_keywords import SuggestedKeywords
from .product_targeting import Targets
from .negative_product_targeting import NegativeTargets
from .reports import Reports
from .snapshots import Snapshots
from .budget_rules import BudgetRules
from .campaings_optimization import CampaignOptimization
from .ranked_keywords_recommendations import RankedKeywordsRecommendations
from .budget_recommendations import BudgetRecommendations
from .budget_rules_recommendations import BudgetRulesRecommendations
from .product_recommendations import ProductRecommendations
__all__ = [
"Campaigns",
"CampaignsV3",
"AdGroups",
"AdGroupsV3"
"ProductAds",
"ProductAdsV3"
"BidRecommendations",
"Keywords",
"NegativeKeywords",
"CampaignNegativeKeywords",
"SuggestedKeywords",
"Targets",
"NegativeTargets",
"Reports",
"Snapshots",
"BudgetRules",
"CampaignOptimization",
"RankedKeywordsRecommendations",
"BudgetRecommendations",
"BudgetRulesRecommendations",
"ProductRecommendations"
]
You can pull it later in a while.
| gharchive/pull-request | 2023-04-03T14:58:13 | 2025-04-01T06:38:22.058530 | {
"authors": [
"HamzaChx",
"denisneuf"
],
"repo": "denisneuf/python-amazon-ad-api",
"url": "https://github.com/denisneuf/python-amazon-ad-api/pull/132",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
363079966 | Improves Beacon logging to console output
Improve logging by using ResetColor to reset console colors to the original colors from when the process was started.
What problem is this fixing?
Apparently, ResetColor is the slightly better way of resetting the text colors. It saves us from having to do that ourselves. Nothing more, nothing less.
| gharchive/pull-request | 2018-09-24T09:58:42 | 2025-04-01T06:38:22.067180 | {
"authors": [
"dennisdoomen",
"gormac"
],
"repo": "dennisdoomen/Beacon",
"url": "https://github.com/dennisdoomen/Beacon/pull/29",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
118607588 | Can ShouldBeEquivalentTo match by lambda?
public class Failure
{
    public string Code { get; set; }
    public IEnumerable<string> ErrorMessages { get; set; }
}

var expectedCodes = new[] { "123", "456", "789" };
var expectedErrorMessages = new[] { "Required" };
var failures = new List<Failure> {
    new Failure { Code = "123", ErrorMessages = new List<string> { "Required" } },
    new Failure { Code = "123", ErrorMessages = new List<string> { "Required" } }};
Can I do something similar to this pseudo-code with ShouldBeEquivalentTo?
failures.ShouldBeEquivalentTo(x => expectedCodes.Any(x.Code) && x.ErrorMessages.ContainInOrder(expectedErrorMessages));
Right now I can only do that with a custom extension method.
No, you can't do that. Instead, you could create another List<Failure> that contains the data as you expect them, and pass that to ShouldBeEquivalentTo using option WithStrictOrdering.
@TerraVenil does this answer your question?
Yes, please close this issue.
| gharchive/issue | 2015-11-24T13:22:57 | 2025-04-01T06:38:22.069634 | {
"authors": [
"TerraVenil",
"dennisdoomen"
],
"repo": "dennisdoomen/fluentassertions",
"url": "https://github.com/dennisdoomen/fluentassertions/issues/312",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
774494557 | react-reveal props I miss: spy and appear
Is this repo a fork of react-reveal? I'd love to use this more modern version but my use case makes use of two props I haven't been able to find here: spy and appear. Is there a workaround?
Hi, this is not a fork of react-reveal, this is a completely new and different implementation
That's okay. I thought it might be a fork because of this comment.
If I could just put in my two pennies worth, it would be nice if that could be implemented :-)
Keep up the good work!
Thank you @andrepadeti, feel free to submit a PR!
Merry Christmas 🎄
| gharchive/issue | 2020-12-24T15:48:36 | 2025-04-01T06:38:22.072081 | {
"authors": [
"andrepadeti",
"dennismorello"
],
"repo": "dennismorello/react-awesome-reveal",
"url": "https://github.com/dennismorello/react-awesome-reveal/issues/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2054230428 | Gh action deploy without --prod
I've tried to find a way to allow GH Actions to only do preview deploys.
It seems to perform deploys in a similar way to when the --prod flag is sent to the CLI.
If the default must be --prod, it would be really nice to have a config flag that would only do preview deploys that require manual promotion.
Sorry if this exists and I've missed it; in that case I would gladly submit a PR adding it to the action README.
When you link a Deploy project to a Github repository in order to use the GH action, you set which branch will act as the "production" branch. Any commit in that branch will result in a production deployment. I presume you have only 1 branch (main), in which case you could create a new branch and set it as the "production" branch. You can forget about it if you want to promote deployments manually, or you could merge to it those commits that should be deployed to production.
Thanks for your reply!
We've used a separate production branch before, but what we're running now is a tag-based versioning system with branching from tags for issues.
You're mentioning that I can forget about it if I want to promote manually which is exactly what I'd like to do. However, it seems deno deploy instantly promotes the latest deploy to production. This is the behaviour I'd like to turn off.
Enabling me to deploy, but to manually promote a deploy.
From your last sentence I got the feeling this is possible, but how?
What I suggest is to create a production branch, but never commit to it. This way, the branch where you commit to will produce preview deployments instead of production deployments.
Ahh, brilliant! I did not think of that.
Thanks so much for you help!
| gharchive/issue | 2023-12-22T17:07:47 | 2025-04-01T06:38:22.119328 | {
"authors": [
"arnauorriols",
"nicrosengren"
],
"repo": "denoland/deployctl",
"url": "https://github.com/denoland/deployctl/issues/227",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1437544026 | Show full comment for files with @module block comments.
I maintain a Deno "library" with many entrypoints here. One of the things I'd like to do is document the overall usage and paradigm of each file in its heading using the jsdoc @module tag. This sort of works with deno.land, but it seems that the module information is truncated to only the first paragraph. An example is:
Source rendering on deno.land
Documentation rendering on deno.land
Documentation rendering on doc.deno.land
Is it possible to have deno.land/x modules render documentation more in line with doc.deno.land?
I don't see anything being truncated on either; the only difference is that deno.land/x/ currently doesn't render examples, which is something we do want to add
@crowlKats That tracks. I did a quick search and couldn't find a ticket for rendering examples. If you want I can adjust the title of this ticket to cover adding examples rendering, otherwise it seems this should be closed.
Example rendering has been implemented.
| gharchive/issue | 2022-11-06T21:21:01 | 2025-04-01T06:38:22.123138 | {
"authors": [
"baetheus",
"crowlKats"
],
"repo": "denoland/dotland",
"url": "https://github.com/denoland/dotland/issues/2566",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1810411648 | JSX element implicitly has type 'any' because no interface 'JSX.IntrinsicElements' exists. When using Preact v10.16.0
This is my deno.json
{
    "compilerOptions": {
        "jsx": "react-jsx",
        "jsxImportSource": "preact"
    },
    "tasks": {
        "start": "deno run -A --watch=static/,routes/ dev.ts",
        "vpn": "DENO_DEPLOYMENT_ID=\"$(git rev-parse HEAD)\" deno run -A main.ts"
    },
    "imports": {
        "$fresh/": "https://deno.land/x/fresh@1.3.0/",
        "preact": "https://esm.sh/preact@10.16.0",
        "preact/": "https://esm.sh/preact@10.16.0/",
        "preact-render-to-string": "https://esm.sh/*preact-render-to-string@6.2.0",
        "@preact/signals": "https://esm.sh/*@preact/signals@1.1.5",
        "@preact/signals-core": "https://esm.sh/@preact/signals-core@1.3.1"
    }
}
Thanks!
Can you check that the deno vscode plugin is initialized? There is a command for that in the vscode command palette.
Yep it is; in fact, this is a project I'm currently running. Changing it to Preact v10.15.1 works fine. That's my workaround for now.
Screencast from 2023-07-18 14-03-35.webm (screen recording attachment)
Solutions
Ctrl/Cmd + Shift + P, then choose Reload Window (VS Code)
Run Fresh:
deno task start
and reload the browser window (Deno automatically caches missing dependencies)
I use all the latest Preact / twind and it works fine.
Really weird @afifurrohman-id. I followed the steps you provided and added --reload to my task to renew the cache, but it is still showing that warning.
Alright I think I figured out the issue, but the workaround is a little odd to me:
To replicate the issue:
Close any VSCode instance
Delete Deno's cache. On Linux rm -r $HOME/.cache/deno
Open a Fresh project with VSCode and select any .tsx file
You should see something like this:
Run deno task start
Now restart the Deno language server
Now you should be able to see the error above:
To get rid of this annoying issue, stop the fresh server and run deno check main.ts
My question is, why does using deno check solve the issue? I tried running with both dev.ts and main.ts and it did not work.
@sant123 this worked for me. Idk why deno check works either, but it did. I was just using the default fresh template from their "Getting Started", as well. Might be worth fixing...
FWIW For people commenting here: It's not an issue with Fresh but with Deno's LSP. We are aware of the issue but haven't found the root cause yet nor a reliable way to reproduce it. Sometimes I can reproduce it and when I try again it doesn't work anymore. My guess as to why deno check works is that it may refresh the internal type cache or something.
I'll transfer this upstream to the deno cli repository, since this is not an issue with Fresh.
| gharchive/issue | 2023-07-18T17:38:22 | 2025-04-01T06:38:22.134039 | {
"authors": [
"afifurrohman-id",
"marvinhagemeister",
"mct-dev",
"sant123"
],
"repo": "denoland/fresh",
"url": "https://github.com/denoland/fresh/issues/1477",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1581165483 | feat: Params for middleware
Fix https://github.com/denoland/fresh/issues/903
This PR introduces middlewareParams in the MiddlewareHandlerContext. Each middleware has access to the params that are upstream of its definition.
For example:
With a route '/api/[id1]/[id2]/foo'
and a middleware located at '/api/[id1]/_middleware',
middlewareParams will only have a property of 'id1'; 'id2' will be undefined.
Why not all the params of the route?
For a middleware to access the params that are downstream from its level would require, I think, duplicating in Fresh the logic that is currently in the Rutt router. This would introduce a speed decrease. One solution could be to integrate the Rutt router into Fresh, although I did not want to tackle this.
While this implementation has a limitation, I think it covers most use cases and does not add any significant overhead
Note: the linter is failing not because of this PR but because of a recent change in Deno affecting the whole project. There are 16 instances of Deno.run being deprecated, which should probably be fixed separately.
@marvinhagemeister, don't forget that this is no longer necessary due to my #1314.
Closing in favour of #1314
| gharchive/pull-request | 2023-02-12T08:07:54 | 2025-04-01T06:38:22.138084 | {
"authors": [
"deer",
"marvinhagemeister",
"sylc"
],
"repo": "denoland/fresh",
"url": "https://github.com/denoland/fresh/pull/1025",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2082899227 | Feature request: Have a more minimal saaskit option
Is your feature request related to a problem? Please describe.
It takes a bit of work to get the saaskit into a form that I can start building from. It'd be nice to have a more minimal version.
Describe the solution you'd like
I'd like a repository that has only the following:
User and auth setup
Stripe integration
And not the following:
Blogs
Graphs
Items in database
Here is my attempt: https://github.com/paudrow/saaskit-minimal
Describe alternatives you've considered
I can delete everything myself, which I've done (link above), but it is something that I have to maintain and possibly fight merge conflicts. It'd be nicer to have an officially maintained version.
Having two versions of SaaSKit would be difficult to maintain. It'd be better to have a single version that improves its modularity, making it easier to modify. That's what I've tried to do with the addition of plugins.
I'd be happy to hear ideas on how to improve modularity, but I will close this as not planned. Either way, thank you for your suggestion 🙂
| gharchive/issue | 2024-01-16T02:07:32 | 2025-04-01T06:38:22.142287 | {
"authors": [
"iuioiua",
"paudrow"
],
"repo": "denoland/saaskit",
"url": "https://github.com/denoland/saaskit/issues/655",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1340924405 | python_dono ?!
Is it possible to import a Deno module in Python code?
Possible, but that's completely different than what this module does so it is out of scope for this project.
Thank you @DjDeveloperr
| gharchive/issue | 2022-08-16T21:58:18 | 2025-04-01T06:38:22.143752 | {
"authors": [
"DjDeveloperr",
"elycheikhsmail"
],
"repo": "denosaurs/deno_python",
"url": "https://github.com/denosaurs/deno_python/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
191987663 | Logo ideas
Woo-hoo! Spaceship is going to get 200 stars! I think it's time to make a logo for the theme. I will be glad if anybody can help me with that, so I'm looking for volunteers.
I think it has to be something simple, clean and expressive. Something that can reflect the essence and idea of the theme itself.
Examples
Bullet Train:
I am not using zsh or this zsh-spaceship-theme, but in the screenshots I see you have no backgrounds. What is the difference from other shell themes? I see simplicity and completeness, like in other good themes. What is the idea of spaceship-zsh-theme?
spaceship-zsh-theme implements three core ideas:
It has a lot of useful indicators: exit, host, user, sudo, git, nvm, rvm, rbenv, chruby, virtualenv, vi-mode (swiftenv and xenv are coming). Some of them aren't supported in most other themes.
It shows only the indicators which are required at the moment, without any overkill. You see what you need.
It's almost completely customizable. With #28 and #39 we'll be able to change almost everything in this theme as you want to. (Custom colors and ability to add sectors like in agnoster-zsh-theme)
BTW, a big update is coming. I've found a way to create cross-shell and testable themes, so Spaceship is probably going to be the first shell theme that has tests and works on any shell (sh, bash, zsh, fish, etc) with a single and universal code base.
Okay got it.
And another important question: why did you call it "spaceship"? Did you mean "it's like a fantasy UI on the sensor panels of devices used in space"?
In my imagination, a real spaceship is an extremely complex system with dozens of indicators (this refers to the first core idea) which show data about the whole system.
The systems which provide life support in real spaceships are always maximally simple. You can always get everything that you need right now, without overkill (this refers to the second idea).
A spaceship's systems give you the ability to do whatever you need, like scientific research, experiments, etc. (this refers to the third point, the customizability of the theme).
Something like this. Maybe I'm wrong about real spaceships, but those are the reasons why I named this theme Spaceship.
Okay, that's enough grand thinking for me for now. Now, give me some time.
As you wish, up to you!
200★ are here!
@Grawl hey, any updates?
As a starting point, I made an abstraction of the screenshot from the repository.
This helped me a lot to understand what I'm doing.
Then, I tried to make some flying objects:
And then I noticed that the arrow and the rocket are very similar, and tried to combine them:
Notice that colors are important here (the terminal is all about typography), so I put them everywhere.
So, what direction should I follow?
@Grawl hey, I'll answer you in private.
For inspiration, coats of arms of NASA missions:
Yet another awesome example:
My thoughts:
will try to add rays and use colors from that awesome illustration
I like this much more than my previous tries
| gharchive/issue | 2016-11-28T11:56:45 | 2025-04-01T06:38:22.165902 | {
"authors": [
"Grawl",
"denysdovhan"
],
"repo": "denysdovhan/spaceship-zsh-theme",
"url": "https://github.com/denysdovhan/spaceship-zsh-theme/issues/42",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
445854341 | Adding owner name for license
Fixes issue #87 (adding owner name in license).
I've made the changes mentioned, hope they're right. Do tell me if there's anything to change!
I'll check it tomorrow. Looks perfect :)
Thanks!
Perfect! Thank you for your help :) We have the very similar #106 and #107. I'm sure you can implement them in a few minutes. So, if you're interested in more contributions, I would be happy to review your solutions :)
Sure, I'll take a look. Thanks!
| gharchive/pull-request | 2019-05-19T19:21:18 | 2025-04-01T06:38:22.529445 | {
"authors": [
"nandahkrishna",
"orsinium"
],
"repo": "dephell/dephell",
"url": "https://github.com/dephell/dephell/pull/104",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
183726981 | Please add checksum deployer.phar archive
Issue Type: Feature Request
Deployer Version: N/A
Local Machine OS: N/A
Remote Machine OS: N/A
Description
Pages:
http://deployer.org/docs/getting-started
http://deployer.org/docs/installation
Please add MD5 and SHA-256 checksums for the deployer.phar archive, for use with Ansible like:
- name: download file with check (md5)
  get_url:
    url: http://deployer.org/deployer.phar
    dest: /usr/local/bin/dep
    checksum: md5:66dffb5228a211e61d6d7ef4a86f5758
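Once published, such a digest could also be verified locally; a minimal Python sketch (the expected value below is a placeholder, not a real checksum):
import hashlib

expected_sha256 = "<value published on deployer.org>"  # placeholder, not a real digest
with open("deployer.phar", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
assert digest == expected_sha256, "deployer.phar checksum mismatch"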
Ok, I have plans to do it.
Done.
| gharchive/issue | 2016-10-18T15:49:20 | 2025-04-01T06:38:22.532798 | {
"authors": [
"elfet",
"tebaly"
],
"repo": "deployphp/deployer",
"url": "https://github.com/deployphp/deployer/issues/816",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1750306812 | Create my question form - question deletion feature
🤔 What problem are we trying to solve?
close #196
🎉 Changes
Removed the drag-to-delete code (removed the trash-can code)
Implemented the delete flow so that clicking the delete button in the header shows the trash-can icon and allows deletion
🙏 Please make sure to review this part!
Codecov Report
Patch and project coverage have no change.
Comparison is base (81604db) 91.90% compared to head (da53617) 91.90%.
:exclamation: Current head da53617 differs from pull request most recent head 34c34b9. Consider uploading reports for the commit 34c34b9 to get more accurate results
Additional details and impacted files
@@ Coverage Diff @@
## main #219 +/- ##
=======================================
Coverage 91.90% 91.90%
=======================================
Files 38 38
Lines 284 284
Branches 52 52
=======================================
Hits 261 261
Misses 23 23
:umbrella: View full report in Codecov by Sentry.
| gharchive/pull-request | 2023-06-09T18:10:44 | 2025-04-01T06:38:22.538960 | {
"authors": [
"codecov-commenter",
"sumi-0011"
],
"repo": "depromeet/na-lab-client",
"url": "https://github.com/depromeet/na-lab-client/pull/219",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
549602364 | Introduce monitoring
No project integrates monitoring right now. This means that we need to leverage application logs to get a sense of what is going on in a system. By having something like statsd or prometheus monitoring, we would be able to better monitor the systems over time.
My proposal would be to leverage statsd as the main stat emission protocol, but then leverage Prometheus sidecar containers to advertise the metrics. This should fit in rather nicely with many existing stat collection tools, like Prometheus and Datadog, without being too opinionated about which ones companies are using.
On the Helm charts, the deployment of the sidecar should be optional, and the host/port should be configurable through two environment variables: STATSD_HOST and STATSD_PORT (a minimal emission sketch follows the library links below).
Golang Library: https://godoc.org/github.com/etsy/statsd/examples/go
NodeJS Library: https://www.npmjs.com/package/statsd-client
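To make that concrete, here is a minimal sketch of emitting a statsd counter over UDP against the proposed variables (Python for illustration; the metric name is made up):
import os
import socket

# Resolve the proposed configuration variables, falling back to common statsd defaults.
host = os.environ.get("STATSD_HOST", "127.0.0.1")
port = int(os.environ.get("STATSD_PORT", "8125"))

# statsd counters are plain UDP datagrams of the form "<name>:<value>|c".
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"deps_cloud.requests:1|c", (host, port))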
I recently put together a simple grafana dashboard for the system using metrics that were already in place from Kubernetes.
I'm going to close this for now. We can certainly add more later.
| gharchive/issue | 2020-01-14T14:25:52 | 2025-04-01T06:38:22.542153 | {
"authors": [
"mjpitz"
],
"repo": "deps-cloud/deps.cloud",
"url": "https://github.com/deps-cloud/deps.cloud/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
129256190 | PHP 5.5 & newest?
Just checked with old Debian 6 - it's working from scratch, but with 8.0 it's completely wrong. Is the project abandoned, or is it possible to take a quick look and adapt it for current Debian/Ubuntu?
Unfortunately, I don't plan on working on this any longer. You are free to fork it or if you submit pull requests I can still merge them, but that is the extent of my work on this from now on (unless I get a ton of time on my hands)
| gharchive/issue | 2016-01-27T20:15:26 | 2025-04-01T06:38:22.598404 | {
"authors": [
"deranjer",
"lazyest"
],
"repo": "deranjer/OpenVPN-PHP-Management-Gui",
"url": "https://github.com/deranjer/OpenVPN-PHP-Management-Gui/issues/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1754460867 | Lua Error Crafting Results Export JSON
Hi there, I might have a little problem.
Soon after I did some Prospecting I went to the CraftSim Crafting Results and saw there is a button "Export JSON". After I pressed it, my game froze a bit and I got a cute little Lua error, just like this:
Message: ...terface/AddOns/CraftSim/Data/Classes/JSONBuilder.lua:74: script ran too long
Time: Tue Jun 13 11:28:38 2023
Count: 1
Stack: ...terface/AddOns/CraftSim/Data/Classes/JSONBuilder.lua:74: script ran too long
[string "=[C]"]: ?
[string "@Interface/AddOns/CraftSim/Data/Classes/JSONBuilder.lua"]:74: in function Add' [string "@Interface/AddOns/CraftSim/Data/Classes/CraftResultItem.lua"]:35: in function GetJSON'
[string "@Interface/AddOns/CraftSim/Data/Classes/JSONBuilder.lua"]:42: in function AddList' [string "@Interface/AddOns/CraftSim/Data/Classes/CraftResult.lua"]:121: in function GetJSON'
[string "@Interface/AddOns/CraftSim/Data/Classes/JSONBuilder.lua"]:42: in function AddList' [string "@Interface/AddOns/CraftSim/Data/Classes/CraftRecipeData.lua"]:104: in function GetJSON'
[string "@Interface/AddOns/CraftSim/Data/Classes/JSONBuilder.lua"]:42: in function AddList' [string "@Interface/AddOns/CraftSim/Data/Classes/CraftSessionData.lua"]:109: in function <...ce/AddOns/CraftSim/Data/Classes/CraftSessionData.lua:101> [string "=(tail call)"]: ? [string "@Interface/AddOns/CraftSim/Modules/CraftResults/Frames.lua"]:47: in function clickCallback'
[string "@Interface/AddOns/CraftSim/Libs/GGUI-1.0/GGUI.lua"]:1107: in function <Interface/AddOns/CraftSim/Libs/GGUI-1.0/GGUI.lua:1105>
Locals: (*temporary) = defined =[C]:-1
Any tips on how I can fix it?
This currently happens when you have crafted a lot before exporting, due to the sheer amount of data accumulating!
There is currently no planned fix for this, but it's on my todo list :)
| gharchive/issue | 2023-06-13T09:36:25 | 2025-04-01T06:38:22.613261 | {
"authors": [
"DrGrixel",
"derfloh205"
],
"repo": "derfloh205/CraftSim",
"url": "https://github.com/derfloh205/CraftSim/issues/149",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2250574541 | [FEQ]Yaswanth/FEQ-1765/Improve/Disabled label animation and added props
Added "isLabelAnimationDisabled" prop with that we can able to control label animation for input component
Pull Request Test Coverage Report for Build 8737908623
Details
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage increased (+0.2%) to 61.243%
Totals
Change from base Build 8550352225: 0.2%
Covered Lines: 202
Relevant Lines: 319
💛 - Coveralls
| gharchive/pull-request | 2024-04-18T12:30:08 | 2025-04-01T06:38:22.618259 | {
"authors": [
"coveralls",
"yaswanth-deriv"
],
"repo": "deriv-com/ui",
"url": "https://github.com/deriv-com/ui/pull/166",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2254338717 | Fix PoS mempool dynamic fee market
A bunch of things were broken in our mempool fee estimation logic. All fixed now. Summary:
Fixed a bug whereby we would never increase the fee above the minimum fee because of the placement of an if bucketMinFee <= globalMinFeeRate check.
Fixed a bug where we were overwriting the mempool fee register in Start(), which was causing the mempool fee estimator to have no txns in it, and thus always return the minimum value. This meant we were not considering mempool congestion at all.
In many places, we were confusing "fee bucket growth rate basis points" with "fee bucket multiplier". The former value would be something like 1000 (= 10%) while the latter would map to 1.1 (= 10000 + 1000 / 10000). This caused fee-time ordering to be basically completely broken. All fixed now, and fixed tests.
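To spell out the conversion (an illustrative Python snippet, not the repo's Go code):
BASIS_POINTS = 10_000

def growth_rate_bps_to_multiplier(growth_rate_bps: int) -> float:
    # 1000 bps -> (10_000 + 1_000) / 10_000 = 1.1
    return (BASIS_POINTS + growth_rate_bps) / BASIS_POINTS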
Just to add a little more detail: The tests were very well-written and I think they exercise this logic very well. The reason why they were passing before, though, is because we were setting the value incorrectly in the Init() and passed it wrong as an argument, and the two sortof compensated for each other in the tests. But in production, we would go down a different path that wouldn't compensate properly, which is how I found the bug. Anyway it's all fixed now.
Set optimized defaults for the mempool dynamic fee params and added a deep comment explaining why we chose these values where they are defined. Also made sure we're using them consistently in all the relevant places. These params optimize heavily toward getting your txn into the next block, which is what we want. They cause reordering issues if you're sending txns at a rate much higher than 1 block per second, but this is correct behavior, and the comments include suggestions on how to mitigate these issues (eg by manually setting the fee or using an atomic txn). A rough sketch of the percentile idea follows the list:
MempoolCongestionFactorBasisPoints
MempoolPriorityPercentileBasisPoints
PastBlocksCongestionFactorBasisPoints
PastBlocksPriorityPercentileBasisPoints
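For intuition, the "priority percentile" boils down to picking a high percentile of the fee rates currently in the mempool; a rough Python sketch (the parameter semantics here are my assumption, not the exact Go logic):
def priority_percentile_fee_rate(mempool_fee_rates: list[int], percentile_bps: int) -> int:
    # e.g. percentile_bps = 9_000 picks roughly the 90th-percentile fee rate;
    # assumes a non-empty mempool.
    rates = sorted(mempool_fee_rates)
    idx = min(len(rates) - 1, (len(rates) * percentile_bps) // 10_000)
    return rates[idx]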
In computeFeeTimeBucketRangeFromExponent, there was a weird edge-case where we could end up with a fee bucket whose end was less than its start. This can't happen in a real scenario, though, only when the bucket growth rate is something like 1bp, which is ridiculously small. And I only found it because of the growth rate <> multiplier issue mentioned previously, which was causing a 10% growth rate to be threaded through as 1bp.
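A toy version of exponential fee buckets shows how that degenerate case can appear (illustrative only; integer truncation stands in for the real rounding):
def fee_bucket_range(exponent: int, min_fee_rate: int, growth_rate_bps: int) -> tuple[int, int]:
    multiplier = (10_000 + growth_rate_bps) / 10_000
    start = int(min_fee_rate * multiplier ** exponent)
    end = int(min_fee_rate * multiplier ** (exponent + 1)) - 1
    return start, end

# A sane 10% growth rate behaves: fee_bucket_range(0, 100, 1000) == (100, 109).
# An absurd 1bp growth rate degenerates: fee_bucket_range(0, 100, 1) == (100, 99).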
In EstimateFee, we accept a minFeeRateNanosPerKB, but we were ignoring it if the fee estimators returned a higher fee. This was much less useful than using the minFeeRateNanosPerKB as a straight-up override so I changed the behavior there. Doing this made it so that my script was able to blast the mempool with txns, with a custom fee rate, without any reordering issues (because all the txns were being put in the same fee bucket). Eventually, we should probably change the name of this field to something like overrideFeeRateNanosPerKB but I think it's fine for now.
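Reduced to a sketch, that behavioral change looks like this (illustrative Python; the names mirror the description above, not the Go code):
def estimate_fee_old(estimated_rate: int, min_fee_rate: int) -> int:
    # Old behavior: the caller's rate was only a floor, ignored when estimators went higher.
    return max(estimated_rate, min_fee_rate)

def estimate_fee_new(estimated_rate: int, min_fee_rate: int | None) -> int:
    # New behavior: a caller-supplied rate is a straight-up override.
    return min_fee_rate if min_fee_rate is not None else estimated_rate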
For reference, in case it's useful, the way I found all this stuff was I slowed the block time down to 1 block every 10s and made the NumPastBlocks for the block estimator 5 blocks using the params in constants.go (so that txns would accumulate in the mempool) and added logging of the fees. Then I wrote a script that blasted the mempool with txns and noticed that the fees weren't adjusting properly, which led me down the rabbit-hole to find all of these issues. After fixing all the issues I took some time to optimize all the params, and then used my script to exercise everything and make sure it's fully 100% adapting correctly. Specifically, I saw that the fee goes up correctly once the mempool has a full block's worth of txns accumulated in it, stays high for a few blocks because of the block estimator, and then starts to go down as more blocks come through. It all works really well.
[!WARNING]
This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more
#1252 👈
#1253
feature/proof-of-stake
This stack of pull requests is managed by Graphite. Learn more about stacking.
| gharchive/pull-request | 2024-04-20T02:50:15 | 2025-04-01T06:38:22.706248 | {
"authors": [
"diamondhands0",
"tholonious"
],
"repo": "deso-protocol/core",
"url": "https://github.com/deso-protocol/core/pull/1252",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
778360479 | throw if detekt cannot find any files, i.e input was most likely configured wrong
Expected Behavior
Detekt task exits at least with a non-zero code - preferably with a message that the input was configured wrong
Current Behavior
Detekt is happy
Context
I had a project which was a multi-project build. I have configured detekt as follows:
detekt {
    input = files(subprojects*.collect { it.projectDir })
}
At some point I simplified the structure and converted it to a single project. Detekt continued to run but did not analyse anything, and I had not detekted (:wink:) that for a long time.
I can't reproduce your issue. Which version of detekt are you using? Could you provide a sample project to demonstrate this?
The latest one I reckon, you can try out https://github.com/robstoll/niok/commit/c4917338abc38bfe0d008b40cba8afa02380d558
With the next commit I fixed the input again and I had to fix a few issues detekt detected:
https://github.com/robstoll/niok/commit/fb376a43c86b571c98896e78389e491ac4fbaf8e
How should a computer program detect whether such a configuration was on purpose or not? It's entirely possible that projects don't contain Kotlin but rather Java, Scala or source code of other JVM languages.
Detekt reports in all kinds of flavours show how many files have been analyzed.
I'm thinking about the following cases, where detekt is frequently used.
Template projects that as the name suggests contain no Kotlin sources.
Not every single project in multi-projects contains Kotlin sources.
Detekt task exits at least with a non-zero code
Why should it exit with a different code? Which other static source code analyzers yield an error due to the set of input source files being empty?
It's entirely possible that projects don't contain Kotlin but rather Java, Scala or source code of other JVM languages.
I don't get why you think I am talking about Kotlin only here. Is detekt { input = ... } only for Kotlin?
Anyway, I am not talking about any specific language. The use case I am talking about here is that I have messed up input so that detekt has basically 0 files to analyse. That's almost always an error and not on purpose; otherwise there is no point in setting up detekt at all IMO. There is also no need for a flag such as allow-0-files IMO; one can implement such a check on one's own and simply not apply the detekt plugin in such cases.
Why should it exit with a different code?
I guess you are in a sarcastic mood or something and can answer yourself why a program which has an erroneous setup exits with something different than 0 :wink:
I don't get why you think I am talking about Kotlin only here. Is detekt { input = ... } only for Kotlin?
Suppose input = mydir:
How should detekt know whether mydir contains no Kotlin source files on purpose or not?
It's entirely possible that a directory without Kotlin code at the given point in time is configured on purpose.
input so that detekt has basically 0 files to analyse.
This can be seen in the built-in reports of detekt. Why should detekt terminate because of that?
By the way, you can implement this behavior with detekt's custom report feature. If there is actually 0 analyzed code, you could throw an exception.
I guess you are in a sarcastic mood or something and can answer yourself why a program which has an erroneous setup exits with something different than 0 😉
Excuse me, this wasn't meant to be sarcastic. It was a serious question.
Which other static source code analyzers yield an error and terminate due to the set of input source files being empty?
It's entirely possible that a directory without Kotlin code at the given point in time is configured on purpose.
I agree that the directory can be empty at the point of configuration, but when running detekt, IMO detekt should fail if it does not need to analyse anything at all. I don't know detekt well enough; maybe my assumption that input = can only be specified once leads to the confusion here. In other words, maybe I should change the title of the issue to "throw if detekt does not need to analyse any files" (again, I am not talking about Kotlin files exclusively, I am talking about 0 files of any kind).
Which other static source code analyzers yield an error and terminate due to the set of input source files being empty?
I don't know any off the top of my head, but I think it makes sense to error on misconfiguration instead of happily exiting with 0, which in turn means the build/CI will not fail, giving the dev creating e.g. a PR, as well as the reviewer, a false view of the actual state (i.e. that the source code might be full of violations).
I know a test-runner which exits with 0 if it cannot find any tests to execute, and IMO this is the same use case. JUnit 5 fails if --fail-if-no-tests is specified.
I know a test-runner which exits with 2 if it cannot find any tests to execute and IMO this is the same use case. junit5 fails if --fail-if-no-tests is specified
Technically we could add a flag in the config file to achieve this. I'm unsure of the usefulness of this flag as it seems more an edge case to me 🤔
Also JUnit 5, as you mentioned, is returning a success if there are no tests to run (as they have the aforementioned flag).
Moreover the detekt task is a SourceTask. I don't know your exact detekt configuration, but you can probably add a doLast that will fail the task if source is empty or not.
First of all, feel free to close this issue; it's an idea, but I can live without it (I can add my own check).
IMO a static analysis tool which tries to detect problems should also detect a wrong setup, which is a problem as well, and IMO a big one, because it covers up all the real problems sitting there in the code. IMO this check should not be behind a flag, but better behind a flag than nothing.
I don't know your exact detekt configuration
This was the erroneous configuration:
https://github.com/robstoll/niok/commit/fb376a43c86b571c98896e78389e491ac4fbaf8e#diff-49a96e7eea8a94af862798a45174e6ac43eb4f8b4bd40759b5da63ba31ec3ef7L71
As you can see, I misconfigured input and pointed it to an empty list. The project once had subprojects, where this configuration was correct, but not anymore, and I forgot to change it when I refactored the project to a single project. Since detekt was always green, I did not detect it for a long time.
I've done a bit more research on this front just to understand what was happening.
but you can probably add a doLast that will fail the task if source is empty or not.
This is the check you can add to your build.gradle if you want your build to fail once the input is empty.
gradle.taskGraph.afterTask {
    if (it.state.noSource && it.path == ":detekt") {
        throw new StopExecutionException("Detekt has an empty input")
    }
}
What happened in your case was that you provided an empty input for detekt. The detekt task is a SourceTask that exits with the NO-SOURCE status if the input is empty. Technically our code never runs; Gradle just realises that the task has no input, so it can be skipped.
This makes adding a failOnEmptyInput config property even more complicated, as the culprit in this case is Gradle and how task execution is computed.
IMO a static analysis tool which tries to detect problems should also detect a wrong setup which is a problem as well and IMO a big problem because it covers up all real problems sitting there in the code.
Agree. Though in this specific case you instructed Detekt to pick an input that turned out to be empty. I have several examples of Gradle modules that have no source code but have Detekt applied (a BOM, an Android resource-only module, etc.). For those modules it is totally reasonable to have Detekt just be skipped and result in a success.
The problem was that what you consider a "wrong setup" could be valid instead for another use case.
What we could do is list the snippet I posted in our official documentation, so others can benefit from it.
Thanks for the analysis. Surely good to include the snippet 🙂👍
Personally, I would add a flag doNotFailOnEmptyInput which one would need to use in the case of a BOM project or similar. I would even go as far as to not provide a flag at all. Instead, such projects should simply not apply detekt because it's pointless there. Or does detekt also check non-source-related stuff?
Instead, such projects should simply not apply detekt because it's pointless there
Agree, but if you use subprojects {} or allprojects {} block in your top level build.gradle file (as a lot of our users are doing), you're applying the plugin to all the modules.
The current behavior makes sure those modules with no source are not breaking your overall builds.
I most of the time use subprojects or configure(...) instead of a build.gradle in the subproject. So I would do the following
configure(subprojects.filter { !it.name.contains("-bom") }) {
    apply(...)
}
// or
subprojects {
    if (!it.name.contains("-bom")) apply(...)
}
And there you have your flag. No big deal IMO. But I see that you are hesitant to take a more restrictive approach than the current one in the sense of fail-if-no-input by default. Fine with me, I brought up my points, I think it's clear by now that both approaches require more or less the same amount of implementation in detekt and for the workaround. In the end, members of detekt need to decide more on a principal level IMO.
I agree with @schalkms and @cortinico. Your points make sense, but it could break other users' flows. It seems that the use cases where you want empty source sets are legit. So I'm going to close this issue.
I do appreciate this kind of UX-related report. Few people report things like this. But I think that in this case it's better to keep the plugin as it is now.
| gharchive/issue | 2021-01-04T20:56:46 | 2025-04-01T06:38:22.753363 | {
"authors": [
"BraisGabin",
"cortinico",
"robstoll",
"schalkms"
],
"repo": "detekt/detekt",
"url": "https://github.com/detekt/detekt/issues/3344",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2063828262 | Exclude FirErrors class from JaCoCo instrumentation to avoid ASM MethodTooLargeException
This is required for code coverage to work correctly when building with the K2 compiler.
Warnings
:warning:
This PR is approved with no milestone set. If merged, it won't appear in the detekt release notes.
Generated by :no_entry_sign: dangerJS against 1971bee72b275a141265039ace7d6bc98faeaa75
| gharchive/pull-request | 2024-01-03T11:26:03 | 2025-04-01T06:38:22.756251 | {
"authors": [
"3flex",
"detekt-ci"
],
"repo": "detekt/detekt",
"url": "https://github.com/detekt/detekt/pull/6802",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1624466483 | Use self-hosted fonts instead of web-fonts.
The Poppins web-fonts served by fonts.google.com are replaced by self-hosted fonts.
This PR is no longer necessary, as it does not fit into the folder structure of the project, and a font picker for user-defined fonts has been implemented.
| gharchive/pull-request | 2023-03-14T23:41:35 | 2025-04-01T06:38:22.762003 | {
"authors": [
"SubOptimal",
"dev-lu"
],
"repo": "dev-lu/osint_toolkit",
"url": "https://github.com/dev-lu/osint_toolkit/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1770862893 | 🛑 Gestor Nuevo is down
In 9e476fd, Gestor Nuevo (https://admin.okticket.es/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Gestor Nuevo is back up in 7301cec.
| gharchive/issue | 2023-06-23T06:28:24 | 2025-04-01T06:38:22.764298 | {
"authors": [
"dev-okticket"
],
"repo": "dev-okticket/status",
"url": "https://github.com/dev-okticket/status/issues/345",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2738490438 | [Question] How to build the image in a remote host
Hi,
We started to adopt devcontainers at work and so far, it has been a smooth experience. Our development setup is as follows:
Developers have their own Windows virtual device
Developers connect via SSH (with VS Code) to a remote Linux machine where development happens
We are migrating a project to a devcontainer whose Dockerfile requires an environment variable to be passed as a build argument, like:
docker build --build-arg MY_VAR=$VAR_VALUE .
The MY_VAR variable exists on the Linux machines. When the devcontainer is built from the Linux machine, it finishes correctly. However, when the devcontainer is built from the Windows machine (using the VS Code devcontainers extension), the build fails because that environment variable is empty.
Creating the environment variable in the Windows machine is not an option.
Is there a way to make sure that the devcontainer is always built and run in the Linux machine, even when it is built from the VS Code instance running in the Windows machine?
I'll reply to myself with the details of the actual problem and how we solved it, just in case it helps others.
When connecting from the Windows machine via SSH to the Linux machine, the Docker image build happens on the Linux machine. We were under the impression that at least the variables were injected from the Windows machine, but that is not the case.
The environment variables are present on the Linux machine when the user's ~/.profile and ~/.bashrc are loaded. But our Linux machines have a managed identity (we authenticate against an AD server). That detail is important, because when the build process is started by VS Code's devcontainer plugin, it looks for the login shell of the current user. This thread on StackOverflow put me on the right track.
In other words: the environment variables are empty because the shell used by the devcontainer process, /bin/sh, does not load the user's profile files.
For loading environment variables in that kind of scenario, Docker Compose with an .env file would be an option.
In our case, we needed the variable to include an extra pip index. We solved it in two steps:
In the devcontainer.json we include the user's pip configuration folder as extra context:
"build": {
"dockerfile": "../Dockerfile",
"context": "..",
"options": [
"--build-context=user_home=${localEnv:HOME}/.config/pip"
]
Within the Dockerfile, we copy the pip.conf file into the image, install the packages, and remove the config file
# Point pip at the copied configuration for the duration of the build
ENV PIP_CONFIG_FILE=/tmp/pip.conf
# Pull pip.conf from the extra "user_home" build context declared above
COPY --from=user_home pip.conf /tmp/pip.conf
# Install the Python packages
# Drop the config file so the extra index does not end up in the final image
RUN rm /tmp/pip.conf
| gharchive/issue | 2024-12-13T14:10:36 | 2025-04-01T06:38:22.785079 | {
"authors": [
"mmartinortiz"
],
"repo": "devcontainers/cli",
"url": "https://github.com/devcontainers/cli/issues/940",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1561384755 | Image list for Jekyll links to image list for Ruby
On https://github.com/devcontainers/images/tree/main/src/jekyll, the "full list" link points to the image list for Ruby, not for Jekyll.
Closing as completed with https://github.com/devcontainers/images/pull/389
| gharchive/issue | 2023-01-29T17:30:13 | 2025-04-01T06:38:22.786910 | {
"authors": [
"samruddhikhandale",
"tudortimi"
],
"repo": "devcontainers/images",
"url": "https://github.com/devcontainers/images/issues/384",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2060948697 | Create devcontainer.json
"ghcr.io/devcontainers/features/aws-cli:1": {}
Thanks again for your interest in contributing. As I mentioned in the other couple of PRs, I'm going to close this one, as I don't think it's intended for this repo.
| gharchive/pull-request | 2023-12-31T05:39:18 | 2025-04-01T06:38:22.787819 | {
"authors": [
"Jordanwaslistening",
"bamurtaugh"
],
"repo": "devcontainers/spec",
"url": "https://github.com/devcontainers/spec/pull/364",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1523615571 | Starter code for website of Dino
Subject of the issue
Construct the code for the website as per the design mockup.
Prefer HTML, CSS, and JS rather than any framework (up for discussion if you want)
@developer-diganta Can I try it?
Sorry @DHRUVKHANDELWAL00. I had been discussing this issue with @developer-diganta even before SWOC started, so he is going to assign this work to me. I have been looking forward to working on this issue for quite some time. So, I am sorry, but you can look out for other issues.
Okay, Manav. Also, no need to say sorry @ManavLohia945.
Thanks @DHRUVKHANDELWAL00 for understanding! @ManavLohia945 assigned
| gharchive/issue | 2023-01-07T08:37:14 | 2025-04-01T06:38:22.789863 | {
"authors": [
"DHRUVKHANDELWAL00",
"ManavLohia945",
"developer-diganta"
],
"repo": "developer-diganta/Dino",
"url": "https://github.com/developer-diganta/Dino/issues/59",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
456794439 | Acquiring computer science books
Some of this material comes from a share by Chen Ren (陈壬) of Toutiao iOS.
Prentice Hall, 1913
A top-tier publisher of university textbooks worldwide
Academic in style (London)
Prentice Hall's subsidiary Reston Publishing was at the forefront of technology book publishing when the personal PC first appeared
Pearson Education, 1942
Includes Prentice Hall, Addison Wesley, Longman, and others
Acquired Simon & Schuster (Prentice Hall) in 1998
Addison Wesley, 1942
A computer science textbook publisher with a long history dating back to 1942; a deep catalog of classics and a major source of books by Turing Award winners
Strongly academic in style, with a wealth of renowned titles
MIT Press
MIT Press originates from the Massachusetts Institute of Technology
Focuses on professional-grade books in science and technology
More specialized, with a higher degree of academic and theoretical depth
Morgan Kaufmann
A professional imprint under Elsevier; topic selection leans toward academic depth and rigor, in the academic, depth-first tradition
Its books on human-computer interaction and systems design are especially classic; nearly every one is a gem
McGraw Hill, 1917
An internationally renowned education, information, and financial services group
Its engineering books lean toward training and quick onboarding, but it also publishes plenty of classic textbooks
O'Reilly Media, Inc. (1978)
A publisher with a leading position in books on UNIX, X, the Internet, and other open systems, and a pioneer of online publishing
From the best-selling Whole Internet User's Guide and Catalog, to GNN (the earliest Internet portal and commercial website), to WebSite (the first web server software for desktop PCs), it has stayed at the leading edge of Internet development
Leans toward concrete, practical technical references; content is fresh, fast-moving, and closely tied to industry trends, and its In a Nutshell, Definitive Guide, Cookbook, and Head First series all contain classics
The animal books! Oriented toward learning and reference rather than academia
Manning Publications, 1990
On computer technology topics, with a particular focus on web development
Manning mainly publishes applied technical books aimed at programmers; it is best known for the In Action series
Manning is a small-but-refined specialist publisher that often covers niche or early-stage technology areas; books on technologies such as iText and Lucene were first brought to market by Manning
Packt Publishing
Currently one of the world's fastest-growing technical book publishers, with one of the richest product catalogs
Technology Books, eBooks & Videos
Customarily publishes an annual technology report
Focuses on practicality; the reader's ultimate goal is to get the job done
No Starch Press, 1994
Geared towards the geek, hacker, and DIY subcultures
The Pragmatic Programmers, 1999
The Pragmatic Bookshelf. Great content, by developers for developers.
Apress, 1999
Mainly IT technology titles, with a largely uniform cover design whose artwork varies by topic; detailed content, well suited to self-study
Wrox Press, 1992
Aimed at programmers, with the creative philosophy "by programmers, for programmers"
Its red-cover book series was once wildly popular; acquired by Wiley in 2003
Covers all the major areas of software development, including C, C++, PHP, Oracle, SQL Server, Java, and .NET
Helps beginners get started, and the books often contain large blocks of code; some advanced topics are covered as well
Springer Science+Business Media, 1842
One of the world's largest science and technology publishers, world-famous for its academic publications and among the earliest to release electronic editions of print journals
Academic through and through, with an intensely scholarly atmosphere; its mathematics and science titles are exceptionally classic
| gharchive/issue | 2019-06-17T07:59:16 | 2025-04-01T06:38:22.799687 | {
"authors": [
"JackDrogon"
],
"repo": "developer-learning/reading-go",
"url": "https://github.com/developer-learning/reading-go/issues/415",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
396281790 | Providing a minified mjs bundle
It's recommended to use preact.min.js instead of preact.js in production builds due to the file size reduction from the property mangling: https://github.com/developit/preact/blob/87a4ebe99dc068eaeb8503644c60ffe8ad735771/config/properties.json#L1-L27
This works well if you are using a regular script tag to pull preact from the CDN and relying on the preact global, or if you are bundling with webpack.
In that case, you can successfully import the unminified preact.mjs bundle, as seen here:
http://jsfiddle.net/tpck6Lf4/
But since preact.min.js is not a native ES module, you cannot change it like this:
- import { Component, h, render } from 'https://npmcdn.com/preact@8.4.2/dist/preact.mjs'
+ import { Component, h, render } from 'https://npmcdn.com/preact@8.4.2/dist/preact.min.js'
(you'd get SyntaxError: The requested module 'https://npmcdn.com/preact@8.4.2/dist/preact.min.js' does not provide an export named 'h')
It'd be awesome if there was a corresponding preact.min.mjs that we could import from in production builds to take advantage of the reduced file size.
One way to do this might be to switch from uglify to terser, whose minifier understands ES2015 syntax. Alternatively, perhaps we could minify the code with the export statements stripped out and then add them back. Thoughts?
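For what it's worth, a hedged sketch of the terser route (plain CLI flags; wiring in the property-mangling list from properties.json is left out here and would need extra configuration):
# terser understands ES module syntax, so the export statements survive minification
npx terser dist/preact.mjs --module --compress --mangle -o dist/preact.min.mjs
That would give consumers a dist/preact.min.mjs to import from in production builds.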
We published an alpha version for Preact X just a few hours ago. It's available on npm via the preact@next tag and ships with a minified mjs bundle 💯
| gharchive/issue | 2019-01-06T18:25:40 | 2025-04-01T06:38:22.814639 | {
"authors": [
"marvinhagemeister",
"rmacklin"
],
"repo": "developit/preact",
"url": "https://github.com/developit/preact/issues/1285",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
20310370 | Mark event to Daily / Weekly / Monthly / Yearly
Hello,
i have create event on 1st September, 2013 for weekly intervals. so my mark date are 1, 8, 15, 22, 29.... so how can i implement this thing using this library. i am stuck at this point. which method of your library help me to resolve this issue. please help me as soon as possible.
Thank you so much in advance.....
yea the same question http://stackoverflow.com/users/587415/matrosov-alexander )
| gharchive/issue | 2013-10-01T05:04:51 | 2025-04-01T06:38:22.845699 | {
"authors": [
"matrosovDev",
"mital87"
],
"repo": "devinross/tapkulibrary",
"url": "https://github.com/devinross/tapkulibrary/issues/264",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |