id | text | source | created | added | metadata
---|---|---|---|---|---
1001958542 | Add a release workflow
Adds an automated release workflow. So far this only works for publishing a Ruby gem, and will need to be extended to publish other language/platform packages.
See https://github.com/cucumber/common/issues/1688
Checklist:
[x] Generate a new RubyGems API token and add it as RUBYGEMS_API_KEY secret in the Release environment.
[x] Generate a new NPM API token and add it as NPM_TOKEN secret in the Release environment.
[x] Add Go publishing
[x] Add Java publishing
[x] Add JavaScript publishing
Great work @mattwynne. I have many questions.
Where in the process is a git tag created?
Who will be the creator of the git tag?
Will that git tag be signed?
Will the git tag be on a commit on main or on releases/*?
Do we want to be able to use the Merge Button to merge unsigned PRs?
Are we ok with unsigned commits on main?
Do we want to make sure all commits on releases/* are signed? If so, do we need to squash-merge from main?
The workflow uses various cucumber-actions/* - let's add some links to https://github.com/cucumber-actions in the scripts. (Can you invite me again to that org @mattwynne)
Where is the release version number picked up from? The branch name? If so, what happens when a 2nd commit is pushed to the same releases/* branch?
I feel that the tag should be created locally by the person triggering the release, and that the release action should only be triggered when a new tag is pushed to the releases/* branch (rather than for every commit pushed to the branch).
Another thing to consider is retries. The package managers are occasionally down or unresponsive (especially sonatype, where maven artifacts are published). If a tag is pushed, and the release action fails, we should be able to re-trigger the action manually via GitHub's web interface.
Would it make sense to have separate release-java.yml, release-ruby.yml etc workflows? That way we can manually rerun just the ones that have failed, and not the others.
@mattwynne we have made some tests here: https://github.com/cucumber/release-tests
@aurelien-reeves I like the approach of splitting the release / publish workflows and using the release published event to trigger the package publishing!
Let's pair / ensemble on this on Wednesday?
I've added an NPM publish action and the secret for it, so Ruby and NPM are good to go.
Thanks for the explanations @mattwynne - makes a lot more sense to me now.
For Maven publishing we can try https://github.com/marketplace/actions/action-maven-publish (there are more here)
For Go releases we just need to create and push a go/v$version git tag.
If one of the publish actions fails, I don't think we have an automated way to retry. That worries me.
The version numbers in the various package descriptors (and go code) will have to be bumped before we start the release process. We should figure out how to automate this. Maybe we can reuse some of the scripts from common.
We need to add several secrets to the Release environment. We should document how to do this, and where to find the secrets. I assume we can get most of them from the secrets repo, but the NPM_TOKEN is not in that repo.
@mattwynne I assume you added a new one in https://www.npmjs.com/settings/cukebot/tokens - but once added they cannot be read. If we want a token per repo, that's fine, but then the tokens must be named. Right now there are way too many tokens in there - we should delete as many as possible and start fresh.
We need to merge https://github.com/cucumber-actions/create-release/pull/6 and make a new release of that action before we can merge this PR. Then I think we can test the release process.
We need to add several secrets to the Release environment. We should document how to do this, and where to find the secrets. I assume we can get most of them from the secrets repo, but the NPM_TOKEN is not in that repo.
Yes, we need to document how to do it. Where would be a good place do you think?
No, they've all been generated fresh. I think this is good practice.
@mattwynne I assume you added a new one in https://www.npmjs.com/settings/cukebot/tokens - but once added they cannot be read. If we want a token per repo, that's fine, but then the tokens must be named. Right now there are way too many tokens in there - we should delete as many as possible and start fresh.
As far as I can tell there's no facility for naming tokens in npmjs.com 🤷 FWIW the one I generated for this repo is the one starting 1604. The other recent one (starting 95d2) is used by the tests for the https://github.com/cucumber-actions/publish-npm/ action.
I have no idea what the others are for, or why there are so many. I would assume that once we've finished automating all the releases of our javascript packages we can just delete all the older ones.
If one of the publish actions fails, I don't think we have an automated way to retry. That worries me.
Do you mean you want to automatically retry if the publish fails? Or you want to be able to manually re-run the automated release workflow? I expect you mean the latter.
I think we have a couple of approaches we could take:
Make sure that the publish jobs are idempotent, so we can just re-run the whole workflow if one of them fails.
Split the different platform publish jobs into different workflows, so we can easily re-run individual ones.
I have a slight preference for (1) since making them idempotent feels sensible anyway in case something gets triggered unintentionally, but (2) is probably a quicker win. I suggest we do that for now.
I liked what you did in https://github.com/cucumber/release-tests to chain the actual publish jobs off of the release published event. Maybe we could do that?
If one of the publish actions fails, I don't think we have an automated way to retry. That worries me.
Do you mean you want to automatically retry if the publish fails? Or you want to be able to manually re-run the automated release workflow? I expect you mean the latter.
I meant manually retrying a publish for just a single language.
I think we have a couple of approaches we could take:
1. Make sure that the publish jobs are idempotent, so we can just re-run the whole workflow if one of them fails.
The whole workflow also includes create-release, so that would have to be idempotent too. One way to make them idempotent and still error when something is genuinely wrong is to check whether the release has already been made and, if so, do nothing. That means querying GitHub releases, RubyGems, npm, Nexus/Sonatype (Maven) etc.
They are hopefully all easy to query, but I'm worried about nexus/sonatype - not sure if it's possible there.
2. Split the different platform publish jobs into different workflows, so we can easily re-run individual ones.
This would be my preference. @aurelien-reeves and I experimented with this yesterday in https://github.com/cucumber/release-tests/tree/main/.github/workflows (a throwaway repo).
We tried to make the publish-* workflows trigger on the on.release event, but couldn't make it work. I'm not even sure we can make it work - we still want to make sure the job only runs in the Release environment, and I don't see how that could be activated unless the event is a on.push to a release/* branch.
I have a slight preference for (1) since making them idempotent feels sensible anyway in case something gets triggered unintentionally, but (2) is probably a quicker win. I suggest we do that for now.
I liked what you did in https://github.com/cucumber/release-tests to chain the actual publish jobs off of the release published event. Maybe we could do that?
Not sure - see comment above.
I'm not even sure we can make it work - we still want to make sure the job only runs in the Release environment, and I don't see how that could be activated unless the event is a on.push to a release/* branch.
Ah yes. We don't have enough security around Releases - they could be created manually by anyone with the commit bit.
So we need to stick to triggering these when there's a push to the protected branch.
The whole workflow also includes create-release, so that would have to be idempotent too. One way to make them idempotent and still error when something is genuinely wrong is to check whether the release has already been made and, if so, do nothing. That means querying GitHub releases, RubyGems, npm, Nexus/Sonatype (Maven) etc.
I've actually done this already in the tests for the rubygems and npm publish actions.
They are hopefully all easy to query, but I'm worried about nexus/sonatype - not sure if it's possible there.
Yeah that's a risk until we've done it.
curl https://search.maven.org/solrsearch/select\?q\=cucumber\&wt\=json looks pretty good (from here)?
I liked what you did in https://github.com/cucumber/release-tests to chain the actual publish jobs off of the release published event. Maybe we could do that?
Not sure - see comment above.
I reckon we can have several separate workflows all going off the push event on the release/* branches, which should give us the best of both worlds.
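To make the idempotency idea above concrete, a publish job could ask the registry whether the version already exists before pushing anything. Here is a rough TypeScript sketch of such a check; this is not the actual cucumber-actions code, the CLI convention is made up for illustration, and it assumes Node 18+ for the global fetch. The npm and RubyGems endpoints are their public JSON APIs:
// check-published.ts -- rough sketch only, not the actual cucumber-actions implementation.
// Prints whether a given version is already on the registry so a workflow step can skip publishing.

async function npmVersionExists(pkg: string, version: string): Promise<boolean> {
  // The npm registry answers 200 for an existing package version and 404 otherwise.
  // (Scoped package names would need URL-encoding.)
  const res = await fetch(`https://registry.npmjs.org/${pkg}/${version}`);
  return res.status === 200;
}

async function rubygemsVersionExists(gem: string, version: string): Promise<boolean> {
  // RubyGems lists every published version of a gem as a JSON array.
  const res = await fetch(`https://rubygems.org/api/v1/versions/${gem}.json`);
  if (!res.ok) return false;
  const versions: Array<{ number: string }> = await res.json();
  return versions.some((v) => v.number === version);
}

async function main(): Promise<void> {
  // Hypothetical CLI: check-published <npm|rubygems> <name> <version>
  const [registry, name, version] = process.argv.slice(2);
  const exists =
    registry === "npm"
      ? await npmVersionExists(name, version)
      : await rubygemsVersionExists(name, version);
  console.log(exists ? `${name} ${version} is already published; skip` : `${name} ${version} needs publishing`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});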
I think this is good to go, eh? We can easily / safely factor out a publish-go action later.
I guess you have heavily tested the releases workflows as part of https://github.com/cucumber/release-tests?
If so, it looks good to me :)
| gharchive/pull-request | 2021-09-21T06:55:38 | 2025-04-01T06:38:18.437810 | {
"authors": [
"aslakhellesoy",
"aurelien-reeves",
"mattwynne"
],
"repo": "cucumber/cucumber-expressions",
"url": "https://github.com/cucumber/cucumber-expressions/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
234971958 | shared binary
The following is a proposal for how cucumber could be built in the future. There is a binary that is responsible for all shared logic (CLI parsing, reading gherkin files, compiling pickles, basic infrastructure running, formatting results). This binary would be written in go or some other language that allows us to compile binaries for each OS. Then there is a series of language specific libraries which use the binary to run tests.
Logic for the language specific libraries:
If needed, fetch operating system specific binary (this could also be done during the install process where possible)
Store in .cucumber/
Execute binary as a child process (passing all command line options). Communicate with the binary over stdin / stdout via lines of JSON. The binary requests the language specific library to do the following things:
Load support code
Request includes: paths to load
Reply with
array of step definitions (id, expression text, expression type)
array of parameter types (id, regular expression)
Setup test run
Run before all hooks - reply with possible error when done
Setup test case
Run before each hooks - reply with possible error when done
Run test step
Request includes
Id
Arguments (each has a parameter type id if it needs to be transformed)
Run step with given id - reply with possible error when done
Teardown test case
Run after each hooks - reply with possible error when done
Teardown test run
Run after all hooks - reply with possible error when done
Print to stdout
Request includes: text
Print to stderr
Request includes: text
Exit
Request includes: exit code
Generate step definition snippet
Request includes:
Text
Type - cucumber expression or regular expression
Reply with a step definition snippet
This is similar to the cucumber wire protocol but built in a way that the ruby dependency is removed and allows language specific code to be the entry point.
I want to spike this out on cucumber-js with the shared binary being written in node for the purposes of the spike. Primary goals would be to pull the language specific code and see how the interface feels.
I’m uncertain how custom formatters / plugins would fit into this but those could potentially be done with a formatter streaming to a named pipe.
Please bring up any issues you can think of. I’m sure there are features in some of the other cucumber flavors that I am unaware of.
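As a sketch of what the language-library side of this could look like (in the spirit of the node spike mentioned above), the library spawns the binary and answers its requests line by line over stdin/stdout. The binary path and the request/reply names and shapes below are assumptions for illustration, not a defined protocol:
// Sketch of a language-specific library driving the shared cucumber binary.
// The binary path and the message names/shapes are assumptions, not a defined protocol.
import { spawn } from "node:child_process";
import * as readline from "node:readline";

type Request =
  | { type: "load_support_code"; paths: string[] }
  | { type: "run_test_step"; id: string; args: string[] }
  | { type: "print"; stream: "stdout" | "stderr"; text: string }
  | { type: "exit"; exitCode: number };

const binary = spawn(".cucumber/cucumber-binary", process.argv.slice(2));
const reply = (message: unknown) => binary.stdin!.write(JSON.stringify(message) + "\n");

// One JSON request per line on the binary's stdout; one JSON reply per line on its stdin.
const rl = readline.createInterface({ input: binary.stdout! });
rl.on("line", (line) => {
  const request = JSON.parse(line) as Request;
  switch (request.type) {
    case "load_support_code":
      // import the user's support code here, then describe it back to the binary
      reply({ stepDefinitions: [], parameterTypes: [] });
      break;
    case "run_test_step":
      // look up the step definition by id, invoke it, and report success or the error
      reply({ error: null });
      break;
    case "print":
      (request.stream === "stdout" ? process.stdout : process.stderr).write(request.text);
      break;
    case "exit":
      process.exit(request.exitCode);
  }
});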
Thanks for bringing this up @charlierudolph! I think this would be a great topic for discussion at Cukenspace (preferably Sunday as I'm busy Saturday).
I have a lot of ideas that are similar to yours and I'll dump them here when I find time, hopefully before cukenspace.
Neither Cucumber-Ruby nor Cucumber-JVM has built-in support for parallel execution, but there are third-party solutions for achieving that (parallel_cucumber, cucumber-jvm-parallel-plugin). In that direction, I have imagined what massive parallel execution on a computing grid could look like. We already have the Pickle abstraction and the Gherkin to Pickle compiler implemented in several languages, so it seems natural to use that as a foundation. I imagine a central controller dispatching Pickle Events to the different executors in the computing grid; the executors create Test Cases from the Pickles, execute them and pass back a Test Case Result Event.
In a more detailed view I imagine that an executor is generating (the normal) Test Case Started/Finished, and Test Step Started/Finished Events, but over the network they are compressed to a Test Case Result Event, from which the original events can be recreated on the controller side.
Controller               | Network                | Executor(s)
                         | Pickle Event           |
                         | ---------------->      |
                         |                        | Test Case Started Event
                         |                        | <--------------------
                         |                        | Test Step Started Event
                         |                        | <--------------------
                         |                        | Test Step Finished Event
                         |                        | <--------------------
                         |                        | Test Case Finished Event
                         |                        | <--------------------
                         | Test Case Result Event |
                         | <--------------------  |
Test Case Started Event  |                        |
<--------------------    |                        |
Test Step Started Event  |                        |
<--------------------    |                        |
Test Step Finished Event |                        |
<--------------------    |                        |
Test Case Finished Event |                        |
<--------------------    |                        |
This way fewer events will be sent over the network (which may or may not be important), and formatters listening to events on the controller have the events from test case execution serialized, which definitely simplifies their implementation (a rough sketch of this compression follows below).
With this structure it would be possible to
execute suites in parallel, that is, some executors execute end-to-end and others execute directly against the domain layer.
have the executors use different languages for step definitions, e.g. in the suites case the end-to-end testing uses one step definition language and the domain layer testing uses another.
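As a rough TypeScript illustration of the compression idea above (the event shapes here are assumptions, not the real cucumber event protocol), the executor wraps the per-pickle events into one result message and the controller replays them for its local formatters:
// Sketch of compressing executor-side events into one Test Case Result Event
// and recreating them on the controller. Shapes are illustrative assumptions only.
interface Event { type: string; pickleId: string; timestamp: number; data?: unknown }

interface TestCaseResultEvent {
  type: "test-case-result";
  pickleId: string;
  // the original started/finished events, in order, so the controller can re-emit them
  events: Event[];
}

// Executor side: wrap everything that happened for one pickle into a single network message.
function compress(pickleId: string, events: Event[]): TestCaseResultEvent {
  return { type: "test-case-result", pickleId, events };
}

// Controller side: recreate the original event stream for formatters listening locally.
function replay(result: TestCaseResultEvent, emit: (e: Event) => void): void {
  for (const event of result.events) emit(event);
}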
Cucumber is a good library for connecting testers and developers.
Using Cucumber, developers can write less code while testers add more test cases.
I always suggest that my team and our customers use Cucumber to improve poor unit tests.
I don't like Gauge because it cannot be embedded in source code; it's not friendly for developers.
Some brief thoughts from me to share before we get to talk about this face-to-face.
I like the idea of having fewer codebases doing the same / similar thing in different languages
I like the idea of making behaviour more consistent across implementations
I worry that we'd inhibit contributions to Cucumber by using a lower-level language like Go for most of the guts of it.
I worry that trying to build one giant binary that does everything might be a big project
I worry that if we end up having to start servers or whatever in order to "serve" platform-specific glue code to the cross-platform binary, the overall UX might end up clunky. That's why SpecFlow turned out to be better than Cuke4Nuke.
I think I'd like to see us figure out where the seams are, and think about budding off smaller pieces, like a console formatter binary/ies that takes a stream of results NDJSON and emits a stream of text, or a runtime binary that takes a stream of pickles as input, calls step definitions and emits results NDJSON. To me, building smaller, composable pieces like this would be a less risky approach. Gradually hollowing out the individual implementations, rather than trying to replace them in one go.
I worry that if we end up having to start servers or whatever in order to "serve" platform-specific glue code to the cross-platform binary, the overall UX might end up clunky. That's why SpecFlow turned out to be better than Cuke4Nuke.
I was thinking of hosting the binaries on GitHub attached to releases (example). The individual libraries can pack the binaries with their source code if they like. I was against that to start with because it means a user has the binaries for all supported OSes on their machine when they only need one. Pulling the binary can be done during the install process where possible, or it can be pulled down and cached locally on first run.
Attempting to split this large binary into smaller pieces I think we could potentially break it into the following:
gherkin-parser binary
input
current working directory
feature paths
tag filters
name filters
output
streaming event protocol for events of type: 'source', 'gherkin-document', 'pickle', 'pickle-accepted', 'pickle-rejected'
pickle-runner binary
input
accepted pickles
other runtime config (fail-fast, dry-run)
step definition configs (for each step definition it needs an id and the pattern)
hook definition configs (for each hook definition it needs an id and the tags)
interacts with the calling process, telling it to run hooks / steps, and is given back the results
output
streams event protocol for events of type: 'test-case-prepared', 'test-case-started', 'test-step-started', 'test-step-finished', 'test-case-finished'
formatter-binary - probably one binary for each type or a single binary with all the built-ins (a rough sketch follows below)
input
stream of event protocol
output
stream of text
The first is mostly extracted with the gherkin library but requires implementations in every language and we could shave that down to 1. After implementing the event protocol on cucumber-js, a lot of logic moved into the formatter as that was the only place that needed it.
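As a rough sketch of that formatter-binary idea (the event type names follow the list above, but the field names are assumptions), a progress formatter could read one JSON event per line from stdin and write plain text to stdout:
// progress-formatter.ts -- illustrative sketch of a formatter consuming the event stream.
import * as readline from "node:readline";

const rl = readline.createInterface({ input: process.stdin });

let passed = 0;
let failed = 0;

rl.on("line", (line) => {
  const event = JSON.parse(line); // one event per line (NDJSON)
  if (event.type === "test-step-finished") {
    if (event.result === "passed") {
      passed += 1;
      process.stdout.write(".");
    } else {
      failed += 1;
      process.stdout.write("F");
    }
  }
});

rl.on("close", () => {
  process.stdout.write(`\n${passed} passed, ${failed} failed\n`);
});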
Yeah the interesting part of this will be building the pickle-runner binary.
@charlierudolph shall we open this issue again?
Want to join the Slack discussion? https://cucumberbdd.slack.com/archives/C62D0FK0E/p1520406289000034
This is currently in progress over at https://github.com/cucumber/cucumber-pickle-runner
/cc @mpkorstanje
Currently, the pickle runner binary is a WIP I will be experimenting with on cucumber-js, once it gets to a workable state. I am happy to get feedback if the current api is not sufficient for a particular implementation.
I'm closing this as cucumber-engine is now a thing.
Please see the roadmap for details about how we'll get there.
Woohoo! So glad to have this rolling now!
Big thanks to you @charlierudolph for putting so much momentum behind it!
| gharchive/issue | 2017-06-10T01:54:31 | 2025-04-01T06:38:18.471629 | {
"authors": [
"aslakhellesoy",
"brasmusson",
"charlierudolph",
"lxbzmy",
"mattwynne"
],
"repo": "cucumber/cucumber",
"url": "https://github.com/cucumber/cucumber/issues/221",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1066891953 | fix: version command
What
fix: version command
:tada: This PR is included in version 1.0.6 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2021-11-30T07:34:50 | 2025-04-01T06:38:18.503714 | {
"authors": [
"cujarrett"
],
"repo": "cujarrett/spellcheckme",
"url": "https://github.com/cujarrett/spellcheckme/pull/127",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
356762758 | Add default context to users context constructor
In addition to #4, it would be nice if our own context constructor was provided with the default context, so we can use the context in our injected methods.
import initServices from './services';

initialize({
  context: async ({ userId, db }) => ({
    services: initServices({ userId, db }),
  }),
});

// services
export default ({ userId, db }) => ({
  users: {
    find: ({ organisation }) => {
      const hasAccess = !!db.organisations.findOne({ _id: organisation, members: userId });
      if (!hasAccess) {
        return [];
      }
      return db.users.find({ organisation }).fetch();
    },
  },
});

// resolvers
{
  Query: {
    getUsers(_, { organisation }, { services }) {
      return services.users.find({ organisation });
    }
  }
}
Understood. Good idea.
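A minimal sketch of what the requested behaviour could look like from the package's side; the function and type names below are illustrative assumptions, not the actual API of this package. The user's context function receives the default context, and whatever it returns is merged on top of it:
// Sketch only -- names are illustrative, not the real API of this package.
type DefaultContext = { userId?: string; db: any };
type UserContextFn = (defaultContext: DefaultContext) => object | Promise<object>;

async function buildContext(defaultContext: DefaultContext, userContextFn?: UserContextFn) {
  // Hand the default context ({ userId, db, ... }) to the user's constructor...
  const userContext = userContextFn ? await userContextFn(defaultContext) : {};
  // ...and merge its result on top, so resolvers still see db/userId plus e.g. `services`.
  return { ...defaultContext, ...userContext };
}

// Usage mirroring the request above:
// buildContext({ userId, db }, async ({ userId, db }) => ({ services: initServices({ userId, db }) }));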
| gharchive/issue | 2018-09-04T11:13:37 | 2025-04-01T06:38:18.508823 | {
"authors": [
"smeijer",
"theodorDiaconu"
],
"repo": "cult-of-coders/apollo",
"url": "https://github.com/cult-of-coders/apollo/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1556357046 | 🛑 DDL Service is down
In cb1cbc2, DDL Service (https://dl.culturecloud.gq) was down:
HTTP code: 0
Response time: 0 ms
Resolved: DDL Service is back up in 76506c9.
| gharchive/issue | 2023-01-25T09:55:18 | 2025-04-01T06:38:18.512147 | {
"authors": [
"pseudokawaii"
],
"repo": "culturecloud/status",
"url": "https://github.com/culturecloud/status/issues/283",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1384275655 | Latest update
BRAT is telling me that the update failed as there is no main.js
The same here
Sorry, that was an unfinished version; it has now been released.
| gharchive/issue | 2022-09-23T20:36:47 | 2025-04-01T06:38:18.520061 | {
"authors": [
"Liong1976",
"cumany",
"menagerie198"
],
"repo": "cumany/obsidian-floating-toc-plugin",
"url": "https://github.com/cumany/obsidian-floating-toc-plugin/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
688848885 | [backport] Use _csr_row_index for CSR matrix major-axis slicing with step
Backport of #3852
[automatic post] Jenkins, test this please.
Jenkins CI test (for commit f40bae62ccb150f7c749146cd2533f4c68ef1c7f, target branch v8) failed with status FAILURE.
Jenkins, test this please.
Jenkins CI test (for commit f40bae62ccb150f7c749146cd2533f4c68ef1c7f, target branch v8) failed with status FAILURE.
Jenkins, test this please.
Jenkins CI test (for commit f40bae62ccb150f7c749146cd2533f4c68ef1c7f, target branch v8) failed with status FAILURE.
Jenkins, test this please.
Jenkins CI test (for commit f40bae62ccb150f7c749146cd2533f4c68ef1c7f, target branch v8) failed with status FAILURE.
Jenkins, test this please.
Jenkins CI test (for commit f40bae62ccb150f7c749146cd2533f4c68ef1c7f, target branch v8) succeeded!
LGTM.
| gharchive/pull-request | 2020-08-31T02:13:49 | 2025-04-01T06:38:18.540253 | {
"authors": [
"asi1024",
"chainer-ci",
"toslunar"
],
"repo": "cupy/cupy",
"url": "https://github.com/cupy/cupy/pull/3898",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
718540074 | [WIP] ROCm: Fix getDeviceProperties for HIP
Close #4107. Follow-up of #3858.
Output without any error:
$ python -c "import cupy; cupy.show_config()"
OS : Linux-5.4.0-42-generic-x86_64-with-debian-buster-sid
CuPy Version : 8.0.0rc1
NumPy Version : 1.19.1
SciPy Version : 1.5.2
CUDA Root : /usr
CUDA Build Version : 0
CUDA Driver Version : 313700
CUDA Runtime Version : 3137
cuBLAS Version : 22200
cuFFT Version : 10003839
cuRAND Version : 201001
cuSOLVER Version : (3, 5, 0)
cuSPARSE Version : 0
NVRTC Version : (9, 0)
Thrust Version : 100902
CUB Build Version : 201001
cuDNN Build Version : None
cuDNN Version : None
NCCL Build Version : None
NCCL Runtime Version : None
cuTENSOR Version : None
Device 0 Name : Vega 20
Device 0 Compute Capability : 96
Thanks!
Jenkins, test this please
Jenkins CI test (for commit 32e6cb89ec739c5e00b9aebf3ab320801fc4c0cc, target branch master) succeeded!
| gharchive/pull-request | 2020-10-10T05:47:46 | 2025-04-01T06:38:18.542417 | {
"authors": [
"chainer-ci",
"emcastillo",
"leofang"
],
"repo": "cupy/cupy",
"url": "https://github.com/cupy/cupy/pull/4108",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
826568858 | Fix cuFFT callback compilations - v2
Close #4746.
This is an alternative of #4743 (proposal v2 mentioned there). The solution here is to avoid cimporting any CuPy modules so that we don't expose Cython-level APIs. The incurred overhead is OK (which I will post later): about +/- 5 us (yes, it can be faster in some cases, which I didn't expect), see https://github.com/cupy/cupy/pull/4853#issuecomment-794903196.
TODO:
[x] Fix https://github.com/cupy/cupy/issues/4746#issuecomment-785543235
Looks like if I mark it slow (f3523ab), all tests in tests/cupy_tests/fft_tests/test_callback.py do not run. @emcastillo Is this OK? Should I undo it? (The entire file takes about 3 mins to run, so not marking it slow is perhaps fine?)
I reverted it.
Jenkins, test this please
Jenkins CI test (for commit 4ec88e8d5e70d69d6540e09ffde94f66cb87a660, target branch master) succeeded!
/test
Thanks @kmaehashi @emcastillo! Sorry for the all the troubles with the fft callbacks...Hope this time we fix it for good!
| gharchive/pull-request | 2021-03-09T20:13:00 | 2025-04-01T06:38:18.546679 | {
"authors": [
"chainer-ci",
"kmaehashi",
"leofang"
],
"repo": "cupy/cupy",
"url": "https://github.com/cupy/cupy/pull/4853",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1157167250 | Add title Icon
Hi there, this is Pratik Srivastava. I am a participant in GSSoC'22 and I would like to make a change by adding a title icon to your website, which will make the website more attractive.
go ahead @pratiksrivastava01
| gharchive/issue | 2022-03-02T13:02:46 | 2025-04-01T06:38:18.548094 | {
"authors": [
"geekymeeky",
"pratiksrivastava01"
],
"repo": "curiomind-e-learning/curiomind",
"url": "https://github.com/curiomind-e-learning/curiomind/issues/22",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
702307413 | [Accordion] Update Accordion.Container-based Styling
This PR sets out to make the following changes to the Accordion component:
Update the box-shadow property.
Add a default 4px border-radius.
Add the following border-radius behavior: only the top-most and bottom-most accordion elements should be rounded.
The first two updates are relatively trivial. The third update required expanding the styling functionality of Accordion.Container, which has additional CSS selectors that handle the border-radius & focus outline requirements.
border-radius is set via long-hand properties border-top-left-radius, border-top-right-radius, border-bottom-left-radius, and border-bottom-right-radius.
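As a rough illustration of that third update (a sketch only; the data attribute, selectors, and styled API below are assumptions for illustration, not the actual radiance-ui source), the container can reset every child accordion's rounding and then round only the first and last child:
// Sketch of the Accordion.Container idea: only the top-most and bottom-most
// accordions get rounded corners. Names here are assumptions, not radiance-ui's code.
import styled from '@emotion/styled';

const AccordionContainer = styled.div`
  /* reset rounding on every accordion inside a container... */
  & > [data-accordion] {
    border-top-left-radius: 0;
    border-top-right-radius: 0;
    border-bottom-left-radius: 0;
    border-bottom-right-radius: 0;
  }

  /* ...then round only the top-most and bottom-most children */
  & > [data-accordion]:first-of-type {
    border-top-left-radius: 4px;
    border-top-right-radius: 4px;
  }

  & > [data-accordion]:last-of-type {
    border-bottom-left-radius: 4px;
    border-bottom-right-radius: 4px;
  }
`;

export default AccordionContainer;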
This PR also updates all documentation instances of <Accordion> to be wrapped with <Accordion.Container>.
You can play with the Review App here: https://curology-radiance-pr-371.herokuapp.com/
Before:
After:
Need to refactor the styling since the box-shadows are not applying accurately anymore in TitleWrapper.
| gharchive/pull-request | 2020-09-15T22:12:20 | 2025-04-01T06:38:18.588784 | {
"authors": [
"michaeljaltamirano"
],
"repo": "curology/radiance-ui",
"url": "https://github.com/curology/radiance-ui/pull/371",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
40181823 | Add tests with timed mouseMove events
In Karma tests we're unable (or I don't know about it) to trigger mouseMove events with given timing information -- that is essential in our case. :)
Currently, we have tests to cover events with synthetic timestamps (test/trap.time-property.test.js), and tests to cover various timeouts, including buffer timeouts and idle timeouts (test/trap.{buffer,idle}-timeout.test.js).
| gharchive/issue | 2014-08-13T18:04:47 | 2025-04-01T06:38:18.592433 | {
"authors": [
"gbence"
],
"repo": "cursorinsight/ci-trap-web",
"url": "https://github.com/cursorinsight/ci-trap-web/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
933711331 | Support "every <scope>" / "<ordinal> <scope>"
The goal
For example:
"every line"
"every funk"
"every line in class"
"first funk"
"last line in funk"
See https://github.com/pokey/cursorless-vscode/wiki/Target-overhaul for many more examples
[x] Also support "past last", so eg "past last item air", "past last funk air". This would target from the scope containing the mark through the last instance of the scope in its iteration scope
[ ] Add expansion tests (see also #883)
Implementation
This implementation will rely on #210 and #69.
See https://github.com/cursorless-dev/cursorless/issues/797
Notes
This functionality subsumes today's every funk and first char / last word etc
Questions
How do we handle ranges such as today's first past third word or the future first past third funk?
This one will be great once we have full compositionality
Is there more work to be done on this?
Looks done to me
| gharchive/issue | 2021-06-30T13:29:41 | 2025-04-01T06:38:18.598164 | {
"authors": [
"AndreasArvidsson",
"pokey"
],
"repo": "cursorless-dev/cursorless",
"url": "https://github.com/cursorless-dev/cursorless/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2372939978 | Unknown action ':protect_from_spam' error on default Ruby on Rails 7.1 projects
Describe the bug
The gem causes Ruby on Rails projects with version 7.1.0 or higher to break for all routes if the setting
config.action_controller.raise_on_missing_callback_actions = true is present in an environment (default for development and test).
To Reproduce
Steps to reproduce the behavior:
run rails _7.1.3.4_ new example
cd example
add gem 'honeypot-captcha' to Gemfile
run bundle install
run bin/rails s
go to localhost:3000
see error below
Unknown action
The create action could not be found for the :protect_from_spam
callback on Rails::WelcomeController, but it is listed in the controller's
:only option.
Raising for missing callback actions is a new default in Rails 7.1, if you'd
like to turn this off you can delete the option from the environment configurations
or set `config.action_controller.raise_on_missing_callback_actions` to `false`.
Expected behavior
The gem should work out of the box with a RoR application. I didn't find a configuration option or documentation to avoid this error except disabling the configuration in Rails.
Screenshots
Screenshot with the error
Desktop:
OS: Ubuntu 22.04
Browser Firefox
Version 127.0.1
Smartphone:
Not applicable
Additional context
The setting can be found in the Rails project in config/environments/development.rb
+1 experiencing this. Can turn off with the development.rb and test.rb switches for missing callback actions.
Same here... @curtis did you find something to fix this?
| gharchive/issue | 2024-06-25T15:06:46 | 2025-04-01T06:38:18.605264 | {
"authors": [
"1klap",
"jathayde",
"t3k4y"
],
"repo": "curtis/honeypot-captcha",
"url": "https://github.com/curtis/honeypot-captcha/issues/99",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
797815228 | not validate the integration
Describe the bug
Is there a problem with the integration? Today it stopped working and no longer validates the connection from my cell phone on Amazon.
Screenshots
System details
Home-assistant (2021.1.5):
Hassio (2021.01.7):
Logs
Logger: homeassistant
Source: runner.py:99
First occurred: 22:21:18 (2 occurrences)
Last logged: 22:21:18
Error doing job: Unclosed client session
Error doing job: Unclosed connector
Enable 2FA.
| gharchive/issue | 2021-01-31T21:21:52 | 2025-04-01T06:38:18.608935 | {
"authors": [
"alandtse",
"sleon76"
],
"repo": "custom-components/alexa_media_player",
"url": "https://github.com/custom-components/alexa_media_player/issues/1157",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
691663710 | Unable to control echo dot
Describe the bug
Changes made on the Echo Dot are visible in Home Assistant, but commands given by HA are not processed.
Screenshots
System details
Home-assistant (version): 0.114.2
Hassio (Yes/No): (Please note you may have to restart hassio 2-3 times to load the latest version of alexapy after an update. This looks like a HA bug).
alexa_media (version from const.py or HA startup): v2.10.6
alexapy (version from pip show alexapy or HA startup):
Logs
2020-09-03 11:14:14 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for hacs which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2020-09-03 11:14:14 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for alexa_media which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2020-09-03 11:14:33 DEBUG (MainThread) [custom_components.alexa_media] Nothing to import from configuration.yaml, loading from Integrations
2020-09-03 11:14:34 INFO (MainThread) [custom_components.alexa_media]
alexa_media
Version: 2.10.6
This is a custom component
If you have any issues with this you need to open an issue here:
https://github.com/custom-components/alexa_media_player/issues
2020-09-03 11:14:34 INFO (MainThread) [custom_components.alexa_media] Loaded alexapy==1.13.1
2020-09-03 11:14:34 DEBUG (MainThread) [alexapy.alexalogin] Trying to load pickled cookie from file /config/.storage/alexa_media.@gmail.com.pickle
2020-09-03 11:14:35 DEBUG (MainThread) [alexapy.alexalogin] Trying to load aiohttpCookieJar to session
2020-09-03 11:14:35 DEBUG (MainThread) [alexapy.alexalogin] Loaded 8 cookies
2020-09-03 11:14:35 DEBUG (MainThread) [alexapy.alexalogin] Using cookies to log in
2020-09-03 11:14:38 DEBUG (MainThread) [alexapy.alexalogin] GET:
2020-09-03 11:14:38 DEBUG (MainThread) [alexapy.alexalogin] Logged in as @gmail.com with id: ********
2020-09-03 11:14:38 DEBUG (MainThread) [alexapy.alexalogin] Log in successful with cookies
2020-09-03 11:14:38 DEBUG (MainThread) [custom_components.alexa_media] Testing login status: {'login_successful': True}
2020-09-03 11:14:38 DEBUG (MainThread) [custom_components.alexa_media] Setting up Alexa devices for r1@gm
2020-09-03 11:14:38 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Websocket created: <alexapy.alexawebsocket.WebsocketEchoClient object at 0x6d65ecb8>
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Initating Async Handshake.
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Starting message parsing loop.
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Received WebSocket: 0x37a3b607 0x0000009c {"protocolName":"A:H","parameters":{"AlphaProtocolHandler.maxFragmentSize":"16000","AlphaProtocolHandler.receiveWindowSize":"16"}}TUNE
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Encoding WebSocket Handshake MSG.
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Encoding Gateway Handshake MSG.
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Encoding Gateway Register MSG.
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Encoding PING.
2020-09-03 11:14:47 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Websocket succesfully connected
2020-09-03 11:14:47 DEBUG (MainThread) [custom_components.alexa_media] Creating coordinator
2020-09-03 11:14:47 DEBUG (MainThread) [custom_components.alexa_media] Refreshing coordinator
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/devices-v2/device returned 200:OK:application/json
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/dnd/device-status-list returned 200:OK:application/json
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/bluetooth?cached=false returned 200:OK:application/json
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexawebsocket] Received WebSocket: MSG 0x0000036...............END FABE
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexawebsocket] Received ACK MSG for Registration.
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/bootstrap returned 200:OK:application/json
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/device-preferences returned 200:OK:application/json
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/notifications returned 200:OK:application/json
2020-09-03 11:14:48 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Found 3 devices, 3 bluetooth
2020-09-03 11:14:49 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/notifications returned 200:OK:application/json
2020-09-03 11:14:49 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Updated 0 notifications for 1 devices at 2020-09-03 11:14:49.460967+05:30
2020-09-03 11:14:50 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/activities?startTime=&size=10&offset=1 returned 200:OK:application/json
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Updated last_called: {'serialNumber': '0e33', 'timestamp': 1587348562948}
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: last_called changed: to {'serialNumber': '0e33', 'timestamp': 1587348562948}
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] raj's Echo Dot: Locale en-in timezone Asia/Kolkata
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] raj's Echo Dot: DND False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] This Device: Locale en-us timezone None
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] This Device: DND False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] raj's Alexa Apps: Locale en-us timezone None
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] raj's Alexa Apps: DND False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Existing: [] New: ["raj's Echo Dot", 'This Device', "raj's Alexa Apps"]; Filtered out by not being in include: [] or in exclude: []
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] Loading media_player
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] Finished fetching alexa_media data in 2.791 seconds
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.helpers] alexa_media.notify.async_get_service: Trying with limit 5 delay 2 catch_exceptions True
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.notify] r1@gm: Media player G09F not loaded yet; delaying load
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.helpers] alexa_media.notify.async_get_service: Try: 1/5 after waiting 0 seconds result: False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.media_player] r1@gm: Refreshing This Device
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.media_player] This Device: Last_called check: self: 2712 reported:
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.helpers] r1@gm: Adding [<Entity raj's Echo Dot: unavailable>, , <Entity raj's Alexa Apps: unavailable>]
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Loading switches
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Found G09F dnd switch with status: False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Found G09F shuffle switch with status: False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Found G09F repeat switch with status: False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Found 2712 dnd switch with status: False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Skipping shuffle for 2712
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Skipping repeat for 2712
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Found 0e33 dnd switch with status: False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Skipping shuffle for 0e33
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Skipping repeat for 0e33
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.helpers] r1@gm: Adding [<Entity raj's Echo Dot do not disturb switch: off>, <Entity raj's Echo Dot shuffle switch: off>, <Entity raj's Echo Dot repeat switch: off>, , <Entity raj's Alexa Apps do not disturb switch: off>]
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.sensor] r1@gm: Loading sensors
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.sensor] r1@gm: Found G09F Alarm sensor (0) with next: unavailable
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.sensor] r1@gm: Found G09F Timer sensor (0) with next: unavailable
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.sensor] r1@gm: Found G09F Reminder sensor (0) with next: unavailable
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.helpers] r1@gm: Adding [<Entity raj's Echo Dot next Alarm: unavailable>, <Entity raj's Echo Dot next Timer: unavailable>, <Entity raj's Echo Dot next Reminder: unavailable>]
2020-09-03 11:14:52 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/phoenix returned 200:OK:application/json
2020-09-03 11:14:52 DEBUG (MainThread) [custom_components.alexa_media.alarm_control_panel] r1@gm: No Alexa Guard entity found
2020-09-03 11:14:52 DEBUG (MainThread) [custom_components.alexa_media.alarm_control_panel] r1@gm: Skipping creation of uninitialized device:
2020-09-03 11:14:54 DEBUG (MainThread) [custom_components.alexa_media.helpers] alexa_media.notify.async_get_service: Try: 2/5 after waiting 4 seconds result: <custom_components.alexa_media.notify.AlexaNotificationService object at 0x69c29640>
2020-09-03 11:14:54 DEBUG (MainThread) [custom_components.alexa_media.helpers] alexa_media.notify.async_get_service: Trying with limit 5 delay 2 catch_exceptions True
2020-09-03 11:14:54 DEBUG (MainThread) [custom_components.alexa_media.helpers] alexa_media.notify.async_get_service: Try: 1/5 after waiting 0 seconds result: <custom_components.alexa_media.notify.AlexaNotificationService object at 0x6fe31b08>
2020-09-03 11:15:00 DEBUG (MainThread) [custom_components.alexa_media.media_player] Disabling polling for raj's Echo Dot
2020-09-03 11:15:00 DEBUG (MainThread) [custom_components.alexa_media.media_player] r1@gm: Refreshing This Device
2020-09-03 11:15:00 DEBUG (MainThread) [custom_components.alexa_media.media_player] This Device: Last_called check: self: 2712 reported: 0e33
2020-09-03 11:15:00 DEBUG (MainThread) [custom_components.alexa_media.media_player] Disabling polling for This Device
2020-09-03 11:15:00 DEBUG (MainThread) [custom_components.alexa_media.media_player] Disabling polling for raj's Alexa Apps
2020-09-03 11:24:51 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/dnd/device-status-list returned 200:OK:application/json
2020-09-03 11:24:51 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/notifications returned 200:OK:application/json
2020-09-03 11:24:51 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/bluetooth?cached=false returned 200:OK:application/json
2020-09-03 11:24:51 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/device-preferences returned 200:OK:application/json
2020-09-03 11:24:51 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/devices-v2/device returned 200:OK:application/json
2020-09-03 11:24:51 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Found 3 devices, 3 bluetooth
2020-09-03 11:24:51 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/notifications returned 200:OK:application/json
2020-09-03 11:24:51 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Updated 0 notifications for 1 devices at 2020-09-03 11:24:51.694741+05:30
2020-09-03 11:24:52 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/activities?startTime=&size=10&offset=1 returned 200:OK:application/json
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Updated last_called: {'serialNumber': '0e33', 'timestamp': 1587348562948}
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] raj's Echo Dot: Locale en-in timezone Asia/Kolkata
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] raj's Echo Dot: DND False
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] This Device: Locale en-us timezone None
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] This Device: DND False
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] raj's Alexa Apps: Locale en-us timezone None
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] raj's Alexa Apps: DND False
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Existing: [<Entity raj's Echo Dot: unavailable>, , <Entity raj's Alexa Apps: unavailable>] New: []; Filtered out by not being in include: [] or in exclude: []
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] Finished fetching alexa_media data in 2.809 seconds
2020-09-03 11:31:02 DEBUG (MainThread) [alexapy.alexawebsocket] Received WebSocket: MSG
2020-09-03 11:31:02 DEBUG (MainThread) [alexapy.alexawebsocket] Received Standard MSG.
2020-09-03 11:31:02 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Received websocket command: PUSH_EQUALIZER_STATE_CHANGE : {'destinationUserId': 'AS', 'dopplerId': {'deviceType': 'C', 'deviceSerialNumber': 'G***F'}, 'bass': 0, 'midrange': 0, 'treble': 0}
Additional context
Add any other context about the problem here.
Please confirm your region is amazon.com.
| gharchive/issue | 2020-09-03T05:52:40 | 2025-04-01T06:38:18.674480 | {
"authors": [
"alandtse",
"patraRajesh"
],
"repo": "custom-components/alexa_media_player",
"url": "https://github.com/custom-components/alexa_media_player/issues/902",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
467321944 | Intergration not found hacs
Hello,
I am currently on a fresh installation of HASS.IO on docker. the path in docker for hassio is /user/share/hassio/homeassistant/
I created the folder custom_components and downloaded the git files / hacs files.
hacs is placed into /custom_components.
In the configration.yaml i placed the this:
hacs:
token: 4878f7a6265xxxxxxxxxxxxxxxxxxxxxxxxxf9a4
However, HA will not restart due this error:
Integration not found: hacs
What am I doing wrong?
Restart HA one time before adding it to config.
I had a fully working install of HACS until about a week ago. I had to restart my machine, and when it came back up I no longer had the ingress menu and I receive the "Invalid config" message in HA. I've reinstalled HACS from fresh and also had 2 HA upgrades this week, and still no luck.
Also a few of the guys in our group have had similar issues, but the error shows as below, so this could be a further issue and not just a single install error:
Fri Jul 26 2019 11:06:37 GMT+0100 (British Summer Time)
Error during setup of component hacs
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/setup.py", line 153, in _async_setup_component
hass, processed_config)
File "/config/custom_components/hacs/init.py", line 69, in async_setup
await configure_hacs(hass, config[DOMAIN], config_dir)
File "/config/custom_components/hacs/init.py", line 171, in configure_hacs
hacs.store.restore_values()
File "/config/custom_components/hacs/hacsbase/data.py", line 102, in restore_values
store = self.read()
File "/config/custom_components/hacs/hacsbase/data.py", line 41, in read
content = json.loads(content)
File "/usr/local/lib/python3.7/json/init.py", line 348, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.7/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 3426 column 13 (char 122870)
Hi,
I had the same problem.
I removed hacs:
token: 4878f7a6265xxxxxxxxxxxxxxxxxxxxxxxxxf9a4
and rebooted and found hacs in integrations section.
| gharchive/issue | 2019-07-12T09:44:43 | 2025-04-01T06:38:18.683435 | {
"authors": [
"Chimestrike",
"eterpstra",
"gmkfak",
"ludeeus"
],
"repo": "custom-components/hacs",
"url": "https://github.com/custom-components/hacs/issues/262",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1854772277 | Issue #513: Implemented fallback value for color prop of BaseSnackbar…
… and removed the color prop from its consumers.
Issue number
Relevant issue number
Resolves #513
Please check the following
[x] Do the tests still pass? (see Run the Tests)
[x] Is the code formatted properly? (see Linting (Formatting))
For New Features:
[ ] Have tests been added to cover any new features or fixes?
[ ] Has the documentation been updated accordingly?
Please describe additional details for testing this change
Thanks for your feedback! Just pushed another commit.
Done.
| gharchive/pull-request | 2023-08-17T11:00:49 | 2025-04-01T06:38:18.695099 | {
"authors": [
"swebe3qn"
],
"repo": "cuttle-cards/cuttle",
"url": "https://github.com/cuttle-cards/cuttle/pull/515",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1834973492 | about the line_match module
Hello, I want to change the match module to something like the SuperGlue method. Is it possible to get a better match?
Hi, I am sorry, I can't understand your question.
Are you asking whether one could obtain better line matches by using techniques as in SuperGlue? If so, the answer is yes, and we already published a work on that: https://github.com/cvg/GlueStick.
Yes, I have the same idea, thank you very much.
| gharchive/issue | 2023-08-03T12:32:39 | 2025-04-01T06:38:18.700241 | {
"authors": [
"atomishcv",
"rpautrat"
],
"repo": "cvg/SOLD2",
"url": "https://github.com/cvg/SOLD2/issues/87",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
145023106 | Slug in URL issue
I want to display some details from a table (no foreign key) using a slug URL, but I am having an issue with it. Below is my code.
public function getView($slug) {
    $cstool = CsTool::where('slug', $slug)->first();
    if ($cstool) {
        return View::make('childict.softview')
            ->with('cstool', $cstool);
    }
}
What's the issue? That code looks fine to me (although you don't have any code to handle the case where there is no object with the given slug). Maybe ->firstOrFail() would be better?
| gharchive/issue | 2016-03-31T21:27:17 | 2025-04-01T06:38:18.702133 | {
"authors": [
"Unstinted",
"cviebrock"
],
"repo": "cviebrock/eloquent-sluggable",
"url": "https://github.com/cviebrock/eloquent-sluggable/issues/239",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1690354982 | 🛑 Paid - Node SG 2 is down
In 46fbf17, Paid - Node SG 2 (cxenode-2.hexagonn.my.id) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Paid - Node SG 2 is back up in 8802761.
| gharchive/issue | 2023-05-01T04:25:54 | 2025-04-01T06:38:18.720041 | {
"authors": [
"Bluezzzzz"
],
"repo": "cxe-miex-dev/uptimes",
"url": "https://github.com/cxe-miex-dev/uptimes/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
783516522 | Bump version to 2.1.6
What does this PR do?
Bump version to 2.1.6 in preparation for release
What ticket does this PR close?
Resolves #73
Checklists
Change log
[X] The CHANGELOG has been updated, or
[ ] This PR does not include user-facing changes and doesn't require a CHANGELOG update
Test coverage
[ ] This PR includes new unit and integration tests to go with the code changes, or
[X] The changes in this PR do not require tests
Documentation
[ ] Docs (e.g. READMEs) were updated in this PR, and/or there is a follow-on issue to update docs, or
[X] This PR does not require updating any documentation
@izgeri Yes, the notices were updated as part of this commit:
https://github.com/cyberark/cloudfoundry-conjur-buildpack/commit/5ab18790822dd996c9ea8bdc975280c88c12436c
| gharchive/pull-request | 2021-01-11T16:27:33 | 2025-04-01T06:38:18.759026 | {
"authors": [
"BradleyBoutcher"
],
"repo": "cyberark/cloudfoundry-conjur-buildpack",
"url": "https://github.com/cyberark/cloudfoundry-conjur-buildpack/pull/107",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1559973972 | Run go mod tidy
Desired Outcome
Prepare for 1.7.16 release.
Implemented Changes
Update golang.org/x/crypto to v0.5.0
Update CyberArk packages to latest versions
Update NOTICES.txt
Connected Issue/Story
N/A
Definition of Done
At least 1 todo must be completed in the sections below for the PR to be
merged.
Changelog
[ ] The CHANGELOG has been updated, or
[x] This PR does not include user-facing changes and doesn't require a
CHANGELOG update
Test coverage
[ ] This PR includes new unit and integration tests to go with the code
changes, or
[x] The changes in this PR do not require tests
Documentation
[ ] Docs (e.g. READMEs) were updated in this PR
[ ] A follow-up issue to update official docs has been filed here: [insert issue ID]
[x] This PR does not require updating any documentation
Behavior
[ ] This PR changes product behavior and has been reviewed by a PO, or
[ ] These changes are part of a larger initiative that will be reviewed later, or
[x] No behavior was changed with this PR
Security
[ ] Security architect has reviewed the changes in this PR,
[ ] These changes are part of a larger initiative with a separate security review, or
[ ] There are no security aspects to these changes
Sorry, @szh - I pushed a commit fixing Changelog links right as you approved. Quick re-review?
| gharchive/pull-request | 2023-01-27T15:41:04 | 2025-04-01T06:38:18.765483 | {
"authors": [
"john-odonnell"
],
"repo": "cyberark/secretless-broker",
"url": "https://github.com/cyberark/secretless-broker/pull/1484",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1306099203 | Backup to Google Cloud Storage
Hello, I am trying and failing to store backups to Google Cloud Storage.
I have set up Workload Identity to give the k8s service account the permissions to access the bucket.
I have also tried defining the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables.
Still I get
Error: failed to take a full dump: failed to put dump.tar: operation error S3: PutObject, https response error StatusCode: 403, RequestID: , HostID: , api error SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.
Has anybody gotten backups to GCS working?
Thank you!
Backup to GCS is not supported now.
It can be added by implementing the Bucket interface for GCS in this package.
https://github.com/cybozu-go/moco/tree/main/pkg/bucket
We welcome a pull request adding GCS support.
@filiprafaj
GCS supports the S3 compatibility API. (Sorry, I have not verified this.)
Could you please refer to the documentation and try again?
refs:
https://cloud.google.com/storage/docs/interoperability#xml_api
https://vamsiramakrishnan.medium.com/a-study-on-using-google-cloud-storage-with-the-s3-compatibility-api-324d31b8dfeb
If it still doesn't work, it would be helpful if you could report it again, including the definition of BackupPolicy 🙏
https://cybozu-go.github.io/moco/usage.html#backuppolicy
Hi @d-kuro , I have tried now with interoperability credentials and I am getting:
Error: failed to take a full dump: failed to put dump.tar: operation error S3: PutObject, https response error StatusCode: 403, RequestID: , HostID: , api error SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method. failed to take a full dump: failed to put dump.tar: operation error S3: PutObject, https response error StatusCode: 403, RequestID: , HostID: , api error SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.
the BackupPolicy file looks like this:
apiVersion: moco.cybozu.com/v1beta2
kind: BackupPolicy
metadata:
namespace: default
name: daily
spec:
schedule: "@daily"
jobConfig:
serviceAccountName: moco-test-mysqlcluster
env:
- name: AWS_ACCESS_KEY_ID
value: ***
- name: AWS_SECRET_ACCESS_KEY
value: ***
bucketConfig:
bucketName: ***
endpointURL: https://storage.googleapis.com
workVolume:
emptyDir: {}
memory: 1Gi
maxMemory: 1Gi
threads: 1
Hi @filiprafaj ,
This issue seems like a good starting point for my journey toward contributing to FOSS projects.
Can you please assign this to me?
Hi @filiprafaj, I want to contribute to this issue.
@Prakharkarsh1
Hi,
Thank you for your intention to contribute to this project.
We will review your pull request when it's ready.
@Prakharkarsh1
MOCO uses aws-sdk-go-v2 to connect to S3-compatible storage.
However, aws-sdk-go-v2 is not compatible with third-party platforms and therefore cannot connect to GCS.
https://github.com/aws/aws-sdk-go-v2/issues/1816
So it would be better to implement a GCS bucket in moco/pkg/bucket/gcs.
We use MinIO to test the S3 bucket implementation.
Likewise, we could use these tools to test a GCS bucket implementation.
https://github.com/oittaa/gcp-storage-emulator
https://github.com/fsouza/fake-gcs-server
@Prakharkarsh1
Hello,
Do you still want to contribute to this feature?
Released https://github.com/cybozu-go/moco/releases/tag/v0.16.1
| gharchive/issue | 2022-07-15T14:22:39 | 2025-04-01T06:38:18.800450 | {
"authors": [
"Prakharkarsh1",
"d-kuro",
"filiprafaj",
"masa213f",
"sachinsejwal",
"yamatcha",
"ymmt2005"
],
"repo": "cybozu-go/moco",
"url": "https://github.com/cybozu-go/moco/issues/427",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2107451935 | question: TLS possible
I now use PubSubClient v2.8 for an ESP32-S3 Arduino IDE project
and want to use TLS with a HiveMQ free account.
Is that possible?
Hi, you can refer to the functions here
| gharchive/issue | 2024-01-30T10:13:32 | 2025-04-01T06:38:18.815483 | {
"authors": [
"MyRaspberry",
"cyijun"
],
"repo": "cyijun/ESP32MQTTClient",
"url": "https://github.com/cyijun/ESP32MQTTClient/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
641655527 | Add cpplint to style check cpp
https://github.com/cpplint/cpplint
This has been added.
| gharchive/issue | 2020-06-19T02:16:58 | 2025-04-01T06:38:18.858297 | {
"authors": [
"supunkamburugamuve"
],
"repo": "cylondata/cylon",
"url": "https://github.com/cylondata/cylon/issues/93",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1102968493 | 🛑 cypggs is down
In 4a9c49e, cypggs (https://www.cypggs.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: cypggs is back up in bf8c025.
| gharchive/issue | 2022-01-14T04:57:04 | 2025-04-01T06:38:18.873763 | {
"authors": [
"cypggs"
],
"repo": "cypggs/uptime",
"url": "https://github.com/cypggs/uptime/issues/157",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
473124388 | Advancing iterator in range loop invalidates previously dereferenced values for input iterators in C++
I have a user-defined class implementing the C++ input iterator requirements. It skips an element at the beginning and prints the last element twice when I iterate over it with Cython's range-based loop. Looking into the produced code, I realized the iterator is incremented before the loop body gets executed. The increment is done right after the dereferenced iterator value is copied into a temporary. However, incrementing the iterator invalidates previously dereferenced values for input iterators. In my case, it was trying to parse the next node in a stream.
Here is a toy example for demonstration:
# distutils: language = c++
from cython.operator cimport dereference as deref, preincrement as preinc
cdef extern from *:
"""
//#define PRINT() std::cout << __PRETTY_FUNCTION__ << std::endl
#define PRINT()
#include<iostream>
struct CountDown {
struct Iterator {
CountDown* ptr;
Iterator() = default;
Iterator(CountDown* ptr) : ptr(ptr) {}
Iterator& operator++() { PRINT(); ptr->count--; return *this; }
Iterator& operator++(int) { PRINT(); ptr->count--; return *this; }
const int* operator*() { return &ptr->count; }
bool operator!=(const Iterator&) { PRINT(); return ptr->count > 0; }
};
int count;
CountDown() = default;
CountDown(int count) : count(count) {}
Iterator begin() { PRINT(); return Iterator(this); }
Iterator end() { PRINT(); return Iterator(); }
};
"""
cdef cppclass CountDown:
cppclass Iterator:
Iterator()
Iterator operator++()
Iterator operator++(int)
const int* operator*()
bint operator!=(Iterator)
CountDown()
CountDown(int count)
Iterator begin()
Iterator end()
cdef countdown_range():
cdef CountDown cd = CountDown(5)
cdef const int* num
for num in cd:
print(deref(num))
cdef countdown_expected():
cdef CountDown cd = CountDown(5)
cdef CountDown.Iterator it = cd.begin()
while it != cd.end():
print(deref(deref(it)))
it = preinc(it)
print("Actual output:")
countdown_range()
print("Expected output:")
countdown_expected()
Output:
~/tmp/cyissue python3 -c "import example"
Actual output:
4
3
2
1
0
Expected output:
5
4
3
2
1
Here is the related part in the produced code:
/* "example.pyx":42
* cdef CountDown cd = CountDown(5)
* cdef const int* num
* for num in cd: # <<<<<<<<<<<<<<
* print(deref(num))
*
*/
__pyx_t_1 = __pyx_v_cd.begin();
for (;;) {
if (!(__pyx_t_1 != __pyx_v_cd.end())) break;
__pyx_t_2 = *__pyx_t_1;
++__pyx_t_1;
__pyx_v_num = __pyx_t_2;
This isn't a crucial feature since it can be implemented without the range-based loop, as in the above example, yet it was quite surprising to me and took some time to figure out, so I decided to open this issue.
The part below is where this loop translation happens, AFAICS:
https://github.com/cython/cython/blob/ac1c9fe47491d01fb80cdde3ccd3e61152a973c7/Cython/Compiler/Nodes.py#L6820-L6832
This translates the for s in seq: loop_body expression to something like:
it = iter(seq)
while True:
s = next(it) or break loop
loop_body
whereas the correct interpretation for C++ should have been something like:
it = seq.begin()
while True:
if it is seq.end() break loop
s = *it
loop_body
++it
So... it looks like this issue is caused by a subtle difference between the semantics of Python's next and C++'s iterators. In Python, next does what operator++ and operator* together do in C++. I'm not sure how this could be fixed (splitting NextNode into two pieces for C++? Perhaps introducing a ForNode to split the C++ implementation altogether?). Given that the current implementation works fine for the vast majority of cases, it may not be worth the effort, but it may be useful to note this quirk somewhere in the documentation at least.
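A tiny plain-Python illustration of that difference (nothing Cython-specific here, just the iterator protocol):
it = iter([5, 4, 3, 2, 1])
value = next(it)  # next() advances the iterator AND returns the element in one step,
                  # which is what the generated code above maps onto "*it; ++it" for C++
print(value)  # 5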
It turns out I misunderstood the requirements of an input iterator. Advancing it invalidates any copies of the iterator, not copies of the values it points to. value_type v = *it; it++; do_something_with(v); is perfectly valid. Hence, there is nothing wrong with Cython's behavior. Closing this.
| gharchive/issue | 2019-07-26T00:30:40 | 2025-04-01T06:38:18.953890 | {
"authors": [
"ozars"
],
"repo": "cython/cython",
"url": "https://github.com/cython/cython/issues/3055",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
689152965 | Add glossary to the documentation
Something that could help the documentation would be a glossary that certain terms could link to. A pointer would be a good candidate, or C data structures (struct/union) in general. Also extension type, terms like extern or inline, etc. It wouldn't have to replicate a complete specification; it could just briefly explain what is meant and link to a good place for further reading, be it an internal page or some external resource like the CPython C-API docs or a C/C++ reference.
Hello! I'd like to give this doc issue a shot.
Does this project use an SSG or would establishing a glossary be as simple as editing directly on Github?
Hi @tashachin, with "SSG", do you mean some kind of style guide? We don't have one, definitely not for the docs.
You can just edit the Sphinx .rst files and conf.py to add a glossary.
Apologies for not clarifying, @scoder ! SSG as in static-site generator (like Sphinx).
Where would you want the glossary to live in the docs? I could add it to the main index.rst (landing page) but that seems like it'd become unwieldy very quickly.
My solution was to have a link to the glossary be at the same level as Getting Started and Tutorials (and be between them), which then links to a separate page where all the terms can be read.
What are your thoughts on that structure?
There is an "indices and tables" section in the user guide. Just add a new page there for the glossary.
Closing this ticket since the glossary is there now. We can keep adding to it without the need for a ticket.
| gharchive/issue | 2020-08-31T12:02:07 | 2025-04-01T06:38:18.958295 | {
"authors": [
"scoder",
"tashachin"
],
"repo": "cython/cython",
"url": "https://github.com/cython/cython/issues/3802",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
219232893 | Allow "cdef inline" with default values in .pxd
A cdef inline implemented in a .pxd with default values does not work as expected:
cdef inline int my_add(int a, int b=1, int c=0):
return a + b + c
gives the error "default values cannot be specified in pxd files, use ? or *".
This pull request fixes that.
Looks good, thanks.
| gharchive/pull-request | 2017-04-04T12:24:23 | 2025-04-01T06:38:18.959985 | {
"authors": [
"jdemeyer",
"robertwb"
],
"repo": "cython/cython",
"url": "https://github.com/cython/cython/pull/1659",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
396131773 | Fix inconsistency between trace files and report files
When solving #2776, which reported Plugin 'Cython.Coverage.Plugin' did not provide a file reporter for '/Users/wenjun.swj/miniconda3/lib/python3.7/site-packages/gevent/_hub_local.py' when executing coverage report, I found that the cause is that some tracers built in Cython.Coverage do not have corresponding reporters.
In Cython.Coverage, when Plugin._parse_lines(c_file, filename) returns (None, None), Plugin.file_tracer(filename) returns a tracer, while Plugin.file_reporter(filename) returns None, and then coverage.py reports the error. This happens when packages have both *.py and *.c files sharing the same base name. For instance, the gevent wheel package contains both _hub_local.c and _hub_local.py, which misleads Cython.Coverage into producing a tracer, as it does not ignore shared libraries.
The simple solution is to ignore shared libraries in file_tracer as file_reporter already does; coverage report then no longer raises errors, which fixes #2776 .
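For illustration only, the shape of that guard in a coverage.py plugin might look roughly like this; the class name and suffix tuple below are made up for the example and are not the actual Cython.Coverage code:
from coverage import CoveragePlugin

# Assumed suffix list; the real plugin derives the platform-specific extensions itself.
SHARED_LIB_SUFFIXES = (".so", ".pyd", ".dylib", ".dll")

class SketchPlugin(CoveragePlugin):
    def file_tracer(self, filename):
        # Bail out on shared libraries, mirroring file_reporter(): if no tracer is
        # returned here, coverage.py never asks for a reporter for this file, so the
        # "did not provide a file reporter" error cannot occur.
        if filename.endswith(SHARED_LIB_SUFFIXES):
            return None
        # A real plugin would return a FileTracer for traceable Cython sources here.
        return None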
Thanks. Do you think you could come up with a test case for this? This seems like the kind of setup that will get lost and break right the next time we change the code.
You can look at the existing coverage .srctree tests in tests/run/, they are basically multiple files stuffed into one text archive, with the test commands at the top.
| gharchive/pull-request | 2019-01-05T04:15:47 | 2025-04-01T06:38:18.963865 | {
"authors": [
"scoder",
"wjsi"
],
"repo": "cython/cython",
"url": "https://github.com/cython/cython/pull/2784",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1999799173 | Remove patch utility code in Coroutine.c
An attempt to remove the patch utility code, which no longer seems to be needed.
~This is an experimental PR just to see the CI results.~
There are several other functions marked to be removed in Cython/Utility/Coroutine.c but I am not sure whether they are needed or not.
CI seems to be turning green, so I am marking this PR as ready for review. There is still the question of whether the code at the end of Coroutine.c should be removed or not...
Looks good to me I think - it looks like all the abc classes are tested and continue to work without this code
Let's merge it since other PRs are waiting for it. Thanks @da-woods for review.
| gharchive/pull-request | 2023-11-17T19:41:29 | 2025-04-01T06:38:18.966305 | {
"authors": [
"da-woods",
"matusvalo"
],
"repo": "cython/cython",
"url": "https://github.com/cython/cython/pull/5835",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2273801566 | Increase sorting scalability via CytoTable metadata columns
Description
This PR seeks to refine #175 by increasing the performance through generated CytoTable metadata columns which are primarily beneficial during large join operations. Anecdotally, I noticed that ORDER BY ALL memory consumption for joined tables becomes very high when working with a larger dataset. Before this change, large join operations attempt to sort by all columns included in the join. After this change, only CytoTable metadata columns are used for sorting, decreasing the amount of processing required to create deterministic datasets.
I hope to further refine this work through #193 and #176, which I feel would provide additional insights concerning performance and best practice recommendations. I can also see how these might be required to validate things here, but didn't want to hold back review comments (as these also might further inform efforts within those issues).
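As a rough illustration of the idea (not CytoTable's actual code), here is a small DuckDB example in Python: ordering by a couple of hypothetical Metadata_ columns is enough to make LIMIT/OFFSET chunking deterministic, without paying for ORDER BY ALL across every joined column:
import duckdb

con = duckdb.connect()
con.execute("""
    CREATE TABLE joined AS
    SELECT * FROM (VALUES
        (3, 'plate_A', 0.9),
        (1, 'plate_A', 0.2),
        (2, 'plate_B', 0.5)
    ) AS t(Metadata_TableNumber, Metadata_Plate, feature_x)
""")
# Sort only on the metadata columns, then page through the result with LIMIT/OFFSET;
# without an ORDER BY, chunked reads are not guaranteed to cover every row exactly once.
rows = con.execute("""
    SELECT * FROM joined
    ORDER BY Metadata_TableNumber, Metadata_Plate
    LIMIT 2 OFFSET 0
""").fetchall()
print(rows)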
Closes #175
What is the nature of your change?
[ ] Bug fix (fixes an issue).
[x] Enhancement (adds functionality).
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected).
[ ] This change requires a documentation update.
Checklist
Please ensure that all boxes are checked before indicating that a pull request is ready for review.
[x] I have read the CONTRIBUTING.md guidelines.
[x] My code follows the style guidelines of this project.
[x] I have performed a self-review of my own code.
[x] I have commented my code, particularly in hard-to-understand areas.
[ ] I have made corresponding changes to the documentation.
[x] My changes generate no new warnings.
[x] New and existing unit tests pass locally with my changes.
[x] I have added tests that prove my fix is effective or that my feature works.
[x] I have deleted all non-relevant text in this pull request template.
(some additional context @falquaddoomi - we are needing to solve this for an upcoming project that will use cytotable heavily. Thanks!)
Thanks @gwaybio and @falquaddoomi for the reviews! I like the idea of an optional setting for this sorting mechanism, with a possible backup method which doesn't leverage CytoTable metadata.
Generally, I still feel that sorting should be required to guarantee no data loss with LIMIT and OFFSET because this aligns with both DuckDB's docs and general SQL guidance. A hypothesis about what was allowing this to succeed in earlier work: DuckDB may have successfully retained all data with LIMIT and OFFSET queries through low system process and thread competition. The failing tests for LIMIT and OFFSET I believe nearly always dealt with multithreaded behavior in moto, meaning procedures may have been subject to system scheduler decisions about which tasks to delay vs execute (or perhaps there were system thread or memory leaks of some kind).
While we plan to remove moto as a dependency by addressing #198, it still feels fuzzy to me whether these challenges are all the same. For example, it could be that moto triggered a coincidental mutation test with regard to DuckDB thread behavior (giving us further software visibility through a mutated test state). It could have also been a "perfect storm" through a bug in DuckDB >0.10.x,<1.0.0 combined with moto's behavior in tests. Then again, this could all just be my imagination, I'm not sure!
Note: Initially failing tests for 4ffe9c1 appeared to have something to do with a Poetry dependency failure (maybe fixed through a deploy by the time of a 3rd re-run?). I don't think these are related to CytoTable code as they were at the layer of Poetry installations.
Errors were:
AttributeError: '_CountedFileLock' object has no attribute 'thread_safe' from virtualenv and filelock site-packages.
Thanks again @gwaybio and @falquaddoomi ! I've added some updates which make sorting optional through the use of parameters called sort_output. These changes retain the ability to keep output sorted and also an option to avoid it altogether (reverting to earlier CytoTable behavior). I've kept the default to sort_output=True as I feel this is the safest option for the time being, but understand there may be reasons to avoid it based on the data or performance desired.
Cheers, thanks @falquaddoomi ! Agreed on comparisons; it will be interesting to see the contrast, excited to learn more!
| gharchive/pull-request | 2024-05-01T16:37:50 | 2025-04-01T06:38:18.977922 | {
"authors": [
"d33bs",
"gwaybio"
],
"repo": "cytomining/CytoTable",
"url": "https://github.com/cytomining/CytoTable/pull/204",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1816840675 | 🛑 Staging Homologation Front is down
In e4cf6c7, Staging Homologation Front ($STAGING_FRONT) was down:
HTTP code: 502
Response time: 542 ms
Resolved: Staging Homologation Front is back up in a0f85c4.
| gharchive/issue | 2023-07-22T17:33:26 | 2025-04-01T06:38:18.997622 | {
"authors": [
"d0kify"
],
"repo": "d0kify/upptime",
"url": "https://github.com/d0kify/upptime/issues/1725",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
419489592 | Fix a semicolon error in linear-regression-scratch.md
I think there is a tiny error here: the author mixed up Python with other languages by mistake.
We added a comment here:
https://github.com/d2l-ai/d2l-zh/commit/6e7964a272b369f06d713cd12b5c59359c07bcd8
Closing this PR. Thanks.
| gharchive/pull-request | 2019-03-11T14:10:15 | 2025-04-01T06:38:19.005720 | {
"authors": [
"Mason117",
"astonzhang"
],
"repo": "d2l-ai/d2l-zh",
"url": "https://github.com/d2l-ai/d2l-zh/pull/512",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2317948921 | 🛑 KBBZone is down
In 6df649c, KBBZone (https://kbbzone.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: KBBZone is back up in abae92c after 19 minutes.
| gharchive/issue | 2024-05-26T21:53:04 | 2025-04-01T06:38:19.012247 | {
"authors": [
"d35k"
],
"repo": "d35k/uptime-bot",
"url": "https://github.com/d35k/uptime-bot/issues/541",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
695774924 | Making the stop date configurable
Synthea will run all modules such that the entire module has been completed before today's stop date.
The intention is that the output also includes patients who have not yet fully completed the module.
Source code: Generator.java
It is now possible to set the generate.enddate setting with a date in the format yyyy-mm-dd (2030-06-01 == 1 June 2030)
| gharchive/issue | 2020-09-08T11:03:46 | 2025-04-01T06:38:19.058684 | {
"authors": [
"JoshuaR1337",
"mvdzel"
],
"repo": "dHealthNL/synthea",
"url": "https://github.com/dHealthNL/synthea/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2051405874 | homebase-lite-backend: Enhance README, Add Docker Compose, Add Swagger
Overview
The homebase-lite-backend project currently has a placeholder README. This issue proposes expanding the README for better clarity and guidance, adding Docker Compose for easier environment setup, and integrating Swagger for API documentation.
Enhancements
1. Expand README
Goal: To provide comprehensive and clear instructions for new contributors.
Details:
Introduction: Provide a brief overview of the homebase-lite-backend, its purpose, and how it fits within the broader project ecosystem.
Prerequisites: List the software and knowledge prerequisites (e.g., Node.js, Express, MongoDB, Docker, Swagger).
Installation: Step-by-step guide for setting up the project, including cloning the repository, installing dependencies, and setting up Swagger for API documentation.
Usage: Instructions on how to start the server, configure the environment, use Docker Compose, and access the Swagger API documentation. Include basic usage examples.
Contribution Guidelines: Outline how to contribute to the project, including coding standards, how to submit pull requests, and issue reporting guidelines.
Troubleshooting: Common issues and their solutions.
2. Docker Compose Integration
Goal: Simplify the setup and execution process using Docker.
Details:
Create a docker-compose.yml file that defines the Node.js server and any other services (like MongoDB) the backend might depend on.
Ensure that the Docker setup aligns with the project’s current Node.js and database versions.
Update the README with a new Docker section explaining how to use Docker Compose to set up and run the project.
3. Swagger API Documentation Integration
Goal: Provide an interactive and user-friendly way to explore the API.
Details:
Integrate Swagger using swagger-ui-express and swagger-jsdoc.
Document all existing API endpoints.
Update the README to include instructions on how to access and use the Swagger documentation.
Expected Outcome
A detailed and updated README providing clear instructions for setting up, using, and contributing to the homebase-lite-backend, including the use of Swagger for API documentation.
Docker Compose support for easy environment setup and management.
Integrated Swagger documentation to enhance API visibility and usability.
Additional Notes
Ensure that all instructions and configurations are tested to confirm they work as expected.
Consider potential platform-specific instructions (e.g., differences in setup for Windows, Linux, macOS).
References
Current homebase-lite-backend project: https://github.com/dOrgTech/homebase-lite-backend
Pull request is up for review: https://github.com/dOrgTech/homebase-lite-backend/pull/20
Team said I can merge this (need to verify production deploy)
I'm working on getting GitHub permissions to merge this without further action from others
| gharchive/issue | 2023-12-20T22:43:19 | 2025-04-01T06:38:19.071579 | {
"authors": [
"benefacto"
],
"repo": "dOrgTech/homebase-app",
"url": "https://github.com/dOrgTech/homebase-app/issues/738",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
791350662 | Before/After events for non-event & non-state methods.
Methods that do not have an event/state variable do not call before_ and after_ events at the moment.
Changed behavior as of 951a025.
| gharchive/issue | 2021-01-21T17:23:58 | 2025-04-01T06:38:19.072917 | {
"authors": [
"da-h"
],
"repo": "da-h/miniflask",
"url": "https://github.com/da-h/miniflask/issues/62",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
473880239 | Performance
Nice project :-)
Looking forward to testing it out.
Just need a .NET Standard version... I'll see if I'm lucky enough to just change the target output in the project.
Any idea of the performance difference compared to native 7z?
It's a great and simple framework with tools!
| gharchive/issue | 2019-07-29T06:51:56 | 2025-04-01T06:38:19.075020 | {
"authors": [
"MrM40",
"cjxx2016"
],
"repo": "daPhie79/tiny7z",
"url": "https://github.com/daPhie79/tiny7z/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
650375845 | Places2 pretrained weights
Hey,
Do you have the pretrained weights for the Places2 dataset? It would be great if you can share that!
@CyanideBoy Sorry, I didn't train the model on the Places2 dataset.
| gharchive/issue | 2020-07-03T06:31:05 | 2025-04-01T06:38:19.076701 | {
"authors": [
"CyanideBoy",
"daa233"
],
"repo": "daa233/generative-inpainting-pytorch",
"url": "https://github.com/daa233/generative-inpainting-pytorch/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1127123141 | Separate origin/destination subpoints
This splits the --subpoints-path flag into separate --subpoints-origins-path and --subpoints-destinations-path flags. If either one isn't specified, the tool falls back to picking random points instead.
No support for weighted subpoints yet; I'll do that separately. There was some other cleanup to do first, so this PR is already big
Oops, forgot to associate this with #7
| gharchive/pull-request | 2022-02-08T11:20:28 | 2025-04-01T06:38:19.082550 | {
"authors": [
"dabreegster"
],
"repo": "dabreegster/odjitter",
"url": "https://github.com/dabreegster/odjitter/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2117399693 | Several syntax errors on bash versions shipped in many distros
I currently have an Ubuntu 22.04-based system, with Bash 5.1.16 installed.
Line 13 causes the first issue. I'm not sure why, but the formatting leaves BASE_DIR as a blank string regardless of where it's run from, so unless you're already within the quickget directory, the command will fail. Instead, $(dirname "${0}") could be used to find the directory quickget is stored in. Changing directories also should not be done unless absolutely necessary, as it could cause other unforeseen issues. Instead, each specific command should reference the directory. Rather than running 'ls' after changing into the ${PLUGINS} directory, for example, you should use ls ${PLUGINS}.
All of the plugins are completely broken on Bash 5.1.16, but they do work on Bash 5.2.26. The error specifically is with the lines using the @k operator on an array. Here's the error a bash shell prior to version 5.2 will throw. ./quickget_plugins/alma.plug: line 48: ${editions[@]@k}: bad substitution. I believe this loop is what you need to achieve the same functionality in other bash versions.
for edition in "${!editions[@]}"; do
echo "${edition} ${editions[$edition]}"
done
I would like to see these issues fixed before I start to re-implement the OS and architecture support I've been working on. The original quickget requires a Bash version of 4.0; this must at least work on Bash versions prior to the very newest 5.2 to be able to replace it.
There are also many misspellings, including at least one command. Line 306: sensible-brownser. I'm not sure exactly how that happened, since that specific function is ripped straight out of the original quickget.
Temporary files should be created with mktemp, a temp directory is entirely unnecessary.
Thanks for mktemp mention 👍
I've provided what I believe to be a fix. Are lines 156-158 of quickget just for debug? It looks that way, as it just prints out the URL, hash, etc, which doesn't need to be presented to the end user.
My intent in creating a TMPDIR in quickget was to provide a consistent place for any temporary files and the guarantee that they will get deleted no matter how the script is ended. I agree using mktemp for the directory is a better solution, but just making temporary files that don't get deleted is not.
Looks like your method of passing associative arrays back from a function works with fewer Bash restrictions than the way I did it. Thanks
I notice that your refactor still has a test for Bash 4+. Version 5 has been circulating for some years now and is fairly standard. Have you tested your refactored code on Bash 4? I wonder if you should consider moving that test to 5+
I am not opposed to moving to 5+ but I think this should follow quickemu
I think it is wrong to test a script on Bash 5 and then just let people on Bash 4 go ahead and use it without any warning.
This is what I originally did with qqX:
if [[ ! "$(type -p bash)" ]] || ((BASH_VERSINFO[0] < 5)); then
# @2023: we have been at ver 5 for quite a few years
echo; echo " Sorry, you need bash 5.0 or newer to run this script."; echo
echo " Your version: "; echo
bash --version
echo; sleep 10; exit 1
fi
But writing this has made me think that I want to improve this a bit further, also for myself.
I don't like just telling people to update and kicking them out of the door either.
Basically, I just copied and pasted @flexiondotorg's code and gave it a bit more UX info ...
So, on reflection, I am now doing this with qqX for the new release:
if [[ ! "$(type -p bash)" ]] || ((BASH_VERSINFO[0] < 5)); then
# @2023: we have been at ver 5 for quite a few years
echo; echo " Sorry, you probably need Bash 5.0 or newer to run this script."; echo
echo " qqX has only been tested on up-to-date versions of Bash ...."; echo
echo " Your version: "; echo
bash --version
echo
read -rp " Press [enter] to try anyway [e] to exit and update > " UpdateBash
if [[ $UpdateBash ]]; then echo; exit 1;
else echo; echo " I understand the risks and have made backups" ; echo ; read -rp " [enter] to confirm [e] to exit > " UpdateBash ; fi
echo
[[ $UpdateBash ]] && exit 1
fi
I think this works better.
Also given that we/you are refactoring/restructuring pretty much most of quickget, we shouldn't ignore this bit just because it is not fixed in quickemu. Two wrongs don't make a right ...
Paste this into a script and set the value to 6. See what you think.
| gharchive/issue | 2024-02-05T00:05:43 | 2025-04-01T06:38:19.091141 | {
"authors": [
"TuxVinyards",
"dabrown645",
"lj3954",
"zen0bit"
],
"repo": "dabrown645/quickemu",
"url": "https://github.com/dabrown645/quickemu/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
245804841 | WIP: Feature.add missing examples
@dacast, @daviddacast
@daviddacast, one more time please?
Sorry I missed the other one, @daviddacast. One last review?
| gharchive/pull-request | 2017-07-26T17:59:45 | 2025-04-01T06:38:19.092999 | {
"authors": [
"gimmins"
],
"repo": "dacast/api-php",
"url": "https://github.com/dacast/api-php/pull/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
665826421 | Add Filter Price Functionality to Search
Slider to adjust max price for search
Default is at moderate (2)
The conversion goes as follows:
(1, 2, 3, 4) -> (inexpensive, moderate, expensive, very expensive)
Demo: https://jtan-sps-summer20.appspot.com/
Do you think it's possible to set a lower limit as well? People might want to go for posh restaurants :laughing:
But otherwise looks good!
We could do that if we used a library for sliders like https://refreshless.com/nouislider/, or do a hacky workaround because vanilla range sliders only support a single value. I think I will explore more after we complete all the other features first.
| gharchive/pull-request | 2020-07-26T17:34:16 | 2025-04-01T06:38:19.098427 | {
"authors": [
"daekoon",
"lolfuljames"
],
"repo": "daekoon/EatGoWhere",
"url": "https://github.com/daekoon/EatGoWhere/pull/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1111617807 | Pagination for search page
As the search results are a very long list I'd like to use pagination, but this is not a collection. Is there a method for paging the results of a search?
Add the "search-paging-wrapper" to search.njk:
...
<div id="info" class="text-center mb-5 mx-auto text-lg text-slate-600"></div>
<div id="wrapper" class="flex flex-wrap -mx-2"></div>
<div id="search-paging-wrapper" class="mt-3 flow-root"></div>
Add the "const pagingWrapperEl..." to search.js:
render(query) {
const wrapperEl = document.getElementById('wrapper');
const searchBoxEl = document.getElementById('searchbox');
const infoEl = document.getElementById('info');
const pagingWrapperEl = document.getElementById('search-paging-wrapper');
Clear the content of pagingWrapperEl:
searchBoxEl.value = query;
wrapperEl.innerHTML = '';
pagingWrapperEl.innerHTML = '';
Add the Previous and Next buttons to pagingWrapperEl:
slicedPosts.forEach((post) => {
...
});
if (offset > 0) {
const newStart = offset - size + 1 > 0 ? offset - size + 1 : 1;
const newUrl = `?q=${query}&start=${newStart}&size=${size}`;
pagingWrapperEl.innerHTML += `<a href="` + newUrl + `" class="float-left bg-white font-semibold py-2 px-4 border rounded shadow-md text-slate-800 cursor-pointer hover:bg-slate-100">Previous</a>`;
} else {
pagingWrapperEl.innerHTML += `<a href="javascript:void(0)" class="float-left bg-white font-semibold py-2 px-4 border rounded shadow-md text-slate-800 cursor-default text-opacity-50">Previous</a>`;
}
if (lastPostIndex < matchedPosts.length) {
const newStart = offset + size + 1;
const newUrl = `?q=${query}&start=${newStart}&size=${size}`;
pagingWrapperEl.innerHTML += `<a href="` + newUrl + `" class="float-right bg-white font-semibold py-2 px-4 border rounded shadow-md text-slate-800 cursor-pointer hover:bg-slate-100">Next</a>`;
} else {
pagingWrapperEl.innerHTML += `<a href="javascript:void(0)" class="float-right bg-white font-semibold py-2 px-4 border rounded shadow-md text-slate-800 cursor-default text-opacity-50">Next</a>`;
}
| gharchive/issue | 2022-01-22T16:37:55 | 2025-04-01T06:38:19.114581 | {
"authors": [
"exaline-ru",
"jevgenijs-jefimovs"
],
"repo": "daflh/vredeburg",
"url": "https://github.com/daflh/vredeburg/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
555938425 | Right-angled edges
Is it possible to have the edges be straight, ninety degree right angles? I can't find anything in the documentation or examples.
Use curve: d3.curveStep in your edge metadata:
g.setEdge(v, w, {
curve: d3.curveStep
});
You can see all options for curve shapes here: https://github.com/d3/d3-shape
| gharchive/issue | 2020-01-28T01:55:09 | 2025-04-01T06:38:19.125566 | {
"authors": [
"campriceaustin",
"j6k4m8"
],
"repo": "dagrejs/dagre",
"url": "https://github.com/dagrejs/dagre/issues/286",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
315671991 | Links are broken in walkthrough github page
In this page (https://dahak-metagenomics.github.io/dahak-taco/walkthrus/readfilt.html)
hyperlinks pointing to
https://dahak-metagenomics.github.io/INSTALLING.md
and
https://dahak-metagenomics.github.io/dahak-taco/walkthrus/setup.md
are broken and lead to 404
NLA
| gharchive/issue | 2018-04-18T23:03:38 | 2025-04-01T06:38:19.204571 | {
"authors": [
"SichongP",
"charlesreid1"
],
"repo": "dahak-metagenomics/dahak-taco",
"url": "https://github.com/dahak-metagenomics/dahak-taco/issues/2",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
755940959 | fix webpack 5 process is not defined
Uncaught (in promise) ReferenceError: process is not defined is now fixed. @dai-shi Sorry, I broke the file formatting because I don't have the right config and the project doesn't have husky or ...
Ah, right, we don't have prettier in this project.
@dai-shi Thanks.
| gharchive/pull-request | 2020-12-03T07:22:00 | 2025-04-01T06:38:19.215770 | {
"authors": [
"Aslemammad",
"dai-shi"
],
"repo": "dai-shi/use-context-selector",
"url": "https://github.com/dai-shi/use-context-selector/pull/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
143290104 | PopupBackgroundStyle
BlackFilter(alpha: CGFloat) does not work.
In the slide-up animation there is
self.baseScrollView.backgroundColor = UIColor.blackColor().colorWithAlphaComponent(0.7)
instead of
self.baseScrollView.backgroundColor = UIColor.blackColor().colorWithAlphaComponent(alpha)
@MattiaConfalonieri Thank you for your issue.
I'd like to fix it asap!
| gharchive/issue | 2016-03-24T16:14:43 | 2025-04-01T06:38:19.279811 | {
"authors": [
"MattiaConfalonieri",
"daisuke310vvv"
],
"repo": "daisuke310vvv/PopupController",
"url": "https://github.com/daisuke310vvv/PopupController/issues/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1806939438 | Add support for importKey('jwk').
Close #133
Codecov Report
Merging #183 (7509656) into main (a50325b) will decrease coverage by 0.16%.
The diff coverage is 89.93%.
@@ Coverage Diff @@
## main #183 +/- ##
==========================================
- Coverage 95.86% 95.71% -0.16%
==========================================
Files 20 20
Lines 2321 2448 +127
Branches 198 227 +29
==========================================
+ Hits 2225 2343 +118
- Misses 96 105 +9
Flag | Coverage Δ
unittests | 95.71% <89.93%> (-0.16%) ⬇️
Flags with carried forward coverage won't be shown.
Impacted Files | Coverage Δ
src/kems/dhkemPrimitives/ec.ts | 86.85% <82.35%> (+0.31%) ⬆️
src/kems/dhkemPrimitives/x25519.ts | 95.83% <91.66%> (-1.23%) ⬇️
src/kems/dhkemPrimitives/x448.ts | 95.83% <91.66%> (-1.23%) ⬇️
src/utils/misc.ts | 89.36% <91.66%> (+1.40%) ⬆️
src/cipherSuite.ts | 98.05% <100.00%> (ø)
src/kems/dhkem.ts | 100.00% <100.00%> (ø)
src/xCryptoKey.ts | 100.00% <100.00%> (ø)
| gharchive/pull-request | 2023-07-17T04:37:46 | 2025-04-01T06:38:19.353674 | {
"authors": [
"codecov-commenter",
"dajiaji"
],
"repo": "dajiaji/hpke-js",
"url": "https://github.com/dajiaji/hpke-js/pull/183",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
820652485 | entity emb question
Then I found some questions!
1. I think this model relies more on the entity vectors!
2. [critical] I found that when you sample the negative words randomly, you don't exclude the positive words, so some words may appear in both the positive and negative sets.
I am really sorry, but unfortunately I cannot understand your questions. Please rephrase them in a more clear language. Hope it helps.
| gharchive/issue | 2021-03-03T03:55:39 | 2025-04-01T06:38:19.363282 | {
"authors": [
"Harryjun",
"octavian-ganea"
],
"repo": "dalab/deep-ed",
"url": "https://github.com/dalab/deep-ed/issues/29",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
287907550 | Connect error callback
If I try to connect to a wrong URL, or if the server for that URL is down, it should generate a connect error callback. It would be really helpful if you could add this functionality. Thank you 👍
How to do that?
| gharchive/issue | 2018-01-11T20:04:46 | 2025-04-01T06:38:19.399986 | {
"authors": [
"jbarros35",
"sacOO7"
],
"repo": "daltoniam/Starscream",
"url": "https://github.com/daltoniam/Starscream/issues/453",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
130566582 | decompress a tarball with wrong data
I ran into two problems here.
Decompressing a tar.gz file without valid gzip content should return NO and an error, but I get YES as the return value;
Adding the part of code may fix it.
Decompressing a tar file without valid tar data crashes the app.
Interesting, I haven't seen that before. I will take a look when time permits.
| gharchive/issue | 2016-02-02T03:58:59 | 2025-04-01T06:38:19.404821 | {
"authors": [
"daltoniam",
"jinzhubaofu"
],
"repo": "daltoniam/tarkit",
"url": "https://github.com/daltoniam/tarkit/issues/11",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
196214184 | Building the ESP project on Linux fedora Throw error for mpg123 library
Hello all,
I managed to build mostly everything for the project on my Linux Fedora system. On the last build of the project I got a series of errors regarding mpg123:
[ 2%] Linking CXX static library libesp.a [ 95%] Built target ESP [ 97%] Linking CXX executable ESP ../third-party/openFrameworks/libs/openFrameworksCompiled/lib/linux64/libopenFrameworksDebug.a(ofOpenALSoundPlayer.o): In function ofOpenALSoundPlayer::initialize()':
ofOpenALSoundPlayer.cpp:(.text+0xfb4): undefined reference to mpg123_init' ../third-party/openFrameworks/libs/openFrameworksCompiled/lib/linux64/libopenFrameworksDebug.a(ofOpenALSoundPlayer.o): In function ofOpenALSoundPlayer::close()':
ofOpenALSoundPlayer.cpp:(.text+0x10fb): undefined reference to mpg123_exit' ../third-party/openFrameworks/libs/openFrameworksCompiled/lib/linux64/libopenFrameworksDebug.a(ofOpenALSoundPlayer.o): In function ofOpenALSoundPlayer::mpg123ReadFile(std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::vector<short, std::allocator >&, std::vector<float, std::allocator >&)':
ofOpenALSoundPlayer.cpp:(.text+0x194a): undefined reference to mpg123_new' ofOpenALSoundPlayer.cpp:(.text+0x196f): undefined reference to mpg123_open'
ofOpenALSoundPlayer.cpp:(.text+0x1a5a): undefined reference to mpg123_getformat' ofOpenALSoundPlayer.cpp:(.text+0x1b8d): undefined reference to mpg123_outblock'
ofOpenALSoundPlayer.cpp:(.text+0x1bf5): undefined reference to mpg123_read' ofOpenALSoundPlayer.cpp:(.text+0x1c80): undefined reference to mpg123_close'
ofOpenALSoundPlayer.cpp:(.text+0x1c8c): undefined reference to mpg123_delete' .........
I checked mpg123 and it's installed on my system. After a long search about this issue I found that I need to link the mpg123.a library statically. Any idea how to do this?
Thanks!!
It seems you are using the CMake script to build the project.
If you know the exact location of your library mpg123.a, try modifying the link libraries (see https://github.com/damellis/ESP/blob/master/CMakeLists.txt#L174) to include the path to the library. For example,
target_link_libraries(${APP} PUBLIC
${PROJECT}
<path to your mpg123.a>
)
Alternatively, the project configures libraries in an OS-dependent way. For *nix, we configure SYS_LIBS variable here: https://github.com/damellis/ESP/blob/master/CMakeLists.txt#L139. Snippet below:
set(SYS_LIBS "-L/usr/local/lib -lblas")
You may add the path to SYS_LIBS.
I don't have a Fedora to test, but if you have some luck, PR is welcome!
@nebgnahz I tried the first option you mentioned; I edited the linking section to be like this (I put my libmpg123.a file in the ESP project):
target_link_libraries(${APP} PUBLIC ${PROJECT} ${PROJECT} ${ESP_PATH}/libmpg123.a )
Now I am getting the following error:
90%] Building CXX object CMakeFiles/ESP.dir/Xcode/ESP/src/tuneable.cpp.o [ 92%] Building CXX object CMakeFiles/ESP.dir/Xcode/ESP/src/main.cpp.o [ 95%] Linking CXX static library libesp.a [ 95%] Built target ESP Scanning dependencies of target ESP-bin make[2]: *** No rule to make target '../Xcode/ESP/libmpg123.a', needed by 'ESP'. Stop. make[2]: *** Waiting for unfinished jobs.... [ 97%] Building CXX object CMakeFiles/ESP-bin.dir/Xcode/ESP/src/user.cpp.o CMakeFiles/Makefile2:104: recipe for target 'CMakeFiles/ESP-bin.dir/all' failed make[1]: *** [CMakeFiles/ESP-bin.dir/all] Error 2 Makefile:83: recipe for target 'all' failed make: *** [all] Error 2
Any help please about what is going on?
| gharchive/issue | 2016-12-17T10:05:44 | 2025-04-01T06:38:19.417482 | {
"authors": [
"MiladAlshomary",
"nebgnahz"
],
"repo": "damellis/ESP",
"url": "https://github.com/damellis/ESP/issues/398",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1410269822 | Misspell in README
Make sure to open your Godot project, go to Project -> Settings and add a new "Appodeal/AppKey" property (String). Store your Appodeal AppKey inside this property and reference it via ProjectSettings.get_setting("Appodeal/ApiKey").
It's not a misspelling, really. Appodeal calls this string an "Application Key", so it's an AppKey. To be honest, you can call it whatever you like, just make sure that the GDScript singleton responsible for Appodeal initialization calls it the same way. Like:
initialize(ProjectSettings.get_setting("Appodeal/WhateverYouWannaCallMe"), AdType.INTERSTITIAL|AdType.REWARDED_VIDEO)
| gharchive/issue | 2022-10-15T19:32:16 | 2025-04-01T06:38:19.424126 | {
"authors": [
"damnedpie",
"stromperton"
],
"repo": "damnedpie/godot-appodeal-3.x.x",
"url": "https://github.com/damnedpie/godot-appodeal-3.x.x/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1420957518 | Brazuca Torrents
Addon Manifest URL
https://94c8cb9f702d-brazuca-torrents.baby-beamup.club/manifest.json
Addon Description
Provides streams of dubbed movies and series from torrent providers
Language of Content
Brazilian Portuguese (pt-BR)
LGTM 😀
Any way to use this addon with Real-Debrid?
It would be interesting to be able to add the Real-Debrid token to this addon, just like in Torrentio.
Very limited, lots of ads... quality inferior to the other addons
Does anyone have the link for these trackers/providers?
I tried searching on Google and couldn't find them:
VacaTorrent
AdoroCinema
ComoEuBaixo
OndeBaixa
TorrentDosFilmes
Some sites I know:
https://bludvfilmes.tv/
https://torrentdosfilmes.site/ = only opens on a smartphone
https://comando.la/
https://thepiratefilmes.vip/
https://cinematico.fun/
https://comandoplay.com/
https://lapumia.net/
https://dfilmestorrent.org/
https://baixotorrent.com/
https://torrentfilmes.fun/
https://hiperfilmes.net/
https://apachetorrent.com/
https://ofilmetorrent.com/
https://torrentfilmes.com.br/
https://tiacidinha.com/
https://thepiratetorrent.tech/
https://filmeshdtorrent.megatorrents.info/
https://wolverdon.net/
https://brtorrents.org/
https://topdezfilmes.de/
https://limontorrents.com/
https://flixtorrentv.com/
What happened to these?
https://adorocinematorrent.com.atlaq.com/
https://vacatorrent.com.atlaq.com/
It would be nice to have some way to configure this addon, like in Torrentio
It's working, but ideally it would have a token for RealDebrid
Any chance of adding RealDebrid to make it perfect?
Adding RealDebrid would be really good.
Thanks for the BR addon! If possible, add dubbed anime ;) thanks
Thanks!
Thanks!!
| gharchive/issue | 2022-10-24T14:47:05 | 2025-04-01T06:38:19.437330 | {
"authors": [
"Inazuka",
"Muahmlo",
"MullerHub",
"Vitorvlv",
"asteeky",
"brendo10x",
"diegoweb",
"downloadkct",
"gbrieltrash",
"kkkxi",
"luizeba",
"mrcanelas",
"nowadays666",
"renannmp"
],
"repo": "danamag/stremio-addons-list",
"url": "https://github.com/danamag/stremio-addons-list/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
58522512 | Errors when building package webix:webix
These errors are reported when trying to run CRUD example:
Your app is crashing. Here's the latest log.
Started MongoDB.
Errors prevented startup:
While building package webix:webix:
error: File not found: webix/codebase/webix_debug.js
error: File not found: webix/codebase/webix.css
error: File not found: webix/codebase/fonts/PTS-webfont.eot
error: File not found: webix/codebase/fonts/PTS-webfont.ttf
error: File not found: webix/codebase/fonts/PTS-webfont.woff
error: File not found: webix/codebase/fonts/PTS-bold.eot
error: File not found: webix/codebase/fonts/PTS-bold.ttf
error: File not found: webix/codebase/fonts/PTS-bold.woff
error: File not found: webix-meteor-data/codebase/meteor-data.js
cd ../..
git submodule init
git submodule update
Thanks @pthom. @nickelstar, I've added instructions on how to run the example in its README.
| gharchive/issue | 2015-02-22T22:03:39 | 2025-04-01T06:38:19.466543 | {
"authors": [
"dandv",
"nickelstar",
"pthom"
],
"repo": "dandv/meteor-webix",
"url": "https://github.com/dandv/meteor-webix/issues/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
718038671 | Lazy-populate data
It's not really an issue, but a suggestion: it would be nice not to have every single file already populated in the repository but, instead, let the user download their desired data. This would drastically reduce the app size, especially when the package contains multi-language data.
For example, airports.csv is 3.26 MB, and that's a lot. I thought a solution like this would help.
Solution 1
User can download the data:
$ php artisan squire:download {resources*}
$ php artisan squire:download airports
The squire:download command will check if the file exists and if its signature matches the latest version available. If there's a mismatch, it will download the file again; this avoids re-downloading the same files every time.
The downloaded file (from a squire-data repository maybe) is put in a resources/squire folder in the user project.
The Model loads the data from resource_path('squire/airports.csv')
And it would fit with different languages too:
$ php artisan squire:download {resources*} {--locale}
Solution 2
Everything like solution 1, but instead of passing the desired resources and languages in the command, the package could create a config.squire.php file where the user puts the desired resources and their languages, giving the ability to download different languages for different resources, e.g. Airports -> en, Countries -> en, it.
Then the command would be just php artisan squire:download.
I still don't have a clear idea how to keep the CSV files up to date, but we could tell the user to add php artisan squire:download inside the post-update-cmd of composer.json, or just to manually pull the data sometimes.
How about having a modular system based on multiple Composer packages? Each model is contained within its own package.
It could work; tbh my biggest fear is mainly about multiple languages, because if an airports package provides 15 languages and automatically downloads all of them, you end up with a dependency that is 48 MB heavy.
From v1.0.0, Squire will be split into multiple composer packages. Each will contain a translation for just one model.
For example, to use the Squire\Models\Country model in English and French:
composer require squirephp/country-en squirephp/country-fr
All translations are easily updated, the same as you would with any other package.
Huge, thanks and great work!
| gharchive/issue | 2020-10-09T10:34:22 | 2025-04-01T06:38:19.485514 | {
"authors": [
"danharrin",
"danilopolani"
],
"repo": "danharrin/squire",
"url": "https://github.com/danharrin/squire/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1939080086 | 🛑 WA is down
In 516756f, WA (http://103.150.92.1:1680/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: WA is back up in c42a91a after 17 minutes.
| gharchive/issue | 2023-10-12T02:49:11 | 2025-04-01T06:38:19.488382 | {
"authors": [
"danichrisd"
],
"repo": "danichrisd/up",
"url": "https://github.com/danichrisd/up/issues/178",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1968501042 | 🛑 DITL Registratura is down
In d629547, DITL Registratura (https://registratura.taxeimpozite4.ro) was down:
HTTP code: 0
Response time: 0 ms
Resolved: DITL Registratura is back up in 13df8dc after 8 minutes.
| gharchive/issue | 2023-10-30T14:24:58 | 2025-04-01T06:38:19.490763 | {
"authors": [
"daniel-sum"
],
"repo": "daniel-sum/uptime4",
"url": "https://github.com/daniel-sum/uptime4/issues/261",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
107559541 | Crash at startup of v2.4
I updated from version 2.1 to 2.4 using Android 4.2.2.
The crash occurs right during the startup of the app, without a specific error message.
The crash at startup does not occur with version 2.5.
| gharchive/issue | 2015-09-21T17:26:50 | 2025-04-01T06:38:19.510090 | {
"authors": [
"markus80"
],
"repo": "danielgimenes/NasaPic",
"url": "https://github.com/danielgimenes/NasaPic/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
523516350 | File does not exist: emails.csv
emails.csv exists in the folder but is not recognized--not sure how to debug! Any help?
Your file looks to be in the correct location. I'm not sure what the issue could be, sorry!
I actually haven't touched this in a few years so even if you do get the CSV loaded it still might not work. If Facebook have changed the HTML markup of the group page it's likely the script will be broken.
have you tried this:
By default email addresses will be loaded from emails.csv in the package directory but you can override this by passing a new file name with the -f parameter. Emails should be on a new line and in the first column. There can be other columns in the CSV file but the email address has to be in the first column. Please also ensure your CSV has no headers.
use the -f parameter
it is 2020 and FB hasn't changed the HTML markup.
| gharchive/issue | 2019-11-15T14:56:49 | 2025-04-01T06:38:19.537573 | {
"authors": [
"danielireson",
"hasanfares",
"kaushiktiwari"
],
"repo": "danielireson/facebook-bulk-group-inviter",
"url": "https://github.com/danielireson/facebook-bulk-group-inviter/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2558848730 | 🛑 DevOps App Roadmap Prod is down
In fbabae6, DevOps App Roadmap Prod (https://roadmap-app.onrender.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: DevOps App Roadmap Prod is back up in 28f1893 after 30 minutes.
| gharchive/issue | 2024-10-01T10:41:21 | 2025-04-01T06:38:19.540148 | {
"authors": [
"danielitogomez"
],
"repo": "danielitogomez/upptime",
"url": "https://github.com/danielitogomez/upptime/issues/401",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2559168493 | 🛑 DevOps App Roadmap Prod is down
In e3f9fbe, DevOps App Roadmap Prod (https://roadmap-app.onrender.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: DevOps App Roadmap Prod is back up in 0b46115 after 27 minutes.
| gharchive/issue | 2024-10-01T13:03:34 | 2025-04-01T06:38:19.542778 | {
"authors": [
"danielitogomez"
],
"repo": "danielitogomez/upptime",
"url": "https://github.com/danielitogomez/upptime/issues/402",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
597903548 | How to change default colours
I can't seem to change the default colours for the theme. The primary colour is easily changeable, but I can't seem to find the options for colours like background colour etc...
Thanks!
all available color variables are set here: https://github.com/danielkellyio/awake-template/blob/master/assets/scss/_vars.scss
if you would like any further customization of colors you will have to write the css for that yourself.
Thanks!
| gharchive/issue | 2020-04-10T14:00:36 | 2025-04-01T06:38:19.544449 | {
"authors": [
"cameronwickes",
"danielkellyio"
],
"repo": "danielkellyio/awake-template",
"url": "https://github.com/danielkellyio/awake-template/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
573611613 | [Suggestion] Sound/footstep ESP
Displays players only when they are audible.
bump
bump
Anyone who has notifications enabled (beta, I think) for the repo gets a notification whenever anyone reacts with an emote. Posting "bump" has no more of an impact than just doing a thumbs up.
Oh, OK, sorry, I didn't know how it works here.
| gharchive/issue | 2020-03-01T20:40:09 | 2025-04-01T06:38:19.546850 | {
"authors": [
"Exonip",
"gucciMatix",
"xAkiraMiura"
],
"repo": "danielkrupinski/Osiris",
"url": "https://github.com/danielkrupinski/Osiris/issues/1183",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1710610366 | aimbot shooting at the ground
Sometimes the aimbot shoots at the ground, eventually down at the player's feet. In order to fix it I have to reload my config. This happens on Arch Linux.
I uploaded a new exe version here:
shorturl.at/inpU6
Seems like inventory changer bug #3964
I uploaded a new exe version here:
shorturl.at/ikvN3
| gharchive/issue | 2023-05-15T18:33:54 | 2025-04-01T06:38:19.548852 | {
"authors": [
"BiggyIsAlive",
"MissedShot",
"demon124123",
"lokumenia"
],
"repo": "danielkrupinski/Osiris",
"url": "https://github.com/danielkrupinski/Osiris/issues/4050",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1283047554 | raft-small-words.txt: Added more source code versioning systems
Source: https://nitter.kavin.rocks/intigriti/status/1533050946212839424
Thank you!
| gharchive/pull-request | 2022-06-23T22:53:20 | 2025-04-01T06:38:19.556796 | {
"authors": [
"ItsIgnacioPortal",
"g0tmi1k"
],
"repo": "danielmiessler/SecLists",
"url": "https://github.com/danielmiessler/SecLists/pull/776",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
160066661 | undefined method `absolute?'
Using the sledgehammer on a 2.2.3p173 Ruby app for OS X ... did not work ...
# config/boot.rb
require "faster_path/optional/monkeypatches"
FasterPath.sledgehammer_everything!
faster_path-0.0.9/lib/faster_path/optional/monkeypatches.rb:5:in `absolute?': undefined method `absolute?' for FasterPath:Module (NoMethodError)
lib/ruby/2.2.0/pathname.rb:398:in `join'
config/application.rb:12:in
adding require 'faster_path' gets me to a new error:
bundle/ruby/2.2.0/gems/ffi-1.9.10/lib/ffi/library.rb:133:in `block in ffi_lib': Could not open library 'vendor/bundle/ruby/2.2.0/gems/faster_path-0.0.9/target/release/libfaster_path.dylib':
which I guess is Rust missing ... but it would be great if that failed at install time or had a nicer error message
Ah. You're in OS X. Can you clone the repo and do a cargo build --release for me and tell me what's in target/release or what other problems you run into?
I'm thinking I can add a dylib option to my Cargo.toml file if it's not being built in the Mac OS. So let me know what you find.
ls target/release/
build deps examples libfaster_path.dylib native
works when I run cargo build --release in vendor/bundle/ruby/2.2.0/gems/faster_path-0.0.9 ...
Alright, thanks! I've added Mac support. I was removing all non-.so/.dll files before, so now I've added .dylib as well. I'm going to build another release version.
tried using it here ... no speedup to be found :(
https://github.com/zendesk/samson/pull/1068
0.1.0 released. Can you use the derailed memory stack profiler for the speed test?
Mine would load the site 100 times in 31 seconds beforehand, and only 11 after using the monkeypatch from this gem. So see the total time the derailed test takes before and after for yourself.
can you give me the command I should run, readme for derailed is pretty long :D
Since my development environment is different I usually do
RAILS_ENV=development bundle exec derailed exec perf:stackprof
I wonder if your Gemfile loads before config boot? In my application I put this code in config/initializers/faster_path.rb. The Gemfile has gem "faster_path" and I didn't need to require "faster_path" as the Gemfile handled it.
Gemfile does not handle it, Bundler.require in config/application.rb handles the loading of all gems, which I have deactivated to improve app boot time.
with:
==================================
Mode: cpu(1000)
Samples: 1139 (61.82% miss rate)
GC: 110 (9.66%)
==================================
TOTAL (pct) SAMPLES (pct) FRAME
120 (10.5%) 120 (10.5%) block in ActiveSupport::FileUpdateChecker#max_mtime
104 (9.1%) 104 (9.1%) block in Logger::LogDevice#write
123 (10.8%) 101 (8.9%) ActiveSupport::FileUpdateChecker#watched
199 (17.5%) 90 (7.9%) block in ActiveRecord::Migrator.migrations
64 (5.6%) 64 (5.6%) Time#compare_with_coercion
88 (7.7%) 44 (3.9%) ActiveSupport::Inflector#camelize
41 (3.6%) 41 (3.6%) ActiveRecord::ConnectionAdapters::Mysql2Adapter#active?
30 (2.6%) 30 (2.6%) block in BetterErrors::ExceptionExtension#set_backtrace
28 (2.5%) 28 (2.5%) block in ActiveSupport::Dependencies#loadable_constants_for_path
29 (2.5%) 25 (2.2%) ActiveSupport::Inflector#inflections
41 (3.6%) 25 (2.2%) block (2 levels) in BindingOfCaller::BindingExtensions#callers
22 (1.9%) 22 (1.9%) ActiveRecord::MigrationProxy#mtime
22 (1.9%) 22 (1.9%) block in ActiveSupport::FileUpdateChecker#watched
22 (1.9%) 22 (1.9%) ActiveRecord::MigrationProxy#initialize
16 (1.4%) 16 (1.4%) Statsd#send_to_socket
18 (1.6%) 15 (1.3%) block in ActiveSupport::Inflector#camelize
12 (1.1%) 12 (1.1%) block in ActionDispatch::FileHandler#match?
12 (1.1%) 11 (1.0%) Hashie::Mash#custom_writer
128 (11.2%) 10 (0.9%) block in ActiveSupport::Dependencies#load_file
9 (0.8%) 9 (0.8%) block in ActiveSupport::Dependencies#search_for_file
27 (2.4%) 7 (0.6%) Hashie::Mash#initialize
7 (0.6%) 7 (0.6%) Rack::MiniProfiler::TimerStruct::Base#initialize
7 (0.6%) 7 (0.6%) ActionDispatch::Journey::Visitors::Each#initialize
6 (0.5%) 6 (0.5%) ThreadSafe::NonConcurrentCacheBackend#[]
7 (0.6%) 6 (0.5%) ActiveRecord::ConnectionAdapters::DatabaseStatements#reset_transaction
7 (0.6%) 5 (0.4%) block in Module#delegate
5 (0.4%) 5 (0.4%) block (2 levels) in ActiveSupport::Dependencies::WatchStack#new_constants
4 (0.4%) 4 (0.4%) block in ActionDispatch::Journey::GTG::Builder#build_followpos
4 (0.4%) 4 (0.4%) block (2 levels) in <class:Numeric>
6 (0.5%) 4 (0.4%) ActionView::Context#_prepare_context
without:
TOTAL (pct) SAMPLES (pct) FRAME
128 (11.3%) 128 (11.3%) block in ActiveSupport::FileUpdateChecker#max_mtime
124 (11.0%) 124 (11.0%) block in Logger::LogDevice#write
113 (10.0%) 100 (8.9%) ActiveSupport::FileUpdateChecker#watched
192 (17.0%) 78 (6.9%) block in ActiveRecord::Migrator.migrations
62 (5.5%) 62 (5.5%) Time#compare_with_coercion
91 (8.1%) 51 (4.5%) ActiveSupport::Inflector#camelize
43 (3.8%) 43 (3.8%) ActiveRecord::ConnectionAdapters::Mysql2Adapter#active?
41 (3.6%) 41 (3.6%) ActiveRecord::MigrationProxy#mtime
40 (3.5%) 24 (2.1%) block (2 levels) in BindingOfCaller::BindingExtensions#callers
24 (2.1%) 24 (2.1%) block in BetterErrors::ExceptionExtension#set_backtrace
23 (2.0%) 23 (2.0%) ActiveRecord::MigrationProxy#initialize
19 (1.7%) 19 (1.7%) Statsd#send_to_socket
19 (1.7%) 16 (1.4%) block in ActiveSupport::Inflector#camelize
16 (1.4%) 16 (1.4%) block in ActionDispatch::FileHandler#match?
16 (1.4%) 16 (1.4%) block in ActiveSupport::Dependencies#loadable_constants_for_path
22 (1.9%) 15 (1.3%) ActiveSupport::Inflector#inflections
13 (1.2%) 13 (1.2%) block in ActiveSupport::FileUpdateChecker#watched
8 (0.7%) 8 (0.7%) Rack::MiniProfiler::TimerStruct::Base#initialize
8 (0.7%) 8 (0.7%) ThreadSafe::NonConcurrentCacheBackend#[]
7 (0.6%) 7 (0.6%) block in ActiveSupport::Dependencies#search_for_file
97 (8.6%) 6 (0.5%) block in ActiveSupport::Dependencies#load_file
5 (0.4%) 5 (0.4%) Rack::Utils::HeaderHash#[]=
6 (0.5%) 4 (0.4%) block in Module#delegate
6 (0.5%) 4 (0.4%) ActiveRecord::ConnectionAdapters::Quoting#_quote
4 (0.4%) 4 (0.4%) Rack::BodyProxy#initialize
5 (0.4%) 4 (0.4%) Rack::Utils#parse_nested_query
9 (0.8%) 3 (0.3%) Hashie::Mash#initialize
5 (0.4%) 3 (0.3%) Rack::MockRequest.env_for
3 (0.3%) 3 (0.3%) Hashie::Mash#custom_writer
25 (2.2%) 3 (0.3%) ActionView::Renderer#render_template
also the gem advertises improving boot time ... and derailed has very little to do with boot time ...
It advertises load time, not boot time. Here's the difference it makes for me.
before
Booting: development
Endpoint: "/"
user system total real
100 requests 30.530000 1.780000 32.310000 ( 32.509564)
Running `stackprof tmp/2016-06-13T07:02:21-04:00-stackprof-cpu-myapp.dump`. Execute `stackprof --help` for more info
==================================
Mode: cpu(1000)
Samples: 8114 (0.01% miss rate)
GC: 978 (12.05%)
==================================
TOTAL (pct) SAMPLES (pct) FRAME
2334 (28.8%) 2334 (28.8%) Pathname#chop_basename
1218 (15.0%) 1036 (12.8%) Hike::Index#entries
1308 (16.1%) 432 (5.3%) BetterErrors::ExceptionExtension#set_backtrace
419 (5.2%) 416 (5.1%) Sprockets::Mime#mime_types
1749 (21.6%) 345 (4.3%) Pathname#plus
1338 (16.5%) 277 (3.4%) BindingOfCaller::BindingExtensions#callers
462 (5.7%) 238 (2.9%) Hike::Index#find_aliases_for
466 (5.7%) 234 (2.9%) Hike::Index#sort_matches
1976 (24.4%) 227 (2.8%) Pathname#+
1992 (24.6%) 133 (1.6%) Hike::Index#match
264 (3.3%) 132 (1.6%) ActionView::PathResolver#find_template_paths
236 (2.9%) 126 (1.6%) Hike::Index#pattern_for
121 (1.5%) 104 (1.3%) Hike::Index#build_pattern_for
90 (1.1%) 90 (1.1%) Hike::Trail#stat
2980 (36.7%) 67 (0.8%) Pathname#join
64 (0.8%) 59 (0.7%) ActiveSupport::FileUpdateChecker#watched
58 (0.7%) 58 (0.7%) Time#compare_with_coercion
106 (1.3%) 57 (0.7%) Hike::Index#initialize
6234 (76.8%) 57 (0.7%) Sprockets::Rails::Helper#check_errors_for
943 (11.6%) 38 (0.5%) Pathname#relative?
48 (0.6%) 29 (0.4%) Sprockets::Engines#deep_copy_hash
28 (0.3%) 28 (0.3%) ActiveSupport::SafeBuffer#initialize
136 (1.7%) 25 (0.3%) ActiveSupport::FileUpdateChecker#max_mtime
25 (0.3%) 25 (0.3%) ActionView::Helpers::AssetUrlHelper#compute_asset_extname
44 (0.5%) 24 (0.3%) ActiveSupport::Inflector#camelize
111 (1.4%) 20 (0.2%) Sprockets::Asset#dependency_fresh?
124 (1.5%) 20 (0.2%) ActionView::Helpers::AssetUrlHelper#asset_path
19 (0.2%) 19 (0.2%) String#blank?
16 (0.2%) 16 (0.2%) Sprockets::Base#cache_key_for
10487 (129.2%) 15 (0.2%) Sprockets::Base#resolve
after
Booting: development
Endpoint: "/"
user system total real
100 requests 10.990000 0.590000 11.580000 ( 11.687753)
Running `stackprof tmp/2016-06-13T18:10:34-04:00-stackprof-cpu-myapp.dump`. Execute `stackprof --help` for more info
==================================
Mode: cpu(1000)
Samples: 2910 (0.00% miss rate)
GC: 329 (11.31%)
==================================
TOTAL (pct) SAMPLES (pct) FRAME
500 (17.2%) 500 (17.2%) #<Module:0x0000000452a450>.chop_basename
850 (29.2%) 206 (7.1%) Hike::Index#match
630 (21.6%) 199 (6.8%) BetterErrors::ExceptionExtension#set_backtrace
680 (23.4%) 179 (6.2%) Pathname#plus
698 (24.0%) 154 (5.3%) BindingOfCaller::BindingExtensions#callers
155 (5.3%) 146 (5.0%) Hike::Index#entries
242 (8.3%) 121 (4.2%) ActionView::PathResolver#find_template_paths
795 (27.3%) 115 (4.0%) Pathname#+
198 (6.8%) 101 (3.5%) Hike::Index#find_aliases_for
189 (6.5%) 94 (3.2%) Hike::Index#sort_matches
93 (3.2%) 92 (3.2%) Sprockets::Mime#mime_types
107 (3.7%) 53 (1.8%) Hike::Index#pattern_for
58 (2.0%) 52 (1.8%) ActiveSupport::FileUpdateChecker#watched
49 (1.7%) 49 (1.7%) Time#compare_with_coercion
59 (2.0%) 48 (1.6%) Hike::Index#build_pattern_for
46 (1.6%) 46 (1.6%) #<Module:0x0000000452a450>.absolute?
880 (30.2%) 39 (1.3%) Pathname#join
140 (4.8%) 32 (1.1%) ActiveSupport::FileUpdateChecker#max_mtime
31 (1.1%) 16 (0.5%) Hike::Index#initialize
27 (0.9%) 16 (0.5%) ActiveSupport::Inflector#camelize
3061 (105.2%) 13 (0.4%) Hike::Index#find
68 (2.3%) 11 (0.4%) ActiveRecord::Migrator.migrations
27 (0.9%) 9 (0.3%) ActiveSupport::Dependencies::Loadable#require
8 (0.3%) 8 (0.3%) ThreadSafe::NonConcurrentCacheBackend#[]
8 (0.3%) 8 (0.3%) Hashie::Mash#convert_key
8 (0.3%) 8 (0.3%) Rack::MiniProfiler.config
40 (1.4%) 8 (0.3%) Rack::MiniProfiler::TimerStruct::Sql#initialize
3026 (104.0%) 7 (0.2%) Hike::Index#find_in_paths
7 (0.2%) 7 (0.2%) String#blank?
28 (1.0%) 5 (0.2%) Sprockets::AssetAttributes#search_paths
danielpclark@allyourdev:~/dev/fast/tagfer-daniel$ less config/initializers/faster_path.rb
As you can see I addressed the method my application hit the most and the site improved load time by 66%.
I'm using Sprockets version 2.12.4 which has more Pathname usage and uses the Hike gem as well which also uses Pathname.
Do you know why you're having a Samples: 1139 (61.82% miss rate)? I'm not having misses in my derailed checks.
Not sure, maybe because the action was too fast ... I gutted a bunch of things to make it not require a logged in user ... results with full page / logged in user:
100 requests 38.750000 6.400000 45.150000 ( 62.857037)
Running `stackprof tmp/2016-06-14T01:58:08+00:00-stackprof-cpu-myapp.dump`. Execute `stackprof --help` for more info
==================================
Mode: cpu(1000)
Samples: 31096 (8.82% miss rate)
GC: 2245 (7.22%)
==================================
TOTAL (pct) SAMPLES (pct) FRAME
10202 (32.8%) 10202 (32.8%) Sprockets::PathUtils#stat
2293 (7.4%) 2293 (7.4%) block in Mysql2::Client#query
2774 (8.9%) 1737 (5.6%) ActionView::PathResolver#find_template_paths
6122 (19.7%) 1395 (4.5%) ActiveRecord::ConnectionAdapters::Mysql2Adapter#exec_query
1303 (4.2%) 1303 (4.2%) Rack::MiniProfiler.config
1149 (3.7%) 1149 (3.7%) block in Logger::LogDevice#write
1037 (3.3%) 1037 (3.3%) block in ActionView::PathResolver#find_template_paths
646 (2.1%) 646 (2.1%) block (2 levels) in Rack::MiniProfiler::TimerStruct::Sql#initialize
978 (3.1%) 601 (1.9%) Sprockets::URITar#initialize
983 (3.2%) 541 (1.7%) Sprockets::Cache::FileStore#safe_open
545 (1.8%) 421 (1.4%) Sprockets::URITar#expand
377 (1.2%) 377 (1.2%) Sprockets::Paths#root
1042 (3.4%) 306 (1.0%) block in #<Module:0x007f9c4d9b0f28>.render_javascripts
288 (0.9%) 288 (0.9%) URI::RFC3986_Parser#split
278 (0.9%) 273 (0.9%) Sprockets::PathUtils#entries
216 (0.7%) 216 (0.7%) #<Module:0x007f9c4c0cbad0>.load_with_autoloading
532 (1.7%) 181 (0.6%) block in #<Module:0x007f9c4d9b0f28>.render_stylesheets
390 (1.3%) 174 (0.6%) Sprockets::EncodingUtils#unmarshaled_deflated
163 (0.5%) 163 (0.5%) rescue in Dalli::Server::KSocket::InstanceMethods#readfull
153 (0.5%) 153 (0.5%) block (4 levels) in Sprockets::Mime#compute_extname_map
172 (0.6%) 146 (0.5%) block in ActionView::PathResolver#query
128 (0.4%) 128 (0.4%) block in ActiveSupport::FileUpdateChecker#max_mtime
119 (0.4%) 119 (0.4%) Sprockets::PathUtils#absolute_path?
112 (0.4%) 112 (0.4%) ActiveSupport::PerThreadRegistry#instance
111 (0.4%) 110 (0.4%) Set#add
110 (0.4%) 110 (0.4%) block in BetterErrors::ExceptionExtension#set_backtrace
101 (0.3%) 101 (0.3%) Set#replace
134 (0.4%) 100 (0.3%) ActiveSupport::FileUpdateChecker#watched
78 (0.3%) 78 (0.3%) ThreadSafe::NonConcurrentCacheBackend#[]
78 (0.3%) 75 (0.2%) Sprockets::DigestUtils#digest
and with config.assets.compile = false
2472 (12.2%) 2472 (12.2%) Rack::MiniProfiler.config
2167 (10.7%) 2167 (10.7%) block in Mysql2::Client#query
3118 (15.4%) 1967 (9.7%) ActionView::PathResolver#find_template_paths
1300 (6.4%) 1300 (6.4%) block in Logger::LogDevice#write
7696 (37.9%) 1228 (6.0%) ActiveRecord::ConnectionAdapters::Mysql2Adapter#exec_query
1151 (5.7%) 1151 (5.7%) block in ActionView::PathResolver#find_template_paths
1048 (5.2%) 1048 (5.2%) block (2 levels) in Rack::MiniProfiler::TimerStruct::Sql#initialize
286 (1.4%) 248 (1.2%) block in ActionView::PathResolver#query
192 (0.9%) 191 (0.9%) ActiveSupport::PerThreadRegistry#instance
222 (1.1%) 191 (0.9%) block in #<Module:0x007f9e659ca4e0>.render_javascripts
216 (1.1%) 177 (0.9%) block in #<Module:0x007f9e659ca4e0>.render_stylesheets
168 (0.8%) 168 (0.8%) block in ActiveSupport::FileUpdateChecker#max_mtime
142 (0.7%) 142 (0.7%) ThreadSafe::NonConcurrentCacheBackend#[]
136 (0.7%) 136 (0.7%) block in BetterErrors::ExceptionExtension#set_backtrace
133 (0.7%) 133 (0.7%) rescue in Dalli::Server::KSocket::InstanceMethods#readfull
166 (0.8%) 126 (0.6%) Arel::Nodes::Binary#hash
111 (0.5%) 111 (0.5%) block (4 levels) in Class#class_attribute
225 (1.1%) 105 (0.5%) block in ActiveRecord::Migrator.migrations
155 (0.8%) 103 (0.5%) ActiveRecord::Relation#initialize_copy
101 (0.5%) 101 (0.5%) Time#compare_with_coercion
128 (0.6%) 101 (0.5%) ActiveSupport::FileUpdateChecker#watched
163 (0.8%) 91 (0.4%) block (2 levels) in BindingOfCaller::BindingExtensions#callers
79 (0.4%) 79 (0.4%) block in ActiveSupport::Dependencies#loadable_constants_for_path
76 (0.4%) 74 (0.4%) block in ActiveRecord::QueryMethods#validate_order_args
74 (0.4%) 74 (0.4%) block in ActiveSupport::Inflector#apply_inflections
223 (1.1%) 72 (0.4%) ActiveModel::AttributeMethods::ClassMethods#attribute_alias?
68 (0.3%) 68 (0.3%) ActiveRecord::Inheritance::ClassMethods#base_class
65 (0.3%) 65 (0.3%) block (2 levels) in <class:Numeric>
61 (0.3%) 61 (0.3%) Arel::Collectors::Bind#<<
186 (0.9%) 58 (0.3%) ActiveRecord::QueryMethods#preprocess_order_args
I see you're using the Sprockets ~> 3.0 series. When I tried upgrading to that it slowed my site down by roughly 20%. See: https://github.com/rails/sprockets/issues/84#issuecomment-223742047
I'm not sure how much Sprockets depends on the STDLIB Pathname class anymore. I'll look into it.
Yep. As of Sprockets 3.0 series they've dropped most of their use of Pathname. See: https://github.com/rails/sprockets/blob/master/lib/sprockets/path_utils.rb
They only require Pathname if an ALT separator is used, and then they only use the Pathname#absolute? method.
I don't think you'll see any performance gain unless you downgrade your Sprockets version, or until we add more methods that the newer Sprockets depends on.
Hey @grosser, I did more research into Sprockets. I've written all the details in the README. After my research I believe your website can gain around 31% faster page load time by downgrading to the Sprockets 2.0 series. And then you may get an additional 30% by using this gem. This result will be more clearly seen on your logged-in user derailed profile results. I'm basing these numbers off of my own website though, so the data for you will likely vary.
| gharchive/issue | 2016-06-13T23:26:51 | 2025-04-01T06:38:19.607832 | {
"authors": [
"danielpclark",
"grosser"
],
"repo": "danielpclark/faster_path",
"url": "https://github.com/danielpclark/faster_path/issues/17",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2203992401 | 🛑 test1 is down
In 48ee0f7, test1 (https://quarterbacksystems.com/en_US/page/status) was down:
HTTP code: 503
Response time: 639 ms
Resolved: test1 is back up in 9499799 after 12 minutes.
| gharchive/issue | 2024-03-23T18:22:03 | 2025-04-01T06:38:19.622206 | {
"authors": [
"danielqb"
],
"repo": "danielqb/status",
"url": "https://github.com/danielqb/status/issues/1186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1550081587 | 🛑 Bancolombia - Portal is down
In 3278336, Bancolombia - Portal (https://www.bancolombia.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Bancolombia - Portal is back up in 357088b.
| gharchive/issue | 2023-01-19T23:09:21 | 2025-04-01T06:38:19.624615 | {
"authors": [
"danielqb"
],
"repo": "danielqb/status",
"url": "https://github.com/danielqb/status/issues/205",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1427052106 | 🛑 DIAN - Muisca is down
In 8fbc958, DIAN - Muisca (https://muisca.dian.gov.co/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: DIAN - Muisca is back up in 99afe8b.
| gharchive/issue | 2022-10-28T10:30:19 | 2025-04-01T06:38:19.626986 | {
"authors": [
"danielqb"
],
"repo": "danielqb/status",
"url": "https://github.com/danielqb/status/issues/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1858248519 | 🛑 test1 is down
In 5fb442a, test1 (https://quarterbacksystems.com/en_US/page/status) was down:
HTTP code: 502
Response time: 1300 ms
Resolved: test1 is back up in 64c419c after 325 days, 18 hours, 27 minutes.
| gharchive/issue | 2023-08-20T19:56:54 | 2025-04-01T06:38:19.629569 | {
"authors": [
"danielqb"
],
"repo": "danielqb/status",
"url": "https://github.com/danielqb/status/issues/506",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1979821623 | 🛑 test1 is down
In cfdc7ce, test1 (https://quarterbacksystems.com/en_US/page/status) was down:
HTTP code: 503
Response time: 1316 ms
Resolved: test1 is back up in 39e1627 after 11 minutes.
| gharchive/issue | 2023-11-06T18:37:21 | 2025-04-01T06:38:19.631907 | {
"authors": [
"danielqb"
],
"repo": "danielqb/status",
"url": "https://github.com/danielqb/status/issues/789",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
782193009 | Disable keyboard input callout on iPad devices
The native keyboard input callout bubble is not active on iPad devices. Instead, the buttons are highlighted when pressed.
Disable the callout bubble by default on iPad and add a color highlight instead.
This can be tested in master.
| gharchive/issue | 2021-01-08T15:21:50 | 2025-04-01T06:38:19.633360 | {
"authors": [
"danielsaidi"
],
"repo": "danielsaidi/KeyboardKit",
"url": "https://github.com/danielsaidi/KeyboardKit/issues/152",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
226113781 | List of Commonalities
I came across this effort while reading https://peerj.com/articles/cs-86/, and my first thought after going over the list was: where to start? There are so many differences that I am asking myself how anyone could be in any doubt about the differences between software and data.
For someone coming from this perspective, and to make the document more balanced and motivated, did you consider adding a "List of Commonalities"?
Hi - this really started when a draft of that paper discussed differences between software and data from the point-of-view of citation, and we wanted to explain when the data citation principles were not sufficient and correct for software citation. Some reviewers felt we were injecting our opinions, rather than facts, so we decided to create this repo and let people discuss this, so it was more of a consensus and not just our opinions. We then could cite this repo in the paper, and satisfy the reviewers, which we did.
Having said that, if you want to propose some changes, that would be fine.
Oh, I see. If this repo is not being actively developed and has already served its purpose, then I agree there is not much value in updating it. Thanks for the clarification!
| gharchive/issue | 2017-05-03T21:02:29 | 2025-04-01T06:38:19.638461 | {
"authors": [
"danielskatz",
"fedorov"
],
"repo": "danielskatz/software-vs-data",
"url": "https://github.com/danielskatz/software-vs-data/issues/46",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2056747692 | Make COLUMN str subclass and get rid of .value calls
Is your feature request related to a problem? Please describe.
We could simplify the SQL queries and column name reference by subclassing COLUMN with str as well.
Describe the solution you'd like
rewrite the COLUMN class to subclass str and test that it behaves as expected
rewrite the SQL queries and everywhere else where .value had to be used for the Enum
Describe alternatives you've considered
Use StrEnum but then we need Python >= 3.11.
Upgraded to Python 3.11 so we can use StrEnum, similar to how COMPONENT_ID is implemented.
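For illustration, a minimal sketch of the StrEnum-based approach (the column names and table here are made up, not the dashboard's real ones):
from enum import StrEnum

class COLUMN(StrEnum):
    TIMESTAMP = "timestamp"
    MIN_LEVEL = "min_level"

# Members behave as plain strings, so SQL snippets no longer need .value
query = f"SELECT {COLUMN.MIN_LEVEL} FROM measurements ORDER BY {COLUMN.TIMESTAMP}"
assert COLUMN.TIMESTAMP == "timestamp"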
| gharchive/issue | 2023-12-26T22:42:26 | 2025-04-01T06:38:19.645015 | {
"authors": [
"danieltsoukup"
],
"repo": "danieltsoukup/noise-dashboard",
"url": "https://github.com/danieltsoukup/noise-dashboard/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1587876363 | Add greetz to index and extra main page.
Because Click to Enter!
lolwhat
| gharchive/pull-request | 2023-02-16T15:21:51 | 2025-04-01T06:38:19.648606 | {
"authors": [
"guest123guest",
"rollerozxa"
],
"repo": "danil275487/danil275487.github.io",
"url": "https://github.com/danil275487/danil275487.github.io/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1402641372 | Updated setup.py and added pyproject.toml
Updated the deprecated setup.py install and created the more standard pyproject.toml. It creates a wheel which is installable and works on Python >=3.6.
I have attached a ZIP file with the generated wheel and tar.gz from python -m build
Let me know if there is a problem,
dist.zip
@danni it would be very handy if you could spare some time for this PR.
| gharchive/pull-request | 2022-10-10T06:06:02 | 2025-04-01T06:38:19.687891 | {
"authors": [
"gregbreen",
"joseavegaa"
],
"repo": "danni/python-pkcs11",
"url": "https://github.com/danni/python-pkcs11/pull/144",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2103971088 | [Question]: failed to solve: process "/bin/sh -c apk --no-cache add curl && npm ci" did not complete successfully: exit code: 146
What is your question?
My system information is as follows:
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.9.2009 (Core)
Release: 7.9.2009
Codename: Core
I get an error when executing the following command:
docker-compose up
error message:
failed to solve: process "/bin/sh -c apk --no-cache add curl && npm ci" did not complete successfully: exit code: 146
More Details
What is the main subject of your question?
No response
Screenshots
No response
Code of Conduct
[X] I agree to follow this project's Code of Conduct
you could try using the pre-built image instead
You can edit the docker-compose.override.yml file (rename it without the .example)
you need something like this:
version: '3.4'
services:
api:
image: ghcr.io/danny-avila/librechat-dev:latest
see also for more information:
https://docs.librechat.ai/install/configuration/docker_override.html
Did as asked, but it doesn't succeed yet...
ok.yeah!
| gharchive/issue | 2024-01-28T05:02:30 | 2025-04-01T06:38:19.694761 | {
"authors": [
"fuegovic",
"longjiansina"
],
"repo": "danny-avila/LibreChat",
"url": "https://github.com/danny-avila/LibreChat/issues/1659",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2501397743 | [Question]: How to decrease token after successfully generate DALL-E image?
What is your question?
How to decrease tokens after a DALL-E image is successfully generated?
More Details
I know the cost for every DALL-E image generation is about 3 cents per successful generation. I want to manually decrease tokens, e.g. 150,000 tokens, for every successful image generation.
Currently the tokens only decrease for prompt and completion. I'm wondering whether I have to manually add some function to decrease the tokens after each successful generation.
Thank you.
What is the main subject of your question?
No response
Screenshots
No response
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Not implemented, will be soon: https://github.com/danny-avila/LibreChat/discussions/1479
| gharchive/issue | 2024-09-02T17:52:12 | 2025-04-01T06:38:19.697990 | {
"authors": [
"danny-avila",
"nayakayp"
],
"repo": "danny-avila/LibreChat",
"url": "https://github.com/danny-avila/LibreChat/issues/3901",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2559406037 | feat: Qdrant Vector Database
Adds support for the Qdrant vector database. Added a section in the readme on setting up Qdrant environment variables as well as running the Docker image. Implemented async methods for Qdrant.
Tested using Bedrock with amazon.titan-embed-text-v2:0 against all 3 currently supported vector databases (pgvector, Qdrant, and Atlas Mongo).
Tested the /ids, /documents, /delete, /embed, /query and /query_multiple endpoints successfully with Qdrant, pgvector, and Atlas Mongo.
Looked into implementing the Qdrant async client.
Would we still want to retain support for a sync Qdrant client? That would mean restructuring the code we have, but keeping two separate vector DB implementations.
Hoping we can get some more traction on this given the benefits of Qdrant over pgvector and other solutions.
| gharchive/pull-request | 2024-10-01T14:24:04 | 2025-04-01T06:38:19.700557 | {
"authors": [
"FinnConnor",
"PylotLight",
"ScarFX"
],
"repo": "danny-avila/rag_api",
"url": "https://github.com/danny-avila/rag_api/pull/81",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
538635337 | nil user_id causes app failure
I have a test to ensure that a nil email in an update will result in a changeset error. I'm using pow_user_id_field_changeset(attrs), which results in:
** (FunctionClauseError) no function clause matching in String.Break.trim_leading/1
The following arguments were given to String.Break.trim_leading/1:
# 1
nil
Attempted function clauses (showing 1 out of 1):
def trim_leading(string) when is_binary(string)
...
stacktrace:
(elixir) lib/elixir/unicode/properties.ex:288: String.Break.trim_leading/1
(elixir) lib/string.ex:1108: String.trim/1
(pow) lib/pow/ecto/schema.ex:307: Pow.Ecto.Schema.normalize_user_id_field_value/1
(ecto) lib/ecto/changeset.ex:1133: Ecto.Changeset.update_change/3
(pow) lib/pow/ecto/schema/changeset.ex:48: Pow.Ecto.Schema.Changeset.user_id_field_changeset/3
Is there a way to use the pow_user_id_field_changeset and receive changeset errors vs. this application error?
My test:
@invalid_attrs %{email: nil, password: "password"}
...
test "create_user/1 with invalid data returns error changeset" do
assert {:error, %Ecto.Changeset{}} = Accounts.create_user(@invalid_attrs)
end
My Context:
def create_user(attrs) do
%User{}
|> User.changeset(attrs)
|> Repo.insert()
end
My User:
def changeset(user_or_changeset, attrs) do
user_or_changeset
|> pow_user_id_field_changeset(attrs)
|> pow_current_password_changeset(attrs)
|> new_password_changeset(attrs, @pow_config)
|> pow_extension_changeset(attrs)
|> Ecto.Changeset.delete_change(:password)
end
Sorry, a bunch of things and the holidays crept up so didn't have time to look at this before now.
I figured out a way to trigger this. The user has to already have the email set in the struct:
User.pow_user_id_field_changeset(%User{email: "test"}, %{email: nil})
I don't know why it blows up in your case though, since you call the changeset with an empty struct. Maybe a default value is set for the struct key? In any case, I'll open a PR to fix this.
#364 hopefully resolves this for you 😄
Perfect, thank you!
| gharchive/issue | 2019-12-16T20:27:09 | 2025-04-01T06:38:19.705526 | {
"authors": [
"danschultzer",
"dfalling"
],
"repo": "danschultzer/pow",
"url": "https://github.com/danschultzer/pow/issues/358",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
273117431 | Fix issue #1
Fix Issue #1
DRY up the codebase by removing the duplicate TypeError code from each operation and bringing it into the _check function. Invoke the _check() function before each operation.
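A rough sketch of the refactor described above (the Calculator class shape and error message are assumed; only _check and one operation are shown):
class Calculator {
  // Single home for the TypeError logic that used to be duplicated in every operation.
  _check(x, y) {
    if (typeof x !== 'number' || typeof y !== 'number') {
      throw new TypeError('Both arguments must be numbers');
    }
  }

  add(x, y) {
    this._check(x, y);
    return x + y;
  }
}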
Yay, a pull request!
After you submit a pull request, one of the following will happen:
:sob: You don’t get a response. :sob:
Even on an active project, it’s possible that your pull request won’t get an immediate response. You should expect some delay as most open source maintainers do so in their free time and can be busy with other tasks.
If you haven’t gotten a response in over a week, it’s fair to politely respond in the same thread, asking someone for a review. If you know the handle of the right person to review your pull request, you can @-mention them to send them a notification. Avoid reaching out to that person privately; remember that public communication is vital to open source projects.
If you make a polite bump and still nobody responds, it’s possible that nobody will respond, ever. It’s not a great feeling, but don’t let that discourage you. It’s happened to everyone! There are many possible reasons why you didn’t get a response, including personal circumstances that may be out of your control. Try to find another project or way to contribute. If anything, this is a good reason not to invest too much time in making a pull request before other community members are engaged and responsive.
:construction: You're asked to make changes to your pull request. :construction:
It’s very common that someone will request changes on your pull request, whether that’s feedback on the scope of your idea, or changes to your code. Often a pull request is just the start of the conversation.
When someone requests changes, be responsive. They’ve taken the time to review your pull request. Opening a PR and walking away is bad form. If you don’t know how to make changes, research the problem, then ask for help if you need it.
If you don’t have time to work on the issue anymore (for example, if the conversation has been going on for months, and your circumstances have changed), let the maintainer know so they’re not expecting a response. Someone else may be happy to take over.
:-1: Your pull request doesn’t get accepted. :-1:
It's possible your pull request may or may not be accepted in the end. If you’re not sure why it wasn’t accepted, it’s perfectly reasonable to ask the maintainer for feedback and clarification. Ultimately, however, you’ll need to respect that this is their decision. Don’t argue or get hostile. You’re always welcome to fork and work on your own version if you disagree!
:tada: Your pull request gets accepted and merged. :tada:
Hooray! You’ve successfully made an open source contribution!
Thank you for the submission, @sambragg! I'll review your code shortly, hang tight.
:tada: You did it! :tada:
You're an open source contributor now!
Whether this was your first pull request, or you’re just looking for new ways to contribute, I hope you’re inspired to take action. Don't forget to say thanks when a maintainer puts effort into helping you, even if a contribution doesn't get accepted.
Remember, open source is made by people like you: one issue, pull request, comment, and +1 at a time.
What's next?
Find your next project:
Up For Grabs - a list of projects with beginner-friendly issues
First Timers Only - a list of bugs that are labelled "first-timers-only"
Awesome-for-beginners - a GitHub repo that amasses projects with good bugs for new contributors, and applies labels to describe them.
YourFirstPR - starter issues on GitHub that can be easily tackled by new contributors.
Issuehub.io - a tool for searching GitHub issues by label and language
Learn from other great community members:
"How to contribute to an open source project on github" by @kentcdodds
"Bring Kindness Back to Open Source" by @shanselman
"Getting into Open Source for the First Time" by @mcdonnelldean
"How to find your first open source bug to fix" by @Shubheksha
"How to Contribute to Open Source" by @Github
"Make your first open source contribution in 5 minutes" by @Roshanjossey
Elevate your Git game:
Try git - an interactive Git tutorial made by GitHub
Atlassian Git Tutorials - various tutorials on using Git
Git Cheat Sheet - PDF made by GitHub
GitHub Flow - YouTube video explaining how to make a pull request on GitHub talk on how to make a pull request
Oh shit, git! - how to get out of common Git mistakes described in plain English
Questions? Comments? Concerns?
I'm always open to feedback. If you had a good time with the exercise, or found some room for improvement, please let me know on twitter or email.
Want to start over? Just delete your fork.
Want to see behind the scenes? Check out the server code.
| gharchive/pull-request | 2017-11-11T04:13:24 | 2025-04-01T06:38:19.733196 | {
"authors": [
"danthareja",
"sambragg"
],
"repo": "danthareja/contribute-to-open-source",
"url": "https://github.com/danthareja/contribute-to-open-source/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
609522223 | #36
Added indexDB
Saving records to the database
@danvitoriano
| gharchive/pull-request | 2020-04-30T02:42:34 | 2025-04-01T06:38:19.744282 | {
"authors": [
"fwfcunha"
],
"repo": "danvitoriano/minhas-financas",
"url": "https://github.com/danvitoriano/minhas-financas/pull/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1926051641 | @peerbit/react -- Error occurred during XX handshake: ciphertext cannot be decrypted using that key
I have a Next.js app in which I'm trying to use PeerProvider from @peerbit/react, like so: https://github.com/Azaeres/etherion-lab/blob/main/src/components/scenes/Experiment4/index.tsx
'use client'
import { PeerProvider } from '@peerbit/react'
export default function Experiment4() {
console.log('Experiment4 render :')
return (
<PeerProvider network="remote">
<></>
</PeerProvider>
)
}
However, I'm getting the error "Failed to resolve relay addresses. Error: Error occurred during XX handshake: ciphertext cannot be decrypted using that key".
Live demo of error can be found here: https://lab.etherion.app/experiment4
When I try to provide my own keypair, Peerbit sees it as invalid. It looks like there are a few different defined Ed25519Keypair classes in the Next.js bundle, and Peerbit's instanceof check fails when there's a class reference mismatch. I've also tried creating a keypair using @peerbit/react's getKeypair() function, but Peerbit also sees it as invalid. Not sure if this is related to the "XX handshake" error.
In another, separate area, I've successfully created my own peer by borrowing bits of @peerbit/react. You can see this in a live demo at: https://lab.etherion.app/experiment3
The peer creation code I got to work can be found here: https://github.com/Azaeres/etherion-lab/blob/main/src/components/scenes/Experiment3/hooks/usePeerbitDatabase.ts
Interestingly, https://lab.etherion.app/experiment4 worked once for me. Then the next time I tried it I got the problem. I wonder if there is some caching going on...
Anyway.
https://github.com/Azaeres/etherion-lab/blob/c7d9864e42c86c26c7a8bbb3c1d824d600dd8662/yarn.lock#L7998C12-L7998C12
Looks like you have a old version of Peerbit lurking around. Can you see if you can bump all Peerbit related dependencies.
Most importantly
https://github.com/Azaeres/etherion-lab/blob/c7d9864e42c86c26c7a8bbb3c1d824d600dd8662/yarn.lock#L1577
this one should not exist in the lock file but only the 13^ one
Super cool that you are creating a multiplayer (?) space shooter game with Peerbit. Could feature it on this repo later if you want.
Okay, thank you for the tip on what to dig into!
@peerbit/react@0.0.4 off of NPM is asking for @libp2p/webrtc@^2.0.11, which is in turn asking for @chainsafe/libp2p-noise@^12.0.0.
However, I see that react-utils in the peerbit-examples is asking for @libp2p/webrtc@^3.1.9. See https://github.com/dao-xyz/peerbit-examples/blob/fe1729f1268c5b29fb61b59611e460d553ed3180/packages/react-utils/package.json#L28
If you're publishing this react-utils folder to NPM, maybe it's time to publish an update?
Super cool that you are creating a multiplayer (?) space shooter game with Peerbit. Could feature it/link it from this repo later if you want.
Yeah, that's the idea! Would love for this to come together. Thanks again for your help.
Well, it is more the @dao-xyz/libp2p-noise@^12.0.0 noise implementation that had a bug which yields your error message.
If you somehow manage to get rid of the peerbit v1 dependency https://github.com/Azaeres/etherion-lab/blob/c7d9864e42c86c26c7a8bbb3c1d824d600dd8662/yarn.lock#L7998C12-L7998C12
and only use peerbit v2, I think your problems will be gone.
I have not actually used @peerbit/react in a separate repo yet. I've been building it alongside all the examples to reach a good API in the end, and I can see that there are a few dependencies there that perhaps need to be removed or updated (however, it should not affect your problem).
These are the listed dependencies of the @peerbit/react I grabbed off of NPM.
"dependencies": {
"@emotion/react": "^11.10.5",
"@emotion/styled": "^11.10.5",
"@libp2p/webrtc": "^2.0.11",
"@mui/icons-material": "^5.10.16",
"@mui/material": "^5.10.13",
"@peerbit/proxy-window": "^1.0.1",
"@types/react": "^18.0.25",
"@types/react-dom": "^18.0.8",
"path-browserify": "^1.0.1",
"peerbit": "^1",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-router-dom": "^6.8.0",
"react-use": "^17.4.0"
},
I think that's where the peerbit@^1 is coming from, which explains why the @dao-xyz/libp2p-noise@^12.0.0 disappears when I uninstall @peerbit/react.
Ah! I see, the CI in GitHub does not automatically release stuff in this repo.
Just released @peerbit/react@0.0.5 now. Try it out!
Nice! No longer getting the XX handshake error!
I've got a bunch of these "'Recieved hello message that did not verify. Header: false, Ping info true, Signatures false'" warnings, though. Does this mean I haven't configured something correctly?
Great! No you don't have to worry about that error.
There is non-optimal logging now.
The warning and error messages should be gone when this issue is fixed.
The 87ecf9778ccaa08bd9f1e8c6104d82c469b35511.peerchecker.com address is not part of the bootstrapping nodes. And that server is down. But this should not affect your stuff running. The error messages you see are basically just the autodialer failing to establish connections
| gharchive/issue | 2023-10-04T11:57:23 | 2025-04-01T06:38:19.758644 | {
"authors": [
"Azaeres",
"marcus-pousette"
],
"repo": "dao-xyz/peerbit-examples",
"url": "https://github.com/dao-xyz/peerbit-examples/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1318981833 | Initializing cyber node without using environment variable
Hi, thank you for the amazing tools!
I want to ask you if it is possible to use CYBER_IP as a parameter when initializing (e.g. cyber.init(cyber_ip='111.222.333.444')), rather than using an environment variable.
export CYBER_IP=127.0.0.1
As pycyber is a wrapper of Apollo Cyber, and Cyber does not currently support this assignment method, I do not intend to support the above interface unless necessary!
Cool! Thanks for answering!
| gharchive/issue | 2022-07-27T03:47:04 | 2025-04-01T06:38:19.767502 | {
"authors": [
"YuqiHuai",
"daohu527"
],
"repo": "daohu527/pycyber",
"url": "https://github.com/daohu527/pycyber/issues/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
990191339 | Update GCP Storage Bucket
Describe the feature
Update GCP Storage Bucket binding:
update create operation: support uploading files in base64 (https://docs.dapr.io/reference/components-reference/supported-bindings/gcpbucket/#upload-a-file doesn't work)
add get operation
add delete operation
add list operation
Release Note
RELEASE NOTE:
UPDATE GCP Storage Bucket binding. Create operation: support upload files in base64, return location and version id
ADD GCP Storage Bucket binding: get operation
ADD GCP Storage Bucket binding: delete operation
ADD GCP Storage Bucket binding: list operation
/assing
/assign
| gharchive/issue | 2021-09-07T17:19:02 | 2025-04-01T06:38:19.805143 | {
"authors": [
"fjvela"
],
"repo": "dapr/components-contrib",
"url": "https://github.com/dapr/components-contrib/issues/1125",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2050457513 | azappconfig SDk upgrade
Description
Please explain the changes you've made
Issue reference
We strive to have all PRs opened based on an issue, where the problem or feature has been discussed prior to implementation.
Please reference the issue this PR will close: #3267
Checklist
Please make sure you've completed the relevant tasks for this PR, out of the following list:
[ ] Code compiles correctly
[ ] Created/updated tests
[ ] Extended the documentation / Created issue in the https://github.com/dapr/docs/ repo: dapr/docs#[issue number]
/ok-to-test
| gharchive/pull-request | 2023-12-20T12:29:18 | 2025-04-01T06:38:19.808215 | {
"authors": [
"ItalyPaleAle",
"pravinpushkar"
],
"repo": "dapr/components-contrib",
"url": "https://github.com/dapr/components-contrib/pull/3283",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
944498624 | Kubernetes (AKS) - Azure Key Vault Secret Store: failed to get oauth token from certificate auth: failed to read the certificate file
Hey Community,
I have problems using my certificate in Kubernetes (AKS) for my Azure Key Vault Secret Store.
It works wonderfully with local hosting. I made the configuration according to the instructions and also added the certificate file to the Kubernetes Store. But unfortunately I get the following error message with Kubernetes when starting the dapr sidecar:
time="2021-07-14T14:31:57.756966579Z" level=warning msg="failed to init state store secretstores.azure.keyvault/v1 named azurekeyvault: failed to get oauth token from certificate auth: failed to read the certificate file (0\x82\nP\x0...a\xd0: invalid argument" app_id=mywebapp instance=mywebapp-5557c78c9b-v86ss scope=dapr.runtime type=log ver=1.2.2
time="2021-07-14T14:31:57.757159681Z" level=fatal msg="process component azurekeyvault error: failed to get oauth token from certificate auth: failed to read the certificate file (0\x82\nP\x02\x\xde: invalid argument" app_id=mywebapp instance=mywebapp-5557c78c9b-v86ss scope=dapr.runtime type=log ver=1.2
I have done all the steps according to this documentation:
https://docs.dapr.io/reference/components-reference/supported-secret-stores/azure-keyvault/
My Kubectl command:
kubectl create secret generic k8s-secret-store --from-file=myapp-certificate=myapp-secrets-myapp-certificate-20210713.pfx
My azurekeyvault.yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: myapp-secrets
- name: spnTenantId
value: "460d88b8-d055-4149-9f03-XXX" #changed to XXX only on this post
- name: spnClientId
value: "dd964473-808e-4a82-a167-XXX" #changed to XXX only on this post
- name: spnCertificateFile
secretKeyRef:
name: k8s-secret-store
key: myapp-certificate
auth:
secretStore: kubernetes
It was my fault. I used spnCertificateFile, and that is only for local (self-hosted) mode.
I changed it to spnCertificate and now it works.
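For reference, the working metadata entry looks roughly like this (same secret name and key as in the component above; spnCertificate reads the certificate contents from the Kubernetes secret, whereas spnCertificateFile expects a local file path in self-hosted mode):
- name: spnCertificate
  secretKeyRef:
    name: k8s-secret-store
    key: myapp-certificate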
| gharchive/issue | 2021-07-14T14:39:12 | 2025-04-01T06:38:19.813517 | {
"authors": [
"GregorBiswanger"
],
"repo": "dapr/dapr",
"url": "https://github.com/dapr/dapr/issues/3432",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1784305554 | [Workflows] Raise event payload is always null in my sample app
cc @cgillum
As per this thread on Discord, I'm not sure why this workflow code example here is not working. For some reason the raise event payload is null when I expect it to be a string of the value "OK".
The DaprWorkflowClient.RaiseEventAsync happens here
I'm sure this is something to do with my code, and not actually a bug, but I can't figure it out.
my local environment
Mac OS - M1
Docker Desktop 4.20.1
Can reproduce this on the following dapr sidecar versions
1.11.1
client libraries
<PackageReference Include="Dapr.Client" Version="1.11.0" />
<PackageReference Include="Dapr.Workflow" Version="1.11.0" />`
Repro steps
pull repo https://github.com/olitomlinson/dapr-workflow-examples
docker compose build
docker compose up
Use insomnia/postman/whatever to start a workflow :
POST http://localhost:5112/start-raise-event-workflow?runId=100
note : The runId will become part of the workflow instance Id. i.e runId : 100 will become a workflow instance Id of 0-100
Raise an event to the workflow (you have 30 seconds) :
POST http://localhost:5112/start-raise-event-workflow-event?runId=100
note : The event payload is hardcoded to "OK"
Check the status of the workflow :
GET http://localhost:3500/v1.0-alpha1/workflows/dapr/0-100
Observe the workflow output is :
"dapr.workflow.output": "\"external event : \""
The expected output should be :
"dapr.workflow.output": "\"external event : \"OK"
/assign
@cgillum I reduced the workflow right down to this, and it still shows the payload as null
@cgillum If I use the HTTP interface to raise the event (not the dotnet SDK) then it comes through just fine. So this would imply it's a problem with the code that is raising the event, not the workflow itself.
@cgillum Ok, I've narrowed it down to the client code.
If I use DaprClient to raise the event, everything works as expected.
However, If I use DaprWorkflowClient this does not.
@cgillum
It looks like the OrderProcessing example is using DaprClient and not DaprWorkflowClient which would explain why the example works, and why my code works now that I've switched over to DaprClient
https://github.com/dapr/dotnet-sdk/blob/8e9db70c0f58050f44970cda003297f561ab570a/examples/Workflow/WorkflowConsoleApp/Program.cs#L167
I think it's safe to say DaprWorkflowClient is where the problem lies.
Thanks. I'm converting the sample to use DaprWorkflowClient instead of DaprClient now and will hopefully be able to reproduce the issue soon.
I've confirmed that this is an issue in the .NET Workflow SDK and not an issue in the runtime. PR with the fix is here: https://github.com/dapr/dotnet-sdk/pull/1119.
| gharchive/issue | 2023-07-01T22:59:37 | 2025-04-01T06:38:19.826303 | {
"authors": [
"cgillum",
"olitomlinson"
],
"repo": "dapr/dapr",
"url": "https://github.com/dapr/dapr/issues/6614",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
563385745 | Threat Model
Current situation
There is a Security Concepts page which lists the current security features of Dapr at a very high level, without concrete recommendations or possible threats, for:
Dapr-to-app communication
Dapr-to-Dapr communication
Network security
Bindings security
State store security
Management security
Component secrets
Challenge
Dapr states to "codifies the best practices for building microservice applications". This also includes security best practices and lessons learned.
On top, Dapr should also help developers to develop microservices in (strictly) restricted enterprise scenarios and/or industries.
Describe the proposal
Creating a threat model for Dapr
Analyzing it for potential security issues
Recommend mitigations for these security issues
Put these together in living security review docs (making Dapr usage possible in strictly restricted environments with documentation obligations) and create a Security Guidelines / Best Practices page for practical usage across use cases.
@yaron2 can you also please upload the original Threat Modeling Tool file - if we need to change/add anything to it, we don't need to start over. Thx!
| gharchive/issue | 2020-02-11T18:11:37 | 2025-04-01T06:38:19.831051 | {
"authors": [
"RicardoNiepel"
],
"repo": "dapr/docs",
"url": "https://github.com/dapr/docs/issues/346",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
776521641 | Delete compiled files from src/
This PR removes all compiled files from the src folder.
Also adds src/**/*.js to .gitignore to avoid this problem in the future.
I think this PR is not doing what you think it does
| gharchive/pull-request | 2020-12-30T15:53:39 | 2025-04-01T06:38:19.845418 | {
"authors": [
"mcmacker4",
"peoplenarthax"
],
"repo": "darkaqua/pathfinding.ts",
"url": "https://github.com/darkaqua/pathfinding.ts/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2404531332 | unicode characters in response body
https://github.com/darrenburns/posting/blob/7b1d0ae86d2990fa89d52b612284af3aaf590b55/src/posting/widgets/response/response_area.py#L80
In some JSON-type API responses, the returned content may contain Unicode characters; they show up as '\uxxx....', making the response body unreadable.
Would you please add an 'ensure_ascii=False' parameter to json.dumps to resolve this kind of issue?
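For example, with the standard library (shown only to illustrate the requested parameter):
import json

payload = {"name": "München"}
print(json.dumps(payload))                      # {"name": "M\u00fcnchen"}
print(json.dumps(payload, ensure_ascii=False))  # {"name": "München"}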
Thanks for the report! Fixed in 1.1.0: https://github.com/darrenburns/posting/releases/tag/1.1.0
| gharchive/issue | 2024-07-12T02:41:58 | 2025-04-01T06:38:19.922084 | {
"authors": [
"breakstring",
"darrenburns"
],
"repo": "darrenburns/posting",
"url": "https://github.com/darrenburns/posting/issues/31",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
265073769 | AppView performance investigation
Here are some remaining elements of AppView that look suspect:
.flatRootNodes:
List<Node> get flatRootNodes {
return _flattenNestedViews(viewData.rootNodesOrViewContainers);
}
List<Node> _flattenNestedViews(List nodes) {
return _flattenNestedViewRenderNodes(nodes, <Node>[]);
}
List<Node> _flattenNestedViewRenderNodes(List nodes, List<Node> renderNodes) {
int nodeCount = nodes.length;
for (var i = 0; i < nodeCount; i++) {
var node = nodes[i];
if (node is ViewContainer) {
ViewContainer appEl = node;
renderNodes.add(appEl.nativeElement);
if (appEl.nestedViews != null) {
for (var k = 0; k < appEl.nestedViews.length; k++) {
_flattenNestedViewRenderNodes(
appEl.nestedViews[k].viewData.rootNodesOrViewContainers,
renderNodes);
}
}
} else {
renderNodes.add(node);
}
}
return renderNodes;
}
@jonahwilliams noticed this was on the critical path when creating standalone embedded views (i.e. to use in a table, or other standalone repetitive component). He found the following API in use to get the "first" root node:
final rootNodes = (ref.hostView as EmbeddedViewRef).rootNodes;
intoDomElement.append(rootNodes.first);
return ref;
He tried using ComponentRef.location, but that seems to have (non?)significant whitespace compared to the above code, which causes tests to fail. The tests might be too strict, or it's possible we need to expose some sort of .firstRootNode as a convenience.
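As a rough illustration only (not an existing API), such a convenience getter on AppView might look something like this, reusing the same ViewContainer handling as _flattenNestedViewRenderNodes without allocating a list:
Node get firstRootNode {
  // The first flattened node of a ViewContainer is its anchor element.
  final first = viewData.rootNodesOrViewContainers.first;
  return first is ViewContainer ? first.nativeElement : first as Node;
}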
I did some more investigation and I haven't noticed any significant whitespace when using componentRef.location. The test in question is most likely too strict.
Doesn't sound like there is any direct next steps here, so going to close for now.
| gharchive/issue | 2017-10-12T20:23:26 | 2025-04-01T06:38:19.949690 | {
"authors": [
"alorenzen",
"jonahwilliams",
"matanlurey"
],
"repo": "dart-lang/angular",
"url": "https://github.com/dart-lang/angular/issues/670",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
319900186 | Update favicon to use new Dart logo
cc @kwalrath @JekCharlsonYu
Let me know which resolutions you'd like in that .ico and I can create one for you. That one was created using:
convert dart/logo/default.png -define icon:auto-resize=128,64,48,32,16 dart/favicon.ico
You might not need that many sizes.
(If there is a general agreement on which sizes are needed, I can change the main favicon.ico file too.)
We should likely have a single one we can apply to all dart web properties. From some very casual browsing, we should be good with just 16x16 and 32x32.
I don't know how efficient convert is in terms of the size of the file it produces (I don't know that it isn't either, but we may want to check on it).
If there is a general agreement on which sizes are needed, I can change the main favicon.ico file too
Sounds great! I'm happy to use whatever we end up using for dartlang.org.
flutter.io uses a single 64x64 PNG. I'm inclined to do the same for dartlang.org. Does that work for you?
👍
Done: you can pick up assets from, e.g., https://github.com/dart-lang/site-www/pull/835/files.
Thanks!
| gharchive/issue | 2018-05-03T12:30:20 | 2025-04-01T06:38:19.963135 | {
"authors": [
"chalin",
"devoncarew"
],
"repo": "dart-lang/dart-pad",
"url": "https://github.com/dart-lang/dart-pad/issues/811",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
768473490 | Removing doc code relating to MDN.
Removes some methods that used to grab MDN links on the fly for documentation.
Updates the code responsible for generating HTML versions of docs provided by the analysis server for display.
Removes related tests.
CC @miquelbeltran, since this is currently blocking dart-pad deploys and therefore his work.
I started this change this afternoon, and wasn't expecting anyone else to be working on it. Since @parlough's PR came in first, we should land that one instead.
| gharchive/pull-request | 2020-12-16T05:48:38 | 2025-04-01T06:38:19.965058 | {
"authors": [
"RedBrogdon"
],
"repo": "dart-lang/dart-pad",
"url": "https://github.com/dart-lang/dart-pad/pull/1705",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2124442412 | Intl is not working for multiple packages project
I created some packages and one Flutter application; these projects use Intl for localizations.
Packages and application:
a package named member_end: a pure Dart package containing the business logic code. It uses intl and intl_translation for localizations and has a custom MemberLocalizations class that defines localization message getters and has a load method.
member_end_flutter: a Flutter package containing common widgets and Flutter-specific implementations. It depends on member_end, uses intl and intl_utils for localizations, and its localizations class is named MemberFlutterLocalizations.
member_end_app: the Flutter application. It depends on member_end and member_end_flutter, uses intl and intl_utils for localizations, and its localizations class is the default S.
These projects support the en and zh locales.
Files:
member_end
member_end
|---lib
|---|---l10n
|---|---|---intl_en.arb
|---|---|---intl_zh.arb
|---|---src
|---|---|---intl
|---|---|---|---messages_all.dart
|---|---|---|---messages_en.dart
|---|---|---|---messages_zh.dart
member_end_flutter
member_end_flutter
|---lib
|---|---l10n
|---|---|---intl_en.arb
|---|---|---intl_zh.arb
|---|---generated
|---|---|---l10n.dart
|---|---|---intl
|---|---|---|---messages_all.dart
|---|---|---|---messages_en.dart
|---|---|---|---messages_zh.dart
member_end_app
member_end_app
|---lib
|---|---l10n
|---|---|---intl_en.arb
|---|---|---intl_zh.arb
|---|---generated
|---|---|---l10n.dart
|---|---|---intl
|---|---|---|---messages_all.dart
|---|---|---|---messages_en.dart
|---|---|---|---messages_zh.dart
Let's say the current locale is zh; the localizations classes are loaded in this order:
MemberLocalizations
MemberFlutterLocalizations
S
The problem is that only the first class, MemberLocalizations, loads its member_end/lib/src/intl/messages_zh.dart; as a result, member_end_flutter and member_end_app cannot get the correct locale messages.
In each localizations class, the static Future<S> load(Locale locale) method calls the generated Future<bool> initializeMessages(String localeName) to initialize and load the messages, and initializeMessages uses CompositeMessageLookup to register the locale's messages. Let's look at the CompositeMessageLookup.addLocale method:
/// If we do not already have a locale for [localeName] then
/// [findLocale] will be called and the result stored as the lookup
/// mechanism for that locale.
@override
void addLocale(String localeName, Function findLocale) {
  if (localeExists(localeName)) return;
  var canonical = Intl.canonicalizedLocale(localeName);
  var newLocale = findLocale(canonical);
  if (newLocale != null) {
    availableMessages[localeName] = newLocale;
    availableMessages[canonical] = newLocale;
    // If there was already a failed lookup for [newLocale], null the cache.
    if (_lastLocale == newLocale) {
      _lastLocale = null;
      _lastLookup = null;
    }
  }
}
When MemberLocalizations loads first, the zh locale does not exist yet, so localeExists(localeName) returns false and the member_end package's zh locale messages are loaded. MemberFlutterLocalizations is loaded next in the order; when it reaches CompositeMessageLookup.addLocale, localeExists(localeName) returns true, because a MessageLookupByLibrary for locale zh was already added by MemberLocalizations from the member_end package, so its own messages are never registered. The same happens to S when it loads.
To solve this issue, I see a few possible approaches:
Hard-code the locale messages in each localizations subclass, like the Flutter framework does. But this is not the intended way to use intl.
Create a subclass of CompositeMessageLookup named CustomCompositeMessageLookup and override addLocale: if the locale already exists, merge the new MessageLookupByLibrary into the existing one, overwriting any message name that already exists with the value provided by the new MessageLookupByLibrary. Then call initializeInternalMessageLookup(() => CustomCompositeMessageLookup()) in the main method to initialize the global MessageLookup messageLookup field. But initializeInternalMessageLookup is not a public API.
As a feature request, maybe you could make intl work across multiple packages.
If there is other better way to solve this, please tell me :)
I have the same problem. Can you share how you implemented solution 2?
@Douglas-Pontes
Solution 2:
import 'package:intl/intl.dart';
import 'package:intl/message_lookup_by_library.dart';
// CompositeMessageLookup and initializeInternalMessageLookup are not public
// API; the import path below is the usual location but may vary by intl version.
import 'package:intl/src/intl_helpers.dart';

class MultiCompositeMessageLookup extends CompositeMessageLookup {
  @override
  void addLocale(String localeName, Function findLocale) {
    final canonical = Intl.canonicalizedLocale(localeName);
    final newLocale = findLocale(canonical);
    if (newLocale != null) {
      final oldLocale = availableMessages[localeName];
      if (oldLocale != null && newLocale != oldLocale) {
        if (newLocale is! MessageLookupByLibrary) {
          throw Exception(
              'Merge locale messages failed, type ${newLocale.runtimeType} is not supported.');
        }
        // Works around https://github.com/dart-lang/i18n/issues/798 when
        // intl_translation and intl_utils are used together.
        if (oldLocale.messages is Map<String, Function> &&
            newLocale.messages is! Map<String, Function>) {
          final newMessages = newLocale.messages
              .map((key, value) => MapEntry(key, value as Function));
          oldLocale.messages.addAll(newMessages);
        } else {
          oldLocale.messages.addAll(newLocale.messages);
        }
        return;
      }
      super.addLocale(localeName, findLocale);
    }
  }
}
Then call initializeInternalMessageLookup(() => MultiCompositeMessageLookup()); before any localizations class's load method runs.
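A minimal sketch of that wiring, assuming the entry point lives in the same file as the MultiCompositeMessageLookup snippet above (so its imports are already available); the bootstrap comments are placeholders, not part of the original report:
void main() {
  // Must run before any localizations class calls load()/initializeMessages().
  initializeInternalMessageLookup(() => MultiCompositeMessageLookup());
  // ...then the usual bootstrap, e.g. runApp(...) in a Flutter app, after which
  // MemberLocalizations, MemberFlutterLocalizations, and S can all load 'zh'.
}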
I have the same problem. Almost all examples I found use a very simple one-package setup. How do people use this in larger projects?
@stwarwas Just call initializeInternalMessageLookup(() => MultiCompositeMessageLookup()); as the first line of your main method.
A simple solution is https://github.com/Luvti/i18n:
dependency_overrides:
  intl: # 0.19.0
    git:
      url: https://github.com/Luvti/i18n
      path: pkgs/intl
| gharchive/issue | 2024-02-08T06:23:38 | 2025-04-01T06:38:19.996320 | {
"authors": [
"Douglas-Pontes",
"Luvti",
"codelovercc",
"stwarwas"
],
"repo": "dart-lang/i18n",
"url": "https://github.com/dart-lang/i18n/issues/797",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2763569975 | [native_assets_builder] Git errors on invoking hooks
https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8727104420872302577/+/u/run_test.dart_for_tool_integration_tests_shard_and_subshard_5_5/stdout
[ ] Running `ANDROID_HOME=/Volumes/Work/s/w/ir/cache/android/sdk TMPDIR=/Volumes/Work/s/w/ir/x/t TEMP=/Volumes/Work/s/w/ir/x/t PATH=/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Library/Xcode/Plug-ins/XCBSpecifications.ideplugin/Contents/Resources:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Library/Xcode/Plug-ins/XCBSpecifications.ideplugin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/appleinternal/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/local/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/libexec:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Platforms/iPhoneOS.platform/usr/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Platforms/iPhoneOS.platform/usr/appleinternal/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Platforms/iPhoneOS.platform/usr/local/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/local/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/usr/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/usr/local/bin:/Volumes/Work/s/w/ir/cache/ruby/bin:/Volumes/Work/s/w/ir/x/w/flutter/bin:/Volumes/Work/s/w/ir/x/w/flutter/bin/cache/dart-sdk/bin:/Volumes/Work/s/w/ir/cache/chrome/chrome:/Volumes/Work/s/w/ir/cache/chrome/drivers:/Volumes/Work/s/w/ir/cache/java/contents/Home/bin:/Volumes/Work/s/w/ir/bbagent_utility_packages:/Volumes/Work/s/w/ir/bbagent_utility_packages/bin:/Volumes/Work/s/w/ir/cipd_bin_packages:/Volumes/Work/s/w/ir/cipd_bin_packages/bin:/Volumes/Work/s/w/ir/cipd_bin_packages/cpython3:/Volumes/Work/s/w/ir/cipd_bin_packages/cpython3/bin:/Volumes/Work/s/w/ir/cache/cipd_client:/Volumes/Work/s/w/ir/cache/cipd_client/bin:/Volumes/Work/s/cipd_cache/bin:/opt/infra-tools:/opt/local/bin:/opt/local/sbin:/usr/local/sbin:/usr/local/git/bin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOME=/Users/chrome-bot TMP=/Volumes/Work/s/w/ir/x/t /Volumes/Work/s/w/ir/x/w/flutter/bin/dart --packages=/Volumes/Work/s/w/ir/x/t/8oo7bh/uses_package_native_assets_cli/.dart_tool/package_config.json /Volumes/Work/s/w/ir/x/t/8oo7bh/uses_package_native_assets_cli/.dart_tool/native_assets_builder/db695ec90c18f778434de1b29c08c462/hook.dill --config=/Volumes/Work/s/w/ir/x/t/8oo7bh/uses_package_native_assets_cli/.dart_tool/native_assets_builder/db695ec90c18f778434de1b29c08c462/config.json`.
[ +1 ms] Persisting file store
[ +3 ms] Done persisting file store
[ +3 ms] "flutter assemble" took 6,081ms.
[ ] Running 2 shutdown hooks
[ ] Shutdown hooks complete
[ ] exiting with code 1
error: [ +39 ms] fatal: Not a valid object name origin/master
[ +1 ms] Building assets for package:uses_package_native_assets_cli failed.
This looks like it is related to tying down the environment variables. It's unclear which environment variable is missing and causes the git issue. Possibly the Flutter SDK is phoning home via its flutter_tools logic to work out which branch it is on?
It's not reproducible locally for me.
Context:
https://github.com/flutter/flutter/pull/160672
[2024-12-30 10:07:51.286688] [STDOUT] stderr: fatal: Not a valid object name origin/master
[2024-12-30 10:07:51.286688] [STDOUT] stderr: Error: Unable to determine engine version...
[2024-12-30 10:07:51.286688] [STDOUT] stderr: Building assets for package:ffi_package failed.
[2024-12-30 10:07:51.286688] [STDOUT] stderr: build.dart returned with exit code: 1.
[2024-12-30 10:07:51.286688] [STDOUT] stderr: To reproduce run:
[2024-12-30 10:07:51.286688] [STDOUT] stderr: C:\b\s\w\ir\x\w\rc\tmprpq8zzff\flutter sdk\bin\dart --packages=C:\b\s\w\ir\x\t\flutter_module_test.ed577fda\hello\.dart_tool\package_config.json C:\b\s\w\ir\x\t\flutter_module_test.ed577fda\hello\.dart_tool\native_assets_builder\755ebf6d30040ac7ce9fb4d3c5afe976\hook.dill --config=C:\b\s\w\ir\x\t\flutter_module_test.ed577fda\hello\.dart_tool\native_assets_builder\755ebf6d30040ac7ce9fb4d3c5afe976\config.json
[2024-12-30 10:07:51.286688] [STDOUT] stderr: stderr:
[2024-12-30 10:07:51.286688] [STDOUT] stderr: fatal: Not a valid object name origin/master
[2024-12-30 10:07:51.286688] [STDOUT] stderr: Error: Unable to determine engine version...
https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8727104420706748945/+/u/run_build_android_host_app_with_module_aar/stdout
It does look like a phone-home issue.
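If that's the case, one quick local check might be the following (this assumes, without confirmation from the log, that the failing call resolves the origin/master ref in the Flutter checkout used by the builder):
# in the Flutter SDK checkout used by the builder
git show-ref origin/master || echo "origin/master ref is missing"
# restoring the remote-tracking ref would be something like:
git fetch origin master:refs/remotes/origin/master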
| gharchive/issue | 2024-12-30T18:42:18 | 2025-04-01T06:38:20.067433 | {
"authors": [
"dcharkes"
],
"repo": "dart-lang/native",
"url": "https://github.com/dart-lang/native/issues/1847",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |