id | text | source | created | added | metadata
---|---|---|---|---|---
218675809
|
Database restore error
Hello,
I installed it with apt-get on Ubuntu 16.04
development restore-from staging gives me the following error:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 27247 100 27247 0 0 3109 0 0:00:08 0:00:08 --:--:-- 7992
sh: 1: Syntax error: "&&" unexpected
sh: 1: cannot open %=: No such file
Is there any way I can debug it or check the logs to find out what exactly the error is?
I use rbenv. I tried it with Ruby 2.1.1p76, 2.2.2p95
@tannakartikey do you have your database defined with an ERB file and a dynamic value?
@geoffharcourt sorry, I did not get what you mean about the ERB file.
I have used Figaro for ENV variables. Variables are used in the config/database.yml. Please find it below:
default: &default
  adapter: postgresql
  pool: 5
  timeout: 5000
production:
  <<: *default
  database: db/production.sqlite3
development:
  adapter: postgresql
  host: 127.0.0.1
  database: <%= ENV["CURRENTS_DB_NAME"] %>
  username: <%= ENV["CURRENTS_PSQL_USERNAME"] %>
  password: <%= ENV["CURRENTS_PSQL_PASSWORD"] %>
test:
  adapter: postgresql
  host: 127.0.0.1
  database: <%= ENV["CURRENTS_DB_NAME"] %>
  username: <%= ENV["CURRENTS_PSQL_USERNAME"] %>
  password: <%= ENV["CURRENTS_PSQL_PASSWORD"] %>
Hi @tannakartikey we don't currently support dynamic database names via environment variables (your database config is using ERB-style variables).
Ok. Thank you.
Can I contribute that feature? What are the ups/downs for having the feature?
You are very welcome to make a PR!
The requirement for this to be accepted is that whatever solution bundles the ERB parsing (you'll need that to parse the database.yml file, since it's not valid YAML until the ERB is processed) and the environment-variable loading must still work when the gem gets repackaged for distribution with Homebrew and APT. We experimented with ERB support previously but had to remove it when we ran into packaging issues.
Closing this out, but open to PRs that are distributable via package managers.
@geoffharcourt we're interested in this as well. Do you happen to remember any details about what went wrong with the package managers so we have an idea of where to start if we come around to offering a PR for this?
@bbugh Traveling Ruby is no longer supported and has a version of Bundler that causes permissions issues when installed through Homebrew. If you want to try tackling a PR that includes ERB, I would enthusiastically support this.
Any solution we implement would have to work through a Homebrew install (and likely apt), so keep those in mind if you decide to take this on.
Got it, thanks for the fast reply. We made a local patch that has resolved the issue for now, but I imagine it's not portable. We'll see if this works or if we can submit a PR for this project.
--- backup.rb	2018-02-08 09:23:48.000000000 -0600
+++ backup.rb	2018-02-08 09:23:55.000000000 -0600
@@ -1,4 +1,6 @@
 require "etc"
+require 'erb'
+require 'rails'
 
 module Parity
   class Backup
@@ -104,7 +106,7 @@
     end
 
     def database_yaml_file
-      IO.read(DATABASE_YML_RELATIVE_PATH)
+      ERB.new(IO.read(DATABASE_YML_RELATIVE_PATH)).result(binding)
     end
   end
 end
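For reference, a minimal standalone sketch of the ERB-then-YAML idea the patch relies on (generic Ruby, not parity's code; the file path and keys are illustrative):

require "erb"
require "yaml"

# Render the ERB tags first, then parse the result as YAML so values like
# <%= ENV["CURRENTS_DB_NAME"] %> are resolved from the environment.
raw = IO.read("config/database.yml")
# aliases: true lets the &default anchor resolve (Psych >= 3.1)
config = YAML.safe_load(ERB.new(raw).result, aliases: true)
puts config.fetch("development").fetch("database")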
@bbugh possibly relevant: https://github.com/thoughtbot/homebrew-formulae/blob/master/Formula/parity.rb
I went through a couple iterations of this before giving up. I'd be pretty excited to see this make it back into the utility.
|
gharchive/issue
| 2017-04-01T07:01:10 |
2025-04-01T04:36:05.016766
|
{
"authors": [
"bbugh",
"geoffharcourt",
"tannakartikey"
],
"repo": "thoughtbot/parity",
"url": "https://github.com/thoughtbot/parity/issues/125",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
610311789
|
tag cloud
implement tag cloud
done
|
gharchive/issue
| 2020-04-30T18:58:27 |
2025-04-01T04:36:05.047124
|
{
"authors": [
"Hamdy"
],
"repo": "threefoldfoundation/tfwebserver_projects_people",
"url": "https://github.com/threefoldfoundation/tfwebserver_projects_people/issues/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
614665439
|
3sdk should have a requirements check
To avoid ugly errors like "command not found: git" or "command not found: docker", we need to validate that all minimal requirements are met before running any commands, and point to the docs on how to install them.
[ ] docker
[ ] ssh (the tools)
[ ] git
Original error:
It should show a meaningful message like "docker not found on the system, please install docker first" with a link to the installation requirements, maybe https://sdk.threefold.io/#/3sdk_install?id=requirements or something similar.
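A minimal sketch of such a pre-flight check, assuming a Python entry point (illustrative only, not the 3sdk code):

import shutil

# Check required binaries up front and point the user to the install docs
# instead of failing later with "command not found".
REQUIRED = ["docker", "ssh", "git"]
DOCS = "https://sdk.threefold.io/#/3sdk_install?id=requirements"

missing = [tool for tool in REQUIRED if shutil.which(tool) is None]
if missing:
    raise SystemExit(
        f"{', '.join(missing)} not found on the system, please install them first: {DOCS}"
    )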
verified
|
gharchive/issue
| 2020-05-08T10:48:11 |
2025-04-01T04:36:05.049600
|
{
"authors": [
"Dina-Abd-Elrahman",
"xmonader"
],
"repo": "threefoldtech/jumpscaleX_core",
"url": "https://github.com/threefoldtech/jumpscaleX_core/issues/817",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2030392659
|
Can't read 'publicIP' in Umbrel deployment
Description
Devnet, 2d54818.
After deploying the Umbrel solution, I get this error: "can't read public". Nothing is displayed on the site; the error only shows in the console.
And for some reason, the action button icon disappeared.
When any error happens, the action button shouldn't disappear, and the site should notify the user of the error.
Logs/Alerts
Threefold Dashboard: Twin.webm
fixed on #1715
Verified,
Devnet
46eb8ac
TC2132 - Deploy Peertube
I tried different deployments, refreshed, and clicked the buttons; no error was shown and the icon didn't disappear.
Verified:
Devnet
ec2ed3d
|
gharchive/issue
| 2023-12-07T10:13:22 |
2025-04-01T04:36:05.053725
|
{
"authors": [
"0oM4R",
"A-Harby",
"ramezsaeed"
],
"repo": "threefoldtech/tfgrid-sdk-ts",
"url": "https://github.com/threefoldtech/tfgrid-sdk-ts/issues/1611",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2157044985
|
A lot of broken manual links
Description
A clear description of the bug is:
Most of the links used in the dashboard point to a broken link, like https://manual.grid.tf/playground/wallet_connector.html, https://manual.grid.tf/getstarted/TF_Connect/TF_Connect.html, and https://manual.grid.tf/threefold_token/buy_sell_tft/gettft.html.
Must include any relevant identifiers, like:
Network: Qanet
Version: 2.3.0-alpha12
Logs/Alerts
Screenshots or screen records.
Verified, Devnet a244172.
The links now point to the correct manual pages.
|
gharchive/issue
| 2024-02-27T16:26:36 |
2025-04-01T04:36:05.058752
|
{
"authors": [
"A-Harby"
],
"repo": "threefoldtech/tfgrid-sdk-ts",
"url": "https://github.com/threefoldtech/tfgrid-sdk-ts/issues/2275",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2251054095
|
Unlock grace period contracts
Update
Now the user can select contracts that are on a rented node, and the cost of the related rent contract is added to the required funds.
Fixed the free balance issue; we now use the usable balance (free - locked).
Added loading and error handling; the dialog used to open before the details were loaded.
Update the lock details on opening the dialog, to avoid mismatched lock details.
Description
Add a button to manually unlock grace period contracts, instead of waiting for the next billing cycle.
Changes
add the required methods to call billContractForBlock
support unlocking grace period contracts with 3 flows
single contract
Add an unlock button to the contract lock details dialog
if the contract is on a rented node, there is no unlock button; the user should unlock the rent contract instead, and an alert is shown
multiple contracts
selected contracts in the table can be unlocked with the unlock button in each table
unlock all contracts
note
we can't guarantee that the contracts will be moved to the created state, as the chain method billContractForBlock does not return an error if the balance is not enough; it only updates the locked amount
Also, we update the contracts table 30 seconds after calling the method
Related Issues
#2489
#2670
#2687
Documentation PR
For UI changes, please provide the documentation PR on info_grid.
Checklist
[ ] Tests included
[x] Build pass
[ ] Documentation
[x] Code format and docstrings
[x] Screenshots/Video attached (needed for UI changes)
@amiraabouhadid
when i try to deploy a vm i get the following error
just deployed a vm, please make sure to build first
I think it might look better if we unselected the selected contracts after they're unlocked, as they currently stay selected, as follows
What are your thoughts on using the term 'unlock'? Would using something like 'resume' or 'restore' be more friendly since the related workload is paused?
What are your thoughts on using the term 'unlock'? Would using something like 'resume' or 'restore' be more friendly since the related workload is paused?
resume sounds better
Can we add a note that in some cases the actual deducted amount when contracts are resumed could be less than estimated here?
Can we add a note that in some cases the actual deducted amount when contracts are resumed could be less than estimated here?
I prefer to not add it as it's not harmful. Cause sometimes when we add many notes, the user becomes too lazy to read them.
|
gharchive/pull-request
| 2024-04-18T16:00:16 |
2025-04-01T04:36:05.071637
|
{
"authors": [
"0oM4R",
"AhmedHanafy725",
"amiraabouhadid",
"maayarosama",
"sameh-farouk"
],
"repo": "threefoldtech/tfgrid-sdk-ts",
"url": "https://github.com/threefoldtech/tfgrid-sdk-ts/pull/2582",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1125206087
|
re-wording
[ ] in the image: change text to say "Welcome to the Freemium Owncloud Deployer"
[ ] change "Please enter your email to get creds and domain for you instance on. If not provided email TF connect will be used" --->
"<user's email> will be used to send you your deployment information. If you prefer to receive emails on a different address, fill in below"
[ ] "Agree at Terms & Conditions" ---> "I have read and I accept the terms and conditions"
[ ] "Thanks for submission, Request will be processed soon." ----> "Your request will be processed soon. You'll receive your deployment information at "
[ ] "user x has already submitted request before" ----> "user x has already submitted a request. please be patient while we prepare your deployment"
[ ] email text:
"
Dear x,
Your Owncloud instance will be ready in few minutes, please use these credentials to access it.
Domain:....
Admin username: ...
Admin password: ..."
[ ] expiration email text:
"Dear x,
Your deployment has expired"
|
gharchive/issue
| 2022-02-06T13:38:36 |
2025-04-01T04:36:05.077460
|
{
"authors": [
"rkhamis",
"waleedhammam"
],
"repo": "threefoldtech/www_owncloud",
"url": "https://github.com/threefoldtech/www_owncloud/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1609697459
|
Add Dockerfile
This makes it easier to run the app cross-platform, without worrying about local ruby versions etc:
git clone git@github.com:threeplanetssoftware/apple_cloud_notes_parser.git
cd apple_cloud_notes_parser
docker build -t apple_cloud_notes_parser .
docker run --rm \
-v ~/Library/Group\ Containers/group.com.apple.notes:/data:ro \
-v $(pwd)/my_notes_backup:/app/output \
apple_cloud_notes_parser \
--mac /data --one-output-folder
ls my_notes_backup/
Also, if you'd like, you can push the image to GitHub:
docker build -t ghcr.io/threeplanetssoftware/apple_cloud_notes_parser:latest .
docker push ghcr.io/threeplanetssoftware/apple_cloud_notes_parser:latest
So then folks don't need to even clone/build, just have Docker installed:
docker run --rm \
-v ~/Library/Group\ Containers/group.com.apple.notes:/data:ro \
-v $(pwd)/my_notes_backup:/app/output \
ghcr.io/threeplanetssoftware/apple_cloud_notes_parser \
--mac /data --one-output-folder
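For context, a minimal sketch of the kind of Dockerfile this would build (an illustration under assumptions, not the PR's actual file; the Ruby version and entry-point script name are guesses):

FROM ruby:3.2
WORKDIR /app
# Install the pinned gems first so the layer is cached between builds
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
# Forward CLI arguments such as --mac /data --one-output-folder to the script
ENTRYPOINT ["bundle", "exec", "ruby", "notes_cloud_ripper.rb"]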
Thanks again for the fantastic app!
Thank you for suggesting this change! The idea of wrapping this up into something that's "easier" for a user to pick up and run like Docker has been on my mind, but I've never put time into playing with it. I'll take a look at your Docker file and test it on a few of my systems to see how well it handles those use cases.
@threeplanetssoftware any chance of merging this in, and/or making the image available on the public repo? Would make my backup cronjob even nicer, being able to pull from GitHub. 😇
Thank you for your patience and supremely polite poke! My apologies for not getting to this yet, I've had some other priorities sucking up my time. Because I'm generally more of a user of Docker, not an author, I've been wanting to make sure I understand how this affects each of the three OS' I claim to support (Windows, MacOS, and *nix). I will try to carve out some time in the near term.
Yeah no worries, I know how taxing OSS maintenance can be. 😅
Thanks again for such a lovely utility. 🙌
Ok, I've had a go at this on Linux and MacOS so far. I like the idea of having a Docker image, but am not ready to merge this PR. I like it because, as you point out, this can solve a lot of the dependency issues I've hit before by controlling the specific versions of Ruby and the gems involved. That will certainly be useful and I thank you for bringing up the idea.
I'm not quite ready to merge yet for a few reasons. One is that what seems straightforward to some might not be to others. My own experience with Docker in testing this says I'll need to update the Readme to very clearly explain what is going on and how someone can use it. The commands needed to run this, even if the "binary" itself is pretty static, are more complex than using ruby to run it. I will likely shy away from telling folks how to build it and focus instead on the assumption that an image exists on GitHub.
In addition, the build instructions assume bundle install has already been run on the repo, and the build fails if Gemfile.lock isn't present. I've been removing that line in my tests (and bumping the version to Ruby 3.2 for future-proofing), but to control the gem versions for releases I would want to keep that in. Before I push this, I'll want to know it works on Linux, MacOS, and Windows starting from a bare repo checkout and following the step-by-step in the Readme (and, fair point, if the Readme leads by saying "make sure you bundle install" then that covers that issue).
Finally, my test results have not yet been stellar. My results on MacOS today are just throwing permission errors trying to open NoteStore.sqlite. Granted, my testbed Mac is abysmally old and slow, but I'm not sure what is going on and why it is failing. On Linux things worked nicely, but I want to make sure people understand why files are spat out appearing to be owned by root in their home directory (and I assume the same will be true on MacOS once that works).
Steps to merge
To summarize what I'd like to see before merging (and you don't necessarily have to do these, I'll pursue getting Docker working either way):
I'd like to test on Windows.
I'd like the Readme updated to have a section on Docker explaining everything relevant to run an image hosted on Github.
I'd like to include default shell scripts for MacOS, Windows, and Linux that would essentially be the same as running rake in the folder so users don't have to guess how to write volume includes. In other words, have a docker_run_file.sh that is essentially mounting a NoteStore.sqlite in the same folder, a docker_run_mac.sh that is mounting the Notes group container, etc.
I need my MacOS tests to work
I hope this doesn't seem too onerous a review; I'm sure it works well for you and your use case as it is right now! I just want to make sure things are clearly rolled out and don't leave myself some IOUs for tickets to be completed later.
Thanks!
Just as a quick update, I've been messing with this as I have the time. I am making progress; I'm just trying to make sure it is tested well enough that I'm confident it will work in a few environments before it is published. I've made a good bit of changes, so I will likely commit my own branch and give you credit for the idea.
Could you please help me troubleshoot something? I continue to run into errors attempting to access Notes directly as you did above. Would you please run this and tell me if there are any extended attributes?
xattr -l ~/Library/Group\ Containers/group.com.apple.notes/NoteStore.sqlite
Thank you!
Doing more digging, it seems like the permissions in ~/Library/Group Containers might be causing issues. Can you tell me if you changed permissions for the notes folder? Is it still 700? Did you run docker as root, or your normal user?
Ok, I made my own feature branch for this, did a bunch of testing, and pushed it into master. As of v0.12.2, ghcr.io now has a Docker image for this package (ghcr.io/threeplanetssoftware/apple_cloud_notes_parser:latest)! I'm not completely happy with it yet, will continue to tweak, but didn't want to let perfection be the enemy of the good.
Please let me know if this works for you.
|
gharchive/pull-request
| 2023-03-04T09:39:38 |
2025-04-01T04:36:05.092391
|
{
"authors": [
"jareware",
"threeplanetssoftware"
],
"repo": "threeplanetssoftware/apple_cloud_notes_parser",
"url": "https://github.com/threeplanetssoftware/apple_cloud_notes_parser/pull/67",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2662975823
|
[next] useFBO rewrite
This is a rewrite of useFBO that adds functionality, improves readability of the implementation, and adds to the docs to be more comprehensive.
useFBO's signature is now
export type UseFBOOptions = RenderTargetOptions & {
  /**
   * if set, the scene depth will be rendered into buffer.depthTexture.
   */
  depth?: { width?: number; height?: number } | DepthTexture | boolean
  /**
   * if set, the render target size will be set to the corresponding width and height and not use or follow the size of the canvas
   */
  dimensions?: { width?: number; height?: number }
}

export function useFBO({
  depth = false,
  dimensions,
  ...targetOptions
}: UseFBOOptions = {}): WebGLRenderTarget {
depth is now allowed to be either a boolean, a depth texture, or a dimensions object, and useFBO handles each case according to what's described in the .mdx file.
The implementation of useFBO has been modified to be more readable and traceable. The old implementation was a very close 1:1 with drei's useFBO, and I think deviating from there yields a more interesting and useful hook.
The corresponding mdx file has been modified to account for all the various uses of useFBO. It now contains many code snippets, as examples are usually more explicit than docs.
The example has been rewritten to be simpler while retaining its original intent and goal.
I modified the example to better show off how you might use a render target. It's not much more complex than what was there already, but I think it ties in a little better with the updated introduction paragraph.
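As a quick illustration, a hedged usage sketch based on the options above (the import path is an assumption, not taken from this PR):

import { useFBO } from '@threlte/extras'

// Fixed 512x512 target that also renders scene depth into target.depthTexture
const target = useFBO({
  dimensions: { width: 512, height: 512 },
  depth: true,
})
// target is a THREE.WebGLRenderTarget; target.texture can be fed to a material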
|
gharchive/pull-request
| 2024-11-15T19:47:28 |
2025-04-01T04:36:05.095886
|
{
"authors": [
"joshwashywash"
],
"repo": "threlte/threlte",
"url": "https://github.com/threlte/threlte/pull/1207",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2095569678
|
🌐 Add Turkish translation for docs/tr/docs/learn/index.md
🌐 Add Turkish translation for docs/tr/docs/learn/index.md
Discussion: #9193
@alperiox, @OzgunCaglarArslan @SametEmin, @mertssmnoglu, can you please leave a review?
📝 Docs preview for commit 7f597fa13decf128540d4e930179f725f59eff67 at: https://6cc6f238.fastapitiangolo.pages.dev
📝 Docs preview for commit 560cdd0c4cb5e68187f6bcd10b3bae847bbdcc49 at: https://72ea4a33.fastapitiangolo.pages.dev
|
gharchive/pull-request
| 2024-01-23T08:53:44 |
2025-04-01T04:36:05.219808
|
{
"authors": [
"hasansezertasan",
"tiangolo"
],
"repo": "tiangolo/fastapi",
"url": "https://github.com/tiangolo/fastapi/pull/11014",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
817071676
|
Some bugs in the code.
Hi,
Thanks for the awesome work.
I was trying to reproduce the results. I cloned the code on my local system and got a few "import" errors and some minor bugs in the code. I've listed some of them below. Can you please take a look?
In dihcl.py file:
line 33: import models.cifar as models.
line 448: k is unknown. (I think it should be k0, right?)
Thanks for pointing out the bugs! I just checked in a new version, which can successfully run on my machine now.
|
gharchive/issue
| 2021-02-26T06:07:50 |
2025-04-01T04:36:05.243439
|
{
"authors": [
"ns-ask",
"tianyizhou"
],
"repo": "tianyizhou/DIHCL",
"url": "https://github.com/tianyizhou/DIHCL/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2699284549
|
🛑 TICKETS.CO.ID is down
In a845223, TICKETS.CO.ID (https://tickets.co.id) was down:
HTTP code: 525
Response time: 577 ms
Resolved: TICKETS.CO.ID is back up in 58863cf after 11 minutes.
|
gharchive/issue
| 2024-11-27T17:31:29 |
2025-04-01T04:36:05.263975
|
{
"authors": [
"belovolk"
],
"repo": "tickets/upptime",
"url": "https://github.com/tickets/upptime/issues/1210",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
501150803
|
Add terminal target support for Vim 8
From https://github.com/tidalcycles/vim-tidal/pull/26#issuecomment-460049250
Perhaps it would also be beneficial to investigate the Vim8 terminal feature (See :help terminal).
I already tried using it in my own fork of this plugin (https://github.com/flupe/vim-tidal), it works quite well and gets rid of all the cumbersome tmux code and bash scripts altogether.
However, maybe you are not willing to ditch the tmux part of this plugin, as it is possible some users do prefer to use tmux for this kind of thing – I know I don't.
Hey, I have not kept my fork up to date but I'm glad this issue is being investigated. I'm willing to work on that if any help is needed. Just let me know.
Hey! I will probably look into in the following weeks, but any help is appreciated, thanks!
It would be nice if we could extend these functions to include support for Vim 8. If Vim has a different API than NeoVim for the terminal, we'll probably have to add some conditionals e.g.:
if has("nvim")
  " nvim code
else
  " vim code
endif
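For example, a hedged sketch of what the two branches might call (termopen() and term_start() are the built-in Neovim and Vim 8 terminal functions; the command string is just a placeholder, not the plugin's actual boot command):

if has("nvim")
  " Neovim: termopen() returns a job id; send text to it later with chansend()
  let s:repl_job = termopen("ghci")
else
  " Vim 8: term_start() returns a terminal buffer number; send text with term_sendkeys()
  let s:repl_buf = term_start("ghci", {})
endif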
I was stuck on a train for a couple of hours so I decided to look at it. It still works fine on Vim 8; I haven't tested on Neovim or tmux, so I have yet to see if I broke anything.
|
gharchive/issue
| 2019-10-01T21:16:55 |
2025-04-01T04:36:05.270760
|
{
"authors": [
"flupe",
"munshkr"
],
"repo": "tidalcycles/vim-tidal",
"url": "https://github.com/tidalcycles/vim-tidal/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
345221688
|
[WIP] User-space drivers
See attached Trello card for details.
[Devices] Apple High Sierra Kernel
@pazaan I was waiting to see if my upstream PR to node-usb was going to be accepted, but I'm getting the feeling it's not going to happen anytime soon. This means that the TI USB 3410 user-space driver will not work correctly on macOS, so I've commented it out as a future TODO. This isn't a major issue as that driver is only used by Abbott BG meters, which we're not currently supporting, so we'll just have to wait some more before we can start supporting them.
|
gharchive/pull-request
| 2018-07-27T13:14:43 |
2025-04-01T04:36:05.275554
|
{
"authors": [
"gniezen"
],
"repo": "tidepool-org/chrome-uploader",
"url": "https://github.com/tidepool-org/chrome-uploader/pull/681",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1075551317
|
Support general SIMD instruction
Making ndt_omp independent of the SIMD architecture:
use the compiler flag -march=native instead of hard-coding SSE flags.
use aligned allocation for std::vector of Eigen::Matrix (see the sketch below).
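As a minimal illustration of the aligned-allocation point (a generic Eigen pattern, not the PR's code):

#include <vector>
#include <Eigen/Core>
#include <Eigen/StdVector>

// Fixed-size vectorizable Eigen types stored in a std::vector need
// Eigen::aligned_allocator so their SIMD loads and stores stay aligned.
std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>> poses;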
I'll close and re-open since I changed the target branch.
@kenji-miyake Please double-check this PR :pray:
Also, why has the CI failed? I think we should fix it.
The CI targets are noetic and melodic. CI scripts for ROS2 are ready for review in #5.
Let's merge this after #5 is merged and the CI is checked in this PR.
@KeisukeShima @harihitode Could you send PRs equivalent with #5 and #8 to https://github.com/koide3/ndt_omp?
@KeisukeShima Could you rebase this, please? :pray:
This repository does not allow force push, so I created a new PR. #9
hmm, actually it was possible.
@kenji-miyake It is good, although there is a question of whether the same results can be obtained on real machines on vehicles.
|
gharchive/pull-request
| 2021-12-09T13:05:24 |
2025-04-01T04:36:05.352145
|
{
"authors": [
"KeisukeShima",
"harihitode",
"kenji-miyake",
"yukkysaito"
],
"repo": "tier4/ndt_omp",
"url": "https://github.com/tier4/ndt_omp/pull/8",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2336455122
|
New operations import_{accounts,transfers}
New operations import_{accounts,transfers}
Allows the ingestion of historical accounts and transfers with their original timestamp into TigerBeetle.
The new operations import_accounts and import_transfers behave exactly like the existing create_accounts and create_transfers, except for some validations regarding the timestamp field.
For that reason, the same events and results are shared between the create_* and import_* operations.
We considered creating new result enums, such as ImportAccountResult and ImportTransferResult, but the benefits of handling unified error codes with shared enums outweighed that option.
From the docs:
https://github.com/tigerbeetle/tigerbeetle/blob/f629c7b93b0a4dfe75a482451967b4b5b696b286/docs/reference/requests/import_transfers.md?plain=1#L10-L45
This PR
Best reviewed per commit.
State machine logic and unit tests.
REPL.
Clients (including README and examples).
Documentation.
TODO
VOPR needs changes to support synchronizing clients, otherwise we will hit import_timestamp_must_not_regress every time.
Related: #1893 and #1968
Closed in favor of #2171
|
gharchive/pull-request
| 2024-06-05T17:28:42 |
2025-04-01T04:36:05.371841
|
{
"authors": [
"batiati"
],
"repo": "tigerbeetle/tigerbeetle",
"url": "https://github.com/tigerbeetle/tigerbeetle/pull/1989",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
970926522
|
As a user I want a helpful error page
technical specifications & user story to be provided by @mike-audi
As a user I want a helpful error page when/if the app experiences a "hard crash" or "silent failure".
Use the ErrorWidgetBuilder as part of the MaterialApp widget to provide the page as its own slice.
The page should be tightly coupled with Sentry providing stacktrace and other helpful ANONYMOUS debugging
parameters.
The page can have different text/states depending on the type of error, like no internet vs random crash
It would be helpful if the user could add stuff like a screenshot or text when it happens
The page could include some sort of support/faq integration
include in #355
This should be implemented in
https://github.com/tiki/app/blob/bfa52ee2e592ccf426eb4816ef2024a2549f4250/lib/main.dart#L23-L27
Refer to https://api.flutter.dev/flutter/foundation/FlutterError/onError.html
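A minimal sketch of the wiring described above, using the generic Flutter hooks (not the tiki app's actual code; the Sentry call is only indicated in a comment):

import 'package:flutter/material.dart';

void main() {
  // Replace the default error screen with a friendlier message.
  ErrorWidget.builder = (FlutterErrorDetails details) => const Center(
        child: Text('Something went wrong', textDirection: TextDirection.ltr),
      );
  // Forward framework errors (exception + stack trace) to a reporter such as Sentry.
  FlutterError.onError = (FlutterErrorDetails details) {
    FlutterError.presentError(details);
    // e.g. Sentry.captureException(details.exception, stackTrace: details.stack);
  };
  runApp(MaterialApp(home: Scaffold(body: Center(child: Text('Hello')))));
}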
|
gharchive/issue
| 2021-08-14T14:47:37 |
2025-04-01T04:36:05.377999
|
{
"authors": [
"annastoilova",
"mike-audi",
"ricardobrg"
],
"repo": "tiki/app",
"url": "https://github.com/tiki/app/issues/252",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
558482770
|
Encryption-at-rest
The task is to provide transparent encryption of TiKV data stored on disk. Including:
Support encrypting all TiKV data using AES
Support automatic and manual key rotation.
Support encryption for backup.
This can be built on top of the EncryptedEnv provided by RocksDB, but we also want to support other future storage engines.
Tracking in JIRA instead.
Development finished in 4.0 GA.
|
gharchive/issue
| 2020-02-01T05:05:06 |
2025-04-01T04:36:05.409563
|
{
"authors": [
"yiwu-arbug"
],
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/issues/6505",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
610806206
|
TiKV should recover in 30s after tikv reboot
Bug Report
When a TiKV node reboots, it takes 1 min to 1 min 30 s to recover.
What version of TiKV are you using?
master
What operating system and CPU are you using?
Linux/x86_64
Steps to reproduce
While running TPC-C, reboot a TiKV server. It takes about 1 min to 1 min 30 s to recover.
What did you expect?
What happened?
can you provide more detailed information?
Any update @gengliqi ?
PR tidb#17541 has solved part of this problem. It lets TiDB find out earlier that the region's leader in TiKV has changed, and retry.
But we still need to speed up the Raft election, which may take a long time to decide on a leader when a TiKV instance crashes. It is not a bug in TiKV, but we need to optimize it.
|
gharchive/issue
| 2020-05-01T15:22:37 |
2025-04-01T04:36:05.413553
|
{
"authors": [
"Little-Wallace",
"siddontang",
"zhangjinpeng1987",
"zhouqiang-cl"
],
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/issues/7726",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
968857139
|
*: Fix disk full feature leads to batch performance back-off
What problem does this PR solve?
Issue Number: https://github.com/tikv/tikv/issues/10716
Solve: the disk-full option setting leads to raft batch performance back-off.
Problem Summary:
When a raftcmd carries the special disk-full flag, the batch system is drained, which keeps the batch system from working in batch mode for a longer time.
What is changed and how it works?
Set the special flag back on the delegate, and mark the whole batch with the special flag if the current raftcmd carries it.
Related changes
PR to update pingcap/docs/pingcap/docs-cn:
PR to update pingcap/tidb-ansible:
Need to cherry-pick to the release branch
Check List
Performance regression.
Tests
Side effects
Performance regression
Release note
None
target branch
|
gharchive/pull-request
| 2021-08-12T13:43:58 |
2025-04-01T04:36:05.417560
|
{
"authors": [
"tier-cap"
],
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/pull/10717",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1040869421
|
Fix unstable test about slow node detection
Signed-off-by: 5kbpers tangminghua@pingcap.com
What problem does this PR solve?
Issue Number: close #11197
Problem Summary:
If there is not any write, write workers will skip replying latency inspecting, which makes test_latency_inspect unstable.
What is changed and how it works?
Proposal: xxx
What's Changed:
Move latency inspecting of the store loop back to the end
Add store_write duration for inspecting write duration.
Check List
Tests
Unit test
Integration test
Release note
None
/merge
/merge
/merge
/merge
/merge
cases::test_merge::test_node_merge_cascade_merge_with_apply_yield is very unstable. https://ci.pingcap.net/blue/organizations/jenkins/tikv_ghpr_test/detail/tikv_ghpr_test/9963/pipeline
/merge
/merge
/run-test
/run-all-tests
/run-all-tests
/run-all-tests
/merge
/merge
/merge
/merge
|
gharchive/pull-request
| 2021-11-01T07:46:10 |
2025-04-01T04:36:05.425260
|
{
"authors": [
"5kbpers",
"gengliqi"
],
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/pull/11198",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1045499677
|
raftstore: Fix a flaky test
Signed-off-by: v01dstar yang.zhang@pingcap.com
What problem does this PR solve?
Issue Number: #11262
Problem Summary:
We found that this test sometimes panics, saying "region update for unsafe recover should only occur in leaderless region". This happens when it tries to update the region's meta, which is only allowed when the region does not have a leader (during unsafe recover). In the past, this test waited for 2 election timeout cycles before doing the update, which I believe was an unreliable way to ensure the leader had lost its leadership.
What is changed and how it works?
Instead of blindly waiting for 2 election cycles, check leadership periodically in a loop with a timeout to ensure the region has lost its leader.
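In pattern form, the change amounts to a poll-with-timeout loop; a generic Rust sketch (names are illustrative, not the test's actual code):

use std::thread::sleep;
use std::time::{Duration, Instant};

// Keep checking a condition until it holds or the deadline passes.
fn wait_until(mut cond: impl FnMut() -> bool, timeout: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    while Instant::now() < deadline {
        if cond() {
            return true;
        }
        sleep(Duration::from_millis(100));
    }
    false
}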
Related changes
N/A
Check List
N/A
Release note
NONE
/cc @Connor1996
/merge
|
gharchive/pull-request
| 2021-11-05T06:53:14 |
2025-04-01T04:36:05.429067
|
{
"authors": [
"Connor1996",
"v01dstar"
],
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/pull/11261",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
602960527
|
[DNM] explore tracing performance
Signed-off-by: zhongzc zhongzc_arch@outlook.com
What problem does this PR solve?
Investigate how to minimize performance loss to introduce tracing.
See: #5714
/bench +tpch +tpcc +sysbench
/bench +tpch +tpcc +sysbench
/bench +tpch +tpcc +sysbench
/bench +tpch +tpcc +sysbench
/bench +tpch +tpcc +sysbench
/bench +tpch +tpcc +sysbench
/bench +tpch +tpcc +sysbench
/bench +tpch +tpcc +sysbench
/bench +tpch +tpcc +sysbench
/bench +tpch +tpcc +sysbench
/bench +tpch +tpcc +sysbench
Closed by #7781
|
gharchive/pull-request
| 2020-04-20T05:48:52 |
2025-04-01T04:36:05.433441
|
{
"authors": [
"zhongzc"
],
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/pull/7554",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
768527112
|
backup: support split big region into small backup files
What problem does this PR solve?
Issue Number: close #9144
Problem Summary: BR will read all the data of a region and fill it into an SST writer, but this happens in memory. If there is a huge region, TiKV may crash with OOM because it keeps all the data of this region in memory.
What is changed and how it works?
What's Changed: Record the size of the written txn entries. When it reaches region_max_size, we save the data cached in RocksDB to an SST file and then switch to the next file.
Related changes
Need to cherry-pick to the release branch
Check List
Tests
Unit test
Integration test
Release note
Fix the problem that TiKV OOMs when we back up a huge region.
/run-all-tests
/run-all-tests
/run-integration-copr-test
/cc kennytm,Little-Wallace,3pointer,overvenus
/lgtm
/run-all-tests
/run-integration-tests
/run-all-tests
/run-all-tests
/run-all-tests
PTAL @overvenus (i've already given LGT before so not gonna send the command again 🙃)
PTAL @overvenus (i've already given LGT before so not gonna send the command again 🙃)
Does raw kv support splitting huge regions?
No. I think we can do this in another PR.
Does raw kv support splitting huge regions?
No. I think we can do this in another PR.
/run-all-tests
/run-all-tests
LGTM
LGTM
/LGTM
/LGTM
/merge
/merge
/merge
/merge
|
gharchive/pull-request
| 2020-12-16T07:11:28 |
2025-04-01T04:36:05.442895
|
{
"authors": [
"kennytm",
"lichunzhu",
"lilinghai",
"overvenus"
],
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/pull/9283",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
772029473
|
raftstore: renew leader lease in advance when handle read request
Signed-off-by: linning linningde25@gmail.com
ref #11579
What problem does this PR solve?
Problem Summary:
Currently, the leader lease is renewed when there are successful writes or read index requests. But if there are only read requests on the leader, the lease is not renewed until it expires, after which the leader sends a read index request to renew it, which causes latency jitter. https://github.com/tikv/tikv/pull/6427 tried to renew the lease when the raft basic tick is triggered, but that is not compatible with the hibernate region feature, which skips the raft basic tick.
Blocked by #6427
What is changed and how it works?
What's Changed:
When the local reader handles a read request, it also checks whether the region's leader lease is going to expire in the near future; if so, it sends a message to raftstore and tries to renew the lease.
TODO: add test
Check List
Tests
Unit test
Integration test
Side effects
Performance regression
Consumes more CPU
Release note
raftstore: renew leader lease in advance when handle read request
Any stats or graph to show the improvement?
/release
/release
/release
What do I need to do if I want this PR to be merged? @NingLin-P
What do I need to do if I want this PR to be merged? @NingLin-P
@slow-zhang We need a benchmark (a read-only workload) to verify the improvement; the key metrics to verify are the same as https://github.com/tikv/tikv/pull/9292#issuecomment-755105740
/release
What do I need to do if I want this PR to be merged? @NingLin-P
@slow-zhang We need a benchmark (a read-only workload) to verify the improvement; the key metrics to verify are the same as #9292 (comment)
Can I help here? How do I set up that kind of benchmark? Is there any relevant documentation?
What do I need to do if I want this PR to be merged? @NingLin-P
@slow-zhang We need a benchmark (a read-only workload) to verify the improvement; the key metrics to verify are the same as #9292 (comment)
Can I help here? How do I set up that kind of benchmark? Is there any relevant documentation?
BTW, one Jenkins job failed.
Sure! You can use go-ycsb and workloadc, and check out this blog for a guide. Also, you can set up a cluster quickly with tiup playground (don't forget to replace the TiKV binary though).
Sure! You can use go-ycsb and workloadc, and check out this blog for a guide. Also, you can set up a cluster quickly with tiup playground (don't forget to replace the TiKV binary though).
I cannot download the /release tar.gz, so can I just run make in that branch to create a binary?
Can I just use my Mac (with SSD) for that test, or do I need to run it on Ubuntu?
I cannot download the /release tar.gz, so can I just run make in that branch to create a binary?
You can run make dist_release to get a release build.
Can I just use my Mac (with SSD) for that test, or do I need to run it on Ubuntu?
Mac will be okay.
Is this work still advancing? @NingLin-P
@NingLin-P may I ask why the work is paused? this should improve the read performance.
/release
/run-all-tests
@BusyJay @NingLin-P Just realized we need a config item for disabling the local reader's renewing of the lease in advance in some tests. So I added renew_leader_lease_advance_duration to set how far ahead of expiry the lease is checked.
sysbench read_only with 256 concurrency and 15k regions shows that there is a little improvement with this patch. There are still some regions that will go hibernate in my test.
Any updates?
/merge
|
gharchive/pull-request
| 2020-12-21T09:57:08 |
2025-04-01T04:36:05.459801
|
{
"authors": [
"5kbpers",
"BusyJay",
"NingLin-P",
"gotoxu",
"hykych",
"slow-zhang"
],
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/pull/9307",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
151990282
|
Export UNDEFINED_REFERENCE from glimmer-runtime
Exporting the UNDEFINED_REFERENCE so that it can be used inside of Ember.
Can you export NULL_REFERENCE as well? If it's not already defined in the references package, you can move it.
Also, I just realized this: in https://github.com/tildeio/glimmer/blob/master/packages/glimmer-runtime/lib/references.ts#L8, it probably makes more sense to return an UNDEFINED_REFERENCE now that we have one.
@chancancode Sure, will send another PR :)
PR for NULL_REFERENCE: https://github.com/tildeio/glimmer/pull/157
|
gharchive/pull-request
| 2016-04-29T23:56:12 |
2025-04-01T04:36:05.466045
|
{
"authors": [
"chancancode",
"zackthehuman"
],
"repo": "tildeio/glimmer",
"url": "https://github.com/tildeio/glimmer/pull/156",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
203797204
|
syntax: Add type declarations to traverse() function
/cc @mmun @rwjblue
this appears to be failing due to unrelated TSLint issues that are fixed by #404
|
gharchive/pull-request
| 2017-01-28T09:22:13 |
2025-04-01T04:36:05.467224
|
{
"authors": [
"Turbo87"
],
"repo": "tildeio/glimmer",
"url": "https://github.com/tildeio/glimmer/pull/403",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1815848740
|
An unsupported vehicleType: fuel was provided
Hello Till,
I noticed another small thing:
2023-07-21T15:47:34+0200:WARNING:addressable:/vehicles/... /trips/shortTerm/vehicleType: An unsupported vehicleType: fuel was provided, known values are [electric, hybrid, gasoline, petrol, diesel, cng, lpg, invalid, unknown car type] please report this as a bug
Thanks, I'll add it.
|
gharchive/issue
| 2023-07-21T13:50:54 |
2025-04-01T04:36:05.471886
|
{
"authors": [
"MyGitIT",
"tillsteinbach"
],
"repo": "tillsteinbach/VWsFriend",
"url": "https://github.com/tillsteinbach/VWsFriend/issues/525",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2154226893
|
No syntax highlighting, and errors like "Connection got disposed"
I have a Tiltfile in my repo, and vscode-tilt installed. The extension itself is activated, because I can see the link to the Tilt UI in the status bar. However, the Tiltfile is not syntax highlighted. The language mode is listed as Plain Text and there is no option to set a file association of Tilt/Tiltfile/starlark.
Possibly related: this shows under output:
[Info - 8:36:56 AM] Tiltfile LSP started
[Info - 8:36:56 AM] Restarting server
[Info - 8:36:56 AM] Found Tilt version 0.33.10
[Info - 8:36:56 AM] Starting child process
Starlark LSP server initialized
[Error - 8:36:57 AM] Server initialization failed.
Error: Connection got disposed.
at Object.dispose (/Users/me/.vscode/extensions/tilt-dev.tiltfile-0.0.3/out/extension.js:29:4041)
at Object.dispose (/Users/me/.vscode/extensions/tilt-dev.tiltfile-0.0.3/out/extension.js:34:9518)
at /Users/me/.vscode/extensions/tilt-dev.tiltfile-0.0.3/out/extension.js:36:6952
[Error - 8:36:57 AM] Starting client failed
Error: Connection got disposed.
at Object.dispose (/Users/me/.vscode/extensions/tilt-dev.tiltfile-0.0.3/out/extension.js:29:4041)
at Object.dispose (/Users/me/.vscode/extensions/tilt-dev.tiltfile-0.0.3/out/extension.js:34:9518)
at /Users/me/.vscode/extensions/tilt-dev.tiltfile-0.0.3/out/extension.js:36:6952
[Info - 8:36:57 AM] Tiltfile LSP started
[Info - 8:36:57 AM] Found Tilt version 0.33.10
[Info - 8:36:57 AM] Starting child process
Starlark LSP server initialized
After installing and reinstalling twice, and reloading the extension twice, I finally got it to work. Might be a transient error. Feel free to close if the above error is somewhat expected :)
|
gharchive/issue
| 2024-02-26T13:47:01 |
2025-04-01T04:36:05.474665
|
{
"authors": [
"majelbstoat"
],
"repo": "tilt-dev/vscode-tilt",
"url": "https://github.com/tilt-dev/vscode-tilt/issues/38",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
96469622
|
[SSRS.ReportingService2010.ReportExecutionService] doesn't contain a method named 'GetItemType'
It's not working for me. Are there some prerequisites?
Deploy-SSRSProject.ps1
[13:47:21][Step 1/1] : Method invocation failed because
[13:47:21][Step 1/1] [SSRS.ReportingService2010.ReportExecutionService] doesn't contain a method
[13:47:21][Step 1/1] named 'GetItemType'.
[13:47:21][Step 1/1] At line:1 char:1
[13:47:21][Step 1/1] + .\ReportingDeployment\Deploy-SSRSProject.ps1 -Verbose -Path
[13:47:21][Step 1/1] '.\Reporting\Reporti ...
[13:47:21][Step 1/1] + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[13:47:21][Step 1/1] ~~~
[13:47:21][Step 1/1] + CategoryInfo : InvalidOperation: (:) [Deploy-SSRSProject.ps1],
[13:47:21][Step 1/1] RuntimeException
[13:47:21][Step 1/1] + FullyQualifiedErrorId : MethodNotFound,Deploy-SSRSProject.ps1
[13:47:21][Step 1/1]
I was using the wrong endpoint.
|
gharchive/issue
| 2015-07-22T03:51:41 |
2025-04-01T04:36:05.485052
|
{
"authors": [
"worldspawn"
],
"repo": "timabell/ssrs-powershell-deploy",
"url": "https://github.com/timabell/ssrs-powershell-deploy/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
215190435
|
Cleanup Markdown formatting
The missing space in Markdown headings (#heading vs # heading) confused GitHub - see attached screenshots
Before
After
Hi, can I ask why this was closed instead of merged?
@Hurtak sorry must have hit the wrong button on my phone yesterday.
|
gharchive/pull-request
| 2017-03-18T12:22:11 |
2025-04-01T04:36:05.487382
|
{
"authors": [
"Hurtak",
"timarney"
],
"repo": "timarney/react-app-rewired",
"url": "https://github.com/timarney/react-app-rewired/pull/24",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
598021836
|
Comment input box is shown twice
One include
https://github.com/timber/starter-theme/blob/cc8e530a9cd5ece12467dd1a95e1bf47e4b618a8/templates/single.twig#L34
Above that comments.twig is included
https://github.com/timber/starter-theme/blob/cc8e530a9cd5ece12467dd1a95e1bf47e4b618a8/templates/single.twig#L25
Which renders another comment box
https://github.com/timber/starter-theme/blob/41c74ff9d68ff36fce4e1561e9cd98ddda1c6286/templates/comment.twig#L8
Maybe I did not understand this correctly, and comments themselves can be commented on. Though I don't see variables passed down to the form, so how to distinguish them... I think it's still an incomplete example. Better to leave out things if they don't fully work.
Each form has a hidden input (comment_parent) with a comment.ID or 0. That way we can distinguish them.
Each form has a hidden input (comment_parent) with a comment.ID or 0. That way we can distinguish them.
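For illustration, a hedged sketch of the kind of hidden input meant here (not the starter theme's actual markup; the default filter simply falls back to 0 for top-level comments):

<input type="hidden" name="comment_parent" value="{{ comment.ID|default(0) }}">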
Thanks for your explanation
Thanks for your explanation
|
gharchive/issue
| 2020-04-10T18:14:37 |
2025-04-01T04:36:05.491712
|
{
"authors": [
"flip111",
"marciojc"
],
"repo": "timber/starter-theme",
"url": "https://github.com/timber/starter-theme/issues/101",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1268663534
|
Implement I-Regexp
The IETF JSONPath WG is building a minimal-ish regular expression specification called I-Regexp designed to be interoperable across a wide variety of regex implementations. See https://www.ietf.org/id/draft-ietf-jsonpath-iregexp-00.html
A full or even subset implementation of I-Regexp would be a valuable addition to Quamina.
As of 2024/09, I'm starting to take a run at this. If we get a substantial chunk of I-Regexp working, some of the other pattern upgrades can be re-implemented as regexes. I'm going to do this incrementally, pulling in features one at a time, but we have to have a full parser, because we should reject regexes that use unimplemented features.
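To illustrate the "parse everything, reject unimplemented features" approach (a generic Go sketch, not Quamina's code; the set of rejected operators here is arbitrary, the real subset is defined by the I-Regexp draft):

package main

import (
	"fmt"
	"regexp/syntax"
)

// checkSupported walks the parsed tree and refuses operators outside the
// supported subset.
func checkSupported(re *syntax.Regexp) error {
	switch re.Op {
	case syntax.OpWordBoundary, syntax.OpNoWordBoundary, syntax.OpBeginLine, syntax.OpEndLine:
		return fmt.Errorf("unsupported regexp feature: %v", re.Op)
	}
	for _, sub := range re.Sub {
		if err := checkSupported(sub); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	re, err := syntax.Parse(`a+\bb`, syntax.Perl)
	if err != nil {
		panic(err)
	}
	fmt.Println(checkSupported(re)) // reports the unsupported \b
}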
|
gharchive/issue
| 2022-06-12T18:24:31 |
2025-04-01T04:36:05.493648
|
{
"authors": [
"timbray"
],
"repo": "timbray/quamina",
"url": "https://github.com/timbray/quamina/issues/66",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
427009563
|
Exact time for leave - DateTime support
Hello,
This is awesome project, I am missing only feature --> Full DateTime support for leave request.
Is there any chance that DateTime will be implemented? We would like to select exact DateTime from and DateTime To.
Thank you!
This is very important for me as well to use the hosted version. I'd like to be able to request smaller absences, instead of half-days.
Thank you guys for suggestion, I am considering adding the feature. Trying to figure out the most optimal way with smallest impact on existing codebase.
Thank you. That's really the only thing holding us up from using it. I don't really even need date-time, just finer control than 0.5.
Sam Weber
|
gharchive/issue
| 2019-03-29T14:19:18 |
2025-04-01T04:36:05.508684
|
{
"authors": [
"PepekT",
"ssweber",
"vpp"
],
"repo": "timeoff-management/application",
"url": "https://github.com/timeoff-management/application/issues/342",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2016171931
|
debug version unit test fails in gtest_paged_asof_row_refs
Describe what's wrong
How to reproduce
Build with the debug version, which will implicitly enable assertions.
I have only tested this on x86_64 Ubuntu. The logic for greater and greaterOrEqual may be incorrect.
Error message and/or stacktrace
short ver:
$ lldb -- ./src/unit_tests_dbms --gtest_filter=PagedAsofRowRefs.InsertAndFind
(lldb) target create "./src/unit_tests_dbms"
Current executable set to '/proton/build_debug/src/unit_tests_dbms' (x86_64).
(lldb) settings set -- target.run-args "--gtest_filter=PagedAsofRowRefs.InsertAndFind"
(lldb) r
Process 199468 launched: '/proton/build_debug/src/unit_tests_dbms' (x86_64)
Note: Google Test filter = PagedAsofRowRefs.InsertAndFind
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from PagedAsofRowRefs
[ RUN ] PagedAsofRowRefs.InsertAndFind
unit_tests_dbms: /proton/src/Interpreters/Streaming/RefCountDataBlockPages.h:85: void DB::Streaming::RefCountDataBlockPages<DB::LightChunk>::erasePage(RefCountDataBlockPage<DataBlock> *) [DataBlock = DB::LightChunk]: Assertion `page == current_page' failed.
Process 199468 stopped
* thread #1, name = 'unit_tests_dbms', stop reason = hit program assert
frame #4: 0x00000000470dc8f2 unit_tests_dbms`DB::Streaming::RefCountDataBlockPages<DB::LightChunk>::erasePage(this=0x00007ffff5876c70, page=0x0000604000285650) at RefCountDataBlockPages.h:85:13
82 if (unlikely(block_pages.size() == 1))
83 {
84 assert(page == block_pages.front().get());
-> 85 assert(page == current_page);
86
87 /// If this is the last page, keep it around
88 page->clear();
(lldb) p page
(DB::Streaming::RefCountDataBlockPage<DB::LightChunk> *) $0 = 0x0000604000285650
(lldb) p current_page
(DB::Streaming::RefCountDataBlockPage<DB::LightChunk> *) $1 = 0x0000604000287490
(lldb) bt
* thread #1, name = 'unit_tests_dbms', stop reason = hit program assert
frame #0: 0x00007ffff7dae00b libc.so.6`raise + 203
frame #1: 0x00007ffff7d8d859 libc.so.6`abort + 299
frame #2: 0x00007ffff7d8d729 libc.so.6`___lldb_unnamed_symbol2384 + 15
frame #3: 0x00007ffff7d9efd6 libc.so.6`__assert_fail + 70
* frame #4: 0x00000000470dc8f2 unit_tests_dbms`DB::Streaming::RefCountDataBlockPages<DB::LightChunk>::erasePage(this=0x00007ffff5876c70, page=0x0000604000285650) at RefCountDataBlockPages.h:85:13
frame #5: 0x00000000470dc167 unit_tests_dbms`DB::Streaming::RefCountDataBlockPage<DB::LightChunk>::deref(this=0x0000604000285650, page_offset=3) at RefCountDataBlockPage.cpp:39:20
frame #6: 0x000000001e6957e6 unit_tests_dbms`DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>::~PageBasedRowRefWithRefCount(this=0x0000621001c30aa8) at PageBasedRowRefWithRefCount.h:74:19
frame #7: 0x000000001e69a8f9 unit_tests_dbms`DB::Streaming::PagedAsofRowRefs<DB::LightChunk>::Entry<unsigned long>::~Entry(this=0x0000621001c30aa0) at PagedAsofRowRefs.h:19:12
frame #8: 0x000000001e69a8d2 unit_tests_dbms`void std::__1::__destroy_at[abi:v15000]<DB::Streaming::PagedAsofRowRefs<DB::LightChunk>::Entry<unsigned long>, 0>(__loc=0x0000621001c30aa0) at construct_at.h:63:13
frame #9: 0x000000001e69a875 unit_tests_dbms`void std::__1::destroy_at[abi:v15000]<DB::Streaming::PagedAsofRowRefs<DB::LightChunk>::Entry<unsigned long>, 0>(__loc=0x0000621001c30aa0) at construct_at.h:88:5
frame #10: 0x000000001e69a3b9 unit_tests_dbms`void std::__1::allocator_traits<std::__1::allocator<DB::Streaming::PagedAsofRowRefs<DB::LightChunk>::Entry<unsigned long>>>::destroy[abi:v15000]<DB::Streaming::PagedAsofRowRefs<DB::LightChunk>::Entry<unsigned long>, void, void>((null)=0x00006040002856b8, __p=0x0000621001c30aa0) at allocator_traits.h:317:9
frame #11: 0x000000001e6999fa unit_tests_dbms`std::__1::__deque_base<DB::Streaming::PagedAsofRowRefs<DB::LightChunk>::Entry<unsigned long>, std::__1::allocator<DB::Streaming::PagedAsofRowRefs<DB::LightChunk>::Entry<unsigned long>>>::clear(this=0x0000604000285690) at deque:1261:9
frame #12: 0x000000001e699679 unit_tests_dbms`std::__1::__deque_base<DB::Streaming::PagedAsofRowRefs<DB::LightChunk>::Entry<unsigned long>, std::__1::allocator<DB::Streaming::PagedAsofRowRefs<DB::LightChunk>::Entry<unsigned long>>>::~__deque_base(this=0x0000604000285690) at deque:1198:5
frame #13: 0x000000001e699655 unit_tests_dbms`std::__1::deque<DB::Streaming::PagedAsofRowRefs<DB::LightChunk>::Entry<unsigned long>, std::__1::allocator<DB::Streaming::PagedAsofRowRefs<DB::LightChunk>::Entry<unsigned long>>>::~deque(this=0x0000604000285690 size=0) at deque:1280:28
frame #14: 0x000000001e699635 unit_tests_dbms`DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>, DB::Streaming::PagedAsofRowRefs<DB::LightChunk>::Entry<unsigned long>>::~SortedLookupContainer(this=0x0000604000285690) at SortedLookupContainer.h:12:7
frame #15: 0x000000001e6995b3 unit_tests_dbms`std::__1::default_delete<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #16: 0x000000001e699520 unit_tests_dbms`std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #17: 0x000000001e699479 unit_tests_dbms`std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #18: 0x000000001e699455 unit_tests_dbms`std::__1::__variant_detail::__alt<3ul, std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #19: 0x000000001e699439 unit_tests_dbms`auto std::__1::__variant_detail::__dtor<std::__1::__variant_detail::__traits<std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #20: 0x000000001e6993dd unit_tests_dbms`decltype(std::declval<auto>()(std::declval<std::__1::__variant_detail::__alt<3ul, std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #21: 0x000000001e693eed unit_tests_dbms`decltype(auto) std::__1::__variant_detail::__visitation::__base::__dispatcher<3ul>::__dispatch[abi:v15000]<std::__1::__variant_detail::__dtor<std::__1::__variant_detail::__traits<std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #22: 0x000000001e693c5b unit_tests_dbms`decltype(auto) std::__1::__variant_detail::__visitation::__base::__visit_alt[abi:v15000]<std::__1::__variant_detail::__dtor<std::__1::__variant_detail::__traits<std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #23: 0x000000001e6939ca unit_tests_dbms`std::__1::__variant_detail::__dtor<std::__1::__variant_detail::__traits<std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #24: 0x000000001e6938b9 unit_tests_dbms`std::__1::__variant_detail::__dtor<std::__1::__variant_detail::__traits<std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #25: 0x000000001e693895 unit_tests_dbms`std::__1::__variant_detail::__ctor<std::__1::__variant_detail::__traits<std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #26: 0x000000001e693875 unit_tests_dbms`std::__1::__variant_detail::__move_constructor<std::__1::__variant_detail::__traits<std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #27: 0x000000001e693855 unit_tests_dbms`std::__1::__variant_detail::__copy_constructor<std::__1::__variant_detail::__traits<std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #28: 0x000000001e693835 unit_tests_dbms`std::__1::__variant_detail::__assignment<std::__1::__variant_detail::__traits<std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #29: 0x000000001e693815 unit_tests_dbms`std::__1::__variant_detail::__move_assignment<std::__1::__variant_detail::__traits<std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #30: 0x000000001e6937f5 unit_tests_dbms`std::__1::__variant_detail::__copy_assignment<std::__1::__variant_detail::__traits<std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #31: 0x000000001e6937d5 unit_tests_dbms`std::__1::__variant_detail::__impl<std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #32: 0x000000001e6937b5 unit_tests_dbms`std::__1::variant<std::__1::unique_ptr<DB::Streaming::SortedLookupContainer<DB::Streaming::PageBasedRowRefWithRefCount<DB::LightChunk>,
frame #33: 0x000000001e685bd5 unit_tests_dbms`DB::Streaming::PagedAsofRowRefs<DB::LightChunk>::~PagedAsofRowRefs(this=0x00007ffff5876cf0) at PagedAsofRowRefs.h:13:7
frame #34: 0x000000001e67d0d3 unit_tests_dbms`(anonymous namespace)::commonTest(keys=1024, page_size=16, total_pages=8, keep_versions=1000, inequality=Greater) at gtest_paged_asof_row_refs.cpp:289:1
frame #35: 0x000000001e67c27c unit_tests_dbms`PagedAsofRowRefs_InsertAndFind_Test::TestBody(this=0x000060200005d7f0) at gtest_paged_asof_row_refs.cpp:305:9
frame #36: 0x000000004a2437c3 unit_tests_dbms`void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(object=0x000060200005d7f0, method=21 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00, location="the test body") at gtest.cc:2621:10
frame #37: 0x000000004a1e0511 unit_tests_dbms`void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(object=0x000060200005d7f0, method=21 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00, location="the test body") at gtest.cc:2657:14
frame #38: 0x000000004a19dae0 unit_tests_dbms`testing::Test::Run(this=0x000060200005d7f0) at gtest.cc:2696:5
frame #39: 0x000000004a19f6e1 unit_tests_dbms`testing::TestInfo::Run(this=0x0000611000028f40) at gtest.cc:2845:11
frame #40: 0x000000004a1a0a8c unit_tests_dbms`testing::TestSuite::Run(this=0x0000611000029080) at gtest.cc:3004:30
frame #41: 0x000000004a1c1c23 unit_tests_dbms`testing::internal::UnitTestImpl::RunAllTests(this=0x0000616000000380) at gtest.cc:5889:44
frame #42: 0x000000004a244d03 unit_tests_dbms`bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(object=0x0000616000000380, method=00 11 1c 4a 00 00 00 00 00 00 00 00 00 00 00 00, location="auxiliary test code (environments or event listeners)") at gtest.cc:2621:10
frame #43: 0x000000004a1e5d2a unit_tests_dbms`bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(object=0x0000616000000380, method=00 11 1c 4a 00 00 00 00 00 00 00 00 00 00 00 00, location="auxiliary test code (environments or event listeners)") at gtest.cc:2657:14
frame #44: 0x000000004a1c0fae unit_tests_dbms`testing::UnitTest::Run(this=0x000000006ec35480) at gtest.cc:5454:10
frame #45: 0x000000001e110811 unit_tests_dbms`RUN_ALL_TESTS() at gtest.h:2310:73
frame #46: 0x000000001e0ed98b unit_tests_dbms`main(argc=1, argv=0x00007fffffffded8) at gtest_coordination.cpp:1745:12
frame #47: 0x00007ffff7d8f083 libc.so.6`__libc_start_main + 243
frame #48: 0x000000001d67f02e unit_tests_dbms`_start + 46
(lldb) q
full log:
https://harvest-vegetarian-745.notion.site/full-stack-trace-log-32d5b4719e4b4123861db340998daaca?pvs=4
Additional context
For greater / greaterEqual the entries are sorted in reverse order, so the dtor also runs over them in reverse order.
|
gharchive/issue
| 2023-11-29T09:32:02 |
2025-04-01T04:36:05.516763
|
{
"authors": [
"chenziliang",
"yokofly"
],
"repo": "timeplus-io/proton",
"url": "https://github.com/timeplus-io/proton/issues/358",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2477651206
|
Support (EXPLAIN SELECT ...) as a subquery (ported from clickhouse #40630)
…0630) (#5857)
Support (EXPLAIN SELECT ...) as a subquery (ported from clickhouse)
minor adjustments and changes
change stateless test file to pass the tests
adjust format and add extra info
small adjustments of format
PR checklist:
Did you run ClangFormat ? Yes
Did you separate headers to a different section in existing community code base ? Yes
Did you surround proton: starts/ends for new code in existing community code base ? Yes
Please write user-readable short description of the changes:
Porting
[x] clickhouse/clickhouse#40630
to close
[x] timeplus-io/proton#819
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
After signing the CLA, you need to push a new commit or add a tag to re-trigger the check.
|
gharchive/pull-request
| 2024-08-21T10:17:49 |
2025-04-01T04:36:05.523025
|
{
"authors": [
"CLAassistant",
"amamiya-len",
"yokofly"
],
"repo": "timeplus-io/proton",
"url": "https://github.com/timeplus-io/proton/pull/826",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
430056934
|
Installation bug on iOS with CocoaPods
Hi,
I have this error: 'React/RCTBridge.h' file not found
It works on Android. I tried CocoaPods both with and without react-native link, just in case, and neither works.
Any help?
Thanks.
👋 Have you gone into the ios directory and run pod install? If so, can you send over the contents of your Podfile and Podfile.lock?
Hi Matt,
Yes, I did run pod install. I then created a new, separate project and did the same thing, with the same results.
Here is my Podfile.lock
PODS:
- React (0.11.0):
- React/Core (= 0.11.0)
- react-native-location (2.2.0):
- React
- React/Core (0.11.0)
DEPENDENCIES:
- react-native-location (from `../node_modules/react-native-location/react-native-location.podspec`)
SPEC REPOS:
https://github.com/cocoapods/specs.git:
- React
EXTERNAL SOURCES:
react-native-location:
:path: "../node_modules/react-native-location/react-native-location.podspec"
SPEC CHECKSUMS:
React: ab1a2e21deb34965c38328d5ec40cc7d12c6050a
react-native-location: b7e0eacf93d4cfaf59fa65808987583a7fbe46cb
PODFILE CHECKSUM: 314f6a5677a6a7d77b2c6d435c4146e57e7e3c70
COCOAPODS: 1.6.1
Here is my Podfile:
# Uncomment the next line to define a global platform for your project
# platform :ios, '9.0'
target 'testLocation2' do
# Uncomment the next line if you're using Swift or would like to use dynamic frameworks
# use_frameworks!
# Pods for testLocation2
pod 'react-native-location', :path => '../node_modules/react-native-location/react-native-location.podspec'
target 'testLocation2-tvOSTests' do
inherit! :search_paths
# Pods for testing
end
target 'testLocation2Tests' do
inherit! :search_paths
# Pods for testing
end
end
target 'testLocation2-tvOS' do
# Uncomment the next line if you're using Swift or would like to use dynamic frameworks
# use_frameworks!
# Pods for testLocation2-tvOS
# target 'testLocation2-tvOSTests' do
# inherit! :search_paths
# # Pods for testing
# end
end
Thanks for your help @matt-oakes !!
This is my package.json:
{
"name": "testLocation2",
"version": "0.0.1",
"private": true,
"scripts": {
"start": "node node_modules/react-native/local-cli/cli.js start",
"test": "jest"
},
"dependencies": {
"react": "16.8.3",
"react-native": "0.59.3",
"react-native-location": "^2.2.0"
},
"devDependencies": {
"@babel/core": "7.4.3",
"@babel/runtime": "7.4.3",
"babel-jest": "24.7.1",
"jest": "24.7.1",
"metro-react-native-babel-preset": "0.53.1",
"react-test-renderer": "16.8.3"
},
"jest": {
"preset": "react-native"
}
}
To use Cocoapods with any React Native library you need to move to having React Native itself as a Cocoapods dependency. The instructions for that are here:
https://facebook.github.io/react-native/docs/integration-with-existing-apps#configuring-cocoapods-dependencies
If you do not want to use Cocoapods, then you should delete your Podfile, Podfile.lock, and Pods directory and re-run the link command.
@matt-oakes thanks for your help. I know I am doing something wrong, but I can't work out what it is. For example, here is what I did for another project where I added a CocoaPods library (Firebase), and that one works. So what did I do differently in this one that gives me the "React/RCTBridge.h file not found" error? Thanks again.
Here are the files of another project that uses CocoaPods.
This is my Podfile.lock:
PODS:
- Firebase/AdMob (5.17.0):
- Firebase/Core
- Google-Mobile-Ads-SDK (~> 7.39)
- Firebase/Core (5.17.0):
- Firebase/CoreOnly
- FirebaseAnalytics (= 5.6.0)
- Firebase/CoreOnly (5.17.0):
- FirebaseCore (= 5.3.0)
- FirebaseAnalytics (5.6.0):
- FirebaseCore (~> 5.3)
- FirebaseInstanceID (~> 3.5)
- GoogleAppMeasurement (= 5.6.0)
- GoogleUtilities/AppDelegateSwizzler (~> 5.2)
- GoogleUtilities/MethodSwizzler (~> 5.2)
- GoogleUtilities/Network (~> 5.2)
- "GoogleUtilities/NSData+zlib (~> 5.2)"
- nanopb (~> 0.3)
- FirebaseCore (5.3.0):
- GoogleUtilities/Logger (~> 5.2)
- FirebaseInstanceID (3.5.0):
- FirebaseCore (~> 5.3)
- GoogleUtilities/Environment (~> 5.3)
- GoogleUtilities/UserDefaults (~> 5.3)
- Google-Mobile-Ads-SDK (7.40.0)
- GoogleAppMeasurement (5.6.0):
- GoogleUtilities/AppDelegateSwizzler (~> 5.2)
- GoogleUtilities/MethodSwizzler (~> 5.2)
- GoogleUtilities/Network (~> 5.2)
- "GoogleUtilities/NSData+zlib (~> 5.2)"
- nanopb (~> 0.3)
- GoogleUtilities/AppDelegateSwizzler (5.3.7):
- GoogleUtilities/Environment
- GoogleUtilities/Logger
- GoogleUtilities/Network
- GoogleUtilities/Environment (5.3.7)
- GoogleUtilities/Logger (5.3.7):
- GoogleUtilities/Environment
- GoogleUtilities/MethodSwizzler (5.3.7):
- GoogleUtilities/Logger
- GoogleUtilities/Network (5.3.7):
- GoogleUtilities/Logger
- "GoogleUtilities/NSData+zlib"
- GoogleUtilities/Reachability
- "GoogleUtilities/NSData+zlib (5.3.7)"
- GoogleUtilities/Reachability (5.3.7):
- GoogleUtilities/Logger
- GoogleUtilities/UserDefaults (5.3.7):
- GoogleUtilities/Logger
- nanopb (0.3.901):
- nanopb/decode (= 0.3.901)
- nanopb/encode (= 0.3.901)
- nanopb/decode (0.3.901)
- nanopb/encode (0.3.901)
DEPENDENCIES:
- Firebase/AdMob
- Firebase/Core
SPEC REPOS:
https://github.com/cocoapods/specs.git:
- Firebase
- FirebaseAnalytics
- FirebaseCore
- FirebaseInstanceID
- Google-Mobile-Ads-SDK
- GoogleAppMeasurement
- GoogleUtilities
- nanopb
SPEC CHECKSUMS:
Firebase: 59d557e064217fab6a03ff00baa73c06e73832e6
FirebaseAnalytics: 75e4bbc6417d190cc98ec1f17c41a4fad4c2c976
FirebaseCore: c0c4befb82374d6aef64d800e569f47625352edc
FirebaseInstanceID: 4522aad88f69297622062c0e9ffccdee3dd9b151
Google-Mobile-Ads-SDK: 9d1c38a83febea769470aa514a9c7954e2d1483d
GoogleAppMeasurement: 008e04ecd8efedd97a693aea8634aefe220bd26e
GoogleUtilities: 111a012f4c3a29c9e7c954c082fafd6ee3c999c0
nanopb: 2901f78ea1b7b4015c860c2fdd1ea2fee1a18d48
PODFILE CHECKSUM: a53b2a207ebc35aa62259ef3ade5a8080e105121
COCOAPODS: 1.6.1
and this is my Podfile:
# Uncomment the next line to define a global platform for your project
# platform :ios, '9.0'
target 'sqsonativeV57' do
# Uncomment the next line if you're using Swift or would like to use dynamic frameworks
# use_frameworks!
# Pods for sqsonativeV57
pod 'Firebase/Core'
pod 'Firebase/AdMob'
target 'sqsonativeV57-tvOSTests' do
inherit! :search_paths
# Pods for testing
end
target 'sqsonativeV57Tests' do
inherit! :search_paths
# Pods for testing
end
end
target 'sqsonativeV57-tvOS' do
# Uncomment the next line if you're using Swift or would like to use dynamic frameworks
# use_frameworks!
# Pods for sqsonativeV57-tvOS
# target 'sqsonativeV57-tvOSTests' do
# inherit! :search_paths
# Pods for testing
# end
end
The issue is that you need to link React Native itself as a Cocoapods dependency rather than the way it is currently set up (linked as a framework through Xcode). If you take a look at the link above and follow the instructions, it will guide you on how to do it.
Linking the Firebase dependencies is different as they don't know anything about React Native. If you link a React Native library through Cocoapods, you need to link React Native itself as a Cocoapod dependency.
@matt-oakes @timfpark I got it. I decided to avoid CocoaPods, and react-native link works great on both Android and iOS. Thanks, buddy!
|
gharchive/issue
| 2019-04-06T17:29:13 |
2025-04-01T04:36:05.554909
|
{
"authors": [
"matamicen",
"matt-oakes"
],
"repo": "timfpark/react-native-location",
"url": "https://github.com/timfpark/react-native-location/issues/52",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
321966322
|
TypeError: 'NoneType' object is not iterable
Getting the following error after scrolling the profile and scraping the first link:
Traceback (most recent call last): File "crawl_profile.py", line 33, in <module> information, user_commented_list = extract_information(browser, username, limit_amount) File "/Users/kevinleahey/Git/instagram-profilecrawl/util/extractor.py", line 225, in extract_information caption, location_url, location_name, location_id, lat, lng, img, tags, likes, comments, date, user_commented_list = extract_post_info(browser) TypeError: 'NoneType' object is not iterable
I looked at the extract_post_info method, but nothing stuck out to me. Any thoughts?
Seems like this is related to lines 127-141 of extractor.py.
If there are no comments on the post, tags = comments[0].text causes the script to fail.
I updated the else statement to elif len(comments) == 1: and that allowed the script to run fully. I'm not sure if that would have broken anything...
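For anyone hitting this later, here is a minimal, self-contained illustration of why this exact TypeError shows up. This is a simplified hypothetical, not the project's actual extractor.py code:

def extract_post_info(comments):
    # Simplified stand-in for the real function: if the only return statement
    # lives on the "has comments" path, a post without comments falls through
    # and the function implicitly returns None.
    if comments:
        tags = comments[0]
        return tags, len(comments) - 1
    # falling through here returns None implicitly

result = extract_post_info([])      # -> None for a post with no comments
# tags, n_comments = extract_post_info([])   # unpacking None raises
#                                            # TypeError: 'NoneType' object is not iterable

Guarding the no-comments case (or always returning a tuple) avoids the unpack failure, which is effectively what the elif / unindent fixes above achieve.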
I had the same issue, but it is now working fine after I unindented lines 140 and 141.
I found why the problem is generated: if the post doesn't have any comments, even from the author, I get
TypeError: 'NoneType' object is not iterable
@juliavollmer can you please post your code?
Hi, I had the same issue and tried to fix it, but nothing works. Can someone help? I think something is wrong when the script tries to return on line 142:
Line 142: return caption, location_url, location_name, location_id, lat, lng, img, tags, int(likes), int(len(comments) - 1), date, user_commented_list
Error:
Traceback (most recent call last):
File "crawl_profile.py", line 33, in
information, user_commented_list = extract_information(browser, username, limit_amount)
File "/home/administrador/instagram-profilecrawl/util/extractor.py", line 226, in extract_information
caption, location_url, location_name, location_id, lat, lng, img, tags, likes, comments, date, user_commented_list = extract_post_info(browser)
TypeError: 'NoneType' object is not iterable
Can you please give me an example of a username where this problem is happening?
Hi @justdvl, the problem occurs with this user https://www.instagram.com/whereistravel/ in the last post. I think something goes wrong when trying to get the statistics from each post, and with this user between posts 8 and 11: https://www.instagram.com/cristian.traveler.24/
Looks like @alanwuha wrote this part of the code, maybe he will fix it soon :) If not, I can look at it.
Thanks a lot @justdvl, I will wait for any update from @alanwuha 👍
No response, so I have done the fix ;)
https://github.com/timgrossmann/instagram-profilecrawl/pull/58
|
gharchive/issue
| 2018-05-10T15:06:41 |
2025-04-01T04:36:05.572655
|
{
"authors": [
"ZeusFSX",
"davasu",
"juliavollmer",
"justdvl",
"kleahey",
"wecanfuture"
],
"repo": "timgrossmann/instagram-profilecrawl",
"url": "https://github.com/timgrossmann/instagram-profilecrawl/issues/53",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1976154946
|
Use optimized hyperparamters for our imitation example
We can use the hyperparameters I proposed here: https://github.com/HumanCompatibleAI/imitation/pull/771
done
Fixed in #33
|
gharchive/issue
| 2023-11-03T12:54:19 |
2025-04-01T04:36:05.598813
|
{
"authors": [
"patrickab",
"timokau"
],
"repo": "timokau/prefq",
"url": "https://github.com/timokau/prefq/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
231800261
|
Added callback if an error occurs during Wemo.load
Errors were being dropped from the internal callback, which means calling code couldn't see them. I've added an optional extra parameter for the callback to deal with this case.
Codecov Report
Merging #51 into master will increase coverage by 0.49%.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #51 +/- ##
==========================================
+ Coverage 87.09% 87.58% +0.49%
==========================================
Files 2 2
Lines 279 282 +3
Branches 46 47 +1
==========================================
+ Hits 243 247 +4
+ Misses 36 35 -1
Impacted Files | Coverage Δ
index.js | 81.17% <100%> (+0.68%) (up)
client.js | 90.35% <0%> (+0.5%) (up)
Continue to review full report at Codecov.
Powered by Codecov. Last update f21ed18...3156735. Read the comment docs.
|
gharchive/pull-request
| 2017-05-27T13:10:07 |
2025-04-01T04:36:05.607034
|
{
"authors": [
"codecov-io",
"lutas"
],
"repo": "timonreinhard/wemo-client",
"url": "https://github.com/timonreinhard/wemo-client/pull/51",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1174919210
|
Current Measurement doesnt show in nightscout
I have had an instance of Nightscout and this uploader configured for two weeks now. For the first week everything ran fine: the uploader was pushing the current measurement and the backfilled data to Nightscout every minute.
After the first week I noticed one day that the dots no longer appeared every minute. The uploader was still logging that it was pushing ~141 measurements to Nightscout without fail. Nightscout also did not throw any errors, but I can only see a dot every five minutes, which should just be the backlogged data. The Nightscout database shows the same.
This problem was not solved by deleting the Docker images of the uploader and Nightscout and pulling new ones.
My graph now looks like this:
I also have a simple UI script using the exact same code that just displays the number every minute, and it is running fine. So my assumption is that Nightscout is not correctly saving the data. Do you have any idea whether there is a limit on how many uploads a Nightscout instance can handle at one time, such that the uploaded measurements get cut off and only the last few are saved?
Many Thanks
Sorry. The file didn't seem to upload correctly within github
That is strange. I have been running the uploader myself for a few months now and my measurements are still accurate and come in every minute. I am not aware of any Nightscout limits. :(
I have now put a debug output line at the beginning of the uploadToNightscout function.
2022-03-21T14:36:44.918Z [info]: Started
2022-03-21T14:37:00.431Z [info]: Logged in to LibreLink Up
2022-03-21T14:37:00.619Z [info]: Found 1 LibreLink Up connection.
2022-03-21T14:37:00.619Z [info]: -> The following connection will be used: XXX
2022-03-21T14:37:00.861Z [info]: Received blood glucose measurement
2022-03-21T14:37:00.863Z [info]: current measurement: 158 at: 2022-03-21T14:36:07.000Z
2022-03-21T14:37:01.139Z [info]: Upload of 141 measurements to Nightscout successfull
2022-03-21T14:38:12.517Z [info]: Found 1 LibreLink Up connection.
2022-03-21T14:38:12.517Z [info]: -> The following connection will be used: XXX
2022-03-21T14:38:12.783Z [info]: Received blood glucose measurement
2022-03-21T14:38:12.784Z [info]: current measurement: 162 at: 2022-03-21T14:37:06.000Z
2022-03-21T14:38:13.086Z [info]: Upload of 142 measurements to Nightscout successfull
2022-03-21T14:39:00.930Z [info]: Found 1 LibreLink Up connection.
2022-03-21T14:39:00.930Z [info]: -> The following connection will be used: XXX
2022-03-21T14:39:01.230Z [info]: Received blood glucose measurement
2022-03-21T14:39:01.230Z [info]: current measurement: 162 at: 2022-03-21T14:37:06.000Z
2022-03-21T14:39:01.491Z [info]: Upload of 142 measurements to Nightscout successfull
2022-03-21T14:40:00.297Z [info]: Found 1 LibreLink Up connection.
2022-03-21T14:40:00.297Z [info]: -> The following connection will be used: XXX
2022-03-21T14:40:00.583Z [info]: Received blood glucose measurement
2022-03-21T14:40:00.584Z [info]: current measurement: 162 at: 2022-03-21T14:37:06.000Z
2022-03-21T14:40:00.804Z [info]: Upload of 142 measurements to Nightscout successfull
2022-03-21T14:41:00.844Z [info]: Found 1 LibreLink Up connection.
2022-03-21T14:41:00.844Z [info]: -> The following connection will be used: XXX
2022-03-21T14:41:01.098Z [info]: Received blood glucose measurement
2022-03-21T14:41:01.098Z [info]: current measurement: 162 at: 2022-03-21T14:37:06.000Z
2022-03-21T14:41:01.356Z [info]: Upload of 141 measurements to Nightscout successfull
It seems the script is not getting any newer data from the LibreLink Up servers. This might indicate that Abbott changed something on the servers, that they changed the way the app pushes data to the servers, or that the app is having some kind of problem for me.
Is this still an issue? I noticed some downtime on Abbotts side too but it seems to work fine now.
Yes, it is unfortunately still a problem. But the LibreLink 3 app is not performing well at all, so I suspect that this is really just due to the app. Sorry for not finding a solution. You can close this issue or mark it as unfixable. Thanks for the help.
|
gharchive/issue
| 2022-03-21T05:53:10 |
2025-04-01T04:36:05.618284
|
{
"authors": [
"Lirycs228",
"timoschlueter"
],
"repo": "timoschlueter/nightscout-librelink-up",
"url": "https://github.com/timoschlueter/nightscout-librelink-up/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
188919805
|
Force multiline imports
Can we have an option where imports should always be multiline despite not exceeding the line length?
For example, instead of this
from collections import deque, OrderedDict
can we have this,
from collections import (
deque,
OrderedDict,
)
However, this should only apply when there are multiple imports per line. That is, if I am importing only a single name from the module, it should not be made multiline; the forced multiline behaviour should only kick in for two or more imports.
--force-grid-wrap already does what you're asking for
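For reference, a small sketch of what that option does, using isort's Python API (isort 5+; the thread above is about the equivalent CLI flag, so treat the exact call as an assumption rather than the original poster's setup):

import isort

src = "from collections import deque, OrderedDict\n"

# force_grid_wrap=2: any "from x import ..." with two or more names is wrapped
# into the parenthesized multi-line form, regardless of line length.
print(isort.code(src, force_grid_wrap=2))

# A single-name import stays on one line.
print(isort.code("from collections import deque\n", force_grid_wrap=2))

The exact wrapped layout depends on the multi_line_output setting; the CLI equivalent is the --force-grid-wrap flag mentioned above.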
That fixes my problem. Thanks!
|
gharchive/issue
| 2016-11-12T16:18:02 |
2025-04-01T04:36:05.621336
|
{
"authors": [
"pjenvey",
"tejasjadhav"
],
"repo": "timothycrosley/isort",
"url": "https://github.com/timothycrosley/isort/issues/480",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
135345371
|
Broken with babel 6.5.1
Hey, I'm not sure if this issue belongs here or on babel, but I'm going to start here.
I have a class chain like toFactory(TestComponent) > FieldSet, which expands to a prototype chain of TestComponent > toFactory(TestComponent) > FieldSet. However, when I invoke new FieldSet({}), I get a plain TestComponent instance back.
It looks like babel expands the FieldSet constructor to the following:
function _possibleConstructorReturn(self, call) { if (!self) { throw new ReferenceError("this hasn't been initialised - super() hasn't been called"); } return call && (typeof call === "object" || typeof call === "function") ? call : self; }
function _inherits(subClass, superClass) { if (typeof superClass !== "function" && superClass !== null) { throw new TypeError("Super expression must either be null or a function, not " + typeof superClass); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, enumerable: false, writable: true, configurable: true } }); if (superClass) Object.setPrototypeOf ? Object.setPrototypeOf(subClass, superClass) : subClass.__proto__ = superClass; }
var FieldSet = function (_TestComponent) {
_inherits(FieldSet, _TestComponent);
function FieldSet(root, properties) {
var _ret;
_classCallCheck(this, FieldSet);
var _this = _possibleConstructorReturn(this, Object.getPrototypeOf(FieldSet).call(this, root, properties));
return _ret = _this, _possibleConstructorReturn(_this, _ret);
}
// ...
}(_TestComponent3.default); // where `_TestComponent3.default` is `toFactory(TestComponent)`
This is problematic because var _this = _possibleConstructorReturn(this, Object.getPrototypeOf(FieldSet).call(this, root, properties)); invokes the factory function such that _this evaluates to a brand new TestComponent object instead of this with the TestComponent trait mixed in.
I'm not sure what the best fix is. The first idea that comes to mind is to check the value of this in the factory and delegate to Class.call if it is defined and not equal to global / window.
What do you think? Let me know if I can help.
interesting. Can you push a branch with a failing test case?
|
gharchive/issue
| 2016-02-22T07:57:59 |
2025-04-01T04:36:05.625417
|
{
"authors": [
"mxdubois",
"timoxley"
],
"repo": "timoxley/to-factory",
"url": "https://github.com/timoxley/to-factory/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1428497528
|
Submitting wrong RCON password yields an unhandled exception
When submitting credentials with a valid address but an incorrect RCON password, the HLLServerError that is raised is not caught. The expected behaviour would be to notify the user that the password is incorrect and ask if they want to use it anyway.
Would need to check if this is still an issue. Looks like I may have fixed it a while ago?
|
gharchive/issue
| 2022-10-29T23:11:23 |
2025-04-01T04:36:05.635506
|
{
"authors": [
"timraay"
],
"repo": "timraay/HLLLogUtilities",
"url": "https://github.com/timraay/HLLLogUtilities/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
365037002
|
dragImageCenterOnTouch and dragOffset not working
Using either of these options, the drag image's top-left corner is always at the touch point. I have tested it on iOS 12 and Android Chrome. I'm not really sure how to debug this.
Thanks for your bug report. Can you please provide details about the version you're using plus the config you use to initialize the polyfill?
I'm using 2.3.0-rc.1. My configuration is as so:
polyfill({
dragImageCenterOnTouch: true,
holdToDrag: bowser.mobile ? 150 : 0
})
I tried dragOffset as well with no effect. I'm using this library with react-dnd in my app.
Sorry for not getting back to this.. if you can provide some sample code I may be able to look into it. Until then I assume it's a specific CSS styling issue. Closing but happy to reopen when this issue becomes actionable!
|
gharchive/issue
| 2018-09-28T20:54:46 |
2025-04-01T04:36:05.641571
|
{
"authors": [
"AndrewMorsillo",
"reppners"
],
"repo": "timruffles/mobile-drag-drop",
"url": "https://github.com/timruffles/mobile-drag-drop/issues/142",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
614551173
|
This error occurs when fetching data from a specific bucket
This error occurs when fetching data from a specific bucket.
var bucket is only valid inside your Bolt().Update func.
https://godoc.org/github.com/etcd-io/bbolt#Tx.Bucket
You can't reference it outside of that transaction.
But I want to get the data. What do I need to do?
You don't need to create the buckets manually.
Just insert your model:
store, err := bolthold.Open(filename, 0666, nil)
if err != nil {
    // handle error
}

err = store.Insert("king", &model.UserModel{
    UserName: "king",
    Password: "2959802013_",
})

dwedw := &model.UserModel{}
_ = store.Get("king", dwedw)
fmt.Println(dwedw)
When I already have a default bucket and I want a second bucket, do I need to create a new store instance?
I'm not exactly following what you mean, but if you want to manually create and manage buckets, then I wouldn't suggest using Bolthold. Bolthold creates and manages its own buckets based on the name of the type.
I have two types of data with different data structures; how do I put them in the default bucket of Bolthold?
Bolthold automatically creates a bucket for each type you put in. Use a new type and you get a new bucket.
If you really need to store two different types in the same bucket you can use an anonymous struct and combine the two.
var combined struct {
    Type1
    Type2
}
This seems to be a good solution
|
gharchive/issue
| 2020-05-08T07:05:49 |
2025-04-01T04:36:05.646799
|
{
"authors": [
"king-wyx",
"timshannon"
],
"repo": "timshannon/bolthold",
"url": "https://github.com/timshannon/bolthold/issues/111",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
315670579
|
Update AndroidManifest.xml
As a library, you don't need the application object. It causes build issues for us.
Hi @bkhall, this is my first ever pull request!
I removed the line locally as you suggested and tested the app; it seems to be working just fine. But I would like to understand further what build issue you are facing. This is my first library, so there are a lot of things I don't know yet. It would be great if you could explain or send the stack trace.
Hi @tingyik90
I think it caused conflicts for the android apps built. Your setting will update their existing application name using the value as in https://github.com/tingyik90/snackprogressbar/blob/master/lib/src/main/res/values/strings.xml.
Further details may refer to: https://stackoverflow.com/questions/6842112/change-application-name-and-label. :)
Android studio will make an attempt to merge the manifests from all libraries into the final application manifest.
A library can use this fact to declare in its own manifest, the things that it needs to operate. You can define permissions you may need, activities, services, etc and those will be added to the final application.
In your case, you had allowBackup set to true where my own app has it set to false, so there was a conflict, forcing me to add override statements. A library shouldn't have this set ever.
But more generally, since your library does not need extra permissions and does not have its own activities and services, the entire application object in the manifest is unnecessary.
@bkhall, thank you for the explanation! I have merged the commit.
|
gharchive/pull-request
| 2018-04-18T22:57:44 |
2025-04-01T04:36:05.695863
|
{
"authors": [
"bkhall",
"gargoylexxx",
"tingyik90"
],
"repo": "tingyik90/snackprogressbar",
"url": "https://github.com/tingyik90/snackprogressbar/pull/13",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1550584722
|
[BUG] KeyError while training TFT pipeline
🐛 Bug Report
Hi! I took the code from your example of training a TFT model in the documentation, but instead of using backtest I chose to fit the model explicitly on the train part. I got KeyError: 'kwargs' after reaching the maximum number of epochs. Backtesting works fine on this pipeline (I'm using the whole dataset for it).
Expected behavior
Fit method returns fitted model
How To Reproduce
I reproduced the same error in Google Colab (link here) with toy data from generate_ar_df; in my own example I used real data and got the same error.
Traceback
KeyError Traceback (most recent call last)
/usr/local/lib/python3.8/dist-packages/IPython/core/formatters.py in __call__(self, obj)
700 type_pprinters=self.type_printers,
701 deferred_pprinters=self.deferred_printers)
--> 702 printer.pretty(obj)
703 printer.flush()
704 return stream.getvalue()
3 frames
/usr/local/lib/python3.8/dist-packages/IPython/lib/pretty.py in pretty(self, obj)
400 if cls is not object \
401 and callable(cls.__dict__.get('__repr__')):
--> 402 return _repr_pprint(obj, self, cycle)
403
404 return _default_pprint(obj, self, cycle)
/usr/local/lib/python3.8/dist-packages/IPython/lib/pretty.py in _repr_pprint(obj, p, cycle)
695 """A pprint that just redirects to the normal repr function."""
696 # Find newlines and replace them with p.break_()
--> 697 output = repr(obj)
698 for idx,output_line in enumerate(output.splitlines()):
699 if idx:
/usr/local/lib/python3.8/dist-packages/etna/core/mixins.py in __repr__(self)
30 value = None
31 warnings.warn(f"You haven't set all parameters inside class __init__ method: {e}")
---> 32 args_str_representation += f"{arg} = {repr(value)}, "
33 return f"{self.__class__.__name__}({args_str_representation})"
34
/usr/local/lib/python3.8/dist-packages/etna/core/mixins.py in __repr__(self)
22 continue
23 elif param.kind == param.VAR_KEYWORD:
---> 24 for arg_, value in self.__dict__[arg].items():
25 args_str_representation += f"{arg_} = {repr(value)}, "
26 else:
KeyError: 'kwargs'
Environment
No response
Additional context
No response
Checklist
[X] Bug appears at the latest library version
Thank you for your report!
As a workaround I can suggest changing pipeline_tft.fit(train_ts) to _ = pipeline_tft.fit(train_ts).
The issue is that the first variant makes IPython environments call __repr__ on the returned pipeline; it seems we have some problems with the __repr__ implementation.
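To make the mechanism concrete, here is a tiny self-contained sketch (not etna code, just an assumed illustration of why assigning the result helps):

class Broken:
    def fit(self):
        return self                     # fitting itself works fine
    def __repr__(self):
        raise KeyError("kwargs")        # repr is what actually blows up

p = Broken()
_ = p.fit()      # nothing asks for repr, so no error
# p.fit()        # as the last expression of an IPython/Colab cell this would be
#                # echoed, i.e. repr'd, and raise KeyError: 'kwargs'

The same applies to the reporter's snippet: _ = pipeline_tft.fit(train_ts) keeps the pipeline fitted and usable while avoiding the notebook echo.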
|
gharchive/issue
| 2023-01-20T10:00:09 |
2025-04-01T04:36:05.707647
|
{
"authors": [
"benzom",
"martins0n"
],
"repo": "tinkoff-ai/etna",
"url": "https://github.com/tinkoff-ai/etna/issues/1078",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1176737735
|
fix plot_trend bug
Related Issue
Closes #612
Codecov Report
Merging #617 (5ccf803) into master (067f6ad) will decrease coverage by 31.38%.
The diff coverage is 0.00%.
:exclamation: Current head 5ccf803 differs from pull request most recent head 2ed5331. Consider uploading reports for the commit 2ed5331 to get more accurate results
@@ Coverage Diff @@
## master #617 +/- ##
===========================================
- Coverage 84.12% 52.73% -31.39%
===========================================
Files 118 118
Lines 5964 5964
===========================================
- Hits 5017 3145 -1872
- Misses 947 2819 +1872
Impacted Files | Coverage Δ
etna/analysis/plotters.py | 10.95% <0.00%> (-5.13%) (down)
etna/commands/__init__.py | 0.00% <0.00%> (-100.00%) (down)
etna/commands/backtest_command.py | 0.00% <0.00%> (-96.43%) (down)
etna/commands/forecast_command.py | 0.00% <0.00%> (-92.00%) (down)
etna/commands/__main__.py | 0.00% <0.00%> (-87.50%) (down)
etna/commands/resolvers.py | 0.00% <0.00%> (-80.00%) (down)
etna/analysis/outliers/density_outliers.py | 22.44% <0.00%> (-75.52%) (down)
etna/datasets/datasets_generation.py | 26.47% <0.00%> (-73.53%) (down)
etna/transforms/timestamp/time_flags.py | 27.02% <0.00%> (-72.98%) (down)
etna/transforms/timestamp/fourier.py | 28.57% <0.00%> (-71.43%) (down)
... and 67 more
test_plot_trend
test_plot_bin_seg
test_plot_stl
|
gharchive/pull-request
| 2022-03-22T12:51:02 |
2025-04-01T04:36:05.728742
|
{
"authors": [
"codecov-commenter",
"iKintosh"
],
"repo": "tinkoff-ai/etna",
"url": "https://github.com/tinkoff-ai/etna/pull/617",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1867517137
|
[ready] ci: add py38 to linters
Running no tests on Python 3.8 makes it very easy to add features that are incompatible with that version. I suggest running at least the linters in both 3.8 and 3.11. Technically, only 3.8 would probably suffice.
I think it would be better to just have 3.8 only. All the CI stuff was bumped to 3.11 because it's faster, but the linter step is already fast enough on 3.8, and this way we don't use an extra runner.
|
gharchive/pull-request
| 2023-08-25T18:30:39 |
2025-04-01T04:36:05.787863
|
{
"authors": [
"roelofvandijk",
"wozeparrot"
],
"repo": "tinygrad/tinygrad",
"url": "https://github.com/tinygrad/tinygrad/pull/1674",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2371276533
|
llama concat_weights always sends to device[0] first
Removing that fixed the llama shard weight double copy. It was from #3966, but I think we fixed replace after that.
can you fix in llama3 as well?
Pending the buffer view contiguous change; otherwise it's copying the full weight on each GPU.
I think it's not always guaranteed that the sharding we want among devices is the same as the view on disk, so we cannot directly copy shards from disk to individual GPUs.
|
gharchive/pull-request
| 2024-06-24T23:16:17 |
2025-04-01T04:36:05.789670
|
{
"authors": [
"chenyuxyz",
"wozeparrot"
],
"repo": "tinygrad/tinygrad",
"url": "https://github.com/tinygrad/tinygrad/pull/5136",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2467481860
|
Don't create cast variable if one of its parents is an IF
In relation to this PR and specifically this comment.
In order to have IFs come after CASTs (and other UOps), we want to include those UOps as srcs of the IF, but not actually cause the UOps to be created as vars or other changes. Thus we exclude them from the count of parents when deciding on whether to create a var or not.
AMD tests failing but they supposedly work on a real chip
If your problem is that in test_cast_half_out_of_scope, the alu isn't correctly toposorted, we should fix this in the uops linearize.
Can you show a failing test for the toposort?
I don't see test_padto_multireduce failing
I'm not sure whether it would be considered a toposort issue or not. I might be misunderstanding things but thought it was cleaner to just have a one line fix here vs a whole new graph rewrite.
Here's where test_padto_where_multireduce fails: https://github.com/tinygrad/tinygrad/actions/runs/10349277723/job/28643299733. Sorry I might've mislabeled it above.
I'm also not sure of the best way to show the issue, but here are the original (passing on master), the failing version with the change from if gates, and the passing version with this addition plus if gates:
r_4_17_17_8__current_failing.txt
r_4_17_17_8__current_passing.txt
r_4_17_17_8__master_passing.txt
failing w/change from if gates:
// CURRENT BRANCH FAILING
r_4_17_17_8
LazyOp(MetaOps.KERNEL, arg=KernelInfo(local_dims=0, upcasted=1, dont_use_locals=False), src=(
LazyOp(BufferOps.STORE, arg=MemBuffer(idx=0, dtype=dtypes.float, st=ShapeTracker(views=(View(shape=(32, 1, 1), strides=(1, 0, 0), offset=0, mask=((0, 17), (0, 1), (0, 1)), contiguous=False), View(shape=(4, 1, 1, 8), strides=(8, 0, 0, 1), offset=0, mask=None, contiguous=True)))), src=(
LazyOp(TernaryOps.WHERE, arg=None, src=(
LazyOp(BinaryOps.CMPLT, arg=None, src=(
LazyOp(BufferOps.CONST, arg=ConstBuffer(val=8.5, dtype=dtypes.float, st=ShapeTracker(views=(View(shape=(32, 1, 1), strides=(0, 0, 0), offset=0, mask=((0, 17), (0, 1), (0, 1)), contiguous=False), View(shape=(4, 1, 1, 8), strides=(8, 0, 0, 1), offset=0, mask=None, contiguous=True)))), src=()),
LazyOp(ReduceOps.SUM, arg=(1,), src=(
LazyOp(BinaryOps.ADD, arg=None, src=(
LazyOp(BufferOps.LOAD, arg=MemBuffer(idx=1, dtype=dtypes.float, st=ShapeTracker(views=(View(shape=(32, 17, 1), strides=(17, 1, 0), offset=0, mask=((0, 17), (0, 17), (0, 1)), contiguous=False), View(shape=(4, 17, 1, 8), strides=(136, 1, 0, 17), offset=0, mask=None, contiguous=False)))), src=()),
LazyOp(TernaryOps.WHERE, arg=None, src=(
LazyOp(BinaryOps.CMPLT, arg=None, src=(
LazyOp(BufferOps.CONST, arg=ConstBuffer(val=12.75, dtype=dtypes.float, st=ShapeTracker(views=(View(shape=(32, 17, 1), strides=(0, 0, 0), offset=0, mask=((0, 17), (0, 17), (0, 1)), contiguous=False), View(shape=(4, 17, 1, 8), strides=(136, 1, 0, 17), offset=0, mask=None, contiguous=False)))), src=()),
LazyOp(ReduceOps.SUM, arg=(2,), src=(
LazyOp(BufferOps.LOAD, arg=MemBuffer(idx=1, dtype=dtypes.float, st=ShapeTracker(views=(View(shape=(32, 17, 17), strides=(17, 0, 1), offset=0, mask=((0, 17), (0, 17), (0, 17)), contiguous=False), View(shape=(4, 17, 17, 8), strides=(2312, 17, 1, 289), offset=0, mask=None, contiguous=False)))), src=()),)),)),
LazyOp(BufferOps.LOAD, arg=MemBuffer(idx=2, dtype=dtypes.float, st=ShapeTracker(views=(View(shape=(32, 17, 1), strides=(0, 0, 0), offset=0, mask=((0, 17), (0, 17), (0, 1)), contiguous=False), View(shape=(4, 17, 1, 8), strides=(136, 1, 0, 17), offset=0, mask=None, contiguous=False)))), src=()),
LazyOp(BufferOps.LOAD, arg=MemBuffer(idx=3, dtype=dtypes.float, st=ShapeTracker(views=(View(shape=(32, 17, 1), strides=(0, 0, 0), offset=0, mask=((0, 17), (0, 17), (0, 1)), contiguous=False), View(shape=(4, 17, 1, 8), strides=(136, 1, 0, 17), offset=0, mask=None, contiguous=False)))), src=()),)),)),)),)),
LazyOp(BufferOps.CONST, arg=ConstBuffer(val=0.0, dtype=dtypes.float, st=ShapeTracker(views=(View(shape=(32, 1, 1), strides=(0, 0, 0), offset=0, mask=((0, 17), (0, 1), (0, 1)), contiguous=False), View(shape=(4, 1, 1, 8), strides=(8, 0, 0, 1), offset=0, mask=None, contiguous=True)))), src=()),
LazyOp(BufferOps.CONST, arg=ConstBuffer(val=1.0, dtype=dtypes.float, st=ShapeTracker(views=(View(shape=(32, 1, 1), strides=(0, 0, 0), offset=0, mask=((0, 17), (0, 1), (0, 1)), contiguous=False), View(shape=(4, 1, 1, 8), strides=(8, 0, 0, 1), offset=0, mask=None, contiguous=True)))), src=()),)),)),))
[Opt(op=OptOps.PADTO, axis=0, amt=32), Opt(op=OptOps.UPCAST, axis=0, amt=8)]
0 UOps.DEFINE_GLOBAL : PtrDType(dtypes.float) [] 0
1 UOps.DEFINE_GLOBAL : PtrDType(dtypes.float) [] 1
2 UOps.DEFINE_GLOBAL : PtrDType(dtypes.float) [] 2
3 UOps.DEFINE_GLOBAL : PtrDType(dtypes.float) [] 3
4 UOps.CONST : dtypes.int [] 0
5 UOps.CONST : dtypes.float [] 0.0
6 UOps.CONST : dtypes.int [] 1
7 UOps.CONST : dtypes.float [] 1.0
8 UOps.CONST : dtypes.int [] 2
9 UOps.CONST : dtypes.int [] 3
10 UOps.CONST : dtypes.int [] 4
11 UOps.CONST : dtypes.int [] 8
12 UOps.CONST : dtypes.float [] 8.5
13 UOps.CONST : dtypes.float [] 12.75
14 UOps.CONST : dtypes.int [] 17
15 UOps.CONST : dtypes.int [] 34
16 UOps.CONST : dtypes.int [] 51
17 UOps.CONST : dtypes.int [] 68
18 UOps.CONST : dtypes.int [] 85
19 UOps.CONST : dtypes.int [] 102
20 UOps.CONST : dtypes.int [] 119
21 UOps.CONST : dtypes.int [] 136
22 UOps.CONST : dtypes.int [] 170
23 UOps.CONST : dtypes.int [] 187
24 UOps.CONST : dtypes.int [] 204
25 UOps.CONST : dtypes.int [] 221
26 UOps.CONST : dtypes.int [] 238
27 UOps.CONST : dtypes.int [] 255
28 UOps.CONST : dtypes.int [] 289
29 UOps.SPECIAL : dtypes.int [] ('gidx0', 4)
30 UOps.VECTORIZE : dtypes._float2 ['0.0', '0.0'] None
31 UOps.VECTORIZE : dtypes._float4 ['0.0', '0.0', '0.0', '0.0'] None
32 UOps.ALU : dtypes.int [29, '8'] BinaryOps.MUL
33 UOps.ALU : dtypes.int [32, '1'] BinaryOps.ADD
34 UOps.ALU : dtypes.int [32, '2'] BinaryOps.ADD
35 UOps.ALU : dtypes.int [32, '4'] BinaryOps.ADD
36 UOps.ALU : dtypes.int [29, '136'] BinaryOps.MUL
37 UOps.ALU : dtypes.bool [29, '2'] BinaryOps.CMPLT
38 UOps.ALU : dtypes.float [37, '1.0', '0.0'] TernaryOps.WHERE
39 UOps.ALU : dtypes.float [37, '8.5', '0.0'] TernaryOps.WHERE
40 UOps.ALU : dtypes.float [37, '12.75', '0.0'] TernaryOps.WHERE
41 UOps.ALU : dtypes.bool [29, '3'] BinaryOps.CMPLT
42 UOps.ALU : dtypes.float [41, '1.0', '0.0'] TernaryOps.WHERE
43 UOps.ALU : dtypes.float [41, '8.5', '0.0'] TernaryOps.WHERE
44 UOps.LOAD : dtypes.float [2, '0', '0.0', 37] None
45 UOps.LOAD : dtypes.float [3, '0', '0.0', 37] None
46 UOps.DEFINE_ACC : dtypes.float ['0.0', 50] (1,)
47 UOps.DEFINE_ACC : dtypes.float ['0.0', 50] (3,)
48 UOps.DEFINE_ACC : dtypes._float2 [30, 50] (5,)
49 UOps.DEFINE_ACC : dtypes._float4 [31, 50] (7,)
50 UOps.RANGE : dtypes.int ['0', '17'] (1, True)
51 UOps.GEP : dtypes.float [48] 0
52 UOps.GEP : dtypes.float [49] 0
53 UOps.GEP : dtypes.float [48] 1
54 UOps.GEP : dtypes.float [49] 1
55 UOps.GEP : dtypes.float [49] 2
56 UOps.GEP : dtypes.float [49] 3
57 UOps.ALU : dtypes.int [36, 50] BinaryOps.ADD
58 UOps.ALU : dtypes.int [57, '17'] BinaryOps.ADD
59 UOps.ALU : dtypes.int [57, '34'] BinaryOps.ADD
60 UOps.ALU : dtypes.int [57, '51'] BinaryOps.ADD
61 UOps.ALU : dtypes.int [57, '68'] BinaryOps.ADD
62 UOps.ALU : dtypes.int [57, '85'] BinaryOps.ADD
63 UOps.ALU : dtypes.int [57, '102'] BinaryOps.ADD
64 UOps.ALU : dtypes.int [57, '119'] BinaryOps.ADD
65 UOps.ALU : dtypes.bool [57, '170'] BinaryOps.CMPLT
66 UOps.ALU : dtypes.float [65, '12.75', '0.0'] TernaryOps.WHERE
67 UOps.ALU : dtypes.bool [57, '187'] BinaryOps.CMPLT
68 UOps.ALU : dtypes.float [67, '12.75', '0.0'] TernaryOps.WHERE
69 UOps.ALU : dtypes.bool [57, '204'] BinaryOps.CMPLT
70 UOps.ALU : dtypes.float [69, '12.75', '0.0'] TernaryOps.WHERE
71 UOps.ALU : dtypes.bool [57, '221'] BinaryOps.CMPLT
72 UOps.ALU : dtypes.float [71, '12.75', '0.0'] TernaryOps.WHERE
73 UOps.ALU : dtypes.bool [57, '238'] BinaryOps.CMPLT
74 UOps.ALU : dtypes.float [73, '12.75', '0.0'] TernaryOps.WHERE
75 UOps.ALU : dtypes.bool [57, '255'] BinaryOps.CMPLT
76 UOps.ALU : dtypes.float [75, '12.75', '0.0'] TernaryOps.WHERE
77 UOps.ALU : dtypes.bool [57, '289'] BinaryOps.CMPLT
78 UOps.ALU : dtypes.float [77, '12.75', '0.0'] TernaryOps.WHERE
79 UOps.LOAD : dtypes.float [1, 58, '0.0', 37] None
80 UOps.LOAD : dtypes.float [1, 59, '0.0', 75] None
81 UOps.LOAD : dtypes.float [1, 60, '0.0', 73] None
82 UOps.LOAD : dtypes.float [1, 61, '0.0', 71] None
83 UOps.LOAD : dtypes.float [1, 62, '0.0', 69] None
84 UOps.LOAD : dtypes.float [1, 63, '0.0', 67] None
85 UOps.LOAD : dtypes.float [1, 64, '0.0', 65] None
86 UOps.LOAD : dtypes.float [1, 57, '0.0', 77] None
87 UOps.LOAD : dtypes.float [2, '0', '0.0', 65] None
88 UOps.LOAD : dtypes.float [2, '0', '0.0', 67] None
89 UOps.LOAD : dtypes.float [2, '0', '0.0', 69] None
90 UOps.LOAD : dtypes.float [2, '0', '0.0', 71] None
91 UOps.LOAD : dtypes.float [2, '0', '0.0', 73] None
92 UOps.LOAD : dtypes.float [2, '0', '0.0', 75] None
93 UOps.LOAD : dtypes.float [2, '0', '0.0', 77] None
94 UOps.LOAD : dtypes.float [3, '0', '0.0', 65] None
95 UOps.LOAD : dtypes.float [3, '0', '0.0', 67] None
96 UOps.LOAD : dtypes.float [3, '0', '0.0', 69] None
97 UOps.LOAD : dtypes.float [3, '0', '0.0', 71] None
98 UOps.LOAD : dtypes.float [3, '0', '0.0', 73] None
99 UOps.LOAD : dtypes.float [3, '0', '0.0', 75] None
100 UOps.LOAD : dtypes.float [3, '0', '0.0', 77] None
101 UOps.DEFINE_ACC : dtypes.float ['0.0', 105] (0,)
102 UOps.DEFINE_ACC : dtypes.float ['0.0', 105] (2,)
103 UOps.DEFINE_ACC : dtypes._float2 [30, 105] (4,)
104 UOps.DEFINE_ACC : dtypes._float4 [31, 105] (6,)
105 UOps.RANGE : dtypes.int ['0', '17'] (2, True)
106 UOps.GEP : dtypes.float [103] 0
107 UOps.GEP : dtypes.float [104] 0
108 UOps.GEP : dtypes.float [103] 1
109 UOps.GEP : dtypes.float [104] 1
110 UOps.GEP : dtypes.float [104] 2
111 UOps.GEP : dtypes.float [104] 3
112 UOps.ALU : dtypes.int [36, 105] BinaryOps.ADD
113 UOps.ALU : dtypes.int [112, '17'] BinaryOps.ADD
114 UOps.ALU : dtypes.int [112, '34'] BinaryOps.ADD
115 UOps.ALU : dtypes.int [112, '51'] BinaryOps.ADD
116 UOps.ALU : dtypes.int [112, '68'] BinaryOps.ADD
117 UOps.ALU : dtypes.int [112, '85'] BinaryOps.ADD
118 UOps.ALU : dtypes.int [112, '102'] BinaryOps.ADD
119 UOps.ALU : dtypes.int [112, '119'] BinaryOps.ADD
120 UOps.LOAD : dtypes.float [1, 113, '0.0', 37] None
121 UOps.ALU : dtypes.float [102, 120] BinaryOps.ADD
122 UOps.LOAD : dtypes.float [1, 114, '0.0', 37] None
123 UOps.ALU : dtypes.float [106, 122] BinaryOps.ADD
124 UOps.LOAD : dtypes.float [1, 115, '0.0', 37] None
125 UOps.ALU : dtypes.float [108, 124] BinaryOps.ADD
126 UOps.VECTORIZE : dtypes._float2 [123, 125] None
127 UOps.LOAD : dtypes.float [1, 116, '0.0', 37] None
128 UOps.ALU : dtypes.float [107, 127] BinaryOps.ADD
129 UOps.LOAD : dtypes.float [1, 117, '0.0', 37] None
130 UOps.ALU : dtypes.float [109, 129] BinaryOps.ADD
131 UOps.LOAD : dtypes.float [1, 118, '0.0', 37] None
132 UOps.ALU : dtypes.float [110, 131] BinaryOps.ADD
133 UOps.LOAD : dtypes.float [1, 119, '0.0', 37] None
134 UOps.ALU : dtypes.float [111, 133] BinaryOps.ADD
135 UOps.VECTORIZE : dtypes._float4 [128, 130, 132, 134] None
136 UOps.LOAD : dtypes.float [1, 112, '0.0', 41] None
137 UOps.ALU : dtypes.float [101, 136] BinaryOps.ADD
138 UOps.PHI : dtypes.float [101, 137] None
139 UOps.PHI : dtypes.float [(102), 121] None
140 UOps.PHI : dtypes._float2 [103, 126] None
141 UOps.PHI : dtypes._float4 [104, 135] None
142 UOps.ENDRANGE : [105] None
143 UOps.ALU : dtypes.bool [78, 138] BinaryOps.CMPLT
144 UOps.ALU : dtypes.float [143, 93, 100] TernaryOps.WHERE
145 UOps.ALU : dtypes.float [86, 144] BinaryOps.ADD
146 UOps.ALU : dtypes.float [46, 145] BinaryOps.ADD
147 UOps.PHI : dtypes.float [46, 146] None
148 UOps.GEP : dtypes.float [140] 0
149 UOps.ALU : dtypes.bool [76, 148] BinaryOps.CMPLT
150 UOps.ALU : dtypes.float [149, 92, 99] TernaryOps.WHERE
151 UOps.ALU : dtypes.float [80, 150] BinaryOps.ADD
152 UOps.ALU : dtypes.float [51, 151] BinaryOps.ADD
153 UOps.GEP : dtypes.float [141] 0
154 UOps.ALU : dtypes.bool [72, 153] BinaryOps.CMPLT
155 UOps.ALU : dtypes.float [154, 90, 97] TernaryOps.WHERE
156 UOps.ALU : dtypes.float [82, 155] BinaryOps.ADD
157 UOps.ALU : dtypes.float [52, 156] BinaryOps.ADD
158 UOps.GEP : dtypes.float [140] 1
159 UOps.ALU : dtypes.bool [74, 158] BinaryOps.CMPLT
160 UOps.ALU : dtypes.float [159, 91, 98] TernaryOps.WHERE
161 UOps.ALU : dtypes.float [81, 160] BinaryOps.ADD
162 UOps.ALU : dtypes.float [53, 161] BinaryOps.ADD
163 UOps.VECTORIZE : dtypes._float2 [152, 162] None
164 UOps.PHI : dtypes._float2 [48, 163] None
165 UOps.GEP : dtypes.float [164] 0
166 UOps.GEP : dtypes.float [164] 1
167 UOps.GEP : dtypes.float [141] 1
168 UOps.ALU : dtypes.bool [70, 167] BinaryOps.CMPLT
169 UOps.ALU : dtypes.float [168, 89, 96] TernaryOps.WHERE
170 UOps.ALU : dtypes.float [83, 169] BinaryOps.ADD
171 UOps.ALU : dtypes.float [54, 170] BinaryOps.ADD
172 UOps.GEP : dtypes.float [141] 2
173 UOps.ALU : dtypes.bool [68, 172] BinaryOps.CMPLT
174 UOps.ALU : dtypes.float [173, 88, 95] TernaryOps.WHERE
175 UOps.ALU : dtypes.float [84, 174] BinaryOps.ADD
176 UOps.ALU : dtypes.float [55, 175] BinaryOps.ADD
177 UOps.GEP : dtypes.float [141] 3
178 UOps.ALU : dtypes.bool [66, 177] BinaryOps.CMPLT
179 UOps.ALU : dtypes.float [178, 87, 94] TernaryOps.WHERE
180 UOps.ALU : dtypes.float [85, 179] BinaryOps.ADD
181 UOps.ALU : dtypes.float [56, 180] BinaryOps.ADD
182 UOps.VECTORIZE : dtypes._float4 [157, 171, 176, 181] None
183 UOps.PHI : dtypes._float4 [49, 182] None
184 UOps.GEP : dtypes.float [183] 0
185 UOps.GEP : dtypes.float [183] 1
186 UOps.GEP : dtypes.float [183] 2
187 UOps.GEP : dtypes.float [183] 3
188 UOps.ALU : dtypes.bool [39, 165] BinaryOps.CMPLT
189 UOps.ALU : dtypes.float [188, '0.0', 38] TernaryOps.WHERE
190 UOps.ALU : dtypes.bool [39, 184] BinaryOps.CMPLT
191 UOps.ALU : dtypes.float [190, '0.0', 38] TernaryOps.WHERE
192 UOps.ALU : dtypes.bool [39, 166] BinaryOps.CMPLT
193 UOps.ALU : dtypes.float [192, '0.0', 38] TernaryOps.WHERE
194 UOps.VECTORIZE : dtypes._float2 [189, 193] None
195 UOps.ALU : dtypes.bool [39, 185] BinaryOps.CMPLT
196 UOps.ALU : dtypes.float [195, '0.0', 38] TernaryOps.WHERE
197 UOps.ALU : dtypes.bool [39, 186] BinaryOps.CMPLT
198 UOps.ALU : dtypes.float [197, '0.0', 38] TernaryOps.WHERE
199 UOps.ALU : dtypes.bool [39, 187] BinaryOps.CMPLT
200 UOps.ALU : dtypes.float [199, '0.0', 38] TernaryOps.WHERE
201 UOps.VECTORIZE : dtypes._float4 [191, 196, 198, 200] None
202 UOps.ALU : dtypes.bool [40, 139] BinaryOps.CMPLT
203 UOps.ALU : dtypes.float [202, 44, 45] TernaryOps.WHERE
204 UOps.ALU : dtypes.float [79, 203] BinaryOps.ADD
205 UOps.ALU : dtypes.float [47, 204] BinaryOps.ADD
206 UOps.PHI : dtypes.float [47, 205] None
207 UOps.ENDRANGE : [50] None
208 UOps.ALU : dtypes.bool [39, 206] BinaryOps.CMPLT
209 UOps.ALU : dtypes.float [208, '0.0', 38] TernaryOps.WHERE
210 UOps.ALU : dtypes.bool [43, 147] BinaryOps.CMPLT
211 UOps.ALU : dtypes.float [210, '0.0', 42] TernaryOps.WHERE
212 UOps.IF : [37, 209, 194, 201] None
213 UOps.STORE : [0, 33, 209, 212] None
214 UOps.STORE : [0, 34, 194, 212] None
215 UOps.STORE : [0, 35, 201, 212] None
216 UOps.ENDIF : [212] None
217 UOps.IF : [41, 211] None
218 UOps.STORE : [0, 32, 211, 217] None
219 UOps.ENDIF : [217] None
#include <metal_stdlib>
using namespace metal;
kernel void r_4_17_17_8(device float* data0, const device float* data1, const device float* data2, const device float* data3, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {
int gidx0 = gid.x; /* 4 */
float2 cast0 = float2(0.0f,0.0f);
float4 cast1 = float4(0.0f,0.0f,0.0f,0.0f);
int alu0 = (gidx0*8);
int alu1 = (gidx0*136);
bool alu2 = (gidx0<2);
float alu3 = (alu2?1.0f:0.0f);
float alu4 = (alu2?8.5f:0.0f);
bool alu5 = (gidx0<3);
float val0 = (alu2?*(data2+0):0.0f);
float val1 = (alu2?*(data3+0):0.0f);
float acc0 = 0.0f;
float acc1 = 0.0f;
float2 acc2 = cast0;
float4 acc3 = cast1;
for (int ridx0 = 0; ridx0 < 17; ridx0++) {
int alu6 = (alu1+ridx0);
bool alu7 = (alu6<170);
bool alu8 = (alu6<187);
bool alu9 = (alu6<204);
bool alu10 = (alu6<221);
bool alu11 = (alu6<238);
bool alu12 = (alu6<255);
bool alu13 = (alu6<289);
float val2 = (alu2?*(data1+alu6+17):0.0f);
float val3 = (alu12?*(data1+alu6+34):0.0f);
float val4 = (alu11?*(data1+alu6+51):0.0f);
float val5 = (alu10?*(data1+alu6+68):0.0f);
float val6 = (alu9?*(data1+alu6+85):0.0f);
float val7 = (alu8?*(data1+alu6+102):0.0f);
float val8 = (alu7?*(data1+alu6+119):0.0f);
float val9 = (alu13?*(data1+alu6):0.0f);
float val10 = (alu7?*(data2+0):0.0f);
float val11 = (alu8?*(data2+0):0.0f);
float val12 = (alu9?*(data2+0):0.0f);
float val13 = (alu10?*(data2+0):0.0f);
float val14 = (alu11?*(data2+0):0.0f);
float val15 = (alu12?*(data2+0):0.0f);
float val16 = (alu13?*(data2+0):0.0f);
float val17 = (alu7?*(data3+0):0.0f);
float val18 = (alu8?*(data3+0):0.0f);
float val19 = (alu9?*(data3+0):0.0f);
float val20 = (alu10?*(data3+0):0.0f);
float val21 = (alu11?*(data3+0):0.0f);
float val22 = (alu12?*(data3+0):0.0f);
float val23 = (alu13?*(data3+0):0.0f);
float acc4 = 0.0f;
float acc5 = 0.0f;
float2 acc6 = cast0;
float4 acc7 = cast1;
for (int ridx1 = 0; ridx1 < 17; ridx1++) {
int alu14 = (alu1+ridx1);
float val24 = (alu2?*(data1+alu14+17):0.0f);
float val25 = (alu2?*(data1+alu14+34):0.0f);
float val26 = (alu2?*(data1+alu14+51):0.0f);
float val27 = (alu2?*(data1+alu14+68):0.0f);
float val28 = (alu2?*(data1+alu14+85):0.0f);
float val29 = (alu2?*(data1+alu14+102):0.0f);
float val30 = (alu2?*(data1+alu14+119):0.0f);
float val31 = (alu5?*(data1+alu14):0.0f);
acc4 = (acc4+val31);
acc5 = (acc5+val24);
acc6 = float2((acc6.x+val25),(acc6.y+val26));
acc7 = float4((acc7.x+val27),(acc7.y+val28),(acc7.z+val29),(acc7.w+val30));
}
acc0 = (acc0+val9+(((alu13?12.75f:0.0f)<acc4)?val16:val23));
acc2 = float2((acc2.x+val3+(((alu12?12.75f:0.0f)<acc6.x)?val15:val22)),(acc2.y+val4+(((alu11?12.75f:0.0f)<acc6.y)?val14:val21)));
acc3 = float4((acc3.x+val5+(((alu10?12.75f:0.0f)<acc7.x)?val13:val20)),(acc3.y+val6+(((alu9?12.75f:0.0f)<acc7.y)?val12:val19)),(acc3.z+val7+(((alu8?12.75f:0.0f)<acc7.z)?val11:val18)),(acc3.w+val8+(((alu7?12.75f:0.0f)<acc7.w)?val10:val17)));
float2 cast2 = float2(((alu4<acc2.x)?0.0f:alu3),((alu4<acc2.y)?0.0f:alu3));
float4 cast3 = float4(((alu4<acc3.x)?0.0f:alu3),((alu4<acc3.y)?0.0f:alu3),((alu4<acc3.z)?0.0f:alu3),((alu4<acc3.w)?0.0f:alu3));
acc1 = (acc1+val2+(((alu2?12.75f:0.0f)<acc5)?val0:val1));
}
float alu15 = ((alu4<acc1)?0.0f:alu3);
float alu16 = (((alu5?8.5f:0.0f)<acc0)?0.0f:(alu5?1.0f:0.0f));
if (alu2) {
*(data0+alu0+1) = alu15;
*((device float2*)(data0+alu0+2)) = cast2;
*((device float4*)(data0+alu0+4)) = cast3;
}
if (alu5) {
*(data0+alu0) = alu16;
}
}
I can repro it in e51f175ca974cf67719837cbd8020e0ac656fe83. This is definitely a toposort issue and should be fixed in UOpGraph.
I'd try adding logic to the push function to prioritize cast0 correctly
https://github.com/tinygrad/tinygrad/blob/11d62668a3378d3ba6310a30d0ac988e1260eb33/tinygrad/codegen/uopgraph.py#L571
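For illustration only (this is not tinygrad's actual UOpGraph code, and the node names below are made up): a Kahn-style toposort where a priority function decides which ready node is emitted first, so nodes like the vectorized zero constants can be hoisted ahead of the loop body that consumes them via PHI.
import heapq

def priority_toposort(nodes, edges, priority):
    # Kahn's algorithm, but ready nodes are popped from a heap keyed by a
    # caller-supplied priority, so low-priority values come out first.
    indegree = {n: 0 for n in nodes}
    for src, dst in edges:
        indegree[dst] += 1
    ready = [(priority(n), i, n) for i, n in enumerate(nodes) if indegree[n] == 0]
    heapq.heapify(ready)
    counter = len(nodes)
    order = []
    while ready:
        _, _, node = heapq.heappop(ready)
        order.append(node)
        for src, dst in edges:
            if src == node:
                indegree[dst] -= 1
                if indegree[dst] == 0:
                    heapq.heappush(ready, (priority(dst), counter, dst))
                    counter += 1
    return order

# Toy usage: "const" nodes get priority 0 so they are scheduled before "range".
nodes = ["const_f2", "range", "alu", "phi"]
edges = [("range", "alu"), ("const_f2", "phi"), ("alu", "phi")]
print(priority_toposort(nodes, edges, lambda n: 0 if n.startswith("const") else 1))
# ['const_f2', 'range', 'alu', 'phi']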
Ok, cool I'll make that change. Thanks for the details & sorry if this was a bit off base.
I'll re-open this PR when it's ready to go.
I believe the changes here https://github.com/tinygrad/tinygrad/pull/6007/files#diff-00bd44b667ec90ae1d3e984e699bc6b498c84ca1b1bd15a025437ded227457bfR566 actually solved the re-ordering issue 😅 so this shouldn't be needed anymore. Will switch back to my other PR and get the cuda tests passing.
|
gharchive/pull-request
| 2024-08-15T06:13:37 |
2025-04-01T04:36:05.805773
|
{
"authors": [
"Qazalin",
"ianpaul10"
],
"repo": "tinygrad/tinygrad",
"url": "https://github.com/tinygrad/tinygrad/pull/6086",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
877060792
|
AttributeError: 'FinanceAPI' object has no attribute 'df'
Hello -- I was following your tutorial and got an issue when trying to run df=f.df; it throws an AttributeError... can you help resolve it?
Thanks!
Hi again -- any chance you're checking on these issues?
|
gharchive/issue
| 2021-05-06T04:26:19 |
2025-04-01T04:36:05.821271
|
{
"authors": [
"Klod"
],
"repo": "tirthajyoti/Finance-with-Python",
"url": "https://github.com/tirthajyoti/Finance-with-Python/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2531048049
|
late poet challenge start causes unexpected behaviour
When a PoET replies with its proof late at the start of the cycle gap, psm can get into a state where it assumes the PoST service is new or registering (and so needs no proving time), because the service returns a READY state. This results in many post-services starting within a short amount of time, creating a "thread fight club" when PoET does reply and proving starts, resulting in poor performance.
Currently psm starts up services based on the CG opening layer and the current layer, but we might need to revisit other CG detection methods, including the classic spacemesh.v1.PostInfoService.PostStates | grep "PROVING"
|
gharchive/issue
| 2024-09-17T12:36:03 |
2025-04-01T04:36:05.867731
|
{
"authors": [
"tjb-altf4"
],
"repo": "tjb-altf4/spacemesh-psm",
"url": "https://github.com/tjb-altf4/spacemesh-psm/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
816045373
|
Update dependencies
Bumps dependencies
Silence pylint's using-constant-test warning which is annoying
Coverage remained the same at 27.218% when pulling a215ad5a60a5d4996d916b8a6c3a89bdf4b81669 on etnguyen03:update-deps into 07b6f40e18adefc92ac104a5b8f8a320fc9935cf on tjcsl:master.
|
gharchive/pull-request
| 2021-02-25T03:20:51 |
2025-04-01T04:36:05.869792
|
{
"authors": [
"coveralls",
"etnguyen03"
],
"repo": "tjcsl/director4",
"url": "https://github.com/tjcsl/director4/pull/52",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2241430538
|
🛑 rss.tjo.space is down
In 68977c5, rss.tjo.space (https://rss.tjo.space) was down:
HTTP code: 502
Response time: 7465 ms
Resolved: rss.tjo.space is back up in b8974d5 after 6 minutes.
|
gharchive/issue
| 2024-04-13T08:32:41 |
2025-04-01T04:36:05.894343
|
{
"authors": [
"mentos1386"
],
"repo": "tjo-space/status",
"url": "https://github.com/tjo-space/status/issues/1213",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1593836238
|
🛑 code.tjo.space is down
In 98bd44b, code.tjo.space (https://code.tjo.space) was down:
HTTP code: 0
Response time: 0 ms
Resolved: code.tjo.space is back up in 6682bc4.
|
gharchive/issue
| 2023-02-21T17:14:15 |
2025-04-01T04:36:05.896865
|
{
"authors": [
"mentos1386"
],
"repo": "tjo-space/status",
"url": "https://github.com/tjo-space/status/issues/507",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
95792640
|
support custom actions and controllers
resolves #20
resolves #21
Hi! Nice PR! How long before a merge?
Hi @Benew , I'll try to merge this week.
Is there a way to create custom action validation rules, or is this still in progress?
Any news on this feature?
|
gharchive/pull-request
| 2015-07-18T05:59:51 |
2025-04-01T04:36:05.898670
|
{
"authors": [
"Benew",
"Cornik34",
"jaime-franco",
"tjwebb"
],
"repo": "tjwebb/sails-permissions",
"url": "https://github.com/tjwebb/sails-permissions/pull/90",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1093005861
|
Crash on SSL EOF / Protocol Violation
Observed when running on a public network. Version 0.9a1
Jan 03 16:24:16 bluetak taky[924128]: DEBUG:Persistence:Updating tracking for: <Event uid="ANDROID-357536083950909" etype="a-f-G-U-C" time="2022-01-03 21:24:15.469000"> (ttl: 374)
Jan 03 16:25:03 bluetak taky[924128]: INFO:COTServer:New ssl client from 89.xxx.xxx.xxx:50082
Jan 03 16:25:13 bluetak taky[924128]: INFO:SocketTAKClient@89.xxx.xxx.xxx:50082:Socket disconnect: EOF occurred in violation of protocol (_ssl.c:1131)
Jan 03 16:25:13 bluetak taky[924128]: CRITICAL:root:Unhandled exception
Jan 03 16:25:13 bluetak taky[924128]: Traceback (most recent call last):
Jan 03 16:25:13 bluetak taky[924128]: File "/usr/local/lib/python3.8/dist-packages/taky/cot/__main__.py", line 99, in main
Jan 03 16:25:13 bluetak taky[924128]: cot_srv.loop()
Jan 03 16:25:13 bluetak taky[924128]: File "/usr/local/lib/python3.8/dist-packages/taky/cot/server.py", line 282, in loop
Jan 03 16:25:13 bluetak taky[924128]: self.client_disconnect(client, "SSL Handshake timeout")
Jan 03 16:25:13 bluetak taky[924128]: File "/usr/local/lib/python3.8/dist-packages/taky/cot/server.py", line 223, in client_disconnect
Jan 03 16:25:13 bluetak taky[924128]: client = self.clients.pop(client.sock)
Jan 03 16:25:13 bluetak taky[924128]: KeyError: <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>
Looks like this has happened more than once...
Jan 02 00:13:53 bluetak taky[883092]: INFO:COTServer:New ssl client from 23.xxx.xxx.xxx:56930
Jan 02 00:13:53 bluetak taky[883092]: INFO:SocketTAKClient@23.xxx.xxx.xxx:56930:Socket disconnect: [SSL: PEER_DID_NOT_RETURN_A_CERTIFICATE] peer did not return a certificate (_ssl.>
Jan 02 00:13:54 bluetak taky[883092]: INFO:COTServer:New ssl client from 23.xxx.xxx.xxx:58686
Jan 02 00:14:04 bluetak taky[883092]: INFO:SocketTAKClient@23.xxx.xxx.xxx:58686:Socket disconnect: EOF occurred in violation of protocol (_ssl.c:1131)
Jan 02 00:14:04 bluetak taky[883092]: CRITICAL:root:Unhandled exception
Jan 02 00:14:04 bluetak taky[883092]: Traceback (most recent call last):
Jan 02 00:14:04 bluetak taky[883092]: File "/usr/local/lib/python3.8/dist-packages/taky/cot/__main__.py", line 99, in main
Jan 02 00:14:04 bluetak taky[883092]: cot_srv.loop()
Jan 02 00:14:04 bluetak taky[883092]: File "/usr/local/lib/python3.8/dist-packages/taky/cot/server.py", line 282, in loop
Jan 02 00:14:04 bluetak taky[883092]: self.client_disconnect(client, "SSL Handshake timeout")
Jan 02 00:14:04 bluetak taky[883092]: File "/usr/local/lib/python3.8/dist-packages/taky/cot/server.py", line 223, in client_disconnect
Jan 02 00:14:04 bluetak taky[883092]: client = self.clients.pop(client.sock)
Jan 02 00:14:04 bluetak taky[883092]: KeyError: <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>
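A minimal standalone sketch of one possible defensive fix, not the actual taky code: the traceback suggests the same socket can be disconnected twice (an EOF racing the SSL handshake timeout), so popping from the clients dict with a default instead of indexing avoids the KeyError.
class ClientTable:
    # Sketch only: track clients by socket and make disconnect idempotent.
    def __init__(self):
        self.clients = {}

    def add(self, sock, client):
        self.clients[sock] = client

    def client_disconnect(self, sock, reason=""):
        client = self.clients.pop(sock, None)  # default avoids KeyError
        if client is None:
            return  # already removed by another code path
        print(f"disconnected {client}: {reason}")

table = ClientTable()
table.add("sock-1", "SocketTAKClient@89.x.x.x")
table.client_disconnect("sock-1", "EOF occurred in violation of protocol")
table.client_disconnect("sock-1", "SSL Handshake timeout")  # no crash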
|
gharchive/issue
| 2022-01-04T04:42:44 |
2025-04-01T04:36:05.923304
|
{
"authors": [
"tkuester"
],
"repo": "tkuester/taky",
"url": "https://github.com/tkuester/taky/issues/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
271675233
|
Cython version leaks all handles
In the Cython version, the handles are closed in __del__ both for DTraceConsumer and DTraceContinuousConsumer (used by DTraceConsumerThread). However, these are extension types, and Cython does not use __del__ for extension types (for whatever reason), so they will never be called (the explicit del self.consumer in DTraceConsumerThread's destructor, which is called as it's a normal Python object, just ensures the DTraceContinuousConsumer gets deallocated, but the consumer's __del__ still won't be called). I believe using __dealloc__ for the two extension classes would work (though handle must be checked for being non-null first).
This has also exposed a (minor) potential issue: with Cython, __init__ can end up being called multiple times, so technically handle could be initialised twice, leaking the old return from dtrace_open. To be truly correct, either __init__ should guard against this, or __cinit__ should be used instead, which will only ever be called once.
In fact it's even worse for DTraceContinuousConsumer, as that only calls dtrace_close in __del__, so the script will keep running (until the Python process exits).
Thanks for reporting these findings... (y)
You are right, we will need to look into this and properly clean up after we are done consuming. Did you by any chance already make the needed changes? If so, can you open a pull request?
Took a stab at this, based on your proposals. See: https://github.com/tmetsch/python-dtrace/commit/8fdd42537affe1f44b35b45a092ea91dc8d46b2f
Great, that looks like what I was thinking of, except it's __dealloc__, not __delalloc__.
|
gharchive/issue
| 2017-11-07T01:04:00 |
2025-04-01T04:36:06.012995
|
{
"authors": [
"jrtc27",
"tmetsch"
],
"repo": "tmetsch/python-dtrace",
"url": "https://github.com/tmetsch/python-dtrace/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
503469830
|
Install process doesnt support latest version of terraform (0.12.x)
Micro services
Spring boot Applications:
Admin Service
Auth Service
Asset Service
Compliance Service
Statistics Service
Notification Service
Rule Engine
Rules
ETL
Webapp
OS Type: Windows/Linux/MacOS
Java version:
1.9
1.8
1.7
Summary
Install instructions say install latest version of terraform. Latest version of terraform is 0.12.x
Terraform 0.12.x introduced change to structure of resource blocks such that all tf files need to be upgraded to 0.12 version. (see https://www.terraform.io/upgrade-guides/0-12.html )
So either the pacbot install instructions should say "install terraform v 0.11.11" or the tf files need to be upgraded to be compatible with syntax employed by terraform v0.12.x
Reproduce steps
run install using terraform version 0.12.x
Expected Results
terraform to run without any issues
Actual Results
terraform plan fails
Terraform latest version support is under development and will be released soon.
great - maybe update the install instructions to mention that 0.11.11 is the latest version of terraform currently supported?
as part of the 2.0 release the latest version will be available
|
gharchive/issue
| 2019-10-07T13:57:03 |
2025-04-01T04:36:06.025539
|
{
"authors": [
"ajb1967",
"sajeer-nooh",
"santhoshigorle"
],
"repo": "tmobile/pacbot",
"url": "https://github.com/tmobile/pacbot/issues/337",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
117660709
|
Add overflow menu with a change reminder option when the action activ…
…ity is fired from a notification
:+1:
|
gharchive/pull-request
| 2015-11-18T19:23:02 |
2025-04-01T04:36:06.073910
|
{
"authors": [
"Revenaunt",
"bradmontgomery"
],
"repo": "tndatacommons/android-app",
"url": "https://github.com/tndatacommons/android-app/pull/151",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
99202663
|
Tests
Updated model tests
:+1:
|
gharchive/pull-request
| 2015-08-05T13:16:17 |
2025-04-01T04:36:06.074634
|
{
"authors": [
"bradmontgomery",
"damianurbanczyk"
],
"repo": "tndatacommons/android-app",
"url": "https://github.com/tndatacommons/android-app/pull/71",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
702569571
|
Problems with MacOS
Hello,
problems with pip installation on a MacOS:
MacUsers-MacBook-Pro:Sandbox mac_user$ pip3 install imgio
Requirement already satisfied: imgio in /usr/local/lib/python3.8/site-packages/imgio-0.5.6-py3.8.egg (0.5.6)
Requirement already satisfied: numpy>=1.16 in /usr/local/lib/python3.8/site-packages (from imgio) (1.19.2)
Requirement already satisfied: piexif>=1.1.3 in /usr/local/lib/python3.8/site-packages (from imgio) (1.1.3)
Collecting imread>=0.7.4
Using cached imread-0.7.4.tar.gz (151 kB)
Building wheels for collected packages: imread
Building wheel for imread (setup.py) ... error
ERROR: Command errored out with exit status 1:
...
Complete output (124 lines):
running bdist_wheel
running build
...
running build_ext
building 'imread._imread' extension
creating build/temp.macosx-10.15-x86_64-3.8
creating build/temp.macosx-10.15-x86_64-3.8/imread
creating build/temp.macosx-10.15-x86_64-3.8/imread/lib
...
imread/lib/_bmp.cpp:108:12: error: calling a private constructor of class 'std::__1::unique_ptr<Image, std::__1::default_delete<Image> >'
return output;
^
/Applications/Xcode_10_3.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/memory:2532:3: note: declared private here
unique_ptr(unique_ptr&);
^
1 warning and 1 error generated.
error: command 'clang' failed with exit status 1
Thanks for the bug report! Can you confirm that pip3 install imread gives the same error? If so, then the issue is with imread (https://github.com/luispedro/imread) and should be reported there.
Yep, in fact it is. The problem is with imread.
Thanks!
|
gharchive/issue
| 2020-09-16T08:34:11 |
2025-04-01T04:36:06.097797
|
{
"authors": [
"toaarnio",
"vitasam"
],
"repo": "toaarnio/imgio",
"url": "https://github.com/toaarnio/imgio/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
89176833
|
[Typo] The more processes, the better?
File: /process_basic/the_more_the_better.md (line 18)
Original: 我们想想进程数应该等于CPU数,但是如果进程有阻塞呢?这是是应该提高进程数增加并行数的。
Correction: 我们想想进程数应该等于CPU数,但是如果进程有阻塞呢?这时是应该提高进程数增加并行数的。(the typo 这是是 should read 这时是, "at that time")
Nice catch @ilcc :+1:
|
gharchive/issue
| 2015-06-18T02:11:08 |
2025-04-01T04:36:06.116347
|
{
"authors": [
"ilcc",
"tobegit3hub"
],
"repo": "tobegit3hub/understand_linux_process",
"url": "https://github.com/tobegit3hub/understand_linux_process/issues/32",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
356194727
|
00FF00 shows red and FF0000 shows green; others work fine, like FFFF00 - yellow
over MQTT / Wheel / Ring / Node red /Apk / Openhab
My whole setup is working fine except for the following issue:
Red selection reflecting as green over ws2812B
Green selection as red
Yellow as Yellow
Blue as Blue
All effects working fine
I am using a Wemos X-ring and removed the U1 switch.
Using the current version of the firmware.
It looks like my X-ring is wired GRB and not RGB. Kindly suggest firmware changes to make it compatible with GRB.
You can change the color setting here:
https://github.com/toblum/McLighting/blob/172d93443d031d89a62b87a874ec712098fd494c/Arduino/McLighting/McLighting.ino#L98 if you're using WS2812FX
or here:
https://github.com/toblum/McLighting/blob/172d93443d031d89a62b87a874ec712098fd494c/Arduino/McLighting/McLighting.ino#L73 if you're using NeoAnimationFX
Regards
Tobias
|
gharchive/issue
| 2018-09-01T11:18:09 |
2025-04-01T04:36:06.124429
|
{
"authors": [
"sujitrp",
"toblum"
],
"repo": "toblum/McLighting",
"url": "https://github.com/toblum/McLighting/issues/226",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
255369282
|
Note in README about sort-lines
todo.txt format is made to be sorted. We should have a note about using this module. https://atom.io/packages/sort-lines
Side note, I can't believe that atom doesn't have this built in.
|
gharchive/issue
| 2017-09-05T18:36:11 |
2025-04-01T04:36:06.147646
|
{
"authors": [
"evanp",
"karbassi"
],
"repo": "todotxt/language-todotxt",
"url": "https://github.com/todotxt/language-todotxt/issues/21",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
119468608
|
use API v9/me/experiments for obm
return is the same structure as obm in v8/me
when should we GET the request? each time we get /me data?
|
gharchive/issue
| 2015-11-30T10:57:14 |
2025-04-01T04:36:06.186022
|
{
"authors": [
"refiito",
"tanel"
],
"repo": "toggl/toggldesktop",
"url": "https://github.com/toggl/toggldesktop/issues/1700",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2090784284
|
[Snyk] Upgrade @tokens-studio/tokens from 0.0.23 to 0.0.24
This PR was automatically created by Snyk using the credentials of a real user. Snyk has created this PR to upgrade @tokens-studio/tokens from 0.0.23 to 0.0.24.
As this is a private repository, Snyk-bot does not have access. Therefore, this PR has been created automatically, but appears to have been created by a real user.
:sparkles: Snyk has automatically assigned this pull request, set who gets assigned.
:information_source: Keep your dependencies up-to-date. This makes it easier to fix existing vulnerabilities and to more quickly identify and fix newly disclosed vulnerabilities when they affect your project.
The recommended version is 1 version ahead of your current version.
The recommended version was released a month ago, on 2023-12-10.
Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open upgrade PRs.
https://github.com/tokens-studio/style-dictionary-configurator/pull/66 done here
|
gharchive/pull-request
| 2024-01-19T15:28:22 |
2025-04-01T04:36:06.201348
|
{
"authors": [
"SorsOps",
"jorenbroekema"
],
"repo": "tokens-studio/style-dictionary-configurator",
"url": "https://github.com/tokens-studio/style-dictionary-configurator/pull/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
324312711
|
Are proto2 extensions supported?
https://developers.google.com/protocol-buffers/docs/proto#extensions
I have an existing proto2 protocol definition which uses extensions, and I'm unable to understand how to work with such a protocol using prost. Am I looking in the wrong direction, or is this feature not supported?
I found a workaround. This protocol definition:
syntax = "proto2";
message Header {
required uint64 msg_id = 1;
extensions 100 to max;
}
message FooMessage {
required string some_parameter = 1;
extend Header {
optional FooMessage msg_foo = 101;
}
}
message BarMessage {
required string another_parameter = 1;
extend Header {
optional BarMessage msg_bar = 102;
}
}
can be rewritten as this:
syntax = "proto2";
message FooMessage {
required string some_parameter = 1;
}
message BarMessage {
required string another_parameter = 1;
}
message Header {
required uint64 msg_id = 1;
optional FooMessage msg_foo = 101;
optional BarMessage msg_bar = 102;
}
This does not change the binary format, so both definitions are equal. And the latter one works out of the box with prost.
Glad you found a workaround. Extensions aren't supported, since I haven't personally needed them, and they are more-or-less deprecated. I'm not against adding support in prost, but obviously it will be low down on the priority list. If you want to hack on it then please feel free.
If you want to hack on it then please feel free.
I already have a sufficient workaround and it is very unlikely that I will need this feature again. So, you may close this issue if you want.
Keep it open :-)
Custom options are implemented as extensions, even in proto3: https://developers.google.com/protocol-buffers/docs/proto3#custom_options
Seems conceivable that prost could allow customizing generated code via options.
I find myself in need of extensions for the reason mentioned above. :)
I'm working on a Rust port of https://github.com/Xorlev/grpc-jersey using tonic and tower-web, but I'll need to parse options:
service TestService {
rpc TestMethod (TestRequest) returns (TestResponse) {
option (google.api.http).get = "/users/{string}/{uint32}/{nested_type.field_f64}";
}
}
If you're still open to the idea @danburkert I'll start up a draft PR. I suspect it should be reasonably similar to Java protobufs + prost-build would gain an ExtensionRegistry argument such that the parsed MethodDescriptorProto would have an extensions field populated.
It wouldn't be a bad time to drag #117 along for the ride either, but they can be separate.
I'm also interested in this.
@Xorlev I think it would require changing the generated code to carry around unknown fields, so that they may be accessed by looking them up via extension number. Or are you thinking of something else?
In the Java version of protos, extension registries are declared during deserialization, and those fields are eagerly extracted out. If an extension isn't recognized, it goes to unknown fields. This has the advantage of checking validity at deserialization time rather than extension access time.
They're mechanisms that work well together but don't necessarily need to both be implemented.
@Xorlev did you manage to start working on this? we're porting a lot of gRPC stuff to Rust at work and need this so just wanted to make sure we aren't duplicating work. happy to help as well
Proto2 Extensions are also required by the GTFS (General Transit Feed Specification) realtime spec. For testing purposes, I will try to use the workaround proposed by @im-0, but it would be great if we could follow the official way for our gtfs-rt extension for our project Dystonse.
@Xorlev I'd love to understand your proposed solution better, so I could try to implement it. We need the google.http.api extension at work.
One thing in particular I didn't understand was this:
In the Java version of protos, extension registries are declared during
deserialization
What deserialization are you referring to? The deserialization of the literal value on the right-hand-side of option(my_option) = { foo: "bar"};
|
gharchive/issue
| 2018-05-18T08:14:55 |
2025-04-01T04:36:06.218862
|
{
"authors": [
"Xorlev",
"amilkov3",
"danburkert",
"im-0",
"imalsogreg",
"kamalmarhubi",
"lenaschimmel",
"tiziano88"
],
"repo": "tokio-rs/prost",
"url": "https://github.com/tokio-rs/prost/issues/100",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
683945865
|
The numbering in the original text and the heading numbers in the translation do not match
In the original, sections are numbered as LESSON 1, Lecture 1 and so on, but Sphinx numbers headings automatically from the structure, so it may be confusing when comparing against the original.
Some options:
Ignore it
Prefix each heading with LESSON 1, Lecture 1, etc.
Restructure the document so that it matches the heading numbers of the original
It's fine to handle this as: ask @drillan to do it once he has spare time.
|
gharchive/issue
| 2020-08-22T07:03:32 |
2025-04-01T04:36:06.243813
|
{
"authors": [
"drillan",
"shinseitaro"
],
"repo": "tokyoquantopian/quantopian-doc-ja",
"url": "https://github.com/tokyoquantopian/quantopian-doc-ja/issues/74",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
714237121
|
Quantopian-specific APIs in the sample code
Are the Quantopian-specific libraries and APIs used in the sample code explained anywhere?
Also, it should be made explicit whether something is a general library or Quantopian-specific. For example, when an unfamiliar method argument appears in the code, readers won't know whether to look it up within Quantopian or on the internet.
Quantopian API reference
https://www.quantopian.com/docs/api-reference/overview
|
gharchive/issue
| 2020-10-04T05:17:43 |
2025-04-01T04:36:06.245428
|
{
"authors": [
"daisukeuematsu",
"shinseitaro"
],
"repo": "tokyoquantopian/quantopian-doc-ja",
"url": "https://github.com/tokyoquantopian/quantopian-doc-ja/issues/81",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
659773631
|
[scheduled] Auto Update Dependencies
/retest
This is scheduled PR by GitHub Action.
Produced via:
./hack/update-deps.sh --upgrade && ./hack/update-codegen.sh
/rebase
/rebase
/rebase
It does not trigger GitHub Action test 😢
/rerun
|
gharchive/pull-request
| 2020-07-18T00:38:24 |
2025-04-01T04:36:06.263404
|
{
"authors": [
"tom24d"
],
"repo": "tom24d/eventing-dockerhub",
"url": "https://github.com/tom24d/eventing-dockerhub/pull/85",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
95839071
|
Split creating an EGL context in two parts
In order to properly create an X11 window, we need to know the visual ID of the pixel format chosen by EGL. So we split the EGL creation process in two parts.
EglContext::new returns an EglContextPrototype, which can be used to retrieve the native visual ID. This prototype can then be turned into a real context by passing a window.
I checked on my machine, and this PR is indeed working on windows.
|
gharchive/pull-request
| 2015-07-18T16:34:50 |
2025-04-01T04:36:06.265784
|
{
"authors": [
"tomaka"
],
"repo": "tomaka/glutin",
"url": "https://github.com/tomaka/glutin/pull/523",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
147215000
|
Fixed context creation on Windows
The WGL context was never created and the function CreateContextAttribsARB attempted to point to one. This should fix the issue. Also the ShareLists part of the code is completely redundant as share can only ever really be null.
Closes tomaka/glium#1450
I don't think that's the right fix. As far as I know CreateContextAttribsARB doesn't need a current context. The fix probably works for you because you end up using the regular CreateContext.
|
gharchive/pull-request
| 2016-04-10T11:11:19 |
2025-04-01T04:36:06.267462
|
{
"authors": [
"0utcast",
"tomaka"
],
"repo": "tomaka/glutin",
"url": "https://github.com/tomaka/glutin/pull/760",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
807956550
|
Added WireMock.jsonResponse factory methods
The methods are similar to ResponseDefinitionBuilder.jsonResponse and allow preparing a ResponseDefinitionBuilder with provided body and status.
The introduced methods simplify stub generation for statuses other than 200:
final Response errorResponse = new ErrorResponse("SOME_CODE", "Some error description");
final StubMapping stubMapping = stubFor(post(urlPathMatching(URL)).willReturn(jsonResponse(errorResponse, 401)));
@tomakehurst is there a chance to merge this PR please?
|
gharchive/pull-request
| 2021-02-14T13:00:43 |
2025-04-01T04:36:06.269221
|
{
"authors": [
"mih-kopylov"
],
"repo": "tomakehurst/wiremock",
"url": "https://github.com/tomakehurst/wiremock/pull/1428",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
83800494
|
Feature request: Invisible marker with label only
Hi @tombatossals , it's me again, the static map enthusiast.
I'd like to request the above feature as it can be useful: say I want to show the nearby points of interest; using the current markers, it gets a little too clustered. It would be great if I could set the markers to invisible and just show the label.
Thanks in advance.
This is already possible, simply use a "void" style on your marker:
var marker = {
lat: xxx,
lon: yyy,
style: {}, // No style to display nothing except popup
label: {
message: 'hello !',
show: true
}
};
worked for me...
@style-x7 Does @claustres solution match your needs? I'm closing this issue. In case feel free to comment and re-open it again. Thx
|
gharchive/issue
| 2015-06-02T02:40:31 |
2025-04-01T04:36:06.294325
|
{
"authors": [
"claustres",
"juristr",
"style-x7"
],
"repo": "tombatossals/angular-openlayers-directive",
"url": "https://github.com/tombatossals/angular-openlayers-directive/issues/137",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
32392620
|
NUI as a static library
I'm hoping to build and compile NUI as a static library to eventually bind to in Xamarin.iOS.
Right now I'm having issues linking the CoreParse library as a subproject. I'm getting the error:
target specifies product type 'com.apple.product-type.framework', but there's no such product type for the 'iphoneos' platform
Would it be possible to build a universal NUI library with Cocoapods?
NUI can now be generated as a static library, so I think we can close this issue.
@timbodeit ?
|
gharchive/issue
| 2014-04-28T19:52:40 |
2025-04-01T04:36:06.296359
|
{
"authors": [
"PabloLerma",
"kylesmyth"
],
"repo": "tombenner/nui",
"url": "https://github.com/tombenner/nui/issues/243",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
812031822
|
chore(package.json): fix peer dependency with react@17
I'm getting this error using npm@7 and I fixed your package, because your package works properly with react@17:
npm ERR! Found: react@17.0.1
npm ERR! node_modules/react
npm ERR! react@"17.0.1" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer react@"^15.0.0 || ^16.0.0" from react-google-maps@9.4.5
npm ERR! node_modules/react-google-maps
npm ERR! react-google-maps@"9.4.5" from the root project
Hello, any updates on this? I'm facing the same issue.
+1 Can we get an update here?
|
gharchive/pull-request
| 2021-02-19T13:39:22 |
2025-04-01T04:36:06.307000
|
{
"authors": [
"andrewfritz86",
"navarroaxel",
"shye0000"
],
"repo": "tomchentw/react-google-maps",
"url": "https://github.com/tomchentw/react-google-maps/pull/1068",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
782648849
|
For repeating messages, skip the delivery if the last delivery is in the last N messages
~g edit last repeat=hours:2 skip_if=5
A command (something like) this would skip delivering the new message if the last delivery is in the last 5 messages in the channel
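A rough sketch of how that check could work, assuming a discord.py-style channel.history API; the function name and parameters are made up for illustration.
async def should_skip_delivery(channel, last_delivery_id, skip_if):
    # skip_if=0 always delivers; skip_if=N skips when the previous delivery
    # is still among the last N messages of the channel.
    if skip_if <= 0:
        return False
    async for message in channel.history(limit=skip_if):
        if message.id == last_delivery_id:
            return True
    return False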
@stitics Let me know if you think this would be useful
So right now, it's working as if skip_if=1
This would also allow skip_if=0 which would always deliver the new message
That would be amazing.
And now I scrolled to see that it exists already. Cool.
Oh, I misunderstood. Yes, that would be cool and useful. In the early part of today sometimes it's every other message.
I'm thinking that scheduling a repeating message is getting too complicated
Maybe we schedule the message as a normal message and then have a separate ~giggle repeat <msg_id> <options> command
|
gharchive/issue
| 2021-01-09T17:18:31 |
2025-04-01T04:36:06.364144
|
{
"authors": [
"stitics",
"tomgigler"
],
"repo": "tomgigler/GiggleMe",
"url": "https://github.com/tomgigler/GiggleMe/issues/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
941428577
|
move to new syntax
change existing code to the new syntax (2.0).
use namespaces for all
namespace RV {
// constants, macros
}
new syntax is implemented. All is under
ns riscv {
...
}
|
gharchive/issue
| 2021-07-11T11:11:02 |
2025-04-01T04:36:06.366014
|
{
"authors": [
"tomhea"
],
"repo": "tomhea/flip-jump",
"url": "https://github.com/tomhea/flip-jump/issues/93",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1869235527
|
Explore adding an FAQ section
This has come up in multiple discussions over on toml-lang/toml, as a suggestion; for placing things that do not belong in the specification itself.
A few quick thoughts worth noting down here:
I think this:
should NOT contain broad design guidance (eg: when to use booleans vs special "string" values).
should only provide guidance in the specific context of TOML (eg: what to use instead of None).
may also be a good place to provide information about why certain design choices were made (eg: no pragma)
The least effort way to do things would likely be to have minimal text in each answer and link to (or quote from) already provided responses on the TOML issue tracker.
should NOT contain broad design guidance
Yeah, It's probably for the best to keep the scope small and focus on clarifying what TOML is and isn't. Opinions that aren't actually enforced by the language or part of its design principles should probably be left out.
The obvious Qs that come to mind:
Can I use TOML as a drop-in replacement for JSON or YAML?
Why is there no null in TOML?
What can I use to represent absence of a value instead of null?
|
gharchive/issue
| 2023-08-28T08:10:40 |
2025-04-01T04:36:06.374294
|
{
"authors": [
"DominoPivot",
"pradyunsg"
],
"repo": "toml-lang/toml.io",
"url": "https://github.com/toml-lang/toml.io/issues/70",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
598304797
|
Kotlin dsl and AGP and b2d-light version update
Hi,
first of all: thank you a lot for this setup. It produces a much better and more modern result than the one from libgdx.
I would like to ask three things, if possible:
android gradle plugin can be increased to 3.6.0 from 3.5.0 in my opinion
box2d light version can be increased from 1.4 to 1.5
when I select "kotlin" as a language it would be great if it produces build.gradle.kts and settings.gradle.kts files
I think nothing of that is super urgent but would be great if that gets changed. Also, is there a reason why you did not use the buildSrc folder approach and instead you define the versions in gradle.properties file? This is just a question out of interest, why you choose that approach over the other :)
I provided an example project with kotlin dsl for android and desktop which was initially setup with your setup jar and then converted to kotlin gradle.dsl to my best knowledge.
Also, converted the launcher and game class to kotlin and moved them to the kotlin src folder.
Dark Matter.zip
I'll take a look; I'm somewhat suspicious of Kotlin platform launchers because so many issues have been reported on the Discord related to people translating DesktopLauncher.java to DesktopLauncher.kt . If it can be generated with an optimal setup from the start, that might help avoid those errors. Having the Game class in Kotlin would be better than having it in Java, yeah.
I'll update Android Gradle plugin and Box2D right away.
I actually like the gradle.properties approach, since it keeps versions centralized in one file, but that wasn't my idea; I forked this from czyzby's gdx-setup, which used gradle.properties for versions.
@tommyettinger I think we can close this issue. If you are interested in kotlin dsl gradle files then please refer to this project: https://github.com/Quillraven/Dark-Matter
I started with your setup (as mentioned in Readme.md ;)) and adjusted the gradle files accordingly. I was able to successfully build the desktop jar and deploy on android.
From time to time you get the "DesktopLauncher blabla" missing when you run it. But I guess this is somehow a bug related to kotlin or intellij or gradle or the combination of them. Simply running the application again fixes it, so I guess it is sometimes a timing issue with building and running.
OK, I'll close then. I'm just not quite comfortable with the interactions between Kotlin and the many parts of a Gradle build, so I don't feel like I could reasonably maintain this kind of code myself. Could I put a link to your Dark-Matter project in the Readme.md so people who want to use Kotlin can see more fully how you did that?
If you want, you can link it of course - thanks!
|
gharchive/issue
| 2020-04-11T16:42:42 |
2025-04-01T04:36:06.385014
|
{
"authors": [
"Quillraven",
"tommyettinger"
],
"repo": "tommyettinger/gdx-liftoff",
"url": "https://github.com/tommyettinger/gdx-liftoff/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
663341605
|
Mixing tabs and spaces
There was a weird mix of tabs and spaces for the section about importing the library, so I fixed that and used only spaces since that's what the code examples were doing (with 2 spaces indents).
Ah, yeah it's annoying when that happens. Thanks for catching this!
|
gharchive/pull-request
| 2020-07-21T22:12:29 |
2025-04-01T04:36:06.386291
|
{
"authors": [
"payne911",
"tommyettinger"
],
"repo": "tommyettinger/jbump",
"url": "https://github.com/tommyettinger/jbump/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
88246418
|
Fixed django 1.8 allow_migrate router signature, documentation
The signature of allow_migrate has changed in django 1.8. See:
https://docs.djangoproject.com/en/1.8/topics/db/multi-db/#allow_migrate
With the current implementation, the migrations do not work as expected. I ran into issues when I had a shared app with a migration that had a RunPython operation. That operation was executed on the tenant on creation as well as on migrate_schemas and failed obviously because of missing database tables.
With the small update it runs fine again.
I also removed some references to the removed sync_schemas commands in documentation.
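For reference, an illustrative router using the Django >= 1.8 signature; this is a sketch, not django-tenants' actual router, and the app labels are hypothetical.
class ExampleTenantRouter:
    # Django >= 1.8 passes (db, app_label, model_name=None, **hints) instead
    # of the older (db, model, **hints) form referenced in the linked docs.
    SHARED_APPS = {"contenttypes", "auth"}  # hypothetical shared app labels

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label in self.SHARED_APPS:
            return db == "default"
        return None  # defer to other routers / Django's default behaviour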
Thanks for this.
It's on PyPI now so you can pip install it.
Wow, that was quick! Thanks a lot! :+1: Great project btw!
|
gharchive/pull-request
| 2015-06-14T20:45:25 |
2025-04-01T04:36:06.405951
|
{
"authors": [
"ivome",
"tomturner"
],
"repo": "tomturner/django-tenants",
"url": "https://github.com/tomturner/django-tenants/pull/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
445591903
|
Unable to change paths in cobertura report
Getting an issue where our application is compiled within docker and we're trying to export the cobertura xml file from it.
Since the filename attribute is the full path to the file, we need to do a search and replace.
If the root of the project were added to sources/source and relative paths were used for filename, it would be a lot easier to manage.
I agree, this would be a nice enhancement. Instead of doing it only for the cobertura format, it would be good to do it for all other formats except TeamCity. We could add one more option, --format --relative-code-path
could be similar to https://github.com/tonerdo/coverlet/issues/408?
and https://github.com/tonerdo/coverlet/issues/408#issuecomment-491502234 could fix?
Seems like Coverlet was using relative path earlier but @tonerdo changed it.
#295 (Comment)
This happens because I switched to using absolute paths. #356 re-includes the source element
I think, change was done as part of #260
https://github.com/tonerdo/coverlet/commit/de558c5f62b46d30fb2ba15231e50fc7f41877f7?diff=unified
There is an open issue for a similar request for the lcov format, #263
Jenkins doesn't like absolute paths; I would like to see a switch to change to relative paths.
FYI https://github.com/tonerdo/coverlet/issues/408#issuecomment-497674737
In #614 we changed the cobertura report back to using relative paths so that the jenkins plugin is working again and the source link option is still supported.
The difference is that currently the root dir of the OS is used in sources/source and not the root of the project. Most likely this could be changed but I'm not really sure how to determine the root of the project? To be precise, I'm not sure about the definition of project root. Is it the .csproj files, the .sln file or even the repository?
@MarcoRossignoli What do you think?
@daveMueller I think that we shouldn't search for the "root" of a project; the algorithm should simply try to find the longest shared absolute path (per drive on Windows) and add it to sources, and the relative parts will go inside the file attribute.
The report is "unrelated" to the project structure and we don't know how a project will be set up: it could be created at runtime, it could or couldn't have a sln, and so on... the report only wants to "locate" source files.
Something similar to a trie structure, where the absolute part stops at the first fork and the letters are folders.
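A small sketch of that idea, written in Python for brevity even though Coverlet itself is C#: take the longest shared absolute prefix as the sources entry and emit the remainder of each path as the relative filename attribute.
import os

def split_sources(absolute_paths):
    # os.path.commonpath stands in for the longest-shared-prefix search.
    root = os.path.commonpath(absolute_paths)
    return root, [os.path.relpath(p, root) for p in absolute_paths]

root, filenames = split_sources([
    "/repo/src/App/Program.cs",
    "/repo/src/Lib/Helper.cs",
])
print(root)       # /repo/src  -> goes into sources/source
print(filenames)  # ['App/Program.cs', 'Lib/Helper.cs'] -> filename attributes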
@MarcoRossignoli I got it. Is it OK if I work on it?
Yes, thanks!
|
gharchive/issue
| 2019-05-17T19:47:17 |
2025-04-01T04:36:06.422088
|
{
"authors": [
"EricStG",
"MarcoRossignoli",
"ViceIce",
"daveMueller",
"sasivishnu"
],
"repo": "tonerdo/coverlet",
"url": "https://github.com/tonerdo/coverlet/issues/413",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
391206865
|
Fix gem replacer: cp -T option for macosx
Related to #88 For some reason, gems are not found with the approach. Digging into it.
Fixed by using rsync to handle copy https://github.com/tongueroo/jets/commit/90d6393b8e4ae35bef0b2a4d5b648779abfb132a
|
gharchive/issue
| 2018-12-14T17:50:56 |
2025-04-01T04:36:06.424708
|
{
"authors": [
"tongueroo"
],
"repo": "tongueroo/jets",
"url": "https://github.com/tongueroo/jets/issues/89",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1009771325
|
Flag -q shows Check_Results "WARNING" additionally instead of only "FAIL"
The -q flag doesn't show only the "FAIL"s but also the WARNINGs from the exclusions when using -w whitelist.txt.
Is this by intention?
Hi @gregorschulz, yes, that is an expected behavior. WARNING is shown when a resource is excluded, it is just to take it into consideration.
Ok, but then I would suggest to adjust the documentation for -q:
https://github.com/toniblyx/prowler#show-or-log-only-fails
Hi @toniblyx & @gregorschulz !
I did a PR https://github.com/toniblyx/prowler/pull/890 updating the documentation.
Tell me what you think to see if it's clearer that way.
Cheers!
Thanks for reporting it @gregorschulz and for the enhancements @w0rmr1d3r!!
|
gharchive/issue
| 2021-09-28T13:42:36 |
2025-04-01T04:36:06.430634
|
{
"authors": [
"gregorschulz",
"toniblyx",
"w0rmr1d3r"
],
"repo": "toniblyx/prowler",
"url": "https://github.com/toniblyx/prowler/issues/884",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
353462813
|
Licence?
Hey, I plan to use this project in an (open source) project of mine so I can run something like MTR without having C dependencies. You have done a great job!
I have noticed there is no license file nor is it mentioned in the readme, would you mind adding a license so I can make sure I credit your work correctly. Thanks!
Will do! I currently have a branch with all components in separate packages for import use, if you find that handy I can already PR that.
Closed by 8214292
|
gharchive/issue
| 2018-08-23T16:38:45 |
2025-04-01T04:36:06.435090
|
{
"authors": [
"meyskens",
"tonobo"
],
"repo": "tonobo/mtr",
"url": "https://github.com/tonobo/mtr/issues/1",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
97463937
|
Failure to install HTTP::Server::Async via panda
I could not install HTTP::Server::Async because Pluggable installation failed on my development linux vm box. Any idea how to solve it?
azawawi@azawawi-vm:~/$ panda install Pluggable
==> Fetching Pluggable
==> Building Pluggable
Compiling lib/Pluggable.pm6 to mbc
===SORRY!===
Cannot call method 'match' on a null object
build stage failed for Pluggable: Failed building lib/Pluggable.pm6
in method throw at /home/azawawi/.rakudobrew/moar-nom/install/share/perl6/runtime/CORE.setting.moarvm:1
in method install at lib/Panda.pm:128
in method resolve at lib/Panda.pm:218
in sub MAIN at /home/azawawi/.rakudobrew/bin/../moar-nom/install/share/perl6/site/bin/panda:20
in block <unit> at /home/azawawi/.rakudobrew/bin/../moar-nom/install/share/perl6/site/bin/panda:87
Failure Summary
----------------
Pluggable
*build stage failed for Pluggable: Failed building lib/Pluggable.pm6
It is working again. Must be something with the latest rakudo. If I see it again, I will reopen it.
Thanks for your time :+1:
|
gharchive/issue
| 2015-07-27T13:29:19 |
2025-04-01T04:36:06.437033
|
{
"authors": [
"azawawi"
],
"repo": "tony-o/perl6-http-server-async",
"url": "https://github.com/tony-o/perl6-http-server-async/issues/13",
"license": "artistic-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
118277830
|
shell_command_after is without effect
After digging into the code, I've found a reference to a shell_command_after param, but it seems it is never used. Am I missing something?
@lmignon :
Sorry for the delay.
Correct, you found a missing feature. This is meant to run operations / scripts after a VCS repo is synced.
For example, if syncing a vim configuration on git, it could symlink a file.
It's probably best to remove the code for now though. Are you interested in making a PR?
@tony Thank you for your reply. I've tested your script but it doesn't fit my needs. I was looking for a way to manage the aggregation of a lot of pending PR/branches in a reconciled one. I've finally developed my own script.
|
gharchive/issue
| 2015-11-22T18:02:53 |
2025-04-01T04:36:06.439239
|
{
"authors": [
"lmignon",
"tony"
],
"repo": "tony/vcspull",
"url": "https://github.com/tony/vcspull/issues/17",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2480556579
|
GenerateAnnotation: For testing purposes, the data should be read using a supplier
We get equals mismatch/failing tests when the instant-strings do not match. Easiest fix is to use a fixed instant in tests. Provide an instant-supplier using the context, defaulting to "now".
To scale better if more mini-features like this come up, maybe add a "configuration" to the context declaration.
Then we could initialize contexts using the registry (stuff read from SPI), properties (stuff the user set for this generator run) and configuration (global information/suppliers that define the behavior of a run on a system level).
Or just enhance the properties and leave the configuration ...
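A language-agnostic sketch of the supplier idea, shown in Python rather than Kotlin; generate_annotation and the output format are invented for the example, the point is only that tests inject a fixed instant supplier while production code defaults to "now".
from datetime import datetime, timezone

def generate_annotation(now_supplier=lambda: datetime.now(timezone.utc)):
    # Caller-provided supplier; defaults to the current instant.
    return f'@Generated(date = "{now_supplier().isoformat()}")'

# Tests inject a fixed supplier so the generated string is deterministic:
fixed_instant = lambda: datetime(2024, 1, 1, tzinfo=timezone.utc)
assert generate_annotation(fixed_instant) == '@Generated(date = "2024-01-01T00:00:00+00:00")'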
Stupid idea ... we already can overwrite the instant. ... if a lib using this wants to do the replacement via nowSupplier -> let them handle it.
|
gharchive/issue
| 2024-08-22T11:33:39 |
2025-04-01T04:36:06.469108
|
{
"authors": [
"jangalinski"
],
"repo": "toolisticon/kotlin-code-generation",
"url": "https://github.com/toolisticon/kotlin-code-generation/issues/22",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1941692062
|
[BUG] ...
Describe the bug
Cannot edit Flags
To Reproduce
Try using '// editing flag
See error
Expected behavior
ability to add '// edit flags
Desktop (please complete the following information):
OS: Windows 11
Browser Vivaldi
Toolkit Version 3.8.1
Setting Export
Please replace the text 'REPLACE_ME_WITH_SETTINGS' below with your exported Toolkit settings. You can export these by going to the Toolkit Options page, click the Import/Export Settings button at the top right and copying the text in the modal which appears.
Note! DO NOT replace the leading and trailing ``` characters as they are required formatting characters.
[{"key":"AccountsDisplayDensity","value":"2"},{"key":"AccountsEmphasizedInflows","value":true},{"key":"AccountsEmphasizedOutflows","value":true},{"key":"AccountsStripedRows","value":true},{"key":"AccountsStripedRowsColor","value":"#fafafa"},{"key":"AccountsStripedRowsDarkColor","value":"#1e1e1f"},{"key":"AutoDistributeSplits","value":false},{"key":"AutoEnableRunningBalance","value":true},{"key":"AutomaticallyMarkAsCleared","value":false},{"key":"BetterScrollbars","value":"1"},{"key":"BottomNotificationBar","value":true},{"key":"BudgetProgressBars","value":"goals"},{"key":"BulkEditMemo","value":true},{"key":"BulkManagePayees","value":false},{"key":"CalculateIRR","value":false},{"key":"CalendarFirstDay","value":"1"},{"key":"CategoryActivityCopy","value":true},{"key":"CategoryActivityPopupWidth","value":"2"},{"key":"ChangeEnterBehavior","value":false},{"key":"ChangeMemoEnterBehavior","value":false},{"key":"CheckCreditBalances","value":true},{"key":"CheckNumbers","value":false},{"key":"ClearSelection","value":true},{"key":"CollapseInspector","value":false},{"key":"CompactAccountHeader","value":false},{"key":"CompactIncomeVsExpense","value":true},{"key":"ConfirmEditTransactionCancellation","value":false},{"key":"CreditCardEmoji","value":true},{"key":"CtrlEnterCleared","value":false},{"key":"CurrentMonthIndicator","value":true},{"key":"CustomAverageBudgeting","value":"3"},{"key":"CustomizeColourScheme","value":true},{"key":"DateOfMoney","value":true},{"key":"DaysOfBuffering","value":"6"},{"key":"DaysOfBufferingExcludeCreditCards","value":true},{"key":"DefaultCCToCleared","value":false},{"key":"DeselectTransactionsOnSave","value":false},{"key":"DisableToolkit","value":false},{"key":"DisplayTargetGoalAmount","value":"1"},{"key":"DisplayTotalMonthlyGoals","value":"show-goal-breakdown-and-income-vs-spending"},{"key":"DisplayTotalOverspent","value":true},{"key":"DisplayUpcomingAmount","value":true},{"key":"EasyTransactionApproval","value":true},{"key":"EditAccountButton","value":true},{"key":"EmphasizeNegativeLoans","value":true},{"key":"EnlargeCategoriesDropdown","value":true},{"key":"FilterCategories","value":true},{"key":"GoalIndicator","value":true},{"key":"GoalWarningColor","value":true},{"key":"GoogleFontsSelector","value":"2"},{"key":"HideAccountBalancesType","value":false},{"key":"HideAgeOfMoney","value":false},{"key":"HideClosedAccounts","value":true},{"key":"HideDebtRatio","value":false},{"key":"HideHelp","value":true},{"key":"HideReferralBanner","value":true},{"key":"HighlightNegatives","value":false},{"key":"HoveredBudgetRows","value":true},{"key":"ImportNotification","value":"1"},{"key":"IncomeVsExpenseHoverHighlight","value":true},{"key":"LargerClickableIcons","value":false},{"key":"LinkInMemo","value":false},{"key":"LiveOnLastMonthsIncome","value":"1"},{"key":"MasterCategoryRowColor","value":true},{"key":"MasterCategoryRowColorSelect","value":"#d1d1d6"},{"key":"MasterCategoryRowDarkColorSelect","value":"#969be3"},{"key":"MemoAsMarkdown","value":false},{"key":"MonthlyNotesPopupWidth","value":"2"},{"key":"NavDisplayDensity","value":false},{"key":"NotesAsMarkdown","value":false},{"key":"POSStyleCurrencyEntryMode","value":true},{"key":"Pacing","value":false},{"key":"PrintingImprovements","value":true},{"key":"PrivacyMode","value":false},{"key":"QuickBudgetWarning","value":true},{"key":"ReconcileAssistant","value":false},{"key":"ReconcileBalance","value":false},{"key":"ReconcileConfetti","value":true},{"key":"ReconciledTextColor","value":"2"},{"key":"RemovePositiveHighlight","value":false},{
"key":"RemoveZeroCategories","value":false},{"key":"ResetColumnWidths","value":true},{"key":"RightClickToEdit","value":true},{"key":"RowHeight","value":false},{"key":"RowsHeight","value":"2"},{"key":"SavingsRatio","value":"0.5"},{"key":"ScrollableEditMenu","value":false},{"key":"ShowAvailableAfterSavings","value":false},{"key":"ShowCategoryBalance","value":true},{"key":"SpareChange","value":false},{"key":"SplitTransactionAutoAdjust","value":false},{"key":"SplitTransactionAutoFillPayee","value":false},{"key":"SplitTransactionTabExpand","value":false},{"key":"SquareNegativeMode","value":false},{"key":"StealingFromFuture","value":false},{"key":"StripedBudgetRows","value":true},{"key":"SubtractUpcomingFromAvailable","value":false},{"key":"SwapClearedFlagged","value":false},{"key":"TargetBalanceWarning","value":true},{"key":"ToBeBudgetedWarning","value":false},{"key":"ToggleAccountColumns","value":true},{"key":"ToggleMasterCategories","value":true},{"key":"ToggleSplits","value":true},{"key":"ToggleTransactionFilters","value":"1"},{"key":"ToolkitReports","value":true},{"key":"UnclearedAccountHighlight","value":true},{"key":"ViewZeroAsEmpty","value":true}]
Edit flags is a native YNAB feature now. You'll need to submit a bug report to them for this.
|
gharchive/issue
| 2023-10-13T10:34:44 |
2025-04-01T04:36:06.478656
|
{
"authors": [
"janrif",
"joshmadewell"
],
"repo": "toolkit-for-ynab/toolkit-for-ynab",
"url": "https://github.com/toolkit-for-ynab/toolkit-for-ynab/issues/3256",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1152544123
|
[Feature Request] Separate tab layout in ffa and other gamemodes
Hi!
I've tested your addon and have some ideas for improvement:
1. Tab won't put a space between the prefix and the name. (Screenshot)
2. Tab does not know how to distinguish duel and FFA, and this is important because they are different battles.
3. Please add suffix support with placeholders.
Thanks!
The way to support the prefix space and suffix stuff is already a feature request mentioned in #11. I'm not sure what you mean by number 2, though.
1. is intentional, you can add the space in the prefix
3. is duplicate of #11
1 and 3 have been implemented
2 is not planned
|
gharchive/issue
| 2022-02-27T00:32:17 |
2025-04-01T04:36:06.614717
|
{
"authors": [
"Hitman477",
"iiAhmedYT",
"toppev"
],
"repo": "toppev/strike-tab",
"url": "https://github.com/toppev/strike-tab/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1122641008
|
Explicitly use ActiveSupport hash extensions instead of i18n
Before 1.9.0, i18n provided a set of hash extensions, corresponding to a subset of ActiveSupport's hash extensions. Chewy included these in 2014, seemingly to support Rails 3.2: https://github.com/toptal/chewy/commit/0ab155468dcfb4a8aaeab56a00dbbcd6494d3625
Since 2018, this has in fact been doing nothing (and Chewy has been silently falling back to ActiveSupport), since i18n's core_ext/hash changed to a refinement rather than patching Hash: https://github.com/ruby-i18n/i18n/commit/949dc641d81773977e432626427aa0f8971a1073
Remove the require of i18n and explicitly require active_support/core_ext/hash.
Fixes #832
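For illustration only, a minimal sketch of the kind of change described above (the exact requires and call sites inside Chewy may differ):
# before: relied on i18n's hash core extensions (a no-op since i18n turned them into a refinement)
# require "i18n/core_ext/hash"
# after: pull the extensions in from ActiveSupport explicitly
require "active_support"
require "active_support/core_ext/hash"
{ a: { b: 1 } }.deep_merge(a: { c: 2 }) # => { a: { b: 1, c: 2 } }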
Hey @chrisandreae 👋!
Thanks, done in https://github.com/toptal/chewy/pull/834.
@rabotyaga any chance you could release a 5.2.1 with this fix?
Hey @ericproulx 👋 !
Unfortunately not. I'd suggest upgrading to 7.2. As a workaround, you can pin the i18n gem to an older version that doesn't cause failures, i.e. any version that still includes the lib/i18n/core_ext/hash.rb file.
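For example, a hypothetical Gemfile pin (assuming, per the PR description above, that the hash extensions only shipped in i18n releases before 1.9.0):
# Gemfile: stay on an i18n release that still bundles lib/i18n/core_ext/hash.rb
gem "i18n", "< 1.9"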
@rabotyaga 5.2 -> 7.2 is a huge jump. We are running into this issue too while trying to update dependencies (including Rails and Pundit) for vulnerability patches. Is there something actually blocking a 5.2.1 release?
@kroehre 👋
You don't need to do this in one go - please take a look at the migration guide.
Is there something actually blocking a 5.2.1 release?
Yep, it's the Elasticsearch 5.x EOL date - almost 3 years ago.
|
gharchive/pull-request
| 2022-02-03T04:26:18 |
2025-04-01T04:36:06.620963
|
{
"authors": [
"chrisandreae",
"ericproulx",
"kroehre",
"rabotyaga"
],
"repo": "toptal/chewy",
"url": "https://github.com/toptal/chewy/pull/833",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1010869955
|
feat(Timeline): [FX-1093] add component
FX-1093
Description
Abstract
Added Timeline component.
Added Timeline.Row component.
How to test
Visit this PR's deploy.
Screenshots
Example
Screenshot
Default
With icons
With icons and dates
Trim last connector
Review
[x] Read CONTRIBUTING.md and Component API principles
[x] Annotate all props in component with documentation
[x] Create examples for component
[x] Ensure that deployed demo has expected results and good examples
[x] Ensure that tests pass by running yarn test
[x] Ensure that visuals tests pass by running yarn test:visual. If not - check the documentation how to fix visual tests
[x] Ensure the changed/created components have not caused accessibility issues. How to use accessibility plugin in storybook.
PR commands
List of available commands:
@toptal-bot run all - Run whole pipeline
@toptal-bot run build - Check build
@toptal-bot run visual - Run visual tests
@toptal-bot run deploy:documentation - Deploy documentation
@toptal-bot run package:alpha-release - Release alpha version
Does it maybe make sense for us to have a special component for such timeline records?
Something like
<TimelineItem>
<TimelineItem.Heading>Founder</TimelineItem.Heading>
<TimelineItem.Date>2018 -</TimelineItem.Date>
<TimelineItem.Description>
...
</TimelineItem.Description>
</TimelineItem>
?
Does it maybe make sense for us to have a special component for such timeline records?
IMO it doesn't. There is no single format for the timeline.
Does it maybe make sense for us to have a special component for such timeline records?
IMO it doesn't. There is no single format for the timeline.
To me, those 3 are kind of a pattern, but I would check with Milos:
Card 1 (with heading, date range, text)
Card 2 (with text in line 1 and description in line 2)
Card 3 (with text in line 1 and note (in line 2))
To me, those 3 are kind of a pattern, but I would check with Milos:
I will check with Milos, but the only patterns I see are:
Timeline item has (not) a dashed line.
Timeline item has (not) an icon. (dot by default)
Timeline item has (not) a date.
Timeline has some non-static content.
I'd create sub-components only if a component has some static content (strict layout). The content on the images you've sent is completely different.
cc @denieler
|
gharchive/pull-request
| 2021-09-29T12:03:43 |
2025-04-01T04:36:06.636751
|
{
"authors": [
"denieler",
"teimurjan"
],
"repo": "toptal/picasso",
"url": "https://github.com/toptal/picasso/pull/2178",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1260021100
|
chore: test commit
[FX-NNNN]
Description
Describe the changes and motivations for the pull request.
How to test
FIXME: Add the steps describing how to verify your changes
Screenshots
Before: Insert screenshots or screen recordings
After: Insert screenshots or screen recordings
Development checks
[ ] Add changeset according to guidelines (if needed)
[ ] Read CONTRIBUTING.md and Component API principles
[ ] Annotate all props in component with documentation
[ ] Create examples for component
[ ] Ensure that deployed demo has expected results and good examples
[ ] Ensure that all PR checks are green
[ ] Ensure the changed/created components have not caused accessibility issues. How to use accessibility plugin in storybook.
[ ] Self reviewed
[ ] Covered with tests
Breaking change
[ ] codemod is created and showcased in the changeset
[ ] test alpha package of Picasso in StaffPortal
PR commands
List of available commands:
@toptal-bot run all - Run whole pipeline
@toptal-bot run build - Check build
@toptal-bot run deploy:documentation - Deploy documentation
@toptal-bot run package:alpha-release - Release alpha version
PR Review Guidelines
When to approve? ✅
You are OK with merging this PR and
You have no extra requests.
You have optional requests.
Add nit: to your comment. (ex. nit: I'd rename this variable from makeCircle to getCircle)
When to request changes? ❌
You are not OK with merging this PR because
Something is broken after the changes.
Acceptance criteria is not reached.
Code is dirty.
When to comment (neither ✅ nor ❌)
You want your comments to be addressed before merging this PR in cases like:
There are leftovers like unnecessary logs, comments, etc.
You have an opinionated comment regarding the code that requires a discussion.
You have questions.
How to handle the comments?
An owner of a comment is the only one who can resolve it.
An owner of a comment must resolve it when it's addressed.
A PR owner must reply with ✅ when a comment is addressed.
something important @augustobmoura
|
gharchive/pull-request
| 2022-06-03T15:03:12 |
2025-04-01T04:36:06.648999
|
{
"authors": [
"augustobmoura",
"ozgurkececioglu"
],
"repo": "toptal/picasso",
"url": "https://github.com/toptal/picasso/pull/2851",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
973700886
|
Suggestion: support shared (virtual) hosting
Since this is a "lightweight" framework, I suggest adding support for shared (virtual) hosting.
Most shared hosting providers don't allow a custom document root. Among PHP frameworks, Laravel pretty much dominates, but Laravel really is too "heavy".
For typical small applications, a small and elegant framework like this is the better fit. ^_^
The development version now supports customizing the public directory. See the docs: https://www.chengyao.xyz/note/203.html
I've seen some frameworks use URL rewriting instead: place an .htaccess in the project root that forwards requests to the /public directory. Wouldn't that be a bit safer?
RewriteRule (^[^/]*$) public/$1 [L]
I've recently looked through quite a few PHP micro-frameworks on GitHub; a few of them seem pretty good, for your reference:
lightpack/lightpack, bcosca/fatfree, Tencent/Biny, Usbac/wolff, bearframework/bearframework - each has its pros and cons. Tencent/Biny looks a bit dated, but it does come from Tencent, so it should be reasonably well maintained~
In fact MaxPHP can already do this: just move the .htaccess file from the public directory to the project root, then change index.php inside it to public/index.php.
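As a rough sketch only (the rules actually shipped in MaxPHP's public/.htaccess may differ), the root .htaccess could end up looking something like this after that edit:
<IfModule mod_rewrite.c>
    RewriteEngine On
    # requests that don't map to an existing file or directory go to the front controller in public/
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^(.*)$ public/index.php [QSA,L]
</IfModule>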
|
gharchive/issue
| 2021-08-18T14:01:14 |
2025-04-01T04:36:06.658396
|
{
"authors": [
"topyao",
"wfsdaj"
],
"repo": "topyao/max",
"url": "https://github.com/topyao/max/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
201163142
|
Using half conversions from TH
Using half conversions from TH.
Depends on torch/torch7#901
I think this pull request just fell through the cracks - a very simple change that improves runtime efficiency (no memcpy) and eliminates code duplication.
thanks!
|
gharchive/pull-request
| 2017-01-17T02:51:18 |
2025-04-01T04:36:06.666363
|
{
"authors": [
"borisfom",
"soumith"
],
"repo": "torch/cutorch",
"url": "https://github.com/torch/cutorch/pull/680",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
68347551
|
staticsitegen command fails for documents since commit 918f286
The staticsitegen command uses django-medusa to generate a static version of the wagtail site. Since commit 918f286eeac567ec460d786a6cfd3dd4d95185a6 wagtail uses StreamingHttpResponse to return Documents, which causes medusa to fail as it simply saves response.content to disk (https://github.com/ctxis/django-medusa/blob/master/django_medusa/renderers/disk.py#L56) and StreamingHttpResponse has no content attribute.
I've pushed a fix to our fork of django-medusa (https://github.com/ctxis/django-medusa) and submitted a pull request (https://github.com/mtigas/django-medusa/pull/20). It iterates over the streaming_response attribute if response is streaming and saves the chunks to file, otherwise it uses response.content.
Until this PR has been merged, do you think it's worth adding a note to the staticsitegen section of the wagtail docs?
Yes, staticsitegen fails using release 1.0b1 and I think a note is necessary.
Thanks for raising this, @pcraston. We'll look at a fix; in the meantime could you do a PR for a note in the docs?
I downloaded the zip from https://github.com/ctxis/django-medusa and copied the module into a freshly started project, mysite, but I still get the following error:
Chongs-MacBook-Air:mysite chongwang$ python manage.py staticsitegen
Skipping app 'mysite'... (No 'renderers.py')
Skipping app 'django.contrib.admin'... (No 'renderers.py')
Skipping app 'django.contrib.auth'... (No 'renderers.py')
Skipping app 'django.contrib.contenttypes'... (No 'renderers.py')
Skipping app 'django.contrib.sessions'... (No 'renderers.py')
Skipping app 'django.contrib.messages'... (No 'renderers.py')
Skipping app 'django.contrib.staticfiles'... (No 'renderers.py')
Skipping app 'compressor'... (No 'renderers.py')
Skipping app 'taggit'... (No 'renderers.py')
Skipping app 'modelcluster'... (No 'renderers.py')
Skipping app 'wagtail.wagtailcore'... (No 'renderers.py')
Skipping app 'wagtail.wagtailadmin'... (No 'renderers.py')
Skipping app 'wagtail.wagtaildocs'... (No 'renderers.py')
Skipping app 'wagtail.wagtailsnippets'... (No 'renderers.py')
Skipping app 'wagtail.wagtailusers'... (No 'renderers.py')
Skipping app 'wagtail.wagtailsites'... (No 'renderers.py')
Skipping app 'wagtail.wagtailimages'... (No 'renderers.py')
Skipping app 'wagtail.wagtailembeds'... (No 'renderers.py')
Skipping app 'wagtail.wagtailsearch'... (No 'renderers.py')
Skipping app 'wagtail.wagtailredirects'... (No 'renderers.py')
Skipping app 'wagtail.wagtailforms'... (No 'renderers.py')
Found renderers for 'wagtail.contrib.wagtailmedusa'...
Skipping app 'core'... (No 'renderers.py')
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File "/Users/chongwang/Workspace/test/mysite/django_medusa/management/commands/staticsitegen.py", line 17, in handle
r.generate()
File "/Users/chongwang/Workspace/test/mysite/django_medusa/renderers/base.py", line 69, in generate
self.render_path(path)
File "/Users/chongwang/Workspace/test/mysite/django_medusa/renderers/base.py", line 65, in render_path
raise NotImplementedError
NotImplementedError
environment:
python 2.7.9
wagtail 1.0b1
django 1.7.7
We've agreed to investigate ways to make this work without the medusa patch.
This can now be worked around by setting the SENDFILE_BACKEND setting to sendfile.backends.simple.
https://github.com/torchbox/wagtail/blob/40092e18526d59c8ca0966743de7f93d14a8304c/docs/contrib/staticsitegen.rst#installing-django-medusa
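In practice that workaround is a single line in the Django settings module (assuming django-sendfile is installed as described in the linked docs):
# settings.py - the workaround described above
SENDFILE_BACKEND = 'sendfile.backends.simple'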
|
gharchive/issue
| 2015-04-14T11:40:55 |
2025-04-01T04:36:06.692609
|
{
"authors": [
"davecranwell",
"kaedroho",
"pcraston",
"taishizhiqiu",
"tomdyson"
],
"repo": "torchbox/wagtail",
"url": "https://github.com/torchbox/wagtail/issues/1183",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|