id (string, length 4 to 10) | text (string, length 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
2636578511 | Improve Android SDK handling
This PR seeks to improve Android SDK handling approach.
The current issues are:
Android SDK setup is limited to Linux only, and it also uses tooling that may not be available on other platforms.
Each module will have its own copy of the SDK. In a project with 20 Android modules (19 library modules + 1 app module), that means 20x the space usage it should need. This is quite significant, considering that the basic setup (platforms;android-35, platform-tools and build-tools;35.0.0) takes ~300 MB, so 20 copies come to roughly 6 GB.
Licenses are accepted on the end user's behalf, without their consent.
This PR:
Aligns the SDK setup experience with AGP (Android Gradle Plugin):
Expects Android SDK (at least command line tools) to be present on the end-user machine and registered with the env variable (ANDROID_HOME).
Expects the command line tools to be present there, so that sdkmanager can be used to set up the necessary SDK components.
Expects the user to accept the necessary licenses themselves. If they are not accepted, sdkmanager will block on stdin waiting for user input before proceeding with the installation. Note: when downloading Android Studio, it comes with the necessary basic license already accepted, plus some basic SDK components. When downloading only the command line tools, it is necessary to accept the main license (android-sdk-license) manually. (A sketch of the expected setup is shown after this list.)
Installs all the necessary components to the location pointed by the ANDROID_HOME env variable, so that they are shared by all modules.
Adds the necessary CI step to install the Android SDK for the jobs that need it. Note: I was also considering extracting the Android-related examples into a dedicated job, but for the existing jobs, changing something like example.kotlinlib[__].local.testCached -> example.kotlinlib[__:^AndroidAppKotlinModule].local.testCached doesn't work (at least mill resolve still shows the Android module); all cross-segments would have to be listed manually.
Speaking of sdkmanager: it is also possible to use the corresponding artifact directly and put it in a worker, but its version cannot be found on the Android Developers website, so end users would have trouble updating it. It is normally a one-shot action anyway, so keeping it even in a separate classpath looks like overkill. The command line tools will be needed later in any case for the other tools they provide.
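For illustration, a minimal sketch of the setup flow this aligns with, assuming the command line tools are already installed under ANDROID_HOME (the paths below are an assumption; the component versions are the ones mentioned above):
# Assumes cmdline-tools live at $ANDROID_HOME/cmdline-tools/latest (path may differ)
export ANDROID_HOME="$HOME/Android/Sdk"
SDKMANAGER="$ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager"
# Review and accept licenses interactively (the end user's responsibility)
"$SDKMANAGER" --licenses
# Install the shared components into $ANDROID_HOME
"$SDKMANAGER" "platforms;android-35" "platform-tools" "build-tools;35.0.0"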
For some reason CI fails (even though the logs show the SDK components being installed); I will investigate.
@0xnm Sir, your approach is very good, but I want to add something that I did for the Kotlin example. Please have a look at this PR: #3769
There, please see the AndroidAppModule and AndroidSdkModule.
Main points:
used a cache for the downloaded setup
updated the code for the Android setup
Actually, in that PR I have made a lot of progress. The only thing I am stuck on is library resources (AAR file packaging); almost everything else is done. I have updated the setup for caching and also updated the Android app creation process, but I was unable to fully solve the addition of Jetpack Compose (only AAR packaging is left).
I request you to please have a look and take some insights from it; I think this will help.
That was my mistake: I forgot to add the line https://github.com/com-lihaoyi/mill/pull/3913/files#diff-7314d0ebbd2e9537ae4889316745b4fd2fa43cb86275c9caae18a86ba228b642R96. CI is fine now.
Thanks! Will take a look
| gharchive/pull-request | 2024-11-05T22:13:53 | 2025-04-01T06:38:15.169899 | {
"authors": [
"0xnm",
"himanshumahajan138",
"lihaoyi"
],
"repo": "com-lihaoyi/mill",
"url": "https://github.com/com-lihaoyi/mill/pull/3913",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2131235034 | State Sync not working
I am testing state-sync from our RPC nodes which have been set up according to the config described below. Please let me know if I am doing anything wrong, or leaving out any necessary step.
Here is my application's tendermint version:
ABCI: 1.0.0
BlockProtocol: 11
P2PProtocol: 8
Tendermint: 0.37.1
My RPC nodes have the following config in app.toml
###############################################################################
### State Sync Configuration ###
###############################################################################
# State sync snapshots allow other nodes to rapidly join the network without replaying historical
# blocks, instead downloading and applying a snapshot of the application state at a given height.
[state-sync]
# snapshot-interval specifies the block interval at which local state sync snapshots are
# taken (0 to disable).
snapshot-interval = 1000
# snapshot-keep-recent specifies the number of recent snapshots to keep and serve (0 to keep all).
snapshot-keep-recent = 2
The node I am trying to bring up via state sync has the following in config.toml. The trust height is the latest block height minus 1000 and is updated each time, along with the trust hash.
#######################################################
### State Sync Configuration Options ###
#######################################################
[statesync]
# State sync rapidly bootstraps a new node by discovering, fetching, and restoring a state machine
# snapshot from peers instead of fetching and replaying historical blocks. Requires some peers in
# the network to take and serve state machine snapshots. State sync is not attempted if the node
# has any local state (LastBlockHeight > 0). The node will have a truncated block history,
# starting from the height of the snapshot.
enable = true
# RPC servers (comma-separated) for light client verification of the synced state machine and
# retrieval of state data for node bootstrapping. Also needs a trusted height and corresponding
# header hash obtained from a trusted source, and a period during which validators can be trusted.
#
# For Cosmos SDK-based chains, trust_period should usually be about 2/3 of the unbonding time (~2
# weeks) during which they can be financially punished (slashed) for misbehavior.
rpc_servers = "http://34.31.152.238:80,http://35.246.188.47:80"
trust_height = 1569274
trust_hash = "59E5CBC5E2B0B3E251FC3667F4E8CC2DFB7ED12F1C041252E2169022CAC3A899"
trust_period = "168h0m0s"
# Time to spend discovering snapshots before initiating a restore.
discovery_time = "15s"
# Temporary directory for state sync snapshot chunks, defaults to the OS tempdir (typically /tmp).
# Will create a new, randomly named directory within, and remove it when done.
temp_dir = ""
# The timeout duration before re-requesting a chunk, possibly from a different
# peer (default: 1 minute).
chunk_request_timeout = "10s"
# The number of concurrent chunk fetchers to run (default: 1).
chunk_fetchers = "4"
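For reference, the trust_height and trust_hash above are typically refreshed from a trusted RPC node. A minimal sketch of one way to derive them, assuming curl and jq are available and using the first RPC server listed above:
RPC="http://34.31.152.238:80"
LATEST=$(curl -s "$RPC/block" | jq -r '.result.block.header.height')
TRUST_HEIGHT=$((LATEST - 1000))
TRUST_HASH=$(curl -s "$RPC/block?height=$TRUST_HEIGHT" | jq -r '.result.block_id.hash')
echo "trust_height = $TRUST_HEIGHT"
echo "trust_hash = \"$TRUST_HASH\""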
However, the node does not sync. Instead, I see these logs:
6:08PM INF starting node with ABCI Tendermint in-process module=server
6:08PM INF service start impl=multiAppConn module=proxy msg={}
6:08PM INF service start connection=query impl=localClient module=abci-client msg={}
6:08PM INF service start connection=snapshot impl=localClient module=abci-client msg={}
6:08PM INF service start connection=mempool impl=localClient module=abci-client msg={}
6:08PM INF service start connection=consensus impl=localClient module=abci-client msg={}
6:08PM INF service start impl=EventBus module=events msg={}
6:08PM INF service start impl=PubSub module=pubsub msg={}
6:08PM INF service start impl=IndexerService module=txindex msg={}
6:08PM INF Version info abci=1.0.0 block=11 commit_hash= module=server p2p=8 tendermint_version=0.37.1
6:08PM INF This node is not a validator addr=49BF5A9A0C4117F5EB1E51AFFB59755D8371A8FA module=consensus pubKey=uEUPyozNqgYUQsSL9KWHJ4aas3YkiEczknKJKB08jpQ=
6:08PM INF P2P Node ID ID=0b8f89eb909b9098fe19af1d4d61e3e257d31c5a file=/home/ashish/.ssc/config/node_key.json module=p2p
6:08PM INF Adding persistent peers addrs=["e7f5e5327a8298eb04c29c4115ccf2d6a05ec732@35.233.157.115:26656","3abc213ec08ece180e6fa1443226689ecc4b7749@34.89.206.238:26656","239b182bab3252c26fc116bf00301f30f1dea01e@34.92.42.230:26656","8d936668e433e9e50cd4e60da218a9fe81950d3f@34.71.122.182:26656","65b64dc2d28e0116da22582cb947ec7bc8c91173@35.215.225.128:26656"] module=p2p
6:08PM INF Adding unconditional peer ids ids=[] module=p2p
6:08PM INF Add our address to book addr={"id":"0b8f89eb909b9098fe19af1d4d61e3e257d31c5a","ip":"0.0.0.0","port":26656} book=/home/ashish/.ssc/config/addrbook.json module=p2p
6:08PM INF service start impl=Node module=server msg={}
6:08PM INF Starting pprof server laddr=localhost:6060 module=server
6:08PM INF serve module=rpc-server msg={}
6:08PM INF service start impl="P2P Switch" module=p2p msg={}
6:08PM INF service start impl=Reactor module=blockchain msg={}
6:08PM INF service start impl=ConsensusReactor module=consensus msg={}
6:08PM INF Reactor module=consensus waitSync=true
6:08PM INF service start impl=Evidence module=evidence msg={}
6:08PM INF service start impl=StateSync module=statesync msg={}
6:08PM INF service start impl=PEX module=pex msg={}
6:08PM INF service start book=/home/ashish/.ssc/config/addrbook.json impl=AddrBook module=p2p msg={}
6:08PM INF Saving AddrBook to file book=/home/ashish/.ssc/config/addrbook.json module=p2p size=30
6:08PM INF Ensure peers module=pex numDialing=0 numInPeers=0 numOutPeers=0 numToDial=10
6:08PM INF Starting state sync module=statesync
6:08PM INF Downloading trusted light block using options module=light
6:08PM INF service start impl="Peer{MConn{34.71.122.182:26656} 8d936668e433e9e50cd4e60da218a9fe81950d3f out}" module=p2p msg={} peer={"id":"8d936668e433e9e50cd4e60da218a9fe81950d3f","ip":"34.71.122.182","port":26656}
6:08PM INF service start impl=MConn{34.71.122.182:26656} module=p2p msg={} peer={"id":"8d936668e433e9e50cd4e60da218a9fe81950d3f","ip":"34.71.122.182","port":26656}
6:08PM INF service start impl="Peer{MConn{173.234.17.237:26656} fa64dcb9be1733fe27e932b3d5da1e685be6906a out}" module=p2p msg={} peer={"id":"fa64dcb9be1733fe27e932b3d5da1e685be6906a","ip":"173.234.17.237","port":26656}
6:08PM INF service start impl=MConn{173.234.17.237:26656} module=p2p msg={} peer={"id":"fa64dcb9be1733fe27e932b3d5da1e685be6906a","ip":"173.234.17.237","port":26656}
6:08PM INF service start impl="Peer{MConn{34.89.206.238:26656} 3abc213ec08ece180e6fa1443226689ecc4b7749 out}" module=p2p msg={} peer={"id":"3abc213ec08ece180e6fa1443226689ecc4b7749","ip":"34.89.206.238","port":26656}
6:08PM INF service start impl=MConn{34.89.206.238:26656} module=p2p msg={} peer={"id":"3abc213ec08ece180e6fa1443226689ecc4b7749","ip":"34.89.206.238","port":26656}
6:08PM INF service start impl="Peer{MConn{43.157.26.164:26656} e3f30a584c09a2a599ab58fbe8fc2958faf6c351 out}" module=p2p msg={} peer={"id":"e3f30a584c09a2a599ab58fbe8fc2958faf6c351","ip":"43.157.26.164","port":26656}
6:08PM INF service start impl=MConn{43.157.26.164:26656} module=p2p msg={} peer={"id":"e3f30a584c09a2a599ab58fbe8fc2958faf6c351","ip":"43.157.26.164","port":26656}
6:08PM INF sync any module=statesync msg={}
6:08PM INF service start impl="Peer{MConn{135.181.220.33:26656} 82417f944fbf680042de3c4acf33aed51b34e8ec out}" module=p2p msg={} peer={"id":"82417f944fbf680042de3c4acf33aed51b34e8ec","ip":"135.181.220.33","port":26656}
6:08PM INF service start impl=MConn{135.181.220.33:26656} module=p2p msg={} peer={"id":"82417f944fbf680042de3c4acf33aed51b34e8ec","ip":"135.181.220.33","port":26656}
6:08PM INF service start impl="Peer{MConn{165.154.224.37:26656} d9dabf20d621a021aac1477b5a714cd5a8d8a3b5 out}" module=p2p msg={} peer={"id":"d9dabf20d621a021aac1477b5a714cd5a8d8a3b5","ip":"165.154.224.37","port":26656}
6:08PM INF service start impl=MConn{165.154.224.37:26656} module=p2p msg={} peer={"id":"d9dabf20d621a021aac1477b5a714cd5a8d8a3b5","ip":"165.154.224.37","port":26656}
6:08PM INF service start impl="Peer{MConn{35.233.157.115:26656} e7f5e5327a8298eb04c29c4115ccf2d6a05ec732 out}" module=p2p msg={} peer={"id":"e7f5e5327a8298eb04c29c4115ccf2d6a05ec732","ip":"35.233.157.115","port":26656}
6:08PM INF service start impl=MConn{35.233.157.115:26656} module=p2p msg={} peer={"id":"e7f5e5327a8298eb04c29c4115ccf2d6a05ec732","ip":"35.233.157.115","port":26656}
6:08PM INF service start impl="Peer{MConn{35.215.225.128:26656} 65b64dc2d28e0116da22582cb947ec7bc8c91173 out}" module=p2p msg={} peer={"id":"65b64dc2d28e0116da22582cb947ec7bc8c91173","ip":"35.215.225.128","port":26656}
6:08PM INF service start impl=MConn{35.215.225.128:26656} module=p2p msg={} peer={"id":"65b64dc2d28e0116da22582cb947ec7bc8c91173","ip":"35.215.225.128","port":26656}
6:08PM INF service start impl="Peer{MConn{34.92.42.230:26656} 239b182bab3252c26fc116bf00301f30f1dea01e out}" module=p2p msg={} peer={"id":"239b182bab3252c26fc116bf00301f30f1dea01e","ip":"34.92.42.230","port":26656}
6:08PM INF service start impl=MConn{34.92.42.230:26656} module=p2p msg={} peer={"id":"239b182bab3252c26fc116bf00301f30f1dea01e","ip":"34.92.42.230","port":26656}
6:08PM INF sync any module=statesync msg={}
6:08PM INF Ensure peers module=pex numDialing=0 numInPeers=0 numOutPeers=9 numToDial=1
6:08PM INF We need more addresses. Sending pexRequest to random peer module=pex peer={"Data":{},"Logger":{"Logger":{}}}
6:08PM INF sync any module=statesync msg={}
I check the status on my local node and it shows me this:
{"NodeInfo":{"protocol_version":{"p2p":"8","block":"11","app":"0"},"id":"0b8f89eb909b9098fe19af1d4d61e3e257d31c5a","listen_addr":"tcp://0.0.0.0:26656","network":"ssc-testnet-1","version":"0.37.1","channels":"40202122233038606100","moniker":"state-sync-node","other":{"tx_index":"on","rpc_address":"tcp://127.0.0.1:26657"}},"SyncInfo":{"latest_block_hash":"","latest_app_hash":"","latest_block_height":"0","latest_block_time":"1970-01-01T00:00:00Z","earliest_block_hash":"","earliest_app_hash":"","earliest_block_height":"0","earliest_block_time":"1970-01-01T00:00:00Z","catching_up":true},"ValidatorInfo":{"Address":"49BF5A9A0C4117F5EB1E51AFFB59755D8371A8FA","PubKey":{"type":"tendermint/PubKeyEd25519","value":"uEUPyozNqgYUQsSL9KWHJ4aas3YkiEczknKJKB08jpQ="},"VotingPower":"0"}}
Anything in the debug logs --log_level="*:debug"? https://docs.cometbft.com/main/explanation/core/running-in-production#logging
Sorry, just saw this. I will get back to you today with the logs.
@melekes please see the debug log from today:
sscd.log
Hi @melekes or anyone from the CometBFT team,
Any thoughts or way forward on this?
Hey guys, any updates on this? I realized I also needed seed nodes and I have added those to my node's config.toml and tried running this. Still no luck - same issue.
@hvanz could you please take a look at this issue (as a part of the state-sync sprint)?
I looked at the log; it seems that Comet requests the list of snapshots from its peers, but it never gets a response. Is it possible that the problem is in the implementation of ListSnapshots in the application?
@hvanz It is a vanilla Cosmos SDK application and in a public repo. Feel free to look at what we are doing: https://github.com/sagaxyz/ssc/blob/c4a5823177409f4ee0ea4f99f0a7cf0c412668a7/cmd/sscd/cmd/root.go#L228
Hi @melekes @hvanz I figured out the issue after @hvanz indicated it was not finding a snapshot. We have persistent peers values and rpc nodes values. The RPC nodes are the ones that have the state-sync snapshots and our persistent peers listed in the config.toml do not have state-sync snapshots.
My local node is now running and syncing with a state-sync snapshot.
I did not find this requirement documented, namely that the persistent peers must include the nodes serving the state-sync snapshots. I only knew of the rpc_servers value, which was correctly set. I may have missed this in the docs, but in case it is not there, please add the requirement that persistent_peers must also include the nodes that are serving the state-sync snapshots.
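For illustration, a hypothetical config.toml fragment reflecting this; the node IDs below are placeholders, but the point is that the snapshot-serving RPC nodes also appear among the persistent peers:
# config.toml (sketch; node IDs are placeholders)
persistent_peers = "<rpc-node-1-id>@34.31.152.238:26656,<rpc-node-2-id>@35.246.188.47:26656"
rpc_servers = "http://34.31.152.238:80,http://35.246.188.47:80"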
@ashishchandr70 glad you've figured it out 👍 we will check the docs, thanks
Hi @ashishchandr70, I'm glad that you found the problem. Indeed, the snapshots are requested from all known peers, including the persistent peers. It's mentioned in the code but probably the documentation should be more explicit about it. https://github.com/cometbft/cometbft/blob/v0.37.1/statesync/reactor.go#L272-L278
@hvanz could you please make it clear in the docs?
| gharchive/issue | 2024-02-13T00:35:14 | 2025-04-01T06:38:15.187094 | {
"authors": [
"ashishchandr70",
"hvanz",
"melekes"
],
"repo": "cometbft/cometbft",
"url": "https://github.com/cometbft/cometbft/issues/2319",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1929668614 | grpc: Add base gRPC server with version service to v0.37.x-experimental
Addresses #1420.
Refer to the companion PR for v0.38 for additional details: #1437
PR checklist
[x] Tests written/updated
[x] Changelog entry added in .changelog (we use unclog to manage our changelog)
[ ] Updated relevant documentation (docs/ or spec/) and code comments
Closing until we get signals that this is needed in v0.37.
| gharchive/pull-request | 2023-10-06T08:19:28 | 2025-04-01T06:38:15.190493 | {
"authors": [
"adizere",
"cason"
],
"repo": "cometbft/cometbft",
"url": "https://github.com/cometbft/cometbft/pull/1438",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1639406607 | rpc: tweaked the block search test
Improved the BlockSearch test within the RPC to make sure it matches against a value when block events are generated.
The previous version used a query that would never match, which made it harder to test whether indexing was performed correctly.
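For context, block search is driven by a query over indexed block events; a hedged sketch of the kind of request involved (the endpoint is CometBFT's block_search RPC, but the query shown here is illustrative, not the exact one used in the test):
curl -s -G 'http://localhost:26657/block_search' --data-urlencode 'query="block.height >= 1"'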
PR checklist
[x] Tests written/updated
[ ] Changelog entry added in .changelog (we use unclog to manage our changelog)
[ ] Updated relevant documentation (docs/ or spec/) and code comments
Should this be backported?
I added the backport labels, confirmed it works on all branches, but will need some tweaking for 0.34 as it has this match_events keyword. I'll take care of it.
| gharchive/pull-request | 2023-03-24T13:35:01 | 2025-04-01T06:38:15.193640 | {
"authors": [
"jmalicevic",
"sergio-mena"
],
"repo": "cometbft/cometbft",
"url": "https://github.com/cometbft/cometbft/pull/579",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1879730966 | Addressing comet/comet#1174. First version
As per title.
rendered
Once you're OK with the text, I'm planning to adapt the structure so that it is an RFC in the Comet repo.
There are two "TODOs" in the "Eventual requirements" section. Please let me know what you think.
Thanks folks for all your comments 🙏. I'll go through them tomorrow 👀
Notice that the rendered version should point to the latest version on the branch, namely: https://github.com/cometbft/knowledge-base/blob/sergio/addressing-1174/protocols/abci/addressing-1174.md
As a generic comment, I think this is a very nice discussion and problem to address. Some practical terms I think we should discuss and find more precise definitions for:
The external validity definition needs some improvement; we should look at how validity is handled by other protocols. In general, the property of BFT consensus is that a value proposed by a correct process is eventually decided. For values proposed by Byzantine agents, there are multiple definitions and ways to deal with them.
The changes in the consensus protocol are interesting. I wonder whether we can replace locked value/round with valid value/round in the properties. We do need to consider the corner cases where they differ, but they can only differ by the valid round/value being more up to date (fresher) than the locked round/value.
As a generic and simplistic definition: a valid value is a value that I would propose; a locked value is a value that I have accepted (prevoted for).
Regarding the pseudo-code, we have to recall that the valid value was introduced after this pseudo-code was written. That is why it is a little confusing: the valid part was added afterwards in order to handle hidden locks and ensure progress after GST.
An idea that I had, and would like some feedback on, is that we do not remove the valid(v) calls from the algorithm in any of the clauses.
We should instead be able to write a valid(v) method that considers the following (a rough sketch is shown after this list):
Whether we have already validated v, in which case we might skip the validation cost and just return the previous value produced by the external validation (application)
Whether we observed a quorum of processes that have validated v, in the form of 2f+1 Prevote(id(v)); this allows us, again, to avoid the external validation by the application
Whether we observed a quorum of processes that have accepted v, in the form of 2f+1 Precommit(id(v))
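A rough sketch of that idea in Go-like form; every identifier below is hypothetical and not taken from the CometBFT codebase, it only illustrates the caching/quorum shortcut described in the list above:
package consensus

// All types below are stand-ins for this sketch, not CometBFT types.
type ValueID string

type Value interface{ ID() ValueID }

type VoteSet interface{ HasTwoThirdsFor(id ValueID) bool }

type Application interface{ ProcessProposal(v Value) bool }

type ConsensusState struct {
	validationCache map[ValueID]bool // results of earlier external validations
	prevotes        VoteSet
	precommits      VoteSet
	app             Application
}

// valid keeps being called in every clause of the algorithm, but only falls
// back to the (possibly non-deterministic) application check when neither a
// cached result nor quorum evidence is available.
func (cs *ConsensusState) valid(v Value) bool {
	if res, ok := cs.validationCache[v.ID()]; ok {
		return res // already validated locally: reuse the earlier result
	}
	if cs.prevotes.HasTwoThirdsFor(v.ID()) || cs.precommits.HasTwoThirdsFor(v.ID()) {
		return true // 2f+1 processes already vouched for v
	}
	res := cs.app.ProcessProposal(v) // external validation by the application
	cs.validationCache[v.ID()] = res
	return res
}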
The main point that I have here is that a process must validate a proposed value at least once. The question is whether a rejection of a proposal block by a non-deterministic external validation procedure should prevent us from committing (or enable the commitment) of a block. The validation check performed by the blockchain (i.e., is the block a valid block?) should be performed at least once for every committed block.
In the spirit of PBTS, notice that the change introduced there was to create two validation methods, the valid(v) that remained untouched, and an additional non-deterministic timely(v) predicate. This second predicate was only evaluated when a block was received for the first time, as it was a timing predicate. In the case of re-proposed blocks, the fact that $2f + 1$ processes have considered the block timely enabled us to drop the timely check in this case.
An idea that I had, and would like some feedback on, is that we do not remove the valid(v) calls from the algorithm in any of the clauses.
I guess this is related to the separation of valid as a mathematical function (where it doesn't matter how often it is called) and Process_Proposal (where it might matter how often it is called if it uses oracle data). The question is whether we want to capture the latter point in the pseudo code...
I think at a very early stage, Dev wrote a pseudo-code variant of ABCI++ with Process_Proposal. The question is whether we should formalize it in this way, and thus eliminate the "schism" between valid and Process_Proposal?
Another question that we need to address is how to handle the recovery of a node.
The assumption that valid(v) is a deterministic method enables us to employ it both in regular operation and in recovery mode. So if a node, during regular operation, deemed a block valid and therefore issued a Prevote for it, then when the same situation arises in recovery mode the validation should produce the same output. Otherwise, if the block is rejected in this different context, the node will Prevote nil, which is an equivocation.
This same question was raised for PBTS. The solution proposed (but not implemented) is to persist in the consensus WAL the information needed to evaluate the timely predicate. Namely, we have to persist the receive time of a proposal, which is the non-deterministic part of the timely evaluation. The method, by itself, is deterministic, so that we need only to ensure that it is invoked with the same parameters both in regular and recovery mode.
How can we handle this situation when ProcessProposal is not deterministic? The first solution that comes to mind is to persist in the WAL the output of this validation method when it is invoked during regular operation. The invocation of this method therefore becomes a non-deterministic external event processed by the consensus protocol. Since we probably cannot replay exactly the same (application and timing) context used during regular operation, we don't have many options other than storing the result of this operation on disk. Once in recovery mode, we should be able to identify that we already have the output of this validation and therefore must not perform it again, since re-running it risks producing different behavior at the consensus level (i.e., equivocating).
Addressed outstanding comments. I think this is ready to land as an RFC PR on CometBFT repo. If there are further comments, they can be added there. Merging this
| gharchive/pull-request | 2023-09-04T08:01:15 | 2025-04-01T06:38:15.205729 | {
"authors": [
"cason",
"josef-widder",
"sergio-mena"
],
"repo": "cometbft/knowledge-base",
"url": "https://github.com/cometbft/knowledge-base/pull/14",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2484331149 | complicated parser and ' or + character
filename
Airfiles #04 The 'Big Show' 2.cbz
turns title into
The' Big Show' 2
filename
Army #11 Operation 'Z'.cbz
turns title into
Operation' Z'
filename
De Dwaas Van De Koning #009 Het Testament Van d'Artagnan.cbz
turns title into
Het Testament Van d' Artagnan
notice the added space after the '
I am using the complicated parser through the command line, but the same behaviour is seen in the GUI.
Using version 1.6.0a21.dev0
The same happens with + signs.
filename
De Man of Steel #01 The Invasion + Power of The Monster.cbz
turns title into
The Invasion+ Power of The Monster
When a title starts with a ' character, it drops it completely.
For example
Atari Force #09 'Als Uw Oog U Tot Last Is...'.cbz
turns title into
Als Uw Oog U Tot Last Is'
drops the first ' completely (and it also drops the ... for some reason)
that's because quotes shouldn't be in filenames 😝. I'll see if I can get to this next week
If you need a comic series to test with, this would be a good example for the ' sign:
https://www.stripinfo.be/reeks/index/1847_De_psy
| gharchive/issue | 2024-08-24T07:41:58 | 2025-04-01T06:38:15.256839 | {
"authors": [
"lordwelch",
"protofolius"
],
"repo": "comictagger/comictagger",
"url": "https://github.com/comictagger/comictagger/issues/672",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
606901907 | Joystickd readme in tools outdated
Joystickd in tools does not seem to function, and its readme instructions are out of date.
When running all three scripts with no visible console errors and all dependencies working correctly, joystickd.py does not move the steering wheel.
The readme instructions are also out of date. For example, you do not need to manually set ALL_OUTPUT anymore.
Everything should still work, I used it pretty recently. I updated the readme with some new instructions. Can you get it to work with those instructions?
Another thing that will stop joystick use (and MOCK boardd, and a lot of the Panda Python example scripts) is running firmware built for EON, including the default/signed shipped FW from Comma. In short, a Panda flashed for actual driving with openpilot won't work for any of these tools.
Firmware built with the EON option expects a keepalive from OP boardd, and the missing keepalive kicks Panda back to NOOUTPUT within a couple seconds after the tool sets an output mode.
It's very easy to run into this, and hard for a newbie to figure out because the failure is totally silent, and it's something of an annoyance to reflash Panda if your usual development environment is virtualized.
I ran into this the hard way recently, but had not submitted a PR to fix it because I'm not sure of a workable approach that Comma would find acceptable. I don't like having to flash back and forth, and I'd especially like to be able to use the Panda tools from a vehicle-installed EON/C2. For my own development purposes, I hacked around it by setting the heartbeat timeout to an entire day.
Alternatives are to add a USB command to disable (or lengthen) keepalives, or to augment all the various development tools to send the heartbeat commands periodically. Or, even if no automatic workaround is acceptable, we could at least have the dev tools detect EON firmware and bail out with an informative notice.
I tried to get joystickd to work as well for some time, to no avail. But what you mentioned, @jyoung8607, sounds pretty much like the same issue I was having: it was failing silently, the code was still running and looked as if it was working, yet nothing would happen.
If you have a simple guide to what you did to get past this, it would be extremely helpful. Did you create your own Panda firmware with the heartbeat, or EON firmware?
I think if you follow Willem's updated instructions, it's more likely to work for officially supported cars since it's using the new boardd that emits keepalives. I have not tested this personally with an EON or Comma signed build.
For the old instructions using boardd_old.py, with the default Panda firmware, I think you would have trouble with missing keepalives on EON builds, and also you can't go into ALL_OUTPUT mode without DEBUG firmware built from source. You'd also have trouble if your car's safety mode is in the ALLOW_DEBUG group as not yet officially supported.
You should retry with Willem's updated instructions. If that doesn't work, you should build the Panda firmware yourself from source, ideally on Ubuntu bare metal (virtualization makes it difficult to carry out the reflash step) so all the DEBUG stuff is enabled for you.
| gharchive/issue | 2020-04-26T02:40:37 | 2025-04-01T06:38:15.269782 | {
"authors": [
"AhadCove",
"codename224",
"jyoung8607",
"pd0wm"
],
"repo": "commaai/openpilot",
"url": "https://github.com/commaai/openpilot/issues/1423",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
620314034 | Couldn't make the black panda to turn the wheel by sending the STEERING_LKA CAN command
The wheel turns only a little, most of the time not even noticeably. After debugging, I found that the torque cannot be increased beyond a limit (between 350 and 400) and the steering angle only changes a little.
In addition, the EPS_STATUS LKA_STATE becomes 9 after the torque increases beyond the limit. I also found that if the torque is below the limit, the LKA_STATE flickers between 1 and 5 quickly, which I don't think is normal.
After installing the black panda, should the car itself still send the STEERING_LKA message to the CAN bus? It seems that when I print out the logcan messages, the car itself is still sending bus: 000 {'CHECKSUM': 78.0, 'LKA_STATE': 0.0, 'STEER_REQUEST': 0.0, 'STEER_TORQUE_CMD': 0.0}. Is this expected?
My theory is that the STEERING_LKA messages sent from the car itself interfere with the messages sent from openpilot. Not sure if that's the case.
After debugging, I found out the issue:
Can1 (bus number 0) should not get the LKA 740 messages from the camera (FRC). The messages from the camera conflict with the 740 messages that I send using the openpilot joystick controller.
| gharchive/issue | 2020-05-18T15:45:48 | 2025-04-01T06:38:15.272623 | {
"authors": [
"zoukyle"
],
"repo": "commaai/openpilot",
"url": "https://github.com/commaai/openpilot/issues/1534",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
686255209 | Systemd logcatd
I'm in need of some help with my Honda Insight. I have openpilot and sent it back because I asked about brightness. They sent a new one and now it says it is unsupported. Can you help?
What do I do with this information?
I load branch "master" and the camera dent go on.
| gharchive/pull-request | 2020-08-26T11:50:05 | 2025-04-01T06:38:15.274424 | {
"authors": [
"kyeli2",
"pd0wm"
],
"repo": "commaai/openpilot",
"url": "https://github.com/commaai/openpilot/pull/2085",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
166065377 | Country specific config for payment methods
Config for payments needs to be done in a way that you are able to show different payment methods in the checkout based on the country selected in your billing address.
This is a bigger task.
First question: is the abstract framework able (or should it be able) to decide about allowed / not allowed countries?
If yes, then what are the rules?
If it is a shop responsibility, then there is already a built-in feature that allows the shop to pass a filter function into the "getMethods" API call, which is executed every time, so any kind of filtering is possible.
Filtering itself has to be discussed in more detail and is not in scope for the first iteration I think.
| gharchive/issue | 2016-07-18T10:39:06 | 2025-04-01T06:38:15.280262 | {
"authors": [
"MGA-dotSource",
"floriansattler"
],
"repo": "commercetools/commercetools-sunrise-java-payment",
"url": "https://github.com/commercetools/commercetools-sunrise-java-payment/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2735521062 | a fix and a url update
fix dates in blog archive list to not be for current displayed post
update url to stackage readme section on adding package
bump to final lts-22.43
Nice, thanks. :) I'll work on getting this deployed, which is still a bit of a bad process.
Thanks
Finally deployed now. :)
| gharchive/pull-request | 2024-12-12T10:39:30 | 2025-04-01T06:38:15.319156 | {
"authors": [
"chreekat",
"juhp"
],
"repo": "commercialhaskell/stackage-server",
"url": "https://github.com/commercialhaskell/stackage-server/pull/338",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1988640837 | Don't error if there is no $HOME/.bookmarks.jsonl
Instead, give a helpful suggestion:
try mark add {url} !
Fixed in #45
| gharchive/issue | 2023-11-11T01:44:55 | 2025-04-01T06:38:15.349273 | {
"authors": [
"commondatageek"
],
"repo": "commondatageek/mark",
"url": "https://github.com/commondatageek/mark/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
689069270 | App Crash on Login
Summary:
Login leads to crash of app
Steps to reproduce:
Start App
Type in login data.
Hit "login" button.
System logs:
STACK_TRACE=java.lang.RuntimeException: Unable to start activity ComponentInfo{fr.free.nrw.commons/fr.free.nrw.commons.contributions.MainActivity}: java.lang.NullPointerException: Attempt to invoke virtual method 'int android.accounts.Account.hashCode()' on a null object reference
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2678)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2743)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1490)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6165)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:888)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:778)
Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'int android.accounts.Account.hashCode()' on a null object reference
at android.os.Parcel.readException(Parcel.java:1690)
at android.os.Parcel.readException(Parcel.java:1637)
at android.content.IContentService$Stub$Proxy.setSyncAutomaticallyAsUser(IContentService.java:1060)
at android.content.ContentResolver.setSyncAutomaticallyAsUser(ContentResolver.java:2107)
at android.content.ContentResolver.setSyncAutomatically(ContentResolver.java:2097)
at fr.free.nrw.commons.contributions.MainActivity.initMain(MainActivity.java:110)
at fr.free.nrw.commons.contributions.MainActivity.onCreate(MainActivity.java:84)
at android.app.Activity.performCreate(Activity.java:6687)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1140)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2631)
... 9 more
Device and Android version:
ANDROID_VERSION=7.1.1
PHONE_MODEL=Moto G Play
Commons app version:
APP_VERSION_CODE=561
APP_VERSION_NAME=2.12.3
Would you like to work on the issue?
I don't have the qualification to work on this issue
Hi, this bug should have been fixed in v2.13.1. Could you register for beta testing and let us know if it fixes the problem for you? Otherwise, the new version should be available to everyone by the end of the week (if everything goes well).
Sure.
How do I register for beta?
I installed the Beta but the problem still occurs.
From the crash log I've sent:
APP_VERSION_CODE=775
APP_VERSION_NAME=2.13.1
ANDROID_VERSION=7.1.1
PHONE_MODEL=Moto G Play
STACK_TRACE=java.lang.NullPointerException: Attempt to read from field 'java.lang.String android.accounts.Account.name' on a null object reference
at fr.free.nrw.commons.contributions.ContributionsFragment.setUploadCount(ContributionsFragment.java:363)
at fr.free.nrw.commons.contributions.ContributionsFragment.onCreateView(ContributionsFragment.java:166)
at androidx.fragment.app.Fragment.performCreateView(Fragment.java:2600)
at androidx.fragment.app.FragmentManagerImpl.moveToState(FragmentManagerImpl.java:881)
at androidx.fragment.app.FragmentManagerImpl.moveFragmentToExpectedState(FragmentManagerImpl.java:1238)
at androidx.fragment.app.FragmentManagerImpl.moveToState(FragmentManagerImpl.java:1303)
at androidx.fragment.app.BackStackRecord.executeOps(BackStackRecord.java:439)
at androidx.fragment.app.FragmentManagerImpl.executeOps(FragmentManagerImpl.java:2079)
at androidx.fragment.app.FragmentManagerImpl.executeOpsTogether(FragmentManagerImpl.java:1869)
at androidx.fragment.app.FragmentManagerImpl.removeRedundantOperationsAndExecute(FragmentManagerImpl.java:1824)
at androidx.fragment.app.FragmentManagerImpl.execSingleAction(FragmentManagerImpl.java:1696)
at androidx.fragment.app.BackStackRecord.commitNowAllowingStateLoss(BackStackRecord.java:299)
at androidx.fragment.app.FragmentPagerAdapter.finishUpdate(FragmentPagerAdapter.java:235)
at androidx.viewpager.widget.ViewPager.populate(ViewPager.java:1244)
at androidx.viewpager.widget.ViewPager.populate(ViewPager.java:1092)
at androidx.viewpager.widget.ViewPager.onMeasure(ViewPager.java:1622)
at android.view.View.measure(View.java:19883)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6087)
at android.widget.FrameLayout.onMeasure(FrameLayout.java:185)
at android.view.View.measure(View.java:19883)
at android.widget.RelativeLayout.measureChildHorizontal(RelativeLayout.java:715)
at android.widget.RelativeLayout.onMeasure(RelativeLayout.java:461)
at android.view.View.measure(View.java:19883)
at androidx.drawerlayout.widget.DrawerLayout.onMeasure(DrawerLayout.java:1119)
at android.view.View.measure(View.java:19883)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6087)
at android.widget.FrameLayout.onMeasure(FrameLayout.java:185)
at androidx.appcompat.widget.ContentFrameLayout.onMeasure(ContentFrameLayout.java:143)
at android.view.View.measure(View.java:19883)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6087)
at android.widget.LinearLayout.measureChildBeforeLayout(LinearLayout.java:1464)
at android.widget.LinearLayout.measureVertical(LinearLayout.java:758)
at android.widget.LinearLayout.onMeasure(LinearLayout.java:640)
at android.view.View.measure(View.java:19883)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6087)
at android.widget.FrameLayout.onMeasure(FrameLayout.java:185)
at android.view.View.measure(View.java:19883)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6087)
at android.widget.LinearLayout.measureChildBeforeLayout(LinearLayout.java:1464)
at android.widget.LinearLayout.measureVertical(LinearLayout.java:758)
at android.widget.LinearLayout.onMeasure(LinearLayout.java:640)
at android.view.View.measure(View.java:19883)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6087)
at android.widget.FrameLayout.onMeasure(FrameLayout.java:185)
at com.android.internal.policy.DecorView.onMeasure(DecorView.java:689)
at android.view.View.measure(View.java:19883)
at android.view.ViewRootImpl.performMeasure(ViewRootImpl.java:2293)
at android.view.ViewRootImpl.measureHierarchy(ViewRootImpl.java:1384)
at android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:1637)
at android.view.ViewRootImpl.doTraversal(ViewRootImpl.java:1272)
at android.view.ViewRootImpl$TraversalRunnable.run(ViewRootImpl.java:6408)
at android.view.Choreographer$CallbackRecord.run(Choreographer.java:874)
at android.view.Choreographer.doCallbacks(Choreographer.java:686)
at android.view.Choreographer.doFrame(Choreographer.java:621)
at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:860)
at android.os.Handler.handleCallback(Handler.java:751)
at android.os.Handler.dispatchMessage(Handler.java:95)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6165)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:888)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:778)
REPORT_ID=798a844b-fdd8-4ccf-953f-3b181b009244
@misaochan
This seems to be a different one from the one mentioned in the original issue log, @fnordson. So just to confirm: you are able to log in successfully, but as soon as you get logged in, the app crashes?
@ashishkumar0207
Just before the app disappears and the crash report pop-up comes up, I think I can see a small text saying that the login was successful. It all happens too fast, and I'm not sure if the text was there the last time.
From my standpoint it looks like the same behaviour:
Start App
Type in login data.
Hit "login" button.
App disappears and crash report pop-up comes up.
@misaochan Why has this been closed?
Is it supposed to be fixed (it isn't on my phone)?
Fixed in the next version?
The associated PR has been merged, so the issue is auto-closed. It will be released in the next hotfix, which should be soon.
| gharchive/issue | 2020-08-31T09:41:27 | 2025-04-01T06:38:15.390979 | {
"authors": [
"ashishkumar468",
"fnordson",
"misaochan"
],
"repo": "commons-app/apps-android-commons",
"url": "https://github.com/commons-app/apps-android-commons/issues/3914",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
137461605 | Headers to distinguish where suggestions come from
How about placing small headers to show why suggestions are made?
Prototype:
--- Nearby ---
Trafalgar Square
Buckingham Palace
--- Guessed from name ---
Demonstration
Climate change
--- Recent ---
Roti prata
Rendang
When I take a picture, I know already what type of suggestion will be correct. For instance if the last upload was of the same thing, I know I must select everything in "Recent". If I am in a famous place but photographing something unrelated to this place (for instance a vehicle) then I can ignore all "Nearby" suggestions. Etc. Having sections will thus make selection faster.
It could also allow making the suggestions list a bit longer without it feeling too long?
Hm, this sounds like a great suggestion, but I think I will have to pick between this and #70 for the grant proposal, due to the time limitations. Which do you think would be more important?
Difficult choice haha. It does not sound as difficult as anonymization, and will probably be more useful to most people.
Okay, this one it is then! :)
What do we want the headers to look like? Should they actually take up slots in the ListView like all the categories, just with the '--' to make it obvious that they are headers?
Is there an easy-to-use component that provides this? Either a ListView with a special setting, a child class of ListView, or a totally different component.
I was envisioning something like this, maybe with smaller and less eye-catching headers:
http://stacktips.com/tutorials/android/listview-with-section-header-in-android
Hm. This is turning out to be much more complicated than I expected.
I've tried to apply this tutorial (and other similar ones) to our code, but the main issue is that all the solutions that I've found involve adding the section headers to the ArrayList that is used as the data source in the adapter. So now our data source contains not just CategoryItems but also header items.
This could be workable, but in CategorizationFragment.java most of the existing code assumes that every item in the data source is a Category, and operates on it accordingly. Also most of the existing methods rely on using the index of an element in the ArrayList for category selection, assignment, etc. It feels rather messy to have headers be part of that array, and to have to keep checking for getItemViewType(position) whenever we want to perform any operations on categories? I tried this method anyway at https://github.com/misaochan/apps-android-commons/commits/header-list-2 , but stopped after it became unwieldy.
I wonder if there might be a better way to handle this, given that it is only a UI change, so we probably shouldn't be introducing all these extra elements into the data source?
Is there no way to have each row contain an object, and use this object rather than using the row's index?
I'm not sure, but it would involve an almost complete refactor of CategorizationFragment.
Alternatively, I was thinking of trying a 3rd party library: https://github.com/commonsguy/cwac-merge
The 3rd party libraries I mentioned are under an Apache 2.0 license. Should be okay to include them, right?
Apache 2.0 is OK, yes.
The 3rd party library works (as seen in https://github.com/misaochan/apps-android-commons/tree/header-list-cwac-merge , headers are produced for GPS, recent, and title cats), but further issues have been discovered along the way.
The main issue is that in order to get this to work, I have needed to create a separate adapter for the empty search field (which displays automatically-generated GPS, recent, and title cats along with their headers), and for the non-empty search field (which displays the results of a manual category search with no headers).
In the app currently, selected categories are aggregated and displayed at the top of the list, from any type of category suggestion (manual or automatic). This is really useful and something I don't want to lose. But with two separate adapters, I can't seem to get this to happen anymore - selected categories remain selected but are only displayed in their respective adapter (automatic cats that are selected only display when search field is empty, manual cats only display when search field is not empty).
At this stage I think the amount of work needed for me to complete this task is exceeding the benefits that it would bring (headers would be really nice but I don't think it is a dealbreaker for most users). It's entirely possible that I'm missing something simple, so I will leave this issue and my associated branch open, but will move on to the next task for now.
Would you mind posting a screenshot of what it looks like with header-list-cwac-merge? Thanks!
Sure. This is of the automatic categories (I hadn't hooked it up to the filterYears() method yet, so years are still being displayed):
And this is of the manual categories:
This looks like a superb idea, and the screenshots above are really helpful. Is this still something that needs to be implemented?
Chris.
Yes.
Could we add this to a target milestone, then; it's been open for eight years? It looks like people support it as an idea, so if we have a target release, it may attract volunteers who want to write the code.
Just an idea.
This isn't meant to sound harsh, but it feels like good ideas like this should at least be on the 'To Do' list.
Chris
| gharchive/issue | 2016-03-01T04:01:09 | 2025-04-01T06:38:15.414164 | {
"authors": [
"chrisdebian",
"misaochan",
"nicolas-raoul"
],
"repo": "commons-app/apps-android-commons",
"url": "https://github.com/commons-app/apps-android-commons/issues/76",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1213181883 | Fixes #4934 enforced Wikimedia character blacklisting on captions and…
Description (required)
Originally the caption and description TextViews would accept any characters. It was possible to upload files with banned characters. Now, an input filter is used to remove these banned characters.
Fixes #4934
What changes did you make and why?
A new InputFilter class was applied to the caption and description TextViews. This filter checks for any illegal characters and removes them. The set of illegal characters can be found here: blacklist.
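For readers unfamiliar with the mechanism, a rough sketch of what such a filter can look like; this is an illustrative example, not the exact class from this PR, and the character set shown is only a stand-in for the real blacklist:
import android.text.InputFilter
import android.text.Spanned

// Illustrative only: the real set comes from Wikimedia's title blacklist.
class BlacklistFilter(
    private val blacklist: Set<Char> = setOf('[', ']', '{', '}', '|', '#', '<', '>')
) : InputFilter {
    override fun filter(
        source: CharSequence, start: Int, end: Int,
        dest: Spanned, dstart: Int, dend: Int
    ): CharSequence? {
        val kept = source.subSequence(start, end).filterNot { it in blacklist }
        // Returning null keeps the input unchanged; otherwise return the cleaned text.
        return if (kept.length == end - start) null else kept
    }
}
// Usage (hypothetical view name): captionEditText.filters = arrayOf(BlacklistFilter())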
Tests performed (required)
Tested ProdDebug on Pixel 6 with API level 31.
Also included unit tests for the new InputFilter class.
I just tested, it works great and indeed fixes the bug.
Hi Alex!
Did you get a chance apply the minor modifications?
If you have no time, I can also do it for you, like you want :-)
Hi Nicolas.
Sorry for the delay, I had to do some traveling.
The new PR can be found here: #4955.
No problem, I hope you had a nice trip!
Tip: you can update an existing pull request, simply by pushing new commits to it.
| gharchive/pull-request | 2022-04-23T03:35:38 | 2025-04-01T06:38:15.419116 | {
"authors": [
"AlexMahlon",
"nicolas-raoul"
],
"repo": "commons-app/apps-android-commons",
"url": "https://github.com/commons-app/apps-android-commons/pull/4941",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1244241425 | [WIP] Explore nearby pictures test
Description (required)
Fixes #INSERT_ISSUE_NUMBER_HERE
What changes did you make and why?
Tests performed (required)
Tested {build variant, e.g. ProdDebug} on {name of device or emulator} with API level {API level}.
Screenshots (for UI changes only)
Need help? See https://support.google.com/android/answer/9075928
Note: Please ensure that you have read CONTRIBUTING.md if this is your first pull request.
This was successfully implemented in another PR I believe. :-)
| gharchive/pull-request | 2022-05-22T12:34:33 | 2025-04-01T06:38:15.422476 | {
"authors": [
"neslihanturan",
"nicolas-raoul"
],
"repo": "commons-app/apps-android-commons",
"url": "https://github.com/commons-app/apps-android-commons/pull/4969",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
984691369 | "Cannot read property 'length' of undefined" error appears when open easyCla page at PCC
Steps :
Login admin portal
Search Project which have cla enabled
Open easyCla page
Actual Results : "Cannot read property 'length' of undefined" error appears & page is inaccessible
ERROR TypeError: Cannot read property 'length' of undefined
at ActivityLogComponent_Template (activity-log.component.html:2)
at executeTemplate (core.js:9614)
at refreshView (core.js:9480)
at refreshComponent (core.js:10651)
at refreshChildComponents (core.js:9277)
at refreshView (core.js:9530)
at refreshEmbeddedViews (core.js:10605)
at refreshView (core.js:9504)
at refreshEmbeddedViews (core.js:10605)
at refreshView (core.js:9504)
https://images.zenhubusercontent.com/194341141/d542169a-41d2-40e1-b3e8-d9834843c814/easycla_error.mp4
@thakurveerendras the error is thrown on activity-log.component.html so looks like a PCC issue on the frontend
Thanks @nickmango ,
Added frontend ticket for above case (https://jira.linuxfoundation.org/browse/PCC-1591).
Closing as this appears to be a front-end issue in the PCC. Thanks!
| gharchive/issue | 2021-09-01T05:52:39 | 2025-04-01T06:38:15.429329 | {
"authors": [
"dealako",
"nickmango",
"thakurveerendras"
],
"repo": "communitybridge/easycla",
"url": "https://github.com/communitybridge/easycla/issues/3230",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
160486802 | Activate Lightbox with custom HTML
Let's say I don't want to use img and I want to activate the lightbox on custom HTML instead. How do I achieve this?
I actually tried it but console.log gave "Invalid Images"
Thanks in advance!
If you want to achieve a simple modal, without connection to any images, you should use the normal UI-Modal (https://angular-ui.github.io/bootstrap/#/modal).
| gharchive/issue | 2016-06-15T18:06:37 | 2025-04-01T06:38:15.441289 | {
"authors": [
"EyesOnlyNet",
"drixie"
],
"repo": "compact/angular-bootstrap-lightbox",
"url": "https://github.com/compact/angular-bootstrap-lightbox/issues/65",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1971153495 | ROE-2405 Trusts associated with OE
JIRA link
https://companieshouse.atlassian.net/browse/ROE-2405
Change description
Update index files to include new url with params
Update html for change link with params
Update trust involved page to redirect to next page with url params
Update add.trust to include ActiveSubmissionBasePath in html template
Work checklist
[x] Tests added where applicable
[ ] UI changes meet accessibility criteria
Merge instructions
We are committed to keeping commit history clean, consistent and linear. To achieve this, this commit should be structured as follows:
<type>[optional scope]: <description>
and contain the following structural elements:
fix: a commit that patches a bug in your codebase (this correlates with PATCH in semantic versioning),
feat: a commit that introduces a new feature to the codebase (this correlates with MINOR in semantic versioning),
BREAKING CHANGE: a commit that has a footer BREAKING CHANGE: introduces a breaking API change (correlating with MAJOR in semantic versioning). A BREAKING CHANGE can be part of commits of any type,
types other than fix: and feat: are allowed, for example build:, chore:, ci:, docs:, style:, refactor:, perf:, test:, and others,
footers other than BREAKING CHANGE: <description> may be provided.
looks ok to me once dans's comments resolved
| gharchive/pull-request | 2023-10-31T19:19:42 | 2025-04-01T06:38:15.446506 | {
"authors": [
"Zain-Abbas1",
"markpit"
],
"repo": "companieshouse/overseas-entities-web",
"url": "https://github.com/companieshouse/overseas-entities-web/pull/1181",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2585642279 | feat: add upcoming events page that displays all the upcoming events
This PR resolves #92 by displaying all the upcoming events in a single dedicated page.
Output:
It looks better. I have done #98 as well. | gharchive/pull-request | 2024-10-14T10:51:29 | 2025-04-01T06:38:15.477119 | {
"authors": [
"gaurovgiri"
],
"repo": "computerclubkec/computerclubkec.github.io",
"url": "https://github.com/computerclubkec/computerclubkec.github.io/pull/97",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1122039850 | Fix promise-handling bug in creation of planning documents
[#181124054]
Fixes a copy/paste/logic bug in planning document promise-handling which likely explains the occurrence of extraneous planning documents in some circumstances.
@scytacki I also added tests that would have caught the bug, officially closing the barn door. Adding the tests required setting up offline mocking of firebase that should prove useful for writing additional tests down the road.
The database mocking is complex. I could see it being hard to maintain. It would probably fail in a weird way if some code under test makes an unexpected firebase call. But let's see how it goes. :)
It's coupled tightly to the code under test, which certainly makes it more fragile, but is also why I restricted the mocking to within each individual test. At least this way when you're restructuring some code the tests of that specific code are likely to break, but it won't require updates to a shared mock that then cascades to other failing tests. In this particular case, the important point is that it tests the promise-handling logic, which was the point of the exercise.
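To make that trade-off concrete, here is a purely illustrative sketch of mocking firebase inside a single Jest test so the mock stays local to that test. The module paths, the mock shape, and the createPlanningDocument stand-in are all assumptions for illustration, not the actual test setup in this repo.
import firebase from "firebase/app";
import "firebase/database";

it("creates exactly one planning document", async () => {
  const set = jest.fn(() => Promise.resolve());
  const push = jest.fn(() => ({ key: "planning-doc-1", set }));
  // spy on firebase.database() inside this test only, so the mock lives
  // right next to the code it supports and is restored afterwards
  const dbSpy = jest.spyOn(firebase, "database")
    .mockReturnValue({ ref: jest.fn(() => ({ push })) } as any);

  // stand-in for the promise-handling code under test
  const createPlanningDocument = async () => {
    const ref = firebase.database().ref("planningDocuments");
    await ref.push().set({ createdAt: Date.now() });
  };

  await createPlanningDocument();

  expect(push).toHaveBeenCalledTimes(1); // no extraneous planning documents
  expect(set).toHaveBeenCalledTimes(1);
  dbSpy.mockRestore();
});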
| gharchive/pull-request | 2022-02-02T15:05:38 | 2025-04-01T06:38:15.808285 | {
"authors": [
"kswenson"
],
"repo": "concord-consortium/collaborative-learning",
"url": "https://github.com/concord-consortium/collaborative-learning/pull/1187",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1809820433 | Refactor how dragging to move objects works.
Simplifies how we track the temporary position of objects on the drawing canvas while they are being dragged to move them.
Previously this was done by holding an "objectsBeingDragged" list in state, hiding these objects in the render phase, and creating clones of these objects outside of the model with altered x and y coordinates.
The new code uses volatile dragX and dragY coordinates that are part of the object model and used by the normal render process. Having these be volatile prevents each step of the drag from adding an undo step.
This concept is borrowed from https://github.com/concord-consortium/quantity-playground/blob/main/src/diagram/models/dq-node.ts#L27
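As a rough sketch of the volatile-coordinates approach described above (model and property names here are simplified assumptions, not the actual drawing model in this repo), it can look something like this in mobx-state-tree:
import { types } from "mobx-state-tree";

const DrawingObject = types
  .model("DrawingObject", {
    x: 0,
    y: 0,
  })
  // volatile state is observable (so rendering reacts to it) but is not part
  // of snapshots/patches, so changing it does not create undo steps
  .volatile(() => ({
    dragX: undefined as number | undefined,
    dragY: undefined as number | undefined,
  }))
  .views((self) => ({
    get currentX() { return self.dragX ?? self.x; },
    get currentY() { return self.dragY ?? self.y; },
  }))
  .actions((self) => ({
    setDragPosition(x: number, y: number) {
      self.dragX = x;
      self.dragY = y;
    },
    endDrag() {
      // commit the final position as a single undoable model change
      if (self.dragX !== undefined && self.dragY !== undefined) {
        self.x = self.dragX;
        self.y = self.dragY;
      }
      self.dragX = undefined;
      self.dragY = undefined;
    },
  }));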
The UX looks good. I think I'm forgetting some of the tricky dragging and selection cases.
If you want to test the variable chip object you can use the example unit. Then add the second node-arc tile. The rollover of the button on the toolbar is "Diagram". Then add the drawing tool. Now you should see a V= tool bar button on the drawing tool. Clicking this will add a variable chip to the drawing tool.
Here is the link to test it on the deployed branch:
https://collaborative-learning.concord.org/branch/refactor-draw-object-drag/?appMode=dev&unit=example
Another nice side effect of this change is that if you open the same document on the left and right sides, when you drag the drawing objects around on the left you'll see them on the right side. Previously you'd only see the change on the left when you stopped dragging.
| gharchive/pull-request | 2023-07-18T12:23:55 | 2025-04-01T06:38:15.811886 | {
"authors": [
"bgoldowsky",
"scytacki"
],
"repo": "concord-consortium/collaborative-learning",
"url": "https://github.com/concord-consortium/collaborative-learning/pull/1820",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
151219721 | jobs API endpoint serves SVG badges of latest status
@xoebus did the heavy lifting.
typical URI:
https://ci.blabbertabber.com/api/v1/pipelines/main/jobs/always-succeeds/badge
Build Status            | Shield        | Implemented
nil (build never run)   | (badge image) | yes
StatusAborted           | (badge image) | yes
StatusSucceeded         | (badge image) | yes
StatusFailed            | (badge image) | yes
StatusErrored           | (badge image) | yes
Similar to the badges served by Travis CI
SVG created courtesy shields.io
Badge style: flat
Here's a deployment of a Concourse dev release with the badges: https://ci.blabbertabber.com:
And here are the badges (pulled from the live site!):
[fixes #72388008]
Signed-off-by: Brian Cunnie bcunnie@pivotal.io
looks great! prioritizing.
I need it so bad, looks great.
Will use it definitely :)
| gharchive/pull-request | 2016-04-26T19:30:51 | 2025-04-01T06:38:15.823917 | {
"authors": [
"cunnie",
"ernado",
"vito"
],
"repo": "concourse/atc",
"url": "https://github.com/concourse/atc/pull/77",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
305907581 | Payment method using coinpayments.net
Hi there, I am wondering if it would be possible to add https://www.coinpayments.net/ as a payment method onto the store as this is a quickly growing form of payment and it would also open more possibilities with what people are accepting etc.
There shouldn't be any restriction from the community store side of things to allow for integration here, it can handle both client side and server side handling in different ways to accommodate different payment gateways.
It really just comes down to the amount of work that might be required. Stripe for example has both a client side library (for the pop up form) as well as strong server side libraries, so it ends up being a bit easier than others. For this one, I'm not really seeing anything that handles the client side, so stuff like picking what cryptocurrency you want to use, really the final payment interface would all need to be written.
Personally I wouldn't have plans to work on this (I'd need a client to request it), but I'm happy to point someone in the right direction. The existing payment gateways do act as good code examples.
If a "client" did decide they wanted said feature, what's your estimate on pricing for its creation?
Furthermore, would it be something you'd consider if you were to be paid to develop this new payment method?
| gharchive/issue | 2018-03-16T12:20:57 | 2025-04-01T06:38:15.851011 | {
"authors": [
"Mesuva",
"hemipatu"
],
"repo": "concrete5-community-store/community_store",
"url": "https://github.com/concrete5-community-store/community_store/issues/349",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
868249323 | EasyPost will not load anymore.
www.mdius.com
Just spins on checkout again.
Was trying to delete my EasyPost USPS shipping and it crashed my shipping.
I cannot give error codes till tomorrow, but it is related to EasyPost having no index number.
You've helped me with this issue before.
It seems to be working now.
Was able to delete the problem and shipping works.
| gharchive/issue | 2021-04-26T22:26:31 | 2025-04-01T06:38:15.852960 | {
"authors": [
"mdius"
],
"repo": "concrete5-community-store/community_store",
"url": "https://github.com/concrete5-community-store/community_store/issues/610",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
543007394 | Update Dark Panels for greater clarity, fresher design palette.
These dark side panels are really hard on some screens/eyes and are really the only place where we go so low light/contrast. I do think it'd be nice if we had a 'dark mode' - but that should be all encompassing and higher contrast. Lets see some options for updating that which is grey on dark grey to a fresher (and more ADA friendly) palette.
Agree with @mlocati re. search block, content type selector menu should be the main event and search box then only show if necessary.
I like the block menu on single lines allowing for longer block names. Could the button container be extended full width as with the container menu above?
Also the popup menu entry to open the content sidebar is overdue a change from 'Add Block' to 'Add Content' (this ties in with the wording used in the top bar '+' icon tooltip).
Should Page Template and Theme even be on the same panel?
There are times when the grid of block icons is useful, and times when the list with room for names is useful. Can we have it toggle between the 2 views?
I like the colour @katalysis has added. More interesting than grey on grey.
Can the current Alt-click behaviour on the add '+' icon to lock the block panel be made more obvious? Many users never realise it is there. There are also inconsistencies in panel behaviour when the block add '+' is locked and switching between Blocks/Stacks/Clipboard.
When filtering the block add panel with a search, can the search be sticky, at least for the current page edit, or perhaps extended to the current user session?
Thinking of @fmaruna's liking for my 'recent buttons' quick pick on the Button Nav demo, can something similar be done for the block add panel? A section at the top of the block panel with a quick pick of the last few block types added for easy selection without needing to scroll down or search.
That would be for block types, unpopulated.
A different shortlist would be for the few most recently added/saved blocks to automatically be at the top of the Clipboard, or even another section for the last N blocks added, so there would be separate panels for 'Clipboard>' and 'Recent Blocks>'
Marking this as completed.
| gharchive/issue | 2019-12-27T22:23:33 | 2025-04-01T06:38:15.863313 | {
"authors": [
"JohntheFish",
"aembler",
"fmaruna",
"gary-portlandlabs",
"katalysis"
],
"repo": "concrete5/concrete5",
"url": "https://github.com/concrete5/concrete5/issues/8322",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
211097351 | Faster nginx rewrite rule
As discussed in Slack with @KorvinSzanto
PS: it requires some testing in case DIR_REL is not empty (i.e. concrete5 is installed in a subdirectory)
| gharchive/pull-request | 2017-03-01T14:29:42 | 2025-04-01T06:38:15.864490 | {
"authors": [
"mlocati"
],
"repo": "concrete5/concrete5",
"url": "https://github.com/concrete5/concrete5/pull/5155",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
341407410 | Fix infinite redirection visiting existing dirs when seo.trailing_slash is false
Condition:
using Apache with .htaccess set by concrete5
users visits an existing directory (eg /concrete/images)
Execution:
the browser visits /concrete/images
Apache sees that /concrete/images is not a file and it's not a folder containing index.php/index.html, and it sends the request to index.php but with /concrete/images/ as request path
Here's the mod_rewrite log:
strip per-dir prefix: /var/www/concrete/images -> concrete/images
applying pattern '.' to uri 'concrete/images'
RewriteCond: input='/var/www/concrete/images' pattern='!-f' => matched
RewriteCond: input='/var/www/concrete/images/index.html' pattern='!-f' => matched
RewriteCond: input='/var/www/concrete/images/index.php' pattern='!-f' => matched
rewrite 'concrete/images' -> 'index.php'
add per-dir prefix: index.php -> /var/www/index.php
trying to replace prefix /var/www/ with /
strip matching prefix: /var/www/index.php -> index.php
add subst prefix: index.php -> /index.php
internal redirect with /index.php [INTERNAL REDIRECT]
strip per-dir prefix: /var/www/concrete/images/ -> concrete/images/
applying pattern '.' to uri 'concrete/images/'
RewriteCond: input='/var/www/concrete/images/' pattern='!-f' => matched
RewriteCond: input='/var/www/concrete/images//index.html' pattern='!-f' => matched
RewriteCond: input='/var/www/concrete/images//index.php' pattern='!-f' => matched
rewrite 'concrete/images/' -> 'index.php'
add per-dir prefix: index.php -> /var/www/index.php
trying to replace prefix /var/www/ with /
strip matching prefix: /var/www/index.php -> index.php
add subst prefix: index.php -> /index.php
internal redirect with /index.php [INTERNAL REDIRECT]
strip per-dir prefix: /var/www/index.php -> index.php
applying pattern '.' to uri 'index.php'
RewriteCond: input='/var/www/index.php' pattern='!-f' => not-matched
pass through /var/www/index.php
the response factory sees that request URLs must not end with a slash (seo.trailing_slash is false), so it builds a redirect response to /concrete/images
go to point 1 until the browser stops the request because of too many redirects
PS: fix #6785
GitHub is having some issues: I pushed https://github.com/mlocati/concrete5/commit/bb60410b8253e79905fa1629a6349843e72b0d65 to my fix-infinite-redirection-existing-dir branch, but this PR isn't updated.
Ok, now it's here too
| gharchive/pull-request | 2018-07-16T06:54:33 | 2025-04-01T06:38:15.869835 | {
"authors": [
"mlocati"
],
"repo": "concrete5/concrete5",
"url": "https://github.com/concrete5/concrete5/pull/6903",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1218956157 | Release 0.3.2
Checklist
[x] Used a personal fork of the feedstock to propose changes
[ ] Bumped the build number (if the version is unchanged)
[x] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.
@conda-forge-admin, please rerender
| gharchive/pull-request | 2022-04-28T16:09:35 | 2025-04-01T06:38:15.874943 | {
"authors": [
"magnunor"
],
"repo": "conda-forge/atomap-feedstock",
"url": "https://github.com/conda-forge/atomap-feedstock/pull/10",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
161489647 | Update to 3.6
Using GitHub tarball.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
| gharchive/pull-request | 2016-06-21T17:02:38 | 2025-04-01T06:38:15.876079 | {
"authors": [
"conda-forge-linter",
"ocefpaf"
],
"repo": "conda-forge/ckanapi-feedstock",
"url": "https://github.com/conda-forge/ckanapi-feedstock/pull/2",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2453543813 | Don't patch llvm packages for new versions
Checklist
[ ] Used a static YAML file for the patch if possible (instructions).
[ ] Only wrote code directly into generate_patch_json.py if absolutely necessary.
[ ] Ran pre-commit run -a and ensured all files pass the linting checks.
[ ] Ran python show_diff.py and posted the output as part of the PR.
[ ] Modifications won't affect packages built in the future.
cc @beckermr @h-vetinari
The PR is clear and thank you! Can we xref any known issues about this or why we stopped patching now?
I don't remember why I patched it in the first place. Let's try not patching for new versions.
precommit.ci autofix
Thanks.
Can we xref any known issues about this or why we stopped patching now?
https://github.com/conda-forge/llvmdev-feedstock/pull/142, https://github.com/conda-forge/llvmdev-feedstock/issues/141 are the ones I'm aware of. Looking at the git history, this seems to come from #20, which doesn't have other references AFAICT.
| gharchive/pull-request | 2024-08-07T13:50:53 | 2025-04-01T06:38:15.883793 | {
"authors": [
"beckermr",
"h-vetinari",
"isuruf"
],
"repo": "conda-forge/conda-forge-repodata-patches-feedstock",
"url": "https://github.com/conda-forge/conda-forge-repodata-patches-feedstock/pull/821",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1071062026 | upgrade to 6.2.0
Checklist
[x] Used a personal fork of the feedstock to propose changes
[ ] Bumped the build number (if the version is unchanged)
[ ] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.
Hi! This is the friendly automated conda-forge-linting service.
I wanted to let you know that I linted all conda-recipes in your PR (recipe) and found some lint.
Here's what I've got...
For recipe:
requirements: run: colorama>=0.3.4 must contain a space between the name and the pin, i.e. colorama >=0.3.4
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
This fixes a few dependencies that the bot in #10 didn't get.
| gharchive/pull-request | 2021-12-04T00:06:38 | 2025-04-01T06:38:15.890091 | {
"authors": [
"conda-forge-linter",
"ickc"
],
"repo": "conda-forge/defopt-feedstock",
"url": "https://github.com/conda-forge/defopt-feedstock/pull/11",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2175753778 | DGL v2.1.0 w/ liburing
Checklist
[ ] Used a personal fork of the feedstock to propose changes
[ ] Bumped the build number (if the version is unchanged)
[ ] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.
@conda-forge-admin, please rerender
The broken link check is most likely due to conda-forge/admin-requests#958. @conda-forge-admin, please restart ci.
@conda-forge-admin, please rerender
@conda-forge-admin , please restart CI
Ok, seems everything is working now, except for the Osx builds, which are blocked by https://github.com/conda-forge/tensorflow-feedstock/pull/372 or https://github.com/conda-forge/tensorflow-feedstock/pull/374.
@conda-forge-admin, please rerender
@h-vetinari, would you have any hints on what to do about those stdlib2.12 cuda builds?
So I think what's happening is that - in a CUDA-enabled recipe - you're overriding this big zip, and as you're only overriding it partially (one key out of many), you need to match the length of the other keys in that zip.
I needed to do something similar in arrow to override the compiler version (note, you'd need a third entry for linux because the CUDA 12 migrator adds to the length of that zip).
Perhaps an easier alternative would be to set
os_version:
linux_64: cos7
in conda-forge.yml though, which should also ensure that you get the newer stdlib
@conda-forge-admin, please rerender
Seems this is finally ready, @conda-forge/dgl :tada: However, I suggest to first merge #38, then rebase this on top of the new main and only merge after that.
Thanks so much for fixing conflicts @zklaus
| gharchive/pull-request | 2024-03-08T10:38:17 | 2025-04-01T06:38:15.898665 | {
"authors": [
"h-vetinari",
"hmacdope",
"jakirkham",
"zklaus"
],
"repo": "conda-forge/dgl-feedstock",
"url": "https://github.com/conda-forge/dgl-feedstock/pull/36",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
223532171 | MNT: Re-render the feedstock [ci skip]
Hi! This is the friendly conda-forge-admin automated user.
I've re-rendered this feedstock with the latest version of conda-smithy (2.3.0) and noticed some changes.
If the changes look good, then please go ahead and merge this PR.
If you have any questions about the changes though, please feel free to ping the 'conda-forge/core' team (using the @ notation in a comment).
Remember, for any changes to the recipe you would normally need to increment the version or the build number of the package.
Since this is an infrastructural change, we don't actually need/want a new version to be uploaded to anaconda.org/conda-forge, so the version and build/number are left unchanged and the CI has been skipped.
Thanks!
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
| gharchive/pull-request | 2017-04-22T01:43:49 | 2025-04-01T06:38:15.902732 | {
"authors": [
"conda-forge-admin",
"conda-forge-linter"
],
"repo": "conda-forge/frosted-feedstock",
"url": "https://github.com/conda-forge/frosted-feedstock/pull/3",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
687333097 | Rebuild 2.7
Checklist
[ ] Used a personal fork of the feedstock to propose changes
[ ] Bumped the build number (if the version is unchanged)
[ ] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
| gharchive/pull-request | 2020-08-27T15:23:32 | 2025-04-01T06:38:15.906661 | {
"authors": [
"chrisburr",
"conda-forge-linter"
],
"repo": "conda-forge/gsoap-feedstock",
"url": "https://github.com/conda-forge/gsoap-feedstock/pull/33",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
367354741 | Release v1.1.0
Checklist
[x] Used a fork of the feedstock to propose changes
[x] Bumped the build number (if the version is unchanged)
[x] Reset the build number to 0 (if the version changed)
[x] Re-rendered with the latest conda-smithy
[x] Ensured the license file is being packaged.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
Closing in favor of #1.
Hi! This is the friendly automated conda-forge-webservice.
It appears you are making a pull request from a branch in your feedstock and not a fork. This procedure will generate a separate build for each push to the branch and is thus not allowed. See our documentation for more details.
Please close this pull request and remake it from a fork of this feedstock.
Have a great day!
| gharchive/pull-request | 2018-10-05T20:19:06 | 2025-04-01T06:38:15.922707 | {
"authors": [
"auneri",
"conda-forge-linter"
],
"repo": "conda-forge/ipyslurm-feedstock",
"url": "https://github.com/conda-forge/ipyslurm-feedstock/pull/2",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2440320422 | Update to v0.11.0
Checklist
[x] Used a personal fork of the feedstock to propose changes
[x] Bumped the build number (if the version is unchanged)
[x] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.
@conda-forge-admin, please rerender
| gharchive/pull-request | 2024-07-31T15:20:29 | 2025-04-01T06:38:15.925916 | {
"authors": [
"stephenworsley"
],
"repo": "conda-forge/iris-esmf-regrid-feedstock",
"url": "https://github.com/conda-forge/iris-esmf-regrid-feedstock/pull/16",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1007507268 | Update jupyter-sysml-kernel to 0.15.0
Checklist
[X] Used a personal fork of the feedstock to propose changes
[ ] Bumped the build number (if the version is unchanged)
[X] Reset the build number to 0 (if the version changed)
[X] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[X] Ensured the license file is being packaged.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
| gharchive/pull-request | 2021-09-26T21:22:50 | 2025-04-01T06:38:15.935396 | {
"authors": [
"conda-forge-linter",
"ivan-gomes"
],
"repo": "conda-forge/jupyter-sysml-kernel-feedstock",
"url": "https://github.com/conda-forge/jupyter-sysml-kernel-feedstock/pull/9",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1966983668 | update recipe for re-render
Checklist
[x] Used a personal fork of the feedstock to propose changes
[x] Bumped the build number (if the version is unchanged)
[ ] Reset the build number to 0 (if the version changed)
[x] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[x] Ensured the license file is being packaged.
This PR:
re-renders the recipe
replace summary, which appears to be for simplejson (?)
@conda-forge-admin, please rerender
@conda-forge-admin, please rerender
| gharchive/pull-request | 2023-10-29T13:06:36 | 2025-04-01T06:38:15.939398 | {
"authors": [
"jGaboardi"
],
"repo": "conda-forge/legendgram-feedstock",
"url": "https://github.com/conda-forge/legendgram-feedstock/pull/6",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2024224351 | Fix RPATH for libraries.
Checklist
[ ] Used a personal fork of the feedstock to propose changes
[ ] Bumped the build number (if the version is unchanged)
[ ] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.
This PR fixes an issue with RPATH entries in libraries.
See conda-forge/cuda-feedstock#10
Requires:
https://github.com/conda-forge/libcublas-feedstock/pull/14
https://github.com/conda-forge/cuda-nvrtc-feedstock/pull/3
https://github.com/conda-forge/libnvjitlink-feedstock/pull/3
| gharchive/pull-request | 2023-12-04T16:14:00 | 2025-04-01T06:38:15.943871 | {
"authors": [
"adibbley"
],
"repo": "conda-forge/libcusolver-feedstock",
"url": "https://github.com/conda-forge/libcusolver-feedstock/pull/6",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2742554557 | [DEVOPS-601] Update conda recipe with Jinja template
Checklist
[x] Used a personal fork of the feedstock to propose changes
[x] Bumped the build number (if the version is unchanged)
[ ] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.
@conda-forge-admin, please rerender
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe/meta.yaml) and found it was in an excellent condition.
I do have some suggestions for making it better though...
For recipe/meta.yaml:
ℹ️ noarch: python recipes should usually follow the syntax in our documentation for specifying the Python version.
For the host section of the recipe, you should usually use python {{ python_min }} for the python entry.
For the run section of the recipe, you should usually use python >={{ python_min }} for the python entry.
If the package requires a newer Python version than the currently supported minimum version on conda-forge, you can override the python_min variable by adding a Jinja2 set statement at the top of your recipe (or using an equivalent context variable for v1 recipes).
This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/12354822754. Examine the logs at this URL for more detail.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe/meta.yaml) and found it was in an excellent condition.
| gharchive/pull-request | 2024-12-16T14:33:19 | 2025-04-01T06:38:15.951532 | {
"authors": [
"SophieCurinier",
"conda-forge-admin"
],
"repo": "conda-forge/mira-simpeg-feedstock",
"url": "https://github.com/conda-forge/mira-simpeg-feedstock/pull/1",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
416988310 | add dep zlib
Checklist
[x] Used a fork of the feedstock to propose changes
[x] Bumped the build number (if the version is unchanged)
[ ] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
| gharchive/pull-request | 2019-03-04T20:54:43 | 2025-04-01T06:38:15.955027 | {
"authors": [
"conda-forge-linter",
"looooo"
],
"repo": "conda-forge/netgen-feedstock",
"url": "https://github.com/conda-forge/netgen-feedstock/pull/21",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1415652724 | Release v0.1.7
Checklist
[ ] Used a personal fork of the feedstock to propose changes
[ ] Bumped the build number (if the version is unchanged)
[ ] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
@conda-forge-admin, please rerender
| gharchive/pull-request | 2022-10-19T22:28:29 | 2025-04-01T06:38:15.958867 | {
"authors": [
"conda-forge-linter",
"mattwthompson"
],
"repo": "conda-forge/openff-utilities-feedstock",
"url": "https://github.com/conda-forge/openff-utilities-feedstock/pull/7",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2008637676 | version 0.23.0
Checklist
[x] Used a personal fork of the feedstock to propose changes
[x] Bumped the build number (if the version is unchanged)
[x] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[x] Ensured the license file is being packaged.
@conda-forge-admin, please rerender
| gharchive/pull-request | 2023-11-23T17:55:55 | 2025-04-01T06:38:15.969639 | {
"authors": [
"johntruckenbrodt"
],
"repo": "conda-forge/pyrosar-feedstock",
"url": "https://github.com/conda-forge/pyrosar-feedstock/pull/28",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
866988583 | try running test suite
fix #14
Checklist
[ ] Used a personal fork of the feedstock to propose changes
[ ] Bumped the build number (if the version is unchanged)
[ ] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
@jan-janssen now running test suite
it's only 17 tests, and not all executables are tested but I guess it's a good start
@conda-forge-admin, please rerender
@ltalirz thanks a lot
@ltalirz By the way if you want to add yourself as a maintainer feel free to do so.
| gharchive/pull-request | 2021-04-25T11:36:28 | 2025-04-01T06:38:15.975081 | {
"authors": [
"conda-forge-linter",
"jan-janssen",
"ltalirz"
],
"repo": "conda-forge/qe-feedstock",
"url": "https://github.com/conda-forge/qe-feedstock/pull/15",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1403954808 | [do not merge; testing stuff] 4.1: Reenable Windows builds
Checklist
[ ] Used a personal fork of the feedstock to propose changes
[ ] Bumped the build number (if the version is unchanged)
[ ] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
https://dev.azure.com/conda-forge/feedstock-builds/_build/results?buildId=582673&view=logs&j=a70f640f-cc53-5cd3-6cdc-236a1aa90802 built successfully!
I'm re-enabling the other platforms again.
Thanks
| gharchive/pull-request | 2022-10-11T03:44:19 | 2025-04-01T06:38:15.979820 | {
"authors": [
"conda-forge-linter",
"isuruf",
"mbargull"
],
"repo": "conda-forge/r-base-feedstock",
"url": "https://github.com/conda-forge/r-base-feedstock/pull/224",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1072466425 | up to v0.0.37
up version to 0.0.37
reformat about section
Checklist
[x] Used a personal fork of the feedstock to propose changes
[x] Bumped the build number (if the version is unchanged)
[ ] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
@conda-forge-admin, please rerender
| gharchive/pull-request | 2021-12-06T18:27:12 | 2025-04-01T06:38:15.991364 | {
"authors": [
"conda-forge-linter",
"ngam"
],
"repo": "conda-forge/singularity-hpc-feedstock",
"url": "https://github.com/conda-forge/singularity-hpc-feedstock/pull/1",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2610214698 | Dependency update
Update dependencies
bump version
Checklist
[ ] Used a personal fork of the feedstock to propose changes
[ ] Bumped the build number (if the version is unchanged)
[ ] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe/meta.yaml) and found it was in an excellent condition.
I do have some suggestions for making it better though...
For recipe/meta.yaml:
No valid build backend found for Python recipe for package snowflake-ml-python using pip. Python recipes using pip need to explicitly specify a build backend in the host section. If your recipe has built with only pip in the host section in the past, you likely should add setuptools to the host section of your recipe.
| gharchive/pull-request | 2024-10-24T02:06:31 | 2025-04-01T06:38:15.996825 | {
"authors": [
"conda-forge-admin",
"sfc-gh-srudenko"
],
"repo": "conda-forge/snowflake-ml-python-feedstock",
"url": "https://github.com/conda-forge/snowflake-ml-python-feedstock/pull/11",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1370558945 | Add python-dateutil as a dependency
Checklist
[x] Used a personal fork of the feedstock to propose changes
[x] Bumped the build number (if the version is unchanged)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[x] Ensured the license file is being packaged.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
@conda-forge-admin, please rerender
| gharchive/pull-request | 2022-09-12T21:50:47 | 2025-04-01T06:38:16.082785 | {
"authors": [
"conda-forge-linter",
"tomvothecoder"
],
"repo": "conda-forge/xcdat-feedstock",
"url": "https://github.com/conda-forge/xcdat-feedstock/pull/12",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
620762679 | disable compilers when dependencies are extra
For the following package, tornado_sqlalchemy_login, there is a dependency on pybind11, but it is just an extra and only used in dev mode. However, grayskull is adding the compilers anyway. It should not add them.
grayskull is fetching the extras but not using them yet (see https://github.com/conda-incubator/grayskull/issues/150)
So how is grayskull "adding" the compilers and in what section?
that is correct now, and thanks again for the feature that you developed :)
| gharchive/issue | 2020-05-19T07:58:12 | 2025-04-01T06:38:16.084827 | {
"authors": [
"marcelotrevisani",
"woutdenolf"
],
"repo": "conda-incubator/grayskull",
"url": "https://github.com/conda-incubator/grayskull/issues/129",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1912226039 | Add template-files composite action
Description
We've been using github.com/BetaHuhn/repo-file-sync-action to synchronize files from conda/infrastructure for a while now and have found that it doesn't perform in the way we need it to (bulk syncing with templating), the upstream author is not particularly responsive anymore, and since the upstream action is written in NodeJS/Typescript its not particularly simple to fork and fix.
Thus we implement a small composite action that implements a pull process and uses Jinja as the templating engine. Since this is a decentralized pull process this also gives more control to individual repos instead of having a central sync conductor.
I have verified that the workflow still works in https://github.com/conda-sandbox:
https://github.com/conda-sandbox/upstream mocks https://github.com/conda/infrastructure
https://github.com/conda-sandbox/downstream mocks any of the conda projects, e.g.: https://github.com/conda/conda
See a scheduled run (where new updates are fetched from upstream): https://github.com/conda-sandbox/downstream/pull/19
And a manually triggered run (where we enable nested templating features): https://github.com/conda-sandbox/downstream/pull/20
| gharchive/pull-request | 2023-09-25T20:22:30 | 2025-04-01T06:38:16.089810 | {
"authors": [
"kenodegard"
],
"repo": "conda/actions",
"url": "https://github.com/conda/actions/pull/127",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1460497674 | Feature: Source connector modes
Feature description
Add the ability for the source connector to work in the following modes:
read existing data, enable CDC
ignore existing data, enable CDC
It was added in this PR:
https://github.com/conduitio-labs/conduit-connector-stripe/pull/8
| gharchive/issue | 2022-11-22T20:18:06 | 2025-04-01T06:38:16.120355 | {
"authors": [
"maksenius",
"voscob"
],
"repo": "conduitio-labs/conduit-connector-stripe",
"url": "https://github.com/conduitio-labs/conduit-connector-stripe/issues/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
599630036 | Apache.Avro support for .net framework 4.x
Why was .NET Framework support removed in Apache.Avro 1.9.1 onwards? Is there a plan to support .NET Framework in upcoming versions? Is there any way for .NET Framework 4.x applications to consume the latest Apache.Avro changes?
Previously, Confluent maintained a fork of the official repo to facilitate rapidly making the changes required to use the library with Confluent.Kafka. In 1.9.1, the official Avro library incorporated all our bug fixes and overtook our fork in terms of features, and we switched over to it in Confluent.Kafka 1.4. Our fork supported net451, but the official library only supports back to netstandard2.0, which is compatible with net461 and above.
| gharchive/issue | 2020-04-14T14:47:07 | 2025-04-01T06:38:16.133879 | {
"authors": [
"PaluruSumanth",
"mhowlett"
],
"repo": "confluentinc/confluent-kafka-dotnet",
"url": "https://github.com/confluentinc/confluent-kafka-dotnet/issues/1248",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
654641695 | Q: transactional producer configs for multi-broker (cluster) behind load balancer
Description
I am trying the following:
Set up two brokers behind a load balancer (which will route traffic based on availability, load, etc.).
Create a producer (with transaction support) by setting the load balancer host address as the bootstrap server.
It works fine as long as there are no disconnects, but one fine day the following happens:
%6|1591244167.286|FAIL|app-trx#producer-2| [thrd:sasl_plaintext://load-balanacer-host:9092/bootstrap]: sasl_plaintext://load-balanacer-host:9092/bootstrap: Disconnected (after 1516513ms in state UP)
%3|1591244167.286|ERROR|app-trx#producer-2| [thrd:sasl_plaintext://load-balanacer-host:9092/bootstrap]: sasl_plaintext://load-balanacer-host:9092/bootstrap: Disconnected (after 1516513ms in state UP)
%3|1591244453.733|FAIL|app-trx#producer-2| [thrd:sasl_plaintext://broker-1-ip:9092/1001]: sasl_plaintext://broker-1-ip:9092/1001: Connect to ipv4#broker-1-ip:9092 failed: No route to host (after 3006ms in state CONNECT)
%3|1591244453.733|ERROR|app-trx#producer-2| [thrd:sasl_plaintext://broker-1-ip:9092/1001]: sasl_plaintext://broker-1-ip:9092/1001: Connect to ipv4#broker-1-ip:9092 failed: No route to host (after 3006ms in state CONNECT)
%3|1594301839.904|ADDPARTS|app-trx#producer-2| [thrd:main]: TxnCoordinator/1002: Failed to add partition "topic-name" [0] to transaction: Broker: Producer attempted to use a producer id which is not currently assigned to its transactional id
%1|1594301839.904|TXNERR|app-trx#producer-2| [thrd:main]: Fatal transaction error: Failed to add partitions to transaction: Broker: Producer attempted to use a producer id which is not currently assigned to its transactional id (INVALID_PRODUCER_ID_MAPPING)
%3|1594301839.904|ERROR|app-trx#producer-2| [thrd:main]: Fatal error: Broker: Producer attempted to use a producer id which is not currently assigned to its transactional id: Failed to add partitions to transaction: Broker: Producer attempted to use a producer id which is not currently assigned to its transactional id
Did I configure something wrong? Please advise.
I checked zkCli for /brokers/topics/topic-name, and it gave the following:
{"version":1,"partitions":{"1":[1001,1002],"0":[1002,1001]}}
How to reproduce
Not an issue.
Checklist
Please provide the following information:
[ ] A complete (i.e. we can run it), minimal program demonstrating the problem. No need to supply a project file.
[x] Confluent.Kafka nuget version. 1.4.0
[ ] Apache Kafka version.
[ ] Client configuration.
[ ] Operating system.
[ ] Provide logs (with "debug" : "..." as necessary in configuration).
[ ] Provide broker log excerpts.
[ ] Critical issue.
Setup Two brokers behind a load balancer(which will route traffic based on availability, load, etc).
Kafka + protocol take care of availability / load balancing for you already - you shouldn't try to use a load balancer for this purpose. The .net client expects to maintain connections directly to brokers as it requires. I'm aware of some people using a level of indirection on the bootstrap servers to allow them to switch out broker addresses (if all other connections fail, the bootstrap server addresses will be used to find the cluster again), but I haven't done this personally, and I believe there is some devil in the detail here.
| gharchive/issue | 2020-07-10T09:17:08 | 2025-04-01T06:38:16.140338 | {
"authors": [
"gelihu",
"mhowlett"
],
"repo": "confluentinc/confluent-kafka-dotnet",
"url": "https://github.com/confluentinc/confluent-kafka-dotnet/issues/1350",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
736292562 | Not able to send messages to kafka asynchronously and get a callback to handle the errors (if any) later
Description
We are using Confluent.Kafka 1.5.2. For serializing keys and values, we use Confluent.SchemaRegistry.Serdes.Avro 1.5.2.
Apache.Avro contract version is 1.9.1.
We are trying to use the below Produce API to send messages asynchronously and get a callback at a later point. We plan to handle the errors via callback.
void Produce(string topic, Message<TKey, TValue> message, Action<DeliveryReport<TKey, TValue>> deliveryHandler = null);
At runtime, we are getting the error "System.InvalidOperationException: Produce called with an IAsyncSerializer key serializer configured but an ISerializer is required.
at Confluent.Kafka.Producer`2.Produce(TopicPartition topicPartition, Message`2 message, Action`1 deliveryHandler)"
How to reproduce
producerBuilder = new ProducerBuilder<string, T>(producerconfig)
.SetKeySerializer(new Confluent.SchemaRegistry.Serdes.AvroSerializer<string>(schemaRegistryconfig))
.SetValueSerializer(new Confluent.SchemaRegistry.Serdes.AvroSerializer<T>(schemaRegistryconfig))
.Build();
producerBuilder.Produce(topicName, data, CallbackHandler);
Checklist
Please provide the following information:
[ ] A complete (i.e. we can run it), minimal program demonstrating the problem. No need to supply a project file.
[x] Confluent.Kafka nuget version. 1.5.2
[ ] Apache Kafka version.
[ ] Client configuration.
[ ] Operating system.
[ ] Provide logs (with "debug" : "..." as necessary in configuration).
[ ] Provide broker log excerpts.
[x] Critical issue.
you need to convert the async serializer into a sync one. there's a helper method for this:
using Confluent.Kafka.SyncOverAsync;
then use .AsSyncOverAsync() on the serializer instance.
| gharchive/issue | 2020-11-04T17:24:29 | 2025-04-01T06:38:16.146352 | {
"authors": [
"AdikaSinghvi",
"mhowlett"
],
"repo": "confluentinc/confluent-kafka-dotnet",
"url": "https://github.com/confluentinc/confluent-kafka-dotnet/issues/1448",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1132170619 | cp-ansible ignores ssl.principal.rules ( w/ MTLS AUTH ) just extract the full subject from the JKS/certificate to super.users
hello,
We want to use MTLS auth between Kafka brokers and are trying to use principal.rules to extract data from the certificate's subject, but cp-ansible ignores this parameter and exports the full subject, adding it to the super.users list (with manual adding it's OK).
https://github.com/confluentinc/cp-ansible/blob/fd56742ff5b63a75d4be64b7c5d4118b68fbe2ee/roles/kafka_broker/tasks/set_principal.yml#L22-L55
Here is where this logic is missing!
Please fix it!
Thanks
Hello @buznyusz
Can you please have a look at the PR https://github.com/confluentinc/cp-ansible/pull/905
This is available in 7.1.x onwards.
Let me know if this serves the purpose here. Thanks!
| gharchive/issue | 2022-02-11T09:23:41 | 2025-04-01T06:38:16.188814 | {
"authors": [
"buznyusz",
"nsharma-git"
],
"repo": "confluentinc/cp-ansible",
"url": "https://github.com/confluentinc/cp-ansible/issues/912",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1138689560 | feature: Retry count available in header
[x] docs
[x] update / link to from issues
[x] include issue number
[x] Remove WorkContainer from any public API - e.g. delay provider in options
Spirit of:
#65
Redundant by
#216
Blocked by:
#223
Redundant by
#216
| gharchive/pull-request | 2022-02-15T13:23:15 | 2025-04-01T06:38:16.219931 | {
"authors": [
"astubbs"
],
"repo": "confluentinc/parallel-consumer",
"url": "https://github.com/confluentinc/parallel-consumer/pull/197",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
158414165 | Login only form
How do I modify this so that there is only a Login form and no Registration form? It is an internal app I am working on, and the logic is simply that the username and password (hashed) must match an initial username and password created once at app launch. Thanks
The form would be basically the same just with fewer fields:
var loginFormSchema = new SimpleSchema({
username: {
type: String,
max: 60,
label: "Username"
},
password: {
type: String,
max: 60,
min: 8,
label: "Password"
}
});
LoginForm = React.createClass({
_onSubmit(doc) {
Meteor.loginWithPassword(doc.username, doc.password, (error, result) => {
// Handle error/success
})
},
render() {
return (
<Form schema={loginFormSchema} id="login-form" onSubmit={this._onSubmit}>
<TextInput name="username" />
<TextInput name="password" type="password" />
<SubmitButton label="Login" />
</Form>
)
}
});
Near the bottom, do you mean
Thanks
| gharchive/issue | 2016-06-03T17:20:18 | 2025-04-01T06:38:16.236165 | {
"authors": [
"coniel",
"scheung38"
],
"repo": "coniel/meteor-react-form-handler",
"url": "https://github.com/coniel/meteor-react-form-handler/issues/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2124174781 | enabled param is not taken into consideration with infinite queries
Hi, there was an issue about the enabled param not working in useQuery that seems to have been fixed in this commit; unfortunately the problem still persists with useInfiniteQuery, and headers provided to useInfiniteQuery don't seem to be sent either, but maybe the issues are related.
Can you check this issue, please?
Yeah looks like I forgot to port that to useInfiniteQuery. Will submit a fix shortly. In the meantime, you can use the 'disableQuery' symbol as the input to that api.
Regarding headers not being passed through, how are you passing the headers in?
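For illustration, the suggested workaround might look something like the sketch below; the listItems descriptor and its fields are hypothetical, and the exact option names may differ between connect-query versions.
import { useInfiniteQuery, disableQuery } from "@connectrpc/connect-query";
// hypothetical generated query descriptor for an ItemService.ListItems RPC
import { listItems } from "./gen/item_service-ItemService_connectquery";

export function useItems(authReady: boolean) {
  return useInfiniteQuery(
    listItems,
    // passing the disableQuery sentinel instead of a real input skips the
    // request entirely until auth headers are ready
    authReady ? { pageSize: 10, pageToken: "" } : disableQuery,
    {
      pageParamKey: "pageToken",
      getNextPageParam: (lastPage) => lastPage.nextPageToken,
    },
  );
}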
As for the headers, I think it was just due to the disable param not being considered, since the query is supposed to be enabled only after some auth headers are computed. That's why I didn't find them in the logs. So the headers are fine, I think. It's just the disabled param that needs fixing 👍
Thank you @paul-sachs for the fast action 🎉🎉🎉. Any rough estimate on when this will be released?
Should be able to get to a release later today. Just trying to nail down if a previous change warrants a 2.0 since it changed the queryKey used by infiniteQuery.
Awesome 😎, thank you very much @paul-sachs 🙏
Fixed in https://github.com/connectrpc/connect-query-es/releases/tag/v1.2.0
| gharchive/issue | 2024-02-08T01:00:11 | 2025-04-01T06:38:16.244714 | {
"authors": [
"acbeni",
"paul-sachs"
],
"repo": "connectrpc/connect-query-es",
"url": "https://github.com/connectrpc/connect-query-es/issues/336",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1348349771 | As a user, I need my message to be processed on its destination domain
Problem
See https://github.com/connext/nxtp/issues/1714
Subtask: proveAndProcess
We need to implement relayer transmission cycle for proveAndProcess
Ideas to solve this
Implementation plan:
[subgraph] track dispatch() messages at Origin domain subgraph
[subgraph] track proveAndProcess() messages at Destination domain subgraph
[cartographer] update and add new fields to Cartographer
[lighthouse] In the lighthouse, pull all the transfers from cartographers where processed: false | null, and call proveAndProcess() it
Once this will start working we will create a new user story from the dev perspective to add sanity checks for root submission, or additional checks depending on domains and have an ideal time around calling proveAndProcess() and polling.
Subtasks
[x] https://github.com/connext/nxtp/issues/1753
[x] https://github.com/connext/nxtp/issues/1791
[x] https://github.com/connext/nxtp/issues/1792
[x] https://github.com/connext/nxtp/issues/1793
Acceptance Criteria
slow path transfers are working.
completed
| gharchive/issue | 2022-08-23T18:08:49 | 2025-04-01T06:38:16.251198 | {
"authors": [
"jakekidd",
"sanchaymittal"
],
"repo": "connext/nxtp",
"url": "https://github.com/connext/nxtp/issues/1761",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
484781199 | Feature request: composeReducers
Type
Feature request
Wot's the request
I think it'd be pretty cool if this library introduced a new method, composeReducers, that managed composing multiple reducers into one reducer.
More info
With a typical reducer API, like the one used in useReducer, composing reducers is straightforward. One way to do it is:
export default function composeReducers(reducers) {
  return (state, action) =>
    reducers.reduceRight(
      (prevState, reducer) => reducer(prevState, action),
      state
    );
}
It is quite a bit more complex to compose reducers with use-reducer-with-side-effects. A few of the things you have to keep in mind are:
the return value of a reducer is no longer the input for the next one, since they receive state as an input but output both state and side effects
since the sideEffects don't pass through to the next reducer, you must keep and concatenate an array of all of the returned side effects outside of the loop
a Symbol is used for NoUpdates in this lib, rather than the previous state object, so you must track the # of Symbols returned to determine if an update actually occurred
None of these points are insurmountable, but it is a bit to think about as a consumer of this lib who is trying to split out a big reducer into smaller pieces. A composeReducers export from this lib would make it easier for devs to organize their reducers in this way.
Other notes
If this isn't added, then it might be a good idea to explicitly export NO_UPDATE_SYMBOL so that folks can reliably check for no updates when writing their own composer fn. (It can be accessed by calling NoUpdate() but that seems a lil hacky for this purpose imo)
Prior art
Redux - composeReducers
Here's an example implementation in userland that seems to be working. It could probably be tidied up a bit with direct access to NO_UPDATE_SYMBOL.
If you're :+1: to this idea, I can make a PR. Lmk what you think!
love the idea, have been thinking about implementation myself
i published 2.0.0 on the beta tag with all changes integrated
Closing since this is resolved :v:
| gharchive/issue | 2019-08-24T05:31:18 | 2025-04-01T06:38:16.261953 | {
"authors": [
"conorhastings",
"jamesplease"
],
"repo": "conorhastings/use-reducer-with-side-effects",
"url": "https://github.com/conorhastings/use-reducer-with-side-effects/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
442028994 | Add BT Coin Token
To ensure the Token Profile passes review smoothly, please confirm each of the following checklist items and provide the project information:
.json and .png file checklist
[x] logo dimensions: 120*120 px
[x] logo background: transparent
[x] The token has at least one successful transfer on record
[x] For ERC-20 projects: the contract address uses mixed case (checksummed)
[x] The .json and .png file names are identical (including capitalization)
Project information:
Team background: The Btoken blockchain exchange cloud platform was built by the early founding team of the former Alibaba (TTPod technical team) music player; at its peak the TTPod app surpassed 400 million users, at one point exceeding Alipay's user count.
At the end of 2017 the team moved into blockchain, and in 2018 it independently developed its first public chain and the BTOKEN exchange cloud platform.
Project overview: The platform offers projects self-built exchanges and one-click token issuance, and the main site lists mainstream trading pairs and quality project tokens.
The platform issues its own token, "BT", with a total supply of 100 million: 5% for the team, 5% for ranking rewards, and the remaining 90% sold in batches; the USDT and fees collected are used to buy back and burn BT continuously, with no cap on the amount burned. After the BT sale is complete, fees will continue to fund BT buybacks and burns.
The BT public chain will also go live when the time is right, with BT serving as gas and fees on the chain. The chain's code is already open source: https://github.com/btoken-io/btoken
Media coverage:
Blockchain exchange cloud platform BTOKEN launches today; platform token BT gains more than 500%
Official announcement:
BToken exchange cloud platform official notice ** please visit on a mobile device **
Listed exchanges: Btoken exchange cloud platform
Gas limit required for transfers (for ERC-20 projects): (default is 60000)
https://github.com/consenlabs/token-profile/blob/master/tutorial/erc20-tutorial.zh-CN.md#资料的完整和准确
Sorry, the wording in the announcement needs to be adjusted. Completing a token profile is only meant to help users better understand the token; it does not constitute an "official partnership".
I have updated the announcement, please take a look: (BToken exchange cloud platform official notice)[https://www.btokens.net/notice/] ** the notice page currently requires a mobile device **
Sorry, the wording in the announcement needs to be adjusted. Completing a token profile is only meant to help users better understand the token; it does not constitute an "official partnership".
I have revised the announcement, please check whether there are any remaining issues.
@tstt
Please review.
You are welcome to recommend in your community that users manage their assets with imToken 2.0.
| gharchive/pull-request | 2019-05-09T03:09:23 | 2025-04-01T06:38:16.269242 | {
"authors": [
"LiHang941",
"tstt"
],
"repo": "consenlabs/token-profile",
"url": "https://github.com/consenlabs/token-profile/pull/4093",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
482700951 | Kaia token
Official announcement:
https://www.facebook.com/111393203545176/posts/115920343092462?s=1568473956&sfns=mo
Team background:
In 2018 the company partnered with one of the largest Singapore AI blockchain foundations to jointly develop KAIA; the company acquired several strong AI companies at home and abroad and applied blockchain technology to combine them with AI, forming more powerful big-data processing.
Project overview:
KAIA is a decentralized, non-monopolistic, customizable open service platform for blockchain AI infrastructure. It takes blockchain technology as its core, relies on its unique, secure, and stable big-data processing capability, is grounded in AI modeling and prediction, uses AI quantitative hedging trading as its application scenario, and offers services such as controlled data and model usage and frictionless payments, building a highly technical blockchain service platform.
Official website: "http://www.kaiacoin.net/index.php?m=Home&c=Index&a=login&language=zh-cn"
Suggested gas limit: 60000
There are two issues that need to be resolved:
Regarding the checksum
https://github.com/consenlabs/token-profile/blob/master/tutorial/erc20-tutorial.zh-CN.md#具体如何操作
The channel used to publish the "official announcement" must be reachable from your official website
| gharchive/pull-request | 2019-08-20T07:51:52 | 2025-04-01T06:38:16.273605 | {
"authors": [
"cacker931",
"tstt"
],
"repo": "consenlabs/token-profile",
"url": "https://github.com/consenlabs/token-profile/pull/4743",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1165045218 | Add CNDL
Please check the following to ensure a successful verification of Token Profile
.json and .png document checklists
[x] logo dimensions: 120*120 px
[x] logo background: Transparent
[x] At least one transaction on the token contract
[x] For ERC-20 project teams: Upper and lower capitalized alphabets in token address (For checksum purposes)
[x] Same document name for both .json and .png (Including capitalization)
Project background:
Official announcement: https://twitter.com/candle_labs
Project team background: https://candlelabs.org
Project basic information: Decentralized Human-focused metaverse - Candle (CNDL) is the utility and governance token for the Candle Protocol. This protocol is designed to decentralize social media and build out a community governed and moderated place to discuss and share thoughts and ideas.
Media publications: CoinTelegraph
Tradeable exchanges: Decoin, Tokpie, BankCEX, Coinbase
Official Announcement:
Twitter Link
Recommended Gas Limit (For ERC-20 project teams) for transaction: 40,000
1.Your PR is missing project information
2.Official Twitter is missing imToken listing announcement
3.Please don't upload more than one logo
1.Your PR is missing project information
2.Official Twitter is missing imToken listing announcement
3.Please don't upload more than one logo
Fixing now
Twitter Link
Please merge
Merge me
Please merge
Please do not delete in the local database
Fixed it
Please do not delete in the local database
Good to merge? Thanks
You have not uploaded the LOGO, please do not make changes in your database
You have not uploaded the LOGO, please do not make changes in your database
Fixed
Not sure if the project is still being maintained.
There are problems with the submitted information, so I will close this PR for now.
| gharchive/pull-request | 2022-03-10T10:36:08 | 2025-04-01T06:38:16.282346 | {
"authors": [
"kaiko1203",
"makoshan",
"samisbakedham",
"zengbing15"
],
"repo": "consenlabs/token-profile",
"url": "https://github.com/consenlabs/token-profile/pull/7328",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2279770059 | 🛑 chattedechevre.partouze-cagoule.fr is down
In 1c006ba, chattedechevre.partouze-cagoule.fr (https://chattedechevre.partouze-cagoule.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: chattedechevre.partouze-cagoule.fr is back up in a553398.
| gharchive/issue | 2024-05-05T22:40:57 | 2025-04-01T06:38:16.322266 | {
"authors": [
"trivoallan"
],
"repo": "constructions-incongrues/status",
"url": "https://github.com/constructions-incongrues/status/issues/3470",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2381996980 | 🛑 computertruck.parishq.net is down
In 292862a, computertruck.parishq.net (https://computertruck.parishq.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: computertruck.parishq.net is back up in b88ed91.
| gharchive/issue | 2024-06-29T22:36:08 | 2025-04-01T06:38:16.325369 | {
"authors": [
"trivoallan"
],
"repo": "constructions-incongrues/status",
"url": "https://github.com/constructions-incongrues/status/issues/4222",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2420944949 | 🛑 foutre.partouze-cagoule.fr is down
In d3eab9f, foutre.partouze-cagoule.fr (https://foutre.partouze-cagoule.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: foutre.partouze-cagoule.fr is back up in 332b203 after 8 hours, 30 minutes.
Resolved: foutre.partouze-cagoule.fr is back up in 9eeb317 after 8 hours, 30 minutes.
| gharchive/issue | 2024-07-20T15:30:00 | 2025-04-01T06:38:16.329909 | {
"authors": [
"trivoallan"
],
"repo": "constructions-incongrues/status",
"url": "https://github.com/constructions-incongrues/status/issues/5499",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1431338679 | feat!: rename user to ubuntu
Rename default user from user back to ubuntu
closes #572
Surprised to learn we have some hardcoded PATHs though
yes, there the env isn't yet available 🙃
| gharchive/pull-request | 2022-11-01T12:19:22 | 2025-04-01T06:38:16.334613 | {
"authors": [
"viceice"
],
"repo": "containerbase/base",
"url": "https://github.com/containerbase/base/pull/575",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
830523277 | Windows Containers are leaked when container fails to start
Description
On Windows, when passing a command to a container to run, if the command isn't valid then containerd gives an error and cleans up its resources, but HCS still shows a container on the system. If you re-run a container with the same name, you get an error.
Steps to reproduce the issue:
ctr image pull k8s.gcr.io/pause:3.4.1
ctr.exe run --rm k8s.gcr.io/pause:3.4.1 test-1 echo c:\License.txt
ctr: hcsshim::System::CreateProcess test-1: The system cannot find the file specified.
(extra info: {"CommandLine":"echo c:\\License.txt","User":"ContainerUser","Environment":{"PATH":"C:\\Windows\\system32;C:\\Windows;"},"CreateStdInPipe":true,"CreateStdOutPipe":true,"CreateStdErrPipe":true}): unknown
Containerd doesn't know about the container:
ctr c ls
CONTAINER IMAGE RUNTIME
ctr t ls
TASK PID STATUS
but hcs has the container in the list:
hcsdiag list
test-1
Windows Server Container, Unknown
Try to run a valid command:
ctr.exe run --rm k8s.gcr.io/pause:3.4.1 test-1 cmd /c echo c:\License.txt
ctr: hcsshim::CreateComputeSystem test-1: A virtual machine or container with the specified identifier already exists.
(extra info: {"Owner":"containerd-shim-runhcs-v1.exe","SchemaVersion":{"Major":2,"Minor":1},"Container":{"Storage":{"Layers":[{"Id":"e67d9508-fe76-54e0-b098-cabfb7356fd7","Path":"C:\\ProgramData\\containerd\\root\\io.containerd.snapshotter.v1.windows\\snapshots\\24"},{"Id":"e
3732ab4-a8b2-5962-8e88-a0a76ee83d86","Path":"C:\\ProgramData\\containerd\\root\\io.containerd.snapshotter.v1.windows\\snapshots\\23"},{"Id":"51d377df-069c-5b3a-8cb0-e86be7e7cfbe","Path":"C:\\ProgramData\\containerd\\root\\io.containerd.snapshotter.v1.windows\\snapshots\\22"},
{"Id":"994b900c-05b9-5a61-b236-35342d301241","Path":"C:\\ProgramData\\containerd\\root\\io.containerd.snapshotter.v1.windows\\snapshots\\21"}],"Path":"\\\\?\\Volume{0cdda4ac-837b-11eb-b279-845c959994a0}\\"},"Networking":{"AllowUnqualifiedDnsQuery":true,"Namespace":"45B223AA-1
9BF-40D6-9F54-9578BA121B94"}},"ShouldTerminateOnLastHandleClosed":true}): unknown
Describe the results you received:
Describe the results you expected:
What version of containerd are you using:
Tried with 1.4 and 1.5
$ containerd --version
containerd github.com/containerd/containerd v1.4.3 269548fa27e0089a8b8278fc4fc781d7f65a939b
containerd github.com/containerd/containerd v1.5.0-beta.3 02334356d0774a5b194e67b5f1383fd2485ea67a
Any other relevant information (runC version, CRI configuration, OS/Kernel version, etc.):
runc --version
$ runc --version
crictl info
$ crictl info
uname -a
$ uname -a
cmd /c ver
Microsoft Windows [Version 10.0.17763.1577]
/cc @kevpar
@claudiubelu
I've tried this as well with containerd v1.4.3, and can confirm the issue. I've also tried something similar with crictl, but that's not an issue there, the failed container still shows up when listing the containers (crictl --runtime-endpoint=npipe://./pipe/containerd-containerd ps), can be removed, and another can be created in its stead, which can then be started.
I've tried it out using the latest main version and latest hcsshim, and this problem doesn't occur anymore:
PS C:\tmp> ctr.exe --address //./pipe//run/containerd-test/containerd run --rm k8s.gcr.io/pause:3.4.1 test-1 echo c:\License.txt
ctr: hcs::System::CreateProcess test-1: The system cannot find the file specified.: unknown
PS C:\tmp> ctr.exe --address //./pipe//run/containerd-test/containerd run --rm k8s.gcr.io/pause:3.4.1 test-1 cmd /c echo c:\License.txt
c:\License.txt
PS C:\tmp> ctr.exe --address //./pipe//run/containerd-test/containerd container list
CONTAINER IMAGE RUNTIME
I've added a test that will make sure we don't have any regressions on this: https://github.com/containerd/containerd/pull/5578
I think we can close this issue. As seen in the previous comments, we haven't seen this anymore, and we have a test for it as well.
/close
| gharchive/issue | 2021-03-12T21:50:47 | 2025-04-01T06:38:16.350638 | {
"authors": [
"claudiubelu",
"jayunit100",
"jsturtevant"
],
"repo": "containerd/containerd",
"url": "https://github.com/containerd/containerd/issues/5188",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
214823103 | Proposal: Add Clone function on Snapshotter
Proposal
Add a Clone function to the Snapshotter interface which can capture the state of an active snapshot. The output is always a writable active snapshot. It can be committed immediately without making any changes, or it may be further altered and then committed. Like any other active snapshot, it must be committed or removed. Clone on a committed snapshot allows renaming of the committed snapshot without modification.
Additionally, formalize that Commit finalizes an active snapshot, not allowing Remove after Commit or multiple calls to Commit on the same snapshot. This is already the current behavior of the overlay driver and should be formalized in the interface. With the existence of the Snapshot function there is no need to support multiple calls to Commit. The use case for multiple Commit would be covered by calling Snapshot + Commit.
Diff
// After commit, the snapshot identified by key is removed.
Commit(ctx context.Context, name, key string, opts ...Opt) error
+ // Clone copies a snapshot to a new snapshot of the same type.
+ //
+ // An active clone will create a new active snapshot with the same
+ // content but changes in one will not be reflected in the other.
+ // A committed clone will create a new read only snapshot of the
+ // same content.
+ //
+ // Options:
+ // WithParent defines the parent to use for the target.
+ // If the target parent differs from the source, the snapshotter
+ // may either rebase or return ErrNotImplemented
+ Clone(ctx context.Context, target, source string, opts ...Opt) error
+
// Remove the committed or active snapshot by the provided key.
WithParent option
Used to support parent verification or rebasing. If the parent does not exist, then the operation must error with ErrNotFound. If the parent exists, but is not the source parent, the snapshotter must either perform a rebase (applying the changes between source and source's parent to the parent) or return ErrNotImplemented (should not return ErrInvalidArgument).
WithLabels option
The new snapshot will use the labels provided through the WithLabels option. Options should not be copied over from the source
Cross namespace copying
A source namespace could be provided on the context to support cloning across namespaces. In this case the source and target may be the same name. Using the same source and target within a single namespace will always return ErrAlreadyExists.
Snapshot flow diagram changes
I diagrammed what the changes may look like to the snapshot model.
https://docs.google.com/a/docker.com/presentation/d/1LYVEkP6VdYV96IO5lhxTwujtpilhzNPAouO0qGUa4z8/edit?usp=sharing
Overlay implementation
When Clone is called on an active overlay snapshot, the content of the snapshotted upper directory will be copied to a new active snapshot. When Clone is called on a committed overlay snapshot, the new snapshot will just point to the same on-disk layer.
Btrfs implementation
Just uses btrfs snapshots the way Commit did previously. As part of this change Commit will also remove the original source after the snapshot is completed. Cloning a committed snapshot will just point to the original source.
google doc seems private :sweat_smile:
@AkihiroSuda updated, didn't realize default scope even with a share link was org restricted. Please give it a try.
Thanks for drawing new diagrams! A few questions:
Would every Snapshotter be required to implement an online, mutable copy as part of Snapshot?
On slide 5, what makes undefined(a, P1) different from Prepare(a, P1)? Is it the fact that P2's parent is P0?
We decided this dual nature was kind of confusing so left it open as a possible future function, name yet to be defined. [...] Should I just remove the slide?
I think that makes sense unless it's something we still want to solve for 1.0.0. In my view, it's better to document the design that we're implementing than future potential options that we're not.
@samuelkarp agreed, when we update the design we will not include that slide. Just made the slides as a place to capture the diagrams for the future doc update.
So is this Snapshot function just there to support the multiple-Commit use case?
Also, this means consumers of snapshot drivers must decide up front whether they want to make any changes after commit, e.g. an image builder should use Snapshot + Commit for intermediate builds and may use Commit to squash the image.
Also, what will the properties of a committed layer be? i.e. will it be equivalent to a View, or is a View still required for read-only operations? Currently the overlay driver gives the error "Cannot commit view" if I try to do multiple commits (https://github.com/docker/containerd/pull/582).
This has been implemented as a subcommand in ctr
Renaming this, it threw me off as well. This is about a potential function on the Snapshotter interface not ctr snapshot.
Looking at this with experience eyes, I think there is a definite use case for this proposal for backups and other online snapshotting.
I think for sanity's sake, we should consider calling it Clone or Copy.
Adding to the 1.2 milestone.
Looking at this with experience eyes, I think there is a definite use case for this proposal for backups and other online snapshotting.
Do we support using a c8d snapshot as a persistent data volume?
Updated this with more details, considering this for 1.2 to support BuildKit cross namespace exporting
Mostly LGTM on this proposal. Glad its here.
A source namespace could be provided on the context to support cloning across namespaces. In this case the source and target may be the same name. Using the same source and target within a single namespace will always return ErrAlreadyExists.
Might want to detail this workflow a little more. How are the source and target namespaces specified?
This hasn't been in a milestone for a few releases; do we want to resurrect this and possibly see if there is interest in the community to work on it?
Just to chime in with some community interest, I just finished up a bunch of work for the upcoming Buildkit release that required extending the containerd snapshotter interface with a method called Merge. Merge is different than Clone, but has a lot of conceptual overlap with the WithParent rebasing feature described here and probably could have taken advantage of Clone if it had existed. If interested, background on the Buildkit features that required Merge are in these docs.
While I can't just literally upstream Merge and rename the method to Clone due to the differences, it would still be very nice to be able to use Clone in Merge's implementation if nothing else, so if there's still interest in Clone existing I could potentially help out with it (time permitting of course).
| gharchive/issue | 2017-03-16T20:06:19 | 2025-04-01T06:38:16.368310 | {
"authors": [
"AkihiroSuda",
"crosbymichael",
"dmcgowan",
"estesp",
"kunalkushwaha",
"samuelkarp",
"sipsma",
"stevvooe"
],
"repo": "containerd/containerd",
"url": "https://github.com/containerd/containerd/issues/633",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2360902250 | 1.7: Add back support for OTLP config from toml
This was broken due to 9360e37169f2ba3135f7a6f39a3ab7c9231abbd6 (backport from main).
The backport should not have broken the toml config so this more or less restores that.
This treats the env vars as preferred.
It is not perfect since, as an example, Endpoint could be set on the config and the env to set the protocol could be set and these could conflict, however I think this is an unlikely scenario and people should use one or the other, not both.
This change converts the config into the relevant environment variables before initializing tracers.
Fixes #10358
cc @vvoland
@cpuguy83 may need a similar patch for 1.6 as well, or at least I see there was a backport to 1.6 as well;
https://github.com/containerd/containerd/pull/9993
| gharchive/pull-request | 2024-06-18T23:22:59 | 2025-04-01T06:38:16.371736 | {
"authors": [
"cpuguy83",
"thaJeztah"
],
"repo": "containerd/containerd",
"url": "https://github.com/containerd/containerd/pull/10360",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
379349925 | [release/1.1] cherry-pick: enhance: update v1/v2 runtime
Cherrypick of https://github.com/containerd/containerd/pull/2769.
For https://github.com/containerd/containerd/issues/2709.
Only v1 change is cherry-picked.
Signed-off-by: Wei Fu fuweid89@gmail.com
Codecov Report
Merging #2774 into release/1.1 will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## release/1.1 #2774 +/- ##
============================================
Coverage 48.99% 48.99%
============================================
Files 85 85
Lines 7603 7603
============================================
Hits 3725 3725
Misses 3203 3203
Partials 675 675
Flag: #linux | Coverage: 48.99% <ø> (ø) :arrow_up:
Continue to review full report at Codecov.
| gharchive/pull-request | 2018-11-09T22:53:07 | 2025-04-01T06:38:16.379195 | {
"authors": [
"Random-Liu",
"codecov-io"
],
"repo": "containerd/containerd",
"url": "https://github.com/containerd/containerd/pull/2774",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
501375953 | Added filters to snapshots API
Fixes #3708
Added filters to snapshots and walk function interface.
@Zyqsempai thanks for working on this. Can you sign your commits with the proper DCO? https://github.com/containerd/project/blob/master/CONTRIBUTING.md#sign-your-work
Each commit will need to be signed, feel free to keep them separate or squash and force push your branch.
The vendored snapshotters (aufs, zfs) need to be updated to the new Walk(..) signature before this change can pass CI:
cmd/containerd/builtins.go:1: vendor/github.com/containerd/aufs/aufs.go:81:9: cannot use &(snapshotter literal) (value of type *snapshotter) as snapshots.Snapshotter value in return statement: wrong type for method Walk (typecheck)
Of course there is a bit of a chicken-and-egg problem in getting those fixed and re-vendored without this change to the signature being merged--might have to temporarily point their vendors to a branch of containerd with the change?
@estesp I opened PR's for them also.
Codecov Report
Merging #3709 into master will increase coverage by 3.4%.
The diff coverage is 50.74%.
@@ Coverage Diff @@
## master #3709 +/- ##
=========================================
+ Coverage 41.98% 45.38% +3.4%
=========================================
Files 131 118 -13
Lines 14536 11699 -2837
=========================================
- Hits 6103 5310 -793
+ Misses 7525 5476 -2049
- Partials 908 913 +5
Flag: #linux | Coverage: 45.38% <50.74%> (-0.03%) :arrow_down:
Flag: #windows | Coverage: ?
Impacted Files (Coverage Δ):
snapshots/storage/bolt.go: 59.12% <10%> (-0.6%) :arrow_down:
snapshots/btrfs/btrfs.go: 57.39% <100%> (-0.9%) :arrow_down:
snapshots/devmapper/snapshotter.go: 66.39% <100%> (ø) :arrow_up:
snapshots/native/native.go: 52.68% <100%> (+10.26%) :arrow_up:
snapshots/overlay/overlay.go: 52.52% <100%> (ø) :arrow_up:
metadata/snapshot.go: 56.44% <50%> (+8.62%) :arrow_up:
metadata/adaptors.go: 54.63% <90.47%> (+14.16%) :arrow_up:
remotes/docker/fetcher.go: 42.5% <0%> (-7.5%) :arrow_down:
remotes/docker/auth.go: 63.82% <0%> (-3.97%) :arrow_down:
remotes/docker/status.go: 21.42% <0%> (-3.58%) :arrow_down:
... and 83 more
Continue to review full report at Codecov.
Basically, LGTM. But the CI green can't be trusted because the vendor check was broken 6 days ago. I think we can remove the out-of-tree snapshotter plugins from vendor.conf shortly and then add them back after this one is merged. Is that ok?
@fuweid I agree, I was going to do that and forgot. Commented those out from vendor for now
| gharchive/pull-request | 2019-10-02T09:25:17 | 2025-04-01T06:38:16.397546 | {
"authors": [
"Zyqsempai",
"codecov-io",
"crosbymichael",
"dmcgowan",
"estesp",
"fuweid"
],
"repo": "containerd/containerd",
"url": "https://github.com/containerd/containerd/pull/3709",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1217716862 | Bump opencontainers/selinux from 1.10.0 to 1.10.1
Full Changelog:
https://github.com/opencontainers/selinux/compare/v1.10.0...v1.10.1
Specifically patch https://github.com/opencontainers/selinux/pull/173 which fixes the issue: https://github.com/containerd/containerd/issues/6767
Signed-off-by: Nabeel Rana nabeelnrana@gmail.com
Looks like you may have to run go mod tidy from within the integration/client directory as well (which also has a go.mod);
Files /home/runner/work/containerd/containerd/src/github.com/containerd/containerd/integration/client/go.mod and /tmp/tmp.PYcoKntieN/containerd/integration/client/go.mod differ
Files /home/runner/work/containerd/containerd/src/github.com/containerd/containerd/integration/client/go.sum and /tmp/tmp.PYcoKntieN/containerd/integration/client/go.sum differ
/ok-to-test
| gharchive/pull-request | 2022-04-27T18:12:51 | 2025-04-01T06:38:16.400837 | {
"authors": [
"log1cb0mb",
"thaJeztah"
],
"repo": "containerd/containerd",
"url": "https://github.com/containerd/containerd/pull/6865",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1517815074 | [refactor] Add global flag process helper function
I'm so sorry I have to make a huge PR again.
After #1793, we have a types.GlobalCommandOptions, and for #1792, #1791, and many refactor PRs in the future, we may need to introduce a new helper function to process the global flags.
In this PR, I introduce the new helper function and use it to process the global flags in all subcommands.
There may be some duplicated code for some subcommands like nerdctl run; I think this should not be a problem, and I will merge it when I refactor the run command.
Signed-off-by: Zheao.Li me@manjusaka.me
(Not request changes)
In the current PR, most commands are refactored into a half-way state (e.g., the global flag is processed, sometimes repeatedly, whenever it's needed). Ideally, it should only be processed once at the entry of the command, and other usages should use the processed struct.
As an example, getVolumeStore calls processGlobalFlags, and getComposer calls getVolumeStore, which means each compose commands call processGlobalFlags at least twice (one in getVolumeStore, one in its own command if global flags are used).
Either way, we should highlight that helper functions (e.g., getVolumeStore) should be refactored to not call processGlobalFlags directly and instead use the struct created by the command.
nice catch. I have updated the whole PR to make sure processGlobalFlags is called once for each command
@AkihiroSuda PTAL
| gharchive/pull-request | 2023-01-03T19:18:16 | 2025-04-01T06:38:16.405230 | {
"authors": [
"Zheaoli"
],
"repo": "containerd/nerdctl",
"url": "https://github.com/containerd/nerdctl/pull/1797",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1718245291 | drop dependency on xfs_freeze
Reported via email; we should instead just directly invoke the ioctl. ostree has code for this we could make into a public C API, including the complex dance of doing this by forking a helper process to be robust against interruption.
This was done in https://github.com/containers/bootc/pull/102/commits/15de723ac2a9707523967aa4ee5c29875b94baa8
| gharchive/issue | 2023-05-20T18:10:48 | 2025-04-01T06:38:16.442041 | {
"authors": [
"cgwalters"
],
"repo": "containers/bootc",
"url": "https://github.com/containers/bootc/issues/101",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2440000300 | WIP: Update nixpkgs
What type of PR is this?
/kind dependency-change
What this PR does / why we need it:
Bump nixpkgs.
Which issue(s) this PR fixes:
None
Special notes for your reviewer:
None
Does this PR introduce a user-facing change?
None
:warning: Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 35.75%. Comparing base (4e0f474) to head (2444a51).
Report is 632 commits behind head on main.
:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
Additional details and impacted files
@@ Coverage Diff @@
## main #2350 +/- ##
==========================================
- Coverage 37.53% 35.75% -1.78%
==========================================
Files 15 15
Lines 1268 1264 -4
Branches 414 420 +6
==========================================
- Hits 476 452 -24
- Misses 526 552 +26
+ Partials 266 260 -6
@kwilczynski PTAL
/approve
/lgtm
/lgtm
| gharchive/pull-request | 2024-07-31T12:58:02 | 2025-04-01T06:38:16.488033 | {
"authors": [
"codecov-commenter",
"kwilczynski",
"rphillips",
"saschagrunert"
],
"repo": "containers/conmon-rs",
"url": "https://github.com/containers/conmon-rs/pull/2350",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1316796519 | Use --systemd-cgroup for runtime commands
What type of PR is this?
/kind bug
What this PR does / why we need it:
This fixes the passing of systemd related annotations to the config, for example: https://github.com/opencontainers/runc/pull/2224
Which issue(s) this PR fixes:
None
Special notes for your reviewer:
None
Does this PR introduce a user-facing change?
Using `--systemd-cgroup` for runtime commands.
PTAL @rphillips @haircommander
lgtm
I'll defer to @haircommander for the final lgtm
Peter is AFK. Going to lgtm and we can iterate if we would like to change this PR further.
/lgtm
miss click
/lgtm
| gharchive/pull-request | 2022-07-25T12:54:11 | 2025-04-01T06:38:16.492014 | {
"authors": [
"rphillips",
"saschagrunert"
],
"repo": "containers/conmon-rs",
"url": "https://github.com/containers/conmon-rs/pull/552",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1369610672 | Add critest run to CI
What type of PR is this?
/kind ci
What this PR does / why we need it:
We now run a critest suite with a pinned CRI-O release for each PR.
Which issue(s) this PR fixes:
None
Special notes for your reviewer:
Needs https://github.com/containers/conmon-rs/pull/702 to not timeout during the attach test.
Does this PR introduce a user-facing change?
None
/lgtm
| gharchive/pull-request | 2022-09-12T10:04:08 | 2025-04-01T06:38:16.494486 | {
"authors": [
"haircommander",
"saschagrunert"
],
"repo": "containers/conmon-rs",
"url": "https://github.com/containers/conmon-rs/pull/703",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1073061562 | Do not upload tarball any more
What type of PR is this?
/kind failing-test
What this PR does / why we need it:
There is no need for the artifact right now, so we can stop uploading it on every commit.
Which issue(s) this PR fixes:
None
Special notes for your reviewer:
None
Does this PR introduce a user-facing change?
None
Merging to unblock the CI
| gharchive/pull-request | 2021-12-07T08:27:10 | 2025-04-01T06:38:16.506632 | {
"authors": [
"saschagrunert"
],
"repo": "containers/containrs",
"url": "https://github.com/containers/containrs/pull/705",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2581138179 | Serve command fails silently
Hello,
Running the serve command with a wrong model name outputs a container ID, but the container dies instantly without giving a reason.
Here is an example reproducer:
$ ramalama serve huggingface://ibm-granite/granite-7b-instruct
6aa6dc6cca217b28656990f69a7c602c28c40f64289d8cdb5c55e6ecccdc0828
$ ramalama ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$
I tried to add a helpful message by calling the pull command before serve, but it looks like there is another issue in the glob f"{args.store}/models/*/{model}": the Model class doesn't have a __str__ method, so the path never matches because it looks like:
ramalama/models/*/<ramalama.huggingface.Huggingface object at 0x7fa1c6e659a0>
How would you recommend to fix that issue?
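For illustration only, here is a minimal, self-contained sketch of the interpolation problem described above; the class name, attribute, and store path below are made-up stand-ins, not the actual ramalama code:

# Hypothetical stand-in for the model class; only the missing __str__ matters here.
class Huggingface:
    def __init__(self, name):
        self.name = name

    # Without this __str__, f"{model}" falls back to the default repr,
    # e.g. "<__main__.Huggingface object at 0x7f...>", so the glob pattern
    # below could never match a real path on disk.
    def __str__(self):
        return self.name

model = Huggingface("granite-7b-instruct")
pattern = f"/var/lib/ramalama/models/*/{model}"
print(pattern)  # /var/lib/ramalama/models/*/granite-7b-instruct

With __str__ defined (or by interpolating an explicit string attribute instead of the object), the glob expands to a usable pattern.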
I kinda wish we didn't detach by default because of reasons like this
I meant to add a check before calling ramalama.run_container in the Process CLI entrypoint, to ensure the model is pulled before starting the service.
The problem with just adding a check for the file, is that only helps debug one very specific issue. Like it is already checked if the file exists just in llama-server and we can't see the output
Right, so not detaching by default sounds great to me, may I propose such a change?
| gharchive/issue | 2024-10-11T11:27:35 | 2025-04-01T06:38:16.612376 | {
"authors": [
"TristanCacqueray",
"ericcurtin"
],
"repo": "containers/ramalama",
"url": "https://github.com/containers/ramalama/issues/285",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
260391923 | WIP: Huge Page Support: Add support for huge pages (needs qemu revendor)
Add support to launch virtual machines where the RAM is
allocated using huge pages. This is useful for running
with a user mode networking stack, and for custom setups
which require high performance and low latency.
Note: Needs revendoring of ciao/qemu changes https://github.com/ciao-project/ciao/pull/1449
Signed-off-by: Manohar Castelino manohar.r.castelino@intel.com
lgtm
just to be clear - this is 'pre-allocated' with huge pages is it?
no matter...
lgtm
@mcastelino Please rebase, the needed qemu changes should be there now.
LGTM
Coverage decreased (-0.01%) to 65.928% when pulling 27902cf28573d310ebe192bb682f5da8dbf02207 on mcastelino:topic/huge_pages into 04e519d758c64d52b51670fac68d58f2ddb752c7 on containers:master.
| gharchive/pull-request | 2017-09-25T19:41:08 | 2025-04-01T06:38:16.618057 | {
"authors": [
"coveralls",
"grahamwhaley",
"jodh-intel",
"mcastelino",
"sameo"
],
"repo": "containers/virtcontainers",
"url": "https://github.com/containers/virtcontainers/pull/384",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1104142109 | Ensure youki runs under podman
We need to be more defensive when deleting a container as podman executes delete twice and cleans up the cgroup of the container before we have a chance to clean it up.
Fixes #607
Codecov Report
Merging #613 (d0d756a) into main (52f83a0) will decrease coverage by 0.03%.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## main #613 +/- ##
==========================================
- Coverage 70.04% 70.01% -0.04%
==========================================
Files 82 82
Lines 10924 10933 +9
==========================================
+ Hits 7652 7655 +3
- Misses 3272 3278 +6
Wow, I was surprised to know that.
podman executes delete twice
LGTM
| gharchive/pull-request | 2022-01-14T21:33:03 | 2025-04-01T06:38:16.621576 | {
"authors": [
"Furisto",
"codecov-commenter",
"utam0k"
],
"repo": "containers/youki",
"url": "https://github.com/containers/youki/pull/613",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
500738297 | Self-generated certificate is regenerated every 15 seconds
Do you want to request a feature or report a bug?
Bug
Did you try using a 1.7.x configuration for the version 2.0?
[ ] Yes
[X] No
What did you do?
I configured Traefik with Docker provider and swarmMode=true, and I deployed a service with an HTTPS entrypoint and no tls options.
What did you expect to see?
Traefik should generate a certificate for HTTPS and keep it, so that it can be added to trusted certificates in a browser.
What did you see instead?
Traefik generates a new certificate for HTTPS every 15 seconds, so the browser keeps telling that the certificate is not trusted.
Output of traefik version: (What version of Traefik are you using?)
Version: 2.0.1
Codename: montdor
Go version: go1.13.1
Built: 2019-09-26T16:18:03Z
OS/Arch: linux/amd64
If applicable, please paste the log output in DEBUG level (--log.level=DEBUG switch)
time="2019-09-30T13:54:52Z" level=debug msg="No default certificate, generating one"
...
time="2019-09-30T13:55:07Z" level=debug msg="No default certificate, generating one"
...
time="2019-09-30T13:55:22Z" level=debug msg="No default certificate, generating one"
Hi @bgranvea ,
Thanks for your interest in the project.
It seems to be a duplicate of #5381
So I will close this issue.
hum yes maybe it is the same root cause as #5381 but with swarmMode=true, it seems even worse as the certificate is regenerated even when no new container is deployed.
And I don't understand why you don't consider #5381 as a bug: IMHO the self-generated certificate feature is not usable with a browser.
Hi @bgranvea, as for "swarmMode" making this behavior even worse, it's related to issue #5419. That issue has been closed and the fix will be in the next 2.0.x release. You can already test it with the image containous/traefik:experimental-v2.0 if you want to live on the edge :)
As for #5381, we consider the "self generated certificate" a "degraded" mode. If you're willing to use a self-signed certificate for all of your services, we recommend generating one and setting Traefik to use it as explained in https://docs.traefik.io/v2.0/https/tls/. I personally use the awesome tool mkcert https://github.com/FiloSottile/mkcert, which generates certificates for multiple domains and wildcards including *.local, and I install the CA in my browsers.
| gharchive/issue | 2019-10-01T08:20:39 | 2025-04-01T06:38:16.628890 | {
"authors": [
"bgranvea",
"dduportal",
"mmatur"
],
"repo": "containous/traefik",
"url": "https://github.com/containous/traefik/issues/5557",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
501141004 | docker compose traefik2.0 not work with other docker services
Do you want to request a feature or report a bug?
Bug
Did you try using a 1.7.x configuration for the version 2.0?
[ ] Yes
[x] No
What did you do?
I set up a docker-compose.yaml file with the Docker example whoami service. In a second docker-compose file I added the service whoami2.
What did you expect to see?
First, I expected to see the routers and services on the dashboard. Second, I expected a response at whoami.docker.localhost (works) and a response at whoami2.docker.localhost.
What did you see instead?
For both services I see information on the dashboard. For whoami2.docker.localhost I got "Gateway Timeout".
Output of traefik version: (What version of Traefik are you using?)
/ # traefik version
Version: 2.0.1
Codename: montdor
Go version: go1.13.1
Built: 2019-09-26T16:18:03Z
OS/Arch: linux/amd64
What is your environment & configuration (arguments, toml, provider, platform, ...)?
## docker-compose.yaml
version: '3.7'
services:
  traefik:
    image: "traefik:v2.0.1"
    container_name: traefik
    ports:
      - "80:80" # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    command:
      - "--log.level=DEBUG"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
      - ./traefik.yaml:/etc/traefik/traefik.yaml
  whoami:
    # A container that exposes an API to show its IP address
    image: containous/whoami
    labels:
      - "traefik.http.routers.whoami.rule=Host(`whoami.docker.localhost`)"
## traefik.yml
# Docker configuration backend
providers:
  docker:
    exposedbydefault: true

entryPoints:
  web:
    # Listen on port 80 for incoming requests
    address: :80

# API and dashboard configuration
api:
  insecure: true
# other-folder/docker-compose.yaml
version: '3.7'
services:
  whoami2:
    image: containous/whoami
    labels:
      - "traefik.http.routers.whoami2.rule=Host(`whoami2.docker.localhost`)"
If applicable, please paste the log output in DEBUG level (--log.level=DEBUG switch)
docker up traefik whoami
Creating network "traefik_default" with the default driver
Creating traefik_whoami_1 ... done
Creating traefik ... done
Attaching to traefik_whoami_1, traefik
whoami_1 | Starting up on port 80
traefik | time="2019-10-01T20:51:55Z" level=info msg="Configuration loaded from file: /etc/traefik/traefik.yaml"
docker up traefik whoami2
Starting docker-wordpress_whoami2_1 ... done
Attaching to docker-wordpress_whoami2_1
whoami2_1 | Starting up on port 80
output whoami
curl -H Host:whoami.docker.localhost http://127.0.0.1
Hostname: 6b4f2dfe944f
IP: 127.0.0.1
IP: 172.25.0.2
GET / HTTP/1.1
Host: whoami.docker.localhost
User-Agent: curl/7.63.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 172.25.0.1
X-Forwarded-Host: whoami.docker.localhost
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: 9f14dd78222b
X-Real-Ip: 172.25.0.1
output whoami2
curl -H Host:whoami2.docker.localhost http://127.0.0.1
Gateway Timeout
Hi @fragtom , thanks for your interest in the project and this complete report!
As this is a configuration issue, could you switch to our community forum at https://community.containo.us/ to get support and knowledge from the community (maintainers included) please? The reason is that we use Github for tracking bugs and feature requests.
As a hint for your next steps (if it doesn't work use a community topic to mention me), you might want to check this page from Docker documentation at https://docs.docker.com/compose/networking/.
Each docker-compose stack has it's own private network: Traefik cannot reach the whoami containers because in another network. You have to merge the stacks OR declare external networks (https://docs.docker.com/compose/compose-file/#external-1).
@fragtom the way I solve this is to create a traefik network in Traefik's docker-compose file:
version: '3.7'

networks:
  traefik:
    name: traefik

services:
  traefik:
    image: "traefik:v2.0.1"
    container_name: 'traefik'
    networks:
      - traefik
    ports:
      - "80:80" # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    command:
      - "--log.level=DEBUG"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
      - ./traefik.yaml:/etc/traefik/traefik.yaml
And then use the same network in the other docker-compose files:
# other-folder/docker-compose.yaml
version: '3.7'

networks:
  traefik:
    external: true

services:
  whoami2:
    image: containous/whoami
    container_name: 'whoami'
    networks:
      - 'traefik'
    labels:
      - "traefik.http.routers.whoami2.rule=Host(`whoami2.docker.localhost`)"
Please notice the external part.
Hope this helps
That works for me, thanks!
| gharchive/issue | 2019-10-01T20:54:47 | 2025-04-01T06:38:16.646950 | {
"authors": [
"dduportal",
"fragtom",
"valentinvieriu"
],
"repo": "containous/traefik",
"url": "https://github.com/containous/traefik/issues/5563",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2259235762 | Refs 2422: Add pending task metrics
Summary
Adds new metrics:
* Average queue time of pending task
* Length in queue for the oldest pending task
* Number of pending tasks
Also:
* Increases default metric calculations to 30 seconds from 5
* makes interval for metric calculation configurable
* adds log level override for metrics so you can set the rest of the app to trace while metrics are set to debugging
Testing steps
on a new db:
make run
in another tab:
make repos-import
go run cmd/external-repos/main.go nightly-jobs
This will kick off a ton of tasks, you can monitor with:
curl localhost:9000/metrics | grep task_stats
This will update every ~30 seconds.
Checklist
[ ] Tested with snapshotting feature disabled and pulp server URL not configured if appropriate
note this is currently built ontop of https://github.com/content-services/content-sources-backend/pull/637
will rebase once merged.
https://issues.redhat.com/browse/HMS-2422
/retest
/retest
/retest
added a test to check for queued_at updates.
Also i realized that the 'number of days' until the cdn cert expires was only calculated at startup, so i changed it to be calculated as part of the go routine that updates the metrics. At startup it was set to zero, so to keep that from firing an alert, i set it to calculate that one metric at startup (and only that metric, because others require the db to be up, which may not actually have happened yet?)
added
| gharchive/pull-request | 2024-04-23T15:57:19 | 2025-04-01T06:38:16.682645 | {
"authors": [
"jlsherrill",
"swadeley"
],
"repo": "content-services/content-sources-backend",
"url": "https://github.com/content-services/content-sources-backend/pull/645",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1284244730 | Fixes 41: Add create content modal
To test this PR, one will need to have checked out this backend PR (or have it merged to master).
This adds the bulk create modal.
This adds the following required API's:
Arch/Version repo
Validate Content
Bulk Create
Additional changes:
Added empty state/emptyTable state/Loading table state with skeleton
Worked entirely too much on gpgKey integration (80% functionality)
Hid said GPG key functionality.
Replaced @rverdile's react Context code (sorry!) to avoid this bug
Updated Eslint rules
Updated some cloud-services npm modules to latest
Added formik/Yup node modules
Fixed an issue with optimistic updates.
Added "Add 20" magic button within the create modal.
Fixed issue when deleting last item on a page > 1, where the page wouldn't navigate the user back.
Fixed issue with page not resetting when changing the search filters
Still needs:
Add readable specific field errors if bulk create fails (currently just throws generic), waiting on this ticket
Direction on what to do when we have a url that fails its HEAD request (currently prevents continuing):
Note on the above (not sure if this is considered a bug), pinging an address that has an http redirect will return success.
Example: https://quay.io/stuffandthings/thatprobablydontexist
https://issues.redhat.com/browse/HMSCONTENT-41
I'm guessing this appeared after hiding the gpg key section, but now the dropdown for versions clips the end of the section and you can't see it:
One issue:
enter a url without any version info in it
select some set of versions
go back to the url and change it slightly
versions change back to 'any version'
I thought we had mentioned this and you said it shouldn't do this, but i may be misremembering, i know you said it was tricky
seeing a weird format issue when trying to add a repo and the versions were auto-selected from the url. The structure of the distribution_versions is wrong:
[
  {
    "name": "test4",
    "url": "http://yum.theforeman.org/katello/4.2/katello/el7/x86_64/",
    "distribution_arch": "x86_64",
    "distribution_versions": [
      [
        "el7"
      ]
    ],
    "gpgKey": ""
  },
  {
    "name": "test8",
    "url": "http://mirror.centos.org/centos/8-strea/BaseOS/x86_64/os/",
    "distribution_arch": "x86_64",
    "distribution_versions": [
      "el9"
    ],
    "gpgKey": ""
  }
]
The first object in this list was auto-selected, the 2nd one was not. Notice that there is an array within an array in the first object which isn't valid
Working on this now. I attempted an incomplete fix yesterday.
| gharchive/pull-request | 2022-06-24T22:25:27 | 2025-04-01T06:38:16.694030 | {
"authors": [
"Andrewgdewar",
"jlsherrill"
],
"repo": "content-services/content-sources-frontend",
"url": "https://github.com/content-services/content-sources-frontend/pull/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2549679580 | Saving image loses c2pa information
Hi, thanks for the work. I am trying to save an image with C2PA information in the app.py code. It does save the image, but when I then read the C2PA information, it says there is no manifest attached.
However, when I follow the tutorial as-is and make the curl request, the output image it saves does have the C2PA information.
I don't understand why this happens. Why does saving the image after receiving the image bytes lose the C2PA content? Please guide.
@app.route("/attach", methods=["POST"])
def resize():
    request_data = request.get_data()
    manifest = json.dumps({
        "title": "image2.jpg",
        "format": "image/jpeg",
        "claim_generator_info": [
            {
                "name": "c2pa test",
                "version": "0.0.1"
            }
        ],
        "assertions": [
            {
                "label": "c2pa.actions",
                "data": {
                    "actions": [
                        {
                            "action": "c2pa.edited",
                            "softwareAgent": {
                                "name": "C2PA Python Example",
                                "version": "0.1.0"
                            }
                        }
                    ]
                }
            }
        ]
    })
    builder = Builder(manifest)
    signer = create_signer(sign, SigningAlg.ES256,
                           cert_chain, "http://timestamp.digicert.com")
    result = io.BytesIO(b"")
    builder.sign(signer, "image/jpeg", io.BytesIO(request_data), result)

    with open('result.txt', 'w') as f:
        print(result.getvalue(), file=f)

    image = Image.open(result)
    # Save the image IO object to a file
    image.save('output_image5.jpg')

    image = Image.open(io.BytesIO(result.getvalue()))
    # # Save the image bytes to a file
    image.save('output_image6.jpg')

    with open('byte_result.txt', 'w') as f:
        print(result.getvalue(), file=f)

    print("End")
    return result.getvalue()
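One likely explanation, offered as an assumption rather than a confirmed diagnosis: re-opening the signed stream with Pillow and calling Image.save() re-encodes the JPEG, which discards the embedded manifest. A minimal sketch, continuing from the result object produced by builder.sign() above (the output file name is illustrative):

# Persist the signed stream exactly as produced by builder.sign();
# re-encoding it (e.g. via PIL's Image.save) rewrites the JPEG and can
# drop the embedded C2PA manifest.
with open("output_signed.jpg", "wb") as f:
    f.write(result.getvalue())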
@shayan-NECX it seems you ran the API successfully in the first place. However, I get an issue and need your help to confirm what is wrong when I follow the guide. I can't even run the curl command successfully.
I also get some issues: after running the curl command, api.py raises an error
builder.sign(signer, "image/jpeg", io.BytesIO(request_data), result)
  File "/Users/i302707/Documents/pii/c2pa-python-example/ven_3.10.5/lib/python3.10/site-packages/c2pa/c2pa_api/c2pa_api.py", line 129, in sign
    return super().sign(signer, format, C2paStream(input), C2paStream(output))
  File "/Users/i302707/Documents/pii/c2pa-python-example/ven_3.10.5/lib/python3.10/site-packages/c2pa/c2pa/c2pa.py", line 925, in sign
    rust_call_with_error(
  File "/Users/i302707/Documents/pii/c2pa-python-example/ven_3.10.5/lib/python3.10/site-packages/c2pa/c2pa/c2pa.py", line 283, in rust_call_with_error
    uniffi_check_call_status(error_ffi_converter, call_status)
  File "/Users/i302707/Documents/pii/c2pa-python-example/ven_3.10.5/lib/python3.10/site-packages/c2pa/c2pa/c2pa.py", line 313, in uniffi_check_call_status
    raise error_ffi_converter.lift(call_status.error_buf)
c2pa.c2pa.c2pa.Error.Signature: reason='COSE error parsing certificate'
What I have done: first run
python setup.py create-key-and-csr 'CN=John Smith,O=C2PA Python Demo'
The kms-signing.csr file is generated.
Then create a fake rootCA:
openssl req -x509 -days 1825 -newkey rsa:2048 -keyout rootCA.key -out rootCA.crt
Then sign the CSR created in step 1 with the temporary test CA key:
openssl x509 -req -CA rootCA.crt -CAkey rootCA.key -in kms-signing.csr -out kms-signing.crt -days 365 -copy_extensions copyall
Then create chain.pem:
cat kms-signing.crt rootCA.crt > chain.pem
FLASK_KMS_KEY_ID="$KMS_KEY_ID" FLASK_CERT_CHAIN_PATH="./chain.pem" flask run
When I run the curl command, I get the "COSE error parsing certificate" error from builder.sign.
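In case it helps debugging, a small sketch (assuming the cryptography package is installed and the file names from the steps above) to inspect what actually ended up on the generated signing certificate; C2PA signing certificates are expected to carry particular key-usage and extended-key-usage values, and a cert missing them, or a malformed chain.pem, is a plausible cause of this kind of error:

# Inspect the subject and extensions of the locally generated leaf cert.
from cryptography import x509

with open("kms-signing.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.subject)
for ext in cert.extensions:
    print(ext.oid, ext.value)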
Never mind, I was able to create the image bytes, send them as a byte string, and retain the C2PA information in the images too. I will close the issue.
Regards,
Shayan
| gharchive/issue | 2024-09-26T06:53:13 | 2025-04-01T06:38:16.711274 | {
"authors": [
"shayan-NECX",
"ttbuffey"
],
"repo": "contentauth/c2pa-python-example",
"url": "https://github.com/contentauth/c2pa-python-example/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
635320884 | Regarding Conan dependencies.
The current conanfile.txt in this repo is using packages that are not automatically or easily resolved.
It would be helpful to know where kwc_bintray is and add this to the README so that others can use Conan.
Digging around the commit and issue history I see that this was contributed by someone from Kai Wolf (so kwc seems to make sense). So it appears as though they have a self-hosted artifactory service but not all the packages were listed. At which point I realized that it's probably https://bintray.com/kwc which is rather easy in hindsight.
Running conan remote add kwc_bintray https://api.bintray.com/conan/kwc/conan resolved this for me.
Then you apparently also need bincrafters:
conan remote add bincrafters https://api.bintray.com/conan/bincrafters/public-conan
But you finally end up with the issue that the requested version of protobuf is apparently not available in the bincrafters public Conan repository. However, the requested version is available on conan-center. Still missing, though, is protoc_installer.
It would be useful to add some documentation for the level of Conan support in the repo and how to use it.
Hi @photex,
I totally agree that the Conan integration is not smooth yet. We worked together with @NewProggie on the Conan integration.
Unlike the behavior you get from Conan Center packages, we want the Conan integration to be truly optional. E.g. only the cmake_paths generator should be used. (e.g. all target names used in CMake should be identical to those that you get by pure CMake installs).
This is why many packages were repackaged instead of using the official packages.
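(For context, the cmake_paths workflow referred to here typically looks like this in plain Conan 1.x usage; these are generic commands, not taken from the eCAL docs:)
$ conan install .. -g cmake_paths
$ cmake .. -DCMAKE_TOOLCHAIN_FILE=conan_paths.cmake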
However, the current recommendation to build eCAL is to use submodule integration for thirdparty dependencies for an easy build on Windows, and on Linux use packages provided by the system package manager when available (e.g. on Ubuntu 18.04, hdf5, protobuf, qt, … can all be retrieved by apt-get).
More experienced users can use any way to provide the necessary dependencies.
But it still makes sense to put a chapter in the Readme file on Conan support / integration.
However, the requested version is available on conan-center. But still missing is protoc_installer.
Conan Center decided to package protoc and protobuf separately, mainly to facilitate cross-builds. With the kwc packages, they are packaged together (which I think they should be. Since they belong together, they are always required in the same version).
Good catch. I think there should be another section in the README.md containing:
# Install Conan
$ pip install --upgrade pip
$ pip install --upgrade conan
$ conan config set general.revisions_enabled=True
# Add Conan remotes with pre-compiled dependencies
$ conan remote add -f kwc_bintray https://api.bintray.com/conan/kwc/conan
$ conan remote add -f conan-center https://bintray.com/conan/conan-center
$ conan remote add -f bincrafters https://api.bintray.com/conan/bincrafters/public-conan
Let me quickly add another PR for this.
Thanks for the explanations.
@NewProggie neither bincrafters nor conan-center had protoc_installer for the specified version when I tried it today. Can you verify that this works with these remotes?
Actually, the packages in Conan Center are now being reworked to allow transparent Conan usage, e.g. the same target names whether the packages are installed via Conan or found via plain CMake.
If some package still does not work this way, it's a bug in the package that needs fixing.
Closing this discussion-thread due to inactivity
| gharchive/issue | 2020-06-09T10:45:04 | 2025-04-01T06:38:16.720070 | {
"authors": [
"FlorianReimold",
"Kerstin-Keller",
"NewProggie",
"gocarlos",
"photex"
],
"repo": "continental/ecal",
"url": "https://github.com/continental/ecal/issues/54",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1872691864 | How to bring continue back?
Describe the bug
My Continue server has been unresponsive since a few days ago.
I have tried all the suggestions in https://continue.dev/docs/troubleshooting, but none of them worked.
Environment
Operating System: MacOS 10.15.7
Python Version: 3.11.4
Continue Version: v0.0.350
Remote development on a remote server over SSH
Console logs
Nothing appears in log file continue.log. It is empty!
My current fallback solution is to install Continue v0.0.329, which gives me a suggestion to install the Continue extension on the remote server. Now I can use Continue as before.
Sorry to hear this! To be sure, you were seeing something like this and the arrow just never disappeared? We did in fact make some changes that were intended to fix SSH right around version v0.0.329, so I'll look into what might have changed there. Is there anything else I should know about the setup so I can test on my own mac?
I just upgraded to the latest v0.0.352 but still get the same problem as in my screenshot. I tried lsof -i :65432 | grep "(LISTEN)" | awk '{print $2}' on the remote server and my local computer; the Continue server is only running on the remote server.
I think the solution is probably to have Continue downloaded only on the local computer—this is actually the new intention since that version—should have realized this earlier.
This also has the benefit that you can use the same instance of Continue across local+remote workspaces
I don't know. For me, running Continue only on the remote server is good, as I don't have to care about a VPN or proxy to access the OpenAI API.
Hadn't considered this, sorry. It makes sense.
It's possible the issue is that the React app is local and the server is remote. If so, port forwarding would be the solution here and I could try something like this:
// If running on remote, forward the port
if (
vscode.env.remoteName &&
vscode.extensions.getExtension("continue.continue")?.extensionKind ===
vscode.ExtensionKind.Workspace
) {
await vscode.env.asExternalUri(vscode.Uri.parse(getContinueServerUrl()));
}
and then you would theoretically be able to just run the extension in remote only.
Can you test forwarding the port manually and see if this works? If so, I'll move forward with that.
But also interesting is that I've tested with my own SSH server, and am currently in a state where the server is running on the SSH server, but not locally, and things are working. I've tried both with the extension installed ONLY remote and on BOTH remote and local, like below. This makes me wonder whether a fresh start would be helpful.
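For reference, one way to forward the port manually while testing (the host name here is a placeholder) would be:
ssh -L 65432:localhost:65432 user@remote-host
Alternatively, the Ports view in a VS Code Remote-SSH window can forward 65432 as well.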
Yeah, port forwarding 65432 is working. Now I have two Continue servers running, but the local one may not be connected to the OpenAI service or writing logs.
Maybe there is one problem: I have two remote servers, but I can only forward port 65432 from one of them to local. So only one remote server is actually providing the Continue service.
One idea is to allow the servers to run on different ports so they can both be forwarded. This seems like a reasonable option that should exist anyway to avoid port conflicts.
Do you feel this would be a good solution for you?
I found out that I can set a different "continue: server url" setting for each workspace. This problem is solved.
Good point, forgot you can just map to different ports when forwarding. Glad it's all working now!
| gharchive/issue | 2023-08-30T00:38:33 | 2025-04-01T06:38:16.730124 | {
"authors": [
"BenedictKing",
"sestinj"
],
"repo": "continuedev/continue",
"url": "https://github.com/continuedev/continue/issues/426",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2432648418 | Example adding Entra Auth to OpenAI provider
Description
Added a new attribute to the base LLM to allow handling non-API-key-based authentication models. Also moved the OpenAI header logic to async to allow making calls to an IdP (Entra in this case).
Related to thread https://discord.com/channels/1108621136150929458/1131313996750917835/1251283613362683994
Note: I know supporting SSO is a hot topic, especially between the open-source and a possible value-added release, so this implementation may not be the best one to be reused by all the products.
I know we kicked around doing this IdP hit in config.ts as well, but the config load is sync for the moment, so that would need to become an async model. I also had issues with filtering which model this code would be invoked on. If there were an LLM.model load hook (vs. just a global load), it might allow these environmental nuances to be staged in ~/.continue instead of in the extension itself. Sort of a pre-commit-hook type of world.
So this code may not be the right thing to merge, but I figured it could serve as an example of approaching this problem: rather than bespoke providers each providing a baseLLM method for auth, that could be done in userspace, ideally with just config.json. One challenge in doing that might be module loading, as in my example azure/identity is needed.
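To illustrate the kind of IdP call this enables, here is a minimal TypeScript sketch using @azure/identity; the token scope and the simple expiry-based cache are assumptions for illustration, not the exact code in this PR:
// Hedged sketch: acquire an Entra ID token for Azure OpenAI via @azure/identity.
// The scope and the in-memory expiry cache are assumptions for illustration.
import { DefaultAzureCredential } from "@azure/identity";

const credential = new DefaultAzureCredential();
let cached: { token: string; expiresOnTimestamp: number } | undefined;

export async function getEntraBearerToken(): Promise<string> {
  // Refresh a couple of minutes before expiry so a stale token is never sent.
  if (!cached || cached.expiresOnTimestamp - 2 * 60 * 1000 < Date.now()) {
    const token = await credential.getToken(
      "https://cognitiveservices.azure.com/.default",
    );
    if (!token) {
      throw new Error("Failed to acquire an Entra ID token");
    }
    cached = token;
  }
  return cached.token;
}

// An async header hook could then set:
//   headers["Authorization"] = `Bearer ${await getEntraBearerToken()}`;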
Checklist
[x] The base branch of this PR is dev, rather than main
[ ] The relevant docs, if any, have been updated or created
If this path is valuable, I have no problem amending the PR to include docs on using authType as control logic in a provider.
Testing
This assumes that the OpenAI endpoint is in Azure OpenAI and uses Entra for authentication, but ideally it should allow for end-to-end authentication using the browser path to get the access_token, and refresh it if it expires within the session. This is similar to the AWS Bedrock case, but instead of requiring the user to make sure the STS credentials are in their .aws path before calling it, the token can be grabbed dynamically. I did poke at mimicking this with the Azure CLI, but it obfuscates the access token now and sort of forces you to use the az CLI to expose it.
@byjrack I'm impressed by how clean this change is. What you point out with the synchronous nature of config.ts + the need to include external dependencies are major reasons that we've been looking for a long-term solution to SSO, rather than the situation we'd end up in by patching on dozens of providers to our code that must be shipped to every client. The direction we're taking is to do it through our teams product, and we have something working today, though we've been testing quietly in beta. I won't rule out this PR definitively quite yet, but can I email you? It would help to know what precisely your auth+LLM setup looks like. I'm confident either way we can get this working for you
I tried to make the change reusable across the base, but I don't love the if/then logic. You know it's a Pandora's box because every IdP could need a custom package and possibly a framework (the Azure CLI requires Python and then all the requests stuff). An OAuth client could be pretty reusable, but the "it depends" is in the back of my mind. A hooks model I think could work, but again there are lots of corner cases.
Email is on the profile. Nothing really unique in our space, but can try to give as much info as I can.
Going to close this for reasons mentioned above and in other conversations. If we return to this will definitely look into something more extensible like hooks
| gharchive/pull-request | 2024-07-26T17:21:51 | 2025-04-01T06:38:16.736915 | {
"authors": [
"byjrack",
"sestinj"
],
"repo": "continuedev/continue",
"url": "https://github.com/continuedev/continue/pull/1841",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2406953754 | TagBot trigger issue
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.
If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!
Triggering TagBot for merged registry pull request: https://github.com/JuliaRegistries/General/pull/110817
Triggering TagBot for merged registry pull request: https://github.com/JuliaRegistries/General/pull/111105
Triggering TagBot for merged registry pull request: https://github.com/JuliaRegistries/General/pull/111768
Triggering TagBot for merged registry pull request: https://github.com/JuliaRegistries/General/pull/112084
Triggering TagBot for merged registry pull request: https://github.com/JuliaRegistries/General/pull/112758
Triggering TagBot for merged registry pull request: https://github.com/JuliaRegistries/General/pull/113867
Triggering TagBot for merged registry pull request
This extra notification is being sent because I expected a tag to exist by now, but it doesn't.
You may want to check your TagBot configuration to ensure that it's running, and if it is, check the logs to make sure that there are no errors.
Triggering TagBot for merged registry pull request
This extra notification is being sent because I expected a tag to exist by now, but it doesn't.
You may want to check your TagBot configuration to ensure that it's running, and if it is, check the logs to make sure that there are no errors.
Triggering TagBot for merged registry pull request
This extra notification is being sent because I expected a tag to exist by now, but it doesn't.
You may want to check your TagBot configuration to ensure that it's running, and if it is, check the logs to make sure that there are no errors.
Triggering TagBot for merged registry pull request: https://github.com/JuliaRegistries/General/pull/120655
Triggering TagBot for merged registry pull request: https://github.com/JuliaRegistries/General/pull/121331
| gharchive/issue | 2024-07-13T14:21:26 | 2025-04-01T06:38:16.758338 | {
"authors": [
"JuliaTagBot"
],
"repo": "control-toolbox/CTDirect.jl",
"url": "https://github.com/control-toolbox/CTDirect.jl/issues/174",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2753513184 | 🛑 MPS - TESTPROCESS is down
In 0018d40, MPS - TESTPROCESS ($TESTPROCESS_URL) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MPS - TESTPROCESS is back up in be06f3f after 12 minutes.
| gharchive/issue | 2024-12-20T21:34:42 | 2025-04-01T06:38:16.763429 | {
"authors": [
"conuti-das"
],
"repo": "conuti-das/status-lfrcs",
"url": "https://github.com/conuti-das/status-lfrcs/issues/1730",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2041134182 | Upgrade debian to 12 bookworm
Is there any plan to upgrade base debian image to 12 bookworm?
In the end we need to upgrade but there isn't any direct benefit right? Feel free to submit a PR.
| gharchive/issue | 2023-12-14T07:47:17 | 2025-04-01T06:38:16.795486 | {
"authors": [
"anyidea",
"foarsitter"
],
"repo": "cookiecutter/cookiecutter-django",
"url": "https://github.com/cookiecutter/cookiecutter-django/issues/4744",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2553921019 | Update Firefly-iii version
Update Firefly-iii to version null
error getting version
| gharchive/pull-request | 2024-09-28T00:42:58 | 2025-04-01T06:38:16.878665 | {
"authors": [
"coostax"
],
"repo": "coostax/addon-firefly-iii",
"url": "https://github.com/coostax/addon-firefly-iii/pull/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1602022244 | new event api
Problem
Currently, Anchor events are emitted through logs. This is potentially problematic since logs have a maximum size per transaction, so one might emit an event and that event may not be present in the transaction.
Solution
Instead, perhaps a better (hacky) approach would be to add to the Anchor code generation a special instruction for events that does nothing. Then, when emit!(event) is called, we recursively CPI into the program with that instruction and the serialized event as data. Indexers can then use this to reliably build a table.
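A rough Rust sketch (not Anchor-generated code) of what such a self-CPI emit with a PDA "event authority" signer might look like, using solana_program directly; the discriminator byte and the "__event_authority" seed are assumptions for illustration only:
use solana_program::{
    account_info::AccountInfo,
    entrypoint::ProgramResult,
    instruction::{AccountMeta, Instruction},
    program::invoke_signed,
    pubkey::Pubkey,
};

const EMIT_EVENT_IX: u8 = 0xe5; // hypothetical no-op "log event" discriminator

pub fn emit_event_via_self_cpi<'a>(
    program_id: &Pubkey,
    program_account: AccountInfo<'a>,  // this program's AccountInfo
    event_authority: AccountInfo<'a>,  // PDA that must sign the inner call
    event_authority_bump: u8,
    serialized_event: &[u8],
) -> ProgramResult {
    let mut data = Vec::with_capacity(1 + serialized_event.len());
    data.push(EMIT_EVENT_IX);
    data.extend_from_slice(serialized_event);

    let ix = Instruction {
        program_id: *program_id,
        accounts: vec![AccountMeta::new_readonly(*event_authority.key, true)],
        data,
    };

    // Only this program can sign for the PDA, so an indexer that sees this
    // inner instruction succeed can trust the event data was not spoofed.
    invoke_signed(
        &ix,
        &[event_authority, program_account],
        &[&[b"__event_authority", &[event_authority_bump]]],
    )
}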
I would highly recommend adding a PDA log authority for this if it does get implemented. This makes it so there's no way to spoof log transactions through CPI from another program.
The PDA makes it so the only way you can invoke the log event is through self CPI.
In theory it should always be possible to properly index regardless of the log authority, but you run into far fewer edge cases. Unpacking the inner instruction data of the transaction tree is really hard because stack depth is not found in the returned object, and with this change you can be confident that any inner instruction with the log instruction is valid.
Not an easy change by any means, but it would be a win for overall devex if it was built into Anchor:
Here's an example of how this works in a native contract (with a log authority): https://github.com/Ellipsis-Labs/phoenix-v1/blob/master/src/lib.rs#L111
The invoke: https://github.com/Ellipsis-Labs/phoenix-v1/blob/master/src/program/event_recorder.rs#L133
It's tricky to optimize because you really would like to densely pack all of the emits to minimize the number of self CPIs to make. The normal emit! covers the 80/20 for contracts that don't need literally 100% of their data preserved, but you can probably implement a generalized non-optimized safe emit.
Note: one tradeoff with the PDA log authority is that it adds another 32 bytes to the transaction, which is a tough price to pay.
Jarry made some good points. There's so much wrong with using CPI to log data, from the fact that it's a hack in the first place to the limitations it places on how often you can log data, since you probably just want a single CPI. The current emit works fine for most people.
Adding support to anchor would just mean having a simpler helper function for sending the CPI and serializing the data needed, plus taking care of the parsing in the clients. It makes sense to have this although I can't imagine it would be in much demand except by the most intense projects.
Is Solana never going to fix the core issue here with logs dying randomly? The CPI transaction data approach is such a hack but it also shows that data can survive undamaged from the transaction. I can't believe they're not working on a solution.
It doesn't make sense to self-cpi if you're going to require an additional PDA.
Otherwise might as well just deploy a no-op program that hardcodes its list of caller programs.
IIUC the benefit of self-CPI is that you reduce tx size by 32 bytes by removing a separate no-op program key. Therefore, it doesn't make sense to self-CPI if you're going to require an additional PDA.
I think there's a tradeoff, and IMO having the PDA is far safer and worth the extra 32 bytes. Adding the extra key makes handling the log parsing far less error prone. Given the fact that the tx object doesn't have stack depth, it's not very difficult to effectively trick the indexer into storing bad data by calling the Log instruction directly.
Example:
Sample program A CPI's into target program T's log instruction and logs either some nonsense or false data
Downstream indexer fails to detect the malicious pattern (it's probably possible to catch, just more difficult)
DB corruption
If the PDA signs, the filter of tx success && instruction == T::log is sufficient to know that the data is safe to record
I've tagged a relevant rough draft PR of adding a no-op instruction to Anchor programs that does not require a PDA to sign (sorry Jarry).
I don't mind as I most likely won't be using this feature :)
The PDA adds more complexity, but IMO the important thing is that the indexing is done accurately, and it makes it bulletproof.
Even with stack depth, having an openly exposed log instruction won't make indexing easy. There are lots of adversarial cases that still need to be checked, I think it's still very possible to mess it up.
I think the condition that the Log instruction from program A must immediately be preceded by program A AND the stack depth has increased by 1 is sufficient for indexing without a PDA signer. As long as the logic to do this is present in the parsing code, the implementation is probably acceptable
| gharchive/issue | 2023-02-27T22:00:58 | 2025-04-01T06:38:16.940303 | {
"authors": [
"Henry-E",
"armaniferrante",
"jarry-xiao",
"ngundotra"
],
"repo": "coral-xyz/anchor",
"url": "https://github.com/coral-xyz/anchor/issues/2408",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1238745617 | upgrade fluentd prometheus plugin version
Currently the Prometheus plugin is at version 2.0.2 and we want it to be on 2.0.3 due to a metric bug in 2.0.2.
@amit-mazor link to the issue
| gharchive/pull-request | 2022-05-17T14:38:45 | 2025-04-01T06:38:16.941971 | {
"authors": [
"amit-mazor",
"avivgold098"
],
"repo": "coralogix/eng-integrations",
"url": "https://github.com/coralogix/eng-integrations/pull/61",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2260410812 | Add community notes
@kingfisher communitynote (user) (content of note) and then the bot just replies to that user's most recent message with the community note
funnier idea: let users vote on setting kingfisher's status
| gharchive/issue | 2024-04-24T05:56:00 | 2025-04-01T06:38:16.946074 | {
"authors": [
"MrCheeze446",
"RohanVittalMV"
],
"repo": "coravacav/uofu-cs-discord-bot",
"url": "https://github.com/coravacav/uofu-cs-discord-bot/issues/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1400038352 | Opportunistic JSON detection and decoding of variables
Summary
JSON is often used for request bodies, and this is scanned properly. However, the use of JSON is quickly growing in the contents of variables such as:
cookies
query arguments (a single POST field)
The contents of a cookie or a query argument are currently treated by WAFs as a single string. The syntactical characters essential to JSON often lead to false positives in the Core Rule Set. (I estimate that about 50% of the rule exclusions I have to make are due to JSON in POST arguments.)
Therefore, I propose a feature where Coraza 'opportunistically' tries to perform JSON decoding on these variables. For instance:
read the variable until the first (non-whitespace*) character, is it [ or {
if so, send it to JSON decoder
if JSON decoder is successful, delete the original variable containing the full JSON, and walk the JSON object, creating sub variables for its values.
*Note: I didn't remember whether valid JSON allows whitespace at the start, but I believe it is valid: https://www.rfc-editor.org/rfc/rfc4627 under 2. Grammar: "Insignificant whitespace is allowed before or after any of the six structural characters." (A rough sketch of this detection step is shown below.)
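A minimal Go sketch of this opportunistic detection and flattening; this is illustrative only, not Coraza's actual API, and the sub-variable naming (e.g. param.0.foo) is an assumption:
// Illustrative sketch only: opportunistically decode a request argument as JSON
// and flatten it into sub-variables, or fall back to treating it as a string.
package main

import (
    "encoding/json"
    "fmt"
    "strings"
)

// tryDecodeJSONArg returns flattened key/value pairs when the value looks like
// JSON, or nil when it should keep being treated as a plain string.
func tryDecodeJSONArg(name, value string) map[string]string {
    trimmed := strings.TrimLeft(value, " \t\r\n")
    if len(trimmed) == 0 || (trimmed[0] != '{' && trimmed[0] != '[') {
        return nil
    }
    var decoded interface{}
    if err := json.Unmarshal([]byte(trimmed), &decoded); err != nil {
        return nil // not valid JSON: scan the raw string as before
    }
    out := map[string]string{}
    flatten(name, decoded, out)
    return out
}

func flatten(prefix string, v interface{}, out map[string]string) {
    switch t := v.(type) {
    case map[string]interface{}:
        for k, child := range t {
            flatten(prefix+"."+k, child, out)
        }
    case []interface{}:
        for i, child := range t {
            flatten(fmt.Sprintf("%s.%d", prefix, i), child, out)
        }
    default:
        out[prefix] = fmt.Sprint(t)
    }
}

func main() {
    fmt.Println(tryDecodeJSONArg("param", `[{"foo": "bar"}, {"baz": "qux"}]`))
    // e.g. map[param.0.foo:bar param.1.baz:qux]
}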
Basic example
Let's say a client might want to send JSON in a POST parameter, such as: [{"foo": "bar"}, {"baz": "qux"}]
URL-encoded, this is: %5B%7B%22foo%22%3A%20%22bar%22%7D%2C%20%7B%22baz%22%3A%20%22qux%22%7D%5D
When I do a curl such as: curl --data 'param=%5B%7B%22foo%22%3A%20%22bar%22%7D%2C%20%7B%22baz%22%3A%20%22qux%22%7D%5D' https://lifeforms.nl/
This turns up false positives in Core Rule Set rules 942260 and 942370 (but there are often other rules that trigger also; I think there are about 5-10 rules which are notorious in this regard).
If instead the param were not scanned as a whole, but parsed as JSON, the CRS might end up with variables such as (for instance):
ARGS:param.foo = bar
ARGS:param.baz = qux
Most importantly, the FP is now gone. But we can still inspect the JSON contents specifically!
Another use case is cookies. Many trackers use JSON in cookies. What makes it worse is that their cookie names contain dynamic parts (e.g. mixpanel_bacd4323434234), so it is hard to write rules against them. If the JSON string is parsed instead of doing 'dumb' string scans, these false positives would likely disappear.
Motivation
I expect a serious lowering of false positives when JSON is posted as a cookie or parameter
This would make Coraza a very attractive WAF for novice users who cannot write exclusion rules easily
It would seriously motivate me to switch to Coraza faster 😉
Possible drawbacks
We have to be careful not to introduce bypasses with this functionality. For instance, a malicious payload may be spread over multiple parts of a JSON object. But I think the requirement for it to be valid JSON is a serious roadblock for this technique; after all, you cannot put a starting { or [ after comment characters. At first sight it does not seem dangerous to me, but let's at least think about this.
Finally
I have considered whether this mechanism should be generic and could also be used for XML, but I see XML in variables very rarely if at all. GraphQL could also be interesting to look at.
True, it's stale, but still cool. 😎
Totally
That's a great feature, I think we should consider it as it's a common issue. I also think we should extend this feature to all body processors (well, except multipart).
Because of its nature, we would have to create an operator because actions won't have access to the evaluated variables
The operator must take the parameter name as a parameter, unless we provide a mechanism for the operator to know which variable is being evaluated
We will have to provide the body processor to be used, but body processors have both request and response processing, so which one should be used? Maybe we should refactor the body processor interface and create request body processors and response body processors
The same limits as for the body processors should apply, to avoid DoS and other issues
If we use the ARGS:param.param_name syntax, a user could just append a value by using: /url.php?param.param_name=hack. For that reason, this operator should also wipe all param.* variables.
CC @corazawaf/core-developers we should solve this before releasing v3.
Are we implementing this? If we are, are we recycling the body processors?
If we are recycling the body processors, should we split them into RequestBodyProcessor, ResponseBodyProcessor, ParameterBodyProcessor or should we extend our current interface and add a ProcessParameter method?
Now that v3 is almost released, I would like to start working on this. We could recycle the current body processors and implement a function that reads the body as text to find a regex match, then uses the matched content as the content to be processed. Something like this:
<html>
...
<script>
...
var json= "{\"id\": 123}";
...
SecRule REQUEST_URL "/some/file.php" "...,phase:2,processRequestBody:'JSON:var json\s=\s"(.*)";'"
SecRule ARGS:json.id "123" "...."
We will have to run the bodyprocessor to make this work.
cc @corazawaf/core-developers
| gharchive/issue | 2022-10-06T17:12:42 | 2025-04-01T06:38:16.960092 | {
"authors": [
"jcchavezs",
"jptosso",
"lifeforms"
],
"repo": "corazawaf/coraza",
"url": "https://github.com/corazawaf/coraza/issues/456",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1901794787 | fix: removes multiline from default regex modifiers [breaking]
This PR evaluates the changes needed to align Coraza with ModSec v2 behavior rather than ModSec v3 in terms of default modifiers when compiling regexes. This necessity was raised by https://github.com/coreruleset/coreruleset/issues/3277.
We are currently running with two default modifiers:
s (dotall)
m (multiline mode)
While Modsec v2 (the reference implementation for the CRS) has:
PCRE2_DOTALL
PCRE2_DOLLAR_ENDONLY
Running the CRS rules with multiline may lead to false positives and lower performance.
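To illustrate the difference, a small Go snippet; the pattern is a made-up stand-in for an anchored rule, chosen only to show the anchoring behavior, not an actual CRS regex:
// Demo of why defaulting to multiline matters for anchored patterns.
package main

import (
    "fmt"
    "regexp"
)

func main() {
    payload := "harmless\nselect\n"

    // Previous Coraza default: dotall + multiline. With `m`, ^ and $ match at
    // every line break, so an anchored pattern can fire on an embedded line.
    multiline := regexp.MustCompile(`(?sm)^select$`)

    // Without `m` (closer to the ModSec v2 defaults), ^ and $ anchor the whole value.
    dotallOnly := regexp.MustCompile(`(?s)^select$`)

    fmt.Println(multiline.MatchString(payload))  // true  -> potential false positive
    fmt.Println(dotallOnly.MatchString(payload)) // false
}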
Opening the PR to further discuss this, and whether we accept a breaking change for v3.* or not.
Good to see ftw passing with this change. I wouldn't merge it until seeing the PR to ModSec v3 doing the same, otherwise someone will just ask in the future why we aren't aligned with ModSec v3.
Maybe it is a good idea to coordinate this with @M4tteoP @airween
Totally 💯
Apart from aligning the behavior with ModSec, are we going to accept this breaking change and release it in a 3.x version, or do we aim to have it in 4.x? If the latter is the case, I would guard the feature with a build flag and merge this PR with the fix disabled by default.
We agreed to implement this as a build tag for v3 and mandatory for v4
Once merged, another PR has to be created to remove the flag and force it by default
We agreed to implement this as a build tag for v3 and mandatory for v4
Done, added coraza.rule.no_regex_multiline build tag. It should be ready to be reviewed.
Please add this build tag to magefile.go
Done, let's see if the tests are happy ^_^
Looks so 🚀
| gharchive/pull-request | 2023-09-18T21:43:17 | 2025-04-01T06:38:16.966668 | {
"authors": [
"M4tteoP",
"anuraaga",
"jcchavezs",
"jptosso"
],
"repo": "corazawaf/coraza",
"url": "https://github.com/corazawaf/coraza/pull/876",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |