idproject (int64, 7.76k–28.8M) | issuekey (int64, 675k–128M) | created (string, 19–32 chars) | title (string, 4–226 chars) | description (string, 2–154k chars, ⌀ = null) | storypoints (float64, 0–300, ⌀ = null) |
---|---|---|---|---|---|
28,847,821 | 105,632,917 |
2022-03-28 16:52:21.661
|
UX: Resources - skin Demo, Releases, Analysts, Webcasts
|
## Goal
We would like to skin our resource pages with our new branding, which will improve consistency, as measured by reduced drop off.
### Jobs To Be Done
* **Situation**: *When visiting our resource pages...*
* **Motivation**: *our customers want to feel like they are on the same website...*
* **Outcome**: *So they can be assured they are in the right place.*
## Page(s)
Which page(s) are involved in this request?
* [Releases page](https://about.gitlab.com/releases/)
* [Webcasts page](https://about.gitlab.com/webcast/)
* [Analysts page](https://about.gitlab.com/analysts/)
* [Demo page](https://about.gitlab.com/demo/)
## DCI
[DRI, Consulted, Informed](https://about.gitlab.com/handbook/people-group/directly-responsible-individuals/#dri-consulted-informed-dci)
- [ ] DRI: @jhalloran
- [ ] Consulted: `GitLab Handle`
- [ ] Informed: `Everyone`
## In scope
What is within scope of this request?
- [ ] Skinned MVC1 version of each page
## Out of scope
What is out of scope and not part of this iteration?
- TBD
## Requirements
What are the requirements for this request? The checklist below shows common requirements; check all that apply and adjust as necessary:
- [ ] Copy writing
- [ ] Illustration
- [ ] Custom Graphics
- [ ] Research
- [ ] Data / Analytics
- [x] UX Design
- [ ] Engineering
| 6 |
28,847,821 | 105,632,875 |
2022-03-28 16:51:22.512
|
UX: Format Jam3 Pricing page to Slippers UI
|
## Goal
We would like to update our pricing page with new branding, which will improve how we sell ourselves as a company, as measured by increased sales.
### Jobs To Be Done
* **Situation**: *When we are shifting to a new brand...*
* **Motivation**: *We want to update the pricing page to reflect these changes...*
* **Outcome**: *So we can communicate our value as an Enterprise software company.*
## Page(s)
Which page(s) are involved in this request?
* [Pricing page Figma designs](https://www.figma.com/file/4CkayJDu97uNEWBOlgABwE/Pricing-Refresh-2022-04-01?node-id=0%3A1)
## DCI
[DRI, Consulted, Informed](https://about.gitlab.com/handbook/people-group/directly-responsible-individuals/#dri-consulted-informed-dci)
- [ ] DRI: @jhalloran
- [ ] Consulted: @mpenagos-ext
- [ ] Informed: `Everyone`
## In scope
What is within scope of this request?
- [ ] Desktop and Mobile pricing page skinned mocks
## Out of scope
What is out of scope and not part of this iteration?
- TBD
## Requirements
What are the requirements for this request? The checklist below shows common requirements; check all that apply and adjust as necessary:
- [ ] Copy writing
- [ ] Illustration
- [ ] Custom Graphics
- [ ] Research
- [ ] Data / Analytics
- [x] UX Design
- [ ] Engineering
| 4 |
28,847,821 | 105,632,824 |
2022-03-28 16:50:15.774
|
UX: Format Jam3 Home Page to Slippers UI
|
## Goal
We would like to skin the home page with new branding, which will improve the look and feel of our Enterprise company, as measured by more traffic to our site.
### Jobs To Be Done
* **Situation**: *When we want to present GitLab as an Enterprise brand...*
* **Motivation**: *We want to update the look and feel of our site...*
* **Outcome**: *So we can reflect externally how we operate internally.*
## Page(s)
Which page(s) are involved in this request?
* [Figma designs](https://www.figma.com/file/hKvIRhVlrgXrB1WcyuDBiQ/Home-Page-Refresh-2022-03-29?node-id=208%3A4259)
## DCI
[DRI, Consulted, Informed](https://about.gitlab.com/handbook/people-group/directly-responsible-individuals/#dri-consulted-informed-dci)
- [ ] DRI: @jhalloran
- [ ] Consulted: @lduggan
- [ ] Informed: `Everyone`
## In scope
What is within scope of this request?
- [ ] Home page mocks
## Out of scope
What is out of scope and not part of this iteration?
- Not in scope 1
- Not in scope 2
## Requirements
What are the requirements for this request? The checklist below shows common requirements; check all that apply and adjust as necessary:
- [ ] Copy writing
- [ ] Illustration
- [ ] Custom Graphics
- [ ] Research
- [ ] Data / Analytics
- [x] UX Design
- [ ] Engineering
| 2 |
3,828,396 | 121,256,396 |
2023-01-04 21:40:29.574000+00:00
|
Add deprecation note to GitLab deprecations regarding KAS private tls
|
### What is this issue about?
The `gitlab.kas.privateApi.tls.enabled` and `gitlab.kas.privateApi.tls.secretName` attributes were deprecated following the discussion linked below.
The following discussion from !2888 should be addressed:
- [ ] @Alexand started a [discussion](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2888#note_1220328090):
> I'm not sure if deprecating `gitlab.kas.privateApi.tls.*` is the best way forward. But my reasoning was:
>
> 1. I want to simplify the chart options. So it's probably better to have just one documentation section explaining how to enable TLS for KAS.
> 1. Having a global attribute to configure TLS for KAS across the chart gives us more power to automate these configurations. Right now, GitLab webservice needs KAS address (`grpc` vs `grpcs`). A configuration value that lives inside of the KAS sub-chart can't do it.
> 1. I can't immediately think of a reason why one would want to enable just certain KAS servers with TLS, but not others.
> 1. I don't think we'd need different certificates per KAS server, or any other TLS specific configuration that would be used differently for each KAS service.
>
> _I'm leaving this thread open in case reviewer and maintainer have any thoughts regarding this._
This issue is to track adding a deprecation note to https://docs.gitlab.com/ee/update/deprecations.html.
### Deprecation note proposal
Planned removal: GitLab 17.0 (2024-05-22)
The GitLab chart provides `gitlab.kas.privateApi.tls.enabled` and `gitlab.kas.privateApi.tls.secretName` to support TLS communication between KAS pods. Enabling TLS between KAS and all the other chart components it talks to requires setting many additional Helm values.
To make enabling TLS between KAS and the rest of the chart easier, we've introduced the `global.kas.tls.*` Helm values, a simpler and more complete approach. We recommend that you stop using the `gitlab.kas.privateApi.tls.*` Helm values and use `global.kas.tls.*` instead. The `gitlab.kas.privateApi.tls.*` values are therefore deprecated and scheduled for removal in 17.0. For more information, refer to:
- The [Merge Request](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2888) which introduces the `global.kas.tls.*` values.
- The [deprecated documentation](https://docs.gitlab.com/charts/charts/gitlab/kas/index.html#enable-tls-communication).
- The new preferred documentation. (link to be added)
/cc @nmezzopera @nagyv-gitlab
| 1 |
3,828,396 | 118,702,762 |
2022-11-14 21:10:37.409000+00:00
|
Document how to connect KAS to Redis via SSL
|
<!--
NOTICE: This Issue tracker is for the GitLab Helm chart, not the GitLab Rails application.
Support: Please do not raise support issues for GitLab.com on this tracker. See https://about.gitlab.com/support/
-->
## Summary
This is a follow-up from https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2829
KAS can communicate with Redis over TLS. We should test this with the GitLab chart and document it.
| 2 |
3,828,396 | 117,046,812 |
2022-10-17 15:34:30.083000+00:00
|
Improve UX to enable TLS to KAS externally and internally
|
<!--
NOTICE: This Issue tracker is for the GitLab Helm chart, not the GitLab Rails application.
Support: Please do not raise support issues for GitLab.com on this tracker. See https://about.gitlab.com/support/
-->
## Summary
We've recently introduced a way to mount certificate volumes into KAS pods, which enables KAS to communicate both internally and externally over TLS.
However, this is only simple to use for the `privateApi`, via the `privateApi.tls.enabled` attribute.
We should make enabling TLS just as simple for the other KAS endpoints, instead of relying on users to combine `gitlab.kas.customConfig` with `privateApi.tls.enabled`.
### Proposal
A global key that tells KAS to enable TLS for all its components might make more sense.
| 3 |
3,828,396 | 116,890,603 |
2022-10-14 11:35:09.122000+00:00
|
Improve communication regarding how to install the Agent when running GitLab with custom certificates
|
## Release notes
Setting up the KAS component of the agent for Kubernetes with custom certificates, together with the CI/CD integration, is a rather complex task. To support users who require custom certificates, we updated the documentation. It covers how to set up KAS and agentk, and how to invoke `kubectl` commands from GitLab CI/CD with custom certificates.
## Proposal
The following discussion from !2803 should be addressed:
- [ ] @dmakovey started a [discussion](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2803#note_1133710036): (+4 comments)
> LGTM.
>
> something for further iterations (i.e. new issues?):
>
> Based on findings in https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2803#note_1133461689 it looks like mere setup of documented values is not sufficient for proper TLS function with custom CA certs, so can we:
>
> 1. add documentation for custom (including self-signed) CA authorities or point at existing one
> 1. how to reuse `gitlab-wildcard-tls-ca`
> 2. (optional) sort out how to add custom CA cert for `kubectl` rather than use `--insecure-skip-tls-verify` as that's not a proper solution to the problem.
>
> @Alexand what do you think?
| 2 |
3,828,396 | 112,823,192 |
2022-08-04 02:17:38.448000+00:00
|
Expose http.debug.tls settings in the registry helm chart
|
## Summary
In https://gitlab.com/gitlab-org/container-registry/-/issues/729+ we enabled the container registry to
set TLS certificates for the `http.debug` server.
We should expose these settings in the charts so that they can be configured properly on GitLab.com
| 1 |
3,828,396 | 104,725,197 |
2022-03-13 13:33:39.372000+00:00
|
Enable KAS by default on the GitLab chart
|
The following discussion from !2436 should be addressed:
- [ ] @nagyv-gitlab started a [discussion](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2436#note_872540922): (+1 comment)
> @Alexand Is this true? Did we enable `kas` by default only in Omnibus?
We already have `kas` enabled by default on Omnibus. Now we need to also enable it on the GitLab chart.
| 2 |
3,828,396 | 90,352,753 |
2021-06-11 14:25:58.344000+00:00
|
Spike: Scoping how to support multiple database configurations in CNG
|
CNG could support multiple configured databases in a way similar to Omnibus: https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/6192.
| 1 |
3,828,396 | 81,605,463 |
2021-03-25 05:10:09.495000+00:00
|
Remove non_test group specification on bundle install
|
## Summary
!322 introduced the `non_test` group in the Gemfile (@Ahmadposten, around %11.1), but it was removed very soon after in !404 (%11.2).
## Why
This change would eliminate an unnecessary deprecation warning from the CI job log:
```
$ bundle install -j $(nproc) --without non_test
[DEPRECATED] The `--without` flag is deprecated because it relies on being remembered across bundler invocations, which bundler will no longer do in future versions. Instead please use `bundle config set without 'non_test'`, and stop using this flag
```
https://gitlab.com/gitlab-org/charts/gitlab/-/jobs/1126027482
| 1 |
3,828,396 | 77,883,116 |
2021-01-18 18:06:56.619000+00:00
|
Support deployment of gitlab-sshd (gitlab-charts)
|
## Summary
Support the use of `gitlab-sshd` as a component within the GitLab Shell container / chart of Cloud Native GitLab.
## Details
2020-01-1 saw the merge of https://gitlab.com/gitlab-org/gitlab-shell/-/merge_requests/394, and we [can now begin integration](https://gitlab.com/gitlab-org/gitlab-shell/-/merge_requests/394#note_488121893).
`gitlab-sshd` is designed to replace OpenSSH's `sshd`, and can be the single service started by a container using it.
## Work items
1. Build and support configuration within `gitlab-shell` of the CNG containers
- binary
- process-wrapper / entrypoint & logging
1. Support for configuration within `gitlab/gitlab-shell` chart
| 2 |
3,828,396 | 70,677,672 |
2020-09-03 10:25:59.187000+00:00
|
Support Embedded Action Cable in Helm Charts
|
<!--
NOTICE: This Issue tracker is for the GitLab Helm chart, not the GitLab Rails application.
Support: Please do not raise support issues for GitLab.com on this tracker. See https://about.gitlab.com/support/
-->
## Summary
We introduced support for running Action Cable in embedded mode in Omnibus https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/4407. It can be enabled by setting the `ACTION_CABLE_IN_APP` environment variable. This was enabled in docker-compose in https://gitlab.com/gitlab-org/build/CNG/-/merge_requests/504.
We need to expose this environment variable via the Helm chart so that it can be set on `webservice` pods when deployed via K8s.
We should also support setting the `ACTION_CABLE_WORKER_POOL_SIZE` environment variable. Both are non-secret.
We've also created a [container](https://gitlab.com/gitlab-org/build/CNG/-/tree/master/gitlab-actioncable) specifically for running Action Cable in a standalone Puma server so it can be scaled independently. This is probably what will be used on gitlab.com and Helm chart work towards that is being tracked in https://gitlab.com/gitlab-org/charts/gitlab/-/issues/2284.
See also: https://gitlab.com/gitlab-org/charts/gitlab/-/issues/2043
## Detail
#### In Omnibus
- Action Cable is disabled by default;
- Action Cable is enabled in Omnibus using the `actioncable['enabled']` setting;
- By default, a separate Action Cable server is started that only serves WebSocket connections. To run it in embedded mode, as part of the same Puma server, the `actioncable['in_app']` setting is used, and
- Action Cable thread pool size is configured with the `actioncable['worker_pool_size']` setting.
#### In CNG
- A separate `actioncable` container exists, which can be scaled independently;
- Action Cable can be run in embedded mode on the `webservice` container by passing in the `ACTION_CABLE_IN_APP` envvar;
- Max AC thread pool size is configured with the `ACTION_CABLE_WORKER_POOL_SIZE` environment variable, and
- Workhorse accepts a `cableBackend` argument which accepts a service name and port and can proxy WebSocket requests to the correct service. It defaults to the same as the `authBackend` setting.
An example of a setup proxying requests to a `webservice` container running embedded Action Cable is in the CNG [docker-compose.yml](https://gitlab.com/gitlab-org/build/CNG/-/blob/master/docker-compose.yml) file.
Currently, Action Cable is only supported for the Puma server, not Unicorn.
## Proposal
Modify GitLab Helm chart to allow configuration of Action Cable in embedded mode.
| 5 |
3,828,396 | 69,815,283 |
2020-08-10 14:36:04.734000+00:00
|
Geo - Remove FDW related config from charts
|
https://gitlab.com/gitlab-org/charts/gitlab/
| 1 |
3,828,396 | 32,932,337 |
2020-04-06 10:38:37.867000+00:00
|
CNG: `gitlab-rails-ce`/`gitlab-rails-ee` sometimes fail at the NodeJS downloading step
|
## Summary
See:
- https://gitlab.com/gitlab-org/build/CNG-mirror/-/jobs/499481345
```
Step 26/63 : RUN curl -fsSL "https://nodejs.org/download/release/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.gz" | tar --strip-components 1 -xzC /usr/local/
---> Running in 68a08e4dd34c
curl: (92) HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
The command '/bin/sh -c curl -fsSL "https://nodejs.org/download/release/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.gz" | tar --strip-components 1 -xzC /usr/local/' returned a non-zero code: 2
```
- https://gitlab.com/gitlab-org/build/CNG-mirror/-/jobs/499787651
```
Step 27/65 : RUN curl -fsSL "https://nodejs.org/download/release/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.gz" | tar --strip-components 1 -xzC /usr/local/
---> Running in 0580fc35e278
curl: (22) The requested URL returned error: 500
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
The command '/bin/sh -c curl -fsSL "https://nodejs.org/download/release/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.gz" | tar --strip-components 1 -xzC /usr/local/' returned a non-zero code: 2
```
These are two different errors, but at the same step:
## Current behavior
The upstream server issued a `500`, and the job immediately failed.
## Expected behavior
The fetch from upstream should be retried on failure, (hopefully) allowing the rest of the job to succeed.
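The desired retry behavior can be sketched as follows. This is a minimal Python stand-in for the download step; `fetch` and all other names here are hypothetical, not part of the actual build scripts.

```python
import time

def fetch_with_retry(fetch, attempts=3, delay=0.0):
    """Call fetch() until it succeeds, retrying transient failures.

    fetch stands in for the curl download step; on persistent failure
    the last error is re-raised so the job still fails loudly.
    """
    last_err = None
    for attempt in range(attempts):
        try:
            return fetch()
        except OSError as err:  # e.g. an HTTP 500 or a truncated download
            last_err = err
            time.sleep(delay * (attempt + 1))  # simple linear backoff
    raise last_err
```

In the Dockerfile itself, curl's built-in `--retry N` flag would be the lighter-weight fix.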
## Versions
CNG @ 3c7ef1ba18a2c361139975379f16c8b00f7156a5
## Relevant logs
See above.
| 2 |
3,828,396 | 29,996,781 |
2020-01-27 19:58:19.206000+00:00
|
Form Validation for WIP Limits
|
We need form validation and messages when a user enters non-numerical input; details can be found [here](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/22552#note_276193257)
I think we need to validate against the following:
- Number
- Negative Number
- Max wip limit? (Would this be valuable?)
| 2 |
3,828,396 | 26,520,238 |
2019-10-31 12:33:41.840000+00:00
|
Allow to customize the livenessProbe for `gitlab-shell`
|
In https://gitlab.com/gitlab-org/gitlab/issues/35349, we noticed that `gitlab-shell` reports a lot of liveness errors in our configuration, so it would be useful to be able to customize its `livenessProbe`.
| 2 |
3,828,396 | 20,621,766 |
2019-05-06 14:07:50.613000+00:00
|
Consider caching gems to make the gitlab-rails-ce job faster and more resilient when Rubygems has problems
|
Rubygems seems to be having problems at the moment, which leads to failures in our CI pipelines (e.g. https://gitlab.com/gitlab-org/build/CNG-mirror/-/jobs/207740638).
If we cached the `vendor/ruby` folder, [similarly to how we do for the CE/EE pipelines](https://gitlab.com/gitlab-org/gitlab-ce/blob/f7bba29cb573e74b56aba7882048aac7de0d6868/.gitlab/ci/global.gitlab-ci.yml#L14), the jobs would be faster and more resilient to Rubygems issues.
| 4 |
3,836,952 | 109,094,220 |
2022-05-25 10:50:48.155
|
SCORU: WASM: Tickify module initialization
|
`Eval.init` from the WebAssembly reference interpreter needs to be tickified: it has a significant time complexity cost and must be broken up into small steps.
!5552 introduces lazy vectors instead of lists for the module fields, which are then converted to lists in `init`, then back to lazy vectors. Tickification can remove this conversion roundtrip by going directly from lazy vector to lazy vector, evaluating the fields in small steps.
Remaining:
- Remove all uses of `to_list`/`concat`/`of_list` from `Eval.init`
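As an illustration of the roundtrip-free approach, the fields can be consumed and evaluated one small step at a time instead of materializing a list up front. This is a hypothetical Python analogue; the real code is OCaml operating on lazy vectors.

```python
def tickified_init(fields, eval_field):
    """Evaluate module fields one tick at a time via a generator,
    never converting the whole (lazy) sequence to a list up front."""
    for field in fields:  # one field per tick
        yield eval_field(field)
```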
| 5 |
3,836,952 | 95,278,232 |
2021-10-12 11:54:14.627
|
Make tezos-proxy-server honor /chains/main/block/head/header
|
`Tezos Commons` is willing to use `tezos-proxy-server` in their deployment. Their main target is `run_operations`, which already works :muscle:. However, they were surprised that it does not also serve `/chains/main/block/head/header`.
Indeed, `tezos-proxy-server` doesn't serve it, because that endpoint is registered in a resto `Directory` in `block_directory.ml`, while the proxy server serves `Plugin.RPC.rpc_services`. If we were to add it, it would go something like:
```diff
--- a/src/proto_alpha/lib_client/proxy.ml
+++ b/src/proto_alpha/lib_client/proxy.ml
@@ -163,6 +163,28 @@ let init_env_rpc_context (_printer : Tezos_client_base.Client_context.printer)
initial_context proxy_builder rpc_context mode chain block >>= fun context ->
return {Tezos_protocol_environment.block_hash; block_header = shell; context}
+let block_services_dir (rpc_context : RPC_context.json) =
+ let open Protocol in
+ let module Directory = Resto_directory.Make (RPC_encoding) in
+ let rpc_context = new Protocol_client_context.wrap_rpc_context rpc_context in
+ let open Tezos_shell_services in
+ let path =
+ let open Tezos_rpc.RPC_path in
+ prefix Block_services.chain_path Block_services.path
+ in
+ let service =
+ Tezos_rpc.RPC_service.prefix path Block_services.Empty.S.header
+ in
+ Directory.register Directory.empty service (fun (((), chain), block) ->
+ let res =
+ Protocol_client_context.Alpha_block_services.header
+ rpc_context
+ ~chain
+ ~block
+ ()
+ in
+ assert false)
+
```
or exposing `proxy.ml`'s `proxy_block_header` function in some way.
| 24 |
3,836,952 | 95,043,610 |
2021-10-07 13:23:36.887
|
Small merge requests are encouraged, but this is not documented
|
There is consensus in the merge team to favor small merge requests. However, this is not stated in the documentation, which has caused confusion in the past. It has also happened that small merge requests introducing dead code were rejected because this policy was not well known enough.
| 1 |
3,836,952 | 126,542,462 |
2023-04-12 10:48:48.850
|
Gossipsub: Remove peers from the mesh and fanout on unsubscribe
|
Remove peers from the topic mesh when the peer unsubscribes from the topic, as done in the [rust implementation](https://github.com/libp2p/rust-libp2p/blob/b7ba0f728633c1c0e0bc67339e5b5002bb9adb0f/protocols/gossipsub/src/behaviour.rs#L2029).
Additionally, we remove the peers from the fanout on unsubscribe. This is not done in go/rust (they wait until the next heartbeat for the fanout to be cleaned), but we do it on unsubscribe because it makes sense and is more consistent with the cleaning of the mesh on unsubscribe.
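The intended bookkeeping, as a minimal Python sketch; the actual implementation is OCaml, and the dict-of-sets `state` shape here is hypothetical.

```python
def handle_unsubscribe(state, peer, topic):
    """Drop peer from both the mesh and the fanout for topic."""
    state["mesh"].get(topic, set()).discard(peer)
    # go/rust defer fanout cleaning to the next heartbeat; we do it eagerly
    state["fanout"].get(topic, set()).discard(peer)
```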
| 100 |
3,836,952 | 126,049,644 |
2023-03-31 08:28:21.264
|
Gossipsub: Only graft peers that subscribed to a topic
|
We should maintain the invariant that whenever a peer asks to graft us, we know it has subscribed to that topic.
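A minimal Python sketch of the invariant check; names and state shape are hypothetical, the real automaton is OCaml.

```python
def handle_graft(state, peer, topic):
    """Accept a graft only from a peer known to have subscribed to topic."""
    if topic not in state["subscriptions"].get(peer, set()):
        return "Unexpected_graft"  # violates the invariant; reject
    state["mesh"].setdefault(topic, set()).add(peer)
    return "Grafted"
```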
| 100 |
3,836,952 | 125,947,472 |
2023-03-29 13:17:19.167
|
[GS/Automaton] add an Already_published case in [`Publish] output
|
Currently, `handle_publish` has a single case and does not check whether the message is already known. We should add an `Already_published` case for when the message has already been published.
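The proposed behavior, as a hypothetical Python sketch with a dict-based seen-message cache (the actual output would be an OCaml variant case):

```python
def handle_publish(state, msg_id, msg):
    """Return Already_published when the message is in the seen cache,
    otherwise record it and publish."""
    if msg_id in state["seen"]:
        return "Already_published"
    state["seen"][msg_id] = msg
    return "Publish"
```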
| 100 |
3,836,952 | 125,933,482 |
2023-03-29 07:32:39.392
|
Gossipsub: Fix logic for publishing to direct peers
|
From https://gitlab.com/tezos/tezos/-/merge_requests/8248#note_1332465561
> In our implementation, the `direct` peers are only considered in the case where
>
> * mesh is empty for the topic.
> * fanout is empty for the topic.
>
>(i.e. the `match fanout_opt with | None ->` case above)
>
> But in the go implementation, the message is published to the `direct` peers regardless of the `mesh`/`fanout` emptiness: https://github.com/libp2p/go-libp2p-pubsub/blob/56c0e6c5c9dfcc6cd3214e59ef19456dc0171574/gossipsub.go#L997
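The go-style selection can be sketched as follows (hypothetical Python; the point is that direct peers are always included, independent of mesh/fanout emptiness):

```python
def select_publish_peers(direct, mesh, fanout, topic):
    """Publish targets: all direct peers, plus the topic mesh, falling
    back to the topic fanout only when the mesh is empty."""
    peers = set(direct)  # direct peers are included unconditionally
    topic_mesh = mesh.get(topic, set())
    peers |= topic_mesh if topic_mesh else fanout.get(topic, set())
    return peers
```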
| 100 |
3,836,952 | 125,875,704 |
2023-03-28 08:02:07.538
|
[GS Automaton] clarify/improve Heartbeat output for prune
|
The automaton currently outputs values of the following type for heartbeat:
```ocaml
| Heartbeat : {
(* topics per peer to graft to *)
to_graft : Topic.Set.t Peer.Map.t;
(* topics per peer to prune from *)
to_prune : Topic.Set.t Peer.Map.t;
(* set of peers for which peer exchange (PX) will not be proposed *)
noPX_peers : Peer.Set.t;
}
-> [`Heartbeat] output
```
It seems that the information contained in the `to_prune` field doesn't allow calling `handle_prune`, which requires inputs of type:
```ocaml
type prune = {
peer : Peer.t;
topic : Topic.t;
px : Peer.t Seq.t;
backoff : span;
}
```
In fact, the fields `px : Peer.t Seq.t;` and `backoff` (per "peer and topic"?) are missing.
See which function needs to be modified: `heartbeat` or `prune`?
| 100 |
3,836,952 | 125,648,644 |
2023-03-22 19:13:08.113
|
[GS worker] implement sending messages to peers
| null | 100 |
3,836,952 | 125,616,372 |
2023-03-22 08:31:57.331
|
[GS worker] handle join and leave in apply
| null | 100 |
3,836,952 | 125,527,415 |
2023-03-20 10:20:55.672
|
[GS worker] implement worker shutdown
|
See the current implementation of the `shutdown` function in `gossipsub_worker.ml`.
Should we also unsubscribe from callbacks when shutting down the worker?
Also see discussion in https://gitlab.com/tezos/tezos/-/merge_requests/8116#note_1323535456
| 100 |
3,836,952 | 125,526,330 |
2023-03-20 09:56:50.374
|
[GS Worker] implement the notion of P2P messages
|
The P2P message handler relies on a codec to encode/decode messages to/from bytes. The current implementation of messages and the codec is a dummy (for typechecking).
| 100 |
3,836,952 | 125,519,992 |
2023-03-20 07:57:55.358
|
[GS worker] handle Received_message application
| null | 100 |
3,836,952 | 125,519,972 |
2023-03-20 07:57:11.880
|
[GS worker] handle Send_message application
| null | 100 |
3,836,952 | 125,519,939 |
2023-03-20 07:55:56.529
|
[GS worker] handle Publish_message output
| null | 100 |
3,836,952 | 125,519,897 |
2023-03-20 07:54:44.566
|
[GS worker] handle Disconnection output
| null | 100 |
3,836,952 | 125,519,879 |
2023-03-20 07:54:03.636
|
[GS worker] handle New_connection output
| null | 100 |
3,836,952 | 125,519,853 |
2023-03-20 07:53:05.747
|
[GS worker] handle Heartbeat output
| null | 100 |
3,836,952 | 125,274,294 |
2023-03-15 03:37:08.623
|
Gossipsub: get_peers should not return peers that are waiting for expiration
|
In our implementation, `connection.expire` is used to keep the connection in `connections` for a while after `remove_peer`.
Since peers with a non-none `connection.expire` are removed peers waiting to expire, we shouldn't return them from the `get_peers` function [here](https://gitlab.com/tezos/tezos/blob/de253082858204509d4e91c9c079736abaf8d440/src/lib_gossipsub/tezos_gossipsub.ml#L653).
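The intended filter, as a hypothetical Python sketch (the real `get_peers` is OCaml; the dict shape here is illustrative):

```python
def get_peers(connections):
    """Return only live peers; peers whose connection has a non-None
    expire are removed peers awaiting expiration and are skipped."""
    return [peer for peer, conn in connections.items() if conn["expire"] is None]
```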
| 100 |
3,836,952 | 125,099,090 |
2023-03-10 12:37:12.719
|
Gossipsub: backoff and connection clearing
|
In the Go implementation:
* `RemovePeer` removes the peers from the state, but does not update the backoffs at all
* the backoffs are removed independently from `RemovePeer`, based on time alone, during heartbeat
In our implementation:
* `RemovePeer` sets an expiring time for the connection, using the `retain_duration` constant
* the backoffs **and** the connection are possibly removed (based on their expiry times) during the heartbeat; in particular, a connection is removed only if it has no backoffs associated.
I opened this issue to:
- check that this makes sense
- see whether there are obvious simplifications
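Our scheme above can be sketched as follows (hypothetical Python; the dict shapes and names are illustrative, not the OCaml state):

```python
def heartbeat_cleanup(connections, backoffs, now):
    """Expire backoffs by time first, then drop connections whose
    expire time has passed *and* that have no remaining backoffs."""
    for peer in list(backoffs):
        live = {t: until for t, until in backoffs[peer].items() if until > now}
        if live:
            backoffs[peer] = live
        else:
            del backoffs[peer]
    for peer, conn in list(connections.items()):
        if conn["expire"] is not None and conn["expire"] <= now and peer not in backoffs:
            del connections[peer]
```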
| 50 |
3,836,952 | 124,884,495 |
2023-03-07 14:05:02.458
|
Gossipsub: prepare gossip messages during heartbeat
| null | 100 |
3,836,952 | 124,570,375 |
2023-03-01 17:02:38.105
|
Gossipsub: Score check is missing for IHave control message
|
The following discussion from !7594 should be addressed:
- [ ] @linoscope started a [discussion](https://gitlab.com/tezos/tezos/-/merge_requests/7594#note_1296475507):
> Missing score check in this `handle` too.
| 100 |
3,836,952 | 124,570,310 |
2023-03-01 17:00:50.571
|
Gossipsub: Score check is missing for IWant
|
The following discussion from !7594 should be addressed:
- [ ] @linoscope started a [discussion](https://gitlab.com/tezos/tezos/-/merge_requests/7594#note_1296353868):
> The score check is missing ([go implementation](https://github.com/libp2p/go-libp2p-pubsub/blob/56c0e6c5c9dfcc6cd3214e59ef19456dc0171574/gossipsub.go#L699))
| 100 |
3,836,952 | 124,570,183 |
2023-03-01 16:57:46.456
|
Gossipsub: Grafting with negative score
|
The following discussion from !7594 should be addressed:
- [ ] @linoscope started a [discussion](https://gitlab.com/tezos/tezos/-/merge_requests/7594#note_1296319510): (+1 comment)
> We are missing the score check ([go implmenetation](https://github.com/libp2p/go-libp2p-pubsub/blob/56c0e6c5c9dfcc6cd3214e59ef19456dc0171574/gossipsub.go#L801)). Feel free to address in a future issue, but let's be careful not to forget.
| 100 |
3,836,952 | 124,569,284 |
2023-03-01 16:43:03.216
|
Gossipsub: Implement backoff for pruning
|
The backoff computation performed while pruning a peer is not implemented yet.
| 100 |
3,836,952 | 123,288,159 |
2023-02-08 13:12:02.609
|
Node/logs: remove sections from default stdout logs
|
Sections are not useful to the user and take up a lot of space in the default stdout logs.
The default file-descriptor sink already stores sections, so we can safely remove them from the stdout logs.
| 1 |
3,836,952 | 123,015,056 |
2023-02-03 16:05:34.443
|
Node logs: activate daily-logs by default on disk
| null | 4 |
3,836,952 | 122,477,585 |
2023-01-26 10:06:10.709
|
Remove pytest framework
| null | 8 |
3,836,952 | 122,432,229 |
2023-01-25 14:27:15.217
|
lib_store: higher level of verbosity for a few events
|
```
2023-01-24T14:31:21.199-00:00 [validator.blockprechecked_block] prechecked block BMHjBfXHWQs2S93kcRtVcXsMskmvSqHaaLuB8pU2rJTgwFbQQwm
2023-01-24T14:31:21.199-00:00 [node.storestore_prechecked_block] prechecked block BMHjBfXHWQs2S93kcRtVcXsMskmvSqHaaLuB8pU2rJTgwFbQQwm (level: 31) was stored
2023-01-24T14:31:21.214-00:00 [node.storestore_block] block BMHjBfXHWQs2S93kcRtVcXsMskmvSqHaaLuB8pU2rJTgwFbQQwm (level: 31) was stored
2023-01-24T14:31:21.214-00:00 [validator.blockvalidation_success] block BMHjBfXHWQs2S93kcRtVcXsMskmvSqHaaLuB8pU2rJTgwFbQQwm validated Request pushed on 2023-01-24T14:31:21.182-00:00, treated in 8.306us, completed in 31.838ms
2023-01-24T14:31:21.215-00:00 [node.storeset_head] BMHjBfXHWQs2S93kcRtVcXsMskmvSqHaaLuB8pU2rJTgwFbQQwm (level: 31) set as new head
2023-01-24T14:31:21.216-00:00 [validator.chainhead_increment] Update current head to BMHjBfXHWQs2S93kcRtVcXsMskmvSqHaaLuB8pU2rJTgwFbQQwm (level 31, timestamp 2022-10-13T16:06:50-00:00, fitness 02::0000001f::::ffffffff::00000000), same branch
```
Store events create a lot of noise when a block is treated. These events should probably be at `Debug` level instead of `Info`.
| 1 |
3,836,952 | 122,431,624 |
2023-01-25 14:21:36.976
|
File-descriptor sink: missing dot between section and event name
|
Fixed by https://gitlab.com/tezos/tezos/-/merge_requests/7494
```
2023-01-24T14:37:11.559-00:00 [validator.chainhead_increment] Update current head to BMHUKJJdXtPEipwTM5SsxRQCE76qbbWnsXibA75sYY6shNcsUJU (level 10009, timestamp 2022-10-15T19:23:45-00:00, fitness 02::00002719::::ffffffff::00000001), same branch
```
```
2023-01-25T09:49:00.128-00:00 [validator.chain.head_increment] Update current head to BMJNPjiw1DkYn3NXoo6D4UQwBpyDT3rz7zmJ2mANWwxU9YFC1wn (level 563573, timestamp 2023-01-24T14:22:35-00:00, fitness 02::00089975::::ffffffff::00000000), same branch
```
`validator.chainhead_increment` -> `validator.chain.head_increment`
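The fix amounts to dot-joining the full section path and the event name, rather than concatenating the last section component with the name. A hypothetical Python sketch of the naming scheme:

```python
def event_id(section, name):
    """Build the full event identifier by joining the section path and
    the event name with dots."""
    return ".".join(list(section) + [name])
```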
| 1 |
3,836,952 | 122,304,482 |
2023-01-23 14:05:02.310
|
Logging: add an option enabling rotation of file-descriptor sinks on a daily basis
|
Goal: cut a lot of content from the stdout logging without compromising debugging.
- Add an option to file-based sinks so that files are limited by time
- option: `rotation-days=N`
| 4 |
3,836,952 | 121,760,700 |
2023-01-13 17:45:12.031
|
Node logs: simplify bootstrap logs
|
Already done in a previous [unmerged merge request](https://gitlab.com/tezos/tezos/-/merge_requests/2344/diffs?commit_id=1916f683d6b0ed705543b3fc42bcd249e970a122).
```
Dec 4 11:54:19.581 - node.validator.bootstrap_pipeline: fetching branch of about 10499 blocks from peer
Dec 4 11:54:19.581 - node.validator.bootstrap_pipeline: idqrpoSMD3XKbsN5U8wqyT2d4SLfgv in 98 steps
```
The section is extremely long and the wording could be simplified.
The solution could be the following:
```
Dec 4 14:34:25.754 - bootstrap: fetching ~11741 blocks from peer ids9jAYxv8h4kWTi6swki3fXfTEUG1 in 100 steps
```
| 1 |
3,836,952 | 121,759,462 |
2023-01-13 17:28:30.644
|
lib_shell: set higher verbosity level for a few recurrent events
|
Peer disconnections and block prechecks are seen as not useful to the user.
We would like to set them to `info` instead of `notice`.
| 1 |
3,836,952 | 121,662,378 |
2023-01-12 13:37:18.085
|
Node logs: shorten request status size in various events
|
```
Jan 6 14:33:10.900 - validator.block: block BLkhtA81uak1SxxCDCVMXEnXCKsJcFLajVT84ZjrFKbxAaJUL4E validated
Jan 6 14:33:10.900 - validator.block: Request pushed on 2023-01-06T13:33:10.853-00:00, treated in 153us, completed in 45.531ms
```
In various places, we print the request status of some requests to workers. It is a way to know how much time an operation took (like block validation).
However, it is hardly understandable by the user.
- The `Request pushed` part gives no information to people with no knowledge of the worker interface.
- The same goes for the push time, whose hour is, incidentally, not consistent with the log time.
- `treated` is the time elapsed between the submission of the request to a worker and the beginning of its _treatment_. It doesn't seem to be of any help to anyone who is not trying to debug a particular worker implementation.
### Proposed solution
Simplify the printing of the status in various places. The prevalidator and the block validator probably don't need the full printout, and we could reduce the display to the `completed` information. The above display would then become:
```
Jan 6 14:33:10.900 - validator.block: block BLkhtA81uak1SxxCDCVMXEnXCKsJcFLajVT84ZjrFKbxAaJUL4E validated (45.531ms)
```
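The reduced formatting could pick a single human-readable unit for the `completed` duration, along these lines (illustrative Python sketch; the node formats durations in OCaml):

```python
def format_completed(seconds: float) -> str:
    # Keep only the `completed` duration, rendered with a single unit.
    if seconds < 1e-3:
        return f"{seconds * 1e6:.0f}us"
    if seconds < 1.0:
        return f"{seconds * 1e3:.3f}ms"
    return f"{seconds:.3f}s"

# e.g. "block BLkhtA81... validated (45.531ms)"
line = f"block BLkhtA81... validated ({format_completed(0.045531)})"
```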
| 3 |
3,836,952 | 121,662,278 |
2023-01-12 13:35:07.037
|
Node logs: remove block properties (timestamp and fitness)
|
```
Jan 6 14:33:10.920 - validator.chain: Update current head to BLkhtA81uak1SxxCDCVMXEnXCKsJcFLajVT84ZjrFKbxAaJUL4E
Jan 6 14:33:10.920 - validator.chain: (level 462824, timestamp 2023-01-06T13:33:10-00:00, fitness
Jan 6 14:33:10.920 - validator.chain: 02::00070fe8::::ffffffff::00000000), same branch
```
In some events advertising new blocks, some properties are seen as obscure and can even be found elsewhere, like the `fitness` or the `timestamp`. They are also probably not greppable and not especially useful for a developer. (Could the fitness be replaced by the priority instead?)
- timestamp
- _Regarding the Update current head to message, I think it worth keeping the timestamp. Indeed, it is easy to know if your node is "up to date" while looking at it (if last block timestamp is very recent). The level requires a third party to get the "is my node synced" information._
- we could replace it with `4ms ago`, which is shorter than `timestamp 2023-01-06T13:33:10-00:00`
- we keep only one unit (year, month, s, ms)
- is there a case where the timestamp is very different from the log date?
- don't display the timestamp if it is close to the current date?
- fitness: check whether it is useful or not (feedback: Albin, Romain); the fitness can be found with the block hash.
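The `4ms ago` suggestion could look like the following single-unit relative formatter (hypothetical sketch, not the node's code):

```python
def ago(delta_seconds: float) -> str:
    # Keep exactly one unit: the largest one that fits the delta.
    for suffix, size in [("d", 86400.0), ("h", 3600.0), ("min", 60.0),
                         ("s", 1.0), ("ms", 1e-3)]:
        if delta_seconds >= size:
            return f"{delta_seconds / size:.0f}{suffix} ago"
    return "0ms ago"

assert ago(0.004) == "4ms ago"    # recent head: short and readable
assert ago(120.0) == "2min ago"   # stale head is immediately visible
```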
| 1 |
3,836,952 | 121,161,542 |
2023-01-03 12:47:13.823
|
Translate pytest `test_contract_opcodes.py`
|
Estimated translation time: 23 hours
| 23 |
3,836,952 | 121,161,541 |
2023-01-03 12:47:13.573
|
Translate pytest `test_contract_onchain_opcodes.py`
|
Estimated translation time: 53 hours.
| 53 |
3,836,952 | 121,161,540 |
2023-01-03 12:47:13.337
|
Translate pytest `test_contract_macros.py`
|
Estimated translation time: 20 hours.
| 20 |
3,836,952 | 121,161,539 |
2023-01-03 12:47:13.147
|
Translate pytest `test_contract_baker.py`
|
Estimated translation time: 2 hours.
| 2 |
3,836,952 | 121,161,537 |
2023-01-03 12:47:12.316
|
Translate pytest `test_contract_annotations.py`
|
Estimated translation time: 5 hours.
| 5 |
3,836,952 | 116,884,253 |
2022-10-14 09:39:51.788
|
EVM on WASM: mock host functions for store_move and store_copy
|
# Goal
Implement mock versions of the `store_move` and `store_copy` host functions, which are also available through the WASM PVM.
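A Python sketch of the move/copy semantics such mocks need, over an in-memory store (the real mocks are Rust against the WASM PVM; names and prefix behavior here are assumptions for illustration, and `dst` is assumed not to be nested under `src`):

```python
class MockStore:
    """In-memory durable store with path-prefix copy/move."""

    def __init__(self):
        self.data = {}  # path -> bytes

    def _subtree(self, prefix):
        # All keys at or below `prefix`.
        return [p for p in self.data if p == prefix or p.startswith(prefix + "/")]

    def store_copy(self, src, dst):
        # Copy every key under `src` to the same relative path under `dst`.
        for p in self._subtree(src):
            self.data[dst + p[len(src):]] = self.data[p]

    def store_move(self, src, dst):
        # A move is a copy followed by deleting the source subtree.
        self.store_copy(src, dst)
        for p in self._subtree(src):
            del self.data[p]
```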
| 3 |
3,836,952 | 116,884,160 |
2022-10-14 09:38:05.683
|
EVM on WASM: store_move and store_copy
|
## Goal
Implement the kernel side of `store_move` and `store_copy`. These functions are already available through the WASM PVM; we need to add them to the host runtime interface in Rust.
| 3 |
3,836,952 | 116,462,276 |
2022-10-07 12:16:54.427
|
Translate pytest `test_voting.py`
|
estimated translation time: 12 hours
| 12 |
3,836,952 | 116,462,275 |
2022-10-07 12:16:54.326
|
Translate pytest `test_tls.py`
|
estimated translation time: 2 hours
| 2 |
3,836,952 | 116,462,274 |
2022-10-07 12:16:54.203
|
Translate pytest `test_proto_demo_noops_manual_bake.py`
|
estimated translation time: 4 hours
| 4 |
3,836,952 | 116,462,273 |
2022-10-07 12:16:54.094
|
Translate pytest `test_proto_demo_counter.py`
|
estimated translation time: 4 hours
| 4 |
3,836,952 | 116,462,272 |
2022-10-07 12:16:53.995
|
Translate pytest `test_p2p.py`
|
estimated translation time: 6 hours
| 6 |
3,836,952 | 116,462,271 |
2022-10-07 12:16:53.883
|
Translate pytest `test_openapi.py`
|
estimated translation time: ? hours
| 4 |
3,836,952 | 116,462,270 |
2022-10-07 12:16:53.776
|
Translate pytest `test_nonce_seed_revelation.py`
|
estimated translation time: 6 hours
| 6 |
3,836,952 | 116,462,269 |
2022-10-07 12:16:53.669
|
Translate pytest `test_multiple_transfers.py`
|
estimated translation time: 6 hours
| 6 |
3,836,952 | 116,462,268 |
2022-10-07 12:16:53.490
|
Translate pytest `test_multinode_storage_reconstruction.py`
|
estimated translation time: 8 hours
| 8 |
3,836,952 | 116,462,267 |
2022-10-07 12:16:53.386
|
Translate pytest `test_mempool.py`
|
estimated translation time: 4 hours
| 4 |
3,836,952 | 116,462,266 |
2022-10-07 12:16:53.235
|
Translate pytest `test_many_nodes.py`
|
estimated translation time: 4 hours
| 4 |
3,836,952 | 116,462,265 |
2022-10-07 12:16:53.121
|
Translate pytest `test_many_bakers.py`
|
estimated translation time: 2 hours
| 2 |
3,836,952 | 116,462,264 |
2022-10-07 12:16:52.974
|
Translate pytest `test_fork.py`
|
estimated translation time: 4 hours
| 4 |
3,836,952 | 116,462,263 |
2022-10-07 12:16:52.860
|
Translate pytest `test_forge_block.py`
|
estimated translation time: 2 hours
| 2 |
3,836,952 | 116,462,262 |
2022-10-07 12:16:52.764
|
Translate pytest `test_cors.py`
|
estimated translation time: 4 hours
| 4 |
3,836,952 | 116,462,261 |
2022-10-07 12:16:52.662
|
Translate pytest `test_client_without_node.py`
|
estimated translation time: 16 hours
| 16 |
3,836,952 | 116,462,260 |
2022-10-07 12:16:52.507
|
Translate pytest `test_client.py`
|
estimated translation time: 2 hours
| 2 |
3,836,952 | 116,462,259 |
2022-10-07 12:16:52.395
|
Translate pytest `test_binaries.py`
|
estimated translation time: 2 hours
| 2 |
3,836,952 | 116,462,258 |
2022-10-07 12:16:52.287
|
Translate pytest `test_basic.py`
|
estimated translation time: 22 hours
| 22 |
3,836,952 | 116,462,257 |
2022-10-07 12:16:52.082
|
Translate pytest `test_baker_operations_cli_options.py`
|
estimated translation time: 14 hours
| 14 |
3,836,952 | 116,462,256 |
2022-10-07 12:16:51.981
|
Translate pytest `test_accuser.py`
|
estimated translation time: 4 hours
| 4 |
3,836,952 | 116,462,255 |
2022-10-07 12:16:51.876
|
Translate pytest `test_programs.py`
|
estimated translation time: 4 hours
| 4 |
3,836,952 | 116,462,254 |
2022-10-07 12:16:51.739
|
Translate pytest `test_liquidity_baking.py`
|
estimated translation time: 12 hours
| 12 |
3,836,952 | 116,462,252 |
2022-10-07 12:16:51.614
|
Translate pytest `test_multisig.py`
|
estimated translation time: 24 hours
| 24 |
3,836,952 | 116,462,251 |
2022-10-07 12:16:51.432
|
Translate pytest `test_fa12.py`
|
estimated translation time: 16 hours
| 16 |
3,836,952 | 116,461,025 |
2022-10-07 11:51:11.990
|
Continue "Translate the churny pytests `test_contract.py`"
|
Estimated translation time: 50 hours
Continuation of #3633
- [x] `TestManager` (@arvidnl !7056)
- [x] `TestScriptHashMultiple` (~@linoscope !6840~, @arvidnl !7039)
- [x] `TestContracts` (@rdavison !6625)
- [x] `TestContractTypeChecking` (@rdavison !6503)
- [x] `TestChainId` (@rdavison !6436)
- [x] `TestExecutionOrdering` (@linoscope https://gitlab.com/tezos/tezos/-/merge_requests/7053/)
- [x] `TestNonRegression` (@xf6 !6616)
- [x] `TestView` (@xf6 !6314)
- [x] `TestTypecheck` (@lykimquyen !6381)
- [x] `TestBigMapToSelf` (@lykimquyen !6325)
- [x] `TestNormalize` (@linoscope !6394)
- [x] `TestGasBound` (@lykimquyen !6684)
- [x] `TestScriptHashOrigination` (@lykimquyen !6627)
- [x] `TestComparables` (@lykimquyen !6614)
- [x] `TestBadAnnotation` (@lykimquyen !6642)
- [x] `TestOrderInTopLevelDoesNotMatter` (@linoscope !6604)
- [x] `TestMiniScenarios`(@lykimquyen !6711)
- [x] `TestMiniScenarios` splits the test case entrypoints (@lykimquyen !6785)
- [x] `TestScriptHashRegression` (@arvidnl !6636)
| 50 |
3,836,952 | 113,675,088 |
2022-08-23 07:46:12.699
|
Translate the slow pytests `test_sapling.py`
|
estimated translation time: ~~20~~ 40 hours
| 40 |
3,836,952 | 113,675,087 |
2022-08-23 07:46:12.580
|
Translate the slow pytests `test_migration.py`
|
estimated translation time: ~~3~~ 6 hours
| 6 |
3,836,952 | 113,675,086 |
2022-08-23 07:46:12.496
|
Translate the slow pytests `test_baker_endorser.py`
|
estimated translation time: ~~2~~ 4 hours
| 4 |
3,836,952 | 113,675,085 |
2022-08-23 07:46:12.416
|
Translate the churny pytests `test_voting_full.py`
|
estimated translation time: ~~4~~ 8 hours
| 8 |
3,836,952 | 113,675,084 |
2022-08-23 07:46:12.335
|
Translate the churny pytests `test_rpc.py`
|
estimated translation time: 2 hours
| 2 |
3,836,952 | 113,675,083 |
2022-08-23 07:46:12.248
|
Translate the churny pytests `test_per_block_votes.py`
|
estimated translation time: 4 hours
| 4 |
3,836,952 | 113,675,082 |
2022-08-23 07:46:12.165
|
Start "Translate the churny pytests `test_contract.py`"
|
estimated translation time: 48 hours (before split with #3942)
- [x] `TestTypecheckingErrors` (@jaiyalas !6536, 1hr)
- [x] `TestTZIP4View` (@lykimquyen !6326)
- [x] `TestSelfAddressTransfer` (@lykimquyen !6437)
- [x] `TestBadIndentation` (@jaiyalas !6324)
- [x] `TestOriginateContractFromContract` (@linoscope: https://gitlab.com/tezos/tezos/-/merge_requests/6382)
- [x] `TestCreateRemoveTickets` (@linoscope !6105)
- [x] `TestSendTicketsInBigMap` (@linoscope !6105)
| 48 |
3,836,952 | 113,675,081 |
2022-08-23 07:46:12.068
|
Translate the churny pytests `test_mockup.py`
|
estimated translation time: 14 hours
| 14 |
3,836,952 | 113,675,080 |
2022-08-23 07:46:11.961
|
Translate the flaky pytests `test_per_block_votes.py`
|
estimated translation time: 4 hours
| 4 |
3,836,952 | 113,675,079 |
2022-08-23 07:46:11.773
|
Translate the flaky pytests `test_injection.py`
|
estimated translation time: 2 hours
| 2 |
3,836,952 | 113,675,078 |
2022-08-23 07:46:11.658
|
Translate the flaky pytests `test_bootstrap.py`
|
estimated translation time: 5 hours
| 5 |
3,836,952 | 113,675,077 |
2022-08-23 07:46:11.520
|
Translate the flaky pytests `test_tenderbake*.py`
|
estimated translation time: 14 hours
- [x] test_tenderbake_bakers_restart.py (~~!6320~~ !6523)
- [x] test_tenderbake_incremental_start.py (~~!6320~~ !6523)
- [x] test_tenderbake_long_dynamic_bake.py (!6300)
- [x] test_tenderbake_manual_bake.py (https://gitlab.com/tezos/tezos/-/merge_requests/6345/)
- [x] test_tenderbake.py (!5245)
| 14 |
3,836,952 | 113,413,415 |
2022-08-17 15:41:46.804
|
EVM on WASM: Add TX kernel functionality to EVM kernel
|
The EVM kernel must be able to handle deposits, transactions and withdrawals just like the TX kernel. Add a new transaction type, `EVMtransaction`, that encapsulates an EVM contract call. The contents of this transaction are not so important for this ticket; what matters is only that it exists and that it is used to dispatch to a `handle_evm_call` function.
Integrating with the TX kernel got complicated. The actual implementation of transaction _handling_ (as compared to the rest of the TX kernel structure) has been moved to issue: https://gitlab.com/tezos/tezos/-/issues/3698
This ticket is about porting the structure of the TX kernel to the EVM kernel including data types and splitting and/or amending those data types where needed for EVM transaction handling. Transaction verification of TX kernel transactions is complicated by interleaved EVM transactions and moved to the issue linked above.
| 3 |
3,836,952 | 113,283,576 |
2022-08-15 10:47:53.835
|
SCORU: Add tezt test that runs a large number of evaluation ticks
|
(Disabled by default.)
| 1 |