| idproject | issuekey | created | title | description | storypoints |
---|---|---|---|---|---|
7,128,869 | 93,710,705 |
2021-09-15 11:49:56.065
|
Add database integration tests for the Management API
|
* Add database integration tests for orders and access requests
* Do not extract domain from the database layer
This preparation is needed for introducing the subject serialnumber in the Management API.
| 5 |
7,128,869 | 93,710,618 |
2021-09-15 11:48:34.978
|
Management UI should display subject serialnumber
|
**As a** system administrator
<br>**I want** to see my own and other organizations' Subject serialnumbers
<br>**so** every organization can be identified using its OIN
**:desktop_computer: How to demonstrate / e2e test(s):**
- When an OIN is present, it should be displayed
- When no OIN is present, an empty value should be displayed
**:white\_check\_mark: Acceptance criteria**
- [ ] Display the subject serialnumber of your own organization (to be designed by @petervermeulen)
- [ ] Display the subject serialnumber in directory view (to be designed by @petervermeulen)
- [ ] Subject serialnumber should be present in the MobX models
**:link: Reference links**
1.

| 2 |
7,128,869 | 93,710,455 |
2021-09-15 11:46:04.443
|
The directory should display subject serialnumber when available
|
**As a** user
<br>**I want** to be able to find the OIN of every organization in the public directory
<br>**so** I can verify the organization
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
- [ ] Display OIN in the public directory UI on the service detail pane
Design, see: https://gitlab.com/commonground/nlx/nlx/-/issues/1366
**:link: Reference links**
1.
| 2 |
7,128,869 | 93,710,404 |
2021-09-15 11:44:47.213
|
Certportal should add Subject serialnumber to certificates
|
**As a** user of the demo directory
<br>**I want** the certificate I request to contain a _Subject serialnumber_
<br>**so** I can use the certificate for my NLX components
**:desktop_computer: How to demonstrate / e2e test(s):**
1. Open the demo directory
1. Request a certificate
1. Download the generated CRT
1. Verify there is a subject serialnumber present using `openssl x509 -in org.crt -text`
**:white\_check\_mark: Acceptance criteria**
- [ ] Certportal should add a subject serial number in the `subject.serialnumber` field of the certificate (see the sketch below)
- [ ] The subject serial number may not exceed a length of 20 characters
- [ ] The subject serial number must be unique within every NLX environment
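A minimal Go sketch of the verification step, complementing the `openssl` command above; the file name is illustrative and the real Certportal code will differ:
```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Read and parse the PEM-encoded certificate (file name assumed).
	data, err := os.ReadFile("org.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Subject.SerialNumber corresponds to the subject.serialnumber attribute.
	fmt.Println("subject serialnumber:", cert.Subject.SerialNumber)
	if len(cert.Subject.SerialNumber) > 20 {
		fmt.Println("warning: exceeds the 20-character limit")
	}
}
```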
**:link: Reference links**
1.
| 3 |
7,128,869 | 93,546,001 |
2021-09-13 08:12:17.980
|
Register organization inway
|
**As an** NLX developer
<br>**I want** the organization inway setting in the directory to recover itself when the directory database has been reset
<br>**so** that I have a self-healing and more robust system
**:desktop_computer: How to demonstrate / e2e test(s):**
1. Configure organization inway for "organization a"
2. Reset the directory database
3. Send an access request to "organization a"
4. "organization a" should receive this access request
**TODO**
1. [ ] ManagementAPI: GetOrganizationInway should return the organization inway flag
1. [ ] Inway: send "is organization inway" flag when calling RegisterInway
1. [ ] DirectoryRegistrationAPI: RegisterInway parses the organization inway flag and updates the database
**:white\_check\_mark: Acceptance criteria**
1. Inway tells the directory that it should be used as management API
**:link: Reference links**
1.
| 5 |
7,128,869 | 92,172,321 |
2021-08-18 07:45:52.865
|
Change an existing order
|
**As a** maintainer
<br>**I want** to change an existing order
<br>**so** maintaining orders is flexible and easy
The following fields should be editable
1. Order description
1. Public key
1. Start/end date
1. Services
**:white\_check\_mark: Acceptance criteria**
1. Add entry to auditlog
1. Add edit button to order detail view in the same way as service detail view (see screenshot)
1. Open order form
1. A changed order must result in one order on both ends again
**:link: Reference links**

| 5 |
7,128,869 | 92,112,806 |
2021-08-17 11:57:51.787
|
Keep track of Inway activity
|
**As a** NLX maintainer
<br>**I want** to be able to know when an Inway was last active
<br>**so** we can prepare for automatically removing inactive Inways
**:white\_check\_mark: Acceptance criteria**
1. When registering an Inway, the `created_at` and `updated_at` fields must be added (to know when an inway is created and when its address has been updated)
1. Process is replaced with CTX
1. Database logic is moved to `adapters`
| 5 |
7,128,869 | 92,033,045 |
2021-08-16 11:27:16.084
|
Handle service deletion
|
#### As an owner of a service:
**Current behavior**
A service with access requests (active or revoked) cannot be deleted.
**Expected behavior**
* If there are no active access requests, the service can be deleted
* If there are active access requests, the user should be informed that there are still active access requests and must confirm that he/she really wants to delete the service.
* If there are active orders containing the service, the user should be informed that there are still active orders and must confirm that he/she really wants to delete the service.
* If there are both active orders and active access requests, the user should be informed of both and must confirm that he/she really wants to delete the service.
#### As the owner of an outgoing order containing services owned by another organization
No actions required
#### As the owner of an incoming order containing services owned by another organization
No actions required
**:white\_check\_mark: Acceptance criteria**
1. A service can be deleted even if there are active access requests and/or orders
1. The front-end fetches the latest connected access requests/orders before deleting
1. Check whether an access request is actually active and not, for example, revoked
1. Show a confirmation dialog when there are still active access requests/orders before deleting
1. Remove service, access requests and order_service entries from database
1. If scheduler of the management API detects that the service of an access grant has been deleted, the access grant should be deleted from the database.
| 8 |
7,128,869 | 90,536,924 |
2021-07-20 06:31:01.382
|
Finish the NLX installation journey documentation
|
**Acceptance criteria**
- [ ] walk through the guide further and extend it
- [ ] all TODOs have been resolved
We can offer the Parkeerrechten API as a test API. The Docker image: `nlxio/parkeerrechten-api` ([Parkeerrechten API](https://gitlab.com/commonground/nlx/demo/-/tree/master/parkeerrechten-api)).
**E-mail sent on 20 July by @evangelderen with an invitation for a meeting on 10 August:**
> 10 August is the day the guide must be finished. First in rudimentary form (so without nice prose around it) in the collaborative MD file: https://hackmd.io/26na7EFER4yHJTfX6gQdEA?view
>
> Once it is good, we'll drop it into the docs site.
>
> Niels: you'll be just back from holiday then, so you only have 3 days left for this, and together with Ruben we'll decide what to pick up.
> Ronald: you're going to be very happy with the steps we took today, 20 July, so absolutely go ahead and finish it… but… again… rudimentary… the commands and a short explanation are what matter.
> Henk: this is all new to you, but if all went well you're now in your second week and you've probably spent the first week trying and testing locally based on this guide. Together with Ronald you can decide just fine what to pick up in the first week.
| 13 |
7,128,869 | 90,491,494 |
2021-07-19 12:03:20.477
|
Add orders functionality to demo setup
|
**As a** presenter of NLX
<br>**I want** to be able to demonstrate orders
<br>**so** potential users of NLX can easily see how orders work
**Prerequisites**
- make sure public key of every organization can be copied (#917)
**:white\_check\_mark: Acceptance criteria**
- The application should not load the parkeerrechten initially, since we don't know which Claim to use.
Add a 'Parkeerrechten ophalen' (fetch parking rights) button with the option to provide the Claim for every organization.
- The header containing the reference number of the order is `X-NLX-Request-Claim`
- The curl command is something like: `curl my-outway.nl/org/service -H "X-NLX-Request-Claim: <order-number>"` (see the sketch below)
- The UI of Parkeervergunning BV (admin) must have a text input for the order reference
- If the reference is left empty, no Claim header should be added to the request
- Describe test flows for the orders functionality (on ACC)
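A minimal Go sketch of such a delegated request; the outway address and order number are placeholders, not actual values:
```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Placeholder outway address and service path.
	req, err := http.NewRequest(http.MethodGet, "http://my-outway.nl/org/service", nil)
	if err != nil {
		log.Fatal(err)
	}
	// The order reference travels in the X-NLX-Request-Claim header;
	// leaving it out means no Claim is attached to the request.
	req.Header.Set("X-NLX-Request-Claim", "order-123")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Status, string(body))
}
```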
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:link: Reference links**
1.
| 5 |
7,128,869 | 90,372,140 |
2021-07-16 06:57:54.613
|
Enable GitLab pipeline for MRs from forks
|
An MR from a fork cannot be merged because its pipeline does not start: https://gitlab.com/commonground/nlx/nlx/-/merge_requests/2785
TODO: check with the Haven team whether this has security implications (would it be possible to alter environment settings via those runners?) or performance implications
| 1 |
7,128,869 | 88,682,570 |
2021-06-14 14:13:19.605
|
Provider can see its orders
|
**As an** admin of NLX Management of a [Delegatee]
<br>**I want** to see my orders
<br>**so** I have the same insight as my [Delegator(s)]
**:desktop_computer: How to demonstrate / e2e test(s):**
1. Open NLX Management as a Delegator
1. Add a new order
1. Open NLX Management as the designated Delegatee
1. See that the new order is missing
1. Click on the 'fetch order' button
- When clicking the button, the management api will check all organizations for new orders for me
1. Go to the 'Opdrachten' list view and see the newly added orders
**:white\_check\_mark: Acceptance criteria**
1. The delegatee will only fetch its own orders
1. It should be possible to retrieve the orders when directory is down
- Do this similarly to how we do it in the outway
1. Sort the same as the delegator's overview: by 'valid until', descending
**Design**
See below in comments
| 5 |
7,128,869 | 88,437,818 |
2021-06-09 08:03:41.190
|
Add values.schema.json to our helm charts
|
**As a** developer
<br>**I want** to be able to validate my helm values
<br>**so** that I can catch mistakes before deploying
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. A values.schema.json file is added to all our public charts (nlx-inway, nlx-outway, nlx-management); a sketch follows this list
1. Validate using `helm lint` (https://helm.sh/docs/topics/charts/#schema-files)
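A minimal sketch of what such a schema could look like; the property names are illustrative, not the actual chart values:
```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "postgresql": {
      "type": "object",
      "properties": {
        "hostname": { "type": "string" },
        "port": { "type": "integer", "minimum": 1, "maximum": 65535 }
      },
      "required": ["hostname"]
    }
  }
}
```
With such a file in place, `helm lint` validates the supplied values against the schema.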
**Notes**
- Reply from Conduction on the tooling they use that requires these value files:
> "Wij gebruiken zelf Dashkube, maar bijvoorbeeld Rancher, Red kuber, Open shift, Azure dashbashboard en bitnami ondersteunen dit volgens mij ook"
**:link: Reference links**
1. https://austindewey.com/2020/06/13/helm-tricks-input-validation-with-values-schema-json/
| 5 |
7,128,869 | 88,391,680 |
2021-06-08 12:56:45.226
|
Remove horizontal lines on log in screen (minor usability issue)
|
This issue has come up multiple times now, so we had better fix it. I fell victim to it myself last week, which says enough ;)
The horizontal lines make the labels look like inputs, but they are purely decorative. Removing them resolves the issue.

| 1 |
7,128,869 | 88,322,508 |
2021-06-07 12:31:01.929
|
Enable basic access authentication for NLX management
|
**As a** new interested developer
<br>**I want** to be able to use basic access authentication in NLX management
<br>**so** I do not need an OpenID Connect provider to get up and running
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. Able to choose during installation between basic auth and OIDC (the default setting)
1. Change the docs to match the new option
1. The login page will be included in NLX Management
1. Remove DEX from NLX Try me
1. Local development will use basic auth; all other environments will use OIDC
1. Edward must install NLX on his laptop
**:link: Reference links**
1.
| 8 |
7,128,869 | 87,973,227 |
2021-05-31 12:21:39.788
|
Simplify NLX Try me setup
|
**As a** developer
<br>**I want** to use a simplified NLX setup
<br>**so** I can easily experiment with all NLX functionality
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. Replace using the Directory from the `demo` environment with a local directory
- custom CA
- directory registration API
- directory inspection API
- directory monitor
2. Running `docker compose up` should be sufficient to run the complete setup
3. Update the [Try me](https://docs.nlx.io/try-nlx/introduction) documentation section to reflect these changes
4. Move 'Retrieve a demo certificate' to another section of the documentation
**:link: Reference links**
1.
| 5 |
7,128,869 | 87,972,507 |
2021-05-31 12:11:26.602
|
Automatically test NLX Try me setup
|
**As a** developer
<br>**I want** to use a simplified NLX setup
<br>**so** I can easily experiment with all NLX functionality
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. Enable setting an organization inway via the NLX ctl
1. Add the `docker compose up` command to the CI, perform an HTTP request through the outway to the inway, and verify that the response is correct.
1. Trigger the pipeline from the NLX repository for every commit on the master branch. As we do with the '[Trigger packaging build](https://gitlab.com/commonground/nlx/nlx/-/pipelines/300996356)' job.
1. Post message in slack if the pipeline for 'Try NLX' fails. The same kind of message as we have for a failed build on the `master` branch of NLX.
**:link: Reference links**
1.
| 5 |
7,128,869 | 87,971,841 |
2021-05-31 11:59:13.438
|
Enable transaction logs for the Try NLX setup
|
**As a** developer
<br>**I want** the transaction log to be enabled by default when following the docs
<br>**so** I can experiment with the transaction logs functionality
**:desktop_computer: How to demonstrate / e2e test(s):**
1. When setting up Try NLX from the docs, transaction logs should be inserted in the txlog-db
**:white\_check\_mark: Acceptance criteria**
- Transaction logs are enabled by default for the [Try NLX](https://gitlab.com/commonground/nlx/nlx-try-me) Docker Compose file
- Document how the transaction logs can be viewed
**:link: Reference links**
1.
| 3 |
7,128,869 | 86,938,038 |
2021-05-12 12:19:00.443
|
Update documentation
|
Remove section 'Consume an API'. It needs to be rewritten later on, using the Management UI and access requests.
(otherwise we would need to accept access requests for everyone)
| 1 |
7,128,869 | 86,935,322 |
2021-05-12 11:36:00.254
|
Bugfixes and remaining tasks for the 'Verwerkingenlogging'
|
See the Verwerkingenlogging board.
| 3 |
7,128,869 | 86,852,485 |
2021-05-11 11:19:14.373
|
Remove the parkeervergunningen application
|
**NLX repo**
- [x] Helm charts
- [x] insight-ui
- [x] insight-api
- [x] insight settings page in management ui & management api (remove support for both the `irmaServerURL` and `insightURL` properties)
- [x] references to the insight app should be removed from the docs
- [ ] `demo.nlx.io` should redirect to `directory.demo.nlx.io` (to be done through the ingress)
- [x] remove DNS records for `application.*.*`
- [x] remove DNS records for insight
- [ ] remove DNS records for irma
**[Demo repo](https://gitlab.com/commonground/nlx/demo)**
- [x] overview-ui (https://gitlab.com/commonground/nlx/demo/-/merge_requests/44)
**Others**
- [x] https://gitlab.com/commonground/nlx/irma (archived)
- [x] remove Parkeervergunningen from Componentencatalogus
- [ ] Remove related Docker images from the GitLab & Docker hub registries
| 3 |
7,128,869 | 86,765,943 |
2021-05-10 08:44:35.554
|
Use parkeerrechten apps for gemeente Stijns on ACC
|
Part of #1206
Use the 'toon parkeerrechten' app together with the 'parkeerrechten API' in the acceptance environment for gemeente Stijns.
**Technical notes**
- we will not add the demo applications to the Review Apps (this would make Review Apps too big without much added benefit)
- use the current NLX 'acc-environment' to deploy the demo applications (we will move these to the demo-environment later on, once we've finished setting up all apps, to shorten the feedback loop)
- the Helm-charts for the 'demo-environment' will be hosted in the NLX repository (`/helm` directory) (this is easier and allows us to focus on making NLX easier to install)
- add support for PostgreSQL to the `parkeervergunningen-api` and use PostgreSQL for the ACC environment

| 3 |
7,128,869 | 86,516,238 |
2021-05-05 11:56:35.722
|
Create NLX deployment chart (which includes all sub-charts)
|
**As a** developer
<br>**I want** to install NLX using a single command
<br>**so** I feel empowered
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. Enable installing NLX using a single Helm chart
1. Add support for the following "modes"
- Provide a service
- Use a service
1. Document the following scenarios on docs.nlx.io:
- Provide a service
- Use a service
**Tasks**
- [x] Support optional CertManager config by default
- [ ] Automatically run migrations
- [x] Add support for adding a super-user
**:link: Reference links**
1.
| 8 |
7,128,869 | 86,441,199 |
2021-05-04 09:44:09.882
|
Fix inconsistent Helm charts
|
**As an** organization attempting to set up NLX using Kubernetes
<br>**I want** properly documented Helm charts
<br>**so** I can install NLX without asking for help
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. Enable specifying a custom port and database name for the postgres connection
(Make postgresql settings configurable in Helm charts)
* Management API/UI
1. An inway should be externally available by default
(Make `serviceType` and related config configurable for Inway and document the different values / implications)
1. Review the ingress for the Inway (is this outdated and should it be removed for example?)
1. Make a clear distinction between the internal PKI and organization certs (if we don't succeed in issuing the internal PKI certificates using CertManager)
1. Make clear which fields are required in the Helm charts README
**:link: Reference links**
1.
| 5 |
7,128,869 | 86,134,389 |
2021-04-28 12:25:42.017
|
Create demo app to create a parkeerrecht
|
Depends on #1221 (copy app)
* [x] shows list of parkeerrechten (after clicking button)
* shows a form to add a parkeerrecht
* [x] choose gemeente (list of gemeentes is passed through environment variables)
* [x] enter kenteken, which fetches data from `basisregister-fictieve-kentekens` and `-personen`
* [x] show car model and personal data. Then either:
* if all OK: enable the submit button
* if not in the selected gemeente: show a warning
* [x] include application in deployment to ACC for the Vergunningsoftware BV
| 3 |
7,128,869 | 80,161,533 |
2021-03-03 11:02:47.157
|
Create E2E test for access request flow
|
Currently some parts of the database layer are not being tested, as we mock the database in our unit tests. We do want to test this, but it's hard to create end-to-end tests since they require two Management APIs, access requests, etc.
We need e2e tests for:
1. Create a failing request to a service without access
2. Create an access request
3. Accept access request
4. Make the same request as in step 1 and watch it succeed
| 2 |
7,128,869 | 80,069,538 |
2021-03-02 09:24:10.921
|
Show overview of created delegations
|
**As a** municipality
<br>**I want** to have an overview of the delegations I have created
<br>**so** that I know which services can be accessed on my behalf by other organizations
**:desktop_computer: How to demonstrate / e2e test(s):**
1. As municipality, open NLX management and navigate to "Opdrachten"
1. If there are no delegations an empty state is shown
1. If there are delegations they are shown in the overview
1. Click on a delegation to open the delegation detail view in drawer
1. Click on "Toon verlopen opdrachten" to show expired delegations
**:white\_check\_mark: Acceptance criteria**
1. Do not implement the status column
1. Add a column for `validUntil`, with 'valid until' as its header
1. There is no visual difference between active and expired orders
1. The table is sorted on `validUntil`, descending
**Technical implementation**





**:link: Reference links**
1.
| 5 |
7,128,869 | 80,063,691 |
2021-03-02 08:16:10.781
|
Revoke access delegation
|
**As a** _user_
<br>**I want** _goal_
<br>**so** _purpose_
- an order can only be revoked by one party
- a revoked order looks slightly different in the overview (Peter)
- revocation happens locally, immediately
- via the scheduler it propagates to the other side
Scenario 1: the delegator revokes an active order
- the delegatee can thus still access the data based on that order for at most the duration of its valid claim
Scenario 2: the delegatee revokes an active order
- the outway will check whether the order belonging to the reference number and organization name in the client's call is still valid
- if there is no active order, no outgoing request is made
- the delegatee can thus still access the data based on that order for at most the duration of its valid claim
Scenario 3: the delegator or delegatee deletes an inactive order
- this is a local-only action; do not sync
- weight is 3 (just in case we want to create a separate story for this)
**Updated as part of the discussion on Wednesday the 25th**:
If we implement revoking an order as delegatee, we would introduce a pulling mechanism. This would require the organization to have an inway, even if it doesn't provide any service.
We doubt whether we should require an inway for organizations that do not provide a service. This must be discussed first.
Conclusion: revoking incoming orders should not be possible; that's out of scope for now


**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. update the sequence diagram (as seen in the epic)
**:link: Reference links**
| 13 |
7,128,869 | 80,061,764 |
2021-03-02 07:45:10.718
|
Grant delegation to a supplier
|
**As a** municipality
<br>**I want** to be able to allow suppliers to access services on my behalf
<br>**so** that I do not need to share my private key
**:desktop_computer: How to demonstrate / e2e test(s):**
1. Make a request as Saas Party X to kentekenregister of RDW and watch it fail
1. Log in to NLX management of Gemeente Haarlem
1. Create delegation for Saas Party X to access the service kentekenenregister on behalf of Gemeente Haarlem
1. Make a request as Saas Party X to kentekenregister of RDW with the delegation header and watch it succeed
### Checks we perform when requesting a claim for an order:
All checks below must succeed; otherwise, no claim is given (see the sketch after this list).
* *Reference*: is there an order for the provided order reference?
* *Public key*: is the public key of the order equal to the public key of the organization requesting the claim?
* *Date*: is the end date of the order in the future?
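A minimal Go sketch of these three checks, with assumed types; the real Management API code differs:
```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Order is an assumed shape for illustration only.
type Order struct {
	Reference  string
	PublicKey  string
	ValidUntil time.Time
}

var ErrClaimDenied = errors.New("claim denied")

// validateClaimRequest applies the three checks: an order must exist for
// the reference, the requester's public key must match, and the order's
// end date must be in the future.
func validateClaimRequest(orders map[string]Order, reference, requesterPublicKey string, now time.Time) error {
	order, ok := orders[reference]
	if !ok {
		return fmt.Errorf("%w: no order for reference %q", ErrClaimDenied, reference)
	}
	if order.PublicKey != requesterPublicKey {
		return fmt.Errorf("%w: public key mismatch", ErrClaimDenied)
	}
	if !order.ValidUntil.After(now) {
		return fmt.Errorf("%w: order expired", ErrClaimDenied)
	}
	return nil
}

func main() {
	orders := map[string]Order{
		"order-123": {Reference: "order-123", PublicKey: "pem...", ValidUntil: time.Now().Add(24 * time.Hour)},
	}
	fmt.Println(validateClaimRequest(orders, "order-123", "pem...", time.Now())) // <nil>
}
```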
**:white\_check\_mark: Acceptance criteria**
1. As a municipality, I want to be able to add a new delegation for a SaaS Party
1. The fields required for a new order are:
- Reference (text field max length 100, unique value)
- Description (max. length 100)
- Public key (server-side validation that it's a valid public key), text field
- Name of the organization you want to delegate access to (URL-friendly, see [validation in our code](https://gitlab.com/commonground/nlx/nlx/-/blob/4c7f0be8b2c9c980351b6202fbd2106bf4acdab0/directory-registration-api/pkg/database/inway.go#L16)), text field
- Valid from (valid date)
- Valid until (valid date)
- Services (only services for which the municipality has an access grant)
1. Management API of the Municipality should return the real created delegation (instead of the mock, built as part of #1179)
1. Make sure the relevant audit log records are implemented
1. Add description of delegations to docs.nlx.io
**Tasks**
1. [x] Implement add delegation form in UI
1. [x] Add endpoint to add order to management api (for municipality)
1. [x] Update `RequestClaim` endpoint to check if there's an order for the delegatee with the provided order reference
1. [x] `ExpiredAt` of claim should not exceed the `validUntil` property of the order
1. [x] Claim should expire after 4 hours
**Questions**
- What should the list of services look like? Proposal: `<organization-name> - <service-name>`. Comment from Peter: good proposal
- Should 'Valid until' be required? Yes
- Should we add 'Doelbinding'? Is 'Opdrachtomschrijving' the same as 'Doelbinding'? We will not add 'Doelbinding'; 'Opdrachtomschrijving' and 'Doelbinding' are not the same
- OPEN @ehotting : do we need to add a disclaimer that the opdracht is not a legal document?

Audit log records:<br>

**:link: Reference links**
1.
| 13 |
7,128,869 | 79,990,593 |
2021-03-01 09:47:52.970
|
Access services as a supplier
|
**As a** supplier
<br>**I want** to access services using a mandate given by a municipality
<br>**so** the municipality does not have to share its private key with me
**:desktop_computer: How to demonstrate / e2e test(s):**
1. Send a request without delegation header and watch it fail
1. Saas Party X can access RDW on behalf of Gemeente Haarlem by sending a request with delegation header
1. The transaction log of RDW should show that the request was made on behalf of Gemeente Haarlem
**:white\_check\_mark: Acceptance criteria**
1. A supplier can access a service on behalf of an organization
1. The transaction log record made for a request should include that it was a delegation
1. Delegated access should be granted on the organization certificate of the supplier
1. A delegated request should be verifiable without making additional API requests.
1. The creation of a claim must be synchronous
Tasks:
1. [x] Create the gRPC endpoint to retrieve a claim from municipality
1. [x] Store the public key with the incoming access request
1. [x] Parse delegation header in the outway (@excol-org)
1. [x] Check if claim exists in the outway (@excol-org)
1. [x] Create the gRPC endpoint to retrieve claim outway-> management api (@ndequeker)
1. [x] Verify claim in the inway using the public key stored in the incoming access request (!2346) (@RonaldKoster @ndequeker )
1. [x] Store delegation data in the transaction log of outway (@excol-org)
1. [x] Store delegation data in the transaction log of inway (@RonaldKoster @ndequeker)
1. [x] Add Saas Party X to Helm charts (!2332) (@RonaldKoster)
1. [x] Prepare demo setup to send a delegated request using Saas Party X, gemeente Haarlem and RDW
1. [x] Outway should fail to start if no management-api is specified.
1. [ ] Optional: add E2E tests for the demo scenario, using Cypress (see https://gitlab.com/commonground/nlx/nlx/-/issues/1199)
**Technical implementation**
1. The management API of the municipality will always return a valid claim. We will merge this with master but we won't release it. In case we need to make a release because of a bug we will add a feature flag to disable it.
**:link: Reference links**
Sequence diagram, see https://gitlab.com/groups/commonground/nlx/-/epics/14
| 13 |
7,128,869 | 79,979,444 |
2021-03-01 08:01:01.157
|
Remove ability to configure inway with config file
|
**As an** NLX maintainer
<br>**I want** to remove the ability to configure the inway with a .toml file
<br>**so** that NLX users are forced to embrace the future (NLX management) and NLX becomes easier to maintain
Breaking changes:
1. Authorization mode is replaced by access requests
1. ca-cert-path can no longer be configured
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. Improve code quality of the `inway`
* remove `common/process.go` (replace with context?) (@excol-org @RonaldKoster )
* use plugin system from outway (move to `common`?) (@excol-org)
1. Update demo organization BRP so that it uses NLX management (@RonaldKoster)
1. Update the docs !2380 (@ndequeker)
1. Communicate this change together with @evangelderen
1. Remove all code needed for the .toml file
**:link: Reference links**
1.
| 8 |
7,128,869 | 79,777,109 |
2021-02-25 09:49:52.474
|
GRPC Gateway v1.x -> v2.x
|
As part of #1158, we've discovered that the [GRPC Gateway has released v2](https://github.com/grpc-ecosystem/grpc-gateway/releases/tag/v2.0.0) which is a new major version. We need to upgrade the compilation steps (from our proto-files to `*.pb.go`, `pb.gw.go` & `.swagger.json`).
We currently have five proto-files:
1. directory-inspection-api/inspectionapi/inspectionapi.proto
1. directory-inspection-api/stats/stats.proto
1. directory-registration-api/registrationapi/registrationapi.proto
1. management-api/api/management.proto
1. management-api/api/external/external.proto
We've tried to gradually move from v1 -> v2 using [Earthly](https://docs.earthly.dev), but did not succeed in a timely manner.
We can make another attempt, or transition all our proto-files to v2 at once.
| 3 |
7,128,869 | 79,276,948 |
2021-02-17 14:43:53.799
|
Create a helm repository for the NLX helm charts
|
**As a** developer
<br>**I want** to download the NLX helm charts from a helm repository
<br>**so** that I can install/update the charts using the helm cli
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. There is a helm repository (https://helm.sh/docs/topics/chart_repository/)
**Todo**
- [ ] Replace the repository in the READMEs
**:link: Reference links**
1.
| 3 |
7,128,869 | 78,404,665 |
2021-02-04 07:48:05.797
|
Pagination for Services page
|
**As a** user
<br>**I want** to have a paginated list of the organizations of my service
<br>**so** my browser does not crash with a large list of services
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. Add pagination for the Services page (max. 20 items per page)
1. Polling statistics for the services should only include statistics for the current page
1. Re-use the pagination component(s) from the DON project
**:link: Reference links**
1.
| 3 |
7,128,869 | 78,288,517 |
2021-02-02 15:12:00.479
|
Show pricing data of the service
|
**As a** user interested in a service
<br>**I want** to see the costs when this service is not free to use
<br>**so** I know I will receive an invoice
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. NLX Management directory should show detailed pricing info on the detail view
1. The 'request access modal' should indicate that you're requesting access to a paid service
**:link: Reference links**
1.


| 5 |
7,128,869 | 78,288,488 |
2021-02-02 15:11:44.215
|
Add pricing information to services
|
**As a** _user_
<br>**I want** _goal_
<br>**so** _purpose_
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. When submitting a new service, the following fields are required:
- administration fee (for setting up the contract and access)
- subscription fee (per month)
- transaction fee (cost per use)
1. Pricing fields are set to '0' by default
1. Service detail pane should display the pricing fields
**:link: Reference links**
1.


| 3 |
7,128,869 | 78,287,468 |
2021-02-02 15:03:39.044
|
Export billing info
|
**As a** purchase / ordering employee
<br>**I want** to download all financial information about the use of my services
<br>**so** I can create invoices for my customers
**:desktop_computer: How to demonstrate / e2e test(s):**
1. go to the 'Activity' page
1. click on the download button
1. open file with Numbers on my shiny Mac
**:white\_check\_mark: Acceptance criteria**
- Filename: YYYYMMDD-activiteit-per-organisatie.csv
- see csv drawing below for columns
Note: see warning below for actual info
- Group by: service -> organisation -> month
- Documentation about 'Enable transaction logs' should include instructions on how to specify the log database URL to NLX Management
- Only include transactions for your inway in the report
- When unable to connect to the DB -> export should be an empty CSV
**:link: Reference information**

> source: https://excalidraw.com/#json=4645701698977792,v1uHInLjAMIyq8Q48_coXQ
:warning: The above rows are not completely accurate anymore. For each organization/service it should list:
- 1 line for the initial setup cost
- 1 line per month if there is a monthly fee
- 1 line per month for the number of requests

State where there is no transaction log configured:<br>

| 5 |
7,128,869 | 77,877,271 |
2021-01-26 16:06:46.260
|
Same name for API properties
|
See `management.proto`:
`message ServiceStatistics` has `incomingAccessRequestCount`,
but a few lines below
`message Service` has `incomingAccessRequestsCount`.
It should be singular: `incomingAccessRequestCount`.
This needs to be changed in both the back end and the front end.
| 1 |
7,128,869 | 77,545,391 |
2021-01-20 13:40:55.546
|
Document NLX in production
|
**As a** developer
<br>**I want** to know how to set up NLX in my production environment
<br>**so** that I can run NLX in production
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. Create an 'NLX in production' section on docs.nlx.io
1. Make the distinction between Try NLX and NLX in production as clear as possible
1. Improve 'Request a production certificate': clearly describe and number all steps needed to get your production certificate.
1. Add a description on how to generate the internal PKI certificates
1. Add drawing of connections that use internal PKI and PKI-O created by @hugoboos
**:link: Reference links**
1.
| 8 |
7,128,869 | 77,544,853 |
2021-01-20 13:33:17.248
|
nlx.io - investigate toolset
|
Determine what toolset we need to build the new site containing:
* nlx website
* docs
* how to implement search without docusaurus
* directory
Outcome:
- describe the toolset (below)
---
**Research and result**
In Epic: https://gitlab.com/groups/commonground/nlx/-/epics/19
| 8 |
7,128,869 | 77,544,039 |
2021-01-20 13:21:28.432
|
Document helm charts
|
**As a** developer
<br>**I want** to know how I can use the NLX helm charts
<br>**so** that I am able to install NLX on my Kubernetes cluster
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. Add a readme to each chart describing the possible values. (https://github.com/jetstack/cert-manager/blob/master/deploy/charts/cert-manager/README.template.md)
1. Make the charts available as a helm repository (https://helm.sh/docs/topics/chart_repository/)
**:link: Reference links**
1.
| 5 |
7,128,869 | 76,140,675 |
2020-12-16 13:45:36.924
|
Validate claims in authorization middleware
|
Currently, in the `OnlyAuthenticated` middleware (`management-api/pkg/oidc/authenticator.go:106`) we only validate whether the `user` session is set and whether it contains a valid user object. This is not sufficient for secure authorization.
**Acceptance criteria**
1. We should validate the claims and at least perform these checks:
- Is the `nbf` (Not Before) in the future?
- Is the `exp` (Expiry Date) in the past?
These checks can be performed statelessly (see the sketch below).
2. Save refresh token on auth. Use refresh token after current access token has expired.
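A minimal sketch of the stateless checks, assuming claims carrying Unix timestamps; the actual middleware in `management-api/pkg/oidc/authenticator.go` will differ:
```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Claims is an assumed shape: nbf and exp as Unix timestamps.
type Claims struct {
	NotBefore int64 // nbf
	ExpiresAt int64 // exp
}

// validateClaims rejects tokens whose nbf lies in the future or whose
// exp lies in the past; no state or network access is needed.
func validateClaims(c Claims, now time.Time) error {
	if c.NotBefore != 0 && now.Unix() < c.NotBefore {
		return errors.New("token not valid yet: nbf is in the future")
	}
	if c.ExpiresAt != 0 && now.Unix() >= c.ExpiresAt {
		return errors.New("token expired: exp is in the past")
	}
	return nil
}

func main() {
	c := Claims{
		NotBefore: time.Now().Add(-time.Minute).Unix(),
		ExpiresAt: time.Now().Add(time.Hour).Unix(),
	}
	fmt.Println(validateClaims(c, time.Now())) // <nil>
}
```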
| 3 |
7,128,869 | 76,140,403 |
2020-12-16 13:39:22.733
|
Add authorisation to Management UI
|
**As a**
<br>**I want** _goal_
<br>**so** _purpose_
**:white\_check\_mark: Acceptance criteria**
1. Store roles for users in the NLX Management database
1. Show an error message when a user exists in OpenID Connect but not in NLX Management
1. In the future we would like to map groups in OpenID Connect to roles in NLX Management
1. There should be at least an admin role.
1. A user can be added using the command line. When adding the user you can specify the ID (email) and role, e.g. `nlx-management-api create-user --username dillen --role admin`
**Acceptance criteria**
1. When the user does not have the 'admin' role authentication should fail
1. When something in the authentication flow fails the user should see what went wrong
**:link: Reference links**
1.
| 8 |
7,128,869 | 75,705,606 |
2020-12-07 12:48:33.593
|
Polling: outgoing access request & access proof on directory detail page
|
**As a** user
<br>**I want** the detail page to update with the latest outgoing access request and access proof data
<br>**so** I know what's going on
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. poll every 3000ms
1. updates outgoing access request & access proof data
1. UI will update to latest state
**Technical implementation**
1. subscribe to polling manager
1. update mobx
| 2 |
7,128,869 | 75,704,771 |
2020-12-07 12:33:04.957
|
Polling: incoming access requests & access grants in service detail view
|
**As a** user
<br>**I want** to be notified when there are new incoming access requests or access grants when I'm looking at the service detail page
<br>**so** that I can see the new data on clicking the blue pill
**:desktop_computer: How to demonstrate / e2e test(s):**
1. open service detail page
1. do an access request in another tab
1. see the blue pill appear
1. click blue pill to update list
**:white\_check\_mark: Acceptance criteria**
1. poll every 3000ms
1. design in epic
1. clicking the blue pill will update the list
**:link: Reference links**
1.
**:unicorn: Designs**
Specs:<br>
`background-color: $colorBrand4; //#36C6FF`<br>
`color: $colorTextInverse; //$colorPaletteGrey900`<br>
`margin-bottom: -$spacing03; // or something else so it sits 8px lower than the content`<br>
Icon:<br>

<br><br>


| 3 |
7,128,869 | 75,703,788 |
2020-12-07 12:14:53.151
|
Polling: incoming access requests on service overview page
|
**As a** user
<br>**I want** incoming access requests to automatically appear when I'm looking at the service overview page
<br>**so** I know about the actual state of incoming access requests
**:desktop_computer: How to demonstrate / e2e test(s):**
1. go to service overview page
1. add an access request from other tab
1. see the UI update
**:white\_check\_mark: Acceptance criteria**
1. poll every 3000ms
1. design in epic
**Technical implementation**
1. Add a new endpoint: `/api/v1/services/stats`
Let's discuss the format of the returned JSON
1. Use the polling manager
1. From React, subscribe to the manager
**Details**
@publicJorn and I discussed that the format should be:
```json
{
  "test-service-1": {
    "incomingAccessRequestCount": 1
  },
  "test-service-2": {
    "incomingAccessRequestCount": 2
  }
}
```
This is neat because then we can add other statistics to the same endpoint in the future.
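A minimal Go sketch of how the endpoint could produce this shape; handler and type names are illustrative:
```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// ServiceStats holds per-service statistics; more fields can be added
// to the same endpoint later without breaking the format.
type ServiceStats struct {
	IncomingAccessRequestCount int `json:"incomingAccessRequestCount"`
}

// statsHandler sketches the /api/v1/services/stats endpoint with
// hardcoded data instead of a database query.
func statsHandler(w http.ResponseWriter, r *http.Request) {
	stats := map[string]ServiceStats{
		"test-service-1": {IncomingAccessRequestCount: 1},
		"test-service-2": {IncomingAccessRequestCount: 2},
	}
	w.Header().Set("Content-Type", "application/json")
	if err := json.NewEncoder(w).Encode(stats); err != nil {
		log.Printf("encoding stats: %v", err)
	}
}

func main() {
	http.HandleFunc("/api/v1/services/stats", statsHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```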
| 3 |
7,128,869 | 75,483,168 |
2020-12-02 12:33:19.129
|
Show warning when removing organization inway
|
**As a** user
<br>**I want** to see a warning when removing the organization inway
<br>**so** I am warned that things will go wrong
**:desktop_computer: How to demonstrate / e2e test(s):**
1. go to settings
1. remove organization inway
**:white\_check\_mark: Acceptance criteria**
1. In settings, when resetting the organization inway (it becomes null), show the warning:
"Bij het verwijderen van de organisatie inway kunnen huidige toegangsverzoeken niet meer worden afgehandeld en kan je geen nieuwe meer ontvangen" ("When removing the organization inway, current access requests can no longer be handled and you can no longer receive new ones")
| 1 |
7,128,869 | 74,956,580 |
2020-11-25 09:16:14.047
|
Docs: try NLX management config mismatch
|
## Situation
On the page https://docs.nlx.io/try-nlx/management/getting-up-and-running we tell the user to clone the NLX git repository. This results in an untagged version of the `docker-compose.management.yml` file. That file contains Docker images with the tag `latest`.
## Problem
The problem with this is that the settings in `docker-compose.management.yml` can mismatch what the `latest` Docker tag points to (at the moment of cloning and pulling).
Also, the user needs to clone the whole code base just to get a single file to run a demo.
## Proposed solution
Use a dedicated git repository for this demo and versioned Docker tags instead of `latest`.
- [x] Move docker-compose.management.yml to new repo 'nlx-try-me'
- [x] Add DEX config
- [x] Create demo PKI
- [x] Alter 'getting up and running' section of docs.nlx.io
- [x] Make sure it is clear at every step this is just meant to get to know NLX, NOT FOR PRODUCTION
- [x] Add Renovate bot for auto-updating the docker-compose file
- [x] Add a README with, again, clear warnings
| 3 |
7,128,869 | 74,170,889 |
2020-11-10 22:10:55.564
|
Always show request button on overview when I can request access
|
**As a** user
<br>**I want** to see the request button on the overview page
<br>**so** I can request access
**:desktop_computer: How to demonstrate / e2e test(s):**
1.
**:white\_check\_mark: Acceptance criteria**
1. When the access request already has a state, show the button over the text of that state
**:link: Reference links**
1.
| 2 |
7,128,869 | 73,525,271 |
2020-10-28 14:37:09.700
|
Directly expose button
|
Link to original issue: https://gitlab.com/commonground/core/design-system/-/issues/29
| 1 |
734,943 | 33,999,404 |
2020-04-30 13:17:18.566
|
Remove support for disk configuration source
|
In https://gitlab.com/gitlab-org/gitlab/-/issues/217912 we introduced `domain-config-source` configuration flag to allow users to choose which domain source to use between `disk` and `api`.
In %"14.0" we plan to remove support for old disk configuration source.
The purpose of this issue is to remove `domain-source`, some code may be hard to remove in one MR, so this can be extracted to follow-up issues.
Omnibus will need to be updated to remove this flag
If users can't use the disk config source, we can gradually get rid of old code supporting it.
| 2 |
734,943 | 33,344,838 |
2020-04-15 23:30:45.118
|
Tech evaluation: Object storage using presigned URLs
|
This is a follow-up tech evaluation from #355
@ayufan thanks for your [input on slack](https://gitlab.slack.com/archives/C1BSEQ138/p1585668949053900)! (copying it here)
- We need to use pre-signed URLs from GitLab, that way we don’t need any credentials on Pages, and whether the .zip is used can be controlled by Rails exclusively, the link would have an encoded and Rails controlled expiry date
- If serving from .zip I think we need to likely define the maximum archive size that we can support, likely filtering the relevant files (public/ only folder), and holding that somewhere in memory. I would assume that we could likely configure how many files-in-archives/archives we cache and allow this to be configured and optimised towards cache-hit-ratio, likely GitLab.com would allow to use a ton of memory if needed
- I would likely break the support for Content-Range if serving files as I don’t think that this is cheaply possible with .zip
- GitLab Workhorse does have OpenArchive that supports local and remote archive just it is not performance optimised: the HTTP requests are badly aligned and this will likely need to be somehow improved, so just copy-pasting will not give a great performance yet
---
- [ ] @vshushlin started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/258#note_309921953): (+7 comments)
> Oh, I thought that https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/136/ has some object storage implementation while it only has serving from zip files from disc :see_no_evil:
>
> I have a very simple idea for alternative PoC:
>
> 1. We can copy(and maybe slightly modify) https://gitlab.com/gitlab-org/gitlab/-/blob/84c0ffe12646b9bae1fdf2e576cde7f01f8ded73/lib/api/job_artifacts.rb#L75-94 to pages API https://gitlab.com/gitlab-org/gitlab/-/blob/75f8d42bb443d0a6101a9c2f6b65c607cd95efd4/lib/api/internal/pages.rb#L19
> 1. Then we can return the `job_id` in the API
> 1. Pages will get specific file and proxy it to user.
> 1. Later we can add cache for it.
>
> Alternatively, we can get the whole artifacts zip archive and use the current zip reading code, then we'll need to cache those files.
>
> I don't think that adding object-storage-specific code to Pages is a good idea. We already have it in the workhorse, we can just use the API. It's slower, but much simpler.
### Diagram/proposal

```mermaid
sequenceDiagram
    participant U as User
    participant P as gitlab-pages
    participant G as gitlab-workhorse and rails
    participant OS as Object Storage
    U->>P: 1. username.gitlab.io/index.html
    P->>G: 2. GET /api/v4/internal/pages?host=username.gitlab.io
    G->>P: 3. {... lookup_paths: [{source: {type: "zip", path: "presignedURL"}],...}
    loop zipartifacts
        P->>P: 4. reader := OpenArchive(presignedURL)
        P->>OS: 5. GET presignedURL
        OS->>P: 6. .zip file
        P->>P: 7. reader.find(public/index.html)
        P->>P: 8. go func() { cache({host, reader}) }()
    end
    P->>U: 9. username.gitlab.io/index.html
```
### Proposal
In this PoC we will hardcode the returning value from `/api/v4/internal/pages` to reduce the scope. I will use minio which is already supported in the GDK. I'll also shamelessly steal and slightly modify the `zipartifacts` package from workhorse.
To address https://gitlab.com/gitlab-org/gitlab-pages/-/issues/377#note_367358348 the source type should be `"zip"` so that Pages can serve from `.zip` regardless of the path (pre-signed URL or disk path).
### Outcomes
We have now https://gitlab.com/groups/gitlab-org/-/epics/3901 and https://gitlab.com/groups/gitlab-org/-/epics/3902 with parent https://gitlab.com/groups/gitlab-org/-/epics/1316 to track all future efforts.
#### Rails
1. Allow deploying Pages as `.zip` archives with a `max_archive_size`. https://gitlab.com/gitlab-org/gitlab/-/issues/208135
1. On deploy ->check size -> store `public.zip` either on disk or in object storage depending on the features enabled. also tracked in https://gitlab.com/gitlab-org/gitlab/-/issues/208135
1. Update `/api/v4/internal/pages` -> return a "source"."type": "zip" with a path https://gitlab.com/gitlab-org/gitlab/-/issues/225840 e.g.
```json
{
  "lookup_paths": [
    {
      "source": {
        "type": "zip",
        "path": "https://presigned.url/public.zip",
        "_": "or a disk path: /shared/pages/domain/project/public.zip"
      }
    }
  ]
}
```
#### Pages (Go)
0. extract the `resolvePath` logic from disk serving into its own package so it can be shared. https://gitlab.com/gitlab-org/gitlab-pages/-/issues/421
1. Add package `zip` with `zip/reader` https://gitlab.com/gitlab-org/gitlab/-/issues/28784
2. Add `zip` serving to Pages - this allows serving from disk or pre-signed URLs from object storage https://gitlab.com/gitlab-org/gitlab/-/issues/28784 (see the sketch after this list)
3. Implement a zip reader caching mechanism https://gitlab.com/gitlab-org/gitlab-pages/-/issues/422
4. Add metrics for zip serving https://gitlab.com/gitlab-org/gitlab-pages/-/issues/423
* while testing I hit https://gitlab.com/gitlab-org/gitlab-pages/-/issues/371 so I think it would be valuable to work on that issue first.
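A minimal Go sketch of step 2, serving a file out of an archive fetched from a pre-signed URL. It buffers the whole archive in memory, whereas the real implementation would enforce a maximum archive size, use range requests, and cache readers:
```go
package main

import (
	"archive/zip"
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

// fetchFromZip downloads the archive behind a pre-signed URL and
// returns the contents of a single file inside it.
func fetchFromZip(presignedURL, name string) ([]byte, error) {
	resp, err := http.Get(presignedURL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	// archive/zip needs a ReaderAt plus the size, hence the buffer.
	r, err := zip.NewReader(bytes.NewReader(data), int64(len(data)))
	if err != nil {
		return nil, err
	}
	for _, f := range r.File {
		if f.Name == name {
			rc, err := f.Open()
			if err != nil {
				return nil, err
			}
			defer rc.Close()
			return io.ReadAll(rc)
		}
	}
	return nil, fmt.Errorf("%s not found in archive", name)
}

func main() {
	content, err := fetchFromZip("https://presigned.url/public.zip", "public/index.html")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", content)
}
```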
| 1 |
734,943 | 32,478,519 |
2020-03-26 08:29:40.970
|
Add metrics for disk serving
|
As part of the work done for #355 we need to add some metrics to measure loading times of serving from disk.
- file size
- time taken to serve
This will give us some visibility on serving files and will allow us to compare serving times when we implement object storage #377
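A sketch of what these metrics could look like using the Prometheus client already used by gitlab-pages; the metric names and buckets are illustrative, not agreed-upon names:
```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

var (
	// DiskServingFileSize observes the size of files served from disk.
	DiskServingFileSize = prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:    "gitlab_pages_disk_serving_file_size_bytes",
		Help:    "Size of files served from disk",
		Buckets: prometheus.ExponentialBuckets(1024, 4, 8),
	})
	// DiskServingTime observes the time taken to serve a file from disk.
	DiskServingTime = prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:    "gitlab_pages_disk_serving_duration_seconds",
		Help:    "Time taken to serve a file from disk",
		Buckets: prometheus.DefBuckets,
	})
)

func init() {
	prometheus.MustRegister(DiskServingFileSize, DiskServingTime)
}
```
The serving code would then call `DiskServingFileSize.Observe(float64(size))` and wrap each request in a `prometheus.NewTimer(DiskServingTime)`.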
| 1 |
734,943 | 31,331,442 |
2020-02-27 09:32:07.557
|
Fix data race in gitlab source cache package
|
The following discussion from !246 should be addressed:
- [ ] @nolith started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/246#note_294638273): (+1 comment)
> Here the race is deep inside the package
>
> we make use of global variables and this test will alter `retrievalTimeout` when a goroutine from the previous one is still running.
>
> This will require extra effort to fix, but I think we should add the race detector now.
>
> ```
> ==================
> WARNING: DATA RACE
> Write at 0x000000e0d4f0 by goroutine 46:
> gitlab.com/gitlab-org/gitlab-pages/internal/source/gitlab/cache.TestResolve.func9()
> /builds/nolith/gitlab-pages/internal/source/gitlab/cache/cache_test.go:247 +0x3e
> testing.tRunner()
> /usr/local/go/src/testing/testing.go:827 +0x162
> Previous read at 0x000000e0d4f0 by goroutine 44:
> gitlab.com/gitlab-org/gitlab-pages/internal/source/gitlab/cache.(*Retriever).Retrieve()
> /builds/nolith/gitlab-pages/internal/source/gitlab/cache/retriever.go:22 +0x8d
> gitlab.com/gitlab-org/gitlab-pages/internal/source/gitlab/cache.(*Entry).retrieveWithClient()
> /builds/nolith/gitlab-pages/internal/source/gitlab/cache/entry.go:92 +0xc5
> Goroutine 46 (running) created at:
> testing.(*T).Run()
> /usr/local/go/src/testing/testing.go:878 +0x659
> gitlab.com/gitlab-org/gitlab-pages/internal/source/gitlab/cache.TestResolve()
> /builds/nolith/gitlab-pages/internal/source/gitlab/cache/cache_test.go:246 +0x1d5
> testing.tRunner()
> /usr/local/go/src/testing/testing.go:827 +0x162
> Goroutine 44 (running) created at:
> gitlab.com/gitlab-org/gitlab-pages/internal/source/gitlab/cache.(*Entry).Retrieve.func1()
> /builds/nolith/gitlab-pages/internal/source/gitlab/cache/entry.go:64 +0x67
> sync.(*Once).Do()
> /usr/local/go/src/sync/once.go:44 +0xde
> gitlab.com/gitlab-org/gitlab-pages/internal/source/gitlab/cache.(*Entry).Retrieve()
> /builds/nolith/gitlab-pages/internal/source/gitlab/cache/entry.go:64 +0xad
> gitlab.com/gitlab-org/gitlab-pages/internal/source/gitlab/cache.(*Cache).Resolve()
> /builds/nolith/gitlab-pages/internal/source/gitlab/cache/cache.go:89 +0x1ea
> gitlab.com/gitlab-org/gitlab-pages/internal/source/gitlab/cache.TestResolve.func8.1()
> /builds/nolith/gitlab-pages/internal/source/gitlab/cache/cache_test.go:239 +0x76
> gitlab.com/gitlab-org/gitlab-pages/internal/source/gitlab/cache.withTestCache()
> /builds/nolith/gitlab-pages/internal/source/gitlab/cache/cache_test.go:89 +0x185
> gitlab.com/gitlab-org/gitlab-pages/internal/source/gitlab/cache.TestResolve.func8()
> /builds/nolith/gitlab-pages/internal/source/gitlab/cache/cache_test.go:238 +0xc7
> testing.tRunner()
> /usr/local/go/src/testing/testing.go:827 +0x162
> ==================
> ```
| 1 |
734,943 | 31,320,489 |
2020-02-27 01:21:02.720
|
Tech evaluation: Serve pages from object storage
|
From https://gitlab.com/gitlab-org/gitlab/issues/208135
The idea is to create a proof of concept for Pages to be able to serve content from object storage. This will hopefully help identify some of the unknowns around enabling this functionality!
Will use [GitLab's object storage](https://docs.gitlab.com/charts/advanced/external-object-storage/) with the [GDK config](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/master/doc/howto/object_storage.md) to explore a possible solution for this.
## Results
What we've discovered from !258
* Adding specific code to handle objects from AWS S3 or GCS may not be scalable and would be harder to maintain.
* Implementing [this diagram](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/258#note_310076437) would be super slow, as Rails and Workhorse validate metadata multiple times, and it is not very efficient if we want random access to the file (it does not cache anything).
* We should explore using presigned URLs #377
| 1 |
734,943 | 29,164,214 |
2019-12-28 16:37:21.739
|
Content-length header is not provided when content-encoding is used
|
GitLab Pages [supports serving statically compressed files](https://docs.gitlab.com/ee/user/project/pages/introduction.html#serving-compressed-assets). But it looks like, when this is done, the response is missing the `Content-Length` header whenever it includes `Content-Encoding: gzip`.
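A minimal Go sketch of serving a pre-compressed asset with an explicit `Content-Length`; paths and content types are illustrative, and the real Pages serving code differs:
```go
package main

import (
	"io"
	"log"
	"net/http"
	"os"
	"strconv"
)

// serveGzipped serves path+".gz" with Content-Encoding: gzip. Because
// the compressed size is known up front, Content-Length can be set
// alongside Content-Encoding.
func serveGzipped(w http.ResponseWriter, r *http.Request, path string) {
	f, err := os.Open(path + ".gz")
	if err != nil {
		http.NotFound(w, r)
		return
	}
	defer f.Close()

	fi, err := f.Stat()
	if err != nil {
		http.Error(w, "stat failed", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Encoding", "gzip")
	w.Header().Set("Content-Length", strconv.FormatInt(fi.Size(), 10))
	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	io.Copy(w, f)
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		serveGzipped(w, r, "public/index.html")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```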
| 1 |
734,943 | 29,009,434 |
2019-12-23 15:50:14.281
|
Tech Evaluation: Support HSTS on GitLab Pages
|
## Problem to solve
We need to research the technical implementation details to solve https://gitlab.com/gitlab-org/gitlab-pages/issues/28
## Proposal
Spend 2-3 days assessing the implementation options and break the issue down into smaller components if larger than 2 MRs:
Some users want to enforce no access to their web content without HTTPS/certificates. This can be done with [HTTP Strict Transport Security (HSTS) policy](https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security), but we do not currently support enabling this for GitLab Pages sites.
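A minimal sketch of what an HSTS middleware could look like in Go; the `max-age` value is an illustrative choice, not a project decision:
```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// hstsMiddleware sets the Strict-Transport-Security header on TLS
// connections; browsers ignore the header over plain HTTP anyway.
func hstsMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.TLS != nil {
			w.Header().Set("Strict-Transport-Security", "max-age=31536000")
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	})
	// TLS setup omitted; the header is only added on TLS requests.
	log.Fatal(http.ListenAndServe(":8080", hstsMiddleware(mux)))
}
```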
| 1 |
734,943 | 27,985,016 |
2019-12-05 15:29:14.918
|
GitLab Pages depends on the availability of GitLab API
|
If the GitLab API is not available for some reason (e.g. https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1936), GitLab Pages currently becomes unavailable too (we clear the cache on any API lookup problem).
If we do not clear the cache in these cases, GitLab Pages will be more independent and won't produce spikes of errors like https://dashboards.gitlab.net/d/web-pages-main/web-pages-overview?orgId=1&from=1586889900000&to=1586891700000&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&var-sigma=2
The following discussion from !194 should be addressed:
- [ ] @ayufan started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/merge_requests/194#note_255169959): (+1 comment)
> Should we consider replacing entry only if the new received one does not have `entry.lookup.Error` to solve intermediate errors of processing lookups?
>
> I consider that today it can happen that we store the lookup of a successful request, but after a refresh we receive an error, like a 500 from GitLab. It would help if we could catch short-lived errors of the GitLab API with this mechanism, and re-use successful requests for as long as the long interval. This reduces Pages' dependence on the API being super stable, and we anticipate that the upstream API is flaky to some extent.
>
> Or maybe better is in such case, is to re-use current lookup and extend the lease on the lookup to allow another refresh?
>
> ```golang
> if entry.response != nil && entry.response.Error != nil {
> entry.response = e.response
> }
> ```
>
> > Note: I'm fine following that in the next MR as a stability improvement.
| 2 |
734,943 | 27,741,902 |
2019-11-29 10:57:29.701
|
Make GitLab client timeout / JWT token expiry configurable
|
The following discussion from !201 should be addressed:
- [ ] @ayufan started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/merge_requests/201#note_251814520): (+3 comments)
> Is this timeout too small?
>
> Can we make it larger, or at least configurable?
| 2 |
734,943 | 26,998,252 |
2019-11-13 02:35:41.940
|
Provide a way to track and measure API calls
|
We need a way to track the requests made from the `gitlab` config source (https://gitlab.com/gitlab-org/gitlab-pages/merge_requests/194) to the internal API. A sketch of one option follows.
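A sketch of one option using a Prometheus counter; the metric name and label are illustrative:
```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// apiCallsTotal counts requests made by the gitlab config source to the
// internal API, labelled by HTTP status code.
var apiCallsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "gitlab_pages_internal_api_calls_total",
		Help: "Requests made by the gitlab config source to the internal API",
	},
	[]string{"status"},
)

func init() {
	prometheus.MustRegister(apiCallsTotal)
}

// The client would increment it after each response, e.g.:
// apiCallsTotal.WithLabelValues(strconv.Itoa(resp.StatusCode)).Inc()
```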
| 1 |
734,943 | 26,188,901 |
2019-10-22 10:20:27.813
|
XSSI in Gitlab Pages *.gitlab.io
|
**[HackerOne report #719770](https://hackerone.com/reports/719770)** by `ngalog` on 2019-10-22, assigned to @jeremymatos:
Security workflow issue: https://dev.gitlab.org/gitlab/gitlabhq/issues/2967.
### Summary
Now that gitlab.io has implemented access control, all private pages should be limited to authorized users. However, I found a way to steal the content of a victim's JS files using the XSSI (Cross-Site Script Inclusion) technique under certain conditions.
### Steps to reproduce
- Use Gitlab CI/CD to build a page, and limit access to project member only
- In the gitlab page folder, upload a JS file, something like below
`secret.js`
```js
aws_key='AKIARONCHAN';
aws_secret='RONCHANAGAIN'
```
- Then, as the victim, log in to gitlab
- As the attacker, on a domain that you control, prepare the HTML file below and make the victim visit the page
https://attacker.com/attackpage
```html
<script src="https://GROUPID.gitlab.io/PROJECTID/secret.js"></script>
<script>alert(aws_key);</script>
```
- Then you should see a pop up containing the JS secrets of the gitlab page.
### Impact
XSSI to steal user's gitlab page's JS file content
### Examples
Given above
### What is the current *bug* behavior?
Doesn't block an invalid referer
### What is the expected *correct* behavior?
An attacker should not be able to carry out an XSSI attack
### Relevant logs and/or screenshots



### Output of checks
gitlab.com
## Impact
XSSI
## Attachments
**Warning:** Attachments received through HackerOne, please exercise caution!
* [Screenshot_2019-10-22_at_9.46.24_PM.png](https://h1.sec.gitlab.net/a/bb6765a1-a5b7-4bff-b992-6a6e84553263/Screenshot_2019-10-22_at_9.46.24_PM.png)
* [Screenshot_2019-10-22_at_9.46.36_PM.png](https://h1.sec.gitlab.net/a/c7948fcf-c65c-449d-8f23-58add7b4b289/Screenshot_2019-10-22_at_9.46.36_PM.png)
* [Screenshot_2019-10-22_at_9.46.42_PM.png](https://h1.sec.gitlab.net/a/08c44e4a-8ac2-47ab-b817-71f750589783/Screenshot_2019-10-22_at_9.46.42_PM.png)
| 4 |
734,943 | 26,150,199 |
2019-10-21 09:46:58.732
|
[Gitlab Pages Auth Bypass] Able to steal a user's Authentication Code For Gitlab Pages
|
**[HackerOne report #718460](https://hackerone.com/reports/718460)** by `ngalog` on 2019-10-21, assigned to @jeremymatos:
### Summary
I bypassed the regex for the gitlab pages authentication in gitlab.com
### Steps to reproduce
- Log in to gitlab and then visit https://projects.gitlab.io/auth?domain=https://ronchangitlab.io&state=xdgnwM0hmRQ7g5xoevNV6g==
- The attacker can then obtain the authorization code on ronchangitlab.io and exchange it for the victim's GitLab Pages cookies
## Impact
Gitlab Pages authentication bypass
## Issue on dev
https://dev.gitlab.org/gitlab/gitlabhq/issues/2938
https://gitlab.com/gitlab-org/security/gitlab/-/issues/102
---
## Solution
### Proposals
- Proposal 1 - https://gitlab.com/gitlab-org/gitlab-pages/-/issues/262#note_255332604 and WIP PoC https://dev.gitlab.org/gitlab/gitlab-pages/-/merge_requests/21
- Proposal 2 - https://gitlab.com/gitlab-org/gitlab-pages/-/issues/262#note_341774099
### Proposal 2
~~Going ahead with proposal 2~~
1. Generate a JWT signing key based on the `auth-secret` on Pages startup
2. When we initiate the authentication flow, generate a JWT token with claims, set it as the state and save to the encrypted cookie
```go
state := jwt(domain, randomNewState, jwtSigningKey)
```
3. Redirect to `projects.gitlab.io?state=state` and validate the state's JWT signature
4. Extract the domain from the JWT claims and continue OAuth flow.
5. When we receive `mydomain.gitlab.io?state=state&code=code` we verify the state's JWT signature again.
6. Continue as usual
---
## Implemented solution
It was later discovered that the problem lies in someone being able to steal an OAuth `code` as part of the authentication flow. To mitigate this, the process is to encrypt the code and add it to a JWT which is then signed.
The flow is as follows (a rough code sketch of the encryption steps appears after this list):
1. Generate a JWT signing key based on the `auth-secret` on Pages startup.
1. On redirection from GitLab to https://gitlab.io/auth?code=plaintext&state=random, encrypt and sign the code
1. Encrypt code `encryptedCode=AES-GCM(code, domain (as salt), hkdfKey(auth-secret)`
1. Use the JWT as new code `code=JWT(encryptedCode, nonce, signingKey)`
1. Redirect to `mydomain.com/auth?code=JWT(encryptedCode, nonce, signingKey)&state=random` and strip the `token` from the query string to mitigate https://gitlab.com/gitlab-org/gitlab/-/issues/285244
1. When we receive `mydomain.gitlab.io?state=random&code=code` we verify the code's JWT signature.
1. Get `encryptedCode` and `nonce` from the JWT claims and decrypt the code
1. Exchange for access token and serve content if successful
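
A rough sketch of the encryption and signing steps above, assuming `golang.org/x/crypto/hkdf` for key derivation and `github.com/golang-jwt/jwt/v4` for signing; all names are illustrative, not the actual gitlab-pages code:

```go
package auth

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"io"

	"github.com/golang-jwt/jwt/v4"
	"golang.org/x/crypto/hkdf"
)

// deriveKey stretches the shared auth-secret into a 32-byte AES key,
// using the requesting domain as the HKDF salt (as described above).
func deriveKey(authSecret, domain string) ([]byte, error) {
	key := make([]byte, 32)
	r := hkdf.New(sha256.New, []byte(authSecret), []byte(domain), nil)
	_, err := io.ReadFull(r, key)
	return key, err
}

// encryptCode seals the OAuth code with AES-GCM under the derived key.
func encryptCode(code string, key []byte) (ciphertext, nonce string, err error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", "", err
	}
	n := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, n); err != nil {
		return "", "", err
	}
	ct := gcm.Seal(nil, n, []byte(code), nil)
	return base64.URLEncoding.EncodeToString(ct),
		base64.URLEncoding.EncodeToString(n), nil
}

// signedCode wraps the encrypted code and nonce in a signed JWT; this JWT
// replaces the plaintext code in the redirect back to the custom domain.
func signedCode(encryptedCode, nonce string, signingKey []byte) (string, error) {
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"code":  encryptedCode,
		"nonce": nonce,
	})
	return token.SignedString(signingKey)
}
```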
---
<details>
<summary>Sequence diagrams from the comment below</summary>
### Summary
* The goal of the attack is to read the content of the site `mygroup.gitlab.io`, which is private
* To achieve this, the attacker sends the link `project.gitlab.io?domain=Attackersdomain.com` to the user, which results in `attackersdomain.com?code=mygroupcode&state=irrelevant`
* After receiving the `code` attacker can complete the Auth process for `mygroup.gitlab.io`
Note: it's not possible for the attacker to get the `token`, only to read the content of the private website.
### How the current Auth workflow works
```mermaid
sequenceDiagram
participant U as User
participant D as mygroup.gitlab.io
participant P as projects.gitlab.io
participant G as gitlab.com
U->>D: get index.html
D->>U: not authorized, redirect to projects.gitlab.io/auth?domain=mygroup.gitlab.io&state=123
U->>P: projects.gitlab.io/auth?domain=mygroup.gitlab.io&state=123
rect rgb(0, 255, 0)
Note over P,G: OAuth workflow
P->>U: redirect to gitlab.com/oauth?redirect_url=projects.gitlab.io&state=123
U->>G: gitlab.com/oauth?redirect_url=projects.gitlab.io&state=123
G->>U: redirect to projects.gitlab.io?state=123&code=mycode
U->>P: projects.gitlab.io?state=123&code=mycode
end
P->>U: redirect to mygroup.gitlab.io?state=123&code=mycode
U->>D: mygroup.gitlab.io?state=123&code=mycode
rect rgb(0, 255, 0)
Note over D,G: Exchange code for token and verify that user has permissions for the mygroup.gitlab.io
D->>G: get token by code=mycode
G->>D: token = blablabla
D->>G: get project by id with token = blablabla
G->>D: success
end
D->>U: success
```
### Requests highlighted in green are not so relevant to the described attack, so let's get rid of them
```mermaid
sequenceDiagram
participant U as User
participant D as mygroup.gitlab.io
participant P as projects.gitlab.io
participant G as gitlab.com
U->>D: get index.html
D->>U: not authorized, redirect to projects.gitlab.io/auth?domain=mygroup.gitlab.io&state=123
U->>P: projects.gitlab.io/auth?domain=mygroup.gitlab.io&state=123
Note over P,G: OAuth workflow
P->>U: redirect to mygroup.gitlab.io?state=123&code=mycode
U->>D: mygroup.gitlab.io?state=123&code=mycode
Note over D,G: Exchange code for token and verify that user has permissions for the mygroup.gitlab.io
D->>U: success
```
### Some notes about sessions(stored in encrypted and signed cookies with 10 minutes expiration time)
```mermaid
sequenceDiagram
participant U as User
participant D as mygroup.gitlab.io
participant P as projects.gitlab.io
participant G as gitlab.com
U->>D: get index.html
Note left of D: generate random state and save it in the session cookie
D->>U: not authorized, redirect to projects.gitlab.io/auth?domain=mygroup.gitlab.io&state=123
U->>P: projects.gitlab.io/auth?domain=mygroup.gitlab.io&state=123
Note over P,G: domain=mygroup.gitlab.io is being saved to the session cookie
Note over P,G: OAuth workflow
Note over P,G: verify that "GitLab Pages controls the domain" from session cookie
P->>U: redirect to mygroup.gitlab.io?state=123&code=mycode
U->>D: mygroup.gitlab.io?state=123&code=mycode
Note left of D: Verify that state from the query parameter is the same as in session cookie
Note over D,G: Exchange code for token and verify that user has permissions for the mygroup.gitlab.io
D->>U: success
```
### A vulnerable step
You can see the `verify that "GitLab Pages controls the domain" from session cookie` step above.
This is how it's implemented:
```golang
func (a *Auth) domainAllowed(name string, domains source.Source) bool {
// This is incorrect but it's not important because of second check
isConfigured := (name == a.pagesDomain) || strings.HasSuffix("."+name, a.pagesDomain)
if isConfigured {
return true
}
// This check is super easy to bypass by just adding the `attackersdomain.com` to any pages project passing validation
// Note that validation does not mean that the correct CNAME record is set
domain, err := domains.GetDomain(name)
// domain exists and there is no error
return (domain != nil && err == nil)
}
```
### The attack as it can currently be performed
(note this diagram is a little different from [my previous comment](https://gitlab.com/gitlab-org/security/gitlab-pages/-/merge_requests/1#note_359604339):
I placed the attacker requesting the user domain last to bring related steps closer; it's still possible to perform the attack this way, and it's actually even more convenient)
The executed vulnerability is highlighted in green.
```mermaid
sequenceDiagram
participant A as Attacker
participant U as User
participant D as mygroup.gitlab.io
participant P as projects.gitlab.io
participant G as gitlab.com
participant AD as ATTACKERSDOMAIN.com
A->U: can you please visit projects.gitlab.io/auth?domain=ATTACKERSDOMAIN.COM&state=IRRELEVANT
U->>P: projects.gitlab.io/auth?domain=ATTACKERSDOMAIN.COM&state=IRRELEVANT
Note over P,G: domain=ATTACKERSDOMAIN.COM is being saved to the session cookie
Note over P,G: OAuth workflow
rect rgb(0, 255, 0)
Note over P,G: verify that "GitLab Pages controls the domain" from session cookie
Note over P,G: This check is bypassed by adding ATTACKERSDOMAIN.COM to a random pages project
P->>U: redirect to ATTACKERSDOMAIN.COM?state=IRRELEVANT&code=mycode
U->>AD: ?state=IRRELEVANT&code=mycode
end
AD->>A: code=mycode
Note right of A: Need to get a valid state in cookie:
A->>D: get index.html
D->>A: unauthorized, redirect to projects.gitlab.io/auth?domain=mygroup.gitlab.io&state=USERSTATE
Note right of A: skip the auth, just go to the last step manually:
A->>D: /auth?state=USERSTATE&code=mycode
Note over D,G: Exchange code for token and verify that user has permissions for the mygroup.gitlab.io
D->>A: success
```
### The JWT fix
We tried to fix this in https://gitlab.com/gitlab-org/security/gitlab-pages/-/merge_requests/1
It works by completely removing the `domain` parameter and including it in the `state=JWT({random=random, domain=domain...})`.
So the attacker can't send a link specifying the domain.
But, it can be exploited in a little more complicated way:
```mermaid
sequenceDiagram
participant A as Attacker
participant Pages as Pages Server
participant U as User
participant D as mygroup.gitlab.io
participant P as projects.gitlab.io
participant G as gitlab.com
participant AD as ATTACKERSDOMAIN.com
Note right of A: Need to get a valid state for ATTACKERSDOMAIN.COM
Note right of A: Add ATTACKERSDOMAIN.COM as the domain for the private pages project
Note right of A: But not setup DNS for it
A->Pages: ATTACKERSDOMAIN.com/index
Pages->A: not authorized, redirect to projects.gitlab.io/auth?state=JWT({random=random, domain=ATTACKERSDOMAIN.com})
Note right of A: Let's call JWT({random=random, domain=ATTACKERSDOMAIN.com}) ATTACKERSTATE
A->U: can you please visit projects.gitlab.io/auth?state=ATTACKERSTATE
U->>P: projects.gitlab.io/auth?state=ATTACKERSTATE
Note over P,G: OAuth workflow
rect rgb(0, 255, 0)
Note over P,G: use the domain from state, which is ATTACKERSDOMAIN.COM
P->>U: redirect to ATTACKERSDOMAIN.COM?state=ATTACKERSTATE&code=mycode
U->>AD: ?state=ATTACKERSTATE&code=mycode
end
AD->>A: code=mycode
Note right of A: Need to get a valid state in cookie:
A->>D: get index.html
D->>A: unauthorized, redirect to projects.gitlab.io/auth?state=USERSTATE(JWT({random=random, domain=mygroup.gitlab.io}))
Note right of A: skip the auth, just go to the last step manually:
A->>D: /auth?state=USERSTATE&code=mycode
Note over D,G: Exchange code for token and verify that user has permissions for the mygroup.gitlab.io
D->>A: success
```
### The "~~Signed~~ Encrypted code" fix
My current idea is to sign the code instead of the state when we redirect back to the user's domain:
```mermaid
sequenceDiagram
participant A as Attacker
participant U as User
participant D as mygroup.gitlab.io
participant P as projects.gitlab.io
participant G as gitlab.com
participant AD as ATTACKERSDOMAIN.com
A->U: can you please visit projects.gitlab.io/auth?domain=ATTACKERSDOMAIN.COM&state=IRRELEVANT
U->>P: projects.gitlab.io/auth?domain=ATTACKERSDOMAIN.COM&state=IRRELEVANT
Note over P,G: domain=ATTACKERSDOMAIN.COM is being saved to the session cookie
Note over P,G: OAuth workflow
P->>U: redirect to ATTACKERSDOMAIN.COM?state=IRRELEVANT&securecode=ENCRYPTED({domain=ATTACKERSDOMAIN.COM, code=mycode})
U->>AD: ?state=IRRELEVANT&securecode=ENCRYPTED({domain=ATTACKERSDOMAIN.COM, code=mycode})
AD->>A: securecode=ENCRYPTED({domain=ATTACKERSDOMAIN.COM, code=mycode})
Note right of A: Need to get a valid state in cookie:
A->>D: get index.html
D->>A: unauthorized, redirect to projects.gitlab.io/auth?domain=mygroup.gitlab.io&state=USERSTATE
Note right of A: skip the auth, just go to the last step manually:
A->>D: /auth?state=USERSTATE&securecode=ENCRYPTED({domain=ATTACKERSDOMAIN.COM, code=mycode})
Note right of D: decrypt code, and get domain=ATTACKERSDOMAIN.COM, and code=mycode
Note right of D: check if ATTACKERSDOMAIN.COM==mygroup.gitlab.io
Note right of D: it's not equal, so user was tricked into clicking this,
D->>A: failure
```
**I can't say if there is a way to bypass this check. I would really appreciate it if everyone who has read this far tries to break it** :wink:
</details>
| 7 |
734,943 | 26,044,218 |
2019-10-17 13:59:25.470
|
Secrets cannot be passed on the command line
|
Passing of Pages secrets was deprecated with issue https://gitlab.com/gitlab-org/gitlab-pages/issues/208 and issues a deprecation warning in the Pages logs.
Disallowing the passing of secrets is a breaking change, so it should be implemented during the next major release of GitLab (13.0).
| 2 |
734,943 | 21,200,693 |
2019-05-22 21:38:06.918
|
Default to JSON Logging
|
[We are moving to make JSON the default configured log format for GitLab Pages in 12.0 Omnibus release](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/4102).
Would we like to make this the source install default as well by changing https://gitlab.com/gitlab-org/gitlab-pages/blob/master/logging.go#L28 ?
| 1 |
734,943 | 20,643,279 |
2019-05-07 08:39:16.048
|
Convert https variable to Gorilla ProxyHeaders
|
We temporarily saved the `https` flag in the context in https://gitlab.com/gitlab-org/gitlab-pages/merge_requests/168/diffs, but the `https` variable is still being passed through a lot of calls.
We can use https://godoc.org/github.com/gorilla/handlers#ProxyHeaders instead of both these solutions.
We'll remove this flag in three steps:
- [x] Merge https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/225 to log an error if `https` flag does not match the `r.URL.Scheme`
- [x] Test bed with using a reverse proxy see https://gitlab.com/gitlab-org/gitlab-pages/issues/219#note_281592054
- [x] Completely remove the `https` flag from the context
<details>
<summary>Old description</summary>
The following discussion from !141 should be addressed:
- [ ] @nick.thomas started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/merge_requests/141#note_167068450): (+1 comment)
> I know we use this imperative style of middleware (returning true/false to indicate whether to end) elsewhere in gitlab-pages, but I wonder if we could avoid it in this case, and instead use the `http.Handler` style instead?
>
> One approach: the `Middleware` type would have a `ServeHTTP()` method and a `next` member. In the cases where we currently `return false`, we'd instead `m.next.ServeHTTP(w, r) ; return`. Otherwise, we'd serve the redirect and return.
>
> We use this pattern in workhorse and it serves us quite well. WDYT?
This is mostly a refactoring issue, I think it would make the gitlab-pages codebase easier to understand, and we could make use of more third-party implementations this way.
One example: we could start using https://godoc.org/github.com/gorilla/handlers#ProxyHeaders on *just* the `ServeProxy` route!
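
A minimal sketch of the `http.Handler` style described in the quote, using a hypothetical HTTPS-redirect middleware as the example; the gitlab-pages internals would differ:

```go
package handlers

import "net/http"

// redirectMiddleware holds the next handler in the chain, in the
// workhorse-style pattern described above.
type redirectMiddleware struct {
	next http.Handler
}

func (m *redirectMiddleware) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if r.TLS == nil {
		// Serve the redirect ourselves and end the chain here.
		http.Redirect(w, r, "https://"+r.Host+r.RequestURI, http.StatusMovedPermanently)
		return
	}
	// Equivalent of the old `return false`: delegate to the next handler.
	m.next.ServeHTTP(w, r)
}
```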
</details>
| 2 |
734,943 | 17,671,246 |
2019-01-25 20:01:38.217
|
Serve *in*directly from artifacts
|

## Prerequisites
1. Pages site artifact zip files are no longer consumed https://gitlab.com/gitlab-org/gitlab-ce/issues/45888
1. We regenerate any Pages artifact zip files that were already consumed https://gitlab.com/gitlab-org/gitlab-ee/issues/9346
1. We have Pages pull config from the Rails API https://gitlab.com/gitlab-org/gitlab-pages/issues/161
## Proposal
1. Pages receives a request for a site resource.
1. Pages does a lookup in configs as usual.
1. If the site exists, and Pages doesn't have the artifact.zip yet, it downloads it via the Rails API.
1. Pages extracts the zip to a unique path in its filesystem.
1. Pages serves the site from that path.
1. Pages regularly checks for invalidation, so the filesystem acts as an LRU or other cache.
### Pros
- ~Geo doesn't need to "sync" *anything* at all.
- Pages becomes HA and scalable?
- Pages site symlinks continue to work.
- Object storage is not required.
### Cons
- If we are extracting artifact.zip on demand, there is a cold-cache issue: whenever a new request comes in, there is an initial download/extract time before the node can handle requests, and this cost is paid by each "Pages" node. A site can be several GBs if it includes a lot of images; it would 503 for a while.
- Invalidating/expiring is complex, as you need to ping each machine to free space, or each one has to ping the API in order to determine whether it can remove a folder from disk - O(N*M)?
## More related discussion
GitLab Pages direction doc (internal link): https://docs.google.com/document/d/18awpT5MVhlmdX0erO1X__Od59KZvrXBbdV0HG5A7WZk/edit#
@nick.thomas
>>>
Once we have https://gitlab.com/gitlab-org/gitlab-pages/issues/78 done, we could rework the main pages functionality as a set of pointers to specific pages artifacts, accessed in the same way.
We'd need to stop deleting pages artifacts, and somehow regenerate the ones already deleted, of course, but then custom domains and the group / project pages can just become pointers to artifacts, with an optional filesystem cache to speed things up.
Once a given pages artifact is no longer the latest, it can expire according to the usual rules.
>>>
https://gitlab.com/gitlab-org/gitlab-pages/issues/158:
@nick.thomas
>>>
For content, I think we want to implement https://gitlab.com/gitlab-org/gitlab-ce/issues/45888. Modulo existing customer data (which could _in principle_ be backfilled), this will ensure you can _always_ get the current Pages content for a site from the GitLab API, (which may, of course, be serving a redirect to an archive in object storage).
Once we have this, we can treat the file store as a **non-coherent temporary cache**. If we're still interested in continued resilience while the GitLab API is unavailable, we can endeavour to keep it filled all the time. If the file store is lost, we can stand up a new, empty one, and the cache can be refilled from the GitLab API, either aggressively, or on-first-request.
If we have two of these backends, they don't have to share an NFS mount, and the loss of one won't cause an outage.
>>>
| 20 |
734,943 | 17,669,680 |
2019-01-25 17:55:27.240
|
Serve directly from artifacts in object storage
|

## Prerequisites
1. Pages site artifact zip files are no longer consumed https://gitlab.com/gitlab-org/gitlab-ce/issues/45888
1. We regenerate any Pages artifact zip files that were already consumed https://gitlab.com/gitlab-org/gitlab-ee/issues/9346
1. We have Pages pull config from the Rails API https://gitlab.com/gitlab-org/gitlab-pages/issues/161
## Proposal
1. Pages receives a request for a site resource.
1. Pages does a lookup in configs as usual.
1. Pages basically proxies the artifact files in object storage.
### After MVP
1. Pages caches the proxied files
1. Pages translates symlinks into redirects (if we feel this is important enough behavior to add back)
### Pros
- ~Geo doesn't need to "sync" _anything_ at all.
- Pages becomes HA and scalable?
### Cons
- Object storage is required.
- Pages site symlinks would break, but it may be possible to reimplement the behavior by translating them to redirects.
- Pages would need to keep the "old" way as well, to support small, simple instances, e.g. a Raspberry Pi.
- How would we transition?
### More related discussion
GitLab Pages direction doc (internal link): https://docs.google.com/document/d/18awpT5MVhlmdX0erO1X__Od59KZvrXBbdV0HG5A7WZk/edit#
@brodock https://gitlab.com/gitlab-org/gitlab-ee/issues/4611#note_88542926:
>>>
I built a similar infra for hosting landing-pages at my previous company. The use-case is very similar to ours and it was also inspired in how GitHub Pages works.
The endpoint that served the pages was proxying requests to the S3 bucket; based on the domain we would find the base folder and then look for files.
Because S3 calls can be expensive, I've also added a short-lived TTL cache between the two, so any spike in traffic would not mean multiple requests to the object storage.
Relevant information is here: http://shipit.resultadosdigitais.com.br/blog/do-apache-ao-go-como-melhoramos-nossas-landing-pages/ (it's in portuguese but google translator does a really good job).
For that specific use-case the domain mapping was kept in both the database (for persistence reasons) and in Redis (so it is fast). Redis was being used not as a cache but as the main source of truth for that.
>>>
@nick.thomas https://gitlab.com/gitlab-org/gitlab-ee/issues/4611#note_101235573:
>>>
Serving the _contents_ of the artifacts directly from object storage does several undesirable things from my point of view:
- Makes object storage mandatory for Pages (unnecessary complexity for small sites)
- Requires many changes in the Pages daemon
- Breaks current symlink support, so breaking existing pages deployments
Ultimately, though, the route we take there is up to ~Release and @jlenny.
>>>
https://gitlab.com/gitlab-org/gitlab-pages/issues/68:
@ayufan
>>>
We could serve pages directly from ZIP archives, but loading all of the metadata is an IO- and memory-consuming operation, so it is not worth it.
Maybe the solution is to assume that pages data always lives behind object storage. We could then build Pages and Sidekiq to access object storage directly, not the filesystem: extract data there and update metadata to make Pages pick up the new changes.
>>>
@ayufan
>>>
Can Pages be just a regular OAuth application? Can Pages use the general API to download artifacts? This is possible even today. Maybe we can just generate one-time URLs to download artifacts, similar to how you can sign S3 URLs.
>>>
| 20 |
734,943 | 16,202,508 |
2018-11-29 04:03:28.753
|
Custom error pages not served for namespace domains
|
Summary
===
`/404.html` not served even though it exists.
Steps to reproduce
===
Create a GitLab Pages website with `/404.html`. Go to `/non/existing_file`.
Expected
===
`/404.html` served.
Actual
===
Default GitLab 404 page served when using `*.gitlab.io`, but `/404.html` served when using custom domain.
Useful links
===
Public repository with a minimal, complete, verifiable example: https://gitlab.com/error-page-demo/error-page-demo.gitlab.io
Non-existent GitLab Page: https://error-page-demo.gitlab.io/non/existing_file
Page that should be served: https://error-page-demo.gitlab.io/404.html
Relevant documentation: https://docs.gitlab.com/ce/user/project/pages/introduction.html#custom-error-codes-pages
| 2 |
734,943 | 13,689,763 |
2018-08-25 06:44:46.063
|
Load domain data from API instead of traversing filesystem
|
We've seen a number of bugs where stale values of `config.json` are stored or stale directories mess up domain loading. I think it might make sense to switch GitLab Pages to load the data from the API; a rough sketch of the fetch-and-cache step follows the checklist below. We are doing this to support access control in !94.
- [ ] CE: Add API endpoint to serve project pages config.json
- [ ] Pages: Fetch and cache config when it does not exist
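
A minimal fetch-and-cache sketch for the Pages side; the endpoint URL, payload fields, and types are assumptions for illustration only:

```go
package cache

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"sync"
)

// Config mirrors the per-domain config.json payload (fields assumed).
type Config struct {
	Domain string `json:"domain"`
}

// Cache lazily fetches domain configs from the API and memoizes them.
type Cache struct {
	mu      sync.Mutex
	entries map[string]*Config
	apiURL  string // e.g. a hypothetical internal pages-config endpoint
}

func (c *Cache) Get(domain string) (*Config, error) {
	c.mu.Lock()
	cfg, ok := c.entries[domain]
	c.mu.Unlock()
	if ok {
		return cfg, nil
	}

	// Cache miss: fetch the config from the API.
	resp, err := http.Get(c.apiURL + "?host=" + url.QueryEscape(domain))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	cfg = &Config{}
	if err := json.NewDecoder(resp.Body).Decode(cfg); err != nil {
		return nil, err
	}

	c.mu.Lock()
	c.entries[domain] = cfg
	c.mu.Unlock()
	return cfg, nil
}
```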
| 5 |
734,943 | 2,803,186 |
2016-08-22 18:30:50.935
|
Support HSTS on GitLab Pages
|
## Problem to Solve
Some users want to enforce no access to their web content without HTTPS/certificates. This can be done with [HTTP Strict Transport Security (HSTS) policy](https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security), but we do not currently support enabling this for GitLab Pages sites.
## Proposal
By allowing for enabling this as a custom configuration/policy associated with a Pages site, we can make this possible.
<!-- triage-serverless v3 PLEASE DO NOT REMOVE THIS SECTION -->
*This page may contain information related to upcoming products, features and functionality.
It is important to note that the information presented is for informational purposes only, so please do not rely on the information for purchasing or planning purposes.
Just like with all projects, the items mentioned on the page are subject to change or delay, and the development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc.*
<!-- triage-serverless v3 PLEASE DO NOT REMOVE THIS SECTION -->
| 1 |
734,943 | 2,574,075 |
2016-07-09 13:57:40.932
|
Support for single page applications route all requests to index.html.
|
## Problem to Solve
When routing requests to Gitlab Pages, if there is no default index.html, the request renders an error that does not permit SEO or support of SPA on GitLab Pages.
## Solution
Route all requests to GitLab Pages to /index.html by:
* implementing a `route single page app to index.html` checkbox in **Pages > Settings**
* Enable SPAs to serve any accompanying files so that it works
* Ensure existing files are served before trying to serve `index.html`
## Update 2021-07-20
Closing this issue in favor of https://gitlab.com/gitlab-org/gitlab-pages/-/issues/57. We aim to enable SPAs via a new `.gitlab-pages.yml`
| 4 |
734,943 | 118,056,437 |
2022-11-03 07:20:58.432
|
Follow-up from "Add auth-cookie-session-timeout flag" - In Auth constructor use options struct
|
The following discussion from !834 should be addressed:
- [ ] @proglottis started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/834#note_1156512951):
> nit, not directly related to your change, but this constructor has enough arguments now that it's probably worth extracting an options struct.
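
A hedged sketch of the suggested options-struct refactor; the field names are illustrative, not the real `Auth` constructor arguments:

```go
package auth

import "time"

// Auth is trimmed down to what this sketch needs.
type Auth struct {
	opts Options
}

// Options gathers the constructor's growing argument list into one struct,
// so call sites stay readable as new settings are added.
type Options struct {
	Secret               string
	RedirectURI          string
	InternalGitlabServer string
	CookieSessionTimeout time.Duration
}

// New replaces a long positional-argument constructor.
func New(opts Options) *Auth {
	return &Auth{opts: opts}
}
```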
| 2 |
734,943 | 117,942,630 |
2022-11-01 13:43:17.770
|
`cache_pages_domain_api` feature flag cause failures in staging
|
### Summary
During the rollout of the `cache_pages_domain_api` feature flag, the gitlab pages QA tests on staging started to fail.
After some investigation with `@vshushlin`, we found out that the application settings are different between Rails nodes (where the cache is created) and Sidekiq nodes (where the cache is invalidated). Since we use a hash of the application settings in the cache key, the Sidekiq job never finds the cache key to invalidate it.
Related:
- https://gitlab.com/gitlab-org/gitlab/-/issues/376332+
- https://gitlab.com/gitlab-org/gitlab/-/issues/364127+
### Possible fixes
```json:table
{
"items" : [
{
"option": "🅰️",
"description": "Instead of using the application settings hash in the cache key we could insert the hash in the cached value and validate it when reading the cache. This way the app settings is not part of the cache key, so it should get invalidated in the sidekiq nodes.",
"cons": "validating the cache manually based on the cached value."
},
{
"option": "🅱️",
"description": "Instead of using the application settings hash in the cache key we could have a two layers cache. The main cache key would save the app settings hash, which would be the key to the payload itself.",
"cons": "two reads on redis to find the cached value."
}
]
}
```
| 3 |
734,943 | 117,645,763 |
2022-10-26 19:52:12.963
|
Support HSTS on GitLab Pages on per domain basis (also on gitlab.com)
|
## Problem to Solve
Some users want to enforce no access to their web content without HTTPS/certificates.
This can be done with [HTTP Strict Transport Security (HSTS) policy](https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security), but we do not currently support enabling this for GitLab Pages sites on GitLab.com.
## Solution
Allow users to enable HSTS for Pages on GitLab.com.
| 5 |
734,943 | 115,192,449 |
2022-09-16 09:14:32.428
|
Session timeout after 10 minutes
|
### Summary
We use gitlab-pages for several internal website hostings and our users experience annoying session timeouts due to the short lifetime of the gitlab-pages cookie of 10 minutes. As an example, a single page web app can't load further assets (e.g. images) after the session timeout and a (hard) refresh in the browser is necessary.
The `authSessionMaxAge` constant is hardcoded in [`internal/auth/auth.go`](https://gitlab.com/gitlab-org/gitlab-pages/-/blob/14e310074668254398d13f74e647dd16df2cb12c/internal/auth/auth.go#L40):
```go
authSessionMaxAge = 60 * 10 // 10 minutes
```
It seems that this was introduced with https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/178.
So we have a few questions:
- Why was the timeout set to exactly 10 minutes? Is there a reason for this (e.g. a standard)?
- If there is no specific reason, would it be possible to increase this value?
- Or would it be possible to make this value configurable?
WDYT?
If desired, we would be happy to support you as ~"Leading Organization" with a ~"Community contribution".
### What is the current *bug* behavior?
There's a session timeout after 10 minutes.
### What is the expected *correct* behavior?
The duration to a session timeout is higher or can be configured.
### Goal
Make the timeout configurable:
1. [x] Add a new flag to Pages e.g. `auth-cookie-session-timeout` with default value of `10m` :arrow_right: https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/834
1. [x] Add the flag to Omnibus, see [sample MR](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/6144/diffs) :arrow_right: https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/6495
1. [x] Add the flag to the Helm charts, see [sample MR](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2338/diffs) :arrow_right: https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2838
1. [x] Update the admin documentation, see [sample MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/89780/diffs) :arrow_right: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/102996
| 3 |
734,943 | 115,128,713 |
2022-09-15 12:07:38.985
|
GCS ranged requests sometimes fails with 400
|
### Summary
On GitLab.com, we recently ran into an [incident](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/7727) where a small number of Pages hosts failed with 500 errors. Digging into it, we found that this resulted from an internal error where ranged requests to GCS were failing with "400 Bad Request".
This behavior seems recent (it started September 8th), judging by graphs from GCS.
Kibana logs don't provide much info besides the error itself, so perhaps we should consider logging the response body of 400 responses.
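
A hedged sketch of that diagnostic idea: capture a bounded chunk of the response body whenever a ranged request comes back with a 400. The helper below is hypothetical, not the Pages object-storage client:

```go
package objectstore

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

// fetchRange issues an HTTP Range request and, on a 400, logs a bounded
// portion of the response body so the failure reason lands in the logs.
func fetchRange(url string, offset, length int64) (io.ReadCloser, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", offset, offset+length-1))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode == http.StatusBadRequest {
		body, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
		resp.Body.Close()
		log.Printf("ranged request rejected: %q", body)
		return nil, fmt.Errorf("object storage returned 400: %s", body)
	}
	return resp.Body, nil
}
```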
<!-- DO NOT CHANGE -->
| 3 |
734,943 | 111,236,649 |
2022-07-06 08:16:37.411
|
Update documentation to state _redirects limits are now configurable
|
Based on the work in https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/778+, we want to update the ~documentation guides to let users know that the Pages `_redirects` limits are now configurable.
| 1 |
734,943 | 110,795,348 |
2022-06-28 00:12:00.958
|
Flaky TestDomainResolverError race
|
I've seen an increase in race test failures for `TestDomainResolverError` recently; I think it's worth investigating a bit more.
Job [#2647463990](https://gitlab.com/gitlab-org/gitlab-pages/-/jobs/2647463990) failed for be06885fdd167a523dd8e1e103bcbb23bb32db61:
| 1 |
734,943 | 110,261,083 |
2022-06-16 20:24:21.528
|
Move Gitlab Pages documentation menu in the sidebar one level up
|
### Summary
Gitlab Pages contribution documentation was recently moved to the docs site. I think it deserves to be a bit more discoverable and easier to access. For that, I propose moving it to its own *contributing* submenu.
| before | after |
|--------|-------|
| <img alt="SCR-20220616-l99" src="https://gitlab.com/gitlab-org/gitlab-docs/uploads/76a348a3f1c32c69d18dfb674aa7780d/SCR-20220616-l99.png" width="300"/> | <img alt="image" src="https://gitlab.com/gitlab-org/gitlab-docs/uploads/0abb0e10552412569fa43e60ffc03de1/image.png" width="300"/> |
| 1 |
734,943 | 109,935,782 |
2022-06-10 18:09:27.774
|
Remove doc/development in favor of https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/development/pages/index.md
|
| 1 |
734,943 | 108,886,739 |
2022-05-20 17:11:50.237
|
[SPIKE] create a gitlab-pages router
|
### Goal
Try to improve how routes are handled in gitlab-pages.
### Expected Outcome
An example of how to organize gitlab-pages routes and a plan of action in how to do that in small steps.
| 3 |
734,943 | 108,336,188 |
2022-05-10 23:08:57.856
|
Investigate and configure bundle on CI to disallow any changes to Gemfile.lock
|
This is a ~"corrective action" of sirt-2266.
Determine whether this project (https://gitlab.com/gitlab-org/gitlab-pages) uses `bundle` or `bundle install` in CI. Per the Rubygems.org advisory, we should use either the `frozen` or `deployment` options as defense-in-depth to mitigate supply chain attacks. This project needs to adopt the `frozen` or `deployment` option if it does not use one yet.
## More background
See https://github.com/rubygems/rubygems.org/security/advisories/GHSA-hccv-rwq6-vh79:
> Using Bundler in --frozen or --deployment mode in CI and during deploys, as the Bundler team has always recommended, will guarantee that your application does not silently switch to versions created using this exploit.
Note that the `deployment` option installs gems to `vendor`, which we may not want. So `frozen` will usually be the smaller change.
Note that:
> \[DEPRECATED\] The `--frozen` flag is deprecated because it relies on being remembered across bundler invocations, which bundler will no longer do in future versions. Instead please use `bundle config set --local frozen 'true'`, and stop using this flag
| 1 |
734,943 | 107,117,140 |
2022-04-25 07:47:32.009
|
Add early return and tests for internal/handlers/https
|
The following discussion from !735 should be addressed:
- [ ] @vshushlin started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/735#note_922396637):
> I would add
>
> ```golang
> if !redirect {
> return handler
> }
> ```
>
> and tests for this handler. But let's move it to the follow-up issue. This is already a nice improvement :thumbs-up:
| 1 |
734,943 | 106,881,635 |
2022-04-20 06:50:39.305
|
Infer artifacts-server from internalGitlabServerFromFlags() unless it's set explicitly
|
### Summary
We have a lot of config options pointing to the gitlab server. We use them for different reasons:
1. getting domain information
2. redirecting to authentication
3. fetching OAuth token
4. proxying artifacts
In the simplest case, it should be possible to infer all of them from a single URL.
Note:
1. some of these URLs work in the `internal` network (e.g. OAuth token, gitlab domain information)
2. while others should be public (e.g. OAuth redirects), because we can redirect the user to this URL
So we should have at least 2 parameters:
- `gitlab-server`
- `gitlab-internal-server`
And all of the rest can be inferred from this.
Right now `artifacts-server` should always be set in the config; I suggest we infer it from `internalGitlabServerFromFlags()` unless it's set.
**Optional:** mark `artifacts-server` as deprecated and remove it in the next major release.
## Implementation details
```golang
func artifactsServerFromFlags() string {
if *artifactsServer != "" {
return *artifactsServer
}
return internalGitlabServerFromFlags() + "/api/v4"
}
```
| 1 |
734,943 | 106,318,294 |
2022-04-08 09:46:42.625
|
TLS Ciphers are breaking in FIPS Mode
|
### Summary
When Pages is running in FIPS mode, some [ciphers](https://gitlab.com/gitlab-org/gitlab-pages/blob/master/internal/tls/tls.go#L13-L13) do not work.
### Steps to reproduce
Follow steps in https://gitlab.com/gitlab-org/gitlab-pages/-/issues/718#note_904531139
It also has the list of which ciphers do not work in FIPS mode.
### What is the current *bug* behavior?
All the ciphers mentioned below work when Pages is running as part of GDK but do not work when Pages is hosted on a FIPS-enabled RHEL server.
#### `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305` and `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305`
These are TLS 1.2 ciphers. By default, curl negotiates a TLS 1.3 connection unless explicitly restricted using `--tlsv1.2 --tls-max 1.2`.
<details>
<summary>Logs when Pages is running in GDK and explicit TLS versions not specified in curl</summary>
```sh
$ curl https://127.0.0.1:3012 -H 'Host: t1.vishaltak.com' --verbose --insecure --cipher ECDHE-RSA-CHACHA20-POLY1305
* Trying 127.0.0.1:3012...
* Connected to 127.0.0.1 (127.0.0.1) port 3012 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ECDHE-RSA-CHACHA20-POLY1305
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: C=IN; ST=MH; L=Mumbai; O=GitLab; OU=Create; CN=pages.gdk.test; emailAddress=vtak@gitlab.com
* start date: Apr 7 10:38:07 2022 GMT
* expire date: Apr 7 10:38:07 2023 GMT
* issuer: C=IN; ST=MH; L=Mumbai; O=GitLab; OU=Create; CN=pages.gdk.test; emailAddress=vtak@gitlab.com
* SSL certificate verify result: self signed certificate (18), continuing anyway.
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x12100e600)
> GET / HTTP/2
> Host: t1.vishaltak.com
> user-agent: curl/7.79.1
> accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 200
< cache-control: max-age=600
< content-type: text/html; charset=utf-8
< etag: "7918027bec724fb2a62bcc049eef34f25848f0d4e0aaa5955107e3fb87859dba"
< expires: Fri, 08 Apr 2022 09:03:39 UTC
< last-modified: Tue, 22 Mar 2022 14:11:29 GMT
< vary: Origin
< content-length: 632
< date: Fri, 08 Apr 2022 08:53:39 GMT
...
```
</details>
**Logs when Pages is running in FIPS RHEL**
Curl logs
```sh
$ curl https://127.0.0.1:3012 -H 'Host: t1.vishaltak.com' --verbose --insecure --cipher ECDHE-ECDSA-CHACHA20-POLY1305 --tlsv1.2 --tls-max 1.2
* Trying 127.0.0.1:3012...
* Connected to 127.0.0.1 (127.0.0.1) port 3012 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ECDHE-ECDSA-CHACHA20-POLY1305
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* error:1404B410:SSL routines:ST_CONNECT:sslv3 alert handshake failure
* Closing connection 0
curl: (35) error:1404B410:SSL routines:ST_CONNECT:sslv3 alert handshake failure
```
Pages Server logs
```json
{"level":"info","msg":"http: TLS handshake error from 127.0.0.1:34048: tls: no cipher suite supported by both client and server","time":"2022-04-08T09:10:37Z"}
```
#### `TLS_AES_128_GCM_SHA256` , `TLS_AES_256_GCM_SHA384` and `TLS_CHACHA20_POLY1305_SHA256`
These are TLS v1.3 ciphers.
Curl logs
```sh
$ curl https://127.0.0.1:3012 -H 'Host: t1.vishaltak.com' --verbose --insecure --tls13-ciphers TLS_AES_128_GCM_SHA256 --tlsv1.3
* Trying 127.0.0.1:3012...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 3012 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLS 1.3 cipher selection: TLS_AES_128_GCM_SHA256
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS alert, protocol version (582):
* error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version
* Closing connection 0
curl: (35) error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version
```
Pages server logs
```json
{"level":"info","msg":"http: TLS handshake error from 127.0.0.1:34066: tls: client offered only unsupported versions: [304]","time":"2022-04-08T09:15:42Z"}
```
### What is the expected *correct* behavior?
The HTML should be served while respecting the TLS version and the cipher
### Comments
I think, for TLS 1.2, ciphers `ECDHE-RSA-CHACHA20-POLY1305` and `ECDHE-ECDSA-CHACHA20-POLY1305` are not supported in FIPS mode
**Reason**
```sh
$ openssl ciphers -v -stdname | grep TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 # when run on a RHEL FIPS machine, gives an empty result
$ openssl ciphers -v -stdname | grep TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 # when run on a RHEL FIPS machine, gives an empty result
```
I think TLS v1.3 is not supported in FIPS mode, and that is why the Pages server logs `tls: client offered only unsupported versions: [304]`. `304` refers to TLS v1.3, as can be seen [here](https://cs.opensource.google/go/go/+/refs/tags/go1.18:src/crypto/tls/common.go;l=33).
However, out of the 3 TLS v1.3 ciphers mentioned above, the first two can be seen as available in FIPS RHEL OpenSSL.
```sh
$ openssl ciphers -v -stdname -s -tls1_3 # when run on a RHEL FIPS machine
TLS_AES_256_GCM_SHA384 - TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD
TLS_AES_128_GCM_SHA256 - TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD
TLS_AES_128_CCM_SHA256 - TLS_AES_128_CCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESCCM(128) Mac=AEAD
```
`TLS_CHACHA20_POLY1305_SHA256` is not available in the list
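
As a cross-check against OpenSSL's view, the Go standard library can enumerate the suites the Go TLS stack itself supports; a small sketch:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// Print every cipher suite the Go runtime supports, with the TLS
// versions each one applies to, to compare against OpenSSL's list.
func main() {
	for _, s := range tls.CipherSuites() {
		fmt.Println(s.Name, s.SupportedVersions)
	}
}
```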
#### OpenSSL Version Details
- Local Mac - `LibreSSL 2.8.3`
- GDK (Ubuntu) - `OpenSSL 1.1.1f 31 Mar 2020`
- FIPS Pages(RHEL) - `OpenSSL 1.1.1k FIPS 25 Mar 2021`
If you are using a Mac, you may want to spin up a server (Ubuntu) which has openssl installed, or install a separate openssl on the Mac. The default openssl installed on macOS does have TLS v1.3 ciphers available for some reason.
### Note
While testing this, I've generated certificates for the pages domain (`pages.gdk.test`) and not for the custom domain (`t1.vishaltak.com`) .
<!-- DO NOT CHANGE -->
| 2 |
734,943 | 106,238,762 |
2022-04-07 09:58:59.934
|
Revamp gitlab-pages development docs
|
### Summary
While setting up my gitlab-pages environment I got confused by some information in `doc/development.md`. It was also confusing to have some duplicated information in https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/pages.md.
I propose to have all the information regarding gitlab-pages development in one place (a Single Source Of Truth - SSOT). For that, I:
1. Revamped the `gitlab-pages/doc/development.md` to have all the required information to start a gitlab-pages development environment (this MR)
1. If this MR is approved, I'll open another MR in the GDK repository to point the `GDK/doc/howto/pages.md` to the `gitlab-pages/doc/development.md`.
| 2 |
734,943 | 106,045,796 |
2022-04-04 15:48:48.597
|
Arbitrary protocol redirection
|
### Summary
GitLab pages can be used to redirect to arbitrary protocols in the authentication flow.
### Steps to reproduce
Consider the following URL:
```
https://projects.gitlab.io/auth?domain=mailto://gitlab-com.gitlab.io?body=OMGWTF&state=aaa
```
It will, after the login redirect to `gitlab.com` redirect the user to their mail client.

This might potentially be used, e.g. on mobile devices, to exfiltrate an authentication token via a custom URL handler.
### What is the current *bug* behavior?
Redirect to arbitrary protocols is possible.
### What is the expected *correct* behavior?
Redirect should only be possible to `https` or `http` URLs.
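A minimal sketch of the scheme allow-list such a fix implies; the helper is hypothetical, not the actual gitlab-pages patch:

```go
package auth

import "net/url"

// safeRedirectURL accepts only http(s) URLs as redirect targets,
// rejecting mailto:, javascript:, custom app schemes, and parse failures.
func safeRedirectURL(raw string) bool {
	u, err := url.Parse(raw)
	if err != nil {
		return false
	}
	return u.Scheme == "http" || u.Scheme == "https"
}
```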
| 2 |
734,943 | 105,901,313 |
2022-04-01 16:01:55.060
|
Unable to download large files from gitlab.io
|
### Summary
If you try to download a large file from gitlab.io, the connection is closed after about 37s. This is probably a regression of the recent security fix.
### Steps to reproduce
Try to download a large file over a slow connection, or rate-limit your own connection (`wget --limit-rate` works to reproduce this issue). This affects e.g. https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo. (see https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/-/issues/77) If you try to download the mentioned `codium_1.66.0-1648720116_amd64.deb` (80 MB) and the download takes more than about 37s, the connection is closed and the download is not finished.
## TODO
- [x] change the default to 5 minutes https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/722
- [x] make backports to versions we backported security fix to
- 1.51: https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/723
- 1.54: https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/724
- 1.56: https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/725
- [x] make these parameters configurable in charts and omnibus
- charts: https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2504
- omnibus: https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/6029
- [x] log them in http://gitlab.com/gitlab-org/gitlab-pages/blob/4b1afecbb6ae1886bfd3a31f256909ca2770bce4/internal/config/config.go#L331-L331
- [x] remove the hot fix from https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/merge_requests/1697
- https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/merge_requests/1731
| 3 |
734,943 | 102,683,500 |
2022-02-18 12:21:07.463
|
Remove rate limits feature flags
|
### Summary
We rolled out rate limits for GitLab Pages, so now we can remove the feature flags guarding them.
So we need to:
1. remove the 4 relevant feature flags from https://gitlab.com/gitlab-org/gitlab-pages/-/blob/a8be317a96f2dfb4ab30e338daee2b51833c2322/internal/feature/feature.go#L12-L12
1. remove the `enforce` flag from https://gitlab.com/gitlab-org/gitlab-pages/-/blob/master/internal/ratelimiter/ratelimiter.go#L48, and all tests using it
1. clean up our production config so it no longer mentions these feature flags: https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/blob/96a3bf149c1cd2847f29c3ed666c480e6b8d15b7/releases/gitlab/values/values.yaml.gotmpl#L449 , https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/blob/96a3bf149c1cd2847f29c3ed666c480e6b8d15b7/releases/gitlab/values/gprd.yaml.gotmpl#L97 , and other files, **grep the repository for each of them**
1. remove mentions of feature flags in https://docs.gitlab.com/ee/administration/pages/#rate-limits
| 2 |
734,943 | 102,495,890 |
2022-02-15 17:33:07.002
|
TLS security for GitLab Pages metrics endpoints
|
Refer https://gitlab.com/groups/gitlab-org/-/epics/7479
GitLab components report metrics via Prometheus, and sometimes start a pprof listener to aid live profiling. Either of these may be open and listening when running GitLab in production, as they are on GitLab.com.
1. Inventory which endpoints report to Prometheus
1. Determine whether those endpoints are already, or can be, TLS-secured
1. Secure each unsecured endpoint
~"devops::release" ~"group::release" ~"Category:Pages"
| 5 |
734,943 | 101,609,858 |
2022-01-31 17:47:18.142
|
Intermittent 404s with gitlab-org.gitlab.io due to "domain does not exist"
|
### Summary
@marcel.amirault mentioned that he was seeing intermittent 404s while visiting https://gitlab-org.gitlab.io/gitlab-roulette/ today.
https://log.gprd.gitlab.net/goto/e5236f50-82bc-11ec-a649-b7cbb8e4f62e shows that the `domain does not exist` error pops up quite frequently for this host:

The issue seems distributed across multiple pods:

Curious why these URLs seem to be involved?

https://gitlab-org.gitlab.io/trello-power-up/scripts/api.js should exist.
~"devops::release" ~"group::release" ~"Category:Pages"
| 5 |
734,943 | 100,941,602 |
2022-01-19 22:28:41.956
|
feat: make server shutdown timeout configurable
|
The following discussion from !664 should be addressed:
- [x] @jaime started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/664#note_815813893):
> // TODO: make this timeout configurable
# Proposal
Add a new config flag `server-shutdown-timeout` of type `time.Duration`.
- [x] Add to Pages server
- [x] Add to Omnibus config (example https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/5832/diffs)
- [x] Add to GitLab Charts (example https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2338)
- [x] Document in the admin docs (example https://gitlab.com/gitlab-org/gitlab/-/merge_requests/77969)
| 4 |
734,943 | 99,072,743 |
2021-12-15 10:32:37.139
|
Include correlation ID and remote IP in Sentry tags
|
### Summary
While investigating https://gitlab.com/gitlab-com/gl-infra/production/-/issues/6074, it was hard correlating 50x requests to Sentry errors because the latter don't include things like correlation ID or remote IP. Having those tags (and preferably even more) could aid greatly in tracking down problems.
| 3 |
734,943 | 99,052,899 |
2021-12-15 02:42:07.837
|
refactor: revert back to archive/zip once go1.17 is no longer supported
|
### Summary
Due to a bug in go1.17, we had to introduce a forked version of the `archive/zip` package: https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/646.
This bug is fixed in go1.18, so eventually, we can revert back to using `archive/zip` in place of `gitlab.com/gitlab-org/golang-archive-zip` once support for go1.17 has been dropped.
<!-- DO NOT CHANGE -->
~"devops::release" ~"group::release" ~"Category:Pages"
| 1 |
734,943 | 98,258,008 |
2021-11-10 13:37:42.502
|
Allow customizing the artifacts path ("public") in GitLab Pages
|
### Reasoning
Currently GitLab Pages will only publish built files from a folder named `public` inside the project root.
Except for a few minor Frameworks this requires changing the default behaviour of the framework the user is using:
For example, Next.js uses `.next`, Nuxt and Astro use `dist`, Eleventy uses `_site`, etc... — what all of them have in common is the fact, that the `public` folder (sometimes in the root directory, sometimes not) often has a slightly different use: It stores static public files that don't need to go through the build process.
This means that in order to deploy a site with pages not only does the user need to configure the build path, they also often need to change where files from the *original* `public` folders live.
So the default behaviour of Pages is almost never what the user needs. It almost always is in conflict with the default behaviour of the frameworks. This causes confusion because it's not clearly documented (gitlab#340682).
The most user-centric way to approach this is to refactor GitLab Pages so that it obtains the artifacts folder from the `.gitlab-ci.yml` file and uses whatever has been specified there.
### Proposal
#### The initial idea
The easiest way would be to just **drop** the requirement entirely and do the following:
- If there's only one folder in artifacts, use that as the pages root
- If there are multiple folders, do some educated guessing about which one to use (quoted from the original MR):
> In the case where `artifacts.paths` contains more than one folder, it will check for the presence of any of the following folders. These are the output folders generated by the [most popular frontend frameworks](https://gitlab.com/gitlab-org/incubation-engineering/jamstack/meta/-/issues/12#popular-frameworks), ordered by the popularity of the related framework (but `public` first, because backwards compatibility):
> - `public` (previous GitLab behaviour, Hugo, Gatsby, Svelte)
> - `build` (React)
> - `dist` (Vue, Nuxt.js, Angular, Astro, Vite)
> - `out` (Next.js)
> - `_site` (Eleventy, Jekyll)
The problem with that approach is that it would *change GitLab Pages behavior* for existing pages. If someone had previously uploaded files inside the `pages` job that are not in a `public` directory, those files would start to be exposed at the time the change came into effect.
Although it's rather unlikely that users uploaded artifacts in the pages job they *did not want* to be exposed, this case cannot be excluded, so following the principle of least surprise, we should require the user to opt in to a behaviour like this.
#### The better idea
Let's introduce a new property to the pipeline. One that's specific to the `pages` job. Say, `publish`
```yaml
pages:
script: ...
publish: some/dir
```
So this new `publish` property has a semantic meaning: "This is the folder I want to publish".
In the background this does two things:
1. It behaves as `artifacts.paths` with a single entry. (Pages needs a single root directory anyway, so it doesn't make much sense to allow more than one dir, at least for Pages. The user is allowed to add a classic `artifacts` property if they want to publish artifacts from that repo too, but it would be ignored by the pages server)
2. It causes the GitLab Pages server to treat a folder of that name as the pages root. I guess the best implementation to do this would be via the API, but I'm open to suggestions.
Now, if there's no `publish` property in the pages job definition, we just keep the legacy behaviour: Without a `publish` property, Pages will only ever expose the `public` folder. No surprises for anyone. Users would have to explicitly enable the new behaviour.
| 4 |
734,943 | 96,169,536 |
2021-10-26 14:01:39.855
|
Enable domain-based rate-limiting
|
Choose rate-limit for gitlab.com based on results of https://gitlab.com/gitlab-org/gitlab-pages/-/issues/654 and enable rate-limiting. Remove feature flags for self-hosted version.
| 1 |
734,943 | 96,169,359 |
2021-10-26 13:59:44.935
|
Enable domain-based rate-limiting in the test mode
|
Enable the rate-limiting introduced in https://gitlab.com/gitlab-org/gitlab-pages/-/issues/630 in test-only mode (only report how many requests would be rejected, without actually rejecting them)
| 1 |
734,943 | 96,079,199 |
2021-10-25 11:52:37.359
|
Try to build GitLab Pages with FIPS compliant libraries
|
See https://gitlab.com/gitlab-org/gitlab/-/issues/296017#note_487842290 as reference.
| 2 |
734,943 | 95,676,479 |
2021-10-19 10:01:54.301
|
Wildcard redirects break Let's Encrypt integration
|
From https://gitlab.com/gitlab-org/gitlab/-/merge_requests/72069:
> The Let's encrypt automatic integration for gitlab pages is not working when using wildcard redirects.
>
> I.e. `/* /index.html 200` in _redirects file for pages.
This happens because https://gitlab.com/gitlab-org/gitlab-pages/blob/5e86747b6287381e2f23afe837ede1820876cf8d/internal/acme/acme.go#L33-L33 checks if project has a file in it, and we handle redirects, so our acme middleware doesn't redirect to gitlab.
I see a few ways of fixing this:
1. stop redirecting any `"/.well-known/acme-challenge/*"` files
1. remove this `if domain.ServeFileHTTP(w, r)` check completely. This was originally done to allow users to manually implement their integration with Let's Encrypt. People were doing this before we had our own integration.
1. In the API response for domain add `redirect_for_acme_challenges: true` and replace `if domain.ServeFileHTTP(w, r) {` with `if !domain.RedirectForAcmeChallenges {`
1. In the API response return the actually acme challenges like `{acme_challenges: [{path: "/.well-known/acme-challenge/somehting", value: "somevalue"}]}`. Currently, we rely on the main gitlab instance being public and reachable from Let's Encrypt, so we just redirect to it. If we go with the last option, we can get rid of this requirement.
However, last 2 options will also slow down the Let's Encrypt integration, as we update domain config cache not that often.
Actually, I (@vshushlin) am in favor of the first option: it shouldn't break anything for existing users, as not many people use redirects yet, and it doesn't slow down the Let's Encrypt integration.
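
A minimal sketch of option 1 under assumed names: ACME challenge paths bypass project file serving and `_redirects` handling entirely and go straight to the GitLab redirect:

```go
package handlers

import (
	"net/http"
	"strings"
)

// acmeMiddleware short-circuits ACME challenge paths to the GitLab
// instance before any project files or _redirects rules are consulted.
func acmeMiddleware(gitlabURL string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if strings.HasPrefix(r.URL.Path, "/.well-known/acme-challenge/") {
			http.Redirect(w, r, gitlabURL+r.URL.Path, http.StatusTemporaryRedirect)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```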
| 1 |
734,943 | 95,409,156 |
2021-10-14 09:19:21.675
|
Replace GetCompressedPageFromListener, GetPageFromListenerWithCookie and GetProxiedPageFromListener with GetPageFromListenerWithHeaders
|
In https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/594/diffs#b0462893f1f2c58c31c518d3cfb09feb2cb3dc57_402_402 we introduced GetPageFromListenerWithHeaders.
Now we can replace GetCompressedPageFromListener, GetPageFromListenerWithCookie and GetProxiedPageFromListener with it. They are all very similar and differ only by headers.
~"devops::release" ~"group::release" ~"Category:Pages"
| 1 |