idproject      int64           7.76k – 28.8M
issuekey       int64           675k – 128M
created        stringlengths   19 – 32
title          stringlengths   4 – 226
description    stringlengths   2 – 154k
storypoints    float64         0 – 300
734,943
95,406,544
2021-10-14 08:45:50.412
Make internal/lru metrics optional
### Summary Currently, https://gitlab.com/gitlab-org/gitlab-pages/blob/247bd7ba2fd9139711218c6a42ed03c551f958d9/internal/lru/lru.go#L30-L30 makes the metric parameters required. So in https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/594#note_703313432 we needed to make them required for the rate-limiter too. If these parameters were optional: * it may be easier to test without needing workarounds like https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/594/diffs#16df871fa23e530eb52c7b56cc4ea5ce8a5e5997_108_113 (unless we want to test metrics specifically) * the `ratelimiter.New` interface would be cleaner So I suggest: * Allow `nil` for metrics in https://gitlab.com/gitlab-org/gitlab-pages/blob/247bd7ba2fd9139711218c6a42ed03c551f958d9/internal/lru/lru.go#L30-L30, or use functional options for them. * Convert the metrics in `ratelimiter.New` to functional options as well. **This is very low priority and just a nice-to-have refactoring** <!-- DO NOT CHANGE --> ~"devops::release" ~"group::release" ~"Category:Pages"
1
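The functional-options approach suggested in the issue above can be sketched as follows. This is a minimal illustration, not the actual `internal/lru` code: the `Cache`, `Option`, `WithMetrics`, and `New` names are hypothetical stand-ins, and the real constructor takes different fields.

```go
package main

import "fmt"

// Cache is a hypothetical stand-in for internal/lru; the real type differs.
type Cache struct {
	name           string
	metricsEnabled bool
}

// Option configures a Cache, mirroring the functional-options
// pattern the issue proposes for the metrics parameters.
type Option func(*Cache)

// WithMetrics is a hypothetical option that turns metrics on.
func WithMetrics() Option {
	return func(c *Cache) { c.metricsEnabled = true }
}

// New builds a Cache; metrics stay off unless an option enables them,
// so tests can call New without any metrics wiring.
func New(name string, opts ...Option) *Cache {
	c := &Cache{name: name}
	for _, opt := range opts {
		opt(c)
	}
	return c
}

func main() {
	plain := New("zip-cache")                  // no metrics needed in tests
	metered := New("zip-cache", WithMetrics()) // production wiring
	fmt.Println(plain.metricsEnabled, metered.metricsEnabled)
}
```

With this shape, `ratelimiter.New` would no longer need metric arguments in its signature at all.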
734,943
93,685,391
2021-09-15 04:18:16.657
Allow configuring IP rate limits in Omnibus/CNG
Follow-up from https://gitlab.com/gitlab-org/gitlab-pages/-/issues/490 * Add flags support to omnibus -> 1 MR * Add flags to CNG -> 1 MR * Add documentation to admin guide -> 1 MR
3
734,943
93,685,350
2021-09-15 04:16:36.597
Add rate limiting per domain name
- Add per-domain rate limiting, this could be added much more quickly -> 1-2 MRs - Enable per-source-IP in all environments -> 2 MRs Follow-up from https://gitlab.com/gitlab-org/gitlab-pages/-/issues/490#note_676655191
3
734,943
93,685,291
2021-09-15 04:12:54.113
Enable rate limit per source IP without dropping requests
Follow-up 3 https://gitlab.com/gitlab-org/gitlab-pages/-/issues/490#note_676655191 Add configuration flags to Pages, integrate with the `ratelimit` package and add acceptance tests. The flags disable the rate limiter by default so this needs to be enabled in our environments manually. That means 1 MR for staging and another one for production. Follow approach described in https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/575#note_675965682 where we deploy the code and start collecting metrics as-if-enabled to understand the impact of this functionality. - [ ] pre-prod environments, blocked by https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/2083 - [ ] prod, blocked by https://gitlab.com/gitlab-com/gl-infra/production/-/issues/5706 - [ ] create a dashboard in Grafana or Kibana
3
734,943
93,685,212
2021-09-15 04:11:27.580
GitLab Pages Source IP Rate Limiting
Follow-up 2 https://gitlab.com/gitlab-org/gitlab-pages/-/issues/490#note_676655191 - Add a `ratelimit` package with support for source-IP rate limiting - Add middleware - Add metrics ### Release notes A GitLab Pages site can become quite popular. When too many users try to access it at the same time, this can lead to an outage. In this release, GitLab is introducing rate limiting per source IP, which will make Pages sites hosted on gitlab.com more stable. Users with a self-hosted GitLab instance can also limit the number of concurrent requesters.
3
734,943
93,685,144
2021-09-15 04:10:15.429
Extract lru cache to its own package so it can be reused
Follow-up 1 https://gitlab.com/gitlab-org/gitlab-pages/-/issues/490#note_676655191
1
734,943
92,972,812
2021-09-02 05:11:51.921
Enable FF_ENABLE_PLACEHOLDERS by default
We should enable `FF_ENABLE_PLACEHOLDERS` by default after this feature has been validated for a longer time, perhaps in %"14.4"
1
734,943
92,972,789
2021-09-02 05:10:20.890
Remove FF_ENABLE_REDIRECTS feature flag
`FF_ENABLE_REDIRECTS` has been enabled by default since the feature was merged https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/336 in %"13.4"
1
734,943
92,897,197
2021-08-31 23:35:50.657
Investigate and test sentry integration
### Summary It seems that Pages is not reporting any errors to Sentry at all https://sentry.gitlab.net/gitlab/gitlab-pages/. - We ignore the error when initializing the error tracker in https://gitlab.com/gitlab-org/gitlab-pages/-/blob/master/main.go#L28, so we might be blindly assuming it’s working when in fact it is not. - There is a way to deploy a local instance of Sentry for testing purposes and integrate it with the GDK https://gitlab.com/gitlab-org/gitlab-development-kit/-/issues/748 - Consider implementing some sort of catch-all error reporting. - Verify that the DSN set in https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/blob/master/roles/gprd-base-fe-web-pages.json#L32 is correct
3
734,943
92,242,264
2021-08-19 05:34:27.799
Replace defer with t.Cleanup in tests
At least split into 2 MRs - unit tests - acceptance tests But smaller chunks would be great and easier to review The following discussion from !546 should be addressed: - [ ] @jaime started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/546#note_654658851): (+1 comment) > **suggestion**: thinking if we should use `t.Cleanup` and extract this into the `testhelpers` > > ```diff > git diff --color=always --exit-code > diff --git a/internal/auth/auth_test.go b/internal/auth/auth_test.go > index d03407a52..d2f219c83 100644 > --- a/internal/auth/auth_test.go > +++ b/internal/auth/auth_test.go > @@ -15,6 +15,7 @@ import ( > > "gitlab.com/gitlab-org/gitlab-pages/internal/request" > "gitlab.com/gitlab-org/gitlab-pages/internal/source" > + "gitlab.com/gitlab-org/gitlab-pages/internal/testhelpers" > ) > > func createTestAuth(t *testing.T, internalServer string, publicServer string) *Auth { > @@ -211,7 +212,7 @@ func testTryAuthenticateWithCodeAndState(t *testing.T, https bool) { > require.Equal(t, true, auth.TryAuthenticate(result, r, source.NewMockSource())) > > res := result.Result() > - defer res.Body.Close() > + testhelpers.Close(t, res.Body) > > require.Equal(t, http.StatusFound, result.Code) > require.Equal(t, "https://pages.gitlab-example.com/project/", result.Header().Get("Location")) > diff --git a/internal/testhelpers/testhelpers.go b/internal/testhelpers/testhelpers.go > index 3ec97a79c..46d4932ca 100644 > --- a/internal/testhelpers/testhelpers.go > +++ b/internal/testhelpers/testhelpers.go > @@ -2,6 +2,7 @@ package testhelpers > > import ( > "fmt" > + "io" > "mime" > "net/http" > "net/http/httptest" > @@ -68,6 +69,15 @@ func ToFileProtocol(t *testing.T, path string) string { > return fmt.Sprintf("file://%s/%s", wd, path) > } > > +// Close a closer like response.Body as part of testing.T.Cleanup > +func Close(t *testing.T, c io.Closer) { > + t.Helper() > + > + t.Cleanup(func() { > + require.NoError(t, c.Close()) > + }) > +} > + > // Getwd must return current working directory > func Getwd(t *testing.T) string { > t.Helper() > ``` > > WDYT?
2
734,943
89,642,676
2021-07-02 01:32:09.467
Disable jailing mechanism by default
### Release notes We are disabling the jailing mechanism for GitLab Pages by default. The main reason is that [many users complain that it causes a lot of problems after upgrading to 14.0](https://gitlab.com/gitlab-org/gitlab/-/issues/331699). It also lost its relevance since we stopped serving directly from disk in 14.0. You can still [enable jailing again](LINK TO https://gitlab.com/gitlab-org/gitlab/-/merge_requests/65791/diffs ONCE IT'S MERGED) if disabling it breaks something for you. In that case please reach out to us on the [feedback issue](https://gitlab.com/gitlab-org/gitlab/-/issues/331699). ### Background Starting from GitLab 14.1, the [jailing/chroot mechanism is disabled by default](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/589). If you are using API-based configuration and the new [Zip storage architecture](#zip-storage), there is nothing you need to do. If you run into any problems, please [open a new issue](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/new) and enable the jail again by setting the environment variable: 1. Edit `/etc/gitlab/gitlab.rb`. 1. Set the `DAEMON_ENABLE_JAIL` environment variable to `true` for GitLab Pages: ```ruby gitlab_pages['env']['DAEMON_ENABLE_JAIL'] = "true" ``` Disabling the jail will hopefully fix a number of issues related to DNS resolution when running Pages inside a Docker container and ease the transition to the new Pages architecture. Per @vshushlin's suggestion https://gitlab.com/gitlab-org/gitlab-pages/-/issues/589#note_618738893 - Add a `daemon-enable-jail` flag to Pages, available for source installations - For Omnibus installations, you can enable this by setting the following environment variable in your `/etc/gitlab/gitlab.rb` file ```rb gitlab_pages['env']['DAEMON_ENABLE_JAIL'] = "true" ```
2
734,943
89,577,791
2021-07-01 01:54:39.659
Internal API access drops URL path component
I am using the GitLab CE Docker images for 13.12 and run GitLab Pages on a [separate server](https://docs.gitlab.com/ee/administration/pages/#running-gitlab-pages-on-a-separate-server). My setup still uses disk-based configuration and all pages are accessible. I have `gitlab_pages['inplace_chroot'] = true` and set `external_url` to point to my GitLab instance. Preparing for an upgrade to 14.0 on a test setup, I ran into an issue contacting the GitLab server's API. This seems to be caused by the fact that my GitLab instance is hosted on a URL like `http://server.example.internal/gitlab`. On 13.12, I see the following in the GitLab Pages logs ``` console ==> /var/log/gitlab/gitlab-pages/current <== {"level":"info","msg":"Checking GitLab internal API availability","time":"2021-07-01T01:03:21Z"} {"error":"failed to connect to internal Pages API: Get \"http://server.example.internal/api/v4/internal/pages/status\": dial tcp: lookup maniac.machine.easy on [::1]:53: dial udp [::1]:53: connect: cannot assign requested address","level":"warning","msg":"attempted to connect to the API","time":"2021-07-01T01:03:21Z"} ``` Note the absence of the `/gitlab` path component in the URL! After I copied `/etc/resolv.conf` to an `etc/` directory directly below `gitlab_rails['pages_path']`, that changed to ``` console ==> /var/log/gitlab/gitlab-pages/current <== {"level":"info","msg":"Checking GitLab internal API availability","time":"2021-07-01T01:20:45Z"} {"error":"failed to connect to internal Pages API: HTTP status: 404","level":"warning","msg":"attempted to connect to the API","time":"2021-07-01T01:20:45Z"} ``` For the above GitLab Pages log entry, I see the following in the logs on my GitLab server ``` console ==> /var/log/gitlab/nginx/gitlab_access.log <== 10.11.55.227 - - [01/Jul/2021:01:20:45 +0000] "GET /api/v4/internal/pages/status HTTP/1.1" 404 1576 "" "Go-http-client/1.1" 1.99 ``` confirming that the `/gitlab` path component is ignored. I have tried setting `gitlab_pages['internal_gitlab_server'] = http://server.example.internal/gitlab` but that didn't make a difference. Despite all of the above, I am able to access pages using 13.12. So far so good :thinking: However, when I add `gitlab_pages['domain_config_source'] = "gitlab"` to my 13.12 configuration, all I get is `502` HTTP status error pages :sob: Seeing that 14.0 requires access to the internal API, it seems I will lose GitLab Pages functionality if I upgrade :worried:
1
734,943
88,716,490
2021-06-15 05:50:27.785
Add flag to disable ACME challenge
Currently Pages depends on [`-gitlab-server` to enable support for the ACME challenge middleware](https://gitlab.com/gitlab-org/gitlab-pages/-/blob/39c7efd7a9045cfde6ca230e5586d8ae1842ddb0/app.go#L517). This means it's enabled by default now that the API is on by default. We should have a flag to allow users to disable the middleware if needed. ```go if config.GitLab.Server != "" { a.AcmeMiddleware = &acme.Middleware{GitlabURL: config.GitLab.Server} } ``` ## Proposal - Add `-disable-acme-middleware` flag to `internal/config/flags.go` - Add `disable_acme_middleware` to Omnibus default to `nil`
2
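The proposal in the issue above (a flag that overrides the current "enabled whenever `-gitlab-server` is set" behavior) could be wired roughly like this. This is a hedged sketch: the `configure`, `app`, and `acmeMiddleware` names are hypothetical stand-ins for the real code in `app.go` and `internal/config/flags.go`.

```go
package main

import (
	"flag"
	"fmt"
)

// acmeMiddleware and app are hypothetical stand-ins for the real types.
type acmeMiddleware struct{ gitlabURL string }

type app struct{ acme *acmeMiddleware }

// configure mirrors the current behavior (ACME middleware enabled
// whenever -gitlab-server is set), but lets the proposed new flag
// switch it off explicitly.
func configure(gitlabServer string, disableACME bool) *app {
	a := &app{}
	if gitlabServer != "" && !disableACME {
		a.acme = &acmeMiddleware{gitlabURL: gitlabServer}
	}
	return a
}

func main() {
	server := flag.String("gitlab-server", "https://gitlab.example.com", "GitLab server URL")
	disable := flag.Bool("disable-acme-middleware", false, "disable the ACME challenge middleware")
	flag.Parse()

	a := configure(*server, *disable)
	fmt.Println("acme enabled:", a.acme != nil)
}
```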
734,943
88,715,846
2021-06-15 05:26:15.608
Use internal-gitlab-server in ACME Middleware and Auth package
The ACME middleware and the `auth` package both talk to the GitLab server. Currently, both use the value of `-gitlab-server`. However, this may cause issues in networks with local access only. We should use the value of `config.GitLab.InternalServer` instead, which is set by `-internal-gitlab-server`.
1
734,943
87,172,523
2021-05-17 07:32:33.337
Refactor acceptance tests to use GitLab API stub
As part of removing support for disk source configuration, we need to make sure that the acceptance tests run using the API as well. Most of the current tests use the disk source, so this effort would likely take 2-3 MRs to get through. https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/483#note_576522305
3
734,943
86,252,215
2021-04-30 00:02:04.068
Remove use-legacy-storage from Pages
We decided during our last Pages sync to remove the unnecessary `use-legacy-storage` flag from Pages. The logic will be handled by Omnibus instead https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5993
1
734,943
86,163,535
2021-04-28 19:17:22.895
Add Permissions-Policy: interest-cohort=() header to Pages sites hosted on GitLab.com
### Background This issue is the same as https://gitlab.com/gitlab-org/gitlab/-/issues/327904, but for GitLab Pages instead of GitLab itself. ### Proposal Begin sending the `Permissions-Policy: interest-cohort=()` header by default for Pages sites. For now, we'll scope this change to Pages sites hosted on GitLab.com, and not self-managed Pages. ### Why? - [FLoC has some worrying privacy concerns](https://www.eff.org/deeplinks/2021/03/googles-floc-terrible-idea) - [GitHub Pages recently blocked FLoC](https://github.blog/changelog/2021-04-27-github-pages-permissions-policy-interest-cohort-header-added-to-all-pages-sites/) ### Additional info GitHub only enabled this new header for non-custom sites: > Pages sites using a custom domain will not be impacted I'm not exactly sure why they made this decision. I think we should send this header by default for _all_ Pages sites, unless there's a convincing reason not to. ### Technical proposal * Add `Permissions-Policy: interest-cohort=()` to the [config CustomHeaders](https://gitlab.com/gitlab-org/gitlab-pages/-/blob/4ff61e374a7dbfa51fee91799f11197e0f8d53f7/internal/config/config.go#L251) on startup * Add a boolean flag `-disable-floc-header` (or `-disable-permissions-policy-header`) to https://gitlab.com/gitlab-org/gitlab-pages/-/blob/master/internal/config/flags.go -> when `true` we don't attach that header as above * Add the option to Omnibus * Update admin docs
3
734,943
85,999,746
2021-04-27 05:24:27.363
Handle multiple errors in config validation
The following discussion from !465 should be addressed: - [ ] @hswimelar started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/465#note_560875755): (+1 comment) > This might be a good chance to use non-fatal logging and return an error at the end. This way, multiple problems with the config can be reported all at once, so the user may understand all the work they need to do, rather than having to fix the config one problem at a time. > > https://github.com/hashicorp/go-multierror works really well here, but I'm not a maintainer, so I don't feel very confident recommending you to add a third-party dependency. Maybe something to do in a commit you can easily revert, if you decide to go down this path.
1
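The "collect all problems, fail once" idea above can now be done without a third-party dependency: the standard library's `errors.Join` (Go 1.20+) aggregates errors the same way `go-multierror` does. A minimal sketch, with a hypothetical `config` shape standing in for the real one in `internal/config/`:

```go
package main

import (
	"errors"
	"fmt"
)

// config is a hypothetical stand-in for the real Pages config struct.
type config struct {
	ListenHTTP string
	PagesRoot  string
}

// validate collects every problem instead of failing on the first one,
// using the stdlib errors.Join rather than hashicorp/go-multierror.
func validate(c config) error {
	var errs []error
	if c.ListenHTTP == "" {
		errs = append(errs, errors.New("listen-http must be set"))
	}
	if c.PagesRoot == "" {
		errs = append(errs, errors.New("pages-root must be set"))
	}
	return errors.Join(errs...) // nil when errs is empty
}

func main() {
	if err := validate(config{}); err != nil {
		fmt.Println(err) // both problems are reported at once
	}
}
```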
734,943
84,834,773
2021-04-08 05:21:56.430
Remove chroot logic
We should remove chroot/jail once we fully rollout ZIP, on or after %"14.3" https://gitlab.com/gitlab-org/gitlab/-/issues/326117#note_546346101 This can be removed after `use_legacy_storage` has been removed https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/6166
3
734,943
81,597,674
2021-03-24 23:42:58.086
Reduce cyclomatic complexity of appMain
Job [#1122897267](https://gitlab.com/gitlab-org/gitlab-pages/-/jobs/1122897267) failed for fa7c12070e7ba921c1d5091f735eec4cc638cd6b:
1
734,943
80,527,421
2021-03-09 04:50:20.468
Make mutex in type Gitlab a value receiver
The following discussion from !434 should be addressed: - [ ] @ash2k started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/434#note_522916031): > There is almost never a need to make mutexes pointers. I wonder why it's a pointer here?
1
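The point @ash2k makes above is that a mutex almost never needs to be a pointer field: embed `sync.Mutex` by value and give the enclosing type pointer receivers. A small illustrative sketch (the `client` type is hypothetical, not the actual `Gitlab` struct):

```go
package main

import (
	"fmt"
	"sync"
)

// client is a hypothetical stand-in for the Gitlab type in question:
// the mutex is a value field, not *sync.Mutex, and the methods use
// pointer receivers so the same mutex protects every call.
type client struct {
	mu    sync.Mutex // value, not a pointer
	token string
}

func (c *client) setToken(t string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.token = t
}

func (c *client) getToken() string {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.token
}

func main() {
	c := &client{}
	c.setToken("secret")
	fmt.Println(c.getToken())
}
```

The zero value of `sync.Mutex` is ready to use, so there is nothing a pointer would buy here; it only adds an allocation and a nil-dereference hazard.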
734,943
78,475,986
2021-02-05 08:42:18.476
Allow disabling the local disk type of access
Since Pages can run with fully remote access, we should allow disabling disk access completely. Maybe the simplest way to achieve that would be to specify `-enable-disk=false`, or switch to specifying an array of `-allowed-paths=[]` instead that would supersede `-pages-root`. This is needed to ensure that we fully disable disk access for cases where it is not needed, like CNG. Proposal: - Introduce `-enable-disk`; when false, Pages will not need to scan the `pages-root` directory. However, a `pages-root` value is still needed for daemonizing into a jail or chroot.
2
734,943
78,475,817
2021-02-05 08:38:08.969
Introduce a switch for `disk` serving that would allow this type of access being disabled
Currently, we expose `disk` serving as a way to access data. We should introduce a switch that would disallow usage of that feature with a toggle. This toggle ideally: - should be off by default with %"14.0" - should be off by default for every CNG installation ``` - use-legacy-storage=true ```
2
734,943
77,799,953
2021-01-25 13:54:18.794
Pages "auto" configuration source doesn't work as expected
After `13.7`, some people started to have problems with gitlab-pages. Most common symptom: pages are available 50% of the time. The problem is only present if people have multiple servers running both pages and `gitlab-rails` (see below) ### Consider this setup * client has a gitlab instance "gitlab.example.com" * and multiple servers running both `gitlab-rails` and `gitlab-pages` simultaneously ### How "auto" currently works * During startup, `gitlab-pages` checks if the API is available multiple times until it gets `200 OK` in response. Which it eventually does, because there is a load balancer for "gitlab.example.com" and eventually `gitlab-pages` hits the same server as the one it's running on. * Then `gitlab-pages` remembers that the `API is functioning` and will serve `502` errors if it isn't. ### Workaround * run steps 8-10 in https://docs.gitlab.com/13.7/ee/administration/pages/index.html#running-gitlab-pages-on-a-separate-server * or explicitly set `gitlab_pages['domain_config_source'] = "disk"` as described in https://docs.gitlab.com/ee/administration/pages/#gitlab-api-based-configuration (not recommended, because in %14.0 this option will be removed and you'll need to do the first option anyway) ### Suggested fix Change the "auto" behavior: if we ever fail to access the API with an unauthorized error, fall back to disk (do not do this if `gitlab` is set explicitly).
2
734,943
76,842,878
2021-01-07 00:36:31.450
Re-enable IPv6 listeners in acceptance tests
Re-enable IPv6 listeners in acceptance tests once https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12258 is resolved Blocked by https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12258 Related to #524
1
734,943
76,168,314
2020-12-17 06:45:20.047
Flaky acceptance tests due to port already in use
Since we split the acceptance test execution in https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/403, we now get a bunch of errors due to the ports being already in use. Retrying the jobs one by one in a failing pipeline seems to be enough to make the tests pass. Sample job https://gitlab.com/gitlab-org/gitlab-pages/-/jobs/912690661 ``` msg="could not create socket" error="listen tcp [::1]:37000: bind: cannot assign requested address" ``` A potential solution is to use TCP port 0 and let the OS assign an available port. However, there is no easy way to communicate the assigned port to the Pages acceptance tests. This causes [`WaitUntilRequestSucceeds`](https://gitlab.com/gitlab-org/gitlab-pages/-/blob/21066de52d7f7af759bdf2395694c935110da1bb/test/acceptance/helpers_test.go#L155) to wait until timeout to get a response from the binary currently running. Sample failing pipeline https://gitlab.com/gitlab-org/gitlab-pages/-/pipelines/237350515 --- ## Update 2020-01-07 IPv6 seems to be disabled in some `gitlab-org-docker` runners https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12258.
1
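The "port 0" idea mentioned above works like this in Go: ask the OS for any free port, then read the assigned port back from the listener. The hard part the issue describes is not shown here — handing that port to a separately spawned Pages binary — but the core mechanism is just:

```go
package main

import (
	"fmt"
	"net"
)

// listenAnyPort binds to port 0 so the OS assigns a free port,
// then reports the port actually chosen. This avoids the
// "address already in use" flakiness of hardcoded test ports.
func listenAnyPort() (net.Listener, int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return nil, 0, err
	}
	return l, l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	l, port, err := listenAnyPort()
	if err != nil {
		panic(err)
	}
	defer l.Close()
	fmt.Println("listening on port", port)
}
```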
734,943
76,013,210
2020-12-14 07:02:16.166
Make gitlab client cache configurable
The `internal/source/gitlab/cache/` package is currently semi-hardcoded using the [`defaultCacheConfiguration`](https://gitlab.com/gitlab-org/gitlab-pages/-/blob/5e5ce293e7eb529f4443198b8cd8bb4d9a1ff418/internal/source/gitlab/cache/cache.go#L11). This makes it hard to write acceptance tests that can execute faster. For example https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/393#note_463829311. We should introduce extra flags that will allow configuring the cache's behaviour for resolving a domain's configuration. ## Default configuration ```go var defaultCacheConfig = cacheConfig{ cacheExpiry: 10 * time.Minute, entryRefreshTimeout: 60 * time.Second, retrievalTimeout: 30 * time.Second, maxRetrievalInterval: time.Second, maxRetrievalRetries: 3, } ``` ## Implementation 1. Add the following flags to Pages and override `defaultCacheConfig` - gitlab-cache-expiry -> The maximum time a domain's configuration is stored in the cache (default: 600s) - gitlab-cache-refresh -> The interval at which a domain's configuration is set to be due to refresh (default: 60s) - gitlab-cache-cleanup -> The interval at which expired items are removed from the cache (default: 60s) - gitlab-retrieval-timeout -> The maximum time to wait for a response from the GitLab API per request (default: 30s) - gitlab-retrieval-interval -> The interval to wait before retrying to resolve a domain's configuration via the GitLab API (default: 1s) - gitlab-retrieval-retries -> The maximum number of times to retry to resolve a domain's configuration via the API (default: 3) 2. Move `cacheConfig` to `internal/config/` 3. Add flags to Omnibus 4. Documentation updates The first part of https://gitlab.com/gitlab-org/gitlab-pages/-/issues/507 is done, so we can work on this. ## What do these settings do Users can configure these settings when using [API-based configuration](https://docs.gitlab.com/ee/administration/pages/#gitlab-api-based-configuration) to modify the cache behavior for a domain's resolution. The recommended default values are set inside GitLab Pages and should only be modified if needed. Some examples: - Increasing `gitlab-cache-expiry` will allow items to exist in the cache longer. This setting might be useful if the communication between Pages and GitLab Rails is not stable or the content served by Pages does not change frequently. - Increasing `gitlab-cache-refresh` will reduce the frequency at which GitLab Pages requests a domain's configuration from GitLab Rails. This setting might be useful for content that does not change frequently. - Decreasing `gitlab-retrieval-retries` will report issues between Pages and Rails more quickly. However, they can be transient failures rather than real issues.
4
734,943
75,395,466
2020-12-01 00:52:26.451
Refactor `Error`s to strings that aren't used as `error`
Follow up to https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/399#note_455909360 @jaime started a discussion https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/399#note_455909360: > If we are using them as error messages rather than errors, then we should probably change them to just string constants or even inline them in the log line itself. There's no point in defining an error that is not going to be returned or used as value, WDYT? > > e.g. define the messages as strings > > ```go > const ( > createArtifactRequestErrMsg = "Failed to create the artifact request" > artifactRequestErrMsg = "Failed to request the artifact" > ) > ```
1
734,943
74,943,046
2020-11-25 01:23:27.781
Use CorrelationID middleware in Pages
Pages does not appear to use correlation IDs at present (it should!) The following discussion from !397 should be addressed: - [ ] @andrewn started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/397#note_452561607): (+2 comments) > @jaime would you mind reviewing please? --- 1. Add the correlation ID to Pages, including a `propagate-correlation-id` flag. 1. Add the `gitlab_pages['propagate_correlation_id']` flag to Omnibus 1. Update the Pages admin documentation --- # Propagating the correlation ID Setting `propagate_correlation_id` in Omnibus will allow installations behind a reverse proxy to generate and set a correlation ID on requests sent to GitLab Pages. When a reverse proxy sets the header value `x-request-id`, the value will be propagated in the request chain. Users [can find the correlation ID in the logs](https://docs.gitlab.com/ee/administration/troubleshooting/tracing_correlation_id.html#identify-the-correlation-id-for-a-request).
3
734,943
74,618,720
2020-11-19 10:01:00.981
Refactor config flags and consolidate in internal/config/ package
- Move all config flags to `internal/config/` - Refactor config struct so it can be called from any other package The following discussion from !392 should be addressed: - [ ] @jaime started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/392#note_450695359): > Move all config flags to `internal/config/`
2
734,943
74,416,375
2020-11-16 09:12:14.631
Make `/pages` to be an actual `artifacts-path`
We decided that the `file://` URLs used for ZIP will return a full absolute path as seen from the Rails perspective. However, when doing `chroot` in `pages` we mount `pages-root` under `/pages`. We should instead ensure that in the chroot environment the absolute path used to access a resource is exactly the same, so that the `file://` URL returned by Rails functions exactly the same. This is required for https://gitlab.com/gitlab-org/gitlab-pages/-/issues/485.
1
734,943
73,899,685
2020-11-05 08:32:43.022
Support splat (wildcard) redirects
In https://gitlab.com/gitlab-org/gitlab-pages/-/issues/24, a simple redirect mechanism was implemented https://docs.gitlab.com/ee/user/project/pages/redirects.html. It would be great if we also supported splat (wildcard) redirects. According to the [Netlify docs](https://docs.netlify.com/routing/redirects/redirect-options/#splats): ``` /news/* /blog/:splat ``` This would redirect paths like `/news/2004/01/10/my-story` to `/blog/2004/01/10/my-story`. It would mean a lot for the docs site, since we want to move away from the `/ee` prefix and use something like `/gitlab`. ``` /ee/* /gitlab/:splat ``` ### Release Notes Details to be added
2
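The splat rule described above boils down to prefix matching plus substitution. A minimal sketch, assuming a hypothetical `matchSplat` helper (real `_redirects` parsing handles many more rule forms):

```go
package main

import (
	"fmt"
	"strings"
)

// matchSplat applies one Netlify-style rule like "/news/*" -> "/blog/:splat":
// if the rule's "from" ends in "/*" and the request path shares its prefix,
// the remainder of the path replaces ":splat" in the "to" pattern.
func matchSplat(from, to, path string) (string, bool) {
	if !strings.HasSuffix(from, "/*") {
		return "", false
	}
	prefix := strings.TrimSuffix(from, "*") // "/news/*" -> "/news/"
	if !strings.HasPrefix(path, prefix) {
		return "", false
	}
	splat := strings.TrimPrefix(path, prefix)
	return strings.Replace(to, ":splat", splat, 1), true
}

func main() {
	target, ok := matchSplat("/news/*", "/blog/:splat", "/news/2004/01/10/my-story")
	fmt.Println(target, ok) // /blog/2004/01/10/my-story true
}
```

The docs-site example works the same way: `matchSplat("/ee/*", "/gitlab/:splat", "/ee/user/project/pages/")` yields `/gitlab/user/project/pages/`.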
734,943
73,297,571
2020-10-24 16:05:31.917
Object Storage should have a strict TTFB timeout
We had/have an outage related to GCS used by GitLab Pages: https://prometheus-app.gprd.gitlab.net/graph?g0.range_input=1h&g0.expr=avg(rate(gitlab_pages_httprange_trace_duration_sum%7Brequest_stage%3D%22httptrace.ClientTrace.GotFirstResponseByte%22%7D%5B5m%5D)%2Frate(gitlab_pages_httprange_trace_duration_count%7Brequest_stage%3D%22httptrace.ClientTrace.GotFirstResponseByte%22%7D%5B5m%5D))&g0.tab=0&g1.range_input=2h&g1.expr=sum(gitlab_pages_httprange_open_requests)&g1.tab=0&g2.range_input=2h&g2.stacked=1&g2.expr=sum(increase(gitlab_pages_vfs_operations_total%7Boperation%3D%22Open%22%7D%5B5m%5D))%20by%20(vfs_name%2C%20operation)&g2.tab=0 ### TTFB ![image](/uploads/fd84ed1775b3ae7c32735a9bf8c76cfb/image.png) We clearly see that TTFB skyrockets once we disabled NFS. In regular operation we expect `25ms`; during the GCS outage we were seeing around `20s`, but once we disabled NFS it appears that some requests had a TTFB close to "a few minutes". ### Connections ![image](/uploads/24df853cd766d9b48b65962c0464ab49/image.png) This created a crazy backlog of 6k open connections to GCS. Our timeout on the connection is 30 minutes. ### Summary This outage showed some aspects to improve in how Pages handles timeouts, and what timeouts it should use to ensure sane service recovery and to reduce amplification pressure due to an outage happening elsewhere. ### Proposal Define TTFB to be no longer than 15s. This should ensure that a sane number of requests is opened, and the system can fast-reject instead of hanging. ```golang var httpClient = &http.Client{ // The longest time the request can be executed Timeout: 30 * time.Minute, Transport: httptransport.NewTransportWithMetrics( "httprange_client", metrics.HTTPRangeTraceDuration, metrics.HTTPRangeRequestDuration, metrics.HTTPRangeRequestsTotal, ), } ``` We should configure: - `TLSHandshakeTimeout time.Duration`, i.e. before being able to write the request - `ResponseHeaderTimeout time.Duration`, i.e. close to `TTFB` - `ExpectContinueTimeout time.Duration`, i.e. similar to `TTFB` Or just implement that differently as part of the `RoundTrip` of `httprange` to define a timeout for receiving the response (but not yet fully reading it). More here: https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/
1
734,943
73,291,617
2020-10-24 10:50:23.835
The unavailability of GitLab API makes us fallback to disk-serving
Currently, if the GitLab API is unavailable, we fall back to `disk` serving. This is OK for now, but in a fully cloud-native architecture this should instead return a proper `500`-type error. We should disable `local.Instance()` and return a proper error to the client instead. ```golang func (d *Domain) resolve(r *http.Request) *serving.Request { request, _ := d.Resolver.Resolve(r) // TODO improve code around default serving, when `disk` serving gets removed // https://gitlab.com/gitlab-org/gitlab-pages/issues/353 if request == nil { return &serving.Request{Serving: local.Instance()} } return request } // ServeFileHTTP returns true if something was served, false if not. func (d *Domain) ServeFileHTTP(w http.ResponseWriter, r *http.Request) bool { if !d.HasLookupPath(r) { // TODO: this seems to be wrong: as we should rather return false, and // fallback to `ServeNotFoundHTTP` to handle this case httperrors.Serve404(w) return true } request := d.resolve(r) return request.ServeFileHTTP(w, r) } ```
1
734,943
72,696,799
2020-10-14 08:56:02.169
Allow serving `zip` from a disk `/pages`
Currently, GitLab Pages can serve remotely stored ZIP archives. However, since we want to migrate all our deployments to ZIP, we should support serving locally stored content as well. This is needed to ensure that we can migrate on-premise installations using NFS or local disks to the ZIP architecture.
5
734,943
72,676,917
2020-10-13 23:38:37.891
Fix TestVFSFindOrCreateArchiveCacheEvict flaky test
I found this flaky test locally, I believe this line https://gitlab.com/gitlab-org/gitlab-pages/-/blob/6cba2f2875856a7b4e2d008ca348d7b91098ed35/internal/vfs/zip/vfs_test.go#L120 is causing trouble ``` go test ./internal/vfs/zip -run TestVFSFindOrCreateArchiveCacheEvict -count 10 -failfast --- FAIL: TestVFSFindOrCreateArchiveCacheEvict (0.00s) vfs_test.go:126: Error Trace: vfs_test.go:126 Error: Should not be: &zip.zipArchive{fs:(*zip.zipVFS)(0xc0000b1e90), path:"http://127.0.0.1:60311/public.zip", once:sync.Once{done:0x1, m:sync.Mutex{state:0, sema:0x0}}, done:(chan struct {})(0xc0000b26c0), openTimeout:30000000000, cacheNamespace:"1:", resource:(*httprange.Resource)(0xc00007fe00), reader:(*httprange.RangedReader)(0xc00007ba50), archive:(*zip.Reader)(0xc00007fe80), err:error(nil), files:map[string]*zip.File{"public/":(*zip.File)(0xc000240160), "public/404.html":(*zip.File)(0xc000240370), "public/bad_symlink.html":(*zip.File)(0xc000240790), "public/index.html":(*zip.File)(0xc0002402c0), "public/subdir/":(*zip.File)(0xc000240420), "public/subdir/2bp3Qzs9CCW7cGnxhghdavZ2bJDTzvu2mrj6O8Yqjm3YMRozRZULxBBKzJXCK16GlsvO1GlbCyONf2LTCndJU9cIr5T3PLDN7XnfG00lEmf9DWHPXiAbbi0v8ioSjnoTqdyjELVKuhsGRGxeV9RptLMyGnbpJx1w2uECiUQSHrRVQNuq2xoHLlk30UAmis1EhGXP5kKprzHxuavsKMdT4XRP0d79tie4tjqtfRsP4y60hmNS1vSujrxzhDa":(*zip.File)(0xc0002406e0), "public/subdir/hello.html":(*zip.File)(0xc0002404d0), "public/subdir/linked.html":(*zip.File)(0xc000240580), "public/symlink.html":(*zip.File)(0xc000240630)}} Test: TestVFSFindOrCreateArchiveCacheEvict Messages: a different archive is returned FAIL FAIL gitlab.com/gitlab-org/gitlab-pages/internal/vfs/zip 0.132s FAIL ```
1
734,943
72,631,918
2020-10-13 10:01:55.835
Pages 404s with pages_artifacts_archive enabled
We had a Silver Customer ([Zendesk](https://gitlab.zendesk.com/agent/tickets/176158) - internal only) report that they started getting 404s on their Pages project, but it worked when copying in a new project. That happened a few hours after the `pages_artifacts_archive` rollout percentage was increased. @ayufan turned off the feature flag and we were able to see the Pages again. Relevant [Slack thread](https://gitlab.slack.com/archives/C1BSEQ138/p1602582002162400?thread_ts=1602547809.159500&cid=C1BSEQ138) (internal only)
3
734,943
72,614,208
2020-10-13 05:42:39.952
Improve zip cache refreshing for the same domain
We open a bunch of archives each minute: https://prometheus-app.gprd.gitlab.net/graph?g0.range_input=1h&g0.expr=increase(gitlab_pages_zip_opened%5B5m%5D)&g0.tab=0 From slack: https://gitlab.slack.com/archives/C1BSEQ138/p1602144418115300 ``` kamil Oct 8th at 7:06 PM @jaime I think we are done with LRU for time being. Maybe next get archive refresh to be working? 4 replies kamil 5 days ago Looking at this graph it appears that we open a bunch of archives each minute: https://prometheus-app.gprd.gitlab.net/graph?g0.range_input=1h&g0.expr=increase(gitlab_pages_zip_opened%5B5m%5D)&g0.tab=0 jaime 1 day ago do you mean we’re refreshing too often? kamil 22 hours ago Yes, we each time we refresh the domain via API a new URL is being returned due to changed auth params. This means that we reload each accessed archive every minute. (edited) kamil 22 hours ago Due to caching, holding 2-3 archives for a given domain at a single time. ```
1
734,943
72,377,135
2020-10-08 09:18:48.703
Add vfs type to access log
From [slack comment](https://gitlab.slack.com/archives/C1BSEQ138/p1602147615122100?thread_ts=1602147179.119100&cid=C1BSEQ138): > I also think that it would be helpful for us to extend kibana logs with information on how the file was served (if it was ZIP), so we could ELK grep logs and find unique domains being served.
1
734,943
71,929,438
2020-09-30 09:26:49.550
Cache `Readlink` data
Currently `Readlink` is evaluated on every request. Similarly to [`cacheOffset`](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/461), we should cache symlink targets as well, to ensure that `Readlink` is processed in a predictable time using a minimal number of requests, especially since in some cases we need to perform symlink traversal.
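A minimal sketch of what memoizing `Readlink` results could look like — the type and function names are invented for illustration, not the actual gitlab-pages API:

```go
package main

import "sync"

// readlinkCache memoizes symlink targets so the expensive Readlink call
// runs once per path instead of on every request. Illustrative only.
type readlinkCache struct {
	mu      sync.Mutex
	targets map[string]string
	resolve func(path string) string // the expensive Readlink call
	misses  int                      // how many times resolve actually ran
}

func newReadlinkCache(resolve func(string) string) *readlinkCache {
	return &readlinkCache{targets: map[string]string{}, resolve: resolve}
}

// Readlink returns the cached target, resolving it only on first access.
func (c *readlinkCache) Readlink(path string) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if t, ok := c.targets[path]; ok {
		return t
	}
	c.misses++
	t := c.resolve(path)
	c.targets[path] = t
	return t
}
```

A real implementation would bound the map (e.g. an LRU with a TTL, as proposed for `dataOffset`) instead of letting it grow without limit.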
1
734,943
71,907,954
2020-09-30 02:19:12.835
Improve tooling for local Golang development
Local development is not friendly for Go newcomers, decreasing the likelihood of ~"Community contribution"s - Since Go modules were introduced, `make setup` fails trying to remove `.GOPATH/`. `$GOPATH` is not needed anymore, so we should remove this step ```sh $ make setup rm: .GOPATH: ... rm: .GOPATH/pkg/mod/sourcegraph.com/sqs/pbtypes@v0.0.0-20180604144634-d3ebe8f20ae4/void.pb.go: Permission denied ``` - `make lint` fails with `ERRO Running error: context loading failed: no go files to analyze` - Add `make format` for people who run into linter issues, e.g. https://gitlab.com/gitlab-org/gitlab-pages/-/jobs/764364111
3
734,943
71,658,480
2020-09-25 02:21:21.568
Negative cache errored zip archives
Introduce two cache intervals: * positive cache: long-lived cache, likely prolonged on access * negative cache: short-lived cache, likely not prolonged --- The following discussion from !351 should be addressed: - [ ] @ayufan started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/351#note_418261431): > I believe that we have a TODO item: Should we cache an archive that cannot be read for a long period? Aka. negative cache. > > This works very well for positive cache, but also makes us to cache negative cache (errored archive forever).
1
734,943
71,443,472
2020-09-21 11:54:45.704
Ignore `@ hashed` directory when performing disk scan.
In https://gitlab.com/gitlab-org/gitlab/-/merge_requests/42461 we store the zipped Pages site version in the same top-level directory as other Pages content, but add `@hashed` to avoid name collisions with namespaces. IIRC Pages currently recursively scans all subdirectories in `shared/pages`, so we need to exclude `shared/pages/@hashed`. It may sound irrelevant for .com, since we disabled this disk scan completely, but that's not true for self-managed instances, and once we [enable the feature flag responsible for creating pages deployments](https://gitlab.com/gitlab-org/gitlab/-/issues/245308) this directory will appear on self-managed. This may create a performance problem and maybe some security risks (unlikely though). Pages may just not find a `config.json` file there and ignore this directory automatically, but that needs to be tested. :shrug:
1
734,943
71,430,811
2020-09-21 07:45:04.405
Handle extra headers when serving from compressed zip archive
Handle the header functionality that was lost by switching from `http.ServeContent` to `io.Copy`. Use the Go source for [`http.ServeContent`](https://github.com/golang/go/blob/ef20f76b8bc4e082d5f81fd818890d707751475b/src/net/http/fs.go#L130) as a reference. ## Proposal 1. Handle request headers: - If-Match - If-Unmodified-Since - If-None-Match - If-Modified-Since - ~~If-Range~~ can't use this header with compressed files 2. Set response headers: - Last-Modified ~"backend-weight::2" --- The following discussion from !348 should be addressed: - [ ] @vshushlin started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/348#note_415122700): (+2 comments) > **Question:** from `ServeContent` docs: > > > ...sets the MIME type, and handles If-Match, If-Unmodified-Since, If-None-Match, If-Modified-Since... > > Do we plan to reimplement MIME types and modified-since logic? Do we need follow up issues for them?
3
734,943
71,425,929
2020-09-21 05:08:55.944
Implement zip cache callbacks
The following discussion from !351 should be addressed: - [ ] @ayufan started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/351#note_414831416): > As a follow-up maybe we would have "a kind of callback" on `zipArchive`? > > ```golang > func (z *zipArchive) onCacheAdded() {} > func (z *zipArchive) onCacheRefreshed() {} > func (z *zipArchive) onCacheEvicted() {} > ``` > > We would simply call these methods on these points, and they could update metrics on a will, assuming that if you ever call `onCacheAdded()` the `onCacheEvicted()` will be called as well. And implement tests https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/351#note_418262162
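The proposed hooks could look like the sketch below, with the metric updates reduced to plain counters for illustration (the `zipArchive` fields shown are invented; in Pages they would update Prometheus metrics):

```go
package main

// cacheStats stands in for the Prometheus metrics the real hooks
// would update.
type cacheStats struct{ added, refreshed, evicted int }

// zipArchive is a stub carrying only what the hooks need.
type zipArchive struct{ stats *cacheStats }

// Lifecycle hooks: if onCacheAdded is ever called, onCacheEvicted is
// guaranteed to be called eventually as well.
func (z *zipArchive) onCacheAdded()     { z.stats.added++ }
func (z *zipArchive) onCacheRefreshed() { z.stats.refreshed++ }
func (z *zipArchive) onCacheEvicted()   { z.stats.evicted++ }
```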
1
734,943
71,425,440
2020-09-21 04:42:24.962
Make zip cache configurable
A simple in-memory cache was introduced in https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/351 and https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/364 with some hard-coded values. These should be configurable. ## Proposal * Add configuration flags to Pages - Add a zipCacheConfig type and flags: - `zip-cache-expiration` - `zip-cache-cleanup` - `zip-cache-refresh` - `zip-open-timeout` - ~~`zip-cache-dataoffset-size`~~ - ~~`zip-cache-dataoffset-prune-count`~~ - ~~`zip-cache-readlink-size`~~ - ~~`zip-cache-readlink-prune-count`~~ * Add documentation * Add flag to omnibus --- The following discussion from !351 should be addressed: - [ ] @jaime started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/351#note_414788991): (+2 comments) > **question**: are these values good enough? Should we make them configurable in future iterations?
3
734,943
71,182,750
2020-09-15 08:07:27.075
Cache dataOffset and symlink when requesting zip files
Currently `Readlink` and `dataOffset` are evaluated on every request; we should cache them to lower the latency when serving from zip, and to ensure that `Readlink` is processed in a predictable time using a minimal number of requests, especially since in some cases we need to perform symlink traversal. --- The following discussion from !348 should be addressed: - [ ] @ayufan started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/348#note_412690978): > I wish that we would cache `dataOffset`, since the `file.DataOffset()` each time makes us to fire additional request.
1
734,943
70,849,987
2020-09-08 02:15:41.257
Make httprange http client configurable
In https://gitlab.com/gitlab-org/gitlab-pages/-/issues/448 we introduced an `httprange` package that can download artifacts and supports reading ranges from the response body. The `httpClient` used has a hardcoded timeout, which should be configurable. We should either pass an httpClient when initializing a resource or add some config flags. ## Proposal - Add configuration flags to Pages - Add documentation - Add flag to omnibus - Add flag to charts --- The following discussion from !333 should be addressed: - [ ] @vshushlin started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/333#note_408522039): > Do we already have an issue for that? Can we add a link?
3
734,943
70,848,619
2020-09-08 01:18:31.804
Use versioned templates for security scanners
Some security templates keep changing and they break our pipelines when they are merged to the default branch. We should use a versioned template pointing to a stable branch, e.g. ```yaml include: - https://gitlab.com/gitlab-org/gitlab/-/raw/13-3-stable-ee/lib/gitlab/ci/templates/Security/Dependency-Scanning.gitlab-ci.yml ``` --- The following discussion from !343 should be addressed: - [ ] @jaime started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/343#note_408774219): > We should use versioned templates
1
734,943
70,685,400
2020-09-03 13:08:04.247
TestClientStatusClientTimeout flaky
https://gitlab.com/gitlab-org/gitlab-pages/-/jobs/718191293 ``` === RUN TestClientStatusClientTimeout client_test.go:303: Error Trace: client_test.go:303 Error: "failed to connect to internal Pages API: Get "http://127.0.0.1:42673/api/v4/internal/pages/status": context deadline exceeded" does not contain "Client.Timeout" Test: TestClientStatusClientTimeout --- FAIL: TestClientStatusClientTimeout (0.06s) ```
1
734,943
70,194,664
2020-08-24 05:44:59.085
Remove GitLab API internal polling
Remove polling from Pages implemented in gitlab-pages!304 (merged)
1
734,943
70,178,333
2020-08-24 01:28:39.390
Create an `httprange` package to allow loading content from object storage
Source https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/326/diffs#c697a55849ad2b6da28d7b6de7e8effd7860e40b - `http_range` should create resources that can be read in chunks from a pre-signed URL - Add metrics and traceln
2
734,943
70,174,536
2020-08-24 01:00:09.452
Support range requests for zip
PoC https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/331 Depends on https://gitlab.com/gitlab-org/gitlab-pages/-/issues/448 - Define a `SeekableFile` interface - Extend zip http reader to support range requests
1
734,943
70,053,993
2020-08-20 08:14:55.998
Abstract `VFS` `Root`
Currently, Pages assumes that projects are located in subdirectories like `pages_dir/group/subgroup/project/public/index.html`, while the zip object-storage VFS will just have `/public/index.html` in the archive structure. Based on the work in https://gitlab.com/gitlab-org/gitlab-pages/-/issues/439 we want to abstract some kind of `Root directory` for the project. It will allow us to implement zip serving much more easily.
1
734,943
69,757,262
2020-08-12 10:56:19.871
Support VFS serving from LookupPath
- Add a `VFS` type to `LookupPath` - Extend the serving `Reader` to hold a map of `VFS`'s - Resolve file contents using the specified `VFS` MR https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/329/diffs
1
734,943
69,757,192
2020-08-12 10:53:57.104
VFS implementation for zip archives
Add `zip` implementation of the VFS interface introduced in https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/324 and !325. Depends on https://gitlab.com/gitlab-org/gitlab-pages/-/issues/446 and https://gitlab.com/gitlab-org/gitlab-pages/-/issues/448. - Add a `zip` package that opens archives using `http_range` - source https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/326/diffs#f0eb7ce2a638003d1f5dd027e9b5ce3aa44355e7_0_1 - Implement the `VFS` interface - source https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/326/diffs#21487386d5527c7631ac241c03fb0400516c3839_0_1 - Update metrics acceptance tests for `TestPrometheusMetricsCanBeScraped`
2
734,943
69,101,016
2020-07-28 04:49:05.917
Fix intermittent cache test failures
Job [#658818771](https://gitlab.com/gitlab-org/gitlab-pages/-/jobs/658818771) failed for 43f724fe2886e07cd97799ff0a14a3b98a239ec3: ```plaintext --- FAIL: TestResolve/when_retrieval_failed_because_of_resolution_context_being_canceled (0.00s) cache_test.go:264: Error Trace: cache_test.go:264 cache_test.go:113 cache_test.go:253 cache_test.go:93 cache_test.go:252 Error: An error is expected but got nil. Test: TestResolve/when_retrieval_failed_because_of_resolution_context_being_canceled ``` Another job https://gitlab.com/gitlab-org/gitlab-pages/-/jobs/658818891 ```plaintext === RUN TestResolve/when_item_is_in_long_cache_only cache_test.go:207: Error Trace: cache_test.go:207 cache_test.go:113 cache_test.go:200 cache_test.go:93 cache_test.go:199 Error: Not equal: expected: 0x1 actual : 0x0 Test: TestResolve/when_item_is_in_long_cache_only ``` Retrying a job fixes it https://gitlab.com/gitlab-org/gitlab-pages/-/jobs/658821575
1
734,943
68,932,846
2020-07-24 05:33:13.660
Use exponential back-off for polling status endpoint
The following discussion from !304 should be addressed: - [ ] @jaime started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/304#note_382039256): (+2 comments) > **question**: should we consider using an exponential back-off approach? > > Some sample library https://pkg.go.dev/github.com/cenkalti/backoff/v4?tab=doc#pkg-examples
1
734,943
59,219,708
2020-07-15 13:28:56.899
Adopt zip package from workhorse
Extracted from https://gitlab.com/gitlab-org/gitlab/-/issues/28784#note_375446009
1
734,943
54,558,743
2020-07-12 23:41:30.593
Preprocess zip archives on load and cache file structure
Preprocess the zip archive and cache relevant files: * cache only the relevant list of files, with references for fetching them efficiently * cache into a `map` for constant-time lookup Reference code https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/326/diffs#f0eb7ce2a638003d1f5dd027e9b5ce3aa44355e7_0_34 Depends on https://gitlab.com/gitlab-org/gitlab-pages/-/issues/443 ### Considerations A noticeable change came up during the [profiler demo](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/479#note_423617206) with ~"team::Scalability" where we saw that `readArchive` is allocating a considerable amount of memory (about 28MB for the top 1%) for `docs.gitlab.com` alone in production. [Profiler on GCP](https://console.cloud.google.com/profiler;timespan=1h/gitlab-pages;type=HEAP/inuse_space?project=gitlab-production) (internal) ![Screen_Shot_2020-10-08_at_3.21.17_pm](/uploads/6a0a161802a5c7b5d2c31b5dcbe4be06/Screen_Shot_2020-10-08_at_3.21.17_pm.png) [Allocated memory spike](https://prometheus-app.gprd.gitlab.net/graph?g0.range_input=1w&g0.stacked=1&g0.expr=max(go_memstats_alloc_bytes%7Binstance%3D~%22web-pages-.*%22%7D)&g0.tab=0) after enabling Zip for 5% of Pages projects on 2020-10-08 ~12:30 UTC ![Screen_Shot_2020-10-12_at_10.52.25_am](/uploads/36f50746826ed854b4d2a2019d6cbad9/Screen_Shot_2020-10-12_at_10.52.25_am.png) --- <details> <summary>The following discussion from !299 should be addressed: </summary> - [ ] @ayufan started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/299#note_377210608): > Oh, yes, yes, yes. It would really help to convert a flat list into: > > - `map[string]zip.File`, ideally with `zip.File` having an `offset` as well > > And, drop all the ones that are not within `public/` Cache flat list idea https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/299#note_377210617 </details>
1
734,943
54,062,784
2020-07-07 20:59:11.915
Tech Evaluation of supporting NFS in cloud native deployments as an alternative to object storage for Pages
### Problem to solve Support for object storage in Pages is currently looking to arrive early FY22. This is impacting a few deployments to our `gitlab` helm chart: 1. GitLab.com's [migration to Kubernetes](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/98) (multiple services depend upon Pages - including web and api services) 1. A very large opportunity who requires Pages One way to potentially solve this, is to introduce shared storage into our Kubernetes deployments. ### Evaluation * [ ] Discuss feasibility of this idea with ~"group::distribution", ~infrastructure, and ~Quality * [ ] Ask large opportunity if they would be open to / capable of utilizing shared storage in their Kubernetes cluster * [ ] Implement Pages in the Helm chart and cloud native images (does not look like this is present today) - https://gitlab.com/gitlab-org/charts/gitlab/-/issues/37 * [ ] Add support for a shared `ReadWriteMany` PV across needed services (Pages, Sidekiq, API, and Web) * [ ] Run a 50k architecture load test on a cloud native deployment with Pages (will require utilizing something like Rook, Portworx, etc. to provide ReadWriteMany support) * [ ] Strongly consider utilizing this architecture on GitLab.com, so we can move forward with k8s migration/dogfooding, as well as confirm this Pages architecture works at scale before recommending to our customers * [ ] Launch official support for Pages in Helm charts using shared storage
1
734,943
53,851,171
2020-07-03 01:22:15.708
Add metrics for zip serving
Zip serving package implements the `type Serving interface`. We should add some metrics to serve from zip archives similar to https://gitlab.com/gitlab-org/gitlab-pages/-/issues/365 so that we can track and compare serving times from archives vs disk. - ZipServingOpenArchivesTotal - ZipServingFilesPerArchiveCount https://gitlab.slack.com/archives/C1BSEQ138/p1601113581019900 > I think we should for OS access have a histogram of timings of different request stages, like: connection, tls handshake, response write, response to the first byte, response read. This would allow us to understand how responsive is OS on different stages. (edited) >Also, for caches we should have hit/miss metric, and amount of cache entries :slightly_smiling_face:
1
734,943
53,850,876
2020-07-03 01:04:41.782
Tech Evaluation: Extract the Reader logic from disk serving into its own package so it can be reused
Serving from disk uses the [`type Reader struct`](https://gitlab.com/gitlab-org/gitlab-pages/-/blob/master/internal/serving/disk/reader.go#L19) to try to find and resolve paths from the file system structure. This logic can be reused to serve content from zip archives https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/299/diffs#580d90130ae2083bae1a5eda81a8947e0b57d443_0_51
1
734,943
35,363,254
2020-06-03 07:09:06.761
root config contains unknown keys: license_management
Pipeline failed `root config contains unknown keys: license_management` https://gitlab.com/gitlab-org/gitlab-pages/pipelines/152290252 `license_management` has been deprecated https://gitlab.com/gitlab-org/gitlab/-/issues/14624 We need to update to `license_scanning`
1
734,943
35,168,405
2020-05-29 11:57:49.820
Remove eslint sast
For some reason, eslint-sast decided that we need it, and now pipeline is passed with warnings :see_no_evil: https://gitlab.com/gitlab-org/gitlab-pages/pipelines/150886980
1
734,943
34,952,382
2020-05-25 02:07:10.975
Keep track of dev dependencies in go.mod
When running `make setup`, the dev tools will be installed into the project's local `.GOPATH`. However, this overwrites the `go.mod` and `go.sum` files with a reference to those tools. The `make deps-check` rule will fail **if the changes are committed by mistake**. Using a `tools.go` file is a common pattern to keep track of dev dependencies inside the `go.mod` file. The [release-cli tools.go](https://gitlab.com/gitlab-org/release-cli/-/blob/master/cmd/release-cli/tools.go) has an example of how to do it. We also reference https://marcofranssen.nl/manage-go-tools-via-go-modules/: > To ensure my tool dependencies are not removed and can leverage the Go Modules, I create a file tools.go. In this file I will list all my tool dependencies using an import statement. To do: - Add a file called `tools.go` to the root directory and add the dev/tool dependencies. - `make deps-check` will add an entry to `go.mod`, consider version locking to a desired release. - Commit the changes.
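A minimal `tools.go` along those lines — the tool listed is just an example; list whichever dev dependencies `make setup` installs:

```go
//go:build tools
// +build tools

// This file is never compiled into the binary; its blank imports only
// exist so `go mod tidy` keeps the dev-tool dependencies in go.mod.
package main

import (
	_ "github.com/golangci/golangci-lint/cmd/golangci-lint"
)
```

The `tools` build tag keeps the file out of normal builds while still making the imports visible to the module tooling.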
1
734,943
34,867,840
2020-05-22 02:10:55.273
SAST scanning fails intermittently
Example of a job where the SAST scanning fails Job [#562578827](https://gitlab.com/gitlab-org/gitlab-pages/-/jobs/562578827) failed for 98c479549ef6e4017fa052a4d5f19535e70b6d2e: ``` docker: error during connect: Post http://docker:2375/v1.40/containers/create: dial tcp: lookup docker on 169.254.169.254:53: no such host. ``` From [Slack thread](https://gitlab.slack.com/archives/CETG54GQ0/p1590023829443200) > @fcatteau license-scanning works fine so this might be related to sast, and in that case the legacy SAST orchestrator (Docker-in-Docker) >by the way, the CI configuration implicitly enables the Docker-in-Docker orchestrator for SAST even though the no-DinD is now the default https://gitlab.com/gitlab-org/security/gitlab-pages/-/blob/master/.gitlab-ci.yml#L63 see https://gitlab.com/gitlab-org/gitlab/-/issues/218541 >the important difference between sast and license-scanning is that the former explicitly calls docker run whereas the latter sets the image.name in its job definition. this might explain why License Scanning works. I'm able to pull registry.gitlab.com/gitlab-org/security-products/sast:2 though > I suggest you override the rules of the *-sast jobs triggered for this project, and don't override sast anymore. could you try that in a MR? also, I suggest you share this with #g_secure-static-analysis > actually you only need to override secrets-sast and gosec-sast . see https://gitlab.com/gitlab-org/security/gitlab-pages/-/jobs/548248714#L192
1
734,943
34,789,512
2020-05-20 07:56:14.188
Add Dangerbot to Pages
Pages does not currently have a Dangerbot and therefore does not suggest reviewers for MRs. In theory any engineer working on Golang should be able to do reviews, so there would be a large pool of reviewers. The maintainer pool in Pages is quite small presently, but this is [being worked on](https://gitlab.com/gitlab-com/www-gitlab-com/-/issues/7701), with the intention of developing more maintainers within ~"group::release". Additional reviewers would be added to `team.yml` ```yml projects: gitlab-pages: reviewer ```
2
734,943
34,562,513
2020-05-14 15:09:28.045
Tech Evaluation for option B of Serve custom 404.html file for namespace domains
https://gitlab.com/gitlab-org/gitlab-pages/-/issues/183 Open the research issue for implementing approach `B` and prioritize it. It's really tricky to implement, and basically only `custom 404s for namespace level projects, e.g. username.gitlab.io` are broken; they are not broken on custom domains or on `username.gitlab.io/myprojectname`. Tasks * [ ] Update proposal for linked issue
1
734,943
34,133,160
2020-05-05 00:40:00.985
Use custom .golangci.yml file with improved lint rules
The following discussion from !264 should be addressed: - [ ] @krasio started a [discussion](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/264#note_333342106): (+2 comments) > Is this to avoid name clash with the `client` package? --- We use [automatic linting](https://docs.gitlab.com/ee/development/go_guide/#automatic-linting) via `golangci-lint` with a default set of relaxed rules. To improve the code style we should add a custom `.golangci.yml` file enabling a set of useful linters. We can use [GitLab's Runner `.golangci.yml`](https://gitlab.com/gitlab-org/gitlab-runner/-/blob/master/.golangci.yml) for consistency or use as a base for the set of rules we want to apply to Pages. cc @krasio @vshushlin @sean_carroll --- ## Linters From [GitLab's Runner `.golangci.yml`](https://gitlab.com/gitlab-org/gitlab-runner/-/blob/master/.golangci.yml) | Linter | Enable in iteration # | | ------ | :--------------------:| | bodyclose | 2 | | deadcode | 2 | | dogsled | 2 | | goconst | 1 | | gocyclo | 1 | | goimports | 1 | | golint | 1 | | gosec | 1 | | gosimple | 1 | | govet | 1 | | ineffassign | 2 | | misspell | 2 | | structcheck | 2 | | typecheck | 2 | | unconvert | 2 | | unused | 2 | | varcheck | 2 | | whitespace | 2 | ### Other linters to consider List of supported [linters](https://golangci-lint.run/usage/linters/) by `golangci-lint` | Linter | Description | | ------ | :----------:| | wsl | improve empty line spacing (annoying but makes the code more readable) | | interfacer | Linter that suggests narrower interface types | | lll | long lines | | unparam | Reports unused function parameters | | gocritic | The most opinionated Go source code linter | | gochecknoinits | This will help with #342 | --- ## MRs - [x] Add .golangci.yml linter configuration !286 - [x] Enable more linters https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/287
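An iteration-1 `.golangci.yml` based on the table above could look like this (a sketch; the `run` settings are placeholders):

```yaml
# Iteration 1: the subset of linters marked "1" in the table above.
run:
  timeout: 5m
linters:
  disable-all: true
  enable:
    - goconst
    - gocyclo
    - goimports
    - golint
    - gosec
    - gosimple
    - govet
```

Iteration 2 would then extend the `enable` list with the remaining linters from the table.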
2
734,943
34,001,444
2020-04-30 13:59:36.861
Remove `gitlab-server` fallback to `auth-server` and `artifacts server`
Currently `auth-server` and `artifacts-server` may be used instead of the `gitlab-server` command line flag. That was done for a seamless upgrade. Deprecation was announced, so we can just remove it in %"14.0" From @igorwwwwwwwwwwwwwwwwwwww's comment https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/270#note_334218639 : > Alternatively, I think it might be good to remove this fallback logic. > > From an operational perspective, I found it very surprising that `gitlab-server` inherits from both auth server as well as artifacts server. > > I would find the inverse *much* more intuitive. But even having to specify all the params would be acceptable. > > So if we can verify that the main consumers of this (probably just omnibus?) are always specifying `gitlab-server` explicitly, it may be worth simply requiring it to be specified without fallback. We need to: - [x] Mark `auth-server` as deprecated in Omnibus (might already be done) - [x] Make a release deprecation post. draft: ``` We're changing the behaviour of some configuration options of GitLab Pages. If you run GitLab Pages via Omnibus based package, no action required. Otherwise, make sure to set [`gitlab-server` parameter and remove `auth-server`](https://docs.gitlab.com/ee/administration/pages/). ``` Later we may consider removing `artifacts-server`, using `gitlab-server` (or better: the new parameter introduced in https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/276) and appending the path.
2
7,603,319
80,284,911
2021-03-04 21:55:13.432
Meltano Invoke/Select does not invalidate catalog cache after reinstalling a tap
<!--- Please read this! Before opening a new issue, make sure to search for keywords in the issues filtered by the "regression" or "bug" label and verify the issue you're about to submit isn't a duplicate. If you are submitting an issue with a tap, please include: - account details - target details - entities selected with meltano select (if you have selected any entities), as the bug may be related to a specific entity - the full elt command you are running - full output of the meltano elt command. Logs can get pretty long, so you can add the full log as a snippet in the Meltano project and add a link in the issue. ---> ### What is the current *bug* behavior? Taps are using cached properties for `meltano invoke/select` even if the tap's properties have changed after `meltano install`. ### What is the expected *correct* behavior? When running `meltano install`, a tap's properties cache should be invalidated so that `meltano invoke/select` will effectively "rediscover" the tap's properties. ### Steps to reproduce ``` # install all taps 1. meltano install 2. meltano invoke tap-custom-tap # update tap-custom-tap repo with some new schema properties 3. meltano install tap-custom-tap 4. meltano invoke tap-custom-tap ``` ### Relevant logs and/or screenshots After running `meltano install extractor tap-s3-toast` ``` > meltano --log-level=debug invoke tap-s3-toast [2021-03-04 21:28:53,618] [1|MainThread|root] [DEBUG] Creating engine <meltano.core.project.Project object at 0x7f510102d160>@sqlite:////projects/.meltano/meltano.db [2021-03-04 21:28:54,642] [1|MainThread|root] [DEBUG] Created configuration at /projects/.meltano/run/tap-s3-toast/tap.config.json [2021-03-04 21:28:54,643] [1|MainThread|root] [DEBUG] Could not find tap.properties.json in /projects/.meltano/extractors/tap-s3-toast/tap.properties.json, skipping.
[2021-03-04 21:28:54,643] [1|MainThread|root] [DEBUG] Could not find tap.properties.cache_key in /projects/.meltano/extractors/tap-s3-toast/tap.properties.cache_key, skipping. [2021-03-04 21:28:54,643] [1|MainThread|root] [DEBUG] Could not find state.json in /projects/.meltano/extractors/tap-s3-toast/state.json, skipping. [2021-03-04 21:28:54,643] [1|MainThread|meltano.core.plugin.singer.tap] [DEBUG] Using cached catalog file ``` ### Possible fixes According to @DouweM and my [Slack conversation](https://meltano.slack.com/archives/C013EKWA2Q1/p1614892532050300), `meltano invoke/select` should have some logic to invalidate caches. Other options are removing the cache when running `meltano install` either by default or with an optional cli flag. ### Further regression test _Ensure we automatically catch similar issues in the future_ - [ ] Write additional adequate test cases and submit test results - [ ] Test results should be reviewed by a person from the team
4
7,603,319
79,729,839
2021-02-24 16:52:39.171
Add Superset as a plugin
- https://superset.apache.org/ - https://github.com/apache/superset - https://pypi.org/project/apache-superset ```bash meltano add analyzer superset meltano invoke superset db upgrade # Run automatically? meltano invoke superset fab create-admin meltano invoke superset init # Run automatically? meltano invoke superset run ``` Meltano can manage Superset configuration (https://superset.apache.org/docs/installation/configuring-superset) by allowing values to be set for the keys in https://github.com/apache/superset/blob/master/superset/config.py, automatically generating `superset_config.py`, and pointing Superset there by using the `SUPERSET_CONFIG_PATH` env var. Users should also be able to set `SUPERSET_CONFIG_PATH` (or `meltano config superset set config_path <path>`) themselves to use their own config file. Ideally, Meltano would also be able to inject database connection strings corresponding to loaders directly into Superset so that these don't need to be managed in two places: https://superset.apache.org/docs/databases/installing-database-drivers, https://superset.apache.org/docs/databases/postgres. Possibly through the `DB_CONNECTION_MUTATOR` setting? https://github.com/apache/superset/issues/9045
8
7,603,319
79,000,530
2021-02-12 20:43:52.805
Let plugin `pip_url` take into account Python version
The `pip_url` for `airflow` is currently `apache-airflow==1.10.14 --constraint https://raw.githubusercontent.com/apache/airflow/constraints-1.10.14/constraints-3.6.txt`, using a constraints file built specifically for Python 3.6, regardless of the version that's actually used. With Airflow 1.10.14, this is OK, because the constraints file is actually valid on 3.7 and 3.8 as well, but this is not the case for Airflow 2.0.1's https://raw.githubusercontent.com/apache/airflow/constraints-2.0.1/constraints-3.6.txt. This can be seen in https://gitlab.com/meltano/meltano/-/merge_requests/2032 and pipelines https://gitlab.com/michelrado/meltano/-/jobs/1026252711 (3.7) and https://gitlab.com/michelrado/meltano/-/jobs/1025856502 (3.8). One way to solve this would be to let a `PYTHON_VERSION` environment variable be referenced from `pip_url`, like so: ```yaml pip_url: 'apache-airflow==2.0.1 --constraint https://raw.githubusercontent.com/apache/airflow/constraints-2.0.1/constraints-${PYTHON_VERSION}.txt' ```
4
7,603,319
78,832,542
2021-02-10 16:46:47.822
Document how to manage incremental replication state without a persistent system database
As I wrote on Slack:

>>>
Note that there's already a workaround for state management with ephemeral system databases: since `meltano elt` (and `meltano schedule run`) can take a `--state` argument to run with a specific state, and can be run with `--dump=state` to dump the current state instead of running the underlying pipeline, a serverless ELT wrapper can:

1. download the state from S3
2. run the pipeline with `--state=<downloaded state>`
3. dump the current state (from the local ephemeral system database) to a file with `--dump=state > new-state.json`
4. upload the new state to S3
>>>
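The four steps above could be scripted roughly like this (a sketch only; the S3 URI, extractor/loader names, and the `state_workaround_commands` helper are placeholders to illustrate the shape of the wrapper, not a documented recipe):

```python
def state_workaround_commands(extractor: str, loader: str, s3_uri: str) -> list[str]:
    """Build the shell command sequence for the ephemeral-system-database
    state workaround described above. Names and URIs are placeholders."""
    return [
        # 1. download the state from S3
        f"aws s3 cp {s3_uri} state.json",
        # 2. run the pipeline with the downloaded state
        f"meltano elt {extractor} {loader} --state=state.json",
        # 3. dump the resulting state from the local ephemeral system database
        f"meltano elt {extractor} {loader} --dump=state > new-state.json",
        # 4. upload the new state back to S3
        f"aws s3 cp new-state.json {s3_uri}",
    ]

cmds = state_workaround_commands("tap-gitlab", "target-jsonl", "s3://my-bucket/state.json")
```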
4
7,603,319
77,710,021
2021-01-22 22:00:04.111
Pipelines UI should let an arbitrary cron expression be chosen as a schedule interval
`meltano schedule` already supports arbitrary cron expressions, e.g. `0 */6 * * *` to mean "every 6 hours": https://meltano.com/docs/command-line-interface.html#schedule

The UI should support this as well, both when creating a new pipeline and editing an existing one. At the same time, we should show these intervals correctly: https://gitlab.com/meltano/meltano/-/issues/2527

Update (2021-11-29): To provide a good user experience, I (AJ) think we should also preview the "next 10 occurrences" (or similar) so the user can ensure the cron expression is valid and meets their intended schedule.
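The validation half of that UX can be sketched without any scheduler dependency (this is an assumption-laden toy: standard 5-field numeric cron only, deliberately ignoring names like `MON` and macros like `@hourly`, which the real UI would also need to accept):

```python
import re

# Allowed numeric ranges for the 5 cron fields:
# minute, hour, day-of-month, month, day-of-week.
CRON_RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 7)]

# A single list element: "*", "N", or "N-M", optionally with a "/step".
FIELD_RE = re.compile(r"^(\*|\d+(-\d+)?)(/\d+)?$")

def is_valid_cron(expression: str) -> bool:
    """Very rough 5-field cron validator (sketch only)."""
    fields = expression.split()
    if len(fields) != 5:
        return False
    for field, (lo, hi) in zip(fields, CRON_RANGES):
        for part in field.split(","):  # each field may be a comma-separated list
            match = FIELD_RE.match(part)
            if not match:
                return False
            numbers = [int(n) for n in re.findall(r"\d+", match.group(1))]
            if any(n < lo or n > hi for n in numbers):
                return False
    return True
```

A "next 10 occurrences" preview would sit on top of a real cron library rather than this sketch.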
8
7,603,319
77,189,415
2021-01-13 22:09:47.612
Remove flakehell now that it's no longer being maintained
https://github.com/life4/flakehell was added in https://gitlab.com/meltano/meltano/-/merge_requests/1970, but the maintainer has now archived the project: https://github.com/wemake-services/wemake-python-styleguide/issues/1817. Since it's no longer being maintained, we should move away from it and start using `flake8 --diff` directly. The docs at https://wemake-python-stylegui.de/en/latest/pages/usage/integrations/legacy.html will likely come in handy as well.
4
7,603,319
76,599,958
2020-12-30 23:23:54.256
Reuse parent plugin installation directory using `inherit_from` and matching `pip_url` (prevent venv duplication)
Original Description

<details><summary>Click to expand</summary>

When a plugin inherits from another, both get their packages installed into their own venvs at `.meltano/{type}s/{name}` (e.g. `.meltano/extractors/tap-gitlab` and `.meltano/extractors/tap-gitlab--inherited`), even though they could reuse the same package on-disk, which would speed up `meltano install` and reduce the installation footprint.

One way to accomplish this would be to have plugins that share a `pip_url` share an installation directory, by using `.meltano/{type}s/{pip_url}` as the path, with `pip_url` hashed or transformed to a valid directory name. The resulting dirnames are pretty unwieldy though, and this would make it much harder to find a given plugin's installation directory just by browsing the filesystem and looking for its name. Adding a command to print the path (https://gitlab.com/meltano/meltano/-/issues/2337) would help, but there's a cost.

Alternatively, we could have plugins fall back on their parent's installation dir if their `pip_url`s match, so that we'd only have `.meltano/extractors/tap-gitlab` until `tap-gitlab--inherited` changes its `pip_url` and gets a `.meltano/extractors/tap-gitlab--inherited`.

Note that the directory names should always correspond to plugins _in the project_, so that names are known to be unique and map 1:1 to `pip_url`s. Plugins inheriting directly from a discoverable plugin (e.g. `tap-postgres--db-one` and `tap-postgres--db-two` both inheriting from `tap-postgres`, without a plugin by that name existing in the project) would not use their parent plugin's name for the installation directory, since the name could refer to multiple variants with their own `pip_url`s. This approach is consistent with the current behavior, where plugin dir names match project plugin names, with the addition of inheritance (unless `pip_url` is overridden).

I think the limitation that it doesn't apply to plugins that don't share a common ancestor in the project, but do share a `pip_url`, is acceptable.

</details>

## Updated Summary (2022-03-03)

There are a few paths forward on this which each have their own pros/cons. There are also a couple of related issues that make the matter overall more complex:

- #3068+
- #2701+

## Simplest solution, using `inherits_from`

The simplest solution is to use the `inherits_from` option as a trigger for leveraging the same installation `venv_path` internally within Meltano. This rule would apply _only_ if the inheriting plugin does not also define its own `pip_url`. If `pip_url` is also defined _and is different_, the `venv_path` for that plugin would be based only on its own name rather than its parent's (meaning, a unique venv).

Pseudocode:

```py
@property
def venv_name(self) -> str:
    """Get the venv name this plugin should use."""
    if not self.inherits_from:
        # No parent. Use a unique venv per plugin.
        return self.plugin_name
    if not self.pip_url or self.inherits_from.pip_url == self.pip_url:
        # Use the parent's venv. Plugin is inheriting and there's no difference in pip_url.
        return self.parent.name
    # Default to unique venv per plugin.
    return self.plugin_name
```

Pros:

- It works automatically with users' existing project definitions.
- It speeds up the installation for most projects if they use `inherits_from` at least once in their project.

Cons:

- This would not solve for #3068 or #2701.
- It doesn't solve for detecting changes in `pip_url`. (#2701)
- It may or may not be able to solve for inheritance between plugins of different types, for instance: the scenario described in #3068 where it is desirable for `dbt` and `sqlfluff` to share a venv.
  - This is tricky because `inherits_from` was never imagined to solve for inheritance across a `transformer` plugin, for instance, and a `utility` plugin.
  - In theory, this too could be made to work, but even so, the other implications of `inherits_from` seem broader than would be applicable to diverse utilities of fundamentally different functions and configuration options.
- It doesn't solve for reconciling plugin names with their venvs if a plugin is renamed within `meltano.yml`.
- It doesn't solve for `pip_url`s which just _happen_ to be identical. (Not a common use case anyway.)

## Possible resolution strategy

In sequence, this might get us where we want to go:

1. Solve for the simplest `inherits_from` cases, as described above. This would probably solve for 90% of the venv duplication issues for an average project.
2. To solve for the "pip_url drift" issue described in #2701, leave a marker file within the venv path during installation containing the `pip_url` text. Then check the `pip_url` against the marker file on each execution. If it differs from the current value of `pip_url`, raise a warning or error that prompts the user to reinstall.
3. To resolve the complex venv sharing use cases described in #3068, expand the spec so that plugins can "inherit" or "share" _only_ the venv of another plugin, without having to inherit all the other types of functionality, such as config, commands, etc. (More details on alternatives are in #3608, and presumably those solutions would stack with the `inherits_from` and 'pip_url drift detection' in the above two issues.)
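The "pip_url drift" marker-file check could look roughly like this (a sketch; the marker filename and helper names are assumptions, not Meltano internals):

```python
from pathlib import Path

MARKER_NAME = ".meltano_pip_url"  # hypothetical marker filename

def record_pip_url(venv_path: Path, pip_url: str) -> None:
    """After installation, remember which pip_url produced this venv."""
    (venv_path / MARKER_NAME).write_text(pip_url)

def pip_url_has_drifted(venv_path: Path, current_pip_url: str) -> bool:
    """On each execution, compare the current pip_url to the recorded one.
    A missing marker is treated as drift so a reinstall is prompted."""
    marker = venv_path / MARKER_NAME
    if not marker.exists():
        return True
    return marker.read_text() != current_pip_url

# Demonstration in a throwaway directory:
import tempfile
venv = Path(tempfile.mkdtemp())
record_pip_url(venv, "tap-gitlab==1.0")
unchanged = pip_url_has_drifted(venv, "tap-gitlab==1.0")
changed = pip_url_has_drifted(venv, "tap-gitlab==2.0")
```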
8
7,603,319
76,434,934
2020-12-23 19:06:21.486
Health endpoint requires auth
The healthcheck endpoint (api/v1/health) always returns a 401 when requested from curl.
4
7,603,319
76,109,014
2020-12-16 00:40:22.011
Environment Variable Substitution doesn't work with array values
### What is the current *bug* behavior?

Environment variables can't be used in the `meltano.yml` file for arrays.

### What is the expected *correct* behavior?

The yaml parser should traverse arrays to substitute environment variables. It would also be nice to add custom yaml variable support to the parser (e.g. `custom_var: var` -> `database: '{{ custom_var }}'`).

### Steps to reproduce

- In a `meltano.yml` file, try to use an environment variable in an array.

### Relevant logs and/or screenshots

```yaml
plugins:
  extractors:
  - name: tap-mysql
    variant: transferwise
    pip_url: pipelinewise-tap-mysql
    config:
      host: host
      user: root
      database: ${DB_NAME}
    select:
    - ${DB_NAME}-users.id
    - ${DB_NAME}-users.email
    metadata:
      ${DB_NAME}-users:
        replication-method: FULL_TABLE
```

`database: ${DB_NAME}` works because it's an object value. But it doesn't work in the `select` because it's an array.

### Possible fixes

Not sure, sorry.

### Further regression test

_Ensure we automatically catch similar issues in the future_

- [ ] Write additional adequate test cases and submit test results
- [ ] Test results should be reviewed by a person from the team
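The fix amounts to recursing into lists as well as mappings during substitution. A sketch of the traversal, independent of Meltano's actual parser internals (the `substitute_env` helper is illustrative):

```python
import os
from string import Template

def substitute_env(node, env=None):
    """Recursively expand ${VAR} references in strings found in nested
    dicts *and* lists -- the list case is the one missed in the bug report.
    Dict keys are expanded too (needed for `metadata: ${DB_NAME}-users:`)."""
    env = os.environ if env is None else env
    if isinstance(node, str):
        return Template(node).safe_substitute(env)
    if isinstance(node, dict):
        return {substitute_env(k, env): substitute_env(v, env) for k, v in node.items()}
    if isinstance(node, list):
        return [substitute_env(item, env) for item in node]
    return node  # numbers, booleans, None pass through unchanged

config = {
    "database": "${DB_NAME}",
    "select": ["${DB_NAME}-users.id", "${DB_NAME}-users.email"],
}
resolved = substitute_env(config, {"DB_NAME": "analytics"})
```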
8
7,603,319
74,578,751
2020-11-18 16:40:53.090
Let project plugin definitions and config be defined in multiple individual YAML files instead of `meltano.yml`
As I wrote in https://gitlab.com/meltano/meltano/-/issues/2206#note_388496082:

>>>
I think we could also allow users to define their project's taps and targets in their own YAML files, instead of having everything live in `meltano.yml`, since the `select`, `metadata`, `schema`, and `config` properties can grow quite large. These files could live at `<type>/<name>.yml` in the project, e.g. `extractors/tap-example.yml`:

```yaml
variant: meltano
pip_url: git+https://gitlab.com/meltano/tap-example.git
select: ...
metadata: ...
schema: ...
config: ...
```

We wouldn't need to support variant subdirectories in this case, since only a single variant of a plugin should exist in a project anyway, which also goes for custom plugins.
>>>

------------------

## [Update] Latest spec description:

In `meltano.yml`, specifying an include directory:

```yaml
project_id: ...
include_paths:
- connectors/*.yml
- schedules/*.yml
...
plugins: ...
schedules: ...
```

Or explicitly with exact file paths:

```yaml
project_id: ...
include_paths:
- connectors/data-team.yml
- connectors/business-team.yml
- schedules/prod-refreshes.yml
...
plugins: ...
schedules: ...
```

And the companion files can be defined as:

```yaml
# connectors/data-team.yml
plugins:
  extractors:
  - name: ...
  loaders:
  - name: ...
```

```yml
# connectors/business-team.yml
plugins:
  extractors:
  - name: ...
  loaders:
  - name: ...
```

```yml
# schedules/prod-refreshes.yml
schedules: ...
```

Additional info:

- The included files have the same spec as `meltano.yml`, with the exception that some top-level entries like `project_id` might not be permitted.
- Rather than establish a priority between file declarations, collisions would be treated as errors, for instance if two files both declare an `extractor` with the name `tap-gitlab`.
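The "collisions are errors" rule could be sketched as follows (plain dicts stand in for parsed YAML files; the helper name and error wording are illustrative assumptions):

```python
def merge_plugin_files(files: list[dict]) -> dict:
    """Merge the `plugins` sections of several included files, treating a
    duplicate plugin name within a plugin type as an error rather than
    letting one file silently override another."""
    merged: dict = {}
    for file_def in files:
        for plugin_type, plugins in file_def.get("plugins", {}).items():
            bucket = merged.setdefault(plugin_type, {})
            for plugin in plugins:
                name = plugin["name"]
                if name in bucket:
                    raise ValueError(
                        f"{plugin_type} plugin {name!r} is declared in more than one file"
                    )
                bucket[name] = plugin
    return merged

data_team = {"plugins": {"extractors": [{"name": "tap-gitlab"}]}}
business_team = {"plugins": {"extractors": [{"name": "tap-zendesk"}]}}
merged = merge_plugin_files([data_team, business_team])

# A second declaration of tap-gitlab collides:
try:
    merge_plugin_files([data_team, {"plugins": {"extractors": [{"name": "tap-gitlab"}]}}])
    collided = False
except ValueError:
    collided = True
```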
8
7,603,319
74,453,850
2020-11-16 19:17:23.490
Support for data validation with Great Expectations
### Problem to solve

This started with a conversation with @DouweM: We were wondering if it was possible to integrate data validation with Great Expectations more tightly into a Meltano workflow. This could be as simple as a "meltano add validator" type wrapper and integration into the elt runs, or some other form of integration.

It would also be neat for Great Expectations to show up in the Meltano UI, maybe with Data Docs/validation results integrated into the UI, potentially even with a "configure" interface to the great_expectations.yml config file (although that is definitely a stretch). One example of a UI integration of GE validation results is Dagster: https://dagster.io/blog/great-expectations-for-dagster

### Target audience

The target audience would be two-fold:

1. Data engineers/owners of the Meltano workflow who run the pipelines and want to see whether all source data was correctly extracted and whether data was correctly transformed, and
2. data consumers (stakeholders) that want to use Data Docs for data documentation and to see the most recent validation status.

### Further details

Not much to add - data validation is crucial to pipelines, and I think making it more accessible/part of Meltano would be really beneficial for users!

### Proposal (older)

<details><summary>Click to expand</summary>

I would love to hear from existing GE users how they envision this could work - here are some of my thoughts:

- At a very high level, maybe we could add another key concept like a "validator" that can be added to a Meltano project. This could potentially even be a wrapper for "great_expectations init" to initialize a GE data context in the Meltano project.
- A user would then either configure a GE datasource manually, or all data connections could be inherited from Meltano.
- The user would then create Expectation Suites and Checkpoints. Running "meltano elt" (possibly with a --run-validation flag?) could then trigger running all configured Checkpoints as part of a pipeline. I'm not quite clear yet how or whether a user would specify when exactly to run validation.

</details>

### What does success look like, and how can we measure that?

Acceptance criteria: a user can run validation with GE as part of a Meltano pipeline run without having to invoke GE separately, and can see the results either in Data Docs or directly in the Meltano UI.

Success: people actually run validation with GE in Meltano!

### Links / references

* GE homepage: greatexpectations.io
* Dagster integration: https://dagster.io/blog/great-expectations-for-dagster
* Airflow operator (as an example of how to invoke validation): https://github.com/great-expectations/airflow-provider-great-expectations

### Updated Proposal (2022-01-05)

1. Using the recently released Meltano features `dbt test` and `test commands`, Meltano users should be able to add Great Expectations as a utility plugin.
   1. First phase: Great Expectations can be added manually to `meltano.yml` by users familiar with Great Expectations (or in Meltano-owned projects like the Hub).
   2. Second phase: Assuming positive results from the "first phase" above, Great Expectations will be added to `discovery.yml` so that users can add it using the command `meltano add utility great-expectations` (with no `--custom` flag).
2. Within `meltano.yml`, users will add `commands` with names starting with a `test*` prefix.
3. Tests will be runnable using any of these:
   - `meltano test --all`
   - `meltano test great-expectations`
   - `meltano test great-expectations:test-foo great-expectations:test-bar`
   - `meltano run great-expectations:test-foo great-expectations:test-bar` (because tests are also commands)

### Special commands and functions

- Docs/UI: We may also need to come up with a `great-expectations:ui` command which would build and launch Data Docs.
- Init: We may also want a predefined `init` command and perhaps commands for other administrative operations.
4
7,603,319
74,172,740
2020-11-10 23:31:34.201
Consider renaming extractor "entities" and "attributes" to "streams" and "properties" for consistency with Singer
This would prevent confusion and inconsistent/interchangeable use of the Meltano-specific and Singer-specific terms in docs or conversation.

Meltano users don't technically need to be familiar with underlying Singer concepts, and "entities" and "attributes" are arguably more user-friendly and less implementation-specific, but especially at this stage this is unrealistic, since users will still often need to debug (and fix) taps and read their repositories and documentation, which use the Singer terms.

If Meltano were to at one point add support for a different open source EL standard that uses different terms, using "streams" and "properties" may no longer make sense, but that's a bridge we can cross then.
4
7,603,319
74,168,158
2020-11-10 20:39:58.958
`docs` job fails with vague `ERROR: Job failed: exit code 1`
See https://gitlab.com/meltano/meltano/-/jobs/841670357:

```bash
<snip>
/home/u894-pruvsekvb4qh/www/meltano.com/public_html/mstile-144x144.png
/home/u894-pruvsekvb4qh/www/meltano.com/public_html/meltano-diagram.png
/home/u894-pruvsekvb4qh/www/meltano.com/public_html/apple-touch-icon.png
/home/u894-pruvsekvb4qh/www/meltano.com/public_html/mstile-310x150.png
Cleaning up file based variables
ERROR: Job failed: exit code 1
```

The failing command is [`$REMOTE_EXEC "mkdir -p $SSH_BACKUP_DIRECTORY; tar zcvf $SSH_BACKUP_DIRECTORY-$(date +%Y-%m-%dT%H:%M).tar.gz $SSH_DIRECTORY/$SSH_WWW_DIRECTORY"`](https://gitlab.com/meltano/meltano/-/blob/master/.gitlab/ci/docs.gitlab-ci.yml#L25), where `$REMOTE_EXEC` is `ssh -o StrictHostKeyChecking=no $SSH_USER_DOMAIN -p$SSH_PORT`.

An earlier run (https://gitlab.com/meltano/meltano/-/jobs/841365856) logged those exact same output lines, but then was able to continue on to the next command:

```bash
<snip>
/home/u894-pruvsekvb4qh/www/meltano.com/public_html/mstile-144x144.png
/home/u894-pruvsekvb4qh/www/meltano.com/public_html/meltano-diagram.png
/home/u894-pruvsekvb4qh/www/meltano.com/public_html/apple-touch-icon.png
/home/u894-pruvsekvb4qh/www/meltano.com/public_html/mstile-310x150.png
$ $REMOTE_EXEC "cd $SSH_DIRECTORY && find ./$SSH_WWW_DIRECTORY -mindepth 1 -maxdepth 1 -not -name blog -not -name '.' -exec rm -rf '{}' \;"
$ scp -o stricthostkeychecking=no -P$SSH_PORT -r public/* $SSH_USER_DOMAIN:$SSH_DIRECTORY/$SSH_WWW_DIRECTORY
Saving cache for successful job
Creating cache default...
docs/node_modules/: found 25033 matching files and directories
Uploading cache.zip to https://storage.googleapis.com/gitlab-com-runners-cache/project/7603319/default
Created cache
Cleaning up file based variables
Job succeeded
```
2
7,603,319
72,420,621
2020-10-08 22:38:43.746
Nested properties in discovered catalog are interpreted as `inclusion: automatic` even if their parent property is `inclusion: available`
See https://gitlab.com/meltano/meltano/-/blob/master/src/meltano/core/plugin/singer/catalog.py#L284. As a result, these nested properties will always show up as `automatic` in `meltano select --list --all`, cannot be deselected using `meltano select --except`, and end up having metadata `inclusion: automatic, selected: true` in the generated catalog file that's passed to the tap.

In reality, if a parent property is discovered as `inclusion: available`, changing `selected` or `inclusion` on an individual subproperty usually won't make a difference, since taps typically only check the `selected` metadata for top-level properties, not nested ones.

To resolve this, then, we can have Meltano assume `inclusion: automatic` on top-level properties that lack metadata, but not on nested properties (assuming their parent property does have metadata). In `meltano select --list --all`, nested properties without discovered(!) `inclusion` metadata shouldn't be listed at all, since they can't actually be individually selected. If a nested property does have discovered `inclusion` metadata, we can list it. We could automatically have a top-level `selected` rule cascade down to its properties, but let's not get into that for now.

---

As a temporary workaround, the following `metadata` rule can be added to a plugin definition to flip these nested `inclusion: automatic` metadata entries to `inclusion: available`:

```yaml
extractors:
- name: tap-facebook
  # ...
  metadata:
    '*':
      '*.*':
        inclusion: available
```
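The proposed rule - assume `inclusion: automatic` only for *top-level* properties that lack metadata - can be sketched as a function of the property's catalog breadcrumb (breadcrumb shapes follow Singer catalog conventions; the helper name is illustrative, not Meltano's actual code):

```python
def assumed_inclusion(breadcrumb: list, discovered_inclusion=None):
    """Decide what inclusion to report for a property.

    breadcrumb is e.g. ["properties", "id"] for a top-level property, or
    ["properties", "address", "properties", "city"] for a nested one.
    Returns None for nested properties without discovered metadata,
    meaning they shouldn't be listed at all."""
    if discovered_inclusion is not None:
        # Trust whatever the tap actually discovered.
        return discovered_inclusion
    if len(breadcrumb) == 2:
        # Top-level property without metadata: assume automatic.
        return "automatic"
    # Nested property without discovered metadata: not individually selectable.
    return None

top = assumed_inclusion(["properties", "id"])
nested = assumed_inclusion(["properties", "address", "properties", "city"])
discovered = assumed_inclusion(["properties", "address", "properties", "city"], "available")
```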
8
7,603,319
70,124,985
2020-08-21 18:40:56.781
Add `meltano run [<plugin>...]` to run arbitrary plugins in a pipeline
As `meltano run` evolves we may opt to expand/change the invocation syntax, but

> a simple first iteration will be to focus on just the chain use case: we do a bit of work to validate blocks, but otherwise we run left to right and find ExtractLoad Blocks as appropriate.

@tayloramurphy It's simply `meltano run tap-gitlab target-jsonl tap-gitlab target-csv dbt:run` with blocks run in series (with ExtractLoad blocks parsed and linked as needed).

Other sample valid invocations:

- `meltano run tap-gitlab map-remove-nulls target-mysql dbt:run`
- `meltano run tap-gitlab target-mysql dbt:run tap-mysql target-json`
- `meltano run tap-gitlab target-mysql dbt:run dbt:test superset:build`

Invalid invocations:

- command block between IOBlocks: `meltano run tap-gitlab dbt:run target-mysql`
- starting IOBlock (tap-peloton) with no ending IOBlock (a target): `meltano run tap-peloton tap-gitlab target-mysql`
- ending IOBlock (target-jsonl) with no starting IOBlock (a tap): `meltano run tap-gitlab target-mysql target-jsonl`
- tap/target used as command block: `meltano run tap-gitlab:discovery`

Job ID generation and STATE support:

- If no environment is provided: a job ID is not generated and STATE is not supported.
- If an environment is provided: a job ID is automatically generated with a format such as `{environment_name}:{tap_name}-to-{target_name}`.

Explicitly out of scope in the first version:

- Running taps/targets as commands is out of scope, at least in this first version.
- Retries and permissive or selective failures are out of scope, at least in this first version. A failure at any point halts execution and no further blocks are executed.

Fully out of scope:

- There's no plan as of now to support individual block-level arg passing, and it will likely not be available in future revisions. For instance, this will not be possible: `meltano run dbt:run dbt:test[--verbose]`. Instead you'd need to use `meltano invoke` or create a custom `test-verbose` command and execute it like this: `meltano run dbt:run dbt:test-verbose`.

<details>
<summary>Original issue body</summary>

As in:

- `meltano elt <extractor> <loader> [--transform=skip]` -> `meltano run <extractor> <loader>`, e.g. `meltano run tap-foo target-bar`
- `meltano elt <extractor> <loader> --transform=run` -> `meltano run <extractor> <loader> <transformer>`, e.g. `meltano run tap-foo target-bar dbt`
- `meltano elt <extractor> <loader> --transform=only` -> `meltano run <transformer> --with <extractor> --with <loader>`, e.g. `meltano run dbt --with tap-foo target-bar` (see https://gitlab.com/meltano/meltano/-/issues/2546)

And instead of `meltano schedule gitlab-to-jsonl tap-gitlab target-jsonl @hourly --transform=run`:

```yaml
schedules:
- name: gitlab-to-jsonl
  extractor: tap-gitlab
  loader: target-jsonl
  transform: run
  interval: '@hourly'
```

We get `meltano schedule gitlab-to-jsonl tap-gitlab target-jsonl dbt @hourly`:

```yml
- name: gitlab-to-jsonl
  interval: '@hourly'
  pipeline: # or `run`?
  - tap-gitlab
  - target-jsonl
  - dbt
```

</details>
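The left-to-right validation rules above can be sketched as a tiny state machine over the block list (classifying plugins by naming convention is purely illustrative; the real implementation resolves blocks against the project's plugin definitions):

```python
def validate_run_blocks(blocks: list[str]) -> bool:
    """Sketch of the ordering rules for `meltano run`: a tap opens an
    ExtractLoad block that a target must close, plugin commands (name:cmd)
    may not appear inside one, and taps/targets can't be used as commands."""
    open_tap = False  # are we inside an unfinished ExtractLoad block?
    for block in blocks:
        name, _, command = block.partition(":")
        if name.startswith(("tap-", "target-")) and command:
            return False  # tap/target used as a command block
        if name.startswith("tap-"):
            if open_tap:
                return False  # starting tap with no ending target
            open_tap = True
        elif name.startswith("target-"):
            if not open_tap:
                return False  # ending target with no starting tap
            open_tap = False
        elif command and open_tap:
            return False  # plugin command between IOBlocks
        # Bare non-IO names (e.g. the mapper map-remove-nulls) are allowed
        # inside an ExtractLoad block.
    return not open_tap  # no dangling tap at the end
```

Running the valid and invalid examples from this issue through the sketch reproduces the accept/reject outcomes listed above.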
12
7,603,319
70,124,963
2020-08-21 18:39:37.722
Allow extractor Singer messages to be transformed using Python before passing them to loader (Stream Maps)
We could support [`transformer` plugins](https://meltano.com/docs/plugins.html#transformers) other than `dbt`, that would follow extraction rather than loading, and would transform an extractor's output stream of Singer messages (`SCHEMA`, `RECORD`, etc), before they're streamed into the loader. The two types of transformers could be distinguished using an extra, e.g. `type: {etl,elt}` or `{follows,acts_on,transforms}: {extractor,loader}` We could support both pip packages, and local executable files, using either the `pip_url` or `executable` plugin setting.
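Such an ETL-style transformer would sit on the pipe between tap and target, rewriting the JSON-line message stream. A minimal sketch of the core loop (the stream-rename rule is just an example transformation; this is not an existing plugin):

```python
import json
import sys

def transform_message(line: str, rename: dict) -> dict:
    """Rewrite one Singer message, renaming streams per `rename`
    (e.g. {"users": "app_users"}). Messages without a matching
    stream -- including STATE messages -- pass through untouched."""
    message = json.loads(line)
    if message.get("type") in ("SCHEMA", "RECORD") and message.get("stream") in rename:
        message["stream"] = rename[message["stream"]]
    return message

def run(rename: dict) -> None:
    """Wire the transformer between tap stdout and target stdin."""
    for line in sys.stdin:
        print(json.dumps(transform_message(line, rename)))

out = transform_message(
    '{"type": "RECORD", "stream": "users", "record": {"id": 1}}',
    {"users": "app_users"},
)
```

A pip-installed package would expose `run` as its console-script entry point; a local executable file would just be this script with `run(...)` at the bottom.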
12
7,603,319
70,124,790
2020-08-21 18:32:43.092
Allow stream-level and property-level transformations to be defined in `meltano.yml` (Stream Maps)
<details><summary>Expand Original Issue Description</summary>

Inspired by https://www.dropbase.io/, PipelineWise transformations (https://transferwise.github.io/pipelinewise/user_guide/transformations.html), and our [existing extractor extras](https://meltano.com/docs/plugins.html#metadata-extra), I'm imagining something like:

```yaml
extractors:
- name: tap-example
  # ...
  transform:
    <entity>:
      <function>: <args>
      <attribute>:
        <function>: <args>
```

Since these transformations would act on an extractor's output - a stream of Singer `SCHEMA` and `RECORD` messages - we could relatively easily support functions for:

- renaming (prefixing, suffixing) entity (stream, table) or attribute (property, column) names
- dropping entities or attributes, in cases where a tap doesn't support discovery mode and entity selection
- adding attributes with predefined or dynamic values, like PipelineWise's metadata columns: https://transferwise.github.io/pipelinewise/user_guide/metadata_columns.html
- filtering records based on one or more attribute values, keeping only those that do (or don't!) match (`drop_if`, `drop_unless`?)
- replacing text in attribute values
- replacing empty strings with nulls
- replacing nulls with a string
- changing attribute types and casting values, which can go beyond overriding the JSON schema using the [`schema` extra](https://meltano.com/docs/plugins.html#schema-extra)

Functions could take arguments of any type: a simple string, an array of values, or an object with additional keys. If a function doesn't take any argument (like `drop`), it could just take a `true` boolean.

Since functions could have object arguments, Meltano would not immediately be able to distinguish between `{entity: {attribute: {function: scalar_value}}}` and `{entity: {function: {nested_key: nested_value}}}`, where the key nested under an entity identifier could either be an attribute identifier or a function name. We don't have this issue with the [`metadata`](https://meltano.com/docs/plugins.html#metadata-extra) and [`schema`](https://meltano.com/docs/plugins.html#schema-extra) extras, because metadata values cannot (so far) be objects, and since schema info can only be specified for attributes, not entities as a whole. Perhaps we can add a special `_` or `_self` or `_entity` key at the attribute level to nest entity-level transformation functions under, in cases where they need object values.

</details>

## Background (Updated 2021-12-15)

There would be a large advantage to being able to enable transformations like those from [pipelinewise-transform-field](https://github.com/transferwise/pipelinewise-transform-field) and Meltano SDK's [Inline Stream Maps](https://sdk.meltano.com/en/latest/stream_maps.html) to be defined natively in `meltano.yml` config. This opens up a large number of use cases defined on the SDK docs site:

> ### Stream-Level Mapping Applications
>
> - Stream aliasing: streams can be aliased to provide custom naming downstream.
> - Stream filtering: stream records can be filtered based on any user-defined logic.
> - Stream duplication: streams can be split or duplicated and then sent as multiple distinct streams to the downstream target.
>
> ### Property-Level Mapping Applications
>
> - Property-level aliasing: properties can be renamed in the resulting stream.
> - Property-level transformations: properties can be transformed inline.
> - Property-level exclusions: properties can be removed from the resulting stream.
> - Property-level additions: new properties can be created based on inline user-defined expressions.

As well as fixes for these common issues:

- Applying selection rules to taps that don't support selection.
- Resolving issues caused by taps that use selection rules to filter `RECORD` messages but not the `SCHEMA` messages that are used to create target tables.
- Resolving compatibility issues from taps that send data types that the chosen target cannot understand.
- Resolving compatibility issues from taps that send `ACTIVATE_VERSION` messages to targets that don't understand them.
- The need for "record flattening" when neither the tap nor the target supports this feature natively.

## Proposal: Map transforms as properties of extractors and loaders (Updated 2021-12-15)

After running the following...

```bash
meltano add mapper meltano-map-transform
meltano add mapper pipelinewise-transform-field
```

... you would be able to provide a config such as:

```yaml
mappers:
- name: meltano-map-transform
  pip_url: meltano-map-transform
  config: # Optionally, a default config.
    # ...
- name: pipelinewise-transform-field
  pip_url: pipelinewise-transform-field
- name: no-activate-version # A fictional mapper that removes ACTIVATE_VERSION messages
  pip_url: no-activate-version
  config: # Optionally, a default config.
    # ...
extractors:
- name: tap-gitlab
  # ...
  mappings:
  - name: pii-hasher               # The name of the map transform to apply.
    mapper: meltano-map-transform  # The mapper plugin to use.
    config:                        # What will be sent to the transformer in a config.json file.
      stream_maps:
        customers:
          id_hashed: md5(record['id'])
          id: None
loaders:
- name: target-salesforce
  # ...
  mappings:
  - name: add-global-guid          # The name of the map transform to apply.
    mapper: meltano-map-transform  # The mapper plugin to use.
    config:                        # What will be sent to the transformer in a config.json file.
      stream_maps:
        customers:
          guid: md5(record['id'])
- name: target-csv
  # ...
  mappings:
  - name: flatten-records          # The name of the map transform to apply.
    mapper: meltano-map-transform  # The mapper plugin to use.
    default: true                  # Transform prepended automatically if default=true.
    config:                        # What will be sent to the transformer in a config.json file.
      flatten_records: true
  - name: compat-fix               # The name of the map transform to apply.
    mapper: no-activate-version    # The mapper plugin to use.
    # config is omitted if the generic config is sufficient
```

One or more map transforms may be placed between tap and target:

```bash
meltano run tap-gitlab pii-hasher target-salesforce
meltano run tap-gitlab pii-hasher flatten-records target-csv
```

Since `default=true` for `flatten-records` and `compat-fix` on `target-csv`, these are all equivalent:

```bash
meltano run tap-gitlab flatten-records target-csv
meltano run tap-gitlab target-csv
```

Note: For the example config above, this command would fail because `flatten-records` is only defined for `target-csv` and is not defined for `tap-gitlab` nor `target-salesforce`: `meltano run tap-gitlab flatten-records target-salesforce`.
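The `default: true` behavior - prepending a loader's default transforms when none are named explicitly - could resolve roughly like this (a sketch using the `target-csv` mappings from the example config; the helper name is illustrative):

```python
def resolve_mappings(requested: list[str], loader_mappings: list[dict]) -> list[str]:
    """Return the mapping names to run before the loader: the explicitly
    requested ones as given, otherwise every mapping with default=true."""
    if requested:
        return requested
    return [m["name"] for m in loader_mappings if m.get("default")]

target_csv_mappings = [
    {"name": "flatten-records", "mapper": "meltano-map-transform", "default": True},
    {"name": "compat-fix", "mapper": "no-activate-version"},
]

# `meltano run tap-gitlab flatten-records target-csv`:
explicit = resolve_mappings(["flatten-records"], target_csv_mappings)
# `meltano run tap-gitlab target-csv` - defaults are prepended automatically:
implicit = resolve_mappings([], target_csv_mappings)
```

This is why the two `target-csv` invocations above are equivalent: the empty request resolves to the same default mapping chain.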
12
7,603,319
69,769,623
2020-08-12 16:13:00.636
Add new doc for `airflow` plugin and its settings
It should be made clear that all of Airflow's settings can be managed through `meltano config` and `meltano.yml`. (Parts of) https://meltano.com/docs/production.html#airflow-orchestrator should be moved here too.
2
7,603,319
69,183,865
2020-07-29 17:55:52.517
Add `container_spec` for containerized plugin commands
The `tap-mssql` plugin is a good example of a non-standard, non-python tap which would nevertheless be desirable for mainstream support. One way to support this is by adding a docker_image option in discovery.yml and working through whatever issues come up in that process.

Originally started as a discussion here: https://meltano.slack.com/archives/C013Z450LCD/p1596043957041600?thread_ts=1595992122.035100&cid=C013Z450LCD

## Update (2021-08-24)

The original use case for this issue (java-based `singer/tap-mssql`) is no longer a priority since we now have a python-based `tap-mssql` implementation and there just haven't been many (any?) other taps or targets requested which were not pip installable.

A newer use case is Lightdash, which has a complex set of requirements and is recommended to be run via docker - or specifically via docker compose.

## Update ~~(2021-10-05)~~ ~~(2021-11-29)~~ (2021-12-07)

In discussions around this topic, it appears clear we need to provide docker and non-docker invocation methods for plugins, so that a `--container=True` flag (or similar) could toggle the behavior to dockerized vs native invocation methods. The docker config is then additive to the existing `executable` and `pip_url` inputs.
### New `container_spec` option for plugin commands

This feature would add a structured `container_spec` config option to each plugin command:

```yml
plugins:
  utilities:
  - name: lightdash
    commands:
    - name: serve
      executable: "lightdash serve" # ignored when running via docker
      container_spec:
        image: lightdash
        command: lightdash serve
        entrypoint:
        env:
          ENVVAR1: foo
          ENVVAR2: bar
        ports:
        - 5000
        - 8080
        volumes:
        - ".:/project/myproject"
    - name: build
      executable: "lightdash build" # ignored when running via docker
      container_spec:
        image: lightdash
        command: lightdash build
        entrypoint:
        env:
          ENVVAR1: foo
          ENVVAR2: bar
        ports: [] # None needed
        volumes:
        - ".:/project/myproject"
```

### Supported `container_spec` options in V1:

* `image`
* `vars`
* `ports`
* `command`
* `entrypoint`
* `volumes` - or *at least* a container path in which to mount the root project dir

Only `image` is required. All other options are optional.

### New `--containers=true|false` flag for CLI commands `elt`, `invoke`, and `run`

The CLI commands `elt`, `invoke`, and `run` would all have a new `--containers` CLI arg. This arg has the following behaviors:

- `true`: This uses `container_spec` whenever it is available.
  - Important: a `true` setting does not guarantee that all invocations will be run containerized. Any plugin or command that does not have a `container_spec` will be attempted to be invoked natively. If native invocation fails, the command will fail - as is the existing behavior today.
- `false`: This uses local invocation (non-containerized) for all plugins. (The assumption being that there is no container runtime available on the machine.)
  - Note: If a plugin _only_ has a `container_spec` config and does not have any other means of being invoked, then the command will fail.

Note that if multiple plugins need to be invoked, they will share the same preference setting across the entire meltano CLI command.
#### Sample invocation

For instance, take the following sample invocation:

`meltano --containers=true run tap-gitlab target-snowflake dbt:run dbt:test lightdash:build`

Assuming that only `tap-gitlab` and `lightdash:build` have a defined `container_spec`, the behavior will be as follows:

- `tap-gitlab` will be invoked within a container and its data will be passed to the STDIN of `target-snowflake`, which will be running natively.
- `dbt:run` and `dbt:test` will both be run natively, since no `container_spec` exists for those commands.
- `lightdash:build` will be run within a container.

Assuming the same command is run with `--containers=false`, and assuming `lightdash:build` does not also have a non-containerized `executable` set, then meltano will fail to build an execution plan and will abort immediately with something like:

> Execution of `lightdash:build` is not defined when container runtimes are disabled. To proceed, add a native `executable` definition to the `lightdash:build` command or run with `--containers=true` to enable containerized execution.

Lastly, if `executable` and `container_spec` are both provided for the `lightdash:build` command, but the command is invoked natively (`--containers=false`) without the proper prerequisites installed, then the command will fail at the very end of the job, when `meltano run` reaches the final `lightdash:build` step.

#### Optional CLI conventions we could support:

- `--containers` is short for `--containers=true`
- `--no-containers` is short for `--containers=false`

## Out of scope

### Tap and Target containerization

We will probably build support for general plugins (`transformers`, `orchestrators`, and `utilities`) first - or more specifically for plugin `commands` first. Due to the additional complexity, containerized extractors and loaders would likely be a fast-follow on top of the first MVC release.
Unlike for general utilities, the `container_spec` config for extractors and loaders needs to be at the plugin level, not the command level, and any `command` value will likely be ignored by Meltano at runtime, in favor of the ways that Meltano already knows to invoke a tap, especially `tap-mytap --discover --config=/path/to/config.json` (discovery) and `tap-mytap --config=/path/to/config.json` (sync).

### Compose Syntax and docker-compose.yml files

Out of scope for now, but we _might_ also support docker-compose in future:

```yml
plugins:
  utilities:
    - name: lightdash
      commands:
        - name: serve
          container_spec:
            compose: lightdash.docker-compose.yml
```
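To make the semantics of `container_spec` concrete, here is a minimal, hypothetical sketch of how such a mapping could be translated into a `docker run` invocation. The field names follow the spec above, but the translation rules and the `build_docker_args` helper are illustrative assumptions, not Meltano's actual implementation:

```python
import shlex


def build_docker_args(spec: dict, project_dir: str = ".") -> list:
    """Translate a container_spec mapping into a `docker run` argv (sketch)."""
    args = ["docker", "run", "--rm", "-i"]
    for key, value in (spec.get("env") or {}).items():
        args += ["--env", f"{key}={value}"]
    for port in spec.get("ports") or []:
        args += ["--publish", f"{port}:{port}"]
    for volume in spec.get("volumes") or []:
        host, _, container = volume.partition(":")
        # expand the "." shorthand to the project directory
        args += ["--volume", f"{project_dir if host == '.' else host}:{container}"]
    if spec.get("entrypoint"):
        args += ["--entrypoint", str(spec["entrypoint"])]
    args.append(spec["image"])  # `image` is the only required key
    args += shlex.split(spec.get("command") or "")
    return args


spec = {
    "image": "lightdash",
    "command": "lightdash serve",
    "env": {"ENVVAR1": "foo", "ENVVAR2": "bar"},
    "ports": [5000, 8080],
    "volumes": [".:/project/myproject"],
}
print(" ".join(build_docker_args(spec)))
```

In a real implementation, STDIN/STDOUT piping between containerized and native plugins would also need to be wired up, which is why `-i` is included here.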
12
7,603,319
68,280,117
2020-07-20 20:39:16.224
Add "Configure" buttons to extractor and loader in Pipelines list
null
4
7,603,319
68,279,587
2020-07-20 20:38:22.127
Add "Full Refresh" button to UI
- Only when pipeline actually has state
- Update text in "ConnectorSettings" component about changing start date
4
7,603,319
63,846,865
2020-07-17 16:23:10.920
Preserve comments and flow style when programmatically modifying `meltano.yml` (Ruamel)
PyYAML doesn't currently support this (https://github.com/yaml/pyyaml/issues/90), but https://pypi.org/project/ruamel.yaml does.

Note: Our usage of the Canonical class may also be a factor we need to consider in our implementation. Noted by @kgpayne below: https://gitlab.com/meltano/meltano/-/issues/2177#note_708071594
20
7,603,319
54,102,788
2020-07-08 14:16:00.817
Create Helm chart to deploy Meltano to Kubernetes
The Helm chart should ideally run the Meltano schedule via Kubernetes orchestration, without Airflow if possible.

Minimum goal: Provide a way to run a single `meltano elt` pipeline on a schedule with a Kubernetes job.

Ideal outcome: Support both running a project-specific docker image and the default Meltano docker image, in both read-only and read-write mode, to support the broadest possible user base. Each scheduled pipeline created using `meltano schedule` and available through `meltano schedule list --format=json` would result in its own scheduled `meltano elt` job.
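As a sketch of the "one scheduled pipeline, one Kubernetes job" idea, each entry from `meltano schedule list --format=json` could be rendered into a `CronJob` along these lines. The image tag, schedule, pipeline names, and `--job_id` value are illustrative assumptions, not part of any existing chart:

```yml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: meltano-gitlab-to-postgres  # one CronJob per `meltano schedule` entry
spec:
  schedule: "0 0 * * *"  # taken from the schedule's interval
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: meltano
              image: meltano/meltano:latest  # or a project-specific image
              args:
                - elt
                - tap-gitlab
                - target-postgres
                - --job_id=gitlab-to-postgres
```

The read-only vs read-write distinction would then mostly come down to whether the project directory is baked into the image or mounted as a writable volume.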
12
7,603,319
35,189,225
2020-05-29 15:15:14.135
Rename of internal (OLTP) "Job" tables and columns
- [ ] Rename `job` table to `run(s)`
- [ ] Rename `job_id` column to `job_name`

Note: Since the concept of 'job' itself is changing, we should either wait until #2924+ is shipped, or else combine this with that.
4
7,603,319
34,008,395
2020-04-30 17:09:12.114
Allow extractor entities to be selected in UI
We already had an "Entity selection" modal before, but it was removed in https://gitlab.com/meltano/meltano/-/merge_requests/1157:

![](https://gitlab.com/meltano/meltano/uploads/4d03378285cecd17fbb9e8e537838471/Screen_Shot_2019-08-14_at_8.30.04_AM.png)

I think we should:

- [ ] Restore the `EntitiesSelectorModal` component from that MR
- [ ] Add a new "Select entities" button to installed extractors on the Extract page, provided they support the `discovery` and `catalog`/`properties` capabilities
- [ ] Make it clear that with anything but the full set of properties, Meltano's built-in transforms may not work, as described in https://gitlab.com/meltano/meltano/-/issues/876#note_240275676
8
7,603,319
29,724,563
2020-01-18 12:19:16.515
Add "Retry" button to pipeline logs view and list when run failed
### Problem to solve

We need to *add a pipeline retry button to the Meltano UI pipeline logs view*, so that when I see that something went wrong I can retry it easily.

![Screen_Shot_2020-01-18_at_7.15.57_AM](/uploads/328de49638e6d3d5e4285cd0b07d5827/Screen_Shot_2020-01-18_at_7.15.57_AM.jpg)

For example, I can see that "Nothing to do" happened, but something should have happened. I want to be able to make changes and then just hit "Retry".

### Target audience

Anyone who is editing or building their own analyses.

### Further details

It will make it easier to troubleshoot. This helps solve the "Have you tried restarting it?" problem.

### Proposal

I propose a "Retry" button in the bottom right of the modal.

### Regression test

- [ ] Write adequate test cases and submit test results
- [ ] Test results should be reviewed by a person from the team
4
7,603,319
19,359,027
2019-03-22 05:19:25.268
Combine with ML Flow to Scale Knowledge
### Problem to solve

Data science/ML experiments are often difficult to communicate to fellow team members and executives. When communicating to team members, the experiment has to be **reproducible** in order for them to have more technical depth. When communicating with executives, the experiment has to be **communicated in a simple manner** in order to win investment from leadership. Executives and team members are not the only stakeholders; other people in the company might be too. Scaling knowledge of experiments is difficult within any company.

### Target audience

Data Scientists who are:

* Frustrated at a company process designed for software engineers
* Able to communicate success, but unable to explain the failed experiments leading up to that success. Failed experiments contain the core pieces of insight gained by the scientist, yet stakeholders often wonder why the process takes so long.
* Able to communicate successful experiments to a few stakeholders, but not to all those who might be interested; unable to scale their communication.
* Frustrated at the slow decision making when the number of stakeholders that need to be communicated to is high
* Frustrated at a review process designed for software engineers: Git code management systems like Bitbucket do not have the features needed to support the review process required for high-quality peer feedback, which is what produces high-quality experimental results.

**As an organization grows, how do we make sure that an insight uncovered by one person effectively transfers beyond the targeted recipient?**

### Further details

One of the major problems is sharing not only reports and analytics but also ML experiments, so that knowledge can be pervasive. There have been attempts at this, such as the [Airbnb Knowledge Repo](https://github.com/airbnb/knowledge-repo), but it's an underinvested area. It's still an unsolved problem within the data science community.

### Proposal

Combine Meltano with MLflow Projects to have reproducible experiments.

### What does success look like, and how can we measure that?

Have a solution that covers 5 main areas:

1. Reproducibility
2. Quality
3. Consumability of content
4. Discoverability
5. Learning/growth

Reference: [Scaling Knowledge at Airbnb](https://medium.com/airbnb-engineering/scaling-knowledge-at-airbnb-875d73eff091)

### Links / references

https://github.com/airbnb/knowledge-repo
https://mlflow.org
https://medium.com/airbnb-engineering/scaling-knowledge-at-airbnb-875d73eff091
1
7,603,319
18,781,735
2019-03-04 15:20:31.688
Add Snowflake dialect to PyPIKA
### Problem to solve

We are currently using PyPIKA to generate the SQL queries for the Meltano UI. As we want to use Snowflake as a data warehouse, we need to make sure our SQL generation engine supports it. We'll need to add a Snowflake dialect to PyPIKA, which might involve sending it back as a contribution or publishing it as a separate package. Once the dialect is in, PyPIKA should be able to generate queries compatible with Snowflake's specifics.

### Target audience

Data Analyst

### Further details

Without this, the generated SQL might cause an error when queried.

### Proposal

- [ ] Investigate Snowflake SQL support
- [ ] Investigate PyPIKA dialect handling
- [ ] Investigate PyPIKA dialect integration
- [ ] Build the dialect

### What does success look like, and how can we measure that?

Our SQL generation engine can output Snowflake-compatible SQL.

### Links / references

Related to #428

https://github.com/kayak/pypika/blob/master/pypika/dialects.py
3
7,603,319
18,781,583
2019-03-04 15:15:18.164
Add Snowflake dialect to the database settings
Related to #428

### Problem to solve

We need to add a Snowflake dialect to the database settings, along with any dialect-specific configuration keys, so that the Meltano API can connect to Snowflake for querying.

### Target audience

Data Analyst

### What does success look like, and how can we measure that?

We are able to add a Snowflake connection in the Meltano UI database settings.
1