id | text | source | created | added | metadata
---|---|---|---|---|---
828709960
|
feat: Add OC embed
Part of #38. Not complete - needs to be turned on at the settings level.
@aphelionz Updated
Thanks!
|
gharchive/pull-request
| 2021-03-11T04:26:25 |
2025-04-01T04:35:26.201591
|
{
"authors": [
"RichardLitt",
"aphelionz"
],
"repo": "orbitdb/orbitdb.org",
"url": "https://github.com/orbitdb/orbitdb.org/pull/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
481778729
|
Remote id in routes
Hey guys, I have a question. I am running some Selenium test cases for our backend (integration tests). The situation is that when I open the browser (Chrome in this case), I pass the remote-id because I don't have the orbit-js generated id for that record. So my question is: is there an option so that http://url:port#/model/remote-id gets translated to http://url:port#/model/local-id?
You can call store.findRecordByKey() or store.cache.peekRecordByKey() and those methods will create a local id for the record that's associated with the remoteId in the KeyMap (if no mapping already exists). And the remote id should always be used in requests if the serializer's resourceKey method returns remoteId.
Thanks @dgeb for your answer, I appreciate your time on this. It helps us a lot.
Just to let you know, I will first check whether the id is a remote id and, based on that, decide whether to perform the query or just use the provided local id.
|
gharchive/issue
| 2019-08-16T20:44:03 |
2025-04-01T04:35:26.205097
|
{
"authors": [
"dgeb",
"nayrban"
],
"repo": "orbitjs/ember-orbit",
"url": "https://github.com/orbitjs/ember-orbit/issues/212",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2233278940
|
Running npm ci in Dockerfile freezes the network completely
Describe the bug
Network freezes completely and I can't use browsers or anything.
In the console I saw messages like
NSTAT_MSG_TYPE_SRC_REMOVED received reports drop, source ref 785745 source NWStatsTCPSource OrbStack Helper attributed dev.kdrag0n.MacVirt pid 86147 epid 0 uuid 3D3CBA8B-D6BC-34B1-A072-20E7D81FF7A1 euuid A3788B8C-ED5E-3DBA-A6A8-F87B041FF75A fuuid (null) started 2024-04-09 14:44:08.602 +0300
To Reproduce
Dockerfile with
FROM node:lts
WORKDIR /app
COPY ./package.json ./package-lock.json /app/
RUN npm ci
build the Dockerfile
have something meaningful in package.json, e.g. multiple packages, a medium-to-large app :)
Expected behavior
Build should run in the background and not hinder other apps.
Diagnostic report (REQUIRED)
OrbStack info:
Version: 1.5.1
Commit: 4cfac15e1080617c70eb163966e1cb2009dac1c2 (v1.5.1)
System info:
macOS: 14.4.1 (23E224)
CPU: arm64, 10 cores
CPU model: Apple M1 Pro
Model: MacBookPro18,3
Memory: 32 GiB
Full report: https://orbstack.dev/_admin/diag/orbstack-diagreport_2024-04-09T11-49-26.895220Z.zip
Screenshots and additional context (optional)
No response
Duplicate of #976
Released in v1.6.0 Canary 1.
To update to Canary: Settings > Update channel
Can confirm this no longer freezes, thanks!
Released in v1.6.0.
New: Truly fast container filesystems on macOS: 2–10x faster, within 75-95% of native
|
gharchive/issue
| 2024-04-09T11:59:32 |
2025-04-01T04:35:26.214021
|
{
"authors": [
"kdrag0n",
"toxik"
],
"repo": "orbstack/orbstack",
"url": "https://github.com/orbstack/orbstack/issues/1118",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2541333238
|
docs typo at https://docs.orbstack.dev/compare/colima
there is a typo here: https://docs.orbstack.dev/compare/colima
that's that's
OrbStack vs. Colima
OrbStack is a drop-in replacement for Colima that's that's fast, light, simple, and easy to use. It runs both containers and Linux machines.
Thanks, fixed.
|
gharchive/issue
| 2024-09-23T00:23:44 |
2025-04-01T04:35:26.215948
|
{
"authors": [
"gedw99",
"kdrag0n"
],
"repo": "orbstack/orbstack",
"url": "https://github.com/orbstack/orbstack/issues/1462",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2142844512
|
Fixed the Conflict version across service and cargo
Summary by CodeRabbit
Chores
Updated application configuration settings for improved debugging and storage access.
@coderabbitai review
|
gharchive/pull-request
| 2024-02-19T17:16:56 |
2025-04-01T04:35:26.217459
|
{
"authors": [
"itsparser"
],
"repo": "orcaci/orca",
"url": "https://github.com/orcaci/orca/pull/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
627809533
|
Any docs about localizing the dashboard?
If I want to translate the dashboard to another language (a business need), how should I go about it? Is it a simple lang/en/ file?
Hi @alash3al . I moved this question to a more suitable repository.
All localization relies on the capabilities of the framework. As an example:
Change the language setting in the file config/app.php
/*
|--------------------------------------------------------------------------
| Application Locale Configuration
|--------------------------------------------------------------------------
|
| The application locale determines the default locale that will be used
| by the translation service provider. You are free to set this value
| to any of the locales which will be supported by the application.
|
*/
'locale' => 'jp',
Then we create the file jp.json in the directory resources/lang/, so that the full path is resources/lang/jp.json. This file contains plain text in JSON format; for example, add these lines there:
{
"Systems": "システム"
}
After that, we can see the result of our work in the browser.
I will close this question. If the answer didn't help you, please reopen it.
|
gharchive/issue
| 2020-05-29T22:37:08 |
2025-04-01T04:35:26.221946
|
{
"authors": [
"alash3al",
"tabuna"
],
"repo": "orchidsoftware/platform",
"url": "https://github.com/orchidsoftware/platform/issues/1114",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
49542595
|
Move to io.dropwizard.metrics dependency
The dependency (not the package) is now (in version 3.1) io.dropwizard.metrics (http://mvnrepository.com/artifact/io.dropwizard.metrics). Would you be able to upgrade?
+1
+1
|
gharchive/issue
| 2014-11-20T12:48:39 |
2025-04-01T04:35:26.254607
|
{
"authors": [
"BrunoBonacci",
"analytically",
"ryan-williams"
],
"repo": "organicveggie/metrics-statsd",
"url": "https://github.com/organicveggie/metrics-statsd/issues/17",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1206953989
|
How is the "teacher" mechanism realised?
Hi there! This is interesting but I don't get how the teacher model (or its equivalent mechanism) in the original STAC paper is implemented here?
Thank you for your interest. There is no teacher model here, and this does not follow that paper; I followed the original semi-supervised learning method. First, we feed the model an unlabeled batch of images and get out1, then feed a labeled batch of images and calculate the loss. After that, we backpropagate the loss and feed the previously used unlabeled batch to get out2. Then we calculate the loss [unlabeled_loss = loss_fn(out1, out2)] and backpropagate it. After some iterations the model will learn something from the unlabeled data. You can do this with any model.
unlabeled_out1 = model(unlabeled_batch)  # first pass over the unlabeled batch
out = model(labeled_batch)  # supervised pass over the labeled batch
loss = loss_fn(out, target)
loss.backward()  # backpropagate the supervised loss
unlabeled_out2 = model(unlabeled_batch)  # second pass over the same unlabeled batch
un_loss = loss_fn(unlabeled_out1, unlabeled_out2)  # consistency loss between the two passes
un_loss.backward()
But in the code wouldn't these two outs produce exactly the same result, since the optimizer hasn't done anything?
https://github.com/orgilj/semi-yolov5/blob/master/train.py#L338-L352
Thank you for your interest. Yes, this code needs some changes. If you want, I'll send you the result video.
Sure! Thank you in advance!
|
gharchive/issue
| 2022-04-18T11:27:31 |
2025-04-01T04:35:26.261665
|
{
"authors": [
"Zephyr69",
"orgilj"
],
"repo": "orgilj/semi-yolov5",
"url": "https://github.com/orgilj/semi-yolov5/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2291043188
|
feat(output): support using stdout via dash (-o -)
Description
Using - for stdout is a common convention between CLI tools and this PR simply adds this support to git-cliff.
Motivation and Context
This makes it possible to use git-cliff as follows:
$ git-cliff -o -
(instead of git-cliff -o /dev/stdout or git-cliff -o)
Also you can combine it with -p as mentioned in #643
How Has This Been Tested?
Locally
Types of Changes
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
[ ] Documentation (no code change)
[ ] Refactor (refactoring production code)
[ ] Other
Checklist:
[x] My code follows the code style of this project.
[x] I have updated the documentation accordingly.
[x] I have formatted the code with rustfmt.
[x] I checked the lints with clippy.
[x] I have added tests to cover my changes.
[x] All new and existing tests passed.
Codecov Report
Attention: Patch coverage is 0%, with 3 lines in your changes missing coverage. Please review.
Project coverage is 41.90%. Comparing base (d5acda1) to head (dd154ac).
Report is 1 commit behind head on main.
Files | Patch % | Lines
git-cliff/src/lib.rs | 0.00% | 3 Missing ⚠️
❗ Your organization needs to install the Codecov GitHub app to enable full functionality.
Additional details and impacted files
@@ Coverage Diff @@
## main #644 +/- ##
==========================================
+ Coverage 41.88% 41.90% +0.02%
==========================================
Files 15 15
Lines 1077 1079 +2
==========================================
+ Hits 451 452 +1
- Misses 626 627 +1
Flag | Coverage Δ
unit-tests | 41.90% <0.00%> (+0.02%) ⬆️
Flags with carried forward coverage won't be shown.
This very nearly works perfectly. The current command I run through cliff-jumper is:
F:\favware\git-cliff\target\x86_64-pc-windows-msvc\release\git-cliff.exe --tag @favware/cliff-jumper@3.0.3 --prepend ./CHANGELOG.md --unreleased --config ./cliff.toml --output - --github-repo favware/cliff-jumper --github-token "ghp_NOPE"
And when I set execa stdio to pipe I get this under the stdout property:
# Changelog
All notable changes to this project will be documented in this file.
# [@favware/cliff-jumper@3.0.3](https://github.com/favware/cliff-jumper/compare/@favware/cliff-jumper@3.0.2...@favware/cliff-jumper@3.0.3) - (2024-05-12)
## 🏠 Refactor
- **deps:** Update dependency conventional-recommended-bump to v10 ([aab0368](https://github.com/favware/cliff-jumper/commit/aab0368c3501792a64566a0d82fc0e3f935b3ed7)) ([#171](https://github.com/favware/cliff-jumper/pull/171))
- 💥 **BREAKING CHANGE:** Node 18 is now required as per the new version of `conventional-recommended-bump`
- 💥 **BREAKING CHANGE:** The base `conventional-changelog-angular` is now used instead of a customization of it. This should not affect the semver resolution, but if it does please create a GitHub issue
- 💥 **Co-authored-by:** renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
## 🐛 Bug Fixes
- **deps:** Update dependency execa to v9 ([07e39ae](https://github.com/favware/cliff-jumper/commit/07e39ae3a94ae50326ba3d250f4c16a23d665c84)) ([#175](https://github.com/favware/cliff-jumper/pull/175))
## 🚀 Features
- Test3 ([fdc7833](https://github.com/favware/cliff-jumper/commit/fdc7833fa37f90d619b134ceaff57e8681db575e))
- Test2 ([1e51302](https://github.com/favware/cliff-jumper/commit/1e5130224e323bf9886414e27cb08b33c621f16a))
- Test1 ([8aa2776](https://github.com/favware/cliff-jumper/commit/8aa27769eb37d6fa6d561ca616a30f2291061509))
This includes the header configured in cliff.toml. I'm not sure what the best approach here is. I don't want it to be included in the GitHub releases. I see 2 paths:
I add a TOML parser to my lib, check if there is a header property and if so use string replace to remove it
git-cliff somehow removes it but only from --output because ofc we wouldn't want to remove it from the file output. Not sure if this is feasible.
(sidenote, ignore that there are breaking changes in a supposedly patch, that's just because I'm debugging and I do have actual breaking changes pending release)
I'm wondering whether using the --strip header option is feasible, but then the header won't be prepended to the file.
git-cliff somehow removes it but only from --output because ofc we wouldn't want to remove it from the file output. Not sure if this is feasible.
I thought of doing the same but I'm not sure. I'm not sure how people are using prepend and it might have some side effects.
Can you simply fetch it from cliff.toml and replace/remove it? I think that'd be the best.
Sounds perfect. Just LMK then!
Parsing TOML on my side is working perfectly (using smol-toml). Looking forward to this getting merged and released so I can then subsequently update cliff-jumper.
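For illustration, here is a minimal sketch of the header-stripping approach discussed above. It is written in Python with the standard-library tomllib rather than the smol-toml/TypeScript actually used by cliff-jumper, and it assumes the header lives under the [changelog] table of cliff.toml, which is where git-cliff keeps it by default:
import subprocess
import tomllib

def changelog_without_header(cliff_toml="cliff.toml"):
    # Read the header that git-cliff is configured to prepend.
    with open(cliff_toml, "rb") as f:
        config = tomllib.load(f)
    header = config.get("changelog", {}).get("header", "")
    # Capture the changelog that git-cliff writes to stdout via `-o -`.
    changelog = subprocess.run(
        ["git-cliff", "--unreleased", "--config", cliff_toml, "-o", "-"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Strip the configured header so it does not end up in the release notes.
    return changelog.removeprefix(header)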
|
gharchive/pull-request
| 2024-05-11T19:45:21 |
2025-04-01T04:35:26.280293
|
{
"authors": [
"codecov-commenter",
"favna",
"orhun"
],
"repo": "orhun/git-cliff",
"url": "https://github.com/orhun/git-cliff/pull/644",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1415208320
|
Small QoL changes
Allow simulator to stop generating on ctrl+c, and some debug printouts to help users see what was read in. Additionally, don't try to process if there is no IMU data extracted from the bag.
Thank you!
|
gharchive/pull-request
| 2022-10-19T15:41:16 |
2025-04-01T04:35:26.281736
|
{
"authors": [
"goldbattle",
"raabuchanan"
],
"repo": "ori-drs/allan_variance_ros",
"url": "https://github.com/ori-drs/allan_variance_ros/pull/29",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
68755247
|
On master with strictSql enabled, SQL query fails because of escaped newline
{
"transaction": true,
"operations": [
{
"type": "script",
"language": "sql",
"script": [
"let $d0 = INSERT INTO Country SET name=\"one\\ntwo\" RETURN @rid",
"return { n0 : $d0 }"
]
}
]
}
It works fine with 2.0.7, and correctly stores and retrieves the encoded newline
Hi @stuartcarnie
how are you executing this query?
I have a test case on develop branch that passes correctly, so maybe it's due to the rest of the stack (console, REST call?)
Thanks
Luigi
It was a REST call
fixed in develop branch
|
gharchive/issue
| 2015-04-15T18:27:25 |
2025-04-01T04:35:26.284152
|
{
"authors": [
"luigidellaquila",
"stuartcarnie"
],
"repo": "orientechnologies/orientdb",
"url": "https://github.com/orientechnologies/orientdb/issues/3933",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1411077422
|
docs(Readme): Fixed Links in TOC
The links in Table Of Contents were not working, which has been fixed now!
Great, thanks!
|
gharchive/pull-request
| 2022-10-17T07:46:08 |
2025-04-01T04:35:26.286020
|
{
"authors": [
"kailashchoudhary11",
"origranot"
],
"repo": "origranot/reduced.to",
"url": "https://github.com/origranot/reduced.to/pull/237",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
160241640
|
Implement scroll up infinite scroll
Differentiates whether you scrolled up or down, based on the last scroll position. Different distance thresholds for scroll up and down. Added a callback for scroll up.
Need help with updating the bundle. I've never used systemjs builder.
@orizens fair concerns. I updated the names of those so it will be backwards compatible.
@tallkid24 Thanks :)
|
gharchive/pull-request
| 2016-06-14T17:44:24 |
2025-04-01T04:35:26.287300
|
{
"authors": [
"orizens",
"tallkid24"
],
"repo": "orizens/angular2-infinite-scroll",
"url": "https://github.com/orizens/angular2-infinite-scroll/pull/22",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1113497833
|
[MX-13]: Add users and auth users tables
Summary
Add migration files for the creation and deletion of the users and auth users tables.
Update Makefile with more commands about goose migration tool.
Fix type on goose file.
Update docs README.md about goose commands.
|
gharchive/pull-request
| 2022-01-25T06:52:57 |
2025-04-01T04:35:26.289334
|
{
"authors": [
"orlandorode97"
],
"repo": "orlandorode97/mailx-google-service",
"url": "https://github.com/orlandorode97/mailx-google-service/pull/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2014656098
|
fix issues in ArenaAllocator
The current implementation of the ArenaAllocator class has some problems:
The memory alignment requirements for newly allocated objects are ignored.
No constructors are called during allocation which leads to issues (especially problematic when allocating objects of non-POD types).
Both these issues lead to undefined behavior.
This pull request tries to improve and fix the implementation. The alloc() member function now respects the alignment requirements and also checks if there's enough free memory in the allocator (throws otherwise). However, just like before, the alloc() function still does not call any constructors.
Therefore, another member function called emplace() has been introduced to guarantee initializing an object upon allocation (by either default-initializing it or calling one of its constructors).
For example:
auto allocator = ArenaAllocator{ 1024 * 1024 }; // 1 MiB
auto a = allocator.alloc<int>(); // correct alignment, but uninitialized
std::cout << *a << '\n'; // undefined behavior (accessing uninitialized memory)
auto b = allocator.emplace<int>(); // correct alignment, default-initialized
std::cout << *b << '\n'; // okay, prints '0'
auto c = allocator.alloc<std::vector<int>>();
c->push_back(42); // undefined behavior (constructor of std::vector was never called)
auto d = allocator.emplace<std::vector<int>>();
d->push_back(42); // ok
Personally, I think the alloc() function should be made private to avoid creating uninitialized objects. This PR already replaces all calls to alloc() with calls to emplace(), but I did not want to enforce this change.
Looks good!
|
gharchive/pull-request
| 2023-11-28T15:05:24 |
2025-04-01T04:35:26.356040
|
{
"authors": [
"mgerhold",
"orosmatthew"
],
"repo": "orosmatthew/hydrogen-cpp",
"url": "https://github.com/orosmatthew/hydrogen-cpp/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2111902
|
bypassing record_cache when selecting rows with lock
Hi,
Thanks for sharing record-cache, it is pretty solid.
I am tinkering around with it and I notice that record cache seems to hit the cache even when I want to retrieve rows with a lock. Example:
transaction do
Item.lock("lock in share mode").where(:conditions).each do |i|
...
end
end
When I look at my SQL logs, I only see:
BEGIN
COMMIT
The actual select query is not there. When I disable record_cache and do the same thing again, the select query is there as expected. So it makes me believe that record cache is retrieving from cache even though I want to lock the rows in the db.
What would be the best way to get around this, so for any lock selects, record-cache is bypassed?
Thanks!
Thanks for reporting the issue.
To temporarily disable fetching records from the cache, you can use the following construct:
RecordCache::Base.without_record_cache do
transaction do
Item.lock("...").where...
end
end
And I just committed a more permanent fix that will automatically bypass any lock-select as suggested. This will be included in the next version.
PS. Instead of looking in the SQL logs, you can also switch on DEBUG logging (config.log_level = :debug in development.rb) to get more information on record-cache hits and misses.
Fantastic, I appreciate the quick response. Thanks for making this!
|
gharchive/issue
| 2011-11-01T16:05:49 |
2025-04-01T04:35:26.359757
|
{
"authors": [
"dhruvg",
"orslumen"
],
"repo": "orslumen/record-cache",
"url": "https://github.com/orslumen/record-cache/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1100464810
|
Request for contributor access : Lakshmi Viswanath
Hi ,
I look forward to contributing to this project !
Regards,
Lakshmi Viswanath
@lakshmikarollil Can you please follow the EasyCLA link on the PR to accept that agreement?
|
gharchive/pull-request
| 2022-01-12T15:19:34 |
2025-04-01T04:35:26.361165
|
{
"authors": [
"lakshmikarollil",
"sbtaylor15"
],
"repo": "ortelius/ortelius",
"url": "https://github.com/ortelius/ortelius/pull/437",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1707412001
|
better support o-input with input type=number
Fixes #526
Fixes #527
Proposed Changes
This fixes o-input with type=number (see the linked issues).
Vue doesn't support modelModifiers fallthrough to native inputs (see https://github.com/vuejs/core/issues/1825) but simply converting the code to use v-model in o-input achieves the desired behaviour and makes it more consistent with native input.
Besides, the code becomes cleaner. I mean, why introduce a settable computed and not use it?
Besides, the code becomes cleaner. I mean, why introduce a settable computed and not use it?
Maybe due to integration with other libs like imask or similar
Note that this change somehow breaks the mobile native datepicker (<o-input type="date" @update:modelValue> will be called with old values).
|
gharchive/pull-request
| 2023-05-12T10:56:30 |
2025-04-01T04:35:26.364745
|
{
"authors": [
"IlyaSemenov",
"jtommy"
],
"repo": "oruga-ui/oruga",
"url": "https://github.com/oruga-ui/oruga/pull/528",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
126641393
|
bug
v2.5.1 check-in bug
The check-in bonus is clearly set to 1-20,
yet a check-in generates more than 1 GB.
It triggers when the remaining traffic is low.
What is the minimum value that can be set?
To disable this feature,
in
https://github.com/orvice/ss-panel/blob/v2/user/_checkin.php
if ($oo->unused_transfer() < 2048 * $tomb) {
$transfer_to_add = rand(1024, 2048);
} else {
$transfer_to_add = rand($check_min, $check_max);
}
change it to
$transfer_to_add = rand($check_min, $check_max);
Thanks
|
gharchive/issue
| 2016-01-14T12:03:02 |
2025-04-01T04:35:26.367132
|
{
"authors": [
"jingwangnet",
"orvice"
],
"repo": "orvice/ss-panel",
"url": "https://github.com/orvice/ss-panel/issues/298",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1053829575
|
Demo 2 Superset dashboard
For completing demo 2, we should have a sample Superset dashboard with the prediction table from the output of inference. This dashboard doesn't have to be very complicated, but a collection of 2 or 3 diagrams. Some ideas:
For a selected KPI question, display answers for all the companies
For a selected company, display answers for all the kpi questions
closing as a dashboard exists: https://superset-secure-odh-superset.apps.odh-cl1.apps.os-climate.org/superset/dashboard/15/?native_filters=()
Reopen if needed.
|
gharchive/issue
| 2021-11-15T16:00:03 |
2025-04-01T04:35:26.428778
|
{
"authors": [
"MichaelClifford",
"Shreyanand"
],
"repo": "os-climate/aicoe-osc-demo",
"url": "https://github.com/os-climate/aicoe-osc-demo/issues/105",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1000234269
|
Dependabot security issue
We found potential security vulnerabilities in your dependencies.
You can see this message because you have been granted access to Dependabot alerts for this repository.
A banner has appeared on the project saying that Dependabot detected security vulnerabilities in 16 dependencies inside the milage-client directory.
It appears the issue started after the project repo was made public and PR #19 was merged.
In that PR I created the project with npx create-react-app template=typescript and installed the @craco/craco and craco-alias packages.
I never ran into this issue while working on personal projects, so I'm asking whether anyone can resolve it..😥
The odd thing is that when I checked with the yarn audit command, instead of the 16 findings Dependabot detected, only the following were confirmed:
2 moderate issues in glob-parent
1 moderate issue in browserslist
3 in total, and yarn audit fix does not resolve the problem.
For now I have configured Dependabot to update automatically. That said, in my opinion, for a hackathon we don't need to worry too much about every package's vulnerabilities.
|
gharchive/issue
| 2021-09-19T07:23:50 |
2025-04-01T04:35:26.439264
|
{
"authors": [
"bwmelon97",
"pec9399"
],
"repo": "osamhack2021/WEB_Millage_ICM",
"url": "https://github.com/osamhack2021/WEB_Millage_ICM/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
573964862
|
How to use the checkboxes
Hello,
how do I use the checkboxes (Marketing / Personalization / Analysis) on the Osano page in the CookieConsent plugin?
Unfortunately there is nothing about this in the settings or in the API description.
Greetings
If you use the current dev version you can use the type 'compliance'
The docs seem to be a few versions behind and are not that comprehensive to begin with.
Have a look at https://github.com/osano/cookieconsent/blob/dev/src/options/popup.js for all the options.
Thank you very much for the information, it helped me a lot.
How did you get it to work in the end? I can only see that we can use the categories type, but where do we define the categories? It doesn't work for me.
Using the category type is apparently also the right solution.
I have defined the categories myself; look into categories.map( ( category, index ) => ...
@Nesfiran thanks. How do you define the categories? Obviously the variable is not defined and what is the structure even supposed to look like? Help is much appreciated.
Please see the link posted above starting at line 62 for all possible options.
You initialize the script as follows, from here on you can define all variables yourself.
const cc = new CC({
//...options,
type : "categories"
})
For example:
const categories = ['Category 1', 'Category 2', 'Category 3'];
|
gharchive/issue
| 2020-03-02T13:12:50 |
2025-04-01T04:35:26.443975
|
{
"authors": [
"Nesfiran",
"molerat619",
"sgoldenb"
],
"repo": "osano/cookieconsent",
"url": "https://github.com/osano/cookieconsent/issues/689",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2366320861
|
Snapshots for 3rd party repos (HMS-4260)
Adds snapshots to 3rd party repos.
Payload repositories and custom repositories are resolved separately, since the UI passes them in separately as well.
The 'base' repositories have their dnf repo config added to /etc as well, this way users can get all content from consoledot.
/retest
/retest
|
gharchive/pull-request
| 2024-06-21T11:15:41 |
2025-04-01T04:35:26.450804
|
{
"authors": [
"croissanne",
"ezr-ondrej"
],
"repo": "osbuild/image-builder",
"url": "https://github.com/osbuild/image-builder/pull/1244",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1303943737
|
RHEL major version - minor version mapping
The aim of this PR is to make the selection of RHEL minor versions implicit and only expose major version numbers in the general case. The long term plan is to support compose requests with just the RHEL major version (i.e., rhel-8 and rhel-9) for GA and only speak of minor versions when building EUS images (e.g., rhel-84-eus).
Currently, this PR changes the way the host distro is detected and reported:
The minor version is ignored
The weldr API reports the detected host distro string in the status info (json only: composer-cli --json status show).
It also changes the distro selection for a compose request and the distroregistry:
The current GA minor version is aliased in the distro registry as the plain major version, i.e., rhel-8 == rhel-86 and rhel-9 == rhel-90 (see the illustrative sketch after the TODO list below).
The combination of the changes above means that:
When requesting rhel-9, as of today, the equivalent of a rhel-90 image is built.
When running on any RHEL 9.y system, the host distro is reported as rhel-9 and the default distro on prem (in weldr) becomes rhel-9.
When requesting rhel-8, as of today, the equivalent of a rhel-86 image is built.
When running on any RHEL 8.y system, the host distro is reported as rhel-8 and the default distro on-prem (in weldr) becomes rhel-8.
Future:
Remove all distro definitions of RHEL minor versions from the distro registry except development versions. Currently, that means we would only support rhel-8, rhel-87, rhel-9, rhel-91.
Add distro definitions for minor versions that are currently in EUS. Currently, that means rhel-84-eus.
This requires a couple of changes: we need to add the EUS repositories and set the appropriate dnf configuration.
TODO for this PR
[ ] Document (in the repo's docs/) the version selection logic, based on requested version number or host distro.
[ ] Unit test version selection.
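To make the selection logic above concrete, here is a small illustrative sketch of the aliasing described in this PR. It is a standalone Python illustration, not the actual Go implementation in osbuild-composer's distro registry, and the distro names are taken from the description above:
# Registry of supported distro definitions (illustrative only).
DISTROS = {"rhel-84": "RHEL 8.4", "rhel-86": "RHEL 8.6", "rhel-87": "RHEL 8.7",
           "rhel-90": "RHEL 9.0", "rhel-91": "RHEL 9.1"}

# The current GA minor version is aliased as the plain major version.
ALIASES = {"rhel-8": "rhel-86", "rhel-9": "rhel-90"}

def resolve_distro(name):
    # A compose request for "rhel-8" builds the current GA minor version,
    # while explicit names like "rhel-87" still resolve directly.
    return DISTROS[ALIASES.get(name, name)]

print(resolve_distro("rhel-8"))   # RHEL 8.6
print(resolve_distro("rhel-91"))  # RHEL 9.1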
Currently, this PR changes the way the host distro is detected and reported:
The minor version is ignored
We can revert this in the future, but we should determine if e.g. running osbuild-composer on an EUS version of RHEL should result in producing an EUS RHEL image by default...
The combination of the changes above means that:
When requesting rhel-9, as of today, the equivalent of a rhel-90 image is built.
When running on any RHEL 9.y system, the host distro is reported as rhel-9 and the default distro on prem (in weldr) becomes rhel-9.
When requesting rhel-8, as of today, the equivalent of a rhel-86 image is built.
When running on any RHEL 8.y system, the host distro is reported as rhel-8 and the default distro on-prem (in weldr) becomes rhel-8.
It would be great to ensure (test) that the latest rhel-X definition is also picked for any new, unknown minor version of RHEL, e.g. 8.8 or 9.2. This will be very helpful at the beginning of the next development cycle, when images are built for nightly composes.
I know that @teg wanted to ensure that we allow only "known to be good and tested" distro definitions, but IIRC he came to agree that this is something that would be OK and useful.
Future:
Remove all distro definitions of RHEL minor versions from the distro registry except development versions. Currently, that means we would only support rhel-8, rhel-87, rhel-9, rhel-91.
Add distro definitions for minor versions that are currently in EUS. Currently, that means rhel-84-eus.
We need to ensure that our internal automation that is triggering builds in Brew and also the documentation used to trigger builds for old releases keeps working. IOW triggering RHEL-8.4 SAP EC2 image build using rhel-84 distro name.
I'm also not sure about adding the *-eus suffix to the distro name. We actually have EUS, TUS, E4S, etc... So it may be a good idea to simply keep using the minor version name without any suffix.
I'm also not sure about adding the *-eus suffix to the distro name. We actually have EUS, TUS, E4S, etc... So it may be a good idea to simply keep using the minor version name without any suffix.
I realised very quickly that this is indeed preferable. We shouldn't need it in the request value.
Thank you for this!
A few notes:
To @thozza's comment, I think it makes sense to allow future minor versions, under the assumption that they remain backwards compatible. We should keep testing, but breaking changes should be caught in downstream CI, it is not something we can guard against.
Defaulting to the host distro is something we did for backwards compatibility, might not be worth making that logic more fancy unless there is a need to, requiring people to specify the distro for new features would be totally fine (but also fine to extend the logic of course).
The main point of selecting the minor version should be that we don't assume a more recent minor version than the content we consume. On-prem, this means that if we pull from the CDN, then we can assume that the content is more recent than the host we are running on. The exception to this is during a beta, where we could be running on an 8.7 host, but the CDN could still have 8.6 content. To solve this we would need to detect we are on a beta and even fall back to 8.6, or use 8.7-beta content. This gets fragile close to release where we may rhel-release packages that don't indicate beta, but the previous minor version is still on CDN.
For the purposes of the internal service, I think we still need the ability to select all non-EOL minor versions, as we could have the need to respin z-stream images of 8.6, and while now people could just use rhel-8 for that, it is a bit odd that rhel-86 stops working at release, and rhel-8 gives the same thing.
Based on the above, would it make sense for the caller / provider of the repos to figure out / declare what minor version they want. I.e., for weldr (where we provide the repos), do the logic as you describe (basically just rhel-8, rhel-9, rhel-84-eus), but for the service don't do the major version alias, but require the caller to know what content are in the repos they pass in and request the right minor version (would still drop things as soon as they are EOL though)?
* For the purposes of the internal service, I think we still need the ability to select all non-EOL minor versions, as we could have the need to respin z-stream images of 8.6, and while now people could just use `rhel-8` for that, it is a bit odd that `rhel-86` stops working at release, and `rhel-8` gives the same thing.
Yes, I can see 8.4, 8.6, 8.7, 9.0 and 9.1 image builds in brew over the past week. If this initiative intends to remove "old" minor versions from the API, we should first talk to OSCI/SP about which distros they need and set up some policies for ourselves. Happy to facilitate the talk, just let me know when you are ready.
|
gharchive/pull-request
| 2022-07-13T21:04:12 |
2025-04-01T04:35:26.467502
|
{
"authors": [
"achilleas-k",
"ondrejbudai",
"teg",
"thozza"
],
"repo": "osbuild/osbuild-composer",
"url": "https://github.com/osbuild/osbuild-composer/pull/2830",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
581686512
|
Config resets to default if there's an error in it, rather than offering options
I'm currently using the snap of ElectronPlayer available in the Ubuntu store, but am raising an issue here in case it's common to versions.
First of all, thanks for the great app - I've edited the config to have all my UK streaming providers (BBC iPlayer, UKTV Play, etc.). One annoyance: whenever I made and saved a small error in the config (like a missing comma) I found that the whole config file reset itself on startup. Numerous times I lost my progress and had to start again because of a mistake.
Instead of this behaviour, would it be possible to show an error message on startup ("Your config file is invalid") and give the user an option to either reset the config to default or edit it to fix the problem before seeing the menu?
Better yet: given how simple the config is... how about a graphical wizard that edits it for you?
I have been wanting to make a GUI for the config, I just haven't found the free time. The config is managed by an external package which for some reason has decided to reset itself on syntax errors. I don't have control over that behaviour unless I were to create a custom config system. I may build my own in the future; it's just a matter of when I get free time to maintain the project.
|
gharchive/issue
| 2020-03-15T14:01:33 |
2025-04-01T04:35:26.488987
|
{
"authors": [
"oscartbeaumont",
"ubuntujaggers"
],
"repo": "oscartbeaumont/ElectronPlayer",
"url": "https://github.com/oscartbeaumont/ElectronPlayer/issues/61",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1287350867
|
Add isolated() wrapper style.
Description
This style was proposed in #108 for manual ownership of a style's cache.
Details
Its own cache should be of type Void.
It should store the wrapped style’s cache and inject it into the style’s methods.
It should store the wrapped style's cache as a reference shared among all copies of an instance.
It should store the wrapped style using value semantics.
Implemented in 2469618ec9b10f22e90f179d5960c1572cf5cc6b (untested). I will add tests before v5.0.0.
A shower thought: if Context stores an Isolated<Style> then it no longer needs to manage Cache injection and Isolated would be the only point of Cache failure. It’s something to think about.
|
gharchive/issue
| 2022-06-28T13:27:06 |
2025-04-01T04:35:26.491964
|
{
"authors": [
"oscbyspro"
],
"repo": "oscbyspro/DiffableTextViews",
"url": "https://github.com/oscbyspro/DiffableTextViews/issues/121",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1991894519
|
Update kalmanfilter.py for lower memory usage
Hi again,
working with large matrices (2000 x 30000) I ran into some memory issues on computers with less RAM. I went looking for some quick fixes and found a few:
The easiest change here is removing 1*matrix operations that were resulting in an unnecessary copy. I think the original intention was just for clarity, and removing these saves a surprising amount of RAM. This change had no effect on predictions in my test.
I had to add a change to allow passing covariances=False to .smooth(). The code was written to expect self.cov=None but it was never set to None. My guess is you've always been testing with covariance enabled.
I set the new default for .smooth() to covariance = False. This reduced max memory for a sample from max: 6.678 GB to max: 4.325 GB and I don't see any need for the covariance here (unlike in predict where cov is useful). But you might disagree if you have some use for that.
Added a comment about using float32 instead of float64 for the np.empty call. Unsurprisingly, this cuts memory use in half. However it comes with a small change in calculated values (by 4.8043524443673274e-08% in my case, so really really small). As a result I made this change in my personal version of the code but have not pushed it to master, just added a comment for anyone else who looks into this.
Overall in a test case this brings memory use from 17 GB down to 7 GB, (and down to 4 GB with float32 change) measured using scalene profiler.
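As a rough illustration of the points above, here is a minimal sketch (my own, not part of the PR) of why the dtype of the preallocated buffers dominates memory use, why 1*matrix allocates an extra copy, and how the covariances flag from this PR can be used. The model parameters and sizes are placeholders (scaled down from the 2000 x 30000 case mentioned above):
import numpy as np
import simdkalman

n_series, n_steps, n_states = 200, 3000, 2  # scaled-down stand-in sizes

# dtype controls the footprint of the big work buffers: float32 halves it.
buf64 = np.empty((n_series, n_steps, n_states), dtype=np.float64)
buf32 = np.empty((n_series, n_steps, n_states), dtype=np.float32)
print(buf64.nbytes / 2**20, buf32.nbytes / 2**20)  # ~9.2 MiB vs ~4.6 MiB

# 1 * matrix allocates a full copy; using the array directly does not.
extra_copy = 1 * buf64  # new allocation the size of buf64
same_array = buf64      # no copy

kf = simdkalman.KalmanFilter(
    state_transition=[[1, 1], [0, 1]],
    process_noise=np.diag([0.1, 0.01]),
    observation_model=np.array([[1, 0]]),
    observation_noise=1.0,
)
data = np.random.normal(size=(n_series, n_steps))
# Skipping the smoothed covariances avoids the largest buffers, per this PR.
result = kf.smooth(data, covariances=False)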
Thank you again. This is great! I merged a modified version https://github.com/oseiskar/simdkalman/commit/9507530d3703ba7311e43f1d9d61294e006d409e, which reverted this part:
I set the new default for .smooth() to covariance = False. This reduced max memory for a sample from max: 6.678 GB to max: 4.325 GB and I don't see any need for the covariance here (unlike in predict where cov is useful). But you might disagree if you have some use for that.
I agree that this would be a more reasonable default, since accessing the smoothed covariance is not often of interest and consumes a lot of memory. However, changing the default breaks the API (and hence the test suite, see the CI runs in https://github.com/oseiskar/simdkalman/pull/26), so I'll keep the default. I did, however, add a comment about lower memory usage to the docs.
The easiest change here is removing 1*matrix operations that were resulting in an unnecessary copy. I think the original intention was just for clarity, and removing these saves a surprising amount of RAM. This change had no effect on predictions in my test.
I double-checked this after merging (whoops) and noticed that this was not just for clarity, but to ensure that compute can be used to simultaneously compute and return filtered and smoothed results (the usefulness of this is questionable, but it's also a part of the API that used to work). I added a test for this in https://github.com/oseiskar/simdkalman/commit/12a8926a187970b23293f343ae1353db2f788a6d, which failed, and I fixed it by modifying the code like this: https://github.com/oseiskar/simdkalman/commit/9ae4d3c8d7ce9e71a65617ea0ac7deebc9dc24d5, which should also achieve the lower memory consumption in the default smoothing case where filtered=False.
Added a comment about using float32 instead of float64 for the np.empty call. Unsurprisingly, this cuts memory use in half. However it comes with a small change in calculated values (by 4.8043524443673274e-08% in my case, so really really small). As a result I made this change in my personal version of the code but have not pushed it to master, just added a comment for anyone else who looks into this.
Supporting float32 as an option would be a great addition, since it works just as well with most Kalman Filters. However, there are also applications where it does not work. One example are the very complicated EKFs used in visual-inertial odometry, e.g., variants of this, which are normally implemented so that they are just barely numerically stable in double precision.
If a float32 mode were added, it should also systematically change all means, covariances and intermediate results from double to float, to avoid back-and-forth float-double conversions, which can be slow. Changing just the covariance works OK and reduces memory consumption, but changing the other parts too could make the code run even faster.
Released the merged code in v1.0.4. Also made an issue (enhancement) mentioning the need for a 32-bit float mode: https://github.com/oseiskar/simdkalman/issues/28
|
gharchive/pull-request
| 2023-11-14T03:01:45 |
2025-04-01T04:35:26.501124
|
{
"authors": [
"oseiskar",
"winedarksea"
],
"repo": "oseiskar/simdkalman",
"url": "https://github.com/oseiskar/simdkalman/pull/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
523150750
|
The page never finishes loading
Hello,
the per-department statistics page loads indefinitely:
https://dev.cadastre.openstreetmap.fr/fantoir/stats_dept.html#dept=973
@adrienandrem can I let you test? It should be fixed now. If it's OK for you, you can close the ticket; otherwise comments are welcome.
Thanks
It works perfectly. Thanks!
|
gharchive/issue
| 2019-11-14T22:52:48 |
2025-04-01T04:35:26.513495
|
{
"authors": [
"adrienandrem",
"vdct"
],
"repo": "osm-fr/osm-vs-fantoir",
"url": "https://github.com/osm-fr/osm-vs-fantoir/issues/46",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
}
|
172980818
|
can not find any pbf when import-osm
I had put the "planet-latest.osm.pbf " file in ./import floder in the source floder but when I excute "docker-compose run import-osm" it tunrs out "can not find any osm pbf . please monut /data/import to floder contains pbf"
Hi!
The example tutorial - with zurich_switzerland.osm.pbf is working ?
see: http://osm2vectortiles.org/docs/own-vector-tiles/
It should be "docker-compose up import-osm"!
It did not work either :( @ImreSamu
It turns out the same result. Any other ways?
Odd, how is your import directory structured (is there only one osm2vectortiles folder you are using) and is your docker-compose.yml file unmodified?
I have only one osm2vectortiles folder (/home/bear/osm2vectortiles) and I have put the pbf file in /home/bear/osm2vectortiles/import. I didn't modify my docker-compose.yml. How should it go? @stirringhalo
@bearnxx
Can you post the output of the following commands?
cat /etc/*-release
cat /proc/version
docker version
docker-compose version
docker run debian:jessie /bin/echo 'Hello world'
My guess: maybe this is a Docker problem .. see similar errors:
https://github.com/docker/docker/search?utf8=✓&q="Error+resolving+syscall+name"+&type=Issues
If you don't using the latest docker and docker-compose - can you upgrade and check again?
https://github.com/docker/docker/releases ( v1.12.1 )
https://github.com/docker/compose/releases ( 1.8.0 )
[root@localhost osm2vectortiles]# cat /etc/*-release
CentOS Linux release 7.1.1503 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
CentOS Linux release 7.1.1503 (Core)
CentOS Linux release 7.1.1503 (Core)
[root@localhost osm2vectortiles]# cat /proc/version
Linux version 3.10.0-229.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.2 20140120 (Red Hat 4.8.2-16) (GCC) ) #1 SMP Fri Mar 6 11:36:42 UTC 2015
[root@localhost osm2vectortiles]# docker version
Client:
Version: 1.10.3
API version: 1.22
Package version: docker-common-1.10.3-46.el7.centos.10.x86_64
Go version: go1.6.3
Git commit: d381c64-unsupported
Built: Thu Aug 4 13:21:17 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.3
API version: 1.22
Package version: docker-common-1.10.3-46.el7.centos.10.x86_64
Go version: go1.6.3
Git commit: d381c64-unsupported
Built: Thu Aug 4 13:21:17 2016
OS/Arch: linux/amd64
[root@localhost osm2vectortiles]# docker info
Containers: 22
Running: 1
Paused: 0
Stopped: 21
Images: 6
Server Version: 1.10.3
Storage Driver: devicemapper
Pool Name: docker-8:3-137629586-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 4.961 GB
Data Space Total: 107.4 GB
Data Space Available: 13.76 GB
Metadata Space Used: 9.404 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.138 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Either use --storage-opt dm.thinpooldev or use --storage-opt dm.no_warn_on_loop_devices=true to suppress this warning.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2016-06-09)
Execution Driver: native-0.2
Logging Driver: journald
Plugins:
Volume: local
Network: host bridge null
Kernel Version: 3.10.0-229.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 2
CPUs: 1
Total Memory: 7.627 GiB
Name: localhost
ID: Y3AP:WBOT:JNR6:DUIF:JHIF:G4Z6:UENO:U6PB:HNDB:TRGW:TBST:UGOC
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Registries: docker.io (secure)
[root@localhost osm2vectortiles]# docker-compose version
docker-compose version 1.8.0, build 94f7016
docker-py version: 1.9.0
CPython version: 2.7.5
OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
@stirringhalo @ImreSamu
@bearnxx
I think it should be some config problem, not a lower Docker version problem
maybe this is a [ seccomp ] security problem ?
https://github.com/docker/docker/blob/master/docs/security/seccomp.md
docker v12 has a centos seccomp patch: https://github.com/docker/docker/pull/22344 ( but i am not a centos user .. so this is only a guess .. )
You can test your docker configuration.
If the simple docker volume ('-v Bind mount a volume') is not working, then osm2vectortiles will not work either ...
The vt.sh script should list your ./import directory (and the pbf file!)
cat vt.sh
#!/bin/bash
vimport=$(pwd)/import
docker run -it --rm -v $vimport:/data/import debian:jessie ls -la /data/import
[root@localhost osm2vectortiles]# docker run -it --rm -v $vimport:/data/import debian:jessie ls -la /data/import
Unable to find image 'debian:jessie' locally
Trying to pull repository docker.io/library/debian ...
jessie: Pulling from docker.io/library/debian
357ea8c3d80b: Already exists
Digest: sha256:ffb60fdbc401b2a692eef8d04616fca15905dce259d1499d96521970ed0bec36
Status: Downloaded newer image for docker.io/debian:jessie
2016/08/30 13:29:22 Error resolving syscall name execveat: could not resolve name to syscall - ignoring syscall.
2016/08/30 13:29:22 Error resolving syscall name getrandom: could not resolve name to syscall - ignoring syscall.
2016/08/30 13:29:22 Error resolving syscall name memfd_create: could not resolve name to syscall - ignoring syscall.
2016/08/30 13:29:22 Error resolving syscall name renameat2: could not resolve name to syscall - ignoring syscall.
2016/08/30 13:29:22 Error resolving syscall name sched_getattr: could not resolve name to syscall - ignoring syscall.
2016/08/30 13:29:22 Error resolving syscall name sched_setattr: could not resolve name to syscall - ignoring syscall.
2016/08/30 13:29:22 Error resolving syscall name seccomp: could not resolve name to syscall - ignoring syscall.
2016/08/30 13:29:22 Error resolving syscall name breakpoint: could not resolve name to syscall - ignoring syscall.
2016/08/30 13:29:22 Error resolving syscall name cacheflush: could not resolve name to syscall - ignoring syscall.
2016/08/30 13:29:22 Error resolving syscall name set_tls: could not resolve name to syscall - ignoring syscall.
ls: cannot open directory /data/import: Permission denied
I have disabled SELinux; it didn't work either.
I think it must be a CentOS 7 problem. I have decided to change to your Linux distribution. So which one are you using now?
@ImreSamu
I think the #1 os for Docker host is Ubuntu LTS, that is 16.04 today.
@bearnxx :
I think it must be a CentOS 7 problem.
sometimes a simple Docker reinstall ( or upgrade ) helps ( but sometimes not )
https://docs.docker.com/engine/installation/linux/centos/
So which one are you using now?
I am using Ubuntu 16.04
and
docker version : 1.12.1
docker-compose version: 1.8.0
I changed my OS to Ubuntu 16.04. All is well~~~ :) @ImreSamu @stirringhalo thank you for your help
Yep, to confirm I used 16.04 and it works perfectly!
I'm using Docker on a Windows machine and I get the same error.
Reference: #499
Should I consider switching to Ubuntu instead?
|
gharchive/issue
| 2016-08-24T15:18:18 |
2025-04-01T04:35:26.550892
|
{
"authors": [
"ImreSamu",
"bearnxx",
"hyperknot",
"jaskiratr",
"stirringhalo"
],
"repo": "osm2vectortiles/osm2vectortiles",
"url": "https://github.com/osm2vectortiles/osm2vectortiles/issues/410",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2485135487
|
E: Unable to locate package python3-pytest-httpserver when following install instructions
I tried to follow https://github.com/osmcode/pyosmium?tab=readme-ov-file#testing
sudo apt-get install python3-pytest python3-pytest-httpserver
gives me
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package python3-pytest-httpserver
lsb_release -a
for me is
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
(XY problem avoidance: I am trying to figure out how to instantiate osmium.osm.TagList - osmium.osm.TagList({'shop': 'supermarket'}) seemingly works, but functionality that works for a real TagList breaks. I found https://docs.osmcode.org/pyosmium/latest/ref_osm.html#osmium.osm.TagList but I remain confused about what needs to be passed as TagContainerProtocol; https://github.com/osmcode/pyosmium/blob/master/test/test_taglist.py is my last trace, and I wanted to step through the test with a debugger to find out how it instantiates TagList, as reading the code left me confused.)
Unable to locate package python3-pytest-httpserver
try pip install pytest pytest-httpserver shapely
https://github.com/osmcode/pyosmium/blob/df429ca54806bef5bc5f99a30bd7f8ab29413500/.github/actions/run-tests/action.yml#L7C17-L7C60
Ubuntu has this only starting 23.04. I think it is fair to expect a recent OS for development.
NB: TagLists are not instantiatable from Python. Use a simple dict.
Use a simple dict.
The tricky part is that dict has some extra methods, so code was not failing in tests but was failing with real code,
and in turn, when iterating over a TagList you can use k and v on elements, which you cannot use on dict elements.
I guess I can call dict(tag_list) and hope that the performance penalty is not too great.
TagLists are not instantiatable from Python
at least I know that I can stop looking for a solution here
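For anyone who lands here with the same question, a minimal sketch of the dict-based approach suggested above (my own illustration, not from the pyosmium docs): it copies the read-only TagList into a plain dict from the tags' k/v attributes inside a SimpleHandler callback, which sidesteps instantiating TagList entirely. The input file name is a placeholder:
import osmium

class ShopHandler(osmium.SimpleHandler):
    def __init__(self):
        super().__init__()
        self.supermarkets = []

    def node(self, n):
        # Copy the non-instantiable TagList into a plain dict while the
        # underlying OSM object is still valid inside the callback.
        tags = {tag.k: tag.v for tag in n.tags}
        if tags.get('shop') == 'supermarket':
            self.supermarkets.append((n.id, tags))

handler = ShopHandler()
handler.apply_file('extract.osm.pbf')  # placeholder input file
print(len(handler.supermarkets))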
Please do not start off-topic discussions on unrelated issues, use the Discussion section.
|
gharchive/issue
| 2024-08-25T09:03:39 |
2025-04-01T04:35:26.620982
|
{
"authors": [
"ImreSamu",
"lonvia",
"matkoniecz"
],
"repo": "osmcode/pyosmium",
"url": "https://github.com/osmcode/pyosmium/issues/263",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2045957858
|
feat: support cosmostation mobile
What is the purpose of the change
This PR adds support for the Cosmostation mobile wallet.
Brief Changelog
Adds a new button for connecting Cosmostation in Cosmostation Mobile Wallet
Testing and Verifying
This change has been tested locally in cosmostation mobile wallet
@soaryong-c Connects well on Cosmostation mobile browser. But txs not going through as it is not pulling fee!
Same wallet is connecting but getting errors on txs.
Logout button is not working.
https://github.com/osmosis-labs/osmosis-frontend/assets/103904125/194c61f7-0b40-4fb7-836e-3f04231764cb
@kamal-sutra can you share test url?
@kamal-sutra can you share test url?
https://osmosis-frontend-git-fork-cosmostation-stage-osmo-labs.vercel.app/
@kamal-sutra @CryptoAssassin1
There are some issues with our app
I will comment again after processing.
thank you :)
Closing for now.
|
gharchive/pull-request
| 2023-12-18T07:58:49 |
2025-04-01T04:35:26.630250
|
{
"authors": [
"CryptoAssassin1",
"kamal-sutra",
"soaryong-c",
"sunnya97"
],
"repo": "osmosis-labs/osmosis-frontend",
"url": "https://github.com/osmosis-labs/osmosis-frontend/pull/2576",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1353211754
|
simulator: app hash MVP (part 1/2)
Closes: #2425
What is the purpose of the change
Within the simulator, we want to generate an app hash which we can use to compare across runs. This ends up not being a straightforward task since we implement a bare-bones mock tendermint.
This PR introduces the absolute minimum we need to produce an app hash. This app hash is not entirely useful, as it is essentially only derived from the validator hash within the block header. If the direction of this PR is correct, the next step will be to generate a datahash via all tx results, which should then produce an apphash that has meaning (this is now completed in #2531 )
As a side note, this is my first time really digging into tendermint, so my understanding could be (and is likely) incomplete. Please lmk if anything in the PR seems not to make sense!
Brief Changelog
changes randomProposer to return a pubkey instead of a derived hexbytes address
adds apphash to the sql table
adds version block 11 to the initial header (hashes won't get generated without this)
derives initial apphash from the abci.ResponseInitChain as we would do on a real chain
takes the current validator set and moves it to a validatorSet tendermint struct (this is needed to generate the validator hash, which cannot be nil otherwise you cannot generate an app hash)
sets the validator hash to the block header
generates an apphash from this header
Testing and Verifying
This change is already covered by existing tests
Documentation and Release Note
Does this pull request introduce a new feature or user-facing behavior changes? no
Is a relevant changelog entry added to the Unreleased section in CHANGELOG.md? no
How is the feature or change documented? not applicable
Thanks everyone for such quick reviews! Will likely merge soon after CI passes to work on part 2
|
gharchive/pull-request
| 2022-08-28T00:58:20 |
2025-04-01T04:35:26.635115
|
{
"authors": [
"czarcas7ic"
],
"repo": "osmosis-labs/osmosis",
"url": "https://github.com/osmosis-labs/osmosis/pull/2530",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2060882872
|
mDNS search with iOS support
Fix #29
It is not ready so I opened it as draft.
We'll probably find a time to work on it in the coming months
|
gharchive/pull-request
| 2023-12-30T23:24:41 |
2025-04-01T04:35:26.636406
|
{
"authors": [
"guyluz11"
],
"repo": "osociety/network_tools_flutter",
"url": "https://github.com/osociety/network_tools_flutter/pull/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1855741842
|
Fix GPS/IMU update rates
See issue #722 .
How to test it?
Launch the simulation:
ros2 launch vrx_gz competition.launch.py world:=sydney_regatta
And verify the publication rates:
ros2 topic hz /wamv/sensors/gps/gps/fix
ros2 topic hz /wamv/sensors/imu/imu/data
They should be close to 20Hz and 100Hz respectively. Take into account that if the real time factor is lower than 100%, the values reported from topic hz should be proportionally lower.
I think this means we're going to do a 3.2.2 release. @caguero Do you agree?
Indeed, this is an important fix to justify a 3.2.2 release.
|
gharchive/pull-request
| 2023-08-17T21:36:35 |
2025-04-01T04:35:26.658615
|
{
"authors": [
"caguero"
],
"repo": "osrf/vrx",
"url": "https://github.com/osrf/vrx/pull/723",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1193454288
|
SpdxDocumentModelMapper: Make SPDX "idstring" generation predictable
Being able to predict the SPDX "idstring" for a given package is a
prerequisite for upcoming changes that will maintain the transitive
package relationships.
The SPDX ID is now derived from the coordinate representation of a
project's / package's Identifier. As the Identifier is unique within
an OrtResult, the derived SPDX ID is very likely unique, too, except
for cases where coordinate representations differ only in special
characters that get mapped to the same valid character for an SPDX ID.
Signed-off-by: Sebastian Schuberth sebastian.schuberth@bosch.io
@fviernau I hope this aligns with what we discussed.
Am I correct that we now use "first" instead of "1"?
This has been clarified in today's ORT developer meeting. The short answer is "no" 😀
|
gharchive/pull-request
| 2022-04-05T16:46:01 |
2025-04-01T04:35:26.686401
|
{
"authors": [
"sschuberth",
"tsteenbe"
],
"repo": "oss-review-toolkit/ort",
"url": "https://github.com/oss-review-toolkit/ort/pull/5225",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2592834236
|
Landscape for this WG and other related groups
Discussed on the 9/30 meeting:
We need to create a taxonomy of the AI/ML working groups (our interlocks) and integrate it within the MVSR. We need to decide what output we want to have, how we can collaborate with the other groups.
Currently, we are getting updates from these groups (best effort) but we need to identify gaps that are not addressed.
The main question to answer is what we bring to the table and what specific outcomes we want to target. We don't want to fragment the work or dilute activities, but we are centrally placed to handle anything at the intersection of security, AI and OSS.
Some of the interlocks were documented on #24, but we probably should discuss all them
|
gharchive/issue
| 2024-10-16T19:19:08 |
2025-04-01T04:35:26.689229
|
{
"authors": [
"mihaimaruseac"
],
"repo": "ossf/ai-ml-security",
"url": "https://github.com/ossf/ai-ml-security/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1317087159
|
Organization-wide Workflow Add Instructions
I picked up the PR #151
@azeemshaikh38 @justaugustus @laurentsimon I need a 👍 to merge this in.
Is the multi-repo tool ready?
Last time I checked, it would need a bit of logic to detect whether a previous PR was sent or not, in order to avoid duplication of issues.
TBH I don't know. We can close this PR if it isn't ready. I picked up the long-running PR to close things.
I think it'd be good to iron out some of the problems. I'm just hesitant to release this if people start finding problems. I think the main feature that would be useful is avoiding spam when a person provides a repository where an issue is already opened. That'd be a cool PR to work on though.
Wdut?
@naveensrinivasan -- In lieu of updating the instructions, I would actually remove any mentions of the tool (but not the tool itself).
I merged some initial fixes to multi-repo-action in #301, but it is still not in a working state and I think leaving instructions in about it may cause more overhead than we need.
I agree. There isn't any mention of the tool other than in its own folder and README. I will update the README with Work In Progress.
closing this for https://github.com/ossf/scorecard-action/pull/776
|
gharchive/pull-request
| 2022-07-25T16:32:02 |
2025-04-01T04:35:26.704343
|
{
"authors": [
"laurentsimon",
"naveensrinivasan"
],
"repo": "ossf/scorecard-action",
"url": "https://github.com/ossf/scorecard-action/pull/773",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1156046867
|
:sparkles: cmd: Refactor to make importable
What kind of change does this PR introduce?
(Is it a bug fix, feature, docs update, something else?)
Feature
[x] PR title follows the guidelines defined in our pull request documentation
cmd: Refactor to make importable
options: Add support for parsing via environment variables
options: Support setting feature flags via option
cmd: Replace version with sigs.k8s.io/release-utils/version
cmd: Move option validation into pre-run function
Signed-off-by: Stephen Augustus foo@auggie.dev
What is the current behavior?
scorecard is not importable, which means any consumers needing to leverage it need to either:
Grab the binary and wrap it (ref: https://github.com/ossf/scorecard-action/blob/2003da293e5ad0555c10ace5034de009632ca932/Dockerfile#L33-L34, https://github.com/ossf/scorecard-action/blob/2003da293e5ad0555c10ace5034de009632ca932/entrypoint.sh#L104-L128)
Implement parts of existing scorecard packages, which may not may not do option validation the same way we do, or afford them the complete functionality of scorecard (ref: https://github.com/ossf/allstar/blob/e507312ca14971734db95e10a50fac840360a21c/pkg/policies/branch/branch.go)
There are plenty of existing issues for this:
Allstar: https://github.com/ossf/allstar/issues/21, https://github.com/ossf/allstar/issues/22, https://github.com/ossf/allstar/issues/28, https://github.com/ossf/allstar/pull/114
scorecard-action: https://github.com/ossf/scorecard-action/issues/107, https://github.com/ossf/scorecard-action/pull/122
Witness (@colek42) probably has more to consider as well.
What is the new behavior (if this is a feature change)?
[x] (N/A - this refactor increases test coverage) Tests for the changes have been added (for bug fixes/features)
For consumers using cobra, you can now create a scorecard *cobra.Command using cmd.New():
package main
import (
"log"
"github.com/ossf/scorecard/v4/cmd"
)
func main() {
if err := cmd.New().Execute(); err != nil {
log.Fatalf("error during command execution: %v", err)
}
}
We've also added support for setting a subset of scorecard options via environment variables (using https://github.com/caarlos0/env).
The intent is not to support all options, but to allow specific use cases like scorecard-action, which already uses environment variables for configuration.
Which issue(s) this PR fixes
Continues https://github.com/ossf/scorecard/pull/1645 and partially addresses https://github.com/ossf/scorecard/issues/1683.
Special notes for your reviewer
h/t to @n3wscott for the cobra pattern!
Does this PR introduce a user-facing change?
For user-facing changes, please add a concise, human-readable release note to
the release-note
(In particular, describe what changes users might need to make in their
application as a result of this pull request.)
- cmd: Refactor to make importable
- options: Add support for parsing via environment variables
- options: Support setting feature flags via option
- cmd: Replace `version` with sigs.k8s.io/release-utils/version
- cmd: Move option validation into pre-run function
Love the changes, thanks!
Hehe, I'm already excited!
❯ scorecard-action --help
A program that shows security scorecard for an open source software.
Usage:
./scorecard [--repo=<repo_url>] [--local=folder] [--checks=check1,...]
[--show-details] or ./scorecard --{npm,pypi,rubygems}=<package_name>
[--checks=check1,...] [--show-details] [flags]
./scorecard [command]
Available Commands:
completion Generate the autocompletion script for the specified shell
help Help about any command
serve Serve the scorecard program over http
version Prints the version
Flags:
--checks strings Checks to run. Possible values are: Signed-Releases,CII-Best-Practices,Fuzzing,Token-Permissions,Pinned-Dependencies,Packaging,Binary-Artifacts,Branch-Protection,Contributors,License,Dangerous-Workflow,Maintained,Security-Policy,Vulnerabilities,CI-Tests,Code-Review,Dependency-Update-Tool,SAST
--commit string commit to analyze (default "HEAD")
--format string output format allowed values are [default, json] (default "default")
-h, --help help for ./scorecard
--local string local folder to check
--metadata strings metadata for the project. It can be multiple separated by commas
--npm string npm package to check, given that the npm package has a GitHub repository
--pypi string pypi package to check, given that the pypi package has a GitHub repository
--repo string repository to check
--rubygems string rubygems package to check, given that the rubygems package has a GitHub repository
--show-details show extra details about each check
--verbosity string set the log level (default "info")
Use "./scorecard [command] --help" for more information about a command.
Please do test out the command locally since we do not have tests for these as of now. Ref: #1690 #1691
Will do!
This is really cool! Thanks @justaugustus
|
gharchive/pull-request
| 2022-03-02T00:34:10 |
2025-04-01T04:35:26.717300
|
{
"authors": [
"justaugustus",
"naveensrinivasan"
],
"repo": "ossf/scorecard",
"url": "https://github.com/ossf/scorecard/pull/1696",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1964786666
|
RTC: Refine FFmpeg opus audio noisy issue.
Before calling the API av_audio_fifo_read, use av_frame_make_writable to check if it is writable. If not, create a new frame.
TRANS_BY_GPT4
Why change it this way? What problems would there be otherwise? What problem does this solve?
|
gharchive/pull-request
| 2023-10-27T05:20:19 |
2025-04-01T04:35:26.721557
|
{
"authors": [
"chundonglinlin",
"winlinvip"
],
"repo": "ossrs/srs",
"url": "https://github.com/ossrs/srs/pull/3852",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1458481112
|
container: Add support for copying optionally-present keys
This is to aid https://github.com/coreos/coreos-assembler/pull/3214 which is trying to inject the metadata key fedora-coreos.stream into the container image. However, this value will only be present in Fedora derivatives, and not RHEL/CentOS.
Add support for copying a key only if present, instead of erroring if it's missing.
Inbound fix for the IMA tests in https://github.com/ostreedev/ostree-rs-ext/pull/419
|
gharchive/pull-request
| 2022-11-21T19:43:24 |
2025-04-01T04:35:26.723317
|
{
"authors": [
"cgwalters"
],
"repo": "ostreedev/ostree-rs-ext",
"url": "https://github.com/ostreedev/ostree-rs-ext/pull/415",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
173653767
|
Password middleware
TODO:
[x] Checked on staging
This was pretty simple to implement
We should reload the page when the password change is successful. For the redirect, it would load the profile page
|
gharchive/pull-request
| 2016-08-28T16:04:42 |
2025-04-01T04:35:26.741381
|
{
"authors": [
"atareshawty"
],
"repo": "osumb/challenges",
"url": "https://github.com/osumb/challenges/pull/150",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
938797887
|
conflict with ESLint rule no-multi-spaces
Hi, I ran into an issue having the rule no-multi-spaces enabled, as it parsed YAML indentation as unnecessary spaces. When adding --fix it completely destroys the YAML file.
Solution: turning off the no-multi-spaces rule in base overrides.
I will make a PR later today. :)
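The override I have in mind looks roughly like this (an .eslintrc sketch; the file globs and parser line are just the usual eslint-plugin-yml setup, adjust to your own config):

{
  "overrides": [
    {
      "files": ["*.yaml", "*.yml"],
      "parser": "yaml-eslint-parser",
      "rules": {
        "no-multi-spaces": "off"
      }
    }
  ]
}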
I couldn't reproduce the conflicting issue. Would you please share the YAML file and the config to reproduce?
here's my config:
https://github.com/prazdevs/eslint-config/blob/main/packages/javascript/index.js
and a yaml file i used from a project:
https://github.com/prazdevs/potato-timer/blob/main/locales/en.yml
The config already has the override
|
gharchive/issue
| 2021-07-07T11:37:49 |
2025-04-01T04:35:26.794750
|
{
"authors": [
"ota-meshi",
"prazdevs"
],
"repo": "ota-meshi/eslint-plugin-yml",
"url": "https://github.com/ota-meshi/eslint-plugin-yml/issues/89",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1144632562
|
kubernetes-networks
Homework # completed
[x] Основное ДЗ
[ ] Задание со *
How to verify that it works:
minikube ssh 'curl 172.17.255.2/web/index.html'
PR checklist:
[x] Выставлен label с темой домашнего задания
Good afternoon! The homework has been done correctly
|
gharchive/pull-request
| 2022-02-19T09:40:47 |
2025-04-01T04:35:26.867392
|
{
"authors": [
"Konstantinov86",
"bananamove"
],
"repo": "otus-kuber-2021-09/bananamove_platform",
"url": "https://github.com/otus-kuber-2021-09/bananamove_platform/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
770779202
|
[BUG] Hook and gitrules precommit do not work
We have an empty repository on Windows, a fresh gitrules version, oscript 1.4
Add an XML file with the rules to the repository root.
Install the hook: gitrules install
git add .
git commit -m "init"
An error is shown (see the screenshot)
If gitrules precommit is called directly, the error is the same.
However, with gitrules export Правила.xml /out a src directory is created instead of out for some reason; the command completes successfully, but the rules are not parsed.
I had the same error: for some reason a method was missing from the module, I added it in #32
@ovcharenko-di @korolevpavel is everything OK now in 1.1.2?
@otymko checked, it works! Both install and export
|
gharchive/issue
| 2020-12-18T10:40:13 |
2025-04-01T04:35:26.878318
|
{
"authors": [
"korolevpavel",
"otymko",
"ovcharenko-di"
],
"repo": "otymko/gitrules",
"url": "https://github.com/otymko/gitrules/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2067844004
|
lineage_mutations is broken for us on latest version
It works fine with single lineage names but returns a results object with zero results when a list is provided.
I tried to show you the version we are running in the code, but it looks like you guys don't follow that particular convention, so it's this commit: aa5676d
So those work and are present. But when I ask for both of them in a list I get this:
Am I doing something wrong?
This is pretty frustrating since this was working fine a couple months ago.
For now, I can write a for loop and just ask for my lineages one API call at a time, but I am pretty sure y'all would prefer me hitting your API once vs the 2000 times I need.
Hi! Thanks for bringing this up - you aren't doing anything wrong, we clearly have some bugs going on on the backend. Someone from our team will look into this when we can!
So you think its server-side then? I suppose that would make sense. Thanks for checking it out.
@xguse Thank you for writing in. We deployed a new, more efficient version of our API recently which included some behavior changes like this one. We recommend installing this commit of the outbreak python package from github for the best compatibility: pip install git+https://github.com/outbreak-info/python-outbreak-info.git@d4a21203c27ecef41e3d6d97213133271af99e77. Documentation updates around these changes will be completed soon.
Previously, queries to lineage-mutations with multiple lineages were handled with an internal for loop, so that is an acceptable approach; please put a ~75ms sleep in the loop to reduce our peak loading so other users are less affected. At some point we may reimplement this feature using a more efficient strategy -- we'll let you know if we do.
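For illustration, the loop we have in mind looks roughly like this (a sketch only - get_lineage_mutations stands in for whatever single-lineage call you already use, it is not the actual package function name):

import time

results = {}
for lineage in lineages:
    # one lineage per request, with a ~75 ms pause to limit peak load on the API
    results[lineage] = get_lineage_mutations(lineage)
    time.sleep(0.075)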
@mindoftea Thanks for that explanation.
Did I interpret you correctly in the following summary:
updates were made to the backend before the frontend could catch up
that commit you suggested is the most synced up with the back-end, but its changes are undocumented
me doing a for loop is an acceptable work-around for now but to be nice to everyone I should sleep for 0.075 s each loop if I take this route
If I am correct there, I have now added this sleep.
Aside from me in development phase, I should only run this once a month on average, but I will still be doing the sleeping.
I will update the package itself once you make the release.
Thank you for everything.
|
gharchive/issue
| 2024-01-05T18:26:35 |
2025-04-01T04:35:26.913951
|
{
"authors": [
"cmaceves",
"mindoftea",
"xguse"
],
"repo": "outbreak-info/python-outbreak-info",
"url": "https://github.com/outbreak-info/python-outbreak-info/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1829166269
|
[2.2.0] Deleting a row leaves rows with a different row index than what is being submitted
SimpleRepeatable::make('Charges', 'charges',[
Currency::make('Limit', 'limit')->rules(['required', 'integer', 'min:1', 'distinct'])->step(1),
Currency::make('Charge', 'charge')->rules(['nullable', 'numeric']),
]),
Steps:
With the above field on a form add 3 rows you will end up with inputs with correct indexes 0,1,2
Submit the form with empty fields and you will get The charges.x.limit field is required on all 3 rows. This is expected.
{
"message": "The charges.0.limit field is required. (and 2 more errors)",
"errors": {
"charges.0.limit": [
"The charges.0.limit field is required."
],
"charges.1.limit": [
"The charges.1.limit field is required."
],
"charges.2.limit": [
"The charges.2.limit field is required."
]
}
}
Delete the first row
Submit the form again with empty fields, only the first row shows The charges.1.limit field is required
{
"message": "The charges.0.limit field is required. (and 1 more error)",
"errors": {
"charges.0.limit": [
"The charges.0.limit field is required."
],
"charges.1.limit": [
"The charges.1.limit field is required."
]
}
}
This is because the two rows have indexes 1 and 2 but the data being submitted for validation has indexes 0 and 1.
Thank you for report. This will be fixed in the next release.
Released in 2.2.1. Good luck!
@marttinnotta @Tarpsvo Could you reopen this, since the fix was reverted?
Thank you for the report!
We had some issues with the update that would send data in FormData format and this was also reverted thanks to that. Will look into it.
|
gharchive/issue
| 2023-07-31T13:13:52 |
2025-04-01T04:35:26.921294
|
{
"authors": [
"Tarpsvo",
"dmason30",
"marttinnotta"
],
"repo": "outl1ne/nova-simple-repeatable",
"url": "https://github.com/outl1ne/nova-simple-repeatable/issues/52",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1115543490
|
Add circuit breaker
Adds a circuit breaker that lowers the open interest cap in the event of significant minting in the recent past
Gas estimates for the full implementation. Still around 400k on build() prior to struct packing
OverlayV1Market <Contract>
├─ constructor - avg: 2524365 avg (confirmed): 2524365 low: 2524362 high: 2524374
├─ build - avg: 361614 avg (confirmed): 412329 low: 22228 high: 467567
├─ update - avg: 126435 avg (confirmed): 126435 low: 110828 high: 157915
└─ payFunding - avg: 38398 avg (confirmed): 38398 low: 38398 high: 38398
|
gharchive/pull-request
| 2022-01-26T22:01:09 |
2025-04-01T04:35:26.976388
|
{
"authors": [
"bob431136"
],
"repo": "overlay-market/v1-core",
"url": "https://github.com/overlay-market/v1-core/pull/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
199346057
|
Payment signature error
Developing a passphrase red packet, pure Composer mode
PHP version: 7.09
overtrue/wechat version: 3.1
Sending a normal cash red packet keeps returning a signature error; putting the outgoing XML into https://pay.weixin.qq.com/wiki/tools/signverify/ for verification shows the generated signature is consistent as well.
Debug info:
easywechat.DEBUG: Client Request: {"url":"https://api.mch.weixin.qq.com/mmpaymkttransfers/sendredpack","method":"POST","options":{"timeout":5,"body":"<xml><mch_billno>201701075651429092</mch_billno><send_name><![CDATA[Lucky Money]]></send_name><re_openid><![CDATA[oCQj_wkvh-3_PLrnNz4BCIHJNorc]]></re_openid><total_amount>166</total_amount><wishing><![CDATA[Chinese new year]]></wishing><act_name><![CDATA[redpack test]]></act_name><remark><![CDATA[记得还钱]]></remark><total_num>1</total_num><client_ip><![CDATA[121.40.125.246]]></client_ip><wxappid><![CDATA[wxa10ac71a8741474e]]></wxappid><mch_id>1405516602</mch_id><nonce_str><![CDATA[587087f431ebc]]></nonce_str><sign><![CDATA[FEB8085F70467FC4C3ED94DAAF7371EF]]></sign></xml>","cert":"/var/www/datas/ca/apiclient_cert.pem","ssl_key":"/var/www/datas/ca/apiclient_key.pem"}} []
<xml> <mch_billno>201701075651429092</mch_billno> <send_name><![CDATA[Lucky Money]]></send_name> <re_openid><![CDATA[oCQj_wkvh-3_PLrnNz4BCIHJNorc]]></re_openid> <total_amount>166</total_amount> <wishing><![CDATA[Chinese new year]]></wishing> <act_name><![CDATA[redpack test]]></act_name> <remark><![CDATA[记得还钱]]></remark> <total_num>1</total_num> <client_ip><![CDATA[121.40.125.246]]></client_ip> <wxappid><![CDATA[wxa10ac71a8741474e]]></wxappid> <mch_id>1405516602</mch_id> <nonce_str><![CDATA[587087f431ebc]]></nonce_str> <sign><![CDATA[FEB8085F70467FC4C3ED94DAAF7371EF]]></sign> </xml>
Thank you very much.
@liuyami use a packet-capture tool to check whether the request that finally goes out over the network contains any unexpected content, e.g. something prepended or appended
@overtrue thank you very much. I called WeChat and it was resolved; my account had a minor anomaly (not security related). There is no problem now. Sorry for the trouble.
|
gharchive/issue
| 2017-01-07T06:40:38 |
2025-04-01T04:35:26.985648
|
{
"authors": [
"liuyami",
"overtrue"
],
"repo": "overtrue/wechat",
"url": "https://github.com/overtrue/wechat/issues/563",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
469243024
|
CLA is incorrect
The CLA for this project mentions projects made available by SAP which seems incorrect.
https://cla-assistant.io/ovh/ovh-warp10-datasource?pullRequest=41
Before this is fixed the CLA cannot be signed.
If it has been signed prior to this issue being fixed, then we can conclude the signing party did not read it!
Thanks! We just updated it, good catch!
|
gharchive/issue
| 2019-07-17T14:19:17 |
2025-04-01T04:35:26.993652
|
{
"authors": [
"hbs",
"miton18"
],
"repo": "ovh/ovh-warp10-datasource",
"url": "https://github.com/ovh/ovh-warp10-datasource/issues/42",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
859159048
|
build: use direnv and automate installation of dev dependencies
Setup environment using direnv
install direnv if not yet using it
checkout this branch
you will be prompted to load .envrc with the direnv allow command. This command has to be run only once as long as .envrc remains unchanged
it sets up a bunch of environment variables and PATH entries
inside the akash repo, akash points to <akash sources>/.cache/bin/akash as long as it has been compiled, for example with make akash
run configurations have their own .envrc files, which make AKASH_HOME point to .cache/run/{kube|lite|single}
each dependency is installed to .cache when it is needed (a sketch of a minimal .envrc follows after this list)
install k8s.io to .cache
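For readers unfamiliar with direnv, a minimal sketch of what such a repo-level .envrc can look like (illustrative only, not the actual file contents of this PR; the variable names mirror the ones exported in the logs below):

# .envrc - loaded automatically by direnv once allowed with `direnv allow`
export AKASH_ROOT=$(pwd)
export AKASH_DEVCACHE=$AKASH_ROOT/.cache
# put locally built binaries first on PATH
PATH_add "$AKASH_DEVCACHE/bin"
export AKASH=$AKASH_DEVCACHE/bin/akash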
Signed-off-by: Artur Troian troian.ap@gmail.com
[x] Update README instructions for local deployment
I checked out this branch and did make setup-devenv and it failed with this
$ make setup-devenv
installing protoc compiler v3.13.0 ...
rm -f /home/ericu/tmp/akash/.cache/bin/protoc
(cd /tmp; \
curl -sOL "https://github.com/protocolbuffers/protobuf/releases/download/v3.13.0/protoc-3.13.0-linux-x86_64.zip"; \
unzip -oq protoc-3.13.0-linux-x86_64.zip -d /home/ericu/tmp/akash/.cache bin/protoc; \
unzip -oq protoc-3.13.0-linux-x86_64.zip -d /home/ericu/tmp/akash/.cache 'include/*'; \
rm -f protoc-3.13.0-linux-x86_64.zip)
rm -rf "CACHE}/versions/protoc/"
mkdir -p "CACHE}/versions/protoc/"
touch CACHE}/versions/protoc/3.13.0
Installing protoc-gen-grpc-gateway vv1.16.0 ...
rm -f /home/ericu/tmp/akash/.cache/bin/protoc-gen-grpc-gateway
curl -o "/home/ericu/tmp/akash/.cache/bin/protoc-gen-grpc-gateway" -L \
"https://github.com/grpc-ecosystem/grpc-gateway/releases/download/vv1.16.0/protoc-gen-grpc-gateway-vv1.16.0-linux-x86_64"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 9 100 9 0 0 27 0 --:--:-- --:--:-- --:--:-- 27
chmod +x "/home/ericu/tmp/akash/.cache/bin/protoc-gen-grpc-gateway"
rm -rf "CACHE}/versions/protoc-gen-grpc-gateway/"
mkdir -p "CACHE}/versions/protoc-gen-grpc-gateway/"
touch CACHE}/versions/protoc-gen-grpc-gateway/v1.16.0
make: *** No rule to make target 'protoc-swagger', needed by 'setup-devenv'. Stop.
@hydrogen18 fixed.
make sure to run direnv allow after pull. there was a typo in .env
I'm kinda confused at this point. I did make setup-devenv then make codegen on the latest version, it bombs on the protoc step but I'm unsure why
ericu@ericu-acer-laptop:~/tmp/akash$ make codegen
GO111MODULE=on go generate ./...
GO111MODULE=on go mod tidy
go mod vendor
vendoring non-go files...
/home/ericu/tmp/akash/.cache/bin/modvendor -copy="**/*.proto" -include=\
github.com/cosmos/cosmos-sdk/proto,\
github.com/cosmos/cosmos-sdk/third_party/proto
/home/ericu/tmp/akash/.cache/bin/modvendor -copy="**/*.h **/*.c" -include=\
github.com/zondax/hid
/home/ericu/tmp/akash/.cache/bin/modvendor -copy="**/swagger.yaml" -include=\
github.com/cosmos/cosmos-sdk/client/docs
./script/protocgen.sh
/home/ericu/tmp/akash/.cache/bin/protoc-gen-grpc-gateway: 1: Not: not found
--grpc-gateway_out: protoc-gen-grpc-gateway: Plugin failed with status code 127.
make: *** [make/codegen.mk:31: proto-gen] Error 1
ericu@ericu-acer-laptop:~/tmp/akash$ ls -l /home/ericu/tmp/akash/.cache/bin/protoc-gen-grpc-gateway
-rwxrwxr-x 1 ericu ericu 9 May 6 10:45 /home/ericu/tmp/akash/.cache/bin/protoc-gen-grpc-gateway
@hydrogen18 fixed. don't need to run make setup-devenv btw. codegen and other targets have everything listed as dependencies
make clean-cache
make codegen
$ make clean-cache
make: *** No rule to make target 'clean-cache'. Stop.
There doesn't appear to be a clean-cache task
sorry, it is other way round cache-clean
It's still blowing up but the swagger-combine step is working now I think
direnv: loading ~/tmp/akash/.envrc
make: '/home/ericu/tmp/akash/.cache' is up to date.
direnv: export +AKASH +AKASH_DEVCACHE +AKASH_DEVCACHE_BASE +AKASH_DEVCACHE_BIN +AKASH_DEVCACHE_INCLUDE +AKASH_DEVCACHE_NODE_BIN +AKASH_DEVCACHE_NODE_MODULES +AKASH_DEVCACHE_VERSIONS +AKASH_ROOT +AKASH_RUN +GO111MODULE +GOLANG_VERSION +KIND_VERSION +ROOT_DIR ~PATH
ericu@ericu-acer-laptop:~/tmp/akash$ direnv allow
direnv: loading ~/tmp/akash/.envrc
make: '/home/ericu/tmp/akash/.cache' is up to date.
direnv: export +AKASH +AKASH_DEVCACHE +AKASH_DEVCACHE_BASE +AKASH_DEVCACHE_BIN +AKASH_DEVCACHE_INCLUDE +AKASH_DEVCACHE_NODE_BIN +AKASH_DEVCACHE_NODE_MODULES +AKASH_DEVCACHE_VERSIONS +AKASH_ROOT +AKASH_RUN +GO111MODULE +GOLANG_VERSION +KIND_VERSION +ROOT_DIR ~PATH
ericu@ericu-acer-laptop:~/tmp/akash$ make cache-clean && make codegen
rm -rf /home/ericu/tmp/akash/.cache
GO111MODULE=on go generate ./...
creating .cache dir structure...
mkdir -p /home/ericu/tmp/akash/.cache
mkdir -p /home/ericu/tmp/akash/.cache/bin
mkdir -p /home/ericu/tmp/akash/.cache/include
mkdir -p /home/ericu/tmp/akash/.cache/versions
mkdir -p /home/ericu/tmp/akash/.cache
mkdir -p /home/ericu/tmp/akash/.cache/run
installing protoc compiler v3.13.0 ...
rm -f /home/ericu/tmp/akash/.cache/bin/protoc
(cd /tmp; \
curl -sOL "https://github.com/protocolbuffers/protobuf/releases/download/v3.13.0/protoc-3.13.0-linux-x86_64.zip"; \
unzip -oq protoc-3.13.0-linux-x86_64.zip -d /home/ericu/tmp/akash/.cache bin/protoc; \
unzip -oq protoc-3.13.0-linux-x86_64.zip -d /home/ericu/tmp/akash/.cache 'include/*'; \
rm -f protoc-3.13.0-linux-x86_64.zip)
rm -rf "/home/ericu/tmp/akash/.cache/versions/protoc/"
mkdir -p "/home/ericu/tmp/akash/.cache/versions/protoc/"
touch /home/ericu/tmp/akash/.cache/versions/protoc/3.13.0
Installing protoc-gen-grpc-gateway vv1.16.0 ...
rm -f /home/ericu/tmp/akash/.cache/bin/protoc-gen-grpc-gateway
curl -o "/home/ericu/tmp/akash/.cache/bin/protoc-gen-grpc-gateway" -L \
"https://github.com/grpc-ecosystem/grpc-gateway/releases/download/v1.16.0/protoc-gen-grpc-gateway-v1.16.0-linux-x86_64"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 650 100 650 0 0 1934 0 --:--:-- --:--:-- --:--:-- 1928
100 4608k 100 4608k 0 0 3188k 0 0:00:01 0:00:01 --:--:-- 12.8M
chmod +x "/home/ericu/tmp/akash/.cache/bin/protoc-gen-grpc-gateway"
rm -rf "/home/ericu/tmp/akash/.cache/versions/protoc-gen-grpc-gateway/"
mkdir -p "/home/ericu/tmp/akash/.cache/versions/protoc-gen-grpc-gateway/"
touch /home/ericu/tmp/akash/.cache/versions/protoc-gen-grpc-gateway/v1.16.0
installing protoc-gen-cosmos v0.3.1 ...
rm -f /home/ericu/tmp/akash/.cache/bin/protoc-gen-cosmos
GOBIN=/home/ericu/tmp/akash/.cache/bin go get github.com/regen-network/cosmos-proto/protoc-gen-gocosmos@v0.3.1
rm -rf "/home/ericu/tmp/akash/.cache/versions/protoc-gen-cosmos/"
mkdir -p "/home/ericu/tmp/akash/.cache/versions/protoc-gen-cosmos/"
touch /home/ericu/tmp/akash/.cache/versions/protoc-gen-cosmos/v0.3.1
GO111MODULE=on go mod tidy
go mod vendor
installing modvendor v0.3.0 ...
rm -f /home/ericu/tmp/akash/.cache/bin/modvendor
GOBIN=/home/ericu/tmp/akash/.cache/bin GO111MODULE=on go install github.com/goware/modvendor@v0.3.0
rm -rf "/home/ericu/tmp/akash/.cache/versions/modvendor/"
mkdir -p "/home/ericu/tmp/akash/.cache/versions/modvendor/"
touch /home/ericu/tmp/akash/.cache/versions/modvendor/v0.3.0
vendoring non-go files...
/home/ericu/tmp/akash/.cache/bin/modvendor -copy="**/*.proto" -include=\
github.com/cosmos/cosmos-sdk/proto,\
github.com/cosmos/cosmos-sdk/third_party/proto
/home/ericu/tmp/akash/.cache/bin/modvendor -copy="**/*.h **/*.c" -include=\
github.com/zondax/hid
/home/ericu/tmp/akash/.cache/bin/modvendor -copy="**/swagger.yaml" -include=\
github.com/cosmos/cosmos-sdk/client/docs
./script/protocgen.sh
Installing statik v0.1.7 ...
rm -f /home/ericu/tmp/akash/.cache/bin/statik
GOBIN=/home/ericu/tmp/akash/.cache/bin GO111MODULE=on go install github.com/rakyll/statik@v0.1.7
rm -rf "/home/ericu/tmp/akash/.cache/versions/statik/"
mkdir -p "/home/ericu/tmp/akash/.cache/versions/statik/"
touch /home/ericu/tmp/akash/.cache/versions/statik/v0.1.7
installing protoc-gen-swagger v1.16.0 ...
GOBIN=/home/ericu/tmp/akash/.cache/bin GO111MODULE=on go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger@v1.16.0
Installing swagger-combine...
npm install swagger-combine --prefix /home/ericu/tmp/akash/.cache
npm WARN saveError ENOENT: no such file or directory, open '/home/ericu/tmp/akash/.cache/package.json'
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN enoent ENOENT: no such file or directory, open '/home/ericu/tmp/akash/.cache/package.json'
npm WARN .cache No description
npm WARN .cache No repository field.
npm WARN .cache No README data
npm WARN .cache No license field.
+ swagger-combine@1.3.0
added 24 packages from 24 contributors and audited 24 packages in 2.787s
found 0 vulnerabilities
./script/protoc-swagger-gen.sh
/home/ericu/tmp/akash/.cache/bin/statik -src=client/docs/swagger-ui -dest=client/docs -f -m
Swagger docs are in sync
installing k8s code-generator v0.19.3 ...
rm -f /home/ericu/tmp/akash/.cache/bin/go-to-protobuf
GOBIN=/home/ericu/tmp/akash/.cache/bin go install /home/ericu/tmp/akash/vendor/k8s.io/code-generator/...
rm -rf "/home/ericu/tmp/akash/.cache/versions/k8s-codegen/"
mkdir -p "/home/ericu/tmp/akash/.cache/versions/k8s-codegen/"
touch /home/ericu/tmp/akash/.cache/versions/k8s-codegen/v0.19.3
chmod +x /home/ericu/tmp/akash/vendor/k8s.io/code-generator/generate-groups.sh
GOBIN=/home/ericu/tmp/akash/.cache/bin /home/ericu/tmp/akash/vendor/k8s.io/code-generator/generate-groups.sh all \
github.com/ovrclk/akash/pkg/client github.com/ovrclk/akash/pkg/apis \
akash.network:v1
go: downloading golang.org/x/tools v0.0.0-20200616133436-c1934b75d054
go: downloading golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543
Generating deepcopy funcs
F0506 14:54:35.592629 307506 deepcopy.go:131] Failed loading boilerplate: open k8s.io/code-generator/hack/boilerplate.go.txt: no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc0000bc001, 0xc011def2c0, 0x99, 0xe9)
/home/ericu/go/pkg/mod/k8s.io/klog/v2@v2.2.0/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x964660, 0xc000000003, 0x0, 0x0, 0xc0128db180, 0x8292db, 0xb, 0x83, 0x0)
/home/ericu/go/pkg/mod/k8s.io/klog/v2@v2.2.0/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x964660, 0x3, 0x0, 0x0, 0x76a406, 0x1e, 0xc0140ffae8, 0x1, 0x1)
/home/ericu/go/pkg/mod/k8s.io/klog/v2@v2.2.0/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
/home/ericu/go/pkg/mod/k8s.io/klog/v2@v2.2.0/klog.go:1456
k8s.io/gengo/examples/deepcopy-gen/generators.Packages(0xc012756460, 0xc0000c6fa0, 0x747a9b, 0x6, 0xc012756460)
/home/ericu/go/pkg/mod/k8s.io/gengo@v0.0.0-20200428234225-8167cfdcfc14/examples/deepcopy-gen/generators/deepcopy.go:131 +0xfd
k8s.io/gengo/args.(*GeneratorArgs).Execute(0xc0000c6fa0, 0xc0140ffe38, 0x747a9b, 0x6, 0x775000, 0x0, 0x0)
/home/ericu/go/pkg/mod/k8s.io/gengo@v0.0.0-20200428234225-8167cfdcfc14/args/args.go:206 +0x1b7
main.main()
/home/ericu/tmp/akash/vendor/k8s.io/code-generator/cmd/deepcopy-gen/main.go:77 +0x46b
goroutine 19 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x964660)
/home/ericu/go/pkg/mod/k8s.io/klog/v2@v2.2.0/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
/home/ericu/go/pkg/mod/k8s.io/klog/v2@v2.2.0/klog.go:416 +0xd8
make: *** [make/codegen.mk:25: kubetypes] Error 255
I tried running make modvendor but it didn't change anything
fixed
💯 . made it
|
gharchive/pull-request
| 2021-04-15T18:49:16 |
2025-04-01T04:35:27.039317
|
{
"authors": [
"hydrogen18",
"troian"
],
"repo": "ovrclk/akash",
"url": "https://github.com/ovrclk/akash/pull/1214",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2267043256
|
📊 grapher/regions/latest: Add income group information
This is working towards https://github.com/owid/owid-grapher/issues/3517.
This is adding the 4 WB income groups as rows to the regions.csv file which is only consumed by owid-grapher.
These rows get the new entity type income_group, which we will then handle in grapher. We mostly care about the members of these income groups; and for these to be up to date, we will want to update this step to use the latest wb/.../income_groups classification once a new one becomes available.
This also assigns new entity codes to these four regions: OWID_WB_LIC, OWID_WB_LMC, OWID_WB_UMC, OWID_WB_HIC. The last part of these are the official 3-letter codes that the WB uses for these entities.
Changing these entity codes in the future is going to be a major pain, so please let me know if you would want to use different ones instead.
The output of this step can be found here: http://staging-site-grapher-regions-income-groups:8881/grapher/regions/latest/regions/regions.csv
Hi @marcelgerber, regarding codes for income groups, in other places of OWID I have seen "OWID_LIC", "OWID_LMC", "OWID_UMC", "OWID_HIC", for example in FAOSTAT. I don't think those codes are relevant, but it makes me think that maybe these codes have been used also elsewhere, so maybe it's safer to stick to them.
This PR is in conflict with https://github.com/owid/etl/pull/2534. So let me think about both proposals and decide about a good solution.
Sure, I'll change the entity codes then to keep them consistent. Let me know how you want to proceed in regards to #2534.
Sure, I'll change the entity codes then to keep them consistent. Let me know how you want to proceed in regards to #2534.
Don't worry about it. I've created another PR with a different logic: https://github.com/owid/etl/pull/2587
I'll discuss it tomorrow in the data architecture call and then decide which approach to take.
I'd suggest closing this PR, and instead go with https://github.com/owid/etl/pull/2587 (we can discuss it later).
Note that the resulting file you could use in grapher would be http://staging-site-create-external-channel:8881/external/owid_grapher/latest/regions/regions.csv
Nice. I've read through your proposal, and all of it makes a lot of sense to me. Thank you for thinking this through!
Hey @marcelgerber you can now access the regions data in https://catalog.ourworldindata.org/external/owid_grapher/latest/regions/regions.csv
Please let me know if there's any issue or you want to change anything there, thanks!
|
gharchive/pull-request
| 2024-04-27T14:39:56 |
2025-04-01T04:35:27.068907
|
{
"authors": [
"marcelgerber",
"pabloarosado"
],
"repo": "owid/etl",
"url": "https://github.com/owid/etl/pull/2571",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
756088931
|
[QA] sharing a file fails on eos
Deployed ocis-1.0.0-rc6 via hetzner-deploy/make_ocis_eos_compose_test.sh
connect a desktop client with user marie, let it sync and watch for issues while doing the next steps
on the WBE-UI:
log in as einstein
share file Portrait.jpg with user marie
create a public share link for the file Portrait.jpg
with a fresh browser session, log in as marie,
accept the share,
go to 'All Files' -> Shares, Portrait.jpg is there. OK
click open the file, it fails showing a red triangle with exlamation mark.
with a fresh browser session, visit the public share link
the link opens showing a view with file Portrait.jpg
click open the file, it fails showing a red triangle with exlamation mark.
marie's desktop client tries to sync down the accepted share, but fails with internal error 500.
12/3/20 11:46:04 AM, Shares/Portrait.jpg, testpilotcloud3,Server replied "500 Internal Server Error" to "GET https://X.X.X.X:9200/remote.php/webdav/Shares/Portrait.jpg" (skipped due to earlier error, trying again in 42 minute(s))
RC6 still has the old eos 4.5.6
To be retested with RC7 on updated eos 4.8.28
Upstream https://github.com/owncloud/ocis-reva/issues/12
|
gharchive/issue
| 2020-12-03T10:55:54 |
2025-04-01T04:35:27.328964
|
{
"authors": [
"jnweiger",
"micbar"
],
"repo": "owncloud/ocis",
"url": "https://github.com/owncloud/ocis/issues/1015",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
913546161
|
Enablement for OnlyOffice integration
Feature
Use OnlyOffice with a Full Stack ocis Instance.
Building Blocks
Challenges
How would the document server authenticate requests without a user context?
How does the connector service create a user context for the Storage Server?
@butonic @wkloucek @dragotin
We need to help OnlyOffice on the Fastlane IMO.
@wkloucek How does the Collabora / WOPI Server stack solve this issue conceptually?
@antipkin-A @linneys FYI
Questions
How should one configure CS3 client when working from our plugin? Is
there a possibility to load the pre-configured client?
REVA gateway client example https://github.com/cs3org/reva/blob/183af2f94288991e177b41e6bbe44aaa9cfe3baf/cmd/reva/grpc.go#L49
How can we send an authorized CS3 API request on behalf of the user from the plugin backend? In the chat you specified that such a request might be sent via a Go context, but it is unclear to us which key has to be used, i.e. the key that other services will check for the presence of the authorization token.
oCIS proxy always sends REVA token with requests (https://github.com/owncloud/ocis/blob/2927dc45c39793e42ce686c6cf1508118eab744d/proxy/pkg/middleware/account_resolver.go#L123), therefore it is present in the "x-access-token" header
this token can then be put in a ctx https://github.com/cs3org/reva/blob/183af2f94288991e177b41e6bbe44aaa9cfe3baf/cmd/reva/grpc.go#L44-L45 and the context needs to be passed to the grpc request https://github.com/cs3org/reva/blob/183af2f94288991e177b41e6bbe44aaa9cfe3baf/cmd/reva/upload.go#L106
Do you have any simple usage example of CS3 API InitiateFileDownload
and InitiateFileUpload methods?
upload https://github.com/cs3org/reva/blob/master/cmd/reva/upload.go
download https://github.com/cs3org/reva/blob/master/cmd/reva/download.go
Is there any data that must be additionally encrypted when sent to
the Document Server that might be hosted on another server?
https should always be used (with the insecure option for development)
there is no file encryption
Implementation recommendation
app provider
demo implementation only https://github.com/cs3org/reva/blob/master/pkg/app/provider/demo/demo.go
wopi example https://github.com/cs3org/reva/blob/f004c26ecbc20e21543d6db27e2d5bfce9022146/pkg/app/provider/wopi/wopi.go#L120
App provider workflow
only office driver registers itself at the app registry https://github.com/cs3org/reva/blob/f004c26ecbc20e21543d6db27e2d5bfce9022146/pkg/app/provider/demo/demo.go#L35 with mimetypes it can handle and additional information https://github.com/cs3org/cs3apis/blob/63c2cee07f9008758a48691dfa45e4181b800b81/cs3/app/registry/v1beta1/resources.proto#L34-L59
if a user decides to open a file with only office "GetAppURL" will be called in the only office driver: https://github.com/cs3org/reva/blob/f004c26ecbc20e21543d6db27e2d5bfce9022146/pkg/app/provider/demo/demo.go#L42-L48. This will lead ownCloud Web to open the given content in an iframe (as a form post or with a http get, is currently being implemented).
In order for this to work you need a server that:
serves some html wich can be embedded into that iframe
The html in the iframe can receive a token and additional information via form parameters (form post) or as headers (http get).
The token can then be used to load the document via the CS3APIs and display it (from the backend which also served the html)
In order for OnlyOffice to work with the CS3 WOPI server, we need this patch: https://github.com/cs3org/wopiserver/pull/47
Then following deployment can open and edit files with OnlyOffice: https://github.com/owncloud/ocis/pull/2478
In order that OnlyOffice works with the CS3 WOPI server, we need this patch: cs3org/wopiserver#47
Actually, with the latest fixes where Reva mints short tokens for WOPI, that patch ought to be obsolete. And if not, it should be rediscussed, but I still have the conceptual concern I raised there: I don't think it's good for WOPI to inspect the Reva token; so far WOPI treats it entirely as opaque info representing the user's credentials.
Yes, I think we can close here. Anything else (bugs or future features) deserves new, dedicated issues.
|
gharchive/issue
| 2021-06-07T13:23:16 |
2025-04-01T04:35:27.342315
|
{
"authors": [
"glpatcern",
"kulmann",
"micbar",
"pmaier1",
"wkloucek"
],
"repo": "owncloud/ocis",
"url": "https://github.com/owncloud/ocis/issues/2132",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1890247103
|
Empty search list when sharee searches a folder that has been renamed by sharer
Describe the bug
When the sharee searches for a folder after it has been renamed by the sharer, the sharee's search result list is empty.
Steps to reproduce
Steps to reproduce the behavior:
As marie, create a folder folder1
As marie, share a folder to einstein
As einstein, accept the share
As marie, rename folder folder1 to folder2
For einstein, there is the folder folder1 in the shared-with-me page
As einstein, search for the folder folder1 using the web UI
Expected behavior
The folder folder1 should be contained in the search list
Actual behavior
The search list does not contain folder1
But
when einstein searches resources by name folder2
Yes, this is a problem. The back-end search facility needs to (somehow) understand that Einstein has a folder named folder1 in his view of his file system.
And probably the back-end search-engine knows the latest name that Marie has given the folder, and it is indexed somewhere by that name.
And, if Einstein himself renames folder1 to folder3 locally for him, then the search-engine needs to find it by the name folder3 - so there are a few variations of renaming-actions that need to work and be tested.
CC @ScharfViktor
I see that we have raised this issue here https://github.com/owncloud/ocis/issues/6376 as well.
oc10 has same behavior:
and based on this comment https://github.com/owncloud/ocis/issues/6376#issuecomment-1611771332, I understand that we want to leave it like it is, so we close issue
Correct. Renaming shares does not affect both sides. Everybody can do renames which are not visible to the other share participants.
Understood, the renaming part is clear.
We need to be clear about the search? bug/feature?
By which name should the sharee search for the received file? By the name that appears in the shared-with-me page, or by the name the sharer renamed it to, which the sharee has no idea about? :thinking:
This issue is NOT about the different names of the folder as seen by the sharer and sharee.
It is about what happens when the sharee tries to search for the folder after the sharer has changed the folder name (and the folder name for the sharee is the same as previously)
From my pov, the search needs to be consistent with the name in the file list / shared with me list.
|
gharchive/issue
| 2023-09-11T10:59:59 |
2025-04-01T04:35:27.351574
|
{
"authors": [
"ScharfViktor",
"micbar",
"nabim777",
"phil-davis",
"saw-jan"
],
"repo": "owncloud/ocis",
"url": "https://github.com/owncloud/ocis/issues/7262",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2310234916
|
Restrict users to only share with users in their groups and groups they are member of
Is your feature request related to a problem?
ownCloud has these settings: "Restrict users to only share with users in their groups" and "Restrict users to only share with groups they are member of". Can you implement the same for OCIS?
Describe the solution you'd like
Have "Restrict users to only share with users in their groups" and "Restrict users to only share with groups they are member of" settings so that user cannot see everybody including admin accounts on the system (which is no good).
Describe alternatives you've considered
None
Additional context
NextCloud also has this "Restrict users to only share with users in their groups" setting.
Thank you very much
This is a duplicate of https://github.com/owncloud/ocis/issues/9293
Thank you.
The linked issue is something different about roles.
Oops, this is the correct linked issue:
https://github.com/owncloud/ocis/issues/8560
|
gharchive/issue
| 2024-05-22T11:03:21 |
2025-04-01T04:35:27.355726
|
{
"authors": [
"cheegui",
"micbar"
],
"repo": "owncloud/ocis",
"url": "https://github.com/owncloud/ocis/issues/9235",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1305621259
|
[tests-only][full-ci]Bump CORE_COMMITID for tests
Description
Related Issue
https://github.com/owncloud/QA/issues/748
Motivation and Context
How Has This Been Tested?
test environment:
test case 1:
test case 2:
...
Screenshots (if appropriate):
Types of changes
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
[ ] Technical debt
[x] Tests only (no source changes)
Checklist:
[ ] Code changes
[ ] Unit tests added
[ ] Acceptance tests added
[ ] Documentation ticket raised:
@SagarGi please rebase after #4207 has been merged. And watch for other PRs that pull in the latest reva. We need to avoid having the commit id bump and other PRs getting any expected-failures changes out-of-sync.
Also let's update the id once this PR is merged https://github.com/owncloud/core/pull/40210 hopefully the CI will be more stable after this
core PR https://github.com/owncloud/core/pull/40210 has been merged - commit id can be updated again here.
core PR owncloud/core#40210 has been merged - commit id can be updated again here.
done in latest commit!
|
gharchive/pull-request
| 2022-07-15T06:12:49 |
2025-04-01T04:35:27.362363
|
{
"authors": [
"SagarGi",
"SwikritiT",
"phil-davis"
],
"repo": "owncloud/ocis",
"url": "https://github.com/owncloud/ocis/pull/4205",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
435647454
|
Want to add kindness to the "good points" assessment results
The following result should be added:
'{ userName }のいいところは優しさです。あなたの優しい雰囲気や立ち振る舞いに多くの人が癒やされています。' (roughly: "{ userName }'s good point is kindness. Many people are soothed by your gentle aura and demeanor.")
I will work on this now.
Handled in a2e5de2
|
gharchive/issue
| 2019-04-22T08:32:04 |
2025-04-01T04:35:27.414360
|
{
"authors": [
"owt3"
],
"repo": "owt3/assessment",
"url": "https://github.com/owt3/assessment/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2359886362
|
refactor(semantic): make control flow generation optional.
For maximum backward compatibility, we generate CFG by default.
Note: it can't be done with a simple method since lifetimes make it impossible (at least without unsafe trickery). I've tried to do it without a macro, but it was just unintuitive.
[!WARNING]
This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more
#3737 👈
#3728
main
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @rzvxa and the rest of your teammates on Graphite
Nice! Would be nice if we get rid of the macro.
I'll make it off by default in the next PR.
I'm working on making it disabled by default in the next few PRs. I first have to make CFG generation optional in linter tests (so we can mark whether a rule needs the CFG to work).
Ok, shall we merge or get rid of the macro?
Ok, shall we merge or get rid of the macro?
Do we want to feature-gate the CFG in the future? If that's the case, I think the macro can help us. Otherwise, we should be fine with mapping the option.
I can also give the inline function another try; maybe I can find a safe trick to make it work.
Ok, let's merge and iterate.
Oh poop. PR got merged while I was reviewing! @rzvxa I don't know if you think any of my comments above are worth considering and addressing in follow ups, or not?
@overlookmotel I've made some comments about the review conversations above, Let me know about your thoughts so I can integrate them in the next PR(s).
|
gharchive/pull-request
| 2024-06-18T13:35:03 |
2025-04-01T04:35:27.424224
|
{
"authors": [
"Boshen",
"overlookmotel",
"rzvxa"
],
"repo": "oxc-project/oxc",
"url": "https://github.com/oxc-project/oxc/pull/3737",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2522320349
|
ci: NeverGrowInPlaceAllocator not pub
The global allocator used in benchmarks, NeverGrowInPlaceAllocator, doesn't need to be pub.
#5727 👈
main
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @overlookmotel and the rest of your teammates on Graphite
Small change, and uncontroversial, so merging without review.
|
gharchive/pull-request
| 2024-09-12T12:54:11 |
2025-04-01T04:35:27.427739
|
{
"authors": [
"overlookmotel"
],
"repo": "oxc-project/oxc",
"url": "https://github.com/oxc-project/oxc/pull/5727",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2142732778
|
mockObservationPeriod
mockObservationPeriod(cdm, earliestStart = "2010-01-01")
added
|
gharchive/issue
| 2024-02-19T16:17:33 |
2025-04-01T04:35:27.428558
|
{
"authors": [
"edward-burn",
"ilovemane"
],
"repo": "oxford-pharmacoepi/omock",
"url": "https://github.com/oxford-pharmacoepi/omock/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1210459392
|
Use explicit usize for VALUE type
What?
This PR makes rb-sys use an explicity defined VALUE type instead of it being generated by bindgen.
Why?
The logic of making VALUE a usize is fully compatible with a C ulong
VALUE is pervasive enough that it deserves it's own special type, with derivations and such
Using the same definition as ruby-core seems like a wise idea
I think it's probably not worth it if it's going to be more painful to integrate, so I'm going to close this for now. If we want to switch later, it's easy enough anyway.
Also, thanks for taking the time for the thoughtful response.
|
gharchive/pull-request
| 2022-04-21T03:50:20 |
2025-04-01T04:35:27.431708
|
{
"authors": [
"ianks"
],
"repo": "oxidize-rb/rb-sys",
"url": "https://github.com/oxidize-rb/rb-sys/pull/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1451104652
|
Error while storing triples
Thank you for making pyoxigraph.
We want to use pyoxigraph for persistence with acceptable performance in Python. When reading larger nt files (> 350MB) on Windows 10 using Python 3.10 and pyoxigraph 0.3.8 with
store.load(f"./filename.nt", "application/n-triples")
I consistently get the error:
fatal runtime error: Rust cannot catch foreign exceptions
How can I debug this further to locate the reason for this error?
Hi! Thank you! Sorry for that, it sounds like an exception from the C++ code of RocksDB. RocksDB is not supposed to throw exceptions but maybe it does. The load method is transactional and quite memory intensive. The bulk_load method removes this restriction and is less memory consuming and faster.
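For reference, a minimal sketch of the bulk_load call (assuming the same path and MIME-type arguments as load, and an on-disk store directory of your choosing):

from pyoxigraph import Store

store = Store("./store_dir")  # on-disk, RocksDB-backed store
# bulk_load drops the transactional guarantees of load(), which makes it
# faster and less memory hungry for large files
store.bulk_load("./filename.nt", "application/n-triples")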
Thank you, that works indeed better.
Even a 3.6 GB ttl file is no problem and is parsed in 5 minutes on a small laptop.
Great!
I believe this could be now closed (feel free to open it again if it's not the case)
|
gharchive/issue
| 2022-11-16T08:18:30 |
2025-04-01T04:35:27.434826
|
{
"authors": [
"RichDijk",
"Tpt"
],
"repo": "oxigraph/oxigraph",
"url": "https://github.com/oxigraph/oxigraph/issues/283",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2330984135
|
Is there a way to reduce oxrocksdb-sys debug build size?
Debug builds of oxrocksdb-sys (1.5GB) and liboxrocksdb_sys (760MB) are HUGE!
After a few build iterations/versions, my small project's target/ dir grows to 20GB.
It also takes a long time to compile.
Any remedy for that?
Yes, it's very painful. Multiple ways to mitigate that:
The most efficient: disable RocksDB if you don't need it. It is possible in the 0.4 alpha versions by disabling the "rocksdb" cargo feature:
oxigraph = { version = "0.4.0-alpha.7", default-features = false }
Edit cargo profiles to avoid generating debug information
To mitigate slow builds, use a Rust/C/C++ cache system like sccache. I use it on my laptop and it made rebuilds way faster with Rust, C and C++ caching all enabled.
|
gharchive/issue
| 2024-06-03T12:11:36 |
2025-04-01T04:35:27.438177
|
{
"authors": [
"Tpt",
"hoijui"
],
"repo": "oxigraph/oxigraph",
"url": "https://github.com/oxigraph/oxigraph/issues/887",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
281045573
|
Manage invoice generation
v0.4.0
ok qa
|
gharchive/pull-request
| 2017-12-11T14:56:47 |
2025-04-01T04:35:27.503938
|
{
"authors": [
"mehdichaouch",
"rbenezra"
],
"repo": "oystparis/oyst-1click-magento",
"url": "https://github.com/oystparis/oyst-1click-magento/pull/147",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
685344760
|
Not compatible with microservices
The response is unavailable when I'm invoking a rate-limit-protected controller over a TCP microservice.
I will be checking on this with the v2 upgrade. Added to #17
Closed via #18
|
gharchive/issue
| 2020-08-25T09:51:07 |
2025-04-01T04:35:27.506174
|
{
"authors": [
"ozkanonur",
"slava-ovchinnikov"
],
"repo": "ozkanonur/nestjs-rate-limiter",
"url": "https://github.com/ozkanonur/nestjs-rate-limiter/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1486091811
|
Feature request: Extract all history as a txt or any other format
I need this feature because whenever I reboot my MacBook it automatically clears all history, even though I didn't tick that option (clear all after quit).
If it clears the history - it's obviously a bug in Maccy. I would rather fix it than implement an export functionality.
Can you please provide Maccy version you use? You can get it in "About" window.
Can you please record a screen video showing the problem? You can do this using QuickTime.app.
Can you please share an output of running defaults read org.p0deje.Maccy in Terminal.app?
Closing since there is no response.
|
gharchive/issue
| 2022-12-09T06:06:39 |
2025-04-01T04:35:27.520608
|
{
"authors": [
"krushnadeore",
"p0deje"
],
"repo": "p0deje/Maccy",
"url": "https://github.com/p0deje/Maccy/issues/498",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
531508377
|
Force set window size
Hi,
I absolutely love Maccy and I hope we will be able to pin items soon, but I'd like to be able to set the window size, because if you copy many items the window height keeps increasing.
Is there a command we can set to force the window size or height?
If not, could this be implemented as a flag we can set? Thanks
Hi. I don't quite follow what do you mean by window size. Would you like to limit the height of the window so it doesn't show on the whole screen?
Hi, thanks for looking into this! Allow me to visualize this:
This would be a more preferred window size: https://really.iamno.pro/RBuX6Wom
But if we add more items, it starts to look like this: https://really.iamno.pro/mXumR5d2
So, what I'd like to do is tell the window not to grow vertically beyond a set amount. Maybe this can be done by setting the number of items visible when opening Maccy: for instance, after 5 items, the next items are only visible after mousing over an arrow. Or could this be achieved with a set number of pixels?
Not sure what would be the best way to handle this, but having a really long list to scroll through (yes, I am aware we have a search option) is personally difficult for me.
Not sure if I expressed myself well here, but I hope the above made sense?
Thanks, again!
I get it thanks. I am not sure if it's doable via default macOS menu system, but I'll see how it can be done.
<3
From quick googling and checking the documentation of NSMenu and NSMenuDelegate, it seems to be impossible to do in an obvious way. There is no way to access and modify the view of NSMenu. I believe it's possible to hack around this somehow, but I am not sure how to do that.
I'll keep this issue open for discussing and I'll appreciate any tips or PRs.
Hmm.. thanks for looking into this. Could you give a look at iPaste which does this? I think after the 5th item copied, it creates a little arrow to see later items. Not sure if that'll help?
Yes, I guess I could do that by creating a submenu. However, this means that one can't just scroll down the menu to see the items and has to navigate these submenus.
I'd prefer to just see the same menu with the history list I currently see but with the limited height so that when there are too many items, there is a way to scroll using mouse or arrow up/down keys.
Agreed with this, coming from ClipMenu, I hated the folder and expandable arrows instead of just having a menu where you can navigate with up/down keys!
Closing since there is no obvious way to implement this without folders and I'm not going to add folders support (similar to #40)
Actually, please follow #229 as it seems like I've found a way to implement something similar.
|
gharchive/issue
| 2019-12-02T20:06:53 |
2025-04-01T04:35:27.527788
|
{
"authors": [
"LionelSelie",
"alexjousse",
"p0deje"
],
"repo": "p0deje/Maccy",
"url": "https://github.com/p0deje/Maccy/issues/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2092400707
|
Login not working for me
System Health details
Just installed the integration.
After restart, I wanted to activate it.
I've entered the username and password and get 'Unexpected error'
On my phone app, the device is listed and shows values.
Checklist
[X] I have enabled debug logging for my installation.
[X] I have filled out the issue template to the best of my ability.
[X] This issue only contains 1 issue (if you have multiple issues, open one issue for each issue).
[X] This issue is not a duplicate issue of any previous issues..
Describe the issue
I get this in the log file:
Traceback (most recent call last):
File "/config/custom_components/gruenbeck_cloud/config_flow.py", line 48, in async_step_user
self.devices = await self.get_devices(user_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/gruenbeck_cloud/config_flow.py", line 131, in get_devices
devices = await api.get_devices()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pygruenbeck_cloud/pygruenbeck_cloud.py", line 423, in get_devices
devices.append(Device.from_json(device))
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pygruenbeck_cloud/models.py", line 275, in from_json
return Device(**new_data)
^^^^^^^^^^^^^^^^^^
TypeError: Device.init() got an unexpected keyword argument 'access_point_name'
Reproduction steps
click 'add imtegration'
enter user and password
submit
It returns the error
If I intentionally use a wrong user/password, I'm just getting
2024-01-21 04:14:03.790 ERROR (MainThread) [pygruenbeck_cloud.pygruenbeck_cloud] Unable to login
So, I guess user/password are correct in the above case.
Debug logs
I get this in the log file:
Traceback (most recent call last):
File "/config/custom_components/gruenbeck_cloud/config_flow.py", line 48, in async_step_user
self.devices = await self.get_devices(user_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/gruenbeck_cloud/config_flow.py", line 131, in get_devices
devices = await api.get_devices()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pygruenbeck_cloud/pygruenbeck_cloud.py", line 423, in get_devices
devices.append(Device.from_json(device))
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pygruenbeck_cloud/models.py", line 275, in from_json
return Device(**new_data)
^^^^^^^^^^^^^^^^^^
TypeError: Device.__init__() got an unexpected keyword argument 'access_point_name'
Diagnostics dump
No response
Which model do you have?
Your response seems to have additional elements, which I'm not getting.
Can you please activate debug logging for the integration and try it again, you should see in the logs the response from Grünbeck Cloud, it should be look like this, before the exception:
Response from URL https://prod-eu-gruenbeck-api.azurewebsites.net/api/devices?api-version=2020-08-03 with status 200 was [{'type': 18, 'hasError': False, 'id': 'softliQ.D/BS20445420', 'series': 'softliQ.D', 'serialNumber': 'BS20445420', 'name': 'softIQ:SD18', 'register': True}]
For activate debug logging, you need to modify the logger in your configuartion.yaml to something like this:
logger:
  default: info
  logs:
    custom_components.gruenbeck_cloud: debug
With that I can analyze which fields are missing; in your error I only see that access_point_name is missing, but maybe there are more.
I have an SC18 and got exactly the same error messages as @Stefan4691
@p0l0 I did add the lines to my configuration.yaml and restarted HA. I am still not getting additional details in the log file. Since the integration never gets added due to the failed login, I cannot activate debug logging using the UI.
Wish I could help more, since I would love the integrate my SC18 in HA without having to use ioBroker.
Thanks a lot in advance for looking into it!
To see the Debug log entries, you need to click on the Load Full Logs button under Settings->System->Logs, you will see entries for the integration, also if the integration never gets added.
Nevertheless, with the new version 0.1.0 I changed the way responses are parsed, so that entries in the JSON which are missing in the Python class should be ignored. Please give it a try.
I also implemented in version 0.1.0 the diagnostic dump option, so that after adding the integration, you can send me this diagnostic dump, so that i can check which properties your system is providing, which mine doesn't have.
Thanks for your response and the new release. I installed 0.1.1 some minutes ago and tried to add the integration after restarting HA. The login still does not work, hence the integration is not getting added. The HA core log is showing these gruenbeck related lines:
2024-02-04 18:17:41.675 WARNING (SyncWorker_3) [homeassistant.loader] We found a custom integration gruenbeck_cloud which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant
2024-02-04 18:22:13.536 WARNING (MainThread) [custom_components.gruenbeck_cloud.config_flow] 'has_error'
2024-02-04 18:22:35.775 WARNING (MainThread) [custom_components.gruenbeck_cloud.config_flow] 'has_error'
These are the only details I am getting:
Logger: custom_components.gruenbeck_cloud.config_flow
Source: custom_components/gruenbeck_cloud/config_flow.py:141
Integration: Grünbeck Cloud (documentation, issues)
First occurred: 18:22:13 (2 occurrences)
Last logged: 18:22:35
'has_error'
I hope this helps...
Are you sure that your SC18 is using the cloud for data transmission? I just looked at the ioBroker implementation and for SC models, they are using the local API.
The cloud implementation at ioBroker is only for SD models.
If SC is also using the cloud, it seems to work differently from SD models; that's probably the reason why login is not working.
Well, good point. I am actually not sure. When using the Grünbeck App and logging in there, authentication is done using the URL https://gruenbeckb2c.b2clogin.com/
Therefore I was under the impression, that I am accessing the Grünbeck cloud. What is the URL your integration is using to authenticate?
It's the same, but I think the following API requests which are done after login, are different for SC.
Do you have the possibility to run a python script? I could send you a script which uses directly the library and with verbose output, so that we can try to find out, what is being retuned, and how it maybe could be handled.
Sure, I can run the script either on my Mac or the server currently running a python script doing rest calls to read the data and pushing them to telegraf.
I installed the app via HACS and when setting up I get the error message: Connection failed. In the LOG I find the entry:
`Logger: custom_components.gruenbeck_cloud.config_flow
Source: custom_components/gruenbeck_cloud/config_flow.py:141
Integration: Grünbeck Cloud (documentation, issues)
First occurred: 12:55:01 (1 occurrences)
Last logged: 12:55:01
'has_error'`
I get the same error.
This error was caused by a custom integration. Logger: custom_components.gruenbeck_cloud.config_flow Source: custom_components/gruenbeck_cloud/config_flow.py:141 Integration: Grünbeck Cloud (Documentation, Issues) First occurred: 06:58:16 (2 occurrences) Last logged: 07:08:33 'has_error'
Sorry, I was a little bit busy the last few weeks. Please run this script, replacing the email and password placeholders with your login data.
To run the script, you need to install the pygruenbeck_cloud library. This can be done with pip, just run pip install pygruenbeck_cloud.
The script will show you every request and response from server, that will make it easier to find out, what is being returned, and try to find out what is different to SD Models.
import asyncio
import logging

from pygruenbeck_cloud import PyGruenbeckCloud

logging.basicConfig(
    level=logging.DEBUG,
    format="[%(asctime)s] %(levelname)s [%(name)s.%(funcName)s:%(lineno)d] %(message)s",
    datefmt="%d/%b/%Y %H:%M:%S",
)
_LOGGER = logging.getLogger(__name__)


class TestGruenbeck:
    async def init(self):
        """Demo function for testing."""
        try:
            async with PyGruenbeckCloud(
                username="<EMAIL>",
                password="<PASSWORD>",
            ) as gruenbeck:
                gruenbeck.logger = _LOGGER
                devices = await gruenbeck.get_devices()

                if len(devices) <= 0:
                    _LOGGER.warning("No devices!")
                    return
        except Exception as ex:
            _LOGGER.error(ex)


asyncio.run(TestGruenbeck().init())
@Cavekeeper and @ChaotenKurt which model do you have? It seems the hasError property is not being returned from the API.
No problem at all, I have the following device: softliq:SC18 / software version V01.01.02
My device: softliq:SC18 / V01.00.27
Hi, I have the same issue.
I had the integration running since February, but since Sunday my softliQ:SD18 does not initialize any more.
Now I removed the device and tried to reinitialize the complete device, but now the connection always fails.
And it seems https://gruenbeckb2c.b2clogin.com/ does not work anymore.
But I can access the SD18 via the app without problems
It's working again. It was a problem with HA and HACS. I had to reinstall HA and load a Backup. Now it is working again.
I have now installed a completely new Home Assistant. Without a backup. I then installed HACS and then the adapter. Unfortunately without success, again I get the same error message that the login did not work.
Hi! Can anybody explain how I can get this running for my Grünbeck SC? Login is not possible.
Thank you
After checking the ioBroker adapter and what the app does, the SC model has a completely different way of working and therefore also needs a completely different implementation.
Unfortunately I don't have an SC model and currently I also don't have the time for doing a completely new implementation :(
The SC model is treated in the ioBroker adapter and in the app as "local/standalone", so it should probably be a completely different library instead of being implemented into the [pygruenbeck_cloud](https://github.com/p0l0/pygruenbeck_cloud) library, but maybe using the same HA component is possible.
Hi, login not working. Device SC18,
home-assistant.log: WARNING (MainThread) [custom_components.gruenbeck_cloud.config_flow] 'has_error'
The Android-app is running and values are available.
Thx a lot in advance for any kind of support to get this integration running.
For the SC models, you need to use the local API; for that I have seen that tizianodeg created the integration gruenbeck_softliQ_SC.
I will close this issue, as my integration is only for cloud-only models.
|
gharchive/issue
| 2024-01-21T03:15:53 |
2025-04-01T04:35:27.557762
|
{
"authors": [
"Cavekeeper",
"ChaotenKurt",
"Stefan4691",
"TheOnkelTank",
"ck301",
"nielssch",
"p0l0",
"tempo3"
],
"repo": "p0l0/hagruenbeck_cloud",
"url": "https://github.com/p0l0/hagruenbeck_cloud/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
282550372
|
Support table entry timeout in PSA?
There is currently no specification about the table entry timeout behavior in PSA. We probably need an idle_timeout extern to implement the idle and hard timeout features in OpenFlow.
As far as implementation of this goes, I think we are talking about a feature that is likely to be implemented in device-specific driver/agent software that is near the device, yes? Given the discussion around watch ports for action selectors, wanted to make sure this is similar in kind to that as far as expectations of how it is implemented.
Closing this issue in favor of the duplicate issue https://github.com/p4lang/p4-spec/issues/617 which I hope may get into PSA version 1.1.
|
gharchive/issue
| 2017-12-15T21:17:50 |
2025-04-01T04:35:27.574046
|
{
"authors": [
"hanw",
"jafingerhut"
],
"repo": "p4lang/p4-spec",
"url": "https://github.com/p4lang/p4-spec/issues/523",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
800799828
|
Table 26 - B0 is double for MCMC compared to MLE
MLE looks correct (comparing to last year) and in param_est_table() has:
bo = (mddq %>% filter(Label == "SSB_Initial") %>% pull(Value) / 2e3) %>% f(0),
(so divide by 2000 to get females, and make the units tons), whereas MCMC part has:
bo = f(median(.x$mcmc$`SSB_Initial`) / 1e3, 0),
so it just divides by 1000. I presume the latter should be changed to 2e3, as I remember some talk about the SSB definition changing this year in SS (!). I can fix it but just wanted to check.
As I have it open I will change it in a single commit that we can fix if necessary.
|
gharchive/issue
| 2021-02-03T23:53:36 |
2025-04-01T04:35:27.587459
|
{
"authors": [
"andrew-edwards"
],
"repo": "pacific-hake/hake-assessment",
"url": "https://github.com/pacific-hake/hake-assessment/issues/767",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
804861502
|
Add NOAA disclaimer on our beamer presentations
That same disclaimer that we put on the assessment document needs to go on our SRG presentations. So when we get those building we can add something like that in the hake-assessment.rnw:
\thanks{\noindent Disclaimer: These materials do not constitute a formal
publication and are for information only. They are in a pre-review,
pre-decisional state and should not be formally cited. They are to
be considered provisional and do not represent any determination or
policy of NOAA or the Department of Commerce.}
@aaronmberger-nwfsc we should probably add that to the sty file as text rather than copying and pasting it everywhere. I can do it, or you can look how fishname and surveyname are set up.
[ ] add disclaimer
I'm fine with whatever workflow is optimal! I just want us to check off that disclaimer thing and not think about it again, so sty file would be good.
Including the style file in the presentations breaks them. I get a hyperref collision right away and I don't really feel like debugging it all so I figured just add the text to each one
I didn't use the sty file because I got the same message that you did and for sure did not want to deal with it, I just made a ghetto .tex file and included it via input.
OK, Check my comment in code - tex file isn't in the repo yet
So sorry. I just did -A and did not realize it wasn't being included. I forced it. Should be there now. Hopefully I didn't break anything else.
No worries, I do that all the time:)
The sensitivity presentation compiles just fine. Nice idea to generalize it. We could put other things in there as well that are common across presentations.
|
gharchive/issue
| 2021-02-09T19:38:46 |
2025-04-01T04:35:27.591740
|
{
"authors": [
"aaronmberger-nwfsc",
"cgrandin",
"kellijohnson-NOAA"
],
"repo": "pacific-hake/hake-assessment",
"url": "https://github.com/pacific-hake/hake-assessment/issues/804",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
926418786
|
New name for natural mortality with updated SS version
Just copying this over from (nwfsc-assess/nwfscDiag) as it likely pertains to our workflow once we update:
SS version 3.30.17.01 changed the parameter label from "NatM_p_1_Fem_GP_1" to "NatM_uniform_Fem_GP_1"
@cgrandin I searched the repo for the old name and found it in the following two locations. Should we change those and then close this issue?
https://github.com/pacific-hake/hake-assessment/blob/9a9b9c64ad95bb036f85fead221145063aff6c69/R/tables-parameters.R#L351
https://github.com/pacific-hake/hake-assessment/blob/be6a0c53928dfc999fd42ed3d835d035eca964b9/beamer/SRG/Requests/requests-day1/beamer-hake-requests-day1.rnw#L210
Looks like Andy changed that first one so that it still works with last year's model, if the new name is found, use it otherwise use the old name. If that's not there then Table 25 will have last year's M missing.
ifelse("NatM_uniform_Fem_GP_1" %in% names(.x$mcmc),
f(median(.x$mcmc$`NatM_uniform_Fem_GP_1`), digits),
f(median(.x$mcmc$`NatM_p_1_Fem_GP_1`), digits)),
I changed the second one. Thanks for checking!
|
gharchive/issue
| 2021-06-21T17:01:06 |
2025-04-01T04:35:27.594951
|
{
"authors": [
"aaronmberger-nwfsc",
"cgrandin",
"kellijohnson-NOAA"
],
"repo": "pacific-hake/hake-assessment",
"url": "https://github.com/pacific-hake/hake-assessment/issues/841",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
576540852
|
Implement code to clean up assigned IP blocks
Now that we have the IP block assignment fix in, we can implement code to give back the IP blocks after the ESXi hosts are converted to layer 2.
This would reduce costs ($0.04 per hour!!) and more importantly give back the IPs to be used on different projects.
I've tested manually deleting the IP blocks after the project was created and it had no impact on the environment or on the destroy procedure.
I fixed this in the VMware only version of this repo.
I'll try and get this turned into a PR soon.
Here are the changes:
https://github.com/c0dyhi11/vmware-on-packet/blob/bd368a0f70f4abb0aa65c4593580560bc85a842e/08-esx-host-networking.tf#L59
https://github.com/c0dyhi11/vmware-on-packet/blob/bd368a0f70f4abb0aa65c4593580560bc85a842e/templates/esx_host_networking.py#L153
https://github.com/c0dyhi11/vmware-on-packet/blob/bd368a0f70f4abb0aa65c4593580560bc85a842e/templates/esx_host_networking.py#L159
https://github.com/c0dyhi11/vmware-on-packet/blob/bd368a0f70f4abb0aa65c4593580560bc85a842e/templates/esx_host_networking.py#L162
https://github.com/c0dyhi11/vmware-on-packet/blob/bd368a0f70f4abb0aa65c4593580560bc85a842e/templates/esx_host_networking.py#L344
|
gharchive/issue
| 2020-03-05T21:26:48 |
2025-04-01T04:35:27.613498
|
{
"authors": [
"c0dyhi11",
"paemason"
],
"repo": "packet-labs/google-anthos",
"url": "https://github.com/packet-labs/google-anthos/issues/15",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
773330091
|
Fixup nvme
Seems we weren't getting the serial in for NVMe disks properly, also strips some extra spaces introduced recently for model
Inside OSIE:
{
"blockdevices": [
{"name": "/dev/nvme0n1", "serial": null, "model": "Micron_9300_MTFDHAL3T8TDP ", "size": "3.5T", "rev": null, "vendor": null},
{"name": "/dev/sdb", "serial": null, "model": "SSDSCKKB240G8R ", "size": "223.6G", "rev": "DL6P", "vendor": "ATA "},
{"name": "/dev/loop0", "serial": null, "model": null, "size": "199.7M", "rev": null, "vendor": null},
{"name": "/dev/nvme1n1", "serial": null, "model": "Micron_9300_MTFDHAL3T8TDP ", "size": "3.5T", "rev": null, "vendor": null},
{"name": "/dev/sda", "serial": null, "model": "SSDSCKKB240G8R ", "size": "223.6G", "rev": "DL6P", "vendor": "ATA "}
]
}
Inside Alpine:
localhost:~# lsblk -J -p -o NAME,SERIAL,MODEL,SIZE,REV,VENDOR
{
"blockdevices": [
{"name":"/dev/loop0", "serial":null, "model":null, "size":"199.7M", "rev":null, "vendor":null},
{"name":"/dev/sda", "serial":null, "model":"SSDSCKKB240G8R ", "size":"223.6G", "rev":"DL6P", "vendor":"ATA "},
{"name":"/dev/sdb", "serial":null, "model":"SSDSCKKB240G8R ", "size":"223.6G", "rev":"DL6P", "vendor":"ATA "},
{"name":"/dev/nvme0n1", "serial":"201627A99BF3 ", "model":"Micron_9300_MTFDHAL3T8TDP ", "size":"3.5T", "rev":null, "vendor":null},
{"name":"/dev/nvme1n1", "serial":"201327AA8418 ", "model":"Micron_9300_MTFDHAL3T8TDP ", "size":"3.5T", "rev":null, "vendor":null}
]
}
|
gharchive/pull-request
| 2020-12-23T00:18:57 |
2025-04-01T04:35:27.623152
|
{
"authors": [
"dustinmiller1337"
],
"repo": "packethost/packet-hardware",
"url": "https://github.com/packethost/packet-hardware/pull/19",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
419178604
|
implement create-update command
TODO:
[x] merge #137
[x] tests
docs (for both build and update) let's do docs in 0.2.0 release PR
[x] tests are borked because of bodhi: https://github.com/fedora-infra/bodhi/issues/3058
I created this update: https://bodhi.fedoraproject.org/updates/FEDORA-2019-0c53f2476d
by running packit --debug create-update --dist-git-branch f30
ready to test
let's do docs in 0.2.0 release PR
You like big pull-request, don't you?
I would prefer to have that in other PR..;-)
let's do docs in 0.2.0 release PR
I would prefer to have that in other PR..;-)
Should I do docs here? Or in a subsequent PR?
The tests are now skipped in tox :(
But they run locally just fine:
$ pytest-3 -k create_upd
==== test session starts ====
platform linux -- Python 3.7.2, pytest-3.6.4, py-1.5.4, pluggy-0.6.0
rootdir: /home/tt/g/user-cont/packit, inifile:
plugins: cov-2.5.1
collected 77 items / 75 deselected
tests/integration/test_create_update.py .. [100%]
===== 2 passed, 75 deselected in 0.73 seconds =====
Or in a subsequent PR?
:+1:
I'm assuming the real test here will be to bring packit by itself to F30 and F29.
|
gharchive/pull-request
| 2019-03-10T12:13:06 |
2025-04-01T04:35:27.632278
|
{
"authors": [
"TomasTomecek",
"lachmanfrantisek"
],
"repo": "packit-service/packit",
"url": "https://github.com/packit-service/packit/pull/139",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1394671333
|
Make relation between Packit and Testing Farm more clear
One set of questions we often get is related to Testing Farm: how Packit and Testing Farm are tied together. On top of that, users sometimes ask questions specific to the FMF format, test structure, best practices and tmt. We have @FrNecas on our team who often answers these. Sometimes we need to ask the Testing Farm team for replies.
TODO:
[ ] describe the relationship between Packit and TF
[ ] describe the flow of data and who owns what
[ ] point to best practices for organizing tests and running them in Testing Farm
[ ] link to TF's status page, their documentation, landing page and the way to reach the team
Closing since there was no activity in more than a year.
|
gharchive/issue
| 2022-10-03T12:43:12 |
2025-04-01T04:35:27.634778
|
{
"authors": [
"TomasTomecek"
],
"repo": "packit/packit.dev",
"url": "https://github.com/packit/packit.dev/issues/532",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1309539697
|
Add 'url' to the SpecRequest interface
The url property was missing from the SpecRequest type. I'm reasonably sure that it's always there, but I don't know all the use cases obviously. At the very least it should be an optional property. But again I think it's required: let me know otherwise.
@ASaiAnudeep another one 😄
A workaround for while this is not merged yet is:
declare module 'pactum/src/exports/reporter' {
  interface SpecRequest {
    url: string;
  }
}
The url property was missing from the SpecRequest type. I'm reasonably sure that it's always there, but I don't know all the use cases obviously. At the very least it should be an optional property. But again I think it's required: let me know otherwise.
The url should be part of most requests.
Thanks, I'll be looking forward to the release.
|
gharchive/pull-request
| 2022-07-19T13:38:37 |
2025-04-01T04:35:27.643877
|
{
"authors": [
"ASaiAnudeep",
"aukevanleeuwen"
],
"repo": "pactumjs/pactum",
"url": "https://github.com/pactumjs/pactum/pull/183",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2121233191
|
chore: add bootstrap address for Vozer
Description
Briefly describe the changes introduced by this pull request.
Related issue(s)
If this Pull Request is related to an issue, mention it here.
Fixes #(issue number)
@pphan79, please ensure that the spacing indentation remains at 4 and resolve any conflicts.
|
gharchive/pull-request
| 2024-02-06T16:39:20 |
2025-04-01T04:35:27.645535
|
{
"authors": [
"b00f",
"pphan79"
],
"repo": "pactus-project/pactus",
"url": "https://github.com/pactus-project/pactus/pull/1075",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
192392979
|
No response from the server.
This is somewhat strange. Sometimes padawan does not respond to a request at all. There is also no log message. Padawan is still up and running, as a request in another file works all right.
I can reproduce it with Laravel project.
composer global require laravel/installer
laravel new test
edit vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php
There is no response from padawan-server. Well, most of the times.
Sometimes the server responds, and when it does the response is quite fast. I've tried it with netcat, and it worked a few times, but usually it's just silent. I think it is related to #13.
i think i'm seeing this issue because of the size of the file. for small files it works well, but for large files it never responds. it doesn't even seem to log that it received the request.
hm, maybe the problem is somehow connected with incorrect http body fetching
@mkusher it should still log something before reaching that function right? i'm not seeing anything get logged
not sure this is actually an error with the padawan-server, i feel like the request is never making it to the server.
i added some debug logging as soon as the request gets received (there already is logging that should be taking place, ie. method etc) and it never gets invoked.
if i open up a very small file, it starts to work, open up larger file, doesn't work.
I think this is a problem with react/http.
I've just made a quick test, and when big data is sent to padawan the request event is not triggered. So I've checked react/http/src/Server.php and I can see that a request with a small file triggers $parser->on('headers... but with the big one only $parser->on('data... is triggered.
I don't know react so I can't tell how this is supposed to work exactly, but clearly there is different behavior when a big file is sent and the request is not making it to padawan.
Hope that's at least remotely clear.
I think this is related to this issue: https://github.com/reactphp/http/issues/80
that definitely sounds like the right direction
I can confirm that applying the pull request that solves https://github.com/reactphp/http/issues/80 will also solve these issues with padawan.php. It now works great, even in larger files!
that's nice)
I've explained the reason at https://github.com/reactphp/http/issues/87 and there's a pending PR that fixes this https://github.com/reactphp/http/pull/82
Close with #91
|
gharchive/issue
| 2016-11-29T20:39:08 |
2025-04-01T04:35:27.653534
|
{
"authors": [
"halftan",
"kokx",
"mhahn",
"mkusher",
"pbogut"
],
"repo": "padawan-php/padawan.php",
"url": "https://github.com/padawan-php/padawan.php/issues/71",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
59293344
|
Revert "unused namespace removed"
This isn't actually unused.
Reverts padraic/mockery#444
Noted. Totally my bad.
|
gharchive/pull-request
| 2015-02-27T20:57:50 |
2025-04-01T04:35:27.657530
|
{
"authors": [
"GrahamCampbell",
"padraic"
],
"repo": "padraic/mockery",
"url": "https://github.com/padraic/mockery/pull/449",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
638416
|
Admin-gen AR account model wipes password on save
The generated Account model for ActiveRecord has a method called encrypt_password that always rewrites the password, regardless of whether a new password was provided. Effectively, every time I @account.save the password gets reset to '' (empty!).
This optimization seems to make it impossible to update the password on an account anymore.
|
gharchive/issue
| 2011-03-01T19:41:54 |
2025-04-01T04:35:27.658642
|
{
"authors": [
"bryanhelmig",
"pke"
],
"repo": "padrino/padrino-framework",
"url": "https://github.com/padrino/padrino-framework/issues/429",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1473665506
|
[TODO] Load the application port from the TODO_PORT environment variable
Change the file todo/app.py to read the port the application runs on from the TODO_PORT environment variable:
If the environment variable does not exist, the default port is 4000
Example:
import os
# ...
if __name__ == "__main__":
    porta = int(os.environ.get("TODO_PORT", 4000))  # env vars are strings, cast to int
    app.run(host="0.0.0.0", port=porta, debug=True)
Assigning this task to @marciorenato
|
gharchive/issue
| 2022-12-03T02:26:40 |
2025-04-01T04:35:27.670297
|
{
"authors": [
"Lohann"
],
"repo": "paft-inc/paft-microservices",
"url": "https://github.com/paft-inc/paft-microservices/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
310925309
|
created blank layout and modified posts layout to inherit from blank
This is what a blog post would look like with the default theme:
I made a post layout that would allow them to look more like a blog post. Unfortunately, since the default layout has the links and the post layout inherited from that, the links stayed as such:
I've now created a new layout called 'blank' which the post layout now extends from. This allows you to create a page without the github pages/project links in the header:
This will also allow you to create regular pages without the same links in the header, as such:
I believe creating a blank layout to build from this way will maintain the "simplicity" of the project philosophy while adding "flexibility" for users to create: landing pages for their github projects, periodic posts, and regular web pages (without project links attached). I believe this will satisfy the last half of issue #20
Thanks!
Oops, looks like I accidentally closed this PR before the commit was merged. Let me know if there's anything else that needs to be done before it's merged!
Thanks!
|
gharchive/pull-request
| 2018-04-03T17:21:02 |
2025-04-01T04:35:27.707431
|
{
"authors": [
"roninb"
],
"repo": "pages-themes/hacker",
"url": "https://github.com/pages-themes/hacker/pull/24",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
151787115
|
Serve resources from a consistent URL after installing mod_pagespeed
Hi,
I am using gtmetrix.com to benchmark my website. Before installing mod_pagespeed there was no such error, but after installing it I get this error when running the benchmark tool. An example of the error is below:
Serve resources from a consistent URL
The following resources have identical contents, but are served from different URLs. Serve these resources from a consistent URL to save 1 request(s) and 92.4KiB.
http://www.mydomain.com/images/template/slide/KUDoSwk.jpg
http://www.mydomain.com/images/template/slide/xKUDoSwk.jpg.pagespeed.ic.MRlqvTlFDh.jpg
...and many more like it; I think only image files are listed.
Thank you very much...
Is this a transient error? Does it go away after a couple of browser refreshes?
Could you post (or email me) the url of the website?
Hi,
I already e-mailed our website link... it looks like a permanent error.
Thank you very much for the feedback..
Thanks - I received the url of your website.
Your website has a slider implemented in javascript on the page which loads
images declared via "data-thumb" attributes on "li" tags.
The complaints about inconsistent urls are because mod_pagespeed does not
process these data-thumb attributes.
Adding the following to your pagespeed configuration should fix the
inconsistent urls that are reported:
ModPagespeedUrlValuedAttribute li data-thumb image
For more information, see
https://developers.google.com/speed/pagespeed/module/domains#url-valued-attributes
Otto
I added the syntax to my pagespeed.conf.. It works now... thank you very much... :+1:
Hi, sorry, I just realized that after adding the syntax and turning on mod_pagespeed, my website's "Total Page Size" seems to have doubled, and the page load time also increased a little. Is anything wrong?
you can check the gtmetrix.com test screenshot for the comparison:
Thanks
I think there's something odd happening here.. the (anonymized) essence of what I'm seeing:
<li data-thumb="http://www.domain.com/path/to/image-AA-AA.jpg.pagespeed.ce.j_M6m_1qwr.jpg">
<img src="http://www.domain.com/path/to/image-AA.jpg.pagespeed.ic.dPKpjl3OxI.webp" pagespeed_url_hash="349959412" onload="pagespeed.CriticalImages.checkImageForCriticality(this);">
<!--Image does not appear to need resizing.-->
<!--The image was not inlined because it has too many bytes.-->
So two different versions of the same image end up in the html, which explains the increased total page size.
Could you open a new issue for this, and email me your (pagespeed) configuration?
Hi,
OK, I just e-mailed my pagespeed config. Actually, I am using 2 separate configs: one default config and one dedicated config for the only website where mod_pagespeed is enabled, pulled in via the 'include' syntax in the Apache config for that virtual host.
Thank you very much for the feedback, really appreciated it.
Hello,
I am using CND. here is the problem.
The following resources have identical contents, but are served from different URLs. Serve these resources from a consistent URL to save 1 request(s) and 26.9KiB.
https://delmsky2m3lys.cloudfront.net/wp-content/uploads/2015/11/caddytek-ez-fold-3-wheel-golf-push-cart.png
https://www.bestgolfcartsreviews.com/wp-content/uploads/2015/11/caddytek-ez-fold-3-wheel-golf-push-cart.png
The following resources have identical contents, but are served from different URLs. Serve these resources from a consistent URL to save 1 request(s) and 5.9KiB.
https://delmsky2m3lys.cloudfront.net/wp-content/uploads/2016/10/Clicagear-model-3.5.jpg
https://www.bestgolfcartsreviews.com/wp-content/uploads/2016/10/Clicagear-model-3.5.jpg
anyone help me?
@mejbabiplob It looks like https://www.bestgolfcartsreviews.com is not running mod_pagespeed ?
Hi everyone, I'm new to GitHub. I need help with these page speed errors:
Serve resources from a consistent URL
Parallelize downloads across hostnames
Defer parsing of JavaScript
Kindly send me solutions for all of them at Ibraheemabbas@gmail.com
Hi everyone, I'm new to GitHub. I need help with these page speed errors:
Serve resources from a consistent URL
Minimize request size
Leverage browser caching
Defer parsing of JavaScript
Kindly send me solutions for all of them at waqasayyaz@gmail.com
Recently I tested my website article https://gst.caknowledge.com/hsn-code-list/ at gtmetrix.com and I got the lowest score for "Serve resources from a consistent URL". Can anyone tell me how I can improve this score?
@caknowledge To avoid the warning the same content should be served from the same url.
The site you point out has a tracking pixel violating that rule, and also an ad serving script.
mod_pagespeed is able to automate the js part via the canonicalize js filter: https://www.modpagespeed.com/doc/filter-canonicalize-js
(but it won't be able to help with the pixel)
Hi, I have the same issue. Can anyone help, please?
Since enabling page speed I'm getting a low Gtmetrix score for 'Serve resources from a consistent URL'
The following resources have identical contents, but are served from different URLs. Serve these resources from a consistent URL to save 1 request(s) and 6.7KiB.
http://example.com/skin/frontend/default/images/ajax-loader.gif
http://example/skin/frontend/default/images/ajax-loader.gif.pagespeed.ce.afWLPCz_Xf.gif
The following resources have identical contents, but are served from different URLs. Serve these resources from a consistent URL to save 1 request(s) and 3.5KiB.
http://example.com/media/cms/homepage/Instagram_Icon.png
http://example.com/media/cms/homepage/Instagram_Icon.png.pagespeed.ce.GPkk96k8WF.png
Any ideas please?
1. Serve resources from a consistent URL
2. Minimize request size
3. Leverage browser caching
4. Defer parsing of JavaScript
I need help solving these 4 problems on a Blogger theme, please.
|
gharchive/issue
| 2016-04-29T03:35:55 |
2025-04-01T04:35:27.727304
|
{
"authors": [
"AizazAyyaz",
"Rainbowturn",
"caknowledge",
"fullenglish",
"ibraheemsahir",
"mejbabiplob",
"oschaaf",
"ypmict"
],
"repo": "pagespeed/mod_pagespeed",
"url": "https://github.com/pagespeed/mod_pagespeed/issues/1297",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
205633761
|
File Cache clean not completing (IISpeed)
Having some problems related to a recent binary install of IISpeed 2.0.3 (based on PageSpeed 1.9.32.14-stable) on Win2008R2 IIS7.5. Pagespeed is setup to work with some 32 bit Classic ASP sites on this server.
It appears the file cache clean isn't completing if I allow the FileCacheSizeKb to go beyond roughly 3GB. The filecache is stored on an SSD with plenty of space. Which statistic should indicate if a file cache clean process was initiated and/or completed?
I've set memory-based recycling limits for the IIS application pools hosting the 3 sites that use IISpeed. The processes are configured as a web garden, 4 worker processes per application pool. It seems like when the cache clean time interval is met and the clean process starts, the memory consumed by one of the 4 worker processes starts climbing rapidly, at a rate of 2.5k to 4k per screen refresh of the Windows Task Manager. Eventually it climbs beyond the application worker process memory recycling limit and IIS terminates the process.
When I don't use pagespeed the worker processes stabilize at around 250MB of memory (private working set) and never trigger the 512MB recycle limit the processes are set for. With pagespeed running they will hit the memory limit and IIS recycles one of the 4 worker processes, coinciding with FileCacheCleanIntervalMs. I've even tried raising the worker process memory recycling limit to >1GB. It doesn't help once the cache size grows to the FileCacheSizeKb.
If I keep the FileCacheSizeKb <=2.5GB the worker processes never seem to recycle from the memory limit and the PageSpeedCache folder doesn't continue to grow to the point that the disk is filled from cached files.
Is there something I can do to more appropriately diagnose the problem?
sorry, I guess this is more windows-specific. will close this here.
|
gharchive/issue
| 2017-02-06T16:16:41 |
2025-04-01T04:35:27.730986
|
{
"authors": [
"RTDoofus"
],
"repo": "pagespeed/mod_pagespeed",
"url": "https://github.com/pagespeed/mod_pagespeed/issues/1491",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
620162580
|
TMI Rates
Pull request checklist:
[ ] CHANGELOG.md was updated, if applicable
[ ] Documentation in docs/ or install-docs/ was updated, if applicable
Fixes #691
@alazymeme had the known bot idea
Is there more i must fix/add?
I think I want @pajlada to review this and work on it
Sure.
OkayChamp I'll make the entry
i did these commits via the web interface WAYTOODANK
Is there more to change for this?
|
gharchive/pull-request
| 2020-05-18T12:18:19 |
2025-04-01T04:35:27.751741
|
{
"authors": [
"RAnders00",
"TroyDota"
],
"repo": "pajbot/pajbot",
"url": "https://github.com/pajbot/pajbot/pull/878",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
895484014
|
Serialise body according to v4 spec
I added a wrapper object to the body to match the v4 body spec and tested it against the PACT-JVM-Provider Tests.
They can now validate the body - before that, the body was reported as missing.
Frankly, I'm not quite sure if this is what you intended to do, or if you had other use cases in mind. I tried to preserve the contract-builder API, but had to drop the custom JSON serialisation to get this to work. So please feel free to ignore this PR and have a go at it yourself if I misunderstood your intentions here!
Fixes #4
Let me think a bit about #4 and if needed we'll use this.
Fixed by downgrading do V3 in #7
|
gharchive/pull-request
| 2021-05-19T14:01:02 |
2025-04-01T04:35:27.753505
|
{
"authors": [
"gtudan",
"pak3nuh"
],
"repo": "pak3nuh/dart_pact_consumer",
"url": "https://github.com/pak3nuh/dart_pact_consumer/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
995034028
|
Move configuration docs
Summary
Move buildpack configuration docs from reference to How To section, since the content is outcome-oriented.
Use Cases
Checklist
[ ] I have viewed, signed, and submitted the Contributor License Agreement.
[ ] I have linked issue(s) that this PR should close using keywords or the Github UI (See docs)
[ ] I have added an integration test, if necessary.
[ ] I have reviewed the styleguide for guidance on my code quality.
[ ] I'm happy with the commit history on this PR (I have rebased/squashed as needed).
Waiting for #308 to merge.
Check links is failing because the edit on github button on the rendered site doesn't point to a real URL (yet).
I think this should be good to merge anyway. Maybe it's a signal the link checker needs another thing added to its excludelist...
@ForestEckhardt i've added a commit to update the excludelist. Should be good to go now.
|
gharchive/pull-request
| 2021-09-13T15:30:57 |
2025-04-01T04:35:27.759087
|
{
"authors": [
"fg-j"
],
"repo": "paketo-buildpacks/paketo-website",
"url": "https://github.com/paketo-buildpacks/paketo-website/pull/317",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1586758996
|
Adds RFC for Steering Committee Elections
Readable
WG meeting notes:
Folks should think about if they are interested in becoming a steering committee member going forward. Anyone involved in the project should take a look at this RFC.
|
gharchive/pull-request
| 2023-02-15T23:46:47 |
2025-04-01T04:35:27.760356
|
{
"authors": [
"ryanmoran",
"sophiewigmore"
],
"repo": "paketo-buildpacks/rfcs",
"url": "https://github.com/paketo-buildpacks/rfcs/pull/278",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
218693826
|
Adding tslint-immutable to readme community rules list?
I maintain the tslint-immutable package which is a set of rules to disable mutation in typescript. Would it be OK if I made a PR to add it to the README section about community rules?
@jonaskello sure, go for it 👍
|
gharchive/issue
| 2017-04-01T12:50:30 |
2025-04-01T04:35:27.801562
|
{
"authors": [
"adidahiya",
"jonaskello"
],
"repo": "palantir/tslint",
"url": "https://github.com/palantir/tslint/issues/2459",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
225779053
|
Consistently show absolute path
fixes #1794
fixes #2462
[bugfix] Consistently output absolute path
going to add options so that the user can select absolute/relative paths. I'm thinking default to relative, and add an option outputAbsolutePaths
This is not working.
npx tslint -p tsconfig.json --outputAbsolutePaths=false
error: unknown option `--outputAbsolutePaths=false'
@nelson6e65 TSLint is deprecated and no longer accepting any issues or pull requests: #4534. You're better off switching to typescript-eslint. Cheers!
|
gharchive/pull-request
| 2017-05-02T18:21:57 |
2025-04-01T04:35:27.803859
|
{
"authors": [
"JoshuaKGoldberg",
"nchen63",
"nelson6e65"
],
"repo": "palantir/tslint",
"url": "https://github.com/palantir/tslint/pull/2667",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
377454009
|
Assign concat rule
PR checklist
[X] Addresses an existing issue: #1154
[X] New feature, bugfix, or enhancement
[X] Includes tests
[X] Documentation update
Overview of change:
Add a new rule that helps one remember that the concat method in javascript is not destructive.
Every once and then happens that one forgets that and incurs in hard to find bugs.
Is there anything you'd like reviewers to focus on?
I added many scenarios to avoid false positives, if you guys have another let me know and I'll fix it.
CHANGELOG.md entry:
[new-rule] assign-concat helps prevent forgetting that concat method is not destructive.
Thanks for your interest in palantir/tslint, @nicoabie! Before we can accept your pull request, you need to sign our contributor license agreement - just visit https://cla.palantir.com/ and follow the instructions. Once you sign, I'll automatically update this pull request.
I see your point @JoshuaKGoldberg. It is true that it is too specific and I could extend it to tackle all non-destructive methods. Maybe I'll put up a separate repository like you said. Thanks for the feedback.
|
gharchive/pull-request
| 2018-11-05T15:36:26 |
2025-04-01T04:35:27.808200
|
{
"authors": [
"nicoabie",
"palantirtech"
],
"repo": "palantir/tslint",
"url": "https://github.com/palantir/tslint/pull/4267",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1862392654
|
Group nodes cannot be deleted
Group nodes cannot be deleted through the API at this time. Consider:
Group nodes have two relationships: MEMBER_OF and ELEMENT_OF. Both are currently considered blocking relationships.
A member cannot delete themselves from a Group.
Group nodes also have an ELEMENT_OF relationship to themselves (reasons, though I forget at this time).
Taken together, these points prevent deletion of Group nodes.
Arguably, we could make MEMBER_OF nonblocking. If there are no elements in the group, why can't a member delete it even if it has other members? That's a philosophical discussion that is more than I want to go into at this time. But even if we were to make MEMBER_OF nonblocking, deletion would still be prevented by the fact that Groups are ELEMENT_OF themselves.
In the getRelationships routine of Resolvers.js, I tried a variation of the query that ignores both the relationship of the Group to itself and the MEMBER_OF to the current user. That looks like this:
let queryStr = relationships.reduce((str, relationship) => `
${str}
MATCH
(n)${relationship.direction === "in" ? "<-" : "-"}[:${relationship.type}]${relationship.direction === "in" ? "-" : "->"}(r)
WHERE n.pbotID="${pbotID}" ${nodeType === "Group" ?
`AND r.pbotID<>"${enteredByPersonID}" AND r.pbotID<>n.pbotID` :
''
}
RETURN
r
UNION ALL
`,'');
This allows the Group node to be deleted but fails to soft-delete the actual relationships. This violates the way we handle soft-delete, so is not a good solution.
#13 documents the reason Group nodes must have an ELEMENT_OF relationship to themselves.
I revisited the solution above and added some other special handling for Groups during deletion. This is in 6245cf053b26ef12ccbc69d4c13b83b6cdf40228
|
gharchive/issue
| 2023-08-23T01:03:04 |
2025-04-01T04:35:27.822936
|
{
"authors": [
"NoisyFlowers"
],
"repo": "paleobot/pbot-api",
"url": "https://github.com/paleobot/pbot-api/issues/42",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1617095641
|
Adpating ARTPatient to use merge
Kindly check the script before deploying it to Dev branch
@DennisGibz LGTM, just curious: why are we using lastvisit as a unique key?
I was trying to make the patient record more unique. If it doesn't make sense we can just do away with it.
|
gharchive/pull-request
| 2023-03-09T12:09:33 |
2025-04-01T04:35:27.824295
|
{
"authors": [
"DennisGibz",
"nobert-mumo"
],
"repo": "palladiumkenya/dwh-etl",
"url": "https://github.com/palladiumkenya/dwh-etl/pull/130",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
930202174
|
Add Flask example and tests
This is a basic example for Flask and some universal tests for all the examples. The app generally follows the design in #1; the only difference is that I removed the age field.
After it's merged, I will start to set up the repo (tox, pre-commit hook, CI, etc.).
Thanks for the review, I just added the missing README.md and requirements.txt/in files, and moved error messages and category list to separate variables.
it would be nice to have some curl commands for copy pasting at the top level README to play with the API
Sure, I will add it later.
what is the reason to not support PUT?
A PUT for updating a resource would duplicate most of the update_pet view, so I think supporting only PATCH is enough for an example application.
Could you please explain the indirect=True parameter for parameterized?
It will pass the items in the example list to the client fixture (via request.param), so we can iterate over all the example applications for each test.
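For illustration, a minimal sketch of that indirect parametrization pattern (the module and route names are made up, not the actual repo code):
import importlib

import pytest

examples = ["flask_basic", "flask_orm"]  # hypothetical example app packages

@pytest.fixture
def client(request):
    # request.param receives one entry from the `examples` list above
    app = importlib.import_module(f"{request.param}.app").app
    app.testing = True
    return app.test_client()

@pytest.mark.parametrize("client", examples, indirect=True)
def test_get_pets(client):
    rv = client.get("/pets")
    assert rv.status_code == 200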
|
gharchive/pull-request
| 2021-06-25T13:59:38 |
2025-04-01T04:35:27.832496
|
{
"authors": [
"greyli"
],
"repo": "pallets-eco/flask-api-examples",
"url": "https://github.com/pallets-eco/flask-api-examples/pull/2",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
958563790
|
No documentation on options' default values
#1618 says that it solved the problem. But there's no documentation, not even an example in the comments... Does this even work? How would one know if it works?
To be more specific: I'm not interested in prompting users. I'm interested in the equivalent of the const argument in ArgumentParser.
That PR includes new documentation. If you have a question about your own code, please ask on our Discord server.
Where is this documentation? There is a list of about 10 different PRs linked in multiple tickets, but I couldn't find documentation on this feature. Not to mention it hasn't been published on the documentation website.
https://github.com/pallets/click/pull/1618/files#diff-fcf77d5d210d0bb583fe53868f7897681f307a7b99e6397cce8028a8add6ed75
And it has been published on the documentation website.
https://click.palletsprojects.com/en/8.0.x/options.html
By default, the user will be prompted for an input if one was not passed through the command line.
The link in your comment is broken. The correct link was probably https://click.palletsprojects.com/en/8.0.x/options/#optional-value .
Otherwise, thanks. The confusing part here is the mention of prompting. That feature is so undesirable to me that I didn't realize the useful documentation would be behind it.
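For anyone landing here with the same question, this is roughly what I was after, based on my reading of the optional-value section (flag_value plays the role of argparse's const; treat the specific option as illustrative):
import click

@click.command()
@click.option("--log", is_flag=False, flag_value="INFO", default=None,
              help="Enable logging, optionally passing a level.")
def cli(log):
    # no option    -> None    (default)
    # --log        -> "INFO"  (flag_value, the argparse-const equivalent)
    # --log=DEBUG  -> "DEBUG" (explicit value)
    click.echo(f"log={log}")

if __name__ == "__main__":
    cli()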
|
gharchive/issue
| 2021-08-02T22:46:31 |
2025-04-01T04:35:27.836997
|
{
"authors": [
"davidism",
"jab",
"wvxvw"
],
"repo": "pallets/click",
"url": "https://github.com/pallets/click/issues/2028",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
924363710
|
add an ability to check that a Path has an executable bit set
fixes #1961
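A quick usage sketch of what this enables (the keyword name follows the existing readable/writable parameters; treat it as provisional until review):
import click

@click.command()
@click.argument(
    "script",
    type=click.Path(exists=True, dir_okay=False, executable=True),
)
def run(script):
    # Fails with a usage error unless SCRIPT exists, is a file,
    # and has its executable bit set.
    click.echo(f"would run {script}")

if __name__ == "__main__":
    run()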
Checklist:
[ ] Add tests that demonstrate the correct behavior of the change. Tests should fail without the change.
[x] Add an entry in CHANGES.rst summarizing the change and linking to the issue.
[x] Add .. versionchanged:: entries in any relevant code docs.
[ ] Run pre-commit hooks and fix any issues.
[ ] Run pytest and tox, no tests failed.
tox failed, but as far as I can tell it's unrelated to my code. I'm also not sure where the pre-commit hooks are documented, but as far as I can tell this is formatted fine.
@davidism is there anything I can help with to make sure this lands in the next milestone? If this PR needs to be redone to point at a pallets:next branch, let me know.
|
gharchive/pull-request
| 2021-06-17T21:47:24 |
2025-04-01T04:35:27.840058
|
{
"authors": [
"sielicki"
],
"repo": "pallets/click",
"url": "https://github.com/pallets/click/pull/1962",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
419531094
|
fix DeprecationWarning: time.clock
fix DeprecationWarning: time.clock has been deprecated in Python 3.3 and will be removed from Python 3.8: use time.perf_counter or time.process_time instead
time.clock has been replaced with time.perf_counter in the code.
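Schematically, the change amounts to this (shown generically, not the exact call sites in the package):
import time

# before (time.clock is removed in Python 3.8):
# start = time.clock()

# after:
start = time.perf_counter()
# ... work being timed ...
elapsed = time.perf_counter() - start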
Duplicate of #638
|
gharchive/pull-request
| 2019-03-11T15:26:36 |
2025-04-01T04:35:27.841306
|
{
"authors": [
"Inconnu08",
"davidism"
],
"repo": "pallets/flask-sqlalchemy",
"url": "https://github.com/pallets/flask-sqlalchemy/pull/693",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
450839792
|
Convert make_test_environ_builder into class (fixes #3207)
Added a new class flask.testing.EnvironBuilder inheriting from werkzeug.test.EnvironBuilder.
Logic from make_test_environ_builder() was moved to the constructor of that class, and the function was changed to simply instantiate the class while issuing a DeprecationWarning.
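Roughly, the shape of the change looks like this (a simplified sketch, not the actual diff; the real constructor also derives the base URL, subdomain, and url_scheme from the app):
import warnings
import werkzeug.test

class EnvironBuilder(werkzeug.test.EnvironBuilder):
    # Sketch: an environ builder that knows about the Flask app.
    def __init__(self, app, path="/", *args, **kwargs):
        self.app = app
        super().__init__(path, *args, **kwargs)

def make_test_environ_builder(app, path="/", *args, **kwargs):
    # The old helper becomes a thin, deprecated wrapper.
    warnings.warn(
        "'make_test_environ_builder' is deprecated; use"
        " 'flask.testing.EnvironBuilder' instead.",
        DeprecationWarning,
        stacklevel=2,
    )
    return EnvironBuilder(app, path, *args, **kwargs)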
I did explore making json_dumps() a regular method rather than a static method, to pick up app, but if anything was expecting to call EnvironBuilder.json_dumps() as a static method then this would break. It would require funky descriptor tricks to work both as a static method and an instance method under Python 2, so it didn't seem worth the code it would take.
Tests and stylecheck passed under Tox but somehow the Azure pipelines have not been requested for this PR. (I appear to be on a dodgy internet connection.)
Looks good! Please add a changelog entry as well.
|
gharchive/pull-request
| 2019-05-31T14:55:00 |
2025-04-01T04:35:27.843806
|
{
"authors": [
"davidism",
"lordmauve"
],
"repo": "pallets/flask",
"url": "https://github.com/pallets/flask/pull/3232",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1086059438
|
dict attribute items collision with python dict method
Given the context:
{
"order": {
"id": 1,
"items": ["a", "b", "c"]
}
}
using the following template:
Order: {{order.id}} <<<--- Works ok
{% for item in order.items %}
{{item}}
{% endfor %}
The loop raises an error, since order.items resolves to <built-in method items of dict object at 0xffffa5b5b280> rather than the list.
This can be bypassed with {{order["items"]}}, but it can certainly create confusion...
Environment:
Python version: 3.9
Jinja version: ==3.0.1
Nothing we can do about this. Preprocess your data to account for it, or iterate over keys and get values instead.
Fairly sure there's also a way to override how dotted lookups work on the Environment.
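Something along these lines should work (untested sketch; it overrides Environment.getattr so dotted lookups try item access first):
from jinja2 import Environment

class ItemFirstEnvironment(Environment):
    def getattr(self, obj, attribute):
        # Prefer item lookup so order.items finds the "items" key
        # instead of dict.items; fall back to attribute access.
        try:
            return obj[attribute]
        except (TypeError, LookupError):
            return super().getattr(obj, attribute)

env = ItemFirstEnvironment()
tpl = env.from_string("{% for item in order.items %}{{ item }} {% endfor %}")
print(tpl.render(order={"id": 1, "items": ["a", "b", "c"]}))  # a b c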
You can just use order['items'] as well...
|
gharchive/issue
| 2021-12-21T17:28:55 |
2025-04-01T04:35:27.846934
|
{
"authors": [
"ThiefMaster",
"davidism",
"sergioisidoro"
],
"repo": "pallets/jinja",
"url": "https://github.com/pallets/jinja/issues/1554",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|