id (string, lengths 4–10) | text (string, lengths 4–2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
2062764650
|
fix: add type to favicon field
resolve #146
Adds the image type to the favicon field
Thank you!
|
gharchive/pull-request
| 2024-01-02T18:02:43 |
2025-04-01T06:39:48.335889
|
{
"authors": [
"nobkd",
"tipiirai"
],
"repo": "nuejs/nue",
"url": "https://github.com/nuejs/nue/pull/147",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2499479263
|
chore: exclude examples and docs from triggering test run
https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions#onpushpull_requestpull_request_targetpathspaths-ignore
Good change
|
gharchive/pull-request
| 2024-09-01T13:45:59 |
2025-04-01T06:39:48.337213
|
{
"authors": [
"nobkd",
"tipiirai"
],
"repo": "nuejs/nue",
"url": "https://github.com/nuejs/nue/pull/338",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1729714041
|
🛑 Università e ricerca is down
In 088f9e3, Università e ricerca (https://www.mur.gov.it/it) was down:
HTTP code: 403
Response time: 689 ms
Resolved: Università e ricerca is back up in d4b33a1.
|
gharchive/issue
| 2023-05-28T22:28:48 |
2025-04-01T06:39:48.339653
|
{
"authors": [
"nuke86"
],
"repo": "nuke86/ransomPing",
"url": "https://github.com/nuke86/ransomPing/issues/1226",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1772633759
|
🛑 Difesa is down
In aea87c5, Difesa (http://www.difesa.it/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Difesa is back up in 8d309f6.
|
gharchive/issue
| 2023-06-24T11:24:26 |
2025-04-01T06:39:48.342053
|
{
"authors": [
"nuke86"
],
"repo": "nuke86/ransomPing",
"url": "https://github.com/nuke86/ransomPing/issues/1861",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1824904903
|
🛑 Cultura is down
In 5cb2ed6, Cultura (https://www.beniculturali.it/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Cultura is back up in 1efe2bd.
|
gharchive/issue
| 2023-07-27T18:00:48 |
2025-04-01T06:39:48.344358
|
{
"authors": [
"nuke86"
],
"repo": "nuke86/ransomPing",
"url": "https://github.com/nuke86/ransomPing/issues/3283",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1866610702
|
🛑 Difesa is down
In b4ef3ce, Difesa (http://www.difesa.it/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Difesa is back up in 8803e48 after 153 days, 9 hours, 17 minutes.
|
gharchive/issue
| 2023-08-25T08:35:39 |
2025-04-01T06:39:48.346804
|
{
"authors": [
"nuke86"
],
"repo": "nuke86/ransomPing",
"url": "https://github.com/nuke86/ransomPing/issues/4342",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2137417037
|
🛑 Guardia di Finanza is down
In 10874ed, Guardia di Finanza (https://www.gdf.gov.it) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Guardia di Finanza is back up in 0b35dd0 after 49 minutes.
|
gharchive/issue
| 2024-02-15T20:43:35 |
2025-04-01T06:39:48.349344
|
{
"authors": [
"nuke86"
],
"repo": "nuke86/ransomPing",
"url": "https://github.com/nuke86/ransomPing/issues/6301",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2142873659
|
🛑 Guardia di Finanza is down
In c1a87d5, Guardia di Finanza (https://www.gdf.gov.it) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Guardia di Finanza is back up in 325fea5 after 16 minutes.
|
gharchive/issue
| 2024-02-19T17:35:39 |
2025-04-01T06:39:48.351662
|
{
"authors": [
"nuke86"
],
"repo": "nuke86/ransomPing",
"url": "https://github.com/nuke86/ransomPing/issues/6399",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2149816173
|
🛑 Guardia di Finanza is down
In 05de9a3, Guardia di Finanza (https://www.gdf.gov.it) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Guardia di Finanza is back up in e11f053 after 5 minutes.
|
gharchive/issue
| 2024-02-22T19:42:43 |
2025-04-01T06:39:48.353959
|
{
"authors": [
"nuke86"
],
"repo": "nuke86/ransomPing",
"url": "https://github.com/nuke86/ransomPing/issues/6467",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1512254002
|
Add: data_backup, ime_incognito to Thai's i18n
Added "data_backup" and "ime_incognito" Translations to Thai's i18n
69th commit, Nice :)
In this pull request
Localization
Add: data_backup, ime_incognito to Thai's i18n
Fully Translated Japanese
Thank you!
|
gharchive/pull-request
| 2022-12-27T23:20:08 |
2025-04-01T06:39:48.380373
|
{
"authors": [
"nullxception",
"rinme"
],
"repo": "nullxception/boorusphere",
"url": "https://github.com/nullxception/boorusphere/pull/69",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2392650512
|
Component Testing for Python SDK
Summary
What change needs making?
Use Cases
When would you use this?
Message from the maintainers:
If you wish to see this enhancement implemented please add a 👍 reaction to this issue! We often sort issues this way to know what to prioritize.
Yeah we need some better tooling and developer experience on testing.
But it's hard to know what's in the making when only issue titles are filled.
Please include community so we can help you grow. 🙂
Indeed, it is very hard to tell from "test" labels whether an issue is related to numaflow core tests (go/sdk) or a test framework for users (pipelines/udf).
We usually tag area/sdk when it is SDK related.
|
gharchive/issue
| 2024-07-05T13:50:03 |
2025-04-01T06:39:48.383055
|
{
"authors": [
"th0ger",
"vigith"
],
"repo": "numaproj/numaflow",
"url": "https://github.com/numaproj/numaflow/issues/1794",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1251966632
|
AWS SQS source and sink
Is your feature request related to a problem? Please describe.
Would be useful to be able to source events from AWS SQS.
Describe the solution you'd like
A source that turns AWS SQS events into messages for further processing.
Describe alternatives you've considered
None
Additional context
None
Message from the maintainers:
If you wish to see this enhancement implemented please add a 👍 reaction to this issue! We often sort issues this way to know what to prioritize.
@shubhamdixit863 once you complete the issues listed in https://github.com/numaproj-contrib/aws-sqs-source-go/issues, please go ahead and close this.
Completed! SQS Source and SQS Sink
|
gharchive/issue
| 2022-05-29T19:04:00 |
2025-04-01T06:39:48.387016
|
{
"authors": [
"edlee2121",
"vigith"
],
"repo": "numaproj/numaflow",
"url": "https://github.com/numaproj/numaflow/issues/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
875413156
|
np.linalg.solve unexpectedly returns an F_CONTIGUOUS array for C_CONTIGUOUS inputs
Reporting a bug
[X] I have tried using the latest released version of Numba (most recent is
visible in the change log (https://github.com/numba/numba/blob/master/CHANGE_LOG).
[X] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
Reproducer
import numpy as np
from numba import njit
def f(x, y):
return np.linalg.solve(x, y)
if __name__ == "__main__":
x = np.array([[1, 2], [3, 5]], dtype=np.float64)
y = np.eye(2, dtype=np.float64)
g = njit()(f)
numpy_result = f(x, y)
numba_result = g(x, y)
print(numpy_result.flags)
print(numba_result.flags)
This prints the following:
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
WRITEBACKIFCOPY : False
UPDATEIFCOPY : False
C_CONTIGUOUS : False
F_CONTIGUOUS : True
OWNDATA : False
WRITEABLE : True
ALIGNED : True
WRITEBACKIFCOPY : False
UPDATEIFCOPY : False
I am not sure if this is the expected/desired behaviour. I encountered this when attempting to feed the results of np.linalg.solve into np.dot and saw:
NumbaPerformanceWarning: np.dot() is faster on contiguous arrays, called on (array(float64, 2d, A), array(float64, 1d, C))
Thanks for the report. Given what Numba's implementation of np.linalg.solve does, this is not surprising; however, it is a bug as it ought to match NumPy.
I actually encountered the same issue, but with np.linalg.qr function. Are there any workarounds for this?
|
gharchive/issue
| 2021-05-04T12:29:35 |
2025-04-01T06:39:48.391168
|
{
"authors": [
"JSKenyon",
"stuartarchibald",
"vroomzel"
],
"repo": "numba/numba",
"url": "https://github.com/numba/numba/issues/6998",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
104132275
|
Fix typo(s).
For most cases in typeinfer.py, the word 'constrain' should actually be 'constraint'.
Yes, I've always been a bit confused by that :)
|
gharchive/pull-request
| 2015-08-31T20:52:41 |
2025-04-01T06:39:48.392453
|
{
"authors": [
"pitrou",
"stefanseefeld"
],
"repo": "numba/numba",
"url": "https://github.com/numba/numba/pull/1401",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
121218796
|
Require student to specify units in number entry parts
There should be an option to make the student specify the units of their answer. For example, make them type "50g" instead of just "50".
Options:
name of unit (g,s,N, etc.)
allow SI orders of magnitude? (i.e., student can enter either "5000g" or "5kg" and be marked correctly)
which alternate names do you allow? (e.g. just "kg", or "kilos" or "kilograms" or "kilogrammes")
Will have to hardcode most units - need to know all the names for different SI units. Or, have presets that can be modified.
Maybe give a list of units and a scaling factor. For example:
(g or grammes or grams) * 1
(kg or kilogrammes or kilograms or kilos) / 1000
(mg or milligrammes or milligrams) * 1000
Some unit systems are more complicated!
Currencies go in front: "£3.50", except for
Geographical coordinates have several components: "66°33’ N", 1° = 60’ = 3600”.
Should extend the TNum type to have a "units" property, describing the units the number was given in.
While doing this, maybe we should also deal with different decimal notations - see https://en.wikipedia.org/wiki/Decimal_mark#Examples_of_use
If we hardcode the units, need to make it easy to add more for international users or new subjects.
How complicated do we want to get with combinations of units? Need to do at least things like "m/s = ms^-1" and "m/s^2 = ms^-2". Can all units be expressed as a product or ratio of other units?
See http://www.boost.org/doc/libs/1_37_0/doc/html/boost_units/Dimensional_Analysis.html for an existing implementation.
Fields to add to the number entry part:
expected units (string, not necessarily directly shown to the student - complicated schemes like geographical coords will need a name like "latitude", which tells the part how to format the expected answer)
allow different orders of magnitude? (i.e., allow "kg" as well as "g")
partial credit if units not given, and a feedback message
allow alternate names? (i.e., allow "grams" as well as "g", or "l" as well as "cm^3") - maybe this should be a list of allowed units
Another point: if we're adding dimension information to TNum, should addition and subtraction fail when attempted on incompatible measurements, and how should they be combined for multiplication and division?
Rink is an existing calculator which might work as a reference https://github.com/tiffany352/rink-rs/wiki/Rink-Manual
The % symbol behaves a lot like a units symbol.
JS-quantities looks like a decent JS unit-handling library.
There's now a quantities extension.
|
gharchive/issue
| 2015-12-09T11:44:53 |
2025-04-01T06:39:48.401204
|
{
"authors": [
"christianp"
],
"repo": "numbas/Numbas",
"url": "https://github.com/numbas/Numbas/issues/419",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2259877943
|
Use multiple columns or something like that for contributor list
The contributor list is getting long. That's a good thing, but presented as a single-column list it leaves a lot of whitespace on the About page. I recommend going to multiple columns, or just a comma-separated paragraph format, or something else space-efficient, if it grows much more. Certainly if each member of the Delft team is added individually to the list, that would be the time to fix that formatting.
Should be incorporated in overhaul of docs for alpha.
Fixed in ui2 per #464, closing.
|
gharchive/issue
| 2024-04-23T22:25:59 |
2025-04-01T06:39:48.420843
|
{
"authors": [
"gwhitney"
],
"repo": "numberscope/frontscope",
"url": "https://github.com/numberscope/frontscope/issues/314",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1650764767
|
Return testdox as test name
Return the name generated for PHPUnit from testDox. This adds the possibility to rename a test with the @testdox annotation.
No plans for changing this, sorry.
Hi, I currently use @testdox to describe my tests as it gives me a clearer interface. Is there any way to display it with the art test command? Currently, if I pass it as a parameter (art test --testdox), the coverage functions do not work.
Example:
art test --testdox --coverage --min=100
This example does not work.
With the change I proposed, it would work exactly the same for the current test suite, and the @testdox annotation would also work.
How could I fix this?
Thanks.
Translated with www.DeepL.com/Translator (free version)
|
gharchive/pull-request
| 2023-04-02T01:22:10 |
2025-04-01T06:39:48.554133
|
{
"authors": [
"nunomaduro",
"set0x"
],
"repo": "nunomaduro/collision",
"url": "https://github.com/nunomaduro/collision/pull/268",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2640024151
|
Fix unquoted command args with leading +
This PR fixes #131 by grouping "+/-" and digits in decimals.
Related test cases in cmd-026-unquoted-string-with-leading-plus.
Thanks!
|
gharchive/pull-request
| 2024-11-07T06:36:50 |
2025-04-01T06:39:48.648200
|
{
"authors": [
"blindFS",
"fdncred"
],
"repo": "nushell/tree-sitter-nu",
"url": "https://github.com/nushell/tree-sitter-nu/pull/134",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
389620176
|
jenkinsx-quickstart-nuxeo-poc to 1.0.11
Promote jenkinsx-quickstart-nuxeo-poc to version 1.0.11
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. jenkins-x-bot seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
|
gharchive/pull-request
| 2018-12-11T06:52:38 |
2025-04-01T06:39:48.662638
|
{
"authors": [
"CLAassistant",
"mcedica"
],
"repo": "nuxeo-sandbox/environment-jx-staging",
"url": "https://github.com/nuxeo-sandbox/environment-jx-staging/pull/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
683785203
|
feat: create jest preset
Jest preset for @nuxtjs/module-test-utils
// jest.config.js
export default {
preset: '@nuxtjs/module-test-utils'
}
@pi0 Add something else to the preset?
|
gharchive/pull-request
| 2020-08-21T19:31:59 |
2025-04-01T06:39:48.696443
|
{
"authors": [
"ricardogobbosouza"
],
"repo": "nuxt-community/module-test-utils",
"url": "https://github.com/nuxt-community/module-test-utils/pull/60",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
581673865
|
Add vueI18nLoader to AllOptionsInterface of types
Add vueI18nLoader to AllOptionsInterface of types.
Because Options has vueI18nLoader.
https://nuxt-community.github.io/nuxt-i18n/options-reference.html
Thank you.
|
gharchive/pull-request
| 2020-03-15T13:25:37 |
2025-04-01T06:39:48.697890
|
{
"authors": [
"munierujp",
"rchl"
],
"repo": "nuxt-community/nuxt-i18n",
"url": "https://github.com/nuxt-community/nuxt-i18n/pull/634",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2245757984
|
feat: cache management
nuxt-hub/platform#152
@Atinux The tests were passed by recreating the pnpm-lock using pnpm@9.
|
gharchive/pull-request
| 2024-04-16T11:02:29 |
2025-04-01T06:39:48.698830
|
{
"authors": [
"farnabaz"
],
"repo": "nuxt-hub/core",
"url": "https://github.com/nuxt-hub/core/pull/73",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1761481344
|
_ctx.$localePath is not a function
Environment
Operating System: Linux
Node Version: v16.14.2
Nuxt Version: 3.5.3
Nitro Version: 2.4.1
Package Manager: npm@9.4.2
Builder: vite
User Config: modules
Runtime Modules: @nuxtjs/i18n@8.0.0-beta.12
Build Modules: -
Reproduction
https://stackblitz.com/edit/github-6vqdsb?file=app.vue,package.json,nuxt.config.ts
Describe the bug
According to documentation available at https://v8.i18n.nuxtjs.org/guide/migrating#change-some-export-apis-name-on-nuxt-context it seems that API methods have to be prefixed with $, however it results in 500 error (_ctx.$localePath is not a function).
This works, but shows type error in VSCode
This does not work and results in 500 error
This does work
Additional context
No response
Logs
No response
Any updates on this? Having the same problem...
As a workaround I use <nuxt-link-locale>, which just works.
The same problem still exists if $localePath is used outside of setup.
|
gharchive/issue
| 2023-06-16T23:18:24 |
2025-04-01T06:39:48.705883
|
{
"authors": [
"antlionguard",
"fabkho",
"simkuns"
],
"repo": "nuxt-modules/i18n",
"url": "https://github.com/nuxt-modules/i18n/issues/2168",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1793106760
|
nuxt 3 module i18n not working
Environment
Operating System: Linux
Node Version: v18.16.0
Nuxt Version: 3.6.1
Nitro Version: 2.5.2
Package Manager: pnpm@8.5.1
Builder: vite
User Config: -
Runtime Modules: -
Build Modules: -
Reproduction
https://github.com/productdevbook/i18n-bugs-layer
Describe the bug
pnpm install
pnpm dev admin
✔ Nitro built in 364 ms nitro 11:55:20 AM
[intlify] Not found 'hello' key in 'en-US' locale messages. 11:55:21 AM
[intlify] Fall back to translate 'hello' key with 'en' locale. 11:55:21 AM
[intlify] Not found 'hello' key in 'en' locale messages. 11:55:21 AM
[intlify] Not found 'hello' key in 'en-US' locale messages. 11:55:21 AM
[intlify] Fall back to translate 'hello' key with 'en' locale. 11:55:21 AM
[intlify] Not found 'hello' key in 'en' locale messages. 11:55:21 AM
Additional context
pnpm install
pnpm dev admin
Logs
No response
Same here. All packages default (using Nuxt Content).
I got the same problem and tried nuxi upgrade --force once again (even though I updated the day 3.6.1 released) and it worked, dunno what happened tbh.
I got the same problem and tried nuxi upgrade --force once again (even though I updated the day 3.6.1 released) and it worked, dunno what happened tbh.
I updated the repo as you said, still the same problem persists.
Hi!
I’ve checked your minimal reproduction repo.
I’ve noticed that you need to set up two things to make nuxt i18n work with nuxt layer.
1. set your custom nuxt module at modules options
You need to add the nuxt module defined in the nuxt layer (layer dir) to nuxt.config.ts as follows
// https://nuxt.com/docs/api/configuration/nuxt-config
import MyModule from './module'
export default defineNuxtConfig({
modules: [
// https://i18n.nuxtjs.org/
MyModule,
'@nuxtjs/i18n',
],
devtools: { enabled: true }
})
It seems that the nuxt layer does not automatically install a custom nuxt module if you just define one.
2. configure i18n options
For nuxt application (admin dir) that extend the nuxt layer, the i18n options must be set as follows. The following is a case where lazy loading is used.
// https://nuxt.com/docs/api/configuration/nuxt-config
export default defineNuxtConfig({
extends: [
'../layer'
],
// see the docs: https://v8.i18n.nuxtjs.org/guide/layers
i18n: {
lazy: true,
langDir: 'lang', // need `lang` dir on `admin`
locales: [
{
code: 'en',
file: 'en.json',
},
{
code: 'fr',
file: 'fr.json',
},
]
}
})
If lazy loading is not used, at least the locales option must be defined; if locales does not have resource definitions in files, an empty array must be defined in files.
@BobbieGoede
If you have anything to add about the layer of nuxt i18n module, please comment. 🙏
Now that #2290 has been merged, the configuration @kazupon described in step 2 is no longer necessary. All you have to change in your reproduction is registering your module as mentioned.
I have changed your reproduction to demonstrate it works with the edge channel installed (I registered the module by putting it in a modules folder) check it out here.
Please let me know if you're still experiencing any issues even with the latest edge version installed!
@BobbieGoede
Same repo, nuxt 3.7.1, still not working
"@nuxtjs/i18n": "8.0.0-rc.4",
It does not work if the layer is installed as a node dependency.
https://github.com/nuxt-modules/i18n/discussions/2388
Should we start a new Issue or reopen?
|
gharchive/issue
| 2023-07-07T08:56:47 |
2025-04-01T06:39:48.719399
|
{
"authors": [
"BobbieGoede",
"Tigriz",
"derHodrig",
"kazupon",
"productdevbook",
"thaikolja"
],
"repo": "nuxt-modules/i18n",
"url": "https://github.com/nuxt-modules/i18n/issues/2206",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1359646154
|
500 init is not a function
Version
@nuxtjs/strapi: v1.5.0
nuxt: v3.0.0-rc.4
node: v16.16.0
strapi: v4.3.6
Linux Pop_OS 20.04
Reproduction Link
https://github.com/FKonig/NuxtStrapi_Bug
Steps to reproduce
Normal minimal installation like in the linked repo.
Opening http://localhost:3000 -> 500 init is not a function
What is Expected?
Successful start of the application
What is actually happening?
Browser:
Terminal:
[nuxt] [request error] init is not a function
at Module.useState (./.nuxt/dist/server/server.mjs:993:26)
at Module.useStrapiUser (./.nuxt/dist/server/server.mjs:3184:51)
at Module.useStrapiAuth (./.nuxt/dist/server/server.mjs:3053:38)
at ./.nuxt/dist/server/server.mjs:3020:47
at fn (./.nuxt/dist/server/server.mjs:434:27)
at Object.callAsync (./node_modules/unctx/dist/index.mjs:42:19)
at callWithNuxt (./.nuxt/dist/server/server.mjs:436:23)
at applyPlugin (./.nuxt/dist/server/server.mjs:391:29)
at Module.applyPlugins (./.nuxt/dist/server/server.mjs:401:11)
at async createNuxtAppServer (./.nuxt/dist/server/server.mjs:46:7)
[nuxt] [request error] init is not a function
at Module.useState (./.nuxt/dist/server/server.mjs:993:26)
at Module.useStrapiUser (./.nuxt/dist/server/server.mjs:3184:51)
at Module.useStrapiAuth (./.nuxt/dist/server/server.mjs:3053:38)
at ./.nuxt/dist/server/server.mjs:3020:47
at fn (./.nuxt/dist/server/server.mjs:434:27)
at Object.callAsync (./node_modules/unctx/dist/index.mjs:42:19)
at callWithNuxt (./.nuxt/dist/server/server.mjs:436:23)
at applyPlugin (./.nuxt/dist/server/server.mjs:391:29)
at Module.applyPlugins (./.nuxt/dist/server/server.mjs:401:11)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
Hello FKonig,
I just cloned your reproduction repository.
Unfortunately it works fine on my machine.
If you use any functions from the @nuxtjs/strapi repo, please consider this issue: #282; there is also an open pull request for this bug: #281
|
gharchive/issue
| 2022-09-02T03:02:58 |
2025-04-01T06:39:48.724863
|
{
"authors": [
"FKonig",
"Mechse"
],
"repo": "nuxt-modules/strapi",
"url": "https://github.com/nuxt-modules/strapi/issues/278",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1833376463
|
Release v1.0.0
Resolves #209
Resolves #185
Resolves #207
Resolves #163
Resolves #203
Resolves #153
Resolves #179
Resolves #137
Resolves #147
Resolves #80
Resolves #92
Resolves #208
Resolves #184
Resolves #165
Wooo! Thanks a ton for this, it works flawlessly. I'm really curious why things seemed to implode over the last couple weeks for a ton of people? Was it due to changes from the Supabase team on their end?
The supabase libraries (mainly gotruejs) have evolved in a way that was not fitting our use in the module. A rewrite was needed!
|
gharchive/pull-request
| 2023-08-02T15:09:27 |
2025-04-01T06:39:48.727340
|
{
"authors": [
"CptJJ",
"larbish"
],
"repo": "nuxt-modules/supabase",
"url": "https://github.com/nuxt-modules/supabase/pull/222",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
614472123
|
Cannot create new project
C:\Users\Ala\Desktop\aa>npx create-nuxt-app dashboard
create-nuxt-app v2.15.0
✨ Generating Nuxt.js project in dashboard
? Project name dashboard
? Project description My kryptonian Nuxt.js project
? Author name
? Choose programming language JavaScript
? Choose the package manager Npm
? Choose UI framework Vuetify.js
? Choose custom server framework None (Recommended)
? Choose Nuxt.js modules Axios, Progressive Web App (PWA) Support, DotEnv
? Choose linting tools (Press to select, to toggle all, to invert selection)
? Choose test framework None
? Choose rendering mode Universal (SSR)
? Choose development tools jsconfig.json (Recommended for VS Code)
npm WARN deprecated core-js@2.6.11: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to
the actual version of core-js@3.
npm WARN deprecated chokidar@2.1.8: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
npm WARN deprecated fsevents@1.2.13: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
npm WARN deprecated core-js@2.6.11: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to
the actual version of core-js@3.
npm WARN deprecated chokidar@2.1.8: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
npm WARN deprecated fsevents@1.2.13: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
gyp ERR! find VS
gyp ERR! find VS msvs_version not set from command line or npm config
gyp ERR! find VS VCINSTALLDIR not set, not running in VS Command Prompt
gyp ERR! find VS could not use PowerShell to find Visual Studio 2017 or newer
gyp ERR! find VS looking for Visual Studio 2015
gyp ERR! find VS - not found
gyp ERR! find VS not looking for VS2013 as it is only supported up to Node.js 8
gyp ERR! find VS
gyp ERR! find VS **************************************************************
gyp ERR! find VS You need to install the latest version of Visual Studio
npm WARN deprecated core-js@2.6.11: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to
the actual version of core-js@3.
npm WARN deprecated chokidar@2.1.8: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
npm WARN deprecated fsevents@1.2.13: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
gyp ERR! find VS
gyp ERR! find VS msvs_version not set from command line or npm config
gyp ERR! find VS VCINSTALLDIR not set, not running in VS Command Prompt
gyp ERR! find VS could not use PowerShell to find Visual Studio 2017 or newer
gyp ERR! find VS looking for Visual Studio 2015
gyp ERR! find VS - not found
gyp ERR! find VS not looking for VS2013 as it is only supported up to Node.js 8
gyp ERR! find VS
gyp ERR! find VS **************************************************************
gyp ERR! find VS You need to install the latest version of Visual Studio
npm WARN deprecated core-js@2.6.11: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to
the actual version of core-js@3.
npm WARN deprecated chokidar@2.1.8: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
npm WARN deprecated fsevents@1.2.13: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
gyp ERR! find VS
gyp ERR! find VS msvs_version not set from command line or npm config
gyp ERR! find VS VCINSTALLDIR not set, not running in VS Command Prompt
gyp ERR! find VS could not use PowerShell to find Visual Studio 2017 or newer
gyp ERR! find VS looking for Visual Studio 2015
gyp ERR! find VS - not found
gyp ERR! find VS not looking for VS2013 as it is only supported up to Node.js 8
gyp ERR! find VS
gyp ERR! find VS **************************************************************
gyp ERR! find VS You need to install the latest version of Visual Studio
gyp ERR! find VS including the "Desktop development with C++" workload.
gyp ERR! find VS For more information consult the documentation at:
gyp ERR! find VS https://github.com/nodejs/node-gyp#on-windows
gyp ERR! find VS **************************************************************
gyp ERR! find VS
gyp ERR! configure error
gyp ERR! stack Error: Could not find any Visual Studio installation to use
gyp ERR! stack at VisualStudioFinder.fail (C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:121:47)
gyp ERR! stack at C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:74:16
gyp ERR! stack at VisualStudioFinder.findVisualStudio2013 (C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:351:14)
gyp ERR! stack at C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:70:14
gyp ERR! stack at C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:372:16
npm WARN deprecated core-js@2.6.11: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to
the actual version of core-js@3.
npm WARN deprecated chokidar@2.1.8: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
npm WARN deprecated fsevents@1.2.13: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
gyp ERR! find VS
gyp ERR! find VS msvs_version not set from command line or npm config
gyp ERR! find VS VCINSTALLDIR not set, not running in VS Command Prompt
gyp ERR! find VS could not use PowerShell to find Visual Studio 2017 or newer
gyp ERR! find VS looking for Visual Studio 2015
gyp ERR! find VS - not found
gyp ERR! find VS not looking for VS2013 as it is only supported up to Node.js 8
gyp ERR! find VS
gyp ERR! find VS **************************************************************
gyp ERR! find VS You need to install the latest version of Visual Studio
gyp ERR! find VS including the "Desktop development with C++" workload.
gyp ERR! find VS For more information consult the documentation at:
gyp ERR! find VS https://github.com/nodejs/node-gyp#on-windows
gyp ERR! find VS **************************************************************
gyp ERR! find VS
gyp ERR! configure error
gyp ERR! stack Error: Could not find any Visual Studio installation to use
gyp ERR! stack at VisualStudioFinder.fail (C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:121:47)
gyp ERR! stack at C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:74:16
gyp ERR! stack at VisualStudioFinder.findVisualStudio2013 (C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:351:14)
gyp ERR! stack at C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:70:14
gyp ERR! stack at C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:372:16
gyp ERR! stack at C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\util.js:54:7
npm WARN deprecated core-js@2.6.11: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to
the actual version of core-js@3.
npm WARN deprecated chokidar@2.1.8: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
npm WARN deprecated fsevents@1.2.13: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
gyp ERR! find VS
gyp ERR! find VS msvs_version not set from command line or npm config
gyp ERR! find VS VCINSTALLDIR not set, not running in VS Command Prompt
gyp ERR! find VS could not use PowerShell to find Visual Studio 2017 or newer
gyp ERR! find VS looking for Visual Studio 2015
gyp ERR! find VS - not found
gyp ERR! find VS not looking for VS2013 as it is only supported up to Node.js 8
gyp ERR! find VS
gyp ERR! find VS **************************************************************
gyp ERR! find VS You need to install the latest version of Visual Studio
gyp ERR! find VS including the "Desktop development with C++" workload.
gyp ERR! find VS For more information consult the documentation at:
gyp ERR! find VS https://github.com/nodejs/node-gyp#on-windows
gyp ERR! find VS **************************************************************
gyp ERR! find VS
gyp ERR! configure error
gyp ERR! stack Error: Could not find any Visual Studio installation to use
gyp ERR! stack at VisualStudioFinder.fail (C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:121:47)
gyp ERR! stack at C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:74:16
gyp ERR! stack at VisualStudioFinder.findVisualStudio2013 (C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:351:14)
gyp ERR! stack at C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:70:14
gyp ERR! stack at C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\find-visualstudio.js:372:16
gyp ERR! stack at C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\util.js:54:7
gyp ERR! stack at C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\util.js:33:16
gyp ERR! stack at ChildProcess.exithandler (child_process.js:310:5)
gyp ERR! stack at ChildProcess.emit (events.js:315:20)
gyp ERR! stack at maybeClose (internal/child_process.js:1051:16)
gyp ERR! System Windows_NT 10.0.18363
gyp ERR! command "C:\Program Files\nodejs\node.exe" "C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\bin\node-gyp.js" "rebuild" "--release"
gyp ERR! cwd C:\Users\Ala\Desktop\aa\dashboard\node_modules\fibers
gyp ERR! node -v v14.2.0
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok
node-gyp exited with code: 1
Please make sure you are using a supported platform and node version. If you
would like to compile fibers on this machine please make sure you have setup your
build environment--
Windows + OS X instructions here: https://github.com/nodejs/node-gyp
Ubuntu users please run: sudo apt-get install g++ build-essential
RHEL users please run: yum install gcc-c++ and yum groupinstall 'Development Tools'
Alpine users please run: sudo apk add python make g++
'nodejs' n'est pas reconnu en tant que commande interne
ou externe, un programme exécutable ou un fichier de commandes.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.2.7 (node_modules\watchpack\node_modules\chokidar\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@~2.1.2 (node_modules\chokidar\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@2.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! fibers@4.0.3 install: node build.js || nodejs build.js
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the fibers@4.0.3 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\Ala\AppData\Roaming\npm-cache\_logs\2020-05-08T02_57_25_403Z-debug.log
fibers@4.0.3 install C:\Users\Ala\Desktop\aa\dashboard\node_modules\fibers
node build.js || nodejs build.js
C:\Users\Ala\Desktop\aa\dashboard\node_modules\fibers>if not defined npm_config_node_gyp (node "C:\Program Files\nodejs\node_modules\npm\node_modules\npm-lifecycle\node-gyp-bin\\..\..\node_modules\node-gyp\bin\node-gyp.js" rebuild --release ) else (node "C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\bin\node-gyp.js" rebuild --release )
C:\Users\Ala\AppData\Roaming\npm-cache\_npx\2584\node_modules\create-nuxt-app\node_modules\sao\lib\installPackages.js:108
throw new SAOError(`Failed to install ${packageName} in ${cwd}`)
^
SAOError: Failed to install packages in C:\Users\Ala\Desktop\aa\dashboard
at ChildProcess.<anonymous> (C:\Users\Ala\AppData\Roaming\npm-cache\_npx\2584\node_modules\create-nuxt-app\node_modules\sao\lib\installPackages.js:108:15)
at ChildProcess.emit (events.js:315:20)
at ChildProcess.cp.emit (C:\Users\Ala\AppData\Roaming\npm-cache\_npx\2584\node_modules\create-nuxt-app\node_modules\sao\node_modules\cross-spawn\lib\enoent.js:34:29)
at maybeClose (internal/child_process.js:1051:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:287:5) {
__sao: true
}
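The root cause in the log above is node-gyp failing to locate a Visual Studio installation. A commonly suggested fix on Windows — a sketch under assumptions (the exact Build Tools version varies per machine) — is to install the Visual Studio Build Tools with the "Desktop development with C++" workload and then point npm at them:

```shell
# Sketch only — adjust the version to whatever Build Tools you installed.
# 1. Install "Visual Studio Build Tools" with the
#    "Desktop development with C++" workload (via the VS installer).
# 2. Tell npm/node-gyp which toolchain to use:
npm config set msvs_version 2019
# 3. Retry the install:
npm install
```

Alternatively, running the command from a "Developer Command Prompt for VS" sets VCINSTALLDIR, which the log shows node-gyp also checks.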
I had the same issue, and after updating Node, the issue was fixed.
I had the same problem, and solved it.
See this line
Windows + OS X instructions here: https://github.com/nodejs/node-gyp
Ubuntu users please run: sudo apt-get install g++ build-essential
RHEL users please run: yum install gcc-c++ and yum groupinstall 'Development Tools'
Alpine users please run: sudo apk add python make g++
I used a Docker Alpine image, so I ran this.
apk update && apk add \
    python \
    make \
    g++
Then it was solved.
@petrovicz try to clear your npm cache https://dev.to/rishiabee/npm-err-unexpected-end-of-json-input-while-parsing-near-743
@ankoe Unfortunately it didn't solve the issue for me
I had the same issue again in another project.
I needed to downgrade to the Node LTS version.
LTS versions are working for me also.
I tried clearing my npm cache and yarn cache; both are failing with the same error.
It seems that this problem only occurs when choosing Vuetify UI-framework
Same here ... When choosing vuetify
LTS version is the only solution for now
It seems that this problem only occurs when choosing the Vuetify UI framework. UPD: you can use npm init nuxt-app
On my side, I'm experiencing this after choosing Tailwind CSS.
|
gharchive/issue
| 2020-05-08T03:10:57 |
2025-04-01T06:39:48.867043
|
{
"authors": [
"Lustach",
"ahsanahmed321",
"alakameljebali",
"alexon1234",
"ankoe",
"daisuke-fukuda",
"ellaidevs",
"jcjp",
"petrovicz"
],
"repo": "nuxt/create-nuxt-app",
"url": "https://github.com/nuxt/create-nuxt-app/issues/517",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
276730313
|
ERROR in common.83cd80e5c7fd8703ba74.js from UglifyJs
error
nuxt.config.js
package.json
why?
You should include the dependency that uses not-yet-supported ES syntax in 'babel'; please have a look at: https://github.com/nuxt/nuxt.js/issues/1668#issuecomment-330510870
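For illustration, in later Nuxt versions the usual way to do this is the build.transpile option — a hypothetical sketch, where 'some-es6-dependency' is a placeholder for the offending package, not a package named in this issue:

```javascript
// nuxt.config.js — hedged sketch; 'some-es6-dependency' is a placeholder.
export default {
  build: {
    transpile: ['some-es6-dependency']
  }
}
```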
|
gharchive/issue
| 2017-11-25T06:42:15 |
2025-04-01T06:39:48.900149
|
{
"authors": [
"18717700273",
"clarkdo"
],
"repo": "nuxt/nuxt.js",
"url": "https://github.com/nuxt/nuxt.js/issues/2236",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
353226629
|
Major issue with namespaced Vuex modules
Version
v1.4.2
Reproduction link
https://vuex.vuejs.org/guide/modules.html
Steps to reproduce
Setup a module mode index.js in the store folder with code
WORKS
Copy the exact same code in a second-file.js in the store folder
Change all the helpers I'm using to reference the new namespace e.g. mapState(['stateName']) to mapState(['second-file/stateName'])
DOESN'T WORK
What is expected ?
I'm expecting Nuxt to normally access the module
What is actually happening?
Nuxt does not recognize any of the properties from the store module
Additional comments?
I want to highlight that this happens with a clean Nuxt install through Vue CLI and with very simple code in order to exclude mistakes of different nature. The two store files are exactly identical apart from one becoming namespaced due to Nuxt default behavior. The same issue persist if I try to register the modules in the classic Vuex mode, since also in this case Nuxt creates namespaced modules. Is there a way to disable the automatic namespacing and instead merge all the modules properties into the main store?
This bug report is available on Nuxt community (#c7619)
Have you tried nuxt-edge? I am using namespaced modules without any issues with nuxt-edge.
How do I migrate to nuxt-edge?
For me it was as simple as:
Uninstall nuxt
Install nuxt-edge
Same problem with nuxt-edge; are you using the helpers?
I just tried as last resort to use the vuex namespace helper and now it works:
import { createNamespacedHelpers } from 'vuex'
const { mapState, mapActions } = createNamespacedHelpers('moduleName')
and then just the 'stateName' in the code, while without the namespaced helpers and just 'moduleName/stateName' it doesn't. It's a very strange issue
I kind of found a solution, though it still doesn't explain to me why the other method doesn't work.
...mapState(['navigation/pages']) NOT WORKING
...mapState('navigation', ['pages']) WORKING
Maybe it's just me not getting the obvious; anyway, I'm very happy now that I managed to get it working, I was going crazy
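A tiny self-contained mock (an illustration only — not Vuex internals) of why the two spellings behave differently: the namespace-as-first-argument form looks the module up first, while the array form treats 'navigation/pages' as a literal root-level key:

```javascript
// Mock state lookup. With a namespace argument we descend into the
// module's state; without one, the key is taken literally at the root.
function resolveState(state, namespace, key) {
  const moduleState = namespace ? state[namespace] : state;
  return moduleState ? moduleState[key] : undefined;
}

const state = { navigation: { pages: ['home', 'about'] } };

resolveState(state, 'navigation', 'pages');    // → ['home', 'about']
resolveState(state, null, 'navigation/pages'); // → undefined (no such root key)
```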
@ultramarinelights
I’m glad to read that you found a solution.
I do use helpers and they were working when I last checked. Perhaps I was using the second syntax, where you supply the namespace as the first argument. I’ll check when I’m back from vacation.
Closed in favor of https://github.com/nuxt/docs/issues/850
|
gharchive/issue
| 2018-08-23T05:08:26 |
2025-04-01T06:39:48.907999
|
{
"authors": [
"UltramarineLights",
"manniL",
"pbastowski"
],
"repo": "nuxt/nuxt.js",
"url": "https://github.com/nuxt/nuxt.js/issues/3790",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
606230581
|
npm version of @nuxt/babel-preset-app is not up to date
Version
v2.12.2
Reproduction link
https://www.npmjs.com/package/@nuxt/babel-preset-app
Steps to reproduce
check out npm version of @nuxt/babel-preset-app
first the version there is higher than the one in master (2.12.2 vs. 2.12.1)
second the "bugfixes" option is not present in the docs (and also not in the source).
What is expected ?
correct version / source of babel-preset-app
What is actually happening?
old version is packed and published
This bug report is available on Nuxt community (#c10566)
cc @clarkdo
@pi0 It looks like there is an unexpected 2.12.2 package, do you have any idea why it was published?
@simllll The releases seem right, the release branch is 2.x not dev, we'll merge 2.x back to dev after release.
As bugfixes is a new feature, it will be released in 2.13.0.
Ah alright, wasn't aware of that. Thanks, looking forward to 2.13.0 then ;-)
Hi, @simllll, the releases are correct. 2.12.1 and 2.12.2 were released as hotfixes to the 2.x branch. For the Nuxt (stable) code reference, you should refer to the 2.x branch, not development.
New bugfixes option (#7144) added after 2.12.0 and will be available in nuxt@2.13.0 (also currently nuxt-edge)
|
gharchive/issue
| 2020-04-24T11:03:09 |
2025-04-01T06:39:48.914656
|
{
"authors": [
"clarkdo",
"manniL",
"pi0",
"simllll"
],
"repo": "nuxt/nuxt.js",
"url": "https://github.com/nuxt/nuxt.js/issues/7268",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
743743609
|
Content security policy header is lost when status code is 304
Versions
nuxt: v2.14.7
node: v12.16.3
Reproduction
Additional Details
Steps to reproduce
What is Expected?
What is actually happening?
I have a website built with Nuxt.js. When I first visit, it has the correct CSP header which I set in nuxt.config.js. When I visit this website again, it loses the CSP header. And I found that when the status code is 304, the CSP header is lost.
@pi0 @danielroe Would you please have a look at this issue? Do you think it is a bug that the CSP header is lost when the HTTP status code is 304? If not, is there a configuration that can make the CSP header work as expected? If it is a bug, do you have a plan to publish a patch? This can cause a security problem because of the missing CSP header. Thanks
This should be fixed in v2.14.8. Please reopen if not.
This should be fixed in v2.14.8. Please reopen if not.
Thanks. I have confirmed it was solved from v2.14.8
|
gharchive/issue
| 2020-11-16T11:05:15 |
2025-04-01T06:39:48.919038
|
{
"authors": [
"Zuckjet",
"pi0"
],
"repo": "nuxt/nuxt.js",
"url": "https://github.com/nuxt/nuxt.js/issues/8353",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
809943266
|
Option to disable build progress bars in CLI even when TTY is enabled
Is your feature request related to a problem? Please describe.
I use Docker and wanted to enable TTY so I could see colours in Docker Compose logs. However, enabling TTY also renders the webpack progress bars during builds. The bars work but have some rendering issues, most likely due to Docker Compose prefixing each log line with the container name.
For example, the Client and Server bars render twice for each update.
Describe the solution you'd like
A top level option, or documentation, on how to disable the progress bars in the CLI, even in a development environment would be very useful. I'd be able to keep TTY enabled without the rendering issues.
Ideally this would not disable the progress bars displayed in the browser during build.
Basically, I'd like the result I get with TTY disabled, but with colours.
and during an update...
Describe alternatives you've considered
For now, I run with TTY disabled in Docker Compose, however, I lose colours.
I tried configuring the plugin in the nuxt.config.js build extend() function but couldn't seem to get any options to override.
I tried removing the progress bar plugin from the config.plugins list in extend(config) but that broke a bunch of things Nuxt or webpack was relying on.
I could also cut a bug ticket on the https://github.com/nuxt-contrib/webpackbar repo to see if the rendering can be fixed inside a docker TTY with the line prefixing.
Additional context
Naturally, as soon as I posted this, I decided to try one last configuration that targeted the progress bar reporters.
// nuxt.config.js
export default {
// ...
build: {
extend: (config) => {
const bar = config.plugins.find((p) => p.constructor.name === 'WebpackBarPlugin')
bar.reporters = bar.reporters.filter((r) => r.constructor.name !== 'FancyReporter')
}
}
}
This worked!
By removing the FancyReporter from the WebpackBarPlugin's array of reporters it disabled the progress bars in the CLI but kept the progress bars in the browser.
It would appear that changing the WebpackBarPlugin.options.fancy = false doesn't work because the plugin already populated the reporters array based on that config (and environment) during setup.
I'll leave this here and keep the issue open since it may still be a good idea to streamline this process.
Some further investigation found that the previous example wasn't fully doing what I needed. It was removing the anonymous reporter that renders the progress bars in the browser. So instead of simply removing the FancyReporter, I wound up having to "clone" the WebpackBarPlugin, include the basic reporter and exclude the FancyReporter.
So here's the masterpiece
import isDocker from 'is-docker'
import WebpackBarPlugin from 'webpackbar' // assumption: the same plugin class Nuxt registers
export default {
build: {
extend(config) {
// hide the build progress bars in the CLI (but not the browser)
// since they do not render correctly in docker-compose logs
if (isDocker()) {
const i = config.plugins.findIndex(
(p) => p.constructor.name === 'WebpackBarPlugin'
)
const plugin = config.plugins[i]
// replace existing bar plugins with clones that exclude the 'fancy'
// reporter and include a 'basic' reporter
config.plugins.splice(
i,
1,
new WebpackBarPlugin({
name: plugin.options.name,
reporters: [
'basic', // include reporter as string because we cannot import
...plugin.reporters.filter(
(p) => p.constructor.name !== 'FancyReporter'
),
],
})
)
}
}
}
}
For future me, it looks like here is where you'd have to start to implement this as a nuxt option
https://github.com/nuxt/nuxt.js/blob/ba44b0f9ca8955ddb884744f34192c831b2d1d16/packages/webpack/src/config/base.js#L445
Hi @Soviut. Have you tried MINIMAL=1 environment variable?
@pi0 I didn't know that was available but it seems to do the same thing. Is that environment variable controlling std-env to make env.minimalCLI?
Is that environment variable controlling std-env to make env.minimalCLI?
Yes :)
Thanks. I cut a ticket on the std-env repo to update the README with the environment variables. I'll see if I can contribute that documentation soon after.
https://github.com/nuxt-contrib/std-env/issues/9
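As a rough sketch of the kind of check this implies (an assumption — not the actual std-env source), minimal CLI output is forced either by the MINIMAL environment variable or by a non-TTY stdout such as piped docker-compose logs:

```javascript
// Hedged sketch, not real std-env code: any truthy MINIMAL value,
// or the absence of a TTY, selects minimal (non-fancy) reporters.
function isMinimalCLI(env, stdoutIsTTY) {
  return Boolean(env.MINIMAL) || !stdoutIsTTY;
}

isMinimalCLI({ MINIMAL: '1' }, true); // → true (forced by env var)
isMinimalCLI({}, false);              // → true (no TTY)
isMinimalCLI({}, true);               // → false (fancy bars allowed)
```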
|
gharchive/issue
| 2021-02-17T07:46:21 |
2025-04-01T06:39:48.929654
|
{
"authors": [
"Soviut",
"pi0"
],
"repo": "nuxt/nuxt.js",
"url": "https://github.com/nuxt/nuxt.js/issues/8844",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2127790239
|
3.10.x: nuxt build creates vue files in .nuxt/dist/server/_nuxt folder
Environment
- Operating System: Darwin
- Node Version: v21.2.0
- Nuxt Version: 3.10.0
- CLI Version: 3.10.0
- Nitro Version: 2.8.1
- Package Manager: bun@1.0.25
- Builder: -
- User Config: devtools, css, modules, runtimeConfig, image, build, i18n, ignore, nitro, routeRules, security, vite
- Runtime Modules: @nuxtjs/i18n@8.1.0, @nuxt/image@1.3.0, nuxt-icon@0.6.8, nuxt-security@1.1.1
- Build Modules: -
Reproduction
Describe the bug
With 3.10.0 and 3.10.1 I get an error at the end of nuxt build: ERROR RollupError: At least one <template> or <script> is required in a single file component, with 3.9.3 everything is fine.
Unfortunately I can't reproduce this in the sandbox, but I have already tried to debug this for hours. I nailed it down to one of my ~10 components: the error occurs if it contains a <style> tag with ANY content (even just a comment). Weirdly, when I delete the content of the <script> tag, I can have styles:
Throws the error:
<script setup lang="ts">
// hello
</script>
<template>
hello
</template>
<style lang="scss">
// hello
</style>
Doesnt throw an error:
<script setup lang="ts">
// hello
</script>
<template>
hello
</template>
<style lang="scss"></style>
Doesnt throw an error:
<script setup lang="ts"></script>
<template>
hello
</template>
<style lang="scss">
// hello
</style>
It also only happens with this one component named overlay.vue. I feel like I am going crazy or something, but this seems to make absolutely no sense.
Full error:
ERROR RollupError: At least one <template> or <script> is required in a single file component. nitro 9:00:59 PM
undefined
ERROR At least one <template> or <script> is required in a single file component. 9:00:59 PM
at error (node_modules/rollup/dist/es/shared/parseAst.js:337:30)
at Object.error (node_modules/rollup/dist/es/shared/node-entry.js:18507:20)
at Object.error (node_modules/rollup/dist/es/shared/node-entry.js:17616:42)
at node_modules/rollup-plugin-vue/dist/sfc.js:22:49
at Array.forEach (<anonymous>)
at Object.transformSFCEntry (node_modules/rollup-plugin-vue/dist/sfc.js:22:16)
at Object.transform (node_modules/rollup-plugin-vue/dist/index.js:99:38)
at node_modules/rollup/dist/es/shared/node-entry.js:18692:40
ERROR At least one <template> or <script> is required in a single file component. 9:00:59 PM
error Command failed with exit code 1.
Additional context
No response
Logs
No response
This is unlikely to be an issue with Nuxt - rollup-plugin-vue is not used by Nuxt or Nitro and is what is throwing your error.
I am not 100% sure about that. Yes, this plugin throws the error, BUT I disabled the plugin and realized there is exactly one .vue file in the folder .nuxt/dist/server/_nuxt, which is exactly the file I debugged to be the one that creates the error. I guess there shouldn't be any .vue file in this folder, which would mean that there actually is a bug within Nuxt.
Interesting. Have you customised assetFileNames by any chance?
I did not
I probably can't look into this without a reproduction or more info.
I can try to dig more into this and make this reproducible.
@danielroe So it took me a few hours of removing more and more stuff from my app until I ended in an empty app and I realized:
This happens on a blank empty app created with nuxi init and just a single index page and a single component in it! If the component has a script with lang=ts and a style tag (both with ANY content) a vue file is in the .nuxt/dist/server folder!
Since StackBlitz doesn't seem to work, here is the repo: https://github.com/MickL/nuxt-bug
When you say that StackBlitz doesn't work, what do you mean?
I can't reproduce with that repo.
Here's a StackBlitz created from it - seemingly working fine: https://stackblitz.com/edit/github-iqamf7.
Could you share any more info about your setup?
I used bun install, if that changes anything. My specs are at the very top. I think #25690 could be the same issue
When you say that StackBlitz doesn't work, what do you mean?
I meant that the links on the Nuxt website (the two I posted above) both don't start and produce errors.
So I double-checked on another machine. I downloaded my repo, ran npm install and then npm run build. If I check the folder `` I see (as I described) there is a .vue file which is probably supposed to be a .js file:
You're quite right; it is reproducible there - I was checking .output, not .nuxt.
I am not sure if this is causing me problems now - I updated my packages to latest (all of them), and somehow I get an error:
cannot find module './stringify' (the module in question is qs, which is only used by the Strapi module as far as I can tell)
This only happens in dev mode; the build runs fine. When I don't include it in the configuration, everything works.
If I keep using the Nuxt ^3.10.1 version I have no problems. Does any of this ring a bell?
|
gharchive/issue
| 2024-02-09T20:15:17 |
2025-04-01T06:39:48.942922
|
{
"authors": [
"KresimirCosic",
"MickL",
"danielroe"
],
"repo": "nuxt/nuxt",
"url": "https://github.com/nuxt/nuxt/issues/25724",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2142944119
|
Horizontal landing hero items-end does not work
Environment
Operating System: Darwin
Node Version: v20.9.0
Nuxt Version: 3.10.2
CLI Version: 3.10.1
Nitro Version: 2.8.1
Package Manager: npm@10.1.0
Builder: -
User Config: css, extends, modules, stripe, eslint, image, runtimeConfig, imports, ui, devtools, experimental
Runtime Modules: @nuxt/ui@2.13.0, @nuxt/image@1.3.0, @nuxtjs/eslint-module@4.1.0, @vueuse/motion/nuxt@2.0.0, nuxt-clarity-analytics@0.0.6, dayjs-nuxt@2.1.9, @unlok-co/nuxt-stripe@2.0.0, @nuxtjs/i18n@8.1.1
Build Modules: -
Version
"2.13.0",
Reproduction
In description
Description
Hello, I cannot override lg:items-center in ULandingHero when orientation is horizontal
<ULandingHero
:ui="{
container: 'lg:items-end sm:gap-0',
}"
orientation="horizontal"/>
Additional context
No response
Logs
No response
I've already transferred your issue to the @nuxt/ui-pro repository! This will be fixed in the next release.
|
gharchive/issue
| 2024-02-19T18:28:27 |
2025-04-01T06:39:48.955357
|
{
"authors": [
"benjamincanac",
"carlosvaldesweb"
],
"repo": "nuxt/ui",
"url": "https://github.com/nuxt/ui/issues/1382",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1804026910
|
u-accordian - name and provide a type for items
import type { AccordianItems } from '@nuxtlabs/ui'
const items:AccordianItems = [
...
Is this in the edge release? How do I import it?
import type { AccordianItem } from '@nuxthq/ui'
?
|
gharchive/issue
| 2023-07-14T02:05:29 |
2025-04-01T06:39:48.956659
|
{
"authors": [
"acidjazz"
],
"repo": "nuxtlabs/ui",
"url": "https://github.com/nuxtlabs/ui/issues/412",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2461912322
|
Fix usage of the C++ impl of write_df_to_file
Description
Fixes a bug where, when C++ mode is enabled, the file would be written twice.
By Submitting this PR I confirm:
I am familiar with the Contributing Guidelines.
When the PR is ready for review, new or existing tests cover these changes.
When the PR is ready for review, the documentation is up to date with these changes.
/merge
|
gharchive/pull-request
| 2024-08-12T21:18:17 |
2025-04-01T06:39:48.959045
|
{
"authors": [
"dagardner-nv",
"mdemoret-nv"
],
"repo": "nv-morpheus/Morpheus",
"url": "https://github.com/nv-morpheus/Morpheus/pull/1840",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2045373916
|
kadf01-sp2
Link to application: https://eso.vse.cz/~kadf01/sp2/
@kadlecfilip you have duplicated files, I don't know which of them to grade
|
gharchive/pull-request
| 2023-12-17T20:55:53 |
2025-04-01T06:39:49.005165
|
{
"authors": [
"kadlecfilip",
"nvbach91"
],
"repo": "nvbach91/4IZ268-2023-2024-ZS",
"url": "https://github.com/nvbach91/4IZ268-2023-2024-ZS/pull/180",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
573428705
|
Enable the use of custom keybindings
The Fuck 3.28 using Python 3.6.10 and ZSH 5.8
system: 5.5.0-gentoo #1 SMP Wed Jan 29 23:29:22 MST 2020 x86_64 AMD Ryzen 7 2700 Eight-Core Processor AuthenticAMD GNU/Linux
As a vim user, and someone who rarely uses the arrow keys, it would be great to be able to assign the up/down functionality to j/k.
I just ran fuck, and it already has support for this feature
Ugh. I dunno why I missed that. Probably because I'm in a Neovim terminal and there was a delay of a second due to my keybindings
Thank you!
|
gharchive/issue
| 2020-02-29T23:15:46 |
2025-04-01T06:39:49.007408
|
{
"authors": [
"bearcatsandor",
"cjoshmartin"
],
"repo": "nvbn/thefuck",
"url": "https://github.com/nvbn/thefuck/issues/1058",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
134640989
|
#N/A Remove fucked up cmd from history regardless of status
Most fucked up commands are erroneous, but that's not always the case.
Thanks!
|
gharchive/pull-request
| 2016-02-18T17:19:37 |
2025-04-01T06:39:49.008264
|
{
"authors": [
"nvbn",
"scorphus"
],
"repo": "nvbn/thefuck",
"url": "https://github.com/nvbn/thefuck/pull/462",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1361276496
|
🛑 Kubeflow ML Platform is down
In e5849a5, Kubeflow ML Platform (https://kubeflow.lab.novaglobal.com.sg) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Kubeflow ML Platform is back up in 2dbed50.
|
gharchive/issue
| 2022-09-04T23:12:28 |
2025-04-01T06:39:49.013847
|
{
"authors": [
"nvgsg"
],
"repo": "nvgsg/lab-upptime",
"url": "https://github.com/nvgsg/lab-upptime/issues/1190",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1401985505
|
🛑 NVIDIA AI Enterprise Hub is down
In 9f53d5e, NVIDIA AI Enterprise Hub (https://nvaie.lab.novaglobal.com.sg) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NVIDIA AI Enterprise Hub is back up in 0b40186.
|
gharchive/issue
| 2022-10-08T15:49:40 |
2025-04-01T06:39:49.016304
|
{
"authors": [
"nvgsg"
],
"repo": "nvgsg/lab-upptime",
"url": "https://github.com/nvgsg/lab-upptime/issues/2108",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1504012231
|
🛑 NVIDIA AI Enterprise Hub is down
In 05b3aca, NVIDIA AI Enterprise Hub (https://nvaie.lab.novaglobal.com.sg) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NVIDIA AI Enterprise Hub is back up in 8d62e44.
|
gharchive/issue
| 2022-12-20T05:34:55 |
2025-04-01T06:39:49.018926
|
{
"authors": [
"nvgsg"
],
"repo": "nvgsg/lab-upptime",
"url": "https://github.com/nvgsg/lab-upptime/issues/4063",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1573254751
|
🛑 NVIDIA AI Enterprise Hub is down
In 51c7cdf, NVIDIA AI Enterprise Hub (https://nvaie.lab.novaglobal.com.sg) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NVIDIA AI Enterprise Hub is back up in a422781.
|
gharchive/issue
| 2023-02-06T20:47:58 |
2025-04-01T06:39:49.021331
|
{
"authors": [
"nvgsg"
],
"repo": "nvgsg/lab-upptime",
"url": "https://github.com/nvgsg/lab-upptime/issues/6797",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1338511872
|
@media doesn't seem to compile with css / scss modules + more
The Problem
I'm using jest-preview to visualize screens using css modules. Some of the styling is dependent on media queries. None of the @media statements seem to get compiled into the embedded style tags. Other classes from the module make it in fine.
I also cannot import SCSS files into the setup file. If I compile them using gulp or webpack and then import the resulting .css file, it appears to work fine.
An additional, unrelated issue: I can't get auto preview to work at all.
What I've tried
So far I've tried compiling the SCSS myself and removing hashes from the class names. I've faced trouble using the advanced configuration to try and remove the hashes from the jest-preview transform source. This didn't seem to work either.
I've also tried using a custom transform with some additional postCss utilities but haven't had any success. If I render components with no media queries, this tool is absolutely wonderful.
What is the framework/ technology you want to integrate Jest Preview to?
"react": "^16.13.1"
"jest": "^26.6.3"
"postcss": "^8.4.6"
"sass": "^1.53.0"
"sass-loader": "^10.2.1"
Thank you
Thank you for your time and continued effort on this tool! If I can contribute, I'd love to if someone can point me in the right direction.
None of the @media statements seem to get compiled into the embedded style tags
Can you provide minimum reproduction. I added a commit to include @media query and it works fine. Please refer to this commit:
https://github.com/nvh95/jest-preview/commit/0e8a8273146a993b300500e30b2c5fc7e157bbd3#diff-650149a55a5b9fea04fefedba299bd2bf341e1a1c8b4d6e63e483e95669efbfcR4-R9
I also cannot import SCSS files into the setup file.
Please help to prepare a reproduction. In our demo, we can import CSS modules file (https://github.com/nvh95/jest-preview/blob/41019e629c89c48580f29841d6380d89d8cdb41d/demo/setupTests.js#L4). Thanks.
It looks like you are correct! I do think I might have an idea of what's happening. I think the class name property on the component changes based on the media or screen size, and it doesn't seem to be updating properly. Regardless of the window or screen size I stub in Jest, it falls back to either the default or the mobile values. Maybe it's something with the way I'm handling my renders?
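Since the symptom is media-query-dependent class names falling back to default/mobile values, one commonly used approach (an assumption on my part, not something from this thread) is to stub window.matchMedia in the Jest setup file so queries resolve deterministically:

```javascript
// Hedged sketch: a matchMedia stub factory for a Jest setup file.
// `matches` is hard-coded here; a real setup might parse the query
// against a stubbed viewport width instead.
function createMatchMediaStub(matches) {
  return (query) => ({
    matches,
    media: query,
    onchange: null,
    addListener: () => {},        // deprecated API, kept for older libraries
    removeListener: () => {},
    addEventListener: () => {},
    removeEventListener: () => {},
    dispatchEvent: () => false,
  });
}

// In setupTests.js (jsdom environment):
// Object.defineProperty(window, 'matchMedia', {
//   writable: true,
//   value: createMatchMediaStub(true),
// });
```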
|
gharchive/issue
| 2022-08-15T04:09:49 |
2025-04-01T06:39:49.026865
|
{
"authors": [
"dannyvassallo",
"nvh95"
],
"repo": "nvh95/jest-preview",
"url": "https://github.com/nvh95/jest-preview/issues/235",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2527438735
|
fix: remove hard-coded "weekly" in folder creation prompt
Proposed change
Replace hard-coded "Create weekly dir" in new folder creation prompt with more generic "Notes", resulting in the following:
Previous: "Create weekly dir folder /home//projects/second-brain/foobar/ does not exist! Shall I create it?"
New: "Notes folder /home/cameron/projects/second-brain/foobar/ does not exist! Shall I create it?"
Type of change
[x] Bugfix (non-breaking change which fixes an issue)
[ ] New feature (thank you!)
[x] Code quality improvements to existing code or addition of tests
[ ] Documentation update
Additional information
This PR fixes or closes issue: fixes #297
Checklist
[x] I am running the latest version of the plugin.
[x] The code change is tested and works locally.
[x] There is no commented out code in this PR.
[x] The code has been formatted using Stylua (a .stylua.toml file is provided)
[x] The code has been checked with luacheck (a .luacheckrc file is provided)
[ ] The README.md has been updated according to this change.
[ ] The doc/telekasten.txt helpfile has been updated according to this change.
Thank you for your contribution!
|
gharchive/pull-request
| 2024-09-16T03:45:31 |
2025-04-01T06:39:49.039654
|
{
"authors": [
"Tonitum",
"lambtho12"
],
"repo": "nvim-telekasten/telekasten.nvim",
"url": "https://github.com/nvim-telekasten/telekasten.nvim/pull/343",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
807438977
|
pull_request diff view broken
in pull_request, when I try to switch to the "diff" mode, the preview shows only:
Usage: gh pr diff [<number> | <url> | <branch>] [flags]
Flags:
--color string Use color in diff output: {always|never|auto} (default "auto")
[Process exited 1]
fix: https://github.com/nvim-telescope/telescope-github.nvim/pull/16
|
gharchive/issue
| 2021-02-12T18:00:06 |
2025-04-01T06:39:49.041319
|
{
"authors": [
"traysh"
],
"repo": "nvim-telescope/telescope-github.nvim",
"url": "https://github.com/nvim-telescope/telescope-github.nvim/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2554544034
|
overbrights
I have a scene which renders over-bright.
Is it possible to apply auto-exposure, or to use the total power of all lights to automatically balance the lighting in the scene?
In sky mode all looks ok
But when I upload an HDRI, and also when I set the intensity to 0, I get over-brightness
Looks like an issue
one more case of this issue: sun disk scale = 0 or sun disk intensity = 0 causes over-brightness
one more scene with emissive textures
with HDRI on
HDRI off
Have you tried the tonemappers?
The tone mappers don't help.
Emissive area lights are not handled well in MIS.
In the code, HDRIs and punctual lights are processed, but area lights are not.
The scene has indeed very strong lights, which I believe doesn't follow the rule of the intensity parameter, which is lumens per steradian (lm/sr).
There are a few things which can be done:
Increase "Max Luminance", which is controlling the Firefly Filter and is cutting out incoming energy.
Reduce the Tonemapper exposure to compensate the over bright lights
Set the sun elevation to -90 deg, to remove any contribution from the Physical Sun & Sky model. Changing the Disk Scale is not physically correct, but a user control was added for "artistical effect", but only the 1.0 value is correct.
Same is true for the SciFi scene
The full-size versions of the images don't open (the ones served from private-user-images.githubusercontent.com).
Thank you for the explanation. Can these parameters be processed automatically?
For example, an "auto exposure" that measures the max luminance and fits the parameters?
An auto-exposure can be done, but to properly do this, we need the inverse of the tonemapper in use. We do not plan this feature in the short term.
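For reference, the kind of auto-exposure discussed here can be sketched generically. This is a plain log-average luminance estimator with a mid-grey key, not the nvpro_core implementation, and a real renderer would also need the inverse of the active tonemapper as noted above.

```python
import math

def auto_exposure(luminances, key=0.18):
    """Return a scale factor mapping the scene's log-average luminance to a mid-grey key."""
    # The geometric mean is robust against a few very bright pixels (fireflies).
    eps = 1e-6
    log_avg = math.exp(sum(math.log(eps + l) for l in luminances) / len(luminances))
    return key / log_avg

# An over-bright scene: most pixel luminances far above mid-grey.
pixels = [5.0, 8.0, 12.0, 200.0]
scale = auto_exposure(pixels)          # well below 1.0, i.e. exposure is reduced
adjusted = [l * scale for l in pixels]
```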
|
gharchive/issue
| 2024-09-28T22:30:59 |
2025-04-01T06:39:49.118816
|
{
"authors": [
"mklefrancois",
"tigrazone"
],
"repo": "nvpro-samples/nvpro_core",
"url": "https://github.com/nvpro-samples/nvpro_core/issues/71",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1154470623
|
Dependency Graph not being correctly generated
Just started using nx-dotnet and having issues with dep-graph
To Reproduce
Just created a new workspace with two dotnet libs (lib1 ref lib2)
generate the workspace: npx create-nx-workspace@latest
choose template: apps
install nxdotnet: npm i --save-dev @nx-dotnet/core
create lib1 and lib2
add reference from lib1 to lib2
build everything: npx nx run-many --target=build --all
generate dep-graph: nx dep-graph
result:
also, the lib1.csproj entry inside .cache\nx\nxdeps.json doesn't contain any "deps" node
Expected behavior
nxdeps.json should contain the deps for each csproj
Can you run nx report and also check that '@nx-dotnet/core' is listed as a plugin in nx.json?
Yes i do have "plugins": ["@nx-dotnet/core"] under nx.json
results for nx report
Node : 16.3.0
OS : win32 x64
npm : 7.18.1
nx : 13.8.3
@nrwl/angular : undefined
@nrwl/cli : 13.8.3
@nrwl/cypress : undefined
@nrwl/detox : undefined
@nrwl/devkit : 13.8.3
@nrwl/eslint-plugin-nx : undefined
@nrwl/express : undefined
@nrwl/jest : 13.8.3
@nrwl/js : undefined
@nrwl/linter : 13.8.3
@nrwl/nest : undefined
@nrwl/next : undefined
@nrwl/node : undefined
@nrwl/nx-cloud : 13.1.5
@nrwl/react : undefined
@nrwl/react-native : undefined
@nrwl/schematics : undefined
@nrwl/storybook : undefined
@nrwl/tao : 13.8.3
@nrwl/web : undefined
@nrwl/workspace : 13.8.3
typescript : 4.5.5
rxjs : 6.5.5
Community plugins:
@nx-dotnet/core: 1.9.2
|
gharchive/issue
| 2022-02-28T19:35:20 |
2025-04-01T06:39:49.154420
|
{
"authors": [
"AgentEnder",
"YasserKharab"
],
"repo": "nx-dotnet/nx-dotnet",
"url": "https://github.com/nx-dotnet/nx-dotnet/issues/381",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1728550908
|
🛑 Harmony Bot Website is down
In 095fc0e, Harmony Bot Website ($HARMONY_WEB) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Harmony Bot Website is back up in 06e5165.
|
gharchive/issue
| 2023-05-27T08:42:00 |
2025-04-01T06:39:49.159045
|
{
"authors": [
"nxvvvv"
],
"repo": "nxvvvv/uptime",
"url": "https://github.com/nxvvvv/uptime/issues/17866",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
213953214
|
shards install not grabbing md5
in lib/pg/src/pq/connection.cr:2: while requiring "crypto/md5": can't find file 'crypto/md5' relative to '/root/cpomf/lib/pg/src/pq'
require "crypto/md5"
the crypto/md5 is not under the lib folder compared to the rest of the things shards installed
I'm running a newer version of Crystal than 0.20.4
https://github.com/crystal-lang/crystal/tree/master/src/crypto
md5 is missing from the newest one, so that might be why
we're currently running Crystal 0.20.4 without any issues; but we'll likely try to update to a newer Crystal version in the near future.
in the meantime i'd recommend just running Crystal 0.20.4, or submitting a pull request.
Switched to Crystal 0.20.4 and had to get shards and it compiles now
|
gharchive/issue
| 2017-03-14T02:41:58 |
2025-04-01T06:39:49.162382
|
{
"authors": [
"formatme",
"neko"
],
"repo": "nya/cpomf",
"url": "https://github.com/nya/cpomf/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1030264007
|
Let users define the name of the expiredAt column
We use snake_case in our project, would be cool to just be able to pass this as an option
Not to revive an old comment, but if you/someone stumbles upon this, you can:
@Index()
@Column({ name: 'expired_at', type: 'bigint' })
expiredAt: number;
which is a built-in functionality in TypeORM :)
|
gharchive/issue
| 2021-10-19T12:46:20 |
2025-04-01T06:39:49.185842
|
{
"authors": [
"Agreon",
"ali-elamri"
],
"repo": "nykula/connect-typeorm",
"url": "https://github.com/nykula/connect-typeorm/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1351357871
|
Create vue demo for read-email use case
Description
This PR adds Vue support for read emails use cases. Functionally it works but the styling is a bit off.
License
I confirm that this contribution is made under the terms of the MIT license and that I have the authority necessary to make this contribution on behalf of its copyright owner.
@mrashed-dev is there some prettier plugin we can enable for vue?
|
gharchive/pull-request
| 2022-08-25T19:27:33 |
2025-04-01T06:39:49.191339
|
{
"authors": [
"AaronDDM",
"mrashed-dev"
],
"repo": "nylas/use-cases",
"url": "https://github.com/nylas/use-cases/pull/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1081772931
|
[Support] No assets output at production
My craft won't output asset links with {{ craft.vite.script("/src/app.js") }} at production.
Only the code below is output.
<script type="module">!function(){const e=document.createElement("link").relList;if(!(e&&e.supports&&e.supports("modulepreload"))){for(const e of document.querySelectorAll('link[rel="modulepreload"]'))r(e);new MutationObserver((e=>{for(const o of e)if("childList"===o.type)for(const e of o.addedNodes)if("LINK"===e.tagName&&"modulepreload"===e.rel)r(e);else if(e.querySelectorAll)for(const o of e.querySelectorAll("link[rel=modulepreload]"))r(o)})).observe(document,{childList:!0,subtree:!0})}function r(e){if(e.ep)return;e.ep=!0;const r=function(e){const r={};return e.integrity&&(r.integrity=e.integrity),e.referrerpolicy&&(r.referrerPolicy=e.referrerpolicy),"use-credentials"===e.crossorigin?r.credentials="include":"anonymous"===e.crossorigin?r.credentials="omit":r.credentials="same-origin",r}(e);fetch(e.href,r)}}();</script>
I couldn't figure out why but everything is fine at development mode.
craft 3.7.23
craft-vite 1.0.19
No error at debug bar at frontend.
Please help.
It seems to be caused by some Composer modules.
Running composer update reproduces the issue.
I found that craft-plugin-vite 1.0.17, required by craft-vite 1.0.18, causes the issue.
I created new issue for craft-plugin-vite and will close this.
https://github.com/nystudio107/craft-plugin-vite/issues/6
|
gharchive/issue
| 2021-12-16T05:00:38 |
2025-04-01T06:39:49.201934
|
{
"authors": [
"watarutmnh"
],
"repo": "nystudio107/craft-vite",
"url": "https://github.com/nystudio107/craft-vite/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1046822466
|
Temperature/Humidity stopped working after update to 0.3.6
Hi,
since release 0.3.6 when support for BME280 was added, my sensor AM2320 returns "not installed" most of the time.
I tried to figure out what is wrong and discovered that it works after I reset the ESP (for example by clicking "save config" in the web interface). It does not work after a power cycle.
Maybe a delay is needed after power up before initialization / detection? I guess the DHT22 does not need that delay which is why no one else seems to be having that issue :)
Hi,
I'll take a look, however I don't have an AM2320 here.
Hi,
I am testing right now. The sensor supports I2C and single-bus mode. For single-bus mode, which is the same as with the DHT22, SCL (connected to GND) must be low for 500 ms. I guess reading the sensor earlier puts it into I2C mode. The datasheet also says that only a power cycle can change the mode. Page 5:
https://cdn-shop.adafruit.com/product-files/3721/AM2320.pdf
I tested a delay right before dht.setup. 400ms was too short, 500 worked most of the time, so I am using 600 ms now.
Hi, I think we can work with a delay here, the 600 ms should not be a problem.
Cool, thanks! 600ms works here without any problems so far.
fixed in v0.3.14
Hi, after flashing the latest version, my sensor stoppend working. I think this happens because it checks for other sensors on the same pin before the 600 ms delay.
I have not verified this because I am missing getSmoothedLux. How do I update the library? I expected VSCode / PlatformIO to do this automatically.
Thanks!
Got it, I had to do a "clean all" to get the latest version of the LDR lib. I moved the delay up to where it says "// Init Temp Sensors" and it's working again.
Seems like a longer delay of 800ms is needed to make it work reliably.
The problem of the sensor not being detected happened only in the morning, after powering up my PixelIt. Power cycles during the day, leaving it off for some minutes, resulted in a working sensor. Perhaps it depends on the room temperature.
|
gharchive/issue
| 2021-11-07T18:53:00 |
2025-04-01T06:39:49.273236
|
{
"authors": [
"hamster65",
"o0shojo0o"
],
"repo": "o0shojo0o/PixelIt",
"url": "https://github.com/o0shojo0o/PixelIt/issues/85",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
196753845
|
Check for updates program
Grabs a file and checks its internal build number against the one in the fetched file.
It'll have a method to disable itself on indev builds, and the file will be hosted on GitHub (it's a text file, after all).
Targeted to beta 3, but it may get pushed to beta 4.
Now implemented into 2.0.
I'll implement the update program into PyTerm 1.15.1 LTS, and stable 1.14.1 soon.
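A sketch of the comparison logic; the one-line build-number file format and the indev flag are assumptions for illustration, not PyTerm's actual layout:

```python
def parse_build(text):
    """Extract the integer build number from a one-line version file."""
    return int(text.strip())

def update_available(local_text, remote_text, indev=False):
    """Compare build numbers; indev builds never prompt for updates."""
    if indev:
        return False
    return parse_build(remote_text) > parse_build(local_text)

# Local build 114, remote file (fetched from GitHub as plain text) says 115.
print(update_available("114\n", "115\n"))              # True: newer build published
print(update_available("114\n", "115\n", indev=True))  # False: disabled on indev builds
```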
|
gharchive/issue
| 2016-12-20T19:10:03 |
2025-04-01T06:39:49.275312
|
{
"authors": [
"o355"
],
"repo": "o355/pyterm",
"url": "https://github.com/o355/pyterm/issues/33",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1061610940
|
SIG Elections 12/1 - 12/15 Nominations
SIG Docs-Community chair / co-chair elections for 2022
For the first year of O3DE, the D&C chair has been staffed as an interim position. It's time to hold some official elections, following some of the proposed guidance but with our own process due to the holiday season and in order to expedite the elections into next year.
The chair / co-chair roles
The chair and co-chair serve equivalent roles in the governance of the SIG and are only differentiated by title in that the highest vote-getter is the chair and the second-highest is the co-chair. The chair and co-chair are expected to govern together in an effective way and split their responsibilities to make sure that the SIG operates smoothly and has the availability of a chairperson at any time.
Unless distinctly required, the term "chairperson" refers to either/both of the chair and co-chair. If a chair or co-chair is required to perform a specific responsibility for the SIG they will always be addressed by their official role title.
In particular, if both chairpersons would be unavailable during a period of time, the chair is considered to be an on-call position during this period. As the higher vote-getter they theoretically represent more of the community and should perform in that capacity under extenuating circumstances. This means that if there is an emergency requiring immediate action from the Documentation & Community SIG, the chair will be called to perform a responsibility.
Responsibilities
Schedule and proctor regular SIG meetings on a cadence to be determined by the SIG.
Serve as a source of authority (and ideally wisdom) with regards to O3DE documentation. Chairpersons are the ultimate arbiters of many documentation standards, processes, and practices.
Participate in the SIG Docs-Community Discord channel and on the GitHub Discussion forums.
Serve as a representative of the broader O3DE community to all other SIGs, partners, the governing board, and the Linux Foundation.
Represent the SIG to O3DE partners, the governing board, and the Linux Foundation.
Coordinate with partners and the Linux Foundation regarding official community events.
Represent (or select/elect representatives) to maintain relationships with all other SIGs as well as the marketing committee.
Serve as an arbiter in SIG-related disputes.
Coordinate releases with SIG Release.
Assist contributors in finding resources and setting up official project or task infrastructure monitored/conducted by the SIG.
Long-term planning and strategy for the course of documentation for O3DE.
Maintain a release roadmap for the O3DE documentation.
Additionally, at this stage of the project, the SIG chairpersons are expected to act in the Maintainer role for review and merge purposes only, due to the lack of infrastructure and available reviewer/maintainer pool.
... And potentially more. Again, this is an early stage of the project and chair responsibilities have been determined more or less ad-hoc as new requirements and situations arise. In particular the community half of this SIG has been very lacking due to no infrastructural support, and a chairperson will ideally bring some of these skills.
Nomination
Nomination may either be by a community member or self-nomination. A nominee may withdraw from the election at any time for any reason until the election starts on 12/1.
Nomination requirements
For this election, nominees are required to have at minimum two merged submissions to o3de.org. This is to justify any temporary promotion to Maintainer as required by this term as chairperson. Submissions may be in-flight as of the nomination deadline (2021-12-01 12PM PT), but the nominee must meet the 2-merge requirement by the end of the election or they will be removed from the results.
Any elected chairperson who does not currently meet the Maintainer status will be required to work with contributors from the SIG to produce an appropriate number of accepted submissions by January 31, 2022 or they will be removed and another election will be held.
The only other nomination requirement is that the nominee agrees to be able to perform their required duties and has the availability to do so, taking into account the fact that another chairperson will always be available as a point of contact.
How to nominate
Nominate somebody (including yourself) by responding to this issue with:
A statement that the nominee should be nominated for a chair position in the Documentation & Community SIG. Nominees are required to provide a statement that they understand the responsibilities and requirements of the role, and promise to faithfully fulfill them and follow all contributor requirements for O3DE.
The name under which the nominee should be addressed. Nominees are allowed to contact the election proctor to have this name changed.
The GitHub username of the nominee (self-nominations need not include this; it's on your post.)
Nominee's Discord username (sorry, but you must be an active Discord user if you are a chairperson.)
Election process
The election will be conducted between 2021-12-01 12:00PM PT and 2021-12-15 12:00PM PT and held through an online poll. Votes will be anonymous and anyone invested in the direction of O3DE and its documentation may vote. If you choose to vote, we ask that you be familiar with the nominees.
The current interim chair (@sptramer) will announce the results in the sig-docs-community Discord and on the sig-docs-community O3DE mailing list no later than 2021-12-15 1:00PM PT. At that time if there is a dispute over the result or concern over vote tampering, voting information will be made public to the extent that it can be exported from the polling system and the SIG will conduct an independent audit under the guidance of a higher governing body in the foundation.
The elected chairpersons will begin serving their term on 2022-01-01 at 12AM PT. Tentatively SIG D&C chairs will be elected on a yearly basis. If you have concerns about wanting to replace chairs earlier, please discuss in the request for feedback on Governance.
I would like to nominate myself (Stephen Tramer) for a chairship. I understand the responsibilities and requirements of the role, having performed them for the last year, and will continue to fulfill them and all contributor requirements for O3DE.
Nominee name: Stephen Tramer
Discord: stramer#7057
I, Jonathan Capes, nominate myself and accept nomination for a chair position in the Docs-Community SIG, and will serve if elected. I affirm that I understand the requirements and responsibilities of the chairpersonship. I further affirm that I will fulfill the requirements and responsibilities of both the chairpersonship and of a O3DE contributor.
I would like to nominate myself (Stephen Tramer) for a chairship. I understand the responsibilities and requirements of the role, having performed them for the last year, and will continue to fulfill them and all contributor requirements for O3DE.
Nominee name: Stephen Tramer
Discord: stramer#7057
I second Stephen's nomination. He has really good knowledge of the engine and of the needs of software developers, and he listens to others, which is one of his strongest sides; something I haven't experienced with other candidates. I therefore support his decision to step up here.
Nominations have concluded. In light of the fact that there are only two nominees and one received a "second", as well as unanimous agreement among the chairpersons, Stephen Tramer (@sptramer) will serve as Chair and Jonathan Capes (@FiniteStateGit) will serve as Co-chair.
Congratulations!
|
gharchive/issue
| 2021-11-23T18:54:32 |
2025-04-01T06:39:49.321774
|
{
"authors": [
"FiniteStateGit",
"NiklasHenricson",
"sptramer"
],
"repo": "o3de/sig-docs-community",
"url": "https://github.com/o3de/sig-docs-community/issues/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
530220994
|
Add disk download example
This PR contains a reimplementation of the Python example showing how to download a disk.
@imjoey please take a look
@imjoey updated, please take a look.
|
gharchive/pull-request
| 2019-11-29T08:40:03 |
2025-04-01T06:39:49.327849
|
{
"authors": [
"pkliczewski"
],
"repo": "oVirt/ovirt-engine-sdk-go",
"url": "https://github.com/oVirt/ovirt-engine-sdk-go/pull/188",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
107929064
|
Problem with timeago in French
Assigning to @mrvisser for fix
thanks
This was probably seen as a result of a trailing space being removed after \. A Crowdin sync appears to have added it back in, so this is no longer reproducible.
|
gharchive/issue
| 2015-09-23T14:12:50 |
2025-04-01T06:39:49.336945
|
{
"authors": [
"dooremont",
"mrvisser",
"nicolaasmatthijs"
],
"repo": "oaeproject/3akai-ux",
"url": "https://github.com/oaeproject/3akai-ux/issues/4067",
"license": "ECL-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
861427306
|
translate i18n/zh/docusaurus-plugin-content-docs/current/helm/trait.md
Apart from the translation rules, is it acceptable to use automatic translation software for the edits? Thanks.
What do you mean by using automatic translation software?
The Edge browser provides automatic page translation, but it no longer seems to work on the KubeVela docs. If the pages were machine-translated, the human work would just be proofreading, right?
The essential goal is to provide a set of Chinese docs. If machine translation can produce text that everyone can read and easily understand, rather than something disjointed, confusing, or logically broken, then using it as an aid is fine. But the common problem with machine translation is exactly that you can't tell what it's saying: unclear references, broken logic, and strange word choices are all current issues with it.
Machine translation is indeed still unsatisfactory, and people need to correct some words and terms. But it does take care of a lot of the basic, routine documentation translation work; people in the field can generally understand the result, though it isn't good enough for publications or formal annotations.
|
gharchive/issue
| 2021-04-19T15:19:30 |
2025-04-01T06:39:49.346274
|
{
"authors": [
"cheimu",
"wonderflow",
"zhujun74"
],
"repo": "oam-dev/kubevela.io",
"url": "https://github.com/oam-dev/kubevela.io/issues/49",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1091394418
|
[Feature] add the api that trigger deletion
Currently, support creation and query list API for the trigger. we need to support deletion.
https://github.com/oam-dev/kubevela/blob/master/pkg/apiserver/rest/webservice/application.go#L126
/assign chwetion
@chwetion thanks!
|
gharchive/issue
| 2021-12-31T03:15:53 |
2025-04-01T06:39:49.347797
|
{
"authors": [
"barnettZQG",
"chwetion"
],
"repo": "oam-dev/kubevela",
"url": "https://github.com/oam-dev/kubevela/issues/3030",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1093932893
|
invalid addon repo url makes apiserver crash
Describe the bug
invalid addon repo url makes apiserver crash
To Reproduce
launch VelaUX
go to Addons
click button "addon registries"
click button "new"
set "Type" to be GIthub
input an invalid URL (meaning one that is not a git repo following the "Addon Format") into the "URL" textbox
click button "submit"
Expected behavior
VelaUX should work as before, but the apiserver pod crashes with a null-pointer exception.
➜ ~ kubectl -n vela-system logs -f apiserver-7dc97f7558-kqs92
{"level":"info","ts":1641302139.5081124,"caller":"apiserver/main.go:108","msg":"KubeVela information: version: undefined, gitRevision: undefined"}
I0104 13:15:40.315824 1 utils.go:143] find cluster gateway service vela-system/kubevela-cluster-gateway-service:9443
{"level":"info","ts":1641302140.3392406,"caller":"rest/rest_server.go:251","msg":"HTTP APIs are being served on: 0.0.0.0:8000, ctx: context.Background.WithCancel"}
I0104 13:15:40.339524 1 leaderelection.go:248] attempting to acquire leader lease vela-system/apiserver-lock...
I0104 13:15:40.342548 1 rest_server.go:150] new leader elected: f28b817d-f8e7-45a3-b713-dbff80c9f15e
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x22b1528]
goroutine 231 [running]:
github.com/oam-dev/kubevela/pkg/addon.(*Registry).ListAddonMeta(0xc00071df20, 0xc000e13330, 0x8, 0xc000e90dc0)
/workspace/pkg/addon/source.go:212 +0x68
github.com/oam-dev/kubevela/pkg/addon.(*Cache).discoverAndRefreshRegistry(0xc0003c4330)
/workspace/pkg/addon/cache.go:213 +0x127
github.com/oam-dev/kubevela/pkg/addon.(*Cache).DiscoverAndRefreshLoop(0xc0003c4330, 0x8bb2c97000)
/workspace/pkg/addon/cache.go:69 +0x6a
created by github.com/oam-dev/kubevela/pkg/apiserver/rest/usecase.NewAddonUsecase
/workspace/pkg/apiserver/rest/usecase/addon.go:104 +0x1b7
Screenshots
KubeVela Version
v1.2.0-RC2
Cluster information
Additional context
@StevenLeiZhang Thanks for your report. This bug has been fixed by #3026 (verified by test). Please check again with the code on the master branch or the next 1.2.0 release. Let me close this issue for now; feel free to reopen it if the bug still exists.
I have a question about this:
fmt.Errorf("git type repository only support github for now")
Why does Vela only support GitHub? I want to use a local private GitLab; this private GitLab also supports the GitHub OpenAPI.
@StevenLeiZhang yes, we're planning to support GitLab, but we have only tested GitHub so far. You're very welcome to contribute if GitLab support is tested well. Thanks!
|
gharchive/issue
| 2022-01-05T03:04:11 |
2025-04-01T06:39:49.354200
|
{
"authors": [
"StevenLeiZhang",
"wangyikewxgm",
"wonderflow"
],
"repo": "oam-dev/kubevela",
"url": "https://github.com/oam-dev/kubevela/issues/3042",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
427004624
|
WIP: Show information in debug mode about external commands
As in debug mode we have a lot of messages the user needs to be informed which external command has generated certain output.
This PR wants to solve this by adding this output:
leapp.stdlib.Run: External command is started: [/usr/bin/dnf rhel-upgrade]
*** external command output ***
leapp.stdlib.Run: External command is finished: [/usr/bin/dnf rhel-upgrade]
Can one of the admins verify this patch?
I'm a little worried about readability. What about rewording the solution like:
*** external command output ***
====== ext cmd finished:
The idea is just that it's easy to find a substring like =====. What do you think, guys? @oamg/developers
|
gharchive/pull-request
| 2019-03-29T14:08:47 |
2025-04-01T06:39:49.362200
|
{
"authors": [
"asilveir",
"centos-ci",
"pirat89"
],
"repo": "oamg/leapp",
"url": "https://github.com/oamg/leapp/pull/471",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2055433608
|
[GLUTEN-4178][CH] Reduce memory usage in aggregate operators
What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
Fixes: #4178
In the previous version, we converted a hash table into a block list all at once, bringing a doubled peak memory usage. That easily causes OOM problems.
In this version, we convert the buckets of a two-level hash table one by one, and each bucket is released immediately after it has been converted. This makes peak memory usage smoother and smaller.
How was this patch tested?
(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
unit tests
(If this patch involves UI changes, please attach a screenshot; otherwise, remove this)
Add some comments on key classes and functions.
|
gharchive/pull-request
| 2023-12-25T06:38:32 |
2025-04-01T06:39:49.382864
|
{
"authors": [
"lgbo-ustc",
"zhanglistar"
],
"repo": "oap-project/gluten",
"url": "https://github.com/oap-project/gluten/pull/4179",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2619209892
|
add literal colon support for gin and echo
Closes https://github.com/oapi-codegen/oapi-codegen/issues/1726
I see the test failures and will get to them in a bit.
|
gharchive/pull-request
| 2024-10-28T18:33:47 |
2025-04-01T06:39:49.384096
|
{
"authors": [
"cosban"
],
"repo": "oapi-codegen/oapi-codegen",
"url": "https://github.com/oapi-codegen/oapi-codegen/pull/1815",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
347467671
|
Need to "export" BooleanConstant
in the __init__.py file
Is this still a problem in the latest release? It looks like we already do this:
https://github.com/oasis-open/cti-python-stix2/blob/master/stix2/__init__.py#L36
|
gharchive/issue
| 2018-08-03T16:54:01 |
2025-04-01T06:39:49.391366
|
{
"authors": [
"clenk",
"rpiazza"
],
"repo": "oasis-open/cti-python-stix2",
"url": "https://github.com/oasis-open/cti-python-stix2/issues/205",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2434675978
|
Is there a concise way to directly export generated STIX data as JSON file?
Well, this package is straightforward at generating random STIX data, and I'm satisfied with this level of convenience. However, when I try to dump it as JSON and save it like the code below,
import stix2generator
import json
stix2_generator = stix2generator.create_stix_generator()
stix = stix2_generator.generate()
stix_dump = json.dumps(stix, indent=4)
I get the errors below because some STIX SDO objects are not default Python data types. (For example, the runtime exception log below says that the generated STIX data can't be converted to JSON because Note, which is one of the STIX SDOs, is not compatible.)
Traceback (most recent call last):
File "C:\Users\user\Documents\GitHub\nis-ems-client\test\fake_stix2_generator.py", line 6, in <module>
stix_dump = json.dumps(stix, indent=4)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\.pyenv\pyenv-win\versions\3.11.9\Lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
^^^^^^^^^^^
File "C:\Users\user\.pyenv\pyenv-win\versions\3.11.9\Lib\json\encoder.py", line 202, in encode
chunks = list(chunks)
^^^^^^^^^^^^
File "C:\Users\user\.pyenv\pyenv-win\versions\3.11.9\Lib\json\encoder.py", line 432, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "C:\Users\user\.pyenv\pyenv-win\versions\3.11.9\Lib\json\encoder.py", line 406, in _iterencode_dict
yield from chunks
File "C:\Users\user\.pyenv\pyenv-win\versions\3.11.9\Lib\json\encoder.py", line 439, in _iterencode
o = _default(o)
^^^^^^^^^^^
File "C:\Users\user\.pyenv\pyenv-win\versions\3.11.9\Lib\json\encoder.py", line 180, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Note is not JSON serializable
So, my question is: is there a way to cope with this issue? I'm quite new to STIX; I just read the STIX documentation from the official page today, and I'm about to use STIX data in my project. Thanks in advance.
It's possible to simply (well, maybe) dump the STIX2 generation result as JSON using Python's json package together with stix2's JSON encoder class (stix2.serialization.STIXJSONEncoder). :)
Here I'll attach a working makeshift example
import stix2generator
import stix2validator
import stix2
import json
import uuid
from rich import print
# Generate the fake STIX2 data
stix2_generator = stix2generator.create_stix_generator()
stix = stix2_generator.generate()
# Convert the STIX data (Dictionary with STIX-proprietary objects) to pure JSON objects
def stix_to_json(stix_data):
    stix_objects = []
    for value in stix_data.values():
        stix_objects.append(json.loads(value.serialize()))
    return stix_objects
stix_objects = stix_to_json(stix)
# Create a STIX bundle with the generated objects
stix_bundle = {
    "type": "bundle",
    "id": f"bundle--{uuid.uuid4()}",
    "objects": stix_objects
}
# Convert the STIX bundle to JSON
stix_bundle_json = json.dumps(stix_bundle, cls=stix2.serialization.STIXJSONEncoder, indent=4)
print(stix_bundle_json)
# Validate the generated STIX bundle
result = stix2validator.validate_string(str(stix_bundle_json))
stix2validator.print_results(result)
Not sure what you mean by repositioning the id property. Abstractly speaking, mappings/objects don't have an intrinsic order to their entries (some implementations can allow you to control some kinds of ordering, e.g. a traversal order, but that is an implementation detail). I don't think JSON-Schema allows you to express a required ordering on object entries (correct me if I am wrong). So, it seems to me an object entry ordering difference cannot cause a JSON-Schema validation failure.
A simpler way to generate and validate is (but does not directly use the json package):
import io
import sys
# ... and all the other imports
stix = stix2_generator.generate()
bundle = stix2.Bundle(list(stix.values()))
# A simple way to dump to stdout:
# (don't use pretty=True for large numbers of objects!)
bundle.fp_serialize(sys.stdout, pretty=True)
# A simple way to dump JSON to a memory text buffer
buf = io.StringIO()
bundle.fp_serialize(buf)
# Reposition for reading, and validate
buf.seek(0, io.SEEK_SET)
result = stix2validator.validate(buf)
stix2validator.print_results(result)
Or via shell:
# -b causes output to be wrapped in a bundle
generate_stix -b > ./bundle.json
stix2_validator ./bundle.json
|
gharchive/issue
| 2024-07-29T07:31:59 |
2025-04-01T06:39:49.396927
|
{
"authors": [
"KnightChaser",
"chisholm"
],
"repo": "oasis-open/cti-stix-generator",
"url": "https://github.com/oasis-open/cti-stix-generator/issues/55",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2421714526
|
subject_token_type for Replacement Txn-Token Request
In 7.5.2:
To request a replacement Txn-Token, the requester makes a Txn-Token Request as described in Section 7.1 but includes the Txn-Token to be replaced as the value of the subject_token parameter.
Does this assume that subject_token_type should be urn:ietf:params:oauth:token-type:txn_token? Should we call it out explicitly?
Also, do other parameters (audience, scope, request_context) make sense in the context of TraT replacement flow? (I believe they don't, as they are meant to remain constant for the whole invocation chain.)
Should we call out that they must be ignored in the replacement flow?
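For illustration, a replacement request under that reading might carry the parameters sketched below. This is a hedged example only: the grant type is the standard RFC 8693 token-exchange URN, while whether `subject_token_type` must be the txn_token URN — and whether the other parameters must be ignored — is exactly what this issue asks the draft to spell out.

```python
# Hypothetical form parameters for a replacement Txn-Token request
# (OAuth 2.0 Token Exchange, RFC 8693); the token value is a placeholder.
replacement_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "<the txn-token being replaced>",
    # Presumably required, but not called out explicitly by the draft:
    "subject_token_type": "urn:ietf:params:oauth:token-type:txn_token",
    # audience / scope / request_context deliberately omitted: the issue
    # argues they stay constant for the whole invocation chain.
}
```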
|
gharchive/issue
| 2024-07-22T01:22:11 |
2025-04-01T06:39:49.413273
|
{
"authors": [
"dteleguin"
],
"repo": "oauth-wg/oauth-transaction-tokens",
"url": "https://github.com/oauth-wg/oauth-transaction-tokens/issues/113",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1264186168
|
Drop multi_json dependency
As discussed in https://github.com/oauth-xx/oauth2/issues/579#issuecomment-1084174454, the Ruby JSON stdlib module should be good enough.
Closes #579
Codecov Report
Merging #590 (08f7c75) into master (7e5cd6c) will increase coverage by 0.04%.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## master #590 +/- ##
==========================================
+ Coverage 89.66% 89.70% +0.04%
==========================================
Files 15 15
Lines 445 447 +2
==========================================
+ Hits 399 401 +2
Misses 46 46
| Impacted Files | Coverage Δ |
|---|---|
| lib/oauth2/response.rb | 100.00% <100.00%> (ø) |
:mega: Codecov can now indicate which changes are the most critical in Pull Requests. Learn more
Code coverage failure: SimpleCov failed with exit 2 due to a coverage related error
Rubocop failure: Will be fixed in 16345ec1e12c348fc752b851173c22a8c7b68a8f via https://github.com/oauth-xx/oauth2/pull/589
|
gharchive/pull-request
| 2022-06-08T04:42:37 |
2025-04-01T06:39:49.419685
|
{
"authors": [
"codecov-commenter",
"stanhu"
],
"repo": "oauth-xx/oauth2",
"url": "https://github.com/oauth-xx/oauth2/pull/590",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
64065366
|
Memory Leak in duScrollspy directive
In src/directives/scrollspy.js, line 58, a listener is added to $rootScope but is never cleaned up when the directive is destroyed.
Well spotted, thanks!
|
gharchive/issue
| 2015-03-24T18:19:51 |
2025-04-01T06:39:49.481352
|
{
"authors": [
"djwatson82",
"oblador"
],
"repo": "oblador/angular-scroll",
"url": "https://github.com/oblador/angular-scroll/issues/113",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
168886081
|
How to handle dynamic accordion content
Thanks for this useful component,
I'm having trouble while trying to create an accordion.
When a user clicks a submenu under a menu, the content of the accordion is changed and it is rerendered with different content.
This is how i use activeSection in accordion
activeSection={this.state.activeSection}
In my header content every view's structure is
View
View (image)
View (text)
View (image)
View
Assuming I have 4 rows in the old content and I click the 2nd item (index 1): in the new content the 2nd head view's (index 1) text is changed, but the images are not changed. I'm having this issue only with iOS. The code works well on Android, where the images are changed properly.
note: the images' sources and logs were checked several times.
Hey, this issue has been inactive for long time and will be closed. If the issue still persists feel free to tag me to reopen.
|
gharchive/issue
| 2016-08-02T13:33:31 |
2025-04-01T06:39:49.485127
|
{
"authors": [
"Burak07",
"iRoachie"
],
"repo": "oblador/react-native-collapsible",
"url": "https://github.com/oblador/react-native-collapsible/issues/43",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
284753459
|
DNS not resolve on macOs (high sierra)
We run this script with an additional hosts file (-e parameter). The additional hosts resolve on Windows, Linux, and Android, but do not resolve on macOS.
I can not understand what you are trying to say. Could you please rephrase it and add more details so that we can understand you?
|
gharchive/issue
| 2017-12-27T16:22:00 |
2025-04-01T06:39:49.486154
|
{
"authors": [
"amirhosseinnazari",
"solsticedhiver"
],
"repo": "oblique/create_ap",
"url": "https://github.com/oblique/create_ap/issues/306",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1011067710
|
Data release OWL and Log
Please merge PR #24 first.
I also deleted the log files from the previous PR #18 to not have the same file in different names and formats.
Fix #22
Looks good, but I think the logs are in the wrong place. We should have a dedicated directory for these. We should also capture STDERR logs to files in the same directory, as this captures other errors (we should find better ways to communicate these errors in future, but raw logs will do for now).
Ok, I'll create a dedicated directory for logs in this PR.
|
gharchive/pull-request
| 2021-09-29T14:53:43 |
2025-04-01T06:39:49.487675
|
{
"authors": [
"anitacaron",
"dosumis"
],
"repo": "obophenotype/CCF_tools",
"url": "https://github.com/obophenotype/CCF_tools/pull/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
804877852
|
B cell issues Fixes #935
Removes duplicate label and extra definition for 'B cell' and extra obo_namespace stanza.
OWL axiom in PR import!
|
gharchive/pull-request
| 2021-02-09T20:03:37 |
2025-04-01T06:39:49.492097
|
{
"authors": [
"addiehl",
"dosumis"
],
"repo": "obophenotype/cell-ontology",
"url": "https://github.com/obophenotype/cell-ontology/pull/939",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1221962873
|
Time scale labels don't adjust to font size.
As seen below, increasing the font size makes the time scale labels overlap.
style: {
fontFamily: "system-ui",
fontSize: "16px",
overflow: "visible"
},
style: {
fontFamily: "system-ui",
fontSize: "24px",
overflow: "visible"
},
Yes, that’s how it works. Plot does not consider text metrics when laying out axes.
So what's the work-a-round for this?
You can specify a different number of ticks using the scale.ticks option (e.g., x: {ticks: 4}) or you can rotate the ticks using scale.tickRotate option (e.g., x: {tickRotate: 90}) and then increasing marginBottom.
var graph = Plot.plot({
    x: {
        ticks: 4,
        tickRotate: -45,
    },
    width: 640,
    height: 400,
    marginLeft: 100,
    marginTop: 105,
    marginBottom: 100,
});
|
gharchive/issue
| 2022-05-01T00:48:28 |
2025-04-01T06:39:49.511505
|
{
"authors": [
"mbostock",
"reubano"
],
"repo": "observablehq/plot",
"url": "https://github.com/observablehq/plot/issues/859",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
1391581028
|
Append completion date to tasks completed in the Kanban plugin
Expected Behavior
When marking a task in kanban as done it should add the ✅ emoji with the date
Current behaviour
It doesn't add the ✅ emoji if I mark the task as done in the Kanban board, but if I open the board as markdown and click it manually, it adds it like normal.
Steps to reproduce
Install kanban and make a new board
Add tasks and mark it as done, the ✅ emoji with the date will not show up
Which Operating Systems are you using?
[ ] Android
[ ] iPhone/iPad
[ ] Linux
[ ] macOS
[X] Windows
Obsidian Version
0.15.9
Tasks Plugin Version
1.11.0
Checks
[x] I have tried it with all other plugins disabled and the error still occurs
Possible solution
No response
Hi @joetifa2003, thanks for the write up.
I’m not sure that I understand - it sounds like you would like the KanBan plugin to provide support for Tasks emojis, is that correct?
The done date would be really useful if it were automatically added, but that's ok, I'll find a workaround for now. Thanks!
Just to follow up from the Discussions above, when a task is marked as done in another plugin, any addition of Due date or similar would be done by code in that plugin.
So it's not something within the scope of Tasks to implement.
|
gharchive/issue
| 2022-09-29T22:36:56 |
2025-04-01T06:39:49.530101
|
{
"authors": [
"claremacrae",
"joetifa2003"
],
"repo": "obsidian-tasks-group/obsidian-tasks",
"url": "https://github.com/obsidian-tasks-group/obsidian-tasks/issues/1196",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
384086616
|
Add a FAQ to document common problems
I encountered this entitlements error and it took me a while to find the problem. I figured this would be a good candidate for starting a FAQ.
Thanks!
|
gharchive/pull-request
| 2018-11-25T14:58:21 |
2025-04-01T06:39:49.531144
|
{
"authors": [
"luigy",
"mightybyte"
],
"repo": "obsidiansystems/obelisk",
"url": "https://github.com/obsidiansystems/obelisk/pull/321",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2499588303
|
incorrect cursor position and inconsistent scrolling behaviour in editor with overflow
Exhibit A
.ProseMirror {
word-wrap: normal;
overflow: auto;
}
abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
cursor in "abc┘def", scrolled outside of viewport
press →
cursor is not scrolled into view
press →
viewport is scrolled to the cursor position ("efghi..."), but no cursor visible
manually scroll to the left
cursor is blinking at the very edge of the element, in the padding area:
Exhibit B
.ProseMirror {
overflow: auto;
}
.ProseMirror pre {
white-space: pre;
}
console.warn(
`[prosemirror-virtual-cursor] Virtual cursor does not work well with marks that have inclusive set to false. Please consider removing the inclusive option from the "${mark}" mark or adding it to the "skipWarning" option.`,
);
place cursor anywhere in the code block
scroll horizontally
use arrow keys or mouse to change cursor position
cursor is rendered with offset (verify by typing in text)
Well, it is affected by vertical overflow as well.
.ProseMirror {
max-height: 10em;
overflow: auto;
}
lorem
ipsum
dolor
sit
amet
Cursor on the last line is rendered higher than it should be:
|
gharchive/issue
| 2024-09-01T17:14:04 |
2025-04-01T06:39:49.633501
|
{
"authors": [
"vthriller"
],
"repo": "ocavue/prosemirror-virtual-cursor",
"url": "https://github.com/ocavue/prosemirror-virtual-cursor/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2537956491
|
[Bug]: If you create a new schema table, the schema supports a maximum of 1,000 shards
ODC version
4.3.2
OB version
no related
What happened?
If you create a new schema table, the maximum number of shards in a schema is 1,000
What did you expect to happen?
If you create a new logical database table, the number of shards in the logical database should be allowed to exceed 1,000, with the maximum number of tables being Integer.MAX_VALUE.
How can we reproduce it (as minimally and precisely as possible)?
Create a logical table with more than 1,000 databases and more than Integer.MAX_VALUE tables, and observe the page response.
Anything else we need to know?
No response
pass
|
gharchive/issue
| 2024-09-20T06:07:07 |
2025-04-01T06:39:49.652027
|
{
"authors": [
"qymsummer"
],
"repo": "oceanbase/odc",
"url": "https://github.com/oceanbase/odc/issues/3470",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1130299383
|
Fee display component
We need a component to display the price structure with fees. It will be used in the Use tab. It is needed in the following places
Download : for the price
Compute: for the asset price , for each algorithm price, for the whole job price .
Preliminary structure:
publisherMarketOrderFee: string
publisherMarketPoolSwapFee: string
publisherMarketFreSwapFee: string
consumeMarketOrderFee: string
consumeMarketPoolSwapFee: string
consumeMarketFreSwapFee: string
opcFee: string
Basically this needs to look something like
Total price : x BaseToken
fee1 : y BaseToken
fee2: z BaseToken
To avoid confusion between Fre and Free I propose changing
publisherMarketFreSwapFee: string
consumeMarketFreSwapFee: string
To:
publisherMarketFixedSwapFee: string
consumeMarketFixedSwapFee: string
@mihaisc I noticed the list doesn't include provider fees? I thought we were also introducing these?
This was just a random example at that time; the component shouldn't care, it just needs to display something like name: value for fees that are not 0.
ok cool
This is an example of hopefully the final version of the structure that we pass to this component https://github.com/oceanprotocol/market/blob/7336224f387ca36e3c62b83a5f82e7505691f706/src/%40types/Price.d.ts#L16 . For now it assumes all fees are in ocean.
Not that relevant now. Will open another issue in the future if needed
|
gharchive/issue
| 2022-02-10T15:26:40 |
2025-04-01T06:39:49.656659
|
{
"authors": [
"jamiehewitt15",
"mihaisc"
],
"repo": "oceanprotocol/market",
"url": "https://github.com/oceanprotocol/market/issues/1082",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
848042958
|
Market code can support different base token
What
Ocean backend code supports different base tokens. Ocean frontend code has a stronger bias to use OCEAN as a base token.
Goal of this issue: make it more straightforward for Ocean frontend code to use a different base token.
Motivation
This helps in two ways:
Helps several 3rd party marketplaces, which would like to use their own token at the frontend level, eg for pools staking/swapping and for FRE.
Helps Ocean Market switch from using OCEAN, to an OCEAN-backed stablecoin -- market#447
No, we had this in mind when refactoring things but we never tested a different base token, and there are still places which assume OCEAN alone. We always said to do this after v4
Ok, closing this. Concrete implementation of multiple approved tokens will be tracked here https://github.com/oceanprotocol/market/issues/1335
|
gharchive/issue
| 2021-04-01T06:29:58 |
2025-04-01T06:39:49.660173
|
{
"authors": [
"kremalicious",
"mihaisc",
"trentmc"
],
"repo": "oceanprotocol/market",
"url": "https://github.com/oceanprotocol/market/issues/468",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
915249358
|
Better error message when no trackable features are found
This is related to #7. Currently, when no labels are found in _filter_area, the logic fails at min_area = np.percentile(area, min_size_quartile*100)
with the rather obscure error:
```
IndexError Traceback (most recent call last)
in
10
11 # Filter area
---> 12 area, min_area, binary_labels, N_initial = _filter_area(binary_images_with_mask, min_size_quartile)
13
14 # Label objects
~/ocetrac/ocetrac/track.py in _filter_area(binary_images, min_size_quartile)
81 labelprops = xr.DataArray(labelprops, dims=['label'], coords={'label': labelprops})
82 area = xr.DataArray([p.area for p in props], dims=['label'], coords={'label': labelprops}) # Number of pixels of the region.
---> 83 min_area = np.percentile(area, min_size_quartile*100)
84 print('minimum area: ', min_area)
85 keep_labels = labelprops.where(area>=min_area, drop=True)
<array_function internals> in percentile(*args, **kwargs)
/srv/conda/envs/notebook/lib/python3.8/site-packages/numpy/lib/function_base.py in percentile(a, q, axis, out, overwrite_input, interpolation, keepdims)
3816 if not _quantile_is_valid(q):
3817 raise ValueError("Percentiles must be in the range [0, 100]")
-> 3818 return _quantile_unchecked(
3819 a, q, axis, out, overwrite_input, interpolation, keepdims)
3820
/srv/conda/envs/notebook/lib/python3.8/site-packages/numpy/lib/function_base.py in _quantile_unchecked(a, q, axis, out, overwrite_input, interpolation, keepdims)
3935 interpolation='linear', keepdims=False):
3936 """Assumes that q is in [0, 1], and is an ndarray"""
-> 3937 r, k = _ureduce(a, func=_quantile_ureduce_func, q=q, axis=axis, out=out,
3938 overwrite_input=overwrite_input,
3939 interpolation=interpolation)
/srv/conda/envs/notebook/lib/python3.8/site-packages/numpy/lib/function_base.py in _ureduce(a, func, **kwargs)
3513 keepdim = (1,) * a.ndim
3514
-> 3515 r = func(a, **kwargs)
3516 return r, keepdim
3517
/srv/conda/envs/notebook/lib/python3.8/site-packages/numpy/lib/function_base.py in _quantile_ureduce_func(failed resolving arguments)
4048 indices_below.ravel(), indices_above.ravel(), [-1]
4049 )), axis=0)
-> 4050 n = np.isnan(ap[-1])
4051 else:
4052 # cannot contain nan
IndexError: index -1 is out of bounds for axis 0 with size 0
```
I propose to add a check [here](https://github.com/ocetrac/ocetrac/blob/9e92246036ae87aea527265ef17d99e91a846c03/ocetrac/track.py#L80) along the lines of
```python
if len(props) == 0:
    raise ValueError('No features detected')
    # or just warn and return None?
    # warnings.warn(...)
    # return None
```
Are we actually testing/catching this? I might have overlooked this in the tests. Otherwise I would keep this open, I can try to implement a test soon.
|
gharchive/issue
| 2021-06-08T16:48:56 |
2025-04-01T06:39:49.680185
|
{
"authors": [
"jbusecke"
],
"repo": "ocetrac/ocetrac",
"url": "https://github.com/ocetrac/ocetrac/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2204421623
|
🛑 Ahora+ (rrhh.octubre.org.ar) is down
In 68ce046, Ahora+ (rrhh.octubre.org.ar) (https://rrhh.octubre.org.ar/api/health) was down:
HTTP code: 503
Response time: 765 ms
Resolved: Ahora+ (rrhh.octubre.org.ar) is back up in e4a55ba.
|
gharchive/issue
| 2024-03-24T15:53:19 |
2025-04-01T06:39:49.796461
|
{
"authors": [
"apps-suterh"
],
"repo": "octubre-softlab/octubre-upptime",
"url": "https://github.com/octubre-softlab/octubre-upptime/issues/3056",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2216124950
|
🛑 Ahora+ (rrhh.octubre.org.ar) is down
In e7be515, Ahora+ (rrhh.octubre.org.ar) (https://rrhh.octubre.org.ar/api/health) was down:
HTTP code: 503
Response time: 703 ms
Resolved: Ahora+ (rrhh.octubre.org.ar) is back up in b919773.
|
gharchive/issue
| 2024-03-30T00:59:42 |
2025-04-01T06:39:49.799985
|
{
"authors": [
"apps-suterh"
],
"repo": "octubre-softlab/octubre-upptime",
"url": "https://github.com/octubre-softlab/octubre-upptime/issues/3144",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
314433330
|
Hero rankings have some randomness to them.
Upon refreshing my hero rankings page I keep getting somewhat randomized results (at least for my top heroes).
The numbers are so far apart that this is an unusual fluctuation (top 100 to top 4000).
reference: https://www.opendota.com/players/60372773/rankings
For performance reasons the rankings are done against a random sample on each request, so depending on the sample the results may vary. I can look into increasing the sample size to reduce the variance, but we'll need to balance that against performance.
I doubled the sample size, is it more consistent now?
Yep, a bit better. But I honestly don't think the tradeoff is worth it.
We'll probably have to overhaul the system in the long term.
I'll look into what I can do (with my limited knowledge) over the next few weeks.
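The sample-size tradeoff discussed here can be illustrated with a toy simulation (not OpenDota's actual query): estimating a top player's rank from a random sample of peers gives an estimate whose spread shrinks as the sample grows, which matches the top-100-to-top-4000 swings reported above.

```python
import random

random.seed(0)

TOTAL = 1_000_000
population = [random.random() for _ in range(TOTAL)]
player_score = sorted(population)[-500]  # a genuinely ~top-500 player

def estimated_rank(sample_size):
    # Rank estimate: share of sampled players scoring above us,
    # scaled up to the full population size.
    sample = random.sample(population, sample_size)
    better = sum(1 for s in sample if s > player_score)
    return round(better / sample_size * TOTAL)

for n in (1_000, 10_000, 100_000):
    print(n, [estimated_rank(n) for _ in range(5)])
```

With small samples the count of better-scoring players in the sample is near zero, so the scaled estimate jumps in huge increments; larger samples smooth it out at the cost of scanning more rows per request.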
|
gharchive/issue
| 2018-04-15T17:09:41 |
2025-04-01T06:39:50.020713
|
{
"authors": [
"Darkitz",
"howardchung"
],
"repo": "odota/core",
"url": "https://github.com/odota/core/issues/1635",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
821009181
|
Subject Area OMAS is setting same classification multiple times
After adding validation to the enterprise repository connector to ensure the OMASs are not trying to classify an entity twice, the Subject Area FVT is failing with ...
ProjectFVT runIt stopped
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.825 s - in org.odpi.openmetadata.accessservices.subjectarea.fvt.junit.ProjectIT
[INFO]
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TermIT.testTerm:24 Unexpected exception thrown: org.odpi.openmetadata.frameworks.connectors.ffdc.PropertyServerException: OMAG-COMMON-400-016 An unexpected org.odpi.openmetadata.accessservices.subjectarea.handlers.SubjectAreaTermHandler exception was caught by update(isReplace=false) for Term; error message was OMAG-REPOSITORY-HANDLER-500-001 An unexpected error org.odpi.openmetadata.repositoryservices.ffdc.exception.ClassificationErrorException was returned to classifyEntity(deprecated) by the metadata server during updateTerm request for open metadata access service Subject Area OMAS on server serverinmem; message was OMRS-REPOSITORY-400-081 A classifyEntity request has been made to repository serverinmem to add a classification Criticality to entity c061d512-ac51-4121-acf3-3c9de05c84b1 when this entity is already classified
[ERROR] TermIT.testTerm:24 Unexpected exception thrown: org.odpi.openmetadata.frameworks.connectors.ffdc.PropertyServerException: OMAG-COMMON-400-016 An unexpected org.odpi.openmetadata.accessservices.subjectarea.handlers.SubjectAreaTermHandler exception was caught by update(isReplace=false) for Term; error message was OMAG-REPOSITORY-HANDLER-500-001 An unexpected error org.odpi.openmetadata.repositoryservices.ffdc.exception.ClassificationErrorException was returned to classifyEntity(deprecated) by the metadata server during updateTerm request for open metadata access service Subject Area OMAS on server servergraph; message was OMRS-REPOSITORY-400-081 A classifyEntity request has been made to repository servergraph to add a classification Criticality to entity 2c21580a-a7e9-489c-8286-d5ec911f4d1e when this entity is already classified
[INFO]
[ERROR] Tests run: 18, Failures: 2, Errors: 0, Skipped: 0
[INFO]
[INFO]
[INFO] --- process-exec-maven-plugin:0.9:stop-all (stop-all) @ subject-area-fvt ---
[INFO] Stopping all processes ...
[INFO] Stopping process: chassis-start
[INFO] Stopped process: chassis-start
[INFO]
[INFO] --- jacoco-maven-plugin:0.8.6:report (report) @ subject-area-fvt ---
[INFO] Loading execution data file /Users/mandy-chessell/CloudStation/Drive/Code/ODPi/egeria-code/egeria/open-metadata-test/open-metadata-fvt/access-services-fvt/subject-area-fvt/target/jacoco.exec
[INFO] Analyzed bundle 'Subject Area OMAS FVT' with 16 classes
[INFO]
[INFO] --- maven-failsafe-plugin:3.0.0-M5:verify (verify) @ subject-area-fvt ---
[INFO] ------------------------------------------------------------------------
The problem seems to be in the updateTerm method of SubjectAreaTermHandler. It updates the properties and then if there are any governance classifications supplied by the caller, they are added to the entity. Any governance classifications not supplied by the caller are removed.
What needs to change is that before adding the classification to the entity, it needs to check whether the entity already has the classification applied. If it does then it should call updateEntityClassificationProperties instead of classifyEntity.
This is the current logic ...
public SubjectAreaOMASAPIResponse<Term> updateTerm(String userId, String guid, Term suppliedTerm, boolean isReplace) {
    final String methodName = "updateTerm";
    SubjectAreaOMASAPIResponse<Term> response = new SubjectAreaOMASAPIResponse<>();
    try {
        InputValidator.validateNodeType(className, methodName, suppliedTerm.getNodeType(), NodeType.Term, NodeType.Activity);
        response = getTermByGuid(userId, guid);
        if (response.head().isPresent()) {
            Term currentTerm = response.head().get();
            Set<String> currentClassificationNames = getCurrentClassificationNames(currentTerm);
            if (isReplace)
                replaceAttributes(currentTerm, suppliedTerm);
            else
                updateAttributes(currentTerm, suppliedTerm);
            Date termFromTime = suppliedTerm.getEffectiveFromTime();
            Date termToTime = suppliedTerm.getEffectiveToTime();
            currentTerm.setEffectiveFromTime(termFromTime);
            currentTerm.setEffectiveToTime(termToTime);
            // always update the governance actions for a replace or an update
            currentTerm.setGovernanceActions(suppliedTerm.getGovernanceActions());
            TermMapper termMapper = mappersFactory.get(TermMapper.class);
            EntityDetail forUpdate = termMapper.map(currentTerm);
            Optional<EntityDetail> updatedEntity = oMRSAPIHelper.callOMRSUpdateEntity(methodName, userId, forUpdate);
            if (updatedEntity.isPresent()) {
                List<Classification> classifications = forUpdate.getClassifications();
                if (CollectionUtils.isNotEmpty(classifications)) {
                    for (Classification classification : classifications) {
                        oMRSAPIHelper.callOMRSClassifyEntity(methodName, userId, guid, classification);
                        currentClassificationNames.remove(classification.getName());
                    }
                    for (String deClassifyName : currentClassificationNames) {
                        oMRSAPIHelper.callOMRSDeClassifyEntity(methodName, userId, guid, deClassifyName);
                    }
                }
                List<CategorySummary> suppliedCategories = suppliedTerm.getCategories();
                if (suppliedCategories == null && !isReplace) {
                    // in the update case with null categories supplied then do not change anything.
                } else {
                    replaceCategories(userId, guid, suppliedTerm, methodName);
                }
            }
            response = getTermByGuid(userId, guid);
        }
    } catch (SubjectAreaCheckedException | PropertyServerException | UserNotAuthorizedException | InvalidParameterException e) {
        response = new SubjectAreaOMASAPIResponse<>();
        response.setExceptionInfo(e, className);
    }
    return response;
}
I think it needs to be something like this:
public SubjectAreaOMASAPIResponse<Term> updateTerm(String userId, String guid, Term suppliedTerm, boolean isReplace) {
    final String methodName = "updateTerm";
    SubjectAreaOMASAPIResponse<Term> response = new SubjectAreaOMASAPIResponse<>();
    try {
        InputValidator.validateNodeType(className, methodName, suppliedTerm.getNodeType(), NodeType.Term, NodeType.Activity);
        response = getTermByGuid(userId, guid);
        if (response.head().isPresent()) {
            Term currentTerm = response.head().get();
            Set<String> currentClassificationNames = getCurrentClassificationNames(currentTerm);
            if (isReplace)
                replaceAttributes(currentTerm, suppliedTerm);
            else
                updateAttributes(currentTerm, suppliedTerm);
            Date termFromTime = suppliedTerm.getEffectiveFromTime();
            Date termToTime = suppliedTerm.getEffectiveToTime();
            currentTerm.setEffectiveFromTime(termFromTime);
            currentTerm.setEffectiveToTime(termToTime);
            // always update the governance actions for a replace or an update
            currentTerm.setGovernanceActions(suppliedTerm.getGovernanceActions());
            TermMapper termMapper = mappersFactory.get(TermMapper.class);
            EntityDetail forUpdate = termMapper.map(currentTerm);
            Optional<EntityDetail> updatedEntity = oMRSAPIHelper.callOMRSUpdateEntity(methodName, userId, forUpdate);
            if (updatedEntity.isPresent()) {
                List<Classification> suppliedClassifications = forUpdate.getClassifications();
                List<Classification> storedClassifications = updatedEntity.get().getClassifications();
                Map<String, Classification> storedClassificationMap = null;
                if ((storedClassifications != null) && (! storedClassifications.isEmpty())) {
                    storedClassificationMap = new HashMap<>();
                    for (Classification storedClassification : storedClassifications) {
                        if (storedClassification != null) {
                            storedClassificationMap.put(storedClassification.getName(), storedClassification);
                        }
                    }
                }
                if (CollectionUtils.isNotEmpty(suppliedClassifications)) {
                    for (Classification suppliedClassification : suppliedClassifications) {
                        if (suppliedClassification != null) {
                            if ((storedClassificationMap == null) || (! storedClassificationMap.keySet().contains(suppliedClassification.getName()))) {
                                oMRSAPIHelper.callOMRSClassifyEntity(methodName, userId, guid, suppliedClassification);
                            } else {
                                oMRSAPIHelper.callOMRSUpdateClassification(methodName, userId, guid, storedClassificationMap.get(suppliedClassification.getName()), suppliedClassification.getProperties());
                            }
                            currentClassificationNames.remove(suppliedClassification.getName());
                        }
                    }
                    for (String deClassifyName : currentClassificationNames) {
                        oMRSAPIHelper.callOMRSDeClassifyEntity(methodName, userId, guid, deClassifyName);
                    }
                }
                List<CategorySummary> suppliedCategories = suppliedTerm.getCategories();
                if (suppliedCategories == null && !isReplace) {
                    // in the update case with null categories supplied then do not change anything.
                } else {
                    replaceCategories(userId, guid, suppliedTerm, methodName);
                }
            }
            response = getTermByGuid(userId, guid);
        }
    } catch (SubjectAreaCheckedException | PropertyServerException | UserNotAuthorizedException | InvalidParameterException e) {
        response = new SubjectAreaOMASAPIResponse<>();
        response.setExceptionInfo(e, className);
    }
    return response;
}
This required a new method in OMRSAPIHelper:
public void callOMRSUpdateClassification(String restAPIName,
                                         String userId,
                                         String entityGUID,
                                         Classification existingClassification,
                                         InstanceProperties newProperties) throws UserNotAuthorizedException,
                                                                                  PropertyServerException,
                                                                                  SubjectAreaCheckedException {
    String methodName = "callOMRSUpdateClassification";
    try {
        InstanceType type = existingClassification.getType();
        String typeDefGUID = null;
        String typeDefName = null;
        if (type != null) {
            typeDefGUID = type.getTypeDefGUID();
            typeDefName = type.getTypeDefName();
        }
        getRepositoryHandler().reclassifyEntity(userId,
                                                null,
                                                null,
                                                entityGUID,
                                                typeDefGUID,
                                                typeDefName,
                                                existingClassification,
                                                newProperties,
                                                restAPIName);
    } catch (PropertyServerException | UserNotAuthorizedException e) {
        throw e;
    } catch (Exception error) {
        prepareUnexpectedError(error, methodName);
    }
}
Much of the classification-management code above is also supported by the repository handler.
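The classification-handling code above follows a common three-way reconcile pattern: classify what is newly supplied, update what exists in both sets, and declassify what is stored but no longer supplied. A minimal sketch of that pattern (illustrative Python, not part of the Egeria API):

```python
def reconcile(supplied, stored):
    """Split supplied/stored mappings into create, update and delete sets.

    supplied, stored: dicts keyed by classification name.
    Returns (to_create, to_update, to_delete) as sets of names.
    """
    supplied_names = set(supplied)
    stored_names = set(stored)
    to_create = supplied_names - stored_names   # classify: new on the entity
    to_update = supplied_names & stored_names   # reclassify: present in both
    to_delete = stored_names - supplied_names   # declassify: no longer supplied
    return to_create, to_update, to_delete
```

The Java above implements the same split imperatively: new names go to `callOMRSClassifyEntity`, shared names to `callOMRSUpdateClassification`, and the remainder to `callOMRSDeClassifyEntity`.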
|
gharchive/issue
| 2021-03-03T11:13:02 |
2025-04-01T06:39:50.028630
|
{
"authors": [
"mandy-chessell"
],
"repo": "odpi/egeria",
"url": "https://github.com/odpi/egeria/issues/4847",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1472024170
|
🛑 urbamonde.org is down
In e28ed25, urbamonde.org (https://www.urbamonde.org) was down:
HTTP code: 429
Response time: 448 ms
Resolved: urbamonde.org is back up in cdb24ac.
|
gharchive/issue
| 2022-12-01T22:09:16 |
2025-04-01T06:39:50.032699
|
{
"authors": [
"ntopulos"
],
"repo": "odqo/upptime",
"url": "https://github.com/odqo/upptime/issues/896",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1885628320
|
A possibly improved implementation.
I tried to improve the code a bit so that it handles subclasses and imports. It should also handle Iterables with limitations. The code is not tested fully, but it it seems to work. If you would be kind enough to test it, I give you my permission to upload it to your blog. I don't even need credit. However, if you spot a bug or find any improvement, please let me know. Code follows:
import sys
import types as t
from numpy import ndarray
from torch import Tensor
import __main__
import inspect
from pathlib import Path
from typing import Any, Optional, Set, Dict, Tuple, runtime_checkable, Protocol, TypeVar, List, Iterable
import dill  # type: ignore
import torch

# Type variables used by the ListLike/DictLike protocols below.
T = TypeVar("T")
S = TypeVar("S")

# _toxic_types, _toxic_iterables, _builtins and _already_copied are referenced
# below but were not included in this snippet; they must be defined elsewhere.

def mainify(obj: object, max_depth: int = 100) -> None:
    _mainify(obj, 0, max_depth)

def _mainify(obj: object, depth: int, max_depth: int) -> object:
    if isinstance(obj, _toxic_types): return obj
    if depth > max_depth: return obj
    obj = _check_recursion(obj, depth, max_depth)
    if not hasattr(obj, "__module__") or obj.__module__ in _builtins: return obj
    _add_to_main(obj)
    if is_named(obj): obj = getattr(__main__, obj.__name__)
    else:
        obj.__class__ = getattr(__main__, obj.__class__.__name__)
        for key, value in obj.__dict__.items():
            temp: object = _mainify(value, depth, max_depth)
            setattr(obj, key, temp)
    return obj

def is_named(obj: object) -> bool: return hasattr(obj, "__name__")

def _add_to_main(obj: object):
    module_source = inspect.getsource(inspect.getmodule(obj))
    exec(compile(module_source, '', 'exec'), __main__.__dict__)
    _already_copied.add(obj.__module__)

@runtime_checkable
class ListLike(Protocol[S]):
    def __setitem__(self, __key: int, __value: S) -> None: ...
    def __getitem__(self, __key: int) -> S: ...
    def __len__(self) -> int: ...

@runtime_checkable
class DictLike(Protocol[T, S]):
    def __setitem__(self, __key: T, __value: S) -> None: ...
    def __getitem__(self, __key: T) -> S: ...
    def items(self) -> List[Tuple[T, S]]: ...

def _check_recursion(obj: object, depth: int, max_depth: int) -> object:
    if not isinstance(obj, Iterable) or isinstance(obj, _toxic_iterables): return obj
    the_class = obj.__class__
    if isinstance(obj, DictLike):
        for key, value in obj.items(): obj[key] = _mainify(value, depth + 1, max_depth)
    if isinstance(obj, tuple): obj = list(obj)
    if isinstance(obj, ListLike):
        for i in range(len(obj)): obj[i] = _mainify(obj[i], depth + 1, max_depth)
    return obj if obj.__class__ == the_class else the_class(obj)  # type: ignore[arg-type]
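A note on the protocols used above: typing.runtime_checkable makes isinstance() test only for the presence of the protocol's methods, which is what lets _check_recursion dispatch on dict-like and list-like objects. A minimal self-contained sketch of that mechanism:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsLen(Protocol):
    def __len__(self) -> int: ...

# isinstance() against a runtime_checkable Protocol only checks that the
# method exists on the object, not its signature or return type.
assert isinstance([], SupportsLen)       # list defines __len__
assert not isinstance(42, SupportsLen)   # int does not
```

This structural check is also why the code above can treat any object with __setitem__/__getitem__/items as a dict, whether or not it subclasses dict.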
Hi, sounds like a good topic for a gist or you can leave it as a comment!
|
gharchive/issue
| 2023-09-07T10:37:40 |
2025-04-01T06:39:50.051543
|
{
"authors": [
"cparks1000000",
"oegedijk"
],
"repo": "oegedijk/blog",
"url": "https://github.com/oegedijk/blog/issues/15",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1646550180
|
Error: unknown type: container
Hi,
I have installed hatch-containers
pip install hatch-containers
However, after running `hatch run mycmd` I get an error message saying I am using an unknown type.
❯ hatch run mycmd
Environment `default` has unknown type: container
Any idea about how to fix this?
Try `pip install hatch-containers`, although I'm not sure why it doesn't auto-install it when set in build-system:
[build-system]
requires = ["hatchling", "hatch-containers"]
build-backend = "hatchling.build"
|
gharchive/issue
| 2023-03-29T21:20:58 |
2025-04-01T06:39:50.105991
|
{
"authors": [
"aleexaandr",
"ngallo"
],
"repo": "ofek/hatch-containers",
"url": "https://github.com/ofek/hatch-containers/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
742625775
|
Allow custom styling for consent banner
Right now the consent banner is styled one-size-fits-all. While we try to keep it low-impact for most designs, this surely hurts adoption in cases where a unified UX is important for operators.
While it's technically trivial to allow operators to inject custom CSS into the iframe element that displays the banner, this also comes with some pitfalls and requirements:
Operators should not be able to alter the content in the banner (i.e. change wording to be misleading or hide certain elements)
The default a11y and privacy standards must be preserved in any case
Approach: allowing operators to define custom CSS per account
Semantic classes are applied to the markup that displays the consent banner. Operators can now add custom CSS for each account which the server will add (in addition to the predefined styles) to each vault response. In order to prevent operators from creating a misleading consent banner, the following sanitization rules are applied to the CSS before saving:
no pseudo content
no hiding of elements
no opacity rules
Pros
Allows operators to apply fine grained styling that matches virtually any site design
Technically simple to implement
Cons
Sanitizing CSS will help, but it will not close all loopholes for malicious use ever (then again, forking Offen is a lot easier and gives you all options)
Does not offer a middle ground for non-technically savvy users who don't want to or cannot write CSS but still want to customize their banner's appearance
As discussed with @hendr-ik in person we would like to start off with the guidelines for sanitizing CSS: we trust the operator who deploys and customizes Offen to keep the consent guarantees intact but aim to prevent attackers (who might hijack a system or similar) from injecting malicious content through CSS.
This means we'll start with the following rules:
no loading of external resources (as this would introduce an external tracking vector)
No use of content. We want to prevent unwanted messages from showing up or attackers changing content.
No use of transparency as this would allow attackers to create an invisible banner and do "clickjacking".
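To illustrate, a sanitizer enforcing these rules could be sketched roughly as follows (hypothetical Python, regex-based; this is not Offen's actual server-side implementation, and a production sanitizer should use a real CSS parser, not regexes):

```python
import re

# Patterns for declarations the rules above disallow. Illustrative only.
FORBIDDEN = [
    re.compile(r"url\s*\(", re.I),           # no loading of external resources
    re.compile(r"content\s*:", re.I),        # no injected text via `content`
    re.compile(r"opacity\s*:", re.I),        # no invisible banners / clickjacking
    re.compile(r"display\s*:\s*none", re.I), # no hiding of elements
]

def is_allowed(css: str) -> bool:
    """Return True if the custom CSS contains none of the forbidden declarations."""
    return not any(p.search(css) for p in FORBIDDEN)
```

The real check runs server-side before the CSS is saved, so a rejected stylesheet never reaches the consent banner iframe.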
Hi, I don't have the chance to give this much thought from a development point of view.
From a user point of view I really need simple customizations to make Offen fit in with my designs: border, colors and maybe font family are all I need to touch.
I don't have the time to give this much thought from a development point of view at the moment.
Your requirements as a user are super helpful and all we need. Thank you.
@TommasoAmici @whalehub We just merged the PR containing the feature and plan to include it in the next release.
In case you are interested and would like to help us ship a feature that is actually helpful, we'd appreciate any feedback on a test run. To check it out, you can run the development version in demo mode something like this:
docker pull offen/offen:development
docker run --rm -p 9876:9876 offen/offen:development demo -port 9876
After logging in head to Customize appearance, the documentation link is currently broken because this is not properly released yet, but it would point here https://docs.offen.dev/v/development/running-offen/customizing-consent-banner/
Our primary questions for feedback would be:
can you achieve what you would like to do visually without too much hair pulling?
would you need more/less/different documentation to get up and running?
Any other feedback is welcome too of course. Thank you!
@m90 I love how you implemented this feature, what with the preview function and everything - well done! The documentation was more than sufficient for me personally.
Before
After
This is now released in v0.4.2. Thanks for all of your feedback :tophat:
|
gharchive/issue
| 2020-11-13T17:17:01 |
2025-04-01T06:39:50.117022
|
{
"authors": [
"TommasoAmici",
"m90",
"whalehub"
],
"repo": "offen/offen",
"url": "https://github.com/offen/offen/issues/505",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
105679701
|
show async validation example
Something quick and dirty to demonstrate how one would integrate with a foreign backend for some validation.
Added an async example to the readme here. Is that what you mean?
|
gharchive/issue
| 2015-09-09T20:38:44 |
2025-04-01T06:39:50.164236
|
{
"authors": [
"offirgolan",
"stefanpenner"
],
"repo": "offirgolan/ember-cp-validations",
"url": "https://github.com/offirgolan/ember-cp-validations/issues/18",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
874226454
|
Numbering in end of video titles & get latest clip
In the next update, can you add a YouTube video title numbering feature?
For example: gaming video 1, gaming video 2, ... gaming video n.
Also, could you get the latest Twitch clips instead of the most-viewed clips?
I did something cheap, it might work for you.
In config.py you need to import these:
from __future__ import print_function
import atexit
from os import path
from json import dumps, loads
then add these 2 functions:
def read_counter():
    return loads(open("counter.json", "r").read()) + 1 if path.exists("counter.json") else 0

def write_counter():
    with open("counter.json", "w") as f:
        f.write(dumps(counter))

counter = read_counter()
atexit.register(write_counter)
and then in your title you only need to add this:
TITLE = "my title and many times i upload the content #{}".format(counter)
The problem with that is that I tried to put the name of the first clip with the count after it, but couldn't figure it out.
Anyway, this repo is damn good, very well done.
Hope that helps you a bit.
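Building on the snippet above, the counter persistence and title formatting can also be wrapped in one helper; the function name and file layout here are illustrative, not part of twitchtube's actual config API:

```python
import json
from pathlib import Path

def next_numbered_title(base: str, counter_file: str = "counter.json") -> str:
    """Return `base` with an incrementing number appended, persisted to disk.

    The counter survives between runs because it is written back to
    `counter_file` on every call.
    """
    path = Path(counter_file)
    count = json.loads(path.read_text()) + 1 if path.exists() else 1
    path.write_text(json.dumps(count))
    return "{} {}".format(base, count)
```

One could then set something like `TITLE = next_numbered_title("gaming video")`, which yields "gaming video 1" on the first run, "gaming video 2" on the next, and so on.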
|
gharchive/issue
| 2021-05-03T05:01:05 |
2025-04-01T06:39:50.166757
|
{
"authors": [
"PepNieto",
"ShivamThakkar1"
],
"repo": "offish/twitchtube",
"url": "https://github.com/offish/twitchtube/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
411292743
|
Update ScreenShot for Chrome Web Store
Update the screenshots for the store.
Context
The kintone logo appears in the text inside the screenshot, which may violate the guidelines.
Expected result
Change the kintone wording inside the screenshot text to a plain text style.
Current result
The kintone logo is included in the text inside the screenshot.
Closed by #45
|
gharchive/issue
| 2019-02-18T04:01:57 |
2025-04-01T06:39:50.186750
|
{
"authors": [
"emiksk",
"rkonno"
],
"repo": "ofuton/maimodorun",
"url": "https://github.com/ofuton/maimodorun/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
341972646
|
Texture formats
Added support for various texture formats, adding more should be easy. Also, fixed a mistake in the initialization of depth textures.
If it compiles under MacOS, we can merge.
Compiled, installed and ran examples without problems.
|
gharchive/pull-request
| 2018-07-17T15:33:16 |
2025-04-01T06:39:50.187769
|
{
"authors": [
"TheoWinterhalter",
"VLanvin"
],
"repo": "ogaml/ogaml",
"url": "https://github.com/ogaml/ogaml/pull/42",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1051389786
|
New Features: Mapbox
Thank you for this nice project!
Would be nice if you could also implement mapbox map to carplay. We would be very happy. https://pub.dev/packages/mapbox_gl
Playing radio would be nice too
Best regards from Germany :-)
Instead of using MapBox, can I suggest 'flutter_map'? It supports many more sources and much more configuration.
Hey JaffaKetchup,
thank you for your fast response.
We implement mapbox to our app's, we decided to go forward with mapbox because of stability and feature set.
It would be huge effort to change it to other map. I would pleased if you could implement mapbox too.
Thank you!
Thank you so much for your support and opinions in the discussion, @sarp86 and @JaffaKetchup. I really appreciate it!
First and foremost, I'd like to state that this package will support as many packages as possible at the same time. That's why you shouldn't be concerned 😊. However, since I'm the only developer of this project, it may take some time to complete all of them. I'm currently working on the CarPlay Voice Control and Now Playing Music System, so you will be able to use voice control, text-to-speech, speech-to-text, and play music or radio. It's nearly completed at this time.
For Map, I currently do not plan to implement flutter_map in the first phase, because native MapBox already has CarPlay functionality and is much easier and faster to wrap for Flutter than a Flutter-native map. However, this does not mean that the Flutter-native map will never be supported by this package; it will, just not in the initial plan. Whether the flutter_map package supports more sources or more configuration doesn't matter much here: CarPlay severely restricts many of these functionalities, since it has fixed configurations that cannot be changed. Consequently, the native MapBox package will be the first one implemented in the flutter_carplay package.
For now, you can track the development activity of version 1.1 here. I plan to release Mapbox implementations in version 1.2 or in its beta. If you all or anyone is interested in and wants to help and contribute, send me an email or a pull request so that we can improve all of our applications as soon as possible. 🚀
If you have any further questions or requests in this queue, you can continue to write in this issue.
I can understand that you're the only one developing the package (I too am an independent package maintainer/developer). Make sure you don't get yourself too stressed!
The new features sound cool!
I can understand also why you won't implement flutter_map, was only suggesting because it is more customizable (and I develop a plugin for it, so I get those juicy GitHub stars 🤣).
Likewise, I say hello to Turkey :)
|
gharchive/issue
| 2021-11-11T21:51:04 |
2025-04-01T06:39:50.218099
|
{
"authors": [
"JaffaKetchup",
"oguzhnatly",
"sarp86"
],
"repo": "oguzhnatly/flutter_carplay",
"url": "https://github.com/oguzhnatly/flutter_carplay/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2679475065
|
UI enhancement to Sample Test History
Describe the bug
The components in the Sample Test History are too close together, and it is hard to differentiate them from one another due to a lack of space.
To Reproduce
Steps to reproduce the behavior:
Go to Sample test
Click on Sample Details of any Patient
Scroll down to Sample Test History
See error
Expected behavior
There should be some space between one test component and the next.
I'd like to work on this issue
There's an issue/PR (#9120) open regarding modifying the Sample test details page (so far it doesn't touch the Sample test history, so you should be fine, but keep an eye out).
|
gharchive/issue
| 2024-11-21T13:25:22 |
2025-04-01T06:39:50.236543
|
{
"authors": [
"Jacobjeevan",
"Keerthilochankumar"
],
"repo": "ohcnetwork/care_fe",
"url": "https://github.com/ohcnetwork/care_fe/issues/9175",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
943811971
|
Typo in _create_learner?
Found hf_confg in classmethod _create_learner from BlearnerForQuestionAnswering
if (max_seq_len is None):
    max_seq_len = hf_confg.get('max_position_embeddings', 128)
Thanks for letting me know. I'll get this fixed and a new release out tonight :)
|
gharchive/issue
| 2021-07-13T20:43:38 |
2025-04-01T06:39:50.243291
|
{
"authors": [
"ocm248",
"ohmeow"
],
"repo": "ohmeow/blurr",
"url": "https://github.com/ohmeow/blurr/issues/46",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
394685363
|
ProxySchemeUnknown: Not supported proxy scheme
Hey,
I got some errors with reading my proxylist.txt.
My list looks like this
ip:port
ip:port
But when I run the script, it loses the port while running.
Has anyone an idea?
Obtaining the channel...
Obtained the channel
Preparing the processes...
. . . . . . . . . . . . . . . . . . . . .
Prepared the processes
Booting up the processes...
Process Process-1:
Traceback (most recent call last):
File "C:\Python27\lib\multiprocessing\process.py", line 232, in _bootstrap
self.run()
File "C:\Python27\lib\multiprocessing\process.py", line 88, in run
self._target(*self._args, **self._kwargs)
File "C:\python27\twitch-viewer.py", line 80, in open_url
response = s.head(url, proxies=proxy)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 568, in head
return self.request('HEAD', url, **kwargs)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "C:\Python27\lib\site-packages\requests\adapters.py", line 412, in send
conn = self.get_connection(request.url, proxies)
File "C:\Python27\lib\site-packages\requests\adapters.py", line 309, in get_connection
proxy_manager = self.proxy_manager_for(proxy)
File "C:\Python27\lib\site-packages\requests\adapters.py", line 199, in proxy_manager_for
**proxy_kwargs)
File "C:\Python27\lib\site-packages\urllib3\poolmanager.py", line 450, in proxy_from_url
return ProxyManager(proxy_url=url, **kw)
File "C:\Python27\lib\site-packages\urllib3\poolmanager.py", line 401, in init
raise ProxySchemeUnknown(proxy.scheme)
ProxySchemeUnknown: Not supported proxy scheme 37.59.248.191
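For context, ProxySchemeUnknown means requests was handed a proxy URL without a scheme: a bare ip:port line must be passed as http://ip:port. A hedged sketch of normalizing lines read from proxylist.txt (illustrative, not the script's actual loader):

```python
def normalize_proxy(line: str) -> dict:
    """Turn an `ip:port` line into the proxies dict that requests expects."""
    line = line.strip()
    if "://" not in line:          # bare ip:port -> assume a plain HTTP proxy
        line = "http://" + line
    return {"http": line, "https": line}
```

Each line from the file would then be passed as `s.head(url, proxies=normalize_proxy(line))` instead of the raw string.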
Got it. Had to enable the Admin Account on Win 10 Pro and run it from this account.
Thx to everyone.
Is this bot working for you? Does Twitch count the viewers? @schroedaa
Yes, it works fine
Did you change anything in the original code? Twitch doesn't count requests as viewers anymore.
no way the original code is working.
Yes, it works fine
If you do have it working please share :)
The original code from @justinharkey.
For all the others: you have to activate the admin account on Win 10 Pro and run the script from there, "As Admin". Works fine for me. Thanks to @virtuoz75 for his support.
|
gharchive/issue
| 2018-12-28T17:34:40 |
2025-04-01T06:39:50.262759
|
{
"authors": [
"mufin695",
"s4magier",
"schroedaa"
],
"repo": "ohyou/twitch-viewer",
"url": "https://github.com/ohyou/twitch-viewer/issues/46",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
69226665
|
Question /feature request : How can I close an alert programmatically (like swal.close() does)
Hello,
First, thanks for your tool, it makes integration with angular very easy.
What would be a proper way to close programmatically an alert created with ngSweetAlert (like swal.close() does)?
FYI, my use case can't work with the integrated timeout alone: I am repeating an API call with an $interval while a custom SweetAlert with an animated spinner icon is displayed. I'd like to dismiss the sweet alert when I receive the proper $http.success() answer.
thanks for your suggestions.
Julien
did not realize that swal.close() was working, my bad.
What if I have multiple instances of Sweetalert in my controller?
|
gharchive/issue
| 2015-04-17T22:07:58 |
2025-04-01T06:39:50.270166
|
{
"authors": [
"levequej",
"mrudult"
],
"repo": "oitozero/ngSweetAlert",
"url": "https://github.com/oitozero/ngSweetAlert/issues/29",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1621963308
|
Epic: Airdrop Module
Summary
Airdrop module as designed in design doc 3
[x] Design doc
[x] #140
[x] #160
[x] #162
[x] ABCI
[x] #229
[x] #221
Create genesis airdrop account
Claim airdrop
Unable to claim airdrop after the expiry block
Update params with gov prop
[x] #220
For Admin Use
[ ] Not duplicate issue
[ ] Appropriate labels applied
[ ] Appropriate contributors tagged
[ ] Contributor assigned/self-assigned
All issues complete, closing
|
gharchive/issue
| 2023-03-13T17:45:56 |
2025-04-01T06:39:50.284166
|
{
"authors": [
"adamewozniak",
"zarazan"
],
"repo": "ojo-network/ojo",
"url": "https://github.com/ojo-network/ojo/issues/121",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
65715660
|
Instanced scene with an unshaded material gets lit by light and canvasmodulation
Try with a newer version of Godot; the unshaded property no longer exists. The replacement enum seems to work fine.
i think this is fixed
|
gharchive/issue
| 2015-04-01T15:47:05 |
2025-04-01T06:39:50.286129
|
{
"authors": [
"WatIsDeze",
"reduz"
],
"repo": "okamstudio/godot",
"url": "https://github.com/okamstudio/godot/issues/1598",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|