id | text | source | created | added | metadata
---|---|---|---|---|---
1263133951
|
Error occurred after command: "docker-compose up"
The error messages are listed below:
ERROR: Version in "./docker-compose.yml" is unsupported. You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1.
For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/
Yes, it's an issue, thanks.
In fact, the latest Docker also supports 3.9. See here: https://github.com/docker/cli/pull/2073/commits/5bc1f24dfd9f0071f7b857658c38226b695a0997.
Anyway, I will follow the Docker Compose docs.
Thanks for your reply! In fact, this is my very first time using Docker, so hopefully it's not a problem on my end. I hope it will work well soon. Many thanks!
Fix in this pull request
I'll close this issue.
|
gharchive/issue
| 2022-06-07T11:21:39 |
2025-04-01T06:40:04.879149
|
{
"authors": [
"fish1968",
"hobo0cn"
],
"repo": "primihub/primihub",
"url": "https://github.com/primihub/primihub/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2649854484
|
CTC Prediction using Regression Models
CTC prediction for freshers using the available data, such as college data, city data, etc.
Different regression models are used, and their accuracy levels are tested and scrutinized.
Please reply if you have any concerns.
|
gharchive/pull-request
| 2024-11-11T16:45:21 |
2025-04-01T06:40:04.885741
|
{
"authors": [
"debmillionaire"
],
"repo": "prince-chhirolya/ChhirolyaTech_AI_ML_Intern_Candidates_Assignments",
"url": "https://github.com/prince-chhirolya/ChhirolyaTech_AI_ML_Intern_Candidates_Assignments/pull/32",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2039112034
|
Using CrowS-Pairs multiple times results in different measurements
hi,
I ran into a problem where running the CrowS-Pairs metric multiple times gives different results. After training the model with bert-base-cased and saving it locally, I ran CrowS-Pairs multiple times and got different measurements each time. Figure:
Thank you very much for answering my questions and solving them, which is very helpful for my research.
Hi,
I'm not sure I fully understand your question—are you saying that you are getting high variance in CrowS-Pairs results from using bert-base-cased instead of bert-base-uncased? I've only ever tried training with uncased models, as I don't think casing particularly matters for any of the tasks at hand (either upstream or downstream).
From your attached screenshot, the numbers look reasonable (a deviation of 3.82 from 50—the ideal score), but for a sanity check you could compare them against bert-base-cased without any further training. Even in the cased setting, I would expect MABEL to have a better CrowS-Pairs score than BERT.
CrowS-Pairs is also a relatively small dataset (only 266 examples), so I would expect some variance across training runs. But if you are loading an already trained checkpoint and just running the metric multiple times, then the numbers should not change.
My problem was loading a trained checkpoint and just running the metric multiple times, with different results each time. I don't know what the reason is, but the same thing happens with other metrics.
All training steps follow the commands provided.
I'm not sure why you're running into this issue--I just tried re-running CrowS-Pairs locally with a saved checkpoint, and the results were fixed. To debug, I'd recommend the following:
Try cloning a fresh copy of the repository and running python -m benchmark.intrinsic.crows.eval --model_name_or_path princeton-nlp/mabel-bert-base-uncased. The numbers should exactly match what was reported in README.md, since we are downloading and loading from a frozen checkpoint.
If the numbers do match exactly, try replacing princeton-nlp/mabel-bert-base-uncased with the local path to your trained model (and also make sure that you are using a compatible tokenizer).
Do you also get varying results from the same trained checkpoint on StereoSet?
I used your trained princeton-nlp/mabel-bert-base-uncased without this issue. The problem occurs when I retrain using your code. Using StereoSet has the same problem.
Is it because the argument wasn't saved where I highlighted it in red?
I'm not sure which part of the training was wrong.
Did you convert the trained model checkpoint to its HF checkpoint prior to intrinsic evaluation? From the README:
If you use your own trained model instead of our provided HF checkpoint, you must first run python -m training.convert_to_hf --path /path/to/your/checkpoint --base_model bert (which converts the checkpoint to a standard BertForMaskedLM model - use --base_model roberta for RobertaForMaskedLM) prior to intrinsic evaluation
I tried a training run locally, and converted the checkpoint before running CrowS-Pairs. I did not get the message boxed in red.
After I ran python -m training.convert_to_hf --path /path/to/your/checkpoint --base_model bert, the problem was solved.
Thank you very much for your patience in helping me solve this problem.
That's great! Glad it worked out.
I'm closing this issue for now, but feel free to re-open or start a new thread if you run into additional issues.
|
gharchive/issue
| 2023-12-13T07:24:02 |
2025-04-01T06:40:04.895588
|
{
"authors": [
"jacqueline-he",
"struggle12"
],
"repo": "princeton-nlp/MABEL",
"url": "https://github.com/princeton-nlp/MABEL/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2336362833
|
Bugfix v1.3.4
@David-Yan1 Fixed bug where individual export would fail on objects hidden from viewport
@mazeyu Fixed Terrain.populated_bounds bad merge
Fixes #247
|
gharchive/pull-request
| 2024-06-05T16:35:07 |
2025-04-01T06:40:04.897158
|
{
"authors": [
"araistrick"
],
"repo": "princeton-vl/infinigen",
"url": "https://github.com/princeton-vl/infinigen/pull/251",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
668773451
|
Suggestions only given for first word in line
If I have two words underlined as misspelled and want to leave the first as is (without adding it to the dictionary), the suggestions remain for the first word even if the cursor is in the second.
Can anyone confirm?
Yes, this is how it currently works. I think that when the cursor is on a flagged word, the status line should show the suggestions for this word. I just proposed that navigating to a flagged word be implemented in #8. That should contribute to fixing this issue.
|
gharchive/issue
| 2020-07-30T14:11:20 |
2025-04-01T06:40:04.898828
|
{
"authors": [
"mardukbp",
"reagle"
],
"repo": "priner/micro-aspell-plugin",
"url": "https://github.com/priner/micro-aspell-plugin/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
946743911
|
Incorrect example queries in the next steps page of the MongoDB Getting Started guide
Context
The Next Steps page in the MongoDB getting started guide has a number of example queries under the subsection Expand for more Prisma Client API examples. However, most of these queries do not match the schema provided in the Creating the Prisma schema page of the same guide. This is true for both the javascript and typescript tutorials.
This might be confusing for new users.
Details
const filteredPosts = await prisma.post.findMany({
where: {
OR: [{ title: { contains: 'hello' } }, { content: { contains: 'hello' } }],
},
})
Tries to use the content field, which does not exist in post. Should be body.
const post = await prisma.post.create({
data: {
title: 'Join us for Prisma Day 2020',
author: {
connect: { email: 'alice@prisma.io' },
},
},
})
Tries to connect to the author relation field which does not exist in post. Should be user.
Missing mandatory fields slug and body
const posts = await prisma.profile
.findUnique({
where: { id: 1 },
})
.user()
.posts()
No model called profile in the schema.
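For illustration, a corrected version of the second example might look like the sketch below (assuming the guide's Post model has the mandatory slug and body fields and a user relation, as described above; the literal values are made up):
// Sketch of a corrected create query based on the fields named in this issue
// (slug, body, user); the example values are illustrative only.
const post = await prisma.post.create({
  data: {
    title: 'Join us for Prisma Day 2020',
    slug: 'join-us-for-prisma-day-2020', // mandatory field missing from the docs example
    body: 'Prisma Day is coming!',       // the field is `body`, not `content`
    user: {                              // the relation field is `user`, not `author`
      connect: { email: 'alice@prisma.io' },
    },
  },
})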
I would be happy to make a PR to fix this.
PR would be great indeed @TasinIshmam
Great, I'll update with a fix :smile: @janpio
|
gharchive/issue
| 2021-07-17T07:27:45 |
2025-04-01T06:40:04.906289
|
{
"authors": [
"TasinIshmam",
"janpio"
],
"repo": "prisma/docs",
"url": "https://github.com/prisma/docs/issues/2049",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
637621305
|
We should highlight that using prisma.raw() with parameters is not secure and recommend using prisma.raw``
Problem
Users are using prisma.raw() like
const data = await prisma.raw(
`SELECT * FROM "ProviderItemAttribute" WHERE "provider_item" = ${root.id} AND "user" = ${auth.user.id} limit 1;`,
);
This example uses the plain-text version of prisma.raw(), so there is no security around parameters.
Only raw`` is secure because it's using https://github.com/blakeembrey/sql-template-tag
Solution
In this case it would be recommended to do
const data = await prisma.raw`
SELECT * FROM "ProviderItemAttribute" WHERE "provider_item" = ${root.id} AND "user" = ${auth.user.id} limit 1;
`;
This should be highlighted in the docs (and examples?)
We can also think about how to warn users who are using prisma.raw(), or even disable it behind a flag.
Note: prisma.raw`` parameters do not currently work with PostgreSQL, see https://github.com/prisma/prisma-client-js/issues/595
I still have concerns that it's way too easy to accidentally use parentheses when you actually don't want to. Would it be possible to introduce a second method which only allows template literal inputs (e.g. raw`SELECT 1`) and disallows parentheses (e.g. raw(`SELECT 1`)), and a new method which just allows passing in data unescaped via a string, rawWithoutEscape (similar to React's dangerouslySetInnerHTML)?
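To illustrate that suggestion, a hypothetical sketch (not Prisma's actual API) of a helper that only accepts tagged template literals could look like this; the name raw and the return shape are assumptions for illustration:
// Hypothetical sketch (not Prisma's actual API): because the first parameter must
// be a TemplateStringsArray, raw(`SELECT 1`) is a compile-time error in TypeScript,
// while raw`SELECT 1` type-checks, and interpolated values never end up in the SQL text.
function raw(
  strings: TemplateStringsArray,
  ...values: unknown[]
): { text: string; values: unknown[] } {
  // Join the literal chunks with positional placeholders ($1, $2, ...) and keep the
  // interpolated values separate so the driver can bind them as parameters.
  const text = strings
    .slice(1)
    .reduce((acc, chunk, i) => `${acc}$${i + 1}${chunk}`, strings[0])
  return { text, values }
}

// OK:         raw`SELECT * FROM "User" WHERE id = ${userId}`
// Type error: raw(`SELECT * FROM "User" WHERE id = ${userId}`)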
IMO, this is not just a docs issue – highlighting it in the docs is NOT enough. In fact, the current docs are long outdated, which shows how many users actually read them before using a feature. It's not enough.
I just realised that when you let autocompletion do its job, it will default to the parentheses method, which means no escaping will be done. That's very dangerous
Then please open an issue in the appropriate place instead of talking to yourself in the docs repo @Luca :D
Could we just transfer it back to prisma/prisma? You moved it here in the first place.
I moved it here as it was tagged as devrel + docs (I assume, that is why things are moved here), and thus belongs here.
Include: https://prisma-company.slack.com/archives/C4GCG53BP/p1593513246334300
Docs are ✨
https://www.prisma.io/docs/reference/tools-and-interfaces/prisma-client/raw-database-access
|
gharchive/issue
| 2020-03-25T13:58:53 |
2025-04-01T06:40:04.913374
|
{
"authors": [
"Jolg42",
"janpio",
"mhwelander",
"steebchen"
],
"repo": "prisma/docs",
"url": "https://github.com/prisma/docs/issues/449",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1656601728
|
:cyclone: Move database disconnection to the finally block
Describe this PR
The purpose of this PR is to refactor the legacy code, which disconnects the database in two places:
1. In the success (then) phase of the promise.
2. In the failure (catch) phase of the promise.
It is possible to set await prisma.$disconnect() in the finally block of the code you provided. This ensures that the connection to the database is always properly closed, regardless of whether the then or catch block is executed.
Changes
I have added a new finally block and moved prisma.$disconnect() into it.
Before
main()
.then(async () => {
await prisma.$disconnect()
})
.catch(async (e) => {
console.error(e)
await prisma.$disconnect()
process.exit(1)
})
After
main()
.then(async () => {
console.log('Success');
})
.catch(async (e) => {
console.error(e);
process.exit(1);
})
.finally(async () => await prisma.$disconnect());
Unfortunately, after our docs migration, this is a change that would now need to be made on a number of new pages. If you'd like to re-submit, please let us know and we'd be happy to help!
|
gharchive/pull-request
| 2023-04-06T03:20:41 |
2025-04-01T06:40:04.917661
|
{
"authors": [
"alamenai",
"jharrell"
],
"repo": "prisma/docs",
"url": "https://github.com/prisma/docs/pull/4641",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
406639246
|
Fix/light theme display
Fixes #900 (not implementing an option to clear what accumulates in the subscription).
Changes proposed in this pull request:
Enable the displayed time to show correctly in the light theme.
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
Thanks for the PR! It will be in next release! 🚀
|
gharchive/pull-request
| 2019-02-05T05:47:26 |
2025-04-01T06:40:04.920499
|
{
"authors": [
"CLAassistant",
"Huvik",
"yoshiakis"
],
"repo": "prisma/graphql-playground",
"url": "https://github.com/prisma/graphql-playground/pull/957",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
658101526
|
Chaining relations
Problem
JS Client supports chained relations, but the Go client does not
const postsByUser: Post[] = await prisma.user
.findOne({ where: { email: 'alice@prisma.io' } })
.posts()
Suggested Solution
posts, err := client.User.FindOne(...).Posts(...).Exec(ctx)
Duplicate of https://github.com/prisma/prisma-client-go/issues/104, which was abandoned due to https://github.com/prisma/prisma-client-go/issues/104#issuecomment-624567858.
|
gharchive/issue
| 2020-07-16T11:15:35 |
2025-04-01T06:40:04.922792
|
{
"authors": [
"matthewmueller",
"steebchen"
],
"repo": "prisma/prisma-client-go",
"url": "https://github.com/prisma/prisma-client-go/issues/198",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1156629399
|
Order by relations does not work
Bug description
Order by relations does not work in version 2.21.2.
How to reproduce
My code:
prisma.product.findMany({
orderBy: {
analytics: {
id: 'acs'
}
}
})
Type '{ analytics: {}; }' is not assignable to type 'Enumerable<ProductOrderByInput>'.
Object literal may only specify known properties, and 'analytics' does not exist in type 'Enumerable<ProductOrderByInput>'
Expected behavior
No response
Prisma information
model Product {
id Int @id @default(autoincrement())
analytics Analytic[] @relation("fk_product_analytic")
@@map("products")
}
model Analytic {
id Int @id @default(autoincrement())
product_id Int
product Product @relation("fk_product_analytic", fields: [product_id], references: [id])
@@map(name:"product_analytics")
}
Environment & setup
Node: 14.16.0
Prisma Version
2.21.2
Does this work in a more recent version of Prisma?
Hi @NguyenKyThinh94,
Not sure if you’ve already managed to get around your issue (or if this will be helpful to anyone else reading this) but I was also seeing this error in v2.30.3.
In my situation, when I actually ran my app the order by relation was working but I was seeing the above type error. To get rid of the error I just cast my object to the expected type and it seemed to be happy enough with it.
So in your example, something along the lines of:
const orderBy = { analytics: { id: 'acs' } } as ProductOrderByInput
prisma.product.findMany({ orderBy })
Thanks
Instead of using the ORM, I wrote a SQL query!
Many thanks for the support.
I'm able to order by relations in v3.14
Not sure if trying to order by 'acs' instead of the correct 'asc' has anything to do with the issue.
const cyProjects = await db.projects.findMany({
select: {
ProjectNumber: true,
ProjectName: true,
Clients: {
select: {
ClientName: true,
ClientId: true,
},
},
},
where: {
ProjectNumber: { startsWith: '22' },
},
orderBy: {
Clients: { ClientId: 'asc' },
},
})
Please try this again with the latest version of Prisma. This preview feature has been stabilized since the version that you are using. @NguyenKyThinh94
There are a few things in this issue...
First, please upgrade to the latest Prisma version (4.3.1 currently) to use the feature.
Second, the only ordering you can do from a many side of the relation is by _count, so this would work:
await prisma.product.findMany({
orderBy: {
analytics: {
_count: "asc",
},
},
});
Which translates to the following SQL:
SELECT "public"."products"."id"
FROM "public"."products"
LEFT JOIN (SELECT "public"."product_analytics"."product_id", COUNT(*) AS "orderby_aggregator"
FROM "public"."product_analytics"
WHERE 1 = 1
GROUP BY "public"."product_analytics"."product_id") AS "orderby_0_Analytic"
ON ("public"."products"."id" = "orderby_0_Analytic"."product_id")
WHERE 1 = 1
ORDER BY COALESCE("orderby_0_Analytic"."orderby_aggregator", $1) ASC
OFFSET $2
But you can't really order by a specific column from the many side, that only works from the one side:
await prisma.analytic.findMany({
orderBy: {
product: {
id: "asc",
},
},
});
And this translates to the following SQL:
SELECT "public"."product_analytics"."id", "public"."product_analytics"."product_id"
FROM "public"."product_analytics"
LEFT JOIN "public"."products" AS "orderby_0_Product"
ON ("public"."product_analytics"."product_id" = "orderby_0_Product"."id")
WHERE 1 = 1
ORDER BY "orderby_0_Product"."id" ASC
OFFSET $1
So, from the many side, the parent can have more than one child. There would not really be a good way to order the parents by the ids of more than one child. From the one side, there can be at most one parent, so we have a way to order by a column of this parent.
Hello @pimeys ,
So, how do you make the select below work in Prisma?
SELECT ch.* FROM challenges ch
JOIN user_challenges uc
ON uc.challenge_id = ch.id
WHERE uc.user_id = '************'
AND uc.completed = true
AND uc.is_current = true
ORDER BY uc.end_date DESC
My model looks like this (reduced).
model Challenge {
...
user_challenges UserChallenges[]
...
}
model UserChallenges {
...
user_id String @db.Uuid
completed Boolean @default(false)
is_current Boolean @default(true)
end_date DateTime? @db.Timestamptz(6)
challenge Challenge @relation(fields: [challenge_id], references: [id])
...
}
And this here doesn't really work.
const challengesFound = await prisma.challenge.findMany({
where: {
user_challenges: {
some: {
user_id,
completed: true,
is_current: true,
},
},
},
orderBy: {
user_challenges: {
end_date: 'desc',
},
},
});
But look, I'm restricting it so that there's only one result from the JOIN. Is there really no alternative (other than just doing the raw query)?
@fabriciosautner I believe you can do it in reverse through user_challenges, making it distinct by challengeId and ordering it, then selecting the challenge from user_challenges and returning the values with a map:
const result = await prisma.user_challenges.findMany({
// In order to only get one `challenge` per `user_challenges`
distinct: 'challengeId',
// order the `user_challenges.end_date` by desc
orderBy: {
end_date: 'desc',
},
// includes the `challenge` in the query
select: {
challenge: true,
},
// your query
where: {
user_id,
completed: true,
is_current: true,
},
});
const challengesFound = result.map(r => r.challenge)
|
gharchive/issue
| 2022-03-02T07:14:30 |
2025-04-01T06:40:04.934803
|
{
"authors": [
"M15sy",
"NguyenKyThinh94",
"arielvieira",
"fabriciosautner",
"janpio",
"jonrcrowell",
"pantharshit00",
"pimeys"
],
"repo": "prisma/prisma",
"url": "https://github.com/prisma/prisma/issues/12108",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1814230988
|
called Option::unwrap() on a None value
Hi Prisma Team! My Prisma Client just crashed. This is the report:
Versions
| Name | Version |
|---|---|
| Node | v20.4.0 |
| OS | debian-openssl-3.0.x |
| Prisma Client | in-memory |
| Query Engine | 6b0aef69b7cdfc787f822ecd7cdc76d5f1991584 |
| Database | mongodb |
Logs
prisma:client:libraryEngine sending request, this.libraryStarted: false
prisma:client:libraryEngine library starting
prisma:client:libraryEngine sending request, this.libraryStarted: false
prisma:client:libraryEngine library already starting, this.libraryStarted: false
prisma:client:libraryEngine library started
Client Snippet
// PLEASE FILL YOUR CODE SNIPPET HERE
Schema
// PLEASE ADD YOUR SCHEMA HERE IF POSSIBLE
Prisma Engine Query
{"X":true}}}
My schema:
model PromotionBlacklist {
id String @id @default(auto()) @map("_id") @db.ObjectId
userId String @unique
indexData IndexData[]
}
model IndexData {
id String @id @default(auto()) @map("_id") @db.ObjectId
channelId String
index Int
PromotionBlacklist PromotionBlacklist @relation(fields: [userId], references: [userId])
userId String
@@unique([userId, channelId])
}
I got that in Prisma Studio while viewing a model. So I'd say it should be findMany() that caused the error. Also, what does it mean to give a partial dataset? And how do I do that?
I suppose this error originated because of a certain type of data in one of your database rows which does not match the provided schema. These are usually hard to reproduce, so we would appreciate it if you could help us reproduce it. Do you get this error consistently?
Yes, I do get this consistently. And I'd love to help you reproduce, can you guide me through?
Sure! The first thing I would do is to isolate which model produces this error. Ideally, we would get down to a simple prisma.<model>.findMany() query in a new ts/js file; that should be quick enough to try. I'd write one for each model and execute all of them to find out. If that isolates a specific table, then we need to look at the data of that table and go deeper.
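A minimal isolation script along those lines might look like this (a sketch assuming the generated client and the two models from the schema above):
// Sketch: run findMany() on each model separately to see which one triggers the
// engine panic. Assumes the PromotionBlacklist and IndexData models shown above.
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  try {
    const rows = await prisma.promotionBlacklist.findMany()
    console.log(`PromotionBlacklist: ok (${rows.length} rows)`)
  } catch (e) {
    console.error('PromotionBlacklist: failed', e)
  }

  try {
    const rows = await prisma.indexData.findMany()
    console.log(`IndexData: ok (${rows.length} rows)`)
  } catch (e) {
    console.error('IndexData: failed', e)
  }
}

main().finally(() => prisma.$disconnect())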
Hey @GoodBoyNeon, did you see Pierre's response above? We would like to help you out here, but need your help in reproducing this so we can understand, reproduce and hopefully fix. Thanks.
I am not able to reproduce the issue; it suddenly got fixed. I will let you know if there are any updates.
Thanks @GoodBoyNeon, in that case I am going to close the issue. Thank you for getting back to us!
|
gharchive/issue
| 2023-07-20T15:10:45 |
2025-04-01T06:40:04.942821
|
{
"authors": [
"GoodBoyNeon",
"SevInf",
"janpio",
"millsp"
],
"repo": "prisma/prisma",
"url": "https://github.com/prisma/prisma/issues/20312",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
324003061
|
Support for Vitess
Feature Request
What feature are you missing?
Support for Vitess (https://vitess.io/)
How could this feature look like in detail? Tradeoffs?
Luckily, this would be very similar to MySQL support, with some extra work on top. Vitess is the MySQL-based sharding framework used by Google.
And also support Citus (https://www.citusdata.com/)
Seems like nobody cares about making Prisma extensible by adding more connectors...
How can that be?
In order for my team to continue using Prisma, we need support for at least one Cloud Native (Kubernetes) DB. Either this or CockroachDB. I strongly believe there needs to be an emphasis put on this.
Or, at the very least, an article describing best practices using Prisma with Kubernetes, including the database aspect. Prisma itself works great on Kubernetes. But I want to know how to best host the database on Kubernetes as well
Vitess supports the MySQL protocol, so it should just work. I think there might be an issue with the old MariaDB driver (https://github.com/vitessio/vitess/issues/4603). I'm hopeful that the switch to the MySQL driver in this PR https://github.com/prisma/prisma/pull/3759 will solve the connection problem.
@netspencer can you share your solution for deploying Prisma on Kubernetes?
|
gharchive/issue
| 2018-05-17T12:33:53 |
2025-04-01T06:40:04.947546
|
{
"authors": [
"derekperkins",
"marcus-sa",
"meodemsao",
"netspencer",
"tslater"
],
"repo": "prisma/prisma",
"url": "https://github.com/prisma/prisma/issues/2451",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
408124983
|
Go client: prisma.Client.GraphQL() unmarshal result into struct
When I run a custom query using GraphQL() I always have to use libraries like mitchellh/mapstructure to bind map values to a structure.
Adding a parameter to GraphQL() (resp interface{}) would allow decoding data directly into my struct, because machinebox/graphql already supports decoding into a struct.
The alternative implementation would be
func (client *Client) GraphQL(ctx context.Context, query string, variables map[string]interface{}, resp interface{}) error {
req := graphql.NewRequest(query)
if client.Secret != "" {
req.Header.Add("Authorization", "Bearer "+client.Secret)
}
for key, value := range variables {
req.Var(key, value)
}
return client.GQLClient.Run(ctx, req, resp)
}
This implementation works with both a map and a struct:
var m map[string]interface{}
err := client.GraphQL(ctx, query, vars, &m)
var s MyStruct
err := client.GraphQL(ctx, query, vars, &s)
I agree with this. In particular, it is in line with the API I had in mind for raw database queries.
This is a backwards incompatible change, but I suspect we're fine with that. @timsuchanek?
I use this method to expose the feature at this point:
// Run sends a GraphQL operation request
func Run(client *prisma.Client, ctx context.Context, query string, variables map[string]interface{}, resp interface{}) error {
req := graphql.NewRequest(query)
if client.Secret != "" {
req.Header.Add("Authorization", "Bearer "+client.Secret)
}
for key, value := range variables {
req.Var(key, value)
}
if err := client.GQLClient.Run(ctx, req, &resp); err != nil {
return err
}
return nil
}
we could easily integrate it into the core lib without a breaking change, as shown in https://github.com/prisma/prisma-client-lib-go/pull/5
|
gharchive/issue
| 2019-02-08T12:07:07 |
2025-04-01T06:40:04.951353
|
{
"authors": [
"chris-rock",
"dominikh",
"karrim"
],
"repo": "prisma/prisma",
"url": "https://github.com/prisma/prisma/issues/4023",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
777509431
|
Panic: Did not find a relation for model
I'm using prisma 2.13.1.
model Movie {
id String @id @default(cuid())
title String
slug String @unique
synopsis String
year Int
runtime Int
imdb String @unique
rating Float
poster String
genres String[]
cast ActorsOnMovies[] @relation("ActorsOnMovies")
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
model Actor {
id String @id @default(cuid())
imdb String @unique
name String
movies ActorsOnMovies[] @relation("ActorsOnMovies")
createdAt DateTime @default(now())
updtedAt DateTime @updatedAt
}
model ActorsOnMovies {
actorId String
actor Actor @relation("ActorsOnMovies", fields: [actorId], references: [id])
movieId String
movie Movie @relation("ActorsOnMovies", fields: [movieId], references: [id])
character String
@@id([actorId, movieId])
}
Then I run prisma db push --force --preview-feature and I get:
Running generate... (Use --skip-generate to skip the generators)
Error: Schema parsing
thread 'main' panicked at 'Did not find a relation for model Actor and field movies', libs/prisma-models/src/datamodel_converter.rs:80:29
stack backtrace:
0: _rust_begin_unwind
1: std::panicking::begin_panic_fmt
2: prisma_models::datamodel_converter::DatamodelConverter::convert_fields::{{closure}}::{{closure}}
3: <core::iter::adapters::Map<I,F> as core::iter::traits::iterator::Iterator>::fold
4: <core::iter::adapters::Map<I,F> as core::iter::traits::iterator::Iterator>::fold
5: prisma_models::datamodel_converter::DatamodelConverter::convert
6: query_engine::main::main::{{closure}}::main::{{closure}}
7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
8: std::thread::local::LocalKey<T>::with
9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
10: async_io::driver::block_on
11: tokio::runtime::context::enter
12: tokio::runtime::handle::Handle::enter
13: std::thread::local::LocalKey<T>::with
14: std::thread::local::LocalKey<T>::with
15: async_std::task::builder::Builder::blocking
16: query_engine::main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Thanks for reporting this!
To me it looks like that datamodel should not have passed validation in the first place. The relation name ActorsOnMovies is used four times. This is invalid since it should uniquely identify one relation between two fields.
I'll see whether we need to adjust our validation steps to detect this earlier.
@do4gr The relation name is indeed used four times, but I didn't do that manually; it was auto-generated by the prisma formatter. The only thing I changed was the field names.
Probably several bugs: formatter as well as schema validator.
Hi, I had a similar issue. Migrations are generated correctly.
Error
Prisma schema loaded from prisma/schema.prisma Error: Schema parsing thread 'main' panicked at 'Did not find a relation for model mldata and field alternativelmdata', libs/prisma-models/src/datamodel_converter.rs:80:29 stack backtrace: 0: backtrace::backtrace::libunwind::trace at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/libunwind.rs:86 1: backtrace::backtrace::trace_unsynchronized at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/mod.rs:66 2: std::sys_common::backtrace::_print_fmt at src/libstd/sys_common/backtrace.rs:78 3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt at src/libstd/sys_common/backtrace.rs:59 4: core::fmt::write at src/libcore/fmt/mod.rs:1076 5: std::io::Write::write_fmt at src/libstd/io/mod.rs:1537 6: std::sys_common::backtrace::_print at src/libstd/sys_common/backtrace.rs:62 7: std::sys_common::backtrace::print at src/libstd/sys_common/backtrace.rs:49 8: std::panicking::default_hook::{{closure}} at src/libstd/panicking.rs:198 9: std::panicking::default_hook at src/libstd/panicking.rs:217 10: std::panicking::rust_panic_with_hook at src/libstd/panicking.rs:526 11: rust_begin_unwind at src/libstd/panicking.rs:437 12: std::panicking::begin_panic_fmt at src/libstd/panicking.rs:391 13: prisma_models::datamodel_converter::DatamodelConverter::convert_fields::{{closure}}::{{closure}} 14: <core::iter::adapters::Map<I,F> as core::iter::traits::iterator::Iterator>::fold 15: <core::iter::adapters::Map<I,F> as core::iter::traits::iterator::Iterator>::fold 16: prisma_models::datamodel_converter::DatamodelConverter::convert 17: query_engine::main::main::{{closure}}::main::{{closure}} 18: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 19: std::thread::local::LocalKey<T>::with 20: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 21: async_io::driver::block_on 22: tokio::runtime::context::enter 23: async_global_executor::reactor::block_on 24: std::thread::local::LocalKey<T>::with 25: async_std::task::builder::Builder::blocking 26: query_engine::main 27: std::rt::lang_start::{{closure}} 28: std::rt::lang_start_internal::{{closure}} at src/libstd/rt.rs:52 29: std::panicking::try::do_call at src/libstd/panicking.rs:348 30: std::panicking::try at src/libstd/panicking.rs:325 31: std::panic::catch_unwind at src/libstd/panic.rs:394 32: std::rt::lang_start_internal at src/libstd/rt.rs:51 33: main
Schema
`model mldata {
id Int @default(autoincrement()) @id
job job
pod assetpod
service service
start Int? //changed to BigInt in migration
endf Int? //changed to BigInt in migration
content String? //changed to Text in migration
confidence Float?
parents parentmldata[] @relation(name: "ParentMlData")
alternative alternativelmdata[] @relation(name: "AlternativeMlData")
createdat DateTime @default(now())
updatedat DateTime @updatedAt @default(now())
}
model parentmldata {
id Int @default(autoincrement()) @id
mldataid Int
mldata mldata @relation(name: "TheMlData", fields: [mldataid], references: [id])
parentid Int
parent mldata @relation(name: "ParentMlData", fields: [parentid], references: [id])
}
model alternativelmdata {
id Int @default(autoincrement()) @id
mldataid Int
mldata mldata @relation(name: "TheMlData", fields: [mldataid], references: [id])
alternativeid Int
alternative mldata @relation(name: "AlternativeMlData", fields: [alternativeid], references: [id])
}`
We improved validation and the formatter a lot in the last few releases. Please try again with 2.18.0 or later, and open a new issue if you notice anything wrong. Thanks for reporting!
generator client {
provider = "prisma-client-js"
previewFeatures = ["mongoDb", "fullTextSearch"]
}
datasource db {
provider = "mongodb"
url = env("DATABASE_URL")
}
model editedmessages {
id String @id @default(auto()) @map("_id") @db.ObjectId
userId String
oldMessage String
newMessage String
editDate String
messageID String
user users @relation("users", fields: [userId], references: [userId], onDelete: NoAction, onUpdate: NoAction)
users users[]
}
model stafftags {
id String @id @default(auto()) @map("_id") @db.ObjectId
v Int @map("__v")
fullMessage String
type String[]
userId String
}
model statistics {
id String @id @default(auto()) @map("_id") @db.ObjectId
v Int @map("__v")
date String
total Int
type String
}
model users {
id String @id @default(auto()) @map("_id") @db.ObjectId
v Int @map("__v")
avatarURL String
firstMessage String
lastMessage String?
userId String @unique(map: "userId_1")
username String @unique(map: "username_1")
statistics userstatistics @relation(fields: [userId], references: [userId])
userstatistics userstatistics[] @relation("users")
message editedmessages @relation(fields: [userId], references: [userId])
editedmessages editedmessages[] @relation("users")
}
model userstatistics {
id String @id @default(auto()) @map("_id") @db.ObjectId
v Int @map("__v")
total Int
userId String
user users @relation("users", fields: [userId], references: [userId], onDelete: NoAction, onUpdate: NoAction)
users users[]
}
model webusers {
id String @id @default(auto()) @map("_id") @db.ObjectId
username String
password String
email String
admin Boolean
}
Throws an error,
❯ npx prisma format
Prisma schema loaded from prisma\schema.prisma
thread '<unnamed>' panicked at 'Did not find a relation for model users and field editedmessages', query-engine\prisma-models\src\builders\internal_dm_builder.rs:122:25
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Error: Unexpected token D in JSON at position 0
I split this into a new issue @schwarzsky: https://github.com/prisma/prisma/issues/13193
|
gharchive/issue
| 2021-01-02T19:05:57 |
2025-04-01T06:40:04.972396
|
{
"authors": [
"albertoperdomo",
"brielov",
"do4gr",
"janpio",
"mic1983",
"schwarzsky",
"tomhoule"
],
"repo": "prisma/prisma",
"url": "https://github.com/prisma/prisma/issues/4854",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
912243476
|
MSSQL Prisma: ConnectorError { user_facing_error: None, kind: QueryError(Utf8)
Bug description
The background is here: https://github.com/prisma/prisma/discussions/7475
In short: I introspected an already existing database on localhost via Prisma (it was created earlier from a .bak file with MS SQL Server Studio), tried to make a basic query/GraphQL type to start from the example, and am now receiving an error while querying:
Invalid `prisma.people_List.findMany()` invocation:

Error occurred during query execution:
ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Utf8) })
Prisma information
generator client {
provider = "prisma-client-js"
previewFeatures = ["microsoftSqlServer"]
}
datasource db {
provider = "sqlserver"
url = env("DATABASE_URL")
}
model people_List {
id_People Int @id @default(autoincrement())
Surname String @db.VarChar(30)
Name String @db.VarChar(30)
**[A lot of other columns] too lazy to describe**
}
Environment & setup
OS: Windows 10 [RU]
Database: MSSQL
Node.js version: 15.8
Can you post the full log with https://www.prisma.io/docs/concepts/components/prisma-client/working-with-prismaclient/logging enabled and DEBUG=* set?
What version of Prisma are you using? (The template usually asks for the prisma -v output)
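For reference, the client-side logging mentioned above can be enabled when constructing the client, roughly like this (a sketch using the standard PrismaClient log option; pick whichever levels you need):
// Sketch: enable Prisma Client logging so queries, warnings, and errors show up
// alongside the DEBUG output requested above.
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient({
  log: ['query', 'info', 'warn', 'error'],
})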
prisma: 2.24.1
prisma/client: 2.24.1
I am a bit lost - where to set this debug?
It is an environment variable that you can set, sorry I should have explained that better. See https://www.prisma.io/docs/concepts/components/prisma-client/debugging#overview (On windows you actually use set DEBUG=* command)
No, I understood that you're referring to this page, but I didn't understand from it where exactly I must write this command.
In the command line before you run the command, so probably some terminal or console. It sets the environment variable for when you run your node script (or start your web server etc).
I kinda just write 'npm run dev' in cmd on Windows. I already tried to write export DEBUG="*" via cmd, but it didn't work.
As I said above, on Windows it is set DEBUG=* that you run first, then npm run dev. (I opened an issue to update our docs with Windows instructions)
Well, it kinda ran; there's a lot of junk:
prisma:engine stdout Unknown error +1ms
prisma:query SELECT [dbo].[people_List].[id_People], [dbo].[people_List].[Surname], [dbo].[people_List].[Name], [dbo].[people_List].[Patronymic], [dbo].[people_List].[OldSurname], [dbo].[people_List].[OldName], [dbo].[people_List].[OldPatronymic], [dbo].[people_List].[Marriage], [dbo].[people_List].[Sex], [dbo].[people_List].[BornDate], [dbo].[people_List].[BornPlace], [dbo].[people_List].[id_Doc], [dbo].[people_List].[DocSerial], [dbo].[people_List].[DocNo], [dbo].[people_List].[DocDistributed], [dbo].[people_List].[DocDate], [dbo].[people_List].[DocDepartmentCode], [dbo].[people_List].[id_Sitizen], [dbo].[people_List].[Photo], [dbo].[people_List].[Other], [dbo].[people_List].[is_webimported], [dbo].[people_List].[id_web], [dbo].[people_List].[FIO], [dbo].[people_List].[FIO2], [dbo].[people_List].[msrepl_tran_version], [dbo].[people_List].[SurnameRP], [dbo].[people_List].[NameRP], [dbo].[people_List].[PatronymicRP], [dbo].[people_List].[id_Nationality_old], [dbo].[people_List].[id_Nationality], [dbo].[people_List].[UID], [dbo].[people_List].[UID_stat], [dbo].[people_List].[UID_sok], [dbo].[people_List].[UID_zo], [dbo].[people_List].[INN_old], [dbo].[people_List].[SSN], [dbo].[people_List].[INN], [dbo].[people_List].[tabNumber] FROM [dbo].[people_List] WHERE 1=1
prisma:engine stdout Fetched a connection from the pool +1ms
prisma:engine stdout Flushing unhandled packet from the wire. Please consume your streams! +0ms
prisma:warn Flushing unhandled packet from the wire. Please consume your streams!
prisma:engine stdout Flushing unhandled packet from the wire. Please consume your streams! +1ms
prisma:warn Flushing unhandled packet from the wire. Please consume your streams!
prisma:engine stdout Flushing unhandled packet from the wire. Please consume your streams! +1ms
prisma:warn Flushing unhandled packet from the wire. Please consume your streams!
prisma:engine {
prisma:engine error: PrismaClientUnknownRequestError2 [PrismaClientUnknownRequestError]: Error occurred during query execution:
I have also checked my database on my server (localhost), and the server collation (encoding) seems to be Cyrillic_General_CI_AS. Could this be an issue?
It probably should not be, but considering that SQL Server is a preview feature, it is indeed possible that we do not support that properly yet.
Is this something that needs to be set on the server, database, or table level? If you can share some code showing how to achieve this on a stock SQL Server, we might be able to reproduce this and hopefully understand and fix it.
You mean the collation (encoding)? It was (I guess) set automatically when installing MS SQL Server and MS SQL Server Studio. But I suppose it can be changed, at least via MS SQL Server Studio, by right-clicking and setting the collation parameters of the database.
kinda "full" log, opened cmd - npm run dev - made query (on playground) - stopped server
nodemon bus new listener: reset (0) +0ms
nodemon bus new listener: reset (0) +1ms
nodemon bus new listener: quit (0) +21ms
nodemon bus new listener: quit (0) +0ms
nodemon bus new listener: restart (0) +1ms
nodemon bus new listener: restart (0) +0ms
nodemon bus new listener: reset (2) +1ms
nodemon bus emit: reset +1ms
nodemon resetting watchers +0ms
nodemon reset +0ms
nodemon config: dirs [ 'C:\\Projects\\Diplom_Work\\graphql-prisma-svfu' ] +0ms
[nodemon] 2.0.7
[nodemon] to restart at any time, enter `rs`
nodemon bus new listener: error (0) +38ms
nodemon bus new listener: error (0) +0ms
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node src/server.js`
nodemon:run fork C:\Windows\system32\cmd.exe /d /s /c node src/server.js +0ms
nodemon bus new listener: exit (0) +10ms
nodemon bus new listener: exit (0) +0ms
nodemon:run start watch on: [ '*.*', re: /.*\..*/ ] +1ms
nodemon start watch on: C:\Projects\Diplom_Work\graphql-prisma-svfu +49ms
nodemon ignored [
'**/.git/**',
'**/.nyc_output/**',
'**/.sass-cache/**',
'**/bower_components/**',
'**/coverage/**',
'**/node_modules/**',
re: /.*.*\/\.git\/.*.*|.*.*\/\.nyc_output\/.*.*|.*.*\/\.sass\-cache\/.*.*|.*.*\/bower_components\/.*.*|.*.*\/coverage\/.*.*|.*.*\/node_modules\/.*.*/
] +1ms
nodemon:watch chokidar watching: C:\Projects\Diplom_Work\graphql-prisma-svfu\package-lock.json +0ms
nodemon:watch chokidar watching: C:\Projects\Diplom_Work\graphql-prisma-svfu\package.json +1ms
nodemon:watch chokidar watching: C:\Projects\Diplom_Work\graphql-prisma-svfu\README.md +1ms
nodemon:watch chokidar watching: C:\Projects\Diplom_Work\graphql-prisma-svfu\schema.graphql +0ms
nodemon:watch chokidar watching: C:\Projects\Diplom_Work\graphql-prisma-svfu\prisma\schema.prisma +4ms
nodemon:watch chokidar watching: C:\Projects\Diplom_Work\graphql-prisma-svfu\src\context.js +1ms
nodemon:watch chokidar watching: C:\Projects\Diplom_Work\graphql-prisma-svfu\src\schema.js +0ms
nodemon:watch chokidar watching: C:\Projects\Diplom_Work\graphql-prisma-svfu\src\server.js +1ms
nodemon:watch chokidar watching: C:\Projects\Diplom_Work\graphql-prisma-svfu\src\generated\nexus.ts +1ms
nodemon watch is complete +20ms
prisma:tryLoadEnv Environment variables loaded from C:\Projects\Diplom_Work\graphql-prisma-svfu\.env +0ms
[dotenv][DEBUG] did not match key and value when parsing line 1: # Environment variables declared in this file are automatically made available to Prisma.
[dotenv][DEBUG] did not match key and value when parsing line 2: # See the documentation for more detail: https://pris.ly/d/prisma-schema#using-environment-variables
[dotenv][DEBUG] did not match key and value when parsing line 3:
[dotenv][DEBUG] did not match key and value when parsing line 4: # Prisma supports the native connection string format for PostgreSQL, MySQL and SQLite.
[dotenv][DEBUG] did not match key and value when parsing line 5: # See the documentation for all the connection string options: https://pris.ly/d/connection-strings
[dotenv][DEBUG] did not match key and value when parsing line 6:
prisma:tryLoadEnv Environment variables not found at C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\.prisma\client\.env +5ms
prisma:tryLoadEnv Environment variables not found at C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\.prisma\client\.env +0ms
prisma:tryLoadEnv No Environment variables loaded +0ms
prisma:client clientVersion: 2.24.1 +0ms
prisma:engine Search for Query Engine in C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\.prisma\client +0ms
express:application set "x-powered-by" to true +0ms
express:application set "etag" to 'weak' +1ms
express:application set "etag fn" to [Function: generateETag] +0ms
express:application set "env" to 'development' +1ms
express:application set "query parser" to 'extended' +0ms
express:application set "query parser fn" to [Function: parseExtendedQueryString] +0ms
express:application set "subdomain offset" to 2 +1ms
express:application set "trust proxy" to false +0ms
express:application set "trust proxy fn" to [Function: trustNone] +0ms
express:application booting in development mode +0ms
express:application set "view" to [Function: View] +1ms
express:application set "views" to 'C:\\Projects\\Diplom_Work\\graphql-prisma-svfu\\views' +0ms
express:application set "jsonp callback name" to 'callback' +0ms
express:application set "x-powered-by" to false +1ms
express:router use '/.well-known/apollo/server-health' <anonymous> +0ms
express:router:layer new '/.well-known/apollo/server-health' +0ms
express:router use '/' corsMiddleware +1ms
express:router:layer new '/' +0ms
express:router use '/' jsonParser +1ms
express:router:layer new '/' +0ms
express:router use '/' <anonymous> +0ms
express:router:layer new '/' +0ms
express:router use '/' <anonymous> +1ms
express:router:layer new '/' +0ms
express:router use '/' query +0ms
express:router:layer new '/' +1ms
express:router use '/' expressInit +0ms
express:router:layer new '/' +0ms
express:router use '/' router +1ms
express:router:layer new '/' +0ms
🚀 Server ready at: http://localhost:4000/
⭐️ See sample queries: http://pris.ly/e/js/graphql#using-the-graphql-api
express:router dispatching POST / +220ms
express:router query : / +1ms
express:router expressInit : / +1ms
express:router router : / +1ms
express:router dispatching POST / +1ms
express:router corsMiddleware : / +1ms
express:router jsonParser : / +2ms
body-parser:json content-type "application/json" +0ms
body-parser:json content-encoding "identity" +3ms
body-parser:json read body +0ms
body-parser:json parse body +15ms
body-parser:json parse json +0ms
express:router <anonymous> : / +0ms
express:router <anonymous> : / +1ms
express:router dispatching POST / +2s
express:router query : / +0ms
express:router expressInit : / +1ms
express:router router : / +0ms
express:router dispatching POST / +0ms
express:router corsMiddleware : / +1ms
express:router jsonParser : / +0ms
body-parser:json content-type "application/json" +0ms
body-parser:json content-encoding "identity" +0ms
body-parser:json read body +1ms
body-parser:json parse body +0ms
body-parser:json parse json +0ms
express:router <anonymous> : / +1ms
express:router <anonymous> : / +0ms
prisma:client Prisma Client call: +2s
prisma:client prisma.people_List.findMany(undefined) +0ms
prisma:client Generated request: +0ms
prisma:client query {
prisma:client findManypeople_List {
prisma:client id_People
prisma:client Surname
prisma:client Name
prisma:client Patronymic
prisma:client OldSurname
prisma:client OldName
prisma:client OldPatronymic
prisma:client Marriage
prisma:client Sex
prisma:client BornDate
prisma:client BornPlace
prisma:client id_Doc
prisma:client DocSerial
prisma:client DocNo
prisma:client DocDistributed
prisma:client DocDate
prisma:client DocDepartmentCode
prisma:client id_Sitizen
prisma:client Photo
prisma:client Other
prisma:client is_webimported
prisma:client id_web
prisma:client FIO
prisma:client FIO2
prisma:client msrepl_tran_version
prisma:client SurnameRP
prisma:client NameRP
prisma:client PatronymicRP
prisma:client id_Nationality_old
prisma:client id_Nationality
prisma:client UID
prisma:client UID_stat
prisma:client UID_sok
prisma:client UID_zo
prisma:client INN_old
prisma:client SSN
prisma:client INN
prisma:client tabNumber
prisma:client }
prisma:client }
prisma:client +0ms
prisma:engine { cwd: 'C:\\Projects\\Diplom_Work\\graphql-prisma-svfu\\prisma' } +2s
prisma:engine Search for Query Engine in C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\.prisma\client +1ms
prisma:engine { flags: [ '--enable-raw-queries', '--port', '10239' ] } +2ms
prisma:engine stdout Starting a mssql pool with 17 connections. +53ms
prisma:info Starting a mssql pool with 17 connections.
prisma:engine stdout Performing a TLS handshake +13ms
prisma:info Performing a TLS handshake
prisma:engine stdout Trusting the server certificate without validation. +1ms
prisma:warn Trusting the server certificate without validation.
prisma:engine stdout TLS handshake successful +32ms
prisma:info TLS handshake successful
prisma:engine stdout Turning TLS off after a login. All traffic from here on is not encrypted. +1ms
prisma:warn Turning TLS off after a login. All traffic from here on is not encrypted.
prisma:engine stdout Database change from 'iisuss' to 'master' +0ms
prisma:info Database change from 'iisuss' to 'master'
prisma:engine stdout Контекст базы данных изменен на "iisuss". (Database context changed to "iisuss".) +1ms
prisma:info Контекст базы данных изменен на "iisuss". (Database context changed to "iisuss".)
prisma:engine stdout SQL collation change from None to None +1ms
prisma:info SQL collation change from None to None
prisma:engine stdout Microsoft SQL Server version 3490119695 +0ms
prisma:info Microsoft SQL Server version 3490119695
prisma:engine stdout Packet size change from '4096' to '4096' +1ms
prisma:info Packet size change from '4096' to '4096'
prisma:engine stdout Fetched a connection from the pool +0ms
prisma:engine stdout Started http server on http://127.0.0.1:10239 +1ms
prisma:info Started http server on http://127.0.0.1:10239
prisma:engine Search for Query Engine in C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\.prisma\client +10ms
prisma:engine stdout Fetched a connection from the pool +10ms
prisma:engine stdout Unknown error +1ms
prisma:query SELECT [dbo].[people_List].[id_People], [dbo].[people_List].[Surname], [dbo].[people_List].[Name], [dbo].[people_List].[Patronymic], [dbo].[people_List].[OldSurname], [dbo].[people_List].[OldName], [dbo].[people_List].[OldPatronymic], [dbo].[people_List].[Marriage], [dbo].[people_List].[Sex], [dbo].[people_List].[BornDate], [dbo].[people_List].[BornPlace], [dbo].[people_List].[id_Doc], [dbo].[people_List].[DocSerial], [dbo].[people_List].[DocNo], [dbo].[people_List].[DocDistributed], [dbo].[people_List].[DocDate], [dbo].[people_List].[DocDepartmentCode], [dbo].[people_List].[id_Sitizen], [dbo].[people_List].[Photo], [dbo].[people_List].[Other], [dbo].[people_List].[is_webimported], [dbo].[people_List].[id_web], [dbo].[people_List].[FIO], [dbo].[people_List].[FIO2], [dbo].[people_List].[msrepl_tran_version], [dbo].[people_List].[SurnameRP], [dbo].[people_List].[NameRP], [dbo].[people_List].[PatronymicRP], [dbo].[people_List].[id_Nationality_old], [dbo].[people_List].[id_Nationality], [dbo].[people_List].[UID], [dbo].[people_List].[UID_stat], [dbo].[people_List].[UID_sok], [dbo].[people_List].[UID_zo], [dbo].[people_List].[INN_old], [dbo].[people_List].[SSN], [dbo].[people_List].[INN], [dbo].[people_List].[tabNumber] FROM [dbo].[people_List] WHERE 1=1
prisma:engine {
prisma:engine error: PrismaClientUnknownRequestError2 [PrismaClientUnknownRequestError]: Error occurred during query execution:
prisma:engine ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Utf8) })
prisma:engine at NodeEngine.graphQLToJSError (C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\@prisma\client\runtime\index.js:28377:14)
prisma:engine at NodeEngine.request (C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\@prisma\client\runtime\index.js:28273:24)
prisma:engine at processTicksAndRejections (node:internal/process/task_queues:94:5)
prisma:engine at async cb (C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\@prisma\client\runtime\index.js:35193:26) {
prisma:engine clientVersion: '2.24.1'
prisma:engine }
prisma:engine } +2ms
prisma:engine stdout Fetched a connection from the pool +2ms
prisma:engine stdout Flushing unhandled packet from the wire. Please consume your streams! +0ms
prisma:warn Flushing unhandled packet from the wire. Please consume your streams!
prisma:engine {
prisma:engine error: PrismaClientUnknownRequestError2 [PrismaClientUnknownRequestError]: Error occurred during query execution:
prisma:engine ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Utf8) })
prisma:engine at NodeEngine.graphQLToJSError (C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\@prisma\client\runtime\index.js:28377:14)
prisma:engine at NodeEngine.request (C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\@prisma\client\runtime\index.js:28273:24)
prisma:engine at processTicksAndRejections (node:internal/process/task_queues:94:5)
prisma:engine at async cb (C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\@prisma\client\runtime\index.js:35193:26) {
prisma:engine clientVersion: '2.24.1'
prisma:engine }
prisma:engine } +1ms
prisma:engine stdout Flushing unhandled packet from the wire. Please consume your streams! +1ms
prisma:warn Flushing unhandled packet from the wire. Please consume your streams!
prisma:engine stdout Flushing unhandled packet from the wire. Please consume your streams! +0ms
prisma:warn Flushing unhandled packet from the wire. Please consume your streams!
prisma:engine stdout Unknown error +1ms
prisma:query SELECT [dbo].[people_List].[id_People], [dbo].[people_List].[Surname], [dbo].[people_List].[Name], [dbo].[people_List].[Patronymic], [dbo].[people_List].[OldSurname], [dbo].[people_List].[OldName], [dbo].[people_List].[OldPatronymic], [dbo].[people_List].[Marriage], [dbo].[people_List].[Sex], [dbo].[people_List].[BornDate], [dbo].[people_List].[BornPlace], [dbo].[people_List].[id_Doc], [dbo].[people_List].[DocSerial], [dbo].[people_List].[DocNo], [dbo].[people_List].[DocDistributed], [dbo].[people_List].[DocDate], [dbo].[people_List].[DocDepartmentCode], [dbo].[people_List].[id_Sitizen], [dbo].[people_List].[Photo], [dbo].[people_List].[Other], [dbo].[people_List].[is_webimported], [dbo].[people_List].[id_web], [dbo].[people_List].[FIO], [dbo].[people_List].[FIO2], [dbo].[people_List].[msrepl_tran_version], [dbo].[people_List].[SurnameRP], [dbo].[people_List].[NameRP], [dbo].[people_List].[PatronymicRP], [dbo].[people_List].[id_Nationality_old], [dbo].[people_List].[id_Nationality], [dbo].[people_List].[UID], [dbo].[people_List].[UID_stat], [dbo].[people_List].[UID_sok], [dbo].[people_List].[UID_zo], [dbo].[people_List].[INN_old], [dbo].[people_List].[SSN], [dbo].[people_List].[INN], [dbo].[people_List].[tabNumber] FROM [dbo].[people_List] WHERE 1=1
prisma:engine {
prisma:engine error: PrismaClientUnknownRequestError2 [PrismaClientUnknownRequestError]: Error occurred during query execution:
prisma:engine ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Utf8) })
prisma:engine at NodeEngine.graphQLToJSError (C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\@prisma\client\runtime\index.js:28377:14)
prisma:engine at NodeEngine.request (C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\@prisma\client\runtime\index.js:28273:24)
prisma:engine at processTicksAndRejections (node:internal/process/task_queues:94:5)
prisma:engine at async cb (C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\@prisma\client\runtime\index.js:35193:26) {
prisma:engine clientVersion: '2.24.1'
prisma:engine }
prisma:engine } +1ms
prisma:client Error: Error occurred during query execution:
prisma:client ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Utf8) })
prisma:client at NodeEngine.graphQLToJSError (C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\@prisma\client\runtime\index.js:28377:14)
prisma:client at NodeEngine.request (C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\@prisma\client\runtime\index.js:28273:24)
prisma:client at processTicksAndRejections (node:internal/process/task_queues:94:5)
prisma:client at async cb (C:\Projects\Diplom_Work\graphql-prisma-svfu\node_modules\@prisma\client\runtime\index.js:35193:26) +139ms
prisma:engine stdout Fetched a connection from the pool +6ms
prisma:engine stdout Flushing unhandled packet from the wire. Please consume your streams! +0ms
prisma:warn Flushing unhandled packet from the wire. Please consume your streams!
prisma:engine stdout Flushing unhandled packet from the wire. Please consume your streams! +0ms
prisma:warn Flushing unhandled packet from the wire. Please consume your streams!
prisma:engine stdout Flushing unhandled packet from the wire. Please consume your streams! +1ms
prisma:warn Flushing unhandled packet from the wire. Please consume your streams!
prisma:engine stdout Unknown error +2ms
prisma:query SELECT [dbo].[people_List].[id_People], [dbo].[people_List].[Surname], [dbo].[people_List].[Name], [dbo].[people_List].[Patronymic], [dbo].[people_List].[OldSurname], [dbo].[people_List].[OldName], [dbo].[people_List].[OldPatronymic], [dbo].[people_List].[Marriage], [dbo].[people_List].[Sex], [dbo].[people_List].[BornDate], [dbo].[people_List].[BornPlace], [dbo].[people_List].[id_Doc], [dbo].[people_List].[DocSerial], [dbo].[people_List].[DocNo], [dbo].[people_List].[DocDistributed], [dbo].[people_List].[DocDate], [dbo].[people_List].[DocDepartmentCode], [dbo].[people_List].[id_Sitizen], [dbo].[people_List].[Photo], [dbo].[people_List].[Other], [dbo].[people_List].[is_webimported], [dbo].[people_List].[id_web], [dbo].[people_List].[FIO], [dbo].[people_List].[FIO2], [dbo].[people_List].[msrepl_tran_version], [dbo].[people_List].[SurnameRP], [dbo].[people_List].[NameRP], [dbo].[people_List].[PatronymicRP], [dbo].[people_List].[id_Nationality_old], [dbo].[people_List].[id_Nationality], [dbo].[people_List].[UID], [dbo].[people_List].[UID_stat], [dbo].[people_List].[UID_sok], [dbo].[people_List].[UID_zo], [dbo].[people_List].[INN_old], [dbo].[people_List].[SSN], [dbo].[people_List].[INN], [dbo].[people_List].[tabNumber] FROM [dbo].[people_List] WHERE 1=1
prisma:engine Client Version: 2.24.1 +21ms
prisma:engine Engine Version: query-engine 18095475d5ee64536e2f93995e48ad800737a9e4 +0ms
prisma:engine Active provider: sqlserver +1ms
express:router dispatching POST / +323ms
express:router query : / +0ms
express:router expressInit : / +2ms
express:router router : / +1ms
express:router dispatching POST / +1ms
express:router corsMiddleware : / +1ms
express:router jsonParser : / +0ms
body-parser:json content-type "application/json" +1ms
body-parser:json content-encoding "identity" +1ms
body-parser:json read body +0ms
body-parser:json parse body +2ms
body-parser:json parse json +0ms
express:router <anonymous> : / +1ms
express:router <anonymous> : / +0ms
Thanks, that should give us more information to reproduce. Your hunch that this might be caused by the collation sounds reasonable to me.
Changed my database collation to "Cyrillic_General_100_CI_AS_KS_WS_SC_UTF8", but the server (localhost) itself remains "Cyrillic_General_CI_AS". Still kinda receiving this query error. I'm now looking for a way to change the server collation too
Interesting. Couldn't change server collation.
But I've created a small separate database instead of the real one (which is from a .bak file) with 1 table Users(id, name), and it's working with collation "Cyrillic_General_CI_AS".
I am really frustrated about what is happening. I even checked the database parameters, all the same (at first sight); only the real one had SQL support of 2016, I changed it to 2019 but still get the same error on this database
Okay. I have managed to fix it a bit. In short, I guess it's connected to the query size, kinda. Reduced the real model of people_List to (for example) id_People, Name, Surname - running fine (Prisma making the query (from logs) "SELECT [dbo].[people_List].[id_People], [dbo].[people_List].[Surname], [dbo].[people_List].[Name], [dbo].[people_List].[Patronymic] FROM [dbo].[people_List] WHERE 1=1")
Changed it back to full model with a lot of columns - previously mentioned error
Definitely something on the Prisma side.
Can you share the full SQL of that table with all the columns that made it fail? Maybe that is already enough to reproduce it on our side.
Of course, here's the Prisma model
model people_List {
id_People Int @id
Surname String @db.VarChar(30)
Name String @db.VarChar(30)
Patronymic String? @db.VarChar(30)
OldSurname String? @db.VarChar(30)
OldName String? @db.VarChar(30)
OldPatronymic String? @db.VarChar(30)
Marriage Boolean
Sex String @db.Char(1)
BornDate DateTime @db.DateTime
BornPlace String? @db.VarChar(100)
id_Doc Int @db.SmallInt
DocSerial String @db.VarChar(10)
DocNo String @db.VarChar(30)
DocDistributed String? @db.VarChar(150)
DocDate DateTime? @db.DateTime
DocDepartmentCode String? @db.VarChar(50)
id_Sitizen Int @db.SmallInt
Photo Bytes? @db.Image
Other String? @db.Text
is_webimported Boolean
id_web Int?
FIO String @db.VarChar(92)
FIO2 String? @db.VarChar(125)
msrepl_tran_version String @db.UniqueIdentifier
SurnameRP String? @db.VarChar(30)
NameRP String? @db.VarChar(30)
PatronymicRP String? @db.VarChar(30)
id_Nationality_old Int?
id_Nationality Int?
UID Int?
UID_stat Int?
UID_sok Int?
UID_zo Int?
INN_old BigInt?
SSN String? @db.VarChar(255)
INN String? @db.VarChar(12)
tabNumber String? @db.VarChar(10)
}
SQL Query was already mentioned, but just to repeat
SELECT [dbo].[people_List].[id_People], [dbo].[people_List].[Surname], [dbo].[people_List].[Name], [dbo].[people_List].[Patronymic], [dbo].[people_List].[OldSurname], [dbo].[people_List].[OldName], [dbo].[people_List].[OldPatronymic], [dbo].[people_List].[Marriage], [dbo].[people_List].[Sex], [dbo].[people_List].[BornDate], [dbo].[people_List].[BornPlace], [dbo].[people_List].[id_Doc], [dbo].[people_List].[DocSerial], [dbo].[people_List].[DocNo], [dbo].[people_List].[DocDistributed], [dbo].[people_List].[DocDate], [dbo].[people_List].[DocDepartmentCode], [dbo].[people_List].[id_Sitizen], [dbo].[people_List].[Photo], [dbo].[people_List].[Other], [dbo].[people_List].[is_webimported], [dbo].[people_List].[id_web], [dbo].[people_List].[FIO], [dbo].[people_List].[FIO2], [dbo].[people_List].[msrepl_tran_version], [dbo].[people_List].[SurnameRP], [dbo].[people_List].[NameRP], [dbo].[people_List].[PatronymicRP], [dbo].[people_List].[id_Nationality_old], [dbo].[people_List].[id_Nationality], [dbo].[people_List].[UID], [dbo].[people_List].[UID_stat], [dbo].[people_List].[UID_sok], [dbo].[people_List].[UID_zo], [dbo].[people_List].[INN_old], [dbo].[people_List].[SSN], [dbo].[people_List].[INN], [dbo].[people_List].[tabNumber] FROM [dbo].[people_List] WHERE 1=1"
I'm a bit lost at the phrase "full sql table", so I will share an image of that table with its columns
Thanks. Did you create the SQL table with Prisma Migrate (migrate dev or db push for example) or did you already have it? SQL UIs usually have a button to export the DDL or SQL needed to create the exact same table.
(But we can try with the data we have already)
It's a table from the real database (kinda, the backup file didn't contain all data/rows) with a lot of tables, which was restored from a .bak file with MS SQL Server Studio on my localhost for testing purposes. I don't know how to copy exactly that table, though I could try to send the .bak of the database if you need it, although it's a pretty confidential matter.
Oh, wait, I got the script for creating the table (columns), though I guess it will be empty
/****** Object: Table [dbo].[people_List] Script Date: 06.06.2021 18:17:01 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[people_List](
[id_People] [int] NOT NULL,
[Surname] [varchar](30) NOT NULL,
[Name] [varchar](30) NOT NULL,
[Patronymic] [varchar](30) NULL,
[OldSurname] [varchar](30) NULL,
[OldName] [varchar](30) NULL,
[OldPatronymic] [varchar](30) NULL,
[Marriage] [bit] NOT NULL,
[Sex] [char](1) NOT NULL,
[BornDate] [datetime] NOT NULL,
[BornPlace] [varchar](100) NULL,
[id_Doc] [smallint] NOT NULL,
[DocSerial] [varchar](10) NOT NULL,
[DocNo] [varchar](30) NOT NULL,
[DocDistributed] [varchar](150) NULL,
[DocDate] [datetime] NULL,
[DocDepartmentCode] [varchar](50) NULL,
[id_Sitizen] [smallint] NOT NULL,
[Photo] [image] NULL,
[Other] [text] NULL,
[is_webimported] [bit] NOT NULL,
[id_web] [int] NULL,
[FIO] [varchar](92) NOT NULL,
[FIO2] [varchar](125) NULL,
[msrepl_tran_version] [uniqueidentifier] NOT NULL,
[SurnameRP] [varchar](30) NULL,
[NameRP] [varchar](30) NULL,
[PatronymicRP] [varchar](30) NULL,
[id_Nationality_old] [int] NULL,
[id_Nationality] [int] NULL,
[UID] [int] NULL,
[UID_stat] [int] NULL,
[UID_sok] [int] NULL,
[UID_zo] [int] NULL,
[INN_old] [bigint] NULL,
[SSN] [varchar](255) NULL,
[INN] [varchar](12) NULL,
[tabNumber] [varchar](10) NULL,
CONSTRAINT [PK_people_List] PRIMARY KEY CLUSTERED
(
[id_People] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
Thanks, that is perfect. With that someone will try to reproduce this soon - and if it is indeed the data causing the error, we will ask for that later.
Hey, so are you using VarChar to store non-ascii data? What I suspect is that we don't have the right collation data in the library, and try to read the data as UTF-8. This should just work with NVarChar columns, but we don't really test with VarChar and non-ascii...
Can you elaborate on non-ASCII data? It contains Cyrillic symbols (aka Russian), I don't know if that's non-ASCII or not...
But after your question I tested some more and finally found out that this field in the Prisma schema
Sex String @db.Char(1)
contained Cyrillic characters, and after removing it from the schema it now finally works. I guess the problem is solved, kinda.
This is an extremely interesting issue, I'll take a look for sure.
I was originally thinking it might be some of the @db.VarChar fields holding data we can't read correctly.
Can you provide me an example of a value in @db.Char(1) field that breaks Prisma? I can write a test and fix that then.
Of course, it's a single symbol for a person's sex, like male/female, but in Russian letters: М or Ж
Cool. Thanks for an excellent reproduction. I'll try to spin up a dockerized server with the right collation, and see what's wrong in our CHAR parsing code.
So, you have Cyrillic symbols in the VarChar fields too, and they do work?
Yep, they work fine, at least I'm now running queries without any crashes
I think the NChar would work here just fine, but I need to fix the Char handling.
Hey @BJladika we just published a special version for you to test a bit: 2.25.0-integration-sql-server-char-collation-fix.2. Could you try this out, try reading that CHAR(1) value and see the result. Also, would be interested to know if you can write to the VARCHAR and CHAR columns successfully. We definitely have some areas in non-UTF column types we haven't covered, and if this does not work out, I'm trying to do an installation of SQL Server that can talk this collation and try to reproduce the problems by myself.
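For reference, a minimal read/write check along those lines could look roughly like this (an untested sketch that assumes the generated client and the people_List model shown above; the field values are only illustrative):
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();

async function main() {
  // Read the CHAR(1) column that previously triggered the Utf8 QueryError
  const person = await prisma.people_List.findFirst({
    select: { id_People: true, Sex: true, Surname: true },
  });
  console.log(person);

  // Write a Cyrillic value back to the CHAR and VARCHAR columns to check the other direction
  if (person) {
    await prisma.people_List.update({
      where: { id_People: person.id_People },
      data: { Sex: 'Ж', Surname: 'Тест' },
    });
  }
}

main().finally(() => prisma.$disconnect());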
Can you give me a hint how to change to this version?
npm install prisma@2.25.0-integration-sql-server-char-collation-fix.2 --save-dev
Or set it in your `package.json`.
In npm: https://www.npmjs.com/package/prisma/v/2.25.0-integration-sql-server-char-collation-fix.2
Well, it seems like it's not working?
"dependencies": {
"@prisma/client": "2.24.1",
"apollo-server": "2.25.0",
"graphql": "15.5.0",
"graphql-scalars": "1.9.3"
},
"devDependencies": {
"prisma": "^2.25.0-integration-sql-server-char-collation-fix.2"
},
Still facing that error when querying, or must I also update the Prisma client?
yeah you should also update the client!
I got a database now with Cyrillic_General_CI_AS as the collation. Created a table with CHAR(1) and VARCHAR(255) columns, and added a row there with Ж and ЖЖЖЖЖЖЖЖ written to the columns. On the current prisma master I get UTF-8 error and on my branch, the data is loaded correctly:
❯ cargo run --example async-std
Compiling tiberius v0.5.13 (/home/pimeys/code/tiberius)
Finished dev [unoptimized + debuginfo] target(s) in 4.92s
Running `target/debug/examples/async-std`
Row { columns: [Column { name: "id", column_type: Int4 }, Column { name: "data", column_type: BigChar }, Column { name: "data2", column_type: BigVarChar }], data: TokenRow { data: [I32(Some(1)), String(Some("Ж")), String(Some("ЖЖЖЖЖЖЖЖ"))] } }
I might want to try to write a test for this issue...
You might need to run prisma generate too to get the correct engine.
Well, I tried to use npm install @prisma/client@2.25.0-integration-sql-server-char-collation-fix.2 --save-dev and also changing the dependency to "@prisma/client": "2.25.0-integration-sql-server-char-collation-fix.2", but I only get this error:
npm ERR! code EPERM
npm ERR! syscall rename
npm ERR! path C:\Projects\graphql-sdl-first\node_modules\@prisma\engines-version
npm ERR! dest C:\Projects\graphql-sdl-first\node_modules\@prisma\.engines-version-SMIlxHHf
npm ERR! errno -4048
npm ERR! Error: EPERM: operation not permitted, rename 'C:\Projects\graphql-sdl-first\node_modules\@prisma\engines-version' -> 'C:\Projects\Diplom_Work\graphql-sdl-first\node_modules\@prisma\.engines-version-SMIlxHHf'
npm ERR! [Error: EPERM: operation not permitted, rename 'C:\Projects\graphql-sdl-first\node_modules\@prisma\engines-version' -> 'C:\Projects\graphql-sdl-first\node_modules\@prisma\.engines-version-SMIlxHHf'] {
npm ERR! errno: -4048,
npm ERR! code: 'EPERM',
npm ERR! syscall: 'rename',
npm ERR! path: 'C:\\Projects\\graphql-sdl-first\\node_modules\\@prisma\\engines-version',
npm ERR! dest: 'C:\\Projects\\graphql-sdl-first\\node_modules\\@prisma\\.engines-version-SMIlxHHf'
npm ERR! }
npm ERR!
npm ERR! The operation was rejected by your operating system.
npm ERR! It's possible that the file was already in use (by a text editor or antivirus),
npm ERR! or that you lack permissions to access it.
npm ERR!
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator.
Hey @BJladika, Node on Windows sometimes can be a pain - I also get this message from time to time. Maybe uninstall prisma and @prisma/client first (npm uninstall prisma @prisma/client) and then install it again (the commands you posted above). Sometimes it also helps to run the commands a few more times. The less tools and programs are open, the less probably it is that one of them is blocking the file access.
Yeah, I ran a query now with an updated (aka reinstalled) @prisma/client and prisma at the 2.25.0 version, and now I'm not receiving the error and can read the Char string. About mutations... well, maybe I will write/test those later, I'm a bit busy now
The pull request above fixes it and adds a test so we don't get a regression in the future. I noticed the type TEXT was also broken in a similar way.
This will be fully available in the 2.25.0 version on next week.
Can you confirm this worked in our latest now @BJladika?
There is a guide explaining why this happens; I solved my error with the docs at this URL:
http://www.sql-server-helper.com/error-messages/msg-8115-numeric-to-numeric.aspx
|
gharchive/issue
| 2021-06-05T12:34:59 |
2025-04-01T06:40:05.012413
|
{
"authors": [
"Allislove",
"BJladika",
"janpio",
"pimeys"
],
"repo": "prisma/prisma",
"url": "https://github.com/prisma/prisma/issues/7476",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
915389259
|
Migrate could automatically create a CITEXT extension if it is used in the schema (Postgres)
Assume this schema (Postgres):
model User {
id String @id
email String
}
When I run migrate dev for this, all is good.
But now, when I try to change the email field to use the Citext native type:
model User {
id String @id
email String @db.Citext
}
and run migrate dev again, Migrate successfully creates a migration on disk, but fails to apply it because the CITEXT Postgres extension was not created.
I can manually create it using CREATE EXTENSION IF NOT EXISTS citext WITH SCHEMA public;, and run migrate dev again, and that succeeds just fine.
In order to ensure that future Migrate commands work correctly, I currently work around this issue by manually editing this migration and adding CREATE EXTENSION IF NOT EXISTS citext WITH SCHEMA public; as the first line in the generated Migrate SQL.
It would be great if Migrate did that out of the box!
Internal: We discovered this in this PR: https://github.com/prisma/cloud/pull/695
Related to https://github.com/prisma/prisma/issues/6822
|
gharchive/issue
| 2021-06-08T19:08:57 |
2025-04-01T06:40:05.018246
|
{
"authors": [
"madebysid",
"pantharshit00"
],
"repo": "prisma/prisma",
"url": "https://github.com/prisma/prisma/issues/7535",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1810170237
|
ci(benchmark.yml): use Node v16
https://prisma-company.slack.com/archives/C04UKP1JSE7/p1689693487127009
PR to try removing V8 flags causing this in https://github.com/prisma/prisma/pull/20281
|
gharchive/pull-request
| 2023-07-18T15:27:04 |
2025-04-01T06:40:05.019916
|
{
"authors": [
"Jolg42"
],
"repo": "prisma/prisma",
"url": "https://github.com/prisma/prisma/pull/20280",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
58346485
|
Hide c or c++ symbols in the pod distribution
The current pod exposes all symbols in the webrtc project such as nss/sha512.cc containing symbols such as _SHA256_Update. Those symbols are susceptible to conflict with other libraries such as openssl.
Is it possible to just expose Objective-C related symbols in the pod distribution?
Thanks.
I used binutils to rename the conflicting symbols.
|
gharchive/issue
| 2015-02-20T12:34:02 |
2025-04-01T06:40:05.030672
|
{
"authors": [
"tomfisher"
],
"repo": "pristineio/webrtc-build-scripts",
"url": "https://github.com/pristineio/webrtc-build-scripts/issues/70",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2129962199
|
chore: added placeholder to repo info form
Description
Added dynamic placeholder for the parameter input element
Related Issue
issue #385
Does this introduce a breaking change?
[ ] Yes
[x] No
Other information
Hey! Your solution is great, but someone created a PR 2 days ago to solve the same issue: https://github.com/privacy-scaling-explorations/bandada/pull/387.
Thank you so much.
Here is a list of good first issues, feel free to ask questions, take a new one and ask us to assign you: https://github.com/privacy-scaling-explorations/bandada/issues?q=is%3Aissue+is%3Aopen+label%3A"good+first+issue"
Duplicated #387
|
gharchive/pull-request
| 2024-02-12T11:44:28 |
2025-04-01T06:40:05.040195
|
{
"authors": [
"0xjei",
"gabrieltemtsen",
"vplasencia"
],
"repo": "privacy-scaling-explorations/bandada",
"url": "https://github.com/privacy-scaling-explorations/bandada/pull/388",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
630306946
|
🆕 Software Suggestion | Opticole
Basic Information
Name: Opticole
Category: Windows Privacy Tool & File Encryption
URL: www.riserbo.com/opticole.html
Description
Opticole is an easy-to-use Windows privacy toolkit with a simple GUI. I think it's best to compare it to a Swiss army knife, but for privacy. It all runs immediately off of one executable and doesn't need to be installed which I think is pretty cool. Currently I believe the only method of purchasing it is through a credit card, but I contacted the developers who got back and said BTC payments will be available soon. Also, no account is required or anything like that.
Features
These are the current features I'm aware of:
[x] File and directory encryption/decryption (includes an option for a custom key)
[x] Automatic enabling of Windows privacy settings. Original settings can be saved and restored
[x] Port scanning which guards against malicious payloads
[x] Computer performance boosting
Why I am making the suggestion
Windows can be very problematic when it comes to privacy. I love using Windows, but I also know additional privacy apps/software are necessary to protect my data while using it. Opticole is perfect because it's so easy-to-use and non-intrusive. I don't have to worry about any annoying pop-ups or start-up programs. I can just run it whenever I need it.
My connection with the software
As far as I know, I was one of the first people to purchase Opticole. I'm content with the software, and I want to share it since it's very new and doesn't get a lot of attention.
[x] I will keep the issue up-to-date if something I have said changes or I remember a connection with the software.
Website does not have an SSL certificate.
Closed source, an invalid Twitter account and their contact is a gmail account.
Nothing about this seems like something we should endorse.
Website does not have a SSL-certificate
Yeah, this seems a bit shady; also, I wouldn't even know what section this would be added to.
Closing issue (if people disagree, feel free to comment to reopen it for discussion)
|
gharchive/issue
| 2020-06-03T20:33:07 |
2025-04-01T06:40:05.062618
|
{
"authors": [
"Onit333",
"blacklight447-ptio",
"danarel",
"ph00lt0"
],
"repo": "privacytools/privacytools.io",
"url": "https://github.com/privacytools/privacytools.io/issues/1942",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2671555708
|
feat: Use HTML to format train_test_split warnings in vscode/jupyter
Probably in another task we could display those strings as formatted HTML if running in a notebook.
Maybe something like (not tested):
import contextlib

with contextlib.suppress(ImportError):
    from IPython.core.interactiveshell import InteractiveShell
    from IPython.display import HTML, display

    if InteractiveShell.initialized():
        # Wrap the message in HTML() so notebook front-ends render it as styled markup
        display(HTML("""
            <div style="
                border: solid 1px var(--vscode-editorWarning-border, red);
                color: var(--vscode-editorWarning-foreground, white);
                background-color: var(--vscode-editorWarning-background, red);
            ">
            It seems that you have a classification problem with a high class
            imbalance. In this case, using train_test_split may not be a good idea
            because of high variability in the scores obtained on the test set.
            To tackle this challenge we suggest to use skore's
            cross_validate function.
            </div>"""))
Originally posted by @rouk1 in https://github.com/probabl-ai/skore/pull/690#discussion_r1843441023
I would like to have more user feedback, and to understand properly what workflow we support for emitted warnings. This is an optimization, so let's get back to it when needed.
|
gharchive/issue
| 2024-11-19T09:58:37 |
2025-04-01T06:40:05.117753
|
{
"authors": [
"rouk1",
"tuscland"
],
"repo": "probabl-ai/skore",
"url": "https://github.com/probabl-ai/skore/issues/770",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
941206513
|
Detect timeout when reading from USB HID device
The return value from HidDevice::read_timeout was not checked, so we did not detect when a timeout occurred.
With the changes in this PR, it should be possible to try multiple reports when connecting to a CMSIS-DAP probe, as is planned in #721.
bors retry
Perhaps we could make the macos (and ubuntu and windows) CI tests no longer required for the protected branch in GitHub settings, but add macos to the bors config (it's not there currently)? I think the problem here is that GitHub requires macos check to have passed before it will merge to master, but bors is not configured to wait for the macos check, so it tries to merge its staging commit before it passes and github blocks it.
If we tell GitHub to makes bors the required check instead of all the other ones, then it can only merge through bors - and we can configure bors to require macos. Otherwise, we should at least add macos to bors' config to stop it trying to merge too early.
bors retry
Should work now, macOS tests are now part of the bors config.
I think you'll need to rebase this to get the bors change?
I think I just got lucky that the macos check was finished earlier 😅
|
gharchive/pull-request
| 2021-07-10T08:50:13 |
2025-04-01T06:40:05.120680
|
{
"authors": [
"Tiwalun",
"Yatekii",
"adamgreig"
],
"repo": "probe-rs/probe-rs",
"url": "https://github.com/probe-rs/probe-rs/pull/733",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1192406948
|
Error in Equation 25.14 [version 2022-03-14]
I think there might be a mistake in Equation 25.14 (Section 25.2 line 42, page 865 (latex)).
In the text the partition function appears inside the exponential but I think it should be outside, something more like:
$$
\int \exp (- E_{\boldsymbol{\theta}}(\mathbf{x})) Z_{\boldsymbol{\theta}}^{-1} (-\nabla_{\boldsymbol{\theta}} E_{\boldsymbol{\theta}}(\mathbf{x}))d\mathbf{x}.
$$
fixed, thanks
|
gharchive/issue
| 2022-04-04T23:07:08 |
2025-04-01T06:40:05.122887
|
{
"authors": [
"gileshd",
"murphyk"
],
"repo": "probml/pml2-book",
"url": "https://github.com/probml/pml2-book/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
329677204
|
Allow handlebars to be referenced easily
Feature Request
Is your feature request related to a problem? Please describe.
I am creating a Probot application that requires a few HTTP routes. As they're not totally trivial, I'd like to write them using a templating language, and I've noticed Probot already uses Handlebars to render the default /probot route.
The documentation at https://probot.github.io/docs/http/ shows how to render basic output using res.end but it would be useful be able to use res.render too. Unfortunately the default views folder path is set to node_modules/probot/views thanks to https://github.com/probot/probot/blob/d2fe925e1aeade9168d3042be25d0a7c48264938/src/server.ts#L17
Describe the solution you'd like
It would be really useful to be able to do the following in my app:
module.exports = app => {
const expApp = app.route('/my-thing');
expApp.get('/', (req, res) => {
res.render('root.hbs', {name: 'Ben'});
});
}
and have the url /my-thing render the handlebars template in <project-root>/views/root.hbs.
Describe alternatives you've considered
I can get this to work by changing my render path to traverse out of the node_modules and into the project root: res.render('../../../views/root.hbs', {name: 'Ben'}); however that is very ugly.
I can also create a new express app and mount it:
const express = require('express');
module.exports = app => {
const expApp = express();
expApp.get('/', (req, res) => {
res.render('root.hbs', {name: 'Ben'});
});
app.router.use('/my-thing', expApp);
}
However app.router is not documented on the website so I'm not sure if that is the preferred option
Teachability, Documentation, Adoption, Migration Strategy
Adding an example of handlebars use on https://probot.github.io/docs/http
I think that'd be great. Ideally Probot would just default to including your local views directory in the path. Any idea how to do that with express and/or handlebars?
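For what it's worth, a rough sketch of one way to do that, building on the second alternative above (this assumes the hbs package is installed and a views folder exists in the project root; app.router is, as noted, not an officially documented API):
const express = require('express');
const path = require('path');

module.exports = app => {
  const expApp = express();
  // Point the sub-app at the consuming project's views folder instead of node_modules/probot/views
  expApp.set('views', path.join(process.cwd(), 'views'));
  expApp.set('view engine', 'hbs');
  expApp.get('/', (req, res) => {
    res.render('root', { name: 'Ben' });
  });
  app.router.use('/my-thing', expApp);
};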
|
gharchive/issue
| 2018-06-06T01:17:16 |
2025-04-01T06:40:05.177282
|
{
"authors": [
"BPScott",
"bkeepers"
],
"repo": "probot/probot",
"url": "https://github.com/probot/probot/issues/560",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
180430146
|
Bug when using the refresh button when running code.
When I run code in the p5js editor and press the refresh button it stops the whole program from running after.
video explanation here: http://screencast.com/t/eEhRSyI5a29Z
I'm experiencing the same problem. Aside from that, I would prefer a dark environment over a light one. Maybe you could include a dark theme? At the moment, I don't really see the benefits of the p5.js editor over other code editors like Visual Studio Code...
The editor is so useful for teaching students and having a way to set up a sketch's dependent files very quickly. Saves so much drama dealing with paths in a classroom.
Just a note that this bug affects the example code included with the editor. I ran into it a minute or so into using the editor for the first time, after tweaking a value in the "Array 2d" example. I ran it, noticed it was slow and refreshed to see if that was normal (it was, I had changed 'spacer' to 1), then closed the browser and discovered I had to close and reopen the file (but not the whole application, I still had another window open and it appears to be working OK) to be able to run it again. Just thought a little more information might be helpful.
|
gharchive/issue
| 2016-10-01T01:17:43 |
2025-04-01T06:40:05.179951
|
{
"authors": [
"SimonBuxx",
"otakucode",
"tegacodes",
"twilson33"
],
"repo": "processing/p5.js-editor",
"url": "https://github.com/processing/p5.js-editor/issues/268",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1478654520
|
Add documentation for AudioVoice.dispose()
Add missing documentation for AudioVoice.dispose()
Thanks for merging this!
|
gharchive/pull-request
| 2022-12-06T09:32:22 |
2025-04-01T06:40:05.180984
|
{
"authors": [
"lf32"
],
"repo": "processing/p5.js-sound",
"url": "https://github.com/processing/p5.js-sound/pull/692",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
603259253
|
Added p5.js books to book page
Fixes #699
Changes:
Added Generative design book.
Screenshots of the change:
@jeremydouglass Thank you so much for the review! Earlier I was wondering if there was some standard file size or width for the images. I will update the image with 600px width and amend the commit. I will start adding more books right away.
The text actually needs to go into the en.yml, es.yml, and zh-Hans.yml files. Please see the community page src code as an example of how to do this.
The reason the previous two books aren't translated is because we had both the English and Spanish versions and at the time, the website only supported these two languages. Although now, I think these should probably get translated also.
@lmccart Okay, understood. I will add the i18n tags to the existing books first and then add them to en.yml.
@jeremydouglass no they don't need to be. we just need entries in the yml files so the website builds and it's clear what needs to be translated
@jeremydouglass @lmccart Do you have any suggestions/changes for the books page before I go on adding other books? Please review the changes and let me know :)
It all looked good in my review!
My only suggestion -- and this isn't my site -- is that given the large number of strings in a collection of books, it might make sense to move the numbering up in the strings, like this:
book-1-title: "Getting Started with p5.js"
book-1-authors: "Lauren McCarthy, Casey Reas, and Ben Fry. "
book-1-publisher: "Published October 2015, Maker Media. "
book-1-pages: "246 pages. "
book-1-type: "Paperback."
book-1-description: "Written by the lead p5.js developer and the founders of Processing, this book provides an introduction to the creative possibilities of today's Web, using JavaScript and HTML."
book-1-order-a: "Order Print/Ebook from O'Reilly"
book-1-order-b: "Order from Amazon"
book-2-title: ... etc.
That would make the yml easier.
@jeremydouglass Thanks for the review. I will amend the commit to use suggested naming convention.
@lmccart If everything else is fine, I would like to continue adding books to the website.
@jeremydouglass To avoid conflicts, I'll update the tracking file once all books have been added.
I can't find much information about the book
Creative Coding and Data Visualization with p5.js
It is out of stock everywhere and also there is no reference of the book on the publisher's website. Should I add this book? What other books would you recommend to be added?
Perhaps my suggestions:
Superfun P5.js Projects: For Beginners
by Nazia Fakhruddin, Haseeb Zeeshan, et al.
p5.js Internet creative programming(Chinese Edition)
by YI MING
@scotthmurray we are updating the p5js.org books page. there was a suggestion to add "Creative Coding and Data Visualization with p5.js". I can't remember the status of this book. it is listed but unavailable on amazon, did it get published? would you like it to be added?
@ayushjainrksh this looks great, and thanks for the reviews @jeremydouglass! I'm going to merge this and we can continue to add other books if you'd like via a new pull request.
@lmccart Thanks for the merge.
@jeremydouglass Please let me know if there is a need to add any more books :)
@lmccart Thank you for thinking of me. Sadly, the contract for the book was cancelled and I never got around to writing it. 😭 Yes, even though it exists as a "real" product on Amazon, with a cover, ISBN, and everything. 🤷♂
@scotthmurray thanks for the update. i like the amazon page as a ghost of p5 dreams past! 👻
|
gharchive/pull-request
| 2020-04-20T13:56:49 |
2025-04-01T06:40:05.201166
|
{
"authors": [
"ayushjainrksh",
"jeremydouglass",
"lmccart",
"scotthmurray"
],
"repo": "processing/p5.js-website",
"url": "https://github.com/processing/p5.js-website/pull/716",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1160373491
|
Update lint-staged from 12.3.4 to 12.3.5
Change-type: patch
Signed-off-by: Josh Bowling josh@balena.io
@balena-ci I self certify!
|
gharchive/pull-request
| 2022-03-05T15:05:28 |
2025-04-01T06:40:05.236447
|
{
"authors": [
"joshbwlng"
],
"repo": "product-os/jellyfish-assert",
"url": "https://github.com/product-os/jellyfish-assert/pull/259",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1129162235
|
No negative Values with 1.1.2
Hey,
with 1.1.0 the consumption stops at 0.
I got negative values (PV power :D).
With V1.1.2 the fed-in / negative values (-X Watt) are shown as positive values.
Can you display negative values or can you add a checkbox for „stop at 0 W“?
workaround for me: use V1.1.0
index.js extended:
this.powerConsumption = (parseFloat(json.emeters[0].power)+parseFloat(json.emeters[1].power)+parseFloat(json.emeters[2].power));
if (this.powerConsumption < 0) { this.powerConsumption = 0 };
this.totalPowerConsumption = ((parseFloat(json.emeters[0].total)+parseFloat(json.emeters[1].total)+parseFloat(json.emeters[2].total))/1000);
if (this.totalPowerConsumption < 0) { this.totalPowerConsumption = 0 };
this.voltage1 = (((parseFloat(json.emeters[0].voltage)+parseFloat(json.emeters[1].voltage)+parseFloat(json.emeters[2].voltage))/3));
this.ampere1 = (((parseFloat(json.emeters[0].current)*this.pf0)+(parseFloat(json.emeters[1].current)*this.pf1)+(parseFloat(json.emeters[2].current)*this.pf2)));
if (this.ampere1 < 0) { this.ampere1 = 0 };
Sorry for my very late reply.
So you need all negative values to be zeroed, right?
If I implement: Negative Values Handling -> Option 1. Zero Values, Option 2. Show Absolute (positive) Values
Are you covered?
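A minimal sketch of what that handling could look like (purely illustrative; the mode value and config key are assumptions, not the plugin's actual options):
// mode is an assumed config value: 'zero' clamps negatives to 0, 'absolute' shows their magnitude
function applyNegativeHandling(value, mode) {
  if (value >= 0) return value;
  return mode === 'absolute' ? Math.abs(value) : 0;
}

// e.g. this.powerConsumption = applyNegativeHandling(rawConsumption, this.config.negativeHandling);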
Please try new version and report back. Thank You.
works like charm, thanks!
Closed with V1.1.3
|
gharchive/issue
| 2022-02-09T23:11:03 |
2025-04-01T06:40:05.244544
|
{
"authors": [
"produdegr",
"thechris1992"
],
"repo": "produdegr/homebridge-3em-energy-meter",
"url": "https://github.com/produdegr/homebridge-3em-energy-meter/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2080436461
|
I cannot format a 64 GB Transcend SD card
That picture shows a successful format. The erase option is expected to fail for some cards and USB card readers. You can ignore the error message.
|
gharchive/issue
| 2024-01-13T19:02:10 |
2025-04-01T06:40:05.268173
|
{
"authors": [
"profi200",
"sohanatrana"
],
"repo": "profi200/sdFormatLinux",
"url": "https://github.com/profi200/sdFormatLinux/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
247143711
|
macOS Sierra, go and install drivers fails: ./profitbricks.go:258: cannot use profitbricks.Lan literal (type profitbricks.Lan) as type profitbricks.CreateLanRequest in argument to profitbricks.CreateLan make: *** [compile] Error 2
macOS Sierra
go installed via brew
When I try to install the drivers via go, this happend:
docker-machine-driver-profitbricks git:(master) make install
GOGC=off CGOENABLED=0 go build -i -o ./bin/docker-machine-driver-profitbricks"" ./bin
# github.com/profitbricks/docker-machine-driver-profitbricks
./profitbricks.go:258: cannot use profitbricks.Lan literal (type profitbricks.Lan) as type profitbricks.CreateLanRequest in argument to profitbricks.CreateLan
make: *** [compile] Error 2
docker-machine-driver-profitbricks git:(master) go version
go version go1.8.3 darwin/amd64
@apietsch issue resolved in v1.3.1
|
gharchive/issue
| 2017-08-01T17:29:31 |
2025-04-01T06:40:05.270081
|
{
"authors": [
"alibazlamit",
"apietsch"
],
"repo": "profitbricks/docker-machine-driver-profitbricks",
"url": "https://github.com/profitbricks/docker-machine-driver-profitbricks/issues/30",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2439919443
|
v2.11-beta2+fall2024
Backend version
SDK SHA
Please fix the spelling and listing errors
|
gharchive/pull-request
| 2024-07-31T12:19:56 |
2025-04-01T06:40:05.323861
|
{
"authors": [
"ccruzagralopes",
"hiltonlima"
],
"repo": "project-chip/certification-tool-backend",
"url": "https://github.com/project-chip/certification-tool-backend/pull/121",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1039106700
|
Need a template class for linked list
Problem
LinkedList is a useful data structure for holding a list with an uncertain number of items (since we cannot have containers like std::vector due to tight memory), but we don't have one.
Proposed Solution
Implement (or port) one.
Intrusive or not?
@erjiaqing Does IntrusiveList handle this?
|
gharchive/issue
| 2021-10-29T02:35:48 |
2025-04-01T06:40:05.325338
|
{
"authors": [
"bzbarsky-apple",
"erjiaqing"
],
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/issues/11171",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1271882267
|
TC-GR-2.3 Groups Cluster - AddGroupIfIdentifying command not working
1. Problem - 1. When TH adds the group using the AddGroupIfIdentifying command, the group doesn't exist in the GroupTable
2. When TH adds the n+1th group using the AddGroupIfIdentifying command, we are unable to get a RESOURCE_EXHAUSTED response
2.Sequence of steps
AddGroupIfIdentifying - ./chip-tool groups add-group-if-identifying 0x0006 gp6 1 0
GroupTable - ./chip-tool groupkeymanagement read group-table 1 0
3.Test Environment
App used - allclusters app
Platform - Chip-tool - RPI-4, 8GB RAM
DUT - RPI - RPI-4, 8GB RAM
Network - Ble-wifi
Commit - 9493d7b48c410f058b85552e1668b33f858afcac
4.Logs
DUT_Logs_15.06.2022.txt
TH_Logs_15.06.2022.txt
AddGroupIfIdentifying is not part of Matter.
Looking at the test plan at https://github.com/CHIP-Specifications/chip-test-plans/blob/master/src/cluster/Groups.adoc#323-tc-g-23-commands---getgroupmembership-addgroupifidentifying-dut-server it looks like the pre-conditions do not have "DUT is identifying" as a pre-condition, and nothing in the test plan tells it to start identifying. So it sounds like the test plan is buggy and needs fixing.
Cert Blocker Review: @bzbarsky-apple will file a test plan issue, so closing this.
|
gharchive/issue
| 2022-06-15T08:41:30 |
2025-04-01T06:40:05.331300
|
{
"authors": [
"bzbarsky-apple",
"sethunk",
"woody-apple"
],
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/issues/19611",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1977707351
|
[Build]
Build issue(s)
When I use this command:
sevene@sevenedeMacBook-Pro CHIPTool % ./gradlew build
The result failed; how can I solve this?
The branch is master; I want to build Android CHIPTool from Android Studio.
/Users/sevene/matter/connectedhomeip/out/android_arm64/CMakeLists.txt : C/C++ debug|arm64-v8a : CMake Warning at /Users/sevene/Library/Android/sdk/ndk/23.2.8568313/build/cmake/android-legacy.toolchain.cmake:416 (message):
An old version of CMake is being used that cannot automatically detect
compiler attributes. Compiler identification is being bypassed. Some
values may be wrong or missing. Update to CMake 3.19 or newer to use
CMake's built-in compiler identification.
Call Stack (most recent call first):
/Users/sevene/Library/Android/sdk/ndk/23.2.8568313/build/cmake/android.toolchain.cmake:55 (include)
/Users/sevene/matter/connectedhomeip/examples/android/CHIPTool/app/.cxx/cmake/debug/arm64-v8a/CMakeFiles/3.10.2/CMakeSystem.cmake:6 (include)
/Users/sevene/matter/connectedhomeip/examples/android/CHIPTool/app/.cxx/cmake/debug/arm64-v8a/CMakeFiles/CMakeTmp/CMakeLists.txt:2 (project)
> Task :app:generateJsonModelDebug FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:generateJsonModelDebug'.
> /Users/sevene/matter/connectedhomeip/out/android_arm64/CMakeLists.txt : C/C++ debug|arm64-v8a : CMake Error at /Users/sevene/matter/connectedhomeip/out/android_arm64/CMakeLists.txt:136118 (add_custom_target):
Cannot find source file:
/Users/sevene/matter/connectedhomeip/third_party/pigweed/repo/pw_tokenizer/py/setup.py
Tried extensions .c .C .c++ .cc .cpp .cxx .m .M .mm .h .hh .h++ .hm .hpp
.hxx .in .txx
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
Platform
No response
Anything else?
No response
Building Android CHIPTool from scripts succeeds,
Building Android CHIPTool from Android Studio fails.
sevene@sevenedeMacBook-Pro connectedhomeip % ./scripts/build/build_examples.py --target android-arm64-chip-tool build
2023-11-05 17:14:50 INFO Building targets: android-arm64-chip-tool
2023-11-05 17:14:50 INFO Preparing builder 'android-arm64-chip-tool'
2023-11-05 17:14:50 INFO Generating /Users/sevene/matter/connectedhomeip/out/android-arm64-chip-tool
2023-11-05 17:14:50 INFO Setting up Android deps through Gradle
2023-11-05 17:14:51 INFO > Task :buildSrc:compileJava NO-SOURCE
2023-11-05 17:14:51 INFO > Task :buildSrc:compileGroovy UP-TO-DATE
2023-11-05 17:14:51 INFO > Task :buildSrc:processResources NO-SOURCE
.....
......
++ unset -f _chip_bootstrap_banner
++ '[' -n '' ']'
+ python3 third_party/android_deps/set_up_android_deps.py
BUILD SUCCESSFUL in 498ms
4 actionable tasks: 1 executed, 3 up-to-date
+ echo 'build ide'
build ide
+ gn gen --check --fail-on-unused-args out/android_arm64 '--args=target_os="android" target_cpu="arm64" android_ndk_root="/Users/sevene/Library/Android/sdk/ndk/23.2.8568313" android_sdk_root="/Users/sevene/Library/Android/sdk"' --ide=json --json-ide-script=//scripts/examples/gn_to_cmakelists.py
Generating JSON projects took 129450ms
Done. Made 6574 targets from 344 files in 129769ms
|
gharchive/issue
| 2023-11-05T09:33:04 |
2025-04-01T06:40:05.334815
|
{
"authors": [
"Qianweiyin"
],
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/issues/30222",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1276311147
|
[Ameba] Populate other threadmetrics fields
Problem
When reading threadmetrics, stackSize and stackFreeCurrent show garbage values.
Change overview
Get stackSize and stackFreeCurrent using implemented API
Testing
Read threadmetrics and checked that these 2 fields shows the correct values.
Fast tracking platform changes.
Prerequisite: #19763
/rebase
|
gharchive/pull-request
| 2022-06-20T03:30:15 |
2025-04-01T06:40:05.337374
|
{
"authors": [
"pankore",
"woody-apple"
],
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/pull/19752",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1565451740
|
Add a timer to track whether we have received BDX init after a query …
…image was successful
Currently, if an OTA requester has a successful QueryImage and an image is available, but for any reason the OTA requester doesn't send BDX init, the provider will be stuck until we reboot the resident. In order to have a fail-safe, we are adding a timer that starts after QueryImage returns image available and waits for a BDX init to come. In case BDX init doesn't come, it times out and resets state
Add code to reset state if any API fails on the provider once we prepare for BDX transfer
Stop polling when BDX transfer reset is called
Return QueryImageResponse status Busy instead of general failure if the SDK is busy and gets a second QueryImage, so the accessory can handle the error correctly and retry until the SDK is done
Fixes #24679
It appears that the main change in this PR applies only to the Darwin platform. If so, please update the title and description to make this clear.
|
gharchive/pull-request
| 2023-02-01T05:16:20 |
2025-04-01T06:40:05.340007
|
{
"authors": [
"nivi-apple",
"selissia"
],
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/pull/24777",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
924335619
|
Adding request-commissioning CHIPTool command
Problem
Support for Initializing setup via "commissioner-discovery-from-an-on-network-device" is pending implementation of commands for Steps 7-12.
Note: Steps 3-5 were previously implemented as a discover-commissioner command in the CHIPTool (see PR)
What is being fixed?
Implementation of Commissionable Node advertisement in the new request-commissioning CHIPTool command.
Change overview
This PR adds the basic skeleton for the request-commissioning command on the CLI CHIPTool and initiates advertisement of the tool as a Commissionable Node.
More actions like sending a UDC request, entering commissioning mode, etc. will be implemented as part of the same command later on, in separate PRs.
Testing
Tested by running the new request-commissioning command on the CHIPTool and verifying using minimal-mdns-client that a commissionable node is advertised. See outputs below from each command:
$> ./chip-tool pairing request-commissioning
[1623962271.779161][33013] CHIP:IN: TransportMgr initialized
[1623962271.779210][33013] CHIP:DIS: Init admin pairing table with server storage
[1623962271.779507][33013] CHIP:IN: Loading certs from KVS
[1623962271.779555][33013] CHIP:IN: local node id is 0x000000000001B669
[1623962271.780311][33013] CHIP:ZCL: Using ZAP configuration...
[1623962271.780371][33013] CHIP:ZCL: deactivate report event
[1623962271.780435][33013] CHIP:CTL: Getting operational keys
[1623962271.780447][33013] CHIP:CTL: Generating credentials
[1623962271.780577][33013] CHIP:CTL: Loaded credentials successfully
[1623962271.781020][33013] CHIP:DIS: CHIP minimal mDNS started advertising.
[1623962271.781126][33013] CHIP:DIS: Replying to DNS-SD service listing request
[1623962271.781183][33013] CHIP:DIS: Replying to DNS-SD service listing request
[1623962271.781216][33013] CHIP:DIS: Replying to DNS-SD service listing request
[1623962271.781248][33013] CHIP:DIS: Replying to DNS-SD service listing request
[1623962271.781281][33013] CHIP:DIS: Replying to DNS-SD service listing request
[1623962271.781295][33013] CHIP:DL: wpa_supplicant: _IsWiFiStationProvisioned: interface not connected
[1623962271.784222][33013] CHIP:DIS: Using wifi MAC for hostname
[1623962271.784364][33013] CHIP:DL: rotatingDeviceId: 0000490C1FF0DB5910AD14994F162DB850F3
[1623962271.784378][33013] CHIP:DIS: Advertise commission parameter vendorID=9050 productID=65279 discriminator=2976/160
[1623962271.784405][33013] CHIP:DIS: CHIP minimal mDNS configured as 'Commissionable node device'.
[1623962271.784417][33013] CHIP:TOO: Waiting for 30 sec
[1623962271.781380][33018] CHIP:DL: CHIP task running
[1623962288.933834][33018] CHIP:DIS: Directly sending mDns reply to peer on port 5388
[1623962288.935128][33018] CHIP:DIS: Directly sending mDns reply to peer on port 5388
[1623962288.936404][33018] CHIP:DIS: Directly sending mDns reply to peer on port 5388
[1623962288.937910][33018] CHIP:DIS: Directly sending mDns reply to peer on port 5388
[1623962288.939446][33018] CHIP:DIS: Directly sending mDns reply to peer on port 5388
[1623962288.942679][33018] CHIP:DIS: Directly sending mDns reply to peer on port 5388
[1623962301.785356][33013] CHIP:TOO: No response from device
[1623962301.786065][33013] CHIP:CTL: Shutting down the commissioner
[1623962301.786123][33013] CHIP:CTL: Shutting down the controller
[1623962301.786139][33013] CHIP:DL: Inet Layer shutdown
[1623962301.786232][33013] CHIP:DL: BLE layer shutdown
[1623962301.786248][33013] CHIP:DL: System Layer shutdown
$> ./minimal-mdns-client -q _chipc._udp.local
Running...
Usable interface: eth0 (2)
Usable interface: services1 (5)
Usable interface: docker0 (7)
[1623962288.932292][33112] CHIP:DIS: Attempt to mDNS broadcast to an unreachable destination.
RESPONSE from: fe80::50:ff:fe00:1 on port 5353, via interface 2
RESPONSE: REPLY 4660 (1, 1, 0, 4):
RESPONSE: QUERY ANY/IN UNICAST: _chipc._udp.local.
RESPONSE: ANSWER PTR/IN ttl 120: _chipc._udp.local.
RESPONSE: PTR: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: ADDITIONAL SRV/IN ttl 120: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: SRV on port 11097, priority 0, weight 0: 025000000001.local.
RESPONSE: ADDITIONAL AAAA/IN ttl 120: 025000000001.local.
RESPONSE: IP: fe80::50:ff:fe00:1
RESPONSE: ADDITIONAL A/IN ttl 120: 025000000001.local.
RESPONSE: IP: 192.168.65.3
RESPONSE: ADDITIONAL TXT/IN ttl 4500: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: TXT: 'VP' = '9050+65279'
RESPONSE: TXT: 'D' = '2976'
RESPONSE: TXT: 'CM' = '1'
RESPONSE: TXT: 'RI' = '0000490C1FF0DB5910AD14994F162DB850F3'
RESPONSE: TXT: 'PH' = '33'
RESPONSE: TXT: 'PI' = ''
RESPONSE from: fe80::50:ff:fe00:1 on port 5353, via interface 2
RESPONSE: REPLY 4660 (1, 1, 0, 4):
RESPONSE: QUERY ANY/IN UNICAST: _chipc._udp.local.
RESPONSE: ANSWER PTR/IN ttl 120: _chipc._udp.local.
RESPONSE: PTR: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: ADDITIONAL SRV/IN ttl 120: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: SRV on port 11097, priority 0, weight 0: 025000000001.local.
RESPONSE: ADDITIONAL AAAA/IN ttl 120: 025000000001.local.
RESPONSE: IP: fe80::50:ff:fe00:1
RESPONSE: ADDITIONAL A/IN ttl 120: 025000000001.local.
RESPONSE: IP: 192.168.65.3
RESPONSE: ADDITIONAL TXT/IN ttl 4500: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: TXT: 'VP' = '9050+65279'
RESPONSE: TXT: 'D' = '2976'
RESPONSE: TXT: 'CM' = '1'
RESPONSE: TXT: 'RI' = '0000490C1FF0DB5910AD14994F162DB850F3'
RESPONSE: TXT: 'PH' = '33'
RESPONSE: TXT: 'PI' = ''
RESPONSE from: fe80::50:ff:fe00:1 on port 5353, via interface 2
RESPONSE: REPLY 4660 (1, 1, 0, 4):
RESPONSE: QUERY ANY/IN UNICAST: _chipc._udp.local.
RESPONSE: ANSWER PTR/IN ttl 120: _chipc._udp.local.
RESPONSE: PTR: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: ADDITIONAL SRV/IN ttl 120: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: SRV on port 11097, priority 0, weight 0: 025000000001.local.
RESPONSE: ADDITIONAL AAAA/IN ttl 120: 025000000001.local.
RESPONSE: IP: fe80::50:ff:fe00:1
RESPONSE: ADDITIONAL A/IN ttl 120: 025000000001.local.
RESPONSE: IP: 192.168.65.3
RESPONSE: ADDITIONAL TXT/IN ttl 4500: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: TXT: 'VP' = '9050+65279'
RESPONSE: TXT: 'D' = '2976'
RESPONSE: TXT: 'CM' = '1'
RESPONSE: TXT: 'RI' = '0000490C1FF0DB5910AD14994F162DB850F3'
RESPONSE: TXT: 'PH' = '33'
RESPONSE: TXT: 'PI' = ''
RESPONSE from: fe80::f0c6:79ff:fe6e:4308 on port 5353, via interface 5
RESPONSE: REPLY 4660 (1, 1, 0, 4):
RESPONSE: QUERY ANY/IN UNICAST: _chipc._udp.local.
RESPONSE: ANSWER PTR/IN ttl 120: _chipc._udp.local.
RESPONSE: PTR: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: ADDITIONAL SRV/IN ttl 120: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: SRV on port 11097, priority 0, weight 0: 025000000001.local.
RESPONSE: ADDITIONAL AAAA/IN ttl 120: 025000000001.local.
RESPONSE: IP: fe80::f0c6:79ff:fe6e:4308
RESPONSE: ADDITIONAL A/IN ttl 120: 025000000001.local.
RESPONSE: IP: 192.168.65.4
RESPONSE: ADDITIONAL TXT/IN ttl 4500: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: TXT: 'VP' = '9050+65279'
RESPONSE: TXT: 'D' = '2976'
RESPONSE: TXT: 'CM' = '1'
RESPONSE: TXT: 'RI' = '0000490C1FF0DB5910AD14994F162DB850F3'
RESPONSE: TXT: 'PH' = '33'
RESPONSE: TXT: 'PI' = ''
RESPONSE from: fe80::f0c6:79ff:fe6e:4308 on port 5353, via interface 5
RESPONSE: REPLY 4660 (1, 1, 0, 4):
RESPONSE: QUERY ANY/IN UNICAST: _chipc._udp.local.
RESPONSE: ANSWER PTR/IN ttl 120: _chipc._udp.local.
RESPONSE: PTR: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: ADDITIONAL SRV/IN ttl 120: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: SRV on port 11097, priority 0, weight 0: 025000000001.local.
RESPONSE: ADDITIONAL AAAA/IN ttl 120: 025000000001.local.
RESPONSE: IP: fe80::f0c6:79ff:fe6e:4308
RESPONSE: ADDITIONAL A/IN ttl 120: 025000000001.local.
RESPONSE: IP: 192.168.65.4
RESPONSE: ADDITIONAL TXT/IN ttl 4500: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: TXT: 'VP' = '9050+65279'
RESPONSE: TXT: 'D' = '2976'
RESPONSE: TXT: 'CM' = '1'
RESPONSE: TXT: 'RI' = '0000490C1FF0DB5910AD14994F162DB850F3'
RESPONSE: TXT: 'PH' = '33'
RESPONSE: TXT: 'PI' = ''
RESPONSE from: fe80::f0c6:79ff:fe6e:4308 on port 5353, via interface 5
RESPONSE: REPLY 4660 (1, 1, 0, 4):
RESPONSE: QUERY ANY/IN UNICAST: _chipc._udp.local.
RESPONSE: ANSWER PTR/IN ttl 120: _chipc._udp.local.
RESPONSE: PTR: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: ADDITIONAL SRV/IN ttl 120: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: SRV on port 11097, priority 0, weight 0: 025000000001.local.
RESPONSE: ADDITIONAL AAAA/IN ttl 120: 025000000001.local.
RESPONSE: IP: fe80::f0c6:79ff:fe6e:4308
RESPONSE: ADDITIONAL A/IN ttl 120: 025000000001.local.
RESPONSE: IP: 192.168.65.4
RESPONSE: ADDITIONAL TXT/IN ttl 4500: 1F2958EC944A5CFF._chipc._udp.local.
RESPONSE: TXT: 'VP' = '9050+65279'
RESPONSE: TXT: 'D' = '2976'
RESPONSE: TXT: 'CM' = '1'
RESPONSE: TXT: 'RI' = '0000490C1FF0DB5910AD14994F162DB850F3'
RESPONSE: TXT: 'PH' = '33'
RESPONSE: TXT: 'PI' = ''
[1623962289.434855][33112] CHIP:DL: Inet Layer shutdown
[1623962289.435056][33112] CHIP:DL: BLE layer shutdown
[1623962289.435087][33112] CHIP:DL: System Layer shutdown
Done...
This generally looks good to me. I believe, however, it will need a slight refactor to conform to https://github.com/project-chip/connectedhomeip/pull/7829.
|
gharchive/pull-request
| 2021-06-17T21:09:00 |
2025-04-01T06:40:05.346421
|
{
"authors": [
"msandstedt",
"sharadb-amazon"
],
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/pull/7729",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2745477127
|
Update AppWrappers from 0.27.0 to v0.30.0
Changes:
Define aw as a shortname for appwrapper (PR-288)
Promote AppWrapperLabel to v1beta2 api (PR-282)
Ensure consistent resource status if create errors (PR-273)
Flag config error if admissionGP exceeds warmupGP (PR-278)
This is independent of #642, but it probably makes the most sense to merge #642 first, then rebase this to run the new tests to validate this appwrapper version, then merge this one.
|
gharchive/pull-request
| 2024-12-17T16:51:38 |
2025-04-01T06:40:05.348827
|
{
"authors": [
"dgrove-oss"
],
"repo": "project-codeflare/codeflare-operator",
"url": "https://github.com/project-codeflare/codeflare-operator/pull/643",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
321597899
|
Switch to Nginx instead of Express for serving files
Use Docker image for Nginx
see: https://github.com/cloudigrade/frontigrade/blob/2-build-skeleton-page/Dockerfile
https://github.com/project-koku/koku-ui/pull/53
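A minimal sketch of the kind of setup being proposed, assuming the UI is a plain static build served by Nginx (the paths, image tags, and build output directory are illustrative assumptions, not taken from the linked Dockerfile):

# Dockerfile sketch: build the UI, then serve the static output with Nginx
FROM node:10 AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build

FROM nginx:alpine
# assumed output directory; adjust to the project's actual build output
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80

For a single-page app, the Nginx config would typically also add try_files $uri /index.html so client-side routes fall back to index.html.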
|
gharchive/issue
| 2018-05-09T14:39:01 |
2025-04-01T06:40:05.350513
|
{
"authors": [
"chambridge",
"dmiller9911"
],
"repo": "project-koku/koku-ui",
"url": "https://github.com/project-koku/koku-ui/issues/14",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
715142604
|
Tertiary nav alignment
https://issues.redhat.com/browse/COST-604
Before
After
Codecov Report
Merging #1687 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #1687 +/- ##
=======================================
Coverage 77.86% 77.86%
=======================================
Files 244 244
Lines 3984 3984
Branches 763 763
=======================================
Hits 3102 3102
Misses 798 798
Partials 84 84
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ecdab7f...e298283. Read the comment docs.
|
gharchive/pull-request
| 2020-10-05T20:33:30 |
2025-04-01T06:40:05.356963
|
{
"authors": [
"codecov-commenter",
"dlabrecq"
],
"repo": "project-koku/koku-ui",
"url": "https://github.com/project-koku/koku-ui/pull/1687",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1051539942
|
Team page
Generic team page needed to establish more trust with end users. A few <Avatar /> components would be good, along with a short blurb from each team member.
Link socials
Short blurb
Avatar
In the future, we will have a Figma wireframe designed by our Head Designer to work with. But at the moment we don't have that luxury 😭 (I promise you I'm trying my hardest to find a designer tho!)
I can do this
I can just use the code from podcast.hackclub.com
Would be great, but will need to onboard you on dev for this. Firstly, check out https://lumiere.codes/resources/contributing and let me know if you get stuck.
Pretty sure the Avatar component is designed only for the Navbar (so it can have a dropdown menu w/ settings and such that's unique to the specific logged-in user), so I don't think we can use it for the team page.
|
gharchive/issue
| 2021-11-12T03:08:07 |
2025-04-01T06:40:05.374164
|
{
"authors": [
"AnthonyKuang",
"IamTregsthedev",
"NebuDev14"
],
"repo": "project-lumiere/lumiere.codes",
"url": "https://github.com/project-lumiere/lumiere.codes/issues/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
689694908
|
Chart Legend Label Wrapping on SE iPhone: Reduce Line Height
On the SE iPhone (1st gen - I am not sure about the 2nd gen, but I would assume so), the 7-day average legend label wraps. We should reduce the line height between the two words (see screenshot below):
Next build should have this fixed.
|
gharchive/issue
| 2020-09-01T00:14:40 |
2025-04-01T06:40:05.391199
|
{
"authors": [
"ahurworth",
"floridemai"
],
"repo": "project-vagabond/covid-green-app",
"url": "https://github.com/project-vagabond/covid-green-app/issues/184",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2050282914
|
feat(log): print traceback when panics occur
What type of PR is this?
Which issue does this PR fix:
What does this PR do / Why do we need it:
If an issue # is not available please add repro steps and logs showing the issue:
Testing done on this change:
Automation added to e2e:
Will this break upgrades or downgrades?
Does this PR introduce any user-facing change?:
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Example:
{"level":"error","error":"runtime error: invalid memory address or nil pointer dereference","goroutine":92,"caller":"/home/peusebiu/zot/pkg/log/log.go:29","time":"2023-12-20T15:20:44.043504+02:00","message":"panic recovered"}
goroutine 92 [running]:
runtime/debug.Stack()
/usr/local/go/src/runtime/debug/stack.go:24 +0x7a
github.com/gorilla/handlers.recoveryHandler.log({{0xfcec4c0, 0xc0011ce040}, {0xfced340, 0xc000fccee0}, 0x1}, {0xc001390b30, 0x1, 0x1})
/home/peusebiu/go/pkg/mod/github.com/gorilla/handlers@v1.5.2/recovery.go:91 +0xbb
github.com/gorilla/handlers.recoveryHandler.ServeHTTP.func1()
/home/peusebiu/go/pkg/mod/github.com/gorilla/handlers@v1.5.2/recovery.go:76 +0x105
panic({0xa6b5a20, 0x128832a0})
/usr/local/go/src/runtime/panic.go:890 +0x262
zotregistry.io/zot/pkg/api.(*RouteHandler).ListRepositories(0xc001036328, {0xfd2e0e0, 0xc0011ce060}, 0xc000fbc900)
/home/peusebiu/zot/pkg/api/routes.go:1682 +0x236
net/http.HandlerFunc.ServeHTTP(0xc001390180, {0xfd2e0e0, 0xc0011ce060}, 0xc000fbc900)
/usr/local/go/src/net/http/server.go:2122 +0x43
zotregistry.io/zot/pkg/api.getCORSHeadersHandler.func1.1({0xfd2e0e0, 0xc0011ce060}, 0xc000fbc900)
/home/peusebiu/zot/pkg/api/routes.go:203 +0x70
net/http.HandlerFunc.ServeHTTP(0xc001061b80, {0xfd2e0e0, 0xc0011ce060}, 0xc000fbc900)
/usr/local/go/src/net/http/server.go:2122 +0x43
zotregistry.io/zot/pkg/api.getUIHeadersHandler.func1.1({0xfd2e0e0, 0xc0011ce060}, 0xc000fbc900)
/home/peusebiu/zot/pkg/api/routes.go:221 +0x117
net/http.HandlerFunc.ServeHTTP(0xc00105dd40, {0xfd2e0e0, 0xc0011ce060}, 0xc000fbc900)
/usr/local/go/src/net/http/server.go:2122 +0x43
zotregistry.io/zot/pkg/api.noPasswdAuth.func1.1({0xfd2e0e0, 0xc0011ce060}, 0xc000fbc900)
/home/peusebiu/zot/pkg/api/authn.go:512 +0x3e4
net/http.HandlerFunc.ServeHTTP(0xc0011ce040, {0xfd2e0e0, 0xc0011ce060}, 0xc000fbc900)
/usr/local/go/src/net/http/server.go:2122 +0x43
github.com/gorilla/handlers.recoveryHandler.ServeHTTP({{0xfcec4c0, 0xc0011ce040}, {0xfced340, 0xc000fccee0}, 0x1}, {0xfd2e0e0, 0xc0011ce060}, 0xc000fbc900)
/home/peusebiu/go/pkg/mod/github.com/gorilla/handlers@v1.5.2/recovery.go:80 +0x13b
zotregistry.io/zot/pkg/api.SessionLogger.func1.1({0xfd2d9c0, 0xc000fe7180}, 0xc000fbc900)
/home/peusebiu/zot/pkg/api/session.go:82 +0x189
net/http.HandlerFunc.ServeHTTP(0xc0011c4b70, {0xfd2d9c0, 0xc000fe7180}, 0xc000fbc900)
/usr/local/go/src/net/http/server.go:2122 +0x43
github.com/gorilla/mux.(*Router).ServeHTTP(0xc000d16540, {0xfd2d9c0, 0xc000fe7180}, 0xc000fbc900)
/home/peusebiu/go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:210 +0x25d
net/http.serverHandler.ServeHTTP({0xc0008de870}, {0xfd2d9c0, 0xc000fe7180}, 0xc000fbc700)
/usr/local/go/src/net/http/server.go:2936 +0x494
net/http.(*conn).serve(0xc00109e000, {0xfd31bc8, 0xc0010a0050})
/usr/local/go/src/net/http/server.go:1995 +0x19b5
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:3089 +0xa16
Put your test case on top of this one.
https://github.com/project-zot/zot/pull/2150
that way it prints on the stdout, not in our logs.
|
gharchive/pull-request
| 2023-12-20T10:35:40 |
2025-04-01T06:40:05.396748
|
{
"authors": [
"peusebiu",
"rchincha"
],
"repo": "project-zot/zot",
"url": "https://github.com/project-zot/zot/pull/2145",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1924236388
|
Contract interaction does not support multiple calldata
This does not account for cases where the contract call may have more than 1 calldata. It should be an object instead.
An example is get_lp_balance in this contract.
Fixed; https://github.com/project3fusion/StarkSharp/commit/eb73a55e38354d1e40f603282271c239164b153f
|
gharchive/issue
| 2023-10-03T14:08:56 |
2025-04-01T06:40:05.398747
|
{
"authors": [
"LastCeri",
"jelilat"
],
"repo": "project3fusion/StarkSharp",
"url": "https://github.com/project3fusion/StarkSharp/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2122670827
|
feat: create eventStore to share stores logic
What does this PR do?
This PR adds the event-store class so we can share some common code between stores.
Screenshot / video of UI
N/A
What issues does this PR fix or reference?
N/A
How to test this PR?
run tests or start studio and check that everything works as before. Only the recipe store has been updated for the moment
This complicates the code quite a lot. What are the advantages of reusing this code from Podman Desktop, compared to the current code?
What is the advantage of using a Writable instead of a Readable? I appreciate that the Readable guarantees that changes will come from the store only, and no other part can send new state to it.
For testing
Seems not all stores are migrated, any reason ?
I wanted to see if there was any issue with it before updating all stores. I updated the recipe only because I was working on it and I noticed that it was complex to write a test that simulated multiple state changes.
Still in progress. Forgot to mark as draft, @feloy noticed a misbehavior on the localModel's page. I'm investigating as it seems the rpcbrowser is not actually working
For the problem I have raised (the name in the models page is not updated when we update the catalog file), I finally understand that the problem does not come from the stores, but from the Models page, as we are not subscribing to the catalog, so we cannot get the new name.
You probably say the rpcBrowser does not work because you get a "Message not supported" error when you update the catalog file. The reason is that, because the Models page does not subscribe to the catalog store, the message is received but not recognized by the rpcBrowser, since there is no subscriber. This is not the problem (we should probably hide the message in this case, or check that the message is registered even if there is no subscriber).
Yes, correct. I already worked on a fix yesterday but I need to add some tests. I'll push it soon.
@feloy @jeffmaury updated it. I created a custom writable so that, when subscribed to, it subscribes the rpcBrowser to all events specified by the store.
E.g. the localModels store depends on the localModelState and CatalogState, so if you update the catalog, the page is refreshed automatically without having to change pages.
I would like not to cover the other stores in this PR, as I also need to update the individual pages that subscribe to them, and if I tackle everything here it becomes a mess. Once this is merged I'll work on the other stores separately in small PRs so they are easily testable as well.
@feloy I saw your PR to mock writable when using readable for testing 👍 I updated this PR to use a Readable, as you prefer.
|
gharchive/pull-request
| 2024-02-07T10:16:54 |
2025-04-01T06:40:05.439195
|
{
"authors": [
"feloy",
"lstocchi"
],
"repo": "projectatomic/ai-studio",
"url": "https://github.com/projectatomic/ai-studio/pull/239",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
279155378
|
hospitable README
Could the buildah README.md have some copy-paste examples of running it, rather than so many build instructions?
It would seem much more useful to people searching for it and just giving it a 10-second review.
At first glance I would be 10x more likely to use/try a project if the README showed how it makes my life easier, but having 5 pastes of compile instructions is a deterrent.
As it gets easier to find binary packages, this'll probably be more relevant to people who are interested in trying it out.
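For reference, the kind of copy-paste example being asked for might look something like this (a sketch using standard buildah subcommands; the base image and package are arbitrary choices, not taken from the README):

# Create a working container from a base image, modify it, and commit it as a new image
ctr=$(buildah from fedora)
buildah run "$ctr" -- dnf install -y httpd
buildah config --port 80 --cmd "/usr/sbin/httpd -DFOREGROUND" "$ctr"
buildah commit "$ctr" my-httpd-image
buildah rm "$ctr"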
|
gharchive/issue
| 2017-12-04T21:25:08 |
2025-04-01T06:40:05.442340
|
{
"authors": [
"nalind",
"vbatts"
],
"repo": "projectatomic/buildah",
"url": "https://github.com/projectatomic/buildah/issues/347",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
141924797
|
Fixes for reading from the journald log driver
This is the set of fixes being proposed for https://bugzilla.redhat.com/show_bug.cgi?id=1314463, which fix a couple of race conditions and improve error reporting when we're reading from the journald log driver.
LGTM
LGTM
@nalind this needs to be applied on rhel7-1.9 also right? seems like I cannot patch it cleanly, could you open a PR against rhel7-1.9 also?
Yes, I hadn't realized they were drifting apart.
Opened #86.
I have a feeling docker-1.9 for RHEL has sailed. I don't see this getting in, or us stopping shipping for this fix. If this is a problem for a customer they can just use Docker's logging.
We should probably revert https://github.com/projectatomic/docker/pull/86 then and get @mrunalp's patch only.
We'll keep the patch in Fedora though, right?
Yes, we should fix Fedora, but I am thinking that nothing new is going to get into RHEL; at most we should just add Mrunal's patch, and anything else would need to be tested. I have a feeling we will not add anything.
|
gharchive/pull-request
| 2016-03-18T17:00:53 |
2025-04-01T06:40:05.446388
|
{
"authors": [
"nalind",
"rhatdan",
"runcom"
],
"repo": "projectatomic/docker",
"url": "https://github.com/projectatomic/docker/pull/82",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
327161828
|
Executing commands on bind mounts doesn't always work
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
Description
When using podman to launch a container, with a bind mount, attachment to the container doesn't always succeed.
Steps to reproduce the issue:
1. podman pull registry.access.redhat.com/rhel7
2. podman run -v /usr/sbin:/usr/sbin --rm -it rhel7 /usr/sbin/ip a
or:
podman run -v /usr/bin:/usr/bin --rm -it rhel7 /usr/bin/file /etc/hosts
Describe the results you received:
When it doesn't work, you get:
error attaching to container cda641fe922585a52ef37de83533910b684c27a3bbd75cd988a0f7708199ac05: failed to
connect to container's attach socket: %!v(MISSING): dial unixpacket /var/run/libpod/socket/cda
Describe the results you expected:
When it works, you get what you expect:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether fa:78:6c:23:90:dd brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.88.0.11/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f878:6cff:fe23:90dd/64 scope link tentative
valid_lft forever preferred_lft forever
or-
/etc/hosts: ASCII text
Additional information you deem important (e.g. issue happens only occasionally):
The issue seems to happen about 50% of the time.
Output of podman version:
# podman version
Version: 0.4.1
Go Version: go1.9.2
OS/Arch: linux/amd64
Output of podman info:
# podman info
host:
MemFree: 1547030528
MemTotal: 1926369280
SwapFree: 0
SwapTotal: 0
arch: amd64
cpus: 1
hostname: ip-10-0-2-203.ec2.internal
kernel: 3.10.0-862.3.2.el7.x86_64
os: linux
uptime: 6m 31.74s
insecure registries:
registries: []
registries:
registries:
- registry.access.redhat.com
store:
ContainerStore:
number: 14
GraphDriverName: overlay
GraphOptions:
- overlay.override_kernel_check=true
GraphRoot: /var/lib/containers/storage
GraphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "true"
Supports d_type: "true"
ImageStore:
number: 1
RunRoot: /var/run/containers/storage
Additional environment details (AWS, VirtualBox, physical, etc.):
The environment is running on RHEL 7.5, on AWS EC2.
I wonder if we're not seeing a race between conmon and the attach part of podman run here. Setting up the attach socket in conmon happens fairly late in the process of creating the container, after we inform libpod that it's safe to proceed, but conmon is generally so lightweight that the timing condition is never hit.
We could potentially set up the attach socket in conmon before we fork off runc - it'll potentially leave a socket and a symlink lying around in cases where runc fails, though.
Brent can you look into this?
@rhatdan I think this has to be a fix in conmon
@giuseppe Mind taking a look at this one? We'd like conmon to make the attach socket earlier in its lifetime, before it starts runc, to try and avoid a timing issue.
hm.. I see there is no race in conmon when the attach socket is created, as podman blocks on receiving the container PID which is sent by conmon after the attach socket is created. I was not able to reproduce this issue. I wonder if it depends on a change we did in conmon that correctly reports the exit status of a container now (before it was reporting 0 when the container was signaled) and podman still trying to attach to the container.
@ajacocks is it still reproducible for you? If yes, could you attach the output of podman --log-level=debug ... when it fails?
@giuseppe Yes, indeed, I can still reproduce it on RHEL7.5-current:
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)
$ uname -r
3.10.0-862.3.3.el7.x86_64
And here are the logs, as requested:
# podman --log-level=debug run -v /usr/sbin:/usr/sbin --rm -it rhel7 /usr/sbin/ip a
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: override_kernelcheck=true
DEBU[0000] overlay test mount with multiple lowers succeeded
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true
INFO[0000] CNI network podman (type=bridge) is used from /etc/cni/net.d/87-podman-bridge.conflist
INFO[0000] Initial CNI setting succeeded
DEBU[0000] parsed reference to refname into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.override_kernel_check=true]docker.io/library/rhel7:latest"
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.override_kernel_check=true]docker.io/library/rhel7:latest" does not resolve to an image ID
DEBU[0000] parsed reference to refname into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.override_kernel_check=true]docker.io/library/rhel7:latest"
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.override_kernel_check=true]docker.io/library/rhel7:latest" does not resolve to an image ID
DEBU[0000] parsed reference to id into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.override_kernel_check=true]@7a840db7f020be49bb60fb1cc4f1669e83221d61c1af23ff2cac2f870f9deee8"
DEBU[0000] exporting opaque data as blob "sha256:7a840db7f020be49bb60fb1cc4f1669e83221d61c1af23ff2cac2f870f9deee8"
DEBU[0000] parsed reference to id into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.override_kernel_check=true]@7a840db7f020be49bb60fb1cc4f1669e83221d61c1af23ff2cac2f870f9deee8"
DEBU[0000] exporting opaque data as blob "sha256:7a840db7f020be49bb60fb1cc4f1669e83221d61c1af23ff2cac2f870f9deee8"
DEBU[0000] parsed reference to id into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.override_kernel_check=true]@7a840db7f020be49bb60fb1cc4f1669e83221d61c1af23ff2cac2f870f9deee8"
DEBU[0000] Using bridge netmode
DEBU[0000] parsed reference to id into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.override_kernel_check=true]@7a840db7f020be49bb60fb1cc4f1669e83221d61c1af23ff2cac2f870f9deee8"
DEBU[0000] created container "61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b"
DEBU[0000] container "61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b" has work directory "/var/lib/containers/storage/overlay-containers/61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b/userdata"
DEBU[0000] container "61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b" has run directory "/var/run/containers/storage/overlay-containers/61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b/userdata"
DEBU[0000] New container created "61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b"
DEBU[0000] container "61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b" has CgroupParent "/libpod_parent/libpod-conmon-61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b/61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b"
DEBU[0000] mounted container "61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b" at "/var/lib/containers/storage/overlay/3b19d27499861f080befd6dac91aeabbb857095bc0d8d43083cb0c85d9e4360b/merged"
DEBU[0000] Created root filesystem for container 61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b at /var/lib/containers/storage/overlay/3b19d27499861f080befd6dac91aeabbb857095bc0d8d43083cb0c85d9e4360b/merged
DEBU[0000] Made network namespace at /var/run/netns/cni-d65d8258-7130-ea13-6aee-3becd5e8bc1e for container 61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b
INFO[0000] Got pod network {Name:nervous_mayer Namespace:nervous_mayer ID:61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b NetNS:/var/run/netns/cni-d65d8258-7130-ea13-6aee-3becd5e8bc1e PortMappings:[]}
INFO[0000] About to add CNI network cni-loopback (type=loopback)
INFO[0000] Got pod network {Name:nervous_mayer Namespace:nervous_mayer ID:61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b NetNS:/var/run/netns/cni-d65d8258-7130-ea13-6aee-3becd5e8bc1e PortMappings:[]}
INFO[0000] About to add CNI network podman (type=bridge)
DEBU[0000] Response from CNI plugins: Interfaces:[{Name:cni0 Mac:1a:7e:d7:a1:bf:7e Sandbox:} {Name:veth3e515f33 Mac:32:c4:1a:96:32:f7 Sandbox:} {Name:eth0 Mac: Sandbox:/var/run/netns/cni-d65d8258-7130-ea13-6aee-3becd5e8bc1e}], IP:[{Version:4 Interface:0xc420338660 Address:{IP:10.88.0.28 Mask:ffff0000} Gateway:10.88.0.1}], Routes:[{Dst:{IP:0.0.0.0 Mask:00000000} GW:<nil>}], DNS:{Nameservers:[] Domain: Search:[] Options:[]}
DEBU[0000] Running iptables command: -t filter -I FORWARD -s 10.88.0.28 ! -o 10.88.0.28 -j ACCEPT
WARN[0000] file "/etc/containers/mounts.conf" not found, skipping...
DEBU[0000] Creating dest directory: /var/run/containers/storage/overlay-containers/61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b/userdata/run/secrets
DEBU[0000] Calling TarUntar(/usr/share/rhel/secrets, /var/run/containers/storage/overlay-containers/61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b/userdata/run/secrets)
DEBU[0000] TarUntar(/usr/share/rhel/secrets /var/run/containers/storage/overlay-containers/61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b/userdata/run/secrets)
WARN[0000] hooks path: "/usr/share/containers/oci/hooks.d" does not exist
WARN[0000] hooks path: "/etc/containers/oci/hooks.d" does not exist
DEBU[0000] parsed reference to id into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.override_kernel_check=true]@7a840db7f020be49bb60fb1cc4f1669e83221d61c1af23ff2cac2f870f9deee8"
DEBU[0000] parsed reference to id into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.override_kernel_check=true]@7a840db7f020be49bb60fb1cc4f1669e83221d61c1af23ff2cac2f870f9deee8"
DEBU[0000] exporting opaque data as blob "sha256:7a840db7f020be49bb60fb1cc4f1669e83221d61c1af23ff2cac2f870f9deee8"
DEBU[0000] parsed reference to id into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.override_kernel_check=true]@7a840db7f020be49bb60fb1cc4f1669e83221d61c1af23ff2cac2f870f9deee8"
DEBU[0000] exporting opaque data as blob "sha256:7a840db7f020be49bb60fb1cc4f1669e83221d61c1af23ff2cac2f870f9deee8"
DEBU[0000] parsed reference to id into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.override_kernel_check=true]@7a840db7f020be49bb60fb1cc4f1669e83221d61c1af23ff2cac2f870f9deee8"
DEBU[0000] Created OCI spec for container 61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b at /var/lib/containers/storage/overlay-containers/61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b/userdata/config.json
DEBU[0000] running conmon: /usr/libexec/podman/conmon args=[-c 61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b -u 61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b/userdata -p /var/run/containers/storage/overlay-containers/61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b/userdata/pidfile -l /var/lib/containers/storage/overlay-containers/61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket -t]
DEBU[0000] Received container pid: 11642
DEBU[0000] Created container 61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b in OCI runtime
DEBU[0000] Attaching to container 61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b
DEBU[0000] connecting to socket /var/run/libpod/socket/61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b/attach
DEBU[0000] Started container 61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b
DEBU[0000] Enabling signal proxying
ERRO[0000] error attaching to container 61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b: failed to connect to container's attach socket: %!!(MISSING)v(MISSING): dial unixpacket /var/run/libpod/socket/61def006a50630502069ed457e43f75d6910b4496b8129198afe3c3cd451e79b/attach: connect: no such file or directory
Hopefully this helps.
I've tried on RHEL 7.5 with podman version 0.4.1 and I am still unable to reproduce this issue.
Could you please attach any output from conmon that you see in the journal?
@giuseppe Your wish is my command, sir!
Jun 18 21:33:01 ip-10-0-2-174.ec2.internal conmon[22450]: conmon <ninfo>: addr{sun_family=AF_UNIX, sun_path=/tmp/conmon-term.JM3RKZ}
Jun 18 21:33:01 ip-10-0-2-174.ec2.internal conmon[22450]: conmon <ninfo>: about to waitpid: 22451
Jun 18 21:33:01 ip-10-0-2-174.ec2.internal conmon[22450]: conmon <ninfo>: about to accept from console_socket_fd: 9
Jun 18 21:33:01 ip-10-0-2-174.ec2.internal conmon[22450]: conmon <ninfo>: about to recvfd from connfd: 15
Jun 18 21:33:02 ip-10-0-2-174.ec2.internal kernel: SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
Jun 18 21:33:02 ip-10-0-2-174.ec2.internal conmon[22450]: conmon <ninfo>: console = {.name = '/dev/ptmx8 21:33:01 conmon: conmon <ninfo>: about to recvfd from connfd: 15
'; .fd = 9}
Jun 18 21:33:02 ip-10-0-2-174.ec2.internal conmon[22450]: conmon <ninfo>: container PID: 22458
Jun 18 21:33:02 ip-10-0-2-174.ec2.internal conmon[22450]: conmon <ninfo>: attach sock path: /var/run/libpod/socket/4e30365196879c256cc6f0bf5bcea63e39acb8929374ae6508ef74f7f4293305/attach
Jun 18 21:33:02 ip-10-0-2-174.ec2.internal conmon[22450]: conmon <ninfo>: addr{sun_family=AF_UNIX, sun_path=/var/run/libpod/socket/4e30365196879c256cc6f0bf5bcea63e39acb8929374ae6508ef74f7f4293305/attach}
Jun 18 21:33:02 ip-10-0-2-174.ec2.internal conmon[22450]: conmon <ninfo>: ctl fifo path: /var/lib/containers/storage/overlay-containers/4e30365196879c256cc6f0bf5bcea63e39acb8929374ae6508ef74f7f4293305/userdata/ctl
Jun 18 21:33:02 ip-10-0-2-174.ec2.internal conmon[22450]: conmon <ninfo>: terminal_ctrl_fd: 16
Jun 18 21:33:02 ip-10-0-2-174.ec2.internal conmon[22450]: conmon <ninfo>: container 22458 exited with status 0
Jun 18 21:33:02 ip-10-0-2-174.ec2.internal conmon[22450]: conmon <nwarn>: stdio_input read failed Input/output error
@ajacocks thanks a lot! That helped to understand. It seems like the container terminates so quickly that conmon deletes the attach socket during the cleanup and podman doesn't find it. We should handle the ENOENT differently from podman as it means the container was already terminated
hm.. that should not happen though, as the container is started after the console was attached. I need to investigate it further
@rhatdan, @mheon I think the issue is here:
https://github.com/projectatomic/libpod/blob/master/libpod/container_api.go#L178-L192
We do the attach from a goroutine, but the c.start() might happen before the attach is done. We'll need to change how c.attach() works and be able to plug the start immediately after the net.DialUnix is done.
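A minimal sketch of that ordering fix, assuming a helper that dials the attach socket and signals readiness before the container is started (all names here are illustrative, not libpod's actual API):

package attachsketch // illustrative only, not libpod code

import (
	"fmt"
	"net"
)

// startWithAttach dials the attach socket first and only starts the
// container once the connection is established, removing the race.
func startWithAttach(attachSock string, start func() error, proxy func(*net.UnixConn)) error {
	ready := make(chan error, 1)
	go func() {
		conn, err := net.DialUnix("unixpacket", nil,
			&net.UnixAddr{Name: attachSock, Net: "unixpacket"})
		ready <- err // report the dial result before streaming anything
		if err != nil {
			return
		}
		defer conn.Close()
		proxy(conn) // stream the container's stdio over the attach socket
	}()
	// Block until the attach socket is actually connected, then start.
	if err := <-ready; err != nil {
		return fmt.Errorf("attach failed: %v", err)
	}
	return start()
}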
Please create a patch.
And we'd just removed the synchronization calls from attach()... Damn it.
@mheon is it something we can easily revert?
@giuseppe Don't think so, I did a large refactor to implement --attach and the sync primitives were one of the casualties. Thought they were unnecessary if we just ordered the calls in StartWithAttach()
Ok thanks. I will work on a patch as tomorrow I am back from PTO
PR here: https://github.com/projectatomic/libpod/pull/970
Thanks so much, everyone!
|
gharchive/issue
| 2018-05-29T03:50:32 |
2025-04-01T06:40:05.466182
|
{
"authors": [
"ajacocks",
"giuseppe",
"mheon",
"rhatdan"
],
"repo": "projectatomic/libpod",
"url": "https://github.com/projectatomic/libpod/issues/835",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
117329172
|
Calico with libnetwork does not support profile rules?
I cannot find any guide on using profile rules with libnetwork.
And in this doc, https://github.com/projectcalico/calico-docker/blob/master/docs/calicoctl/profile.md , it says:
NOTE: The calicoctl profile commands should NOT be used when running Calico with the Docker libnetwork driver. The libnetwork driver manages the security policy for containers.
Hi @frostynova - this one is on our radar, but still needs a little thought about how to handle. It may simply end up being a testing/documentation thing - but we'll keep you posted.
@frostynova
We've just released v0.12.0, which adds Calico IPAM support. One upshot of this is that, provided you are using Calico IPAM, it should be possible to edit the profile rules rather than using the default behavior.
We do still need to do a better job of documenting this - or possibly thinking about new commands, or new docker network options to improve this. I'll close this issue and raise a new one for that side of things.
In the meantime, we do now touch on the manipulation of profiles and endpoints in the libnetwork demonstration.
|
gharchive/issue
| 2015-11-17T10:50:33 |
2025-04-01T06:40:05.491062
|
{
"authors": [
"frostynova",
"robbrockbank"
],
"repo": "projectcalico/calico-docker",
"url": "https://github.com/projectcalico/calico-docker/issues/630",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
268462495
|
Still seeing "error adding host side routes for interface: xxx, error: failed to add route file exists" in Calico 2.6.1
Still seeing error adding host side routes for interface: xxx, error: failed to add route file exists which is keeping containers from starting (stuck in "ContainerCreating"). This is detailed in https://github.com/projectcalico/cni-plugin/issues/352 (and also more detail about orphan WEPs which are a part of this problem can be found here https://github.com/projectcalico/calico/issues/1115). It was thought that fix(es) in Calico 2.6.1 would solve this problem, but we are still seeing it.
Your Environment
Calico version: 2.6.1
Orchestrator version: Kubernetes 1.8.1
Operating System and version: Ubuntu 16.04.3
I will attach the logs, there are quite a few containers stuck in this ContainerCreating state, one is master-b4fe00b59ff948088731b4985367b705-6b987df84d-bz9zv in the kubx-masters namespace
calico-node.log
kubelet.zip
calico-node.log
Above are the logs I could attach. I also have syslog, but it is too large to attach, let me know if you need it and I'll get it to you somehow.
@bradbehle is this on a fresh cluster, or an upgraded cluster from a previous version? Could you do a quick double-check of the version of the CNI plugin in use?
This should do it:
/opt/cni/bin/calico -v
@caseydavenport
CNI plugin is at 1.11.0:
behle@dev-mex01-carrier80-worker-01:~$ /opt/cni/bin/calico -v
v1.11.0
behle@dev-mex01-carrier80-worker-01:~$
And it was an upgraded cluster that was originally at kube 1.5 and Calico 2.1.5
I've seen similar behaviour:
# /opt/cni/bin/calico -v
v1.11.0
k8s: 1.8.1
calico: 2.6.1
OS: ubuntu 14.04
kubelet logs:
https://gist.github.com/r0bj/a656c75a7b08e1b79be4671206b70779
calico-node logs:
https://gist.github.com/r0bj/1279b3fa321ea5d47d66f5e4c945eadb
@bradbehle @r0bj I have 2 PRs up for the fix, one for CNI v2.0 and one backported to CNI v1.11.x. I've made a CNI image with the fix backported to CNI v1.11.0, so you can try it out. I haven't been able to reproduce it, but I've added a test to replicate the issue as best as I can. You can try the debug image with the fix at gunjan5/cni:route2; it should also have the CNI binary.
@bradbehle @r0bj just checking to see if you've had a chance to try out the debug image, if it works then we can get the PRs merged and get the fix included in the next patch releases
@gunjan5 for me it's just difficult to reproduce it. I have encountered it twice on production cluster and after node restart everything worked as expected again. If I have better way of reproducing I'll test your debug image for sure.
Hi @gunjan5,
I have the exact same problem as @r0bj.
I took your patch and applied it in my environment.
And now I see the below error:
Nov 8 00:53:33 node1 kubelet: E1108 00:53:33.577354 15218 pod_workers.go:182] Error syncing pod fb3638af-c443-11e7-9f0f-0894ef42f61e ("test1-test1-1193689166-zqh8b_namespace1(fb3638af-c443-11e7-9f0f-0894ef42f61e)"), skipping: failed to "SetupNetwork" for "aio-stage3-serviceavailability-mobilezip-1193689166-zqh8b_staging3" with SetupNetworkError: "NetworkPlugin cni failed to set up pod "aio-stage3-serviceavailability-mobilezip-1193689166-zqh8b_staging3" network: error adding host side routes for interface: cali38f5e43d7eb, error: route (Ifindex: 8374, Dst: 172.40.107.158/32, Scope: %!!(MISSING)s(netlink.Scope=253)) already exists for an interface other than 'cali38f5e43d7eb'"
The error is a little different this time.
It says the route already exists for an interface other than 'calixxxxx'.
What needs to be done to resolve this?
Is there any workaround you can suggest for now?
@msavlani can you post the CNI debug logs?
@gunjan5 I used your debug image and I was able to reproduce it.
kubelet error message:
2017-11-10 16:34:00.247 [WARNING][27292] calico-ipam.go 236: Asked to release address but it doesn't exist. Ignoring Workload="ops.nginx-stateful-test-0" workloadId="ops.nginx-stateful-test-0"
E1110 16:34:00.256432 1467 cni.go:301] Error adding network: error adding host side routes for interface: cali05ad595351c, error: route (Ifindex: 93, Dst: 10.203.131.144/32, Scope: %!!(MISSING)s(netlink.Scope=253)) already exists for an interface other than 'cali05ad595351c'
E1110 16:34:00.256462 1467 cni.go:250] Error while adding to cni network: error adding host side routes for interface: cali05ad595351c, error: route (Ifindex: 93, Dst: 10.203.131.144/32, Scope: %!!(MISSING)s(netlink.Scope=253)) already exists for an interface other than 'cali05ad595351c'
I1110 16:34:00.306370 1467 kubelet.go:1871] SyncLoop (PLEG): "nginx-stateful-test-0_ops(90302ede-c634-11e7-883a-5254009ef6db)", event: &pleg.PodLifecycleEvent{ID:"90302ede-c634-11e7-883a-5254009ef6db", Type:"ContainerStarted", Data:"85e04cf0fcb9830d0824805bb9ca4849b5c53ee93124332a746d22a8956de823"}
E1110 16:34:00.457030 1467 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-stateful-test-0_ops" network: error adding host side routes for interface: cali05ad595351c, error: route (Ifindex: 93, Dst: 10.203.131.144/32, Scope: %!!(MISSING)s(netlink.Scope=253)) already exists for an interface other than 'cali05ad595351c'
calico error:
2017-11-10 16:33:16.862 [WARNING][156] route_table.go 477: Failed to add route error=file exists ifaceName="cali05ad595351c" ipVersion=0x4 targetCIDR=10.203.131.144/32
2017-11-10 16:33:16.862 [INFO][156] route_table.go 200: Trying to connect to netlink
2017-11-10 16:33:16.863 [WARNING][156] route_table.go 577: Failed to access interface but it now appears to be up error=netlink update operation failed ifaceName="cali05ad595351c" ipVersion=0x4 link=&netlink.Veth{LinkAttrs:netlink.LinkAttrs{Index:78, MTU:1500, TxQLen:0, Name:"cali05ad595351c", HardwareAddr:net.HardwareAddr{0x6e, 0x40, 0x72, 0xce, 0xb0, 0xb3}, Flags:0x13, RawFlags:0x11043, ParentIndex:4, MasterIndex:0, Namespace:interface {}(nil), Alias:"", Statistics:(*netlink.LinkStatistics)(0xc420ad7140), Promisc:0, Xdp:(*netlink.LinkXdp)(nil), EncapType:"ether", Protinfo:(*netlink.Protinfo)(nil), OperState:0x6}, PeerName:""}
full kubelet logs:
https://gist.github.com/r0bj/9a96a0041733e9cb9a2e126d4ef224b2
full calico logs:
https://gist.github.com/r0bj/ab637b6ab3deeaa4a88646d5f22dad4d
Another example:
kubelet logs:
https://gist.github.com/r0bj/9ee5a7b1d15fde85390f5ceb99a76c15
calico logs:
https://gist.github.com/r0bj/a31672f4835ed4d12c41a788ede495ac
@r0bj @msavlani Thanks for providing the logs, both the kubelet and calico/node logs make it look like we've assigned the IP to one endpoint, then tried to assign it to another but it's not clear from the logs how that happened because the logs start around the time that we're trying to assign the IP to the second endpoint.
If you can get a node in the bad state again, it'd help to have:
any logs from before the problem, if you could look back to see if the IP of the problematic route occurs earlier in the log, that'd be good
the output of ip route on the affected node, that'll confirm that the bad route belongs to Calico and that it's still in-place
the output of calicoctl get workloadendpoint --node=<nodename> -o yaml for the affected node; that should dump all the workloads that Calico thinks it has on the node.
@gunjan5 From code reading, I spot one issue; after a failed networking attempt, we always clean up the IPAM allocation. However, if this is an attempt to re-network a pod then we've already written the IP into the workload endpoint when we first networked the pod so, I think, we end up deleting the IPAM reservation without removing the IP from the workload endpoint. Later, we'll then try to re-use that IP for another pod and hit this failure mode.
If we're in the existing endpoint case, I think we just need to leave the IPAM allocation as is and fail the CNI request so that it gets retried. I guess we could delete the workload endpoint.
@fasaxc data you requested:
logs from the the time when node started:
kubelet logs:
https://gist.github.com/r0bj/f795bf18dec385ebd5e0035544649d17
calico-node logs:
https://gist.github.com/r0bj/02819862d4e30c19501b06ecd9dc67c3
ip route on the node:
https://gist.github.com/r0bj/58c931dec5e795d8fa068be21cf5c747
output of calicoctl get workloadendpoint --node=dev-k8s-worker-p4 -o yaml:
https://gist.github.com/r0bj/4eda1531de42bb3d18df439604dd4db6
Awesome, thanks @r0bj, it looks like you may have hit the case I described above. You have two workload endpoints with the same IP assigned. In addition, the IPAM error in the log indicates that the IPAM allocation was incorrectly lost or cleaned up.
Assuming this is the only instance of the problem on your node, a temporary workaround would be to delete and recreate these two pods: ops.nsqlookupd2-7574c7fbd5-rnntr and ops.nginx-stateful-test-0.
I found those by searching for the IP address of the failing route in the workload endpoint dump.
@fasaxc, I am encountering the exact same problem.
Lately I am facing this with every pod running on that one node and have had to manually delete pods with the same endpoint (going in circles).
Can you provide your fix patch so that I can try it in my environment and see if it resolves the problem?
@fasaxc thanks for the fix, I upgraded to v1.x-series/v2.x-series docker tags for cni and node, it solved the issue for me.
@msavlani The fix is in This release of the CNI plugin. We're about to release a Calico v2.6.3 patch release to include it. https://github.com/projectcalico/cni-plugin/releases/tag/v1.11.1
Note: after taking the fix, you'll need to remove and recreate all the pods on affected nodes.
Fixed by #425 #418 #408 #406, to be released in Calico v2.6.3 (CNI plugin v1.11.1) and and Calico v3.0.
Please open a new issue if you see this again after upgrading (the Calico release should be out in the next few days). Note: as mentioned above, one issue was that, after a failure, we were cleaning up the IPAM entry in the datastore even though it was in use. After that has occurred the datastore is inconsistent. To resolve:
upgrade to release of Calico with the fix
list all the pods on nodes with the problem
delete/recreate any pods that share an IP address (one way to spot the duplicates is sketched below).
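A rough way to spot those duplicates, reusing the calicoctl command already shown in this thread (the grep pattern assumes IPv4 /32 endpoint addresses; adjust as needed for your setup):

calicoctl get workloadendpoint --node=<nodename> -o yaml \
  | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}/32' \
  | sort | uniq -d

Any address printed by uniq -d is assigned to more than one workload endpoint on that node.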
I am still facing this issue and raised another issue #1406 as per above suggestion.
|
gharchive/issue
| 2017-10-25T16:22:48 |
2025-04-01T06:40:05.517292
|
{
"authors": [
"MikaelCluseau",
"bradbehle",
"caseydavenport",
"fasaxc",
"gunjan5",
"msavlani",
"r0bj"
],
"repo": "projectcalico/calico",
"url": "https://github.com/projectcalico/calico/issues/1253",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
568987380
|
vxlan interface always down, causing calico-node to fail to add routes for pods on other nodes
calico version: 3.11.2
k8s version: 1.17
I have a k8s cluster with 3 nodes, which uses the Calico CNI.
Two Calico IP pools are set up, both of them using VXLAN mode:
# calicoctl get ippool -o wide
NAME CIDR NAT IPIPMODE VXLANMODE DISABLED SELECTOR
default-ipv4-ippool 172.29.0.0/16 true Never Always false all()
kube-system-ipv4-ippool 172.28.0.0/16 true Never Always false all()
Then the strange thing happened: the vxlan.calico interface on one node always stays down. I tried to set it up, but after about 3 seconds the tunnel interface goes down again. That makes calico-node fail to add routes for the other 2 nodes.
After I delete the calico-node daemonset, I can set the vxlan.calico interface up. However, when I apply the calico-node daemonset again, the vxlan.calico interface on the same node stays down again and I cannot set it up. So I think calico-node must be finding something wrong and keeps taking the vxlan.calico interface down.
The other 2 nodes, however, work fine.
BTW, my other cluster with the same Calico configuration does not have this issue.
Finally, I found a workaround: delete the tunnel interface, which triggers calico-node to recreate it; after that, everything works fine.
This cluster environment may have had nodes scaled up or Calico reinstalled. Maybe the tunnel interface was created earlier with a VXLAN configuration that is wrong for the current setup, so deleting it triggers Calico to regenerate the right one.
The calico-node log of the bad node is below:
2020-02-21 13:51:39.739 [INFO][50] int_dataplane.go 849: Received interface update msg=&intdataplane.ifaceUpdate{Name:"vxlan.calico", State:"up"}
2020-02-21 13:51:39.740 [INFO][50] int_dataplane.go 962: Applying dataplane updates
2020-02-21 13:51:39.740 [INFO][50] int_dataplane.go 597: Linux interface state changed. ifaceName="vxlan.calico" state="down"
2020-02-21 13:51:39.745 [INFO][50] route_table.go 577: Syncing routes: adding new route. ifaceName="vxlan.calico" ipVersion=0x4 targetCIDR=172.29.122.0/26
2020-02-21 13:51:39.745 [WARNING][50] route_table.go 604: Failed to add route error=network is down ifaceName="vxlan.calico" ipVersion=0x4 targetCIDR=172.29.122.0/26
2020-02-21 13:51:39.745 [INFO][50] route_table.go 577: Syncing routes: adding new route. ifaceName="vxlan.calico" ipVersion=0x4 targetCIDR=172.28.122.0/26
2020-02-21 13:51:39.745 [WARNING][50] route_table.go 604: Failed to add route error=network is down ifaceName="vxlan.calico" ipVersion=0x4 targetCIDR=172.28.122.0/26
2020-02-21 13:51:39.745 [INFO][50] route_table.go 577: Syncing routes: adding new route. ifaceName="vxlan.calico" ipVersion=0x4 targetCIDR=172.29.193.64/26
2020-02-21 13:51:39.745 [WARNING][50] route_table.go 604: Failed to add route error=network is down ifaceName="vxlan.calico" ipVersion=0x4 targetCIDR=172.29.193.64/26
2020-02-21 13:51:39.745 [INFO][50] route_table.go 247: Trying to connect to netlink
The route table of the bad node is below; it only has local pod routes and is missing the subnet routes for pods on other nodes:
# ip r
default via 10.6.0.1 dev dce-mng proto static metric 100
default via 10.7.0.1 dev ens224 proto static metric 101
10.6.0.0/16 dev dce-mng proto kernel scope link src 10.6.150.57 metric 100
10.7.0.0/16 dev ens224 proto kernel scope link src 10.7.117.177 metric 100
169.254.0.0/16 dev parcel-vip proto kernel scope link src 169.254.232.109
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.28.70.128 dev calia1e9256f877 scope link
172.28.70.129 dev cali23d0198f573 scope link
172.28.70.130 dev cali63579a15e5c scope link
172.28.70.131 dev calie07fbc9b479 scope link
NetworkManager has no configuration for this tunnel:
# nmcli con show
NAME UUID TYPE DEVICE
Wired connection 1 a7eba6a4-1e06-3217-a476-8fb0326c9a38 802-3-ethernet ens224
dce-mng 7adf786c-6968-4d2c-aba0-8f0e89606097 802-3-ethernet dce-mng
docker0 d0d7ca77-316e-4260-8b52-8bf6a811ea3f bridge docker0
Expected Behavior
tunnel interface should stay up , and route should be added for pod on other nodes
Current Behavior
tunnel interface stay down , and pods on the bad node can not access pods on other nodes
Possible Solution
Steps to Reproduce (for bugs)
Context
Your Environment
Calico version
Orchestrator version (e.g. kubernetes, mesos, rkt):
Operating System and version:
Link to your project (optional):
@weizhouBlue
Try to add the following to /etc/NetworkManager/conf.d/calico.conf
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico
After that, restart NetworkManager and check if the vxlan.calico interface comes up.
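For reference, a sketch of the whole file (the Calico troubleshooting docs place this key under a [keyfile] section; double-check against your NetworkManager version):

# /etc/NetworkManager/conf.d/calico.conf
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico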
|
gharchive/issue
| 2020-02-21T14:26:38 |
2025-04-01T06:40:05.526203
|
{
"authors": [
"rgarcia89",
"weizhouBlue"
],
"repo": "projectcalico/calico",
"url": "https://github.com/projectcalico/calico/issues/3271",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1081527014
|
Request to build multiarch docker images
When attempting to use Calico with the operator install and the apiserver enabled, the apiserver pod fails to deploy, as no arm64 image is available.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 110s default-scheduler Successfully assigned calico-apiserver/calico-apiserver-6ccdbf984d-jz568 to k8s-controller
Normal BackOff 21s (x5 over 104s) kubelet Back-off pulling image "docker.io/calico/apiserver:master"
Warning Failed 21s (x5 over 104s) kubelet Error: ImagePullBackOff
Normal Pulling 6s (x4 over 107s) kubelet Pulling image "docker.io/calico/apiserver:master"
Warning Failed 4s (x4 over 105s) kubelet Failed to pull image "docker.io/calico/apiserver:master": rpc error: code = Unknown desc = no matching manifest for linux/arm64/v8 in the manifest list entries
Warning Failed 4s (x4 over 105s) kubelet Error: ErrImagePull
Merged https://github.com/projectcalico/calico/pull/5386
apiserver is still missing some other arch, but this issue can be closed for now.
That's great thanks very much... however, I see the following currently:
Events:
Type Reason Age From Message
Normal Scheduled 3m49s default-scheduler Successfully assigned calico-apiserver/calico-apiserver-6fdddbdc9c-gr5xg to k8s-worker03
Normal Pulling 2m11s (x4 over 3m46s) kubelet Pulling image "docker.io/calico/apiserver:v3.21.4"
Warning Failed 2m9s (x4 over 3m45s) kubelet Failed to pull image "docker.io/calico/apiserver:v3.21.4": rpc error: code = Unknown desc = no matching manifest for linux/arm64/v8 in the manifest list entries
Warning Failed 2m9s (x4 over 3m45s) kubelet Error: ErrImagePull
Warning Failed 117s (x6 over 3m44s) kubelet Error: ImagePullBackOff
Normal BackOff 105s (x7 over 3m44s) kubelet Back-off pulling image "docker.io/calico/apiserver:v3.21.4"
|
gharchive/issue
| 2021-09-06T13:56:54 |
2025-04-01T06:40:05.531926
|
{
"authors": [
"caseydavenport",
"frozenprocess",
"jonstevecg"
],
"repo": "projectcalico/calico",
"url": "https://github.com/projectcalico/calico/issues/5222",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1964335049
|
Calico on EKS - disable source/destination check
Hey, I am sorry if this is a bit silly but I am trying to get Calico configured on my EKS cluster and I don't quite get what I am supposed to do about the AWS src/dest check setting
I am following the Tigera Operator approach and the doc has no reference to it. However, if I check the "manifests" approach, I see a reference to the below:
kubectl -n kube-system set env daemonset/calico-node FELIX_AWSSRCDSTCHECK=Disable
What should I do?
For anyone who finds this in the future: I found the answer and I am happy to leave it here.
Disabling src/dst checks is a prerequisite for using VXLAN-CrossSubnet, but not for plain VXLAN.
This means that using a Calico Installation manifest like the one below (which is the one from the Calico docs) does not require changes on the AWS side:
kind: Installation
apiVersion: operator.tigera.io/v1
metadata:
name: default
spec:
kubernetesProvider: EKS
cni:
type: Calico
ipam:
type: Calico
calicoNetwork:
bgp: Disabled
hostPorts: Enabled
linuxDataplane: Iptables
ipPools:
- blockSize: 26
cidr: 172.16.0.0/16
encapsulation: VXLAN # <---------
natOutgoing: Enabled
nodeSelector: all()
This video helped me a lot: https://www.tigera.io/blog/everything-you-need-to-know-about-kubernetes-pod-networking-on-aws/
|
gharchive/issue
| 2023-10-26T20:38:32 |
2025-04-01T06:40:05.534975
|
{
"authors": [
"MatteoMori"
],
"repo": "projectcalico/calico",
"url": "https://github.com/projectcalico/calico/issues/8166",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2123875892
|
Calico-apiserver pods are all restarting on change to extension-apiserver-authentication Configmap causing webhooks to fail
Expected Behavior
calico-apiservers should always be available to service webhooks.
Current Behavior
Currently when calico-apiserver pods detect a change to the extension-apiserver-authentication ConfigMap in the kube-system namespace they all terminate with exit code 0 causing the kubelet to restart them. During this time they are not available to handle webhooks and so can cause issues with deployments.
The following log lines are shown:
time="2024-02-06T18:40:58Z" level=info msg="Main client watcher loop"
I0206 18:41:24.185839 1 run_server.go:64] Detected change in extension-apiserver-authentication ConfigMap, exiting so apiserver can be restarted
Possible Solution
calico-apiserver should either not restart on changes to the extension-apiserver-authentication Configmap and load the changes it needs
OR
Trigger a rolling restart of the deployment to ensure only one pod is down at a time.
Steps to Reproduce (for bugs)
Trigger a change to the extension-apiserver-authentication Configmap in the kube-system namespace.
Context
Our CI is failing periodically because the webhooks are not available.
We are getting alerts for pods restarting.
Your Environment
apiserver:v3.27.0
AWS EKS
Amazon linux
@caseydavenport
Just giving my 2 cents, but keep in mind I don't know much about calico-apiserver and I'm coming from having some experience doing KTHW and setting this stuff up for the datadog-cluster-agent's implementation of the custom metrics API, so I may say some stuff that's a taller order than I think it is.
The only caveat here is that I believe there are some possible changes to the extension-apiserver-authentication configmap that would invalidate all running apiservers, so a rolling update would potentially be slower than desired. That configmap contains the certs needed to authenticate with the main Kubernetes API server, so if those certificates change it's quite possible your Calico API servers won't be functional anyway.
As long as the signing CA root for the certs in the extension-apiserver-authentication ConfigMap doesn't change, then I don't think it's going to make all the API servers non-functional. You can have requestheader-client-ca-file in that ConfigMap change its value without things breaking, since that file can be a cert bundle containing an intermediate CA (it is on my cluster, at least.) I am 90% certain I recall rotating the intermediate CA without things breaking.
Regardless, since it is a matter of the client cert you are presenting being rejected or not, I think that's kind of moot; you would just reinitialize your K8s client if you get an SSL error saying your cert was rejected. You already are proactive when you see changes to the ConfigMap, so I think you just need to make the action you take to be to reinitialize the K8s client. I assume that whatever concurrency you have going on will make that easier said than done, but hopefully it is not too hard to finagle.
AFAICT, we never have this problem with the custom metrics API backed by the datadog-cluster-agent, so I do believe this is possible to accomplish without the pods restarting themselves.
Add a config option to disable that restart in the Calico API server, so it doesn't restart itself on change.
Instead, make sure we use an annotation of the hashed configmap contents, applied to the pods from the tigera/operator.
I believe that an approach like this will be kind of annoying to end-users; my team right now is just rewriting our monitors to be less sensitive to calico-apiserver restarting because that is easier than having to finagle stuff with the tigera operator. But, I am ignorant about how calico-apiserver's internals work, so maybe this is the only feasible solution for some reason, so I don't want to discredit it. I'm mostly just saying that as an end-user, I wouldn't want to have to set some config option that by all reasonable accounts should just be the default behavior, and I wouldn't want to have to interact with the tigera operator in any capacity to facilitate this behavior that I would want to be the default. As an end-user, I would want things to just transparently work without interruption if possible. If that's not done by refreshing a K8s client when a ConfigMap change is detected, then that's fine, I would just want it to be hands-off, if possible.
Hopefully my comment gives you some good ideas or a good lead toward a solution, and you forgive me for speaking in generalities since I don't know calico-apiserver's internals. Thank you very much!
The rolling update approach was my first thought just because it ensures that, speaking generally, changes that result in a "bad" configuration can be caught by a hung upgrade. But I'm not super attached to that.
If we can manage to simply recreate the client using the new configuration, I am good with that!
|
gharchive/issue
| 2024-02-07T20:57:24 |
2025-04-01T06:40:05.545348
|
{
"authors": [
"2rs2ts",
"MichalFupso",
"caseydavenport",
"lhopki01"
],
"repo": "projectcalico/calico",
"url": "https://github.com/projectcalico/calico/issues/8493",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1106064131
|
Added dummy routetable for network policy only mode
Description
This PR adds a config flag for felix named RouteSyncDisabled:
The default value is false.
Setting this to true will prevent felix from doing any updates on the route table.
This is applied to all instances of RouteTable (IPv4, IPv6, VXLAN and Wireguard).
In order to make sure no operations are done against the linux routing table, a new implementation of RouteTable called DummyTable was added. While using the DummyTable all operations are skipped, enabling the use of felix as a network policy controller only.
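Roughly, the idea looks like this (an illustrative sketch only; the real felix RouteTable interface is much larger than the two methods shown here):

```go
package main

import "fmt"

// routeTable is a cut-down stand-in for felix's route table interface.
type routeTable interface {
	SetRoutes(ifaceName string, cidrs []string)
	Apply() error
}

// dummyTable satisfies the interface but skips every operation, so nothing
// is ever written to the Linux routing table.
type dummyTable struct{}

func (dummyTable) SetRoutes(ifaceName string, cidrs []string) {}
func (dummyTable) Apply() error                               { return nil }

func newRouteTable(routeSyncDisabled bool) routeTable {
	if routeSyncDisabled {
		return dummyTable{}
	}
	// ...return the real implementation here...
	return dummyTable{} // placeholder so the sketch compiles
}

func main() {
	rt := newRouteTable(true)
	rt.SetRoutes("eth0", []string{"10.0.0.0/24"})
	fmt.Println(rt.Apply()) // nil: no routes were touched
}
```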
Related issues/PRs
More information on the context for this change can be found in #5247.
Release Note
None required?
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
Thanks @juanfresia for the PR. There are multiple places need to be updated in order to add a Felix configuration.
./api directory. https://github.com/projectcalico/calico/blob/master/api/pkg/apis/projectcalico/v3/felixconfig.go and auto generated files
./libcalico directory. CI and auto generated files
Thanks you both for the comments, I'll send a commit to address them shortly
Hi @song-jiang @fasaxc sorry for the delay. I've just updated the PR with the requested changes.
I added the same change for IPv6, VXLAN and Wireguard, which are all the uses of RouteTable I could find.
I think IPv6 and IPv4 are quite similar; but I'm not super familiar with the other two. If you know of any side effects of this change for those, you can point me in the right direction to fix them; or maybe it'd be an alternative to keep this flag from affecting VXLAN and Wireguard.
I also changed the semantics to be RouteSyncDisabled with an omitempty config and a false default.
api and libcalico-go sub-projects have been updated (hopefully, in the right way).
Also updated the PR's original description.
Let me know if there are any more needed changes/or ideas to be added here. Since this is basically adding a Dummy implementation of an interface, I'm not quite sure which tests would be valuable here; let me know if you have any ideas here as well.
@juanfresia I've reviewed your changes and added couple of comments. Thanks!
@song-jiang Just fixed the log level as per your comment. I had left you some question regarding your other comments, what do you think would be the best alternative?
@juanfresia I'm sorry I somehow missed this thread and have been on vacation lately. Thanks for all your work.
I'd like to get input from @mikestephen on changes related to wireguard, but the rest code changes LGTM.
By the way, could you please rebase on master branch again?
Thanks @song-jiang , I'll rebase the changes
@juanfresia looks like this still needs a rebase.
/sem-approve
Hi @mikestephen! I believe @song-jiang wanted your input on this change, do you have any thoughts on this?
@juanfresia I had a chat with @mikestephen and @caseydavenport , we think at this point, the PR is good to get merged. I'll raise a follow up PR to address two improvements:
Fix Calico doc for the new
/sem-approve
/sem-approve
@juanfresia looks like this needs another rebase and then it's ready to go with the passing tests.
/sem-approve
Thanks @mgleung, I'll sync it again against master shortly
/sem-approve
Hi @song-jiang, you mentioned some documentation PRs missing; is there an example I can use as an example? Do you think we can merge this PR or should we keep it open until documentation is ready?
Other than that, is there anything we are missing to merge this change?
@juanfresia Sorry I missed your comment. I think we can just merge this PR. We need to do the following:
Fix Calico doc for the new felix parameter.
Make wireguard start using the routeTable interface.
I can take care of both points, but please let me know if you are happy to do the first one. Thanks.
|
gharchive/pull-request
| 2022-01-17T16:37:00 |
2025-04-01T06:40:05.560853
|
{
"authors": [
"CLAassistant",
"caseydavenport",
"juanfresia",
"mgleung",
"song-jiang"
],
"repo": "projectcalico/calico",
"url": "https://github.com/projectcalico/calico/pull/5454",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
227713701
|
Documented CALICO_NAT_OUTGOING Parameter
Added documentation for CALICO_NAT_OUTGOING added by pull request #1640.
Fixes #516
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
This looks good but will need some updates after https://github.com/projectcalico/calicoctl/pull/1640 is finalized.
@VincentS Could you please rebase this and get rid of that merge commit from me? I hit the appealing "Update branch" button and it created that merge commit, sorry I'd never used it before and need to remember to never use it again.
After the rebase then I think this is good.
Done!
LGTM
@robbrockbank What do you think?
LGTM
|
gharchive/pull-request
| 2017-05-10T15:10:57 |
2025-04-01T06:40:05.567876
|
{
"authors": [
"CLAassistant",
"VincentS",
"robbrockbank",
"tmjd"
],
"repo": "projectcalico/calico",
"url": "https://github.com/projectcalico/calico/pull/775",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
255456097
|
Clean up Makefile
Description
A few sentences describing the overall goals of the pull request's commits.
Please include
the type of fix - (e.g. bug fix, new feature, documentation)
some details on why this PR should be merged
the details of the testing you've done on it (both manual and automated)
which components are affected by this PR
links to issues that this PR addresses
Todos
[ ] Tests
[ ] Documentation
[ ] Release note
Release Note
None required
CC @svInfra17
@bcreane thanks, I've updated according to your comments.
|
gharchive/pull-request
| 2017-09-06T01:44:56 |
2025-04-01T06:40:05.605704
|
{
"authors": [
"caseydavenport"
],
"repo": "projectcalico/k8s-policy",
"url": "https://github.com/projectcalico/k8s-policy/pull/122",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1574100143
|
Moving scripts to vm_manager_binaries project
Tracked-On: OAM-105759
Signed-off-by: tprabhu vignesh.t.prabhu@intel.com
LGTM
|
gharchive/pull-request
| 2023-02-07T10:48:03 |
2025-04-01T06:40:05.613233
|
{
"authors": [
"YadongQi",
"iViggyPrabhu"
],
"repo": "projectceladon/vm_manager",
"url": "https://github.com/projectceladon/vm_manager/pull/309",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
724136126
|
Adds Contour and Envoy Service Support
Adds support for managing the Contour and Envoy Service.
Fixes https://github.com/projectcontour/contour-operator/issues/51
Requires https://github.com/projectcontour/contour-operator/pull/65 for port variables.
/assign @jpeach @stevesloka
/cc @Miciah
Signed-off-by: Daneyon Hansen daneyonhansen@gmail.com
@stevesloka @jpeach what are your thoughts on https://github.com/projectcontour/contour-operator/pull/52#discussion_r510429040. As we've discussed, I would like v1alpha of contour-operator to be as consistent as possible with the upstream manifests. Do you have any reservations for Envoy's Service to use AWS NLB instead of ELB?
@Miciah commit 953224f addresses all your comments except https://github.com/projectcontour/contour-operator/pull/52#discussion_r509854161, PTAL.
@stevesloka @jpeach what are your thoughts on #52 (comment). As we've discussed, I would like v1alpha of contour-operator to be as consistent as possible with the upstream manifests. Do you have any reservations for Envoy's Service to use AWS NLB instead of ELB?
I don't know whether anyone actually uses the upstream default in production configurations. I have no objection to changing it, but I'd prefer that change to be informed by AWS deployment experience. Note that, IIRC, proxy protocol is not enabled by default.
/lgtm
Note that, IIRC, proxy protocol is not enabled by default.
@jpeach understood, and that's why I created https://github.com/projectcontour/contour-operator/issues/49.
|
gharchive/pull-request
| 2020-10-18T22:33:18 |
2025-04-01T06:40:05.619469
|
{
"authors": [
"Miciah",
"danehans",
"jpeach"
],
"repo": "projectcontour/contour-operator",
"url": "https://github.com/projectcontour/contour-operator/pull/52",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1438199611
|
Use utils helper libraries
Use utils.
This PR closes #68.
Thanks @edoardottt !
|
gharchive/pull-request
| 2022-11-07T11:24:09 |
2025-04-01T06:40:05.620714
|
{
"authors": [
"Mzack9999",
"edoardottt"
],
"repo": "projectdiscovery/fastdialer",
"url": "https://github.com/projectdiscovery/fastdialer/pull/69",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2137549107
|
Inability to List Both Normal URLs and Extensions in a Single Operation
Katana Version:
v1.0.5
Current Behavior:
Currently, Katana does not provide a way to list both normal URLs and specific file extensions in one operation. Users can either obtain a list of URLs using the default settings or apply the extension match option (-em) to filter for specific file extensions. However, when using -em, URLs without an extension are omitted from the final output.
Desired Behavior:
Modify Katana's behavior to list all URLs and file extensions by default. Introduce functionality for the -em option to serve as a filter that includes only the specified file extensions in the output. This change would allow users to see the complete set of resources initially and have the option to narrow down the results based on specific extension criteria, enhancing usability and flexibility.
Steps To Reproduce (Current Behavior):
Run Katana with the command: katana -u https://chaos.projectdiscovery.io -headless -depth 2
Observe that only URLs are listed, and file extensions are not included.
Execute Katana with extension filtering: katana -u https://chaos.projectdiscovery.io -headless -depth 2 -em css,js,ico,jpg,png,html
Notice the inclusion of specified extensions in the results, but URLs without an extension are missing.
Results
katana -u https://chaos.projectdiscovery.io -headless -depth 2
https://chaos.projectdiscovery.io
https://chaos.projectdiscovery.io/app.bundle.css
https://chaos.projectdiscovery.io/app.js
katana -u https://chaos.projectdiscovery.io -headless -depth 2 -em css,js,ico,jpg,png,html
https://chaos.projectdiscovery.io/fevicon.png
https://chaos.projectdiscovery.io/app.bundle.css
https://chaos.projectdiscovery.io/app.js
https://chaos.projectdiscovery.io/361bc8b680f5b7c8f0bd7fb587ea7666.png
https://chaos.projectdiscovery.io/326b684b7243f6148e7ec7dcd3ba1d5b.png
https://chaos.projectdiscovery.io/e9b61c5e5a0c43cdcd96fcc568af8e36.png
Proposed Fix:
Implement changes to the crawling and listing mechanism to display all accessible URLs and assets by default. Adjust the -em flag functionality to act as a post-crawl filter that refines the output to include only the assets with the specified extensions. This approach ensures a comprehensive view of the site's resources is available by default, with the flexibility to focus on specific types of files as needed.
Benefits:
Provides a complete overview of all site resources without the need to run multiple commands.
Enhances user efficiency by simplifying the process of targeting specific file types.
Improves Katana's flexibility and adaptability to different use cases.
Thanks for this issue @swdbo - I do think this is intended behavior but your ideas could be great enhancements.
cc @Mzack9999
|
gharchive/issue
| 2024-02-15T22:24:34 |
2025-04-01T06:40:05.626793
|
{
"authors": [
"olearycrew",
"swdbo"
],
"repo": "projectdiscovery/katana",
"url": "https://github.com/projectdiscovery/katana/issues/764",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
965247169
|
Notify config file isn't loading
Hello guys,
I am using a fully updated kali Linux on a VPS.
Notify Version: 0.0.2
As i installed notify and runt it as root, i got this message:
Running as root, skipping config file write to avoid permissions issues: /root/.config/notify/notify.conf
Could not read configuration file /root/.config/notify/notify.conf - ignoring error: open /root/.config/notify/notify.conf: no such file or directory
Then I proceeded to run it as a normal user and I got this message:
Found existing config file: /home/user/.config/notify/notify.conf
Could not read configuration file /home/user/.config/notify/notify.conf - ignoring error: EOF
I edited the .config file with my webhook and also removed the comment from discord: true
Then I copied the notify.conf from the /home/user/.config/notify directory to the root directory: /root/.config/notify/ (because it was missing from the root directory)
And ran it again.
The message i got is this one:
Found existing config file: /root/.config/notify/notify.conf
Could not read configuration file /root/.config/notify/notify.conf - ignoring error: yaml: line 8: did not find expected key
After that I tried again with the normal user and it WORKED.
But running it as root causes the error I mentioned above.
What is the problem, and why can't it read the configuration file when running as root?
@sh0tal soon we will be pushing new version of notify, we can confirm this is already fixed in the dev version.
Hello @ehsandeep
Okay, thank you for your answer.
|
gharchive/issue
| 2021-08-10T18:41:34 |
2025-04-01T06:40:05.631538
|
{
"authors": [
"ehsandeep",
"sh0tal"
],
"repo": "projectdiscovery/notify",
"url": "https://github.com/projectdiscovery/notify/issues/69",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
166387961
|
Geo predicates wiki
Create wiki doc listing geo specific predicates that are used in GeoConcerns. Requested by @tpendragon and the Hydra URI Management Working Group.
Let's take another look at projection predicate.
DCMI Box encoding scheme also accommodates recording projection. Would that be a solution? http://dublincore.org/documents/dcmi-box/
@johnhuck What we want, I think, is just the projection bit. It seems to me that dcmi-box describes the projection of the bounding box, not necessarily the projection of the underlying dataset. This is quite common. I suppose what I meant is: does there exist a predicate/vocabulary that describes projection that we can use out of the box? Or is it like the geospatial file format, where we'll have to cobble something together ourselves?
@johnhuck I'm sure you've seen this, but EPSG is a comprehensive, industry-standard list of coordinate reference systems. Is there a linked data vocab that already uses this list? If not, then perhaps we can build one. This one is easier than file format, because EPSG is exhaustive. Not entirely complete I'm sure, but pretty darn close I'd bet.
http://spatialreference.org/ref/epsg/4326/
Since EPSG assigns URNs to manage different projections, what they are maintaining is already a controlled vocabulary, so there wouldn't be a need to make a different one. At least as far as RDF is concerned, a URN is a URI and can be used anywhere a URI is required. But if we used bf:projection, whose range is rdfs:Literal, its URI status would be irrelevant, except as an unambiguous identifier, which could perhaps be repurposed later. Several online services like the one you mention seem to provide the value-added service of a URL for the terms. This one supplies the GML metadata for the projection, but I think it's a third-party offering: https://epsg.io/4326.gml
Added a wiki page with a basic table. Needs more information.
We have the basic wiki. I think we can close this issue.
Agreed.
|
gharchive/issue
| 2016-07-19T17:27:19 |
2025-04-01T06:40:05.701158
|
{
"authors": [
"eliotjordan",
"johnhuck"
],
"repo": "projecthydra-labs/geo_concerns",
"url": "https://github.com/projecthydra-labs/geo_concerns/issues/170",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
177914612
|
"Integrate tab" - Actions not working
Repro steps
Create a new function via Empy C# template
Go to integrate tab
Add "Azure Storage blob" as output binding
Click save
Click "Go" button in the actions section
Expected result
+New functions page should have opened with the "blobTrigger-c#" template selected and the path and storage account settings the same as the Empty-C# output binding.
Actual
+New functions page opens with nothing selected
Fixed with 69d854d41
|
gharchive/issue
| 2016-09-19T22:26:26 |
2025-04-01T06:40:05.727167
|
{
"authors": [
"fashaikh",
"soninaren"
],
"repo": "projectkudu/AzureFunctionsPortal",
"url": "https://github.com/projectkudu/AzureFunctionsPortal/issues/567",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
187264602
|
Not possible anymore to deploy a sln which also contains a new project.json .net core project ?
I'm not sure if this is the correct entry point to ask a question, but I noticed that it's not possible anymore to deploy a webjob from GitHub if the project contains a project.json .NET Core project.
It seems that msbuild is not executed ?
See error messages below. (The error message is caused because the KendoGridBinderEx project is not built?)
Adding package 'KendoUIWeb.2014.1.318' to folder 'D:\home\site\repository\packages'
Added package 'KendoUIWeb.2014.1.318' to folder 'D:\home\site\repository\packages'
Restoring packages for D:\home\site\repository\KendoGridBinderEx\project.json...
Committing restore...
Writing lock file to disk. Path: D:\home\site\repository\KendoGridBinderEx\project.lock.json
D:\home\site\repository\KendoGridBinderEx\KendoGridBinderEx.xproj
Restore completed in 968ms.
NuGet Config files used:
C:\DWASFiles\Sites#1stef-kendogridbinderex\AppData\NuGet\NuGet.Config
Feeds used:
https://api.nuget.org/v3/index.json
Installed:
46 package(s) to packages.config projects
D:\Program Files (x86)\MSBuild\14.0\bin\Microsoft.Common.CurrentVersion.targets(1819,5): warning MSB3245: Could not resolve this reference. Could not locate the assembly "KendoGridBinderEx". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors. [D:\home\site\repository\KendoGridBinderEx.Examples.Business\KendoGridBinderEx.Examples.Business.csproj]
QueryContext\QueryContext.cs(43,73): error CS0246: The type or namespace name 'KendoGridBaseRequest' could not be found (are you missing a using directive or an assembly reference?) [D:\home\site\repository\KendoGridBinderEx.Examples.Business\KendoGridBinderEx.Examples.Business.csproj]
QueryContext\QueryContext.cs(43,16): error CS0246: The type or namespace name 'KendoGridEx<TEntity, TViewModel>' could not be found (are you missing a using directive or an assembly reference?) [D:\home\site\repository\KendoGridBinderEx.Examples.Business\KendoGridBinderEx.Examples.Business.csproj]
QueryContext\IQueryContext.cs(18,66): error CS0246: The type or namespace name 'KendoGridBaseRequest' could not be found (are you missing a using directive or an assembly reference?) [D:\home\site\repository\KendoGridBinderEx.Examples.Business\KendoGridBinderEx.Examples.Business.csproj]
QueryContext\IQueryContext.cs(18,9): error CS0246: The type or namespace name 'KendoGridEx<TEntity, TViewModel>' could not be found (are you missing a using directive or an assembly reference?) [D:\home\site\repository\KendoGridBinderEx.Examples.Business\KendoGridBinderEx.Examples.Business.csproj]
Failed exitCode=1, command="D:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe" "D:\home\site\repository\KendoGridBinderEx.Examples.MVC\KendoGridBinderEx.Examples.MVC.csproj" /nologo /verbosity:m /t:Build /t:pipelinePreDeployCopyAllFilesToOneFolder /p:_PackageTempDir="D:\local\Temp\8d4047cf48f3562";AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release;UseSharedCompilation=false /p:SolutionDir="D:\home\site\repository.\"
An error has occurred during web site deployment.
\r\nD:\Program Files (x86)\SiteExtensions\Kudu\59.51102.2512\bin\Scripts\starter.cmd "D:\home\site\deployments\tools\deploy.cmd"
I don't see any msbuild commands, only restore ?
When you say 'not possible anymore', do you mean that the exact same scenario used to work and no longer does? Or that you had not tried it before on solutions that also have a project.json?
Note that project.json is on its way out with the new Core tooling.
Yes sorry for the vague description, but indeed it did work fine for several years/months and all of a sudden, it did not work anymore.
Can you indicate which log files I can review in Kudu that show some more details?
I had a problem which seems to cause the same errors. I have opened a bug on nuget.
My workaround was to add a .deployment file with my specific project, because the problematic project.json was only used on a test project not necessary for my website.
@StefH does this issue still persist?
Can you share a github repository that highlights your project structure so that I can try to reproduce the issue at my end, thanks
This one:
https://github.com/StefH/KendoGridBinderEx/tree/master/KendoGridBinderEx.Examples.MVC
I just submitted a pull request, it should work for you
Build in Azure is different from build in VS: you probably build your entire solution (all projects) locally, whereas Azure only builds what's necessary (the web project and the projects it depends on).
in your case, KendoGridBinderEx.Examples.MVC.csproj is the one argument consumed by msbuild and inside this csproj file, it references KendoGridBinderEx.Examples.Business.csproj (so msbuild knows it needs to build this reference)
I don't see any msbuild commands, only restore ?
to answer your question, the error message you are seeing here:
D:\Program Files (x86)\MSBuild\14.0\bin\Microsoft.Common.CurrentVersion.targets(1819,5): warning MSB3245: Could not resolve this reference. Could not locate the assembly "KendoGridBinderEx". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors. [D:\home\site\repository\KendoGridBinderEx.Examples.Business\KendoGridBinderEx.Examples.Business.csproj]
is the output of msbuild KendoGridBinderEx.Examples.Business.csproj; it is complaining that you referenced KendoGridBinderEx.dll but this file is missing. The solution is to simply tell msbuild that it needs to build this assembly itself.
This PR solved my issue.
Thanks for your help.
|
gharchive/issue
| 2016-11-04T06:47:25 |
2025-04-01T06:40:05.742670
|
{
"authors": [
"StefH",
"davidebbo",
"tbolon",
"watashiSHUN"
],
"repo": "projectkudu/kudu",
"url": "https://github.com/projectkudu/kudu/issues/2213",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
747324383
|
Fix synctriggers for functions
This commit added code to make sure that synctriggers is always called
after a function app has finished deploying. Additionally, this change
also adds an empty settriggers call that is required by the front end to
notify the zip cache that changes were made.
Fixes https://github.com/projectkudu/kudu/issues/3238
This PR was waiting on successful ANT92 deployment (platform). That's almost completed, and I will now update this PR to get it ready to be completed.
Merged here -- https://github.com/projectkudu/kudu/commit/86918e0c0f1098afefcd5e1a03e1d7281cb850c8
|
gharchive/pull-request
| 2020-11-20T09:20:22 |
2025-04-01T06:40:05.745384
|
{
"authors": [
"ankitkumarr"
],
"repo": "projectkudu/kudu",
"url": "https://github.com/projectkudu/kudu/pull/3245",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
696173120
|
Release 4.0.1
Pull requests into cqm-execution require the following. Submitter and reviewer should :white_check_mark: when done. For items that are not-applicable, note it's not-applicable ("N/A") and :white_check_mark:.
Submitter:
[ ] This pull request describes why these changes were made.
[ ] Internal ticket for this PR:
[ ] Internal ticket links to this PR
[ ] Code diff has been done and been reviewed
[ ] Tests are included and test edge cases
[ ] Tests have been run locally and pass
Reviewer 1:
Name:
[ ] Code is maintainable and reusable, reuses existing code and infrastructure where appropriate, and accomplishes the task’s purpose
[ ] The tests appropriately test the new code, including edge cases
[ ] You have tried to break the code
Reviewer 2:
Name:
[ ] Code is maintainable and reusable, reuses existing code and infrastructure where appropriate, and accomplishes the task’s purpose
[ ] The tests appropriately test the new code, including edge cases
[ ] You have tried to break the code
Do we need to update the readme file? It points to QDM models
Do we need to update the readme file? It points to QDM models
Yeah. Looks like README should be replaced. It has QDM samples and links to QDM models.
|
gharchive/pull-request
| 2020-09-08T20:58:38 |
2025-04-01T06:40:05.782646
|
{
"authors": [
"adongare",
"serhii-ilin"
],
"repo": "projecttacoma/cqm-execution",
"url": "https://github.com/projecttacoma/cqm-execution/pull/192",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
373706352
|
When making a FromArray deriving shouldn't an enum use fromValue instead of fromName?
Code generated is:
public static function fromArray(array $data): Commission
{
$condition = \BolCom\RetailerApi\Model\Offer\Condition::fromValue($data['condition']);
}
It would make sense in my mind that the value of a payload should use the actual enum value instead of the name of the enum?
Let me try to explain. In my case I'm not using the types with the prooph event store.
public function __invoke(GetCommission $getCommission): Commission
{
$response = $this->client->get("/retailer/commission/{$getCommission->ean()->value()}", [
'query' => [
'condition' => $getCommission->condition()->value(),
'price' => $getCommission->price()->value()
],
'headers' => ['Accept' => 'application/vnd.retailer.v4+json']
]);
return Commission::fromArray($response->getBody()->json());
}
I get the actual 'ugly' enum value from the remote system, and want to be able to set the value. It would be nice if I didn't have to reformat the response before putting it in a type.
Oh I see what the system does when there isn't a while construct, it just gives all the values an integer number..
Sorry what? Which while construct?
I understand that you have a use case where you want fromArray / toArray to use enum values and not enum names. However I am not willing to change this behavior, as I have the other use case more often.
But... There is a solution for both of us:
Foo deriving (Enum(useValue))
There is an open ticket to allow to pass options to derivings, but that's not an easy task because it requires to update the parser as well.
I have no time currently to do this, but if you are willing to work on this, let me know.
Still struggling to find out what while construct you are referring to.
Sorry what? Which while construct?
I can't type, was a little bit late: I meant the Enum with(FIELD='bla') construct.
As for the changes to the parser logic, might be interesting, I'll have to see if I can figure that out.
Got something working! https://github.com/paales/fpp/pull/2
Will create a PR when I've got the code sniffer thing set up locally.
resolved via https://github.com/prolic/fpp/pull/106
|
gharchive/issue
| 2018-10-24T22:45:25 |
2025-04-01T06:40:05.809677
|
{
"authors": [
"paales",
"prolic"
],
"repo": "prolic/fpp",
"url": "https://github.com/prolic/fpp/issues/100",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
60918819
|
namespace attributes and resources
This is the fix for issue #12
:+1:
|
gharchive/pull-request
| 2015-03-12T22:15:36 |
2025-04-01T06:40:05.810731
|
{
"authors": [
"Fgabz",
"dandc87"
],
"repo": "prolificinteractive/material-calendarview",
"url": "https://github.com/prolificinteractive/material-calendarview/pull/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
810041684
|
JSON Parse Failures count keeps growing
We're running elasticsearch and elasticsearch_exporter on k8s and querying elasticsearch_cluster_health_json_parse_failures with Grafana. Each time our elasticsearch pod restarts, we see increments of this value, which makes sense since the health-check may not receive an expected response. The issue is that this value doesn't seem to drop unless we restart the exporter. Is this intentional behavior, or should it decrease over time?
The problem we're facing with this is we're not sure what our alert conditions should look like. Intuitively, we assumed that was right and set up an alert that fires whenever json_parse_failures is greater than zero, but it doesn't feel right to restart the exporter every time an alert is received.
Further, is there a command to flush this counter that I might've missed?
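For the alerting question, one option that sidesteps the ever-growing absolute value is to alert on the counter's increase over a window instead; a sketch (adjust the range and threshold to taste):

```
increase(elasticsearch_cluster_health_json_parse_failures[15m]) > 0
```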
This seems resolved somehow, as our elasticsearch pods still restart a lot from time to time, but we haven't had such an issue for months.
|
gharchive/issue
| 2021-02-17T10:10:51 |
2025-04-01T06:40:05.817769
|
{
"authors": [
"alfieyfc"
],
"repo": "prometheus-community/elasticsearch_exporter",
"url": "https://github.com/prometheus-community/elasticsearch_exporter/issues/403",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1183756832
|
Remote write receiver support
It would be awesome to have remote write receiver support in prom-label-proxy, so we are able to push metrics into Prometheus through prom-label-proxy, which would enforce the configured label on all incoming metrics.
The endpoint that should be supported is /api/v1/write, docs can be found here.
I think this would be very nice. We recently implemented exactly this in the Observatorium API, so it's proof that doing so is relatively straightforward
|
gharchive/issue
| 2022-03-28T17:27:31 |
2025-04-01T06:40:05.864718
|
{
"authors": [
"Wouter0100",
"squat"
],
"repo": "prometheus-community/prom-label-proxy",
"url": "https://github.com/prometheus-community/prom-label-proxy/issues/104",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2385832636
|
matcher is not native regexp why?
What did you do?
I found that the matcher is not a native regexp when configuring routes using alertmanager, for example
- receiver: DBA
matchers:
- cluster =~ "^(mysql|redis)"
It can only match 'mysql' or 'redis' themselves, but not values that start with them, such as 'mysql01', 'mysql02', 'redis01', ...
Use as is expect
I checked the source code of matcher.go:
if t == MatchRegexp || t == MatchNotRegexp {
re, err := regexp.Compile("^(?:" + v + ")$")
if err != nil {
return nil, err
}
m.re = re
}
return m, nil
Why regexp.Compile("^(?:" + v + ")$") and not regexp.Compile(v)?
Alertmanager version:
All versions that support matchers
Alertmanager regexp matchers are always anchored (as they are for Prometheus). To get back to your question, you could do (mysql|redis).* to match any label value starting with mysql or redis.
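A quick sketch showing the effect of the anchoring, mirroring the Compile call quoted above:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	v := "(mysql|redis)"
	anchored := regexp.MustCompile("^(?:" + v + ")$") // what Alertmanager compiles
	fmt.Println(anchored.MatchString("mysql"))   // true
	fmt.Println(anchored.MatchString("mysql01")) // false: rejected by the $ anchor

	v = "(mysql|redis).*"
	anchored = regexp.MustCompile("^(?:" + v + ")$")
	fmt.Println(anchored.MatchString("mysql01")) // true
	fmt.Println(anchored.MatchString("redis01")) // true
}
```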
But there is a visual editor in the official documentation that is a native regexp.
https://prometheus.io/docs/alerting/latest/configuration/
That's what makes me wonder
|
gharchive/issue
| 2024-07-02T10:09:32 |
2025-04-01T06:40:05.869653
|
{
"authors": [
"linuxfan1",
"simonpasquier"
],
"repo": "prometheus/alertmanager",
"url": "https://github.com/prometheus/alertmanager/issues/3915",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
107446567
|
Basic state persistence for the alertmanager silences
Extends the silences.json model to have the memoryAlertManager persist its state to an on-disk JSON file. This allows restarts to avoid re-dispatching sent notifications. The persistence file can be disabled by setting it to a blank string "".
Hey, thanks for sending this! Since we're planning on rewriting the Alertmanager completely (which will include features like this), I'm thinking whether to make this ready for merging. On the other hand, it doesn't have to be perfect then if it gets replaced soon anyways.
Besides minor issues (and lack of tests, which is a general AM problem), there are at least some things that make me unsure about the full correctness here: a restart should at least be treated like a configuration reload to make sure aggregates are (re)associated with the correct (potentially new) rules, so alertManager.SetAggregationRules(conf.AggregationRules()) should be called after loading the persisted file. In general, just serializing and deserializing whole datastructures including pointered objects that weren't initially designed for that makes me a bit nervous - I couldn't immediately see a problem with that though. Are you reasonably confident that's all ok and no references will be broken after reload?
In my (limited) testing it all seemed fine but I'll run it through it's paces a bit more extensively in the next few days. Good point about the rule changes - I'll test that to see what the damage is.
I pulled this together to solve an immediate short term problem (namely, when I do a config reload I inevitably have to take down the alertmanager anyway, so saving my alert emits stops me spamming out a whole bunch of new notifications).
Also noticed that alertManager.Run(), which does the loading, is called asynchronously in main.go, so it's possible for the web interface to start up and accept alerts already and those would be removed again when the alertstates are loaded afterwards...
Since it looks like the new alertmanager is ramping up for roll out, and I'm moving over to it myself, I'm going to close this off for the time being since it won't apply to it.
|
gharchive/pull-request
| 2015-09-21T05:26:23 |
2025-04-01T06:40:05.873450
|
{
"authors": [
"juliusv",
"wrouesnel"
],
"repo": "prometheus/alertmanager",
"url": "https://github.com/prometheus/alertmanager/pull/123",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
257951749
|
silences: avoid deadlock
Calling gossip.GossipBroadcast() will cause a deadlock if
there is a currently executing OnBroadcast* function.
See #982
LGTM
|
gharchive/pull-request
| 2017-09-15T07:33:41 |
2025-04-01T06:40:05.874694
|
{
"authors": [
"iksaif",
"olive42"
],
"repo": "prometheus/alertmanager",
"url": "https://github.com/prometheus/alertmanager/pull/995",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
227392121
|
Document that invalid label name chars get replaced by underscores in SD meta label names
This is about SD meta labels like __meta_kubernetes_node_label_<labelname>.
https://prometheus.io/docs/operating/configuration does not mention that fact anywhere.
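For example (assuming the usual sanitization, where every character that is not valid in a label name is replaced by an underscore), a node label such as kubernetes.io/role surfaces as:

```
kubernetes.io/role=master   ->   __meta_kubernetes_node_label_kubernetes_io_role="master"
```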
Can I take this one?
Go ahead.
@infoverload are you still interested in working on this?
|
gharchive/issue
| 2017-05-09T14:47:55 |
2025-04-01T06:40:05.881974
|
{
"authors": [
"brian-brazil",
"hdost",
"infoverload",
"juliusv"
],
"repo": "prometheus/docs",
"url": "https://github.com/prometheus/docs/issues/735",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
232423205
|
Carry over pre-allocation sizes across TSDB blocks and targets
We see in #2774 that new TSDB blocks being cut as well as pods being scaled cause significant memory spikes. (Seems we regressed a bit in the former since alpha.0 – not exactly sure why.)
One possible reason is that we allocate maps and slices, which then basically immediately converge to their max size. On the way, we grow these structures, which may make a lot of allocations that go to garbage shortly after.
For slices, this is easy to measure (see below). For maps, which account for most of those structures, I'm not entirely sure to what extent they behave the same or just dynamically add new buckets.
For slices, however, growth is by powers of two only until 1KB and slows down after that, causing more garbage to be generated. By the time 200k elements are created, 860k elements of garbage have been created: https://play.golang.org/p/KcvqVdZjR1
We should try to pre-allocate on instantiation based on size info we have from previous blocks and targets and see whether that reduces the spikes.
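A minimal sketch of what the slice measurement looks like, counting how often append has to grow the backing array with and without pre-allocation:

```go
package main

import "fmt"

func reallocations(prealloc int) int {
	s := make([]int, 0, prealloc)
	grows, prevCap := 0, cap(s)
	for i := 0; i < 200000; i++ {
		s = append(s, i)
		if cap(s) != prevCap {
			grows++ // the old backing array becomes garbage
			prevCap = cap(s)
		}
	}
	return grows
}

func main() {
	fmt.Println("no pre-allocation:  ", reallocations(0))
	fmt.Println("with pre-allocation:", reallocations(200000)) // 0
}
```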
@Gouthamve
Closing as this no longer happens in 2.0 as we don't cut blocks anymore.
|
gharchive/issue
| 2017-05-31T00:52:10 |
2025-04-01T06:40:05.905853
|
{
"authors": [
"Gouthamve",
"fabxc"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/issues/2784",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
341465533
|
TimeOut in Histogram and Summaries
Proposal:
Sometimes a user starts a timer but doesn't stop it. In such cases, the duration is not logged in any bucket. So there should be a timeout setting in Histogram/Summary, so that the user can configure after how much time the timer should be automatically stopped and logged in the appropriate bucket.
Use Case:
In my case, I send an HTTP request to an API, which could give a TimeOutException. In this case, I start the timer before sending the request. Here, I catch a TimeOutException in which I log the exception using a Counter metric, but don't stop the Histogram.Timer. That is why the timer starts but doesn't stop and is not logged in any of the buckets. There could be a timeout setting in Histogram in which I can configure that after this much time, the duration should be logged in the infinity bucket.
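For what it's worth, the usual way to guarantee the duration is always observed is a finally/defer around the call; a minimal sketch with the Go client (prometheus/client_golang), shown only for illustration — the Java client's Histogram.Timer can be handled with a finally block in the same way:

```go
package main

import (
	"errors"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var requestDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name: "api_request_duration_seconds",
	Help: "Duration of outbound API requests.",
})

func callAPI() error {
	timer := prometheus.NewTimer(requestDuration)
	// defer runs on every return path, so the duration is observed even
	// when the request times out or an error is returned early.
	defer timer.ObserveDuration()

	time.Sleep(10 * time.Millisecond) // stand-in for the HTTP call
	return errors.New("timeout")      // stand-in for a timeout error
}

func main() {
	prometheus.MustRegister(requestDuration)
	_ = callAPI()
}
```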
You should file this against the client library you are using, as this repository is for the Prometheus server itself.
I would say in this case that it's your responsibility to stop the timer.
Thank you @brian-brazil ,
Filing against the appropriate client library.
|
gharchive/issue
| 2018-07-16T10:30:57 |
2025-04-01T06:40:05.908407
|
{
"authors": [
"atulverma1997",
"brian-brazil"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/issues/4387",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
964635793
|
remote write filesystem data disorder
What did you do?
We use remote write and node_exporter to forward data to another system, but we find that the filesystem metrics are wrong.
The value and mountpoint shown for one host actually belong to another host.
/gitlab does not exist on this host; it belongs to another host remote-written by the same Prometheus.
We can't find '/backup/gitlab' in this host's metrics.
Environment
docker k8s linux
Prometheus version:
2.28.1
We use GitHub issues to track Prometheus development. Please use our community channels for usage questions which you can find here https://prometheus.io/community/.
|
gharchive/issue
| 2021-08-10T06:21:25 |
2025-04-01T06:40:05.912251
|
{
"authors": [
"codesome",
"hs02041102"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/issues/9182",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1273232353
|
Reduce chunk write queue memory usage 1
This avoids wasting memory on the c.chunkRefMap by re-initializing it regularly. When re-initializing it, it gets initialized with a size which is half of the peak usage of the time period since the last re-init event; for this we also track the peak usage and reset it on every re-init event.
Very frequent re-initialization of the map would cause unnecessary allocations; to avoid that, there are two factors which limit the frequency of the re-initializations:
There is a minimum interval of 10min between re-init events
In order to re-init the map the recorded peak usage since the last re-init event must be at least 1000 (objects in c.chunkRefMap).
When re-initializing it, we initialize it to half of the peak usage since the last re-init event, to try to hit the sweet spot in the trade-off between initializing it to a very low size (potentially resulting in many allocations to grow it) and initializing it to a large size (potentially resulting in unused allocated memory); a minimal sketch of this policy follows the list below.
With this solution we have the following advantages:
If a tenant's number of active series decreases over time then we want that their queue size also shrinks over time. By always resetting it to half of the previous peak it will shrink together with the usage over time
We don't want to initialize it to a size of 0 because this would cause a lot of allocations to grow it back to the size which it actually needs. By initializing it to half of the previous peak it will rarely have to be grown to more than double of the initialized size.
We don't want to initialize it too frequently because that would also cause avoidable allocations, so there is a minimum interval of 10min between re-init events
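A minimal sketch of the re-init policy described above (not the PR's actual code; names and the exact trigger points are simplified):

```go
package main

import (
	"sync"
	"time"
)

const (
	minReinitInterval = 10 * time.Minute
	minPeakForReinit  = 1000
)

// refMap re-creates its internal map from time to time so memory held for a
// past peak is eventually released back to the runtime.
type refMap struct {
	mu         sync.Mutex
	m          map[uint64]struct{}
	peak       int       // highest len(m) since the last re-init
	lastReinit time.Time // when the map was last re-created
}

func newRefMap() *refMap {
	return &refMap{m: map[uint64]struct{}{}, lastReinit: time.Now()}
}

func (r *refMap) add(ref uint64) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.m[ref] = struct{}{}
	if len(r.m) > r.peak {
		r.peak = len(r.m)
	}
}

func (r *refMap) delete(ref uint64) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.m, ref)
	// Re-init at most every minReinitInterval, and only if the peak since the
	// last re-init was large enough to matter; the new map starts at half of
	// that peak as a compromise between wasted capacity and re-growth.
	if r.peak >= minPeakForReinit && time.Since(r.lastReinit) >= minReinitInterval {
		newM := make(map[uint64]struct{}, r.peak/2)
		for k := range r.m {
			newM[k] = struct{}{}
		}
		r.m = newM
		r.peak = len(r.m)
		r.lastReinit = time.Now()
	}
}

func main() {
	r := newRefMap()
	for i := uint64(0); i < 5000; i++ {
		r.add(i)
	}
	for i := uint64(0); i < 5000; i++ {
		r.delete(i)
	}
}
```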
This PR comes from https://github.com/grafana/mimir-prometheus/pull/131. We use it in Grafana Mimir to reduce memory usage in Mimir clusters with thousands of open TSDBs in a single process.
Related to https://github.com/prometheus/prometheus/pull/10874.
Before this change, we had seen this cumulative memory usage across pods in a single Mimir deployment.
Legend:
zone-a using new chunk mapper without fix from this PR
zone-b and zone-c are using old chunk mapper before https://github.com/prometheus/prometheus/pull/10051
After applying change from this PR into zone-b and enabling new chunk mapper there, we can see that memory usage has improved compared to zone-a:
Legend:
zone-a using new chunk mapper without fix from this PR
zone-b using new chunk mapper with the fix from this PR
zone-c using old chunk mapper before https://github.com/prometheus/prometheus/pull/10051
Note that memory usage in zone-b still isn't the same as with old chunk mapper (zone-c), but second part is addressed by https://github.com/prometheus/prometheus/pull/10874
Could you say which metric is being graphed, please? (working set?)
Query is:
sum by (zone) (label_replace(go_memstats_heap_inuse_bytes{job=~"cortex-dev-01/ingester.*"}, "zone", "$1", "pod", "(ingester-zone-.)-\\d+")) / 1e9
With pod names like ingester-zone-<a|b|c>-<number>.
|
gharchive/pull-request
| 2022-06-16T08:29:35 |
2025-04-01T06:40:05.921157
|
{
"authors": [
"bboreham",
"pstibrany"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/pull/10873",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
654047231
|
test
sorry
|
gharchive/pull-request
| 2020-07-09T13:08:42 |
2025-04-01T06:40:05.922242
|
{
"authors": [
"rara-tan"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/pull/7543",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
62113637
|
Change is_superuser to support #88557194
We need to only show the admin dashboard to cloud_admin.
Is this a change we can make? Could it be added to the OPENSTACK_KEYSTONE_ADMIN_ROLES env variable?
I'll try that now Jeff, if so I'll close this PR and update the other.
Found a better way to do this thanks to @jeffdeville
|
gharchive/pull-request
| 2015-03-16T15:22:52 |
2025-04-01T06:40:05.931682
|
{
"authors": [
"jeffdeville",
"mattamizer"
],
"repo": "promptworks/django_openstack_auth",
"url": "https://github.com/promptworks/django_openstack_auth/pull/1",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
106833895
|
Add Mark a todo as done functionality
Fixes issue #1
build failed, please fix php cs
|
gharchive/pull-request
| 2015-09-16T18:31:29 |
2025-04-01T06:40:05.932723
|
{
"authors": [
"DannyvdSluijs",
"prolic"
],
"repo": "prooph/proophessor-do",
"url": "https://github.com/prooph/proophessor-do/pull/25",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2271653
|
[gh-55] Revert change that was made to postgres schema parsing that caus...
...es regressions and has no tests.
Can you be more specific about the regression, and point the original commit and associated issue?
Click through to the old ticket.
Basically a default of
Default ''::character varying
Stopped being parsed correctly when someone else fixed a corner case.
Sent from iPhone.
On Nov 18, 2011, at 6:44 AM, Francois Zaninotto reply@reply.github.com wrote:
Can you be more specific about the regression, and point the original commit and associated issue?
Reply to this email directly or view it on GitHub:
https://github.com/propelorm/Propel2/pull/56#issuecomment-2788229
Ok I'm closing that one as it should be better to base your work on your Propel 1.6 PR.
Ok, I will port that patch.
|
gharchive/issue
| 2011-11-17T17:55:08 |
2025-04-01T06:40:05.936149
|
{
"authors": [
"apinstein",
"fzaninotto",
"willdurand"
],
"repo": "propelorm/Propel2",
"url": "https://github.com/propelorm/Propel2/issues/56",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
152572190
|
Installation not working
I'm trying to install propel-laravel through composer in my Laravel 5.2.* framework, but it's not working for me.
First of all I tried to do what the documentation said, editing my composer.json like so:
"config": {
"preferred-install": "dist"
},
"minimum-stability": "alpha"
But when I do a: composer require propel/propel-laravel it seems that it can't find the package. I have to do this in order to install propel-laravel:
composer require propel/propel-laravel:~2.0.0-alpha5
Now it's able to download the package, but I'm getting the following error during the installation:
Problem 1
- Installation request for propel/propel ~2.0.0-alpha5 -> satisfiable by propel/propel[2.0.0-alpha5].
- Conclusion: remove symfony/finder v3.0.4
- Conclusion: don't install symfony/finder v3.0.4
- propel/propel 2.0.0-alpha5 requires symfony/finder ~2.3 -> satisfiable by
symfony/finder[v2.3.0, v2.3.1, v2.3.10, v2.3.11, v2.3.12, v2.3.13, v2.3.14, v2.3
.15, v2.3.16, v2.3.17, v2.3.18, v2.3.19, v2.3.2, v2.3.20, v2.3.21, v2.3.22, v2.3
.23, v2.3.24, v2.3.25, v2.3.26, v2.3.27, v2.3.28, v2.3.29, v2.3.3, v2.3.30, v2.3
.31, v2.3.32, v2.3.33, v2.3.34, v2.3.35, v2.3.36, v2.3.37, v2.3.38, v2.3.39, v2.
3.4, v2.3.40, v2.3.5, v2.3.6, v2.3.7, v2.3.8, v2.3.9, v2.4.0, v2.4.1, v2.4.10, v
2.4.2, v2.4.3, v2.4.4, v2.4.5, v2.4.6, v2.4.7, v2.4.8, v2.4.9, v2.5.0, v2.5.1, v
2.5.10, v2.5.11, v2.5.12, v2.5.2, v2.5.3, v2.5.4, v2.5.5, v2.5.6, v2.5.7, v2.5.8
, v2.5.9, v2.6.0, v2.6.1, v2.6.10, v2.6.11, v2.6.12, v2.6.13, v2.6.2, v2.6.3, v2
.6.4, v2.6.5, v2.6.6, v2.6.7, v2.6.8, v2.6.9, v2.7.0, v2.7.1, v2.7.10, v2.7.11,
v2.7.12, v2.7.2, v2.7.3, v2.7.4, v2.7.5, v2.7.6, v2.7.7, v2.7.8, v2.7.9, v2.8.0,
v2.8.1, v2.8.2, v2.8.3, v2.8.4].
- Can only install one of: symfony/finder[v2.8.0, v3.0.4].
- Can only install one of: symfony/finder[v2.8.1, v3.0.4].
- Can only install one of: symfony/finder[v2.8.2, v3.0.4].
- Can only install one of: symfony/finder[v2.8.3, v3.0.4].
...
...
- Can only install one of: symfony/finder[v2.7.8, v3.0.4].
- Can only install one of: symfony/finder[v2.7.9, v3.0.4].
- Installation request for symfony/finder (locked at v3.0.4) -> satisfiable
by symfony/finder[v3.0.4].
Installation failed, reverting ./composer.json to its original content.
Anyone idea how I can fix this problem?
Another note, I had the package propel/propel installed. I just removed that through Composer.
Now when I try to install propel/propel-laravel I get the error:
Problem 1
- The requested package propel/propel-laravel 2.0.0-alpha5 exists as
propel/propel-laravel[dev-Big-Shark-patch-1, dev-Big-Shark-patch-2, dev-develop, dev-ma
ster] but these are rejected by your constraint.
Not sure why this error is now suddenly showing. I still have `"minimum-stability": "alpha"` in my composer.json file.
Please try dev-develop version, 'propel/propel-laravel: dev-develop'
@Big-Shark, @chinookproject The develop branch was updated and introduced 5.2 support and a new command. Read the docs; it's recommended for new installations and testing purposes for now.
|
gharchive/issue
| 2016-05-02T15:03:09 |
2025-04-01T06:40:05.940717
|
{
"authors": [
"Big-Shark",
"SCIF",
"chinookproject"
],
"repo": "propelorm/PropelLaravel",
"url": "https://github.com/propelorm/PropelLaravel/issues/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1333903019
|
Add support for running CFU-Playground natively with notebook in rad-lab env
Updated environment with CFU-Playground packages, repo, and other needed toolchains. Added CFU-Playground notebook after converting to markdown.
FYI @tcal-x
|
gharchive/pull-request
| 2022-08-10T00:01:52 |
2025-04-01T06:40:05.941915
|
{
"authors": [
"ShvetankPrakash"
],
"repo": "proppy/rad-lab",
"url": "https://github.com/proppy/rad-lab/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
129753567
|
Only insert 1 line break after frontmatter
Fixes #916 - unless there was some other reason to insert 2 lines? If so, we can probably just trim a leading \n off content.
Jekyll doesn't care if there's a \n after frontmatter, and parsing the leading \n off content is more trouble than it's worth. I'm fine with this, and allowing people to add their own spaces if that's their thing.
|
gharchive/pull-request
| 2016-01-29T12:34:23 |
2025-04-01T06:40:05.946412
|
{
"authors": [
"dereklieu",
"timwis"
],
"repo": "prose/prose",
"url": "https://github.com/prose/prose/pull/919",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2369787435
|
🛑 pl-launchpad.io is down
In 7f83ca3, pl-launchpad.io (https://pl-launchpad.io) was down:
HTTP code: 0
Response time: 0 ms
Resolved: pl-launchpad.io is back up in 75e3813 after 8 minutes.
|
gharchive/issue
| 2024-06-24T09:57:32 |
2025-04-01T06:40:05.973854
|
{
"authors": [
"mastrwayne"
],
"repo": "protocol/upptime-pln",
"url": "https://github.com/protocol/upptime-pln/issues/1618",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2512172441
|
🛑 nftschool.dev is down
In e29a6fc, nftschool.dev (https://nftschool.dev) was down:
HTTP code: 0
Response time: 0 ms
Resolved: nftschool.dev is back up in c6f535b after 1 hour, 24 minutes.
|
gharchive/issue
| 2024-09-08T03:10:46 |
2025-04-01T06:40:05.976283
|
{
"authors": [
"mastrwayne"
],
"repo": "protocol/upptime-pln",
"url": "https://github.com/protocol/upptime-pln/issues/2251",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2527504450
|
🛑 nftschool.dev is down
In 8eed021, nftschool.dev (https://nftschool.dev) was down:
HTTP code: 0
Response time: 0 ms
Resolved: nftschool.dev is back up in 6bc7afe after 46 minutes.
|
gharchive/issue
| 2024-09-16T04:59:32 |
2025-04-01T06:40:05.978802
|
{
"authors": [
"mastrwayne"
],
"repo": "protocol/upptime-pln",
"url": "https://github.com/protocol/upptime-pln/issues/2535",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2531586520
|
🛑 nftschool.dev is down
In 42e9d06, nftschool.dev (https://nftschool.dev) was down:
HTTP code: 0
Response time: 0 ms
Resolved: nftschool.dev is back up in 4189521 after 19 minutes.
|
gharchive/issue
| 2024-09-17T16:27:02 |
2025-04-01T06:40:05.981241
|
{
"authors": [
"mastrwayne"
],
"repo": "protocol/upptime-pln",
"url": "https://github.com/protocol/upptime-pln/issues/2586",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1825387229
|
Display value as HEX
Issue submitter TODO list
[X] I've searched for an already existing issues here
[X] I'm running a supported version of the application which is listed here and the feature is not present there
Is your proposal related to a problem?
I'm frustrated when I'm trying to view a binary message posted to kafka.
Describe the feature you're interested in
A button I could press to switch between the native view and a hex view of the value.
Describe alternatives you've considered
Writing my own client.
Version you're running
56fa824
Additional context
This is related to trying to sort out why my protobuf messages aren't coming out correctly. The messages themselves come from a 3rd party so I need to extract the hex to show exactly what they are sending.
Hello @jcpunk, thank you for the suggestion. We will implement this feature as a Hex serde.
|
gharchive/issue
| 2023-07-27T23:44:03 |
2025-04-01T06:40:06.026116
|
{
"authors": [
"iliax",
"jcpunk"
],
"repo": "provectus/kafka-ui",
"url": "https://github.com/provectus/kafka-ui/issues/4067",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2522356346
|
Roomba Remote Client: Unknown disconnection error: ID=16
I am regularly getting a warning saying "Unexpectedly disconnected from Roomba xxx.xxx.xxx.xxx, code The connection was lost" which I usually ignore.
But today I got this warning asking to create a new issue. So here it is. Thanks
Logger: roombapy.remote_client
Source: /usr/local/lib/python3.12/site-packages/roombapy/remote_client.py:172
First occurred: 01:05:37 (1 occurrences)
Last logged: 01:05:37
Unknown disconnection error: ID=16.Kindly use https://github.com/pschmitt/roombapy/issues/new
Getting the same error but only since a few days.
Logger: roombapy.remote_client
Bron: /usr/local/lib/python3.12/site-packages/roombapy/remote_client.py:172
Eerst voorgekomen: 17:01:02 (6 gebeurtenissen)
Laatst gelogd: 18:18:10
Unknown disconnection error: ID=16.Kindly use https://github.com/pschmitt/roombapy/issues/new
I got this error ID yesterday when my Roomba's battery died after getting stuck
Yeah, I took my robot out of its dock and back in and reset HA; I haven't had the error since.
@Pastaloverzzz What do you mean by "reset HA" ? Do you mean restarting HA? I did that but I am still getting the error.
@snowmangh Yes, I meant a reboot. In my case the iRobot app wouldn't connect either, so I think it was trying to connect to another router than the one closest to it. Could it be that the docking station isn't in a good wifi spot? If you get this error every day you could put it closer to a router and see if you keep getting the error then.
@Pastaloverzzz Interesting. I do have a mesh network at home and wondering if that could be the reason I get the error. I do have devices that for some reasons do switch between the mesh point and the actual router which is probably enough to trigger a disconnect.
@snowmangh Well, i suspect on the day i got the error that my router closest to the docking station was down. So chances are you found the culprit 🙂
@Pastaloverzzz Looks like it. Maybe the error handling should account for a short disconnect before throwing this warning. 🙂
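As a rough sketch of that suggestion (generic JavaScript, purely illustrative, not roombapy's actual Python implementation), the warning could be deferred for a short grace period and cancelled if the connection recovers in time:
// Hypothetical illustration: only surface the disconnect warning if the
// connection has not recovered within a grace period, so brief access-point
// hand-offs on a mesh network do not trigger it.
var GRACE_MS = 30 * 1000;
var pendingWarning = null;

function onDisconnect(code) {
  pendingWarning = setTimeout(function () {
    console.warn('Unknown disconnection error: ID=' + code);
  }, GRACE_MS);
}

function onReconnect() {
  if (pendingWarning !== null) {
    clearTimeout(pendingWarning);
    pendingWarning = null;
  }
}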
|
gharchive/issue
| 2024-09-12T13:09:21 |
2025-04-01T06:40:06.351882
|
{
"authors": [
"Pastaloverzzz",
"snowmangh"
],
"repo": "pschmitt/roombapy",
"url": "https://github.com/pschmitt/roombapy/issues/355",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2341655473
|
Add psptoolchain-extra dispatch
Right now this is missing. I'm not sure if this would require us to still set some variables in the GitHub runner settings or if they are already set.
I don't have the permissions in this repo to set up the keys
That's a bother, me neither 😬
I'll convert this to a draft for now until we have someone who can set this up.
|
gharchive/pull-request
| 2024-06-08T13:23:26 |
2025-04-01T06:40:06.431087
|
{
"authors": [
"fjtrujy",
"sharkwouter"
],
"repo": "pspdev/psp-pacman",
"url": "https://github.com/pspdev/psp-pacman/pull/29",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
1343150862
|
Appstudio update build-suite-test-component-git-source-omst
Pipelines as Code configuration proposal
Pipelines as Code CI/build-suite-test-component-git-source-omst-on-pull-request has successfully validated your commit.
Status | Duration | Name
✅ Succeeded | 7 seconds | appstudio-init
✅ Succeeded | 14 seconds | clone-repository
✅ Succeeded | 7 seconds | appstudio-configure-build
✅ Succeeded | 30 seconds | build-container
✅ Succeeded | 5 seconds | show-summary
|
gharchive/pull-request
| 2022-08-18T14:00:25 |
2025-04-01T06:40:06.439546
|
{
"authors": [
"psturc"
],
"repo": "psturc-org/devfile-sample-hello-world",
"url": "https://github.com/psturc-org/devfile-sample-hello-world/pull/14",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
257711943
|
WIP: Coveralls combined
Test PR to check Coveralls test coverage if we run one combined test in Travis.
Changes Unknown when pulling 272a11cec44841c2da8f2b368b2cd5dd6451de26 on coveralls-combined into ** on develop**.
@cam156 well, that's interesting.
@awead That is interesting. I'm guessing Coveralls uses the RSpec process to gather information.
@cam156 So, is it worth changing the Travis build? I'm not sure we're really slowing things down all that much with a single build as opposed to a split one.
|
gharchive/pull-request
| 2017-09-14T13:04:01 |
2025-04-01T06:40:06.443683
|
{
"authors": [
"awead",
"cam156",
"coveralls"
],
"repo": "psu-stewardship/scholarsphere",
"url": "https://github.com/psu-stewardship/scholarsphere/pull/1034",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
882321646
|
Better map appearance
Change the appearance of the map to look more like League of Legends (not just some repeated grass tiles, but maybe some lanes, water, etc.)
Connected to #23
|
gharchive/issue
| 2021-05-09T13:31:41 |
2025-04-01T06:40:06.455142
|
{
"authors": [
"EliottBICS",
"ptal"
],
"repo": "ptal/lol2D",
"url": "https://github.com/ptal/lol2D/issues/26",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
69228131
|
Cache is global, can potentially conflict with external modules
Since required modules are cached in Node.js, it means that multiple modules could be using the same cache at the same time. There should be a .createStore() method that creates a new cache instance.
I think this is a matter of personal opinion. There are other node caches that do this, but then they suffer from not following typical require patterns (requiring in multiple places does not duplicate itself).
Why not instead namespace your cache keys?
@brandonculver that still wouldn't work. Let's say I have a library which exposes a SomeClient class which uses the node-cache module internally.
var SomeClient = require('my-module').SomeClient;
var client1 = new SomeClient();
var client2 = new SomeClient();
Both client1 and client2 will be using the same cache. This makes node-cache fine to use for standalone apps but restricts its use in published libraries.
@olalonde I understand what you are saying. I mean namespacing your cache keys ('myapp-cache1', 'myapp-cache2').
I would also question why you are using this particular module in other libraries. It's a pretty bare-bones cache solution. Again, there are other caching solutions that do exactly what you want: https://www.npmjs.com/package/node-cache
It just seems like unnecessary overhead in a currently simple module. Ultimately it's not my call though :wink:
Oh thanks I wasn't aware of this module! FWIW, I don't think it's unnecessary overhead. In fact, I just implemented this feature in 4 lines of code :)
https://github.com/ptarjan/node-cache/pull/43
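For illustration, here is a minimal sketch of what a per-instance store could look like. The createStore name follows the proposal in this thread, and the internals are an assumption rather than the code actually merged in the linked PR:
// Minimal sketch: keep exporting a shared default instance for backwards
// compatibility, and add a factory that returns an isolated store.
function Cache() {
  this._data = Object.create(null);
}

Cache.prototype.put = function (key, value) {
  this._data[key] = value;
  return value;
};

Cache.prototype.get = function (key) {
  return key in this._data ? this._data[key] : null;
};

Cache.prototype.clear = function () {
  this._data = Object.create(null);
};

var defaultCache = new Cache();
defaultCache.createStore = function () {
  return new Cache();
};

module.exports = defaultCache;
With something like that in place, each SomeClient from the example above could create its own store via createStore() instead of sharing the module-level instance, so client1 and client2 would no longer interfere with each other.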
+1. Namespacing keys is a workaround, but it doesn't protect against mistakes. For example, another module may also use the memory cache and eventually call
cache.clear();
and then all of my module's cached entries are gone.
What happens when running in cluster mode, like pm2 does? Could this mean the cache is shared between threads?
|
gharchive/issue
| 2015-04-17T22:17:36 |
2025-04-01T06:40:06.472113
|
{
"authors": [
"amenadiel",
"brandonculver",
"olalonde",
"springuper"
],
"repo": "ptarjan/node-cache",
"url": "https://github.com/ptarjan/node-cache/issues/41",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
796474660
|
CoinRanking Auth
Updated the authentication for CoinRanking from "No" to "apiKey".
[x] My submission is formatted according to the guidelines in the contributing guide
[x] My addition is ordered alphabetically
[x] My submission has a useful description
[x] The description does not end with punctuation
[x] Each table column is padded with one space on either side
[x] I have searched the repository for any relevant issues or pull requests
[x] Any category I am creating has the minimum requirement of 3 items
[x] All changes have been squashed into a single commit
Just a small change, but increasing accuracy
@tannershimanek
Don't forget to squash all the commits into a single one after committing the changes suggested by @marekdano
This PR is inactive, so the API was added in another PR/commit.
More information at: #2002
|
gharchive/pull-request
| 2021-01-29T00:17:50 |
2025-04-01T06:40:06.501718
|
{
"authors": [
"matheusfelipeog",
"pawelbr",
"tannershimanek"
],
"repo": "public-apis/public-apis",
"url": "https://github.com/public-apis/public-apis/pull/1540",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|