id (string, length 4 to 10) | text (string, length 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
2421598808 | 🛑 MPC Homepage is down
In 8ac936d, MPC Homepage (https://www.minorplanetcenter.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MPC Homepage is back up in 6046705 after 1 minute.
| gharchive/issue | 2024-07-21T20:58:50 | 2025-04-01T06:37:33.673108 | {
"authors": [
"ChrisMoriarty"
],
"repo": "Smithsonian/upptime",
"url": "https://github.com/Smithsonian/upptime/issues/2509",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2428044875 | 🛑 NEOCP is down
In b57ed4d, NEOCP (https://minorplanetcenter.net/iau/NEO/toconfirm_tabular.html) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NEOCP is back up in 830d658 after 4 minutes.
| gharchive/issue | 2024-07-24T16:46:07 | 2025-04-01T06:37:33.675482 | {
"authors": [
"ChrisMoriarty"
],
"repo": "Smithsonian/upptime",
"url": "https://github.com/Smithsonian/upptime/issues/2961",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2437092801 | 🛑 NEOCP is down
In f7a0260, NEOCP (https://minorplanetcenter.net/iau/NEO/toconfirm_tabular.html) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NEOCP is back up in 74b956e after 2 minutes.
| gharchive/issue | 2024-07-30T07:17:57 | 2025-04-01T06:37:33.677854 | {
"authors": [
"ChrisMoriarty"
],
"repo": "Smithsonian/upptime",
"url": "https://github.com/Smithsonian/upptime/issues/4073",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2438744342 | 🛑 MPC Homepage is down
In 9368684, MPC Homepage (https://www.minorplanetcenter.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MPC Homepage is back up in 9b00224 after 2 minutes.
| gharchive/issue | 2024-07-30T21:54:32 | 2025-04-01T06:37:33.680441 | {
"authors": [
"ChrisMoriarty"
],
"repo": "Smithsonian/upptime",
"url": "https://github.com/Smithsonian/upptime/issues/4262",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
933945419 | chore: Improve local development
Bundle the vercel cli
Use the chromium installed by puppeteer locally
| gharchive/pull-request | 2021-06-30T17:39:36 | 2025-04-01T06:37:33.693939 | {
"authors": [
"SnO2WMaN",
"tosuke"
],
"repo": "SnO2WMaN/tohohoify",
"url": "https://github.com/SnO2WMaN/tohohoify/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
113233012 | Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Attributes should be chained before defining the constraint relation'
-(instancetype)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier{
if (self = [super initWithStyle:style reuseIdentifier:reuseIdentifier]) {
//TODO
UIView *backgroundView = [UIView new];
[self.contentView addSubview:backgroundView];
[backgroundView addSubview:self.firstView];
[backgroundView addSubview:self.secondView];
[backgroundView mas_makeConstraints:^(MASConstraintMaker *make) {
make.size.equalTo(self.contentView);
make.center.equalTo(self.contentView);
}];
[self.firstView mas_makeConstraints:^(MASConstraintMaker *make) {
make.centerY.mas_equalTo(backgroundView.mas_centerY);
make.left.equalTo(backgroundView.mas_left);
make.right.equalTo(self.secondView.mas_left).with.offset(-5);
make.height.mas_equalTo(@150);
make.width.equalTo(self.secondView);
}];
[self.secondView mas_makeConstraints:^(MASConstraintMaker *make) {
make.centerY.mas_equalTo(backgroundView.mas_centerY);
make.left.equalTo(self.firstView.mas_right).width.offset(5);
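// ^ this is the offending line: ".width" here should be ".with" (see the fix below)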
make.right.equalTo(backgroundView.mas_right);
make.height.mas_equalTo(@150);
make.width.equalTo(self.firstView);
}];
}
return self;
}
The above is my code. Please help me figure out what the problem is.
My fault, my code was wrong, I'm very sorry!!! The line "make.left.equalTo(self.firstView.mas_right).width.offset(5);" should use "with" rather than "width".
@hanhailong I mistyped too. Thanks!
| gharchive/issue | 2015-10-25T15:41:27 | 2025-04-01T06:37:33.697635 | {
"authors": [
"0xa6a",
"hanhailong"
],
"repo": "SnapKit/Masonry",
"url": "https://github.com/SnapKit/Masonry/issues/259",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
62249435 | Store package settings files in the settings/ directory instead of scoped-properties/
Reported via deprecation-cop
Thanks for adding this! Merged a fix. :)
Hi @Sneagan I've got the same issue with the package language-slim. Could you look into it?
@yiliangt This looks like an issue with this package, not mine. https://github.com/slim-template/language-slim
@Sneagan. Ok thanks.
| gharchive/issue | 2015-03-17T01:11:57 | 2025-04-01T06:37:33.700073 | {
"authors": [
"Sneagan",
"benogle",
"yiliang",
"yiliangt"
],
"repo": "Sneagan/atom-handlebars",
"url": "https://github.com/Sneagan/atom-handlebars/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1378624533 | Change overlay size/position
Is it possible to adjust the size and position of the overlay?
From what I can tell, regardless of the size of the original overlay window, it seems to expand it to fit the entire target window, so there is no way to interact with the underlying window unless it's set to ignore mouse events.
You can disable the default behaviour of resizing the Electron window and handle it yourself.
https://github.com/SnosMe/electron-overlay-window/blob/e0b3da27a20251cb5673c43390abdc2e3fe3e78e/src/index.ts#L59
In 3.0 beta there is no way, yeah.
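A rough TypeScript sketch of that approach (the "moveresize" event name, its payload shape, and the attachTo signature are assumptions on my part; the linked src/index.ts is the authoritative reference):

import { BrowserWindow } from "electron";
import { overlayWindow } from "electron-overlay-window";

// Sketch only: real code needs the library's recommended window options here.
const overlay = new BrowserWindow({ frame: false, transparent: true });

// Instead of letting the library stretch the overlay over the whole target
// window, track the target's bounds and size/position the overlay yourself.
overlayWindow.on("moveresize", (bounds: { x: number; y: number; width: number; height: number }) => {
  overlay.setBounds({
    x: bounds.x,
    y: bounds.y,
    width: 300, // fixed overlay size instead of covering the full window
    height: 200,
  });
});

overlayWindow.attachTo(overlay, "Target Window Title");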
You can disable the default behaviour of resizing the Electron window and handle it yourself.
https://github.com/SnosMe/electron-overlay-window/blob/e0b3da27a20251cb5673c43390abdc2e3fe3e78e/src/index.ts#L59
Thanks I'll give it a try.
| gharchive/issue | 2022-09-19T23:11:40 | 2025-04-01T06:37:33.705733 | {
"authors": [
"SnosMe",
"sushibagel"
],
"repo": "SnosMe/electron-overlay-window",
"url": "https://github.com/SnosMe/electron-overlay-window/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1325859732 | private_key authentication attribute conflicts with SNOWFLAKE_PASSWORD environment variable
Provider Version
0.25.36
Terraform Version
1.2.0
Describe the bug
The private_key authentication attribute, when set explicitly as part of a provider "snowflake" {...} config, conflicts with the SNOWFLAKE_PASSWORD environment variable.
Expected behavior
Perhaps this is an unrealistic expectation, but I would expect them not to conflict with each other.
Code samples and commands
export SNOWFLAKE_PASSWORD=foo (which I need for non-Terraform automation and scripts which run against my Snowflake resources)
provider "snowflake" {
# See https://guides.snowflake.com/guide/terraforming_snowflake/index.html?index=..%2F..index#2
# for instructions on how to set up a user account to be used by Terraform.
// required
username = "someuser"
account = "ABC123456"
region = "us-east-1"
private_key = data.aws_kms_secrets.snowflake_private_key.plaintext["snowflake_private_key"]
// optional
role = "ACCOUNTADMIN"
}
Export the above environment variable and then use the above provider config and you'll get a conflict.
If something is explicitly defined in Terraform config, as a private key is here, it should take precedence over an environment variable.
Note that I also had the same experience with the snowsql provider, so this conflict is almost surely coming from the snowflake golang SDK. But it would be nice if this provider could handle this situation.
We are closing this issue as part of a cleanup described in the announcement. If you believe that the issue is still valid in v0.89.0, please open a new ticket.
| gharchive/issue | 2022-08-02T13:28:05 | 2025-04-01T06:37:33.725534 | {
"authors": [
"jrobison-sb",
"sfc-gh-asawicki"
],
"repo": "Snowflake-Labs/terraform-provider-snowflake",
"url": "https://github.com/Snowflake-Labs/terraform-provider-snowflake/issues/1161",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1301970751 | feat: add AWS GOV support in api_integration
Added aws_gov_api_gateway and aws_gov_private_api_gateway to the api_provider list in the api_integration resource
Test Plan
[x] careful review
References
API Integration doc
issue ref #1113
/ok-to-test sha=d9195ba
/ok-to-test sha=d9195ba
| gharchive/pull-request | 2022-07-12T12:08:40 | 2025-04-01T06:37:33.728351 | {
"authors": [
"sfc-gh-jalin",
"sfc-gh-kmaurya"
],
"repo": "Snowflake-Labs/terraform-provider-snowflake",
"url": "https://github.com/Snowflake-Labs/terraform-provider-snowflake/pull/1118",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2313425954 | target
I don't seem to get the ability to target; it shows, but doesn't allow me to click on it.
If you do not use qb-target, you will need to write the function for your target in place of the qb-target things.
So yeah, I use qb-target. When I go to the ped and click on it, nothing happens other than the menu coming up and freezing until you walk away, so I'm not sure why.
https://prnt.sc/T9Xicg5CN8pE
| gharchive/issue | 2024-05-23T16:55:12 | 2025-04-01T06:37:33.730663 | {
"authors": [
"NotPhelps",
"Sober881"
],
"repo": "Sober881/qb-vendingjob",
"url": "https://github.com/Sober881/qb-vendingjob/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1349445606 | As a department admin, when I import a list of beneficiaries for reorientation, some of them are not found
Problem
Marie-Lisa sent us, via support, a list of beneficiaries she no longer follows, and asked us to remove her name as their referent.
When importing via the CD08 admin account, 2 beneficiaries are not found even though they have a carnet de bord.
The required data and formats are respected in the import, and yet the matching does not happen.
(note: the two beneficiaries concerned both have two referents in the "groupe de suivi" item of their carnet)
screenshot with the query in the private carnet de bord thread
When we inspect the request made during the import, we see that:
the GraphQL query that is sent does mention the beneficiaries present in the file
the server response nevertheless includes one of the two beneficiaries being searched for, even though both are reported as missing
After investigation:
the beneficiary who did not come back in the GraphQL response had no birth date in the file, which explains why they were not found;
after removing this "bad" beneficiary from the file (leaving only the one that was found in the database), the import still did not show the "good" beneficiary;
after changing the case in the file so that the first and last name of the beneficiary in question were entirely in capitals, as they were in the database, this beneficiary's data imported correctly.
So it seems that importing beneficiaries whose casing in the file does not match the casing used in the database does not work: although the GraphQL query uses the _ilike operator, which does find the beneficiary, the client code that then processes the query result is presumably not case-insensitive.
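A minimal sketch of the kind of case-insensitive comparison implied here, in TypeScript (a hypothetical helper, not the actual carnet-de-bord client code; the field names are assumptions):

interface Beneficiary {
  firstname: string;
  lastname: string;
}

// Compare names the way the server-side _ilike lookup does: ignore case
// (and, with sensitivity "base", accents) instead of comparing raw strings.
function sameName(a: Beneficiary, b: Beneficiary): boolean {
  const eq = (x: string, y: string) =>
    x.localeCompare(y, "fr", { sensitivity: "base" }) === 0;
  return eq(a.firstname, b.firstname) && eq(a.lastname, b.lastname);
}

// e.g. matching a CSV row against the GraphQL response:
sameName(
  { firstname: "Marie", lastname: "Dupont" },
  { firstname: "MARIE", lastname: "DUPONT" }
); // true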
| gharchive/issue | 2022-08-24T13:25:05 | 2025-04-01T06:37:33.735260 | {
"authors": [
"Pauldoliveira",
"jonathanperret"
],
"repo": "SocialGouv/carnet-de-bord",
"url": "https://github.com/SocialGouv/carnet-de-bord/issues/1010",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1005762367 | Fiche bénéficiaire > Groupe de suivi > Inviter un accompagnateur > Ajouter un nouvel accompagnateur
• The title « Ajouter un nouvel accompagnateur » and the sentence « Recherchez un accompagnateur et envoyez une invitation à rejoindre le groupe de suivi de M. …… » are missing.
• Replace the button text « Je valide mon inscription » with « Envoyer l'invitation ».
created a ProNoteBookMemberForm component
changed the confirmText on ProCreationForm
| gharchive/issue | 2021-09-23T19:13:42 | 2025-04-01T06:37:33.737259 | {
"authors": [
"JonathanetRaimond",
"lionelB"
],
"repo": "SocialGouv/carnet-de-bord",
"url": "https://github.com/SocialGouv/carnet-de-bord/issues/268",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1141020960 | 🛑 NextCloud is down
In db0084c, NextCloud (https://nextcloud.fabrique.social.gouv.fr/) was down:
HTTP code: 502
Response time: 436 ms
Resolved: NextCloud is back up in 9eb9bfb.
| gharchive/issue | 2022-02-17T08:30:14 | 2025-04-01T06:37:33.745379 | {
"authors": [
"SocialGroovyBot"
],
"repo": "SocialGouv/upptime",
"url": "https://github.com/SocialGouv/upptime/issues/559",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2333106400 | Gradio version isn't updated to the latest patches.
Since I don't have a powerful enough NVIDIA chip, I use AICoverGen with Google Colab instead. Although I have installed the latest versions of both Python (3.12.3) and Gradio (4.32.2), and ran the Python verification code to confirm that I'm using the 4.32.2 patch of Gradio, when I run Google Colab the message "IMPORTANT: You are using Gradio version 3.48.0, however version 4.29.0 is available, please upgrade." still appears. I am worried that using a highly outdated version would badly affect the results of my AI covers. What can I do to fix this?
I am working on a fork of this repository which supports the latest version of gradio 4 (4.37.1 as of writing) as well as many new features. It's available here: https://github.com/JackismyShephard/ultimate-rvc, in case you are interested. Any feedback is greatly appreciated.
| gharchive/issue | 2024-06-04T10:03:13 | 2025-04-01T06:37:33.748711 | {
"authors": [
"JackismyShephard",
"SokolyMoravia"
],
"repo": "SociallyIneptWeeb/AICoverGen",
"url": "https://github.com/SociallyIneptWeeb/AICoverGen/issues/129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
288968105 | Integrate fuzzers
Integrate Guido Vranken’s work from https://github.com/guidovranken/SoftEtherVpn-Fuzz-Audit.
@guidovranken , can you help with fuzzers ?
| gharchive/issue | 2018-01-16T16:21:19 | 2025-04-01T06:37:33.758771 | {
"authors": [
"chipitsine",
"paulmenzel"
],
"repo": "SoftEtherVPN/SoftEtherVPN",
"url": "https://github.com/SoftEtherVPN/SoftEtherVPN/issues/428",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1561304043 | Consider adding a demo app
Please add a separate example app demonstrating multithreading and direct memory releasing in a JME context.
This is a list of tech-demos that could be made in a separate module jme3-testalloc:
[ ] HelloJme3alloc.java: demonstrating the basic allocation/deallocation capabilities.
[ ] HelloThreadedJme3alloc.java: testing the multi-threading capabilities.
[ ] HelloMemoryCopy.java: demonstrating memoryCopy(ByteBuffer to, ByteBuffer from, long size).
[ ] HelloMemorySet.java: demonstrating memorySet(ByteBuffer buffer, int value, long size).
[ ] HelloMemoryMove.java: demonstrating memoryMove(ByteBuffer to, ByteBuffer from, long size).
[ ] HelloJme3allocLog.java (WIP).
I will notify you on the forums if you want to add more tech-demos later, after fixing issue #25, because I want to examine the windows github-runner image output.
| gharchive/issue | 2023-01-29T14:04:34 | 2025-04-01T06:37:33.774679 | {
"authors": [
"Ali-RS",
"Scrappers-glitch"
],
"repo": "Software-Hardware-Codesign/jme-alloc",
"url": "https://github.com/Software-Hardware-Codesign/jme-alloc/issues/16",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1500076543 | Fixes a security vulnerability in rollup-plugin-terser
Severity: High
Summary: Terser insecure use of regular expressions before v4.8.1 and v5.14.2 leads to ReDoS
Package: terser
Patched in: >=4.8.1
Dependency of: admin-bro
Path: admin-bro > rollup-plugin-terser > terser
More info: https://github.com/advisories/GHSA-4wf5-vphf-c2xc
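Until admin-bro ships an updated rollup-plugin-terser, one possible stop-gap is forcing the patched terser through the package manager. A sketch of the relevant package.json fragment (yarn reads "resolutions"; npm 8.3+ uses an equivalent "overrides" block):

{
  "resolutions": {
    "terser": "^4.8.1"
  }
}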
@dziraf
I migrated from admin-bro to adminjs (latest version).
I changed the config, but when I start the application I get this error:
(node:21508) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'useState' of
null
at Object.useState (D:\projets\equipe\back\Equipty-Admin\node_modules\react\cjs\react.development.js:1622:21)
at ae (D:\projets\equipe\back\Equipty-Admin\node_modules\styled-components\src\models\StyleSheetManager.js:34:33)
| gharchive/issue | 2022-12-16T11:47:32 | 2025-04-01T06:37:33.778780 | {
"authors": [
"faroukbouzouita"
],
"repo": "SoftwareBrothers/adminjs",
"url": "https://github.com/SoftwareBrothers/adminjs/issues/1343",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
176688947 | Get initial feedback and improve
[sorry if I'm doing github wrong...]
Tim, this looks awesome. I have a couple of comments which you're totally free to ignore :-) I've put in descending order of how strongly I feel.
It's a bit weird that you can sign up without giving any info about yourself. Surely it's ok to ask people to put their name down, and maybe also company/job title? Anonymous sign-ups are pretty worthless IMO. I like this page: https://www.givingwhatwecan.org/about-us/members/
Shifting the focus towards diversity as a whole rather than just gender split is nice, but the way you've done it actually lowers the bar even further (you could have an all-male line-up with one Asian male and be fine), and in practice increases the number of speakers who can't meaningfully sign the pledge (as they themselves stop events from being homogeneous). I don't really have an answer to this tbh - but we could consider making the speaker's pledge a bit stronger (e.g. I'll actively spread the word about this pledge, or I'll only speak at events where there's some diversity not including myself)
Do we need to define diversity somewhere, or is it just obvious? Does it include, idunno, dyslexic speakers, narcoleptic speakers, the French? Am I just being deliberately boneheaded? Who knows??
Anonymity - you may be right, I'm not sure on this one. Maybe going non-anonymous for now and waiting for somebody to complain is a better approach. I'd love to hear thoughts from other people. I'm open to doing this though.
This change does potentially lower the bar a little. I think it's a tradeoff in return for making the pledge itself more widely relevant and inclusive, and I think it's a worthwhile balance, but you're right, it's not perfect. I expect gender diversity is likely to be the main bit focused on though (it's easiest - everywhere in the world has a lot of women - and it's high-profile already). We are aiming for a minimal/worst-offenders level anyway, and I think the vast majority of events that were covered before (all-male events) still get the same impact (because I think they're typically all-straight-white-male events, in practice).
As for relevance to pledgers: it is always potentially relevant, but yes in practice it's less of a meaningful commitment the more unrepresented your group is. Previously though it was nonsensical for ~50% of the population, now it's at least potentially relevant to more, and still a meaningful commitment to most. It's not perfect, but it's less obviously/fundamentally broken, I think it's more widely useful overall, and I can't think of anything better :smile:. Notably this just for the speaker pledge though, everything else is equally relevant.
As for expanding the pledges; I'm cautious about asking people not to speak at events where there's no diversity other than yourself - it makes it harder for non-diverse events to invite diverse speakers - or making the pledge too complicated. Very simple changes might be workable though. "...and I'll warn/check with events about this in advance" to explicitly add the "how does this work" suggestions?
Totally agree - we should add more detail about diversity. "What do you mean by 'diverse' speakers?" is now in the FAQ: https://github.com/Softwire/minimum-viable-diversity-pledge/commit/848c74c021a10092b743d1991fca95bcc86292c8. Does that cover what you were looking for?
I've actually just dropped anonymity entirely for the company/event pledges. Individuals are maybe debatable, but for events/companies you're definitely right.
This is awesome! Super excited. Have you sent it round the directors yet? What do WES think?
Completely tiny thing, but it just occurred to me that the first two pledges start "I won't ever..." and the last two start "We will never..." - Is the difference deliberate? Should we make them consistent?
I think I agree with Chris that anonymous pledges are slightly worthless. Seeing the caveat "feel free to skip this if you're not comfortable publicizing this commitment" is a bit odd - it makes me think, "hang on, am I comfortable with this? why wouldn't someone be? maybe I shouldn't be." I think we should drop anonymity, and if the odd person really minds, they will just put in initials or an alias. We could put in a tick box to say "don't publish my name anywhere public" maybe.
There are one or two more things that I'm mulling over... I'll comment if I can think of any actual concrete things to say about them :p
Have you sent it round the directors yet?
Sorry I have now just seen your email chasing us up for feedback and I assume that's what you were waiting for - sorry!!!
WES have had a quick look and are very keen, but they're struggling to find substantial time for a detailed check through with feedback at the moment. I've sent it to Zoe yesterday to look at and confirm with the directors, but that's still in progress.
Ok, since you're both agreed on anonymity I've dropped it completely, and added a "Can we publish your name in the list of signups?" tickbox instead, so people can opt-out (but not companies and events; they're never anonymous).
I've also updated the intro text to make it "I/we will never..." in every case.
One new thing: the description on the site generally is ok I think, but for the pledges themselves in "I will never attend any paid conferences or panels with a completely homogeneous group of speakers" for example, "homogeneous" feels a bit formal. I want something like 'with zero diversity in the speaker lineup', but less wordy - any ideas?
Let me know about your mullings Ying! More thoughts the merrier.
This looks grand - I don't have any blocking concerns or further suggestions for improvement, so full speed ahead as far as I'm concerned.
I want something like 'with zero diversity in the speaker lineup', but less wordy - any ideas?
You could flip them all round to positive statements, e.g. "I will only attend paid conferences or panels that have a diverse speaker line-up"
Thanks Tim!
Two thoughts:
The first sentence in the header would be better split into 2 sentences I think? And then we need to change the 2nd sentence as it is then a bit further removed from the 1st one. Something like this?
Professional events need to be inclusive, by representing a diverse range of speakers. That way, everybody can be involved with and inspired by the cutting edge of their domain.
Too many events don't represent any diversity at all.
We want to...
About the "How to use this pledge" bit in Resources:
The description under "as a speaker" imo compounds the relevance issue. It simply doesn't make sense to say "I won't be able to speak if every other speaker you pick is a [black trans person] like me". I think we should just face it head on and explain what we think the pledge should mean for someone coming from a minority background, even if it necessarily means something less. "Because we've set the bar at a minimum level, this pledge might not be quite as meaningful for you if you are already in an underrepresented group. Please sign up anyway to show your support! You could even use this pledge as an excuse to exert more pressure on organisations to improve diversity further." or something?
Secondly, does this whole section fit better in "How does this work?" or even, in its own section? It seems very useful.
I've tweaked the intro with almost exactly those changes, moved the template text in resources into the How Does This Work section completely (you're right, it fits way better there), and added an FAQ item to cover marginalized people. We've also gained mobile styling and some tweaks on the way. What do you two think?
Unless anybody has strong opinions on it, I'm actually going to ignore the homogeneous bit for now I think - it is easier to read if you flip the negatives, but doesn't have the same feeling imo ("I will only speak at conferences or on panels with lineups that include some diversity" doesn't sound as solid a commitment as "I will never speak at any paid conferences or panels as part of a completely homogeneous group of speakers"). I think I'm ok with it.
Bit pedantic I suppose, but "I will never speak at any paid conferences or panels as part of a completely homogeneous group of speakers" seems to also ban, say, all-black panels, which seems contrary to the intended purpose?
The idea is that an all-black-male tech conference lineup probably isn't a great thing (although it is at least a change from all-white-male tech conferences) as you've got zero representation of women. If otoh you have black men and women though, great; you've got a degree of diversity, and this doesn't apply.
What 'diversity' is is a bit of a fluffy line of course (see the FAQ). You can argue that everybody is unique and diverse, and wiggle out of it if you really push for it, but I think most of the time a) it's fairly clear when your speaking lineup is a whole bunch of very similar people and b) in almost any case where that's happening, there's some major groups that are being excluded (e.g. 50% of the population).
This is great! I particularly like the example wording for conference attendees and speakers - it's tough to find assertive ways to phrase these things when you're unsure if you're the only one speaking up.
Two suggestions:
My only niggle is about the "Why is this important?" section, specifically the emphasis on "Diverse groups solve problems better". It might just be a personal bugbear, but I always think it's a shame when this is mentioned first. It's an important point, and I understand why it's highlighted - you're trying to convince people, and this might sway someone who would otherwise ignore arguments about diversity. But I think those other arguments (about fairness, inclusiveness and providing role models) are more important - I'd want more diversity at conferences even if the research said that diverse teams had no effect on problem solving. :)
I also realise it's only this section which emphasises that argument, and that the focus of the introduction is on inclusiveness. But it'd still read better to me if the sentence was something along the lines of "As well as helping and inspiring individual people from underrepresented groups, increasing speaker diversity is important because diverse groups solve problems better".
For the resources section, Meri Williams' post on getting a diverse lineup for The Lead Developer is really good: https://medium.com/@geek_manager/broadening-the-responses-to-our-conference-cfp-a22f120fa941
Chris: Shifting the focus towards diversity as a whole rather than just gender split is nice, but the way you've done it actually lowers the bar even further (you could have an all-male line-up with one Asian male and be fine)
Tim: What 'diversity' is is a bit of a fluffy line of course (see the FAQ). You can argue that everybody is unique and diverse, and wiggle out of it if you really push for it, but I think most of the time a) it's fairly clear when your speaking lineup is a whole bunch of very similar people and b) in almost any case where that's happening, there's some major groups that are being excluded (e.g. 50% of the population).
This probably isn't aimed at me at all. However, I got a change notification email for it, and I read it, and thought I'd stick my 2¢ in.
I agree with Chris: using the word "diversity" if you mean "representation for women" probably dilutes your meaning to the point of ineffectiveness. As it stands, if I were a slightly non-PC event organiser, I don't think I could be sure in advance whether or not my event were "diverse" enough to avoid getting flak from lots of activists pointing to this manifesto. If this manifesto only has meaning to those already in the know, what's the point?
This reminds me of a lot of code reviews I do where people start writing a FooConnector and think, "oh I could generalise this", and before you know it we have a PluggableConnectorFactoryFactory and it's really difficult to use and understand, and we never end up connecting to anything else so the generalisation was wasted.
By using such a general term and refusing to define it concretely I think you are giving too much wiggle room to both sides of the argument:
Anyone who wants to condemn an event can surely find a group which represents ~50% of the population and isn't represented (e.g. any non-English speakers at your conference?)
Anyone who wants to defend an event can find some "diversity" via the uniqueness of individuals, as you have already pointed out.
I'd consider changing it back to "I will never attend any paid conferences or panels with only men" or consider trying to define "diversity" (which is surely a fool's errand)
Thanks Rich! More ideas and thoughts are definitely good, 2¢ away.
That said, I'm not too worried about this, and I still think this pledge is more useful now we've made these changes. It's primarily because I don't think either of those cases are problems we need to protect against, but also because 99% of cases where this ever matters are very clear cut, and leaving a tiny bit of space for common sense for the last 1% is fine by me (rather than the impossible task of coming up with something both fair and precise).
We're not trying to write a formal binding contract. We're trying to write a useful clear guideline for people to opt into, hopefully in a way that results in positive change long-term.
For your two examples:
Aiming high for diversity is explicitly not our focus. We're just trying to push a minimum bar, to avoid events that represent only one major group (typically white men, but white women only panels and conferences are a contentious topic too). If you have any diversity at all already (e.g. somebody with disabilities in your lineup), this pledge isn't aimed at you. That sounds like an easy line, yes. That's the point though: loads of events & panels still fail at even that.
Defending clearly bad events seems unlikely. The person making the call on what's "diverse" is somebody who's actively come here and committed themselves to aiming for diversity. I'm not sure why you'd sign a pledge to avoid events without diversity and then try and wrangle your way out of it, and I'm not super worried about policing people who do so. The person making this 'is this diverse?' call is somebody who's signed up because they want to push for diversity - I'd like to give hints, but in the end I'm ok with them deciding what diversity is important to them.
Softwire is the motivating example. We were trying to come up with a commitment for what kind of events we'd rather not actively support people in going to (or similarly, a commitment we could encourage our public speakers to make about which events they'd speak at). You need a low bar to do that, or it's difficult and massively stymies getting anything done, and this is a clear, basic and useful one. Softwire isn't going to say "oh but we want to go to that event and, uh, it's got a blond guy", because we want to avoid non-diverse events. Nobody is going to point at Softwire and say "hey you went to an event that wasn't diverse enough to include [non-english speaking/female/latino] people" just because we've signed this pledge (certainly no more than they might already).
I think that's the key point: we're not creating a binding contract with strict rules, just an outline of a helpful line people can opt into and agree to aim for. I'm pretty confident this is useful, and might create some positive change long-term.
Does that context make it a bit clearer?
Also @suzannehamilton: thanks, that's really helpful, I totally agree! I've added Meri's article to the organizer resources, and replaced that bit of the FAQ with your exact words :+1:.
Looking great, really starting to take shape, and a lot of good suggestions made so far!
I think this bit could use some more work:
There are some people however that in today's world are never a member of a homogeneous group of speakers. If you're in this position, the speaker pledge is a commitment that you'd struggle to break! Feel free to sign it anyway to show your support, or to sign one of the other pledges, which are still just as relevant.
If I saw this, I personally would feel marginalised, particularly by "feel free to sign it anyway". I'd feel like my signature was a nice to have, and not as valuable as that of white male cis folks, which IMO is a bit counterproductive. We don't want the list of supporters to become the homogeneous group we're trying to discourage!
I think we absolutely do want the signature of folks from underrepresented groups - if we are trying to do something for them without including them, we are missing the point. They HAVE to be on board.
If they are attendees, they can action the bit where they object to attending if they're not represented. This is important, and the pledge can give those objections weight that they wouldn't have otherwise.
If they are speakers, they can put themselves forward to help events achieve better diversity. We should point them to the resources that will help them do so.
Even if they are neither, I think we should emphasise that their signatures are what gives the pledge legitimacy, and their support is not just something that's nice to have, it's essential to the success of the pledge.
Those are the main improvements I'd like to see to that paragraph, but I think even the first couple of sentences could be reworded to express more sensitivity. It's true that for some people (like me) it seems like we'll never be part of a homogeneous group and while that's good in principle (because homogeneous groups are bad) the reasons why this is true for people like me suck royally. I feel that if I'm going to be reminded of that, I'd want it to be done in a way that sympathises with my struggle; otherwise I'd have the feeling that the folks behind this pledge don't really get it.
I hope that makes sense. Thanks all (Tim especially) for all the hard work that's gone into this! I can't wait to sign 😄
Just for reference on that last comment, Helen and I had a separate chat about this, and came up with some nice improvements (https://github.com/Softwire/minimum-viable-diversity-pledge/commit/e7c59dfe14cb2362be3f4421c885e92ee8529244 for those interested).
More generally, with some last tweaks the Softwire board is happy and has signed off on this, and we're prepping for launch. I'm aiming to launch this midday Thursday - if anybody wants to come in with any more feedback or thoughts on this, please do it well before then!
Thanks everybody - loads of useful stuff here :-)
I only just saw this!
I love the sentiment behind this and the effort put into it but there are some incidental details that would make me think a bit before I would feel comfortable linking anyone to this page. It looks like this is exactly not the time you wanted feedback though, and I suspect I'm alone here, so I'm happy to just sit it out :)
Thanks Stephen! I doubt you're alone - please do give us your feedback :)
(We could incorporate them even more quickly if you submitted a pull request which is what Hereward did https://github.com/Softwire/minimum-viable-diversity-pledge/pull/22 :))
| gharchive/issue | 2016-09-13T16:31:11 | 2025-04-01T06:37:33.811098 | {
"authors": [
"RichardBradley",
"aeone",
"catalin-ursachi",
"chris-harris-softwire",
"hayh",
"pimterry",
"suzannehamilton",
"yingxinj"
],
"repo": "Softwire/minimum-viable-diversity-pledge",
"url": "https://github.com/Softwire/minimum-viable-diversity-pledge/issues/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
120071101 | updated link to RDO image, closes issue 119
Moved the developer RDO image to rackspace.
New link worked and .ova file started downloading.
I did not let the download complete due to size and download times, but can do so if desired.
:+1:
| gharchive/pull-request | 2015-12-03T01:00:04 | 2025-04-01T06:37:33.827331 | {
"authors": [
"jxstanford",
"lexjacobs"
],
"repo": "Solinea/goldstone-server",
"url": "https://github.com/Solinea/goldstone-server/pull/157",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2229761579 | 🛑 Solutions2AZ-es is down
In 5756087, Solutions2AZ-es (https://www.solutions2az.es) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Solutions2AZ-es is back up in cda595e after 4 minutes.
| gharchive/issue | 2024-04-07T13:06:43 | 2025-04-01T06:37:33.842088 | {
"authors": [
"danifernandezs"
],
"repo": "Solutions2AZ/2az-status",
"url": "https://github.com/Solutions2AZ/2az-status/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
199479179 | Crash when placed next to extreme reactors
http://pastebin.com/qepVcxcJ
The crash is re-creatable with the latest Extreme Reactors and Flux Networks.
minecraft version 1.10.2
extreme reactors version 1.10.2-0.4.5.23
flux networks version 1.10.2-1.0.9
how to re-create:
build functional extreme reactor (does not need fuel)
place either the send or receive node next to the reactor on a non-output block (if you put it on the output it connects just fine, no crashes, and interacts with the network fine)
game crashes with a NoSuchMethodError
I had this happen to me when I was updating my server.
The issue was that SonarCore got updated but Flux Networks didn't.
I fixed it by rolling back SonarCore to version 3.1.9.
Thanks for letting me know. I did notice the crash also happened next to EIO capacitors with that version, and yeah, I'll check in the morning whether the rollback did the trick for me.
Will be fixed in next update.
| gharchive/issue | 2017-01-09T05:16:03 | 2025-04-01T06:37:33.864061 | {
"authors": [
"SonarSonic",
"thetechnodragon",
"zlainsama"
],
"repo": "SonarSonic/Flux-Networks",
"url": "https://github.com/SonarSonic/Flux-Networks/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
912891839 | How to enable chunk loading
How do I enable chunk loading on a flux point?
I hate that I have spent the last 2 hours of my life looking for answers to that very question. I'm playing ATM7TTS in singleplayer, and I even found the config folder and changed "enable chunk loading" to true, and it still won't allow me to chunk load... It's been almost 3 years since this was opened and I'm the first person commenting, really??? This is low-key insane. Why can't anyone give a simple solution or answer? Maybe the mod dev hasn't implemented support for that feature in 1.18.2 yet, but some acknowledgement of the issue would be helpful. Or is there another config somewhere that I can't find that also needs to be updated?
I play on ATM7: To The Sky and I can't click on the chunk loading button.
In the server config file: enable chunk loading = true.
I don't understand... please help me.
Just dealt with the same issue and found the solution thanks to some helpful folks and figured I'd pass it along here as well after confirming it on my own server:
The config file located at [serverdirectory]/config is copied into the world folder when it is generated, so the one you're editing is likely only going to take effect on a new fresh world.
The correct config file to edit for existing worlds is located in the [serverdirectory]/world/serverconfig folder.
You just need to use the FTB map (press M), then click the chunk you want to claim; after that, shift+click to force-load the chunk.
| gharchive/issue | 2021-06-06T17:07:21 | 2025-04-01T06:37:33.868184 | {
"authors": [
"BlackXSkunk394",
"KayerMC",
"MikeyM3thodic4l",
"Thegriefmaker99",
"rents44"
],
"repo": "SonarSonic/Flux-Networks",
"url": "https://github.com/SonarSonic/Flux-Networks/issues/440",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
95185764 | SONARMSBRU-99: remaining file renamings
Changed:
SonarQube.MSBuild.PreProcessor.exe -> MSBuild.SonarQube.Internal.PreProcess.exe
SonarQube.MSBuild.PostProcessor.exe -> MSBuild.SonarQube.Internal.PostProcess.exe
SonarQube.TeamBuild.Integration.dll -> TeamBuild.SonarQube.Integration.dll
SonarQube.MSBuild.Tasks.dll -> SonarQube.Integration.Tasks.dll
Unchanged: SonarQube.Common.dll, SonarQube.Integration.targets, SonarQube.Integration.ImportBefore.targets
Done in a previous commit: SonarQube.MSBuild.Runner.exe -> MSBuild.SonarQube.Runner.exe
I've fixed up the packaging projects and manually deployed and executed the newly-packaged code.
I haven't renamed the embedded zip file.
LGTM
| gharchive/pull-request | 2015-07-15T13:01:30 | 2025-04-01T06:37:33.932982 | {
"authors": [
"bgavrilMS",
"duncanpMS"
],
"repo": "SonarSource/sonar-msbuild-runner",
"url": "https://github.com/SonarSource/sonar-msbuild-runner/pull/65",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1142804818 | Code review xx Gwen
Read Me
Still being fixed, by the looks of it :)
Code
Index.html --> Looks neat
Style.css --> Handy notation, nothing else to remark on
Script.js --> ""
Thanks gwen :)
| gharchive/issue | 2022-02-18T10:34:07 | 2025-04-01T06:37:33.943849 | {
"authors": [
"Sophievanderburg",
"gwenversteegh"
],
"repo": "Sophievanderburg/get-inspired",
"url": "https://github.com/Sophievanderburg/get-inspired/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2245694533 | 🛑 Minecraft Server is down
In ff5997e, Minecraft Server (http://minecraft.sorrowblue.com:8123/login.html) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Minecraft Server is back up in 0b841dd after 1 day, 2 hours, 12 minutes.
| gharchive/issue | 2024-04-16T10:28:39 | 2025-04-01T06:37:33.947291 | {
"authors": [
"SorrowBlue"
],
"repo": "SorrowBlue/upptime",
"url": "https://github.com/SorrowBlue/upptime/issues/52",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2381320966 | 🛑 Marcel Web is down
In c694915, Marcel Web ($MARCEL_WEB) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Marcel Web is back up in 79c2cbf after 3 hours, 16 minutes.
| gharchive/issue | 2024-06-28T23:43:30 | 2025-04-01T06:37:34.007142 | {
"authors": [
"Sundypha"
],
"repo": "Source-Graphics-GmbH/upptime",
"url": "https://github.com/Source-Graphics-GmbH/upptime/issues/740",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2397024841 | 🛑 Marcel Web is down
In 3390ee5, Marcel Web ($MARCEL_WEB) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Marcel Web is back up in 3ab7a11 after 2 hours, 1 minute.
| gharchive/issue | 2024-07-09T03:29:31 | 2025-04-01T06:37:34.009370 | {
"authors": [
"Sundypha"
],
"repo": "Source-Graphics-GmbH/upptime",
"url": "https://github.com/Source-Graphics-GmbH/upptime/issues/777",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1974875027 | Combo submission: produced features suggestions/autocomplete should filter out utility features
Describe the Problem
The produced features suggested by autocompletion in the combo submission form include utility features, which aren't meant to be seen by end users. It should apply a filter to exclude those.
Solved by hiding them from the API
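As a sketch of the intended behaviour, something along these lines in TypeScript (the Feature shape and the utility flag are assumptions for illustration, not the actual Spellbook schema):

interface Feature {
  id: number;
  name: string;
  utility: boolean; // hypothetical flag marking internal/utility features
}

// Only suggest user-facing features in the combo submission autocomplete.
function producedFeatureSuggestions(features: Feature[], query: string): Feature[] {
  const q = query.toLowerCase();
  return features
    .filter((f) => !f.utility)
    .filter((f) => f.name.toLowerCase().includes(q));
}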
| gharchive/issue | 2023-11-02T19:02:25 | 2025-04-01T06:37:34.027405 | {
"authors": [
"ldeluigi"
],
"repo": "SpaceCowMedia/commander-spellbook-site",
"url": "https://github.com/SpaceCowMedia/commander-spellbook-site/issues/440",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1305477617 | Help!
So I found you via a Gabb hack and was wondering if you want some publicity or something. I have a YT channel with 330+ subs, and I could do a partner thing with you guys. Also, can you fix it or update it? idk
Hi! A yt video would be awesome! My Discord is SpaceSaver2000#2992 and the Gabb Development Discord server is: https://discord.gg/SpbVSjv9uW . Also, what's the problem you're having with the script?
| gharchive/issue | 2022-07-15T01:54:51 | 2025-04-01T06:37:34.029946 | {
"authors": [
"SpaceSaver",
"theBlaize"
],
"repo": "SpaceSaver/AndroidProxySigninHack",
"url": "https://github.com/SpaceSaver/AndroidProxySigninHack/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
216939306 | [HYDRAR-141] fixing bad markup warning in .Rd files
These warnings occur when we separate two brackets of the command \item{}{} into two lines.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the Apache License 2.0; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
Build triggered.
Build success.
All unit tests passed.
| gharchive/pull-request | 2017-03-25T01:25:22 | 2025-04-01T06:37:34.079929 | {
"authors": [
"R4ML-CI",
"iyounus"
],
"repo": "SparkTC/r4ml",
"url": "https://github.com/SparkTC/r4ml/pull/10",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1124386348 | [Bug]: cannot add type species for new genus name
Steps to reproduce the bug
1. See genus Kressleinius
2. When I attempt to add the type species, Kressleinius celans, I get the dialog box in the screenshot below
3. None of the choices are active (the correct one is type by original designation and monotypy); in fact, the x at top right to exit doesn't work either
4. Locks up the system; need to quit TW and restart
...
Screenshot
Expected behavior
No response
Additional Screenshots
No response
Environment
[ ] Development (native)
[ ] Development (docker)
[ ] Sandbox
[X] Production
Sandbox Used
No response
Version
Version 0.22.7 release
Browser Used
Chrome
Duplicate, this is fixed and will be made live ASAP.
| gharchive/issue | 2022-02-04T16:26:36 | 2025-04-01T06:37:34.094051 | {
"authors": [
"JimWoolley",
"mjy"
],
"repo": "SpeciesFileGroup/taxonworks",
"url": "https://github.com/SpeciesFileGroup/taxonworks/issues/2796",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1392616580 | Task - matrix row coder - checkbox to show only unscored descriptors
Saad requested some filtering functionality to
display only unscored descriptors,
display descriptors with a particular tag (we are doing this in the interactive key)
Same should be implemented in the Column coder
display only unscored rows
I don't think we want to implement tag filtering - this is already done with dynamic character/views. Create a new matrix, add the dynamic column via the Keyword, and you have the view you need, updated as you need.
| gharchive/issue | 2022-09-30T15:32:24 | 2025-04-01T06:37:34.095873 | {
"authors": [
"mjy",
"proceps"
],
"repo": "SpeciesFileGroup/taxonworks",
"url": "https://github.com/SpeciesFileGroup/taxonworks/issues/3121",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1874090748 | User auth method update fix
Description
When a user's auth method is updated to "username/password", the form still sends the previously selected samlProviderId in the request body, which prevents the auth method from actually being updated. In order to make the update correctly, we need to set this field to an empty string when "username/password" is chosen.
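A minimal TypeScript sketch of the described fix (the variable and field names here are assumptions based on this description, not the actual BloodHound code):

// Hypothetical form-submission shape, for illustration only.
interface UserUpdatePayload {
  authMethod: "saml" | "username/password";
  samlProviderId: string;
}

function buildUpdatePayload(
  authMethod: UserUpdatePayload["authMethod"],
  previousSamlProviderId: string
): UserUpdatePayload {
  return {
    authMethod,
    // Clear the previously selected provider when switching back to
    // username/password, so the backend actually applies the change.
    samlProviderId: authMethod === "username/password" ? "" : previousSamlProviderId,
  };
}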
Motivation and Context
Corrects a bug which prevents administrators from updating a user's auth method from SAML.
How Has This Been Tested?
Manual testing to confirm users can be switched freely between the two auth methods.
Screenshots (if appropriate):
Types of changes
[ ] Chore (a change that does not modify the application functionality)
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Checklist:
[ ] Documentation updates are needed, and have been made accordingly.
[ ] I have added and/or updated tests to cover my changes.
[x] All new and existing tests passed.
[ ] My changes include a database migration.
The changes seem solid; however, I would love to see some accompanying tests.
| gharchive/pull-request | 2023-08-30T17:25:14 | 2025-04-01T06:37:34.100212 | {
"authors": [
"elikmiller",
"maffkipp"
],
"repo": "SpecterOps/BloodHound",
"url": "https://github.com/SpecterOps/BloodHound/pull/68",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
828902892 | Design and code: Make UI of front-page responsive and attractive.
Describe the issue: I'd like to make the landing page more attractive and responsive to all devices using JS, HTML/CSS.
Expected behavior: Landing page changes to make the website more user-friendly and responsive.
I'd like to work on this issue under GSSOC'21, thank you.
I'd like to get assigned to this issue, can I get assigned on this issue under GSSoC'21?
Describe the issue: I'd like to make the landing page more attractive and responsive to all devices using JS, HTML/CSS.
Expected behavior: Landing page changes to make the website more user-friendly and responsive.
I'd like to work on this issue under GSSOC'21, thank you.
Hey @aashishah, I have assigned this issue to you!
I'd like to get assigned to this issue, can I get assigned on this issue under GSSoC'21?
Hey @oneknucklehead, Someone is already working on this issue, so you can either work on the other issues or you can create your own.
Hi, I am a GSSoC'21 participant and would like to work on this issue. I have experience in CSS and JS; please assign me this issue.
Hi, I am a GSSoC'21 participant and would like to work on this issue. I have experience in CSS and JS; please assign me this issue.
Hey @simranbhalla3, This issue has been assigned to someone else. You can either work on other issues or you can create your own issues.
@rupeshmohanty Sir, can I work on this issue? It's been many days and it is still not solved. Please assign it to me.
I would like to work on the issue as well, if it's still open.
@aashishah Are you working on this issue or should I assign this one to someone else?
If @aashishah isn't working on this issue, can I get assigned to it?
@rupeshmohanty You can unassign me, sorry for the inconvenience.
If @aashishah isn't working on this issue, can I get assigned to it?
Mentor, please let me work on this issue. I will do it as soon as possible.
@oneknucklehead @Inventor77 @simranbhalla3 @Ananyaagupta I want to hear your ideas on how to solve this issue and how much time do you guys need to implement it.
@oneknucklehead @Inventor77 @simranbhalla3 @Ananyaagupta I want to hear your ideas on how to solve this issue and how much time do you guys need to implement it.
Yes sir
@rupeshmohanty I would like to work on this issue as a GSSOC'21 participant. Please assign it to me.
@Mukta-Sawant Done!
Can I work on this issue?
@srishtij2000 Someone is already working on this issue. You can work on some other issue.
| gharchive/issue | 2021-03-11T08:04:40 | 2025-04-01T06:37:34.110513 | {
"authors": [
"Ananyaagupta",
"Inventor77",
"Mukta-Sawant",
"aashishah",
"oneknucklehead",
"rupeshmohanty",
"simranbhalla3",
"srishtij2000"
],
"repo": "Spectrum-CETB/LesKollab",
"url": "https://github.com/Spectrum-CETB/LesKollab/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1473601360 | TAG.NAME suffix and prefix do not have a space
We can use an override tag for the name:
TAG.NAME.PREFIX | Sets some text to display before the character's name.
TAG.NAME.SUFFIX | Sets some text to display after the character's name.
I have tested it, and it's weird because there is no space around the name.
I can easily modify it in the core, but I'm wondering if I should make the modification on my shard only, or for everyone with a PR.
What was the goal of these tags in the beginning?
Maybe it was for adding special brackets, like ->Ronaldo<-
It's not a bug, it was made on purpose:
12-11-2003, Kell
Added support for the following TAGs on characters:
TAG.NAME.ALT (alternate name, good for incognito effects)
TAG.NAME.PREFIX (alternate prefix, if not set, defaults to Notoriety prefix - lady/lord)
TAG.NAME.SUFFIX (suffix for the name)
Note that a space isn't added for prefix or suffix on purpose, to allow text to be
glued to the name. You can add a space by using quotes, as in: TAG.NAME.SUFFIX="text "
Ooh, very interesting. Thanks for the " ". It was the part I was missing.
| gharchive/issue | 2022-12-03T00:01:07 | 2025-04-01T06:37:34.184526 | {
"authors": [
"Jhobean",
"drk84"
],
"repo": "Sphereserver/Source-X",
"url": "https://github.com/Sphereserver/Source-X/issues/970",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
320513189 | Port new features or bugfixes from Source
In the discussion below we can talk about commits to add to this to-port list.
Paste the whole commit text even if you intend to port only a part of its code.
When a commit is ported, reference this issue in the commit message and add a comment to this issue specifying which functionality was ported from which commit.
This message will be edited and the relevant text struck through.
To be tested more and eventually ported.
Revision: a622737b2be9495d264091d7632af4987bfbfcfd
Author: Coruja coruja@outlook.com
Date: 13/06/2016 22:58:50
Message:
Fixed: Boat/ship parts resetting TYPE after use 'turn' commands.
To be tested more and eventually ported.
Revision: acc1eb61d49e9c8d898027f64eafb8ee9076d343
Author: Coruja coruja@outlook.com
Date: 17/06/2016 09:11:42
Message:
Fixed: Paralyze / Paralyze Field making NPCs flood too many attack actions
Fixed: Console error when for some reason an NPC got created/moved to an invalid region
To be tested more and eventually ported.
Revision: ffc3a52226fa3fa65ac40b3bf5476e5aa2e4f571
Author: Coruja coruja@outlook.com
Date: 22/06/2016 18:24:07
Message:
Added: Support for colored multis on target functions (only compatible with HS clients 7.0.9.0+)
Revision: aada1eb2fcb12b5b3564be6aa5a7b23e5d91a9e8
Author: Coruja coruja@outlook.com
Date: 27/06/2016 23:44:21
Message:
Fixed: Function f_onchar_delete not being called if the player char gets deleted in any way other than the client Character Selection menu
To be tested more and eventually ported.
Revision: a79d64ef2bee889a3a63ac0a7d96f5defc0e0459
Author: Coruja coruja@outlook.com
Date: 27/06/2016 23:57:49
Message:
Fixed: Multi dynamic regions getting replaced by script static regions on server resync
Revision: 3066341740896269766206db690f76a3785290b1
Author: Coruja coruja@outlook.com
Date: 30/06/2016 00:41:34
Message:
Fixed: Attack/Kill command on pets allowing select the pet itself as target, making it attack his owner
Revision: f26b39b9b40fba76660c36d57baa8387d3613272
Author: Coruja coruja@outlook.com
Date: 22/07/2016 22:52:03
Message:
Fixed: Security issue setting account login as chat name when newest clients try to setup the chat window for the first time and the char name is not available
Fixed: Invisible chars being incorrectly revealed by REVEALF_LOOTINGOTHERS reveal flag when picking items from the ground
Fixed: Return 0/1 on spell triggers not working correctly
Fixed: Function MOVENEAR not working correctly
To be discussed and eventually ported.
Revision: dd7a59b451e9d4d8d99c3bec15b330cf60dded2f
Author: Coruja coruja@outlook.com
Date: 02/08/2016 07:07:12
Message:
Fixed: Items inside trade window not firing @DropOn_Item trigger when the trade moves the item to the player's backpack
Revision: 1fa0504b39e9df1074fb29121da2446926eb56b6
Author: Coruja coruja@outlook.com
Date: 03/08/2016 08:59:17
Message:
Changed: Max item capacity on containers changed from 255 to 125
This is required to make containers works properly on Enhanced Clients, because containers on these clients have hardcoded capacity of 125 items (OSI already uses this value since many years ago, even before enhanced client)
Revision: 5586c3db3ff6940a6d93c6d8062ccca1e195cbc2
Author: Coruja coruja@outlook.com
Date: 07/08/2016 06:54:07
Message:
Fixed: ARGN1 on char trigger @SkillPreStart not working correctly
Revision: 48ba5dc6e0e8be27b06208b31d128400eda47dd1
Author: Coruja coruja@outlook.com
Date: 12/08/2016 08:30:13
Message:
Fixed: Client war mode flag not being removed on death
To be tested more and eventually ported.
Revision: 210563f9d22c4374e4faecdf92cd78ccd7114661
Author: Coruja coruja@outlook.com
Date: 13/08/2016 07:24:17
Message:
Fixed: Multi regions not reloading correctly after server resync
Also added a smart check to only reload it when needed (eg: AREADEF/ROOMDEF get changed on scripts)
To be tested more and eventually ported.
Revision: c8223f203bbeb753ff0e1d3388db13e4df0b4d4a
Author: Coruja coruja@outlook.com
Date: 21/08/2016 05:29:13
Message:
Fixed: Internal check to prevent dropping items inside walls preventing the item drop even when the wall is on another floor
Fixed: Message "Too many items here!" not showing correctly when items are dropped on areas with too many items
Revision: 30364c13a309452fe5fa2d28f264766c67b34e5d
Author: Coruja coruja@outlook.com
Date: 27/09/2016 02:07:55
Message:
Fixed: HTTP server not working correctly
Revision: 8fd75922705cbf38eda5130a65bbc91c020e418e
Author: ares ares@alathair.de
Date: 06/10/2016 22:31:42
Message:
As we have experienced at Alathair, exporting chars is nearly always meant to export other chars but not one's own char, so a flag would be necessary to control that. The default case should be not to export SRC itself, but it could optionally be turned on using the bitflag 0100. So if you want to export chars including yourself, use flag 6. If it should include items, use flag 7. Otherwise 1, 2 and 3 will not include SRC.
...Continues...
To be tested more and eventually ported.
Revision: d66893bfe79e5e359f1d01465f24bdabbd515a6f
Author: ares ares@alathair.de
Date: 08/10/2016 20:10:14
Message:
Fixed a critical bug in background save mechanism where an unsaved offline player character disappears from worldsave when moved to an already saved sector.
Revision: 003321948c4694b5575c6072dec2d72db4a2e576
Author: Coruja coruja@outlook.com
Date: 17/10/2016 21:23:34
Message:
Fixed: NPCs losing 'statf_spawned' flag after server restart
To be tested more and eventually ported.
Revision: 72ff09966548b27829fcf2ed7608207ae6862d26
Author: Coruja coruja@outlook.com
Date: 12/11/2016 20:41:49
Message:
Fixed: Resurrect, Reveal, Meteor Swarm and Lightning spells showing effect animation even when EFFECT_ID=0 is set
[sphereCrypt.ini]: Added crypt key for classic clients 7.0.54 ~ 7.0.55 and enhanced clients 4.0.54 ~ 4.0.55
Revision: b054cd3277f26789ee2fbdd1cb7f96aa21d38127
Author: Coruja coruja@outlook.com
Date: 22/12/2016 19:09:09
Message:
Added: Missing buff icon for Criminal flag
I found this on ServUO repo, thanks for the help :P
Revision: bdc53e3cd70755a5add1c4b57819a2fef00d0245
Author: Coruja coruja@outlook.com
Date: 08/01/2017 19:23:28
Message:
Fixed: Function 'CRIMINAL 0' not updating char notoriety/buff when removing criminal flag
To be tested more and eventually ported.
Revision: 6e0f90f61fdd81d522143e35b6d968137c3d39ab
Author: Coruja coruja@outlook.com
Date: 25/01/2017 17:44:31
Message:
Fixed: Client encryption not being decrypted correctly on login process
Also reverted changes on packet 0BF 018 accidentally sent some commits ago
Revision: f1b5cffa0081d12aecf6600447ccae32cb752925
Author: Coruja coruja@outlook.com
Date: 21/12/2016 06:52:38
Message:
Fixed: Mass Curse spell not working correctly
Revision: 28b644628e76d3b837ecc960f5fb40d9bc920a40
Author: Coruja coruja@outlook.com
Date: 07/02/2017 21:59:56
Message:
Fixed: Char flag 'statf_hovering' (gargoyle fly ability) not clearing when gargoyle chars polymorph into non-gargoyle char ID
Probably more to add in the coming days (I'm not done checking Source's changelog).
Closing, we are splitting ports into separate issues.
| gharchive/issue | 2018-05-05T14:22:20 | 2025-04-01T06:37:34.209540 | {
"authors": [
"cbnolok"
],
"repo": "Sphereserver/Source-experimental",
"url": "https://github.com/Sphereserver/Source-experimental/issues/51",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
599118653 | Devices 'Not Supported'
Hi,
Please tell me this project isn't dead. I use this every day and really love the work!
I've just had to rebuild the computer that runs zwave.me, Homebridge and Z-Way. I installed the latest version (3) and I'm getting some issues.
I added one device to test, a simple socket switch. It added fine in Z-Way. Then I installed the plugin on Homebridge, which also went well. But in "Home" the device is showing up as 'Not supported'. I think it's not being detected currently, but it used to work fine before the rebuild.
Any help would be amazing.
Thanks,
Jamie
So, I had this exact problem, and was able to track it down. I don't know why it worked before for me as well.
For some reason, Zway appends the probe value to the deviceType; so switchBinary becomes switchBinary.switch, and so it doesn't find switchBinary in its big switch (no pun intended) statement that creates the details for the services. I noted it does the exact same thing for switchMultiLevel, but that works because it then maps switchMultilevel.multilevel back to switchMultiLevel, so I just added the switchBinary case to the same map, and it works fine.
I barely understand what I'm doing here, don't know if this is a correct fix for anyone else, and don't know how to deal with npm etc, so I'll leave it to someone else to decide if this is the right answer and submit a pull request (assuming anyone would respond to a PR).
specifically, add a line 89: "switchBinary.switch": "switchBinary",
Here's a patch file with the same idea:
switchBinary.patch.txt
And here's my patched index.js, which on my Raspberry system goes in /usr/local/lib/node_modules/homebridge-zway
Homebridge-zway-index.js.zip
Again, your mileage may vary...
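In other words, the whole fix is one extra entry in the existing deviceType map in index.js. A sketch of the idea (the map variable name below is an assumption; the multilevel entry already exists per the comment above):

// Z-Way appends the probe value to deviceType, so map the combined
// string back to the base type the plugin's switch statement expects.
var deviceTypeMap = {
    "switchMultilevel.multilevel": "switchMultilevel", // existing mapping
    "switchBinary.switch": "switchBinary"              // the added line 89
};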
Great advice !!!
Thank you!
This works for my issue, too. Thanks.
I can't believe it took me a year to find this. So incredibly frustrating... That said, riddle me this: I have a dozen BinarySwitches, all Jasco, some with the same software/firmware versions, others a level up or down. I can discern no pattern as to why some show(ed) in HomeKit correctly and others showed as Not supported. Regardless, making the change above resolved the issue for my remaining switches.
I just had to replace two Jasco switches that died, and replaced them with brand new Leviton Z-Wave switches (both model DZ15S). And, of course, after replacing them, the dreaded "Not Supported" started showing up again. I had added the previous fix to index.js (thanks, @KanG00!) but no dice. So I added some logging and figured out that these new Leviton switches were scanning as "switchBinary.power_switch_binary". That's just terrific. So, I added:
"switchBinary.power_switch_binary": "switchBinary",
in the same place as the other "switchBinary.switch" addition, and it started working.
If I had more time, I'd figure out a way to make this not so brittle.
tl;dr
If you have this problem and don't want to change any code, use homebridge-zway-kevindayton (specifically 0.6.0-alpha1) and tag your device with "Homebridge.Override.probeType:switch"; this, however, makes custom tags not work.
If you want to run my code on HOOBS, install homebridge-zway-kevindayton then open your terminal and enter:
cd ~/.hoobs
npm install git+https://github.com/dkattan/homebridge-zway-kevindayton.git
Details
I just had to fix this after upgrading my Z-Way server from some ancient version to v3.2.2. The fix that @mackworth posted did not work for my Jasco switches, but the fix for the Leviton switches ("power_switch_binary") did work for them.
I'm running homebridge-zway-kevindayton 0.6.0-alpha1, which was released specifically to address this issue, but it no longer does because the Z-Way API changed how it returns device data.
Here's what I know: the Z-Way v3.2.2 API returns the following JSON for Jasco switches:
{
"creationTime": 1643564474,
"creatorId": 1,
"customIcons": {},
"deviceType": 'switchBinary',
"firmware": '5.22',
"h": -1928968201,
"hasHistory": false,
"id": 'ZWayVDev_zway_105-0-37',
"location": 0,
"locationName": 'globalRoom',
"manufacturer": 'Jasco Products',
"metrics": {
"icon": 'switch',
"isFailed": false,
"title": 'Living Room Overhead Light',
"level": 'off'
},
"nodeId": 105,
"order": { rooms": 0, elements": 0, dashboard": 0
},
"permanently_hidden": false,
"probeType": 'power_switch_binary',
"product": '',
"tags": [],
"technology": 'Z-Wave',
"visibility": true,
"updateTime": 1643630461
}
In short, it appears that initially Z-Way did not return probeType for these switches. Then at some point it started returning it with the value "switch", which is why @mackworth's fix worked, and why @kevindayton's fix worked. (They are both doing the same thing in two different ways.)
Then at some later point Z-Way began returning probeType: "power_switch_binary", and that is the state of affairs today in v3.2.2.
If you peruse the release history, there are multiple mentions of probeType:
https://z-wave.me/z-way/version-history/
Based on the date this issue was opened, I think the breaking changes occurred either in 18.07.2019 v3.0.0 or 03.04.2020 v3.0.5
Anyway, I incorporated changes for GE Fan Controllers as well as a fix for this issue in my branch, for which I'll submit a PR to homebridge-zway-kevindayton since this project appears to be abandoned.
| gharchive/issue | 2020-04-13T20:32:01 | 2025-04-01T06:37:34.236566 | {
"authors": [
"KanG00",
"dkattan",
"iTommix",
"jabeard3",
"jamoir",
"kiwidyne",
"mackworth"
],
"repo": "SphtKr/homebridge-zway",
"url": "https://github.com/SphtKr/homebridge-zway/issues/138",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
No suitable class for Window Sensor
See the DEBUG log file attached. Obviously, there is no suitable class for the sensor:
logfile.txt
Hold on a bit, I'm going to try to push 0.5.0 out the door--and I think you may still be on 0.4.0 and not on the pre-release channel.
?????? The package I installed is 0.5.0..... ????
Oh...sorry, this is the Fibaro device and you're also on the conversation for #69. (I thought your log output looked like an old version but now I see it's not.)
Check the update over on the other issue...
| gharchive/issue | 2017-01-03T09:25:02 | 2025-04-01T06:37:34.242476 | {
"authors": [
"SphtKr",
"WolfgangDomroese"
],
"repo": "SphtKr/homebridge-zway",
"url": "https://github.com/SphtKr/homebridge-zway/issues/84",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
19377590 | file uploads over 8k fail when using ModSecurity 2.7.5 and Nginx 1.4.2
Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1+deb7u1 x86_64 GNU/Linux
ModSec 2.7.5 and Nginx 1.4.2
I have an Apache backend and it receives my file uploads and requests if the file is below 8k. I only have the basic modsecurity.conf loaded, without any rules. If I set SecRequestBodyAccess = Off, even those pass through. Successful upload:
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Initialising transaction (txid AcAcAGl3AcAcAcAcAcAcAcAc).
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Transaction context created (dcfg 7f35a9f41980).
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Starting phase REQUEST_HEADERS.
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Second phase starting (dcfg 7f35a9f41980).
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Input filter: Reading request body.
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Multipart: Created temporary file 1 (mode 0600): /var/log/modsecurity_workdir/20130912-151049-AcAcAGl3AcAcAcAcAcAcAcAc-file-vIn5DC
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][5] Adding request argument (BODY): name "submit", value "Submit"
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Request body no files length: 150
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Input filter: Completed receiving request body (length 4719).
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Starting phase REQUEST_BODY.
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Hook insert_filter: Adding input forwarding filter (r 7f35a9d950a0).
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Hook insert_filter: Adding output filter (r 7f35a9d950a0).
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Input filter: Forwarding input: mode=0, block=0, nbytes=-1 (f 7f35a9d962b0, r 7f35a9d950a0).
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Input filter: Forwarded 4719 bytes.
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Input filter: Sent EOS.
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Input filter: Input forwarding complete.
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Starting phase RESPONSE_HEADERS.
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Output filter: Not buffering response body for unconfigured MIME type "text/html".
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Output filter: Completed receiving response body (non-buffering).
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Starting phase RESPONSE_BODY.
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Output filter: Output forwarding complete.
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Initialising logging.
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Starting phase LOGGING.
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Recording persistent data took 0 microseconds.
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Audit log: Ignoring a non-relevant request.
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Multipart: Cleanup started (remove files 1).
[12/Sep/2013:15:10:49 +0300] [/sid#7f35a9f410a0][rid#7f35a9d950a0][/upload_file.php][4] Multipart: Deleted file (part) "/var/log/modsecurity_workdir/20130912-151049-AcAcAGl3AcAcAcAcAcAcAcAc-file-vIn5DC"
Failed upload:
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Initialising transaction (txid AcAcATAcccAcAcRcvYAIpcAc).
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Transaction context created (dcfg 7f35a9f41980).
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Starting phase REQUEST_HEADERS.
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Second phase starting (dcfg 7f35a9f41980).
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Input filter: Reading request body.
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Multipart: Created temporary file 1 (mode 0600): /var/log/modsecurity_workdir/20130912-151248-AcAcATAcccAcAcRcvYAIpcAc-file-qmZcxo
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][5] Adding request argument (BODY): name "submit", value "Submit"
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Request body no files length: 149
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Input filter: Completed receiving request body (length 8893).
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Starting phase REQUEST_BODY.
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Hook insert_filter: Adding input forwarding filter (r 7f35a9d8d0a0).
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Hook insert_filter: Adding output filter (r 7f35a9d8d0a0).
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Input filter: Forwarding input: mode=0, block=0, nbytes=-1 (f 7f35a9d8e2b0, r 7f35a9d8d0a0).
[12/Sep/2013:15:12:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Input filter: Forwarded 8192 bytes.
[12/Sep/2013:15:13:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Initialising logging.
[12/Sep/2013:15:13:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Starting phase LOGGING.
[12/Sep/2013:15:13:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Recording persistent data took 0 microseconds.
[12/Sep/2013:15:13:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Audit log: Ignoring a non-relevant request.
[12/Sep/2013:15:13:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Multipart: Cleanup started (remove files 1).
[12/Sep/2013:15:13:48 +0300] [/sid#7f35a9f410a0][rid#7f35a9d8d0a0][/upload_file.php][4] Multipart: Deleted file (part) "/var/log/modsecurity_workdir/20130912-151248-AcAcATAcccAcAcRcvYAIpcAc-file-qmZcxo"
I can tentatively confirm that using nginx_refactoring I was able to fix this particular problem.
I, too, can confirm I'm running into this issue with nginx 1.6.2, where uploads over 8k fail with SecRequestBodyAccess On.
2015/02/02 11:11:00 [notice] 24627#0: ModSecurity for nginx (STABLE)/2.8.0 (http://www.modsecurity.org/) configured.
2015/02/02 11:11:00 [notice] 24627#0: ModSecurity: APR compiled version="1.3.9"; loaded version="1.3.9"
2015/02/02 11:11:00 [notice] 24627#0: ModSecurity: PCRE compiled version="7.8 "; loaded version="7.8 2008-09-05"
2015/02/02 11:11:00 [notice] 24627#0: ModSecurity: LIBXML compiled version="2.7.6"
I will have to test using the nginx_refactoring branch when I have time. For now, I have SecStatusEngine set to Off as file uploads are necessary.
Hi guys, a few minutes ago I merged #904 into the nginx_refactoring branch. It should fix this issue; please confirm that the issue is fixed.
I just built the standalone ModSecurity module from the nginx_refactoring branch and compiled nginx 1.9.1 with it:
nginx version: nginx/1.9.1
built by gcc 4.8.2 20140120 (Red Hat 4.8.2-16) (GCC)
built with OpenSSL 1.0.1k-fips 8 Jan 2015
TLS SNI support enabled
configure arguments: --add-module=../ModSecurity/nginx/modsecurity/ --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-http_spdy_module --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'
relevant portion of /etc/nginx/modsecurity.conf (~line 174)
SecDebugLog /var/log/modsecurity-debug.log
SecDebugLogLevel 6
Uploads >= 1MB are failing for me. nginx is forwarding these requests to an upstream server which handles the actual upload, FWIW.
I enabled the debug log and tried 2 uploads: one file < 1MB and another file equal to 1MB.
Please note I removed server information and the actual URIs in the following log snippets. Log for successful file upload:
[21/Jul/2015:16:47:34 +0000] Input filter: Forwarded 2063 bytes.
[21/Jul/2015:16:47:34 +0000] Input filter: Sent EOS.
[21/Jul/2015:16:47:34 +0000] Input filter: Input forwarding complete.
[21/Jul/2015:16:47:34 +0000] Starting phase RESPONSE_HEADERS.
[21/Jul/2015:16:47:34 +0000] Output filter: Not buffering response body for unconfigured MIME type "application/json".
[21/Jul/2015:16:47:34 +0000] Output filter: Completed receiving response body (non-buffering).
[21/Jul/2015:16:47:34 +0000] Starting phase RESPONSE_BODY.
[21/Jul/2015:16:47:34 +0000] Output filter: Output forwarding complete.
[21/Jul/2015:16:47:34 +0000] Initialising logging.
Log for failing file upload:
[21/Jul/2015:17:01:02 +0000] Input filter: Forwarded 8192 bytes.
[21/Jul/2015:17:01:12 +0000] Initialising logging.
[21/Jul/2015:17:01:12 +0000] Starting phase LOGGING.
Disabling ModSecurity completely causes both file upload attempts to work.
Let me know if you want additional information and/or help testing.
I think it still exists in ModSecurity 2.9.1 for nginx; I am using nginx 1.8.1 now.
The problem is that the function modsecurity_request_body_retrieve is used incorrectly in the input_filter function: if modsecurity_request_body_retrieve returns 1, it means there are more chunks left, so it should be called again until it no longer returns 1.
It works fine now after I changed that.
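A rough C sketch of that loop (the retrieve function and its signature come from ModSecurity's apache2_io.c; the forwarding helper is a placeholder, so this is an illustration rather than the actual nginx port code):

#include "modsecurity.h"  /* for modsec_rec and msc_data_chunk */

/* Placeholder for the nginx copy-out logic (assumed). */
static void forward_chunk_to_nginx(msc_data_chunk *chunk);

static void forward_all_chunks(modsec_rec *msr)
{
    msc_data_chunk *chunk = NULL;
    char *error_msg = NULL;
    int rc;

    /* rc == 1 means more chunks are pending, so keep draining instead
     * of calling the retrieve function once and dropping everything
     * past the first 8k buffer. */
    do {
        rc = modsecurity_request_body_retrieve(msr, &chunk, -1, &error_msg);
        if (chunk != NULL) {
            forward_chunk_to_nginx(chunk);
            chunk = NULL;
        }
    } while (rc == 1);
}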
how soon can you commit a fix ;)
Hi @mwang911,
Are you sure that you are inspecting the full request body after that modification, not only the first chunk?
I would like to suggest you guys to use the ModSecurity-nginx connector, available at: https://github.com/SpiderLabs/ModSecurity-nginx
Further information on libmodsecurity is available here: https://github.com/SpiderLabs/ModSecurity/tree/v3/master
Hi zimmerle,
I just tested the file upload and it works well now. It can retrieve the entire request body for nginx, not only the 8k maximum. It is advice I'd like you to consider, because I didn't run full tests otherwise. As far as I know, it is related to the retrieval of the request body: the result is that nginx still thinks the request body has not been read.
@wellumies
Hi, the nginx_refactoring branch has done that in the right way. Look at the function input_filter in https://github.com/SpiderLabs/ModSecurity/blob/nginx_refactoring/apache2/apache2_io.c .
You can compare this function with the .tar.gz file for nginx in https://www.modsecurity.org/download.html.
No longer a concern in libModSecurity. Marking it as won't fix for 2.x.
Further information about libModSecurity available here:
https://github.com/SpiderLabs/ModSecurity/tree/v3/master
| gharchive/issue | 2013-09-12T12:14:52 | 2025-04-01T06:37:34.285021 | {
"authors": [
"applematt",
"efx",
"mwang911",
"victorhora",
"wellumies",
"zakarth",
"zimmerle"
],
"repo": "SpiderLabs/ModSecurity",
"url": "https://github.com/SpiderLabs/ModSecurity/issues/142",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
416191024 | Remove enrichment options
This PR finalizes the simplification of enrichments, removing the enrichment_for and by options in favor of using multiple functions per enrichable message type (introduced in the related PR in core-java).
Removed options and corresponding classes
enrichment_for and by options and support classes.
EnrichmentMessage is removed. Now any message can serve as an enrichment. It could be entity state, for example, or standard message type.
EnrichmentType and related query methods were removed.
io.spine.type.enrichment package is removed.
Support for setting enrichment message interface is removed.
Other changes
Removed previously deprecated Logging.supplyFor().
MessageClass got a method for querying super interfaces extending Message.
Classes implementing MessageContext no longer need to have a name ending with Context.
Elements of TaskName enumeration were renamed to their Gradle task counterparts.
Gradle-related testing utilities moved under io.spine.tools.gradle.testing package to avoid split-package problem with the main plugin JAR.
Introduced TaskSubject for testing Gradle tasks.
@armiol, @yuri-sergiichuk, PTAL. Most of the changes are removal.
This PR is the basis for this one from core-java.
| gharchive/pull-request | 2019-03-01T16:54:05 | 2025-04-01T06:37:34.359169 | {
"authors": [
"alexander-yevsyukov"
],
"repo": "SpineEventEngine/base",
"url": "https://github.com/SpineEventEngine/base/pull/349",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1423817473 | Get rid off dependency constants in version.gradle.kts in favor of default values passed to Spine object
We need to employ Kotlin feature of default parameter values instead of using constants defined under version.gradle.kts.
The current arrangement for handling those constants is cumbersome. It works, but it requires too much code, and it's prone to errors in usage.
It's time to make it simpler and easier to use.
@armiol, @yevhenii-nadtochii, FYI.
Closing as outdated.
| gharchive/issue | 2022-10-26T10:46:32 | 2025-04-01T06:37:34.360980 | {
"authors": [
"alexander-yevsyukov"
],
"repo": "SpineEventEngine/config",
"url": "https://github.com/SpineEventEngine/config/issues/418",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
111807190 | Server won't start on v 1521/711
The server wouldn't start even after I had removed all plugins/mods
http://pastebin.com/embed_js.php?i=Ns1BR5FN
This was fixed just recently - try updating to the most recent version of Sponge
Unfortunately, the latest version cannot be downloaded at the moment
https://forums.spongepowered.org/t/latest-sponge-download-links/9588
| gharchive/issue | 2015-10-16T10:50:58 | 2025-04-01T06:37:34.365515 | {
"authors": [
"Billabonga",
"ZephireNZ"
],
"repo": "SpongePowered/Sponge",
"url": "https://github.com/SpongePowered/Sponge/issues/372",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
136956788 | Map scaling
When you scale the world in or out, the camera perspective gets messed up: it moves to the left & down. The more you scale, the more left & down it goes.
This is because of the way the algorithm is written to make sure nothing outside the map is shown: when the character comes to a border, the camera stops following him so that it never shows the nothingness.
An easy fix is to have an alternate option for camera following: no algorithms during the camera positioning calculation.
The nothingness outside the map will be visible but scaling won't be a problem anymore.
This way, one won't require new art assets, but simply will be able to scale the world by 2 while making sure texture filtering is set to nearest.
The map wasn't meant to be scaled so I see how this would mess up the auto follow logic. I will see if there is a work around. Also I suspect this will mess up the index for point logic too as the tileWidth/tileHeight will not be accurate after scaling. In the short term if you are targeting iOS 9 or higher I believe they added an SKCameraNode which might be worth looking into. I will close this as it isn't an issue but will look into this as a future feature.
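For reference, a minimal sketch of the SKCameraNode approach (Swift, iOS 9+; the "player" node name and the scene setup are assumptions):

import SpriteKit

class GameScene: SKScene {
    private let cam = SKCameraNode()

    override func didMove(to view: SKView) {
        camera = cam   // the scene now renders through this camera
        addChild(cam)
    }

    override func update(_ currentTime: TimeInterval) {
        // Plain follow with no map-edge clamping, so scaling the map no
        // longer skews the follow math; the void may show at the edges.
        if let player = childNode(withName: "player") {
            cam.position = player.position
        }
    }
}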
| gharchive/issue | 2016-02-27T19:23:36 | 2025-04-01T06:37:34.398902 | {
"authors": [
"NI92",
"Urthstripe29"
],
"repo": "SpriteKitAlliance/SKAToolKit",
"url": "https://github.com/SpriteKitAlliance/SKAToolKit/issues/24",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1838155219 | [Add] Change the alert.
Details
I want to change the UI of the alert that is shown on a failed login.
Screenshots
Current:
Expected:
Type of Contribution
[ ] Update to an existing Animated Button
[ ] Adding a new Button
[ ] Resolving a bug
[X] Proposal to the Repository
[ ] Changes related to documentation or README.md
[ ] Other Changes
Checklist
[X] I have checked the existing issues
[X] I have read the Contributing Guidelines
[X] I am willing to work on this issue
[X] I am a GSSoC'23 contributor
@soubhik-111 We are using a different library.
Considering our UI, we decided to use that library's toast.
Closing this as not required
| gharchive/issue | 2023-08-06T10:49:22 | 2025-04-01T06:37:34.404194 | {
"authors": [
"Spyware007",
"soubhik-111"
],
"repo": "Spyware007/Animating-Buttons",
"url": "https://github.com/Spyware007/Animating-Buttons/issues/1969",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1723030569 | [FEATURE]: Readme Files Update
This is a(n):
[ ] Update to an existing Animated Button
[ ] Any Error
[ ] Proposal to the Repository
[x] Documentation / README.md changes
Details:
Hey, @Spyware007
I can help with making the readme file for this project. Please assign it to me.
PS: If you would like an updated template as well please raise an issue for that and assign it to me :)
@tuhinaww Assigned to you!
| gharchive/issue | 2023-05-24T01:30:47 | 2025-04-01T06:37:34.407316 | {
"authors": [
"Spyware007",
"tuhinaww"
],
"repo": "Spyware007/Animating-Buttons",
"url": "https://github.com/Spyware007/Animating-Buttons/issues/504",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1759590052 | Add Translation section for README
Can we add a translation section within the README file ?
This section will provide translated versions of the content in different languages.
It would be a great idea for better accessibility, to reach as many people as possible.
If yes, kindly assign it to me under GSSoC'23
Hello geoffreylgv, thanks for opening a issue, your contribution is valuable to us. The maintainers will review this issue and provide feedback as soon as possible.
Closing as not required
| gharchive/issue | 2023-06-15T21:46:02 | 2025-04-01T06:37:34.414457 | {
"authors": [
"SrijanShovit",
"geoffreylgv",
"shreya024"
],
"repo": "SrijanShovit/carbonops-v2",
"url": "https://github.com/SrijanShovit/carbonops-v2/issues/110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1660505796 | RuntimeError: expected scalar type BFloat16 but found Float
Below is the log I encountered when running "python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt <path/to/768model.ckpt/> --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768":
Running DDIM Sampling with 50 timesteps
DDIM Sampler: 0%| | 0/50 [00:00<?, ?it/s]
data: 0%| | 0/1 [00:00<?, ?it/s]
Sampling: 0%| | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
File "scripts/txt2img.py", line 388, in
main(opt)
File "scripts/txt2img.py", line 347, in main
samples, _ = sampler.sample(S=opt.steps,
File "/root/miniconda3/envs/ldmsd/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/mnt/disk1/swh/git_sd/stablediffusion/ldm/models/diffusion/ddim.py", line 104, in sample
samples, intermediates = self.ddim_sampling(conditioning, size,
File "/root/miniconda3/envs/ldmsd/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/mnt/disk1/swh/git_sd/stablediffusion/ldm/models/diffusion/ddim.py", line 164, in ddim_sampling
outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
File "/root/miniconda3/envs/ldmsd/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/mnt/disk1/swh/git_sd/stablediffusion/ldm/models/diffusion/ddim.py", line 212, in p_sample_ddim
model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
File "/mnt/disk1/swh/git_sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/root/miniconda3/envs/ldmsd/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/disk1/swh/git_sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "/root/miniconda3/envs/ldmsd/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/disk1/swh/git_sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "/root/miniconda3/envs/ldmsd/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/disk1/swh/git_sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
x = layer(x, context)
File "/root/miniconda3/envs/ldmsd/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/disk1/swh/git_sd/stablediffusion/ldm/modules/attention.py", line 327, in forward
x = self.norm(x)
File "/root/miniconda3/envs/ldmsd/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/root/miniconda3/envs/ldmsd/lib/python3.8/site-packages/torch/nn/modules/normalization.py", line 272, in forward
return F.group_norm(
File "/root/miniconda3/envs/ldmsd/lib/python3.8/site-packages/torch/nn/functional.py", line 2516, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: expected scalar type BFloat16 but found Float
Please, anyone has met the same and had a solution?
have you solved the issue?
Yes I have. It is due to the incompatibility of PyTorch with CUDA.
I am facing the same issue myself. Is it incompatible with CUDA et al, or a version of it? Because I have a hard time imagining running it without using the GPU. How did you fix it?
@picard314 I have run into this issue, but I was able to make adjustments so that the code runs, but it's using my CPU and not my NVIDIA GPU. I'm running CUDA 11.7 as that is what seemed to be the correct version. What CUDA version are you using, what all did you do to resolve this issue?
@wobblytables mine is cuda 11.4
Yes I have. It is due to the incompatibility of PyTorch with CUDA
I had the same problem and solved it by setting up the gpu to run
I met with the same problem. Do you mean using methods like setting CUDA_VISIBLE_DEVICES to set up the GPU? Thank you very much.
@wobblytables mine is cuda 11.4
If for cuda 11.7, I think installation needs to be
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
if you don't mind, can I know your GPU name and which version of pytorch you used?
I have a GeForce 3060, and I used CUDA 11.4 with PyTorch 1.12.1 but I met that error,
so I changed the CUDA version to 11.6 but still have the same problem...
python main.py --base=$cfg -t --gpus -1 --ckpt /mnt/cache/maqiang/pretrain/sd/512-base-ema.ckpt
Adding --device cuda worked for me.
It looks like a change set the CPU to be used by default https://github.com/Stability-AI/stablediffusion/pull/147/files#diff-048b7bba4049f97b2038502af5686b6c5f53a882ff02771fcb0d733d22a0ab6cR180-R186, I think it was messing up data types.
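For example, the command from the top of this issue would become (only the final flag is new):

python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt <path/to/768model.ckpt/> --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768 --device cuda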
Nice solution, it worked for me too.
How do you fix this error when you actually want to run it on the CPU? I can't find a way to do it.
You have to use a different config: in the path "configs/stable-diffusion" there is a folder called "intel" which can be used for running on the CPU; for example, you can use the -fp32 configs.
Is this going to get fixed?
I read the documentation, installed the requirements, and ran the example. It crashed with this error message.
That seems like a pretty critical bug, but it hasn't even been assigned to anyone yet after 9 months.
As a hint, here is some description of what might help: use "--precision full" (taken from here: https://huggingface.co/CompVis/stable-diffusion-v1-4/discussions/42), and in addition there are special configs for CPU processing in the "intel" folder of this repo. Currently I'm using the "-fp32" config in combination with the precision flag and it at least generates some images.
I'm not sure what the root-cause really is as I'm no expert in this field, but this https://github.com/Stability-AI/stablediffusion/blob/main/ldm/modules/attention.py#L175 looks suspicious...
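Putting those hints together, a CPU invocation might look like the following (the exact fp32 config filename under configs/stable-diffusion/intel/ is an assumption; check that folder for the actual name):

python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt <path/to/768model.ckpt/> --config configs/stable-diffusion/intel/v2-inference-v-fp32.yaml --H 768 --W 768 --precision full --device cpu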
I am having the same issue
!pip install torch==2.0.1 transformers datasets peft accelerate trl bitsandbytes optimum
when I try to load the X_IA3 adapters
| gharchive/issue | 2023-04-10T09:24:30 | 2025-04-01T06:37:34.438065 | {
"authors": [
"320010ly",
"Mateusmsouza",
"SofiaBianchi123",
"adirz",
"asdfjkluiop",
"esiefker",
"hotelbread",
"lijain",
"order-a-lemonade",
"picard314",
"questor",
"simonnxren",
"wobblytables"
],
"repo": "Stability-AI/stablediffusion",
"url": "https://github.com/Stability-AI/stablediffusion/issues/236",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
170046595 | Unable to edit rule
Just installed version 1.5 using this: https://docs.stackstorm.com/install/rhel7.html
Created first Rule from UI. Now I'm unable to Edit it. When I press "Edit" button I get redirect to #/history
Problem is gone in 1.6. I guess some UI tests required for this project :)
| gharchive/issue | 2016-08-08T23:07:22 | 2025-04-01T06:37:34.457547 | {
"authors": [
"krainevsky"
],
"repo": "StackStorm/st2",
"url": "https://github.com/StackStorm/st2/issues/2857",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
327871144 | Regression of #3820 - Jinja is not rendered in action default values
ISSUE TYPE
Bug Report
STACKSTORM VERSION
# st2 --version
st2 2.7.2, on Python 2.7
OS / ENVIRONMENT / INSTALL METHOD
Install method = puppet-st2
OS = Red Hat Enterprise Linux Server release 7.5 (Maipo)
SUMMARY
Array and object parameters are not being rendered in action metadata parameter defaults. Integer parameters work fine.
STEPS TO REPRODUCE
1) create a new action metadata file
/opt/stackstorm/packs/default/actions/render_test.yaml
---
description: Run a local linux command
enabled: true
runner_type: mistral-v2
entry_point: workflows/render_test.yaml
name: render_test
pack: default
parameters:
cmd:
required: true
type: string
timeout:
type: integer
default: 60
kv_array:
type: array
default: "{{ st2kv.system.kv_array | from_json_string }}"
kv_object:
type: object
default: "{{ st2kv.system.kv_object | from_json_string }}"
2) create a new workflow
/opt/stackstorm/packs/default/actions/workflows/render_test.yaml
version: '2.0'
default.render_test:
description: demo rendering failures
type: direct
input:
- cmd
- timeout
- kv_array
- kv_object
tasks:
task1:
action: core.local
input:
cmd: "{{ _.cmd }}"
3) assign values in the datastore
$ st2 key set kv_array '["a", "b", "c"]'
$ st2 key set kv_object '{"a": "value", "b": "value2", "c": "value3"}'
4) register and run the action
$ st2 action create /opt/stackstorm/packs/default/actions/render_test.yaml
$ st2 run default.render_test cmd="ls"
EXPECTED RESULTS
Action to execute successfully with parameters:
cmd: ls
timeout: 60
kv_array:
- a
- b
- c
kv_object:
a: value
b: value2
c: value3
ACTUAL RESULTS
$ st2 run default.render_test cmd="ls"
ERROR: 400 Client Error: Bad Request
MESSAGE: '{{ st2kv.system.kv_array | from_json_string }}' is not of type 'array' for url: http://127.0.0.1:9101/executions
Error in /var/log/st2/st2api.log
2018-05-30 15:39:22,101 139814525031760 ERROR actionexecutions [-] Unable to execute action. Parameter validation failed.
Traceback (most recent call last):
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2api/controllers/v1/actionexecutions.py", line 127, in _handle_schedule_execution
pack=action_db.pack)
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2api/controllers/v1/actionexecutions.py", line 186, in _schedule_execution
liveaction_db, actionexecution_db = action_service.create_request(liveaction_db)
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/services/action.py", line 89, in create_request
allow_default_none=True)
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/util/schema/__init__.py", line 294, in validate
jsonschema.validate(instance=instance, schema=schema, cls=cls, *args, **kwargs)
File "/opt/stackstorm/st2/lib/python2.7/site-packages/jsonschema/validators.py", line 541, in validate
cls(schema, *args, **kwargs).validate(instance)
File "/opt/stackstorm/st2/lib/python2.7/site-packages/jsonschema/validators.py", line 130, in validate
raise error
ValidationError: u'{{ st2kv.system.kv_array | from_json_string }}' is not of type u'array'
Failed validating u'type' in schema['properties'][u'kv_array']:
{u'default': u'{{ st2kv.system.kv_array | from_json_string }}',
u'type': u'array'}
On instance[u'kv_array']:
u'{{ st2kv.system.kv_array | from_json_string }}'
2018-05-30 15:39:22,103 139814525031760 ERROR router [-] Failed to call controller function "post" for operation "st2api.controllers.v1.actionexecutions:action_executions_controller.post": '{{ st2kv.system.kv_array | from_json_string }}' is not of type 'array'
Traceback (most recent call last):
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/router.py", line 470, in __call__
resp = func(**kw)
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2api/controllers/v1/actionexecutions.py", line 572, in post
show_secrets=show_secrets)
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2api/controllers/v1/actionexecutions.py", line 133, in _handle_schedule_execution
abort(http_client.BAD_REQUEST, re.sub("u'([^']*)'", r"'\1'", e.message))
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/router.py", line 55, in abort
raise exc.status_map[status_code](message)
HTTPBadRequest: '{{ st2kv.system.kv_array | from_json_string }}' is not of type 'array'
Just to confirm - did this behavior ever work in the past? If so, which version?
I believe it worked in the 2.6.x series. Just noticed it was broken in 2.7.2 (might have been broken sooner)
That's interesting since I don't remember us touching any of that code recently. Only somewhat related change was #4052
In any case, it looks like we should start with a test case. Another question also is why we don't have one for functionality we apparently support :)
I just tried to replicate the problem with the code you provided in v2.7.2, v2.7.1, v2.6.0 and v2.5.1. I get the same error message with every version (aka that functionality doesn't appear to work / be supported).
Also looking at the PRs, that functionality was supposedly added in https://github.com/StackStorm/st2/pull/3892. Looking at the tests there, we only have tests for the int scenario, so likely there are more edge cases which are not handled correctly.
@Kami good to know; apparently I never fully tested this on my end.
Did a little digging this morning and found where it is failing specifically: https://github.com/StackStorm/st2/blob/master/st2common/st2common/util/param.py#L189
This is throwing the following exception:
Traceback (most recent call last):
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2api/controllers/v1/actionexecutions.py", line 127, in _handle_schedule_execution
pack=action_db.pack)
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2api/controllers/v1/actionexecutions.py", line 182, in _schedule_execution
liveaction_db.context)
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/util/param.py", line 306, in render_live_params
context = _resolve_dependencies(G)
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/util/param.py", line 217, in _resolve_dependencies
context[name] = _render(node, context)
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/util/param.py", line 196, in _render
result = ENV.from_string(str(node['template'])).render(render_context)
File "/opt/stackstorm/st2/lib/python2.7/site-packages/jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File "/opt/stackstorm/st2/lib/python2.7/site-packages/jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File "<template>", line 1, in top-level template code
File "/opt/stackstorm/st2/lib/python2.7/site-packages/st2common/jinja/filters/data.py", line 29, in from_json_string
return json.loads(value)
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
I turned on debug mode and here is the log for that node:
2018-05-31 07:20:58,097 139913045820592 INFO param [-] Rendering node: {'template': u'{{ st2kv.system.kv_array | from_json_string }}'} with context: {u'timeout': 60, 'st2kv': {'system': <st2common.services.keyvalues.KeyValueLookup object at 0x7f400b752a50>}}
@nmaludy Alright, after some more digging it turns out it's not an actual issue in the code, but a matter of calling the filter on a value which has already been de-serialized (aka the filter is being called twice: once internally, based on the action parameter definition, and again inside the parameter Jinja string, which is not needed).
You don't need to call the from_json_string filter on the template value. This is done automatically based on the parameter type.
The following works fine for me:
---
description: Run a local linux command
enabled: true
runner_type: mistral-v2
entry_point: workflows/render_test.yaml
name: render_test
pack: default
parameters:
cmd:
required: true
type: string
timeout:
type: integer
default: 60
kv_array:
type: array
default: "{{ st2kv.system.kv_array }}"
kv_object:
type: object
default: "{{ st2kv.system.kv_object }}"
I will close that as not an issue.
Having said that, I do agree that the current exception is very unfriendly. At the very least, the exception should include more data which would give the user some clue about what is going on (I will look into that change).
Awesome! I really swore I tested it...
I'll make a PR for st2docs and try to make another PR to add tests for objects and arrays
| gharchive/issue | 2018-05-30T19:40:13 | 2025-04-01T06:37:34.470177 | {
"authors": [
"Kami",
"nmaludy"
],
"repo": "StackStorm/st2",
"url": "https://github.com/StackStorm/st2/issues/4153",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
492934573 | st2 fails to store a Key-Value in the datastore if there is a "/" in the key name.
SUMMARY
st2 fails to store a key-value pair in the datastore if there is a "/" in the key name. I am sorry if I missed anything in the docs that says you cannot store a key with "/" in it; please feel free to close this issue if that is the case. Thank you.
STACKSTORM VERSION
st2 3.1.0, on Python 2.7.5
OS, environment, install method:
CentOS/Docker, custom install.
Steps to reproduce the problem
bash-4.2$ st2 key set "foo / bar" "foo_bar"
ERROR: 404 Client Error: Not Found
MESSAGE: The resource could not be found. for url: http://127.0.0.1:9101/v1/keys/foo%20/%20bar
Expected Results
st2 should have accepted the key-value pair and stored it.
Actual Results
st2 has failed to store the key-value pair.
Hi @armab I would like to take this up and send a fix.
I believe this issue probably lies in st2client / CLI and not the API itself (likely / character is not correctly URL encoded).
I believe we even have some st2api API level tests for keys with / in the name (and if we don't, we should add some).
Hi @Kami
I tried via the st2 command and also from the actions I have implemented; both self.action_service.set_value and self.action_service.get_value are failing with the same error. So I assume it should be an API-level issue.
It could be, although action service also utilizes st2client code which talks to the API :)
If there is not an existing test already, I would start with an API level test for that functionality.
@Kami @armab I see no tests which store a key-value pair with a "/" in the key name.
Also, I am not able to run the tests in the test_kvps.py file using the following command from the root directory, as mentioned in this doc, on an Ubuntu machine:
nosetests --nocapture ./st2api/tests/unit/controllers/v1/test_kvps.py
It says ERROR: Failure: ImportError (No module named st2tests.api)
I tried exporting/setting PYTHONPATH to the st2 directory but no luck. Could you please help me get the tests running? I tried with both Python 2 and Python 3.
Thanks.
For developing StackStorm platform itself, I would recommend you to use this Vagrant image - https://github.com/StackStorm/st2vagrantdev
In short, you need to run make requirements which will create virtualenv, install all the dependencies and set PYTHONPATH correctly.
@Kami Thanks, this is great.
I am able to set up the Vagrant environment and I can run the tests now. Also, I have added a test which stores a KV pair with a "/" in the key name, and it fails. I will dig through the code and see if we can encode "/" somehow.
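A rough sketch of what that added test could look like (modelled on the webtest-style tests in st2api/tests/unit/controllers/v1/test_kvps.py; the self.app helper and the response shape are assumptions):

def test_put_get_key_name_with_slash(self):
    # A '/' in the key name must be URL-encoded in the request path.
    kvp = {'name': 'foo/bar', 'value': 'ponies'}
    put_resp = self.app.put_json('/v1/keys/foo%2Fbar', kvp)
    self.assertEqual(put_resp.status_int, 200)

    get_resp = self.app.get('/v1/keys/foo%2Fbar')
    self.assertEqual(get_resp.json['value'], 'ponies')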
@armab
@Kami @armab
I modified the serialize method here to quote the key (from six.moves.urllib_parse import quote) so that a key like a/b is converted into a%2Fb. However, st2 key set "a/b" "some value" is failing with a "resource could not be found" error.
ERROR: 404 Client Error: Not Found
MESSAGE: The resource could not be found. for url: http://127.0.0.1:9101/v1/keys/a%2Fb
I think, I will have to make some modifications on the API as well to accept this kind of keys?
Thank you.
Hi @Kami @armab Do you think we should be able to encode and store keys with /? Or should I implement something which will throw an error if the key has / in it? Thank you.
Yes, I think it's absolutely reasonable to be able to use / in a key name.
Quick example: https://www.consul.io/docs/commands/kv/put.html#examples
If you could make it work and support your enhancement with tests as well, - that would be great addition :+1:
Thanks for the reply @armab
As I mention in the previous comment, I can encode the key on the st2client side and I can see the PUT call is also on the encoded key like below:
(virtualenv) vagrant@ubuntu-xenial:~/local/st2$ st2 key set foo/bar foo
# -------- begin 140367719226128 request ----------
curl -X PUT -H 'Connection: keep-alive' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-requests/2.23.0' -H 'content-type: application/json' -H 'Content-Length: 70' --data-binary '{"scope": "system", "name": "foo%2Fbar", "value": "foo", "user": null}' http://127.0.0.1:9101/v1/keys/foo%2Fbar
# -------- begin 140367719226128 response ----------
{
"faultstring": "The resource could not be found."
}
# -------- end 140367719226128 response ------------
ERROR: 404 Client Error: Not Found
MESSAGE: The resource could not be found. for url: http://127.0.0.1:9101/v1/keys/foo%2Fbar
I am trying to find the code which accepts this request so I can make it work for the encoded /, or am I missing something?
Any help in making me understand the API side of things would be appreciated. Thank you.
@Kami
@Kami @armab Where can I find the API-side related code? As you can see in my previous comment, the API isn't considering a key with an encoded / as a new key in the PUT call.
Could you please direct me to some documentation or any other reference/example? Thank you.
Start with the st2 api KeyValue controller in: https://github.com/StackStorm/st2/blob/master/st2api/st2api/controllers/v1/keyvalue.py
For development environment, standards and expectations, check the https://docs.stackstorm.com/development/index.html
@armab @Kami It looks like if I encode a key foo/bar to foo%2Fbar on the client side, the call that gets made looks like this:
curl -X PUT -H 'Connection: keep-alive' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-requests/2.23.0' -H 'content-type: application/json' -H 'Content-Length: 69' --data-binary '{"scope": "system", "name": "foo%2Fbar", "value": "f1", "user": null}' http://127.0.0.1:9101/v1/keys/foo%2Fbar
But on the API side, it fails while trying to match the request's path to a controller here.
For some reason, req.path comes back as foo/bar instead of the encoded foo%2Fbar. As you can see, this happens before entering the code in https://github.com/StackStorm/st2/blob/master/st2api/st2api/controllers/v1/keyvalue.py
So I am guessing we need to make a change somewhere else to persist the encoded key.
I did this little test based on the code in router.py:
>>> import webob.compat
>>> path = "http://127.0.0.1:9101/v1/keys/foo%2Fbar"
>>> webob.compat.url_unquote(path)
'http://127.0.0.1:9101/v1/keys/foo/bar'
So it looks like it gets altered to foo/bar by the url_unquote call.
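For reference, a quick standalone check (plain Python, nothing st2-specific) shows why a single quote() pass is undone by that url_unquote(), while a double pass survives it:

```python
from six.moves.urllib_parse import quote, unquote

key = "foo/bar"
once = quote(key, safe="")    # 'foo%2Fbar'
twice = quote(once, safe="")  # 'foo%252Fbar' ('%' itself becomes '%25')

# The router's url_unquote() effectively performs one unquote pass:
print(unquote(once))   # 'foo/bar'   -> the '/' is back, so path matching breaks
print(unquote(twice))  # 'foo%2Fbar' -> still opaque to the path matcher
```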
Thank you for the input @amanda11.
I just tried double encoding/quoting the key name when it contains a / while setting or getting the KV pair, and it works as expected. If this is OK, I will go ahead and submit a pull request.
@Kami @armab
git diff:
diff --git a/st2client/st2client/commands/keyvalue.py b/st2client/st2client/commands/keyvalue.py
index 8eed47364..d1a2305eb 100644
--- a/st2client/st2client/commands/keyvalue.py
+++ b/st2client/st2client/commands/keyvalue.py
@@ -21,6 +21,7 @@ import logging
from os.path import join as pjoin
import six
+from six.moves.urllib_parse import quote
from st2client.commands import resource
from st2client.commands.noop import NoopCommand
@@ -141,6 +142,8 @@ class KeyValuePairGetCommand(resource.ResourceGetCommand):
@resource.add_auth_token_to_kwargs_from_cli
def run(self, args, **kwargs):
resource_name = getattr(args, self.pk_argument_name, None)
+ if '/' in resource_name:
+ resource_name = quote(quote(resource_name, safe=''))
decrypt = getattr(args, 'decrypt', False)
scope = getattr(args, 'scope', DEFAULT_GET_SCOPE)
kwargs['params'] = {'decrypt': str(decrypt).lower()}
@@ -185,8 +188,14 @@ class KeyValuePairSetCommand(resource.ResourceCommand):
@resource.add_auth_token_to_kwargs_from_cli
def run(self, args, **kwargs):
instance = KeyValuePair()
- instance.id = args.name # TODO: refactor and get rid of id
- instance.name = args.name
+ key_name = args.name
+ # urllib_parse.quote the key name to support keys with '/' in them.
+ # We double quote it here, as it will be unquoted once on the API side.
+ if '/' in key_name:
+ key_name = quote(quote(args.name, safe=''))
+
+ instance.id = key_name # TODO: refactor and get rid of id
+ instance.name = key_name
instance.value = args.value
instance.scope = args.scope
instance.user = args.user
A PR to fix this issue which is still around is very welcome, @RaviTezu :)
| gharchive/issue | 2019-09-12T17:26:19 | 2025-04-01T06:37:34.490544 | {
"authors": [
"Kami",
"RaviTezu",
"amanda11",
"armab",
"winem"
],
"repo": "StackStorm/st2",
"url": "https://github.com/StackStorm/st2/issues/4789",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
92723282 | st2cd KVStore Compatibility >=0.9
This PR attempts to load up the old and new libraries to access the K/V store, allowing compatibility with older running nodes (<0.8 ) and newer nodes (>=0.9)
/cc @DoriftoShoes
Tested and working on st2ops001
+1
| gharchive/pull-request | 2015-07-02T20:02:41 | 2025-04-01T06:37:34.492484 | {
"authors": [
"DoriftoShoes",
"jfryman"
],
"repo": "StackStorm/st2incubator",
"url": "https://github.com/StackStorm/st2incubator/pull/232",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
334899618 | Update dependencies
And pin ThingTalk to the 1.1.0 branch, because the next version
is likely to include dangerous stuff that should go into Cloud Almond
1.1 at least.
Yeah no, dep updates should be handled differently (Greenkeeper maybe?)
| gharchive/pull-request | 2018-06-22T14:29:50 | 2025-04-01T06:37:34.517324 | {
"authors": [
"gcampax"
],
"repo": "Stanford-Mobisocial-IoT-Lab/almond-cloud",
"url": "https://github.com/Stanford-Mobisocial-IoT-Lab/almond-cloud/pull/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
216943924 | Pennant Exercising a DMA Bug
Pennant (without control replication) is exercising a bug in the new DMA system on the master branch. Below is a command line that runs correctly on the 'olddma' branch but results in bad data (a pointer, in this case) being read from a field on the 'master' branch. Notice that the command line uses '-lg:inorder' to ensure that the runtime is doing things in strict program order. Also notice that '-fbounds-checks' means this code takes a while to run, so be patient.
Build regent with: ./install.py --debug --gasnet
LAUNCHER='mpirun -H n0002,n0003 -np 4 --map-by numa:pe=6 -x LD_LIBRARY_PATH -x INCLUDE_PATH -x TERRA_PATH --oversubscribe' ./regent.py examples/pennant_fast.rg examples/pennant.tests/leblanc/leblanc.pnt -ll:cpu 5 -npieces 20 -numpcx 1 -numpcy 20 -par_init 1 -seq_init 0 -fbounds-checks 1 -lg:inorder
When the code fails in the 'master' branch you will see the following error:
Errors reported during runtime.
examples/pennant_fast.rg:122: pointer ptr(zone(), $rz) is out-of-bounds
@magnatelee first reported this bug so ask him to test it when there is a fix.
By commenting out the application assertions I was able to validate the runtime execution with Legion Spy, so I'm now confident that the issue is in the DMA system.
I've also noticed other weird failure modes where it is not just the pointer data that is invalid, suggesting that copies of any field are subject to having their data corrupted.
I'm about to assign the bug back to Sean and Zhihao, but before I do here is a summary of what we learned and why we now know the bug is in the DMA system (both new AND old have the same bug). We've simplified the test case to a single node and it can be reproduced with the following command line:
./regent.py examples/pennant_fast.rg examples/pennant.tests/leblanc/leblanc.pnt -ll:cpu 2 -npieces 2 -numpcx 1 -numpcy 2 -par_init 1 -seq_init 0 -lg:inorder -fbounds-checks 1
We know this is the commit that causes the problem: 390235e. It changes the sizes of the physical instances being used.
The same behavior can be observed with both the master branch and the olddma branch, so it is nothing related to the new DMA system specifically.
The application is passing with -fbounds-checks 1 so there is no corruption of the data by the application itself.
Legion Spy validates this execution as being correct by the Legion runtime and @TWarszawski and I have hand-checked about 75% of the graph ourselves.
The first task to fail is the calc_volumes_full task during the start-up and it fails the 'sv > 0.0' assertion.
The first cell to fail is the cell at location 1768 in the first (and only) span.
The reason it fails is that (deterministically) different data is being read from the fields 'zv.x' and 'zv.y' during the computation of the 'sa' temporary. Specifically we see the following values for the computation of 'sa' in the 'good' and 'bad' cases:
sa good: 0.0025 = 0.5 * cross(<0.3,4.4> - <0.2,4.4>,<0.25,4.45> - <0.2,4.4>)
sa bad: -0.05375 = 0.5 * cross(<0.3,4.4> - <0.2,4.4>,<0.175,3.325> - <0.2,4.4>)
If we examine the event graph for a bad run we see that the data for this field is placed in a concrete instance and never moved. No copies to or from this instance are issued by the runtime before the failure, which is correct. The proper event dependences ensure that all tasks associated with this field are running in the correct order which is consistent with Legion Spy validation.
I traced the bad value back to 'calc_centers_full' where I determined that the difference comes from the computation of 'zx'. On the third iteration for zone 442 the good and the bad differ while still having the same pointers for p1.
Good:
p1_px= <0.2,4.4> p1=497
p1_px= <0.3,4.4> p1=498
p1_px= <0.3,4.5> p1=3
p1_px= <0.2,4.5> p1=2
znump 4
zx 442 is <0.25,4.45> nside=4
Bad:
p1_px=<0.2,4.4> p1=497
p1_px=<0.3,4.4> p1=498
p1_px=<0,0> p1=3
p1_px=<0.2,4.5> p1=2
znump 4
zx 442 is <0.175,3.325> nside=4
The p1.px field is initialized by 'initialize_topology', so I went and printed out the values computed for the 'px' field at location p1=3 for both the good and bad versions:
good: p1=3 = <0.3,4.5> Shared Bottom
bad: p1=3 = <0.3,4.5> Shared Bottom
Good news! They're the same, which means that something is going wrong between the tasks.
If you look at the attached event graph you will see that there is a copy needed between the instances used by 'initialize_topology' and 'calc_centers_full'. Legion correctly issues this copy (Realm copy 19). Legion Spy indicates that for this particular execution both index spaces 10 and 12 are exactly the same, so their intersection is the same (the copy is an intersection of index spaces 10 and 12) so all of the data should be moved.
Reducing top-level index space shapes...
Done
Space Index Space 10 has 3 points
Points: 3 7 6
Space Index Space 12 has 3 points
Points: 3 7 6
However, clearly, p1=3 is not being moved properly, and that is a DMA system bug.
bad.pdf
Note that Legion is doing the intersection computation using Realm primitives so you might want to check this code too:
https://github.com/StanfordLegion/legion/blob/master/runtime/legion/region_tree.cc#L5177-L5188
I've managed to reproduce the error, and am digging into it now.
I'm still sorting through some stuff, but there is only a single copy that occurs before calc_centers_full reads the bad value, and it appears to be copying an index space that is the non-empty intersection of two index spaces into an instance that was created from an empty index space. Realm does not verify that the domain used for a copy is a subset of the domains that exist in the source and destination instances (because it is too expensive), but if there were such a check, it'd be complaining loudly here.
Some snippets from the log:
[0 - b0910000] {2}{region}: subregion 500000000000000a (of 5000000000000007) restricted to [0,10]
[0 - b0910000] {2}{meta}: index space created: id=500000000000000a parent=5000000000000007 (num_elmts=1001)
[0 - b0910000] {2}{region}: subregion 500000000000000c (of 5000000000000007) restricted to [0,10]
[0 - b0910000] {2}{meta}: index space created: id=500000000000000c parent=5000000000000007 (num_elmts=1001)
[0 - b0910000] {2}{region}: subregion 500000000000000d (of 5000000000000007) restricted to [-1,-1]
[0 - b0910000] {2}{meta}: index space created: id=500000000000000d parent=5000000000000007 (num_elmts=1001)
[0 - b0d16000] {2}{inst}: local instance 6000000000000007 created in memory 1e00000000000000 at offset 536775424+72 (redop=0 list_size=-1 parent_inst=0 block_size=4)
[0 - b0d16000] {2}{meta}: instance created: region=500000000000000d memory=1e00000000000000 id=6000000000000007 bytes=72
[0 - b050a000] {3}{index_spaces}: creating intersection: 500000000000000a & 500000000000000c
[0 - b050a000] {2}{region}: subregion 5000000000000026 (of 0) restricted to [0,10]
[0 - b050a000] {2}{meta}: index space created: id=5000000000000026 num_elmts=64
[0 - b050a000] {1}{dma}: copy: 1 distinct src/dst mem pairs, is=5000000000000026
[0 - b050a000] {2}{dma}: dma request 0x7fb3441ecc40 created - is=5000000000000026 before=0 after=8000000002c00008
[0 - b050a000] {2}{dma}: dma request 0x7fb3441ecc40 field: 6000000000000002[0]->6000000000000007[0] size=8 serdez=0
[0 - b050a000] {2}{dma}: dma request 0x7fb3441ecc40 field: 6000000000000002[8]->6000000000000007[8] size=8 serdez=0
@streichler Are you sure you got the right point for calc_centers_full (there are two of them) and the right copy? I think the copy we're interested in here should be from 6000000000000002 to 600000000000000b and not 6000000000000007. If you're running with -lg:inorder all the mapping decisions should be deterministic and therefore the instance creation names too. At least they were for me when I was debugging. Instance 600000000000000b should be an instance of index space 10 which is non-empty.
This was the only copy to have completed before the first calc_centers_full executed. (I modified the test to assert on bad data in there, so the other instance of calc_centers_full never executes either.) The only other copy that has even been requested at that point is dependent on the first calc_centers_full instance's completion, and appears to target the same empty instance.
Reassigning this to @lightsighter . There's definitely something wrong in the instance creation. Here's a reformatted excerpt of legion_spy.py -i on my trimmed down test:
Instance 0x6000000000000000 Region (4,1,1)
Instance 0x6000000000000001 Region (8,2,2)
Instance 0x6000000000000002 Region (12,2,2)
Instance 0x6000000000000003 Region (10,2,2)
Instance 0x6000000000000004 Region (14,3,3)
Instance 0x6000000000000005 Region (5,1,1)
Instance 0x6000000000000006 Region (9,2,2)
Instance 0x6000000000000007 rp_all (2,2,2)
Instance 0x6000000000000008 Region (11,2,2)
Instance 0x6000000000000009 Region (15,3,3)
Instance 0x600000000000000a Region (18,4,4)
Instance 0x600000000000000b Region (22,5,5)
Instance 0x600000000000000c Region (26,6,6)
Instance 0x600000000000000d Region (30,7,7)
Instance 0x600000000000000e Region (19,4,4)
Instance 0x600000000000000f Region (23,5,5)
Instance 0x6000000000000010 Region (27,6,6)
Instance 0x6000000000000011 Region (31,7,7)
Instance 0x6000000000000012 Region (4,1,1)
Instance 0x6000000000000013 Region (5,1,1)
Instance 0x6000000000000014 Region (14,3,3)
And here's the Realm logging of instance creations:
[0 - b0284000] {2}{meta}: instance created: region=5000000000000004 memory=1e00000000000000 id=6000000000000000 bytes=452
[0 - b0284000] {2}{meta}: instance created: region=5000000000000008 memory=1e00000000000000 id=6000000000000001 bytes=8928
[0 - b0284000] {2}{meta}: instance created: region=500000000000000c memory=1e00000000000000 id=6000000000000002 bytes=216
[0 - b0284000] {2}{meta}: instance created: region=500000000000000a memory=1e00000000000000 id=6000000000000003 bytes=192
[0 - b0284000] {2}{meta}: instance created: region=500000000000000e memory=1e00000000000000 id=6000000000000004 bytes=75600
[0 - b0d16000] {2}{meta}: instance created: region=5000000000000005 memory=1e00000000000000 id=6000000000000005 bytes=452
[0 - b0d16000] {2}{meta}: instance created: region=5000000000000009 memory=1e00000000000000 id=6000000000000006 bytes=8928
[0 - b0d16000] {2}{meta}: instance created: region=500000000000000d memory=1e00000000000000 id=6000000000000007 bytes=72
[0 - b0d16000] {2}{meta}: instance created: region=500000000000000b memory=1e00000000000000 id=6000000000000008 bytes=192
[0 - b0d16000] {2}{meta}: instance created: region=500000000000000f memory=1e00000000000000 id=6000000000000009 bytes=75600
[0 - b111c000] {2}{meta}: instance created: region=5000000000000011 memory=1e00000000000000 id=600000000000000a bytes=68
[0 - b111c000] {2}{meta}: instance created: region=5000000000000014 memory=1e00000000000000 id=600000000000000b bytes=68
[0 - b111c000] {2}{meta}: instance created: region=5000000000000017 memory=1e00000000000000 id=600000000000000c bytes=68
[0 - b111c000] {2}{meta}: instance created: region=500000000000001a memory=1e00000000000000 id=600000000000000d bytes=68
[0 - b0284000] {2}{meta}: instance created: region=5000000000000012 memory=1e00000000000000 id=600000000000000e bytes=68
[0 - b0284000] {2}{meta}: instance created: region=5000000000000015 memory=1e00000000000000 id=600000000000000f bytes=68
[0 - b0284000] {2}{meta}: instance created: region=5000000000000018 memory=1e00000000000000 id=6000000000000010 bytes=68
[0 - b0284000] {2}{meta}: instance created: region=500000000000001b memory=1e00000000000000 id=6000000000000011 bytes=68
[0 - b0284000] {2}{meta}: instance created: region=5000000000000004 memory=1e00000000000000 id=6000000000000012 bytes=14464
[0 - b131f000] {2}{meta}: instance created: region=5000000000000005 memory=1e00000000000000 id=6000000000000013 bytes=14464
[0 - b0284000] {2}{meta}: instance created: region=500000000000000e memory=1e00000000000000 id=6000000000000014 bytes=28800
It looks like the Legion and Realm index space numbers deviate after a while, but all of the first ones match up EXCEPT for instance 6000000000000007, which happens to be the target of the copy, and for which that copy would make a lot more sense if the instance had been constructed from rp_all instead of (13,2,2) (which looks like it's rp_shared[1] from the index space tree).
I believe that this bug is now fixed with da5e088. Assigning back to @magnatelee to confirm.
The bug seems fixed. Closing this issue.
| gharchive/issue | 2017-03-25T02:48:47 | 2025-04-01T06:37:34.546718 | {
"authors": [
"lightsighter",
"magnatelee",
"streichler"
],
"repo": "StanfordLegion/legion",
"url": "https://github.com/StanfordLegion/legion/issues/234",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1936978794 | To load parquet file has been fail.
We've tried to load parquet file from s3 api.
(A table with 8,000 columns has roughly 5,000 rows.)
Few hour later, some starrocks BE nodes have killed by self.
But memory and storage space was enough to insert records.
Steps to reproduce the behavior (Required)
CREATE TABLE '...'
column#: 8,000 (2000 integer, 6000 decimal)
CREATE TABLE tbl_pk_dec_c8000_r200000
(
int_col1 INT,
int_col2 INT,
int_col3 INT,
int_col4 INT,
int_col5 INT,
...
dec_col5999 DECIMAL(17,7),
dec_col6000 DECIMAL(17,7)
)
PRIMARY KEY(int_col1)
DISTRIBUTED BY HASH(int_col1) BUCKETS 20
PROPERTIES("replication_num" = "3");
INSERT INTO '....'
LOAD LABEL tbl_pk_dec_c8000_r200000
(
DATA INFILE("s3a://test/dec_c8000_r200000.001.parquet")
INTO TABLE tbl_pk_dec_c8000_r200000
-- COLUMNS TERMINATED BY ","
FORMAT AS "PARQUET"
(
int_col1,
int_col2,
int_col3,
...
dec_col5998,
dec_col5999,
dec_col6000
)
)
WITH BROKER
(
"aws.s3.endpoint" = "<storage_addr>",
"aws.s3.access_key" = "....",
"aws.s3.secret_key" = "....",
"aws.s3.enable_ssl" = "true",
"aws.s3.enable_path_style_access" = "true"
);
dec_c8000_r200000.001.parquet
size: 1.1G
rows: 5,115
Expected behavior (Required)
The load completes successfully.
Real behavior (Required)
memory status (top)
show load\G
*************************** 2. row ***************************
JobId: 14005
Label: tbl_pk_dec_c8000_r200000
State: CANCELLED
Progress: ETL:N/A; LOAD:N/A
Type: BROKER
Priority: NORMAL
ScanRows: 835
FilteredRows: 0
UnselectedRows: 0
SinkRows: 835
EtlInfo: NULL
TaskInfo: resource:N/A; timeout(s):14400; max_filter_ratio:0.0
ErrorMsg: type:LOAD_RUN_FAIL; msg:[E1014]Got EOF of Socket{id=116 fd=190 addr=10.202.14.125:8060:50134} (0x0x7ffab6563740) [R1][E112]Not connected to 10.202.14.125:8060 yet, server_id=116 [R2][E112]Not connected to 10.202.14.125:8060 yet, server_id=116 [R3][E112]Not connected to 10.202.14.125:8060 yet, server_id=116
CreateTime: 2023-09-25 14:13:17
EtlStartTime: 2023-09-25 14:13:23
EtlFinishTime: 2023-09-25 14:13:23
LoadStartTime: 2023-09-25 14:13:23
LoadFinishTime: 2023-09-25 14:56:48
TrackingSQL:
JobDetails: {"All backends":{"d2f70bc9-a641-442d-a997-9bb0f71cade0":[10009,10007,10006,10008]},"FileNumber":1,"FileSize":1155971344,"InternalTableLoadBytes":53505686,"InternalTableLoadRows":835,"ScanBytes":53441670,"ScanRows":835,"TaskNumber":1,"Unfinished backends":{"d2f70bc9-a641-442d-a997-9bb0f71cade0":[10009,10007,10006]}}
be2: be.WARNING
W0925 14:56:44.756848 404252 stack_util.cpp:350] 2023-09-25 14:56:44.756733, query_id=00000000-0000-0000-0000-000000000000, fragment_instance_id=00000000-0000-0000-0000-000000000000 throws exception: std::bad_alloc, trace:
@ 0xbd4c46d __wrap___cxa_throw
@ 0x863f966 _Znwm.cold
@ 0x86d9a4c std::__new_allocator<>::allocate()
@ 0x86d6aff std::allocator_traits<>::allocate()
@ 0x8743d0c std::_Vector_base<>::_M_allocate()
@ 0x8743bab std::_Vector_base<>::_M_create_storage()
@ 0x874195b std::_Vector_base<>::_Vector_base()
@ 0x8831a4d std::vector<>::vector()
@ 0x88102a3 starrocks::FixedLengthColumnBase<>::FixedLengthColumnBase()
@ 0x8bb4919 starrocks::ColumnFactory<>::ColumnFactory<>()
@ 0x8bb46f1 starrocks::FixedLengthColumn<>::FixedLengthColumn()
@ 0xc48a5cb std::_Construct<>()
@ 0xc489d90 std::allocator_traits<>::construct<>()
@ 0xc48912b std::_Sp_counted_ptr_inplace<>::_Sp_counted_ptr_inplace<>()
@ 0xc487ba4 std::__shared_count<>::__shared_count<>()
@ 0xc486930 std::__shared_ptr<>::__shared_ptr<>()
@ 0xc485d67 std::shared_ptr<>::shared_ptr<>()
@ 0xc485502 std::make_shared<>()
@ 0xc484e1a starrocks::ColumnFactory<>::create<>()
@ 0xc48418b starrocks::ColumnBuilder::operator()<>()
@ 0xc4834f3 _ZN9starrocks20type_dispatch_columnINS_13ColumnBuilderEJNS_14TypeDescriptorEmEEEDaNS_11LogicalTypeET_DpT0_
@ 0xc481a51 starrocks::ColumnHelper::create_column()
@ 0xca1caf9 starrocks::serde::ProtobufChunkDeserializer::deserialize()
@ 0xbb3a082 starrocks::LoadChannel::_deserialize_chunk()
@ 0xbb38d2f starrocks::LoadChannel::add_chunk()
@ 0xbb31f04 starrocks::LoadChannelMgr::add_chunk()
@ 0xbc6049b starrocks::BackendInternalServiceImpl<>::tablet_writer_add_chunk()
@ 0xc1df97a doris::PBackendService::CallMethod()
@ 0xd44f1a1 brpc::policy::ProcessRpcRequest()
@ 0xd533707 brpc::ProcessInputMessage()
@ 0xd5345a7 brpc::InputMessenger::OnNewMessages()
@ 0xd3dac1d brpc::Socket::ProcessEvent()
StarRocks version (Required)
You can get the StarRocks version by executing SQL select current_version()
3.1.2-4f3a2ee91b
Is /proc/sys/vm/overcommit_memory turned on?
@meegoo
yes. It has been turned on.
Is it possible to also try the files() function? https://docs.starrocks.io/en-us/latest/sql-reference/sql-functions/table-functions/files
Sorry for the late reply.
I've tried your suggestion, as below.
But it didn't work.
-- ___test_file.sql
CREATE TABLE dec_c8000_r200000_001 AS
SELECT * FROM FILES(
"path" = "s3://test/dec_c8000_r200000.001.parquet",
"format" = "parquet",
"aws.s3.access_key" = "******",
"aws.s3.secret_key" = "******",
"aws.s3.region" = "us-west-2"
);
result
mysql> source ___test_file.sql;
ERROR 1064 (HY000): Access storage error. Unknown error
mysql>
You'll have to break the Parquet file into smaller pieces or use the upcoming PIPE feature to load the data.
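If splitting client-side is an option, a short pyarrow script can do it. A minimal sketch, assuming a recent pyarrow is installed and reusing the file name from this issue:

```python
import pyarrow as pa
import pyarrow.parquet as pq

src = pq.ParquetFile("dec_c8000_r200000.001.parquet")

# One output file per batch of ~500 rows; with 8,000 columns this keeps
# each piece at a small fraction of the original 1.1 GB.
for i, batch in enumerate(src.iter_batches(batch_size=500)):
    table = pa.Table.from_batches([batch])
    pq.write_table(table, f"dec_c8000_r200000.part{i:03d}.parquet")
```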
| gharchive/issue | 2023-10-11T06:47:01 | 2025-04-01T06:37:34.561840 | {
"authors": [
"alberttwong",
"kakao-lunarvel-vet",
"meegoo"
],
"repo": "StarRocks/starrocks",
"url": "https://github.com/StarRocks/starrocks/issues/32467",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1175234311 | "select array_concat(array,empty array) from table" is different from the expected result
Steps to reproduce the behavior (Required)
CREATE TABLE `array_sort_null` ( c1 int NOT NULL, c2 array<date>, c3 array<datetime>, c4 array<char(20)>, c5 array<varchar(20)>, c6 array<boolean>, c7 array<tinyint>, c8 array<smallint>, c9 array<int>, c10 array<bigint>, c11 array<largeint>, c12 array<float>, c13 array<double> ) DUPLICATE KEY(c1) DISTRIBUTED BY HASH(c1) buckets 1 PROPERTIES ("replication_num"="1");
insert into array_sort_null (c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13)values (1, ['2012-01-02', '2012-01-01', '2012-01-04'], ['2012-01-04 01:01:01', '2012-01-03 01:01:01', '2013-01-01 01:01:01'], ['22', '33', '11'], ['222', null, '111'], [False, True, False], [2, 1, 3], [22, 11, 33], [null, null, null], [null, null, null], [3333, 4231, 1111], [11.2, 11.1, 10.1], [11.11, 8.99, 4.44]), (2, ['2013-01-02', '2013-01-01', '2014-01-04'], ['2013-01-04 01:01:01', '2013-01-03 01:01:01', '2010-01-01 01:01:01'], ['22', '33', '11', '66'], ['222', null, null, '000'], [False, True, False], [2, 4, 3], [22, 11, 33], [222, 111, 333], [null, null, null], [3343, 4831, 1711], [11.3, 11.2, 10.2], [11.22, 8.77, 4.33]);
select array_concat(["2012-01-02","2012-01-01","2012-01-04"],[]) from array_sort_null where c1=1;
Expected behavior (Required)
["2012-01-02","2012-01-01","2012-01-04"]
Real behavior (Required)
ERROR 1064 (HY000): Index: 0, Size: 0
StarRocks version (Required)
+------------------------------------------+
| current_version() |
+------------------------------------------+
| PIPELINE_SQLANCER_MASTER_RELEASE b61b70b |
+------------------------------------------+
It can't be reproduced anymore. It should be fixed.
| gharchive/issue | 2022-03-21T11:15:14 | 2025-04-01T06:37:34.565905 | {
"authors": [
"Pslydhh",
"lvchenyang-maker"
],
"repo": "StarRocks/starrocks",
"url": "https://github.com/StarRocks/starrocks/issues/4300",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1504271324 | [Feature]filetable with recursive path
Signed-off-by: zombee0 flylucas_10@163.com
What type of PR is this:
[ ] BugFix
[ ] Feature
[ ] Enhancement
[ ] Refactor
[ ] UT
[ ] Doc
[ ] Tool
Which issues of this PR fixes :
Fixes #
Problem Summary(Required) :
Checklist:
[ ] I have added test cases for my bug fix or my new feature
[ ] This pr will affect users' behaviors
[ ] This pr needs user documentation (for new or modified features or behaviors)
[ ] I have added documentation for my new feature or new function
Bugfix cherry-pick branch check:
[ ] I have checked the version labels which the pr will be auto backported to target branch
[ ] 2.5
[ ] 2.4
[ ] 2.3
[ ] 2.2
[FE PR Coverage Check]
:disappointed: fail : 0 / 28 (00.00%)
file detail
| | path | covered_line | new_line | coverage | not_covered_line_detail |
|:---:|---|---:|---:|---:|---|
| :large_blue_circle: | com/starrocks/planner/FileTableScanNode.java | 0 | 1 | 00.00% | [80] |
| :large_blue_circle: | com/starrocks/catalog/FileTable.java | 0 | 2 | 00.00% | [109, 111] |
| :large_blue_circle: | com/starrocks/connector/hive/HiveRemoteFileIO.java | 0 | 25 | 00.00% | [104, 105, 106, 108, 111, 112, 114, 118, 119, 121, 123, 124, 125, 126, 128, 130, 131, 132, 133, 134, 135, 136, 137, 138, 140] |
| gharchive/pull-request | 2022-12-20T09:46:09 | 2025-04-01T06:37:34.574858 | {
"authors": [
"wanpengfei-git",
"zombee0"
],
"repo": "StarRocks/starrocks",
"url": "https://github.com/StarRocks/starrocks/pull/15501",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1321689164 | How can I set match files to specific file types?
I found that the default setting is **/*, but I just want to match .sv and .v files.
I've tried
**/{.sv,.v}//*
and
/{.sv,.v}//*
as the exclude setting style, but neither works.
Hello! This should already be possible by changing the commentAnchors.workspace.matchFiles config property to **/*.{sv,v}
Currently this only applies to the workspace anchors panel. The next release will apply the same behavior to opened text files.
| gharchive/issue | 2022-07-29T02:41:34 | 2025-04-01T06:37:34.604939 | {
"authors": [
"g98aq8g09w",
"macjuul"
],
"repo": "StarlaneStudios/vscode-comment-anchors",
"url": "https://github.com/StarlaneStudios/vscode-comment-anchors/issues/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1291183989 | Proposal of tweaks around global and local cache configuration
Making sure global cache configuration is taken into consideration for all datasource methods
Making sure method-specific cache configuration is overriding global configuration, if present
Adding related tests
Adding simple VSC config to enforce basic code styling
Adding VSC launcher to help out with debugging tests and code
@StarpTech I'm happy to talk through the changes I've made and the motivation. I also have the following changes queued up, e.g. for an optional maxTtlIfError, and I'm thinking about improving the performance and bandwidth used while caching with maxTtlIfError.
Hi @kdybicz sorry for the late response. Your change will give request options precedence over global defaults. This is natural and the current behavior was more of a bug.
| gharchive/pull-request | 2022-07-01T09:53:29 | 2025-04-01T06:37:34.607475 | {
"authors": [
"StarpTech",
"kdybicz"
],
"repo": "StarpTech/apollo-datasource-http",
"url": "https://github.com/StarpTech/apollo-datasource-http/pull/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
No response after the image is received
Environment: Python 3.9.12, NoneBot2 rc4, self-hosted repository, latest plugin version. No errors are reported, and the network can reach Hugging Face normally.
I checked and found that this is caused by a change to Hugging Face's inference API endpoint.
For example, the old default address was https://hf.space/embed/ppxxxg22/Real-ESRGAN/api/predict/, and it should now be https://ppxxxg22-real-esrgan.hf.space/api/predict/.
For other self-hosted Spaces, just update the API URL following the second format.
Thanks, much appreciated!
Replacing the URL fixed it.
| gharchive/issue | 2023-06-17T08:35:59 | 2025-04-01T06:37:34.615727 | {
"authors": [
"ElainaFanBoy",
"Staskaer"
],
"repo": "Staskaer/nonebot_plugin_RealESRGAN",
"url": "https://github.com/Staskaer/nonebot_plugin_RealESRGAN/issues/9",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1344287694 | Decide on the usage of .ConfigureAwait(false) in Steeltoe code
Today the Steeltoe codebase contains limited usage of .ConfigureAwait(false). I'm wondering if that was a deliberate decision, and if so, what the rationale was for sometimes using it and sometimes omitting it. Should we revisit this, now that we're not targeting .NET Framework and NetStandard anymore?
There's quite a lot of contradictory guidance on the topic.
The blog at https://itnext.io/a-deep-dive-into-configureawait-65f52b9605c2 states that there's no more need for it when targeting only .NET Core:
At this moment, one might think that in .NET Core you won’t need to spread ConfigureAwait(false) all over your code. Almost!
This is almost true, it is still recommended the utilization of ConfigureAwait(false) for libraries as a fallback if those libraries are used within a legacy framework. But for most of the cases yes, in .NET Core you can drop the ConfigureAwait(false) usage.
EF Core recently switched to adding .ConfigureAwait(false) almost everywhere, despite requiring .NET Core.
The ASP.NET Core repo removed .ConfigureAwait(false) from all projects that don't target NetStandard.
Excerpt from the ConfigureAwait FAQ by Stephen Toub:
I’ve heard ConfigureAwait(false) is no longer necessary in .NET Core. True?
False. It’s needed when running on .NET Core for exactly the same reasons it’s needed when running on .NET Framework. Nothing’s changed in that regard.
What has changed, however, is whether certain environments publish their own SynchronizationContext. In particular, whereas the classic ASP.NET on .NET Framework has its own SynchronizationContext, in contrast ASP.NET Core does not. That means that code running in an ASP.NET Core app by default won’t see a custom SynchronizationContext, which lessens the need for ConfigureAwait(false) running in such an environment.
It doesn’t mean, however, that there will never be a custom SynchronizationContext or TaskScheduler present. If some user code (or other library code your app is using) sets a custom context and calls your code, or invokes your code in a Task scheduled to a custom TaskScheduler, then even in ASP.NET Core your awaits may see a non-default context or scheduler that would lead you to want to use ConfigureAwait(false). Of course, in such situations, if you avoid synchronously blocking (which you should avoid doing in web apps regardless) and if you don’t mind the small performance overheads in such limited occurrences, you can probably get away without using ConfigureAwait(false).
@dtillman Can you chime in on this?
| gharchive/issue | 2022-08-19T11:02:08 | 2025-04-01T06:37:34.690369 | {
"authors": [
"bart-vmware"
],
"repo": "SteeltoeOSS/Steeltoe",
"url": "https://github.com/SteeltoeOSS/Steeltoe/issues/998",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
309009863 | Using NuGet Package with ClickOnce Deployment
Actually this seems not to be your fault, but it is not possible to use this NuGet package in combination with a ClickOnce deployment. Although there is no error, the problem is that ClickOnce does not see the dependency on MediaInfo.Native. Thus it does not add it to the installation, and on application start all DLLs from MediaInfo.Native are unavailable, so my application crashes.
I'm not an expert with NuGet, but do you think there is some other way to make your dependency on MediaInfo.Native more obvious? Or do you know of any way to properly install an application using your NuGet package?
I do need to find some way to install my application using your NuGet package and the required MediaInfo.dll (and its dependencies) in an easy way. - Thank you so much for your effort!
Quick question; did you try to build your app as x64 or 32 ?
Thanks for your immediate feedback. I tried both, but would be happy with x64 in the first try :-(
Am 27.03.2018 um 20:42 schrieb Stef Heyenrath notifications@github.com:
Quick question: did you try to build your app as x64 or x86?
—
You are receiving this because you authored the thread.
Reply to this email directly, view it on GitHub, or mute the thread.
Just created a quick example to show the problem: https://github.com/suchja/MediaInfoConsole
When preparing to deploy this solution the MediaInfo.Native package and its included DLLs like MediaInfo.dll do not appear in the corresponding wizard (sorry for the German UI).
I tried it once again with x86 and x64 and in addition with Debug and Release. It is the same in all combinations.
While inspecting the project file I found this entry. This seems to me like a first indication of why all the other DLLs are not recognised by ClickOnce. You link the project, which seems to be built during the application build, and not the already-built DLL, as you did for MediaInfo.DotNetWrapper.dll.
A workaround which could work:
Just add the 5 DLLs from the packages folder (C:\Users\azureuser\Documents\Github\MediaInfo.DotNetWrapper\packages\MediaInfo.Native.17.12\build\native\x86, or x64) to your project.
And set 'Copy if newer' to true for all these DLLs.
When inspecting the ClickOnce files, you see that the DLLs are included:
Yeah, I did that already as a workaround. However, it would be good to have a proper solution which builds for both x86 and x64.
Another solution would be to add a post-build step and just copy the correct DLLs from the packages\MediaInfo.Native.17.12\build\native... folder, for example with the sketch below.
Or conditionally include the DLLs by manually editing the csproj file?
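A minimal post-build event sketch (untested; the package path and the value of the $(PlatformName) macro depend on your solution layout and build platform):

```
xcopy /y "$(SolutionDir)packages\MediaInfo.Native.17.12\build\native\$(PlatformName)\*.dll" "$(TargetDir)"
```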
That probably will do. Thank you! Will update this issue once I have it running, so the solution is documented.
| gharchive/issue | 2018-03-27T15:04:39 | 2025-04-01T06:37:34.704750 | {
"authors": [
"StefH",
"suchja"
],
"repo": "StefH/MediaInfo.DotNetWrapper",
"url": "https://github.com/StefH/MediaInfo.DotNetWrapper/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
319938899 | Question: Is it possible to call a generic method in a dynamic manner
Hello colleagues, sorry, the question may be a bit silly, as I'm not an experienced C# developer.
I got the following piece of code:
var result = list
.Where(x => typesFiltered.Contains(x.Type))
.Where(x => x.GetField<string>(ParameterName) == ParameterValue)
.ToArray();
is it possible to convert it to something like
var result = list
.Where(x => typesFiltered.Contains(x.Type))
.Where("it.GetField<string>(ParameterName) == ParameterValue") // <- use the string here
.ToArray();
If you just want to filter using this library, use code like:
var result = list
.Where(x => typesFiltered.Contains(x.Type))
.Where("it.ParameterName == \"ParameterValue\"")
.ToDynamicArray();
Closing...
| gharchive/issue | 2018-05-03T14:14:07 | 2025-04-01T06:37:34.707138 | {
"authors": [
"StefH",
"eosfor"
],
"repo": "StefH/System.Linq.Dynamic.Core",
"url": "https://github.com/StefH/System.Linq.Dynamic.Core/issues/162",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Building the reservation page with a form
Here the user can reserve their book and their details are posted to the API.
Stefan will take this one on.
| gharchive/issue | 2023-04-19T09:35:53 | 2025-04-01T06:37:34.708031 | {
"authors": [
"Stefan-Espant",
"jtoufik"
],
"repo": "Stefan-Espant/performance-matters-oba",
"url": "https://github.com/Stefan-Espant/performance-matters-oba/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
664664500 | Use DeleteAclRequest_v1 if broker version is over 2.0.0
Fixes https://github.com/StephenSorriaux/ansible-kafka-admin/issues/61
Proposed Changes
Use DeleteAclRequest_v1 if broker version is over 2.0.0
(also forced EOL to LF, that's why the huge diff)
I tested this and I can confirm that it's now working with prefixed acls. Many thanks.
Thanks for your feedback, I will merge this and generate a new release in the next days
| gharchive/pull-request | 2020-07-23T17:58:26 | 2025-04-01T06:37:34.789858 | {
"authors": [
"StephenSorriaux",
"justCatchingRye"
],
"repo": "StephenSorriaux/ansible-kafka-admin",
"url": "https://github.com/StephenSorriaux/ansible-kafka-admin/pull/62",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
523289053 | crashing when custom font is set
Maybe I'm doing this totally wrong, but I have a custom font and I need to set it. I tried passing just the name of the font, which I have in res/font/montessara_light_black, as "montessara_light_black" to setFont. It just crashed.
Can you show how to do that?
Indeed, this supports fonts from the assets folder. Using fonts from the res/font folder was not possible when this library was created. This should be changed in order to support it.
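For reference, loading a res/font resource generally goes through ResourcesCompat rather than the asset-based Typeface APIs. A minimal sketch assuming AndroidX core (whether the picker accepts a Typeface directly depends on its API, so treat this as an illustration of the loading step only):

```kotlin
import androidx.core.content.res.ResourcesCompat

// Resolve the font resource into a Typeface that a view can consume.
val typeface = ResourcesCompat.getFont(context, R.font.montessara_light_black)
```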
https://github.com/StephenVinouze/MaterialNumberPicker/pull/21
Cool! :D
thanks
| gharchive/issue | 2019-11-15T06:50:58 | 2025-04-01T06:37:34.791913 | {
"authors": [
"AlejandroHCruz",
"StephenVinouze",
"fperez-rsc",
"githubashutoshsoni"
],
"repo": "StephenVinouze/MaterialNumberPicker",
"url": "https://github.com/StephenVinouze/MaterialNumberPicker/issues/17",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
157334679 | add checking for auth response
Signed-off-by: greg zimin_grigory@hotmail.com
good idea, but pls resolve conflicts
closed, decided not to merge it. Let's keep examples as minimal as possible.
| gharchive/pull-request | 2016-05-28T11:37:43 | 2025-04-01T06:37:34.793401 | {
"authors": [
"ZiminGrigory",
"mehanig"
],
"repo": "StepicOrg/Stepic-API",
"url": "https://github.com/StepicOrg/Stepic-API/pull/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1437341021 | 🛑 DNS (he.net) is down
In b42247f, DNS (he.net) ($HE_NS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: DNS (he.net) is back up in f4741f3.
| gharchive/issue | 2022-11-06T09:22:38 | 2025-04-01T06:37:34.807984 | {
"authors": [
"leitmori"
],
"repo": "Sternwarte-St-Ottilien-e-V/status",
"url": "https://github.com/Sternwarte-St-Ottilien-e-V/status/issues/1087",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1473993869 | 🛑 DNS (he.net) is down
In 19b6c0b, DNS (he.net) ($HE_NS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: DNS (he.net) is back up in ba66094.
| gharchive/issue | 2022-12-03T12:47:50 | 2025-04-01T06:37:34.811047 | {
"authors": [
"leitmori"
],
"repo": "Sternwarte-St-Ottilien-e-V/status",
"url": "https://github.com/Sternwarte-St-Ottilien-e-V/status/issues/2232",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1640855718 | 🛑 DNS (he.net) is down
In d4560c8, DNS (he.net) ($HE_NS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: DNS (he.net) is back up in 3364dd4.
| gharchive/issue | 2023-03-26T10:34:44 | 2025-04-01T06:37:34.813819 | {
"authors": [
"leitmori"
],
"repo": "Sternwarte-St-Ottilien-e-V/status",
"url": "https://github.com/Sternwarte-St-Ottilien-e-V/status/issues/3412",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1654910925 | 🛑 DNS (he.net) is down
In 3be5602, DNS (he.net) ($HE_NS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: DNS (he.net) is back up in 1547aa7.
| gharchive/issue | 2023-04-05T04:22:42 | 2025-04-01T06:37:34.816686 | {
"authors": [
"leitmori"
],
"repo": "Sternwarte-St-Ottilien-e-V/status",
"url": "https://github.com/Sternwarte-St-Ottilien-e-V/status/issues/4243",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1689332219 | 🛑 DNS (he.net) is down
In 0767d24, DNS (he.net) ($HE_NS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: DNS (he.net) is back up in 937dc0b.
| gharchive/issue | 2023-04-29T02:55:43 | 2025-04-01T06:37:34.819502 | {
"authors": [
"leitmori"
],
"repo": "Sternwarte-St-Ottilien-e-V/status",
"url": "https://github.com/Sternwarte-St-Ottilien-e-V/status/issues/6185",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
[Lyrics request] 阿云嘎, HOYO-MiX - Regression
Song title
阿云嘎, HOYO-MiX - Regression
Music platform and track ID
1913478990
Other notes
[ ] This song has word-by-word lyrics on Apple Music
Notes
If nobody else is doing this one, I'll upload it 😋
The platform's built-in lyrics look decent quality to me 😂 apart from not being properly formatted 😂
Yes, it's because of the formatting that I'd rather type up a copy myself (?) 😂
Couldn't you just pull it down and fix the formatting (
There are also some background vocals to add; I'll see how it goes over the next couple of days 🥲
Referenced in #1857.
The Bishop's love is just too heavy...
| gharchive/issue | 2024-08-08T11:07:23 | 2025-04-01T06:37:34.823282 | {
"authors": [
"ITManCHINA",
"Xionghaizi001"
],
"repo": "Steve-xmh/amll-ttml-db",
"url": "https://github.com/Steve-xmh/amll-ttml-db/issues/1797",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
339202817 | DNS Client High Cpu Usage
Hello!
I've seen people with similar issues with connectivity when using this huge hosts file. I just switched over to winhelp2002 mvps file from this one and for some reason, DNS Client didn't react to it. I no longer have these 10-30s delays when connecting to the net or switching on/off VPN. Could it be due to the file size? Hosts.mvps is (today) 465KB while "StevenBlacks" is 2038KB (depending on which one you choose).
The Windows HOSTS file was never designed to be an ad-blocking solution, so this is not only on your end, nor does it affect only a few people; it's quite normal. Svchost.exe shows high CPU because the DNS service tries to resolve all domains one by one, which is inefficient.
The only thing you can do is install AdGuard or use Unbound or another DNS system which is more efficient when it comes to blacklisting domains.
See also #93
Also #411 #695
If we are talking about file size you can also see #459 (--compress ==> cf.)
You can't disable DNSCache that easily anymore: since Windows 10 RS3 the DNSCache 'stop' function is gone from services.msc, and even if you disable it via the registry it might get reset after a restart/update, or whenever a dependent service starts it.
Again, the HOSTS file (no matter which OS), and especially on Windows, is not designed to be an ad-blocker. Over the long term you have to use AdGuard or a hardware-based Pi-Hole, which is more efficient, especially because it also works with regular expressions; that can reduce the hosts list by up to 80%, since you can easily work with wildcards like .* etc.
Compressing the hosts file is also not effective beyond a certain point, especially not with 100k+ entries. You have to face the fact that HOSTS is a very bad solution that causes more trouble than it helps. If you distrust Windows' own DNS mechanism anyway, just use dnscrypt and combine it with Unbound; that doesn't cost many system resources, and as a benefit you improve your security setup and can work with hosts files more easily, especially because you can update the hosts file automatically via Unbound (see the small Unbound example below), whereas you usually need third-party tools to do that otherwise.
There are several guides here:
https://etherarp.net/build-an-adblocking-dns-server/
https://news.ycombinator.com/item?id=11084968
https://github.com/lepiaf/adblock-unbound
https://deadc0de.re/articles/unbound-blocking-ads.html
If you'd like to do this more professionally, with thousands of entries, there is no way around a Pi-Hole (or a similar script). The benefit is obvious: it is optimized for DNS-level ad-blocking, it works for all network devices without any client applications, and you get a nice GUI to monitor and white-/blacklist things.
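For a taste of what the Unbound approach looks like, blocking a domain and everything under it takes two lines in unbound.conf; the zone name here is just a placeholder:

```
server:
    # 'redirect' answers the zone and all of its subdomains with the
    # local-data below, so one entry covers *.example-adserver.com too.
    local-zone: "example-adserver.com" redirect
    local-data: "example-adserver.com A 0.0.0.0"
```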
Yes, that Win10 thing is really pissing me off, so I just stopped updating the hosts file at work.
(Using Win 8.1 Pro at home, where the hosts file works well.)
Off topic:
Oh, those are cool!
But in my case, as an average power user,
I only have my laptop, Android phones, and my not-so-customizable router-modem at home, so I am not able to set up those kinds of things (you know, no server thingy).
Though I am really interested in setting one up ('cause it's damn tedious setting up the hosts file on each device, rooted and non-rooted). Thanks for the links; Unbound looks easy to set up ;)
Thank you, everyone, for the excellent community response to this issue.
Some really cool people are watching this repo. I'm thankful for that!
I'd like to echo @CHEF-KOCH when he recommends Pi-Hole. Pi-Hole is a great project and a great reason for everybody to get into Raspberry Pi devices more generally.
Closing!
This issue can be successfully resolved via regedit by removing the 'DNSCache' entry from the NetworkService REG_MULTI_SZ value at:
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Svchost
HKLM\SOFTWARE\WOW6432Node\Microsoft\Windows NT\CurrentVersion\Svchost
Prior to this I had to wait ~10 minutes for the svchost-hosted DNS Client to complete its work when using Steven Black's hosts file. After making this change, I no longer have to wait. Enjoy....
If confirmed, that one should definitely be in the README, Steve @StevenBlack!
Al @Al-Green-COS what version of Windows is this?
@Al-Green-COS isn't it the same as disabling the DNS Client in Services? Not sure why the extra steps, especially when it comes to messing with registry entries?
Steve @StevenBlack it doesn't matter. I'm on 1607 and I do have the option to do it, and I'm pretty sure @Al-Green-COS is on a much higher version than me.
dnmTX,
In the latest releases of Windows 10 it's no longer possible to disable DNS Client via Services. I don't know in which release this changed. I'm presently on Windows 10 release 20H2 (2009).
(Screenshot: the Services console with the Start, Stop, Pause, and Resume buttons grayed out.)
If you're able to do this, please advise as to the Windows 10 release you are on:
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v ReleaseID
Respectfully,
Al Green
@Al-Green-COS I see. I didn't know that, but it's hardly surprising, to be honest.
Still, another less complicated option would be to go to
HKLM\SYSTEM\CurrentControlSet\Services\Dnscache and change Start from 2 to 4 (disabled), then restart.
Can you confirm whether this option is still possible on the build you're on? Thank you 👍
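For anyone following along, the equivalent one-liner from an elevated prompt (reboot afterwards):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Dnscache" /v Start /t REG_DWORD /d 4 /f
```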
Al @Al-Green-COS what version of Windows is this?

Mr. Black,
[screenshot attached]
Respectfully,
Al Green
dnmTX,
Your recommendation works as well.
[screenshots attached]
Respectfully,
Al Green
A note for the unwary: I applied the changes recommended in this thread, rebooted and it broke file share browsing. I could no longer access files on my NAS (a Synology). Reverting the changes restored access (after another reboot).
The underlying cause of high CPU in my case was a faulty USB hub with a built-in network adapter. It was constantly connecting and disconnecting as evidenced by entries in event logs. This may have been specific to Hyper-V, which automatically reprovisions network devices on discovery.
High CPU usage also shows up on Windows 10 startup while the DNS Client processes all the URLs in the hosts file. Presumably it adds each record to the cache (something like Add-DnsServerResourceRecord) for use with the Store/Active Directory or other non-browser operations. If, for some reason, Windows shutdown is not successful, the DNS cache gets flushed on reboot, so the database has to be rebuilt.
| gharchive/issue | 2018-07-08T07:02:42 | 2025-04-01T06:37:34.894165 | {
"authors": [
"Al-Green-COS",
"CHEF-KOCH",
"Laicure",
"StevenBlack",
"dnmTX",
"funilrys",
"lmstearn",
"norage",
"robpomeroy"
],
"repo": "StevenBlack/hosts",
"url": "https://github.com/StevenBlack/hosts/issues/710",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
625466278 | Update updateHostsFile.py
Escape backslashes
@StevenBlack @funilrys Untested, but it makes sense 🙂 There's still one error that makes CI fail, though:
./updateHostsFile.py:1094:13: F523 '...'.format(...) has unused arguments at position(s): 0
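For context, F523 is raised when a str.format call passes arguments that the format string never references. A minimal illustration of that error class and its fix, with hypothetical names (this is not the actual code at updateHostsFile.py line 1094):

```python
path, count = "hosts", 42

# F523: positional argument 0 (path) is never used by the format string
message = "{1} entries written".format(path, count)

# Fixed: every argument passed to format() is referenced
message = "{0} entries written to {1}".format(count, path)
```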
Thanks for taking initiative and jumping in, @XhmikosR!
Merging.
| gharchive/pull-request | 2020-05-27T07:54:59 | 2025-04-01T06:37:34.896941 | {
"authors": [
"StevenBlack",
"XhmikosR"
],
"repo": "StevenBlack/hosts",
"url": "https://github.com/StevenBlack/hosts/pull/1295",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
798552353 | Ravi/Usir Dialogue images
I have tried a billion times to overwrite the "bad" images and they should no longer exist anywhere in Godot. And yet, they do...
I think it must be that my images were scaled. I thought it was the fact they were scaled and shifted, and that if I cropped them it would work, but I created a brand-new image with a new name, copied everything over, and it still showed up wrong. Then I re-downloaded Jo's original, saw it was too big on the map (since it was using the same image), and scaled it down. Finally the dialogue was working. So... oops. I wasted well over an hour banging my head against the wall thinking Godot was somehow caching the image...
| gharchive/issue | 2021-02-01T17:18:53 | 2025-04-01T06:37:34.898269 | {
"authors": [
"Parmeisan"
],
"repo": "StevenGreenbank/Global-Gamejam-2021",
"url": "https://github.com/StevenGreenbank/Global-Gamejam-2021/issues/48",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
607303135 | Fix the error where the status of positive test cases is not displayed
👏 Resolved Issues
close #124
⛏ Details of Changes
Fixed the error where the status of positive test cases was not displayed.
📸 Screenshots
The code in question references the following part of data.json:
https://github.com/wataruoguchi/covid19_nagano_csv_to_json/blob/bce32ea71da5c466c7b0fb8a7264bbf91128cb65/src/.json/data.json#L3343-L3364
If this is fixed on the data-fetching side, this PR is unnecessary.
| gharchive/pull-request | 2020-04-27T07:22:13 | 2025-04-01T06:37:34.933504 | {
"authors": [
"kikd"
],
"repo": "Stop-COVID19-Nagano/covid19",
"url": "https://github.com/Stop-COVID19-Nagano/covid19/pull/125",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
349214986 | Ubuntu: getting a string instead of samples in data event.
My app works perfectly on my Mac. When I run it on my Ubuntu machine (my actual target), the 'data' event returns strings instead of samples.
I've tried it with a couple of different USB microphones. Here's an example of a message that comes through the 'data' callback:
re/alsa/alsa.confommon:CARD=0,DEVICE=3,CTLINDEX=0,AES0=4,AES1=130,AES2=0,AES3=2m
What's causing this? I am able to record on this PC using arecord, so it appears ALSA is working in the system.
Update: appears to be related to asking for a sample format that the hardware doesn't support. Not sure why this isn't an error instead of passing messages through the data event.
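naudiodon wraps PortAudio, so one way to confirm that diagnosis is to ask PortAudio whether the device actually supports the requested parameters before opening a stream. naudiodon itself is JavaScript; as a cross-language sketch of the same check, here is the PortAudio query via PyAudio (also PortAudio-based). The rate and format below are placeholder assumptions:

```python
import pyaudio

pa = pyaudio.PyAudio()
device = pa.get_default_input_device_info()
try:
    # Raises ValueError if the device rejects this combination
    pa.is_format_supported(
        16000,                         # sample rate to test
        input_device=device["index"],
        input_channels=1,
        input_format=pyaudio.paInt16,  # 16-bit integer samples
    )
    print("requested format is supported")
except ValueError as err:
    print("requested format is NOT supported:", err)
finally:
    pa.terminate()
```

If this check fails for the format you requested in naudiodon, that would line up with garbled ALSA config text showing up in the 'data' event.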
Maybe more data on the environment might help:
Ubuntu 18.04 64-bit server running on a celeron x86 CPU.
There is no native audio hardware on this device; I'm using USB microphones.
I installed alsa using apt. Appears to be version 1.1.3.
What other info would be helpful?
I have just pushed a big update to naudiodon. It would be good to know if the new implementation still has this problem.
| gharchive/issue | 2018-08-09T17:11:05 | 2025-04-01T06:37:34.991458 | {
"authors": [
"jmbldwn",
"scriptorian"
],
"repo": "Streampunk/naudiodon",
"url": "https://github.com/Streampunk/naudiodon/issues/17",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
744569256 | Add extract Twitter embed and youtube video. Please!
Just like the title says!
The algorithm already keeps embedded Youtube videos. Are you asking for something like a property Videos or Tweets on the Article object? Or are you thinking about something like a method GetVideosAsync()?
thank you!!!!!
| gharchive/issue | 2020-11-17T09:35:37 | 2025-04-01T06:37:35.177115 | {
"authors": [
"gabriele-tomassetti",
"taavandais2"
],
"repo": "Strumenta/SmartReader",
"url": "https://github.com/Strumenta/SmartReader/issues/24",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1601072762 | [Enhancements]: Display Error
What is going on?
When I open the app:
I can't see the Search controls.
I click on the Gallery button: it doesn't display anything; you need to click back to Home and then click Gallery again.
When you're on Gallery -> Syncfusion -> More detail to open the new page: it can't open it, and the program exits.
When you're on Gallery -> Syncfusion -> SfRadialGauge, SfDataGrid -> More detail to open the new page: I can't tell whether it's empty or there's simply no data.
When you're on Gallery -> Built-in -> Check Box -> More detail to open the new page: it can't open it, and the program exits.
When you're on Gallery -> Built-in -> Application Settings JSON -> Detail and More detail to open the new page: when I scroll to the end and click on Show me those settings, the program closes.
RefreshView does not display all the data (or maybe my Windows version doesn't support it?).
StackLayout -> More detail to open the new page: some data seems to be missing?
TabbedPage -> Detail: I can't go back to the Gallery menu.
VerticalStackLayout -> More detail to open the new page: it can't open it, and the program exits.
@nhatminh1401 shall we move this to Discussions?
| gharchive/issue | 2023-02-27T12:15:28 | 2025-04-01T06:37:35.180877 | {
"authors": [
"Strypper",
"nhatminh1401"
],
"repo": "Strypper/mauisland",
"url": "https://github.com/Strypper/mauisland/issues/113",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
805333260 | 🛑 Nextcloud Papa is down
In c7e519d, Nextcloud Papa ($NEXTCLOUD_2) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Nextcloud Papa is back up in 3e528ff.
| gharchive/issue | 2021-02-10T09:25:47 | 2025-04-01T06:37:35.183157 | {
"authors": [
"StudFu-WordToMD"
],
"repo": "StudFu-WordToMD/status",
"url": "https://github.com/StudFu-WordToMD/status/issues/352",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1368997535 | 🛑 Nextcloud is down
In 9482095, Nextcloud ($NEXTCLOUD) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Nextcloud is back up in 53359d7.
| gharchive/issue | 2022-09-11T16:33:43 | 2025-04-01T06:37:35.185296 | {
"authors": [
"StudFu-WordToMD"
],
"repo": "StudFu-WordToMD/status",
"url": "https://github.com/StudFu-WordToMD/status/issues/4305",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
960229553 | add translations - [merged]
In GitLab by @erdzan on Apr 27, 2021, 20:10
Merges ngxtranslate -> master
Co-authored-by: Ben Lakhoune ben.lakhoune@rwth-aachen.de
In GitLab by @erdzan on Apr 27, 2021, 20:10
approved this merge request
In GitLab by @erdzan on Apr 27, 2021, 20:11
added 2 commits
382be4c1 - 1 commit from branch master
b61ab284 - Merge remote-tracking branch 'origin/master' into ngxtranslate
Compare with previous version
In GitLab by @erdzan on Apr 27, 2021, 20:11
approved this merge request
In GitLab by @erdzan on Apr 27, 2021, 20:11
mentioned in commit 5a0fe54e847ac9a4bff91ede6da2a5fadb19340b
| gharchive/issue | 2021-08-04T10:39:34 | 2025-04-01T06:37:35.243728 | {
"authors": [
"erdzan12"
],
"repo": "StudyGrow/Cards",
"url": "https://github.com/StudyGrow/Cards/issues/154",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
99611925 | Different setting per folder
I have a project which consists of many submodules and each of them has its own JSHint and JSCS config files. In Sublime I want to have all these submodules as one projects (because they are rather small).
Is it possible to configure SublimeLinter3 so it uses different linter settings per folder? Or is it more a question about Sublime's than SublimeLinter's capabilities?
If I didn't miss anything and this is indeed impossible to achieve now, then could such feature be added to SublimeLinter or it doesn't seem to be feasible?
A linter executable will look for its configuration starting from the file it is linting. So, this should "just work". If not, that's up to the linter, not SL.
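To make that concrete: tools like JSHint and JSCS typically resolve their config by walking up from the linted file's directory until they find one, so each submodule's own .jshintrc/.jscsrc applies to the files under it. A rough sketch of that lookup in Python (illustrative only, not either tool's actual source):

```python
from pathlib import Path

def find_config(linted_file: str, name: str = ".jshintrc"):
    """Walk from the file's directory up to the filesystem root."""
    folder = Path(linted_file).resolve().parent
    for candidate_dir in [folder, *folder.parents]:
        candidate = candidate_dir / name
        if candidate.is_file():
            return candidate  # the nearest config wins
    return None               # fall back to the tool's defaults

# e.g. find_config("project/module-a/src/app.js") finds
# project/module-a/.jshintrc (if present) before project/.jshintrc
```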
| gharchive/issue | 2015-08-07T09:15:24 | 2025-04-01T06:37:35.270311 | {
"authors": [
"Reinmar",
"braver"
],
"repo": "SublimeLinter/SublimeLinter3",
"url": "https://github.com/SublimeLinter/SublimeLinter3/issues/310",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1997040887 | chore: update orc certificates
Description
How has this been tested?
Checklist
[ ] changelog was updated with notable changes
[ ] documentation was updated
Thank you!
| gharchive/pull-request | 2023-11-16T14:49:50 | 2025-04-01T06:37:35.290062 | {
"authors": [
"guilhem-barthes",
"oleobal"
],
"repo": "Substra/orchestrator",
"url": "https://github.com/Substra/orchestrator/pull/333",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1551025870 | al_income.yml can't get past id: regular job employer if x.is_self_employed
"Interview order" for ALJob won't get past id: regular job employer if user selects that they are self-employed.
https://github.com/SuffolkLITLab/docassemble-ALToolbox/blob/main/docassemble/ALToolbox/data/questions/al_income.yml#L627
I think it'll be the same for ALItemizedJob.
https://github.com/SuffolkLITLab/docassemble-ALToolbox/blob/main/docassemble/ALToolbox/data/questions/al_income.yml#L158
Can't ask for x.employer.name.first because that may not be filled in
[Also, how many of those questions need to be required? For example, does the postal code and phone need to be required?]
I'm gonna mention this in the docs, but the reason for this is mentioned right above the line you linked for ALItemizedJob;
# NOTE: if `is_self_employed`, you need to set this yourself
Because in al_income, we only have access to the ALJob types. We need that job to be associated with some individual for us to actually be able to do anything with is_self_employed and the employer's name, which nothing in this module is.
In the ALAffidavit, the block that solves this is:
sets:
- x.jobs[i].employer.name.first
generic object: ALIndividual
code: |
if x.jobs[i].is_self_employed:
x.jobs[i].employer.address = x.address
x.jobs[i].employer.phone = x.phone_number
x.jobs[i].employer.name = x.name
Definitely not great, but there's not a good way around it IMO. Unless you have any ideas.
Can't ask for x.employer.name.first because that may not be filled in
That's explicitly why we define it right there; because it might not be filled in, we need to trigger a different code block (that needs to be provided by the user of this library) to fill it in when is_self_employed == True
how many of those questions need to be required? For example, does the postal code and phone need to be required?
Those are required for the Massachusetts form, and I'd guess that they'd be required elsewhere as well. Is replacing that screen not a good trade off there? Those attributes aren't defined in the code blocks, unlike the employer name.
I think I didn't describe the problem clearly enough. When a user reaches the screen I named, there's an option to check off that they are self employed. If they select that option, they cannot move on to the next question. Either we can't ask about self employment on the same page as the employer name or we need some other solution.
I'm not sure which file you are using for testing, but again, it's because users need to add a code block that sets x.employer.name.first, and that's noted in the file (and will be documented in the documentation). Adding that code block above in a user's file would fix it.
In case you were trying the demo (which I hadn't done a thorough fix on when updating the rest of the module), I fixed it in #145, if you want to try that out.
We had a quick conversation about this, which can be summarized as:
we should add a code block to AssemblyLine to make it easier for most of our users to have things work out of the box. Something like this:
sets:
  - x.jobs[i].employer.name.first
generic object: ALIndividual
code: |
  if x.jobs[i].is_self_employed:
    x.jobs[i].employer.address = x.address
    x.jobs[i].employer.phone = x.phone_number
    x.jobs[i].employer.name = x.name
if we can, the default code block in the demo, which sets just the employer's first name to "self-employed" and everything else to empty strings, might be a good default to just include in al_income.yml itself. We should see if that is actually a good default to have.
if not, we will document heavily in the file how to override that behavior.
@nonprofittechy @CaroRob we thought y'all might have an opinion on the second point above; specifically, that when we are asking about someone's employer at their job, if they say that they are self employed, the employer's name would be set to "self-employed", and the employer's contact information would be empty. That string, "self-employed", would likely appear verbatim on the form itself. Is this a good default to have, or are there cases where this would be a very bad default to have?
We were thinking this is a good default; our thought was that on a form where we are asking for someone's employment information, that person's information will be on the form somewhere else, and anyone reading the form would immediately look towards that info if in a question about someone's employer's information they had written "self-employed". Would appreciate y'all's opinions though.
This was merged, and fixed in https://github.com/SuffolkLITLab/docassemble-ALToolbox/commit/5f3f799374236384df6699a5ea8ec5bf7fbb90a5.
| gharchive/issue | 2023-01-20T15:17:45 | 2025-04-01T06:37:35.314690 | {
"authors": [
"BryceStevenWilley",
"plocket"
],
"repo": "SuffolkLITLab/docassemble-ALToolbox",
"url": "https://github.com/SuffolkLITLab/docassemble-ALToolbox/issues/141",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2305570689 | adding navbar
Adding new navigation bar
Added changes for new navigation bar into main
| gharchive/pull-request | 2024-05-20T09:41:11 | 2025-04-01T06:37:35.318151 | {
"authors": [
"SujavAc"
],
"repo": "SujavAc/webagencypublic",
"url": "https://github.com/SujavAc/webagencypublic/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1393713606 | 🛑 Zinc 20 is down
In 8b174ce, Zinc 20 (https://zinc20.docking.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Zinc 20 is back up in 07bc9d1.
| gharchive/issue | 2022-10-02T10:19:07 | 2025-04-01T06:37:35.320764 | {
"authors": [
"Sulstice"
],
"repo": "Sulstice/Uptime-Cheminformatics",
"url": "https://github.com/Sulstice/Uptime-Cheminformatics/issues/11315",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1118542259 | 🛑 ChemSpider is down
In e3a55e9, ChemSpider (http://www.chemspider.com/Default.aspx) was down:
HTTP code: 0
Response time: 0 ms
Resolved: ChemSpider is back up in 3ececb2.
| gharchive/issue | 2022-01-30T13:44:28 | 2025-04-01T06:37:35.323165 | {
"authors": [
"Sulstice"
],
"repo": "Sulstice/Uptime-Cheminformatics",
"url": "https://github.com/Sulstice/Uptime-Cheminformatics/issues/1158",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1684434292 | 🛑 Chem Exper is down
In e6623d7, Chem Exper (http://www.chemexper.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Chem Exper is back up in 962e422.
| gharchive/issue | 2023-04-26T07:26:39 | 2025-04-01T06:37:35.325515 | {
"authors": [
"Sulstice"
],
"repo": "Sulstice/Uptime-Cheminformatics",
"url": "https://github.com/Sulstice/Uptime-Cheminformatics/issues/18762",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1706906348 | 🛑 Chem Exper is down
In c40fa5f, Chem Exper (http://www.chemexper.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Chem Exper is back up in d3bc929.
| gharchive/issue | 2023-05-12T04:29:15 | 2025-04-01T06:37:35.327790 | {
"authors": [
"Sulstice"
],
"repo": "Sulstice/Uptime-Cheminformatics",
"url": "https://github.com/Sulstice/Uptime-Cheminformatics/issues/19427",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1711116830 | 🛑 Chem Exper is down
In 0d40457, Chem Exper (http://www.chemexper.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Chem Exper is back up in 3a5edf6.
| gharchive/issue | 2023-05-16T02:37:19 | 2025-04-01T06:37:35.330111 | {
"authors": [
"Sulstice"
],
"repo": "Sulstice/Uptime-Cheminformatics",
"url": "https://github.com/Sulstice/Uptime-Cheminformatics/issues/19592",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1731175702 | 🛑 Chem Exper is down
In be4aa43, Chem Exper (http://www.chemexper.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Chem Exper is back up in 8d07322.
| gharchive/issue | 2023-05-29T19:50:43 | 2025-04-01T06:37:35.332607 | {
"authors": [
"Sulstice"
],
"repo": "Sulstice/Uptime-Cheminformatics",
"url": "https://github.com/Sulstice/Uptime-Cheminformatics/issues/20657",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1766309525 | 🛑 Zinc 20 is down
In 06f635e, Zinc 20 (https://zinc20.docking.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Zinc 20 is back up in 2a7933d.
| gharchive/issue | 2023-06-20T22:51:40 | 2025-04-01T06:37:35.334907 | {
"authors": [
"Sulstice"
],
"repo": "Sulstice/Uptime-Cheminformatics",
"url": "https://github.com/Sulstice/Uptime-Cheminformatics/issues/22608",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1781208550 | 🛑 Zinc 20 is down
In d7bb1bb, Zinc 20 (https://zinc20.docking.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Zinc 20 is back up in 13e92ec.
| gharchive/issue | 2023-06-29T17:02:14 | 2025-04-01T06:37:35.337227 | {
"authors": [
"Sulstice"
],
"repo": "Sulstice/Uptime-Cheminformatics",
"url": "https://github.com/Sulstice/Uptime-Cheminformatics/issues/23392",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1823556348 | 🛑 Chem Exper is down
In b4d3ffb, Chem Exper (http://www.chemexper.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Chem Exper is back up in bc7bdf6.
| gharchive/issue | 2023-07-27T03:44:26 | 2025-04-01T06:37:35.339539 | {
"authors": [
"Sulstice"
],
"repo": "Sulstice/Uptime-Cheminformatics",
"url": "https://github.com/Sulstice/Uptime-Cheminformatics/issues/25050",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1844382422 | 🛑 Chem Exper is down
In bc0ba81, Chem Exper (http://www.chemexper.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Chem Exper is back up in 92e43b6.
| gharchive/issue | 2023-08-10T04:14:33 | 2025-04-01T06:37:35.341968 | {
"authors": [
"Sulstice"
],
"repo": "Sulstice/Uptime-Cheminformatics",
"url": "https://github.com/Sulstice/Uptime-Cheminformatics/issues/26147",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2057706704 | 🛑 Binding Database is down
In dc3b89e, Binding Database (http://www.bindingdb.org/bind/index.jsp) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Binding Database is back up in d2da347 after 8 minutes.
| gharchive/issue | 2023-12-27T19:24:15 | 2025-04-01T06:37:35.344539 | {
"authors": [
"Sulstice"
],
"repo": "Sulstice/Uptime-Cheminformatics",
"url": "https://github.com/Sulstice/Uptime-Cheminformatics/issues/34010",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2061066126 | 🛑 Binding Database is down
In 33184c4, Binding Database (http://www.bindingdb.org/bind/index.jsp) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Binding Database is back up in c1b38d2 after 18 minutes.
| gharchive/issue | 2023-12-31T13:49:30 | 2025-04-01T06:37:35.346959 | {
"authors": [
"Sulstice"
],
"repo": "Sulstice/Uptime-Cheminformatics",
"url": "https://github.com/Sulstice/Uptime-Cheminformatics/issues/34207",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2068960983 | 🛑 Chem Exper is down
In 6b4becb, Chem Exper (http://www.chemexper.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Chem Exper is back up in cf66ecd after 7 minutes.
| gharchive/issue | 2024-01-07T04:33:50 | 2025-04-01T06:37:35.349798 | {
"authors": [
"Sulstice"
],
"repo": "Sulstice/Uptime-Cheminformatics",
"url": "https://github.com/Sulstice/Uptime-Cheminformatics/issues/34559",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |