2067031371
Export to SakeDoko Why / Background There is a service called SakeDoko. You can register sake varieties and where you bought them to build your own map. What / Goal Could we put a SakeDoko registration link on each sake page registered in SAKAZUKI? Including the brand, brand details, and roughly the purchase date would be convenient. How / Approach First, investigate whether SakeDoko accepts URL params. Accessing /posts/new directly just redirects to /mypages, unfortunately. This doesn't look feasible.
gharchive/issue
2024-01-05T09:44:15
2025-04-01T06:39:39.552463
{ "authors": [ "yonta" ], "repo": "momocus/sakazuki", "url": "https://github.com/momocus/sakazuki/issues/685", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1206025584
Redesigned the sake index! after #469 fix #405 The sake index finally has its new design! What was done Gave the sake index a new design Adopted a card layout The layout switches between 3, 2, and 1 columns depending on screen size Fixed all graphs at 4x3 Updated the various RSpec tests Since the structure changed significantly, reworked how testids are assigned Refactored only the changed parts These aren't aligned, and I can't figure out how to align them nicely with Card... Whichever edge I align to, the whitespace stands out... Let's align to the bottom after all. With the new design, the sort feature is needed even less. I don't actually use it anyway.
gharchive/pull-request
2022-04-16T06:02:55
2025-04-01T06:39:39.556624
{ "authors": [ "yonta" ], "repo": "momocus/sakazuki", "url": "https://github.com/momocus/sakazuki/pull/471", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
523689781
Corneal transplant term questions I am having difficulties finding specific definitions and differences in some of these terms. It seems to me that some of them have overlapping meanings or are synonymous, but I haven't found a resource that defines all of them. Corneal transplant corneal patch graft traditional, full-thickness cornea transplant (also known as penetrating keratoplasty, or PK) back layer cornea transplant (also known as endothelial keratoplasty, or EK) Anterior lamellar keratoplasty (ALK) @pnrobinson Do you know the difference? Without more input or a use case, I'll just update a couple of synonyms and close.
gharchive/issue
2019-11-15T21:10:31
2025-04-01T06:39:39.565222
{ "authors": [ "LCCarmody" ], "repo": "monarch-initiative/MAxO", "url": "https://github.com/monarch-initiative/MAxO/issues/118", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
2033376709
Add superclass to pregnancy loss, recurrent, 4 close #6867 @sabrina I'm not totally certain about this. This superclass makes sense per Megan's suggestion (based on what is in the OMIM description), but it seems weird to have a disease that is both a susceptibility and a disease. But it does seem to make sense in this case. I wonder if maybe we should remove the superclass 'fertility disorder'. Looks like I am the source and obviously, I am not an expert. @nicolevasilevsky I agree that it is weird, but we have other examples of terms that are both diseases and susceptibilities. What is weird here is that the disease and the susceptibility are both for "pregnancy loss, recurrent". I would suggest that we keep both these parents (susceptibility and disease), and that we review this in the context of a branch review, when we talk to experts.
gharchive/pull-request
2023-12-08T22:16:29
2025-04-01T06:39:39.569310
{ "authors": [ "nicolevasilevsky", "sabrinatoro" ], "repo": "monarch-initiative/mondo", "url": "https://github.com/monarch-initiative/mondo/pull/7008", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
2012874119
new maxo annotation example This is a new maxo annotation example for PMID 31078652, in a new subfolder under cases called hannah_manual_annotations. Thanks @hannahblau
gharchive/pull-request
2023-11-27T18:40:18
2025-04-01T06:39:39.570387
{ "authors": [ "caufieldjh", "hannahblau" ], "repo": "monarch-initiative/ontogpt-experiments", "url": "https://github.com/monarch-initiative/ontogpt-experiments/pull/4", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
296158104
Fix floating point casting to integer in PHP calculator Fixes #455 @chekalskiy can you please check if this solves the issue for you? Yup, it did. I'll check a bit later. Thank you I've tested and now my test case looks fine. 👍 But shouldn't we add castString() to add(), subtract(), absolute() and mod() methods? No, as they don't involve casting float to string: add: adding two integers will result in an integer anyway subtract: subtracting one integer from another will result in an integer absolute: there is no calculation related to precision mod: there is no calculation related to precision But I've just found other places where the tests break. I added a new test case for running all tests with a Russian UTF-8 locale for the PHP calculator. @frederikbosch see the separate commit with the breaking tests. Solution pushed in a separate commit. Merging this as it fixes the problem, but I'm going to do some refactoring around the number class. I feel that there is too much string casting here and there. Totally agree. In my perception, in version 4 we should use Number everywhere internally. That means only accepting Number or directly doing the conversion on every numeric parameter.
gharchive/pull-request
2018-02-11T03:58:22
2025-04-01T06:39:39.655288
{ "authors": [ "chekalskiy", "frederikbosch", "sagikazarmark" ], "repo": "moneyphp/money", "url": "https://github.com/moneyphp/money/pull/460", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
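The float-to-integer truncation fixed in the thread above is not PHP-specific; any binary floating-point representation has the same pitfall. A minimal Python sketch (illustrative amounts only, not moneyphp's API):

```python
from decimal import Decimal

# 4.35 is stored as roughly 4.34999999999999964, so converting
# "4.35 units" to integer minor units by truncation loses a cent.
amount = 4.35
naive = int(amount * 100)            # truncates 434.999... down to 434
safe = int(round(amount * 100))      # round first, then cast
exact = int(Decimal("4.35") * 100)   # or avoid binary floats entirely

print(naive, safe, exact)            # -> 434 435 435
```

The thread's closing idea of using a Number type everywhere internally corresponds to the Decimal-style approach in the last conversion.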
1197152589
Ability to parse UUID https://github.com/mongkok/fastapi-debug-toolbar/blob/3552b0bbb8e1a86a4f5eaaf214e6916d52c941ef/debug_toolbar/panels/sql.py#L120-L121 Using a UUID field raises an exception *** TypeError: Object of type UUID is not JSON serializable I suggest adding UUID serialization by casting to str if isinstance(obj, UUID): return str(obj) Or allow setting an encoder on the json.dumps method For anyone finding this issue, I bumped into the same situation and managed to find a workaround using FastAPI's jsonable_encoder: from debug_toolbar.panels.sqlalchemy import SQLAlchemyPanel as Base from fastapi.encoders import jsonable_encoder class SQLAlchemyPanel(Base): def after_execute(self, *args) -> None: # type: ignore # HACK: base SQL panel calls json.dumps(parameters) at some point. # Ensure values such as UUIDs can be dumped. parameters = args[3] args = (*args[:3], jsonable_encoder(parameters), *args[4:]) return super().after_execute(*args) It can then be used by passing panels=["path.to.panels.SQLAlchemyPanel"]. Hey @deby22 , the issue is fixed using jsonable_encoder() as suggested by @florimondmanca , see #23 and v0.3.0.
gharchive/issue
2022-04-08T10:38:41
2025-04-01T06:39:39.659077
{ "authors": [ "deby22", "florimondmanca", "mongkok" ], "repo": "mongkok/fastapi-debug-toolbar", "url": "https://github.com/mongkok/fastapi-debug-toolbar/issues/15", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
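The TypeError above reproduces with plain json.dumps. A minimal sketch of the str-cast fix the reporter suggests, using the standard default= hook (a standalone illustration, not the toolbar's actual code path):

```python
import json
import uuid

def default(obj):
    # json.dumps has no built-in handler for UUID; cast it to str,
    # as suggested in the issue above
    if isinstance(obj, uuid.UUID):
        return str(obj)
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

params = {"id": uuid.UUID("12345678-1234-5678-1234-567812345678")}
encoded = json.dumps(params, default=default)
print(encoded)  # -> {"id": "12345678-1234-5678-1234-567812345678"}
```

FastAPI's jsonable_encoder, used in the merged fix, generalizes this idea to many non-JSON-native types at once.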
2365813168
Does not work with latest models It does not produce a response with the text-embedding-3-small and gpt-4o combination can you provide more information about the error that you're getting so we can help debug? as a first step, you can try to lower the minScore in the FindContentFunc. the .9 that we have as a default in the quick start works well for ada-002 but is often too high for text-embedding-3-small. const findContent = makeDefaultFindContent({ embedder, store: embeddedContentStore, findNearestNeighborsOptions: { k: 5, path: "embedding", indexName: VECTOR_SEARCH_INDEX_NAME, // Start low to make sure all is working, and work your way up to a higher score if it suits. minScore: 0.1, }, }); Followed your suggestion to lower the minScore to 0.1 and that worked. Thank you!
gharchive/issue
2024-06-21T06:25:40
2025-04-01T06:39:39.663396
{ "authors": [ "hasaketa", "mongodben" ], "repo": "mongodb/chatbot", "url": "https://github.com/mongodb/chatbot/issues/441", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
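A rough pure-Python sketch of why the minScore default matters: vector search ranks candidates by cosine similarity and discards anything below the cutoff, and different embedding models put their scores in different ranges. The vectors below are made-up toy values, not real model output:

```python
import math

def cosine(a, b):
    # cosine similarity of two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def find_content(query, docs, min_score):
    # rank documents by similarity, then drop everything under the cutoff
    scored = sorted(((cosine(query, d), i) for i, d in enumerate(docs)), reverse=True)
    return [i for score, i in scored if score >= min_score]

query = [1.0, 0.2, 0.1]
docs = [[0.9, 0.4, 0.0], [0.1, 1.0, 0.9], [1.0, 0.1, 0.2]]
print(find_content(query, docs, 0.9))  # -> [2, 0]
print(find_content(query, docs, 0.1))  # -> [2, 0, 1]  (relaxed cutoff keeps more)
```

A model whose scores cluster lower than ada-002's can turn a strict 0.9 cutoff into an empty result set, which matches the "no response" symptom above.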
990139724
test: remove tests for deprecated command Proposed changes Remove tests for deprecated commands; these commands have been deprecated for a while and are starting to flake, so it's better to remove them Checklist [x] I have added tests that prove my fix is effective or that my feature works [x] I have added any necessary documentation (if appropriate) [x] I have updated e2e/E2E-TESTS.md (if a new command or e2e test has been added) [x] I have run make fmt and formatted my code is there an entry that needs to be removed/changed in E2E-TESTS.md as a result of this? good call, I deleted the entries as we don't even document these commands any more, we should plan some time to start deleting some of the deprecated stuff
gharchive/pull-request
2021-09-07T16:08:42
2025-04-01T06:39:39.688208
{ "authors": [ "gssbzn" ], "repo": "mongodb/mongocli", "url": "https://github.com/mongodb/mongocli/pull/826", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
318742209
Cannot insert a field with '.' ('dot') in field name Summary: Cannot insert any Map that contains e.g. mail (with dot) as a key. Tested on Morphia 1.3.2. Steps to reproduce: Create class containing map, e.g.: class MyObjectWithMap { Map<String, String> map = Maps.newHashMap(); } Try to save object: MyObjectWithMap obj = new MyObjectWithMap(); obj.map.put("keyOk", "OK"); getDs().save(obj); // Works well obj.map.put("key.notOK", "notOK"); getDs().save(obj); // Throws exception See result: java.lang.IllegalArgumentException: Invalid BSON field name key.notOK at org.bson.AbstractBsonWriter.writeName(AbstractBsonWriter.java:532) at com.mongodb.DBObjectCodec.encodeMap(DBObjectCodec.java:221) at com.mongodb.DBObjectCodec.writeValue(DBObjectCodec.java:198) at com.mongodb.DBObjectCodec.encode(DBObjectCodec.java:130) at com.mongodb.DBObjectCodec.encode(DBObjectCodec.java:61) at org.bson.codecs.BsonDocumentWrapperCodec.encode(BsonDocumentWrapperCodec.java:63) at org.bson.codecs.BsonDocumentWrapperCodec.encode(BsonDocumentWrapperCodec.java:29) at com.mongodb.operation.BulkWriteBatch$WriteRequestEncoder.encode(BulkWriteBatch.java:392) (... cut ...) Additional info: MongoDB docs ( https://docs.mongodb.com/manual/reference/limits/#Restrictions-on-Field-Names ) allow the "." char. Also, adding a document containing a field with "." from the Mongo shell works well. This is the same problem as #827, but this time MongoDB (at least from 3.6) allows the "." char in field names. Updated the title and description to reflect reality - it is again about the "." (dot) char... Ran into the same issue right now. It's 2018 and neither MongoDB nor Morphia is able to store a darn dot in the key. 💢👿 😡 This complaint is actually coming from the driver and not morphia. morphia just passes the name down to the driver which is rejecting the name. Consider filing a bug there. You might also try a new version of the java driver and see if that behavior persists. @evanchooly You're right! Sorry for messing up and filing it here. Closing it and going to complain on the Mongo bug tracker. ;-) For everybody running into the same problem - there is an issue on the Mongo tracker: https://jira.mongodb.org/browse/JAVA-2810
gharchive/issue
2018-04-29T19:22:48
2025-04-01T06:39:39.697223
{ "authors": [ "Shad0wCore", "elwin013", "evanchooly" ], "repo": "mongodb/morphia", "url": "https://github.com/mongodb/morphia/issues/1244", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
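The restriction the driver enforces above exists because MongoDB query paths use '.' to traverse nested documents, which makes a literal dotted key ambiguous. A small Python sketch of that ambiguity using plain dicts (no driver involved):

```python
def resolve(doc, path):
    # Mongo-style dotted paths walk into nested documents
    cur = doc
    for part in path.split("."):
        cur = cur[part]
    return cur

nested = {"key": {"notOK": "nested value"}}
flat = {"key.notOK": "flat value"}

print(resolve(nested, "key.notOK"))  # -> nested value
try:
    resolve(flat, "key.notOK")       # traversal cannot see the literal dotted key
except KeyError as e:
    missing = str(e)
    print("KeyError:", missing)
```

Even on servers that accept dotted field names, queries and updates addressing those fields stay awkward for exactly this reason.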
2767038
Mongoid support for non-rack applications I'm using Mongoid in a non-rack application, and after updating to 2.4.0 I started receiving the error "Mongoid attempted to find the appropriate environment but no Rails.env, Sinatra::Base.environment, or RACK_ENV could be found" as soon as I load the configuration. Is Mongoid going to support only rack-based applications? Sorry about that... Can you just use RACK_ENV for now and then I'll get this fixed for 2.4.1? How exactly are you determining what environment you are running under? I'm not determining it, I know it's a bare eventmachine ... the app is not running within any external environment, it's self-contained.
gharchive/issue
2012-01-09T10:18:27
2025-04-01T06:39:39.699786
{ "authors": [ "danielemilan", "durran" ], "repo": "mongoid/mongoid", "url": "https://github.com/mongoid/mongoid/issues/1568", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1315640699
sub-document virtuals in nested arrays don't get attached when specified Do you want to request a feature or report a bug? Bug What is the current behaviour? The virtuals of sub-documents in nested arrays aren't attached to the result when I specify the virtual paths { virtuals: [...] }. It works fine when I lean the query with { virtuals: true } If the current behaviour is a bug, please provide the steps to reproduce. import mongoose from "mongoose"; import { mongooseLeanVirtuals } from "mongoose-lean-virtuals"; mongoose.plugin(mongooseLeanVirtuals); const NameSchema = new mongoose.Schema({ first: { type: String, required: true }, last: String, }); NameSchema.virtual("full").get(function () { return `${this.first} ${this.last ?? ""}`.trim(); }); const ChildSchema = new mongoose.Schema({ age: { type: Number, required: true }, name: { type: NameSchema, required: true }, }, { _id: false }); const ParentModel = mongoose.model( "Parent", new mongoose.Schema({ name: { type: NameSchema, required: true }, child: ChildSchema, nested: new mongoose.Schema({ children: [ChildSchema], }), }) ); async function run() { await mongoose.connect("..."); await ParentModel.create({ name: { first: "Homer", last: "Simpson" }, child: { age: 10, name: { first: "Bart", last: "Simpson" } }, nested: { children: [ { age: 6, name: { first: "Lisa", last: "Simpson" } }, { age: 3, name: { first: "Baby" } }, ], }, }); const result = await ParentModel.find({}) .populate("child") .populate("nested.children") .lean({ virtuals: ["name.full", "child.name.full", "children.name.full"], }); assert.equal(result.name.full, "Homer Simpson"); // Pass assert.equal(result.child.name.full, "Bart Simpson"); // Pass assert.equal(result.children[0].name.full, "Lisa Simpson"); // Fail assert.equal(result.children[1].name.full, "Baby"); // Fail } run().catch(console.error) What is the expected behaviour? For the virtuals of the sub-documents in nested arrays to be attached. 
What are the versions of Node.js, mongoose-lean-getters, and Mongoose are you are using? Note that "latest" is not a version. Package Version mongoose 6.4.3 mongoose-lean-getters N/A Node.js 18.6.0 You need to specify nested.children.name.full. The script you provided fails, but the below script works: const mongoose = require('mongoose'); const mongooseLeanVirtuals = require('mongoose-lean-virtuals'); const assert = require('assert'); mongoose.plugin(mongooseLeanVirtuals); const NameSchema = new mongoose.Schema({ first: { type: String, required: true }, last: String, }); NameSchema.virtual("full").get(function () { return `${this.first} ${this.last ?? ""}`.trim(); }); const ChildSchema = new mongoose.Schema({ age: { type: Number, required: true }, name: { type: NameSchema, required: true }, }, { _id: false }); const ParentModel = mongoose.model( "Parent", new mongoose.Schema({ name: { type: NameSchema, required: true }, child: ChildSchema, nested: new mongoose.Schema({ children: [ChildSchema], }), }) ); async function run() { await mongoose.connect("mongodb://localhost:27017/test"); await mongoose.connection.dropDatabase(); const { _id } = await ParentModel.create({ name: { first: "Homer", last: "Simpson" }, child: { age: 10, name: { first: "Bart", last: "Simpson" } }, nested: { children: [ { age: 6, name: { first: "Lisa", last: "Simpson" } }, { age: 3, name: { first: "Baby" } }, ], }, }); const result = await ParentModel.findById(_id) .populate("child") .populate("nested.children") .lean({ virtuals: ["name.full", "child.name.full", "nested.children.name.full"], // <-- note the 'nested.' 
}); assert.equal(result.name.full, "Homer Simpson"); // Pass assert.equal(result.child.name.full, "Bart Simpson"); // Pass assert.equal(result.nested.children[0].name.full, "Lisa Simpson"); // Pass assert.equal(result.nested.children[1].name.full, "Baby"); // Pass console.log('Done'); } run().catch(console.error) Also works fine if you remove the 'nested' Hey @vkarpov15, forgive my invalid script. I'll try to explain my actual use case and give an excuse for why my script had errors. I have a model with 2 discriminator schemas applied to it. The ParentModel.child and ParentModel.nested... was meant to show the field I'm trying to populate between them. They both point to the same model it just doesn't work with virtuals on the array of sub-documents. So I created a repro script to use here with populating the fields, and it didn't work. So I tried to check if it affected other schema definitions as well. That's why my original script has .populate("...") calls when the fields don't need it and the values aren't correctly checked. This is another script that I'm pretty sure will show my issue. Thanks again for your putting up with me and all your efforts across all the mongoose packages. import assert from "node:assert"; import mongoose from "mongoose"; import { mongooseLeanVirtuals } from "mongoose-lean-virtuals"; async function run() { mongoose.plugin(mongooseLeanVirtuals); await mongoose.connect("..."); const NameSchema = new mongoose.Schema({ first: { type: String, required: true }, last: String, }); NameSchema.virtual("full").get(function () { return `${this.first} ${this.last ?? 
""}`.trim(); }); const ChildModel = mongoose.model( "Child", new mongoose.Schema({ age: { type: Number, required: true }, name: { type: NameSchema, required: true }, }) ); const ParentModel = mongoose.model( "Parent", new mongoose.Schema({ name: { type: NameSchema, required: true }, child: { type: mongoose.Types.ObjectId, ref: "Child" }, nested: { type: [new mongoose.Schema({ item: { type: mongoose.Types.ObjectId, ref: "Child" } })], }, }) ); const [child_1, child_2, child_3] = await ChildModel.create([ { age: 10, name: { first: "Bart", last: "Simpson" } }, { age: 6, name: { first: "Lisa", last: "Simpson" } }, { age: 3, name: { first: "Baby" } }, ]); await ParentModel.create({ name: { first: "Homer", last: "Simpson" }, child: child_1._id, nested: [{ item: child_3._id }, { item: child_2._id }], }); const result = await ParentModel.findOne({}) .populate("child") .populate("nested.item") .orFail(new Error("DOC!")) .lean({ virtuals: ["name.full", "child.name.full", "nested.item.name.full"] }); assert.equal(result.name.full, "Homer Simpson"); // Pass assert.equal(result.child?.name.full, "Bart Simpson"); // Pass assert.equal(result.nested[0].item?.name.full, "Baby"); // Fail assert.equal(result.nested[1].item?.name.full, "Lisa Simpson"); // Fail } run(); Also, did you mean to close #36? The PR you linked didn't have an effect because you used the Fix keyword on #37 We took a closer look and this proves to be tricky to implement in general with how Mongoose populate works. Without hooking more closely into Mongoose populate, handling cases like discriminators isn't really feasible. However, there is a simple workaround: const result = await ParentModel.findOne({}) .populate("child") .populate({ path: "nested.item", options: { lean: { virtuals: ['name.full'] } } }) // <-- add the `name.full` virtual here .orFail(new Error("DOC!")) .lean({ virtuals: ["name.full", "child.name.full"] }); // <-- instead of here
gharchive/issue
2022-07-23T11:51:46
2025-04-01T06:39:39.711945
{ "authors": [ "iammola", "vkarpov15" ], "repo": "mongoosejs/mongoose-lean-virtuals", "url": "https://github.com/mongoosejs/mongoose-lean-virtuals/issues/62", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2092221152
🛑 Lobembe is down In cc33316, Lobembe (https://lobembe.mongulu.cm) was down: HTTP code: 0 Response time: 0 ms Resolved: Lobembe is back up in d2175b4 after 15 minutes.
gharchive/issue
2024-01-20T19:56:38
2025-04-01T06:39:39.714949
{ "authors": [ "fabiolatagne97" ], "repo": "mongulu-cm/uptime", "url": "https://github.com/mongulu-cm/uptime/issues/610", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1236142335
⚠️ Quirk Club Argentina (API endpoint) has degraded performance In a8cb0e6, Quirk Club Argentina (API endpoint) (https://us-central1-quirkclub-dev.cloudfunctions.net/api/check/api) experienced degraded performance: HTTP code: 200 Response time: 7418 ms Resolved: Quirk Club Argentina (API endpoint) performance has improved in 62cf923.
gharchive/issue
2022-05-14T23:30:58
2025-04-01T06:39:39.719338
{ "authors": [ "monitoring-apps" ], "repo": "monitoring-apps/qc.app", "url": "https://github.com/monitoring-apps/qc.app/issues/72", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2161904450
Update silencer-plugin to 1.7.16 in series/4.x About this PR 📦 Updates com.github.ghik:silencer-plugin from 1.7.8 to 1.7.16 📜 GitHub Release Notes - Version Diff Usage ✅ Please merge! I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! ⚙ Adjust future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "com.github.ghik", artifactId = "silencer-plugin" } ] Or, add this to slow down future updates of this dependency: dependencyOverrides = [{ pullRequests = { frequency = "30 days" }, dependency = { groupId = "com.github.ghik", artifactId = "silencer-plugin" } }] labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1 Superseded by #1838.
gharchive/pull-request
2024-02-29T18:54:56
2025-04-01T06:39:39.733664
{ "authors": [ "scala-steward" ], "repo": "monix/monix", "url": "https://github.com/monix/monix/pull/1817", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
682416222
Usage with Scala version 2.13 When running with Scala 2.13, it fails with java.lang.ClassNotFoundException: scala.Serializable when using the shade dependency "io.monix" % "shade_2.12" % "1.10.0". It works if we use Scala 2.12. Is there a way to use this with Scala 2.13, or could the library be compiled for 2.13? This'll add support for Scala 2.13 - https://github.com/monix/shade/pull/67
gharchive/issue
2020-08-20T05:12:26
2025-04-01T06:39:39.735774
{ "authors": [ "kr-pawan", "xBATx" ], "repo": "monix/shade", "url": "https://github.com/monix/shade/issues/66", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
518569918
Problem generating correct Entrypoint for method. I'm using templates on Linux, using the CLI.exe to generate C# and the correct -Symbols.cpp files. I added the following to the CppSharp.Generator#Setup(Driver) method: driver.Options.GenerateClassTemplates = true; My simple class: #include <string> class Example { public: Example() {}; void print(std::string &x); }; I use the following command: mono --debug ~/src/CppSharp/build/gmake/lib/Release_x64/CppSharp.CLI.exe -ax64 -o=build/cppsharp Example.hpp Example.cpp The problem is that the following lines get generated in the cppsharp.cs file: [SuppressUnmanagedCodeSecurity] [DllImport("cppsharp", CallingConvention = global::System.Runtime.InteropServices.CallingConvention.Cdecl, EntryPoint="_ZN7Example5printERSs")] internal static extern void print(global::System.IntPtr __instance, global::System.IntPtr x); The actual entry point to this function in the libcppsharp.so file is _ZN7Example5printERNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE. Changing the _ZN7Example5printERSs in the cppsharp.cs file to the above value rectifies this problem. Is there a way to get this entry point to generate correctly? @ddobrev Any idea about this? I imagine finding the mangled name has to be done by types somewhere. I imagine "RSs" means "Reference to Standard string", but as @ddobrev mentioned, std::string is defined as std::basic_string<char, std::allocator<char>> on all platforms. I added another std::string parameter: print(std::string &x, std::string &y); and the corresponding mangled names are: _ZN7Example5printERSsS0_ and the correct one is: _ZN7Example5printERNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES6_ . @polar If you run nm on the native library, does _ZN7Example5printERSsS0_ show up as a symbol? No, it does not.
0000000000000e11 T Example_Example 0000000000000e65 T Example__Example 0000000000000e3e T Example_Example___1__S_Example 0000000000000daa T _ZN7Example5printERNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES6_ 0000000000000f20 W _ZN7ExampleaSEOS_ 0000000000000f0e W _ZN7ExampleaSERKS_ 0000000000000f02 W _ZN7ExampleC1Ev 0000000000000f02 W _ZN7ExampleC2Ev I don't really know why. It seems that std::string is supposed to mangle to "Ss". Maybe I have a bad option in compiling these? I have compiled Example.cpp with c++, g++, and clang++ with absolutely no options, and I get the same complicated mangled name for print. Never mind. I needed "--c++11" in the flags, which got the proper ABI configured for the parser.
gharchive/issue
2019-11-06T16:04:44
2025-04-01T06:39:39.746998
{ "authors": [ "polar", "tritao" ], "repo": "mono/CppSharp", "url": "https://github.com/mono/CppSharp/issues/1260", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
563377984
[FEATURE] Adding metadata properties to JPEG images when using SKPixmap.Encode() Is your feature request related to a problem? Please describe. JPEG images and other image file types can have additional metadata properties that can enhance how the picture is interpreted by renderers. An example is the "/xmp/{wstr=http://ns.google.com/photos/1.0/panorama/}:ProjectionType" = "equirectangular" property which identifies an image as a 360 panoramic projection image. You can easily use the exiftool app https://exiftool.org/gui/ to add the metadata but I need to add it in the code of my app. Describe the solution you'd like Add a dictionary property to SKJpegEncoderOptions to enable setting metadata properties when encoding JPEG images. Describe alternatives you've considered None known. Additional context Any thoughts around this? PNG format has metadata as well which should be writable.
gharchive/issue
2020-02-11T17:56:10
2025-04-01T06:39:39.750557
{ "authors": [ "BenMcLean", "andreasbrostencab", "mscherotter" ], "repo": "mono/SkiaSharp", "url": "https://github.com/mono/SkiaSharp/issues/1139", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2474483750
AttributeError: token2idx when using KobertTokenizer in a Colab environment In the parent-class constructor call super().__init__(), the get_vocab() method is invoked, but token2idx has not been initialized yet, which causes the error. Simply swapping the order - initializing token2idx first and then running the parent constructor - resolves it. It's an easy fix, and in my tests training also works fine. If you need it urgently, you can use the tokenization_kobert.py file from Pull Request #12.
gharchive/issue
2024-08-20T00:18:56
2025-04-01T06:39:39.789172
{ "authors": [ "devsosin" ], "repo": "monologg/KoBERT-Transformers", "url": "https://github.com/monologg/KoBERT-Transformers/issues/14", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
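The bug described above is a general Python pattern, not specific to transformers: a base-class __init__ calls an overridable method before the subclass has set the attribute that method needs. A minimal sketch of the failure and the reordering fix (class names are hypothetical):

```python
class BaseTokenizer:
    def __init__(self):
        # mirrors the parent constructor calling get_vocab() internally
        self.vocab_size = len(self.get_vocab())

    def get_vocab(self):
        return {}

class BrokenTokenizer(BaseTokenizer):
    def __init__(self):
        super().__init__()                          # get_vocab() runs too early...
        self.token2idx = {"[PAD]": 0, "[CLS]": 1}

    def get_vocab(self):
        return self.token2idx                       # ...so this attribute is missing

class FixedTokenizer(BaseTokenizer):
    def __init__(self):
        self.token2idx = {"[PAD]": 0, "[CLS]": 1}   # initialize first
        super().__init__()

    def get_vocab(self):
        return self.token2idx

try:
    BrokenTokenizer()
except AttributeError as e:
    err = str(e)
    print("broken:", err)

print("fixed vocab size:", FixedTokenizer().vocab_size)  # -> 2
```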
108441641
Unexpected start of next frame The connection between the XBee client and the TCP server is not stable. Before the TCP connection drops, the warning "unexpected start of next frame" is printed. Byte processing has been fixed; the end of the frame will no longer be missed.
gharchive/issue
2015-09-26T05:26:40
2025-04-01T06:39:39.794542
{ "authors": [ "monstrenyatko" ], "repo": "monstrenyatko/butler-xbee-gateway", "url": "https://github.com/monstrenyatko/butler-xbee-gateway/issues/10", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
105864000
ReferenceError cordova is not defined Tried to follow the instructions given here: https://docs.moodle.org/dev/Setting_up_your_development_environment_for_Moodle_Mobile_2 but I keep getting "cordova is not defined" when trying to connect to Moodle. Hi, we've identified this error and we've opened an issue to fix it: https://tracker.moodle.org/browse/MOBILE-1219 Thanks, Dani Hello, I tried to follow the instructions, but I have this problem, please help me. Thanks :) Closing this issue, please follow up in the tracker
gharchive/issue
2015-09-10T17:59:25
2025-04-01T06:39:39.798054
{ "authors": [ "carloscallahuayapa", "dpaloucva", "hitteshahuja", "jleyva" ], "repo": "moodlehq/moodlemobile2", "url": "https://github.com/moodlehq/moodlemobile2/issues/217", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
659615570
Package implement? Hey, could you implement a package system so we can use this program in our software? chrome, err := Recovery(browser, "text.json", history, password, cookie) ok, I will add this feature today or tomorrow. Thank you, would it be possible to make it so you can choose only the Windows functions, so the build is smaller? @Dontmindmes compiling for Windows only seems impossible! And this tool is best used as a command-line tool. I've added two new interface structs. The functions in these two interfaces will hopefully help you to use them in your own projects. type Browser interface { InitSecretKey() error GetName() string GetSecretKey() []byte GetAllItems(itemName string) ([]common.Item, error) } type Item interface { ChromeParse(key []byte) error FirefoxParse() error OutPut(format, browser, dir string) error CopyItem() error Release() error }
gharchive/issue
2020-07-17T20:33:54
2025-04-01T06:39:39.815619
{ "authors": [ "Dontmindmes", "moonD4rk" ], "repo": "moonD4rk/HackBrowserData", "url": "https://github.com/moonD4rk/HackBrowserData/issues/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1738767552
🛑 THORN Website is down In 6b769a6, THORN Website (https://www.thorn.so) was down: HTTP code: 0 Response time: 0 ms Resolved: THORN Website is back up in cd24f3f.
gharchive/issue
2023-06-02T20:13:58
2025-04-01T06:39:39.818586
{ "authors": [ "Alecyrus" ], "repo": "mooncyan/Status", "url": "https://github.com/mooncyan/Status/issues/116", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
244160864
Project name Some words to play with: learning education resonance engineering computation frequency vibration oscillation damping natural frequency phase shift degree of freedom mass spring damper mode shape mode eigen* stability Going with resonance.
gharchive/issue
2017-07-19T20:16:13
2025-04-01T06:39:39.862515
{ "authors": [ "moorepants" ], "repo": "moorepants/resonance", "url": "https://github.com/moorepants/resonance/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1889993059
🛑 plex server is down In 12e83f9, plex server ($PLEX_URL/identity) was down: HTTP code: 0 Response time: 0 ms Resolved: plex server is back up in 8175106 after 5 minutes.
gharchive/issue
2023-09-11T08:45:27
2025-04-01T06:39:39.864720
{ "authors": [ "mooseburgr" ], "repo": "mooseburgr/kmj-wtf-upptime", "url": "https://github.com/mooseburgr/kmj-wtf-upptime/issues/239", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1991380358
Reduce Jaro term proximity We can roughly assume terms should match within a few positions relative to each other. Queries should contain as many terms as possible and ideally would have rough ordering similar to indexed terms. Codecov Report Merging #511 (53d1fd2) into master (766a672) will decrease coverage by 0.01%. Report is 1 commit behind head on master. The diff coverage is 0.00%. :exclamation: Current head 53d1fd2 differs from pull request most recent head 489fc9e. Consider uploading reports for the commit 489fc9e to get more accurate results :exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality. Additional details and impacted files @@ Coverage Diff @@ ## master #511 +/- ## ========================================= - Coverage 8.27% 8.26% -0.01% ========================================= Files 44 44 Lines 3492 3496 +4 ========================================= Hits 289 289 - Misses 3180 3184 +4 Partials 23 23
gharchive/pull-request
2023-11-13T19:58:18
2025-04-01T06:39:39.868556
{ "authors": [ "adamdecaf", "codecov-commenter" ], "repo": "moov-io/watchman", "url": "https://github.com/moov-io/watchman/pull/511", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
906272781
Support macOS 11 Big Sur Binary Downloads There was some compatibility code for Golang versions less than 1.4.3 that broke installs on Big Sur. Prior to 1.4.3, there were two versions for osx10.8 and osx10.6. 1.4.3 introduced a unified amd64 version. When OS X changed the major version from 10 to 11, that logic broke. This patch moves that compatibility logic into the "less than 1.4.3" check, and now checks both the major and minor version of macOS. I tried a different patch that would download go1.4.2.darwin-amd64-osx10.8.pkg on Big Sur, and it installed successfully, but go version printed a stack trace, so I adjusted the patch to print Binary Go unavailable for this platform. I tested go1.4.2, and it installs but also crashes. go1.5 installed and passed my go version test. We could write more code to protect Big Sur users from this, but I doubt many developers are still trying to use a Golang from 2015 on Big Sur, and I don't think my go version test is a comprehensive compatibility test anyway. PS: I included a whitespace-only commit, as a few lines used spaces for indentation while the overwhelming majority of the file used tabs. I just noticed that https://github.com/moovweb/gvm/pull/364 is largely the same patch. Moving the macOS-specific checks into the 1.4.x check is slightly better (since they only matter if that check passes), but it's a very minor optimization.
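The selection logic described above can be sketched like this (a hypothetical Python translation for illustration; the real gvm logic is a bash script, and `darwin_pkg_suffix` is a made-up name). Comparing version components as integer tuples is what keeps macOS 11 from being treated like "10.x":

```python
def darwin_pkg_suffix(go_version: tuple, macos: tuple) -> str:
    """Pick the darwin package flavor for a given Go and macOS version.

    Go >= 1.4.3 ships a single unified darwin-amd64 build, so the
    osx10.6/osx10.8 split only matters inside the 'less than 1.4.3' branch.
    """
    if go_version >= (1, 4, 3):
        return "darwin-amd64"
    # Older Go: compare (major, minor) as numbers. Checking only the minor
    # version after assuming major == 10 is the bug that Big Sur exposed.
    if macos >= (10, 8):
        return "darwin-amd64-osx10.8"
    return "darwin-amd64-osx10.6"
```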
gharchive/pull-request
2021-05-29T01:12:28
2025-04-01T06:39:39.874027
{ "authors": [ "jeremy-ebler-vineti" ], "repo": "moovweb/gvm", "url": "https://github.com/moovweb/gvm/pull/380", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
128832238
get toplist not working, len of list is empty
toplist = self.session.get_toplist(type=spotify.ToplistType.TRACKS, region='US')
toplist.load()
print len(toplist.tracks) -> 0
print len(toplist.artists) -> 0
How can I fix this issue? Thanks!
Toplists seem to work as expected:
In [1]: import spotify
In [2]: session = spotify.Session()
In [3]: session.login('user', 'secret')
In [4]: loop = spotify.EventLoop(session)
In [5]: loop.start()
In [6]: toplist = session.get_toplist(type=spotify.ToplistType.TRACKS, region='US')
In [7]: toplist.load()
Out[7]: Toplist(type=<ToplistType.TRACKS: 2>, region='US', canonical_username=None)
In [8]: len(toplist.tracks)
Out[8]: 100
In [9]: len(toplist.artists)
Out[9]: 0
Closing, as I assume you haven't been waiting for a solution for 3.5 years.
gharchive/issue
2016-01-26T14:17:59
2025-04-01T06:39:39.886421
{ "authors": [ "hoangphuongcs", "jodal" ], "repo": "mopidy/pyspotify", "url": "https://github.com/mopidy/pyspotify/issues/181", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
267406894
recording never starts but screenshot works Honor 6X - Android 7 Mac OS High Sierra Version 1.65 Just installed the latest version and recording never starts. It displays "recording finished" as soon as the video recording button is pressed. Same here: Nokia 3 - Android 7.1.1 macOS Sierra 10.12.6 AndroidTool 1.66 (1) However sometimes clicking the video button seemingly does nothing (no indication that recording started), and sometimes it just says "recording finished" immediately, as for @redhatjobin. No video files are created in the screen recordings either (screenshots work though). Same for me Samsung Galaxy Note 10.1 2010 Android 4.1 Mac OS X Screenshot is working fine. Screencast just turns into a red square and then nothing happens. Below is the output when I start androidtool by hand from my terminal window:
remote object '/sdcard/capture.mp4' does not exist
mv: rename capture.mp4 to p4noteltexdJZO54Kcourtox03242018145957.mp4: No such file or directory
ffmpeg version 2.6.1 Copyright (c) 2000-2015 the FFmpeg developers
built with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
configuration: --prefix=/Volumes/Ramdisk/sw --enable-gpl --enable-pthreads --enable-version3 --enable-libspeex --enable-libvpx --disable-decoder=libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-avfilter --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-filters --enable-libgsm --enable-libvidstab --enable-libx265 --disable-doc --arch=x86_64 --enable-runtime-cpudetect
libavutil 54. 20.100 / 54. 20.100
libavcodec 56. 26.100 / 56. 26.100
libavformat 56. 25.101 / 56. 25.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 11.102 / 5. 11.102
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
gharchive/issue
2017-10-21T19:21:06
2025-04-01T06:39:39.996527
{ "authors": [ "courtox", "emmiep", "redhatjobin" ], "repo": "mortenjust/androidtool-mac", "url": "https://github.com/mortenjust/androidtool-mac/issues/134", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
305021583
Added --quiet option. Suppresses most messages. Test failures will still print and the 'Total: Passed X/Y tests' line will still print. This is a great idea. The only issue is that with test suites with only one describe, like the example project, the total isn't printed, because it would be wasteful to print both vector: Passed 6/7 tests and Total: Passed 6/7 tests. I'll merge this PR, and then fix that by making it always print the total when --quiet is provided. This is a great idea. The only issue is that when there's only one describe, which is the case in the example project, no total is printed, because it would look weird to print both vector: Passed 6/7 tests and Total: Passed 6/7 tests. I'll make it always print the total when the --quiet option is passed. https://github.com/mortie/snow/commit/13feeb0f129ad0754095af2403348a0392e47e8b Thanks!
gharchive/pull-request
2018-03-14T04:45:29
2025-04-01T06:39:40.000096
{ "authors": [ "mattsmith24", "mortie" ], "repo": "mortie/snow", "url": "https://github.com/mortie/snow/pull/10", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
740266005
Mosaic doesn't expand when resizing the terminal window and it panics OS: mac-os terminal: alacritty When I run Mosaic inside Alacritty, I get into a situation where the Mosaic pane size is stuck (and it sometimes panics). Reconstruct:
Open Alacritty so it does not take the full screen area.
Run mosaic.
Resize Alacritty to full screen.
Notice that Mosaic is still at roughly its original size and doesn't reflow.
Notice that vim didn't stretch after the resize.
I think this is: https://github.com/mosaic-org/mosaic/issues/34? I think you are correct.
gharchive/issue
2020-11-10T21:47:19
2025-04-01T06:39:40.004104
{ "authors": [ "imsnif", "qballer" ], "repo": "mosaic-org/mosaic", "url": "https://github.com/mosaic-org/mosaic/issues/38", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1586766955
Make wandb checkpoint logging compatible with wandb model registry What does this PR do? This PR modifies wandb_logger.py to instantiate model checkpoints as type "model" instead of ".pt", to allow compatibility with W&B's new model registry feature, which MosaicML and WandB are cross-promoting with a demo/webinar next week. Before submitting [x] Have you read the contributor guidelines? [ ] Is this change a documentation change or typo fix? If so, skip the rest of this checklist. [ ] Was this change discussed/approved in a GitHub issue first? It is much more likely to be merged if so. [x] Did you update any related docs and document your change? [x] Did you update any related tests and add any new tests related to your change? (see testing) [x] Did you run the tests locally to make sure they pass? [x] Did you run pre-commit on your change? (see the pre-commit section of prerequisites) @eracah LGTM. Just curious: do you have a link to a wandb run that used this? Why yes I do
gharchive/pull-request
2023-02-15T23:52:31
2025-04-01T06:39:40.009070
{ "authors": [ "growlix" ], "repo": "mosaicml/composer", "url": "https://github.com/mosaicml/composer/pull/1973", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1792086163
Fix wandb error with autoresume issue What does this PR do? The new wandb release (0.15.5) changes its error messages for artifact downloading failures. Our WandBLogger is designed to be EAFP instead of LBYL, so it relies on catching an exact error to prevent autoresume from trying to download checkpoints from wandb that aren't actually there. This PR makes it so the check is not so specific (i.e. it catches all wandb CommErrors instead of just ones with specific messages). As a result, the EAFP code will then work with the new wandb install. Unfortunately this didn't make it into v0.15.1. Will it be part of the next release?
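The shape of the fix can be sketched as follows. The names here are stand-ins (not the actual Composer or wandb classes): the point is catching the exception *type* rather than string-matching its message, which is what broke across wandb releases.

```python
class CommError(Exception):
    """Stand-in for wandb's communication error class."""


def checkpoint_exists(fetch) -> bool:
    """EAFP probe: try the download and treat *any* CommError as
    'no checkpoint available', regardless of the message wording."""
    try:
        fetch()
        return True
    except CommError:
        return False
```

With message matching, a rewritten error string would leak through and crash autoresume; catching the class is robust to wording changes.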
gharchive/pull-request
2023-07-06T19:09:10
2025-04-01T06:39:40.011083
{ "authors": [ "antoinebrl", "eracah" ], "repo": "mosaicml/composer", "url": "https://github.com/mosaicml/composer/pull/2353", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1617362728
Error in ResNet ImageNet examples While training ResNet with the "mild" ImageNet recipe, I realized that the call to config.update(recipe_config) doesn't actually work. See these lines. recipe_config is an OmegaConf dictionary config with keys that include dots in the name:
{ 'model.loss_name': 'binary_cross_entropy', 'train_dataset.crop_size': 176, 'eval_dataset.resize_size': 232, 'max_duration': '36ep' }
When you call config.update(recipe_config), it adds these keys to the config object directly, instead of updating the config.train_dataset nested dictionary. This means the max_duration is actually 36ep because it's a top-level key, but the crop sizes will not be changed and the model loss is still cross entropy instead of the binary variant. You can fix it like this:
for key, value in recipe_config.items():
    OmegaConf.update(config, key, value)
OmegaConf.update respects the dots in the key names. Thanks for identifying this and providing a fix! We will merge the fix in a PR soon. Apologies for the bug 😅
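The distinction can be sketched in plain Python, without the OmegaConf dependency (illustration only, not OmegaConf's implementation): dict-style update inserts the dotted string as a literal top-level key, while dotted-key update walks into the nested mapping.

```python
def update_dotted(cfg: dict, key: str, value) -> None:
    """Update a nested dict using a dotted key, the way OmegaConf.update
    treats 'train_dataset.crop_size'. Intermediate mappings are created
    if missing."""
    *parents, leaf = key.split(".")
    node = cfg
    for name in parents:
        node = node.setdefault(name, {})
    node[leaf] = value
```

Running both styles against the same starting config shows why the recipe silently failed to apply: only the dotted-key walk actually reaches `crop_size`.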
gharchive/issue
2023-03-09T14:26:52
2025-04-01T06:39:40.014088
{ "authors": [ "Landanjs", "samuelstevens" ], "repo": "mosaicml/examples", "url": "https://github.com/mosaicml/examples/issues/219", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
514991518
Switch entrypoints to functions that load classes PR Summary: Small change post-#282 per suggestions of @uppittu11 and @ahy3nz https://github.com/mosdef-hub/foyer/pull/282#issuecomment-547144491 https://github.com/mosdef-hub/forcefield_perfluoroethers/pull/4/files#r339807274 Instead of the entrypoint being an instance of the force field, it's a function that grabs it. This cleans up the import such that the user more discretely instantiates the object. It also prevents the loading from happening upon import foyer and should save some time on imports. A quick local check saved ~0.3 seconds; no promises this is accurate. PR Checklist [ ] Includes appropriate unit test(s) [ ] Appropriate docstring(s) are added/updated [ ] Code is (approximately) PEP8 compliant [ ] Issue(s) raised/addressed? Looks like the last test is being skipped now (it wasn't before). Should we drop it or fix it? foyer/tests/test_validator.py::test_forcefields[ff_file0] SKIPPED https://travis-ci.org/mosdef-hub/foyer/jobs/605218247?utm_medium=notification&utm_source=github_status We should remove this test because there are no forcefield xmls in foyer/forcefields/. I put them in foyer/forcefields/xml/ but didn't update the glob in that path. Maybe we could have a function that detects which plugins are installed when you import foyer (it could save/print the list of plugins too). And this can be used to iterate through all the plugins for this test. We can stick this in another PR if that's appropriate. @mattwthompson commented on this pull request, in foyer/tests/test_plugin.py:
@@ -6,9 +6,9 @@
 def test_basic_import():
     assert 'forcefields' in dir(foyer)
-@pytest.mark.parametrize('ff_name', ['OPLSAA', 'TRAPPE_UA'])
-def test_forcefields_exist(ff_name):
-    ff_name in dir(foyer.forcefields)
+@pytest.mark.parametrize('ff_loader', ['load_OPLSAA', 'load_TRAPPE_UA'])
+def test_forcefields_exist(ff_loader):
+    assert ff_loader in dir(foyer.forcefields)
What's the purpose of this test? It's just to make sure those loader functions are there, which assumes, for now, that we are shipping these force fields with the main repo. I thought we are trying to move opls and trappe to their own repos, like PFE. I like your idea for iterating through the loaders that are in the entry point group; it gets at what I was going for, but isn't this self-assuring? Since it only iterates through the loaders it finds, is there even a way for it to fail? I guess it could fail if the loader is bad, but the point was to check that the loaders we hope are there indeed exist.
>>> funcs = [func for func in dir(foyer.forcefields) if 'load' in func and '__' not in func]
>>> funcs
['load_OPLSAA', 'load_TRAPPE_UA']
>>> [eval('foyer.forcefields.' + func)() for func in funcs]
[<foyer.forcefield.Forcefield object at 0x11ab12470>, <foyer.forcefield.Forcefield object at 0x11a89aa58>]
I think this accomplishes what you suggested. I also noticed that even though we're checking these functions exist, we're not checking to see that they work. The above should fix that, I think. https://codecov.io/gh/mosdef-hub/foyer/src/update-entrypoints/foyer/forcefields/forcefields.py @rsdefever try to update gaff with these functions @rsdefever try to update gaff with these functions Check out this PR too for inspiration. https://github.com/mosdef-hub/forcefield_perfluoroethers/pull/7 @ahy3nz @uppittu11 take a look at my GAFF-foyer repo. I think it is now working with the latest entry points in foyer.
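An eval-free variant of the loader discovery above can be written with getattr (illustrative sketch only; `discover_loaders` is a made-up name, demonstrated here on a stand-in namespace rather than the real foyer.forcefields module):

```python
from types import SimpleNamespace


def discover_loaders(namespace, prefix="load_"):
    """Collect the load_* callables exposed on a module-like object.

    Same effect as building 'foyer.forcefields.<name>' strings and
    eval()-ing them, but without eval: getattr resolves the attribute
    directly, and the prefix/callable checks filter out everything else.
    """
    return {
        name: getattr(namespace, name)
        for name in dir(namespace)
        if name.startswith(prefix) and callable(getattr(namespace, name))
    }
```

Calling each discovered loader, as the REPL snippet above does, then also exercises that the loaders actually work rather than merely exist.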
gharchive/pull-request
2019-10-30T21:24:04
2025-04-01T06:39:40.034885
{ "authors": [ "ahy3nz", "mattwthompson", "rsdefever", "uppittu11" ], "repo": "mosdef-hub/foyer", "url": "https://github.com/mosdef-hub/foyer/pull/288", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1030703190
Return periodic torsion params, regardless of k value PR Summary: See #470 for further details. Previously, we didn't return any parameters if the value of k == 0. This PR attempts to fix that. Resolves #470. PR Checklist [x] Includes appropriate unit test(s) [x] Appropriate docstring(s) are added/updated [x] Code is (approximately) PEP8 compliant [x] Issue(s) raised/addressed? /azp run /azp run
gharchive/pull-request
2021-10-19T20:14:41
2025-04-01T06:39:40.037570
{ "authors": [ "justinGilmer", "umesh-timalsina" ], "repo": "mosdef-hub/foyer", "url": "https://github.com/mosdef-hub/foyer/pull/471", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1158970750
Modify pentaneUA statepoints for flexible and constrained In order to compare between MD and MC treatments of pentaneUA, we need to have both rigid and flexible models for pentaneUA.
* Lammps-VU and lammps-UD cannot do constrained bonds
* All MC engines and HOOMD + Gromacs can do constrained bonds
* All MC engines cannot do flexible bonds
* All MD engines can do flexible bonds
This PR updates the statepoints from init.py with two pentane "molecules": one that will be treated flexibly, and one that will be treated as constrained. I looked through the rest of the files, and I think we're good now. I added pentaneUA : Pentane() to the get_molecule function in system_builder, so it should work regardless of the name for the pentane molecule, such as the one used in the spe-subproject. Good catch @jennyfothergill
gharchive/pull-request
2022-03-03T22:23:12
2025-04-01T06:39:40.039855
{ "authors": [ "CalCraven" ], "repo": "mosdef-hub/reproducibility_study", "url": "https://github.com/mosdef-hub/reproducibility_study/pull/174", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1471491043
vscode addon for formatting and highlighting it would be nice if you made a vscode addon for Nilesoft Shell Script (.shl). it would add simple text highlighting and support for: collapsing {...} and multi-line items maybe IntelliSense (also, it would be nice if you had a discord server👍) i'm looking into it and i think i might be able to do it myself. This would be a really great contribution @moudey could you create a discord server so we could chat about this on there?
gharchive/issue
2022-12-01T15:13:25
2025-04-01T06:39:40.078532
{ "authors": [ "Natejoestev", "moudey" ], "repo": "moudey/Shell", "url": "https://github.com/moudey/Shell/issues/68", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1728408982
Reto #3 - JavaScript Describe your changes (Optional) Mainly advisable if the Pull Request corresponds to an additional correction and not to the submission of an exercise. Checks Make sure you meet the following points before opening the Pull Request: [x] The title of my Pull Request follows this format: "Reto #[number] - [language_used]". (E.g.: "Reto #0 - Kotlin") [x] The file name matches my GitHub username plus the language's extension. (E.g.: mouredev.kt) [x] The solution file is inside the exercise directory, in a folder named after the programming language used, in lowercase. (E.g.: Reto #0/kotlin/mouredev.kt) [x] I have checked that the language directory name is not conflicting: c#, not csharp; c++, not cplusplus; go, not golang; javascript, not js [x] I have only included exercise files. Pull Requests containing additional files associated with code editors or the like will not be accepted. Information You can find all the information about the weekly challenges at retosdeprogramacion.com/semanales2023. Each week the review will be done live and a new challenge will be published at twitch.tv/mouredev. Remember that you have a support group called "reto-semanal" on Discord. 🥖
gharchive/pull-request
2023-05-27T02:15:39
2025-04-01T06:39:40.101106
{ "authors": [ "FabianCristancho", "captaindrokky" ], "repo": "mouredev/retos-programacion-2023", "url": "https://github.com/mouredev/retos-programacion-2023/pull/3608", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
163594533
Resource Content changes Remove Application name/description from all resource content pages. Replace with a Resource Card for the specific operation. Implemented
gharchive/issue
2016-07-04T02:46:05
2025-04-01T06:39:40.117154
{ "authors": [ "d4ncer" ], "repo": "movio/apidoc-ui", "url": "https://github.com/movio/apidoc-ui/issues/32", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1290196045
🛑 t10n Services: Dynamic Image CDN (Das Quartier) is down In 41503f9, t10n Services: Dynamic Image CDN (Das Quartier) (https://dq-images.t10n.de) was down: HTTP code: 403 Response time: 213 ms Resolved: t10n Services: Dynamic Image CDN (Das Quartier) is back up in 8a964b5.
gharchive/issue
2022-06-30T14:10:26
2025-04-01T06:39:40.122744
{ "authors": [ "moximoti" ], "repo": "moximoti/upptime", "url": "https://github.com/moximoti/upptime/issues/80", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
123216929
Add CC License matter to Terms Split from #1311. Write into the Terms of Use that, when a user makes use of a project forked from something, "the creator(s) and any others designated to receive attribution" correspond to the owner and collaborators of the project the fork came from.
・Add a new Article 5, Paragraph 4 to the Terms of Use (an item on who is designated to receive attribution)
4. クリエイティブ・コモンズ・ライセンスが付与された投稿素材をユーザーが利用する場合、当該ユーザーは同ライセンスの表示義務にしたがって「ライセンス対象物の作者その他クレジット表示される者」を表示しなければなりません(同ライセンス第3条a.1.A.ⅰ)。本サービスにおいて、この「ライセンス対象物の作者その他クレジット表示される者」とは、プロジェクトの「オーナー」及び「コラボレーター」がこれに該当するとします。
In the case that a User uses Submission Materials to which a Creative Commons license has been applied, the User shall display or retain "the identification of the creator(s) of the Licensed Material and any others designated to receive attribution" according to the attribution obligations of the license (Creative Commons License, Section 3 a.1.A.ⅰ). In this Service, "the creator(s) of the Licensed Material and any others designated to receive attribution" refers to the Project's "Owner" and "Collaborator(s)".
・Revise Article 3, Paragraph 6 of the Terms of Use (add wording on the definition of a Project's "Owner")
第3条 プロジェクト
ユーザーは、プロジェクトまたはプロジェクト・ページ(以下「プロジェクト」といいます)を作成し、自らが提供する文章、イラスト、音、写真、映像、データ、プログラム、ハードウェアその他の素材(以下「投稿素材」という)を投稿し、または公開し、他のユーザーと共有することができます。ただし、ユーザーは、プロジェクトを公開しないこともできます。
ユーザーは、プロジェクト内において、「状態(State)」・「注釈(Annotation)」・「使い方(Usage)」からなる「レシピ」を作成・編集・削除することができます。
ユーザーは、プロジェクト内で、プロジェクトに関する「ノート」を作成できます。ただし、当該プロジェクトにおいて編集権限のないユーザーは、ノートを作成することはできません。
ユーザーは、プロジェクト内において公開されている投稿素材を、投稿したユーザーが定める条件にしたがって、自ら利用することができます(「フォーク」といいます)。
ユーザーは、前項に基づくフォークの結果を参照することができます。
ユーザーは、保有しているプロジェクトに関し、「コラボレータ」を追加することができます(プロジェクトを保有しているユーザーを「オーナー」といいます)。
Article 3 User's Projects
Users may create projects and project pages (hereinafter referred to as the "Project(s)"), submit their own text, illustrations, sounds, photos, videos, data, programs, hardware and other material (hereinafter referred to as the "Submission Material(s)"), as well as publish it and share it with other Users. Users may also choose not to publish a Project.
Users may create, edit and delete "Recipes" consisting of "State", "Annotation" and "Usage" within the context of a Project.
Users may create "Notes" related to the Project within the context of a Project. However, Users may NOT create notes within a particular Project without edit privileges.
Users may reproduce and/or reuse Submission Materials that have been published and utilize them themselves in accordance with the conditions designated by the User within the context of a Project (referred to as "Fork").
Users may consult the result of the Fork outlined in the previous paragraph.
Users may add "Collaborators" to the Projects they own (the User who owns the Project is referred to as the "Owner").
・Change "Members" on the project page to "Owner & Collaborators"
gharchive/issue
2015-12-21T06:11:24
2025-04-01T06:39:40.188647
{ "authors": [ "oshimaryo" ], "repo": "mozilla-japan/gitfab2", "url": "https://github.com/mozilla-japan/gitfab2/issues/1316", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2409321071
Audit requirements may be overly strong when dealing with multiple root crates in a workspace This was noticed in https://bugzilla.mozilla.org/show_bug.cgi?id=1907810, specifically because of the changes in https://phabricator.services.mozilla.com/D212959#inline-1194604. If we have multiple crates in the local workspace which have different audit requirements, it is possible for an overly strong requirement to be placed on an indirect dependency. Specifically, suppose we have a dependency graph like the following, where W1 and W2 are toplevel workspace crates and D1 and D2 are third-party dependencies, with D1 optionally depending on D2. W1 does not enable this dependency but W2 does. If W1 has stronger audit requirements than W2 does, those stronger requirements can end up applying to D2, as the two D1 nodes are unified when running cargo metadata to have the combined feature set.
W1[req:safe-to-deploy] -> D1[features:]
W2[req:safe-to-run] -> D1[features:D2] -> D2
(D2 should require "safe-to-run", but will instead require "safe-to-deploy")
Unfortunately, I don't think that the output of cargo metadata really provides a way to solve this ambiguity and get what the resolution would be for each workspace crate independently, such that we could treat the two D1 dependencies as separate "nodes", to not propagate the same audit requirements to their dependencies. It might be possible if we didn't trust the dependency resolution done by cargo and implemented feature resolution ourselves, but that seems like it'd be both a lot of work, and likely to break as cargo updates and changes how feature resolution is handled. I've experimented a bit and it looks like using cargo tree --format "{p} {f}" --package CRATE will show the correct subset of features and thus dependent crates. Unfortunately, though, cargo tree doesn't support an output format more easily ingested by programs.
Unfortunately it does look like if you run cargo tree without a specific --package flag it'll still unify features, as in the example from that bug: the neqo-udp crate still has a tokio dependency even under the gkrust crate unless you explicitly scope to only the gkrust package. It does seem like cargo metadata would need a --package flag, and we'd need a reliable way to enumerate all packages within the workspace so that we could do separate metadata runs for each one. On top of that we'd also need to tweak the graph building for the resolver and such so that dependency lists and inherited audit requirements could be dependent on which package the dependency graph is coming from.
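The difference between propagating over the unified graph and propagating per root can be sketched with a toy model (this is an illustration only, not cargo-vet's actual resolver; the graph is assumed acyclic):

```python
LEVELS = {"safe-to-run": 1, "safe-to-deploy": 2}


def required_levels(edges, root_reqs):
    """For each root crate, walk its dependency edges and record the
    strongest requirement that reaches each crate."""
    needed = {}
    for root, req in root_reqs.items():
        stack = [root]
        while stack:
            node = stack.pop()
            cur = needed.get(node)
            if cur is None or LEVELS[req] > LEVELS[cur]:
                needed[node] = req
            stack.extend(edges.get(node, []))
    return needed
```

On the unified graph, W1's safe-to-deploy walk reaches D2 through the merged D1 node; with per-root graphs, W1's D1 has no edge to D2, so D2 only picks up W2's weaker requirement.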
gharchive/issue
2024-07-15T18:06:03
2025-04-01T06:39:40.532001
{ "authors": [ "afranchuk", "mystor" ], "repo": "mozilla/cargo-vet", "url": "https://github.com/mozilla/cargo-vet/issues/626", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
84709566
[bug 1169261] Fix event tracking Event tracking needs the provider version number so we can distinguish between flows for successive algorithms. Further, the "suggest" event needs to be synchronous while the "view*" events can be asynchronous. Sorry--forgot to do this. :( r? lgtm r+! Thank you!
gharchive/pull-request
2015-06-03T18:35:49
2025-04-01T06:39:40.547823
{ "authors": [ "rlr", "willkg" ], "repo": "mozilla/fjord", "url": "https://github.com/mozilla/fjord/pull/593", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
343819816
Request headers support for protocols This is a followup from #2, where you can find all the details. The quote below is a short summary: Request Headers - Things here sadly seem far more complicated, but it is also clear that this is really important for proper support of Video / Audio tags that just assume an HTTP backend. It was suggested to me that we could do the following: Plain data from the request so that the protocol handler can parse HTTP headers or an alternative encoding of metadata. - Expose the existing HTTP parser so it could be used. There was quite a reluctance on this though, for reasons I did not understand (part of it was that it's written in C++ and could not be easily extracted for just parsing use). It was highly recommended to instead just support a more strictly encoded subset of the headers, as anything sent from video / audio tags would fit. Unfortunately it seems that supporting request headers would require C++ work and landing corresponding changes into Firefox, which is to say it's likely going to take a while and likely will come after we have illustrated that people are actually building on this work. Is an analog to HTTP's Range requests tracked under this, or should I create a separate issue for it? Is an analog to HTTP's Range requests tracked under this, or should I create a separate issue for it? Let's track it here for now; depending on how it pans out I might need a separate one, but for now it's good. One thing that could help is an example that produces Range requests that are disregarded. @Gozala I believe seeking with <video> is the most popular use for range requests right now. I created a small sandbox illustrating current problems, more details in README at: lidel/libdweb/tree/video-range-use-case-demo Migrated to https://bugzilla.mozilla.org/show_bug.cgi?id=1572215
gharchive/issue
2018-07-23T22:34:10
2025-04-01T06:39:40.642735
{ "authors": [ "Gozala", "lidel" ], "repo": "mozilla/libdweb", "url": "https://github.com/mozilla/libdweb/issues/36", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
740263112
Question: Can this library not be used from Node.js? Hello, I just have a question. Can this library not be used from Node.js? I get this error when trying to open a PDF file:
The browser/environment lacks native support for critical functionality used by the PDF.js library (e.g. ReadableStream and/or Promise.allSettled); please use an ES5-compatible build instead.
my code:
const pdfjsLib = require('pdfjs-dist');
const fs = require('fs');
const pdfPath = 'test.pdf';
const data = fs.readFileSync(pdfPath);
var loadingTask = pdfjsLib.getDocument({data: data});
loadingTask.promise
It's most definitely possible since we have examples of this in the examples folder. The key is that you use the ES5 build as indicated in the error; see https://github.com/mozilla/pdf.js/blob/master/examples/node/pdf2svg.js#L17
gharchive/issue
2020-11-10T21:42:17
2025-04-01T06:39:40.712934
{ "authors": [ "siddjain", "timvandermeij" ], "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/issues/12606", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
959092718
Null pointer exception in optional_content_config.js when viewing PDF document The document in question: doc_image.pdf Configuration: Web browser - Chrome Operating system and its version: Windows 10 PDF.js version: 2.8.335, 2.10.377 Is a browser extension: no Expected document rendering: Actual rendering with an error I can reproduce with Firefox nightly. @brendandahl, could you have a look? Thank you. When might the fix be available in a released version? Thank you. When might the fix be available in a released version? The patch has neither been reviewed nor landed yet, so you'll have to be patient :-) It will simply be included in the next release, however no exact date for that can be provided (since we don't have a fixed release schedule) and note also that the last release was just nine days ago. Furthermore, this isn't a recent regression either, since it's been present ever since PR #12095 which landed a year ago.
gharchive/issue
2021-08-03T13:01:02
2025-04-01T06:39:40.717882
{ "authors": [ "Snuffleupagus", "calixteman", "ypersion1956" ], "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/issues/13851", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2223846727
transposed JBIG2 text segments with non-topleft reference corner don't render correctly Attach (recommended) or Link to PDF file here: symbol-texttranspose.pdf symbol-topright-transposed.pdf symbol-bottomleft-transposed.pdf symbol-bottomright-transposed.pdf Configuration: Web browser and its version: Chrome 123.0.6312.59 Operating system and its version: macOS 13.5.2 PDF.js version: today's trunk at https://mozilla.github.io/pdf.js/web/viewer.html (I verified it has the fix for #17871 already). Is a browser extension: No Steps to reproduce the problem: Open each of the four PDFs above What is the expected behavior? (add screenshot) They should all look like the first one: What went wrong? (add screenshot) The ones that have the reference corner not set to topleft are in various states of disarray: ITU-T_T_88__08_2018.pdf 6.4.5 Decoding the text region has two steps for updating cur_s, once in vi) Update CURS as follows: before drawing the bitmap, and then again xi) Update CURS as follows: after drawing the bitmap. It looks like 25f6a0c13965c5ad9cebe701e4752bde5e8fa811 mixes up these two steps with the "is transposed" check. Depending on the reference corner, this needs to happen before or after drawing for both transposed and untransposed iamges. Like in #17871: I made these files myself while writing a JBIG2 decoder. I'm reasonably confident that the files and Chrome and jbig2dec and my decoder are correct, but it's possible the files are wrong instead. Oh, and this isn't purely theoretical: This slightly-more-real-world PDF looks wonky because of this. transpose2.pdf It's not fully real-world since it's 042_19.jb2 from https://git.ghostscript.com/?p=tests.git;a=tree;f=jbig2;h=8a7abaf842435e204c1ff1dbeed10826bf24afe6;hb=HEAD wrapped in a PDF, so it's still a bit synthetic. But it's a file made by someone else at least, which maybe gives the bug report more credence.
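My reading of the two CURS updates in T.88 §6.4.5 (steps vi and xi) can be sketched as follows; treat the exact corner sets as my interpretation of the spec rather than verified PDF.js code. The key property is that, for a given symbol, the pre-draw and post-draw advances always sum to the symbol's extent along the stripe, but how that sum is split depends on both the reference corner and the transposed flag:

```python
def curs_advance(transposed: bool, refcorner: str, wi: int, hi: int):
    """Return (before, after): how much CURS advances before and after
    drawing one symbol of width wi and height hi.

    Untransposed regions advance by WI - 1; transposed regions by HI - 1.
    Right corners (untransposed) and bottom corners (transposed) take the
    advance *before* drawing; the remaining corners take it *after*.
    """
    extent = (hi if transposed else wi) - 1
    if transposed:
        pre = refcorner in ("BOTTOMLEFT", "BOTTOMRIGHT")
    else:
        pre = refcorner in ("TOPRIGHT", "BOTTOMRIGHT")
    before = extent if pre else 0
    return before, extent - before
```

Gating both updates on a single "is transposed" check, as the decoder change described above appears to do, collapses these four cases into two and misplaces symbols for the non-topleft corners.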
gharchive/issue
2024-04-03T20:27:57
2025-04-01T06:39:40.725820
{ "authors": [ "nico" ], "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/issues/17883", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2594871918
[Bug]: Annotation artifacts remain visible in viewer after deleting all annotations. Attach (recommended) or Link to PDF file test.pdf Web browser and its version Mozilla Firefox Operating system and its version Windows 10 PDF.js version 4.7.76 Is the bug present in the latest PDF.js version? Yes Is a browser extension No Steps to reproduce the problem Open the attached PDF in the default viewer. Activate the annotation editor by clicking highlight on the toolbar. Scroll down to the following pages to load the annotations. Press Ctrl+A to select all annotations in the PDF. Press delete to remove all annotations at once. What is the expected behavior? All annotations should be removed from the PDF. What went wrong? Although the annotations are no longer selectable, some of them still appear visible in the viewer. They only disappear once the annotation editor is closed. Link to a viewer No response Additional context No response This is a tricky bug... When the pdf is rendered we only render the first 2 pages (it depends on the zoom level), so we're only aware of the annotations we have on those pages. That means we don't have the ids, the properties, ... of the other annotations in the pdf. For example, we can ctrl+a and then change the color of one highlight, it should impact all the highlights of the document. So I think the only right way to fix this would be to get all the editable annotations when the user is selecting all, put them in the storage, and then apply the changes to the data we have in the storage. The problem is most likely OS related. I played around with it on macOS Firefox latest version (132.0.2 aarch64) and Chrome (131.0.6778.70). PDF.js correctly identified and rendered all highlighted text, regardless of the zoom level (even on "Page Fit", all highlights were selected and removed after backspacing or clicking on the delete button).
gharchive/issue
2024-10-17T14:11:46
2025-04-01T06:39:40.732357
{ "authors": [ "atrinker", "calixteman", "zzadxz" ], "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/issues/18915", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1413094931
Remove the Glyph.matchesForCache method (PR 13494 follow-up) This method, and its class, was originally added in PR #4453 to reduce memory usage when parsing text. Then PR #13494 extended the Glyph-representation slightly to also include the charCode, which made the matchesForCache method effectively redundant since most properties on a Glyph-instance indirectly depends on that one. The only exception is potentially isSpace in multi-byte strings. Also, something that I noticed when testing this code: The matchesForCache method never worked correctly for Glyphs containing accent-data, since Objects are passed by reference in JavaScript. For affected fonts, of which there's only a handful of examples in our test-suite, we'd fail to find an already existing Glyph because of this. /botio test /botio test Note that being able to skip re-parsing this data over and over for every single rendered glyph is a small performance improvement. Some very quick console.time/timeEnd benchmarking, with the default tracemonkey.pdf file, suggest that it's on average 1-2 ms faster per page, which obviously isn't a lot but still doesn't seem worthless.
gharchive/pull-request
2022-10-18T12:00:34
2025-04-01T06:39:40.735788
{ "authors": [ "Snuffleupagus" ], "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/pull/15586", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1494254734
The annotation layer dimensions must be set before adding some elements (follow-up of #15770) In order to move the annotations in the DOM to have something which corresponds to the visual order, we need to have their dimensions/positions which means that the parent must have some dimensions. /botio integrationtest /botio unittest
gharchive/pull-request
2022-12-13T13:37:22
2025-04-01T06:39:40.737093
{ "authors": [ "calixteman" ], "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/pull/15820", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1577682635
[api-minor] Don't print hidden annotations (bug 1815196) and handle correctly the NoView and NoPrint flags when they're changed from JS. /botio test /botio test The test bug1737260-oc was failing because the visibility of a widget is now handled in the annotation layer, hence I fixed it in adding annotations: true. The test bug1737260-oc was failing because the visibility of a widget is now handled in the annotation layer, hence I fixed it in adding annotations: true. Does that mean that if you use the API directly (and not the full viewer), will rendering now be "wrong" for the document? I suppose that I don't fully understand exactly why this broke and why updating the test is necessary/correct here. /botio test Does this, together with your recent patches, replace PR #15032? r=me, thank you! Yep that's the idea, I just need to rewrite the part to update appearances when background or border colors changed.
gharchive/pull-request
2023-02-09T11:04:35
2025-04-01T06:39:40.740749
{ "authors": [ "Snuffleupagus", "calixteman" ], "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/pull/16029", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
327116317
Backout of pull request #9345 Refer to https://github.com/mozilla/pdf.js/pull/9345#issuecomment-392536768. /botio-linux preview Trivial.
gharchive/pull-request
2018-05-28T20:58:33
2025-04-01T06:39:40.742414
{ "authors": [ "timvandermeij" ], "repo": "mozilla/pdf.js", "url": "https://github.com/mozilla/pdf.js/pull/9757", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
110948265
test cases added for agents.py This is a test case for agents.py Please do let me know, this is how you want to continue in tests. I am writing for other modules also How to run project_root_folder# cd tests project_root_folder/tests# nosetests test_agents.py .... Ran 4 tests in 5.909s OK I cannot start it. Starting it from tests/ returns: ====================================================================== ERROR: Failure: ImportError (No module named bugzilla.agents) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/loader.py", line 420, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.7/dist-packages/nose/importer.py", line 47, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.7/dist-packages/nose/importer.py", line 94, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/home/sylvestre/dev/mozilla/relman-auto-nag/tests/test_agents.py", line 1, in <module> from bugzilla.agents import BMOAgent ImportError: No module named bugzilla.agents ---------------------------------------------------------------------- Ran 1 test in 0.001s FAILED (errors=1) `̀`` Sylvestre, I actually installed it using setup.py If we put that in "tests" obviously we can't access the modules like bugzilla until and unless it is in the python path. I thought of adding separate modules for each file, which will increase the readability. If the method I am following is wrong please do let me know, I can push back in to the root folder in to a single file.
gharchive/pull-request
2015-10-12T10:28:22
2025-04-01T06:39:40.762495
{ "authors": [ "anoopvalluthadam", "sylvestre" ], "repo": "mozilla/relman-auto-nag", "url": "https://github.com/mozilla/relman-auto-nag/pull/24", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
167117023
yargs can't handle the way process.argv looks in some electron contexts. yargs assumes that process.argv is [node_path, initial_js, ...rest] so it strips the first two parts of the array and processes the rest (https://github.com/yargs/yargs/blob/master/index.js#L6). But in electron, particularly when we run as a packaged app process.argv is actually [tofino.exe, ...rest] so we lose the first argument and this breaks events coming from squirrel's installer. :(
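To make the mismatch concrete, here is a minimal sketch (the helper and the packaged-app flag are mine, not yargs or Electron API):

```javascript
// Plain Node:        process.argv = [nodePath, scriptPath, ...args]
// Packaged Electron: process.argv = [appPath, ...args]
// Dropping a fixed two entries (yargs' default) therefore eats the first
// real argument in the packaged case, e.g. Squirrel's --squirrel-install.
function userArgs(argv, isPackagedApp) {
  return argv.slice(isPackagedApp ? 1 : 2);
}
```

If I remember correctly, yargs also accepts a pre-sliced array (`require('yargs')(process.argv.slice(1))`), which would be one workaround in the packaged case.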
gharchive/issue
2016-07-22T19:38:51
2025-04-01T06:39:40.797378
{ "authors": [ "Mossop", "victorporof" ], "repo": "mozilla/tofino", "url": "https://github.com/mozilla/tofino/issues/860", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
222834763
HELP: disable/enable editing Hi how to enable or disable contentEditing of DIV ? I tried this but nothing , it dont work document.getElementsByClassName("simditor-body").contentEditable=flase; this.body.contentEditable=false; How to create a function like this Simditor.prototype.enableEditing= function(enable) { // Enable disable content editing }; Please help me, is very important for me 1.getElementByClassName returns an array 2.contenteditable is an attribute of DOM, not member of object. document.getElementsByClassName("simditor-body")[0].setAttribute('contenteditable', false);
gharchive/issue
2017-04-19T18:50:12
2025-04-01T06:39:40.968799
{ "authors": [ "aresares", "mr5" ], "repo": "mr5/icarus-android", "url": "https://github.com/mr5/icarus-android/issues/30", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
33268712
Doesn't timeout properly when server is unreachable var r = require('redis').createClient(null, '1.2.3.4', {connect_timeout: 1000}); r.on('ready', function(){ console.log('ready!'); }).on('error', function(err){ console.log('error: ' + err)}); It doesn't time out after 1 second. It does time out after an unknown amount of time with: error: Error: Redis connection to 1.2.3.4:6379 failed - connect ETIMEDOUT I think it should follow the docs and error out after the specified timeout. I also got the same issue, my config is the following: var client = redis.createClient(conf.port, conf.host, { connect_timeout: 100, // abandon connection after 100ms retry_max_delay: 100, // no impact, will not wait more than 100ms between reconnection attempts max_attempts: 1, // only 1 connection attempt enable_offline_queue: false, // no offline queue, must wait for online mode no_ready_check: true // we don't check for redis ready state, let's query the redis directly }); But instead of failing after 100ms, it fails after a full second. Anyone found a fix for that? Similar issues here. +1 +1 Any news on this? I wound up switching to ioredis, which appears to handle the connection timeouts properly and, aside from the client creation, be a drop-in replacement for my use case. That looks interesting - I'll take a closer look, thanks. To reproduce, I paused a Redis Docker container but am still seeing long timeouts. Is this issue resolved? I experience the behavior described above. The connection fails after about 2 minutes, despite a connect_timeout of 5000. I am using version 2.2.5. @jbergknoff this should definitely be resolved. Do you have a reproducible case? And what do your current options look like? Hm, interesting.
A slightly modified code snippet from the original post here reproduces the issue: var r = require('redis').createClient({host: '1.2.3.4', connect_timeout: 1000}); r.on('ready', function(){ console.log('ready!'); }).on('error', function(err){ console.log('error: ' + err)}); The change is in the arguments to createClient. The original snippet unintentionally falls back on the default host (127.0.0.1) because typeof null is object (https://github.com/NodeRedis/node_redis/blob/afc4989495245e683ce70a234c55046a51e73c08/index.js#L1243). If the host is the bogus 1.2.3.4 then this hangs for 2 minutes. If the host is 127.0.0.1 then it works as expected. Any insight into that? @jbergknoff thx for pointing that out! I'm looking into it right now. @jbergknoff fixed on master Great, thanks @BridgeAR! Any ETA on next release to npm? Later today
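The gotcha in isolation — a sketch of the fallback logic, not the actual node_redis source:

```javascript
// typeof null === 'object' in JavaScript, so a null host passed where an
// options object is also accepted slips past the type check and silently
// falls back to the default host.
function resolveHost(host) {
  if (host === undefined || typeof host === 'object') {
    return '127.0.0.1'; // null lands here too
  }
  return host;
}
```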
gharchive/issue
2014-05-11T19:34:58
2025-04-01T06:39:41.000257
{ "authors": [ "BenHall", "BridgeAR", "bgSosh", "celesteking", "chrisbaldauf", "jbergknoff", "michelsalib", "raviv" ], "repo": "mranney/node_redis", "url": "https://github.com/mranney/node_redis/issues/587", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1643126041
🛑 Lyster Exteriors [855lysters.com] is down In 73e331f, Lyster Exteriors [855lysters.com] (https://855lysters.com) was down: HTTP code: 0 Response time: 0 ms Resolved: Lyster Exteriors [855lysters.com] is back up in 4365d96.
gharchive/issue
2023-03-28T03:10:15
2025-04-01T06:39:41.006247
{ "authors": [ "mrbrant89" ], "repo": "mrbrant89/od1-monitoring", "url": "https://github.com/mrbrant89/od1-monitoring/issues/80", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
541800429
Adding the specified count to the semaphore would cause it to exceed its maximum count First off, I have to say, this wrapper is great, I've been using it with great success. However, at some point I had the need to fire off these Rest20 commands as async Tasks, and I occasionally received an error in StreamSession.cs mentioning: Adding the specified count to the semaphore would cause it to exceed its maximum count I recognize these errors from when an Entity Framework database context has not been properly disposed. However, in this case there is no database context, so I am very curious as to what the reason could be and how I can fix it. Hi yAnn1ck, First: The OandaV20 code and repo is no longer supported. Please migrate to OandaV20.2. Second: The Semaphore class is not used within the OkonkwoOandaV20 library. Any issue(s) you may be experiencing may be due to your code or environment. That said, the Semaphore class is used in the sample app, OkonkwoOandaV20App. It is also used in the test project, OkonkwoOandaV20Test. In both cases, the semaphore usage was inelegant and not intended for production. Please refer to the Microsoft docs at the link below on the Semaphore class for detailed information on its proper use. If you need further assistance, please provide a code snippet. Thanks, Chris
gharchive/issue
2019-12-23T15:29:20
2025-04-01T06:39:41.017406
{ "authors": [ "Jaco-JvZ", "mrchrisok", "yAnn1ck-B" ], "repo": "mrchrisok/OandaV20", "url": "https://github.com/mrchrisok/OandaV20/issues/14", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1669580431
Check with DeviceControl Software Update Hey, did anybody check compatibility with the new Devices Control Software from miniDSP? Would love to clarify before updating and breaking the connection for minidsp-rs, ty I think you are referring to device console? I did upgrade my SHD to the latest firmware. While minidsp-rs works in every way I've tried with it, the ability for device console to "tunnel" through minidsp-rs directly to the SHD does not work. Device Console throws some error when I attempt to connect. The MiniDSP android app does continue to work. ok, cool. Thank you for your reply. I would not like to lose the ability to control my minidsp via the API. :)
gharchive/issue
2023-04-15T21:29:47
2025-04-01T06:39:41.099905
{ "authors": [ "holisticagile", "scottshanafelt" ], "repo": "mrene/minidsp-rs", "url": "https://github.com/mrene/minidsp-rs/issues/568", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2119016708
[FEATURE REQUEST] add support for notifications icons :speech_balloon: Description It would be awesome if this package also generated the icons for notifications. :question: Platform This would be for Android. Duplicate of #29 I have added the Android notification icon in release v3.0.0-beta.1
gharchive/issue
2024-02-05T16:52:35
2025-04-01T06:39:41.183864
{ "authors": [ "JobMoll", "deandreamatias", "mrrhak" ], "repo": "mrrhak/icons_launcher", "url": "https://github.com/mrrhak/icons_launcher/issues/48", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
225300897
Does mruby have BigDecimal? I found that in some circumstances I need a highly precise decimal type for calculation. So does mruby support BigDecimal, or are there mruby gems to support that? thanks!! If not, what should I do to achieve that? There is a gem to add bignum support. https://github.com/chasonr/mruby-gmp-bignum/ or https://github.com/chasonr/mruby-bignum/ Read the readme on how to use it, because it either needs a forked mruby version or you have to use special functions to do math on them. @Asmod4n Thank you very much. However, as I understand it, the gem does not support BigDecimal. For example, if I want "123456789.123456789 + 123456789.123456789", it should be "246913578.246913578". But for now I got "246913578.246913". Am I correct? if you want to use + - * / you have to read the readme of the gem. It should work with 123456789.123456789.to_bn + 123456789.123456789 but I haven't used those gems yet. Since there are external mrbgems to implement bignum, and the ISO standard does not require bignum, I suggest we close this issue as wontfix. Just to clarify: the Bignum gems implement integer arithmetic, not decimal floating point as torsakch seems to want. They could be used as a basis for a BigDecimal class, but they do not provide BigDecimal themselves.
gharchive/issue
2017-04-30T03:12:57
2025-04-01T06:39:41.189167
{ "authors": [ "Asmod4n", "beoran", "chasonr", "torsakch" ], "repo": "mruby/mruby", "url": "https://github.com/mruby/mruby/issues/3646", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1718392240
Issue with implementing ctx.i18n.t("...") in Telegraf's "scenes" Hello, Thank you for your contribution. I am currently exploring how to make ctx.i18n.t("...") work in Telegraf's "scenes", and I'm encountering some difficulties. I would appreciate any assistance or guidance on this matter. Could you please provide more information on how to effectively implement ctx.i18n.t("...") into "scenes" in Telegraf? Any examples or code snippets demonstrating the correct usage would be greatly appreciated. Additionally, if there are any specific configurations or dependencies that need to be considered, please let me know. Thank you in advance for your help. I'm eager to resolve this issue and make progress with my project. Best regards, Germán Lugo Hello, use it the same way as in the examples given; it does not require anything special. But if you need more help, contact me on Telegram t.me/Target_Designer Hi msaebi031, Thank you for your quick response. I have found a solution. Inside the Scene file: ctx.reply(ctx.scene.ctx.i18n.t("test")) The trouble may be because I'm exporting the Scenes from a subfolder. Best regards.
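For anyone landing here, the workaround from this thread can be wrapped in a small helper — a hedged sketch; the `ctx.scene.ctx` path is taken from the comment above and not verified against Telegraf's typings:

```javascript
// Inside a scene handler ctx.i18n may be undefined, but the wrapped outer
// context is reachable via ctx.scene.ctx — fall back to it, then to the key.
function translate(ctx, key) {
  var i18n = ctx.i18n || (ctx.scene && ctx.scene.ctx && ctx.scene.ctx.i18n);
  return i18n ? i18n.t(key) : key;
}
```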
You are receiving this because you are subscribed to this thread.Message ID: @.***> Hi msaebi031, Thank you for your quick response. I have found a solution. Inside the Scene file: ctx.reply(ctx.scene.ctx.i18n.t("test")) The trouble maybe is because I'm exporting the Scenes from a subfolder. Best regards.
gharchive/issue
2023-05-21T06:49:16
2025-04-01T06:39:41.198825
{ "authors": [ "Chococoin", "msaebi031" ], "repo": "msaebi031/i18n-telegraf", "url": "https://github.com/msaebi031/i18n-telegraf/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2216618751
🛑 Do53 Roost IPv6 UDP is down In 965233b, Do53 Roost IPv6 UDP (http://107.189.10.142:9204) was down: HTTP code: 0 Response time: 0 ms Resolved: Do53 Roost IPv6 UDP is back up in 3edd04a after 1 hour.
gharchive/issue
2024-03-30T18:36:54
2025-04-01T06:39:41.215884
{ "authors": [ "mschirrmeister" ], "repo": "mschirrmeister/upptime-loopx", "url": "https://github.com/mschirrmeister/upptime-loopx/issues/4565", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2595925036
Sequential processing dRi = fcn_singleOrthonormalMatrixGeneration(angles,mus,partial_difference=True,index_pd_angle=iAngle) # TODO: Sequential processing https://github.com/msiplab/TanSacNet/blob/fda9ca7f20ca0420b7f480b7d2da8de93a4a289c/code/appendix/torch_tansacnet/orthonormalTransform.py#L367 Refactoring the backward method of GivensRotaitons4Synthesizer in orthonormalTransform.py to reflect the sequential differential calculation process.
gharchive/issue
2024-10-17T22:51:51
2025-04-01T06:39:41.236649
{ "authors": [ "shodimaggio" ], "repo": "msiplab/TanSacNet", "url": "https://github.com/msiplab/TanSacNet/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
393590326
Complete copy edit pass. @rotycenh OPS Build status updates of commit c158ee4: :clock10: Preparing: average preparing time is 45 sec(s) OPS Build status updates of commit c158ee4: :clock10: Incremental building: average incremental building time is 23 sec(s) OPS Build status updates of commit c158ee4: :warning: Validation status: warnings File Status Preview URL Details toc.json :warning:Warning Details bread/toc.json :warning:Warning Details building-blocks/extending-templates/toc.json :warning:Warning Details docs/cloud-adoption/infrastructure/logs-and-reporting/overview.md :white_check_mark:Succeeded View toc.json [Warning] Error happen when converting toc.json to Pdf. Details: Could not find file 'T:\azwh\toc.json'. bread/toc.json [Warning] Error happen when converting bread/toc.json to Pdf. Details: Could not find a part of the path 'T:\azwh\bread\toc.json'. building-blocks/extending-templates/toc.json [Warning] Error happen when converting building-blocks/extending-templates/toc.json to Pdf. Details: Could not find a part of the path 'T:\azwh\building-blocks\extending-templates\toc.json'. For more details, please refer to the build report. Note: If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report.
gharchive/pull-request
2018-12-21T21:04:17
2025-04-01T06:39:41.264874
{ "authors": [ "VSC-Service-Account", "laschultz" ], "repo": "mspnp/architecture-center", "url": "https://github.com/mspnp/architecture-center/pull/1114", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
285161272
naming conventions: change 3-letter suffix for Data Lake Store from 'dtl' to 'dls' Dear all, could you please consider whether it makes sense to change the 3-letter suffix for Data Lake Store from 'dtl' to 'dls'. The justification is that other tools already use the 'dls' abbreviation (such as the Azure CLI). Also, 'dls' seems appropriate for the resource type 'Microsoft.DataLakeStore'. Ref. 5fe63d8. Thanks (and sorry about the messy PRs). FYI: 'dtl' is a widely used abbreviation for DevTestLab; even some templates in the azure-quickstart-templates repo use it (even though 'lab' is perhaps a better abbreviation for DevTestLab). :white_check_mark: Validation status: passed File Status Preview URL Details docs/best-practices/naming-conventions.md :white_check_mark:Succeeded View For more details, please refer to the build report. Note: If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report.
gharchive/pull-request
2017-12-29T22:01:10
2025-04-01T06:39:41.269791
{ "authors": [ "bennage", "joakimhellum-in" ], "repo": "mspnp/architecture-center", "url": "https://github.com/mspnp/architecture-center/pull/342", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
259025125
Shape error occur when running the experiment [09:38:23] /home/pdl/workspace2/ylj/MXNet/mxnet/dmlc-core/include/dmlc/logging.h:308: [09:38:23] src/operator/batch_norm-inl.h:238: Check failed: channelAxis < dshape.ndim() (1 vs. 0) Channel axis out of range: 1 Stack trace returned 10 entries: [bt] (0) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN4dmlc15LogMessageFatalD1Ev+0x3c) [0x7fd8fec99aac] [bt] (1) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(ZNK5mxnet2op13BatchNormProp10InferShapeEPSt6vectorIN4nnvm6TShapeESaIS4_EES7_S7+0x979) [0x7fd8ffbab989] [bt] (2) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(+0x16979e7) [0x7fd8ffb719e7] [bt] (3) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(+0x152be47) [0x7fd8ffa05e47] [bt] (4) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN5mxnet4exec10InferShapeEN4nnvm5GraphESt6vectorINS1_6TShapeESaIS4_EERKSs+0x83b) [0x7fd8ffa07afb] [bt] (5) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(MXSymbolInferShape+0x17ed) [0x7fd8ff99f62d] [bt] (6) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(ffi_call_unix64+0x4c) [0x7fd90e97857c] [bt] (7) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(ffi_call+0x1f5) [0x7fd90e977cd5] [bt] (8) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(_ctypes_callproc+0x3e6) [0x7fd90e96f376] [bt] (9) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(+0x9db3) [0x7fd90e966db3] infer_shape error. 
Arguments: data: (1L, 3L, 562L, 1000L) label: (1L, 20412L) bbox_target: (1L, 36L, 36L, 63L) bbox_weight: (1L, 36L, 36L, 63L) Traceback (most recent call last): File "dff_rfcn_end2end_train_test.py", line 19, in train_end2end.main() File "../../dff_rfcn/train_end2end.py", line 182, in main config['TRAIN']['begin_epoch'], config['TRAIN']['end_epoch'], config['TRAIN']['lr'], config['TRAIN']['lr_step']) File "../../dff_rfcn/train_end2end.py", line 101, in train_net sym_instance.infer_shape(data_shape_dict) File "../../dff_rfcn/../lib/utils/symbol.py", line 38, in infer_shape arg_shape, out_shape, aux_shape = self.sym.infer_shape(**data_shape_dict) File "/home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/symbol/symbol.py", line 958, in infer_shape res = self._infer_shape_impl(False, *args, **kwargs) File "/home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/symbol/symbol.py", line 1087, in _infer_shape_impl ctypes.byref(complete))) File "/home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/base.py", line 143, in check_call raise MXNetError(py_str(_LIB.MXGetLastError())) mxnet.base.MXNetError: Error in operator bn_conv1: [09:38:23] src/operator/batch_norm-inl.h:238: Check failed: channelAxis < dshape.ndim() (1 vs. 
0) Channel axis out of range: 1 Stack trace returned 10 entries: [bt] (0) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN4dmlc15LogMessageFatalD1Ev+0x3c) [0x7fd8fec99aac] [bt] (1) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(ZNK5mxnet2op13BatchNormProp10InferShapeEPSt6vectorIN4nnvm6TShapeESaIS4_EES7_S7+0x979) [0x7fd8ffbab989] [bt] (2) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(+0x16979e7) [0x7fd8ffb719e7] [bt] (3) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(+0x152be47) [0x7fd8ffa05e47] [bt] (4) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN5mxnet4exec10InferShapeEN4nnvm5GraphESt6vectorINS1_6TShapeESaIS4_EERKSs+0x83b) [0x7fd8ffa07afb] [bt] (5) /home/pdl/workspace2/ylj/MXNet/mxnet/python/mxnet/../../lib/libmxnet.so(MXSymbolInferShape+0x17ed) [0x7fd8ff99f62d] [bt] (6) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(ffi_call_unix64+0x4c) [0x7fd90e97857c] [bt] (7) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(ffi_call+0x1f5) [0x7fd90e977cd5] [bt] (8) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(_ctypes_callproc+0x3e6) [0x7fd90e96f376] [bt] (9) /home/pdl/anaconda2/lib/python2.7/lib-dynload/_ctypes.so(+0x9db3) [0x7fd90e966db3] It seems like something wrong with the image shape, but I don't know how to solve it. Help, please! hello , i don't download DET and VID, can you share it?thank you. hello , i don't download DET and VID, can you share it?thank you. http://bvisionweb1.cs.unc.edu/ilsvrc2015/download-videos-3j16.php#vid
gharchive/issue
2017-09-20T03:02:50
2025-04-01T06:39:41.284064
{ "authors": [ "mornfairy", "xuwentang", "yljylj" ], "repo": "msracver/Deep-Feature-Flow", "url": "https://github.com/msracver/Deep-Feature-Flow/issues/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
231764180
Illegal memory access with gpu_mask_voting It seems to me that for some images, gpu_mask_voting causes an illegal memory access (the problem still exists after the latest fix on mask merge). The problem goes away if we use cpu_mask_voting instead. I have also encountered this... thanks for the solution.
gharchive/issue
2017-05-27T01:09:37
2025-04-01T06:39:41.285444
{ "authors": [ "realwecan", "rnunziata" ], "repo": "msracver/FCIS", "url": "https://github.com/msracver/FCIS/issues/16", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
170152426
Average calculation in legend First of all, thank you very much for this plugin @mtanda, it is very useful. I noticed a problem regarding the average calculation in the legend. In my case I should have a voltage value close to 230 V, but Grafana shows me out-of-range, nonsense values (greater than 3000 V). Maximum and minimum, on the other hand, work fine. I think I have found the problem. The average calculation is bound to the bucket size parameter. If I change it, the average also changes. Thanks for the report. I found a bug. I'll fix it. I fixed the bug; I'll register the new version on grafana.net.
gharchive/issue
2016-08-09T12:21:48
2025-04-01T06:39:41.344536
{ "authors": [ "dstreppa", "mtanda" ], "repo": "mtanda/grafana-histogram-panel", "url": "https://github.com/mtanda/grafana-histogram-panel/issues/26", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
520601839
Question Hey, I was wondering if it is possible to position the barlabel above the bar when the value is positive and below the bar when the value is negative. My code for the bar chart options plugins: [ Chartist.plugins.ctBarLabels({ textAnchor: 'middle', labelClass: 'ct-label', showZeroLabels: true, labelOffset: { x: 0, y: -15 }, }) ], positions the label of positive values with an offset of -15 slightly above the bars. When I have negative values it looks like this: I would like to put the labels below the bars; please consider adding example code on how to achieve this. Best regards, flix Hmm... I don't think I ever handled this use-case. I'll have to sit down and work out a good solution. There's likely a way to do this without the plugin using css classes, but that kind of defeats the point. Is that data-set private? Can you post up the code that you're making that graph with and its data set? Hey, thanks for the quick response. This is my code: var data = { labels: ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Nov", "Dez"], series: [[6,0,1,-9,2,-46,-2,5,2,0,6,1],[1,2,6,-8,1,3,1,4,-1,57,1,0]] }; var options = { textAnchor: 'middle', divisor: 4, seriesBarDistance: 10, chartPadding: { left: 40, top: 40, bottom: 40 }, axisX: { showGrid: false, showLabel: true, labelClass: 'ct-label', labelOffset: { x: 0, y: 15 }, }, axisY: { showGrid: true, onlyInteger: true, labelClass: 'ct-label', labelOffset: { x: -15, y: 0 }, labelInterpolationFnc: function(value) { return value + '€'; } }, plugins: [ Chartist.plugins.ctBarLabels({ textAnchor: 'middle', labelClass: 'ct-label', showZeroLabels: true, labelOffset: { x: 0, y: -15 }, }) ], }; new Chartist.Bar('#bar-chart', data, options).on('draw', function(data) { if (data.type == 'bar') { data.element.animate({ y2: { dur: '0.4s', from: data.y1, to: data.y2 } }); } }); I could easily manage this problem if the dataset were consistent, but the dataset is changing all the time...
so I need a solution that fits my needs. Hey, Were you able to solve my problem? ;) Best regards No, I'm sorry. I haven't had the time. Holidays are always tight for me. I wish you had of caught me last month. I'll try to get this banged out, but the soonest I'd probably be able to take a look is two weekends from now. :( Thanks for your chart data though, I'll get to it when I can. Alright. Thanks :)
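One way to get sign-dependent placement, sketched here as a small helper rather than the plugin's actual API (wiring a per-bar offset into chartist-plugin-barlabels is an assumption), is to compute the label offset from the bar's value:

```javascript
// Sketch: choose a per-bar label offset from the sign of the value.
// `gap` is the distance in px between the bar end and the label.
// Positive (and zero) bars get the label above the bar, which is a
// negative y offset in SVG coordinates; negative bars get it below.
// The extra 10px on the negative side roughly accounts for text height.
function labelOffsetFor(value, gap) {
  if (value >= 0) {
    return { x: 0, y: -gap }; // above the bar
  }
  return { x: 0, y: gap + 10 }; // below the bar
}
```

The result could then be fed into the plugin's labelOffset option for each drawn bar.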
gharchive/issue
2019-11-10T13:37:05
2025-04-01T06:39:41.352804
{ "authors": [ "flixoflax", "mtgibbs" ], "repo": "mtgibbs/chartist-plugin-barlabels", "url": "https://github.com/mtgibbs/chartist-plugin-barlabels/issues/7", "license": "WTFPL", "license_type": "permissive", "license_source": "github-api" }
1570930659
🛑 https://minfo.ps is down In 11b28e9, https://minfo.ps (https://minfo.ps) was down: HTTP code: 0 Response time: 0 ms Resolved: https://minfo.ps is back up in aba8181.
gharchive/issue
2023-02-04T12:53:36
2025-04-01T06:39:41.370903
{ "authors": [ "mtitservice" ], "repo": "mtitservice/site", "url": "https://github.com/mtitservice/site/issues/270", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1634327738
🛑 https://moj.pna.ps is down In 81ddb97, https://moj.pna.ps (https://moj.pna.ps) was down: HTTP code: 0 Response time: 0 ms Resolved: https://moj.pna.ps is back up in b8fbdb9.
gharchive/issue
2023-03-21T16:46:07
2025-04-01T06:39:41.374088
{ "authors": [ "mtitservice" ], "repo": "mtitservice/site", "url": "https://github.com/mtitservice/site/issues/3952", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1689021349
🛑 https://db.hcys.ps is down In 6b66db0, https://db.hcys.ps (https://db.hcys.ps) was down: HTTP code: 0 Response time: 0 ms Resolved: https://db.hcys.ps is back up in ab18cc9.
gharchive/issue
2023-04-28T19:15:13
2025-04-01T06:39:41.377109
{ "authors": [ "mtitservice" ], "repo": "mtitservice/site", "url": "https://github.com/mtitservice/site/issues/9669", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
171828879
Disable join button if there is no initiator I am developing a video class app. Once the staff started the class means, i will send mail to all students who are all subscribed to that class. How do i disable the Join button in Student side if the Staff is not available, i mean if video class is not started. I'm using RTCMultiConnection V2.2.2 v3 users can use checkPresence method: <button id="btn-join-room" disabled></button> <script> connection.checkPresence('room-id', function(isRoomExists, roomid) { if (isRoomExists === false) { document.querySelector('#btn-join-room').disabled = true; } else { document.querySelector('#btn-join-room').disabled = false; } }); </script> And v2 users can use sendCustomMessage: <button id="btn-join-room" disabled></button> <script> var isRoomOpened = false; connection.onCustomMessage = function(message) { if (message.isRoomExists === false) { document.querySelector('#btn-join-room').disabled = true; isRoomOpened = true; } else if (message.isRoomExists === true) { document.querySelector('#btn-join-room').disabled = false; isRoomOpened = true; } if (message.checkIfRoomExists === true && message.roomid === connection.sessionid) { connection.sendCustomMessage({ isRoomExists: true, roomid: 'room-id' }); } }; connection.connect(); (function looper() { connection.sendCustomMessage({ checkIfRoomExists: true, roomid: 'room-id' }); if (isRoomOpened === true) return; setTimeout(looper, 3000); // check after every 3-seconds })(); </script> One more question. How do i rejoin the student if he come back again???
gharchive/issue
2016-08-18T06:43:46
2025-04-01T06:39:41.538260
{ "authors": [ "muaz-khan", "nasr18" ], "repo": "muaz-khan/RTCMultiConnection", "url": "https://github.com/muaz-khan/RTCMultiConnection/issues/216", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
187551409
ICE fails resolving relay when using desktop browser on one side and crosswalk or webview Android on the other side

I used the library in a project and it works fine between browsers from different networks (tried home network + 4G once). It even works between a desktop browser and a mobile browser (Chrome). Unfortunately, I cannot make it work when I deploy the APK on Android. I use Crosswalk version 22. I deployed Coturn on a DigitalOcean account. Tried both turn and turns.

By looking at the web browser console on the desktop, I see:

browser to browser: the remote ICE candidates are displayed correctly; the last one, which matters, is the relay one. The connection was established.

browser to Android WebView or Crosswalk: I see only one remote ICE candidate of type host, then a short pause, and then the connectivity attempt is dropped.

It did happen that 3 times out of many, many attempts it worked; that was when the remote candidate displayed was a relay one. I wasn't able to keep it working consistently, and I cannot reproduce it to make it work again. What shall I do to make it work?

Hi Muaz-Khan, were you able to create an Android app that either broadcasts or displays video content using RTCMultiConnection? If so, did it work when the two involved parties are in separate networks?

Hi @raducrisan1. The same thing happens to me. I have an Angular application and I'm also using Crosswalk 22. In the app console, sometimes only host ICE candidates show up, and other times relay ones show up as well. Did you manage to find a solution?

Hi, yes. I created my own signaling server. Cheers, Radu

And what was the problem? Do you have a Skype ID, please? Thanks

Hi, how can I allow video calls through a WebView on Android and iOS?
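The symptom the reporters describe, seeing only host candidates versus also getting a relay candidate, can be checked programmatically. A minimal sketch that classifies a candidate by the `typ` field of its SDP candidate string (standard candidate-attribute grammar; treating a missing field as null is an assumption):

```javascript
// Sketch: extract the candidate type ("host", "srflx", "prflx",
// "relay") from an SDP candidate line, or null if the field is absent.
function candidateType(candidateLine) {
  const match = /\styp\s+(\S+)/.exec(candidateLine);
  return match ? match[1] : null;
}

// The TURN server is actually reachable only if at least one relay
// candidate ever shows up.
function hasRelay(candidateLines) {
  return candidateLines.some((line) => candidateType(line) === "relay");
}
```

Logging candidateType(event.candidate.candidate) in onicecandidate on both sides would confirm whether the Crosswalk side ever produces a relay candidate at all.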
gharchive/issue
2016-11-06T09:43:41
2025-04-01T06:39:41.545434
{ "authors": [ "mado1987", "raducrisan1", "webleb" ], "repo": "muaz-khan/RTCMultiConnection", "url": "https://github.com/muaz-khan/RTCMultiConnection/issues/283", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
547984395
Is it possible to use this library in my KITE project?

How can I get these stats (bandwidth usage, packets lost, local/remote IP addresses and ports, type of connection, etc.) and use them in my KITE project?

Hi guys, I did it last year.
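The stats in question ultimately come from an RTCStats report. A hedged sketch of summarizing packet loss from such a report; the field names follow the W3C stats identifiers (inbound-rtp, packetsLost, packetsReceived), but treat the exact shape as an assumption for older browsers:

```javascript
// Sketch: summarise packet loss from an array of RTCStats-like
// objects (what iterating over an RTCStatsReport would yield).
function packetLossRatio(statsList) {
  let lost = 0;
  let received = 0;
  for (const s of statsList) {
    if (s.type === "inbound-rtp") {
      lost += s.packetsLost || 0;
      received += s.packetsReceived || 0;
    }
  }
  const total = lost + received;
  return total === 0 ? 0 : lost / total;
}
```

The same reduction pattern extends to bytes sent/received for bandwidth, or to candidate-pair stats for addresses, ports, and connection type.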
gharchive/issue
2020-01-10T10:04:05
2025-04-01T06:39:41.546891
{ "authors": [ "Talbot3", "valdrinnz" ], "repo": "muaz-khan/getStats", "url": "https://github.com/muaz-khan/getStats/issues/27", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1594226204
Save and Load: New Json

- [x] Save Parse New Json

Primary: @Joperezc
Secondary: @nrhoopes
Size: MEDIUM
Actual Size: MEDIUM
Actual Time: 1HR 30MIN

Updated Json parse in the form {guessedWords, wordList, puzzleLetters, requiredLetter, currentPoints, maxPoints}
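The save format named above can be round-tripped with a small validator. A sketch (the field names are taken from the issue; the validation behavior is an assumption, and the project's own loader may differ):

```javascript
// Fields the issue lists for the new save format.
const SAVE_FIELDS = [
  "guessedWords", "wordList", "puzzleLetters",
  "requiredLetter", "currentPoints", "maxPoints",
];

// Sketch: parse a saved puzzle and fail loudly on missing fields,
// so a malformed save file is caught at load time instead of later.
function loadSave(json) {
  const data = JSON.parse(json);
  for (const field of SAVE_FIELDS) {
    if (!(field in data)) {
      throw new Error("missing field: " + field);
    }
  }
  return data;
}
```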
gharchive/issue
2023-02-21T23:26:33
2025-04-01T06:39:41.559212
{ "authors": [ "AitorCantero", "Joperezc" ], "repo": "mucsci-students/2023sp-420-SNEK", "url": "https://github.com/mucsci-students/2023sp-420-SNEK/issues/80", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2001508819
local build run failed for rwkv backend LocalAI version: LocalAI version v1.40.0-38-g3e35b20 (3e35b20a0201db39ac7973c4e2b6b528f3d044b2) Environment, CPU architecture, OS, and Version: mac m1 Darwin Kernel Version 22.6.0: arm 64 Describe the bug load rwkv model failed, chat api request return 500: Request: curl http://localhost:8989/v1/chat/completions -H "Content-Type: application/json" -d '{ "model": "outv4.bin", "messages": [{"role": "user", "content": "How are you?"}], "temperature": 0.9, "top_p": 0.8, "top_k": 80 }' {"error":{"code":500,"message":"could not load model - all backends returned error: 17 errors occurred:\n\t* could not load model: rpc error: code = Canceled desc = \n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = could not load model\n\t* could not load model: rpc error: code = Unknown desc = unable to load model\n\t* grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/stablediffusion. 
some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n\t* grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/piper. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n\n","type":""}}% To Reproduce Expected behavior load rwkv model success, api return sucess Logs LOG : 3:00PM DBG [rwkv] Attempting to load 3:00PM DBG Loading model rwkv from outv4.bin 3:00PM DBG Loading model in memory from file: models/outv4.bin 3:00PM DBG Loading Model outv4.bin with gRPC (file: models/outv4.bin) (backend: rwkv): {backendString:rwkv model:outv4.bin threads:4 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0x1400046e1e0 externalBackends:map[] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false} 3:00PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/rwkv 3:00PM DBG GRPC Service for outv4.bin will be running at: '127.0.0.1:64113' 3:00PM DBG GRPC Service state dir: /var/folders/yj/wt9vbj1s34vb69qyywcgwxlh0000gn/T/go-processmanager2591975212 3:00PM DBG GRPC Service Started rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:64113: connect: connection refused" 3:00PM DBG GRPC(outv4.bin-127.0.0.1:64113): stderr 2023/11/20 15:00:07 gRPC Server listening at 127.0.0.1:64113 3:00PM DBG GRPC Service Ready 3:00PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:} sizeCache:0 unknownFields:[] Model:outv4.bin ContextSize:512 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:4 LibrarySearchPath:/tmp/localai/backend_data/backend-assets/gpt4all RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:models/outv4.bin Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false 
CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} 3:00PM DBG [rwkv] Fails: could not load model: rpc error: code = Unknown desc = could not load model Additional context Hi @redstarxz we have some examples of rwkv in https://github.com/go-skynet/model-gallery/blob/main/rwkv-raven-1b.yaml. Please check the example. I am not if outv4.bin's format is correct. Hi @redstarxz we have some examples of rwkv in https://github.com/go-skynet/model-gallery/blob/main/rwkv-raven-1b.yaml. Please check the example. I am not if outv4.bin's format is correct. Thanks, Finaly, I found the reason, for rwkv backend, the token file name must fit with the model file name...
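The fix the reporter found, that the tokenizer file name has to line up with the model file name, can be sketched as a filename rule. The ".tokenizer.json" suffix used here is an assumption for illustration; the point is only deriving the expected name from the model path:

```javascript
// Sketch: derive the tokenizer path a backend might look for, given
// the model file. The exact suffix is an assumption; the rule the
// reporter hit is only that both names share the same stem.
function expectedTokenizerPath(modelPath) {
  return modelPath + ".tokenizer.json";
}

// Quick pre-flight check before handing both paths to the backend.
function namesMatch(modelPath, tokenizerPath) {
  return tokenizerPath === expectedTokenizerPath(modelPath);
}
```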
gharchive/issue
2023-11-20T07:06:18
2025-04-01T06:39:41.572766
{ "authors": [ "Aisuko", "redstarxz" ], "repo": "mudler/LocalAI", "url": "https://github.com/mudler/LocalAI/issues/1307", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2245629167
Add new code syntax

I really hope that the plugin can support more syntax highlighting. I have a lot of HLSL code in my notes, but Obsidian does not support highlighting for it by default. I wonder if this plugin could be expanded to support more syntax in the future, such as supporting all the latest syntax from PrismJS.

Theoretically it could, but writing the grammar for syntax highlighting is very (and I mean really) complicated. It could take a lot of time to write the grammar for a single language if you are not familiar with it. And I am not. If you want to write the grammar, I am happy to include it in the plugin.

@Porco24 Did you mean editing mode or reading mode? Or both? During the development of the current release, I had to do something with syntax highlighting. If reading mode would be enough for you, that could probably be solved. But editing mode is really challenging. I will take a look, maybe there is a "relatively" easy way.

@mugiwara85 Hi, I mean both. If this feature is implemented, it would be very helpful to me. I hope to be able to download highlight rules from the internet and import them into Obsidian.

Hi @StarkSkywalker, some basics :)

The plugin does not add syntax highlighting to code blocks. Obsidian uses two separate methods for providing syntax highlighting. In editor mode, it uses CodeMirror 6, and in reading mode it uses Prism.js. This is the reason that the syntax highlighting differs between editor mode and reading mode. Prism.js supports the makefile language, which is why it works in reading mode as shown below. And this is also the reason why it doesn't work in editor mode: CodeMirror 6 does not support it. It is possible to create and add syntax highlighting for new languages in CodeMirror 6, but it is very complicated. You basically have to write the grammar for every language, and that is complicated and time consuming. But! You are in luck! For Makefile there is a package I can add. I'll check it out and contact you later.

@mugiwara85 Oh! Thank you so much, my friend! Your kindness knows no bounds. (My English is a little terrible, but this is the first time I've interacted with a webmaster on GitHub! The joy is beyond words! Long live the spirit of the Internet!) I actually learned the basics first through your reply before I tried it, and it turned out just like you said it would in reading mode.

Don't worry about your English. It's good (I am also not a native English speaker). I just noticed that the package which adds Makefile syntax supports only basic syntax, but nevertheless it's more than nothing. I'll report back later if I find out how I can add it.

@StarkSkywalker Unfortunately, I have bad news for you. I just tried to install that package I mentioned last time, but it won't work. The reason is that, as it turned out, Obsidian uses CodeMirror 6, but not for everything. Specifically, for syntax highlighting it uses CodeMirror 5 modes. And the package is written for CodeMirror 6, so it won't work :( As far as I could tell, the package would have only added very basic syntax highlighting. Basically, comments and that's it. It is also important to mention that Makefile is apparently one of the hardest languages, as I couldn't find any grammar for it. Multiple people are asking, but there are just some custom implementations.

But it is also important to mention that CodeMirror 5 syntax highlighting might work. And this "might" is really just a guess. Unfortunately, I couldn't find a list of which languages CodeMirror 6 supports, but CodeMirror 5 supports these: https://github.com/codemirror/codemirror5/tree/master/mode Is there anything interesting for you? I might be able to import that. (No guarantee)
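The "only comments and basics" level of Makefile support mentioned above is easy to picture as a coarse line classifier. A hedged sketch, not CodeMirror code, just the token classes such a minimal mode would emit:

```javascript
// Sketch: classify a Makefile line into a coarse token class, roughly
// what a very basic editor mode would be able to highlight.
function classifyMakefileLine(line) {
  if (/^\s*#/.test(line)) return "comment";            // "# ..."
  if (/^\t/.test(line)) return "recipe";               // tab-indented command
  if (/^[A-Za-z_][A-Za-z0-9_]*\s*[:?+]?=/.test(line)) {
    return "assignment";                               // "CC := gcc"
  }
  if (/^[^\s:#][^:#]*:/.test(line)) return "target";   // "all: deps"
  return "plain";
}
```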
gharchive/issue
2024-04-16T09:54:53
2025-04-01T06:39:41.588757
{ "authors": [ "Porco24", "StarkSkywalker", "mugiwara85" ], "repo": "mugiwara85/CodeblockCustomizer", "url": "https://github.com/mugiwara85/CodeblockCustomizer/issues/82", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
338216743
[emotion] Unable to override TextField's label focus class.

First, thank you for this wonderful library. Please see this minimal example. As you can see, there is no change in the focus label color: although my custom className is in a higher order, the two class names are combined together, which causes my custom className to be overridden. Sorry for not typing code, because I can't trigger focus when I copy. I think this is a bug, please help, thanks.

@rockmandash Here is the right approach:

    <TextField
      id="pid"
      label="test"
      InputLabelProps={{
        FormLabelClasses: {
          root: css`
            &.focused {
              color: red;
            }
          `,
          focused: "focused"
        }
      }}
    />

https://codesandbox.io/s/20mo9rkl0y

Maybe we should be adding an example with Emotion in the documentation? I have prepared this codesandbox. Do you want to work on it? https://codesandbox.io/s/yw93kl7y0j And maybe you should use the theming of material-ui. Actually, documenting react-emotion would be good too.

@oliviertassinari Wow, thank you for your fast reply! It's working right now! Thank you so much. Documenting the emotion library is good. I don't know if I can work on it, but thank you so much for asking!

@oliviertassinari Maybe we should be adding an example with Emotion in the documentation? I think it makes sense to add a section to the style libraries guide on Emotion. I'd be happy to work on this!

@lukePeavey Awesome. We already have the codesandbox: https://codesandbox.io/s/yw93kl7y0j. We can add emotion to the list https://material-ui.com/guides/interoperability/. By popularity, I would say after styled-components but before glamorous.

Looking at the code, does it mean one has to also use those JSS bits when he wants to use emotion? I'd rather not pull in JSS in addition to emotion, but then maybe I'm missing something?

@markusgattol What JSS bits are you referring to?

@lukePeavey import JssProvider from "react-jss/lib/JssProvider"; for example, from the link @oliviertassinari posted in his solution.

I see what you mean... You need to configure the injection order so that Emotion styles are injected below JSS styles. This is necessary to ensure that Emotion styles have higher priority than the default material-ui styles (otherwise you need !important). JSS is already included in your project as a dependency of material-ui.

Reading through the docs for the last hour, I actually figured I'll not use emotion, because there's no need. JSS is included, as you said, and seems to be the first choice when it comes to branding, i.e. applying an individual style on top of components from material-ui. FYI, I've been using preact and preact-material-components for a while, but the amount of work necessary and the ongoing breaking changes made me drop the entire stack and move back to react and material-ui (the latter of which I haven't used before, because I switched from react to preact about 18 months ago for reasons of bundle size and speed, but that gap is becoming smaller and less important as time goes on).

@oliviertassinari should react-emotion be a separate top-level section in the interop guide or a subsection of emotion?

@lukePeavey a subsection?

Just in case someone else finds this issue later: I've created a component to simplify the material-ui CSS overriding process. You just need to wrap your whole application in this OverrideMaterialUICss component. This library is a wrapper component which only takes the children prop and renders it without any modification but just moving Material-UI's
This library is a wrapper component which only takes the children prop and renders it without any modification but just moving Material-UI's
gharchive/issue
2018-07-04T10:17:26
2025-04-01T06:39:41.603290
{ "authors": [ "janhoeck", "lukePeavey", "markusgattol", "nerdmax", "oliviertassinari", "rockmandash" ], "repo": "mui-org/material-ui", "url": "https://github.com/mui-org/material-ui/issues/12054", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
561562550
Support "auto" for theme.spacing() - theme.spacing("auto")

[X] I have searched the issues of this repository and believe that this is not a duplicate.

Summary 💡

I think it will be more convenient if the spacing function supports the "auto" value.

    const useStyles = makeStyles((theme: Theme) => ({
      searchBar: {
        width: 1400,
        margin: theme.spacing(1, "auto", 1, 4),
      },
      layoutButtons: {
        marginRight: theme.spacing(4)
      }
    }));

Examples 🌈

In my case,

    const useStyles = makeStyles((theme: Theme) => ({
      searchBar: {
        width: 1400,
        marginRight: "auto",
        marginBottom: theme.spacing(7),
        marginLeft: theme.spacing(4),
      },
      layoutButtons: {
        marginRight: theme.spacing(4)
      }
    }));

It can be shortened like this:

    const useStyles = makeStyles((theme: Theme) => ({
      searchBar: {
        width: projectSearchBar,
        margin: theme.spacing(0, "auto", 7, 4),
      },
      layoutButtons: {
        marginRight: theme.spacing(4)
      }
    }));

@hckhanh This sounds like a great idea. Do you want to work on it? :) It would be a good opportunity to unify the behavior between https://github.com/mui-org/material-ui/blob/07b725e54cdec560dab06f5a662d2869eca9ffb2/packages/material-ui-system/src/spacing.js#L77-L116 and https://github.com/mui-org/material-ui/blob/07b725e54cdec560dab06f5a662d2869eca9ffb2/packages/material-ui/src/styles/createSpacing.js#L3-L34

hi @oliviertassinari, I will make a PR for this 👍

We can support any string.
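The request boils down to letting the spacing transform pass string tokens through untouched. A minimal sketch of such a spacing() using an 8px base unit like the default Material-UI theme; everything else here is an assumption, not the library's actual implementation:

```javascript
// Sketch: spacing() that scales numbers by a base unit but passes
// strings such as "auto" straight through, then joins the arguments
// into one CSS shorthand value.
function createSpacing(unit) {
  return function spacing(...args) {
    const vals = args.length === 0 ? [1] : args;
    return vals
      .map((v) => (typeof v === "number" ? `${v * unit}px` : v))
      .join(" ");
  };
}
```

With this, the issue's example margin becomes a single call:

```javascript
const spacing = createSpacing(8);
const margin = spacing(0, "auto", 7, 4); // "0px auto 56px 32px"
```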
gharchive/issue
2020-02-07T10:49:44
2025-04-01T06:39:41.608965
{ "authors": [ "hckhanh", "oliviertassinari" ], "repo": "mui-org/material-ui", "url": "https://github.com/mui-org/material-ui/issues/19601", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
804925067
Add inputRef to Select

[x] I have searched the issues of this repository and believe that this is not a duplicate.

Summary 💡

It would be nice to have inputRef on Select so that it could easily work with libraries like react-hook-form.

Examples 🌈

    import React from "react";
    import { useForm } from "react-hook-form";
    import { Select } from "@material-ui/core";

    const MySelect = () => {
      const { register, handleSubmit } = useForm();
      return (
        <form onSubmit={handleSubmit(console.log)}>
          {/* should log { mySelect: "1" } on submit */}
          <Select native inputRef={register} name="mySelect">
            <option value="1">1</option>
            <option value="2">2</option>
            <option value="3">3</option>
          </Select>
        </form>
      );
    };

    export default MySelect;

Motivation 🔦

As I mentioned, this would make it easier to work with libraries like react-hook-form.

Use it with <Controller> like this: https://codesandbox.io/s/rhf-mui-select-forked-7xl8z?file=/src/index.js:802-826

    <Controller
      render={({ ref, onChange }) => (
        <Select inputRef={ref} onChange={onChange}>
          <MenuItem value="">None</MenuItem>
          // (...)
gharchive/issue
2021-02-09T21:08:15
2025-04-01T06:39:41.612955
{ "authors": [ "Andrew5569", "maliboo" ], "repo": "mui-org/material-ui", "url": "https://github.com/mui-org/material-ui/issues/24849", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
997166737
[@mui/styled-engine-sc] The checks break styled-components' API

The current implementation of styled-engine-sc breaks styled-components' API when not in production.

[x] The issue is present in the latest release.
[x] I have searched the issues of this repository and believe that this is not a duplicate.

Current Behavior 😯

    styled(MyComponent).attrs({})`` // Error, .attrs is not defined

Expected Behavior 🤔

According to the styled-components API, this should be allowed.

Steps to Reproduce 🕹

    import React from "react";
    import { AppBar as MuiAppBar } from "@mui/material";
    import styled from "@mui/styled-engine-sc";

    const AppBar = styled(MuiAppBar).attrs({
      position: "static",
    })`
      box-shadow: none;
    `;

If NODE_ENV == "production", this works. If NODE_ENV != "production", attrs is undefined.

Cause

In @mui/styled-engine-sc/index.js there is a specific check done against the function's parameter. But this overrides the default styledFactory object, breaking the styled-components API.

Recommended action

I'm all for extra checks, but until there is a better way to do this (sorry, can't think of one right now), this condition should be disabled.

We don't support .attrs(). I don't think that we should either, to maximize the interoperability between the different engines (not supported by emotion, goober, etc.). Could importing from styled-components directly if you really need this API work? However, not supporting the behavior in prod, like in dev, sounds better for the DX: no surprises.

Agree, we don't want to support the different APIs. I will create a PR for ensuring the same behavior is persisted in prod mode too.
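The outcome of the discussion, same behavior in dev and prod, can be sketched as wrapping the factory so unsupported chainable APIs fail identically regardless of NODE_ENV. This is illustrative only; the function names are assumptions, not the package's actual internals:

```javascript
// Sketch: expose an .attrs that throws the same way in every
// environment, instead of existing only when dev checks are skipped.
function withConsistentAttrs(styledFactory) {
  styledFactory.attrs = function attrs() {
    throw new Error(
      "'.attrs' is not supported by @mui/styled-engine-sc; " +
        "import styled from 'styled-components' directly if you need it."
    );
  };
  return styledFactory;
}
```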
gharchive/issue
2021-09-15T14:40:18
2025-04-01T06:39:41.618567
{ "authors": [ "mnajdova", "oliviertassinari", "yleflour" ], "repo": "mui-org/material-ui", "url": "https://github.com/mui-org/material-ui/issues/28364", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
364624291
[StepConnector] Customize connector based on internal states

Closes #13010

This can be accomplished with the following example:

    <Stepper
      connector={
        <StepConnector
          classes={{
            lineActive: classes.lineActive,
            lineCompleted: classes.lineCompleted
          }}
        />
      }
    >
      {/* ... */}
    </Stepper>

or by using createMuiTheme:

    createMuiTheme({
      overrides: {
        MuiStepConnector: {
          completed: {
            '& span': {
              borderColor: indigo[500],
            },
          },
        },
      },
    });

Before / After (screenshots)

@oliviertassinari Would it be possible to include an error class here as well?
gharchive/pull-request
2018-09-27T19:38:57
2025-04-01T06:39:41.620905
{ "authors": [ "colespencer1453", "spirosikmd" ], "repo": "mui-org/material-ui", "url": "https://github.com/mui-org/material-ui/pull/13023", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2254126131
[core] Update monorepo Propagate https://github.com/mui/material-ui/pull/41901 Preview: https://deploy-preview-333--base-ui.netlify.app/base-ui/getting-started/ On hold, waiting for #326 to be merged
gharchive/pull-request
2024-04-19T22:28:22
2025-04-01T06:39:41.622717
{ "authors": [ "oliviertassinari" ], "repo": "mui/base-ui", "url": "https://github.com/mui/base-ui/pull/333", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1514550568
Syntax error: "@next/font" requires SWC although Babel is being used due to a custom babel config being present.

Duplicates

[X] I have searched the existing issues

Latest version

[X] I have tested the latest version

Steps to reproduce 🕹

Link to live example:

Steps: Open the example on this page in StackBlitz:

Current behavior 😯

    ❯ npm install && npx next dev
    warn preInstall No description field
    warn preInstall No repository field
    warn preInstall No license field
    ┌ [1/4] 🔍 Resolving dependencies
    └ Completed in 0.146s
    ┌ [2/4] 🚚 Fetching dependencies
    │ info pruneDeps Excluding 8 dependencies. For more information use `--verbose`.
    └ Completed in 1.945s
    ┌ [3/4] 🔗 Linking dependencies
    └ Completed in 3.256s
    info security We found `install` scripts which turbo skips for security reasons. For more information see https://turbo.sh/install-scripts.
    └─ core-js-pure@3.27.1
    success Saved lockfile "package-lock.json"
    success Updated "package.json"
    success Install finished in 5.417s
    ready - started server on 0.0.0.0:3000, url: http://localhost:3000
    info - Disabled SWC as replacement for Babel because of custom Babel configuration ".babelrc" https://nextjs.org/docs/messages/swc-disabled
    info - Using external babel configuration from /home/projects/xoribkgmv.github/.babelrc
    error - ./src/theme.ts:1:1
    Syntax error: "@next/font" requires SWC although Babel is being used due to a custom babel config being present.
    Read more: https://nextjs.org/docs/messages/babel-font-loader-conflict
    ^C
    ~/projects/xoribkgmv.github 5m 21s

Expected behavior 🤔

No SyntaxError:

    error - ./src/theme.ts:1:1
    Syntax error: "@next/font" requires SWC although Babel is being used due to a custom babel config being present.
    Read more: https://nextjs.org/docs/messages/babel-font-loader-conflict

Context 🔦

No response

Your environment 🌎

npx @mui/envinfo

    ❯ npx @mui/envinfo
    success Install finished in 3.908s
    System:
      OS: Linux 5.0 undefined
    Binaries:
      Node: 16.14.2 - /usr/local/bin/node
      Yarn: 1.22.19 - /usr/local/bin/yarn
      npm: 7.17.0 - /usr/local/bin/npm
    Browsers:
      Chrome: Not Found
      Firefox: Not Found
    npmPackages:
      @emotion/react: latest => 11.10.5
      @emotion/styled: latest => 11.10.5
      @mui/base: 5.0.0-alpha.112
      @mui/core-downloads-tracker: 5.11.2
      @mui/icons-material: latest => 5.11.0
      @mui/material: latest => 5.11.2
      @mui/private-theming: 5.11.2
      @mui/styled-engine: 5.11.0
      @mui/system: 5.11.2
      @mui/types: 7.2.3
      @mui/utils: 5.11.2
      @types/react: latest => 18.0.26
      react: latest => 18.2.0
      react-dom: latest => 18.2.0
      typescript: latest => 4.9.4

The .babelrc file does not exist in the folder; it's strange why it is there when opened with StackBlitz. It works as expected in CodeSandbox, though. It shouldn't happen locally, as this file does not exist.

Hi, if you are using Next.js 13, some features like @next/font are compiled with SWC, so you need to configure SWC instead of Babel. SWC documentation: https://swc.rs/docs/getting-started. If you share the .babelrc, I can help to make a migration to SWC.
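The error fires because Next.js falls back from SWC to Babel whenever any custom Babel config is present in the project. The detection idea can be sketched as a simple filename check; the list of config filenames below is an assumption, since Next.js checks several variants:

```javascript
// Config files whose mere presence makes Next.js switch from SWC to
// Babel (and therefore break "@next/font"). Illustrative list only.
const BABEL_CONFIG_FILES = [
  ".babelrc", ".babelrc.js", ".babelrc.json",
  "babel.config.js", "babel.config.json",
];

// Sketch: given the project's file names, report which ones would
// disable SWC.
function conflictingBabelConfigs(fileNames) {
  return fileNames.filter((name) => BABEL_CONFIG_FILES.includes(name));
}
```

Running this over the project root would have pointed straight at the stray .babelrc that StackBlitz injected.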
gharchive/issue
2022-12-30T14:10:40
2025-04-01T06:39:41.628883
{ "authors": [ "behrangsa", "marciofaria-git", "mnajdova" ], "repo": "mui/material-ui", "url": "https://github.com/mui/material-ui/issues/35673", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2391117163
Perhaps a more correct sizeof calculation for memcpy() @matttbe, @mjmartineau, am I correct in assuming that in this code, memcpy uses a different size and could possibly underflow or overflow? I don't know the code perfectly, so I'm asking you. If I am mistaken, just discard the PR changes. Thanks for the PR, too! It's always good to have more eyes on the code. Much appreciated! @GermanAizek thank you for this PR. @ossama-othman thank you for the complete reply! I agree with you, we cannot replace the sizeof() for IPv6. I guess we can then close this PR.
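The size-mismatch concern in this PR can be illustrated in isolation. Below is a hedged sketch using Python's ctypes (memmove standing in for memcpy, with AddrV4/AddrV6 as hypothetical stand-ins for differently sized sockaddr-style structures, not mptcpd's actual types): sizing the copy by the smaller type silently truncates, while sizing it by the source object copies everything.

```python
import ctypes

# Hypothetical stand-ins for two address structures of different sizes;
# illustrative only, not mptcpd's real sockaddr handling.
class AddrV4(ctypes.Structure):
    _fields_ = [("family", ctypes.c_uint16), ("data", ctypes.c_uint8 * 14)]

class AddrV6(ctypes.Structure):
    _fields_ = [("family", ctypes.c_uint16), ("data", ctypes.c_uint8 * 26)]

src = AddrV6(family=10, data=(ctypes.c_uint8 * 26)(*range(26)))
dst = AddrV6()  # zero-initialized by ctypes

# Bug pattern: a memcpy sized by the *smaller* type truncates the copy.
ctypes.memmove(ctypes.byref(dst), ctypes.byref(src), ctypes.sizeof(AddrV4))
truncated = list(dst.data)

# Correct pattern: derive the byte count from the actual source object.
ctypes.memmove(ctypes.byref(dst), ctypes.byref(src), ctypes.sizeof(src))
full = list(dst.data)

print(ctypes.sizeof(AddrV4), ctypes.sizeof(AddrV6))  # 16 28
print(truncated[12:16])  # [12, 13, 0, 0] -- bytes past the short copy stay zero
print(full[12:16])       # [12, 13, 14, 15]
```

The reverse direction (copying into a smaller destination while sizing by the larger type) is the overflow case; either way, the safe habit is to take sizeof from the object actually being copied rather than from a fixed type name.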
gharchive/pull-request
2024-07-04T15:36:51
2025-04-01T06:39:41.869587
{ "authors": [ "GermanAizek", "matttbe", "ossama-othman" ], "repo": "multipath-tcp/mptcpd", "url": "https://github.com/multipath-tcp/mptcpd/pull/294", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1932008832
Remove assignment to the variable as the value is never used(1) Breaking change Proposed change Type of change [ ] Dependency upgrade [ ] Bugfix (non-breaking change which fixes an issue) [ ] New integration (thank you!) [ ] New feature (which adds functionality to an existing integration) [ ] Deprecation (breaking change to happen in the future) [ ] Breaking change (fix/feature causing existing functionality to break) [ ] Code quality improvements to existing code or addition of tests Additional information This PR fixes or closes issue: fixes # This PR is related to issue: Link to documentation pull request: Checklist [ ] The code change is tested and works locally. [ ] Local tests pass. Your PR cannot be merged unless tests pass [ ] There is no commented out code in this PR. [ ] I have followed the development checklist [ ] I have followed the perfect PR recommendations [ ] The code has been formatted using Black (black --fast homeassistant tests) [ ] Tests have been added to verify that the new code works. If user exposed functionality or configuration variables are added/changed: [ ] Documentation added/updated for www.home-assistant.io If the code communicates with devices, web services, or third-party tools: [ ] The manifest file has all fields filled out correctly. Updated and included derived files by running: python3 -m script.hassfest. [ ] New or updated dependencies have been added to requirements_all.txt. Updated by running python3 -m script.gen_requirements_all. [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description. [ ] Untested files have been added to .coveragerc. To help with the load of incoming pull requests: [ ] I have reviewed two other open pull requests in this repository. Motivation: Removing dead stores, where a value is assigned to a variable but never used, is crucial for writing high-quality code. 
It not only makes the code easier to read and maintain but also ensures efficient use of resources. When unused variables clutter the code, it can confuse developers and make debugging more challenging. Additionally, performing calculations or retrieving values that are never used can lead to wasteful resource consumption, affecting the program's performance. Furthermore, dead stores may indicate a logic error, potentially causing unexpected behavior. By getting rid of these unused variables, developers can create more efficient and error-resistant code, ultimately leading to better software quality. In this particular code, I removed "device = entry.data[CONF_DEVICE]" as it is not used. looks good
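The pattern this PR removes can be sketched in a few lines of Python. ConfigEntry and CONF_DEVICE below are simplified stand-ins modeled on the Home Assistant snippet quoted above, not the actual integration code:

```python
CONF_DEVICE = "device"

class ConfigEntry:
    """Minimal stand-in for Home Assistant's ConfigEntry."""
    def __init__(self, data):
        self.data = data

def setup_before(entry):
    # Dead store: 'device' is assigned but never read afterwards.
    device = entry.data[CONF_DEVICE]  # noqa: F841 (flagged by flake8/pylint)
    return "setup-ok"

def setup_after(entry):
    # Same observable result with the unused assignment removed.
    return "setup-ok"

entry = ConfigEntry({CONF_DEVICE: "/dev/ttyUSB0"})
print(setup_before(entry), setup_after(entry))  # setup-ok setup-ok

# One subtlety: the "dead" lookup still had a side effect, namely raising
# KeyError when the key was missing. Removing it removes that failure mode.
try:
    setup_before(ConfigEntry({}))
    before_raises = False
except KeyError:
    before_raises = True
print(before_raises, setup_after(ConfigEntry({})))  # True setup-ok
```

That KeyError nuance is why linters flag dead stores for human review instead of auto-deleting them: the right-hand side of an unused assignment can still have observable effects worth checking before removal.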
gharchive/pull-request
2023-10-08T19:01:26
2025-04-01T06:39:41.926933
{ "authors": [ "GaneshSarla", "munterkalmsteiner" ], "repo": "munterkalmsteiner/core", "url": "https://github.com/munterkalmsteiner/core/pull/167", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1206895659
Plugin reports a token validation error. Following the steps at https://www.murphysec.com/docs/integrations/murphysec-jetbrains-plugin/#%E9%85%8D%E7%BD%AE%E6%8F%92%E4%BB%B6, the plugin shows a token validation error. Is your network reachable? If not, maybe add the WeChat of our operations colleague listed at the bottom of the README, and we'll help you resolve it.
I'm developing in an intranet environment, so I'm not sure whether the URL used for validation is reachable. Could you post the request address?
2022-04-18T10:10:10
2025-04-01T06:39:41.977313
{ "authors": [ "chncaption", "duzhenming" ], "repo": "murphysecurity/murphysec", "url": "https://github.com/murphysecurity/murphysec/issues/18", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1243598826
Heos by Marantz and Denon It doesn't appear to work with HEOS. What is not working? Please describe your steps... Anything helpful in your logs that gives me a clue? I had a quick peek at the source code of the HEOS integration in HA, and that should theoretically work fine with Music Assistant. So it would really help if you provided a step-by-step walkthrough of what you did, where it went wrong, and whether you see any errors somewhere. @chrismdann can you share some more info on what is not working, otherwise I'll have to close this report. Closed due to no response.
gharchive/issue
2022-05-20T20:41:59
2025-04-01T06:39:42.031485
{ "authors": [ "chrismdann", "marcelveldt" ], "repo": "music-assistant/hass-music-assistant", "url": "https://github.com/music-assistant/hass-music-assistant/issues/206", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1350725795
Can't update MA from HACS What version of Music Assistant has the issue? 2022.7.2 The problem After analysing the logs in HACS... it seems that I am trying without success to update MA with patch-2022.7.3. It seems this file is no longer present on GitHub. Is it possible to have it available in order to get a normal MA update? How to reproduce Just try to click on the update button Relevant log output 2022-08-24 15:53:34.883 ERROR (MainThread) [custom_components.hacs] <Integration music-assistant/hass-music-assistant> GitHub returned 404 for https://api.github.com/repos/music-assistant/hass-music-assistant/git/trees/patch-2022.7.3 "pushed_at": "2022-08-23T21:18:45", "releases": true, "selected_tag": "patch-2022.7.3", "version_installed": "2022.7.2", "last_fetched": 1661424301.722832 Additional information No response What version of Home Assistant Core are you running? 2022.8.6 What type of installation are you running? Home Assistant OS On what type of hardware are you running? ODROID You need to update MA to 2022.8.x as you are running HA 2022.8.6. That option should be in HACS? No, I can't update anything .... that's the main problem, for more than a month now.... When I click on update I get this error in the HACS log pointing to the missing file.... No, we can't bring that file back. Besides, it is an old version too. Can't you just remove Music Assistant completely from HA and HACS and reinstall? Or press the button "Update information" first? I tried all your options.... nothing worked, unfortunately .... :( Are you on HA version 2022.8.x? Yes, everything is updated on my side, latest HA, latest everything...... I don't know about the patch file mentioned in ./storage/hacs.repositories.... impossible to update it manually to another version...
Maybe try to update it manually? Download the zipfile for the latest release: https://github.com/music-assistant/hass-music-assistant/releases/download/2022.8.4/mass.zip and unpack it in the custom_components folder, overwriting the existing content. I already tried that option too :( :( In that case the only option is to remove HACS completely and reinstall. Also delete all the HACS-related files and folders. We can't help you on this, I'm afraid, as it is strictly speaking a HACS issue and not a MA issue. I see ... in the HACS issue, they said it's a MA issue (deleted file).... If I deleted all ./storage files, I would also lose all other integrations' settings and have to reinstall them too...
gharchive/issue
2022-08-25T11:07:35
2025-04-01T06:39:42.042269
{ "authors": [ "OzGav", "SeByDocKy", "marcelveldt" ], "repo": "music-assistant/hass-music-assistant", "url": "https://github.com/music-assistant/hass-music-assistant/issues/880", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
59039349
Method "listenForMessageWithIdentifier" didn't work if I launch the watch app from a glance Here is a simple example project: https://github.com/yenchenlin1994/WormholeBugExample (You have to change the App Groups to your own groups) This app simply uses an NSTimer to update a counter label on the phone interface, and then uses MMWormhole to try to sync the value of the label on the watch interface. It works totally fine when I choose the scheme WormholeBugExample WatchKit App to run, and then manually open the phone app in the iOS simulator. However, things changed if I change the scheme to Glance - WormholeBugExample WatchKit App and then do the same process as above. After I tap on the glance to launch my watch app, the label on the watch's interface didn't correctly sync with the label on the phone interface. In fact, it sometimes stops listening for the message at the beginning or when the counter counts to a particular value (e.g. when it counts to 8). How can I fix it? You should move everything in awakeWithContext into willActivate. That'll help make sure the wormhole is active. It's also the best way to ensure that UI updates only happen when the interface controller is active. It's also a good idea to stop listening and/or nil out the wormhole in the didDeactivate method of the interface controller.
gharchive/issue
2015-02-26T08:52:47
2025-04-01T06:39:42.051962
{ "authors": [ "cnstoll", "davidbarker", "mikeswanson", "yenchenlin1994" ], "repo": "mutualmobile/MMWormhole", "url": "https://github.com/mutualmobile/MMWormhole/issues/16", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
184331841
feat(test): add better test covering each template Now using nifty async iteration over each template type and checking if EACH template's entries contain the right scripts. Thanks!
gharchive/pull-request
2016-10-20T20:26:13
2025-04-01T06:39:42.054054
{ "authors": [ "TheLarkInn" ], "repo": "mutualofomaha/multipage-webpack-plugin", "url": "https://github.com/mutualofomaha/multipage-webpack-plugin/pull/14", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
55740598
Tag instances have no API to access each other. grid and item are tags, nested in HTML: <grid> <item field="name" label="名称" /> </grid> The grid cannot access the item instance, e.g. via grid.children[0]; the item cannot access the grid instance, e.g. via item.parent, which is null, even though 'item' is only defined inside 'grid'. item.parent isn't null. The children property is now implemented in v2.0.8. On the nested tag you'll have access to the parent property. Resolved in v2.0.8.
gharchive/issue
2015-01-28T10:48:08
2025-04-01T06:39:42.072319
{ "authors": [ "cheft", "tipiirai" ], "repo": "muut/riotjs", "url": "https://github.com/muut/riotjs/issues/242", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
525718339
Timeout waiting for log entry "containerrootrc=ready" Hi! Not really sure what's happening here. Happens with any of xephyr, xpra, nxagent and hostdisplay. As if the docker image doesn't even get started. Here is a verbose log output upload: https://waritschlager.de/share/922650f43f77a6fe.txt Seems to have broken with an x11docker update of the last few weeks/months, as I did not have any such problems beforehand. I can cycle through recent commits to find the culprit tomorrow. Until then, best regards Thank you for the bug report! I found this error message: /tmp/containerrootrc: 11: /tmp/containerrootrc: cannot create /x11docker/container.log: Permission denied It is this line in containerrootrc: exec 1>>/x11docker/container.log 2>&1 It only serves to redirect some log output. I'll think about a possible alternative. The error message is confusing because the file exists for sure. I got around to checking the commits already: bcb791cc9d581e1cb404761c52127ca63d26d110 is when the issue arose. The log says x11docker[14:18:12,517]: Waiting since 2s for /x11docker/containerrootrc.ready to have content, will wait up to 32000 seconds.. Do you need any more verbose logs from around this commit? Part of the problem (seeing that this issue hasn't been opened yet) may be that I set the permissions on ~/.local and most other folders in $HOME to drwx------ 6 phi phi 4.0K Oct 21 14:08 .local. I think it makes the most sense, as the contents of this folder aren't any other user's (be it system or human) business. This is what had me open another issue once already (#131), where you fixed it by leveraging a /tmp file. I also found that running with --cap-default fixes it. If you decide this is a wont-fix, I could personally live with using this flag, but I assume a proper error message would be great, at least. In the end, I am not sure what exactly is causing this, but I hope this will help you.
Cheers It also did not work after a chmod -R o+rX ~/.cache, however one warning came up during this: chmod: changing permissions of '.cache/x11docker/x11docker-xfce-xfce4-terminal-55668200901/share/tini': Operation not permitted. This is what the respective share folder looks like: -rw-r--r-- 1 phi phi 3054 Nov 20 14:14 container.CMD.sh -rw-r--r-- 1 phi phi 4412 Nov 20 14:14 containerrootrc -rw-r--r-- 1 phi phi 4 Nov 20 14:14 container.user -rw-rw-rw- 1 phi phi 293 Nov 20 14:14 environment -rw-r--r-- 1 phi phi 2 Nov 20 14:14 exitcode -rw-r--r-- 1 phi phi 0 Nov 20 14:14 journalctl.log prw-rw-rw- 1 phi phi 0 Nov 20 14:14 message.fifo -rw-rw-rw- 1 phi phi 704 Nov 20 14:14 stderr -rw-rw-rw- 1 phi phi 0 Nov 20 14:14 stdout -rw-r--r-- 1 phi phi 51 Nov 20 14:14 timetosaygoodbye prw-rw-r-- 1 phi phi 0 Nov 20 14:14 timetosaygoodbye.fifo -rwxr-xr-x 1 root root 0 Nov 20 14:14 tini -rw-r--r-- 1 phi phi 69315 Nov 20 14:14 x11docker.log -rw-r--r-- 1 phi phi 104 Nov 20 14:14 Xclientcookie which looks fine, I think (?) Thanks for your investigation! Part of the problem (seeing that this issue hasn't been opened yet) may be that I set the permissions on ~/.local and most other folders in $HOME to drwx------ 700. I think it makes the most sense as the contents of this folder aren't any other user's business, be it system or human. This is what had me open another issue once already (#131), where you fixed it by leveraging a /tmp file. Ah, yes! I already thought that this bug looked somehow familiar. So I reintroduced it. As a first fix that maybe helps, I've changed the file permission of x11docker.log to 666. I doubt that this is enough, but it would be nice if you tried it out. I also found that running with --cap-default fixes it. The point is that x11docker runs the container with docker option --cap-drop=ALL. That disallows a lot of root privileges. But containerrootrc tries to access x11docker.log as root in the container and cannot supersede your 700 settings on the host.
Adding the capability to supersede access permissions should solve the issue (compare man capabilities): x11docker -- --cap-add DAC_OVERRIDE -- x11docker/xfce xfce4-terminal I am currently not sure what would be the best solution. Always adding capability DAC_OVERRIDE regardless of the setup. Easy, but a security impact on systems that would not need it. Somehow checking for 700 and only adding DAC_OVERRIDE in that case. Adds some complexity to the code. Avoiding container root access to host files. Some work, but probably the cleanest solution. I tried using the latest commit, still no change. Nope, DAC_OVERRIDE on its own does not cut it, only cap-default. I am not very sure about the details of this. But options 1 and 2 do not sound quite elegant. And it would make sense to prevent root files in the host environment.
Those are always a bummer when it comes to archives, searches, deletions etc. Like when you want to back up your home folder but get errors somewhere deep nested inside .local/share/x11docker, as it contains root files. Option 3 makes the most sense to me, but this is only intuition. chmod -R o+rX ~/.cache Sorry for asking, but what exactly does that do? No worries – it means recursively giving read access (+4) to "other" users (not owner or group) to everything inside .cache. X stands for folder execution rights (+1), so they will be able to list folder contents, but it does not apply to files (so no scripts can be run etc). + means "adding permissions", so if any were present beforehand, those are combined. To set them, use =. Ok wait, the recent commit might have solved this (but everything is kinda laggy now). Just so you don't spend too much unnecessary digging here. I'll get back to you later. Ok wait, the recent commit might have solved this (but everything is kinda laggy now). Good! Maybe there is another timeout now. Please give me a fresh logfile and I'll look through it. And it would make sense to prevent root files in the host environment.
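The symbolic-mode explanation in this exchange (chmod -R o+rX) can be sketched as bit arithmetic. The following is a hedged Python emulation of the POSIX o+rX rule for a single mode value, not a replacement for chmod itself; one nuance worth adding is that the conditional X also grants execute to files that already carry an execute bit for someone, not only to directories.

```python
import stat

def apply_o_rX(mode: int, is_dir: bool) -> int:
    """Emulate `chmod o+rX` on one mode value: others always gain read;
    they gain execute only for directories, or for files that already
    have an execute bit set for some class (the conditional X)."""
    new = mode | stat.S_IROTH  # o+r (the +4 from the thread)
    if is_dir or mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
        new |= stat.S_IXOTH    # o+X (the +1, applied conditionally)
    return new

print(oct(apply_o_rX(0o700, True)))    # 0o705: dir becomes listable/traversable
print(oct(apply_o_rX(0o600, False)))   # 0o604: plain file gains read only
print(oct(apply_o_rX(0o700, False)))   # 0o705: already-executable file gains x
```

So on the drwx------ (0o700) folders discussed here, o+rX yields 0o705, letting others list and traverse the directory, while ordinary 0o600 files only gain read (0o604).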
Okay, regarding the lagginess: This might not be related to this issue at all, but I'll post the info here anyway: The lags got introduced with 5a35b8107ca043d2f0dd8a2fafe97157164bbc5f. I only encounter those with Xephyr. The application I tested this with is xfce, with vscode as the application running inside. I dont know what is so special about it, but thunar for instance was not lagging. Please tell me if you want more info here. While digging, I also realized that nxagent and hostdisplay seem to behave more fluent than xephyr and xpra. I dont know how nxagent works, but with hostdisplay it makes sense as no key presses are proxied. Thank you for your detailed investigation! The lags got introduced with 5a35b81. Note that this is an earlier one than the one that broke everything described above. I only encounter those lags with Xephyr. The application I tested this with is xfce, with vscode as the application running inside. I dont know what is so special about it, but thunar for instance was not lagging. Finally an issue that I can fix easily! In the commit you found I've enabled Xephyr option -glamor. From Xephyr -help: -glamor Enable 2D acceleration using glamor glamor should help to speed up some things, but obviously it can be problematic. I've disabled it yet. --xephyr should not be laggy anymore. While digging, I also realized that nxagent and hostdisplay seem to behave more fluent than xephyr and xpra --hostdisplay is the fastest option because no additional X server is involved. Unfortunately it costs some container isolation. A malicious application could access your host system. --xpra is the slowest option. But it is a preferred default of x11docker because it provides some nice features, e.g. graphical clipboard support. Furthermore it is the only seamless solution for --gpu beside the insecure --hostdisplay. However, if security/container isolation is not a concern, --hostdisplay --gpu is the fastest setup with the lowest overhead. 
I thought I'd try to reproduce it by deleting the respective cache folder. But now when I run it (with cap-default), no folder is recreated inside .cache/x11docker anymore, so I cannot give you any more details. The cache folder only exists while the container is running. If you don't have a cache folder while x11docker is running, something very basical goes wrong. I just tried to reproduce the 700 issue with chmod -R 700 ~/.cache/x11docker. Surprisingly I have no issues at all and cannot reproduce your issue. x11docker just starts up well. I'll look closer how to reproduce it. Huh, sorry - I dont know how the version mismatch happened. I redid it with the latest commit from yesterday, for sure this time: https://waritschlager.de/share/32492a8420106529.txt That's odd. I doubt that it is a tty issue again. I would see that in the log. Maybe you have two x11docker on your system, e.g. one in a cloned git folder and one in /usr/bin? No, this is not the case. xfce shortcut and interactive bash definitely behave differently (one working, the other not). I removed all the times and pids from it with some wild regex and skipped display numbers etc. and below are the notable log output differences when run as xfce shortcut. As expected, the only real difference seems to be that the container.log permission error is gone. 8a9 > DEBUGNOTE: check_host(): Command tty failed. Guess if running on console: no 45a47 > DEBUGNOTE: check_host(): Command tty failed. Guess if running on console: no 124c126 < Running in a terminal: yes --- > Running in a terminal: no 371,377d372 < grep -x -q 'x11docker/xfce' < /home/phi/.cache/x11docker/docker.imagelist || grep -x -q 'x11docker/xfce:latest' < /home/phi/.cache/x11docker/docker.imagelist || { < docker inspect x11docker/xfce >>/home/phi/.cache/x11docker/x11docker-xfce-/share/container.log 2>&1 || { < echo 'Image x11docker/xfce not found locally.' >&2 < echo 'Do you want to pull it from docker hub?' 
>&2 < askyesno && Dockerpull=yes || error "Image 'x11docker/xfce' not available locally and not pulled from docker hub." < } < } 382a378 > env DISPLAY=':0.0' DBUS_SESSION_BUS_ADDRESS='unix:path=/run/user/1000/bus' bash -c "notify-send 'x11docker: Pulling image x11docker/xfce from docker hub'" 2>/dev/null & 1475c1471,1495 < /tmp/containerrootrc: 11: /tmp/containerrootrc: cannot create /x11docker/container.log: Permission denied --- > mkdir: created directory '/var/run/dbus' > mkdir: created directory '/tmp/.ICE-unix' > mkdir: created directory '/tmp/.X11-unix' > mkdir: created directory '/tmp/.font-unix' > srwxrwxrwx 1 1000 1001 0 Nov 23 08:10 /X113 > > ==> /home/phi/.cache/x11docker/x11docker-xfce-/message.log <== > DEBUGNOTE: Running containerrootrc: Setup as root in container > > ==> /home/phi/.cache/x11docker/x11docker-xfce-/share/container.log <== > lrwxrwxrwx 1 root root 5 Nov 23 08:10 /tmp -> /X113 > mkdir: created directory '/fakehome' > > ==> /home/phi/.cache/x11docker/x11docker-xfce-/message.log <== > DEBUGNOTE: containerrootrc: Container libc: glibc > > ==> /home/phi/.cache/x11docker/x11docker-xfce-/share/container.log <== > removed '/etc/shadow' > > ==> /home/phi/.cache/x11docker/x11docker-xfce-/message.log <== > x11docker: Container system ID: debian > > > ==> /home/phi/.cache/x11docker/x11docker-xfce-/share/container.log <== > chown: changing ownership of '/tmp/chowntestfile': Operation not permitted 1551d1572 < DEBUGNOTE: waitforlogentry(): tailstderr: Found log entry "x11docker=ready" in store.info. 1553,1592c1574,1586 < DEBUGNOTE: waitforlogentry(): containerrc: Waiting since 11s for log entry "containerrootrc=ready" in store.info < DEBUGNOTE: waitforlogentry(): containerrc: Waiting since 12s for log entry "containerrootrc=ready" in store.info ... --- > DEBUGNOTE: waitforlogentry(): tailstderr: Found log entry "x11docker=ready" in store.info. > DEBUGNOTE: waitforlogentry(): containerrc: Found log entry "containerrootrc=ready" in store.info. 
So it fails here: https://github.com/mviereck/x11docker/blob/master/x11docker#L4282. $(convertpath share $Containerlogfile), which resolves to /x11docker/container.log isnt accessible, because /x11docker itself cannot be traversed into. I put an ls -l / at that position and here is the output: total 16 srwxrwxrwx 1 1000 1001 0 Nov 23 08:57 X120 drwxr-xr-x 2 root root 4096 Jul 14 08:49 bin drwxr-xr-x 2 root root 6 May 13 2019 boot drwxr-xr-x 5 root root 360 Nov 23 08:57 dev drwxr-xr-x 40 root root 4096 Nov 23 08:57 etc drwxr-xr-x 2 root root 6 May 13 2019 home drwxr-xr-x 8 root root 107 Jul 14 08:49 lib drwxr-xr-x 2 root root 34 Jul 8 03:30 lib64 drwxr-xr-x 2 root root 6 Jul 8 03:30 media drwxr-xr-x 2 root root 6 Jul 8 03:30 mnt drwxr-xr-x 2 root root 6 Jul 8 03:30 opt dr-xr-xr-x 374 root root 0 Nov 23 08:57 proc drwx------ 2 root root 37 Jul 8 03:30 root drwxr-xr-x 3 root root 60 Nov 23 08:57 run drwxr-xr-x 2 root root 4096 Jul 8 03:30 sbin drwxr-xr-x 2 root root 6 Jul 8 03:30 srv dr-xr-xr-x 13 root root 0 Nov 23 08:57 sys drwxrwxrwt 2 root root 29 Nov 23 08:57 tmp drwxr-xr-x 10 root root 105 Jul 8 03:30 usr drwxr-xr-x 11 root root 139 Jul 8 03:30 var drwxrwx--- 2 1000 1001 4096 Nov 23 08:57 x11docker /tmp ls: cannot access '/x11docker/container.log': Permission denied and $PWD is /tmp and $USER is phi and $UID is empty and id says uid=0(root) gid=0(root) groups=0(root) and cat /etc/passwd also does not contain phi. So the user $USER doesnt exist..?! On my host system, the the $UID is (as on most systems) 1000. Thanks for removing the glamor option! Everything is smooth again. The cache folder only exists while the container is running. Oh, huh. The cache folders I described above were present without any container running. So I guess they were leftovers from failed run attempts. Should not matter, this doesnt happen anymore right now. No, this is not the case. xfce shortcut and interactive bash definitely behave differently (one working, the other not). 
That's really odd. I have no good idea why there is a difference. That also indicates that 700 is not the core issue. As been said, I cannot reproduce the issue if I set my own cache to 700. The only idea I have yet is some sort of >>redirection issue. But I would not know why it only happens in terminal, but not with a shortcut. echo "exec 1>>$(convertpath share $Containerlogfile) 2>&1" Could you try to just disable this line? It only serves to redirect some output into the logfile, so it would not hurt essentially. Though, ls fails, too: ls: cannot access '/x11docker/container.log': Permission denied $PWD is /tmp and $USER is phi and $UID is empty and id says uid=0(root) gid=0(root) groups=0(root) and cat /etc/passwd also does not contain phi. So the user $USER doesnt exist..?! The entries in /etc/passwd and etc/group are done shortly after that in containerrootrc. The cache folders I described above were present without any container running. So I guess they were leftovers from failed run attempts. Should not matter, this doesnt happen anymore right now. I also get those leftover folders. It seems x11docker does not get enough time to clean up if I shut down the system while x11docker is running. You can use sudo x11docker --cleanup to remove all leftovers (and currently running x11docker sessions). I did a test in an old Manjarao VM. I've set ~/.cache/x11docker to 700. It works well. I could not update Manjaro due to some package conflicts. But I assume that would not change anything. So i cannot reproduce your issue and have no idea why x11docker fails on your system in terminal only. Would it be ok for you to just use --cap-default and close the ticket? Though, if you have an idea and want to investigate further, I am happy to look at, too. Sure. I'll get back to you when I come accross a meaningful hint. Thank you for the help! The latest commit runs containerrootrc with flag --privileged. You should not need --cap-default anymore. 
x11docker's root setup in the container now has no restrictions. This allows dropping --cap-default, and the container command will run without privileges at all. It would be nice if you tried this out. Yup, works now out of the box :-) good job Great! :-) Finally solved, although we did not find the very special point where it previously failed.
gharchive/issue
2019-11-20T10:38:27
2025-04-01T06:39:42.129516
{ "authors": [ "mviereck", "phil294" ], "repo": "mviereck/x11docker", "url": "https://github.com/mviereck/x11docker/issues/196", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
824293877
Build failing with an NPE After having the project building properly, I suddenly started having this error (no changes were made to the config, I am at 0.4.1): [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.709 s (Wall Clock) [INFO] Finished at: 2021-03-08T08:58:50+01:00 [INFO] ------------------------------------------------------------------------ [ERROR] NullPointerException java.lang.NullPointerException: null at org.mvndaemon.mvnd.cache.invalidating.InvalidatingProjectArtifactsCache$Record.getDependencyPaths(InvalidatingProjectArtifactsCache.java:48) at org.mvndaemon.mvnd.cache.impl.WatchServiceCacheFactory$WrappedCacheRecord.getDependencyPaths(WatchServiceCacheFactory.java:254) at org.mvndaemon.mvnd.cache.impl.WatchServiceCacheFactory.add(WatchServiceCacheFactory.java:75) at org.mvndaemon.mvnd.cache.impl.WatchServiceCacheFactory$WatchServiceCache.put(WatchServiceCacheFactory.java:210) at org.mvndaemon.mvnd.cache.invalidating.InvalidatingProjectArtifactsCache.put(InvalidatingProjectArtifactsCache.java:90) at org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.resolveProjectDependencies(LifecycleDependencyResolver.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.ensureDependenciesAreResolved(MojoExecutor.java:248) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:202) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117) at org.mvndaemon.mvnd.builder.SmartBuilderImpl.buildProject(SmartBuilderImpl.java:178) at org.mvndaemon.mvnd.builder.SmartBuilderImpl$ProjectBuildTask.run(SmartBuilderImpl.java:198) at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) [ERROR] Party Proxy: null: NullPointerException java.lang.RuntimeException: Party Proxy: null at org.mvndaemon.mvnd.builder.SmartBuilderImpl.buildProject(SmartBuilderImpl.java:183) at org.mvndaemon.mvnd.builder.SmartBuilderImpl$ProjectBuildTask.run(SmartBuilderImpl.java:198) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: java.lang.NullPointerException: null at org.mvndaemon.mvnd.cache.invalidating.InvalidatingProjectArtifactsCache$Record.getDependencyPaths(InvalidatingProjectArtifactsCache.java:48) at org.mvndaemon.mvnd.cache.impl.WatchServiceCacheFactory$WrappedCacheRecord.getDependencyPaths(WatchServiceCacheFactory.java:254) at org.mvndaemon.mvnd.cache.impl.WatchServiceCacheFactory.add(WatchServiceCacheFactory.java:75) at org.mvndaemon.mvnd.cache.impl.WatchServiceCacheFactory$WatchServiceCache.put(WatchServiceCacheFactory.java:210) at org.mvndaemon.mvnd.cache.invalidating.InvalidatingProjectArtifactsCache.put(InvalidatingProjectArtifactsCache.java:90) at org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.resolveProjectDependencies(LifecycleDependencyResolver.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.ensureDependenciesAreResolved(MojoExecutor.java:248) at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:202) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117) at org.mvndaemon.mvnd.builder.SmartBuilderImpl.buildProject(SmartBuilderImpl.java:178) ... 6 common frames omitted [ERROR] Sounds like https://github.com/mvndaemon/mvnd/issues/347 - please check whether the stack trace is really the same. Stock mvn would perhaps show a more meaningful exception. There is a fix in master and we should release within a couple of days. Tried with 0.4.3, working. Thanks!
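The NPE originates in `Record.getDependencyPaths`, where a dependency artifact's file can be `null` if resolution never attached a local file. A minimal sketch of the defensive pattern such a fix typically applies — the `Artifact` stand-in and method names here are hypothetical, not the actual mvnd code:

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DependencyPaths {
    // Stand-in for a Maven artifact: getFile() may return null when the
    // artifact was never resolved to a file on disk.
    static class Artifact {
        private final String file; // null => unresolved
        Artifact(String file) { this.file = file; }
        String getFile() { return file; }
    }

    // Null-safe collection of dependency paths: unresolved artifacts are
    // skipped instead of triggering a NullPointerException.
    static List<Path> getDependencyPaths(List<Artifact> artifacts) {
        List<Path> paths = new ArrayList<>();
        for (Artifact a : artifacts) {
            String f = a.getFile();
            if (f != null) { // the guard missing in the failing version
                paths.add(Paths.get(f));
            }
        }
        return paths;
    }

    public static void main(String[] args) {
        List<Artifact> deps = Arrays.asList(
                new Artifact("lib/a.jar"),
                new Artifact(null), // unresolved dependency
                new Artifact("lib/b.jar"));
        System.out.println(getDependencyPaths(deps).size()); // prints 2
    }
}
```

With the guard in place the unresolved entry is silently skipped, which matches the observation that 0.4.3 builds cleanly where 0.4.1 threw.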
gharchive/issue
2021-03-08T08:02:44
2025-04-01T06:39:42.153024
{ "authors": [ "galegofer", "ppalaga" ], "repo": "mvndaemon/mvnd", "url": "https://github.com/mvndaemon/mvnd/issues/372", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
658354634
Can't unarchive .zip

I try to unzip my .zip file (downloaded from the Internet) using the .unzip() command and get a nil. I also cannot open on my computer a file zipped by .zip(). With gzip everything works. Please help!

I seem to have figured out that this solution is not suitable for PKZip files, but I need to process this type of file. What can I do?

Hi. Your best bet is called 'minizip'. Maybe google that together with Swift and you should find what you are looking for. Good luck.
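The distinction the thread hinges on — raw compression streams (zlib/gzip/DEFLATE, which stream-oriented APIs like this one handle) versus the PKZip container format (which needs an archive library such as minizip) — shows up in a file's leading magic bytes. A small illustrative sketch (in Java purely for illustration; the library in question is Swift):

```java
public class MagicBytes {
    // PKZip archives start with the local-file-header signature "PK\x03\x04";
    // gzip streams start with 0x1f 0x8b; a bare zlib stream typically starts
    // with 0x78. A stream-level unzip() sees the PKZip container header,
    // not a raw stream, and fails.
    static String sniff(byte[] head) {
        if (head.length >= 4 && head[0] == 'P' && head[1] == 'K'
                && head[2] == 3 && head[3] == 4) {
            return "pkzip-archive"; // needs an archive library (e.g. minizip)
        }
        if (head.length >= 2 && (head[0] & 0xff) == 0x1f && (head[1] & 0xff) == 0x8b) {
            return "gzip-stream";
        }
        if (head.length >= 1 && (head[0] & 0xff) == 0x78) {
            return "zlib-stream";
        }
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println(sniff(new byte[]{'P', 'K', 3, 4}));           // pkzip-archive
        System.out.println(sniff(new byte[]{(byte) 0x1f, (byte) 0x8b})); // gzip-stream
    }
}
```

Sniffing the first bytes of a downloaded file this way tells you up front whether you need an archive library or a plain stream decompressor.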
gharchive/issue
2020-07-16T16:26:32
2025-04-01T06:39:42.156658
{ "authors": [ "EugeneKudr", "mw99" ], "repo": "mw99/DataCompression", "url": "https://github.com/mw99/DataCompression/issues/25", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
616848380
[QT-Wallet] In "Accounts" page, UI needs updates.

Description: On the Accounts page, observed the following issues:
1. The ACCOUNTS page needs a description.
2. The page UI needs to be aligned; there is a lot of empty space on the right side of the table. Please refer to the screenshot for more details.

for the width issue, can you evenly space the columns horizontally to take up 100% width always?

thanks @vinayaga07 for bringing up some design considerations!

These are the current default values, so it seems the issue is addressed. Non-default values we can't fix.
gharchive/issue
2020-05-12T17:56:53
2025-04-01T06:39:42.179984
{ "authors": [ "bayk", "condensed-io", "vinayaga07" ], "repo": "mwcproject/mwc-qt-wallet", "url": "https://github.com/mwcproject/mwc-qt-wallet/issues/322", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
262986584
A few small fixes

- handle parsing timestamps when value not supplied (return nil instead of exception).
- handle open and closed order on the Order class.
- gracefully handle a symbol not returning data for the Quote class.
- additional attributes for the Order class.

Specs updated. Plus, you can see the code in action on my crypto project.

Merged #13.

What is this "a few small fixes"? Is this me being hacked? CryptOKlizO

Hello wtf? CryptOKlizO
gharchive/pull-request
2017-10-05T02:26:11
2025-04-01T06:39:42.191950
{ "authors": [ "CryptoKlizO", "mwlang" ], "repo": "mwerner/bittrex", "url": "https://github.com/mwerner/bittrex/pull/13", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }