id | text | source | created | added | metadata
---|---|---|---|---|---|
2531177723
|
Code replication issue
Hello, I tried to reproduce your code, but I couldn't get the results in your paper.
My only change was the batch size, and I ran the following commands directly:
#predictor training all LLMs
python preprocess_dataset.py --task_type 0 --data_size 10 --all_models
python latency_prediction.py --task_type 0 --data_size 10 --all_models
But the resulting errors are extremely large:
Validation metrics after training:
L1 error: 121.2489
MSE: 27016.6429
Start inference...
Saved results to ./results/predictions_all_models_warmup_reg_mse_10K.csv
Then I tried several other tasks and found that the results were not very satisfactory.
Please tell me where I went wrong?
Looking forward to your answer!
Thanks!
Please try a larger data_size. In your case, --data_size 10 means that the BERT training dataset is only 10K samples, which is not enough for the predictor to reach sufficient prediction accuracy. In the paper, the actual training dataset is the entire LMSYS-Chat dataset.
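For example, a re-run with a larger training set might look like the commands below; the value 1000 (i.e. 1M samples, following the "10 = 10K" convention above) is only an illustration, and the size that actually matches the paper depends on the full LMSYS-Chat dump:
#predictor training all LLMs, larger training set (illustrative value)
python preprocess_dataset.py --task_type 0 --data_size 1000 --all_models
python latency_prediction.py --task_type 0 --data_size 1000 --all_models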
thanks for your reply!
|
gharchive/issue
| 2024-09-17T13:30:22 |
2025-04-01T04:55:12.871222
|
{
"authors": [
"James-QiuHaoran",
"Nighttell"
],
"repo": "James-QiuHaoran/LLM-serving-with-proxy-models",
"url": "https://github.com/James-QiuHaoran/LLM-serving-with-proxy-models/issues/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
682004294
|
arguments / methods / classes are not parsed properly
Hey @JamesALeedham, thanks for this repo, which certainly saved me a lot of headaches. Previously I was not able to generate auto-summaries for some of my deeper classes and link them to a new page for detailed documentation. However, I still have a few new problems. Hope you can offer me some thoughts:
As you can see from my API page, my current issues are with the parts related to classes in Conventional scRNA-seq (est) or Time-resolved metabolic labeling based scRNA-seq (est.tsc).
Firstly, the docstring doesn't seem to be parsed correctly and renders differently from all the other functions/classes even though they use the same format. For example: https://dynamo-release.readthedocs.io/en/latest/_autosummary/dynamo.est.csc.ss_estimation.html#module-dynamo.est.csc.ss_estimation
None of the methods, inherited classes, etc. are listed on the linked page.
While other shallower classes are documented correctly, for example:
https://dynamo-release.readthedocs.io/en/latest/_autosummary/dynamo.mv.StreamFuncAnim.html#dynamo.mv.StreamFuncAnim
or
https://dynamo-release.readthedocs.io/en/latest/_autosummary/dynamo.vf.vfGraph.html#dynamo.vf.vfGraph
My doc folder is here: https://github.com/aristoteleo/dynamo-release/tree/master/docs/source
The API page is here: https://dynamo-release.readthedocs.io/en/latest/API.html
Similar/same problem here. I don't get links to the classes either.
|
gharchive/issue
| 2020-08-19T17:09:47 |
2025-04-01T04:55:12.892281
|
{
"authors": [
"Xiaojieqiu",
"dmyersturnbull"
],
"repo": "JamesALeedham/Sphinx-Autosummary-Recursion",
"url": "https://github.com/JamesALeedham/Sphinx-Autosummary-Recursion/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1856923197
|
How to use Newtonsoft.Json in place of BinaryFormatter in C# .Net?
We have legacy code, and we learned that there are some vulnerabilities in BinaryFormatter, so I am trying to use Newtonsoft.Json. My project already uses the Newtonsoft.Json package.
I have tried the code below, but it throws an exception on the line var deserializedStream = serializer.Deserialize(jsonTextReader); when running:
Unexpected character encountered while parsing value: . Path '', line 1, position 1.
My new Code:
public void LoadFromDisk()
{
if (!File.Exists(BINARY_FILENAME)) return;
var serializer = new JsonSerializer();
using (var stream = File.Open(BINARY_FILENAME, FileMode.Open, FileAccess.Read))
{
using (var sr = new StreamReader(stream))
{
using (var jsonTextReader = new JsonTextReader(sr))
{
var deserializedStream = serializer.Deserialize(jsonTextReader);
_jobsAck = deserializedStream as ConcurrentDictionary<string, DateTime>;
if (_jobsAck == null)
{
_jobsAck = new ConcurrentDictionary<string, DateTime>();
if (!(deserializedStream is Dictionary<string, DateTime> ackDict)) return;
foreach (var pair in ackDict)
{
_jobsAck.TryAdd(pair.Key, pair.Value);
}
}
}
}
}
}
Old Code:
public void LoadFromDisk()
{
if (!File.Exists(BINARY_FILENAME)) return;
var binaryFormatter = new BinaryFormatter();
using (var stream = File.Open(BINARY_FILENAME, FileMode.Open, FileAccess.Read))
{
var deserializedStream = binaryFormatter.Deserialize(stream);
_jobsAck = deserializedStream as ConcurrentDictionary<string, DateTime>;
if (_jobsAck == null)
{
_jobsAck = new ConcurrentDictionary<string, DateTime>();
if (!(deserializedStream is Dictionary<string, DateTime> ackDict)) return;
foreach (var pair in ackDict)
{
_jobsAck.TryAdd(pair.Key, pair.Value);
}
}
}
}
**Target framework**: 4.7.2 (I cannot upgrade .net framework due to some constraints)
**NewtonSoft version:** 12.0.2
What makes you think that the binary data produced by and consumable by BinaryFormatter would be consumable by a JSON serializer, considering that JSON is a text-based format and not a binary one?
At a guess from the error message, it looks like the BINARY_FILENAME file is still written with BinaryFormatter. Make sure you delete the file, and replace your SaveToDisk method first so that the written data is serialized in Json format.
Unless you use a custom convention of some sort, I don't think it can deserialize directly to a ConcurrentDictionary.
See Serializing Collections
I think your code could be something like:
using Newtonsoft.Json;
using System.Collections.Concurrent;
internal class Program
{
private ConcurrentDictionary<string, DateTime> _jobsAck = new();
private const string JSON_FILENAME = "JobsAck.json";
public ConcurrentDictionary<string, DateTime> JobsAck { get => _jobsAck; set => _jobsAck = value; }
public void LoadFromDisk()
{
if (!File.Exists(JSON_FILENAME)) return;
var serializer = new JsonSerializer();
using (var stream = File.Open(JSON_FILENAME, FileMode.Open, FileAccess.Read))
{
using (var sr = new StreamReader(stream))
{
using (var jsonTextReader = new JsonTextReader(sr))
{
var ackDict = serializer.Deserialize<Dictionary<string, DateTime>>(jsonTextReader);
if (ackDict != null)
{
_jobsAck = new ConcurrentDictionary<string, DateTime>();
foreach (var pair in ackDict)
{
_jobsAck.TryAdd(pair.Key, pair.Value);
}
}
}
}
}
}
public void SaveToDisk()
{
var serializer = new JsonSerializer();
using (var stream = File.Open(JSON_FILENAME, FileMode.Create, FileAccess.Write))
{
using (var sw = new StreamWriter(stream))
{
using (var jsonTextWriter = new JsonTextWriter(sw)
{
Formatting = Formatting.Indented // Not needed, and makes the file a little larger, but is much more readable.
})
{
serializer.Serialize(jsonTextWriter, _jobsAck);
}
}
}
}
static void Main(string[] args)
{
var prog = new Program();
prog.LoadFromDisk();
prog.JobsAck.TryAdd(Guid.NewGuid().ToString(), DateTime.UtcNow);
prog.SaveToDisk();
}
}
Hope this helps.
|
gharchive/issue
| 2023-08-18T15:36:53 |
2025-04-01T04:55:12.897977
|
{
"authors": [
"CZEMacLeod",
"elgonzo",
"viveknuna"
],
"repo": "JamesNK/Newtonsoft.Json",
"url": "https://github.com/JamesNK/Newtonsoft.Json/issues/2886",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
424636233
|
Can't publish to master branch
I want to use this task with my personal GitHub Pages repository. I can only use master, though, while the task always uses gh-pages. So I want to be able to configure which branch to use.
See remark below from https://help.github.com/en/articles/configuring-a-publishing-source-for-github-pages
If your site is a User or Organization Page that has a repository named username.github.io or orgname.github.io, you cannot publish your site's source files from different locations. User and Organization Pages that have this type of repository name are only published from the master branch.
Now pushed to the marketplace - thanks to @MarcBruins
|
gharchive/issue
| 2019-03-24T17:08:01 |
2025-04-01T04:55:12.901086
|
{
"authors": [
"JamesRandall",
"ronaldbosma"
],
"repo": "JamesRandall/Vsts-GitHub-Pages-Publish",
"url": "https://github.com/JamesRandall/Vsts-GitHub-Pages-Publish/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1192350765
|
[Bug Report] Misnamed field name
ftransimissionChangeRateScaleUpShift and ftransimissionChangeRateScaleDownShift should be fClutchChangeRateScaleUpShift and fClutchChangeRateScaleDownShift.
Realized I broke this myself doing a replace all. Closing out
|
gharchive/issue
| 2022-04-04T21:49:58 |
2025-04-01T04:55:12.910042
|
{
"authors": [
"Mkeefeus"
],
"repo": "Jameslroll/gta_vehicleDebug",
"url": "https://github.com/Jameslroll/gta_vehicleDebug/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2044942750
|
chore: add schema to goreleaser config
Prerequisites
[x] I have read and understood the [contributing guide][CONTRIBUTING.md].
[x] The commit message follows the [conventional commits][cc] guidelines.
[ ] Tests for the changes have been added (for bug fixes / features).
[ ] Docs have been added/updated (for bug fixes / features).
Description
Adds the schema to the GoReleaser configuration. This will help with autocomplete suggestions provided by VS Code's IntelliSense.
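As a rough sketch, this kind of change usually amounts to a single modeline at the top of .goreleaser.yaml (assuming the standard GoReleaser schema URL and the yaml-language-server convention used by VS Code's YAML tooling; the project_name value is just an example):
# yaml-language-server: $schema=https://goreleaser.com/static/schema.json
project_name: aliae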
@all-contributors please add @vedantmgoyal2009 for code
|
gharchive/pull-request
| 2023-12-16T20:07:01 |
2025-04-01T04:55:12.918537
|
{
"authors": [
"JanDeDobbeleer",
"vedantmgoyal2009"
],
"repo": "JanDeDobbeleer/aliae",
"url": "https://github.com/JanDeDobbeleer/aliae/pull/78",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
62442100
|
Layout nested diagrams before they become visible
When a nested diagram is opened for the first time, its contents are automatically laid out. It would be much smoother if this happened before it becomes visible.
This requires precomputing the bounds of nodes. Node#autosize() should work as long as we don't allow CSS-styled nodes. Consider creating a new API method getBounds(BoundsType) that obsoletes getSnapBounds() as well.
Fixed with 2b1fb4e4616fd52f54cb17b224cffe40e037cc1f
Performance issue covered in 45f9841e05e7ce788b392e0706e9a37bb7eff181
|
gharchive/issue
| 2015-03-17T16:10:59 |
2025-04-01T04:55:12.962848
|
{
"authors": [
"JanKoehnlein"
],
"repo": "JanKoehnlein/FXDiagram",
"url": "https://github.com/JanKoehnlein/FXDiagram/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1800706335
|
Add Serialization/Deserialization for all keys
Add serialization/deserialization for Poly, Ciphertext, KeySwitchingKey, RelinearizationKey, GaloisKey.
Since all keys depend on Poly, and Poly only consists of coefficients (i.e. an Array2) and a representation, the only important functions are the ones converting a vector of u64s (i.e. each row of the Array2) to bytes and back.
I have already implemented functions to convert a vector of u64s assumed to be in the range [0, modulus) to bytes and back. Check convert_to_bytes and convert_from_bytes.
I have also added a rough way to convert Poly to proto::Poly here.
Conversion of proto::Poly to bytes is behaving as expected. For example, a Poly with a modulus Q of size 500 bits (i.e. moduli chain = [50;10]) and degree 2^15 should take 500*(2^15)/8 = 2048000 bytes (ignoring the representation value). I can confirm that converting such a Poly to proto::Poly and encoding the converted value in bytes yields 2048052 bytes, which is a difference of only 52 bytes.
Commit 5bf8e16da939a4aef0a8e469242f3359cbbd4cc8 implements serialization/deserialization for HybridKeySwitchingKey, EvaluationKey, RelinearizationKey, GaloisKey.
I checked that the serialized objects are of the expected size in bytes by first estimating the sizes by hand and then converting the types to their respective serialized types and then to bytes. Everything looks correct.
I seem to have made a mistake by allowing Poly to be serialized in Evaluation representation. Since we plan to support multiple NTT backends, the Evaluation representation of the same polynomial can be different on each. This can lead to incorrectness if one serializes a polynomial in Evaluation form using the native NTT backend and deserializes it using the hexl NTT backend, since the polynomial would not remain the same.
To avoid this, I think it is ok to forbid serialization of polynomials in Evaluation form.
I have been contemplating switching serialization to a feature flag. The obvious benefit is allowing users who do not need serialization to avoid downloading and installing the protobuf compiler. Plus, I don't see any downsides to this.
|
gharchive/issue
| 2023-07-12T10:49:05 |
2025-04-01T04:55:12.979328
|
{
"authors": [
"Janmajayamall"
],
"repo": "Janmajayamall/bfv",
"url": "https://github.com/Janmajayamall/bfv/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2357885581
|
Support for arbitrary/more extensions (.mts, .cts, .mjs,…)
Hey,
I love this project, thanks for your hard work!
I wonder if it's possible to add any kind extensions to the CLI, like you could with Prettier or ESLint.
Actually, it only works for .js or .ts, meaning you can't use some files in a lot of projects.
At least the standard extensions: ctsx, cjsx, js, ts, mts, cts, mjs, cjs (I hope I haven't forgotten one 😅) would be awesome.
Bye
Thanks for the idea, this is interesting
I can potentially extend https://github.com/JasonShin/sqlx-ts/blob/953ee1c47da9c9e113e0c4dbeba45d411b5c6f93/src/common/cli.rs#L28C9-L28C12 to support the additional JS types.
jsx/tsx is a tricky one because that requires a whole lot of work in the parser to grab sql string literals - (maybe or maybe not)
No worries for the JSX/TSX part. I think that, conceptually, this is not the typical place where you should put sql queries ;)
I think most devs will agree, even more so if it adds difficulty for development/maintenance.
I tried sqlx-ts with ESM and CommonJS, and excluding the file extension issue, it went smoothly.
|
gharchive/issue
| 2024-06-17T17:40:28 |
2025-04-01T04:55:13.067309
|
{
"authors": [
"JasonShin",
"JulianCataldo"
],
"repo": "JasonShin/sqlx-ts",
"url": "https://github.com/JasonShin/sqlx-ts/issues/123",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
138162975
|
BulkInsert and overwrite existing
In RavenDB there's a BulkInsertOptions where you can specify OverwriteExisting:
documentStore.BulkInsert(options: new BulkInsertOptions
{
OverwriteExisting = true
});
In Marten the bulk insert is done by: COPY mt_doc_test("data", "id") FROM STDIN BINARY
We talked a bit in the gitter chat about how and if this can be done against Postgres. There was a mention of a trigger with ON CONFLICT DO UPDATE, but there might be other ways of doing it? (Note: my extremely limited Postgres 'skills' prevented me from creating such a trigger and testing it locally...)
We use the BulkInsert - OverwriteExisting quite a lot in a couple of projects where we replace most of the db every hour or so, but we don't want to truncate first, as we want to keep older records that weren't in the last load (we have timestamps for each document so we can see which entries are old/outdated).
Without the overwrite support we have to use session.Store(...) for each document and let the UPSERT sort it out, but since we're processing millions of records this causes a bit of overhead.
Any ideas?
@jenspettersson I think we've got two options, we could either:
Automatically add the ON CONFLICT trigger to handle the conflicts
Make the bulk insert quietly switch to using the normal Unit of Work/batched update command instead of Postgresql's "COPY" functionality.
Both would incur a performance hit, but I don't know which one would be worse. The second option would be faster to code out.
@jeremydmiller If I can get the ON CONFLICT trigger to work I can do some testing locally using both approaches (adding the trigger manually, that is). Any tips on what the trigger could look like?
I struck out googling for that, and I think I'm mistaken because I don't see any option for ON CONFLICT for trigger definitions in the postgresql docs.
That explains... a lot! I started to doubt my googling skills there... =) Then there might not be another way than 2?
I had a quick look / google and there doesn't seem to be a clean way to do this with the built in COPY tool or a conflict trigger.
I don't know what you would think of this solution, but my idea would be that if overwrite existing is true :-
Create a new (temporary) table and execute the COPY into it
Insert any id's that don't exist into the primary table
Update any id's that exist in the primary table
Drop the new (temporary) table
This would at least allow for some bulk INSERT / UPDATE operations which should be pretty fast
Hopefully this would only be temporary until someone improves the COPY util in pgSQL to support overwrite
As I said in the gitter chat, I'm not a PG expert by far, but if the solution @jamesfarrer suggest is doable in a clean and secure way, then why not. It does feel a bit "Rube Goldberg" however ;)
I'm not in a position to call any shots here, so we might need @jeremydmiller to comment as well.
Throwing this link in here: http://stackoverflow.com/questions/30500560/import-and-overwrite-duplicate-rows.
@jenspettersson @jamesfarrer Do we want to throw away the duplicates, or overwrite? I suppose that it could be done with a mode like "Fast" / "IgnoreDuplicates" / "Overwrite"
It might be my severe "man cold" that's making me numb, but what's the difference between "throwing away" and "Overwrite" a duplicate?
Do you mean: First delete the duplicate and then insert a new instead of just overwriting it? I don't see a need for that, but then again, I think I'm missing something...
Ignore duplicate values altogether vs. last one in wins
Ah, you mean if there's a duplicate within the new batch that is to be bulk inserted. Yeah, that also needs to be handled. In all my cases I have logic that handles duplicates within the current batch before doing a bulk insert, so I've never thought about this.
Yup I think the 3 config options make sense. For overwrite I think an update would be preferable. Deleting could cause potential FK exceptions specified on that document / table.
So for the 3 options..
Fast (Default) = standard copy as used today
IgnoreDuplicates would be
create temporary table import_issues as select * from issues limit 0; (I've cribbed this from that SO post but assume as we'd have the schema definition we'd use that)
copy import_issues from stdin;
begin transaction
insert into issues
select * from import_issues where id not in (select id from issues)
end transaction
Overwrite would be
create temporary table import_issues as select * from issues limit 0; (I've cribbed this from that SO post but assume as we'd have the schema definition we'd use that)
copy import_issues from stdin;
begin transaction
insert into issues
select * from import_issues where id not in (select id from issues)
update i set i.data = i2.data
from issues i
inner join import_issues i2 on i.id = i2.id
end transaction
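A rough consolidation of the Overwrite steps above, written as a sketch in Postgres syntax (table and column names follow the Stack Overflow example quoted earlier; note that Postgres uses the UPDATE ... FROM form rather than the SQL Server-style join update sketched above):
create temporary table import_issues as select * from issues limit 0;
copy import_issues from stdin binary;
begin;
insert into issues
select * from import_issues where id not in (select id from issues);
update issues i
set data = i2.data
from import_issues i2
where i.id = i2.id;
commit;
drop table import_issues;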
I'm doing this one right now, might be ready tomorrow. Basically doing what @jamesfarrer suggested.
|
gharchive/issue
| 2016-03-03T12:23:23 |
2025-04-01T04:55:13.080347
|
{
"authors": [
"jamesfarrer",
"jenspettersson",
"jeremydmiller"
],
"repo": "JasperFx/marten",
"url": "https://github.com/JasperFx/marten/issues/180",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
217267104
|
Fix problem where sometimes the asyncprojectiondaemon might "miss" an event
Fix for GH-715.
Pfft, good catch. I'm pulling this in now and it'll be in Marten 1.5 hopefully this week
|
gharchive/pull-request
| 2017-03-27T14:36:55 |
2025-04-01T04:55:13.082240
|
{
"authors": [
"jeremydmiller",
"wastaz"
],
"repo": "JasperFx/marten",
"url": "https://github.com/JasperFx/marten/pull/716",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2004961776
|
Incompatible JasperFX.Core version in WolverineFx.Postgresql
Describe the bug
This error is thrown while calling host.SetupResources();
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
---> System.TypeInitializationException: The type initializer for 'Weasel.Postgresql.PostgresqlProvider' threw an exception.
---> System.IO.FileNotFoundException: Could not load file or assembly 'JasperFx.Core, Version=1.4.0.0, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified.
File name: 'JasperFx.Core, Version=1.4.0.0, Culture=neutral, PublicKeyToken=null'
at Weasel.Core.DatabaseProvider`6..ctor(String defaultDatabaseSchemaName)
at Weasel.Postgresql.PostgresqlProvider..ctor()
at Weasel.Postgresql.PostgresqlProvider..cctor()
--- End of inner exception stack trace ---
at Weasel.Postgresql.PostgresqlMigrator..ctor()
at Wolverine.Postgresql.PostgresqlMessageStore..ctor(DatabaseSettings databaseSettings, DurabilitySettings settings, ILogger`1 logger)
at System.RuntimeMethodHandle.InvokeMethod(Object target, Void** arguments, Signature sig, Boolean isConstructor)
at System.Reflection.MethodBaseInvoker.InvokeDirectByRefWithFewArgs(Object obj, Span`1 copyOfArgs, BindingFlags invokeAttr)
--- End of inner exception stack trace ---
at System.Reflection.MethodBaseInvoker.InvokeDirectByRefWithFewArgs(Object obj, Span`1 copyOfArgs, BindingFlags invokeAttr)
at System.Reflection.MethodBaseInvoker.InvokeWithFewArgs(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture)
at System.Activator.CreateInstance(Type type, Object[] args)
at Lamar.IoC.Instances.ConstructorInstance.quickResolve(Scope scope)
at Lamar.IoC.Instances.ConstructorInstance.QuickResolve(Scope scope)
at Lamar.IoC.Instances.ConstructorInstance.<>c__DisplayClass27_0.<quickResolve>b__0(CtorArg x)
at System.Linq.Enumerable.SelectArrayIterator`2.Fill(ReadOnlySpan`1 source, Span`1 destination, Func`2 func)
at System.Linq.Enumerable.SelectArrayIterator`2.ToArray()
at Lamar.IoC.Instances.ConstructorInstance.quickResolve(Scope scope)
at Lamar.IoC.Instances.ConstructorInstance.QuickResolve(Scope scope)
at Lamar.IoC.Frames.InjectedServiceField.ToVariableExpression(LambdaDefinition definition)
at JasperFx.CodeGeneration.Expressions.LambdaDefinition.ExpressionFor(Variable variable)
at System.Linq.Enumerable.SelectArrayIterator`2.Fill(ReadOnlySpan`1 source, Span`1 destination, Func`2 func)
at System.Linq.Enumerable.SelectArrayIterator`2.ToArray()
at System.Dynamic.Utils.CollectionExtensions.ToReadOnly[T](IEnumerable`1 enumerable)
at System.Linq.Expressions.Expression.NewArrayInit(Type type, IEnumerable`1 initializers)
at Lamar.IoC.Frames.ListAssignmentFrame`1.WriteExpressions(LambdaDefinition definition)
at Lamar.IoC.Instances.FuncResolverDefinition.BuildResolver()
at Lamar.IoC.Instances.GeneratedInstance.BuildFuncResolver(Scope scope)
at Lamar.IoC.Instances.GeneratedInstance.buildResolver(Scope scope)
at Lamar.IoC.Instances.GeneratedInstance.ToResolver(Scope topScope)
at Lamar.ServiceGraph.FindResolver(Type serviceType)
at Lamar.IoC.Scope.GetInstance(Type serviceType)
at Lamar.Container.Microsoft.Extensions.DependencyInjection.ISupportRequiredService.GetRequiredService(Type serviceType)
at Microsoft.Extensions.DependencyInjection.ServiceProviderServiceExtensions.GetRequiredService[T](IServiceProvider provider)
at Oakton.Resources.ResourcesCommand.FindResources(IServiceProvider services, String typeName, String resourceName)
at Oakton.Resources.ResourceHostExtensions.SetupResources(IHost host, CancellationToken cancellation, String resourceType, String resourceName)
I believe this is due to an incompatible version of Weasel.Postgresql, which depends on a Weasel.Core that expects an older version of JasperFx.Core.
To Reproduce
Steps to reproduce the behavior:
Create a new project with PostgreSQL persistence
Call host.SetupResources()
Desktop (please complete the following information):
OS: Windows 11 23H2 22631.2715
Version WolverineFx.Postgresql 1.12.0
Upgrade to Wolverine 1.12.1 when that's available in the next 30 minutes
@redbaty I'll still pull your PR in later as a longer term fix. Thank you!
Hey there, sorry for the timing confusion on the pull request hehe.
By the way, even on the latest 1.12.2 there's still an error thrown while using Postgres:
System.InvalidCastException: Unable to cast object of type 'Weasel.Core.DbObjectName' to type 'Weasel.Postgresql.PostgresqlObjectName'.
at Weasel.Postgresql.Tables.Table.ColumnExpression.ForeignKeyTo(DbObjectName referencedIdentifier, String referencedColumnName, String fkName, CascadeAction onDelete, CascadeAction onUpdate)
at Wolverine.Postgresql.PostgresqlMessageStore.AllObjects()+MoveNext()
at System.Collections.Generic.LargeArrayBuilder`1.AddRange(IEnumerable`1 items)
at System.Collections.Generic.EnumerableHelpers.ToArray[T](IEnumerable`1 source)
at Wolverine.RDBMS.MessageDatabase`1.get_Objects()
at Weasel.Core.Migrations.DatabaseBase`1.<>c.<AllObjects>b__19_0(IFeatureSchema group)
at System.Linq.Enumerable.SelectManySingleSelectorIterator`2.ToArray()
at Weasel.Core.Migrations.DatabaseBase`1.ApplyAllConfiguredChangesToDatabaseAsync(IGlobalLock`1 globalLock, Nullable`1 override, ReconnectionOptions reconnectionOptions, CancellationToken ct)
at Oakton.Resources.ResourceHostExtensions.SetupResources(IHost host, CancellationToken cancellation, String resourceType, String resourceName)
I had fixed it by doing this in the PR: https://github.com/JasperFx/wolverine/pull/638/commits/a454a5ac1f5e87748797da2bf4ad417d630115bc#diff-96018c9e1e0dcb7add35644a779053e4ee707e7fc1775f7e5d60a1fd09af3a16
Works after 1.12.3, thanks! 🎉
|
gharchive/issue
| 2023-11-21T19:07:19 |
2025-04-01T04:55:13.088555
|
{
"authors": [
"jeremydmiller",
"redbaty"
],
"repo": "JasperFx/wolverine",
"url": "https://github.com/JasperFx/wolverine/issues/637",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1199078847
|
Add support for resuming game
Resuming game
It would be nice if the user could resume the game from where they left off the last time.
The flow could be something like this :
User chooses a difficulty level, starts the game.
Leaves the game / closes the app.
User reopens the game.
User is directly navigated to the game screen with the same grid where he left off the last time.
He can continue from there or navigate back to the difficulty selection screen.
On continuing, the user should be presented with the same grid that they had played and left the last time.
In the difficulty selection screen, on selecting a difficulty level and starting a game, if a previous unfinished game with the same difficulty level exists, the user should be prompted with a dialog about whether he wants to continue or to start over.
Based on what the user chooses, an appropriate game board should be rendered in the game screen.
How?
[x] Choose a KMM library for DB (SQLDelight chosen).
[x] Extend the data module to have a DB to be able to store objects.
[x] #72
[x] #74
[x] #78
[ ] Setup the difficulty selection screen UI for the new flow.
[ ] Wire in the logic of reading and presenting the saved game info in the difficulty selection screen.
[x] #75
[ ] Wire in the saved game info propagation from difficulty selection screen to game screen.
With the current state, reading and saving aspects of the Grid and its state are done for the most part.
Oh well! Finally.
With the current status, all items in the list have been taken care of.
The application is working well and looks coherent with the resuming feature so far.
|
gharchive/issue
| 2022-04-10T15:42:32 |
2025-04-01T04:55:13.112844
|
{
"authors": [
"JayaSuryaT"
],
"repo": "JayaSuryaT/minesweeper-j-compose",
"url": "https://github.com/JayaSuryaT/minesweeper-j-compose/issues/70",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1162241863
|
Why run on Pull Request?
Makes no sense to me. Please explain.
https://github.com/JaydenDev/AuraPad/blob/1bf8a00e253c191bcde9d6947d604ae92ba30c7f/.github/workflows/autocomp.yml#L6
It doesn't work in the first place...
|
gharchive/issue
| 2022-03-08T05:37:23 |
2025-04-01T04:55:13.117205
|
{
"authors": [
"JaydenDev",
"webdev03"
],
"repo": "JaydenDev/AuraPad",
"url": "https://github.com/JaydenDev/AuraPad/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
874026282
|
Extra, Nonexistent Space Displayed
An extra, nonexistent space is displayed to the left of the actual, available spaces. See attached photo. "Screen 1" does not exist.
Is this persistent even after a refresh or switching to a different space? Is there a way to reproduce this or has it been like this since you installed the app?
Thanks for the reply. It's been like this since I installed the app. I don't know if it's because of Sidecar or something. Yes, it persists even when switching between spaces. What's weird is that if I boot off a clean Big Sur installation, the issue isn't seen.
If you installed using Homebrew, could you try running the following in terminal:
defaults delete dev.jaysce.Spaceman
brew uninstall --zap spaceman
and then installing again, brew install spaceman.
Or If you installed through GitHub, run the following:
defaults delete dev.jaysce.Spaceman
If you have AppCleaner installed, use that to uninstall Spaceman then reinstall, otherwise run the following in terminal:
rm -rf ~/Library/Application\ Scripts/dev.jaysce.Spaceman-LaunchAtLoginHelper
rm -rf ~/Library/Caches/dev.jaysce.Spaceman
rm -rf ~/Library/Containers/dev.jaysce.Spaceman-LaunchAtLoginHelper
rm -rf ~/Library/Preferences/dev.jaysce.Spaceman.plist
Then delete Spaceman.app by moving to bin and emptying the bin
Reinstall Spaceman through GitHub or Homebrew
Let me know if that helps.
Thanks. Unfortunately, none of that helped. I previously tried to uninstall it using CleanMyMac but also did it via your instructions just now.
Maybe try this:
Quit Spaceman.app
Run rm -rf ~/Library/Preferences/com.apple.spaces.plist in terminal.
Create a new Space
Open Spaceman.app
Thanks. So deleting "com.apple.spaces.plist" in terminal and rebooting my Mac fixed the issue.
One really bizarre thing is that when my computer came back up, obviously it only had 1 space, but it was using a really old wallpaper from years ago. It's almost as if macOS was retaining some data about an old space that was hidden this entire time from view. I am still wondering if this is a leftover from back when macOS supported Dashboard, which used to count as a "space" as well.
Yea I'm pretty sure Dashboard was the problem. When Dashboard was supported it also counted as a space, and I guess it's left over in that plist even after updating the OS.
Glad the bug is fixed!
|
gharchive/issue
| 2021-05-02T19:40:13 |
2025-04-01T04:55:13.127733
|
{
"authors": [
"Jaysce",
"jetblackrx89"
],
"repo": "Jaysce/Spaceman",
"url": "https://github.com/Jaysce/Spaceman/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2088938917
|
Merge Unrelated histories
Merged unrelated histories
Solved the initial issues
|
gharchive/issue
| 2024-01-18T19:40:41 |
2025-04-01T04:55:13.128960
|
{
"authors": [
"Jaz-3-0"
],
"repo": "Jaz-3-0/Hyperledger_ChainCode_dev",
"url": "https://github.com/Jaz-3-0/Hyperledger_ChainCode_dev/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
320576111
|
Migrate serialization extensions to new project
Migrate the Id3.Serialization package code to the new netstandard2.0 project.
[x] Fix any errors
[x] Update language features to C# 7.2
[x] Code clean-up and optimization
Closed with 34426c2b15addb57a55b39a5944d17e63b84aeb6
|
gharchive/issue
| 2018-05-06T09:13:53 |
2025-04-01T04:55:13.145390
|
{
"authors": [
"JeevanJames"
],
"repo": "JeevanJames/Id3",
"url": "https://github.com/JeevanJames/Id3/issues/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2008445369
|
Help about parameter setting
Hello, I would like to check my parameter settings. When training on CVUSA I use a single GPU, with gpu set to 1, lr set to 0.0001, batch-size set to 32, dist-url set to 'tcp://localhost:10001', world-size set to 1, rank set to 0, epochs set to 100, op set to sam, wd set to 0.03, dataset set to cvusa, cos set to True, dim set to 1000, asam set to True, and rho set to 2.5. But the result of the first stage is very bad. Could you tell me whether I made a mistake? I took a screenshot of the specific parameter settings, thank you.
The image is not uploaded correctly, so I can not see the parameters. Why not directly run the scripts following the instructions?
Since I only have one GPU, running sh run_CVUSA.sh directly seems to require multiple GPUs, and when I run sh run_cvusa.sh directly, I get an error. The error is at work = _default_pg.barrier().
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1607370172916/work/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
You may need to specify the GPUs for training in "train.py". Remove the second line if you want to train the simple stage-1 model. Change the "--dataset" to train on other datasets. The code follows the multiprocessing distributed training style from PyTorch and MoCo, but it only uses one GPU by default for training. This README paragraph seems to mean that a single GPU is used, but the command line content seems to run in a distributed manner with multiple GPUs.
It does not require multiple GPUs. In train.py, we use os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" and os.environ["CUDA_VISIBLE_DEVICES"] = "0" to assign one GPU.
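For reference, a minimal Python sketch of that single-GPU assignment (the device index "0" is only an example and should match your machine; the variables have to be set before any CUDA context is created):
import os
# Restrict the process to a single GPU before importing torch / initializing CUDA.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # example device index
import torch  # imported after the environment variables are set
print(torch.cuda.device_count())  # should report a single visible device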
|
gharchive/issue
| 2023-11-23T15:26:53 |
2025-04-01T04:55:13.151151
|
{
"authors": [
"Jeff-Zilence",
"wlfxy"
],
"repo": "Jeff-Zilence/TransGeo2022",
"url": "https://github.com/Jeff-Zilence/TransGeo2022/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
417771412
|
Method not found: "System.DateTime Senparc.NeuChar.Entities.IMessageBase.get_CreateTime()"
(*** This board is only for reporting bugs and submitting feature requests; it does not answer development questions, so please do not post development questions here.
If you need help with development, please head over to the Q&A community instead: https://weixin.senparc.com/QA ***)
Problem description
After upgrading to the latest version, events are no longer triggered. The system log shows:
Method not found: "System.DateTime Senparc.NeuChar.Entities.IMessageBase.get_CreateTime()"
Steps to reproduce the problem (if possible)
...
...
Official WeChat documentation URL
Snapshot of the official WeChat documentation (copy the key content directly below)
Modules where the problem was found
[ ] Senparc.Weixin version: 6.3.9.0
[ ] Senparc.Weixin.MP version: 16.6.1.0
[ ] Senparc.Weixin.MP.MVC version: 7.2.1.0
[ ] Senparc.Weixin.Open version:
~- [ ] Senparc.Weixin.QY version:~
[ ] Senparc.Weixin.Work version:
[ ] Senparc.Weixin.WxOpen version:
[ ] Senparc.Weixin.Cache.Redis version:
[ ] Senparc.Weixin.Cache.Memcached version:
[ ] Other modules:
.NET version used by the modules
[x] .NET 4.5
Development environment
[x] Visual Studio 2017
Cache environment
[x] Server in-memory cache (default)
[ ] Redis version:
[ ] Memcached version:
[ ] Other
Contact
Email: 29938389@qq.com
(You can also send the issue URL and your contact information to www.jeffrey.su@gmail.com)
After posting, please keep an eye on the issue; sometimes further communication is needed, and issues that go unanswered for a long time will be closed.
Are you sure you upgraded everything to the latest version? Developers have reported similar problems before, and it eventually turned out that one DLL in the production environment was an old version.
Are you sure you upgraded everything to the latest version? Developers have reported similar problems before, and it eventually turned out that one DLL in the production environment was an old version.
I compared everything again; it is the latest version~
The demo we run online right now uses all the latest, automatically deployed DLL versions: https://sdk.weixin.senparc.com/
Could you tell me the software configuration of your production environment (although I don't think it matters much)?
Ours online is IIS 7.5 + .NET Core 2.2 + Redis
I see the difference now. The error you posted, System.DateTime Senparc.NeuChar.Entities.IMessageBase.get_CreateTime, indeed does not exist: in the latest version the type is already DateTimeOffset, no longer DateTime. So you may want to check whether the DLL you are calling is an old one (or whether the project was not recompiled after referencing the new version)?
The demo we run online right now uses all the latest, automatically deployed DLL versions: https://sdk.weixin.senparc.com/
Could you tell me the software configuration of your production environment (although I don't think it matters much)?
Ours online is IIS 7.5 + .NET Core 2.2 + Redis
Ours is IIS 16 + .NET 4.5
I see the difference now. The error you posted, System.DateTime Senparc.NeuChar.Entities.IMessageBase.get_CreateTime, indeed does not exist: in the latest version the type is already DateTimeOffset, no longer DateTime. So you may want to check whether the DLL you are calling is an old one (or whether the project was not recompiled after referencing the new version)?
OK, thank you very much. I'll clean things out and pull the new version again. Sorry for the trouble~
OK, just to add: by "the DLL you are calling" above, I mean your own project's (or another third party's) DLL.
|
gharchive/issue
| 2019-03-06T12:07:38 |
2025-04-01T04:55:13.170764
|
{
"authors": [
"JeffreySu",
"foxaaa"
],
"repo": "JeffreySu/WeiXinMPSDK",
"url": "https://github.com/JeffreySu/WeiXinMPSDK/issues/1643",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
421001668
|
Engine crash when loading a scene with buildings and navmesh
Description: The engine crashes when I try to load a scene with buildings and navmesh. If you don't close the engine, there's no problem, the navmesh works fine, etc. But when you close it and open it again, the engine crashes the second you try to load the scene.
Build: 0.2.8
Type: NavMesh / Save&Load
Steps to reproduce: Bake a navmesh on the scene with some buildings set as static and not walkable. Save the scene, close the engine, open it again, and try to load the scene.
Frequency: Always
Fixed in master @aleixgab
|
gharchive/issue
| 2019-03-14T12:53:52 |
2025-04-01T04:55:13.185391
|
{
"authors": [
"JoanValiente",
"OscarHernandezG"
],
"repo": "JellyBitStudios/JellyBitEngine",
"url": "https://github.com/JellyBitStudios/JellyBitEngine/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1260036255
|
Event of closing
Add an event to the onclose, beforeModalClose, and onBeforeModalClose guards.
closeById needs to be extended with options as a second parameter. The set of parameters will then be passed from the close methods, and the final event (after dtoEvent processing) will be passed as the first (and only) parameter to each guard.
Point for closing:
modal.close
closeModal
popModal
backgroundClick
background: Boolean. Means that this was an attempt to close the modal window by clicking on the background. Default: false.
First, we need to find the relevant prop that will be passed inside EventClose.
Inside WidgetModalContainerItem, closing will use modal.close(), not lastModal.close().
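Purely as an illustration of the flow sketched in these notes, a hypothetical TypeScript shape (none of these names are the library's actual API):
interface CloseOptions {
  background?: boolean; // true when the close attempt came from a background click
}
class ModalCloseEvent {
  constructor(public readonly id: number, public readonly background: boolean) {}
}
type CloseGuard = (event: ModalCloseEvent) => boolean | Promise<boolean>;
async function closeById(id: number, options: CloseOptions = {}, guards: CloseGuard[] = []): Promise<boolean> {
  // The options collected in close() / popModal() / backgroundClick become one event object...
  const event = new ModalCloseEvent(id, options.background ?? false);
  // ...and that event is passed as the only argument to every guard; any guard can cancel closing.
  for (const guard of guards) {
    if (!(await guard(event))) return false;
  }
  return true;
}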
|
gharchive/issue
| 2022-06-03T15:17:13 |
2025-04-01T04:55:13.208694
|
{
"authors": [
"Jenesius"
],
"repo": "Jenesius/vue-modal",
"url": "https://github.com/Jenesius/vue-modal/issues/52",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
250610609
|
[Question] Combining FluentValidation and IValidatableObject Errors
Is it possible to run both FluentValidation and IValidatableObject in the same ViewModel? For example if I had the following ViewModel:
public class TestViewModel
{
public Foo Foo { get; set; }
public Bar Bar { get; set; }
}
public class Foo : IValidatableObject
{
public string FooName { get; set; }
public IEnumerable<ValidationResult> Validate(System.ComponentModel.DataAnnotations.ValidationContext validationContext)
{
if (string.IsNullOrEmpty(FooName))
{
yield return new ValidationResult("Required from IValidatableObject", new[] { "FooName" });
}
}
}
[Validator(typeof(BarValidator))]
public class Bar
{
public string BarName { get; set; }
}
public class BarValidator : AbstractValidator<Bar>
{
public BarValidator() {
RuleFor(x => x.BarName).NotNull().WithMessage("Required from FluentValidation");
}
}
Is there a way both Foo and Bar validations can run and return the results when my Controller calls ModelState.IsValid?
With the asp.net core integration, no, you can't mix FV and other validation strategies for the same object. For MVC5 and older I'm not sure - I've not tested that.
@JeremySkinner thanks for the quick reply.
It's something I'd like to enable support for in the future in aspnetcore, but there are a lot of implications to doing this and it requires a lot of work, so it's not something that'll be added soon I'm afraid.
I've just pushed out 7.2-beta1 with initial support for IValidatableObject. Please feel free to give it a try.
@JeremySkinner Thanks for the update! I tried it out in my repo (https://github.com/ryanbuening/DynamicCollections) and it seems to be working great.
I think this gives me the flexibility to use IValidatableObject for BeginCollectionItem and FluentValidation for everything else in my ViewModel which I tried to mock in my repo.
I'm not quite sure what you mean when you say "...this only supports top-level objects, but not child properties". Could you give me an example?
If you had a child property that also implements IValidatableObject, then I don't believe that will be invoked.
@JeremySkinner so I have this ViewModel: https://github.com/ryanbuening/DynamicCollections/blob/master/DynamicCollections/Models/PersonsViewModel.cs and all validations are showing when I would expect them to. Would you expect this to work?
Could you give me a specific scenario in the ViewModel above where you wouldn't expect IValidatableObject to be invoked? Sorry, I'm not completely following what you mean by a child property implementing IValidatableObject. I'm just trying to figure out the limitations before I start converting my validation logic to FluentValidation in combination with IValidatableObject.
Thanks.
|
gharchive/issue
| 2017-08-16T12:33:09 |
2025-04-01T04:55:13.225518
|
{
"authors": [
"JeremySkinner",
"ryanbuening"
],
"repo": "JeremySkinner/FluentValidation",
"url": "https://github.com/JeremySkinner/FluentValidation/issues/545",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
251198310
|
Question - Use numeric types on WithErrorCode
Hi fellas,
Today FluentValidation supports only String in the WithErrorCode method. Are there any plans to add an overload supporting numeric types, such as Int32?
Why:
The error codes in my project live inside enums, so, technically, I can only use numeric types such as Int32 and long (Int64).
Thanks a lot.
Regards,
@rmszc81
Hi, no - the error codes by default store the name of the validator used to generate the error, so they need to be strings. I'd suggest just using a string representation of the number and then parse/convert the results after validation has completed.
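For illustration, a minimal C# sketch of that workaround (the enum, model, and validator names here are hypothetical examples, not FluentValidation types):
using System;
using FluentValidation;
public enum ErrorCode { NameRequired = 1001 }
public class Person { public string Name { get; set; } }
public class PersonValidator : AbstractValidator<Person>
{
    public PersonValidator()
    {
        // Store the numeric enum value as its string representation.
        RuleFor(x => x.Name).NotEmpty().WithErrorCode(((int)ErrorCode.NameRequired).ToString());
    }
}
public static class Example
{
    public static void Run()
    {
        var result = new PersonValidator().Validate(new Person());
        foreach (var failure in result.Errors)
        {
            // Convert the string code back into the enum once validation has completed.
            var code = (ErrorCode)int.Parse(failure.ErrorCode);
            Console.WriteLine($"{code}: {failure.ErrorMessage}");
        }
    }
}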
|
gharchive/issue
| 2017-08-18T10:01:39 |
2025-04-01T04:55:13.227846
|
{
"authors": [
"JeremySkinner",
"rmszc81"
],
"repo": "JeremySkinner/FluentValidation",
"url": "https://github.com/JeremySkinner/FluentValidation/issues/547",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
334091157
|
Error when the object I pass to Validate is null
Hello everyone
I'm sorry, I do not speak English, and I apologize if this channel is not considered appropriate for clarifying a doubt. I am building a WebApi in .NET Core 2.0 using version 7.5.2.
One validation I must do is to check whether the object sent by the client consuming my WebApi is null, and for this I wrote the following validation:
My validations
public class FaturaDtoValidator: AbstractValidator<FaturaDto>
{
public FaturaDtoValidator()
{
RuleFor(fatura => fatura).NotNull().WithMessage(Mensagem.FATURA_OBRIGATORIO);
RuleFor(fatura => fatura.Itens).SetCollectionValidator(new ContratoItemDtoValidator());
RuleFor(fatura => fatura.Itens).NotNull().WithMessage(Mensagem.ITENS_OBRIGATORIO);
RuleFor(fatura => fatura.Itens).Must(list => list.Count == 0).WithMessage(Mensagem.ITENS_OBRIGATORIO);
}
}
public class ContratoItemDtoValidator : AbstractValidator<ContratoItemDto>
{
public ContratoItemDtoValidator()
{
RuleFor(item => item.CodigoItem).NotEmpty().NotNull().WithMessage(Mensagem.CODIGO_ITEM_OBRIGATORIO);
}
}
My startup class
services
.AddTransient<IValidator<FaturaDto>, FaturaDtoValidator>()
.AddTransient<IValidator<ContratoItemDto>, ContratoItemDtoValidator>();
My controller
[Produces("application/json")]
[Route("api/v1/")]
public class HubController : Controller
{
[HttpPost("{fatura}")]
[Route("faturamento")]
public async Task<IActionResult> Faturamento([FromBody]FaturaDto fatura)
{
OrchestratorApp orc = new OrchestratorApp();
return await orc.RedirectFaturamento(fatura);
}
}
And to trigger validation
public async Task<JsonResult> RedirectFaturamento(FaturaDto fatura)
{
...
FaturaDtoValidator validator = new FaturaDtoValidator();
ValidationResult results = validator.Validate(fatura);
My FaturaDto object came in null, and this is the check I would like to include, but when validation is triggered an error occurs: "ArgumentNullException: Value cannot be null. Parameter name: Cannot pass null model to Validate."
Hi there, please could you post your question in English?
Looking at the screenshot, you must provide a non-null root instance. This is by design.
Hi Jeremy, thanks for answering. I'm using Google Translate.
Is there any possibility of adding support in the library for handling this validation when a null object arrives?
No, that's not possible I'm afraid - FluentValidation is designed to validate properties on a non-null object instance. You can't pass a null instance to Validate. You need to make sure that WebApi instantiates a valid instance before passing it to the validator.
Is this also true for an instantiated object that contains a null list?
Should this list also be instantiated?
Can I do something like this:
RuleFor(fatura => fatura.Itens).NotNull().WithMessage(Mensagem.ITENS_OBRIGATORIO);
RuleFor(fatura => fatura.Itens).Must(list => list.Count ==0).WithMessage(Mensagem.ITENS_OBRIGATORIO);
That's fine, the list can be null, but you should put a When condition on the second rule, like this:
RuleFor(fatura => fatura.Itens).NotNull().WithMessage(Mensagem.ITENS_OBRIGATORIO);
RuleFor(fatura => fatura.Itens).Must(list => list.Count == 0).WithMessage(Mensagem.ITENS_OBRIGATORIO).When(fatura => fatura.Itens != null);
I understood the explanation and thank you for your time.
You're welcome - let me know if you have any other questions.
|
gharchive/issue
| 2018-06-20T13:58:18 |
2025-04-01T04:55:13.235435
|
{
"authors": [
"JeremySkinner",
"rafaelaugustomiranda"
],
"repo": "JeremySkinner/FluentValidation",
"url": "https://github.com/JeremySkinner/FluentValidation/issues/795",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2541918783
|
Typo in about us
Hello, could you please fix this "andorganization" for us? We probably have a typo there. Thanks.
Also, is this font currently bold? I think it might look better unbolded.
fixed, see commit a5bac007f7da544f1d885186ba8cedecbbd85a34
|
gharchive/issue
| 2024-09-23T08:23:53 |
2025-04-01T04:55:13.237311
|
{
"authors": [
"JeremyZXi"
],
"repo": "JeremyZXi/NEST",
"url": "https://github.com/JeremyZXi/NEST/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
79376376
|
fix: IllegalArgumentException in AnkoLogger
Log.isLoggable will throw an IllegalArgumentException when tag.length() > 23.
See http://developer.android.com/reference/android/util/Log.html for more information.
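As a rough Kotlin sketch of the kind of guard needed (not necessarily the exact change in this pull request), the tag can be trimmed before it is handed to Log.isLoggable:
import android.util.Log
// Log.isLoggable rejects tags longer than 23 characters on older API levels, so trim the tag first.
fun isLoggableSafe(tag: String, level: Int): Boolean {
    val safeTag = if (tag.length > 23) tag.substring(0, 23) else tag
    return Log.isLoggable(safeTag, level)
}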
Thank you for the pull request!
Could you fix the formatting in it?
pull request updated :)
Merged!
|
gharchive/pull-request
| 2015-05-22T09:31:29 |
2025-04-01T04:55:13.267359
|
{
"authors": [
"timfreiheit",
"yanex"
],
"repo": "JetBrains/anko",
"url": "https://github.com/JetBrains/anko/pull/46",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1537320356
|
how to reduce the time for First Meaningful Paint
Hi,
Is there any way to shrink the time to "First Meaningful Paint" on the Compose framework side, i.e., the time for the first page rendering? For example, maybe we could delay some of the modifiers until the 2nd frame? Maybe the slot table for the first frame could be captured and loaded in advance next time?
The whole process of showing an application includes:
Initialization of Compose. Happens only when we start an application.
Loading Java and native libraries
Initialization of various frameworks used by Compose (Swing, Skia, Native platform framework)
Initialization of Compose itself
Preparing the application. Happens every time we open a new window/dialog, or just change the content of it.
composition (construction tree of UI components)
layout (deciding position and size of UI components)
paint (drawing UI components on Canvas)
all drawing calls during paint are performed by GPU
At every step we can do some optimization at the framework level, but it is not always possible to optimize it completely, and the developer of the application should take care of optimizing the second step (preparing the application).
If it is not possible to do all the work quickly in the first frame, the application developer should schedule some work for the next frames. For desktop it can be achieved this way:
```
import androidx.compose.foundation.background
import androidx.compose.foundation.border
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.Row
import androidx.compose.foundation.layout.fillMaxHeight
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.width
import androidx.compose.foundation.rememberScrollState
import androidx.compose.foundation.verticalScroll
import androidx.compose.material.TextField
import androidx.compose.runtime.Composable
import androidx.compose.runtime.DisposableEffect
import androidx.compose.runtime.State
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.getValue
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.unit.dp
import androidx.compose.ui.window.WindowScope
import androidx.compose.ui.window.singleWindowApplication
import java.awt.event.ComponentAdapter
import java.awt.event.ComponentEvent
fun main() = singleWindowApplication {
val isWindowVisible by isWindowVisibleState()
Box(Modifier.fillMaxSize().background(Color.LightGray)) {
if (!isWindowVisible) {
LightUI()
} else {
HeavyUI()
}
}
}
@Composable
private fun WindowScope.isWindowVisibleState(): State<Boolean> {
val isVisible = remember { mutableStateOf(false) }
DisposableEffect(Unit) {
val listener = object : ComponentAdapter() {
override fun componentShown(e: ComponentEvent?) {
isVisible.value = window.isVisible
}
}
window.addComponentListener(listener)
onDispose {
window.removeComponentListener(listener)
}
}
return isVisible
}
@Composable
fun LightUI() {
Row {
repeat(3) {
Box(
Modifier
.width(200.dp)
.fillMaxHeight()
.border(1.dp, Color.Gray)
)
}
}
}
@Composable
fun HeavyUI() {
Row {
repeat(3) {
Column(
Modifier
.width(200.dp)
.fillMaxHeight()
.border(1.dp, Color.Black)
.verticalScroll(rememberScrollState())
) {
repeat(50) {
var text by remember { mutableStateOf("") }
TextField(text, { text = it })
}
}
}
Column {
// loading images asynchronously
}
}
}
```
(maybe we should provide a similar check in Compose itself, for all platforms)
Anyway, the startup of a Compose application can be very slow (2-4 seconds), so we need to:
1. Optimize some of the steps (where we can):
- reduce framework initialization time
- do some initialization in parallel
- support parallel composition/layout/paint
- postpone some work, and show the content faster, if possible (not sure that it is achievable on the framework level)
2. Provide tools to do additional optimizations on the application level:
- ability of preparing content (compose, layout, paint) asynchronously in background
- serialize some of the work, and just load it from the disk next time
3. Provide tools to measure performance of each step (in IDE, or just in console)
4. Describe them in our documentation
> delay some of the modifiers until the 2nd frame
Good idea, but I'm not sure that it is possible in the general case. This work would have to be done in each modifier - we should look at which modifiers are slow, and postpone some work in them.
> the slot table for the first frame could be captured and loaded in advance next time
It is not possible to do this at the framework level, because we can't know which part can be loaded next time; that should be decided by the application developer. Also, if we talk about serializing some part (storing it on disk), that is also not possible at the framework level, as the slot table contains a lot of non-serializable data. But anyway, we can provide tools so developers can easily choose the parts which they decide can be postponed and/or serialized.
Related issue: https://github.com/JetBrains/compose-jb/issues/2517
Just a small hint: one thing that helps (at least on macOS) is loading fonts in parallel, if you have a main menu. Launch a coroutine on a background dispatcher even before you call application { … } and call this magic: UIManager.getFont("Panel.font").fontName. While the Compose app is loading classes, initializing, etc., at least part of the font system will already be ready.
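A minimal Kotlin sketch of that hint, assuming kotlinx-coroutines is on the classpath (the "Panel.font" key is the one quoted above):
import androidx.compose.ui.window.application
import javax.swing.UIManager
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
fun main() {
    // Warm up the Swing/AWT font subsystem in the background before Compose starts.
    CoroutineScope(Dispatchers.IO).launch {
        UIManager.getFont("Panel.font").fontName
    }
    application {
        // regular window content goes here
    }
}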
Please check the following ticket on YouTrack for follow-ups to this issue. GitHub issues will be closed in the coming weeks.
|
gharchive/issue
| 2023-01-18T02:49:01 |
2025-04-01T04:55:13.277371
|
{
"authors": [
"guoguo338",
"igordmn",
"okushnikov",
"orangy"
],
"repo": "JetBrains/compose-multiplatform",
"url": "https://github.com/JetBrains/compose-multiplatform/issues/2645",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2018387503
|
rtl doesn't work on iOS
Describe the bug
iOS ignores the Arabic RTL layout/orientation. See the attached screenshot, where the right side shows the Arabic preview inside Android Studio and the left side shows the iOS simulator with that screen. The Arabic RTL layout works fine on Android.
Affected platforms
Select one of the platforms below:
iOS
Versions
Kotlin version: 1.9.20
Compose Multiplatform version: 1.5.11
OS version(s)* (required for Desktop and iOS issues): iOS 15
OS architecture (x86 or arm64): arm64
Context
The code I use to create view controllers is a generic call to ComposeUIViewController plus some extra code to pass the UIViewController to my classes to be able to call native stuff. First, I tried to create a @Composable that logs the layout direction:
val result = ComposeUIViewController {
logDirection()
WallaceTheme {
BiomarkersHomeScreen(
platform = platform,
…
)
}
}
…
@Composable
fun logDirection() {
if (LocalLayoutDirection.current == LayoutDirection.Rtl) {
println("DDDD direction is rtl")
} else {
println("DDDD direction is ltr")
}
}
Running this code under iOS shows no surprises: the direction is LTR despite the OS being set to an RTL language. Then, I modified my platform class to contain a native Swift test using the UIViewController returned from ComposeUIViewController:
override func testDirection() {
print("DDDD vc is \(vc)")
if let direction = vc?.view.effectiveUserInterfaceLayoutDirection {
if direction == .rightToLeft {
print("DDDD iOS direction is right to left")
} else {
print("DDDD iOS direction is bad")
}
} else {
print("DDDD no vc or view yet?")
}
}
Running this code on iOS will show that the direction is RTL. The actual log output I get is:
DDDD testing platform direction
DDDD vc is Optional(<ComposeWindow: 0x131c74910>)
DDDD iOS direction is right to left
DDDD direction is ltr
The third line is the swift code, and the fourth line is the kotlin composable.
Something is not connecting the RTL direction of the ComposeWindow.view with the LocalLayoutDirection.
Not being an iOS Arabic user myself, I've been releasing this to users without realising, and now I guess they are having fun navigating back through screens where some have the back arrow where expected and some have it in the opposite place 🙈.
Is there any way for me to quickly patch this and force LocalLayoutDirection to have the right value outside of your bugfix release cycle?
Is there any way for me to quickly patch this and force LocalLayoutDirection to have the right value outside of your bugfix release cycle?
Ok, managed to patch it. My solution has been to implement the following code around the returned UIViewController:
override func isNativeDirectionRtl() -> KotlinBoolean? {
if let direction = vc?.view.effectiveUserInterfaceLayoutDirection {
if direction == .rightToLeft {
return KotlinBoolean(bool: true)
} else {
return KotlinBoolean(bool: false)
}
} else {
return nil
}
}
Then, I add the following composable and wrap everything with it:
@Composable
fun overrideDirection(platform: BiomarkersHomePlatformIos, content: @Composable () -> Unit) {
when (platform.isNativeDirectionRtl()) {
true -> {
CompositionLocalProvider(LocalLayoutDirection provides LayoutDirection.Rtl) {
content()
}
}
false -> {
CompositionLocalProvider(LocalLayoutDirection provides LayoutDirection.Ltr) {
content()
}
}
null -> content()
}
}
…
val result = ComposeUIViewController {
logDirection("first test")
overrideDirection(platform) {
logDirection("overriden test")
WallaceTheme {
BiomarkersHomeScreen(
…
With overrideDirection, everything finally renders in Arabic as intended.
Thank you for submitting the issue!
It's a known problem, we linked this issue to #3096
Please check the following ticket on YouTrack for follow-ups to this issue. GitHub issues will be closed in the coming weeks.
|
gharchive/issue
| 2023-11-30T10:58:09 |
2025-04-01T04:55:13.286981
|
{
"authors": [
"chokokatana",
"mazunin-v-jb",
"okushnikov"
],
"repo": "JetBrains/compose-multiplatform",
"url": "https://github.com/JetBrains/compose-multiplatform/issues/3997",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2205761021
|
using drag events prevent desktop app to close
Describe the bug
Using "Modifier.onExternalDrag" sometimes prevents the application from closing properly when you close the window. The window closes but the application remains active.
Affected platforms
Desktop Linux
Versions
Kotlin version*: 1.9.22
Compose Multiplatform version*: 1.6.0
OS version(s)* (required for Desktop and iOS issues): Fedora 39
OS architecture (x86 or arm64): x86_64
JDK (for desktop issues): corretto 17
To Reproduce
fun main() = application {
val windowState = rememberWindowState(width = 800.dp, height = 600.dp)
Window(
state = windowState,
onCloseRequest = ::exitApplication,
) {
Row(Modifier.onExternalDrag(onDrop = {})) {
}
}
}
The problem occurs after 2 or 4 attempts.
Expected behavior
a call to "::exitApplication" should close the application.
I created an empty project using the "Kotlin Multiplatform Wizard", then modified the main function as indicated in the bug report. I ran the "run" Gradle task and then closed the window several times without dragging any file onto the window. I tested on all 3 platforms: Windows, Linux and macOS. The problem only occurs on Linux; the error frequency is 25-33%.
Please check the following ticket on YouTrack for follow-ups to this issue. GitHub issues will be closed in the coming weeks.
|
gharchive/issue
| 2024-03-25T13:24:26 |
2025-04-01T04:55:13.292324
|
{
"authors": [
"hgourvest",
"okushnikov"
],
"repo": "JetBrains/compose-multiplatform",
"url": "https://github.com/JetBrains/compose-multiplatform/issues/4541",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
823671604
|
Windows 7 : DLL Error
After installing the exported MSI file, the errors below are shown when launching the program.
After pressing OK
OS: Windows 7
Will installation of MSVC redistributable help?
I didn't try that, but after installing JDK (openjdk-15), the error is gone.
You installed openjdk-15 before doing the build? Or on the machine on which you ran the app? Or are those the same machine?
Hopefully installing the jdk on the target machine where you run the app is not necessary.
I think we might have added a rule against building with JDK 14 in https://github.com/JetBrains/compose-jb/commit/ecbf3485381b39be66aaa299ff17a1851a97089b
You installed openjdk-15 before you ran the app?
No. It was a fresh Windows 7 VM (No JDK at all)
So shall we close this bug since it should no longer occur now that we require JDK 15? Or did I miss something?
Would be cool to test if installing Visual C++ redistributable (https://aka.ms/vs/16/release/vc_redist.x64.exe) is enough. If this is the case - then it's likely OpenJDK issue.
The same issue can still be reproduced on Windows 7. The first error is exactly the one about api-ms-win-crt-runtime-l1-1-0.dll, followed by the error about jni.dll. Installing openjdk-17 on the target device fixes this problem.
The same issue can still be reproduced on Windows 7. The first error is exactly the one about api-ms-win-crt-runtime-l1-1-0.dll, followed by the error about jni.dll. Installing openjdk-17 on the target device fixes this problem.
I fixed this issue by installing the update KB2999226, available at the link below (I'm using openjdk-20 to generate the installer):
https://www.microsoft.com/en-US/download/confirmation.aspx?id=49093
Please check the following ticket on YouTrack for follow-ups to this issue. GitHub issues will be closed in the coming weeks.
I had the same issue on Windows 7.
Fixed after installing OpenJDK https://aka.ms/download-jdk/microsoft-jdk-17.0.13-windows-x64.msi
|
gharchive/issue
| 2021-03-06T15:12:25 |
2025-04-01T04:55:13.300065
|
{
"authors": [
"JunkFood02",
"RafaelUSA7",
"chethann",
"jimgoog",
"okushnikov",
"olonho",
"theapache64"
],
"repo": "JetBrains/compose-multiplatform",
"url": "https://github.com/JetBrains/compose-multiplatform/issues/468",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1074223669
|
Process stops when runIdeForUiTests is launched with gradlew
When runIdeForUiTests is launched via gradlew, the process just stops if it is launched like that.
I just encountered the same situation. The documentation claims that this would work:
./gradlew clean runIdeForUiTest & ./gradlew test
But with a single ampersand Gradle is just backgrounded and there is no feedback on when it is actually safe to start the UI test.
If you could terminate the Gradle process after the UI server is running, then using a double ampersand this command could actually work.
But that also requires a way to get rid of the running process later, after the test, even in the case of test failures.
|
gharchive/issue
| 2021-12-08T10:01:55 |
2025-04-01T04:55:13.302071
|
{
"authors": [
"nizienko",
"redcatbear"
],
"repo": "JetBrains/intellij-ui-test-robot",
"url": "https://github.com/JetBrains/intellij-ui-test-robot/issues/133",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
928521308
|
Add aasthacollege
Added aasthacollege.edu.np
@philipto Any updates on this
|
gharchive/pull-request
| 2021-06-23T18:13:52 |
2025-04-01T04:55:13.345565
|
{
"authors": [
"rubiin"
],
"repo": "JetBrains/swot",
"url": "https://github.com/JetBrains/swot/pull/11861",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
116547577
|
new: Schools in Bremen, Germany
see also http://www.schule.bremen.de
@mondmann Sorry for the delay - we've been launching a rebranding and I haven't had time to check the repo for a while. As for the request - the page provided seems to point at some informational website for all schools. Please provide some confirmation of the fact that only teachers and students can register email addresses under the mentioned domain. Thank you!
@zweizwei Hmm, how would you like me to prove this? Well, there is another e-mail domain called bildung.bremen.de for non-educational staff in schools of Bremen. For the domain schule.bremen.de, all e-mail users are nightly updated from the list of pupils and teachers in public schools in the federal state of Bremen. What do you want to get from me for approval?
@mondmann If there is any official document confirming this (for example instructions on how to register an email address for students / teachers) or a page indicating this - this should suffice. Anyway, until we have such confirmation we can not add the domain - the addresses are accepted automatically once the domain is added, so we need to make sure that the domain is used on purpose. Thank you!
@zweizwei I asked the administrator of this e-mail domain and he added a TXT record to schule.bremen.de DNS entry:
$ host -t txt schule.bremen.de
schule.bremen.de descriptive text "Mailboxes" "under" "the" "domain" "schule.bremen.de" "are" "exclusively" "provided" "to" "teachers" "and" "pupils" "of" "schools" "of" "the" "federal" "city" "state" "of" "Bremen," "Germany."
Is that sufficient?
@zweizwei Are there any news on that, please?
@mondmann I am sorry for a delay with a resolving of the issue, but the problem is that this repository is to list particular colleges and universities. We are developing a common approach to situations when a large group of educational institutions share the same email domain for their students. I appreciate your patience, and I am sorry for taking one more week to resolve the situation.
In the meantime those teachers in your schools who wish to advise their students using JetBrains tools, can apply for a Classroom licenses: it's quicker and can be done online at jetbrains.com.
|
gharchive/pull-request
| 2015-11-12T13:21:09 |
2025-04-01T04:55:13.352807
|
{
"authors": [
"mondmann",
"philipto",
"zweizwei"
],
"repo": "JetBrains/swot",
"url": "https://github.com/JetBrains/swot/pull/1463",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1414168287
|
Create lecturer.txt
lecturer domain for Riau University
@rahmatrizalandhi First stage of verification passed, official domain is unri.ac.id as far as I verified. This is just an informational note. Pull request is still in the review. I appreciate your patience while waiting for the final decision.
@rahmatrizalandhi Pull request merged. Thank you.
|
gharchive/pull-request
| 2022-10-19T02:37:22 |
2025-04-01T04:55:13.354414
|
{
"authors": [
"philipto",
"rahmatrizalandhi"
],
"repo": "JetBrains/swot",
"url": "https://github.com/JetBrains/swot/pull/15464",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1527824514
|
Add EESI
The EESI is a french (public) art school in Angoulême and Poitiers https://www.eesi.eu/site/?rubrique162
@nathangir eesi.txt I am sorry, I have to close the request without merging, because you made a mistake in the request and did not specify any file path. Please ensure that you are familiar with this repository's rules located at the bottom of this page and make a new, correct request. Thank you for your understanding!
|
gharchive/pull-request
| 2023-01-10T18:22:59 |
2025-04-01T04:55:13.355983
|
{
"authors": [
"nathangir",
"philipto"
],
"repo": "JetBrains/swot",
"url": "https://github.com/JetBrains/swot/pull/16252",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1718544241
|
Add Gordon College in list of universities
Gordon College website
@JezReal First stage of verification passed, official domain is facebook.com as far as I verified. This is just an informational note. Pull request is still in the review. I appreciate your patience while waiting for the final decision. Proof of domain ownership: https://www.facebook.com/GordonCollegeOfficial/
|
gharchive/pull-request
| 2023-05-21T15:32:29 |
2025-04-01T04:55:13.357611
|
{
"authors": [
"JezReal",
"philipto"
],
"repo": "JetBrains/swot",
"url": "https://github.com/JetBrains/swot/pull/17579",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1741035342
|
add Megarezky University
unimerz.ac.id
@robikurniawan First stage of verification passed, official domain is unimerz.ac.id as far as I verified. This is just an informational note. Pull request is still in the review. I appreciate your patience while waiting for the final decision.
@robikurniawan Pull request merged. Thank you.
thank you @philipto
|
gharchive/pull-request
| 2023-06-05T06:03:34 |
2025-04-01T04:55:13.359635
|
{
"authors": [
"philipto",
"robikurniawan"
],
"repo": "JetBrains/swot",
"url": "https://github.com/JetBrains/swot/pull/17715",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
51476802
|
Add NTI-Gymnasiet alternative domain (nti.se)
There exists an alternative domain which some students are using and it would be great if it could be added.
obsolete, please reply to reopen
|
gharchive/pull-request
| 2014-12-09T20:24:46 |
2025-04-01T04:55:13.360855
|
{
"authors": [
"ntielev",
"zweizwei"
],
"repo": "JetBrains/swot",
"url": "https://github.com/JetBrains/swot/pull/573",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
732475888
|
Added Gdynia's faculty of WSB University
Added Gdynia's faculty of WSB University in Gdańsk due to different domain address.
@konopkagrzegorz Please provide us with a proof, which shows that the school recognizes the domain you are submitting as an official email domain for the students. It can be a page at the official website which confirms it, or a copy of an internal school instruction.
@philipto Thank you for your response. I do not have any internal instruction. I hope that the information shown below will help you in verification.
From NASK – NATIONAL RESEARCH INSTITUTE (the company responsible for domain registration in PL) we can read that the domain https://www.wsb.pl belongs to: Towarzystwo Edukacji Bankowej S.A. - the owner of the "WSB" schools in Poland
When you enter the address https://www.wsb.gdynia.pl it automatically redirects you to: https://www.wsb.pl/gdynia
For students we have the https://www.portal.wsb.pl extranet CRM portal where we have all necessary data about ourselves - including the student's email address.
We can get to that address through a tab in https://www.wsb.pl
The login ID on https://www.portal.wsb.pl is the same as the prefix of my email address.
I hope that those proofs are enough for verification - if not, please inform me and I will try to get an official response from my university.
@konopkagrzegorz Thank you for the clarification. Pull request merged.
|
gharchive/pull-request
| 2020-10-29T16:38:43 |
2025-04-01T04:55:13.365948
|
{
"authors": [
"konopkagrzegorz",
"philipto"
],
"repo": "JetBrains/swot",
"url": "https://github.com/JetBrains/swot/pull/9745",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
239053842
|
24 hour timestamps don't work for join messages
The 24 hour timestamp option doesn't apply to "X has joined the server" messages
The normal messages are shown with the proper 24 hour timestamp, but the "Zach just joined. Everyone, look busy!" message shows up as "Today at 11:29 PM", which is not 24 hour time.
Version is: v0.2.81:1.792byJiiks
I still see all chat messages in 12h am/pm mode.
And I have enabled the 24h option :)
This bug also goes for "somebody started a call" messages.
|
gharchive/issue
| 2017-06-28T05:06:20 |
2025-04-01T04:55:13.402048
|
{
"authors": [
"Daniihh",
"Lazersmoke",
"dinosw"
],
"repo": "Jiiks/BetterDiscordApp",
"url": "https://github.com/Jiiks/BetterDiscordApp/issues/519",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
201584717
|
content-length not sent when storing images outside of web root
Need to set the content-length when using the TransmitFile() method.
I'm testing a fix now - https://github.com/alindgren/ImageProcessor/commit/a11b32a2d13550793f5a873f4b8198f88d375537
Thanks... Disappointed I missed this.
@JimBobSquarePants I think several people were looking at this code and missed it. But this is OSS in action. PR just submitted :)
Fixed in ImageProcessor.Web v4.8.1
|
gharchive/issue
| 2017-01-18T14:13:43 |
2025-04-01T04:55:13.405366
|
{
"authors": [
"JimBobSquarePants",
"alindgren"
],
"repo": "JimBobSquarePants/ImageProcessor",
"url": "https://github.com/JimBobSquarePants/ImageProcessor/issues/544",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1368613215
|
how can i read the log file of your upload ckpts?
how can i read the log file of your upload ckpts?
Please refer to this issue
https://github.com/Jingkang50/OpenPSG/issues/33
|
gharchive/issue
| 2022-09-10T12:50:26 |
2025-04-01T04:55:13.409143
|
{
"authors": [
"Jingkang50",
"ZHUXUHAN"
],
"repo": "Jingkang50/OpenPSG",
"url": "https://github.com/Jingkang50/OpenPSG/issues/50",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2124885104
|
1.10.3 reports wrong version
when using document.querySelector('fx-fore').version 1.10.3 should be reported but is 1.11.2
This happens when the package is used via npm i, not when building locally
fake bug - does work after further testing
|
gharchive/issue
| 2024-02-08T10:56:22 |
2025-04-01T04:55:13.415438
|
{
"authors": [
"JoernT"
],
"repo": "Jinntec/Fore",
"url": "https://github.com/Jinntec/Fore/issues/244",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
156503758
|
reverting the changes for monitoring
since the monitoring changes are not going in the imminent release, I am reverting these changes
Thanks for taking care of this sir.
|
gharchive/pull-request
| 2016-05-24T13:13:22 |
2025-04-01T04:55:13.416458
|
{
"authors": [
"anantkaushik89",
"punituee"
],
"repo": "JioCloudCompute/jcsclient",
"url": "https://github.com/JioCloudCompute/jcsclient/pull/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1592881327
|
How to run "train.py" correctly?
I have downloaded raw data.
I have used video2img.py to convert the videos into images, and put the images in a relative location such as ..\Data\Train\00000220\00000220_10035.jpg.
When I executed train.py, I found that a Train_List.txt was needed, so I wrote another program to write the path of the picture into Train_List.txt.
When I run train.py again, I find that img_names[1][:-1] is used in dataset.py
May I ask what is the correct data format in Train_List.txt?
What kind of information should I store to be correct?
And are there any details that need attention?
Thanks.
I'm sorry,
I have found the relevant files under the original directory
...\DeepHomography-master\Data\
I will continue to investigate.
Thank you
|
gharchive/issue
| 2023-02-21T06:19:09 |
2025-04-01T04:55:13.425862
|
{
"authors": [
"Maitreya229"
],
"repo": "JirongZhang/DeepHomography",
"url": "https://github.com/JirongZhang/DeepHomography/issues/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1237403301
|
Create an npm package that supports typescript
I have created an npm package that supports TypeScript without modifying the original "detectIncognito.js" and "example.html" files.
I have also updated the usage in the README (CDN and npm).
And I still preserve the comments and license in the source code.
Hope that you will merge this pull request, thanks
Fantastic job!
|
gharchive/pull-request
| 2022-05-16T16:27:13 |
2025-04-01T04:55:13.441180
|
{
"authors": [
"Joe12387",
"naptestdev"
],
"repo": "Joe12387/detectIncognito",
"url": "https://github.com/Joe12387/detectIncognito/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
369852466
|
Enable systemd to restart glusterd automatically
Enables systemd to automatically try restarting glusterd up to 3 times per hour.
Also fixes yaml syntax issues.
@ShyamsundarR I'm curious if you see any issues w/ auto-restart of glusterd. It should pause 60s between and limit to a max of 3x per hour.
|
gharchive/pull-request
| 2018-10-13T23:48:12 |
2025-04-01T04:55:13.716345
|
{
"authors": [
"JohnStrunk"
],
"repo": "JohnStrunk/oso-gluster-ansible",
"url": "https://github.com/JohnStrunk/oso-gluster-ansible/pull/63",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
302031125
|
.net standard support
Are there any plans on supporting .NET Standard?
You can easily do it by yourself.
You just need to replace the old project file with a new one. All explained here:
http://blogs.microsoft.co.il/iblogger/2017/04/05/easily-supporting-multiple-target-frameworks-tfms-with-vs2017-and-nuget/
Not really supported; WIP in the core-3 branch and it may come in the future. You can also have a look at #223
|
gharchive/issue
| 2018-03-03T20:49:52 |
2025-04-01T04:55:13.718940
|
{
"authors": [
"Cr1TiKa7",
"JohnnyCrazy",
"gilmishal"
],
"repo": "JohnnyCrazy/SpotifyAPI-NET",
"url": "https://github.com/JohnnyCrazy/SpotifyAPI-NET/issues/221",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
840045147
|
Things that cost gold get more expensive overtime
It would be awesome to design the game with built-in inflation of gold (potentially the other resources too). Essentially, over time people will naturally get more and more gold because their empires will grow larger and larger; however, troops and buildings will have a fixed price. This would encourage saving and hoarding of gold and other resources because its buying power never decreases (although its trading power might). So, the solution is to determine the price of various objects based on a percentage of the game's total value in play, or just the total gold in play. I think either would work fine.
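To make the supply-indexed pricing idea concrete, here is a tiny illustrative sketch (written in Julia purely for illustration; the function and parameter names are hypothetical, not from the game's code):
# Prices scale with the total gold currently in play, so hoarding
# does not permanently lock in cheap troops and buildings.
# reference_gold is an assumed tuning constant: the gold supply at
# which items cost exactly their base price.
function inflated_price(base_price, total_gold; reference_gold = 10_000)
    return base_price * max(1.0, total_gold / reference_gold)
end
inflated_price(50, 5_000)    # early game: price stays at the base value, 50
inflated_price(50, 40_000)   # later, with 4x the reference supply: 200
The same formula could divide by the game's total value in play instead of raw gold; only the denominator changes.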
Definitely inflation is a good idea. With this in mind, how do we incentivize building new factories and stuff?
Well, at some point the players will realize they need to increase the production of troops, resources, or maybe they just need more storage, so I think the problem will fix itself.
However, another strategy I thought of is that a player could invest in the "futures" of building prices by buying buildings at the beginning of the game and then trading them later to other people for less than the then-current price, but more than what they originally paid.
|
gharchive/issue
| 2021-03-24T18:43:14 |
2025-04-01T04:55:13.721156
|
{
"authors": [
"AkivaDienstfrey",
"JohnnyWobble"
],
"repo": "JohnnyWobble/oversimplified",
"url": "https://github.com/JohnnyWobble/oversimplified/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2044504810
|
Java
Just learn more about java and implement it where you can
import { OpenAI, toFile } from "https://deno.land/x/openai@v4.20.1/mod.ts";
import { createReadStream } from "https://deno.land/std@0.153.0/node/fs.ts";
import { type Transcription } from "https://deno.land/x/openai@v4.20.1/resources/audio/transcriptions.ts";
import { db_get } from "../helpers/envs.ts";
// Openai api key
const key = `${await db_get("OPENAI_KEY")}`;
/**
Helper class used to summarize text/audio
Limitations: 25MB audio file cap (Can be fixed by splitting audio file)
*/
class Summarizer {
static openai = new OpenAI({ apiKey: key });
public static async summarize_from_txt(
sub: string | Transcription,
save: boolean,
) {
console.log(`${async () => {
return await db_get("OPENAI_KEY");
}}`);
let opcc = await this.openai.chat.completions.create({
model: "gpt-3.5-turbo-16k",
messages: [
{
role: "system",
content:
"You are an assistant that is specialized in summarizing text. When you receive text, you provide a concise and clear summary of it.",
},
{ role: "assistant", content: `${sub}` },
],
});
let resp = opcc.choices[0].message.content?.toString();
if (save) {
await Deno.writeFile(
`${Deno.cwd()}/summarized.txt`,
new TextEncoder().encode(resp),
);
}
return resp;
}
public static async summarize_from_audio(path: string, save: boolean) {
let file = await toFile(createReadStream(path));
let resp = await this.openai.audio.transcriptions.create(
{ file: file, model: "whisper-1", response_format: "text" },
);
return await this.summarize_from_txt(resp, save);
}
}
export { Summarizer };
Find the output
|
gharchive/issue
| 2023-12-15T23:58:36 |
2025-04-01T04:55:13.726210
|
{
"authors": [
"Johnnyj-P",
"SonitPanda"
],
"repo": "Johnnyj-P/Stylz-Web",
"url": "https://github.com/Johnnyj-P/Stylz-Web/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
615198588
|
chore: Standard Formatting
This PR installs the standard library, and applies its recommendations for formatting.
Should be reviewed and merged after #5
🎆 🥇
|
gharchive/pull-request
| 2020-05-09T15:22:16 |
2025-04-01T04:55:13.747117
|
{
"authors": [
"Jon-Biz",
"aphelionz"
],
"repo": "Jon-Biz/orbitdb-pinner",
"url": "https://github.com/Jon-Biz/orbitdb-pinner/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1448622387
|
[2pt] Display joined missions in My profile - Filters
Render a list of all joined missions (use filter()) on the "My profile" page.
Reminder to deploy the live application on Netlify
Reminder to Perform Testing using Jest & React Testing Library
|
gharchive/issue
| 2022-11-14T19:30:25 |
2025-04-01T04:55:13.757231
|
{
"authors": [
"JonahKayizzi"
],
"repo": "JonahKayizzi/space-travellers-hub",
"url": "https://github.com/JonahKayizzi/space-travellers-hub/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1632399616
|
ci: exclude directory of cookie in renovate config of cutter
part of #68
worked
|
gharchive/issue
| 2023-03-20T15:52:44 |
2025-04-01T04:55:13.760853
|
{
"authors": [
"JonasPammer"
],
"repo": "JonasPammer/cookiecutter-ansible-role",
"url": "https://github.com/JonasPammer/cookiecutter-ansible-role/issues/70",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2547931305
|
🛑 IServ is down
In 166f41d, IServ (https://gymall.de/iserv/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: IServ is back up in b0239cd after 31 minutes.
|
gharchive/issue
| 2024-09-25T12:59:57 |
2025-04-01T04:55:13.763412
|
{
"authors": [
"JonasSchaber"
],
"repo": "JonasSchaber/uptime",
"url": "https://github.com/JonasSchaber/uptime/issues/599",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2268483382
|
Too Low HOTA and IDF1
Hi, I'm trying to test on my custom dataset.
However, HOTA and IDF1 scores are too low.
What do you think I should check?
Thanks.
There is not enough information here to help, sorry. Why do you think they are too low?
Hi, I'm trying to test on my custom dataset. However, HOTA and IDF1 scores are too low. What do you think I should check?
Thanks.
Hello, may I ask if you have solved this problem
Hi, I'm trying to test on my custom dataset. However, HOTA and IDF1 scores are too low. What do you think I should check?
Thanks.
Hello, may I ask if you have solved this problem
Hi. I found that the annotations are not perfect, so there was a mismatch between detections and ground truths. I think these low scores are normal.
|
gharchive/issue
| 2024-04-29T08:59:06 |
2025-04-01T04:55:13.768830
|
{
"authors": [
"H1-KEY",
"JonathonLuiten",
"lmaple24327"
],
"repo": "JonathonLuiten/TrackEval",
"url": "https://github.com/JonathonLuiten/TrackEval/issues/146",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
52783019
|
Modified to allow for hooks into the code
This pull request updates a lot of Gorilla REPL to allow for an external system to hook into Gorilla REPL.
There are options for overriding fixed strings in the client by sending configuration information (e.g., Title).
The worksheet.html file can live at a different context path and all the services are relative to this context path (e.g., /worksheet.html becomes /foo/bar/worksheet.html)
Resources (e.g., webservices URLs, MathJax, etc.) are accessed securely if the original page was served via HTTPS.
There's a 1 second ping between the client and server (over the websocket) to keep the web socket alive across proxies.
Jony,
After reading your feedback, I realize that what I'm doing is not well aligned with where you are taking Gorilla REPL. I am a firm believer in respecting the project founder because the founder's vision is important in making a project something useful for all.
I think the best bet is the fork option for me. Please take whatever bits and pieces you want from this PR (or ignore it altogether).
I will continue to maintain a fork of Gorilla REPL that has the pieces I need... more of the Gorilla as library rather than Gorilla as app. If there's anything that's app-general (e.g., a better keep alive mechanism for the web socket), I'll ping you and see if you want it.
Thanks for building something great and thanks for your thoughtful comments!
Rock on!
David
PS -- I'll leave it to you to close the PR or continue the conversation... either is cool with me.
Hi David,
thanks for the considered reply. I'm still trying to feel out the scope of Gorilla - my main fear is expanding beyond my or others' ability to continue maintaining the project, as I wouldn't like to see it turn into abandonware.
The plan of working on a fork sounds like a good one, and I'll try and keep an eye on it - and also keep you up to speed as the plan for Gorilla evolves.
I'll leave the PR open and pull out the bits that are going to go in when I have a chance.
Thanks again,
Jony
|
gharchive/pull-request
| 2014-12-23T23:19:17 |
2025-04-01T04:55:13.774707
|
{
"authors": [
"JonyEpsilon",
"dpp"
],
"repo": "JonyEpsilon/gorilla-repl",
"url": "https://github.com/JonyEpsilon/gorilla-repl/pull/174",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1759958660
|
🛑 Joris - Perso is down
In 772bdf2, Joris - Perso (https://joris-parmentier.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Joris - Perso is back up in 569a676.
|
gharchive/issue
| 2023-06-16T05:42:34 |
2025-04-01T04:55:13.796625
|
{
"authors": [
"JorisPV"
],
"repo": "JorisPV/Status",
"url": "https://github.com/JorisPV/Status/issues/87",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
500720455
|
KakaoAccountResult returning null except for userId
First of all thank you @JosephNK for creating this, I find it much better than the other package out there since it's much easier to use 👍
My issue is that final KakaoLoginResult result = await kakaoSignIn.getUserMe(); returns null except for userId.
When I debugged kakaoSignIn.getUserMe() method, the result seems to be there.
login_page.dart
final KakaoLoginResult result = await kakaoSignIn.getUserMe();
print('userName: ${result.account.userNickname}');
flutter: userNickname: null
flutter_kakao_login.dart
// Get UserMe Method
Future<KakaoLoginResult> getUserMe() async {
final Map<dynamic, dynamic> result =
await _channel.invokeMethod('getUserMe');
print('debug: ${result.toString()}');
return _delayedToResult(
new KakaoLoginResult._(result.cast<String, dynamic>()));
}
flutter: debug: {status: loggedIn, userID: xxxxxxxxxx, userNickname: xxxxxxxx}
Not sure why it won't return the value.
Hi @charkala.
Do you get a nickname?
userNickname and userID is null?
@JosephNK Hello.
await kakaoSignIn.getUserMe() returns only userID: xxxxxxxxx, all other fields are null
but when I print from inside the package (see my print() line) it returns both userID and userNickname.
For my Kakao Dev Console, I've enabled profile (nickname/profile photo) and account (email).
I've also set up the SDK from their guide with no problem.
I have tested on both devices, Android (Samsung) and iOS (iPhone), with the same results.
@JosephNK Hello, I have the same problem of not receiving account data. Please let me know how to solve this problem. Have a nice day.
Please answer this, because I am not getting any account details; please let me know the solution
|
gharchive/issue
| 2019-10-01T07:41:10 |
2025-04-01T04:55:13.822403
|
{
"authors": [
"JosephNK",
"KuldeepVagadiya",
"charkala"
],
"repo": "JosephNK/flutter_kakao_login",
"url": "https://github.com/JosephNK/flutter_kakao_login/issues/15",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
177375398
|
Recent changes to the nginx.tmpl in jwilder/nginx-proxy cause certificate validation to fail
Hey,
When I use the suggested nginx.tmpl file (https://github.com/jwilder/nginx-proxy/blob/master/nginx.tmpl), the certificate validation fails.
letsencrypt-companion | Creating Diffie-Hellman group (can take several minutes...)
letsencrypt-companion | Generating DH parameters, 2048 bit long safe prime, generator 2
letsencrypt-companion | This is going to take a long time
letsencrypt-companion | ............................................................................................................................................+.......................................................................................................................................................................................................+....+...........................................................................+...........................................................+.+...........................................+..................................................................................................................................................................+.........................................................................................................................+....+......................................................................+...........................................+........................................................................................+....................................................................................+...................+.........................................................................................................................+................................+..............................................................+........+........................................................................................................................................+....................+.........................................................................+.................+...+........+..................................................................+........+..............................................+.....................................................................................................................................................+.............................................................................................................................................................+......................................................................................................................................................................................................+..............................+...................+............................++*++*
letsencrypt-companion | Sleep for 3600s
letsencrypt-companion | 2016/09/16 08:53:14 Generated '/app/letsencrypt_service_data' from 4 containers
letsencrypt-companion | 2016/09/16 08:53:14 Running '/app/update_certs'
letsencrypt-companion | 2016/09/16 08:53:14 Watching docker events
letsencrypt-companion | Reloading nginx proxy (using separate container nginx-gen)...
letsencrypt-companion | Creating/renewal sub.domain.tld certificates... (sub.domain.tld)
letsencrypt-companion | 2016/09/16 08:53:14 Contents of /app/letsencrypt_service_data did not change. Skipping notification '/app/update_certs'
letsencrypt-companion | 2016-09-16 08:53:14,695:INFO:simp_le:1211: Generating new account key
letsencrypt-companion | 2016-09-16 08:53:15,942:INFO:requests.packages.urllib3.connectionpool:756: Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
letsencrypt-companion | 2016-09-16 08:53:16,150:INFO:requests.packages.urllib3.connectionpool:756: Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
letsencrypt-companion | 2016-09-16 08:53:16,365:INFO:requests.packages.urllib3.connectionpool:756: Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
letsencrypt-companion | 2016-09-16 08:53:17,319:INFO:requests.packages.urllib3.connectionpool:756: Starting new HTTPS connection (1): letsencrypt.org
letsencrypt-companion | 2016-09-16 08:53:17,743:INFO:requests.packages.urllib3.connectionpool:756: Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
letsencrypt-companion | 2016-09-16 08:53:17,981:INFO:requests.packages.urllib3.connectionpool:756: Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
letsencrypt-companion | 2016-09-16 08:53:18,230:INFO:requests.packages.urllib3.connectionpool:207: Starting new HTTP connection (1): sub.domain.tld
letsencrypt-companion | 2016-09-16 08:53:18,235:WARNING:simp_le:1303: sub.domain.tld was not successfully self-verified. CA is likely to fail as well!
letsencrypt-companion | 2016-09-16 08:53:18,246:INFO:requests.packages.urllib3.connectionpool:756: Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
letsencrypt-companion | 2016-09-16 08:53:18,484:INFO:simp_le:1313: Generating new certificate private key
letsencrypt-companion | 2016-09-16 08:53:19,097:INFO:requests.packages.urllib3.connectionpool:756: Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
letsencrypt-companion | 2016-09-16 08:53:19,310:ERROR:simp_le:1271: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Is there a warning log entry about unsuccessful self-verification? Are all your domains accessible from the internet? Failing authorizations: https://acme-v01.api.letsencrypt.org/acme/authz/9P0PeTuGE9OXnehkgR89IZB97m3Ubhf4Vwo-sRpb-GY
letsencrypt-companion | Challenge validation has failed, see error log.
letsencrypt-companion |
letsencrypt-companion | Debugging tips: -v improves output verbosity. Help is available under --help.
letsencrypt-companion | Sleep for 3600s
An older version of the template (e.g. https://gist.github.com/SnowMB/87c5360f9bf81925af26c31f6d71410e) works just fine.
Does anybody have an idea what causes the problem and how to fix it?
If you use the blob link to download the file, you get HTML. Try https://github.com/jwilder/nginx-proxy/raw/master/nginx.tmpl instead?
Yeah sure ;).
A few additional Infos:
I tried using the "recommended" setup with nginx and docker-gen in two separate containers.
I defined no external network since all relevant containers are in the same Compose file.
I use docker-compose version 2.
I made some changes to the template file and found out that these lines return no values. This leaves me with an empty upstream block in the config.
In the older file these lines are not present.
You need to open an issue in the nginx-proxy repository as this is not related to the letsencrypt container.
Perhaps because of a specific Docker version?
I also had an issue with the nginx.tmpl file; not sure if it was the same, but it was fixed by using the file from the docker-gen repo. So for example I used the latest (https://raw.githubusercontent.com/jwilder/docker-gen/master/templates/nginx.tmpl) as I am also using the latest version of the docker image (0.7.3).
Maybe the link in the documentation should be changed to the one on the docker-gen repo as the example works with that docker image and not with the nginx-proxy image.
Hey,
I think the template file in the docker-gen repo is a little outdated. Anyway I found a fix for the problem. See https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion/issues/88.
That didn't work for me!
I had to adjust my nginx.tmpl so it doesn't proxy .well-known/acme-challenge through to the child containers. I did it like this:
server {
server_name {{ $host }};
listen 80 {{ $default_server }};
access_log /var/log/nginx/access.log vhost;
{{ if (exists (printf "/etc/nginx/vhost.d/%s" $host)) }}
include {{ printf "/etc/nginx/vhost.d/%s" $host }};
{{ else if (exists "/etc/nginx/vhost.d/default") }}
include /etc/nginx/vhost.d/default;
{{ end }}
added code here START
location /.well-known/acme-challenge {
root /usr/share/nginx/html/.well-known/;
break;
}
added code here END
location / {
proxy_pass {{ trim $proto }}://{{ trim $host }};
{{ if (exists (printf "/etc/nginx/htpasswd/%s" $host)) }}
auth_basic "Restricted {{ $host }}";
auth_basic_user_file {{ (printf "/etc/nginx/htpasswd/%s" $host) }};
{{ end }}
{{ if (exists (printf "/etc/nginx/vhost.d/%s_location" $host)) }}
include {{ printf "/etc/nginx/vhost.d/%s_location" $host}};
{{ else if (exists "/etc/nginx/vhost.d/default_location") }}
include /etc/nginx/vhost.d/default_location;
{{ end }}
}
}
@domdorn It finally works! But I had to change the line
root /usr/share/nginx/html/.well-known/;
to
root /usr/share/nginx/html/;
@domdorn that solved failing certificate validations for me as well, many thanks.
@domdorn Solution fixed me up as well. Seems to only affect port 80 services as mentioned in #182
Hi, guys!
I still have problems with verifying the certificate... I tried both options:
location /.well-known/acme-challenge - in the nginx.tmpl
location /.well-known/acme-challenge - in the vhost.d/sub.domain.tld
Files generated in the .well-known/acme-challenge can be accessed by me in the browser, but the script still has some problems :(
I include some logs (interesting case...):
# it works... file can be accessed in the browser
nginx-proxy | nginx.1 | sub.domain.tld 192.168.1.1 - - [29/Mar/2017:19:55:33 +0000] "GET /.well-known/acme-challenge/smth HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
# new file is generated by docker-letsencrypt-nginx-proxy-companion
nginx_proxy-lets_encrypt | 2017-03-29 19:57:01,007:DEBUG:simp_le:983: Saving validation (u'pfLLevTbGfHeMYx1yFuDQCCPaV-CjveX6-g_YJtvMPs.d0Sfyt3ze6Zkhpu4NDshs08QRwRaFxYxZGePeEK9irA') at /usr/share/nginx/html/.well-known/acme-challenge/pfLLevTbGfHeMYx1yFuDQCCPaV-CjveX6-g_YJtvMPs
nginx_proxy-lets_encrypt | 2017-03-29 19:57:01,012:DEBUG:acme.challenges:252: Verifying http-01 at http://sub.domain.tld/.well-known/acme-challenge/pfLLevTbGfHeMYx1yFuDQCCPaV-CjveX6-g_YJtvMPs...
nginx_proxy-lets_encrypt | 2017-03-29 19:57:01,014:INFO:requests.packages.urllib3.connectionpool:207: Starting new HTTP connection (1): sub.domain.tld
# new file can be accessed in the browser...
nginx-proxy | nginx.1 | sub.domain.tld 192.168.1.1 - - [29/Mar/2017:19:57:21 +0000] "GET /.well-known/acme-challenge/pfLLevTbGfHeMYx1yFuDQCCPaV-CjveX6-g_YJtvMPs HTTP/1.1" 200 87 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
# docker-letsencrypt-nginx-proxy-companion unfortunately cannot access the file :(
nginx_proxy-lets_encrypt | 2017-03-29 19:59:08,420:ERROR:acme.challenges:256: Unable to reach http://sub.domain.tld/.well-known/acme-challenge/pfLLevTbGfHeMYx1yFuDQCCPaV-CjveX6-g_YJtvMPs: HTTPConnectionPool(host='sub.domain.tld', port=80): Max retries exceeded with url: /.well-known/acme-challenge/pfLLevTbGfHeMYx1yFuDQCCPaV-CjveX6-g_YJtvMPs (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7efe9b0bac50>: Failed to establish a new connection: [Errno 110] Operation timed out',))
nginx_proxy-lets_encrypt | 2017-03-29 19:59:08,422:WARNING:simp_le:1303: sub.domain.tld was not successfully self-verified. CA is likely to fail as well!
# right after previous companion logs, this log appears in the nginx logs: (200 status...)
nginx-proxy | nginx.1 | sub.domain.tld 192.168.1.1 - - [29/Mar/2017:19:59:09 +0000] "GET /.well-known/acme-challenge/pfLLevTbGfHeMYx1yFuDQCCPaV-CjveX6-g_YJtvMPs HTTP/1.1" 200 87 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
I have absolutely no idea what's wrong :/ I've already spent many hours trying to fix this...
Do you have any ideas? Thanks in advance for any clues.
@tomajask > Hi! jwilder/nginx-proxy and jwilder/docker-gen are not supposed to be used together; the latter is meant to be used with the official nginx image: https://github.com/jwilder/nginx-proxy#separate-containers
@Routhinator > I had that issue with non-port-80 services as well, maybe because I wasn't using the "VIRTUAL_PORT" environment variable (I believe I wasn't supposed to, as the container was exposing only one port)? In those cases, @domdorn's modification of nginx.tmpl solved it, but the one suggested by @Nierrrrrrr did not.
We ran into the same problem while setting up the companion on our server. @domdorn's modification fixed it for us, with the .well-known part removed from the root path as pointed out by @Nierrrrrrr - adding "VIRTUAL_PORT" to the environment for all containers, on the other hand, did NOT work for us.
(so we couldn't manage to get this to work without modifying the nginx.tmpl used by docker-gen)
Is there any chance this issue will be looked at in the near future? It seems that with the current docker-gen nginx.tmpl combined with the current version of the letsencrypt companion, people are bound to run into this problem.
I encountered an LE validation failure on renewal due to a timeout on the HTTP link (on a renewal the HTTP port is redirected to the HTTPS port) and discovered this issue; it may or may not be the same cause, so I am sharing since this issue is still open:
It may be the same cause as in issue #178 (and PR #192 resolved the problem for me).
Basically, the LE endpoint wasn't taken into account, and the LE request was forwarded to my webapp.
I confirm that #192 resolves the failed validation issue without modifying the nginx.tmpl file.
The same location will also be appended to /etc/nginx/vhost.d/default, which is why changing the tmpl file is unnecessary.
This was supposedly fixed by #192, reverted, then fixed again by #335
|
gharchive/issue
| 2016-09-16T08:57:00 |
2025-04-01T04:55:13.921967
|
{
"authors": [
"FallenRiteMonk",
"JonasT",
"JrCs",
"Nierrrrrrr",
"Routhinator",
"SnowMB",
"almereyda",
"buchdag",
"domdorn",
"lounagen",
"tomajask"
],
"repo": "JrCs/docker-letsencrypt-nginx-proxy-companion",
"url": "https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion/issues/107",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1060888136
|
Override templates modal with panel [question]
What would be the best way to show templates in a panel view instead of a modal?
At the moment there is none; the UI isn't built to fit in the panels, so the only way would be to build a new UI that fits in the panels, similar to how it's done for the pages.
|
gharchive/issue
| 2021-11-23T06:43:16 |
2025-04-01T04:55:13.942366
|
{
"authors": [
"Dev-dpk",
"Ju99ernaut"
],
"repo": "Ju99ernaut/grapesjs-template-manager",
"url": "https://github.com/Ju99ernaut/grapesjs-template-manager/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2456214617
|
Download videos in 360p
Hello, get_highest_resolution() is not downloading videos in the highest available resolution (e.g., 720p, 1080p, etc.). Instead, the downloaded videos are being saved in 360p, even when higher resolutions are available in MP4 format.
Not a bug. This has been discussed recently. Search the "closed" issues. YouTube changed something on its side.
Start searching here: https://github.com/JuanBindez/pytubefix/issues/145
|
gharchive/issue
| 2024-08-08T16:36:44 |
2025-04-01T04:55:13.952451
|
{
"authors": [
"NannoSilver",
"nathannogueira"
],
"repo": "JuanBindez/pytubefix",
"url": "https://github.com/JuanBindez/pytubefix/issues/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1588303561
|
Wrap trees using DecisionTree.wrap for more convenient display
It's now possible to plot a tree using the TreeRecipe.jl package but the workflow is not very user-friendly, as trees first need to be wrapped.
Currently the raw decision tree is exposed as fitted_params(mach).tree. I propose we pre-wrap this object (with the feature names already embedded) and add an extra field fitted_params(mach).raw_tree for the original unwrapped object.
Then plotting a tree would be as simple as
using TreeRecipe, Plots
tree = fitted_params(mach).tree
plot(tree)
Thoughts anyone?
@roland-KA
Related: #23
That indeed sounds like a more user-friendly way to do it (in contrast to the piggy-back solution we have currently).
But one thing in this new workflow isn't quite clear to me. The wrap-function has been introduced, because the decision tree models (depending on implementation and application) don't have information about feature labels and/or class labels included in the model. So the intention of wrap was to "add" this information.
The workflow looks currently as follows:
create model
wrap additional information (which is not included in the model)
plot
The new workflow would be:
wrap (sort of)
create model
plot
But where (in which context) would wrap occur? Where would the information about class labels and/or feature labels come from?
Where would the information about class labels and/or feature labels come from?
@roland-KA The user provides the MLJ interface with a table (with feature names embedded) and a categorical target (in the classification case). So MLJModelInterface.fit can just peel off this metadata, rather than ignoring it as at present, for recombination (wrapping) with the output of the call to DecisionTree.build_tree. Instead of returning the raw output as the learned parameters, it returns the wrapped output. No?
Ah, of course, they are there just getting ignored 🤦♂️. Well, then we have everything to make the workflow more user-friendly.
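A rough sketch of what that could look like inside the interface's fit (schematic only; variable names are made up, and it assumes DecisionTree.wrap accepts a named tuple with featurenames and classlabels, as in the TreeRecipe workflow):
using DecisionTree, Tables, MLJModelInterface
# Schematic only: peel feature names and class labels off the user's data,
# build the raw tree, and return the wrapped tree alongside the raw one.
function fit_and_wrap(X, y, Xmatrix, y_plain)
    raw_tree  = DecisionTree.build_tree(y_plain, Xmatrix)          # existing call
    featnames = collect(Tables.schema(X).names)                    # from the input table
    labels    = MLJModelInterface.classes(y)                       # from the categorical target
    tree      = DecisionTree.wrap(raw_tree,
                    (featurenames = featnames, classlabels = labels))
    return (tree = tree, raw_tree = raw_tree)                      # both exposed via fitted_params
end
With something like that in place, plot(fitted_params(mach).tree) would need no extra wrapping step from the user.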
|
gharchive/issue
| 2023-02-16T20:13:44 |
2025-04-01T04:55:13.984382
|
{
"authors": [
"ablaom",
"roland-KA"
],
"repo": "JuliaAI/MLJDecisionTreeInterface.jl",
"url": "https://github.com/JuliaAI/MLJDecisionTreeInterface.jl/issues/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1871357722
|
Miscellaneous issues with file formats
1 - GeoJSON.jl writes table columns in arbitrary order;
2 - Shapefile.jl saves Chains and Polygons as Multi;
3 - The default number type in GeoJSON.jl is Float32, set this option with numbertype kwarg;
4 - GeoJSON.jl writes Date columns as string because the JSON format has no date types;
5 - KML file is failing to save with Shapefile.jl;
These backend issues are not issues of GeoIO.jl per se. Closing it.
|
gharchive/issue
| 2023-08-25T20:25:42 |
2025-04-01T04:55:14.076281
|
{
"authors": [
"eliascarv",
"juliohm"
],
"repo": "JuliaEarth/GeoIO.jl",
"url": "https://github.com/JuliaEarth/GeoIO.jl/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
209374022
|
Revert the methods added in #84
This is "type piracy" - extending a Base method on a signature of all Base types. Base's method tables are shared global mutable state, doing things like this can change the behavior of completely unrelated code when this package is imported.
nightly travis failure is not new, same happened at https://travis-ci.org/JuliaMath/IterativeSolvers.jl/jobs/204028681
cc @lopezm94
This may break some things, but what this is really pointing to is that the real strategy for allowing functions is to use LinearMaps like https://github.com/JuliaMath/IterativeSolvers.jl/pull/104
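For reference, the LinearMaps-based route looks roughly like this (a sketch of LinearMaps.jl's function-map constructor, not necessarily the API that #104 ends up with):
using LinearMaps, IterativeSolvers
# Wrap a matrix-free operator in a LinearMap instead of extending Base methods.
n = 100
d = collect(1.0:n)                                   # a simple SPD diagonal operator
A = LinearMap(x -> d .* x, n; issymmetric = true, isposdef = true)
b = rand(n)
x = cg(A, b)    # cg just sees a linear operator; no Base method extensions required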
Codecov Report
Merging #108 into master will decrease coverage by -1.51%.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #108 +/- ##
=========================================
- Coverage 86.41% 84.9% -1.51%
=========================================
Files 18 18
Lines 1391 1391
=========================================
- Hits 1202 1181 -21
- Misses 189 210 +21
Impacted Files     Coverage Δ
src/common.jl      70.96% <ø> (-7.83%)     :x:
src/krylov.jl      68.08% <100%> (+1.41%)  :white_check_mark:
src/cg.jl          100% <100%> (ø)         :white_check_mark:
src/gmres.jl       90.78% <100%> (ø)       :white_check_mark:
src/history.jl     34.25% <0%> (-17.6%)    :x:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d6092e6...541d3af. Read the comment docs.
|
gharchive/pull-request
| 2017-02-22T07:32:28 |
2025-04-01T04:55:14.355095
|
{
"authors": [
"ChrisRackauckas",
"andreasnoack",
"codecov-io",
"tkelman"
],
"repo": "JuliaMath/IterativeSolvers.jl",
"url": "https://github.com/JuliaMath/IterativeSolvers.jl/pull/108",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
50299293
|
Mosek.jl fails JuMP nonlinear test
Mosek reports "nonconvex problem" during first test in
JuMP/test/nonlinear.jl
Trace:
ERROR: Mosek.MosekError(1291,"The optimization problem is nonconvex.")
in optimize at /home/ulfw/.julia/Mosek/src/msk_functions.jl:2498
in optimize! at /home/ulfw/.julia/Mosek/src/MosekSolverInterface.jl:660
in solvenlp at /home/ulfw/.julia/JuMP/src/nlp.jl:485
in solve at /home/ulfw/.julia/JuMP/src/solvers.jl:6
in anonymous at /home/ulfw/.julia/JuMP/test/nonlinear.jl:32
in context at /home/ulfw/.julia/FactCheck/src/FactCheck.jl:282
in anonymous at /home/ulfw/.julia/JuMP/test/nonlinear.jl:14
in facts at /home/ulfw/.julia/FactCheck/src/FactCheck.jl:261
in include at ./boot.jl:242
in include_from_node1 at ./loading.jl:128
in include at ./boot.jl:242
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:300
in _start at ./client.jl:382
in _start_3B_3993 at /usr/local/julia/bin/../lib/julia/sys.so
while loading /home/ulfw/.julia/JuMP/test/nonlinear.jl, in expression starting on line 12
while loading /home/ulfw/.julia/JuMP/test/runtests.jl, in expression starting on line 28
@ulfworsoe, this test still seems to be failing. Any idea why the strange return status?
That looks a bit funny... From the log it looks like the status should be at least NEAR_OPTIMAL. Do you have the return value from Mosek.optimize()?
I added some debugging statements to Mosek.jl:
optimize(m.task) => 0
soldef => 2
prosta => 0
solsta => 0
I cannot rule out that it is a bug in the solver that sets a wrong status. I will try to debug it.
It could also be a bug in the NLP wrapper I wrote.
A very quick test: If I insert
solutionsummary(task_,MSK_STREAM_LOG)
println(getsolsta(task_,MSK_SOL_ITR))
in the Mosek.optimize() function, immediately after the native function has been called, I can see that the interior-point solution is optimal, and that solution status for MSK_SOL_ITR is 1 (optimal). Is it possible that you accidentally modify the task so the status changes after optimize()?
It looks like you ask for the integer solution, MSK_SOL_ITG, rather than the interior, MSK_SOL_ITR.
Right, getsoldef is returning MSK_SOL_ITG:
solutiondef(m.task,MSK_SOL_ITG) => true
solutiondef(m.task,MSK_SOL_BAS) => true
solutiondef(m.task,MSK_SOL_ITR) => true
Yes, I can see that. I am not sure where those solutions come from... Do you at any point input an initial solution or something like that?
Yes, JuMP always gives a starting point:
https://github.com/JuliaOpt/Mosek.jl/blob/master/src/MosekSolverInterface.jl#L719
It looks like this logic needs to be tweaked?
You could check whether the solution is defined and not unknown, perhaps? The problem is that MOSEK will not delete an inputted solution.
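A sketch of that kind of check, built only from the calls already shown in this thread (the preference order and the exact status filter are assumptions, not the wrapper's actual logic):
using Mosek
# Prefer a solution that is both defined and has a known status,
# instead of unconditionally trusting getsoldef.
function pick_solution(task)
    for sol in (MSK_SOL_ITG, MSK_SOL_BAS, MSK_SOL_ITR)
        if solutiondef(task, sol) && getsolsta(task, sol) != MSK_SOL_STA_UNKNOWN
            return sol
        end
    end
    return MSK_SOL_ITR   # fall back to the interior-point solution
end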
|
gharchive/issue
| 2014-11-27T14:55:44 |
2025-04-01T04:55:14.418460
|
{
"authors": [
"mlubin",
"ulfworsoe"
],
"repo": "JuliaOpt/Mosek.jl",
"url": "https://github.com/JuliaOpt/Mosek.jl/issues/24",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2616151484
|
TagBot trigger issue
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.
If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!
Triggering TagBot for merged registry pull request: https://github.com/JuliaRegistries/General/pull/117932
Triggering TagBot for merged registry pull request: https://github.com/JuliaRegistries/General/pull/122160
|
gharchive/issue
| 2024-10-26T21:25:05 |
2025-04-01T04:55:14.427046
|
{
"authors": [
"JuliaTagBot"
],
"repo": "JuliaPlasma/ParticleInCell.jl",
"url": "https://github.com/JuliaPlasma/ParticleInCell.jl/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2449025697
|
Using with Quarto?
Thank you for this package!
When I’m at the Julia REPL producing plots with PlotlyLight, I can download the plots successfully with this package, and all is well.
However, I’m really interested in producing PDF reports using Quarto. Printing the html report from the browser works, but is not ideal as I would like to automate the process from the command line (using quarto render —to pdf), and produce a PDF version of several Markdown reports. Unfortunately, when using Quarto, images downloaded by PlotlyKaleido are nowhere to be found (at least by me), even if I export them to a hard-coded absolute path. Has anyone successfully used this kind of workflow?
Thanks!
If it's helpful, here's an example Quarto report. The issue is that I've no idea where myplot.svg is, if anywhere.
---
title: PlotlyLight and Quarto
engine: julia
---
# Environment setup
```{julia}
#| output: false
using Pkg
Pkg.activate("plotlylight-quarto")
Pkg.add("PlotlyLight")
Pkg.add("PlotlyKaleido")
using PlotlyLight
using PlotlyKaleido
PlotlyKaleido.start()
```
# A plot with PlotlyLight
```{julia}
p = plot(x = 1:200, y = cumsum(randn(200)), type="scatter", mode="lines" )
p.layout.title.text = "Your stocks"
display(p) # this plot appears in the html rendering
```
<!-- Now I would like this plot to appear in the pdf rendering -->
```julia
PlotlyKaleido.savefig(p, "myplot.svg")
```

Render with quarto render report.qmd --execute --to pdf.
Nevermind. Silly user error.
```julia
PlotlyKaleido.savefig(p, "myplot.svg")
```
should be
```{julia}
PlotlyKaleido.savefig(p, "myplot.svg")
```
|
gharchive/issue
| 2024-08-05T16:56:22 |
2025-04-01T04:55:14.430480
|
{
"authors": [
"dpo"
],
"repo": "JuliaPlots/PlotlyKaleido.jl",
"url": "https://github.com/JuliaPlots/PlotlyKaleido.jl/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1505826491
|
Don't bump compat for equality specifiers
If you have e.g. Foo = "=1.2.3", CompatHelper will suggest Foo = "=1.2.3, 1" when e.g. 1.3 comes along. I don't think it should, since if you are using an equality specifier, you probably have a good reason not to allow newer versions.
Maybe tangentially related to #266?
I think this is a duplicate of #250, which was fixed in #251, so this should be closed.
Oh, you need to set an argument to disable this. I never checked the main() docstring.
Then I would suggest making bump_compat_containing_equality_specifier=false the default, rather than true, although I know that would be a breaking change.
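For reference, this is roughly how a CompatHelper workflow would opt out today; the keyword name is the one discussed in this issue, so check the `main()` docstring for the authoritative spelling and default:

```julia
import CompatHelper

# Keep equality-pinned entries (e.g. Foo = "=1.2.3") untouched when opening compat PRs.
CompatHelper.main(; bump_compat_containing_equality_specifier=false)
```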
|
gharchive/issue
| 2022-12-21T07:21:39 |
2025-04-01T04:55:14.459011
|
{
"authors": [
"Octogonapus",
"mortenpi"
],
"repo": "JuliaRegistries/CompatHelper.jl",
"url": "https://github.com/JuliaRegistries/CompatHelper.jl/issues/444",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
653141140
|
New package: Replace v0.1.0
Registering package: Replace
Repository: https://github.com/jwscook/Replace.jl
Created by: @jwscook
Version: v0.1.0
Commit: 14941c612b527cb68c4e1bd8be7ff2822bf20049
Git reference: master
This package might have a fundamental flaw so it'd be best to cancel this PR.
|
gharchive/pull-request
| 2020-07-08T09:30:15 |
2025-04-01T04:55:14.461156
|
{
"authors": [
"JuliaRegistrator",
"jwscook"
],
"repo": "JuliaRegistries/General",
"url": "https://github.com/JuliaRegistries/General/pull/17630",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
678664953
|
New package: ComplexMixtures v0.3.0
Registering package: ComplexMixtures
Repository: https://github.com/m3g/ComplexMixtures
Created by: @leandromartinez98
Version: v0.3.0
Commit: 1402df3408ea42f240b289d44330d36b4d64c31d
Reviewed by: @leandromartinez98
Reference: https://github.com/m3g/ComplexMixtures/commit/1402df3408ea42f240b289d44330d36b4d64c31d#commitcomment-41448978
https://github.com/m3g/ComplexMixtures/commit/1402df3408ea42f240b289d44330d36b4d64c31d#commitcomment-41449913
|
gharchive/pull-request
| 2020-08-13T18:55:47 |
2025-04-01T04:55:14.464264
|
{
"authors": [
"JuliaRegistrator",
"fredrikekre"
],
"repo": "JuliaRegistries/General",
"url": "https://github.com/JuliaRegistries/General/pull/19456",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
704580841
|
New package: SharedMATLAB v0.1.0
Registering package: SharedMATLAB
Repository: https://github.com/jonniedie/SharedMATLABEngine.jl
Created by: @jonniedie
Version: v0.1.0
Commit: d243cbb49d24a3fd645eeb7f4e5ead86492f9db8
Reviewed by: @jonniedie
Reference: https://github.com/jonniedie/SharedMATLABEngine.jl/commit/d243cbb49d24a3fd645eeb7f4e5ead86492f9db8#commitcomment-42441089
@jonniedie ^
This shouldn't exist. I don't understand how this even got made. I changed the name of the repository before I registered it.
Is CompatHelper doing something here?
|
gharchive/pull-request
| 2020-09-18T18:42:32 |
2025-04-01T04:55:14.467451
|
{
"authors": [
"JuliaRegistrator",
"fredrikekre",
"jonniedie"
],
"repo": "JuliaRegistries/General",
"url": "https://github.com/JuliaRegistries/General/pull/21592",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
998833815
|
New package: VecDiff v0.1.0
Registering package: VecDiff
Repository: https://github.com/chengzhengqian/VecDiff.jl
Created by: @chengzhengqian
Version: v0.1.0
Commit: 6c1e3f3d8fd4ccf7aaab463965df63963c2f5361
Git reference: HEAD
Release notes:
the automatic forward differentiation for vector-based function
https://github.com/chengzhengqian/VecDiff.jl/blob/6c1e3f3d8fd4ccf7aaab463965df63963c2f5361/src/VecDiff.jl#L716
Hmm. Is vector subtraction not being handled in Zygote/Diffractor, such that we need a package for this?
Thanks for the question.
The package provides (up to second order right now) a forward mode for
functions R^N -> R^M.
Meaning, for example, for y = f(x1, x2, x3, x4), where y, x1, x2, x3 are all vectors
with different sizes, the library can yield
\partial y / \partial x1 as the first order (a matrix),
\partial^2 y / (\partial x1 \partial x2) as the second order (a rank-3 tensor), and
others.
In each step, it uses Zygote (backward mode) for general operations and uses
hard-coded rules for simple operations.
In this sense, it is mixed-mode auto differentiation (i.e., globally
forward, with backward mode applied to each function operation).
The reason I wrote this library is quite specific. The
functionality, of course, can be implemented in Zygote easily (that is what
I did first). But the problem for me is that the function I try to
differentiate contains a very long list of operations (from Feynman
diagrams, if you want to know; 10,000 and up), so it is simply impossible to perform
Zygote's source-to-source differentiation (part of the reason may be LLVM's
heavy optimization too), as the scaling of Zygote (including LLVM) in
terms of source code length is far from O(N) (maybe O(N^3)?).
Luckily, these operations fall into simple categories, so I use Zygote for
each step and use forward mode to combine them.
To get a feeling for what I mean, try the following in Zygote (just to show
the idea):
function f(x1, x2, x3, x4)
    t1 = x1 + x2
    t2 = t1 + x1
    t3 = t1 + t2 * x1
    # .... (100000 similar simple expressions)
    t100000 = t9999 + t1001 + t1231 + ..
    return t100000
end
Here the xi and ti are small vectors, for example.
I am pretty sure this will run out of memory if Zygote actually performs
the differentiation.
(For the simple case, Zygote indeed has a performance lead over this
approach, but the first-time run is much longer for Zygote than for this
approach.)
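To illustrate the mixed-mode idea in miniature (a conceptual sketch only, not the VecDiff.jl API; the step functions are made up): each elementary step is differentiated with Zygote, and the per-step Jacobians are chained forward.

```julia
using Zygote

# Two elementary vector-to-vector steps standing in for t1, t2, ... above.
step1(x) = x .+ sin.(x)
step2(x) = 2 .* x .- x .^ 2

x0 = randn(3)
y1 = step1(x0)

# Local (per-step) Jacobians, each computed with Zygote's reverse mode.
J1 = Zygote.jacobian(step1, x0)[1]
J2 = Zygote.jacobian(step2, y1)[1]

# Global forward accumulation: the chain rule gives the Jacobian of the composition,
# so the cost stays roughly linear in the number of steps instead of requiring one
# enormous source-to-source reverse pass through all of them.
J_total = J2 * J1
```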
> The reason I wrote this library is quite specific. The functionality, of course, can be implemented in Zygote easily (that is what I did first). But the problem for me is that the function I try to differentiate contains a very long list of operations (from Feynman diagrams, if you want to know; 10,000 and up)

If this is domain-specific (phenomenology? condensed matter?), maybe you can make the package name more suggestive of the domain. Vector differentiation sounds like a very general operation that should be supported in many AD frameworks.
[noblock]
The problem I try to solve comes from condensed matter, but the nature of
the problem is generic: we want to differentiate a long generic
function consisting of basic vector operations (not limited to
trivial ones; for example, one can interpret a vector as a matrix and
perform a matrix inverse, which yields a highly non-trivial vector-to-vector
function).
The code (leveraging Zygote) can handle a generic vector-to-vector
operation (one can flatten matrices and tensors to vectors, so the vector is
generic in this sense).
The main merit is that we use a mixed mode, i.e., we perform forward mode
globally to ensure the cost is O(N), and locally use backward mode so that
arbitrary operations can be differentiated (support for other AD libraries
could be added too), thus allowing very long code to be differentiated at
reasonable speed.
I feel it is hard to capture this point in the package name. Maybe
MixedModeDiff? (But I still like VecDiff, as it indeed tries to
differentiate a general vector-to-vector map, in contrast to the standard in ML
of vector-to-scalar.)
|
gharchive/pull-request
| 2021-09-17T01:36:14 |
2025-04-01T04:55:14.486603
|
{
"authors": [
"JuliaRegistrator",
"Moelf",
"chengzhengqian"
],
"repo": "JuliaRegistries/General",
"url": "https://github.com/JuliaRegistries/General/pull/45049",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
439977446
|
rename repo to TagBot?
Since the bot itself seems to be called TagBot, should we rename this repo to match?
Sure, that works for me. Feel free to change it.
|
gharchive/issue
| 2019-05-03T10:07:18 |
2025-04-01T04:55:14.487874
|
{
"authors": [
"StefanKarpinski",
"christopher-dG"
],
"repo": "JuliaRegistries/tag-bot",
"url": "https://github.com/JuliaRegistries/tag-bot/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1049004015
|
model output
The model outputs x_20 and y_20
What are these exactly?
same for x_200 and y_200
Also x_200 / y_200 has a size of 200 per image. can i assume, the first 10 values are for the first class, the next 10 values for the second class and so on?
Also i set the value for k_cluster in script.sh to 5, but only label_200 came out as size 100 for each image. x_200 and y_200 were still size 200 for each image. Why?
y_20 is the output (x_20 is before sigmoid) of 20-category classification, and y_200 is the output of the fine-grained classification, where each parent class contains 10 sub-categories. And yes, the first 10 are for the 1st parent class, and so on.
This line should give you 100 outputs if you set the cluster number as 5, where the model is initiated at this line in train_cls.py.
I want to modify the images used for training: I want to add my own images. How do I go about doing that?
Here is what I did:
I added the raw images to the JPEGImages folder
I added the segmented versions to the SegmentationClassAug folder
I added the image paths to train_aug.txt in voc12
I added their labels to train_label.npy
I added their image name and class label to cls_labels.npy in the same format as the other entries
did not need to add them to 20_class_labels.npy, as they were already there
I added about 1500 images; extract features extracts all the 12000+ images, but when I run create pseudo label, that line prints (18000, 200) (18000, 20) instead of (12000, 200) (12000, 20)
What am I doing wrong, or am I missing any steps?
|
gharchive/issue
| 2021-11-09T19:28:45 |
2025-04-01T04:55:14.504620
|
{
"authors": [
"SharhadBashar",
"wasidennis"
],
"repo": "Juliachang/SC-CAM",
"url": "https://github.com/Juliachang/SC-CAM/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
248644198
|
machine: attach/detach external network
What this PR resolves:
#79: machine: attach/detach external network + tests/docs
Requires Jumpscale/lib9#45
|
gharchive/pull-request
| 2017-08-08T08:51:40 |
2025-04-01T04:55:14.526942
|
{
"authors": [
"jvwilder"
],
"repo": "Jumpscale/ays9",
"url": "https://github.com/Jumpscale/ays9/pull/157",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
459788315
|
[FR] Use more of the workspace, eg for DataFrames
It would be nice if more of the workspace could be used, eg for DataFrames. For example, I think there's enough room to show a couple more columns:
Juno could perhaps set the terminal width to the screen width?
The workspace's width is very dynamic, so I don't think that's a great solution. For now my recommendation would be to use TableView.jl instead -- but I suppose we could fairly easily integrate that into Juno itself. Maybe as a button in the workspace that opens the object with TableView?
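For anyone landing here, a minimal TableView.jl sketch of that suggestion (`showtable` is its public entry point; the matrix constructor form shown assumes a recent DataFrames release):

```julia
using DataFrames, TableView

# A wide table that the workspace pane would otherwise truncate.
df = DataFrame(rand(10, 30), :auto)   # :auto generates x1..x30 column names

# Opens a scrollable, resizable table view instead of the clipped workspace display.
showtable(df)
```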
|
gharchive/issue
| 2019-06-24T09:33:21 |
2025-04-01T04:55:14.560890
|
{
"authors": [
"asinghvi17",
"cormullion",
"pfitzseb"
],
"repo": "JunoLab/Juno.jl",
"url": "https://github.com/JunoLab/Juno.jl/issues/317",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
399558244
|
AutoPlaylistRandom doesn't work
Please tick all applicable boxes.
[x] I am using Python 3.5 or higher (run python --version on the command line)
[x] I have followed the official guides to install the bot for my system
[x] I have updated my dependencies to the latest version using the appropriate update script
Which version are you using?
[x] The latest master version (1.9.8)
[ ] The latest review version
What type of issue are you creating?
[x] Bug
[ ] Feature request
[ ] Question
Description of issue
The AutoPlaylistRandom function doesn't work. I have activated AutoPlaylist and the bot auto-joins the channel, but it plays the music sequentially.
Steps to reproduce
Log file
Please attach your MusicBot log file (located at logs/musicbot.log) to this issue. You can do so by dragging and dropping the file here. If you do not include your log file, you WILL be asked to provide one.
musicbot.log
For it to be random, please make sure AutoPlaylistRandom is set to no and you've cleared out your queue. Closing as unable to reproduce.
|
gharchive/issue
| 2019-01-15T22:11:30 |
2025-04-01T04:55:14.598668
|
{
"authors": [
"Aur3m",
"AutumnClove"
],
"repo": "Just-Some-Bots/MusicBot",
"url": "https://github.com/Just-Some-Bots/MusicBot/issues/1820",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
227082565
|
Create _goodbye.siml
May or may not conflict with !ai goodbye. I don't know what I'm doing.
@Thielak Am I doing it right? Updated my file.
Aye aye
|
gharchive/pull-request
| 2017-05-08T15:24:51 |
2025-04-01T04:55:14.606384
|
{
"authors": [
"Thielak",
"jbondguy007"
],
"repo": "JustArchi/ArchiBoT",
"url": "https://github.com/JustArchi/ArchiBoT/pull/97",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2003881807
|
Merge entry from same genomes (but different version)
When a genome has several versions (for instance GCF_003555505.1 & GCA_003555505.2), should we merge the entries in the results of the analysis?
Don't know.
It could be an option for pcalf-datasets to only download the latest version of an assembly, but you might miss some ccyA+ genomes that way.
What could be done on the pcalf-annotation side is the addition of a dereplication step (dRep for example, or InStrain) before CheckM / GTDB-Tk, reducing calculation time for those steps (??? dRep itself can take a while, so ....).
Whatever, I agree with you: merging several versions of an assembly could reduce the complexity of the final dataset.
Good points.
Do you know whether a new version of a genome means the older one is obsolete (NCBI seems to hint at that), or not? Because if that is the case, we can focus only on the latest version.
I will try to implement dRep in the tool to get a more user-friendly output.
|
gharchive/issue
| 2023-11-21T09:53:21 |
2025-04-01T04:55:14.660263
|
{
"authors": [
"GGasch",
"K2SOHIGH"
],
"repo": "K2SOHIGH/pcalf",
"url": "https://github.com/K2SOHIGH/pcalf/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
380102889
|
"Error: Undefined offset: 0" while installing
The current version 4f972ce9fd seems not working.
$ curl -LSs https://keinos.github.io/Phar_Box3_installer/installer.php | php
================================================================================
Box Installer
================================================================================
===================
Environment Check
===================
"-" indicates success.
"*" indicates error.
- You have a supported min version of PHP (>= 5.3.3).
- You have the "phar" extension installed.
- You have a supported version of the "phar" extension.
- You have the "openssl" extension installed.
* Notice: The "phar.readonly" setting needs to be off to create Phars.
- The "detect_unicode" setting is off.
- The "allow_url_fopen" setting is on.
You need to fix above error in order to use BOX3.
- Path to your php.ini: /usr/local/etc/php/7.2/php.ini
Continue download BOX3(box.phar) anyway? (y/n):Continuing ...
==============
Downlaod Box
==============
- Fetching releases ... OK
- Reading releases:
Release: 3.3.0
Release: 3.2.0
Release: 3.1.3
Release: 3.1.2
Release: 3.1.1
Release: 3.1.0
Release: 3.0.0
Release: 3.0.0-RC.0
Release: 3.0.0-beta.4
Release: 3.0.0-beta.3
Release: 3.0.0-beta.2
Release: 3.0.0-beta.1
Release: 3.0.0-beta.0
Release: 3.0.0-alpha.7
Release: 3.0.0-alpha.6
Release: 3.0.0-alpha.5
Release: 3.0.0-alpha.4
Release: 3.0.0-alpha.3
Release: 3.0.0-alpha.2
Release: 3.0.0-alpha.1
Release: 3.0.0-alpha.0
Latest release -> 3.3.0
Error: Undefined offset: 0
BackTrace:
Line:264
$ php -v | head -1
PHP 7.2.6 (cli) (built: May 25 2018 06:18:43) ( NTS )
$
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.13.6
BuildVersion: 17G3025
Line 264.
https://github.com/KEINOS/Phar_Box3_installer/blob/4f972ce9fdc6ad730f4f477ae6fd3d3c9b367755/installer.php#L260-L270
This happens if the assets are empty in the release info.
Which means that if no official compiled box.phar is provided, this error occurs. And it seems that the upstream project forgot to attach "box.phar" to their releases page.
I asked them to provide it in their issue #326; meanwhile, we had better implement a step that skips the download, or throws a clear error, when $latest->assets[0] is not set.
|
gharchive/issue
| 2018-11-13T08:03:49 |
2025-04-01T04:55:14.691453
|
{
"authors": [
"KEINOS"
],
"repo": "KEINOS/Phar_Box3_installer",
"url": "https://github.com/KEINOS/Phar_Box3_installer/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
702604749
|
Deleted tasks can still be re-run and continue executing by clicking "rerun"
Describe the bug
As the title says.
To Reproduce
Create a new task; the task runs normally.
Delete the task; the related instances stop running.
Click "rerun"; the deleted task runs again.
Expected behavior
When a user tries to run a deleted task, the system should warn them that "this task has been deleted" and refuse to start it.
Environment
PowerJob Version: 3.2.3
Java Version: OpenJDK 8
OS: CentOS 7
This is by design; it has no negative impact, so ignoring.
|
gharchive/issue
| 2020-09-16T09:25:40 |
2025-04-01T04:55:14.694594
|
{
"authors": [
"ALittleBrother",
"KFCFans"
],
"repo": "KFCFans/PowerJob",
"url": "https://github.com/KFCFans/PowerJob/issues/69",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
79521091
|
Add Chenowth MPPV from Kerbal Stuff
This pull request was automatically generated by Kerbal Stuff on behalf of LuizChacon, to add Chenowth MPPV to CKAN.
Please direct questions about this pull request to LuizChacon.
License included in file does not match license on KS.
KS license is the more restricted one and as such should be used.
|
gharchive/pull-request
| 2015-05-22T16:38:15 |
2025-04-01T04:55:14.739814
|
{
"authors": [
"Dazpoet",
"KerbalStuffBot"
],
"repo": "KSP-CKAN/NetKAN",
"url": "https://github.com/KSP-CKAN/NetKAN/pull/1383",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
963512584
|
Downscroll arrow tail ends are layered below arrow tail segments; should be the opposite
DESCRIPTION
Windows 10 64bit, KE 1.6 (if this was fixed in a newer version, you can close this, but I don't find this being mentioned in the changelog)
How downscroll note assets are layered causes tails to look like this, which is a bit ugly
It should instead look something like this (excuse the flipped image)
Probably should have tail ends layered above tail segments.
Here, arrow tails seem opaque. In most other versions in FnF, they are slightly transparent. To prevent overlaps, arrow tail ends may have to be moved up instead.
|
gharchive/issue
| 2021-08-08T20:52:06 |
2025-04-01T04:55:14.764441
|
{
"authors": [
"i-winxd"
],
"repo": "KadeDev/Kade-Engine",
"url": "https://github.com/KadeDev/Kade-Engine/issues/1665",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
917806136
|
(DELETED)
(ONE OF THE CREATORS DELETED THIS SO I WILL REMOVE EVERYTHING)
yeah, im a asshole, what can I say.
|
gharchive/issue
| 2021-06-10T19:42:03 |
2025-04-01T04:55:14.765418
|
{
"authors": [
"GordoiteYT"
],
"repo": "KadeDev/Kade-Engine",
"url": "https://github.com/KadeDev/Kade-Engine/issues/812",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
994057319
|
Add up/down hotkeys to Charter
I was going through old issues for improvements to make and found this idea: #209. Most of this is implemented (selecting notes, deleting notes) but moving selected notes was not.
This PR performs the following changes:
Adds up/down hotkeys to move selected notes up/down one step in the charter.
Adds shift-up/down hotkeys to move selected notes up/down one beat in the charter.
Renamed the Shift option in the tabs to better indicate it affects all notes.
Added additional controls to the Help text.
Fix a crash bug: When placing a note, removing a note, then zooming in or out with CTRL-MWheel, a Null Object Reference occurs.
Nice job
|
gharchive/pull-request
| 2021-09-12T05:31:54 |
2025-04-01T04:55:14.767719
|
{
"authors": [
"KadeDev",
"MasterEric"
],
"repo": "KadeDev/Kade-Engine",
"url": "https://github.com/KadeDev/Kade-Engine/pull/2237",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2159299504
|
Storybook - add rules to limit the storybook action triggering
Please describe the task that needs to be done
Add rules to restrict the storybook triggering - e.g. limit the directories where the code changes happened - similar to this config
@tplevko , what do you think about skipping the storybook action when the PR is not ready? This way we can push all the changes as many times as we want for review purposes, and once we're good, we can enable the PR for an entire test suite
Also, using [skip ci] seems to avoid executing the workflow, so maybe that could be an alternative from the contributor side to avoid extra runs
feat(CanvasForm): Add support for ErrorHandler [skip ci]
|
gharchive/issue
| 2024-02-28T16:01:21 |
2025-04-01T04:55:14.818678
|
{
"authors": [
"lordrip",
"tplevko"
],
"repo": "KaotoIO/kaoto-next",
"url": "https://github.com/KaotoIO/kaoto-next/issues/883",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
807706595
|
🛑 IServ (for comparison) is down
In 76f9acb, IServ (for comparison) ($SECRET_SITE2) was down:
HTTP code: 0
Response time: 0 ms
Resolved: IServ (for comparison) is back up in a878453.
|
gharchive/issue
| 2021-02-13T08:34:43 |
2025-04-01T04:55:14.834148
|
{
"authors": [
"KaratekHD"
],
"repo": "KaratekHD/status",
"url": "https://github.com/KaratekHD/status/issues/138",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
213233664
|
Crash running under anaconda
$ python predictor.py
Using TensorFlow backend.
Traceback (most recent call last):
File "predictor.py", line 21, in
model.add(MaxPooling2D(pool_size=(2, 2)))
File "/Users/pato/anaconda/lib/python2.7/site-packages/keras/models.py", line 332, in add
output_tensor = layer(self.outputs[0])
File "/Users/pato/anaconda/lib/python2.7/site-packages/keras/engine/topology.py", line 572, in call
self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
File "/Users/pato/anaconda/lib/python2.7/site-packages/keras/engine/topology.py", line 635, in add_inbound_node
Node.create_node(self, inbound_layers, node_indices, tensor_indices)
File "/Users/pato/anaconda/lib/python2.7/site-packages/keras/engine/topology.py", line 166, in create_node
output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
File "/Users/pato/anaconda/lib/python2.7/site-packages/keras/layers/pooling.py", line 160, in call
dim_ordering=self.dim_ordering)
File "/Users/pato/anaconda/lib/python2.7/site-packages/keras/layers/pooling.py", line 210, in _pooling_function
pool_mode='max')
File "/Users/pato/anaconda/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2866, in pool2d
x = tf.nn.max_pool(x, pool_size, strides, padding=padding)
File "/Users/pato/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 1793, in max_pool
name=name)
File "/Users/pato/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 1598, in _max_pool
data_format=data_format, name=name)
File "/Users/pato/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
op_def=op_def)
File "/Users/pato/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2397, in create_op
set_shapes_for_outputs(ret)
File "/Users/pato/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1757, in set_shapes_for_outputs
shapes = shape_func(op)
File "/Users/pato/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1707, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "/Users/pato/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn
debug_python_shape_fn, require_shape_fn)
File "/Users/pato/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 675, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 2 from 1 for 'MaxPool' (op: 'MaxPool') with input shapes: [?,1,254,32].
I never tried it with Anaconda, but just keep in mind that you should edit predictor.py with the correct paths to the model weights, the training data, and the data you are trying to predict. Also, the picture dimensions should match the model weights if you are using precompiled weights.
I have pointed it to the data supplied in your trunk. Can you provide a full step-by-step usage example so that I can test it? It is not clear what the system actually does, nor what it predicts and how.
I have made some updates to the code and documentation.
But basically, if you want to test it with your own data, you should first get about 50 spectra images of your car engine sound (with Wavelet Sound Explorer). Then train it. And then you can predict. The idea of this project is not to detect a car's brand but to detect a concrete (specific) car. Also, this code cannot predict correctly if more than one car engine is heard in the audio stream.
How I generate the spectra analysis pictures:
record the car engine sound for about one minute (convert it to wav)
then with Audacity divide the whole stream into 400ms chunks
each 400ms chunk of audio is passed to Wavelet Sound Explorer (it cannot hold a longer recording :( )
the generated images are cut out with the Snipping Tool and placed into their representative class
then run training with image_classificator_more_layers.py
then run predictor.py, which uses the weights generated in the training step, and pass it a picture that was not used in network training
Is this the kind of example you expected? :)
Let's review this
Split a wav into 400ms chunks.
Batch process each chunk through a power spectrum analysis. You mention "wavelet sound explorer" which is the spectrum visualization for Audacity, but what about others? And what specific configurations for that Audacity plugin do you use??
I don't understand what "representative class" means or what snipping tool you're talking about. My guess is that you crop and resize the images using certain criteria, and those criteria are not clear to me.
Run the training image classificator how exactly?
Run the predictor how exactly? What is the expected output? Just a log with weights?
Wavelet Sound Explorer - there is a link in the readme.md file.
Audacity I use only for getting the 400ms chunks, and that's it.
The Snipping Tool is the default Windows tool. When I get the power spectrum analysis picture, I just capture the screen view with that tool. There are no resizing criteria.
Also, I updated the readme.md (it describes how to run training, the expected output, and how to run the predictor) and some sources. Also added comments on important rows. Please review it.
|
gharchive/issue
| 2017-03-10T03:29:27 |
2025-04-01T04:55:14.851475
|
{
"authors": [
"KaroDievas",
"datascienceteam01"
],
"repo": "KaroDievas/car-sound-classification-with-keras",
"url": "https://github.com/KaroDievas/car-sound-classification-with-keras/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
627460637
|
Display protein modifications for genes identified via UniProt ids
Example:
http://localhost:3000/gene/P37173 should display 18 modifications
Complete once KarrLab/datanator_rest_api#106 is addressed.
Done
|
gharchive/issue
| 2020-05-29T18:33:30 |
2025-04-01T04:55:14.853414
|
{
"authors": [
"jonrkarr"
],
"repo": "KarrLab/datanator_frontend",
"url": "https://github.com/KarrLab/datanator_frontend/issues/261",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
551542896
|
Improve responsiveness from intermediate page to category-specific page
Sample link (protein)
Sample link (reaction)
The issue is likely caused by a combination of roundabout API calls and the performance of the underlying Python queries.
I'm marking this as "future", because I don't think this is a big problem. For an innovative academic tool, I think the responsiveness is pretty good.
The bottleneck is the REST calls. In my experience, the slowest call is the REST call which gets the UniProt ids for the proteins in the ortholog group. This is ~8s.
Many of the REST calls are retrieving more data than necessary for the pages. One potential way to reduce the latency is to implement more limited queries that only return the needed information. With the refactored code, it should be easier to understand what data actually needs to be returned.
The bottleneck is the REST calls. In my experience, the slowest call is the REST call which gets the UniProt ids for the proteins in the ortholog group. This is ~8s.
I think this would be a good starting point to try modularizing data retrieval and at the same time, avoiding problems with async from react.
Since UniProt is released every 4 weeks and is a known, fairly stable format, why not just download the file and parse it into a form that can be queried locally? I know you are using REST, but federated search might be of value here.
In a proof of concept with UniProt I downloaded the file and parsed using Prolog DCGs. Granted the data as Prolog facts are rather large, but using Quick Load Files they load relatively fast.
See: Quick Load Files
While on the topic of federated search, Neo4j recently added that ability. See: Extending the Graph Database with Data Federation
Not suggesting you change anything just dropping some ideas.
The issue refers to two queries to our API which we should refactor. The data is downloaded and processed locally, in part for performance and in part because many of the data sources don't support federation.
Thanks for the added info.
I'm closing this because it seems to be working pretty well.
|
gharchive/issue
| 2020-01-17T17:32:08 |
2025-04-01T04:55:14.859066
|
{
"authors": [
"EricGT",
"jonrkarr",
"lzy7071"
],
"repo": "KarrLab/datanator_frontend",
"url": "https://github.com/KarrLab/datanator_frontend/issues/64",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
564730704
|
Task :app:downloadScreenshots FAILED
I've created a simple project and Shot works perfectly when there isn't any flavor but when I add flavors I would get this error:
> Task :app:downloadScreenshots FAILED
⬇️ Pulling screenshots from your connected devices!
FAILURE: Build failed with an exception.
What went wrong:
Execution failed for task ':app:downloadScreenshots'.
Nonzero exit value: 1
I've also added this to the gradle file:
shot { appId = 'com.example.snapshottesting' instrumentationTestTask = 'connectedDevDebugAndroidTest' }
I have found the answer!
I've added applicationIdSuffix ".dev" to the Dev flavor, so the application id in the Dev is not the same as production.
So, I had to change the appId to `appId = 'com.example.snapshottesting.dev'`.
I suggest adding the capability for Shot to work with different environments.
Hey guys!
I met the same trouble after upgrading to 4.0.0, because the "appId" configuration is gone now.
Is there any way to deal with this?
@scm573 I upgraded to 4.0.0 too, you don't need to use the appId anymore. It works for me.
@mandomi
I tried many different patterns and probably found the root cause.
Where we put "applicationIdSuffix" is not in the productFlavors block, but in the buildTypes block.
And due to this process
https://github.com/Karumi/Shot/blob/master/shot/src/main/scala/com/karumi/shot/ShotPlugin.scala#L68
the applicationIdSuffix declared in the buildTypes block is not being taken into consideration.
Also, that's why it works fine if I remove that applicationIdSuffix declaration, or downgrade to v3.1.0 and use the appId declaration explicitly.
Thanks for the info @scm573, I'm going to see if I can fix it.
The fix will be available in the next release I'm about to publish
|
gharchive/issue
| 2020-02-13T14:44:38 |
2025-04-01T04:55:14.865512
|
{
"authors": [
"mandomi",
"pedrovgs",
"scm573"
],
"repo": "Karumi/Shot",
"url": "https://github.com/Karumi/Shot/issues/79",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2499101038
|
survivability improvements
Tested in run. Previously the character didn't have sweatpants equipped, so it couldn't use Sweat Flood. Now it has sweatpants equipped and casts Sweat Flood prior to launching spikes every time.
fixes #15
Also only convert grey goose matter if can survive at least 2 hits
before update
after update
> Equipped: none, June cleaver, unwrapped knock-off retro superhero cape, Jurassic Parka, designer sweatpants, lucky gold ring, birch battery, Space Trip safety headphones, grey down vest
= Combat Rate: -25
CCS: [default]\n "if !monsterid 1517 && !monsterid 1514 && !monsterid 1185;if hasskill Summon Love Gnats;skill Summon Love Gnats;endif;if hasskill Sweat Flood;skill Sweat Flood;endif;skill Launch spikolodon spikes;endif;skill Bowl a Curveball;"
Preference nextAdventure changed from The Dark Neck of the Woods to The Spooky Forest
Preference lastAdventure changed from The Dark Neck of the Woods to The Spooky Forest
[40] The Spooky Forest
Preference lastEncounter changed from Adjust your Parka to wolfman
Encounter: wolfman
Round 0: alium wins initiative!
Round 1: alium executes a macro!
Round 1: alium casts SWEAT FLOOD!
Preference sweat changed from 19 to 14
Round 2: alium casts LAUNCH SPIKOLODON SPIKES!
Round 3: wolfman takes 45 damage.
Round 3: alium wins the fight!
Thanks for the changes. Two things:
1. This change means that people without sweatpants won't use the parka at all. Could you set the equip on the spike task to have two outfit options, with just the parka as a fallback. So something like:
```
  { equip: $items`Jurassic Parka, designer sweatpants`, modes: { parka: "spikolodon" } },
  { equip: $items`Jurassic Parka`, modes: { parka: "spikolodon" } }
].
```
2. `expectedDamage` is designed to be used as part of a combat consult script. Here when setting up the macro it won't do the right thing. See https://wiki.kolmafia.us/index.php?title=Expected_damage.
Interesting. Didn't know either of those things. Fixed item 1. Took a look at the list of macro commands and don't see an equivalent to expected damage, so I will remove that.
BTW, thanks for the detailed review comments, appreciate them.
Thanks! :)
|
gharchive/pull-request
| 2024-09-01T00:11:43 |
2025-04-01T04:55:14.871191
|
{
"authors": [
"Alium58",
"Kasekopf"
],
"repo": "Kasekopf/looprobot",
"url": "https://github.com/Kasekopf/looprobot/pull/16",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1047710531
|
{ status: 403, message: 'You cannot create a playlist for another user.' }
I'm getting a 403 status code response. I complete the auth flow and everything but keep getting this response.
Hey, thanks for reporting the problem. Can you explain in more detail where exactly you're getting this error code and the steps required to reproduce it?
Thank you, your latest commit fixed the issue.
|
gharchive/issue
| 2021-11-08T17:21:24 |
2025-04-01T04:55:14.878375
|
{
"authors": [
"0xAbel",
"Kauefranca"
],
"repo": "Kauefranca/sync-playlists",
"url": "https://github.com/Kauefranca/sync-playlists/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
930214216
|
[Navigator Compose] Design of singleTop is different from traditional navigation system
For a very long time, we knew how a single top instance worked in the Fragment's navigation world. Since transactions in the FragmentManager are reversible the only way to correctly achieve this functionality of singleTop is to pop the backstack up to the point where we find that respective instance in the FragmentTransaction. This is how navigator & Jetpack Navigation handles single top functionality.
In Compose world, this could take a different turn since destinations are nothing but a State which we observe to switch between different composables we can remove it at any point without affecting the previous transaction (i.e history). The current implementation does not work like how it was done in Fragments world, here we just remove all the existing instances from the backstack directly (i.e no recursive pop till we find that instance).
This could be confusing since the functionality is not carried over to the new system. Now, should I keep the logic we are familiar with from Fragments, or change it? Changing it would then become a breaking change.
Also, to achieve the single top functionality of FragmentManager in navigator-compose we can use popUpTo with argument all=true.
|
gharchive/issue
| 2021-06-25T14:12:16 |
2025-04-01T04:55:14.882627
|
{
"authors": [
"KaustubhPatange"
],
"repo": "KaustubhPatange/navigator",
"url": "https://github.com/KaustubhPatange/navigator/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1023237430
|
Changing joint_limits.yaml
Hello,
I am currently working on a project where I would like to increase the robot's speed of movement. I was wondering if I could change the max_velocity and max_acceleration fields in the respective joint_limits.yaml file and what would be the most sensible and safe manner to do so? The end-effector load is barely over 0.1 kg and the robot is an RS007 series.
Thank you
Hello,
Sorry for the late response, tried what you said and was able to tweak results that way. Thank you very much for your time. Will be closing the issue
|
gharchive/issue
| 2021-10-12T00:31:33 |
2025-04-01T04:55:14.884246
|
{
"authors": [
"r-ym"
],
"repo": "Kawasaki-Robotics/khi_robot",
"url": "https://github.com/Kawasaki-Robotics/khi_robot/issues/53",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
126446163
|
Stable CTC loss
Hi, have you tried your code and any results?
I was using the same CTC loss, but it seems unstable. I was using the SGD optimizer with learning rate 0.0001, but it can't converge.
Speaking frankly, and this is my own fault for not putting this somewhere in the code, I wouldn't put too much faith in this code, as I was just using it to experiment a while back and it is far from complete. There are far better alternatives if you want something that works right out of the box. If you're trying to find a really well-defined and introductory example of CTC, the loss function in this file is great: https://github.com/tmbdev/ocropy/blob/master/ocrolib/lstm.py
-Kayne
|
gharchive/issue
| 2016-01-13T15:41:26 |
2025-04-01T04:55:14.886514
|
{
"authors": [
"KayneWest",
"Michlong"
],
"repo": "KayneWest/DeepSpeech",
"url": "https://github.com/KayneWest/DeepSpeech/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
704057831
|
Preventing Magic Item mod firing
If I create a magic item and attach a spell to it, you go into the spellbook to fire the magic item.
This cannot happen when the Item Macro module is enabled.
itemMacro.js:291 Uncaught (in promise) TypeError: Cannot read property 'data' of null at HTMLDivElement.<anonymous> (itemMacro.js:291) at HTMLDivElement.dispatch (jquery.min.js:2) at HTMLDivElement.v.handle (jquery.min.js:2)
Not sure what I can do here due to the way that Magic Items are added to the character sheets. I would have to get in contact with the author and work on a solution together.
|
gharchive/issue
| 2020-09-18T03:39:23 |
2025-04-01T04:55:14.901080
|
{
"authors": [
"Kekilla0",
"crymic-linith"
],
"repo": "Kekilla0/Item-Macro",
"url": "https://github.com/Kekilla0/Item-Macro/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
261819219
|
Add Mission Statement to website and Rebrand as 'Ride With Us'
This commit gives the website a new mission statement as well as a new
branding, 'Ride With Us'.
This is currently a WIP.
This was added outside of the PR.
|
gharchive/pull-request
| 2017-09-30T05:17:43 |
2025-04-01T04:55:14.919797
|
{
"authors": [
"KenEucker",
"demophoon"
],
"repo": "KenEucker/pdxbikesafe-website",
"url": "https://github.com/KenEucker/pdxbikesafe-website/pull/1",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
}
|
407410989
|
Move the scope of the celerite and scikit-learn imports
Right now PLDCorrector.correct() is the only method in Lightkurve that requires celerite and scikit-learn.
This PR changes correctors.py to import these two dependencies at the method level rather than the module level, which will prevent import lightkurve from crashing if any issue with these dependencies happens to exist.
This PR is inspired by #429. @nksaunders or @gully please review.
Codecov Report
Merging #430 into master will not change coverage.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #430 +/- ##
=======================================
Coverage 88.34% 88.34%
=======================================
Files 28 28
Lines 4866 4866
=======================================
Hits 4299 4299
Misses 567 567
Impacted Files
Coverage Δ
lightkurve/correctors.py
83.33% <100%> (ø)
:arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 85a2f69...2c1a8fe. Read the comment docs.
@barentsen @nksaunders @gully should we remove celerite and scikit-learn from the requirements.txt file given that they are optional dependencies?
Thanks for weighing in @mirca! My idea here is for celerite and scikit-learn to be required dependencies, because the PLDCorrector will hopefully become a very standard tool & we'll no doubt use GPs elsewhere too.
However, because only one function uses these packages so far, I wanted to avoid import lightkurve breaking just in case something is wrong. As a general rule, I think it's OK to import a package within a function if it's the only place it is ever used.
@barentsen totally agree with you
|
gharchive/pull-request
| 2019-02-06T20:13:42 |
2025-04-01T04:55:15.006737
|
{
"authors": [
"barentsen",
"codecov-io",
"mirca"
],
"repo": "KeplerGO/lightkurve",
"url": "https://github.com/KeplerGO/lightkurve/pull/430",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
717882029
|
Integrate Last Resort Font as last fallback
https://github.com/unicode-org/last-resort-font/
This would let us render characters in a way that end users could tell what is missing and why.
I find it funny that this exists.
Me too. And yet it's kinda perfect for our use case, haha.
|
gharchive/issue
| 2020-10-09T06:12:58 |
2025-04-01T04:55:15.026418
|
{
"authors": [
"Kethku",
"j4qfrost"
],
"repo": "Kethku/neovide",
"url": "https://github.com/Kethku/neovide/issues/381",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|