id | text | source | created | added | metadata
---|---|---|---|---|---
581529813
|
High ms?
Hello, sometimes I'm getting 0.40+ ms in game using resmon 1. Is it only me having this problem?
Have you installed the latest possible version of TigoAntiCheat?
Yeah, it is the latest version I'm using. I get 0.40 ms and there are only 2 people online inside.
That's indeed on the high side, I'll try to see if I can find a possible memory leak.
Nice. When do you plan to release your v1.1.0, with the nuke and explosion anti-cheat?
Do you have a name for me from your server where I can trace where this high ms came from? You can also send it via discord if you'd prefer.
https://discordapp.com/users/636509961375055882
I tried to message you a long time ago; you are not replying.
Mac Gaming#2446 - here is my discord
Contacted him via Discord and indicated that he must temporarily disable objects.lua.
Will make a fix for this in the coming days.
|
gharchive/issue
| 2020-03-15T05:40:10 |
2025-04-01T04:55:42.708487
|
{
"authors": [
"HarithMichael",
"TigoDevelopment"
],
"repo": "TigoDevelopment/TigoAntiCheat",
"url": "https://github.com/TigoDevelopment/TigoAntiCheat/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2179620219
|
Make somacore dependency exact to match tiledbsoma
A previous version of tiledb-soma erroneously marked itself as compatible with versions of tiledbsoma beyond v1.0.8. This is because the PyPI package (https://github.com/single-cell-data/TileDB-SOMA/blob/1.8.0/apis/python/setup.py#L332) lists itself as only compatible with == a version, but this Conda package listed itself as compatible with >= a version.
I'll put up a mod
The person who edits recipe/meta.yaml is 99% of the time myself. (See also our established procedure.) Making an exact somacore pin here is an improvement -- it matches TileDB-SOMA's apis/python/setup.py -- but it introduces an error-prone manual step. Namely: the change-me spots in this file are well-highlighted, as are the spots that bump regularly, including core. The somacore package, by contrast, is "almost invisible" and tends to disappear in the tiledbsoma shrubbery. A reminder is the right thing to do here.
https://github.com/TileDB-Inc/tiledbsoma-feedstock/pull/97/commits/ebc8814bb449530b52e1c5c4040266257b5853ff
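As a hedged illustration of the exact-pin change (the section layout and version numbers below are assumptions for illustration, not copied from the actual recipe):

```yaml
# Hypothetical excerpt from recipe/meta.yaml; the version is a placeholder.
requirements:
  run:
    # Exact pin, matching TileDB-SOMA's apis/python/setup.py ("==")
    - somacore ==1.0.4
    # The previous ">=" pin is what allowed the over-broad compatibility claim:
    # - somacore >=1.0.4
```

The exact pin trades flexibility for correctness: the conda package can never claim compatibility the upstream setup.py does not.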
|
gharchive/pull-request
| 2024-03-11T16:49:28 |
2025-04-01T04:55:42.713534
|
{
"authors": [
"johnkerl",
"thetorpedodog"
],
"repo": "TileDB-Inc/tiledbsoma-feedstock",
"url": "https://github.com/TileDB-Inc/tiledbsoma-feedstock/pull/97",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1726980027
|
Any benchmark of inference speed about FP4/NF4?
I wonder if there is any speed testing results about FP4/NF4 inference vs FP16?
This has been addressed in the most recent release. The benchmarks can be run by using pytest -vsk bench_matmul. If you want to benchmark other dimensions, go into the file and change the dimensionality.
|
gharchive/issue
| 2023-05-26T06:22:41 |
2025-04-01T04:55:42.714605
|
{
"authors": [
"TimDettmers",
"Tracin"
],
"repo": "TimDettmers/bitsandbytes",
"url": "https://github.com/TimDettmers/bitsandbytes/issues/442",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
902989104
|
work on dart ?
Because rmw depends on r_crypto any which requires the Flutter SDK, version solving
failed.
@gcxfd which version of Flutter did you use? Can you share your pubspec.yaml?
No further response; closing it.
|
gharchive/issue
| 2021-05-26T22:37:57 |
2025-04-01T04:55:42.786266
|
{
"authors": [
"TinoGuo",
"gcxfd"
],
"repo": "TinoGuo/r_crypto",
"url": "https://github.com/TinoGuo/r_crypto/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
530347694
|
Parallelization not working on ITER
On ITER clusters, using @munechika-koyo 's camera (which is big with 250,000 LOS) and an edge emissivity field from JINTRAC, the computation time (for res= 10 cm, just for testing) does not seem to be accelerated by parallelization:
Minimum working example (tofu 1.4.2-a5):
In [1]: import tofu as tf
/home/ITER/vezined/ToFu_All/tofu/tofu/__init__.py:95: UserWarning:
The following subpackages are not available:
- tofu.mag
=> see tofu.dsub[<subpackage>] for details.
warnings.warn(msg)
In [2]: cam = tf.load('/home/ITER/munechk/public/MyTofu/output/ITER_test_camera_config.npz')
Loaded from:
/home/ITER/munechk/public/MyTofu/output/ITER_test_camera_config.npz
In [3]: multi = tf.imas2tofu.MultiIDSLoader(user='hoeneno', tokamak='convert', shot=134000, run=29, ids=['core_sources', 'equilibrium', 'edge_sources'])
Getting ids [occ] tokamak user version shot run refshot refrun
------------ ----- ------- ------- ------- ------ --- ------- ------
core_sources [0] convert hoeneno 3 134000 29 -1 -1
edge_sources [0] " " " " " " "
equilibrium [0] " " " " " " "
In [4]: _dshort = {'core_sources': {'1drhotn': 'source[identifier.name=radiation].profiles_1d[time].grid.rho_tor_norm',
   ...:                             '1deEnergy': 'source[identifier.name=radiation].profiles_1d[time].electrons.energy'},
   ...:            'equilibrium': {'2dpsi': 'time_slice[time].profiles_2d[0].psi',
   ...:                            '2dmeshR': 'time_slice[time].profiles_2d[0].r',
   ...:                            '2dmeshZ': 'time_slice[time].profiles_2d[0].z'}}
   ...:
In [5]: multi.set_shortcuts(dshort=_dshort)
In [6]: plasma = multi.to_Plasma2D(shapeRZ=('R', 'Z'))
/home/ITER/vezined/ToFu_All/tofu/tofu/imas2tofu/_core.py:1972: UserWarning: The following data could not be retrieved:
- equilibrium:
2dB : '2dBT'
2dBR : list index out of range
2dBT : list index out of range
2dBZ : list index out of range
2djT : list index out of range
2dmeshFaces : list index out of range
2dmeshNodes : list index out of range
2dphi : list index out of range
2dpsi : list index out of range
2drhopn : '2dpsi'
2drhotn : '2dphi'
2dtheta : '2dmeshNodes'
strike0 : 'strike0R'
strike0R : list index out of range
strike0Z : list index out of range
strike1 : 'strike1R'
strike1R : list index out of range
strike1Z : list index out of range
x0 : 'x0R'
x0R : list index out of range
x0Z : list index out of range
x1 : 'x1R'
x1R : list index out of range
x1Z : list index out of range
- core_sources:
1dbrem : No / several matching signals for: - source[]['identifier', 'name'] = bremsstrahlung - nb.of matches: 0
1dline : No / several matching signals for: - source[]['identifier', 'name'] = lineradiation - nb.of matches: 0
1dprad : '1dbrem'
1dpsi : No / several matching signals for: - source[]['identifier', 'name'] = lineradiation - nb.of matches: 0
1drhopn : '1dpsi'
1drhotn : No / several matching signals for: - source[]['identifier', 'name'] = lineradiation - nb.of matches: 0
warnings.warn(msg)
In [7]: %timeit sig_sum, units = cam.calc_signal_from_Plasma2D(plasma, quant='edge_sources.2dradiation', plot=False, res=0.1, method='sum', minimize='calls', num_threads=1)
12.8 s ± 61.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [8]: %timeit sig_sum, units = cam.calc_signal_from_Plasma2D(plasma, quant='edge_sources.2dradiation', plot=False, res=0.1, method='sum', minimize='calls', num_threads=10)
13 s ± 88.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
The computation time is virtually the same.
Checking the presence of openmp:
I checked the presence of openmp based on the test in the setup.py:
In [11]: omp_test = r"""
...: #include <omp.h>
...: #include <stdio.h>
...: int main() {
...: #pragma omp parallel
...: printf("Hello from thread %d, nthreads %d\n", omp_get_thread_num(),
...: omp_get_num_threads());
...: }
...: """
...:
...:
...: def check_for_openmp(cc_var):
...: import tempfile
...:
...: tmpdir = tempfile.mkdtemp()
...: curdir = os.getcwd()
...: os.chdir(tmpdir)
...:
...: filename = r"test.c"
...: with open(filename, "w") as file:
...: file.write(omp_test)
...: with open(os.devnull, "w") as fnull:
...: result = subprocess.call(
...: [cc_var, "-fopenmp", filename], stdout=fnull, stderr=fnull
...: )
...:
...: os.chdir(curdir)
...: # clean up
...: shutil.rmtree(tmpdir)
...: return result
...:
...:
In [12]: import subprocess
In [13]: import shutil
In [14]: openmp_installed = not check_for_openmp("cc")
In [15]: openmp_installed
Out[15]: True
So openmp is apparently available.
But, if I open another terminal in parallel and try to monitor the CPU usage during the execution of the two %timeit commands above using
top -u vezined
I see that the CPU usage is effectively limited to 100%, meaning that despite the presence of openmp and num_threads=10, we are limited to 1 CPU only.
Possible causes, in my opinion:
I suspect this is due to the fact that we are running from inside the ipython console, and that ipython was allocated only one CPU when it was first started.
=> this seems to hold some valuable information on that point:
http://ipython.org/ipython-doc/stable/parallel/parallel_intro.html
Or, it could be that the system admins allocated, by default, only one CPU per user on the cluster
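Either hypothesis can be checked quickly from the same session. A small sketch (independent of tofu; `sched_getaffinity` is Linux-only, which should hold on the ITER cluster):

```python
import os

# Number of CPUs physically visible on the machine.
total = os.cpu_count()

# Number of CPUs this process is actually allowed to run on (Linux only);
# a scheduler or cgroup restriction shows up here, not in cpu_count().
allowed = len(os.sched_getaffinity(0))

print(f"visible: {total}, allowed for this process: {allowed}")
# If 'allowed' is 1 while 'visible' is large, all OpenMP threads are pinned
# to a single core, which would match the observed 100% CPU ceiling.
```
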
What do you think @lasofivec ?
It would be nice to have some extra information:
What gcc version are you using ?
Was it the same as when tofu was built?
Can you try to run a small parallelized cython example in ipython ?
(base) [ vezined hpc-login02 ~ ] gcc --version
gcc (GCC) 6.4.0
The tests above were done in a local git repo containing tofu (compiled with this gcc using python setup.py build_ext --inplace).
Could you try an example independent of tofu, to see whether the problem comes from the machine or from tofu?
From a ipython console:
In [22]: bench(cython_cdist_argmin, X, Y, "prange4")
...:
...:
version time (s) speedup
0 baseline 4.360840 1
1 cython pure python 4.743235 0.919381
2 pointers + unrolled loop 0.100821 43.2534
3 prange 0.057333 76.0614
4 prange4 0.029498 147.834
(short version, but same functions, as here: https://github.com/jeremiedbb/tutorial-euroscipy-2019/blob/master/tutorial.ipynb)
I tried it on the login node of ITER, same as you I think. So options 1 and 2 that you suggested seem to be impossible.
Ahhhh sorry for such a slow reaction....
for technical reasons method='sum', minimize='calls' cannot be parallelized.
so num_threads won't affect anything.
I'm closing this issue as there is no problem. It is not supposed to be parallelized.
|
gharchive/issue
| 2019-11-29T13:29:40 |
2025-04-01T04:55:42.811018
|
{
"authors": [
"Didou09",
"lasofivec"
],
"repo": "ToFuProject/tofu",
"url": "https://github.com/ToFuProject/tofu/issues/307",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2440453397
|
Fetch Current Schema and Table for Granting Permissions in Post-Hooks
As a data transformer, I would like to be able to grant permissions when new schemas and tables are created. For example, I have a role in Postgres called data_consumer_admin.
Regardless of the plan being executed, I would like the data_consumer_admin to be able to access the newly created schemas and tables. For example, when reviewing MRs/PRs I share an example of the processed data, and every data_consumer_admin should be able to query it from this SQLMesh environment.
This runs, but marts is not replaced with marts__dev or whatever the current plan's environment is.
@IF(
@runtime_stage = 'creating',
GRANT USAGE ON SCHEMA marts TO group_data_consumer_admin
);
From Slack discussion here: https://tobiko-data.slack.com/archives/C044BRE5W4S/p1722431705430829
Hey @jrhorne, after discussing this internally we concluded that exposing the current plan's environment to the macro evaluator is not the right way to solve this problem. You can instead leverage the Python API to directly execute statements that grant permissions after you run the plan.
For example, after creating a new "dev" environment, you could do the following for postgres, assuming you have more than one schema and want to automate the permission granting (otherwise you could simply hardcode the schema name):
from sqlmesh.core.context import Context
ctx = Context()
engine_adapter = ctx.engine_adapter
dev_schemas = engine_adapter.fetchdf(
"select schema_name from information_schema.schemata where schema_name like '%__dev'"
)
for dev_schema in dev_schemas["schema_name"]:  # iterate the column's rows, not the DataFrame's column names
    engine_adapter.execute(f"GRANT USAGE ON SCHEMA {dev_schema} TO group_data_consumer_admin")
I'm closing this, but happy to continue discussing.
CC: @izeigerman
Makes sense. Here's my script assign_dev_permissions.py if anyone stumbles upon this later:
"""
README:
Purpose:
tl;dr Run this script to give other DS members access to testing schemas for reviewing data changes.
This script automates the assignment of specific permissions to designated roles in development schemas.
The script targets schemas in a database that contain the pattern '__dev' in their names, commonly used for development and testing environments.
The script iterates over each of these schemas and assigns the following permissions to each role listed in roles_to_assign:
Roles and Permissions:
group_data_consumer_admin:
- Granted USAGE on the schema, allowing access to the schema without direct access to its contents.
- Granted SELECT on all tables in the schema, enabling read-only access to data.
- Granted EXECUTE on all functions in the schema, allowing the execution of stored procedures and functions.
- Granted USAGE on all sequences in the schema, allowing access to the sequence objects, typically for reading their current value.
Prerequisites:
- Ensure that the database connection and context are correctly set up using sqlmesh.
- Verify that the user running this script has the necessary privileges to grant the described permissions in the database.
Instructions:
- Install the required package (sqlmesh).
- Run this script in an environment with the necessary privileges.
"""
from sqlmesh.core.context import Context

# Initialize the context and retrieve the engine adapter
ctx = Context()
engine_adapter = ctx.engine_adapter

# Fetch all development schemas that match the naming pattern
dev_schemas = engine_adapter.fetchdf(
    "SELECT schema_name FROM information_schema.schemata WHERE schema_name LIKE '%__%dev%'"
)

# List of roles to assign permissions to
roles_to_assign = ["group_data_consumer_admin"]

# Iterate over each role and assign the necessary permissions for each development schema
for role_to_assign in roles_to_assign:
    for dev_schema in dev_schemas.schema_name:
        engine_adapter.execute(
            f"GRANT USAGE ON SCHEMA {dev_schema} TO {role_to_assign}"
        )
        engine_adapter.execute(
            f"GRANT SELECT ON ALL TABLES IN SCHEMA {dev_schema} TO {role_to_assign}"
        )
        engine_adapter.execute(
            f"GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA {dev_schema} TO {role_to_assign}"
        )
        engine_adapter.execute(
            f"GRANT USAGE ON ALL SEQUENCES IN SCHEMA {dev_schema} TO {role_to_assign}"
        )
        print(f"Assigned permissions to {role_to_assign} on {dev_schema}")
|
gharchive/issue
| 2024-07-31T16:30:48 |
2025-04-01T04:55:42.840745
|
{
"authors": [
"georgesittas",
"jrhorne"
],
"repo": "TobikoData/sqlmesh",
"url": "https://github.com/TobikoData/sqlmesh/issues/2974",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1236724140
|
Feature(api): Add date cancelled on invoices
Add a cancelled date to invoices when an invoice gets cancelled.
The field is added but not used yet; it needs to be set when the status becomes cancelled.
|
gharchive/issue
| 2022-05-16T07:13:34 |
2025-04-01T04:55:42.951079
|
{
"authors": [
"Tolfx",
"Tooxic"
],
"repo": "Tolfix/cpg",
"url": "https://github.com/Tolfix/cpg/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
701681084
|
Bundle prism binary for distribution inside the php package
Bundle the prism.js binary so it is usable out-of-the-box in the bundle, with Node.js as the only runtime requirement.
Via Github Actions:
on merging a PR or committing to master, a workflow runs make dist and commits the changes, if any
every Monday at 9:00, a workflow tries to update the JS dependencies and commits the .lock file if it changed; the dist workflow is then triggered to propagate the update to the bundled binary
We could also distribute other css/js files this way if needed, by adding specific configs in webpack.config.js.
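The scheduled part of the automation described above could be sketched like this (file name, step names, and commands are illustrative assumptions, not the repository's actual workflow):

```yaml
# Hypothetical .github/workflows/update-js-deps.yml
name: Update JS dependencies
on:
  schedule:
    - cron: '0 9 * * 1'  # every Monday at 09:00
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: yarn upgrade   # update JS dependencies, refreshing the .lock file
      - run: make dist      # rebuild the bundled prism binary
      - run: |              # commit the .lock / dist changes, if any
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add -A
          git diff --cached --quiet || (git commit -m "chore: update JS deps" && git push)
```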
cc @benji07
Perfect! I happen to have a default syntax-highlighting CSS to propose; I'll add it in a PR branching off from this one 👍
Note that Prism already ships base themes that we could additionally expose in the bundle.
|
gharchive/pull-request
| 2020-09-15T07:21:12 |
2025-04-01T04:55:42.953490
|
{
"authors": [
"ogizanagi"
],
"repo": "Tom32i/content",
"url": "https://github.com/Tom32i/content/pull/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
289202293
|
May I use load balance in configuration file?
Our application has a microservice architecture, and there may be a facade service between the API gateway and the business services, like this:
We use facade services to call multiple business services and aggregate the results, so that the microservices are not dependent on each other.
question:
Some business services will have several instances (fewer than 3), so we need load balancing. I wrote a class in the facade service project to call the business services, and the class reads Ocelot's config file (the "ReRoutes" section) so that the config doesn't have to be written twice. But it seems that Ocelot only supports load balancing via service discovery. So, is there a need for Ocelot to support load balancing written in the configuration file?
thanks.
It's very exciting to see that! That's just what I want. I have seen the documentation linked above, and there are two questions:
Do I need to set the "LoadBalancer" configuration item when using multiple downstream hosts? If not, what's the default policy: LeastConnection or RoundRobin?
I see the "Catch All" routing has a lower priority than any others. Besides this, does other routing still try to match in the order written in the configuration, one by one, as in version 2.x?
thanks.
@BowAngel
You need to set a load balancer; it defaults to no load balancer and just uses the first host.
Yes, it still tries to match one by one. I'm not happy with my code at the moment, but I'm thinking about a way to do it nicely.
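A minimal sketch of what that could look like in the configuration (key names follow the Ocelot docs of that era; treat the exact spelling, hosts, and paths as assumptions):

```json
{
  "ReRoutes": [
    {
      "UpstreamPathTemplate": "/api/orders",
      "DownstreamPathTemplate": "/orders",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        { "Host": "10.0.0.1", "Port": 5000 },
        { "Host": "10.0.0.2", "Port": 5000 }
      ],
      "LoadBalancer": "RoundRobin"
    }
  ]
}
```

Omitting "LoadBalancer" gives the default behaviour described above: no balancing, first host only.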
Thanks! :D
|
gharchive/issue
| 2018-01-17T09:50:32 |
2025-04-01T04:55:42.958561
|
{
"authors": [
"BowAngel",
"TomPallister"
],
"repo": "TomPallister/Ocelot",
"url": "https://github.com/TomPallister/Ocelot/issues/201",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
222008014
|
Random thoughts on middleware
Hey @TomPallister,
Just been looking through the code trying to understand how it all works. I've got a few thoughts about the middleware implementation...
Is there any reason for ExceptionHandlingMiddleware to not implement OcelotMiddleware?
Any reason for not defining abstract Task Invoke(HttpContext context) on OcelotMiddleware? Adding this would give extra compile-time check that middleware implementors are on the correct path.
It took me a while to understand what parts of the code were middleware, and what parts were infrastructure. Given that the middleware are effectively the feature slices of the project, and are where all the magic happens, I feel it could be a bit more distinct from the rest of the supporting code. Any opinion on grouping all the middleware together as individual folders under an Ocelot.Middleware namespace? E.g.,
+-Ocelot
+-Middleware
| +-Authentication
| | +- ...
| |
| +-Authorisation
| | +- ...
| |
| +-...
|
+-other stuff
This structure would also provide a path to providing a more modular package-based approach to middleware in the future, with a common naming convention: optional middleware could be published in packages; the common Ocelot.Middleware.Whatever convention would make it easy to discover these packages; and say someone didn't want to use identityserver then they could choose to avoid pulling this package in.
If you think any of this is worth doing I'm happy to pick up the work on it.
@binarymash makes sense. I tried to put all the classes below a relevant feature, such as Authentication having all the code for authentication, and anything cross-cutting I just dumped in a catch-all... probably called Infrastructure.
I find the code quite hard to follow myself because it is not directly procedural, in the sense that you can follow "this method calls that, and that calls this". Given that anyone can add middleware or do basically what they want, it could all fall apart quite quickly!
More than happy for you to make those changes! You shouldn't have any nightmare merges.
The only feature Im thinking of doing is storing the config in Consul which gets me away from having to implement that raft consensus algorithm any time soon.
Cool bananas. That's my Easter Sunday afternoon sorted out then :)
I noticed a few naming inconsistencies between the folders and the middleware classes. I'll have a look at making everything consistent.
Oh yeah there's definitely still a need for the cross cutting stuff, no problem with that.
@binarymash awesome thanks! I'm painting and decorating. It's harder than I thought to select a colour.
My advice is just go with white ;)
Thats what Im trying to tell the Mrs! Nevermind.
Ok, I think you might finish the decorating before I finish this... it's a bit more complex than I first thought. I naively assumed that each of the middleware implementations was isolated from the others, but now I see that there's shared code between some of them, and some rely on each other. Some of this might be solved by extracting code into a common model, other parts by removing leaky abstractions, and some by consolidation. Or maybe I'm wasting my time and it's never going to be squeezed into the shape I imagine :)
Basically, I need to think about this a bit more. Whatever, any code change here would be a breaking change to everyone anyway.
I've pushed up the refactoring so far - see https://github.com/binarymash/Ocelot/tree/feature/RestructureMiddleware
btw are you seeing unstable tests? Saw quite a lot of apparently random failures when building on the command line.
Oops. Forgot to say, for future reference here's the dependencies I see between the middleware...
DownstreamRouteFinder is referenced by:
DownstreamUrlCreator (UrlMatcher.IDownstreamPathPlaceholderReplacer, UrlMatcher.UrlPathPlaceholderNameAndValue)
Authentication (DownstreamRoute.ReRoute())
Authorisation (DownstreamRoute.ReRoute())
Headers (DownstreamRoute, UrlMatcher.UrlPathPlaceholderNameAndValue)
OcelotMiddleware (DownstreamRoute)
Request is referenced by:
Requester (Request)
Requester is referenced by:
Request (QoS.IQoSProvider, QoS.IQosProviderHouse)
RequestId is referenced by:
Request (RequestId)
Headers is referenced by:
Responder (IRemoveOutputHeaders)
@binarymash sorry Phil! Yeah, some of the middleware are dependent on others; as I mentioned above, it might be better to just refactor them into one procedural-style stack. Not sure.
Obviously they cannot be pulled apart without some refactoring. Also, I'm not sure you get any value from them unless they are together, so there's no point in them being middleware. They all need that scoped data repository, which is a crap dependency imo, and that would go away if they were refactored into one stack. There is quite a lot that annoys me.
Sorry the code is a bit crap! I just cracked on trying to get something working.
Just been looking through the code a bit more to understand what each of the middleware does. So, in general, they do one of the following...
Modifying the request that we've received (downstream route finder, request id, claims builder, request headers, query string, load balancing, downstream url creator)
Applying some cross-cutting functionality (exception handler, authentication, authorisation, output cache, requester)
Modifying the response we return (responder)
Actually, a middleware could potentially do a combination of any of these things.
I think the middleware approach is fine, but I think what makes things a bit complex at the moment is that for all those middleware that modify the request, we store their changes individually, and then try to gather them all together just before we send the request. Maybe there's a good reason for doing that - I didn't go through all the hard work of writing it all, so you're in a better position than me to answer :)
It feels like things might be easier if we turned things round the other way... rather than trying to assemble a request at the very end of the pipeline, how about, when the request first came in, we immediately use this to create an initial version of the downstream request, and then each of the middleware directly mutate this request? This means that each middleware would not need to know at all about the implementation details of the other middleware - they just do whatever they need to do directly on the downstream request. This would prevent the need for leaking middleware implementation into the OcelotMiddleware base class. At the end of the pipeline, the requester just sends whatever the current downstream request is.
In effect, this is kind of what you're doing with the response to the downstream request... the requester immediately stores the response in the IRequestScopedDataRepository, and then other middleware (for example, the output cache) use this to do whatever they want with it.
I've hacked together a poc of this which decouples each bit of middleware... work in progress, not fully wired up yet so no idea if it works, and I've broken lots of tests, but it compiles ;) My initial thoughts are that it reduces the amount of code and complexity. I'll try to spend a bit more time on this in the next day or two to see if it is worth pursuing.
https://github.com/binarymash/Ocelot/commits/feature/RequestMutation
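The mutation idea above — each middleware changing one shared downstream request directly, instead of stashing pieces in a scoped data repository — can be sketched language-agnostically; everything here is illustrative, not Ocelot's actual types:

```python
# Illustrative sketch of a "mutate one downstream request" pipeline.
class DownstreamRequest:
    def __init__(self, path, headers=None):
        self.path = path
        self.headers = dict(headers or {})

def request_id_middleware(request):
    # Each middleware mutates the request directly; no shared repository.
    request.headers["X-Request-Id"] = "abc123"

def url_rewrite_middleware(request):
    request.path = request.path.replace("/api", "/internal")

def run_pipeline(request, middlewares):
    for mw in middlewares:
        mw(request)
    return request  # the requester sends whatever the request now is

req = run_pipeline(DownstreamRequest("/api/values"),
                   [request_id_middleware, url_rewrite_middleware])
```

Each middleware only knows the request type, not the other middlewares' implementation details, which is the decoupling argued for above.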
@binarymash I'll have a look tomorrow! Thanks man. Went with jasmine white in the end.
@binarymash we keeping this open? just doing some house cleaning?
Yeah now that #88 has made the middleware pretty much independent of each other I want to try this again. Job for the bank holiday weekend :)
Gonna close this one, as we're no longer using asp.net middleware and are passing an Ocelot-specific context instead, though its name might not be correct anymore; "DownstreamContext" doesn't seem right now, haha.
|
gharchive/issue
| 2017-04-16T12:32:55 |
2025-04-01T04:55:42.974963
|
{
"authors": [
"TomPallister",
"binarymash"
],
"repo": "TomPallister/Ocelot",
"url": "https://github.com/TomPallister/Ocelot/issues/84",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1083729073
|
🛑 VTP.XYZ is down
In 5fa23ae, VTP.XYZ (https://vtp.xyz/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: VTP.XYZ is back up in 375f156.
|
gharchive/issue
| 2021-12-18T02:50:35 |
2025-04-01T04:55:42.986628
|
{
"authors": [
"TomsProject"
],
"repo": "TomsProject/uptime",
"url": "https://github.com/TomsProject/uptime/issues/928",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1047374775
|
Random LFO shapes
The feature you'd like
A class much like SuperCollider's LFNoise0 (stepped noise) or LFNoise1 (interpolated ramped noise) for adding some natural jitter and randomisation to a signal or control.
Any alternatives you've considered
Perhaps this can be achieved currently by creating a DC signal and enveloping it with rampTo? Or by sample-and-holding white noise and applying a low-pass filter to smooth it? I've not ventured down that path quite yet, but I wanted to ask whether something exists already, or whether I've overlooked it.
applying a low pass filter to smooth it?
That's basically what I've been doing: a combination of a scheduled repeat -> random value -> signal -> low-pass filter to create smooth random movement.
let signal = new Tone.Signal(0.5);
Tone.Transport.scheduleRepeat(time => {
signal.setValueAtTime(Math.random(), time);
}, "16n");
let filter = new Tone.Filter(0.4, "lowpass");
signal.chain(filter);
// filter output is now smooth random signal
For a kind of stepped noise, you can remove the LPF.
Here's a CodePen with my most commonly used signal-based modulators.
https://codepen.io/joeweiss/pen/VwMqrYx
|
gharchive/issue
| 2021-11-08T12:11:37 |
2025-04-01T04:55:42.994601
|
{
"authors": [
"boonier",
"joeweiss"
],
"repo": "Tonejs/Tone.js",
"url": "https://github.com/Tonejs/Tone.js/issues/975",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2334113245
|
Not an issue
Not an issue, just posting this here to show I have accepted the invite
ok
GitHub Desktop is currently uploading all the files and stuff
CODE IS UPLOAD!
Try to build it now, you were putting OG instead of Old
Mainly because you renamed the images but not the character file
oh
Creating library ApplicationMain.lib and object ApplicationMain.exp
49d10c9b_NV_depth_nonlinear.obj : error LNK2011: precompiled object not linked in; image may not run
Hint on symbols that are defined and could potentially match:
__@@PchSym@00@UumuRkhbxsPvmtrmvRhlfixvUumuRyzolmvbvmtrmveCRnzrmUvckligUivovzhvUdrmwldhUlyqUlyqUnhexBJGERmxUPPkxsUifmgrnvUscxkkOlyq@link9f427fbb282f3ea6dbaa0233bdb91576
__@@PchSym@00@UumuRyzolmvbRvmtrmvUvckligUivovzhvUdrmwldhUlyqUlyqUnhexBJGERmxUPPkxsUszcvUscxkkOlyq@link54406bf8480fba887f066eae07c0abf2
ApplicationMain.exe : fatal error LNK1120: 1 unresolved externals
done.
Press any key to continue . . .
it always does this (proceeds to rebuild the whole thing)
I don't know what the issue is with that honestly
Me neither, but just deleting every export thing usually does the trick.
say, do u have an fnf chromatic? (if u dont know who im talking about, a YOU chromatic)
I do, and I have a FNF character, though I have to redo my chromatic... it honestly just sounds like I'm depressed lol
Also, I won't be able to comment for the next hour, I have to finish up here at work
ok
Uh....
(character is locked for some reason)
hi
https://www.youtube.com/watch?v=f-LkpP4ELsI
could we do this as a chat thing, or do u still prefer discord?
If this is a chat thing then anyone else could see it and interrupt it. I also don't really pay attention to my Gmail, so that wouldn't work that well either
ok
|
gharchive/issue
| 2024-06-04T18:18:09 |
2025-04-01T04:55:43.050716
|
{
"authors": [
"TonyBallOhKnee-Games",
"TorchTheDragon"
],
"repo": "TonyBallOhKnee-Games/fnf-baloney-engine",
"url": "https://github.com/TonyBallOhKnee-Games/fnf-baloney-engine/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1046351474
|
Row-artifacts when sorting a column
Sometimes the sort adds some odd artifacts to the rows.
For example: https://gyazo.com/235b23c3652e59fb5c7de19d20a12d4c.
This is definitely a bug. I'll look into this over the weekend, probably not a big deal.
The logic for this is in https://github.com/TonyGermaneri/canvas-datagrid/blob/143d356bc5bfd141979f868a917ebfc00a74be1d/lib/draw.js#L1106 — not sure whether the logic is no longer sound, or whether the bound-row-to-view-row mapping isn't updated correctly.
Ah, OK, it seems to make sense: when you're sorting, the boundRowIndex isn't necessarily contiguous from one row to the next, causing the gap to be shown (hasRowGap = true). An indicator like this isn't particularly useful, so I'll add a provision to exclude the rowGap from being rendered when sorting is active.
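A hedged sketch of why sorting triggers the indicator (function and variable names here are illustrative, not the library's actual code):

```javascript
// The gap indicator fires when adjacent view rows map to non-contiguous
// bound rows — which is exactly what a sort produces, since it reorders
// the view-row -> bound-row mapping.
function hasRowGap(boundRowIndexes, viewRow) {
  return boundRowIndexes[viewRow + 1] - boundRowIndexes[viewRow] !== 1;
}

// Unsorted, the mapping is contiguous; after sorting it generally is not.
const unsorted = [0, 1, 2, 3];
const sorted = [2, 0, 3, 1];
```

Suppressing the indicator while a sort is active is then just a matter of skipping this check when the grid is sorted.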
|
gharchive/issue
| 2021-11-06T01:24:58 |
2025-04-01T04:55:43.053805
|
{
"authors": [
"david542542",
"ndrsn"
],
"repo": "TonyGermaneri/canvas-datagrid",
"url": "https://github.com/TonyGermaneri/canvas-datagrid/issues/391",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1046353305
|
Moving a row screws up the row-highlighting
Often when a row is moved, the selectability of rows/cells doesn't act as expected.
For example: https://gyazo.com/4dde8cf953dc9ec7502fc182fecee37d
Totally weird, this looks like something recent. Major buggy, thanks for reporting. Will look into.
|
gharchive/issue
| 2021-11-06T01:34:27 |
2025-04-01T04:55:43.056346
|
{
"authors": [
"david542542",
"ndrsn"
],
"repo": "TonyGermaneri/canvas-datagrid",
"url": "https://github.com/TonyGermaneri/canvas-datagrid/issues/394",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1258130137
|
fix: don't force a copy of std::string fname when moving is an option
Taking this parameter by const reference forces us to copy it (because
we know we're going to store it). Taking it by r-value reference would
suggest that we might take ownership over it and would also force the
user to make a copy if they wish to retain the original value.
Taking this parameter by value, however, clearly gives us ownership of its
content without forcing a copy when it is implicitly converted from
const char* or explicitly handed over to us by the user via std::move.
Also note that without this, the std::move call already present in parse
is effectively a NOP. (I.e., it calls the const r-value ref constructor of
std::string, which doesn't exist, so it falls back to the const l-value ref.)
This looks nice. After checking CI results, I will merge it.
Your change has not been merged yet. And this change is worth merging. Also, it seems that all the CI tests pass. Could you restore the branch and reopen this Pull Req? If you don't want to be listed at contributors for some reason, then I will add this manually. Could you tell me what should I do?
I have no idea what happened. Maybe when I was cleaning up branches in some other unrelated repos I accidentally deleted this one as well?
Anyway, I restored the branch and PR.
Thank you!
|
gharchive/pull-request
| 2022-06-02T12:41:08 |
2025-04-01T04:55:43.118811
|
{
"authors": [
"ToruNiina",
"muggenhor"
],
"repo": "ToruNiina/toml11",
"url": "https://github.com/ToruNiina/toml11/pull/189",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1847931516
|
After editing symbols library, parts can't be added.
Describe the bug
After importing one or more parts and then editing or removing a part, newly added parts don't show.
This is probably the same issue as #45
To Reproduce
Steps to reproduce the behavior:
LCSC part # that caused the issue
Any part#
Arguments used for the execution
$ JLC2KiCadLib C599645 -dir ~/Documents/CodeWorkspace/myCAD/KiCadLibraries/JLC2KiCad -symbol_lib JLC2KiCad-symbol -footprint_lib JLC2KiCad-footprint
Then, after deleting or editing a part using the Symbol Editor, newly added parts are neither shown in the Symbol Editor nor selectable in 'Add a symbol'.
Expected behavior
A clear and concise description of what you expected to happen.
Parts can be added, edited or deleted as usual.
Screenshots
If applicable, add screenshots to help explain your problem.
Additional context
Add any other context about the problem here.
After adding a part, the .kicad_sym file contains an erroneous ')' as shown.
The new part in the shown case is "C0603C104J3RAC7867_0_1" and is the last part in the file.
...
)
(property "LCSC" "C599645" (id 5) (at 0 0 0)
(effects (font (size 1.27 1.27)) hide)
)
) (symbol "C0603C104J3RAC7867_0_1"
(polyline
(pts
(xy -0.5080010160020321 -2.0320040640081283)
(xy -0.5080010160020321 2.0320040640081283)
...
After removing the parenthesis, it works as intended, i.e.
...
)
(property "LCSC" "C599645" (id 5) (at 0 0 0)
(effects (font (size 1.27 1.27)) hide)
)
(symbol "C0603C104J3RAC7867_0_1"
(polyline
(pts
(xy -0.5080010160020321 -2.0320040640081283)
(xy -0.5080010160020321 2.0320040640081283)
...
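As a quick sanity check for generated files, one can verify that the parentheses of a .kicad_sym s-expression are balanced. This is a generic sketch, not part of JLC2KiCadLib:

```python
def paren_balance(text: str) -> int:
    """Return the open-minus-close parenthesis count; 0 means balanced.

    A negative running depth means a stray ')' appeared before its '(',
    which is exactly the kind of malformed output KiCad chokes on.
    """
    depth = 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return depth  # stray closing parenthesis
    return depth

# Usage sketch (hypothetical path):
# assert paren_balance(open("JLC2KiCad-symbol.kicad_sym").read()) == 0
```

Note this ignores parentheses inside quoted strings, which is good enough for a smoke test but not a full s-expression parser.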
KiCad info:
KiCad x86_64 on x86_64
Version: 7.0.6-7.0.6~ubuntu22.04.1, release build
Libraries:
wxWidgets 3.2.1
FreeType 2.11.1
HarfBuzz 6.0.0
FontConfig 2.13.1
libcurl/7.81.0 OpenSSL/3.0.2 zlib/1.2.11 brotli/1.0.9 zstd/1.4.8 libidn2/2.3.2 libpsl/0.21.0 (+libidn2/2.3.2) libssh/0.9.6/openssl/zlib nghttp2/1.43.0 librtmp/2.3 OpenLDAP/2.5.15
Platform: Linux Mint 21.2, 64 bit, Little endian, wxGTK, cinnamon, x11
Build Info:
Date: Jul 7 2023 02:32:39
wxWidgets: 3.2.1 (wchar_t,wx containers) GTK+ 3.24
Boost: 1.74.0
OCC: 7.5.2
Curl: 7.88.1
ngspice: 38
Compiler: GCC 11.3.0 with C++ ABI 1016
Build settings:
KICAD_SPICE=ON
Thanks
/jon
Hi,
Thank you for reporting this issue.
It should be fixed with the latest commit.
I will let you close the issue if you can confirm this is working as expected on your side.
Great. Haven't got the possibility to test at the moment, but as soon as everything is OK, I will close it.
Thanks!
/jon
Seems to be working as it should now.
Good work!
Thanks
Closing
|
gharchive/issue
| 2023-08-12T10:54:38 |
2025-04-01T04:55:43.153014
|
{
"authors": [
"TousstNicolas",
"jonsag"
],
"repo": "TousstNicolas/JLC2KiCad_lib",
"url": "https://github.com/TousstNicolas/JLC2KiCad_lib/issues/48",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1064815105
|
🛑 Melody is down
In cada421, Melody (https://melody.triza.dev) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Melody is back up in 483be6b.
|
gharchive/issue
| 2021-11-26T21:40:52 |
2025-04-01T04:55:43.277952
|
{
"authors": [
"CodedJimmy"
],
"repo": "TrizaCorporation/Status",
"url": "https://github.com/TrizaCorporation/Status/issues/389",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1110336004
|
🛑 RbxVeri is down
In 5633af9, RbxVeri (https://verify.triza.dev) was down:
HTTP code: 502
Response time: 737 ms
Resolved: RbxVeri is back up in 04e7b96.
|
gharchive/issue
| 2022-01-21T10:58:59 |
2025-04-01T04:55:43.280518
|
{
"authors": [
"CodedJimmy"
],
"repo": "TrizaCorporation/Status",
"url": "https://github.com/TrizaCorporation/Status/issues/580",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
937455457
|
Feature request: feature first image of none specificied
Hi! I’m just digging into Ghost theme dev. Will send future pull requests :-)
For this theme is there a way to use the first image in a post as the “featured” image when listing all posts?
I often have an image in a post but don’t want the image to be featured because I need specific caption with the image.
Thanks!
Hey @tedserbinski,
Perfect timing 🙂
We added alt/caption support to feature image in Ghost 4.9.0, and Edition was updated to support this feature in https://github.com/TryGhost/Edition/commit/59624e8a4baed5a31f8a49459880b03dbf35ea1b as well.
Update your Ghost and Edition theme, and you'll have this feature out of the box.
Gotcha ok that works well. I wish there was a way to make an existing image featured -- otherwise have to download image, reupload -- kind of a pain.
|
gharchive/issue
| 2021-07-06T02:04:25 |
2025-04-01T04:55:43.298433
|
{
"authors": [
"minimaluminium",
"tedserbinski"
],
"repo": "TryGhost/Edition",
"url": "https://github.com/TryGhost/Edition/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
425903599
|
Wrong Last Modified Date for Post Tag in XML Sitemap
In the XML sitemap, the Last Modified date for the post tag is wrongly shown as 1970-01-01 00:00.
check - https://blog.ghost.org/sitemap.xml
@rishabhgrg After the update it displays only the post tag's published date, not the post tag's last modified date.
Hey @mskian, is this on local testing with latest release/version(2.19.4) ? If so, can you please provide some more details - steps, logs or screenshot. :)
@rishabhgrg Tested on the production version: it displays only the post tag's publish date; the Last Modified date is not updated.
See this screenshot: the post tag date should also update once we update or publish a blog post, but it is not updated and displays only the published date.
@mskian Which sitemap page is the screenshot for? If its the index page( /sitemap.xml), the date for sitemap-tags will be last modified date of any tag, and will not be affected by updating or publishing a blog post which does not has a new tag. For the tag date to update, you'll need to edit an existing tag or add a new tag.
sitemap-tags.xml
Yes I update the existing it's not Updating the Last Modified date
@mskian Updating an existing post will not update Last modified dates on sitemap-tags.xml, as those are only updated if a tag data is updated from Admin, which you can do at - /ghost/#/settings/tags. Updating existing Post will update the Last Modified date for that post in sitemap-posts.xml.
|
gharchive/issue
| 2019-03-27T11:24:21 |
2025-04-01T04:55:43.304294
|
{
"authors": [
"mskian",
"rishabhgrg"
],
"repo": "TryGhost/Ghost",
"url": "https://github.com/TryGhost/Ghost/issues/10640",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1685741666
|
Trailing slash stripped from @site.url and resulting in redirects
Issue Summary
Ghost's policy is to include trailing slashes on URLs. However, trailing slashes are stripped from @site.url. That causes a problem when Ghost is installed at a subdirectory, because all links to @site.url are redirected to the path with a trailing slash. In other words, there is at least one invalid link on every page with a link to @site.url. While ordinary users may not notice anything but a slight slowdown, for SEO these invalid links are pretty negative.
I believe the problem can be solved by removing trailingSlash: false in updateLocalTemplateOptions, but I'm not sure what effect that would have on Ghost installations on a domain (rather than a subdirectory).
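As background on why the trailing slash matters specifically for subdirectory installs, here is a generic illustration using the WHATWG URL parser built into Node (not Ghost code):

```javascript
// Without a trailing slash, 'blog' is treated as a file-like resource,
// so relative resolution happens against the site root and drops it.
const withoutSlash = new URL('about/', 'https://example.com/blog').href;
const withSlash = new URL('about/', 'https://example.com/blog/').href;

console.log(withoutSlash); // → https://example.com/about/
console.log(withSlash);    // → https://example.com/blog/about/
```

This is why stripping the slash is harmless for root-level installs (the path is just '/') but produces wrong or redirecting links when Ghost lives under a subdirectory.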
Steps to Reproduce
Install Ghost to a subdirectory, setting the configured url value like https://www.example.com/blog/.
Use the Casper theme (or any other theme that uses @site.url).
On any page that uses default.hbs, click the logo image. Notice you are 301 redirected from to.
Ghost Version
5.44.0
Node.js Version
v16.20.0
How did you install Ghost?
Ubuntu 20.04
Database type
MySQL 8
Browser & OS version
Chrome 108
Relevant log / error output
Not applicable
Code of Conduct
[X] I agree to be friendly and polite to people in this repository
I believe the problem can be solved by removing trailingSlash: false in updateLocalTemplateOptions, but I'm not sure what effect that would have on Ghost installations on a domain (rather than a subdirectory).
Looks like removing it would be safe. In researching this, everyone says trailing slashes after the domain name don’t matter (example, example, example).
I'm going to submit a PR.
PR is here: https://github.com/TryGhost/Ghost/pull/17859
|
gharchive/issue
| 2023-04-26T21:21:56 |
2025-04-01T04:55:43.312224
|
{
"authors": [
"dan-jensen"
],
"repo": "TryGhost/Ghost",
"url": "https://github.com/TryGhost/Ghost/issues/16712",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1401818080
|
Added e2e tests for member.deleted webhook
#15537
Hi @illiteratewriter thanks so much for this PR. Sorry for the delay, but it has now been merged 🎉 and will appear in the next release of Ghost - usually Fridays.
I'm not sure if you found this through hacktoberfest, but I've added the accepted label to this PR to make sure it counts.
I see you have another PR open, so will head over and check that one out ASAP 👍
|
gharchive/pull-request
| 2022-10-08T04:30:45 |
2025-04-01T04:55:43.313868
|
{
"authors": [
"ErisDS",
"illiteratewriter"
],
"repo": "TryGhost/Ghost",
"url": "https://github.com/TryGhost/Ghost/pull/15570",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1708233363
|
Final polishing script needed
After doing the semi auto alignment we need to then run a routine to cross match sources in the images (which should now be very accurate) and do one final wcs determination. This is necessary because the manual alignment can't take distortion into account when we apply an initial guess.
For the f/2 camera this seems to be essential.
Done in three different ways.
Users can use Astrometry.net, Scamp, or the built in gaia alignment. This will be available in the next release.
|
gharchive/issue
| 2023-05-12T21:28:14 |
2025-04-01T04:55:43.319489
|
{
"authors": [
"TrystanScottLambert"
],
"repo": "TrystanScottLambert/imacs_wcs",
"url": "https://github.com/TrystanScottLambert/imacs_wcs/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
892272875
|
add tradeable modifier
Example query: ^ids tradeable 9* to see what you can trade for the new latents
|
gharchive/issue
| 2021-05-14T21:59:03 |
2025-04-01T04:55:43.338563
|
{
"authors": [
"RheingoldRiver"
],
"repo": "TsubakiBotPad/pad-cogs",
"url": "https://github.com/TsubakiBotPad/pad-cogs/issues/1052",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2117832001
|
🛑 Content Delivery Network (S3 Bucket) is down
In d9ed601, Content Delivery Network (S3 Bucket) (https://cdn.tubnet.gg/minecraft-resourcepack/TubPack-production.zip) was down:
HTTP code: 522
Response time: 15424 ms
Resolved: Content Delivery Network (S3 Bucket) is back up in d1201cb after 10 minutes.
|
gharchive/issue
| 2024-02-05T07:10:24 |
2025-04-01T04:55:43.343581
|
{
"authors": [
"PublicQualityAcc"
],
"repo": "Tubnom/tubnet-uptime",
"url": "https://github.com/Tubnom/tubnet-uptime/issues/852",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2406949402
|
Add debouncer with function to manually cancel debounced tasks
Closes #2
lgtm
|
gharchive/pull-request
| 2024-07-13T14:12:42 |
2025-04-01T04:55:43.365428
|
{
"authors": [
"Tunous",
"mesqueeb"
],
"repo": "Tunous/DebouncedOnChange",
"url": "https://github.com/Tunous/DebouncedOnChange/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2548242573
|
Interface Changes for Use in Filtering
In the added example I propose a structure for generalized filtering, and additionally employ this method within PMMH given a random walk kernel.
Each filtering algorithm requires the user to construct 3 functions:
1. initialise for initializing the filtered states (whether it be particles or a Gaussian distribution)
2. predict which resamples the states and performs a one step ahead sampling step
3. update to evaluate the importance weight of the sample and return a marginal log-likelihood
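As a language-agnostic sketch of that three-function contract (written in Python rather than Julia, with toy names; this is not the SSMProblems.jl API), a bootstrap particle filter could look like:

```python
import math
import random

# Toy model: 1-D Gaussian random walk observed with Gaussian noise.

def initialise(n_particles):
    """Draw the initial particle cloud from the prior."""
    return [random.gauss(0.0, 1.0) for _ in range(n_particles)]

def predict(particles, weights):
    """Resample proportionally to the weights, then propagate one step."""
    resampled = random.choices(particles, weights=weights, k=len(particles))
    return [x + random.gauss(0.0, 0.5) for x in resampled]

def update(particles, observation, obs_std=1.0):
    """Importance-weight against the observation; return normalized
    weights and the increment to the marginal log-likelihood."""
    w = [math.exp(-0.5 * ((observation - x) / obs_std) ** 2) for x in particles]
    total = sum(w)
    if total == 0.0:
        return [1.0 / len(particles)] * len(particles), float("-inf")
    log_evidence = math.log(total / len(particles))
    return [wi / total for wi in w], log_evidence
```

A filtering pass then just alternates predict and update over the observations, accumulating the log-evidence increments; a Kalman filter fits the same contract with a Gaussian state in place of the particle cloud.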
Since migrating the filtering code to AnalyticalFilters (see https://github.com/TuringLang/AnalyticalFilters.jl/pull/7), there are minimal changes to SSMProblems from this PR:
[x] improve type consistency for SSMs
[ ] polish behavior of kwargs for controls
On Type Consistency
Currently, the interface contains type parameters for the state and observation objects; while this is ultimately good for forward simulation, we no longer have easy access to the element types. This is used mainly for preallocating particle weights, log evidence, etc and helps avoid unnecessary type conversions. At its worst, this raises errors when facing impossible type conversions.
Taking this to an extreme, I also think the user should keep the model element types consistent throughout the dynamics and observation process. This would allow us to employ a type structure like StateSpaceModel{T, LatentDynamics{Vector{T}}, ObservationProcess{Vector{T}}}, where Base.eltype(::StateSpaceModel) would return T. I would like to reiterate that keeping T consistent is the only way to avoid unnecessary/redundant type promotion.
Awesome stuff. I'll take a look through and make some comments.
Code style is Blue. I use the formatter baked into the Julia VSCode extension set up to format my code on every save. Works like a charm.
A few random thoughts:
Should resampling take place in its own function? E.g. a step consists of resample, predict, update. For the tracking-in-clutter applications, it seems likely we're going to have to change step to a predict, associate, update loop, so doesn't feel as bad to add extra steps to the particle filter
On the question of particle storage and ancestry (for naive smoothing), this may be too abstract but I wonder whether it is best left up to the storage object to determine how to save and store this. If the particle filter provides the new states, log weight increments and parent indexes, it can then be down to the storage object to store that as a linked list.
As mentioned in my message about RTS, I wonder if rather than having a callback function, a callback struct with two methods, post_predict_store and post_update_store would be better. This would allow you to constantly update state in place rather than separating out into proposed_states and filtered_states.
I like the look of the "distribution-focused" version of the Kalman Filter. It might be worth leaving a comment clarifying what that is about for Frederic/Hong since I think we spoke more about that by ourselves. I'm apprehensive about using the name particles; state is much clearer to me, where your state can be e.g. a Gaussian, a categorical distribution (HMM), or a weighted collection of point masses (particle collection).
Particle Ancestry
I implemented the particle storage algorithm presented in (Murray, 2015). I include a demonstration at the bottom of this file which informally benchmarks the performance of the algorithm.
@THargreaves since you brought my attention to this algorithm, your comments would be much appreciated. I was aiming for elegance with this commit, and is thus far from optimal in terms of speed. Feel free to scrap anything remotely wasteful.
I've pushed a small change that keeps track of the free indices using a stack. On my computer using the parameters in your test script, this reduced the median time from 48ms to 5ms...
@THargreaves legendary commit. 10x speed-up in less than 10 lines of changes.
I'm not sure about the predict and filter methods dispatching on AncestryContainer though. It would be much more general to have these implementation details abstracted away, say by a store!(::AbstractParticleContainer, proposed_states, idx) or something similar.
Yeah, it was purely to demonstrate a proof of concept. Now that you fixed the main drawback of the ancestry, I went ahead and added some keyword arguments to let initialise know whether to create an AncestryContainer or a ParticleContainer. It uses a nasty conditional, but I couldn't think of anything else.
I also consolidated my demonstrations to script.jl and moved resampling to a separate file.
A Note on Particle Gibbs
I updated the filtering code such that reference trajectories can be passed as key word arguments. This allows for conditional SMC to be used within particle Gibbs blocks using the highly efficient sparse ancestry storage. My implementation of CSMC is based mostly on Nicolas Chopin's own particles, and is nowhere near as general as the methods defined in AdvancedPS. Again, all suggestions are welcome.
I guess one thing of note about moving to kwargs is that we no longer have the fine-grained control of which kwargs are accessible at each time step.
This actually might be a net positive as it allows for a lot more flexibility and avoids the need for extra0 which always felt clunky to me.
Might be worth thinking if there are any scenarios where this could lead to trouble. I can't imagine there is any computational cost involved.
Maybe the closest thing would be that this makes it difficult to use the filter online.
We would write
function simulate(...; dts, kwargs...)
    dt = dts[step]
end
Which then expects us to be appending to a vector of dts as observations come in. Not hard to get around this by creating a "memoryless" vector. E.g.
struct MemorylessVector{T}
    x::T
    i::Int
end

Base.getindex(v::MemorylessVector, i::Int) = i == v.i ? v.x : error("...")
|
gharchive/pull-request
| 2024-09-25T14:58:45 |
2025-04-01T04:55:43.393407
|
{
"authors": [
"THargreaves",
"charlesknipp"
],
"repo": "TuringLang/SSMProblems.jl",
"url": "https://github.com/TuringLang/SSMProblems.jl/pull/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
871668688
|
PRADO REQUEST
Describe what you want to be added
A new location? More customisation?
Prado Pokémon
(If your feature request is related to a problem, please create a Bug Report Issue instead.)
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered. What implementation options do we have?
Animal type: Golden retriever
Gender: Male
Personality: Nice, easy-going, adorable
Reasoning behind the request
Why should this be considered? Is it a mechanic from another Pokémon game? Would it make the game more playable?
BECAUSE, IT'S A PRADO!!!
Additional context
Add any other context or screenshots about the feature request here.
Thank you Johnny SD
Okay! This will be added in eventually. Thanks for the request!
|
gharchive/issue
| 2021-04-29T23:22:08 |
2025-04-01T04:55:43.400422
|
{
"authors": [
"Isabel-Lifu-211207-XPrado",
"TurnipGuy30"
],
"repo": "TurnipGuy30/Pokemon-PythonRed",
"url": "https://github.com/TurnipGuy30/Pokemon-PythonRed/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1335113731
|
False "Downs"
Sites that are not down are being marked as down. Upon investigation of the workflows causing this, it seems that the timeouts are being repeatedly reached occasionally for some of the sites, understandable due to Replit's free plan restrictions. In theory, increasing the timeout threshold to a number that reasonably allows the Repls to load should decrease the number of false reports.
Day 1, results of timeout modification look promising, down frequency and
length appear to have decreased by at least 1/4. Will continue to monitor.
|
gharchive/issue
| 2022-08-10T19:39:51 |
2025-04-01T04:55:43.401898
|
{
"authors": [
"TurtleCode84"
],
"repo": "TurtleCode84/status",
"url": "https://github.com/TurtleCode84/status/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1027175481
|
Add ctx.reply
Adding ctx.reply so the bot can reply in context to a command / message
closes #119
|
gharchive/pull-request
| 2021-10-15T07:51:32 |
2025-04-01T04:55:43.447537
|
{
"authors": [
"chillymosh"
],
"repo": "TwitchIO/TwitchIO",
"url": "https://github.com/TwitchIO/TwitchIO/pull/223",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
157067176
|
most-subject/dist/most-subject.js not working in browser
I used most-subject successfully in the past. Then I switched to xstream for a while. Now I am trying to use most-subject version 4.02 but it won't load when I try it in Chrome. I am bundling with Webpack. I moved the whole npm package into bower_components because I need to get node_modules out of the way when I compile the server. I started out with just the /dist/most-subject.js file in bower_components and get identical results to the ones shown here. This is a screen shot of the top of of the Chrome console log:
Here is line 9876 of the of the Webpack bundle:
And here is line 9832
The most relevant line seems to be line 7434. Here it is:
This is a screen shot of my index.html file:
Thanks for all of this, I'll try to look into it very soon!
Thanks. By the way, I have a function something like this:
function f(string) {
  var _this = this;
  this.id = string;
  this.stream = null; // some stream
  this.g = function (x) {
    // put x into _this.stream
  };
}
var instance = new f();
I did this with most-subject and xstream, but I could not find a way to do it with most. If you, or anyone reading this, happens to know how to use most in this way, I would much appreciate your telling me. most uses most-subject, so I keep thinking the technology is somewhere inside of most.
Cheers!
So I think the issue is right in my face: https://github.com/TylorS/most-subject/blob/master/dist/most-subject.js#L4
The usage of most.defaultScheduler is not something that is accessible via the most builds currently.
I might backtrack and use browserify or something else and do separate builds of for node/browser
UPDATE:
I don't know if this helps; but for what it is worth, I tried most version 18.8 and got this:
Hey @dschalk As soon as https://github.com/mostjs/prelude/pull/8 gets merged, I have a rewrite in TypeScript with a new build system that should fix this issue. Hopefully by the end of the night :)
Thanks. I am eager to switch back to most-subject.
The new version does not crash the browser or give any error messages. The new version's API is different from what I remember so I am experimenting. As I recall, subject() returned an object with a "stream" attribute, but that isn't the case now.
I am experimenting with the return value of mostSubject.subject in my Chrome console log. When I run s = mostSubject.subject(), I get an object with many methods, including "next" and "observe". So far, I have gotten no results and no error messages. Obviously I am missing something.
So, naturally, I clicked the link to https://tylors.github.io/most-subject/doc, but the page is not there. Chrome displays the 404 "File not found" page.
The APi has channged with v4 - thanks for pointing out that there are issues with the docs as well. I'l ltry to get that fixed as well.
In the mean time, the API is as follows
- const {observer, stream} = subject()
+ const stream = subject()
stream.observe(x => console.log(x))
- observer.next(1)
+ stream.next(1)
- observer.error(new Error('error'))
+ stream.error(new Error('error'))
- observer.complete()
+ stream.complete()
http://codepen.io/TylorS/pen/rLNgrZ?editors=0010
Thanks again. The example is just what I needed.
:+1: I'll close this then. Feel free to reopen if its needed
I think the issue is resolved, so I am not trying to reopen it. I am writing this to say that I don't think "then" is working in the example you gave me. When I run the code shown below, the output remains the same: three results, no result repeated.
stream.observe(x => console.log(x))
.then(x => console.log(x))
.then(x => console.log(x))
.then(x => console.log(x))
.then(x => console.log(x))
.then(x => console.log(x))
.catch(e => console.error(e))
stream.next(1)
stream.next(2)
stream.next(3)
stream.complete('complete')
Browser console:
1
2
3
Strange indeed, I'm not getting the same behavior here :/
Does it print each number twice? It only prints the numbers once for me. I don't need next right now, but I thought I should tell you what I got when I ran the code. I expected it to log a number once when the promise resolved and again on the call to "next".
I made the transition from xstream back to most-subject. My app is running flawlessly (as far as I can tell, anyway) at http://schalk.net:3055.
They print just once for me as well. then() is only called on complete() which is the same behavior the most has itself.
I'm curious, are you using most-subject for the ability to imperatively call complete()?
No. Here is the story:
My app runs online at http://schalk.net:3055. The repo is at https://github.com/dschalk/JS-monads-stable. In the repo at /client/monad.js there is this:
var MonadStream = function MonadStream(g) {
var _this = this;
this.id = g;
this.stream = mostSubject.subject()
this.ret = function (a) {
_this.stream.next(a);
console.log('in ' + _this.id + ' emmitting ' + a);
return _this;
};
};
var mM$1 = new MonadStream('mM$1');
And one of its uses is (in /client/main line 626) is this
const mM$1Action$ = mM$1.stream.observe(v => {
console.log('In mM$1Action$. v is: ', v)
O.mMindex2.bnd(inc, mMindex2);
O.mMallRolls.bnd(spliceAdd, O.mMindex2.x, v, mMallRolls);
mMcurrentRoll.ret(v);
document.getElementById('0').innerHTML = (O.mMallRolls.x[O.mMindex2.x])[0];
document.getElementById('1').innerHTML = (O.mMallRolls.x[O.mMindex2.x])[1];
document.getElementById('2').innerHTML = (O.mMallRolls.x[O.mMindex2.x])[2];
document.getElementById('3').innerHTML = (O.mMallRolls.x[O.mMindex2.x])[3];
cleanup(7)
})
I tried to define MonadStream using only most and was unable to come up with an algorithm that worked. I think I could define MonadStream with RxJS and Bacon, and I have successfully done so with xstream and most-subject. Now that the state of my monads is in the mutable global object "O", the slickest solution might be to use ordinary monads defined by Monad (in /client/monads.js at line 49) and the ordinary "ret()" method, which mutates "O" and automatically runs every expression observing it. There's a spreadsheet-like calculator in the online app at http://schalk.net:3055, demonstrating that this works.
I keep thinking that there must be a way to define MonadStream with most. That's why I asked if anybody knows how to do it in the off-topic comment above. The tools don't seem to be in the most API, so I think I would need to study and understand the most code. That would be a project for me.
I am awe-struck by Motorcycle.js, My monad experiments thrive in Motorcycle. I am using most-subject for streaming mostly because I feel so comfortable in the environment that you and your collaborators have created.
|
gharchive/issue
| 2016-05-26T20:05:00 |
2025-04-01T04:55:43.490063
|
{
"authors": [
"TylorS",
"dschalk"
],
"repo": "TylorS/most-subject",
"url": "https://github.com/TylorS/most-subject/issues/16",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
374686212
|
Search returning no results when [include] or [exclude] set on
For example, using the code below gives a blank search results area. Should be able to search within 'people' category. Works as expected when [include] is removed.
<emoji-mart set="apple" [title]="''" [emojiSize]="16" [emoji]="''"
[totalFrequentLines]="1" [perLine]="8" (emojiClick)="emojiClick($event)" [include]="['people']">
</emoji-mart>
Any fix on this yet?
I probably won't be able to get to this, please open a PR
I'll try to take a look at it ... might also be related to #182
:tada: This issue has been resolved in version 1.0.5 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
@scttcper Thanks for the new release. However, is there a fix for those of us who aren't on Angular 8 yet...?
|
gharchive/issue
| 2018-10-27T20:34:08 |
2025-04-01T04:55:43.496245
|
{
"authors": [
"mlembke1",
"s-moran",
"scttcper",
"zoualmamy"
],
"repo": "TypeCtrl/ngx-emoji-mart",
"url": "https://github.com/TypeCtrl/ngx-emoji-mart/issues/172",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
487017160
|
feat: add devcontainer support
I use a Windows machine; the test pack for the fork-ts-checker-webpack-plugin has never entirely worked with Windows. This is fine; I just switch to WSL and go with it.
This PR adds in support for VS Code's dev containers; see: https://code.visualstudio.com/docs/remote/containers#_getting-started
This helps me and may help other Windows users that would like to contribute.
When started from linux, this seems to have permission problems - vscode tries to create files in /root, but it runs as user "node", so it doesn't have correct permissions to write there.
Run: docker exec 95209b7c27b90a0623404756729da4953c6ef6c63aedeba1399829ba49629277 test -d /root/.vscode-server/bin/c7d83e57cd18f18026a8162d042843bda1bcf21f
Installing VS Code Server for commit c7d83e57cd18f18026a8162d042843bda1bcf21f
Run: docker exec 95209b7c27b90a0623404756729da4953c6ef6c63aedeba1399829ba49629277 mkdir -p /root/.vscode-server/bin/c7d83e57cd18f18026a8162d042843bda1bcf21f_1567458922703
mkdir: cannot create directory '/root': Permission denied
Command failed: docker exec 95209b7c27b90a0623404756729da4953c6ef6c63aedeba1399829ba49629277 mkdir -p /root/.vscode-server/bin/c7d83e57cd18f18026a8162d042843bda1bcf21f_1567458922703
Adding a && chmod o-rwX /root \ somewhere in the Dockerfile seems to work, but that's more of a dirty workaround.
Unfortunately, I haven't tried devcontainers yet, so I don't really have experience with that. But I guess this is a common problem with all devcontainers based on the node container, so we can't be the first with that problem.
That's interesting - I pretty much lifted and shifted the Microsoft example. I think it may have been this one: https://github.com/microsoft/vscode-dev-containers/tree/master/containers/javascript-node-lts-mongo/.devcontainer
Let me double check if I made any significant tweaks...
Unfortunately my first steps into Linux land have resulted in a bricked laptop 😄
@OB6160 will reattempt in a month or so; just need to fit a new hard drive and we've a hackathon to prepare for first.
When I'm up and running with Ubuntu I'll take this for a whirl there. Works great on Windows already though 😁
When I get a moment I'll give this a go on my Linux machine & look into a fix
Unfortunately my first steps into Linux land have resulted in a bricked laptop smile
What did you DO? :D
@ob6160 only do it if you're totally bored - this can wait! (And it's not your fault I'm blocked; it's my dodgy machine 😄)
What did you DO? :D
Well my XPS had been behaving oddly for a while. It was that that prompted me to get a new one. When @ob6160 and I started attempting to repave the old one with Ubuntu we discovered that the hard drive was trashed midway through reformatting it. C'est la vie 😁
I now have a new hard drive which we'll use to resurrect the machine next month. I'll get there... It's just a matter of time ⌚
@phryneas this patch seems to do the trick on my linux box https://gist.github.com/ob6160/525eb3c6684e5d7104e676c410495a70
Maybe give a rebuild of your devcontainer a go with that change? Not sure why the runArgs didn't work though.
I've patched - is that right @ob6160 ?
I think this is now working thanks to @ob6160 help ❤️
I want to get a release out there as I've realised that @pelotom's https://github.com/TypeStrong/fork-ts-checker-webpack-plugin/pull/345 didn't trigger a release due to the commit format. I'm planning to merge this as (AFAIK) it all works and it should be a non-invasive change. There's nothing in here that can break existing users and it's certainly making it easier for me to work on the plugin 😄
Unless anyone objects I'm going to merge this PR when the CI goes green later this morning. This will trigger a release and fix the current live issue that exists with 1.5.1 - see @johnbouma's issue here: https://github.com/TypeStrong/fork-ts-checker-webpack-plugin/issues/349
Hopefully this should trigger a release shortly. Travis has been a little flaky for the last couple of days though - so will keep an eye on it
:tada: This PR is included in version 1.6.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2019-08-29T15:07:27 |
2025-04-01T04:55:43.507872
|
{
"authors": [
"johnnyreilly",
"ob6160",
"phryneas",
"piotr-oles"
],
"repo": "TypeStrong/fork-ts-checker-webpack-plugin",
"url": "https://github.com/TypeStrong/fork-ts-checker-webpack-plugin/pull/334",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1422124129
|
Methods marked with @internal show up in the docs for child classes if redeclared
Expected Behavior
Interpreting the JSDoc documentation where it states
By default, if you do not add a JSDoc comment to a symbol, the symbol will inherit documentation from its parent.
I would expect the @internal annotation to also affect methods of inherited implementations, even if I'm redeclaring them in a child class.
Actual Behavior
Typedoc generates documentation for a method that is marked @internal in a parent class and redeclared within a child class, even if there are no additional JSDoc comments on the declaration within the child class
Steps to reproduce the bug
"typedoc": "0.23.15"
class Foo {
/**
@internal
*/
myInternalMethod(){}
}
class Bar extends Foo {
myInternalMethod(){} // I would expect this to still be stripped from documentation when `excludeInternal` is set to true
}
{
"compilerOptions": {
"target": "ES2019" /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019', 'ES2020', or 'ESNEXT'. */,
"module": "ESNext" /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', 'es2020', or 'ESNext'. */,
"outDir": "dist",
"declaration": true,
"declarationMap": true,
"sourceMap": true,
"strict": true /* Enable all strict type-checking options. */,
"esModuleInterop": true /* Enables emit interoperability between CommonJS and ES Modules via creation of namespace objects for all imports. Implies 'allowSyntheticDefaultImports'. */,
"skipLibCheck": true /* Skip type checking of declaration files. */,
"noUnusedLocals": true,
"forceConsistentCasingInFileNames": true /* Disallow inconsistently-cased references to the same file. */,
"moduleResolution": "node",
"resolveJsonModule": true,
"importsNotUsedAsValues": "error"
},
"exclude": ["dist", "**/*.test.ts"],
"include": ["src/**/*"],
"typedocOptions": {
"entryPoints": ["src/index.ts"],
"excludeInternal": true,
"excludePrivate": true,
"excludeProtected": true,
"excludeExternals": true,
"includeVersion": true,
"out": "docs",
"theme": "default"
}
}
yarn typedoc
Environment
Typedoc version: 0.23.15
TypeScript version: 4.8.4
Node.js version: v16.15.0
OS: MacOS Ventura
Hmmmm... this is because TypeDoc does removal before all of the logic to copy comments around. This is an unfortunate inconsistency, will be rather annoying to fix.
Worth noting that TypeDoc intentionally does not follow JSDoc's behavior in several places, so using JSDoc's site to guess what TypeDoc will do isn't always safe.
Corollary:
class Foo { /** @hidden */ method() {} }
class Bar { /** {@inheritDoc Foo.method} */ baz() {} }
Should Bar.baz be hidden? This feels like a likely mistake to me...
Current process... rearranging this is going to be tricky to do without breaking things. Probably going to take at least a day dedicated to just this at some point...
During conversion:
Handle visibility flags (@private, @protected, @public)
Handle module renames (@module)
Remove excluded tags & comment discovery tags (@module, @packageDocumentation)
Copy comments for type parameters from the parent container
Resolve begin:
Remove hidden reflections
Resolve:
Apply @label tag
Copy comments on signature containers to the signature if signatures don't already have a comment
and then remove the comment on the container.
Copy comments from signatures to parameters and type parameters (again? why?)
Apply @group and @category tags
Resolve end:
Copy auto inherited comments from heritage clauses
Handle @inheritDoc
Resolve @link tags to point to target reflections
|
gharchive/issue
| 2022-10-25T09:06:39 |
2025-04-01T04:55:43.522775
|
{
"authors": [
"Gerrit0",
"lukasIO"
],
"repo": "TypeStrong/typedoc",
"url": "https://github.com/TypeStrong/typedoc/issues/2084",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1196181311
|
Display flags next to class name
Reflections in TypeDoc can be assigned flags, describing certain
properties of the reflection, e.g. abstract, private, readonly etc.
These flags were rendered using badges for most type of reflections, but
not for reflections which are displayed on their own page (like classes,
interfaces etc.). This made it difficult to establish e.g. whether a
class is abstract (see #1874).
This PR fixes the issue by rendering flags in the page titles next to
the name of the documented entity.
Styling of the badges has been amended, to account for them now showing
next to much bigger headings. The styling was inspired by Bootstrap
badges.
Partially resolves #1874.
Preview of the changes (for abstract classes):
I like it, thanks!
|
gharchive/pull-request
| 2022-04-07T15:03:35 |
2025-04-01T04:55:43.526626
|
{
"authors": [
"Gerrit0",
"ejuda"
],
"repo": "TypeStrong/typedoc",
"url": "https://github.com/TypeStrong/typedoc/pull/1914",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2630099361
|
Age restricted videos can't be downloaded
Version
v1.13.3
Platform
Windows 11
Steps to reproduce
Step 1: find an age restricted video (One that requires login)
Step 2: attempt to put it into the Youtube Downloader
Step 3: Try to download which will cause the error to pop up
Details
The error message says that it can't be downloaded due to "sign in"
Checklist
[X] I have looked through existing issues to make sure that this bug has not been reported before
[X] I have provided a descriptive title for this issue
[X] I have made sure that this bug is reproducible on the latest version of the application
[X] I have provided all the information needed to reproduce this bug as efficiently as possible
[ ] I have sponsored this project
[ ] I have not read any of the above and just checked all the boxes to submit the issue
There are many types of age-restricted videos and some are definitely downloaded. Which one are you having issues with?
The one I am having issues with is this specific video.
https://youtu.be/n2tqE-b6lH4?si=MC5ifXAnePRqtGal
Also having the same issue with music marked as age-restricted. The first error says to sign in. When I sign in I get the following error (attached file). Thanks!
Sorry it's still not working for you.
On Fri, 8 Nov 2024, DrewzleX wrote: "Tried the newest CI build and got the same results."
yup same tried with multiple age restricted video ..nothing works
Just updated to 1.14 and still having the same issue. If I am logged out, the tool will tell me to authenticate. Once authenticated, I receive the same error with a similar message (different line numbers).
Same problem with any version.
|
gharchive/issue
| 2024-11-02T01:14:25 |
2025-04-01T04:55:43.548307
|
{
"authors": [
"Coolkatisa",
"DrewzleX",
"Heminoid",
"Tyrrrz",
"nestg"
],
"repo": "Tyrrrz/YoutubeDownloader",
"url": "https://github.com/Tyrrrz/YoutubeDownloader/issues/540",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1835990997
|
scroll direction
the shorts scroller only goes down but that's a problem when watching a shorts series that goes up
Hi, I will definitely soon add an option where you can change the scroll direction for use cases like yours + others.
I've just published an update with scroll direction, up and down
|
gharchive/issue
| 2023-08-04T03:05:08 |
2025-04-01T04:55:43.549690
|
{
"authors": [
"Tyson3101",
"daviono997"
],
"repo": "Tyson3101/Auto-Youtube-Shorts-Scroller",
"url": "https://github.com/Tyson3101/Auto-Youtube-Shorts-Scroller/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2291357233
|
🛑 Mirrors OPL is down
In 10fe40e, Mirrors OPL (https://mirrors.opl.uab.cat) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Mirrors OPL is back up in 813638b after 10 minutes.
|
gharchive/issue
| 2024-05-12T14:27:40 |
2025-04-01T04:55:43.552355
|
{
"authors": [
"JordiRoman"
],
"repo": "UAB-OPL/opl-uab-monitoring",
"url": "https://github.com/UAB-OPL/opl-uab-monitoring/issues/1623",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
417785082
|
Add PAGE -> hOCR conversion by @mhug
@mhug has created a little XSLT to convert PAGE to hOCR.
@wrznr @zuphilip @stweil
Travis did not run through because Saxon could not been downloaded (?):
wget --progress=bar:force --no-verbose -O "SaxonHE9-8-0-1J.zip" "https://sourceforge.net/projects/saxon/files/Saxon-HE/9.8/SaxonHE9-8-0-1J.zip/download"
No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#Build-times-out-because-no-output-was-received
The build has been terminated
The direct link to a local mirror such as https://netcologne.dl.sourceforge.net/project/saxon/Saxon-HE/9.8/SaxonHE9-8-0-1J.zip probably works better.
Okay, I changed the URL as @stweil suggested and Travis is now happy again. I will merge this now. Thank you all for the work!
|
gharchive/pull-request
| 2019-03-06T12:44:14 |
2025-04-01T04:55:43.556238
|
{
"authors": [
"kba",
"stweil",
"zuphilip"
],
"repo": "UB-Mannheim/ocr-fileformat",
"url": "https://github.com/UB-Mannheim/ocr-fileformat/pull/86",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1139346052
|
Section 2: Description of the data
You are allowed to select any dataset you want for this project, as long as you have the license to use it publicly. Warning: finding a good data set can take a lot of time and effort. We therefore recommend that you select one that you have worked with in a previous lab in MDS and that you are already familiar with (for example the Gapminder, movie, or the dataset you use for your project 522).
Task completed.
|
gharchive/issue
| 2022-02-16T00:16:03 |
2025-04-01T04:55:43.559818
|
{
"authors": [
"Luming-ubc"
],
"repo": "UBC-MDS/Olympic_athletes_dashboard",
"url": "https://github.com/UBC-MDS/Olympic_athletes_dashboard/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1206265981
|
Milestone 2 Feedback
Group 21
Set up EC2 instance: 20/20 {correctness: 20}
Set up JupyterHub: 20/20 {correctness: 20}
Setup server: 20/20 {correctness: 20}
N/A
Setup S3 bucket and move data: 20/20 {correctness: 20}
Wrangle data in preparation for ML: 20/20 {correctness: 20}
100/100
Great job guys!
Thank you for the feedback.
|
gharchive/issue
| 2022-04-17T02:57:33 |
2025-04-01T04:55:43.567589
|
{
"authors": [
"liannah",
"nafi007"
],
"repo": "UBC-MDS/daily_rainfall_group_21",
"url": "https://github.com/UBC-MDS/daily_rainfall_group_21/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1332120328
|
changed euler to NavierStokesTransport
this closes #313
Looks good!
|
gharchive/pull-request
| 2022-08-08T16:37:24 |
2025-04-01T04:55:43.583308
|
{
"authors": [
"Rozie100",
"mmcgurn"
],
"repo": "UBCHREST/ablate",
"url": "https://github.com/UBCHREST/ablate/pull/314",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
906689717
|
NAV-ISS-20 Rockblock Ring Alerts - Poll for SBDRING
Right now we successfully check the ring alert status with 'AT-CRISX',
but this just tells us the most recent Type of alert that was received (telephony vs SBD)
not if there is a pending ring alert.
We actually need to poll for the SBDRING announcement sent from the rockblock.
However, the announcement is only sent when the rockblock is in 'command mode';
I think this requires us to be sending an AT command at the time.
I think we have to send an SBDRB command prior to polling for SBDRING,
because the announcement is sent to the last known location of the rockblock
https://docs.rockblock.rock7.com/docs/faqs
Potential solution is to create another function that sends a simple AT command
and just waits until the SBDRING announcement is received.
Another solution is to make use of the 'timestamp', which is a hex value representing
the Iridium system time (the number of 90 ms frames since the last epoch). But based on experimentation,
this value doesn't seem to change, at least within a 5-second polling interval.
https://www.rock7.com/downloads/ATC_Iridium_ISU_AT_Command_Reference_MAN0009_v5.pdf
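As a sketch of the detection half of that polling approach: this is not the project's code, just a minimal stdlib-only C++ illustration, and the function name is made up. It scans whatever was read from the serial port for an unsolicited SBDRING announcement, comparing whole lines so that an echoed command or payload containing the substring does not count.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Scan a chunk of serial input for the unsolicited "SBDRING" announcement.
// The modem emits it on a line of its own, so we split on line endings and
// compare whole lines rather than doing a raw substring search (which could
// false-positive on echoed commands or message payload bytes).
bool contains_sbdring(const std::string& serial_buffer) {
    std::istringstream lines(serial_buffer);
    std::string line;
    while (std::getline(lines, line)) {
        // Trim a trailing carriage return left over from "\r\n" endings.
        if (!line.empty() && line.back() == '\r') {
            line.pop_back();
        }
        if (line == "SBDRING") {
            return true;
        }
    }
    return false;
}
```

A polling loop built on this would send a harmless AT command, read the response bytes, and call this on the accumulated buffer each cycle.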
For Oct launch (and likely any upcoming launches in 2021) we will not be utilizing the SBDRING feature.
This would require a major refactoring of our bbb_satellite_listener code. In the interest of time, this may or may not be revisited in the future.
|
gharchive/issue
| 2021-05-30T09:08:09 |
2025-04-01T04:55:43.587264
|
{
"authors": [
"briellelaw"
],
"repo": "UBCSailbot/network-table",
"url": "https://github.com/UBCSailbot/network-table/issues/88",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1905322610
|
Config.Sample Note
During the unity catalog deployment it appears that we get inconsistencies if the zone names have capital letters in them. This updates the sample to recommend lower case names.
/test
:robot: pr-bot :robot:
:runner: Running tests: https://github.com/UCLH-Foundry/FlowEHR/actions/runs/6251285334 (with refid 6a654d83)
(in response to this comment from @damoodamoo)
/test-force-approve
:robot: pr-bot :robot:
:white_check_mark: Marking tests as complete (for commit e2ac84923fd51758b7b8302a293b54e45445fb03)
(in response to this comment from @jjgriff93)
|
gharchive/pull-request
| 2023-09-20T16:12:15 |
2025-04-01T04:55:43.598054
|
{
"authors": [
"damoodamoo",
"jjgriff93"
],
"repo": "UCLH-Foundry/FlowEHR",
"url": "https://github.com/UCLH-Foundry/FlowEHR/pull/330",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1845287835
|
Adjust the synthetic dataset generation
Simplify the synthetic data to elements we can actually play.
Also implement beats per bar handling
In addition, last time I checked, the "/repeat" was still written without the "/", which causes issues in the lilypond parser. Did you fix that already?
What is the progress on this?
|
gharchive/issue
| 2023-08-10T14:18:24 |
2025-04-01T04:55:43.634051
|
{
"authors": [
"Flova",
"ateRstones",
"scaredycode"
],
"repo": "UHHRobotics22-23/marimbabot",
"url": "https://github.com/UHHRobotics22-23/marimbabot/issues/211",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1854832229
|
inconsistency in vision node
Hand-written tempo and loudness values (tempo=120, fff, pp) have considerable consistency.
Recognition of magnetic notes is good but has a little inconsistency.
related to
https://github.com/UHHRobotics22-23/marimbabot/issues/43
https://github.com/UHHRobotics22-23/marimbabot/issues/66
As they are in the beginning of the sequence we can also collect data only featuring tempo information, but no notes. This should be way faster to collect.
The same procedure is possible for keys etc
What is the status of major/minor keys + bpm collection from last week @cvhex ?
Can we close this @Flova ?
|
gharchive/issue
| 2023-08-17T11:40:21 |
2025-04-01T04:55:43.637662
|
{
"authors": [
"Flova",
"Juphex",
"berkgungor"
],
"repo": "UHHRobotics22-23/marimbabot",
"url": "https://github.com/UHHRobotics22-23/marimbabot/issues/234",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1874588638
|
check if it is a github repo
Pull request to see on railway.
Add mime type == text (or whatever the syntax is), and ingest that with the ingest_single_txt function (ensure no changes are needed there).
Thanks! That should support ALLLLL text files.
This should be ready to merge now!
|
gharchive/pull-request
| 2023-08-31T01:20:21 |
2025-04-01T04:55:43.643735
|
{
"authors": [
"KastanDay",
"jkmin3"
],
"repo": "UIUC-Chatbot/ai-ta-backend",
"url": "https://github.com/UIUC-Chatbot/ai-ta-backend/pull/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
432923133
|
Add 'ofi' target to autobuild
Original issue: https://charm.cs.illinois.edu/redmine/issues/1674
No body.
Original date: 2017-09-13 19:42:57
ofi builds could be added on the same target machine (campus cluster) as verbs, so assigning to Jaemin.
Original date: 2017-09-13 19:46:25
Bridges at PSC would be an ideal candidate for autobuild, if we can set it up to run without 2-factor authentication.
Original author: Jim Phillips (@jcphill)
Original date: 2017-09-14 03:31:28
Bridges doesn't use 2-factor authentication.
Original date: 2017-11-01 14:22:18
Is there a problem with ofi on golub? This needs to get done
Original date: 2017-11-06 14:57:59
Bump. Any reason to not add this on golub?
Original date: 2017-11-06 17:57:10
I've been working on getting it running on golub, but there is an issue with using ++mpiexec, where if you set ppn in qsub larger than the total number of PEs used in a test program it uses only a single physical node. This is problematic with SMP buidls, and it affects the verbs autobuilds as well. I've found a workaround by setting ppn to 1 in qsub and create a nodelist out of PBS_NODEFILE and using that, so I'll go forward with it.
Original date: 2017-11-08 20:48:10
We want this to be on Bridges since that has an Omni-Path interconnect and we know OFI works there. We need to ensure we have enough allocation to run autobuild on it.
Original date: 2017-12-11 23:21:29
Bump, this needs to get done
Original date: 2017-12-13 21:41:19
I don't have allocation anymore on Bridges, but Karthik does.
We'll try using his account to set up autobuild there.
Original date: 2018-01-10 20:46:39
Autobuild for OFI is not passing yet. I don't even think it has actually gotten to building charm yet. Here's last night's run's failure:
./instead_test.sh: line 15: cd: charm/ofi-linux-x86_64/tmp: No such file or directory
Original date: 2018-01-18 00:33:49
Last night autobuild was able to log onto Bridges successfully but failed to unzip charm:
remote> gunzip -f charm.tar.gz
gzip: charm.tar.gz: No such file or directory
Original date: 2018-01-30 18:29:06
The build works, but then the jobs are pretty consistently timing out for whatever reason now:
In testdir charm/ofi-linux-x86_64/tmp
Submitting batch job for> make test OPTS=
using the command> sbatch /home/skk3/autobuild/ofi/charmrun_script.31865.sh
Job enqueued under job ID 2272723
Job in state
Job in state RUNNING
Job in state RUNNING
Job in state RUNNING
...
...
Job in state TIMEOUT
Job in state TIMEOUT
Job in state TIMEOUT
...
...
Original date: 2018-01-31 15:19:52
Once we get the non-SMP build running, we'll want to add a second target that is SMP
Original date: 2018-01-31 20:25:43
The issue of "There seems to be an issue with the OFI build that +p1 passed to an application is regarded as argv[1], and the pingpong benchmark (tests/charm++/pingpong) with ./pgm +p1 hangs as it tries to use +p1 as the payload which is ultimately set to 0.", which I thought was resolved by ~~https://charm.cs.illinois.edu/gerrit/#/c/3452/~~ https://github.com/UIUC-PPL/charm/commit/6ec0f6de23ef5cf040221a8511ee90173cf634f9, seems to have resurfaced.
Original date: 2018-01-31 21:16:18
Actually the problem this time doesn't seem to be caused from +p1; the command that causes the hang is ../../../bin/testrun ./pgm +p1 ++timeout 180 +isomalloc_sync, and the ++timeout 180 part is the culprit (so removing this works). But I think this problem happens whenever something not parsable is passed, because even +timeout 180 causes the same hang. And the same thing if I use charmrun instead of ../../../bin/testrun.
Original date: 2018-02-02 14:11:47
We still need +isomalloc_sync. The tests ran last night but failed in an AMPI test that needs that flag.
Original date: 2018-02-03 16:58:33
I added +isomalloc_sync and it passed last night. ALl that is need now is to add an SMP target.
Original date: 2018-02-03 17:56:01
Added SMP target to system_list and created ofi-smp folder along with instead_test.sh on Bridges.
Original date: 2018-02-23 14:35:57
ofi non-SMP and SMP passed yesterday. The SMP build seems to oftenhang, so that should still be monitored and addressed, but we can mark this resolved now.
|
gharchive/issue
| 2017-09-11T21:36:05 |
2025-04-01T04:55:43.662488
|
{
"authors": [
"minitu",
"pplimport",
"stwhite91"
],
"repo": "UIUC-PPL/charm",
"url": "https://github.com/UIUC-PPL/charm/issues/1674",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
382691145
|
MSPSDS-537: Filter by status
Probably not really related to status, but it's the ticket this bug report showed up.
Pagination was broken when exporting to xlsx. This makes sure that 'the exported list is exactly as visible list', which might not be the perfectly intuitive approach, but is what is written on several tickets.
Pull Request Test Coverage Report for Build 1331
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 95.649%
Totals
Change from base Build 1326:
0.0%
Covered Lines:
2506
Relevant Lines:
2620
💛 - Coveralls
|
gharchive/pull-request
| 2018-11-20T14:34:42 |
2025-04-01T04:55:43.668031
|
{
"authors": [
"coveralls",
"kicferk1"
],
"repo": "UKGovernmentBEIS/beis-mspsds",
"url": "https://github.com/UKGovernmentBEIS/beis-mspsds/pull/286",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1295779260
|
51101 Cache FSS response for weekly batches
This PR contains changes related to:
1. Dev changes: added code changes and unit test cases.
2. DevOps changes:
i. add new cache storage account
ii. enable network rules for cache storage account
iii. add storage account related secrets in Key Vault
@nevillejrbrown can you please revisit this PR to see if you are happy that the changes you requested have been done?
|
gharchive/pull-request
| 2022-07-06T12:37:25 |
2025-04-01T04:55:43.669801
|
{
"authors": [
"barrie-cooper",
"tejasi15072"
],
"repo": "UKHO/Maritime-Safety-Information",
"url": "https://github.com/UKHO/Maritime-Safety-Information/pull/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
163167508
|
IBM-255: Consitency & Style of Expected In & Expected Out & Expected outgoing
https://jira.digital.homeoffice.gov.uk/browse/IBM-255
closing in line with wallboard pr
|
gharchive/pull-request
| 2016-06-30T13:37:44 |
2025-04-01T04:55:43.687323
|
{
"authors": [
"chrisns",
"cksanders"
],
"repo": "UKHomeOffice/removals_e2etests",
"url": "https://github.com/UKHomeOffice/removals_e2etests/pull/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
464642153
|
Create ca_chain file when using fmt=cert
It would be useful when using the PKI backend if fmt=cert would also pull out the ca_chain as well as the existing crt, ca and key files.
Had a look into this and as @jprelph says the ca_chain values are not written out. In our case we have two certs in this chain and we only get the root CA written out to file.
I have had a first stab at fixing this on my fork:
https://github.com/james-bjss/vault-sidekick/commit/235c5d3ff38add2b98aa80d410d4f09d0399f8dd
Some questions about the solution
Should the ability to pull out the whole chain be optional?
Should this be implemented as a new fmt or an additional flag or just done by default?
Is it acceptable to write out the whole chain into a single file?
What should the filename(s) be?
Other notes:
I added a newline char between the certs when writing them to a single file. Should I use a localised/OS-specific carriage return?
The code assumes that ca_cert is always an interface slice, the only other way I could thinking of was using reflection, but this is usually frowned upon. I looked at the vault code and it seems like this should always be a slice.
If you are happy with the above I can look at tests/incorporate feedback and raise a PR.
|
gharchive/issue
| 2019-07-05T13:30:17 |
2025-04-01T04:55:43.690826
|
{
"authors": [
"james-bjss",
"jprelph"
],
"repo": "UKHomeOffice/vault-sidekick",
"url": "https://github.com/UKHomeOffice/vault-sidekick/issues/91",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2135772573
|
Locator API: Wrong response definition (DropLocation)?
Hello there,
we are currently migrating to the new REST API.
I think there is a mistake regarding the DropLocation within the LocatorRequestResponse:
https://github.com/UPS-API/api-documentation/blob/main/Locator.yaml#L1267
https://github.com/UPS-API/api-documentation/blob/main/Locator.yaml#L1362
Previously, there was a list of DropLocations in the old XML API. The API documentation states there is only one DropLocation entity, which leads to wrong results for swagger-codegen-based model structures, as far as I interpret the deserialization.
Hi, thank you for your comment and patience. We have updated the specs to resolve this issue.
|
gharchive/issue
| 2024-02-15T06:49:18 |
2025-04-01T04:55:43.733147
|
{
"authors": [
"UPSRahul",
"florianschieder"
],
"repo": "UPS-API/api-documentation",
"url": "https://github.com/UPS-API/api-documentation/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
690251495
|
Styling Monitoring Plan Viewer
[ ] Apply USWDS standards to MP plan viewer
Need to include @aprematta for 508 considerations here
No longer needed at this time
|
gharchive/issue
| 2020-09-01T15:45:02 |
2025-04-01T04:55:43.735293
|
{
"authors": [
"davidkwartler"
],
"repo": "US-EPA-CAMD/PI-1",
"url": "https://github.com/US-EPA-CAMD/PI-1/issues/129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1209716639
|
Data Table Coding is Used in Layout Table
Functionality:
Evaluation Report
Issue:
Layout tables contain data table markup such as TH tags, ID, scope attributes, or Summary. This issue is a violation of Section 508 and WCAG 2.0 Success Criterion 1.3.1.
Facility ID (ORISPL):
Table has role of presentation but also TH cells. Change the TH cells to TD
Suggested Resolution:
Remove any table markup that should only be used in data tables.
Testing complete, issue has been fixed
|
gharchive/issue
| 2022-04-20T13:46:59 |
2025-04-01T04:55:43.738107
|
{
"authors": [
"aprematta",
"vishnunavuluri"
],
"repo": "US-EPA-CAMD/easey-ui",
"url": "https://github.com/US-EPA-CAMD/easey-ui/issues/3416",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
961339789
|
Medium Wave Damage Curves improperly utilized
https://github.com/USACE/go-consequences/blob/64ddb7aac89f8cd85a2970e6612be90a9af5b092/structures/occupancytypes.go#L59
This and the High wave damage functions are improperly sampled. Because of the way we are using them here we will never actually get the benefit of these curves as far as I can tell, we will always get the central value.
the function is assigned to an OccupancyTypeDeterministic, so that is not the problem here.
|
gharchive/issue
| 2021-08-05T02:02:34 |
2025-04-01T04:55:43.739848
|
{
"authors": [
"HenryGeorgist"
],
"repo": "USACE/go-consequences",
"url": "https://github.com/USACE/go-consequences/issues/63",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
366963418
|
Adding new data/members/variables to save/load functions with backward compatibility?
Consider the following:
I have a pair of save / load functions that describe how a C++ structure must be serialized and deserialized. I have an existing library of files containing serialized data describing many instances of this structure that I would like to preserve.
I would like to add a new member or data type to the save / load functions, while also preserving the ability to deserialize the older files in the existing library that lack this new data structure.
How would I go about doing this to ensure backward compatibility? Will this work so long as I always add the new members/data type to the end of the save/load functions? Or is this impossible?
Let me preface this by saying that cereal was not designed with this use case as its primary motivation, so it takes some extra work to accomplish this. Other serialization frameworks that use an explicit serialization specification (file) tend to be more flexible in this regard.
One other warning is that I'm writing this without testing any of the solutions, but they should be roughly accurate.
If your data is stored in a format that is opaque (i.e., binary), it will not be possible to add or remove fields and have the existing serialization code load the data properly. Text based data can search (we call this out of order loading) for the field and get around this restriction.
The only way around this for binary files would be for the types (version X vs version X+1) to be distinct and use the appropriate serialization code to do loading and storing. You'd have to maintain some kind of mock version of your class that was just a thin wrapper around the serialized code, then determine at runtime what kind to use internally.
A more principled way to do this in cereal would be to use versioning (see the docs), where you would supply a version with each change to your serialized data. This would give you a way, at runtime, to decide how to load or save the data, by looking at the version number. With this scheme you would not need to preserve ordering either, so long as your serialization code for each version is correct, it will load the data.
Unfortunately, code that was not versioned cannot be made versioned without doing an explicit conversion, because the serialized representations are slightly different. You would need to perform a one-time conversion, using a program that loads the old data and re-saves it in the versioned format.
It will not be possible in a single pass to load binary into a versioned function, but it can be done by loading from binary, then writing out to JSON and saving that. Then migrate your code to use the version parameter, comment out the portion of cereal that throws if the version key is missing from the JSON, then save again.
|
gharchive/issue
| 2018-10-04T20:46:18 |
2025-04-01T04:55:43.755620
|
{
"authors": [
"AzothAmmo",
"M2tM",
"scottmudge"
],
"repo": "USCiLab/cereal",
"url": "https://github.com/USCiLab/cereal/issues/525",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
158331802
|
only one stream data point for 20160531
Didn't see any errors when I ran the code... but I'm on a bad internet connection, not sure if that may have impacted it. @jread-usgs
That's no good. I will take a look.
Must have been a fluke?
PR coming
|
gharchive/issue
| 2016-06-03T09:55:23 |
2025-04-01T04:55:43.815737
|
{
"authors": [
"eread-usgs",
"jread-usgs"
],
"repo": "USGS-CIDA/CIDA-Viz",
"url": "https://github.com/USGS-CIDA/CIDA-Viz/issues/436",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
180410960
|
Ability to choose axis scale
This is especially important when plotting multi-site (faceted) plots: the scales are often different, and that can be confusing or misleading. It would be nice to be able to have the same scale and/or edit the scales as desired.
I think this would be pretty difficult to do, probably too difficult. However, I will leave it open for now and think about it.
|
gharchive/issue
| 2016-09-30T21:58:46 |
2025-04-01T04:55:43.818936
|
{
"authors": [
"psolberg-usgs",
"tmills-usgs"
],
"repo": "USGS-R/WQ-Review",
"url": "https://github.com/USGS-R/WQ-Review/issues/121",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
}
|
118199192
|
remove lims from points() lists after passing through the set/extract
gs <- gsplot() %>%
points(y=1, x=2, xlim=c(0,3),ylim=c(0,3), col="blue", pch=18)
gs$view$window$ylim
[1] 0 3
gs$view$points$ylim
[1] 0 3
I know why this is the case, but we end up with ylim=c(0,3) in both gs$view$points and gs$view$window if the user specifies it.
The reason this happens (I think, if I remember correctly) is because we want to keep around the user-specified args so they don't get overwritten by the next addition of points or something else.
I would like to see all those types of args end up in window, because the things in points should really just be those that are part of the call to graphics::points in the rendering stage.
Perhaps window needs user.ylim and data.ylim to handle this type of thing.
Fixed w/ #290
gs <- gsplot() %>%
points(y=1, x=2, xlim=c(0,3),ylim=c(0,3), col="blue", pch=18)
gs$view.1.2$points
$x
[1] 2
$y
[1] 1
$col
[1] "blue"
$pch
[1] 18
|
gharchive/issue
| 2015-11-21T14:08:19 |
2025-04-01T04:55:43.821491
|
{
"authors": [
"jread-usgs"
],
"repo": "USGS-R/gsplot",
"url": "https://github.com/USGS-R/gsplot/issues/279",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
323772052
|
add size and hash of data file to ind file build files
example of a current build file:
version: 0.3.0
name: 3_forecast/out/preds_loadest.rds.ind
type: file
hash: 4c3e2454d173831585e8009d686ce9ba
time: 2018-05-16 18:24:03 UTC
depends:
forecast_df: 44ed1f468cefbc17eb2dd2a380ae8756
forecast_plan: 23bdef57ad5beb6a0deb5e67e1d6e500
lib/cfg/gd_config.yml: cf9c52868594c233f17a6f751a553018
fixed: b26a7577dedb6ce133d4fe20a172d331
code:
functions:
gather_forecasts: 9d2e514d082a7529a9853e926ef084b8
I end up reading these files in commits and PRs pretty often, so info relevant to humans should go here. And if I remember right, adding bonus information to these files wouldn't hurt the remake process, because we just wouldn't copy it over to remake's version of the status file.
We just need to be resilient to cases where there is no data file, or it's not local, that corresponds to the ind file. One possible implementation would be to (1) copy the contents of ind files, or at least the first 10 lines or something, straight into these build files, and then (2) add more info to the standard .ind files created by gd_put, sc_put, sc_indicate(data_file=xx), etc.
This is related to, and possibly redundant with, #49. At the very least, it'd probably be efficient for one developer to tackle both issues at once.
|
gharchive/issue
| 2018-05-16T20:21:17 |
2025-04-01T04:55:43.823482
|
{
"authors": [
"aappling-usgs"
],
"repo": "USGS-R/scipiper",
"url": "https://github.com/USGS-R/scipiper/issues/40",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
158771420
|
Minimize javascript and css for release builds
Have a way of doing non-debug and debug builds where javascript and css is or is not minimized accordingly.
Duplicated in #190; copied the above comment there.
|
gharchive/issue
| 2016-06-06T20:49:00 |
2025-04-01T04:55:43.824440
|
{
"authors": [
"aappling-usgs",
"jiwalker-usgs"
],
"repo": "USGS-VIZLAB/vizlab",
"url": "https://github.com/USGS-VIZLAB/vizlab/issues/13",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1388053177
|
Add PWA manifest
Overview
Closes #75
This PR adds a web manifest to the site.
What Changed
This PR:
adds a manifest.json file for browsers
adds icons at different scales
updates paths to icons in _app.tsx
Other Notes
I used this as my reference: https://web.dev/add-manifest/
Overall looks good! I'll review it more in-depth either today or tomorrow & leave more comments if needed.
|
gharchive/pull-request
| 2022-09-27T16:37:52 |
2025-04-01T04:55:43.845075
|
{
"authors": [
"ZzRanger",
"jasonappah"
],
"repo": "UTDNebula/planner",
"url": "https://github.com/UTDNebula/planner/pull/201",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
365900865
|
UWA MEG FISHERIES -
@BrookeGibbons can the top left title be expanded, maybe onto two lines, so it reads:
"UWA Marine Ecology Group - Fisheries"
@TimLanglois I've changed the title. It looks a bit weird, don't you think?
|
gharchive/issue
| 2018-10-02T13:35:53 |
2025-04-01T04:55:43.873888
|
{
"authors": [
"BrookeGibbons",
"TimLanglois"
],
"repo": "UWAMEGFisheries/UWAMEGFisheries.github.io",
"url": "https://github.com/UWAMEGFisheries/UWAMEGFisheries.github.io/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2619884628
|
Assignment 6.2 Quickstart
Adds a new quickstart page to the docs
Hi Alicia,
I'm unable to merge this PR because it has merge commits in it.
While merge commits integrate recent changes from the main branch into your PR (feature) branch, they do so in a way that changes the history (by combining the new commits into a single summary of those commits). The correct way to update your feature branch is to use the rebase and merge option.
If you need any help sorting this out, let me know.
what do I need to do?
I think the easiest path to take would be to:
In the GitHub website, Update your fork.
In the GitHub desktop, create a new feature branch from main.
Select your PR/feature branch.
Select the commit from your PR branch with your changes (Commit 9b5640a). Don't select any of the merge commits or any other commits.
Cherry-pick that commit from your PR branch (Commit 9b5640a) to your new feature branch.
Review the changes in your new feature branch. Build and test the docs that build from your new feature branch.
After everything looks good, open a new PR from the new feature branch.
Update this PR with a new comment that contains the link to your new PR.
Let me know at this point and I'll take it from there.
I tried this from my account and it seemed to work. Because you have only one commit with your changes and it doesn't have any conflicts, it should be able to be cherry-picked from the current branch to a new one without any problems.
You can do this.
Here's the link to the new PR: https://github.com/UWC2-APIDOC/to-do-service-au24/pull/50
|
gharchive/pull-request
| 2024-10-29T01:12:54 |
2025-04-01T04:55:43.881481
|
{
"authors": [
"alkreb",
"rbwatson"
],
"repo": "UWC2-APIDOC/to-do-service-au24",
"url": "https://github.com/UWC2-APIDOC/to-do-service-au24/pull/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
709710283
|
disable dev tools in dev profile
in all services
Dev tools are automatically turned off when running the application from a jar.
There is no need to turn them off explicitly.
Reference: https://stackoverflow.com/questions/37701330/spring-boot-dev-tools-turning-them-off-for-production
|
gharchive/issue
| 2020-09-27T10:07:46 |
2025-04-01T04:55:43.924953
|
{
"authors": [
"UbaidurRehman1"
],
"repo": "UbaidurRehman1/SpringServices_Rest_Micro",
"url": "https://github.com/UbaidurRehman1/SpringServices_Rest_Micro/issues/101",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1485528049
|
updated the template
changed the layout by a few margins and replaced GaussianBlur with MedianBlur
To be kept as part of https://github.com/Udayraj123/OMRChecker/pull/107.
|
gharchive/pull-request
| 2022-12-08T22:43:56 |
2025-04-01T04:55:44.144135
|
{
"authors": [
"Udayraj123",
"rudrapsc"
],
"repo": "Udayraj123/OMRChecker",
"url": "https://github.com/Udayraj123/OMRChecker/pull/107",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1035502243
|
[QUESTION] Is comet download still a supported command?
❓ Questions and Help
What is your question?
Is comet download still a supported command? If not, what is the best way to download the data needed to reproduce the results?
Code
$ comet download --help
=> command not found: comet
$ comet-download --help
=> command not found: comet-download
What have you tried?
I tried running comet download as specified in data/README.md, and I get the error command not found: comet. However, comet-score and comet-compare work, so I know that I did install it. I also tried comet-download and still get the same error.
What's your environment?
OS: macOS Big Sur 11.5.2
Packaging: conda
Version 4.8.3
Python 3.9.7
Hi @ricardorei,
Are there already any similar download links for the 2021 shared task?
Thank you!
Cheers,
Chantal
Hi @chanberg
This is the link for 2020 DA's:
wget https://unbabel-experimental-data-sets.s3.eu-west-1.amazonaws.com/wmt/2020-da.csv.tar.gz
2020 DA Relative-Ranks:
wget https://unbabel-experimental-data-sets.s3.eu-west-1.amazonaws.com/wmt/2020-daRR.csv.tar.gz
And for the MQM data you have it here, but I'll try to upload the exact files we used after splitting the data and creating the z-scores. I'll try to do that later today or tomorrow.
@ricardorei thank you so much!
@chanberg I also prepared the WMT20 MQM annotated data.
The entire dataset with MQM sentence scores and the corresponding z-score:
wget https://unbabel-experimental-data-sets.s3.eu-west-1.amazonaws.com/wmt/2020-MQM.csv.tar.gz
The train split we used:
wget https://unbabel-experimental-data-sets.s3.eu-west-1.amazonaws.com/wmt/2020-MQM.train.csv.tar.gz
The corresponding test split:
wget https://unbabel-experimental-data-sets.s3.eu-west-1.amazonaws.com/wmt/2020-MQM.test.csv.tar.gz
Don't forget to cite Markus paper if you use this MQM data from 2020:
@misc{freitag2021experts,
title={Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation},
author={Markus Freitag and George Foster and David Grangier and Viresh Ratnakar and Qijun Tan and Wolfgang Macherey},
year={2021},
eprint={2104.14478},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
and the WMT Metrics/News Translation tasks if you use the direct assessments!
Cheers,
Ricardo
@ricardorei this is great! thank you so much!
|
gharchive/issue
| 2021-10-25T19:40:11 |
2025-04-01T04:55:44.280708
|
{
"authors": [
"chanberg",
"isabelcachola",
"ricardorei"
],
"repo": "Unbabel/COMET",
"url": "https://github.com/Unbabel/COMET/issues/36",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
920002248
|
Pydeck + Earth Engine: Terrain Visualization Not visualize in colab
Hi, I have tried to run the example Pydeck + Earth Engine: Terrain Visualization notebook in Colab, but at the end it doesn't show any map.
Colab does not support Jupyter Widgets. You should try .to_html, which supports Colab; however, I'm not 100% sure the Earth Engine integration also supports .to_html because of how authentication works with Earth Engine. I would expect it to work until the token expires, which is usually within one hour.
Thanks @kylebarron, it works now by using .to_html.
Good to hear!
|
gharchive/issue
| 2021-06-14T05:05:39 |
2025-04-01T04:55:44.301812
|
{
"authors": [
"BijoyKrGayen",
"kylebarron"
],
"repo": "UnfoldedInc/earthengine-layers",
"url": "https://github.com/UnfoldedInc/earthengine-layers/issues/115",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2034318380
|
PETSc stages
Added 4 stages:
Thermal solver
SNES Solve
Advect markers
Saving output
I am not sure about the Thermal solver stage since it usually solves very fast.
We should maybe discuss what the best place to put stages is.
Here is a short snippet of a run with -log_view now.
Thanks a lot, this is great!
Does the thermal solver also take this little time in the Diffusion solver benchmark?
Thanks a lot, this is great!
Does the thermal solver also take this little time in the Diffusion solver benchmark?
No, it does not show up there.
I already put a profiler for thermal diffusion in the timestep loop.
I put it at
ierr = JacResInitTemp(&lm->jr); CHKERRQ(ierr);
But to measure thermal diffusion in the Diffusion solver benchmark I will need to put it in the initial guess part of the code.
Probably this function.
ierr = LaMEMLibDiffuseTemp(lm); CHKERRQ(ierr);
Also note that the adjoint tests fail with this:
[0]PETSC ERROR: Duplicate stage name given: Thermal solver
this requires fixing.
OK, now this works and I'm in principle happy to merge it. For consistency it would be good if you could increment the version of LaMEM so that we release a new version (easier when doing benchmarking later).
Can you please add that (also in the documentation)?
Thanks - you also need to change it in the LaMEM.h file (I believe) as the version number of LaMEM is printed at the beginning of the simulation.
|
gharchive/pull-request
| 2023-12-10T10:53:49 |
2025-04-01T04:55:44.307283
|
{
"authors": [
"IskanderI",
"boriskaus"
],
"repo": "UniMainzGeo/LaMEM",
"url": "https://github.com/UniMainzGeo/LaMEM/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
255883592
|
MGMT: Thymeleaf templates may be removed?
Do we need to keep the thymeleaf templates in src/main/resources/fragments or can they be removed?
Removed
Follow up question: should the following resources be kept? are they used?
src/main/resources/templates
src/main/resources/static (and sub directories such as js, etc)
For the moment authorizationFailure.html, error.html, and logout.html are still used. In fact we need to restore the header and footer fragments we removed in order for the screens to appear correctly.
In static, the js can be removed, all images except the logo can be removed.
css and fonts may need to stay for the static screens to render correctly.
Thanks. Would you want me to remove them, or could you make the changes as part of an existing branch we have today?
Yes, I can get this functioning again today, probably do it in our original branch
I have pushed to mgmt-angular-json with this functionality restored and the static resources trimmed.
Merged with apereo/cas master. Lets retire this branch for now altogether. Merges are getting a bit weird :)
By the way, the build seems to fail on this block:
@font-face {
font-family: 'Glyphicons Halflings';
src: url('../fonts/glyphicons-halflings-regular.eot');
src: url('../fonts/glyphicons-halflings-regular.eot?#iefix') format('embedded-opentype'), url('../fonts/glyphicons-halflings-regular.woff2') format('woff2'), url('../fonts/glyphicons-halflings-regular.woff') format('woff'), url('../fonts/glyphicons-halflings-regular.ttf') format('truetype'), url('../fonts/glyphicons-halflings-regular.svg#glyphicons_halflingsregular') format('svg');
}
...in the cas-management.css file. Should this also be removed?
I crossed myself up on this and the other branches. So the fonts dir needs to be restored for this branch. If we merge the domains branch or the search branch then these fonts can be removed from the css.
Committed with the fonts restored
Excellent. Thanks for the update. I'll wait until we finalize work on those branches, unless you tell me it's good to go. I'll continue to QA things.
|
gharchive/issue
| 2017-09-07T10:03:51 |
2025-04-01T04:55:44.319565
|
{
"authors": [
"mmoayyed",
"tsschmidt"
],
"repo": "Unicon/cas",
"url": "https://github.com/Unicon/cas/issues/24",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1884817450
|
Binance, ID no. 201099596, IDPR22NOV22. Anonnymous-user-eb7fb... e-mail NurcanTuncay007@gmail.com... tel. +905375067135, Turkey. Transfer the amount online, deduct the commission from it. Transfer it to my Spot wallet. Nurcan Tuncay. adlm.
Binance, ID no. 201099596, IDPR22NOV22. Anonnymous-user-eb7fb... e-mail NurcanTuncay007@gmail.com... tel. +905375067135, Turkey. Transfer the amount online, deduct the commission from it. Transfer it to my Spot wallet. Nurcan Tuncay. adlm.
Originally posted by @Nurcan123456 in https://github.com/Uniswap/token-lists/issues/486#issuecomment-1709186384
cyny39.cb.id
Hfcc
|
gharchive/issue
| 2023-09-06T22:03:12 |
2025-04-01T04:55:44.364193
|
{
"authors": [
"M-Sayeh",
"Nurcan123456",
"cyny39"
],
"repo": "Uniswap/token-lists",
"url": "https://github.com/Uniswap/token-lists/issues/496",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2250745413
|
Could we stop the External Control on the UR robot at runtime?
Affected ROS2 Driver version(s)
All
Used ROS distribution.
Humble
Which combination of platform is the ROS driver running on.
Ubuntu Linux with standard kernel
How is the UR ROS2 Driver installed.
Build both the ROS driver and UR Client Library from source
Which robot platform is the driver connected to.
UR E-series robot
Robot SW / URSim version(s)
5.14
How is the ROS driver used.
Through the robot teach pendant using External Control URCap
Issue details
Using the URCap External Control and the ROS 2 ur_ros_driver, I wonder whether I can stop the External Control running on my UR robot (for example, in this figure, the External Control module is "Control by myhostname") at runtime without quitting the UR main program or quitting the ur_ros_driver. Do you have any idea how to:
stop or pause the External Control from the ur_ros_driver side?
stop or pause the External Control from the UR robot side?
(without stopping the whole program containing other threads)
The big picture explains why I need this: I'm using OnRobot Eyes, which calculates and produces the object picking very well. This feature includes move actions, so it cannot work in a thread parallel to External Control. Thus, I have two approaches to combining it with my ROS 2 program: first, I create another thread to calculate the OnRobot object-picking variables when necessary and temporarily pause the External Control in the main thread (Robot Program) to perform the move using those variables; second, I can transfer the variables calculated by OnRobot Eyes to my ROS 2 program. The above question relates to the first approach.
Relevant log output
No response
Accept Public visibility
[X] I agree to make this context public
ros2 service call /io_and_status_controller/hand_back_control std_srvs/srv/Trigger should do the trick
Nice. Thank you very much @fmauch
|
gharchive/issue
| 2024-04-18T13:46:08 |
2025-04-01T04:55:44.525603
|
{
"authors": [
"fmauch",
"nqduy35"
],
"repo": "UniversalRobots/Universal_Robots_ROS2_Driver",
"url": "https://github.com/UniversalRobots/Universal_Robots_ROS2_Driver/issues/972",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
720493803
|
Timing bug in SystemTimerScheduledTaskManager
There is a timing bug in SystemTimerScheduledTaskManager that can prevent the Unleash client from properly scheduling the fetch-feature-toggle-task. This means that the Unleash client will silently not fetch feature toggles.
The problem is the gap in time between when a background task's Timer is created and the Timer is registered in the timers dictionary.
Specifically: the problem occurs if the callback associated with a task's Timer executes and reaches this line before the Timer is put in the dictionary:
https://github.com/Unleash/unleash-client-dotnet/blob/master/src/Unleash/Scheduling/SystemTimerScheduledTaskManager.cs#L33
Then the callback terminates early, and does not go on to reschedule itself by changing the dueTime for the Timer. Since the period for the Timer is set to infinite, it means that the Timer will never fire again. In other words, the callback executes once and terminates early, without running the task, and without making sure that the Timer will fire again.
Thanks for reporting @einarwh! I feel the whole Callback method could benefit from a rewrite.
What do you think about changing the lines 85-91 with the following:
var timer = new Timer(
callback: Callback,
state: callbackState,
dueTime: Timeout.Infinite,
period: Timeout.Infinite);
timers.Add(name, timer);
timer.SafeTimerChange(dueTime, Timeout.InfiniteTimeSpan, ref disposeEnded);
This will not start the timer before we have successfully added it to the timers dictionary.
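For illustration, the register-before-arm ordering can be sketched in TypeScript. Everything here is invented for the sketch, and Node.js is single-threaded, so the original C# race cannot actually occur in this runtime; the sketch only mirrors the ordering of the proposed fix.

```typescript
// Sketch of the proposed fix's ordering: create the timer "disarmed",
// register it under its name, and only then arm it, so the callback can
// always find its own entry in the map. (Names are invented; this mirrors
// the C# fix, it is not the actual SystemTimerScheduledTaskManager code.)
type Entry = { handle: ReturnType<typeof setTimeout> | null };

class TaskManagerSketch {
  private timers = new Map<string, Entry>();

  schedule(name: string, task: () => void, dueTimeMs: number): void {
    const entry: Entry = { handle: null }; // created disarmed
    this.timers.set(name, entry);          // registered before arming
    entry.handle = setTimeout(() => {
      if (!this.timers.has(name)) return;  // mirrors the early-return lookup
      task();
    }, dueTimeMs);
  }

  cancel(name: string): void {
    const entry = this.timers.get(name);
    if (entry?.handle != null) clearTimeout(entry.handle);
    this.timers.delete(name);
  }
}
```

Because the entry exists before the timer is armed, the callback's lookup can never miss its own registration.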
That looks like something that would fix the bug, so I would be happy with that change. But the entire process seems needlessly complicated to me, with the caveat that there might be subtle things that I'm unaware of. In particular I don't understand why the Timer needs to be modified on each callback.
But the entire process seems needlessly complicated to me
I totally agree. I will try to think of a simple solution.
I think the main reason to change the intervals is related to the use case of executing the task immediately on initialisation. I think that can be solved more simply.
Right, but in my understanding dueTime is for the delay before firing the first time, and period is for the interval between firings. Why isn't that enough?
Hmz, good point. I will need to read up on the specifications of the Timer object. Sounds like you are on to something.
|
gharchive/issue
| 2020-10-13T16:19:20 |
2025-04-01T04:55:44.549952
|
{
"authors": [
"einarwh",
"ivarconr"
],
"repo": "Unleash/unleash-client-dotnet",
"url": "https://github.com/Unleash/unleash-client-dotnet/issues/63",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1169736034
|
Update context does not fetch new flags
If refreshInterval is set to 0 and the updateContext function is called, the data does not update.
public async updateContext(context: IMutableContext): Promise<void> {
  // Give the user a nicer error message when including
  // static fields in the mutable context object
  // @ts-ignore
  if (context.appName || context.environment) {
    console.warn(
      "appName and environment are static. They can't be updated with updateContext."
    );
  }

  const staticContext = {
    environment: this.context.environment,
    appName: this.context.appName,
  };
  this.context = { ...staticContext, ...context };

  if (this.timerRef) { // <--------- here is the problem
    await this.fetchToggles();
  }
}
I think we should consider a 'started' flag instead. The intention was that an updateContext call should not fetch toggles before the SDK is started. But I agree that you should not necessarily require background polling to be enabled.
|
gharchive/issue
| 2022-03-15T14:16:46 |
2025-04-01T04:55:44.561058
|
{
"authors": [
"ivarconr",
"spiderhands"
],
"repo": "Unleash/unleash-proxy-client-js",
"url": "https://github.com/Unleash/unleash-proxy-client-js/issues/73",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1570240326
|
Vercel timeout
Describe the bug
We have some longer deploy times and use this action to correctly enforce the result status of the Vercel deployment, but after enabling debug mode we noticed that after a while the Vercel request will time out.
FetchError: request to https://api.vercel.com/v11/now/deployments/get?url=********.vercel.app failed, reason: connect ETIMEDOUT 76.76.21.112:443
I think this is DDoS protection on Vercel's side.
Expected behavior
It would be really helpful if we could configure the retry step (e.g., as opposed to checking the status every 5 seconds, we could do it every 30 seconds).
Much appreciated
Are you sure this issue is related to the number of retries?
How long does your Vercel deployment take?
It takes about 20 minutes; we're statically building a ton of pages.
Just checked the code and it seems like everything happens in a while loop https://github.com/UnlyEd/github-action-await-vercel/blob/b3516eac88ef939ccc2c6b25987ba153d2c7ef48/src/awaitVercelDeployment.ts#L18
Is there any chance we can add a delay between requests? It seems like this is just hammering the Vercel API without any pause. I think 5 seconds would be nice, ideally configurable.
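As a sketch, polling with a configurable delay between status checks could look something like this (function names and status strings are illustrative, not the action's actual code):

```typescript
// Hypothetical polling sketch: check deployment status at a configurable
// interval instead of looping without any delay.
const sleep = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

async function awaitDeployment(
  getStatus: () => Promise<string>, // stand-in for the Vercel API call
  pollIntervalMs: number,
  timeoutMs: number,
): Promise<string> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const status = await getStatus();
    if (status === "READY" || status === "ERROR") {
      return status; // terminal states: stop polling
    }
    await sleep(pollIntervalMs); // breathing room between requests
  }
  throw new Error(`Deployment not ready within ${timeoutMs} ms`);
}
```

With pollIntervalMs set to, say, 5000 or 30000, the request rate stays bounded no matter how long the deployment takes.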
@cipriancaba I believe https://github.com/UnlyEd/github-action-await-vercel/pull/97 might solve your issue. (PR from first time contributor)
Merged through #98
Could you let me know if this change improves your issues?
This other PR should help as well.
https://github.com/UnlyEd/github-action-await-vercel/pull/100
I'm closing this, let me know if anything doesn't work as expected.
Works great, much appreciated @Vadorequest
I haven't done much!
@namoscato @dlively1 are to thank for those :)
|
gharchive/issue
| 2023-02-03T18:02:57 |
2025-04-01T04:55:44.573193
|
{
"authors": [
"Vadorequest",
"cipriancaba"
],
"repo": "UnlyEd/github-action-await-vercel",
"url": "https://github.com/UnlyEd/github-action-await-vercel/issues/95",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
285685144
|
Release adapters as separate modules
At the moment you need to install both rethinkdbdash and arangojs regardless of which adapter you are using.
If we publish the adapters as separate modules it will not only fix this, but also allow each adapter to depend on the core library, so that something like const ArangoAdapter = require('pims/arango') would require the user to manually install only rethinkdbdash or arangojs as needed.
Done
|
gharchive/issue
| 2018-01-03T13:20:07 |
2025-04-01T04:55:44.584020
|
{
"authors": [
"UnwrittenFun"
],
"repo": "UnwrittenFun/pims",
"url": "https://github.com/UnwrittenFun/pims/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2604673932
|
Added AI Anime Avatar Generation Feature Using OpenCV and Matplotlib
Related Issues or bug
Added support for generating anime-style avatars with smooth edges and vibrant colors using OpenCV and Matplotlib. Fixes issue #495.
Fixes: #495
Proposed Changes
Implemented an AI Anime Avatar Generation feature that transforms input images into anime-style avatars using OpenCV and Matplotlib.
Developed functions to load images, apply Gaussian blurring, adaptive thresholding, and bilateral filtering for achieving smooth edges and vibrant colors.
Enhanced the output images' vibrancy and smooth shading transitions to closely mimic traditional anime art styles.
Included functionality to display the original and transformed images side by side for easy comparison
Screenshots
Original
Updated
Screenshot of model implementation :
@UppuluriKalyani
Please review and approve this pull request. I have also added the necessary labels: gssoc-ext, hacktoberfest, and hacktoberfest-accepted. Thank you!
@UppuluriKalyani Okay will update the PR by tonight.
|
gharchive/pull-request
| 2024-10-22T08:26:01 |
2025-04-01T04:55:44.590354
|
{
"authors": [
"RB137"
],
"repo": "UppuluriKalyani/ML-Nexus",
"url": "https://github.com/UppuluriKalyani/ML-Nexus/pull/520",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2574405952
|
Research Database options
Research task involving documenting possible Database options.
What can we use as a database that can easily integrate into the RoR app?
Firebase
Google Sheets API?
MySQL Lite and localized history?
Can we get a demo DB spun up with seed data to manually test the pros and cons?
I suggest going with PostgreSQL for your Ruby on Rails app, and here’s why:
Seamless Rails integration: PostgreSQL works beautifully with Rails, so setup is straightforward, and you won’t hit any compatibility issues.
Scalability: Whether you're starting small or planning for growth, PostgreSQL can handle it. It’s great for both small projects and larger apps down the road.
Powerful features: It supports advanced features like complex queries and JSON, giving you flexibility if your data needs become more complex in the future.
Data reliability: It’s designed to ensure your data stays consistent and safe, which is key when dealing with critical information.
Great support: Since it’s popular and open-source, there’s a huge community, so finding help or resources when needed is easy.
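If it helps, a minimal sketch of what the Rails config/database.yml could look like for PostgreSQL (database names and environment-variable names here are placeholders):

```yaml
# config/database.yml -- names below are placeholders
default: &default
  adapter: postgresql
  encoding: unicode
  pool: 5

development:
  <<: *default
  database: job_tracker_development

production:
  <<: *default
  database: job_tracker_production
  username: <%= ENV["DATABASE_USER"] %>
  password: <%= ENV["DATABASE_PASSWORD"] %>
```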
|
gharchive/issue
| 2024-10-08T23:28:49 |
2025-04-01T04:55:44.593245
|
{
"authors": [
"Arhaan-Siddiquee",
"kodebae"
],
"repo": "UpstateWomenInSoftwareEngineering/job_tracker",
"url": "https://github.com/UpstateWomenInSoftwareEngineering/job_tracker/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
53596208
|
External issues that need community support
[ ] Adding tag on BODY tag - https://github.com/meteor/meteor/issues/2561
[ ] Language attributes to templates - https://github.com/meteor/meteor/issues/3400
[ ] Make Meteor's build process handle few plugins on one file extension, so we can run ng-annotate on .js files - https://github.com/MeteorCommunity/discussions/issues/51
[ ] Official Meteor support for bower with handling conflicts - https://github.com/mquandalle/meteor-bower/issues/30
[ ] Ability to change Blaze's default-delimiters - https://github.com/meteor/meteor/issues/2765
[ ] Ability to extend Meteor's command line to do something like Yeoman's angular with meteor: "meteor create --angular socially" - https://github.com/MeteorCommunity/discussions/issues/9
Closing as I've discussed it all with the Meteor team and @netanelgilad wrote a great pull request for solving the most relevant issue.
|
gharchive/issue
| 2015-01-07T04:20:33 |
2025-04-01T04:55:44.601953
|
{
"authors": [
"Urigo"
],
"repo": "Urigo/angular-meteor",
"url": "https://github.com/Urigo/angular-meteor/issues/109",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
723243813
|
Way to pass JWT to underlying services
Hello!
I am using this project to make a dynamic mesh where services can register themselves. That idea worked until I faced the need to pass the JWT provided by the client to underlying services. Is there any context or other way to tell the services who exactly is making the request?
Currently, I have to rebuild the schema, and it is definitely a suboptimal solution:
Main file:
app.use(
graphqlUploadExpress({ maxFileSize: 10000000, maxFiles: 10 }),
graphqlHTTP(async (req) => {
const token = `${req.headers['authorization']}`;
let resultSchema = ttl.get(token);
if (!resultSchema) {
const services = await getServicesWithEndpoints();
const { schema } = await rebuildSchema(services, {
token
});
ttl.push(token, schema, null, TTL);
resultSchema = schema;
} else {
console.log("Schema found in cache for token:", token);
}
return {
schema: resultSchema,
graphiql: false
};
})
);
Rebuild schema method:
const rebuildSchema = async (endpoints, options) => {
const config = {
merger: 'federation',
sources: endpoints.map((endpoint) => ({
name: endpoint.key,
handler: {
graphql: {
endpoint: endpoint.endpoint,
enableSubscriptions: false,
operationHeaders: {
Authorization: options.token
}
}
}
})),
additionalTypeDefs: getMeshTypes(endpoints.length > 0),
additionalResolvers: ['./src/mesh/additional_resolvers.js']
};
const subConfig = {
sources: endpoints
.filter((endpoint) => !!endpoint.subEndpoint)
.map((endpoint) => ({
name: endpoint.key,
handler: {
graphql: {
endpoint: endpoint.subEndpoint,
enableSubscriptions: true,
operationHeaders: {
Authorization: options.token
}
}
}
})),
// additionalTypeDefs: getMeshTypes(endpoints.length > 0),
additionalTypeDefs: getMeshTypes(false),
additionalResolvers: ['./src/mesh/additional_resolvers.js']
};
if (endpoints.length === 0) {
delete config.merger;
}
const meshConfig = await processConfig(config);
const meshSubConfig = await processConfig(subConfig);
try {
const { schema, contextBuilder } = await getMesh(meshConfig);
currentSchema = schema;
currentContextBuilder = contextBuilder;
} catch (e) {
console.log('Schema error:', e, JSON.stringify(e));
}
try {
const {
schema: subSchema,
contextBuilder: subContextBuilder
} = await getMesh(meshSubConfig);
currentSubSchema = subSchema;
currentSubContextBuilder = subContextBuilder;
} catch (e) {
console.log('SUB ERR:', e, JSON.stringify(e));
}
return {
schema: currentSchema,
contextBuilder: currentContextBuilder
};
};
Any ideas how to make it better?
You can pass values from context as headers like below;
operationHeaders:
  Authorization: Bearer {context.jwtToken}
@ardatan I have a mix of Runtime package functions and copy-pasted/edited CLI functions, and my Mesh allows services to register themselves, so I don't have any config there. I would try your approach, but I don't completely understand where I would pass the context in my case
GraphQL Mesh generates a GraphQLSchema object that can be passed to any kind of GraphQL Server.
const { schema } = await getMesh(...);
app.use(graphQLHTTP(req => {
  return {
    context: {
      jwtToken: req.headers.authorization.replace('Bearer ', '') // <<< HERE
    }
  };
}));
Or you can directly use the GraphQL Mesh CLI to serve your GraphQL API; in that case, the context will be the incoming HTTP request.
Thanks! I have successfully used context builder to pass JWT and Jaeger token to my underlying services.
|
gharchive/issue
| 2020-10-16T13:47:12 |
2025-04-01T04:55:44.606486
|
{
"authors": [
"ardatan",
"skayred"
],
"repo": "Urigo/graphql-mesh",
"url": "https://github.com/Urigo/graphql-mesh/issues/1075",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
579425722
|
Generated SDK of graphql handler doesn't work
It just returns null; I guess it's related to the way info is being parsed in graphql-tools-fork.
Not sure about the architecture here, but if you are able to share some code of what you are expecting vs what you got, I can take a look.
@yaacovCR We are taking all root operations from Query and Mutation, and wrapping each with a function to make it available to run from the context. It's very similar to delegateToSchema, but as a function.
The current behaviour requires to pass context and info, but maybe we can bypass that later by wrapping all resolvers.
In this example: https://github.com/Urigo/graphql-mesh/blob/master/examples/postgres-geodb/src/mesh/additional-resolvers.ts#L8
I can only assume that it breaks because of the dependency in info variable, but maybe that's something we can solve somehow?
Because let's say the you have something like the in the example above:
// Schema A
type Node {
id: ID!
}
type User implements Node {
id: ID!
username: String!
name: String!
}
type Query {
search(term: String): [Node!]!
}
// Schema B
type Location {
id: ID!
city: String!
country: String!
}
type Query {
locations(search: String!): [Location!]!
}
And then we would like to apply a schema extension, to connect the two together:
extend type Location {
users: [User!]!
}
And to query it:
query locationsAndUsers {
locations(search: "...") {
id
city
country
users {
id
name
}
}
}
We would like to have a function in context, to allow you to easily run search from schema A. The reason to have a function is the fact that we can later easily create type-safe typings for it.
The resolver will look like that:
export const resolvers = {
Location: {
users: (location, arg, context, info) => {
return context.Users.search({ term: `location:${location.city}` }, context, info);
}
}
}
And I would expect the operation Query.search to be executed with the fragment of users, maybe with a spread to make the type explicit? something like:
query {
search(terms: "...") {
... on User {
id
name
}
}
}
I can only assume that right now it fails and returns null because it unable to parse info correctly. @yaacovCR maybe it's something we can implement in graphql-tools-fork? what do you think?
Sounds like a new graphql-binding...
Looks like you are passing the info object, are you sending that within the function in context to delegateToSchema? Where is that code?
@yaacovCR yeah exactly, it seems like delegateToSchema behind the scenes is using the selection set as-is, while it needs to "transform" it, so it might need to be done before running the actual resolver. Or maybe just call delegateToSchema manually with the extracted selection set.
I will push an example for that soon, and let you know.
Thanks!
@yaacovCR Ok so I managed to get this working, it's not perfect, but I guess it's fine for now.
The example is here: https://github.com/Urigo/graphql-mesh/tree/master/examples/postgres-geodb (see README)
So what I did is: the method I'm putting in the context is not the resolver of the root operation itself, but a method that executes delegateToSchema with the correct argument, but with the right selection set.
It's done here: https://github.com/Urigo/graphql-mesh/blob/master/packages/handlers/graphql/src/index.ts#L28-L37
This way, it makes sure that the selection set is being treated correctly, without the need to pass anything else but args, context, info.
It would like to improve that in the future, so right not the extension and the linking is defined as:
extend type Geo_City {
developers(limit: Int = 10): Github_SearchResultItemConnection!
}
So behind the scenes, the implementation of developers field uses Github_search query field, so it must return the same value as the root operation (Github_SearchResultItemConnection).
It means that a query like that works:
query {
allCities(orderBy: ID_ASC, condition: { name: "London" }) {
nodes {
name
countrycode
district
developers {
nodes {
... on Github_User {
login
avatarUrl
}
}
}
}
}
}
And the custom resolvers that uses this binding is located here: https://github.com/Urigo/graphql-mesh/blob/master/examples/postgres-geodb/src/mesh/additional-resolvers.ts#L5-L13
This works, but what if I would like to eliminate the need for nodes and the fragment spread under developers? Changing the definition of the field to developers: [Github_User!]! would be ideal.
But at the moment it's not possible, because the delegated execution fails and returns just null - when running Github_search, it expects this structure because of the delegated operation.
The binding implementation can be controlled by us, and we do have strict resolver signatures, so right now it fails at build time, which is great.
I think at some point, we'll allow developers to specify either info or fragment to fetch data with this kind of delegation, in order to make it simple to customize the fetched data (this way, it's simpler to create a fragment that will work with delegateToSchem, and then the user can manipulate the data before returning it).
What do you think: @yaacovCR
I think you can use HoistField transform to hoist nodes. Or you can just take nodes out of root fields, I guess...
I may be misunderstanding...
|
gharchive/issue
| 2020-03-11T17:21:55 |
2025-04-01T04:55:44.620322
|
{
"authors": [
"dotansimha",
"yaacovCR"
],
"repo": "Urigo/graphql-mesh",
"url": "https://github.com/Urigo/graphql-mesh/issues/83",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
273089546
|
update failure mode
Suggest updating the out-of-range failure mode to reset both drive AND steering values if either one is out of range. This way the car will always stop if ether value is out of range.
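A minimal sketch of that failsafe (illustrative only; the names and the 1000-2000 pulse range are assumptions, not the project's actual values):

```python
def safe_controls(drive, steering, lo=1000, hi=2000, neutral=1500):
    """If EITHER value is out of range, reset BOTH to neutral so the car
    always stops rather than continuing with one valid channel."""
    def in_range(v):
        return lo <= v <= hi
    if not (in_range(drive) and in_range(steering)):
        return neutral, neutral
    return drive, steering
```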
Yep, sounds sensible
|
gharchive/pull-request
| 2017-11-10T23:31:28 |
2025-04-01T04:55:44.633581
|
{
"authors": [
"UvinduW",
"oliverwilkins"
],
"repo": "UvinduW/Miniature-Autonomous-Car",
"url": "https://github.com/UvinduW/Miniature-Autonomous-Car/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2180566264
|
Out of Memory.
I have 4gb Vram and it gives me out of memory error. How to switch to cpu only?
To switch to CPU, please run with run.py --device cpu
File "E:\TripSo\TripoSR\env\lib\site-packages\transformers\models\vit\modeling_vit.py", line 219, in forward
attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 50.00 MiB. GPU 0 has a total capacity of 4.00 GiB of which 0 bytes is free. Of the allocated memory 6.05 GiB is allocated by PyTorch, and 158.95 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "E:\TripSo\TripoSR\env\lib\site-packages\gradio\queueing.py", line 501, in process_events
response = await self.call_prediction(awake_events, batch)
File "E:\TripSo\TripoSR\env\lib\site-packages\gradio\queueing.py", line 465, in call_prediction
raise Exception(str(error) if show_error else None) from error
Exception: None
What is the minimum GPU VRAM required? I have 4 GB of VRAM. I tried lowering model.renderer.set_chunk_size(8192) to model.renderer.set_chunk_size(8), but I'm still getting out of memory.
It ran fine on CPU, though it took almost 6 minutes.
I have the same problem here, does anyone have any suggestions?
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 896.00 MiB. GPU 0 has a total capacity of 23.68 GiB of which 417.81 MiB is free. Process 715370 has 23.27 GiB memory in use. Of the allocated memory 22.96 GiB is allocated by PyTorch, and 19.59 MiB is reserved by PyTorch but unallocated.
could this actually be the kv cache and thus be linked to https://github.com/vllm-project/vllm/discussions/241 ?
|
gharchive/issue
| 2024-03-12T01:55:05 |
2025-04-01T04:55:44.647572
|
{
"authors": [
"Bikimaharjan",
"Dileepvk98",
"GrimalDev",
"pookiefoof"
],
"repo": "VAST-AI-Research/TripoSR",
"url": "https://github.com/VAST-AI-Research/TripoSR/issues/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
166212355
|
Run-time Error when adding body parameter
I'm getting the following error:
Run-time error '5': Invalid procedure call or argument
When calling AddBodyParameter on a WebRequest
The error occurs on the line: dict_pDictionary.Item(Key) = Value
In:
Public Property Let Item(Key As Variant, Value As Variant)
#If Mac Or Not UseScriptingDictionaryIfAvailable Then
If Me.Exists(Key) Then
dict_ReplaceKeyValue dict_GetKeyValue(Key), Key, Value
Else
dict_AddKeyValue Key, Value
End If
#Else
dict_pDictionary.Item(Key) = Value
#End If
End Property
The key is "propertyName" and the value is "asdasdasd". This error occurs after its already looped through and added some Body Parameters.
The exact same code and input runs fine on 32bit Excel 2010, 2013 and 2016.
The error is occurring for me in x64 Excel 2016
Hmm, not sure why that particular code would be affected by 32- vs 64-bit. The only thing that I can think of is that the Variant type (from AddBodyParameter to Let Item) may be causing some weird issue. What is the type of the key/value being passed to AddBodyParameter? I would try the following:
Dim Key, Value ' <- What type are these?
' Explicitly convert key and value to string
Request.AddBodyParameter CStr(Key), CStr(Value)
I haven't found explicit coercion to string to be necessary, but maybe it'll help. Otherwise, I'll keep looking into it, but I may need a more complete sample.
I've just done some more testing and I believe it's due to passing the Request into a function ByRef. Have you encountered something like this before? It also seems to be related to x64 Excel
Here is my function:
Sub AddBodyParamIfNotEmpty(ByRef Request As WebRequest, ParamName As String, CellName As String)
If Not IsEmpty(Sheets(SE1.Name).Range(CellName)) And Not IsError(Sheets(SE1.Name).Range(CellName)) Then
Request.AddBodyParameter ParamName, Sheets(SE1.Name).Range(CellName).Value
End If
End Sub
Quick replication:
Sub Something()
Dim Request As New WebRequest
Request.AddBodyParameter "Test", "Testing"
QuickTest Request, "Test2", "Testing2"
End Sub
Sub QuickTest(ByRef Request, ParamName As String, ParamValue As String)
Request.AddBodyParameter ParamName, ParamValue
End Sub
Has error in x64 Excel when attempting to add body parameter to a request passed ByRef but not when adding to request normally
Great sleuthing @levitatejay! I'll look into it and see what I can find.
Hi Tim,
I've done some more searching. It looks like the problem is related to an automation issue with the key.
You can see that the key's value is still intact when calling "AddBodyParameter"
But once it goes a level deeper into the Dictionary implementation it loses its value:
Any idea why this is happening/ what I can do to fix it?
The problem doesn't appear to be related to passing ByRef as I initially thought.
I can get my code to work by changing my subs declaration:
AddBodyParamIfNotEmpty(ByRef Request As WebRequest, ParamName As String, CellName As String)
from ParamName As String to ParamName As Variant
or by changing the following sub's declaration (from the WebRequest class):
Public Sub AddBodyParameter(Key As Variant, Value As Variant)
from Key As Variant to ByVal Key As Variant
So it appears the reference to the initial Key Variant/String object is being lost somewhere between jumping from the WebRequest Class to the Dictionary Class...
But to make matters more confusing:
It only occurs in x64 Excel
The "automation error" only occurs for the Key and not the Value, even though they are the same type and go through the exact same process ???
@levitatejay Thanks again for digging so deep into this weird issue. I'll add ByVal shortly to resolve this.
|
gharchive/issue
| 2016-07-18T22:55:48 |
2025-04-01T04:55:44.672375
|
{
"authors": [
"levitatejay",
"timhall"
],
"repo": "VBA-tools/VBA-Web",
"url": "https://github.com/VBA-tools/VBA-Web/issues/239",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
168758297
|
Complete example GMail
I am trying to implement VBA-Web in one of my projects but I struggle to connect to GMail.
Can you please publish an example step by step that shows how to connect to GMail ?
I cannot even figure out what format credentials.txt should have
Many thanks
Alberto
Found the code in the examples.
|
gharchive/issue
| 2016-08-01T22:35:42 |
2025-04-01T04:55:44.674037
|
{
"authors": [
"PioPio2"
],
"repo": "VBA-tools/VBA-Web",
"url": "https://github.com/VBA-tools/VBA-Web/issues/243",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1251228199
|
Visualizations: update handling of dependency order of strata variables in the viz tab
depends on veupathdb/edadataservice#190
came up during a conversation with DSR, who was trying to approximately replicate a plot from a data provider and wasn't able to.
currently we can support a dependency order of visual elements defined something like:
'yaxis' -> 'xaxis' -> 'overlay' -> 'facet'
and wed like to be able to do something like:
'yaxis' -> 'xaxis' -> ['overlay','facet']
essentially stating that both the overlay and facet strata vars need to simply be the same or a parent entity of the xaxis, rather than the facet var being required as the same or parent of the overlay.
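One way to picture the change (a sketch; this is not the actual config format): the last two roles become a group whose members are constrained only relative to 'xaxis', not to each other:

```python
# Hypothetical representation of the two orderings. A nested list groups
# roles that share the same constraint relative to the preceding element.
strict_order = ['yaxis', 'xaxis', 'overlay', 'facet']
relaxed_order = ['yaxis', 'xaxis', ['overlay', 'facet']]

def constraint_ranks(order):
    """Map each role to its position; grouped roles share a position,
    meaning neither is required to be a parent of the other."""
    ranks = {}
    for i, item in enumerate(order):
        for role in (item if isinstance(item, list) else [item]):
            ranks[role] = i
    return ranks
```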
I think someone made a duplicate in #1410 and it was resolved with #1415, but yes, it's been handled.
|
gharchive/issue
| 2022-05-27T17:23:46 |
2025-04-01T04:55:44.681314
|
{
"authors": [
"d-callan"
],
"repo": "VEuPathDB/web-eda",
"url": "https://github.com/VEuPathDB/web-eda/issues/1160",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2242671023
|
issue with inference module running on tiles with crs 3857
Hi, I'm currently handling aerial images in CRS 3857. While defining rasters with the generate function and specifying CRS parameters for the module, I'm encountering errors halfway through the inference process on my tiles. What could be triggering these errors? Are there potential oversights on my end? Interestingly, when I omit the CRS parameter and leave it as default, the inference proceeds smoothly, albeit with slightly misaligned results.
Here is an image of the results I got on a small image in Sydney, Australia:
Hello, could you please copy and paste the exact error output?
sure here it is:
ERROR Command ['python', '-m', 'tile2net', 'inference', '--city_info', '/content/output/test_tile/tiles/test_tile_256_info.json', '--interactive', '--dump_percent', '0'] returned non-zero exit status 1.
Stdout:
Stderr: INFO NumExpr defaulting to 2 threads.
INFO Inferencing. Segmentation results will not be saved.
INFO Downloading weights for segmentation, this may take a while...
INFO Weights downloaded.
INFO Using a single GPU.
INFO Using Per Image based weighted loss
INFO Using Cross Entropy Loss
INFO Loading weights from: checkpoint=/content/tile2net/src/tile2net/raster/resources/assets/weights/satellite_2021.pth
INFO init weights from normal distribution
INFO loading pretrained model /content/tile2net/src/tile2net/raster/resources/assets/weights/hrnetv2_w48_imagenet_pretrained.pth
INFO Trunk: hrnetv2
INFO Model params = 72.1M
/usr/lib/python3.10/multiprocessing/popen_fork.py:66: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.
self.pid = os.fork()
INFO Polygons are generated and saved!
INFO Starting network creation...
0%| | 0/1 [00:00<?, ?it/s]
100%|██████████| 1/1 [00:00<00:00, 2828.26it/s]
0%| | 0/1 [00:00<?, ?it/s]
100%|██████████| 1/1 [00:00<00:00, 2686.93it/s]
INFO ..... creating the processed sidewalk network
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/content/tile2net/src/tile2net/main.py", line 6, in
argh.dispatch_commands([
File "/usr/local/lib/python3.10/dist-packages/argh/dispatching.py", line 358, in dispatch_commands
dispatch(parser, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/argh/dispatching.py", line 183, in dispatch
for line in lines:
File "/usr/local/lib/python3.10/dist-packages/argh/dispatching.py", line 294, in _execute_command
for line in result:
File "/usr/local/lib/python3.10/dist-packages/argh/dispatching.py", line 247, in _call
result = function(namespace_obj)
File "/content/tile2net/src/tile2net/namespace.py", line 671, in wrapper
return func(namespace, **kwargs)
File "/content/tile2net/src/tile2net/tileseg/inference/inference.py", line 510, in inference
return inference.inference()
File "/content/tile2net/src/tile2net/tileseg/inference/inference.py", line 198, in inference
self.validate(
File "/content/tile2net/src/tile2net/tileseg/inference/inference.py", line 313, in validate
net.convert_whole_poly2line()
File "/content/tile2net/src/tile2net/raster/pednet.py", line 375, in convert_whole_poly2line
self.create_sidewalks()
File "/content/tile2net/src/tile2net/raster/pednet.py", line 344, in create_sidewalks
swntw.geometry = swntw.simplify(0.6)
AttributeError: 'NoneType' object has no attribute 'simplify'
CalledProcessError Traceback (most recent call last)
in <cell line: 1>()
----> 1 raster1.inference()
2 frames
/usr/lib/python3.10/subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs)
524 retcode = process.poll()
525 if check and retcode:
--> 526 raise CalledProcessError(retcode, process.args,
527 output=stdout, stderr=stderr)
528 return CompletedProcess(process.args, retcode, stdout, stderr)
CalledProcessError: Command '['python', '-m', 'tile2net', 'inference', '--city_info', '/content/output/test_tile/tiles/test_tile_256_info.json', '--interactive', '--dump_percent', '0']' returned non-zero exit status 1.
Could you please also paste the command that you ran?
raster1 = Raster(
location=[-33.72176969693144, 150.72435381126422,-33.71948521547603, 150.7271003932958],
name='test_tile',
input_dir='/content/test_tile/x_y.png',
output_dir='/content/output',
zoom=21,
crs=3857
)
raster1.generate(2)
raster1.inference()
Thank you for using Tile2net. The CRS parameter is intended to be paired with the location parameter, not the geometry which encapsulates its CRS in its metadata. Please leave the CRS to be the default 4326 coordinate system (WGS84). As for the misaligned results, could you please overlay it onto the source imagery, and see if it still seems to be erroneous?
@ahmademami97
A side note here: Tile2Net works with the Slippy Map tile system (XYZ system) (read our data preparation guide here). This requires tiles to be in EPSG:4326; you can use QGIS to re-project your large raster to 4326 and then tile it according to XYZ again to arrive at the most accurate results.
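For reference, the spherical (Web) Mercator inverse is simple enough to sketch in pure Python if you want to convert 3857 coordinates into the 4326 lat/lon values that Raster(location=...) expects (a generic formula, not part of tile2net):

```python
import math

R = 6378137.0  # spherical Mercator earth radius, metres

def mercator_to_latlon(x, y):
    """Convert EPSG:3857 (x, y) metres to EPSG:4326 (lat, lon) degrees."""
    lon = math.degrees(x / R)
    lat = math.degrees(2.0 * math.atan(math.exp(y / R)) - math.pi / 2.0)
    return lat, lon
```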
Thank you so much, @Mary-h86 and @dhodcz2, for your notes and help. I fixed the misalignment in my network. There was a small issue with the conversion to the slippy tile system in my code. It's all working fine now.
Perfect! Happy to hear that. I am closing this issue now. Feel free to re-open if you have any other questions.
|
gharchive/issue
| 2024-04-15T04:23:35 |
2025-04-01T04:55:44.718116
|
{
"authors": [
"Mary-h86",
"ahmademami97",
"dhodcz2"
],
"repo": "VIDA-NYU/tile2net",
"url": "https://github.com/VIDA-NYU/tile2net/issues/61",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
676682429
|
The VKWebAppGetPersonalCard event is not sent after clearing the cache
There is a problem sending the VKWebAppGetPersonalCard event after the cache is cleared. Neither then, nor catch, nor finally fires for the bridge.send("VKWebAppGetPersonalCard", {...}) promise.
Steps to reproduce:
Clear the cache, close the mini app and the VK app itself
Open the VK app, open the mini app - VKWebAppGetPersonalCard is sent
Clear the cache and, without closing the VK app, open the mini app again - VKWebAppGetPersonalCard is not sent
Which platform?
Can you tell me the platform?
I noticed it on iOS.
I noticed the same behavior with the VKWebAppGetUserInfo event. I tried using it as bridge.send('VKWebAppGetUserInfo').then(...)
and via the event-based approach
bridge.subscribe(e => {
if (e.detail.type === 'VKWebAppGetUserInfoResult') {
...
}
});
bridge.send('VKWebAppGetUserInfo', {});
Then it's a duplicate, unfortunately; thanks for the issue
#122
|
gharchive/issue
| 2020-08-11T08:24:26 |
2025-04-01T04:55:44.731526
|
{
"authors": [
"ilyapishchulin",
"maksimdegtyarev"
],
"repo": "VKCOM/vk-bridge",
"url": "https://github.com/VKCOM/vk-bridge/issues/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2754544167
|
Fram 11
updated video in ticket - https://vacso.atlassian.net/browse/FRAM-11?focusedCommentId=10026
Let me know where you want to keep viewer's component on the page.
@BhuvaneshPatil overall, looks good. some tweaks:
Move the component to the same row as the record header. Justify it right.
Move all the data pulling logic for the activeviewers component into that component.
Switch from callbackend to useBackend, which uses TanStack Query under the hood. To force it to refresh every 5 seconds, use a reload useState and pass it to the reload of useBackend. This will force it to update when the useState increments via an interval in a useEffect.
Inside view.js on the backend, feel free to use knex directly. it is available to every class via this.knex. This way you can request just the relevant rows, and use more advanced queries then rowsget provides.
Let me know if you have any questions. If you have any issues, let me know and we can either discuss more, or I can refactor the code myself.
Thanks!
also please merge the latest main into this branch.
changed the position of Viewers component, moved to header
moved logic inside the component
@BhuvaneshPatil
This all looks good. I pushed a commit into your branch just to tweak some spacing so the page doesn't move when the active viewers data comes in.
Can we combine the two useBackend calls into one that both marks this user as active and gets the list of all active users?
Yeah sounds like good idea.
I will update the code
|
gharchive/pull-request
| 2024-12-22T08:13:21 |
2025-04-01T04:55:44.877058
|
{
"authors": [
"Bhuvanesh-Fictiv",
"BhuvaneshPatil",
"trippd6"
],
"repo": "VacsoLLC/frameworkFrontend",
"url": "https://github.com/VacsoLLC/frameworkFrontend/pull/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2291173210
|
Add another module of transportation
please assign me this issue
Please explain the issue or feature in brief.
So it is a feature where all the modes of transportation will be shown and a description of each mode of transport
I can also add login and signup page and integrate it with firebase
so I should add both login and sign up page and transportation module??
No just add the transportation module as well keep it light weight as possible use SVGs for that
ok done. Thank you
Hey, Khushi Soni this side, I'm a GSSoC contributor and would love to handle this issue
Hey @sapatevaibhav can you please asssign me this issue . I can work on this
@Rashii1218, update??
So I had completed half of the work when my exams started; they will be over on 30th May. I'll try to complete it as soon as possible after my exams
Do your best in exams....
I would like to work on this issue. Can you please assign it to me.
|
gharchive/issue
| 2024-05-12T05:22:02 |
2025-04-01T04:55:44.882396
|
{
"authors": [
"LEARNER-dakshesh",
"NamrataCSalvi",
"Rashii1218",
"khush1yaaar",
"sapatevaibhav"
],
"repo": "VaibhavCodeClub/learn",
"url": "https://github.com/VaibhavCodeClub/learn/issues/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1469575972
|
Add support for OpenWrt
Describe your feature / Опишите ваше предложение
Adding support for Openwrt can go a long way to bypass censorship over various devices connected to a single router
RTFM README!
GoodbyeDPI is Windows-only.
OpenWRT is based on Linux, not Windows.
You can check https://github.com/ValdikSS/GoodbyeDPI#similar-projects for Linux.
You can just run a Windows VM with GoodbyeDPI, install a proxy server such as Privoxy, then point your clients to the proxy so all their traffic is filtered by GoodbyeDPI.
If you share documentation of what the program does from start to finish, a new OpenWRT plugin could be written for it. With Zapret, you can simulate exactly what this program does and write a more automated GoodbyeDPI plugin.
|
gharchive/issue
| 2022-11-30T12:42:43 |
2025-04-01T04:55:44.885433
|
{
"authors": [
"7gxycn08",
"omercerci",
"r4sas",
"rdavydov",
"yesrab"
],
"repo": "ValdikSS/GoodbyeDPI",
"url": "https://github.com/ValdikSS/GoodbyeDPI/issues/293",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2304037231
|
🛑 b2b MarketingServices is down
In 8f05e61, b2b MarketingServices (https://b2b-marketingservices.de/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: b2b MarketingServices is back up in 3f17b8e after 8 minutes.
|
gharchive/issue
| 2024-05-18T11:26:44 |
2025-04-01T04:55:44.888429
|
{
"authors": [
"Valecha24"
],
"repo": "Valecha24/WebMonitoring",
"url": "https://github.com/Valecha24/WebMonitoring/issues/2558",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2312392169
|
🛑 b2b MarketingServices is down
In 2f9b2f5, b2b MarketingServices (https://b2b-marketingservices.de/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: b2b MarketingServices is back up in 235983f after 11 minutes.
|
gharchive/issue
| 2024-05-23T09:15:11 |
2025-04-01T04:55:44.891192
|
{
"authors": [
"Valecha24"
],
"repo": "Valecha24/WebMonitoring",
"url": "https://github.com/Valecha24/WebMonitoring/issues/3141",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2342033191
|
🛑 b2b MarketingServices is down
In 88b0415, b2b MarketingServices (https://b2b-marketingservices.de/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: b2b MarketingServices is back up in 0f5c892 after 8 minutes.
|
gharchive/issue
| 2024-06-09T05:35:08 |
2025-04-01T04:55:44.893743
|
{
"authors": [
"Valecha24"
],
"repo": "Valecha24/WebMonitoring",
"url": "https://github.com/Valecha24/WebMonitoring/issues/5123",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2274316013
|
🛑 b2b MarketingServices is down
In 3e621da, b2b MarketingServices (https://b2b-marketingservices.de/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: b2b MarketingServices is back up in c9ee500 after 27 minutes.
|
gharchive/issue
| 2024-05-01T22:55:49 |
2025-04-01T04:55:44.896333
|
{
"authors": [
"Valecha24"
],
"repo": "Valecha24/WebMonitoring",
"url": "https://github.com/Valecha24/WebMonitoring/issues/937",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
394055581
|
Tutorial Template
Comments on this issue will contain potential templates to be used in the eventual tutorial rewrite.
Preface
A discussion on the topic of the tutorial section with relevant notes, links, images, and diagrams.
Goals
Unordered
List
Of
Goals
For
This
Section
Code
Name
Diff
Current Version
Previous Version
Main
Link
Link
Link
Test
Link
Link
Link
Results
Videos and images showing the result of this section.
A link to the release for this section, so the user can download the Jar and run the game.
For the previous template, some code may need to be explained, so there could be additional sub-sections highlighting particular files or pieces of code along with explanations.
|
gharchive/issue
| 2018-12-25T23:27:54 |
2025-04-01T04:55:44.915621
|
{
"authors": [
"Valkryst"
],
"repo": "Valkryst/VTerminal_Tutorial",
"url": "https://github.com/Valkryst/VTerminal_Tutorial/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1813001350
|
This issue occurs when only Valkyrien Skies and addons are installed and no other mods
[X] I have tested this issue and it occurs when no other mods are installed
Minecraft Version
1.19
Mod Loader
Forge
Issue description
I get quite far in the game, then after a bit the server doesn't crash — it just shuts off — and every time I turn it back on you can play for about 2 minutes before it shuts down again.
Issue reproduction
Use Aternos server hosting, then just survive and make an airship; after a while this will eventually happen.
Logs
No response
That's not a bug — Aternos is not powerful enough to handle VS2. The only solution, to my knowledge, is to just not use free hosting such as Aternos.
That's not a bug — Aternos is not powerful enough to handle VS2. The only solution, to my knowledge, is to just not use free hosting such as Aternos.
So, the mod will probably never be able to run on Aternos servers?
For free hosting I would have to recommend Minehut or ... what's that one with the blue cube logo?
|
gharchive/issue
| 2023-07-20T01:53:34 |
2025-04-01T04:55:44.958761
|
{
"authors": [
"Miszelka",
"NullHarp",
"SpyingGnome775",
"walksanatora"
],
"repo": "ValkyrienSkies/Eureka",
"url": "https://github.com/ValkyrienSkies/Eureka/issues/234",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
232955360
|
Steam Audio does not respond to position change in Scene Tab in Unity.
Reported via Steam Audio Community Forum -
http://steamcommunity.com/app/596420/discussions/0/1290691937711370700/
The Game tab needs to be visible for the EndOfFrameUpdate function to be called, which updates the source and listener positions.
This issue has been fixed in our 2.0-beta.15 release.
|
gharchive/issue
| 2017-06-01T17:38:05 |
2025-04-01T04:55:45.140283
|
{
"authors": [
"achandak"
],
"repo": "ValveSoftware/steam-audio",
"url": "https://github.com/ValveSoftware/steam-audio/issues/28",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|