id (string, 4–10 chars) | text (string, up to 2.14M chars) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (timestamp, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
2552394616 | [Choix du statut] Add SCIC, SCOP, and CAPE
Add SCIC, SCOP, and CAPE to the legal statuses offered by the status comparator.
Discussed in https://github.com/betagouv/mon-entreprise/discussions/2672
Originally posted by johangirod on November 30, 2021
Why not mention SCOPs, SCICs, the CAPE (incubators and cooperatives), and umbrella employment (portage salarial)?
The best approach would be to build the flow from the question "Make money" / "Non-profit" (wording to be revised).
We could then link back into the standard flow to help choose between a SCOP SARL and a SCOP SAS.
Left at the discussion stage. Little user feedback on the topic, unclear need, and implementation complexity.
| gharchive/issue | 2024-09-27T08:59:00 | 2025-04-01T06:38:02.632628 | {
"authors": [
"VeroniqueR75",
"liliced"
],
"repo": "betagouv/mon-entreprise",
"url": "https://github.com/betagouv/mon-entreprise/issues/3137",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
794310510 | Add an API health check before processing content
Context
Because the Heroku API takes some time to become available, the action could not reach it and its execution failed.
Outcome
A retry-based check was therefore implemented, waiting until the API becomes available.
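A retry of this kind can be sketched as follows (Python, with hypothetical names and URL; the actual action's implementation may differ):

```python
import time
import urllib.request
import urllib.error

def wait_for(check, retries=10, delay=2.0):
    """Retry `check()` until it returns True; give up after `retries` attempts."""
    for attempt in range(retries):
        if check():
            return True
        time.sleep(delay)
    return False

def api_is_up(url):
    """One health probe: True if the API answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False  # e.g. the Heroku dyno is still waking up

# Usage (hypothetical URL):
# if not wait_for(lambda: api_is_up("https://example.herokuapp.com/health")):
#     raise SystemExit("API never became available; aborting content processing")
```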
Two things:
The description is missing context. It is very important that pull requests have context, to speed up the process and so that anyone can do code review;
Good point. I was unsure because, as this is a public repository, we were asked not to include a description with information about it in the PR 🤔
The code review process was not followed;
The other teams don't have access to this repo yet, so I can't tag tech-code-review; I'll see how I can configure it (:
@Caciquez you can provide context without including sensitive information. They are different things.
About access: I've granted it, and now the process can be followed on this PR as well.
| gharchive/pull-request | 2021-01-26T15:11:07 | 2025-04-01T06:38:02.653933 | {
"authors": [
"Caciquez",
"caironoleto"
],
"repo": "betrybe/process-content",
"url": "https://github.com/betrybe/process-content/pull/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
211545281 | Adubey64
Files for documentation.
Thanks, Anshu.
| gharchive/pull-request | 2017-03-02T23:02:07 | 2025-04-01T06:38:02.654781 | {
"authors": [
"adubey64",
"curfman"
],
"repo": "betterscientificsoftware/betterscientificsoftware.github.io",
"url": "https://github.com/betterscientificsoftware/betterscientificsoftware.github.io/pull/27",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2755137638 | 🛑 Reges Hotel Offers Portal is down
In d780987, Reges Hotel Offers Portal (https://offer.reges.com.tr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Reges Hotel Offers Portal is back up in 003d2ed after 8 minutes.
| gharchive/issue | 2024-12-23T03:38:29 | 2025-04-01T06:38:02.657445 | {
"authors": [
"betterwithagency"
],
"repo": "betterwithagency/status-page",
"url": "https://github.com/betterwithagency/status-page/issues/1461",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
866175861 | [SGP-16183] Parse Locust Stats By Individual Load Test Endpoint and Serve to Users
This PR uses the stats gathering already implemented by Locust, but parses the stats by each individual endpoint hit in a Locust load test (instead of the built-in summation of all requests) and serves them to Locust users.
The goal of this PR is to provide JSON that plotly can then use to give Bevy information on the performance of individual API endpoints as the load varies over time.
Eventually it would be nice to have these stats displayed in a chart during Locust runtime, but that would be an extra unit of work. Right now the data is simply served in JSON format and then plotted manually.
Eventually it would also be nice to have an automated job that posts these stats to Datadog or another performance-monitoring dashboard.
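A sketch of the per-endpoint grouping idea (Python; the field names and structure here are hypothetical stand-ins, not Locust's actual internal stats API):

```python
import json
from collections import defaultdict

def stats_by_endpoint(entries):
    """Group raw request entries by (method, name) and aggregate basic stats."""
    grouped = defaultdict(lambda: {"num_requests": 0, "total_response_time": 0.0})
    for e in entries:
        key = f'{e["method"]} {e["name"]}'
        g = grouped[key]
        g["num_requests"] += 1
        g["total_response_time"] += e["response_time"]
    # Compute the average response time per endpoint.
    for g in grouped.values():
        g["avg_response_time"] = g["total_response_time"] / g["num_requests"]
    return dict(grouped)

entries = [
    {"method": "GET", "name": "/items", "response_time": 120.0},
    {"method": "GET", "name": "/items", "response_time": 80.0},
    {"method": "POST", "name": "/items", "response_time": 200.0},
]
print(json.dumps(stats_by_endpoint(entries), indent=2))
```

The JSON emitted this way is exactly the shape a plotting library can consume per endpoint over time.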
NOTE: this is being merged into bevy/locust, not into locustio/locust
Thank you @codergolem and @slimPickens
| gharchive/pull-request | 2021-04-23T14:39:18 | 2025-04-01T06:38:02.664775 | {
"authors": [
"ecedmondson"
],
"repo": "bevy/locust",
"url": "https://github.com/bevy/locust/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2527167052 | bevy_color's operations are not stable across all hardware due to use of trigonometric float methods
Bevy version
Commit 87afa98167b67e09d5701e49b02e185c8f09439d
[Optional] Relevant system information
DxDiag.txt
Lenovo Thinkbook Pro Gen 3
Razer Core X Chroma with an RTX 2060
Samsung Odyssey Neo G9 54"
What you did
Ran cargo run -p ci -- test
What went wrong
---- oklcha::tests::test_to_from_srgba_2 stdout ----
thread 'oklcha::tests::test_to_from_srgba_2' panicked at crates\bevy_color\src\oklcha.rs:415:13:
gray: Oklcha { lightness: 0.5981808, chroma: 2.3841858e-7, hue: 0.0, alpha: 1.0 } != Oklcha { lightness: 0.5981807, chroma: 9.424322e-8, hue: 18.434948, alpha: 1.0 }
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
oklcha::tests::test_to_from_srgba_2
Additional information
This is not on main. It is on https://github.com/BenjaminBrienen/bevy/tree/rename-add-to-enqueue
This is super mysterious: my guess is that a floating-point implementation or something is inconsistent.
Also happens on the (currently) latest commit ca6056d6d292874e5da57bc4de68429fed1af8ad
So it's probably not a cosmic ray 😂
@BenjaminBrienen What OS?
This test involves transcendental functions, whose implementations vary from operating system to operating system because they are implemented in software and are not evaluated to any specified degree of precision.
Ironically, it actually should be relatively stable across hardware, just not across software.
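The failing assertion above illustrates this well: for a near-gray color the chroma is almost zero, so the hue, computed as atan2 of the tiny Oklab a/b components, is numerically meaningless, and last-bit differences swing it wildly. A small Python sketch with hypothetical component values close to the reported ones:

```python
import math

def oklab_to_lch_hue(a, b):
    """Hue in degrees from the Oklab a/b components, as atan2(b, a)."""
    return math.degrees(math.atan2(b, a)) % 360.0

# For a perfect gray, a == b == 0 and the hue is undefined.
# Two near-gray results whose components differ only by float noise
# can land on very different hues:
print(oklab_to_lch_hue(2.4e-7, 0.0))  # hue 0.0, chroma ~2.4e-7
print(oklab_to_lch_hue(9e-8, 3e-8))   # hue ~18.43°, chroma ~9.5e-8
```

This mirrors the reported failure, where one platform produced hue 0.0 and another 18.434948 for the same gray.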
It happens on bevy/main as well
@Jondolf says that we should be using https://github.com/bevyengine/bevy/blob/5a0c09d38fd63383281991da9d6c353ad7228f50/crates/bevy_math/clippy.toml, and forbidding the use of the std version of these methods.
IMO we should enable this lint at the workspace level, and allow specific crates to opt-out if they really need the performance.
I'll try my hand at this
All of the math has been moved to bevy_math, but of course I get the same result since libm is not enabled by default. Should I add libm to the default features of bevy_math?
IMO no; we should enable that by default later after serious benchmarks.
| gharchive/issue | 2024-09-15T19:55:40 | 2025-04-01T06:38:02.671772 | {
"authors": [
"BenjaminBrienen",
"alice-i-cecile",
"workingjubilee"
],
"repo": "bevyengine/bevy",
"url": "https://github.com/bevyengine/bevy/issues/15236",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2675310671 | panic in bevy_render::renderer::render_system when running /examples/3d/*
Bevy version
v0.14.2
[Optional] Relevant system information
cargo 1.82.0
Ubuntu 24.04.1 LTS on WSL2 Windows 11 x86_64
AdapterInfo { name: "D3D12 (NVIDIA GeForce RTX 4060 Ti)", vendor: 0, device: 0, device_type: Other, driver: "OpenGL", driver_info: "4.6 (Compatibility Profile) Mesa 24.0.9-0ubuntu0.2", backend: Gl }
What you did
Running a copy-paste of bevy/examples/3d/lighting.rs
It builds fine, but this is the output of cargo run.
I've already checked I'm on Bevy 0.14.2 and run cargo clean && cargo update
What went wrong
$ RUST_BACKTRACE=1 zsh -c 'cargo run'
Finished `dev` profile [optimized + debuginfo] target(s) in 0.12s
Running `target/debug/alienlines`
2024-11-20T10:10:58.115461Z INFO bevy_diagnostic::system_information_diagnostics_plugin::internal: SystemInfo { os: "Linux 24.04 Ubuntu", kernel: "5.15.167.4-microsoft-standard-WSL2", cpu: "12th Gen Intel(R) Core(TM) i7-12700K", core_count: "10", memory: "15.5 GiB" }
2024-11-20T10:10:58.117187Z WARN winit::platform_impl::linux::x11::xdisplay: error setting XSETTINGS; Xft options won't reload automatically
MESA: error: ZINK: failed to choose pdev
libEGL warning: egl: failed to create dri2 screen
2024-11-20T10:10:58.298586Z INFO bevy_render::renderer: AdapterInfo { name: "D3D12 (NVIDIA GeForce RTX 4060 Ti)", vendor: 0, device: 0, device_type: Other, driver: "OpenGL", driver_info: "4.6 (Compatibility Profile) Mesa 24.0.9-0ubuntu0.2", backend: Gl }
ALSA lib confmisc.c:855:(parse_card) cannot find card '0'
ALSA lib conf.c:5204:(_snd_config_evaluate) function snd_func_card_inum returned error: No such file or directory
ALSA lib confmisc.c:422:(snd_func_concat) error evaluating strings
ALSA lib conf.c:5204:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
ALSA lib confmisc.c:1342:(snd_func_refer) error evaluating name
ALSA lib conf.c:5204:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5727:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2721:(snd_pcm_open_noupdate) Unknown PCM default
2024-11-20T10:10:58.334268Z WARN bevy_audio::audio_output: No audio device found.
2024-11-20T10:10:58.366440Z WARN bevy_pbr::ssao: ScreenSpaceAmbientOcclusionPlugin not loaded. GPU lacks support: TextureFormat::R16Float does not support TextureUsages::STORAGE_BINDING.
2024-11-20T10:10:58.367239Z INFO bevy_winit::system: Creating new window "App" (Entity { index: 0, generation: 1 })
2024-11-20T10:10:58.367405Z WARN winit::platform_impl::linux::x11::util::randr: XRandR reported that the display's 0mm in size, which is certifiably insane
2024-11-20T10:10:58.367457Z INFO winit::platform_impl::linux::x11::window: Guessed window scale factor: 1
2024-11-20T10:10:58.368579Z ERROR bevy_asset::server: Path not found: /home/smokracek/dev/alienlines/assets/branding/bevy_logo_light.png
2024-11-20T10:10:58.499814Z ERROR wgpu_hal::gles: GLES: [ShaderCompiler/Error] ID 1 : 0:3(12): error: extension `GL_EXT_texture_shadow_lod' unsupported in fragment shader
2024-11-20T10:10:58.499850Z ERROR wgpu_hal::gles::device: Shader compilation failed: 0:3(12): error: extension `GL_EXT_texture_shadow_lod' unsupported in fragment shader
2024-11-20T10:10:58.500114Z ERROR wgpu_core::device::global: Device::create_render_pipeline error: Internal error in ShaderStages(FRAGMENT) shader: 0:3(12): error: extension `GL_EXT_texture_shadow_lod' unsupported in fragment shader
2024-11-20T10:10:58.500127Z ERROR wgpu::backend::wgpu_core: Shader translation error for stage ShaderStages(FRAGMENT): 0:3(12): error: extension `GL_EXT_texture_shadow_lod' unsupported in fragment shader
2024-11-20T10:10:58.500131Z ERROR wgpu::backend::wgpu_core: Please report it to https://github.com/gfx-rs/wgpu
2024-11-20T10:10:58.500143Z ERROR wgpu::backend::wgpu_core: Handling wgpu errors as fatal by default
thread 'Async Compute Task Pool (0)' panicked at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wgpu-0.20.1/src/backend/wgpu_core.rs:2996:5:
wgpu error: Validation Error
Caused by:
In Device::create_render_pipeline
note: label = `pbr_opaque_mesh_pipeline`
Internal error in ShaderStages(FRAGMENT) shader: 0:3(12): error: extension `GL_EXT_texture_shadow_lod' unsupported in fragment shader
stack backtrace:
0: rust_begin_unwind
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:662:5
1: core::panicking::panic_fmt
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/panicking.rs:74:14
2: wgpu::backend::wgpu_core::default_error_handler
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wgpu-0.20.1/src/backend/wgpu_core.rs:2996:5
3: core::ops::function::Fn::call
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/ops/function.rs:79:5
4: <alloc::boxed::Box<F,A> as core::ops::function::Fn<Args>>::call
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/alloc/src/boxed.rs:2245:9
5: wgpu::backend::wgpu_core::ErrorSinkRaw::handle_error
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wgpu-0.20.1/src/backend/wgpu_core.rs:2982:17
6: wgpu::backend::wgpu_core::ContextWgpuCore::handle_error
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wgpu-0.20.1/src/backend/wgpu_core.rs:293:9
7: <wgpu::backend::wgpu_core::ContextWgpuCore as wgpu::context::Context>::device_create_render_pipeline
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wgpu-0.20.1/src/backend/wgpu_core.rs:1182:13
8: <T as wgpu::context::DynContext>::device_create_render_pipeline
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wgpu-0.20.1/src/context.rs:2266:13
9: wgpu::Device::create_render_pipeline
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wgpu-0.20.1/src/lib.rs:2692:26
10: bevy_render::renderer::render_device::RenderDevice::create_render_pipeline
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_render-0.14.2/src/renderer/render_device.rs:131:36
11: bevy_render::render_resource::pipeline_cache::PipelineCache::start_create_render_pipeline::{{closure}}
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_render-0.14.2/src/render_resource/pipeline_cache.rs:773:21
12: async_executor::Executor::spawn_inner::{{closure}}
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-executor-1.13.1/src/lib.rs:250:20
13: async_task::raw::RawTask<F,T,S,M>::run::{{closure}}
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-task-4.7.1/src/raw.rs:550:21
14: core::ops::function::FnOnce::call_once
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/ops/function.rs:250:5
15: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/panic/unwind_safe.rs:272:9
16: std::panicking::try::do_call
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:554:40
17: std::panicking::try
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:518:19
18: std::panic::catch_unwind
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panic.rs:345:14
19: async_task::raw::RawTask<F,T,S,M>::run
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-task-4.7.1/src/raw.rs:549:23
20: async_task::runnable::Runnable<M>::run
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-task-4.7.1/src/runnable.rs:781:18
21: async_executor::State::run::{{closure}}::{{closure}}
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-executor-1.13.1/src/lib.rs:741:21
22: <futures_lite::future::Or<F1,F2> as core::future::future::Future>::poll
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-2.5.0/src/future.rs:457:33
23: async_executor::State::run::{{closure}}
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-executor-1.13.1/src/lib.rs:748:32
24: async_executor::Executor::run::{{closure}}
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-executor-1.13.1/src/lib.rs:344:34
25: futures_lite::future::block_on::{{closure}}
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-2.5.0/src/future.rs:99:19
26: std::thread::local::LocalKey<T>::try_with
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/thread/local.rs:283:12
27: std::thread::local::LocalKey<T>::with
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/thread/local.rs:260:9
28: futures_lite::future::block_on
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-2.5.0/src/future.rs:78:11
29: bevy_tasks::task_pool::TaskPool::new_internal::{{closure}}::{{closure}}::{{closure}}::{{closure}}
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.14.2/src/task_pool.rs:176:37
30: std::panicking::try::do_call
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:554:40
31: std::panicking::try
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:518:19
32: std::panic::catch_unwind
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panic.rs:345:14
33: bevy_tasks::task_pool::TaskPool::new_internal::{{closure}}::{{closure}}::{{closure}}
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.14.2/src/task_pool.rs:170:43
34: std::thread::local::LocalKey<T>::try_with
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/thread/local.rs:283:12
35: std::thread::local::LocalKey<T>::with
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/thread/local.rs:260:9
36: bevy_tasks::task_pool::TaskPool::new_internal::{{closure}}::{{closure}}
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.14.2/src/task_pool.rs:163:50
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Encountered a panic in system `bevy_render::render_resource::pipeline_cache::PipelineCache::process_pipeline_queue_system`!
thread '<unnamed>' panicked at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_render-0.14.2/src/render_resource/pipeline_cache.rs:553:28:
index out of bounds: the len is 0 but the index is 1
stack backtrace:
0: rust_begin_unwind
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:662:5
1: core::panicking::panic_fmt
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/panicking.rs:74:14
2: core::panicking::panic_bounds_check
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/panicking.rs:276:5
3: <usize as core::slice::index::SliceIndex<[T]>>::index
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/slice/index.rs:302:10
4: core::slice::index::<impl core::ops::index::Index<I> for [T]>::index
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/slice/index.rs:16:9
5: <alloc::vec::Vec<T,A> as core::ops::index::Index<I>>::index
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/alloc/src/vec/mod.rs:2920:9
6: bevy_render::render_resource::pipeline_cache::PipelineCache::get_render_pipeline
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_render-0.14.2/src/render_resource/pipeline_cache.rs:553:28
7: <bevy_render::render_phase::SetItemPipeline as bevy_render::render_phase::draw::RenderCommand<P>>::render
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_render-0.14.2/src/render_phase/mod.rs:1079:14
8: <(C0,C1,C2,C3) as bevy_render::render_phase::draw::RenderCommand<P>>::render
9: <bevy_render::render_phase::draw::RenderCommandState<P,C> as bevy_render::render_phase::draw::Draw<P>>::draw
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_render-0.14.2/src/render_phase/draw.rs:298:9
10: bevy_render::render_phase::SortedRenderPhase<I>::render_range
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_render-0.14.2/src/render_phase/mod.rs:801:17
11: bevy_render::render_phase::SortedRenderPhase<I>::render
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_render-0.14.2/src/render_phase/mod.rs:773:9
12: <bevy_ui::render::render_pass::UiPassNode as bevy_render::render_graph::node::Node>::run
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ui-0.14.2/src/render/render_pass.rs:83:27
13: bevy_render::renderer::graph_runner::RenderGraphRunner::run_graph
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_render-0.14.2/src/renderer/graph_runner.rs:226:21
14: bevy_render::renderer::graph_runner::RenderGraphRunner::run_graph
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_render-0.14.2/src/renderer/graph_runner.rs:233:21
15: bevy_render::renderer::graph_runner::RenderGraphRunner::run_graph
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_render-0.14.2/src/renderer/graph_runner.rs:233:21
16: bevy_render::renderer::graph_runner::RenderGraphRunner::run
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_render-0.14.2/src/renderer/graph_runner.rs:81:9
17: bevy_render::renderer::render_system
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_render-0.14.2/src/renderer/mod.rs:40:15
18: core::ops::function::FnMut::call_mut
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/ops/function.rs:166:5
19: core::ops::function::impls::<impl core::ops::function::FnMut<A> for &mut F>::call_mut
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/ops/function.rs:294:13
20: <Func as bevy_ecs::system::exclusive_function_system::ExclusiveSystemParamFunction<fn(F0) -> Out>>::run::call_inner
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.14.2/src/system/exclusive_function_system.rs:218:21
21: <Func as bevy_ecs::system::exclusive_function_system::ExclusiveSystemParamFunction<fn(F0) -> Out>>::run
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.14.2/src/system/exclusive_function_system.rs:221:17
22: <bevy_ecs::system::exclusive_function_system::ExclusiveFunctionSystem<Marker,F> as bevy_ecs::system::system::System>::run::{{closure}}
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.14.2/src/system/exclusive_function_system.rs:111:23
23: bevy_ecs::world::World::last_change_tick_scope
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.14.2/src/world/mod.rs:2215:9
24: <bevy_ecs::system::exclusive_function_system::ExclusiveFunctionSystem<Marker,F> as bevy_ecs::system::system::System>::run
at /home/smokracek/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.14.2/src/system/exclusive_function_system.rs:103:9
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Encountered a panic in system `bevy_render::renderer::render_system`!
Additional information
Unfortunately I'm not too experienced with graphics programming, so I hope I'm not missing anything simple here. Many of the 2D examples have run perfectly on WSL, but so far all the 3D ones fail like this.
All the examples, including the 3D ones, run fine in the browser.
You need to install/update NVIDIA drivers; it's running in GL compatibility mode, but that has insufficient features for our renderer (missing GL_EXT_texture_shadow_lod).
| gharchive/issue | 2024-11-20T10:16:55 | 2025-04-01T06:38:02.680219 | {
"authors": [
"atlv24",
"smokracek"
],
"repo": "bevyengine/bevy",
"url": "https://github.com/bevyengine/bevy/issues/16445",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1268603890 | Use surface.get_preferred_format() to prevent crashes on wayland with nvidia cards.
What problem does this solve or what need does it fill?
Currently bevy uses a hardcoded sRGB surface format, which is not (yet) supported by nvidia on linux.
What solution would you like?
The straightforward solution is to request the GPU's preferred surface format, which will use the best available on each platform.
What alternative(s) have you considered?
Perhaps we could wait months or even years until nvidia decides on adding sRGB support on Wayland.
Additional context
None.
The straightforward solution is to request the GPU's preferred surface format, which will use the best available on each platform.
Doing so correctly will require adding code to convert from sRGB to whichever colorspace said surface format uses, which isn't straight forward in the general case.
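For the sRGB case specifically, that conversion is the standard piecewise sRGB transfer function; a minimal Python sketch (the real fix would live in shader/blit code, not host-side Python):

```python
def srgb_to_linear(c):
    """Decode an sRGB-encoded channel (0..1) to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear-light channel (0..1) to sRGB."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1 / 2.4) - 0.055

print(round(srgb_to_linear(0.5), 4))                   # ≈ 0.214
print(round(linear_to_srgb(srgb_to_linear(0.5)), 4))   # round-trips to 0.5
```

Skipping this step when presenting to a non-sRGB surface is exactly what produces washed-out or too-dark output.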
We no longer crash, but randomly choose the first available format the compositor reports. Explained in f5322cd757. On proprietary nvidia drivers on wayland, this results in ARGB8888, which means all the colors are too dark, see #7318
| gharchive/issue | 2022-06-12T14:42:03 | 2025-04-01T06:38:02.684190 | {
"authors": [
"HeavyRain266",
"bjorn3",
"johanhelsing"
],
"repo": "bevyengine/bevy",
"url": "https://github.com/bevyengine/bevy/issues/4995",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2098844793 | Fix panic in examples using argh on the web
Objective
Fixes #11503
Solution
Use an empty set of args on the web.
Discussion
Maybe in the future we could wrap this so that we can use query args on the web or something, but this was the minimum changeset I could think of to keep the functionality and make them not panic on the web.
Imo Args::default() would be cleaner, and it should have a comment on each of them saying from_env panics on web.
Imo Args::default() would be cleaner
I am not sure how that would work.
I didn't see a better way to do this in the argh docs, and I think it's better than specifying the defaults twice, once for argh and once for the Default trait
| gharchive/pull-request | 2024-01-24T18:25:04 | 2025-04-01T06:38:02.686616 | {
"authors": [
"Elabajaba",
"mockersf",
"rparrett"
],
"repo": "bevyengine/bevy",
"url": "https://github.com/bevyengine/bevy/pull/11513",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2440201315 | Add freebsd support for sysinfo
I'm not sure whether bevy works on FreeBSD or not. But in case it does, it's better to allow sysinfo to be used as well if users want.
I don't think I've ever seen anyone try to use Bevy with freebsd, but I think this change is correct regardless in case we do add support or they just want to use the ECS or something.
| gharchive/pull-request | 2024-07-31T14:27:13 | 2025-04-01T06:38:02.687763 | {
"authors": [
"GuillaumeGomez",
"alice-i-cecile"
],
"repo": "bevyengine/bevy",
"url": "https://github.com/bevyengine/bevy/pull/14553",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
855389396 | Add marker components for cameras
Adds marker components for cameras, as suggested in #1854.
Fixes #1854.
Changes alien cake addict example to use the GameplayCamera marker in order to show use.
IMO we should remove the name field from Camera as part of this PR as well.
Ok, I started on this.
But I think I will have to study the code (and/or get some reviewer comments) on what to do with bevy_render/src/camera/active_cameras.rs if the name field goes away.
That system is based around &str access to cameras.
ActiveCameras again is referenced in a few places: render_graph, bevy_sprite, bevy_ui, and the multiple windows example.
It sounds very manageable with a little time, but please give me some input on what the best way forward is:
Split removing the name field into another issue
Rework active_cameras.rs to not be "stringly typed", and adapt all usages to follow
Remove active_camera.rs altogether? Maybe it isn't needed when markers are available
Taking a closer look @Grindv1k, I think that removing the name field deserves to be part of a follow-up issue and PR. active_camera gets plumbed around in a number of places, and won't be a trivial fix.
This solves the end-user case; we can do code-quality clean-up on the internals separately, particularly since the rendering is due for a rework.
Camera names are also "end user" apis. I do think marker components and names solve very similar problems and having both feels a bit odd. I think marker components are the right move, but I'd want a concrete plan before committing to them. Prior to adding marker components, I think we should have either:
a complete replacement of camera names in this pr
a merged RFC illustrating a clear plan for the "marker only" future
While we wait, users can already grab the entity for a given camera name using ActiveCameras, then plug that into camera queries.
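The trade-off between string names and markers can be illustrated generically; a Python sketch with hypothetical names contrasting a name-keyed camera registry with a marker-style query:

```python
# Name-keyed lookup ("stringly typed"): a typo fails only at runtime.
active_cameras = {"Camera3d": 42, "UiCamera": 43}  # name -> entity id
entity = active_cameras.get("Camera3d")

# Marker-style lookup: the "name" is a type, checked at authoring time.
class GameplayCamera:
    """Zero-sized marker component."""

world = {42: {GameplayCamera}, 43: set()}  # entity id -> marker components

def query_with_marker(world, marker):
    """Return entity ids that carry the given marker component."""
    return [e for e, markers in world.items() if marker in markers]

print(query_with_marker(world, GameplayCamera))  # [42]
```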
Closing in favor of #3635 <3 Thanks for all the exploration you did here!
| gharchive/pull-request | 2021-04-11T19:34:17 | 2025-04-01T06:38:02.693794 | {
"authors": [
"Grindv1k",
"alice-i-cecile",
"cart"
],
"repo": "bevyengine/bevy",
"url": "https://github.com/bevyengine/bevy/pull/1888",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
942379952 | Remove unused deps
Objective
Reduce compilation time
Solution
Remove unused dependencies. While this PR doesn't remove any crates from Cargo.lock, it may unlock more build parallelism.
Is there any GitHub Action we can add to make sure we don't have unused dependencies?
cargo udeps is a useful command for this.
The duplicate dependency skip list needs to be updated it seems. proc-macro-crate is now duplicated.
yup #2456
Rebased to re-trigger CI.
bors r+
| gharchive/pull-request | 2021-07-12T19:35:05 | 2025-04-01T06:38:02.696600 | {
"authors": [
"NathanSWard",
"bjorn3",
"mockersf"
],
"repo": "bevyengine/bevy",
"url": "https://github.com/bevyengine/bevy/pull/2455",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1866627781 | [WIP] Motion Vectors for morph targets & skinned meshes
Objective
Add TAA support when using morph targets and skinned meshes
Solution
General solution: Keep track of the SkinnedMeshUniform and MorphUniform buffers from last frame
To do that: add a DoubleBufferVec struct, a double buffer that keeps the old values around.
Replace the BufferVec in SkinnedMeshUniform and MorphUniform by that.
Upload the old buffer in addition to the new one for the prepass shader
Probably a lot of issues I didn't think of yet.
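The double-buffering idea itself is straightforward; a minimal Python sketch with hypothetical names (the real DoubleBufferVec wraps GPU buffers in Rust):

```python
class DoubleBuffer:
    """Keeps the previous frame's values alongside the current ones."""

    def __init__(self):
        self.current = []
        self.previous = []

    def swap(self):
        # At the start of a frame, last frame's data becomes "previous"
        # and the current buffer is cleared for new writes.
        self.current, self.previous = [], self.current

buf = DoubleBuffer()
buf.current.extend([1.0, 2.0, 3.0])  # frame N joint matrices (stand-in)
buf.swap()
buf.current.extend([1.5, 2.5, 3.5])  # frame N+1
print(buf.previous)  # frame N data, available for motion vectors: [1.0, 2.0, 3.0]
```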
Alternatives
#7502 would allow us to keep around the old vertex positions; that could likely be useful and much more performant.
Changelog
Add a DoubleBufferVec struct, a double buffer that keeps the old values around.
Add handling of morph target & skinned mesh animation in the TAA shader.
Migration Guide
The PrepassPipeline's view_layout_no_motion_vectors and view_layout_motion_vectors fields are now consolidated in a layouts: PrepassLayouts field
PrepassBindGroup is now PrepassBindGroups, since it has several bind groups.
Most likely more to come
I'll close this in favor of a new PR. Too bothered to try to rebase. And it contains a lot of unrelated changes.
| gharchive/pull-request | 2023-08-25T08:47:28 | 2025-04-01T06:38:02.701085 | {
"authors": [
"nicopap"
],
"repo": "bevyengine/bevy",
"url": "https://github.com/bevyengine/bevy/pull/9569",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1118140730 | go-storage documentation page not found
go-storage documentation on beyondstorage.io yields page not found
beyondstorage.io
Temporary fix: removing the trailing index segment works: https://beyondstorage.io/docs/go-storage
The website is also fixed now (links no longer point to pages ending with /index): https://github.com/beyondstorage/site/pull/280
| gharchive/issue | 2022-01-29T09:32:43 | 2025-04-01T06:38:02.710609 | {
"authors": [
"ceyhunkerti",
"xxchan"
],
"repo": "beyondstorage/go-storage",
"url": "https://github.com/beyondstorage/go-storage/issues/1119",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
628445657 | parse perPage prop
Fixes #31
I, do you have the chance to verify this so I can base another pull request from here?
| gharchive/pull-request | 2020-06-01T13:39:50 | 2025-04-01T06:38:02.711374 | {
"authors": [
"jokin"
],
"repo": "beyonk-adventures/svelte-carousel",
"url": "https://github.com/beyonk-adventures/svelte-carousel/pull/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2045681285 | How does this prevent Copilot from reading files?
Does this actually prevent copilot from reading the file? How about disabling copilot/other plugins given we are opening some sensitive file?
Hello @ariel-frischer, thanks for your question.
You're right to ask it because the plugin currently uses the BufReadPre auto-command event to simply set the buffer as readonly and non modifiable (see https://neovim.io/doc/user/autocmd.html#autocmd-events).
As the buffer is readonly and not modifiable, Copilot actions (and similar plugin actions) that would be triggered through auto-completion or editing cannot fire.
But you're right about the reading process: it does not prevent the file from being read and the buffer from being filled. So if Copilot reads content from the loaded buffer, it could send it remotely, which is not what we want.
Sorry about that; this is clearly an error in the design of the plugin, and I should have been more precise in the readme about it.
However, I looked for a stricter solution today that simply prevents the files from being read, and I think it could be done easily.
Using the BufReadCmd event instead of the BufReadPre event could do the job. I tested it and it seems to open a blank buffer instead of the file content (i.e. the file is simply not read, so no plugin can access its data).
I'll do more research to be sure it does what I think and write a PR to give more details about how the plugin works.
Thanks
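For reference, a BufReadCmd-based approach might be sketched like this in a Neovim Lua config (illustrative only; the patterns and message are assumptions, not the plugin's actual implementation):

```lua
-- Sketch: BufReadCmd replaces the read itself, so the buffer is
-- never filled with the file's content (nothing for plugins to see).
vim.api.nvim_create_autocmd("BufReadCmd", {
  pattern = { "*.env", "id_rsa" },  -- illustrative sensitive patterns
  callback = function(args)
    vim.bo[args.buf].readonly = true
    vim.bo[args.buf].modifiable = false
    vim.notify("readonly.nvim: file not read (" .. args.file .. ")")
  end,
})
```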
| gharchive/issue | 2023-12-18T03:53:23 | 2025-04-01T06:38:02.736741 | {
"authors": [
"ariel-frischer",
"bgaillard"
],
"repo": "bgaillard/readonly.nvim",
"url": "https://github.com/bgaillard/readonly.nvim/issues/3",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
563124519 | epsilon-greedy choose function may be wrong
class EpsilonGreedyPolicy(Policy):
    [................................]
    def choose(self, agent):
        if np.random.random() < self.epsilon:
            return np.random.choice(len(agent.value_estimates))
        else:
            action = np.argmax(agent.value_estimates)                <---------
            check = np.where(agent.value_estimates == action)[0]     <------
            if len(check) == 0:
                return action
            else:
                return np.random.choice(check)
I don't really get how the lines with "<-----------" work. Action is an index of value_estimates, okay, but in the second line I think you are comparing an index with value_estimates values!! This is the reason why len(check) can be 0. I believe the correct code would be:
    def choose(self, agent):
        if np.random.random() < self.epsilon:
            return np.random.choice(len(agent.value_estimates))
        else:
            action = np.argmax(agent.value_estimates)                                   <---------
            check = np.where(agent.value_estimates == agent.value_estimates[action])[0] <------
            if len(check) == 1:   <--- At least there is going to be 1
                return action
            else:   <---- Ties are solved randomly
                return np.random.choice(check)
Please, let me know if I'm mistaken. Thank you!
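The difference is easy to demonstrate even without NumPy: np.argmax returns an index, so comparing the values against that index usually matches nothing, while comparing against the value at that index finds the ties. A plain-Python illustration:

```python
def argmax_ties(values):
    """Return all indices tied for the maximum value (the fixed logic)."""
    best = max(range(len(values)), key=lambda i: values[i])
    # Compare values to values[best], not to the index `best` itself.
    return [i for i, v in enumerate(values) if v == values[best]]

estimates = [0.3, 0.9, 0.9, 0.1]
print(argmax_ties(estimates))  # -> [1, 2], the two tied maxima

# The buggy comparison matches value == index instead of value == value:
print([i for i, v in enumerate(estimates) if v == 1])  # -> [], so len(check) == 0
```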
Hi @emiliocuestaf !
It's not my code (although I have written a much more complete bandit library, see https://github.com/SMPyBandits/SMPyBandits/), but I think you are completely right!
The first code does not make sense to me.
You can wait to have a feedback from the author @bgalbraith and then submit a pull request (see this tutorial if needed) to this project, maybe he will accept it (and it could fix this bug)!
Ok, I see there already was an issue talking about this bug. I'm sorry!
I think you should update the master branch
Indeed, I didn't check but https://github.com/bgalbraith/bandits/issues/7
hi @emiliocuestaf, you are correct. A few people have pointed this issue out and I didn't get around to merging in this fix (https://github.com/bgalbraith/bandits/pull/4). Thanks!
| gharchive/issue | 2020-02-11T10:57:04 | 2025-04-01T06:38:02.741469 | {
"authors": [
"Naereen",
"bgalbraith",
"emiliocuestaf"
],
"repo": "bgalbraith/bandits",
"url": "https://github.com/bgalbraith/bandits/issues/9",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2465593296 | 🛑 SHOWCASE is down
In c6775eb, SHOWCASE (https://showcase.bgord.me/healthcheck) was down:
HTTP code: 424
Response time: 2942 ms
Resolved: SHOWCASE is back up in 647a769 after 14 minutes.
| gharchive/issue | 2024-08-14T11:34:56 | 2025-04-01T06:38:02.745318 | {
"authors": [
"bgord"
],
"repo": "bgord/statuses",
"url": "https://github.com/bgord/statuses/issues/331",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1870241994 | How to generate drum tones from the Tone function
This may be a stupid question but, is there a way to generate tones of various musical instruments using this function, like guitar and drums or flutes
No.....but with proper current control it's possible I guess?
the tone() function generates a square wave, where the output is just 0 and 1
With current control you can reach values between 0 and 1,
resulting in a proper waveform
| gharchive/issue | 2023-08-28T18:28:54 | 2025-04-01T06:38:02.755820 | {
"authors": [
"AdroitBit",
"shreyas2020e"
],
"repo": "bhagman/Tone",
"url": "https://github.com/bhagman/Tone/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
158040352 | Error when shortening path with package name when in a .git directory
When shortening the path in prompt_dir with truncate_with_package_name and within a .git directory errors are shown:
prompt_dir:27: substring expression: -3 < 0
prompt_dir:27: substring expression: -3 < 0
git rev-parse --git-dir returns just . when in the top level of a .git directory, so package_path=${repo_path:0:-4} fails.
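A sketch of the kind of guard that avoids the negative-length substring (the actual fix merged in #275 may differ):

```shell
resolve_package_path() {
  repo_path="$1"
  if [ "$repo_path" = "." ]; then
    # `git rev-parse --git-dir` printed "." (top level of a .git dir):
    # nothing to strip, so fall back to the current directory.
    printf '%s\n' "$PWD"
  else
    # Strip the trailing ".git" with suffix removal rather than
    # ${var:0:-4}, which errors out when the string is too short.
    printf '%s\n' "${repo_path%.git}"
  fi
}

resolve_package_path "/home/user/project/.git"
```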
Should have closed this when I merged #275 two weeks ago.
| gharchive/issue | 2016-06-02T01:15:33 | 2025-04-01T06:38:02.763550 | {
"authors": [
"andjscott",
"bhilburn"
],
"repo": "bhilburn/powerlevel9k",
"url": "https://github.com/bhilburn/powerlevel9k/issues/271",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2468910546 | Ability to adjust position of card
I am trying to adjust the bottom position of the card to give it some padding so it won't overlay my custom navbar card, but I can't get the card-mod CSS to work. It would be nice if positioning or bottom padding was just a native feature instead.
My code:
type: custom:fab-card
icon: mdi:play-box-multiple
action:
action: navigate
navigation_path: '#events'
card_mod:
style: |
ha-card div.fab {
bottom: 5em !important;
}
I would love to be able to adjust it too, so I can have more than one of these buttons on a page.
+1 here for this feature!
I have the humble desire of placing a single button, yet at the center-bottom :)
| gharchive/issue | 2024-08-15T20:37:51 | 2025-04-01T06:38:02.768599 | {
"authors": [
"Mastiffen",
"dnestico",
"shaiger"
],
"repo": "bhuebschen/fab-card",
"url": "https://github.com/bhuebschen/fab-card/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2297719217 | TEN-101 Replace app name with AIAIAI, add logos and favicon
Pull Request Checklist
[ ] Target branch: Pull requests should target the dev branch.
[ ] Description: Briefly describe the changes in this pull request.
[ ] Changelog: Ensure a changelog entry following the format of Keep a Changelog is added at the bottom of the PR description.
[ ] Documentation: Have you updated relevant documentation Open WebUI Docs, or other documentation sources?
[ ] Dependencies: Are there any new dependencies? Have you updated the dependency versions in the documentation?
[ ] Testing: Have you written and run sufficient tests for the changes?
[ ] Code Review: Have you self-reviewed your code and addressed any coding standard issues?
Description
Replaces Open WebUI names with AIAIAI, add custom logos
LGTM
| gharchive/pull-request | 2024-05-15T12:02:39 | 2025-04-01T06:38:02.772992 | {
"authors": [
"frandominguezl",
"rubentrf"
],
"repo": "bi4group/open-webui",
"url": "https://github.com/bi4group/open-webui/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1321868422 | Add new entity: "coil" ?
Hi BIDS,
It would be great if BIDS could add an optional -coil to the entity table. Example - subject scanned in the Parallel transmit coil and Single transmit coil or other coils for the same session. The coil is the same for most of the studies. However, some studies use different coils. The coil info is in the JSON. However, it's better to have the '-coil' option in the main name to distinguish the image.
Best
Siya
As I understand it, the main concern here is convenience for accessing these values (always more tricky to extract from and inspect in JSON). For that purpose, the best (and fastest) solution is to agglutinate this with the acq- entity. Note that this does not require (nor will it generally imply) transforming the acq- value into a list, since it is highly unlikely you would be running the same acquisition on different coils. Generally the purpose of switching coils (other than having to work with multiple coils of the same type due to lack of reliability) is precisely to enable different variations of acquisition.
Additionally, the coil — as seen in the examples you provided — is described by a set of parameters. Unlike, for instance, contrast agents, there is far less standardization, so while cagent-endorem might provide more immediately meaningful information about the dataset, coil-TRfirstonewebuilt might be comparatively meaningless. So if the coil parameters vary and this is established as a meaningful source of variation, the coil information would need to be documented more extensively in the JSON in any case. Here again, coil- turns out to be a very symbolic (“custom” as per our current nomenclature) entity, similar to acq-. I think there is something to be said for trying to minimize symbolic entities or encourage agglutination wherever possible — which might not always be the case, but I would say it is here.
For this study, I am using -acq for the test and retest in the same session. In addition, I am performing the same in a different coil. PTx coils are different from STx coils. We might use a parallel transmit pulse (for example, a Universal Pulse) with the pTx coil in the future. I think that coils (-coil) are not symbolic. Typical studies use the same coil, but that's not always true. It's important to have a coil (-coil) option in BIDS, since the number of sites using pTx coils is increasing (especially at high field).
Just a note that there is already a coil- proposal #425 that uses it differently.
@effigies Thank you for posting the link to the coil proposal. Proposal 425 discuss the receive channels in the coil. I am adding the transmitter channels in addition to the receiver channels.
For end-user studies, the coil remains constant. In that case, one doesn't need to use -coil. However, it's not the case for sequence testing. It's better to have an option for -coil.
For end-user studies, the coil remains constant. In that case, one doesn't need to use -coil. However, it's not the case for sequence testing. It's better to have an option for -coil.
@SherS2 could you expand on this? I don't know what you mean by "the coil remains constant", nor which proposal you're referring to when you say "better to have an option for -coil", given that both proposals use the term coil.
@tsalo E.g.: for fMRI studies, users won't change the coil once the sequence is optimized for the study. The study population is scanned with the same coil. In this case, there is no need to mention the coil in the entity.
Example for using -coil
When same subjects are scanned with different coils
for MR physics, we are testing single transmit (sTx) and parallel transmit (pTx) coils. We would like to clearly label the coil.
Also, there are coils with various numbers of receiver channels (20, 32 and 64 channels). If a study uses different coils with different receivers, then it's better to label that in the entity.
For this study, i am using -acq for the test and retest in the same session.
If the only difference is re-testing, this sounds like a job for run-. From the documentation “The acq-<label> entity [...] use[d] to distinguish a different set of parameters used for acquiring the same modality.”
In any case, I would recommend leveraging acq- to capture the coil parameter for the time being.
@TheChymera Yes, run- could handle test-retest.
Using acq- for the coil type is a temporary solution, but it is not ideal. By definition: The acq- key/value pair corresponds to a custom label the user MAY use to distinguish a different set of parameters used for acquiring the same modality. The coil is not a parameter; it is an "MRI hardware component."
It's better to have a -coil entity for addressing the hardware component. For the time being, I created a -coil entity to handle the coil: 1Tx and 8Tx for single and parallel transmit coils, respectively. I will use it until BIDS finds a permanent solution.
Ideally, this could be expanded if someone is using different receiver coils 1Tx32Rx, 8Tx64Rx, etc.
| gharchive/issue | 2022-07-29T07:29:48 | 2025-04-01T06:38:02.797786 | {
"authors": [
"SherS2",
"TheChymera",
"effigies",
"tsalo"
],
"repo": "bids-standard/bids-specification",
"url": "https://github.com/bids-standard/bids-specification/issues/1170",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2182841246 | feat(core): use focus-visible instead of focus
What/Why?
Use focus-visible instead of focus, so that we retain our focus styles for accessibility purposes, but do not style elements when focused via actions like click or touch events.
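For illustration, the change amounts to swapping the pseudo-class (the selector here is a generic example, not the repo's actual classes):

```css
/* Before: the ring appears on any focus, including click/touch */
.nav-link:focus {
  outline: 2px solid currentColor;
}

/* After: the ring appears only for keyboard/assistive focus */
.nav-link:focus-visible {
  outline: 2px solid currentColor;
}
```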
Testing
Ensured no focus styling is visible when using click or touch events, but remains when navigating via keyboard.
🍹 Not sure if you meant to add apps/core/components/header/index.tsx
Impressive catch, had modified but not saved the file, thanks.
Can you expand on why we don’t want focus to show on click?
| gharchive/pull-request | 2024-03-12T23:22:12 | 2025-04-01T06:38:02.905501 | {
"authors": [
"christensenep",
"jorgemoya"
],
"repo": "bigcommerce/catalyst",
"url": "https://github.com/bigcommerce/catalyst/pull/644",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
267251019 | BCF-6592: add logical path to the output testpath on horizon [cherry-pick]
Reviewer: trivial
cherry-pick change.
Refer to this link for build results (access rights to CI server needed):
https://jenkins.bigswitch.com/job/openstack_horizon_bsn_pull_req/131/
| gharchive/pull-request | 2017-10-20T17:41:14 | 2025-04-01T06:38:02.974258 | {
"authors": [
"bsn-abat",
"wolverineav"
],
"repo": "bigswitch/horizon-bsn",
"url": "https://github.com/bigswitch/horizon-bsn/pull/96",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
813638348 | Resolve Codacy errors
After some configuration, the Codacy issues appear to be at least somewhat usable. However, there are 2k+ such issues and addressing them will be a major effort. We should distribute files/paths by person and go through them one-by-one. Certain classes can be addressed automatically, certain classes have to be addressed in a proper way and certain classes should be ignored (one-by-one or the whole pattern).
- auto cleanup
  - trailing whitespace
- manual no-brainer cleanup
  - unused import
- one-by-one treatment
  - unused variable
    - exceptions may be ignored
    - parameters may get an underscore prefix, or _ = x assignment to unused
  - string statement has no effect
  - redefined built-in X
    - rename to X_ or give it another name
Trailing whitespace can be removed with Perl, unused imports with autoflake. See #7.
| gharchive/issue | 2021-02-22T16:06:58 | 2025-04-01T06:38:02.979513 | {
"authors": [
"holtgrewe"
],
"repo": "bihealth/snappy-pipeline",
"url": "https://github.com/bihealth/snappy-pipeline/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2023489122 | Incorrect write access notes in landing_zone_move zone status info
If calling landing_zone_move with validate_only, zone status info messages for validation make references to "write access disabled". When validating only, we explicitly do not restrict write access.
Minor issue, I'll fix it together with #1840 and #1843.
Fixed.
| gharchive/issue | 2023-12-04T10:14:25 | 2025-04-01T06:38:02.981043 | {
"authors": [
"mikkonie"
],
"repo": "bihealth/sodar-server",
"url": "https://github.com/bihealth/sodar-server/issues/1845",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1955574753 | Feature request: add a little sign to tell whether the file is modified
My current setup puts the tab group below the Explorer and hides the original VS Code tab panel, to make it look exactly like Edge. But without the little modified indicator from the original tab panel, I always forget to save files. So it would be better to have an indicator showing whether the file is modified, just like the original tab panel.
Thank you for your work!
+1. A must have feature
| gharchive/issue | 2023-10-21T16:50:22 | 2025-04-01T06:38:02.985985 | {
"authors": [
"p1k0pan",
"suxscribe"
],
"repo": "billgoo/vscode-tab-group",
"url": "https://github.com/billgoo/vscode-tab-group/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
667885735 | Makefile available?
Do you have a Makefile that will build Protogen using gcc?
No, but it should be easy to create. You will need to enable c++17 support.
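A minimal sketch of such a Makefile (the flat source layout and the flags here are assumptions, not the project's actual build):

```makefile
CXX      ?= g++
CXXFLAGS ?= -std=c++17 -O2 -Wall

SRCS := $(wildcard *.cpp)
OBJS := $(SRCS:.cpp=.o)

ProtoGen: $(OBJS)
	$(CXX) $(CXXFLAGS) -o $@ $^

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c -o $@ $<

clean:
	rm -f ProtoGen $(OBJS)
```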
I've created one. I will submit once I am sure it works.
| gharchive/issue | 2020-07-29T14:22:35 | 2025-04-01T06:38:03.244316 | {
"authors": [
"billvaglienti",
"knoll01"
],
"repo": "billvaglienti/ProtoGen",
"url": "https://github.com/billvaglienti/ProtoGen/issues/95",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1769671099 | feature: BIM-33341 Square kind, marker position
… added
The pipeline needs to be fixed. It fails during the build.
| gharchive/pull-request | 2023-06-22T13:22:56 | 2025-04-01T06:38:03.251482 | {
"authors": [
"Iamthereality",
"SanchouZ"
],
"repo": "bimeister/pupakit",
"url": "https://github.com/bimeister/pupakit/pull/105",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2069141921 | Placing market order by USDT (not quantity)
Hello,
How do I change "quantity=0.01" to use USDT instead, for example a market sell of BTCUSDT with 1000 USDT?
Here is the code I'm using:
import logging
from binance.um_futures import UMFutures
from binance.lib.utils import config_logging
from binance.error import ClientError

config_logging(logging, logging.DEBUG)

key = ""
secret = ""

testnet_url = "https://testnet.binancefuture.com"

# Set the testnet URL when creating the UMFutures instance
um_futures_client = UMFutures(key=key, secret=secret)
um_futures_client.base_url = testnet_url

try:
    response = um_futures_client.new_order(
        symbol="BTCUSDT",
        side="SELL",
        type="MARKET",  # Set type to "MARKET" for a market order
        quantity=0.01,
    )
    logging.info(response)
except ClientError as error:
    logging.error(
        "Found error. status: {}, error code: {}, error message: {}".format(
            error.status_code, error.error_code, error.error_message
        )
    )
That's known as quote quantity, which is unfortunately not available to send as input directly, although you could look into this workaround: https://dev.binance.vision/t/creating-market-order-with-underlying-currency-amount-instead-of-quantity-quoteorderqty-futures-api/10117
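One common workaround is to derive quantity from the USDT notional and the current price, rounding down to the symbol's LOT_SIZE step. This is an illustrative helper (the function name and the default step size are assumptions; in real code, read the step size from the exchange-info endpoint and fetch the mark price from the API):

```python
from decimal import Decimal, ROUND_DOWN

def quantity_from_quote(quote_usdt, price, step_size="0.001"):
    """Convert a quote amount (USDT) into a base-asset quantity,
    rounded down to the symbol's LOT_SIZE step."""
    qty = Decimal(str(quote_usdt)) / Decimal(str(price))
    return float(qty.quantize(Decimal(step_size), rounding=ROUND_DOWN))

# e.g. sell 1000 USDT worth of BTC at a mark price of 43210.5:
qty = quantity_from_quote(1000, 43210.5)
print(qty)  # pass this as `quantity=` to new_order()
```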
| gharchive/issue | 2024-01-07T14:01:11 | 2025-04-01T06:38:03.264065 | {
"authors": [
"aisling-2",
"dontpnc"
],
"repo": "binance/binance-futures-connector-python",
"url": "https://github.com/binance/binance-futures-connector-python/issues/162",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
436748736 | Mac自用软件
Overview of the Mac software I use
Maccy: a clipboard enhancer for the Mac
iShot: an excellent free, full-featured Mac tool for region/window/multi-window/scrolling/delayed screenshots, quick annotation, pinned screenshots, color picking, and screen recording.
Kantu: a solid Mac image viewer, Tencent's latest release.
Keka: compresses (7Z ZIP TAR GZIP BZIP2 XZ LZIP DMG ISO) and extracts (7Z ZIP ZIPX RAR TAR GZIP BZIP2 XZ LZIP DMG ISO LZMA EXE CAB WIM PAX JAR WAR IPA APK APPX XPI CPGZ CPIO)
Sip: a color picker that lives in the Mac menu bar
SSHPASS: non-interactive SSH password authentication
Termius: an SSH client for desktop and mobile that can log into servers.
Awesome Mac
Display macOS Dock in Touch Bar It's free and open source!
hstr: bash and zsh shell history suggest box - easily view, navigate, search and manage your command history.
KeyCastr: real-time keystroke display for macOS, a great helper for screen recordings | App+1
Oh my zsh
Install
Check the version with zsh --version, then brew install zsh zsh-completions
sudo vi /etc/shells and add /usr/local/bin/zsh
chsh -s /usr/local/bin/zsh
sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"
vi ~/.zshrc and modify plugins: plugins=(git z zsh-autosuggestions zsh-syntax-highlighting urltools encode64 wd last-working-dir)
git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting
zsh-autosuggestions: fix for the slow response when pasting
# This speeds up pasting w/ autosuggest
# https://github.com/zsh-users/zsh-autosuggestions/issues/238
pasteinit() {
  OLD_SELF_INSERT=${${(s.:.)widgets[self-insert]}[2,3]}
  zle -N self-insert url-quote-magic # I wonder if you'd need `.url-quote-magic`?
}
pastefinish() {
  zle -N self-insert $OLD_SELF_INSERT
}
zstyle :bracketed-paste-magic paste-init pasteinit
zstyle :bracketed-paste-magic paste-finish pastefinish
sshpass command usage
# SSH directly into a host
$ sshpass -p {password} ssh {user}@{host IP}
# Connect to a specific SSH port
$ sshpass -p {password} ssh -p ${port} {user}@{host IP}
# Read the password from a text file and use it to connect
$ sshpass -f ${password file} ssh {user}@{host IP}
# Pull a file from the remote host to the local machine
$ sshpass -p {password} scp {user}@{host IP}:${remote dir} ${local dir}
# Copy a local file to a directory on the remote host
$ sshpass -p {password} scp ${local dir} {user}@{host IP}:${remote dir}
# Connect to the remote host and run a command
$ sshpass -p {password} ssh -o StrictHostKeyChecking=no {user}@{host IP} 'rm -rf /tmp/test'
# -o StrictHostKeyChecking=no : skip the host-key confirmation prompt
IDEA plugins I use
Plugin for redis. Iedis
Rainbow Brackets
Key Promoter X
SFTP on curl
# Login using curl on SFTP
curl -k "sftp://83.46.38.23:22/" --user "testuser:testpassword"
# Upload using curl on SFTP
curl -k "sftp://83.46.38.23:22/CurlPutTest/" --user "testuser:testpassword" -T "C:\test\testfile.xml" --ftp-create-dirs
# Download using curl on SFTP
curl -k "sftp://83.46.38.23:22/CurlPutTest/testfile.xml" --user "testuser:testpassword" -o "C:\test\testfile.xml" --ftp-create-dirs
# Rename using curl on SFTP
curl -k "sftp://83.46.38.23:22/CurlPutTest/" --user "testuser:testpassword" -Q "-RENAME
‘/CurlPutTest/testfile.xml’ ‘/CurlPutTest/testfile.xml.tmp’" --ftp-create-dirs
# Delete using curl on SFTP
curl -k "sftp://83.46.38.23:22/CurlPutTest/ " --user "testuser:testpassword" -Q "–RM /CurlPutTest/testfile.xml" --ftp-create-dirs
# Make directory using curl on SFTP
curl -k "sftp://83.46.38.23:22/CurlPutTest/test " --user "testuser:testpassword" -Q "–MKDIR /CurlPutTest/Test" --ftp-create-dirs
# Remove directory using curl on SFTP
curl -k "sftp://83.46.38.23:22/CurlPutTest/test " --user "testuser:testpassword" -Q "–RMDIR /CurlPutTest/Test" --ftp-create-dirs
Show/hide hidden files
CMD + SHIFT + . (macOS Sierra and later)
Show defaults write com.apple.finder AppleShowAllFiles YES
Hide defaults write com.apple.finder AppleShowAllFiles NO
alias showFiles='defaults write com.apple.finder AppleShowAllFiles YES; killall Finder /System/Library/CoreServices/Finder.app'
alias hideFiles='defaults write com.apple.finder AppleShowAllFiles NO; killall Finder /System/Library/CoreServices/Finder.app'
If mvn clean install -DskipTests fails with gpg: signing failed: Inappropriate ioctl for device, the fix is: export GPG_TTY=$(tty)
Using python -m json.tool to pretty-print the JSON output of curl
echo '{"foo": "lorem", "bar": "ipsum"}' | python -m json.tool
BurntSushi/ripgrep
$ rg 🔥
notify/qcloudvoice.go
41: "appName": gou.Decode(m.State, "🔥", "告警啦,", "告警解除啦,").(string) + m.AppName,
model/msg.go
23: State string `json:"state"` // 🔥/❄️
45: State: "🔥",
86: State: "🔥",
# bingoo @ 192 in ~/github/rig on git:master o [22:21:26]
$ rg ❄
model/state.go
83: msg.State = "❄️"
84: logrus.Infof("❄️,消息:%+v, 配置:%+v", msg, w)
model/msg.go
23: State string `json:"state"` // 🔥/❄️
Command-Cache
Ruby on the Mac
The officially recommended way to install RVM
// offline package
curl -sSL https://github.com/rvm/rvm/tarball/stable -o rvm-stable.tar.gz
// create a directory
mkdir rvm && cd rvm
// unpack
tar --strip-components=1 -xzf ../rvm-stable.tar.gz
// install
./install --auto-dotfiles
// load
source ~/.rvm/scripts/rvm
// if --path was specified when installing rvm, use the specified path rather than '~/.rvm'
Install ruby
// list the known ruby versions
rvm list known
// install a specific version
rvm install 2.4.0
// switch the system ruby to the installed version
rvm use 2.4.0 --default
Reference: Mac 环境下的Ruby (Ruby on the Mac)
Proxying the command line: export http_proxy=http://127.0.0.1:9999; export https_proxy=http://127.0.0.1:9999;
Where does port 9999 come from? System Preferences -> Network -> Advanced -> Proxies
Efficient Mac development: iTerm2, Prezto and the Solarized theme
Copying to a USB drive on the Mac, and the command to eject it
$ cp ~/go/bin/linux_amd64/mci /Volumes/Untitled
$ hdiutil eject /Volumes/Untitled
"disk2" ejected.
$ diskutil info /Volumes/Untitled
Device Identifier: disk2s1
Device Node: /dev/disk2s1
Whole: No
Part of Whole: disk2
Volume Name:
Mounted: Yes
Mount Point: /Volumes/Untitled
Partition Type: Windows_FAT_32
File System Personality: MS-DOS FAT32
Type (Bundle): msdos
Name (User Visible): MS-DOS (FAT32)
OS Can Be Installed: No
Media Type: Generic
Protocol: USB
SMART Status: Not Supported
Partition Offset: 16384 Bytes (32 512-Byte-Device-Blocks)
Disk Size: 31.2 GB (31237062656 Bytes) (exactly 61009888 512-Byte-Units)
Device Block Size: 512 Bytes
Volume Total Space: 31.2 GB (31221792768 Bytes) (exactly 60980064 512-Byte-Units)
Volume Used Space: 14.9 GB (14892433408 Bytes) (exactly 29086784 512-Byte-Units) (47.7%)
Volume Free Space: 16.3 GB (16329359360 Bytes) (exactly 31893280 512-Byte-Units) (52.3%)
Allocation Block Size: 16384 Bytes
Read-Only Media: No
Read-Only Volume: No
Device Location: External
Removable Media: Removable
Media Removal: Software-Activated
Solid State: Info not available
Thanks: Mounting NTFS disks or USB drives on the Mac (MAC挂载NTFS硬盘或U盘)
Shortcuts to move faster in Bash command line
Basic moves
Move back one character. Ctrl + b
Move forward one character. Ctrl + f
Delete current character. Ctrl + d
Delete previous character. Backspace
Undo. Ctrl + -
Moving faster
Move to the start of line. Ctrl + a
Move to the end of line. Ctrl + e
Move forward a word. Meta + f (a word contains alphabets and digits, no symbols)
Move backward a word. Meta + b
Clear the screen. Ctrl + l
What is Meta? Meta is your Alt key, normally. For Mac OSX user, you need to enable it yourself. Open Terminal > Preferences > Settings > Keyboard, and enable Use option as meta key. Meta key, by convention, is used for operations on word.
Cut and paste (‘Kill and yank’ for old schoolers)
Cut from cursor to the end of line. Ctrl + k
Cut from cursor to the end of word. Meta + d
Cut from cursor to the start of word. Meta + Backspace
Cut from cursor to previous whitespace. Ctrl + w
Paste the last cut text. Ctrl + y
Loop through and paste previously cut text. Meta + y (use it after Ctrl + y)
Loop through and paste the last argument of previous commands. Meta + .
Search the command history
Search as you type. Ctrl + r and type the search term; Repeat Ctrl + r to loop through results.
Search the last remembered search term. Ctrl + r twice.
End the search at current history entry. Ctrl + j
Cancel the search and restore original line. Ctrl + g
How do I clear/delete the current line in terminal?
You can use Ctrl+U to clear up to the beginning.
You can use Ctrl+W to delete just a word.
You can also use Ctrl+C to cancel.
If you want to keep the history, you can use Alt+Shift+# to make it a comment.
Just to summarise all the answers:
Clean up the line: You can use Ctrl+U to clear up to the beginning.
Clean up the line: Ctrl+E Ctrl+U to wipe the current line in the terminal
Clean up the line: Ctrl+A Ctrl+K to wipe the current line in the terminal
Cancel the current command/line: Ctrl+C.
Recall the deleted command: Ctrl+Y (then Alt+Y)
Go to beginning of the line: Ctrl+A
Go to end of the line: Ctrl+E
Remove the forward words for example, if you are middle of the command: Ctrl+K
Remove characters on the left, until the beginning of the word: Ctrl+W
To clear your entire command prompt: Ctrl + L
Toggle between the start of line and current cursor position: Ctrl + XX
Disable menu bar effects to reduce resource usage and heat
System Preferences / Accessibility / Display: check "Reduce motion" and "Reduce transparency"
Require a password after sleep
System Preferences / Security & Privacy / General: check "Require password immediately after sleep or screen saver begins"
Configure hot corners
System Preferences / Screen Saver / Hot Corners: set (top right: Desktop, bottom left: Launchpad, bottom right: Start Screen Saver)
Xcode Command Line Tools
xcode-select --install
Homebrew
# A proxy must be set here, otherwise brew cannot be installed
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Still using Windows? How to turn a Mac into a development powerhouse from scratch
Quickly copying a file path on Mac OS X (copy file path in mac)
Marta: a Total Commander (TC) alternative for the Mac
Fig Your terminal, reimagined
Chameleon
Chameleon is web application (blog engine) that reflects content from markdown files from a git repository. Powers articles.orsinium.dev.
Features:
Markdown (full support for CommonMark and GitHub Flavored Markdown)
Minimalistic UI
Easy to use, no CI or a special repo structure required
Zero configuration
Single binary
Automatically pull the repo by schedule
Built-in prose linter (Vale)
Syntax highlighting (Prism)
Formulas (MathJax)
Emoji (enescakir/emoji)
Views count
Great performance and server-side caching
Optional password protection
Search
Minification (minify)
Usage
Build:
git clone https://github.com/life4/chameleon.git
cd chameleon
go build -o chameleon.bin .
| gharchive/issue | 2019-04-24T15:00:50 | 2025-04-01T06:38:03.374091 | {
"authors": [
"bingoohuang"
],
"repo": "bingoohuang/blog",
"url": "https://github.com/bingoohuang/blog/issues/88",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1584760586 | Subscriber Billed Topics
I am going with bill-to-last-visitor-for-12-hrs even though it doesn't allow users to unsubscribe because it is the simplest to explain. It's much easier to document and explain if you subscribe to a up_ topic, you are responsible for whatever is sent there for the next 12 hrs, so keep it secret, than anything else I could come up with.
The only other simple-for-the-user options would be:
You can only subscribe to 30 UP topics in a day. Each UP topic can receive 100 messages / day. However, that requires keeping track of all those states and adds significant complexity to the code.
Bill in sub := func(v *visitor, msg *message) error {, so only messages you actually receive count against you. However, that allows spamming the cache with messages that no one is billed for, on the sending side.
I'll add tests and rebase it and stuff if you're good with this general implementation.
I looked, I promise. I just cannot get my brain into the right headspace, which I need for this. I promise I will :-)
Cool. I'll start reviewing again, but you gotta repoint to main and rebase/merge the latest main.
I'm working on the rest of the things
On Mon, Feb 20, 2023, 6:46 PM Philipp C. Heckel @.***>
wrote:
Cool. I'll start reviewing again, but you gotta repoint to main and
rebase/merge the latest main.
Superseded by https://github.com/binwiederhier/ntfy/pull/633
| gharchive/pull-request | 2023-02-14T20:11:30 | 2025-04-01T06:38:03.385156 | {
"authors": [
"binwiederhier",
"karmanyaahm"
],
"repo": "binwiederhier/ntfy",
"url": "https://github.com/binwiederhier/ntfy/pull/609",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
678775416 | Unit tests for javascript
Requires qunit from npm/node in order to run tests from command line. Installs a node_modules folder containing qunit based on the package-lock.json in the microsetta_admin/tests/js folder.
First build should fail with the new unit tests - hopefully
Seems reasonable!
See the Travis build in the second commit (f639ef3) for an example of a JS test failure.
To run locally, you'll need to install node/npm; this will then be used to install qunit and all dependencies. You can then run make test, or if you just want the JavaScript tests, run_js_tests.sh at the root of the repo.
Note that there is a compromise made to enable testing from the command line: any testable functions must be retrievable through node. This means declaring them in the node-defined module.exports field at the bottom of JS files. Since the browser has no concept of module.exports, you must check for existence before setting this field. For a simple example of this, see microsetta_admin/static/js/testable.js.
The exact mechanism we use for setting module.exports is up for debate: if we want to enable node-like behavior in our browser imports, there is a slightly different pattern used in Emperor that makes use of requirejs. The pattern used in testable.js should work so long as all our JavaScript is expected to be hosted in the browser.
In the future, if you want to update qunit, cd to /microsetta_admin/tests/js/ and run npm install qunit, then commit the package-lock.json and travis will automatically use your new configuration.
Thanks, @dhakim87! Would it be possible to put the text about running and updating into the repository directly (e.g., in the readme or comments in the makefile)?
| gharchive/pull-request | 2020-08-13T22:41:28 | 2025-04-01T06:38:03.455693 | {
"authors": [
"dhakim87",
"wasade"
],
"repo": "biocore/microsetta-admin",
"url": "https://github.com/biocore/microsetta-admin/pull/44",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2101963536 | Add license
Hi @oeway ,
great training resource here, and great that you are publishing it openly! I was just wondering under which conditions one could reuse the code and materials provided here. Would you mind adding a license file? If you're new to licensing and/or wonder which license to use, you can read more in this blog post: https://focalplane.biologists.com/2023/05/06/if-you-license-it-itll-be-harder-to-steal-it-why-we-should-license-our-work/
Thanks!
Best,
Robert
Hi thanks for the heads up, our good old MIT should do!
https://github.com/bioimage-io/bioengine/commit/620440be21799457f7f73395bd9ccf6e8ca7bbea
| gharchive/issue | 2024-01-26T10:25:05 | 2025-04-01T06:38:03.460996 | {
"authors": [
"haesleinhuepf",
"oeway"
],
"repo": "bioimage-io/bioengine",
"url": "https://github.com/bioimage-io/bioengine/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2651106983 | question about multi R-group data preprocessing
Hi,
I'm trying to reproduce the data preprocess results.
I followed the tutorial in the lib-invent repo to process the multi R-group data of CrossDock. However, I could only get 17,586 data entries, which is much fewer than the results in your article (about 150k).
My configs for slicing are as follows:
filter_conditions.json
{
"scaffold": [{
"name":"ring_count",
"min": 1
}],
"decoration": [
{
"name":"molecular_weight",
"max": 300
},
{
"name":"hydrogen_bond_acceptors",
"max": 3
},
{
"name":"hydrogen_bond_donors",
"max": 3
},
{
"name":"clogp",
"max": 3
},
{
"name":"rotatable_bonds",
"max": 3
},
{
"name": "heavy_atom_count",
"max": 10,
"min": 1
}
]
}
reaction_based_slicing.json
{
"run_type": "reaction_based_slicing",
"parameters": {
"input_file": "path/to/unsliced/data.smi",
"output_path": "path/to/output/folder",
"output_smiles_file": "path/to/output/file.smi",
"conditions_file": "configs/filter_conditions.json",
"reactions_file": "configs/reaction.smirks",
"max_cuts": 4,
"number_of_partitions": 1000,
"validate_randomization": true
}
}
Would you like to share more details about how to get the 150k multi R-group data of crossdock?
Thanks
I found that reaction-based slicing automatically deduplicates the output data. There are many duplicate molecules in the CrossDock train set, but one molecule may correspond to multiple proteins. Running python -W ignore process_and_prepare.py will finally get the 150k results.
| gharchive/issue | 2024-11-12T05:42:33 | 2025-04-01T06:38:03.505091 | {
"authors": [
"HShokaku"
],
"repo": "biomed-AI/DiffDec",
"url": "https://github.com/biomed-AI/DiffDec/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2034402491 | feat(style/useConsistentArrayType): add rule
Summary
Implement lint/style/useConsistentArrayType
Fixed: #68
Test Plan
All existing tests have passed.
Thanks for your contribution! This is greatly appreciated :)
I left suggestions. Feel free to ask details if it is not clear.
Many thanks for your patient review and suggestions.
Sorry for the late reply.
@eryue0220 Do you still have some time for this PR?
Yes, I'm still working on this PR.
@Conaclos Sorry for the late response before. And huge thanks for your suggestions and reviews. Merry Christmas.
I think it is ready for merging.
Please run just ready to format/lint the code and generate missing files.
Once CI is passing, we will be able to merge :)
Again, huge thanks to @Conaclos for your patience and your suggestions that let me ship this. It's been a really wonderful journey.
| gharchive/pull-request | 2023-12-10T14:50:55 | 2025-04-01T06:38:03.509417 | {
"authors": [
"Conaclos",
"eryue0220"
],
"repo": "biomejs/biome",
"url": "https://github.com/biomejs/biome/pull/1137",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2647124713 | write a load_example method
assuming a dataset has an id field and an index.
index will be a parquet file (with no extension) mapping id to shard; then we can download a single shard and retrieve the example
What we need:
a split generator that looks for split-specific index files (train_index or train/index)
index files allow us to subset both parquets and examples
we then add a ds.filter before returning the dataset.
there might be an efficient arrow way to implement the filter
(this could also go directly into yaml but the index file solution is more modular).
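As a language-neutral sketch (JavaScript here purely for illustration; the real feature would be Python on top of HF datasets, and every name below is hypothetical), the id-to-shard index enables both single-example loading and subsetting:

```javascript
// Sketch of an index mapping example id -> shard, as described above:
// load_example(id) would look up the shard, download only that file,
// and filter it down to the requested id. All names are hypothetical.
class ExampleIndex {
  constructor(rows) {
    // rows: [{ id, shard }, ...], e.g. parsed from a split's index parquet.
    this.byId = new Map(rows.map((r) => [r.id, r.shard]));
  }

  // Which shard holds this example? null if the id is not in this split.
  shardFor(id) {
    return this.byId.has(id) ? this.byId.get(id) : null;
  }

  // Ids living in one shard; after downloading just that shard, this set
  // drives the filter step that subsets the dataset to matching examples.
  idsInShard(shard) {
    const ids = [];
    for (const [id, s] of this.byId.entries()) {
      if (s === shard) ids.push(id);
    }
    return ids;
  }
}
```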
| gharchive/issue | 2024-11-10T11:13:18 | 2025-04-01T06:38:03.513798 | {
"authors": [
"alex-hh"
],
"repo": "bioml-tools/bio-datasets",
"url": "https://github.com/bioml-tools/bio-datasets/issues/63",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1021276835 | refactor(platform): added tool to publish example
Indicated in which examples should be published and what ids to use in tools/example-projects.json
This avoids polluting our published projects with many versions of the same project with different simulation tools
Added command-line program to update these published projects
Presently requires a machine-to-machine api client and secret.
I started with machine-to-machine because it's easier to set up. I could append this to the GitHub action I created to check the examples and automatically create/update their publication.
| gharchive/pull-request | 2021-10-08T16:25:53 | 2025-04-01T06:38:03.522315 | {
"authors": [
"jonrkarr"
],
"repo": "biosimulations/biosimulations",
"url": "https://github.com/biosimulations/biosimulations/pull/3182",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
144742312 | Display verbose progress
From @lkursell on February 12, 2016 22:39
Displaying ST2's progress, as the original SourceTracker did, is very helpful for making sure commands are being executed correctly, especially given the time required for jobs to finish, and a progress display is a very convenient way of gauging how far along a job is.
Copied from original issue: biota/sourcetracker2_internal#14
From @lkursell on February 15, 2016 16:24
The thing I counted on most from the verbose ST was to make sure that my sinks and sources were properly defined, and were running as I intended. But it also helped me gauge speed to know if some rarefaction level was just never going to work out.
On Feb 15, 2016, at 12:15 AM, Will Van Treuren notifications@github.com wrote:
Will check in to using click for this. Should be straightforward given that the number of samples is known and each iteration of Gibbs is independent.
For methods that are not Gibb's this might be harder, but we will investigate those as we come to it.
From @wdwvt1 on February 15, 2016 8:15
Will check in to using click for this. Should be straightforward given that the number of samples is known and each iteration of Gibbs is independent.
For methods that are not Gibb's this might be harder, but we will investigate those as we come to it.
| gharchive/issue | 2016-03-30T22:51:57 | 2025-04-01T06:38:03.526794 | {
"authors": [
"gregcaporaso"
],
"repo": "biota/sourcetracker2",
"url": "https://github.com/biota/sourcetracker2/issues/23",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
841701701 | MyDisease for disease's descendants is missing answers
Perhaps similar to #128.
This query for MyDisease.info (through BTE's TRAPI endpoint for one API) only shows one result node in the nodes section of the response, even though the logs mention 39 results (the direct query to the API returns 39 results as well; I believe these results are unique):
MyDisease returns 39 results for:
curl -X POST "http://mydisease.info/v1/query?fields=mondo.descendants" -H "accept: application/json" -H "Content-Type: application/x-www-form-urlencoded" -d "q=MONDO%3A0002494&scopes=mondo.mondo"
BTE returns 1 node in the following response (MONDO:0003232). The results section also seems wonky (redundant entries for edges)?
{
"message": {
"query_graph": {
"edges": {
"e00": {
"object": "n01",
"subject": "n00",
"predicate": [
"biolink:superclass_of"
]
}
},
"nodes": {
"n00": {
"category": "biolink:Disease",
"id": "MONDO:0002494"
},
"n01": {
"category": "biolink:Disease"
}
}
},
"knowledge_graph": {
"nodes": {
"MONDO:0003232": {
"category": "biolink:Disease",
"name": "alcoholic pancreatitis",
"attributes": [
{
"name": "equivalent_identifiers",
"value": [
"MONDO:0003232",
"DOID:4988",
"UMLS:C0376670",
"name:alcoholic pancreatitis",
"name:Pancreatitis, Alcoholic",
"MESH:D019512",
"EFO:1002013"
],
"type": "biolink:id"
},
{
"name": "num_source_nodes",
"value": 1,
"type": "bts:num_source_nodes"
},
{
"name": "num_target_nodes",
"value": 0,
"type": "bts:num_target_nodes"
},
{
"name": "source_qg_nodes",
"value": [
"n00"
],
"type": "bts:source_qg_nodes"
},
{
"name": "target_qg_nodes",
"value": [],
"type": "bts:target_qg_nodes"
}
]
},
"MONDO:0002494": {
"category": "biolink:Disease",
"name": "substance-related disorder",
"attributes": [
{
"name": "equivalent_identifiers",
"value": [
"MONDO:0002494",
"DOID:303",
"UMLS:C0236969",
"name:substance-related disorder",
"MESH:D019966"
],
"type": "biolink:id"
},
{
"name": "num_source_nodes",
"value": 0,
"type": "bts:num_source_nodes"
},
{
"name": "num_target_nodes",
"value": 1,
"type": "bts:num_target_nodes"
},
{
"name": "source_qg_nodes",
"value": [],
"type": "bts:source_qg_nodes"
},
{
"name": "target_qg_nodes",
"value": [
"n01"
],
"type": "bts:target_qg_nodes"
}
]
}
},
"edges": {
"MONDO:0002494-biolink:superclass_of-MONDO:0003232": {
"predicate": "biolink:superclass_of",
"subject": "MONDO:0002494",
"object": "MONDO:0003232",
"attributes": [
{
"name": "provided_by",
"value": [
"MONDO"
],
"type": "biolink:provided_by"
},
{
"name": "api",
"value": [
"MyDisease.info API"
],
"type": "bts:api"
},
{
"name": "publications",
"value": [],
"type": "biolink:publication"
}
]
}
}
},
"results": [
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
},
{
"node_bindings": {
"n00": [
{
"id": "MONDO:0002494"
}
],
"n01": [
{
"id": "MONDO:0003232"
}
]
},
"edge_bindings": {
"e00": [
{
"id": "MONDO:0002494-biolink:superclass_of-MONDO:0003232"
}
]
}
}
]
},
"logs": [
{
"timestamp": "2021-03-26T08:02:32.269Z",
"level": "DEBUG",
"message": "BTE identified 2 QNodes from your query graph",
"code": null
},
{
"timestamp": "2021-03-26T08:02:32.269Z",
"level": "DEBUG",
"message": "BTE identified 1 QEdges from your query graph",
"code": null
},
{
"timestamp": "2021-03-26T08:02:32.269Z",
"level": "DEBUG",
"message": "BTE identified your query graph as a 1-depth query graph",
"code": null
},
{
"timestamp": "2021-03-26T08:02:32.326Z",
"level": "DEBUG",
"message": "REDIS cache is not enabled.",
"code": null
},
{
"timestamp": "2021-03-26T08:02:32.326Z",
"level": "DEBUG",
"message": "BTE is trying to find SmartAPI edges connecting from Disease to Disease with predicate superclass_of",
"code": null
},
{
"timestamp": "2021-03-26T08:02:32.327Z",
"level": "DEBUG",
"message": "BTE found 1 smartapi edges corresponding to e00. These smartaip edges comes from 1 unique APIs. They are MyDisease.info API",
"code": null
},
{
"timestamp": "2021-03-26T08:02:32.327Z",
"level": "DEBUG",
"message": "BTE found 1 bte edges for this batch.",
"code": null
},
{
"timestamp": "2021-03-26T08:02:32.327Z",
"level": "DEBUG",
"message": "call-apis: Resolving ID feature is turned on",
"code": null
},
{
"timestamp": "2021-03-26T08:02:32.327Z",
"level": "DEBUG",
"message": "call-apis: Number of BTE Edges received is 1",
"code": null
},
{
"timestamp": "2021-03-26T08:02:32.338Z",
"level": "DEBUG",
"message": "call-apis: Succesfully made the following query: {\"url\":\"http://mydisease.info/v1/query\",\"params\":{\"fields\":\"mondo.descendants\",\"size\":\"1000\"},\"data\":\"q=MONDO:0002494&scopes=mondo.mondo\",\"method\":\"post\",\"timeout\":50000}",
"code": null
},
{
"timestamp": "2021-03-26T08:02:32.339Z",
"level": "DEBUG",
"message": "call-apis: After transformation, BTE is able to retrieve 39 hits!",
"code": null
},
{
"timestamp": "2021-03-26T08:02:32.339Z",
"level": "DEBUG",
"message": "call-apis: Total number of results returned for this query is 39",
"code": null
},
{
"timestamp": "2021-03-26T08:02:32.357Z",
"level": "DEBUG",
"message": "call-apis: Query completes",
"code": null
}
]
}
Fixed.
| gharchive/issue | 2021-03-26T08:09:02 | 2025-04-01T06:38:03.546600 | {
"authors": [
"colleenXu",
"kevinxin90"
],
"repo": "biothings/BioThings_Explorer_TRAPI",
"url": "https://github.com/biothings/BioThings_Explorer_TRAPI/issues/131",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
511590465 | hint does not query Mesh Identifiers for chemical substances
Mesh IDs for chemicals do not return any results when queried by Hint. However, MyChem has the ability to query for these mesh IDs.
See Example below:
from biothings_explorer.user_query_dispatcher import SingleEdgeQueryDispatcher
from biothings_explorer.hint import Hint
from biothings_explorer.registry import Registry
reg = Registry()
ht = Hint()
mesh_id = 'D020110'
ht.query(mesh_id)
{'Gene': [],
'SequenceVariant': [],
'ChemicalSubstance': [],
'DiseaseOrPhenotypicFeature': [],
'Pathway': [],
'MolecularActivity': [],
'CellularComponent': [],
'BiologicalProcess': [],
'Anatomy': [],
'PhenotypicFeature': []}
from biothings_client import get_client
mc = get_client('chem')
r = mc.query('drugcentral.xrefs.mesh_descriptor_ui:{}'.format(mesh_id))
len(r['hits'])
1
print([k for k in r['hits'][0].keys() if not k.startswith('_')])
print([v.get('id') for k, v in r['hits'][0].items() if not k.startswith('_')])
['chebi', 'chembl', 'drugbank', 'drugcentral', 'ginas', 'pubchem']
['CHEBI:91706', None, 'DB06762', None, None, None]
db_id = r['hits'][0]['drugbank']['id']
ht.query(db_id)
{'Gene': [],
'SequenceVariant': [],
'ChemicalSubstance': [{'chembl': 'CHEMBL1159',
'drugbank': 'DB06762',
'name': 'PINACIDIL',
'pubchem': 4826,
'umls': 'C0071074',
'display': 'chembl(CHEMBL1159) drugbank(DB06762) name(PINACIDIL) pubchem(4826) umls(C0071074) ',
'type': 'ChemicalSubstance',
'primary': {'identifier': 'chembl',
'cls': 'ChemicalSubstance',
'value': 'CHEMBL1159'}}],
'DiseaseOrPhenotypicFeature': [],
'Pathway': [],
'MolecularActivity': [],
'CellularComponent': [],
'BiologicalProcess': [],
'Anatomy': [],
'PhenotypicFeature': []}
Thanks for the feedback. If you upgrade the bte_schema package, you should be able to query for mesh chemical IDs now.
Fixed in: d98957d30538ad1fb83d65e4719aedf588458a51
| gharchive/issue | 2019-10-23T21:50:18 | 2025-04-01T06:38:03.550490 | {
"authors": [
"kevinxin90",
"mmayers12"
],
"repo": "biothings/bte_schema",
"url": "https://github.com/biothings/bte_schema/issues/20",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
777520353 | Google Fit API client. authentication
Followed the instructions here https://github.com/StasDoskalenko/react-native-google-fit/blob/HEAD/docs/INSTALLATION.md#getting-started
Did not pass new GoogleFitPackage(BuildConfig.APPLICATION_ID) in MainApplication.java, auto link takes care if this
Add - to android Manifest.xml
https://developers.google.com/fit/android/get-started - followed this to set up OAuth client on the google console. ( having issues with this because the OAuth client with the SHA1 and package name already exists, not sure how to add the fitness api to an existing client id)
Once you pull the changes (googlefit.tsx, TipCard.tsx), after log in on android, you will be prompted to sign in to a google account.
ERROR
Does not resolve, getting stuck here
Followed the instructions here https://github.com/StasDoskalenko/react-native-google-fit/blob/HEAD/docs/INSTALLATION.md#getting-started
Do not pass new GoogleFitPackage(BuildConfig.APPLICATION_ID) in MainApplication.java; autolinking takes care of this
Add <uses-permission android:name="android.permission.ACTIVITY_RECOGNITION"/> to android Manifest.xml
On the GCP, go to OAuth consent screen, and add a test user for the authentication to work
Copy useGoogleFit and Tipcard files to your local
@crugwiro Can you clarify the testing details for me? Here's what I've done so far:
Pulled down this branch and followed step 3 in your comment above as we talked about (the rest of the steps are already done)
Launched android (after syncing with gradle)
Tried logging in with email/password. After logging in, it prompted me to sign into my google account. After selecting the email I wanted to login with, it got stuck on the loading pop up
Rebuilt the app and tried logging in with google sign, same thing happened - got stuck on a loading pop up.
Am I missing something? (screenshot below for reference) Also still in the process of reviewing the changes file by file, so maybe I will find something from that.
This branch has too many changes and merges across different PRs. It is safe to Close/delete.
Closing as per @crugwiro
| gharchive/pull-request | 2021-01-02T20:28:49 | 2025-04-01T06:38:03.571179 | {
"authors": [
"cara-wong",
"crugwiro",
"wynnset"
],
"repo": "bipolarbridges/companion-app",
"url": "https://github.com/bipolarbridges/companion-app/pull/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
353574712 | Fix duplicate key error when creating >10 back-to-back events
Solved an issue when creating >10 events on several days. Now there shouldn't be an issue even when creating one event for each minute for the entire week.
@birik Found this defect as well. It would be great if @nickrenfo2 's fix could be merged to master.
@nickrenfo2 @sherwinchu Change was merged. NPM will be updated soon
The updates in 0.1.2.
@nickrenfo2 Thank you for your commits
| gharchive/pull-request | 2018-08-23T22:20:07 | 2025-04-01T06:38:03.598769 | {
"authors": [
"birik",
"nickrenfo2",
"sherwinchu"
],
"repo": "birik/react-week-calendar",
"url": "https://github.com/birik/react-week-calendar/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1192277876 | Spec for third-party blocks
Based on the haskell PoC implementation, here's a cleaned up spec of 3rd party blocks.
Some implementations details are open to discussion, i'll add comments highlighting them
A commit-by-commit review is advised.
Both biscuit-haskell and biscuit-rust have candidate releases with third-party blocks support
| gharchive/pull-request | 2022-04-04T20:27:02 | 2025-04-01T06:38:03.610252 | {
"authors": [
"clementd-fretlink",
"divarvel"
],
"repo": "biscuit-auth/biscuit",
"url": "https://github.com/biscuit-auth/biscuit/pull/103",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
283325337 | Add bash completion
Faker offers many "formatters" (name, email). I created a simple bash completion script for them.
For example:
faker.php [tab][tab]
#will display all available formatters (with some other options) for default locale
faker.php --locale pl_PL [tab][tab]
#will display all available formatters for pl_PL locale
I am sorry, but I will not continue this project. The project is now archived on github and abandoned on packagist.
| gharchive/pull-request | 2017-12-19T18:22:39 | 2025-04-01T06:38:03.641023 | {
"authors": [
"morawskim",
"tristanlins"
],
"repo": "bit3/faker-cli",
"url": "https://github.com/bit3/faker-cli/pull/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
681600323 | Is it compatible with bouncycastle?
I am searching for an ECIES implementation that works on both client and server. My server uses Java, and the crypto provider is Bouncy Castle. Is there any sample of encrypting with JavaScript and decrypting with Java?
If it supports ECIES it should be compatible.
| gharchive/issue | 2020-08-19T07:02:47 | 2025-04-01T06:38:03.713063 | {
"authors": [
"JBaczuk",
"gy0801151351"
],
"repo": "bitchan/eccrypto",
"url": "https://github.com/bitchan/eccrypto/issues/69",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1422315065 | [Feature request] Preview button presets with One ME program source feedback
For when you don't want or have space for a preview and a program row it's handy to see what source is live on the preview button. Right now you have to add the feedback manually to each button. Would be perfect if there were an option to have preset buttons with both "One ME preview source" and "One ME program source".
Are you looking for these label variables?
$(atem:pvw1_input)
$(atem:pgm1_input)
Check the Variables tab for a list of all available shortcodes.
| gharchive/issue | 2022-10-25T11:32:16 | 2025-04-01T06:38:03.889453 | {
"authors": [
"davidjoshuaford",
"ettnoll"
],
"repo": "bitfocus/companion-module-bmd-atem",
"url": "https://github.com/bitfocus/companion-module-bmd-atem/issues/214",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1137578714 | [BUG] Spotify - Some Press actions are not working
Is this a bug in companion itself or a module?
[X] I believe this to be a bug in companion
Is there an existing issue for this?
[X] I have searched the existing issues
Describe the bug
The following press actions are not working:
Play
Volume Up
Volume down
Steps To Reproduce
No response
Expected Behavior
No response
Environment (please complete the following information)
- OS:
- Browser:
- Companion Version:
Additional context
No response
Meanwhile, it is also no longer possible to load albums.
Can you try with the latest beta, with a fresh api keys? There was a problem with reaching the api rate limit that have recently been resolved.
What version of companion or the module did loading albums last work for you in?
Same for the other actions? Some idea of when it could have broken would be useful, otherwise it is hard to figure out where to start looking
Actually, I'm using the latest beta -> 2.2.0 (2.2.0+4125-beta-5d1d7f80)
Unfortunately, I can not say exactly when the albums worked. It must have been 1.5 months ago.
The other Keys have never worked.
I have new infos...
The key works with an iPad as audio device.
But not with the browser solution https://open.spotify.com/
Also happening for me, volume not working.
@nick-potts Can you please provide some more information about your environment? What device are you using as the host device for Spotify?
What device are you using as the host device for Spotify?
Windows in my case
| gharchive/issue | 2022-02-14T16:35:06 | 2025-04-01T06:38:03.898873 | {
"authors": [
"AHub88",
"Julusian",
"bevanjkay",
"nick-potts"
],
"repo": "bitfocus/companion-module-spotify-remote",
"url": "https://github.com/bitfocus/companion-module-spotify-remote/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
600582416 | Audio delay/latency issue
With eqMac enabled, an audio delay forms and gets longer and more noticeable the longer it runs.
After about an hour it becomes unmistakable. If I leave eqMac enabled overnight, it's a several second delay.
This happens with eqMac enabled and an HDMI output source on Catalina.
The same issue would happen with SoundFlower which I had tried using years back to have keyboard audio control of HDMI devices (my TV).
Switching the source to Internal Speakers, then back to eqMac resolves the delay temporarily.
possibly related:
https://arstechnica.com/civis/viewtopic.php?f=19&t=1289815
https://github.com/SakuraG/soundflower/issues/43
I'm moving the discussion around this issue to #225
| gharchive/issue | 2020-04-15T20:55:55 | 2025-04-01T06:38:03.902939 | {
"authors": [
"nodeful",
"northamerican"
],
"repo": "bitgapp/eqMac",
"url": "https://github.com/bitgapp/eqMac/issues/212",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2583968433 | Register Signer should allow more sharing options
On the Register Signer screen, clicking on Vault details when signing with an external key only shows the QR code option; it should have the NFC, File, and Remote link options as well.
Verified this issue on dev app v 1.2.18(410)
| gharchive/issue | 2024-10-13T13:28:28 | 2025-04-01T06:38:03.904882 | {
"authors": [
"ben-kaufman",
"cakesoft-swati"
],
"repo": "bithyve/bitcoin-keeper",
"url": "https://github.com/bithyve/bitcoin-keeper/issues/5323",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
876208884 | CloudHistory Page when we click on Backup it will increase current level
On the CloudHistory page, clicking Backup increases the current level, so when a user at level 1 goes through the Restore flow it shows level 2 on restore and the flow breaks.
Verified this issue on Staging app v1.6.5(281)
| gharchive/issue | 2021-05-05T08:36:49 | 2025-04-01T06:38:03.906027 | {
"authors": [
"cakesoft-devika",
"cakesoft-nikhita"
],
"repo": "bithyve/hexa",
"url": "https://github.com/bithyve/hexa/issues/3288",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
262707026 | Add an "additionalRequestProperties" option to requests
This is to allow eg. AWS XRay instrumentation, which requires the addition of an "XRaySegment" value to be sent to the http.request() call.
The reasoning for adding additionalRequestProperties is to preserve backwards compatibility – just assigning all unknown properties from init could result in unwanted values being forwarded to http.request().
I am aware that this module wants to mimic the browser fetch() and this property doesn't align with that, but it does align with properties like agent and is useful when, e.g., there's a need to better trace calls across microservices in a server environment.
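As a rough sketch of the idea (simplified; not the PR's actual implementation), the merge into the http.request() options could look like this:

```javascript
// Sketch: build the options object passed to http.request() from a
// fetch-style init. Only the explicit escape hatch is forwarded, so
// unknown init keys (typos, browser-only options) never leak through.
function buildRequestOptions(init = {}) {
  const options = { method: init.method || 'GET' };
  if (init.agent) options.agent = init.agent;
  // Anything the caller deliberately wants http.request() to see:
  return Object.assign(options, init.additionalRequestProperties || {});
}
```

A caller doing AWS X-Ray instrumentation could then pass `{ additionalRequestProperties: { XRaySegment: segment } }` without node-fetch itself knowing anything about X-Ray.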
Codecov Report
Merging #350 into master will decrease coverage by 0.47%.
The diff coverage is 0%.
@@ Coverage Diff @@
## master #350 +/- ##
==========================================
- Coverage 100% 99.52% -0.48%
==========================================
Files 6 6
Lines 423 425 +2
Branches 133 134 +1
==========================================
Hits 423 423
- Misses 0 1 +1
- Partials 0 1 +1
Impacted Files | Coverage Δ
src/request.js | 97.18% <0%> (-2.82%) :arrow_down:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d1a3b1e...18704dd. Read the comment docs.
@voxpelli, Did you manage to get X-Ray to play nicely with node-fetch? I tried using the captureHTTPsGlobal function with no luck.
@josh--newman We're running https://github.com/Sydsvenskan/node-fetch/tree/1.x-fork internally to achieve this behavior and will probably move to another module eventually, that better suits our node.js needs. This module is more geared towards modules that should work both in in node.js and the browser (no x-ray there)
@voxpelli thanks for your response! I'm considering reevaluating our use of node-fetch as well.
| gharchive/pull-request | 2017-10-04T08:35:44 | 2025-04-01T06:38:03.915762 | {
"authors": [
"codecov-io",
"josh--newman",
"voxpelli"
],
"repo": "bitinn/node-fetch",
"url": "https://github.com/bitinn/node-fetch/pull/350",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1833258912 | We need a couple of values to be added to the analytics helper
Is there an existing issue for this?
[X] I have searched the existing issues
Current Behavior
Currently, we have the following methods from the documentation
https://developer.bitmovin.com/playback/docs/enabling-bitmovin-analytics
// Update the current custom data config.
player.analyticsCollector?.setCustomDataOnce({
customData2: 'Updated custom data field 2',
customData4: 'Updated custom data field 4',
});
we need to add new methods such as
Expected Behavior
player.analyticsCollector.setVideoId('123');
player.analyticsCollector.setTitle('Title of the Video');
player.analyticsCollector.setCdnProvider('Cdn Provider');
Steps To Reproduce
No response
What platform(s) are you experiencing the issue on?
[X] Android
[ ] Android TV / Fire TV
[X] iOS
[ ] tvOS
Player React Native SDK version
0.8.0
Device / Environment
No response
Stream URL (Optional)
No response
Additional information / Code snippets / Screenshots
No response
Hi @jonathanm-tkf!
AnalyticsConfig already offers options to assign those values on player creation.
Could this be used for your use case?
Hi @rolandkakonyi, unfortunately that doesn't work for me: we have a Next Episode feature, and we don't destroy the player between episodes.
I'll comment in more detail:
Currently, we have the player, which has a Next Episode button. Once pressed, it loads the next video; for performance reasons, and to avoid deleting and recreating the current player, we use the load implementation as follows:
player.load({
url: videoData.url,
type: Platform.OS === 'ios' ? SourceType.HLS : SourceType.DASH,
title: videoData.title,
poster: videoData.posterUrl,
...
...
..
after this we need to do the following
if (player.analyticsCollector) {
player.analyticsCollector.setCustomDataOnce({
customData2: videoData.customId,
customData3: videoData.supplier,
});
player.analyticsCollector.setVideoId(videoData.id);
player.analyticsCollector.setTitle(getTitleAnalytics(videoData.dataAnalytics));
player.analyticsCollector.setCdnProvider(videoData.supplier);
}
Do you understand what the idea is? any recommendations? or do we need to destroy the player every time we want to modify this data?
Thanks for your help
Hi @jonathanm-tkf!
I see, you are correct that this is not possible right now without creating a new player instance.
You could implement this in AnalyticsModule yourself or this would be a feature request.
We have further plans with our analytics integration, we can get back to you during next week.
Thanks for the effort. Unfortunately I have too little time to contribute, apologies; it is something we have today and I would not like to change the way it is implemented. I look forward to the update, thanks again. If it is not implemented and I can help when I am free, I will gladly do so.
Regards.
@jonathanm-tkf we are already working on a solution for this, you can watch #184 for the details.
I will post it here once it is released.
Hi @jonathanm-tkf, we just released v0.9.0 with support for the above use cases.
Now you can call the following API to update source-specific values for the analytics collector:
player.analyticsCollector.addSourceMetadata({
videoId: 'new video ID',
title: 'new video title',
path: 'new path',
cdnProvider: 'new CDN provider' // your new CDN provider,
});
Please see SourceMetadata for all options.
| gharchive/issue | 2023-08-02T14:04:54 | 2025-04-01T06:38:03.926204 | {
"authors": [
"jonathanm-tkf",
"rolandkakonyi"
],
"repo": "bitmovin/bitmovin-player-react-native",
"url": "https://github.com/bitmovin/bitmovin-player-react-native/issues/176",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2127365235 | Update iOS player to 3.56.0
Automated iOS player version update to 3.56.0
Closing in favor of #398
| gharchive/pull-request | 2024-02-09T15:27:44 | 2025-04-01T06:38:03.927736 | {
"authors": [
"bitPlayerGHActions",
"rolandkakonyi"
],
"repo": "bitmovin/bitmovin-player-react-native",
"url": "https://github.com/bitmovin/bitmovin-player-react-native/pull/395",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
724526995 | Fix error in nginx configuration
When trying to deploy the helm chart phpfpm
It was failing with:
nginx 11:21:11.85 ERROR ==> Custom server blocks files were found inside '/bitnami/nginx/conf/vhosts'. This configuration is not supported anymore. Please mount your custom server blocks config files at '/opt/bitnami/nginx/conf/server_blocks' instead.
The fix is already applied (and taken from): https://github.com/bitnami/bitnami-docker-php-fpm/pull/124/files
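For anyone hitting the same error: an illustrative mount (file names here are placeholders; the container path comes from the error message above) would be:

```yaml
# Hypothetical docker-compose fragment mounting a custom server block
# at the path the newer bitnami/nginx images expect.
services:
  nginx:
    image: bitnami/nginx
    volumes:
      - ./my_server_block.conf:/opt/bitnami/nginx/conf/server_blocks/my_server_block.conf:ro
```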
@juan131 I was too hasty to submit it.
While it fixes the original problem and the pod no longer gets stuck in a crash loop, the application no longer serves the PHP content. My knowledge of nginx configuration is limited, so I don't know what else should be fixed so that nginx points to the right place.
(when you open the app on the given ip on the browser, it points to the default nginx index.html. If you try to point it to the correct php file, it returns a "not found")
@koyan please see the changes I did at https://github.com/bitnami/tutorials/pull/28
That should fix the issue, thanks for reporting it!
| gharchive/pull-request | 2020-10-19T11:31:55 | 2025-04-01T06:38:03.988019 | {
"authors": [
"juan131",
"koyan"
],
"repo": "bitnami/tutorials",
"url": "https://github.com/bitnami/tutorials/pull/26",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
589925717 | Ethan twitter new
Issues Closed
Please make one new line for each issue, otherwise not all issues will be accounted for!
Closes #944
Closes #942
...
Changes proposed in this pull request:
* Explanation on rate limiting
* New diagram for rate limiting
* New visual for data analysis
* Simple explanations of the libraries used
@reviewer/kavuong
Please also add your local images to GitHub (as right now you have added the file paths of your local computer). It should go something like this: commit your images locally(add them to your local version of the activity folder) and push to origin on your GitHub desktop. Other than that: Good job! I'll start merging after you are done with that. Also the new path should be "./image_name" and the image should be where the cards that use it are.
| gharchive/pull-request | 2020-03-30T00:49:42 | 2025-04-01T06:38:04.005715 | {
"authors": [
"etang01",
"ismaildude"
],
"repo": "bitprj/curriculum",
"url": "https://github.com/bitprj/curriculum/pull/964",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1835142186 | Fix fallback to xcodebuild when xcpretty/xcbeautify is unavailable
Checklist
[x] I've read and followed the Contribution Guidelines
[x] step.yml and README.md is updated with the changes (if needed)
Version
Requires a PATCH version update
Context
Before:
Checking log formatter (xcbeautify) version
Failed to install Step dependencies:
installing log formatter failed:
failed to run xcbeautify command:
executing command failed (xcbeautify "--version"):
exec: "xcbeautify":
executable file not found in $PATH
Switching to xcodebuild for output tool
Running the tests...
[16:58:43] $ set -o pipefail && xcodebuild "-workspace" "/Users/lpusok/Develop/go/src/github.com/bitrise-steplib/steps-xcode-test/_tmp/BullsEye.xcworkspace" "-scheme" "BullsEye" "test" "-destination" "id=56F53136-D68E-4D5A-83B3-C35BDC7D9AFD" "-testPlan" "UITests" "-resultBundlePath" "/var/folders/r5/gkvczn3j2tb0m79nwby9fjv80000gq/T/XCUITestOutput3261492831/Test-BullsEye.xcresult" "-xcconfig" "/var/folders/r5/gkvczn3j2tb0m79nwby9fjv80000gq/T/2103298897/temp.xcconfig" | xcbeautify
xcbeautify command failed: executing command failed (xcbeautify): exec: not started
Exit code: -1
After:
Checking log formatter (xcbeautify) version
Checking log formatter failed: failed to run xcbeautify command: executing command failed (xcbeautify "--version"): exec: "xcbeautify": executable file not found in $PATH
Falling back to xcodebuild log formatter
Running the tests...
[16:57:02] $ xcodebuild "-workspace" "/Users/lpusok/Develop/go/src/github.com/bitrise-steplib/steps-xcode-test/_tmp/BullsEye.xcworkspace" "-scheme" "BullsEye" "test" "-destination" "id=56F53136-D68E-4D5A-83B3-C35BDC7D9AFD" "-testPlan" "UITests" "-resultBundlePath" "/var/folders/r5/gkvczn3j2tb0m79nwby9fjv80000gq/T/XCUITestOutput3631683970/Test-BullsEye.xcresult" "-xcconfig" "/var/folders/r5/gkvczn3j2tb0m79nwby9fjv80000gq/T/1898852877/temp.xcconfig"
Resolves: https://bitrise.atlassian.net/browse/BE-880
Changes
Investigation details
Decisions
Fix fallback to xcodebuild when xcpretty/xcbeautify is unavailable.
| gharchive/pull-request | 2023-08-03T14:08:10 | 2025-04-01T06:38:04.010838 | {
"authors": [
"lpusok"
],
"repo": "bitrise-steplib/steps-xcode-test",
"url": "https://github.com/bitrise-steplib/steps-xcode-test/pull/233",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1029693225 | Add scroll down arrow on desktop view
Add a scroll down arrow on the desktop view
not only for desktop, but for all devices...
| gharchive/issue | 2021-10-18T23:23:39 | 2025-04-01T06:38:04.018722 | {
"authors": [
"bitsandbytesdev"
],
"repo": "bitsandbytesdev/bitsandbytesdev.github.io",
"url": "https://github.com/bitsandbytesdev/bitsandbytesdev.github.io/issues/1",
"license": "BSD-4-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
169202066 | Add attribute rel="nofollow" to changes links
What this PR does:
Seems we do not have the tariff in the gov.uk robots file, so this PR sets rel="nofollow" on the atom/changes links to prevent spiders crawling them.
[TARIFF16] Reject Spiders / No follow / No Index
| gharchive/pull-request | 2016-08-03T18:36:08 | 2025-04-01T06:38:04.066973 | {
"authors": [
"theharq"
],
"repo": "bitzesty/trade-tariff-frontend",
"url": "https://github.com/bitzesty/trade-tariff-frontend/pull/32",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1605294164 | Opens to a pure white screen and won't load
Not sure whether this is related to the system being Win11.
Try downloading a fresh copy; some files may be missing.
Hi author. I'm also on Win11 and ran into this problem, but I solved it. After opening the app, Win11 pops up a dialog about being unable to reach SmartScreen, and then the app won't open. The fix is to go to Windows Security -> App & browser control -> Reputation-based protection and toggle "SmartScreen for Microsoft Edge" off and on once. I don't understand why this works, but I can now open and use the software.
Thanks. I never got that "unable to reach" prompt, but using the same method I was able to open it too.
I ran into this problem as well. After recently reinstalling my system, opening the app fails: a white window appears, then it crashes after a few seconds. The corresponding process is visible in Task Manager and does not exit on its own.
I tried the method above and it didn't work either...
I also hit this problem, and the method above doesn't work.
Edition: Windows 10 Pro
Version: 22H2
Installed on: 2022/07/22
OS build: 19045.2251
Experience: Windows Feature Experience Pack 120.2212.4180.0
Not sure what to do.
I hit a similar problem: the program opens to a blank page, and the process shows up in Task Manager.
First check whether the folder path contains special characters. For example, my path contained a folder named (。◕‿◕。), and the program would not open.
Try moving it to a drive root, e.g. into D:.
After updating to Win11 I also hit this problem, and the methods above didn't help. But I re-downloaded a fresh copy, put it on the desktop, and it opened. However, if I use that new folder to replace the old files in their original location, it still won't open, even when placed one directory up.
Suggestion for everyone: try moving the folder around (desktop, the root of each drive) and avoid folders with odd names such as (。◕‿◕。); I've verified those fail to open.
It's probably a permissions issue: keep moving it around, and it may open normally on whichever drive gives you higher permissions.
| gharchive/issue | 2023-03-01T16:33:03 | 2025-04-01T06:38:04.071822 | {
"authors": [
"BTTLimit",
"Dongagent",
"Eternallyn",
"Jacky6079",
"Masters0713",
"biuuu",
"moe-miao"
],
"repo": "biuuu/genshin-wish-export",
"url": "https://github.com/biuuu/genshin-wish-export/issues/187",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2154871420 | Example Input Files Not Working
Great work on the latest version of the paper and thanks for putting this repo out.
I was trying to test the basic inference you outlined using either the ESMFlow or AlphaFlow models and weights, and ran into problems at every turn. I'll detail my specific issues below, but repos always get increased usage when authors provide at least one full example inference command, so if you provide that I'm sure it would help many people checking out your code. Thanks!
Trying ESMFlow Model
mkdir output
mkdir weights
python predict.py --mode esmfold --input_csv splits/atlas_test.csv --weights weights/esmflow_md_distilled_202402.pt --samples 5 --outpdb output/
Output
2024-02-26 12:54:34,511 [---] [INFO] Loading the model
2024-02-26 12:55:16,878 [---] [INFO] Model has been loaded
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:25<00:00, 5.08s/it]
Traceback (most recent call last):
File "/---/alphaflow/predict.py", line 132, in
main()
File "/---/miniconda3/envs/AlphaFlow/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/---/alphaflow/predict.py", line 126, in main
f.write(protein.prots_to_pdb(result))
File "/---/alphaflow/alphaflow/utils/protein.py", line 163, in prots_to_pdb
prot = to_pdb(prot)
File "/---/miniconda3/envs/AlphaFlow/lib/python3.9/site-packages/openfold/np/protein.py", line 341, in to_pdb
chain_index = prot.chain_index.astype(np.int32)
AttributeError: 'NoneType' object has no attribute 'astype'
Tried with esmflow_pdb_base_202402.pt weights as well...same result.
Trying AlphaFlow Model
Preparing the MSA
python -m scripts.mmseqs_query --split splits/atlas_test.csv --outdir output
COMPLETE: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 450/450 [elapsed: 00:02 remaining: 00:00]
SUCCESS!
Running Inference
python predict.py --mode alphafold --input_csv splits/atlas_test.csv --msa_dir output/ --weights weights/alphaflow_pdb_distilled_202402.pt --samples 5 --outpdb output/
2024-02-26 13:17:56,383 [---] [INFO] Loading the model
Traceback (most recent call last):
File "/---/alphaflow/predict.py", line 132, in
main()
File "/---/miniconda3/envs/AlphaFlow/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/---/alphaflow/predict.py", line 78, in main
model = model_class(**ckpt['hyper_parameters'], training=False)
File "/---/alphaflow/alphaflow/model/wrapper.py", line 496, in init
self.model = AlphaFold(config,
File "/---/alphaflow/alphaflow/model/alphafold.py", line 73, in init
self.extra_msa_stack = ExtraMSAStack(
TypeError: init() missing 2 required positional arguments: 'opm_first' and 'fuse_projection_weights'
Thanks again for your assistance. Looking forward to trying out this great work.
Can you check if the OpenFold version is correct? A previous version of the README had the wrong install command --- see https://github.com/bjing2016/alphaflow/issues/2 where the issue sounds similar to what you describe with the ExtraMSAStack.
Yep, that was it. Thanks!
In case anyone asks: checking out the OpenFold commit you specify won't build if you have CUDA 12.3, which is what led me to try the latest OpenFold commit (which did work with 12.3) before I got CUDA 11.6 set up.
Also, in mmseqs_query line 284 you have a hard-coded iloc[:3] which only allows the first three entries of a csv file to be processed. I ran into that when trying to use the atlas_test.csv file which will then throw an error during inference when it can't find the 4th entry's mmSeq folder.
Thanks, mmseqs_query has been fixed.
| gharchive/issue | 2024-02-26T18:49:24 | 2025-04-01T06:38:04.086764 | {
"authors": [
"bjing2016",
"jfreeze95"
],
"repo": "bjing2016/alphaflow",
"url": "https://github.com/bjing2016/alphaflow/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2086449441 | Refactor samples, pantry etcetera
🚀 Pull Request
Description
Applying some lessons learned from introducing GeoVista's 'assets' structure to new developers - seeing what was easy to grasp, and what was hard.
If we like this: corresponding changes will be needed in the structure of geovista-data.
Please take a look at the new structure in my branch, to get a feel for it.
Made geovista.cache into a package directory, allowing the inclusion of registry.txt and an explanatory README.md.
Made a geovista.pantry package directory, encompassing several sorts of reusable things that were previously scattered through several root modules:
fetch_coastlines() has moved to the geovista.pantry root
The previous ~geovista.pantry~ has become geovista.pantry.data.
~geovista.samples~ has become geovista.pantry.meshes.
The texture routines previously in ~geovista.cache~ have moved to geovista.pantry.textures.
Hopefully I've caught everything - refactoring can be rather fraught!
Thanks @bjlittle!
@all-contributors please add @trexfeathers for maintenance
| gharchive/pull-request | 2024-01-17T15:36:41 | 2025-04-01T06:38:04.093561 | {
"authors": [
"bjlittle",
"trexfeathers"
],
"repo": "bjlittle/geovista",
"url": "https://github.com/bjlittle/geovista/pull/645",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
645788037 | f/Linear -- documentation updates for nodal outputs
I added some updates to the documentation for the nodal outputs. I also added a rough skeleton of documentation for ElastoDyn that will need to be filled out sometime later.
During testing, I also added a few minor changes to the nodal output parsing.
Thanks, Andy!
| gharchive/pull-request | 2020-06-25T19:07:38 | 2025-04-01T06:38:04.097361 | {
"authors": [
"andrew-platt",
"bjonkman"
],
"repo": "bjonkman/openfast",
"url": "https://github.com/bjonkman/openfast/pull/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
653556551 | [arm] error: couldn't load codegen backend
Hi, I'm trying to follow the build instructions, but when I get to test.sh, I get this error:
error: couldn't load codegen backend "/home/pi/codegen/rustc_codegen_cranelift/target/release/librustc_codegen_cranelift.so": "/home/pi/codegen/rustc_codegen_cranelift/target/release/librustc_codegen_cranelift.so: undefined symbol: __register_frame"
Are you using glibc or another libc? Just guessing.
That symbol should be provided by libunwind: https://github.com/bjorn3/rustc_codegen_cranelift/blob/eb5ce4e92ae8d512804279fda1101032c7ec9f28/src/debuginfo/unwind.rs#L136
Not sure if this answers your question, but this is the output of ldd --version
ldd --version
ldd (Debian GLIBC 2.28-10+rpi1) 2.28
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
pi@raspberrypi:~/codegen/rustc_codegen_cranelift
Ah, you are using an arm system. While the error is unrelated, please note that the AArch64 backend of Cranelift is still missing some things necessary for cg_clif (mostly 128-bit int support) and arm32 is completely missing.
Ok, I see, so this is a lost cause, and I should close the issue?
btw, I just installed libunwind-dev, and I got the same error
You can keep it open. Once Cranelift implements the necessary features I do want to get cg_clif fully functioning on AArch64.
AArch64 support is now almost complete. It only needs a couple of changes to Cranelift that have already landed on main to fix the remaining tests. I'm not sure why __register_frame wasn't found for you. If you still have this issue you can build with --no-unstable-features to disable the JIT.
| gharchive/issue | 2020-07-08T19:56:17 | 2025-04-01T06:38:04.103094 | {
"authors": [
"bjorn3",
"zwhitchcox"
],
"repo": "bjorn3/rustc_codegen_cranelift",
"url": "https://github.com/bjorn3/rustc_codegen_cranelift/issues/1060",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
727894338 | Fix template (-t) handling of export declarations
Fixed export commands being repeated in templated values.
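The intended behavior can be sketched roughly like this (a simplified illustration, not the gem's actual parser; the `VAR=$VAR` template form is an assumption here):

```ruby
# Sketch: convert one .env line to its template form, preserving an
# optional leading "export" instead of folding it into the value.
def template_env_line(line)
  if line =~ /\A(export\s+)?([A-Za-z_][A-Za-z0-9_]*)=/
    "#{$1}#{$2}=$#{$2}"
  else
    line
  end
end
```

So `export FOO=bar` becomes `export FOO=$FOO` rather than a mangled line with a stray export folded into the value.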
Can we keep this open? I like to use export syntax in my .env files, and having to go through the template and replace =export with = is a bit of a drag. If I can help get this ready for merge, let me know and I'll be happy to.
Commenting in the hope that this will be kept open and fixed.
Changes look great. Sorry for the delay. I will merge now and push out a new release soon.
| gharchive/pull-request | 2020-10-23T04:20:18 | 2025-04-01T06:38:04.113560 | {
"authors": [
"benforeva",
"bkeepers",
"marnen"
],
"repo": "bkeepers/dotenv",
"url": "https://github.com/bkeepers/dotenv/pull/416",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1228059705 | Updated telnetenabled on NBR750
Hello sir! Hope all is well with you these days. I see Netgear has changed the 'magic packet' for telnetenabled on my new router NBR750 and so sadly this 'telnet-enable2.py' is no longer working for it. Was hoping you might have a chance to take a peek at the updated binary/libraries for it so that maybe we could fork an updated version of the python script for NBR750 and newer routers.
As always happy to send some pizza your way for your valuable time. I have zipped up binary along with library dependencies in the download link that follows along with a .txt inside that gives strings, strace, and ldd output which hopefully helps. Cheers!
http://paste.c-net.org/RaquelBuffalo
(SHA-256 .zip checksum: 517f4422d3b6ac36d20bfaedb5e80d094b834927e4901d9e15f4bf96c3482430)
Worked with @bkerler on updating this telnet enabler for the NBR750, and he found that the final command that creates the hashed password in the magic packet had to be updated to '...hexdigest().lower()', basically lowercase instead of the uppercase used on the LBR20. However, even after this was done and the magic packet could be successfully sent, the telnet daemon still would not launch, so it seems Netgear has changed something even deeper and/or otherwise broken the telnet enabler daemon.
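For context, the case change amounts to something like this. This is a hedged sketch only: the real magic packet has more fields than just a hash, and the hash algorithm shown (MD5) is purely for illustration.

```python
import hashlib

# Sketch: older models (e.g. LBR20) reportedly expected the digest
# uppercased, while the NBR750 expects it lowercased.
def magic_packet_signature(payload: bytes, lowercase: bool) -> str:
    digest = hashlib.md5(payload).hexdigest()  # hexdigest() is lowercase
    return digest if lowercase else digest.upper()
```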
I spun my wheels on it for a few weeks but ultimately decided upon a workaround instead. At least on the NBR750 initial stable release OEM firmware, Netgear has brought back the NVRAM parameter 'telnetd_enable', which by default is set to '0' to disable it. However, if connected via serial console one can enable it with the command 'fnvram set telnetd_enable=1' followed by 'fnvram commit'. After a reboot the telnet daemon will be running, so you can telnet into the device. From there you can either continue to use telnet or enable SSH instead (which I recommend).
To enable telnet easily to start with for those without access to a serial console I've taken a config backup which captures the 'telnetd_enable=1' parameter value so that anyone on the OEM initial release firmware can restore it and gain telnet access. As part of the config backup it also overwrites things like 'admin' and wifi passwords along with wifi SSID but those can be changed post-restore. All the details including the required config backup file are located in the thread below with instructions. Hope this is helpful.
https://wirelessjoint.com/viewtopic.php?p=24894
Closing as per the workaround in my last comment.
@bkerler pinged me at the end of October 2022 and indicated that an additional required change in the Python script had been identified, and that I should now download the updated script to test. Unfortunately, other work piled up and I am only now getting back to testing and validating this on the various firmware versions Netgear has released for the NBR750 since I last tested. Will post the result here momentarily.
I have successfully tested the updated 'telnet-enable2.py' script from this repo on the NBR750 for the following firmware versions:
V4.6.5.11_1.5.50
V4.6.5.11_1.5.63
V4.6.5.11_1.5.64
It is confirmed functional for all of these now. Thanks so much for your expertise and effort to make this work :)
Confirmed script is functional on V4.6.5.11_1.5.66
| gharchive/issue | 2022-05-06T16:19:45 | 2025-04-01T06:38:04.121581 | {
"authors": [
"hazarjast",
"jericsmith504"
],
"repo": "bkerler/netgear_telnet",
"url": "https://github.com/bkerler/netgear_telnet/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
371663319 | Using the newly created chromeDriverVersionFinder.
Issue: https://github.com/blackbaud/skyux2/issues/2124
Codecov Report
Merging #489 into master will not change coverage.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #489 +/- ##
=====================================
Coverage 100% 100%
=====================================
Files 54 54
Lines 1653 1664 +11
Branches 245 246 +1
=====================================
+ Hits 1653 1664 +11
Flag | Coverage Δ
#builder | 100% <100%> (ø) :arrow_up:
#runtime | 100% <ø> (ø) :arrow_up:
#srcapp | 100% <ø> (ø) :arrow_up:
Impacted Files | Coverage Δ
cli/e2e.js | 100% <100%> (ø) :arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f9d7447...fbae359. Read the comment docs.
| gharchive/pull-request | 2018-10-18T18:51:13 | 2025-04-01T06:38:04.182126 | {
"authors": [
"Blackbaud-BobbyEarl",
"codecov-io"
],
"repo": "blackbaud/skyux-builder",
"url": "https://github.com/blackbaud/skyux-builder/pull/489",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
151128669 | Cavebot sometimes incorrectly detects GM's when going down floors
there is some weird bug wherein this error will occur:
mychar closed with this message: Client closed - condition onGMcloseConnection was activated: GM Tsar
when blackd proxy have seen GM Tsar when leaving depot, and cavebot has onGMcloseConnection, it may sometimes (completely incorrectly!!) believe that GM Tsar is nearby when the cavebot is changing floor down!
Whoaaaaah I just logged in to my e-mail account and from what I can see there's a lot of new things here! Keep it up mate
^ i started playing again ^^
anyway, i found out that it's actually a bug in this OT, it will sometimes send info about players who are like 100 SQM away when walking up/down floors. like here, i am very very far away from the GM in question, yet the server sends me info about everyone in DP every time i go down a floor. i have no idea why. weird OT custom code bug.
' we already knew his ID + include some info
tempID = FourBytesDouble(packet(pos + 2), packet(pos + 3), packet(pos + 4), packet(pos + 5))
AddID_HP idConnection, tempID, packet(pos + 6) 'update hp
nameofgivenID = GetNameFromID(idConnection, tempID)
| gharchive/issue | 2016-04-26T13:03:53 | 2025-04-01T06:38:04.198436 | {
"authors": [
"Nrated",
"divinity76"
],
"repo": "blackdtools/Blackd-Proxy-CLASSIC",
"url": "https://github.com/blackdtools/Blackd-Proxy-CLASSIC/issues/80",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
544057248 | [Gally]: master <- dev
Automatically created by Git-Ally
:tada: This PR is included in version 2.1.5 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2019-12-31T01:14:51 | 2025-04-01T06:38:04.207620 | {
"authors": [
"MrsFlux"
],
"repo": "blackflux/lambda-monitor",
"url": "https://github.com/blackflux/lambda-monitor/pull/1524",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1182687471 | Docs and examples using refactored APIs while most recent release still using the old ones
Bug Description
I was trying to reproduce the Logistic Regression example using version 0.3.0 and got the following error when running line rmh_sampler = blackjax.rmh(logprob_fn, sigma=jnp.ones(M) * 0.7).
I also tried to run the other examples in the Introduction notebook as well as the one in the README without success.
It took me a while to realise that the APIs in the last release 0.3.0 have been considerably refactored with !159 and that all the docs are using the new APIs.
I think it would be nice if you could mention in the README, as well as in the main docs, that the examples all use a still-unreleased version, and that to make them work you need to install directly from a local clone of the project.
Versions
BlackJAX 0.3.0
Python 3.8.10 (default, Nov 26 2021, 20:14:08)
[GCC 9.3.0]
Jax 0.2.28
Jaxlib 0.1.76
Thank you for raising the issue. I am going to do even better than this and release a new version of the package!
Done!
| gharchive/issue | 2022-03-27T22:40:20 | 2025-04-01T06:38:04.215035 | {
"authors": [
"drabbit17",
"rlouf"
],
"repo": "blackjax-devs/blackjax",
"url": "https://github.com/blackjax-devs/blackjax/issues/185",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
235603054 | Can't connect to Atlassian Marketplace due to DNS issues
Sorry if this is a general Docker issue and not due to your images, but JIRA can't connect to Marketplace. I guess it's due to name resolution not working from inside the container:
# docker exec -it jira /bin/bash
bash-4.3$ nslookup google.com
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'google.com': Try again
Any idea on how to fix this? I'm using your docker-compose.yml.
PS:
Docker version 17.05.0-ce, build 89658be
docker-compose version 1.13.0, build 1719ceb
Please check this page: https://docs.docker.com/engine/userguide/networking/default_network/configure-dns/
Yes, I know this page, but it doesn't mention docker-compose. I have changed your docker-compose.yml to include the setting dns: 8.8.8.8 and verified that docker inspect shows this address as dns, but nslookup still fails. Maybe it's related to this issue, where users claim the problem is due to the line options ndots:0 injected into resolv.conf. Indeed the container's resolv.conf looks like:
search mydomain.com
nameserver 127.0.0.11
options ndots:0
Does name resolution work for you with your provided docker-compose.yml?
I verified:
$ curl -O https://raw.githubusercontent.com/blacklabelops/jira/master/docker-compose.yml
$ docker-compose up -d
$ docker-compose exec jira
$ nslookup google.com
Name: google.com
Address 1: 216.58.207.174 muc11s04-in-f14.1e100.net
Address 2: 2a00:1450:4016:807::200e muc11s04-in-x0e.1e100.net
Your docker daemon is not able to configure the network bridge correctly. Your containers are not able to connect to the internet. Can't help you with more than that.
Thanks, @blacklabelops. The issue was entirely unrelated to Docker: our internal DNS server didn't accept recursion from Docker's subnet. I needed to add
allow-recursion { 127.0.0.1; 10.0.0.0/8; 172.16.0.0/12; };
to bind.conf. Confusion arose from the line nslookup: can't resolve '(null)': Name does not resolve which is apparently unique to alpine, other containers don't show it. Now the line still appears, but name resolution works:
# docker exec -it jira nslookup google.com
nslookup: can't resolve '(null)': Name does not resolve
Name: google.com
Address 1: 172.217.17.142 ams15s30-in-f14.1e100.net
Address 2: 2a00:1450:400e:807::200e ams15s30-in-x0e.1e100.net
Thanks, closing.
| gharchive/issue | 2017-06-13T15:53:56 | 2025-04-01T06:38:04.220952 | {
"authors": [
"blacklabelops",
"marton78"
],
"repo": "blacklabelops/jira",
"url": "https://github.com/blacklabelops/jira/issues/34",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1171871949 | Empty value in bracketed inline data overwriting default value in table
What happened?
This error occurred on both deesktop & mobile versions of Obsidian
Frontmatter & unbracketed key-value pairs performed as expected, however an empty bracketed inline data overwrites a table's default value with blank. Screenshot of source & preview mode attached.
The issue can be recreated with:
Test1:
Test2::
[Test3::]
DQL
Table
default(Test1, "👽") AS "1",
default(Test2, "🛸") AS "2",
default(Test3, "🐮") AS "3"
WHERE Type = "Test"
JS
No response
Dataview Version
0.4.26
Obsidian Version
0.13.33
OS
Windows
In Discord, I was asked to also run an `= this` query. Results attached
Also, I was asked to reassign the issue to @AB1908 but I can't seem to find a way to do that (using mobile GitHub)
Test1 and Test2 evaluate to undefined and null. Test2 is not indexed at all as you can see here:
default, per the docs, changes default values for values that are null, and also appears to work for undefined. However, Test3 has an empty string "", which is not null, which is why the field is not populating as you expect it to.
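The semantics can be sketched with a tiny model (a simplification for illustration only, not Dataview's actual implementation):

```javascript
// Simplified model of default()'s behavior as described above; the real
// Dataview code differs, this only illustrates the null vs. "" split.
function withDefault(value, fallback) {
  // Only null and undefined are replaced; an empty string passes through.
  return value === null || value === undefined ? fallback : value;
}

console.log(withDefault(undefined, "👽")); // "👽" (Test1: field never set)
console.log(withDefault(null, "🛸"));      // "🛸" (Test2: indexed as null)
console.log(withDefault("", "🐮"));        // ""   (Test3: bracketed empty string)
```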
Ah that makes sense now! I didn't realize the brackets would automatically make it a blank string, whereas the other formats don't.
Thank you for your help!!
Admittedly I was using the brackets primarily just to be able to target the data with CSS. Is there a way to target the Test2 format? If not, I totally understand, styling kind of goes beyond the point of reading & working with raw data.
I'm not aware of how to apply CSS to inline fields. Let me poke around and get back to you.
Sorry for the late check on this but any progress? If not, I'll dig it up again.
Hello,
sorry to come back to you so late. I'm currently going through older issues to see which are stale or already solved.
There's a FR in #713 to be able to style non-bracketed inline fields. If I get you right, that's what you want to have? If this is the only open point on this issue and if it's okay, I'd like to close this issue in favor of #713.
That the bracketed inline field is set to an empty string as value is intended behaviour, as far as I can tell - in fact, it is the only way I am aware of to use an empty value.
| gharchive/issue | 2022-03-17T03:00:54 | 2025-04-01T06:38:04.229880 | {
"authors": [
"AB1908",
"s-blu",
"spasticginger"
],
"repo": "blacksmithgu/obsidian-dataview",
"url": "https://github.com/blacksmithgu/obsidian-dataview/issues/956",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2164756479 | 🛑 We Miller Puzzles - JigSaw Puzzles is down
In ff4e939, We Miller Puzzles - JigSaw Puzzles (https://puzzles.wemiller.com/) was down:
HTTP code: 523
Response time: 3205 ms
Resolved: We Miller Puzzles - JigSaw Puzzles is back up in 5698d6d after 5 hours, 57 minutes.
| gharchive/issue | 2024-03-02T10:52:41 | 2025-04-01T06:38:04.235969 | {
"authors": [
"blaineam"
],
"repo": "blaineam/statussi",
"url": "https://github.com/blaineam/statussi/issues/318",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2407180769 | 🛑 We Miller Pass - Password Manager is down
In a34ae29, We Miller Pass - Password Manager (https://pass.wemiller.com/) was down:
HTTP code: 523
Response time: 117 ms
Resolved: We Miller Pass - Password Manager is back up in 0824341 after 10 minutes.
| gharchive/issue | 2024-07-13T22:14:20 | 2025-04-01T06:38:04.238600 | {
"authors": [
"blaineam"
],
"repo": "blaineam/statussi",
"url": "https://github.com/blaineam/statussi/issues/465",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
912250023 | Hyperparameter settings
Hello, after reading your paper and code, I have a few questions.
First, in the High_level_fusion class I noticed that hat_F_I_4 and hat_F_I_5 are scaled by hyperparameters of 2.0 and 1.7 respectively. How were these hyperparameters chosen, and how much do they affect the final results?
Second, when training my own network I also tried using edge information for extra supervision, but I used a BCE loss and saw no obvious performance improvement. Since your ablation study does not include a comparison for this component, I am curious how much adding the contour information actually helps.
Finally, since the training code has not been released yet, do you use the same weight for all the loss terms? There are quite a few loss terms, so tuning the weights seems tricky.
Hello,
In the high-level fusion there are indeed two hyperparameters that weight the features at different levels, and the two values were chosen from my tuning experience. That said, I found that while 2.0 and 1.7 gave good results when training on my own machine, other settings sometimes worked better on other people's machines (possibly related to library versions). So if you are not specifically chasing SOTA performance, setting them both to 1.0 is also fine.
I also tried a BCE loss to supervise the edge information and likewise saw no performance gain (and no negative effect either); later I found that an L2 loss brings a small improvement. In the paper, you can see the comparison by checking whether the model adds low-level fusion. I should stress, though, that this gain is not large (high-level fusion matters more), and the extra feature fusion and supervision on the low-level, high-resolution features adds time cost, so whether to add edge supervision is something you need to weigh yourself.
I have now released the training code on the train-CN branch. Since I am short on time at the moment, the code is not fully commented and may error out if run directly. But from it you can see what the loss functions look like, how the weights between the losses are adjusted, and exactly where the deep supervision is added. I hope this helps.
Thank you very much, and best wishes
| gharchive/issue | 2021-06-05T12:52:00 | 2025-04-01T06:38:04.318148 | {
"authors": [
"blanclist",
"clelouch"
],
"repo": "blanclist/CDNet",
"url": "https://github.com/blanclist/CDNet/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2598891885 | Default view on scenes when importing to blazium 4.3 default to wireframe
Tested versions
v4.3.stable.custom_build [ce7311f34]
System information
Windows 11, Ryzen 9 7900X, 7900XTX, 64G Ram
Issue description
Other need to check this, or if this was something fishy with my project...
Majority of my scenes defaulted to wireframe when imporing from godot to blazium
Steps to reproduce
Minimal reproduction project (MRP)
is this still an issue?
| gharchive/issue | 2024-10-19T08:19:58 | 2025-04-01T06:38:04.325295 | {
"authors": [
"Norrox",
"Starkium"
],
"repo": "blazium-engine/blazium",
"url": "https://github.com/blazium-engine/blazium/issues/74",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2442433828 | AC-3: Roles
AC-3: Roles
User Story
AS A business owner
I WANT to be able to view and manage the departments, roles, and employees in my company
SO THAT I can organize and plan my business
Acceptance Criteria
GIVEN a command-line application that accepts user input
WHEN I choose to view all roles
THEN I am presented with the job title, role id, the department that role belongs to, and the salary for that role
To fulfill the acceptance criteria for the viewAllRoles function in your command-line application, you need to retrieve the necessary information from your database and present it to the user in a formatted manner. Here's a step-by-step guide on how you can achieve this:
Retrieve Roles Data: Query your database to fetch all roles along with their associated information such as job title, role ID, department ID, and salary.
Format Data: Once you have the roles data, format it in a way that displays the job title, role ID, department name (instead of ID), and salary for each role.
Display Data: Present the formatted roles data to the user in a clear and organized manner, such as printing it in a table format.
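Put together, the three steps might look like the following hypothetical sketch. The SQL string and field names are assumptions for illustration, not the project's actual schema; the sample rows stand in for what the JOIN would return.

```javascript
// Step 1 (illustrative): a JOIN so the department name, not its id, is shown.
const ROLES_QUERY = `
  SELECT role.id, role.title, department.name AS department_name, role.salary
  FROM role
  JOIN department ON role.department_id = department.id`;

// Step 2: shape raw join rows into the columns the acceptance criteria ask for.
function formatRoles(rows) {
  return rows.map((r) => ({
    title: r.title,
    role_id: r.id,
    department: r.department_name, // department name instead of its id
    salary: r.salary,
  }));
}

// Step 3: display; console.table prints an aligned grid in the terminal.
const sampleRows = [
  { id: 1, title: "Accountant", department_name: "Finance", salary: 70000 },
  { id: 2, title: "Engineer", department_name: "Engineering", salary: 95000 },
];
console.table(formatRoles(sampleRows));
```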
| gharchive/issue | 2024-08-01T13:15:43 | 2025-04-01T06:38:04.328073 | {
"authors": [
"bldambtn"
],
"repo": "bldambtn/WhoDoYouWorkFor",
"url": "https://github.com/bldambtn/WhoDoYouWorkFor/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
149723338 | alternatives to appdirectory dependency
https://github.com/LinusU/node-application-config
https://github.com/sindresorhus/env-paths
| gharchive/issue | 2016-04-20T10:06:32 | 2025-04-01T06:38:04.361632 | {
"authors": [
"jokeyrhyme"
],
"repo": "blinkmobile/blinkmrc.js",
"url": "https://github.com/blinkmobile/blinkmrc.js/issues/2",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
209192397 | How to make views appear from the bottom?
I want to make the views appear from the bottom rather than from the top. Is that possible?
Check my comment on the issue #3
| gharchive/issue | 2017-02-21T16:09:49 | 2025-04-01T06:38:04.400122 | {
"authors": [
"RomiValladares",
"blipinsk"
],
"repo": "blipinsk/FlippableStackView",
"url": "https://github.com/blipinsk/FlippableStackView/issues/27",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
484976492 | Contexts + CdnInfoContext & Gw2ClientContext
Added base structure for new ContextsService. Also added two contexts CdnInfoContext and Gw2ClientContext which demonstrate detecting the Client version of the application using the Mumble Link service to detect the current build ID and then the asset CDNs to check which client we are using based on that current build ID.
This provides some of the potential implementation currently being discussed in #62.
Additionally, if accepted, this contains contexts that would allow us to more easily solve #50.
Resolved bug with CdnInfo where parsedSuccessfully could be true if the number of values was not 5. https://github.com/blish-hud/Blish-HUD/pull/106#discussion_r318001999
Reduced complexity of "Is______ClientType" function. https://github.com/blish-hud/Blish-HUD/pull/106#discussion_r318009487 & https://github.com/blish-hud/Blish-HUD/pull/106#discussion_r318009731
Fixed bugs in CdnInfoContext as well as Gw2ClientContext.
Contexts can now be unloaded.
Gw2MumbleService now has an event "BuildIdChanged"
OverlayService now, in addition to TacO, checks to see if the running client is the standard client or the Chinese client (moves the CornerIcons over if it is the Chinese client).
@greaka While I think the primary focus of this PR is the Contexts implementation, I think it'd be good to have Lei test the artifact that is created from this build and ensure the Blish HUD CornerIcon moves over as expected on a real Chinese version of the client. I did a good amount of testing with Fiddler to fake the build ID that is returned and was able to confirm it should work. Also ensured that it would fallback gracefully if something fails with the request (and logs are much more detailed with why it failed, now).
If you like this latest implementation, I can finish writing up the XML docs for it and mark it ready for a true review.
Once we get this merged, we can start adding things like the FestivalContext, which I am fairly excited about. 🎉
@greaka Please re-review the implementation when you have a chance. I believe I have implemented everything now and have put in XML documentation where appropriate. I've also added a State member to Contexts which indicates if the Context is loading, ready, etc.
To prevent anybody from unregistering a Context, I added a ContextHandle which can be used to expire a Context. The ContextHandle associated with a Context type is returned when you call RegisterContext. The full workflow would look something like:
// Register the context when the module is loaded
var myContextHandle = GameService.Contexts.RegisterContext(new ExampleContext());
// Expire context when the module is unloaded
myContextHandle.Expire();
Also, I received confirmation that this implementation has fixed Lei's problem! 🙂
https://cdn.discordapp.com/attachments/568470477543178261/617894300327477249/jt_2019-09-02_09-30-59.png
I have multiple nitpicks in case the contexts are meant to be thread safe.
Having them threadsafe is probably a nice to have for now though
Agreed!
| gharchive/pull-request | 2019-08-25T21:12:31 | 2025-04-01T06:38:04.408317 | {
"authors": [
"dlamkins",
"greaka"
],
"repo": "blish-hud/Blish-HUD",
"url": "https://github.com/blish-hud/Blish-HUD/pull/106",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1800356496 | [Feature] Ability to implement custom drawing functions
Hi, I've added a way of implementing custom node drawing functions.
Here's an example usage:
use eframe::{run_native, App, CreationContext};
use egui::{
epaint::{CircleShape, RectShape},
Color32, Context, Pos2, Rect, Rounding, Stroke,
};
use egui_graphs::{
to_input_graph, Graph, GraphView, Node, SettingsInteraction, ShapesNodes, StateComputedNode,
};
use petgraph::{stable_graph::StableGraph, Directed};
pub struct BasicApp {
g: Graph<(), (), Directed>,
}
impl BasicApp {
fn new(_: &CreationContext<'_>) -> Self {
let g = generate_graph();
Self { g }
}
}
impl App for BasicApp {
fn update(&mut self, ctx: &Context, _: &mut eframe::Frame) {
let a = SettingsInteraction::new().with_dragging_enabled(true);
egui::CentralPanel::default().show(ctx, |ui| {
ui.add(
&mut GraphView::new(&mut self.g)
.with_interactions(&a)
.with_custom_node_drawing(Some(
|res: &mut (ShapesNodes, ShapesNodes),
loc: Pos2,
node: &Node<()>,
comp_node: &StateComputedNode| {
let color = Color32::from_rgb(0, 0, 0);
let shape = CircleShape {
center: loc,
radius: 20.0,
fill: color,
stroke: Stroke::new(1., color),
};
res.0 .0.push(shape.into());
},
))
.with_custom_node_interacted_drawing(Some(
|res: &mut (ShapesNodes, ShapesNodes),
loc: Pos2,
node: &Node<()>,
comp_node: &StateComputedNode| {
if !(node.selected()
|| comp_node.subselected()
|| node.dragged()
|| node.folded()
|| comp_node.subfolded())
{
return;
}
let color = Color32::from_rgb(50, 0, 0);
let rounding = Rounding::default();
let mut min_rec = loc;
min_rec.x -= 20.0;
min_rec.y -= 20.0;
let mut max_rec = loc;
max_rec.x += 20.0;
max_rec.y += 20.0;
let rect = Rect {
min: min_rec,
max: max_rec,
};
let shape = RectShape {
rect,
rounding,
fill: color,
stroke: Stroke::new(1., color),
};
res.1 .0.push(shape.into());
},
)),
);
});
}
}
fn generate_graph() -> Graph<(), (), Directed> {
let mut g: StableGraph<(), ()> = StableGraph::new();
let a = g.add_node(());
let b = g.add_node(());
let c = g.add_node(());
g.add_edge(a, b, ());
g.add_edge(b, c, ());
g.add_edge(c, a, ());
to_input_graph(&g)
}
fn main() {
let native_options = eframe::NativeOptions::default();
run_native(
"egui_graphs_basic_demo",
native_options,
Box::new(|cc| Box::new(BasicApp::new(cc))),
)
.unwrap();
}
I think this is a good contribution which expands style customization.
I am planning to return to style customization after we have a stable and polished version. But I will accept this for now.
| gharchive/pull-request | 2023-07-12T07:25:42 | 2025-04-01T06:38:04.422944 | {
"authors": [
"TDiblik",
"blitzarx1"
],
"repo": "blitzarx1/egui_graphs",
"url": "https://github.com/blitzarx1/egui_graphs/pull/60",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1288601197 | [Issue]: Wallet addresses shouldn't be lowercase
Is your request related to a problem?
The last update made the wallet address lowercase by default, which changes the behavior of things like Blockies which now have different colors than the ones in MetaMask for example.
Feature Description
No response
Alternative Solutions
Maybe don't call .toLowerCase() on the addresses; let the user do that when needed. Or was there a specific reason you made this change?
Anything else?
No response
Hm actually this seems fine on desktop Metamask, but on their mobile app, Blockies have different colors.
It may not be related to this package so we may close it.
| gharchive/issue | 2022-06-29T11:49:19 | 2025-04-01T06:38:04.442819 | {
"authors": [
"cmalex23"
],
"repo": "blocknative/web3-onboard",
"url": "https://github.com/blocknative/web3-onboard/issues/1107",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
314265729 | As a developer, I want to better understand which callbacks run in which threads
In most callbacks it is necessary to user runOnUIThread.
This should be documented or the SDK should call the callbacks from the UI thread already.
Thanks @friedger - I made a note of this in the comments.
Going to close this issue. Please open a new issue if you think it would be better to set these callbacks up to run on the UI thread and let me know if you'd like to take a stab at it.
| gharchive/issue | 2018-04-13T22:16:10 | 2025-04-01T06:38:04.450339 | {
"authors": [
"friedger",
"larrysalibra"
],
"repo": "blockstack/blockstack-android",
"url": "https://github.com/blockstack/blockstack-android/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
418930147 | Installation of Blockstack 19.0.0 includes a blockstack-18.3.0-rc1.tgz file
Installing blockstack 19.0.0 comes with a blockstack-18.3.0-rc1.tgz file. Was confusing to me when I listed the module contents.
npm ls shows the right version.
We should remove the confusing tgz file.
$ ls -al node_modules/blockstack/
total 6328
drwxr-xr-x 23 manthony wheel 736 Mar 8 11:38 .
drwxr-xr-x 148 manthony wheel 4736 Mar 8 11:38 ..
-rw-r--r-- 1 manthony wheel 200 Oct 26 1985 .babelrc
drwxr-xr-x 3 manthony wheel 96 Mar 8 11:38 .circleci
-rw-r--r-- 1 manthony wheel 118 Oct 26 1985 .eslintignore
-rw-r--r-- 1 manthony wheel 1395 Oct 26 1985 .eslintrc
-rw-r--r-- 1 manthony wheel 242 Oct 26 1985 .flowconfig
drwxr-xr-x 17 manthony wheel 544 Mar 8 11:38 .nyc_output
-rw-r--r-- 1 manthony wheel 8353 Oct 26 1985 CHANGELOG.md
-rw-r--r-- 1 manthony wheel 2137 Oct 26 1985 CONTRIBUTING.md
-rw-r--r-- 1 manthony wheel 1082 Oct 26 1985 LICENSE
-rw-r--r-- 1 manthony wheel 3746 Oct 26 1985 README.md
-rw-r--r-- 1 manthony wheel 2412148 Oct 26 1985 blockstack-18.3.0-rc1.tgz
-rw-r--r-- 1 manthony wheel 889 Oct 26 1985 bower.json
drwxr-xr-x 4 manthony wheel 128 Mar 8 11:38 dist
drwxr-xr-x 4 manthony wheel 128 Mar 8 11:38 docs
-rw-r--r-- 1 manthony wheel 4545 Oct 26 1985 docs-button.png
-rw-r--r-- 1 manthony wheel 761104 Oct 26 1985 docs.json
-rw-r--r-- 1 manthony wheel 686 Oct 26 1985 documentation.yml
drwxr-xr-x 3 manthony wheel 96 Mar 8 11:38 flow-typed
drwxr-xr-x 18 manthony wheel 576 Mar 8 11:38 lib
-rw-r--r-- 1 manthony wheel 5274 Mar 8 11:38 package.json
drwxr-xr-x 7 manthony wheel 224 Mar 8 11:38 tests
$ npm ls blockstack
/private/tmp
└── blockstack@19.0.0
This should be fix in v19.1.0
@yknl yo blockstack gave me this 5 minutes ago. that's what we use to build the initial hello world
manthony at booboo in /tmp/test-download/node_modules/blockstack
$ ls
CHANGELOG.md bower.json docs.json package.json
LICENSE dist documentation.yml tests
README.md docs flow-typed
blockstack-18.3.0-rc1.tgz docs-button.png lib
I don't see this file anymore in the latest versions. The app generator should have been updated as well.
| gharchive/issue | 2019-03-08T19:43:22 | 2025-04-01T06:38:04.476714 | {
"authors": [
"moxiegirl",
"yknl"
],
"repo": "blockstack/blockstack.js",
"url": "https://github.com/blockstack/blockstack.js/issues/619",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
672634320 | Was this page helpful? section improvements
Align the edges with the content
Icons draw too much attention because they're too large. Should be 28px instead of 42px
Icons missing background, attached svgs!
Happy.svg
@jasperjansz these are react components, the background is missing only on accident when I was making it work with dark mode. Could you design a dark mode version of the faces I can use? Thanks!
| gharchive/issue | 2020-08-04T09:07:57 | 2025-04-01T06:38:04.481499 | {
"authors": [
"aulneau",
"jasperjansz"
],
"repo": "blockstack/docs.blockstack",
"url": "https://github.com/blockstack/docs.blockstack/issues/686",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
898648067 | Token recovery
Is there any possibility of recovering this transfer? I made it without the memo because I did not know how to do the transfer correctly.
99,00 STX
SP34FXH035DMS7BDZCHF17Q0P0QXPT713BZZGBER4
SP1P72Z3704VMT3DMHPP2CB8TGQWGDBHD3RPR9GZS
0xf6a911512fd3c55afd8e85a91864db0d75fb529d582369d469b4f7dbe4dc96bf
0.00018 STX
#15805
Hace 19 horas
0xa847f231c4261882fa4f737b6d14f2c3bb3ef17de3a70a52b0033b929499cce3
This is a transaction to Binance, without a memo. That means they can not automatically credit it to your account but you can contact them with these details and I am sure they can sort it out for you. Nobody here can do anything about it though.
This issue can be closed.
| gharchive/issue | 2021-05-21T22:52:12 | 2025-04-01T06:38:04.483532 | {
"authors": [
"314159265359879",
"razpurge"
],
"repo": "blockstack/explorer",
"url": "https://github.com/blockstack/explorer/issues/446",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
749687834 | Integrate identity, latest transaction and assets into home modal
See screen design
Identity
If the user hasn't set a username for the currently selected address, the address is shown (needs design)
If user has set a username, the username shows primarily with the address secondarily (needs design)
In both cases, there are options to both copy the address to clipboard and visit the address page on the Explorer (needs design)
Latest transaction
If the user hasn't had a transaction within the past 48 hours, this section is hidden entirely
If they've had one or more transactions within the past 48 hours, the latest is shown in this section. The image, ticker and name should be set as defaults for any of the tokens involved in the transaction.
The amount listed should be positive if the token was received and negative if sent.
NFTs should show "1" or "-1" whereas FTs should show the exact amount transferred.
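The sign/amount rule above can be sketched as follows. Field names like isNFT and isSent are assumptions for illustration, not the project's actual data model.

```javascript
// Hypothetical sketch of the display-amount rule described above.
function displayAmount(transfer) {
  // NFTs are implicitly quantity 1; FTs show the exact amount transferred.
  const magnitude = transfer.isNFT ? 1 : transfer.amount;
  // Sent tokens are shown as negative, received tokens as positive.
  return transfer.isSent ? -magnitude : magnitude;
}

console.log(displayAmount({ isNFT: true, isSent: true }));                 // -1
console.log(displayAmount({ isNFT: false, isSent: false, amount: 42.5 })); // 42.5
```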
Tokens (fungible tokens)
If the user has no tokens, they should be directed to buy STX with link to CoinMarketCap (see design)
If the user has one or more tokens, they should be listed with default image, ticker and name values
Collectibles (non-fungible tokens)
If the user has no collectibles, they should simply see "You don't own any collectibles" message.
If the user has one or more collectibles, they should be listed with default image, ticker and name values and no value on the right (since they're all implicitly "1")
@jasperjansz it appears the identity area of the home modal needs some small updates here?
@hstove what's the best way to test multiple assets in the UI here?
| gharchive/issue | 2020-11-24T13:01:55 | 2025-04-01T06:38:04.490623 | {
"authors": [
"markmhx"
],
"repo": "blockstack/ux",
"url": "https://github.com/blockstack/ux/issues/689",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
70009613 | Upgrade JMXTrans
The JMXTrans community cookbook points to an older version (2012) of the JMXTrans software to install. We need to use the latest version of the software which can fix some of the stats data issues seen in Graphite. Also it will help us get support from the JMXTrans community.
Version 2.0 of the jmxtrans-cookbook is available for testing. The default recipe does the install from the tar.gz file published by the jmxtrans project and install_ubuntu uses the deb package. Also included is remove-ver1 to remove jmxtrans installed using the previous version of the cookbook.
Fixed as part of PR #556.
| gharchive/issue | 2015-04-22T03:20:49 | 2025-04-01T06:38:04.502397 | {
"authors": [
"bijugs"
],
"repo": "bloomberg/chef-bach",
"url": "https://github.com/bloomberg/chef-bach/issues/137",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2549751579 | feat: extract statetest models/structs to standalone crate
Closes: #1787
Cool! lgtm with one nit.
Done! Ci green.
| gharchive/pull-request | 2024-09-26T07:31:01 | 2025-04-01T06:38:04.531343 | {
"authors": [
"royvardhan"
],
"repo": "bluealloy/revm",
"url": "https://github.com/bluealloy/revm/pull/1808",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
165494371 | Add rails env to Airbrake
Airbrake.configure do |config|
  config.host = 'http://errors.blueberry.cz'
  config.project_id = -1
  config.project_key = 'API_KEY'
  config.environment = Rails.env
  config.ignore_environments = %w(development test)
end
https://github.com/airbrake/airbrake-ruby#blacklist_keys
| gharchive/issue | 2016-07-14T07:22:37 | 2025-04-01T06:38:04.532396 | {
"authors": [
"Ceda",
"mmagnusek"
],
"repo": "blueberryapps/blueberry_rails",
"url": "https://github.com/blueberryapps/blueberry_rails/issues/46",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
100109727 | Keystone ldap
Updates added for deploying openldap, setting up password policies, and configuring keystone. By default openldap is not deployed; it needs to be enabled explicitly.
Can one of the admins verify this patch?
"ok to test" to accept this pull request for testing
"test this please" for a one time test run
"add to whitelist" to add the author to the whitelist
| gharchive/pull-request | 2015-08-10T16:34:34 | 2025-04-01T06:38:04.533950 | {
"authors": [
"bbc-jenkins",
"lihkin213"
],
"repo": "blueboxgroup/ursula",
"url": "https://github.com/blueboxgroup/ursula/pull/1129",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
192132224 | wip: experimental CentOS support
demo: ursula --vagrant envs/example/allinone-centos site.yml
this is just enough to get nova, glance, keystone, neutron seemingly fuctioning
correctly on CentOS.
have disabled heat, lbaas, etc. also disabling logging/monitoring/etc.
retest
retest
retest
| gharchive/pull-request | 2016-11-28T22:16:27 | 2025-04-01T06:38:04.535585 | {
"authors": [
"nirajdp76",
"paulczar"
],
"repo": "blueboxgroup/ursula",
"url": "https://github.com/blueboxgroup/ursula/pull/2300",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
982875491 | doc(readme): Matrix chat room
Add a chat room for discussions
:tada: This PR is included in version 1.14.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2021-08-30T14:47:46 | 2025-04-01T06:38:04.537374 | {
"authors": [
"bluecmd"
],
"repo": "bluecmd/fortigate_exporter",
"url": "https://github.com/bluecmd/fortigate_exporter/pull/136",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
90041484 | update index.html
new ui design
That's the lamest "hacking" attempt I've ever seen.
| gharchive/pull-request | 2015-06-22T08:25:17 | 2025-04-01T06:38:04.543285 | {
"authors": [
"blueimp",
"mufti1927"
],
"repo": "blueimp/jQuery-File-Upload",
"url": "https://github.com/blueimp/jQuery-File-Upload/pull/3402",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
181809765 | last line when textFormat="formatted"
Hello,
The last line is automatically justified when using textFormat="formatted". I've seen in another issue here that you suggest using the JustifiedSpan. The problem with that is that I cannot know in advance how long the last line will be, and the span must cover the entire line (so setting it just on the last character/word won't help). When using textFormat="plain" it works well. I've attached 2 screenshots so you can see the difference.
Please let me know if there's something I got wrong. If you have any idea of how to solve it, I would be extremely thankful.
Goni
@gkrishnan
+1
any update??
| gharchive/issue | 2016-10-08T07:08:24 | 2025-04-01T06:38:04.545977 | {
"authors": [
"CompositionCloud",
"bluejamesbond",
"ericmguimaraes",
"zuraba"
],
"repo": "bluejamesbond/TextJustify-Android",
"url": "https://github.com/bluejamesbond/TextJustify-Android/issues/127",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
636535142 | enforce selectedRoles
selectedRoles isn't really useful until we get choice-driven config variation modelling, but we should make sure that it works as expected for now so that things don't break when config choices are implemented.
We do have validation that any selected role has a role configuration in the kdapp. What we don't have is awareness in the cluster setup that only selected roles should be paid attention to.
Hi @joel-bluedata , I am new to open source. Could I please take it up and start working?
| gharchive/issue | 2020-06-10T20:42:35 | 2025-04-01T06:38:04.554448 | {
"authors": [
"Pushkal-G",
"joel-bluedata"
],
"repo": "bluek8s/kubedirector",
"url": "https://github.com/bluek8s/kubedirector/issues/330",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1868451658 | Failed compiling release 1.1.6 with TDM-gcc
However, 1.1.4 works well when compiled with TDM-gcc. What should I do to fix this?
Compile 1.1.6
Compile 1.1.4
error message:
C:/ProgramEnv/gcc/bin/../lib/gcc/x86_64-w64-mingw32/10.3.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\ADMINI~1\AppData\Local\Temp\cc0NVBNz.o:main.cpp:(.text+0x41076): undefined reference to `fread_s'
collect2.exe: error: ld returned 1 exit status
You are right. On Windows I assumed you were using the MSVC compiler, so fopen_s is used instead of fopen, and that function does not work with other compilers.
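The underlying portability problem is that fopen_s and fread_s are Microsoft "secure" CRT extensions, which TDM-GCC/MinGW does not provide. A common way to handle this is to gate the MSVC-only call behind the _MSC_VER macro; the sketch below is a hypothetical wrapper and not necessarily what the pushed pocketpy fix looks like:

```c
#include <stdio.h>
#include <assert.h>

/* fopen_s exists only in the MSVC CRT; on other compilers (TDM-GCC,
 * MinGW, Linux gcc/clang) fall back to plain fopen. The wrapper name
 * portable_fopen is illustrative, not from the pocketpy codebase. */
FILE *portable_fopen(const char *path, const char *mode) {
#ifdef _MSC_VER
    FILE *fp = NULL;
    return fopen_s(&fp, path, mode) == 0 ? fp : NULL;
#else
    return fopen(path, mode);
#endif
}
```

Both branches return NULL on failure, so call sites can keep a single error-handling path regardless of compiler.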
I've pushed a fix for this issue. It should work now.
| gharchive/issue | 2023-08-27T10:57:44 | 2025-04-01T06:38:04.557212 | {
"authors": [
"XingZhe-Li",
"blueloveTH"
],
"repo": "blueloveTH/pocketpy",
"url": "https://github.com/blueloveTH/pocketpy/issues/129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
212011716 | Allow translation in mods
Since Terraria will have full translation support starting from 1.3.5, it would be great if tModLoader supported a similar pattern and made switching languages work the same way as in vanilla.
Thanks!
I can see why jopo added the 'Far in Future' label. This would be quite a large undertaking; not to mention, this is actually already possible for mods, just with their own implementation.
Resolved with the recent updates to 1.3.5 (from 57418dd6a3e7abbc472dc0403e5ef415f80124fe to 00240dafc313bf708947a7719ba7d7d1d55a4d7a)
| gharchive/issue | 2017-03-06T02:52:00 | 2025-04-01T06:38:04.559154 | {
"authors": [
"Jofairden",
"Kimi-Arthur",
"bluemagic123"
],
"repo": "bluemagic123/tModLoader",
"url": "https://github.com/bluemagic123/tModLoader/issues/134",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1274413999 | [Needs confirmation] GPS origin should be wrapped
This forum post found that setting the GPS origin in the Water Linked DVL interface with a -220 longitude didn't work.
@Williangalvani suspects that it may require a value between -180 to 180. If confirmed, this package should wrap the value before displaying to the user and/or before sending to the autopilot to ensure correct functionality.
Longitude wrapping added in #11
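The wrapping the issue asks for is presumably the standard normalization of a longitude into the [-180, 180) range, so that an out-of-range input like -220 becomes the equivalent 140. A minimal C sketch of that normalization (an illustration of the idea, not the package's actual Python implementation):

```c
#include <assert.h>

/* Normalize a longitude into [-180, 180). Values outside the range are
 * shifted by whole revolutions of 360 degrees, so -220 maps to 140 and
 * 190 maps to -170. */
double wrap_longitude(double lon) {
    while (lon >= 180.0) lon -= 360.0;
    while (lon < -180.0) lon += 360.0;
    return lon;
}
```

Applying this before displaying the value to the user and before sending it to the autopilot would keep both consumers inside the range the Water Linked interface appears to expect.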
| gharchive/issue | 2022-06-17T02:26:28 | 2025-04-01T06:38:04.570652 | {
"authors": [
"ES-Alexander"
],
"repo": "bluerobotics/BlueOS-Water-Linked-DVL",
"url": "https://github.com/bluerobotics/BlueOS-Water-Linked-DVL/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |