id (string) | text (string) | source (string) | created (timestamp[s]) | added (timestamp) | metadata (dict)
---|---|---|---|---|---
2724798927 | 🛑 Manhuagui is down
In c4be4d9, Manhuagui (https://www.manhuagui.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Manhuagui is back up in f04cd4c after 9 minutes.
| gharchive/issue | 2024-12-07T20:18:46 | 2025-04-01T06:38:59.067090 | {
"authors": [
"http403"
],
"repo": "http403/uptime_monitor",
"url": "https://github.com/http403/uptime_monitor/issues/233",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
812835099 | Update sbt-release to 1.0.14
Updates com.github.gseitz:sbt-release from 1.0.13 to 1.0.14.
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "com.github.gseitz", artifactId = "sbt-release" } ]
labels: sbt-plugin-update, semver-patch
Superseded by #117.
| gharchive/pull-request | 2021-02-21T11:40:37 | 2025-04-01T06:38:59.070663 | {
"authors": [
"scala-steward"
],
"repo": "http4s/http4s-netty",
"url": "https://github.com/http4s/http4s-netty/pull/116",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
269648481 | Http4s Testing Documentation
In our documentation we are lacking examples and demonstrations of using http4s-testing to create unit tests for our services. We have some implementations in http4s.g8, but we should have more concrete, agreed-upon examples in the website documentation that people can work from.
@ChristopherDavenport Could you point me to http4s-testing when you have a chance? I can't seem to find it.
It's the testing subproject. It gets published as:
"org.http4s" %% "http4s-testing" % Http4sVersion
We have Http4sMatchers, which help with Specs2. If you search our tests for the methods therein, you'll find examples of their use. (Aside: it's not great that our testing module has a dependency on specs2. I think we should consider whether that's the right home.)
Other interesting things in there are the ArbitraryInstances, which go nicely with Scalacheck. We have also started adding some laws that go well with discipline. I suspect these are more useful inside http4s than outside, though I can imagine them being used in more advanced testing scenarios.
@rossabaker Awesome, I will play around with that! Thank you!
I've been looking around at this a little bit; one thing I'm unclear on is where these docs are supposed to live. In the docs/ folder?
Definitely seems good enough to close out this issue. If we want we can revisit arbitraries and law testing at some point in the future.
| gharchive/issue | 2017-10-30T15:42:50 | 2025-04-01T06:38:59.075043 | {
"authors": [
"ChristopherDavenport",
"calvinbrown085",
"rossabaker"
],
"repo": "http4s/http4s",
"url": "https://github.com/http4s/http4s/issues/1504",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
152692230 | Default bind address: 127.0.0.1 or 0.0.0.0?
The default has tripped a couple people up recently. I looked at a few other servers for prior art, and found support for both.
Without looking, what do you expect?
I'd say 0.0.0.0
Vetoed. See other ticket.
| gharchive/issue | 2016-05-03T03:21:39 | 2025-04-01T06:38:59.078574 | {
"authors": [
"LeifW",
"rossabaker"
],
"repo": "http4s/http4s",
"url": "https://github.com/http4s/http4s/issues/622",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
297161643 | Add @SystemFw to the POM
Welcome to the team!
Thank you very much :)
👏
| gharchive/pull-request | 2018-02-14T16:32:35 | 2025-04-01T06:38:59.080010 | {
"authors": [
"SystemFw",
"jcranky",
"rossabaker"
],
"repo": "http4s/http4s",
"url": "https://github.com/http4s/http4s/pull/1657",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
239938946 | Further upload improvements
This pull request adds two more upload improvements. The first commit changes HTTP::Request::Body to use IO#readpartial if available, and the second commit adds a buffer string for memory-efficient uploads; both are explained in detail in their commit descriptions.
I was hoping that, once this is merged (or rejected), we could release a new version of HTTP.rb and form_data.rb with the streaming upload feature.
Does this need an addition to the test suite to exercise both #read and #readpartial?
@mikegee The example that uses StringIO as an "Enumerable IO" already exercises #readpartial, and the "non-Enumerable" FakeIO example exercises #read.
Thanks for the detailed commit messages. I was about to ask why #readpartial was preferable.
@mikegee I made what I think is an improvement in the last commit. Instead of writing out the #read/#readpartial logic and using a buffer string, we can use IO.copy_stream which will do all that for us (see the Rubinius implementation), with the help of a BlockIO object which acts as a "destination IO" and just calls the block with the data that IO.copy_stream "writes".
Here we use a trick of creating a Proc object from an implicit block by calling Proc.new without any arguments. So, this:
def foo
block = Proc.new
end
is equivalent to
def foo(&block)
end
This way we can still use yield in other branches of the conditional.
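(As a rough Python analogue of the BlockIO + IO.copy_stream pattern above, for illustration only; the names here are made up:)

import io
import shutil

class BlockIO:
    """A 'destination IO' sketch: copyfileobj 'writes' into this object,
    which just forwards each chunk to a callback."""
    def __init__(self, callback):
        self.callback = callback

    def write(self, chunk):
        self.callback(chunk)
        return len(chunk)

def each_chunk(source_io, callback, chunk_size=16 * 1024):
    # shutil.copyfileobj plays the role of IO.copy_stream here: it runs the
    # read loop and reuses a single buffer internally.
    shutil.copyfileobj(source_io, BlockIO(callback), chunk_size)

each_chunk(io.BytesIO(b"hello world"), print)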
Wow! TIL about def foo; Proc.new; end
@ixti Updated. I brought back returning Enumerator in HTTP::Request::Body#each because it makes tests simpler. Also, because of https://github.com/jruby/jruby/issues/4701 I changed the tests for IO objects to just verify that yielded chunks sum up to the entire content.
@britishtea @ixti Updated.
@ixti @britishtea Do you think this one is ready for merging?
Sorry I was on a vacation and completely off the grid. Will handle this ASAP.
@ixti Quick bump 😃
Sorry took me so long. Merged. Thank you!
@ixti No problem at all!
Sorry that I've been nagging about it; I thought about writing a section in the Shrine documentation about how to upload files with various HTTP libraries, and I wanted to wait until HTTP.rb had streaming IO uploads so that I don't have to change the documentation later.
Now that we merged these final memory optimizations, and made deflating work with IO/Enumerable uploads (https://github.com/httprb/http/pull/412), do you think we can release a new HTTP.rb version with the new streaming IO uploads feature?
@janko-m I want to work on/merge all outstanding PRs that can be merged relatively fast, and then we'll be good to release 3.0.0.pre1. I just want to merge the maximum of breaking changes, since we are doing that anyway ;))
@ixti Sounds great, thank you!
| gharchive/pull-request | 2017-07-01T09:32:53 | 2025-04-01T06:38:59.089301 | {
"authors": [
"ixti",
"janko-m",
"mikegee"
],
"repo": "httprb/http",
"url": "https://github.com/httprb/http/pull/418",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Could you provide the exact installation list used for the experiments on a 4090?
I only changed the torch and CUDA versions myself and found that many errors occur.
Hi, did you manage to solve it?
| gharchive/issue | 2024-07-15T09:01:24 | 2025-04-01T06:38:59.093737 | {
"authors": [
"jackwky",
"johnren-code"
],
"repo": "huang-yh/SelfOcc",
"url": "https://github.com/huang-yh/SelfOcc/issues/20",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2726569099 | How to achieve renderings from other perspectives?
Thanks for your efforts!
The effect is very good, but the images are all from a horizontal perspective. What do renderings from other perspectives look like?
For example, looking down on objects.
We look forward to your reply!
hey,
a part of the answer lies in the way the camera is initialized :
https://github.com/huanngzh/MV-Adapter/blob/612211253f79d383656289ab85e8df22b9f46ea4/scripts/inference_t2mv_sdxl.py#L83-L92
specifically the values of
elevation_deg=[0, 0, 0, 0, 0, 0], # <= rotation about the X axis : the "up/down" angle
distance=[1.8] * num_views, #<= distance from center
azimuth_deg=[x - 90 for x in [0, 45, 90, 180, 270, 315]], # <= rotation about the Y axis : the "turn table" angle
these values are partly hardcoded (elevation & azimuth_deg) and partly dynamic (distance is based on num_views), so the first idea is to make everything dynamic like so:
# initialise rotations values
azimuts = []
distances = []
elevations = []
elevation = 0
step = int(360 / num_views)  # 'turn table' angle
for i in range(0, 360, step):
    azimuts.append(i - 90)
    elevations.append(elevation)
    distances.append(1.8)
print("azimut values", len(azimuts), azimuts, 'step', step)
# with a num_views of 9: => azimut values 9 [-90, -50, -10, 30, 70, 110, 150, 190, 230] step 40

# Prepare cameras
cameras = get_orthogonal_camera(
    azimuth_deg=azimuts,
    elevation_deg=elevations,
    distance=distances,
    left=-0.55,
    right=0.55,
    bottom=-0.55,
    top=0.55,
    device=device,
)
NB: the get_orthogonal_camera method doesn't have a num_views argument so you must provide the azimuth_deg (or add num_views to get_orthogonal_camera's args).
this should create 9 views around the model BUT it doesn't work and I don't understand why.
it does create 9 images but they always use the same 6 (default) angles...
this is what the above call with 9 views produces
the images 2 & 3, 4 & 5, 6 & 7 are computed individually (notice some slight differences) but have the same camera angles that correspond to the 'default' angles.
any hint as to why & how to fix would be appreciated :)
Hi, your understanding of the hyperparameters is correct. However, sorry, we haven't released our arbitrary-view generation yet, as noted in the README.
Hi, thanks for your great work. I'd like to know when you will release your inference code for generating an arbitrary number of views.
| gharchive/issue | 2024-12-09T10:05:37 | 2025-04-01T06:38:59.114070 | {
"authors": [
"benzhangdragonplus",
"huanngzh",
"nicoptere",
"yejr0229"
],
"repo": "huanngzh/MV-Adapter",
"url": "https://github.com/huanngzh/MV-Adapter/issues/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
799707680 | Cross Origin Problem
I started your app on my server like this: wssh --address='127.0.0.1' --origin="*" --port=8888 --xsrf=False --debug
And I access the server from my computer via my custom address.
The problem is that it does not connect :(
E.g. it works without problems from localhost to localhost.
But I need access from * to my server.
Do you see what's wrong?
What error does your webssh server report?
Hello, this is the error that appears on the server
It seems that the problem was in the nginx configuration! Strange thing.
I will close it, thanks for help
| gharchive/issue | 2021-02-02T21:31:18 | 2025-04-01T06:38:59.119092 | {
"authors": [
"Whisper40",
"huashengdun"
],
"repo": "huashengdun/webssh",
"url": "https://github.com/huashengdun/webssh/issues/206",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1014569126 | Some keys appear wrong on German keyboard
Hi, this one is weird. Generally the key assignments are correct, but for example # appears as / in a WebSSH terminal.
OK
| gharchive/issue | 2021-10-03T22:28:04 | 2025-04-01T06:38:59.120175 | {
"authors": [
"stefan-reich"
],
"repo": "huashengdun/webssh",
"url": "https://github.com/huashengdun/webssh/issues/244",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
[Enhancement] Router supports setting a fallback route as a downgrade routing strategy
What would you like to be enhanced?
Add a fallback rule to the router to support a downgrade routing strategy.
Why is this needed?
To support a routing degradation strategy across multiple permanent tag paths.
Tag-based routing currently has the following problem: when instances with multiple tags exist, a configuration like the following may be present:
precedence: 0
route:
weight: 100
tags:
cas_lane_tag: base
weight: 100
precedence: 1
match:
headers:
x-cse-canary:
exact: red
route:
weight: 100
tags:
cas_lane_tag: red
precedence: 2
match:
headers:
x-cse-canary:
exact: gray
route:
weight: 100
tags:
cas_lane_tag: gray
This creates a scenario where, under full-link gray release, if a request matches the cas_lane_tag: gray route but no instances are deployed for it, an instance will be selected by the load-balancing strategy instead. That is, the request may be routed to an instance in the red group or the base group. In most scenarios, however, the expectation is to route to the base group, so a fallback mechanism needs to be added. The configuration can be extended to:
precedence: 0
route:
weight: 100
tags:
cas_lane_tag: base
weight: 100
precedence: 1
match:
headers:
x-cse-canary:
exact: red
route:
weight: 100
tags:
cas_lane_tag: red
fallback:
tags:
cas_lane_tag: base
weight: 100
precedence: 2
match:
headers:
x-cse-canary:
exact: gray
route:
weight: 100
tags:
cas_lane_tag: gray
fallback:
tags:
cas_lane_tag: base
weight: 100
Then, when no red-group or gray-group instances exist, the request falls back to the base group, fixing the earlier shortcoming.
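(A minimal Python sketch of the proposed fallback semantics; tags_match and the instance objects are hypothetical helpers, not Sermant's actual API:)

def select_instances(instances, route, fallback):
    # Instances whose tags match the matched route's tag group.
    matched = [i for i in instances if i.tags_match(route["tags"])]
    if matched:
        return matched
    if fallback:
        # No instance carries the routed tag: degrade to the fallback group
        # instead of falling through to plain load balancing.
        return [i for i in instances if i.tags_match(fallback["tags"])]
    # Last resort: let the load balancer pick among all instances.
    return instances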
| gharchive/issue | 2023-12-15T01:24:32 | 2025-04-01T06:38:59.134191 | {
"authors": [
"chengyouling",
"provenceee"
],
"repo": "huaweicloud/Sermant",
"url": "https://github.com/huaweicloud/Sermant/issues/1386",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
How to pass a context
Why do none of the SDK's OBS operation APIs accept a context parameter?
What functionality do you need this parameter to provide?
| gharchive/issue | 2024-06-11T04:12:55 | 2025-04-01T06:38:59.140115 | {
"authors": [
"burybell",
"liqiuqiu111"
],
"repo": "huaweicloud/huaweicloud-sdk-go-obs",
"url": "https://github.com/huaweicloud/huaweicloud-sdk-go-obs/issues/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2554358539 | Fix a few typos in why.md
As a native US English speaker, these minor mistakes caught my eye while reading about this very cool project.
So, I took a minute to clean them up.
Thank you for this PR @NateEag !
Merged
| gharchive/pull-request | 2024-09-28T15:57:18 | 2025-04-01T06:38:59.153962 | {
"authors": [
"NateEag",
"bpetit"
],
"repo": "hubblo-org/scaphandre",
"url": "https://github.com/hubblo-org/scaphandre/pull/393",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
378479945 | Based on this awesome API
Hi @huchenme! Maybe you want to know that I'm working on this project: https://github.com/zircleUI/github-trending-plus that heavily uses your API 😄 🎉
Thank you again for your work!!
Hi @tinchox5, this is nice to know! Your site is fun to play with, nice animations 😄
@tinchox5 Just updated README and added your project to this section https://github.com/huchenme/github-trending-api#projects-using-github-trending-api
🙌
Thank you!!
| gharchive/issue | 2018-11-07T21:38:14 | 2025-04-01T06:38:59.170760 | {
"authors": [
"huchenme",
"tinchox5"
],
"repo": "huchenme/github-trending-api",
"url": "https://github.com/huchenme/github-trending-api/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2661125726 | 🛑 VGMdb API is down
In 7f8afed, VGMdb API (https://vgmdb.info) was down:
HTTP code: 0
Response time: 0 ms
Resolved: VGMdb API is back up in 8e8cd42 after 5 hours, 18 minutes.
| gharchive/issue | 2024-11-15T07:43:25 | 2025-04-01T06:38:59.173464 | {
"authors": [
"hufman"
],
"repo": "hufman/upptime",
"url": "https://github.com/hufman/upptime/issues/1515",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
How to configure ConfigAuthenticator mode in HugeGraph 0.8.0
My configuration:
ConfigAuthenticator mode
ConfigAuthenticator mode supports user authentication through user information preset in the configuration files; the implementation validates users against statically configured tokens. The concrete configuration steps are as follows (restart the service for them to take effect):
Configure the authenticator and the path to its rest-server file in gremlin-server.yaml:
authentication: {
  authenticator: com.baidu.hugegraph.auth.ConfigAuthenticator,
  authenticationHandler: com.baidu.hugegraph.auth.WsAndHttpBasicAuthHandler,
  config: {tokens: /etc/hugegraph/rest-server.properties}
}
Configure the authenticator and its tokens in rest-server.properties:
auth.authenticator=com.baidu.hugegraph.auth.ConfigAuthenticator
auth.admin_token=token-value-a
auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]
Configure gremlin.graph in hugegraph{n}.properties:
gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
My error:
main] [ERROR] org.apache.tinkerpop.gremlin.server.GremlinServer [] - Gremlin Server Error
java.lang.IllegalStateException: Could not create/configure Authenticator null
2021-01-27 11:30:55 6209 [main] [ERROR] com.baidu.hugegraph.dist.HugeGremlinServer [] - Gremlin Server was unable to start and will shutdown now: Could not create/configure Authenticator null
Resolved.
Resolved.
| gharchive/issue | 2021-01-27T04:59:38 | 2025-04-01T06:38:59.175965 | {
"authors": [
"MrZhangn"
],
"repo": "hugegraph/hugegraph",
"url": "https://github.com/hugegraph/hugegraph/issues/1348",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
A major bug, please fix
If the user has no download count, the auto-update crashes outright with a null-pointer exception.
I ran into this problem too.
| gharchive/issue | 2017-03-22T09:08:32 | 2025-04-01T06:38:59.176857 | {
"authors": [
"aesion",
"qincunrong"
],
"repo": "hugeterry/UpdateDemo",
"url": "https://github.com/hugeterry/UpdateDemo/issues/22",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2008534893 | ValueError: Query/Key/Value should either all have the same dtype,
--enable_xformers_memory_efficient_attention fails with the following error when run with accelerate
File "/anaconda/envs/diffusers-ikin/lib/python3.8/site-packages/xformers/ops/fmha/__init__.py", line 348, in _memory_efficient_attention_forward_requires_grad
inp.validate_inputs()
File "/anaconda/envs/diffusers-ikin/lib/python3.8/site-packages/xformers/ops/fmha/common.py", line 121, in validate_inputs
raise ValueError(
ValueError: Query/Key/Value should either all have the same dtype, or (in the quantized case) Key/Value should have dtype torch.int32
query.dtype: torch.float32
key.dtype : torch.float16
value.dtype: torch.float16
Steps: 0%| | 0/1000 [00:02<?, ?it/s]
[2023-11-25 14:03:17,898] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 5264) of binary: /anaconda/envs/diffusers-ikin/bin/python
Traceback (most recent call last):
File "/anaconda/envs/diffusers-ikin/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/anaconda/envs/diffusers-ikin/lib/python3.8/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/anaconda/envs/diffusers-ikin/lib/python3.8/site-packages/accelerate/commands/launch.py", line 985, in launch_command
multi_gpu_launcher(args)
File "/anaconda/envs/diffusers-ikin/lib/python3.8/site-packages/accelerate/commands/launch.py", line 654, in multi_gpu_launcher
distrib_run.run(args)
File "/anaconda/envs/diffusers-ikin/lib/python3.8/site-packages/torch/distributed/run.py", line 797, in run
elastic_launch(
File "/anaconda/envs/diffusers-ikin/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/anaconda/envs/diffusers-ikin/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
diffusers/examples/dreambooth/train_dreambooth_lora_sdxl.py FAILED
I am running accelerate as follows:
accelerate launch diffusers/examples/dreambooth/train_dreambooth_lora_sdxl.py \
--pretrained_model_name_or_path="stabilityai/stable-diffusion-xl-base-1.0" \
--instance_data_dir={input_dir} \
--output_dir={output_dir} \
--instance_prompt=instance_prompt \
--mixed_precision="fp16" \
--resolution=1024 \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--learning_rate=1e-4 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--checkpointing_steps=500 \
--max_train_steps=1000 \
--seed="0" \
--checkpoints_total_limit=5 \
--enable_xformers_memory_efficient_attention
Accelerate config
{
"compute_environment": "LOCAL_MACHINE",
"debug": false,
"distributed_type": "MULTI_GPU",
"downcast_bf16": false,
"machine_rank": 0,
"main_training_function": "main",
"mixed_precision": "no",
"num_machines": 1,
"num_processes": 2,
"rdzv_backend": "static",
"same_network": false,
"tpu_use_cluster": false,
"tpu_use_sudo": false,
"use_cpu": false
}
Versions:
xformers==0.0.23.dev687
accelerate==0.24.1
torch==2.1.0
torchvision==0.16.1
Thanks for reporting @khayamgondal, is it working without accelerate? It looks like the query doesn't have the same dtype as the key and the value. I would suggest trying to find out why that is the case.
If I run accelerate without the --enable_xformers_memory_efficient_attention flag, training works fine. Looks like somehow xformers upscales the query vector to float32.
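(For illustration, a minimal sketch of the dtype mismatch and the alignment workaround; the shapes are arbitrary and a CUDA device is assumed:)

import torch
import xformers.ops as xops

q = torch.randn(1, 16, 8, 64, device="cuda", dtype=torch.float32)
k = torch.randn(1, 16, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 16, 8, 64, device="cuda", dtype=torch.float16)

# Mixed fp32/fp16 inputs raise the "Query/Key/Value should either all have
# the same dtype" ValueError; casting the query to the key dtype avoids it.
q = q.to(k.dtype)
out = xops.memory_efficient_attention(q, k, v)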
+1
@sr5434 can you try without accelerate? It doesn't seem to be an issue with accelerate, since we don't modify the dtype. It is probably something in train_dreambooth_lora_sdxl.py that modifies the dtype. LMK how it goes!
If I run accelerate without --enable_xformers_memory_efficient_attention flag, training works fine. Looks like somehow xformers upscales query vector to float32.
this made all the difference. thank you!
@klopez89 that seems more like a workaround though
@SunMarc This is the source code: https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py
Also when I just use python the error stays the same
Thanks for clarifying @sr5434 ! This issue is not related to accelerate then. You should probably open an issue on diffusers repository. My guess is that xformers attention is not working as expected in the script.
I think the problem is caused by xformers and PyTorch being compiled with different CUDA versions: xformers is compiled with CUDA 11.8 or higher, but the CUDA version your PyTorch was compiled with is 11.7 or lower. The solution is to download the xformers source code and compile it in your environment.
@SunMarc I was able to fix the problem by disabling xformers
+1
I get the same error when running the train_text_to_image_lora_sdxl.py script. I've made sure to align the CUDA version as suggested above, but no difference.
accelerate==0.26.1
torch==2.1.2+cu118
torchvision==0.16.2+cu118
xformers==0.0.23.post1+cu118
Can the same Dreambooth fixes be applied here as well?
Hi @JakobLS , can you try to build xformers from source ?
I'm running this in a cloud VM and it freezes when I do that. Tried it twice, in my original environment and in a new one I created. If I don't align the CUDA versions beforehand, as I showed above, I get the following error:
The detected CUDA version (11.8) mismatches the version that was used to compile
PyTorch (12.1). Please make sure to use the same CUDA versions.
Hi @JakobLS, you need to either reinstall PyTorch built with cu121, or change your driver's CUDA version to 11.8.
Hi @SunMarc, I might not be explaining myself properly. That's what I wanted to show with my first comment.
The VM comes installed with CUDA 11.8.
If I install a Pytorch version with CUDA 11.8 and run it with no mixed precision it works fine. If I run it with mixed_precision="fp16" I get the following error:
ValueError: Query/Key/Value should either all have the same dtype, or (in the quantized case) Key/Value should have dtype torch.int32
query.dtype: torch.float32
key.dtype : torch.float16
value.dtype: torch.float16
Similarly if I install a Pytorch version with CUDA 12.1 and run it with no mixed precision it also works fine. But if I run it with mixed_precision="fp16" I get the exact same error.
It doesn't like when I run it with mixed precision so it looks like the issue might be there rather than in the CUDA version. I'd like to use mixed precision though due to the speed advantage.
Let me know if you want me to open a new issue for this btw.
@SunMarc, when I install from source, the installation automatically terminates and the terminal window shuts down. No xformers is being installed. Using the following script:
pip install ninja
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
This is the installed Pytorch:
accelerate==0.26.1
torch==2.1.2+cu118
torchvision==0.16.2+cu118
and CUDA (checking with nvcc --version):
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
It is indeed strange that you are not able to install from source. Let's try this then: pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu118. You should get the 11.8 version this way.
That successfully installs xformers:
accelerate==0.26.1
torch==2.1.2+cu118
torchvision==0.16.2+cu118
xformers==0.0.23.post1+cu118
But when I now execute the script train_text_to_image_lora_sdxl.py as shown above, I get the same error as OP:
ValueError: Query/Key/Value should either all have the same dtype, or (in the quantized case) Key/Value should have dtype torch.int32
query.dtype: torch.float32
key.dtype : torch.float16
value.dtype: torch.float16
If I run the script without --enable_xformers_memory_efficient_attention I instead get the following error:
ValueError: Attempting to unscale FP16 gradients.
ValueError: Attempting to unscale FP16 gradients. error means that the model is on fp16. See this thread for more information.
As for the other issue, it seems that compiling both pytorch and xformers with the same cuda version doesn't work. I will try to debug that.
Thanks for looking into it. I'll keep --enable_xformers_memory_efficient_attention enabled to reap its benefits though. 🙂
If it helps you, it's working if I remove mixed precision like this:
accelerate launch $SCRIPT_PATH \
--enable_xformers_memory_efficient_attention \
# Same as before
Still, I'd like to use mixed_precision="fp16" for the speed boost.
Still the same issue; thanks for the workaround @JakobLS.
Has anyone found a fix for this? I'm still facing the same issue.
It's now some time ago and I'm not working on it anymore. But I think I "solved" it by using mixed_precision="fp16" instead of xformers, since the latter seems to lead to a clearer degradation in image quality, which fp16 doesn't.
You could also try to update to later cuda, torch and xformers versions if you really want to use xformers (I think I tried that and got it to work).
| gharchive/issue | 2023-11-23T16:26:01 | 2025-04-01T06:38:59.195596 | {
"authors": [
"GaParmar",
"JakobLS",
"SunMarc",
"billvsme",
"faceslog",
"khayamgondal",
"klopez89",
"sr5434",
"xiaohaipeng"
],
"repo": "huggingface/accelerate",
"url": "https://github.com/huggingface/accelerate/issues/2182",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1169581097 | Training gets stuck at loss.backward() using TPU
❓ Questions and Help
Hi, I am trying to fine-tune a pre-trained language model (~400M) using ParlAI and XLA via Hugging Face's accelerate, but the training gets stuck at the loss.backward() step. One thing about ParlAI is that it separates data loading and the main model training into different scripts, so I have to call self.model, self.optimizer, batch = accelerator.prepare(self.model, self.optimizer, batch) multiple times, otherwise the system complains that the tensors are not XLA tensors. I would really appreciate it if anyone could take a look and advise on how to fix this issue.
Below is the output of the metric report along with other print statements I used for debugging.
11:13:27 INFO | training...
the computed loss is tensor(1.9162, device='xla:1', grad_fn=<DivBackward0>)
performing accelerator.backward(loss): None
this is torch_xla._XLAC._get_xla_tensors_text([loss] IR {
%0 = s64[16]{0} xla::device_data(), device=TPU:0
%1 = s64[] aten::sum(%0), dimensions=(0), keep_reduced_dimensions=0, dtype=4
%2 = f32[] xla::cast(%1), type=f32, dtype=Float
%3 = f32[16]{0} xla::device_data(), device=TPU:0
%4 = f32[] aten::sum(%3), dimensions=(0), keep_reduced_dimensions=0, dtype=6
%5 = f32[] aten::div(%4, %2), ROOT=0
}
this is a metric report Metric: CompileTime
TotalSamples: 15
Accumulator: 19s334ms940.755us
ValueRate: 01s021ms185.841us / second
Rate: 0.792274 / second
Percentiles: 1%=007ms553.223us; 5%=007ms553.223us; 10%=007ms425.795us; 20%=009ms072.881us; 50%=021ms906.075us; 80%=998ms656.162us; 90%=09s994ms042.677us; 95%=09s047ms368.065us; 99%=09s047ms368.065us
Metric: DeviceLockWait
TotalSamples: 15
Accumulator: 053.267us
ValueRate: 002.686us / second
Rate: 0.756331 / second
Percentiles: 1%=001.586us; 5%=001.586us; 10%=001.905us; 20%=002.947us; 50%=003.184us; 80%=005.166us; 90%=005.648us; 95%=005.716us; 99%=005.716us
Metric: ExecuteTime
TotalSamples: 15
Accumulator: 211ms826.316us
ValueRate: 011ms134.990us / second
Rate: 0.792239 / second
Percentiles: 1%=002ms386.367us; 5%=002ms386.367us; 10%=002ms455.348us; 20%=002ms478.949us; 50%=003ms287.108us; 80%=014ms827.398us; 90%=071ms815.250us; 95%=073ms143.528us; 99%=073ms143.528us
Metric: InboundData
TotalSamples: 18
Accumulator: 2.41KB
ValueRate: 130.39B / second
Rate: 0.951754 / second
Percentiles: 1%=1.00B; 5%=1.00B; 10%=1.00B; 20%=1.00B; 50%=8.00B; 80%=128.00B; 90%=128.00B; 95%=1.73KB; 99%=1.73KB
Metric: InputOutputAliasCount
TotalSamples: 11
Accumulator: 12.00
ValueRate: 0.61 / second
Rate: 0.55474 / second
Percentiles: 1%=1.00; 5%=1.00; 10%=1.00; 20%=1.00; 50%=1.00; 80%=1.00; 90%=1.00; 95%=2.00; 99%=2.00
Metric: IrValueTensorToXlaData
TotalSamples: 357
Accumulator: 15s110ms619.120us
ValueRate: 424ms335.772us / second
Rate: 10.0259 / second
Percentiles: 1%=001ms069.136us; 5%=001ms279.980us; 10%=001ms413.787us; 20%=002ms551.990us; 50%=002ms841.347us; 80%=066ms132.470us; 90%=067ms457.525us; 95%=261ms053.985us; 99%=270ms630.855us
Metric: OutboundData
TotalSamples: 368
Accumulator: 1.36GB
ValueRate: 38.75MB / second
Rate: 10.2479 / second
Percentiles: 1%=4.00B; 5%=5.00KB; 10%=5.00KB; 20%=5.00KB; 50%=5.00KB; 80%=6.25MB; 90%=6.25MB; 95%=25.00MB; 99%=25.00MB
Metric: ReleaseDataHandlesTime
TotalSamples: 14
Accumulator: 153ms995.227us
ValueRate: 008ms089.200us / second
Rate: 0.740211 / second
Percentiles: 1%=738.550us; 5%=738.550us; 10%=962.790us; 20%=001ms031.819us; 50%=002ms699.409us; 80%=016ms770.251us; 90%=019ms555.057us; 95%=080ms454.166us; 99%=080ms454.166us
Metric: TensorsGraphSize
TotalSamples: 15
Accumulator: 13737.00
ValueRate: 725.57 / second
Rate: 0.792275 / second
Percentiles: 1%=3.00; 5%=3.00; 10%=3.00; 20%=5.00; 50%=7.00; 80%=15.00; 90%=6823.00; 95%=6823.00; 99%=6823.00
Metric: TransferFromServerTime
TotalSamples: 18
Accumulator: 066ms155.154us
ValueRate: 003ms497.967us / second
Rate: 0.951754 / second
Percentiles: 1%=001ms306.763us; 5%=001ms306.763us; 10%=001ms398.792us; 20%=002ms544.387us; 50%=002ms869.355us; 80%=008ms518.333us; 90%=012ms099.418us; 95%=013ms433.408us; 99%=013ms433.408us
Metric: TransferToServerTime
TotalSamples: 368
Accumulator: 15s131ms507.769us
ValueRate: 421ms440.948us / second
Rate: 10.2502 / second
Percentiles: 1%=001ms063.388us; 5%=001ms270.664us; 10%=001ms390.961us; 20%=002ms546.362us; 50%=002ms829.677us; 80%=066ms102.141us; 90%=067ms432.965us; 95%=261ms927.295us; 99%=270ms621.466us
Metric: TransferToServerTransformTime
TotalSamples: 368
Accumulator: 389ms949.399us
ValueRate: 011ms831.283us / second
Rate: 10.2479 / second
Percentiles: 1%=043.033us; 5%=050.050us; 10%=059.885us; 20%=087.111us; 50%=130.146us; 80%=002ms578.564us; 90%=002ms702.544us; 95%=006ms921.781us; 99%=008ms652.382us
Counter: CreateCompileHandles
Value: 15
Counter: CreateDataHandles
Value: 384
Counter: CreateXlaTensor
Value: 5308
Counter: DestroyDataHandles
Value: 15
Counter: DestroyXlaTensor
Value: 4603
Counter: DeviceDataCacheMiss
Value: 10
Counter: ReleaseDataHandles
Value: 15
Counter: UncachedCompile
Value: 15
Counter: XRTAllocateFromTensor_Empty
Value: 13
Counter: XrtCompile_Empty
Value: 384
Counter: XrtExecuteChained_Empty
Value: 384
Counter: XrtExecute_Empty
Value: 384
Counter: XrtMemoryInfo_Empty
Value: 384
Counter: XrtRead_Empty
Value: 384
Counter: XrtReleaseAllocationHandle_Empty
Value: 384
Counter: XrtReleaseCompileHandle_Empty
Value: 384
Counter: XrtSessionCount
Value: 4
Counter: XrtSubTuple_Empty
Value: 384
Counter: aten::_local_scalar_dense
Value: 9
Counter: aten::cumsum
Value: 1
Counter: aten::masked_select
Value: 1
Counter: xla::_copy_from
Value: 422
Counter: xla::_log_softmax
Value: 1
Counter: xla::_log_softmax_backward_data
Value: 1
Counter: xla::_softmax
Value: 26
Counter: xla::_softmax_backward_data
Value: 26
Counter: xla::_unsafe_view
Value: 133
Counter: xla::abs
Value: 2
Counter: xla::add
Value: 135
Counter: xla::add_
Value: 132
Counter: xla::addcmul
Value: 42
Counter: xla::any
Value: 1
Counter: xla::arange_out
Value: 1
Counter: xla::as_strided
Value: 382
Counter: xla::bernoulli_
Value: 42
Counter: xla::bitwise_and_out
Value: 1
Counter: xla::bmm
Value: 156
Counter: xla::cat
Value: 1
Counter: xla::ceil
Value: 1
Counter: xla::clamp_
Value: 1
Counter: xla::clone
Value: 53
Counter: xla::cumsum
Value: 1
Counter: xla::div
Value: 28
Counter: xla::div_
Value: 69
Counter: xla::embedding
Value: 4
Counter: xla::embedding_dense_backward
Value: 2
Counter: xla::empty
Value: 446
Counter: xla::empty_strided
Value: 382
Counter: xla::eq
Value: 30
Counter: xla::expand
Value: 44
Counter: xla::fill_
Value: 14
Counter: xla::gelu
Value: 14
Counter: xla::gelu_backward
Value: 14
Counter: xla::gt
Value: 2
Counter: xla::index_select
Value: 4
Counter: xla::lt
Value: 1
Counter: xla::masked_fill_
Value: 52
Counter: xla::masked_select
Value: 1
Counter: xla::max
Value: 4
Counter: xla::min
Value: 1
Counter: xla::mm
Value: 399
Counter: xla::mul
Value: 262
Counter: xla::mul_
Value: 4
Counter: xla::native_batch_norm
Value: 42
Counter: xla::native_batch_norm_backward
Value: 42
Counter: xla::ne
Value: 5
Counter: xla::nll_loss_backward
Value: 1
Counter: xla::nll_loss_forward
Value: 1
Counter: xla::repeat
Value: 26
Counter: xla::select
Value: 4
Counter: xla::slice
Value: 8
Counter: xla::sub
Value: 1
Counter: xla::sum
Value: 221
Counter: xla::t
Value: 532
Counter: xla::transpose
Value: 416
Counter: xla::tril
Value: 12
Counter: xla::unbind
Value: 2
Counter: xla::unsqueeze
Value: 17
Counter: xla::view
Value: 1639
Counter: xla::zero_
Value: 1
Metric: XrtAllocateFromTensor
TotalSamples: 368
Accumulator: 674ms548.126us
Mean: 002ms830.294us
StdDev: 003ms035.214us
Rate: 10.2503 / second
Percentiles: 25%=429.054us; 50%=554.600us; 80%=002ms440.790us; 90%=003ms026.023us; 95%=008ms286.670us; 99%=019ms699.977us
Metric: XrtCompile
TotalSamples: 15
Accumulator: 19s091ms002.111us
Mean: 01s273ms733.474us
StdDev: 03s034ms709.782us
Rate: 0.792277 / second
Percentiles: 25%=007ms006.559us; 50%=019ms110.326us; 80%=932ms812.838us; 90%=09s961ms819.611us; 95%=09s010ms994.333us; 99%=09s010ms994.333us
Metric: XrtExecute
TotalSamples: 15
Accumulator: 160ms538.076us
Mean: 011ms635.872us
StdDev: 023ms381.113us
Rate: 0.792158 / second
Percentiles: 25%=001ms106.660us; 50%=002ms532.117us; 80%=002ms086.252us; 90%=069ms194.319us; 95%=071ms271.787us; 99%=071ms271.787us
Metric: XrtExecutorEvict
TotalSamples: 0
Accumulator: nanB
Mean: nanB
StdDev: nanB
Percentiles:
Metric: XrtReadLiteral
TotalSamples: 19
Accumulator: 010ms266.051us
Mean: 540.318us
StdDev: 147.028us
Rate: 1.00464 / second
Percentiles: 25%=375.627us; 50%=565.235us; 80%=679.678us; 90%=714.152us; 95%=830.439us; 99%=830.439us
Metric: XrtReleaseAllocation
TotalSamples: 14
Accumulator: 283.906us
Mean: 020.279us
StdDev: 005.056us
Rate: 0.740218 / second
Percentiles: 25%=018.544us; 50%=021.219us; 80%=023.325us; 90%=024.486us; 95%=032.160us; 99%=032.160us
To Reproduce
The main training scripts are as below.
scripts.torch_generator_agent.txt
train_model.txt
torch_agent.txt
Environment
reproducible on XLA backend [CPU/TPU]: tested on a colab TPU
torch_xla version: cloud-tpu-client==0.10 and torch_xla-1.9-cp37-cp37m-linux_x86_64
torch==1.9.0+cu111
I used pip install accelerate
Thank you so much in advance!
Please use the forums to debug your code, as we keep the issues for bugs and feature requests only :-)
Not passing the whole dataloader will result in each process seeing the same data, which completely kills the purpose of distributed training. So you should find a way to import the dataloader from your other script.
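(A minimal sketch of that advice, assuming model, optimizer, and dataloader are built elsewhere and the model is a transformers-style model that returns a loss:)

from accelerate import Accelerator

accelerator = Accelerator()
# Prepare everything once, up front (including the dataloader), so each
# process/TPU core sees a distinct shard of the data instead of a copy.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for batch in dataloader:        # batches arrive already on the XLA device
    loss = model(**batch).loss
    accelerator.backward(loss)  # replaces a plain loss.backward()
    optimizer.step()
    optimizer.zero_grad()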
Cool, thanks for letting me know! And sorry about posting it here. I will try to find a way to import the dataloader if possible.
| gharchive/issue | 2022-03-15T12:02:23 | 2025-04-01T06:38:59.204222 | {
"authors": [
"evelynkyl",
"sgugger"
],
"repo": "huggingface/accelerate",
"url": "https://github.com/huggingface/accelerate/issues/283",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
[On Cuda] Tensor::randn fails to generate random floats for an odd number of samples
I've run the sample code from the README (https://github.com/huggingface/candle/blob/main/README.md#get-started) with different shapes:
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let device = Device::new_cuda(0)?;
    let a = Tensor::randn(0f32, 1., (4, 5), &device)?; // This will generate random values
    let b = Tensor::randn(0f32, 1., (5, 3), &device)?; // This will error out
    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
It will error out during runtime with the message: Cuda(Curand(CurandError(CURAND_STATUS_LENGTH_NOT_MULTIPLE))).
It works if I switch to the CPU. After digging deeper into the cudarc library, I found that a pseudorandom generator is used for generating floats, and cuRAND supports only an even number of output samples for its pseudorandom generator.
How can I generate an odd number of samples?
One workaround is to index the tensor b and reshape:
#[cfg(test)]
mod tests {
    use candle_core::{Device, IndexOp, Tensor};

    #[test]
    fn it_works() -> Result<(), Box<dyn std::error::Error>> {
        let device = Device::new_cuda(0)?;
        let a = Tensor::randn(0f32, 1., (4, 5), &device)?; // This will generate random values
        let b = Tensor::randn(0f32, 1., (5, 4), &device)?; // This is OK
        let b_ = b.i((.., 0..3))?.reshape((5, 3))?; // Reshape is needed
        println!("b_ shape: {:?}", b_.shape());
        let c = a.matmul(&b_)?;
        println!("{a}");
        println!("{b}");
        println!("{c}");
        Ok(())
    }
}
Result:
b_ shape: [5, 3]
[[-1.8936, 0.9397, 1.0709, -0.0135, -0.2310],
[-0.4147, -0.1733, 0.3322, 0.1012, 0.4609],
[-0.5161, 0.1109, -0.1549, 2.1302, 0.7367],
[-0.3432, 1.0344, 1.1740, 0.8580, 0.8757]]
Tensor[[4, 5], f32, cuda:0]
[[-0.4003, 0.7206, -0.9397, -0.0915],
[ 2.3167, 1.5755, -0.7861, 0.1864],
[ 0.2528, -0.8081, -0.5926, -1.3103],
[-1.9073, 0.8630, 0.1234, -1.7610],
[ 0.4231, -0.7811, -0.2071, -0.2975]]
Tensor[[5, 4], f32, cuda:0]
[[ 3.1338, -0.5808, 0.4523],
[-0.1496, -1.1131, 0.2461],
[-3.3269, 1.1909, 0.5999],
[ 1.5647, 0.4901, -1.2617]]
Tensor[[4, 3], f32, cuda:0]
Since the underlying cuRAND only supports even numbers, maybe candle can hide the workaround in the randn API.
Right, I've actually just merged #793 that should hopefully fix this (at the expense of generating an additional value when the number of elements is odd).
Closing as hopefully fixed now, but please re-open if you run into any more issues.
| gharchive/issue | 2023-09-04T10:57:21 | 2025-04-01T06:38:59.209540 | {
"authors": [
"GeauxEric",
"LaurentMazare",
"akhildevelops"
],
"repo": "huggingface/candle",
"url": "https://github.com/huggingface/candle/issues/734",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1437463660 | Spanish translation of Chapter 5
¡Hi! This is the Spanish translation for Chapter 5, per issue #38.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
| gharchive/pull-request | 2022-11-06T16:35:36 | 2025-04-01T06:38:59.212265 | {
"authors": [
"HuggingFaceDocBuilderDev",
"camartinezbu"
],
"repo": "huggingface/course",
"url": "https://github.com/huggingface/course/pull/366",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1896382218 | Replace deprecated typing
Since Python 3.9 (this project is in Python 3.9.15), some typing hints are deprecated.
This PR replaces deprecated type hints.
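(A small before/after sketch of the kind of change involved, per PEP 585; the function is a made-up example:)

# Before: typing-module generics, deprecated since Python 3.9
from typing import Dict, List

def group_by_prefix(items: List[str]) -> Dict[str, List[str]]:
    ...

# After: built-in generics, valid from Python 3.9 onwards
def group_by_prefix(items: list[str]) -> dict[str, list[str]]:
    ...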
Codecov Report
Patch coverage: 100.00% and project coverage change: -4.44% :warning:
Comparison is base (f24b758) 90.50% compared to head (e40e7ba) 86.06%.
Report is 1 commit behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #1805 +/- ##
==========================================
- Coverage 90.50% 86.06% -4.44%
==========================================
Files 222 87 -135
Lines 13641 3187 -10454
==========================================
- Hits 12346 2743 -9603
+ Misses 1295 444 -851
Flag | Coverage Δ
---|---
jobs_cache_maintenance | ?
jobs_mongodb_migration | 85.92% <100.00%> (ø)
libs_libcommon | ?
services_admin | 87.44% <100.00%> (ø)
services_api | ?
services_rows | 82.75% <100.00%> (ø)
services_search | 79.16% <ø> (ø)
services_sse-api | 94.32% <100.00%> (ø)
services_worker | ?
Flags with carried forward coverage won't be shown. Click here to find out more.
Files Changed | Coverage Δ
---|---
services/search/src/search/routes/search.py | 54.62% <ø> (ø)
...s/mongodb_migration/src/mongodb_migration/check.py | 34.48% <100.00%> (ø)
...ngodb_migration/src/mongodb_migration/collector.py | 100.00% <100.00%> (ø)
...ation/src/mongodb_migration/deletion_migrations.py | 69.51% <100.00%> (ø)
..._20230511100700_queue_delete_indexes_with_force.py | 76.00% <100.00%> (ø)
...30516101600_queue_delete_index_without_revision.py | 73.07% <100.00%> (ø)
...bs/mongodb_migration/src/mongodb_migration/plan.py | 100.00% <100.00%> (ø)
...he_add_partial_field_in_config_parquet_and_info.py | 100.00% <100.00%> (ø)
..._cache_add_features_field_in_split_duckdb_index.py | 100.00% <100.00%> (ø)
jobs/mongodb_migration/tests/test_plan.py | 95.45% <100.00%> (ø)
... and 3 more
... and 135 files with indirect coverage changes
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
| gharchive/pull-request | 2023-09-14T11:36:54 | 2025-04-01T06:38:59.237016 | {
"authors": [
"albertvillanova",
"codecov-commenter"
],
"repo": "huggingface/datasets-server",
"url": "https://github.com/huggingface/datasets-server/pull/1805",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2113800655 | Unit 3 proposal updates
In Unit 3, in the description of Double Deep Q Networks, I thought that the descriptions of the respective roles of the DQN and of the Target Network were a bit confusing, which is why I am proposing an update that - I think - matches the pseudo-code provided in the Unit.
Maybe I am mistaken and just did not understand these paragraphs. Let me know.
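(For context, a minimal PyTorch sketch of the Double DQN update the section describes: the online network selects the greedy next action, while the target network evaluates it. online_net, target_net, and the batch tensors are placeholders, not names from the course code:)

import torch

with torch.no_grad():
    # Online network *selects* the action for each next state.
    next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
    # Target network *evaluates* the selected actions.
    next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
    td_target = rewards + gamma * (1.0 - dones) * next_q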
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| gharchive/pull-request | 2024-02-02T00:34:40 | 2025-04-01T06:38:59.238910 | {
"authors": [
"HuggingFaceDocBuilderDev",
"PierreCounathe"
],
"repo": "huggingface/deep-rl-class",
"url": "https://github.com/huggingface/deep-rl-class/pull/481",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1975222291 | [Docs Model UI] Gallery component
Follow up to https://github.com/huggingface/hub-docs/pull/1076#issuecomment-1791164555
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
| gharchive/pull-request | 2023-11-03T00:01:39 | 2025-04-01T06:38:59.242787 | {
"authors": [
"HuggingFaceDocBuilderDev",
"mishig25"
],
"repo": "huggingface/hub-docs",
"url": "https://github.com/huggingface/hub-docs/pull/1079",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1578200001 | Add Space disk space info
Related discussion https://huggingface.slack.com/archives/C048K60MPNF/p1675778705684199
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
Would it make sens to display that information in the pricing page instead, and maybe link to there from the docs?
It would avoid discrepancies between the docs and the actual prices / hardware specs, and keep moon-landing as the source of truth on that matter
WDYT @julien-c? You mentioned it's nicer in the docs than in /settings, but what about /pricing?
no strong opinion – we can also merge them here for now, and think about it later
I.e., I think on /pricing, for instance, "disk space" might be a bit too much info, so I think it makes sense here.
Alright, merging ahead then!
| gharchive/pull-request | 2023-02-09T16:19:46 | 2025-04-01T06:38:59.246197 | {
"authors": [
"HuggingFaceDocBuilderDev",
"SBrandeis",
"julien-c",
"osanseviero"
],
"repo": "huggingface/hub-docs",
"url": "https://github.com/huggingface/hub-docs/pull/669",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1604704271 | Update models-downloading.md
Update the file-downloading function; I got this warning using cached_download:
file_download.py:629: FutureWarning: `cached_download` is the legacy way to download files from the HF hub, please consider upgrading to `hf_hub_download`
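(A minimal sketch of the suggested replacement; the repo and filename are placeholders:)

from huggingface_hub import hf_hub_download

# hf_hub_download supersedes the deprecated cached_download and takes a
# repo_id + filename instead of a fully resolved URL.
path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(path)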
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
thanks @dveni!
| gharchive/pull-request | 2023-03-01T10:35:10 | 2025-04-01T06:38:59.248060 | {
"authors": [
"HuggingFaceDocBuilderDev",
"dveni",
"julien-c"
],
"repo": "huggingface/hub-docs",
"url": "https://github.com/huggingface/hub-docs/pull/701",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2002295062 | Issue detecting graphviz on macos
Describe the bug
When I run huggingface-cli env while having graphviz installed, I expect to see the version of graphviz showing up.
Reproduction
brew install graphviz
huggingface-cli env
Logs
$ huggingface-cli env
Copy-and-paste the text below in your GitHub issue.
- huggingface_hub version: 0.15.1
- Platform: macOS-14.1.1-arm64-arm-64bit
- Python version: 3.11.5
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /Users/rleone/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: remyleone
- Configured git credential helpers: 360000, osxkeychain
- FastAI: 2.7.13
- Tensorflow: 2.15.0
- Torch: 2.1.0
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: 1.4.2
- Pillow: 9.4.0
- hf_transfer: 0.1.4
- gradio: 4.4.1
- numpy: 1.24.3
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /Users/rleone/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /Users/rleone/.cache/huggingface/assets
- HF_TOKEN_PATH: /Users/rleone/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
(base) (k8s|admin@cli-k8s-beautiful-proskuriakova-7b18357b-9c0a-46ea-af9a-5427d97f17c2:kubeflow)
# rleone @ rleone-macbook in ~ [15:08:58]
$ which dot
/opt/homebrew/bin/dot
(base) (k8s|admin@cli-k8s-beautiful-proskuriakova-7b18357b-9c0a-46ea-af9a-5427d97f17c2:kubeflow)
# rleone @ rleone-macbook in ~ [15:09:05]
$ dot --version
dot - graphviz version 9.0.0 (20230911.1827)
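(Note: huggingface-cli env reports the version of the graphviz Python package, not the Homebrew dot binary, so the likely fix here is pip install graphviz. A quick hypothetical check:)

import importlib.metadata

try:
    # Present only after `pip install graphviz`; brew's dot binary alone
    # is invisible to this Python-environment probe.
    print(importlib.metadata.version("graphviz"))
except importlib.metadata.PackageNotFoundError:
    print("graphviz Python package not installed")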
Thanks it solved it. I was thinking that dependencies were somewhat installed from a different lib but it wasn't.
Glad to know your problem's fixed!
| gharchive/issue | 2023-11-20T14:11:54 | 2025-04-01T06:38:59.250881 | {
"authors": [
"Wauplin",
"remyleone"
],
"repo": "huggingface/huggingface_hub",
"url": "https://github.com/huggingface/huggingface_hub/issues/1846",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2006606079 | Send user_agent in HEAD calls
It looks like for the past 14 months we were not sending the user agent in the HEAD call when downloading a file, due to a change I made in https://github.com/huggingface/huggingface_hub/pull/1058...
Thanks @patrickvonplaten for reporting (private slack thread). This means we don't have user_agent (i.e. library_name/version + additional information like pipeline_class) in our stats when model files are already cached. This PR fixes it and adds a regression test for it.
cc @julien-c @osanseviero weird that we haven't found out about it before.
(also, deprecate http_user_agent -a completely unused method- + add get_hf_file_metadata to HfApi -it wasn't there before-)
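(A quick sketch of an affected call path; get_hf_file_metadata issues the HEAD request on which the user_agent should now be forwarded. The repo and agent string are illustrative:)

from huggingface_hub import get_hf_file_metadata, hf_hub_url

url = hf_hub_url("gpt2", "config.json")
# The HEAD request behind this call is where the user_agent header was
# being dropped; it should now be sent along with the request.
meta = get_hf_file_metadata(url, user_agent="my-library/0.1")
print(meta.commit_hash, meta.etag, meta.size)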
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
| gharchive/pull-request | 2023-11-22T15:38:50 | 2025-04-01T06:38:59.254017 | {
"authors": [
"HuggingFaceDocBuilderDev",
"Wauplin"
],
"repo": "huggingface/huggingface_hub",
"url": "https://github.com/huggingface/huggingface_hub/pull/1854",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1121748467 | Bart and T5
What does this PR do?
Adds training support for:
BartForConditionalGeneration
T5ForConditionalGeneration
Evaluation for these models will come in another PR but some building blocks are already implemented here.
There are also examples for summarization and translation finetuning, but instructions on how to use them will also come in another PR.
🎉 looks good to me
| gharchive/pull-request | 2022-02-02T10:26:57 | 2025-04-01T06:38:59.255909 | {
"authors": [
"jimypbr",
"michaelbenayoun"
],
"repo": "huggingface/optimum-graphcore",
"url": "https://github.com/huggingface/optimum-graphcore/pull/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1926434876 | Tensor parallelism: saved model can't be loaded
After running finetuning using run_summarization.py for a couple of steps, the directory to which the model shards have been saved does not contain a config.json. Hence, the stored model can't be loaded using from_pretrained():
OSError: PATH_TO_SAVED_MODEL_DIR does not appear to have a file named config.json.
Should the config.json be saved together with the shards?
You need to provide the parent directory instead. The parent directory contains all the config files.
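(A minimal sketch of the suggested fix; the path is a placeholder:)

from transformers import AutoModelForSeq2SeqLM

# Load from the parent output directory, which holds config.json,
# rather than from the subdirectory that contains only the shards.
model = AutoModelForSeq2SeqLM.from_pretrained("path/to/output_dir")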
| gharchive/issue | 2023-10-04T15:04:02 | 2025-04-01T06:38:59.257778 | {
"authors": [
"bocchris-aws",
"michaelbenayoun"
],
"repo": "huggingface/optimum-neuron",
"url": "https://github.com/huggingface/optimum-neuron/issues/249",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1214611466 | Remove unused argument
What does this PR do?
Remove an unused argument in compute_loss_ort() from Trainer class.
@JingyaHuang (I cannot add reviewers, so I'm pinging you!)
Before submitting
[x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
Closing as corrected in https://github.com/huggingface/optimum/pull/189/
Sorry, I did not see this.
| gharchive/pull-request | 2022-04-25T14:38:38 | 2025-04-01T06:38:59.260421 | {
"authors": [
"HuggingFaceDocBuilderDev",
"JingyaHuang",
"fxmarty"
],
"repo": "huggingface/optimum",
"url": "https://github.com/huggingface/optimum/pull/154",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2614485807 | [FEATURE] Additional metrics for ImageNet validation results
The results files for ImageNet (for example, this one) report top-1 and top-5 accuracy along with other measures. Can the total training time of each model also be added?
Inference times are available in other results files but without the exact number of training epochs. Without the training epochs for all the models, it is not straightforward to estimate the total training time per model.
This will be useful to study the performance versus training time resource consumption, for example.
@raghavian models have been trained on such a wide variety of compute resources over a long period of time that the effort to pull that information together would be too high relative to continuing library maintenance and ongoing feature development.
| gharchive/issue | 2024-10-25T15:49:57 | 2025-04-01T06:38:59.262295 | {
"authors": [
"raghavian",
"rwightman"
],
"repo": "huggingface/pytorch-image-models",
"url": "https://github.com/huggingface/pytorch-image-models/issues/2313",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1576146136 | Explicit automatic alignment of header
My use case might be overly specific, but when writing/using vectorised code on an mmap'd safetensors file, the header sometimes causes everything to have an odd-numbered pointer, which breaks even 16-bit vectorisation. Is there a possibility that the Python bindings will get an option to save with an explicit alignment? Just padding the header should be enough for most use cases.
No it's not overly specific, actually there's already a PR for that.
https://github.com/huggingface/safetensors/pull/148
I was waiting for more need for it before merging, but it seems this is picking up in low level frameworks where alignment could really help speed up load times.
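(A minimal sketch of the header-padding idea, assuming the safetensors layout of an 8-byte little-endian length prefix followed by a JSON header; this is illustrative, not the library's API, and the input format is made up:)

import json
import struct

def save_aligned(tensors, path, align=8):
    # tensors: {name: (dtype_str, shape_list, raw_bytes)} (a made-up input format)
    header, offset, blobs = {}, 0, []
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + len(raw)]}
        offset += len(raw)
        blobs.append(raw)
    payload = json.dumps(header).encode("utf-8")
    # Pad so that (8-byte length prefix + header) lands on the alignment;
    # trailing spaces keep the header valid JSON per the format.
    payload += b" " * (-(8 + len(payload)) % align)
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(payload)))
        f.write(payload)
        for raw in blobs:
            f.write(raw)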
Also I love the project !
If you want pure rust ML framework I recommend https://github.com/coreylowman/dfdx (Still very early on, but there's at least a lot to inspire from IMO).
For instance I implemented https://github.com/Narsil/fast_gpt2 (without dfdx, more like your approach, but still stealing the mkl bindings from dfdx to get the performance ! )
Thank you, @Narsil! Yeah, doing math low level isn't that popular apart from people who know how to code and are on "sub-par" HW by today's standards. Also thanks for the mention of dfdx and your repo. I didn't even consider trying to use any BLAS lib.
isn't that popular apart from people who know how to code and are on "sub-par" HW by today's standards.
It's still the future in my eyes. The ML field is somewhat settling and not experimenting as much as it used to; performance is becoming a real concern for anything at scale. And all the Python solutions for performance are way too clunky to beat compiled code.
This is a very personal view.
Closing because #148 is merged
| gharchive/issue | 2023-02-08T13:41:31 | 2025-04-01T06:38:59.267042 | {
"authors": [
"Narsil",
"mrsteyk"
],
"repo": "huggingface/safetensors",
"url": "https://github.com/huggingface/safetensors/issues/178",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1735481908 | trust_remote_code is not available in SageMaker Endpoint
System Info
Hi team,
As far as I can see, there is no way to set the trust_remote_code option in sagemaker_endpoint.sh:
#!/bin/bash

if [[ -z "${HF_MODEL_ID}" ]]; then
  echo "HF_MODEL_ID must be set"
  exit 1
fi
export MODEL_ID="${HF_MODEL_ID}"

if [[ -n "${HF_MODEL_REVISION}" ]]; then
  export REVISION="${HF_MODEL_REVISION}"
fi

if [[ -n "${SM_NUM_GPUS}" ]]; then
  export NUM_SHARD="${SM_NUM_GPUS}"
fi

if [[ -n "${HF_MODEL_QUANTIZE}" ]]; then
  export QUANTIZE="${HF_MODEL_QUANTIZE}"
fi

text-generation-launcher --port 8080
Information
[ ] Docker
[ ] The CLI directly
Tasks
[ ] An officially supported command
[ ] My own modifications
Reproduction
llm_model = HuggingFaceModel(
role=role,
image_uri=llm_image,
env={
'HF_MODEL_ID': hf_model_id,
'HF_MODEL_QUANTIZE': json.dumps(use_quantization),
'SM_NUM_GPUS': json.dumps(number_of_gpu)
}
)
This gives error for Falcon-40b
Expected behavior
The endpoint should get created
@OlivierDehaene Hi, do I currently need to build this locally, or is the latest updated docker image uploaded to a public container repository that I can pull from?
We are already working on releasing the new image for SageMaker. I'll keep you posted here.
@austinmw @songfeng @OlivierDehaene
New image is now available. See Phil's latest blog post for how to deploy Falcon 7B & 40B - https://www.philschmid.de/sagemaker-falcon-llm
@Markus-Zeggel The tgi 0.8.2 works for me. Also, make sure you update the sagemaker sdk or maybe create a new env from scratch.
trust_remote_code = True
env={
'HF_MODEL_ID': hf_model_id,
# 'HF_MODEL_QUANTIZE': json.dumps(use_quantization),
'SM_NUM_GPUS': json.dumps(number_of_gpu),
'HF_MODEL_TRUST_REMOTE_CODE': json.dumps(trust_remote_code)
}
Hi, I followed the https://www.philschmid.de/sagemaker-falcon-llm tutorial exactly. (The only difference is the instance type.) However, I still get the error message:
Loading tiiuae/falcon-7b-instruct requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.
I tried this config:
# TGI config
config = {
'HF_MODEL_ID': "tiiuae/falcon-7b-instruct", # model_id from hf.co/models
# 'SM_NUM_GPUS': json.dumps(number_of_gpu), # Number of GPU used per replica
'MAX_INPUT_LENGTH': json.dumps(1024), # Max length of input text
'MAX_TOTEL_TOKENS': json.dumps(2048), # Max length of the generation (including input text)
'HF_MODEL_TRUST_REMOTE_CODE': "true"
# 'HF_MODEL_QUANTIZE': "bitsandbytes", # comment in to quantize
}
Am I missing something?
Is this the correct image: 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-tgi-inference:2.0.0-tgi0.8.2-gpu-py39-cu118-ubuntu20.04 ?
Thanks!
Hello, I'd like to know if anybody has actually managed to use SSE streaming in SageMaker using this TGI server image. Thanks.
@gsaivinay SageMaker currently does not support SSE.
Apologies, as I know this isn't the best place for it, but I wasn't sure where else to ask... Is anyone aware of whether we can provide the scripts that SageMaker needs ourselves, rather than pulling from huggingface.co/models, which happens when you define 'HF_MODEL_ID'? We're behind a proxy so cannot do it directly from SageMaker. We have other models that work fine in the same environment, however we saw that falcon-7b has a few Python scripts in the HF repo (https://huggingface.co/tiiuae/falcon-7b/tree/main). The models that work fine do not have these Python scripts, which from other research seems to be why it pulls from HF...?
@Steven-N we created a blog post https://www.philschmid.de/sagemaker-llm-vpc
@philschmid, I tried with tiiuae/falcon-rw-1b on sagemaker==2.170.0 and still get the following error
ValueError: Loading /opt/ml/model requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.
Code to reproduce.
import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
from huggingface_hub import snapshot_download
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
print(f"sagemaker role arn: {role}")
print(f"sagemaker session region: {sess.boto_region_name}")
MODEL_ID = "tiiuae/falcon-rw-1b"
CACHED_DIR = "../cache"
MERGE_MODEL_DIR = "merged_model_test"
model = AutoModelForCausalLM.from_pretrained(
MODEL_ID,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
# device_map="auto",
trust_remote_code=True,
cache_dir=CACHED_DIR,
)
tokenizer = AutoTokenizer.from_pretrained(
MODEL_ID,
cache_dir=CACHED_DIR,
)
model.save_pretrained(MERGE_MODEL_DIR, safe_serialization=True)
tokenizer.save_pretrained(MERGE_MODEL_DIR, safe_serialization=True)
import os
parent_dir = os.getcwd()
# change to model dir
os.chdir(MERGE_MODEL_DIR)
# use pigz for faster and parallel compression
!tar -cf model.tar.gz --use-compress-program=pigz *
# change back to parent dir
os.chdir(parent_dir)
from sagemaker.s3 import S3Uploader
# upload model.tar.gz to s3
s3_model_uri = S3Uploader.upload(local_path=str(MERGE_MODEL_DIR + "/model.tar.gz"), desired_s3_uri=f"s3://{sess.default_bucket()}/test-model")
print(f"model uploaded to: {s3_model_uri}")
from sagemaker.huggingface import get_huggingface_llm_image_uri, HuggingFaceModel
import json
image_uri = get_huggingface_llm_image_uri("huggingface", version="0.8.2")
print(f"llm image uri: {image_uri}")
instance_type = "ml.g4dn.2xlarge"
health_check_timeout = 300
trust_remote_code = True
config = {
"HF_MODEL_ID": "/opt/ml/model", # path to where sagemaker stores the model
"MAX_INPUT_LENGTH": json.dumps(2048), # Max length of input text
"MAX_TOTAL_TOKENS": json.dumps(3000), # Max length of the generation (including input text)
"HF_MODEL_TRUST_REMOTE_CODE": json.dumps(trust_remote_code)
}
# create HuggingFaceModel with the image uri
llm_model = HuggingFaceModel(
role=role,
image_uri=image_uri,
model_data=s3_model_uri,
env=config,
)
endpoint_name = sagemaker.utils.name_from_base("test")
predictor = llm_model.deploy(
endpoint_name=endpoint_name,
initial_instance_count=1,
instance_type=instance_type,
model_data_download_timeout=10 * 60,
container_startup_health_check_timeout=10 * 60,
wait=False,
)
print(predictor.endpoint_name)
falcon-rw-1b is a different model than the 7B or 40B model.
Hey team, I was able to reproduce the same error using the new TGI 0.8.2 image when deploying falcon-7b following this guide and setting 'HF_MODEL_TRUST_REMOTE_CODE': json.dumps(True).
The only difference was the instance size (ml.m5.2xlarge). I can't tell whether the instance size is what raises the trust_remote_code=true exception.
| gharchive/issue | 2023-06-01T05:21:58 | 2025-04-01T06:38:59.281567 | {
"authors": [
"Steven-N",
"austinmw",
"gsaivinay",
"monuminu",
"mrwadams",
"philschmid",
"razasaad",
"songfeng",
"yapweiyih"
],
"repo": "huggingface/text-generation-inference",
"url": "https://github.com/huggingface/text-generation-inference/issues/390",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2355970516 | Fix local installation after Rust 1.79 and transformers 4.41.2
I was trying to install text-generation-inference locally without Docker, and I encountered several problems while doing so. I collected all those problems and made the fixes here so that people do not encounter such tedious problems again.
Fixes # (issue)
The rust-toolchain.toml file was overriding Rust version to 1.78.0, making the installation fail as the inline const feature in rust requires 1.79.0.
Upgraded the transformers version.
Fixed the error: "str" has no attribute "logits" in rw.py when the model starts
Before submitting
[ X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
[ X] Did you read the contributor guideline,
Pull Request section?
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@OlivierDehaene
@Narsil
The rust-toolchain is already fixed on main (and you don't actually seem to fix it here).
Rw.py should be in its own PR (and with a reproducer; no need for a test since we don't really maintain the non-flash, no-custom-kernels models that hard anymore).
As for the transformers version, you are simply modifying the benchmark calls, which is an optional dependency, and the lockfile only points to an old version because some package seems to depend on an old version.
Regardless, we never manually touch the lock file, only pyproject.toml, so the solution would need to be there.
| gharchive/pull-request | 2024-06-16T20:43:43 | 2025-04-01T06:38:59.287797 | {
"authors": [
"Narsil",
"rYoussefAli"
],
"repo": "huggingface/text-generation-inference",
"url": "https://github.com/huggingface/text-generation-inference/pull/2071",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1795379083 | chore: promote jx-aspnet-app-01 to version 0.0.2
this commit will trigger a pipeline to generate the actual kubernetes resources to perform the promotion which will create a second commit on this Pull Request before it can merge
jx-aspnet-app-01
Changes in version 0.0.2
Chores
release 0.0.2 (jenkins-x-bot)
add variables (jenkins-x-bot)
Other Changes
These commits did not use Conventional Commits formatted messages:
Update release.yaml (hughexp)
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To complete the pull request process, please assign
You can assign the PR to them by writing /assign in a comment when ready.
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
OWNERS
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@hughexp: The following test failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
| --- | --- | --- | --- |
| verify | b39f949a1d17fe74fe66617312701be5af6c690e | link | /test verify |
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the jenkins-x/lighthouse repository. I understand the commands that are listed here.
| gharchive/pull-request | 2023-07-09T12:25:19 | 2025-04-01T06:38:59.480723 | {
"authors": [
"hughexp"
],
"repo": "hughexp/jx3-minikube_home",
"url": "https://github.com/hughexp/jx3-minikube_home/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1097084767 | Replace RR with RSpec Mocks
RSpec Mocks is widely used nowadays, and having to learn RR in order to write tests is becoming an obstacle to new contributors.
https://github.com/kjvarga/rr-to-rspec-converter can be of help.
| gharchive/issue | 2022-01-09T02:11:18 | 2025-04-01T06:38:59.497650 | {
"authors": [
"knu"
],
"repo": "huginn/huginn",
"url": "https://github.com/huginn/huginn/issues/3062",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2104195906 | [BUG] Image display issue
Describe the bug
There is a problem with the image display aspect ratio.
This issue occurs when using native markdown; switching to the shortcode {{< figure >}} avoids it.
Expected behavior
fix
Screenshots
Build Environment
hugo v0.120.4
fixit v0.3.0
Preview Environment
No response
Additional Information
Reproduction demo
Cause of the bug: starting with Chrome (121.0.6167.85), a default style added for img with size="auto" causes the image aspect ratio to become abnormal.
| gharchive/issue | 2024-01-28T14:41:09 | 2025-04-01T06:38:59.501221 | {
"authors": [
"Alfly",
"Lruihao"
],
"repo": "hugo-fixit/FixIt",
"url": "https://github.com/hugo-fixit/FixIt/issues/411",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1121248789 | Gdk::Cursor.new_from_name must return Gdk::Cursor but it is returning (Gdk::Cursor | Nil)
Trying to create the following cursor:
Gdk::Cursor.new_from_name("pointer", nil)
Results in the aforementioned compile-time error.
The generated method:
def self.new_from_name(name : ::String, fallback : Gdk::Cursor?) : self
# gdk_cursor_new_from_name: (Constructor)
# @fallback: (nullable)
# Returns: (transfer full)
# Handle parameters
fallback = if fallback.nil?
Pointer(Void).null
else
fallback.to_unsafe
end
# C call
_retval = LibGdk.gdk_cursor_new_from_name(name, fallback)
# Return value handling
Gdk::Cursor.new(_retval, GICrystal::Transfer::Full) unless _retval.null?
end
This seems to apply to other methods as well (which previously worked):
GdkPixbuf::Pixbuf.new_from_file
GdkPixbuf::Pixbuf.new_from_file_at_size
| gharchive/issue | 2022-02-01T22:06:23 | 2025-04-01T06:38:59.514352 | {
"authors": [
"BlobCodes"
],
"repo": "hugopl/gtk4.cr",
"url": "https://github.com/hugopl/gtk4.cr/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
184721371 | Allow specifying the toolbox as an XML string
Ember-blockly expects the toolbox blocks
to be passed as a list or as a series of dictionaries
with categories and blocks.
But pilas-bloques uses a different model: it builds
the toolbox using objects, producing an XML
string with all the blocks organised into categories.
So that these two ideas can coexist, it would be useful for
ember-blockly to accept an extra parameter that allows defining
the toolbox as an XML string containing the complete toolbox, something like:
{{ember-blockly toolbox='<xml>bal...</xml>'}}
That way we could reuse the toolbox construction that comes
with pilas-bloques and hook ember-blockly into that project as well.
The toolbox can now be specified as an XML string, using the same property that defines the blocks.
This is an example invocation for displaying the component:
{{ember-blockly blocks=blocks_array_or_xml_string withZoom=true withTrash=true}}
| gharchive/issue | 2016-10-23T22:45:26 | 2025-04-01T06:38:59.517103 | {
"authors": [
"hugoruscitti"
],
"repo": "hugoruscitti/ember-blockly",
"url": "https://github.com/hugoruscitti/ember-blockly/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
106082165 | Make it possible to define a bounding box for the component map
There are good use cases for a fixed map that displays data, but is not zoomable. This would make the components a bit closer to illustrations.
Related to #40. Should be able to tackle both at the same time. This particular setting would probably best be an option added to the component map, along with an option to set the center point and zoom level.
This exposes an API for setting center and zoom level. We can do bounds if we want as well, but the functionality is more or less equivalent and possibly more accessible. In the component context:
map.getOptions().centerCoordinates([50, 0]);
map.getOptions().zoomLevel(6);
| gharchive/issue | 2015-09-11T19:53:47 | 2025-04-01T06:38:59.559969 | {
"authors": [
"cncoleman",
"esjewett"
],
"repo": "humanitiesplusdesign/palladio",
"url": "https://github.com/humanitiesplusdesign/palladio/issues/53",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1593779960 | Client - typo in gateway generate-certs message
version: development 1.13.0
There's typo in gateway generate-certs message
Steps:
Clone and install development branch Source or Docker
Start the Client
execute gateway generate-certs command
pay attention to the message
Actual:
pass phase
Expected:
pass phrase
Fixed by #6119
| gharchive/issue | 2023-02-21T16:34:00 | 2025-04-01T06:38:59.655310 | {
"authors": [
"nikspz"
],
"repo": "hummingbot/hummingbot",
"url": "https://github.com/hummingbot/hummingbot/issues/6103",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2258264328 | Is this repo dead?
Seems to be a long time since any dev work happened here.
Is this project dead?
Just asking so I know whether to allocate time/energy into learning the system in case it will never be updated again.
I understand maintaining software is not easy and I am certainly not expecting anything, just curious.
I saw Orca when it was first released and thought it was super interesting but didn't have the time to invest in it back then. Checking in again, it seems to be abandoned (which is fine, but perhaps mark it as such if that is the case :) ).
Cheers for all your work so far.
It's still very much alive. The javascript implementation is complete, so I don't need to update it, the other implementations(uxn, cli, etc..) still change from time to time, but only rarely.
The Orca specification is frozen, the next time this repo will be modified is when the webmidi API changes.
| gharchive/issue | 2024-04-23T08:25:35 | 2025-04-01T06:38:59.660190 | {
"authors": [
"Venj-ADL",
"neauoire"
],
"repo": "hundredrabbits/Orca",
"url": "https://github.com/hundredrabbits/Orca/issues/297",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
173621718 | Backport fix for building unit-tests for gtest itself with Mingw
backport of bugfix for https://github.com/google/googletest/issues/708 from upstream googletest.
fix #9
https://ci.appveyor.com/project/andoks/googletest
Release: https://github.com/hunter-packages/googletest/releases/tag/1.8.0-hunter-p3
Testing:
https://ci.appveyor.com/project/ingenue/hunter/build/1.0.860
https://travis-ci.org/ingenue/hunter/builds/155816959
https://ci.appveyor.com/project/andoks/googletest
Well, this test uses Visual Studio:
-- Building for: Visual Studio 14 2015
-- The C compiler identification is MSVC 19.0.24213.1
-- The CXX compiler identification is MSVC 19.0.24213.1
-- Check for working C compiler using: Visual Studio 14 2015
-- Check for working C compiler using: Visual Studio 14 2015 -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler using: Visual Studio 14 2015
-- Check for working CXX compiler using: Visual Studio 14 2015 -- works
MinGW still not working:
https://ci.appveyor.com/project/ingenue/hunter/build/1.0.860/job/2rvaic4wy6penu6w
I've saved this pull request in branch hunter.pr-10 and reverted hunter branch to the previous state.
Sorry, I'm not too familiar with appveyor, and obviously did not take enough time to set up building with MinGW properly.
It seems like even though the upstream PR was accepted, it still has not fixed all the issues with building with MinGW on windows (https://github.com/google/googletest/pull/721).
And in addition, I'm still running with a self-patched hunter (#8), and did not test the cherry-pick locally.
If I submit another PR, I'll make sure to set up testing properly with AppVeyor.
Here is the configuration with AppVeyor testing with MinGW: https://github.com/forexample/hunter-simple/blob/master/appveyor.yml. By the way, you can send me a pull request with a working configuration (or maybe even upstream).
| gharchive/pull-request | 2016-08-28T00:05:28 | 2025-04-01T06:38:59.667207 | {
"authors": [
"andoks",
"ruslo"
],
"repo": "hunter-packages/googletest",
"url": "https://github.com/hunter-packages/googletest/pull/10",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
239331857 | Plans for the future
@hunterjm any plans in continuing this auto buyer for FIFA 18??
Most likely.
👍
What this program needs is a scheduler that lets you configure at what times the program runs, for how many hours, at random times, etc.
All of my accounts have been banned after some time, even with RPM=4.
The autobuyer has to mimic human behaviour better than with the current version.
I hope you keep going with this ab for fifa 18!
@hunterjm is there any chance to make this work on a mobile device? that would be great.
| gharchive/issue | 2017-06-29T00:07:51 | 2025-04-01T06:38:59.668806 | {
"authors": [
"Fotospecht",
"hunterjm",
"marlonespindola",
"syros1977"
],
"repo": "hunterjm/fifa-autobuyer",
"url": "https://github.com/hunterjm/fifa-autobuyer/issues/193",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
918459105 | 🛑 Posthog is down
In c3bfa05, Posthog (https://posthog.craftstudios.eu/setup_admin) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Posthog is back up in 8398ce1.
| gharchive/issue | 2021-06-11T08:54:36 | 2025-04-01T06:38:59.716610 | {
"authors": [
"hupratt"
],
"repo": "hupratt/upptime",
"url": "https://github.com/hupratt/upptime/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2723633695 | 🛑 CV is down
In 80c3836, CV (https://minio-api.thekor.eu/rihab-f1492f08-f236-4a55-afb7-70ded209cb24/rihab/index.html) was down:
HTTP code: 0
Response time: 0 ms
Resolved: CV is back up in 8321561 after 50 minutes.
| gharchive/issue | 2024-12-06T17:59:49 | 2025-04-01T06:38:59.718182 | {
"authors": [
"hupratt"
],
"repo": "hupratt/upptime",
"url": "https://github.com/hupratt/upptime/issues/4010",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2563337759 | Reading Order model integration simplification
Hello, I want to continue the discussion from here - https://huggingface.co/HURIDOCS/pdf-reading-order/discussions/1
Thanks for all of your work. I am exploring integrating your code into my project, but it needs more simplification.
Is there any way to do this without Poppler, using some other PDF parsing framework? Is it possible for you to modularise the part that extracts PDF content, and provide separate code specifically for adding results from the segment model and the other steps, as a pipeline for easy integration with different parsing backbones? Or just make the input as simple as a list of segments per page {contained text, bbox of segment, type of segment} and the output a dict {bbox: order}; <did I miss anything else required?>
please share a sample jupyter notebook.
regards.
Thank you for your valuable suggestion. We understand the importance of interchangeability for the parser and appreciate your insight.
While making this change would require a significant time investment, it's not currently a top priority due to our ongoing commitments to supporting NGOs in other areas. However, we recognize the potential benefits and encourage you to explore this feature by creating a fork of the project.
The codebase is already structured to facilitate this change, particularly within the PdfFeatures class and its initialization. With some additional effort, you should be able to implement the interchangeable parser functionality in a repository fork.
| gharchive/issue | 2024-10-03T07:34:31 | 2025-04-01T06:38:59.719841 | {
"authors": [
"gabriel-piles",
"mllife"
],
"repo": "huridocs/pdf-document-layout-analysis",
"url": "https://github.com/huridocs/pdf-document-layout-analysis/issues/91",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1700187783 | Rewrite joy servo node in cpp
bump::minor
It was necessary due to performance issues with Python node.
Then I try to close gripper in RViz with gripper move group only left gripper joint is moving in simulation.
Screencast from 19.05.2023 23:24:14.webm
Done: #3 #4 #5
| gharchive/pull-request | 2023-05-08T12:45:21 | 2025-04-01T06:38:59.728426 | {
"authors": [
"delihus",
"macstepien"
],
"repo": "husarion/rosbot_xl_manipulation_ros",
"url": "https://github.com/husarion/rosbot_xl_manipulation_ros/pull/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1639991330 | StructuredOutputParser - Allow users to get multiple items from response.
Allow users to choose the type in the schema (string | List[string])
Allow users to get multiple JSON objects (a JSON array) in the response. I achieved it by replacing the {{ }} with [[ ]] as follows:
prompt.replace(
    """{{
    "ID": string  // IDs which refer to the sentences.
    "Text": string  // Sentences that contain the answer to the question.
}}""",
    """[[
    "ID": string  // IDs which refer to the sentences.
    "Text": string  // Sentences that contain the answer to the question.
]]""",
)
This returned a list of JSON objects.
Very hacky, but this is what I did to have it return an array of objects.
output_parser = StructuredOutputParser.from_response_schemas(
response_schemas=[
ResponseSchema(
name="country_code", description="two letter country code"
),
ResponseSchema(name="city", description="city name"),
ResponseSchema(
name="places",
description="""array of of 10 places in the following format: [
{{ "name": string // name of the place', "types": [string] // types of the place' }}
]
""",
),
]
)
format_instructions =output_parser.get_format_instructions()
format_instructions =format_instructions.replace(
'"places": string', '"places": array of objects'
)
Which generates:
```json
{
"country_code": string // two letter country code
"city": string // city name
"places": array of objects // array of of 10 places in the following format: [
{ "name": string // name of the place', "types": [string] // types of the place' }
]
}
```
I managed to achieve this by passing type to the ResponseSchema. Something like this
ResponseSchema(
name="someList", description="description to model with the example: [{name: string, other: string}]", type="array(objects)"
)
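A fuller, self-contained sketch of that approach (assuming a recent langchain release; the type value is a free-form hint woven into the format instructions, not a validated enum):
from langchain.output_parsers import StructuredOutputParser, ResponseSchema

schemas = [
    ResponseSchema(
        name="matches",
        description='array of {"ID": string, "Text": string} objects',
        type="array(objects)",  # hint the model toward emitting a JSON array
    ),
]
parser = StructuredOutputParser.from_response_schemas(schemas)
print(parser.get_format_instructions())  # instructions now advertise an array, not a string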
Is there a TypeScript equivalent of ResponseSchema for Langchain.js? It doesn't look like so, but it would be greatly helpful for the JS/TS community.
| gharchive/issue | 2023-03-24T20:27:06 | 2025-04-01T06:38:59.739011 | {
"authors": [
"IbraheemAlSaady",
"berengamble",
"keremnalbant",
"patrick-ve"
],
"repo": "hwchase17/langchain",
"url": "https://github.com/hwchase17/langchain/issues/1976",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
10791182 | What other data sets should be included in The Maker Map?
Possibilities:
3D printers (private/commercial)
Maker events
Hackerspaces (from the wiki, check for activity)
Things that already live on other maps?
???
Discuss.
I think including the locations of all fablabs worldwide would also be beneficial.
This data is already organised in the form of a map on fablabs.io
| gharchive/issue | 2013-02-08T18:59:45 | 2025-04-01T06:38:59.762778 | {
"authors": [
"bobthecow",
"patkan"
],
"repo": "hwstartup/TheMakerMap.com",
"url": "https://github.com/hwstartup/TheMakerMap.com/issues/24",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
202717541 | How to setup batch_size?
There seems to be nowhere to set batch_size, as neither the datablock nor singleblock class has an input entry for it.
Singleblock and datablock are subclasses of dataset, so you can just pass batch_size as an argument to set it:
https://github.com/hycis/Mozi/blob/master/example/datablocks_example.py#L62
Thanks. The reason I posted this issue is that when I tried to run the code in the tutorial, it throws "'Mnist' object has no attribute 'batch_size'". The same exception is thrown once I try to print the batch_size of a datablock object. I traced into the source and found that only the IterMatrix class has a member called batch_size. Maybe I got something wrong.
Yep, the dataset object itself contains the train, valid and test itermatrix objects, and the dataset itself indeed does not have a batch_size attribute. However, you can access the batch_size by looking into the itermatrix, for example:
dataset = Mnist(batch_size=32)
print(dataset.train.batch_size)
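Presumably the valid and test splits expose the same attribute; a quick sanity check might look like this (hypothetical, based only on the API shown above; adjust the import path to your install):
from mozi.datasets.mnist import Mnist  # import path assumed

dataset = Mnist(batch_size=32)
for split in (dataset.train, dataset.valid, dataset.test):
    print(split.batch_size)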
| gharchive/issue | 2017-01-24T03:44:32 | 2025-04-01T06:38:59.788431 | {
"authors": [
"hycis",
"shakacs"
],
"repo": "hycis/Mozi",
"url": "https://github.com/hycis/Mozi/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
238635106 | HOTFIX: remove second resource_modified call
Hotfix for #2191
As described in #2182
+1
| gharchive/pull-request | 2017-06-26T18:48:56 | 2025-04-01T06:38:59.792761 | {
"authors": [
"mjstealey",
"pkdash"
],
"repo": "hydroshare/hydroshare",
"url": "https://github.com/hydroshare/hydroshare/pull/2192",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1161857978 | [WIP] 4535 Remove netcdf resource type
NOTE: This PR replaces another closed PR #4528
Pull Request Checklist:
[ ] Positive Test Case Written by Dev
[ ] Automated Testing
[ ] Sufficient User and Developer Documentation
[ ] Passing Jenkins Build
[ ] Peer Code review and approval
Positive Test Case
[Enter positive test case here]
https://sonarqube.cuahsi-workstation.com:9000/dashboard?id=hydroshare-4536
@sblack-usu I made a change as per your suggestion. Can you re-approve?
| gharchive/pull-request | 2022-03-07T19:57:07 | 2025-04-01T06:38:59.795341 | {
"authors": [
"hydrocheck",
"pkdash"
],
"repo": "hydroshare/hydroshare",
"url": "https://github.com/hydroshare/hydroshare/pull/4536",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1956097575 | 🛑 Daisy - Voice Tools Bot Backend is down
In be9df94, Daisy - Voice Tools Bot Backend (https://voice-backend.hydev.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Daisy - Voice Tools Bot Backend is back up in 8a87e93 after 3 minutes.
| gharchive/issue | 2023-10-22T23:23:41 | 2025-04-01T06:38:59.797103 | {
"authors": [
"hykilpikonna"
],
"repo": "hykilpikonna/Uptime",
"url": "https://github.com/hykilpikonna/Uptime/issues/503",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
395442097 | cURL Error #:Could not resolve host: tenant4.hubshake.localhost
Description
I always get this error when trying to cURL the API from WordPress to the Laravel application:
cURL Error #:Could not resolve host: tenant4.hubshake.localhost
..
Actual behavior
Right now it is always showing me this error:
cURL Error #:Could not resolve host: tenant4.hubshake.localhost
..
Expected behavior
It should reach the API, call the function there, and perform the operation inside that function.
..
Information
hyn/multi-tenant version: 5.*
laravel version: 5.5.*
database driver and version: mysql
webserver software and version: xampp v3.2.2
php version: 7.1
tenancy.php config
<?php
/*
* This file is part of the hyn/multi-tenant package.
*
* (c) Daniël Klabbers <daniel@klabbers.email>
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*
* @see https://laravel-tenancy.com
* @see https://github.com/hyn/multi-tenant
*/
use Hyn\Tenancy\Database\Connection;
return [
'models' => [
/**
* Specify different models to be used for the global, system database
* connection. These are also used in their relationships. Models
* used have to implement their respective contracts and
* either extend the SystemModel or use the trait
* UsesSystemConnection.
*/
// Must implement \Hyn\Tenancy\Contracts\Customer
'customer' => \Hyn\Tenancy\Models\Customer::class,
// Must implement \Hyn\Tenancy\Contracts\Hostname
'hostname' => \Hyn\Tenancy\Models\Hostname::class,
// Must implement \Hyn\Tenancy\Contracts\Website
'website' => \Hyn\Tenancy\Models\Website::class
],
'website' => [
/**
* Each website has a short random hash that identifies this entity
* to the application. By default this id is randomized and fully
* auto-generated. In case you want to force your own logic for
* when you need to have a better overview of the complete
* tenant folder structure, disable this and implement
* your own id generation logic.
*/
'disable-random-id' => false,
/**
* The random Id generator is responsible for creating the hash as mentioned
* above. You can override what generator to use by modifying this value
* in the configuration.
*
* @warn This won't work if disable-random-id is true.
*/
// 'random-id-generator' => Hyn\Tenancy\Generators\Uuid\ShaGenerator::class,
'random-id-generator' => App\Tenancy\Generators\DBNameGenerator::class,
/**
* Enable this flag in case you're using a driver that does not support
* database username or database name with a length of more than 32 characters.
*
* This should be enabled for MySQL, but not for MariaDB and PostgreSQL.
*/
'uuid-limit-length-to-32' => env('LIMIT_UUID_LENGTH_32', false),
/**
* Specify the disk you configured in the filesystems.php file where to store
* the tenant specific files, including media, packages, routes and other
* files for this particular website.
*
* @info If not set, will revert to the default filesystem.
*/
'disk' => null,
/**
* Automatically generate a tenant directory based on the random id of the
* website. Uses the above disk to store files to override system-wide
* files.
*
* @info set to false to disable.
*/
'auto-create-tenant-directory' => true,
/**
* Automatically rename the tenant directory when the random id of the
* website changes. This should not be too common, but in case it happens
* we automatically want to move files accordingly.
*
* @info set to false to disable.
*/
'auto-rename-tenant-directory' => true,
/**
* Automatically deletes the tenant specific directory and all files
* contained within.
*
* @see
* @info set to true to enable.
*/
'auto-delete-tenant-directory' => false,
/**
* Time to cache websites in minutes. Set to false to disable.
*/
'cache' => 10,
],
'hostname' => [
/**
* If you want the multi tenant application to fall back to a default
* hostname/website in case the requested hostname was not found
* in the database, complete in detail the default hostname.
*
* @warn this must be a FQDN, these have no protocol or path!
*/
'default' => env('TENANCY_DEFAULT_HOSTNAME'),
/**
* The package is able to identify the requested hostname by itself,
* disable to get full control (and responsibility) over hostname
* identification. The hostname identification is needed to
* set a specific website as currently active.
*
* @see src/Jobs/HostnameIdentification.php
*/
'auto-identification' => env('TENANCY_AUTO_HOSTNAME_IDENTIFICATION', true),
/**
* In case you want to have the tenancy environment set up early,
* enable this flag. This will run the tenant identification
* inside a middleware. This will eager load tenancy.
*
* A good use case is when you have set "tenant" as the default
* database connection.
*/
'early-identification' => env('TENANCY_EARLY_IDENTIFICATION', false),
/**
* Abort application execution in case no hostname was identified. This will throw a
* 404 not found in case the tenant hostname was not resolved.
*/
'abort-without-identified-hostname' => true,
/**
* Time to cache hostnames in minutes. Set to false to disable.
*/
'cache' => 10,
],
'db' => [
/**
* The default connection to use; this overrules the Laravel database.default
* configuration setting. In Laravel this is normally configured to 'mysql'.
* You can set a environment variable to override the default database
* connection to - for instance - the tenant connection 'tenant'.
*/
'default' => env('TENANCY_DEFAULT_CONNECTION'),
/**
* Used to give names to the system and tenant database connections. By
* default we configure 'system' and 'tenant'. The tenant connection
* is set up automatically by this package.
*
* @see src/Database/Connection.php
* @var system-connection-name The database connection name to use for the global/system database.
* @var tenant-connection-name The database connection name to use for the tenant database.
*/
'system-connection-name' => env('TENANCY_SYSTEM_CONNECTION_NAME', Connection::DEFAULT_SYSTEM_NAME),
'tenant-connection-name' => env('TENANCY_TENANT_CONNECTION_NAME', Connection::DEFAULT_TENANT_NAME),
/**
* The tenant division mode specifies to what database websites will be
* connecting. The default setup is to use a new database per tenant.
* In case you prefer to use the same database with a table prefix,
* set the mode to 'prefix'.
*
* @see src/Database/Connection.php
*/
'tenant-division-mode' => env('TENANCY_DATABASE_DIVISION_MODE', 'database'),
/**
* The database password generator takes care of creating a valid hashed
* string used for tenants to connect to the specific database. Do
* note that this will only work in 'division modes' that set up
* a connection to a separate database.
*/
// 'password-generator' => Hyn\Tenancy\Generators\Database\DefaultPasswordGenerator::class,
'password-generator' => App\Tenancy\Generators\DefaultPasswordGenerator::class,
/**
* The tenant migrations to be run during creation of a tenant. Specify a directory
* to run the migrations from. If specified these migrations will be executed
* whenever a new tenant is created.
*
* @info set to false to disable auto migrating.
*
* @warn this has to be an absolute path, feel free to use helper methods like
* base_path() or database_path() to set this up.
*/
'tenant-migrations-path' => database_path('migrations/tenant'),
/**
* Seeds the newly created tenant database based on this Seeder.
*
* @info requires tenant-migrations-path to be in use.
*
* @warn specify a valid fully qualified class name.
* @example App\Seeders\AdminSeeder::class
*/
'tenant-seed-class' => false,
/**
* Automatically generate a tenant database based on the random id of the
* website.
*
* @info set to false to disable.
*/
'auto-create-tenant-database' => true,
/**
* Automatically rename the tenant database when the random id of the
* website changes. This should not be too common, but in case it happens
* we automatically want to move databases accordingly.
*
* @info set to false to disable.
*/
'auto-rename-tenant-database' => true,
/**
* Automatically deletes the tenant specific database and all data
* contained within.
*
* @info set to true to enable.
*/
'auto-delete-tenant-database' => false,
],
'folders' => [
'config' => [
/**
* Merge configuration files from the config directory
* inside the tenant directory with the global configuration files.
*/
'enabled' => true,
/**
* List of configuration files to ignore, preventing override of crucial
* application configurations.
*/
'blacklist' => ['database', 'tenancy', 'webserver'],
],
'routes' => [
/**
* Allows adding and overriding URL routes inside the tenant directory.
*/
'enabled' => true,
/**
* Prefix all tenant routes.
*/
'prefix' => null,
],
'trans' => [
/**
* Allows reading translation files from a trans directory inside
* the tenant directory.
*/
'enabled' => true,
/**
* Will override the global translations with the tenant translations.
* This is done by overriding the laravel default translator with the new path.
*/
'override-global' => true,
/**
* In case you disabled global override, specify a namespace here to load the
* tenant translation files with.
*/
'namespace' => 'tenant',
],
'vendor' => [
/**
* Allows using a custom vendor (composer driven) folder inside
* the tenant directory.
*/
'enabled' => true,
],
'media' => [
/**
* Mounts the assets directory with (static) files for public use.
*/
'enabled' => true,
]
]
];
webserver.php config
<?php
/*
* This file is part of the hyn/multi-tenant package.
*
* (c) Daniël Klabbers <daniel@klabbers.email>
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*
* @see https://laravel-tenancy.com
* @see https://github.com/hyn/multi-tenant
*/
return [
/**
* Apache2 is one of the most widely adopted webserver packages available.
*
* @see http://httpd.apache.org/docs/
* @see https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-on-ubuntu
*/
'apache2' => [
/**
* Whether the integration with Apache2 is currently active.
*/
'enabled' => false,
/**
* Define the ports of your Apache service.
*/
'ports' => [
/**
* HTTP, non-SSL port.
*
* @default 80
*/
'http' => 80,
/**
* HTTPS, SSL port.
*
* @default 443
*/
'https' => 443
],
/**
* The generator taking care of hooking into the Apache services and files.
*/
'generator' => \Hyn\Tenancy\Generators\Webserver\Vhost\ApacheGenerator::class,
/**
* The view that holds the vhost configuration template.
*/
'view' => 'tenancy.generators::webserver.apache.vhost',
/**
* Specify the disk you configured in the filesystems.php file where to store
* the tenant vhost configuration files.
*
* @info If not set, will revert to the default filesystem.
*/
'disk' => null,
'paths' => [
/**
* Location where vhost configuration files can be found.
*/
'vhost-files' => [
'/etc/apache2/sites-enabled/'
],
/**
* Actions to run to work with the Apache2 service.
*/
'actions' => [
/**
* Action that asserts Apache2 is installed.
*/
'exists' => '/etc/init.d/apache2',
/**
* Action to run to test the apache configuration.
*/
'test-config' => 'apache2ctl -t',
/**
* Action to run to reload the apache service.
*/
'reload' => 'apache2ctl graceful'
]
]
],
/**
* Nginx webserver support.
*
* @see http://nginx.org
*/
'nginx' => [
/**
* Whether the integration with nginx is currently active.
*/
'enabled' => false,
/**
* The php sock to be used.
*/
'php-sock' => 'unix:/var/run/php/php7.0-fpm.sock',
/**
* Define the ports of your nginx service.
*/
'ports' => [
/**
* HTTP, non-SSL port.
*
* @default 80
*/
'http' => 80,
/**
* HTTPS, SSL port.
*
* @default 443
*/
'https' => 443
],
/**
* The generator taking care of hooking into the nginx services and files.
*/
'generator' => \Hyn\Tenancy\Generators\Webserver\Vhost\NginxGenerator::class,
/**
* The view that holds the vhost configuration template.
*/
'view' => 'tenancy.generators::webserver.nginx.vhost',
/**
* Specify the disk you configured in the filesystems.php file where to store
* the tenant vhost configuration files.
*
* @info If not set, will revert to the default filesystem.
*/
'disk' => null,
'paths' => [
/**
* Location where vhost configuration files can be found.
*/
'vhost-files' => [
'/etc/nginx/sites-enabled/'
],
/**
* Actions to run to work with the Nginx service.
*/
'actions' => [
/**
* Action that asserts nginx is installed.
*/
'exists' => '/etc/init.d/nginx',
/**
* Action to run to test the nginx configuration.
*/
'test-config' => '/etc/init.d/nginx configtest',
/**
* Action to run to reload the nginx service.
*/
'reload' => 'systemctl restart nginx'
]
]
]
];
Then that would be why. Please read the article I linked. A subdomain is a full domain, but with a bit of syntactic sugar. It still needs to be explicitly defined somewhere, which you have not done yet.
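(For local development on Windows/XAMPP as above, that usually means something like a 127.0.0.1 tenant4.hubshake.localhost entry in the hosts file for each tenant subdomain, or wildcard resolution via a tool such as dnsmasq; the exact setup depends on your environment.)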
ahhh now I know.. thank you so much man...
No problem
| gharchive/issue | 2019-01-03T02:45:28 | 2025-04-01T06:38:59.811652 | {
"authors": [
"flashery",
"fletch3555"
],
"repo": "hyn/multi-tenant",
"url": "https://github.com/hyn/multi-tenant/issues/698",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1124266996 | Use turbo-hash-map to avoid stringifying keys
Depends on https://github.com/mafintosh/turbo-hash-map/pull/3 to maintain browser compatibility.
Moved to protomux
| gharchive/pull-request | 2022-02-04T14:30:38 | 2025-04-01T06:38:59.818284 | {
"authors": [
"kasperisager",
"mafintosh"
],
"repo": "hypercore-protocol/hypercore-next",
"url": "https://github.com/hypercore-protocol/hypercore-next/pull/83",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
689842841 | [FEATURE] In the retry component's CircuitBreaker functionality, can FallbackRetryPolicy support being invoked directly through the container?
Is your feature request related to a problem? Please describe.
The existing https://github.com/hyperf/retry/blob/master/src/Policy/FallbackRetryPolicy.php#L44 does not invoke the fallback through the container, but via PHP's built-in call_user_func. Since directly using call_user_func covers relatively few cases, I would like to invoke through the container instead.
Describe the solution you'd like
If fallback is an array, invoke it directly through the container:
public function end(RetryContext &$retryContext): bool
{
if (!isset($retryContext['retryExhausted'])) {
return false;
}
if (!is_callable($this->fallback)) {
return false;
}
$throwable = $retryContext['lastThrowable'] ?? null;
$arguments = [$throwable];
$retryContext['lastThrowable'] = $retryContext['lastResult'] = null;
if (isset($retryContext['proceedingJoinPoint'])) {
$arguments = array_merge($retryContext['proceedingJoinPoint']->getArguments(), $arguments);
}
try {
if (!is_array($this->fallback)) {
$retryContext['lastResult'] = call_user_func($this->fallback, ...$arguments);
} else {
$retryContext['lastResult'] = ApplicationContext::getContainer()->get(reset($this->fallback))->{end($this->fallback)}(...$arguments);
}
} catch (Throwable $throwable) {
$retryContext['lastThrowable'] = $throwable;
}
return false;
}
Describe alternatives you've considered
N
Additional context
N
Thanks.
/assign @Reasno
@Reasno what do you think about this issue?
I don't particularly like using the same policy to implement two usages; it feels better to add a separate policy that goes through the container. The fallback's signature then wouldn't have to be an array either. I don't like the implicit array expression; it could be changed to two explicit parameters and invoked explicitly.
closed because of #2465
| gharchive/issue | 2020-09-01T06:04:50 | 2025-04-01T06:38:59.820997 | {
"authors": [
"Reasno",
"huangzhhui",
"yansongda"
],
"repo": "hyperf/hyperf",
"url": "https://github.com/hyperf/hyperf/issues/2402",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
250045708 | kill two todos in deleteContainer operation
Also clean up glog trailing newline a bit in cli/container.go.
@laijs updated.
LGTM
| gharchive/pull-request | 2017-08-14T14:21:16 | 2025-04-01T06:38:59.826621 | {
"authors": [
"bergwolf",
"laijs"
],
"repo": "hyperhq/runv",
"url": "https://github.com/hyperhq/runv/pull/554",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
79297900 | Make OpenSSL an optional dependency
It can be tricky to install OpenSSL, especially on Windows. It would be nice if it were an optional dependency.
Closed via #577
| gharchive/issue | 2015-05-22T05:36:29 | 2025-04-01T06:38:59.827129 | {
"authors": [
"nstoddard",
"seanmonstar"
],
"repo": "hyperium/hyper",
"url": "https://github.com/hyperium/hyper/issues/541",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
708845536 | Allow custom error types in RPC methods
Feature Request
Motivation
One of the things I love about Rust is it's stellar error-handling capabilities and how it helps write correct code. When writing RPC implementations with this library I've noticed excessive and duplicated use of map_err or unwrap and no custom error types used in those functions.
For some functions with lots of fallible operations, or when several functions can return many of the same errors, I've extracted those out into separate functions and defined a custom error type so I can use ? and have a centrally defined error handling strategy for the common ones. This works, but being able to do this straight in the impl would be really nice.
Proposal
As I see it, there are two possibilities here:
Bake some magic into the tonic::async_trait macro (or provide a separate one, maybe a function attribute) that wraps the implementation functions in an outer function that returns tonic::Status error like before, but coerces the actual return type based on some trait.
Allow the user to define an error type for each generated function in the trait and convert it in the generated service stub code. This one I'm not that big a fan of because it currently requires defining the type for every trait function until associated_type_defaults lands...
The trait used for conversion I think could look something like this:
trait ToStatus {
fn code(&self) -> Code;
fn status(&self) -> Status {
let code = self.code();
Status::new(code, code.description())
}
}
impl<T: ToStatus> From<T> for Status {
fn from(custom: T) -> Self {
custom.status()
}
}
// example impl
impl ToStatus for Code {
fn code(&self) -> Code {
*self
}
}
What do you guys think? I'd be more than happy to attempt implementing this myself, but I've never written any real macro code before. :)
Thanks for reading!
I've cobbled together an attribute macro that seems to get the job done:
gardk/tonic_rpc_wrapper
There is some upcoming work to improve internal error handling, and I have been thinking about ways to make it easier to use a richer error model with google.rpc.Status. It may be worth looking into all this, including your proposal, in a unified way. It will take a little while though, so thank you for your patience.
Personally, I share your observation about Status conversions. I have tried a couple of approaches, none of which I completely like. It's a minor thing and I'm not sure what can be done about it or how, but I believe it's worth considering.
You likely want to take advantage of Try to convert errors into Status; I believe that should work. Usually, what I see is people writing custom code to convert their internal errors into Status, and I think this is the correct method.
Thank you for the responses! I look forward to seeing any future work you may have in store for this.
As for the comment about using Try: this will only work directly (without Result::map_err) when the fallible function being called returns a first-party error, if I'm not mistaken?
What I ended up doing was mapping any handled third-party errors into a custom error type, then using ? for conversion to Status, which is almost the same.
However, it looks like you're aware of this little snag and may be looking into it, which was really all I wanted :)
@gardk Yeah, the intermediate error type imo is great too. I think it's always good to funnel errors into an "application"-defined one that can then get translated into, say, Status or some other HTTP-style error. So I think that is the correct solution!
| gharchive/issue | 2020-09-25T10:58:44 | 2025-04-01T06:38:59.831859 | {
"authors": [
"LucioFranco",
"alce",
"gardk"
],
"repo": "hyperium/tonic",
"url": "https://github.com/hyperium/tonic/issues/463",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2595821657 | chore: Start development of version 0.13
Motivation
Starts the development of version 0.13. This is needed to tell cargo-semver-checks that the development includes breaking changes.
Solution
Updates versions to 0.13.0-dev.
I opened this pull request because I think we can start development of the next version with breaking changes now that @LucioFranco merged #1892, which itself includes breaking changes. This change is needed for pull requests such as #2006 and #2013.
| gharchive/pull-request | 2024-10-17T21:47:46 | 2025-04-01T06:38:59.833236 | {
"authors": [
"tottoto"
],
"repo": "hyperium/tonic",
"url": "https://github.com/hyperium/tonic/pull/2014",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
725812542 | Getting the history of transactions
Hello, I know if we run minifab blockquery it will generate the transactions for the last block, but is there a way we can generate the history of all transactions or the history of all blocks and transactions?
Use the minifab blockquery -b blocknumber command to get the transactions of a specific block. Please close this issue.
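If you want a rough full-history dump, one option is to loop over block numbers yourself. A hypothetical sketch, assuming you already know the current chain height from elsewhere and that -b takes a block number as shown above:
import subprocess

height = 42  # placeholder: the current block height of your channel
for n in range(height):
    subprocess.run(["minifab", "blockquery", "-b", str(n)], check=True)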
| gharchive/issue | 2020-10-20T17:49:01 | 2025-04-01T06:38:59.850056 | {
"authors": [
"SaidShah",
"litong01"
],
"repo": "hyperledger-labs/minifabric",
"url": "https://github.com/hyperledger-labs/minifabric/issues/104",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
537362406 | Generic JSON Message : Controller API
Design : https://wiki.hyperledger.org/pages/viewpage.action?pageId=24778806
new REST endpoints to be added for message handler operation like,
Register new message service
Unregister a message service
Send message
| gharchive/issue | 2019-12-13T05:53:16 | 2025-04-01T06:38:59.851246 | {
"authors": [
"sudeshrshetty"
],
"repo": "hyperledger/aries-framework-go",
"url": "https://github.com/hyperledger/aries-framework-go/issues/973",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
974848258 | Remove dependency on indy module from protocol module
Module protocols should be a "codification" of state machines as defined by Aries RFCs. These should not tie directly to libindy or any specific type of credentials or frameworks.
This was solved a while ago by hiding indy dependencies behind traits. Closing
| gharchive/issue | 2021-08-19T16:35:51 | 2025-04-01T06:38:59.852233 | {
"authors": [
"Patrik-Stas"
],
"repo": "hyperledger/aries-vcx",
"url": "https://github.com/hyperledger/aries-vcx/issues/338",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1454612413 | Start privacyCluster non-bootnodes in parallel (acceptance test improvement)
Signed-off-by: Gabriel Trintinalia gabriel.trintinalia@gmail.com
PR description
Start non-bootnodes in parallel for privacy clusters (acceptance test only)
may reduce the duration of some acceptance tests by 15%
Documentation
[x] I thought about documentation and added the doc-change-required label to this PR if
updates are required.
Changelog
[x] I thought about the changelog and included a changelog update if required.
@Gabriel-Trintinalia any reason not to merge this change? is there more to do?
It improved the tests on my machine but I had no data to validate on CircleCi. I think we can merge, it definitely does not hurt.
| gharchive/pull-request | 2022-11-18T07:26:21 | 2025-04-01T06:38:59.857321 | {
"authors": [
"Gabriel-Trintinalia"
],
"repo": "hyperledger/besu",
"url": "https://github.com/hyperledger/besu/pull/4701",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1730075757 | Revert the revert of the tx selection commit
The original PR (#5396) caused a problem when running on Frontier. The world state update was committed too late and the receipt was created with the old world state root hash.
@pinges shall we resolve and merge?
To check for side effects on Frontier block syncing, I started one full-sync node with this PR and one on the latest main commit: https://app.circleci.com/pipelines/github/hyperledger/besu/22489/workflows/5495f46f-5114-4cc0-b4d3-46280dd735ef/jobs/139706/artifacts
Both have got past block 200,000 so have synced frontier blocks successfully.
| gharchive/pull-request | 2023-05-29T05:13:02 | 2025-04-01T06:38:59.858933 | {
"authors": [
"macfarla",
"non-fungible-nelson",
"pinges"
],
"repo": "hyperledger/besu",
"url": "https://github.com/hyperledger/besu/pull/5507",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2225980125 | Update dependencies to address GO-2024-2687
Disable automatic update of dependencies that do not address security vulnerabilities. These updates appear to regularly mess up the Go dependencies (go.mod/go.sum).
grpc.Dial() and grpc.DialContext() are deprecated in current gRPC versions, and replaced by grpc.NewClient().
LGTM, btw @bestbeforetoday , grpc changes some behaviors in the NewClient, any impact?
One place in the code used grpc.DialContext(). This tries to establish the network connection immediately rather than waiting until the connection is actually used to make a request. In practice I don't think this makes any noticeable difference. It just means that an existing network issue will cause an error when a request is made instead of when the connection is created. A network issue can occur at any time, so errors could always happen when a request is made, even if the initial connect was successful.
| gharchive/pull-request | 2024-04-04T16:22:19 | 2025-04-01T06:38:59.862095 | {
"authors": [
"bestbeforetoday"
],
"repo": "hyperledger/fabric-admin-sdk",
"url": "https://github.com/hyperledger/fabric-admin-sdk/pull/192",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
635659157 | Rename marbles_transfer to asset-transfer-secured-agreement
Rename marbles_transfer to a more generic and streamlined asset
Move sample to new directory
Remove TODO for marbles by size query
The images in the readme still use marbles as the asset in the diagrams. In a follow-up we can generate new diagrams.
| gharchive/pull-request | 2020-06-09T18:19:12 | 2025-04-01T06:38:59.869452 | {
"authors": [
"stephyee"
],
"repo": "hyperledger/fabric-samples",
"url": "https://github.com/hyperledger/fabric-samples/pull/202",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
149781884 | Explore tools to allow community to assist in labeling issues
Currently, labeling of issues is limited to only maintainers of this project. Ideally, we would want all members to assist in labeling issues. This is not a permission that exists in GitHub, so we want to investigate if there are any tools/bots that may help. For example, this may be a bot that reads a slack channel or GitHub comments and applies the appropriate label to a issue when asked.
http://partyline.rocks/blog/github/
http://blog.james-carr.org/2014/08/21/open-a-github-issue-from-slack/
Both of the above can serve as references for us, I believe.
Following the CI meeting we had yesterday with the LF RI engineers: if we move to Gerrit we will need to move to Jira or Bugzilla to manage issues. I think this will be an improvement and obviate the need for a bot to fix GH issues.
I prefer to use Gerrit, which makes it easy to review and merge code.
What are the key benefits of Gerrit? I haven't used it. Does it only provide code reviews? Git is quite sufficient for reviewing, commenting, and updating pull requests.
The key capability that we need is for a larger community to be able to manage issues without being a maintainer. If we had to move to Jira or Bugzilla for this then I don't see the value of Gerrit but another tool.
Gerrit can enforce reviews, integrate checks from CI, and verify that the legal bits are properly handled. It is far more effective than GitHub in that regard, and on a project where security and quality are of paramount importance, such as this one, we can be far more certain of what gets merged. Also, because Gerrit is tied into LF identity, we can be a bit more certain of who is making the contributions (because we cannot enforce 2FA on GitHub accounts).
@binhn: we can reference this, I think: https://review.openstack.org/#/c/309610/. As @christo4ferris mentioned, Gerrit can be integrated with CI easily and we can involve more people in reviewing code. Everyone can give +/-1 on your commit and leave comments, core team members can give +/-2 to approve it, and the CI tool will verify it before merge.
@srderson closing this as we are transitioning to Jira RSN (promise;-)
| gharchive/issue | 2016-04-20T14:20:41 | 2025-04-01T06:38:59.872471 | {
"authors": [
"binhn",
"christo4ferris",
"genggjh",
"srderson"
],
"repo": "hyperledger/fabric",
"url": "https://github.com/hyperledger/fabric/issues/1173",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
635409020 | Update private data tutorial for contract api
Should be merged along with the smart contract update: https://github.com/hyperledger/fabric-samples/pull/201
Signed-off-by: NIKHIL E GUPTA negupta@us.ibm.com
Type of change
Documentation update
@Mergifyio backport release-2.1
@Mergifyio backport release-2.0
| gharchive/pull-request | 2020-06-09T12:59:26 | 2025-04-01T06:38:59.874085 | {
"authors": [
"nikhil550"
],
"repo": "hyperledger/fabric",
"url": "https://github.com/hyperledger/fabric/pull/1382",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
533086484 | [FAB-17199] Add new test network tutorial
Signed-off-by: NIKHIL E GUPTA negupta@us.ibm.com
Add new test network tutorial as part of getting started section.
Type of change
Documentation update
Description
Users would be guided toward the new test network directly after they install the prerequisites and fabric docker images and samples. The tutorial is the first step toward replacing the build a network tutorial in the long term.
Related issues
The PR to add the test network to the Fabric samples:
https://github.com/hyperledger/fabric-samples/pull/80. (Merged)
The tutorial should be merged after the sample
I don't see documentation for adding a new organization using addOrg3.sh script. What's the plan?
I don't see documentation for adding a new organization using addOrg3.sh script. What's the plan?
We will rebase the adding an org to the network tutorial to the new test network and doc the addOrg3 script there. We will continue to use the byfn tutorial and eyfn until then.
Long term the future of EYFN should be to focus on the process for creating the structure and crypto material for an org, and how to add the org to a channel through a channel update (what the json file and jq command would look like, in other words). We can leverage the updated Config Update doc -- https://hyperledger-fabric.readthedocs.io/en/master/config_update.html -- for the latter, rather than that being the focus of an Add an Org tutorial.
| gharchive/pull-request | 2019-12-05T02:36:36 | 2025-04-01T06:38:59.877170 | {
"authors": [
"joealewine",
"nikhil550",
"rameshthoomu"
],
"repo": "hyperledger/fabric",
"url": "https://github.com/hyperledger/fabric/pull/369",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1120897377 | Support multiple token connectors in a stack, with ERC20/ERC721
Signed-off-by: Peter Broadhurst peter.broadhurst@kaleido.io
Ok - behavior should be more intuitive now:
# Ethereum default: you'll get erc1155, erc20_erc721
ff init test
# Fabric default: you'll get nothing
ff init -b fabric test
# Ethereum with explicitly no tokens
ff init -t none test
# Ethereum with just ERC-1155
ff init -t erc1155 test
# Note that none does not cancel others out, so this would get erc20_erc721 and erc1155
ff init -t erc20_erc721 -t none -t erc1155 test
Holding off on the merge of this until we have a complementary manifest merge into FireFly Core
| gharchive/pull-request | 2022-02-01T16:10:13 | 2025-04-01T06:38:59.878369 | {
"authors": [
"peterbroadhurst"
],
"repo": "hyperledger/firefly-cli",
"url": "https://github.com/hyperledger/firefly-cli/pull/137",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
302711039 | Permission Implementation Improvement
Permissions Implementation Improvement
This document provides an overview of a proposal to improve implementation of permissions in shared model.
Note: This document does not discuss the current permission model; its scope is purely technical and does not affect the behavior of the system.
Permission Model
Each role has a set of permissions associated with it. Additionally, each account can have so-called grantable permissions, which are permissions given to it by some other account. Grantable permissions allow an account to perform some actions on another account.
In order to execute a command, an account needs to have a certain permission. Each command has a unique permission for its execution. Also, some commands require additional permissions; e.g. the transferAsset command requires the destination account to have the can_receive_asset permission.
Each query has 3 distinct permissions associated with it. They allow executing the query on a specific account, within a domain, or globally.
For example, account A can have permission to get the balance of account A only, so it will not have permission to get the balance of account B, even if they belong to the same domain. However, if account A has the domain-level permission to get balances, it would be possible to perform the query on account B.
Current Permission Implementation
Currently, all permissions in Iroha are implemented as strings. This has exactly one benefit - ease of use for developers. Strings are easy to debug and code working with strings is straightforward.
However, strings have several disadvantages. First of all, they are easy to mistype, for both clients and developers. They need to be allocated and take up a lot of space. They are not type-safe and are thus error-prone.
This proposal tries to solve given issues while maintaining ease of use.
Enum Based implementation
An enum is a good choice for permissions, since they are just a set of constants. It provides type safety, efficiency, and clarity.
I propose we declare a Permission enum (either one, or several: for command, query, and grantable permissions); each command and query will have an associated permission as a static const variable of its class.
We would need to define a mapping for the protobuf and postgres enums. This could be done by assigning the same integer values to corresponding permissions in all three enums and then simply performing a static_cast. This way we can avoid defining giant switch-cases / maps.
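A minimal sketch of that mapping, with hypothetical enumerator values; it assumes the shared-model enum and iroha::protocol::RolePermission assign identical integers to corresponding permissions:

```cpp
// Both enums must keep the same underlying integer values for this to be safe.
enum class Permission : int { can_receive = 0, can_transfer = 1 };

inline iroha::protocol::RolePermission toProto(Permission p) {
  return static_cast<iroha::protocol::RolePermission>(static_cast<int>(p));
}

inline Permission fromProto(iroha::protocol::RolePermission p) {
  return static_cast<Permission>(static_cast<int>(p));
}
```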
One big issue with enums in C++ is that there is no idiomatic way to get a string representation. There are several possible solutions to this issue:
Brute force - define a mapping from each permission to a string
Rely on an implementation, such as protobuf, to provide the mapping. Define an abstract class like PermissionPrinter, which will have a virtual method toString(permission), and implement it for each backend of the shared model. Since we can map protobuf enums to our enums, we can use protobuf reflection as one possible implementation.
Rely on things such as Boost Preprocessor or other code-generation solutions.
This issue is open for discussion and I would like to hear the opinions of other developers.
It provides type-safety
No, unless you use a scoped enum.
I like defining some kind of singleton or static array: `const char* map[] = {"permA", "permB"};`
And cast to string like
enum class Perm: uint32_t {A, B};
Perm a = Perm::A;
map[static_cast<uint32_t>(a)]; // scoped enums have no implicit conversion to int, so an explicit cast is needed
No, unless you use a scoped enum.
Of course we would use a scoped enum. I did not mention that since it seemed too obvious.
So far I have 3 ideas for solving this issue (feel free to propose new ones). The first two use additional strings as-is; the last one depends on protobuf (thus it is less transport-agnostic, but more stable).
All of the approaches describe how to define the Permission enum and the related toString & fromString functions. IMO that should be enough for the replacement (am I missing something?).
P.S. The code may cause compile errors; consider it pseudo-code.
Macros. Yes, they are considered evil-ish, but this solution is the most compact and has no code duplication at all.
New permission -> 1 LOC
#include <boost/preprocessor.hpp>
#include <map>
#include <string>

#define __PERM_SWITCH_TOSTR(r, _, elem) case Permission::elem: return BOOST_PP_STRINGIZE(elem);
#define __PERM_SWITCH_FROMSTR(r, _, elem) {BOOST_PP_STRINGIZE(elem), Permission::elem},
#define PERM_DEF(perm_list) \
enum class Permission { \
BOOST_PP_SEQ_ENUM(perm_list) \
, NONE \
}; \
inline std::string toString(Permission p) { \
switch (p) { \
BOOST_PP_SEQ_FOR_EACH(__PERM_SWITCH_TOSTR, _, perm_list) \
default: break; \
} \
return "undefined"; \
} \
inline Permission fromString(const std::string &s) { \
static const std::map<std::string, Permission> m { \
BOOST_PP_SEQ_FOR_EACH(__PERM_SWITCH_FROMSTR, _, perm_list) \
}; \
auto it = m.find(s); \
return it == m.end() ? Permission::NONE : it->second; \
}
PERM_DEF((can_receive)
(can_transfer)
(can_create_domain) /* ... */)
Constexpr maps: a lot of code duplication, but compile-time (thus probably less error-prone); one still needs to write the string -> enum function (I don't know how to do it via templates). It is roughly the same as the plain enum + pair-of-methods case; I don't show that case, because constexpr maps are a bit better (for toString).
New permission -> 4 LOC (enum + PermMap::s (x2) + fromString's map)
#include <map>
#include <string>

enum class Permission {
can_receive,
can_transfer,
can_create_domain,
};
template<Permission>
struct PermMap {
static const std::string s;
};
template<>
const std::string PermMap<Permission::can_receive>::s = "can_receive";
// p must be a compile-time constant, so toString is itself a template
template<Permission p>
inline const std::string &toString() {
return PermMap<p>::s;
}
inline Permission fromString(const std::string &s) {
static const std::map<std::string, Permission> m {
{"can_receive", Permission::can_receive},
{"can_transfer", Permission::can_transfer},
{"can_create_domain", Permission::can_create_domain},
// ...
};
return m.at(s);
}
Depend on protobuf: the most elegant way. The only problem is that it depends heavily on the transport, so IMO it is a bad thing to use. Also, protobuf uses enum (NOT enum class), so weak typing might be a huge problem in the future.
New permission -> 0 LOC
Though protobuf updates may need some fixes
using Permission = iroha::protocol::RolePermission;
inline const std::string toString(Permission p) {
return iroha::protocol::RolePermission_Name(p);
}
inline Permission fromString(const std::string &s) {
Permission p;
iroha::protocol::RolePermission_Parse(s, &p);
return p;
}
In order to execute a command, an account needs to have a certain permission.
Well, it has to have a role
It would be simpler to have a switch-case for toString and not implement fromString at all, since it is not needed much.
I thought that you need the fromString operation for deserialization, e.g. in the genesis block a role can be created.
We use protobuf for deserialization.
Good
| gharchive/issue | 2018-03-06T13:58:14 | 2025-04-01T06:38:59.886538 | {
"authors": [
"Warchant",
"l4l",
"neewy",
"nickaleks"
],
"repo": "hyperledger/iroha",
"url": "https://github.com/hyperledger/iroha/issues/1045",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1434513423 | Migration tool
For smooth upgrades, one needs to be able to migrate the versioned transactions from the old standard to the new, preferably ensuring that the operations are handled automatically when it is safe to do so, but with human intervention requested when that is necessary.
related #2920
As you know, the Client CLI accepts and executes JSON instructions. I guess we should also add versions to them to avoid problems that can appear for our users.
| gharchive/issue | 2022-11-03T11:35:12 | 2025-04-01T06:38:59.887852 | {
"authors": [
"appetrosyan",
"mversic",
"pesterev"
],
"repo": "hyperledger/iroha",
"url": "https://github.com/hyperledger/iroha/issues/2932",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
380474710 | S8 [User Story] Spike -- Research LDAP login token
[User Story]
Spike -- Research LDAP login token
Epic Title
#271
User Story Title
Spike -- Research LDAP login token
T-shirt size: S(mall), M(edium), L(arge)
M
Story Points - Fibonacci (1,2,3,5,8,13)
3
User Story
[Value Statement] "As a user of a certain type (specify), I want to do something (action) so I can provide business value"
Description
Find out what is returned after logging in to AD as a user. Could be a JWT token or some other token.
Business Need (the why):
from epic, if needed
Details
All the details needed to accomplish the story. Tasks - can be linked
Additional User Story
linked
Acceptance Criteria/Tests
The list of tests or criteria to be used to validate the story has been completed. May still need UAT or SIT / integration testing with other stories.
Failure
if any, fold in negative tests
Definition of Done
e.g.Passes all regression tests, Passes testing per acceptance criteria, Approved by UI team, Able to show feature in demo
UX/UI Details and/or mockups
if UI change, a mockup or description of what to change, attach image/file
Applications or Systems impacted
upstream/downstream applications/systems
Dependencies
non story dependencies
Assumptions
if any
Performance Considerations
Will the work performed as part of this story impact system performance. If there is potential for a performance hit, where would you expect it to manifest in the system.
Security Considerations
E.g., does this story involve working with Personal Identifying Information (PII)? Note any such security sensitive aspects.
QA Considerations
what might be hard to test or test setup criteria
Architectural/System or Component Impacts
if we're adding something new or significantly changing architecture flag for architect review
Reverse Engineering Required
to document areas not known well by dev team
Questions/Clarifications
Create new issue (add label: question) in this repo and link to this issue
Just like all stories, this one needs an estimate. The template at the top indicates it's a 3. Can we go by this?
@adamgering does this need to be in S8? Seems like it's related to the stories around AD auth, but if we are able to demo AD functionality now, I'm assuming we are logged in without understanding what is returned. Why do we need this? Thx.
| gharchive/issue | 2018-11-13T23:48:13 | 2025-04-01T06:38:59.892652 | {
"authors": [
"PatrickLammers",
"mtn217"
],
"repo": "hyperledger/sawtooth-next-directory",
"url": "https://github.com/hyperledger/sawtooth-next-directory/issues/571",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1166060781 | Enable smart contract support in hypersign network
https://docs.cosmwasm.com/docs/1.0/
Reference: https://github.com/osmosis-labs/osmosis/tree/main/wasmbinding
| gharchive/issue | 2022-03-11T06:42:15 | 2025-04-01T06:38:59.895531 | {
"authors": [
"Vishwas1",
"arnabghose997"
],
"repo": "hypersign-protocol/hid-node",
"url": "https://github.com/hypersign-protocol/hid-node/issues/99",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1083004031 | remove node net module
After we get the new utp module done, we should remove the net module to reduce size/complexity. It's only used in a few places.
%grep "require('net" -R .
./node_modules/bind-easy/test.js:const net = require('net')
./node_modules/bind-easy/index.js:const net = require('net')
./node_modules/@hyperswarm/dht/test/connections.js:const net = require('net')
./node_modules/@hyperswarm/dht/lib/socket-pairer.js:const net = require('net')
./node_modules/@hyperswarm/secret-stream/test.js:const net = require('net')
in master
| gharchive/issue | 2021-12-17T08:24:14 | 2025-04-01T06:38:59.897768 | {
"authors": [
"heapwolf",
"mafintosh"
],
"repo": "hyperswarm/dht",
"url": "https://github.com/hyperswarm/dht/issues/66",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
867927303 | feat: Capability to fetch logs along with span
hypertrace/hypertrace#224
In this change, following is done:
Parse the log event request attributes from the span request
Make the span query as usual, subsequently build the log query using requested log attributes and returned spanIds as filter (ex: select attr1, attr2 from logEventView where spanId in [id1, id2, ....]), use the timefilter from the spanQuery as well
maintain a mapping from spanId(referred as id) -> list of logEvents
Major changes in SpanLogEventFetcher, LogEventAttributeRequestBuilder
Test
{
spans(
limit: 100
between: {startTime: "2021-04-26T17:45:31.849Z", endTime: "2021-04-26T18:45:31.849Z"}
offset: 0
orderBy: [{key: "startTime", direction: DESC}]
) {
results {
id
logEvents {
results {
spanId: attribute(key: "spanId")
attribute: attribute(key: "attributes")
}
}
protocolName: attribute(key: "protocolName")
serviceName: attribute(key: "serviceName")
displaySpanName: attribute(key: "displaySpanName")
statusCode: attribute(key: "statusCode")
duration: attribute(key: "duration")
startTime: attribute(key: "startTime")
traceId: attribute(key: "traceId")
__typename
}
total
__typename
}
}
{
"data": {
"spans": {
"results": [
{
"id": "2b1b03a627d8315b",
"logEvents": [],
"protocolName": "HTTP",
"serviceName": "route",
"displaySpanName": "HTTP GET /route",
"statusCode": "200",
"duration": 63,
"startTime": 1619462676157,
"traceId": "00000000000000006485904421d38a40",
"__typename": "Span"
},
{
"id": "4b374aba5a81f266",
"logEvents": [
"results":[
{
"spanId": "4b374aba5a81f266",
"attribute": {
"event": "GetConn"
}
},
{
"spanId": "4b374aba5a81f266",
"attribute": {
"event": "ConnectStart",
"addr": "0.0.0.0:8081",
"network": "tcp"
}
},
{
"spanId": "4b374aba5a81f266",
"attribute": {
"event": "ConnectDone",
"addr": "0.0.0.0:8081",
"network": "tcp"
}
},
{
"spanId": "4b374aba5a81f266",
"attribute": {
"event": "GotConn"
}
}
]
},
"protocolName": "HTTP",
"serviceName": "frontend",
"displaySpanName": "HTTP GET /customer",
"statusCode": "200",
"duration": 292,
"startTime": 1619462675412,
"traceId": "00000000000000006485904421d38a40",
"__typename": "Span"
}
.....
],
"total": 54,
"__typename": "SpanResultSet"
}
}
}
@aaron-steinfeld for the log and span join to work, the span query must request id attribute and log query must request for spanId, in case when either is missing should it be handled by internally requesting the id/spanId attribute, or can we consider it sort of an informal contract with the consumer of the api?
@aaron-steinfeld for the log and span join to work, the span query must request id attribute and log query must request for spanId, in case when either is missing should it be handled by internally requesting the id/spanId attribute, or can we consider it sort of an informal contract with the consumer of the api?
Thanks for bringing that up! I meant to comment on that and it completely slipped my mind. IMO it should be handled internally, but I'm OK deferring that into a separate PR if you want since it strictly relaxes the API contract.
| gharchive/pull-request | 2021-04-26T17:02:54 | 2025-04-01T06:38:59.900912 | {
"authors": [
"aaron-steinfeld",
"rish691"
],
"repo": "hypertrace/hypertrace-core-graphql",
"url": "https://github.com/hypertrace/hypertrace-core-graphql/pull/59",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1315986797 | [Question] Do we need to specify the available node version we are compatible with?
Description
I got this error after updating to the latest commit. The newly added package @plasmohq/edge-addons-api requires a Node version >=16.14, but mine is 16.13.1.
This error can be easily fixed by updating node. However, this error could be a problem for some of our contributors.
Do we need to specify the available node version we are compatible with?
Hi, @wxharry. Agreed; we'd better add an "engines" field to package.json to limit the Node version, and add a "Requirement" section to the README, like:
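(A minimal sketch of such a field; the version bound follows the error above, and the exact constraint is up to the maintainers:)

```json
{
  "engines": {
    "node": ">=16.14"
  }
}
```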
By the way, it is a convention that odd versions are less stable than even versions, so you are encouraged to update your Node from 16.13.1 to 16.14 or another even-numbered version.
Thank you for your information. I'm using 16.16 now.
closed via #437.
| gharchive/issue | 2022-07-24T19:16:16 | 2025-04-01T06:38:59.903040 | {
"authors": [
"tyn1998",
"wxharry"
],
"repo": "hypertrons/hypertrons-crx",
"url": "https://github.com/hypertrons/hypertrons-crx/issues/423",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1438053484 | [Question] API request
Description
Can you provide all GitHub APIs used by this plug-in?
Hi @xueqidove, here is the main API we are using at the moment.
There are several other APIs, actually, but they may have been deprecated recently, so I will not provide them here.
API has changed, please refer to https://github.com/hypertrons/hypertrons-crx/issues/515
| gharchive/issue | 2022-11-07T09:39:03 | 2025-04-01T06:38:59.904507 | {
"authors": [
"tyn1998",
"xueqidove"
],
"repo": "hypertrons/hypertrons-crx",
"url": "https://github.com/hypertrons/hypertrons-crx/issues/509",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
993334069 | QUIC not TCP
Discussed in https://github.com/hyprspace/hyprspace/discussions/31
Originally posted by notassigned September 10, 2021
Since this is a VPN, we shouldn't be using TCP: we may run into TCP meltdown when running a TCP connection over the VPN connection.
I suggest switching the listen addrs to QUIC.
https://openvpn.net/faq/what-is-tcp-meltdown/
I have a branch for this. Just give me write permission and I'll push it.
Does your branch completely replace TCP with QUIC or provide both as options?
There are situations where TCP is the only way the VPN can be formed, such as in very locked down public networks within offices or campuses that for some reason block all UDP traffic including QUIC, so keeping TCP as an option is needed when QUIC won't work.
I just removed the TCP listen addrs and replaced them with QUIC multiaddrs. TCP is still a transport that can be used but the node doesn't advertise listen addrs for it. The reason I removed them is because I'm not entirely sure how libp2p decides which transports to use when dialing a peer. I will test having both transports and see if I can force the order.
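For illustration, a minimal sketch of a host that advertises only QUIC listen addrs, assuming a recent go-libp2p (the exact multiaddr and option names vary by version):

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
)

func main() {
	// Only a QUIC listen address is advertised; the default transport set
	// still includes TCP, so dialing TCP-only peers keeps working.
	h, err := libp2p.New(
		libp2p.ListenAddrStrings("/ip4/0.0.0.0/udp/8001/quic"),
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("listening on:", h.Addrs())
}
```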
Another thing to keep in mind is that hole punching was just merged into go-libp2p (not released yet) and the success rate for QUIC is far higher than TCP.
Here is my branch on a fork: https://github.com/notassigned/hyprspace/tree/add-quic-transport
Ah I see, some form of hyprspace init --force-tcp option would probably need to be added; there must be a way to define the order of transports. I'll look out for it if I can find it.
Hi all! Just saw #37 and maybe we should consider adding this as a field in the hyprspace configuration for a given interface? Possibly something like,
accepted_transports:
- tcp
- quic
- etc...
By default maybe we accept both QUIC and TCP but allow users to pick one specifically if they can't use QUIC/TCP for a network.
The dial method will try to dial multiaddrs for the peer until it gets a connection, so if QUIC can't be used and TCP can, then the connection to the peer will be established over TCP. But realistically QUIC will work on virtually all networks and provides numerous benefits over TCP like native roaming, faster connection establishment, and native stream muxing.
I think the only potential downside is that if you were on a network where QUIC couldn't be used, then you would have to wait an extra few seconds for the QUIC dial to fail.
| gharchive/issue | 2021-09-10T15:17:19 | 2025-04-01T06:38:59.919670 | {
"authors": [
"LynHyper",
"alecbcs",
"notassigned"
],
"repo": "hyprspace/hyprspace",
"url": "https://github.com/hyprspace/hyprspace/issues/32",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1582791858 | Window rule to remember last size/position
I would like a window rule that just remembers the last size/position of the window and restores it when the window is opened.
how would you even imagine that being tracked? Just by class?
how would you even imagine that being tracked? Just by class?
With the normal rules: class and title, or just class, or just title
how would you even imagine that being tracked? Just by class?
Like all other rules: by class and/or title
Could the feature request in #1426 also solve this one here? Because that one seems a bit more predictable (you know exactly when to save something and therefore where to save something.)
Some windows don't remember their size, and I don't know how big they will be until I toggle floating. Sometimes they keep their tiled size in the floating state. That's quite annoying.
I have a similar problem. I use Thunar as my file manager and set it as a floating window. Every time I open it, it shows up at the minimum size, and I have to use meta + right mouse button to resize it to a normal size. It would be better if a window rule that remembers the last size were available.
Not sure if this is of any use to anyone - but I was trying to get something similar to work using special workspaces and finally got the result I was after.
windowrulev2 = workspace special:name, class:^(APP)$
windowrulev2 = float, class:^(APP)$
windowrulev2 = size 1000 900, class:^(APP)$
windowrulev2 = move 500 200, class:^(APP)$
I then use the following to exec and toggle:
bind = SUPER, T, togglespecialworkspace, name
exec-once = [workspace special:name] APP
| gharchive/issue | 2023-02-13T17:45:35 | 2025-04-01T06:38:59.922359 | {
"authors": [
"875d",
"Aleksanaa",
"Bryan2333",
"aksdb",
"aux-op",
"raffaem",
"vaxerski"
],
"repo": "hyprwm/Hyprland",
"url": "https://github.com/hyprwm/Hyprland/issues/1543",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2010846001 | Random flickering/artifacting when opengl introspection is not needed
Hyprland Version
Hyprland, built from branch main at commit 99ca26d4eb84e0071264713902e5b287fcab392e (hooksystem: fix missed log include).
Tag: v0.32.3-84-g99ca26d4
flags: (if any)
debug
Bug or Regression?
Regression
Description
Similar to #3952. Weird flickering artifacts in lower right quadrant of screen. Bisected to commit aedcade68dd0615fd919a7249633a554d0accd81
System:
Host: gentoo Kernel: 6.6.2-gentoo arch: x86_64 bits: 64 Desktop: Hyprland
Distro: Gentoo Base System release 2.14
Machine:
Type: Desktop System: ASUS product: N/A v: N/A serial: <superuser required>
Mobo: ASUSTeK model: TUF GAMING Z790-PLUS WIFI v: Rev 1.xx
serial: <superuser required> UEFI: American Megatrends v: 1220
date: 07/28/2023
CPU:
Info: 24-core (8-mt/16-st) model: 13th Gen Intel Core i9-13900KS bits: 64
type: MST AMCP cache: L2: 32 MiB
Speed (MHz): avg: 837 min/max: 800/5600:6000:4300 cores: 1: 1100 2: 800
3: 800 4: 800 5: 800 6: 800 7: 1100 8: 800 9: 1100 10: 1100 11: 800 12: 800
13: 800 14: 800 15: 800 16: 800 17: 800 18: 800 19: 800 20: 801 21: 800
22: 800 23: 800 24: 800 25: 800 26: 800 27: 800 28: 800 29: 800 30: 800
31: 800 32: 800
Graphics:
Device-1: NVIDIA AD102 [GeForce RTX 4090] driver: nvidia v: 545.29.06
Display: wayland server: Xwayland v: 21.1.99 compositor: Hyprland driver:
X: loaded: nvidia gpu: nvidia,nvidia-nvswitch resolution: 1: 2560x1440~165Hz
2: 3840x2160~160Hz
API: EGL v: 1.5 drivers: nvidia,swrast
platforms: gbm,wayland,x11,surfaceless,device
API: OpenGL v: 4.6.0 vendor: nvidia v: 545.29.06 renderer: NVIDIA GeForce
RTX 4090/PCIe/SSE2
API: Vulkan v: 1.3.268 drivers: nvidia surfaces: xcb,xlib,wayland
debug:disable_logs = false
exec-once = waybar & swaybg -o \* -i ~/Pictures/Wallpapers/forest_sunset_pink.jpg -m fill & swayidle -w
exec-once = /usr/libexec/polkit-gnome-authentication-agent-1
exec-once = udiskie -t
exec-once = foot spotify_player & foot nvtop & foot btop
exec-once = wl-paste --type text --watch cliphist store
exec-once = wl-paste --type image --watch cliphist store
monitor=DP-3,3840x2160@159.975,2560x0,1.5
monitor=DP-2,2560x1440@164.958,0x0,1
# NVIDIA
env = LIBVA_DRIVER_NAME,nvidia
env = XDG_SESSION_TYPE,wayland
env = GBM_BACKEND,nvidia-drm
env = __GLX_VENDOR_LIBRARY_NAME,nvidia
env = WLR_NO_HARDWARE_CURSORS,1
env = XDG_CURRENT_DESKTOP,Hyprland
env = XDG_SESSION_DEKSTOP,Hyprland
# try again when latest gamescope works with 535/545
#env = XWAYLAND_NO_GLAMOR,1 # with this you'll need to use gamescope for gaming
xwayland {
force_zero_scaling = true
}
env = GDK_SCALE,1.5
env = QT_SCALE_FACTOR,1.5
env = ELM_SCALE,1.5
env = XCURSOR_SIZE,24
env = XCURSOR_THEME,Sunity-cursors
env = TERM,foot
env = TERMINAL,foot
input {
follow_mouse = 1
kb_options=ctrl:nocaps
}
general {
gaps_in = 8
gaps_out = 15
border_size = 2
col.active_border = rgb(cdd6f4)
col.inactive_border = rgb(11111b)
layout = master
}
decoration {
rounding = 5
}
animations {
enabled = yes
#easeOutExpo
bezier = myBezier, 0.22, 1, 0.36, 1
animation = windows, 1, 7, myBezier, slide
animation = windowsOut, 1, 7, myBezier, slide
animation = border, 1, 7, myBezier
animation = fade, 1, 1, myBezier
animation = fadeDim, 0
animation = workspaces, 1, 5, default
animation = specialWorkspace, 1, 5, default, slidefadevert 50%
}
master {
}
misc {
force_default_wallpaper = 0
vrr = 1
allow_session_lock_restore = true
}
workspace=1,monitor:DP-2,default:true,persistent:true
workspace=2,monitor:DP-3,default:true,persistent:true
workspace=3,monitor:DP-3,default:true,persistent:true
workspace=special, gapsin:70, gapsout:120, on-created-empty:foot
windowrule = float, Rofi
windowrule = noborder, Rofi
windowrule = workspace 3 silent, ^(steam)$
windowrule = workspace 3 silent, ^(discord)$
windowrule = fullscreen,title:^(Steam Big Picture Mode)$
windowrule = tile,title:^(Old School RuneScape)$
$mainMod = SUPER
# keybind for Master layout
bind = SUPER_SHIFT, SPACE, layoutmsg, orientationnext
bind = $mainMod, comma, layoutmsg, addmaster
bind = $mainMod, period, layoutmsg, removemaster
bind = $mainMod, SPACE, layoutmsg, swapwithmaster
bind = $mainMod, RETURN, exec, foot
bind = $mainMod SHIFT, C, killactive
bind = $mainMod SHIFT, Q, exit,
bind = $mainMod, R, exec, sh $HOME/.config/rofi/bin/launcher
bind = $mainMod SHIFT, R, exec, sh $HOME/.config/rofi/bin/runner
bind = $mainMod, P, exec, sh $HOME/.config/rofi/bin/powermenu
bind = $mainMod, V, togglefloating,
bind = $mainMod, F, fullscreen
bind = $mainMod, W, exec, pkill -SIGUSR1 '^waybar$'
bind = $mainMod, C, exec, cliphist list | sh $HOME/.config/rofi/bin/clipboard | cliphist decode | wl-copy
# volume control
bind = $mainMod, MINUS, exec, amixer sset Master 5%-;
bind = $mainMod, EQUAL, exec, amixer sset Master 5%+;
# screenshot
bind = $mainMod, A, exec, shotname=$(date '+%Y-%m-%d-%H:%M:%S').png && grim ~/Pictures/Screenshots/$shotname && dunstify -u low --replace=699 "Screenshot ${shotname} Saved."
bind = $mainMod, S, exec, shotname=$(date '+%Y-%m-%d-%H:%M:%S').png && grim -o "$(hyprctl activeworkspace | grep -m1 "DP-" | cut -d' ' -f7 | sed s/://g)" ~/Pictures/Screenshots/$shotname && dunstify -u low --replace=699 "Screenshot ${shotname} Saved."
bind = $mainMod SHIFT, S, exec, shotname=$(date '+%Y-%m-%d-%H:%M:%S').png && grim -g "$(slurp)" ~/Pictures/Screenshots/$shotname && dunstify -u low --replace=699 "Screenshot ${shotname} Saved."
bind = $mainMod, left, movefocus, l
bind = $mainMod, right, movefocus, r
bind = $mainMod, up, movefocus, u
bind = $mainMod, down, movefocus, d
#vim bindings for move focus
bind = $mainMod, H, movefocus, l
bind = $mainMod, L, movefocus, r
bind = $mainMod, K, movefocus, u
bind = $mainMod, J, movefocus, d
bind = $mainMod, 1, workspace, 1
bind = $mainMod, 2, workspace, 2
bind = $mainMod, 3, workspace, 3
bind = $mainMod, 4, workspace, 4
bind = $mainMod, 5, workspace, 5
bind = $mainMod, 6, workspace, 6
bind = $mainMod, 7, workspace, 7
bind = $mainMod, 8, workspace, 8
bind = $mainMod, 9, workspace, 9
bind = $mainMod, Z, togglespecialworkspace
bind = $mainMod SHIFT, 1, movetoworkspace, 1
bind = $mainMod SHIFT, 2, movetoworkspace, 2
bind = $mainMod SHIFT, 3, movetoworkspace, 3
bind = $mainMod SHIFT, 4, movetoworkspace, 4
bind = $mainMod SHIFT, 5, movetoworkspace, 5
bind = $mainMod SHIFT, 6, movetoworkspace, 6
bind = $mainMod SHIFT, 7, movetoworkspace, 7
bind = $mainMod SHIFT, 8, movetoworkspace, 8
bind = $mainMod SHIFT, 9, movetoworkspace, 9
bind = $mainMod SHIFT, Z, movetoworkspace, special
bindm = $mainMod, mouse:272, movewindow
bindm = $mainMod, mouse:273, resizewindow
bind = $mainMod SHIFT, left, movewindow, l
bind = $mainMod SHIFT, right, movewindow, r
bind = $mainMod SHIFT, up, movewindow, u
bind = $mainMod SHIFT, down, movewindow, d
#vim bindings for move window
bind = $mainMod SHIFT, H, movewindow, l
bind = $mainMod SHIFT, L, movewindow, r
bind = $mainMod SHIFT, K, movewindow, u
bind = $mainMod SHIFT, J, movewindow, d
exec-once=dbus-update-activation-environment --systemd WAYLAND_DISPLAY XDG_CURRENT_DESKTOP
How to reproduce
Most easily reproduced with 2 windows in a master/slave layout; in my case, Firefox as master and foot as slave. Do something in the master window like playing a video, changing tabs, etc.
Crash reports, logs, images, videos
hyprland.log
https://github.com/hyprwm/Hyprland/assets/7462622/6da60d73-5fde-4b0b-b21c-48da5dbcf9d4
the latest commit should fix the problem
@imxyy1soope1 it's still persistent--that's why I opened this issue since the original issue was closed out.
if you float a window with blur, does the issue go away?
@vaxerski yep that fixes it if i do that
What does floating a window with blur mean exactly? I have the same issue and it is driving me insane!
@BendTheKn33 floating it instead of tiling the window and blur meaning the blur window effect
Thanks, that's what I thought but when I for example toggle a terminal window, which has blur, to float the issue remains. Yes there is less flicker but still it's clearly there and annoying.
Guess I'll wait for the next update and use another WM for the time being.
@BendTheKn33 for me the last usable commit is e40e486f61f2643578b9977b86f408799dbc75fd, so you may try rolling back to that for the time being, unless there are things in later commits you absolutely need. (Technically, 802ab58f8a129b42d61ec13898fd978e0920f7d8 doesn't have the issue, but it does have the black cursor issue / resolution issue)
you're saying 802ab58 doesn't have the issue? Can you bisect it then after that commit?
@vaxerski Correct--https://github.com/hyprwm/Hyprland/commit/aedcade68dd0615fd919a7249633a554d0accd81 is where it appears
roight. That doesn't help. :/
can any one of you add a glFlush() as the first thing in beginRender()?
@vaxerski doesn't seem to make any difference
:(
patch.txt
?
renamed cuz it's not limited to nv
@vaxerski no dice
Didn't work for me either.
:(
I assume wlroots with nvidia patches + sway-git does not exhibit this issue?
what about d2c3b23ace745d3f79926c71d4cb3c481a394841
hyprlandCrashReport135341.txt
@vaxerski it goes poopy now
see #3998, already fixed in HEAD
@vaxerski that seems to have fixed it!!
the crash or the artifacts?
oke, great, closing then :)
| gharchive/issue | 2023-11-26T00:56:10 | 2025-04-01T06:38:59.930491 | {
"authors": [
"BendTheKn33",
"andresilva",
"brettalcox",
"imxyy1soope1",
"vaxerski"
],
"repo": "hyprwm/Hyprland",
"url": "https://github.com/hyprwm/Hyprland/issues/3962",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2182815717 | [REQ] Fade out option
Current Behavior
On authentication, the onscreen graphics are immediately killed.
Wanted Behavior
Have an option in the config for the graphics to fade out, the same way they fade in, before they are killed.
#60
| gharchive/issue | 2024-03-12T22:52:03 | 2025-04-01T06:38:59.931799 | {
"authors": [
"BowlBird",
"vaxerski"
],
"repo": "hyprwm/hyprlock",
"url": "https://github.com/hyprwm/hyprlock/issues/185",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
228715648 | Stock doesn't decrease after selling one variation of a variable product
Hi there and thanks for your plugin !
Here is the description of an issue I'm experiencing:
Can you reproduce this issue on default Wordpress theme (eg Storefront)?
Yes.
Can you reproduce this issue when all other plugins are disabled except WooCommerce, Polylang and Hyyan WooCommerce Polylang Integration?
Yes.
What product versions and settings are you using when this issue occurs?
PHP: 5.6.30
WordPress: 4.7.4
WooCommerce: 3.0.6
Polylang: 2.13
Hyyan WooCommerce Polylang Integration: 0.29.1
Browser: Chrome (58.0.3029.96)
Steps to Reproduce
Check the stock in a variable product
Go to the variable product in shop
Buy the variable product
Check the stock in variable product which didn't decrease
What I Expected
I expected the stock to decrease itself from 1 item when the user buy 1 item.
What Happened Instead
The stock didn't decrease.
Hey @Wiloud, please note that version 0.29.1 doesn't work with WooCommerce 3.0.6; try to download the plugin from the master branch, it contains the latest updates to support WooCommerce 3.0.6.
The new version will be released soon in the WordPress repo
Hey @hyyan , thanks a lot for this quick answer.
It worked !
Best !
Wil
| gharchive/issue | 2017-05-15T13:31:34 | 2025-04-01T06:38:59.942884 | {
"authors": [
"Wiloud",
"hyyan"
],
"repo": "hyyan/woo-poly-integration",
"url": "https://github.com/hyyan/woo-poly-integration/issues/162",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1331014104 | Found bugs in inference_img.py
inference_img.py - line 99
with bugs:
res.append(model.inference(img0, img1, (i+1) * 1. / (n+1), args.scale))
fixed:
img_list.append(model.inference(img0, img1, (i+1) / n))
There's no res variable, so I assumed it should be img_list
Formula was incorrect, resulting in a 0.333 ratio instead of 0.5 for x2 interpolation
There's no args.scale argument, so I removed it, but you can add it in the list of arguments to keep it
Thank you! Modified it.
Also, it seems like at line 70, it's not sending the ratio to model.inference(), resulting in a 0.5 ratio every time, even if another was provided. But I'm not 100% sure. My fix:
img_list = [img0, model.inference(img0, img1, args.ratio), img1]
Hey @hzwer - I wrote another fix above, but it seems you didn't see it. I don't know GitHub well. Sorry.
@yaroslav-semeniuk Thank you so much for your feedback!
Hey @hzwer - I wrote another fix above, but it seems you didn't see it. I don't know GitHub well. Sorry.
You can try forking the project, fixing it on your own branch, and opening a pull request from that branch.
| gharchive/issue | 2022-08-07T13:17:42 | 2025-04-01T06:38:59.965124 | {
"authors": [
"hzwer",
"oblessnoob",
"yaroslav-semeniuk"
],
"repo": "hzwer/Practical-RIFE",
"url": "https://github.com/hzwer/Practical-RIFE/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Mosaic artifacts on solid-color backgrounds or in sky regions
Hi author, have you ever seen mosaic artifacts appear in sky regions, and how can this be solved? Could it be a problem with the warp? (Already trained for 70 epochs)
I suggest training with LPIPS instead, to see if it gets better.
You can try the latest PyTorch 1.9.
| gharchive/issue | 2021-10-25T01:47:52 | 2025-04-01T06:38:59.966081 | {
"authors": [
"JasonSheng-atp",
"NEFUJoeyChen",
"hzwer"
],
"repo": "hzwer/arXiv2020-RIFE",
"url": "https://github.com/hzwer/arXiv2020-RIFE/issues/205",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Edit README
Edit the README
Add a link to the project card
Change filenames, etc.
| gharchive/issue | 2022-07-03T05:14:14 | 2025-04-01T06:38:59.978917 | {
"authors": [
"i13abe"
],
"repo": "i13abe/yoloX_web",
"url": "https://github.com/i13abe/yoloX_web/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
122402591 | Fix small documentation bugs
Just fixed a few broken links (npm is case sensitive and postProcessor gives you 404). Maybe it'd be a good idea to change the name of this repo to lower case too :)
There was a small mistake with the name of the overloadTranslationOptionHandler method
One more thing: bower installation doesn't work yet, so I left it as-is in case they support case-sensitive names.
Bower should now be registered... I missed that installing from the repo seems not to work... so I will need to register all the new stuff on Bower.
renamed the function to overloadTranslationOptionHandler: sprintf.overloadTranslationOptionHandler so it matches i18next
thank you for the help
| gharchive/pull-request | 2015-12-16T00:54:25 | 2025-04-01T06:38:59.980102 | {
"authors": [
"jamuhl",
"perrin4869"
],
"repo": "i18next/i18next-sprintf-postProcessor",
"url": "https://github.com/i18next/i18next-sprintf-postProcessor/pull/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
255115523 | Android Oreo Build Error
SystemUI.zip
I've decompiled SystemUI.apk from stock Android Oreo on the 6P. I've edited a few XMLs. Upon building I get this error. If I decompile the apk and build it as-is, I still get the error.
Information
Apktool Version (2.2.4) -
Operating System (Windows) -
APK From? (Stock Oreo system.img) -
Stacktrace/Logcat
W: ERROR: 9-patch image C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUI.apk\res\drawable-xxxhdpi\pip_dismiss_scrim.9.png malformed.
W: No marked region found along edge.
W: Found along left edge.
W: ERROR: Failure processing PNG image C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUI.apk\res\drawable-xxxhdpi\pip_dismiss_scrim.9.png
Exception in thread "main" brut.androlib.AndrolibException: brut.androlib.AndrolibException: brut.common.BrutException: could not exec (exit code = 1): [C:\Users\KRISY~1.DES\AppData\Local\Temp\brut_util_Jar_3948520725014907572.tmp, p, --forced-package-id, 127, --min-sdk-version, 26, --target-sdk-version, 26, --version-code, 26, --version-name, 8.0.0, --no-version-vectors, -F, C:\Users\KRISY~1.DES\AppData\Local\Temp\APKTOOL2731525356834830561.tmp, -0, arsc, -0, arsc, -I, C:\Users\krisy.DESKTOP-3EFRBLR\AppData\Local\apktool\framework\1.apk, -S, C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUI.apk\res, -M, C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUI.apk\AndroidManifest.xml]
at brut.androlib.Androlib.buildResourcesFull(Androlib.java:496)
at brut.androlib.Androlib.buildResources(Androlib.java:430)
at brut.androlib.Androlib.build(Androlib.java:329)
at brut.androlib.Androlib.build(Androlib.java:267)
at brut.apktool.Main.cmdBuild(Main.java:230)
at brut.apktool.Main.main(Main.java:83)
Caused by: brut.androlib.AndrolibException: brut.common.BrutException: could not exec (exit code = 1): [C:\Users\KRISY~1.DES\AppData\Local\Temp\brut_util_Jar_3948520725014907572.tmp, p, --forced-package-id, 127, --min-sdk-version, 26, --target-sdk-version, 26, --version-code, 26, --version-name, 8.0.0, --no-version-vectors, -F, C:\Users\KRISY~1.DES\AppData\Local\Temp\APKTOOL2731525356834830561.tmp, -0, arsc, -0, arsc, -I, C:\Users\krisy.DESKTOP-3EFRBLR\AppData\Local\apktool\framework\1.apk, -S, C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUI.apk\res, -M, C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUI.apk\AndroidManifest.xml]
at brut.androlib.res.AndrolibResources.aaptPackage(AndrolibResources.java:441)
at brut.androlib.Androlib.buildResourcesFull(Androlib.java:482)
... 5 more
Caused by: brut.common.BrutException: could not exec (exit code = 1): [C:\Users\KRISY~1.DES\AppData\Local\Temp\brut_util_Jar_3948520725014907572.tmp, p, --forced-package-id, 127, --min-sdk-version, 26, --target-sdk-version, 26, --version-code, 26, --version-name, 8.0.0, --no-version-vectors, -F, C:\Users\KRISY~1.DES\AppData\Local\Temp\APKTOOL2731525356834830561.tmp, -0, arsc, -0, arsc, -I, C:\Users\krisy.DESKTOP-3EFRBLR\AppData\Local\apktool\framework\1.apk, -S, C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUI.apk\res, -M, C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUI.apk\AndroidManifest.xml]
at brut.util.OS.exec(OS.java:95)
at brut.androlib.res.AndrolibResources.aaptPackage(AndrolibResources.java:435)
... 6 more
A subdirectory or file modified-system-apk-files-here already exists.
W: ERROR: 9-patch image C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUIGoogle.apk\res\drawable-xxxhdpi\pip_dismiss_scrim.9.png malformed.
W: No marked region found along edge.
W: Found along left edge.
W: ERROR: Failure processing PNG image C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUIGoogle.apk\res\drawable-xxxhdpi\pip_dismiss_scrim.9.png
Exception in thread "main" brut.androlib.AndrolibException: brut.androlib.AndrolibException: brut.common.BrutException: could not exec (exit code = 1): [C:\Users\KRISY~1.DES\AppData\Local\Temp\brut_util_Jar_7543208047023060219.tmp, p, --forced-package-id, 127, --min-sdk-version, 26, --target-sdk-version, 26, --version-code, 26, --version-name, 8.0.0, --no-version-vectors, -F, C:\Users\KRISY~1.DES\AppData\Local\Temp\APKTOOL4037231958546700424.tmp, -0, arsc, -0, arsc, -I, C:\Users\krisy.DESKTOP-3EFRBLR\AppData\Local\apktool\framework\1.apk, -S, C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUIGoogle.apk\res, -M, C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUIGoogle.apk\AndroidManifest.xml]
at brut.androlib.Androlib.buildResourcesFull(Androlib.java:496)
at brut.androlib.Androlib.buildResources(Androlib.java:430)
at brut.androlib.Androlib.build(Androlib.java:329)
at brut.androlib.Androlib.build(Androlib.java:267)
at brut.apktool.Main.cmdBuild(Main.java:230)
at brut.apktool.Main.main(Main.java:83)
Caused by: brut.androlib.AndrolibException: brut.common.BrutException: could not exec (exit code = 1): [C:\Users\KRISY~1.DES\AppData\Local\Temp\brut_util_Jar_7543208047023060219.tmp, p, --forced-package-id, 127, --min-sdk-version, 26, --target-sdk-version, 26, --version-code, 26, --version-name, 8.0.0, --no-version-vectors, -F, C:\Users\KRISY~1.DES\AppData\Local\Temp\APKTOOL4037231958546700424.tmp, -0, arsc, -0, arsc, -I, C:\Users\krisy.DESKTOP-3EFRBLR\AppData\Local\apktool\framework\1.apk, -S, C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUIGoogle.apk\res, -M, C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUIGoogle.apk\AndroidManifest.xml]
at brut.androlib.res.AndrolibResources.aaptPackage(AndrolibResources.java:441)
at brut.androlib.Androlib.buildResourcesFull(Androlib.java:482)
... 5 more
Caused by: brut.common.BrutException: could not exec (exit code = 1): [C:\Users\KRISY~1.DES\AppData\Local\Temp\brut_util_Jar_7543208047023060219.tmp, p, --forced-package-id, 127, --min-sdk-version, 26, --target-sdk-version, 26, --version-code, 26, --version-name, 8.0.0, --no-version-vectors, -F, C:\Users\KRISY~1.DES\AppData\Local\Temp\APKTOOL4037231958546700424.tmp, -0, arsc, -0, arsc, -I, C:\Users\krisy.DESKTOP-3EFRBLR\AppData\Local\apktool\framework\1.apk, -S, C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUIGoogle.apk\res, -M, C:\Users\krisy.DESKTOP-3EFRBLR\Desktop\APK-Multi-Tool-master\APK-Multi-Tool-master\APK-Multi-Tool-master\projects\SystemUIGoogle.apk\AndroidManifest.xml]
at brut.util.OS.exec(OS.java:95)
at brut.androlib.res.AndrolibResources.aaptPackage(AndrolibResources.java:435)
... 6 more
Steps to Reproduce
apktool b systemui
Frameworks
If this APK is from an OEM ROM (Samsung, HTC, LG). Please attach framework files
(.apks that live in /system/framework or /system/priv-app)
APK
If this APK can be freely shared, please upload/attach a link to it.
Questions to ask before submission
Have you tried apktool d, apktool b without changing anything? Yes
If you are trying to install a modified apk, did you resign it? N/A
Are you using the latest apktool version? Yes
Hmm. Seems to be a 9patch issue more so than related to Oreo. Recent image tickets:
https://github.com/iBotPeaches/Apktool/issues/1559
https://github.com/iBotPeaches/Apktool/issues/1522
Without the apk to investigate the 9patch image directly, there is not much I can do here :/
I attached the apk in a zip on the original post. #1522 seems to be a very similar issue
Hai m trying to recompile systemUI.apk, This error is comming,,,
Any help,
System:- Windows 10
Apktool Version:- apktool_2.2.4
Android:- Oreo
Recompile:- command:- java -jar apktool.jar b SystemUI -c
Error:-
I: Using Apktool 2.2.4
I: Checking whether sources has changed...
I: Checking whether resources has changed...
I: Building resources...
W: ERROR: 9-patch image C:\Users\subin\Desktop\My project\Devices\Osprey\Oreo\Osprey\SystemUI\res\drawable-xhdpi\pip_dismiss_scrim.9.png malformed.
W: No marked region found along edge.
W: Found along left edge.
W: ERROR: Failure processing PNG image C:\Users\subin\Desktop\My project\Devices\Osprey\Oreo\Osprey\SystemUI\res\drawable-xhdpi\pip_dismiss_scrim.9.png
Exception in thread "main" brut.androlib.AndrolibException: brut.androlib.AndrolibException: brut.common.BrutException: could not exec (exit code = 1): [C:\Users\subin\AppData\Local\Temp\brut_util_Jar_1834093262529986261.tmp, p, --forced-package-id, 127, --min-sdk-version, 26, --target-sdk-version, 26, --version-code, 26, --version-name, 8.0.0, --no-version-vectors, -F, C:\Users\subin\AppData\Local\Temp\APKTOOL1957412541163737414.tmp, -0, arsc, -0, arsc, -I, C:\Users\subin\AppData\Local\apktool\framework\1.apk, -S, C:\Users\subin\Desktop\My project\Devices\Osprey\Oreo\Osprey\SystemUI\res, -M, C:\Users\subin\Desktop\My project\Devices\Osprey\Oreo\Osprey\SystemUI\AndroidManifest.xml]
at brut.androlib.Androlib.buildResourcesFull(Androlib.java:496)
at brut.androlib.Androlib.buildResources(Androlib.java:430)
at brut.androlib.Androlib.build(Androlib.java:329)
at brut.androlib.Androlib.build(Androlib.java:267)
at brut.apktool.Main.cmdBuild(Main.java:230)
at brut.apktool.Main.main(Main.java:83)
Caused by: brut.androlib.AndrolibException: brut.common.BrutException: could not exec (exit code = 1): [C:\Users\subin\AppData\Local\Temp\brut_util_Jar_1834093262529986261.tmp, p, --forced-package-id, 127, --min-sdk-version, 26, --target-sdk-version, 26, --version-code, 26, --version-name, 8.0.0, --no-version-vectors, -F, C:\Users\subin\AppData\Local\Temp\APKTOOL1957412541163737414.tmp, -0, arsc, -0, arsc, -I, C:\Users\subin\AppData\Local\apktool\framework\1.apk, -S, C:\Users\subin\Desktop\My project\Devices\Osprey\Oreo\Osprey\SystemUI\res, -M, C:\Users\subin\Desktop\My project\Devices\Osprey\Oreo\Osprey\SystemUI\AndroidManifest.xml]
at brut.androlib.res.AndrolibResources.aaptPackage(AndrolibResources.java:441)
at brut.androlib.Androlib.buildResourcesFull(Androlib.java:482)
... 5 more
Caused by: brut.common.BrutException: could not exec (exit code = 1): [C:\Users\subin\AppData\Local\Temp\brut_util_Jar_1834093262529986261.tmp, p, --forced-package-id, 127, --min-sdk-version, 26, --target-sdk-version, 26, --version-code, 26, --version-name, 8.0.0, --no-version-vectors, -F, C:\Users\subin\AppData\Local\Temp\APKTOOL1957412541163737414.tmp, -0, arsc, -0, arsc, -I, C:\Users\subin\AppData\Local\apktool\framework\1.apk, -S, C:\Users\subin\Desktop\My project\Devices\Osprey\Oreo\Osprey\SystemUI\res, -M, C:\Users\subin\Desktop\My project\Devices\Osprey\Oreo\Osprey\SystemUI\AndroidManifest.xml]
at brut.util.OS.exec(OS.java:95)
at brut.androlib.res.AndrolibResources.aaptPackage(AndrolibResources.java:435)
... 6 more
Confirmed
W: ERROR: 9-patch image /Users/connortumbleson/Desktop/Apktool/Bugs/Bug1608/SystemUI/res/drawable-xxxhdpi/pip_dismiss_scrim.9.png malformed.
W: No marked region found along edge.
W: Found along left edge.
W: ERROR: Failure processing PNG image /Users/connortumbleson/Desktop/Apktool/Bugs/Bug1608/SystemUI/res/drawable-xxxhdpi/pip_dismiss_scrim.9.png
Confirmed duplicate of #1522. All about this stupid malformed "pip_dismiss_scrim.9.png" image.
Future details will be in #1522. Thanks for the report.
I have another error and I have no idea why this happens...
I: Using Apktool 2.3.4
I: Checking whether sources has changed...
I: Smaling smali folder into classes.dex...
I: Checking whether sources has changed...
I: Checking whether resources has changed...
I: Building resources...
S: WARNING: Could not write to (C:\Users\rafy2\AppData\Local\apktool\framework), using C:\Users\rafy2\AppData\Local\Temp\ instead...
S: Please be aware this is a volatile directory and frameworks could go missing, please utilize --frame-path if the default storage directory is unavailable
W: ERROR: 9-patch image C:\Users\rafy2\Desktop\Apktool_By_Xianos\SystemUI\res\drawable-xxhdpi\pip_dismiss_scrim.9.png malformed.
W: No marked region found along edge.
W: Found along left edge.
W: ERROR: Failure processing PNG image C:\Users\rafy2\Desktop\Apktool_By_Xianos\SystemUI\res\drawable-xxhdpi\pip_dismiss_scrim.9.png
brut.androlib.AndrolibException: brut.common.BrutException: could not exec (exit code = 1): [C:\Users\rafy2\AppData\Local\Temp\brut_util_Jar_4109523728293022626.tmp, p, --forced-package-id, 127, --min-sdk-version, 27, --target-sdk-version, 27, --version-code, 20171030, --version-name, 8.1.0, --no-version-vectors, -F, C:\Users\rafy2\AppData\Local\Temp\APKTOOL3236394871236666726.tmp, -0, arsc, -0, webp, -0, res/drawable-ldrtl-xxhdpi-v4/abc_spinner_mtrl_am_alpha.9.png, -0, res/drawable-mdpi-v4/ic_call_lock_normal.9.png, -0, res/drawable-mdpi-v4/ic_call_lock_pressed.9.png, -0, res/drawable-mdpi-v4/ic_email_lock_normal.9.png, -0, res/drawable-mdpi-v4/ic_email_lock_pressed.9.png, -0, png, -0, res/drawable-xhdpi-v4/ic_biometric_small_icon_bg.9.png, -0, res/drawable-xhdpi-v4/lb_card_shadow_focused.9.png, -0, res/drawable-xhdpi-v4/lb_card_shadow_normal.9.png, -0, res/drawable-xxhdpi-v4/abc_ab_share_pack_mtrl_alpha.9.png, -0, res/drawable-xxhdpi-v4/abc_btn_switch_to_on_mtrl_00001.9.png, -0, res/drawable-xxhdpi-v4/abc_btn_switch_to_on_mtrl_00012.9.png, -0, res/drawable-xxhdpi-v4/abc_cab_background_top_mtrl_alpha.9.png, -0, res/drawable-xxhdpi-v4/abc_list_divider_mtrl_alpha.9.png, -0, res/drawable-xxhdpi-v4/abc_list_focused_holo.9.png, -0, res/drawable-xxhdpi-v4/abc_list_longpressed_holo.9.png, -0, res/drawable-xxhdpi-v4/abc_list_pressed_holo_dark.9.png, -0, res/drawable-xxhdpi-v4/abc_list_pressed_holo_light.9.png, -0, res/drawable-xxhdpi-v4/abc_list_selector_disabled_holo_dark.9.png, -0, res/drawable-xxhdpi-v4/abc_list_selector_disabled_holo_light.9.png, -0, res/drawable-xxhdpi-v4/abc_menu_hardkey_panel_mtrl_mult.9.png, -0, res/drawable-xxhdpi-v4/abc_popup_background_mtrl_mult.9.png, -0, res/drawable-xxhdpi-v4/abc_scrubber_primary_mtrl_alpha.9.png, -0, res/drawable-xxhdpi-v4/abc_scrubber_track_mtrl_alpha.9.png, -0, res/drawable-xxhdpi-v4/abc_spinner_mtrl_am_alpha.9.png, -0, res/drawable-xxhdpi-v4/abc_switch_track_mtrl_alpha.9.png, -0, res/drawable-xxhdpi-v4/abc_tab_indicator_mtrl_alpha.9.png, -0, res/drawable-xxhdpi-v4/abc_textfield_activated_mtrl_alpha.9.png, -0, res/drawable-xxhdpi-v4/abc_textfield_default_mtrl_alpha.9.png, -0, res/drawable-xxhdpi-v4/abc_textfield_search_activated_mtrl_alpha.9.png, -0, res/drawable-xxhdpi-v4/abc_textfield_search_default_mtrl_alpha.9.png, -0, res/drawable-xxhdpi-v4/biometric_toast_progress_bg.9.png, -0, res/drawable-xxhdpi-v4/biometric_toast_result_bg.9.png, -0, res/drawable-xxhdpi-v4/fmm_btn_k.9.png, -0, res/drawable-xxhdpi-v4/fmm_btn_w.9.png, -0, res/drawable-xxhdpi-v4/ic_notification_overlay.9.png, -0, jpg, -0, res/drawable-xxhdpi-v4/lb_action_bg_focused.9.png, -0, res/drawable-xxhdpi-v4/lb_in_app_search_bg.9.png, -0, res/drawable-xxhdpi-v4/lb_in_app_search_shadow_focused.9.png, -0, res/drawable-xxhdpi-v4/lb_in_app_search_shadow_normal.9.png, -0, res/drawable-xxhdpi-v4/nav_background.9.png, -0, res/drawable-xxhdpi-v4/pip_dismiss_scrim.9.png, -0, res/drawable-xxhdpi-v4/recents_lower_gradient.9.png, -0, res/drawable-xxhdpi-v4/recents_status_gradient.9.png, -0, res/drawable-xxhdpi-v4/recents_task_shadow.9.png, -0, res/drawable-xxhdpi-v4/screenshot_panel.9.png, -0, res/drawable-xxhdpi-v4/search_bg_transparent.9.png, -0, res/drawable-xxhdpi-v4/status_background.9.png, -0, res/drawable/bubble_03_l_bottom.9.png, -0, res/drawable/bubble_03_left.9.png, -0, res/drawable/bubble_03_right.9.png, -0, res/drawable/bubble_04_left.9.png, -0, res/drawable/bubble_04_right.9.png, -0, res/drawable/bubble_bottom_arrow_left.9.png, -0, res/drawable/bubble_bottom_arrow_right.9.png, -0, 
res/drawable/bubble_bottom_r_arrow_left.9.png, -0, res/drawable/bubble_bottom_r_arrow_right.9.png, -0, res/drawable/bubble_left_arrow_bottom.9.png, -0, res/drawable/bubble_left_arrow_top.9.png, -0, res/drawable/bubble_line_03_left.9.png, -0, res/drawable/bubble_line_03_right.9.png, -0, res/drawable/bubble_line_04_left.9.png, -0, res/drawable/bubble_line_04_right.9.png, -0, res/drawable/bubble_line_bottom_arrow_left.9.png, -0, res/drawable/bubble_line_bottom_arrow_right.9.png, -0, res/drawable/bubble_line_bottom_r_arrow_left.9.png, -0, res/drawable/bubble_line_bottom_r_arrow_right.9.png, -0, res/drawable/bubble_line_left_arrow_bottom.9.png, -0, res/drawable/bubble_line_left_arrow_top.9.png, -0, res/drawable/bubble_line_right_arrow_bottom.9.png, -0, res/drawable/bubble_line_right_arrow_top.9.png, -0, res/drawable/bubble_line_top_arrow_left.9.png, -0, res/drawable/bubble_line_top_arrow_right.9.png, -0, res/drawable/bubble_line_top_r_arrow_left.9.png, -0, res/drawable/bubble_line_top_r_arrow_right.9.png, -0, res/drawable/bubble_right_arrow_bottom.9.png, -0, res/drawable/bubble_right_arrow_top.9.png, -0, res/drawable/bubble_top_arrow_left.9.png, -0, res/drawable/bubble_top_arrow_right.9.png, -0, res/drawable/bubble_top_r_arrow_left.9.png, -0, res/drawable/bubble_top_r_arrow_right.9.png, -0, res/drawable/focused_bg.9.png, -0, res/drawable/homescreen_badge_bg.9.png, -0, res/drawable/homescreen_badge_multiselect_bg.9.png, -0, res/drawable/knox_scrollbar.9.png, -0, res/drawable/tw_scrollbar_mtrl.9.png, -0, ogg, -0, arsc, -I, C:\Users\rafy2\AppData\Local\Temp\1.apk, -S, C:\Users\rafy2\Desktop\Apktool_By_Xianos\SystemUI\res, -M, C:\Users\rafy2\Desktop\Apktool_By_Xianos\SystemUI\AndroidManifest.xml]
| gharchive/issue | 2017-09-04T20:51:23 | 2025-04-01T06:39:00.033607 | {
"authors": [
"KrisYarno",
"ROM-PacMe",
"Subinsmani",
"iBotPeaches"
],
"repo": "iBotPeaches/Apktool",
"url": "https://github.com/iBotPeaches/Apktool/issues/1608",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2146792576 | 🛑 Game 04 - Tsuki is down
In 8e88549, Game 04 - Tsuki (https://game04.idrunk.fr) was down:
HTTP code: 502
Response time: 431 ms
Resolved: Game 04 - Tsuki is back up in 416b6f7 after 10 minutes.
| gharchive/issue | 2024-02-21T13:42:34 | 2025-04-01T06:39:00.041910 | {
"authors": [
"iDrunK65"
],
"repo": "iDrunK65/games",
"url": "https://github.com/iDrunK65/games/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1649817128 | Support linting of source code
added the ability to scan source code files and apply rules
added source-checker Rule for declarative rules
introduced the idea of an intermediate representation object
defines its own type, as each representation can be different, and its source path so that it is self-documenting
multiple parsers can parse the same file, each returning a different intermediate representation, e.g. HTML files can be parsed by an HTML parser, a JavaScript parser, and a CSS parser
a single parser can produce multiple intermediate representations if it likes, so the result of parsing by one or more parsers is an array of intermediate representations
rules are specific to an intermediate representation type and are only applied to the representations that they know how to parse
pass in an API object for plugins to call
the only call in there currently is getLogger, which returns the lint tool's logger (see the sketch after this list)
fixed the plugin loader to be able to load plugins written in ESM.
Currently, Node cannot load them from a directory, so you have to load them from the main file instead.
moved more functionality into the Project class so that later after more updates, we can offer linting as a library as well as a command-line tool
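A minimal sketch of a plugin touching that API object; only the getLogger call is grounded in this PR, while the class shape, init hook, and logger name are illustrative assumptions:

// hypothetical-plugin.ts -- illustrative only; everything except getLogger is assumed
export class HypotheticalPlugin {
  private logger?: { debug(msg: string): void };

  // assumed initialization hook that receives the API object described above
  init(api: { getLogger(name: string): { debug(msg: string): void } }): void {
    this.logger = api.getLogger("i18nlint.plugin.hypothetical");
    this.logger.debug("plugin initialized");
  }
}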
Checks will fail until the i18nlint-common library is approved and published to npm
Okay i18nlint-common is published, and this is updated and ready for review. (Yes, sorry it is so big.)
| gharchive/pull-request | 2023-03-31T18:18:17 | 2025-04-01T06:39:00.051129 | {
"authors": [
"ehoogerbeets"
],
"repo": "iLib-js/i18nlint",
"url": "https://github.com/iLib-js/i18nlint/pull/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
800685746 | Android MotoG9 discovery issue
Environment:
Sagemcom F@st 3865bProximus (b-box3)
Release: 8c.514A
GUI Version
7.4.2.0
The Sagemcom client seems to have issues discovering my Android devices.
After further investigation, the issue seems to be triggered by the 'hostname' and device-type values stored in the router. By default the router has no friendly name set, so the application may be expecting a value there, since the HA entity is mapped from that same friendly-name value.
Workaround: change the device type from 'Generic' to 'Mobile phone', assign a friendly name, then reload the application.
Action Taken:
Added the Sagemcom client integration
Reloaded Home Assistant
the client doesn't discover the Motorola G9 Android
Sagemcom F@st
192.168.1.1
12 devices and 12 entities
Tried to reload either the application or HA but no luck
Changed the Motorola device type and the friendly name in the router
Reloaded the client and issue is fixed
Apologies, but I cannot share the logs; somehow HA didn't print any, even though I added the required lines to the configuration file.
Currently not much is logged from the integration, so you won't see a lot indeed.
Currently we use the following definition for the name, so that shouldn't be the issue. Are you familiar with Python and are you able to try if it works with the Python client directly? (https://github.com/iMicknl/python-sagemcom-api)
@property
def name(self):
"""Return name of the device."""
return self.user_host_name or self.host_name
Ok, I will give it a try using the client.
@cingolo could you have a look if it works with the latest version? :)
Okay, I will run a few tests later today.
| gharchive/issue | 2021-02-03T20:45:20 | 2025-04-01T06:39:00.057941 | {
"authors": [
"cingolo",
"iMicknl"
],
"repo": "iMicknl/ha-sagemcom-fast",
"url": "https://github.com/iMicknl/ha-sagemcom-fast/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
788244173 | Open/Close Status indicator (percentage) inconsistency - AwningValance (io:AwningValanceIOComponent)
[x ] I have read the Readme, including the Advanced section regarding debugging.
Describe the bug
The percentage indicator is not consistent across the Somfy products.
To Reproduce
configure entity in entity card
Expected behavior
The same percentage status indicator for every Somfy cover product.
Screenshots
Environment (please complete the following information):
ha-tahoma version: 2020.6.4-15
Home Assistant version: core-2021.1.4
Platform: cover.volant
Device: (if your problem is related to a specific device)
pc
Model:
current_position: 0
ui_class: Awning
widget: AwningValance
controllable_name: 'io:AwningValanceIOComponent'
rssi_level: 32
'core:NameState': Volant
'core:PriorityLockTimerState': 0
'core:StatusState': available
'core:DiscreteRSSILevelState': low
'core:RSSILevelState': 32
'core:ClosureState': 0
'core:OpenClosedState': open
'core:Memorized1PositionState': 105
friendly_name: Volant
supported_features: 527
device_class: awning
Type: controllable_name: 'io:AwningValanceIOComponent'
Additional context
Add any other context about the problem here.
Thanks for reporting @inconsx. It seems that there is an issue in the latest version of ha-tahoma or changes have been made on the side of Somfy TaHoma...
We have had these issues in the past as well; however, when we reverse the state, someone else will have issues with exactly the same device. We need to investigate more; I'll let you know if I require more details.
Do you know the type (TaHoma vs Connexoon) of your hub and the firmware version of your hub? And just to double check, the status on tahomalink.com is correct?
(possible duplicate of #352 and #350)
Hey
It's a Tahoma and the firmware is 2.18.
Status is correct.
Thanks and regards
Could you give https://github.com/iMicknl/ha-tahoma/archive/fix/awning_valance.zip a try? Extract this file and place custom_components/tahoma in your custom_components folder. This should solve your issues.
Hi,
Can we reopen this issue?
The Volant is now 90% open but is shown as 10% open.
Tahoma FW: 2.19
Home Assistant 2021.2.3
current_position: 12
rssi_level: 38
'core:NameState': Volant
'core:PriorityLockTimerState': 0
'core:StatusState': available
'core:DiscreteRSSILevelState': low
'core:RSSILevelState': 38
'core:ClosureState': 88
'core:OpenClosedState': open
'core:Memorized1PositionState': 105
friendly_name: Volant
supported_features: 527
device_class: awning
@inconsx which version of the TaHoma integration are you using?
@iMicknl
2.4.7
@inconsx this fix has been confirmed by others with the same device. Perhaps there are even differences between the same devices.
Do all buttons work? (open / close / set position). I would like to understand if this is just a cosmetic issue or if this disables the functionality.
@iMicknl
every button is working, yes.
It's just inverted, and it's the only entity that shows a percentage indicator when closed (before 0%, now inverted to 100%).
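Note that the attribute dump above is self-consistent with an inversion: current_position 12 equals 100 minus the reported core:ClosureState of 88. A minimal sketch of that suspected mapping, assuming ClosureState runs from 0 (fully open) to 100 (fully closed); whether this particular device reports the opposite convention is exactly what is in question here:

// Assumed mapping: HA position (100 = open) derived from Somfy closure (100 = closed)
const positionFromClosure = (closure: number): number => 100 - closure;

// positionFromClosure(88) === 12, matching the attributes above; a device that
// reports closure with the opposite convention would appear inverted in the UI.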
Hello,
I've got the same issue. It's been like this since the start (about 1 year ago).
Besides the awning, all other Somfy equipment reports the correct states in HA.
I went through several Tahoma firmwares and several Home Assistant releases, and no improvement has been seen.
Currently I'm on these releases :
Tahoma firmware : 2021.1.4-12
Home Assistant : docker 2021.4.3
HA Tahoma : 2.4.8
Below, in the screenshots, you can see the awning is reported as open when it's actually closed (Tahoma on Android or tahomalink.com shows me the correct status). I can still use the HA Tahoma integration to open or close the awning, but I have to keep in mind that the numbers and status are incorrect.
@labr3y thanks, I would like to see if I can tackle this issue this week. Is anyone willing to work with me via Discord to see if we can debug / fix this issue? Would be great to have this solved before the integration is added to core.
Join us on Discord (https://discord.gg/3Wpn6q9z or add iMick#1903, to discuss in more detail. Eventually the easiest would be if you are able to change your password temporarily and share your credentials privately, so I can have a quick look and see how to fix it :).
I could help with that. I'll try to join you on Discord later, after work. We can see how and when we can do some testing.
| gharchive/issue | 2021-01-18T12:50:49 | 2025-04-01T06:39:00.078890 | {
"authors": [
"iMicknl",
"inconsx",
"labr3y"
],
"repo": "iMicknl/ha-tahoma",
"url": "https://github.com/iMicknl/ha-tahoma/issues/351",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1203757642 | checkpoint no page found
https://download.impersonator.org/iper_plus_plus_latest_checkpoints.zip returns "page not found".
@fpcarva have you find a different link?
@fpcarva have you found a different link?
https://github.com/iPERDance/iPERCore/blob/main/docs/install.md
scroll down to OneDrive 1 or OneDrive 2
@Burtdoe can I use the OneDrive link in the wget command?
| gharchive/issue | 2022-04-13T20:27:40 | 2025-04-01T06:39:00.147531 | {
"authors": [
"Burtdoe",
"ProudFoxx42069",
"fpcarva"
],
"repo": "iPERDance/iPERCore",
"url": "https://github.com/iPERDance/iPERCore/issues/145",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
196283369 | pause
Hello, I want to know how I can pause and resume the download of a file.
Thanks.
There are Pause and Unpause functions.
| gharchive/issue | 2016-12-18T16:04:56 | 2025-04-01T06:39:00.155962 | {
"authors": [
"iTaybb",
"nima2017"
],
"repo": "iTaybb/pySmartDL",
"url": "https://github.com/iTaybb/pySmartDL/issues/11",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
165312150 | Adding in UITableViewCell
I'm adding LCBannerView in a UITableViewCell; it seems that it is being added again and again, as I'm seeing overlapped images after scrolling.
Any suggestions how can I fix this?
Sorry, this is my fault... But I can't fix it right now...
I do have some advice, though: you could change some properties in the .m file, like imageName and imageURLs. You could move those from the .m file to the .h file, and then reset bannerView.imageName, bannerView.imageURLs, etc. in UITableView's delegate method tableView:cellForRowAtIndexPath:, for example:
cell.bannerView.imageName = self.dataSourceArray[indexPath.row][@"imageName"];
cell.bannerView.imageURLs = self.dataSourceArray[indexPath.row][@"imageURLs"];
When I'm free, I will fix it this way :)
Thanks. 👍
I’ll give it a try.
Hey, I fixed this bug a moment ago; you can run pod update to get the latest release.
For more information you could see the README's Release Logs.
Thanks much appreciated.
| gharchive/issue | 2016-07-13T12:49:27 | 2025-04-01T06:39:00.159782 | {
"authors": [
"MVakas",
"iTofu"
],
"repo": "iTofu/LCBannerView",
"url": "https://github.com/iTofu/LCBannerView/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1497365651 | feat: Add new Icon component
CSS: Added new iui-svg-icon class that takes an svg as a child. Allows changing size and fill using data-iui-icon-color and data-iui-icon-size attributes.
React: Added an Icon component, used like this:
<Icon><SvgCloseSmall /></Icon>
<Icon fill='informational'><SvgInfoCircular /></Icon>
Other notes:
I used the name iui-svg-icon to prevent conflicts with iui-icon, which is already used in a bunch of components.
I kept backwards compatibility with the data-iui-icon-color attributes because those are already part of our released css.
I tried not to change the icon.scss file because it contains mixins that are used all over the project. I wanted to reuse the icon-sizes map, but I was having issues because Sass sees xl as a string but 2xl as a number, and changing the whole map to be explicitly strings would result in dozens of changes in other components, so I just duplicated the icon-sizes map.
It's not ideal that our utils folder is a catch-all that is used to dump all our helper mixins and components. It is confusing to keep track of what's internal vs user facing, makes it hard to change things, and results in a bloated util.css file. We should improve this folder in the future.
Pending
[x] narrow down list of sizes (see comment)
[x] test with new svgs (after https://github.com/iTwin/iTwinUI-icons/pull/56)
[x] test with our components (example: passing <Icon> to <Button>)
I've changed the default size to medium because that is the most common use case. Users will need to opt into autoscaling with text using size='auto'.
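To make that opt-in concrete, a quick sketch based on the props described in this PR; the import paths are assumptions, while the component and prop names come from the description above:

import * as React from 'react';
import { Icon } from '@itwin/itwinui-react'; // assumed import path
import { SvgInfoCircular } from '@itwin/itwinui-icons-react'; // assumed icon package

// Default: fixed medium size
const defaultIcon = <Icon><SvgInfoCircular /></Icon>;

// Opt in to scaling with the surrounding text, with a semantic fill
const autoIcon = <Icon size='auto' fill='informational'><SvgInfoCircular /></Icon>;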
so what do we do about this? https://github.com/iTwin/iTwinUI/pull/854#issuecomment-1372811544
I've also tested in some of our components like <IconButton> and it displays fine, but the wrapping span ends up increasing in height a little bit.
Is it avoidable?
I haven't been able to figure out why this is happening. Need to investigate and play around a bit more. Maybe someone else can figure it out?
Should be fixed in bccf588. Does that look good?
| gharchive/pull-request | 2022-12-14T20:46:53 | 2025-04-01T06:39:00.181086 | {
"authors": [
"gretanausedaite",
"mayank99"
],
"repo": "iTwin/iTwinUI",
"url": "https://github.com/iTwin/iTwinUI/pull/854",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1321357822 | Implement normal mapping
Look at this Three.js tutorial about normal mapping: https://sbcode.net/threejs/normalmap/
Could we do something similar in the iTwin.js renderer? This could be valuable when rendering terrain, etc. if the source data has a normal map for a mostly-flat surface.
Requires:
[x] Ability to define and use normal maps on frontend
[ ] Support for generating pattern map coords in shader (@MarcNeely)
[x] Ability to define normal maps in RenderMaterial elements (@pmconne)
[x] Ensure MicroStation connector includes normal maps in material definitions (@pmconne)
[x] Include normal map info in tiles (@pmconne)
Yes, we can do normal mapping. It would require WebGL 2 because of max texture units, but that's no longer a problem except for iOS users who refuse to update to 15. IIRC the DGN converter already preserves normal maps (and bump, glow, etc. maps) from MicroStation when converting materials.
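As an aside, the WebGL 2 requirement can be feature-detected with a standard context check; this is plain browser API, nothing iTwin.js-specific:

// Returns true when the browser can create a WebGL 2 context
function supportsWebGL2(): boolean {
  const canvas = document.createElement('canvas');
  return canvas.getContext('webgl2') !== null;
}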
Appears to be high priority.
(for one user).
@DStradley will look at relevant QV code.
@PeterJBassett We are looking at adding support for normal maps -- could you provide us any information or documentation on how the UVs are stored in the data passed in from FutureOn subsea models?
cc @MarcNeely @pmconne
Hi @PeterJBassett Just checking in again -- we would like to implement normal mapping for FutureOn subsea models -- do you know what process you are following to create the UV coordinates from a FutureOn normal map?
Hello @MarcNeely - I apologise, I forgot to reply. Here is some information from a colleague.
So first, the UV wrapping should be set to ClampToEdgeWrapping and the filter to Linear.
We use tangent-space normal maps. Each pixel in the normal map is an encoded normal, where R = X, G = Y and B = Z.
The encoding follows the usual:
normal = texture2D( normalMap, vUv ).xyz * 2.0 - 1.0;
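In Three.js terms (the library the tutorial at the top of this issue uses), those settings translate to roughly the following sketch; the texture path is hypothetical:

import * as THREE from 'three';

// Hypothetical normal-map asset; wrapping and filtering per the settings above
const normalMap = new THREE.TextureLoader().load('assets/seabed_normal.png');
normalMap.wrapS = THREE.ClampToEdgeWrapping;
normalMap.wrapT = THREE.ClampToEdgeWrapping;
normalMap.minFilter = THREE.LinearFilter;
normalMap.magFilter = THREE.LinearFilter;

// Tangent-space normal map applied to a standard material
const material = new THREE.MeshStandardMaterial({ normalMap });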
The code for generating the UVs is quite simple, and I think it should be part of the exporter?
// Assumes gridX1/gridY1, segmentWidth/segmentHeight, widthHalf/heightHalf and
// the heights array are defined by the surrounding exporter code.
const vertices = new Float32Array(this.resource._heightmapWidth * this.resource._heightmapHeight * 3)
const normals = new Float32Array(this.resource._heightmapWidth * this.resource._heightmapHeight * 3)
const uvs = new Float32Array(this.resource._heightmapWidth * this.resource._heightmapHeight * 2)
let offset = 0
let offset2 = 0
let offset3 = 0
for (let iy = 0; iy < gridY1; iy++) {
const y = iy * segmentHeight - heightHalf
for (let ix = 0; ix < gridX1; ix++) {
const x = ix * segmentWidth - widthHalf
vertices[offset3] = x
vertices[offset3 + 1] = y
vertices[offset3 + 2] = heights[offset]
normals[offset3] = 0
normals[offset3 + 1] = 0
normals[offset3 + 2] = 1
uvs[offset2] = ix / gridX
uvs[offset2 + 1] = 1 - iy / gridY
offset3 += 3
offset2 += 2
offset++
}
}
@MarcNeely @DStradley @pmconne See above reply about UVs + normal maps.
@PeterJBassett
Thanks for the information.
A few more questions--
In the FutureOn example data we have, a pattern texture depicting the seabed is drawn alongside the normal map texture. Is it correct that this is displayed in addition to the normal map?
We also need to know where the UV coordinates for the pattern texture originate. Do you use the same UV coordinates as the normal map texture?
If they are not the same UV coordinates, can they be derived from them using a scaling factor or some kind of transformation?
What is the wrapping mode for the pattern texture?
cc @MarcNeely @DStradley @pmconne
@MarcNeely @DStradley @pmconne
Here is some further info from my colleague.
So, this will get a bit hairy.
There are actually two ways of rendering the user can choose from:
Simple color (in this case, we just use the color defined in “seabedColor”). In this case we just display the seabed with the normal map on top. We usually scale the normals a bit so that it looks a little better.
We use a texture; in this case, the textures can be found inside the “public” assets path of the software. The “texture key” is defined in “seaBedTextureName”. We default to “muddyDiffuse” if this value is not set.
const filenamesToLoad = {
rocksDiffuse: 'RockySeabed_dark_01_1024x1024.jpg',
rocksLightDiffuse: 'RockySeabed_light_01_1024x1024.jpg',
rocks2Diffuse: 'rocks2.jpg',
sandsDiffuse: 'SandySeabed_dark_01_1024x1024.jpg',
sandsLightDiffuse: 'SandySeabed_light_01_1024x1024.jpg',
muddyDiffuse: 'MuddySeabed_dark_01_1024x1024.jpg',
muddyLightDiffuse: 'MuddySeabed_light_01_1024x1024.jpg',
desertDiffuse: 'DesertSand_01_1024x1024.jpg',
}
for (const key in filenamesToLoad) {
const filename = filenamesToLoad[key]
const texture = textureLoader.load('/assets/textures/seabed/' + filename, (texture) => {
threeVisualizer.requestRender()
})
texture.wrapS = RepeatWrapping
texture.wrapT = RepeatWrapping
texture.encoding = sRGBEncoding
this.seabedTextures[key] = texture
}
So for example, using my local dev: https://futureon-designer.lvh.me/assets/textures/seabed/RockySeabed_dark_01_1024x1024.jpg
If we use the texture, the shader is a bit particular, as we try to tile it depending on the depth so that it does not look too bad. So basically we generate the UVs dynamically:
{
shader.vertexShader =
`
//attribute vec4 homogeneousPosition;
uniform vec2 uvOffsetCustom;
//uniform mat4 modelViewProjectionMatrix;
uniform float orthographicFakeDistance;
varying vec2 vUvCustom;
varying float vDepth;
`
+
insertStringAfterSubstring('<fog_vertex>', shader.vertexShader, `
vec2 modelPosition = (modelMatrix * vec4(position, 0.0)).xy; // 0.0 is used to ignore translation
vUvCustom = uvOffsetCustom + modelPosition;
vec4 viewPosition = modelViewMatrix * vec4(position, 1.0);
if (isOrthographic) {
vDepth = orthographicFakeDistance;
} else {
vDepth = -viewPosition.z;
}
//#if (HAS_PRECALCULATED_HOMOGENEOUS_POSITION == 1)
// gl_Position = homogeneousPosition;
//#else
//gl_Position = modelViewProjectionMatrix * vec4(position, 1.0);
// WEIRD! The modelViewProjectionMatrix had more artifacts than in-shader "projectionMatrix * viewPosition"; flashing artifacts for the infinite seabed,
// and sometimes weird clipping or something making the quad concave. Bug story for future reference: To fix infinite seabed flashing, I wrote code to update
// the position attribute to be the frustum-to-infinite-seabed intersection. Still had flashing. So I did the homogeneous position calculation on CPU and
// passed that directly in instead. That fixed it. Later, I looked at the water, and noticed that even though I don't do any special frustum intersection or
// such (I just set a 1x1 quad's scale to be camera.far * 2), it didn't have any flashing artifacts. I tried doing "projectionMatrix * viewPosition" in
// the shader instead of passing it as a uniform for the seabed material, to match the water material, and sure enough, that was what caused the flashing
// for some reason. Not sure why, but perhaps the uniforms have less precision than the in-shader calculations here? Or perhaps a large number of uniforms
// just messes things up without throwing an error? (Edit: I checked and .capabilities says we have 1024 uniforms available, so weird if it would mess up this way.)
// Take note of this for the future. However, the frustum intersection still makes UVs precise on the seabed, so I will keep that. However, the homogeneous
// position attribute doesn't seem like it's needed, hence I commented it out.
gl_Position = projectionMatrix * viewPosition;
//#endif
`)
}
// Extend fragment shader
{
shader.fragmentShader =
`
varying vec2 vUvCustom;
varying float vDepth;
`
+
replaceSubstring('#include <map_fragment>', shader.fragmentShader, `
float logDepth = log2(vDepth);
float repetitions = 3.0;
vec4 texelColor = mix(
texture2D(map, vUvCustom / clamp(pow(2.0, floor(logDepth)), float(MIN_TILE_SIZE), float(MAX_TILE_SIZE)) * repetitions),
texture2D(map, vUvCustom / clamp(pow(2.0, floor(logDepth) + 1.0), float(MIN_TILE_SIZE), float(MAX_TILE_SIZE)) * repetitions),
fract(logDepth)
);
diffuseColor *= texelColor;
`)
}
Hi @PeterJBassett --
The texture-mapping mode you describe above appears feasible to implement in iTwin.js, in addition to regular normal mapping.
Do you know the origin of this texture mapping technique and its name? Is this something you would like to see implemented in iTwin.js? (In addition to normal mapping).
cc @MarcNeely @DStradley @pmconne
Hello @markschlosseratbentley @MarcNeely @DStradley @pmconne
Apologies for the delay in replying. There is no standard name for the texture-mapping technique.
This is the reply from my colleague:
If you mean the way we keep the seabed detailed as we zoom in and out, it's a shader I wrote, found in TerrainMaterial.js. Not sure if it has a name. It basically shaderized your old LOD mesh method, but with fading between the two closest levels to avoid popping as you zoom.
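Distilled from the fragment-shader excerpt earlier in this thread, the blend works roughly like this CPU-side sketch of the same math; sampleAt is a stand-in for the texture lookup, and the MIN_TILE_SIZE/MAX_TILE_SIZE clamp is omitted for brevity:

// Blend two power-of-two tile sizes chosen from view depth, per the shader above
function blendedSeabedSample(
  uv: [number, number],
  depth: number,
  sampleAt: (u: number, v: number) => number,
): number {
  const logDepth = Math.log2(depth);
  const repetitions = 3.0;
  const near = Math.pow(2, Math.floor(logDepth));    // coarser tile size
  const far = Math.pow(2, Math.floor(logDepth) + 1); // next tile size up
  const a = sampleAt((uv[0] / near) * repetitions, (uv[1] / near) * repetitions);
  const b = sampleAt((uv[0] / far) * repetitions, (uv[1] / far) * repetitions);
  const t = logDepth - Math.floor(logDepth);         // fract(logDepth)
  return a * (1 - t) + b * t;                        // mix(a, b, t)
}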
In addition to the frontend code, we need normal maps transferred from the backend. We also need scale.
Still in progress.
@MarcNeely please let me know what I can help with.
Hello @markschlosseratbentley, thanks for your update; apologies for not replying sooner.
Can you provide any more info on how to use the normal-mapping feature, e.g. how to input the normal map image into iTwin?
@PeterJBassett supply a normal map when creating a RenderMaterialElement.
| gharchive/issue | 2022-07-28T19:00:15 | 2025-04-01T06:39:00.197745 | {
"authors": [
"PeterJBassett",
"markschlosseratbentley",
"pmconne"
],
"repo": "iTwin/itwinjs-core",
"url": "https://github.com/iTwin/itwinjs-core/issues/3989",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |