id (string, lengths 4–10) | text (string, lengths 4–2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---|
674758983
|
submit action use job_tool and job_source
change submit message body to use job_tool and job_source as attributes.
OK
|
gharchive/pull-request
| 2020-08-07T05:02:00 |
2025-04-01T04:35:46.814255
|
{
"authors": [
"rwang5688"
],
"repo": "rwang5688/jobs-list-aws",
"url": "https://github.com/rwang5688/jobs-list-aws/pull/29",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
674835661
|
processJob use attributes with job_ prefix
processJob use attributes with job_ prefix.
fix source.tar.gz so that it extracts out to "source" directory.
clean up comments and error messages.
OK
|
gharchive/pull-request
| 2020-08-07T07:55:27 |
2025-04-01T04:35:46.815288
|
{
"authors": [
"rwang5688"
],
"repo": "rwang5688/jobs-list-aws",
"url": "https://github.com/rwang5688/jobs-list-aws/pull/31",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
692552808
|
change submitTask for real XVSA
change submitTask for real XVSA
OK
|
gharchive/pull-request
| 2020-09-04T00:49:22 |
2025-04-01T04:35:46.815982
|
{
"authors": [
"rwang5688"
],
"repo": "rwang5688/task-list-aws-xcalibyte-com-cn",
"url": "https://github.com/rwang5688/task-list-aws-xcalibyte-com-cn/pull/14",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
741882716
|
Metrics: PascalBoxes_Precision/mAP@0.5IOU: 0.0 PascalBoxes_PerformanceByCategory/AP@0.5IOU/drop_inlet: 0.0
Is your feature request related to a problem? Please describe.
mAP is always 0 after validating on every epoch.
I am training the efficientdet-d0 model from pretrained COCO weights, using the repo config file, on a custom dataset with only 1 class and 1 object per image. I am just resetting the 90-class COCO head to 1 class; the rest of the model trains as-is, and there is no freezing of layers.
Describe the solution you'd like
A mAP value that makes sense.
Describe alternatives you've considered
Do I need to consider freezing the layers?
You are using a custom dataset, I cannot help. Please stop creating issues about this.
I did test a one-class scenario w/ the TFW validator. After only two epochs of training on VOC with all but bicycle filtered:
PascalBoxes_Precision/mAP@0.5IOU: 0.6753591421661145
PascalBoxes_PerformanceByCategory/AP@0.5IOU/bicycle: 0.6753591421661145
If you do want help with your custom datasets / applications I charge 225 USD / hr for that sort of work. I cannot provide assistance otherwise. I only support fixing bugs in the models here w/ provided open datasets and impl that I have verified myself.
I think you have got me wrong. I am not saying that it's a bug; I am just seeking help from the wider audience in case they have hit the same issue.
Sorry for the misunderstanding.
About the last post: it was labeled as a bug by mistake. I am just asking about the possible reasons for this.
I am sorry to have wasted your time and caused you inconvenience.
No worries. I'm really trying to keep the issues down to just bugs and
feature requests right now. It consumes too much time weeding through valid
issues (for me to fix) vs requests for help, etc.
For community help / discussions I'm hoping to activate github Discussions
beta for this repo very soon, it doesn't quite qualify yet.
Yes, I am really sorry.
|
gharchive/issue
| 2020-11-12T19:51:45 |
2025-04-01T04:35:46.869439
|
{
"authors": [
"Ekta246",
"rwightman"
],
"repo": "rwightman/efficientdet-pytorch",
"url": "https://github.com/rwightman/efficientdet-pytorch/issues/122",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1530190633
|
[BUG] EfficientNetFeatures behaves differently when loading in a checkpoint from timm-0.4.5
Describe the bug
EfficientNetFeatures behaves differently when loading a checkpoint from timm-0.4.5: the model outputs unexpectedly large numbers, on the order of 1e21. I believe this issue is solely related to the changes in efficientnet.
To Reproduce
Steps to reproduce the behavior:
https://colab.research.google.com/drive/11xz5IhwqAqHj9-XAIP17yVIuJsLqeYYJ?usp=sharing
set the pip install version to the latest version
run the demo and print the outputs
to get the activation of the efficientnetfeature backbone, run
from util.misc import NestedTensor
samples = NestedTensor.from_tensor_list(samples)
model_pc.detr.backbone[1](samples)
Expected behavior
the model would output NaN and the backbone would output large numbers on the latest version
Screenshots
N/A
Desktop (please complete the following information):
OS: Ubuntu 18.04
This repository version pip 0.3.1
It's the FrozenBatchNorm that's used... EfficientNet was changed to use Norm + Act layer combo modules so that alternate norm + act layers could be easily swapped in, but this conflicts with use cases like https://github.com/ashkamath/mdetr/blob/ea09acc44ca067072c4b143b726447ee7ff66f5f/models/backbone.py#L20 that overwrite the BatchNorm layer w/o an activation.
Is there a way to set up a warning for this at least? It was pretty confusing when loading previous weights and it breaks.
@kyleliang919 unfortunately, when code modifies layers in place like this, there is no way to warn; a downside of PyTorch's flexibility, and unfortunate that PyTorch + torchvision have evolved to rely on mutations like this instead of properly supporting all three 'modes' of BatchNorm in one layer (i.e. supporting 'frozen' and 'sync' as modes in one nn.BatchNormxx module family).
FYI, a PR for helpers that make handling this easy (works for any case) https://github.com/rwightman/pytorch-image-models/pull/1633
You could patch mdetr.models.backbone.replace_bn with timm.layers.freeze_batch_norm_2d
Something like below, before creating any models, etc.:
mdetr.models.backbone.replace_bn = timm.layers.freeze_batch_norm_2d
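A minimal sketch of that patch, assuming the mdetr checkout is importable as `mdetr` (the import paths here are illustrative; adjust them to your checkout layout):

```python
import timm.layers
import mdetr.models.backbone as mdetr_backbone

# Swap mdetr's BN-replacement helper for timm's freeze helper, which
# understands timm's combined Norm+Act modules. This must run before
# any model is constructed, since replace_bn is applied at build time.
mdetr_backbone.replace_bn = timm.layers.freeze_batch_norm_2d

# ... build the detr/mdetr model as usual afterwards ...
```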
Thanks, it works for me.
|
gharchive/issue
| 2023-01-12T06:41:54 |
2025-04-01T04:35:46.877446
|
{
"authors": [
"kyleliang919",
"rwightman"
],
"repo": "rwightman/pytorch-image-models",
"url": "https://github.com/rwightman/pytorch-image-models/issues/1631",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1830140456
|
🛑 Backend is down
In de403d7, Backend ($SECRET_BACKEND_SITE) was down:
HTTP code: 500
Response time: 29315 ms
Resolved: Backend is back up in 630498c.
|
gharchive/issue
| 2023-07-31T22:26:20 |
2025-04-01T04:35:46.907865
|
{
"authors": [
"ryanlid"
],
"repo": "ryanlid/upptime",
"url": "https://github.com/ryanlid/upptime/issues/637",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
202128994
|
[section] "Don't over optimize" explanation
Reference: dont-over-optimize
Since all the examples use ES6+, an argument about old browsers doesn't look significant IMHO.
Maybe a different argument would be better. What do you think?
You're right that the example is older, but it's one that some JavaScript developers who started with the language a while back will recognize as an old optimization. It's no longer a needed optimization, and the central point of this subsection is to suggest that you check whether you're currently doing anything like this in your code that is actually already browser-optimized. See my project on typed arrays for example: https://github.com/ryanmcdermott/typed-arrays
|
gharchive/issue
| 2017-01-20T12:28:29 |
2025-04-01T04:35:46.910119
|
{
"authors": [
"lnfnunes",
"ryanmcdermott"
],
"repo": "ryanmcdermott/clean-code-javascript",
"url": "https://github.com/ryanmcdermott/clean-code-javascript/issues/163",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
506557621
|
Could not open a connection to your authentication agent
Describe the bug
Probably missing something obvious here. I'm running Git bash on Windows. Was curious to try this out and followed the README instructions. All good until myos create. Keys are created fine but then got the error in the title. Same if I try to connect:
$ myos connect testmyos
Could not open a connection to your authentication agent.
I am not using OpenSSH but PuTTY instead. Tried to spawn the PuTTY SSH authentication agent (Pageant) to no avail.
Host OS (please complete if relevant):
OS: Git Bash on Windows 10
@simoneb sorry about the late reply. I have a Windows computer at home so I'll try and test this week. Windows and docker never play as nice as I would hope, but crossing my fingers.
Thanks and no rush, I was really just playing around with it.
|
gharchive/issue
| 2019-10-14T10:05:04 |
2025-04-01T04:35:46.915938
|
{
"authors": [
"rylandg",
"simoneb"
],
"repo": "rylandg/myos",
"url": "https://github.com/rylandg/myos/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1365126836
|
imEncodeWithParams does not work on M1 Mac
https://github.com/ryoppippi/zigcv/blob/619e465671366e24862bab975ad8c6641e75743a/src/imgcodecs.zig#L233-L235
When executing this, it crashes
I tested on v0.10.0-dev.4060+61aaef0b0 and it works, so I'm closing this issue.
|
gharchive/issue
| 2022-09-07T20:05:16 |
2025-04-01T04:35:46.919899
|
{
"authors": [
"ryoppippi"
],
"repo": "ryoppippi/zigcv",
"url": "https://github.com/ryoppippi/zigcv/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
188491489
|
Merging of outstanding pull requests
I'm currently maintaining a fork of this repository with a selection of the outstanding PRs merged in. It would be great to be able to re-adopt upstream.
Any chance the outstanding PRs could be merged?
@ryotarai bump
@ryotarai Is there anything I can do to help you get the outstanding PRs merged?
In frustration I've created: https://github.com/sampointer/fluent-plugin-cloudwatch-ingest
It addresses most of the issues I've been struggling to get merged (both my PRs and others)
@sampointer Is this issue still active?
fluent-plugin-cloudwatch-logs is maintained under fluent-plugins-nursery. This project is not "inactive".
Closing.
|
gharchive/issue
| 2016-11-10T12:06:23 |
2025-04-01T04:35:46.923832
|
{
"authors": [
"cosmo0920",
"sampointer"
],
"repo": "ryotarai/fluent-plugin-cloudwatch-logs",
"url": "https://github.com/ryotarai/fluent-plugin-cloudwatch-logs/issues/57",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
479829281
|
chore: go-demo-6 to 1.0.328
chore: Promote go-demo-6 to version 1.0.328
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approvers:
If they are not already assigned, you can assign the PR to them by writing /assign in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
OWNERS
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
|
gharchive/pull-request
| 2019-08-12T20:13:44 |
2025-04-01T04:35:46.953617
|
{
"authors": [
"ryspnc"
],
"repo": "ryspnc/environment-jx-rocks-staging",
"url": "https://github.com/ryspnc/environment-jx-rocks-staging/pull/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
526982490
|
Set option 'arm_64bit' in config.txt
When I worked on lesson 1, I installed Raspbian Buster (release date 2019-09-26). There are 3 kernel images under the boot partition: kernel.img (32-bit, older RPi), kernel7.img (32-bit, RPi 2 & 3) and kernel7l.img (32-bit, RPi 4).
After I deleted kernel7.img, copied kernel8.img, modified config.txt according to the tutorial, and booted my Pi 3B, no UART message was printed. However, when I deleted all kernel*.img files except kernel8.img, it worked. I searched the Internet for a while and couldn't figure out why. Maybe the latest firmware has changed the boot sequence?
Then I saw the arm_64bit option in the official document.
arm_64bit
If set to non-zero, forces the kernel loading system to assume a 64-bit kernel, starts the processors up in 64-bit mode, and sets kernel8.img to be the kernel image loaded, unless there is an explicit kernel option defined in which case that is used instead. Defaults to 0 on all platforms. NOTE: 64-bit kernels must be uncompressed image files.
Note that the 64-bit kernel will only work on the Pi4, Pi3, and Pi2B rev1.2 boards with latest firmware.
I tried it and it worked well without deleting any other kernel*.img. Maybe setting the arm_64bit option is a better choice for current firmware?
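For reference, a minimal config.txt along those lines; the values follow the official documentation quoted above, and the explicit kernel line is optional:

```
# /boot/config.txt
arm_64bit=1         # start the cores in 64-bit mode and default to kernel8.img
#kernel=kernel8.img # optional explicit kernel override
```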
Thanks, @legendlc for reporting this.
I made a fix in this commit.
I prefer to add instructions to delete all kernel*.img files rather than use the arm_64bit option, because I am not sure how this option is going to work with different versions of Raspberry Pi boards as well as different firmware versions. Deleting all kernel*.img files looks like a more reliable way to me. Also, the default config.txt is used in a lot of different places (source of all lessons, exercises, translations, etc.) and updating all of them could be painful.
|
gharchive/issue
| 2019-11-22T04:36:49 |
2025-04-01T04:35:47.051334
|
{
"authors": [
"legendlc",
"s-matyukevich"
],
"repo": "s-matyukevich/raspberry-pi-os",
"url": "https://github.com/s-matyukevich/raspberry-pi-os/issues/158",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1296528437
|
broken pipe
I have been encountering this issue:
File "/usr/local/bin/uro", line 10, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/uro/uro.py", line 151, in main
print(host + path + dict_to_params(param))
BrokenPipeError: [Errno 32] Broken pipe
Traceback (most recent call last):
File "/usr/local/bin/uro", line 10, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/uro/uro.py", line 161, in main
print(host + path)
BrokenPipeError: [Errno 32] Broken pipe
Any idea why would it be?
Apparently this isn't uro's fault; it happens when the program on the other end of the stdin/stdout pipe exits abruptly or does something weird.
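The usual fix on the CLI side is to treat a closed stdout pipe as a normal exit. A minimal sketch (not uro's actual code) of a filter that tolerates its reader (e.g. `uro urls.txt | head`) going away early:

```python
import os
import sys

try:
    for line in sys.stdin:
        print(line.strip())
except BrokenPipeError:
    # Point stdout at devnull so the interpreter's shutdown flush
    # doesn't raise a second BrokenPipeError.
    os.dup2(os.open(os.devnull, os.O_WRONLY), sys.stdout.fileno())
    sys.exit(1)
```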
|
gharchive/issue
| 2022-07-06T21:53:15 |
2025-04-01T04:35:47.059017
|
{
"authors": [
"marcelo321",
"s0md3v"
],
"repo": "s0md3v/uro",
"url": "https://github.com/s0md3v/uro/issues/23",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
236031349
|
set number of active requests
Is there any option to set the number of active requests? The site crashed because of the huge number of simultaneous requests. Options like these would be nice in the future:
interval - between 2 requests
maxConcurrency - how many requests are in progress
Thank you
#129
Hi @balazserdos
There is no such option now
Thanks for the suggestion, I'll consider adding it in the future
how to bypass login or download content after login? Thanks
Hi @mtr980. Your question is not related to the subject of this ticket.
Please try searching the issues or create a separate issue for your question.
BTW, see https://github.com/s0ph1e/node-website-scraper#request
The requestConcurrency option was released in version 3.3.0
For anyone looking for a bit more flexibility, here's a plugin with randomized min/max timeouts between requests:
https://benjaminhorn.io/code/request-throttle-for-npm-package-website-scraper/
Hi @erickhavel
Great article 👍
Thank you, I'll add a link to your solution to the readme so others can find it there
|
gharchive/issue
| 2017-06-14T22:48:33 |
2025-04-01T04:35:47.063671
|
{
"authors": [
"aivus",
"balazserdos",
"erickhavel",
"mtr980",
"s0ph1e"
],
"repo": "s0ph1e/node-website-scraper",
"url": "https://github.com/s0ph1e/node-website-scraper/issues/219",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
161730182
|
Escape url string because NSURL doesn't do it.
I found that NSURL(string) didn't escape the string, so a valid URL with accents or emoji will return nil.
Let me know if you find it helpful for Arrow ;)
That's a really nice catch!!!
|
gharchive/pull-request
| 2016-06-22T16:37:42 |
2025-04-01T04:35:47.087729
|
{
"authors": [
"damien-nd",
"s4cha"
],
"repo": "s4cha/Arrow",
"url": "https://github.com/s4cha/Arrow/pull/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
432535110
|
HandyJSON and Swift 5
Undefined symbol: _swift_getFieldAt
Please update HandyJSON for compatibility with Swift 5
Here is the solution for this: update to the HandyJSON 5.0.0-beta.1 branch.
[HandyJSON issue 307](https://github.com/alibaba/HandyJSON/issues/307)
With regard to using HandyJson 5.0.0-beta.1: that build isn't compatible with ScClient 1.0.8 (which requires HandyJson 4.2.0). Any possibilities of updating ScClient to support the HandyJson beta build?
Added support for Swift 5. I am really sorry for the delayed response :) Please try using it and let me know if you are facing any issues.
|
gharchive/issue
| 2019-04-12T12:11:09 |
2025-04-01T04:35:47.130527
|
{
"authors": [
"ZihanCheng-CGLand",
"ambrusha",
"ryan-peters",
"sachinsh76"
],
"repo": "sacOO7/socketcluster-client-swift",
"url": "https://github.com/sacOO7/socketcluster-client-swift/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1270432115
|
Favicons and page title
I'd remove all the default React favicons :P I don't have a good one to swap in, but I'd rather have none than the React one 😛
I'd change the title of the page to PredictProtein Embed
so..?
|
gharchive/issue
| 2022-06-14T08:18:17 |
2025-04-01T04:35:47.132131
|
{
"authors": [
"sacdallago",
"tihanikolova"
],
"repo": "sacdallago/embed.predictprotein.org",
"url": "https://github.com/sacdallago/embed.predictprotein.org/issues/6",
"license": "AFL-3.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
211426557
|
improve width calculation (autoWidth option true)
Fix issue of missing last slide when option autoWidth is set 'true'.
The problem (the last slide breaks into the next line and is therefore invisible) shows in every browser but Chrome and has its origin in the rounding of width values (the width() function returns a rounded value). My approach is to add up the exact widths of all slides and then round up the result, making sure enough space is available.
I have the same problem.
My config:
item: 4, loop: true, pager: false, autoWidth: true
Block width: 200px.
|
gharchive/pull-request
| 2017-03-02T15:34:47 |
2025-04-01T04:35:47.134070
|
{
"authors": [
"miashadowblue",
"prolisk"
],
"repo": "sachinchoolur/lightslider",
"url": "https://github.com/sachinchoolur/lightslider/pull/318",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1697815308
|
Confusion about condition statement
Describe The Bug
a = get_env ...
b = get_env ...
if not ${a} and ${b}
...
else
...
end
gets the error:
Error while running duckscript: Source: Unknown Line: 36 - Unexpected 'and'
Besides,
not false and false
gives true is a little confusing... Although I can kind of understand it.
Hmmm, but why
# Unexpected value: false
if true and ( not false )
# ok
if false and ( not false )
It seems eval_condition_for_slice only supports literals...
sorry for the confusion, but conditions like in the 'if' command are built from either a:
variable
command
condition
in your case, i figure you think you have a condition, but a condition is defined by "A condition statement is made up of values, or/and keywords and '('/')' groups."
see docs at:
https://github.com/sagiegurari/duckscript/blob/master/docs/sdk.md#std__flowcontrol__If
the 'not' is not a condition keyword, it's also a command:
https://github.com/sagiegurari/duckscript/blob/master/docs/sdk.md#std__Not
which i guess is where the confusion is coming from.
so when you have a condition phrase, you can't use embedded commands inside, since i have no idea where that command's args stop and the condition phrase continues.
so you need to do
a = get_env ...
b = get_env ...
not_a = not ${a}
if ${not_a} and ${b}
...
else
...
end
Yes, thanks for your answer; I've already fixed my problem this way.
I don't have a strong opinion, and haven't learned much about the design decisions of duckscript. Do you think making "not" a condition keyword would be more user-friendly?
conditions are not part of duckscript. they are commands on their own.
it has its limitations but also extensibility; the main goal was to be a super super simple lang with nothing in it, just simple 1-liner syntax.
so i agree it could make things simpler, but it is also not that clear where the 'not' ends, which would make things more complex at the same time.
|
gharchive/issue
| 2023-05-05T15:33:16 |
2025-04-01T04:35:47.203711
|
{
"authors": [
"sagiegurari",
"xxchan"
],
"repo": "sagiegurari/duckscript",
"url": "https://github.com/sagiegurari/duckscript/issues/326",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1590376910
|
🛑 Aquafinest Ro ( Gajanan Praskar ) is down
In 5d4b548, Aquafinest Ro ( Gajanan Praskar ) (https://aquafinestro.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Aquafinest Ro ( Gajanan Praskar ) is back up in 94b89b0.
|
gharchive/issue
| 2023-02-18T15:24:45 |
2025-04-01T04:35:47.206516
|
{
"authors": [
"vanpariyar"
],
"repo": "sahajananddigital/status",
"url": "https://github.com/sahajananddigital/status/issues/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2117570045
|
🛑 Evon Stone Ceramics is down
In 9f24f6a, Evon Stone Ceramics (https://evonceramics.com) was down:
HTTP code: 500
Response time: 313 ms
Resolved: Evon Stone Ceramics is back up in 64f7713.
|
gharchive/issue
| 2024-02-05T02:55:40 |
2025-04-01T04:35:47.208867
|
{
"authors": [
"vanpariyar"
],
"repo": "sahajananddigital/status",
"url": "https://github.com/sahajananddigital/status/issues/535",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1289393632
|
🛑 urSIGN is down
In 59fe9f7, urSIGN (https://www.ursign.ch) was down:
HTTP code: 403
Response time: 54 ms
Resolved: urSIGN is back up in 04258a5.
|
gharchive/issue
| 2022-06-29T23:13:19 |
2025-04-01T04:35:47.228098
|
{
"authors": [
"m43nu"
],
"repo": "sahli-interactive/status.sahli-interactive.ch",
"url": "https://github.com/sahli-interactive/status.sahli-interactive.ch/issues/199",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1375835185
|
Twitter Post with video and picture
So now with Twitter Premium they can post a video with a picture too! But it doesn't work when using the tvdl shortcut. Hopefully you can update it. Love your shortcut <3
Hi @dinouse, yes that's a feature that I am currently considering. It's a little more complicated though.
|
gharchive/issue
| 2022-09-16T11:46:36 |
2025-04-01T04:35:47.229088
|
{
"authors": [
"dinouse",
"saifalfalah"
],
"repo": "saifalfalah/tvdl-2",
"url": "https://github.com/saifalfalah/tvdl-2/issues/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1207812825
|
Database migration fails with Illuminate\Database\QueryException
Hello, I ran into an error when trying to migrate the database: Illuminate\Database\QueryException
SQLSTATE[HY000]: General error: 1025 Error on rename of '.\whatsapp_server#sql-107c_316' to '.\whatsapp_server\cms_settings' (errno: 13 "Permission denied") (SQL: alter table cms_settings add label varchar(255) null)
at C:\xampp\htdocs\laravel-whatsapp-server\vendor\laravel\framework\src\Illuminate\Database\Connection.php:712
708▕ // If an exception occurs when attempting to run a query, we'll format the error
709▕ // message to include the bindings with SQL, which will make this exception a
710▕ // lot more helpful to the developer instead of just the database's errors.
711▕ catch (Exception $e) {
➜ 712▕ throw new QueryException(
713▕ $query, $this->prepareBindings($bindings), $e
714▕ );
715▕ }
716▕ }
1 C:\xampp\htdocs\laravel-whatsapp-server\vendor\doctrine\dbal\lib\Doctrine\DBAL\Driver\PDO\Exception.php:18
Doctrine\DBAL\Driver\PDO\Exception::("SQLSTATE[HY000]: General error: 1025 Error on rename of '.\whatsapp_server#sql-107c_316' to '.\whatsapp_server\cms_settings' (errno: 13 "Permission denied")")
2 C:\xampp\htdocs\laravel-whatsapp-server\vendor\doctrine\dbal\lib\Doctrine\DBAL\Driver\PDOStatement.php:119
Doctrine\DBAL\Driver\PDO\Exception::new(Object(PDOException))
Check your database user's permissions, mate.
Check your database user's permissions, mate.
Creating the other tables works; it's only when it reaches cms_settings that I get that error. Any idea what the problem is?
|
gharchive/issue
| 2022-04-19T06:30:08 |
2025-04-01T04:35:47.233963
|
{
"authors": [
"Helwieahmad",
"zakirkun"
],
"repo": "saifulcoder/laravel-whatsapp-server",
"url": "https://github.com/saifulcoder/laravel-whatsapp-server/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1350900006
|
We NEED a document
As the title says, we need documentation to learn about the language.
Currently there is no product and hence no documentation. And this repo is kind of deprecated.
|
gharchive/issue
| 2022-08-25T13:26:39 |
2025-04-01T04:35:47.234896
|
{
"authors": [
"HZDZ9495",
"saihaze"
],
"repo": "saihaze/cific",
"url": "https://github.com/saihaze/cific/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
488061406
|
make error
Hi~ there is no Makefile, so "make" fails.
Hi, you need to run cmake first. Check README.md.
|
gharchive/issue
| 2019-09-02T08:19:15 |
2025-04-01T04:35:47.239421
|
{
"authors": [
"Hjy20255",
"martyone"
],
"repo": "sailfishos/RemoteDisplay",
"url": "https://github.com/sailfishos/RemoteDisplay/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
893444165
|
jose library imports fail
fastfed-node-sdk, npm run watch:
This appears to be caused by the upgrade:
- "jose": "^1.17.1",
+ "jose": "^3.11.6",
Reverting to jose 2.0.5 (latest in the still-supported 2.xx series) appears to resolve it. Will do that for now.
|
gharchive/issue
| 2021-05-17T15:21:06 |
2025-04-01T04:35:47.241213
|
{
"authors": [
"matt-domsch-sp"
],
"repo": "sailpoint-oss/fastfed-sdk",
"url": "https://github.com/sailpoint-oss/fastfed-sdk/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1590531084
|
Mail Alert stops code from running
How can I make the mail alert not stop the code from running, while also sending me an email whenever the number of people goes over the threshold?
We need to use a background thread to send the alerts; fixed in this commit.
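A minimal sketch of that approach (names and SMTP details are illustrative, not the project's actual API): the alert goes out from a daemon thread, so the frame-processing loop never blocks on the SMTP round-trip.

```python
import smtplib
import threading

def send_alert(count: int) -> None:
    # Hypothetical SMTP settings for illustration only.
    msg = f"Subject: People threshold exceeded\n\nCurrent count: {count}"
    with smtplib.SMTP("localhost", 25) as smtp:
        smtp.sendmail("alerts@example.com", ["you@example.com"], msg)

def send_alert_async(count: int) -> None:
    # Daemon thread: doesn't keep the process alive on shutdown.
    threading.Thread(target=send_alert, args=(count,), daemon=True).start()

# In the counting loop:
# if count > THRESHOLD:
#     send_alert_async(count)  # returns immediately
```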
|
gharchive/issue
| 2023-02-19T02:09:38 |
2025-04-01T04:35:47.242538
|
{
"authors": [
"XxXxNooBxXxX",
"saimj7"
],
"repo": "saimj7/People-Counting-in-Real-Time",
"url": "https://github.com/saimj7/People-Counting-in-Real-Time/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
873929218
|
To add calculator for calculating Length, Diagonal and Breadth of Rectangle and also adding steps for the same
Is your feature request related to a problem? Please describe.
Add a calculator for computing the length, diagonal, and breadth of a rectangle, and add steps for the same; right now only the formula is shown, and the calculation is not performed.
Hi, I want to work on this issue. I will start working on it as soon as I get assigned!! I am a part of GSSoC'21.
Kindly assign this issue to me!
Discord Username : Abhijeet Sinha(P)
Discord Tag: #4018
|
gharchive/issue
| 2021-05-02T12:22:51 |
2025-04-01T04:35:47.247935
|
{
"authors": [
"abhijeet141"
],
"repo": "sairish2001/makesmatheasy",
"url": "https://github.com/sairish2001/makesmatheasy/issues/1615",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
54376251
|
LSNBLDR-455 - When importing from cartridge, HTML should be inline rather than linked
I'm going to do this a different way, that won't break imports from other systems.
|
gharchive/pull-request
| 2015-01-14T21:02:47 |
2025-04-01T04:35:47.252070
|
{
"authors": [
"clhedrick",
"jonespm"
],
"repo": "sakaiproject/sakai",
"url": "https://github.com/sakaiproject/sakai/pull/41",
"license": "ECL-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1934247767
|
Isn't this project maintained?
Isn't this project maintained?
Thank you for reaching out. I have temporarily paused development on this project, but I plan to resume work on it soon, possibly next month, Insha'Allah. Please note that there is an advanced version of the project with significant changes to the current project structure, which will be uploaded to the "dev" branch.
|
gharchive/issue
| 2023-10-10T03:22:38 |
2025-04-01T04:35:47.267537
|
{
"authors": [
"LiJoeAllen",
"salah-rashad"
],
"repo": "salah-rashad/blueprint_system",
"url": "https://github.com/salah-rashad/blueprint_system/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
178578768
|
optional _defaultOptions
_defaultOptions is now "TypeScript-optional", which makes Angular 2 AoT work with this library.
Could you create a new release and publish it to npm? Thanks!
|
gharchive/pull-request
| 2016-09-22T10:59:20 |
2025-04-01T04:35:47.271119
|
{
"authors": [
"DirkWillem",
"accnops"
],
"repo": "salemdar/angular2-cookie",
"url": "https://github.com/salemdar/angular2-cookie/pull/20",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1710152373
|
Fix media upgrade not working on transcript converted to live chat
When the secure transcript is upgraded to live chat, the root coordinator doesn't know about this and thinks it is still in secure conversations. With this commit, at the time the view models are swapped, a message is sent through the delegate chain to correctly assign the type of engagement to the coordinator. This in turn means that the coordinator can handle upgrades correctly.
MOB-2192
!merge
|
gharchive/pull-request
| 2023-05-15T13:54:05 |
2025-04-01T04:35:47.272321
|
{
"authors": [
"gersonnoboa"
],
"repo": "salemove/ios-sdk-widgets",
"url": "https://github.com/salemove/ios-sdk-widgets/pull/623",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1434381058
|
Add phone input
Fixes #371
You forgot to run pnpm i after removing that one dependency; other than that, looks good!
When I go to checkout there is a 404. Please check https://watch.screencastify.com/v/bZottsmZZMUV2EWhCAA0
|
gharchive/pull-request
| 2022-11-03T09:53:03 |
2025-04-01T04:35:47.273793
|
{
"authors": [
"bmigirl",
"michalina-graczyk",
"mmiszy"
],
"repo": "saleor/react-storefront",
"url": "https://github.com/saleor/react-storefront/pull/601",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1917459479
|
Allow add multiple codes per voucher
I want to merge this change because it adds the possibility of creating multiple codes for vouchers.
Part of #14409
Docs: 1000
Acceptance Criteria
[x] The voucher can be created, updated, and deleted only by users and apps with manage discounts permission.
[x] Allow creating vouchers with multiple codes using the new field addCodes.
[x] Voucher/s queries return a paginated list of codes.
[x] Ensure that the old voucher API (creating/updating a voucher code using the code input) and queries can be handled as previously.
[x] Voucher usage is calculated as the sum of used codes.
[x] Allow to set singleUse flag to true to make codes single-use and deactivate code when it is used.
[x] Allow to export voucher codes via exportVoucherCodes mutation.
[x] Allow to delete codes using voucherCodeBulkDelete mutation.
[x] Ensure that old vouchers are migrated to new db models.
Impact
[x] New migrations
[x] New/Updated API fields or mutations
[x] Deprecated API fields or mutations
[ ] Removed API types, fields, or mutations
Docs
[ ] Link to documentation:
Pull Request Checklist
[x] Privileged queries and mutations are either absent or guarded by proper permission checks
[x] Database queries are optimized and the number of queries is constant
[x] Database migrations are either absent or optimized for zero downtime
[x] The changes are covered by test cases
Test migrations compatibility / build (pull_request) is failing but it's handled in https://github.com/saleor/saleor/pull/14150.
|
gharchive/pull-request
| 2023-09-28T12:39:29 |
2025-04-01T04:35:47.285719
|
{
"authors": [
"IKarbowiak",
"SzymJ"
],
"repo": "saleor/saleor",
"url": "https://github.com/saleor/saleor/pull/14123",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
743077018
|
Describing a custom object
I was describing a custom object in the metadata for the first time and ran into a glitch.
I have a structure that itself contains 2 more structures.
If I build my structure with various data types but without them, everything is OK,
but as soon as I add the structures, everything breaks.
In the video I tried to show exactly what happens.
I'm attaching the structure in a file; it can be used anywhere, as it contains no references to objects.
СоставСтруктуры.txt
Maybe I'm doing something wrong, but the behavior is very strange.
Please take a look.
I was describing a custom object in the metadata for the first time and ran into a glitch.
I have a structure that itself contains 2 more structures.
Objects nested inside other objects are not supported as such. This can be solved by moving the "СостояниеОбменаДанными" object to the root of the custom objects and then referencing it via ref: customObjects.СостояниеОбменаДанными
Another problem is that the nested structures have no "name" field set.
Another problem is that the nested structures have no "name" field set. - Do you mean that I don't need to set name? Or that I failed to set it somewhere?
I didn't quite understand the first paragraph, but I'll try to implement it; thanks.
Another problem is that the nested structures have no "name" field set. - Do you mean that I don't need to set name? Or that I failed to set it somewhere?
This is what I'm talking about:
I didn't quite understand the first paragraph, but I'll try to implement it; thanks.
Something like this:
Another problem is that the nested structures have no "name" field set. - Do you mean that I don't need to set name? Or that I failed to set it somewhere?
This is what I'm talking about:
You know, I modeled it on your examples.
Here's the test file you made; your structure has no name either, only a ref and parameters.
You know, I modeled it on your examples.
In my example, as in the screenshot above, only root objects lack the "name" field. If we are talking about an object's properties, and in your case that is exactly the situation, then "name" is mandatory.
Got it, thanks, I'll keep that in mind.
As I understand it, you're going to implement something; should I do it the way you said, or wait?
Got it, thanks, I'll keep that in mind.
As I understand it, you're going to implement something; should I do it the way you said, or wait?
Better to wait, since moving the nested object to the root still doesn't solve the problem 100%.
Implemented here. An example is in the tests.
|
gharchive/issue
| 2020-11-14T20:48:58 |
2025-04-01T04:35:47.322370
|
{
"authors": [
"ViktorErmakov",
"salexdv"
],
"repo": "salexdv/bsl_console",
"url": "https://github.com/salexdv/bsl_console/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2292405973
|
Riksarkivet pseudonyms - text strings with pseudonyms delivered in 2024
Data silos created in 2024
Riksarkivet releases "Shortcut to the names behind the pseudonyms in the Swedish daily press" - Internet Archive
My analysis: Riksarkivet and KB completely lack the ability to create interoperability between their datasets -> 2 data silos are created, modeled as one database each, i.e. one step better than the earlier card catalogs
Where
Delivery in 2024
How wrongly KB designed the newspaper market #153
|
gharchive/issue
| 2024-05-13T10:19:34 |
2025-04-01T04:35:47.327676
|
{
"authors": [
"salgo60"
],
"repo": "salgo60/spa2Commons",
"url": "https://github.com/salgo60/spa2Commons/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
45900687
|
Show current track info in right click menu
When a sound is playing, or has recently played, it'd be nice if it were shown in the right-click menu, a la Vox.
I'll work on this too
|
gharchive/issue
| 2014-10-15T18:20:20 |
2025-04-01T04:35:47.338358
|
{
"authors": [
"chrismessina",
"kinglime"
],
"repo": "salomvary/soundcleod",
"url": "https://github.com/salomvary/soundcleod/issues/62",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2010695516
|
🛑 Bitwarden is down
In 4678ce3, Bitwarden (https://bitwarden.saltbo.fun) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Bitwarden is back up in 667ef23 after 15 minutes.
|
gharchive/issue
| 2023-11-25T15:18:30 |
2025-04-01T04:35:47.349746
|
{
"authors": [
"saltbo"
],
"repo": "saltbo/status",
"url": "https://github.com/saltbo/status/issues/594",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
847328749
|
Migrate to Furo theme for Sphinx
I was looking to transition this to using Furo, but I don't know how to work with this repository at the moment, so I couldn't quite test how it works.
Merged via https://github.com/saltstack/salt-extensions-index/commit/5f4f8bdf57bbaec433b2e1a1a46a9aafd35d5655
|
gharchive/pull-request
| 2021-03-31T21:00:21 |
2025-04-01T04:35:47.437387
|
{
"authors": [
"ScriptAutomate"
],
"repo": "saltstack/salt-extensions-index",
"url": "https://github.com/saltstack/salt-extensions-index/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
254821592
|
Relicense under BSD
I licensed this project under UNLICENSE because I'm a smart-ass. A "real" license like BSD would be better.
The following people have contributed to this project in the past (thanks again, you beautiful people!).
Could you please ping your approval on this issue if you're okay with your contribution being relicensed under the MIT license, from the original UNLICENSE?
@rmjohnson
@kcarlson
@jansink
@ZhouHansen
@Myztiq
@marsaud
@pdehaan
@jmhobbs
@IonicaBizau
@RoboterHund
@EvanOxfeld
@sunib
@idralyuk
You have my consent to relicense under the MIT license.
Go ahead, no problem for me.
No problem for me! 👍
Keep up the great work! 🚀🚀
Go for it!
Sure. ¯\_(ツ)_/¯
Works for me.
:100:
Thanks -- sounds like a plan. 👍
|
gharchive/issue
| 2017-09-02T16:28:59 |
2025-04-01T04:35:47.567344
|
{
"authors": [
"EvanOxfeld",
"IonicaBizau",
"Myztiq",
"jmhobbs",
"kcarlson",
"pdehaan",
"rmjohnson",
"samcday",
"sunib"
],
"repo": "samcday/node-stream-buffer",
"url": "https://github.com/samcday/node-stream-buffer/issues/39",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
}
|
620689517
|
Native videojs subtitle support
Video.js supports WebVTT subtitles. This patch makes synclounge properly advertise that the player can support WebVTT subtitles when possible (with the X-Plex-Client-Profile-Extra header), as well as adding some basic CSS styling for the subtitles in the player.
Fixes #204 in cases where subtitles can be transcoded to WebVTT.
Woah, TIL about that header - I'm 100% sure it wasn't a thing when I wrote the player. Awesome! There are conflicts on this branch; I'll merge when they're resolved 👌
The big revelation for me was actually that Plex supported transcoding subtitles to webvtt. Plex's API has very little public documentation and it was a matter of looking at the headers other clients sent back to my server and reverse engineering it that way.
Going to close this PR since it's not compatible with my shaka player PR, and that PR fixes so many more issues than this.
|
gharchive/pull-request
| 2020-05-19T05:29:08 |
2025-04-01T04:35:47.569597
|
{
"authors": [
"samcm",
"ttshivers"
],
"repo": "samcm/synclounge",
"url": "https://github.com/samcm/synclounge/pull/206",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1520886822
|
publication action is not working
The GitHub Action is not picking up the if: startsWith(github.ref, 'refs/tags') condition for the publish-to-pypi.org job.
Resolved by #13 .
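For context, this is the usual shape of such a tag guard in a workflow file (a generic sketch of the pattern being debugged, not the actual fix from #13):

```yaml
jobs:
  publish:
    # run only when a tag is pushed, e.g. refs/tags/v1.2.3
    if: startsWith(github.ref, 'refs/tags')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # ... build and upload to pypi.org here ...
```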
|
gharchive/issue
| 2023-01-05T15:00:08 |
2025-04-01T04:35:47.584263
|
{
"authors": [
"sami-m-g",
"tngTUDOR"
],
"repo": "sami-m-g/pyecospold",
"url": "https://github.com/sami-m-g/pyecospold/issues/12",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
174920521
|
1st attempt on a platform-shared code
Hey!
First, I wanna say, that I love the project! I'm excited about Alexa without spending the money for another device that I don't need and having the privacy under my control - recording truly only when I want it.
I've read in comments that you've been doing some code cleanup. However, I wanted to play around with it now on my desktop and I have some changes that I'd like to push upstream. If these are made obsolete by your changes, feel free to close and throw away this PR. If you'd like me to rework it, let me know!
Sorry for not doing this in atomic commits (I will from now on if you like); I was just doing quick and dirty fixes. What's in this:
Directories!
All config (except for the Raspberry Pi config - will do that later) in a YAML configuration file, so users can safely pull from the repo in order to get updates.
Making the codebase common for all platforms and having the platforms as plugins (see the sketch after this list), so we don't have to have AlexaPi, AlexaCHIP, AlexaOPi, ...
Please test on Raspberry Pi - I don't have one on me right now. I was forced to make these changes because I wanted to run it on my Arch Linux machine, and it's important that developers can run this on their machines directly, using for example the dummy platform. What would come next is merging the other AlexaSTUFF projects by creating platform files for those and merging the actual projects.
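A minimal sketch of the plugin idea described in that list (module and key names are hypothetical, not this PR's actual layout): the YAML config names a platform, and the loader imports the matching module, defaulting to a dummy platform so developers can run on a desktop with no GPIO at all.

```python
import importlib

import yaml

# Hypothetical layout: platforms/raspberrypi.py, platforms/dummy.py,
# each exposing a setup(config) function.
with open("config.yaml") as f:
    config = yaml.safe_load(f)

name = config.get("platform", "dummy")  # e.g. `platform: raspberrypi`
platform = importlib.import_module(f"platforms.{name}")
platform.setup(config)
```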
Let me know what you think or if you have your own changes, please push them. Also, please keep the current stuff in master. I'm very confused by this version1.1 branch - it doesn't even look like it's got common history and
This branch is 17 commits ahead, 41 commits behind master.
looks like quite a mess.
I've stumbled upon https://github.com/maso27/AlexaPi/ and other forks. Is this still developed? Do you guys want to negotiate some kind of merge? (feature-wise and platform wise) This is a mess! :-(
That fork has seen quite a bit of development. I just added a new branch to it last week, trying out a more automated snowboy implementation. And yeah, I feel like there are probably features in there that could be useful to merge.
Sorry if it's a mess. It was my first time using github, and if you've got suggestions for a better way, I'm listening.
Yes, I've seen some good work there - snowboy is an awesome thing to have :-)
I didn't mean to be rude. The current state is just very confusing for me and I think it makes it hard for the project to have efficient development. As a matter of fact, this PR is kind of a preview of what I would like to do with the project for a start. If this were to be merged, I'd rewrite it so it's split into respective commits and isn't a half-done mashup of things.
I guess we have to wait for @sammachin what he has to say regarding this / his further progress and the code cleanup that he mentioned in a comment here on GH.
What I suggest:
Choose one repo that is gonna be the upstream - if @sammachin wants to lead the development further / at least review & accept pull requests, let's choose his as upstream.
For a start, let's have the master branch as the main development branch. We can have later some extra branch as dev, but I don't think that's necessary now.
Don't do development in these versionX.Y branches - seriously. If one wants to develop a feature - create a feature branch forked from master and when done, do a PR against upstream master. That way, the latest code is always in master for anyone to base on.
When the project leader decides there is enough features for a new version, he just makes a tag.
The point of this is that having these long-living branches that contain many new features in a various state of development prevents collaboration and is confusing and hard to merge eventually
What I would like for the project, as I have indicated in this PR, is to be multi-platform and to provide users with multiple choices for example for the trigger mechanism (snowboy, cmusphinx, button, ...) and prevent hard-coded things in general. The first stage is to have a modular code base and then we can rock & roll :-)
So, these are my thoughts :-) I'll be happy to help out with this in my free time, even though I don't have much of it nowadays.
This is awesome!
It's pretty much exactly what I've been thinking and wanting to do. I did make a start last week but other bits of life got in the way!
I completely agree that the current code base is a mess; what started as a weekend hack project kind of took me by surprise in its popularity.
I've been thinking about how to manage the various options around triggering and indicators; I'd welcome your thoughts on how to design a modular system for this.
We also need to consider how to make the project accessible to people that aren't developers. I'm currently leaning towards offering a ready-built RasPi image that could use a standard set of credentials, so those that just want to get Alexa up and running have a quick route without having to do too much Linux tinkering.
I'd very much like to remain the 'lead' on the project. If we have a more modular code base and can try to stick to René's suggestions around development, then it should be easier for me to maintain. I think most of the links and docs out there point to my repo anyway.
We could also look at creating a GitHub org for the project and make that the definitive repo; open to thoughts on this?
René, what state is the code in this PR at? And what branch is it based on? I'm currently just looking on my phone so haven't had a proper look yet.
Thanks for this.
Sam
Cool!
To be honest, this PR is more of a sketch. I did this in a hurry to make it work on my desktop. I'll close this once we agree on the further development and I'll make it the right way.
I suggest:
@sammachin Throw away the version1.0 branch (as there is no point for it IMHO) - tag the respective commit in master as v1.0.
version1.1 branch was created in a weird way, because it doesn't share history with master - it probably wasn't based on it - that's why git/GH gives me This branch is 17 commits ahead, 41 commits behind master. To get a common ground, merge version1.1 into master and delete version1.1. Now we have up-to-date master and no other branch and the work can begin. Tell me if you need help with this.
I'll separate the code into directories as in this PR.
I'll split it into platform files as in this PR. We can develop this on our desktop machines now - yay!
Merge AlexaCHIP and AlexaOPi - should be easy - just making the small platform files for now. Would need testing though. Then, those repos can be deleted / emptied with link to this one / redirected to this one with instructions.
We can talk about configuration - YAML would be cool - as suggested in this PR, but I'd probably enhance it a bit.
Give users possibility to choose the trigger - now we have platform-specific (like a button on a RasPi here) and @maso27 has implemented snowboy and cmusphinx if I'm correct. Let's have code for everything and let it be an option. In this stage, @maso27 would refactor his code for this new code base (not necessarily all alone).
I would expect we would have some discussion on each thing to think about it and design it right.
We can always do organization repo later - let's clean up first.
Also, if you mean SD card images, we can talk about that later. I personally think that simple and short instructions on how to run this on a standard Raspbian would be enough so people can integrate this into their existing systems, but that's a discussion for later :-)
Let me know what you think @sammachin and @maso27 . I'll create a meta bug for this structure change once we agree on the process and start sending PRs for this for review.
Thanks!
Ah, silly me! Looking at @maso27's branches, it would probably make sense to merge his work first and then do the structural changes, so it's not a nightmare for him. Although branch version1.2 seems messed up a bit - GH gives me This branch is 52 commits ahead, 41 commits behind sammachin:master. - what is it based on? The version1.1 seems like it could be merged into @sammachin's version1.1 now.
Yeah, version 1.2 was where things diverged significantly. That was the whole reason I bumped the number.
I tried to write it into the readme's, but I pulled some things in from Nascent Objects and added other things:
-silence detection rather than timer for voice-activated listening.
-volume control.
-better tune-in support.
-always-on monitoring. (Done via a separate script-- I now believe it's too heavy-handed and unnecessary)
As for the snowboy branch: that is still pretty untested. It worked fine for me with a fresh install last week but might be worth getting a little more mileage before doing anything rash.
Hopefully that's helpful?
I'm just futzing with things until they seem to work. Still pretty amateur at the python thing.
Okay. Let's merge your version1.2 into master then. Hopefully you can refactor the whole snowboy into the new code structure later?
FINALLY!!! ... I'm glad to hear this ... I bailed out a while back as it got too much of a mess ... and (tbh) I got lost with all the branches, routes and versions etc ...
I'm totally in agreement with all the above ... this is a fantastic project but it needs to be reasonably simple for the newer folk coming aboard ... it needs to flourish, and with everyone who has made major changes & additions to the AlexaPi project coming together, it will be awesome ...
So yeah, dump all the old versions and concentrate on (as Sam says) one definitive version for all ... this will then cut down on "I can't get it to work" comments, as we'll all be in the same boat and the issues 'should' then be the same for everyone(?) ...
I miss messing with this project, and after telling a few mates to try it out, it got difficult for them as they are not developers ... if it can be simple to set up (ideally a one-off setup package) then more will join in ... most can do simple Nano editing but not much more when it comes to adding repos and stuff ... basically what Sam & renekliment have said above really ...
Keeping an eye on this bigtime ...
cheers,
Gem
@renekliment
I'm cool with that. Once there's an established framework I'm happy seeing what it takes to refactor snowboy. (Or anyone else if you want to, of course!)
Also, I'm excited about this getting commonized, as you were saying. I've got an arch linux box that I got AlexaPi running on eventually, but it took a bit of shoehorning, and is still kinda crashy.
Plus, I got my pre-ordered C.H.I.P. processors recently and having an easy and reproducible path to get the latest onto one of those would be awesome!
So I've made a quick attempt at merging version1.2 into master to make a PR and it's not easy, because they don't have a common history. (this is a reminder to never do anything like this again) So I figure we have these options:
Manually resolve merge conflicts.
Create some other branch - let's call it dev, pull version1.2 into it, rename master to something else like original, rename dev to master with the process being a little less straightforward I guess in reality.
Create a completely new repo (can be an organization repo like @sammachin pointed out) with its master branch having the content of version1.2. We could choose a new name for the project, since it would support more platforms than just Pi SBCs, but I guess we can always rename it later, so no actual need to decide it now.
Everything with the full history of version1.2 of course! So what do you guys think is the best option? I'd probably go for 3, as it is the easiest one.
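For illustration, option 2 amounts to roughly the following (a sketch only; branch names as above, and switching the default branch on GitHub has to be done in the repo settings before the old master can be renamed remotely):

git checkout -b dev
git pull origin version1.2      # bring the new code base into dev
git branch -m master original   # keep the old history around
git branch -m dev master        # dev becomes the new master
git push origin original master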
So I'm going to setup an organisation on github and we can then have multiple repos in that if we need to.
Name wise I'd like to keep AlexaPi for the project as;
It's quite short/easy to remember,
That's what's got most of the publicity/traction so far.
I know one of the aims is to have a single codebase that supports multiple boards in the future, but I don't think that's too much of an issue if the name has 'Pi' in it. RasPi is still the most well known board out there.
So I'll call the org alexapi (can't have that as someone has that username, for now I've gone for alexa-pi, might see if the guy will give us alexapi as he doesn't seem to be using it.)
@renekliment do you want to start a fresh repo in that org based on whatever code base you feel is best. I haven't had much of a look at @maso27's 1.2, but I know that snowboy takes a fair bit of setup, so I think we might be better starting from 1.1 with just the button triggering and working from there.
Awesome, thank you!
Actually, snowboy support is in the snowboy branch, so we can do version1.2 as master safely. I'll create a repo and start some work on it tomorrow evening. I'll keep you guys posted. Thanks again!
I've reached out to the person that has the alexapi username to see if he'd be willing to give it up. For now we're on github.com/alexa-pi, but I'm not too keen on it; I'd like to change it.
Another possible name I thought of was 'MyAlexa', but I'm not sure I like that. I should probably run it by Amazon too, as we don't want to upset them by using the Alexa trademark.
On Wed, Sep 7, 2016 at 11:13 AM, René Kliment notifications@github.com
wrote:
Closed #125 https://github.com/sammachin/AlexaPi/pull/125.
@sammachin @maso27 I've created the repo, but please don't touch it yet in any way. I'll have some notes and PRs coming later today.
So, it's time to move the discussion here: https://github.com/alexa-pi/AlexaPi/issues/2
I'll soon start pushing PRs there.
Important notes:
Let's stick to the rule that no one, even (@sammachin, @maso27 and myself), pushes directly to the repo. Every time one wants to make a change - even a silly one - it has to go through the Pull Request mechanism, and at least one other core developer (one with push access to the repo) has to do a code review and agree with it before pulling it into master.
So for the development you have to create a fork into your own account. Now, the name's gonna collide with the original repo, so GH is gonna name it AlexaPi-1 ... I've renamed mine to AlexaPiNG, which is cool, since it's just my own repo. You can basically name it whatever you want to.
See CONTRIBUTING.md in the new repo for further instructions. I'll write it in detail (with commands) soon.
|
gharchive/pull-request
| 2016-09-03T22:17:19 |
2025-04-01T04:35:47.625675
|
{
"authors": [
"GemBro",
"maso27",
"renekliment",
"sammachin"
],
"repo": "sammachin/AlexaPi",
"url": "https://github.com/sammachin/AlexaPi/pull/125",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
363967883
|
Can't properly pause third party video on mobile
Expected behaviour
Plyr can be controlled when playing inline on touch devices.
Actual behaviour
Plyr can't be paused. Controls don't behave correctly. From what I could see plyr thinks the video is not playing already so it doesn't pause it.
Steps to reproduce
HTML5 seem to work fine.
Just try to use vimeo on a mobile device with plyr.
Environment
Browser: Chrome
Version: 69
Operating System: macOS
Version: Latest
Console errors (if any)
No errors.
Link to where the bug is happening
https://codepen.io/Belelros/full/dqxzBN/
That pen is using a very old version: https://cdn.plyr.io/3.2.4/plyr.js
That said, I think I know the issue you were having, and it was fixed in 3.4.4
Cheers fellas
|
gharchive/issue
| 2018-09-26T10:49:04 |
2025-04-01T04:35:47.631482
|
{
"authors": [
"Antonio-Laguna",
"jamesoflol",
"sampotts"
],
"repo": "sampotts/plyr",
"url": "https://github.com/sampotts/plyr/issues/1193",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
408658059
|
Not show quality selector with hls
I have tried to use your player in my project. It worked well. But I have an issue with hls: I tried to load the hls file like the example here https://codepen.io/pen?template=oyLKQb
The player can play the hls, but the quality selector is not shown. How can I configure it to show the quality selector?
Thanks
@sampotts
Duplicate of https://github.com/sampotts/plyr/issues/1118 - Please search next time 👍
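For reference, the approach from that thread boils down to exposing the hls.js levels through Plyr's quality option; a minimal sketch, assuming hls.js and a Plyr 3.x build whose quality option supports forced/onChange (the stream URL is a placeholder):

import Hls from 'hls.js';
import Plyr from 'plyr';

const video = document.querySelector('video');
const hls = new Hls();
hls.loadSource('https://example.com/stream.m3u8');
hls.attachMedia(video);

hls.on(Hls.Events.MANIFEST_PARSED, () => {
  // Build the quality menu from the parsed levels.
  const heights = hls.levels.map((level) => level.height);
  new Plyr(video, {
    quality: {
      default: heights[0],
      options: heights,
      forced: true,
      // Tell hls.js to switch levels when the user picks a quality.
      onChange: (height) => {
        hls.currentLevel = hls.levels.findIndex((l) => l.height === height);
      },
    },
  });
});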
|
gharchive/issue
| 2019-02-11T07:12:39 |
2025-04-01T04:35:47.633610
|
{
"authors": [
"sampotts",
"tamdao"
],
"repo": "sampotts/plyr",
"url": "https://github.com/sampotts/plyr/issues/1339",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
100538539
|
Login credentials?
Is it possible to put your login credentials in the config somewhere to automatically mirror protected repositories?
You can create the mirror on the command line by supplying your credentials 2 times, but updates will obviously fail.
Kind regards,
Frank
The bash scripts simply use git commands. So you can do anything that git supports. Including:
Use the git credential helper store
Include the username and password as part of the HTTP url for basic HTTP auth.
Use SSH to clone with an SSH private key (RECOMMENDED).
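For illustration, the first two options look roughly like this (hostname, user and token are placeholders):

# 1. git credential helper: cache credentials on disk after the first prompt
git config --global credential.helper store

# 2. basic HTTP auth embedded in the remote URL
git clone https://user:token@gitlab.example.com/group/repo.git

SSH with a private key avoids keeping credentials in plain text, which is why it is the recommended option.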
|
gharchive/issue
| 2015-08-12T13:03:45 |
2025-04-01T04:35:47.644875
|
{
"authors": [
"Norcoen",
"samrocketman"
],
"repo": "samrocketman/gitlab-mirrors",
"url": "https://github.com/samrocketman/gitlab-mirrors/issues/81",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
174884614
|
Feature/extract config out of constructir
@samselikoff, first try for https://github.com/samselikoff/ember-cli-mirage/issues/842: is this what you want? I started adding a few tests.
It contains the changes in https://github.com/samselikoff/ember-cli-mirage/pull/846 too; I expect this PR to be merged soon, right?
@samselikoff not sure why the travis build failed on this one; tests are all passing on my side.
@samselikoff done
@samselikoff any update on this thing, ready to merge I believe? Anything else I can help on to make Mirage folder sharable between apps in an addon? Thanks
Thanks for all your work on this!
You're welcome @samselikoff, what are the next steps to be able to share the Mirage folder in an addon? Would be happy to work on it if you explain.
Tracking in #899
|
gharchive/pull-request
| 2016-09-03T07:41:14 |
2025-04-01T04:35:47.648195
|
{
"authors": [
"Leooo",
"samselikoff"
],
"repo": "samselikoff/ember-cli-mirage",
"url": "https://github.com/samselikoff/ember-cli-mirage/pull/865",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2144075250
|
Emojis didn't change (Ubuntu)
I followed the instructions, rebuilt the cache and even tried a reboot, but the new emojis just won't be used by my Ubuntu PC. I still see the old, ugly ones.
I'm on Ubuntu 22.04.03.
I checked and I only have AppleColorEmoji.ttf in ~/.local/share/fonts/ so there shouldn't be any conflicts with different fonts.
Hi @Arche151, could you try this and see if that fixes it?
@dmlls Didn't work unfortunately :/
@Arche151 could you then try a system-wide installation placing the font at /usr/local/share/fonts or /usr/share/fonts?
placing the font at /usr/local/share/fonts or /usr/share/fonts?
This worked for me on Ubuntu 24.04 LTS x86_64
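For reference, the system-wide installation suggested above comes down to something like:

sudo cp AppleColorEmoji.ttf /usr/local/share/fonts/
sudo fc-cache -f -v   # rebuild the font cache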
|
gharchive/issue
| 2024-02-20T10:42:46 |
2025-04-01T04:35:47.668494
|
{
"authors": [
"Arche151",
"dmlls",
"subeenregmi"
],
"repo": "samuelngs/apple-emoji-linux",
"url": "https://github.com/samuelngs/apple-emoji-linux/issues/38",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
402783285
|
Does this work on Gnu/Linux systems?
Hi
I wanted to know whether this would work on systems running GNU/Linux or not. I tried it once and got the following error:
ImportError: No module named _winreg
I run MT4 using Wine.
Hi Ahangarha, I am also using a Mac and installed MT4 using Wine; will it work?
MT4 works with Wine almost perfectly; there are some minor issues.
I wanted to see whether this code works natively on GNU/Linux or not.
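The error itself is expected on a non-Windows Python: _winreg (winreg on Python 3) is the Windows-only registry module, which this package apparently uses to locate the MT4 installation. A minimal sketch of a guard, assuming the MT4 path would then have to be supplied manually (the path below is a hypothetical Wine prefix):

try:
    import winreg  # Python 3 name; '_winreg' on Python 2, Windows only
except ImportError:
    winreg = None  # running on GNU/Linux or macOS: no registry available

# hypothetical fallback: pass the terminal path explicitly instead of
# discovering it through the registry
MT4_PATH = "/home/user/.wine/drive_c/Program Files/MetaTrader 4"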
|
gharchive/issue
| 2019-01-24T16:14:47 |
2025-04-01T04:35:47.672498
|
{
"authors": [
"RamMakireddi",
"ahangarha"
],
"repo": "samuraitaiga/py-metatrader",
"url": "https://github.com/samuraitaiga/py-metatrader/issues/13",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1688999371
|
XlaRuntimeError: FAILED_PRECONDITION: DNN library initialization failed.
Hi!
When I run the following:
from whisper_jax import FlaxWhisperPipline
pipeline = FlaxWhisperPipline("openai/whisper-base.en")
I get an XlaRuntimeError (full description found at the end of this message).
My configurations are:
Ubuntu 20.4
GPU: Quadro RTX 3000
Driver Version: 515.105.1
CUDA Version: 11.7
Python 3.9.16
Follow some version info:
import jax
import jaxlib
import tensorflow as tf
num_devices = jax.device_count()
device_type = jax.devices()[0].device_kind
print(f"tensorflow version: {tf.__version__}")
print(f"jax version: {jax.__version__}")
print(f"jaxlib version: {jaxlib.__version__}")
print(f"Found {num_devices} JAX devices of type {device_type}.")
Output:
tensorflow version: 2.10.1
jax version: 0.4.8
jaxlib version: 0.4.7
Found 1 JAX devices of type Quadro RTX 3000.
I believe I have installed the dependencies and followed all the instructions to install jax, jaxlib and whisper-jax. But I'm not sure if I have missed something here.
Any help or comments would be greatly appreciated! Thank you!
(FULL DESCRIPTION)
---------------------------------------------------------------------------
XlaRuntimeError Traceback (most recent call last)
Cell In[2], line 8
1 from whisper_jax import FlaxWhisperPipline
2 # import jax.numpy as jnp
3
4 # LIST OF MODELS: https://github.com/openai/whisper
5 # hugging face: https://huggingface.co/openai
6
7 # instantiate pipeline
----> 8 pipeline = FlaxWhisperPipline("openai/whisper-base.en") # ~200MB
9 # pipeline = FlaxWhisperPipline("openai/whisper-base.en", dtype=jnp.bfloat16) # half-precision
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/whisper_jax/pipeline.py:88, in FlaxWhisperPipline.__init__(self, checkpoint, dtype, batch_size, max_length)
85 self.feature_extractor = self.processor.feature_extractor
86 self.tokenizer = self.processor.tokenizer
---> 88 self.model, self.params = FlaxWhisperForConditionalGeneration.from_pretrained(
89 self.checkpoint,
90 _do_init=False,
91 dtype=self.dtype,
92 )
94 self.max_length = max_length if max_length is not None else self.model.generation_config.max_length
95 self.min_batch_size = jax.local_device_count()
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/transformers/modeling_flax_utils.py:807, in FlaxPreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, dtype, *model_args, **kwargs)
791 resolved_archive_file, _ = get_checkpoint_shard_files(
792 pretrained_model_name_or_path,
793 resolved_archive_file,
(...)
803 _commit_hash=commit_hash,
804 )
806 # init random models
--> 807 model = cls(config, *model_args, _do_init=_do_init, **model_kwargs)
809 if from_pt:
810 state = load_pytorch_checkpoint_in_flax_state_dict(model, resolved_archive_file, is_sharded)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/whisper_jax/modeling_flax_whisper.py:1008, in FlaxWhisperPreTrainedModel.__init__(self, config, input_shape, seed, dtype, params_dtype, _do_init, **kwargs)
997 def __init__(
998 self,
999 config: WhisperConfig,
(...)
1005 **kwargs,
1006 ):
1007 module = self.module_class(config=config, dtype=dtype, params_dtype=params_dtype, **kwargs)
-> 1008 super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype, _do_init=_do_init)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/transformers/modeling_flax_utils.py:199, in FlaxPreTrainedModel.__init__(self, config, module, input_shape, seed, dtype, _do_init)
196 self._module = module
198 # Those are public as their type is generic to every derived classes.
--> 199 self.key = PRNGKey(seed)
200 self.dtype = dtype
201 self.input_shape = input_shape
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/random.py:136, in PRNGKey(seed)
133 if np.ndim(seed):
134 raise TypeError("PRNGKey accepts a scalar seed, but was given an array of"
135 f"shape {np.shape(seed)} != (). Use jax.vmap for batching")
--> 136 key = prng.seed_with_impl(impl, seed)
137 return _return_prng_keys(True, key)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/prng.py:270, in seed_with_impl(impl, seed)
269 def seed_with_impl(impl: PRNGImpl, seed: Union[int, Array]) -> PRNGKeyArray:
--> 270 return random_seed(seed, impl=impl)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/prng.py:561, in random_seed(seeds, impl)
559 else:
560 seeds_arr = jnp.asarray(seeds)
--> 561 return random_seed_p.bind(seeds_arr, impl=impl)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/core.py:360, in Primitive.bind(self, *args, **params)
357 def bind(self, *args, **params):
358 assert (not config.jax_enable_checks or
359 all(isinstance(arg, Tracer) or valid_jaxtype(arg) for arg in args)), args
--> 360 return self.bind_with_trace(find_top_trace(args), args, params)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/core.py:363, in Primitive.bind_with_trace(self, trace, args, params)
362 def bind_with_trace(self, trace, args, params):
--> 363 out = trace.process_primitive(self, map(trace.full_raise, args), params)
364 return map(full_lower, out) if self.multiple_results else full_lower(out)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/core.py:817, in EvalTrace.process_primitive(self, primitive, tracers, params)
816 def process_primitive(self, primitive, tracers, params):
--> 817 return primitive.impl(*tracers, **params)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/prng.py:573, in random_seed_impl(seeds, impl)
571 @random_seed_p.def_impl
572 def random_seed_impl(seeds, *, impl):
--> 573 base_arr = random_seed_impl_base(seeds, impl=impl)
574 return PRNGKeyArray(impl, base_arr)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/prng.py:578, in random_seed_impl_base(seeds, impl)
576 def random_seed_impl_base(seeds, *, impl):
577 seed = iterated_vmap_unary(seeds.ndim, impl.seed)
--> 578 return seed(seeds)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/prng.py:813, in threefry_seed(seed)
810 raise TypeError(f"PRNG key seed must be an integer; got {seed!r}")
811 convert = lambda k: lax.reshape(lax.convert_element_type(k, np.uint32), [1])
812 k1 = convert(
--> 813 lax.shift_right_logical(seed, lax_internal._const(seed, 32)))
814 with jax.numpy_dtype_promotion('standard'):
815 # TODO(jakevdp): in X64 mode, this can generate 64-bit computations for 32-bit
816 # inputs. We should avoid this.
817 k2 = convert(jnp.bitwise_and(seed, np.uint32(0xFFFFFFFF)))
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/lax/lax.py:458, in shift_right_logical(x, y)
456 def shift_right_logical(x: ArrayLike, y: ArrayLike) -> Array:
457 r"""Elementwise logical right shift: :math:`x \gg y`."""
--> 458 return shift_right_logical_p.bind(x, y)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/core.py:360, in Primitive.bind(self, *args, **params)
357 def bind(self, *args, **params):
358 assert (not config.jax_enable_checks or
359 all(isinstance(arg, Tracer) or valid_jaxtype(arg) for arg in args)), args
--> 360 return self.bind_with_trace(find_top_trace(args), args, params)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/core.py:363, in Primitive.bind_with_trace(self, trace, args, params)
362 def bind_with_trace(self, trace, args, params):
--> 363 out = trace.process_primitive(self, map(trace.full_raise, args), params)
364 return map(full_lower, out) if self.multiple_results else full_lower(out)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/core.py:817, in EvalTrace.process_primitive(self, primitive, tracers, params)
816 def process_primitive(self, primitive, tracers, params):
--> 817 return primitive.impl(*tracers, **params)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/dispatch.py:117, in apply_primitive(prim, *args, **params)
114 from jax._src import pjit
116 try:
--> 117 compiled_fun = xla_primitive_callable(prim, *unsafe_map(arg_spec, args),
118 **params)
119 except pxla.DeviceAssignmentMismatchError as e:
120 fails, = e.args
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/util.py:253, in cache.<locals>.wrap.<locals>.wrapper(*args, **kwargs)
251 return f(*args, **kwargs)
252 else:
--> 253 return cached(config._trace_context(), *args, **kwargs)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/util.py:246, in cache.<locals>.wrap.<locals>.cached(_, *args, **kwargs)
244 @functools.lru_cache(max_size)
245 def cached(_, *args, **kwargs):
--> 246 return f(*args, **kwargs)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/dispatch.py:208, in xla_primitive_callable(prim, *arg_specs, **params)
206 else:
207 return out,
--> 208 compiled = _xla_callable_uncached(lu.wrap_init(prim_fun), prim.name,
209 donated_invars, False, *arg_specs)
210 if not prim.multiple_results:
211 return lambda *args, **kw: compiled(*args, **kw)[0]
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/dispatch.py:254, in _xla_callable_uncached(fun, name, donated_invars, keep_unused, *arg_specs)
251 computation = sharded_lowering(fun, name, donated_invars, keep_unused,
252 *arg_specs, lowering_platform=None)
253 allow_prop = [True] * len(computation.compile_args['global_out_avals'])
--> 254 return computation.compile(_allow_propagation_to_outputs=allow_prop).unsafe_call
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/interpreters/pxla.py:2816, in MeshComputation.compile(self, _allow_propagation_to_outputs, _allow_compile_replicated)
2813 self._executable = MeshExecutable.from_trivial_jaxpr(
2814 **self.compile_args)
2815 else:
-> 2816 self._executable = UnloadedMeshExecutable.from_hlo(
2817 self._name,
2818 self._hlo,
2819 **self.compile_args,
2820 _allow_propagation_to_outputs=_allow_propagation_to_outputs,
2821 _allow_compile_replicated=_allow_compile_replicated)
2822 return self._executable
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/interpreters/pxla.py:3028, in UnloadedMeshExecutable.from_hlo(name, computation, mesh, global_in_avals, global_out_avals, in_shardings, out_shardings, spmd_lowering, tuple_args, auto_spmd_lowering, _allow_propagation_to_outputs, _allow_compile_replicated, unordered_effects, ordered_effects, host_callbacks, keepalive, kept_var_idx, backend, device_assignment, committed, pmap_nreps)
3024 else:
3025 with dispatch.log_elapsed_time(f"Finished XLA compilation of {name} "
3026 "in {elapsed_time} sec",
3027 event=dispatch.BACKEND_COMPILE_EVENT):
-> 3028 xla_executable = dispatch.compile_or_get_cached(
3029 backend, computation, compile_options, host_callbacks)
3031 if auto_spmd_lowering:
3032 assert mesh is not None
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/dispatch.py:526, in compile_or_get_cached(backend, computation, compile_options, host_callbacks)
522 _cache_write(serialized_computation, compile_time, module_name,
523 compile_options, backend, compiled, host_callbacks)
524 return compiled
--> 526 return backend_compile(backend, serialized_computation, compile_options,
527 host_callbacks)
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/profiler.py:314, in annotate_function.<locals>.wrapper(*args, **kwargs)
311 @wraps(func)
312 def wrapper(*args, **kwargs):
313 with TraceAnnotation(name, **decorator_kwargs):
--> 314 return func(*args, **kwargs)
315 return wrapper
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/jax/_src/dispatch.py:471, in backend_compile(backend, built_c, options, host_callbacks)
466 return backend.compile(built_c, compile_options=options,
467 host_callbacks=host_callbacks)
468 # Some backends don't have `host_callbacks` option yet
469 # TODO(sharadmv): remove this fallback when all backends allow `compile`
470 # to take in `host_callbacks`
--> 471 return backend.compile(built_c, compile_options=options)
XlaRuntimeError: FAILED_PRECONDITION: DNN library initialization failed. Look at the errors above for more details.
"""
I had to set up JAX by following the "install locally (harder)" option: that is, install the CUDA drivers and cuDNN, then run the command below to install jax[cuda11_local].
The caveat is that you need to find the correct versions of the CUDA drivers and cuDNN, then let pip decide which JAX version to use. For me that was CUDA 11.7 and cuDNN 8.6; the pip install command will probably give you a hint about the versions available. Hope this helps.
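Presumably the command referred to is the one from the JAX install docs for a locally-installed CUDA toolkit, something like:

pip install --upgrade "jax[cuda11_local]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html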
|
gharchive/issue
| 2023-04-28T18:55:31 |
2025-04-01T04:35:47.705989
|
{
"authors": [
"jaimeide",
"vinvcn"
],
"repo": "sanchit-gandhi/whisper-jax",
"url": "https://github.com/sanchit-gandhi/whisper-jax/issues/59",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2351037743
|
Why Cannot I scan ExtensionTotal extension itself?
Hi there,
Just to be sure that your extension is safe, I tried to scan it...
But it doesn't work.
Not to say that you are fake... but why doesn't it work?
Hey @zioalex, you can, if you search for it in the home page it shows up - https://app.extensiontotal.com/report/extensiontotal.extensiontotal-vscode
We are still waiting for domain verification so the report is not ideal yet, once its approved ExtensionTotal should be low per the engine
Cool thanks,
It wasn't there when I tried.
|
gharchive/issue
| 2024-06-13T12:37:07 |
2025-04-01T04:35:47.709727
|
{
"authors": [
"amitassaraf",
"zioalex"
],
"repo": "sand-security/extensiontotal-vscode",
"url": "https://github.com/sand-security/extensiontotal-vscode/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2249672057
|
Make use of LayoutPosition and (layer_index, row_index, col_index) consistent
Sometimes I use three indexes, sometimes I use a LayoutPosition
LayoutPosition::new has generally solved this
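For illustration, the unifying constructor could look like this (a sketch using the field names from the issue title, not necessarily the actual definition in alc-rs):

/// A position in a layout, identified by layer, row and column.
pub struct LayoutPosition {
    pub layer_index: usize,
    pub row_index: usize,
    pub col_index: usize,
}

impl LayoutPosition {
    /// Single entry point, so call sites never juggle three bare indexes.
    pub fn new(layer_index: usize, row_index: usize, col_index: usize) -> Self {
        LayoutPosition { layer_index, row_index, col_index }
    }
}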
|
gharchive/issue
| 2024-04-18T03:42:05 |
2025-04-01T04:35:47.737511
|
{
"authors": [
"sandsq"
],
"repo": "sandsq/alc-rs",
"url": "https://github.com/sandsq/alc-rs/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
162335455
|
Exec tachikoma update 20160626163448
:hamster::hamster::hamster: Strategy none
Powered by Interval Pull Request as a Service - Tachikoma.io Announcement: 2015-06-28 Updated permissions - Tachikoma.io
Coverage remained the same at 68.0% when pulling 9cf9c29d970ae0241080d51b46bcea2863551eac on pazjacket:tachikoma/update-20160626163448 into 0c477cea399180e0dc0455b31d685efdbee0f3af on sanemat:master.
|
gharchive/pull-request
| 2016-06-26T16:34:54 |
2025-04-01T04:35:47.759901
|
{
"authors": [
"coveralls",
"pazjacket"
],
"repo": "sanemat/tachikoma",
"url": "https://github.com/sanemat/tachikoma/pull/243",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2638224880
|
Mapping can be removed
Description of the bug
Mapping is resource-intensive and generates a lot of unnecessary data that BTK doesn't need. This adds a significant amount of time to the total run time.
Instead, the mapping should be removed from EAR and we allow the BTK to run its own mapping which only requires coverage information, not indels and CIGAR data.
Command used and terminal output
No response
Relevant files
No response
System information
No response
BTK can take an aligned or unaligned bam/cram, but in principle giving it an aligned one saves BTK processing time. The mapping solution for EAR can be one of two options. Option A: convert the read fasta to an unmapped bam/cram, but when running BTK change "align = true" in nextflow.config (https://github.com/sanger-tol/blobtoolkit/blob/dev/nextflow.config). Option B: use cram_filter to chunk the container as in readmapping (https://github.com/sanger-tol/readmapping/blob/dev/subworkflows/local/align_pacbio.nf), which first requires converting the fasta to cram (also in readmapping).
|
gharchive/issue
| 2024-11-06T14:18:42 |
2025-04-01T04:35:47.763724
|
{
"authors": [
"DLBPointon",
"yumisims"
],
"repo": "sanger-tol/ear",
"url": "https://github.com/sanger-tol/ear/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1386444952
|
Update test_data.config
Delete the unused test data config originally for tblastn
PR checklist
Closes #XXX
[ ] This comment contains a description of changes (with reason).
[ ] If you've fixed a bug or added code that should be tested, add tests!
[ ] If you've added a new tool - have you followed the module conventions in the contribution docs
[ ] If necessary, include test data in your PR.
[ ] Remove all TODO statements.
[ ] Emit the versions.yml file.
[ ] Follow the naming conventions.
[ ] Follow the parameters requirements.
[ ] Follow the input/output options guidelines.
[ ] Add a resource label
[ ] Use BioConda and BioContainers if possible to fulfil software requirements.
Ensure that the test works with either Docker / Singularity. Conda CI tests can be quite flaky:
[ ] PROFILE=docker pytest --tag <MODULE> --symlink --keep-workflow-wd --git-aware
[ ] PROFILE=singularity pytest --tag <MODULE> --symlink --keep-workflow-wd --git-aware
[ ] PROFILE=conda pytest --tag <MODULE> --symlink --keep-workflow-wd --git-aware
Yes, I will. Once this being merged.
|
gharchive/pull-request
| 2022-09-26T17:16:57 |
2025-04-01T04:35:47.768875
|
{
"authors": [
"gq1"
],
"repo": "sanger-tol/nf-core-modules",
"url": "https://github.com/sanger-tol/nf-core-modules/pull/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2682930412
|
Update vitest and coverage together
Closes #
Changes proposed in this pull request
Testing compatibility
Instructions for Reviewers
[All PRs] - Confirm PR template filled
[Feature Branches] - Review code
[Production Merges to main]
- Check story numbers included
- Check for debug code
- Check version
Closing this as it was created for testing dependency updates together and it served the purpose.
|
gharchive/pull-request
| 2024-11-22T11:34:29 |
2025-04-01T04:35:47.771632
|
{
"authors": [
"yoldas"
],
"repo": "sanger/quanthub",
"url": "https://github.com/sanger/quanthub/pull/604",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2364142924
|
Y24-144 - As a team lead (Danni) I would like to see the location (including co-ordinate) for each sample so I can find out where my stuff is without going to labwhere
User story
As a team lead (Danni) I would like to see the location (including co-ordinate) for each sample so I can find out where my stuff is without going to labwhere
Who are the primary contacts for this story
Steve / James / Danni
Who is the nominated tester for UAT
Steve / James / Danni
Acceptance criteria
To be considered successful the solution must allow:
[ ] When I visit the samples page I should see the location for each sample along with its co-ordinate.
[ ] The location should be read-only
Dependencies
This story is blocked by the following dependencies:
#<issue_no.>
sanger/#<issue_no.>
References
This story has a non-blocking relationship with:
#714
Additional context
Add any other context or screenshots about the feature request here.
Closed as duplicate
|
gharchive/issue
| 2024-06-20T10:54:24 |
2025-04-01T04:35:47.776217
|
{
"authors": [
"stevieing"
],
"repo": "sanger/traction-ui",
"url": "https://github.com/sanger/traction-ui/issues/1775",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
643566954
|
Multiple Retries Before Giving Up
Currently, a goroutine periodically pings on the connection that holds the lock. If a ping fails, it bails out and closes the connection.
Confirm what happens to the connection when a ping fails
If the connection is still kept in the pool, consider multiple ping retries before giving up (and possibly make it configurable)
After reading the driver's code and the golang sql wrapper:
the driver returns a driver.ErrBadConn error if a ping fails
if the ping was done on a connection, the connection is closed https://golang.org/src/database/sql/sql.go?s=50229:50282#L1914
so most likely, retries do not make sense. Confirm this by doing a simulated run locally.
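A minimal sketch of the keepalive loop in question (illustrative, not the library's actual code). The point above is that once PingContext fails, database/sql has already discarded the underlying connection, so retrying on the same *sql.Conn cannot succeed:

package keepalive

import (
    "context"
    "database/sql"
    "time"
)

// keepAlive pings conn periodically and bails out on the first failure.
func keepAlive(ctx context.Context, conn *sql.Conn, interval time.Duration) {
    ticker := time.NewTicker(interval)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            if err := conn.PingContext(ctx); err != nil {
                // Per the notes above, the pool has closed the underlying
                // connection on ErrBadConn, so the lock is gone either way.
                conn.Close()
                return
            }
        case <-ctx.Done():
            return
        }
    }
}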
|
gharchive/issue
| 2020-06-23T06:24:16 |
2025-04-01T04:35:47.793271
|
{
"authors": [
"sanketplus"
],
"repo": "sanketplus/go-mysql-lock",
"url": "https://github.com/sanketplus/go-mysql-lock/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2538122310
|
🛑 Paperless is down
In 698a450, Paperless ($PAPERLESS_URL) was down:
HTTP code: 530
Response time: 70 ms
The issue was due to a power outage, which lasted for more than 5 hours. Current setup provides backup power for 2 hours, hence the downtime.
Resolved: Paperless is back up in 3a1e358 after 4 hours, 31 minutes.
|
gharchive/issue
| 2024-09-20T07:46:00 |
2025-04-01T04:35:47.795724
|
{
"authors": [
"sannidhyaroy"
],
"repo": "sannidhyaroy/Uptime-Ryuu",
"url": "https://github.com/sannidhyaroy/Uptime-Ryuu/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1612251234
|
Fix formatting in argtemplate.py
When running make pycodestyle the following errors appear:
httpie/cli/argtemplate.py:1:1: F401 'argparse' imported but unused
httpie/cli/argtemplate.py:3:1: F401 'sys' imported but unused
httpie/cli/argtemplate.py:7:1: E302 expected 2 blank lines, found 1
httpie/cli/argtemplate.py:16:1: W293 blank line contains whitespace
httpie/cli/argtemplate.py:22:1: W293 blank line contains whitespace
httpie/cli/argtemplate.py:44:1: E302 expected 2 blank lines, found 1
httpie/cli/argtemplate.py:53:37: W291 trailing whitespace
httpie/cli/argtemplate.py:55:1: W293 blank line contains whitespace
httpie/cli/argtemplate.py:58:1: W293 blank line contains whitespace
httpie/cli/argtemplate.py:61:1: W293 blank line contains whitespace
httpie/cli/argtemplate.py:72:39: W291 trailing whitespace
httpie/cli/argtemplate.py:75:1: E302 expected 2 blank lines, found 1
httpie/cli/argtemplate.py:84:12: E714 test for object identity should be 'is not'
httpie/core.py:2:1: F401 'json' imported but unused
tests/test_argtemplate.py:5:1: F401 '.utils.http' imported but unused
tests/test_argtemplate.py:8:1: E302 expected 2 blank lines, found 1
tests/test_argtemplate.py:54:39: E711 comparison to None should be 'if cond is None:'
tests/test_argtemplate.py:109:1: E302 expected 2 blank lines, found 1
tests/test_argtemplate.py:135:1: W293 blank line contains whitespace
tests/test_argtemplate.py:181:1: W293 blank line contains whitespace
tests/test_argtemplate.py:209:13: F841 local variable 'new_stored_template' is assigned to but never used
tests/test_argtemplate.py:210:20: F821 undefined name 'new_stored_templates'
tests/test_argtemplate.py:212:1: E302 expected 2 blank lines, found 1
tests/test_argtemplate.py:226:37: E226 missing whitespace around arithmetic operator
tests/test_argtemplate.py:227:30: E226 missing whitespace around arithmetic operator
tests/test_argtemplate.py:241:1: W293 blank line contains whitespace
tests/test_argtemplate.py:241:17: W292 no newline at end of file
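Most of these are mechanical whitespace fixes; the two comparison warnings are worth spelling out. A minimal standalone illustration with placeholder variables:

template = None
key = "value"

# E711: comparison to None should be 'if cond is None:'
if template is None:  # instead of: if template == None:
    pass

# E714: test for object identity should be 'is not'
if key is not None:  # instead of: if not key is None:
    pass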
done
|
gharchive/issue
| 2023-03-06T21:45:15 |
2025-04-01T04:35:47.799003
|
{
"authors": [
"editf",
"mrporsev"
],
"repo": "sanredz/DD2480-httpie",
"url": "https://github.com/sanredz/DD2480-httpie/issues/14",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1088633576
|
Upcoming release(s)
I have recently got to know much more about what the community likes in the Python frameworks.
I will try to implement the following in the upcoming releases:
[x] Check windows and mac support
- Mac support works fine for me on intel macs. Need to check if it works well for arm macs as well.
[ ] Add a way to release on test pypi before actual pypi
[ ] Add the bases of python stubs
[x] Add flake8 config for the entire project
[ ] Fix hot reloading
[ ] Better support for cargo fuzz
[ ] Use click for better cli parsing
[ ] Web3 extension
[ ] Middleware support
[x] Make readme more colorful and add the link to the benchmarking scripts.
[ ] .....
This is a duplicate issue of this https://github.com/sansyrox/robyn/issues/145
|
gharchive/issue
| 2021-12-25T17:43:23 |
2025-04-01T04:35:47.848990
|
{
"authors": [
"sansyrox"
],
"repo": "sansyrox/robyn",
"url": "https://github.com/sansyrox/robyn/issues/141",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1382234801
|
yarn dev fails to start with an error: Failed to resolve import "D:myCodeislanddocsindex.md" from "virtual:routes". Does the file exist?
Describe the bug
I followed the example in the docs, but in the end yarn dev fails to start.
Reproduction
[plugin:vite:import-analysis] Failed to resolve import "D:myCodeislanddocsindex.md" from "virtual:routes". Does the file exist?
32 | ;
33 | import React from 'react';
34 | const Route0 = lazyWithPreload(() => import('D:\myCode\island\docs\index.md'))
| ^
35 | export const routes = [
36 | { path: '/', element: React.createElement(Route0), filePath: 'D:\myCode\island\docs\index.md', preload: Route0.preload }
at formatError (file:///D:/myCode/island/node_modules/islandjs/node_modules/vite/dist/node/chunks/dep-0fc8e132.js:35330:46)
at TransformContext.error (file:///D:/myCode/island/node_modules/islandjs/node_modules/vite/dist/node/chunks/dep-0fc8e132.js:35326:19)
at normalizeUrl (file:///D:/myCode/island/node_modules/islandjs/node_modules/vite/dist/node/chunks/dep-0fc8e132.js:40255:33)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async TransformContext.transform (file:///D:/myCode/island/node_modules/islandjs/node_modules/vite/dist/node/chunks/dep-0fc8e132.js:40389:47)
at async Object.transform (file:///D:/myCode/island/node_modules/islandjs/node_modules/vite/dist/node/chunks/dep-0fc8e132.js:35579:30)
at async loadAndTransform (file:///D:/myCode/island/node_modules/islandjs/node_modules/vite/dist/node/chunks
Expected behavior
I expect the whole flow to run through normally.
System Info
Windows
Additional context
No response
Validations
[X] Read the docs
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
Which islandjs version are you using?
Which islandjs version are you using?
0.1.5
This PR fixed it: https://github.com/sanyuan0704/island.js/pull/31
Version 0.1.6 should no longer have this problem.
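The underlying bug class here is Windows path separators leaking into generated import specifiers, where the backslashes get swallowed as escape characters (hence the "D:myCode..." path in the error). A minimal sketch of the usual fix, illustrative and not necessarily the exact change in that PR:

import path from 'node:path';

// Convert an absolute Windows path into a POSIX-style specifier that is
// safe to embed in generated `import('...')` code.
export function normalizeRoutePath(filePath: string): string {
  return path.normalize(filePath).replace(/\\/g, '/');
}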
|
gharchive/issue
| 2022-09-22T10:44:21 |
2025-04-01T04:35:47.865356
|
{
"authors": [
"Davont",
"sanyuan0704"
],
"repo": "sanyuan0704/island.js",
"url": "https://github.com/sanyuan0704/island.js/issues/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
183203954
|
Added BFS in java
Added Breadth First Search #65
Added Floyd Warshall #65
Added Djikstra #65
Thanks for the PR :) Merged.
|
gharchive/pull-request
| 2016-10-15T11:11:33 |
2025-04-01T04:35:47.935604
|
{
"authors": [
"sarsiz",
"saru95"
],
"repo": "saru95/DSA",
"url": "https://github.com/saru95/DSA/pull/72",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
56133740
|
Active breakpoint in a global variable.
Hey,
This is just an idea, I wanted to ask you if this makes sense.
I would like to be able to know which active breakpoint I'm writing code for. This would allow me to make decisions based on it (out of this ticket scope).
The basic idea would be to declare a global variable with the default breakpoint as initial value. The default value may depend on the approach used (mobile first, desktop first). So it should be a configurable variable:
$mq-default-breakpoint: mobile !default;
$mq-active-breakpoint: $mq-default-breakpoint;
This variable would change when calling mq(). It would be set to the active breakpoint just before the @content and then reset to its default value. Roughly something like that:
$mq-active-breakpoint: $until; // or $from? or more complex?
@content;
$mq-active-breakpoint: $mq-default-breakpoint;
The code executed in @content would then be aware of its responsive context, which could be pretty powerful.
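For instance, a hypothetical consumer of that setting (assuming the mq() changes sketched above):

.nav {
  @include mq($from: tablet) {
    // inside @content, $mq-active-breakpoint would report `tablet`
    @if $mq-active-breakpoint == tablet {
      padding: 1rem;
    }
  }
}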
The problem here is to define the meaning of the active breakpoint. In a mobile first approach, it would probably be the value of $from, in a desktop first approach it would probably be $until.
What happens if we specify both? Perhaps we could define a direction.
But first, does this make sense to you? :neckbeard:
Thanks for using Sass MQ, Nicolas.
This is a brilliant idea :sparkles:, I had never thought about it, but I can see the power of this feature!
Would you care to open a Pull Request for it?
|
gharchive/issue
| 2015-01-31T18:19:21 |
2025-04-01T04:35:47.941432
|
{
"authors": [
"kaelig",
"ngryman"
],
"repo": "sass-mq/sass-mq",
"url": "https://github.com/sass-mq/sass-mq/issues/48",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
125005919
|
wrong additional debugging information in the output file as CSS comments
code:
sass.render({
file: 'test.scss',
data: 'body{background:blue; a{color:black;}}',
sourceComments: true,
outputStyle: 'expanded'
}, function(error, result) {
console.log(result.css.toString());
});
gives this css:
/* line 1, /home/dp/Projects/tests/spa-test/test.scss */
body {
background: blue;
}
/* line 1, /home/dp/Projects/tests/spa-test/test.scss */
body a {
color: black;
}
problem:
the render option data shadowed the option file, so test.scss was ignored
which is fine, but in the comments the file should be ignored as well
expected output:
/* line 1, stdin */
body {
background: blue;
}
/* line 1, stdin */
body a {
color: black;
}
You must use either data or file. Using both is undefined behaviour. We have no plans to support this at the moment as it has larger ramifications.
This raises an interesting question as to why we're telling LibSass that the source is stdin.
I see no reason to change this though since omitting the file and stdin make no practical difference, and changing this would be a breaking change.
I vaguely remember from the bugs filed by @jameslnewell that there was some use case for filing both data and the file parameter just for the sake of naming the input for errors/maps. Not sure how things currently work in libsass though (a lot has changed in the import mechanics). Node-sass needs to guess whether to pass a so called file context or data context to libsass for processing.
The good news is that I wrote https://github.com/sass/node-sass/blob/master/test/lowlevel.js specifically to test for issues related to this - just adding a test there can help us figure out what libsass will do with such input. (This test jump directly to the binding code and avoids the guessing game we do in JavaScript right now).
I think it was this one. When a module is imported (e.g. @import "sass-grid";) I'm returning the absolute path, along with its contents read from disk, so that for that module's imports (e.g. @import "sass-breakpoints";) I can resolve the dependencies (e.g. sass-breakpoints) relative to the module (e.g. sass-grid).
The absolute path is necessary for modules to work with npm's nested node_modules directories. Instead of letting node-sass read the contents from disk I'm reading the contents from disk myself in order to solve this and this.
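For context, that use case leans on node-sass's custom importer API, roughly like this (a sketch; the resolver below is a deliberately naive placeholder for the real node_modules lookup):

const fs = require('fs');
const path = require('path');

// Hypothetical resolver: look for the import next to the importing file,
// falling back to Node's own resolution (real implementations do more).
function resolveImport(url, prev) {
  const local = path.resolve(path.dirname(prev), url + '.scss');
  if (fs.existsSync(local)) return local;
  return require.resolve(url); // placeholder fallback
}

const importer = (url, prev, done) => {
  const file = resolveImport(url, prev);
  // Returning both the absolute path and the contents lets node-sass name
  // the file in errors/source maps without reading it from disk itself.
  done({ file, contents: fs.readFileSync(file, 'utf8') });
};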
Is there anything required to progress this issue? I think the current behaviour as reported makes perfect sense.
|
gharchive/issue
| 2016-01-05T16:55:05 |
2025-04-01T04:35:47.967090
|
{
"authors": [
"DarkPark",
"jameslnewell",
"saper",
"xzyfer"
],
"repo": "sass/node-sass",
"url": "https://github.com/sass/node-sass/issues/1330",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
120487890
|
Drop older PHP versions
We need to release the new version, which will be the last version that supports older versions (mainly PHP 5.3, potentially PHP 5.4 too). Then the next version will be able to drop the support for older versions.
Once it happens, #104 will become relevant again.
Massive :-1: for this. This tool is to support monitoring code coverage, and disallowing it for 5.3 (or even 5.4) is a big drawback.
You can still use the legacy version as long as we maintain the legacy version, no?
And you never know how long it would be.
And then, what is the real benefits of dropping it?
@keradus
You can casually add this library as a dependency to your project.
But yeah, you may be right. Since we're going to provide a phar file very soon, it (having legacy deps) shouldn't be a big problem.
Legacy deps aren't a problem for other repos since, for example with Symfony's components, one can install 2.3 as well as 3.0 ;) You have just changed it.
So still, it should be allowed to run the tool (phared or not) on PHP 5.3.3 ;)
Ok, so I'll drop this plan.
Thank you @tmatsuo
|
gharchive/issue
| 2015-12-04T21:17:23 |
2025-04-01T04:35:48.025580
|
{
"authors": [
"keradus",
"tmatsuo"
],
"repo": "satooshi/php-coveralls",
"url": "https://github.com/satooshi/php-coveralls/issues/128",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2590986783
|
Drop CLI version support
As discussed with @satwikkansal in discussion, the CLI version is not convenient to read and is hard to maintain, so it should be deleted
CLI-related issues should be closed as not planned
@satwikkansal, I suppose it would be better to delete the package on PyPI as well. Someone could find it and use outdated snippets and gotchas
@nifadyev The package in itself pulls the current README from Github and renders, so it can stay. I'll see how to mark it as deprecated/unmaintained in pypi.
|
gharchive/issue
| 2024-10-16T07:53:35 |
2025-04-01T04:35:48.045319
|
{
"authors": [
"nifadyev",
"satwikkansal"
],
"repo": "satwikkansal/wtfpython",
"url": "https://github.com/satwikkansal/wtfpython/issues/345",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
}
|
157628624
|
Upgrade from 1.3
Hello
I have an app which used 1.3 (db version 1). There was an update with 1.4 (db version 2), and a brand new version with 1.5 (db version 3).
If I upgrade 1.3 -> 1.4 -> 1.5 everything is good.
In that case there is a 2.sql file in assets/sugar_upgrades. I saw in logcat that the alter table commands are committed.
If I upgrade 1.3 -> 1.5 directly, there are some errors and the whole db is deleted on the device.
Errors:
05-31 11:30:34.201 29140-29140/? W/System.err: at com.orm.SchemaGenerator.executeSugarUpgrade(SchemaGenerator.java:100)
05-31 11:30:34.201 29140-29140/? W/System.err: at com.orm.SugarDb.onUpgrade(SugarDb.java:33)
05-31 11:30:34.201 29140-29140/? W/System.err: at com.orm.SugarDb.getDB(SugarDb.java:38)
05-31 11:30:34.201 29140-29140/? W/System.err: at com.orm.SugarRecord.getSugarDataBase(SugarRecord.java:35)
05-31 11:30:34.201 29140-29140/? W/System.err: at com.orm.SugarRecord.count(SugarRecord.java:238)
05-31 11:30:34.201 29140-29140/? W/System.err: at com.orm.SugarRecord.count(SugarRecord.java:226)
When you first run the database and later upgrade, you have to export your data, remove all the previous data on the device and uninstall. Then install your app again and it should run with the recent version of your database.
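For what it's worth, Sugar's upgrade runner replays every script whose number is greater than the installed db version and at most the new one, so a direct 1 -> 3 upgrade needs each intermediate file to be present. An assumed layout, matching the asset path from the question:

assets/sugar_upgrades/2.sql   (schema changes introduced in db version 2)
assets/sugar_upgrades/3.sql   (schema changes introduced in db version 3)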
|
gharchive/issue
| 2016-05-31T09:32:51 |
2025-04-01T04:35:48.048467
|
{
"authors": [
"larrytech7",
"mesterj"
],
"repo": "satyan/sugar",
"url": "https://github.com/satyan/sugar/issues/615",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2560707850
|
fix the navbar responsiveness in mobile
fix the navbar responsiveness in mobile
Additionally, I adjusted the paths of sections like the navbar, etc.; previously they threw an error.
Can I work on this?
My issue no. is #24
@Charul00 already merged
Close the issue #24 @thakuratul2
|
gharchive/pull-request
| 2024-10-02T05:34:50 |
2025-04-01T04:35:48.078850
|
{
"authors": [
"Charul00",
"adityalaxkar123",
"thakuratul2"
],
"repo": "saurabhbakolia/SCROLLME--ECOMMERCE-WEBSITE",
"url": "https://github.com/saurabhbakolia/SCROLLME--ECOMMERCE-WEBSITE/pull/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2139901760
|
feat: get all resources recorded within an encounter
Review Checklist
Summary*
[ ] fix: Bug fix for Add the items tackled here as checklists
[x] feat: Completed task
[ ] chore: Incomplete task
[ ] test: Sub-task 1
Structure*
[ ] The Pull Request has a proper title that conforms to our MR title standards
[ ] The Pull Request has one commit, and if there are more than one, they should be squashed
[ ] The commit should have a proper title and a short description
[ ] The commit must be signed off
[ ] Unused imports are not present
[ ] Dead code is not present
[ ] Ensure dry running library to confirm changes
Tests
[ ] Proper and high quality unit, integration and acceptance(if applicable) tests have been written
[ ] The coverage threshold should not be lowered
Sign off*
[ ] All comments have been resolved by the reviewers
[ ] Approved by Czar {replace_with_czar_name}
[ ] Signed off by second reviewer {replace_with_name}
[ ] Ensure all checks are done before merging :warning:
[ ] All PRs needs to be signed before merging :warning:
N/B:
Add a checklist if more than one item is done.
Add screenshots and/or images where necessary
Indicate any breakages caused in the UI :exclamation:
Where necessary, indicate which issue the Pull Request solves (Closes #)
Any new files are updated in the folder structure in the README
Pull Request Test Coverage Report for Build 7940554284
Details
0 of 35 (0.0%) changed or added relevant lines in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage decreased (-0.4%) to 82.838%
Changes Missing Coverage:
pkg/clinical/usecases/clinical/encounter.go | Covered Lines: 0 | Changed/Added Lines: 35 | 0.0%
Totals:
Change from base Build 7927479723: -0.4%
Covered Lines: 6714
Relevant Lines: 8105
💛 - Coveralls
|
gharchive/pull-request
| 2024-02-17T08:43:26 |
2025-04-01T04:35:48.144233
|
{
"authors": [
"Salaton",
"coveralls"
],
"repo": "savannahghi/clinical",
"url": "https://github.com/savannahghi/clinical/pull/363",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2631447046
|
Help with translation into Russian.
Hello. I am from Russia. My name is Artem and I am 19. I came across your game in one of the Russian Telegram channels. I would like to offer you my help with translation into Russian, purely on a voluntary basis.
That's great! Feel free to check out the translating README https://github.com/saving-satoshi/saving-satoshi/blob/dev/i18n/README.md and let us know if you have any other questions in our bitcoin design discord
Amazing!! Thank you so much for your interest in helping with this! Tagging @aassoiants in case he can be helpful for review :smile:
@satsie ha! My proofreading would need some proofreading!
This issue is a duplicate of https://github.com/saving-satoshi/saving-satoshi/issues/1194 so I am closing this one!
|
gharchive/issue
| 2024-11-03T20:17:43 |
2025-04-01T04:35:48.147115
|
{
"authors": [
"ArtoymKryazhev",
"aassoiants",
"benalleng",
"satsie"
],
"repo": "saving-satoshi/saving-satoshi",
"url": "https://github.com/saving-satoshi/saving-satoshi/issues/1157",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1355029225
|
Fix imports, update readme install, and rearrange .env var order
If an import fails, it is retried after installing with pip, and if that fails it falls back to pip3. If both fail, an exception is raised. Also rearranged the env variable declarations so that they are declared in the order they appear in .env.example. Updated the readme install instructions so that they are clearer and easier to follow.
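A minimal sketch of that fallback (illustrative, not the PR's exact code):

import importlib
import subprocess

def ensure_module(name):
    """Import `name`, installing it via pip (then pip3) if the import fails."""
    try:
        return importlib.import_module(name)
    except ImportError:
        pass
    for installer in ("pip", "pip3"):
        try:
            subprocess.check_call([installer, "install", name])
            return importlib.import_module(name)
        except (subprocess.CalledProcessError, FileNotFoundError, ImportError):
            continue
    raise ImportError("could not import or install " + name)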
Looks good to me!
|
gharchive/pull-request
| 2022-08-30T00:55:01 |
2025-04-01T04:35:48.162175
|
{
"authors": [
"NelsonDane",
"sazncode"
],
"repo": "sazncode/BingRewards",
"url": "https://github.com/sazncode/BingRewards/pull/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
520854814
|
Allow translating Opens and Closes text
Could you please add opensText and closesText as props (under formFieldMixin.js)? They're useful for translation:
opensText: { type: String, default: 'Opens', required: false },
closesText: { type: String, default: 'Closes', required: false }
Thanks
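For illustration, a hypothetical template usage once such props exist (the component tag and i18n keys are placeholders):

<business-hours
  :opens-text="$t('hours.opens')"
  :closes-text="$t('hours.closes')"
></business-hours>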
Sorry, I've been super busy and haven't had a chance to take a look at this yet. Hopefully this weekend.
This has been addressed with PR #4. All strings are able to be translated now.
|
gharchive/issue
| 2019-11-11T10:03:24 |
2025-04-01T04:35:48.195138
|
{
"authors": [
"sbarry50",
"wilokecom"
],
"repo": "sbarry50/vue-business-hours",
"url": "https://github.com/sbarry50/vue-business-hours/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1840681628
|
Trying to change font and background colour
Hello,
I'm attempting to modify the font and background colors, but they keep reverting back to the default colors automatically. Could you please suggest a solution to overcome this issue? Thank you in advance.
Would you mind being a bit more specific?
h1, h2, h3, h4, h5, h6,
.h1, .h2, .h3, .h4, .h5, .h6 {
font-family: inherit;
font-weight: 400;
line-height: 1.1;
color: #964103; }
h1 small,
h1 .small, h2 small,
h2 .small, h3 small,
h3 .small, h4 small,
h4 .small, h5 small,
h5 .small, h6 small,
h6 .small,
.h1 small,
.h1 .small, .h2 small,
.h2 .small, .h3 small,
.h3 .small, .h4 small,
.h4 .small, .h5 small,
.h5 .small, .h6 small,
.h6 .small {
font-weight: normal;
line-height: 1;
color: #999999; }
Suppose I'm altering the font colour here, and it promptly updates on the local site. However, upon committing to GitHub, it reverts to the default color.
Based on this information I am unsure what is causing your problem -- sorry.
My apologies for the short answers. I am kindly sharing the repository containing my website. Would this be helpful?
Link to the repository: https://github.com/BrinthanK/brinthank.github.io
You have no source branch. Perhaps this is the issue. Please read the instructions carefully!
Thank you for pointing out the problem. Let me try it.
Thank you for your support @sbryngelson. I was pushing the _site/ folder as well. That's where my mistake was.
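For anyone hitting the same thing: _site/ is Jekyll's build output and should not be committed; GitHub Pages rebuilds it from the source branch. A typical .gitignore entry:

# Jekyll build output; GitHub Pages regenerates this from source
_site/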
|
gharchive/issue
| 2023-08-08T06:55:54 |
2025-04-01T04:35:48.203863
|
{
"authors": [
"BrinthanK",
"sbryngelson"
],
"repo": "sbryngelson/academic-website-template",
"url": "https://github.com/sbryngelson/academic-website-template/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
313547357
|
Watch improvements
This pull request makes some improvements to the source modification watch.
The PollingWatchService had a few minor issues that caused transient test failures
The MacOSXWatchService did not quite implement the WatchService api correctly (DefaultWatchServiceSpec failed with the watch service set to MacOSXWatchService)
I re-implement the SourceModificationWatch.watch method to remove the latency between receiving file or user input events and triggering a build (or exiting).
To validate this pull request against other sbt modules, please visit this link.
@eatkins Thanks for the update.
I am going to be busy this week and next for a conference, so the review might be delayed.
Sounds good. I have some improvements I want to make anyway.
https://github.com/facebook/watchman
Care to elaborate, @hepin1989?
Thanks @dwijnand. Yeah, that last commit was hard to split up into bite sized chunks. Apologies for the review overhead.
Adding mac os to travis seems like a no brainer. I'm working on the matrix, but travis seems to be having issues at the moment.
I figured out the travis issues with PollingWatchServiceSpec (among other things, the travis osx agents use HFS+, which doesn't have millisecond precision). I also added the scaladocs to EventMonitor as requested. Finally, I renamed EventMonitor.watch to EventMonitor.awaitEvents as suggested by @eed3si9n.
@eatkins Can you please provide a link to a failing Travis run in which you see utimensat() returning EINVAL? According to the man page, there are only a few cases in which that may happen, the most likely being a NULL path name.
@cunei https://travis-ci.org/sbt/io/jobs/370883894
|
gharchive/pull-request
| 2018-04-12T01:50:38 |
2025-04-01T04:35:48.209773
|
{
"authors": [
"cunei",
"eatkins",
"eed3si9n",
"hepin1989",
"typesafe-tools"
],
"repo": "sbt/io",
"url": "https://github.com/sbt/io/pull/142",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
946719134
|
Bump JNA to 5.8.0
Fixes #320
@eed3si9n Thanks! Are you planning to merge this for the next release?
Note that the 1.5.x series is already branched out to 1.5.x, so merging this to the develop branch doesn't necessarily mean it will be part of the next release. I am not sure how safe it would be to upgrade JNA without going through an RC cycle. It might address this issue, but it could break some other plugin?
sbt also transitively depends on JNA via sbt/ipcsocket, and I've actually released ipcsocket 1.3.1 and ipcsocket 1.4.0 so I don't introduce a JNA upgrade in 1.5.x. One potential workaround might be to release sbt 1.6.0-M1 or fix the nightly build.
|
gharchive/pull-request
| 2021-07-17T05:27:57 |
2025-04-01T04:35:48.212355
|
{
"authors": [
"eed3si9n",
"vbabenkoru"
],
"repo": "sbt/io",
"url": "https://github.com/sbt/io/pull/321",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
91044645
|
Comment out explicit check against M2_PATTERN
REF sbt/sbt#2005, and potentially relates to some of the SNAPSHOT issues listed on sbt/sbt#1780.
Ivy checks if the repository patterns ends with M2_PATTERN to see if maven-metadata.xml should be used. Because sbt customizes the pattern, Ivy will end up not using maven-metadata.xml.
/review @jsuereth, @dwijnand
Sorry I've no idea what changing shouldUseMavenMetadata would impact.
@dwijnand I appreciate the candid honesty. It's impossible to keep track of all the details, so feel free to ask for an explanation of what's going on. Note that these are facts that I've dug out recently, so they might not be 100% accurate.
When Ivy sees dynamic revisions like version range, it queries the repository for the list of all available versions for the given artifact. The method that gets called eventually is listResources - https://github.com/sbt/ivy/blob/aa94ce736984ceb123a327ae5e7a3269baadbf05/src/java/org/apache/ivy/plugins/resolver/IBiblioResolver.java#L392-L433
It first determines whether the repository is fully compatible with Maven using shouldUseMavenMetadata(pattern); if it is, it uses maven-metadata.xml, which lists out the versions. Otherwise, it tries screen scraping using ApacheURLLister. The Apache screen scraping doesn't work with Bintray, because they've added the "#" character to their directory listing. The goal here is to make shouldUseMavenMetadata(pattern) more lenient so sbt's dependency resolution can use the more accurate maven-metadata.xml.
But why does sbt (not ivy) change the maven pattern?
Also, given a release off of the head of this PR is now in sbt master, this should be merged.
sbt's pattern encodes (_[scalaVersion])(_[sbtVersion]) so we can do cross publishing in Maven sort-of-compatible way. - https://github.com/sbt/sbt/blob/0.13/ivy/src/main/scala/sbt/Resolver.scala#L328-L330
Oh, those. I see. Huh, but that was years ago! Ok thanks.
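For readers following along, here are the two patterns side by side — Ivy's M2_PATTERN from IBiblioResolver versus sbt's Maven-style pattern from the Resolver.scala link above (paraphrased from the linked sources, so double-check there):
Ivy M2_PATTERN: [organisation]/[module]/[revision]/[artifact]-[revision](-[classifier]).[ext]
sbt pattern: [organisation]/[module](_[scalaVersion])(_[sbtVersion])/[revision]/[artifact]-[revision](-[classifier]).[ext]
Since the customized pattern no longer ends with the plain M2 pattern, the ends-with check in shouldUseMavenMetadata fails, and Ivy falls back to screen scraping.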
|
gharchive/pull-request
| 2015-06-25T18:44:15 |
2025-04-01T04:35:48.217743
|
{
"authors": [
"dwijnand",
"eed3si9n"
],
"repo": "sbt/ivy",
"url": "https://github.com/sbt/ivy/pull/13",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
125046891
|
Provide a Setting that points to the (eventual) config:packageBin
Currently the packageBin task is the only way to get hold of the jar output from a project/config (actually (artifactPath in Compile in packageBin).value would be enough, but it amounts to the same thing).
However, it is often useful to know where the file is going to be without necessarily invoking the compilation (e.g. projects that are compiler plugins and need to be referenced by IDEs, caching strategies).
Proposed implementation:
val packageBinFile = SettingKey[File](
  "packageBinFile",
  "The (eventual) location of the packageBin."
)
def packageBinFileSetting = Def.setting {
  val append = configuration.value match {
    case Compile => ""
    case Test    => "-tests"
    case _       => "-" + configuration.value.name
  }
  crossTarget.value / s"${projectID.value.name}_${scalaBinaryVersion.value}-${projectID.value.revision}$append.jar"
}
It's artifactPath in packageBin.
I don't think you read the ticket in enough detail; that forces many Ivy tasks to run, all the way down (resulting in hundreds of IO-heavy tasks), which is very slow on Windows.
(BTW configuration is needed too)
I should really form a TaskTimings trace of the current approach to show what is being triggered.
Using sbt-big-project-test on Windows XP, compile:packageBin::artifactPath returns immediately for me.
It's possible I was getting crosstalk from other tasks or settings. Did you do a TaskTimings to trace that?
Did you do a TaskTimings to trace that?
I did it anyway, but I didn't have to because artifactPath is a settingKey, which by design does not invoke the task engine.
What you might see in the task timing if you run sbt from the shell is the compilation of the meta build i.e. project/ClusterFsckBuild.scala itself, not the processing of compile:packageBin::artifactPath.
OK, cool, I'll confirm this tonight and close the ticket if that's the case. And remove my workaround.
|
gharchive/issue
| 2016-01-05T20:41:46 |
2025-04-01T04:35:48.229018
|
{
"authors": [
"eed3si9n",
"fommil"
],
"repo": "sbt/sbt",
"url": "https://github.com/sbt/sbt/issues/2348",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
144844683
|
Error message could be better
I get
java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
<ELIDED BY ME>
The problem is that it doesn't say what exactly it tried to access. I expect that all exceptions of this kind are caught and display a proper error message like "Dear user, we tried to create a file named foo in directory /bar/za. Unfortunately, the w bit in directory za is not set for user $RUNNING_USER, so as a result this program will now have to exit. Please correct this situation and run the program again".
I agree. Could you provide a reproducing case so we can pin-point what needs changing?
|
gharchive/issue
| 2016-03-31T09:39:20 |
2025-04-01T04:35:48.230782
|
{
"authors": [
"aragnon",
"dwijnand"
],
"repo": "sbt/sbt",
"url": "https://github.com/sbt/sbt/issues/2529",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
10311579
|
Fork in test should handle exceptions during creation of test class
We recently ran into a pretty severe issue where a ScalaTest test class was throwing a NPE during construction, which caused test execution to stop but for success to be reported. This meant that we were running 1/3 of our test suite instead of the full test suite and failures were not reported. Note that this was a multi-module build.
I can provide a simple test case if that helps.
The output was:
Exception in thread "Thread-32" java.io.EOFException
at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2571)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1315)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
at sbt.React.react(ForkTests.scala:98)
at sbt.ForkTests$$anonfun$apply$2$Acceptor$2$.run(ForkTests.scala:66)
at java.lang.Thread.run(Thread.java:722)
Exception in thread "Thread-32" java.io.EOFException
at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2571)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1315)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
at sbt.React.react(ForkTests.scala:98)
at sbt.ForkTests$$anonfun$apply$2$Acceptor$2$.run(ForkTests.scala:66)
at java.lang.Thread.run(Thread.java:722)
[info] Passed: : Total 0, Failed 0, Errors 0, Passed 0, Skipped 0
I am still facing this issue when using fork in Test := true.
Can someone please suggest a fix?
It might be network-related, folks.
I can reproduce this test on my OSS project, http://github.com/spark-jobserver/spark-jobserver, master branch, which uses SBT 0.13.7, Scala 2.10.4, and ScalaTest, and forks tests .... IF I turn my VPN on, which screws up my localhost IP. In which case:
[warn] four warnings found
[error] Uncaught exception when running tests: java.net.ConnectException: Operation timed out
Exception in thread "Thread-4" java.io.EOFException
at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2601)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1319)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at sbt.React.react(ForkTests.scala:114)
at sbt.ForkTests$$anonfun$mainTestTask$1$Acceptor$2$.run(ForkTests.scala:74)
at java.lang.Thread.run(Thread.java:745)
If I turn my VPN off, then all the tests pass.
For me, adding JVM memory options to build.sbt did the magic... somehow.
Seeing this with increasing regularity.
|
gharchive/issue
| 2013-01-25T13:54:48 |
2025-04-01T04:35:48.234874
|
{
"authors": [
"ijuma",
"kapilsaini",
"michaelahlers",
"velvia"
],
"repo": "sbt/sbt",
"url": "https://github.com/sbt/sbt/issues/653",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
243160291
|
Synchronize with latest changes in Zinc API
As per suggestions in https://github.com/sbt/zinc/pull/351.
To validate this pull request against other sbt modules, please visit this link.
This will actually be the ripple PR for Zinc's PR: https://github.com/sbt/zinc/pull/354/files.
Since this PR depends on https://github.com/sbt/sbt/pull/3316 changes because they've gone into master, I'm closing this and adding this commit to the main sbt ripple PR.
|
gharchive/pull-request
| 2017-07-15T07:48:14 |
2025-04-01T04:35:48.237695
|
{
"authors": [
"jvican",
"typesafe-tools"
],
"repo": "sbt/sbt",
"url": "https://github.com/sbt/sbt/pull/3318",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
194787347
|
doc version menu: choosing "latest version" gives you 1.0
if this was even intentional (?), it doesn't seem like a good idea to me. 1.0 isn't out yet. I would suggest that "latest version" give me the docs for 0.13.13
not sure whether these should be separate tickets, but:
I would also suggest dropping everything but 0.7.7, 0.12.4, and 0.13
it's really confusing that doc versions "0.13" and "0.13.5" are offered but "0.13" is actually newer than 0.13.5, since "0.13" is actually "latest 0.13.x"
It was intentional, but you're right on all points.
The list is now limited to 0.7.7, 0.12.4, and 0.13 - https://github.com/sbt/website/commit/890cae47e65037d53c545f05322862b7bb9aab06
To mitigate the 0.13.5 confusion, I'm borrowing the warning callout from Akka's doc:
https://github.com/sbt/website/commit/e5adeb5879d08a98620e3c712681e60fc17eb68b
https://github.com/sbt/sbt.github.com/commit/50229c6047cdf58fa0c0c3017616f61ec49f56fc
I love the big yellow warning, well done!
An interesting side effect of removing everything but 0.7.7, 0.12.4, and 0.13 is that if you end up on a 1.0 beta document via Google, there's no way to get to the actual latest 0.13.
|
gharchive/issue
| 2016-12-10T19:09:51 |
2025-04-01T04:35:48.243146
|
{
"authors": [
"SethTisue",
"eed3si9n"
],
"repo": "sbt/website",
"url": "https://github.com/sbt/website/issues/302",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1859380797
|
java records not supported
To reproduce: Put a record in your java code and try compiling.
It's been in the language since Java 14. I don't know which versions you're trying to support, but this would be appreciated.
what version of Scala are you using? in Scala 2, support landed in Scala 2.13.7: https://github.com/scala/scala/pull/9551. in Scala 3, support is in 3.3.1-RC5, and will land in a stable release 3.3.1, as per https://github.com/lampepfl/dotty/pull/16762
I was switching to Scala 3, now that you mention it. Sounds like it's under control. Thank you for your efforts.
|
gharchive/issue
| 2023-08-21T13:30:38 |
2025-04-01T04:35:48.245752
|
{
"authors": [
"SethTisue",
"tballard"
],
"repo": "sbt/zinc",
"url": "https://github.com/sbt/zinc/issues/1238",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
161896878
|
Extract implicits from the exercise bodies
@raulraja please take a look.
LGTM!
|
gharchive/pull-request
| 2016-06-23T11:08:53 |
2025-04-01T04:35:48.261688
|
{
"authors": [
"dialelo",
"raulraja"
],
"repo": "scala-exercises/exercises-cats",
"url": "https://github.com/scala-exercises/exercises-cats/pull/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
162939098
|
AbstractMethodError when evaluating exercises
Compilation error: java.lang.AbstractMethodError: Eval9055044378944555573.apply()Ljava/lang/Object;
@raulraja it's fixed now; I'll close this issue when somebody other than me confirms it. @FPerezP can you take a look?
|
gharchive/issue
| 2016-06-29T14:25:31 |
2025-04-01T04:35:48.262640
|
{
"authors": [
"dialelo",
"raulraja"
],
"repo": "scala-exercises/scala-exercises",
"url": "https://github.com/scala-exercises/scala-exercises/issues/490",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
16722397
|
Keep single case on its line
Hi,
I would like to preserve (or ideally force) this kind of layout with a single case:
future {
  // ...
} onSuccess { case result ⇒
  // ...
}
rather than
future {
  // ...
} onSuccess {
  case result ⇒
    // ...
}
This is similar to how scalariform treats lambdas, and it saves space and indentation. Two or more cases should, of course, each go on their own line. Common scenarios besides onSuccess/onFailure include zip+map combinations, etc.
Please note that this is related to, but different from #82.
Kind regards,
Nick
Duplicate of #29 . This would be nice. :)
|
gharchive/issue
| 2013-07-14T01:23:15 |
2025-04-01T04:35:48.265502
|
{
"authors": [
"jkinkead",
"stanch"
],
"repo": "scala-ide/scalariform",
"url": "https://github.com/scala-ide/scalariform/issues/85",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2747693758
|
Named tuples are ambiguous in (new) given syntax
Compiler version
3.6.2 with -experimental
Minimized example
I'm currently implementing the new given syntax parsing for the intellij-scala plugin and I stumbled upon this:
given (x: Int) = ???
Output
The compiler gives ')' expected, but ':' found as the error message.
Expectation
According to the grammar the above code should be allowed
as named tuples are part of SimpleType.
Here the relevant grammar:
GivenDef ::= [id ':'] GivenSig
GivenSig ::= GivenImpl
| '(' ')' '=>' GivenImpl
| GivenConditional '=>' GivenSig
GivenImpl ::= GivenType ([‘=’ Expr] | TemplateBody)
| ...
GivenConditional ::= DefTypeParamClause
| DefTermParamClause
| ...
| GivenType
GivenType ::= AnnotType1 {id [nl] AnnotType1}
AnnotType ::= SimpleType {Annotation}
DefTermParamClause and GivenType can look the same ((x: Int) for example)
I know that it doesn't make too much sense to have a named tuple in that position, still I'd like to know if that is intentional or not. Just so I don't have to implement and test our parser twice :grinning:
In case it should stay disallowed, I'd suggest changing the error message either to => expected after the ) or to something like named tuples are not allowed in this position.
The -experimental option will allow @experimental definitions. An import scala.language.experimental.namedTuples is necessary to use named tuples. Having imported it, the given (x: Int) = ??? parses as expected, on the current main branch at least.
@SrTobi The following code compiles correctly:
//> using scala 3.nightly
import scala.language.experimental.namedTuples
given (x: Int) = ???
Ah ok... I was confused by the fact that val x: (y: Int) = ??? compiles when -experimental is used.
Also the example doesn't even relate to what I mean with the above mentioned ambiguity. The example I should have given (pun intended) is: given (x: Int) => Int = ???
Here (x: Int) can be both DefTermParamClause and GivenType in GivenConditional. Currently it's parsed as a DefTermParamClause and I'm going forward with implementing it that way on our side.
Not sure if you want to change anything in the grammar. if not feel free to close this issue. Thx
given (x: Int) => Int = ???
Good point, this one indeed appears to be ambiguous. Though any changes to the grammar will most likely reflect the current behavior.
GivenConditional is supposed to win here. If you really want a given of a function type, you need to put the function type in parentheses. (but that should almost never happen in real programs, because why would you even want a given of a function type other than to design a puzzler?)
@sjrd it's not about function type vs given conditional+givenImpl, but about parameter clause vs named tuple within a given conditional
|
gharchive/issue
| 2024-12-18T12:32:16 |
2025-04-01T04:35:48.308798
|
{
"authors": [
"EugeneFlesselle",
"KacperFKorban",
"SrTobi",
"sjrd"
],
"repo": "scala/scala3",
"url": "https://github.com/scala/scala3/issues/22237",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
602768183
|
Add example using MUnit with Maven builds
The documentation currently doesn't explain how to set up MUnit with Maven. Here is an example repo that sets up MUnit with Maven: https://gitter.im/scalameta/munit?at=5e9b77505706b414e1daea88
direct link: https://github.com/milanvdm/munit-playground/blob/master/pom.xml
Btw. @sideeffffect I also added munit to the example Maven g8 template here: https://github.com/scalameta/maven-scala-seed.g8
If there is something missing, maybe we could add it there? We already added that template to Metals, and it is discoverable via the new-scala-scala-project command.
What would also be useful is an example of how to make it work with the maven-surefire-plugin of the 3.x series. I wasn't able to figure it out :(
|
gharchive/issue
| 2020-04-19T17:05:08 |
2025-04-01T04:35:48.331761
|
{
"authors": [
"olafurpg",
"sideeffffect",
"tgodzik"
],
"repo": "scalameta/munit",
"url": "https://github.com/scalameta/munit/issues/107",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
931778962
|
Ability to add parameters to test("something") for scenarios where values are only available with injection
Realworld scenario: Trying to test Vert.x + Scala apps.
Vert.x has their own testing integration called Vert.x Unit that's based on JUnit 4, as well as a JUnit 5 integration (but JUnit 5 doesn't seem well-supported in Scala-land right now so it seemed off the table):
https://vertx.io/docs/vertx-unit/java/
https://vertx.io/docs/vertx-junit5/java/
object Test:
  var vertxDeploymentId: String = ""

  @BeforeClass
  final def setup() =
    val maxWaitTime: FiniteDuration = Duration(5, TimeUnit.SECONDS)
    vertxDeploymentId = Await.result(VertxScala(Vertx.vertx).deployVerticle(new HttpVerticle()), maxWaitTime)

  @AfterClass
  final def teardown() =
    VertxScala(Vertx.vertx).undeploy(vertxDeploymentId)

@RunWith(classOf[VertxUnitRunner])
class Test:
  @Test def t1(testContext: TestContext) =
    // injected here
I have tried to mimic this with MUnit, but it doesn't appear to function at all:
abstract class FunSuite extends munit.FunSuite {
  def test(name: String)(body: (TestContext) => Any)(implicit loc: munit.Location): Unit = {
    test(new munit.TestOptions(name))(body)
  }
}
@RunWith(classOf[VertxUnitRunner])
class MUnitExample extends FunSuite {
  test("my test") { (testContext: TestContext) =>
    munit.Assertions.assert(testContext != null)
    println(f"MUnit testContext is: $testContext")
  }
}
Some solution to this would be great, even like:
test("my test", (testContext: TestContext) => {
munit.Assertions.assert(testContext != null)
println(f"MUnit testContext is: $testContext")
})
Hi, thanks for reporting! Have you taken a look at functional fixtures? https://scalameta.org/munit/docs/fixtures.html#functional-test-local-fixtures
They look like a great fit for your use case
Thanks for responding so quickly -- I'd be happy to use those if they work with the parameter dependency injection.
I had looked at them and wasn't sure how I could emulate this:
@Test def t1(testContext: TestContext) =
Since testContext is injected in a special way by the Vert.x runner into the JUnit test arguments, by some magic it does, I think.
|
gharchive/issue
| 2021-06-28T17:04:49 |
2025-04-01T04:35:48.336642
|
{
"authors": [
"GavinRay97",
"gabro"
],
"repo": "scalameta/munit",
"url": "https://github.com/scalameta/munit/issues/383",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
102387545
|
Remaining cases from specification (resubmit)
Supersedes https://github.com/scalameta/scalameta/pull/228
LGTM
|
gharchive/pull-request
| 2015-08-21T14:11:47 |
2025-04-01T04:35:48.338124
|
{
"authors": [
"xeno-by"
],
"repo": "scalameta/scalameta",
"url": "https://github.com/scalameta/scalameta/pull/232",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
183204397
|
Publish for Scala 2.12.0-RC2
Would be very much appreciated :-)
I'm ready to publish paradise right after scalatest for RC2 is released.
Blocked by ScalaCheck, we'll ping Ricky to release for ScalaCheck. :)
This is like dbuild, but really slow.
Follow the bread crumbs: https://github.com/rickynils/scalacheck/pull/275
If you could publish for scalatest 2.x as well, it would be really great. We'd rather not tie supporting Scala 2.12 to switching to scalatest 3.x.
Scalacheck now published for 2.12.0-RC2: https://github.com/rickynils/scalacheck/pull/275#issuecomment-254441760.
@milessabin @xeno-by 3.0.0 for Scala 2.12.0-RC2 is published.
Thanks!
Thank you!
Well I guess it's already there: http://search.maven.org/#artifactdetails|org.scalatest|scalatest_2.12.0-RC2|3.0.0|bundle
now it's time for 2.12.0 : https://github.com/scala/scala/tree/v2.12.0
Yes, don't forget scalautils for Scala 2.12.0 :)
I'd appreciate a 2.12.0 build of scalatest in order to publish paradise :)
I think I'm going to ask Lightbend to please stop making releases on Friday nights. It seems to happen a lot! I don't like asking people to work on the weekends, so we'll do this on Monday at the latest.
Well, for me it's not an issue if you don't publish it immediately.
I guess most people prefer the weekend to toy around, so releases for Scala on Thursday and for the testing frameworks on Friday would be cool, so that people who use Scala in their free time or help out open source in their free time can actually toy with it ;)
Yes, I do make the request to publish on the weekend, because it is important to the community. But I don't like doing that. Regardless, we couldn't publish today because ScalaCheck wasn't out yet.
Scalacheck is out for Scala 2.12!
FYI: I just released Scalactic and ScalaTest 3.0.0 built for 2.12.0.
@cheeseng Thank you very much! I've just released macro paradise 2.1.0 for 2.12.0.
@xeno-by You're most welcome!
(close this ticket now?)
yes.
|
gharchive/issue
| 2016-10-15T11:23:17 |
2025-04-01T04:35:48.349568
|
{
"authors": [
"SethTisue",
"bvenners",
"cheeseng",
"dwijnand",
"etorreborre",
"milessabin",
"mosesn",
"mslinn",
"schmitch",
"xeno-by"
],
"repo": "scalatest/scalatest",
"url": "https://github.com/scalatest/scalatest/issues/989",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2159536847
|
Multiple Cores
Hello, I am trying to run the Yolo topology with an 8MB SRAM buffer each for Input, Output, and Filter, i.e. a total of 24MB SRAM. It takes a long time to execute just the first layer.
I have the following questions:
Is there a way I can estimate the runtime memory required?
Can we run the simulation on multiple cores? I see a comment on line 69 of Simulator.py about the layer loop being parallelizable.
# 2. Run each layer
# TODO: This is parallelizable
Hi, have you solved this problem? Please contact me. Thanks in advance!
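For the multi-core question, the per-layer loop flagged by that TODO could in principle be farmed out to a process pool. Here is a rough sketch, not part of SCALE-Sim itself — run() is a hypothetical per-layer entry point (the real method name in scale-sim-v2 may differ), and it assumes the per-layer simulation objects are independent and picklable, which may not hold for every configuration:
from multiprocessing import Pool

def simulate_layer(layer_sim):
    # Hypothetical per-layer entry point; adjust to the actual API.
    layer_sim.run()
    return layer_sim

def run_all_layers(layer_sims):
    # One worker process per core by default; pool.map preserves layer order.
    with Pool() as pool:
        return pool.map(simulate_layer, layer_sims)
For the memory question, I am not aware of a built-in estimator; measuring actual usage with Python's tracemalloc or resource modules is a pragmatic fallback.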
|
gharchive/issue
| 2024-02-28T18:06:00 |
2025-04-01T04:35:48.366494
|
{
"authors": [
"V-Run-P",
"zhouwq13"
],
"repo": "scalesim-project/scale-sim-v2",
"url": "https://github.com/scalesim-project/scale-sim-v2/issues/101",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
261679389
|
Error: handleToastTapped that is not exposed to Objective-C
I am getting this error, any idea?
Argument of '#selector' refers to instance method 'handleToastTapped' that is not exposed to Objective-C
Add '@objc' to expose this instance method to Objective-C
Thanks
See #83. Thanks.
Thanks!
|
gharchive/issue
| 2017-09-29T15:28:41 |
2025-04-01T04:35:48.368097
|
{
"authors": [
"Daxito",
"scalessec"
],
"repo": "scalessec/Toast-Swift",
"url": "https://github.com/scalessec/Toast-Swift/issues/81",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1725552253
|
PTFE-246: bump ingress apiversion of gh-exporter
Due to a cluster upgrade, apiVersion v1beta is no longer supported.
Codecov Report
Merging #38 (c5092bc) into main (6f4b71e) will increase coverage by 0.63%.
The diff coverage is 96.87%.
@@ Coverage Diff @@
## main #38 +/- ##
==========================================
+ Coverage 96.15% 96.79% +0.63%
==========================================
Files 10 14 +4
Lines 442 499 +57
==========================================
+ Hits 425 483 +58
+ Misses 17 16 -1
Flag | Coverage Δ
api | 96.69% <96.68%> (?)
unit | 19.54% <38.70%> (?)
Flags with carried forward coverage won't be shown.
Impacted Files | Coverage Δ
gh_actions_exporter/metrics.py | 89.56% <85.29%> (+1.23%) :arrow_up:
gh_actions_exporter/main.py | 96.07% <88.23%> (-1.49%) :arrow_down:
tests/api/metrics/test_job.py | 98.68% <98.68%> (ø)
gh_actions_exporter/Webhook.py | 100.00% <100.00%> (ø)
gh_actions_exporter/config.py | 97.22% <100.00%> (+0.44%) :arrow_up:
gh_actions_exporter/cost.py | 100.00% <100.00%> (ø)
tests/api/conftest.py | 100.00% <100.00%> (ø)
tests/api/metrics/conftest.py | 100.00% <100.00%> (ø)
tests/api/metrics/test_workflow.py | 100.00% <100.00%> (ø)
tests/unit/conftest.py | 100.00% <100.00%> (ø)
... and 1 more
|
gharchive/pull-request
| 2023-05-25T10:33:23 |
2025-04-01T04:35:48.513986
|
{
"authors": [
"Abubakarr99",
"codecov-commenter"
],
"repo": "scality/gh-actions-exporter",
"url": "https://github.com/scality/gh-actions-exporter/pull/38",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
3384408
|
Correcting invitable synchronization with omniauthable?
Thanks for creating such a great tool with invitable.
I have been using Devise with omniauth (primarily for Facebook login) and now I am seeking to implement invitations, yet not lose the omniauth login functionality. My goal is to allow a user to accept an invitation, then have the option to register/login using omniauth or the password confirmation fields that already exist.
I know to edit "/views/devise/invitations/edit.html.erb" so that there is a link to the omniauth_authorize_path. However, how would this authorization retain the checks on the invitation token? With omniauth we would still need to ensure a valid invitation_token and record the invited_by.
Many thanks for your help!
-Dave
I was wondering this same thing. No, simply providing a link is not enough. The invitation token will be lost.
There's no good way to pass parameters via omniauth and have them come back, so the best approach is probably to save the invitation token in the session before you send them off to Facebook, and then look for this token in the session in the omniauth callback.
Thanks, @travisp -- that sounds smart. Were you able to get it to work? I'm a bit new to this, so if you have any example code that made this happen for you, I would be most grateful to see it and learn from you.
Many thanks!
-Dave
I did get it to work. I've provided some actual and some modified code. Hope this helps you and others:
https://gist.github.com/2007222
Most of the custom code is in the omniauth callbacks controller (and the exact approach will depend on your implementation of the callbacks controller), but it does require modifying the Devise::InvitationsController as well.
Two small edits to Invitations controller:
InvitationsController/edit -- set invitation_token in the session
InvitationsController/update -- remove the session invitation token if they accept the invitation by filling in the standard sign up form
OmniauthCallbacksController:
If this is a callback for an omniauth identity that's not in our database, and an invitation token exists in the session, then find the User matching that invitation token. Apply the omniauth information to the user model (in this case, basically store the omniauth identity). If the user model is valid and the user was an invited user, call accept_invitation! to save the model and clear the invitation token from the database (this is crucial). Finally, clear the session invitation token.
You can make this invite required either by modifying the callbacks controller to deal with when a user is not found with the invitation token, or by adding a validation to the model.
This is quite old already, but just FYI, you can add params to omniauth stages:
A link generated with:
<%= link_to "Sign in with Google", user_google_oauth2_omniauth_authorize_url(invitation_token: params[:invitation_token] ) %>
Will give you access to the invitation_token in request.env['omniauth.params'] on the callback action.
|
gharchive/issue
| 2012-02-25T11:31:54 |
2025-04-01T04:35:48.580064
|
{
"authors": [
"Onumis",
"daidekman",
"travisp"
],
"repo": "scambra/devise_invitable",
"url": "https://github.com/scambra/devise_invitable/issues/177",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1623765323
|
Update a chart in PowerPoint with multiple plots using Python-pptx
I have the following chart in PowerPoint:
I read the chart data in Python using the python-pptx library.
chart = shapes_list[19].chart
plot = chart.plots
len(plot)
The length of plot is 4, i.e. the chart has four different plots.
Of the four plots, just two have categories, and just one is shown on the diagram: plot[2].
plot_categories= [c.label for c in plot[2].categories]
the chart has four series although just three are shown.
std_start_graf_navn = plot[0].series[0].name #series name > Std. start graf
std_navn=plot[0].series[1].name #series name > Standardavvik
snitt_navn = plot[2].series[0].name #series name > snitt
snitt1_navn = plot[3].series[0].name #series name > snitt om et år
I change the data with this code:
chart_data_plot0 = CategoryChartData()
chart_data_plot0.add_series(std_start_graf_navn,t_std_start_graf_ny)
chart_data_plot2 = CategoryChartData()
chart_data_plot2.add_series(snitt_navn,t_snitt_ny)
chart_data_plot3 = CategoryChartData()
chart_data_plot3.add_series(snitt1_navn,t_snitt1_ny)
What I need help with: when I try to replace the data on one plot, the others disappear. The number of plots goes from 4 to 1.
When I run:
plot[0].chart.replace_data(chart_data_plot0)
I do get an error when trying:
plot[1].chart.replace_data(chart_data_plot2)
plot[2].chart.replace_data(chart_data_plot3)
This happens because the original number of plots changes from 4 to 1. Is there a workaround for this issue?
Have you figured it out yet? I am having the same issue: I have a chart with two plots. The first plot has one series and the second has two. I set up my code as follows:
chart = shapes[5].chart
chart_dat = CategoryChartData()
cats = [c.label for c in chart.plots[0].categories]
chart_dat.categories = cats
for idx, srs in enumerate(chart.series):
    chart_dat.add_series(srs.name, dat_lists[idx], number_format='0.0%')
chart.replace_data(chart_dat)
I didn't get an error, and the chart still has two plots and three data series. However, only the first data series is updated, while the remaining ones are unchanged.
Hi all, I'm sharing a function which works (in my tests) to update these kinds of charts starting from a Pandas DataFrame.
In practice, the issue is that at the XML level the tags for the data vary between category-based charts and XY/Bubble charts.
This uses another function shared at https://github.com/scanny/python-pptx/issues/132#issuecomment-1643400011
import copy

def update_chart(shape, data):
    # Takes the shape rather than the chart, since the fix below needs
    # access to the underlying chart part via the shape's relationships.
    chart = shape.chart
    chart.replace_data(
        dataframe_to_chart_data(data)
    )
    # Fix for filling non-category charts (XY, Bubble)
    id_attribute = '{http://schemas.openxmlformats.org/officeDocument/2006/relationships}id'
    chart_ref_id = shape.element.xpath(".//c:chart")[0].attrib[id_attribute]
    chart_part = shape.part.rels._rels[chart_ref_id].target_part
    chart_part._element.xpath(".//c:autoUpdate")[0].set("val", "1")
    point_series = chart_part._element.xpath(".//c:xVal")
    for s in point_series:
        series_ref = s.getparent()
        # Find new reference (category information)
        x = series_ref.xpath(".//c:cat")[0]
        y = series_ref.xpath(".//c:val")[0]
        # Find existing reference (XY, Bubble)
        prev_x = series_ref.xpath(".//c:xVal")[0]
        prev_y = series_ref.xpath(".//c:yVal")[0]
        # Clean old contents
        for c in prev_x.getchildren():
            prev_x.remove(c)
        for c in prev_y.getchildren():
            prev_y.remove(c)
        # Add new contents
        for c in copy.deepcopy(x).getchildren():
            prev_x.append(c)
        for c in copy.deepcopy(y).getchildren():
            prev_y.append(c)
        # Remove category information
        series_ref.remove(x)
        series_ref.remove(y)
Nice! Thank you very much. I think the numbers did get updated for the various plots in the chart. And I believe the bars (in the bar chart) are also updated to reflect the new values. However, the labels for each of the values are still the old ones. Is it possible to fix that, or how would one approach it using your method?
This is great. The data update worked, at least in my case (a single chart with 2 plots and 3 data series). Thank you!! However, the format of the data labels got changed as well. Before, the labels were formatted as 0.0%, and now that got wiped and the format became "general" (or no format). Is there any way to modify the update function above so that the format of the data labels is retained?
Or, another thought is: is there a way to change the format of a data series. Apparently, this cannot be done either. The situation is the same as with data update: only the format with respect to the first plot can be changed, not the rest.
I think I found a solution: just modify your code above with two additional lines:
# get the existing format code reference before it gets wiped out
# put this after where y is defined
f = series_ref.xpath(".//c:formatCode")[0]
# append the format code reference after the data has been updated
# put this after where new content has been appended
[f.append(c) for c in copy.deepcopy(f).getchildren()]
So far, I haven't found any issues with it....
Could you publish your code? Still struggling with this... thanks
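For anyone landing here, below is a minimal consolidated sketch assembling the snippets from this thread. It assumes the dataframe_to_chart_data helper from issue #132 linked above is in scope, that the chart part contains a c:autoUpdate element (as the snippets above assume), and it relies on underscore-prefixed python-pptx internals (part.rels._rels, chart_part._element) that are private API and may change between versions — so treat it as a starting point, not a definitive implementation:
import copy

def update_multi_plot_chart(shape, df):
    """Replace the data of a chart mixing category and XY/Bubble plots."""
    chart = shape.chart
    # Rewrite every series' category/value caches in one pass.
    chart.replace_data(dataframe_to_chart_data(df))  # helper from issue #132

    # Locate the chart part behind this graphic frame.
    rid_attr = '{http://schemas.openxmlformats.org/officeDocument/2006/relationships}id'
    chart_rid = shape.element.xpath(".//c:chart")[0].attrib[rid_attr]
    chart_part = shape.part.rels._rels[chart_rid].target_part
    chart_part._element.xpath(".//c:autoUpdate")[0].set("val", "1")

    # Copy the rewritten category/value caches into the x/y slots that
    # XY and Bubble plots actually read from, then drop the originals.
    for x_val in chart_part._element.xpath(".//c:xVal"):
        series_ref = x_val.getparent()
        cat = series_ref.xpath(".//c:cat")[0]
        val = series_ref.xpath(".//c:val")[0]
        prev_y = series_ref.xpath(".//c:yVal")[0]
        for node, src in ((x_val, cat), (prev_y, val)):
            for child in node.getchildren():
                node.remove(child)
            for child in copy.deepcopy(src).getchildren():
                node.append(child)
        series_ref.remove(cat)
        series_ref.remove(val)
The number-format tweak from the comment above can be layered into the loop if data labels lose their formatting.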
|
gharchive/issue
| 2023-03-14T15:49:04 |
2025-04-01T04:35:48.603125
|
{
"authors": [
"Dasc3er",
"NunoTrigo1986",
"riskassure"
],
"repo": "scanny/python-pptx",
"url": "https://github.com/scanny/python-pptx/issues/877",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1192708186
|
Update semanticdb-scalac to 4.5.2
Updates org.scalameta:semanticdb-scalac from 4.4.35 to 4.5.2.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scalameta", artifactId = "semanticdb-scalac" } ]
labels: library-update, early-semver-minor, semver-spec-minor, commit-count:1
Superseded by #638.
|
gharchive/pull-request
| 2022-04-05T07:01:43 |
2025-04-01T04:35:48.606791
|
{
"authors": [
"scala-steward"
],
"repo": "scapegoat-scala/scapegoat",
"url": "https://github.com/scapegoat-scala/scapegoat/pull/637",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1644548445
|
Update scalafmt-core to 3.7.3
About this PR
📦 Updates org.scalameta:scalafmt-core from 3.7.2 to 3.7.3
📜 GitHub Release Notes - Version Diff
Usage
✅ Please merge!
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
⚙ Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scalameta", artifactId = "scalafmt-core" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "org.scalameta", artifactId = "scalafmt-core" }
}]
labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1
Codecov Report
Patch and project coverage have no change.
Comparison is base (a8fbb23) 87.28% compared to head (273b968) 87.28%.
:mega: This organization is not using Codecov’s GitHub App Integration. We recommend you install it so Codecov can continue to function properly for your repositories.
Additional details and impacted files
@@ Coverage Diff @@
## master #734 +/- ##
=======================================
Coverage 87.28% 87.28%
=======================================
Files 141 141
Lines 1526 1526
Branches 260 260
=======================================
Hits 1332 1332
Misses 194 194
|
gharchive/pull-request
| 2023-03-28T19:34:11 |
2025-04-01T04:35:48.614604
|
{
"authors": [
"codecov-commenter",
"scala-steward"
],
"repo": "scapegoat-scala/scapegoat",
"url": "https://github.com/scapegoat-scala/scapegoat/pull/734",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1564216745
|
read-line furher survey
Run the below two lines in REPL. The content of foo.txt file is foo\rbar
(read-line (open-input-file "foo.txt"))
(read-line (open-input-string "foo\rbar"))
Racket     | Gauche | Chibi      | Chicken     | MIT Scheme         | Gambit     | Kawa
"foo\rbar" | "foo"  | "foo\rbar" | "foo\\rbar" | ;Value: "foo\rbar" | "foo\rbar" | foo
"foo\rbar" | "foo"  | "foo"      | "foo"       | ;Value: "foo\rbar" | "foo\rbar" | foo
When looking into the source code of the different implementations — for example, in MIT Scheme's ./src/runtime/textual-port.scm file — string ports and file ports are wrapped in a new <textual-port-type> record, so read-line invokes them through the same interface and the port-specific handling happens behind it.
But in Chibi and Chicken there is no such wrapper; read-line directly treats string ports and file ports differently, which is why the outputs differ.
Surprisingly, Chez Scheme does not seem to have a read-line procedure.
Output ports and input ports might behave differently; the above refers to input ports.
Thanks! Added in commit cde06a6 and published at https://docs.scheme.org/surveys/read-line/. Sorry about the long delay.
Thanks for the update. Whether read-line is built on read-string (as in the Lisp and Scheme tradition) or on the C read-line function for input file ports mostly depends on the implementer; there is no way for me to work it out. Like people who have lived one way for a long time, it is hard to change. @Sayward
I withdraw the input-file-port part of this survey. Sorry — I did not verify the content of the foo.txt input file before running the (read-line (open-input-file "foo.txt")) line in each implementation.
I mean that echo "foo\rbar" > foo.txt on the Docker containers (the Racket, Gauche, Chibi, MIT, and Gambit images) has an issue: cat foo.txt does not show the foo\rbar content, but rather bar.
The Chicken implementation ran on my VM, which does not have this issue with the echo "foo\rbar" command, and there the output of cat foo.txt is foo\rbar.
After I create the input file foo.txt with tee foo\rbar foo.txt, all the above-mentioned implementations now output "foo\rbar" when running (read-line (open-input-file "foo.txt")) in the REPL — except the Kawa implementation, whose output is foo\rbar.
|
gharchive/issue
| 2023-01-31T12:27:02 |
2025-04-01T04:35:48.697429
|
{
"authors": [
"APIPLM",
"lassik"
],
"repo": "schemedoc/surveys",
"url": "https://github.com/schemedoc/surveys/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|