id | text | source | created | added | metadata
---|---|---|---|---|---
329004527 | Docker/kubernetes deployment of UAA
How are you deploying the UAA?
I am wanting to deploy the UAA
other (please explain)
I want to be able to deploy UAA using Docker (for me on Kubernetes, but even a plain Docker container would be fine). There are a few third-party images that people have built, but it would be ideal if we could have an official Docker image.
@agentgonzo
step 1 generate docker image
root@k8s:~/workspace/uaa# export DOCKER_REPO_URL=index.tenxcloud.com
root@k8s:~/workspace/uaa# export DOCKER_REPO_USER=username
root@k8s:~/workspace/uaa# export MAINTAINER_EMAIL=someone@company.com
root@k8s:~/workspace/uaa# ./gradlew -x javadocJar -x sourceJar clean buildImage
BUILD SUCCESSFUL in 52s
38 actionable tasks: 38 executed
Task timings:
974ms :cloudfoundry-identity-model:compileJava
15024ms :cloudfoundry-identity-server:compileJava
706ms :cloudfoundry-identity-statsd:compileJava
960ms :cloudfoundry-identity-statsd:war
7541ms :cloudfoundry-identity-uaa:war
3267ms :cloudfoundry-identity-samples:cloudfoundry-identity-api:war
3361ms :cloudfoundry-identity-samples:cloudfoundry-identity-app:war
15854ms :buildImage
Test timings:
step 2 run uaa in docker
root@k8s:~# docker run -p 8080:8080 --name uaa -d --restart=always \
-v /opt/uaa.yml:/uaa/uaa.yml \
index.tenxcloud.com/username/uaa:4.14.0
@JoshuaAndrew
Does it mean that some third-party Docker image belonging to index.tenxcloud.com is used?
@w7089
DOCKER_REPO_URL can be any Docker image repository; it just happened that I used index.tenxcloud.com here.
But there's no official Cloud Foundry Docker repository from which a UAA image can be downloaded, correct?
@w7089 The Gradle task that builds a Docker image (./gradlew -x javadocJar -x sourceJar clean buildImage) was contributed by the community. The intent was for development usage only (it hardcodes private keys, etc.). However, we have discovered that it has been used in the wild for production purposes. The team considers this plugin deprecated. We do not test this image in our pipeline, and thus we are not sure if it works. We have a plan to remove this plugin.
To answer your initial question, we do not have an official docker image published to run uaa.
We do have a recommended way to run the UAA, which is via its BOSH release. BOSH is a way to run software on different IaaSes (AWS, GCP, vSphere, Docker, etc.). We would recommend running UAA using BOSH, targeting Docker as your IaaS.
If this approach interests you, then you can look at our UAA acceptance tests, which run UAA inside Docker and run tests against that deployment. This is the BOSH manifest that we use in our BOSH Docker deployment.
A quick way to create a bosh that targets a docker host would be to follow this test script. To target a docker host update https://github.com/cppforlife/bosh-docker-cpi-release/blob/master/tests/run.sh#L56
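For orientation, here is a rough sketch of what standing up a BOSH director against a Docker host can look like. This is a hedged illustration only: the ops-file path and variable names are assumptions based on the bosh-deployment repository layout, not commands taken from the linked test script.

bosh create-env ~/workspace/bosh-deployment/bosh.yml \
  --state ./state.json \
  --vars-store ./creds.yml \
  -o ~/workspace/bosh-deployment/docker/cpi.yml \
  -v director_name=docker-director \
  -v docker_host=unix:///var/run/docker.sock \
  -v network=bosh-network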
| gharchive/issue | 2018-06-04T10:50:26 | 2025-04-01T06:38:12.867607 | {
"authors": [
"DennisDenuto",
"JoshuaAndrew",
"agentgonzo",
"w7089"
],
"repo": "cloudfoundry/uaa",
"url": "https://github.com/cloudfoundry/uaa/issues/842",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
193577334 | Add table of contents to MD files
A lot of the markdown files are quite lengthy. It is nice to have a ToC to get an overview of what is in it. I used doctoc to generate them.
Not sure if this the way the project wants to go, but it would be nice to have auto-generated ToCs.
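For reference, doctoc is a small npm CLI and a typical invocation is a one-liner (a sketch; assumes Node.js/npm are installed):

npm install -g doctoc
doctoc README.md   # rewrites README.md in place, inserting/updating the ToC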
Coverage decreased (-0.09%) to 84.966% when pulling 6d5aefd871c7d3a78823cf949750bdccd5deacbc on pradtke:doctoc into 7267d5553a633e9948250fc70772ec03e604fa18 on cloudfoundry:develop.
We have plans to consolidate all docs under doc.cloudfoundry.org. This is in the process of getting reviewed and will be published soon.
| gharchive/pull-request | 2016-12-05T18:41:50 | 2025-04-01T06:38:12.870749 | {
"authors": [
"coveralls",
"pradtke",
"sreetummidi"
],
"repo": "cloudfoundry/uaa",
"url": "https://github.com/cloudfoundry/uaa/pull/496",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
618276 | Question: Any way to edit global variables dynamically
Say I have a LESS file with a bunch of global variables (@variable...). Would it be possible for me to change their values using JavaScript (after the page is loaded)?
Thanks!
Hi,
I needed a similar solution described in the beginning of this thread.
For this, I wrote a tiny function that solved it for me.
I forked Less today - check it out:
https://github.com/hbi99/less.js/blob/master/dist/less-1.1.6.js
Worth mentioning: this is a quick solution so I could focus on my primary project; therefore, it can most likely be implemented more gracefully than my version. The function accepts a JSON object, and variables declared in that object will override existing variables. Finally, the function does not cause new requests to the server.
Anywho, this is how it can be used:
Sample LESS:
@bgColor: black;
@textColor: yellow;
body {background: @bgColor; color: @textColor;}
From JS:
less.modifyVars({
'@bgColor': 'blue',
'@textColor': 'red'
});
Why not just dynamically create a LESS file with variable overrides and recompile?
@matthewdl - actually, the loaded LESS file(s) are stored in a variable called "session_cache". All "new" LESS variables pushed in via "modifyVars" are appended to "session_cache", thereby treated as a new LESS file, and new CSS is re-rendered.
@Loda - the answer to your question is yes. See my answer to @matthewdl - all variables passed in via "modifyVars" will be evaluated as if they were part of the LESS file; just know that the new vars will be appended at the end of the LESS files.
Example:
less.addVars({'@color': 'darken(#F00, 10%)'});
Finally, I would like to mention two more things:
Any "@import" lines will be removed from the loaded LESS file(s) when stored in "session_cache". This is a good thing (IMHO) - all LESS files will be concatenated and stored in the variable "session_cache". No need to re-import them.
If you use animations with transforms, calling this function may produce a good or bad visual effect, depending on how you use the transforms.
Good effect = if colors animate, the result is a fading effect.
Bad effect = if you have animations with movement, scaling, etc., which are triggered on load, the animations will reset and animate from the start.
| gharchive/issue | 2011-02-22T17:29:33 | 2025-04-01T06:38:12.877658 | {
"authors": [
"elfassy",
"hbi99",
"matthewdl"
],
"repo": "cloudhead/less.js",
"url": "https://github.com/cloudhead/less.js/issues/207",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2028325666 | RangeError (index): Invalid value: Only valid value is 0: 1 on Pixel 5 phone
When I run my app on a Google Pixel 5 (API 32) (both device and emulator), the following throws an error:
CldImg cldimg = CloudinaryObject.fromCloudName(cloudName: 'somename').image('someId');
print(cldimg); // throws: RangeError (index): Invalid value: Only valid value is 0: 1
On other phones everything works.
flutter doctor:
[✓] Flutter (Channel stable, 3.13.9, on macOS 14.0 23A344 darwin-arm64 (Rosetta), locale en-KG)
• Flutter version 3.13.9 on channel stable at /Users/temair/Programs/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision d211f42860 (6 weeks ago), 2023-10-25 13:42:25 -0700
• Engine revision 0545f8705d
• Dart version 3.1.5
• DevTools version 2.25.0
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.1)
• Android SDK at /Users/temair/Library/Android/sdk
• Platform android-33, build-tools 33.0.1
• ANDROID_HOME = /Users/temair/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.0.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15A507
• CocoaPods version 1.14.3
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2022.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
[✓] VS Code (version 1.84.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.76.0
[✓] Connected device (2 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.0 23A344 darwin-arm64 (Rosetta)
• Chrome (web) • chrome • web-javascript • Google Chrome 119.0.6045.199
! Error: Browsing on the local area network for iPhone. Ensure the device is unlocked and attached with a cable or associated
with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
• No issues found!
Hi @temirfe
Thank you for reporting the issue, we'll take a look ASAP.
Hi there @temirfe,
Currently, we are unable to replicate your issue. Is there any additional information you could provide that can help us reproduce your issue?
| gharchive/issue | 2023-12-06T11:23:57 | 2025-04-01T06:38:12.893716 | {
"authors": [
"PixelCook",
"adimiz1",
"temirfe"
],
"repo": "cloudinary/cloudinary_flutter",
"url": "https://github.com/cloudinary/cloudinary_flutter/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1417298788 | Turkish language and git-updater support
Hello @cloudpanel-io
For those in Turkey who want to use the Varnish cache with CloudPanel, I have translated the plugin into Turkish and added the headers requested by the git-updater plugin.
Thanks a lot @mertcangokgoz
| gharchive/pull-request | 2022-10-20T21:15:41 | 2025-04-01T06:38:12.906911 | {
"authors": [
"cloudpanel-io",
"mertcangokgoz"
],
"repo": "cloudpanel-io/clp-wp-varnish-cache",
"url": "https://github.com/cloudpanel-io/clp-wp-varnish-cache/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
962912558 | Fix YAML conversion with empty strings
YAML merge fails when the input variable is an empty string or nil.
Hey, thanks for this provider. We use it to merge two Helm value files (example below) - one with a set of default values and one with user overrides/additions. In cases where the user does not set any overriding values, the second input (var.values) is empty.
Passing an empty string (or nil) to the inputs causes the provider to crash. I'm attaching the panic log below.
data "utils_deep_merge_yaml" "values" {
count = var.enabled ? 1 : 0
input = [
local.values,
var.values
]
}
goroutine 34 [running]:
github.com/cloudposse/terraform-provider-utils/internal/convert.YAMLSliceOfInterfaceToSliceOfMaps(0xc000576660, 0x2, 0x2, 0xb49b80, 0xc000338408, 0xd4ad01, 0xc00033a7c0, 0xc03b6ee8763240ae)
github.com/cloudposse/terraform-provider-utils/internal/convert/yaml.go:20 +0x1f4
github.com/cloudposse/terraform-provider-utils/internal/provider.dataSourceDeepMergeYAMLRead(0xd4ada8, 0xc0003346c0, 0xc000366300, 0x0, 0x0, 0xc000336a70, 0xc000375948, 0x40e0f8)
github.com/cloudposse/terraform-provider-utils/internal/provider/data_source_deep_merge_yaml.go:40 +0x87
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0xc0003dc0e0, 0xd4ad38, 0xc00033a7c0, 0xc000366300, 0x0, 0x0, 0x0, 0x0, 0x0)
github.com/hashicorp/terraform-plugin-sdk/v2@v2.7.0/helper/schema/resource.go:347 +0x17f
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).ReadDataApply(0xc0003dc0e0, 0xd4ad38, 0xc00033a7c0, 0xc000576300, 0x0, 0x0, 0x0, 0xc000576300, 0x0, 0x0)
github.com/hashicorp/terraform-plugin-sdk/v2@v2.7.0/helper/schema/resource.go:558 +0xfd
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadDataSource(0xc00000c0a8, 0xd4ad38, 0xc00033a7c0, 0xc000576140, 0xc00033a7c0, 0x40b965, 0xbfbf20)
github.com/hashicorp/terraform-plugin-sdk/v2@v2.7.0/helper/schema/grpc_provider.go:1105 +0x4d6
github.com/hashicorp/terraform-plugin-go/tfprotov5/server.(*server).ReadDataSource(0xc0000b9ac0, 0xd4ade0, 0xc00033a7c0, 0xc0003480a0, 0xc0000b9ac0, 0xc0003461b0, 0xc00058aba0)
github.com/hashicorp/terraform-plugin-go@v0.3.0/tfprotov5/server/server.go:247 +0xe5
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadDataSource_Handler(0xc28bc0, 0xc0000b9ac0, 0xd4ade0, 0xc0003461b0, 0xc0003344e0, 0x0, 0xd4ade0, 0xc0003461b0, 0xc000360300, 0x17a)
github.com/hashicorp/terraform-plugin-go@v0.3.0/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:416 +0x214
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00025ae00, 0xd51b58, 0xc000131980, 0xc000352000, 0xc0002c67b0, 0x11576d0, 0x0, 0x0, 0x0)
google.golang.org/grpc@v1.32.0/server.go:1194 +0x52b
google.golang.org/grpc.(*Server).handleStream(0xc00025ae00, 0xd51b58, 0xc000131980, 0xc000352000, 0x0)
google.golang.org/grpc@v1.32.0/server.go:1517 +0xd0c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000116280, 0xc00025ae00, 0xd51b58, 0xc000131980, 0xc000352000)
google.golang.org/grpc@v1.32.0/server.go:859 +0xab
created by google.golang.org/grpc.(*Server).serveStreams.func1
google.golang.org/grpc@v1.32.0/server.go:857 +0x1fd
Error: The terraform-provider-utils_v0.12.0 plugin crashed!
This PR adds a type switch to the YAML convert function, which checks the interface type before passing the input to the convert function. If a non-string value is passed as input, that iteration of the inner loop is skipped.
TL;DR
the provider crashed when an empty string was passed as input
the PR adds a type check and accepts only string values (as shown in the sketch below)
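For illustration, the guard is roughly shaped like this. This is a hedged sketch of the described behavior, not the exact PR diff; the function and package names are assumptions.

package convert

import "gopkg.in/yaml.v2"

// yamlSliceToMaps unmarshals each string input, skipping empty or
// non-string entries instead of panicking on them.
func yamlSliceToMaps(inputs []interface{}) ([]map[string]interface{}, error) {
	result := make([]map[string]interface{}, 0, len(inputs))
	for _, current := range inputs {
		str, ok := current.(string) // type assertion: accept only strings
		if !ok || str == "" {
			continue // nil or empty string: skip this iteration
		}
		var data map[string]interface{}
		if err := yaml.Unmarshal([]byte(str), &data); err != nil {
			return nil, err
		}
		result = append(result, data)
	}
	return result, nil
}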
/test all
/test all
@nitrocode @jhosteny @RothAndrew could you take a look and review this?
I've never touched this repo so I'm not wanting to be the approver, but nothing is jumping out at me as a red flag
/test all
| gharchive/pull-request | 2021-08-06T17:15:58 | 2025-04-01T06:38:12.915022 | {
"authors": [
"RothAndrew",
"martinhaus",
"nitrocode"
],
"repo": "cloudposse/terraform-provider-utils",
"url": "https://github.com/cloudposse/terraform-provider-utils/pull/52",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2630771463 | chore: release main
:robot: I have created a release beep boop
operator: 1.0.0
1.0.0 (2024-11-02)
Features
init (e8be0d4)
Bug Fixes
build (cc58b33)
build (6062759)
build (9d22dff)
build (0304c39)
deps: update all non-major dependencies (c508cb7)
deps: update k8s.io/utils digest to 49e7df5 (c2662a5)
goreleaser (3e1b24e)
release (f2c81ad)
release (6c5efaf)
renovate.json (a566540)
update release (92b145f)
helm: 1.0.0
1.0.0 (2024-11-02)
Features
init (e8be0d4)
This PR was generated with Release Please. See documentation.
:robot: Created releases:
operator-v1.0.0
helm-v1.0.0
:sunflower:
| gharchive/pull-request | 2024-11-02T20:32:45 | 2025-04-01T06:38:12.932426 | {
"authors": [
"golgoth31"
],
"repo": "cloudscalerio/cloudscaler",
"url": "https://github.com/cloudscalerio/cloudscaler/pull/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
195623419 | make README a little more workplace-friendly
This is a cool project, and I just wanted to make it a little more immediately shareable.
Hell is a good word.
Hey @bg46z. There is no big reason to change these words, but I'm happy to see your interest here.
If you want to help us with the documentation, there is a lot of work related to that, explained for example in issue #45. Feel free to join us :)
| gharchive/pull-request | 2016-12-14T19:30:12 | 2025-04-01T06:38:12.934117 | {
"authors": [
"bg46z",
"cloudson",
"ryukinix"
],
"repo": "cloudson/gitql",
"url": "https://github.com/cloudson/gitql/pull/31",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1227768398 | all: use spansql to parse DDL
WHAT
all: use spansql to parse DDL
WHY
This makes it possible to parse schemas that include comments (see the sketch below).
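As a rough illustration of the underlying library, spansql (from cloud.google.com/go/spanner/spansql) can parse DDL that carries comments. A minimal hedged sketch with a made-up schema:

package main

import (
	"fmt"

	"cloud.google.com/go/spanner/spansql"
)

func main() {
	ddl, err := spansql.ParseDDL("schema.sql", `
-- Users holds account records (this comment is retained by the parser)
CREATE TABLE Users (
  Id   INT64 NOT NULL,
  Name STRING(MAX),
) PRIMARY KEY (Id);
`)
	if err != nil {
		panic(err)
	}
	for _, stmt := range ddl.List {
		fmt.Println(stmt.SQL()) // re-emit each parsed statement
	}
}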
@zchee Please fix the tests (now it can't be compiled...) 🙏
🙄
I'll fix and write test
@zchee Let me fix the test cases...
| gharchive/pull-request | 2022-05-06T11:53:49 | 2025-04-01T06:38:12.935838 | {
"authors": [
"110y",
"zchee"
],
"repo": "cloudspannerecosystem/wrench",
"url": "https://github.com/cloudspannerecosystem/wrench/pull/54",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2196888815 | Is there an existing issue for this?
[X] I have searched the existing issues
Description
Unable to load datasource templates. Details: Cannot invoke "org.pf4j.PluginWrapper.getPluginClassLoader()" because the return value of "org.pf4j.PluginManager.getPlugin(String)" is null.
An error occurs when viewing or creating a product,
but the plugin jar is already in dist, so I don't know why.
Steps To Reproduce
Edit the application
Public Sample App
No response
Environment
Deploy Preview
Issue video log
No response
Version
1.6.6
Did you follow the tutorial documentation? You can check docs.pageplug.cn; there is a local installation tutorial. One part has not been updated yet: switch to JDK 17.
I get two kinds of errors when deploying and starting on an internal network. Could you please help me take a look at why? | gharchive/issue | 2024-03-20T07:55:28 | 2025-04-01T06:38:12.942965 | {
"authors": [
"AppsmithCN",
"gangzi59185"
],
"repo": "cloudtogo/pageplug",
"url": "https://github.com/cloudtogo/pageplug/issues/92",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
199088048 | How to create an OriginAccessIdentity for CloudFront
The only information I have on OAI is the following
class S3Origin(AWSProperty):
props = {
'OriginAccessIdentity': (basestring, False),
}
How can I create one for my CloudFront distribution?
Pointers appreciated.
Hi,
OriginAccessIdentity: (basestring, False) means that a string goes here, and that it's not required. I think you probably need to look at the Amazon docs to tell what should go there. Based on the docs here, you put the CloudFront origin access identity here - so you want something like S3Origin(OriginAccessIdentity=)
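For example, a minimal hedged sketch (the identity ID "E2EXAMPLE" is a placeholder; AWS expects the "origin-access-identity/cloudfront/<ID>" string format):

from troposphere.cloudfront import S3Origin

origin = S3Origin(
    # Reference an OAI created elsewhere; E2EXAMPLE is a placeholder ID
    OriginAccessIdentity="origin-access-identity/cloudfront/E2EXAMPLE"
)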
It is not possible without using custom resources, it's a well known annoyance of cloudfront + cloudformation.
Here's a project of my own, that contains a Custom Resource for handling just that. Note that the lambda that powers it, is by no means perfect, it won't delete the OAI when it's not used, though adding that is easy, I don't want it to happen automatically.
generate_package_repository.zip
| gharchive/issue | 2017-01-05T23:37:40 | 2025-04-01T06:38:12.952602 | {
"authors": [
"RasmusWernerLarsen",
"emayssat-ms",
"lil-cain"
],
"repo": "cloudtools/troposphere",
"url": "https://github.com/cloudtools/troposphere/issues/638",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
207500736 | How can I change the default param from Python?
How can I pass a different value from Python to keyname_param?
I Have:
template = Template()
keyname_param = template.add_parameter(Parameter(
"KeyName",
Description="Name of an existing EC2 KeyPair to enable SSH "
"access to the instance",
Default="param",
Type="String",
))
I'm not sure I understand your question. Do you want to add multiple Parameters? There are uses of Parameters in the example directory that might help.
No,
I want to pass a different value to keyname_param.
I have the parameter's default set to the value "param".
I'm calling my troposphere template like:
python template.py
When I call CloudFormation I can pass variables to configure my template.
How can I do this from troposphere?
@ismaelfernandezscmspain You wouldn't necessarily do it from within troposphere, you would pass it as a variable using boto3 or one of the other aws sdk's. Below is how you could do it using Boto3:
cfparams = []
cfparams.append({
    'ParameterKey': 'KeyName',
    'ParameterValue': <value you want to pass here or variable>,
    'UsePreviousValue': False
})
and then in your create_stack call use Parameters=cfparams.
That should do what you're asking.
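A minimal end-to-end sketch with boto3 might look like this (the stack name, template path, and key pair value are placeholders, not values from this thread):

import boto3

cf = boto3.client('cloudformation')

cfparams = [{
    'ParameterKey': 'KeyName',
    'ParameterValue': 'my-ec2-keypair',  # placeholder value
    'UsePreviousValue': False
}]

# Template previously generated by troposphere (e.g. via template.to_json())
with open('template.json') as f:
    template_body = f.read()

cf.create_stack(
    StackName='my-stack',  # placeholder stack name
    TemplateBody=template_body,
    Parameters=cfparams
)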
👍 it works!
thanks!
Troposphere just generates a template FILE. Nothing else! You can change the default value in the template generation process or (better) call your template with a non-default parameter! You can accomplish the latter with boto or from the AWS CLI, i.e.
aws cloudformation create-stack --stack-name myteststack --template-body file:///home/local/test/sampletemplate.json --parameters ParameterKey=KeyPairName,ParameterValue=TestKey ParameterKey=SubnetIDs,ParameterValue=SubnetID1\,SubnetID2
| gharchive/issue | 2017-02-14T12:22:16 | 2025-04-01T06:38:12.957487 | {
"authors": [
"emayssat-ms",
"glitchindustries",
"ismaelfernandezscmspain",
"markpeek"
],
"repo": "cloudtools/troposphere",
"url": "https://github.com/cloudtools/troposphere/issues/663",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
353023437 | Demo fails
I've followed the instructions to run the demo. After installing, I run "npm start", which seems to succeed but then fails. The full output is below.
flutter-webrtc-server $ npm install
npm WARN deprecated babel-preset-es2015@6.24.1: 🙌 Thanks for using Babel: we recommend using babel-preset-env now: please read babeljs.io/env to update!
> fsevents@1.2.4 install /Users/Sites/webrtc/flutter-webrtc-server/node_modules/fsevents
> node install
[fsevents] Success: "/Users/Sites/webrtc/flutter-webrtc-server/node_modules/fsevents/lib/binding/Release/node-v59-darwin-x64/fse.node" is installed via remote
> jss@9.8.7 postinstall /Users/Sites/webrtc/flutter-webrtc-server/node_modules/jss
> node -e "console.log('\u001b[35m\u001b[1mLove JSS? You can now support us on open collective:\u001b[22m\u001b[39m\n > \u001b[34mhttps://opencollective.com/jss/donate\u001b[0m')"
Love JSS? You can now support us on open collective:
> https://opencollective.com/jss/donate
npm notice created a lockfile as package-lock.json. You should commit this file.
added 1137 packages in 87.457s
flutter-webrtc-server $ npm start
> flutter-webrtc-web@1.0.0 start /Users/Sites/webrtc/flutter-webrtc-server
> npm-run-all --parallel run-server run-webpack-dev-server
> flutter-webrtc-web@1.0.0 run-server /Users/Sites/webrtc/flutter-webrtc-server
> node server/index.js
> flutter-webrtc-web@1.0.0 run-webpack-dev-server /Users/Sites/webrtc/flutter-webrtc-server
> webpack-dev-server --mode development --https --cert ./certs/cert.pem --key ./certs/key.pem --hot --inline --progress --colors --watch --compress --content-base ./dist --port 8086 --host 0.0.0.0
Start WS Server: bind => ws://0.0.0.0:4442
Start WSS Server: bind => wss://0.0.0.0:4443
10% building modules 1/1 modules 0 activeℹ 「wds」: Project is running at https://0.0.0.0:8086/
ℹ 「wds」: webpack output is served from /
ℹ 「wds」: Content not from webpack is served from /Users/Sites/webrtc/flutter-webrtc-server/dist
✖ 「wdm」: Hash: 20c9379ed53a3fe1c1ae
Version: webpack 4.17.1
Time: 3838ms
Built at: 22/08/2018 18:02:05
1 asset
Entrypoint main = main.20c9379e.bundle.js
[./node_modules/loglevel/lib/loglevel.js] 7.68 KiB {main} [built]
[./node_modules/react-dom/index.js] 1.33 KiB {main} [built]
[./node_modules/react/index.js] 190 bytes {main} [built]
[./node_modules/strip-ansi/index.js] 161 bytes {main} [built]
[./node_modules/url/url.js] 22.8 KiB {main} [built]
[./node_modules/webpack-dev-server/client/index.js?https://0.0.0.0:8086] (webpack)-dev-server/client?https://0.0.0.0:8086 7.78 KiB {main} [built]
[./node_modules/webpack-dev-server/client/overlay.js] (webpack)-dev-server/client/overlay.js 3.58 KiB {main} [built]
[./node_modules/webpack/hot sync ^\.\/log$] (webpack)/hot sync nonrecursive ^\.\/log$ 170 bytes {main} [built]
[0] multi (webpack)-dev-server/client?https://0.0.0.0:8086 (webpack)/hot/dev-server.js ./src/index.js 52 bytes {main} [built]
[./node_modules/webpack/hot/dev-server.js] (webpack)/hot/dev-server.js 1.61 KiB {main} [built]
[./node_modules/webpack/hot/emitter.js] (webpack)/hot/emitter.js 75 bytes {main} [built]
[./node_modules/webpack/hot/log-apply-result.js] (webpack)/hot/log-apply-result.js 1.27 KiB {main} [built]
[./node_modules/webpack/hot/log.js] (webpack)/hot/log.js 1.11 KiB {main} [built]
[./src/App.js] 13.1 KiB {main} [built]
[./src/index.js] 466 bytes {main} [built]
+ 328 hidden modules
ERROR in ./node_modules/@material-ui/icons/Menu.js
Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
@ ./node_modules/@material-ui/icons/Menu.js 3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://0.0.0.0:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in ./node_modules/@material-ui/icons/Videocam.js
Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
@ ./node_modules/@material-ui/icons/Videocam.js 3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://0.0.0.0:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in ./node_modules/@material-ui/icons/Call.js
Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
@ ./node_modules/@material-ui/icons/Call.js 3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://0.0.0.0:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in ./node_modules/@material-ui/icons/CallEnd.js
Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
@ ./node_modules/@material-ui/icons/CallEnd.js 3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://0.0.0.0:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in ./node_modules/@material-ui/icons/VideocamOff.js
Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
@ ./node_modules/@material-ui/icons/VideocamOff.js 3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://0.0.0.0:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in ./node_modules/@material-ui/icons/Mic.js
Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
@ ./node_modules/@material-ui/icons/Mic.js 3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://0.0.0.0:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in ./node_modules/@material-ui/icons/MicOff.js
Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
@ ./node_modules/@material-ui/icons/MicOff.js 3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://0.0.0.0:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in ./node_modules/@material-ui/icons/utils/createSvgIcon.js
Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons/utils'
@ ./node_modules/@material-ui/icons/utils/createSvgIcon.js 3:29-92
@ ./node_modules/@material-ui/icons/Menu.js
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://0.0.0.0:8086 (webpack)/hot/dev-server.js ./src/index.js
Child html-webpack-plugin for "index.html":
1 asset
Entrypoint undefined = ./index.html
[./node_modules/html-webpack-plugin/lib/loader.js!./src/index.html] 370 bytes {0} [built]
[./node_modules/lodash/lodash.js] 527 KiB {0} [built]
[./node_modules/webpack/buildin/global.js] (webpack)/buildin/global.js 489 bytes {0} [built]
[./node_modules/webpack/buildin/module.js] (webpack)/buildin/module.js 497 bytes {0} [built]
ℹ 「wdm」: Failed to compile.
You can try npm i babel-runtime; an npm dependency may be missing.
That solved it, although I had to delete node_modules and the package-lock file, install babel-runtime, then install as usual.
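For anyone hitting the same error, the sequence described above boils down to (run from the repository root):

rm -rf node_modules package-lock.json
npm install babel-runtime
npm install
npm start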
| gharchive/issue | 2018-08-22T16:07:58 | 2025-04-01T06:38:12.961289 | {
"authors": [
"ROTGP",
"cloudwebrtc"
],
"repo": "cloudwebrtc/flutter-webrtc-server",
"url": "https://github.com/cloudwebrtc/flutter-webrtc-server/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1294757785 | For rendering, are there any features to do header.tmpl / footer.tmpl, etc.?
I realised ctx.HTML is much slower than ctx.String (50k req/s vs 15k req/s)... is it possible to speed this up?
For rendering, are there any features to do header.tmpl / footer.tmpl, etc.?
For the header and footer, so as to have consistency. Just curious how to stack them together; right now I'm doing a lot of repetition (see the sketch after the snippet below).
ctx.HTML(consts.StatusOK, "index.tmpl", utils.H{
"t": "Main website",
"h": "Head website",
"b": "Body website",
})
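Not an official answer, but a common approach with Go's html/template (which Hertz's HTML renderer builds on) is to define named blocks in shared template files and include them from each page. A minimal sketch, assuming the templates live under templates/ and are loaded with something like h.LoadHTMLGlob("templates/*"):

{{/* templates/header.tmpl */}}
{{ define "header" }}<head><title>{{ .t }}</title></head>{{ end }}

{{/* templates/footer.tmpl */}}
{{ define "footer" }}<footer>shared footer</footer>{{ end }}

{{/* templates/index.tmpl: reuse the shared blocks */}}
{{ template "header" . }}
<body>{{ .b }}</body>
{{ template "footer" . }}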
@hiqsociety Can you use a front-end/back-end separation design?
| gharchive/issue | 2022-07-05T20:54:31 | 2025-04-01T06:38:12.963663 | {
"authors": [
"FGYFFFF",
"hiqsociety"
],
"repo": "cloudwego/hertz-examples",
"url": "https://github.com/cloudwego/hertz-examples/issues/14",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1465146335 | how to use different tlsCfg like fasthttp in example code?
I have a problem with the TLS configuration.
Sometimes, depending on the IP address, I will use one version of tlsCfg or another. I can do this switch easily in fasthttp when running in reverse proxy mode. How do I do the same for the code below?
Basically, I'm using a different TLS cert for different types of IP origin... and sometimes the domain name if I need to.
fasthttp
for {
conn, err := ln.Accept()
if err != nil {
panic(err) // TODO: handle error.
}
ip, _, err := net.SplitHostPort(conn.RemoteAddr().String())
if err != nil {
log.Printf("could not parse remote addr: %s\n", err)
conn.Close()
continue
}
var tlsCfg *tls.Config
if ip == "1.1.1.1" {
tlsCfg = //something{}
}else if ip == "123.123.123.123" {
tlsCfg = //something{}
}
tlsConn := tls.Server(conn, tlsCfg)
gopool.Go(func() {
if err := server.ServeConn(tlsConn); err != nil {
log.Printf("e1x = %s", err)
}
})
}
hertz?
tlsCfg := &tls.Config{
    GetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
        ip, _, err := net.SplitHostPort(hello.Conn.RemoteAddr().String())
        if err != nil {
            return nil, err
        }
        if ip == "1.1.1.1" {
            return &cert, nil
        } else if ip == "123.123.123.123" {
            return &cert, nil
        }
        return nil, nil // fall back to default behaviour
    },
}
// ??? oh no, this doesn't work the same way as fasthttp
h := server.New(server.WithHostPorts(":443"), server.WithALPN(true), server.WithTLS(tlsCfg), server.WithListenConfig(&cfg))
h.Use(func(cc context.Context, ctx *app.RequestContext) {
ProxyHandlerHertz(cc, ctx)
})
h.Spin()
refer to https://www.cloudwego.io/docs/hertz/tutorials/basic-feature/protocol/tls/
https://github.com/cloudwego/hertz-examples/
Note: Currently, the Hertz TLS server does not support the Netpoll network library.
Support for h := server.Default(server.WithTLS(cfg), server.WithTransport(netpoll.NewTransporter)) is still on the way.
Is it already updated for TLS, or is there a workaround? Sorry, I read the docs but can't find it.
| gharchive/issue | 2022-11-26T12:49:54 | 2025-04-01T06:38:12.967315 | {
"authors": [
"li-jin-gou",
"ultperf"
],
"repo": "cloudwego/hertz",
"url": "https://github.com/cloudwego/hertz/issues/425",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1660309830 | docs(README): update documentation description and link
What type of PR is this?
docs
Check the PR title.
[X] This PR title matches the format: <type>(optional scope): <description>
[X] The description of this PR title is user-oriented and clear enough for others to understand.
(Optional) Translate the PR title into Chinese.
Update the documentation-related descriptions and hyperlinks in the README.
(Optional) More detail description for this PR(en: English/zh: Chinese).
en:
zh(optional):
Which issue(s) this PR fixes:
This PR has been stale for a long time and the context is no longer clear; closing it for now. Please resubmit the PR if necessary.
| gharchive/pull-request | 2023-04-10T06:23:54 | 2025-04-01T06:38:12.970672 | {
"authors": [
"GuangmingLuo",
"welkeyever"
],
"repo": "cloudwego/hertz",
"url": "https://github.com/cloudwego/hertz/pull/712",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1073413101 | In Kitex Streaming (gRPC) mode, destService is confusing
Is your feature request related to a problem? Please describe.
Using https://github.com/cloudwego/kitex-examples/tree/main/streaming as a demo template, enabling mTLS in Istio makes it impossible to connect. After some investigation:
client, err := echo.NewClient("demo", client.WithHostPorts("demo-server:8888"))
The first parameter of NewClient, destService, is filled into the Authority header, while the Istio virtual host matching rules only contain the service name demo-server and not demo, so the request cannot be matched correctly. Compare the gRPC implementation:
https://github.com/grpc/grpc-go/blob/master/clientconn.go#L1693
I suggest making destService an optional parameter in gRPC mode.
Thanks for the feedback. If destService is set to "demo-server", is there still a problem?
@yanickxia please confirm whether the problem remains when the service name is kept consistent (a sketch of the suggested change is below).
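For clarity, the suggested check amounts to keeping destService consistent with the Istio service name; a minimal sketch based on the demo above (host and port unchanged):

client, err := echo.NewClient("demo-server", client.WithHostPorts("demo-server:8888"))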
| gharchive/issue | 2021-12-07T14:23:34 | 2025-04-01T06:38:12.973452 | {
"authors": [
"YangruiEmma",
"yanickxia"
],
"repo": "cloudwego/kitex",
"url": "https://github.com/cloudwego/kitex/issues/259",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1058036581 | Custom model
I found that LayoutLMv2 takes both the image and the context into pretraining. Why would you use BERT and not LayoutLMv2? I want to customize the model to use LayoutLMv2 for the encoder; is that possible? Thank you so much.
Hi @MinhQuang2021
You may customize this part
https://github.com/clovaai/spade/blob/a85574ceaa00f1878a23754f283aa66bc2daf082/spade/model/model.py#L743-L775
The backbone is based on hugginface transformers thus it shouldn't be difficult to adapt layoutlm.
During the major development, layoutlm wasn't available. Also, layoutlm is not pre-trained for Japanese and Indonesian. Thus we initialized the spade encoder with a subset of the BERT weights (note that the spade encoder ≠ BERT).
Best
Thank you. I really want to apply it to a custom dataset for my language, but creating a labeled dataset like CORD is quite time-consuming, so I want to ask you: how much data is needed for good results? Is a standard dataset important for this model? Does this model generalize well to unseen data?
Can you give me some background on the EL task? I still find it quite confusing in the paper how the graph is generated and decoded.
I am learning it for my graduation project. Thank you very much
I noticed that when self.hparam.token_lv_boxing is set to True, the parse result is different. Why do both Cord_test.yml and Cord_train.yml set it to True? Can you explain how it works?
Why do you set token_lv_boxing=True for CORD, while for FUNSD you set token_lv_boxing=False?
How does token_lv_boxing work? I noticed it changes the parse result when training.
Hello, can you help me find the code that applies inferring_method?
- tca_rel_s
- tca_rel_g
Hi @MinhQuang2021
I would say roughly 1,000 documents are required for reasonable performance.
Under a zero-shot setting, the domains should be similar for reasonable performance. Otherwise, you'll observe a large performance drop.
About the EL task, FUNSD consists of four types of fields: "Header" "Question", "Answer", "ETC".
The EL task aims to connect "Header" → "Question", "Question" → "Answer".
So it is kind of "grouping two fields, where each field consists of multiple serialized words".
When token_lv_boxing=True, spade draws directed arrows between tokens. For noisy OCR data like receipts, this produces a better result. For FUNSD, the task assumes perfect OCR, so I turned it off.
The data augmentation option is automatically turned off during prediction. Sorry for the confusion.
Sorry for the late reply. Good luck with your project!
| gharchive/issue | 2021-11-19T01:37:14 | 2025-04-01T06:38:13.006779 | {
"authors": [
"MinhQuang2021",
"whwang299"
],
"repo": "clovaai/spade",
"url": "https://github.com/clovaai/spade/issues/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1298879547 | fargate-scheduler: Your AWS account is currently blocked and thus cannot launch any Fargate pods
Describe the bug
I followed this example and I am stuck with the following status:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m59s fargate-scheduler Your AWS account is currently blocked and thus cannot launch any Fargate pods
To Reproduce
Steps to reproduce the behavior:
Navigate to the serverless tutorial
Run it
See error
Code
terraform {
required_version = "~> 1.2.4"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.21.0"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "~>2.12.0"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.5" # "~>2.6.0"
}
null = {
source = "hashicorp/null"
version = ">= 3.0" # "~>3.1.0"
}
}
}
provider "aws" {
profile = "syncifyEKS-terraform-admin"
region = local.region
default_tags {
tags = {
Environment = "Staging"
Owner = "BT-Compliance"
Terraform = "True"
}
}
}
#
# Housekeeping
#
locals {
project_name = "syncify-dev"
cluster_name = "${local.project_name}-eks-cluster"
cluster_version = "1.22"
region = "us-west-1"
}
/*
The following 2 data resources are used get around the fact that we have to wait
for the EKS cluster to be initialised before we can attempt to authenticate.
*/
data "aws_eks_cluster" "default" {
name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "default" {
name = module.eks.cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.default.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.default.token
}
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.default.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.default.token
}
}
#############################################################################################
#############################################################################################
# Create EKS Cluster
#############################################################################################
#############################################################################################
# Create VPC for EKS Cluster
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.14.2"
name = local.cluster_name
cidr = "10.0.0.0/16"
azs = ["${local.region}a", "${local.region}b", "${local.region}c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24"] #, "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24"] #, "10.0.103.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
one_nat_gateway_per_az = false
manage_default_network_acl = true
default_network_acl_tags = { Name = "${local.cluster_name}-default" }
manage_default_route_table = true
default_route_table_tags = { Name = "${local.cluster_name}-default" }
manage_default_security_group = true
default_security_group_tags = { Name = "${local.cluster_name}-default" }
public_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "18.26.3"
cluster_name = local.cluster_name
cluster_version = local.cluster_version
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
cluster_addons = {
kube-proxy = {
addon_version = data.aws_eks_addon_version.this["kube-proxy"].version
resolve_conflicts = "OVERWRITE"
}
vpc-cni = {
addon_version = data.aws_eks_addon_version.this["vpc-cni"].version
resolve_conflicts = "OVERWRITE"
}
}
# manage_aws_auth_configmap = true
fargate_profiles = {
default = {
name = "default"
selectors = [
{ namespace = "default" }
]
}
kube_system = {
name = "kube-system"
selectors = [
{ namespace = "kube-system" }
]
}
}
}
data "aws_eks_addon_version" "this" {
for_each = toset(["coredns", "kube-proxy", "vpc-cni"])
addon_name = each.value
kubernetes_version = module.eks.cluster_version
most_recent = true
}
################################################################################
# Modify EKS CoreDNS Deployment
################################################################################
data "aws_eks_cluster_auth" "this" {
name = module.eks.cluster_id
}
locals {
kubeconfig = yamlencode({
apiVersion = "v1"
kind = "Config"
current-context = "terraform"
clusters = [{
name = module.eks.cluster_id
cluster = {
certificate-authority-data = module.eks.cluster_certificate_authority_data
server = module.eks.cluster_endpoint
}
}]
contexts = [{
name = "terraform"
context = {
cluster = module.eks.cluster_id
user = "terraform"
}
}]
users = [{
name = "terraform"
user = {
token = data.aws_eks_cluster_auth.this.token
}
}]
})
}
# Separate resource so that this is only ever executed once
resource "null_resource" "remove_default_coredns_deployment" {
triggers = {}
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
environment = {
KUBECONFIG = base64encode(local.kubeconfig)
}
# We are removing the deployment provided by the EKS service and replacing it through the self-managed CoreDNS Helm addon
# However, we are maintaining the existing kube-dns service and annotating it for Helm to assume control
command = <<-EOT
kubectl --namespace kube-system delete deployment coredns --kubeconfig <(echo $KUBECONFIG | base64 --decode)
EOT
}
}
resource "null_resource" "modify_kube_dns" {
triggers = {}
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
environment = {
KUBECONFIG = base64encode(local.kubeconfig)
}
# We are maintaining the existing kube-dns service and annotating it for Helm to assume control
command = <<-EOT
echo "Setting implicit dependency on ${module.eks.fargate_profiles["kube_system"].fargate_profile_pod_execution_role_arn}"
kubectl --namespace kube-system annotate --overwrite service kube-dns meta.helm.sh/release-name=coredns --kubeconfig <(echo $KUBECONFIG | base64 --decode)
kubectl --namespace kube-system annotate --overwrite service kube-dns meta.helm.sh/release-namespace=kube-system --kubeconfig <(echo $KUBECONFIG | base64 --decode)
kubectl --namespace kube-system label --overwrite service kube-dns app.kubernetes.io/managed-by=Helm --kubeconfig <(echo $KUBECONFIG | base64 --decode)
EOT
}
depends_on = [
null_resource.remove_default_coredns_deployment
]
}
################################################################################
# CoreDNS Helm Chart (self-managed)
################################################################################
resource "helm_release" "coredns" {
name = "coredns"
namespace = "kube-system"
create_namespace = false
description = "CoreDNS is a DNS server that chains plugins and provides Kubernetes DNS Services"
chart = "coredns"
version = "1.19.4"
repository = "https://coredns.github.io/helm"
force_update = true
recreate_pods = true
# For EKS image repositories https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html
values = [
<<-EOT
image:
repository: 602401143452.dkr.ecr.us-west-1.amazonaws.com/eks/coredns
tag: ${data.aws_eks_addon_version.this["coredns"].version}
deployment:
name: coredns
annotations:
eks.amazonaws.com/compute-type: fargate
service:
name: kube-dns
annotations:
eks.amazonaws.com/compute-type: fargate
podAnnotations:
eks.amazonaws.com/compute-type: fargate
EOT
]
depends_on = [
null_resource.modify_kube_dns
]
}
Expected behavior
coredns pods should have gotten scheduled.
Any help would be greatly appreciated.
hi @arnav13081994 - this appears to be an issue with your account and is not related to the code provided here. Please reach out to your AWS support to resolve
| gharchive/issue | 2022-07-08T11:16:07 | 2025-04-01T06:38:13.014323 | {
"authors": [
"arnav13081994",
"bryantbiggs"
],
"repo": "clowdhaus/eks-reference-architecture",
"url": "https://github.com/clowdhaus/eks-reference-architecture/issues/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
231324225 | No Response when Querying MDNS Cast Devices
Hello,
I'm trying to implement your MDNS library in a project I have that aims to communicate with chromecast devices.
I'm trying to get results for the record _googlecast._tcp.local, but keep getting a TimeoutException:
Thu, 18 May 2017 14:37:32 -0500 [ DEBUG ] api.php::testReact - Function fired!
Thu, 18 May 2017 14:37:37 -0500 [ ERROR ] api.php::{closure} - Error:
React\Dns\Query\TimeoutException: DNS query for _googlecast._tcp.local timed out in
/volume1/Webroot/Phlex/vendor/clue/mdns-react/src/MulticastExecutor.php:84
Stack trace:
#0 [internal function]: Clue\React\Mdns\MulticastExecutor->Clue\React\Mdns\{closure}
(Object(React\EventLoop\Timer\Timer))
#1 /volume1/Webroot/Phlex/vendor/react/event-loop/src/Timer/Timers.php(90):
call_user_func(Object(Closure), Object(React\EventLoop\Timer\Timer))
#2 /volume1/Webroot/Phlex/vendor/react/event-loop/src/StreamSelectLoop.php(177):
React\EventLoop\Timer\Timers->tick()
#3 /volume1/Webroot/Phlex/api.php(1857): React\EventLoop\StreamSelectLoop->run()
#4 /volume1/Webroot/Phlex/api.php(1732): testReact()
#5 /volume1/Webroot/Phlex/api.php(164): scanDevices()
#6 {main}
I don't see any other glaring errors in my logs that would indicate a systemic error, but I could also be mistaken.
I'm invoking the lookup like so:
function testReact() {
write_log("Function fired!");
$name = '_googlecast._tcp.local';
$loop = React\EventLoop\Factory::create();
$factory = new Clue\React\Mdns\Factory($loop);
$mdns = $factory->createResolver();
$mdns->resolve($name)->then(function ($value) {
write_log("Value: ".$value);
},function ($value) {
write_log("Error: ".$value,"ERROR");
});
$loop->run();
}
I'm running this on Synology with apache 2.4 and php7.
Thanks!
Thanks for your interesting question!
It looks like what you're trying to achieve is actually DNS-SD instead of mDNS, as mentioned in the readme:
This library implements the mDNS protocol as defined in RFC 6762. Note that this protocol is related to, but independent of, DNS-Based Service Discovery (DNS-SD) as defined in RFC 6763.
In other words: mDNS deals with finding the IP of a given hostname via multicast, while DNS-SD can be used to find a list of devices that offer a certain service.
I hope this helps :+1:
| gharchive/issue | 2017-05-25T12:19:47 | 2025-04-01T06:38:13.026988 | {
"authors": [
"clue",
"d8ahazard"
],
"repo": "clue/php-mdns-react",
"url": "https://github.com/clue/php-mdns-react/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1415932324 | Does the clusternet scheduler support SchedulerExtender for extension?
Does the clusternet scheduler support SchedulerExtender for extension?
Does the clusternet scheduler support SchedulerExtender for extension?
No. And I don't suggest using extender to extend scheduler.
The Clusternet scheduler uses the scheduling framework, which uses plugins to implement the different scheduling phases, and it is easy to hook in your out-of-tree plugins.
If I want to update the logic of a score plugin, is there a way to hot-update the plugin without restarting the clusternet scheduler?
No. With the scheduling framework, you need to rebuild the scheduler and update it.
The scheduler extender is also not recommended by the Kubernetes community.
OK. Thanks a lot.
| gharchive/issue | 2022-10-20T04:41:06 | 2025-04-01T06:38:13.041399 | {
"authors": [
"dixudx",
"victory460"
],
"repo": "clusternet/clusternet",
"url": "https://github.com/clusternet/clusternet/issues/509",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
982627340 | comment out checking cluster type on registration
What type of PR is this?
kind/bug
What this PR does / why we need it:
Currently there is only one kind of ClusterType, i.e., EdgeCluster. This comments out the cluster type check to allow more customized cluster types.
Which issue(s) this PR fixes:
Fixes #89
Special notes for your reviewer:
/lgtm
| gharchive/pull-request | 2021-08-30T10:10:27 | 2025-04-01T06:38:13.043519 | {
"authors": [
"dixudx",
"huxiaoliang"
],
"repo": "clusternet/clusternet",
"url": "https://github.com/clusternet/clusternet/pull/94",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
109588349 | Empty catch blocks trigger indentation errors for version > 1.11.1
An empty catch block (with or without comments) causes coffeelint to complain about inconsistent indentation.
Sample code
try
console.log 'a tisket'
console.log 'a tasket'
catch err
# We'll eat the exception
console.log 'All done'
coffeelint@1.11.1 finds no errors with this code. However, later versions (1.12.0 and 1.12.1) both give the following error:
$ coffeelint sample.coffee
✗ sample.coffee
✗ #4: Line contains inconsistent indentation. Expected 2 got 0.
✗ Lint! » 1 error and 0 warnings in 1 file
Thanks for the fix Shuan Wang :) :+1:
No prob. I just released v1.13.0 with all the fixes.
Please update the changelog too.
| gharchive/issue | 2015-10-02T23:17:24 | 2025-04-01T06:38:13.046639 | {
"authors": [
"AsaAyers",
"akshat1",
"swang"
],
"repo": "clutchski/coffeelint",
"url": "https://github.com/clutchski/coffeelint/issues/511",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
824922992 | remove legacy runtime
because:
seeing pretty decent adoption (edit: of kube-runtime) across github code search
haven't gotten complaints about new runtime
no need to keep this entirely unmaintained module around to confuse people
Some counter-points: there is still Matt Butcher's blog post that made it to Hacker News and uses it, but that's hopefully in the past now?
I have updated all my old blog posts (and even written a new one).
Can hold off until we have written something official about kube-runtime potentially.
What do people think?
i'll trust the thumbs up consensus :-)
| gharchive/pull-request | 2021-03-08T20:26:51 | 2025-04-01T06:38:13.049403 | {
"authors": [
"clux"
],
"repo": "clux/kube-rs",
"url": "https://github.com/clux/kube-rs/pull/454",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
188672186 | Google Maps Language localization
Display the map in pt-BR
thanks!
| gharchive/pull-request | 2016-11-11T02:42:53 | 2025-04-01T06:38:13.074582 | {
"authors": [
"cmdalbem",
"gutobenn"
],
"repo": "cmdalbem/bikedeboa",
"url": "https://github.com/cmdalbem/bikedeboa/pull/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2444924645 | Invalid Schnorr Siganture - Example: Basic spend using key-path
I have a method to send sats from one p2tr address to another one. I followed this example to implement it.
The validation goes through, but when I submit the transaction to mempool.space (or blockstream, same thing), I get the following response:
sendrawtransaction RPC error: {"code":-26,"message":"non-mandatory-script-verify-flag (Invalid Schnorr signature)"}
Here is the code I send sats with (essentially the same as the example, but with many UTXOs in tx input):
https://pastebin.com/hBGXNnJe
Here is the decoded TX that fails to be broadcast:
{
version: 2,
vin: [
{
txid: 'fd13e6e779960e0ba56405928524d1077575568a58b8e123275f11cb8630a816',
vout: 0,
scriptSig: [],
sequence: 'fffffffd',
witness: [Array]
}
],
vout: [
{
value: 1000n,
scriptPubKey: '5120b32ce90751de8b076dc9c8f6d46f967abe6cc7a4e97c452ff17444bb60c5890b'
}
],
locktime: 0
}
I think the problem may be here with txInput.vout:
Signer.taproot.sign(taprootPk, txData, txInput.vout)
The index value should be the index of the input being signed, so like this:
for (let i = 0; i < txData.vin.length; i++) {
txData.vin[i].witness = [
Signer.taproot.sign(taprootPk, txData, i),
];
}
Try that change and let me know if it works.
P.S. The n denotes a BigInt; it's a relatively new type in JavaScript.
@cmdruid thanks, fixed this one! But the error is still there, unfortunately :( something else is also wrong
Where are you getting the keys from?
You may want to try skipping this part:
const [taprootPk] = Tap.getSecKey(pk);
const [taprootPub] = Tap.getPubKey(pub);
This step adds an empty tweak to each key in your key-pair, which obfuscates the keys for privacy.
However it is not necessary and not all wallets bother to do this step, so you may want to try omitting it and signing with the keys you have directly.
@cmdruid I tried skipping tweaking, but no luck, same error.
I generate the keys myself, then tweak them, then give the taproot address to the user.
Then at some point in the future the user funds the taproot address and I run this sending method (I invoke it with the untweaked private key and the method tweaks the key internally, so that it has access to the taproot address the user funded).
Also, so that you don't have to waste time, while trying to guess what's wrong, I created this minimal example with all the method inputs, how I generate the keys etc., before I invoke the method:
https://pastebin.com/qH73NDqC
Do you end up getting the same tweaked key-pair in both cases? Does the tweaked public key, when encoded as an address, match the address of the utxo being spent?
@cmdruid Yes, the same tweaked key-pair in both cases 😢
UTXO address matches as well, yes, I re-verify UTXO data for the address being spent doing this request:
GET https://blockstream.info/testnet/api/address/tb1pxez9u8zkzhgephpus2a3fuz7jujhpeujml32a5d4g57aj9wjdqrqh02yen/utxo
What does Signer.taproot.verify(txData, i) not check during verification? I know UTXOs could be one thing, but those are correct, what else could it miss?
Maybe it will be easier for me to debug, if I at least know what could be wrong.
@cmdruid I can also reproduce this error in keyspend.test.ts.
I just removed the wallet stuff, as I don't have a local node.
Then I put my private key there, my UTXO input data/output data and set the network to 'testnet'.
Then, after running the test (it runs successfully), I took the TX hex and posted it to the blockstream node like this:
curl --request POST \
  --url https://blockstream.info/testnet/api/tx \
  --header 'Content-Type: text/plain' \
  --header 'User-Agent: insomnia/9.3.2' \
  --data 020000000001010449081422de6fda2d995662ab51909a3a45bf422bcf47b40af047c94d467b360000000000fdffffff015203000000000000160014d0829aa329e5716e71b10cb17a6a33df8caf72a3014095df63eff8710b308c2f69b693cf7db731cd320c0789d5fad412a1911d1105e243efbd39d2d74cfd3dd0b9fe6b8e1e5387b92b61364c4d664b609441b984d2a000000000
The pubkey that I am decoding from the address "tb1pxez9u8zkzhgephpus2a3fuz7jujhpeujml32a5d4g57aj9wjdqrqh02yen" is the following:
36445e1c5615d190dc3c82bb14f05e972570e792dfe2aed1b5453dd915d26806
Are you signing with a private key that matches this public key?
@cmdruid
Secret: c8ec44f2fd52560af4aaab3210c104d18cf5904c75288d182124c2905da69102
You can import into Unisat Testnet to see for yourself. Address is derived from public and you'll see the correct address there.
Just try this test and see for yourself:
https://pastebin.com/raw/vhn2YJvt
You'll see all the correct keys in the test, the UTXO used here is also correct. And the test will pass, of course.
But then, use the TX Hex and submit it to blockstream (or mempool.space). The BTC node will reject it.
Submission request:
curl --request POST \
  --url https://blockstream.info/testnet/api/tx \
  --header 'Content-Type: text/plain' \
  --data
@cmdruid okay, I realized my error: an incorrect value was entered into prevout (see the sketch below).
Sorry for disturbing :)
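For anyone else landing here, the shape of the fix is roughly this. It's a hedged sketch following the input format from the tapscript examples, placed inside the signing loop shown earlier; the value and script below are placeholders and must match the UTXO actually being spent:

// Each vin entry must carry a prevout matching the funded UTXO exactly.
txData.vin[i].prevout = {
  value: 1000n,                       // exact satoshi amount of the UTXO
  scriptPubKey: ['OP_1', taprootPub]  // tweaked pubkey of the funding address
}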
no problem. I'm glad that you were able to get it working!
| gharchive/issue | 2024-08-02T13:05:12 | 2025-04-01T06:38:13.097256 | {
"authors": [
"cmdruid",
"maxgmer"
],
"repo": "cmdruid/tapscript",
"url": "https://github.com/cmdruid/tapscript/issues/43",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
435323023 | Adding Bootstrap to the website
What task does the branch to be added perform?
It incorporates Bootstrap, uses a slider to add dynamism, and improves the aesthetics.
How can the functionality it provides be seen?
sitio001/index.
It is in the:
[x] Frontend.
[ ] Backend.
Do you want to add any additional description?
Add jQuery before Bootstrap.
Bootstrap reviewed and approved.
| gharchive/pull-request | 2019-04-19T22:37:54 | 2025-04-01T06:38:13.104079 | {
"authors": [
"cmorris3000"
],
"repo": "cmorris3000/sitio001",
"url": "https://github.com/cmorris3000/sitio001/pull/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2126343336 | Update Book “2024-02-08/index-2”
Automatically generated by Netlify CMS
👷 Deploy Preview for comforting-lamington-e443cb processing.
| Name | Link |
|------|------|
| 🔨 Latest commit | 0d3bd5e61b6efdc246e3e522851ec730fd7c2ec3 |
| 🔍 Latest deploy log | https://app.netlify.com/sites/comforting-lamington-e443cb/deploys/65c58461b2b6b900097b87c0 |
| gharchive/pull-request | 2024-02-09T01:48:15 | 2025-04-01T06:38:13.106870 | {
"authors": [
"cmpasc"
],
"repo": "cmpasc/VueDN",
"url": "https://github.com/cmpasc/VueDN/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2072209223 | Docker instructions for Mac users
Hi @clelange,
I had a few issues when trying to open any graphic element in the docker standalone version on MacOS. I realised that in order to run properly on Mac, one should first set the DISPLAY variable in the docker run. Would it be possible to update the docker run command to:
docker run --hostname=97e12d678fa2 --user=cmsusr --env=PYTHONPATH=/usr/local/venv/lib::/code/HiggsAnalysis/CombinedLimit/build/lib/python --env=HOME=/home/cmsusr --env=CMSSW_BASE=/code --env=PATH=/usr/local/venv/bin:/usr/local/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bin:/code/HiggsAnalysis/CombinedLimit/build/bin --env=LD_LIBRARY_PATH=/usr/local/venv/lib:/usr/local/venv/lib64:/usr/local/venv/lib:/usr/local/lib:/code/HiggsAnalysis/CombinedLimit/build/lib --env=ROOTSYS=/usr/local/venv --env=CLING_STANDARD_PCH=none --env=USER=cmsusr --env=GEOMETRY=1920x1080 --env=DISPLAY=host.docker.internal:0 --env=CC=/usr/local/venv/bin/gcc --env=CXX=/usr/local/venv/bin/g++ --env=GSL_ROOT_DIR=/usr/local/venv --workdir=/code --restart=no --label='maintainer.email=clemens.lange@cern.ch' --label='maintainer.name=Clemens Lange' --runtime=runc -t -d gitlab-registry.cern.ch/cms-cloud/combine-standalone:latest
Afaict, it should not create problems for Windows/Linux users (although I couldn't test it). Maybe it would be safer to maintain two different run commands?
Then we should instruct Mac users to also run sudo xhost +localhost on their machines (see: https://gist.github.com/paul-krohn/e45f96181b1cf5e536325d1bdee6c949)
Hi @giacomoortona -- thanks for reporting this issue. While X windows access is possible, we've observed that it's easier for CMS open data users to use VNC instead. Could you try if the instructions at https://gitlab.cern.ch/cms-cloud/root-vnc/-/blob/master/README.md?ref_type=heads work for you?
I'll look into updating the image but since I'm travelling this and next week it might take a bit longer. It should not break Linux/Windows but might break VNC.
Hi @clelange, Thank you for your reply.
Maybe it's just my very limited docker knowledge, but it seems to me that there is no VNC server coupled to the combine docker image, am I wrong? In any case this is not urgent, it can definitively wait until you come back.
Thank you,
Giacomo
The new (slim) container doesn't include VNC anymore. I can look into adding it though, making the necessary fixes pointed out by @giacomoortona
Hi @clelange, afaict, adding the DISPLAY environment variable + xhost should work on Mac, KDE, and maybe elsewhere. Although, for reasons I can't yet understand, containers created with my suggested run command sometimes get stuck after "docker start". They do work if one uses "docker exec" instead, but you might want to have a look at my suggestion. Maybe there's something silly in my command (I tried figuring it out, but with no success so far).
| gharchive/issue | 2024-01-09T11:43:56 | 2025-04-01T06:38:13.115140 | {
"authors": [
"clelange",
"giacomoortona"
],
"repo": "cms-analysis/HiggsAnalysis-CombinedLimit",
"url": "https://github.com/cms-analysis/HiggsAnalysis-CombinedLimit/issues/896",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
296463427 | Parametric shapes
Fix for RooParametricShapeBinPdf.
Still needs some testing for the correctness of the outcome, but the PR may be useful as a starting point for the future.
Idea: pointers are not saved in the workspace, therefore when loaded again they are not valid.
This minimal change makes the pointer mutable (because evaluate is const), and when it needs to be used, if not properly initialized, initialization is performed.
Hi Andrea,
Thanks for this PR. It would be nice to have some check that it still gives the same results, as you say, before merging.
For example, it's probably enough just to make a comparison as was done using a simple RooExponential for the documentation:
https://cms-hcomb.gitbooks.io/combine/content/part2/settinguptheanalysis.html#caveat-on-using-parametric-pdfs-with-binned-datasets
@amarini, I just started a new PR #493, which changes how RooParametricShapeBinPdf computes the integrals internally (uses RooAbsReal::createIntegral(), rather than the RooAbsReal::asTF(), which would not allow parameters to functions themselves).
Can you take a look and make sure this new branch still works for your use case (and if your original issue is still present?)?
Thanks!
Javier
| gharchive/pull-request | 2018-02-12T17:38:26 | 2025-04-01T06:38:13.126076 | {
"authors": [
"amarini",
"jmduarte",
"nucleosynthesis"
],
"repo": "cms-analysis/HiggsAnalysis-CombinedLimit",
"url": "https://github.com/cms-analysis/HiggsAnalysis-CombinedLimit/pull/454",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
320541814 | [15721] TileGroup Compaction and GC Fixes
Overview of Project
Our project has 3 main aspects:
Enabling the Garbage Collector to free empty tile groups and reclaim their memory (Done)
Fixing several important bugs in the Garbage Collector and Transaction Manager (Done)
Merging sparsely occupied tile groups to save memory (Done, Not Tested)
Status
This PR resolves most of the issues identified in issue #1325. It includes the results of a thorough correctness audit we performed on Peleton's garbage collection system. It includes a whole new test suite for the Transaction-Level Garbage Collector and several important bug fixes to the Garbage Collector and Transaction Manager. It also includes our previous work which enhances the Garbage Collector to free empty TileGroups when all of its tuple slots have been recycled. It adds a class called TileGroupCompactor that performs compaction of tile groups. It also includes a large number of changes necessary to rebase on the latest version of Peloton (Mengran's Catalog changes).
A summary of the changes:
GCManager::RecycleTupleSlot allows unused ItemPointers to be returned without going through the entire Unlink and Reclaim process.
Modified TOTransactionManager to pass tombstones created by deletes to the GCManager.
Modified DataTable's Insert to return the ItemPointer to the GCManager in the case of a failed insert.
Modified DataTable's InsertIntoIndexes to iterate through indexes and remove inserted keys in the event of a failure.
Modified GCManager's Unlink function to clean indexes from garbage created by COMMIT_DELETE, COMMIT_UPDATE, and ABORT_UPDATE.
Added 14 tests to transaction_level_gc_manager_test.cpp to handle more complex GC scenarios. Currently 4 of these tests still fail by polluting indexes with old keys, but we believe this will require more significant changes at the execution layer to resolve. We believe we have resolved all of the tuple-level GC bugs and most of the index bugs. We have disabled the 4 checks that fail and will open a new issue describing those scenarios only.
We have disabled some of the old GC tests because they need updating to conform to the new GC behavior.
As my teammates said, Nice Job ! !
Coverage decreased (-77.4%) to 0.0% when pulling 36d546d69b8f20b0f891e3faf296545b393694fd on mbutrovich:gc_fixes into 881a8e6d34296d372593ac9714d6a71a5500f82c on cmu-db:master.
| gharchive/pull-request | 2018-05-05T21:15:23 | 2025-04-01T06:38:13.414567 | {
"authors": [
"bohanjason",
"coveralls",
"dqrs"
],
"repo": "cmu-db/peloton",
"url": "https://github.com/cmu-db/peloton/pull/1349",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
650112130 | Revert "Explicit instantiation of spdlog template functions (#995)"
This reverts commit 686eb69f. Not sure we want to merge this immediately, but it sounds like #995 didn't really help so this is a PR to revert it if we want.
I didn't see this PR when I reverted that commit in #1026. For what it's worth, #1026 does this as a necessary prerequisite for statically linking spdlog.
Redundant.
| gharchive/pull-request | 2020-07-02T17:58:50 | 2025-04-01T06:38:13.416115 | {
"authors": [
"gonzalezjo",
"mbutrovich"
],
"repo": "cmu-db/terrier",
"url": "https://github.com/cmu-db/terrier/pull/1004",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
930466141 | Adding Skooner to the sandbox
Pre-submission checklist:
Please check each of these after submitting your pull request:
[ ] Are you only including a repo_url if your project is 100% open source? If so, you need to pick the single best GitHub repository for your project, not a GitHub organization.
[ ] Is your project closed source or, if it is open source, does your project have at least 300 GitHub stars?
[ ] Have you picked the single best (existing) category for your project?
[ ] Does it follow the other guidelines from the new entries section?
[ ] Have you added your SVG to hosted_logos and referenced it there?
[ ] Does your logo clearly state the name of the project/product and follow the other logo guidelines?
[ ] Does your project/product name match the text on the logo?
[ ] Have you verified that the Crunchbase data for your organization is correct (including headquarters and LinkedIn)?
[ ] ~15 minutes after opening the pull request, the CNCF-Bot will post the URL for your staging server. Have you confirmed that it looks good to you and then added a comment to the PR saying "LGTM"?
Build failed because of:
No cached entry, and Valve Software (member) has issues with twitter: https://twitter.com/valveoficial, 404 - {"errors":[{"code":34,"message":"Sorry, that page does not exist."}]}
Empty twitter for Rancher Federal (member): https://twitter.com/rancherfederal
No cached entry, and Sosivio (member) has issues with twitter: https://twitter.com/SosivioLtd, 404 - {"errors":[{"code":34,"message":"Sorry, that page does not exist."}]}
Empty twitter for Banzai Cloud (KCSP): https://twitter.com/banzaicloud
Empty twitter for StackStorm: https://twitter.com/Stack_Storm
Empty twitter for WasmEdge Runtime: https://twitter.com/realwasmedge
Skooner has an empty or missing homepage_url | gharchive/pull-request | 2021-06-25T20:00:01 | 2025-04-01T06:38:13.448267 | {
"authors": [
"CNCF-Bot",
"amye"
],
"repo": "cncf/landscape",
"url": "https://github.com/cncf/landscape/pull/2182",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2550544943 | [SANDBOX PROJECT ONBOARDING] Capsule
Welcome to CNCF Project Onboarding!
This is an issue created to help onboard your project into the CNCF after the TOC has voted to accept your project.
We would like to complete onboarding within one month of acceptance.
From the project side, please ensure that you:
[x] Understand the project proposal process and reqs: https://github.com/cncf/toc/blob/main/process/project_proposals.md#introduction
[x] Understand the services available for your project at CNCF https://www.cncf.io/services-for-projects/
[x] Ensure your project meets the CNCF IP Policy: https://github.com/cncf/foundation/blob/master/charter.md#11-ip-policy
[x] Review the online programs guidelines: https://github.com/cncf/foundation/blob/master/online-programs-guidelines.md
[x] Understand the trademark guidelines: https://www.linuxfoundation.org/en/trademark-usage/
[x] Understand the license allowlist: https://github.com/cncf/foundation/blob/master/allowed-third-party-license-policy.md#approved-licenses-for-allowlist
[x] Is your project working on written, open governance? see https://contribute.cncf.io/maintainers/governance/
[x] Slack: Are your slack channels migrated to the Kubernetes or CNCF Slack? (see https://slack.com/help/articles/217872578-Import-data-from-one-Slack-workspace-to-another for more details)
[x] Is your project in its own separate neutral github organization?
[x] Submitted a Pull request to add your project as a sandbox project to https://landscape.cncf.io
[x] Create maintainer list + add to aggregated https://maintainers.cncf.io list by submitting a PR to it
[x] Have added your project to https://github.com/cncf/contribute
[x] Artwork: Submit a pull request to https://github.com/cncf/artwork with your artwork
[x] Domain: transfer domain to the CNCF - https://jira.linuxfoundation.org/plugins/servlet/theme/portal/2/create/63
Things that CNCF will need from the project:
[x] Provide emails for the maintainers added to https://maintainers.cncf.io in order to get access to the maintainers mailing list and ServiceDesk
[x] Trademarks: transfer any trademark and logo mark assets over to the LF - https://github.com/cncf/foundation/tree/master/agreements has agreements
[x] GitHub: ensure 'thelinuxfoundation' and 'caniszczyk' are added as initial org owners, this helps us make sure we have continuity of GH ownership
[x] GitHub: ensure DCO or CLA are enabled for all GitHub repositories of the project
[x] GitHub: ensure that the CNCF Code of Conduct (or your adopted version of it) is explicitly referenced in the project's README on GitHub
[x] Website: ensure LF footer is there and website guidelines followed (if your project doesn't have a dedicated website, please adopt those guidelines to the README file of your project on GitHub).
[x] Website: Analytics transferred to projects@cncf.io
[x] CII: Start on a CII best practices badge https://bestpractices.coreinfrastructure.org/en
Things that the CNCF will do or help the project to do:
[x] Devstats: add to devstats https://devstats.cncf.io/
[x] Insights: add to LFX Insights https://insights.v3.lfx.linuxfoundation.org/
[x] Marketing: update relevant intro + slide decks
[x] Events: update CFP + Registration + CFP Area forms
[x] ServiceDesk: confirm maintainers have read https://www.cncf.io/services-for-projects/
[x] CNCF Welcome Email Sent to confirm maintainer list access, welcome email has monthly project sync details
[x] Create space for meetings/events on https://community.cncf.io, e.g., https://community.cncf.io/pravega-community/ - (https://github.com/cncf/communitygroups/blob/main/README.md#cncf-projects)
[x] Adopt a license scanning tool, like FOSSA or Snyk
I'd prefer to do it via ticket as I may not be the one to do the work -- but I'm also happy to open it on your behalf if you've not got access yet (would just need your email address, which I can probably get from @krook)
Just pinged it to you in Slack.
A Google analytics property has been created, the tracking number is G-4YLJ6T1Z8F
I couldn't use the hotmail email address provided, as Google refused it, saying it was "an alternate" for a gmail address, so I used the gmail address instead. Please confirm you got the invite @oliverbaehler.
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-4YLJ6T1Z8F"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'G-4YLJ6T1Z8F');
</script>
@nate-double-u I have added the GA:
https://github.com/projectcapsule/website/pull/10/files
Can you check if it's tracking?
Haven't gotten an invite. tbh I'm also confused by these analytics.. :D
I have another email: coreshawty@gmail.com
@nate-double-u can we mark this final item complete now?
[ ] Website: Analytics transferred to projects@cncf.io
Google Analytics is set up and collecting data now.
@oliverbaehler, I've invited your Baehler.Oliver@gmail.com, and I've just added your coreshawty@gmail.com account as well. Please let me know if you're unable to access.
@krook, we can check this off now: Website: Analytics transferred to projects@cncf.io
@oliverbaehler one additional question,
The CNCF has the https://projectcapsule.dev/ domain which is hosting a site for Capsule.
But there's also an (earlier?) one that is at https://capsule.clastix.io/
Is it possible to have the clastix.io one just forward to projectcapsule.dev?
@krook we will take care of that.
Excellent. We can follow up on that separately. But with everything else complete for onboarding we can mark this complete 🎉
| gharchive/issue | 2022-12-13T17:59:23 | 2025-04-01T06:38:13.472273 | {
"authors": [
"amye",
"bsctl",
"krook",
"nate-double-u",
"oliverbaehler"
],
"repo": "cncf/sandbox",
"url": "https://github.com/cncf/sandbox/issues/166",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2440978079 | Making changes to ALARA flux file and input
Changed flux values to go from high to low energy in W wall
Increased tolerance to gather data for more nuclides
I noticed the Neutron_Flux.csv file was still going from low to high energy, even though the OpenMC script has it written from high to low. The flux file currently in the repo may be from an older version of the OpenMC script.
| gharchive/pull-request | 2024-07-31T21:24:51 | 2025-04-01T06:38:13.475888 | {
"authors": [
"anu1217"
],
"repo": "cnerg/OpenMCActivationStudy",
"url": "https://github.com/cnerg/OpenMCActivationStudy/pull/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1719603473 | SSD1306 drawImage
I've been working on (image) compression and the drawImage function for the @emeb SSD1306 was a necessary side product.
As the display operates in vertical mode with us looking at it horizontally, still every pixel needs to be set.
I managed a tiny speed increase by eliminating the call to drawPixel, but the main benefit of the function is the ability to use black or white as transparency and to draw on top of the buffer contents using bit math (think sprites!).
Of course the SSD1306_LOG_IMAGE can be stripped away.
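For readers skimming the thread, a rough sketch of the blit idea (illustrative names, not the PR's exact signature): each image byte covers an 8-pixel vertical strip, and the draw mode decides how it combines with the frame buffer.

```c
#include <stdint.h>

enum draw_mode { DRAW_SET, DRAW_OR, DRAW_XOR };  /* hypothetical modes */

/* Combine one image byte with the frame buffer at index idx. */
static inline void blit_byte(uint8_t *fb, int idx, uint8_t img, enum draw_mode m)
{
    switch (m) {
    case DRAW_SET: fb[idx]  = img; break;  /* plain overwrite */
    case DRAW_OR : fb[idx] |= img; break;  /* black acts as transparency */
    case DRAW_XOR: fb[idx] ^= img; break;  /* sprite-style invert */
    }
}
```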
It looks good on my end! What do you think @emeb ?
Looks good to me!
| gharchive/pull-request | 2023-05-22T12:36:21 | 2025-04-01T06:38:13.483874 | {
"authors": [
"cnlohr",
"emeb",
"recallmenot"
],
"repo": "cnlohr/ch32v003fun",
"url": "https://github.com/cnlohr/ch32v003fun/pull/110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1347943182 | [RFC] unpublish package
I don't know what the reasoning was, but the npm unpublish <package> --force API was never implemented.
Some people like to tidy up their packages, and the more meticulous among us want to delete an entire package. I'm raising the idea here to get everyone's take. We would need to support an endpoint like the following.
DELETE /:fullname/-rev/:rev
Code example
@HTTPMethod({
  // DELETE /@cnpm/foo/-rev/61af62d6295fcbd9f8f1c08f
  // DELETE /:fullname/-rev/:rev
  path: `/:fullname(${FULLNAME_REG_STRING})/-rev/:rev`,
  method: HTTPMethodEnum.DELETE,
})
async removeAll(@Context() ctx: EggContext, @HTTPParam() fullname: string) {
  const npmCommand = ctx.get('npm-command');
  if (npmCommand !== 'unpublish') {
    throw new BadRequestError('Only allow "unpublish" npm-command');
  }
  const pkg = await this.getPackageEntityAndRequiredMaintainer(ctx, fullname);
  await this.packageManagerService.unpublishPackage(pkg);
  return { ok: true };
}
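For illustration, invoking the proposed endpoint by hand could look like this (hypothetical registry host and token; the handler above requires the npm-command header to be unpublish):

```bash
curl -X DELETE 'https://registry.example.com/@cnpm/foo/-rev/61af62d6295fcbd9f8f1c08f' \
  -H 'npm-command: unpublish' \
  -H 'Authorization: Bearer <token>'
```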
Deletion is far too risky; it's hard to determine whether anything still depends on the package.
| gharchive/issue | 2022-08-23T13:15:40 | 2025-04-01T06:38:13.498572 | {
"authors": [
"Beace",
"killagu"
],
"repo": "cnpm/cnpmcore",
"url": "https://github.com/cnpm/cnpmcore/issues/295",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1852598029 | feat: proxy mode
#366 With proxy mode enabled, if a package cannot be found locally, the upstream registry's manifest is returned directly and cached in NFS. When a requested tgz file does not exist, it is fetched from the upstream registry and returned, and a sync task for the corresponding version is created. Cached manifest files are checked for updates every hour, so that when the upstream registry publishes a new version we don't 404 because of a stale cache.
Proxy mode currently cannot configure authentication headers for the upstream registry; the auth header should be recorded in the database as an attribute of the registry and be modifiable through an API.
@hezhengxu2018 🤩 Are the changes already done?
The cache-management API has been added too, so the feature is now fairly complete. Please take a look, @elrrrrrrr @fengmk2
🤩 There's quite a lot of change here; I'll take a detailed look tomorrow 🙏🏻
@hezhengxu2018
Is proxyMode positioned as a pure proxy mode, with the local registry acting as an accelerating cache? We need to confirm whether the manifest should be based on the local copy or on upstream.
The current implementation returns the proxied result directly for manifest and tgz requests, and creates a sync task.
Under proxyMode:
Return the local registry's information first
If there is no version data locally, proxy the upstream registry's response
A tgz access triggers a download, syncing only the single version being accessed
If a tgz download is hit directly in proxyMode, does the client have to wait for the async scheduled task to catch up before continuing? Otherwise the manifest would return only a single version, and the package info would already be stale when the client queries it.
Could we change proxyMode to always return the upstream registry's information, for better real-time accuracy?
Once proxyMode is enabled, every manifest returned is the one from the upstream registry's last update; the manifest data in the database is not used. If the upstream registry becomes unusable, only after switching back to none mode will the proxy registry return manifests for the versions it has cached. If the async task has not finished syncing yet, the upstream registry's tgz keeps being returned through the reverse proxy, so users never need to wait for the async task to finish; only once it completes is the tgz read from object storage first.
The main point of the proxy registry is that, when access to the public internet or the official npm registry is very slow or even impossible, it can cache results to speed up dependency installation for intranet users. Even if external access goes down entirely, intranet users can keep using the dependencies that are already cached. So always returning the upstream registry's data probably won't work. If intranet users find that the proxy registry's cache needs updating, or that it has stale data, they can use the /-/proxy-cache API to refresh or delete cache entries.
The manifest is based on the local copy, because that is what Nexus does. Proxy mode is, as a whole, a cache: if the network is bad and we keep using the upstream registry's index anyway, the cache is pointless. That said, Nexus refreshes manifests quite frequently by default (every 30 minutes), so my setting of once per day feels a bit conservative.
I've made some of the changes; a few felt inappropriate or I wasn't sure how to change them. Please take another look when you have time, @elrrrrrrr
@hezhengxu2018 I'll take a closer look tomorrow. Happy New Year ヽ(≧◡≦)八(o^ ^o)ノ
I just remembered that the search API isn't proxied. In proxy mode, search results should also come through the proxy; it was missed because the search API hadn't been implemented before. Let's fix that after the base functionality is merged.
The upstream registry may return a 302 redirect, and the intranet cannot reach the redirect address it returns. Using egg-js's proxy plugin directly isn't suitable, in which case the proxy plugin isn't really needed anymore.
It seems my earlier thinking was a bit off: in proxy mode we can ignore the contents of the local database. If a package's manifest can be read from proxyCache, use the cached copy; otherwise return the upstream stream. Since in proxy mode all created tasks target a specific version, the local data is always incomplete, so this doesn't need to share logic with the other modes.
Optimized response handling for reverse-proxied requests: tgz files are no longer written to local disk before responding; the upstream stream is used directly, which improves speed noticeably. Manifests, however, need the upstream registry URLs inside the file rewritten to the application's own address, so they can't be returned as a stream; large JSON may be a strain.
Proxy mode requests are now sent through a dedicated request function that attaches the proxy request headers, reducing changes to the original logic.
Removed the TS validation of the config. Now, to use proxy mode correctly, you have to manually set redirectNotFound to false, and TypeScript will not warn you when editing. That validation was making the config file's types somewhat complicated.
🤩 Looking forward to an early merge
@elrrrrrrr could you finish the review when you have a moment? As I understand it this isn't enabled by default, so the risk should be fairly low.
This has been running on our intranet for a while and has basically replaced what Nexus did for us. The config validation previously written in TS has been removed; note that when enabling proxy you need to set redirectNotFound to false. That kind of check doesn't belong in TS static validation. There should be an API for modifying the config, with validation done through that API; since no such API exists yet, administrators simply have to take care that the config is correct.
Thanks @hezhengxu2018 for the patient contribution! It gives us a commercial-grade core feature!
Quick question: is there a complete config example for this proxy mode? The official docs are a bit lacking; none of the config properties are explained 😂
After setting syncMode to proxy you also need to set redirectNotFound to false; the rest of the configuration is the same as in normal mode.
There is still one unfixed bug, so I'd suggest waiting until the PR is merged before using this feature.
The global cache refresh is scheduled for 3 a.m. every day; the refresh frequency is not configurable yet, so if that doesn't meet your needs you have to change the schedule's frequency by hand. There is an API for refreshing a single package's cache: if you find that a package has gone stale, you can refresh it manually via PATCH /-/proxy-cache/:fullname.
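Condensing that answer into a sketch (only syncMode and redirectNotFound come from this thread; the snippet's shape is assumed):

```ts
// inside config/config.default.ts; a sketch, not a complete config
config.cnpmcore.syncMode = 'proxy';        // enable proxy mode
config.cnpmcore.redirectNotFound = false;  // required: misses are proxied, not redirected
```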
@hezhengxu2018 I ran into a problem when using proxy: publishing a private package succeeds, but querying its info and downloading it both return not found. Also, if you take a project that already exists on npm, modify it, and republish with a suffixed version number, that version doesn't show up when querying the package's version list.
After republishing, you need to call the API to refresh that package's manifest manually, or wait for the 3 a.m. refresh.
It's true that packages already present locally are not returned first right now. The main consideration was that in proxy mode, when the two manifests differ, the upstream registry should win, and the local copy should only take precedence once proxy is switched off; this does behave differently from Verdaccio or Nexus. When I was implementing it, recovering inside the not-found error handler felt a bit hacky, so I can take another look.
I don't hit this problem myself on our company intranet because I deployed two registries: one for the company's private packages, and one proxy registry that is allowed to reach the internet. When the private registry returns not found, the request falls through to the proxy registry. That keeps the two sets of packages separate, so public npm packages never get mixed into the same registry as our own, which is also easier to manage.
Thanks for the explanation. So the proxy registry runs with SyncMode.proxy and the private registry with SyncMode.none, is that right? There's so little documentation on SyncMode that I'm a bit confused.
Configure the proxy registry as the private registry's upstream with redirectNotFound=true, so that packages which can't be found are handled directly by the proxy registry.
Yes: the private registry doesn't sync, it only redirects, and the proxy registry does the cache acceleration. Of course you could use something like Verdaccio for the proxy registry too; I chose cnpmcore mainly for the transparent MySQL database and the speed that Redis brings.
I took a look, and it seems this can be changed to behave the same way as Verdaccio; I'll try once the bug is fixed.
OK, I'll find time to try it. Verdaccio doesn't feel very stable: downloads often fail under heavy concurrency, especially with tools like bun that download many packages at the same time, where failures are almost guaranteed. The public cnpmcore service performs well, which is why I wanted to migrate over.
| gharchive/pull-request | 2023-08-16T06:30:52 | 2025-04-01T06:38:13.522623 | {
"authors": [
"elrrrrrrr",
"fangzhengjin",
"fengmk2",
"hezhengxu2018"
],
"repo": "cnpm/cnpmcore",
"url": "https://github.com/cnpm/cnpmcore/pull/571",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
217940272 | Question: do you have an example on how to add plugins to canvas
hi, do you have some kind of example on how to extend canvas even further
thank you for the work its awesome
Thank you @SerjoA for your kind comments on the project! I don't currently have any examples of plugins, it's been a work-in-progress between a couple developers here. Anything that you have to offer in that area would be a very welcome contribution!
thank you for your kind words, i can try and make a plugin in my spare time, just need some info like what folder to put the plugin, how to connect it to the system, do you have any specific request in mind? also , you can redirect me to someone who is working with this system also and i can ask him some info and then make a plugin
Hello @SerjoA,
There is a base extension class here: https://github.com/cnvs/easel/blob/master/src/Extensions/Extension.php which we adapted from Flarum (https://github.com/flarum/core). Themes are extensions so they therefore extend this base class.
thanks @reliq ill look into it and hope to learn from it
@SerjoA If you're satisfied with the answer provided here, feel free to close the issue out 👍
yes thank you, I am now using caffeinated modules for Laravel, it's awesome
| gharchive/issue | 2017-03-29T16:53:14 | 2025-04-01T06:38:13.535710 | {
"authors": [
"SerjoA",
"austintoddj",
"reliq"
],
"repo": "cnvs/canvas",
"url": "https://github.com/cnvs/canvas/issues/326",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
101171817 | Add HTML report design
@AbdealiJK will need this I presume.
html report exists and is in beta
| gharchive/issue | 2015-08-15T12:30:38 | 2025-04-01T06:38:13.541824 | {
"authors": [
"sils1297"
],
"repo": "coala-analyzer/coala-artwork",
"url": "https://github.com/coala-analyzer/coala-artwork/issues/12",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
991447370 | sql: allow references to top-level WITH from apply-join
PR: https://github.com/cockroachdb/cockroach/pull/65550
From release notes:
References to WITH expressions from correlated subqueries are now always supported. [#65550][#65550] {% comment %}doc{% endcomment %}
Our WITH, correlated subqueries, and known limitations docs do not explicitly disallow WITH expressions in correlated subqueries.
Closing as there doesn't seem to be doc impact.
| gharchive/issue | 2021-09-08T19:13:38 | 2025-04-01T06:38:14.156320 | {
"authors": [
"ianjevans",
"jseldess"
],
"repo": "cockroachdb/docs",
"url": "https://github.com/cockroachdb/docs/issues/11335",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1110231984 | sql,server: support SCRAM authentication for SQL sessions
Exalate commented:
https://github.com/cockroachdb/cockroach/pull/74301
---
Release note (security update): The hash method used to encode cleartext passwords before storing them is now configurable, via the new cluster setting server.user_login.password_encryption. Its supported values are crdb-bcrypt and scram-sha-256. The cluster setting only becomes effective and its default value is scram-sha-256 after all cluster nodes have been upgraded. Prior to completion of the upgrade, the cluster behaves as if the cluster setting is fixed to crdb-bcrypt (for backward compatibility). Note that the preferred way to populate password credentials for SQL user accounts is to pre-compute the hash client-side, and pass the precomputed hash via CREATE/ALTER USER/ROLE WITH PASSWORD. This ensures that the server never sees the cleartext password.
Release note (security update): The cost of the hashing function for scram-sha-256 is now configurable via the new cluster setting server.user_login.password_hashes.default_cost.scram_sha_256. Its default value is 119680, which corresponds to an approximate password check latency of 50-100ms on modern hardware. This value should be increased over time to reflect improvements to CPU performance: the latency should not become so small that it becomes feasible to bruteforce passwords via repeated login attempts. Future versions of CockroachDB will likely update the default accordingly.
Release note (sql change): The session variable password_encryption is now exposed to SQL clients. Note that SQL clients cannot modify its value directly; it is configurable via a cluster setting.
Jira Issue: DOC-2364
covered by https://github.com/cockroachdb/docs/issues/12792
| gharchive/issue | 2022-01-21T09:15:34 | 2025-04-01T06:38:14.159774 | {
"authors": [
"cockroach-teamcity",
"rafiss"
],
"repo": "cockroachdb/docs",
"url": "https://github.com/cockroachdb/docs/issues/12795",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
761639793 | Duplicate Indexes based on column(s) different from the primary index
This documentation page describes an example where the secondary indexes are based on column(s) the SAME AS THE PRIMARY INDEX. In this case, secondary indexes should be created on ALL OTHER localities, NOT INCLUDING where the primary index is located.
Please add clarification to that page that secondary indexes based on column(s) DIFFERENT FROM THE PRIMARY INDEX should be created on ALL localities INCLUDING where the primary index is located. This case was pointed out in this support ticket: https://cockroachdb.zendesk.com/agent/tickets/6985.
To test both cases, I used these sql files in a multi-region cluster:
CREATE INDEX state_idx_central ON postal.sql.txt
INSERT INTO postal_codes VALUES.sql.txt
@ericharmeling Would this belong to your area? I'm not sure who previously worked on these docs, but I can also see them belonging to my area.
I think Jesse wrote these docs, but they are in the "multi-region" area, which belongs to @rmloveland. Rich, I'm sure this page is on your radar for scheduled multi-region updates?
This usage pattern will be replaced by something much simpler for end users in 21.1, but we will need to fix bugs in this doc for now. I'll assign to myself.
@taroface I hope that's ok that I snagged this. I need to learn more about these old multi-region patterns as part of writing docs for the new ones.
@rmloveland For sure! Thanks for taking it on.
| gharchive/issue | 2020-12-10T21:41:55 | 2025-04-01T06:38:14.164630 | {
"authors": [
"ericharmeling",
"florence-crl",
"rmloveland",
"taroface"
],
"repo": "cockroachdb/docs",
"url": "https://github.com/cockroachdb/docs/issues/9171",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2246318213 | Upgrading Python and packages using Poetry
As discussed in #1023 and #1410, we should regularly upgrade the Python version and other packages.
Apparently Poetry is an interesting tool to help with resolving package conflicts.
We need to keep in mind that we have several Dockerfiles and requirements.txt files in this project (listed below; see the sketch after the lists for one possible workflow). Putting this in place may be tricky at first, but very useful in the future.
Requirements files
https://github.com/codalab/codabench/blob/develop/requirements.txt
https://github.com/codalab/codabench/blob/develop/requirements.dev.txt
https://github.com/codalab/codabench/blob/develop/compute_worker/compute_worker_requirements.txt
Dockerfiles
https://github.com/codalab/codabench/blob/develop/Dockerfile
https://github.com/codalab/codabench/blob/develop/Dockerfile.builder
https://github.com/codalab/codabench/blob/develop/Dockerfile.celery
https://github.com/codalab/codabench/blob/develop/Dockerfile.compute_worker
https://github.com/codalab/codabench/blob/develop/Dockerfile.compute_worker_gpu
https://github.com/codalab/codabench/blob/develop/Dockerfile.flower
https://github.com/codalab/codabench/blob/develop/Dockerfile.rabbitmq
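One possible workflow, sketched below (note: poetry export may require the export plugin on newer Poetry versions): keep a single pyproject.toml as the source of truth and generate the existing requirements files from it, so the Dockerfiles can keep installing from requirements files unchanged.

```bash
poetry export -f requirements.txt --output requirements.txt --without-hashes
poetry export -f requirements.txt --with dev --output requirements.dev.txt --without-hashes
```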
Related branch: https://github.com/codalab/codabench/tree/issue_1413
Solved by #1416
| gharchive/issue | 2024-04-16T15:17:59 | 2025-04-01T06:38:14.242222 | {
"authors": [
"Didayolo"
],
"repo": "codalab/codabench",
"url": "https://github.com/codalab/codabench/issues/1413",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
810407547 | Image Borders as Class
Neat needs a class for image borders.
When images are light on light or dark on dark an image border helps them stand out. In neat.html I used an inline style to add a border. Now that there is a dark mode theme a class should be used instead. That would allow the border to swap back and forth between dark and light mode.
Change neat.html so that it's got a class instead of a style
Add the white border as the default style to neat.css
Add a black border in the dark mode section of neat.css
I've fixed this by adding a class of bordered and removing the styles.
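A sketch of roughly what that could look like (the class name is the one from the fix above; border width and exact colors are assumed, following the checklist's white default and black dark-mode border):

```css
/* default: white border so images stand out */
.bordered {
  border: 1px solid #fff;
}

/* dark mode section: swap to a black border */
@media (prefers-color-scheme: dark) {
  .bordered {
    border-color: #000;
  }
}
```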
| gharchive/issue | 2021-02-17T17:55:46 | 2025-04-01T06:38:14.244192 | {
"authors": [
"codazoda"
],
"repo": "codazoda/neatcss",
"url": "https://github.com/codazoda/neatcss/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
66604059 | Figure out a metric for measuring changes over time
@mshenfield
Potential metrics...:
Chi Square. Compare the Observed/Expected ratios for Metro as a whole and plot the resulting chi-square value over time.
Advantages: Statistical backing, appropriate for this type of data, creates a single metric that can be plotted
Disadvantages: May be too sophisticated for the average user; how do we handle multiple income levels?
Some metric that relates the number of departments at or near the values predicted by census?
@neolytics I think Chi Square is a good start. Really the actual Chi Square score is less important than the P value it indicates. We could use ranges of P values to simplify the Chi Square down to Red, Yellow, and Green "Diversity Health" scores. For example, P <= .05 could be Red, meaning it was highly likely that the lack of diversity was not by chance. It would be simple to digest, while still being scientifically honest about the health of the department.
I don't know about multiple income levels in a color scenario. Maybe have three channels in our diagram - one for each income level? Here's a mock of that idea:
Hrm, this is an interesting take on the visualization. I really like the concept. We'd have to have an explanation link somewhere to help with this (but this would be the case anyway). It doesn't show month over month trends however, and the updates would be done quarterly.
@mshenfield Do you know of a convenient JS based library that could do something like this? I know you did work with D3 back when you were working with ngd, but I try to avoid pure D3 like the plague. Very powerful, but really steep learning curve.
@mshenfield
Hey man, any way you could help me push out a visualization and the chi square piece? We need to get this ready and I am just struggling to get to it with the whole NDOCH thing to consider.
@neolytics Definitely - I'm going to try and process the data using Chi-Square into a usable format today and tomorrow and then move on to the visualization.
@mshenfield Awesome dude. You have no idea how much I appreciate your help. Thanks!
We've decided on Chi Square. It's the appropriate statistical analysis for this data.
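For reference, a small sketch of that approach (the red threshold is the one floated above; the yellow band is an assumption):

```python
from scipy.stats import chisquare

def diversity_health(observed, expected):
    """Map a chi-square goodness-of-fit p-value to a simple health color."""
    _, p = chisquare(f_obs=observed, f_exp=expected)
    if p <= 0.05:
        return "red"     # lack of diversity is unlikely to be chance
    if p <= 0.20:        # assumed middle band
        return "yellow"
    return "green"

# Example: department headcounts by group vs. counts implied by census shares
print(diversity_health(observed=[40, 5, 5], expected=[30, 12, 8]))
```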
| gharchive/issue | 2015-04-06T13:15:05 | 2025-04-01T06:38:14.320996 | {
"authors": [
"mshenfield",
"neolytics"
],
"repo": "code-for-nashville/hrc-employment-diversity-report",
"url": "https://github.com/code-for-nashville/hrc-employment-diversity-report/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1602824941 | Admin features new
Gateway https://github.com/code-kern-ai/refinery-gateway/pull/119
Model submodule https://github.com/code-kern-ai/refinery-submodule-model/pull/36
Admin dashboard: https://github.com/code-kern-ai/admin-dashboard/pull/62
Refinery UI: https://github.com/code-kern-ai/refinery-ui/pull/125
Updater https://github.com/code-kern-ai/refinery-updater/pull/28
I'm not super sure how, but I think there is still some issue in the logic: I never touched the users without a role assignment, yet they still got a timestamp.
The only way I could find to reproduce was:
start fresh
Create org
Assign user to org
switch to refinery
import first project
That said, I tried to find it by throwing an exception on the id, but nothing happened at that point, so maybe some further investigation is required.
[ ] resolved
| gharchive/pull-request | 2023-02-28T11:03:29 | 2025-04-01T06:38:14.326108 | {
"authors": [
"JWittmeyer",
"SimonDegrafKern"
],
"repo": "code-kern-ai/refinery-gateway",
"url": "https://github.com/code-kern-ai/refinery-gateway/pull/119",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1826476960 | realmRoles Field required
Using the examples provided, I'm unable to get the user's info through the route
http://localhost:8081/user/2b1b34f0-5efb-4cf8-b620-b619fd9b98bc (it's a valid user-id)
due to a pydantic error:
pydantic_core._pydantic_core.ValidationError: 1 validation error for KeycloakUser
realmRoles Field required [type=missing, input_value={'id': '2b1b34f0-5efb-4cf8-b620-b619fd9b98bc'...}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.1/v/missing
This is due to the absence, in the incoming json (Python dict), of the key "realmRoles" which is NOT always returned by the KeyCloak platform.
In your model.py module, class KeycloakUser(BaseModel), the realmRoles field is specified as "Optional" (realmRoles: Optional[List[str]]) but this attribute seems to be ignored by pydantic...
Any suggestion?
Thanks in advance
My requirements.txt:
fastapi==0.100.1
fastapi_keycloak==1.0.10
pydantic==2.2.1
uvicorn==0.23.1
My Keycloak platform:
docker image of jsboss/keycloak:latest (Server version: 16.1.1) with postgresql 13.0
Hi,
you can apply a workaround as a patch of KeycloakUser.__init__ like the following:
oryg__init__ = KeycloakUser.__init__

def mocked__init__(*args, **kwargs):
    kwargs['realmRoles'] = kwargs.get('realmRoles', [])
    kwargs['attributes'] = kwargs.get('attributes', {})
    oryg__init__(*args, **kwargs)

KeycloakUser.__init__ = mocked__init__
In which file do we have to apply this?
You can define it in any file, like the following function:
def patcher():
    from fastapi_keycloak import KeycloakUser
    oryg__init__ = KeycloakUser.__init__

    def new__init__(*args, **kwargs):
        kwargs['realmRoles'] = kwargs.get('realmRoles', [])
        kwargs['attributes'] = kwargs.get('attributes', {})
        oryg__init__(*args, **kwargs)

    KeycloakUser.__init__ = new__init__
And call patcher before any FastAPI code is executed, in main.py or whatever main module you defined.
I believe that this is related to #97. But it has been a while, so maybe I misremember the error I got at that point.
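Worth noting for anyone hitting this on Pydantic v2 (as in the environment above): Optional[...] alone no longer makes a field skippable, so if you control the model, an alternative to patching __init__ is giving the fields explicit defaults. A sketch of the relevant fields only:

```python
from typing import List, Optional
from pydantic import BaseModel

class KeycloakUser(BaseModel):
    id: str
    # Pydantic v2 requires an explicit default for a field to be omittable;
    # Optional[...] by itself only widens the type to allow None.
    realmRoles: Optional[List[str]] = None
    attributes: Optional[dict] = None
```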
| gharchive/issue | 2023-07-28T13:55:41 | 2025-04-01T06:38:14.342489 | {
"authors": [
"alexbarcelo",
"praveenexaf",
"ricciarellif",
"softfactory1"
],
"repo": "code-specialist/fastapi-keycloak",
"url": "https://github.com/code-specialist/fastapi-keycloak/issues/101",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1531761088 | Add navigation links to the top of the comment map
The top of the comment map page lacks the global header (navigation links) that the other pages on the site have, so it would be great if it could be added.
https://code4fukui.github.io/sightseeingApp/
@NaobumiTanaka
Fixed it so that the header is displayed on all screens.
(It was originally displayed, but it seems it had become hidden because of changes I had made...)
| gharchive/issue | 2023-01-13T06:12:52 | 2025-04-01T06:38:14.348236 | {
"authors": [
"EiichiMiyagawa",
"NaobumiTanaka"
],
"repo": "code4fukui/fukui-kanko-stat",
"url": "https://github.com/code4fukui/fukui-kanko-stat/issues/59",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
803056067 | Google Sheets Styled demo not working
Solving this issue will help us figure out whether our integration with Google Sheets will still work.
The styled demo mentioned in the README is not working due to: Uncaught ReferenceError: tsml_react_config is not defined
Try creating an instance of tsml_react_config before the meetings src script.
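A minimal sketch of what that means (the script path is a placeholder for wherever the demo loads the app from):

```html
<script>
  // define the config global before the app script loads
  var tsml_react_config = {};
</script>
<script src="path/to/tsml-ui/app.js" async></script>
```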
I had additional trouble with Google Drive after this debug on my test site, and I will continue to look at that. I will share further info as I learn more if this doesn't do the trick.
Thank you!
Should be working again now! Thanks for letting us know that was out of date. https://react.meetingguide.org/demo.html
| gharchive/issue | 2021-02-07T20:57:06 | 2025-04-01T06:38:14.355455 | {
"authors": [
"blafving",
"joshreisner"
],
"repo": "code4recovery/tsml-ui",
"url": "https://github.com/code4recovery/tsml-ui/issues/87",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1257979865 | "Failed to compile" error message in the new generated app
Type: Bug
Operating system: MacOS 10.15.7
Affects Version: 1.0.3
Priority: High
Severity: Critical
Preconditions: Generate the new application using janush command
Steps to reproduce:
Use command cd web
Use command npm run start or yarn start
Actual result:
After the initial response, there is an error message in the terminal:
"Failed to compile. 'Router' cannot be used as a JSX component. Its instance type 'BrowserRouter' is not a valid JSX element".
The app opens in the default browser showing the Janush index page, but no action can be performed on it.
Expected result:
The command finishes running the application correctly.
A window opens in the default browser with the Janush index page, and you can perform actions on it (for example, clicking Sign in on the page redirects to the sign-in page).
The problem here is the wrong node version. You had node version 14 set and Janush has dependencies in package-lock.json for node v16.
Solution:
Reinstall the application from node v16
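For example (assuming nvm is available):

```bash
nvm install 16 && nvm use 16
rm -rf node_modules   # drop modules installed under Node 14
npm ci                # reinstall from the v16 lockfile
npm run start
```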
| gharchive/issue | 2022-06-02T10:13:32 | 2025-04-01T06:38:14.362709 | {
"authors": [
"hadrysm",
"krzyjel"
],
"repo": "codeandpepper/janush",
"url": "https://github.com/codeandpepper/janush/issues/242",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
158479847 | Figure out how to get NullToggle to not cause app to fail when feature target type validation happens on evaluation
Currently the NullToggle has a null feature instance that has a target_type of NONE.
In any of the cases where a NullToggle is returned the evaluations would fail if a target is passed in during evaluation. This is extremely bad, and something that can NOT happen.
We need to figure out a way to prevent this exception from happening in the NullToggle case.
Ideas so far around this are to maybe make a NullFeature that uses the NOT_SET type. This bastardizes the intent of the NOT_SET type, but would work with the current feature target type validation logic. However, if the process made it further, say to the type match check, it might not.
Another idea would be to add a new TargetType that would have to be considered in all of the locations where we care about target types (the feature target type validator, the feature-to-rule target type checker, etc.). This new type would tell those checkers not to worry about matching it to the feature, and not to worry about any target passed in during evaluation matching the feature contract.
At the moment this seems like the best idea.
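A rough sketch of that idea (names are hypothetical, not a final API):

```ruby
# Hypothetical: a target type that target-matching checks treat as a wildcard.
module Togls
  module TargetTypes
    EITHER = :either # matches with or without an evaluation target
  end

  class NullFeature
    def target_type
      TargetTypes::EITHER
    end
  end
end
```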
I created a branch called fix_null_toggle_null_feature_target_type with a failing test in the scenario that this would happen.
I dug into this further and it seems that this would only happen if it was unable to find a toggle in any of the drivers in the toggle repository. So, it would have to fail to find it in all of the drivers, including, lastly, the in-memory driver.
This is a viable scenario specifically if the user accidentally fat fingers the feature identifier so it doesn't match a defined feature.
This has been merged in. Closing...
| gharchive/issue | 2016-06-04T01:13:04 | 2025-04-01T06:38:14.384366 | {
"authors": [
"cyphactor"
],
"repo": "codebreakdown/togls",
"url": "https://github.com/codebreakdown/togls/issues/83",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
902983902 | How to remove border if I don't need one?
Type of issue
[ ] Bug
[+] Question (e.g. about handling/usage)
[ ] Request for new feature/improvement
Hi @MaslovD ,
just set the parameter "drawQuietZones" to false. You can read more about the parameters in our wiki: https://github.com/codebude/QRCoder/wiki/Advanced-usage---QR-Code-renderers#21-qrcode-renderer-in-detail
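For example (a sketch; the exact GetGraphic overloads depend on your QRCoder version):

```csharp
using System.Drawing;
using QRCoder;

var generator = new QRCodeGenerator();
var data = generator.CreateQrCode("some payload", QRCodeGenerator.ECCLevel.Q);
var qrCode = new QRCode(data);
// drawQuietZones: false omits the white border (quiet zone) around the code
Bitmap bmp = qrCode.GetGraphic(20, Color.Black, Color.White, drawQuietZones: false);
```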
| gharchive/issue | 2021-05-26T22:29:47 | 2025-04-01T06:38:14.387762 | {
"authors": [
"MaslovD",
"codebude"
],
"repo": "codebude/QRCoder",
"url": "https://github.com/codebude/QRCoder/issues/301",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1293919233 | Unable to use BeforeSuite and AfterSuite hooks
What are you trying to achieve?
I am trying to use BeforeSuite and AfterSuite hooks
What do you get instead?
Could not include object Step Definition from ../step_definitions/hooks.js from module 'C:\Projects\Aura-Core\aura-accelerate-seed\step_definitions\hooks.js'
BeforeSuite is not a function
TypeError: BeforeSuite is not a function
at Object.<anonymous> (C:\Projects\Aura-Core\aura-accelerate-seed\step_definitions\hooks.js:4:1)
at Module._compile (node:internal/modules/cjs/loader:1105:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1159:10)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12)
at Module.require (node:internal/modules/cjs/loader:1005:19)
at require (node:internal/modules/cjs/helpers:102:18)
at loadSupportObject (C:\Projects\Aura-Core\aura-accelerate-seed\node_modules\codeceptjs\lib\container.js:338:17)
at loadGherkinSteps (C:\Projects\Aura-Core\aura-accelerate-seed\node_modules\codeceptjs\lib\container.js:317:7)
at Function.create (C:\Projects\Aura-Core\aura-accelerate-seed\node_modules\codeceptjs\lib\container.js:48:25)
### Details
* CodeceptJS version: 3.3.3
* NodeJS Version: 16.5
* Operating System: Windows
* playwright
* Configuration file: (not provided)
@viveklandeWK I think you should try to use _beforeSuite and _afterSuite instead of BeforeSuite and AfterSuite according to documentation of hooks
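For what it's worth, a sketch of the helper-based route (file and class names are illustrative; the _beforeSuite/_afterSuite lifecycle hooks live on custom helpers rather than in Gherkin step definition files):

```js
// hooks_helper.js -- minimal sketch of a custom helper with suite hooks
const Helper = require('@codeceptjs/helper');

class HooksHelper extends Helper {
  async _beforeSuite(suite) {
    // runs once before each suite starts
  }

  async _afterSuite(suite) {
    // runs once after each suite finishes
  }
}

module.exports = HooksHelper;
```

The helper would then be registered under helpers in codecept.conf.js alongside Playwright.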
Hi @dyaroman, the browser session is not available in _beforeSuite(); is there any way to make it available?
Following documentation before creating an issue is highly recommended
Thanks
| gharchive/issue | 2022-07-05T07:45:02 | 2025-04-01T06:38:14.428237 | {
"authors": [
"DavertMik",
"VivekLande",
"dyaroman",
"viveklandeWK"
],
"repo": "codeceptjs/CodeceptJS",
"url": "https://github.com/codeceptjs/CodeceptJS/issues/3353",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
716908761 | Fix dto
closes #188
Here is an overview of what got changed by this pull request:
Complexity increasing per file
==============================
- src/shared/controllers/base.controller.ts 4
- src/users/users.controller.ts 4
See the complete overview on Codacy
| gharchive/pull-request | 2020-10-07T23:10:11 | 2025-04-01T06:38:14.430001 | {
"authors": [
"ofuochi",
"tayormi"
],
"repo": "codeclannigeria/codeclannigeria-backend",
"url": "https://github.com/codeclannigeria/codeclannigeria-backend/pull/189",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
373864892 | Update ja.coffee
Did some minor teacher CS translations
Thanks!
| gharchive/pull-request | 2018-10-25T09:52:57 | 2025-04-01T06:38:14.438324 | {
"authors": [
"Bryukh",
"Chaboi45"
],
"repo": "codecombat/codecombat",
"url": "https://github.com/codecombat/codecombat/pull/4991",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1041463831 | Version 0.1.9 incorrectly detects Heroku CI as provider when using on Travis
Describe the bug
I'm running the uploader from Travis, and on version 0.1.8 the uploader correctly detected this:
[2021-10-27T18:48:16.571Z] ['info'] Detected Travis CI as the CI provider.
Since the update to 0.1.9, the uploader detects Heroku CI without any changes on my side
[2021-11-01T17:56:05.952Z] ['info'] Detected Heroku CI as the CI provider.
To Reproduce
Steps to reproduce the behavior:
Run the uploader on verbose mode from a travis build
See error
Expected behavior
Travis to be detected when running build on travis
Screenshots
N/A
Additional context
Because Heroku is detected, the uploader tries to use the Heroku env variables and the request is invalid (missing branch, etc.)
[2021-11-01T17:56:05.952Z] ['info'] Detected Heroku CI as the CI provider.
[2021-11-01T17:56:05.952Z] ['verbose'] -> Using the following env variables:
[2021-11-01T17:56:05.952Z] ['verbose'] CI: true
[2021-11-01T17:56:05.952Z] ['verbose'] HEROKU_TEST_RUN_BRANCH: undefined
[2021-11-01T17:56:05.952Z] ['verbose'] HEROKU_TEST_RUN_COMMIT_VERSION: undefined
[2021-11-01T17:56:05.952Z] ['verbose'] HEROKU_TEST_RUN_ID: undefined
[2021-11-01T17:56:05.971Z] ['info'] Pinging Codecov: https://codecov.io/upload/v4?package=uploader-0.1.9&token=*******&branch=&build=&build_url=&commit=&job=&pr=&service=heroku&slug=XX%2FXX&name=&tag=&flags=&parent=
[2021-11-01T17:56:05.971Z] ['verbose'] Passed token was 36 characters long
[2021-11-01T17:56:05.971Z] ['verbose'] https://codecov.io/upload/v4?package=uploader-0.1.9&branch=&build=&build_url=&commit=&job=&pr=&service=heroku&slug=XX%2FXX&name=&tag=&flags=&parent=
Content-Type: 'text/plain'
Content-Encoding: 'gzip'
X-Reduced-Redundancy: 'false'
[2021-11-01T17:56:06.100Z] ['error'] Error POSTing to https://codecov.io: 400 Invalid request parameters
[2021-11-01T17:56:06.101Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: Bad Request
Looks like this issue is fixed by:
https://github.com/codecov/uploader/pull/485/files#diff-a53210814fae036993f7ffe30dc831d847e6f4526dc59cd216c7ba18a7d9d354R5
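For reference, the gist of that fix as a paraphrased sketch (see the linked diff for the real code): Heroku detection now requires a Heroku-specific variable instead of just CI=true.

```javascript
// paraphrased sketch of the corrected provider check
function detect(envs) {
  return Boolean(envs.CI) && Boolean(envs.HEROKU_TEST_RUN_BRANCH)
}
```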
| gharchive/issue | 2021-11-01T18:25:26 | 2025-04-01T06:38:14.449296 | {
"authors": [
"adolfov"
],
"repo": "codecov/uploader",
"url": "https://github.com/codecov/uploader/issues/481",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
200839318 | docker integration driver ignores preemption option during mount command
When attempting to mount a volume via the docker integration driver, there is no check in place to allow the preemptive mount option to function correctly. This is the result of scrubbing unavailable volumes from the list of volumes before processing the attach/mount command.
https://github.com/codedellemc/libstorage/blob/master/drivers/integration/docker/docker.go#L173
Fix looks good
| gharchive/issue | 2017-01-15T00:35:42 | 2025-04-01T06:38:14.452233 | {
"authors": [
"cduchesne"
],
"repo": "codedellemc/libstorage",
"url": "https://github.com/codedellemc/libstorage/issues/389",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
221627832 | Tax assessment data via opendata API
Opendata.dc.gov provides tax assessment data of all properties in DC via an API.
[x] Create a TaxApiConn class based on the MarApiConn class currently in the code. Use this to add raw data files to /data/raw/tax_assessment/opendata/YYYYMMDD when the python/cmd/data.py script is run with the appropriate arguments. Write a demo command to run at the command line to create this file with the appropriate datestamp.
(pull request after step 1; someone else can do part 2 if desired)
[ ] Add this file to the manifest.csv using the instructions on adding a new dataset.
Future: automatically add new files to the pipeline whenever this update script is run, but we wait to do this until we have created an appropriate structure for all API scripts.
Note on integrating this data from open.data@dc.gov:
The EXTRACTDATE field says when this was last updated; it looks to be roughly monthly.
Hey @eng1nerd have you been able to do any work on this issue? Let me know where things stand - and if you're busy with the new job we can also see if someone's able to take it over!
@NealHumphrey I'm sorry for the delay. I will try to finish it by tomorrow 6pm. If something will not be right or it will take me much longer, then some other person should probably step in.
Latest status:
@eng1nerd 's code has been merged into the codefordc repository under the branch name 198-add-tax-data; should pick up code from there.
The current TaxApiConn class uses the `GeoService` API. Instead we should use the `GeoJSON` API URL, but swap it out for .csv as noted on the opendata documentation (bottom of page): http://opendata.dc.gov/pages/using-apis . This will resolve the issue that the number of rows is limited in the GeoJSON API (and also make it easier for us to parse, since it'll be in the format we want anyway).
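A sketch of what that class could look like (the dataset URL is a placeholder; per the note above, take the GeoJSON API URL and swap the extension for .csv):

```python
import os
from datetime import date

import requests


class TaxApiConn:
    """Sketch: fetch the tax assessment CSV and save a datestamped raw copy."""

    DATA_URL = "https://opendata.arcgis.com/datasets/<dataset-id>.csv"  # placeholder

    def get_data(self, output_root="data/raw/tax_assessment/opendata"):
        folder = os.path.join(output_root, date.today().strftime("%Y%m%d"))
        os.makedirs(folder, exist_ok=True)
        resp = requests.get(self.DATA_URL)
        resp.raise_for_status()
        path = os.path.join(folder, "tax_assessment.csv")
        with open(path, "wb") as f:
            f.write(resp.content)
        return path
```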
| gharchive/issue | 2017-04-13T17:16:11 | 2025-04-01T06:38:14.483565 | {
"authors": [
"NealHumphrey",
"eng1nerd"
],
"repo": "codefordc/housing-insights",
"url": "https://github.com/codefordc/housing-insights/issues/198",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
293767176 | Write tests to cover the test cases
Test Cases are here: https://drive.google.com/drive/folders/1obtB1bMOm_DdY7t7jSrrMLE_lVbiKZ7z
Some tests were added with #247
@scottfirestone, I assigned you to this issue since you mentioned you wanted to write some tests. I hope I didn't overstep my bounds :). Like you said, I thought we should make sure we're not working on the same thing. I'll continue on with some landing page integration tests for now. I may look into unit testing with Cypress to learn how that works but will let you know before I start writing them. If you want to take on another page of the app, just respond to this message. Let me know if there's a better way to handle more than one assignee on an issue using Github/Waffle without stepping on each others toes. I'm usually using Github by myself and not on a team.
Thanks for fixing the Travis CI issue! I'm hoping I can get some tests committed this weekend.
| gharchive/issue | 2018-02-02T04:06:14 | 2025-04-01T06:38:14.486874 | {
"authors": [
"LaurieLinz",
"dwhite96"
],
"repo": "codefordenver/Circular",
"url": "https://github.com/codefordenver/Circular/issues/268",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
175122687 | Remove the Münster red line
See the detail page, to the left of the test tube
This was changed to Hamburg and Hamburg values. The line does make sense, though; should it really be removed?
| gharchive/issue | 2016-09-05T20:05:32 | 2025-04-01T06:38:14.491213 | {
"authors": [
"TomThats",
"lundelius"
],
"repo": "codeforhamburg/Trinkwasser-Hamburg",
"url": "https://github.com/codeforhamburg/Trinkwasser-Hamburg/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
451662220 | scraper.py in root folder...
...are we going to use this one in the future?
should we move some parts to immo_scraper/, keep the good parts, and delete what is deprecated?
@jahnique what is your opinion on this?
Resolved by #14
| gharchive/issue | 2019-06-03T19:58:37 | 2025-04-01T06:38:14.494256 | {
"authors": [
"ThorbenJensen"
],
"repo": "codeformuenster/immoscout",
"url": "https://github.com/codeformuenster/immoscout/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1943699635 | Improve accessible name for images on home page and prop names for HomeSection
This PR:
Improves the accessible names for images on the home page.
1. Adds context for all logo images
2. Removes unnecessary aria-labels
3. Marks decorative (non-informative) images as such using an empty alt attribute
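For illustration, the pattern looks roughly like this (not the exact PASS markup):
{/* Informative image: the alt text supplies the context the logo stands for */}
<img src={logo} alt="PASS home" />
{/* Decorative image: an empty alt tells screen readers to skip it */}
<img src={divider} alt="" />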
The files this PR affects:
Components
src/components/Footer/RenderCompanyInfoSection.jsx
src/components/NavBar/NavbarDesktop.jsx
src/components/NavBar/NavbarLoggedOut.jsx
src/components/NavBar/NavbarMobile.jsx
src/pages/Home.jsx
Tests
test/components/NavBar/NavbarDesktop.test.jsx
test/components/NavBar/NavbarLoggedOut.test.jsx
test/components/NavBar/NavbarMobile.test.jsx
test/pages/__snapshots__/Home.test.jsx.snap
Screenshots (if applicable):
Should be no difference visually.
Additional Context (optional):
Following guidance around decorative/informative images from W3. Verified using this extension and DevTools.
@leekahung I believe this is ready for you to approve now - so merging is possible.
Yeah, I'm mostly fine with this. Although I've noticed the image alts for HomeSection are all empty now compared to before. Won't they be necessary for accessibility?
https://accessibility.psu.edu/images/imageshtml/
I went with the info about decorative/informative images from W3 (see example 4 in decorative as that's what I thought it fell under). It was a judgment call that I decided they were decorative since I'm not sure they contribute to the content. Happy to hear otherwise, it's a tough call.
Ah, I see. Well, considering that the section title follows immediately after the images, I think it should be fine even if the images themselves break. Alright, I'll approve this.
Hey @milofultz. Was planning to merge this branch in after resolving a merge conflict for one of the test files.
Unfortunately, the resolution I've attempted seems to have failed the test. I think you'll be able to fix it from your end (sorry for the inconvenience). If the test gets fixed, let me know and I'll have this merged into Development.
Thanks!
@leekahung Should be good to go now 👍
| gharchive/pull-request | 2023-10-15T04:12:13 | 2025-04-01T06:38:14.503157 | {
"authors": [
"leekahung",
"milofultz",
"xscottxbrownx"
],
"repo": "codeforpdx/PASS",
"url": "https://github.com/codeforpdx/PASS/pull/458",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1499759517 | Bug: [Cookie] validatePrefix function can not understand bool var even if its a boolean var
PHP Version
8.1
CodeIgniter4 Version
4.2.10
CodeIgniter4 Installation Method
Composer (as dependency to an existing project)
Which operating systems have you tested for this bug?
Linux
Which server did you use?
apache
Database
Postgres
What happened?
I have an application with benedmunds' IONAuth integrated into it. I can log in to the site with no problem.
But when I log out, the protected function validatePrefix in vendor/codeigniter4/framework/system/Cookie/Cookie.php throws an error like the one below.
TypeError
CodeIgniter\Cookie\Cookie::validatePrefix(): Argument #2 ($secure) must be of type bool, null given, called in /srv/www/htdocs/vendor/codeigniter4/framework/system/Cookie/Cookie.php on line 226
Line 222 of that file sets the cookie's secure flag from App.php in the config directory with $secure = $options['secure'];
But it seems that the validatePrefix function cannot determine whether the variable is of type bool or not.
I used dd($secure) and the result is
$secure boolean false
but the validatePrefix function still cannot decide!
When I change line 222 from $secure = $options['secure']; to $secure = (bool)$options['secure'];, everything works fine.
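For anyone curious, a minimal standalone illustration of the same TypeError (not CodeIgniter's actual code):
<?php
declare(strict_types=1);

function validatePrefix(string $prefix, bool $secure): void {}

// Passing null where bool is declared throws:
// TypeError: validatePrefix(): Argument #2 ($secure) must be of type bool, null given
validatePrefix('__Secure-', null);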
Steps to Reproduce
Login with IONAuth login page (http://somesite.com/auth/login) and logout with (http://somsite.com/auth/logout)
Expected Output
redirection to login page.
Anything else?
No response
I tested the following code, but cannot reproduce the TypeError.
<?php
namespace App\Controllers;
class Home extends BaseController
{
public function index()
{
helper('cookie');
delete_cookie('remember_code');
}
}
No need, @kenjis. I got it figured out. It's because of IonAuth...
| gharchive/issue | 2022-12-16T07:53:42 | 2025-04-01T06:38:14.528732 | {
"authors": [
"Slamoth",
"kenjis"
],
"repo": "codeigniter4/CodeIgniter4",
"url": "https://github.com/codeigniter4/CodeIgniter4/issues/6982",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
855109089 | Relocate cookie exception
Finish Relocate Cookie Class #4502
@MGatner this PR needs to be merged before the next release
@paulbalandan please review
I deleted it, but forgot to save changes ^^
Waiting for tests..
All green
@paulbalandan Thank you
| gharchive/pull-request | 2021-04-10T16:47:59 | 2025-04-01T06:38:14.531101 | {
"authors": [
"mostafakhudair",
"paulbalandan"
],
"repo": "codeigniter4/CodeIgniter4",
"url": "https://github.com/codeigniter4/CodeIgniter4/pull/4544",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
655028752 | Changed the behavior of _refuri2http to return a refid fragment when markdown_http_base is None
HI 👋
I noticed that some RST internal cross-references like :py:func:`my_func`
would get written to markdown like so `my_func()`
instead of the expected [`my_func()`](#my_func)
Looks like when self.markdown_http_base is None, the reference becomes None. Is this the intended behavior? Feel free to ignore/close this PR if so; otherwise, here's a simple fix.
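Roughly, the change falls back to an in-page fragment instead of propagating None; a sketch only (the real patch is the PR diff, and the surrounding method body is assumed here):
def _refuri2http(self, node):
    if self.markdown_http_base is None:
        # No external base URL configured: emit an in-page anchor instead of None.
        return '#' + node.get('refid', node['refuri'])
    return self.markdown_http_base + '/' + node['refuri']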
@symbiont-liam-howell thanks
| gharchive/pull-request | 2020-07-10T20:49:44 | 2025-04-01T06:38:14.552764 | {
"authors": [
"codejamninja",
"symbiont-liam-howell"
],
"repo": "codejamninja/sphinx-markdown-builder",
"url": "https://github.com/codejamninja/sphinx-markdown-builder/pull/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
490058169 | Rethink the last slide and potentially break it up into multiple slides
Right now the last slide of every milestone is very overloaded: https://codelab.fun/angular/create-first-app/end
Someone with design skills should think through the best way of making it easier to comprehend or potentially break it up into multiple slides
This is the same as #1069, closing
| gharchive/issue | 2019-09-05T23:27:01 | 2025-04-01T06:38:14.554542 | {
"authors": [
"kirjs"
],
"repo": "codelab-fun/codelab",
"url": "https://github.com/codelab-fun/codelab/issues/1025",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
198614993 | Add User Scaffolding Using Laravel's inbuilt scaffolding
https://laravel.com/docs/5.3/authentication#introduction
Use the scaffolding feature to create the register/login/reset password feature.
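For reference, in Laravel 5.3 that scaffolding is generated with the built-in artisan commands (run from the project root):
php artisan make:auth
php artisan migrate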
Pushed in commit: https://github.com/codelust/laravel-skeleton/commit/ae48d8a1a4a49be6dadf1ed9a6e8b8f56d148c21
| gharchive/issue | 2017-01-04T02:07:20 | 2025-04-01T06:38:14.557540 | {
"authors": [
"codelust"
],
"repo": "codelust/laravel-skeleton",
"url": "https://github.com/codelust/laravel-skeleton/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
122107697 | Does not work on babel >= 6.0
ERROR in ./common/index.js
Module build failed: TypeError: Transformer is not a function
at build (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-plugin-closure-elimination/lib/index.js:153:10)
at Function.memoisePluginContainer (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/options/option-manager.js:127:13)
at Function.normalisePlugin (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/options/option-manager.js:161:32)
at /Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/options/option-manager.js:197:30
at Array.map (native)
at Function.normalisePlugins (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/options/option-manager.js:173:20)
at OptionManager.mergeOptions (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/options/option-manager.js:271:36)
at OptionManager.init (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/options/option-manager.js:416:10)
at File.initOptions (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/index.js:190:75)
at new File (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/index.js:121:22)
at Pipeline.transform (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/pipeline.js:42:16)
at transpile (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-loader/index.js:14:22)
at /Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-loader/lib/fs-cache.js:140:16
at ReadFileContext.callback (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-loader/lib/fs-cache.js:27:23)
at FSReqWrap.readFileAfterOpen [as oncomplete] (fs.js:325:13)
Not yet. I plan to support it, I just haven't had time yet. PRs welcome :)
+1 (wish I had time to fix this)
I've updated the tests and build scripts to use Babel 6 in my branch below. However, I do not know enough about Babel 6 to determine how to replace the use of Transformer in src/index.js as it's no longer part of the entry parameters given to the plugin by Babel. *
https://github.com/jadbox/babel-plugin-closure-elimination
**
http://babeljs.io/blog/2015/10/29/6.0.0/
Babel 5
export default function({ Plugin, types: t }) {
  return new Plugin('ast-transform', {
    visitor: { ... }
  });
}
Babel 6
export default function({ types: t }) {
  return {
    visitor: { ... }
  };
}
@phpnode Does it mean that you found this plugin useless and don't want to develop/use it anymore?
@develar not at all, the project that I wrote this for is still on babel 5 and I've not had chance to upgrade it yet. I'd like to use this on my babel 6 projects too but there are only so many hours in the day and those are not as performance sensitive - PRs are welcome, otherwise I will get around to this in the coming weeks / months.
@phpnode Thanks for clarification. I am interested because I want to debug lambdas and it is not possible since V8 VM doesn't support column-based breakpoints correctly in all cases (https://bugs.chromium.org/p/v8/issues/detail?id=2825). (I develop JetBrains JS debugger (WebStorm, IDEA and so on)).
Good news everyone!
https://github.com/codemix/babel-plugin-closure-elimination/pull/4
I finished the work of @jadbox
awesome work @Gvozd
Fixed in 1.0.0
@Gvozd Thanks so much for finishing the work in my branch! I'm excited to see if this improves performance.
Btw, will this work for => functions?
| gharchive/issue | 2015-12-14T19:00:47 | 2025-04-01T06:38:14.576220 | {
"authors": [
"Gvozd",
"develar",
"jadbox",
"nicksrandall",
"phpnode"
],
"repo": "codemix/babel-plugin-closure-elimination",
"url": "https://github.com/codemix/babel-plugin-closure-elimination/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
239770014 | Fix #14: Added fill method to the handler
Adds a method to the handler class to fill a memory object with a certain value
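Usage would look roughly like this SYCL-style sketch (names approximate the proposal, not a verbatim spec quote):
queue.submit([&](cl::sycl::handler& cgh) {
  auto acc = buf.get_access<cl::sycl::access::mode::discard_write>(cgh);
  cgh.fill(acc, 0.0f);  // fill the whole memory object with a single value
});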
Looks good to me :+1:
| gharchive/pull-request | 2017-06-30T12:51:49 | 2025-04-01T06:38:14.598670 | {
"authors": [
"AerialMantis",
"Ruyk"
],
"repo": "codeplaysoftware/standards-proposals",
"url": "https://github.com/codeplaysoftware/standards-proposals/pull/15",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2189839774 | How do I handle click events?
What problem does this feature solve?
For example, I want to crawl a YouTube live stream, but the saved screenshot contains a button that cannot be clicked. Is there a click operation available?
What would the proposed API look like?
A click function
Hi, please refer to https://pptr.dev/api/puppeteer.elementhandle.click
Like this:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({ maxRetry: 3, intervalTime: { max: 2000, min: 1000 } })
// Call the crawlPage API to crawl the page
myXCrawl.crawlPage('https://www.example.com').then(async (pageResult) => {
const { browser, page } = pageResult.data
// Get the button
const sendBtnEl = await page.$('.send-btn')
// Perform the click
sendBtnEl?.click()
// Close the browser
browser.close()
})
Oh, okay, thank you very much
If you want to open a browser to watch it run, you can refer to this: https://github.com/coder-hxl/x-crawl/blob/main/docs/cn.md#打开浏览器
| gharchive/issue | 2024-03-16T07:33:23 | 2025-04-01T06:38:14.601657 | {
"authors": [
"coder-hxl",
"xfxssr"
],
"repo": "coder-hxl/x-crawl",
"url": "https://github.com/coder-hxl/x-crawl/issues/90",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2171007633 | Azure DevOps repo clone with GIT_USERNAME from coder_external_auth
Hi there,
I am trying to clone a repo from a private Azure DevOps repository.
The user has authenticated using OAUTH2 via the Coder external_auth documentation
I am then using the template in the repo and injecting the external auth as a data object in Terraform
data "coder_external_auth" "azure_devops" {
id = "primary-devops"
}
resource "kubernetes_deployment" "workspace" {
metadata {
name = "coder-${data.coder_workspace.me.owner}-${lower(data.coder_workspace.me.name)}"
namespace = var.namespace
labels = {
...
}
}
spec {
replicas = data.coder_workspace.me.start_count
selector {
match_labels = {
"coder.workspace_id" = data.coder_workspace.me.id
}
}
strategy {
type = "Recreate"
}
template {
...
}
spec {
container {
name = "coder-${data.coder_workspace.me.owner}-${lower(data.coder_workspace.me.name)}"
# Find the latest version here:
# https://github.com/coder/envbuilder/tags
image = "ghcr.io/coder/envbuilder:0.2.7"
env {
name = "CODER_AGENT_TOKEN"
value = coder_agent.main.token
}
env {
name = "CODER_AGENT_URL"
value = replace(data.coder_workspace.me.access_url, "/localhost|127\\.0\\.0\\.1/", "host.docker.internal")
}
env {
name = "GIT_URL"
value = data.coder_parameter.repo.value == "custom" ? data.coder_parameter.custom_repo_url.value : data.coder_parameter.repo.value
}
env {
name = "GIT_USERNAME"
value = data.coder_external_auth.azure_devops.access_token
}
env {
name = "INIT_SCRIPT"
value = replace(coder_agent.main.init_script, "/localhost|127\\.0\\.0\\.1/", "host.docker.internal")
}
env {
name = "FALLBACK_IMAGE"
value = "codercom/enterprise-base:ubuntu"
}
volume_mount {
name = "workspaces"
mount_path = "/workspaces"
}
}
volume {
name = "workspaces"
persistent_volume_claim {
claim_name = kubernetes_persistent_volume_claim.workspaces.metadata.0.name
}
}
}
}
}
}
When it gets to checking out the repo, Terraform throws a pretty unhelpful error:
#1: 📦 Cloning https://<our_org>@dev.azure.com/<our_org>/build-automation/_git/devcontainers to /workspaces/devcontainers...
Failed to clone repository: clone "https://<access_token_I_presume>:@dev.azure.com/<our_org>/build-automation/_git/devcontainers": unexpected client error: unexpected requesting "https://<access_token_I_presume>@dev.azure.com/<our_org>/build-automation/_git/devcontainers/git-upload-pack" status code: 400
Falling back to the default image...
Am I missing something or is there something I can test?
It seems that the git clone stage is adding /git-upload-pack to the end of the URL
Could it be related to this issue on the Git Go repository?
https://github.com/go-git/go-git/issues/64
@wf1-brandon-grant based on: https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&tabs=Windows#use-a-pat
It seems the username should be a dummy string, and GIT_PASSWORD should be the token.
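i.e., roughly this change to the env blocks (untested sketch):
env {
  name  = "GIT_USERNAME"
  value = "oauth2"  # dummy value; Azure DevOps ignores the username when a token is supplied
}
env {
  name  = "GIT_PASSWORD"
  value = data.coder_external_auth.azure_devops.access_token
}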
Have you tried that?
I have given that a shot with the suggested change.
Same error response, I am afraid.
Interestingly, if I open up the workspace (as it falls back to the enterprise container image) and hop into the directory that was cloned, there is a .git/config whose URL, when I use it directly, works as expected and clones the repo.
Hi @kylecarbs -
Do you have any thoughts on how we might be able to work around this issue?
Hmm odd that cloning afterwards fails.
I'll look at this today.
@wf1-brandon-grant fixed in the attached PR! I'll do a release post-merge.
@wf1-brandon-grant please let me know if that fixes it or not, it'd be very helpful!
Hey @kylecarbs -
Just gave this a test and it has done the trick. Thank you!
| gharchive/issue | 2024-03-06T09:12:36 | 2025-04-01T06:38:14.618947 | {
"authors": [
"kylecarbs",
"wf1-brandon-grant"
],
"repo": "coder/envbuilder",
"url": "https://github.com/coder/envbuilder/issues/101",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1389072582 | The source_suffix[es] configuration is no longer needed
Sphinx extensions can now set this automatically.
source_suffix = ['.rst', '.md'] is no longer needed
| gharchive/issue | 2022-09-28T09:55:16 | 2025-04-01T06:38:14.620320 | {
"authors": [
"rkdarst"
],
"repo": "coderefinery/documentation",
"url": "https://github.com/coderefinery/documentation/issues/247",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
740660247 | fix windows instructions for meld
there's a typo (says "linux") and it should probably have more explicit steps
This has been fixed in #150.
| gharchive/issue | 2020-11-11T10:37:58 | 2025-04-01T06:38:14.621176 | {
"authors": [
"bast",
"wikfeldt"
],
"repo": "coderefinery/installation",
"url": "https://github.com/coderefinery/installation/issues/144",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1387742133 | Faster explanation of snakemake
For the snakemake episode, I tried to use the strategy "minimal explanation, let people go to exercises and give people the most time to do the exercise (and read more of the text to learn what I didn't say).
But, some feedback was that I should have done a better introduction to Snakemake to show what it actually is. I tried to, but with a goal of five minutes of intro, that requires better planning and maybe re-focused episode text.
My first thought is a rework of the first part of the episode thinking about the above. Not necessarily reducing text but thinking about the order and emphasis.
I have significantly shortened the episode in terms of reading and explaining.
| gharchive/issue | 2022-09-27T13:12:39 | 2025-04-01T06:38:14.622639 | {
"authors": [
"bast",
"rkdarst"
],
"repo": "coderefinery/reproducible-research",
"url": "https://github.com/coderefinery/reproducible-research/issues/202",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
245421308 | React.PureComponent with out props injection? Looking for guidance.
@piq9117
https://github.com/codergvbrownsville/code-rgv-pwa/blob/master/src/pages/Home/Home.tsx
I was under the impression that PureComponents are simply set up internally with a shallow-comparison implementation of shouldComponentUpdate(). Without a constructor passing in props or initializing state, how is it different from a functional component?
Yours truly,
A Concerned Citizen
According to the docs it's exactly the same as React.Component, but it implements shouldComponentUpdate with a shallow comparison. I used to implement this with react-pure-renderer-utils. However, since React implements it internally now, I'll just use that.
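A side-by-side sketch of the difference (plain React, nothing project-specific):
// PureComponent ships a built-in shouldComponentUpdate that shallow-compares
// props and state, so it skips re-rendering when neither has changed.
class Label extends React.PureComponent {
  render() { return <span>{this.props.text}</span>; }
}

// A plain function component (this predates React.memo) has no such check:
// it re-renders every time its parent does.
const LabelFn = ({ text }) => <span>{text}</span>;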
| gharchive/issue | 2017-07-25T14:30:51 | 2025-04-01T06:38:14.625168 | {
"authors": [
"celgra",
"piq9117"
],
"repo": "codergvbrownsville/code-rgv-pwa",
"url": "https://github.com/codergvbrownsville/code-rgv-pwa/issues/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
838859354 | Multiple Previews in the same provider
Allow multiple previews in the same provider. Very powerful if combined with some preview props like viewport or routes.
Also stumbled into the same issue. Would be great if the Preview would support adding starting URL so that each preview could potentially show different routes/pages.
Yes, I've been experimenting with this, but it's a bit on hold while I work on some other things. Unfortunately the original requirements of sandpack were assuming 1 bundler 1 preview for each sandpack instance, hence the architecture needs a bit of an overhaul to make this work, especially if you want to have a dynamic number of previews at runtime
| gharchive/issue | 2021-03-23T15:40:50 | 2025-04-01T06:38:14.655845 | {
"authors": [
"alexnm",
"nkovacic",
"zehfernandes"
],
"repo": "codesandbox/sandpack",
"url": "https://github.com/codesandbox/sandpack/issues/29",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1410216622 | Write a C# program to convert a string to an integer
Description
Write a C# program to convert a string to an integer
Input : "123"
Output : 123
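A minimal sketch of one possible solution (int.Parse throws FormatException on malformed input):
using System;

class ConvertAStringToAnInteger
{
    static void Main()
    {
        string input = "123";
        int number = int.Parse(input);
        Console.WriteLine(number); // prints: 123
    }
}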
How to contribute
Save the solution in the program/ConvertAStringToAnInteger.cs file
Add the ConvertAStringToAnInteger.cs file to the convert-a-string-to-an-integer folder
!assign
Hey @roberanegussie, this issue is already assigned to @anandfresh !!!
Please choose another issue.
Thanks for your interest in contributing to this project.
| gharchive/issue | 2022-10-15T16:17:00 | 2025-04-01T06:38:14.742080 | {
"authors": [
"harshraj8843",
"roberanegussie"
],
"repo": "codinasion/program",
"url": "https://github.com/codinasion/program/issues/4405",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
231091215 | Create desktop GUI
ctdl currently works as a command line utility.
Create a desktop GUI for ctdl using Tkinter.
Is using Tkinter compulsory? Can't we use other tools for this? Like Electron? https://electron.atom.io
Well, I have no idea if it is easy/possible to integrate python scripts with electron.
Tkinter is not compulsory though. You can try electron if it is possible.
It is possible to use electron here. The UI/UX with Tkinter is a bust.
Here's how it'll go using Electron:
Write a Python wrapper to expose the CLI as an API for the Electron app to consume (something like ZeroMQ).
Use Electron to serve static files (.html) and consume the API.
In the process we will have a web API as well.
Ok then. Go for it!
Wouldn't that be too complex @abhishek97 ? (import this)
(totally keeping aside electron's aspects)
Now, if you are talking about the cross-platform aspect, then why not Kivy?
Why the need for a desktop GUI rather than a simple webapp? @nikhilkumarsingh
#5 gui for content downloader @nikhilkumarsingh
| gharchive/issue | 2017-05-24T16:02:42 | 2025-04-01T06:38:14.746210 | {
"authors": [
"abhishek97",
"akansh97531",
"nikhilkumarsingh",
"sourabhtk37"
],
"repo": "coding-blocks/content-downloader",
"url": "https://github.com/coding-blocks/content-downloader/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
817850292 | 84 Super Star Trek CSharp
WIP
Super Trek Trek port to C#
Introduction including instructions
Main game loop
Short Range Scan
Shield Control
More to come...
This is a big one, take your time! Thanks!
| gharchive/pull-request | 2021-02-27T06:36:20 | 2025-04-01T06:38:14.759102 | {
"authors": [
"coding-horror",
"drewjcooper"
],
"repo": "coding-horror/basic-computer-games",
"url": "https://github.com/coding-horror/basic-computer-games/pull/97",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
378029117 | Value error in post(id = id)
def post_detail(request,id): postdetail = get_object_or_404(Posts,id =id) context={ "title":postdetail.title, "instance":postdetail, } return render(request,"webindex.html" ,context)
postdetail = get_object_or_404(post,id=id)..... check this
| gharchive/issue | 2018-11-06T20:54:44 | 2025-04-01T06:38:14.765364 | {
"authors": [
"maulik9021",
"warui1738"
],
"repo": "codingforentrepreneurs/Try-Django",
"url": "https://github.com/codingforentrepreneurs/Try-Django/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1903409099 | Can't display commands due to insufficient permissions
When writing "/" we don't have all the commands displaying :
This is clearly a non intentionall bug.
/fart should be available for everyone to use.
I think the scope of this fix can be greater tho. I believe we should have a way to declare permissions for our slash commands, which we do not have yet.
Interesting topic that needs to be discussed.
Thanks for your report!
| gharchive/issue | 2023-09-19T17:10:54 | 2025-04-01T06:38:14.768987 | {
"authors": [
"gdamou",
"neolectron"
],
"repo": "codinglab-io/discord-bot",
"url": "https://github.com/codinglab-io/discord-bot/issues/92",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
255165332 | Asset accounts gets (-) negative sign prepended when editing, causing double negative
Steps to reproduce the behaviour
Create a transaction to decrease amount in an asset account.
Click to edit the transaction, a (-) sign appears in front of red decreasing value
Click to save the edited transaction
-> error = the "decrease" transaction now becomes a "negative decrease" resulting in a positive increase to the asset balance
This behavior occurred after changing default transactions to be "credit" rather than "debit" in settings. Not sure if this is relevant.
Expected behaviour
Expected behavior would be to have the negative not included in the calculation.
Actual behaviour
The negative sign seems to be prepended for the decrease transaction type. After editing and saving, the negative seems to be included in the resulting calculations, causing the transaction to increase the asset balance rather than decreasing it as it should.
Software specifications
GnuCash Android version: 2.2.1
System Android version: 6.0.1
Device type: LG Nexus 5
This has already been reported in #723. Please, add any further comments there. Thanks!
| gharchive/issue | 2017-09-05T05:43:10 | 2025-04-01T06:38:14.772858 | {
"authors": [
"mutedbytes",
"rivaldi8"
],
"repo": "codinguser/gnucash-android",
"url": "https://github.com/codinguser/gnucash-android/issues/726",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
733109403 | De-prioritized TabNine suggestions to be at the end of the list
please complete the following information:
OS version: ArchLinux kernel 5.9.1
Editor version: 1.50.1
Programming language: php,TypeScript,bash
TabNine extension version: 2.8.8
Issue Details:
This is a feature request
The current official TabNine is unusable for my colleagues and myself because we can not change the suggestion priority.
We need the priority to come after the built-in suggestions, otherwise we get less useful suggestions at the top most of the time.
This problem is faced by a lot of people since there is an unofficial TabNine extension for vscode which does just that.
With that feature TabNine is a great tool to reduce repetitive typing tasks.
So can you please take look at this unofficial extension and add a way to change the suggestion priority ?
Hey @elovin thanks for reaching out to us!
Thanks for the suggestion, we will try to address that in upcoming releases.
Thanks, Boaz.
| gharchive/issue | 2020-10-30T11:48:15 | 2025-04-01T06:38:14.777049 | {
"authors": [
"boaz-codota",
"elovin"
],
"repo": "codota/tabnine-vscode",
"url": "https://github.com/codota/tabnine-vscode/issues/179",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
449515691 | Request: Implement a Searchable Select
I think it would be nice to have a Select with Input component, I've seen it used in many Google Material products for making it easier to filter large lists.
Something like this little library: https://selectize.github.io/selectize.js/
I'm by no means expecting you to dictate the behaviour of the select, in fact I'd rather you didn't so that one could do so themselves, or at least add an optional onChange prop override.
Yeah under the hood the Select component using the TextField component https://github.com/codypearce/material-bread/blob/master/src/Components/Select/Select.js#L87, but it needs some rewriting to be able to type inside it and activate the dropdown.
I think there's two parts to that:
A prop to distinguish between allowing the user to type in the textfield and only allowing them to click to activate the dropdown.
Better support for chips in TextFields, like here https://material.io/design/components/chips.html#
What do you think?
I agree with that yeah.
Possibly have the prop be a boolean, named something like filterable?
Another possibility is to allow it to have children, but that could lead to behaviours against the material ethos.
Well I like the idea of of prebuilt functionality that matches material, but with enough escape hatches to make something more custom.
So maybe add that filterable prop, but allow the user to pass in a custom TextField component that gets rendered instead of the prebuilt one, for example via a renderTextField prop (sketched below). This would also allow the user to use other packages like https://github.com/benhurott/react-native-masked-text while still taking advantage of the Select element.
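A hypothetical usage sketch of those two props (neither filterable nor renderTextField is shipped API; every prop name here is illustrative):
<Select
  filterable
  renderTextField={textFieldProps => <TextField {...textFieldProps} />}
/>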
Yeah that sounds great
What about downshift?
Might not need too much refactoring to be cross platform.
I'm not sure tho, haven't looked into it.
https://github.com/downshift-js/downshift
| gharchive/issue | 2019-05-28T23:06:34 | 2025-04-01T06:38:14.794545 | {
"authors": [
"Emuentes",
"GeorgeWL",
"codypearce"
],
"repo": "codypearce/material-bread",
"url": "https://github.com/codypearce/material-bread/issues/186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
256766483 | Shaylen Naidoo: Conditional Send Should Support Logical Operations
Email
shaylen.naidoo@edmonton.ca
Description
Allow mail merges to be sent based on several logical conditions, using operations like AND, OR and NOT to join multiple conditions.
This is now possible in Rich Text Mailman, because all of the fields are rendered using the Handlebars templating engine. (to, cc, bcc, subject, body, and condition)
For example, If you have two columns "My Condition 1" and "My Condition 2":
To only send emails when both are TRUE use {{#and [My Condition 1] [My Condition 2]}}
To send emails if either are TRUE use {{#or [My Condition 1] [My Condition 2]}}
More powerful logical operators exist, documentation will be forthcoming
| gharchive/issue | 2017-09-11T16:34:22 | 2025-04-01T06:38:14.799290 | {
"authors": [
"dchenier",
"j-rewerts"
],
"repo": "coe-google-apps-support/Mailman",
"url": "https://github.com/coe-google-apps-support/Mailman/issues/168",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
986196915 | Set cache-control for assets
As discussed in slack, after we remove max-age in #284, cloudflare never caches our asset files.
This is because
koa-send (used by koa-static-server we are currently using) sets max-age to 0 if we do not specify any max-age: https://github.com/koajs/send/blob/master/index.js#L60
Default cache behavior of cloudflare will respect max-age: 0 when it is given
In this PR we override Cache-Control using the setHeaders option, so that whenever koa-send serves a file that is not index.html, we attach a 1-year max-age in the response header.
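The override looks roughly like this (sketch; option shape per koa-send's setHeaders hook, path check simplified):
const serve = require('koa-static-server');

app.use(serve({
  rootDir: 'build',
  setHeaders(res, path) {
    if (!path.endsWith('index.html')) {
      // Long-lived cache for fingerprinted assets; index.html stays uncached.
      res.setHeader('Cache-Control', 'public, max-age=31536000');
    }
  },
}));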
(Screenshots: response headers for HTML and for non-HTML assets)
Pull Request Test Coverage Report for Build 1193026137
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 87.326%
Totals
Change from base Build 1192393041:
0.0%
Covered Lines:
968
Relevant Lines:
1094
💛 - Coveralls
| gharchive/pull-request | 2021-09-02T06:03:52 | 2025-04-01T06:38:14.807985 | {
"authors": [
"MrOrz",
"coveralls"
],
"repo": "cofacts/rumors-line-bot",
"url": "https://github.com/cofacts/rumors-line-bot/pull/286",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2217078408 | chore: release v0.2.4
:robot: I have created a release beep boop
0.2.4 (2024-04-01)
What's Changed
feat(ci): pre-build arm64 on linux binaries by @coffeebeats in https://github.com/coffeebeats/gdbuild/pull/80
fix(scripts): unblock downloads of new arm64 on linux target by @coffeebeats in https://github.com/coffeebeats/gdbuild/pull/82
fix(scripts): use correct compound condition syntax by @coffeebeats in https://github.com/coffeebeats/gdbuild/pull/83
feat(ci): add support for explicit --debug flag by @coffeebeats in https://github.com/coffeebeats/gdbuild/pull/84
Full Changelog: https://github.com/coffeebeats/gdbuild/compare/v0.2.3...v0.2.4
This PR was generated with Release Please. See documentation.
:robot: Release is at https://github.com/coffeebeats/gdbuild/releases/tag/v0.2.4 :sunflower:
| gharchive/pull-request | 2024-03-31T16:45:26 | 2025-04-01T06:38:14.813301 | {
"authors": [
"coffeebeats"
],
"repo": "coffeebeats/gdbuild",
"url": "https://github.com/coffeebeats/gdbuild/pull/81",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2178603351 | Migrate AB and CI dependencies off conda to pip-compile
Will close #1437
#1437
Moves dependencies to a new AB_<name>.requirements.txt file, which is Dependabot-friendly for automated upgrades, along with the associated dependabot.yml file configured to look in the AB_environments directory.
Successful A/B run here: https://github.com/coiled/benchmarks/actions/runs/8246636881
@crusaderky, it seemed to get unwieldy needing to copy a core set of requirements around to so many files when moving to pip. Therefore I've converted to pip-compile so we can include these in other environments by just adding -r requirements.in or similar, both in CI and A/B environments. Then just two conda env files: one for the base, to determine the Python version and things like openssl and openjdk that are easier with conda, and one for updating to git tip.
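For example, an A/B environment file can then pull in the shared core like this (file name and pin below are illustrative):
# AB_environments/AB_sample.requirements.in
-r ../requirements.in    # shared core dependencies
dask==2024.2.1           # experiment-specific pin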
Done in https://github.com/coiled/benchmarks/pull/1447/commits/d628ccee9253acb9f3bff8f332bf2643761a6c40, let me know what you think.
@milesgranger I've gone through all the documentation; please review.
If you're happy with it I think we can merge?
| gharchive/pull-request | 2024-03-11T09:04:40 | 2025-04-01T06:38:14.821137 | {
"authors": [
"crusaderky",
"milesgranger"
],
"repo": "coiled/benchmarks",
"url": "https://github.com/coiled/benchmarks/pull/1447",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1101948515 | C# option to toggle debug output
Writing to the console multiple times a frame is actually a pretty demanding task, and it clutters up other outputs I'm looking for while debugging. Seeing the output of every string that comes in, while cool, keeps me from seeing more important stuff, and while it's useful for knowing what kind of information you're receiving there should be a way to toggle it on or off.
Fixed in #126. You can basically debug on the handler side of the event if you want. Upgrade the NuGet package to 2.0.1.
| gharchive/issue | 2022-01-13T15:37:26 | 2025-04-01T06:38:14.991372 | {
"authors": [
"Svisstack",
"jamieyello"
],
"repo": "coinapi/coinapi-sdk",
"url": "https://github.com/coinapi/coinapi-sdk/issues/125",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
699904087 | improved error handling and test coverage
Fixes # .
Motivation
we've updated the Rosetta SDK to include named error types for better visibility on error handling. we're updating the CLI here to leverage that new functionality to improve coverage in block syncing and balance checking tests.
Solution
We are using the Err() functions that the SDK now exposes to check whether an error originated from the storage or syncer packages.
We also make sure to fail noisily by having check:data hard-exit if we encounter an error not handled by one of our tests.
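Conceptually, the dispatch looks like this (package aliases and the boolean return shape are assumed here, not quoted from the PR):
// storageErrs / syncerErrs stand in for the SDK's storage and syncer error packages.
if storageErrs.Err(err) {
    // storage-originated: covered by the balance-checking tests
} else if syncerErrs.Err(err) {
    // syncer-originated: covered by the block-syncing tests
} else {
    log.Fatalf("check:data encountered an unhandled error: %v", err)
}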
Open questions
Pull Request Test Coverage Report for Build 4571
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 84.737%
Totals
Change from base Build 4550:
0.0%
Covered Lines:
161
Relevant Lines:
190
💛 - Coveralls
| gharchive/pull-request | 2020-09-12T00:39:48 | 2025-04-01T06:38:14.997294 | {
"authors": [
"cindyxkuang",
"coveralls"
],
"repo": "coinbase/rosetta-cli",
"url": "https://github.com/coinbase/rosetta-cli/pull/130",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1050675380 | Trying to run an example but failed with DNS resolution failed for service error
I tried to run temporal locally and try with hello world workflow but get this error. Could anyone help me?
Temporal.start_workflow(HelloWorldWorkflow)
[DEPRECATION] This method is now deprecated without a substitution
GRPC::Unavailable: 14:DNS resolution failed for service: :. debug_error_string:{"created":"@1636617011.591632000","description":"Resolver transient failure","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":1356,"referenced_errors":[{"created":"@1636617011.591631000","description":"DNS resolution failed for service: :","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/dns_resolver_ares.cc","file_line":360,"grpc_status":14,"referenced_errors":[{"created":"@1636617011.591625000","description":"unparseable host:port","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":843,"target_address":":"}]}]}
from /Users/duy-chk/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/grpc-1.41.1-universal-darwin/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
I am able to run the Temporal server and web service, and I have already run the configuration block.
Temporal.configure do |config|
config.host = 'localhost'
config.port = 7233
config.namespace = 'ruby-samples'
config.task_queue = 'hello-world'
end
Since you've closed the issue I'm assuming this has been resolved, please let me know if that's not the case and you need help
Hi @antstorm, thanks for your reply. It was resolved by restarting irb.
| gharchive/issue | 2021-11-11T07:54:45 | 2025-04-01T06:38:15.015716 | {
"authors": [
"antstorm",
"duy-chk"
],
"repo": "coinbase/temporal-ruby",
"url": "https://github.com/coinbase/temporal-ruby/issues/116",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1824904502 | 🛑 Coincord Mobile API is down
In ab65f2b, Coincord Mobile API ($SECRET_MOBILE) was down:
HTTP code: 502
Response time: 448 ms
Resolved: Coincord Mobile API is back up in 0cc950b.
| gharchive/issue | 2023-07-27T18:00:28 | 2025-04-01T06:38:15.018029 | {
"authors": [
"cuddimatic"
],
"repo": "coincord/coincord-status",
"url": "https://github.com/coincord/coincord-status/issues/94",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
624780705 | RawConfig FromJSON
Sometimes when we are operating with multiple LND nodes in real apps, it's convenient to read LND environment from some JSON environment variable
refactor RawConfig field types (proper types instead of blind ByteString and Int)
implement the FromJSON type class for the RawConfig type and its fields
use smart constructors in the FromJSON instances of fields where possible
if a value is just a newtype without a smart constructor, use GeneralizedNewtypeDeriving (see the sketch below)
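A minimal sketch of those two deriving strategies (type, field, and constructor names are assumed, not the library's actual ones):
-- Requires DerivingStrategies + GeneralizedNewtypeDeriving for the first case.
newtype LndHost = LndHost Text
  deriving newtype (Eq, Show, FromJSON)  -- plain newtype: GND is enough

newtype LndPort = LndPort Int deriving (Eq, Show)

instance FromJSON LndPort where
  parseJSON v = do
    p <- parseJSON v
    maybe (fail "invalid port") pure (mkLndPort p)  -- mkLndPort: hypothetical smart constructor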
Let's also rename the type to RawLndEnv and export it (just the type, not the constructors, because it's supposed to be read from JSON or environment variables, not constructed directly)
done 🚀
| gharchive/issue | 2020-05-26T10:34:37 | 2025-04-01T06:38:15.020374 | {
"authors": [
"tim2CF"
],
"repo": "coingaming/lnd-client",
"url": "https://github.com/coingaming/lnd-client/issues/29",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
689287750 | Add Exercise 03 Extra Credit - Env Vars
This exercise listed a TODO so I took a stab at adding it based on the lesson you recorded.
thanks Zac!!
| gharchive/pull-request | 2020-08-31T15:18:05 | 2025-04-01T06:38:15.021780 | {
"authors": [
"colbyfayock",
"zacjones93"
],
"repo": "colbyfayock/launchtime-workshop",
"url": "https://github.com/colbyfayock/launchtime-workshop/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
511629436 | More Pythonic EventReactor
Make EventReactor a context manager
Use Thread via composition instead of inheritance
Codecov Report
Merging #258 into master will increase coverage by 0.03%.
The diff coverage is 88.88%.
@@ Coverage Diff @@
## master #258 +/- ##
==========================================
+ Coverage 80.04% 80.07% +0.03%
==========================================
Files 54 54
Lines 3122 3127 +5
Branches 518 518
==========================================
+ Hits 2499 2504 +5
Misses 584 584
Partials 39 39
Impacted Files
Coverage Δ
colcon_core/event_reactor.py
100% <100%> (ø)
:arrow_up:
colcon_core/executor/__init__.py
95.23% <81.81%> (-0.07%)
:arrow_down:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c5983df...819b170. Read the comment docs.
Thanks for the contribution.
You’re always welcome!
| gharchive/pull-request | 2019-10-23T23:53:38 | 2025-04-01T06:38:15.029750 | {
"authors": [
"codecov-io",
"dirk-thomas",
"rotu"
],
"repo": "colcon/colcon-core",
"url": "https://github.com/colcon/colcon-core/pull/258",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2471929185 | z.ref
There is a special usage in JSON Schema:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"root": {
"type": "object",
"properties": {
"type": {
"type": "string"
},
"arguments": {
"type": "array"
},
"children": {
"type": "array",
"items": {
"$ref": "#/properties/root"
}
},
}
}
},
"required": [
"root"
]
}
$ref means use a defined structure in the tree.
Does Zod have a similar function or not? If not, is there any possibility of adding it?
I tried to use a self-referencing structure in Zod, but the ts-server says the types are used circularly
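For reference, Zod's documented approach to recursive structures is z.lazy plus an explicit type annotation; a sketch for the schema above:
import { z } from "zod";

type Root = {
  type: string;
  arguments: unknown[];
  children: Root[];
};

const rootSchema: z.ZodType<Root> = z.object({
  type: z.string(),
  arguments: z.array(z.unknown()),
  children: z.array(z.lazy(() => rootSchema)), // self-reference, like $ref
});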
| gharchive/issue | 2024-08-18T11:47:29 | 2025-04-01T06:38:15.054248 | {
"authors": [
"sheepbox8646"
],
"repo": "colinhacks/zod",
"url": "https://github.com/colinhacks/zod/issues/3715",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
385372221 | error to play recorded data on firefox
Description
I get an error on Firefox when I finish recording data and go to playback in the same video tag.
On Chrome it works.
I don't understand the error, could you help me?
Steps to reproduce
const timeout = 30
var options = {
controls: false,
width: 554,
height: 240,
fluid: false,
controlBar: {
fullscreenToggle: false,
volumePanel: false
},
plugins: {
record: {
videoMimeType: 'video/webm',
audio: true,
video: {
mandatory: {
minWidth: 320,
minHeight: 240,
},
},
maxLength: timeout,
debug: true
}
}
};
var player = videojs('record_retc', options, function() {
// print version information at startup
var msg = 'Using video.js ' + videojs.VERSION +
' with videojs-record ' + videojs.getPluginVersion('record') +
' and recordrtc ' + RecordRTC.version;
videojs.log(msg);
});
const sleep = (milliseconds) => {
return new Promise(resolve => setTimeout(resolve, milliseconds))
}
document.querySelector('.start_record').onclick = function(event){
player.record().getDevice();
sleep(1 * 1000).then(() => {
player.record().start()
})
}
player.on('stopRecord', function(event){
console.log('stopped recording!');
player.record().stopDevice()
})
player.on('finishRecord', function() {
console.log('finished recording:', player.recordedData);
console.log(player.record().getDuration())
player.on('loadeddata' , function() {
console.log('starting playback');
player.play();
});
var blobUrl = window.URL.createObjectURL(player.recordedData.video);
player.src({type: player.recordedData.video.type, src: blobUrl});
});
console log
Using recorderType: MediaStreamRecorder RecordRTC.js:1039:9
Passing following config over MediaRecorder API.
Object { type: "video", video: {…}, canvas: {…}, frameInterval: 10, disableLogs: false, recorderType: null, mimeType: "video/webm", timeSlice: undefined, onTimeStamp: undefined, initCallback: initCallback()
, … }
RecordRTC.js:2043:13
Recorder state changed: recording RecordRTC.js:699:17
Initialized recorderType: MediaStreamRecorder for output-type: video RecordRTC.js:99:13
started recording! test:432:5
stopped recording! test:436:3
Stopped recording video stream. RecordRTC.js:125:13
Recorder state changed: stopped RecordRTC.js:699:17
video/webm -> 3.90 MB RecordRTC.js:166:17
finished recording:
Blob { lastModified: 1543424288691, lastModifiedDate: Date 2018-11-28T16:58:08.691Z, name: "1543424288691.webm", size: 3904337, type: "video/webm" }
Error output
VIDEOJS: ERROR: TypeError: "right-hand side of 'in' should be an object, got undefined"
createObjectURL https://webrtc.github.io/adapter/adapter-latest.js:4014:7
http://127.0.0.1:8000/pt-br/jobconvo/NTc1Ng-analista-trade-marketing/23778629-2214-410c-889f-cd12bb2d8c96/test/:456:19
bound https://vjs.zencdn.net/7.3.0/video.js:2168:14
dispatcher https://vjs.zencdn.net/7.3.0/video.js:1818:17
trigger https://vjs.zencdn.net/7.3.0/video.js:1954:7
1 https://vjs.zencdn.net/7.3.0/video.js:2832:14
value https://cdnjs.cloudflare.com/ajax/libs/videojs-record/2.4.1/videojs.record.min.js:8:17415
bound https://vjs.zencdn.net/7.3.0/video.js:2168:14
dispatcher https://vjs.zencdn.net/7.3.0/video.js:1818:17
trigger https://vjs.zencdn.net/7.3.0/video.js:1954:7
1 https://vjs.zencdn.net/7.3.0/video.js:2832:14
value https://cdnjs.cloudflare.com/ajax/libs/videojs-record/2.4.1/videojs.record.min.js:8:41484
getBlob https://webrtcexperiment-webrtc.netdna-ssl.com/RecordRTC.js:1348:13
value https://cdnjs.cloudflare.com/ajax/libs/videojs-record/2.4.1/videojs.record.min.js:8:41104
stopRecording https://webrtcexperiment-webrtc.netdna-ssl.com/RecordRTC.js:1264:17
_callback https://webrtcexperiment-webrtc.netdna-ssl.com/RecordRTC.js:173:21
ondataavailable https://webrtcexperiment-webrtc.netdna-ssl.com/RecordRTC.js:2123:17
video.js:142:49
You do not need to load the data manually; it loads automatically after you stop recording. See the examples in the repository.
Sorry, but loading automatically after I stop recording doesn't work. Is there an example that does that?
Do I need to configure something?
PS: I'm not using the controls; I stop and play manually.
I see. What happens when you use this in the finishRecord handler (player.src is not supported afaik):
player.record().load(blobUrl);
@olivx seems you're also running into the issue where you have to use player.recordedData.video on Chrome and player.recordedData on Firefox. This will be fixed in v3.0.0 (see #270).
It worked this way:
if (navigator.userAgent.toLowerCase().indexOf("chrome") != -1){
console.log('google chrome')
blobUrl = window.URL.createObjectURL(player.recordedData.video);
player.src({type: player.recordedData.video.type, src: blobUrl});
}
if (navigator.userAgent.toLowerCase().indexOf("firefox") != -1){
console.log('mozila firefox')
blobUrl = window.URL.createObjectURL(player.recordedData);
player.src({type: player.recordedData.type, src: blobUrl});
}
Thanks a lot for your help!
No worries! In 3.0.0 it should be simply blobUrl = window.URL.createObjectURL(player.recordedData);.
| gharchive/issue | 2018-11-28T17:13:48 | 2025-04-01T06:38:15.068508 | {
"authors": [
"olivx",
"thijstriemstra"
],
"repo": "collab-project/videojs-record",
"url": "https://github.com/collab-project/videojs-record/issues/308",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2038210524 | Say where sources came from
Split from #15
Say where sources came from. Defer to CancerInFocus's sources page rather than duplicating data.
We're going to have a dedicated Sources page, accessible from the header of the site, and populated with static text and citation data about each source, including timespans (per #21 ), linking to external info pages when possible/appropriate.
A shortened label or id or some citation for the source will be hardcoded into the backend for now and displayed on the frontend in the legend.
From Sean:
Perhaps start as a google doc to keep list of where things come from and provide user-level and technical documentation.
From the meeting:
@vincerubinetti : would be good to have a meeting where we interactively put this together.
Merging in #21
Also specify timespans of data.
I believe the CancerInFocus data is from the last 2-5 years or something like that? Might be best to defer external pages for that info? Maybe linking to the appropriate documentation that says the timespans is enough...?
Hi all -
Attached is a document with our data sources page. There is one highlighted part that we need your team's eyes for. If the highlighted part remains true then we can keep it in and can publish to the platform. If it is not true, we can delete or edit and publish.
Thanks!
DATA SOURCES_Final.docx
| gharchive/issue | 2023-12-12T17:08:17 | 2025-04-01T06:38:15.131977 | {
"authors": [
"cydneyj303",
"falquaddoomi",
"vincerubinetti"
],
"repo": "colorado-cancer-center/COCancerScope",
"url": "https://github.com/colorado-cancer-center/COCancerScope/issues/20",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1263734696 | Newlines stripped when reading STDIN
Description
When the file is provided as an argument, the newlines are handled correctly. But when fsrx reads from standard input the newlines are stripped away.
To Reproduce
$ fsrx .bashrc # as expected
$ cat .bashrc | fsrx # a mess
Expected behavior
The newline handling should be consistent between those two methods.
Screenshots
Additional context
Fixing this would make it possible to use fmt to reflow incoming text to a manageable width, like so:
$ fmt -w 60 long_lines.txt | fsrx | less -R
Right now the newlines created by fmt are deleted.
Just noticed that (some?) special characters at the end of lines are stripped as well - notice the disappearing ! in the screenshot above.
Here is another screenshot with a line from running fsrx ~/.bashrc, notice the unclosed parenthesis and double quotes:
Is this by design?
@piotr-machura hmm I know this was working at some point, I must have added a regression. Looking now!
all fixed! releasing now
| gharchive/issue | 2022-06-07T18:58:08 | 2025-04-01T06:38:15.136602 | {
"authors": [
"coloradocolby",
"piotr-machura"
],
"repo": "coloradocolby/fsrx",
"url": "https://github.com/coloradocolby/fsrx/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |