id (string) | text (string) | source (string) | created (timestamp[s]) | added (timestamp[s]) | metadata (dict)
---|---|---|---|---|---
180286466
|
Round counter to ScoreScreen
Since we can get the number of rounds by looking at match.commits, we can display where we are!
This was completed a long time ago. :)
|
gharchive/issue
| 2016-09-30T11:56:46 |
2025-04-01T04:34:04.184944
|
{
"authors": [
"thiderman"
],
"repo": "drunkenfall/drunkenfall",
"url": "https://github.com/drunkenfall/drunkenfall/issues/49",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
685809285
|
Does not work with dry-system 0.18.0
App fails with:
NoMethodError:
private method `boot_files' called for #<Dry::System::Booter:0x0000000004e35068>
Did you mean? booted
I was literally about to post the same bug. Thanks @jandudulski!
I'll take a look at this shortly, thanks for the reports.
Thanks for the quick report of this issue, folks!
Here's how I've addressed it:
Released 0.2.1 of dry-rails which depends on dry-system 0.17.x specifically (i.e. it won't permit an upgrade to dry-system 0.18.0) - see changes
Released 0.18.1 of dry-system, which makes the Booter#boot_files method public again (the source of your particular issue here) - see changes
Released 0.3.0 of dry-rails, which depends on dry-system 0.18.1 or higher, including another tweak to the container setup to properly configure dry-system 0.18.0's new bootable_dirs setting - see changes
So your options here are to:
Upgrade to 0.2.1 of dry-rails to keep things working in the short term
Upgrade to 0.3.0 to permit future dry-system upgrades
Both of these should work just fine, I think :)
Can you please have a go and let me know how it works out?
Sorry again for the unexpected breakages!
/cc @solnic
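The two dependency-constraint styles described above — pinning to dry-system 0.17.x versus requiring 0.18.1 or higher — can be sketched with plain tuple comparison in Python (illustrative only; this is not Bundler's resolver, and the exact gemspec constraints are assumptions based on the comment above):

```python
# Illustrative comparison of the two constraint styles (hypothetical, not
# Bundler's actual resolution logic).
def parse(v):
    return tuple(int(p) for p in v.split("."))

def allowed_by_0_2_1(v):
    # dry-rails 0.2.1: assumed to pin dry-system to 0.17.x only
    return (0, 17) <= parse(v)[:2] < (0, 18)

def allowed_by_0_3_0(v):
    # dry-rails 0.3.0: assumed to require dry-system >= 0.18.1
    return parse(v) >= (0, 18, 1)

print(allowed_by_0_2_1("0.18.0"), allowed_by_0_3_0("0.18.1"))  # → False True
```

So 0.2.1 keeps the broken 0.18.0 out entirely, while 0.3.0 moves forward onto the fixed releases.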
Works, thanks :heart:
|
gharchive/issue
| 2020-08-25T20:51:31 |
2025-04-01T04:34:04.232761
|
{
"authors": [
"danderozier",
"jandudulski",
"timriley"
],
"repo": "dry-rb/dry-rails",
"url": "https://github.com/dry-rb/dry-rails/issues/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
353999591
|
Broken in react-native@0.57.0-rc.3
I'm aware that I'm a bit early here, but I just wanted to let you know, or ask, if the transformer might be broken in the newest version of react-native?
SyntaxError: /testproject/src/components/Test.tsx: Unexpected token, expected ")" (37:30)
35 |
36 | interface SectionProps {
> 37 | content: (({ i, isActive }: { i: number, isActive: boolean }) => React.ReactElement<any>) | string;
| ^
38 | label: string;
39 | info: string | number;
40 | direction?: 'row' | 'column';
at _class.raise (/testproject/node_modules/@babel/parser/lib/index.js:3906:15)
at _class.unexpected (/testproject/node_modules/@babel/parser/lib/index.js:5235:16)
at _class.expect (/testproject/node_modules/@babel/parser/lib/index.js:5223:28)
at _class.tsParseParenthesizedType (/testproject/node_modules/@babel/parser/lib/index.js:8943:12)
at _class.tsParseNonArrayType (/testproject/node_modules/@babel/parser/lib/index.js:9043:23)
at _class.tsParseArrayTypeOrHigher (/testproject/node_modules/@babel/parser/lib/index.js:9050:23)
at _class.tsParseTypeOperatorOrHigher (/testproject/node_modules/@babel/parser/lib/index.js:9094:122)
at _class.tsParseUnionOrIntersectionType (/testproject/node_modules/@babel/parser/lib/index.js:9099:18)
at _class.tsParseIntersectionTypeOrHigher (/testproject/node_modules/@babel/parser/lib/index.js:9117:19)
at _class.tsParseUnionOrIntersectionType (/testproject/node_modules/@babel/parser/lib/index.js:9099:18)
I'm getting this error, which leads me to think that babel is trying to transform the code before the typescript transformer jumps in.
I'm also interested in knowing about the compatibility of the soon to be released 0.57 version.
At least v0.57 has changes to the Metro bundler config. I'm not really sure what changes are needed though.
Since version 0.43.3, Metro has had native support for TypeScript via the Babel 7.0.0 plugin.
See Add TypeScript support to React Native and #209
The react-native 0.57-rc3 comes with Metro 0.43.6
I don't know how it can affect this package.
You're right, TypeScript is now supported out of the box in react-native. I had to change my code to
interface SectionProps {
content: ((props: { i: number, isActive: boolean }) => React.ReactElement<any>) | string;
label: string;
info: string | number;
direction?: 'row' | 'column';
}
to make the babel typescript compiler accept it. I'm going to leave this issue open since metro is not even trying to make use of this transformer anymore but feel free to close it if you think it's not worth looking into now that typescript is supported by babel.
Could you add some notice to the README that it's not needed anymore in 0.57?
(Which is awesome for react-native, but sad for this transformer :-( )
Yup, Typescript is supported by default in 0.57. No transformers needed.
@kristerkari Could you point me to how to set up TypeScript with React Native 0.57? I don't quite understand "No transformers needed".
@xufeipyxis React Native should be able to pick up .ts and .tsx files automatically, and transpile them using Babel.
@xufeipyxis I updated one of my project that was using this library to React Native 0.57. I removed transformer and the project continued working normally. So I guess that you only need to install Typescript.
Here are my changes when updating to 0.57:
https://github.com/kristerkari/react-native-css-modules-with-typescript-example/commit/d4ec640b374200011f9a3f5f3f462f2b0b58ac53#diff-c2e7e130f67fd2d0a2fe543cc954a128
For people who still want to use this module, the solution is simple. In the file rn-cli.config.js, remove the keys getTransformModulePath and getSourceExts, and add the section:
transformer: {
babelTransformerPath: require.resolve('react-native-typescript-transformer'),
},
For documentation, see https://facebook.github.io/metro/docs/en/configuration
thanks @vovkasm I think this can be closed
Man, I'm new to react-native. I just started trying to figure out why react native was being so strange with respect to typescript and this finally explains everything. The existing guides out there are all wrong right now.
@vovkasm thanks! I prefer the support for const enum than the marginal gain in speed.
|
gharchive/issue
| 2018-08-25T09:41:18 |
2025-04-01T04:34:04.241867
|
{
"authors": [
"JonET",
"aMarCruz",
"acuntex",
"dalcib",
"kristerkari",
"nico1510",
"sunnylqm",
"vovkasm",
"xufeipyxis"
],
"repo": "ds300/react-native-typescript-transformer",
"url": "https://github.com/ds300/react-native-typescript-transformer/issues/77",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1563968175
|
SPInstallPrereqs: sqlncli.msi is no longer required after November 2021 CU
Problem description
According to the information in the November 2021 CU for SharePoint 2016 and 2019, it is no longer recommended to have Microsoft SQL Server 2012 Native Client (sqlncli.msi) installed on the SharePoint servers, as it's no longer used. Is it possible to remove this from the prerequisite installation?
I've not tested if the prerequisite installer still enforces installation of sqlncli.msi
SP2019 November CU
SP2016 November CU
Verbose logs
Not relevant
DSC configuration
SPInstallPrereqs PrerequisitesInstallation
{
IsSingleInstance = 'Yes'
InstallerPath = $ConfigurationData.NonNodeData.SPPrereqsInstallerPath;
OnlineMode = $false;
Ensure = "Present";
PSDscRunAsCredential = $SPSetupCredential;
SQLNCli = "$($ConfigurationData.NonNodeData.SPInstallationBinaryPath)\prerequisiteinstallerfiles\sqlncli.msi"
Sync = "$($ConfigurationData.NonNodeData.SPInstallationBinaryPath)\prerequisiteinstallerfiles\Synchronization.msi"
AppFabric = "$($ConfigurationData.NonNodeData.SPInstallationBinaryPath)\prerequisiteinstallerfiles\WindowsServerAppFabricSetup_x64.exe"
IDFX11 = "$($ConfigurationData.NonNodeData.SPInstallationBinaryPath)\prerequisiteinstallerfiles\MicrosoftIdentityExtensions-64.msi"
MSIPCClient = "$($ConfigurationData.NonNodeData.SPInstallationBinaryPath)\prerequisiteinstallerfiles\setup_msipc_x64.exe"
WCFDataServices56 = "$($ConfigurationData.NonNodeData.SPInstallationBinaryPath)\prerequisiteinstallerfiles\WcfDataServices.exe"
MSVCRT11 = "$($ConfigurationData.NonNodeData.SPInstallationBinaryPath)\prerequisiteinstallerfiles\vcredist_x64.exe"
MSVCRT141 = "$($ConfigurationData.NonNodeData.SPInstallationBinaryPath)\prerequisiteinstallerfiles\vc_redist.x64.exe"
KB3092423 = "$($ConfigurationData.NonNodeData.SPInstallationBinaryPath)\prerequisiteinstallerfiles\AppFabric-KB3092423-x64-ENU.exe"
DotNet472 = "$($ConfigurationData.NonNodeData.SPInstallationBinaryPath)\prerequisiteinstallerfiles\NDP472-KB4054530-x86-x64-AllOS-ENU.exe"
DependsOn = "[WindowsFeatureSet]NET-Framework"
}
Suggested solution
Remove installation of sqlncli.msi when installed CU version of SharePoint is November 2021 or newer.
SharePoint version and build
SP 2016 and SP2019 November 2021 CU and newer
Operating system the target node is running
not relevant
PowerShell version and build the target node is running
not relevant
SharePointDsc version
4.8.0
I have had several discussions around changes in CUs and install sources with the PG. In short:
A CU does not update the install source. So the prerequisites installer is still the original code and therefore requires the SQL Native Client to be installed.
This means that once you have the November 22 CU installed, you can uninstall the SQL Native Client without issues. We can add code to a resource (e.g. ProductInstall) to uninstall the client when the November 22 CU is installed, but not sure if that really adds benefits. There is no harm in leaving it installed.
Our compliance tool nags about the "SQL Native Client" as dangerous because it is out of support, and now we get more red KPIs.
I don't like that tool, but the benefit would be more green checkboxes on a report 🎉.
|
gharchive/issue
| 2023-01-31T09:38:59 |
2025-04-01T04:34:04.250585
|
{
"authors": [
"ChristophHannappel",
"jensotto",
"ykuijs"
],
"repo": "dsccommunity/SharePointDsc",
"url": "https://github.com/dsccommunity/SharePointDsc/issues/1419",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
664609403
|
xGroup: Inconsistent behavior in Azure VM deployments (dscextension)
Details of the scenario you tried and the problem that is occurring
Deploying Azure resource (dscextension) on existing VM (is domain joined). Fairly large DSC configuration (using PSDesiredStateConfiguration, xPSDesiredStateConfiguration, CertificateDsc, and ComputerManagementDsc). The first section of the local node configuration is xGroup for modifying the local administrators group members.
VM is joined to domain.local
Resource fails intermittently between different VMs. All VMs are joined to the same domain. The last issue was a 2-VM deployment, where 1 of 2 VMs failed with the xGroup resource. Checked the VM and the group was added regardless of the failure.
Verbose logs showing the problem
{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"Conflict","message":"{\r\n \"status\": \"Failed\",\r\n \"error\": {\r\n \"code\": \"ResourceDeploymentFailure\",\r\n \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n \"details\": [\r\n {\r\n \"code\": \"VMExtensionProvisioningError\",\r\n \"message\": \"VM has reported a failure when processing extension 'dscextension'. Error message: \\\"DSC Configuration 'Win10Config' completed with error(s). Following are the first few: PowerShell DSC resource DSC_xGroupResource failed to execute Test-TargetResource functionality with error message: Unable to resolve account 'domain.local\\\\MyGroup'. Failed with message: Exception calling \\\"FindByIdentity\\\" with \\\"2\\\" argument(s): \\\"The user name or password is incorrect.\\r\\n\\\" (error code=-2146233087)\\r\\nParameter name: domain.local\\\\MyGroup \\\"\\r\\n\\r\\nMore information on troubleshooting is available at https://aka.ms/VMExtensionDSCWindowsTroubleshoot \"\r\n }\r\n ]\r\n }\r\n}"}]}
Suggested solution to the issue
The DSC configuration that is used to reproduce the issue (as detailed as possible)
xGroup 'Local Administrators Group Members' {
GroupName = "Administrators"
Ensure = "Present"
MembersToInclude = "domain.local\MyGroup"
}
The operating system the target node is running
OsName : Microsoft Windows 10 Enterprise for Virtual Desktops
OsOperatingSystemSKU : 175
OsArchitecture : 64-bit
WindowsVersion : 1909
WindowsBuildLabEx : 18362.1.amd64fre.19h1_release.190318-1202
OsLanguage : en-US
OsMuiLanguages : {en-US}
Version and build of PowerShell the target node is running
Name Value
PSVersion 5.1.18362.752
PSEdition Desktop
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
BuildVersion 10.0.18362.752
CLRVersion 4.0.30319.42000
WSManStackVersion 3.0
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
Version of the DSC module that was used
Latest (07/2020)
Hi @msft-jasonparker - it sounds like this could be an intermittent connectivity issue with the DC. Is the DC a VM in Azure or Azure ADDS or an on-prem DC?
You could try adding a WaitForADDomain before the call to the xGroup resource:
https://github.com/dsccommunity/ActiveDirectoryDsc/wiki/WaitForADDomain
This would first wait for a DC to be contactable before moving on to the xGroup resource.
|
gharchive/issue
| 2020-07-23T16:22:48 |
2025-04-01T04:34:04.257169
|
{
"authors": [
"PlagueHO",
"msft-jasonparker"
],
"repo": "dsccommunity/xPSDesiredStateConfiguration",
"url": "https://github.com/dsccommunity/xPSDesiredStateConfiguration/issues/695",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
126169397
|
Add p{read|write}_lwt functions
Builds on #4 to add pread_lwt and pwrite_lwt functions.
Cool! What about the socket/non-blocking case? I think then we don't have to use jobs/threads and we can be a little faster?
Good point. Implemented in 43405c473c08.
|
gharchive/pull-request
| 2016-01-12T12:30:45 |
2025-04-01T04:34:04.266195
|
{
"authors": [
"dsheets",
"yallop"
],
"repo": "dsheets/ocaml-unix-unistd",
"url": "https://github.com/dsheets/ocaml-unix-unistd/pull/5",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1915292170
|
Job becomes unscheduled if you access the admin list page when the job is active
Describe the bug
In this method - https://github.com/dsoftwareinc/django-tasks-scheduler/blob/master/scheduler/models/scheduled_job.py#L83
It checks for jobs in scheduled or queued. If it doesn't find the job, it sets the job_id to None, which means that after it finishes its current execution it doesn't reschedule.
To Reproduce
Steps to reproduce the behavior:
Go to the admin page admin/scheduler/{repeatablejob/cronjob/scheduledjob}/ and see the green tick against the job showing it's scheduled.
Take a look at the queue admin/scheduler/queue/ and wait for the job to move to active (best to test with a job that takes a few seconds to run).
Go to the admin page admin/scheduler/{repeatablejob/cronjob/scheduledjob}/ (or refresh the page)
See the red tick against the job showing it's no longer scheduled.
Expected behavior
The job stays scheduled.
Implementation idea
Idea 1:
Add an additional check for the job id in self.rqueue.started_job_registry.get_job_ids()
Idea 2:
Remove the
self.job_id = None
self.save()
As I don't see why you would save the database record in this method
PR against this https://github.com/dsoftwareinc/django-tasks-scheduler/pull/33
I agree saving the job in this method isn't the best approach, though I am not sure about your solution...
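Idea 1 above — also counting a job as scheduled while it is actively running — can be sketched in plain Python (the registry names here are illustrative stand-ins, not the real rq/django-tasks-scheduler API):

```python
# Hypothetical sketch of Idea 1: a job should still count as scheduled while it
# is in the started registry, so job_id is never reset to None mid-execution.
# The three *_ids arguments are illustrative, not the actual library API.
def is_scheduled(job_id, scheduled_ids, queued_ids, started_ids):
    """Return True if the job is scheduled, queued, or currently running."""
    return job_id in scheduled_ids or job_id in queued_ids or job_id in started_ids

# A job that has moved from "queued" to "started" is still considered scheduled.
print(is_scheduled("job-1", set(), set(), {"job-1"}))  # → True
```

With only the scheduled and queued registries consulted (the current behavior), the same call would return False, which is exactly the window in which the admin page unschedules the job.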
|
gharchive/issue
| 2023-09-27T11:06:42 |
2025-04-01T04:34:04.276468
|
{
"authors": [
"cunla",
"rstalbow"
],
"repo": "dsoftwareinc/django-tasks-scheduler",
"url": "https://github.com/dsoftwareinc/django-tasks-scheduler/issues/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
528454933
|
Allow passing Channel.Event to off
This allows developers to call
channel.off(Channel.Event.REPLY, ref)
instead of
channel.off(Channel.Event.REPLY.value, ref)
// or
channel.off("phx_reply", ref)
Codecov Report
Merging #75 into master will decrease coverage by 0.34%.
The diff coverage is 0%.
@@ Coverage Diff @@
## master #75 +/- ##
============================================
- Coverage 91.38% 91.04% -0.35%
Complexity 178 178
============================================
Files 9 9
Lines 534 536 +2
Branches 71 71
============================================
Hits 488 488
- Misses 12 14 +2
Partials 34 34
Impacted Files | Coverage Δ | Complexity Δ
---|---|---
src/main/kotlin/org/phoenixframework/Channel.kt | 93.82% <0%> (-1.18%) | 53 <0> (ø)
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f879632...7f8a664. Read the comment docs.
|
gharchive/pull-request
| 2019-11-26T02:54:52 |
2025-04-01T04:34:04.284284
|
{
"authors": [
"codecov-io",
"dsrees"
],
"repo": "dsrees/JavaPhoenixClient",
"url": "https://github.com/dsrees/JavaPhoenixClient/pull/75",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
843407543
|
M/N track logic?
Is M/N track logic already available in Stone Soup, or is it left to the user to implement their own version?
The MultiMeasurementInitiator should be what you're after, and it can also have different components from the main tracker.
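For readers unfamiliar with the term, M/N logic confirms a tentative track once it has been associated with detections in at least M of the last N scans. A minimal standalone sketch of that rule (illustrative only — this is not Stone Soup's MultiMeasurementInitiator implementation):

```python
from collections import deque

# Minimal M-of-N track-confirmation sketch (hypothetical, not Stone Soup code):
# confirm once at least m of the last n scans produced an associated detection.
class MOfNLogic:
    def __init__(self, m, n):
        self.m = m
        self.history = deque(maxlen=n)  # sliding window over the last n scans

    def update(self, hit):
        """Record whether this scan had a detection; return True once confirmed."""
        self.history.append(bool(hit))
        return sum(self.history) >= self.m

# Confirm after 3 hits within a sliding window of 5 scans.
logic = MOfNLogic(m=3, n=5)
print([logic.update(h) for h in [True, False, True, True]])
# → [False, False, False, True]  (confirmed on the 4th scan)
```

The deque with `maxlen=n` is what makes the window slide: old scans fall out automatically, so a long gap in detections can also un-confirm a track under this rule.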
|
gharchive/issue
| 2021-03-29T14:07:56 |
2025-04-01T04:34:04.294517
|
{
"authors": [
"Huang-Chuan",
"sdhiscocks"
],
"repo": "dstl/Stone-Soup",
"url": "https://github.com/dstl/Stone-Soup/issues/428",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1787927169
|
No 44.1khz sampling rate or S32 format on librespot -d ?
Compatible Device
[X] I'm running Raspotify on a compatible Device
Are you sure?
[X] I'm not trying to run Raspotify on a ARMv6 device
Compatible OS
[X] I'm running Raspotify on a compatible OS
Compatible Configuration
[X] I'm running Raspotify on a compatible Configuration
Latest Version
[X] I'm running the latest version of Raspotify
Are you sure?
[X] I'm not running Raspotify 0.31.8.1 on a Pi v1 or Pi Zero
Due Diligence
[X] I have done my due diligence
What happened?
Can't see 44.1khz, S32 format and hw:CARD instead of dmix:CARD supported on
`librespot -d ?
[2023-07-04T12:40:51Z INFO librespot] librespot 0.4.2 a6e1258 (Built on 2023-06-21, Build ID: g9ifMDjd, Profile: release)
Compatible alsa device(s):
--------------------------------------------------------------------
Device:
dmix:CARD=sndrpihifiberry,DEV=0
Description:
snd_rpi_hifiberry_digi, HiFiBerry Digi+ Pro HiFi wm8804-spdif-0
Direct sample mixing device
Supported Format & Sample Rate Combinations:
Format: **S16** Sample Rate(s): **48kHz**
--------------------------------------------------------------------`
On /etc/raspotify/conf, i've set:
LIBRESPOT_FORMAT="S32"
LIBRESPOT_BACKEND="alsa"
LIBRESPOT_DEVICE="hw:CARD=sndrpihifiberry"
LIBRESPOT_INITIAL_VOLUME="100"
LIBRESPOT_NORMALISATION_PREGAIN="-10.0"
aplay -L is:
root@DietPi:/etc/raspotify# aplay -L
null
Discard all samples (playback) or generate zero samples (capture)
hw:CARD=sndrpihifiberry,DEV=0
snd_rpi_hifiberry_digi, HiFiBerry Digi+ Pro HiFi wm8804-spdif-0
Direct hardware device without any conversions
plughw:CARD=sndrpihifiberry,DEV=0
snd_rpi_hifiberry_digi, HiFiBerry Digi+ Pro HiFi wm8804-spdif-0
Hardware device with all software conversions
sysdefault:CARD=sndrpihifiberry
snd_rpi_hifiberry_digi, HiFiBerry Digi+ Pro HiFi wm8804-spdif-0
Default Audio Device
dmix:CARD=sndrpihifiberry,DEV=0
snd_rpi_hifiberry_digi, HiFiBerry Digi+ Pro HiFi wm8804-spdif-0
Direct sample mixing device
amixer:
root@DietPi:/etc/raspotify# amixer
Simple mixer control 'Tx Source',0
Capabilities: enum
Items: 'S/PDIF RX' 'AIF'
Item0: 'AIF'
My audio card is a SPDIF digital transport compatible with hifiberry-digi-pro, displayed as:
hw:0,0 : sndrpihifiberry HiFiBerry Digi+ Pro HiFi wm8804-spdif-0
(its dtoverlay is not necessarily entered in /boot/config.txt)
My interest is to support native 44.1khz on raspotify and select, if possible hw:0,0 instead of dmix
DietPi v8.19.1 : 12:52 - mar 04/07/23 (debian)
Device model : RPi 4 Model B (aarch64)
6.1.21-v8+ kernel
nothing relevant on journalctl -u raspotify -b
mpd service and upnprender are disabled, nothing is using the soundcard
hifiberry-digi-pro works well before raspotify with mpd and upnprenderer enabled
thanks for support
Relevant log output and/or the contents of /etc/raspotify/crash_report if any ( sudo journalctl -u raspotify -b and sudo cat /etc/raspotify/crash_report )
jul 04 13:42:05 DietPi systemd[1]: Stopped raspotify.service - Raspotify (Spotify Connect Client).
jul 04 14:40:48 DietPi systemd[1]: Started raspotify.service - Raspotify (Spotify Connect Client).
jul 04 14:40:49 DietPi librespot[4477]: [2023-07-04T12:40:49Z WARN librespot] Without the `--enable-volume-normalisation` / `-N` flag normalisation options have no effect.
Is it muted?
I hadn't tested it with music playback, just the `librespot -d ?` output while idle.
Now it's working OK; according to my DAC, the input is 44.1kHz.
This is what `librespot -d ?` says:
`root@DietPi:~# librespot -d ?
[2023-07-16T17:05:38Z INFO librespot] librespot 0.4.2 a6e1258 (Built on 2023-06-21, Build ID: g9ifMDjd, Profile: release)
Compatible alsa device(s):
--------------------------------------------------------------------
ALSA lib pcm_dmix.c:999:(snd_pcm_dmix_open) unable to open slave`
You can't probe devices while they are in use.
|
gharchive/issue
| 2023-07-04T13:22:15 |
2025-04-01T04:34:04.303578
|
{
"authors": [
"JasonLG1979",
"wyup"
],
"repo": "dtcooper/raspotify",
"url": "https://github.com/dtcooper/raspotify/issues/633",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
273075044
|
Add 'new' badge on swap tab & Add shapeshift promo
Add 'new' badge available to all nav tabs by adding a badge: true key/value pair
Add Shapeshift promo
PR has a massive diff I wasn't expecting, closing and making a clean PR.
|
gharchive/pull-request
| 2017-11-10T22:10:06 |
2025-04-01T04:34:04.306505
|
{
"authors": [
"james-prado"
],
"repo": "dternyak/etherwallet",
"url": "https://github.com/dternyak/etherwallet/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
156610022
|
Are tests not written correctly?
In trying to update the solution for #52, I ran across my updates not passing the tests. First up, the test says that it's missing the required option --required. I'm not sure if I actually broke something, or if the test just wasn't written properly. It appears that while the fixture doesn't pass any options, the command actually wants the --required option; it just wasn't provided.
Thoughts? Is this test written correctly?
It might be. Correct the test to what you think it should do, and post the test code here, and I'll make sure it conforms to how I expected it to behave. Thanks!
|
gharchive/issue
| 2016-05-24T21:10:26 |
2025-04-01T04:34:04.308520
|
{
"authors": [
"dthree",
"eliperelman"
],
"repo": "dthree/vorpal",
"url": "https://github.com/dthree/vorpal/issues/150",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
93763595
|
Quartz Scheduler doesn't shutdown anymore
Since the last release, Quartz+TopShelf doesn't terminate properly anymore. Pressing Ctrl+C or shutting down the Windows service leaves the executable still running.
I'm reasonably certain this is due to the change in https://github.com/dtinteractive/Topshelf.Integrations/blob/89d9b121ef28eaadd71c769c8170d926dcbbe99b/Source/Topshelf.Quartz/SchedulejobServiceConfiguratorExtensions.cs#L138
introduced in 5260b67a4581b6c2cc4bab9ccddcd1edbfe1d8bc . It seems that the change https://github.com/dtinteractive/Topshelf.Integrations/commit/5260b67a4581b6c2cc4bab9ccddcd1edbfe1d8bc#diff-c522a1363e6aa9ff9d737c0c68c534a5 might have been committed by mistake?
Good catch, thank you. I'll be revving the Topshelf.Quartz version soon.
|
gharchive/issue
| 2015-07-08T11:05:02 |
2025-04-01T04:34:04.310878
|
{
"authors": [
"markhuber",
"skolima"
],
"repo": "dtinteractive/Topshelf.Integrations",
"url": "https://github.com/dtinteractive/Topshelf.Integrations/issues/24",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2072796756
|
ImplGenerics ToTokens ignores defaults of const Generics
use quote::ToTokens;
use syn::parse_quote;

let mut generics = syn::Generics::default();
generics.params.push(parse_quote!(const N: usize = 1));
println!("{:?}", generics.split_for_impl().0.to_token_stream().to_string());
prints: "< const N : usize >"
When I would expect "< const N : usize = 1 >"
Looking at your source code this seems to be done on purpose, so I guess there are situations where it is preferred to leave out the defaults.
If this is the case, could it be possible to have an option to toggle this?
This is the point of ImplGenerics.
impl<const N: usize = 1> T {} is not valid.
Fair point, I should have read the docs for split_for_impl() again :facepalm:
For future reference calling to_token_stream() on Generics gives the result I was looking for.
|
gharchive/issue
| 2024-01-09T17:05:31 |
2025-04-01T04:34:04.314113
|
{
"authors": [
"JustSomeRandomUsername",
"dtolnay"
],
"repo": "dtolnay/syn",
"url": "https://github.com/dtolnay/syn/issues/1579",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
200796300
|
Setup processor to add missing @Override annotations
This pull request adds the ability to add missing @Override annotations by using the already available IntelliJ inspection.
Thanks a lot for that contribution, I'll check it out ASAP
Merged. I'll release version 0.13 with it
|
gharchive/pull-request
| 2017-01-14T10:50:54 |
2025-04-01T04:34:04.320234
|
{
"authors": [
"dubreuia",
"marcosbento"
],
"repo": "dubreuia/intellij-plugin-save-actions",
"url": "https://github.com/dubreuia/intellij-plugin-save-actions/pull/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2355493327
|
enable union test
Blocked by https://github.com/duckdb/duckdb-rs/pull/336
Thanks for the PR! Could you rebase with main?
|
gharchive/pull-request
| 2024-06-16T07:16:58 |
2025-04-01T04:34:04.407799
|
{
"authors": [
"Mause",
"Mytherin"
],
"repo": "duckdb/duckdb-rs",
"url": "https://github.com/duckdb/duckdb-rs/pull/340",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1332329368
|
test
Task/Issue URL: https://app.asana.com/0/488551667048375/1202717423070850/f
Description
Steps to test this PR
Feature 1
[ ]
[ ]
UI changes
Before
After
!(Upload before screenshot)
(Upload after screenshot)
Current dependencies on/for this PR:
develop
PR #2147 👈
This comment was auto-generated by Graphite.
|
gharchive/pull-request
| 2022-08-08T19:56:28 |
2025-04-01T04:34:04.418579
|
{
"authors": [
"aitorvs"
],
"repo": "duckduckgo/Android",
"url": "https://github.com/duckduckgo/Android/pull/2147",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1498780467
|
Fix support for no leadingImageBackground on OneLineListItem and TwoLineListItem
Task/Issue URL: https://app.asana.com/0/488551667048375/1203563042100874/f
Description
Steps to test this PR
Feature 1
[ ]
[ ]
UI changes
Before
After
!(Upload before screenshot)
(Upload after screenshot)
Current dependencies on/for this PR:
develop
PR #2651 👈
This comment was auto-generated by Graphite.
|
gharchive/pull-request
| 2022-12-15T16:57:52 |
2025-04-01T04:34:04.422639
|
{
"authors": [
"karlenDimla"
],
"repo": "duckduckgo/Android",
"url": "https://github.com/duckduckgo/Android/pull/2651",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2440301998
|
Removed excludeFromRecents flag associated with IntentDispatcherActivity.
Task/Issue URL:
Description
Steps to test this PR
Feature 1
[ ]
[ ]
UI changes
Before
After
!(Upload before screenshot)
(Upload after screenshot)
#4833 👈
develop
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @anikiki and the rest of your teammates on Graphite
|
gharchive/pull-request
| 2024-07-31T15:11:56 |
2025-04-01T04:34:04.426780
|
{
"authors": [
"anikiki"
],
"repo": "duckduckgo/Android",
"url": "https://github.com/duckduckgo/Android/pull/4833",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
231080218
|
Update long press popover so it is displayed at location pressed
Reviewer: Caine
Description:
Adds support for long pressing a link on tablet and showing the popover beside it. The popover was previously unpositioned so appeared at odd locations on the screen.
Steps to test this PR:
From a tablet, long press a link and note that the popover is displayed at the location which was long pressed (see image).
Reviewer Checklist:
[x] Ensure the PR solves the problem
[x] Review every line of code
[x] Ensure the PR does no harm by testing the changes thoroughly
[ ] Get help if you're uncomfortable with any of the above!
[ ] Determine if there are any quick wins that improve the implementation
PR DRI Checklist:
[x] Get advice or leverage existing code
[x] Agree on technical approach with reviewer (if the changes are nuanced)
[x] Ensure that there is a testing strategy (and documented non-automated tests)
[x] Ensure there is a documented monitoring strategy (if necessary)
[x] Consider systems implications (Database connections, Grafana stats, CPU)
Thanks for the review :)
|
gharchive/pull-request
| 2017-05-24T15:28:29 |
2025-04-01T04:34:04.433655
|
{
"authors": [
"subsymbolic"
],
"repo": "duckduckgo/iOS",
"url": "https://github.com/duckduckgo/iOS/pull/31",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2112807980
|
Where could I get the vitalqip package?
Hello Daniel,
The client uses the vitalqip package, which doesn't exist on PyPI.
Could you please advise where it can be found?
Thank you
@dunielpls is the vitalqip package available anywhere?
|
gharchive/issue
| 2024-02-01T15:33:42 |
2025-04-01T04:34:04.489176
|
{
"authors": [
"christianbur",
"nsmcan"
],
"repo": "dunielpls/vitalqip-api-client",
"url": "https://github.com/dunielpls/vitalqip-api-client/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
340803739
|
Added AWS CloudTrail account ids to vendor_accounts.yaml
Added account ids so that CloudTrail accounts are not unknown for the wot command
Thank you!
|
gharchive/pull-request
| 2018-07-12T21:24:24 |
2025-04-01T04:34:04.493462
|
{
"authors": [
"0xdabbad00",
"williambherman"
],
"repo": "duo-labs/cloudmapper",
"url": "https://github.com/duo-labs/cloudmapper/pull/122",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1628146677
|
🛑 Mention Test Bot is down
In b2c10c3, Mention Test Bot (https://backend.isbotdown.com/bots/mentiontestbot) was down:
HTTP code: 200
Response time: 116 ms
Resolved: Mention Test Bot is back up in f829513.
|
gharchive/issue
| 2023-03-16T19:37:14 |
2025-04-01T04:34:04.501836
|
{
"authors": [
"durof"
],
"repo": "durof/status",
"url": "https://github.com/durof/status/issues/3058",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1838274517
|
🛑 Mention Bot is down
In 0d70252, Mention Bot (https://backend.isbotdown.com/bots/mentionbot) was down:
HTTP code: 200
Response time: 89 ms
Resolved: Mention Bot is back up in 62313c4.
|
gharchive/issue
| 2023-08-06T16:50:56 |
2025-04-01T04:34:04.504123
|
{
"authors": [
"durof"
],
"repo": "durof/status",
"url": "https://github.com/durof/status/issues/4921",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1857541300
|
🛑 Mention Robot is down
In 50d2d40, Mention Robot (https://backend.isbotdown.com/bots/mentionrobot) was down:
HTTP code: 200
Response time: 125 ms
Resolved: Mention Robot is back up in 8b9193f after 356 days, 16 hours, 4 minutes.
|
gharchive/issue
| 2023-08-19T03:58:36 |
2025-04-01T04:34:04.506486
|
{
"authors": [
"durof"
],
"repo": "durof/status",
"url": "https://github.com/durof/status/issues/5038",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1402245428
|
🛑 Mention Bot is down
In 3dddee4, Mention Bot (https://backend.isbotdown.com/bots/mentionbot) was down:
HTTP code: 200
Response time: 123 ms
Resolved: Mention Bot is back up in f7b2f18.
|
gharchive/issue
| 2022-10-09T11:30:46 |
2025-04-01T04:34:04.508788
|
{
"authors": [
"durof"
],
"repo": "durof/status",
"url": "https://github.com/durof/status/issues/742",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2490817881
|
🛑 Mention Bot is down
In 44238e4, Mention Bot (https://backend.isbotdown.com/bots/mentionbot) was down:
HTTP code: 200
Response time: 85 ms
Resolved: Mention Bot is back up in d5773ce after 5 hours, 5 minutes.
|
gharchive/issue
| 2024-08-28T02:43:14 |
2025-04-01T04:34:04.511051
|
{
"authors": [
"durof"
],
"repo": "durof/status",
"url": "https://github.com/durof/status/issues/8110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2647128906
|
🛑 Mention Robot is down
In 53c36ea, Mention Robot (https://backend.isbotdown.com/bots/mentionrobot) was down:
HTTP code: 200
Response time: 87 ms
Resolved: Mention Robot is back up in 47fc2c8 after 1 hour, 48 minutes.
|
gharchive/issue
| 2024-11-10T11:20:21 |
2025-04-01T04:34:04.513507
|
{
"authors": [
"durof"
],
"repo": "durof/status",
"url": "https://github.com/durof/status/issues/8514",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2632709281
|
Flav/serialize requested group ids
Description
See https://github.com/dust-tt/dust/issues/8268.
The recent addition of restricted/unrestricted spaces creates a one-to-many relationship between spaces and groups. The current groupIds column has a limitation: it only performs a conjunction across all the group ids, which does not work for unrestricted spaces that leverage the existing global group. This PR adds a new column on conversations and agent_configurations to serialize the requestedGroupIds. The new column serializes the requested group ids from the requested permissions, preserving the disjunction of the group ids within a given space.
This is only the first step, which adds the new column and enables the double write. Once the migration has been run, a second step will follow to start reading from it.
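For illustration, the disjunction-within/conjunction-across semantics described above could be sketched as follows. This is a hypothetical Python sketch; the list-of-lists shape and names are assumptions, not Dust's actual schema.

```python
# Hypothetical sketch: serialized requested group ids as a list of
# lists -- one inner list per space. A caller must match at least one
# group in EVERY inner list (disjunction within a space, conjunction
# across spaces). Names are illustrative, not Dust's actual schema.

def can_access(user_group_ids, requested_group_ids):
    user = set(user_group_ids)
    return all(user & set(alternatives) for alternatives in requested_group_ids)

# A restricted space guarded by group 1, plus an unrestricted space
# whose global group is 9:
print(can_access([1, 9], [[1], [9, 42]]))  # True
print(can_access([1], [[1], [9, 42]]))     # False (no group for the second space)
```

An empty outer list means "no restrictions", which `all()` naturally treats as accessible.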
Risk
Deploy Plan
[ ] Run SQL migration
[ ] Deploy front
[ ] Run the backfill
:wave: Thanks! Would the column be added in addition to groupIds? I had in mind that we would remove the groupIds column and directly add a full acl column with all the generality; is that what this is? In that case, what about naming it acl directly?
👋 Thanks! Would the column be added in addition to groupIds? I had in mind that we would remove the groupIds column and directly add a full acl column with all the generality; is that what this is? In that case, what about naming it acl directly?
The main reason we don't want to store the acl for now is because permissions (like read, write are defined in the code not in the DB). Ultimately we know that at some point we will store the space Id to abstract all those permissions but for the time being we agreed to only store the group ids.
|
gharchive/pull-request
| 2024-11-04T12:38:29 |
2025-04-01T04:34:04.518559
|
{
"authors": [
"flvndvd",
"philipperolet"
],
"repo": "dust-tt/dust",
"url": "https://github.com/dust-tt/dust/pull/8413",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2314163475
|
feat: improve GitHub Actions workflows
Initial check-and-test composite action to make shared codebase checks and tests reusable by multiple workflows.
Initial setup composite action to make shared setup tasks reusable by multiple workflows.
Rename the build-check-test.yaml file to check-and-test.yaml and make the job use the composite actions.
Implement the DRY_RUN environment variable and conditional logic for testing the release.yaml workflow locally with the act CLI tool.
Re-order job steps in release.yaml to simplify them and make the job use the composite actions.
:tada: This PR is included in version 1.1.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
:tada: This PR is included in version 1.0.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-05-24T01:58:32 |
2025-04-01T04:34:04.530070
|
{
"authors": [
"dustin-ruetz"
],
"repo": "dustin-ruetz/devdeps",
"url": "https://github.com/dustin-ruetz/devdeps/pull/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
112011841
|
Update pt.json
Most of the translation was poorly done, and mixed with Brazilian Portuguese.
Thanks for the submission! Though I try to keep all my translations on Transifex so everyone can contribute and they're easier for me to review. Thanks! https://github.com/dustinblackman/Championify/blob/master/CONTRIBUTE.md
|
gharchive/pull-request
| 2015-10-18T10:21:04 |
2025-04-01T04:34:04.538670
|
{
"authors": [
"SW1FT",
"dustinblackman"
],
"repo": "dustinblackman/Championify",
"url": "https://github.com/dustinblackman/Championify/pull/132",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
849795767
|
download-models.sh
Could we add a check to see if the files have already been downloaded and extracted?
That way, if the install gets interrupted for some reason, it does not download the files every time.
Hi @h3retek, that could be possible, although some users need to re-download models due to corruption, which would complicate matters. In particular, those in China sometimes have problems downloading the files.
The way I try to do this today is that the first time you run cmake ../, it runs the downloader script, but on future invocations of cmake, it doesn't run the downloader script. You would need to run the downloader script manually again, at which time you can de-select those models except the new ones you want to download.
The docker/run.sh script also does something similar - it only kicks off the downloader script if it hasn't run before or if you have no models at all.
on future invocations of cmake
In my case, cmake never finished :) so, it just kept on downloading the files... on a fast network, no problem, but I "only" have a 10Mb GSM. FeelsBadMan
re-download models due to corruption
We can check the hash?
In your case, you may just want to comment out the model downloader from being called in cmake - https://github.com/dusty-nv/jetson-inference/blob/86776674121f071453b64cc5754e628cb6a6d32c/CMakePreBuild.sh#L45
You would then need to run the model downloader manually if/when you wanted to download the models.
While possible, checking the hash would add complications to the script and I would need to keep track of the hashes.
Understood.
Thank you for your time.
Just one last suggestion:
Maybe modify the wget command with --continue --timestamping?
I wonder if the download was corrupted, would --continue keep appending to the corrupted file, or would it restart from the beginning?
If you want to try it out and report back how it works, that would be great. Thanks!
It shall be done sir.
I have come across this article... which is a very interesting read, if you have the time.
Mr. John Walker created a perl script that checks the file's integrity on his ftp server. I believe the only secure way to check file integrity is to provide a md5 or sha1 check.
Anyway, in the future, I will just follow your advice, skip the downloader in jetson-inference/CMakePreBuild.sh... unless of course I am on better internet ^_^
Thanks so much
|
gharchive/issue
| 2021-04-04T04:33:35 |
2025-04-01T04:34:04.549512
|
{
"authors": [
"SriniVest",
"dusty-nv",
"h3retek"
],
"repo": "dusty-nv/jetson-inference",
"url": "https://github.com/dusty-nv/jetson-inference/issues/988",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1049399104
|
Resolves seccomp error
Closes #1283.
Disables seccomp error for updated versions of docker.
Thank you @pfremm! Works a treat on my Xavier NX.
|
gharchive/pull-request
| 2021-11-10T04:49:07 |
2025-04-01T04:34:04.550891
|
{
"authors": [
"ewth",
"pfremm"
],
"repo": "dusty-nv/jetson-inference",
"url": "https://github.com/dusty-nv/jetson-inference/pull/1288",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2741009681
|
Black pawn colors are wrong
After installing pygn-mode using package-install I ran pygn-mode-run-diagnostic and all diagnostics passed (shown below)
[x] Good. We can execute the pygn-mode-python-executable at 'python'
[x] Good. The pygn-mode-python-executable at 'python' is a Python 3 interpreter.
[x] Good. The pygn-mode-python-executable at 'python' is better than or equal to Python version 3.7.
[x] Good. The pygn-mode-python-executable at 'python' can import the Python chess library.
[x] Good. The pygn-mode-script-directory ('/home/bt/.emacs.d/elpa/pygn-mode-0.5.1/') is found and the server script is callable.
[x] Good. The `uci-mode' library is available.
[x] Good. The `nav-flash' library is available.
[x] Good. The `ivy' library is available.
------------------------------------
All pygn-mode required diagnostics completed successfully.
However on opening a sample PGN file and displaying the board the black pawns are shown in white color (screenshot below).
Hi! Something like this happened twice before, and was solved by upgrading the Python chess dependency here. I really don't understand how it breaks or why that solves it, but it sounds like you might understand more.
Anyway let's try updating.
Aha! The update in #214 was supposed to be from chess 1.9.4 to 1.11.1, not so old as 1.6.1. Are you using https://stable.melpa.org ? We haven't tagged a release in a long while on the theory that everyone used https://melpa.org instead. But we can tag a release.
Tagged https://github.com/dwcoates/pygn-mode/releases/tag/v0.6.3 . Not sure how long it will take to show up in MELPA stable. You were running a very old version.
|
gharchive/issue
| 2024-12-16T00:07:36 |
2025-04-01T04:34:04.565494
|
{
"authors": [
"balbirthomas",
"rolandwalker"
],
"repo": "dwcoates/pygn-mode",
"url": "https://github.com/dwcoates/pygn-mode/issues/212",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
230178087
|
[ QUESTION ] --clear-chunk-age & --clear-chunk-max-size
For example if I set:
--clear-chunk-max-size=400GB (to make sure disk does not get full)
--clear-chunk-age=730h ( to keep chunks for 1 month )
1.
I assume that when --clear-chunk-max-size gets to 400GB, the oldest (access-wise) files will be deleted, even if --clear-chunk-age has not yet been reached?
2.
If I replace/upgrade an existing movie with a better version (same name) and the chunks for that movie were already downloaded, are they automatically replaced (when Plex accesses them)?
3.
What happens if I delete all chunks that were not accessed in the last minute? Would that cause any errors, or would plexdrive automatically download them again when needed?
I made a script to keep chunks for 30 days and to manually delete them if my disk space gets below 1GB:
https://github.com/ajkis/scripts/blob/master/plexdrive/plexdrivechunks.sh
It seems there is a race issue: https://github.com/dweidenfeld/plexdrive/issues/104
@ajkis
It is not a race issue.
If --clear-chunk-max-size is set, --clear-chunk-age is disabled (it does not have any effect)
Source Code
--clear-chunk-max-size has the highest priority.
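For illustration, the --clear-chunk-max-size behaviour described above (evict the least-recently-accessed chunks until the cache fits under the cap) could be sketched like this. This is a hypothetical Python sketch, not plexdrive's actual Go implementation.

```python
import os

def prune_chunks(cache_dir, max_bytes):
    """Delete least-recently-accessed chunk files until the cache
    directory's total size fits under max_bytes."""
    chunks = []
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            path = os.path.join(root, name)
            st = os.stat(path)
            chunks.append((st.st_atime, st.st_size, path))
    chunks.sort()  # oldest access time first
    total = sum(size for _atime, size, _path in chunks)
    for _atime, size, path in chunks:
        if total <= max_bytes:
            break
        os.remove(path)
        total -= size
    return total
```

Note that on filesystems mounted with noatime the access time never updates on reads, so this style of eviction would effectively fall back to oldest-first by creation time.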
|
gharchive/issue
| 2017-05-20T20:47:07 |
2025-04-01T04:34:04.570492
|
{
"authors": [
"Em31Et",
"ajkis"
],
"repo": "dweidenfeld/plexdrive",
"url": "https://github.com/dweidenfeld/plexdrive/issues/116",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1725673016
|
Update preinstalled software list for TCC agents
This PR was created by an automated build in TeamCity Cloud
Closing this PR as obsolete, there is a more recent one
|
gharchive/pull-request
| 2023-05-25T11:54:38 |
2025-04-01T04:34:04.592863
|
{
"authors": [
"dy1ng"
],
"repo": "dy1ng/teamcity-documentation",
"url": "https://github.com/dy1ng/teamcity-documentation/pull/189",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1267511349
|
Update preinstalled software list for TCC agents
This PR was created by an automated build in TeamCity Cloud
Closing this PR as obsolete, there is a more recent one
|
gharchive/pull-request
| 2022-06-10T12:59:26 |
2025-04-01T04:34:04.593625
|
{
"authors": [
"dy1ng"
],
"repo": "dy1ng/teamcity-documentation",
"url": "https://github.com/dy1ng/teamcity-documentation/pull/36",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
288342919
|
Is there padding/align option?
Description
I'm not sure if there already is such an option - but is there a way to align results? I tried to just insert spaces in category names, but they seem to be trimmed, as only one space after the name remains displayed. It'd make much more sense to align results for the barinfo display mode, as currently it looks really messy and kind of defeats the purpose of displaying bars if they don't look clean at all...
This kind of already exists, it's a hacky fix though. I had to search through Neofetch's source code. I used it to align my rainbow, since my terminal supports 16-million truecolor and I like showing it off.
Basically you just need to add printf '\e[%bC%b' "$text_padding" "${zws}${cols}" for aligning things.
^ Here is my config for the above.
(Requires kitty terminal for native image support - https://github.com/kovidgoyal/kitty)
https://pastebin.com/C4LbSySJ
Another workaround is to pipe the output through column:
neofetch --stdout | column -t -s ':'
gives
OS Ubuntu 18.04.2 LTS x86_64
Kernel 4.15.0-45-generic
Uptime 2d 4h 48m
Packages 1823
Shell zsh 5.4.2
...
Another workaround is using --separator '\t'
if you want color output with the piping through column trick you can use:
neofetch --off --color_blocks off | column -t -s ':'
Closing as stale.
A cringe-way workaround
neofetch --color_blocks off | column -t -s ":" -o "" && neofetch --block_range 0 8 --off --disable $(cat .config/neofetch/config.conf | grep -oP '(?<=info )\w+|(?<=." )\w+' | head -n -2 | tr '\n' ' ') | tail -n +2
Here's a simple workaround: while normal spaces at the end of the "subtitle" get stripped, I found that non-breaking spaces can be used to align the information manually in the config file.
Much easier to read at a glance in my opinion.
Another way is just using \n for spaces; this way you can align it.
Another simple workaround is using \n; it outputs as a space, and you can add as many as you need to center or align it as you like.
|
gharchive/issue
| 2018-01-13T16:58:25 |
2025-04-01T04:34:04.600669
|
{
"authors": [
"Snuggle",
"dylanaraps",
"hikkidev",
"its-19818942118",
"jackcogdill",
"joelostblom",
"lapsio",
"svenknoke",
"wave864"
],
"repo": "dylanaraps/neofetch",
"url": "https://github.com/dylanaraps/neofetch/issues/895",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2673832737
|
Test - Create Invitation
Test Description:
Test if the user can create an invitation
Test case successfully implemented for create invitation service
|
gharchive/issue
| 2024-11-19T23:14:39 |
2025-04-01T04:34:04.601923
|
{
"authors": [
"dylandacosta8"
],
"repo": "dylandacosta8/is601_final",
"url": "https://github.com/dylandacosta8/is601_final/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2685022753
|
chore: remove redundant std import in codeSample
Kind of a strange one, as I'm not sure what changed to make this redundant... but since it shadows the top-level import, Zig won't compile the generated code using this code sample.
In any case, this fixes the test.
Is it possible there is an auto-formatter that was removing it but now isn't?
Is it possible there is an auto-formatter that was removing it but now isn't?
hmm.. it could be! I did just upgrade to a newer version of Zig locally.
i think maybe one of the new schemas may have triggered a condition to add the top-level import. idk tho I'll see if it's my local compiler version today.
either way, I think this is safe to merge. I can also change the way this is imported so it always works / doesn't shadow.
|
gharchive/pull-request
| 2024-11-23T00:21:44 |
2025-04-01T04:34:04.603869
|
{
"authors": [
"bhelx",
"nilslice"
],
"repo": "dylibso/xtp-bindgen-test",
"url": "https://github.com/dylibso/xtp-bindgen-test/pull/22",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
68930636
|
just a little cleanup
I didn't run a linter on it i just caught a few things that should be cleaned up.
:+1: Thanks!
|
gharchive/pull-request
| 2015-04-16T14:01:48 |
2025-04-01T04:34:04.617085
|
{
"authors": [
"ethanlarochelle",
"jukebox42"
],
"repo": "dyninc/statuspage-api",
"url": "https://github.com/dyninc/statuspage-api/pull/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
290347733
|
[db] Migration
[x] survey finished in #7
[x] design doc
[x] interface for migration tasks 484c455f1e1ea8a9e8269d8c0ac910c5b16e232d
[x] create migration table 7c39e8a1d88f358c4ff715c154d0a9ef93a7851a
[ ] create database
[ ] generator for generating base migration task file
[ ] register migration
[ ] example
[ ] user table
[ ] user's org, project etc.
[ ] seed dummy data
should reopen when we start working on db
|
gharchive/issue
| 2018-01-22T04:27:16 |
2025-04-01T04:34:04.623998
|
{
"authors": [
"at15"
],
"repo": "dyweb/go.ice",
"url": "https://github.com/dyweb/go.ice/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
816871693
|
Cleanup old artifacts
Hello, first I would like to thank you for the active support on the project; I am really satisfied with your service.
I use timestamp-based snapshots in my Reposilite instance to store development artifacts. But as time goes by, my global disk usage keeps growing. The "disk quota" feature is very nice to avoid disk saturation. But by using your service, I don't have a quick or automatic way to clean up old artifacts.
Is it a planned feature? Or maybe too complex/consuming to be implemented?
I don't have a clear idea how to integrate it (maybe a max amount of snapshots per artifact?).
Thanks :)
Yeah, it sounds reasonable. I'm not sure how to solve this issue at this moment, but I'll try to explore possible options after #384.
Thank you! It's not urgent :)
any progress?
We're still working through #387, which conflicts with this topic, so unfortunately it's not yet available. It's an issue of high priority, so as soon as we get through it, I'll take care of this one. Moving to nio and implementing the whole fs abstraction takes some time :/ It's also developed by a volunteer and I don't want to make his task harder with various merge conflicts.
nio porting and file system abstraction are nearly done, I just need to do some more thorough testing. I'll hopefully have it in a "finished" state by the end of next week.
I'll add it after 3.x release as I don't want to introduce new features to somehow simplify this process.
Available in Configuration tab in the dashboard, configurable per each repository:
Available in Configuration tab in the dashboard, configurable per each repository:
Thank you for this addition! I can't wait to try version 3 🥳
I think it was the longest-open issue in this repository. Not quite proud of that fact, but at least I like that it's fully implemented as an event listener on top of the new plugin api 🎉
@dzikoysk is this in the web interface? I've just upgraded to 3.5.5 and I can't find it in the repository configurations.
@dzikoysk thanks. I thought the feature allows for telling how many old repos to retain, but it's good even without this detail.
Sure, keep in mind that if you have some specific requirements, you can always cover this via plugin api with your own implementation.
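For example, a plugin-side retention rule that keeps only the N newest timestamped snapshots per artifact might look like the hypothetical Python sketch below. The "-yyyyMMdd.HHmmss-buildNumber" filename pattern is the standard Maven timestamped-snapshot convention; everything else is illustrative, not Reposilite's actual implementation.

```python
import re
from collections import defaultdict

# Standard Maven timestamped-snapshot naming: artifact-stamp-build.ext
SNAPSHOT_RE = re.compile(
    r"^(?P<artifact>.+)-(?P<stamp>\d{8}\.\d{6})-(?P<build>\d+)\.(?P<ext>\w+)$"
)

def files_to_delete(filenames, keep=2):
    """Return the snapshot files to prune, keeping only the `keep`
    newest timestamped builds per (artifact, extension) pair."""
    groups = defaultdict(list)
    for name in filenames:
        m = SNAPSHOT_RE.match(name)
        if m:
            key = (m.group("artifact"), m.group("ext"))
            groups[key].append((m.group("stamp"), int(m.group("build")), name))
    doomed = []
    for versions in groups.values():
        versions.sort(reverse=True)  # newest (stamp, build) first
        doomed.extend(name for _stamp, _build, name in versions[keep:])
    return doomed
```

Because the timestamp is zero-padded, lexicographic sorting of the stamp string matches chronological order.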
|
gharchive/issue
| 2021-02-25T22:50:01 |
2025-04-01T04:34:04.636224
|
{
"authors": [
"Haven-King",
"caoli5288",
"dzikoysk",
"marco-brandizi",
"utarwyn"
],
"repo": "dzikoysk/reposilite",
"url": "https://github.com/dzikoysk/reposilite/issues/385",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2627711248
|
🛑 ALAO 2020 is down
In 3911db0, ALAO 2020 (https://2020.alaoweb.org/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: ALAO 2020 is back up in 2076264 after 4 minutes.
|
gharchive/issue
| 2024-10-31T19:55:34 |
2025-04-01T04:34:04.638786
|
{
"authors": [
"dzoladz"
],
"repo": "dzoladz/status",
"url": "https://github.com/dzoladz/status/issues/174",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
160532223
|
Convert readthedocs links for their .org -> .io migration for hosted projects
As per their blog post of the 27th April ‘Securing subdomains’:
Starting today, Read the Docs will start hosting projects from subdomains on the domain readthedocs.io, instead of on readthedocs.org. This change addresses some security concerns around site cookies while hosting user generated data on the same domain as our dashboard.
Test Plan: Manually visited all the links I’ve modified.
The link on the Github repo description also needs updating.
Coverage remained the same at 74.4% when pulling b5cf64cb1ac2794519eef78a7d6034705d7da219 on adamchainz:readthedocs.io into 30c93182fa177f5b7000ba3f0934fe0beaaeba3c on e-dard:master.
I can't edit the link, only @e-dard can, but thanks for the PR!
|
gharchive/pull-request
| 2016-06-15T21:49:57 |
2025-04-01T04:34:04.643803
|
{
"authors": [
"SunDwarf",
"adamchainz",
"coveralls"
],
"repo": "e-dard/flask-s3",
"url": "https://github.com/e-dard/flask-s3/pull/72",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
}
|
149414479
|
Add symptomsOn property to patient
Fix for CCSL-1556
:+1:
|
gharchive/pull-request
| 2016-04-19T10:16:31 |
2025-04-01T04:34:04.734750
|
{
"authors": [
"emig",
"pkolios"
],
"repo": "eHealthAfrica/data-models",
"url": "https://github.com/eHealthAfrica/data-models/pull/108",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
314692096
|
Suggestion also
So one person had an idea I thought I'd share - the bot works great but does become a bit spammy on an active pool. Would it be possible to link a symbol to a webhook - that way you could allocate an area on Discord per coin.
Something like:
XVG Webhook1
RVN Webhook2
I could also flush found blocks per coin and per time limit
Meaning get rid of history in a given channel by hook? .. If so I'd love it, like limiting to the last 10 blocks or such. Great thought.
Regarding your first answer, I think it's already doable actually. As the coin is requested by passing ?algo=all, you could run several instances of the script bound to different hooks, using ?algo=xvg or ?algo=rvn.
Hmm interesting, although if the pool is large I'm not in love with one systemd per coin.
That becomes unwieldy, but I understand.
Well, it's a Python script using asyncio. It uses basically no resources, so it's clearly fine to run it multiple times if you want.
|
gharchive/issue
| 2018-04-16T14:52:56 |
2025-04-01T04:34:04.739269
|
{
"authors": [
"eLvErDe",
"mtompkins"
],
"repo": "eLvErDe/yiimp-blocks-found-to-discord",
"url": "https://github.com/eLvErDe/yiimp-blocks-found-to-discord/issues/6",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1797319790
|
🛑 ECO Drive is down
In ed438f4, ECO Drive ($ECO_DRIVE_URL) was down:
HTTP code: 500
Response time: 741 ms
Resolved: ECO Drive is back up in 09f21dc.
|
gharchive/issue
| 2023-07-10T18:41:19 |
2025-04-01T04:34:04.775336
|
{
"authors": [
"finncyr"
],
"repo": "eSports-Cologne-Dev/upptime",
"url": "https://github.com/eSports-Cologne-Dev/upptime/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
133296136
|
lock down dev environment
http://grosser.it/2015/08/14/check-in-your-gemfile-lock/
https://github.com/eac/predictive_load/pull/12
@pschambacher
:+1:
|
gharchive/pull-request
| 2016-02-12T17:18:00 |
2025-04-01T04:34:04.792067
|
{
"authors": [
"grosser",
"pschambacher"
],
"repo": "eac/predictive_load",
"url": "https://github.com/eac/predictive_load/pull/14",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
306782688
|
GitLens disables merging CodeLens
GitLens Version: 8.1.1
VSCode Version: 1.21.1
OS Version: Windows 10
Steps to Reproduce:
Use VSCode as mergetool
GitLens enabled:
GitLens disabled:
I can't reproduce this behavior, nor should it be possible for GitLens to conflict with any other code lens. When this occurs can you open the devtools (Developer: Toggle Developer Tools from the command palette) and see if there are any errors in the console and post them here? And when you say GitLens disabled, do you mean uninstalled? Code lens turned off?
Disable = disabled via extension manager
The dev console is empty
Here is a screenshot of all my extensions disabled but gitlens:
I can't reproduce on my other machine either. I'll take a deeper look tomorrow
Thank you for the update! Let me know.
I've seen this actually happen myself (once), but saw no errors or issue related to GitLens. Once I restarted vscode it fixed itself. Very strange.
It still happens for me everytime at my work pc, but I don't really have any other helpful information
@CoenraadS if it is reliably reproducible for you, I'd love to find a way to dig deeper into it -- maybe a Live Share or something?
I am closing this issue because it needs more information and has not had recent activity. Please re-open this issue when more details can be provided.
Thanks!
I've just now been having similar issues with GitLens. Disabling GitLens makes the actions show up again. I have GitLens version 8.5.6 and Visual Code 1.27.2.
Not sure how to reproduce. Seems to pop up every now and then.
Hey guys, after long searching and trying I may have discovered the reason for the disabled options in the merge conflict sections. In my case it was solved by removing this line in the settings.json file, which is located in
~/.config/code/User (note that my OS is Linux Ubuntu).
"editor.codeLens": false
Hope that will resolve your issue also.
Thank you.
Just so everyone is clear, that setting has nothing directly to do with GitLens -- and GitLens never changes that setting. That is vscode's master setting to allow (or disallow) and code lens in the editor -- GitLens or otherwise.
Is anyone still experiencing this? If so, please try GitLens 9 and let me know if the issue still persists.
I have experienced the same as you @eamodio: installed GitLens, links are gone. The setting "Merge-conflict > Code Lens: Enabled" (checked) brought it back.
Thanks for mentioning @James-E-Adams, it works again!
I'm going to close this as I think this is resolved (and was a vscode issue).
I'm having this issue with GitLens 9.2.2, VS Code 1.30.1 running on Linux.
editor.codelens and Merge-conflict > Code Lense are enabled.
Turning off GitLens' review mode (to Zen mode) fixes the issue and I can see the resolve conflict actions above the merge conflict code again.
FYI @oferz:
For me:
VSCode 1.30.1
GitLens 9.2.2
Merge-conflict > Code Lens was already enabled, so that wasn't the problem, but as @AhmedBHameed said up above, Editor: Code Lens was disabled, and switching that to enabled fixed it. I'm not sure how that got disabled in the first place.
@oferz I'm not sure how GitLens could be conflicting with the built-in merge code lens unless there is an issue in vscode itself. Could you please open an issue against the vscode repository and link it here?
@eamodio with the latest versions (VSCODE 1.30.1 and GitLense 9.3.0) it works fine again.
FWIW I do NOT have GitLens installed but still had this issue and google brought me here.
Ubuntu 18, VSC 1.32.1
Enable / Disabling
Merge-conflict > Code Lens: Enabled, Editor: Code Lens: Enabled
fixed my issues. Both shown as enabled. I disabled both and then re-enabled both via the GUI.
The same reason as @randallmeeker. Disabling GitLens > Code Lens makes the merge conflict actions appear.
@i-love-thinking what platform are you on? Linux?
Ubuntu 18, VSC 1.32.3
Reported here: https://github.com/Microsoft/vscode/issues/71532
This happened to me just now doing a rebase. I left the computer for 10 minutes, and when I came back the missing merge action buttons were back. I also noticed the Source Control: GIT tab had MERGE CHANGES with the correct file now; before, it was in STAGED CHANGES, even though it contained conflicts.
|
gharchive/issue
| 2018-03-20T09:15:55 |
2025-04-01T04:34:04.812720
|
{
"authors": [
"AhmedBHameed",
"AustonZ",
"CoenraadS",
"Waltari10",
"bgpedersen",
"eamodio",
"i-love-thinking",
"oferz",
"randallmeeker",
"shiftgeist"
],
"repo": "eamodio/vscode-gitlens",
"url": "https://github.com/eamodio/vscode-gitlens/issues/319",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2000876066
|
Fix download list for collections with hidden items, add --include-hidden
When requesting a collection, Bandcamp returns the first page of items in item_cache.collection and the first page of hidden items in item_cache.hidden. The total number of items in the two categories (across all pages) are collection_data.item_count and hidden_data.item_count.
We might hope that there would then be len(item_cache.collection) entries in collection_data.redownload_urls, and len(item_cache.hidden) entries in hidden_data.redownload_urls. Unfortunately, hidden_data.redownload_urls doesn't exist, and instead there are len(item_cache.collection) + len(item_cache.hidden) items in collection_data.redownload_urls, combining both hidden and unhidden results.
bandcamp-downloader calculates the number of items to request by the total length of the (visible) library minus the length of redownload_urls:
'count' : _user_info['collection_count'] - len(_user_info['download_urls']),
Because of the page of hidden items in the download urls, this means that for collections with hidden items, one page of hidden items is included in the url list, and one page of unhidden items is truncated from the end of the list.
This PR adds the ability to intentionally include hidden downloads with the --include-hidden flag. When false, it skips URLs for hidden items in the initial query, which fixes the issue where some visible items would be missing in the resulting download. When true, it adds a request to the POST endpoint for hidden items, .../hidden_items, to fetch the remaining pages (previously only .../collection_items was used).
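A minimal sketch of the corrected count calculation described above. The field names (`collection_count`, `hidden_count`, `download_urls`) mirror the Bandcamp response shape described in this PR and are assumptions, not a verified API contract:

```python
def remaining_count(user_info: dict, include_hidden: bool = False) -> int:
    """Number of redownload URLs still to request after the first page.

    When hidden items are skipped, their first-page URLs are assumed to
    have been filtered out of user_info["download_urls"] already, so the
    target total is just the visible collection count.
    """
    total = user_info["collection_count"]
    if include_hidden:
        total += user_info["hidden_count"]
    return total - len(user_info["download_urls"])
```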
Fixes https://github.com/easlice/bandcamp-downloader/issues/5.
Thanks! I also have a followup I've been using locally that improves the performance of repeated downloads by comparing existing files against the reported size in the metadata instead of actually starting every download stream first, but I was holding off until this one got merged... I can submit that one soon, I will try syncing with the other recent changes this weekend :-)
That would be great. I tried using HEAD requests at one point, but Bandcamp never quite replied to those consistently, so a way to stop redoing so many requests would be wonderful.
|
gharchive/pull-request
| 2023-11-19T14:33:22 |
2025-04-01T04:34:04.846333
|
{
"authors": [
"cubicvoid",
"easlice"
],
"repo": "easlice/bandcamp-downloader",
"url": "https://github.com/easlice/bandcamp-downloader/pull/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2758724836
|
The & character in URLs needs to be escaped as &amp;
The & character in URLs needs to be escaped as &amp;, otherwise XML parsing reports an error.
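A minimal sketch of the fix using Python's standard library (the language here is just for illustration; the URL is hypothetical):

```python
from xml.sax.saxutils import escape

# escape() replaces &, <, and > with their XML entities,
# so the URL becomes safe to embed in an XML/RSS document
url = "https://example.com/feed?a=1&b=2"
escaped = escape(url)
print(escaped)  # https://example.com/feed?a=1&amp;b=2
```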
|
gharchive/issue
| 2024-12-25T10:48:47 |
2025-04-01T04:34:04.900766
|
{
"authors": [
"shipsw"
],
"repo": "easychen/ai-rss",
"url": "https://github.com/easychen/ai-rss/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2265084555
|
When using upx to shrink micro.sfx, the sizes across the three compression passes can be inconsistent in certain cases
https://github.com/easysoft/phpmicro/blob/e142401ee260c128ac9d6bb1e35f80040614437c/Makefile.frag#L11C1-L23C30
Related issue:
https://github.com/crazywhalecc/static-php-cli/issues/425
The main change is to the strip command, running upx compression immediately after strip. But even with three passes the compressed sizes still don't seem to match; in that case, is the only option to add yet another build pass?
This problem seems to be related to upx. Reading the source carefully, phpmicro works by writing its own file size into itself, while upx works by compression, where even a single-bit change can affect the final compressed size.
In the extreme case, a binary of some size (say 12345678 bytes) compresses to size x; after the observed size x is written back into the binary, the second compression pass may produce size y, and so on back and forth, so a working micro.sfx can never be built.
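The feedback loop can be sketched with zlib standing in for upx (a toy model, not the real micro.sfx layout): writing the observed compressed size back into the payload changes the bytes being compressed, so the build only stabilizes if it happens to reach a fixed point.

```python
import zlib

def build_sfx(size_field: int) -> bytes:
    # toy stand-in for micro.sfx: the binary embeds its own compressed size
    return zlib.compress(b"phpmicro payload " * 64 + str(size_field).encode())

# iterate: write the last observed compressed size, recompress, compare
size = 0
for attempt in range(1, 6):
    new_size = len(build_sfx(size))
    if new_size == size:  # fixed point: writing the size back is stable
        break
    size = new_size
```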
Unless micro.sfx changes how it works, this problem seems unsolvable, so I'm closing it for now.
I have an idea for solving this (and for sorting out digital-signature compatibility along the way).
But I've been too lazy to finish it...
> I have an idea for solving this (and for sorting out digital-signature compatibility along the way)
Let's hear it?
A complete PE/COFF, ELF, and Mach-O editor.
The total work involved:
[x] PE/COFF header parser/builder
[ ] Win32 RSRC parser/builder
[ ] ELF parser/builder
[ ] Mach-O parser/builder
[ ] UI (CLI/GUI)
[ ] Delivery (glci, digital signing, osscdn)
It's written in PHP; my progress so far is as above.
But I currently have ten or even nine other holes to fill (hardware and OS work), so this is on hold for now. If you're interested, I'll find time to publish it so we can fill them together.
> A complete PE/COFF, ELF, and Mach-O editor.
Another point: once we have this, we can drop the phar into a proper section, and micro won't need multiple builds to pin down the length anymore.
Wouldn't that effectively change how micro works at runtime? Would it stay compatible with the old glue mode?
> Wouldn't that effectively change how micro works at runtime? Would it stay compatible with the old glue mode?
Currently micro hardcodes the offset into rodata on Unix, and uses a Win32 resource on Windows.
The idea behind sfx-editor is to dig out that offset and modify it.
For Windows PE, to support upx you first upx the sfx, then use sfx-editor to fix the offset of the rsrc section, and that's it.
For Unix ELF/Mach-O, supporting upx probably requires creating a new section: upx the sfx first, then append this section, and change the micro source to read from it.
This approach doesn't change the runtime mechanism; it's just that ELF/Mach-O needs changes to the micro code, while on Windows this already works compatibly without any problems.
WIP: https://github.com/dixyes/sfx-editor
On digital signatures: on Windows, you just merge the sfx and payload and sign it directly; a custom stub is needed (on Windows the signature sits at the end of the exe, so the PHP interpreter has to avoid reading the end of the file). I'll see later whether a payload limit can be implemented, which would remove the need for a custom stub.
macOS needs some research.
upx plus signing is pretty mind-bending; I haven't figured it out yet, but it doesn't feel like a big problem.
|
gharchive/issue
| 2024-04-26T06:51:55 |
2025-04-01T04:34:04.957544
|
{
"authors": [
"crazywhalecc",
"dixyes"
],
"repo": "easysoft/phpmicro",
"url": "https://github.com/easysoft/phpmicro/issues/17",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
807769030
|
Unable to obtain IP
Whenever I try to set up the Bridge I have problems with the IP address. It just tells me that it can't find the IPv4 address. Searching the internet didn't help much.
[2/13/2021, 16:18:52] [Nuki] Nuki_Bridge_B9FC91: set Serial Number to "xxx"
[2/13/2021, 16:18:52] [Nuki] Nuki_Bridge_B9FC91: set Manufacturer to "Nuki Home Solutions GmbH"
[2/13/2021, 16:18:52] [Nuki] Nuki_Bridge_B9FC91: set Model to "Bridge"
[2/13/2021, 16:18:52] [Nuki] Nuki_Bridge_B9FC91: set Firmware Revision to "1.18.2"
[2/13/2021, 16:18:52] [Nuki] Nuki_Bridge_B9FC91: Nuki Bridge v1.18.2 xxx at 192.168.2.100:8080
[2/13/2021, 16:18:52] [Nuki] Nuki_Bridge_B9FC91: set Heartrate to 60 seconds
[2/13/2021, 16:18:52] [Nuki] Nuki_Bridge_B9FC91: set Restart to false
[2/13/2021, 16:18:52] [Nuki] Nuki_Bridge_B9FC91: set Log Level to 2
[2/13/2021, 16:18:52] [Nuki] Nuki_Bridge_B9FC91: set Status Fault to 0
[2/13/2021, 16:18:53] [Nuki] error: unhandled rejection: Error: cannot determine my IPv4 address
at /usr/lib/node_modules/homebridge-nb/lib/NbListener.js:108:27
at new Promise (<anonymous>)
at NbListener.listen (/usr/lib/node_modules/homebridge-nb/lib/NbListener.js:103:12)
at NbListener.addClient (/usr/lib/node_modules/homebridge-nb/lib/NbListener.js:131:16)
at NbPlatform.addClient (/usr/lib/node_modules/homebridge-nb/lib/NbPlatform.js:227:26)
at Bridge.init (/usr/lib/node_modules/homebridge-nb/lib/NbAccessory.js:145:52)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
Homebridge NB sets up a web server to receive push notifications from the Nuki bridge. It needs to subscribe to the bridge, specifying the address of this web server. The error indicates that it cannot determine the IP address for this web server.
I think this would only happen on a multi-homed server, where none of the interfaces share a subnet with the Nuki bridge. Or in layman's terms: a very complex network setup. What system are you running Homebridge on? How is it connected to the Nuki bridge? When you start node from the command line (in REPL mode), what is the output of os.networkInterfaces()?
"very complex network setup" may be the correct term in my case. My network is splitted into 3 parts. 192.168.1.x where the Homebridge is in, 192.168.2.x for all the smarthome devices and 192.168.3.x for guests. The traffic from 192.168.1.x and 192.168.2.x is routed between the nets.
Output from os.networkInterfaces() (I removed the parts from docker. None of them are relevant for my Homebridge setup.):
{
lo: [
{
address: '127.0.0.1',
netmask: '255.0.0.0',
family: 'IPv4',
mac: '00:00:00:00:00:00',
internal: true,
cidr: '127.0.0.1/8'
},
{
address: '::1',
netmask: 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff',
family: 'IPv6',
mac: '00:00:00:00:00:00',
internal: true,
cidr: '::1/128',
scopeid: 0
}
],
eth0: [
{
address: '192.168.1.3',
netmask: '255.255.255.0',
family: 'IPv4',
mac: 'dc:a6:32:14:a7:b7',
internal: false,
cidr: '192.168.1.3/24'
},
{
address: 'fe80::8730:6bb0:90df:1ef0',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'dc:a6:32:14:a7:b7',
internal: false,
cidr: 'fe80::8730:6bb0:90df:1ef0/64',
scopeid: 2
}
],
...
}
Output from os.networkInterfaces() (I removed the parts from docker. None of them are relevant for my Homebridge setup.):
I’m afraid they are. With this output, Homebridge NB would just select 192.168.1.3, as that’s the only non-internal IPv4 address available. Because there’s multiple potential addresses, Homebridge NB doesn’t know which one to choose.
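The selection problem can be sketched in Python (a hedged illustration; homebridge-nb's actual logic lives in NbListener.js and may differ). Given an interfaces table shaped like Node's os.networkInterfaces() output, an address sharing a subnet with the bridge is unambiguous; in a routed setup like the one above, a single candidate could still be picked, but multiple candidates leave the choice ambiguous:

```python
import ipaddress

def pick_listen_address(interfaces: dict, bridge_ip: str):
    """Pick the host IPv4 address to advertise to the Nuki bridge."""
    candidates = []
    for addrs in interfaces.values():
        for a in addrs:
            if a["family"] != "IPv4" or a["internal"]:
                continue
            # prefer an address on the same subnet as the bridge
            net = ipaddress.ip_network(a["cidr"], strict=False)
            if ipaddress.ip_address(bridge_ip) in net:
                return a["address"]
            candidates.append(a["address"])
    # routed setup: only unambiguous if exactly one candidate remains
    return candidates[0] if len(candidates) == 1 else None
```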
Here is the complete output.
> os.networkInterfaces()
{
lo: [
{
address: '127.0.0.1',
netmask: '255.0.0.0',
family: 'IPv4',
mac: '00:00:00:00:00:00',
internal: true,
cidr: '127.0.0.1/8'
},
{
address: '::1',
netmask: 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff',
family: 'IPv6',
mac: '00:00:00:00:00:00',
internal: true,
cidr: '::1/128',
scopeid: 0
}
],
eth0: [
{
address: '192.168.1.3',
netmask: '255.255.255.0',
family: 'IPv4',
mac: 'dc:a6:32:14:a7:b7',
internal: false,
cidr: '192.168.1.3/24'
},
{
address: 'fe80::8730:6bb0:90df:1ef0',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'dc:a6:32:14:a7:b7',
internal: false,
cidr: 'fe80::8730:6bb0:90df:1ef0/64',
scopeid: 2
}
],
docker0: [
{
address: '172.17.0.1',
netmask: '255.255.0.0',
family: 'IPv4',
mac: '02:42:46:75:9c:2b',
internal: false,
cidr: '172.17.0.1/16'
},
{
address: 'fe80::42:46ff:fe75:9c2b',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: '02:42:46:75:9c:2b',
internal: false,
cidr: 'fe80::42:46ff:fe75:9c2b/64',
scopeid: 4
}
],
veth056725e: [
{
address: '169.254.235.225',
netmask: '255.255.0.0',
family: 'IPv4',
mac: 'd6:7f:6b:46:93:de',
internal: false,
cidr: '169.254.235.225/16'
},
{
address: 'fe80::4ca2:7bf7:6473:6dd4',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'd6:7f:6b:46:93:de',
internal: false,
cidr: 'fe80::4ca2:7bf7:6473:6dd4/64',
scopeid: 10
},
{
address: 'fe80::d47f:6bff:fe46:93de',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'd6:7f:6b:46:93:de',
internal: false,
cidr: 'fe80::d47f:6bff:fe46:93de/64',
scopeid: 10
}
],
veth7abaca3: [
{
address: '169.254.216.152',
netmask: '255.255.0.0',
family: 'IPv4',
mac: 'fe:72:aa:05:b6:39',
internal: false,
cidr: '169.254.216.152/16'
},
{
address: 'fe80::984e:ac1c:fdee:f6bc',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'fe:72:aa:05:b6:39',
internal: false,
cidr: 'fe80::984e:ac1c:fdee:f6bc/64',
scopeid: 12
},
{
address: 'fe80::fc72:aaff:fe05:b639',
netmask: 'ffff:ffff:ffff:ffff::',
family: 'IPv6',
mac: 'fe:72:aa:05:b6:39',
internal: false,
cidr: 'fe80::fc72:aaff:fe05:b639/64',
scopeid: 12
}
]
}
Could you try beta v1.0.7-0? You should be able to specify address in config.json (under Advanced Setting when using the Homebridge UI).
v1.0.7-0 fixes the issue. Adding the address was sufficient. Thanks!
|
gharchive/issue
| 2021-02-13T15:27:30 |
2025-04-01T04:34:04.967414
|
{
"authors": [
"BNoiZe",
"ebaauw"
],
"repo": "ebaauw/homebridge-nb",
"url": "https://github.com/ebaauw/homebridge-nb/issues/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
357378768
|
Sonos v9.1 breaking change
Sonos v9.1 seems to have a breaking change in how it handles groups. Best not to update your Sonos system yet. See https://github.com/bencevans/node-sonos/issues/332.
Looks like homebridge-zp still works with Sonos 9.1.
I have already updated and it does not seem to be working at all. Does Sonos support downgrading?
Did you restart homebridge after the update? When the zoneplayer reboots, it loses any open subscriptions, so homebridge-zp no longer receives event notifications. At least until it renews the
subscriptions, which might take up to 30min.
Afaik, Sonos doesn’t provide any means to downgrade the firmware.
The only script I use is node-sonos which runs on a RPI which has been restarted. Besides this I have factory reset all 3 zoneplayers
If this is unrelated to homebridge-zp, then please discuss this with the node-sonos issue.
Strange
I registered this at node-sonos and you replied from there. Please clarify
|
gharchive/issue
| 2018-09-05T19:53:56 |
2025-04-01T04:34:04.974585
|
{
"authors": [
"ebaauw",
"magnlund"
],
"repo": "ebaauw/homebridge-zp",
"url": "https://github.com/ebaauw/homebridge-zp/issues/46",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2229914275
|
For .mp4 and .m4a Files, Add FFmpeg flag -movflags +faststart
This is a simple "optimization" for audio and video files in MPEG-4-type containers: for most use cases, it speeds up starting playback of the audio or video data. See here
Also, changed the temporary file extension in video retrieval from .mp4 to .m2t, to reflect that video data comes in as MPEG-TS.
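A sketch of the resulting invocation (the exact option order used by tidal-wave may differ, and the filenames are hypothetical):

```python
def remux_command(src: str, dst: str) -> list:
    # stream copy (no re-encode); +faststart relocates the moov atom to
    # the front of the file so playback can start before the whole file
    # has been downloaded
    return ["ffmpeg", "-i", src, "-c", "copy",
            "-movflags", "+faststart", dst]

cmd = remux_command("input.m2t", "output.mp4")
```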
|
gharchive/pull-request
| 2024-04-07T19:23:12 |
2025-04-01T04:34:04.976084
|
{
"authors": [
"ebb-earl-co"
],
"repo": "ebb-earl-co/tidal-wave",
"url": "https://github.com/ebb-earl-co/tidal-wave/pull/145",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
201546809
|
Add posibility to disable noblanks method in Signer initialization
Thank you for amazing gem!
In my work I discovered that changing the XML structure (such as with Nokogiri's noblanks, which the default behavior uses for canonicalization of XML) breaks the ability to validate at one of the endpoints I'm working with.
I've added an option for disabling noblanks in Signer initialization with the optional argument should_trim: false. The default gem behavior stays the same and all previous tests pass, except on Ruby 2.0.
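A rough Python illustration of why stripping blank text nodes can break signature validation (Signer itself is Ruby and uses Nokogiri; this only models the effect): whitespace-only nodes are part of the signed bytes, so removing them changes the digest the endpoint computes.

```python
import hashlib
import xml.etree.ElementTree as ET

def strip_blanks(xml_text: str) -> str:
    # drop whitespace-only text/tail nodes, like Nokogiri's noblanks
    root = ET.fromstring(xml_text)
    for el in root.iter():
        if el.text and not el.text.strip():
            el.text = None
        if el.tail and not el.tail.strip():
            el.tail = None
    return ET.tostring(root, encoding="unicode")

doc = "<root>\n  <a>1</a>\n</root>"
digest_raw = hashlib.sha256(doc.encode()).hexdigest()
digest_stripped = hashlib.sha256(strip_blanks(doc).encode()).hexdigest()
# the digests differ, so a signature over one form fails against the other
```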
@ebeigarts I have added a fix for the Ruby 2.0.0 + Nokogiri dependency issue which caused tests to fail. Newer versions of Nokogiri require Ruby at least 2.1.
With this fix, legacy Ruby 2.0 users can install a working Signer. Please let me know if there is a need to change something.
@ebeigarts Thank you for reply!
I've added required_ruby_version gem.required_ruby_version = '>= 2.1.0'. Also changed 'rake' and 'rspec' to single quotes just to keep it tidy.
Plus ruby 2.0 is removed from travis.yml
Thanks, released in v1.5.0
|
gharchive/pull-request
| 2017-01-18T11:25:07 |
2025-04-01T04:34:04.979343
|
{
"authors": [
"bpietraga",
"ebeigarts"
],
"repo": "ebeigarts/signer",
"url": "https://github.com/ebeigarts/signer/pull/16",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2242169452
|
[mini.completion] using mini.fuzzy to process LSP items
Contributing guidelines
[X] I have read CONTRIBUTING.md
[X] I have read CODE_OF_CONDUCT.md
[X] I have updated 'mini.nvim' to latest version
Module(s)
mini.completion, mini.fuzzy
Description
Today I'm testing fuzzy completion with mini.completion using mini.fuzzy, but I'm not sure the behavior I'm seeing is what I should expect. As I type characters, the popup opens correctly initially, but then closes as I continue to type, and finally opens again sometime later with the correct match. For example, if my buffer contains:
local H = {}
H.test_is_this_working = function() end
H.test_or_am_i_crazy = function() end
H.test_or_both = function() end
This is what happens when I perform the following actions:
Go, end of file, open line
H.t, popup opens with 3 functions from LSP
w, popup closes (not expected)
o, popup remains closed
r, popup opens again with "test_is_this_working() Function" (expected)
I was expecting the popup to remain open in step 3 and display the "test_is_this_working()" item as it was the only candidate with the letter "t" and "w" in it.
I have confirmed that executing the following returns the expected single result, so it does not appear to be an issue with mini.fuzzy:
:lua =require("mini.fuzzy").filtersort("tw", {"test_is_this_working", "test_or_am_i_crazy", "test_or_both"})
:lua =require("mini.fuzzy").filtersort("two", {"test_is_this_working", "test_or_am_i_crazy", "test_or_both"})
:lua =require("mini.fuzzy").filtersort("twor", {"test_is_this_working", "test_or_am_i_crazy", "test_or_both"})
Neovim version
NVIM v0.9.5, Build type: Release, LuaJIT 2.1.1703358377
Steps to reproduce
See steps in description.
Here is a minimal.lua configuration for testing if needed.
-- Clone 'mini.nvim' manually in a way that it gets managed by 'mini.deps'
local path_package = vim.fn.stdpath("data") .. "/site/"
local mini_path = path_package .. "pack/deps/start/mini.nvim"
if not vim.loop.fs_stat(mini_path) then
vim.cmd('echo "Installing `mini.nvim`" | redraw')
local clone_cmd = {
"git",
"clone",
"--filter=blob:none",
"https://github.com/echasnovski/mini.nvim",
mini_path,
}
vim.fn.system(clone_cmd)
vim.cmd("packadd mini.nvim | helptags ALL")
vim.cmd('echo "Installed `mini.nvim`" | redraw')
end
-- Set up 'mini.deps' (customize to your liking)
require("mini.deps").setup({ path = { package = path_package } })
MiniDeps.add("williamboman/mason.nvim")
MiniDeps.add("williamboman/mason-lspconfig.nvim")
MiniDeps.add("neovim/nvim-lspconfig")
require("mini.completion").setup({
lsp_completion = {
source_func = "completefunc",
auto_setup = true,
process_items = require("mini.fuzzy").process_lsp_items,
},
})
require("mason").setup()
require("mason-lspconfig").setup({
ensure_installed = { "lua_ls" }
})
require("lspconfig").lua_ls.setup({
on_attach = function(client, bufnr)
-- vim.bo[bufnr].omnifunc = "v:lua.MiniCompletion.completefunc_lsp"
end,
})
local H = {}
H.test_is_this_working = function() end
H.test_or_am_i_crazy = function() end
H.test_or_both = function() end
Expected behavior
See description with actual vs expected behavior.
Actual behavior
See description with actual vs expected behavior.
Yes, this works as expected (albeit not ideally).
MiniCompletion.config.lsp_completion.process_items is a function which processes the response items of an LSP request. This is done only at completion initialization (to actually get a full list of completion items). After that, the built-in completion filtering is "in charge", which is not fuzzy.
H.t, popup opens with 3 functions from LSP
w, popup closes (not expected)
o, popup remains closed
r, popup opens again with "test_is_this_working() Function" (expected)
So what happens is that pressing w results in an empty completion list (as filtering is not fuzzy) but the built-in completion is still active (you can press <BS> and it will show the previous completion list immediately). Pressing o results in built-in completion being finished (without a new completion initializing). Pressing r results in a new completion initialization, which makes a new LSP request and matches fuzzily.
Although a bit unintuitive, this is how built-in completion and 'mini.completion' work. I don't think this is a huge issue right now, as at any point pressing <C-Space> forces a new LSP request, later showing up-to-date candidates.
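The difference between the built-in filtering and fuzzy matching can be sketched (a simplified model; MiniFuzzy's actual scoring is more involved):

```python
def prefix_filter(query: str, items: list) -> list:
    # roughly what Vim's built-in completion does between LSP requests
    return [s for s in items if s.startswith(query)]

def fuzzy_filter(query: str, items: list) -> list:
    # in-order subsequence match, roughly what MiniFuzzy accepts
    def is_subsequence(q: str, s: str) -> bool:
        it = iter(s)
        return all(ch in it for ch in q)
    return [s for s in items if is_subsequence(query, s)]

items = ["test_is_this_working", "test_or_am_i_crazy", "test_or_both"]
# after typing "tw": built-in prefix filtering empties the popup,
# while fuzzy matching would keep one candidate
assert prefix_filter("tw", items) == []
assert fuzzy_filter("tw", items) == ["test_is_this_working"]
```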
Closing as "although not ideal, works as expected".
Thank you for the explanation.
In my opinion, this means that MiniCompletion.config.lsp_completion.process_items should not be used for interactive fuzzy completion. The main benefit of fuzzy completion is that I can quickly explore the list of candidates interactively as I'm typing and BS'ing and typing and BS'ing.
The actual behavior, as you state, is not intuitive when using MiniFuzzy.process_lsp_items. And, likely, not what most users would expect of fuzzy completion. I think process_items is better suited for filtering as most of MiniCompletion's documentation states.
Over the past several weekends, I've been incorporating more and more of the mini ecosystem. I've adopted all of them so far with the exception of mini.pairs (doesn't support several use cases in python) and mini.deps (haven't tried it yet, next weekend likely). I'm torn on mini.completion due to the fuzzy finding behavior as I'm so accustomed to nvim-cmp. I'll test for a week without fuzzy finding using the default configuration before I make a decision.
The balance you are trying to achieve with regards to simplicity and user experience is a tough act. For me, I think you have found a nice balance in most of the modules. And, since I've adopted most of these, I feel like my vim instance is not as sluggish. Maybe it's all in my head, but vim feels snappy again to me, so thank you.
I'm torn on mini.completion due to the fuzzy finding behavior as I'm so accustomed to nvim-cmp. I'll test for a week without fuzzy finding using the default configuration before I make a decision.
This is the current price to pay for 'mini.completion' to do as much as possible using built-in features. It is the main reason why this module is not as big as might have been with custom matching and visualization. I have hope that in some future there will be a fuzzy matching for built-in completion (there were open issues about it in Vim).
I have hope that in some future there will be a fuzzy matching for built-in completion (there were open issues about it in Vim).
@echasnovski is this what you were referring to? https://neovim.io/doc/user/options.html#'completeopt'
There is now a fuzzy option in completeopt right?
@echasnovski is this what you were referring to? https://neovim.io/doc/user/options.html#'completeopt'
There is now a fuzzy option in completeopt right?
Yes, but it is currently only on Nightly (0.11). And all caveats of this comment still apply: adding fuzzy to 'completeopt' will fuzzy-match the already received list of completion items. It will not enable fuzzy matching on the LSP server side. As in:
Assume there is a valid function completion candidate.
Typing fn fast will result in fn being sent to the LSP server. If there is no fuzzy matching on its side, it will not match function.
Typing f, waiting for completion, and then typing n will match function (if it was returned as the label of some completion item) in case fuzzy is part of 'completeopt'.
Ahhhh this makes so much sense now why autocomplete can be sometimes "hit or miss" when fuzzy! I've always wondered why sometimes I need to backspace and re-type something and then it shows up! Thanks
|
gharchive/issue
| 2024-04-14T13:25:09 |
2025-04-01T04:34:05.040089
|
{
"authors": [
"GitMurf",
"echasnovski",
"pkazmier"
],
"repo": "echasnovski/mini.nvim",
"url": "https://github.com/echasnovski/mini.nvim/issues/812",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1360074196
|
javax replaced with jakarta
javax.ws.rs was moved to jakarta.ws.rs, therefore the javax dependency was replaced with the latest jakarta version
https://github.com/eclipse-basyx/basyx-java-sdk/pull/109 needs to be merged first.
Thanks a lot!
Unfortunately, I have to revert this change. For details, see https://github.com/eclipse-basyx/basyx-java-sdk/pull/109#issuecomment-1252219429
|
gharchive/pull-request
| 2022-09-02T11:37:38 |
2025-04-01T04:34:05.047703
|
{
"authors": [
"FrankSchnicke",
"saffis"
],
"repo": "eclipse-basyx/basyx-java-components",
"url": "https://github.com/eclipse-basyx/basyx-java-components/pull/130",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1378747209
|
the doc for /states has been updated
The doc for /states has been updated to cover the search functionality
Updating the change request: just put in clearer examples; "1" is fine.
Hi @MatthewKhouzam ,
Please, check my new changes above
Also reminder: give an example query like "Duration > 10ms"
Please squash the updates into one patch and put a more descriptive commit message.
done
Also reminder: give an example query like "Duration > 10ms"
it was on the 3rd commit. After squashing it should be visible now
@hriday-panchasara did you generate the API using swagger UI? If yes, please provide me the link to the patch in the Trace Compass incubator to review. If not, please create such a patch in the Trace Compass incubator and update this PR accordingly. The procedure is described in the README of this repo.
Search functionality was covered by #92 specifically commit https://github.com/eclipse-cdt-cloud/trace-server-protocol/pull/92/commits/3471802f0b42c8f98e449f875ac92b0d2f688372.
Closing this PR. Please reopen if required.
|
gharchive/pull-request
| 2022-09-20T02:57:30 |
2025-04-01T04:34:05.051559
|
{
"authors": [
"Arxemond777",
"MatthewKhouzam",
"bhufmann"
],
"repo": "eclipse-cdt-cloud/trace-server-protocol",
"url": "https://github.com/eclipse-cdt-cloud/trace-server-protocol/pull/84",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1615369077
|
Provisioner's TransferProcess automatic deprovisioning
Discussed in https://github.com/eclipse-edc/Connector/discussions/2542
Originally posted by ndr-brt March 3, 2023
Currently the deprovisioning process is completely manual, which means that if no one takes care of it there could be a potential leak of provisioned resources that aren't needed anymore.
While I get the need on the consumer side to keep the data put in the provisioned resource (like a bucket) ready to be consumed, I don't see why the provider couldn't automatically deprovision the transfer resources as soon as the transfer gets completed or terminated.
What do you think?
Yes, this was fixed in the new protocol so we should be able to do this.
|
gharchive/issue
| 2023-03-08T14:27:26 |
2025-04-01T04:34:05.056743
|
{
"authors": [
"jimmarino",
"ndr-brt"
],
"repo": "eclipse-edc/Connector",
"url": "https://github.com/eclipse-edc/Connector/issues/2563",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2254453681
|
feat: update Iron VC to 0.14.0
What this PR changes/adds
this PR updates our implementation of the JsonWebKey2020Signature suite to the new API of the iron-verifiable-credentials library v0.14.0.
Why it does that
Avoid technical debt
Further notes
List other areas of code that have changed but are not necessarily linked to the main feature. This could be method
signature changes, package declarations, bugs that were encountered and were fixed inline, etc.
Linked Issue(s)
Closes #4066
Please be sure to take a look at the contributing guidelines and our etiquette for pull requests.
Codecov Report
Attention: Patch coverage is 0%, with 277 lines in your changes missing coverage. Please review.
Project coverage is 19.03%. Comparing base (7f20ba5) to head (843b001).
Report is 223 commits behind head on main.
| Files | Patch % | Lines |
|---|---|---|
| .../verifiablecredentials/linkeddata/LdpVerifier.java | 0.00% | 73 Missing :warning: |
| ...e/edc/security/signature/jws2020/Jws2020Proof.java | 0.00% | 45 Missing :warning: |
| .../security/signature/jws2020/Jwk2020KeyAdapter.java | 0.00% | 40 Missing :warning: |
| .../security/signature/jws2020/Jws2020ProofDraft.java | 0.00% | 30 Missing :warning: |
| ...lecredentials/linkeddata/DataIntegrityKeyPair.java | 0.00% | 24 Missing :warning: |
| ...ipse/edc/security/signature/jws2020/JwsIssuer.java | 0.00% | 24 Missing :warning: |
| ...urity/signature/jws2020/Jws2020SignatureSuite.java | 0.00% | 20 Missing :warning: |
| ...dc/verifiablecredentials/linkeddata/LdpIssuer.java | 0.00% | 11 Missing :warning: |
| ...se/edc/security/signature/jws2020/JsonAdapter.java | 0.00% | 3 Missing :warning: |
| ...iablecredentials/linkeddata/DidMethodResolver.java | 0.00% | 2 Missing :warning: |
... and 4 more
:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
Additional details and impacted files
@@ Coverage Diff @@
## main #4125 +/- ##
===========================================
- Coverage 71.74% 19.03% -52.72%
===========================================
Files 919 986 +67
Lines 18457 20306 +1849
Branches 1037 1148 +111
===========================================
- Hits 13242 3865 -9377
- Misses 4756 16328 +11572
+ Partials 459 113 -346
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
gharchive/pull-request
| 2024-04-20T08:37:04 |
2025-04-01T04:34:05.079320
|
{
"authors": [
"codecov-commenter",
"paullatzelsperger"
],
"repo": "eclipse-edc/Connector",
"url": "https://github.com/eclipse-edc/Connector/pull/4125",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1082280939
|
Could not be built with JDK 8
I cloned the repo and ran "mvn clean install", then I got this:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile (default-compile) on project jakarta.inject-api: Fatal error compiling: invalid flag: --release -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile (default-compile) on project jakarta.inject-api: Fatal error compiling
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:215)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: org.apache.maven.plugin.MojoExecutionException: Fatal error compiling
at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute (AbstractCompilerMojo.java:1145)
at org.apache.maven.plugin.compiler.CompilerMojo.execute (CompilerMojo.java:187)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: org.codehaus.plexus.compiler.CompilerException: invalid flag: --release
at org.codehaus.plexus.compiler.javac.JavaxToolsCompiler.compileInProcess (JavaxToolsCompiler.java:173)
at org.codehaus.plexus.compiler.javac.JavacCompiler.performCompile (JavacCompiler.java:174)
at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute (AbstractCompilerMojo.java:1134)
at org.apache.maven.plugin.compiler.CompilerMojo.execute (CompilerMojo.java:187)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: java.lang.IllegalArgumentException: invalid flag: --release
at com.sun.tools.javac.api.JavacTool.processOptions (JavacTool.java:206)
at com.sun.tools.javac.api.JavacTool.getTask (JavacTool.java:156)
at com.sun.tools.javac.api.JavacTool.getTask (JavacTool.java:107)
at com.sun.tools.javac.api.JavacTool.getTask (JavacTool.java:64)
at org.codehaus.plexus.compiler.javac.JavaxToolsCompiler.compileInProcess (JavaxToolsCompiler.java:125)
at org.codehaus.plexus.compiler.javac.JavacCompiler.performCompile (JavacCompiler.java:174)
at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute (AbstractCompilerMojo.java:1134)
at org.apache.maven.plugin.compiler.CompilerMojo.execute (CompilerMojo.java:187)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
I'm using
$mvn -version
Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
Maven home: D:\Dev\ApacheMaven\current-maven
Java version: 1.8.0_252, vendor: Azul Systems, Inc., runtime: D:\Dev\Java\jdk8x64\jre
Default locale: de_DE, platform encoding: Cp1252
OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
This is expected as it now includes a JPMS module-info.class which requires Java SE 11+ to compile. That is the only file compiled above SE 8. The rest of the classes in the jar will be at target level SE 8.
Like Scott says, this is now expected. Master branch is targeting EE 10 which is based off JDK 11 anyway.
Closing the issue because the behavior is expected. If you disagree, feel free to reopen and/or leave a comment.
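For later readers, a common way to keep such a project buildable on older JDKs is to apply the compiler's release flag only when the running JDK supports it. A minimal pom.xml sketch (illustrative only, not the actual injection-api build configuration; note the module-info compilation itself would still need a JDK 11+ toolchain, as noted above):

```xml
<!-- Activate maven.compiler.release only on JDK 9+; older JDKs fall back
     to -source/-target and avoid "invalid flag: --release". -->
<profiles>
  <profile>
    <id>jdk9-or-newer</id>
    <activation>
      <jdk>[9,)</jdk>
    </activation>
    <properties>
      <maven.compiler.release>8</maven.compiler.release>
    </properties>
  </profile>
</profiles>
```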
|
gharchive/issue
| 2021-12-16T14:45:45 |
2025-04-01T04:34:05.083916
|
{
"authors": [
"ahoehma",
"manovotn",
"starksm64"
],
"repo": "eclipse-ee4j/injection-api",
"url": "https://github.com/eclipse-ee4j/injection-api/issues/26",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1861534051
|
Feat: Schema overriding hook for OpenApiConfig [POC]
Adds a hook which allows overriding OpenAPI schema definitions.
This is useful for two scenarios:
Override the schema description of a generic class with specific type parameter, as the annotated Schema description might not be specific enough.
Example: TypeKeyNameVerboseDto<CountryCode>
Clone a schema, overriding its name and description, to use it for some specific field.
This is due to OpenAPI limitations which don't allow to specify a field description for complex singular objects, see https://github.com/springdoc/springdoc-openapi/issues/1178 .
Example: legalAddress in LegalEntityPartnerCreateRequest etc.
This is just a proof-of-concept how it could be done. I don't know if it's worth it considering the effort, especially in the second case.
Proof of concept for https://github.com/eclipse-tractusx/bpdm/issues/409
https://github.com/eclipse-tractusx/bpdm/pull/398 must be merged first
The ADR links to this PR saying "The potential workarounds are implemented as proof-of-concept in ...".
I don't know if the PR is still readable after closing it?
The ADR links to this PR saying "The potential workarounds are implemented as proof-of-concept in ...". I don't know if the PR is still readable after closing it?
No worries, the closed pull request is still readable and can be reopened as well. I will close it for now. If we feel we need to open it, we can do so at a later time.
|
gharchive/pull-request
| 2023-08-22T14:01:28 |
2025-04-01T04:34:05.209273
|
{
"authors": [
"martinfkaeser",
"nicoprow"
],
"repo": "eclipse-tractusx/bpdm",
"url": "https://github.com/eclipse-tractusx/bpdm/pull/405",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2672393764
|
Integrate getStoredTaskResultByRequestId in xpanse
[ ] Add a new mandatory column 'HANDLER' to the SERVICE_ORDER.
[ ] Values for HANDLER -> TERRAFORM-LOCAL, TERRAFORM-BOOT, OPENTOFU-LOCAL, TOFU-MAKER, XPANSE, WORKFLOW
[ ] Update all inserts for SERVICE_ORDER to have this value.
[ ] Add new configuration - max.service.order.processing.duration.in.seconds=180
[ ] Whenever there is an API call that returns DeployedService or DeployedServiceDetails, and the corresponding service has been stuck in DESTROYING/DEPLOYING/MODIFYING for longer than the above configured duration (order created time > the max processing duration), and the order handler is TERRAFORM-BOOT or OPENTOFU-LOCAL, call the newly added services in #2079 and #2129 respectively to fetch the latest status. If they return an exception (already returned or not found), set the order status to ERROR with the error message from the API and set the ServiceDeploymentState to MANUAL_CLEANUP_REQUIRED.
[ ] If API returns 204 (implemented in #2131), then return the status same as what is stored in xpanse DB
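The first two checklist items could look roughly like this as DDL (a sketch under assumptions; the column type, length, and migration tooling are not specified in the issue):

```sql
ALTER TABLE SERVICE_ORDER
    ADD COLUMN HANDLER VARCHAR(32) NOT NULL DEFAULT 'XPANSE';
-- Allowed values: TERRAFORM-LOCAL, TERRAFORM-BOOT, OPENTOFU-LOCAL,
-- TOFU-MAKER, XPANSE, WORKFLOW
```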
let me do it
|
gharchive/issue
| 2024-11-19T14:39:57 |
2025-04-01T04:34:05.219346
|
{
"authors": [
"WangLiNaruto",
"swaroopar"
],
"repo": "eclipse-xpanse/xpanse",
"url": "https://github.com/eclipse-xpanse/xpanse/issues/2130",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
959731593
|
Enhance samediff to allow dimensions as variables
What changes were proposed in this pull request?
Add ops and related samediff calls covering dimensions as variables while preserving backwards compatibility.
Depends on
https://github.com/eclipse/deeplearning4j/pull/9410
(Please fill in changes proposed in this fix)
How was this patch tested?
(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
Quick checklist
The following checklist helps ensure your PR is complete:
[x] Eclipse Contributor Agreement signed, and signed commits - see IP Requirements page for details
[x] Reviewed the Contributing Guidelines and followed the steps within.
[x] Created tests for any significant new code additions.
[x] Relevant tests for your changes are passing.
@treo I agree overall. I feel like the best way would be to make the op generation integrate with the serialization and also just generate all the op classes wholesale. We were targeting that anyway and I intend on revisiting this. It was a trade off to unblock work getting done while still avoiding manually modifying the generated classes from the ops.
|
gharchive/pull-request
| 2021-08-04T01:05:59 |
2025-04-01T04:34:05.327481
|
{
"authors": [
"agibsonccc"
],
"repo": "eclipse/deeplearning4j",
"url": "https://github.com/eclipse/deeplearning4j/pull/9410",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
792056162
|
DTLS Handshake timeout
Is there any way to configure the DTLS handshake duration? What's the default duration?
I have an nRF9160 for which I want to keep a pretty long DTLS handshake duration, to keep the power consumption low.
Thanks!
You can have a look at : https://github.com/eclipse/leshan/wiki/Request-Timeout#dtls-handshake-timeout
Could you explain "how keeping a long DTLS Handshake" will keep the power consumption low ? You mean because you keep CPU power in some kind of low profile and so the crypto operations are very slow at client side ? or I missed something ?
Yes the power consumption is correlating with data consumption. The DTLS handshake is consuming quite a bit of data compared to the regular excange of data between the server and client.
Our goal is to run our nRF9160 with board on battery supply, that is why this is in our interest.
I think I didn't understand you. :sweat_smile:
The DTLS handshake is just a message exchange between 2 peers to establish a secure connection.
You talk about "handshake duration"; I don't really understand what you mean. I first thought you were talking about the DTLS handshake message timeout. But reading this last message I'm not so sure. :thinking:
It's really not clear but It seems you want to reduce the number of handshake and not the duration of it ?
I guess when you talk about "DTLS handshake duration", you try to talk about DTLS connection lifetime ?
If I'm correct: currently there is no lifetime for connections (nor for sessions), meaning they could live forever (see https://github.com/eclipse/californium/issues/617) :
They are stored in memory, so you lose them on restart. (There is some work about this on the Californium side: https://github.com/eclipse/californium/issues/1345#issuecomment-764646743)
Connections are stored in an LRU cache; when this cache is full, the least recently used one is removed (and so a new handshake will be required). The default size is NetworkConfigDefaults.DEFAULT_MAX_ACTIVE_PEERS (150000) and can be changed in the Californium NetworkConfig with Keys.MAX_ACTIVE_PEERS.
A connection is identified by address+port, so if for any reason the peer address/port changes (e.g. peer behind NAT), a new handshake is needed. Using a fixed address or connection id should help to reduce the number of handshakes. Connection id should be supported by Californium/Scandium but I never used it in production.
I'm not sure, but there could be other reasons to lose the connection (e.g. a FATAL HANDSHAKE ERROR).
All those are more californium/scandium questions than Leshan ones.
I hope this time better understand you. If not please give a more precise description of your problem with more context.
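The LRU cache size mentioned above can be set in Californium's standard properties file; a minimal sketch (file name and mechanism per Californium's defaults, value illustrative):

```properties
# Californium.properties
# Larger LRU cache => fewer evicted DTLS connections => fewer new handshakes.
MAX_ACTIVE_PEERS=150000
```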
"It's really not clear but It seems you want to reduce the number of handshake and not the duration of it ?
I guess when you talk about "DTLS handshake duration", you try to talk about DTLS connection lifetime ?"
Yes! You are correct, my bad for poor description :)
Thank you so much for your response! I will look into your description.
Do not hesitate to share with us what you choose to resolve your issue.
@robertkofoed
I guess your term "DTLS handshake duration" is not the time the handshake itself requires; it's more the time the handshake "lasts" (the time you can exchange messages without a new handshake). Is that it?
Do you use CAT-NB (NB-IoT)? Or CAT-M1?
If the first, CAT-NB, is used, the mobile provider usually applies a NAT if the device is communicating with the public network (if you use a private network, that may be different). Such NATs are nasty (see DTLS - NAT). You may ask your mobile provider if the NAT timeout could be enlarged, but please ensure that you configure all NATs on your route; there may be additional ones.
If adjusting the NAT timeouts is no option (as in my experience in most cases), then you will need frequent handshakes, maybe full handshakes or resumption handshakes. With that, you will face the next drawback: CAT-NB unfortunately has a very large latency. 1-2s are not unusual. Depending on the use of HELLO_VERIFY_REQUEST, even a resumption will easily take up to 8s, and a full handshake even more.
FMPOV, I decided two years ago that this is no option. Therefore I started to implement DTLS 1.2 Connection ID in Eclipse/Californium, a CoAP/DTLS implementation used by Leshan. With that, the results using CAT-NB are in my opinion pretty good.
INFO [statistic]: success: 304, failures: 2
INFO [statistic]: RTT all: 304, avg.: 5295 ms, 95%: 8320 ms, 99%: 50900 ms, 99.9%: 55358 ms, max.: 55358 ms
(The times (5s average) include the PSM wakeup and reconnect. The coap-request itself usually takes about 2-3s.)
|
gharchive/issue
| 2021-01-22T14:38:55 |
2025-04-01T04:34:05.394182
|
{
"authors": [
"boaks",
"robertkofoed",
"sbernard31"
],
"repo": "eclipse/leshan",
"url": "https://github.com/eclipse/leshan/issues/961",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
852441324
|
main shacl tests are not integration tests
GitHub issue resolved: #
Briefly describe the changes proposed in this PR:
The main SHACL test suite has been marked as integration tests, even though it is the main unit-test suite for the ShaclSail, checking that constraints work.
PR Author Checklist (see the contributor guidelines for more details):
[ ] my pull request is self-contained
[ ] I've added tests for the changes I made
[ ] I've applied code formatting (you can use mvn process-resources to format from the command line)
[ ] I've squashed my commits down to one or a few meaningful commits
[ ] every commit message starts with the issue number (GH-xxxx) followed by a meaningful description of the change
[ ] every commit has been signed off
Arguably they are integration tests in that they execute multiple test cases against a real MemoryStore - they're not proper unit tests in that respect. But semantics aside (and we aren't always very careful about this in other places either, to be honest), the main reason to mark them as integration tests is that they take over 2 minutes to complete, and moving that time to the integration testing phase makes the parallel CI faster.
It's only 2 minutes so I have no massive objections to naming them back, but: what problem are you having with executing these as integration tests?
Integration tests aren't required to pass before merging, and this is the core test suite for the ShaclSail.
In this run now they took 1:40 to run.
> Integration tests aren't required to pass before merging, and this is the core test suite for the ShaclSail.
So? You know when you've made changes to the ShaclSail, you'll have to wait for the integration tests to succeed, right?
The point of the setup is that for those PRs where nothing was changed in the ShaclSail, we can (optionally) choose to merge quickly.
As an aside I'm perfectly fine with making the integration testing job required before merging as well if you prefer. I treat it as required for pretty much anything except the most trivial editorial changes already anyway.
> In this run now they took 1:40 to run.
Yes, in the grand scheme of things it's not a lot either way. But there's still the fact that technically, these tests are integration tests, not unit tests. That doesn't make them less important though (quite the reverse I'd say). It was never my intent that integration tests can be safely skipped before merging.
That's not really a definition of integration and unit tests that I would use. The intention of the ShaclTest class is to test various shapes and transaction scenarios. It only indirectly tests the integration with the MemoryStore, the SPARQL engine and Rio. The intention is to test shapes. Mocking the entire Sail backend, SPARQL engine and Rio code would be overkill when we can just simply use that code.
If we are going to be that strict on the difference between integration and unit tests then things like SparqlRegexTest, StoreSerializationTest, FederationConnectionTest and a lot more should be marked as integration tests.
Things like MultithreadedMemoryStoreIT though on the other hand specifically test how the ShaclSail acts when integrated with the MemoryStore under a multithreaded workload. That is why we also have MultithreadedNativeStoreIT and MultithreadedNativeStoreRDFSIT.
I could split out the different isolation levels into their own suite marked as slow, so that the main tests only run with NONE level.
> That's not really a definition of integration and unit tests that I would use. The intention of the ShaclTest class is to test various shapes and transaction scenarios. It only indirectly tests the integration with the MemoryStore, the SPARQL engine and Rio. The intention is to test shapes. Mocking the entire Sail backend, SPARQL engine and Rio code would be overkill when we can just simply use that code.
I'm not worried about the use of Rio to read in fixtures - that's fine. It's specifically the use of the MemoryStore that in my eyes makes this an integration test suite, though I'll grant you that if you squint a bit and say "but the memorystore is only used in the role of a stub component so that we don't have to really mock/stub things", you could get away with it. In fact we do that, in several places.
> If we are going to be that strict on the difference between integration and unit tests then things like SparqlRegexTest, StoreSerializationTest, FederationConnectionTest and a lot more should be marked as integration tests
Probably, if we're strict. I'm not really suggesting that we really should always be this strict. But the title of your pull request is "main shacl tests are not integration tests", which is a strong statement that I wanted to examine a little.
> I could split out the different isolation levels into their own suite marked as slow, so that the main tests only run with NONE level.
That might be an option worth keeping in reserve if this suite starts to grow larger (and more costly) again, but for now let's keep things simple.
You've about halfway convinced me that these should "count" as unit tests, so just go ahead and merge this - could you fix the commit message with an issue number though? No need for a new ticket, the original issue on which we did the test suite/CI refactor might be a good one (even if it's closed).
For what it's worth, after I've collected my thoughts on this a bit more I'll write something up about how we should generally treat the distinction between different test types (and why it matters), I think this may be useful to other contributors as well.
|
gharchive/pull-request
| 2021-04-07T14:00:27 |
2025-04-01T04:34:05.529071
|
{
"authors": [
"hmottestad",
"jeenbroekstra"
],
"repo": "eclipse/rdf4j",
"url": "https://github.com/eclipse/rdf4j/pull/2971",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1497418620
|
GH-4295 fed x not applied dataset
GitHub issue resolved: #4295
Briefly describe the changes proposed in this PR:
Pass the dataset that was set into the queries created/evaluated in TripleSourceBase.
PR Author Checklist (see the contributor guidelines for more details):
[x] my pull request is self-contained
[x] I've added tests for the changes I made
[x] I've applied code formatting (you can use mvn process-resources to format from the command line)
[x] I've squashed my commits where necessary
[x] every commit message starts with the issue number (GH-xxxx) followed by a meaningful description of the change
@JervenBolleman sorry for the delay in the review. All looking good from my side. Thanks for the fix
|
gharchive/pull-request
| 2022-12-14T21:19:56 |
2025-04-01T04:34:05.533757
|
{
"authors": [
"JervenBolleman",
"aschwarte10"
],
"repo": "eclipse/rdf4j",
"url": "https://github.com/eclipse/rdf4j/pull/4316",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
629971636
|
Fix: Duplicate content-type header in NVD-rest
Currently, any request to nvd/vulnerabilities/<cve-id>
returns a response with the headers
Content-Type: application/json
Content-Type: application/json; charset=utf-8
Flask, since version 1.1.0, automatically sets the Content-Type: application/json header;
see https://flask.palletsprojects.com/en/1.1.x/quickstart/#apis-with-json
The code
return data, 200, {'Content-Type': 'application/json; charset=utf-8'}
results in the duplicate Content-Type header.
This PR fixes the duplicate code.
TODOs
[ ] Tests
[ ] Documentation
Thanks a lot for spotting this @anddann !
Could you please sign the ECA so that I can merge this?
Antonino
Sry, I've signed the ECA now.
|
gharchive/pull-request
| 2020-06-03T12:55:36 |
2025-04-01T04:34:05.568435
|
{
"authors": [
"anddann",
"copernico"
],
"repo": "eclipse/steady",
"url": "https://github.com/eclipse/steady/pull/395",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1710431486
|
SCC modularization
I've split the SCC transform into six transforms:
SCCBaseTransformation
SCCDevectorTransformation
SCCDemoteTransformation
SCCRevectorTransformation
SCCHoistTransformation
SCCAnnotateTransformation
The original SingleColumnCoalescedTransformation also still exists, but it's basically now a wrapper script that calls the methods contained in the above transformations.
I've updated loki-transform.py to run each one of the six above one-by-one rather than running the unified SCC transform. This is regression tested on the ecWam physics against the original SCC transform and produces string-identical results.
I still have to refactor the tests to better test the new modular transforms individually, but before I do that I would really appreciate your input on whether I am on the right path.
NB: this is stacked on top of a small bug fix in #89.
Regression tested against cloudsc and ecWam physics. I've left the old SingleColumnCoalescedTransformation (including some tests that call it) in. It's now basically a wrapper script that calls the methods in the modular transformations. In principle this can, and perhaps should, be removed.
Thanks a lot for your comments! I've implemented all of them, please let me know what you think.
Thanks so much, looks great! Going in...
|
gharchive/pull-request
| 2023-05-15T16:28:47 |
2025-04-01T04:34:05.592927
|
{
"authors": [
"awnawab",
"reuterbal"
],
"repo": "ecmwf-ifs/loki",
"url": "https://github.com/ecmwf-ifs/loki/pull/81",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1353339416
|
feat(domain): add token model
close #2
:tada: This PR is included in version 1.2.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2022-08-28T12:29:22 |
2025-04-01T04:34:05.601088
|
{
"authors": [
"MEBoo",
"MeblabsBot"
],
"repo": "eco-trip/rasp-control-unit",
"url": "https://github.com/eco-trip/rasp-control-unit/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
65354517
|
Is there any way to improve the clarity and resolution of ECharts charts?
ECharts offers a rich set of chart types, but the generated charts never feel sharp enough: zoom in a little and the jagged edges become obvious. Tables generated in Word or Excel, by contrast, look crisp no matter how far you zoom in. It would be great if ECharts charts could be rendered with higher clarity and resolution; without that, they feel hard to use in commercial applications.
The echarts <canvas> is a bitmap image, so it is bound to look jagged when scaled up; for lossless scaling you need vector output, i.e. SVG.
How can lossless SVG or other high-definition charts be generated in echarts? Which parameters need to be adjusted, and how? Could you give a concrete example? I see quite a few users have this need.
Not possible at the moment; it does seem like a good feature request.
It has been a long time now; can vector images be generated yet?
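To answer the last question for later readers: ECharts 4 and later ship an SVG renderer, which provides vector output. A minimal configuration sketch (assuming ECharts 4+ is loaded on the page; element id is illustrative):

```js
// Initialize with the SVG renderer instead of the default canvas renderer,
// so the chart scales without pixelation.
var chart = echarts.init(document.getElementById('main'), null, {
    renderer: 'svg'
});
chart.setOption({
    xAxis: { type: 'category', data: ['A', 'B', 'C'] },
    yAxis: { type: 'value' },
    series: [{ type: 'bar', data: [5, 20, 36] }]
});
```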
|
gharchive/issue
| 2015-03-31T03:14:48 |
2025-04-01T04:34:05.631556
|
{
"authors": [
"muzuiget",
"pissang",
"wuwencheng",
"zhuligang"
],
"repo": "ecomfe/echarts",
"url": "https://github.com/ecomfe/echarts/issues/1431",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
276529211
|
hoverable:false has no effect on map series
One-line summary [问题简述]
I want to disable the region highlight effect when hovering the mouse over the map.
Setting hoverable:false on a map-type series in ECharts 3 has no effect.
Version & Environment [版本及环境]
ECharts version [ECharts 版本]: latest echarts
Browser version [浏览器类型和版本]: latest Chrome
OS Version [操作系统类型和版本]: Windows 10
Expected behaviour [期望结果]
hoverable:false on the map series takes effect
ECharts option [ECharts配置项]
option = {
title : {
text: 'iphone销量',
subtext: '纯属虚构',
left: 'center'
},
tooltip : {
trigger: 'item'
},
legend: {
orient: 'vertical',
left: 'left',
data:['iphone3','iphone4','iphone5']
},
visualMap: {
min: 0,
max: 2500,
left: 'left',
top: 'bottom',
text:['高','低'], // label text, defaults to numeric text
calculable : true
},
toolbox: {
show: true,
orient : 'vertical',
left: 'right',
top: 'center',
feature : {
mark : {show: true},
dataView : {show: true, readOnly: false},
restore : {show: true},
saveAsImage : {show: true}
}
},
series : [
{
name: 'iphone3',
type: 'map',
hoverable: false,
mapType: 'china',
roam: false,
label: {
normal: {
show: false
},
emphasis: {
show: true
}
},
data:[
{name: '北京',value: Math.round(Math.random()*1000)},
{name: '天津',value: Math.round(Math.random()*1000)},
{name: '上海',value: Math.round(Math.random()*1000)},
{name: '重庆',value: Math.round(Math.random()*1000)},
{name: '河北',value: Math.round(Math.random()*1000)},
{name: '河南',value: Math.round(Math.random()*1000)},
{name: '云南',value: Math.round(Math.random()*1000)},
{name: '辽宁',value: Math.round(Math.random()*1000)},
{name: '黑龙江',value: Math.round(Math.random()*1000)},
{name: '湖南',value: Math.round(Math.random()*1000)},
{name: '安徽',value: Math.round(Math.random()*1000)},
{name: '山东',value: Math.round(Math.random()*1000)},
{name: '新疆',value: Math.round(Math.random()*1000)},
{name: '江苏',value: Math.round(Math.random()*1000)},
{name: '浙江',value: Math.round(Math.random()*1000)},
{name: '江西',value: Math.round(Math.random()*1000)},
{name: '湖北',value: Math.round(Math.random()*1000)},
{name: '广西',value: Math.round(Math.random()*1000)},
{name: '甘肃',value: Math.round(Math.random()*1000)},
{name: '山西',value: Math.round(Math.random()*1000)},
{name: '内蒙古',value: Math.round(Math.random()*1000)},
{name: '陕西',value: Math.round(Math.random()*1000)},
{name: '吉林',value: Math.round(Math.random()*1000)},
{name: '福建',value: Math.round(Math.random()*1000)},
{name: '贵州',value: Math.round(Math.random()*1000)},
{name: '广东',value: Math.round(Math.random()*1000)},
{name: '青海',value: Math.round(Math.random()*1000)},
{name: '西藏',value: Math.round(Math.random()*1000)},
{name: '四川',value: Math.round(Math.random()*1000)},
{name: '宁夏',value: Math.round(Math.random()*1000)},
{name: '海南',value: Math.round(Math.random()*1000)},
{name: '台湾',value: Math.round(Math.random()*1000)},
{name: '香港',value: Math.round(Math.random()*1000)},
{name: '澳门',value: Math.round(Math.random()*1000)}
]
},
{
name: 'iphone4',
type: 'map',
hoverable: false,
mapType: 'china',
label: {
normal: {
show: false
},
emphasis: {
show: true
}
},
data:[
{name: '北京',value: Math.round(Math.random()*1000)},
{name: '天津',value: Math.round(Math.random()*1000)},
{name: '上海',value: Math.round(Math.random()*1000)},
{name: '重庆',value: Math.round(Math.random()*1000)},
{name: '河北',value: Math.round(Math.random()*1000)},
{name: '安徽',value: Math.round(Math.random()*1000)},
{name: '新疆',value: Math.round(Math.random()*1000)},
{name: '浙江',value: Math.round(Math.random()*1000)},
{name: '江西',value: Math.round(Math.random()*1000)},
{name: '山西',value: Math.round(Math.random()*1000)},
{name: '内蒙古',value: Math.round(Math.random()*1000)},
{name: '吉林',value: Math.round(Math.random()*1000)},
{name: '福建',value: Math.round(Math.random()*1000)},
{name: '广东',value: Math.round(Math.random()*1000)},
{name: '西藏',value: Math.round(Math.random()*1000)},
{name: '四川',value: Math.round(Math.random()*1000)},
{name: '宁夏',value: Math.round(Math.random()*1000)},
{name: '香港',value: Math.round(Math.random()*1000)},
{name: '澳门',value: Math.round(Math.random()*1000)}
]
},
{
name: 'iphone5',
type: 'map',
hoverable: false,
mapType: 'china',
label: {
normal: {
show: false
},
emphasis: {
show: true
}
},
data:[
{name: '北京',value: Math.round(Math.random()*1000)},
{name: '天津',value: Math.round(Math.random()*1000)},
{name: '上海',value: Math.round(Math.random()*1000)},
{name: '广东',value: Math.round(Math.random()*1000)},
{name: '台湾',value: Math.round(Math.random()*1000)},
{name: '香港',value: Math.round(Math.random()*1000)},
{name: '澳门',value: Math.round(Math.random()*1000)}
]
}
]
}
Other comments [other information]
Same as #7109
|
gharchive/issue
| 2017-11-24T06:57:31 |
2025-04-01T04:34:05.636573
|
{
"authors": [
"chengwubin",
"pissang"
],
"repo": "ecomfe/echarts",
"url": "https://github.com/ecomfe/echarts/issues/7110",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
263709090
|
The checked state of checkboxes inside s-for is sometimes inconsistent with the data state
Reproduction with the latest master code: https://jsbin.com/yopiyak/1/edit?js,output
After typing bar in the input, filteredDatasource changes; bar is expected to stay checked, but actually it does not.
Hey, it's not finished yet...
I see
Just pushed a new version; I'll get back to your pile of issues tomorrow...
It looks like the whitespace between tags is all gone
Yeah, still not finished; an option needs to be exposed to decide whether to preserve whitespace
|
gharchive/issue
| 2017-10-08T10:49:08 |
2025-04-01T04:34:05.638909
|
{
"authors": [
"errorrik",
"leeight"
],
"repo": "ecomfe/san",
"url": "https://github.com/ecomfe/san/issues/97",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1160029653
|
Posting deadline not available for configuration
In calculate there is a default, but the merchant has no way to configure their own
Since I can't assign myself here, I'm putting myself down: @matheusgnreis
You should be able to now @matheusgnreis
|
gharchive/issue
| 2022-03-04T20:34:14 |
2025-04-01T04:34:05.640110
|
{
"authors": [
"leomp12",
"matheusgnreis"
],
"repo": "ecomplus/app-mandabem",
"url": "https://github.com/ecomplus/app-mandabem/issues/78",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
457568852
|
IndShockConsumerType init method prints unnecessary message from checkConditions
IndShockConsumerType now calls checkConditions() via update() when it is initialized. checkConditions knows that it is only useful if T_cycle==1 and cycles==0, and does nothing otherwise... except print a message to screen that says, "This method only checks for the conditions for infinite horizon models with a 1 period cycle."
This was fine when checkConditions was not called internally, only from the command line as a teaching tool, but is weird (and unhelpful) now that it's run automatically. The user will not know what "this method" is. Someone running a lifecycle model does not want to see this message pop up every single time.
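One way to keep such a check quiet when run internally is to gate the message behind a verbose flag. A minimal sketch (toy class and names are illustrative, not HARK's actual API):

```python
class Agent:
    """Toy stand-in for a consumer type (hypothetical, not HARK's real class)."""

    def __init__(self, T_cycle, cycles, verbose=False):
        self.T_cycle = T_cycle
        self.cycles = cycles
        self.verbose = verbose

    def check_conditions(self):
        # The checks only apply to infinite-horizon, one-period-cycle models;
        # for anything else, stay silent unless the user explicitly asked.
        if self.T_cycle != 1 or self.cycles != 0:
            if self.verbose:
                print("checkConditions only applies to infinite-horizon "
                      "models with a one-period cycle.")
            return None
        return True  # real condition checks would go here
```

Called from update() with verbose left False, a lifecycle user would never see the message; called from the command line with verbose=True, it still teaches.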
Yes, thanks for posting this issue, I've been meaning to post on this (or fix it) for a while.
I've been meaning to get to this as well... So what's the solution? Is it ever a useful message to receive?
Actually, no, I think it is never useful -- except possibly as a comment inside the code (and not to users).

-- Chris Carroll
Actually, no, I think it is never useful -- except possibly as a comment inside the code (and not to users).
I'll remove it then.
|
gharchive/issue
| 2019-06-18T16:04:00 |
2025-04-01T04:34:05.645372
|
{
"authors": [
"llorracc",
"llorracc-git",
"mnwhite",
"pkofod"
],
"repo": "econ-ark/HARK",
"url": "https://github.com/econ-ark/HARK/issues/319",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
961604790
|
Antlers directive.
Hey Erin,
Content and Context are just passed to \Statamic\Facades\Antlers::parse
If passing everything inline, then the Antlers content will need to have @ added to its curly braces. quotes will need to be escaped too.
{{-- testing --}}
@antlers('@{{ test }}', ['test' => 'testing'])
It is however possible to also define the variables beforehand, either in a @php block or it could have been passed down from the controller.
@php
$content = '{{ test | ucfirst }}';
$context = ['test' => 'testing'];
@endphp
{{-- Testing --}}
@antlers($content, $context)
Might be useful, where tag(), modify() or any of the existing directives aren't achieving the desired outcome.
Thanks.
Can you please update the readme to include it?
Thanks Erin, I've added a fair chunk of info about it.
Let me know if you need any changes, I've tried to cover/detail a fair bit of its potential usage.
|
gharchive/pull-request
| 2021-08-05T09:02:03 |
2025-04-01T04:34:05.651304
|
{
"authors": [
"edalzell",
"michaelr0"
],
"repo": "edalzell/statamic-blade",
"url": "https://github.com/edalzell/statamic-blade/pull/59",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
726536844
|
Could you add pnpm like you did with Yarn ?
I want to use pnpm instead of npm and Yarn.
I just want to know if this is something you might consider.
this lib works fine with npm and yarn? there's no conflict?
|
gharchive/issue
| 2020-10-21T14:17:08 |
2025-04-01T04:34:05.655171
|
{
"authors": [
"Daniel-Mendes",
"edbizarro"
],
"repo": "edbizarro/gitlab-ci-pipeline-php",
"url": "https://github.com/edbizarro/gitlab-ci-pipeline-php/issues/104",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
588864261
|
authorization support
Is there any support for access control?
@morphyguo Not at the moment; if you're interested, take a look at third-party RBAC libraries, as this is more business-specific.
|
gharchive/issue
| 2020-03-27T03:39:21 |
2025-04-01T04:34:05.661484
|
{
"authors": [
"eddycjy",
"morphyguo"
],
"repo": "eddycjy/go-gin-example",
"url": "https://github.com/eddycjy/go-gin-example/issues/101",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
284899726
|
Legal Entity registration avoids API-KEY authorization
Env: preprod
Expected: API-KEY should be present and verified for all MIS calls
Currently: able to register Legal Entity without API-KEY
Request example attached
le_reg_no_apikey.txt
Response:
{
"urgent": {
"security": {
"redirect_uri": "http://e-life.com.ua/",
"client_secret": "aVlxVTdjcFpXZDhnYldIZ0VxSnR1QT09",
"client_id": "e630ab9c-6684-4d73-ae8f-d545cebab13a"
},
"employee_request_id": "643d8b56-0f5b-48ec-ade1-b5deb0b2f223"
},
"meta": {
"url": "http://api.ehealth.world/api/legal_entities",
"type": "object",
"request_id": "slkh591a81fdaig3t2ptg77jc115ecae",
"code": 200
},
"data": {
"updated_by": "0c997c53-3292-4d30-a5b0-8d5c9a064e68",
"updated_at": "2017-12-28T11:49:17.725027",
"type": "MSP",
"status": "ACTIVE",
"short_name": "ТОВ \"СМАРТ СОЛЮШНС УКРАЇНА\"",
"public_name": "ТОВ \"СМАРТ СОЛЮШНС УКРАЇНА\"",
"phones": [
{
"type": "MOBILE",
"number": "+380675380037"
},
{
"type": "LAND_LINE",
"number": "+380675380037"
}
],
"owner_property_type": "STATE",
"nhs_verified": false,
"name": "ТОВ \"СМАРТ СОЛЮШНС УКРАЇНА\"",
"mis_verified": "NOT_VERIFIED",
"medical_service_provider": {
"licenses": [
{
"what_licensed": "Реалізація ліків",
"order_no": "1",
"license_number": "Л-123",
"issued_date": "2017-09-05",
"issued_by": "Кваліфікаційна комісійна",
"expiry_date": "2020-09-01",
"active_from_date": "2017-09-07"
}
],
"accreditation": null
},
"legal_form": "240",
"kveds": [
"86.10"
],
"is_active": true,
"inserted_by": "55d565c7-db49-4a03-8b7c-f476247e3a73",
"inserted_at": "2017-12-15T11:50:52.377217",
"id": "e630ab9c-6684-4d73-ae8f-d545cebab13a",
"email": "miniailo@gmail.com",
"edrpou": "01010101",
"addresses": [
{
"zip": "01032",
"type": "REGISTRATION",
"street_type": "STREET",
"street": "С. Петлюри",
"settlement_type": "CITY",
"settlement_id": "adaa4abf-f530-461c-bcbf-a0ac210d955b",
"settlement": "КИЇВ",
"region": "ШЕВЧЕНКІВСЬКИЙ",
"country": "UA",
"building": "28",
"area": "М.КИЇВ",
"apartment": ""
}
]
}
}
legal entity is registered directly by MIS. MIS doesn't act as a broker in this case. api-key shouldn't be required.
|
gharchive/issue
| 2017-12-28T11:52:48 |
2025-04-01T04:34:05.666619
|
{
"authors": [
"lymychp",
"pzhuk"
],
"repo": "edenlabllc/ehealth.api",
"url": "https://github.com/edenlabllc/ehealth.api/issues/1697",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1878576787
|
Start in Unraid not working
Hi, I get this error while starting the docker container in Unraid:
I uploaded the model that you passed as URL in the model folder so I have it locally. But I always get this error
/models/llama-2-13b-chat.bin model found.
stdout
02/09/2023 12:08:53
Initializing server with:
stdout
02/09/2023 12:08:53
Batch size: 2096
stdout
02/09/2023 12:08:53
Number of CPU threads: 16
stdout
02/09/2023 12:08:53
Context window: 4096
stderr
02/09/2023 12:08:58
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/fields.py:127: UserWarning: Field "model_alias" has conflict with protected namespace "model".
stderr
02/09/2023 12:08:58
stderr
02/09/2023 12:08:58
You may be able to resolve this warning by setting model_config['protected_namespaces'] = ('settings_',).
stderr
02/09/2023 12:08:58
warnings.warn(
stderr
02/09/2023 12:08:58
llama.cpp: loading model from /models/llama-2-13b-chat.bin
stdout
02/09/2023 12:09:02
/models/llama-2-13b-chat.bin model found.
stdout
02/09/2023 12:09:02
Initializing server with:
stdout
02/09/2023 12:09:02
Batch size: 2096
stdout
02/09/2023 12:09:02
Number of CPU threads: 16
stdout
02/09/2023 12:09:02
Context window: 4096
stderr
02/09/2023 12:09:04
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/fields.py:127: UserWarning: Field "model_alias" has conflict with protected namespace "model".
stderr
02/09/2023 12:09:04
stderr
02/09/2023 12:09:04
You may be able to resolve this warning by setting model_config['protected_namespaces'] = ('settings_',).
same error here
I think the Python error is just a warning. Regardless, here's my complete log. Says it starts up and then the container stops.
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/fields.py:127: UserWarning: Field "model_alias" has conflict with protected namespace "model".
You may be able to resolve this warning by setting model_config['protected_namespaces'] = ('settings_',).
warnings.warn(
llama.cpp: loading model from /models/llama-2-13b-chat.bin
/models/llama-2-13b-chat.bin model found.
Initializing server with:
Batch size: 2096
Number of CPU threads: 24
Context window: 4096
ai-chatbot-starter@0.1.0 start
next start
ready - started server on 0.0.0.0:3000, url: http://localhost:3000
The model path in the unraid docker settings is still wrong:
Download URL: https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GGML/resolve/main/nous-hermes-llama-2-7b.ggmlv3.q4_0.bin
Local Model Path: /models/llama-2-7b.bin
I'm getting cuda out of memory errors when trying to load the CUDA version:
CUDA error 2 at /tmp/pip-install-p6rdknxd/llama-cpp-python_6cf58cabdb694a60bef40fdd1b9f4b5f/vendor/llama.cpp/ggml-cuda.cu:6301: out of memory
I get this with the 16b and 7b files. Server has 128GB of RAM.
I'm getting cuda out of memory errors when trying to load the CUDA version:
CUDA error 2 at /tmp/pip-install-p6rdknxd/llama-cpp-python_6cf58cabdb694a60bef40fdd1b9f4b5f/vendor/llama.cpp/ggml-cuda.cu:6301: out of memory
I get this with the 16b and 7b files. Server has 128GB of RAM.
Server memory makes no difference with cuda. Look at the log and you will see how much vram is needed vs available. I only have a 2gig p400 which wasn't enough.
Same issue with it just exiting after model is downloaded.
I'm getting cuda out of memory errors when trying to load the CUDA version:
CUDA error 2 at /tmp/pip-install-p6rdknxd/llama-cpp-python_6cf58cabdb694a60bef40fdd1b9f4b5f/vendor/llama.cpp/ggml-cuda.cu:6301: out of memory
I get this with the 16b and 7b files. Server has 128GB of RAM.
Server memory makes no difference with cuda. Look at the container log and you will see how much vram is needed vs available. I only have a 2gig p400 which wasn't enough but I'm also running CP.AI which is using 1400MB.
I'm running an RTX A2000 12GB so if it's running completely GPU I would at least think the 7b would start. I'll try switching to CPU and see if that makes a difference.
do we need to set up in docker template the gpu settings?
--runtime=nvidia
Container Variable: NVIDIA_VISIBLE_DEVICES
Container Variable: NVIDIA_DRIVER_CAPABILITIES
Container won't start, but no error logs are shown...
CUDA Version 12.2.0
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
/models/llama-2-7b-chat.bin model found.
Initializing server with:
Batch size: 2096
Number of CPU threads: 24
Context window: 4096
Swapped for CPU version and still not starting
I’m using the CPU version as well. Just ends after image downloads.
What CPUs do you people have?
The error is illegal instruction, which to me seems like the CPU doesn't support specific instruction sets that this expects. I believe it currently expects AVX2 as a minimum.
What CPUs do you people have?
The error is illegal instruction, which to me seems like the CPU doesn't support specific instruction sets that this expects. I believe it currently expects AVX2 as a minimum.
I’m running on an AMD Ryzen 7 3700X, which is a Zen 2 architecture. Zen 2 supports AVX and AVX2.
What CPUs do you people have?
i9 10900 here
i5-8500
The model path in the unraid docker settings is still wrong:
Download URL: https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GGML/resolve/main/nous-hermes-llama-2-7b.ggmlv3.q4_0.bin
Local Model Path: /models/llama-2-7b.bin
But even when these are correct, the docker still exits.
Yes I noticed that I had to fix the path of the URL, but yeah after I fixed the bug, downloaded the file manually, still the container does not start
i7-7700K
My docker will run, but when it starts it times out and says I do not have an openAI api linked. tells me to input in bottom left, but nowhere to be found
My docker will run, but when it starts it times out and says I do not have an openAI api linked. tells me to input in bottom left, but nowhere to be found
nevermind, I saw the logs and saw out of vram. changed the GPU layers under more settings in setup to 32 instead of 64 on a 2060.
So, I was able to get the CUDA version to work after cutting the number from 64 to 32. Looks like it’s using almost 4GB of vram in that configuration. Not sure why 64 doesn’t work on my card, but ok.
CPU version doesn’t work at all. It’s giving an error that complains about the CPU, but all of the ones listed here support AVX2. Can we confirm that the CPU version is for x86_64 or AMD64? I’m thinking it may be the wrong architecture.
Hi Guys, same problem here, cannot start after the model download. Also set it to 32 and nothing.
My CPU: Intel Xeon E5-2695 v2
i have an Itel CPU i3 8100 which seems to have AVX / AVX2
also i use an Nvidia GT 1030 on my NAS with 2gb vram.
it also failed , but when i set the GPU layers from 64 to 16 it seems to work now
Same issue here. Looks like its an issue from the pydantic library. Seen here and here. @uogbuji then replies and says the newer release fixed the issue.
I have no idea how to do this but comparing 0.1.69 to 0.1.70 for abetlen/llama-cpp-python might reveal a fix
Same issue here. Looks like its an issue from the pydantic library. Seen here and here. @uogbuji then replies and says the newer release fixed the issue.
I have no idea how to do this but comparing 0.1.69 to 0.1.70 for abetlen/llama-cpp-python might reveal a fix
That is the issue around the model warning error that you see inside of the docker logs, but that isn't what causes the docker container to not start. The container isn't starting due to EXIT Code 132 (you'll notice this on the right when the container shuts down). A Docker Exit Code 132 means that the docker container itself (not the app running inside of it) received an Illegal Instruction to the CPU. This is normally caused when your CPU doesn't contain the instruction sets that the docker is written for (usually an AVX, AVX2 or SSE instruction set). However, based on the CPU's that people have mentioned here, all of them would support AVX, AVX2, and SSE. My theory in this is that the docker container may be written for a different CPU architecture all together (such as AMD64_AVX, which isn't necessarily the same as AMD64 with AVX instructions).
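For what it's worth, whether a host CPU advertises a given instruction set can be checked by parsing /proc/cpuinfo; a small illustrative sketch (not part of the project):

```python
def cpu_flags(cpuinfo_text):
    """Collect the instruction-set flags from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.split(":")[0].strip() == "flags":
            return set(line.split(":", 1)[1].split())
    return set()

def supports(cpuinfo_text, *required):
    """True when every required flag (e.g. avx, avx2) is advertised."""
    return set(required) <= cpu_flags(cpuinfo_text)

# On a Linux host:
# supports(open("/proc/cpuinfo").read(), "avx", "avx2")
```

If avx2 shows up there but the container still dies with exit 132, that would point at a build-architecture mismatch rather than a missing CPU feature.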
Or just simply requires AVX512.
Or just simply requires AVX512.
That could be. Unfortunately it's difficult to see which at this point.
Is this issue relevant?
When I try to start the Docker i get the Docker Exit Code 132 and this in the unraid log: traps: python3[15228] trap invalid opcode ip:14d65822c872 sp:7fffc70fbc50 error:0 in libllama.so[14d658211000+62000]
I'm getting cuda out of memory errors when trying to load the CUDA version:
CUDA error 2 at /tmp/pip-install-p6rdknxd/llama-cpp-python_6cf58cabdb694a60bef40fdd1b9f4b5f/vendor/llama.cpp/ggml-cuda.cu:6301: out of memory
I get this with the 16b and 7b files. Server has 128GB of RAM.
Can you lower the of number of GPU layers under the Show More Settings? It's set to 64 by default.
Same issue here. Looks like its an issue from the pydantic library. Seen here and here. @uogbuji then replies and says the newer release fixed the issue.
I have no idea how to do this but comparing 0.1.69 to 0.1.70 for abetlen/llama-cpp-python might reveal a fix
That is the issue around the model warning error that you see inside of the docker logs, but that isn't what causes the docker container to not start. The container isn't starting due to EXIT Code 132 (you'll notice this on the right when the container shuts down). A Docker Exit Code 132 means that the docker container itself (not the app running inside of it) received an Illegal Instruction to the CPU. This is normally caused when your CPU doesn't contain the instruction sets that the docker is written for (usually an AVX, AVX2 or SSE instruction set). However, based on the CPU's that people have mentioned here, all of them would support AVX, AVX2, and SSE. My theory in this is that the docker container may be written for a different CPU architecture all together (such as AMD64_AVX, which isn't necessarily the same as AMD64 with AVX instructions).
The CPU version is having this issue. Everything works as expected using the Docker image I built locally using Ubuntu but not on the Github version. Here's my script:
- run: docker buildx create --use
- run: docker buildx build --platform linux/amd64 -f Dockerfile-${{ matrix.type }} --tag $IMAGE_NAME:${{ github.ref_name }} --push .
- run: docker buildx build --platform linux/amd64 -f Dockerfile-${{ matrix.type }} --tag $IMAGE_NAME:latest --push .
I'm getting cuda out of memory errors when trying to load the CUDA version:
CUDA error 2 at /tmp/pip-install-p6rdknxd/llama-cpp-python_6cf58cabdb694a60bef40fdd1b9f4b5f/vendor/llama.cpp/ggml-cuda.cu:6301: out of memory
I get this with the 16b and 7b files. Server has 128GB of RAM.
Can you lower the of number of GPU layers under the Show More Settings? It's set to 64 by default.
I did and when lowered it does work. CPU version does not work.
I'm getting cuda out of memory errors when trying to load the CUDA version:
CUDA error 2 at /tmp/pip-install-p6rdknxd/llama-cpp-python_6cf58cabdb694a60bef40fdd1b9f4b5f/vendor/llama.cpp/ggml-cuda.cu:6301: out of memory
I get this with the 16b and 7b files. Server has 128GB of RAM.
Can you lower the of number of GPU layers under the Show More Settings? It's set to 64 by default.
Also by reducing to 1 does not work in my case
RTX 2080 ti
Same issue here. Looks like its an issue from the pydantic library. Seen here and here. @uogbuji then replies and says the newer release fixed the issue.
I have no idea how to do this but comparing 0.1.69 to 0.1.70 for abetlen/llama-cpp-python might reveal a fix
That is the issue around the model warning error that you see inside of the docker logs, but that isn't what causes the docker container to not start. The container isn't starting due to EXIT Code 132 (you'll notice this on the right when the container shuts down). A Docker Exit Code 132 means that the docker container itself (not the app running inside of it) received an Illegal Instruction to the CPU. This is normally caused when your CPU doesn't contain the instruction sets that the docker is written for (usually an AVX, AVX2 or SSE instruction set). However, based on the CPU's that people have mentioned here, all of them would support AVX, AVX2, and SSE. My theory in this is that the docker container may be written for a different CPU architecture all together (such as AMD64_AVX, which isn't necessarily the same as AMD64 with AVX instructions).
The latest build fixes the EXIT Code 132. I added libopenblas-dev and things appear to be working as expected.
how can i get an error log?
the container does not start and shows no indication, where the error could occur.
Only the cuda shows information
`CUDA Version 12.2.0
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
/models/llama-2-7b.bin model found.
Initializing server with:
Batch size: 2096
Number of CPU threads: 24
Context window: 4096
** Press ANY KEY to close this window ** `
i7-7700K
The latest version of the image worked for me on CPU!
i7-7700K
The latest version of the image worked for me on CPU!
Same for me, very slow and the results were quite bad. Uninstalled again :/
very slow
I'm not sure what you expected
Yeah, CPU is slow tbh.
I stumbled upon this tonight and with the unraid template for cuda this is still an issue. Seeing the same @horphi0815 up there. Don't see anywhere that error logs would be stored but I did pull the repo and run it with docker compose and still don't see any logs 'find . -iname "log"'
|
gharchive/issue
| 2023-09-02T10:20:04 |
2025-04-01T04:34:05.713041
|
{
"authors": [
"Dustinhoefer",
"JDolven",
"KrX3D",
"Wallayy",
"corndog2000",
"dgpugliese",
"edgar971",
"horphi0815",
"jamescochran",
"jimserio",
"johnny2678",
"jppoeck",
"sharpsounds",
"tezgno",
"timocapa",
"zenkraker"
],
"repo": "edgar971/open-chat",
"url": "https://github.com/edgar971/open-chat/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1654314979
|
fix connection phase ignoring server errors
Summary
The binding doesn't propagate errors received from the server during the connection phase. If an error response is sent after a client handshake, the error is logged and the client continues to execute, disregarding the error.
Example flow:
13:19:08 - Debug: C->S: Client 1: ClientHandshake len: 50
13:19:08 - Debug: S->C: Client 1: ErrorResponse len: 348
13:19:08 - Error: Got error level: Error Message: ...
(client continues to perform execution)
13:19:08 - Debug: C->S: Client 1: Parse len: 120
...
This PR adds a check for an error in the connection phase, propagating it as an exception if one is received.
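The fix itself is in C#, but the intended control flow can be sketched in a few lines (illustrative Python; only the message-type names come from the EdgeDB protocol):

```python
class ServerError(Exception):
    """Raised when the server reports an error during the connection phase."""

def finish_handshake(read_packet):
    """Drain packets after ClientHandshake until the server is ready.

    `read_packet` returns (message_type, payload) tuples.  An
    ErrorResponse is propagated as an exception instead of being
    logged and ignored, which was the buggy behaviour.
    """
    while True:
        kind, payload = read_packet()
        if kind == "ErrorResponse":
            raise ServerError(payload)  # stop the connection phase here
        if kind == "ReadyForCommand":
            return payload
        # other packets (ServerKeyData, ParameterStatus, ...) are skipped
```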
Closing as this behavior is implemented in #42
|
gharchive/pull-request
| 2023-04-04T17:51:25 |
2025-04-01T04:34:05.717975
|
{
"authors": [
"quinchs"
],
"repo": "edgedb/edgedb-net",
"url": "https://github.com/edgedb/edgedb-net/pull/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1192380254
|
Geo vision/Honeywell/Bosch camera- Set DNS API not working correctly
Summary of the problem:
Set DNS API not updating camera DNS settings correctly.
Issue reproducible on below camera models:
Geo Vision GV-BX8700
Honeywell HC30WB5R1
Able to enable/disable DHCP on Geo Vision and Honeywell cameras; however, not able to update the IP address with "DNS Manual"
Not getting any error, but IP address not updated when verified using Get DNS api.
Bosch DINION 6000HD
Enabling/disabling of DHCP not working
Repro steps:
Execute get DNS rest api to retrieve existing DNS configuration
Enable or disable DHCP using set DNS API
Execute Get DNS api again to verify DNS settings
Execute set DNS with dhcp set to false and updated ip address.
Verify Get DNS api to verify updated Ip address.
Expected result: User should be able to enable/disable DHCP and updated ip address without any issues.
Actual result: On Geo vision camera it fails to set IP address, and on Bosch camera not able to enable/disable DHCP option.
attached set DNS api screenshot.
Issue reproducible on Geo vision camera using latest images on 10/26/22.
@vyshali-chitikeshi did you try rebooting the cameras after setting the new DNS? Also in any case it sounds like an issue with the cameras onvif implementations and nothing the device service would have control over. If anything, this sounds more like errata that belongs in onvif-footnotes than issues.
Levski Release : Bosch
Same issue : Enabling/disabling of DHCP not working
Issue reproducible with V3 code. On Bosch camera not able to enable and disable DHCP with set DNS API
|
gharchive/issue
| 2022-04-04T22:28:21 |
2025-04-01T04:34:05.744404
|
{
"authors": [
"ajcasagrande",
"surajitx-pal",
"vyshali-chitikeshi"
],
"repo": "edgexfoundry/device-onvif-camera",
"url": "https://github.com/edgexfoundry/device-onvif-camera/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1775057800
|
build: Add jenkinsfile
If your build fails due to your commit message not passing the build checks, please review the guidelines here: https://github.com/edgexfoundry/edgex-go/blob/main/.github/Contributing.md
PR Checklist
Please check if your PR fulfills the following requirements:
[x] I am not introducing a breaking change (if you are, flag in conventional commit message with BREAKING CHANGE: describing the break)
[x] I am not introducing a new dependency (add notes below if you are)
[ ] I have added unit tests for the new feature or bug fix (if not, why?) N/A
[ ] I have fully tested (add details below) this the new feature or bug fix (if not, why?) N/A
[ ] I have opened a PR for the related docs change (if not, why?) N/A
Testing Instructions
New Dependency Instructions (If applicable)
recheck
recheck
recheck
recheck
|
gharchive/pull-request
| 2023-06-26T15:31:01 |
2025-04-01T04:34:05.749097
|
{
"authors": [
"MightyNerdEric",
"lenny-intel"
],
"repo": "edgexfoundry/device-uart",
"url": "https://github.com/edgexfoundry/device-uart/pull/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
378513852
|
Kong no longer uses ssl certs signed by the generated EdgeX TrustCA
The edgexproxy binary no longer configures Kong to use SSL certificates that were generated by the pkisetup binary from security-secret-store from this change: https://github.com/edgexfoundry/security-api-gateway/commit/3f9f9deca250f68b56e8c8e2c54d54048e1c024a#diff-711007c703bd39258a3323751c269e25R51
This means that when you deploy EdgeX using the security services, the SSL certificate presented by kong isn't signed by the Trust CA as it should be (and as it was with California).
@tingyuz this is the issue I spoke about over Rocket.Chat and should be considered a blocker for Delhi.
we were planning to switch the cert from default to kong specific (created by vault) and the related code was updated as a result. I will have a PR to restore the logic back same as California.
This has been fixed for Delhi with #31, now just needs to be forward-ported to master for Edinburgh
|
gharchive/issue
| 2018-11-07T23:28:33 |
2025-04-01T04:34:05.756213
|
{
"authors": [
"anonymouse64",
"tingyuz"
],
"repo": "edgexfoundry/security-api-gateway",
"url": "https://github.com/edgexfoundry/security-api-gateway/issues/28",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2404376451
|
Support Docker appInsts for rootLb rebuild
Right now we have support to rebuild appInsts for k8s appInsts, we need to support docker appInsts.
https://github.com/edgexr/edge-cloud-platform/blob/25493ba6327969f99affca9bd995044e205c3630/pkg/platform/common/vmlayer/vmprovider.go#L571
fixed
|
gharchive/issue
| 2024-07-11T23:58:31 |
2025-04-01T04:34:05.757431
|
{
"authors": [
"levshvarts"
],
"repo": "edgexr/edge-cloud-platform",
"url": "https://github.com/edgexr/edge-cloud-platform/issues/343",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
288952272
|
remove x y z columns from ciftify_peaktable output
maybe throw x,y,z mapping into an internal function so that it can be used for forward inference later on..
Done
|
gharchive/issue
| 2018-01-16T15:40:20 |
2025-04-01T04:34:05.775048
|
{
"authors": [
"edickie"
],
"repo": "edickie/ciftify",
"url": "https://github.com/edickie/ciftify/issues/34",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2147170979
|
Conform breaks Remix submission with clientAction
Describe the bug and the expected behavior
Submitting a conform driven <Form /> by "press Enter on input" should not make a real POST request. It should behave the same as clicking a submit button.
I've tried hard to make a very minimal reproduction to reveal the root cause. I believe the bug occurs only when all of these conditions hold:
clientAction reply with { resetForm: true }export const clientAction = async ({ request }: ClientActionFunctionArgs) => {
const formData = await request.formData();
const submission = parseWithZod(formData, { schema });
return submission.reply({ resetForm: true });
};
useForm with lastResult (of cause) AND shouldValidate being "onBlur"const [form, fields] = useForm({
shouldValidate: "onBlur",
lastResult,
});
Change the condition by one of the following workaround the bug:
Submitting the <Form /> by submit button works fine
submission.reply(), that is, without { resetForm: true }
Use action instead of clientAction (But I want a SPA app)
shouldValidate being onInput or onSubmit
Conform version
v1.0.2
Steps to Reproduce the Bug or Issue
git clone https://github.com/QzCurious/conform-remix-vite-form-submission-bug
It's a Remix Vite App
pnpm i
pnpm dev and open the web page
Focus on "remix form with conform" and press Enter
Bug happens: <Form /> makes a real POST request instead of being handled by clientAction
What browsers are you seeing the problem on?
No response
Screenshots or Videos
https://github.com/edmundhung/conform/assets/38932402/8b952d99-9774-4349-8f23-a962df8fbb9a
Additional context
Remix with Vite was just marked stable today.
This is because Conform handles everything, including validation, as a form submission to achieve a single data flow. You should either check the submission status in the client action and just return the result back, similar to an action, or, if you want to minimize hits to the client action, just set the onValidate option.
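A sketch of the first suggestion — short-circuit in the client action when the submission did not validate successfully. This assumes the Conform v1 API already used in the report (parseWithZod, submission.reply, the schema from the reproduction) and is not runnable outside a Remix app; the actual mutation is elided:

```ts
import { parseWithZod } from "@conform-to/zod";
import type { ClientActionFunctionArgs } from "@remix-run/react";

export const clientAction = async ({ request }: ClientActionFunctionArgs) => {
  const formData = await request.formData();
  const submission = parseWithZod(formData, { schema });

  // Validation-only submissions (e.g. triggered by shouldValidate: "onBlur")
  // reach the client action too: return the result without side effects,
  // exactly as a server action would.
  if (submission.status !== "success") {
    return submission.reply();
  }

  // ...perform the real mutation here...
  return submission.reply({ resetForm: true });
};
```

Alternatively, passing an onValidate callback to useForm keeps validation fully client-side, so blur-triggered validations never reach the client action at all.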
|
gharchive/issue
| 2024-02-21T16:20:04 |
2025-04-01T04:34:05.797016
|
{
"authors": [
"QzCurious",
"edmundhung"
],
"repo": "edmundhung/conform",
"url": "https://github.com/edmundhung/conform/issues/473",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2125791039
|
Restrict task assignment to admins
Summary
Tasks cannot be assigned to admins.
DoD
[x] Admins cannot be assigned for a task.
[x] Existing tasks assigned to admins are unassigned.
Fixed with 467a7cc8650d37c282249fede0378600afe56c07
|
gharchive/issue
| 2024-02-08T18:08:10 |
2025-04-01T04:34:05.814621
|
{
"authors": [
"shorodilov"
],
"repo": "edu-python-course/tasktracker-django",
"url": "https://github.com/edu-python-course/tasktracker-django/issues/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
679655987
|
resource property of type number without default
Hi, it seems like having a number without a default value results in an internal error when it's not specified. I would expect the value to be undefined.
I will have a look. I believe the framework is casting to number (or bigint, for that matter) but only if the value is different from undefined or null. https://github.com/eduardomourar/cloudformation-cli-typescript-plugin/blob/26497282b6189f599fa6f98b99bf19592be19d82/src/recast.ts#L40-L42
I see the problem now. We are receiving an empty string from CloudFormation and that is being converted to 0.
{
"message": "Error: 1 validation error detected: Value '0' at 'passwordReusePrevention' failed to satisfy constraint: Member must have value greater than or equal to 1",
"status": "FAILED",
"errorCode": "InternalFailure"
}
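The coercion at the heart of this can be reproduced in plain JavaScript. This is a hypothetical, simplified version of the cast — not the actual recast.ts code, and not necessarily the fix that shipped in v0.3.1:

```javascript
// CloudFormation delivers an unset numeric property as "" (empty string),
// and Number("") evaluates to 0 — which is how an absent
// passwordReusePrevention became 0. One possible guard is to pass the
// empty string through untouched:
function coerce(value) {
  if (value === undefined || value === null || value === "") {
    return value;
  }
  return Number(value);
}

console.log(Number(""));                 // 0
console.log(JSON.stringify(coerce(""))); // ""
```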
@OlafConijn, you can check the failing unit test here: https://github.com/eduardomourar/cloudformation-cli-typescript-plugin/pull/22/checks?check_run_id=999912293. Is there any other problem besides that?
no, i think thats it :)
Fixed on release v0.3.1
|
gharchive/issue
| 2020-08-15T23:02:17 |
2025-04-01T04:34:05.820188
|
{
"authors": [
"OlafConijn",
"eduardomourar"
],
"repo": "eduardomourar/cloudformation-cli-typescript-plugin",
"url": "https://github.com/eduardomourar/cloudformation-cli-typescript-plugin/issues/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1056283529
|
EDU-103 Do a full page refresh (without using the cache) when home page is loaded from cache
Closes https://educandu.atlassian.net/browse/EDU-103
This works in Chrome, Firefox and Safari on MacOS.
Works on Chrome, Firefox and Edge on Linux :+1:
|
gharchive/pull-request
| 2021-11-17T15:45:35 |
2025-04-01T04:34:05.823405
|
{
"authors": [
"adrianaluchian",
"ahelmberger"
],
"repo": "educandu/educandu",
"url": "https://github.com/educandu/educandu/pull/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1387985008
|
EDU-830 Restore volume with unmutting and support solo
Closes https://educandu.atlassian.net/browse/EDU-830
It would be cool if we had a speaker logo with an "S" in it (in most applications the "Mute" and "Solo" buttons are just labeled "M" and "S"). Maybe we can ask Mani to create one and exchange it then.
After failing my attempt to create a dedicated icon, I went for the colored variation for now, but I agree we should get a proper icon. "M" would only make sense in combination with "S", but in most use cases of the volume slider there is no solo.
"S" though makes sense on its own I'd say 👍
|
gharchive/pull-request
| 2022-09-27T15:48:12 |
2025-04-01T04:34:05.826534
|
{
"authors": [
"adrianaluchian",
"ahelmberger"
],
"repo": "educandu/educandu",
"url": "https://github.com/educandu/educandu/pull/645",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2279332855
|
Tool: Fetch Assignment Submissions
Create a tool to fetch the actual submission files for an assignment.
Allow filtering by students.
Ensure that files are named by email.
It looks like each submission's file(s) will have to be downloaded one at a time (see the references below).
So we'll fetch a full assignment (including "attachments"), then go through the attachments and download one at a time.
Some references:
https://community.canvaslms.com/t5/Canvas-Developers-Group/API-Download-all-assignment-submissions/td-p/206319
https://community.canvaslms.com/t5/Canvas-Developers-Group/Assignment-API-submissions-download-url/m-p/74848
https://github.com/ucfopen/canvasapi/discussions/634
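The approach above — fetch the assignment's submissions including attachments, then download each file one at a time, named by email — could be sketched as follows. This is a hedged illustration, not the repo's actual tool: the endpoint shape follows the Canvas REST docs in the references, BASE_URL/token/IDs are placeholders, pagination is omitted, and whether user.email is visible depends on API permissions.

```python
import json
import re
import urllib.request

def submission_filename(email, attachment_name):
    """Name each downloaded file by the student's email, per the issue."""
    safe_email = re.sub(r"[^A-Za-z0-9_.@+-]", "_", email)
    return f"{safe_email}__{attachment_name}"

def download_submissions(base_url, token, course_id, assignment_id, emails=None):
    """Fetch one assignment's submissions, then download attachments one at a time."""
    url = (f"{base_url}/api/v1/courses/{course_id}/assignments/{assignment_id}"
           "/submissions?include[]=user&per_page=100")
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        submissions = json.load(resp)
    for sub in submissions:
        email = (sub.get("user") or {}).get("email") or ""
        if emails is not None and email not in emails:
            continue  # allow filtering by students
        for att in sub.get("attachments", []):
            target = submission_filename(email, att["filename"])
            urllib.request.urlretrieve(att["url"], target)  # one file at a time
```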
Initial version complete in 3cd1c7e0f9e5f10007e7d7a7b153633ebeaa5536.
|
gharchive/issue
| 2024-05-05T05:20:40 |
2025-04-01T04:34:05.877759
|
{
"authors": [
"eriq-augustine"
],
"repo": "edulinq/py-canvas",
"url": "https://github.com/edulinq/py-canvas/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1243840116
|
docker compose error
Hello guys,
The server is not running in Docker, although composing all the containers succeeds. The steps I followed:
wget https://github.com/edumeet/edumeet/archive/refs/heads/master.zip
the cert file is named "mediasoup-demo.localhost.cert.pem" instead of "edumeet-demo-cert.pem"
docker-compose up -d to run Docker
I installed Docker on Ubuntu 22 Server.
The error appears when the login button is clicked.
Execute command => docker-compose logs -f --tail=500 edumeet
Output:
edumeet | [app] yarn install v1.22.19
edumeet | [app] [1/4] Resolving packages...
edumeet | [app] success Already up-to-date.
edumeet | [app] Done in 7.40s.
edumeet | [app] yarn run v1.22.19
edumeet | [app] $ react-scripts build && rm -rf ../server/public && DEST='../server/dist/public' && rm -rf $DEST && mkdir -p $DEST && mv -T build/ $DEST
edumeet | [app] Creating an optimized production build...
edumeet | [server] npm WARN deprecated debug@4.1.1: Debug versions >=3.2.0 <3.2.7 || >=4 <4.3.1 have a low-severity ReDos regression when used in a Node.js environment. It is recommended you upgrade to 3.2.7 or 4.3.1. (https://github.com/visionmedia/debug/issues/797)
edumeet | [server] npm WARN deprecated node-uuid@1.4.8: Use uuid module instead
edumeet | [server] npm WARN deprecated lodash-node@2.4.1: This package is discontinued. Use lodash@^4.0.0.
edumeet | [server] npm WARN rm not removing /edumeet/server/node_modules/.bin/semver as it wasn't installed by /edumeet/server/node_modules/semver
edumeet | [server] npm WARN rm not removing /edumeet/server/node_modules/.bin/rimraf as it wasn't installed by /edumeet/server/node_modules/rimraf
edumeet | [server] npm WARN rm not removing /edumeet/server/node_modules/.bin/uuid as it wasn't installed by /edumeet/server/node_modules/node-uuid
edumeet | [server] npm notice created a lockfile as package-lock.json. You should commit this file.
edumeet | [server] npm WARN notsup Unsupported engine for xmlbuilder@2.4.6: wanted: {"node":"0.8.x || 0.10.x || 0.11.x || 1.0.x"} (current: {"node":"14.19.3","npm":"6.14.17"})
edumeet | [server] npm WARN notsup Not compatible with your version of node/npm: xmlbuilder@2.4.6
edumeet | [server]
edumeet | [server] + postinstall@0.7.4
edumeet | [server] added 43 packages from 17 contributors, removed 16 packages, updated 487 packages and audited 578 packages in 147.549s
edumeet | [server]
edumeet | [server] 79 packages are looking for funding
edumeet | [server] run npm fund for details
edumeet | [server]
edumeet | [server] found 6 vulnerabilities (2 high, 4 critical)
edumeet | [server] run npm audit fix to fix them, or npm audit for details
edumeet | [server] yarn install v1.22.19
edumeet | [server] warning package-lock.json found. Your project contains lock files generated by tools other than Yarn. It is advised not to mix package managers in order to avoid resolution inconsistencies caused by unsynchronized lock files. To clear this warning, remove package-lock.json.
edumeet | [server] [1/4] Resolving packages...
edumeet | [server] [2/4] Fetching packages...
edumeet | [app] Compiled with warnings.
edumeet | [app]
edumeet | [app] ./node_modules/yargs-parser/index.js
edumeet | [app] Critical dependency: the request of a dependency is an expression
edumeet | [app]
edumeet | [app] Search for the keywords to learn more about each warning.
edumeet | [app] To ignore, add // eslint-disable-next-line to the line before.
edumeet | [app]
edumeet | [app] File sizes after gzip:
edumeet | [app]
edumeet | [app] 245.96 KB build/static/js/11.dddbd8e6.chunk.js
edumeet | [app] 168.51 KB build/static/js/12.60cfc40f.chunk.js
edumeet | [app] 102.89 KB build/static/js/main.088324d7.chunk.js
edumeet | [app] 64.83 KB build/static/js/13.99fa0f26.chunk.js
edumeet | [app] 38.23 KB build/static/js/room.5e81ecf2.chunk.js
edumeet | [app] 20.13 KB build/static/js/1.04ba19f9.chunk.js
edumeet | [app] 18.49 KB build/static/js/14.562abbc7.chunk.js
edumeet | [app] 6.73 KB build/static/js/0.fe86bb42.chunk.js
edumeet | [app] 1.77 KB build/static/js/runtime-main.1fff7e7b.js
edumeet | [app] 1.32 KB build/static/css/12.7c6c0c5f.chunk.css
edumeet | [app] 1.29 KB build/static/js/file-saver.bc97249c.chunk.js
edumeet | [app] 1.11 KB build/static/js/screensharing.dd3d4cb4.chunk.js
edumeet | [app] 421 B build/static/js/app.c6314209.chunk.js
edumeet | [app] 357 B build/static/css/main.ea979dc9.chunk.css
edumeet | [app] 203 B build/static/js/webtorrent.b512346c.chunk.js
edumeet | [app] 156 B build/static/js/createtorrent.12a8b594.chunk.js
edumeet | [app] 141 B build/static/js/socket.io.9bf69c83.chunk.js
edumeet | [app]
edumeet | [app] The project was built assuming it is hosted at ./.
edumeet | [app] You can control this with the homepage field in your package.json.
edumeet | [app]
edumeet | [app] The build folder is ready to be deployed.
edumeet | [app]
edumeet | [app] Find out more about deployment here:
edumeet | [app]
edumeet | [app] https://cra.link/deployment
edumeet | [app]
edumeet | [app] Done in 149.00s.
edumeet | [app] yarn run v1.22.19
edumeet | [app] $ HTTPS=true PORT=4443 react-scripts start
edumeet | [app] ℹ 「wds」: Project is running at https://0.0.0.0:4443/
edumeet | [app] ℹ 「wds」: webpack output is served from
edumeet | [app] ℹ 「wds」: Content not from webpack is served from /edumeet/app/public
edumeet | [app] ℹ 「wds」: 404s will fallback to /
edumeet | [app] Starting the development server...
edumeet | [app]
edumeet | [server] info There appears to be trouble with your network connection. Retrying...
edumeet | [server] [3/4] Linking dependencies...
edumeet | [app] Compiled with warnings.
edumeet | [app]
edumeet | [app] ./node_modules/yargs-parser/index.js
edumeet | [app] Critical dependency: the request of a dependency is an expression
edumeet | [app]
edumeet | [app] Search for the keywords to learn more about each warning.
edumeet | [app] To ignore, add // eslint-disable-next-line to the line before.
edumeet | [app]
edumeet | [server] [4/4] Building fresh packages...
edumeet | [server] error /edumeet/server/node_modules/mediasoup: Command failed.
edumeet | [server] Exit code: 1
edumeet | [server] Command: node npm-scripts.js postinstall
edumeet | [server] Arguments:
edumeet | [server] Directory: /edumeet/server/node_modules/mediasoup
edumeet | [server] Output:
edumeet | [server] npm-scripts.js [INFO] running task "postinstall"
edumeet | [server] npm-scripts.js [INFO] executing command: node npm-scripts.js worker:build
edumeet | [server] npm-scripts.js [INFO] running task "worker:build"
edumeet | [server] npm-scripts.js [INFO] executing command: make -C worker
edumeet | [server] make: Entering directory '/edumeet/server/node_modules/mediasoup/worker'
edumeet | [server] # Updated pip and setuptools are needed for meson
edumeet | [server] # --system is not present everywhere and is only needed as workaround for
edumeet | [server] # Debian-specific issue (copied from
edumeet | [server] # https://github.com/gluster/gstatus/pull/33), fallback to command without
edumeet | [server] # --system if the first one fails.
edumeet | [server] python -m pip install --system --target=/edumeet/server/node_modules/mediasoup/worker/out/pip pip setuptools ||
edumeet | [server] python -m pip install --target=/edumeet/server/node_modules/mediasoup/worker/out/pip pip setuptools ||
edumeet | [server] echo "Installation failed, likely because PIP is unavailable, if you are on Debian/Ubuntu or derivative please install the python3-pip package"
edumeet | [server] /usr/bin/python: No module named pip
edumeet | [server] /usr/bin/python: No module named pip
edumeet | [server] Installation failed, likely because PIP is unavailable, if you are on Debian/Ubuntu or derivative please install the python3-pip package
edumeet | [server] # Install meson and ninja using pip into custom location, so we don't
edumeet | [server] # depend on system-wide installation.
edumeet | [server] python -m pip install --upgrade --target=/edumeet/server/node_modules/mediasoup/worker/out/pip meson ninja
edumeet | [server] /usr/bin/python: No module named pip
edumeet | [server] make: *** [Makefile:65: meson-ninja] Error 1
edumeet | [server] make: Leaving directory '/edumeet/server/node_modules/mediasoup/worker'
edumeet | [server] info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
edumeet | [server] cd server && npm install postinstall && yarn && yarn dev exited with code 1
Solution to the problem:
In compose/edumeet/Dockerfile:
FROM node:14-buster-slim
RUN apt-get update && \
    apt-get install -y git build-essential python pkg-config libssl-dev python3-pip && \
    apt-get clean
WORKDIR /edumeet
ENV DEBUG=edumeet*,mediasoup*
RUN npm install -g nodemon && \
    npm install -g concurrently
RUN touch /.yarnrc && mkdir -p /.yarn /.cache/yarn && chmod -R 775 /.yarn /.yarnrc /.cache
CMD concurrently --names "server,app" \
    "cd server && yarn && yarn dev" \
    "cd app && yarn && yarn build && yarn start"
|
gharchive/issue
| 2022-05-21T02:17:15 |
2025-04-01T04:34:05.908758
|
{
"authors": [
"buraksumer"
],
"repo": "edumeet/edumeet",
"url": "https://github.com/edumeet/edumeet/issues/1006",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1588737165
|
🛑 eaglestorage.com.au is down
In 7ff81f1, eaglestorage.com.au (https://eaglestorage.com.au) was down:
HTTP code: 0
Response time: 0 ms
Resolved: eaglestorage.com.au is back up in cdd041d.
|
gharchive/issue
| 2023-02-17T04:36:52 |
2025-04-01T04:34:05.916230
|
{
"authors": [
"edwardcox"
],
"repo": "edwardcox/eaglestorage",
"url": "https://github.com/edwardcox/eaglestorage/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
181144882
|
Canonical xml using Exalt
Is it possible to Canonicalize xml using Exalt? I have seen that lxml is a dependency. I can't find any option to canonicalize xml.
Not at the moment. I can look into it, though.
@eerohele Thanks, looking forward to it. In the meantime, do you know any such tools which can help me canonicalize the xml in osx.
Something like this should work:
$ brew install libxml2
$ xmllint --c14n11 myfile.xml
Provided that you have Homebrew installed.
I've released version 0.3.0, which adds support for canonicalizing XML.
It might take a while for it to appear in Package Control, but when it does, can you please give it a go and let me know whether it works for you?
You can canonicalize a well-formed XML document by selecting "Exalt: Canonicalize Document" from the Sublime Text command palette.
Thanks, it works for me, but there is an issue: it removes the CDATA. Here is a test XML:
<?xml version="1.0"?>
<theme>
<controller id="dictionary" name="dictionary" type="data"><![CDATA[{}]]></controller>
</theme>
After canonical run
<theme>
<controller id="dictionary" name="dictionary" type="data">{}</controller>
</theme>
Please check why CDATA is removed, it should just arrange the properties in sorted order and should preserve the content as it is.
It seems that the W3C specification for canonical XML dictates that "CDATA sections are replaced with their character content", so the current behavior is correct.
Anyway, it's libxml2 that does the actual canonicalization, so there's little I can do to modify the behavior.
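The spec-mandated behavior is easy to confirm outside Sublime Text. Here is a small sketch using Python's standard-library C14N implementation (Python 3.8+, so C14N 2.0 rather than libxml2's 1.1, but the CDATA rule is the same) on the test document from this thread:

```python
# "CDATA sections are replaced with their character content" — the W3C rule
# eerohele cites above, demonstrated with the stdlib canonicalizer.
from xml.etree.ElementTree import canonicalize

doc = """<?xml version="1.0"?>
<theme>
<controller id="dictionary" name="dictionary" type="data"><![CDATA[{}]]></controller>
</theme>"""

result = canonicalize(doc)
print(result)  # the CDATA wrapper is gone; its content "{}" remains
```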
|
gharchive/issue
| 2016-10-05T12:55:17 |
2025-04-01T04:34:06.021285
|
{
"authors": [
"cksachdev",
"eerohele"
],
"repo": "eerohele/exalt",
"url": "https://github.com/eerohele/exalt/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
661343763
|
How to avoid distortion on images?
I'm trying to make a slideshow that shows simple photos on a kiosk.
I have photos that are portrait oriented and have a resolution of 1080x1920 pixels. I'm using the following markup:
[container top=0% right=0% bottom=0% left=0%] [img url="https://libresign.domain.com/api/endpoint/slide/asset/slide_get_asset.php?id=cd5904f728f8122d8a20e5348b73b3a0&name=AllNatural.png&hash=60a1cae4fb4cd8b49f2a25b67e8b0f31" width=100% height=100%][/img] [/container]
When I run the slideshow on a client that does not have the exact same resolution, the image is distorted. Is there a way to avoid that?
Currently there's no way to set static sizes or aspect ratios for images. Sizes can only be specified in percents of the screen size. It would certainly be a good idea to make it possible to resize images by specifying a pixel size as well. It'd also be nice to have a markup tag where you can set the image size as a percentage but that would still preserve the original aspect ratio of the image.
Wouldn't it be sufficient to just not require both height and width, and calculate the missing datum respecting the aspect ratio? CSS makes this really easy, actually.
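A minimal CSS sketch of the aspect-ratio-preserving behavior described above (the class names are hypothetical — LibreSignage would still need markup support to emit something like this):

```css
/* Give only one dimension; the browser derives the other from the
   image's intrinsic aspect ratio. */
img.slide {
  width: 100%;
  height: auto; /* preserve aspect ratio */
}

/* Or constrain both dimensions and letterbox instead of stretching. */
img.slide-fit {
  width: 100%;
  height: 100%;
  object-fit: contain;
}
```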
Off topic: I would also suggest to set default values for other tags which are currently required. For instance, it makes no sense to require absolute percentages for all tags in container, you could just use 0% as a fallback. That would be much more intuitive.
|
gharchive/issue
| 2020-07-20T04:13:13 |
2025-04-01T04:34:06.024114
|
{
"authors": [
"MatthK",
"TheAssassin",
"eerotal"
],
"repo": "eerotal/LibreSignage",
"url": "https://github.com/eerotal/LibreSignage/issues/134",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2134816772
|
Several projects where the defaults block layout was incorrect although it had the same default block numbers.
This may have been caused by older versions of the defaults blocks that existed in the user mag directory at the time the script was run. The script does not create a new file if there is an existing file.
PR #102 for caravel should fix this.
Only need to check that production code matches the offline development scripts: LVL of both flows' output.
The new GPIO check from Mitch should check the GDS. Also need to roll a change to the script to delete previous default blocks.
|
gharchive/issue
| 2024-02-14T17:18:26 |
2025-04-01T04:34:06.036814
|
{
"authors": [
"DavidRLindley",
"d-mitch-bailey",
"jeffdi"
],
"repo": "efabless/caravel",
"url": "https://github.com/efabless/caravel/issues/529",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1993257936
|
🛑 Almacenata is down
In 63c66e8, Almacenata (https://www.almacenata.com.ar) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Almacenata is back up in 8635882 after 30 minutes.
|
gharchive/issue
| 2023-11-14T17:52:52 |
2025-04-01T04:34:06.043870
|
{
"authors": [
"efecear"
],
"repo": "efecear/upptime",
"url": "https://github.com/efecear/upptime/issues/11653",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|