id | text | source | created | added | metadata
---|---|---|---|---|---
927606681
|
feat(Consumer): escape commas + use Writer + test Print
Address the remaining tasks of: https://github.com/optopodi/optopodi/issues/30
[x] escape , in entry content
[x] make a test producer that supplies dummy data to test it
[ ] Use anyhow::Error instead of String for the Consumer return type. This can be addressed in another PR since we already have an issue for it: https://github.com/optopodi/optopodi/issues/28
I used the csv crate to write/escape the data.
As suggested in the review of #32, I added a Write instance variable to the Print struct. I decided to add it to the state rather than as a parameter to the consume function to avoid (or at least postpone) refactoring the ExportToSheets struct, which is also a Consumer but where a Write parameter cannot be added as straightforwardly.
To test the Print, I had to pass a reference to consume (&self instead of self).
Since the resulting code is a bit over-complex, feedback is super appreciated, thanks!
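The PR relies on Rust's csv crate for the actual escaping; purely as a language-neutral illustration of the "escape , in entry content" task (the field values below are made up), Python's standard csv module shows the same behaviour of quoting any field that contains the delimiter:

    import csv
    import io

    buf = io.StringIO()                    # stands in for the Writer held by the Print struct
    writer = csv.writer(buf)
    writer.writerow(["repo", "entry, with, commas"])
    print(buf.getvalue(), end="")          # repo,"entry, with, commas"  (field quoted, commas preserved)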
@angelocatalani I'm happy to take a look at this but likely won't have all too much to give you — I am no expert (yet) when it comes to Arc and Mutex... @nikomatsakis will have much better insight for you.
@angelocatalani so I took a look. I think it's not worth having the unit tests. Really we're just checking that the csv::Writer code works correctly, and maybe it'd be better to just build up a unit testing harness that runs the final executable anyway.
I pushed a commit that moves back to by-value and things work much more nicely.
If we really wanted the unit tests, I'd probably implement some custom writer that writes the data into a shared buffer, but it doesn't seem worth the trouble.
Moving to anyhow::Error and rebasing would be good though!
OK great, I'll do it and request a new review
|
gharchive/pull-request
| 2021-06-22T20:26:56 |
2025-04-01T06:39:55.594301
|
{
"authors": [
"angelocatalani",
"chazkiker2",
"nikomatsakis"
],
"repo": "optopodi/optopodi",
"url": "https://github.com/optopodi/optopodi/pull/37",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1046225084
|
Unable to use an iSCSI disk greater than 2 TB
When I try to create a volume larger than 2 TB, it always gets formatted to 2 TB at most; fdisk only supports up to 2 TB.
Investigating
The problem was related to the use of fdisk, which cannot partition disks beyond 2 TB.
The fix is to use parted with a GPT partition table instead.
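For context on why the 2 TB ceiling exists (background added here, not stated in the thread): an MBR partition table addresses at most 2^32 sectors of 512 bytes, while a GPT label written by parted has no such limit. A quick check of that figure in Python:

    # MBR stores sector counts in 32-bit fields; with 512-byte sectors the cap is:
    max_mbr_bytes = (2 ** 32) * 512
    print(max_mbr_bytes, "bytes")           # 2199023255552
    print(max_mbr_bytes / 2 ** 40, "TiB")   # 2.0 TiB (about 2.2 TB), the limit hit above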
The fixed block looks like this:
resource "null_resource" "partition_disk" {
depends_on = [null_resource.provisioning_disk]
count = length(oci_core_volume_attachment.ISCSIDiskAttachment)
connection {
type = "ssh"
host = var.linux_compute_private_ip
user = var.is_opc ? "opc" : "ubuntu"
private_key = var.ssh_private_is_path ? file(var.ssh_private_key) : var.ssh_private_key
}
# With provisioned disk, trigger fdisk, then pvcreate and vgcreate to tag the disk
provisioner "remote-exec" {
inline = [
"set +x",
"export DEVICE_ID=/dev/disk/by-path/ip-${oci_core_volume_attachment.ISCSIDiskAttachment[count.index].ipv4}:${oci_core_volume_attachment.ISCSIDiskAttachment[count.index].port}-iscsi-${oci_core_volume_attachment.ISCSIDiskAttachment[count.index].iqn}-lun-1",
"if [ ${var.disk_size_in_gb} > 2000 ]; then ${local.parted} $${DEVICE_ID} mklabel gpt mkpart P1 xfs 0% 100%; else ${local.fdisk} $${DEVICE_ID}; fi",
]
}
}
Closing the bug as unit testing went OK
[opc@dalquintdevhubscl SSH]$ ssh -i auto_ssh_id_rsa opc@129.151.114.86
The authenticity of host '129.151.114.86 (129.151.114.86)' can't be established.
ECDSA key fingerprint is SHA256:WbaUBnKPzyqnMxdJ0HCgvnw+uG0QO4IMOJmS5kOM92c.
ECDSA key fingerprint is MD5:c2:77:73:a0:1a:7d:bf:d6:48:be:bd:88:9a:83:b4:02.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '129.151.114.86' (ECDSA) to the list of known hosts.
cLast login: Fri Nov 5 21:26:43 2021 from dalquintdevhubscl.dalquinthubpubs.dalquinthubvcn.oraclevcn.com
cd Welcome to Autonomous Linux
Effective kernel version is 5.4.17-2102.200.13.el7uek.x86_64
Please add OCI notification service topic OCID with
$ sudo al-config -T [topic OCID]
[opc@testiscsi01 ~]$ cd /u01/
[opc@testiscsi01 u01]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 3.0T 33M 3.0T 1% /u01
[opc@testiscsi01 u01]$ exit
logout
Connection to 129.151.114.86 closed.
|
gharchive/issue
| 2021-11-05T20:33:32 |
2025-04-01T06:39:55.640651
|
{
"authors": [
"dralquinta",
"kakopedreros"
],
"repo": "oracle-devrel/terraform-oci-cloudbricks-linux-iscsi-disks",
"url": "https://github.com/oracle-devrel/terraform-oci-cloudbricks-linux-iscsi-disks/issues/6",
"license": "UPL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1455484394
|
Fix css to make video responsive - LLAPEX - 646
Add here your WMS ID
General requirements
Information in the workshop is adequate and updated
Code is correct and working
Links are correct
Make sure you enter the help email link in your manifest.json
Please make sure WMS URLs are updated as needed after your PR is approved
Checklist - Refer to the QA document for the complete list
Please confirm that the following is completed before submitting your PR
[ ] All filenames are lower case (including folders, images, files, etc.)
[ ] Filenames are descriptive
[ ] Your workshop folder structure should be similar to the one used in the sample workshop (https://github.com/oracle-livelabs/common/tree/main/sample-livelabs-templates/sample-workshop)
[ ] Are you using multiple versions (desktop/, sandbox/, tenancy/)? Make sure that each of them contains a manifest.json and an index.html
[ ] Image references in markdown contain an alternate text
Hi Anoosha! I cannot merge your PR. Can you look into this issue below?
Also, for the step by step guide, are you updating the new one or the old one?
|
gharchive/pull-request
| 2022-11-18T17:07:29 |
2025-04-01T06:39:55.645221
|
{
"authors": [
"anooshapilli",
"arabellayao"
],
"repo": "oracle-livelabs/common",
"url": "https://github.com/oracle-livelabs/common/pull/137",
"license": "UPL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2199370402
|
chore: use dunder function str when requiring a string serialization of a PackageURL object
The PackageURL class implements the __str__() function here:
def __str__(self, *args: Any, **kwargs: Any) -> str:
    return self.to_string()
That __str__() function is invoked automatically when using %s in a format string (as is the case when logging, docs) or when calling the str() built-in.
This change removes the to_string() use such that the plain PackageURL objects are passed to logging and str() and thereby their __str__() function is called instead. That guarantees that the PackageURL class is responsible for returning the “‘informal’ or nicely printable string representation” of the object. And, lucky enough, that’s the value produced by to_string() anyway.
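A minimal sketch of the mechanism described above (PurlLike is a made-up stand-in, not the real packageurl API): both %s formatting in logging and the str() built-in route through __str__(), which delegates to to_string():

    import logging

    class PurlLike:
        # Made-up stand-in mirroring the pattern quoted above: __str__ delegates to to_string().
        def __init__(self, value: str) -> None:
            self.value = value

        def to_string(self) -> str:
            return self.value

        def __str__(self) -> str:
            return self.to_string()

    obj = PurlLike("pkg:pypi/example@1.0.0")
    logging.warning("purl: %s", obj)       # %s calls str(obj), which calls __str__ -> to_string()
    assert str(obj) == obj.to_string()     # the str() built-in takes the same path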
Thanks @jenstroeger for the changes. They make sense :+1:
|
gharchive/pull-request
| 2024-03-21T07:16:40 |
2025-04-01T06:39:55.736008
|
{
"authors": [
"behnazh-w",
"jenstroeger"
],
"repo": "oracle/macaron",
"url": "https://github.com/oracle/macaron/pull/675",
"license": "UPL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
378814466
|
Use options.PVName when creating FSS PV objects
An OCID cannot be reliably used as a PV name since OCIDs may contain invalid characters.
This change fixes a previous bug where FSS PVs would fail to create in certain regions, leaving orphaned OCI resources.
This behaviour mimics the AWS EFS provisioner.
For reference the PV name is made up of the prefix "pvc" and the PVC.UID.
➜ k get pv -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'
pvc-c7fa1319-e372-11e8-9ff9-020017006e02
|
gharchive/pull-request
| 2018-11-08T16:30:23 |
2025-04-01T06:39:55.741861
|
{
"authors": [
"owainlewis"
],
"repo": "oracle/oci-cloud-controller-manager",
"url": "https://github.com/oracle/oci-cloud-controller-manager/pull/279",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
117013321
|
change site method to use new name
see: https://github.com/jekyll/jekyll/pull/3240
Is this going to be accepted?
Oops, I totally forgot about this. Is it possible to make the code work with Jekyll 2 and 3?
If not, maybe I'll keep a 2.0 and 3.0 branch with corresponding versions.
Also, version 1.0.0 is now out with this change.
|
gharchive/pull-request
| 2015-11-15T19:49:09 |
2025-04-01T06:39:55.788812
|
{
"authors": [
"blundin",
"friedenberg",
"orangejulius"
],
"repo": "orangejulius/jekyll-footnotes",
"url": "https://github.com/orangejulius/jekyll-footnotes/pull/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1362841801
|
feat: Repository.Manifest() now returns a ManifestStore
Resolves #284
Signed-off-by: Sylvia Lei lixlei@microsoft.com
Codecov Report
Merging #300 (a9cee9c) into main (e413b92) will increase coverage by 0.03%.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## main #300 +/- ##
==========================================
+ Coverage 71.54% 71.57% +0.03%
==========================================
Files 37 37
Lines 3511 3515 +4
==========================================
+ Hits 2512 2516 +4
Misses 747 747
Partials 252 252
Impacted Files | Coverage Δ
registry/repository.go | 0.00% <ø> (ø)
registry/remote/repository.go | 65.72% <100.00%> (+0.18%) :arrow_up:
|
gharchive/pull-request
| 2022-09-06T07:38:48 |
2025-04-01T06:39:55.797799
|
{
"authors": [
"Wwwsylvia",
"codecov-commenter"
],
"repo": "oras-project/oras-go",
"url": "https://github.com/oras-project/oras-go/pull/300",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
236801538
|
handle program exceptions
add try & catch to the program exceptions
Done.
|
gharchive/issue
| 2017-06-19T07:25:49 |
2025-04-01T06:39:55.811561
|
{
"authors": [
"orbardugo",
"sabagsapir"
],
"repo": "orbardugo/Hahot-Hameshulash",
"url": "https://github.com/orbardugo/Hahot-Hameshulash/issues/45",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2643178415
|
The application crashes when the cross is pressed
Describe the bug
I usually close the application window so that it stays running but is not displayed on the screen. At some point, after clicking on the cross, the app started to unload from memory. I am using OrbStack on multiple Macs with the same settings for both the system and the OrbStack app itself. Below I will add information from the Macs where I am experiencing the problem.
To Reproduce
No response
Expected behavior
The application should remain visible in the dock, but its window should be closed
Diagnostic report (REQUIRED)
OrbStack info:
Version: 1.8.0
Commit: 58b79a4a06c46666c70ca908a94b47fe70882fca (v1.8.0)
System info:
macOS: 13.7.1 (22H221)
CPU: amd64, 4 cores
CPU model: Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz
Model: MacBookPro14,1
Memory: 8 GiB
Diagnostic report failed: get presigned url: Post "https://api-license.orbstack.dev/api/v1/debug/diag_reports": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Full report: https://orbstack.dev/_admin/diag/orbstack-diagreport_2024-11-08T07-45-01.921975Z.zip
OrbStack Crash Report.zip
orbstack-diagreport_2024-11-08T07-45-01.921975Z.zip
gui.log.zip
vmgr.1.log.zip
vmgr.log.zip
Screenshots and additional context (optional)
No response
Already fixed for the next version.
Released in v1.8.1.
|
gharchive/issue
| 2024-11-08T07:49:22 |
2025-04-01T06:39:55.833296
|
{
"authors": [
"kdrag0n",
"sandergol"
],
"repo": "orbstack/orbstack",
"url": "https://github.com/orbstack/orbstack/issues/1558",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1643203672
|
Linux virtual machine creation stuck
Describe the bug
Linux virtual machine creation stuck
To Reproduce
Steps to reproduce the behavior:
Download and install orbstack
click create virtual machine
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots
If applicable, add screenshots to help explain your problem.
Logs
If applicable, attach log files by opening the OrbStack menu and clicking “Show Logs”, or email them for privacy.
time="03-28 10:54:07" level=info msg="creating VM"
time="03-28 10:54:07" level=info msg="forwarding SSH agent" sock=/private/tmp/com.apple.launchd.gmdHhg6hdD/Listeners
time="03-28 10:54:07" level=info msg="starting VM"
time="03-28 10:54:07" level=info msg="starting host services"
time="03-28 10:54:07" level=info msg="waiting for VM to start"
time="03-28 10:54:07" level=info msg="[VM] starting"
time="03-28 10:54:07" level=info msg="[VM] started"
time="03-28 10:54:08" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-28 10:54:08" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-28 10:54:08" level=error msg="host-unix forward: dial failed" addr="{1 100.115.92.2 2375}" error="connect tcp 100.115.92.2:2375: connection was refused"
time="03-28 10:54:08" level=error msg="host-unix forward: dial failed" addr="{1 100.115.92.2 2375}" error="connect tcp 100.115.92.2:2375: connection was refused"
time="03-28 10:54:08" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-28 10:54:08" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-28 10:54:08" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-28 10:54:08" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-28 10:54:08" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-28 10:54:08" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-28 10:54:08" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-28 10:54:08" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-28 10:54:09" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-28 10:54:09" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-28 10:54:09" level=info msg="data ready"
time="03-28 10:54:09" level=info msg="Mounting NFS..."
time="03-28 10:54:09" level=info msg="data ready"
time="03-28 10:54:09" level=error msg="host-vsock forward: dial failed" error="Error Domain=NSPOSIXErrorDomain Code=54 \"Connection reset by peer\""
time="03-28 10:54:11" level=error msg="NFS mount failed" error="mount nfs: mount(): connection reset by peer"
time="03-28 10:54:11" level=info msg="Mounting NFS..."
time="03-28 10:54:11" level=info msg="NFS mounted"
time="03-28 10:54:17" level=info msg="Setup not done in time, running setup..."
time="03-28 10:54:17" level=info msg="CLI setup complete"
time="03-28 11:22:00" level=info msg=sleep
time="03-28 12:11:09" level=info msg=wake
time="03-28 12:11:18" level=info msg=sleep
time="03-28 12:12:39" level=info msg=wake
time="03-28 12:14:08" level=info msg=sleep
time="03-28 12:31:51" level=info msg=wake
time="03-28 12:31:57" level=info msg=sleep
time="03-28 12:32:01" level=info msg=wake
time="03-28 12:43:55" level=info msg=sleep
time="03-28 12:49:31" level=info msg=wake
time="03-28 12:49:39" level=info msg=sleep
time="03-28 12:49:45" level=info msg=wake
Info report
OrbStack info:
Version: 0.5.1 (50100)
Commit: 646c501f9b245f5bc61bae3036ff5e92aaa7840e (v0.5.1)
System info:
macOS: 12.6.3 (21G419)
CPU: arm64, 10 cores
CPU model: Apple M1 Pro
---------------- [ cut here ] ----------------
Please copy and paste the above information into your bug report.
Open an issue here: https://github.com/orbstack/orbstack/issues/new/choose
Additional context
After a long creation attempt, the error "Failed to create machine: create 'ubuntu2': timed out waiting for network" was reported, and then the creation failed
Please add both logs, including console.log.
console.log is empty
Please add both logs, including console.log.
[ 0.401215] cacheinfo: Unable to detect cache hierarchy for CPU 0
[ 0.401250] random: crng init done
[ 0.401400] loop: module loaded
[ 0.402200] virtio_blk virtio2: 1/0/0 default/read/poll queues
[ 0.402407] virtio_blk virtio2: [vda] 297960 512-byte logical blocks (153 MB/145 MiB)
[ 0.402808] virtio_blk virtio3: 1/0/0 default/read/poll queues
[ 0.402970] virtio_blk virtio3: [vdb] 17179869184 512-byte logical blocks (8.80 TB/8.00 TiB)
[ 0.403698] vdb: vdb1
[ 0.403798] virtio_blk virtio4: 1/0/0 default/read/poll queues
[ 0.403952] virtio_blk virtio4: [vdc] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
[ 0.404424] vdc: vdc1 vdc2
[ 0.404535] zram: Added device: zram0
[ 0.404583] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
[ 0.404618] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld Jason@zx2c4.com. All Rights Reserved.
[ 0.404713] tun: Universal TUN/TAP device driver, 1.6
[ 0.405086] PPP generic driver version 2.4.2
[ 0.405181] PPP Deflate Compression module registered
[ 0.405298] PPP MPPE Compression module registered
[ 0.405338] usbcore: registered new interface driver rtl8187
[ 0.405373] usbcore: registered new interface driver rtl8192cu
[ 0.405404] usbcore: registered new interface driver rtl8150
[ 0.405439] usbcore: registered new interface driver r8152
[ 0.405471] usbcore: registered new interface driver asix
[ 0.405509] usbcore: registered new interface driver ax88179_178a
[ 0.405546] usbcore: registered new interface driver cdc_ether
[ 0.405582] usbcore: registered new interface driver cdc_ncm
[ 0.405618] usbcore: registered new interface driver r8153_ecm
[ 0.405679] VFIO - User Level meta-driver version: 0.3
[ 0.405809] usbcore: registered new interface driver uas
[ 0.405851] usbcore: registered new interface driver usb-storage
[ 0.405944] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller
[ 0.405976] vhci_hcd vhci_hcd.0: new USB bus registered, assigned bus number 1
[ 0.406095] vhci_hcd: created sysfs vhci_hcd.0
[ 0.406141] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.01
[ 0.406183] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 0.406234] usb usb1: Product: USB/IP Virtual Host Controller
[ 0.406271] usb usb1: Manufacturer: Linux 6.1.21-orbstack-00098-g7d48b03fef38 vhci_hcd
[ 0.406306] usb usb1: SerialNumber: vhci_hcd.0
[ 0.406375] hub 1-0:1.0: USB hub found
[ 0.406401] hub 1-0:1.0: 8 ports detected
[ 0.406450] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller
[ 0.406495] vhci_hcd vhci_hcd.0: new USB bus registered, assigned bus number 2
[ 0.406539] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
[ 0.406583] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 6.01
[ 0.406626] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 0.406671] usb usb2: Product: USB/IP Virtual Host Controller
[ 0.406704] usb usb2: Manufacturer: Linux 6.1.21-orbstack-00098-g7d48b03fef38 vhci_hcd
[ 0.406738] usb usb2: SerialNumber: vhci_hcd.0
[ 0.406804] hub 2-0:1.0: USB hub found
[ 0.406824] hub 2-0:1.0: 8 ports detected
[ 0.406962] rtc-pl031 20050000.pl031: registered as rtc0
[ 0.407002] rtc-pl031 20050000.pl031: setting system clock to 2023-03-28T05:33:26 UTC (1679981606)
[ 0.407094] hid: raw HID events driver (C) Jiri Kosina
[ 0.407139] usbcore: registered new interface driver usbhid
[ 0.407162] usbhid: USB HID core driver
[ 0.407203] GACT probability NOT on
[ 0.407236] Mirror/redirect action on
[ 0.407263] netem: version 1.3
[ 0.415679] Initializing XFRM netlink socket
[ 0.415785] NET: Registered PF_INET6 protocol family
[ 0.416021] Segment Routing with IPv6
[ 0.416053] In-situ OAM (IOAM) with IPv6
[ 0.416102] NET: Registered PF_PACKET protocol family
[ 0.416141] Bridge firewalling registered
[ 0.416171] l2tp_core: L2TP core driver, V2.0
[ 0.416205] 8021q: 802.1Q VLAN Support v1.8
[ 0.416227] Key type dns_resolver registered
[ 0.416290] NET: Registered PF_VSOCK protocol family
[ 0.416604] Loading compiled-in X.509 certificates
[ 0.416913] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
[ 0.420944] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[ 0.421451] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[ 0.421498] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ 0.421536] cfg80211: failed to load regulatory.db
[ 0.421685] erofs: (device vda): mounted with root inode @ nid 36.
[ 0.421720] VFS: Mounted root (erofs filesystem) readonly on device 254:0.
[ 0.421930] devtmpfs: mounted
[ 0.422151] Freeing unused kernel memory: 3008K
[ 0.483979] Run /opt/orb/preinit as init process
[BEGIN] preinit
[END] preinit
OpenRC 0.46 is starting up Linux 6.1.21-orbstack-00098-g7d48b03fef38 (aarch64)
Mounting /proc ... [ ok ]
Mounting /run ... * /run/openrc: creating directory
/run/lock: creating directory
/run/lock: correcting owner
/lib/rc/sh/gendepends.sh: 12: [: unexpected operator
Caching service dependencies ... [ ok ]
Mounting /sys ... * Mounting devtmpfs on /dev ... [ ok ]
[ ok ]
Mounting security filesystem ... [ ok ]
Mounting /dev/mqueue ... * Mounting fuse control filesystem ... [ ok ]
[ ok ]
Mounting /dev/pts ... [ ok ]
Mounting /dev/shm ... [ ok ]
[BEGIN] vinit-early
[2m2023-03-28T05:33:26.775294Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m listening on 100.115.92.2:103
[ 0.765036] virtio-fs: tag not found
mount: /mnt/rosetta: wrong fs type, bad option, bad superblock on rosetta, missing codepage or helper program, or other error.
dmesg(1) may have more information after failed mount system call.
/opt/orb/vinit-early: line 63: echo: write error: Invalid argument
/opt/orb/vinit-early: line 64: echo: write error: Invalid argument
[END] vinit-early
hostname | * Setting hostname ...udev | * Starting udev ... [ ok ]
[2m2023-03-28T05:33:26.892996Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[2m2023-03-28T05:33:26.892996Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[2m2023-03-28T05:33:26.893079Z[0m [31mERROR[0m [2mvcontrol::error[0m[2m:[0m Request failed: data not ready
[2m2023-03-28T05:33:26.893052Z[0m [31mERROR[0m [2mvcontrol::error[0m[2m:[0m Request failed: data not ready
[ ok ]
[ 0.883090] udevd[563]: starting version 3.2.11
[ 0.885299] udevd[563]: starting eudev-3.2.11
udev-settle | * Waiting for uevents to be processed ...fsck | * Checking local filesystems ... [ ok ]
root | * Remounting filesystems ... [ ok ]
[2m2023-03-28T05:33:27.095060Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[2m2023-03-28T05:33:27.095112Z[0m [31mERROR[0m [2mvcontrol::error[0m[2m:[0m Request failed: data not ready
[2m2023-03-28T05:33:27.095064Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[2m2023-03-28T05:33:27.095209Z[0m [31mERROR[0m [2mvcontrol::error[0m[2m:[0m Request failed: data not ready
[ ok ]
localmount | * Mounting local filesystems ...[ 1.011114] BTRFS: device label user-data-fs devid 1 transid 190 /dev/vdb1 scanned by mount (692)
[ 1.011383] BTRFS info (device vdb1): using crc32c (crc32c-generic) checksum algorithm
[ 1.011469] BTRFS info (device vdb1): turning on async discard
[ 1.011526] BTRFS info (device vdb1): enabling ssd optimizations
[ 1.011592] BTRFS info (device vdb1): setting nodatacow, compression disabled
[ 1.011673] BTRFS info (device vdb1): using free space tree
[ ok ]
networking | * Starting networking ... * lo .../etc/network/if-up.d/dad: 11: arithmetic expression: expecting primary: " counter-- "
networking |run-parts: /etc/network/if-up.d/dad: exit status 2
networking | [ ok ]
chronyd | * Starting chronyd ... [ ok ]
[BEGIN] vinit-late
[ 1.154770] zram: setup backing device /dev/vdc1
/opt/docker-rootfs /
/
Resize device id 1 (/dev/vdb1) from 926.35GiB to max
[END] vinit-late
[ 1.170196] zram0: detected capacity change from 0 to 32530432
[36mINFO[0m[03-28 05:33:27] started
[36mINFO[0m[03-28 05:33:27] starting container [36mcontainer[0m=docker
Setting up swapspace version 1, size = 15.5 GiB (16655577088 bytes)
no label, UUID=79ab33e2-81c6-477c-9847-c757feb48fa1
[2m2023-03-28T05:33:27.296431Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[2m2023-03-28T05:33:27.296453Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[ 1.227035] Adding 16265212k swap on /dev/zram0. Priority:32767 extents:1 across:16265212k SSDsc
vm.swappiness = 100
vm.page-cluster = 1
[ 1.229405] Adding 4194300k swap on /dev/vdc2. Priority:1 extents:1 across:4194300k Dsc
rpcbind | * Starting rpcbind ...[ 1.331838] conbr0: port 1(vethjHtS9x) entered blocking state
[ 1.331929] conbr0: port 1(vethjHtS9x) entered disabled state
[ 1.331992] device vethjHtS9x entered promiscuous mode
[ 1.332097] conbr0: port 1(vethjHtS9x) entered blocking state
[ 1.332180] conbr0: port 1(vethjHtS9x) entered forwarding state
[ 1.332615] eth0: renamed from vethcWuDjU
[ 1.366098] IPv6: ADDRCONF(NETDEV_CHANGE): vethjHtS9x: link becomes ready
[36mINFO[0m[03-28 05:33:27] container started [36mcontainer[0m=docker
[ 1.413642] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ ok ]
rpc.statd | * Starting NFS statd ... [ ok ]
nfs | * Mounting nfsd filesystem in /proc ... [ ok ]
nfs | * Exporting NFS directories ... [ ok ]
nfs | * Starting NFS mountd ... [ ok ]
nfs | * Starting NFS daemon ...[ 2.865910] NFSD: Using UMH upcall client tracking operations.
[ 2.866050] NFSD: starting 1-second grace period (net f0000000)
[ ok ]
nfs | * Starting NFS smnotify ... [ ok ]
[36mINFO[0m[03-28 05:33:32] creating container [36mid[0m=01GWKBA36JBCFTP53V4RZWE0ZH [36mname[0m=ubuntu
[36mINFO[0m[03-28 05:33:32] fetching image index
[36mINFO[0m[03-28 05:33:34] downloading images [36mspec[0m="{ubuntu kinetic amd64 default}"
[36mINFO[0m[03-28 05:34:31] extracting rootfs [36mcontainer[0m=ubuntu
[36mINFO[0m[03-28 05:34:33] applying templates
[36mINFO[0m[03-28 05:34:33] starting container [36mcontainer[0m=ubuntu
[ 67.280274] conbr0: port 2(vethPRIbyo) entered blocking state
[ 67.280378] conbr0: port 2(vethPRIbyo) entered disabled state
[ 67.280528] device vethPRIbyo entered promiscuous mode
[ 67.280634] conbr0: port 2(vethPRIbyo) entered blocking state
[ 67.280971] conbr0: port 2(vethPRIbyo) entered forwarding state
[ 67.282175] conbr0: port 2(vethPRIbyo) entered disabled state
[ 67.282331] eth0: renamed from vethJjjlSN
[36mINFO[0m[03-28 05:34:33] container started [36mcontainer[0m=ubuntu
[36mINFO[0m[03-28 05:34:33] waiting for network before setup [36mcontainer[0m=ubuntu
[36mINFO[0m[03-28 05:34:53] stopping container [36mcontainer[0m=ubuntu
[33mWARN[0m[03-28 05:34:57] graceful shutdown failed [33mcontainer[0m=ubuntu [33merror[0m="shutting down the container failed"
[36mINFO[0m[03-28 05:34:57] stopped container [36mcontainer[0m=ubuntu
[36mINFO[0m[03-28 05:34:57] deleting container [36mcontainer[0m=ubuntu
I have the same problem
Got the same problem. Network stopped working on the old VMs, telling me that "network is unreachable", and I also cannot create any new ones because creation fails on the network step.
here's full log:
[ 0.448326] cacheinfo: Unable to detect cache hierarchy for CPU 0
[ 0.448349] random: crng init done
[ 0.448500] loop: module loaded
[ 0.449269] virtio_blk virtio2: 1/0/0 default/read/poll queues
[ 0.449461] virtio_blk virtio2: [vda] 297960 512-byte logical blocks (153 MB/145 MiB)
[ 0.449775] virtio_blk virtio3: 1/0/0 default/read/poll queues
[ 0.449926] virtio_blk virtio3: [vdb] 17179869184 512-byte logical blocks (8.80 TB/8.00 TiB)
[ 0.450519] vdb: vdb1
[ 0.450618] virtio_blk virtio4: 1/0/0 default/read/poll queues
[ 0.450771] virtio_blk virtio4: [vdc] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
[ 0.451279] vdc: vdc1 vdc2
[ 0.451387] zram: Added device: zram0
[ 0.451432] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
[ 0.451471] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
[ 0.451557] tun: Universal TUN/TAP device driver, 1.6
[ 0.451900] PPP generic driver version 2.4.2
[ 0.451961] PPP Deflate Compression module registered
[ 0.452079] PPP MPPE Compression module registered
[ 0.452111] usbcore: registered new interface driver rtl8187
[ 0.452150] usbcore: registered new interface driver rtl8192cu
[ 0.452186] usbcore: registered new interface driver rtl8150
[ 0.452218] usbcore: registered new interface driver r8152
[ 0.452245] usbcore: registered new interface driver asix
[ 0.452277] usbcore: registered new interface driver ax88179_178a
[ 0.452312] usbcore: registered new interface driver cdc_ether
[ 0.452351] usbcore: registered new interface driver cdc_ncm
[ 0.452483] usbcore: registered new interface driver r8153_ecm
[ 0.452581] VFIO - User Level meta-driver version: 0.3
[ 0.452714] usbcore: registered new interface driver uas
[ 0.452751] usbcore: registered new interface driver usb-storage
[ 0.452846] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller
[ 0.452894] vhci_hcd vhci_hcd.0: new USB bus registered, assigned bus number 1
[ 0.452943] vhci_hcd: created sysfs vhci_hcd.0
[ 0.452983] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.01
[ 0.453063] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 0.453120] usb usb1: Product: USB/IP Virtual Host Controller
[ 0.453153] usb usb1: Manufacturer: Linux 6.1.21-orbstack-00098-g7d48b03fef38 vhci_hcd
[ 0.453204] usb usb1: SerialNumber: vhci_hcd.0
[ 0.453288] hub 1-0:1.0: USB hub found
[ 0.453354] hub 1-0:1.0: 8 ports detected
[ 0.453406] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller
[ 0.453475] vhci_hcd vhci_hcd.0: new USB bus registered, assigned bus number 2
[ 0.453538] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
[ 0.453676] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 6.01
[ 0.453721] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 0.453771] usb usb2: Product: USB/IP Virtual Host Controller
[ 0.453811] usb usb2: Manufacturer: Linux 6.1.21-orbstack-00098-g7d48b03fef38 vhci_hcd
[ 0.453860] usb usb2: SerialNumber: vhci_hcd.0
[ 0.453938] hub 2-0:1.0: USB hub found
[ 0.453971] hub 2-0:1.0: 8 ports detected
[ 0.454102] rtc-pl031 20050000.pl031: registered as rtc0
[ 0.454146] rtc-pl031 20050000.pl031: setting system clock to 2023-03-28T21:57:13 UTC (1680040633)
[ 0.454217] hid: raw HID events driver (C) Jiri Kosina
[ 0.454257] usbcore: registered new interface driver usbhid
[ 0.454322] usbhid: USB HID core driver
[ 0.454361] GACT probability NOT on
[ 0.454421] Mirror/redirect action on
[ 0.454449] netem: version 1.3
[ 0.461579] Initializing XFRM netlink socket
[ 0.461679] NET: Registered PF_INET6 protocol family
[ 0.461922] Segment Routing with IPv6
[ 0.461978] In-situ OAM (IOAM) with IPv6
[ 0.462021] NET: Registered PF_PACKET protocol family
[ 0.462085] Bridge firewalling registered
[ 0.462125] l2tp_core: L2TP core driver, V2.0
[ 0.462202] 8021q: 802.1Q VLAN Support v1.8
[ 0.462236] Key type dns_resolver registered
[ 0.462316] NET: Registered PF_VSOCK protocol family
[ 0.462700] Loading compiled-in X.509 certificates
[ 0.462995] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
[ 0.467559] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[ 0.468021] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[ 0.468071] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ 0.468123] cfg80211: failed to load regulatory.db
[ 0.468262] erofs: (device vda): mounted with root inode @ nid 36.
[ 0.468306] VFS: Mounted root (erofs filesystem) readonly on device 254:0.
[ 0.468381] devtmpfs: mounted
[ 0.468603] Freeing unused kernel memory: 3008K
[ 0.530944] Run /opt/orb/preinit as init process
[BEGIN] preinit
[END] preinit
OpenRC 0.46 is starting up Linux 6.1.21-orbstack-00098-g7d48b03fef38 (aarch64)
* Mounting /proc ... [ ok ]
* Mounting /run ... * /run/openrc: creating directory
* /run/lock: creating directory
* /run/lock: correcting owner
/lib/rc/sh/gendepends.sh: 12: [: unexpected operator
* Caching service dependencies ... [ ok ]
* Mounting /sys ... * Mounting devtmpfs on /dev ... [ ok ]
[ ok ]
* Mounting security filesystem ... [ ok ]
* Mounting /dev/mqueue ... * Mounting fuse control filesystem ... [ ok ]
[ ok ]
* Mounting /dev/pts ... [ ok ]
* Mounting /dev/shm ... [ ok ]
[BEGIN] vinit-early
[2m2023-03-28T21:57:13.765299Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m listening on 100.115.92.2:103
[2m2023-03-28T21:57:13.779916Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[2m2023-03-28T21:57:13.779958Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[2m2023-03-28T21:57:13.780210Z[0m [31mERROR[0m [2mvcontrol::error[0m[2m:[0m Request failed: data not ready
[2m2023-03-28T21:57:13.780049Z[0m [31mERROR[0m [2mvcontrol::error[0m[2m:[0m Request failed: data not ready
[ 0.799645] virtio-fs: tag <rosetta> not found
mount: /mnt/rosetta: wrong fs type, bad option, bad superblock on rosetta, missing codepage or helper program, or other error.
dmesg(1) may have more information after failed mount system call.
/opt/orb/vinit-early: line 63: echo: write error: Invalid argument
/opt/orb/vinit-early: line 64: echo: write error: Invalid argument
[END] vinit-early
hostname | * Setting hostname ... [ ok ]
udev | * Starting udev ... [ ok ]
[ 0.913563] udevd[550]: starting version 3.2.11
[ 0.915415] udevd[550]: starting eudev-3.2.11
[2m2023-03-28T21:57:13.981388Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[2m2023-03-28T21:57:13.981452Z[0m [31mERROR[0m [2mvcontrol::error[0m[2m:[0m Request failed: data not ready
[2m2023-03-28T21:57:13.981435Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[2m2023-03-28T21:57:13.981597Z[0m [31mERROR[0m [2mvcontrol::error[0m[2m:[0m Request failed: data not ready
udev-settle | * Waiting for uevents to be processed ...fsck | * Checking local filesystems ... [ ok ]
root | * Remounting filesystems ... [ ok ]
[ ok ]
localmount | * Mounting local filesystems ...[ 1.055462] BTRFS: device label user-data-fs devid 1 transid 237 /dev/vdb1 scanned by mount (675)
[ 1.055697] BTRFS info (device vdb1): using crc32c (crc32c-generic) checksum algorithm
[ 1.055761] BTRFS info (device vdb1): turning on async discard
[ 1.055824] BTRFS info (device vdb1): enabling ssd optimizations
[ 1.055873] BTRFS info (device vdb1): setting nodatacow, compression disabled
[ 1.055920] BTRFS info (device vdb1): using free space tree
[ ok ]
networking | * Starting networking ... * lo .../etc/network/if-up.d/dad: 11: arithmetic expression: expecting primary: " counter-- "
networking |run-parts: /etc/network/if-up.d/dad: exit status 2
networking | [ ok ]
chronyd | * Starting chronyd ...[2m2023-03-28T21:57:14.182475Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[2m2023-03-28T21:57:14.182572Z[0m [31mERROR[0m [2mvcontrol::error[0m[2m:[0m Request failed: data not ready
[2m2023-03-28T21:57:14.182503Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[2m2023-03-28T21:57:14.182722Z[0m [31mERROR[0m [2mvcontrol::error[0m[2m:[0m Request failed: data not ready
[ ok ]
[BEGIN] vinit-late
[ 1.206669] zram: setup backing device /dev/vdc1
/opt/docker-rootfs /
/
Resize device id 1 (/dev/vdb1) from 228.27GiB to max
[ 1.224970] zram0: detected capacity change from 0 to 48963584
[END] vinit-late
rpcbind | * Starting rpcbind ...[36mINFO[0m[03-28 21:57:14] started
Setting up swapspace version 1, size = 23.3 GiB (25069350912 bytes)
no label, UUID=1ef3e470-56e8-456b-ba6b-b94d33b1a2fe
[ 1.310387] Adding 24481788k swap on /dev/zram0. Priority:32767 extents:1 across:24481788k SSDsc
vm.swappiness = 100
vm.page-cluster = 1
[ 1.313173] Adding 4194300k swap on /dev/vdc2. Priority:1 extents:1 across:4194300k Dsc
[2m2023-03-28T21:57:14.384304Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[2m2023-03-28T21:57:14.384304Z[0m [32m INFO[0m [2mvcontrol[0m[2m:[0m flag_data_resized
[ ok ]
rpc.statd | * Starting NFS statd ... [ ok ]
[ 1.535484] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
nfs | * Mounting nfsd filesystem in /proc ... [ ok ]
nfs | * Exporting NFS directories ... [ ok ]
nfs | * Starting NFS mountd ... [ ok ]
nfs | * Starting NFS daemon ...[ 2.767014] NFSD: Using UMH upcall client tracking operations.
[ 2.767196] NFSD: starting 1-second grace period (net f0000000)
[ ok ]
nfs | * Starting NFS smnotify ... [ ok ]
[36mINFO[0m[03-28 21:57:15] starting container [36mcontainer[0m=ubuntu
[ 2.963064] conbr0: port 1(vethcoklmk) entered blocking state
[ 2.963305] conbr0: port 1(vethcoklmk) entered disabled state
[ 2.963498] device vethcoklmk entered promiscuous mode
[ 2.964559] eth0: renamed from vethfiMor2
[36mINFO[0m[03-28 21:57:16] container started [36mcontainer[0m=ubuntu
[36mINFO[0m[03-28 21:57:27] stopping container [36mcontainer[0m=ubuntu
[33mWARN[0m[03-28 21:57:31] graceful shutdown failed [33mcontainer[0m=ubuntu [33merror[0m="shutting down the container failed"
[36mINFO[0m[03-28 21:57:32] stopped container [36mcontainer[0m=ubuntu
[36mINFO[0m[03-28 21:58:40] creating container [36mid[0m=01GWN3NYDFQT7JKCD5P8T0E751 [36mname[0m=ubuntu-no-rosetta
[36mINFO[0m[03-28 21:58:40] fetching image index
[36mINFO[0m[03-28 21:58:41] downloading images [36mspec[0m="{ubuntu kinetic amd64 default}"
[36mINFO[0m[03-28 21:58:47] extracting rootfs [36mcontainer[0m=ubuntu-no-rosetta
[36mINFO[0m[03-28 21:58:49] applying templates
[36mINFO[0m[03-28 21:58:49] starting container [36mcontainer[0m=ubuntu-no-rosetta
[36mINFO[0m[03-28 21:58:49] container started [36mcontainer[0m=ubuntu-no-rosetta
[36mINFO[0m[03-28 21:58:49] waiting for network before setup [36mcontainer[0m=ubuntu-no-rosetta
[36mINFO[0m[03-28 21:59:00] starting container for ssh [36mcontainer[0m=ubuntu
[36mINFO[0m[03-28 21:59:00] starting container [36mcontainer[0m=ubuntu
[36mINFO[0m[03-28 21:59:00] container started [36mcontainer[0m=ubuntu
[36mINFO[0m[03-28 21:59:09] stopping container [36mcontainer[0m=ubuntu-no-rosetta
[33mWARN[0m[03-28 21:59:13] graceful shutdown failed [33mcontainer[0m=ubuntu-no-rosetta [33merror[0m="shutting down the container failed"
[31mERRO[0m[03-28 21:59:13] SSH error: bad file descriptor
[36mINFO[0m[03-28 21:59:13] stopped container [36mcontainer[0m=ubuntu-no-rosetta
[36mINFO[0m[03-28 21:59:13] deleting container [36mcontainer[0m=ubuntu-no-rosetta
[36mINFO[0m[03-28 22:00:38] creating container [36mid[0m=01GWN3SHEZ6Y1AYZHW994HV93J [36mname[0m=ubuntu2
[36mINFO[0m[03-28 22:00:38] fetching image index
[36mINFO[0m[03-28 22:00:39] downloading images [36mspec[0m="{ubuntu kinetic amd64 default}"
[36mINFO[0m[03-28 22:00:45] extracting rootfs [36mcontainer[0m=ubuntu2
[36mINFO[0m[03-28 22:00:47] applying templates
[36mINFO[0m[03-28 22:00:47] starting container [36mcontainer[0m=ubuntu2
[36mINFO[0m[03-28 22:00:47] container started [36mcontainer[0m=ubuntu2
[36mINFO[0m[03-28 22:00:47] waiting for network before setup [36mcontainer[0m=ubuntu2
[36mINFO[0m[03-28 22:01:07] stopping container [36mcontainer[0m=ubuntu2
[33mWARN[0m[03-28 22:01:11] graceful shutdown failed [33mcontainer[0m=ubuntu2 [33merror[0m="shutting down the container failed"
[36mINFO[0m[03-28 22:01:11] stopped container [36mcontainer[0m=ubuntu2
[36mINFO[0m[03-28 22:01:11] deleting container [36mcontainer[0m=ubuntu2
vmgr.log:
time="03-29 00:57:12" level=info msg="creating VM"
time="03-29 00:57:12" level=info msg="forwarding SSH agent" sock=/private/tmp/com.apple.launchd.jYRLDqCqop/Listeners
time="03-29 00:57:12" level=info msg="starting VM"
time="03-29 00:57:12" level=info msg="starting host services"
time="03-29 00:57:12" level=info msg="waiting for VM to start"
time="03-29 00:57:12" level=info msg="[VM] starting"
time="03-29 00:57:12" level=info msg="[VM] started"
time="03-29 00:57:13" level=error msg="host-unix forward: dial failed" addr="{1 100.115.92.2 2375}" error="connect tcp 100.115.92.2:2375: connection was refused"
time="03-29 00:57:13" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-29 00:57:13" level=error msg="host-unix forward: dial failed" addr="{1 100.115.92.2 2375}" error="connect tcp 100.115.92.2:2375: connection was refused"
time="03-29 00:57:13" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-29 00:57:13" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-29 00:57:13" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-29 00:57:13" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-29 00:57:14" level=info msg="data ready"
time="03-29 00:57:14" level=info msg="data ready"
time="03-29 00:57:14" level=info msg="Mounting NFS..."
time="03-29 00:57:14" level=error msg="host-vsock forward: dial failed" error="Error Domain=NSPOSIXErrorDomain Code=54 \"Connection reset by peer\""
time="03-29 00:57:16" level=error msg="NFS mount failed" error="mount nfs: mount(): connection reset by peer"
time="03-29 00:57:16" level=info msg="Mounting NFS..."
time="03-29 00:57:16" level=info msg="NFS mounted"
time="03-29 00:57:22" level=info msg="Setup not done in time, running setup..."
time="03-29 00:57:22" level=info msg="CLI setup complete"
@MrBIMC Can you be more specific about the error? That's likely to be the underlying cause here. What's returning "network is unreachable"?
Please share the full output of all of these commands to help debug the issue:
ip addr
ip route
networkctl status eth0
dig host.orb.internal
ping 1.1.1.1
ping 100.115.93.1
ping 100.115.92.2
ping 100.115.92.254
ping 2606:4700:4700::1111
ping fd00:30:31::1
ping fd00:96dc:7096:1d21::2
ping fd00:96dc:7096:1d22::254
mtr or traceroute to each of the IPs above (skip if your machine doesn't have either installed)
Does restarting the old machine help?
Also please share the output of orb report and the exact error message you get when creating a new machine (unexpected EOF, timed out waiting for network, etc.).
hmm, I've just rebooted the machine and network started working again.
Got it to break. Stopping the system, disabling rosetta for x86 images and then starting the vm again breaks the network. Collecting logs rn.
Btw, for some reason rosetta toggle is always on after the reboot.
Here are the logs for ubuntu without rosetta when network is broken:
pwniestarr@ubuntu:/Users/pwniestarr$ ping 8.8.8.8
ping: connect: Network is unreachable
pwniestarr@ubuntu:/Users/pwniestarr$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if5: <BROADCAST,MULTICAST> mtu 65520 qdisc noop state DOWN group default qlen 1000
link/ether ca:5d:84:fe:d8:0f brd ff:ff:ff:ff:ff:ff link-netnsid 0
pwniestarr@ubuntu:/Users/pwniestarr$ ip route
pwniestarr@ubuntu:/Users/pwniestarr$ networkctl status eth0
WARNING: systemd-networkd is not running, output will be incomplete.
^C
pwniestarr@ubuntu:/Users/pwniestarr$ dig host.orb.internal
-bash: dig: command not found
pwniestarr@ubuntu:/Users/pwniestarr$ ping 1.1.1.1
ping: connect: Network is unreachable
pwniestarr@ubuntu:/Users/pwniestarr$ ping 100.115.93.1
ping: connect: Network is unreachable
pwniestarr@ubuntu:/Users/pwniestarr$ ping 2606:4700:4700::1111
ping: connect: Network is unreachable
pwniestarr@ubuntu:/Users/pwniestarr$ mtr 1.1.1.1
-bash: mtr: command not found
pwniestarr@ubuntu:/Users/pwniestarr$ traceroute 1.1.1.1
-bash: traceroute: command not found
Creating new machine when network is broken yields "network unreachable, couldn't create"
Return value of orb report:
OrbStack info:
Version: 0.5.1 (50100)
Commit: 646c501f9b245f5bc61bae3036ff5e92aaa7840e (v0.5.1)
System info:
macOS: 13.1 (22C65)
CPU: arm64, 8 cores
CPU model: Apple M1
Updated to version 0.5.2, the problem still exists
time="03-30 16:21:50" level=info msg="creating VM"
time="03-30 16:21:50" level=info msg="forwarding SSH agent" sock=/private/tmp/com.apple.launchd.gmdHhg6hdD/Listeners
time="03-30 16:21:50" level=info msg="starting VM"
time="03-30 16:21:50" level=info msg="starting host services"
time="03-30 16:21:50" level=info msg="waiting for VM to start"
time="03-30 16:21:50" level=info msg="[VM] starting"
time="03-30 16:21:50" level=info msg="[VM] started"
time="03-30 16:21:51" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-30 16:21:51" level=error msg="host-unix forward: dial failed" addr="{1 100.115.92.2 2375}" error="connect tcp 100.115.92.2:2375: connection was refused"
time="03-30 16:21:51" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-30 16:21:51" level=error msg="host-unix forward: dial failed" addr="{1 100.115.92.2 2375}" error="connect tcp 100.115.92.2:2375: connection was refused"
time="03-30 16:21:51" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-30 16:21:51" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-30 16:21:51" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-30 16:21:51" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-30 16:21:51" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-30 16:21:51" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-30 16:21:52" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-30 16:21:52" level=error msg="host-tcp forward: dial failed" addr="{1 100.115.92.2 8000}" error="connect tcp 100.115.92.2:8000: connection was refused"
time="03-30 16:21:52" level=info msg="data ready"
time="03-30 16:21:52" level=info msg="Mounting NFS..."
time="03-30 16:21:52" level=info msg="data ready"
time="03-30 16:21:52" level=error msg="host-vsock forward: dial failed" error="Error Domain=NSPOSIXErrorDomain Code=54 "Connection reset by peer""
time="03-30 16:21:54" level=error msg="NFS mount failed" error="mount nfs: mount(): connection reset by peer"
time="03-30 16:21:54" level=info msg="Mounting NFS..."
time="03-30 16:21:54" level=info msg="NFS mounted"
time="03-30 16:25:37" level=warning msg="DNS query failed" error="no such record" name=_http._tcp.ports.ubuntu.com. type=SRV
time="03-30 16:25:37" level=warning msg="DNS query failed" error="no such record" name=_https._tcp.motd.ubuntu.com. type=SRV
time="03-30 16:25:43" level=warning msg="DNS query failed" error="no such record" name=_http._tcp.ports.ubuntu.com. type=SRV
It looks like you have an Alpine machine already. Can you start it and check if network works in it?
Also, are you creating an Intel Ubuntu machine?
I also created an ARM Ubuntu machine, but the Intel machines never get created
Thanks, that narrows it down. Will investigate.
Reproduced. The fix will likely take some time: the root cause is that the fallback for x86 emulation is missing functionality needed for systemd-networkd to work. Networking is broken because networkd doesn't start.
In the meantime, please upgrade to macOS 13.x if possible so you can use Rosetta instead (which doesn't have this issue), or run x86 programs in your ARM machine using Ubuntu's multiarch support. I'll also consider temporarily disabling creation of new x86 machines when Rosetta is disabled or not supported. (Unfortunately it's not possible to use Rosetta on macOS 12 due to hypervisor limitations.)
I updated my macOS to version 13.1 and that solved the problem
same problem when exec orb create centos:7 centos-7.9 --arch amd64
setup: timed out waiting for network
@jjeejj That's a different issue. OrbStack can't run distros with too old systemd due to cgroup compatibility issues. #66
@kdrag0n The documentation tells me to do this. If this feature is not yet supported, should the documentation be updated?
@jjeejj CentOS 7 has been removed for now. Thanks for pointing that out.
Fixed for the next version.
Released in v0.8.0.
|
gharchive/issue
| 2023-03-28T05:02:24 |
2025-04-01T06:39:55.920622
|
{
"authors": [
"MrBIMC",
"Smi1eSEC",
"jjeejj",
"kdrag0n",
"xiantang",
"youmeng1024"
],
"repo": "orbstack/orbstack",
"url": "https://github.com/orbstack/orbstack/issues/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
926912739
|
Prevent loadDefaultView firing twice
Description
Prevents the default view from being loaded twice on mount (loadDefaultView is handled by initializeFirstView)
Closes #333
Checklist
[x] The documentation reflects the changes.
[x] I have manually tested the application to make sure the changes don’t cause any downstream issues, which includes making sure ./orchest status --ext is not reporting failures when Orchest is running.
[ ] In case I changed code in the orchest-sdk, I updated its version according to SemVer in its _version.py and updated the version compatibility table in its README.md
[ ] In case I changed one of the services’ models.py I have performed the appropriate database migrations.
Could you leave out the console.log statements? 😉
Otherwise, clean fix! 👍🏻
|
gharchive/pull-request
| 2021-06-22T07:17:35 |
2025-04-01T06:39:55.930974
|
{
"authors": [
"joe-bell",
"ricklamers"
],
"repo": "orchest/orchest",
"url": "https://github.com/orchest/orchest/pull/337",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1372621538
|
Plugins clear button and remove button don't work well together
Describe the bug
When using the clear button and remove button plugins together with a multi-select whose items wrap onto multiple rows,
the clear button is not well placed and cannot be differentiated from the other remove buttons.
To Reproduce
Have clear button enabled
Have remove button enabled
Select multiple items
Have select with multiple lines
Hover the select
Expected behavior
The clear button should be more recognizable than the remove buttons.
It should be at the same height as the dropdown caret.
Maybe the clear button needs its own place that items can't occupy?
The green lines are the safe zone for the clear button:
Select items will never cross that line
The dropdown caret should not cross that line
Additional context
OS: Windows
Browser: Firefox
Version: 2.0.3
Bootstrap: 5
Theme/Template: www.Tabler.io
**To Reproduce**
Create an example on JSFiddle, CodePen or similar service and outline the steps for reproducing the bug.
Here is a demo : https://jsfiddle.net/florian_allsoftware/f0wkpyd9/8/
Using :
Symfony (only for class naming)
Tabler.io (Template)
TomSelect
It is put together like that:
Symfony -> adds the form-select class to the select
Tabler.io -> styles form-select by adding the caret at the end
TomSelect -> doesn't use the caret because the single class is not present (it's multi currently), but Tabler.io adds it anyway
|
gharchive/issue
| 2022-09-14T09:02:48 |
2025-04-01T06:39:55.943526
|
{
"authors": [
"cavasinf",
"oyejorge"
],
"repo": "orchidjs/tom-select",
"url": "https://github.com/orchidjs/tom-select/issues/469",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
808848340
|
Scavenge About/Construction and Demonstration of docs/ Publishing
The Miser Project has an effort to spiral the construction of its docs/ and the understanding of the default template and other aspects of the mapping between docs/ and the published orcmid.github.io/miser pages.
Bring those components here and then set them up. The use of spiraling may need to be explained; add that to the README.md page for the project.
Check out GitHub Community Guidelines
I need to find the screen captures that go with documenting the plain case. I also need to find the MarkDown cheat-sheet that I was using to work through the cases.
|
gharchive/issue
| 2021-02-15T22:13:31 |
2025-04-01T06:39:55.947603
|
{
"authors": [
"orcmid"
],
"repo": "orcmid/docEng",
"url": "https://github.com/orcmid/docEng/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2191955828
|
ignore invalid instruction of rune commit tx
fix https://github.com/ordinals/ord/issues/3306
We get transactions from bitcoin core, so I don't think it's possible to encounter a transaction with an invalid instruction. Have you found a case where it is possible?
|
gharchive/pull-request
| 2024-03-18T11:32:51 |
2025-04-01T06:39:55.948985
|
{
"authors": [
"casey",
"nowherekai"
],
"repo": "ordinals/ord",
"url": "https://github.com/ordinals/ord/pull/3307",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2568473121
|
🛑 Code Quality Analysis is down
In dd5d765, Code Quality Analysis (https://sonar.orfeo-toolbox.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Code Quality Analysis is back up in 93e91d7 after 7 minutes.
|
gharchive/issue
| 2024-10-06T05:31:21 |
2025-04-01T06:39:55.956142
|
{
"authors": [
"Julien-Osman"
],
"repo": "orfeotoolbox/status-page",
"url": "https://github.com/orfeotoolbox/status-page/issues/275",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
67135243
|
Traverse with an additional condition in some cases returns more values than without it
As title
@tglman do you still have the test case...?
Fixed on commit 35ed13f8a68fd571bda2a7b19df0de2b0cdc34a7 on the develop branch. The issue was related to field deserialization
|
gharchive/issue
| 2015-04-08T13:51:22 |
2025-04-01T06:39:55.978759
|
{
"authors": [
"luigidellaquila",
"tglman"
],
"repo": "orientechnologies/orientdb",
"url": "https://github.com/orientechnologies/orientdb/issues/3892",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
99596161
|
Functions with duplicate names
OrientDB can have functions with the same name, but when retrieving a function from the Java API using the metadata as below, it requires the function name. How can I know which function is being retrieved by the code below? Is there a way to retrieve a function using its RID?
db.getMetadata().getFunctionLibrary().getFunction("sum");
Are you sure orientdb can have a function with the same name?
As far as I can tell from the code, it's not possible.
@randikaf what's the purpose of having 2 functions with the same name?
@matanshukry , Yes, if you try from OrientDB Studio, you'll be able to create one.
@lvca Actually this happened accidentally because of an issue in my code. But the problem is that OrientDB does not prevent the creation of a function with the same name as an existing one.
Can I use the function name to retrieve the function consistently?
@randikaf you're right, my bad; Was looking at a different function.
@lvca - IMO, function names should be unique. Now I'm not too familiar with indices, but we can create (by default) a unique index on OFunction.name, that should be simple enough. What do you think?
@matanshukry , @lvca Are you planning to fix this?
I also observed that when importing only the functions from an export file using the '-merge' option, duplicate function entries are created. I'm not sure if creating an index on OFunction will solve the problem; I suspect it would need a change to the import command to 'overwrite' the function on import.
@nagarajasr - I am not familiar with the import process, but doesn't the process add entries to the OFunction table through the index? That is, shouldn't a unique index throw an error on duplicate functions?
I think it's the best option, since even if you fix the import process, one can still create multiple functions with the same name, which I don't think makes much sense. Unless you're planning on using parameter overloading, but then you'll need a more complex uniqueness scheme.
@randikaf - I can take a look at it, just looking for some confirmation from the Orient team; They need to decide how they want to fix it (unique index? overwrite? else? ..)
+1 for the unique index (in v 2.2 IMHO)
@matanshukry yes, but I would expect the import process to overwrite an existing function if the "-merge" option is specified
|
gharchive/issue
| 2015-08-07T07:33:06 |
2025-04-01T06:39:55.984484
|
{
"authors": [
"luigidellaquila",
"lvca",
"matanshukry",
"nagarajasr",
"randikaf"
],
"repo": "orientechnologies/orientdb",
"url": "https://github.com/orientechnologies/orientdb/issues/4751",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
116734627
|
Rewrite functionality of tracking of changes inside of atomic operation using RoaringBitmaps
In order to track changes on pages inside an atomic operation we need some implementation of a sparse bitset. The data structures typically proposed for use as a sparse bitset have very poor sequential performance; for example, to check whether the n-th bit is set we need an algorithm with O(n) complexity, which is unacceptable. We also cannot use a plain byte array, because it means that in order to track a change on one page we need to consume 2 * page_size bytes and store page_size bytes in the WAL for a single-byte change. That is why we used an augmented rb-tree to track changes as one-dimensional intervals, but this data structure is very resource-consuming in the case of big transactions and also consumes more space than a RoaringBitmaps implementation.
For details of data structure look at http://arxiv.org/pdf/1402.6407v9.pdf .
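For illustration, a minimal sketch of such tracking with the Java RoaringBitmap library (org.roaringbitmap); the DirtyPageTracker class and its method names are made up here, only the bitmap calls (add/contains/getCardinality) are the library's API:
import org.roaringbitmap.RoaringBitmap;
// Sketch: track which pages were touched inside one atomic operation.
class DirtyPageTracker {
    private final RoaringBitmap dirtyPages = new RoaringBitmap();
    // mark a page as changed
    void markDirty(int pageIndex) {
        dirtyPages.add(pageIndex);
    }
    // cheap membership check, no O(n) scan over a list of intervals
    boolean isDirty(int pageIndex) {
        return dirtyPages.contains(pageIndex);
    }
    // number of distinct dirty pages
    int dirtyCount() {
        return dirtyPages.getCardinality();
    }
}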
+1
Not needed any more
|
gharchive/issue
| 2015-11-13T09:45:22 |
2025-04-01T06:39:55.987294
|
{
"authors": [
"laa",
"lvca"
],
"repo": "orientechnologies/orientdb",
"url": "https://github.com/orientechnologies/orientdb/issues/5309",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
71278529
|
Telescope and Orion
Has anyone looked at how to possibly integrate Orion with Telescope? (point the telescope towards Orion, lol).
You may ask "why": I'm building a community-based platform and instead of reinventing the wheel I would rather just reuse some of that Telescope code. What I imagine is replacing the security/user/permission system of Telescope with Orion's, because I want the nice configurable admin panel that Orion offers.
@vonwao point the telescope team to Orion and sell them on the benefit, be an Orion evangelist :)
Yes, good idea :) I decided to delete/close this issue for now because I thought I should do some more research on this for now and not introduce too much noise into the issues with new ideas.
|
gharchive/issue
| 2015-04-27T13:08:50 |
2025-04-01T06:39:55.989292
|
{
"authors": [
"timfam",
"vonwao"
],
"repo": "orionjs/orion",
"url": "https://github.com/orionjs/orion/issues/119",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
205016100
|
initAccounts is not a function
Hello,
I'm trying to use meteor-apollo-accounts, with the tutorial http://dev.apollodata.com/core/meteor.html and your README.md.
So now I have :
import { createApolloServer } from 'meteor/apollo';
import { makeExecutableSchema, addMockFunctionsToSchema } from 'graphql-tools';
import {initAccounts} from 'meteor/nicolaslopezj:apollo-accounts'
import {loadSchema, getSchema} from 'graphql-loader'
import { typeDefs } from '/imports/api/schema';
import { resolvers } from '/imports/api/resolvers';
const options = {}
// Load all accounts related resolvers and type definitions into graphql-loader
initAccounts(options)
// Load all your resolvers and type definitions into graphql-loader
loadSchema({typeDefs, resolvers})
// Gets all the resolvers and type definitions loaded in graphql-loader
const schema = getSchema()
const executableSchema = makeExecutableSchema(schema)
createApolloServer({
executableSchema,
});
When I launch the application, my server crashes:
W20170202-17:22:45.325(-5)? (STDERR) Note: you are using a pure-JavaScript implementation of bcrypt.
W20170202-17:22:45.325(-5)? (STDERR) While this implementation will work correctly, it is known to be
W20170202-17:22:45.326(-5)? (STDERR) approximately three times slower than the native implementation.
W20170202-17:22:45.326(-5)? (STDERR) In order to use the native implementation instead, run
W20170202-17:22:45.326(-5)? (STDERR)
W20170202-17:22:45.327(-5)? (STDERR) meteor npm install --save bcrypt
W20170202-17:22:45.327(-5)? (STDERR)
W20170202-17:22:45.327(-5)? (STDERR) in the root directory of your application.
W20170202-17:22:45.348(-5)? (STDERR) WARNING: npm peer requirements (for apollo) not installed:
W20170202-17:22:45.348(-5)? (STDERR) - graphql-server-express@0.6.0 installed, graphql-server-express@^0.4.3 needed
W20170202-17:22:45.349(-5)? (STDERR) - graphql@0.9.1 installed, graphql@^0.7.0 || ^0.8.0 needed
W20170202-17:22:45.349(-5)? (STDERR) - graphql-tools@0.9.2 installed, graphql-tools@^0.8.0 needed
W20170202-17:22:45.349(-5)? (STDERR)
W20170202-17:22:45.350(-5)? (STDERR) Read more about installing npm peer dependencies:
W20170202-17:22:45.350(-5)? (STDERR) http://guide.meteor.com/using-packages.html#peer-npm-dependencies
W20170202-17:22:45.350(-5)? (STDERR)
W20170202-17:22:45.639(-5)? (STDERR) C:\Users\Erwan\AppData\Local\.meteor\packages\meteor-tool\1.4.2_3\mt-os.windows.x86_32\dev_bundle\server-lib\node_modules\fibers\future.js:280
W20170202-17:22:45.639(-5)? (STDERR) throw(ex);
W20170202-17:22:45.639(-5)? (STDERR) ^
W20170202-17:22:45.640(-5)? (STDERR)
W20170202-17:22:45.640(-5)? (STDERR) TypeError: initAccounts is not a function
W20170202-17:22:45.641(-5)? (STDERR) at server/main.js:19:2
W20170202-17:22:45.641(-5)? (STDERR) at Function.time (C:\Users\Erwan\Documents\Perso\Travaux\looc\api-server\.meteor\local\build\programs\server\profile.js:301:28)
W20170202-17:22:45.641(-5)? (STDERR) at C:\Users\Erwan\Documents\Perso\Travaux\looc\api-server\.meteor\local\build\programs\server\boot.js:304:13
W20170202-17:22:45.642(-5)? (STDERR) at C:\Users\Erwan\Documents\Perso\Travaux\looc\api-server\.meteor\local\build\programs\server\boot.js:345:5
W20170202-17:22:45.642(-5)? (STDERR) at Function.run (C:\Users\Erwan\Documents\Perso\Travaux\looc\api-server\.meteor\local\build\programs\server\profile.js:480:12)
W20170202-17:22:45.642(-5)? (STDERR) at C:\Users\Erwan\Documents\Perso\Travaux\looc\api-server\.meteor\local\build\programs\server\boot.js:343:11
=> Exited with code: 1
=> Your application is crashing. Waiting for file change.
I really don't understand. Is the tutorial up-to-date ?
thanks in advance, and good job 👍 I really need it ^^
Sorry for the delay,
initAccounts might not be exported, or maybe doesn't exist in the version of apollo-accounts you have installed.
I can help you with that.
@dbrrt thanks for you answer.
So what should I do ?
Figure out why "TypeError: initAccounts is not a function"
Check that there are no conflicts between package versions (graphql/graphql-express/graphql-tools)
I'm running this repository's code on a WIP Apollo/GraphQL based app, so it's working. Even tried with React Native.
I am also getting the same issue. Did you manage to find the solution?
@ahsanwtc Here's my server.js file; actually I'm using initAccounts not directly from the Meteor library, but it should work in both cases
import {makeExecutableSchema} from 'graphql-tools'
import {loadSchema, getSchema} from 'graphql-loader'
import {initAccounts} from '/imports/lib/accounts-gql/server'
import typeDefs from './schema'
import resolvers from './resolvers'
import {createApolloServer} from 'meteor/orionsoft:apollo'
import cors from 'cors'
const options = {}
// Load all accounts related resolvers and type definitions into graphql-loader
initAccounts(options)
// Load all your resolvers and type definitions into graphql-loader
loadSchema({typeDefs, resolvers})
// Gets all the resolvers and type definitions loaded in graphql-loader
const schema = getSchema()
const executableSchema = makeExecutableSchema(schema)
createApolloServer({
schema: executableSchema
}, {
configServer (graphQLServer) {
graphQLServer.use(cors())
}
})
@dbrrt thanks for the reply. Can you give me the full path to initAccounts? I am not able to find the lib folder. It's not able to find the path '/imports/lib/accounts-gql/server'
I "repacked" the libraries (client and server), under the lib path, so that's just my project.
But I think that if you can't use initAccounts, that's because of an export or something like that.
Thank you for your feedback.
I already shared my code... it's all here (just a test to integrate Apollo into another app).
In fact there is a dependency issue... I will investigate, but I'm pretty sure it's not the problem. If you can (of course), I'm very curious about your test results concerning the Meteor initAccounts method.
Thanks a lot ;)
@rwaness
Here's an extract of my packages.json, if that can help:
"dependencies": {
"apollo-client": "^0.5.0",
"babel-runtime": "^6.20.0",
"bcrypt": "^0.8.7",
"body-parser": "^1.16.0",
"classnames": "^2.2.5",
"cors": "^2.8.1",
"express": "^4.14.0",
"graphql": "^0.7.0",
"graphql-loader": "^1.0.1",
"graphql-server-express": "^0.4.3",
"graphql-tools": "^0.8.0",
"graphql-typings": "0.0.1-beta-2",
"invariant": "^2.2.1",
"meteor-node-stubs": "~0.2.0",
"moment": "^2.17.1",
"node-sass": "^3.13.1",
"normalize.css": "^5.0.0",
"react": "^15.3.1",
"react-addons-css-transition-group": "~15.4.0",
"react-addons-pure-render-mixin": "^15.3.1",
"react-apollo": "^0.5.16",
"react-dom": "^15.3.1",
"react-komposer": "^1.13.1",
"react-mounter": "^1.2.0",
"react-redux": "^5.0.2",
"react-router": "^3.0.2",
"react-slick": "^0.14.6",
"redux": "^3.6.0",
"semantic-ui-css": "^2.2.4",
"semantic-ui-react": "^0.64.3"
},
"devDependencies": {
"babel-plugin-module-resolver": "^2.4.0",
"babel-plugin-transform-class-properties": "^6.19.0",
"babel-plugin-transform-decorators-legacy": "^1.3.4",
"semantic-ui": "^2.2.7",
"standard": "^8.5.0"
}
and I didn't have time for now to reproduce your issue with initAccount (directly using the Meteor lib), I'll try ASAP
@rwaness have you solved the problem?
I think that's because of the package version installed in your Meteor app.
You need to specify the version of this package. If not, version 1.0.1 is installed instead.
Try this:
meteor add nicolaslopezj:apollo-accounts@3.0.1
All good with initAccount now. Thanks to you.
Unfortunately, I found a second issue.
I follow this tutorial : https://blog.orionsoft.io/using-meteor-accounts-with-apollo-and-react-df3c89b46b17
and when I want to test the application, the server crashes and says:
TypeError: Auth is not a function
at meteorInstall.server.schema.Mutation.index.js (server/schema/Mutation/index.js:5:4)
at fileEvaluate (packages\modules-runtime.js:197:9)
at require (packages\modules-runtime.js:120:16)
at C:\Users\erwan\Documents\Travaux\meteor\loocc\tests\api-server\.meteor\local\build\programs\server\app\app.js:220:1
at C:\Users\erwan\Documents\Travaux\meteor\loocc\tests\api-server\.meteor\local\build\programs\server\boot.js:303:34
at Array.forEach (native)
at Function._.each._.forEach (C:\Users\erwan\AppData\Local\.meteor\packages\meteor-tool\1.4.3_2\mt-os.windows.x86_32\dev_bundle\server-lib\node_modules\underscore\underscore.js:79:11)
at C:\Users\erwan\Documents\Travaux\meteor\loocc\tests\api-server\.meteor\local\build\programs\server\boot.js:128:5
at C:\Users\erwan\Documents\Travaux\meteor\loocc\tests\api-server\.meteor\local\build\programs\server\boot.js:352:5
at Function.run (C:\Users\erwan\Documents\Travaux\meteor\loocc\tests\api-server\.meteor\local\build\programs\server\profile.js:510:12)
The file server/schema/Mutation/index.js looks like :
import {SchemaMutations as Auth} from 'meteor/nicolaslopezj:apollo-accounts'
export default `
type Mutation {
${Auth()}
}
`
Is it a version issue again?
How can we know the right version to use?
Good job guys. Keep it up ;)
@rwaness
Yeah. I think that api version is old.
It has changed.
You better see this install guide.
Or, mine is like below:
import { createApolloServer } from 'meteor/apollo';
import { initAccounts } from 'meteor/nicolaslopezj:apollo-accounts';
import { loadSchema, getSchema } from 'graphql-loader';
import { makeExecutableSchema } from 'graphql-tools';
import cors from 'cors';
import { typeDefs } from './schema';
import { resolvers } from './resolvers';
initAccounts({});
loadSchema({ typeDefs, resolvers });
const schema = makeExecutableSchema(getSchema());
export default () => {
createApolloServer({ schema }, {
configServer(graphQLServer) {
graphQLServer.use(cors());
},
});
};
Yes, it works with the boilerplate : https://github.com/orionsoft/server-boilerplate
Thanks ;)
|
gharchive/issue
| 2017-02-02T22:28:47 |
2025-04-01T06:39:56.002205
|
{
"authors": [
"ahsanwtc",
"dbrrt",
"rwaness",
"simsim0709"
],
"repo": "orionsoft/meteor-apollo-accounts",
"url": "https://github.com/orionsoft/meteor-apollo-accounts/issues/42",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
169297634
|
react highlighting and autocomplete not working
Hi, yesterday it worked but now it doesn't. Why is that? Can you help me fix it?
I'm experiencing the same problem, you aren't alone!
Same problem here
I installed the package called language-babel. That works perfectly fine. Another solution is to use this package https://atom.io/packages/language-javascript-jsx. Both language-babel and language-javascript-jsx will solve this issue until the react package is fixed. Make sure you just use one of these, disable the others.
I tried that, but it's not autocompleting yet.
I recommend using language-babel https://atom.io/packages/language-babel
It has similarities to the react package since it has autocomplete and highlighting. The other package doesn't have autocomplete.
I agree, not having autocomplete is a big disadvantage of the jsx language package.
Should be fixed https://github.com/orktes/atom-react/issues/188
|
gharchive/issue
| 2016-08-04T06:02:10 |
2025-04-01T06:39:56.010095
|
{
"authors": [
"MarcusHurney",
"mauriciord",
"orktes",
"pillowface"
],
"repo": "orktes/atom-react",
"url": "https://github.com/orktes/atom-react/issues/184",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2054995864
|
Universal?
Universal hashing implies N/M collision estimate for all collision test sets, and not (N^2)/M/2 like can be seen in SMHasher tests for Polymur. Just saying. https://www.cs.cmu.edu/~avrim/451f11/lectures/lect1004.pdf
The universal hash function for strings is a unicorn (that is, it doesn't exist). Any fixed-memory universal hash function must be almost-universal. Since no one makes a hash function that allocates 2^64 bytes of memory to store its data, it is implied that any hash function which hashes strings is therefore almost-universal.
When I say that PolymurHash is "a universal hash function" in the introduction I don't mean to claim that it holds the "Universal" property over the set of all strings, rather that it is a hash function with universality claims. I immediately follow it up one sentence later with the exact collision probabilities claimed.
For example the wikipedia article Poly1305 hash function starts with "Poly1305 is a universal hash family". Since Poly1305 hashes strings and also doesn't allocate tons of memory we see that in fact, its collision probability scales with the input size.
Well, it would be fair to call it an "almost universal hash function" then. Or most authors of "good" SMHasher hash functions could come up with proofs. "Universal hashing" is a brand, and it's a bit unfair to exploit it without actually intending to prove the "original" universality requirement.
Or most authors of "good" SMHasher hash functions could come up with proofs.
The proofs of what I claim are in this repository.
"Universal hashing" is an academic brand, and it's a bit unfair to exploit it not actually having an intention to prove the "original" universality requirement.
I don't exploit anything, you simply take the most unreasonably strict reading of the word "Universal" that no real-world hash function for strings ever satisfies, and then try to apply it to a hash function over strings.
Almost-universality isn't something I just came up with. It is just as academic. It has formal definitions. For example the PolyR hash function paper also opens with "We describe a universal hash-function family, PolyR". Guess what? Yes, that is also colloquial and also used to mean almost-universality.
Universal hashing implies N/M collision estimate for all collision test sets, and not (N^2)/M/2 like can be seen in SMHasher tests for Polymur.
Also, this is a misunderstanding of Polymur's security claim. Polymur doesn't have a "N^2/M/2" collision probability, assuming we use N to mean the number of strings in the input space, and M to mean 2^64 like in the paper you linked.
Polymur's collision probability bound scales with the length of the input string, not the number of strings in your input set. This is an exponential difference, there are 256^1024 binary strings of length 1024, yet Polymur's collision probability only goes up to 256 * 2^-60.2.
I have no idea where you got the N^2 term from, or how you can claim that it "can be seen in SMHasher tests".
Perhaps you misunderstood the claim of the lecture notes?
Note that they count the expected number of collisions of a specific string x with all other strings, not the total number of collisions over an entire data set.
Note that they count the expected number of collisions of a specific string x with all other strings to get to N/M, not the total number of collisions over an entire data set. That is also on the order of N^2/M/2 for a perfect universal hash. That is simply basic probability and has nothing to do with how good your hash function is.
I would not handle N^2/M/2 as something "simple". For example, (let's call it) "set's collision estimate" of CRC32 is way lower than N^2/M/2 in many SMHasher tests yet its collision estimate for isolated strings is likely N/M. I just do not see how collision estimate of a specific string is good, if it's not easily translatable to set's collision estimate. The situation you are probably overlooking is that when you add a new string to an already constructed set with N^2/M/2 total collisions, the collision estimate of a new string cannot be N/M. Either I'm misunderstanding something or "universal hashing" concepts are very poorly defined.
PolyR paper defines a collision estimate like for cryptographic hashes, which are never treated as "universal" to my knowledge. With such liberal treatment of collision estimate, "universality" claim is a bit moot.
N^2/M/2 is a formula used by SMHasher to calculate expected number of collisions.
If you look closer, it's derivative of set's collision estimate which is N/M. But there's no mention of collision estimate's derivatives in "universal hashing".
Either I'm misunderstanding something or "universal hashing" concepts are very poorly defined.
I think you are misunderstanding, because they are very strictly defined. The collision probability always talks about two arbitrary but different strings. You only talk about more than two strings once you get into k-independence for k > 2.
I just do not see how collision estimate of a specific string is good, if it's not easily translatable to set's collision estimate.
It generally translates pretty well.
PolyR paper defines a collision estimate like for cryptographic hashes, which are never treated as "universal" to my knowledge.
You are conflating "cryptographically secure hash" with the study of cryptographically secure message authentication codes. The latter has always been based on universal hash function theory (PolyR, Poly1305, Galois Counter Mode, UMAC, etc are all (almost-)universal hash families).
With such liberal treatment of collision estimate, "universality" claim is a bit moot.
It is not liberal at all, it is very precise.
N^2/M/2 is a formula used by SMHasher to calculate expected number of collisions.
Expected number of collisions between ALL strings, not the expected number of collisions between ONE and all other strings. That is what the lecture notes you linked compute.
Your original statement "universal hashing implies N/M collision estimate for all collision test sets, and not (N^2)/M/2 like can be seen in SMHasher tests" is false, because that is mathematically impossible. It's also not what the linked lecture notes claim.
If you look closer, it's derivative of set's collision estimate which is N/M. But there's no mention of collision estimate's derivatives in "universal hashing".
I have no idea what you mean by that.
In "universal hashing", N/M collision estimate already implies a *set S from U. So, of course, I'm talking about more than two strings in a set. What's the purpose of talking about 2 strings in all sets?
By N/M derivative I mean that if you keep adding strings to a set, yielding sets with N+1, N+2... strings, addition increases set's collision estimate by N/M after each operation, if set's current collision estimate is N^2/M/2.
In "universal hashing", N/M collision estimate already implies a set S from U. So, of course, I'm talking about more than two strings in a set. What's the purpose of talking about sets having only 2 strings each?
Basic universal hashing is only talking about pairs of strings. Look, in the very lecture notes you linked:
The universality bound is for ONE pair of strings x != y.
The expectation of N/M is then calculated from the bound on pairs:
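In standard form, that step is just linearity of expectation applied to the pairwise bound (a sketch, writing N = |S| and M for the number of hash values):
E\big[\#\{y \in S : y \neq x,\ h(y) = h(x)\}\big] = \sum_{y \in S,\, y \neq x} \Pr[h(x) = h(y)] \le \frac{N-1}{M} \le \frac{N}{M}
Summing the same pairwise bound over all \binom{N}{2} unordered pairs is what yields the \approx N^2 / (2M) total that SMHasher reports.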
What's the purpose of talking about sets having only 2 strings each?
The purpose is that it's possible to analyze and prove things about 2 strings at a time, after which you can use statistics to extrapolate that behavior to sets of strings.
By N/M derivative I mean that if you keep adding strings to a set, yielding sets with N+1, N+2... strings, addition increases set's collision estimate by N/M after each operation, if set's current collision estimate is N^2/M/2.
Can you please give a clear definition of what you mean by "set's collision estimate"?
"set's collision estimate" is what "estimated total number of collisions" equals to, in all SMHasher tests.
Why the paper you've quoted talks about "linearity of expectations", on what premises?
"set's collision estimate" is what "estimated total number of collisions" equals to, in all SMHasher tests.
Yes, universal hashing makes no direct claims about that. Any bounds on that are entirely statistical, derived from the bound on pairs of strings.
Why the paper you've quoted talks about "linearity of expectation", on what premises?
https://math.stackexchange.com/questions/1810365/proof-of-linearity-for-expectation-given-random-variables-are-dependent
Also this isn't a "paper you've quoted", this is from the lecture notes you linked in your initial post. It's from https://www.cs.cmu.edu/~avrim/451f11/lectures/lect1004.pdf.
Yes, partially it was my misunderstanding. The set's collision estimate is emergent and not really definable while "universality" constraints are much simpler. But I have a further question - where can I find a proof that PolyMur's permutation yields uniformly-random output for any input, because without that the Pr(x and y collide)<1/M bound cannot be considered proven.
The set's collision estimate is emergent
Exactly.
and not really definable while "universality" constraints are much simpler.
There are also concepts of universality which do go beyond traditional universality, those are (almost-)k-independent hash function families. They are saying that if you have a set of k elements, the probability their hashes are any particular k-tuple is bounded.
To my knowledge we only have practical ways of constructing k-independent hash functions for relatively small k, and only for fixed-size inputs, not arbitrary-length strings when k > 2.
But I have a further question - where can I find a proof that PolyMur's permutation yields uniformly-random output for any input, because without that the Pr(x and y collide)<1/M bound cannot be considered proven.
As I said, Polymur is not exactly 1/M (where M = 2^64), but rather claims Pr(x and y collide) <= n * 2^-60.2 where n = len(x).
The proof for that is here: https://github.com/orlp/polymur-hash/blob/master/extras/universality-proof.md.
I mean the proof that for all values of set S from U, the permutation produces a set of pseudo-random values - values that look like a random oracle, if observer does not know original values of S.
That's a stronger property than regular universality, but luckily Polymur claims to be almost-2-independent, which is a stronger version of this anyway. Polymur claims that pairs of values look like random variables. That is, Polymur claims that if H is chosen independently at random from the family that for ANY m != m' and ANY x, y that
Pr[H(m) = x && H(m') = y] <= n * 2^-124.2
You can just ignore one of the two variables and assume it collides to get a bound of n * 2^-60.2 on any particular output.
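One way to make that marginalization explicit: pick any fixed m' \neq m, note that the events H(m') = y partition the probability space over the 2^{64} possible y, and sum the pairwise bound:
\Pr[H(m) = x] = \sum_{y} \Pr\big[H(m) = x \wedge H(m') = y\big] \le 2^{64} \cdot n \cdot 2^{-124.2} = n \cdot 2^{-60.2}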
Note that again we prove the chance of any specific output is low, for a single input (or a single pair). A claim about "a set of pseudo-random values" where that set would have > 2 elements would again be an emergent probability from pulling multiple random numbers.
Also,
But I have a further question - where can I find a proof that PolyMur's permutation yields uniformly-random output for any input, because without that the Pr(x and y collide)<1/M bound cannot be considered proven.
That claim is not true. Consider a 64-bit hash function with a perfect 2^-64 collision bound. I can take that hash function, set the lowest bit to 0 and use that as a new hash function. It's now perfectly valid to claim this new function has a 2^-63 bound. But obviously the output is not uniformly random.
I'm maybe overlooking something in your proof, but I do not see how your construction leads to pair-wise independence (50% difference in bits) of outputs. Maybe you are referring to some "common knowledge" I have not read/can't remember at the moment? I do not argue with PolyMur's practical performance, but a proof is something more formal.
But I have a further question - where can I find a proof that PolyMur's permutation yields uniformly-random output for any input, [because without that the Pr(x and y collide)<1/M bound cannot be considered proven].
Consider a 64-bit hash function with a perfect 2^-64 collision bound. I can take that hash function, set the lowest bit to 0 and use that as a new hash function. It's now perfectly valid to claim this new function has a 2^-63 collision bound. But obviously the output is not uniformly random.
Uniform randomness is a statistical property. 1/M bound is emergent from probability theory. If bin A is used (among M bins), the probability the next independent random number will also refer to bin A is 1/M.
I'm maybe overlooking something in your proof, but I do not see how your construction leads to pair-wise independence (50% difference in bits) of outputs.
That is a different definition of pairwise independence. The definition for pairwise independence in universal hashing can be seen on Wikipedia:
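In standard form, that definition reads (a sketch; exact strong universality asks for equality, the "almost" variant relaxes the right-hand side to some small \varepsilon):
\forall\, m \neq m',\ \forall\, x, y: \quad \Pr_{H \in \mathcal{H}}\big[H(m) = x \wedge H(m') = y\big] = \frac{1}{M^2}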
It's also called strong universality, in our case almost-strong universality.
Uniform randomness is a statistical property. 1/M bound is emergent from probability theory. If bin A is used (among M total bins), the probability the next independent random number will also refer to bin A is 1/M.
Sure, but as my counter-example showed, low chance of collision does not imply every bit of the output looks random. In the most extreme case, consider hashing 32-bit numbers to 32-bit numbers. The identity function has 0 collisions, but obviously is not random.
Ok, understood about the pairwise independence. Can you comment on how the collision bound in the proof is useful if k is usually unchanging when hashing a set of values? In such conditions, I can only rely on the message's properties, not the key. Referring to (14 + n/7) / |K|.
As for zeroing the lowest bit example which reduces collision expectation, the issue with the example is that while collision expectation will change, and seemingly won't break anything, an initially random function will not be strictly uniformly-random anymore, and so probability theory won't be applicable. Strictly speaking, you can't use 1/M pair-wise collision estimate on a variable that was not previously shown to be uniformly-random. That for myself, maybe you do not look at things this way.
Can you comment how collision bound in the proof is useful if k when hashing a set of values is usually unchanging.
The point is that k is chosen uniformly at random independently from the values. As long as that holds, the bound is valid. At that point seeing k is "unchanging" doesn't matter. Since we assume the key was chosen independent from the value it doesn't matter whether k was chosen just now or ages ago - the choice was independent.
This does mean it's not secure to leak the hash values to an attacker, please read this section. If an attacker can see the hash values, he can potentially construct new values which are much more likely to collide.
Strictly speaking, you can't use 1/M pair-wise collision estimate on a variable that was not previously shown to be uniformly-random. That for myself, maybe you do not look at things this way.
Well, that's just mathematically wrong. Each input behaving as if being uniformly random is a sufficient but not necessary condition for collision resistance.
Consider a 16-bit to 16-bit identity hash function. It is not random at all but it literally can not create collisions. This isn't a trick either, it means if your input is 16 bits and you're building a hash table with 2^16 slots you can just use the identity hash function and never get collisions or the problems that come with them. That specific kind of hash table has a useful name: an array.
Well, events of collisions depend on variations in messages or keys. f = k^7 * (f + m[6]) implies f depends on both message and key at the same time. If key is fixed in a set, then only messages affect events of collisions, functionally. As for myself, I cannot accept the collision bound for the case of a fixed key.
Consider a 16-bit to 16-bit identity hash function. It is not random at all but it literally can not create collisions. This isn't a trick either, it means if your input is 16 bits and you're building a hash table with 2^16 slots you can just use the identity hash function and never get collisions or the problems that come with them. That specific kind of hash table has a useful name: an array.
I'm assuming an identity hash is non-random and it cannot be "universal" by definition. Then its collision expectation is knowingly 0, no need for a bound.
As for myself, I cannot accept the collision bound for the case of a fixed key.
There is no collision bound for a fixed key at all. That's not how universal hash functions work. The bound is always assuming the hash function is picked at random from the hash function family. This is fundamental.
There is no collision bound for a fixed key at all. That's not how universal hash functions work. The bound is always assuming the hash function is picked at random from the hash function family. This is fundamental.
But how is this practical? When one fills a structure with values the key is always fixed. Otherwise value lookup is just impossible.
Again, SSL is secured using this assumption - the key is chosen randomly and independently from the values, and the attacker never gets to see the hashes directly.
SSL is a stream cipher beside KEM; this is unrelated to hashing.
But how is this practical? When one fills a structure with values the key is always fixed. Otherwise value lookup is just impossible
The key isn't fixed. The key is chosen randomly at startup when the hash function is initialized independently from the values.
SSL is a stream cipher beside KEM; this is unrelated to hashing.
SSL includes authentication of the data. Authentication in 2023 is typically done with with Galois Counter Mode or Poly1305, both of which are universal hash based constructions.
Either way, I'm sorry, this is taking up too much of my time now. I would strongly suggest doing some more reading on what universal hashing is or how it works on a fundamental level.
The original paper that introduced universal hashing is Universal Classes of Hash Functions by Carter & Wegman. I would suggest starting there.
No problem. However, the key is usually fixed during the application's lifetime. It's practically the same as selecting a single hash function from the family for an unlimited duration of time, in the case of a server application, for example. Okay, abstractly it is an independent argument to the hash function, but functionally it does not change. MAC is another story - there the key and/or nonce changes frequently. (GCM is a stream-cipher operation mode)
You may delete the issue if it seems unuseful or confusing to potential users.
Another possible issue I've found in PolyMur's proof, and referring to your stance about 1/M pair-wise collision bound (that uniform randomness of hash outputs is out of the question). What your proof is lacking I think is that it does not show that pair-wise collisions of outputs for m and m' are bounded to 1/M. Your proof relies on pre-requisite that keys are chosen in a uniformly-random manner. But the case when both m and m' use the same key is not covered. So, a central pre-requisite of "universal hashing" (Pr[x and y collide]<1/M) for the case of same keys in x and y is not proven.
Your proof relies on pre-requisite that keys are chosen in a uniformly-random manner. But the case when both m and m' use the same key is not covered.
It absolutely is. The key is not chosen per message, the key is chosen per function. The choice of k is what determines hash function H from the family. So in the claim
Pr[H(m) = x && H(m') = y] <= eps
there is only a single H, and thus a single choice of k.
Look Alexey... I'm aware of your work on komihash. You clearly know a lot about traditional hash functions and are not a stupid man. I say this, because I really need you to change your approach from trying to jump on the first thing that looks wrong to you to trying to learn instead.
Universal hash functions are a mathematical construct. To understand their claims and proofs you have to be very careful, and understand the mathematical background. This is work, and it's not intuitive. I understand it can be hard and confusing.
But claims this and that isn't proven (especially things as basic as linearity of expectation) show you simply haven't done the homework. Please read the paper I have suggested to you, and really try to understand the math. Another suggestion is to read High Speed Hashing for Integers and Strings and do the exercises. Many of the things you say "hey, this looks wrong" are covered in those exercises - they force you to examine your own assumptions.
It is hard, I understand. I struggled with it too! But you have to get these fundamentals right before you can start to analyze more complex constructions.
Well, I expected you would tell me to read stuff. Yes, my understanding of "universality" constraints was initially incomplete - the issue I had is with the total collision estimate of a set - I expected it's the first thing a "universal" hashing has to follow. Then I do have issues with terminology - it changes, it may be different in other sources, but it does not mean I do not know that probabilities of two uniformly-random events sum.
However, this in no way makes my further questions "absurd". I'm, like you, interested in obtaining a proof of universality, and I'm taking your proof as some kind of a basis. For example, how can you prove "linearity of expectation" for your hash if you did not prove a collision bound for the case of same keys?
MACs are totally different in this respect - they need to prove collision bound for independent streams, not for a local data set.
I never said your questions were absurd. They're very natural questions. They are however also very basic questions covered in any text on universal hash functions, and are not questions specific to Polymur. Your questions apply to any universal hash function family.
Which is why I said this takes too much of my time, and ask you to do the homework needed to cover the basics. If anything is unclear specifically about Polymur or its design I'm happy to help. But I'm not here to give a 1 on 1 informal lecture on universal hashing.
I do not say your proof is wrong - I only say it applies to a specific case of MAC hashing, when key is uniquely generated for each hashing operation, without any requirement to store hashes in a single data set.
I only say it applies to a specific case of MAC hashing, when key is uniquely generated for each hashing operation
Please read section 4 "authenticating multiple messages" from New hash functions and their use in authentication and set equality by Wegman and Carter. It proves that for any strongly universal hash function family you can reuse the same hash function for authentication as long as you hide the hash values from the attacker (which they do with a one-time-pad).
There are similar proofs that for any strongly universal hash function hash table access with chaining takes 0(1) expected time.
I do not say your proof is wrong - I only say it applies to
The thing is, the proven claim has a standard form. It proves the almost (strong) universality. Because of that we can reuse results that only rely on that property, such as the above results.
This is why I say your questions are not about Polymur! If your problem isn't with the proof but with its applicability you have general questions about universality, for which I already pointed you to multiple resources to learn from.
Is PolyMur "strongly universal", and where I can gather that from the proof? Sorry if that's obvious.
In either case, my opinion is that if "non-strong universality" implies collision bound only with a pre-requisite of "random choice of family functions", it is practically only useful for MACs. OK, not arguing with theory, but theories have their limits of applicability, and it's not always obvious from the theory itself where it isn't applicable.
Anyway, thanks for the discussion. I still think "almost universal" claims are a bit of an exaggeration, an unfair advantage. And I think most of the hash functions that pass all SMHasher tests are practically "universal" except most of them have no formal proofs. Because SMHasher was built to test some basic statistics that are also claimed for "universal" hash functions.
Is PolyMur "strongly universal", and where I can gather that from the proof? Sorry if that's obvious.
Polymur is almost-strongly universal. That's this claim from the proof:
If H is chosen randomly and independently then for any m != m' and any hash outcomes x, y we have
Pr[H(m) = x && H(m') = y] <= n * 2^-124.2
Instead of phrasing it as a probability, the paper by Carter and Wegman simply counts the number of functions H in the family such that H(m) = x && H(m') = y. But it's the same concept, divide this count by the total number of possible functions to get the probability (since we choose the function at random from the family).
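In symbols, the two phrasings coincide as:
\Pr_{H \in \mathcal{H}}\big[H(m) = x \wedge H(m') = y\big] = \frac{\big|\{H \in \mathcal{H} : H(m) = x \wedge H(m') = y\}\big|}{|\mathcal{H}|}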
In either case, my opinion is that if "non-strong universality" implies collision bound only with a pre-requisite of "random choice of family functions", it is practically only useful for MACs. OK, not arguing with theory, but theories have their limits of applicability, and it's not always obvious from the theory itself where it isn't applicable.
Well, it's also useful in the case of hash tables as it proves expected O(1) access time when using chaining, which is explicitly the intended purpose of Polymur. You can't just discount this proof 'in your opinion'. This is math, not opinion. Most modern hash table implementations already generate a random hash function per table, or per program, to protect against HashDoS.
Anyway, thanks for the discussion. I still think "almost universal" claims are a bit of an exaggeration, an unfair advantage. And I think most of the hash functions that pass all SMHasher tests are practically "universal" except most of them have no formal proofs. Because SMHasher was built to test some basic statistics that are also claimed for "universal" hash functions.
I don't think it's unfair. It is easy to make hash functions that pass SMHasher but for which an attacker can still construct arbitrary multicollisions.
I don't think it's unfair. It is easy to make hash functions that pass SMHasher but for which an attacker can still construct arbitrary multicollisions.
I doubt it, if the function passes Perlin Noise and Seed tests as well. But as SMHasher fork's author notes, quick brute-force generation of collisions is possible for all "fast" hashes. Except server applications usually do not give even a chance to flood with unnecessary information.
At the moment I think I'll leave this 40 year old "universality" alone for the sake of happier life. I can't swallow a proof that refers to |K| and its variation whereas functionally k is unchanging.
|
gharchive/issue
| 2023-12-24T06:03:54 |
2025-04-01T06:39:56.061441
|
{
"authors": [
"avaneev",
"orlp"
],
"repo": "orlp/polymur-hash",
"url": "https://github.com/orlp/polymur-hash/issues/8",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
}
|
383956588
|
Add support for custom hashers in SparseSecondaryMap. Closes #14
Merging after #18 will require the tweak for std::hash to core::hash.
I would rather merge this first and leave the no_std work for last.
@orlp Rebased onto your master.
|
gharchive/pull-request
| 2018-11-24T03:55:28 |
2025-04-01T06:39:56.065185
|
{
"authors": [
"james-darkfox",
"orlp"
],
"repo": "orlp/slotmap",
"url": "https://github.com/orlp/slotmap/pull/20",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
}
|
1580143349
|
Extra closing brace removed
Call me crazy but I think there's a dangling closing brace.
ok but why all the extra spaces?
fixed extra space
|
gharchive/pull-request
| 2023-02-10T18:31:57 |
2025-04-01T06:39:56.077976
|
{
"authors": [
"dehidehidehi",
"ornicar"
],
"repo": "ornicar/userstyles",
"url": "https://github.com/ornicar/userstyles/pull/5",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
}
|
2118432707
|
MediaDevices: add support for MediaTrackConstraints.channelCount
Fixes #446
TODO: channelCount should be modeled as a ConstrainULong, not just u32.
TODO: cubeb support
Cool, really nice you could make it so fast
|
gharchive/pull-request
| 2024-02-05T12:32:15 |
2025-04-01T06:39:56.095209
|
{
"authors": [
"b-ma",
"orottier"
],
"repo": "orottier/web-audio-api-rs",
"url": "https://github.com/orottier/web-audio-api-rs/pull/447",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1111309788
|
proof of concept: delegation.
The point of this exploration is to see if we can hide some of the internals of the (base)audiocontext implementation from the user facing code as discussed in #76
Ideally AsBaseAudioContext should go, as it is not part of the official spec and might surprise users that they have to import that trait. I'm not entirely sure if we can drop it though as it is used in the generics setup. We should be able to drop it from user facing code though
The BaseAudioContext must stay as it is part of the spec
@tasogare3710 my suggestion for moving forwards would be:
scratch the current work and start fresh
start by looking at 1 example, e.g. example/simple_delay.rs
remove the import of AsBaseAudioContext
obviously it won't compile now. Add the required methods to AudioContext with the delegate trait. For example delegate context.create_delay to the base()
now you can remove the create_delay method from AsBaseAudioContext
rinse, repeat, for more methods and examples and apply to OfflineAudioContext too
check if the docs look good for AudioContext, OfflineAudioContext (cargo doc --lib --open)
check if there is any user facing code still containing AsBaseAudioContext (I mean the examples, integration tests and docs)
Thanks for the comments, it made me realize any choice we make will have disadvantages..
Summarizing the three options at the table
structs AudioContext, OfflineAudioContext and BaseAudioContext.
trait AsBaseAudioContext for shared functionality and mimicking the inheritance.
This is the current state.
Advantages: rust-like
Disadvantages: users need to import AsBaseAudioContext most of the time, which is not part of the spec
structs AudioContext, OfflineAudioContext and ConcreteBaseAudioContext.
trait BaseAudioContext for shared functionality and mimicking the inheritance.
Advantages: no need for strange imports in user facing code, still powerful and not too bad from a rust perspective
Disadvantages: whenever you need to deal with the actual base (this is rare), the struct name may be surprising
structs AudioContext, OfflineAudioContext and BaseAudioContext.
no traits. We use delegation/macros to make all shared functionality available on the three structs
Advantages: no names that are not part of spec
Disadvantages: not rust-like. No possibility for function to be generic over a shared trait (we use that in Node constructors now) but we could avoid that issue by forcing to take in a BaseAudioContext always
For context, we are using option 2 for AudioNode and AudioScheduledSourceNode which I think works out well. There is no issue there because they are never used as concrete types.
All in all I think I am in favour of option 2 now - we model inheritance using traits, always.
We could even make the namespace like this:
crate::context::{AudioContext, OfflineAudioContext, BaseAudioContext, concrete::BaseAudioContext}
@b-ma any final thoughts?
Hey,
Yup I agree, option 2. seems both the more clean and simple solution. (Then, we will also have to fix the discrepancy of the AudioScheduledSourceNode to make it coherent with this rule too, I will have a look)
I dont know for the namespacing concrete::BaseAudioContext, it seems quite good but also a bit strange alongside the BaseAudioContextInner.
Wouldn't it be possible to somehow merge BaseAudioContextInner and ConcreteBaseAudioContext (I don't really have a clear picture of the possible implications)?
If not possible or just complicating things, maybe doing something like concrete::{BaseAudioContext, BaseAudioContextInner} so that these two stay coherent?
Fortunately BaseAudioContextInner is an implementation detail and is not exposed in the public interface.
The reason for BaseAudioContext { inner: Arc<BaseAudioContextInner> } is that we clone it many times. All the AudioNodes have static lifetime (meaning, they do not contain references, only owned values) which is great for user friendlyness. The nodes contain an AudioContextRegistration which has reference to the base context via that Arc
We could rename BaseAudioContextInner --> BaseAudioFields if that makes it easier for the eyes?
Yup I see, thanks for the explanation, looks indeed logical to keep this simple
We could rename BaseAudioContextInner --> BaseAudioFields if that makes it easier for the eyes?
No, Inner is good I think and already used here and there if I remember well.
However, I would go for ConcreteAudioContextInner, so we have:
trait BaseAudioContext (which retrieve)
ConcreteBaseAudioContext (which holds)
ConcreteBaseAudioContextInner
(not sure of my keyword relationships but you see the idea :)
(Then, we will also have to fix the discrepancy of the AudioScheduledSourceNode to make it coherent with this rule too, I will have a look)
Yes I think we can make that one follow the spec better. Right now the trait exposes the implementation detail of Scheduler and then derives the methods from the spec (start, stop etc).
We should change that to just the raw trait, require implementers to supply start(_at), stop etc
Internally we can use the Scheduler, but we should remove that from the public API (the whole src/control.rs file actually)
Was that what you had in mind too @b-ma ?
We should change that to just the raw trait, require implementers to supply start(_at), stop etc
Internally we can use the Scheduler, but we should remove that from the public API (the whole src/control.rs file actually)
Yup, I think it looks really more safe to go that path
Thanks for moving the discussion forward @tasogare3710
I'm closing this PR in favour of #102
@orottier
Thanks for moving the discussion forward @tasogare3710
I'm closing this PR in favour of #102
Got it. it's my pleasure.
|
gharchive/pull-request
| 2022-01-22T07:16:59 |
2025-04-01T06:39:56.111839
|
{
"authors": [
"b-ma",
"orottier",
"tasogare3710"
],
"repo": "orottier/web-audio-api-rs",
"url": "https://github.com/orottier/web-audio-api-rs/pull/99",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1651881131
|
Add deployment example
This PR adds an example of how to use the OpenTelemetry Collector (Red Hat Distribution) as deployment.
@iblancasa please wait for approvals before merging
@pavolloffay since we talked and you told me it looks OK, I decided to merge after your last requests. Sorry for that.
|
gharchive/pull-request
| 2023-04-03T11:28:57 |
2025-04-01T06:39:56.175136
|
{
"authors": [
"iblancasa",
"pavolloffay"
],
"repo": "os-observability/redhat-rhosdt-samples",
"url": "https://github.com/os-observability/redhat-rhosdt-samples/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2102836814
|
Update FwController.cs
This will not be excessive, because the JSON response doesn't get the return_url value as location in afterSaveLocation() earlier.
I used it with modals and form ajax submits to other controllers.
Let's discuss this.
Wouldn't it be better to update afterSaveLocation instead?
So we can have just a single location response?
Having 2 urls in response doesn't seem right.
Revoking this pull request. afterSaveLocation must be reviewed for the JSON case. Currently, the return_url parameter is doubled in the JSON location result. And I noticed that the HTTP error code has been changed from 200 to 400 for the validation exception, so other changes are necessary (in the autosave code, for example) to catch the form errors. Currently, I parse the location result and extract the return_url. My
|
gharchive/pull-request
| 2024-01-26T20:14:57 |
2025-04-01T06:39:56.204955
|
{
"authors": [
"osalabs",
"vladsavchuk"
],
"repo": "osalabs/osafw-asp.net-core",
"url": "https://github.com/osalabs/osafw-asp.net-core/pull/106",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
123954590
|
Include template from schema does not work
Hello Oscar,
I started working on the oddgen project. The goal is to support several template languages, one of them is tePLSQL.
I've tried to include a template using <%@ include(templ, my_package, package_body, demo) %>. This was not working for several reasons:
There is a typo in tePLSQL.pkb on line 532 (l_object_type instead of l_schema).
dbms_metadata.get_ddl fails when called from another user, because dbms_metadata requires the SELECT_CATALOG_ROLE which is not visible in a definer rights package.
The first problem is easy to fix ;-). For the second one I see the following options:
from 12.1 on you may grant the role to the package directly, e.g. GRANT select_catalog_role TO PACKAGE teplsql.teplsql;. This was my solution.
to support older Oracle versions you may switch to invoker rights or you may access the dba_source view directly
Thanks.
Best Regards,
Philipp
Thanks Philipp for finding this bug.
I will fix it soon.
|
gharchive/issue
| 2015-12-27T00:24:50 |
2025-04-01T06:39:56.208600
|
{
"authors": [
"PhilippSalvisberg",
"osalvador"
],
"repo": "osalvador/tePLSQL",
"url": "https://github.com/osalvador/tePLSQL/issues/10",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2161622284
|
fedora: webui only on rawhide
The webui for Fedora has been delayed until Fedora 41, current rawhide. Let our version gates reflect that.
Speaking of, do we want a const RAWHIDE somewhere in distro/fedora?
Duplicate of #479.
|
gharchive/pull-request
| 2024-02-29T16:16:02 |
2025-04-01T06:39:56.211732
|
{
"authors": [
"supakeen"
],
"repo": "osbuild/images",
"url": "https://github.com/osbuild/images/pull/485",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1071252499
|
Support icon theme
I would like to be able to choose different icon themes for light/dark
Which icons do you mean? Do you use Gnome, KDE or something else? Or do you mean an application?
Which icons do you mean? Do you use Gnome, KDE or something else? Or do you mean an application?
I mean all applications icons. I use KDE.
Ok, this was already requested in #50
|
gharchive/issue
| 2021-12-04T17:07:23 |
2025-04-01T06:39:56.260065
|
{
"authors": [
"AlexandreMarkus",
"l0drex"
],
"repo": "oskarsh/Yin-Yang",
"url": "https://github.com/oskarsh/Yin-Yang/issues/110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2731257311
|
scientific_full is down; document it and merge this issue with the others about editing coverages
https://afa.codes/scientific_full/
It doesn't work directly; you need to access a code, for example: https://afa.codes/scientific_full/BR+d
The same way the normal scientific endpoint works.
|
gharchive/issue
| 2024-12-10T21:46:36 |
2025-04-01T06:39:56.263479
|
{
"authors": [
"0e1",
"ppKrauss"
],
"repo": "osm-codes/gridMap-draftPages",
"url": "https://github.com/osm-codes/gridMap-draftPages/issues/98",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
101155956
|
Birthday and hack weekend
I should've done a blog post about the birthday and London hack weekend earlier. My excuse is that somebody else had the list of hacks. He's published that now so I think I'll write something and put it out today:
https://hackpad.com/Birthday-and-hack-weekend-HJAjZHsJ8eb
posted https://blog.openstreetmap.org/2015/08/15/birthday-hack-weekend/
|
gharchive/issue
| 2015-08-15T09:46:13 |
2025-04-01T06:39:56.265157
|
{
"authors": [
"harry-wood"
],
"repo": "osm-cwg/posts",
"url": "https://github.com/osm-cwg/posts/issues/8",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1810771048
|
Differentiate the origin of the label
Hello,
In the "Libellé BAN ou Cadastre" column, add a way to tell the origin apart.
A colored dot like in the "Libellé OSM" column, a colored font, ... I don't have a particular idea.
I don't know whether this info will need to be filtered. Would that be useful?
In the "Libellé BAN ou Cadastre" column, add a way to tell the origin apart.
This is now the case for each row, with small icons.
|
gharchive/issue
| 2023-07-18T21:49:04 |
2025-04-01T06:39:56.267014
|
{
"authors": [
"gendy54",
"vdct"
],
"repo": "osm-fr/osm-vs-fantoir",
"url": "https://github.com/osm-fr/osm-vs-fantoir/issues/272",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
}
|
2080165292
|
failing test - test_children_center_is_not_an_intentional_human_activity
This test is now failing since the 'instance of' value was changed from camp to summer camp.
https://www.wikidata.org/w/index.php?title=Q706474&diff=2049000673&oldid=1982054285
I think it might be worth finding an alternative camp for this test case
Can you try following https://github.com/osm-quality/wikibrain/blob/master/CONTRIBUTING.md#nonsense-reports ?
Feel free to write here on first encounter with something unclear/confusing.
|
gharchive/issue
| 2024-01-13T07:50:24 |
2025-04-01T06:39:56.271509
|
{
"authors": [
"KasperFranz",
"matkoniecz"
],
"repo": "osm-quality/wikibrain",
"url": "https://github.com/osm-quality/wikibrain/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
294338262
|
Explore the cosmogony
We will need a visualization tool (or maybe several tools ?) for our day to day usage of cosmogony.
The purpose of this issue is to gather our needs. Then we may summarize it in the readme of this repo.
Visual coverage by zone type
we want to explore the world on a map, select a zone type and see the existing zones of this type on the map to get an idea of the coverage
a POC has been done in this repo, PR : https://github.com/osm-without-borders/cosmogony_explorer/pull/1
View zone metadata
we want to select a zone and get all its metadata (names in different languages, wikidata id, etc)
All zones containing a point
we want to click the map and see all the zones including the point
Explore the hierarchy
we want to select a zone, and get an idea of its hierarchy :
see all its parent zones
see all its direct child zones
see all its child zones, cascading the hierarchy
see its other linked zones
Download some zones
we want to select some zones (selecting from the map and/or using the hierarchy) and download them in a GIS friendly format (at least geojson, with metadata as properties).
Quality assurance
We want some dashboard with the coverage tests results described in issue #4
We have a pretty good start in this repo : https://github.com/osm-without-borders/cosmogony_explorer
[x] Visual coverage by zone type
[x] View zone metadata
[x] Explore the hierarchy
|
gharchive/issue
| 2018-02-05T10:14:26 |
2025-04-01T06:39:56.279557
|
{
"authors": [
"nlehuby"
],
"repo": "osm-without-borders/cosmogony",
"url": "https://github.com/osm-without-borders/cosmogony/issues/23",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
342614261
|
add_metadata list of attributes - no effect on history file output
I would like to write osm history files with only version and timestamp information using libosmium release v2.14.0 (Debian GNU/Linux 9). I used the following command, separating the attributes with + as suggested in the release note:
osmium cat file-history.osm -o file-history-VT.osm -f osm,add_metadata=version+timestamp --overwrite
I expected version and timestamp data to be included but no uid, no user and no changeset data. The output file included all of them like I would have used add_metadata=true.
I can't reproduce this. Are you sure you are calling the right osmium program? Try osmium version, it tells you which version of the osmium program and of libosmium it is.
Yes, osmium version showed libosmium version 2.13.1. Right hint!
Thank you
|
gharchive/issue
| 2018-07-19T07:42:35 |
2025-04-01T06:39:56.307009
|
{
"authors": [
"SunStormRain",
"joto"
],
"repo": "osmcode/libosmium",
"url": "https://github.com/osmcode/libosmium/issues/261",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
147878373
|
[Validator] Strange Layer
http://osmlab.github.io/to-fix/#/task/strangelayer
cc @geohacker @maning @planemad
done!
|
gharchive/issue
| 2016-04-12T21:29:03 |
2025-04-01T06:39:56.311517
|
{
"authors": [
"Rub21"
],
"repo": "osmlab/osmlint",
"url": "https://github.com/osmlab/osmlint/issues/94",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1744710309
|
Don't ignore the lock file, because I cannot compile until the lock is copied for docs.rs
So without the lock file people cannot compile, or at least cannot compile deterministically for safety.
If I recall correctly, the Rust guideline for apps is to add the lock file to the repo.
|
gharchive/pull-request
| 2023-06-06T22:29:52 |
2025-04-01T06:39:56.322677
|
{
"authors": [
"dzmitry-lahoda"
],
"repo": "osmosis-labs/beaker",
"url": "https://github.com/osmosis-labs/beaker/pull/121",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1336723259
|
Reducing dependency osmosis-std wasm size impact
Background
Potentially due to prost, the generated wasm size increases by around 200k
Expectation
see if the size impact is really from prost or not; if so, find a way to reduce it
if not possible, consider removing prost and making everything JSON serializable in cosmwasm
It seems that osmosis-std doesn't really cause the issue. close for now.
|
gharchive/issue
| 2022-08-12T04:03:40 |
2025-04-01T06:39:56.327140
|
{
"authors": [
"iboss-ptk"
],
"repo": "osmosis-labs/osmosis-rust",
"url": "https://github.com/osmosis-labs/osmosis-rust/issues/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1398600080
|
[x/gamm]: Add RPC query endpoints for joining and exiting pools
Background
It would be helpful to have an RPC query that returns the lp shares when providing liquidity to a pool. It would also be helpful to have a query that returns the assets returned when withdrawing liquidity from a liquidity pool. These queries will be useful when using a StargateQuery in cosmwasm smart contracts.
Suggested Design
Here is the commit for the PoolType query that was added recently: https://github.com/osmosis-labs/osmosis/commit/e55e13d709628cff2075ae1576f993ed2a8c310e. The design should follow the steps taken in this commit to add RPC end points for the four following queries:
QueryJoinPool
Parameters: QueryJoinPoolSharesRequest{ PoolId, TokenInMaxs}
Returns: QueryJoinPoolSharesResponse { ShareOutAmount, TokenIn }
QueryJoinSwapExactAmountIn
Parameters: QueryJoinSwapExactAmountInRequest{ PoolId, TokenIn}
Returns: QueryJoinSwapExactAmountInResponse { ShareOutAmount }
QueryExitPool
Parameters: QueryExitPoolSharesRequest{ PoolId, ShareInAmount }
Returns: QueryExitPoolSharesResponse { TokenOut }
QueryExitSwapShareAmountIn
Parameters: QueryExitSwapShareAmountInRequest{ PoolId, TokenOutDenom, ShareInAmount }
Returns: QueryExitSwapShareAmountInResponse { TokenOutAmount }
I would like to work on this issue!
I would like to work on this issue!
Excellent. I am going to start on it today.
|
gharchive/issue
| 2022-10-06T01:30:58 |
2025-04-01T06:39:56.332614
|
{
"authors": [
"RusAkh",
"georgemc98"
],
"repo": "osmosis-labs/osmosis",
"url": "https://github.com/osmosis-labs/osmosis/issues/2956",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
412188210
|
Add Pose / Transform msg support
Add conversion between ign::msgs::Pose and
geometry_msgs/Pose
geometry_msgs/PoseStamped
geometry_msgs/Transform
geometry_msgs/TransformStamped
Unlike previous msg types, this is a one-to-many mapping. This is possible since the ign::msgs::Pose msg captures most of the fields in the above ROS geometry msgs, with the following exception:
geometry_msgs/TransformStamped's child_frame_id field.
The initial plan was to introduce a frame field in ignition::msgs::Pose, but a later examination of the current ros1_ign_bridge implementation revealed that a similar ROS msg field, frame_id, is being stored in an any field in ign::msgs::Header. So I took the same approach and stored the child_frame_id in the ign header msg.
Looks good to me!
+1
|
gharchive/pull-request
| 2019-02-20T00:29:07 |
2025-04-01T06:39:56.385950
|
{
"authors": [
"caguero",
"iche033"
],
"repo": "osrf/ros1_ign_bridge",
"url": "https://github.com/osrf/ros1_ign_bridge/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
558961314
|
Bug/fix dict illegal accesses
added some None initializations
added some checks before accessing
These should fix problems for super minimal maps that have no doors, or some elements
oof, yeah I didn't dig through it deep enough to figure out if they were basic maps/dictionaries or some other fancy collection. Sure thing.
|
gharchive/pull-request
| 2020-02-03T09:42:17 |
2025-04-01T06:39:56.387508
|
{
"authors": [
"aaronchongth"
],
"repo": "osrf/traffic_editor",
"url": "https://github.com/osrf/traffic_editor/pull/47",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
781652443
|
python code analyze error
My system has Python 3.7.3 installed; this is my Python version and pip version:
C:\Users\tony>python -V
Python 3.7.3
C:\Users\tony>pip -V
pip 19.0.3 from c:\users\tony\appdata\local\programs\python\python37\lib\site-pa
ckages\pip (python 3.7)
C:\Users\tony>
When I analyze a Python project that includes a requirements.txt, it shows an error in my analyze.result.json:
"issues" : {
"PIP::requirements.txt:" : [ {
"timestamp" : "2021-01-07T21:58:27.699009500Z",
"source" : "PIP",
"message" : "Resolving dependencies for 'requirements.txt' failed with: IOException: Running 'py -2 C:\\Users\\tony\\AppData\\Local\\Temp\\python_interpreter17850449696562758315.py' in 'C:\\Users\\tony\\Desktop\\ort\\ort' failed with exit code 103:\nInstalled Pythons found by py Launcher for Windows *\n\nRequested Python version (2) not installed, use -0 for available pythons\n",
"severity" : "ERROR"
} ]
}
I don't know what I can do next to continue.
Hello @gaoyaqing12
How did you manage to resolve that issue?
|
gharchive/issue
| 2021-01-07T22:19:51 |
2025-04-01T06:39:56.415957
|
{
"authors": [
"gaoyaqing12",
"woznik"
],
"repo": "oss-review-toolkit/ort",
"url": "https://github.com/oss-review-toolkit/ort/issues/3491",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
810846496
|
clearlydefined parsing exception
Hello,
I've recently activated clearlydefined in my ORT pipeline, and I've got some projects that failed with an analyzer error:
[2021-02-17T21:24:22.007Z] 21:24:21.837 [DefaultDispatcher-worker-3] WARN org.ossreviewtoolkit.analyzer.curation.ClearlyDefinedPackageCurationProvider - Getting curations for 'Maven:com.google.code.findbugs:jsr305:1.3.9' failed with: HttpException: HTTP 404
[2021-02-17T21:24:22.987Z] Exception in thread "main" com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize value of type `org.ossreviewtoolkit.clients.clearlydefined.ClearlyDefinedService$Facet` from Array value (token `JsonToken.START_ARRAY`)
[2021-02-17T21:24:22.987Z] at [Source: (okhttp3.ResponseBody$BomAwareReader); line: 1, column: 31] (through reference chain: org.ossreviewtoolkit.clients.clearlydefined.ClearlyDefinedService$Curation["described"]->org.ossreviewtoolkit.clients.clearlydefined.ClearlyDefinedService$Described["facets"]->org.ossreviewtoolkit.clients.clearlydefined.ClearlyDefinedService$Facets["dev"])
[2021-02-17T21:24:22.987Z] at com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:59)
Is it a parsing issue in ORT, wrong content from clearlydefined, or a bit of both :-) ?
regards
This duplicates https://github.com/oss-review-toolkit/ort/issues/3559. Please search existing issues first.
|
gharchive/issue
| 2021-02-18T07:41:21 |
2025-04-01T06:39:56.418327
|
{
"authors": [
"fb33",
"sschuberth"
],
"repo": "oss-review-toolkit/ort",
"url": "https://github.com/oss-review-toolkit/ort/issues/3645",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1325156016
|
Release scorecard
Release scorecard https://github.com/sigstore/sigstore-java/issues/2#issuecomment-1201899264
@laurentsimon @azeemshaikh38 Planning to release at this SHA 69eb1ccf1d0cf8c5b291044479f18672bf250325.
SGTM.
Thank you!
https://github.com/ossf/scorecard/releases/tag/v4.5.0
|
gharchive/issue
| 2022-08-02T01:08:32 |
2025-04-01T06:39:56.421444
|
{
"authors": [
"azeemshaikh38",
"laurentsimon",
"naveensrinivasan"
],
"repo": "ossf/scorecard",
"url": "https://github.com/ossf/scorecard/issues/2115",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1760587483
|
removal of --verbosity flag
As I was tinkering with the scorecard, I encountered the --verbosity [string] flag. No options are given, and the argument string seems to be disregarded. Any input string seems to be accepted, yet it is set to the default level of 'info'.
I may be wrong, but for now this --verbosity option seems redundant and removable.
Alternatively, options such as debug, error etc. could be implemented and described in the help message.
Thank you for getting back to me about this. My bad; I will look into it and propose some changes, starting with displaying the options for the --verbosity flag in the usage message.
|
gharchive/issue
| 2023-06-16T12:54:22 |
2025-04-01T06:39:56.423127
|
{
"authors": [
"andrelmbackman"
],
"repo": "ossf/scorecard",
"url": "https://github.com/ossf/scorecard/issues/3173",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
757885794
|
Document code for pong
Add docstrings and comments in the code
Done. Code documented at the same time as creating the tests, as per commit cc9a398991bb19ef984eeea47067baad594abfc2.
|
gharchive/issue
| 2020-12-06T09:39:44 |
2025-04-01T06:39:56.429077
|
{
"authors": [
"osso73"
],
"repo": "osso73/classic_games",
"url": "https://github.com/osso73/classic_games/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
525034924
|
srs3.0 librtmp 交叉编译失败
描述
srs2.0的librtmp库前段时间用live555取流推送到nginx-rtmp 浏览器和vlc都可以播放,不知是不是由于flash升级导致的,尝试更换srs3.0的librtmp 导出librtmp模块后,修改makefile 交叉编译环境,但是protocol里面依赖openssl,编译不通过,到srs里config 配置交叉编译环境without ssl ,with librtmp模块但是config提示stCPU架构不支持,
环境
操作系统:'Ubuntu16.04 hi-linux aarch-64'
编码器:'...'
SRS版本: '3.0.'
SRS的日志如下:
导出lib库编译提示缺少openssl
arch64-himix100-linux-g++ -c -ansi -Wall -g -O0 \
-Isrc/protocol -Isrc/core -Isrc/kernel -Iobjs \
-o objs/src/protocol/srs_rtmp_stack.o src/protocol/srs_rtmp_stack.cpp
In file included from src/protocol/srs_rtmp_stack.cpp:33:0:
src/protocol/srs_rtmp_handshake.hpp:35:26: fatal error: openssl/hmac.h: No such file or directory
#include <openssl/hmac.h>
^
compilation terminated.
Building directly inside srs reports that the CPU architecture is not supported:
make OS="LINUX" BUILD="DBG"
make[1]: Entering directory '/home/ubuntu/projects/hi3559a/github/srs/srs-3.0-a1/trunk/objs/state-threads-1.9.1'
if [ ! -d LINUX_4.15.0-66-generic_DBG ]; then mkdir LINUX_4.15.0-66-generic_DBG; fi
aarch64-himix100-linux-gcc -DLINUX -DDEBUG -Wall -g -DMD_HAVE_EPOLL -c sched.c -o LINUX_4.15.0-66-generic_DBG/sched.o
In file included from common.h:68:0,
from sched.c:48:
md.h:454:14: error: #error "Unknown CPU architecture"
#error "Unknown CPU architecture"
^~~~~
sched.c: In function ‘_st_vp_check_clock’:
sched.c:480:16: warning: variable ‘elapsed’ set but not used [-Wunused-but-set-variable]
st_utime_t elapsed, now;
^~~~~~~
sched.c: In function ‘st_thread_create’:
sched.c:596:2: error: #error Unknown OS
#error Unknown OS
^~~~~
Expected behavior
Fix the srs 2.0 issue where the browser pulls the stream but shows a black screen. For srs 3.0, openssl has now been cross-compiled successfully, but when placed into objs the build still reports that the files cannot be found, even after modifying the makefile to add openssl to the build options.
Thank you very much!
srs-librtmp is currently maintained mainly on the SRS2 branch; some bugs have already been fixed there, so please give it another try.
|
gharchive/issue
| 2019-11-19T14:12:46 |
2025-04-01T06:39:56.432324
|
{
"authors": [
"luqinlive",
"winlinvip"
],
"repo": "ossrs/srs",
"url": "https://github.com/ossrs/srs/issues/1494",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1057774018
|
test that we support image signatures
xref https://github.com/containers/skopeo/issues/1482
We should validate that we're doing image signatures via the proxy correctly.
In this issue, the great thing about the new ostree-native-container flow is that if you have a setup to sign container images, that exact same setup can be used to sign OS updates.
See https://docs.podman.io/en/latest/markdown/podman-image-sign.1.html and https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/signing_container_images for some old-style GPG signatures. As of recently the containers/image stack gained support for "cosign", see https://github.com/containers/skopeo/pull/1701
To test the recent changes for policy verification:
Modify /etc/containers/policy.json for sigstore/gpg signed images from the remote-registry
Sign an existing fcos image and push it to the remote registry
Try doing an rpm-ostree rebase ${signed-image}
Ensure it fails
Since we do not currently sign fcos or any ostree-based images, we need to have signed images available.
Initially it was thought this could be done locally by doing skopeo copy docker://quay.io/fedora/fedora-coreos:testing-devel oci:/var/lib/containers/signed-local-registry/sigstore/test.oci --sign-by-sigstore-private-key fcos.key .
But this fails unfortunately, since how sigstore signs images is by pushing the artifacts generated to the remote-registry. Hence, signing an local oci or dir does not work. It instead gives the following error: Cannot determine canonical Docker reference for destination oci:/var/lib/containers/signed-local-registry/sigstore/test.oci.
Instead we need to be able to push this to some ephemeral testing Docker image registry. The perfect candidate was ttl.sh, as mentioned in the sigstore documentation, but unfortunately the fcos image exceeds the maximum image size limit there. Is there any other registry we could push to and verify instead?
So CI on this repository mainly uses GHA, for which there is https://docs.github.com/en/actions/using-containerized-services/about-service-containers
But that's just sugar for running a container...we can run any registry (quay.io, docker/distribution or whatever) inside a GHA job right?
|
gharchive/issue
| 2021-11-18T20:32:50 |
2025-04-01T06:39:56.440298
|
{
"authors": [
"RishabhSaini",
"cgwalters"
],
"repo": "ostreedev/ostree-rs-ext",
"url": "https://github.com/ostreedev/ostree-rs-ext/issues/163",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1244409771
|
Consider using Notion API
To add a database item, one purchase at a time, rather than manually importing a CSV at the end.
Is this valuable? Even if we're already generating monthly CSVs in the process? Should we then not bother with these CSVs?
Resolved in 21d1e929a30c825ffca5849a6bf27f39e4a82205.
I experimented with bypassing the categorized CSV and going straight from categorization to Notion, but it was super slow to import (> 5 min for one month of data, and even then it failed partway through). So, keeping the CSVs seemed to be a better choice with respect to both performance and safety. Until we have a solution for checking against duplicate entries, partial uploads are a big issue.
|
gharchive/issue
| 2022-05-23T00:15:10 |
2025-04-01T06:39:56.511355
|
{
"authors": [
"eemiily",
"oswinrodrigues"
],
"repo": "oswinrodrigues/steward-little",
"url": "https://github.com/oswinrodrigues/steward-little/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1624369241
|
The authentication-related forms seem to be broken
At least, when an error occurs the input gets re-rendered and loses focus (apparently)
Probably fixed now?
|
gharchive/issue
| 2023-03-14T22:09:38 |
2025-04-01T06:39:56.536822
|
{
"authors": [
"SnO2WMaN"
],
"repo": "otomad-database/web",
"url": "https://github.com/otomad-database/web/issues/239",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2161356304
|
Add support for marking AWS roles and policies as unused instead of deleting them upon cleanup
Users have requested to be able to configure the credentials operator to not delete AWS IAM roles and policies, but instead tag them as unused, a sort of "soft delete" mode.
@omris94 please add info on how this feature will be configured
https://github.com/otterize/intents-operator/issues/366
I suggest the following solution:
To indicate that AWS IAM roles and policies should only be soft deleted when they are not used anymore, there are two methods to follow:
If you want to avoid deletion of AWS roles and policies corresponding with a specific pod, label the pod as credentials-operator.otterize.com/aws-use-soft-delete=true.
If you want to globally avoid the deletion of AWS roles and policies, initialize the credentials-operator with the --aws-use-soft-delete=true flag. You can set this flag by adjusting the helm chart's value (global.aws.useSoftDelete).
What would the soft deletion of AWS IAM roles and policies look like?
Soft deletion of an AWS IAM policy will be performed by tagging a policy as otterize/softDeletedAt=<timeOfDeletion>
Soft deletion of an AWS IAM role will be performed by tagging a role and all of its policies as otterize/softDeletedAt=<timeOfDeletion>
When does the soft deletion occur?
Every case that would cause a deletion without this feature, such as serviceAccount deletion for roles or ClientIntent deletion for policies.
At the moment, we don't have the necessary logic to remove orphaned policies/roles. This means that if you start with --aws-use-soft-delete=true or its pod-level equivalent, you will need to switch it back to false before deleting the pod or ClientIntents to ensure that the roles/policies are removed. However, everything is possible so we may be able to implement this logic in the future.
So the answer is yes. That means if I switch it back, the operator will start managing the role/policy again. Looks good then.
|
gharchive/issue
| 2024-02-29T14:09:04 |
2025-04-01T06:39:56.549250
|
{
"authors": [
"omris94",
"orishoshan",
"volk1234"
],
"repo": "otterize/credentials-operator",
"url": "https://github.com/otterize/credentials-operator/issues/112",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
250041688
|
Integration with Source Control to ensure new items get added
We are moving to Tabular Editor for the save to Folder capability which provides 3 way merging safety and simplicity.
However, one tradeoff is that for new objects (tables, columns, measures, etc.) you have to remember to "Add" the files to the workspace, otherwise these parts of the model are forgotten on the merge. It would be great if this could somehow be solved, as we have given safety and efficiency with one hand but taken it away with the other.
Of course full source control integration might be overkill but some support for at least TFS and GIT would be greatly appreciated
Yes, we're facing the same issue internally at one of our clients. Since apparently this is a useful feature, I will investigate if there is some easy way to enable better source control integration with Tabular Editor. See also issue #67. Perhaps Tabular Editor can somehow "detect" that it's saving files back to a source controlled folder and then ensure that new files are automatically added to the TFS/git workspace. Will get back to you when I have an update.
Highly recommend using Git together with the Save to Folder option, as Git automatically detects file additions/deletions, whereas TFS/TFVC does not... Tabular Editor will not get support for TFS/TFVC, as Git is the more popular source control system. Tracking Git integration on issue #104. Closing this issue.
|
gharchive/issue
| 2017-08-14T14:07:39 |
2025-04-01T06:39:56.557337
|
{
"authors": [
"o-o00o-o",
"otykier"
],
"repo": "otykier/TabularEditor",
"url": "https://github.com/otykier/TabularEditor/issues/82",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1497095317
|
Infer call children from parents
Minor changes to things in the trace format that were bothering me:
Rather than storing children explicitly in the trace, infer them from the parents when reading the trace. So no more "01GM8SWJ2YKM8QQNSRA07W3VQR.children.01GM8SWJ2Y29PZMSKVH0AJQHQC":true. This:
Saves bytes.
Should allow fixing https://github.com/oughtinc/ice/pull/141#discussion_r1042698274 better.
This means that the type children?: Calls in CallInfo is now actually accurate, i.e. the values of children are calls instead of booleans.
Emit a single value which is combined into the existing calls using lodash's merge instead of set with paths. In particular this means the end lines change from this:
{
"01GM8SWJ2Y29PZMSKVH0AJQHQC.result": ...,
"01GM8SWJ2Y29PZMSKVH0AJQHQC.shortResult": ...,
"01GM8SWJ2Y29PZMSKVH0AJQHQC.end": ...
}
to this:
{
"01GM8SWJ2Y29PZMSKVH0AJQHQC": {
"result": ...,
"shortResult": ...,
"end": ...
}
}
which is more space efficient, more natural, and easier to work with (no callId = path.split(".")[0]).
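As a rough sketch of the difference (placeholder keys and values only, not the actual reader code), combining one emitted line with lodash looks like this:
import { merge, set } from "lodash";

// Old style: each key is a dotted path that set() has to split apart.
const callsViaSet: Record<string, unknown> = {};
set(callsViaSet, "01GM8SWJ2Y29PZMSKVH0AJQHQC.result", "placeholder");
set(callsViaSet, "01GM8SWJ2Y29PZMSKVH0AJQHQC.end", "placeholder");

// New style: the emitted line already has the shape of the calls object,
// so a single deep merge is enough and no path splitting is needed.
const calls: Record<string, unknown> = {};
merge(calls, {
  "01GM8SWJ2Y29PZMSKVH0AJQHQC": { result: "placeholder", end: "placeholder" },
});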
Haven't actually tested this yet.
Is this ready for review?
Haven't actually tested this yet.
This was because of the broken dev server. Just tried building the UI, got an error as soon as I looked at a trace. Will revisit this once the HMR issue is fixed.
Closing for now, feel free to reopen.
|
gharchive/pull-request
| 2022-12-14T17:32:02 |
2025-04-01T06:39:56.562170
|
{
"authors": [
"alexmojaki",
"lslunis",
"stuhlmueller"
],
"repo": "oughtinc/ice",
"url": "https://github.com/oughtinc/ice/pull/161",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
249028821
|
Event with multiple dates only shows first date
See https://rogerthat-server.appspot.com/internal/shop/questions/6621295077228544
Event with multiple dates
Only first date is shown in the app
Example:
Event is only shown on the 28th in the app, and not on 29, 30 or 31
As @bart-at-mobicage mentioned, it would be better to list all the dates and their events, instead of just listing the first date and requiring the user to go to the event details to see the other dates.
|
gharchive/issue
| 2017-08-09T13:22:06 |
2025-04-01T06:39:56.564768
|
{
"authors": [
"abom",
"lucasvanhalst"
],
"repo": "our-city-app/oca-backend",
"url": "https://github.com/our-city-app/oca-backend/issues/438",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
322706988
|
Add an OCA branded watermark to main branding
Add a watermark, like we did with DJ-Matic services
Looks like watermarks aren't supported yet in native brandings
|
gharchive/issue
| 2018-05-14T07:50:32 |
2025-04-01T06:39:56.566263
|
{
"authors": [
"bart-at-mobicage"
],
"repo": "our-city-app/oca-backend",
"url": "https://github.com/our-city-app/oca-backend/issues/887",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1629657914
|
Add token holdings cutoff date to message
Description
Motivation & context
Code review
Type of change
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
Checklist
[ ] I have done a self-review of my own code
[ ] Any new and existing tests pass locally with my changes
[ ] My changes generate no new warnings (lint warnings, console warnings, etc)
Ah! Created this PR too early
|
gharchive/pull-request
| 2023-03-17T17:10:26 |
2025-04-01T06:39:56.569126
|
{
"authors": [
"psatyajeet"
],
"repo": "ourzora/nouns-builder",
"url": "https://github.com/ourzora/nouns-builder/pull/146",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
379354785
|
[Image Tiles Service] Consistency bug: AssetManagerHost vs ServiceHostUrl in appsettings.json
We should have either AssetManagerHost and ServiceHost or AssetManagerHostUrl and ServiceHostUrl.
Fixed in PR #27.
lol not much point assigning me these issues if they are opened, assigned, a fix committed and merged over a weekend before I get to see them ;)
thanks for the code improvement @senakafdo
|
gharchive/issue
| 2018-11-09T23:15:47 |
2025-04-01T06:39:56.600024
|
{
"authors": [
"davidbirchwork",
"senakafdo"
],
"repo": "ove/ove-asset-services",
"url": "https://github.com/ove/ove-asset-services/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1512357038
|
[Bug] Debugs are not removed on Zone Removal
When a zone is removed, the debug draws don't seem to be removed as well. In the screenshot the zones themselves are not present, as I can't target the stands, but the spheres are still there, and they sometimes double up when the resource is started again (see below).
Was an ox_lib issue and has been resolved.
|
gharchive/issue
| 2022-12-28T03:21:30 |
2025-04-01T06:39:56.657522
|
{
"authors": [
"Mkeefeus",
"thelindat"
],
"repo": "overextended/ox_target",
"url": "https://github.com/overextended/ox_target/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
871383513
|
Only serialize properties of objects which are accessed
Input:
const obj = { x: 1, y: 2, z: 3 };
export default () => obj.x;
Current output:
export default ( obj => () => obj.x )( { x: 1, y: 2, z: 3 } );
As x is only property of obj which is accessed, this could be reduced to:
export default ( x => () => x )( 1 );
Optimizations
The following optimizations can be applied:
1. Omit unused properties
Where only certain properties of an object are accessed, any other properties can be omitted:
// Input
const obj = { x: 1, y: 2, z: 3 };
export default () => obj.x + obj['y'];
// Output
const obj = { x: 1, y: 2, z: 3 };
export default ( obj => () => obj.x + obj.y )( { x: 1, y: 2 } );
Note property z has been discarded in output.
2. Break object properties apart with scopes
Where object is never used as a whole (only individual properties accessed by name), each property can be split into a separate scope var.
// Input
const obj = { x: 1, y: 2, z: 3 };
export default {
getX: () => obj.x,
setX: v => obj.x = v,
getY: () => obj.y,
setY: v => obj.y = v
};
// Output
const scope1 = ( x => [ () => x, v => x = v ] )( 1 ),
scope2 = ( y => [ () => y, v => y = v ] )( 2 );
export default {
getX: scope1[0],
setX: scope1[1],
getY: scope2[0],
setY: scope2[1]
};
getX() + setX() can be code-split into a separate file from getY() + setY().
3. Break object properties apart with object wrappers
Where object is never used as a whole (only individual properties accessed by name), each property can be wrapped in a separate object.
// Input - same as (2)
const obj = { x: 1, y: 2, z: 3 };
export default {
getX: () => obj.x,
setX: v => obj.x = v,
getY: () => obj.y,
setY: v => obj.y = v
};
// Output
const objX = { x: 1 },
objY = { y: 2 };
export default {
getX: ( objX => () => objX.x )( objX ),
setX: ( objX => v => objX.x = v )( objX ),
getY: ( objY => () => objY.y )( objY ),
setY: ( objY => v => objY.y = v )( objY )
};
Using 2 wrapper objects is slightly more verbose than output from optimizations (1) or (2), but more code-splittable than either. getX(), setX(), getY() and setY() could each be in separate files with objX and objY split into separate common files.
4. Reduce to static values
Where a property is read only (never written to in any functions serialized), the property can be reduced to a static value.
// Input
const obj = { x: 1, y: 2, z: 3 };
export default {
getX: () => obj.x,
getY: () => obj.y
};
// Output
export default {
getX: ( x => () => x )( 1 ),
getY: ( y => () => y )( 2 )
};
This is completely code-splittable. It's more efficient than any of the other approaches above, but only works if obj.x and obj.y are read-only.
Optimization killers
None of these optimizations can be used if:
Object used standalone e.g. const objCopy = obj; or fn( obj )
Object properties accessed with dynamic lookup e.g. obj[ name ]
Object passed as this in a method call e.g. obj.getX() and .getX() uses this
Property is getters/setters
Property is not defined, so access will fall through to object's prototype
Property may be deleted by code elsewhere (delete obj.x) so a later access may fall through to object's prototype
An eval() has access to object in scope (no way to know ahead of time how the object will be used)
Tempting to think could still apply optimization (3) in cases of undefined properties by defining object wrapper as objX = Object.create( originalObjectPrototype ). However, this won't work as it's possible object's prototype is altered later with Object.setPrototypeOf().
It's impossible to accurately detect any changes made to the object with Object.defineProperty() - which could change property values, or change properties to getters/setters. However, this isn't a problem - the call to Object.defineProperty( obj ) would involve using the object standalone, and so would prevent optimization due to restriction (1) above.
ESM
These optimizations would also have effect of tree-shaking ESM (#53).
ESM is transpiled to CommonJS in Livepack's Babel plugin, prior to being run or serialized:
// Input
import { createElement } from 'react';
export default () => createElement( 'div', null, 'Hello!' );
// Transpiled to (before code runs)
const _react = require('react');
module.exports = () => _react.createElement( 'div', null, 'Hello!' );
Consequently, when this function is serialized, the whole of the _react object is in scope and is serialized, whereas all we actually need is the .createElement property.
Optimization (4) (the most efficient one) would apply, except in case of export let where the value of the export can be changed dynamically in a function (pretty rare case).
Difficulties
I can foresee several difficulties implementing this:
Which optimization (if any) to apply cannot be determined until entire app has been serialized to know whether (a) any optimization killer applies and (b) whether properties are read-only or not.
Where a property is called as a method e.g. _react.createElement(), createElement()'s code must be analysed to see if it uses this. Optimizations can only be used if it doesn't.
Both of the above mean serialization of function scope vars needs to happen later than at present, to avoid serializing the whole object if only one property will be included in output. This will complicate identifying circular references.
Tempting to think that functions accessing a read-only object property can be optimized to access only that property even if another function accesses the object whole. However, that's not possible, as the function accessing the object whole could use Object.defineProperty() to redefine that property - so it's actually not read-only at all.
const O = Object,
d = O['define' + 'Property'];
function write(o, n, v) {
d( o, n, { value: v } );
}
const obj = { x: 1, y: 2 };
export default {
getX() {
return obj.x;
},
setX(v) {
write(obj, 'x', v);
}
};
setX() writes to obj.x but it's not possible through static analysis to detect that it will do this. So can't know if obj.x is read only or not.
You can optimize getX() if only other use of obj is via dynamic property lookup (obj[ name ]) and not within an assignment (obj[ name ] = ...).
Detecting read-only properties will also need to detect assignment via deconstruction e.g.:
({a: obj.x} = {a: 123})
Two-phase serialization now has its own issue: #426
|
gharchive/issue
| 2021-04-29T19:23:41 |
2025-04-01T06:39:56.672335
|
{
"authors": [
"overlookmotel"
],
"repo": "overlookmotel/livepack",
"url": "https://github.com/overlookmotel/livepack/issues/169",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
485781540
|
Broken with React 16.9.0
With React/ReactDOM 16.9.0 all tests relating to rehydration on client are failing. Am not sure what the cause is.
Have locked dependency to 16.8.x for now and put a note in README not to use 16.9.0.
Fixed in v0.5.2.
|
gharchive/issue
| 2019-08-27T12:32:32 |
2025-04-01T06:39:56.674252
|
{
"authors": [
"overlookmotel"
],
"repo": "overlookmotel/react-async-ssr",
"url": "https://github.com/overlookmotel/react-async-ssr/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
650831764
|
Feature/#28 enable fuzzy search
Enable fuzzy search
Major change
Enable fuzzy search in the cocktail search
function change: onClickSearch
change GA script after UI body rendered
In order mot to block the user in the slow network env.
Library add
"fuse.js": "6.4.0"
https://www.npmjs.com/package/fuse.js
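For reference, a minimal sketch of how Fuse.js is typically wired up for a search like this (the name field, sample data and threshold here are assumptions, not the project's actual schema or settings):
import Fuse from "fuse.js";

// Placeholder data; the real list comes from the cocktail guide book.
const cocktails = [{ name: "Negroni" }, { name: "Margarita" }, { name: "Mojito" }];

const fuse = new Fuse(cocktails, {
  keys: ["name"], // fields to match against
  threshold: 0.4, // lower values mean stricter matching
});

// Returns ranked matches even for slightly misspelled input.
const results = fuse.search("margarit").map((result) => result.item);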
Nice!
|
gharchive/pull-request
| 2020-07-04T05:36:30 |
2025-04-01T06:39:56.677029
|
{
"authors": [
"sean1093",
"tkforce"
],
"repo": "overpartylab/cocktails-guide-book",
"url": "https://github.com/overpartylab/cocktails-guide-book/pull/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1315717187
|
Import 369da2d262: Merge remote-tracking branch 'origin/master' (by Caity)
This is an automated proposal to look at a commit made by Caity and import it into Overte
Commit:
369da2d262d7da45373da7cf89b375eeeb846c24
Author:
Caity
Date:
Tue, 04 Feb 2020 18:47
Merge remote-tracking branch 'origin/master'
Stats:
Filename | Lines | Added | Removed | Lines in blame
.gitlab-ci.yml | 92 | 92 | 0 | ⚠ 0
docker/.gitignore | 1 | 1 | 0 | ⚠ 0
docker/Dockerfile | 22 | 22 | 0 | ⚠ File gone
docker/digitalocean.json | 36 | 36 | 0 | ⚠ File gone
docker/docker-compose.yml | 21 | 21 | 0 | ⚠ File gone
docker/modify-domain-port.py | 32 | 32 | 0 | ⚠ File gone
docker/supervisor.conf | 59 | 59 | 0 | ⚠ File gone
7 files (total) | 263 | 263 | 0 | 0
To work on this, please assign the issue to yourself, then look at the commit and decide whether this would be a good addition to Overte.
If the commit is useful, tag it with "Tivoli: Keep", and keep it open until it's merged.
If the commit is not useful, tag it with "Tivoli: Discard", and close it.
If the commit is not useful right now, but might be later, tag it with "Tivoli: Maybe later", and close it.
If it's hard to decide, tag it with "Tivoli: Discuss", and keep it open.
Useful commits should be submitted as a PR against Overte. Tag this issue in PR, so that it's automatically closed once the PR is merged.
You can cherry-pick this issue with this command:
git cherry-pick 369da2d262d7da45373da7cf89b375eeeb846c24
Duplicate of #149
|
gharchive/issue
| 2022-07-23T18:16:48 |
2025-04-01T06:39:56.689564
|
{
"authors": [
"daleglass"
],
"repo": "overte-org/tivolicloud",
"url": "https://github.com/overte-org/tivolicloud/issues/169",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
106145038
|
How do I catch the exception thrown by UnifiedOrder in WeChat Pay when using Laravel?
How do I catch the exception thrown by UnifiedOrder in WeChat Pay when using Laravel?
http://php.net/manual/zh/language.exceptions.php
|
gharchive/issue
| 2015-09-12T10:08:30 |
2025-04-01T06:39:56.693187
|
{
"authors": [
"jaring",
"overtrue"
],
"repo": "overtrue/wechat",
"url": "https://github.com/overtrue/wechat/issues/93",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
104562531
|
Issue with creating production bundles
Hi,
Thanks very much for publishing this and the supplementary guide. It is very very helpful.
For webpack.prod.config.js, I had to remove the last curly brace. I also had to add the following two lines:
var webpack = require('webpack');
var BundleTracker = require('webpack-bundle-tracker');
When I run
./node_modules/.bin/webpack --config webpack.prod.config.js
it expects additional parameters. Am I doing something wrong?
Can you share full config and full traceback please?
I started with what is in your blog post here at the bottom under 'webpack.prod.config.js'.
Running
./node_modules/.bin/webpack --config webpack.prod.config.js
I got
/home/manish/Work/mundaii/webpack.prod.config.js:6
new BundleTracker({filename: './webpack-stats-prod.json'})
^
ReferenceError: BundleTracker is not defined
at Object.<anonymous> (/home/manish/Work/mundaii/webpack.prod.config.js:6:10)
Adding
var BundleTracker = require('webpack-bundle-tracker');
gets me
/home/manish/Work/mundaii/webpack.prod.config.js:12
new webpack.DefinePlugin({
^
ReferenceError: webpack is not defined
at Object.<anonymous> (/home/manish/Work/mundaii/webpack.prod.config.js:12:7)
Adding
var webpack = require('webpack');
gets me
webpack 1.12.0
Usage: https://webpack.github.io/docs/cli.html
.
.
.
Final webpack.prod.config.js looks like
var config = require('./webpack.config.js');
var webpack = require('webpack');
var BundleTracker = require('webpack-bundle-tracker');
config.output.path = require('path').resolve('./assets/dist');
config.output.pathName = '/production/path/to/bundle/directory'; // This will override the url generated by django's staticfiles
config.plugins = [
new BundleTracker({filename: './webpack-stats-prod.json'}),
// removes a lot of debugging code in React
new webpack.DefinePlugin({
'process.env': {
'NODE_ENV': JSON.stringify('production')
}}),
// keeps hashes consistent between compilations
new webpack.optimize.OccurenceOrderPlugin(),
// minifies your code
new webpack.optimize.UglifyJsPlugin({
compressor: {
warnings: false
}
})
];
You'll have to add module.exports = config; to the bottom of the production config file. I've updated the blog post. Thanks for pointing this out.
It works now, thanks @owais.
FYI - you're still missing
var BundleTracker = require('webpack-bundle-tracker');
in your updated blog post.
|
gharchive/issue
| 2015-09-02T20:15:56 |
2025-04-01T06:39:56.724252
|
{
"authors": [
"dopeboy",
"owais"
],
"repo": "owais/django-webpack-loader",
"url": "https://github.com/owais/django-webpack-loader/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1777253740
|
:hammer: Support strict primary key enforcement
This change causes a warning to happen any time you save a table without a primary key. It also allows a strict mode in which the warning becomes an exception.
Motivation
We are trying to move more metadata to the indicator level. The dimensions an indicator has are a really important piece of metadata that is make or break for automatically collecting, indexing and reusing the indicators.
How it will be used
The Buildkite tasks that build production data and do the nightly full build will not be run in strict mode. However, PRs will be run in strict mode. This is to avoid any downtime for the ETL, whilst strongly encouraging future changes to have a primary key.
For more background of where I think we should go with dimensions, have a read here: https://www.notion.so/owid/2023-06-27-Proposal-for-dimensions-in-the-ETL-9e1a26fec3b94ad2a33ca8fab14b090a?pvs=4
It's a good idea. The only annoying thing is that, in practice, we are always setting index before saving, and then resetting index after loading. In an ideal world working with multi-index tables would be easy. But overall I think it's a safe approach.
@pabloarosado Have a read of the proposal I wrote in Notion as well. I think we should move to something simpler, which is just using dim_ in front of dimension columns. E.g. dim_country, dim_year. It's a common convention that's also pretty self-explanatory. We could even support new and old ways of doing this in a backwards-compatible way.
|
gharchive/pull-request
| 2023-06-27T15:37:37 |
2025-04-01T06:39:56.743592
|
{
"authors": [
"larsyencken"
],
"repo": "owid/etl",
"url": "https://github.com/owid/etl/pull/1275",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1945198170
|
Update CO2 dataset
Minor changes in GCB metadata.
Fix ~issue with African consumption-based emissions, and~(this was handled in a previous PR) issue with Palau emissions.
Update owid_co2 dataset.
Archive unused steps (and update country_profiles dependencies to use the latest GCB dataset).
Hey @lucasrodes I'm going to merge this PR to avoid blocking other things. There aren't any big changes, but please have a look whenever you have a few minutes, thanks.
|
gharchive/pull-request
| 2023-10-16T13:15:40 |
2025-04-01T06:39:56.745733
|
{
"authors": [
"pabloarosado"
],
"repo": "owid/etl",
"url": "https://github.com/owid/etl/pull/1793",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1312052139
|
Minter updates
Get better coverage on Minter contracts.
/cib
|
gharchive/issue
| 2022-07-20T22:36:17 |
2025-04-01T06:39:56.752372
|
{
"authors": [
"corbanvilla"
],
"repo": "owlprotocol/contracts",
"url": "https://github.com/owlprotocol/contracts/issues/348",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
862488863
|
Release web 3.0.0
~DO NOT MERGE
This is a WIP PR bringing web-3.0.0, but it's still at the web-v3.0.0-rc2. If it looks good in oCIS we'll release web-v3.0.0 and update it in this PR again.~ It does now feature the official web-v3.0.0 release
Description
This PR pulls the assets of the web-v3.0.0 release and updates the accounts and settings service according to the recent changes in the owncloud design system v6.0.1.
Related Issue
Fixes https://github.com/owncloud/ocis/issues/1927
Motivation and Context
Bring the new web ui to its most recent version.
How Has This Been Tested?
CI
Types of changes
[x] New feature (non-breaking change which adds functionality)
Checklist:
[x] Code changes
needs a rebase after #1941 in order to get the pipeline green
needs a rebase after #1941 in order to get the pipeline green
I'll take care, thanks 🤝
|
gharchive/pull-request
| 2021-04-20T07:06:14 |
2025-04-01T06:39:56.942200
|
{
"authors": [
"kulmann",
"pascalwengerter",
"wkloucek"
],
"repo": "owncloud/ocis",
"url": "https://github.com/owncloud/ocis/pull/1938",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1236701959
|
Bugfix: Fix multiple configuration environment variables for the storage-users extension
Description
We've fixed multiple environment variable configuration options for the storage-users extension:
STORAGE_USERS_GRPC_ADDR was used to configure both the address of the http and grpc server.
This resulted in a failing startup of the storage-users extension if this config option is set,
because the service tries to double-bind the configured port (one time for each of the http and grpc server). You can now configure the grpc server's address with the environment variable STORAGE_USERS_GRPC_ADDR and the http server's address with the environment variable STORAGE_USERS_HTTP_ADDR
STORAGE_USERS_S3NG_USERS_PROVIDER_ENDPOINT was used to configure the permissions service endpoint for the S3NG driver and was therefore renamed to STORAGE_USERS_S3NG_PERMISSIONS_ENDPOINT
It's now possible to configure the permissions service endpoint for all storage drivers with the environment variable STORAGE_USERS_PERMISSION_ENDPOINT, which was previously only used by the S3NG driver.
WARNING: this could be considered a breaking change
Related Issue
Needed for https://github.com/owncloud/ocis-charts/pull/43 to start the storage-users http and grpc servers listening on all interfaces
Motivation and Context
fix the config
How Has This Been Tested?
locally
Screenshots (if appropriate):
Types of changes
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[x] Breaking change (fix or feature that would cause existing functionality to change)
[ ] Technical debt
[ ] Tests only (no source changes)
Checklist:
[x] Code changes
[ ] Unit tests added
[ ] Acceptance tests added
[ ] Documentation ticket raised:
@micbar is this considered a breaking change? For the STORAGE_USERS_S3NG_USERS_PROVIDER_ENDPOINT change, we could remain backwards compatible. All others are not breaking (STORAGE_USERS_GRPC_ADDR can not be configured by anyone currently because the service refuses to start)
I guess that this will change the yaml/env file output - therefore docs relevant. Just hooking in so we can trigger a docs build.
|
gharchive/pull-request
| 2022-05-16T06:50:03 |
2025-04-01T06:39:56.949927
|
{
"authors": [
"mmattel",
"wkloucek"
],
"repo": "owncloud/ocis",
"url": "https://github.com/owncloud/ocis/pull/3802",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
143980945
|
add more data as inputs
volume
bid/ask spread
RSI/MACD / other derivative metrics
social sentiment analysis
ref: #13 #8
I've used Ichimocku in my manual trading. They are very powerful.
Have you looked at all into using Social Mention or Google Alerts for social sentiment analysis?
Sifting and reacting to viral bursts of social approval could provide large gains with minimal trading.
Nice @jeff-hykin , I didnt check the other issues. I will start to follow all of them in here.
@owocki I already had filled the form, will you create a a group or we will be here by now?
@rmendes900 just sent your invite
(Just saving this for future use)
Here's a potential data source @darcy mentioned on Slack:
https://www.quandl.com/data/BCHAIN?keyword=bitcoin
Quandl looks super easy to integrate, there's also a Python library:
https://www.quandl.com/help/python
https://github.com/owocki/pytrader/pull/75
https://github.com/owocki/pytrader/pull/76
|
gharchive/issue
| 2016-03-28T13:45:09 |
2025-04-01T06:39:56.968250
|
{
"authors": [
"NDuma",
"jeff-hykin",
"owocki",
"rmendes900",
"rubik"
],
"repo": "owocki/pytrader",
"url": "https://github.com/owocki/pytrader/issues/20",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2254746405
|
Remove FfiWaker from RUST_WAKER_VTABLE
Only FfiWakerBase is used by vtable functions (as it's the stable ABI)
In general, the pointer isn't guaranteed to (and is expected, at times, not to) point to FfiWaker
Sorry for the late response. Thanks!
|
gharchive/pull-request
| 2024-04-20T22:33:50 |
2025-04-01T06:39:56.970473
|
{
"authors": [
"oxalica",
"timotheyca"
],
"repo": "oxalica/async-ffi",
"url": "https://github.com/oxalica/async-ffi/pull/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2045403517
|
feat(linter) no double comparisons
https://rust-lang.github.io/rust-clippy/master/index.html#/double_comparisons
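For context, this is the kind of pattern such a rule flags (illustrative example only, mirroring the clippy rule; not taken from the linter's test suite):
declare const a: number;
declare const b: number;

// Flagged: two comparisons that can be collapsed into one.
const flagged = a === b || a < b;

// Suggested simplification the lint would point to.
const simplified = a <= b;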
Current dependencies on/for this PR:
main
PR #1710
PR #1712 👈
This stack of pull requests is managed by Graphite.
It looks like we can make this rule support auto-fix
|
gharchive/pull-request
| 2023-12-17T22:32:20 |
2025-04-01T06:39:56.974846
|
{
"authors": [
"Dunqing",
"camc314"
],
"repo": "oxc-project/oxc",
"url": "https://github.com/oxc-project/oxc/pull/1712",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2098016460
|
Refactorization of the optimizer code.
This cleans up some code around the optimizer. It breaks the API. See #67.
OK! It is a bit messy with the build system. But so be it!
|
gharchive/pull-request
| 2024-01-24T11:03:05 |
2025-04-01T06:39:57.012633
|
{
"authors": [
"oysteijo"
],
"repo": "oysteijo/simd_neuralnet",
"url": "https://github.com/oysteijo/simd_neuralnet/pull/68",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2041559023
|
Improve Kerberos auth test stability
Change the KerberosContainer wait strategy from a log message to listening on port 88
Remove the keytabs created in test-resources; the test now copies them to the target/test-classes/kerberos dir.
Fixes #121
@ozangunalp lgtm. thank you for the investigation and producing the fix.
|
gharchive/pull-request
| 2023-12-14T12:04:08 |
2025-04-01T06:39:57.013804
|
{
"authors": [
"k-wall",
"ozangunalp"
],
"repo": "ozangunalp/kafka-native",
"url": "https://github.com/ozangunalp/kafka-native/pull/127",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
607704841
|
The Ci_query_log don´t save operations logs in action UPDATE, DELETE, INSERT
Hi ozanmora
First, congratulations on creating this hook.
I installed this hook in my system and it worked perfectly, but UPDATE, INSERT and DELETE actions are not recorded in the log file.
Is it something related to a CodeIgniter config?
See u ozanmora
Hello @denisVitonis ,
I noticed the same problem. I have no idea exactly where the cause lies. But I am thinking of updating this code. When I solve the issue, I will note it here.
If you can find a solution to this issue, I would be glad if you could share it with me.
Thanks
Dear Ozan, I managed to solve it in another way, while still using your hook.
First and I created a helper.
with the function
////////////CODE IN helper FILE***//////////////////////////
function log_queries_2($sql) {
$filepath = APPPATH . 'logs/Query-log-' . date('Y-m-d') . '.php';
$handle = fopen($filepath, "a+");
fwrite($handle, $sql." \n Execution Time: ".date("Y-m-d H:i:s")."\n\n");
fclose($handle);
}
//////////////**************************///////////////////
After this, go to system/database/DB_driver.php, find the function named "query", and put the code below at the top, inside the function.
It may not be best practice, but it works perfectly.
////////////CODE IN DB_driver.php FILE***//////////////////////////
log_queries_2($sql);
//////////////**************************///////////////////
This solution is not the right approach, because if you want to be able to update CodeIgniter, you should never make changes under the system folder.
I have made some code fixes but this problem is still not resolved.
I'm still investigating this problem. If I can find a solution, I want to update it as soon as possible.
I solved this problem. Actually, this was not an error.
The code runs correctly and actually writes INSERT, UPDATE, DELETE queries to the log file.
However, this code only works on "post_controller" (Called immediately after your controller is fully executed). You can read the details of this from the link.
You are probably doing a redirect after INSERT, UPDATE, or DELETE operations, like me. The controller cannot complete if it performs a redirect, and no database queries are written to the log file because the hook could not be executed.
P.S.: My English is not very good. I apologize for this.
|
gharchive/issue
| 2020-04-27T17:00:42 |
2025-04-01T06:39:57.026678
|
{
"authors": [
"denisVitonis",
"ozanmora"
],
"repo": "ozanmora/ci_log_query",
"url": "https://github.com/ozanmora/ci_log_query/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1943257992
|
Add Polish Names and Surnames
The repository is missing Polish names and surnames. Follow the guide below to add them.
🚀 How to Contribute:
Fork the repository.
Add names and surnames for your country following the format provided below.
Submit a Pull Request with your changes.
Please include a reliable source for the names and surnames you're adding, preferably a public database or a reputable website.
📄 Format:
For names (src/names/pl.ts):
const polishNames = {
0: [ // Male names - Add the URL of the names source here
'Name1', 'Name2', 'Name3', ...
],
1: [ // Female names - Add the URL of the names source here
'Name1', 'Name2', 'Name3', ...
]
}
export default polishNames;
For surnames (src/surnames/pl.ts):
const polishSurnames = [ // Add the URL of the surnames source here
'Surname1', 'Surname2', 'Surname3', ...
];
export default polishSurnames;
📌 Important Notes:
Ensure that the names and surnames you add are common and not specific to a small group.
50 names for both males and females, and 50 surnames.
Organize the names with 10 entries per row.
Avoid adding names that might be offensive or inappropriate.
Ensure you're not violating any copyright or data privacy rules.
i would like to work on this issue
Sure, actually, there's no need to ask. As long as anyone follows the guidelines, I'm OK with merging any pull request.
|
gharchive/issue
| 2023-10-14T13:19:45 |
2025-04-01T06:39:57.031828
|
{
"authors": [
"Arkhein6",
"ozdemirburak"
],
"repo": "ozdemirburak/full-name-generator",
"url": "https://github.com/ozdemirburak/full-name-generator/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
714410855
|
Does fuzzy matching work?
Hey I am trying to get the fuzzy matching part working but it seems to not be giving me any results.
I have 123 456 7891 and 123 556 8891 in the Trie and I want the query of 123 656 to return me those two numbers but instead it returns nil.
Am I misunderstanding what fuzzy searching is here?
@ozeidan
Hello. So I am not sure if this implementation adheres to the normal definition of fuzzy matching but my definition is: characters of the search term have to occur in the matching string in the same order, but not necessarily in one piece. So searching for grch will match gosearch, but grhc won't.
I am not quite sure about the definition of fuzzy matching according to your examples. Should the query match those strings because all the characters in the query are also present in the strings?
@ozeidan Fuzzy matching to me would be that the input query is 'close enough' to the strings in the tree
the 'edit distance' isn't too far out
If the trie has 123 456 7891 then search query 123 465 8971 should return the string - because it is fuzzily the same
but 567 123 6280 should not - cause its really far apart
Does that make sense?
Ah I see what you mean. So you are talking about something like the Levenshtein distance?
So as of now the 'fuzzy' matching here works in a very different way (maybe we should call it something else then). I'll try to think about how we could implement the Levenshtein distance (or some similar heuristic). We'll have to find something that works well with the trie structure; as far as I remember, the performance of the search depends on being able to eliminate a lot of branches of the trie very early on.
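To make the two notions concrete, here is a small, self-contained Go sketch; it is purely illustrative, the function names are my own, and nothing here is part of fuzzy-patricia's API.
package main

import "fmt"

// isSubsequence captures the matching rule described above: every rune of
// query must appear in s in the same order, but not necessarily contiguously.
// "grch" matches "gosearch", "grhc" does not.
func isSubsequence(query, s string) bool {
	q := []rune(query)
	i := 0
	for _, r := range s {
		if i < len(q) && r == q[i] {
			i++
		}
	}
	return i == len(q)
}

// levenshtein is the edit distance proposed in this thread: the minimum
// number of insertions, deletions and substitutions turning a into b.
func levenshtein(a, b string) int {
	ra, rb := []rune(a), []rune(b)
	prev := make([]int, len(rb)+1)
	curr := make([]int, len(rb)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(ra); i++ {
		curr[0] = i
		for j := 1; j <= len(rb); j++ {
			cost := 1
			if ra[i-1] == rb[j-1] {
				cost = 0
			}
			curr[j] = minInt(prev[j]+1, minInt(curr[j-1]+1, prev[j-1]+cost))
		}
		prev, curr = curr, prev
	}
	return prev[len(rb)]
}

func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(isSubsequence("grch", "gosearch"), isSubsequence("grhc", "gosearch")) // true false
	fmt.Println(levenshtein("123 456 7891", "123 465 8971"))                          // a handful of edits apart
	fmt.Println(levenshtein("123 456 7891", "567 123 6280"))                          // much further apart
}
Combining an edit-distance bound with the trie usually means pruning a branch as soon as its minimum achievable distance already exceeds the allowed threshold, which preserves the early-elimination property mentioned above.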
|
gharchive/issue
| 2020-10-04T21:55:50 |
2025-04-01T06:39:57.039530
|
{
"authors": [
"asad-awadia",
"ozeidan"
],
"repo": "ozeidan/fuzzy-patricia",
"url": "https://github.com/ozeidan/fuzzy-patricia/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1689719268
|
Release/0.1.2
Release notes
Addition of time() and timeEnd() methods for measuring code execution times.
Added formatting options for text and json outputs.
Changed configurations for init() method in order to give more flexibility when choosing the output target and their formats.
Log levels can be updated at runtime to increase/decrease logging verbosity with the OZLOGGER_LEVEL environment variable.
Colored output must be enabled with the OZLOGGER_COLORS environment variable.
Resolves #3
|
gharchive/pull-request
| 2023-04-29T23:37:04 |
2025-04-01T06:39:57.042912
|
{
"authors": [
"leandroschabarum"
],
"repo": "ozmap/ozlogger",
"url": "https://github.com/ozmap/ozlogger/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2283891153
|
The window jumps when the Option (⌥) is pressed
Before Submitting Your Bug Report
[X] I have verified that there isn't already an issue reporting the same bug to prevent duplication.
[X] I have seen the FAQ.
Maccy Version (see 'About' window)
0.31.0
macOS Version
14.4.1
Maccy Settings
{
"KeyboardShortcuts_delete" = "{\\"carbonModifiers\\":2048,\\"carbonKeyCode\\":51}";
"KeyboardShortcuts_pin" = "{\\"carbonModifiers\\":2048,\\"carbonKeyCode\\":35}";
"KeyboardShortcuts_popup" = "{\\"carbonModifiers\\":2304,\\"carbonKeyCode\\":8}";
"LaunchAtLogin__hasMigrated" = 1;
"NSStatusItem Visible Item-0" = 0;
"NSWindow Frame SUStatusFrame" = "1080 961 400 134 0 0 2560 1415 ";
"NSWindow Frame SUUpdateAlert" = "970 762 620 398 0 0 2560 1415 ";
"NSWindow Frame com.sindresorhus.Preferences.FrameAutosaveName" = "1039 469 542 435 0 0 2560 1415 ";
"NSWindow Frame com.sindresorhus.Settings.FrameAutosaveName" = "993 667 442 322 0 0 2560 1415 ";
SUEnableAutomaticChecks = 0;
SUHasLaunchedBefore = 1;
SULastCheckTime = "2024-05-07 17:12:09 +0000";
SUSendProfileInfo = 0;
WebKitDefaultFontSize = 13;
WebKitJavaScriptEnabled = 0;
WebKitPluginsEnabled = 0;
WebKitStandardFont = "-apple-system-font";
WebKitUserStyleSheetEnabledPreferenceKey = 1;
WebKitUserStyleSheetLocationPreferenceKey = "/Applications/Maccy.app/Contents/Frameworks/Sparkle.framework/Resources/ReleaseNotesColorStyle.css";
avoidTakingFocus = 1;
clearOnQuit = 0;
enabledPasteboardTypes = (
"public.html",
"public.utf8-plain-text",
"public.rtf"
);
hideFooter = 1;
hideTitle = 1;
historySize = 999;
ignoredPasteboardTypes = (
"com.typeit4me.clipping",
"Pasteboard generator type",
"net.antelle.keeweb",
"de.petermaurer.TransientPasteboardType",
"com.agilebits.onepassword"
);
imageMaxHeight = 16;
maxMenuItemLength = 80;
maxMenuItems = 16;
migrations = {
"2020-04-25-allow-custom-ignored-types" = 1;
"2020-06-19-use-keyboardshortcuts" = 1;
"2020-09-01-ignore-keeweb" = 1;
"2021-02-20-allow-to-customize-supported-types" = 1;
"2021-06-28-add-title-to-history-item" = 1;
"2021-10-16-remove-dynamic-pasteboard-types" = 1;
"2022-08-01-rename-suppress-clear-alert" = 1;
"2022-11-14-add-html-rtf-to-supported-types" = 1;
"2023-01-22-add-regexp-search-mode" = 1;
};
pasteByDefault = 0;
playSounds = 0;
popupPosition = center;
previewDelay = 99000;
searchMode = fuzzy;
showInStatusBar = 0;
}
Description
See the attached video:
https://github.com/p0deje/Maccy/assets/88809/fdd4de5c-b9f2-49d3-8057-50893f13ba69
Refs #631
Steps to Reproduce
One way to reproduce it is to summon the popup window then press ⌥. Another way is to summon the popup window and then release the shortcut keys such that the ⌥ is released the last.
FWIW I see something similar, but it only seems to happen when Terminal.app is the frontmost app.
I tried to debug this by adding an observer for NSWindow.didMoveNotification / NSWindow.didResizeNotification in MenuHeaderView.viewDidMoveToWindow:
Looks like when you release the Option key, the window gets incorrectly resized, here is the backtrace:
Thread 1 Queue : com.apple.main-thread (serial)
#0 0x0000000100f293d8 in closure #2 in MenuHeaderView.viewDidMoveToWindow() at Maccy/Maccy/Menu/MenuHeader/MenuHeaderView.swift:62
#1 0x0000000100f2932c in thunk for @escaping @callee_guaranteed @Sendable (@in_guaranteed Notification) -> () ()
#2 0x000000018818f130 in __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__ ()
#3 0x00000001882233d8 in ___CFXRegistrationPost_block_invoke ()
#4 0x0000000188223320 in _CFXRegistrationPost ()
#5 0x000000018815d678 in _CFXNotificationPost ()
#6 0x000000018927a4e4 in -[NSNotificationCenter postNotificationName:object:userInfo:] ()
#7 0x000000018ba3c0c8 in -[NSWindow _setFrameCommon:display:fromServer:] ()
#8 0x000000018c0f8804 in -[NSPopupMenuWindow setFrame:display:animate:] ()
#9 0x000000018c0f8178 in -[NSPopupMenuWindow updateWindowFrameTo:withAnimation:] ()
#10 0x000000018bff2828 in -[NSContextMenuImpl _menuBackingViewDidChangeIntrinsicSizeWithAnimation:] ()
#11 0x000000018bff2454 in -[NSContextMenuImpl _commitWindowSizeChangesForWidth:height:animated:] ()
#12 0x000000018bff2294 in -[NSContextMenuImpl endGroupingUpdates] ()
#13 0x000000018c20fd60 in -[NSCocoaMenuImpl _updateModifierFlagsTo:groupingUpdates:] ()
#14 0x000000018c0381dc in -[NSMenuTrackingSession _modifierFlagsChanged:] ()
#15 0x000000018c037cb4 in -[NSMenuTrackingSession handleEvent:] ()
#16 0x000000018c037150 in -[NSMenuTrackingSession startRunningMenuEventLoop:] ()
#17 0x000000018c76a8e4 in -[NSContextMenuTrackingSession startMonitoringEventsInMode:] ()
#18 0x000000018bff0310 in +[NSContextMenuImpl presentPopup:fromView:withContext:animated:] ()
#19 0x000000018c218b40 in _NSPopUpMenu ()
#20 0x000000018c21d398 in -[NSCocoaMenuImpl _popUpMenuPositioningItem:atCocoaIndex:atLocation:inView:withPrivateFlags:appearance:] ()
#21 0x000000018c093a80 in -[NSMenu popUpMenuPositioningItem:atLocation:inView:appearance:] ()
#22 0x0000000100f3be40 in Menu.popUpMenu(at:ofType:) at Maccy/Maccy/Menu/Menu.swift:92
#23 0x0000000100ee334c in closure #1 in closure #1 in MenuController.popUp() at Maccy/Maccy/Menu/MenuController.swift:29
#24 0x0000000100ee3ca4 in MenuController.linkingMenuToStatusItem(_:) at Maccy/Maccy/Menu/MenuController.swift:79
#25 0x0000000100ee3190 in closure #1 in MenuController.popUp() at Maccy/Maccy/Menu/MenuController.swift:28
#26 0x0000000100ee3e68 in MenuController.withFocus(_:) at Maccy/Maccy/Menu/MenuController.swift:120
#27 0x0000000100ee3024 in MenuController.popUp() at Maccy/Maccy/Menu/MenuController.swift:23
#28 0x0000000100f52ab0 in Maccy.popUp() at Maccy/Maccy/Maccy.swift:99
#29 0x0000000100f66fc0 in implicit closure #2 in implicit closure #1 in AppDelegate.applicationDidFinishLaunching(_:) at Maccy/Maccy/AppDelegate.swift:31
#30 0x0000000100f7f91c in thunk for @escaping @callee_guaranteed () -> () ()
#31 0x0000000100f7f564 in thunk for @escaping @callee_guaranteed () -> (@out ()) ()
#32 0x0000000100f7e580 in static KeyboardShortcuts.handleOnKeyDown(_:) at KeyboardShortcuts/Sources/KeyboardShortcuts/KeyboardShortcuts.swift:82
#33 0x0000000100f7df40 in implicit closure #1 in static KeyboardShortcuts.register(_:) at KeyboardShortcuts/Sources/KeyboardShortcuts/KeyboardShortcuts.swift:26
#34 0x0000000100f78634 in static CarbonKeyboardShortcuts.handleEvent(_:) at KeyboardShortcuts/Sources/KeyboardShortcuts/CarbonKeyboardShortcuts.swift:142
#35 0x0000000100f782f4 in carbonKeyboardShortcutsEventHandler(eventHandlerCall:event:userData:) at KeyboardShortcuts/Sources/KeyboardShortcuts/CarbonKeyboardShortcuts.swift:4
#36 0x0000000100f79130 in @objc carbonKeyboardShortcutsEventHandler(eventHandlerCall:event:userData:) ()
#37 0x0000000192914444 in DispatchEventToHandlers(EventTargetRec*, OpaqueEventRef*, HandlerCallRec*) ()
#38 0x0000000192913844 in SendEventToEventTargetInternal(OpaqueEventRef*, OpaqueEventTargetRef*, HandlerCallRec*) ()
#39 0x0000000192929cd8 in SendEventToEventTarget ()
#40 0x000000018c1ec6c8 in -[NSApplication(NSEventRouting) sendEvent:] ()
#41 0x000000018be3a89c in -[NSApplication _handleEvent:] ()
#42 0x000000018b9eb0c0 in -[NSApplication run] ()
#43 0x000000018b9c22e0 in NSApplicationMain ()
#44 0x0000000100f6b034 in main at Maccy/Maccy/AppDelegate.swift:9
#45 0x0000000187d320e0 in start ()
Don't know if it's a bug or a misuse yet.
Removing https://github.com/p0deje/Maccy/blob/211f327ba5d1bcbbae34df976719626c664a2907/Maccy/Menu/Menu.swift#L91 fixes the issue, although the window appears not centered on the screen.
Tracked it to https://github.com/p0deje/Maccy/blob/211f327ba5d1bcbbae34df976719626c664a2907/Maccy/Menu/Menu.swift#L485
I was able to consistently reproduce inside a clean VM (using VirtualBuddy) with just Maccy installed.
I have this fixed in 2.0, but it's going to be a couple of weeks until I release the first alpha. In 2.0, Maccy uses NSPanel instead of NSMenu which makes window manipulation much easier than it currently is.
It appears that post-popup adjustment of window position is the true culprit here https://github.com/p0deje/Maccy/blob/211f327ba5d1bcbbae34df976719626c664a2907/Maccy/Menu/Menu.swift#L429
Wrapping this call with ensureInEventTrackingModeIfVisible didn't fix it for me.
|
gharchive/issue
| 2024-05-07T17:35:41 |
2025-04-01T06:39:57.054967
|
{
"authors": [
"Kentzo",
"afragen",
"p0deje"
],
"repo": "p0deje/Maccy",
"url": "https://github.com/p0deje/Maccy/issues/777",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
403551709
|
jsonschema 3.0+ support
Hi, jsonschema has been updated to 3.0+, which may include some breaking changes:
.eggs/openapi_spec_validator-0.2.4-py3.6.egg/openapi_spec_validator/__init__.py:7: in <module>
from openapi_spec_validator.factories import JSONSpecValidatorFactory
.eggs/openapi_spec_validator-0.2.4-py3.6.egg/openapi_spec_validator/factories.py:5: in <module>
from openapi_spec_validator.generators import (
.eggs/openapi_spec_validator-0.2.4-py3.6.egg/openapi_spec_validator/generators.py:12: in <module>
class SpecValidatorsGeneratorFactory:
.eggs/openapi_spec_validator-0.2.4-py3.6.egg/openapi_spec_validator/generators.py:19: in SpecValidatorsGeneratorFactory
'properties': _validators.properties_draft4,
E AttributeError: module 'jsonschema._validators' has no attribute 'properties_draft4'
It was fixed with #590.2.5 and version 0.2.5
I have the same problem on Python 3.6.7
openapi-spec-validator==0.2.6
jsonschema==3.0.1
MacBook-Pro-ilya:projects ilya$ pip -V
pip 19.0.3 from /Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages/pip (python 3.6)
MacBook-Pro-ilya:projects ilya$ pip show openapi_spec_validator
Name: openapi-spec-validator
Version: 0.2.6
Summary: UNKNOWN
Home-page: https://github.com/p1c2u/openapi-spec-validator
Author: Artur Maciag
Author-email: maciag.artur@gmail.com
License: Apache License, Version 2.0
Location: /Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages
Requires: pathlib, PyYAML, jsonschema, six
Required-by:
MacBook-Pro-ilya:projects ilya$ pip show jsonschema
Name: jsonschema
Version: 3.0.1
Summary: An implementation of JSON Schema validation for Python
Home-page: https://github.com/Julian/jsonschema
Author: Julian Berman
Author-email: Julian@GrayVines.com
License: UNKNOWN
Location: /Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages
Requires: setuptools, pyrsistent, six, attrs
Required-by: openapi-spec-validator, jsonmerge
MacBook-Pro-ilya:projects ilya$ /Users/ilya/.pyenv/versions/3.6.7/bin/python
Python 3.6.7 (default, Mar 13 2019, 14:00:09)
[GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.10.44.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from openapi_spec_validator import validate_v3_spec
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages/openapi_spec_validator/__init__.py", line 7, in <module>
from openapi_spec_validator.factories import JSONSpecValidatorFactory
File "/Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages/openapi_spec_validator/factories.py", line 5, in <module>
from openapi_spec_validator.generators import (
File "/Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages/openapi_spec_validator/generators.py", line 12, in <module>
class SpecValidatorsGeneratorFactory:
File "/Users/ilya/.pyenv/versions/3.6.7/lib/python3.6/site-packages/openapi_spec_validator/generators.py", line 19, in SpecValidatorsGeneratorFactory
'properties': _validators.properties_draft4,
AttributeError: module 'jsonschema._validators' has no attribute 'properties_draft4'
>>>
@strongbugman , do you still have that problem?
@timoilya
openapi-spec-validator 0.2.6 has requirement jsonschema<3, but you'll have jsonschema 3.0.1 which is incompatible.
@strongbugman , this ussue is closed, but jsonschema >3 is not supported in version 0.2.5
https://github.com/p1c2u/openapi-spec-validator/issues/54#issuecomment-467098215
I am having the same issue - openapi-spec-validator 0.2.6 and jsonschema 3.0.1 delivering the following error:
AttributeError: module 'jsonschema._validators' has no attribute 'properties_draft4'
Does @strongbugman 's comment mean I should step the spec validator back to a version lower than 0.2.5?
Yes, or you can create a PR to fix the compatibility problem
@strongbugman, please review PR https://github.com/p1c2u/openapi-spec-validator/pull/72 and approve it if you can
|
gharchive/issue
| 2019-01-27T13:37:03 |
2025-04-01T06:39:57.066149
|
{
"authors": [
"BenTaub",
"p1c2u",
"strongbugman",
"timoilya"
],
"repo": "p1c2u/openapi-spec-validator",
"url": "https://github.com/p1c2u/openapi-spec-validator/issues/54",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1473899641
|
bind context for customMethod
@RegisterInIs({
customMethod: 'customNameOfMethod'
})
class Person {
public name: string = 'default';
methodOne() {
console.log(this.name);
return new String();
}
public static customNameOfMethod(argument: unknown): argument is Person {
console.log(this); // empty object
return this.name === undefined; // return true
// return this.name === 'Ivan'; // return false and is sad
}
}
const person = new Person();
person.name = 'Ivan';
console.log(is.String(person.methodOne()));
is.Person.bind({custom: 1}); // Bind or take context from argument.
console.log([person].some(is.Person));
In 3.0.1 :)
|
gharchive/issue
| 2022-12-03T10:34:32 |
2025-04-01T06:39:57.076611
|
{
"authors": [
"Karbashevskyi"
],
"repo": "p4ck493/ts-is",
"url": "https://github.com/p4ck493/ts-is/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
172343982
|
This module has gone Glacial - on the back-burner
This module was mostly built using the pre 2016 ufo build-tool, which is no longer available.
It's now reliant on rakudo 2016.xx precompilation, which isn't yet fast to load or run. It now takes quite a few minutes to run the test-suite, with most of the time spent in precompilation and/or loading.
At this stage, I'm only regression-testing this module occasionally. There seems to be a fair bit of scope for optimization, both in rakudo and within this module. I may look at this again towards the end of 2016 or early in 2017.
Running much better. Typically taking ~90 sec to run test suite on latest Rakudo 2016.11.
Picking it up again :-)
|
gharchive/issue
| 2016-08-21T22:01:36 |
2025-04-01T06:39:57.104626
|
{
"authors": [
"dwarring"
],
"repo": "p6-pdf/perl6-PDF",
"url": "https://github.com/p6-pdf/perl6-PDF/issues/8",
"license": "Artistic-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1599853507
|
Positional bool
Fixes #171
@ysndr, can you take a look if this does what you want? If it is - I'll make a new release.
Todo:
[ ] Add anywhere top level annotation
[ ] Add box top level annotation
[ ] Add catch for anywhere and use it in failing test cases
[ ] update documentation
seems so, found a different issue though:
Remember that I'm trying to parse --setBool <key> <value>?
#[derive(Debug, Clone, Bpaf)]
#[bpaf(adjacent)]
#[allow(unused)]
pub struct ConfigSetBool {
/// Set <key> to <bool>
#[bpaf(long("setBool"))]
set_bool: (),
/// Configuration key
#[bpaf(positional("key"))]
key: String,
/// Configuration Value (bool)
#[bpaf(positional("bool"))] // << this seems to work now, hurray :)
value: bool,
}
I use this ^ contraption for that.
If I run xyz config --setBool key notabool I'm greeted with
ERROR: --setBool is not expected in this context
configure user parameters
Which, yes, tells me somewhat that it can't parse --setBool, but the way it does so doesn't really help me do it right. What I would expect is for it to say which (positional) argument failed to parse.
This also seems to happen when value: u32, but I'm not sure if that might be a regression.
adjacent makes it so blocks must stick together, you also need anywhere
pub struct ConfigSetBool {
/// Set <key> to <bool>
#[bpaf(long("setBool"))]
set_bool: (),
/// Configuration key
#[bpaf(positional("key"))]
key: String,
/// Configuration Value (bool)
#[bpaf(positional("bool"))] // << this seems to work now, hurray :)
value: bool,
}
fn try_this() -> impl Parser<ConfigSetBool> {
config_set_bool().anywhere()
}
If this works - I'll add anywhere to a derive macro
No, that doesn't change anything as far as I can tell.
Unrelated... wouldn't it be great if I could write
fn try_this<T: ToParser>() -> impl Parser<T> {
T::to_parser().anywhere()
}
instead of the same function three times ;)
Hmm... It sort of works for me
use bpaf::*;
#[derive(Debug, Clone, Bpaf)]
#[bpaf(adjacent)]
#[allow(unused)]
pub struct ConfigSetBool {
/// Set <key> to <bool>
#[bpaf(long("setBool"))]
set_bool: (),
/// Configuration key
#[bpaf(positional("key"))]
key: String,
/// Configuration Value (bool)
#[bpaf(positional("bool"))] // << this seems to work now, hurray :)
value: bool,
}
fn main() {
let x = config_set_bool().anywhere().many().to_options().run();
todo!("{:?}", x);
}
elakelaiset% cargo run --release --example set_bool -- --setBool banana false --setBool durian true
Finished release [optimized] target(s) in 0.02s
Running `target/release/examples/set_bool --setBool banana false --setBool durian true`
thread 'main' panicked at 'not yet implemented: [ConfigSetBool { set_bool: (), key: "banana", value: false }, ConfigSetBool { set_bool: (), key: "durian", value: true }]', examples/set_bool.rs:21:5
try --setBool banana 123
Hmm... I see, I would call this a bug in anywhere, it should ignore missing arguments but should retain parse errors. Will fix, might take a bit.
instead of the same function three times ;)
Without functional dependencies or type families it seems hard to implement the trait constraint ToParser... I blame Rust :)
I blame Rust :)
fair enough :D
Though I'd be perfectly happy if deriving Bpaf for something would implement a trait for them that just exposes their derived parser as a Box<dyn Parser>..
I've been doing that manually so far, but having access to that directly would be handy in such cases...
try --setBool banana 123, parsing correct values works for me too
Pushed something. Seems to work for me as expected with invalid values. I'll have to add some tests and possibly deal with more corner cases so release might take a bit.
Though I'd be perfectly happy if deriving Bpaf for something would implement a trait for them that just exposes their derived parser as a Box..
Hmm... Currently you can get a boxed parser with construct!(parser), I can also expose it as method and add the method to top level bpaf annotation...
#[derive(Debug, Clone, Bpaf)]
#[bpaf(adjacent, anywhere, box)]
struct Foo {
...
}
in this case will give you fn foo() -> Box<dyn Parse>...
Any feedback on the article?
Back in 40 minutes.
Well, I kind of just want a generic way to get to the derived parser; what I currently use is a macro like this:
macro_rules! parseable {
($type:ty, $parser:ident) => {
impl crate::commands::package::Parseable for $type {
fn parse() -> bpaf::parsers::ParseBox<Self> {
let p = $parser();
bpaf::construct!(p)
}
}
};
}
that I just run manually for every type where I want that capability.
It gets me what I want, but if the Bpaf derive macro could do that automatically, it would just be more convenient.
Any feedback on the article?
reading now
Any feedback on the article?
shall I review there, or do you want to discuss outside?
shall I review there, or do you want to discuss outside?
Up to you, I'm available at discord, whatsapp, google chat and signal. Probably google chat is going to be the easiest if you use it. manpacket@gmail.com
google chat
Oh they try it again? :D
I'll reach out tomorrow, I guess, or commit the review comments first; it's not quite the middle of the day for me, quite the opposite.
Though it is interesting to see this project unravelled a bit 👍🏼
Though I'd be perfectly happy if deriving Bpaf for something would implement a trait for them that just exposes their derived parser as a Box..
Hmm... Currently you can get a boxed parser with construct!(parser), I can also expose it as method and add the method to top level bpaf annotation...
#[derive(Debug, Clone, Bpaf)]
#[bpaf(adjacent, anywhere, box)]
struct Foo {
...
}
in this case will give you fn foo() -> Box<dyn Parse>...
Oh, I don't know if we're talking about different things then...
I mean something like <Foo as AsParseBox>::parse_box() -> Box<dyn Parser>
related to https://github.com/pacak/bpaf/pull/170, contains breaking changes in behavior
Sorry, this will be a bit of a brain dump, I figured after writing it up.
tl;dr
#[positional]
filed: bool
seems to work, error messages now talk about the value part not something else.
Nice progress!
Errors are something else, give it a read, I'll make it a seperate issue not to block this one.
So with
#[derive(Bpaf, Debug)]
struct Xyz {
#[bpaf(long, switch)]
bool_flag: bool,
#[bpaf(long, argument)]
bool_opt: bool,
#[bpaf(positional)]
bool_arg: bool,
}
these work as expected:
$ cargo run -- --bool-opt true true --bool-flag
$ cargo run -- --bool-opt true --bool-flag true
$ cargo run -- --bool-opt true true
these fail as expected:
$ cargo run -- --bool-opt true nottrue
Couldn't parse "nottrue": provided string was not `true` or `false`
^ ^ ^ ^ ^ ^
' ' '----'----'-----'
'-------'- why are these not the same? -'
with
#[derive(Debug, Clone, Bpaf)]
#[bpaf(adjacent)]
#[allow(unused)]
pub struct ConfigSetBool {
/// Set <key> to <bool>
#[bpaf(long("setBool"))]
set_bool: (),
/// Configuration key
#[bpaf(positional("key"))]
key: String,
/// Configuration Value (bool)
#[bpaf(positional("bool"))] // << this seems to work now, hurray :)
value: bool,
}
fails correctly
$ cargo run -- --setBool key tru
Couldn't parse "tru": provided string was not `true` or `false`
boxed seems to work as far as I can tell, even though it might not be what I proposed, heh
UX discussion
I think this had better go into a separate issue, but I put it here for context
$ cargo run -- --bool-opt tru tru
Couldn't parse "tru": provided string was not `true` or `false`
| L use positive sentence instead
\ which tru (see below)?
$ cargo run -- --bool-opt true true --bool-fla
No such flag: `--bool-fla`, did you mean `--bool-flag`?
L differnet
$ cargo run -- true --bool-flag --bool-flag --bool-opt true true
--bool-flag is not expected in this context
|
\ another one (this time without ticks at all :p)
The error message should, if possible, show which argument went wrong.
$ cargo run -- --bool-opt tru true
Invalid argument:
got: --bool-opt tru
expected: --bool-opt (true | false)
Type '<command> <subcommand> --help' to see all arguments
$ cargo run -- --bool-opt true true --bool-fla
No such flag:
got: --bool-fla
expected: [ --bool-flag ] < Only if --bool-flag is not already specified
Type '<command> <subcommand> --help' to see all arguments
cargo run -- true --bool-flag --bool-flag --bool-opt true true
No such flag:
got: --bool-flag
expected: (none)
Type '<command> <subcommand> --help' to see all arguments
or even better:
$ cargo run -- true --bool-flag --bool-flag --bool-opt true true
Duplicate flag:
got: --bool-flag --bool-flag
expected: [ --bool-flag ]
Type '<command> <subcommand> --help' to see all arguments
Though using positionals for multi-value options is still a bit awkward and looks like a hack in the help:
Usage: [--bool-flag] --bool-opt ARG <ARG> --setBool <key> <bool>
Available positional items:
<key> Configuration key <---,
<bool> Configuration Value (bool) <---,
,
,
Available options: ,
--bool-flag ,
--bool-opt <ARG> <--
FWIW pushed a branch that renders proper help for multivariable arguments - multiarg
Positional bool is now supported,
For purely positional items something like this should work in the recent master, adjacent is not needed. I also made some changes to make error messages more user friendly
#[derive(Debug, Clone, Bpaf)]
#[bpaf(anywhere, box)]
struct Foo {
/// Set <key> to <bool>
#[bpaf(long("setBool"))]
set_bool: (),
/// Configuration key
#[bpaf(positional("key"))]
key: String,
/// Configuration Value (bool)
#[bpaf(positional("bool"))] // << this seems to work now, hurray :)
value: bool,
}
Moved the UX discussion parts out into a separate issue. Going to merge this.
|
gharchive/pull-request
| 2023-02-25T21:55:26 |
2025-04-01T06:39:57.146893
|
{
"authors": [
"pacak",
"ysndr"
],
"repo": "pacak/bpaf",
"url": "https://github.com/pacak/bpaf/pull/172",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1916699724
|
Qualifier values do not have first character converted to lowercase
FromString panics for some inputs. This change fixes the issue and adds a test to ensure it works.
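For context, here is a minimal sketch of how one might exercise FromString by hand; the example purl and the printed field are chosen for illustration only and are not the test added in this PR.
package main

import (
	"fmt"

	"github.com/package-url/packageurl-go"
)

func main() {
	// Hypothetical purl with mixed-case qualifier values, used only for
	// illustration; it is not necessarily one of the inputs that made
	// FromString panic.
	input := "pkg:deb/debian/curl@7.50.3-1?arch=I386&distro=Jessie-Backports"

	instance, err := packageurl.FromString(input)
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Printf("%+v\n", instance.Qualifiers)
}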
I think we can close this PR here, because the commits are already in: https://github.com/package-url/packageurl-go/pull/65
|
gharchive/pull-request
| 2023-09-28T04:57:42 |
2025-04-01T06:39:57.177751
|
{
"authors": [
"shibumi",
"wetterjames4"
],
"repo": "package-url/packageurl-go",
"url": "https://github.com/package-url/packageurl-go/pull/64",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1239729415
|
DistGitMRHandler: fetch tags from upstream source-git
We clone the source-git fork by default, and it does not necessarily have the upstream
tags that are needed in the update-dist-git process.
This commit fetches tags from upstream (the repo against the MR is
opened) after the repo is cloned (initialization of LocalProject).
Fixes https://github.com/packit/hardly/issues/61
RELEASE NOTES BEGIN
When a dist-git MR is being created, hardly now fetches tags from the upstream source-git repo as they may not be present in contributor's fork.
RELEASE NOTES END
RELEASE NOTES BEGIN
Do we already want to include hardly changes in blog posts? We haven't so far.
We should start doing that at some point. Especially when we get people who will use hardly and rely on it :)
|
gharchive/pull-request
| 2022-05-18T10:09:04 |
2025-04-01T06:39:57.196074
|
{
"authors": [
"TomasTomecek"
],
"repo": "packit/hardly",
"url": "https://github.com/packit/hardly/pull/67",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
207675822
|
Use existing __toString methods for argument and Matcher representation
If an error prints the arguments of an expectation, expected or actual, it represents them via the Mockery::formatArgument() method. For objects, this will return a simple object(className). This is unnecessarily obscure when the object has a __toString() method. The Matcher interface, for example, requires one.
@ccprog thanks for reporting, see #698 for a partial fix.
|
gharchive/issue
| 2017-02-15T00:22:38 |
2025-04-01T06:39:57.226467
|
{
"authors": [
"ccprog",
"davedevelopment"
],
"repo": "padraic/mockery",
"url": "https://github.com/padraic/mockery/issues/697",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1474036484
|
fix/issue-29-remoção-do-hello-world-auth/app.py
Removal of the hello world code in the auth/app.py file.
This was already fixed in another pull request.
|
gharchive/pull-request
| 2022-12-03T13:47:13 |
2025-04-01T06:39:57.227740
|
{
"authors": [
"rafaelrubira2"
],
"repo": "paft-inc/paft-microservices",
"url": "https://github.com/paft-inc/paft-microservices/pull/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
38607546
|
Form builder
Is there a form builder that can be used in extensions for creating forms? Something like https://github.com/illuminate/html
We released the Pagekit Beta today. I close this issue because the code base completely changed. Please open a new issue if it still exists.
|
gharchive/issue
| 2014-07-24T08:31:46 |
2025-04-01T06:39:57.240373
|
{
"authors": [
"bweston92",
"saschadube"
],
"repo": "pagekit/pagekit",
"url": "https://github.com/pagekit/pagekit/issues/111",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1318522578
|
🛑 Every Nation Campus Gent is down
In d83e51c, Every Nation Campus Gent (http://everynationcampusgent.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Every Nation Campus Gent is back up in 5200516.
|
gharchive/issue
| 2022-07-26T16:58:33 |
2025-04-01T06:39:57.250199
|
{
"authors": [
"carakas"
],
"repo": "pageon/uptime",
"url": "https://github.com/pageon/uptime/issues/196",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
218766426
|
ngx_pagespeed not working with reverse proxy
I can't get ngx_pagespeed to work when reverse proxying node.js.
It works when just serving files normally, so is there something I'm missing here?
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
pagespeed on;
pagespeed FileCachePath /var/cache/ngx_pagespeed/;
pagespeed RewriteLevel PassThrough;
pagespeed EnableCachePurge on;
pagespeed PurgeMethod PURGE;
pagespeed EnableFilters prioritize_critical_css;
location /assets/ {
expires 10d;
alias /var/www/demo/assets/;
}
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" {
add_header "" "";
}
location ~ "^/pagespeed_static/" { }
location ~ "^/ngx_pagespeed_beacon$" { }
}
Could you share how you tested if ngx_pagespeed is working?
Ensure you are not performing any gzip or other compression from within Node.js.
|
gharchive/issue
| 2017-04-02T14:02:02 |
2025-04-01T06:39:57.252207
|
{
"authors": [
"mhaagens",
"mikinho",
"oschaaf"
],
"repo": "pagespeed/ngx_pagespeed",
"url": "https://github.com/pagespeed/ngx_pagespeed/issues/1401",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
926251738
|
Migrated to v2 emote cdn endpoint
Pull request checklist:
[x] CHANGELOG.md was updated, if applicable
[x] Documentation in docs/ or install-docs/ was updated, if applicable
Using v2 paths is required for animated emotes, as can be seen in the following example.
Example of an animated emote:
https://static-cdn.jtvnw.net/emoticons/v1/emotesv2_e0dd54510bc94631899bf64b097680a2/3.0
https://static-cdn.jtvnw.net/emoticons/v2/emotesv2_e0dd54510bc94631899bf64b097680a2/default/dark/3.0
https://static-cdn.jtvnw.net/emoticons/v2/emotesv2_e0dd54510bc94631899bf64b097680a2/static/dark/3.0
default can be replaced with static to always get a non-animated variant.
I also migrated the emotes that were already hardcoded in the source, in case the v1 endpoint gets shut down later.
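As a small illustration of the v2 URL scheme described above (sketched in Go purely for convenience; pajbot itself is a Python project), the v2 template adds a format and theme segment compared to v1:
package main

import "fmt"

// emoteURL builds a Twitch CDN emote URL following the v2 scheme shown in
// the example URLs above. format is "default" or "static", theme is e.g.
// "dark", and scale is e.g. "1.0" or "3.0". (Segment names are taken from
// the example URLs in this PR, not from an official API reference.)
func emoteURL(emoteID, format, theme, scale string) string {
	return fmt.Sprintf(
		"https://static-cdn.jtvnw.net/emoticons/v2/%s/%s/%s/%s",
		emoteID, format, theme, scale,
	)
}

func main() {
	fmt.Println(emoteURL("emotesv2_e0dd54510bc94631899bf64b097680a2", "default", "dark", "3.0"))
	fmt.Println(emoteURL("emotesv2_e0dd54510bc94631899bf64b097680a2", "static", "dark", "3.0"))
}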
|
gharchive/pull-request
| 2021-06-21T14:02:53 |
2025-04-01T06:39:57.272780
|
{
"authors": [
"alazymeme",
"zneix"
],
"repo": "pajbot/pajbot",
"url": "https://github.com/pajbot/pajbot/pull/1297",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
893602285
|
Release Paketo Go buildpack
Release in the week of May 17th if relevant changes or dependency updates are merged.
What steps did you take to close this issue? What resources did you use? How long did you spend on this task this week? Answer in a comment.
Released v0.7.0
In an effort to de-clutter the project board, we are moving away from recurring issues such as this one. This task will instead be added to a checklist of tasks to be completed on a weekly basis.
|
gharchive/issue
| 2021-05-17T18:46:54 |
2025-04-01T06:39:57.274524
|
{
"authors": [
"fg-j",
"thitch97"
],
"repo": "paketo-buildpacks/go",
"url": "https://github.com/paketo-buildpacks/go/issues/427",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
823222936
|
Nominate Emily Johnson (@emmjohnson) as Nodejs Contributor
In accordance with the Paketo Buildpacks Governance document, I am nominating Emily Johnson (@emmjohnson) as a contributor to the Nodejs language family. Emily is a regular contributor to the buildpack family (https://github.com/paketo-buildpacks/nodejs/pull/321, https://github.com/paketo-buildpacks/node-engine/pull/240 etc.).
+1
With supermajority vote of maintainers in the affirmative, the nomination is considered to be approved.
Welcome to the team of Nodejs Contributors @emmjohnson!
@paketo-buildpacks/steering-committee Could you please add Emily to the contributors team on github?
@emmjohnson Congrats!
@arjun024 Done!
|
gharchive/issue
| 2021-03-05T16:08:01 |
2025-04-01T06:39:57.277492
|
{
"authors": [
"arjun024",
"ryanmoran",
"thitch97"
],
"repo": "paketo-buildpacks/nodejs",
"url": "https://github.com/paketo-buildpacks/nodejs/issues/344",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
684126945
|
Buildpack support for Drupal 8
I'm attempting to build Drupal 8 using PHP buildpacks. Here's a breakdown of the steps I'm doing.
Scaffold Drupal 8.
composer create-project "drupal/recommended-project:^8" drupal
Add the following buildpack.yml.
---
php:
version: 7.4.*
webserver: nginx
webdirectory: web
Build the container image.
pack build -b gcr.io/paketo-buildpacks/php drupal-8 --builder paketobuildpacks/builder:full
Run the new image.
docker run --interactive --tty --env PORT=8080 --publish 8080:8080 drupal-8
The trouble is, the build process creates a symlink for the vendor directory, and running composer install after that updates the autoload.php thus:
<?php
/**
* @file
* Includes the autoloader created by Composer.
*
* This file was generated by drupal-scaffold.
*.
* @see composer.json
* @see index.php
* @see core/install.php
* @see core/rebuild.php
* @see core/modules/statistics/statistics.php
*/
return require __DIR__ . '//layers/paketo-buildpacks_php-composer/php-composer-packages/vendor/autoload.php';
Which breaks the autoload sequence.
When I edit it back to what it was,
<?php
/**
* @file
* Includes the autoloader created by Composer.
*
* This file was generated by drupal-scaffold.
*.
* @see composer.json
* @see index.php
* @see core/install.php
* @see core/rebuild.php
* @see core/modules/statistics/statistics.php
*/
return require __DIR__ . '/../vendor/autoload.php';
It works fine. I am not sure why we create symlinks and then run composer install again.
Copying the vendor directory instead of symlinking it would help, although there might be some rationale behind symlinking it which I'm not aware of.
Running composer install after symlinking updates the autoload.php files to reflect the new location of vendor directory.
Happy to triage any approaches/fixes and contribute back to the buildpack, and thanks for the awesome work.
@paketo-buildpacks/php-maintainers This has been open for a bit. Any update on this? Does the workaround described in the replies to #366 also apply to this use case?
|
gharchive/issue
| 2020-08-23T06:35:40 |
2025-04-01T06:39:57.282413
|
{
"authors": [
"badri",
"fg-j"
],
"repo": "paketo-buildpacks/php",
"url": "https://github.com/paketo-buildpacks/php/issues/253",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1291835790
|
Remove dependencies
Demonstration implementation of paketo-buildpacks/rfcs#214
SBOM now generated from installation directory
Will no longer reuse layers
Installed version now retrieved using 'poetry --version'
Only exact version matching is supported
Checklist
[x] I have viewed, signed, and submitted the Contributor License Agreement.
[x] I have linked issue(s) that this PR should close using keywords or the Github UI (See docs)
[x] I have added an integration test, if necessary.
[x] I have reviewed the styleguide for guidance on my code quality.
[x] I'm happy with the commit history on this PR (I have rebased/squashed as needed).
Closing in favor of #75
|
gharchive/pull-request
| 2022-07-01T21:14:06 |
2025-04-01T06:39:57.286015
|
{
"authors": [
"joshuatcasey"
],
"repo": "paketo-buildpacks/poetry",
"url": "https://github.com/paketo-buildpacks/poetry/pull/63",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
853692161
|
Set RAILS_LOG_TO_STDOUT to true/1
What happened?
What were you attempting to do?
Build a rails app that sends logs to stdout
What did you expect to happen?
I expected RAILS_LOG_TO_STDOUT to be enabled so that our environment config could pick it up and use it to set the appropriate logger to direct logs to stdout for the container
What was the actual behavior? Please provide log output, if possible.
It is not enabled, the logs were directed to a file
Build Configuration
What platform (pack, kpack, tekton buildpacks plugin, etc.) are you
using? Please include a version.
What buildpacks are you using? Please include versions.
What builder are you using? If custom, can you provide the output from pack inspect-builder <builder>?
Can you provide a sample app or relevant configuration (buildpack.yml,
nginx.conf, etc.)?
Checklist
[ ] I have included log output.
[ ] The log output includes an error message.
[ ] I have included steps for reproduction.
@genevieve @paketo-buildpacks/ruby-maintainers This has been open for a bit. Is this still a need? Is there a workaround available?
It's not a bug. It's a feature request. Users routinely need to set this environment variable so that they can have their logs streamed to stdout in the container rather than the default location of a file inside the container. The workaround is just to set that environment variable when starting the container, but they shouldn't need to, as having the buildpack set it by default would be pretty obviously better.
Makes sense.
Until the buildpack sets this environment variable, users can get this behaviour today by setting BPE_RAILS_LOG_TO_STDOUT=true in the build environment. This would ensure that RAILS_LOG_TO_STDOUT=true is set automatically when the container starts (see docs).
@robdimsdale I think this logic would go into the rails-assets buildpack. It's the most closely related, if not completely aligned with the intent of this feature.
We'll want to set this variable as a default using https://pkg.go.dev/github.com/paketo-buildpacks/packit/v2#Environment.Default. This would allow users to still override this value if needed.
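A rough sketch of what that could look like with packit's Environment type; the surrounding details (which layer the environment belongs to, and so on) are assumptions for illustration, not the final implementation:
package main

import (
	"fmt"

	"github.com/paketo-buildpacks/packit/v2"
)

func main() {
	// Hypothetical launch environment for the layer this buildpack
	// contributes; in a real build this would be the layer's LaunchEnv.
	launchEnv := packit.Environment{}

	// Default (as opposed to Override) only applies when the user has not
	// set RAILS_LOG_TO_STDOUT themselves, e.g. via `docker run -e ...`.
	launchEnv.Default("RAILS_LOG_TO_STDOUT", "true")

	fmt.Println(launchEnv)
}
Because it is only a default, the BPE_RAILS_LOG_TO_STDOUT workaround above would no longer be needed, while operators could still override the value at launch.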
I'm probably not going to get to this just yet, so anyone else who is interested is welcome to pick it up!
|
gharchive/issue
| 2021-04-08T17:28:12 |
2025-04-01T06:39:57.294090
|
{
"authors": [
"fg-j",
"genevieve",
"robdimsdale",
"ryanmoran"
],
"repo": "paketo-buildpacks/ruby",
"url": "https://github.com/paketo-buildpacks/ruby/issues/567",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
345607150
|
Eval a line of code without resetting the interpreter?
So I was writing some bindings to control Arduino functions with this, and thought it would be really cool if you could do something like suspending a running interpreter, evaluating a line of code, and then returning to normal execution.
This would let you interact with a running program during development, change variables, etc, and would be great for using it like the early home computers that you used through a BASIC prompt.
Is this possible with the current API?
The interpreter is not designed to work as a REPL, but it's possible to load and run without resetting the interpreter, e.g.:
int main(int argc, char* argv[]) {
struct mb_interpreter_t* bas = 0;
mb_init();
mb_open(&bas);
mb_load_string(
bas,
"n = n + 1\n"
"print \"entry \", n;\n",
false
);
mb_load_string(bas, "a = 22", false);
mb_run(bas, false);
mb_load_string(bas, "b = 7", false);
mb_run(bas, false);
mb_load_string(bas, "c = a / b", false);
mb_run(bas, false);
mb_load_string(bas, "print c;", false);
mb_run(bas, false);
mb_close(&bas);
mb_dispose();
return 0;
}
You would see that each mb_run is a top-down execution with previous values preserved in the variables, although this is not reentrant.
It's also possible to inspect variables with the mb_debug_get and mb_debug_set API, if this was the only interaction you were looking for.
Oh thanks! Those 2 APIs actually do cover 90% of what I'd like to do interactively.
I started working on a fork to allow multiple parsing contexts and stacks per interpreter, so you can load multiple "threads" (probably with a Python-style GIL) and do a traditional reentrant REPL, but that's a fairly big project.
Nice work! I appreciate your efforts, and I believe it would help others a lot.
Here's the dev roadmap:
Releasing current v1.2
Rewriting a new kernal as v2.0
So for the current branch, it's kinda feature-frozen. But it inspired me, I would consider adding REPL for v2.0.
I prefer to keep this open so others will find.
Awesome! I'll probably be following this project for a while, I'm using it to allow remote code updates on an open source IoT platform I'm doing, and I might try to port it to a handheld game console at some point.
It's a great language! Definitely one of the easiest to embed interpreters out there, and it doesn't use too much RAM on embedded systems.
Thanks! Looking forward to your sharing of your creations.
it's such a shame there is no REPL :(
|
gharchive/issue
| 2018-07-30T04:28:18 |
2025-04-01T06:39:57.302614
|
{
"authors": [
"EternityForest",
"atheros",
"paladin-t"
],
"repo": "paladin-t/my_basic",
"url": "https://github.com/paladin-t/my_basic/issues/22",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1621511519
|
Testing, please ignore. ajmshv
This is a bug bounty test. Please do not approve this! ajmshv
Thanks for your interest in palantir/eclipse-typescript, @letmeinkvar! Before we can accept your pull request, you need to sign our contributor license agreement - just visit https://cla.palantir.com/ and follow the instructions. Once you sign, I'll automatically update this pull request.
|
gharchive/pull-request
| 2023-03-13T13:39:59 |
2025-04-01T06:39:57.335397
|
{
"authors": [
"letmeinkvar",
"palantirtech"
],
"repo": "palantir/eclipse-typescript",
"url": "https://github.com/palantir/eclipse-typescript/pull/363",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|