| id (string, 4–10 chars) | text (string, 4–2.14M chars) | source (string, 2 classes) | created (timestamp, 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (timestamp, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict) |
|---|---|---|---|---|---|
408501978
|
Report ClientID for consumers
Store and report ClientID for consumers; this is useful in environments where IPs are not ideal for identifying consumers. The ClientID is an ideal choice as it's user-controlled, so it can be configured to be whatever the user finds most useful for correlation purposes.
Coverage increased (+0.0004%) to 74.614% when pulling 5ae52aea7990ccbeb449cce09c6f8ca10b52b0b2 on postmates:jpg/report-client-id into 429c6e8d4f58cfd9b6b76da035d98f12d3cf0c41 on linkedin:master.
Thanks!
|
gharchive/pull-request
| 2019-02-10T05:10:25 |
2025-04-01T06:39:25.838733
|
{
"authors": [
"bai",
"coveralls",
"josephglanville"
],
"repo": "linkedin/Burrow",
"url": "https://github.com/linkedin/Burrow/pull/491",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2370827203
|
Support nextMarker and nextContinuationToken
Support nextMarker for listObject and nextContinuationToken for listObjectV2
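For background, draining a marker/continuation-token style listing API generally follows one loop shape; this is a generic sketch (the `list_page` callable is hypothetical, standing in for the handlers named above, not Ambry's actual API):

```python
def list_all(list_page):
    """Collect every object from a paginated listing API.

    `list_page(token)` is a hypothetical callable standing in for
    listObject (nextMarker) or listObjectV2 (nextContinuationToken):
    it returns (items, next_token), where a falsy next_token means
    the listing is complete.
    """
    items, token = [], None
    while True:
        page, token = list_page(token)
        items.extend(page)
        if not token:
            return items
```

A truncated listing returns a token that the client echoes back on the next call; the loop terminates once the server omits it.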
Codecov Report
Attention: Patch coverage is 48.64865% with 19 lines in your changes missing coverage. Please review.
Project coverage is 18.65%. Comparing base (52ba813) to head (fe3ca37).
Report is 33 commits behind head on master.
Files
Patch %
Lines
...com/github/ambry/frontend/s3/S3MessagePayload.java
52.38%
10 Missing :warning:
...va/com/github/ambry/frontend/s3/S3ListHandler.java
43.75%
7 Missing and 2 partials :warning:
:exclamation: There is a different number of reports uploaded between BASE (52ba813) and HEAD (fe3ca37). Click for more details.
HEAD has 2 uploads less than BASE
| Flag | BASE (52ba813) | HEAD (fe3ca37) |
|------|------|------|
||3|1|
Additional details and impacted files
@@ Coverage Diff @@
## master #2807 +/- ##
=============================================
- Coverage 64.24% 18.65% -45.60%
+ Complexity 10398 2919 -7479
=============================================
Files 840 842 +2
Lines 71755 72314 +559
Branches 8611 8703 +92
=============================================
- Hits 46099 13489 -32610
- Misses 23004 57585 +34581
+ Partials 2652 1240 -1412
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
gharchive/pull-request
| 2024-06-24T18:14:17 |
2025-04-01T06:39:25.846956
|
{
"authors": [
"SophieGuo410",
"codecov-commenter"
],
"repo": "linkedin/ambry",
"url": "https://github.com/linkedin/ambry/pull/2807",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
318216592
|
Cleanup deprecated code from PinotLLCRealtimeSegmentManager and ValidationManager
As part of https://github.com/linkedin/pinot/pull/2721 we refactored the PinotLLCRealtimeSegmentManager to not depend on the znode for stream partition assignment. A lot of methods were rewritten, and older ones deprecated. This PR attempts to clean up all the deprecated and unused methods.
Codecov Report
Merging #2762 into master will increase coverage by 11.48%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #2762 +/- ##
==========================================
+ Coverage 57.51% 69% +11.48%
==========================================
Files 876 876
Lines 42319 41993 -326
Branches 5754 5708 -46
==========================================
+ Hits 24339 28976 +4637
+ Misses 16288 11144 -5144
- Partials 1692 1873 +181
Impacted Files
Coverage Δ
.../core/realtime/PinotLLCRealtimeSegmentManager.java
55.6% <ø> (+49.79%)
:arrow_up:
...pinot/controller/validation/ValidationManager.java
84.39% <ø> (+48.32%)
:arrow_up:
...not/transport/scattergather/ScatterGatherImpl.java
55.69% <0%> (+0.63%)
:arrow_up:
...ore/realtime/impl/RealtimeSegmentStatsHistory.java
80.95% <0%> (+0.68%)
:arrow_up:
.../pinot/core/segment/index/SegmentMetadataImpl.java
81.56% <0%> (+0.7%)
:arrow_up:
...t/creator/impl/SegmentIndexCreationDriverImpl.java
88.43% <0%> (+0.74%)
:arrow_up:
...r/transform/function/ValueInTransformFunction.java
39.2% <0%> (+0.8%)
:arrow_up:
.../helix/core/realtime/SegmentCompletionManager.java
69.54% <0%> (+0.9%)
:arrow_up:
...din/pinot/core/realtime/stream/StreamMetadata.java
67.88% <0%> (+0.91%)
:arrow_up:
...e/io/writer/impl/MutableOffHeapByteArrayStore.java
86.59% <0%> (+1.03%)
:arrow_up:
... and 272 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2d22966...84e50a3. Read the comment docs.
|
gharchive/pull-request
| 2018-04-26T22:18:56 |
2025-04-01T06:39:25.862432
|
{
"authors": [
"codecov-io",
"npawar"
],
"repo": "linkedin/pinot",
"url": "https://github.com/linkedin/pinot/pull/2762",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
615954940
|
Set DNS-SD service name/type for the directory
A service type for the directory service is not yet registered at IANA, but our implementation is still worth improving.
Currently the type is set to: _linksmart-td._tcp
Other works related to this:
https://github.com/w3c/wot-discovery/tree/18820a3f31f191f3e3689158672decf6c906bbf6/prior-work/fujitsu
https://github.com/w3c/wot-discovery/issues/5
Proposal:
<instance-name>._directory._sub._wot._tcp where instance is configurable defaulting to linksmart. The instance name should be made unique in each environment.
The <Instance> portion of the Service Instance Name is a user-
friendly name consisting of arbitrary Net-Unicode text [RFC5198]. It
MUST NOT contain ASCII control characters (byte values 0x00-0x1F and
0x7F) [RFC20] but otherwise is allowed to contain any characters,
without restriction, including spaces, uppercase, lowercase,
punctuation -- including dots -- accented characters, non-Roman text,
and anything else that may be represented using Net-Unicode. For
discussion of why the <Instance> name should be a user-visible, user-
friendly name rather than an invisible machine-generated opaque
identifier, see Appendix C, "What You See Is What You Get".
The <Instance> portion of the name of a service being offered on the
network SHOULD be configurable by the user setting up the service, so
that he or she may give it an informative name. However, the device
or service SHOULD NOT require the user to configure a name before it
can be used. A sensible choice of default name can in many cases
allow the device or service to be accessed without any manual
configuration at all. The default name should be short and
descriptive, and SHOULD NOT include the device's Media Access Control
(MAC) address, serial number, or any similar incomprehensible
hexadecimal string in an attempt to make the name globally unique.
https://tools.ietf.org/html/rfc6763#section-4.1.1
When a DNS-SD service is advertised using Multicast DNS [RFC6762], if
there is already another service of the same type advertising with
the same name then automatic name conflict resolution will occur. As
described in the Multicast DNS specification [RFC6762], upon
detecting a conflict, the service should:
1. Automatically select a new name (typically by appending or
incrementing a digit at the end of the name),
2. Try advertising with the new name, and
3. Upon success, record the new name in persistent storage.
This renaming behavior is very important, because it is key to
providing user-friendly instance names in the out-of-the-box factory-
default configuration.
https://tools.ietf.org/html/rfc6763#appendix-D
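The three renaming steps quoted above can be sketched as follows — a minimal illustration of RFC 6763's suggested behavior (appending or incrementing a trailing digit), not the thing-directory implementation; persisting the new name is left out:

```python
import re


def resolve_conflict(name: str) -> str:
    """Pick a new instance name after an mDNS name conflict.

    Per RFC 6763 Appendix D, increment an existing trailing
    ' (N)' suffix, or append ' (2)' if there is none.
    """
    m = re.search(r" \((\d+)\)$", name)
    if m:
        return f"{name[:m.start()]} ({int(m.group(1)) + 1})"
    return f"{name} (2)"
```

The renamed instance would then be re-advertised, and only on success recorded in persistent storage.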
Service registration with subtype fails using several tested clients.
CLI registration:
macOS:
$ dns-sd -R "thing directory" _directory._sub._wot._tcp local. 8081
Registering Service thing directory._directory._sub._wot._tcp.local. port 8081
DNSService call failed -65540
Debian:
$ avahi-publish -s "thing directory" "_directory._sub._wot._tcp" 8081
Failed to add service: Invalid service type
Implemented with _wot._tcp type and _directory subtype. Instance name is configurable.
|
gharchive/issue
| 2020-05-11T15:01:52 |
2025-04-01T06:39:25.904660
|
{
"authors": [
"farshidtz"
],
"repo": "linksmart/thing-directory",
"url": "https://github.com/linksmart/thing-directory/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
494192031
|
[Do Not Merge] LKE Beta Endpoints
Adds LKE Beta endpoints:
/lke/clusters
/lke/clusters/{clusterId}
/lke/clusters/{clusterId}/pools
/lke/clusters/{clusterId}/pools/{poolId}
/lke/clusters/{clusterId}/kubeconfig
/lke/versions
/lke/versions/{version}
Note: This work was started by @asauber and @jfrederickson in the bits repo. This adds the latest updates to that work.
please add:
lke:read_only
lke:read_write
to the oauth schema section and to the front information section
A general note for beta - LKE is available in us-central and with Kubernetes version 1.16; these should be updated in the examples.
Additionally we want to add a link to the beta sign up page with the beta note.
|
gharchive/pull-request
| 2019-09-16T17:54:50 |
2025-04-01T06:39:26.040120
|
{
"authors": [
"hzoppetti",
"leslitagordita"
],
"repo": "linode/linode-api-docs",
"url": "https://github.com/linode/linode-api-docs/pull/126",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2236367499
|
upcoming: [M3-7972] - Invalidate PG queries on Linode create/delete
Description 📝
Now that the POST & DELETE linode/instances Alpha endpoints have been updated to work with placement groups, we need to invalidate the related PG if:
on POST (create linode) we assign a placement group
on DELETE (delete linode) we delete a linode assigned to a placement group
Changes 🔄
Invalidate PG queries on Linode create & delete mutations
Preview 📷
Create Linode (and assign to PG)
Delete Linode (and unassign from PG)
How to test 🧪
Prerequisites
Using alpha environment and having the "Placement Group" feature flag enabled, either:
use your account (needs the placement-group customer tag)
use the pg-user-1 (see creds in 1Password vault)
Have at least one Placement Group created
Verification steps
See the video above:
Create a linode and assign it to a PG: confirm the UI updates accordingly in the placement group section
Delete a linode that is assigned to a PG: confirm the UI updates accordingly in the placement group section
As an Author I have considered 🤔
Check all that apply
[ ] 👀 Doing a self review
[ ] ❔ Our contribution guidelines
[x] 🤏 Splitting feature into small PRs
[x] ➕ Adding a changeset
[ ] 🧪 Providing/Improving test coverage
[ ] 🔐 Removing all sensitive information from the code and PR description
[ ] 🚩 Using a feature flag to protect the release
[x] 👣 Providing comprehensive reproduction steps
[ ] 📑 Providing or updating our documentation
[ ] 🕛 Scheduling a pair reviewing session
[ ] 📱 Providing mobile support
[ ] ♿ Providing accessibility support
I was able to verify that the create and delete/unassign operations worked as expected with the changes. The linode count updated in the UI accordingly, and I did not observe any regressions.
|
gharchive/pull-request
| 2024-04-10T20:23:00 |
2025-04-01T06:39:26.048680
|
{
"authors": [
"abailly-akamai",
"carrillo-erik"
],
"repo": "linode/manager",
"url": "https://github.com/linode/manager/pull/10366",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1521050476
|
M3-6044: Create Button "Create Using Command Line" and show modal
Description 📝
This story is part of feature API-CLI.
Team, guide me in case I miss setting up any process-oriented things before kicking off feature development (feature flag, etc.).
Note: GA events are on my radar; if needed, follow-up PRs will cover that.
What does this PR do?
Shows the API-CLI awareness modal upon clicking "Create Using Command Line"
Preview 📷
How to test 🧪
Navigate to create Linode page
Scroll to the bottom and click the "Create Using Command Line" button.
Should show the modal.
Work in progress...
How do I run relevant unit or e2e tests?
yarn test ApiAwarenessModal
As a thought, if the intent is to bring awareness about the command line options to other entities as well in the future, maybe ApiAwarenessModal can be made a little more generic now by passing the modal copy/contents in as a child.
It'd look something like:
<Dialog>
{children}
<ActionsPanel>
...
</ActionsPanel>
</Dialog>
the JSX for this specific modal would be defined in LinodeCreate.tsx and then passed as a children/render prop to <ApiAwarenessModal />.
If we take this approach, the component and its test should be moved from the /LinodesCreate directory too
Good call @dwiley-akamai! That was one of the reasons for decoupling ApiAwarenessModal from LinodeCreate in this iteration. We could definitely make it more generic; considering future wireframes, it will be clearer to convert ApiAwarenessModal into a reusable component.
|
gharchive/pull-request
| 2023-01-05T16:48:40 |
2025-04-01T06:39:26.054593
|
{
"authors": [
"cpathipa"
],
"repo": "linode/manager",
"url": "https://github.com/linode/manager/pull/8689",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
606284281
|
Scanse Sweep compatibility
Hi,
I'm using linorobot with Jetson Nano (Ubuntu 18.04), Arduino Mega and Sweep Lidar.
I configured the linobase module and the minimal.launch is working fine.
In order to be able to use the Sweep Lidar, I added its package into the linorobot directory and ran catkin_make. After that, I changed the linorobot launch files to run the sweep lidar launch file.
When I launch the bringup.launch file, I receive these errors.
My question is: What else would need to be done in order to use the Sweep Lidar?
started roslaunch server http://192.168.43.50:34635/
SUMMARY
========
PARAMETERS
* /apply_calib/calib_file: /home/jetson/lino...
* /apply_calib/calibrate_gyros: True
* /ekf_localization/base_link_frame: base_footprint
* /ekf_localization/diagnostics_agg: True
* /ekf_localization/frequency: 50
* /ekf_localization/imu0: /imu/data
* /ekf_localization/imu0_config: [False, False, Fa...
* /ekf_localization/imu0_differential: True
* /ekf_localization/imu0_relative: True
* /ekf_localization/odom0: /raw_odom
* /ekf_localization/odom0_config: [False, False, Fa...
* /ekf_localization/odom0_differential: True
* /ekf_localization/odom0_relative: False
* /ekf_localization/odom_frame: odom
* /ekf_localization/two_d_mode: True
* /ekf_localization/world_frame: odom
* /imu_filter_madgwick/fixed_frame: base_footprint
* /imu_filter_madgwick/orientation_stddev: 0.05
* /imu_filter_madgwick/publish_tf: False
* /imu_filter_madgwick/use_mag: True
* /imu_filter_madgwick/use_magnetic_field_msg: True
* /imu_filter_madgwick/world_frame: enu
* /pointcloud_to_laserscan/angle_increment: 0.0174533
* /pointcloud_to_laserscan/angle_max: 3.14
* /pointcloud_to_laserscan/angle_min: -3.14
* /pointcloud_to_laserscan/concurrency_level: 1
* /pointcloud_to_laserscan/max_height: 1.0
* /pointcloud_to_laserscan/min_height: -1.0
* /pointcloud_to_laserscan/range_max: 40.0
* /pointcloud_to_laserscan/range_min: 0.0
* /pointcloud_to_laserscan/scan_time: 0.1
* /pointcloud_to_laserscan/target_frame: laser
* /pointcloud_to_laserscan/transform_tolerance: 0.001
* /pointcloud_to_laserscan/use_inf: True
* /rosdistro: melodic
* /rosserial_lino/baud: 57600
* /rosserial_lino/port: /dev/linobase
* /rosversion: 1.14.5
* /sweep_node/frame_id: laser
* /sweep_node/serial_baudrate: 115200
* /sweep_node/serial_port: /dev/linolidar
NODES
/
apply_calib (imu_calib/apply_calib)
base_footprint_to_base_link (tf2_ros/static_transform_publisher)
base_footprint_to_imu_link (tf2_ros/static_transform_publisher)
base_link_to_laser (tf2_ros/static_transform_publisher)
ekf_localization (robot_localization/ekf_localization_node)
imu_filter_madgwick (imu_filter_madgwick/imu_filter_node)
lino_base_node (linorobot/lino_base_node)
pointcloud_to_laserscan (pointcloud_to_laserscan/pointcloud_to_laserscan_node)
rosserial_lino (rosserial_python/serial_node.py)
sweep_node (sweep_ros/sweep_node)
auto-starting new master
process[master]: started with pid [26933]
ROS_MASTER_URI=http://192.168.43.50:11311
setting /run_id to 11079158-8627-11ea-ad52-12ed0dd4ce4d
process[rosout-1]: started with pid [26944]
started core service [/rosout]
process[rosserial_lino-2]: started with pid [26951]
process[apply_calib-3]: started with pid [26952]
process[imu_filter_madgwick-4]: started with pid [26953]
process[base_footprint_to_imu_link-5]: started with pid [26954]
[ INFO] [1587731298.663476387]: Starting ImuFilter
[ INFO] [1587731298.677961559]: Using dt computed from message headers
[ INFO] [1587731298.724266254]: Imu filter gain set to 0.100000
[ INFO] [1587731298.724941426]: Gyro drift bias set to 0.000000
[ INFO] [1587731298.725399145]: Magnetometer bias values: 0.000000 0.000000 0.000000
process[lino_base_node-6]: started with pid [26965]
process[base_footprint_to_base_link-7]: started with pid [26971]
process[ekf_localization-8]: started with pid [26972]
process[sweep_node-9]: started with pid [26983]
process[pointcloud_to_laserscan-10]: started with pid [26985]
process[base_link_to_laser-11]: started with pid [26987]
[ WARN] [1587731299.346980673]: Both imu0_differential and imu0_relative were set to true. Using differential mode.
[INFO] [1587731300.024167]: ROS Serial Python Node
[INFO] [1587731300.042407]: Connecting to /dev/linobase at 57600 baud
[INFO] [1587731302.160059]: Requesting topics...
Error: invalid response header checksum
[INFO] [1587731302.361959]: Note: publish buffer size is 512 bytes
[INFO] [1587731302.367370]: Setup publisher on raw_imu [lino_msgs/Imu]
[INFO] [1587731302.380240]: Note: subscribe buffer size is 512 bytes
[INFO] [1587731302.384288]: Setup subscriber on pid [lino_msgs/PID]
[INFO] [1587731302.396869]: Setup subscriber on cmd_vel [geometry_msgs/Twist]
[INFO] [1587731302.403159]: LINOBASE CONNECTED
[ERROR] [1587731302.409229]: Tried to publish before configured, topic id 125
[INFO] [1587731302.413351]: Requesting topics...
[ERROR] [1587731302.429750]: Tried to publish before configured, topic id 125
[INFO] [1587731302.433974]: Requesting topics...
[sweep_node-9] process has finished cleanly
log file: /home/jetson/.ros/log/11079158-8627-11ea-ad52-12ed0dd4ce4d/sweep_node-9*.log
[INFO] [1587731302.457929]: Setup publisher on raw_imu [lino_msgs/Imu]
[ERROR] [1587731302.488946]: Tried to publish before configured, topic id 125
[INFO] [1587731302.493745]: Requesting topics...
[INFO] [1587731302.538571]: Setup publisher on raw_vel [lino_msgs/Velocities]
[INFO] [1587731302.549725]: Setup publisher on raw_imu [lino_msgs/Imu]
[INFO] [1587731302.622646]: Setup publisher on raw_vel [lino_msgs/Velocities]
[INFO] [1587731302.638786]: Setup publisher on raw_imu [lino_msgs/Imu]
[ INFO] [1587731302.693795733]: Calibrating gyros; do not move the IMU
[ WARN] [1587731308.828438073]: Still waiting for data on topics /imu/data_raw and /imu/mag...
[ INFO] [1587731310.355790401]: Gyro calibration complete! (bias = [-0.056, 0.037, -0.012])
[ INFO] [1587731310.508411130]: First pair of IMU and magnetometer messages received.
Hi,
Can you share your launch files please? Thanks
Hi,
Sorry for the late response. I configured the environment as you said, but the error still occurs.
These are the launch files.
../linorobot/launch/include/laser.launch
<launch>
<!-- Run Linorobot compatible laser drivers. Takes reference from env var LINOLIDAR. ie. export LINOLIDAR=xv11 -->
<include file="$(find linorobot)/launch/include/lidar/sweep.launch" />
<!-- Publish static transform of the laser. Define your sensor offset here -->
<node pkg="tf2_ros" type="static_transform_publisher" name="base_link_to_laser" args="0.065 0 0.098 0 0 0 /base_link /laser"/>
</launch>
../linorobot/launch/include/lidar/sweep.launch (here I added the conversion between pc2 and laserscan)
<launch>
<!-- run sweep_node node -->
<node name="sweep_node" pkg="sweep_ros" type="sweep_node" output="screen">
<param name="serial_port" type="string" value="/dev/linolidar"/>
<param name="serial_baudrate" type="int" value="115200"/>
<param name="frame_id" type="string" value="laser"/>
</node>
<!-- run pointcloud_to_laserscan node -->
<node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="pointcloud_to_laserscan">
<remap from="cloud_in" to="pc2"/>
<rosparam>
target_frame: laser # Leave disabled to output scan in pointcloud frame
transform_tolerance: 0.001
min_height: -1.0
max_height: 1.0
angle_min: -3.14 # -M_PI/2
angle_max: 3.14 # M_PI/2
angle_increment: 0.0174533 # M_PI/360.0
scan_time: 0.1
range_min: 0.0
range_max: 40.0
use_inf: true
# Concurrency level, affects number of pointclouds queued for processing and number of threads used
# 0 : Detect number of cores
# 1 : Single threaded
# 2->inf : Parallelism level
concurrency_level: 1
</rosparam>
</node>
</launch>
Your launch files look good. Just omit the pointcloud_to_laserscan.
Also make sure that sweep publishes the data in the "laser" frame. Otherwise, you have to rename "laser" in static_transform_publisher to the correct frame the LIDAR is using.
The problem was not the lidar itself, which worked well, but the udev rules that I'd manually created. I observed that each time I plugged in a new lino device (LiDAR / Arduino Mega), each device was bound to the same ttyUSB* port.
So, I followed the https://github.com/linorobot/linorobot/issues/31#issuecomment-602075774 instructions to configure libgudev on my Ubuntu 18.04, and after the port configurations provided by the lino_udev script, I still had to manually configure the port for the Arduino Mega.
But it works now, thank you!
|
gharchive/issue
| 2020-04-24T12:41:26 |
2025-04-01T06:39:26.069964
|
{
"authors": [
"Valentinkvn",
"grassjelly"
],
"repo": "linorobot/linorobot",
"url": "https://github.com/linorobot/linorobot/issues/41",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
296845869
|
Update default threading behavior
If a message was sent in a thread, answer in a thread per default.
Hey! Looks good, how about adding a test for this? That would help merging it faster. Thanks!
Hey @jsargiot, thanks for the response. I won't be able to create a custom testing instance of Slack or configure Travis. It would be far easier for me (and everybody else who wants to contribute) if you could set up your Travis to run all the tests. If you are concerned about users who are not that familiar with git adding several commits to fix test errors, you could automatically squash all commits of a PR.
Have you read https://docs.travis-ci.com/user/pull-requests/?
|
gharchive/pull-request
| 2018-02-13T18:49:12 |
2025-04-01T06:39:26.099861
|
{
"authors": [
"jonas-schulze",
"jsargiot"
],
"repo": "lins05/slackbot",
"url": "https://github.com/lins05/slackbot/pull/172",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2077713607
|
fix: Enable PostgreSQL stream selection for c9s and RHEL9
c9s/RHEL9 provides PostgreSQL 13 as the default system version as a classic RPM package. Alternative versions are provided as modular content. So, it requires a different installation procedure.
Issue Tracker Tickets (Jira or BZ if any): RHEL-5274
[citest]
lgtm - I can confirm that using postgresql_version: "16" correctly installs version 16 on centos-9.
Once the ci tests pass, we can merge
|
gharchive/pull-request
| 2024-01-11T22:30:53 |
2025-04-01T06:39:26.127775
|
{
"authors": [
"fila43",
"richm"
],
"repo": "linux-system-roles/postgresql",
"url": "https://github.com/linux-system-roles/postgresql/pull/72",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1366562168
|
Increase healthcheck retries
Signed-off-by: Mohamed Abokammer mahmednabil109@gmail.com
Codecov Report
Merging #151 (b815f94) into main (a091455) will increase coverage by 0.00%.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #151 +/- ##
=======================================
Coverage 63.18% 63.18%
=======================================
Files 165 165
Lines 10383 10383
=======================================
+ Hits 6560 6561 +1
+ Misses 3096 3094 -2
- Partials 727 728 +1
Flag
Coverage Δ
e2e
48.95% <ø> (+0.02%)
:arrow_up:
integration
54.39% <ø> (+0.05%)
:arrow_up:
unittests
48.89% <ø> (-0.08%)
:arrow_down:
Flags with carried forward coverage won't be shown. Click here to find out more.
Impacted Files
Coverage Δ
pkg/runner/step_runner.go
88.84% <0.00%> (-0.40%)
:arrow_down:
pkg/jobmanager/jobmanager.go
78.02% <0.00%> (+1.09%)
:arrow_up:
Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here.
|
gharchive/pull-request
| 2022-09-08T15:24:43 |
2025-04-01T06:39:26.144587
|
{
"authors": [
"codecov-commenter",
"mahmednabil109"
],
"repo": "linuxboot/contest",
"url": "https://github.com/linuxboot/contest/pull/151",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1401905076
|
New Twitter Update : Tweet with Multi Media
Hello
In the new update, people can share a tweet with multiple media (for example photos, videos, and GIFs).
Tweets.GetTweetAsync returns only the first media item in the tweet and doesn't show the other media in the Media property.
Can I fix it, or should you correct it?
Thanks a lot.
Please see #1198
Please see #1198
Thanks, but there isn't a Variants property in the TweetsV2.GetTweetAsync method.
How can I fix it?
Thank you.
My pull request hasn't been merged yet.
What I did was clone the repo, make the fix for the variants, and use the fixed DLL rather than the NuGet one.
My pull request hasn't been merged yet.
What I did was clone the repo, make the fix for the variants, and use the fixed DLL rather than the NuGet one.
Ok.
Thanks🙏🏼
My pull request hasn't been merged yet.
What I did was clone the repo, make the fix for the variants, and use the fixed DLL rather than the NuGet one.
Hello
I want to know, can I have the dll that you fix it before merging pull request?
Thank you
My pull request hasn't been merged yet. What I did was clone the repo, make the fix for the variants, and use the fixed DLL rather than the NuGet one.
I cloned the repo and applied the changed files (your commits), but the variants are null.
Do you know how to fix it?
Thanks.
|
gharchive/issue
| 2022-10-08T10:40:39 |
2025-04-01T06:39:26.238079
|
{
"authors": [
"AMIR34A",
"kodsu"
],
"repo": "linvi/tweetinvi",
"url": "https://github.com/linvi/tweetinvi/issues/1189",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
717739066
|
Make Geospatial Data Lake repo public
There are organisations outside of LINZ that are interested in what we are doing. We should make the repo public as soon as possible. I think it's fine to do this while it's a work in progress as long as we indicate that somehow.
Tasks
[x] LGTM
[x] check source code for non public content
[x] open and close tickets
Thanks @SPlanzer. Closing this issue as done.
|
gharchive/issue
| 2020-10-08T23:35:56 |
2025-04-01T06:39:26.239875
|
{
"authors": [
"billgeo"
],
"repo": "linz/geospatial-data-lake",
"url": "https://github.com/linz/geospatial-data-lake/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2107814948
|
Theft of Vitae
text: If both combatants strike with Theft of Vitae while one of the vampires is at 0 blood, no blood is stolen from the empty vampire (but blood would still move to the empty vampire).
link: https://groups.google.com/g/rec.games.trading-cards.jyhad/c/BHeGvhd4yEA/m/SdKih5fV34wJ
Keeping it for after refactoring; it will need to be applied to all blood-stealing cards:
Form of the Cobra [pro]
Theft of Vitae
Donnybrook [ser]
Call the Lamprey
Tongue of the Serpent
Veiled Sight [CHI]
Hunger of Marduk
Drain Essence
Diversion [tha]
Absorb the mind [myt], [MYT]
Kraken's Kiss [VIC]
200078|Anastasz di Zagreb (G3)
200528|Goratrix (G2)
200976|Menele (G3 ADV) [MERGED]
201345|Tariq, The Silent (G2 ADV)
201517|Lord Leopold Valdemar (G5)
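The ruling quoted at the top amounts to clamping the stolen amount at the target's current blood, with each direction of the simultaneous strike resolved independently — a hypothetical model for illustration, not KRCG's actual code:

```python
def steal_blood(stealer: int, target: int, amount: int) -> tuple[int, int]:
    """Move up to `amount` blood from target to stealer.

    An empty target (0 blood) yields nothing, but blood can still
    move onto that vampire when the strikes resolve the other way.
    """
    stolen = min(amount, target)
    return stealer + stolen, target - stolen
```

For example, stealing 2 from a vampire at 0 blood moves nothing, while the empty vampire's own strike still takes blood from its opponent.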
|
gharchive/issue
| 2024-01-30T13:02:23 |
2025-04-01T06:39:26.244376
|
{
"authors": [
"codex-krcg",
"lionel-panhaleux"
],
"repo": "lionel-panhaleux/krcg",
"url": "https://github.com/lionel-panhaleux/krcg/issues/768",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2227706954
|
Prevent spurious SET SEARCH_PATH SQL statements for Postgres during update-sql command. Fixes #5316
Impact
[X] Bug fix (non-breaking change which fixes expected existing functionality)
[ ] Enhancement/New feature (adds functionality without impacting existing logic)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Description
Prevent resetting the search path in Postgres after a rollback if we are running in a mode that does not update the database to avoid spurious SET SEARCH_PATH SQL statements.
The logic in DatabaseUtils.initializeDatabase for Postgres expects changes to the SEARCH_PATH to be persisted in order to detect when the SEARCH_PATH is already correct(ed). However, in update-sql mode these changes are not executed, causing the detection mechanism to fail and resulting in extra SET SEARCH_PATH statements.
Things to be aware of
I'm aware of the other change in review that will fix this issue, but that is a bigger change and might take a long time to be merged. This could be a quick win.
Things to worry about
Additional Context
Hi @mpvvliet ! Your PR handles the same problem as https://github.com/liquibase/liquibase/pull/5444 and also fixes https://github.com/liquibase/liquibase/issues/5316, just aiming at update-sql, and it does not modify the user database settings using alter database session. What do you think about the solution proposed in the other PR?
@filipelautert I like the solution in the PR #5444 because it avoids the need to re-apply the SEARCH_PATH changes. However since that PR is bigger and seems stuck, I proposed this smaller one.
Happy to close this one if the other one has a shot of getting finalised soon.
As PR #5444 is still pending some tests, let's move this one ahead, and if it gets merged we can revert this one here. Thanks @mpvvliet !
@filipelautert @MalloD12 the functional test fix should be merged with https://github.com/liquibase/liquibase-pro-tests/pull/1445
|
gharchive/pull-request
| 2024-04-05T10:53:21 |
2025-04-01T06:39:26.298639
|
{
"authors": [
"filipelautert",
"mpvvliet",
"rberezen"
],
"repo": "liquibase/liquibase",
"url": "https://github.com/liquibase/liquibase/pull/5774",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2259019304
|
Simplify command titles to just the command (rather than the whole signature)
Overview
Right now we have command section headers like 'executable command ', etc. This is pretty wordy, and we want to change it so the section header is just the command name, followed by the full signature as text (but not a heading). We do, however, want to retain the full command signature in the anchor ID in order to avoid ambiguity in a case like 'exec foo' and 'exec bar foo', which would otherwise have the same headings in different contexts.
Work for this issue will begin on branch work-liquid-labs/command-line-documentation/9.
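The rendering described above could be sketched like this — a hypothetical helper whose names and markup are assumptions, not the library's actual output; here "command name" is taken to be the first word of the signature:

```python
def render_section(signature: str) -> str:
    """Render a command section: a short heading (command name only),
    the full signature as plain text below it, and the full signature
    preserved in the anchor ID so links stay unambiguous."""
    name = signature.split()[0]
    anchor = signature.replace(" ", "-")
    return f'<h2 id="{anchor}">{name}</h2>\n<p><code>{signature}</code></p>'
```

With this shape, 'exec foo' and 'exec bar foo' can share a short heading while their anchors remain distinct.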
|
gharchive/issue
| 2024-04-23T14:20:31 |
2025-04-01T06:39:26.300284
|
{
"authors": [
"zanerock"
],
"repo": "liquid-labs/command-line-documentation",
"url": "https://github.com/liquid-labs/command-line-documentation/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2663743514
|
Brian Neff added MCC clock divider example to basic package
Ben, I just wanted to try a very simple example to get this started. If this all works, I'll start pushing more.
Merged in a new PR #10
|
gharchive/pull-request
| 2024-11-16T04:22:43 |
2025-04-01T06:39:26.304163
|
{
"authors": [
"bjneff13",
"bnizette-li"
],
"repo": "liquidinstruments/moku-examples",
"url": "https://github.com/liquidinstruments/moku-examples/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1848098134
|
[bugs] top level await + optional chaining (?.) produces a syntax error in the build output
// main.ts
// bugs reproduction
// top level await
console.log(await window?.fetch(`/`));
export {};
// test-top-level-await.user.js
// ==UserScript==
// @name test-top-level-await
// @namespace vite-plugin-monkey
// @version 0.0.0
// @author monkey
// @match https://songe.li
// ==/UserScript==
(async function () {
'use strict';
console.log((await fetch == null ? void 0 : fetch(`/`))));
// An extra parenthesis
})();
fixed by v3.4.1
|
gharchive/issue
| 2023-08-12T16:11:14 |
2025-04-01T06:39:26.316262
|
{
"authors": [
"lisonge"
],
"repo": "lisonge/vite-plugin-monkey",
"url": "https://github.com/lisonge/vite-plugin-monkey/issues/101",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
924948504
|
[New/Revise] improve parameter estimation performance with constant liar and shorter timeout_iteration
Summary of this new feature
Improve performance (estimation score and runtime) of parameter estimation with the following solutions.
Improve estimation score with constant liar
Optuna provides the new option constant_liar of TPESampler as of version 2.8.0. The Constant Liar heuristic reduces search effort by avoiding trials that try similar parameter sets. Please refer to the detailed explanations and discussions in the Optuna version 2.8.0 release notes. It will be great for CovsirPhy users to use constant_liar=True if Optuna version 2.8.0 is available in our environments.
Improve runtime with shorter timeout_iteration
At version 2.20.3, Scenario.estimate(timeout_iteration=5) is the default. The estimation score (RMSLE by default) is calculated every five seconds, and when the score has not changed for tail_n=4 iterations, estimation is stopped and the best parameter set is returned. However, in my tests, timeout_iteration appears to be a bottleneck: many phases run for the full 5 seconds (i.e. with a shorter timeout_iteration, runtime may be shorter).
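The stopping rule described above can be sketched as follows. This is a minimal illustration only, not CovsirPhy's actual internals; the function name and the list of pre-computed scores are assumptions made for the sketch ("unchanged for tail_n checks" is interpreted here as tail_n + 1 equal consecutive values).

```python
def run_with_early_stop(scores, tail_n=4):
    """Return the index at which optimization would stop.

    `scores` stands in for the periodic score evaluations; in CovsirPhy
    these happen every `timeout_iteration` seconds.
    """
    history = []
    for i, score in enumerate(scores):
        history.append(score)
        # Stop once the score has stayed identical across tail_n checks
        if len(history) > tail_n and len(set(history[-(tail_n + 1):])) == 1:
            return i
    return len(scores) - 1  # budget exhausted without early stop

stop_at = run_with_early_stop([0.9, 0.5, 0.3, 0.1, 0.1, 0.1, 0.1, 0.1])
print(stop_at)  # -> 7
```

With timeout_iteration=1 the same rule fires after fewer wall-clock seconds, which is why shortening it reduces runtime without changing the stopping logic.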
Note regarding constant liar:
constant_liar argument cannot be applied with Optuna version 2.7.0 or older.
https://gist.github.com/lisphilar/6440b5d69c4984bb0b34ede8c8ebcca3
TypeError means we are using Optuna version 2.7.0 or older. When covsirphy gets a `TypeError` with the `constant_liar` argument, it should remove the argument and retry creating `TPESampler`.
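The fallback described above can be sketched like this. It is a hedged illustration only: `make_sampler` and the stand-in factories are assumptions, not CovsirPhy's or Optuna's actual code (the stand-ins let the sketch run without Optuna installed).

```python
def make_sampler(sampler_factory, **kwargs):
    """Create a sampler, dropping constant_liar if the factory rejects it."""
    try:
        return sampler_factory(**kwargs)
    except TypeError:
        # Optuna <= 2.7.0: TPESampler has no constant_liar argument
        kwargs.pop("constant_liar", None)
        return sampler_factory(**kwargs)

# Stand-in mimicking an old TPESampler signature without constant_liar
def old_tpe_sampler(seed=None):
    return {"seed": seed}

# Stand-in mimicking a 2.8.0-style signature that accepts constant_liar
def new_tpe_sampler(seed=None, constant_liar=False):
    return {"seed": seed, "constant_liar": constant_liar}

sampler = make_sampler(old_tpe_sampler, seed=123, constant_liar=True)
print(sampler)  # -> {'seed': 123}
```

With the real Optuna classes, the same try/except pattern keeps one code path working across both old and new versions.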
At version CovsirPhy 2.3.0 with Italy data (as of 18Jun2021), example/scenario_analysis.py and 8 CPUs at my local environment, parameter estimation completed with RMSLE=0.0795 in 2 min 22 sec.
(Please ignore accuracy of the last phase of Forecast scenario because this is a forecasted future phase.)
.
I compared the performances, changing constant_liar and timeout_iteration with Italy data as of 18Jun2021, my local environment and CovsirPhy version 2.20.3-theta. I used only 1 CPU with n_jobs=1 to get robust runtime values as the total over all phases. Parameter estimation of each phase was done sequentially. The code is as follows.
import covsirphy as cs
loader = cs.DataLoader()
jhu_data = loader.jhu()
snl = cs.Scenario(country="Italy")
snl.register(jhu_data)
snl.trend()
snl.estimate(cs.SIRF, n_jobs=1)
print(f"RMSLE: {snl.score(metric='RMSLE')}")
Results are here.
| RMSLE (runtime) | constant_liar=False | constant_liar=True |
| --- | --- | --- |
| timeout_iteration=5 | 0.06810 (13 min 22 sec) | 0.06868 (17 min 42 sec) |
| timeout_iteration=4 | 0.06812 (14 min 03 sec) | 0.06869 (14 min 07 sec) |
| timeout_iteration=3 | 0.06808 (10 min 10 sec) | 0.06871 (10 min 31 sec) |
| timeout_iteration=2 | 0.06811 (07 min 55 sec) | 0.06865 (07 min 11 sec) |
| timeout_iteration=1 | 0.06806 (03 min 21 sec) | 0.06901 (03 min 53 sec) |
I expected constant_liar=True and timeout_iteration=1 would show the best performance, but these results indicated constant_liar=False and timeout_iteration=1. I will create a pull request for constant_liar=False and timeout_iteration=1. These default values may be changed later if we get different results with the other countries' data.
With #833, timeout_iteration=1 will be the default value for Scenario.estimate(), with constant_liar=False kept as-is explicitly.
Later, I will add constant_liar=False as an argument of Scenario.estimate(), if necessary.
With #835, users can select whether to use constant liar or not with Scenario.estimate(<model>, constant_liar=False) (the default).
I compared RMSLE scores and runtime of constant_liar=False (default at this time) and constant_liar=True with some countries' datasets. I used example/scenario_analysis.py with 8 CPUs.
Results are here.
| iso3 | Country | constant_liar=False | constant_liar=True | Better RMSLE | Better runtime | Winner |
| --- | --- | --- | --- | --- | --- | --- |
| ita | Italy | 0.07642 (27 sec) | 0.07686 (29 sec) | FALSE | FALSE | FALSE |
| jpn | Japan | 0.06103 (39 sec) | 0.06200 (44 sec) | FALSE | FALSE | FALSE |
| grc | Greece | 0.05472 (37 sec) | 0.05107 (44 sec) | TRUE | FALSE | NA |
| nld | Netherlands | 0.03719 (37 sec) | 0.03706 (28 sec) | TRUE | TRUE | TRUE |
| usa | USA | 0.23073 (33 sec) | 0.24186 (22 sec) | FALSE | TRUE | NA |
| ind | India | 0.21665 (36 sec) | 0.21871 (50 sec) | FALSE | FALSE | FALSE |
| bra | Brazil | 0.06754 (53 sec) | 0.06634 (63 sec) | TRUE | FALSE | NA |
| rus | Russia | 0.61374 (38 sec) | 0.61293 (28 sec) | TRUE | TRUE | TRUE |
Because there was no significant difference, we continue to use constant_liar=False as default. For Netherlands and Russia, it will be better to use Scenario.estimate(cs.SIRF, constant_liar=True).
Runtime of parameter estimation will be quite a bit shorter with timeout_iteration=1 (default). The version 2.21.0 release was planned for Jul 2021, but this should be moved up to Jun 2021, tomorrow or within a few days.
|
gharchive/issue
| 2021-06-18T14:13:22 |
2025-04-01T06:39:26.336659
|
{
"authors": [
"lisphilar"
],
"repo": "lisphilar/covid19-sir",
"url": "https://github.com/lisphilar/covid19-sir/issues/833",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
543303915
|
Song sorting
Songs added to a playlist always appear newest-first at the top; please add manual sorting, as well as automatic sorting by song title, artist, etc.
+10086 A sorting feature is so important~~~ :)
|
gharchive/issue
| 2019-12-29T01:49:55 |
2025-04-01T06:39:26.338021
|
{
"authors": [
"Wine93",
"netsonicyxf"
],
"repo": "listen1/listen1_desktop",
"url": "https://github.com/listen1/listen1_desktop/issues/186",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1228588849
|
Fix the issue where globally hiding the TopBar only takes effect in the Tab layout
Sorry, the previously submitted PR #228 turned out to auto-hide the TopBar only in the Tab layout; I just found that the stack layout still shows the topbar, which is fixed here.
Also, does this library support repositioning the TabBar on tablet devices in landscape? Like many iPad apps that move the TabBar to the left side. I've hacked it together with findViewById for now; is there a more elegant approach?
I just tested: with the global setting topBarHidden: true, iOS still fails to hide it.
I'm not great at iOS :joy:
I'll handle it.
Planned to support it as follows:
// Start registering components, i.e. the basic page units
ReactRegistry.startRegisterComponent(withNavigationItem({ topBarHidden: true }))
Yep, that's another viable approach 👍🏻
Implemented in hybrid-navigation@2.9.0
|
gharchive/pull-request
| 2022-05-07T10:10:07 |
2025-04-01T06:39:26.341827
|
{
"authors": [
"NiuGuohui",
"listenzz"
],
"repo": "listenzz/hybrid-navigation",
"url": "https://github.com/listenzz/hybrid-navigation/pull/229",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1975951638
|
Can't run lit.dev on windows
When trying to run the build, it fails on the fonts:manrope script:
rm (does not exist on windows)
cp (does not exist on windows)
mkdir -p (invalid syntax on windows)
I fixed those by changing the commands to node scripts/fonts.js
After that there is an error with samples (D:\Workspace\lit\lit.dev\packages\lit-dev-content\samples\js_check-code-helpers.ts): either the file is in the wrong path or the script is searching for the wrong one
Thank you for filing this issue! Could _check-code-helpers.js not have been built into its expected location?
I only have occasional access to a Windows machine, but would happily review PRs that move us towards building on Windows.
Hi @AndrewJakubowicz, thanks!
I'm back to windows all the time for a long time now, spilled monster on the MBP 😢
The _check-code-helpers.js file is in the samples dir while the script expects it to be in samples/js
But in generate-js-samples.ts I can see that the js folder should not be included into the glob to pass to TS so maybe that is the problem.
https://github.com/lit/lit.dev/pull/1255
Hmm, now I'm getting a rollup error
Nobody else is getting that?
No clue
Thank you for raising this! We definitely should try and make this repo buildable in Windows.
In the meantime, I have found WSL2 to be quite good as a dev environment on Windows.
@augustjk
True, WSL is great, but it's also a hassle and somehow breaks from time to time. IMO WSL is awesome when you're cross-compiling etc., but for a Node project it's quite simple to get things working; the issues are really small, like using the correct separator for paths.
Only issue is with eleventy now all the other stuff I already fixed (if Mac and Linux isn't broken now).
WSL doesn't work either; I just installed a new distro and I think it tries to use the npm installed on the Windows side.
|
gharchive/issue
| 2023-11-03T11:09:47 |
2025-04-01T06:39:26.349010
|
{
"authors": [
"AndrewJakubowicz",
"VandeurenGlenn",
"augustjk"
],
"repo": "lit/lit.dev",
"url": "https://github.com/lit/lit.dev/issues/1249",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1910756659
|
ipc
(open in new tab instead of new window)
I was looking for this function, but had to ask on Discord.
It's not obvious from README.md...
The README is generated from manifest.json so that's the one that needs to be updated.
I think something like Adds inter-process communication support, single-instance mode and tab drag and drop between instances. would be more explanatory.
|
gharchive/pull-request
| 2023-09-25T06:24:34 |
2025-04-01T06:39:26.366650
|
{
"authors": [
"Guldoman",
"zen0bit"
],
"repo": "lite-xl/lite-xl-plugins",
"url": "https://github.com/lite-xl/lite-xl-plugins/pull/306",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
224943469
|
iCloud backup Wallet seed
As Loaf/Bread Wallet uses something on the device as a seed for the wallet creation, do you know if replacing the processor in the phone will maintain whatever is used for this seed (Device ID/IMEI/serial number)?
I've had a phone die on me, and don't have the recovery phrase for the wallet. The phone is currently stuck in recovery mode and is returning an error that is consistent with a processor failure. I can get the processor replaced which would allow the phone to boot and for me to recover from iCloud but if it's not going to rebuild the same wallet, then I won't go ahead with it.
Any help is appreciated.
Do you have the 12 word seed (passphrase) that was generated when you first used LoafWallet? With that 12 word seed, you can restore your wallet.
If you are able to get someone to replace your iPhone's processor and unlock your phone without restoring, then there would be a somewhat high chance that you are able to recover your wallet. At any point, if you end up restoring your device, you will lose all of your coins, as LoafWallet does not back up your seed/private keys to iCloud (for obvious reasons).
If you do end up recovering your wallet, please make sure that you go into settings and then copy down your 12 word seed, just in case anything like this happens in the future.
Hi losh11,
Thanks for the quick reply.
I'd assumed that the iCloud backups would work in a similar way to Bread wallet's method (as stated here).
|
gharchive/issue
| 2017-04-27T23:49:32 |
2025-04-01T06:39:26.400258
|
{
"authors": [
"bearpig",
"losh11"
],
"repo": "litecoin-association/LoafWallet",
"url": "https://github.com/litecoin-association/LoafWallet/issues/23",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1659807451
|
🚀[ Release v.2.8.4] Merge into Main
Overview
This is the last major release prior to work on Newborn, the refactored Litewallet Android. While there are many requests to improve the current codebase, it is actually 7 years of patching and rework and the cost / time of maintenance is no longer worth it.
We looked at the most important features needed and addressed them in this release.
They are:
Bech32 support for sending to 1ltc addresses
Allow user to see their 12 words / seed phrase
User preferences for sync vs anonymity (false positives rate)
Clips
Note some views
Show seed phrase
Add user preference False Positives
Wow, looks nice! @kcw-grunt Why is onClick empty in the code below?
if (BuildConfig.VERSION_NAME == "v2.8.4") {
Snackbar.make(parentLayout,
R.string.release_notes,
Snackbar.LENGTH_INDEFINITE).setAction(R.string.Webview_dismiss, new View.OnClickListener() {
@Override
public void onClick(View view) {
}
})
.setActionTextColor(getResources().getColor(android.R.color.holo_red_light ))
.show();
}
As the comment says, what does the 'false' passed to items.add on lines 130-134 of SettingsActivity.java do?
Why do we need to clear DB table to enable Bech32 features?
Thanks @josikie ...you are too kind.
Why do we need to clear DB table to enable Bech32 features?:
The legacy db used a different schema for ltc addresses, so adding new (ltc1) addresses would fail in the old db. One of the steps @vsima added was to have the device wipe the existing db and re-add transactions under the new schema, so sending to L, M, and ltc1 addresses is now readable by the new schema.
The comment said to show, what 'false' does on items.add on code line 130-134 file SettingsActivity.java?
This is just an implementation detail in Android/Java Settings tables. It distinguishes a table item (section: false) from a table section (section: true). Truth is, I just used the existing design and added the item for Show my seed.
Thank you for the explanation! @kcw-grunt
|
gharchive/pull-request
| 2023-04-09T09:08:59 |
2025-04-01T06:39:26.407260
|
{
"authors": [
"josikie",
"kcw-grunt"
],
"repo": "litecoin-foundation/litewallet-android",
"url": "https://github.com/litecoin-foundation/litewallet-android/pull/138",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
603103815
|
Replace deploy:service/deploy:process command with deploy
Dependency: https://github.com/liteflow-labs/liteflow-js/pull/38
Add liteflow deploy command that deploys all processes in a directory based on the liteflow framework structure.
All process-related services will automatically be deployed and started
Closing in favor of #42 that already includes these changes
|
gharchive/pull-request
| 2020-04-20T09:54:19 |
2025-04-01T06:39:26.409412
|
{
"authors": [
"antho1404"
],
"repo": "liteflow-labs/liteflow-js",
"url": "https://github.com/liteflow-labs/liteflow-js/pull/39",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
105671703
|
Signal AWS when applications are deployed
We currently can't do rolling upgrades of the galaxy image because the apps often take longer to deploy than the instances.
Galaxy needs to lookup all applications that should be running on a host, and notify the ASG when deployment is complete.
The ASG can have an UpdatePolicy, or the stack can contain a CreationPolicy to define when an instance is ready. Galaxy can use the API or the cfn-signal script for notification.
Since we've removed the cloudformation dependency from galaxy, this should be implemented in such a way that it's not coupled to AWS.
A callback command when the host is up should suffice.
|
gharchive/issue
| 2015-09-09T19:52:06 |
2025-04-01T06:39:26.434148
|
{
"authors": [
"jbardin"
],
"repo": "litl/galaxy",
"url": "https://github.com/litl/galaxy/issues/267",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
183257626
|
Add the enableCmdQR parameter to fix a login error
When pillow is not installed, a fallback function is supposed to prompt the user to scan the QR code manually, but because the enableCmdQR parameter was missing, the except clause in client.py was triggered, producing the baffling error "Failed to get QR Code, please restart the program".
@xmcp That was my oversight, thanks!
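The fix can be sketched like this. This is a hedged illustration of the fallback pattern the PR describes, not ItChat's actual implementation; the function names and return values here are stand-ins invented for the sketch.

```python
def show_qr_with_pillow(path, enableCmdQR=False):
    # Stand-in for the Pillow-based code path; pretend Pillow is missing
    raise ImportError("pillow not installed")

def show_qr_in_terminal(path, enableCmdQR=True):
    # Stand-in for the command-line QR fallback
    return f"cmd-qr:{path}:{enableCmdQR}"

def show_qr(path, enableCmdQR=False):
    try:
        return show_qr_with_pillow(path, enableCmdQR=enableCmdQR)
    except ImportError:
        # Before the fix, this call omitted enableCmdQR, so it raised a
        # TypeError upstream and surfaced as "Failed to get QR Code,
        # please restart the program".
        return show_qr_in_terminal(path, enableCmdQR=enableCmdQR)

print(show_qr("QR.png", enableCmdQR=True))  # -> cmd-qr:QR.png:True
```

The point of the fix is simply that the fallback call forwards the same keyword argument as the primary path, so the two stay signature-compatible.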
|
gharchive/pull-request
| 2016-10-16T09:29:49 |
2025-04-01T06:39:26.442468
|
{
"authors": [
"littlecodersh",
"xmcp"
],
"repo": "littlecodersh/ItChat",
"url": "https://github.com/littlecodersh/ItChat/pull/99",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1578773815
|
🛑 USTC Mirrors (IPv4) is down
In 41f09e6, USTC Mirrors (IPv4) (https://ipv4.mirrors.ustc.edu.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: USTC Mirrors (IPv4) is back up in 5fa02b3.
|
gharchive/issue
| 2023-02-09T23:43:41 |
2025-04-01T06:39:26.444908
|
{
"authors": [
"littlekud"
],
"repo": "littlekud/sites-status",
"url": "https://github.com/littlekud/sites-status/issues/1238",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2459290571
|
Compile error on Termux (Android)
OS: Android (Termux)
vim-clap version: Latest from main.
Describe the bug
I get a compilation error when trying to upgrade:
error[E0282]: type annotations needed for `Box<_>`
--> /data/data/com.termux/files/home/.cargo/registry/src/index.crates.io-6f17d22bba15001f/time-0.3.34/src/format_description/parse/mod.rs:83:9
|
83 | let items = format_items
| ^^^^^
...
86 | Ok(items.into())
| ---- type must be known at this point
|
help: consider giving `items` an explicit type, where the placeholders `_` are specified
|
83 | let items: Box<_> = format_items
| ++++++++
Compiling utils v0.1.54 (/data/data/com.termux/files/home/.vim/bundle/vim-clap/crates/utils)
For more information about this error, try `rustc --explain E0282`.
error: could not compile `time` (lib) due to 1 previous error
warning: build failed, waiting for other jobs to finish...
To Reproduce
Steps to reproduce the behavior:
Just ran Plugupdate, also tried cargo build --release --target aarch64-linux-android
running cargo update -p time seems to solve the issue.
|
gharchive/issue
| 2024-08-10T21:07:09 |
2025-04-01T06:39:26.462375
|
{
"authors": [
"luisdavim"
],
"repo": "liuchengxu/vim-clap",
"url": "https://github.com/liuchengxu/vim-clap/issues/1088",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1653122452
|
pip install -r requirements.txt fails
Was this intentional or an accident? ←_← Why is googleads in there?
Collecting googleads==3.8.0
Using cached https://mirrors.aliyun.com/pypi/packages/fa/f8/f84ad483afaa29bfc807ab6e8a06b6712ee494a2aad7db545865655bdf99/googleads-3.8.0.tar.gz (23 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
error in googleads setup command: use_2to3 is invalid.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
After removing googleads:
Collecting ruamel-yaml-conda
Using cached https://mirrors.aliyun.com/pypi/packages/94/ef/31bfa8456e01ff1
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [19 lines of output]
Traceback (most recent call last):
File "/tmp/pip-install-26tjscuc/ruamel-yaml-conda_ca1bc85899634c92a2eb8c802d6396b0/ruamel_yaml/__init__.py", line 21, in <module>
from .main import * # NOQA File "/tmp/pip-install-26tjscuc/ruamel-yaml-conda_ca1bc85899634c92a2eb8c802d6396b0/ruamel_yaml/main.py", line 12, in <module>
import ruamel.yaml ModuleNotFoundError: No module named 'ruamel'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-26tjscuc/ruamel-yaml-conda_ca1bc85899634c92a2eb8c802d6396b0/setup.py", line 14, in <module>
import ruamel_yaml # NOQA File "/tmp/pip-install-26tjscuc/ruamel-yaml-conda_ca1bc85899634c92a2eb8c802d6396b0/ruamel_yaml/__init__.py", line 23, in <module>
from ruamel_yaml.main import * # NOQA File "/tmp/pip-install-26tjscuc/ruamel-yaml-conda_ca1bc85899634c92a2eb8c802d6396b0/ruamel_yaml/main.py", line 12, in <module> import ruamel.yaml ModuleNotFoundError: No module named 'ruamel'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
After manually running pip install ruamel.yaml and conda install ruamel.yaml (both commands found via Google search):
I can't figure this out, help
Collecting ruamel-yaml-conda Using cached https://mirrors.aliyun.com/pypi/packages/94/ef/31bfa8456e01ff13d8d98bdbc80ab2e592c830e52ccaff62c35d5f890357/ruamel_yaml_conda-0.15.80.tar.gz (202 kB)
Preparing metadata (setup.py) ... error error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [12 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-nsnxrkh6/ruamel-yaml-conda_cfce9d13e9ae4a85b8c11d278dffd58b/setup.py", line 35, in <module>
ext_modules=cythonize(extensions), File "/usr/local/lib/python3.8/dist-packages/Cython/Build/Dependencies.py", line 970, in cythonize
module_list, module_metadata = create_extension_list( File "/usr/local/lib/python3.8/dist-packages/Cython/Build/Dependencies.py", line 816, in create_extension_list
for file in nonempty(sorted(extended_iglob(filepattern)), "'%s' doesn't match any files" % filepattern):
File "/usr/local/lib/python3.8/dist-packages/Cython/Build/Dependencies.py", line 114, in nonempty
raise ValueError(error_msg) ValueError: 'ruamel_yaml/ext/_ruamel_yaml.pyx' doesn't match any files
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
In any case, I hope there can be a guide for setting up the environment manually, or at least a clear statement of the required Python version and such. Not all users are programmers, and not all programmers understand AI (like me...).
To install ruamel yaml under conda, you can try conda install -c conda-forge ruamel_yaml
I skipped installing all the dependencies that errored out, and in the end it reported that torch 2.0.0 could not be found. I have no idea anymore what is wrong in my environment; I'm renting a cloud server (no GPU locally).
I'll make do with the Colab one-click package for now, though Colab can 404 and be cut off at any time, so it's still inconvenient.
On Windows you can follow the diff-svc deployment guide; that's what I used to set things up, and packages that won't install can simply be removed.
https://diff-svc.gitbook.io/the-beginners-guide-to-diff-svc/setting-up/setting-up-the-environment
Linux...
My fault: the requirements file was exported in a Python 3.8 environment, hence some odd dependency errors.
Later I'll try reconfiguring it under 3.9 based on the Windows one-click package.
I tried installing torch 2.0 with pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118. After that it starts, but it keeps raising FileNotFoundError: [Errno 2] No such file or directory: 'weights/[]'. Everything else has been fixed.
Hello @NijiharaTsubasa @Gillwindy @chenxvb @ricecakey06
I have spent 3 entire weeks trying to find a way to clone voices correctly and I still did not get good results; I am so tired of it. I am contacting you because I saw you had old comments under old issues; you have probably found better ways since then? Could you save me from my misery and point me towards a method, a repo, a tutorial or anything that helps me get to the point where I can actually clone a voice that sounds similar to the original? Help my soul lol. Really.
|
gharchive/issue
| 2023-04-04T03:46:56 |
2025-04-01T06:39:26.480073
|
{
"authors": [
"AIhasArrived",
"Gillwindy",
"NijiharaTsubasa",
"chenxvb",
"gak123",
"liujing04"
],
"repo": "liujing04/Retrieval-based-Voice-Conversion-WebUI",
"url": "https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1608895351
|
An easy interface to add custom accelerators / backends?
There should be some easy framework so that I can easily add my ops for a custom accelerator / framework.
I wanted to see if I can easily port it to a proprietary chip aiming to outcompete the M1 GPU.
I really like TF's framework for adding custom backends, but it's too big.
If you can sign NDAs and stuff, I can share more details.
Thanks for the offer! Sorry, I don't sign NDAs without knowing more details (i.e. if the NDA is about custom accelerators alone, we can discuss more in private channels).
Also, you can check out tinygrad: https://github.com/geohot/tinygrad which supposedly should be easy to add custom backends.
|
gharchive/issue
| 2023-03-03T16:05:56 |
2025-04-01T06:39:26.483106
|
{
"authors": [
"brappier",
"liuliu"
],
"repo": "liuliu/s4nnc",
"url": "https://github.com/liuliu/s4nnc/issues/7",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1775728286
|
🛑 clans.korabli.su is down
In 537d4ac, clans.korabli.su (https://clans.korabli.su) was down:
HTTP code: 0
Response time: 0 ms
Resolved: clans.korabli.su is back up in 59c36b7.
|
gharchive/issue
| 2023-06-26T21:57:32 |
2025-04-01T06:39:26.546476
|
{
"authors": [
"nonamenix"
],
"repo": "live4dev/uptime.live4.dev",
"url": "https://github.com/live4dev/uptime.live4.dev/issues/450",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
967502682
|
Working version of tvp to stored proc
Updated dependencies, merged with dizzy:masters and confirmed working.
confirmed working to sqlserver 2019 windows env
Updated version of #49
|
gharchive/pull-request
| 2021-08-11T21:34:25 |
2025-04-01T06:39:26.557554
|
{
"authors": [
"Deathklok-97"
],
"repo": "livehelpnow/tds",
"url": "https://github.com/livehelpnow/tds/pull/125",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2414940497
|
How to type rpm packets
Due to the requirements of the business scenario, we need to package this component as rpm
sorry, we do not currently offer official RPM distributions. You can use GoReleaser to build your own
|
gharchive/issue
| 2024-07-18T01:13:06 |
2025-04-01T06:39:26.558350
|
{
"authors": [
"davidzhao",
"zuyou-alt"
],
"repo": "livekit/livekit",
"url": "https://github.com/livekit/livekit/issues/2875",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
412708322
|
Explorer round count sometimes shows up negative
Describe the bug (required)
The "rounds remaining" indicator displays incorrect data while the data is still loading from Infura. I believe I introduced this with https://github.com/livepeer/livepeerjs/pull/333, as that switched up the render logic on some of the GraphQL data.
Expected behavior (required)
This seems to show up after a few seconds.
To Reproduce (required)
Steps to reproduce the behavior:
Boot up the explorer.
Immediately click on the "round" thing in the upper left.
That'll show up.
Closing since the classic explorer was sunsetted.
|
gharchive/issue
| 2019-02-21T01:38:20 |
2025-04-01T06:39:26.562191
|
{
"authors": [
"adamsoffer",
"iameli"
],
"repo": "livepeer/livepeerjs",
"url": "https://github.com/livepeer/livepeerjs/issues/340",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1622150254
|
Modifier -> SwiftUI -> Documents: renameAction(_:)
Doc:
https://developer.apple.com/documentation/swiftui/view/renameaction(_:)-6lghl
[x] Swift implementation
[x] Elixir implementation
https://developer.apple.com/documentation/swiftui/view/renameaction(_:)-324yw
[x] Swift implementation
[x] Elixir implementation
Implemented in #326
|
gharchive/issue
| 2023-03-13T19:45:15 |
2025-04-01T06:39:26.564716
|
{
"authors": [
"AZholtkevych",
"carson-katri"
],
"repo": "liveview-native/liveview-client-swiftui",
"url": "https://github.com/liveview-native/liveview-client-swiftui/issues/613",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
972643073
|
RegisterCommandIf Not working
public function boot()
{
    Spotlight::registerCommandIf(Auth::check() && Auth::user()->role == 'A', Logout::class);
}
@sandy15d please use the shouldBeShown method on the command when working with dependencies that need to be resolved:
public function shouldBeShown(Request $request): bool
{
return $request->user()->role == 'A';
}
More info: https://github.com/livewire-ui/spotlight#register-commands
|
gharchive/issue
| 2021-08-17T12:36:55 |
2025-04-01T06:39:26.566722
|
{
"authors": [
"PhiloNL",
"sandy15d"
],
"repo": "livewire-ui/spotlight",
"url": "https://github.com/livewire-ui/spotlight/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1567742451
|
Following the docs produces an error
npm install nestscript
npx nsc compile main.js main errors out right at this step
Pull the master branch, change #!/usr/bin/env ts-node to #!/usr/bin/env node in the /nsc/bin/run file, then run npx nsc compile main.js main and npx nsc run main in the nestscript-master directory; it outputs normally. Note that you need the Oclif environment installed, see http://semlinker.com/oclif-node-cli/; the author's command uses the oclif framework. I work in reverse engineering and ran into a website that used this project for JS encryption, exactly the same thing; I cracked it by other means, but I still want to learn about decrypting the VM binary.
I'd like to know which website that was
That site has since rolled back to an earlier version and no longer uses a VM to load JS. I'm not comfortable disclosing which site it is, really sorry.
|
gharchive/issue
| 2023-02-02T10:13:15 |
2025-04-01T06:39:26.575224
|
{
"authors": [
"livoras",
"weifeng2"
],
"repo": "livoras/nestscript",
"url": "https://github.com/livoras/nestscript/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2043053523
|
update ICASSP'24 info
Source of information on the number of accepted papers: https://cmsworkshops.com/ICASSP2024/papers/accepted_papers.php
Source of information on paper acceptance rate and number of valid submissions: official notification email.
The numbers from different channels are slightly different.
Thanks.
|
gharchive/pull-request
| 2023-12-15T07:27:09 |
2025-04-01T06:39:26.584795
|
{
"authors": [
"lixin4ever",
"youngfish42"
],
"repo": "lixin4ever/Conference-Acceptance-Rate",
"url": "https://github.com/lixin4ever/Conference-Acceptance-Rate/pull/81",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
351464503
|
Use IP.SB's HTTPS API
Use IP.SB's HTTPS API
This API is hosted by an individual. Considering stability, data collection, and compatibility (with other APIs that return richer output), it will not be used as the default preset, but it can be added to the README as a recommendation.
|
gharchive/pull-request
| 2018-08-17T05:47:02 |
2025-04-01T06:39:26.587488
|
{
"authors": [
"HamJin",
"lixuy"
],
"repo": "lixuy/CloudXNS-DDNS-with-BashShell",
"url": "https://github.com/lixuy/CloudXNS-DDNS-with-BashShell/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
210221537
|
NODE_ENV=production egg-bin dev Instead of nuxt build
Use NODE_ENV=production egg-bin dev to run the server.
It takes two steps:
nuxt build
egg start
Done.
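For reference, the two production steps can be wired into package.json scripts roughly like this (the script names and the egg-scripts runner are assumptions, not necessarily this repo's actual setup):

```json
{
  "scripts": {
    "build": "nuxt build",
    "start": "NODE_ENV=production egg-scripts start"
  }
}
```

With this wiring, `npm run build` produces the Nuxt bundle and `npm run start` launches egg in production mode.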
|
gharchive/issue
| 2017-02-25T08:42:19 |
2025-04-01T06:39:26.589493
|
{
"authors": [
"liyanlong"
],
"repo": "liyanlong/nuxt-egg",
"url": "https://github.com/liyanlong/nuxt-egg/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1289800839
|
🛑 Personal Home is down
In 3a25125, Personal Home (https://liyaodong.com) was down:
HTTP code: 403
Response time: 37 ms
Resolved: Personal Home is back up in 256e0ef.
|
gharchive/issue
| 2022-06-30T08:44:01 |
2025-04-01T06:39:26.591798
|
{
"authors": [
"liyaodong"
],
"repo": "liyaodong/uptime",
"url": "https://github.com/liyaodong/uptime/issues/182",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1389366266
|
The main function runs twice even without a second screen
On an ordinary single-screen tablet, or when debugging in the emulator, I found that the main function runs twice.
In one run window.defaultRouteName is /
and in the other it is subMain,
and the execution order is not fixed.
Is this normal?
Hi, the issue has been fixed. Previously, to support casting from the phone to a TV while the app was in use, the secondary-screen engine was initialized as soon as the app loaded. After the fix, the secondary-screen engine is only initialized once a second screen is actually detected.
Thanks
|
gharchive/issue
| 2022-09-28T13:29:35 |
2025-04-01T06:39:26.593207
|
{
"authors": [
"cwangfr",
"liyufengrex"
],
"repo": "liyufengrex/flutter_subscreen_plugin",
"url": "https://github.com/liyufengrex/flutter_subscreen_plugin/issues/6",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
127901726
|
dw [delete word] at last word of line should not shift a line below
I faced a problem when I use the 'dw' [delete word] command on the last word of a line: the line below gets joined to the current line.
[example document]
line1,col2,col3
line2,col2,col3
line3,col2,col3
[after dw at 'col3' on line1]
line1,col2,line2,col2,col3
line3,col2,col3
[expected result]
line1,col2,
line2,col2,col3
line3,col2,col3
Oh sorry. I think I should post this issue to vim-mode.
|
gharchive/issue
| 2016-01-21T11:15:07 |
2025-04-01T06:39:26.636979
|
{
"authors": [
"paween1980"
],
"repo": "lloeki/ex-mode",
"url": "https://github.com/lloeki/ex-mode/issues/126",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
155659716
|
Cannot get Peer-Tweet running
Everything's in the title.
I followed the instructions on installing. I ran
npm install
npm install --save-dev electron-rebuild
./node_modules/.bin/electron-rebuild
After launching with 2 separate instance of terminal with these commands
npm run hot-server
npm run start-hot
When I start the second command line, I get this output
> PeerTweet@0.1.1 start-hot /home/l4p1n/peer-tweet
> cross-env HOT=1 NODE_ENV=development electron ./
(electron) companyName is now a required option to crashReporter.start
Error opening app
The app provided is not a valid Electron app, please read the docs on how to write one:
https://github.com/atom/electron/tree/v0.36.12/docs
Error: Cannot find module 'electron-debug'
npm ERR! Linux 4.2.0-36-generic
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "run" "start-hot"
npm ERR! node v4.4.4
npm ERR! npm v3.8.9
npm ERR! code ELIFECYCLE
npm ERR! PeerTweet@0.1.1 start-hot: `cross-env HOT=1 NODE_ENV=development electron ./`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the PeerTweet@0.1.1 start-hot script 'cross-env HOT=1 NODE_ENV=development electron ./'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the PeerTweet package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! cross-env HOT=1 NODE_ENV=development electron ./
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs PeerTweet
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls PeerTweet
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /home/l4p1n/peer-tweet/npm-debug.log
And here is the content of /home/l4p1n/peer-tweet/npm-debug.log
0 info it worked if it ends with ok
1 verbose cli [ '/usr/bin/nodejs', '/usr/bin/npm', 'run', 'start-hot' ]
2 info using npm@3.8.9
3 info using node@v4.4.4
4 verbose run-script [ 'prestart-hot', 'start-hot', 'poststart-hot' ]
5 info lifecycle PeerTweet@0.1.1~prestart-hot: PeerTweet@0.1.1
6 silly lifecycle PeerTweet@0.1.1~prestart-hot: no script for prestart-hot, continuing
7 info lifecycle PeerTweet@0.1.1~start-hot: PeerTweet@0.1.1
8 verbose lifecycle PeerTweet@0.1.1~start-hot: unsafe-perm in lifecycle true
9 verbose lifecycle PeerTweet@0.1.1~start-hot: PATH: /usr/lib/node_modules/npm/bin/node-gyp-bin:/home/l4p1n/peer-tweet/node_modules/.bin:/usr/bin:/home/l4p1n/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
10 verbose lifecycle PeerTweet@0.1.1~start-hot: CWD: /home/l4p1n/peer-tweet
11 silly lifecycle PeerTweet@0.1.1~start-hot: Args: [ '-c', 'cross-env HOT=1 NODE_ENV=development electron ./' ]
12 silly lifecycle PeerTweet@0.1.1~start-hot: Returned: code: 1 signal: null
13 info lifecycle PeerTweet@0.1.1~start-hot: Failed to exec start-hot script
14 verbose stack Error: PeerTweet@0.1.1 start-hot: `cross-env HOT=1 NODE_ENV=development electron ./`
14 verbose stack Exit status 1
14 verbose stack at EventEmitter.<anonymous> (/usr/lib/node_modules/npm/lib/utils/lifecycle.js:245:16)
14 verbose stack at emitTwo (events.js:87:13)
14 verbose stack at EventEmitter.emit (events.js:172:7)
14 verbose stack at ChildProcess.<anonymous> (/usr/lib/node_modules/npm/lib/utils/spawn.js:24:14)
14 verbose stack at emitTwo (events.js:87:13)
14 verbose stack at ChildProcess.emit (events.js:172:7)
14 verbose stack at maybeClose (internal/child_process.js:827:16)
14 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:211:5)
15 verbose pkgid PeerTweet@0.1.1
16 verbose cwd /home/l4p1n/peer-tweet
17 error Linux 4.2.0-36-generic
18 error argv "/usr/bin/nodejs" "/usr/bin/npm" "run" "start-hot"
19 error node v4.4.4
20 error npm v3.8.9
21 error code ELIFECYCLE
22 error PeerTweet@0.1.1 start-hot: `cross-env HOT=1 NODE_ENV=development electron ./`
22 error Exit status 1
23 error Failed at the PeerTweet@0.1.1 start-hot script 'cross-env HOT=1 NODE_ENV=development electron ./'.
23 error Make sure you have the latest version of node.js and npm installed.
23 error If you do, this is most likely a problem with the PeerTweet package,
23 error not with npm itself.
23 error Tell the author that this fails on your system:
23 error cross-env HOT=1 NODE_ENV=development electron ./
23 error You can get information on how to open an issue for this project with:
23 error npm bugs PeerTweet
23 error Or if that isn't available, you can get their info via:
23 error npm owner ls PeerTweet
23 error There is likely additional logging output above.
24 verbose exit [ 1, true ]
I guess I have to run npm install --dev but I prefer to ask to make sure.
Did it work with --dev? I don't have a linux distro to test this on.
So. I cloned a fresh copy of the repo, ran
npm install --dev
npm install electron-debug
Then started the server with npm run hot-server and started the client npm run start-hot. Until there everything is fine.
I've got another problem with the client saying in the devtools
Error: Module version mismatch. Expected 47, got 46.
I've got no idea of what going on.
The problem is that you need to install the native modules: https://github.com/lmatteis/peer-tweet#installing-native-modules
I'm not sure how to do that in linux.
Everything works. It works better if I read the README.md properly :joy:
|
gharchive/issue
| 2016-05-19T05:56:22 |
2025-04-01T06:39:26.822633
|
{
"authors": [
"lapin-b",
"lmatteis"
],
"repo": "lmatteis/peer-tweet",
"url": "https://github.com/lmatteis/peer-tweet/issues/11",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1881944269
|
Formatting the LNBits Frontpage Description
Issue:
When entering the LNBits frontpage there is a description, as you can see in the following screenshot:
These lines have no formatting rules, so they just run into each other. It would be much nicer to read if the information were presented in a well-formatted layout.
Solution:
My suggestion would be a rich text editor, or WYSIWYG editor (What You See Is What You Get). That would allow even the least technical user to edit the text.
Output:
A well arranged/readable start page with important information about the server
this can be set in the manage server section
In the manage server section I'm able to edit the text, but it's not displayed like that on the LNBits frontpage.
see screenshots:
Try it with :
<h2>Family Bank</h2>
<h4>Playground</h4>
<p>For testing extensions and other stuff</p>
<p>**************************************</p>
<p>Do not store large amounts here</p>
Ok, this is not nice :) Could it also allow paragraphs?
Family Bank
Playground
For testing extensions and other stuff
**************************************
Do not store large amounts here
If it worked this way it would be a solution for me and a bunch of people.
On the other hand, everybody is familiar with simple formatting rules from common programs like Gmail and Telegram, and also here on GitHub. In my opinion the aim should be to remove every unneeded hurdle to presenting a well-arranged, readable start page with the important information about the server.
It's simple HTML syntax! GitHub uses Markdown...
Maybe adding Markdown support in the future is an option
https://markdoc.dev/
I did it here: https://github.com/lnbits/events/pull/10
@arcbtc worth doing it for the frontpage description also?
I think it's worth it; it's going to be useful for other extensions as well.
Really? People need to know Markdown or HTML to put in a description?
Thanks for your work. It's now working with ease.
Please close this issue.
|
gharchive/issue
| 2023-09-05T13:05:51 |
2025-04-01T06:39:26.839311
|
{
"authors": [
"DoktorShift",
"arbadacarbaYK",
"bitkarrot",
"dni",
"talvasconcelos"
],
"repo": "lnbits/lnbits",
"url": "https://github.com/lnbits/lnbits/issues/1912",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1594910592
|
possible to support https://github.com/bytedance/terarkdb
Please support this as a storage backend:
https://github.com/bytedance/terarkdb
No, it will not be supported.
First, it is a RocksDB-based KV store, and storing Raft logs in KV stores is always wasteful. We have a much better storage engine for the logs; it is available at:
https://github.com/lni/dragonboat/tree/master/internal/tan
This engine, called tan, doesn't force you to have keys, why would you need to touch or construct trillions of keys when the leader just wants to stream continuous entries to followers. It doesn't do compactions, as the log entries in raft are mostly append only. You also avoid some write amplification when you stop writing logs twice - you don't need to log your log. Its memtable is another redundant component when we already have an in memory log storage inside the raft implementation - inserting into that skiplist based memtable eats a huge chunk of your CPU cycles when you have millions of entries per second.
I'd be willing to bet that tan is at least 20-30% faster than your suggested library when used for storing raft logs.
Secondly, that suggested library is C++ based.
Think @kolinfluence you should add it.
|
gharchive/issue
| 2023-02-22T11:09:31 |
2025-04-01T06:39:26.843067
|
{
"authors": [
"dioptre",
"lni",
"ultperf"
],
"repo": "lni/dragonboat",
"url": "https://github.com/lni/dragonboat/issues/273",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1891620789
|
Support external node registry functions
This PR allows clients to provide a NodeRegistryFactory function in the Expert config section which will be used to resolve nodes.
This is useful for clients who want to create and manage a node discovery service externally (so it can be used for other things) but still have the dragonboat library use it for dynamic node discovery.
Also adds a test for this new functionality.
Fixes: https://github.com/lni/dragonboat/issues/326
Codecov Report
Patch coverage is 50.00% of modified lines.
Files Changed    Coverage
node.go          ø
nodehost.go      50.00%
Thanks for the PR.
Could you please have a look at the review comments above. There are also some data race errors when running the new test, log pasted below.
=== RUN TestExternalNodeRegistryFunction
2023-09-19 08:31:38.652923 I | dragonboat: go version: go1.19.13, linux/amd64
2023-09-19 08:31:38.652958 I | dragonboat: dragonboat version: 4.0.0 (Dev)
2023-09-19 08:31:38.653001 W | config: mutual TLS disabled, communication is insecure
2023-09-19 08:31:38.653134 I | config: using default EngineConfig
2023-09-19 08:31:38.653166 I | config: using default LogDBConfig
2023-09-19 08:31:38.653248 I | dragonboat: DeploymentID set to 1
2023-09-19 08:31:38.660302 I | dragonboat: LogDB info received, shard 0, busy false
2023-09-19 08:31:38.665674 I | dragonboat: LogDB info received, shard 1, busy false
2023-09-19 08:31:38.669765 I | dragonboat: LogDB info received, shard 2, busy false
2023-09-19 08:31:38.674620 I | dragonboat: LogDB info received, shard 3, busy false
2023-09-19 08:31:38.677094 W | gossip: memberlist: Was able to connect to 123e4567-e89b-12d3-a456-426614174000 but other probes failed, network may be misconfigured
2023-09-19 08:31:38.679718 I | dragonboat: LogDB info received, shard 4, busy false
2023-09-19 08:31:38.684090 I | dragonboat: LogDB info received, shard 5, busy false
2023-09-19 08:31:38.689280 I | dragonboat: LogDB info received, shard 6, busy false
2023-09-19 08:31:38.693791 I | dragonboat: LogDB info received, shard 7, busy false
2023-09-19 08:31:38.699416 I | dragonboat: LogDB info received, shard 8, busy false
2023-09-19 08:31:38.704158 I | dragonboat: LogDB info received, shard 9, busy false
2023-09-19 08:31:38.709267 I | dragonboat: LogDB info received, shard 10, busy false
2023-09-19 08:31:38.713627 I | dragonboat: LogDB info received, shard 11, busy false
2023-09-19 08:31:38.718071 I | dragonboat: LogDB info received, shard 12, busy false
2023-09-19 08:31:38.722903 I | dragonboat: LogDB info received, shard 13, busy false
2023-09-19 08:31:38.728906 I | dragonboat: LogDB info received, shard 14, busy false
2023-09-19 08:31:38.733055 I | dragonboat: LogDB info received, shard 15, busy false
2023-09-19 08:31:38.733422 I | logdb: using plain logdb
2023-09-19 08:31:38.734863 I | dragonboat: logdb memory limit: 8192 MBytes
2023-09-19 08:31:38.735371 I | dragonboat: NodeHost ID: 123e4567-e89b-12d3-a456-426614174000
2023-09-19 08:31:38.735401 I | dragonboat: Expert.NodeRegistryFactory was set: using custom registry
2023-09-19 08:31:38.735440 I | dragonboat: filesystem error injection mode enabled: false
2023-09-19 08:31:38.736034 I | transport: transport type: go-tcp-transport
2023-09-19 08:31:38.737214 I | dragonboat: transport type: go-tcp-transport
2023-09-19 08:31:38.737253 I | dragonboat: logdb type: sharded-pebble
2023-09-19 08:31:38.737296 I | dragonboat: nodehost address: localhost:26001
2023-09-19 08:31:38.737322 I | dragonboat: go version: go1.19.13, linux/amd64
2023-09-19 08:31:38.737372 I | dragonboat: dragonboat version: 4.0.0 (Dev)
2023-09-19 08:31:38.737395 W | config: mutual TLS disabled, communication is insecure
2023-09-19 08:31:38.737490 I | config: using default EngineConfig
2023-09-19 08:31:38.737533 I | config: using default LogDBConfig
2023-09-19 08:31:38.737617 I | dragonboat: DeploymentID set to 1
2023-09-19 08:31:38.743690 I | dragonboat: LogDB info received, shard 0, busy false
2023-09-19 08:31:38.748062 I | dragonboat: LogDB info received, shard 1, busy false
2023-09-19 08:31:38.752225 I | dragonboat: LogDB info received, shard 2, busy false
2023-09-19 08:31:38.757578 I | dragonboat: LogDB info received, shard 3, busy false
2023-09-19 08:31:38.762896 I | dragonboat: LogDB info received, shard 4, busy false
2023-09-19 08:31:38.767749 I | dragonboat: LogDB info received, shard 5, busy false
2023-09-19 08:31:38.772257 I | dragonboat: LogDB info received, shard 6, busy false
2023-09-19 08:31:38.777120 I | dragonboat: LogDB info received, shard 7, busy false
2023-09-19 08:31:38.784119 I | dragonboat: LogDB info received, shard 8, busy false
2023-09-19 08:31:38.788517 I | dragonboat: LogDB info received, shard 9, busy false
2023-09-19 08:31:38.793300 I | dragonboat: LogDB info received, shard 10, busy false
2023-09-19 08:31:38.799423 I | dragonboat: LogDB info received, shard 11, busy false
2023-09-19 08:31:38.803587 I | dragonboat: LogDB info received, shard 12, busy false
2023-09-19 08:31:38.808046 I | dragonboat: LogDB info received, shard 13, busy false
2023-09-19 08:31:38.812889 I | dragonboat: LogDB info received, shard 14, busy false
2023-09-19 08:31:38.818779 I | dragonboat: LogDB info received, shard 15, busy false
2023-09-19 08:31:38.819076 I | logdb: using plain logdb
2023-09-19 08:31:38.820205 I | dragonboat: logdb memory limit: 8192 MBytes
2023-09-19 08:31:38.820907 I | dragonboat: NodeHost ID: 123e4567-e89b-12d3-a456-426614174001
2023-09-19 08:31:38.820938 I | dragonboat: Expert.NodeRegistryFactory was set: using custom registry
2023-09-19 08:31:38.820980 I | dragonboat: filesystem error injection mode enabled: false
2023-09-19 08:31:38.821880 I | transport: transport type: go-tcp-transport
2023-09-19 08:31:38.822814 I | dragonboat: transport type: go-tcp-transport
2023-09-19 08:31:38.822883 I | dragonboat: logdb type: sharded-pebble
2023-09-19 08:31:38.822919 I | dragonboat: nodehost address: localhost:26002
2023-09-19 08:31:38.826387 I | dragonboat: [00001:00001] replaying raft logs
2023-09-19 08:31:38.826569 I | raft: [00001:00001] created, initial: true, new: true
2023-09-19 08:31:38.826615 W | config: ElectionRTT is not a magnitude larger than HeartbeatRTT
2023-09-19 08:31:38.826656 I | raft: [00001:00001] raft log rate limit enabled: false, 0
2023-09-19 08:31:38.826715 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00001] t0 became follower
2023-09-19 08:31:38.826801 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00001] t1 became follower
2023-09-19 08:31:38.826860 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00001] t1 added bootstrap ConfigChangeAddNode, 1, 123e4567-e89b-12d3-a456-426614174000
2023-09-19 08:31:38.826919 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00001] t1 added bootstrap ConfigChangeAddNode, 2, 123e4567-e89b-12d3-a456-426614174001
2023-09-19 08:31:38.827428 I | rsm: [00001:00001] no snapshot available during launch
2023-09-19 08:31:38.827563 I | dragonboat: [00001:00001] initialized using <00001:00001:0>
2023-09-19 08:31:38.827605 I | dragonboat: [00001:00001] initial index set to 0
2023-09-19 08:31:38.830797 I | dragonboat: [00001:00002] replaying raft logs
2023-09-19 08:31:38.831038 I | raft: [00001:00002] created, initial: true, new: true
2023-09-19 08:31:38.831088 W | config: ElectionRTT is not a magnitude larger than HeartbeatRTT
2023-09-19 08:31:38.831138 I | raft: [00001:00002] raft log rate limit enabled: false, 0
2023-09-19 08:31:38.831323 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00002] t0 became follower
2023-09-19 08:31:38.831408 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00002] t1 became follower
2023-09-19 08:31:38.831480 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00002] t1 added bootstrap ConfigChangeAddNode, 1, 123e4567-e89b-12d3-a456-426614174000
2023-09-19 08:31:38.831546 I | raft: [f:1,l:0,t:0,c:0,a:0] [00001:00002] t1 added bootstrap ConfigChangeAddNode, 2, 123e4567-e89b-12d3-a456-426614174001
2023-09-19 08:31:38.833185 I | rsm: [00001:00002] no snapshot available during launch
2023-09-19 08:31:38.833398 I | dragonboat: [00001:00002] initialized using <00001:00002:0>
2023-09-19 08:31:38.833461 I | dragonboat: [00001:00002] initial index set to 0
2023-09-19 08:31:38.834893 I | rsm: [00001:00002] applied ADD ccid 0 (1), n00001 (123e4567-e89b-12d3-a456-426614174000)
2023-09-19 08:31:38.835034 I | rsm: [00001:00002] applied ADD ccid 0 (2), n00002 (123e4567-e89b-12d3-a456-426614174001)
2023-09-19 08:31:38.837618 W | dragonboat: [00001:00001] had 2 LocalTick msgs in one batch
2023-09-19 08:31:38.838604 I | rsm: [00001:00001] applied ADD ccid 0 (1), n00001 (123e4567-e89b-12d3-a456-426614174000)
2023-09-19 08:31:38.838684 I | rsm: [00001:00001] applied ADD ccid 0 (2), n00002 (123e4567-e89b-12d3-a456-426614174001)
2023-09-19 08:31:38.853533 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00002] t2 became candidate
2023-09-19 08:31:38.853619 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00002] t2 received RequestVoteResp from n00002
2023-09-19 08:31:38.853673 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00002] t2 sent RequestVote to n00001
2023-09-19 08:31:38.857429 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00001] t1 received RequestVote with higher term (2) from n00002
2023-09-19 08:31:38.857485 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00001] t1 become followerKE after receiving higher term from n00002
2023-09-19 08:31:38.857671 I | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00001] t2 became follower
2023-09-19 08:31:38.857779 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00001] t2 cast vote from n00002 index 2 term 2, log term: 1
2023-09-19 08:31:38.860333 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00002] t2 received RequestVoteResp from n00001
2023-09-19 08:31:38.860407 W | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00002] t2 received 2 votes and 0 rejections, quorum is 2
2023-09-19 08:31:38.860478 I | raft: [f:1,l:2,t:1,c:2,a:2] [00001:00002] t2 became leader
2023-09-19 08:31:38.935478 E | transport: send batch failed, target localhost:26002 (write tcp 127.0.0.1:37262->127.0.0.1:26002: write: connection reset by peer), 2
2023-09-19 08:31:38.935607 W | transport: breaker 123e4567-e89b-12d3-a456-426614174000 to localhost:26002 failed, connect and process failed: write tcp 127.0.0.1:37262->127.0.0.1:26002: write: connection reset by peer
2023-09-19 08:31:38.935682 W | transport: localhost:26002 became unreachable, affected 1 nodes
==================
WARNING: DATA RACE
Write at 0x00c0000eb110 by goroutine 6651:
runtime.mapassign_faststr()
/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/map_faststr.go:203 +0x0
github.com/lni/dragonboat/v4.TestExternalNodeRegistryFunction()
/home/runner/work/dragonboat/dragonboat/nodehost_test.go:1320 +0xef7
testing.tRunner()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1493 +0x47
Previous read at 0x00c0000eb110 by goroutine 6961:
runtime.mapaccess1_faststr()
/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/map_faststr.go:13 +0x0
github.com/lni/dragonboat/v4.(*testRegistry).Resolve()
/home/runner/work/dragonboat/dragonboat/nodehost_test.go:1208 +0xe4
github.com/lni/dragonboat/v4/internal/transport.(*Transport).send()
/home/runner/work/dragonboat/dragonboat/internal/transport/transport.go:361 +0xba
github.com/lni/dragonboat/v4/internal/transport.(*Transport).Send()
/home/runner/work/dragonboat/dragonboat/internal/transport/transport.go:347 +0x68
github.com/lni/dragonboat/v4.(*NodeHost).sendMessage()
/home/runner/work/dragonboat/dragonboat/nodehost.go:1881 +0xf4
github.com/lni/dragonboat/v4.(*NodeHost).sendMessage-fm()
<autogenerated>:1 +0x84
github.com/lni/dragonboat/v4.(*node).sendMessages()
/home/runner/work/dragonboat/dragonboat/node.go:1011 +0x1b6
github.com/lni/dragonboat/v4.(*node).processRaftUpdate()
/home/runner/work/dragonboat/dragonboat/node.go:1108 +0xb3
github.com/lni/dragonboat/v4.(*engine).processSteps()
/home/runner/work/dragonboat/dragonboat/engine.go:1353 +0x804
github.com/lni/dragonboat/v4.(*engine).stepWorkerMain()
/home/runner/work/dragonboat/dragonboat/engine.go:1254 +0x5e6
github.com/lni/dragonboat/v4.newExecEngine.func1()
/home/runner/work/dragonboat/dragonboat/engine.go:1047 +0x98
github.com/lni/goutils/syncutil.(*Stopper).runWorker.func1()
/home/runner/go/pkg/mod/github.com/lni/goutils@v1.3.1-0.20220604063047-388d67b4dbc4/syncutil/stopper.go:79 +0x12e
Goroutine 6651 (running) created at:
testing.(*T).Run()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1493 +0x75d
testing.runTests.func1()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1846 +0x99
testing.tRunner()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1446 +0x216
testing.runTests()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1844 +0x7ec
testing.(*M).Run()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1726 +0xa84
main.main()
_testmain.go:675 +0x2e9
Goroutine 6961 (running) created at:
github.com/lni/goutils/syncutil.(*Stopper).runWorker()
/home/runner/go/pkg/mod/github.com/lni/goutils@v1.3.1-0.20220604063047-388d67b4dbc4/syncutil/stopper.go:74 +0x19a
github.com/lni/goutils/syncutil.(*Stopper).RunWorker()
/home/runner/go/pkg/mod/github.com/lni/goutils@v1.3.1-0.20220604063047-388d67b4dbc4/syncutil/stopper.go:68 +0xef
github.com/lni/dragonboat/v4.newExecEngine()
/home/runner/work/dragonboat/dragonboat/engine.go:1037 +0xa19
github.com/lni/dragonboat/v4.NewNodeHost()
/home/runner/work/dragonboat/dragonboat/nodehost.go:366 +0x1486
github.com/lni/dragonboat/v4.TestExternalNodeRegistryFunction()
/home/runner/work/dragonboat/dragonboat/nodehost_test.go:1266 +0x8f7
testing.tRunner()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1493 +0x47
==================
==================
WARNING: DATA RACE
Write at 0x00c00048d178 by goroutine 6651:
github.com/lni/dragonboat/v4.TestExternalNodeRegistryFunction()
/home/runner/work/dragonboat/dragonboat/nodehost_test.go:1320 +0xf38
testing.tRunner()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1493 +0x47
Previous read at 0x00c00048d178 by goroutine 6961:
github.com/lni/dragonboat/v4.(*testRegistry).Resolve()
/home/runner/work/dragonboat/dragonboat/nodehost_test.go:1208 +0xee
github.com/lni/dragonboat/v4/internal/transport.(*Transport).send()
/home/runner/work/dragonboat/dragonboat/internal/transport/transport.go:361 +0xba
github.com/lni/dragonboat/v4/internal/transport.(*Transport).Send()
/home/runner/work/dragonboat/dragonboat/internal/transport/transport.go:347 +0x68
github.com/lni/dragonboat/v4.(*NodeHost).sendMessage()
/home/runner/work/dragonboat/dragonboat/nodehost.go:1881 +0xf4
github.com/lni/dragonboat/v4.(*NodeHost).sendMessage-fm()
<autogenerated>:1 +0x84
github.com/lni/dragonboat/v4.(*node).sendMessages()
/home/runner/work/dragonboat/dragonboat/node.go:1011 +0x1b6
github.com/lni/dragonboat/v4.(*node).processRaftUpdate()
/home/runner/work/dragonboat/dragonboat/node.go:1108 +0xb3
github.com/lni/dragonboat/v4.(*engine).processSteps()
/home/runner/work/dragonboat/dragonboat/engine.go:1353 +0x804
github.com/lni/dragonboat/v4.(*engine).stepWorkerMain()
/home/runner/work/dragonboat/dragonboat/engine.go:1254 +0x5e6
github.com/lni/dragonboat/v4.newExecEngine.func1()
/home/runner/work/dragonboat/dragonboat/engine.go:1047 +0x98
github.com/lni/goutils/syncutil.(*Stopper).runWorker.func1()
/home/runner/go/pkg/mod/github.com/lni/goutils@v1.3.1-0.20220604063047-388d67b4dbc4/syncutil/stopper.go:79 +0x12e
Goroutine 6651 (running) created at:
testing.(*T).Run()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1493 +0x75d
testing.runTests.func1()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1846 +0x99
testing.tRunner()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1446 +0x216
testing.runTests()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1844 +0x7ec
testing.(*M).Run()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1726 +0xa84
main.main()
_testmain.go:675 +0x2e9
Goroutine 6961 (running) created at:
github.com/lni/goutils/syncutil.(*Stopper).runWorker()
/home/runner/go/pkg/mod/github.com/lni/goutils@v1.3.1-0.20220604063047-388d67b4dbc4/syncutil/stopper.go:74 +0x19a
github.com/lni/goutils/syncutil.(*Stopper).RunWorker()
/home/runner/go/pkg/mod/github.com/lni/goutils@v1.3.1-0.20220604063047-388d67b4dbc4/syncutil/stopper.go:68 +0xef
github.com/lni/dragonboat/v4.newExecEngine()
/home/runner/work/dragonboat/dragonboat/engine.go:1037 +0xa19
github.com/lni/dragonboat/v4.NewNodeHost()
/home/runner/work/dragonboat/dragonboat/nodehost.go:366 +0x1486
github.com/lni/dragonboat/v4.TestExternalNodeRegistryFunction()
/home/runner/work/dragonboat/dragonboat/nodehost_test.go:1266 +0x8f7
testing.tRunner()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
/opt/hostedtoolcache/go/1.19.13/x64/src/testing/testing.go:1493 +0x47
==================
Oh missed the data race one -- taking a look at that now.
OK, fixed the data race too.
Cool, thanks.
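The race reported above comes from the test's registry map being read by Resolve on a transport goroutine while the test goroutine writes to it. A common way to fix that is to guard the map with a sync.RWMutex; the sketch below illustrates the pattern (types and names here are illustrative, not dragonboat's actual API):

```go
package main

import (
	"fmt"
	"sync"
)

// testRegistry sketches the fix pattern: the map that Resolve reads from a
// transport goroutine is guarded by a RWMutex, so concurrent writes from
// the test goroutine no longer race with it.
type testRegistry struct {
	mu      sync.RWMutex
	targets map[string]string
}

// Add takes the write lock before mutating the map.
func (r *testRegistry) Add(nhid, addr string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.targets[nhid] = addr
}

// Resolve takes the read lock, so many readers can proceed in parallel.
func (r *testRegistry) Resolve(nhid string) (string, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	addr, ok := r.targets[nhid]
	return addr, ok
}

func main() {
	r := &testRegistry{targets: make(map[string]string)}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) { // concurrent writers, as in the failing test
			defer wg.Done()
			r.Add(fmt.Sprintf("nh-%d", i), "localhost:26001")
		}(i)
	}
	wg.Wait()
	addr, ok := r.Resolve("nh-3")
	fmt.Println(addr, ok)
}
```

Run with `go run -race` to confirm the detector stays quiet.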
|
gharchive/pull-request
| 2023-09-12T03:56:43 |
2025-04-01T06:39:26.854034
|
{
"authors": [
"codecov-commenter",
"lni",
"tylerwilliams"
],
"repo": "lni/dragonboat",
"url": "https://github.com/lni/dragonboat/pull/327",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
167150525
|
Is --keep-until-expiring needed?
The renew command only generates new certificates if they are near expiry. See https://certbot.eff.org/docs/using.html#command-line-options
Thanks for all your fixes! :D
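Since renew already skips certificates that are not close to expiry, a daily trigger is sufficient. A minimal sketch of a systemd service/timer pair (unit names and the certbot path are assumptions, not this repo's actual units):

```ini
# certbot-renew.service
[Unit]
Description=Renew Let's Encrypt certificates

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --quiet

# certbot-renew.timer
[Unit]
Description=Daily certbot renewal check

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now certbot-renew.timer`.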
|
gharchive/issue
| 2016-07-22T23:22:03 |
2025-04-01T06:39:26.856905
|
{
"authors": [
"emersion"
],
"repo": "lnicola/certbot-systemd-nginx",
"url": "https://github.com/lnicola/certbot-systemd-nginx/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
129181205
|
handle multiple events
The site is currently showing info for the Feb event, which is cool, but people probably still want to know what is going to be happening tonight at the January event.
Can we switch it back to Jan? I've checked my phone so many times on the way to the event to remind myself what the talks are going to be (and where it is!)
https://github.com/lnug/lnug.github.io/pull/104/files
On 27 January 2016 at 16:19, lnugbot notifications@github.com wrote:
Yerp,
Agreed. Will take a look ASAP
I'm on a train into London atm. Will take a look when I can find somewhere dry.
|
gharchive/issue
| 2016-01-27T15:43:29 |
2025-04-01T06:39:26.863865
|
{
"authors": [
"clarkie",
"simonmcmanus"
],
"repo": "lnug/lnug.github.io",
"url": "https://github.com/lnug/lnug.github.io/issues/103",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
183642051
|
[Error: no commits found]
I'm getting the following error on a mac system:
[Error: no commits found]
There were 3 commits in the repo when I first saw the issue, and I added two more commits to test, but it still wouldn't work.
Hmm that's odd. Could you let me know what version of git you're using? Also, when you run the following command, what is outputted?
git log -E --format=%H%n%s%n%b%n===END===
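For context, that format string emits the commit hash, subject, and body separated by newlines, with an ===END=== sentinel between commits. A minimal sketch of splitting such output (the shas and messages below are invented sample data, not real output):

```python
# Hypothetical sample of `git log --format=%H%n%s%n%b%n===END===` output.
raw = (
    "abc123\nfix: handle empty input\nlonger body text\n===END===\n"
    "def456\nfeat: add flag\n\n===END===\n"
)

commits = []
for chunk in raw.split("===END===\n"):
    if not chunk.strip():
        continue  # trailing empty chunk after the last sentinel
    sha, subject, *body = chunk.rstrip("\n").split("\n")
    commits.append({"hash": sha, "subject": subject, "body": "\n".join(body)})
```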
I realized the issue was that the particular developer wasn't pushing with --tags, so local tags were not being pushed to the remote.
|
gharchive/issue
| 2016-10-18T10:02:32 |
2025-04-01T06:39:26.903321
|
{
"authors": [
"akashdeep-singh",
"robinjoseph08"
],
"repo": "lob/generate-changelog",
"url": "https://github.com/lob/generate-changelog/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2357228338
|
SELF-302: IconButton Variant
JIRA
SELF-343
Description
Adds IconButton text variant
Add Skeleton component
Screenshots
Reviewer Checklist
This section is to be filled out by reviewers
Testing
[ ] This code was tested by somebody other than the developer. Do not merge until this has been done.
Hey Nate! I'd like to make one suggestion - can we call this something more generic since we plan to build on this? Maybe like stylized button or something like that? This saves us having to change it in the dashboard later
Hey!
Are you referring to the name IconButton?
|
gharchive/pull-request
| 2024-06-17T12:29:21 |
2025-04-01T06:39:26.915183
|
{
"authors": [
"NateWaldschmidt",
"shannamurry"
],
"repo": "lob/ui-components",
"url": "https://github.com/lob/ui-components/pull/517",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1056133558
|
Does the isRunning method work correctly?
I'm trying to make a simple e2e test (Java + Spring Framework) which checks our API by stopping the Localstack instance, sending a message to the broker instance, and finally asserting the HTTP error response code. This test is part of a bigger test suite with the DirtiesContext annotation (in after-each-method mode).
Our Localstack bean is customized. In the Spring configuration we defined a bean with custom init and destroy methods. The init method is posted below; the destroy method just sends purge requests to all queues. We don't want to stop the Localstack instance - time optimization.
Init method:
if (!localstack.isRunning()) {
    localstack.startup(LOCALSTACK_CONFIGURATION);
    Runtime.getRuntime().addShutdownHook(new Thread(localstack::stop));
}
After localstack.stop(), our init method will never work because the isRunning method always returns true even when docker has no running containers (docker ps returns an empty list).
If the Localstack object (unfortunately a static object) has a non-null localStackContainer instance, the isRunning method returns true (with an empty list of available ports underneath). It seems the stop method does not unset the localStackContainer field?
Container.isRunning method:
try {
    new PortCommand(containerId).execute();
    return true;
} catch (Exception e) {
    return false;
}
Could you allow unsetting the localStackContainer field, or just unset it in the stop method itself? We just want to find out (using the isRunning method) whether the docker container is running, to avoid unnecessary Localstack restarts between single tests (using the DirtiesContext annotation).
This would be a unit test for this fix:
localstack.start();
localstack.stop();
assertFalse(localstack.isRunning());
Could you apply the following changes?
In cloud.localstack.Localstack:
public void stop() {
    if (localStackContainer != null) {
        localStackContainer.stop();
        localStackContainer = null;
    }
    locked = false;
}
Unit test in cloud.localstack.dockerLocalstackDockerTest:
@Test
public void restart() {
    Localstack.INSTANCE.startup(DOCKER_CONFIG);
    Localstack.INSTANCE.stop();
    assertFalse(Localstack.INSTANCE.isRunning());
}
@whummer
Thanks for reporting @wojciechszymski , and apologies for the long delay. This is potentially related to #82 . We believe that this should be fixed in the meantime - a new version 0.2.20 has been pushed to Maven Central. Can you please give it a try with that version? Please keep us posted if the problem persists.. Thanks!
Hi! We just wanted to follow up on our last message to see whether your issue has been resolved. Were you able to get it working with the latest version of LocalStack? We would appreciate your feedback!
|
gharchive/issue
| 2021-11-17T13:29:32 |
2025-04-01T06:39:26.941428
|
{
"authors": [
"lakkeger",
"whummer",
"wojciechszymski"
],
"repo": "localstack/localstack-java-utils",
"url": "https://github.com/localstack/localstack-java-utils/issues/81",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
205309929
|
Back button, close button, buttons size, getValue
backButton - Back button component prop (optional).
closeButton - Close button component prop (optional).
backCloseSize - sets the size of back button & close button (default 20).
getValue - returns the current value.
Solves #6
Changes accepted, please re-review.
|
gharchive/pull-request
| 2017-02-04T01:08:59 |
2025-04-01T06:39:26.963613
|
{
"authors": [
"avishayil"
],
"repo": "localz/react-native-searchbar",
"url": "https://github.com/localz/react-native-searchbar/pull/10",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
357215137
|
GEOMESA-2386 Adding WPS module to FileSystem gs-plugin
Signed-off-by: Emilio Lahr-Vivaz elahrvivaz@ccri.com
I haven't been able to re-create the original issue, but bundles the jars needed for WPS in with the FSDS gs-plugin (where previously you had to also install the e.g. accumulo gs-plugin, or manually copy the correct jars)
|
gharchive/pull-request
| 2018-09-05T12:53:32 |
2025-04-01T06:39:26.964877
|
{
"authors": [
"elahrvivaz"
],
"repo": "locationtech/geomesa",
"url": "https://github.com/locationtech/geomesa/pull/2052",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2627522250
|
link opening without clicking it
Here I'm trying to append to the bullet point "door opening procedure": I want to press enter and ultimately write "fixed". So I tap the blank space after it. We can discuss the capture behavior here, but opening this link is definitely not my intention, and I didn't tap the link.
https://github.com/user-attachments/assets/6a0e8b30-cbf7-48ef-b4a6-6e447496065e
@Parth are you still able to reproduce this? I'm unable to reproduce so far
|
gharchive/issue
| 2024-10-31T18:09:47 |
2025-04-01T06:39:26.977819
|
{
"authors": [
"Parth",
"tvanderstad"
],
"repo": "lockbook/lockbook",
"url": "https://github.com/lockbook/lockbook/issues/3050",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
1770116368
|
Added model saving
It looks like the code to save the models was missing; I've added these three lines to main.py:
if args.save:
    model.save_pretrained(args.save)
    tokenizer.save_pretrained(args.save)
Thanks for the interest in our work! I have updated the repository to add support for this feature, which used a separate argument --save_model to allow custom demand of saving pruned models.
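A self-contained sketch of that conditional save, with FakeModel standing in for the Hugging Face model/tokenizer objects (save_pretrained here is a stand-in method, and --save_model matches the argument name the maintainer mentions):

```python
import argparse

class FakeModel:
    """Stand-in for a Hugging Face model/tokenizer; records the save path."""
    def __init__(self):
        self.saved_to = None
    def save_pretrained(self, path):
        self.saved_to = path

parser = argparse.ArgumentParser()
parser.add_argument("--save_model", default=None)
args = parser.parse_args(["--save_model", "out/pruned"])

model, tokenizer = FakeModel(), FakeModel()
if args.save_model:  # only save when a path was explicitly given
    model.save_pretrained(args.save_model)
    tokenizer.save_pretrained(args.save_model)
```

With the default of None, omitting --save_model skips saving entirely, which is the "custom demand" behavior described above.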
|
gharchive/pull-request
| 2023-06-22T17:28:45 |
2025-04-01T06:39:26.982048
|
{
"authors": [
"CoffeeVampir3",
"Eric-mingjie"
],
"repo": "locuslab/wanda",
"url": "https://github.com/locuslab/wanda/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1987443906
|
SocketIOUser not support send BINARY data
Prerequisites
[X] I am using the latest version of Locust
[X] I am suggesting a new feature, not asking a question
Description
In this function, users are prevented from sending data with OPCODE_BINARY, so I suggest exposing the websocket 'opcode' parameter to users as below.
# def send(self, body, name=None, context={}, opcode=websocket.ABNF.OPCODE_TEXT):
def send(self, body, name=None, context={}):
    if not name:
        if body == "2":
            name = "2 heartbeat"
        else:
            # hoping this is a subscribe type message, try to detect name
            m = re.search(r'(\d*)\["([a-z]*)"', body)
            assert m is not None
            code = m.group(1)
            action = m.group(2)
            url_part = re.search(r'"url": *"([^"]*)"', body)
            assert url_part is not None
            url = re.sub(r"/[0-9_]*/", "/:id/", url_part.group(1))
            name = f"{code} {action} url: {url}"
    self.environment.events.request.fire(
        request_type="WSS",
        name=name,
        response_time=None,
        response_length=len(body),
        exception=None,
        context={**self.context(), **context},
    )
    logging.debug(f"WSS: {body}")
    # self.ws.send(body, opcode)
    self.ws.send(body)
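To make the proposal concrete, here is a minimal, self-contained sketch. The opcode constants and FakeWebSocket are stand-ins for websocket-client internals, not the real locust-plugins API; the point is that defaulting the new parameter to the text opcode keeps existing callers unchanged while letting callers opt into binary frames:

```python
OPCODE_TEXT = 0x1    # stand-ins for websocket.ABNF.OPCODE_TEXT / OPCODE_BINARY
OPCODE_BINARY = 0x2

class FakeWebSocket:
    """Records what was sent so the behavior is easy to check."""
    def __init__(self):
        self.sent = []
    def send(self, body, opcode=OPCODE_TEXT):
        self.sent.append((body, opcode))

class SocketIOUserSketch:
    def __init__(self):
        self.ws = FakeWebSocket()
    def send(self, body, opcode=OPCODE_TEXT):
        # Defaulting to the text opcode keeps current behavior;
        # binary callers pass opcode explicitly.
        self.ws.send(body, opcode)

user = SocketIOUserSketch()
user.send('2["message"]')                      # text frame, as today
user.send(b"\x00\x01", opcode=OPCODE_BINARY)   # binary frame via new param
```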
👍 PR welcome! (technically this issue should be in locust-plugins but its ok :)
A PR has been created in locust-plugins: pr-151
Merged!
|
gharchive/issue
| 2023-11-10T11:26:23 |
2025-04-01T06:39:26.985573
|
{
"authors": [
"cyberw",
"luis-allan"
],
"repo": "locustio/locust",
"url": "https://github.com/locustio/locust/issues/2457",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
789298501
|
Feature chart sync
This PR improves the charts of the index and the report page with the following changes:
Stop reporting stats to the stats history when the runner is stopped
Use a shared template to build stats_history data and reload the charts data in the index.html and report.html templates
Fix the user count values in the Report chart tooltips
Fixes #1677
Awesome!
@cyberw Sorry, I need to add a new way to pass the data from python to the js without need to generate the instructions. I will make another PR
|
gharchive/pull-request
| 2021-01-19T19:32:14 |
2025-04-01T06:39:26.988371
|
{
"authors": [
"aek",
"cyberw"
],
"repo": "locustio/locust",
"url": "https://github.com/locustio/locust/pull/1678",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2653588969
|
[BUG] Can't add new role
After adding the first role, when a second role is added, role-already-exists is displayed in the chat box and the role cannot be added.
Incidentally, when checking the lectern at this point, the first role name is blank.
After restarting the server, the roles “permissions” and “players” appear on this screen.
In this state, after allowing role-editor-edit-manage-roles in permissions and assigning a player, the same operation cannot add a role.
I cannot reproduce it. Could you explain more specifically the steps to do it? Thanks!
What's the name of the role you want to create?
for example, I tried to name three times: "test", "税務官", and "aaaa".
On all three times, the server had been started by deleting the plugin data and installing the plugin each time.
Still cannot reproduce it. Both "test" and "aaaa" work fine, and "税務官" just gives the only-alphanumeric error and the role is not created. I'll take a deeper look into it and see if I find the issue.
In my environment, both CJK and alphabetic characters are logged as shown in the image.
Nothing is displayed in the console.
Oh, I found a conflict with a certain chat related plugin. It appears that the plugin is preventing the chat data from being passed to this plugin.
I will ask the author of the plugin that caused the problem.
|
gharchive/issue
| 2024-11-12T23:12:21 |
2025-04-01T06:39:26.992864
|
{
"authors": [
"DrJekyllH",
"lofi-enjoyer"
],
"repo": "lofi-enjoyer/NubladaTowns",
"url": "https://github.com/lofi-enjoyer/NubladaTowns/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1323370718
|
No way to disband parties?
I cannot seem to find a way to disband parties, as you cannot leave a party if you are the leader and there doesn't appear to be a party disband command
I am also having this issue.
|
gharchive/issue
| 2022-07-31T05:59:20 |
2025-04-01T06:39:26.993882
|
{
"authors": [
"MoneyRBK",
"Random-User-34"
],
"repo": "lofi-enjoyer/TownyElections",
"url": "https://github.com/lofi-enjoyer/TownyElections/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
851257259
|
Use stable rbac.authorization.k8s.io/v1 API
RBAC mode is stable since k8s v1.8.
We should also update the helm charts.
oh, new helm charts already use rbac.authorization.k8s.io/v1 💪
|
gharchive/pull-request
| 2021-04-06T09:53:08 |
2025-04-01T06:39:27.005274
|
{
"authors": [
"jorgebay"
],
"repo": "logdna/logdna-agent-v2",
"url": "https://github.com/logdna/logdna-agent-v2/pull/129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
165790067
|
Pypi install
@StephenHynes7 @eflanagan-r7 please review
+1
|
gharchive/pull-request
| 2016-07-15T13:45:46 |
2025-04-01T06:39:27.006248
|
{
"authors": [
"eflanagan-r7",
"stopal-r7"
],
"repo": "logentries/lecli",
"url": "https://github.com/logentries/lecli/pull/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1798560871
|
Reloading standalone page fails with 500 error
When reloading the standalone page (e.g. /chat/{address}/object/{objectId}/new) a 500 Internal Error is reported. In the console log there is a message 'no wallet'.
Since this PR the problem can be traced back to the check for having wallet defined in src/lib/objects/ui.svelte. The problem is when the app is restarted the state stores are reinitialized and while they are in their initial phases (e.g. loading = true) their content is not available.
It would be better to create a generic mechanism for waiting for all the stores to be loaded; otherwise every page has to implement a check that its dependent stores are loaded and display a loading screen, which makes it easy to introduce bugs on reload.
|
gharchive/issue
| 2023-07-11T09:52:51 |
2025-04-01T06:39:27.014617
|
{
"authors": [
"agazso"
],
"repo": "logos-innovation-lab/waku-objects-playground",
"url": "https://github.com/logos-innovation-lab/waku-objects-playground/issues/178",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
121991527
|
High CPU usage in logstash 2.0
Hi, we are using logstash 2.0 to collect logs. But we found it consumed too much CPU, over 300% (on an 8-core machine).
Here is the config:
input {
  kafka {
    group_id => "logstash"
    topic_id => "logstash"
    zk_connect => "192.168.1.101:2181,192.168.1.102:2181,192.168.1.103:2181/kafkalog"
  }
}
output {
  elasticsearch {
    hosts => [ "192.168.1.201:9200","192.168.1.202:9200","192.168.1.203:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    index_type => "%{type}"
    workers => 24
    template_overwrite => true
  }
}
Here is the top output for the logstash process:
45520 admin 20 0 5145m 770m 13m R 95.4 1.2 3919:16 <kafka
16493 admin 20 0 5145m 770m 13m S 9.9 1.2 10:00.87 <kafka
16494 admin 20 0 5145m 770m 13m R 8.0 1.2 10:22.48 <kafka
45611 admin 20 0 5145m 770m 13m S 6.0 1.2 219:50.52 <kafka
45536 admin 20 0 5145m 770m 13m R 4.0 1.2 295:32.40 >output
45548 admin 20 0 5145m 770m 13m S 4.0 1.2 105:37.91 >elasticsearch.
45608 admin 20 0 5145m 770m 13m S 4.0 1.2 47:13.84 >elasticsearch.
45635 admin 20 0 5145m 770m 13m S 4.0 1.2 47:29.15 >elasticsearch.
45638 admin 20 0 5145m 770m 13m S 4.0 1.2 47:09.60 >elasticsearch.
45363 admin 20 0 5145m 770m 13m S 2.0 1.2 88:07.72 java
45364 admin 20 0 5145m 770m 13m S 2.0 1.2 88:04.00 java
45365 admin 20 0 5145m 770m 13m S 2.0 1.2 88:13.39 java
45366 admin 20 0 5145m 770m 13m S 2.0 1.2 88:14.11 java
45367 admin 20 0 5145m 770m 13m S 2.0 1.2 88:12.28 java
45368 admin 20 0 5145m 770m 13m S 2.0 1.2 88:05.51 java
45372 admin 20 0 5145m 770m 13m S 2.0 1.2 39:54.07 java
45373 admin 20 0 5145m 770m 13m S 2.0 1.2 41:29.71 java
45543 admin 20 0 5145m 770m 13m S 2.0 1.2 107:13.18 >elasticsearch.
45544 admin 20 0 5145m 770m 13m S 2.0 1.2 106:05.68 >elasticsearch.
45545 admin 20 0 5145m 770m 13m S 2.0 1.2 105:52.78 >elasticsearch.
45546 admin 20 0 5145m 770m 13m S 2.0 1.2 105:43.77 >elasticsearch.
45549 admin 20 0 5145m 770m 13m S 2.0 1.2 106:16.93 >elasticsearch.
45550 admin 20 0 5145m 770m 13m S 2.0 1.2 105:44.84 >elasticsearch.
45552 admin 20 0 5145m 770m 13m S 2.0 1.2 106:15.47 >elasticsearch.
45554 admin 20 0 5145m 770m 13m S 2.0 1.2 106:38.47 >elasticsearch.
45555 admin 20 0 5145m 770m 13m S 2.0 1.2 106:31.77 >elasticsearch.
45557 admin 20 0 5145m 770m 13m S 2.0 1.2 105:34.25 >elasticsearch.
45558 admin 20 0 5145m 770m 13m R 2.0 1.2 105:55.13 >elasticsearch.
45561 admin 20 0 5145m 770m 13m S 2.0 1.2 106:28.72 >elasticsearch.
45562 admin 20 0 5145m 770m 13m S 2.0 1.2 106:50.73 >elasticsearch.
Here is the jstack output for thread 45520:
"<kafka" daemon prio=10 tid=0x00007ff37435b800 nid=0xb1d0 runnable [0x00007ff37bbe6000]
java.lang.Thread.State: RUNNABLE
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:134)
at rubyjit.LogStash::Inputs::Base$$stop?_5ecc17de0faba55421c72ac5c66b2d232a0c2171273061103.__file__(/home/admin/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.0.0-java/lib/logstash/inputs/base.rb:89)
at rubyjit.LogStash::Inputs::Base$$stop?_5ecc17de0faba55421c72ac5c66b2d232a0c2171273061103.__file__(/home/admin/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.0.0-java/lib/logstash/inputs/base.rb)
at org.jruby.internal.runtime.methods.JittedMethod.call(JittedMethod.java:141)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:134)
at org.jruby.ast.FCallNoArgNode.interpret(FCallNoArgNode.java:31)
at org.jruby.ast.CallNoArgNode.interpret(CallNoArgNode.java:60)
at org.jruby.ast.WhileNode.interpret(WhileNode.java:127)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at org.jruby.ast.RescueNode.executeBody(RescueNode.java:221)
at org.jruby.ast.RescueNode.interpret(RescueNode.java:116)
at org.jruby.ast.BeginNode.interpret(BeginNode.java:83)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:203)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:326)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:170)
at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
at org.jruby.ast.RescueNode.executeBody(RescueNode.java:221)
at org.jruby.ast.RescueNode.interpret(RescueNode.java:116)
at org.jruby.ast.EnsureNode.interpret(EnsureNode.java:96)
at org.jruby.ast.BeginNode.interpret(BeginNode.java:83)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:203)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:326)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:170)
at org.jruby.ast.FCallOneArgNode.interpret(FCallOneArgNode.java:36)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:112)
at org.jruby.runtime.Interpreted19Block.evalBlockBody(Interpreted19Block.java:206)
at org.jruby.runtime.Interpreted19Block.yield(Interpreted19Block.java:194)
at org.jruby.runtime.Interpreted19Block.call(Interpreted19Block.java:125)
at org.jruby.runtime.Block.call(Block.java:101)
at org.jruby.RubyProc.call(RubyProc.java:290)
at org.jruby.RubyProc.call(RubyProc.java:228)
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:99)
at java.lang.Thread.run(Thread.java:745)
@caipeichao are there any warning messages on your logs?
I am seeing this too
Threads: 159 total, 11 running, 148 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.4 us, 0.9 sy, 95.3 ni, 2.7 id, 0.0 wa, 0.0 hi, 0.6 si, 0.0 st
KiB Mem: 3801036 total, 1215884 used, 2585152 free, 107080 buffers
KiB Swap: 0 total, 0 used, 0 free. 613808 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4028 logstash 39 19 4465452 319472 19056 R 13.1 8.4 1:41.32 <kafka
4029 logstash 39 19 4465452 319472 19056 R 13.1 8.4 1:41.28 <kafka
4030 logstash 39 19 4465452 319472 19056 R 13.1 8.4 1:47.25 <kafka
4031 logstash 39 19 4465452 319472 19056 R 13.1 8.4 1:41.28 <kafka
4032 logstash 39 19 4465452 319472 19056 R 13.1 8.4 1:41.18 <kafka
4033 logstash 39 19 4465452 319472 19056 R 13.1 8.4 1:41.29 <kafka
4034 logstash 39 19 4465452 319472 19056 R 13.1 8.4 1:41.20 <kafka
4663 logstash 39 19 4465452 319472 19056 S 6.6 8.4 0:00.37 <kafka
4670 logstash 39 19 4465452 319472 19056 S 6.6 8.4 0:00.38 <kafka
3992 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:00.00 java
4006 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:04.60 LogStash::Runne
4007 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:00.00 java
4008 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:02.11 java
4009 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:07.26 java
4010 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:00.05 java
4011 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:00.01 java
4012 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:00.00 java
4013 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:00.00 java
4014 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:32.82 java
4015 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:07.53 java
4016 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:00.00 java
4017 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:00.65 java
4024 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:00.00 java
4025 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:02.38 LogStash::Runne
4027 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:00.00 LogStash::Runne
4038 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:00.12 <kafka
4039 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:00.10 <kafka
4040 logstash 39 19 4465452 319472 19056 S 0.0 8.4 0:01.89 |filterworker.
Thread 4034: (state = BLOCKED)
- org.jruby.RubyString.newUTF8String(org.jruby.Ruby, java.lang.String) @bci=16, line=544 (Compiled frame)
- org.jruby.RubyString.newUnicodeString(org.jruby.Ruby, java.lang.String) @bci=20, line=538 (Compiled frame)
- com.jrjackson.RubyUtils.rubyString(org.jruby.Ruby, java.lang.String) @bci=2, line=41 (Compiled frame)
- com.jrjackson.RubyStringNameConverter.convert(org.jruby.Ruby, java.lang.String) @bci=2, line=10 (Compiled frame)
- com.jrjackson.RubyHandler.hashKey(java.lang.String) @bci=9, line=56 (Compiled frame)
- com.jrjackson.JrParse.callHashKey(com.fasterxml.jackson.core.JsonStreamContext) @bci=22, line=83 (Compiled frame)
- com.jrjackson.JrParse.callAddValue(com.fasterxml.jackson.core.JsonStreamContext, org.jruby.runtime.builtin.IRubyObject) @bci=57, line=71 (Compiled frame)
- com.jrjackson.RubyHandler.treatString(com.fasterxml.jackson.core.JsonParser) @bci=9, line=96 (Compiled frame)
- com.jrjackson.JrParse.handleCurrentToken(com.fasterxml.jackson.core.JsonParser) @bci=156, line=111 (Compiled frame)
- com.jrjackson.JrParse.deserialize(com.fasterxml.jackson.core.JsonParser) @bci=9, line=31 (Compiled frame)
- com.jrjackson.JrJacksonRuby.__parse(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, com.jrjackson.RubyNameConverter, com.jrjackson.RubyConverter, com.jrjackson.RubyConverter) @bci=68, line=113 (Compiled frame)
- com.jrjackson.JrJacksonRuby.parse(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=138, line=63 (Compiled frame)
- com.jrjackson.JrJacksonRuby$INVOKER$s$2$0$parse.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=6 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=40, line=202 (Compiled frame)
- rubyjit.LogStash::Json$$jruby_load_d1327f24790a049fc38cd186bbd8240006a83e981028566121.chained_0_rescue_1$RUBY$SYNTHETIC__file__(rubyjit.LogStash::Json$$jruby_load_d1327f24790a049fc38cd186bbd8240006a83e981028566121, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Block) @bci=58, line=38 (Compiled frame)
- rubyjit.LogStash::Json$$jruby_load_d1327f24790a049fc38cd186bbd8240006a83e981028566121.__file__(rubyjit.LogStash::Json$$jruby_load_d1327f24790a049fc38cd186bbd8240006a83e981028566121, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Block) @bci=72 (Compiled frame)
- rubyjit.LogStash::Json$$jruby_load_d1327f24790a049fc38cd186bbd8240006a83e981028566121.__file__(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Block) @bci=6 (Compiled frame)
- org.jruby.ast.executable.AbstractScript.__file__(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=13, line=42 (Compiled frame)
- org.jruby.internal.runtime.methods.JittedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=35, line=181 (Compiled frame)
- org.jruby.internal.runtime.methods.DefaultMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=40, line=206 (Compiled frame)
- org.jruby.internal.runtime.methods.AliasMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=13, line=61 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=38, line=168 (Compiled frame)
- rubyjit.LogStash::Codecs::JSON$$decode_0583bafe20dcf5e4ae48ab2622babfda5eaaf8bf1028566121.chained_0_rescue_1$RUBY$SYNTHETIC__file__(rubyjit.LogStash::Codecs::JSON$$decode_0583bafe20dcf5e4ae48ab2622babfda5eaaf8bf1028566121, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=51, line=42 (Compiled frame)
- rubyjit.LogStash::Codecs::JSON$$decode_0583bafe20dcf5e4ae48ab2622babfda5eaaf8bf1028566121.__file__(rubyjit.LogStash::Codecs::JSON$$decode_0583bafe20dcf5e4ae48ab2622babfda5eaaf8bf1028566121, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=52, line=41 (Compiled frame)
- rubyjit.LogStash::Codecs::JSON$$decode_0583bafe20dcf5e4ae48ab2622babfda5eaaf8bf1028566121.__file__(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=6 (Compiled frame)
- org.jruby.internal.runtime.methods.JittedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=201 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.callBlock(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=40, line=177 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.callIter(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=8, line=188 (Compiled frame)
- rubyjit.LogStash::Inputs::Kafka$$queue_event_5c3938086d9dc3a9a8fcf149a74f4eb21b3ade7c1028566121.chained_0_rescue_1$RUBY$SYNTHETIC__file__(rubyjit.LogStash::Inputs::Kafka$$queue_event_5c3938086d9dc3a9a8fcf149a74f4eb21b3ade7c1028566121, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=83, line=171 (Compiled frame)
- rubyjit.LogStash::Inputs::Kafka$$queue_event_5c3938086d9dc3a9a8fcf149a74f4eb21b3ade7c1028566121.__file__(rubyjit.LogStash::Inputs::Kafka$$queue_event_5c3938086d9dc3a9a8fcf149a74f4eb21b3ade7c1028566121, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=31, line=170 (Compiled frame)
- rubyjit.LogStash::Inputs::Kafka$$queue_event_5c3938086d9dc3a9a8fcf149a74f4eb21b3ade7c1028566121.__file__(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=8 (Compiled frame)
- org.jruby.internal.runtime.methods.JittedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=37, line=221 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=40, line=202 (Compiled frame)
- org.jruby.ast.FCallTwoArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=31, line=38 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.ast.IfNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=71, line=116 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.WhileNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=46, line=131 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.ast.RescueNode.executeBody(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=21, line=221 (Compiled frame)
- org.jruby.ast.RescueNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=26, line=116 (Compiled frame)
- org.jruby.ast.BeginNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=9, line=83 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.RubyModule, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block, boolean) @bci=30, line=74 (Compiled frame)
- org.jruby.internal.runtime.methods.InterpretedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=82, line=182 (Compiled frame)
- org.jruby.internal.runtime.methods.DefaultMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=22, line=203 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=38, line=168 (Compiled frame)
- org.jruby.ast.CallOneArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=30, line=57 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.RescueNode.executeBody(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=21, line=221 (Compiled frame)
- org.jruby.ast.RescueNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=26, line=116 (Compiled frame)
- org.jruby.ast.EnsureNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=16, line=96 (Interpreted frame)
- org.jruby.ast.BeginNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=9, line=83 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.RubyModule, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block, boolean) @bci=30, line=74 (Compiled frame)
- org.jruby.internal.runtime.methods.InterpretedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=82, line=182 (Compiled frame)
- org.jruby.internal.runtime.methods.DefaultMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=22, line=203 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=38, line=168 (Compiled frame)
- org.jruby.ast.FCallOneArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=19, line=36 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=30, line=112 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.evalBlockBody(org.jruby.runtime.ThreadContext, org.jruby.runtime.Binding, org.jruby.runtime.builtin.IRubyObject) @bci=25, line=206 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.yield(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, boolean, org.jruby.runtime.Binding, org.jruby.runtime.Block$Type, org.jruby.runtime.Block) @bci=51, line=194 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Binding, org.jruby.runtime.Block$Type, org.jruby.runtime.Block) @bci=16, line=125 (Compiled frame)
- org.jruby.runtime.Block.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Block) @bci=15, line=101 (Compiled frame)
- org.jruby.RubyProc.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=68, line=300 (Compiled frame)
- org.jruby.RubyProc.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[]) @bci=7, line=230 (Interpreted frame)
- org.jruby.internal.runtime.RubyRunnable.run() @bci=146, line=99 (Interpreted frame)
- java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
Locked ownable synchronizers:
- None
Thread 4033: (state = IN_JAVA)
- org.jruby.ast.CallNoArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=15, line=60 (Compiled frame; information may be imprecise)
- org.jruby.ast.CallNoArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=15, line=60 (Compiled frame)
- org.jruby.ast.IfNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=110 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.WhileNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=46, line=131 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.ast.RescueNode.executeBody(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=21, line=221 (Compiled frame)
- org.jruby.ast.RescueNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=26, line=116 (Compiled frame)
- org.jruby.ast.BeginNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=9, line=83 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.RubyModule, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block, boolean) @bci=30, line=74 (Compiled frame)
- org.jruby.internal.runtime.methods.InterpretedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=82, line=182 (Compiled frame)
- org.jruby.internal.runtime.methods.DefaultMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=22, line=203 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=38, line=168 (Compiled frame)
- org.jruby.ast.CallOneArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=30, line=57 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.RescueNode.executeBody(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=21, line=221 (Compiled frame)
- org.jruby.ast.RescueNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=26, line=116 (Compiled frame)
- org.jruby.ast.EnsureNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=16, line=96 (Interpreted frame)
- org.jruby.ast.BeginNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=9, line=83 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.RubyModule, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block, boolean) @bci=30, line=74 (Compiled frame)
- org.jruby.internal.runtime.methods.InterpretedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=82, line=182 (Compiled frame)
- org.jruby.internal.runtime.methods.DefaultMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=22, line=203 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=38, line=168 (Compiled frame)
- org.jruby.ast.FCallOneArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=19, line=36 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=30, line=112 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.evalBlockBody(org.jruby.runtime.ThreadContext, org.jruby.runtime.Binding, org.jruby.runtime.builtin.IRubyObject) @bci=25, line=206 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.yield(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, boolean, org.jruby.runtime.Binding, org.jruby.runtime.Block$Type, org.jruby.runtime.Block) @bci=51, line=194 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Binding, org.jruby.runtime.Block$Type, org.jruby.runtime.Block) @bci=16, line=125 (Compiled frame)
- org.jruby.runtime.Block.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Block) @bci=15, line=101 (Compiled frame)
- org.jruby.RubyProc.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=68, line=300 (Compiled frame)
- org.jruby.RubyProc.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[]) @bci=7, line=230 (Interpreted frame)
- org.jruby.internal.runtime.RubyRunnable.run() @bci=146, line=99 (Interpreted frame)
- java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
Locked ownable synchronizers:
- None
Thread 4032: (state = IN_JAVA)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame; information may be imprecise)
- org.jruby.ast.WhileNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=46, line=131 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.ast.RescueNode.executeBody(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=21, line=221 (Compiled frame)
- org.jruby.ast.RescueNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=26, line=116 (Compiled frame)
- org.jruby.ast.BeginNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=9, line=83 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.RubyModule, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block, boolean) @bci=30, line=74 (Compiled frame)
- org.jruby.internal.runtime.methods.InterpretedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=82, line=182 (Compiled frame)
- org.jruby.internal.runtime.methods.DefaultMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=22, line=203 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyClass, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=57, line=326 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=50, line=170 (Compiled frame)
- org.jruby.ast.CallOneArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=30, line=57 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.RescueNode.executeBody(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=21, line=221 (Compiled frame)
- org.jruby.ast.RescueNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=26, line=116 (Compiled frame)
- org.jruby.ast.EnsureNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=16, line=96 (Interpreted frame)
- org.jruby.ast.BeginNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=9, line=83 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.RubyModule, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block, boolean) @bci=30, line=74 (Compiled frame)
- org.jruby.internal.runtime.methods.InterpretedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=82, line=182 (Compiled frame)
- org.jruby.internal.runtime.methods.DefaultMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=22, line=203 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyClass, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=57, line=326 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=50, line=170 (Compiled frame)
- org.jruby.ast.FCallOneArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=19, line=36 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=30, line=112 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.evalBlockBody(org.jruby.runtime.ThreadContext, org.jruby.runtime.Binding, org.jruby.runtime.builtin.IRubyObject) @bci=25, line=206 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.yield(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, boolean, org.jruby.runtime.Binding, org.jruby.runtime.Block$Type, org.jruby.runtime.Block) @bci=51, line=194 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Binding, org.jruby.runtime.Block$Type, org.jruby.runtime.Block) @bci=16, line=125 (Compiled frame)
- org.jruby.runtime.Block.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Block) @bci=15, line=101 (Compiled frame)
- org.jruby.RubyProc.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=68, line=300 (Compiled frame)
- org.jruby.RubyProc.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[]) @bci=7, line=230 (Interpreted frame)
- org.jruby.internal.runtime.RubyRunnable.run() @bci=146, line=99 (Interpreted frame)
- java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
Locked ownable synchronizers:
- None
Thread 4031: (state = IN_JAVA)
- org.jruby.ast.executable.RuntimeCache.getVariable(org.jruby.runtime.ThreadContext, int, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=6, line=186 (Compiled frame; information may be imprecise)
- org.jruby.ast.executable.AbstractScript.getVariable0(org.jruby.runtime.ThreadContext, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=8, line=262 (Compiled frame)
- rubyjit.LogStash::Inputs::Base$$stop?_5ecc17de0faba55421c72ac5c66b2d232a0c21711028566121.__file__(rubyjit.LogStash::Inputs::Base$$stop?_5ecc17de0faba55421c72ac5c66b2d232a0c21711028566121, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=11, line=89 (Compiled frame)
- rubyjit.LogStash::Inputs::Base$$stop?_5ecc17de0faba55421c72ac5c66b2d232a0c21711028566121.__file__(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=4 (Compiled frame)
- org.jruby.internal.runtime.methods.JittedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String) @bci=33, line=141 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=36, line=134 (Compiled frame)
- org.jruby.ast.FCallNoArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=7, line=31 (Compiled frame)
- org.jruby.ast.CallNoArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=15, line=60 (Compiled frame)
- org.jruby.ast.WhileNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=23, line=127 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.ast.RescueNode.executeBody(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=21, line=221 (Compiled frame)
- org.jruby.ast.RescueNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=26, line=116 (Compiled frame)
- org.jruby.ast.BeginNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=9, line=83 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.RubyModule, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block, boolean) @bci=30, line=74 (Compiled frame)
- org.jruby.internal.runtime.methods.InterpretedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=82, line=182 (Compiled frame)
- org.jruby.internal.runtime.methods.DefaultMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=22, line=203 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=38, line=168 (Compiled frame)
- org.jruby.ast.CallOneArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=30, line=57 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.RescueNode.executeBody(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=21, line=221 (Compiled frame)
- org.jruby.ast.RescueNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=26, line=116 (Compiled frame)
- org.jruby.ast.EnsureNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=16, line=96 (Interpreted frame)
- org.jruby.ast.BeginNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=9, line=83 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.RubyModule, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block, boolean) @bci=30, line=74 (Compiled frame)
- org.jruby.internal.runtime.methods.InterpretedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=82, line=182 (Compiled frame)
- org.jruby.internal.runtime.methods.DefaultMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=22, line=203 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=38, line=168 (Compiled frame)
- org.jruby.ast.FCallOneArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=19, line=36 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=30, line=112 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.evalBlockBody(org.jruby.runtime.ThreadContext, org.jruby.runtime.Binding, org.jruby.runtime.builtin.IRubyObject) @bci=25, line=206 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.yield(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, boolean, org.jruby.runtime.Binding, org.jruby.runtime.Block$Type, org.jruby.runtime.Block) @bci=51, line=194 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Binding, org.jruby.runtime.Block$Type, org.jruby.runtime.Block) @bci=16, line=125 (Compiled frame)
- org.jruby.runtime.Block.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Block) @bci=15, line=101 (Compiled frame)
- org.jruby.RubyProc.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=68, line=300 (Compiled frame)
- org.jruby.RubyProc.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[]) @bci=7, line=230 (Interpreted frame)
- org.jruby.internal.runtime.RubyRunnable.run() @bci=146, line=99 (Interpreted frame)
- java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
Locked ownable synchronizers:
- None
Thread 4030: (state = IN_JAVA)
Error occurred during stack walking:
java.lang.NullPointerException
at sun.jvm.hotspot.tools.StackTrace.run(StackTrace.java:88)
at sun.jvm.hotspot.tools.StackTrace.run(StackTrace.java:45)
at sun.jvm.hotspot.tools.JStack.run(JStack.java:66)
at sun.jvm.hotspot.tools.Tool.startInternal(Tool.java:260)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:223)
at sun.jvm.hotspot.tools.Tool.execute(Tool.java:118)
at sun.jvm.hotspot.tools.JStack.main(JStack.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sun.tools.jstack.JStack.runJStackTool(JStack.java:140)
at sun.tools.jstack.JStack.main(JStack.java:106)
Locked ownable synchronizers:
- None
Thread 4029: (state = IN_JAVA)
- org.jruby.internal.runtime.methods.JittedMethod.post(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String) @bci=5, line=300 (Compiled frame; information may be imprecise)
- org.jruby.internal.runtime.methods.JittedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String) @bci=46, line=149 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=36, line=134 (Compiled frame)
- org.jruby.ast.FCallNoArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=7, line=31 (Compiled frame)
- org.jruby.ast.CallNoArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=15, line=60 (Compiled frame)
- org.jruby.ast.WhileNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=23, line=127 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.ast.RescueNode.executeBody(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=21, line=221 (Compiled frame)
- org.jruby.ast.RescueNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=26, line=116 (Compiled frame)
- org.jruby.ast.BeginNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=9, line=83 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.RubyModule, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block, boolean) @bci=30, line=74 (Compiled frame)
- org.jruby.internal.runtime.methods.InterpretedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=82, line=182 (Compiled frame)
- org.jruby.internal.runtime.methods.DefaultMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=22, line=203 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=38, line=168 (Compiled frame)
- org.jruby.ast.CallOneArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=30, line=57 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.RescueNode.executeBody(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=21, line=221 (Compiled frame)
- org.jruby.ast.RescueNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=26, line=116 (Compiled frame)
- org.jruby.ast.EnsureNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=16, line=96 (Interpreted frame)
- org.jruby.ast.BeginNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=9, line=83 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.RubyModule, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block, boolean) @bci=30, line=74 (Compiled frame)
- org.jruby.internal.runtime.methods.InterpretedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=82, line=182 (Compiled frame)
- org.jruby.internal.runtime.methods.DefaultMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=22, line=203 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=38, line=168 (Compiled frame)
- org.jruby.ast.FCallOneArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=19, line=36 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=30, line=112 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.evalBlockBody(org.jruby.runtime.ThreadContext, org.jruby.runtime.Binding, org.jruby.runtime.builtin.IRubyObject) @bci=25, line=206 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.yield(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, boolean, org.jruby.runtime.Binding, org.jruby.runtime.Block$Type, org.jruby.runtime.Block) @bci=51, line=194 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Binding, org.jruby.runtime.Block$Type, org.jruby.runtime.Block) @bci=16, line=125 (Compiled frame)
- org.jruby.runtime.Block.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Block) @bci=15, line=101 (Compiled frame)
- org.jruby.RubyProc.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=68, line=300 (Compiled frame)
- org.jruby.RubyProc.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[]) @bci=7, line=230 (Interpreted frame)
- org.jruby.internal.runtime.RubyRunnable.run() @bci=146, line=99 (Interpreted frame)
- java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
Locked ownable synchronizers:
- None
Thread 4028: (state = IN_JAVA)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=36, line=134 (Compiled frame; information may be imprecise)
- org.jruby.ast.CallNoArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=18, line=60 (Compiled frame)
- org.jruby.ast.WhileNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=23, line=127 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.ast.RescueNode.executeBody(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=21, line=221 (Compiled frame)
- org.jruby.ast.RescueNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=26, line=116 (Compiled frame)
- org.jruby.ast.BeginNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=9, line=83 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.RubyModule, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block, boolean) @bci=30, line=74 (Compiled frame)
- org.jruby.internal.runtime.methods.InterpretedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=82, line=182 (Compiled frame)
- org.jruby.internal.runtime.methods.DefaultMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=22, line=203 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyClass, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=57, line=326 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=50, line=170 (Compiled frame)
- org.jruby.ast.CallOneArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=30, line=57 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.RescueNode.executeBody(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=21, line=221 (Compiled frame)
- org.jruby.ast.RescueNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=26, line=116 (Compiled frame)
- org.jruby.ast.EnsureNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=16, line=96 (Interpreted frame)
- org.jruby.ast.BeginNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=9, line=83 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.ast.BlockNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=33, line=71 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.RubyModule, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block, boolean) @bci=30, line=74 (Compiled frame)
- org.jruby.internal.runtime.methods.InterpretedMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=82, line=182 (Compiled frame)
- org.jruby.internal.runtime.methods.DefaultMethod.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, java.lang.String, org.jruby.runtime.builtin.IRubyObject) @bci=22, line=203 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyClass, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=57, line=326 (Compiled frame)
- org.jruby.runtime.callsite.CachingCallSite.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject) @bci=50, line=170 (Compiled frame)
- org.jruby.ast.FCallOneArgNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=19, line=36 (Compiled frame)
- org.jruby.ast.NewlineNode.interpret(org.jruby.Ruby, org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=41, line=105 (Compiled frame)
- org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(org.jruby.Ruby, org.jruby.runtime.ThreadContext, java.lang.String, int, org.jruby.ast.Node, java.lang.String, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=30, line=112 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.evalBlockBody(org.jruby.runtime.ThreadContext, org.jruby.runtime.Binding, org.jruby.runtime.builtin.IRubyObject) @bci=25, line=206 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.yield(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.builtin.IRubyObject, org.jruby.RubyModule, boolean, org.jruby.runtime.Binding, org.jruby.runtime.Block$Type, org.jruby.runtime.Block) @bci=51, line=194 (Compiled frame)
- org.jruby.runtime.Interpreted19Block.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Binding, org.jruby.runtime.Block$Type, org.jruby.runtime.Block) @bci=16, line=125 (Compiled frame)
- org.jruby.runtime.Block.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.Block) @bci=15, line=101 (Compiled frame)
- org.jruby.RubyProc.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[], org.jruby.runtime.builtin.IRubyObject, org.jruby.runtime.Block) @bci=68, line=300 (Compiled frame)
- org.jruby.RubyProc.call(org.jruby.runtime.ThreadContext, org.jruby.runtime.builtin.IRubyObject[]) @bci=7, line=230 (Interpreted frame)
- org.jruby.internal.runtime.RubyRunnable.run() @bci=146, line=99 (Interpreted frame)
- java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
The logs have no warnings in them. However, Logstash warns when trying to restart:
{:timestamp=>"2015-12-14T18:42:06.192000+0000", :level=>:warn, "INFLIGHT_EVENT_COUNT"=>{"total"=>0}, "STALLING_THREADS"=>{["LogStash::Inputs::Kafka", {"zk_connect"=>"zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181", "topic_id"=>"prod_logs", "consumer_threads"=>1, "consumer_restart_on_error"=>"true", "consumer_restart_sleep_ms"=>100, "decorate_events"=>"true", "type"=>"prod_logs"}]=>[{"thread_id"=>20, "name"=>"<kafka", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/inputs/base.rb:89:in `stop?'"}], ["LogStash::Inputs::Kafka", {"zk_connect"=>"zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181", "topic_id"=>"prod_exchange_logs", "consumer_threads"=>1, "consumer_restart_on_error"=>"true", "consumer_restart_sleep_ms"=>100, "decorate_events"=>"true", "type"=>"prod_ex_logs"}]=>[{"thread_id"=>21, "name"=>"<kafka", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-2.0.2/lib/logstash/inputs/kafka.rb:139:in `run'"}], ["LogStash::Inputs::Kafka", {"zk_connect"=>"zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181", "topic_id"=>"staging_logs", "consumer_threads"=>1, "consumer_restart_on_error"=>"true", "consumer_restart_sleep_ms"=>100, "decorate_events"=>"true", "type"=>"staging_logs"}]=>[{"thread_id"=>22, "name"=>"<kafka", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-2.0.2/lib/logstash/inputs/kafka.rb:139:in `run'"}], ["LogStash::Inputs::Kafka", {"zk_connect"=>"zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181", "topic_id"=>"staging_exchange_logs", "consumer_threads"=>1, "consumer_restart_on_error"=>"true", "consumer_restart_sleep_ms"=>100, "decorate_events"=>"true", "type"=>"staging_ex_logs"}]=>[{"thread_id"=>23, "name"=>"<kafka", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/inputs/base.rb:89:in `stop?'"}]}}
{:timestamp=>"2015-12-14T18:42:11.079000+0000", :level=>:warn, "INFLIGHT_EVENT_COUNT"=>{"total"=>0}, "STALLING_THREADS"=>{["LogStash::Inputs::Kafka", {"zk_connect"=>"zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181", "topic_id"=>"staging_exchange_logs", "consumer_threads"=>1, "consumer_restart_on_error"=>"true", "consumer_restart_sleep_ms"=>100, "decorate_events"=>"true", "type"=>"staging_ex_logs"}]=>[{"thread_id"=>23, "name"=>"<kafka", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-2.0.2/lib/logstash/inputs/kafka.rb:139:in `run'"}]}}
{:timestamp=>"2015-12-14T18:42:16.085000+0000", :level=>:warn, "INFLIGHT_EVENT_COUNT"=>{"total"=>0}, "STALLING_THREADS"=>{["LogStash::Inputs::Kafka", {"zk_connect"=>"zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181", "topic_id"=>"staging_exchange_logs", "consumer_threads"=>1, "consumer_restart_on_error"=>"true", "consumer_restart_sleep_ms"=>100, "decorate_events"=>"true", "type"=>"staging_ex_logs"}]=>[{"thread_id"=>23, "name"=>"<kafka", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-2.0.2/lib/logstash/inputs/kafka.rb:139:in `run'"}]}}
{:timestamp=>"2015-12-14T18:42:16.108000+0000", :message=>"The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.", :level=>:error}
Still verifying... but this may resolve the issue: https://github.com/logstash-plugins/logstash-input-kafka/pull/54. From the stacktraces, it looks like the topic has no new messages to read, and is calling stop? in a loop.
downgrading to 1.5 brought it back to normal.
Folks, we just fixed the issue. Can you upgrade to 2.0.3 of this plugin?
bin/plugin install --version 2.0.3 logstash-input-kafka
We'll do a LS release soon as soon as we verify this fixes it. Thanks!
@r-tock @caipeichao please let us know if this new version fixes CPU usage
The CPU issue seems to have been solved. But I am seeing continually increasing lag in the Kafka consumer. We have 20 partitions per topic. Some of the partitions have not been read since the update.
I restarted logstash again, now the consumer has caught up. And lag is near 0 most of the time. I will keep an eye for this lag
And one more note, the average cpu usage on the node is 13% for my setup with 2.1. With 1.5 the average cpu usage was 6%. There is more that could be done to tune this further
@r-tock in 2.1, we defaulted the number of filter workers to be based on CPU cores on the machine. So half the number of cores go to filters - this could be another reason why you are seeing a CPU spike.
Did you set -w flag previously?
This is a single cpu node.
Thanks a lot.
I have upgraded to 2.0.3. Now it got much better.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
18139 admin 20 0 4931m 616m 14m R 39.2 1.0 25:14.06 <kafka
18191 admin 20 0 4931m 616m 14m S 13.3 1.0 6:49.61 <kafka
18192 admin 20 0 4931m 616m 14m S 12.3 1.0 6:20.63 <kafka
18142 admin 20 0 4931m 616m 14m S 9.3 1.0 5:53.85 |worker
18143 admin 20 0 4931m 616m 14m S 9.3 1.0 5:50.60 |worker
18140 admin 20 0 4931m 616m 14m S 9.0 1.0 5:52.71 |worker
18141 admin 20 0 4931m 616m 14m S 9.0 1.0 5:50.15 |worker
18154 admin 20 0 4931m 616m 14m S 9.0 1.0 5:47.12 >output
18190 admin 20 0 4931m 616m 14m S 7.3 1.0 4:40.52 <kafka
17924 admin 20 0 4931m 616m 14m S 2.0 1.0 0:54.70 java
jstack shows the thread 18139 is waiting on object monitor:
"<kafka" daemon prio=10 tid=0x00007fa9804b0800 nid=0x46db in Object.wait() [0x00007fa9b526f000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:461)
at org.jruby.RubyThread$SleepTask.run(RubyThread.java:1050)
- locked <0x00000000eb78a558> (a org.jruby.ext.thread.SizedQueue)
at org.jruby.RubyThread.executeBlockingTask(RubyThread.java:1066)
at org.jruby.RubyThread.wait_timeout(RubyThread.java:1414)
at org.jruby.ext.thread.Queue.pop(Queue.java:152)
- locked <0x00000000eb78a558> (a org.jruby.ext.thread.SizedQueue)
at org.jruby.ext.thread.Queue.pop(Queue.java:127)
- eliminated <0x00000000eb78a558> (a org.jruby.ext.thread.SizedQueue)
at org.jruby.ext.thread.SizedQueue.pop(SizedQueue.java:111)
- locked <0x00000000eb78a558> (a org.jruby.ext.thread.SizedQueue)
at org.jruby.ext.thread.SizedQueue$INVOKER$i$pop.call(SizedQueue$INVOKER$i$pop.gen)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:134)
at org.jruby.ast.CallNoArgNode.interpret(CallNoArgNode.java:60)
at org.jruby.ast.LocalAsgnNode.interpret(LocalAsgnNode.java:123)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
2.0.3 has helped fix the high CPU issue. However, the process is not responding to the kill command, not even kill -3; I had to do kill -9. Memory consumption looks quite high as well.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7674 root 18 0 2679m 356m 15m S 0.7 9.0 540:36.68 java
@oazabir thanks for the feedback. Keep an eye on #62 for the other problem.
|
gharchive/issue
| 2015-12-14T07:48:21 |
2025-04-01T06:39:27.060252
|
{
"authors": [
"caipeichao",
"joekiller",
"jsvd",
"oazabir",
"r-tock",
"suyograo",
"talevy"
],
"repo": "logstash-plugins/logstash-input-kafka",
"url": "https://github.com/logstash-plugins/logstash-input-kafka/issues/52",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
416655185
|
Add --log-level flag and use boost trivial logger
Allows specifying --log-level trace/debug/info/warning/error/fatal optionally (default: info)
Ran clang-format so there are some noisy styling changes, sorry.
Please review the log level used for each message and let me know whether I should add more messages.
Resolves #22
|
gharchive/pull-request
| 2019-03-04T06:47:46 |
2025-04-01T06:39:27.070169
|
{
"authors": [
"sachaaaaa"
],
"repo": "loki-project/loki-storage-server",
"url": "https://github.com/loki-project/loki-storage-server/pull/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1503320114
|
not able to login anymore
I noticed today that I'm no longer able to log in. I get the error "You are not supposed to be here".
I tried logging in on the website manually with the same login/password, and it works without a problem.
Is this something on my side or did DeGiro change something in the API?
Yeah, I guess 4 issues opened on that wasn't enough. We need more.
See #56
OMG, I feel stupid!
I'm new to GitHub and I searched only the 'open' issues. Thx for this quick heads up
|
gharchive/issue
| 2022-12-19T17:35:53 |
2025-04-01T06:39:27.112524
|
{
"authors": [
"Jakub-CZ",
"brogel"
],
"repo": "lolokraus/DegiroAPI",
"url": "https://github.com/lolokraus/DegiroAPI/issues/60",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
202931784
|
Read CORS settings from configuration file
Use the same pattern as in akka.http.impl.settings.ServerSettingsImpl to load the settings from a .conf file.
Provide a reference.conf with default values.
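As a starting point, here is a sketch of what such a reference.conf with default values could look like. The key names and defaults below are illustrative assumptions for discussion, not the final API:

```hocon
# reference.conf: illustrative defaults only; key names are assumptions
akka-http-cors {
  allow-generic-http-requests = yes
  allow-credentials = yes
  allowed-origins = "*"
  allowed-headers = "*"
  allowed-methods = ["GET", "POST", "HEAD", "OPTIONS"]
  exposed-headers = []
  max-age = 1800 seconds
}
```

Applications could then override individual keys in their own application.conf, following the same layering pattern akka.http.impl.settings.ServerSettingsImpl uses.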
Have you been working on this? I think I could try to implement it. :)
@jiminhsieh I already started working on this, but got caught on something else.
I will clean the code a bit and push it to a feature branch this way we can discuss how to finish it.
I just thought that if you have not been working on this, I could try to help with it. :)
Released version 0.3.0 with this improvement.
|
gharchive/issue
| 2017-01-24T20:31:19 |
2025-04-01T06:39:27.114961
|
{
"authors": [
"jiminhsieh",
"lomigmegard"
],
"repo": "lomigmegard/akka-http-cors",
"url": "https://github.com/lomigmegard/akka-http-cors/issues/13",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
118072943
|
Destinations beta opt-out banner
Adds a banner to the top of the page that allows Destinations Next beta users to opt out and return to the original experience. It also has a 'close' button that hides the banner and sets a cookie to prevent showing the banner again.
|
gharchive/pull-request
| 2015-11-20T16:11:51 |
2025-04-01T06:39:27.118691
|
{
"authors": [
"JoeShep",
"edubkendo"
],
"repo": "lonelyplanet/rizzo-next",
"url": "https://github.com/lonelyplanet/rizzo-next/pull/221",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1745922370
|
🛑 Mini Program - Stock Detail Page (via longbridgeapp.com) is down
In 5932816, Mini Program - Stock Detail Page (via longbridgeapp.com) (https://longbridgeapp.com/quote/00700.HK?source_app=longbridge) was down:
HTTP code: 500
Response time: 2942 ms
Resolved: Mini Program - Stock Detail Page (via longbridgeapp.com) is back up in 6cc1c90.
|
gharchive/issue
| 2023-06-07T13:32:04 |
2025-04-01T06:39:27.130154
|
{
"authors": [
"huacnlee"
],
"repo": "longbridgeapp/uptime",
"url": "https://github.com/longbridgeapp/uptime/issues/607",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
918029573
|
🛑 KeepCup is down
In 527dd2d, KeepCup (https://keepcup.eco.jau.co.jp) was down:
HTTP code: 0
Response time: 0 ms
Resolved: KeepCup is back up in d3804bb.
|
gharchive/issue
| 2021-06-10T23:04:16 |
2025-04-01T06:39:27.132747
|
{
"authors": [
"sonnymai"
],
"repo": "longforme/longforme-uptime-monitor",
"url": "https://github.com/longforme/longforme-uptime-monitor/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1108670799
|
🛑 KeepCup is down
In 619a76a, KeepCup (https://keepcup.eco.jau.co.jp) was down:
HTTP code: 0
Response time: 0 ms
Resolved: KeepCup is back up in 2f0ad5f.
|
gharchive/issue
| 2022-01-19T23:43:29 |
2025-04-01T06:39:27.135236
|
{
"authors": [
"sonnymai"
],
"repo": "longforme/longforme-uptime-monitor",
"url": "https://github.com/longforme/longforme-uptime-monitor/issues/230",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1523844010
|
🛑 Posie is down
In 9bd4640, Posie (https://posie.jau.co.jp) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Posie is back up in ac5d94c.
|
gharchive/issue
| 2023-01-07T13:52:21 |
2025-04-01T06:39:27.137572
|
{
"authors": [
"sonnymai"
],
"repo": "longforme/longforme-uptime-monitor",
"url": "https://github.com/longforme/longforme-uptime-monitor/issues/2410",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
956291350
|
2818 further socket usage optimizations
Lazily initialize the replica clients for replica/sync service
This is to optimize the connection count for the longhorn-manager engine
monitor loop, in a single loop the monitor executes the engine binary 8
times this leads to a total connection count of f() = 8e + 6 * 2r = 20
for a volume with 1 replica.
Since all the calls the monitor makes only require either the replica or
the sync client, we can reduce this further with this optimization to
g() = 8e + 6r = 14 per monitor loop.
The monitor loop executes 60/5 = 12 times per minute, so our total
connections per minute are reduced from 240 to 168. For volume with 3
replicas we end up with 312 connection per minute.
No further optimization on this end is possible; the next required task
is the removal of the direct engine binary invocations by the longhorn-manager.
This will reduce the connection count to h() = 1e + 2*r, which would lead
to the desired behavior for each volume of 1 engine connection
and 2 connections per replica (replica / sync).
longhorn/longhorn#2818
Signed-off-by: Joshua Moody joshua.moody@suse.com
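The connection arithmetic above can be checked with a small sketch (Python is used purely for illustration; e is the engine count and r the replica count):

```python
def conns_per_loop_before(e, r):
    # 8 engine-binary executions per loop; 6 of them also open both a
    # replica and a sync connection per replica: f = 8e + 6 * 2r
    return 8 * e + 6 * 2 * r

def conns_per_loop_after(e, r):
    # with lazily initialized clients, each of those 6 calls opens only
    # the one client it actually needs: g = 8e + 6r
    return 8 * e + 6 * r

def conns_per_minute(per_loop):
    # the monitor loop runs every 5 seconds, i.e. 60 / 5 = 12 times a minute
    return per_loop * (60 // 5)

print(conns_per_minute(conns_per_loop_before(1, 1)))  # 240
print(conns_per_minute(conns_per_loop_after(1, 1)))   # 168
print(conns_per_minute(conns_per_loop_after(1, 3)))   # 312
```

These reproduce the 240 -> 168 per-minute reduction for a 1-replica volume and the 312 figure for 3 replicas quoted above.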
Good to review, let me merge afterwards.
|
gharchive/pull-request
| 2021-07-30T00:07:30 |
2025-04-01T06:39:27.141414
|
{
"authors": [
"joshimoo"
],
"repo": "longhorn/longhorn-engine",
"url": "https://github.com/longhorn/longhorn-engine/pull/646",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2721724650
|
chore(robot): test pvc expand more than storage maximum size
Which issue(s) this PR fixes:
Issue longhorn/longhorn#6633
What this PR does / why we need it:
Add a robot test case to verify that a PVC cannot be expanded beyond the storage maximum size.
Special notes for your reviewer:
None
Additional documentation or context
None
Summary by CodeRabbit
Release Notes
New Features
Enhanced persistent volume claim creation with flexible configuration options.
New keyword for verifying persistent volume claim requested size over time.
New methods for volume size retrieval and maximum disk storage checks.
Added functionality for expanding workloads and persistent volume claims with improved size management.
Bug Fixes
Improved error handling and logging in backup and AWS operations.
Tests
Introduced a new test suite for validating persistent volume claim behavior, including checks for expansion limits.
@coderabbitai review
|
gharchive/pull-request
| 2024-12-06T00:06:55 |
2025-04-01T06:39:27.146622
|
{
"authors": [
"c3y1huang"
],
"repo": "longhorn/longhorn-tests",
"url": "https://github.com/longhorn/longhorn-tests/pull/2178",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
612331611
|
test: Add test case for multiple paths on the same filesystem in the node default disk config annotation
This test case covers the situation where two or more disk paths in the default disk config annotation are on the same filesystem on the node.
Test steps:
Make a clean condition: no disk, no tag, no default disk related annotation.
Create the default disk annotation with two different disk paths that are on the same filesystem.
Enable "Setting/General/Create Default Disk on Labeled Nodes".
Wait for the node update, and check that no disk and no tag are created.
Cleanup test environment: remove default disk related annotation
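The check this test exercises, rejecting a second path that lives on the same filesystem as an earlier one, can be approximated in Python by comparing os.stat device IDs (the manager itself compares fsids; the helper below is a hypothetical illustration, not Longhorn code):

```python
import os
import tempfile

def find_same_fs_conflict(paths):
    """Return (path, earlier_path) if two paths share a filesystem, else None."""
    seen = {}  # st_dev -> first path observed on that device
    for p in paths:
        dev = os.stat(p).st_dev
        if dev in seen:
            return p, seen[dev]
        seen[dev] = p
    return None

with tempfile.TemporaryDirectory() as root:
    a = os.path.join(root, "a")
    b = os.path.join(root, "b")
    os.mkdir(a)
    os.mkdir(b)
    # two subdirectories of one directory are always on the same filesystem,
    # so this reports a conflict, mirroring the case the test expects to reject
    print(find_same_fs_conflict([a, b]))
```

A real disk config validator would run such a check over every path in the annotation before creating any disks.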
Verified the test case and got the expected error message:
[longhorn-manager-t87z5] time="2020-05-05T01:56:25Z" level=warning msg=" [{"path":"/root","allowScheduling":false,"storageReserved":1024,"name":"root-name"},{"path":"/var/lib/longhorn/","allowScheduling":false,"storageReserved": 1024,"name":"default-name"}]"
[longhorn-manager-t87z5] time="2020-05-05T01:56:25Z" level=warning msg="Kubernetes node: invalid annotation node.longhorn.io/default-disks-config: config: the disk /var/lib/longhorn/ is the samefile system with /root, fsid 58fe937c58377e45"
Verified with local build longhorn-manager:
[longhorn-manager-t87z5] time="2020-05-05T16:56:21Z" level=warning msg="[{"path":"/root","allowScheduling":false,"storageReserved":1024,"name":"root-name"},{"path":"/var/lib/longhorn/","allowScheduling":false,"storageReserved": 1024,"name":"default-name"}]"
[longhorn-manager-t87z5] time="2020-05-05T16:56:21Z" level=warning msg="Kubernetes node: invalid annotation node.longhorn.io/default-disks-config: config: the disk /var/lib/longhorn/ is the samefile system with /root, fsid 58fe937c58377e45"
test_node_config_annotation_invalid passed for two consecutive runs. longhorn-tests/421 & longhorn-tests/422
|
gharchive/issue
| 2020-05-05T04:37:33 |
2025-04-01T06:39:27.154136
|
{
"authors": [
"boknowswiki",
"meldafrawi"
],
"repo": "longhorn/longhorn",
"url": "https://github.com/longhorn/longhorn/issues/1293",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
524522179
|
Update the CSI driver list
https://kubernetes-csi.github.io/docs/drivers.html
Docs PR at https://github.com/kubernetes-csi/docs/pull/228
PR merged. Done.
|
gharchive/issue
| 2019-11-18T17:53:00 |
2025-04-01T06:39:27.155743
|
{
"authors": [
"yasker"
],
"repo": "longhorn/longhorn",
"url": "https://github.com/longhorn/longhorn/issues/898",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1213558377
|
Alert notifications are not being pushed to WeCom (Enterprise WeChat)
While evaluating Hippo4j, I ran into a small problem configuring alert notifications and would like some advice.
I deployed hippo4j-server with Docker; the Dockerfile is as follows:
FROM openjdk:8u322-jdk
WORKDIR /app
ADD hippo4j-server /app/hippo4j-server
ENV TZ=Asia/Shanghai
EXPOSE 6691
CMD java -Djava.ext.dirs=/usr/local/openjdk-8/jre/lib/ext:/usr/local/openjdk-8/lib/ext -Xloggc:/app/hippo4j-server/logs/hippo4j_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Xms1024m -Xmx1024m -Xmn512m -Dhippo4j.standalone=true -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/app/hippo4j-server/logs/java_heapdump.hprof -Dhippo4j.home=/app/hippo4j-server -jar /app/hippo4j-server/target/hippo4j-server.jar --spring.config.location=/app/hippo4j-server/conf/application.properties --logging.config=/app/hippo4j-server/conf/hippo4j-logback.xml --server.max-http-header-size=524288 --server.tomcat.basedir=/app/hippo4j-server/bin
After startup the Docker logs show no errors. On the client side I connected to the server using the 1.2.0-RC4 Spring Boot dependency; the yml configuration is:
spring:
application:
name: lbx-ai-biz-staff
dynamic:
thread-pool:
server-addr: http://10.8.46.203:6691
username: admin
password: 123456
namespace: lbx-ai
item-id: ${spring.application.name}
With the above setup, everything works: I can dynamically modify and configure thread pools through the server.
However, I found a problem with the alert push feature:
I added an alert notification entry with notification type CONFIG (see the screenshot below).
After the configuration was saved, I modified the configuration of the corresponding thread pool ID in the thread pool management page, but received no WeCom notification push. I also configured an ALARM-type alert and received no push for it either. Could you please take a look? Many thanks.
@chenruiyingry The client project only pulls the alert notification configuration from the server at startup. If the client is started first and the alert notification is added on the server afterwards, it does not take effect; you need to restart the client.
Then may I ask, is the push sent by the client or by the server?
@chenruiyingry The configuration push is sent by the client. Why do it this way? Because if it were done on the server, in an extreme case the thread pool configuration might fail to be updated while the notification had already been sent.
@chenruiyingry The configuration push is sent by the client. Why do it this way? Because if it were done on the server, in an extreme case the thread pool configuration might fail to be updated while the notification had already been sent.
If it is sent by the client, then when our clients are deployed in a pure intranet environment (with no access to external interfaces), notifications cannot be sent. With a small number of clients this is manageable, since that part could be moved to the few servers that have external network access, but with a large number of clients it becomes a problem.
@chenruiyingry I'll add a control for this: support client-side push, and also a mode where, once the client thread pool change is confirmed, the client sends a confirmation to the server and the server sends the notification centrally.
Great, nice!
|
gharchive/issue
| 2022-04-24T06:13:16 |
2025-04-01T06:39:27.167323
|
{
"authors": [
"chenruiyingry",
"longtai-cn"
],
"repo": "longtai-cn/hippo4j",
"url": "https://github.com/longtai-cn/hippo4j/issues/199",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2212335133
|
feat(app): Implement the trust level restrictions on starting a new discussion
Added policy for creating a discussion
check the policy around the Start a Discussion button
Pull Request Test Coverage Report for Build 8462524096
Details
4 of 4 (100.0%) changed or added relevant lines in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage increased (+0.09%) to 79.15%
Totals
Change from base Build 8259902318: 0.09%
Covered Lines: 2532
Relevant Lines: 3199
💛 - Coveralls
|
gharchive/pull-request
| 2024-03-28T04:40:44 |
2025-04-01T06:39:27.173234
|
{
"authors": [
"coveralls",
"lonnieezell"
],
"repo": "lonnieezell/forum-example",
"url": "https://github.com/lonnieezell/forum-example/pull/317",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1053395581
|
Delete paper_wallet when removing DB
Backup the paper_wallet + remove the file if it exists.
@all-contributors please add @neuhausj for code
|
gharchive/pull-request
| 2021-11-15T09:15:19 |
2025-04-01T06:39:27.199951
|
{
"authors": [
"neuhausj",
"titulebolide"
],
"repo": "lorcalhost/BTB-manager-telegram",
"url": "https://github.com/lorcalhost/BTB-manager-telegram/pull/149",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1827996815
|
Fix #85
The issues here are related to those discussed in the previous meeting. If deletion is triggered before synchronization with the index occurs, the deletion will be lost. Would it be better to directly add a deletion record without checking if the index exists in this case?
Yes, I think you are right. There are mainly the following two points:
The main problem is that when the data to be deleted is stored in the memtable before flush, it exists in neither batch.pendingWrites nor db.index, so the deletion would be lost; adding a deletion record directly without checking guarantees the deletion won't be lost.
We also no longer need to read the bptree before deleting each entry.
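The two points above can be illustrated with a small sketch (plain JavaScript, not the actual lotusdb code; the memtable, index, and pendingWrites names are just stand-ins): when the key only lives in the unflushed memtable, a delete that first checks the index is silently dropped, while an unconditional tombstone survives.

```javascript
// Hypothetical stand-ins for the memtable, index, and batch state.
const memtable = new Map([['k1', 'v1']]); // written earlier, not yet flushed
const index = new Map();                   // k1 is not indexed until flush
const pendingWrites = new Map();

// Buggy variant: only record the delete if the index knows the key.
function deleteChecked(key) {
  if (index.has(key)) pendingWrites.set(key, { tombstone: true });
}

// Correct variant: always append a tombstone, no index lookup.
function deleteUnchecked(key) {
  pendingWrites.set(key, { tombstone: true });
}

// Read path: pendingWrites shadows the memtable.
function get(key) {
  const p = pendingWrites.get(key);
  if (p && p.tombstone) return undefined;
  return memtable.get(key);
}

deleteChecked('k1');
console.log(get('k1')); // 'v1', the deletion was lost
deleteUnchecked('k1');
console.log(get('k1')); // undefined, the tombstone wins
```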
|
gharchive/pull-request
| 2023-07-30T16:39:50 |
2025-04-01T06:39:27.232741
|
{
"authors": [
"akiozihao",
"yanxiaoqi932"
],
"repo": "lotusdblabs/lotusdb",
"url": "https://github.com/lotusdblabs/lotusdb/pull/87",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2236849506
|
Error when trying to create an account in dev
MongoDB is running and the back-end server is listening on port 4400 (I believe my .env files are correct). However, the Gmail account business for creating an account is not documented.
Should I use a throwaway Gmail account whose password I specify (in clear text) in the .env file? (That seems really weak from a security standpoint.) I imagine the intent is to use a real email service to send the messages, etc.
In dev, can we just skip the email-sending step when creating an account?
I resolved this problem (the .env was misconfigured).
|
gharchive/issue
| 2024-04-11T03:39:36 |
2025-04-01T06:39:27.240933
|
{
"authors": [
"fuhrmanator"
],
"repo": "louis-antoine-etsmtl/EvalueTonSavoir",
"url": "https://github.com/louis-antoine-etsmtl/EvalueTonSavoir/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
147728416
|
Bugreport
Hi
I have encountered a problem with nedb creating duplicate records using the same _id; here is a gist I created to show the bug.
https://gist.github.com/simon-p-r/f043d8885115549a90d327a34c87cf1a
OS is Windows 8.1
Node version is 5.10.1
Thanks
Simon
Works fine on my machine, and I don't see any problem with the code ... What does your script output? Thanks for the nice bug report format in any case!
I too am seeing duplicate entries in the database when doing update/patch calls. Though my setup is quite a bit more complex (using FeathersJS), my results are nearly identical to Simon's.
OSX v10.10.5
Node v4.4.3
NeDB v1.8.0
As per the readme and numerous issues prior to this one, this is the expected behavior: nedb persistence uses an append-only file for performance purposes. Thanks for the nice bug report format though!
Thanks for the quick response. I see now that it is intentional behavior. Setting db.persistence.setAutocompactionInterval(interval) did the trick for me.
Cheers,
Chris
normally there is no need to setAutocompactionInterval
Duplicates are intentionally ignored and autocompacted at next start.
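To make the append-only behaviour concrete, here is a minimal sketch (not NeDB's actual implementation) of what compaction does: every update appends a new line with the same _id, and compaction keeps only the most recent line per _id.

```javascript
// Sketch of append-only persistence: updates append new lines to the
// datafile; compaction keeps only the last line per _id.
function compact(lines) {
  const latest = new Map(); // _id -> last serialized doc, recency-ordered
  for (const line of lines) {
    const doc = JSON.parse(line);
    latest.delete(doc._id);  // re-insert so map order reflects recency
    latest.set(doc._id, line);
  }
  return [...latest.values()];
}

const datafile = [
  '{"_id":"a","n":1}',
  '{"_id":"b","n":1}',
  '{"_id":"a","n":2}', // update appended with the same _id: expected, not a bug
];
console.log(compact(datafile));
// [ '{"_id":"b","n":1}', '{"_id":"a","n":2}' ]
```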
|
gharchive/issue
| 2016-04-12T12:00:22 |
2025-04-01T06:39:27.249888
|
{
"authors": [
"crstffr",
"louischatriot",
"simon-p-r",
"zevero"
],
"repo": "louischatriot/nedb",
"url": "https://github.com/louischatriot/nedb/issues/406",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1848105240
|
sort monitor before DOWN then alphabetical order
⚠️ Please verify that this feature request has NOT been suggested before.
[X] I checked and didn't find similar feature request
🏷️ Feature Request Type
UI Feature
🔖 Feature description
For me, the interface is cleaner if a monitor that is down is displayed first.
I have several monitors, and sometimes a monitor stays down for days; it is cleaner if it is displayed first.
✔️ Solution
Add a way to change the sorting.
❓ Alternatives
An alternative is to show only monitors that are DOWN in the list.
📝 Additional Context
thank you
This issue is likely resolved in 1.23.0-beta.1 as https://github.com/louislam/uptime-kuma/pull/3312 and https://github.com/louislam/uptime-kuma/pull/3469 were merged.
Please refer to the beta at https://github.com/louislam/uptime-kuma/releases/tag/1.23.0-beta.1.
⇒ Could you close this issue as it is resolved or comment on why it is not? ^^
PS:
For the future, please do run a duplication search, as otherwise managing this number of issues is quite bad, see https://github.com/louislam/uptime-kuma/issues?q=is%3Aissue+sort+down ⇒ https://github.com/louislam/uptime-kuma/issues/1585 or other issues
sorry, i searched but probably not that deep...
thank you
@ale82x I think you forgot to close this issue, right?
Could you close this issue as it is resolved or comment on why it is not? ^^
|
gharchive/issue
| 2023-08-12T16:25:25 |
2025-04-01T06:39:27.255877
|
{
"authors": [
"CommanderStorm",
"ale82x"
],
"repo": "louislam/uptime-kuma",
"url": "https://github.com/louislam/uptime-kuma/issues/3567",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2169012919
|
monitoring giving socket hang up
⚠️ Please verify that this question has NOT been raised before.
[X] I checked and didn't find similar issue
🛡️ Security Policy
[X] I agree to have read this project Security Policy
📝 Describe your problem
Hello dear ones!
I have a problem with my kuma uptime, and I can't find a solution.
Starting just yesterday, monitoring began giving this socket hang up error very frequently, across all monitors.
This error had been occurring for some time, but very sporadically, almost imperceptibly. Yesterday it started notifying much more frequently on all monitors, in an alternating pattern: it gives the warning, then the monitor stays up, and then it notifies again.
I've tried everything: changed the version, cleaned the database, stopped the container and brought it up again, but nothing resolved it. Note that there was no change of any kind on the environment side.
Please thank anyone who can help me.
📝 Error Message(s) or Log
🐻 Uptime-Kuma Version
1.23.11
💻 Operating System and Arch
Container uptime/kuma
🌐 Browser
Google Chrome
🖥️ Deployment Environment
Runtime: K8S, EKS 1.27
Database: sqllite
Filesystem used to store the database on: EBS GP2
number of monitors: 120
What are your database size and retention set to? (Just as a precaution, this is not likely related.)
Could you have a look at https://github.com/louislam/uptime-kuma/wiki/Troubleshooting and see if you can reproduce this in a shell to give more context?
I managed to solve the problem, it really was something within our network, which we ended up discovering.
|
gharchive/issue
| 2024-03-05T11:56:21 |
2025-04-01T06:39:27.261918
|
{
"authors": [
"CommanderStorm",
"Vanieltk"
],
"repo": "louislam/uptime-kuma",
"url": "https://github.com/louislam/uptime-kuma/issues/4553",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1826336801
|
Callbacks: accessing subcharts and setting switcher option
Hello,
I'm trying to implement show_async to have a more flexible chart. I am using the Callbacks example.
I'm wondering about two things.
1.) Accessing subcharts
When using the Callbacks example I want to create a chart including subcharts. I am struggling to access the subcharts (including their lines) after creating them in the main function. What is the best way to access them?
2.) Setting the switcher widget current option programmatically
In the example the topbar text is set
self.chart.topbar['symbol'].set(searched_string)
I am wondering if I can do the same for a switcher widget? I couldn't find any similiar way to change the current option programmatically
BR
Hey
In your API callback class, the chart attribute will be dynamically updated to the chart or subchart that was responsible for emitting the callback.
There is no way to programmatically change a switcher after the chart has been loaded; however, you can set the initial value of the switcher when defining it.
Thank you @louisnw01 for the reply. It makes sense to have it like that.
In my case I am using the switcher widget to move my "chart area" one day further. It's not working as intended because it can only be triggered once.
Ah I see. So I think you want to be able to click a switcher more than once?
Perhaps a separate 'button' widget may be of benefit.
Yes, basically I want to use it as a button. Another widget of that kind would be perfect.
|
gharchive/issue
| 2023-07-28T12:26:03 |
2025-04-01T06:39:27.265940
|
{
"authors": [
"Zartexo",
"louisnw01"
],
"repo": "louisnw01/lightweight-charts-python",
"url": "https://github.com/louisnw01/lightweight-charts-python/issues/52",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
527118951
|
Problem to use sharp / libvips on Docker / Dokku
Hello,
I'm trying to use the sharp module within a nodejs application and it works fine locally on Ubuntu. Now I want to deploy this application on our server as a dokku application. This application relies on a Dockerfile that initializes elements:
FROM node:8.9.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 5000
CMD [ "npm", "start" ]
During the execution of the npm install task, I can see traces regarding sharp and things seem successful:
> sharp@0.23.3 install /usr/src/app/node_modules/sharp
> (node install/libvips && node install/dll-copy && prebuild-install) || (node-gyp rebuild && node install/dll-copy)
info sharp Downloading https://github.com/lovell/sharp-libvips/releases/download/v8.8.1/libvips-8.8.1-linux-x64.tar.gz
However, when I'm trying to use the endpoint relying on sharp, the application crashes.
I guess that some required libraries are missing because it's a minimal distribution within the docker image but can't find what to add.
Thanks for your help!
Thierry
Is /usr/src/app/node_modules being overwritten by the COPY . /usr/src/app command?
Thanks very much for your answer!
For testing, I changed the Dockerfil with this:
FROM node:8.9.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY . /usr/src/app/
RUN npm install
EXPOSE 5000
CMD [ "npm", "start" ]
and I have the same problem!
I can see that the file libvips-8.8.1-linux-x64.tar.gz is still downloaded...
Does upgrading the version of Node.js help (v8.9.0 is over 2 years old)? What is the output of RUN npm install --verbose within the container?
This works with Node 10!
Thanks very much for your help!
|
gharchive/issue
| 2019-11-22T10:38:40 |
2025-04-01T06:39:27.286777
|
{
"authors": [
"lovell",
"templth"
],
"repo": "lovell/sharp-libvips",
"url": "https://github.com/lovell/sharp-libvips/issues/29",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
889258293
|
optimal way to omit alpha channel in output, SVG to PNG
What are you trying to achieve?
I want to convert SVG to PNG without an alpha channel in the result, as well as get an optimized rendering of the text.
I am successfully converting SVG to PNG files with code shown below. The problem is the result contains 4 channels, RGBA, rather than what I expect, which is 3 channels RGB.
The SVG will always be 2 tones, white text on black background.
What is the best way to omit the alpha channel in the output?
Can I describe the SVG input in a way that avoids the transparency?
Is .resize() the best way to render desired sizes? What is the impact on the text aliasing?
Have you searched for similar questions?
Yes, it seems I can use operations like .removeAlpha() to strip the alpha channel from the output. Can this be optimized further?
Why is alpha added? Is the text causing the transparency?
Could I create a blank canvas of desired size and channels, and then composite the SVG onto it?
Are you able to provide a minimal, standalone code sample that demonstrates this question?
const fs = require('fs')
const sharp = require('sharp')
const metadata = {
format: 'png',
width: 1920,
height: 1080,
}
const source = "<svg id=\"preview\" version=\"1.1\" baseProfile=\"full\" viewBox=\"0 0 1920 1080\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"> <style type=\"text/css\"> text { font-family: \"Roboto\", sans-serif } </style> <rect width=\"100%\" height=\"100%\" fill=\"black\" /> <text x=\"930\" y=\"370\" font-size=\"20\" text-anchor=\"end\" fill=\"white\">HELLO</text> <text x=\"990\" y=\"370\" font-size=\"20\" text-anchor=\"start\" fill=\"white\">WORLD</text> </svg>"
const bufferPromise = sharp(Buffer.from(source))
//.removeAlpha()
.resize(metadata.width, metadata.height)
.toFormat(metadata.format)
.toBuffer()
bufferPromise.then(buffer => {
fs.writeFileSync(`./out.${metadata.format}`, buffer)
})
Are you able to provide a sample image that helps explain the question?
See attached.
I check the channels using magick identify -verbose out.png
SVG rendering is via librsvg, which always produces 4-channel RGBA, and you are correct to use removeAlpha to reduce this to RGB output.
(32bpp RGBA is usually considered more optimal than 24bpp RGB for image processing as it can allow for the use of memory-aligned SIMD instructions.)
In terms of the use of resize vs density / viewBox, it depends on the image so you'll probably need to experiment for a given set of inputs.
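Conceptually, removing the alpha channel just means dropping the fourth byte of each interleaved pixel. A standalone sketch of that channel arithmetic on plain buffers (this is not sharp's implementation):

```javascript
// Drop the alpha byte from an interleaved 8-bit RGBA buffer to get RGB.
function rgbaToRgb(rgba) {
  if (rgba.length % 4 !== 0) throw new Error('not an RGBA buffer');
  const rgb = Buffer.alloc((rgba.length / 4) * 3);
  for (let i = 0, j = 0; i < rgba.length; i += 4, j += 3) {
    rgb[j] = rgba[i];         // R
    rgb[j + 1] = rgba[i + 1]; // G
    rgb[j + 2] = rgba[i + 2]; // B
    // rgba[i + 3] (alpha) is dropped
  }
  return rgb;
}

// Two pixels: white text on black background, both fully opaque.
const rgba = Buffer.from([255, 255, 255, 255, 0, 0, 0, 255]);
console.log(rgbaToRgb(rgba)); // <Buffer ff ff ff 00 00 00>
```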
I hope this information helped. Please feel free to re-open with more details if further assistance is required.
|
gharchive/issue
| 2021-05-12T00:13:15 |
2025-04-01T06:39:27.292547
|
{
"authors": [
"cyrfer",
"lovell"
],
"repo": "lovell/sharp",
"url": "https://github.com/lovell/sharp/issues/2711",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
891187568
|
After resizing animation, broken image returns
Are you using the latest version? Is the version currently in use as reported by npm ls sharp the same as the latest version as reported by npm view sharp dist-tags.latest?
yes
What is the expected behaviour?
correct output
Are you able to provide a minimal, standalone code sample, without other dependencies, that demonstrates this problem?
https://codesandbox.io/s/2-drk7e?file=/src/server.js
Are you able to provide a sample image that helps explain the problem?
any animated image (gif, webp, apng)
What is the output of running npx envinfo --binaries --system?
System:
OS: Linux 5.4 Debian GNU/Linux 10 (buster) 10 (buster)
CPU: (16) x64 Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
Memory: 7.61 GB / 62.73 GB
Container: Yes
Shell: 5.0.3 - /bin/bash
Binaries:
Node: 14.16.1 - ~/.nvm/versions/node/v14.16.1/bin/node
Yarn: 1.22.10 - ~/.nvm/versions/node/v14.16.1/bin/yarn
npm: 6.14.12 - ~/.nvm/versions/node/v14.16.1/bin/npm
As you've seen, you'll need to update the output with the new page height when resizing a multi-page image.
.webp({ pageHeight: ... })
https://sharp.pixelplumbing.com/api-output#webp
Please see #2275 for a future possible enhancement that relates to this.
@lovell, Thank you for your reply. I added this parameter, but it doesn't work in all cases. There are 2 examples in the sandbox where everything works and 2 examples where the image breaks. I am unable to establish the reason for this. The only thing I noticed is that the size of the output buffer is very different from the working example.
The output looks as-expected to me. Example 1 is using fit=fill, which ignores aspect ratio, and is reducing the width but keeping the height the same.
https://sharp.pixelplumbing.com/api-resize
@lovell, I have simplified the example for clarity. https://codesandbox.io/s/2-drk7e?file=/src/sharp/examples.js
You can see that at values of height from 47 pixels to 56 pixels, the image breaks, but before and after all the images are normal.
I used one code for all pictures.
const width = 100
const s = sharp(imageBuf, { animated: true });
const metadata = await s.metadata();
const height = pageHeight * metadata.pages; // calculate the sum of heights
const result = await s
.resize({ width, height, fit: "fill" })
.webp({ pageHeight })
.toBuffer();
This is as it should be, is it my mistake or a mistake in the library?
How can I resize the animated picture to {height: 56, width: 100}, to keep the original aspect ratio?
Please can you try setting the fastShrinkOnLoad option to false
.resize({ width, height, fit: "fill", fastShrinkOnLoad: false })
https://sharp.pixelplumbing.com/api-resize
@lovell, It works, thanks
Thanks for confirming. Commit https://github.com/lovell/sharp/commit/5bd5e5052ad53c67c89c930c4eacd1e5fa916280 makes this the default behaviour for animated WebP images. I'll re-open this issue until it's released.
v0.28.3 now available.
|
gharchive/issue
| 2021-05-13T16:30:45 |
2025-04-01T06:39:27.302414
|
{
"authors": [
"MaxMls",
"lovell"
],
"repo": "lovell/sharp",
"url": "https://github.com/lovell/sharp/issues/2714",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2131298124
|
Why does Tile skip folder numbers the higher the zoom?
I'm trying to use sharp to tile an uploaded image for Leaflet maps. I'm pretty close, but I found that Leaflet tries to grab, say, tile URL 4/0/0, and the GET request fails. Taking a look at the tiled images sharp created, I found that zoom folder 4 skips the y coordinate folders 0 and 1 and starts from 2. Zoom folder 5 skips 0 to 3 and starts from 4. Aside from the GET request errors, Leaflet appears to work just fine. I'm wondering if this is intentional? If so, I would like to know why, and whether I can do anything about it to get rid of the request errors.
await sharp(filePath)
.composite([{ input: Buffer.from('<svg><rect x="0" y="0" width="256" height="256" style="fill:rgb(0,0,0);fill-opacity:0.0;"></rect></svg>') }]) // add transparent background
.png() // convert to png so transparency is preserved
.tile({
size: 256,
background: { r: 256, g: 256, b: 256, alpha: 0 },
center: true,
basename: 'tiles',
container: 'fs',
layout: 'google'
})
.toFile(`uploads/${key}/map.png`)
Here's an example of the folders sharp creates
[0]
[1]
[2]
[3]
[4]
  [2]
  [3]
  ...
[5]
  [4]
  [5]
  ...
Looking more into it: it's because sharp automatically removes blank tiles, which is why the first few (and, I now realize, the last few) Y coordinate folders are not created. Doubly so because I have the tile centered. I store my tiled files in AWS S3, so I'm not sure whether GET errors are better than storing a bunch of empty tiles in S3, but that's a problem for me to solve.
The "google" layout changes the default skipBlanks from -1 to 5 to match the behaviour of libvips.
https://www.libvips.org/API/current/VipsForeignSave.html#vips-dzsave
However this was undocumented in sharp, sorry, and I have just updated this via commit https://github.com/lovell/sharp/commit/bc95531f2dcd4e6eb2b207016a390fb066b4461a
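The skip-blanks decision can be sketched roughly as follows (hedged: the exact libvips rule differs in detail; this only illustrates the threshold idea): a tile whose pixels are all within the threshold of the background value is treated as blank and skipped, and a threshold of -1 disables skipping entirely.

```javascript
// Rough sketch of the skip-blanks decision, assuming 8-bit greyscale
// tile data. Not libvips' actual code, just the threshold idea.
function isBlankTile(pixels, background, threshold) {
  if (threshold < 0) return false; // -1 disables blank-tile skipping
  return pixels.every(p => Math.abs(p - background) <= threshold);
}

const background = 0;
console.log(isBlankTile([0, 1, 3], background, 5));  // true, tile is skipped
console.log(isBlankTile([0, 1, 90], background, 5)); // false, tile is written
console.log(isBlankTile([0, 1, 3], background, -1)); // false, never skipped
```

So if storing empty tiles in S3 is preferable to failed GET requests, passing skipBlanks: -1 in the tile options should restore the full folder structure even with the "google" layout.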
Okay, thanks!
|
gharchive/issue
| 2024-02-13T01:23:20 |
2025-04-01T06:39:27.306355
|
{
"authors": [
"GcodeG01",
"lovell"
],
"repo": "lovell/sharp",
"url": "https://github.com/lovell/sharp/issues/3991",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
313725195
|
Make check fails on O3D303 IFM camera: XMLRPC call timed out
Hi all,
I am trying to use libo3d3xx with our IFM O3D303 Camera and eventually I want to use o3d3xx-ros to extract the Point Cloud from the camera and use it for navigation of our manipulator.
I have changed the camera's IP to 192.168.1.69 and I can ping it; it also works well in Windows.
I have downloaded the library source code, and I am trying to get the camera module to work. cmake and make run fine, but make check gives the following errors:
andy@andy-zhaoyang-k42-80:~$ ping 192.168.1.69
PING 192.168.1.69 (192.168.1.69) 56(84) bytes of data.
64 bytes from 192.168.1.69: icmp_seq=1 ttl=64 time=1.90 ms
64 bytes from 192.168.1.69: icmp_seq=2 ttl=64 time=1.05 ms
64 bytes from 192.168.1.69: icmp_seq=3 ttl=64 time=0.784 ms
64 bytes from 192.168.1.69: icmp_seq=4 ttl=64 time=0.976 ms
^C
--- 192.168.1.69 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3011ms
rtt min/avg/max/mdev = 0.784/1.180/1.908/0.432 ms
andy@andy-zhaoyang-k42-80:~$ cd libo3d3xx-0.7.3/modules/camera/build/src/bin
andy@andy-zhaoyang-k42-80:~/libo3d3xx-0.7.3/modules/camera/build/src/bin$ ./o3d3xx-dump
Failed to dump camera configuration:
XMLRPC call timed out
andy@andy-zhaoyang-k42-80:~$ cd
andy@andy-zhaoyang-k42-80:~$ cd libo3d3xx-0.7.3
andy@andy-zhaoyang-k42-80:~/libo3d3xx-0.7.3$ cd modules/camera
andy@andy-zhaoyang-k42-80:~/libo3d3xx-0.7.3/modules/camera$ cd build
andy@andy-zhaoyang-k42-80:~/libo3d3xx-0.7.3/modules/camera/build$ cmake -DCMAKE_INSTALL_PREFIX=/usr ..
-- UBUNTU_VERSION: 16.04.4
-- UBUNTU_MAJOR: 16
-- UBUNTU_MINOR: 04
-- UBUNTU_PATCH: 4
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- program_options
-- system
-- DEB_FILE: libo3d3xx-camera_0.7.3_ubuntu-16.04.4_amd64.deb
-- Configuring done
-- Generating done
-- Build files have been written to: /home/andy/libo3d3xx-0.7.3/modules/camera/build
andy@andy-zhaoyang-k42-80:~/libo3d3xx-0.7.3/modules/camera/build$ make check
[ 8%] Built target gtest
[ 54%] Built target o3d3xx_camera
[ 58%] Built target o3d3xx_camera_shared
[ 66%] Built target gtest_main
[100%] Built target o3d3xx-camera-tests
[==========] Running 43 tests from 6 test cases.
[----------] Global test environment set-up.
[----------] 15 tests from AppImagerTest
[ RUN ] AppImagerTest.CopyDeleteApplication
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.CopyDeleteApplication (3005 ms)
[ RUN ] AppImagerTest.CreateApplication
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.CreateApplication (3006 ms)
[ RUN ] AppImagerTest.ChangeAppNameAndDescription
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.ChangeAppNameAndDescription (3006 ms)
[ RUN ] AppImagerTest.EditApplication
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.EditApplication (3004 ms)
[ RUN ] AppImagerTest.GetAppParameters
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.GetAppParameters (3004 ms)
[ RUN ] AppImagerTest.AppConfig
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.AppConfig (3006 ms)
[ RUN ] AppImagerTest.AppConfig_JSON
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.AppConfig_JSON (3007 ms)
[ RUN ] AppImagerTest.GetAvailableImagerTypes
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.GetAvailableImagerTypes (3006 ms)
[ RUN ] AppImagerTest.ChangeImagerType
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.ChangeImagerType (3006 ms)
[ RUN ] AppImagerTest.GetImagerParameters
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.GetImagerParameters (3005 ms)
[ RUN ] AppImagerTest.GetImagerParameterLimits
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.GetImagerParameterLimits (3005 ms)
[ RUN ] AppImagerTest.ImagerConfig
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.ImagerConfig (3003 ms)
[ RUN ] AppImagerTest.ImagerConfigValueOutOfRange
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.ImagerConfigValueOutOfRange (3004 ms)
[ RUN ] AppImagerTest.ImagerConfig_JSON
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.ImagerConfig_JSON (3004 ms)
[ RUN ] AppImagerTest.Exposure
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] AppImagerTest.Exposure (3000 ms)
[----------] 15 tests from AppImagerTest (45071 ms total)
[----------] 2 tests from General_Tests
[ RUN ] General_Tests.Exceptions
[ OK ] General_Tests.Exceptions (0 ms)
[ RUN ] General_Tests.Version
[ OK ] General_Tests.Version (0 ms)
[----------] 2 tests from General_Tests (1 ms total)
[----------] 17 tests from Camera_Tests
[ RUN ] Camera_Tests.Ctor
[ OK ] Camera_Tests.Ctor (0 ms)
[ RUN ] Camera_Tests.GetXMLRPCURLPrefix
[ OK ] Camera_Tests.GetXMLRPCURLPrefix (0 ms)
[ RUN ] Camera_Tests.GetAllParameters
/home/andy/libo3d3xx-0.7.3/modules/camera/test/o3d3xx-camera-tests.cpp:61: Failure
Expected: all_params = cam->GetAllParameters() doesn't throw an exception.
Actual: it throws.
[ FAILED ] Camera_Tests.GetAllParameters (3004 ms)
[ RUN ] Camera_Tests.GetParameter
/home/andy/libo3d3xx-0.7.3/modules/camera/test/o3d3xx-camera-tests.cpp:70: Failure
Expected: all_params = cam->GetAllParameters() doesn't throw an exception.
Actual: it throws.
[ FAILED ] Camera_Tests.GetParameter (6008 ms)
[ RUN ] Camera_Tests.GetSWVersion
/home/andy/libo3d3xx-0.7.3/modules/camera/test/o3d3xx-camera-tests.cpp:90: Failure
Expected: sw_version = cam->GetSWVersion() doesn't throw an exception.
Actual: it throws.
[ FAILED ] Camera_Tests.GetSWVersion (3004 ms)
[ RUN ] Camera_Tests.GetApplicationList
/home/andy/libo3d3xx-0.7.3/modules/camera/test/o3d3xx-camera-tests.cpp:99: Failure
Expected: apps = cam->GetApplicationList() doesn't throw an exception.
Actual: it throws.
[ FAILED ] Camera_Tests.GetApplicationList (3003 ms)
[ RUN ] Camera_Tests.RequestSession
/home/andy/libo3d3xx-0.7.3/modules/camera/test/o3d3xx-camera-tests.cpp:107: Failure
Expected: cam->RequestSession() doesn't throw an exception.
Actual: it throws.
/home/andy/libo3d3xx-0.7.3/modules/camera/test/o3d3xx-camera-tests.cpp:108: Failure
Value of: 32
Expected: cam->GetSessionID().size()
Which is: 0
[ FAILED ] Camera_Tests.RequestSession (3007 ms)
[ RUN ] Camera_Tests.CancelSession
/home/andy/libo3d3xx-0.7.3/modules/camera/test/o3d3xx-camera-tests.cpp:132: Failure
Expected: cam->RequestSession() doesn't throw an exception.
Actual: it throws.
/home/andy/libo3d3xx-0.7.3/modules/camera/test/o3d3xx-camera-tests.cpp:133: Failure
Value of: 32
Expected: cam->GetSessionID().size()
Which is: 0
[ FAILED ] Camera_Tests.CancelSession (6013 ms)
[ RUN ] Camera_Tests.Heartbeat
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in the test body.
[ FAILED ] Camera_Tests.Heartbeat (6008 ms)
[ RUN ] Camera_Tests.SetOperatingMode
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in the test body.
[ FAILED ] Camera_Tests.SetOperatingMode (3007 ms)
[ RUN ] Camera_Tests.GetDeviceConfig
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in the test body.
[ FAILED ] Camera_Tests.GetDeviceConfig (3004 ms)
[ RUN ] Camera_Tests.ActivateDisablePassword
/home/andy/libo3d3xx-0.7.3/modules/camera/test/o3d3xx-camera-tests.cpp:230: Failure
Expected: cam->RequestSession() doesn't throw an exception.
Actual: it throws.
/home/andy/libo3d3xx-0.7.3/modules/camera/test/o3d3xx-camera-tests.cpp:231: Failure
Expected: cam->SetOperatingMode(o3d3xx::Camera::operating_mode::EDIT) doesn't throw an exception.
Actual: it throws.
/home/andy/libo3d3xx-0.7.3/modules/camera/test/o3d3xx-camera-tests.cpp:233: Failure
Expected: cam->ActivatePassword() doesn't throw an exception.
Actual: it throws.
/home/andy/libo3d3xx-0.7.3/modules/camera/test/o3d3xx-camera-tests.cpp:239: Failure
Expected: cam->DisablePassword() doesn't throw an exception.
Actual: it throws.
[ FAILED ] Camera_Tests.ActivateDisablePassword (12016 ms)
[ RUN ] Camera_Tests.SetDeviceConfig
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in the test body.
[ FAILED ] Camera_Tests.SetDeviceConfig (3002 ms)
[ RUN ] Camera_Tests.DeviceConfig_JSON
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in the test body.
[ FAILED ] Camera_Tests.DeviceConfig_JSON (3007 ms)
[ RUN ] Camera_Tests.GetNetParameters
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in the test body.
[ FAILED ] Camera_Tests.GetNetParameters (3007 ms)
[ RUN ] Camera_Tests.NetConfig
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in the test body.
[ FAILED ] Camera_Tests.NetConfig (3004 ms)
[ RUN ] Camera_Tests.NetConfig_JSON
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in the test body.
[ FAILED ] Camera_Tests.NetConfig_JSON (3006 ms)
[----------] 17 tests from Camera_Tests (63100 ms total)
[----------] 1 test from ImportExport_Tests
[ RUN ] ImportExport_Tests.ImportExportApp
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in the test body.
[ FAILED ] ImportExport_Tests.ImportExportApp (3005 ms)
[----------] 1 test from ImportExport_Tests (3005 ms total)
[----------] 4 tests from SpatialFilterTest
[ RUN ] SpatialFilterTest.SpatialFilterConfig_General
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] SpatialFilterTest.SpatialFilterConfig_General (3007 ms)
[ RUN ] SpatialFilterTest.GetSpatialFilterParameters
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] SpatialFilterTest.GetSpatialFilterParameters (3003 ms)
[ RUN ] SpatialFilterTest.GetSpatialFilterParameterLimits
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] SpatialFilterTest.GetSpatialFilterParameterLimits (3004 ms)
[ RUN ] SpatialFilterTest.SpatialFilterConfig_JSON
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] SpatialFilterTest.SpatialFilterConfig_JSON (3004 ms)
[----------] 4 tests from SpatialFilterTest (12018 ms total)
[----------] 4 tests from TemporalFilterTest
[ RUN ] TemporalFilterTest.TemporalFilterConfig_General
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] TemporalFilterTest.TemporalFilterConfig_General (3004 ms)
[ RUN ] TemporalFilterTest.GetTemporalFilterParameters
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] TemporalFilterTest.GetTemporalFilterParameters (3004 ms)
[ RUN ] TemporalFilterTest.GetTemporalFilterParameterLimits
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] TemporalFilterTest.GetTemporalFilterParameterLimits (3006 ms)
[ RUN ] TemporalFilterTest.TemporalFilterConfig_JSON
unknown file: Failure
C++ exception with description "XMLRPC call timed out" thrown in SetUp().
[ FAILED ] TemporalFilterTest.TemporalFilterConfig_JSON (3004 ms)
[----------] 4 tests from TemporalFilterTest (12019 ms total)
[----------] Global test environment tear-down
[==========] 43 tests from 6 test cases ran. (135214 ms total)
[ PASSED ] 4 tests.
[ FAILED ] 39 tests, listed below:
[ FAILED ] AppImagerTest.CopyDeleteApplication
[ FAILED ] AppImagerTest.CreateApplication
[ FAILED ] AppImagerTest.ChangeAppNameAndDescription
[ FAILED ] AppImagerTest.EditApplication
[ FAILED ] AppImagerTest.GetAppParameters
[ FAILED ] AppImagerTest.AppConfig
[ FAILED ] AppImagerTest.AppConfig_JSON
[ FAILED ] AppImagerTest.GetAvailableImagerTypes
[ FAILED ] AppImagerTest.ChangeImagerType
[ FAILED ] AppImagerTest.GetImagerParameters
[ FAILED ] AppImagerTest.GetImagerParameterLimits
[ FAILED ] AppImagerTest.ImagerConfig
[ FAILED ] AppImagerTest.ImagerConfigValueOutOfRange
[ FAILED ] AppImagerTest.ImagerConfig_JSON
[ FAILED ] AppImagerTest.Exposure
[ FAILED ] Camera_Tests.GetAllParameters
[ FAILED ] Camera_Tests.GetParameter
[ FAILED ] Camera_Tests.GetSWVersion
[ FAILED ] Camera_Tests.GetApplicationList
[ FAILED ] Camera_Tests.RequestSession
[ FAILED ] Camera_Tests.CancelSession
[ FAILED ] Camera_Tests.Heartbeat
[ FAILED ] Camera_Tests.SetOperatingMode
[ FAILED ] Camera_Tests.GetDeviceConfig
[ FAILED ] Camera_Tests.ActivateDisablePassword
[ FAILED ] Camera_Tests.SetDeviceConfig
[ FAILED ] Camera_Tests.DeviceConfig_JSON
[ FAILED ] Camera_Tests.GetNetParameters
[ FAILED ] Camera_Tests.NetConfig
[ FAILED ] Camera_Tests.NetConfig_JSON
[ FAILED ] ImportExport_Tests.ImportExportApp
[ FAILED ] SpatialFilterTest.SpatialFilterConfig_General
[ FAILED ] SpatialFilterTest.GetSpatialFilterParameters
[ FAILED ] SpatialFilterTest.GetSpatialFilterParameterLimits
[ FAILED ] SpatialFilterTest.SpatialFilterConfig_JSON
[ FAILED ] TemporalFilterTest.TemporalFilterConfig_General
[ FAILED ] TemporalFilterTest.GetTemporalFilterParameters
[ FAILED ] TemporalFilterTest.GetTemporalFilterParameterLimits
[ FAILED ] TemporalFilterTest.TemporalFilterConfig_JSON
39 FAILED TESTS
YOU HAVE 1 DISABLED TEST
test/CMakeFiles/check.dir/build.make:57: recipe for target 'test/CMakeFiles/check' failed
make[3]: *** [test/CMakeFiles/check] Error 1
CMakeFiles/Makefile2:688: recipe for target 'test/CMakeFiles/check.dir/all' failed
make[2]: *** [test/CMakeFiles/check.dir/all] Error 2
CMakeFiles/Makefile2:695: recipe for target 'test/CMakeFiles/check.dir/rule' failed
make[1]: *** [test/CMakeFiles/check.dir/rule] Error 2
Makefile:368: recipe for target 'check' failed
make: *** [check] Error 2
For reference, I am using libo3d3xx version=0.7.3, and the camera reports "IFM_Software": "1.6.2114".
I have followed the solutions in #124 #49 #51 #28 and so on, but the tests still fail. This problem has blocked me for several days, so it would really help if you could offer a detailed solution. @tpanzarella @graugans @cfreundl Thanks very much.
I also tried the other library, "ifm3d", and it still cannot pass the tests.
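Since every failing test above dies with "XMLRPC call timed out", it may be worth ruling out basic network connectivity before digging into either library. Below is a minimal diagnostic sketch I put together; it assumes the camera is at 192.168.1.69 (taken from the test invocation in the logs), that the XMLRPC server listens on HTTP port 80, and that the endpoint path is `/api/rpc/v1/com.ifm.efector/` — hedged guesses, not verified facts about this firmware:

```python
import socket


def xmlrpc_url(ip, path="/api/rpc/v1/com.ifm.efector/"):
    """Build the camera's XMLRPC endpoint URL.

    The path is an assumption based on ifm O3D3xx documentation;
    adjust it if your firmware documents a different endpoint.
    """
    return "http://{}{}".format(ip, path)


def can_reach(ip, port=80, timeout=3.0):
    """Return True if a TCP connection to ip:port succeeds within timeout.

    A False result here means the XMLRPC timeouts in the test log are
    almost certainly a network problem, not a library bug.
    """
    try:
        s = socket.create_connection((ip, port), timeout=timeout)
        s.close()
        return True
    except OSError:
        return False
```

Running `can_reach("192.168.1.69")` from the build host should return True before the unit tests have any chance of passing; if it returns False, check cabling, the host's IP/subnet configuration, and any firewall rules first.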
andy@andy-zhaoyang-k42-80:~/ifm3d-0.7.0$ mkdir build
andy@andy-zhaoyang-k42-80:~/ifm3d-0.7.0$ cd build
andy@andy-zhaoyang-k42-80:~/ifm3d-0.7.0/build$ cmake -DCMAKE_INSTALL_PREFIX=/usr ..
-- The CXX compiler identification is GNU 5.4.1
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The C compiler identification is GNU 5.4.1
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Found PythonInterp: /usr/bin/python (found version "2.7.12")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found XMLRPC: 1
-- Boost version: 1.58.0
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- system
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- system
-- date_time
-- regex
-- Checking for module 'eigen3'
-- Found eigen3, version 3.2.92
-- Found eigen: /usr/include/eigen3
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- system
-- filesystem
-- thread
-- date_time
-- iostreams
-- serialization
-- chrono
-- atomic
-- regex
-- looking for PCL_COMMON
-- Found PCL_COMMON: /usr/lib/x86_64-linux-gnu/libpcl_common.so
-- Found PCL: /usr/lib/x86_64-linux-gnu/libboost_system.so;/usr/lib/x86_64-linux-gnu/libboost_filesystem.so;/usr/lib/x86_64-linux-gnu/libboost_thread.so;/usr/lib/x86_64-linux-gnu/libboost_date_time.so;/usr/lib/x86_64-linux-gnu/libboost_iostreams.so;/usr/lib/x86_64-linux-gnu/libboost_serialization.so;/usr/lib/x86_64-linux-gnu/libboost_chrono.so;/usr/lib/x86_64-linux-gnu/libboost_atomic.so;/usr/lib/x86_64-linux-gnu/libboost_regex.so;optimized;/usr/lib/x86_64-linux-gnu/libpcl_common.so;debug;/usr/lib/x86_64-linux-gnu/libpcl_common.so;/usr/lib/x86_64-linux-gnu/libboost_system.so;/usr/lib/x86_64-linux-gnu/libboost_filesystem.so;/usr/lib/x86_64-linux-gnu/libboost_thread.so;/usr/lib/x86_64-linux-gnu/libboost_date_time.so;/usr/lib/x86_64-linux-gnu/libboost_iostreams.so;/usr/lib/x86_64-linux-gnu/libboost_serialization.so;/usr/lib/x86_64-linux-gnu/libboost_chrono.so;/usr/lib/x86_64-linux-gnu/libboost_atomic.so;/usr/lib/x86_64-linux-gnu/libboost_regex.so (Required is at least version "1.7")
-- Found OpenCV: /usr/local (found version "3.4.1")
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- system
-- date_time
-- regex
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- system
-- regex
-- date_time
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- system
-- regex
-- date_time
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- program_options
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- system
-- filesystem
-- thread
-- date_time
-- iostreams
-- serialization
-- chrono
-- atomic
-- regex
-- Checking for module 'libopenni'
-- Found libopenni, version 1.5.4.0
-- Found openni: /usr/lib/libOpenNI.so
-- Checking for module 'libopenni2'
-- Found libopenni2, version 2.2.0.3
-- Found OpenNI2: /usr/lib/libOpenNI2.so
** WARNING ** io features related to pcap will be disabled
** WARNING ** io features related to png will be disabled
-- The imported target "vtkRenderingPythonTkWidgets" references the file
"/usr/lib/x86_64-linux-gnu/libvtkRenderingPythonTkWidgets.so"
but this file does not exist. Possible reasons include:
The file was deleted, renamed, or moved to another location.
An install or uninstall procedure did not complete successfully.
The installation package was faulty and contained
"/usr/lib/cmake/vtk-6.2/VTKTargets.cmake"
but not all the files it references.
-- The imported target "vtk" references the file
"/usr/bin/vtk"
but this file does not exist. Possible reasons include:
The file was deleted, renamed, or moved to another location.
An install or uninstall procedure did not complete successfully.
The installation package was faulty and contained
"/usr/lib/cmake/vtk-6.2/VTKTargets.cmake"
but not all the files it references.
-- Found libusb-1.0: /usr/include
-- Checking for module 'flann'
-- Found flann, version 1.8.4
-- Found Flann: /usr/lib/x86_64-linux-gnu/libflann_cpp_s.a
** WARNING ** visualization features related to pcap will be disabled
** WARNING ** visualization features related to png will be disabled
-- looking for PCL_COMMON
-- looking for PCL_OCTREE
-- Found PCL_OCTREE: /usr/lib/x86_64-linux-gnu/libpcl_octree.so
-- looking for PCL_IO
-- Found PCL_IO: /usr/lib/x86_64-linux-gnu/libpcl_io.so
-- looking for PCL_KDTREE
-- Found PCL_KDTREE: /usr/lib/x86_64-linux-gnu/libpcl_kdtree.so
-- looking for PCL_GEOMETRY
-- Found PCL_GEOMETRY: /usr/include/pcl-1.7
-- looking for PCL_SEARCH
-- Found PCL_SEARCH: /usr/lib/x86_64-linux-gnu/libpcl_search.so
-- looking for PCL_VISUALIZATION
-- Found PCL_VISUALIZATION: /usr/lib/x86_64-linux-gnu/libpcl_visualization.so
-- Found PCL: /usr/lib/x86_64-linux-gnu/libboost_system.so;/usr/lib/x86_64-linux-gnu/libboost_filesystem.so;/usr/lib/x86_64-linux-gnu/libboost_thread.so;/usr/lib/x86_64-linux-gnu/libboost_date_time.so;/usr/lib/x86_64-linux-gnu/libboost_iostreams.so;/usr/lib/x86_64-linux-gnu/libboost_serialization.so;/usr/lib/x86_64-linux-gnu/libboost_chrono.so;/usr/lib/x86_64-linux-gnu/libboost_atomic.so;/usr/lib/x86_64-linux-gnu/libboost_regex.so;optimized;/usr/lib/x86_64-linux-gnu/libpcl_common.so;debug;/usr/lib/x86_64-linux-gnu/libpcl_common.so;optimized;/usr/lib/x86_64-linux-gnu/libpcl_octree.so;debug;/usr/lib/x86_64-linux-gnu/libpcl_octree.so;/usr/lib/libOpenNI.so;/usr/lib/libOpenNI2.so;vtkImagingStencil;vtkCommonComputationalGeometry;vtkCommonDataModel;vtkCommonMath;vtkCommonCore;vtksys;vtkCommonMisc;vtkCommonSystem;vtkCommonTransforms;vtkImagingCore;vtkCommonExecutionModel;vtkFiltersAMR;vtkFiltersGeneral;vtkFiltersCore;vtkParallelCore;vtkIOLegacy;vtkIOCore;/usr/lib/x86_64-linux-gnu/libz.so;vtkInteractionWidgets;vtkFiltersHybrid;vtkImagingSources;vtkRenderingCore;vtkCommonColor;vtkFiltersExtraction;vtkFiltersStatistics;vtkImagingFourier;vtkalglib;vtkFiltersGeometry;vtkFiltersSources;vtkFiltersModeling;vtkImagingGeneral;vtkImagingHybrid;vtkIOImage;vtkDICOMParser;vtkmetaio;/usr/lib/x86_64-linux-gnu/libjpeg.so;/usr/lib/x86_64-linux-gnu/libpng.so;/usr/lib/x86_64-linux-gnu/libtiff.so;vtkInteractionStyle;vtkRenderingAnnotation;vtkImagingColor;vtkRenderingFreeType;/usr/lib/x86_64-linux-gnu/libfreetype.so;vtkftgl;vtkRenderingVolume;vtkIOParallelNetCDF;vtkParallelMPI;/usr/lib/x86_64-linux-gnu/libnetcdf_c++.so;/usr/lib/x86_64-linux-gnu/libnetcdf.so;/usr/lib/x86_64-linux-gnu/hdf5/serial/lib/libhdf5.so;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libsz.so;/usr/lib/x86_64-linux-gnu/libdl.so;/usr/lib/x86_64-linux-gnu/libm.so;/usr/lib/x86_64-linux-gnu/hdf5/serial/lib/libhdf5_hl.so;vtkRenderingOpenGL;vtkIOLSDyna;vtkIOXML;vtkIOGeometry;/usr/lib/x86_64-linux-gnu/libjsoncpp.
so;vtkIOXMLParser;/usr/lib/x86_64-linux-gnu/libexpat.so;vtkLocalExample;vtkInfovisCore;vtkGeovisCore;vtkInfovisLayout;vtkViewsCore;vtkproj4;/usr/lib/x86_64-linux-gnu/libpython2.7.so;vtkTestingGenericBridge;/usr/lib/libgl2ps.so;verdict;vtkIOMovie;/usr/lib/x86_64-linux-gnu/libtheoraenc.so;/usr/lib/x86_64-linux-gnu/libtheoradec.so;/usr/lib/x86_64-linux-gnu/libogg.so;vtkFiltersImaging;vtkIOMINC;vtkRenderingLOD;vtkViewsQt;vtkGUISupportQt;vtkViewsInfovis;vtkChartsCore;vtkRenderingContext2D;vtkRenderingLabel;vtkRenderingImage;vtkFiltersFlowPaths;vtkxdmf2;/usr/lib/x86_64-linux-gnu/libxml2.so;vtkFiltersReebGraph;vtkViewsContext2D;vtkIOXdmf2;vtkIOAMR;vtkRenderingContextOpenGL;vtkImagingStatistics;vtkIOParallel;vtkFiltersParallel;vtkIONetCDF;vtkexoIIc;vtkGUISupportQtOpenGL;vtkIOParallelLSDyna;vtkFiltersParallelGeometry;vtkGUISupportQtWebkit;vtkIOPLY;vtkWrappingTools;vtkFiltersHyperTree;vtkRenderingVolumeOpenGL;vtkIOExodus;vtkIOPostgreSQL;vtkIOSQL;sqlite3;vtkWrappingJava;vtkFiltersParallelFlowPaths;vtkFiltersParallelStatistics;vtkFiltersProgrammable;vtkFiltersParallelImaging;vtkRenderingParallelLIC;vtkRenderingLIC;vtkInteractionImage;vtkFiltersPython;vtkWrappingPythonCore;vtkIOParallelExodus;vtkFiltersGeneric;vtkIOVideo;vtkRenderingQt;vtkFiltersTexture;vtkIOInfovis;vtkGUISupportQtSQL;vtkRenderingFreeTypeOpenGL;vtkInfovisBoostGraphAlgorithms;vtkRenderingGL2PS;vtkIOGeoJSON;vtkFiltersVerdict;vtkViewsGeovis;vtkIOImport;vtkTestingIOSQL;vtkPythonInterpreter;vtkIOODBC;vtkIOEnSight;vtkIOMySQL;vtkRenderingMatplotlib;vtkDomainsChemistry;vtkIOExport;vtkFiltersParallelMPI;vtkIOParallelXML;vtkTestingRendering;vtkIOMPIParallel;vtkParallelMPI4Py;vtkFiltersSMP;vtkFiltersSelection;vtkIOVPIC;VPIC;vtkImagingMath;vtkImagingMorphological;vtkRenderingParallel;vtkRenderingFreeTypeFontConfig;vtkIOFFMPEG;vtkIOMPIImage;vtkIOGDAL;optimized;/usr/lib/x86_64-linux-gnu/libpcl_io.so;debug;/usr/lib/x86_64-linux-gnu/libpcl_io.so;optimized;/usr/lib/x86_64-linux-gnu/libflann_cpp_s.a;debug;/usr/lib/x86_64-linux-gn
u/libflann_cpp_s.a;optimized;/usr/lib/x86_64-linux-gnu/libpcl_kdtree.so;debug;/usr/lib/x86_64-linux-gnu/libpcl_kdtree.so;optimized;/usr/lib/x86_64-linux-gnu/libpcl_search.so;debug;/usr/lib/x86_64-linux-gnu/libpcl_search.so;optimized;/usr/lib/x86_64-linux-gnu/libpcl_visualization.so;debug;/usr/lib/x86_64-linux-gnu/libpcl_visualization.so;/usr/lib/x86_64-linux-gnu/libboost_system.so;/usr/lib/x86_64-linux-gnu/libboost_filesystem.so;/usr/lib/x86_64-linux-gnu/libboost_thread.so;/usr/lib/x86_64-linux-gnu/libboost_date_time.so;/usr/lib/x86_64-linux-gnu/libboost_iostreams.so;/usr/lib/x86_64-linux-gnu/libboost_serialization.so;/usr/lib/x86_64-linux-gnu/libboost_chrono.so;/usr/lib/x86_64-linux-gnu/libboost_atomic.so;/usr/lib/x86_64-linux-gnu/libboost_regex.so;/usr/lib/libOpenNI.so;/usr/lib/libOpenNI2.so;optimized;/usr/lib/x86_64-linux-gnu/libflann_cpp_s.a;debug;/usr/lib/x86_64-linux-gnu/libflann_cpp_s.a;vtkImagingStencil;vtkCommonComputationalGeometry;vtkCommonDataModel;vtkCommonMath;vtkCommonCore;vtksys;vtkCommonMisc;vtkCommonSystem;vtkCommonTransforms;vtkImagingCore;vtkCommonExecutionModel;vtkFiltersAMR;vtkFiltersGeneral;vtkFiltersCore;vtkParallelCore;vtkIOLegacy;vtkIOCore;/usr/lib/x86_64-linux-gnu/libz.so;vtkInteractionWidgets;vtkFiltersHybrid;vtkImagingSources;vtkRenderingCore;vtkCommonColor;vtkFiltersExtraction;vtkFiltersStatistics;vtkImagingFourier;vtkalglib;vtkFiltersGeometry;vtkFiltersSources;vtkFiltersModeling;vtkImagingGeneral;vtkImagingHybrid;vtkIOImage;vtkDICOMParser;vtkmetaio;/usr/lib/x86_64-linux-gnu/libjpeg.so;/usr/lib/x86_64-linux-gnu/libpng.so;/usr/lib/x86_64-linux-gnu/libtiff.so;vtkInteractionStyle;vtkRenderingAnnotation;vtkImagingColor;vtkRenderingFreeType;/usr/lib/x86_64-linux-gnu/libfreetype.so;vtkftgl;vtkRenderingVolume;vtkIOParallelNetCDF;vtkParallelMPI;/usr/lib/x86_64-linux-gnu/libnetcdf_c++.so;/usr/lib/x86_64-linux-gnu/libnetcdf.so;/usr/lib/x86_64-linux-gnu/hdf5/serial/lib/libhdf5.so;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/li
bsz.so;/usr/lib/x86_64-linux-gnu/libdl.so;/usr/lib/x86_64-linux-gnu/libm.so;/usr/lib/x86_64-linux-gnu/hdf5/serial/lib/libhdf5_hl.so;vtkRenderingOpenGL;vtkIOLSDyna;vtkIOXML;vtkIOGeometry;/usr/lib/x86_64-linux-gnu/libjsoncpp.so;vtkIOXMLParser;/usr/lib/x86_64-linux-gnu/libexpat.so;vtkLocalExample;vtkInfovisCore;vtkGeovisCore;vtkInfovisLayout;vtkViewsCore;vtkproj4;/usr/lib/x86_64-linux-gnu/libpython2.7.so;vtkTestingGenericBridge;/usr/lib/libgl2ps.so;verdict;vtkIOMovie;/usr/lib/x86_64-linux-gnu/libtheoraenc.so;/usr/lib/x86_64-linux-gnu/libtheoradec.so;/usr/lib/x86_64-linux-gnu/libogg.so;vtkFiltersImaging;vtkIOMINC;vtkRenderingLOD;vtkViewsQt;vtkGUISupportQt;vtkViewsInfovis;vtkChartsCore;vtkRenderingContext2D;vtkRenderingLabel;vtkRenderingImage;vtkFiltersFlowPaths;vtkxdmf2;/usr/lib/x86_64-linux-gnu/libxml2.so;vtkFiltersReebGraph;vtkViewsContext2D;vtkIOXdmf2;vtkIOAMR;vtkRenderingContextOpenGL;vtkImagingStatistics;vtkIOParallel;vtkFiltersParallel;vtkIONetCDF;vtkexoIIc;vtkGUISupportQtOpenGL;vtkIOParallelLSDyna;vtkFiltersParallelGeometry;vtkGUISupportQtWebkit;vtkIOPLY;vtkWrappingTools;vtkFiltersHyperTree;vtkRenderingVolumeOpenGL;vtkIOExodus;vtkIOPostgreSQL;vtkIOSQL;sqlite3;vtkWrappingJava;vtkFiltersParallelFlowPaths;vtkFiltersParallelStatistics;vtkFiltersProgrammable;vtkFiltersParallelImaging;vtkRenderingParallelLIC;vtkRenderingLIC;vtkInteractionImage;vtkFiltersPython;vtkWrappingPythonCore;vtkIOParallelExodus;vtkFiltersGeneric;vtkIOVideo;vtkRenderingQt;vtkFiltersTexture;vtkIOInfovis;vtkGUISupportQtSQL;vtkRenderingFreeTypeOpenGL;vtkInfovisBoostGraphAlgorithms;vtkRenderingGL2PS;vtkIOGeoJSON;vtkFiltersVerdict;vtkViewsGeovis;vtkIOImport;vtkTestingIOSQL;vtkPythonInterpreter;vtkIOODBC;vtkIOEnSight;vtkIOMySQL;vtkRenderingMatplotlib;vtkDomainsChemistry;vtkIOExport;vtkFiltersParallelMPI;vtkIOParallelXML;vtkTestingRendering;vtkIOMPIParallel;vtkParallelMPI4Py;vtkFiltersSMP;vtkFiltersSelection;vtkIOVPIC;VPIC;vtkImagingMath;vtkImagingMorphological;vtkRenderingParallel;vtkRenderingFreeTypeF
ontConfig;vtkIOFFMPEG;vtkIOMPIImage;vtkIOGDAL (Required is at least version "1.7")
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- program_options
-- system
-- Found CURL: /usr/lib/x86_64-linux-gnu/libcurl.so (found version "7.47.0")
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- system
-- filesystem
-- thread
-- date_time
-- iostreams
-- serialization
-- chrono
-- atomic
-- regex
** WARNING ** io features related to pcap will be disabled
** WARNING ** io features related to png will be disabled
-- The imported target "vtkRenderingPythonTkWidgets" references the file
"/usr/lib/x86_64-linux-gnu/libvtkRenderingPythonTkWidgets.so"
but this file does not exist. Possible reasons include:
The file was deleted, renamed, or moved to another location.
An install or uninstall procedure did not complete successfully.
The installation package was faulty and contained
"/usr/lib/cmake/vtk-6.2/VTKTargets.cmake"
but not all the files it references.
-- The imported target "vtk" references the file
"/usr/bin/vtk"
but this file does not exist. Possible reasons include:
The file was deleted, renamed, or moved to another location.
An install or uninstall procedure did not complete successfully.
The installation package was faulty and contained
"/usr/lib/cmake/vtk-6.2/VTKTargets.cmake"
but not all the files it references.
-- looking for PCL_COMMON
-- looking for PCL_OCTREE
-- looking for PCL_IO
-- Found PCL: /usr/lib/x86_64-linux-gnu/libboost_system.so;/usr/lib/x86_64-linux-gnu/libboost_filesystem.so;/usr/lib/x86_64-linux-gnu/libboost_thread.so;/usr/lib/x86_64-linux-gnu/libboost_date_time.so;/usr/lib/x86_64-linux-gnu/libboost_iostreams.so;/usr/lib/x86_64-linux-gnu/libboost_serialization.so;/usr/lib/x86_64-linux-gnu/libboost_chrono.so;/usr/lib/x86_64-linux-gnu/libboost_atomic.so;/usr/lib/x86_64-linux-gnu/libboost_regex.so;optimized;/usr/lib/x86_64-linux-gnu/libpcl_common.so;debug;/usr/lib/x86_64-linux-gnu/libpcl_common.so;optimized;/usr/lib/x86_64-linux-gnu/libpcl_octree.so;debug;/usr/lib/x86_64-linux-gnu/libpcl_octree.so;/usr/lib/libOpenNI.so;/usr/lib/libOpenNI2.so;vtkImagingStencil;vtkCommonComputationalGeometry;vtkCommonDataModel;vtkCommonMath;vtkCommonCore;vtksys;vtkCommonMisc;vtkCommonSystem;vtkCommonTransforms;vtkImagingCore;vtkCommonExecutionModel;vtkFiltersAMR;vtkFiltersGeneral;vtkFiltersCore;vtkParallelCore;vtkIOLegacy;vtkIOCore;/usr/lib/x86_64-linux-gnu/libz.so;vtkInteractionWidgets;vtkFiltersHybrid;vtkImagingSources;vtkRenderingCore;vtkCommonColor;vtkFiltersExtraction;vtkFiltersStatistics;vtkImagingFourier;vtkalglib;vtkFiltersGeometry;vtkFiltersSources;vtkFiltersModeling;vtkImagingGeneral;vtkImagingHybrid;vtkIOImage;vtkDICOMParser;vtkmetaio;/usr/lib/x86_64-linux-gnu/libjpeg.so;/usr/lib/x86_64-linux-gnu/libpng.so;/usr/lib/x86_64-linux-gnu/libtiff.so;vtkInteractionStyle;vtkRenderingAnnotation;vtkImagingColor;vtkRenderingFreeType;/usr/lib/x86_64-linux-gnu/libfreetype.so;vtkftgl;vtkRenderingVolume;vtkIOParallelNetCDF;vtkParallelMPI;/usr/lib/x86_64-linux-gnu/libnetcdf_c++.so;/usr/lib/x86_64-linux-gnu/libnetcdf.so;/usr/lib/x86_64-linux-gnu/hdf5/serial/lib/libhdf5.so;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libsz.so;/usr/lib/x86_64-linux-gnu/libdl.so;/usr/lib/x86_64-linux-gnu/libm.so;/usr/lib/x86_64-linux-gnu/hdf5/serial/lib/libhdf5_hl.so;vtkRenderingOpenGL;vtkIOLSDyna;vtkIOXML;vtkIOGeometry;/usr/lib/x86_64-linux-gnu/libjsoncpp.
so;vtkIOXMLParser;/usr/lib/x86_64-linux-gnu/libexpat.so;vtkLocalExample;vtkInfovisCore;vtkGeovisCore;vtkInfovisLayout;vtkViewsCore;vtkproj4;/usr/lib/x86_64-linux-gnu/libpython2.7.so;vtkTestingGenericBridge;/usr/lib/libgl2ps.so;verdict;vtkIOMovie;/usr/lib/x86_64-linux-gnu/libtheoraenc.so;/usr/lib/x86_64-linux-gnu/libtheoradec.so;/usr/lib/x86_64-linux-gnu/libogg.so;vtkFiltersImaging;vtkIOMINC;vtkRenderingLOD;vtkViewsQt;vtkGUISupportQt;vtkViewsInfovis;vtkChartsCore;vtkRenderingContext2D;vtkRenderingLabel;vtkRenderingImage;vtkFiltersFlowPaths;vtkxdmf2;/usr/lib/x86_64-linux-gnu/libxml2.so;vtkFiltersReebGraph;vtkViewsContext2D;vtkIOXdmf2;vtkIOAMR;vtkRenderingContextOpenGL;vtkImagingStatistics;vtkIOParallel;vtkFiltersParallel;vtkIONetCDF;vtkexoIIc;vtkGUISupportQtOpenGL;vtkIOParallelLSDyna;vtkFiltersParallelGeometry;vtkGUISupportQtWebkit;vtkIOPLY;vtkWrappingTools;vtkFiltersHyperTree;vtkRenderingVolumeOpenGL;vtkIOExodus;vtkIOPostgreSQL;vtkIOSQL;sqlite3;vtkWrappingJava;vtkFiltersParallelFlowPaths;vtkFiltersParallelStatistics;vtkFiltersProgrammable;vtkFiltersParallelImaging;vtkRenderingParallelLIC;vtkRenderingLIC;vtkInteractionImage;vtkFiltersPython;vtkWrappingPythonCore;vtkIOParallelExodus;vtkFiltersGeneric;vtkIOVideo;vtkRenderingQt;vtkFiltersTexture;vtkIOInfovis;vtkGUISupportQtSQL;vtkRenderingFreeTypeOpenGL;vtkInfovisBoostGraphAlgorithms;vtkRenderingGL2PS;vtkIOGeoJSON;vtkFiltersVerdict;vtkViewsGeovis;vtkIOImport;vtkTestingIOSQL;vtkPythonInterpreter;vtkIOODBC;vtkIOEnSight;vtkIOMySQL;vtkRenderingMatplotlib;vtkDomainsChemistry;vtkIOExport;vtkFiltersParallelMPI;vtkIOParallelXML;vtkTestingRendering;vtkIOMPIParallel;vtkParallelMPI4Py;vtkFiltersSMP;vtkFiltersSelection;vtkIOVPIC;VPIC;vtkImagingMath;vtkImagingMorphological;vtkRenderingParallel;vtkRenderingFreeTypeFontConfig;vtkIOFFMPEG;vtkIOMPIImage;vtkIOGDAL;optimized;/usr/lib/x86_64-linux-gnu/libpcl_io.so;debug;/usr/lib/x86_64-linux-gnu/libpcl_io.so;/usr/lib/x86_64-linux-gnu/libboost_system.so;/usr/lib/x86_64-linux-gnu/libboost_fil
esystem.so;/usr/lib/x86_64-linux-gnu/libboost_thread.so;/usr/lib/x86_64-linux-gnu/libboost_date_time.so;/usr/lib/x86_64-linux-gnu/libboost_iostreams.so;/usr/lib/x86_64-linux-gnu/libboost_serialization.so;/usr/lib/x86_64-linux-gnu/libboost_chrono.so;/usr/lib/x86_64-linux-gnu/libboost_atomic.so;/usr/lib/x86_64-linux-gnu/libboost_regex.so;/usr/lib/libOpenNI.so;/usr/lib/libOpenNI2.so;vtkImagingStencil;vtkCommonComputationalGeometry;vtkCommonDataModel;vtkCommonMath;vtkCommonCore;vtksys;vtkCommonMisc;vtkCommonSystem;vtkCommonTransforms;vtkImagingCore;vtkCommonExecutionModel;vtkFiltersAMR;vtkFiltersGeneral;vtkFiltersCore;vtkParallelCore;vtkIOLegacy;vtkIOCore;/usr/lib/x86_64-linux-gnu/libz.so;vtkInteractionWidgets;vtkFiltersHybrid;vtkImagingSources;vtkRenderingCore;vtkCommonColor;vtkFiltersExtraction;vtkFiltersStatistics;vtkImagingFourier;vtkalglib;vtkFiltersGeometry;vtkFiltersSources;vtkFiltersModeling;vtkImagingGeneral;vtkImagingHybrid;vtkIOImage;vtkDICOMParser;vtkmetaio;/usr/lib/x86_64-linux-gnu/libjpeg.so;/usr/lib/x86_64-linux-gnu/libpng.so;/usr/lib/x86_64-linux-gnu/libtiff.so;vtkInteractionStyle;vtkRenderingAnnotation;vtkImagingColor;vtkRenderingFreeType;/usr/lib/x86_64-linux-gnu/libfreetype.so;vtkftgl;vtkRenderingVolume;vtkIOParallelNetCDF;vtkParallelMPI;/usr/lib/x86_64-linux-gnu/libnetcdf_c++.so;/usr/lib/x86_64-linux-gnu/libnetcdf.so;/usr/lib/x86_64-linux-gnu/hdf5/serial/lib/libhdf5.so;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libsz.so;/usr/lib/x86_64-linux-gnu/libdl.so;/usr/lib/x86_64-linux-gnu/libm.so;/usr/lib/x86_64-linux-gnu/hdf5/serial/lib/libhdf5_hl.so;vtkRenderingOpenGL;vtkIOLSDyna;vtkIOXML;vtkIOGeometry;/usr/lib/x86_64-linux-gnu/libjsoncpp.so;vtkIOXMLParser;/usr/lib/x86_64-linux-gnu/libexpat.so;vtkLocalExample;vtkInfovisCore;vtkGeovisCore;vtkInfovisLayout;vtkViewsCore;vtkproj4;/usr/lib/x86_64-linux-gnu/libpython2.7.so;vtkTestingGenericBridge;/usr/lib/libgl2ps.so;verdict;vtkIOMovie;/usr/lib/x86_64-linux-gnu/libtheoraenc.so;/usr/lib/x86_
64-linux-gnu/libtheoradec.so;/usr/lib/x86_64-linux-gnu/libogg.so;vtkFiltersImaging;vtkIOMINC;vtkRenderingLOD;vtkViewsQt;vtkGUISupportQt;vtkViewsInfovis;vtkChartsCore;vtkRenderingContext2D;vtkRenderingLabel;vtkRenderingImage;vtkFiltersFlowPaths;vtkxdmf2;/usr/lib/x86_64-linux-gnu/libxml2.so;vtkFiltersReebGraph;vtkViewsContext2D;vtkIOXdmf2;vtkIOAMR;vtkRenderingContextOpenGL;vtkImagingStatistics;vtkIOParallel;vtkFiltersParallel;vtkIONetCDF;vtkexoIIc;vtkGUISupportQtOpenGL;vtkIOParallelLSDyna;vtkFiltersParallelGeometry;vtkGUISupportQtWebkit;vtkIOPLY;vtkWrappingTools;vtkFiltersHyperTree;vtkRenderingVolumeOpenGL;vtkIOExodus;vtkIOPostgreSQL;vtkIOSQL;sqlite3;vtkWrappingJava;vtkFiltersParallelFlowPaths;vtkFiltersParallelStatistics;vtkFiltersProgrammable;vtkFiltersParallelImaging;vtkRenderingParallelLIC;vtkRenderingLIC;vtkInteractionImage;vtkFiltersPython;vtkWrappingPythonCore;vtkIOParallelExodus;vtkFiltersGeneric;vtkIOVideo;vtkRenderingQt;vtkFiltersTexture;vtkIOInfovis;vtkGUISupportQtSQL;vtkRenderingFreeTypeOpenGL;vtkInfovisBoostGraphAlgorithms;vtkRenderingGL2PS;vtkIOGeoJSON;vtkFiltersVerdict;vtkViewsGeovis;vtkIOImport;vtkTestingIOSQL;vtkPythonInterpreter;vtkIOODBC;vtkIOEnSight;vtkIOMySQL;vtkRenderingMatplotlib;vtkDomainsChemistry;vtkIOExport;vtkFiltersParallelMPI;vtkIOParallelXML;vtkTestingRendering;vtkIOMPIParallel;vtkParallelMPI4Py;vtkFiltersSMP;vtkFiltersSelection;vtkIOVPIC;VPIC;vtkImagingMath;vtkImagingMorphological;vtkRenderingParallel;vtkRenderingFreeTypeFontConfig;vtkIOFFMPEG;vtkIOMPIImage;vtkIOGDAL (Required is at least version "1.7.1")
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- system
-- Configuring done
-- Generating done
-- Build files have been written to: /home/andy/ifm3d-0.7.0/build
andy@andy-zhaoyang-k42-80:~/ifm3d-0.7.0/build$ o3d3xx_ip=192.168.1.69 make
Scanning dependencies of target gtest
[ 1%] Building CXX object gtest_bin/CMakeFiles/gtest.dir/src/gtest-all.cc.o
[ 2%] Linking CXX static library libgtest.a
[ 2%] Built target gtest
Scanning dependencies of target gtest_main
[ 4%] Building CXX object gtest_bin/CMakeFiles/gtest_main.dir/src/gtest_main.cc.o
[ 5%] Linking CXX static library libgtest_main.a
[ 5%] Built target gtest_main
Scanning dependencies of target ifm3d_camera_shared
[ 7%] Building CXX object modules/camera/src/libifm3d_camera/CMakeFiles/ifm3d_camera_shared.dir/camera.cpp.o
[ 8%] Building CXX object modules/camera/src/libifm3d_camera/CMakeFiles/ifm3d_camera_shared.dir/err.cpp.o
[ 10%] Building CXX object modules/camera/src/libifm3d_camera/CMakeFiles/ifm3d_camera_shared.dir/logging.cpp.o
[ 11%] Building CXX object modules/camera/src/libifm3d_camera/CMakeFiles/ifm3d_camera_shared.dir/version.cpp.o
[ 13%] Linking CXX shared library libifm3d_camera.so
[ 13%] Built target ifm3d_camera_shared
Scanning dependencies of target ifm3d-camera-tests
[ 14%] Building CXX object modules/camera/test/CMakeFiles/ifm3d-camera-tests.dir/ifm3d-camera-camera-tests.cpp.o
[ 16%] Building CXX object modules/camera/test/CMakeFiles/ifm3d-camera-tests.dir/ifm3d-camera-err-tests.cpp.o
[ 17%] Building CXX object modules/camera/test/CMakeFiles/ifm3d-camera-tests.dir/ifm3d-camera-testrunner.cpp.o
[ 19%] Building CXX object modules/camera/test/CMakeFiles/ifm3d-camera-tests.dir/ifm3d-camera-version-tests.cpp.o
[ 20%] Linking CXX executable ifm3d-camera-tests
[ 20%] Built target ifm3d-camera-tests
Scanning dependencies of target ifm3d_framegrabber_shared
[ 22%] Building CXX object modules/framegrabber/src/libifm3d_framegrabber/CMakeFiles/ifm3d_framegrabber_shared.dir/byte_buffer.cpp.o
[ 23%] Building CXX object modules/framegrabber/src/libifm3d_framegrabber/CMakeFiles/ifm3d_framegrabber_shared.dir/frame_grabber.cpp.o
[ 25%] Building CXX object modules/framegrabber/src/libifm3d_framegrabber/CMakeFiles/ifm3d_framegrabber_shared.dir/schema.cpp.o
[ 26%] Linking CXX shared library libifm3d_framegrabber.so
[ 26%] Built target ifm3d_framegrabber_shared
Scanning dependencies of target ifm3d-fg-tests
[ 28%] Building CXX object modules/framegrabber/test/CMakeFiles/ifm3d-fg-tests.dir/ifm3d-fg-testrunner.cpp.o
[ 29%] Building CXX object modules/framegrabber/test/CMakeFiles/ifm3d-fg-tests.dir/ifm3d-fg-tests.cpp.o
[ 31%] Linking CXX executable ifm3d-fg-tests
[ 31%] Built target ifm3d-fg-tests
Scanning dependencies of target ifm3d_image_shared
[ 32%] Building CXX object modules/image/src/libifm3d_image/CMakeFiles/ifm3d_image_shared.dir/image_buffer.cpp.o
[ 34%] Linking CXX shared library libifm3d_image.so
[ 34%] Built target ifm3d_image_shared
Scanning dependencies of target ifm3d-image-tests
[ 35%] Building CXX object modules/image/test/CMakeFiles/ifm3d-image-tests.dir/ifm3d-image-testrunner.cpp.o
[ 37%] Building CXX object modules/image/test/CMakeFiles/ifm3d-image-tests.dir/ifm3d-image-tests.cpp.o
[ 38%] Linking CXX executable ifm3d-image-tests
[ 38%] Built target ifm3d-image-tests
Scanning dependencies of target ifm3d_pcicclient_shared
[ 40%] Building CXX object modules/pcicclient/src/libifm3d_pcicclient/CMakeFiles/ifm3d_pcicclient_shared.dir/pcicclient.cpp.o
[ 41%] Linking CXX shared library libifm3d_pcicclient.so
[ 41%] Built target ifm3d_pcicclient_shared
Scanning dependencies of target ifm3d-pcicclient-tests
[ 43%] Building CXX object modules/pcicclient/test/CMakeFiles/ifm3d-pcicclient-tests.dir/ifm3d-pcicclient-testrunner.cpp.o
[ 44%] Building CXX object modules/pcicclient/test/CMakeFiles/ifm3d-pcicclient-tests.dir/ifm3d-pcicclient-tests.cpp.o
[ 46%] Linking CXX executable ifm3d-pcicclient-tests
[ 46%] Built target ifm3d-pcicclient-tests
Scanning dependencies of target ifm3d_tools_shared_autogen
[ 47%] Automatic MOC for target ifm3d_tools_shared
[ 47%] Built target ifm3d_tools_shared_autogen
Scanning dependencies of target ifm3d_tools_shared
[ 49%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/app_types_app.cpp.o
[ 50%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/cmdline_app.cpp.o
[ 52%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/config_app.cpp.o
[ 53%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/cp_app.cpp.o
[ 55%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/dump_app.cpp.o
[ 56%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/export_app.cpp.o
[ 58%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/imager_types_app.cpp.o
[ 59%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/import_app.cpp.o
[ 61%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/ls_app.cpp.o
[ 62%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/make_app.cpp.o
[ 64%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/reboot_app.cpp.o
[ 65%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/reset_app.cpp.o
[ 67%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/rm_app.cpp.o
[ 68%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/swupdate_app.cpp.o
[ 70%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/time_app.cpp.o
[ 71%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/trace_app.cpp.o
[ 73%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/fg/hz_app.cpp.o
[ 74%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/fg/schema_app.cpp.o
[ 76%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/image/viewer_app.cpp.o
[ 77%] Building CXX object modules/tools/src/libifm3d_tools/CMakeFiles/ifm3d_tools_shared.dir/ifm3d_tools_shared_autogen/mocs_compilation.cpp.o
[ 79%] Linking CXX shared library libifm3d_tools.so
[ 79%] Built target ifm3d_tools_shared
Scanning dependencies of target ifm3d
[ 80%] Building CXX object modules/tools/src/bin/CMakeFiles/ifm3d.dir/ifm3d.cpp.o
[ 82%] Linking CXX executable ifm3d
[ 82%] Built target ifm3d
Scanning dependencies of target ex-pcicclient_async_messages
[ 83%] Building CXX object modules/examples/CMakeFiles/ex-pcicclient_async_messages.dir/ex-pcicclient_async_messages.cpp.o
[ 85%] Linking CXX executable ex-pcicclient_async_messages
[ 85%] Built target ex-pcicclient_async_messages
Scanning dependencies of target ex-getmac
[ 86%] Building CXX object modules/examples/CMakeFiles/ex-getmac.dir/ex-getmac.cpp.o
[ 88%] Linking CXX executable ex-getmac
[ 88%] Built target ex-getmac
Scanning dependencies of target ex-file_io
[ 89%] Building CXX object modules/examples/CMakeFiles/ex-file_io.dir/ex-file_io.cpp.o
[ 91%] Linking CXX executable ex-file_io
[ 91%] Built target ex-file_io
Scanning dependencies of target ex-timestamp
[ 92%] Building CXX object modules/examples/CMakeFiles/ex-timestamp.dir/ex-timestamp.cpp.o
[ 94%] Linking CXX executable ex-timestamp
[ 94%] Built target ex-timestamp
Scanning dependencies of target ex-fast_app_switch
[ 95%] Building CXX object modules/examples/CMakeFiles/ex-fast_app_switch.dir/ex-fast_app_switch.cpp.o
[ 97%] Linking CXX executable ex-fast_app_switch
[ 97%] Built target ex-fast_app_switch
Scanning dependencies of target ex-pcicclient_set_io
[ 98%] Building CXX object modules/examples/CMakeFiles/ex-pcicclient_set_io.dir/ex-pcicclient_set_io.cpp.o
[100%] Linking CXX executable ex-pcicclient_set_io
[100%] Built target ex-pcicclient_set_io
andy@andy-zhaoyang-k42-80:~/ifm3d-0.7.0/build$ make check
[ 16%] Built target ifm3d_camera_shared
[ 22%] Built target ifm3d_pcicclient_shared
[ 29%] Built target gtest
[ 35%] Built target gtest_main
[ 45%] Built target ifm3d-pcicclient-tests
Scanning dependencies of target check_pcicclient
[==========] Running 2 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 2 tests from PCICClientTest
[ RUN ] PCICClientTest.IncomingResponseMessage
unknown file: Failure
C++ exception with description "Lib: XMLRPC Timeout - can you `ping' the sensor?" thrown in the test body.
[ FAILED ] PCICClientTest.IncomingResponseMessage (5248 ms)
[ RUN ] PCICClientTest.InvalidCommandLength
unknown file: Failure
C++ exception with description "Lib: XMLRPC Timeout - can you `ping' the sensor?" thrown in the test body.
[ FAILED ] PCICClientTest.InvalidCommandLength (3070 ms)
[----------] 2 tests from PCICClientTest (8319 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 1 test case ran. (8319 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 2 tests, listed below:
[ FAILED ] PCICClientTest.IncomingResponseMessage
[ FAILED ] PCICClientTest.InvalidCommandLength
2 FAILED TESTS
modules/pcicclient/test/CMakeFiles/check_pcicclient.dir/build.make:57: recipe for target 'modules/pcicclient/test/CMakeFiles/check_pcicclient' failed
make[3]: *** [modules/pcicclient/test/CMakeFiles/check_pcicclient] Error 1
CMakeFiles/Makefile2:918: recipe for target 'modules/pcicclient/test/CMakeFiles/check_pcicclient.dir/all' failed
make[2]: *** [modules/pcicclient/test/CMakeFiles/check_pcicclient.dir/all] Error 2
CMakeFiles/Makefile2:109: recipe for target 'CMakeFiles/check.dir/rule' failed
make[1]: *** [CMakeFiles/check.dir/rule] Error 2
Makefile:199: recipe for target 'check' failed
make: *** [check] Error 2
Is this the same error as the one I have pasted above?
BTW, ifm3d: version=0.7.0 and IFM_Software: 1.6.2114.
@tpanzarella @graugans @cfreundl
Any help will be appreciated.
Thanks!
I have tried the commands "$ export o3d3xx_ip=192.168.1.69" and "$ o3d3xx_ip=192.168.1.69 make check", but obtained the same errors.
Is the "$ O3D3XX_IP=192.168.1.69 make check" case-sensitive?
Yes. It is case sensitive.
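For reference, a minimal shell sketch of the difference (the variable name comes from the test harness discussed above; the IP address is just an example):

```shell
# POSIX environment variable names are case-sensitive: the libo3d3xx
# test harness reads O3D3XX_IP (uppercase), so a lowercase o3d3xx_ip
# is simply a different, unused variable.
o3d3xx_ip=192.168.1.69         # ignored by `make check`
export O3D3XX_IP=192.168.1.69  # this is what the tests read
echo "tests will target: ${O3D3XX_IP}"
```

Equivalently, the one-shot form `O3D3XX_IP=192.168.1.69 make check` sets the variable only for that single invocation.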
Yeah, it really was caused by ignoring the case sensitivity. I changed the command to capital letters and then ran it, and it worked. Thank you so much! @tpanzarella
Tpanzarella helped me fix my issue, and thank you all the same. I will pay attention to my issue format and use code blocks to encapsulate my command-line output next time. Thank you for your advice. @graugans
|
gharchive/issue
| 2018-04-12T13:23:20 |
2025-04-01T06:39:27.445244
|
{
"authors": [
"Andychou007",
"tpanzarella"
],
"repo": "lovepark/libo3d3xx",
"url": "https://github.com/lovepark/libo3d3xx/issues/126",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
859376251
|
[lockstep] Introduce optimization barrier around lockstep Ibex
Certain synthesis tools like DC are very smart at optimizing away redundant logic.
Hence, we have to insert an optimization barrier at the IOs of the lockstep Ibex.
This is achieved by manually buffering each bit using prim_buf.
Our Xilinx and DC synthesis flows make sure that these buffers cannot be optimized
away using keep attributes (Vivado) and size_only constraints (DC).
Signed-off-by: Michael Schaffner msf@google.com
Could you integrate this and vendor it back into the OT repo? Thanks!
|
gharchive/pull-request
| 2021-04-16T01:28:24 |
2025-04-01T06:39:27.451735
|
{
"authors": [
"msfschaffner"
],
"repo": "lowRISC/ibex",
"url": "https://github.com/lowRISC/ibex/pull/1341",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1150095426
|
[dv/doc] Document security verification for memory integrity
I missed this one last time. Now add it.
Signed-off-by: Weicai Yang weicai@google.com
@rswarbrick thank you so much for correcting the grammar. Fixed all of it.
|
gharchive/pull-request
| 2022-02-25T06:33:49 |
2025-04-01T06:39:27.453048
|
{
"authors": [
"weicaiyang"
],
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/11103",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1185117495
|
[aes] Fix clearing of data input registers without inferring combo loop
Previously, the write enable for the data input registers was set for two clock cycles when clearing the registers. This caused the data_in_qe_i signals used for status tracking to be high during the first clock cycle when back in IDLE. As a result, the AES unit would immediately start when running in automatic operation.
This is a second version of the fix that doesn't infer a combo loop by splitting the clearing operation into two distinct states: First CLEAR_I clears input registers such as Initial Key, IV and input data registers. Then CLEAR_CO waits for the cipher core, clears the trigger bits and if selected also clear the output data registers.
This is related to lowRISC/OpenTitan#11431 and lowRISC/OpenTitan#11758.
This fixes #11431.
@tjaychen would you mind running AscentLint over this? Locally it doesn't seem to infer combo loops on the FPGA anymore, but I would like to be 100% sure before merging.
sorry a bit belated. I pulled to head of tree and did a run, did not see the issue anymore.
sorry a bit belated. I pulled to head of tree and did a run, did not see the issue anymore.
Thanks @tjaychen for taking a look and the feedback!
|
gharchive/pull-request
| 2022-03-29T16:23:59 |
2025-04-01T06:39:27.455854
|
{
"authors": [
"tjaychen",
"vogelpi"
],
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/11771",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1275386435
|
[usbdev/dif] Move DIF to S2
[RTL] Change pkt_sent interrupt to a ganged-status type instead of a pulsed-event type.
Adjust the TX status function to break up checking for sent packets and clearing status, since the function that checks the status dispatches handling to other, endpoint-specific functions.
Rebase the USB stack on top of the DIFs.
Remove the control endpoint's expression of support for remote wake (not actually supported by the IP).
Move remaining tests over to the DIFs.
should we merge this one first? or do you prefer to do it as part of #13371?
should we merge this one first? or do you prefer to do it as part of #13371?
For me, the ordering doesn't matter. I pulled the RTL change into #13371 in case the software review has a substantially longer delay than the hardware review.
|
gharchive/pull-request
| 2022-06-17T19:27:10 |
2025-04-01T06:39:27.458452
|
{
"authors": [
"a-will",
"tjaychen"
],
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/13287",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1433939077
|
[chip testplan] Fix chip level testplan
Fix the mis-mapped chip_sw_spi_device_tpm test.
Expand on example tests.
Signed-off-by: Srikrishna Iyer sriyer@google.com
Thanks for cleaning this up @sriyerg !
|
gharchive/pull-request
| 2022-11-03T00:11:56 |
2025-04-01T06:39:27.459980
|
{
"authors": [
"sriyerg",
"timothytrippel"
],
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/15959",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
755740246
|
[lc_ctrl/otp_ctrl/doc] Documentation updates
Updates include:
LC:
- [x] Corrections to the programmers guide due to the multibit mutex
- [x] Update block diagram
- [x] Document signals and interfaces
- [x] Update documentation of revised life cycle access control signals (#4504)
- [x] Add main FSM diagram
- [ ] Add system integration diagram (will add that in a subsequent PR)

OTP:
- [x] Explicit documentation of access granularity of all OTP items
- [x] DAI FSM diagram update
- [x] Block diagram update
Hey Michael, one small thing could you also update the LC spec here: https://docs.opentitan.org/hw/ip/lc_ctrl/doc/index.html#programmers-guide
Point 3 says: "Claim exclusive access to the transition interface by writing 1 to the CLAIM_TRANSITION_IF register,"
It should be writing 'hA5 right? Do you mind updating that also?
Hey Michael, one small thing could you also update the LC spec here: https://docs.opentitan.org/hw/ip/lc_ctrl/doc/index.html#programmers-guide
Point 3 says: "Claim exclusive access to the transition interface by writing 1 to the CLAIM_TRANSITION_IF register,"
It should be writing 'hA5 right? Do you mind updating that also?
Yeah that's right, thanks for catching that. I'll amend this part.
Ok this documentation update is mostly final now.
There is another system integration diagram for life cycle which is not quite finished yet.
I will add that in a subsequent PR.
Thanks a lot for the detailed review, @tjaychen.
Amended and rebased.
|
gharchive/pull-request
| 2020-12-03T01:36:32 |
2025-04-01T06:39:27.466780
|
{
"authors": [
"cindychip",
"msfschaffner"
],
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/4386",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1021500903
|
[reggen] Fixes and mubi introduction
This primarily ensures reset values are consistent
Fixes #8521
Fixes #7566
Tracking issue for mubi conversions: https://github.com/lowRISC/opentitan/issues/8347
|
gharchive/pull-request
| 2021-10-08T22:04:10 |
2025-04-01T06:39:27.468458
|
{
"authors": [
"msfschaffner",
"tjaychen"
],
"repo": "lowRISC/opentitan",
"url": "https://github.com/lowRISC/opentitan/pull/8589",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2365536296
|
Add path flag to loxicmd save to non default /etc/loxilb directory
Is your feature request related to a problem? Please describe.
The BPFire UI is able to configure loxilb lb, fw, and ip, but it is unable to save the configuration, so if loxilb restarts, the configuration made from the UI is lost.
When the BPFire web UI invokes loxicmd save -a, the save to /etc/loxilb fails because the UI user does not have permission to write to the /etc/loxilb directory.
Describe the solution you'd like
Add a path flag to loxicmd save so it can save to a non-default directory instead of /etc/loxilb. Then other non-root users, like the UI user, could execute loxicmd save -a -p /var/ipfire/loxilb to save the config to the /var/ipfire/loxilb directory. It would also be nice if a loxilb restart could restore the config from that non-default directory, e.g. /var/ipfire/loxilb.
Describe alternatives you've considered
Additional context
Hi @vincentmli,
This issue has been updated.
You can specify the saving path using the -c option in the loxicmd, and to load it from that path, you will use the --config-path option when running loxilb. Please check it out.
loxicmd save -a -c /root/
IP Configuration saved in ipconfig_2024-06-28_06:43:08.txt
/usr/bin/bash -c cp -R lbconfig_2024-06-28_06:43:08.txt /root/lbconfig.txt
.....
./loxilb --config-path /root/
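Putting the two options together, a sketch of how a non-root UI user might wire this up (the /var/ipfire/loxilb path is the one proposed in this issue; since loxicmd/loxilb may not be installed on the machine running this, the commands are only echoed here rather than executed):

```shell
# Persist loxilb config outside /etc/loxilb so a non-root UI user
# (e.g. the BPFire web UI) can write it, then load it on restart.
CONF_DIR="${CONF_DIR:-/var/ipfire/loxilb}"
echo "loxicmd save -a -c ${CONF_DIR}"    # save all config to CONF_DIR
echo "loxilb --config-path ${CONF_DIR}"  # restore from CONF_DIR on startup
```

The directory would need to be created and made writable by the UI user beforehand.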
@inhogog2 thanks, I will test in BPFire and let you know the result
@inhogog2 I tested the feature and it works perfectly from the loxicmd command line, but I still run into an issue when calling loxicmd from the WebUI Perl CGI program as user nobody (https://github.com/vincentmli/BPFire/issues/30). This is not related to loxicmd though; it is something at the OS user-permission level.
|
gharchive/issue
| 2024-06-21T01:43:42 |
2025-04-01T06:39:27.480275
|
{
"authors": [
"inhogog2",
"vincentmli"
],
"repo": "loxilb-io/loxilb",
"url": "https://github.com/loxilb-io/loxilb/issues/706",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|