id (string, 4 to 10 chars) | text (string, 4 to 2.14M chars) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (timestamp, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
1494090408 | Issue with building cardano-node: Fedora 37 official repos do not contain ncurses-compat-libs
I followed the procedure https://github.com/cardano-foundation/developer-portal/blob/staging/docs/get-started/installing-cardano-node.md on my Fedora 37 (the newest version as of writing) and found that the package ncurses-compat-libs, which is needed to install cardano-node, is no longer contained in the official Fedora repositories. I tried to install the Fedora 36 version of this package on Fedora 37 and had dependency issues. I am not sure whether the same problem also appears on the newest RHEL.
Can this be raised against the cardano-node repo? ncurses-compat-libs was dropped from RHEL/Fedora as of ncurses-6.3.1, since the newer ABI has been live for seven years. There are workarounds (installing ncurses and creating a symlink /usr/lib64/libncurses.so.5 pointing to /usr/lib64/libncurses.so.6), but this should be easier to solve correctly.
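For reference, the symlink workaround mentioned above can be sketched roughly as follows (untested on Fedora 37; package and library paths are taken from the comment above and may vary by release — this is a hack, not the proper fix of rebuilding against the new ABI):

```shell
# Install the current ncurses libraries (provides libncurses.so.6).
sudo dnf install -y ncurses-libs

# Hack: satisfy the old soname by pointing the .so.5 name at the .so.6 library.
sudo ln -s /usr/lib64/libncurses.so.6 /usr/lib64/libncurses.so.5
```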
| gharchive/issue | 2022-12-13T11:58:04 | 2025-04-01T06:38:08.498914 | {
"authors": [
"LukaKurnjek",
"rdlrt"
],
"repo": "cardano-foundation/developer-portal",
"url": "https://github.com/cardano-foundation/developer-portal/issues/886",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1040292662 | BOX-227: Add actual edit-permissions check
CHANGELOG
Added the actual edit-permissions check for the card
https://user-images.githubusercontent.com/42924400/139557470-49e7bec9-5493-421a-951c-256e21f2b069.mp4
@sergeysova ping
| gharchive/pull-request | 2021-10-30T20:22:52 | 2025-04-01T06:38:08.500534 | {
"authors": [
"azinit"
],
"repo": "cardbox/frontend",
"url": "https://github.com/cardbox/frontend/pull/85",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1336611305 | [Documentation] Fix Redundant Header in Compatibility Guide
See: https://cardinal-dev.github.io/Cardinal/pages/compatibility-guide/
Fixed in: https://github.com/cardinal-dev/Cardinal/pull/185
| gharchive/issue | 2022-08-12T00:10:11 | 2025-04-01T06:38:08.501937 | {
"authors": [
"falcon78921"
],
"repo": "cardinal-dev/Cardinal",
"url": "https://github.com/cardinal-dev/Cardinal/issues/180",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2622587337 | Only show finder filter scrollbar when hovered
This PR updates the cards grid such that the filter list scroll bar is only shown when hovered over.
Note that this update is hostile to touch devices like the iPad...
I tried it on a MacBook. It works when connected to an external mouse but does not show if I just use the trackpad from the MacBook.
If an external mouse is connected
scroll-showing-on-macbook-with-mouse-connected.mov
Using Macbook trackpad only
It only shows once you scroll
scroll-not-showing-on-macbook-mousepad-only.mov
Thanks for that. My computer runs Ubuntu. @lukemelia is this what you want to see in the case that the user is on a MacBook and they use a trackpad?
I think this is OK. Let's live with it like this and see how it feels.
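For context, a hover-only scrollbar of this kind is typically done with CSS along these lines (a sketch with hypothetical class names, not the actual Boxel code). The trackpad behavior observed above likely comes from the OS-level overlay-scrollbar setting on macOS, which CSS like this cannot override:

```css
/* Hypothetical sketch: hide the filter list's scrollbar until hovered. */
.filter-list {
  overflow-y: auto;
  scrollbar-width: none;          /* Firefox */
}
.filter-list:hover {
  scrollbar-width: thin;
}
.filter-list::-webkit-scrollbar {
  width: 0;                       /* WebKit/Blink: collapse the scrollbar */
}
.filter-list:hover::-webkit-scrollbar {
  width: 8px;
}
```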
| gharchive/pull-request | 2024-10-30T00:32:49 | 2025-04-01T06:38:08.505042 | {
"authors": [
"habdelra",
"lukemelia"
],
"repo": "cardstack/boxel",
"url": "https://github.com/cardstack/boxel/pull/1735",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
287468375 | Loading new camera post-processing assets dynamically
Hello there.
How can I load a new camera postprocessing effect (uasset) without having to re-compile Carla? Is that possible at this stage?
This is now totally possible using the CARLA API, so I am closing this issue.
| gharchive/issue | 2018-01-10T15:14:10 | 2025-04-01T06:38:08.546566 | {
"authors": [
"eds89",
"germanros1987"
],
"repo": "carla-simulator/carla",
"url": "https://github.com/carla-simulator/carla/issues/125",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
754298371 | [Camera RGB][synchronous mode]: sensor_tick not in synch with fixed_delta_seconds
Problem - short description
I have observed weird (at least to me) behavior when using the sensor_tick of the camera sensor (I didn't try other sensors yet) together with synchronous mode.
As soon as a value different from 0 is set, the sensor data is not transmitted during the second tick.
Additionally, the sensor data doesn't get transmitted according to the least common multiple.
See Observations for more info.
Example script
This is a modified sensor_synchronization.py example:
import glob
import os
import sys
from queue import Queue
from queue import Empty

sys.path.insert(0, r'C:\workdir\installations\carla\0.9.9.4\CARLA_0.9.9.4\WindowsNoEditor\PythonAPI\carla\dist\carla-0.9.9-py3.7-win-amd64.egg')

import carla


def sensor_callback(sensor_data, sensor_queue, sensor_name):
    # just to make sure that something weird doesn't happen due to the queue
    print("RECEIVED sensor data for frame: {}".format(sensor_data.frame))
    sensor_queue.put((sensor_data.frame, sensor_name, sensor_data.timestamp))


def main():
    # We start creating the client
    client = carla.Client('localhost', 2000)
    client.set_timeout(2.0)
    world = client.get_world()

    try:
        original_settings = world.get_settings()
        settings = world.get_settings()

        # We set CARLA syncronous mode
        settings.fixed_delta_seconds = 0.02
        settings.synchronous_mode = True
        world.apply_settings(settings)

        sensor_queue = Queue()
        blueprint_library = world.get_blueprint_library()
        cam_bp = blueprint_library.find('sensor.camera.rgb')
        sensor_list = []

        ## 1. no sensor tick changes
        #cam_bp.set_attribute('sensor_tick', '0.0')   # 2. manually set to 0.0
        #cam_bp.set_attribute('sensor_tick', '0.02')  # 3. manually set to 0.02 (same as fixed_delta_seconds)
        #cam_bp.set_attribute('sensor_tick', '0.04')  # 4. manually set to 0.04 (double the value of fixed_delta_seconds)
        #cam_bp.set_attribute('sensor_tick', '0.03')  # 5. manually set to 0.03 (least common multiple is 0.06)

        cam01 = world.spawn_actor(cam_bp, carla.Transform())
        cam01.listen(lambda data: sensor_callback(data, sensor_queue, "camera01"))
        sensor_list.append(cam01)

        # Main loop
        for i in range(10):
            # Tick the server
            world.tick()
            w_frame = world.get_snapshot().frame
            w_time = world.get_snapshot().timestamp
            print("\nWorld's frame: %d timestamp: %f" % (w_frame, w_time.elapsed_seconds))
            try:
                for i in range(0, len(sensor_list)):
                    s_frame = sensor_queue.get(True, 1.0)
                    print("    Frame: %d Sensor: %s Timestamp: %f" % (s_frame[0], s_frame[1], s_frame[2]))
            except Empty:
                print("    Some of the sensor information is missed")
    finally:
        world.apply_settings(original_settings)
        for sensor in sensor_list:
            sensor.destroy()


if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print(' - Exited by user.')
In order to try it out on your machine:
- modify sys.path accordingly
- uncomment the sensor_tick setting which interests you (see Observations)
Also, I didn't mess with the queue that much; I just added a print to make sure that I don't miss some sensor data due to the nature of the example (queue and popping).
Observations
I use fixed_delta_seconds of 0.02 in all tests, and modify the sensor_tick of the camera to observe the following:
sensor_tick not modified -> image received every tick [EXPECTED]
World's frame: 178547 timestamp: 2195.663063
RECEIVED sensor data for frame: 178547
Frame: 178547 Sensor: camera01 Timestamp: 2195.663063
World's frame: 178548 timestamp: 2195.683063
RECEIVED sensor data for frame: 178548
Frame: 178548 Sensor: camera01 Timestamp: 2195.683063
World's frame: 178549 timestamp: 2195.703063
RECEIVED sensor data for frame: 178549
Frame: 178549 Sensor: camera01 Timestamp: 2195.703063
World's frame: 178550 timestamp: 2195.723063
RECEIVED sensor data for frame: 178550
Frame: 178550 Sensor: camera01 Timestamp: 2195.723063
World's frame: 178551 timestamp: 2195.743063
RECEIVED sensor data for frame: 178551
Frame: 178551 Sensor: camera01 Timestamp: 2195.743063
World's frame: 178552 timestamp: 2195.763063
RECEIVED sensor data for frame: 178552
Frame: 178552 Sensor: camera01 Timestamp: 2195.763063
World's frame: 178553 timestamp: 2195.783063
RECEIVED sensor data for frame: 178553
Frame: 178553 Sensor: camera01 Timestamp: 2195.783063
World's frame: 178554 timestamp: 2195.803063
RECEIVED sensor data for frame: 178554
Frame: 178554 Sensor: camera01 Timestamp: 2195.803063
World's frame: 178555 timestamp: 2195.823063
RECEIVED sensor data for frame: 178555
Frame: 178555 Sensor: camera01 Timestamp: 2195.823063
World's frame: 178556 timestamp: 2195.843063
RECEIVED sensor data for frame: 178556
Frame: 178556 Sensor: camera01 Timestamp: 2195.843063
sensor_tick manually set to 0.0 -> image received every tick [EXPECTED]
World's frame: 182146 timestamp: 2241.293400
RECEIVED sensor data for frame: 182146
Frame: 182146 Sensor: camera01 Timestamp: 2241.293400
World's frame: 182147 timestamp: 2241.313400
RECEIVED sensor data for frame: 182147
Frame: 182147 Sensor: camera01 Timestamp: 2241.313400
World's frame: 182148 timestamp: 2241.333400
RECEIVED sensor data for frame: 182148
Frame: 182148 Sensor: camera01 Timestamp: 2241.333400
World's frame: 182149 timestamp: 2241.353400
RECEIVED sensor data for frame: 182149
Frame: 182149 Sensor: camera01 Timestamp: 2241.353400
World's frame: 182150 timestamp: 2241.373400
RECEIVED sensor data for frame: 182150
Frame: 182150 Sensor: camera01 Timestamp: 2241.373400
World's frame: 182151 timestamp: 2241.393400
RECEIVED sensor data for frame: 182151
Frame: 182151 Sensor: camera01 Timestamp: 2241.393400
World's frame: 182152 timestamp: 2241.413400
RECEIVED sensor data for frame: 182152
Frame: 182152 Sensor: camera01 Timestamp: 2241.413400
World's frame: 182153 timestamp: 2241.433400
RECEIVED sensor data for frame: 182153
Frame: 182153 Sensor: camera01 Timestamp: 2241.433400
World's frame: 182154 timestamp: 2241.453400
RECEIVED sensor data for frame: 182154
Frame: 182154 Sensor: camera01 Timestamp: 2241.453400
World's frame: 182155 timestamp: 2241.473400
RECEIVED sensor data for frame: 182155
Frame: 182155 Sensor: camera01 Timestamp: 2241.473400
sensor_tick manually set to 0.02 -> image NOT received for the second tick, afterwards received every tick [NOT EXPECTED]
World's frame: 184174 timestamp: 2270.390377
RECEIVED sensor data for frame: 184174
Frame: 184174 Sensor: camera01 Timestamp: 2270.390377
World's frame: 184175 timestamp: 2270.410377
Some of the sensor information is missed <---- MISS ALWAYS on the second tick
World's frame: 184176 timestamp: 2270.430377
RECEIVED sensor data for frame: 184176
Frame: 184176 Sensor: camera01 Timestamp: 2270.430377
World's frame: 184177 timestamp: 2270.450377
RECEIVED sensor data for frame: 184177
Frame: 184177 Sensor: camera01 Timestamp: 2270.450377
World's frame: 184178 timestamp: 2270.470377
RECEIVED sensor data for frame: 184178
Frame: 184178 Sensor: camera01 Timestamp: 2270.470377
World's frame: 184179 timestamp: 2270.490377
RECEIVED sensor data for frame: 184179
Frame: 184179 Sensor: camera01 Timestamp: 2270.490377
World's frame: 184180 timestamp: 2270.510377
RECEIVED sensor data for frame: 184180
Frame: 184180 Sensor: camera01 Timestamp: 2270.510377
World's frame: 184181 timestamp: 2270.530377
RECEIVED sensor data for frame: 184181
Frame: 184181 Sensor: camera01 Timestamp: 2270.530377
World's frame: 184182 timestamp: 2270.550377
RECEIVED sensor data for frame: 184182
Frame: 184182 Sensor: camera01 Timestamp: 2270.550377
World's frame: 184183 timestamp: 2270.570377
RECEIVED sensor data for frame: 184183
Frame: 184183 Sensor: camera01 Timestamp: 2270.570377
sensor_tick manually set to 0.04 -> data not received for the second and third tick [NOT EXPECTED]; afterwards it is received every second tick [EXPECTED]
World's frame: 192157 timestamp: 2367.593661
RECEIVED sensor data for frame: 192157
Frame: 192157 Sensor: camera01 Timestamp: 2367.593661
World's frame: 192158 timestamp: 2367.613661
Some of the sensor information is missed <-- MISS
World's frame: 192159 timestamp: 2367.633661
Some of the sensor information is missed <-- MISS
World's frame: 192160 timestamp: 2367.653661
RECEIVED sensor data for frame: 192160
Frame: 192160 Sensor: camera01 Timestamp: 2367.653661
World's frame: 192161 timestamp: 2367.673661
Some of the sensor information is missed
World's frame: 192162 timestamp: 2367.693661
RECEIVED sensor data for frame: 192162
Frame: 192162 Sensor: camera01 Timestamp: 2367.693661
World's frame: 192163 timestamp: 2367.713661
Some of the sensor information is missed
World's frame: 192164 timestamp: 2367.733661
RECEIVED sensor data for frame: 192164
Frame: 192164 Sensor: camera01 Timestamp: 2367.733661
World's frame: 192165 timestamp: 2367.753661
Some of the sensor information is missed
World's frame: 192166 timestamp: 2367.773661
RECEIVED sensor data for frame: 192166
Frame: 192166 Sensor: camera01 Timestamp: 2367.773661
sensor_tick manually set to 0.03 -> weird sequence of HIT MISS HIT MISS at the beginning
World's frame: 205626 timestamp: 2531.280411
RECEIVED sensor data for frame: 205626
Frame: 205626 Sensor: camera01 Timestamp: 2531.280411
World's frame: 205627 timestamp: 2531.300411
Some of the sensor information is missed
World's frame: 205628 timestamp: 2531.320411
RECEIVED sensor data for frame: 205628
Frame: 205628 Sensor: camera01 Timestamp: 2531.320411
World's frame: 205629 timestamp: 2531.340411
Some of the sensor information is missed
World's frame: 205630 timestamp: 2531.360411
RECEIVED sensor data for frame: 205630
Frame: 205630 Sensor: camera01 Timestamp: 2531.360411
World's frame: 205631 timestamp: 2531.380411
RECEIVED sensor data for frame: 205631
Frame: 205631 Sensor: camera01 Timestamp: 2531.380411
World's frame: 205632 timestamp: 2531.400411
Some of the sensor information is missed
World's frame: 205633 timestamp: 2531.420411
RECEIVED sensor data for frame: 205633
Frame: 205633 Sensor: camera01 Timestamp: 2531.420411
World's frame: 205634 timestamp: 2531.440411
RECEIVED sensor data for frame: 205634
Frame: 205634 Sensor: camera01 Timestamp: 2531.440411
World's frame: 205635 timestamp: 2531.460411
Some of the sensor information is missed
Problems
Maybe some of the questions below are not bugs (you know better than I do), but based on the information available in the documentation, and without looking deeper into the workings of UE, I couldn't figure out the behavior. We can extend the docs if needed (I can make a PR once I understand the behavior - no problem).
Why is data always skipped for the second tick (case 3)? Is it because of SetActorTickInterval, which is used here?
In case 4, why don't we get data for the 2nd and 3rd tick? I would expect something like:
1. 0.00 -> 0.02 HIT (if I interpret this as an initial value)
2. 0.02 -> 0.04 MISS (0.04 seconds didn't pass from 0.02 -> 0.04)
3. 0.04 -> 0.06 HIT (at 0.06, 0.04 seconds passed: 0.02 -> 0.06)
4. 0.06 -> 0.08 MISS
5. 0.08 -> 0.10 HIT
6. 0.10 -> 0.12 MISS
7. 0.12 -> 0.14 HIT
8. 0.14 -> 0.16 MISS
In case 5, why do we have a sequence of HIT MISS HIT MISS at the beginning? I would expect something like:
1. 0.00 -> 0.02 HIT (if I interpret this as an initial value)
2. 0.02 -> 0.04 MISS (0.03 seconds didn't pass from 0.02 -> 0.04)
3. 0.04 -> 0.06 HIT (at 0.05, 0.03 seconds passed)
4. 0.06 -> 0.08 HIT (at 0.08 0.03 seconds passed again)
5. 0.08 -> 0.10 MISS
6. 0.10 -> 0.12 HIT
7. 0.12 -> 0.14 HIT
8. 0.14 -> 0.16 MISS
Could it be related to how the ticks are stored (double vs. float)?
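To make the double-vs-float question concrete, here is a small, hypothetical simulation of a per-sensor elapsed-time accumulator (my guess at the mechanism, not CARLA's actual code). Whether the remainder is carried over after a capture changes the HIT/MISS pattern in exactly the ways discussed above:

```python
def capture_pattern(delta, sensor_tick, n_ticks, carry_remainder=True):
    """Return the tick indices on which a sensor would capture, assuming it
    accumulates simulation time and fires once elapsed >= sensor_tick."""
    hits = []
    elapsed = 0.0
    for i in range(n_ticks):
        elapsed += delta
        if elapsed >= sensor_tick - 1e-9:  # tolerance for float rounding
            hits.append(i)
            # Policy matters: keep the leftover time, or discard it.
            elapsed = elapsed - sensor_tick if carry_remainder else 0.0
    return hits

# Case 4: sensor_tick 0.04 with delta 0.02 -> capture every second tick.
print(capture_pattern(0.02, 0.04, 8))
# Case 5: sensor_tick 0.03 with delta 0.02 -> MISS HIT HIT repeating if the
# remainder is kept, MISS HIT MISS HIT if it is discarded.
print(capture_pattern(0.02, 0.03, 9))
print(capture_pattern(0.02, 0.03, 9, carry_remainder=False))
```

Neither policy reproduces the observed startup misses exactly, which supports the suspicion that something else (tick storage precision or initialization) is also involved.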
Related issues
#3385 and the related PR and testing issue
I couldn't find anything useful on the Discord channel either.
Environment
Platform: Windows 10
Python: Python 3.7.7
Carla: 0.9.7.2, 0.9.9.4, 0.9.10 (ignored due to the linked issue)
Is there a solution for this issue without waiting until the next release?
Hi, I found a similar issue in Carla 0.9.10: changing the sensor tick for the camera just doesn't work. The timestamps of the frames do not change accordingly after we set a different sensor tick.
Do you find any solution?
In the latest version, 0.9.11, they solved this issue, but I didn't try it yet.
Thanks. I also tried the async mode; it does not work either. And the real sensor tick depends on the quality of rendering. Did you encounter a similar issue?
Sorry, I didn't go through this issue before, but I checked the bugs they solved in the latest version, and they mentioned it.
@OmarAbdElNaser Thanks. Where did they mention it's fixed? Can you direct me? I saw something like Fixed bug causing camera-based sensors to stop sending data, not sure it is related though.
This is what I'm referring to: the camera was sending data without respecting the sensor tick, and they said they solved it.
@OmarAbdElNaser Great, I will test it with the new version and let you know if it's really fixed. Thanks.
@OmarAbdElNaser I tested 0.9.11; the sensor_tick functionality has been improved, at least in synchronous mode. The sensor tick setting is effective as long as you don't choose a very small value, which I guess is expected due to the performance limitations of our system.
This is not fixed. Problem still persists.
e.g. in case 4 from my description, we still get 2 misses one after the other.
Tried with 0.9.13.
Please reopen the issue.
| gharchive/issue | 2020-12-01T11:07:07 | 2025-04-01T06:38:08.564069 | {
"authors": [
"HaoZhouGT",
"OmarAbdElNaser",
"Vaan5"
],
"repo": "carla-simulator/carla",
"url": "https://github.com/carla-simulator/carla/issues/3653",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2720231774 | How can I remove all things from a map
Hello, I want to build something in CarlaUE4 and want to do it in a map which is already there, for example Town01_Opt. I want to remove all other actors for this, but it doesn't work because some actors are grey in the World Outliner and I can neither click on them nor delete them. Does somebody know how I can remove all actors from a map (including the grey ones)? Thank you for your help!
Hello! _Opt maps are composed of sublevels; you can just delete the sublevels and the actors will disappear.
https://dev.epicgames.com/documentation/en-us/unreal-engine/managing-multiple-levels?application_version=4.27
| gharchive/issue | 2024-12-05T12:04:58 | 2025-04-01T06:38:08.566889 | {
"authors": [
"Blyron",
"Vincent318"
],
"repo": "carla-simulator/carla",
"url": "https://github.com/carla-simulator/carla/issues/8444",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
711279495 | Removed carla and srunner from requirements
Both carla and srunner have been removed from the requirements.
This change is
Updated the README. This is heavily based on the first part of the leaderboard website. Adding @sergi-e
| gharchive/pull-request | 2020-09-29T16:34:50 | 2025-04-01T06:38:08.568663 | {
"authors": [
"glopezdiest"
],
"repo": "carla-simulator/leaderboard",
"url": "https://github.com/carla-simulator/leaderboard/pull/64",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
406826630 | ERROR: 'Vehicle' object has no attribute 'get_control'
(carla) cienet@cienet-desktop:~/scenario_runner$ python scenario_runner.py --scenario FollowLeadingVehicle
Preparing scenario: FollowLeadingVehicle
ScenarioManager: Running scenario FollowVehiclcd
And then I start a new terminal, try to run the manual_control.py
but I met an error like this:
ERROR: 'Vehicle' object has no attribute 'get_control'
Traceback (most recent call last):
File "manual_control.py", line 681, in main
game_loop(args)
File "manual_control.py", line 622, in game_loop
if not world.tick(clock):
File "manual_control.py", line 162, in tick
self.hud.tick(self, clock)
File "manual_control.py", line 281, in tick
c = world.vehicle.get_control()
AttributeError: 'Vehicle' object has no attribute 'get_control'
My friend and I have the same versions of the packages, but it works well on his computer.
Please take a look.
You need to update to the recent CARLA release 0.9.3, which will resolve this issue.
I update the 0.9.3 and then run the "python scenario_runner.py"
There is an import module error:
Traceback (most recent call last):
  File "scenario_runner.py", line 25, in <module>
    from Scenarios.follow_leading_vehicle import *
  File "/home/cienet/scenario_runner/Scenarios/follow_leading_vehicle.py", line 24, in <module>
    from ScenarioManager.atomic_scenario_behavior import *
  File "/home/cienet/scenario_runner/ScenarioManager/atomic_scenario_behavior.py", line 21, in <module>
    from agents.navigation.basic_agent import *
  File "/home/cienet/CARLA_0.9.3/PythonAPI/agents/navigation/basic_agent.py", line 21, in <module>
    from agents.navigation.global_route_planner import GlobalRoutePlanner
  File "/home/cienet/CARLA_0.9.3/PythonAPI/agents/navigation/global_route_planner.py", line 15, in <module>
    from local_planner import RoadOption
ImportError: No module named 'local_planner'
But the 'local_planner.py' is in the same folder with "global_route_planner.py"
Then I commented " from local_planner import RoadOption"
The scenario_runner.py and manual_control.py both worked.
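For anyone hitting the same ImportError: instead of commenting out the import, it is usually enough to put the navigation package directory itself on the module search path before importing. The paths below are illustrative, based on the traceback above; adjust them to your install:

```python
import os
import sys

# Hypothetical install location -- adjust to your own CARLA 0.9.3 path.
CARLA_API = os.path.join(os.path.expanduser("~"), "CARLA_0.9.3", "PythonAPI")

# 'from local_planner import RoadOption' inside global_route_planner.py only
# resolves if the navigation directory itself is on sys.path.
sys.path.append(CARLA_API)
sys.path.append(os.path.join(CARLA_API, "agents", "navigation"))
```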
| gharchive/issue | 2019-02-05T15:08:19 | 2025-04-01T06:38:08.572494 | {
"authors": [
"GetDarren"
],
"repo": "carla-simulator/scenario_runner",
"url": "https://github.com/carla-simulator/scenario_runner/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
690797139 | OpenSCENARIO support - PrivateActions
This issue is meant to track the support of PrivateActions within OSC 1.0:
[x] ActivateControllerAction (Activates CARLA's autopilot)
[x] ControllerAction
[x] LaneChangeAction (Some dynamics are still ignored)
[ ] LaneOffsetAction
[ ] LateralDistanceAction
[ ] LongitudinalDistanceAction
[x] SpeedAction
[ ] SynchronizeAction
[x] TeleportAction
[ ] VisibilityAction
[x] AcquirePositionAction
[x] AssignRouteAction
[ ] FollowTrajectoryAction
Thanks for working on improving support for OSC1.0! :+1:
I am especially interested in support for the LaneOffsetAction.
As of #628, AcquirePositionAction is now supported in the Story part
As of #689, SynchronizeAction is now supported
| gharchive/issue | 2020-09-02T07:42:06 | 2025-04-01T06:38:08.576639 | {
"authors": [
"glopezdiest",
"r-snijders"
],
"repo": "carla-simulator/scenario_runner",
"url": "https://github.com/carla-simulator/scenario_runner/issues/626",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2103691132 | 🛑 Site is down
In dc51ad3, Site (https://www.ifms.edu.br) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Site is back up in 3be1477 after 22 minutes.
| gharchive/issue | 2024-01-27T17:35:45 | 2025-04-01T06:38:08.581007 | {
"authors": [
"carlitos-ifms"
],
"repo": "carlitos-ifms/uptime",
"url": "https://github.com/carlitos-ifms/uptime/issues/333",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1884977699 | Another interrupt handling bug fixed
Specifically, this line in spi_transfer:
// Clear the interrupt request.
dma_hw->ints0 = 1u << spi_p->rx_dma;
which must have been left over from a past when only DMA_IRQ_0 was used.
Relevant to #74
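For context, a minimal sketch of the corrected pattern (illustrative, based on the Pico SDK's DMA registers; not this project's exact code): the acknowledge write has to target the INTS register matching the IRQ line the channel is routed to.

```c
// Sketch only (Pico SDK context). Clear a DMA channel's pending interrupt
// on the IRQ line it is actually routed to: writing 1 to the channel's bit
// in ints0 acknowledges DMA_IRQ_0; ints1 acknowledges DMA_IRQ_1.
#include "hardware/dma.h"

static void ack_dma_irq(uint channel, uint irq_index) {
    if (irq_index == 0)
        dma_hw->ints0 = 1u << channel;
    else
        dma_hw->ints1 = 1u << channel;
}
```

Recent Pico SDK versions also provide dma_channel_acknowledge_irq0()/dma_channel_acknowledge_irq1() helpers for this.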
| gharchive/pull-request | 2023-09-07T01:52:18 | 2025-04-01T06:38:08.582236 | {
"authors": [
"carlk3"
],
"repo": "carlk3/no-OS-FatFS-SD-SPI-RPi-Pico",
"url": "https://github.com/carlk3/no-OS-FatFS-SD-SPI-RPi-Pico/pull/85",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
278499446 | Snap package
Pomodoro is now available as a Snap package!
https://github.com/m4sk1n/pomodoro
Maybe some info about it?
Closed, not fully working…
| gharchive/issue | 2017-12-01T15:23:03 | 2025-04-01T06:38:08.583832 | {
"authors": [
"m4sk1n"
],
"repo": "carlmjohnson/pomodoro",
"url": "https://github.com/carlmjohnson/pomodoro/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
897528354 | Applying themes
I am a bit puzzled about how to apply themes to the resulting blogs. I ran the importer successfully, and also realized I probably need to copy over the layouts/ folder from this repository to my blog (it wasn't done automatically). But then, even though I've installed a theme and specified it in config.toml, it's not applied; my blog is plain.
Any idea if I did something wrong, and if not, what the correct method to theme the blog is?
Thanks for using this. It sounds like you didn't do anything wrong. Any normal Hugo template that you use with this importer will need to be partially rewritten to work with it. I'm not sure if you saw this "philosophy" section of the README, but it's good background on why:
When converting a Tumblr blog to Hugo, you may initially think you want all your content converted to Markdown files. For example, you may think you want your link posts to become something like ### Link: [$TITLE]($LINK)↵↵$CONTENT. The trouble with this approach is that converting to Markdown loses formatting information from Tumblr and locks you into a single representation of the data which cannot be easily changed later.
How tumblr-importr works instead is it reads the common post metadata out of the Tumblr API (title, URL, slug, date, etc.) and writes that in the format Hugo expects, then it makes all of the other data from Tumblr on the post available as a custom parameter. Now you can format your link posts using Hugo's templating language to make it look exactly how you want:
<h3>Link: <a href="{{ .Params.tumblr.url }}">{{ .Params.tumblr.title }}</a></h3>
{{ .Params.tumblr.description | safeHTML }}
If you decide the H3 should be an H2 or the content needs a wrapper <div class="content"> or you want to change "Link:" to be an emoji 🔗, all you need to do is change your Hugo theme, rather than going back and reformatting all your Markdown files. All of the information that Tumblr had on the post is available, making it possible to fully replicate a Tumblr theme in Hugo without any information loss.
The side effect of this philosophy is that you're going to need to change existing theme to make them work with this data. You can see some of the basics in the layouts directory here, but it's really just a suggested starting point. If I had more time, I would love to have a better sample theme or write some adaptations to show how it works.
If you look at the _default/single.html, the relevant section is here:
{{ .Render "content" }}
That means to render the content.html file according to the current page's type. A normal content.html looks like _defaults/content.html, which calls {{ .Content }}. In other words, it's just rendering the page's Markdown as is. This is what normal themes do because it's what you're expected to do by Hugo. What you'll need to do is to go into the theme you want and change {{ .Content }} to {{ .Render "content" }} and then add different content.html for the different Tumblr page types. So for example, for Tumblr's video page type, I have tumblr-video/content.html, which overrides the normal content.html and looks like this:
{{ range last 1 .Params.tumblr.player }}
<div class="vidblock">{{ .embed_code | safeHTML }}</div>
{{ end }}
<div class="caption">{{ .Params.tumblr.caption | safeHTML }}</div>
In other words, it says "if a page has type tumblr-video, instead of rendering Markdown (which isn't there), look at the .tumblr.player data and render the last embed code in that list, followed by the .tumblr.caption."
The other page types have similar content.html adaptations, such as showing the picture for a photo post or the link for a link post.
Hope that helps. Let me know if any part of that didn't make sense.
One more comment: It's possible that you'll find it easier to go from a custom Tumblr theme to a custom Hugo theme than vice versa. E.g. this snippet from Tumblr theme documentation:
<ol id="posts">
{block:Posts}{block:Text}
<li class="post text">
{block:Title}
<h3><a href="{Permalink}">{Title}</a></h3>
{/block:Title}{Body}
</li>
{/block:Text}{block:Photo}
would turn into something like this in a Hugo template:
<ol id="posts">
{{ range .Pages }}{{ if eq .Section "tumblr-text" }}
<li class="post text">
{{ if eq .Kind "page" }}
<h3><a href="{{ .Permalink }}">{{ .Title }}</a></h3>
{{ end }}
{{ .Params.tumblr.content | safeHTML }}
</li>
{{ end }}{{ if eq .Section "tumblr-photo" }}
Thanks for the detailed explanation. I guess I have to delve into Hugo themes then :)
I have one immediate question, though: let's say I manage to change an existing theme to render my tumblr posts the way I want it to. However, when I add new posts, I'm probably not going to follow the structure that the posts imported from tumblr have. So my guess is that any new blog posts I write will not work with the theme I created to accommodate my tumblr posts. Does that sound correct or am I misunderstanding something?
Yes, your edit is correct. That’s what I did with blog.carlmjohnson.net, where the old posts are Tumblr formatted but the new posts are normal Hugo files.
| gharchive/issue | 2021-05-20T22:51:27 | 2025-04-01T06:38:08.593083 | {
"authors": [
"carlmjohnson",
"frontierpsycho"
],
"repo": "carlmjohnson/tumblr-importr",
"url": "https://github.com/carlmjohnson/tumblr-importr/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2469842855 | Node.js binding/npm package
Currently, there seems to be no library on npm that converts LaTeX to MathML Core.
It would be great if we could provide a Node.js binding or a wasm-WASI-to-JavaScript binding!
I believe this could promote the popularity of MathML Core.
Yes, of course! This is one of the goals I have with this crate: having the library available as an npm package for JS.
I do not have a lot of experience with publishing packages to npm, and I have never done it with a Rust lib. If you have any idea of how it is done, I would gladly take the advice!
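One common route (a sketch, not a tested recipe for this particular crate; it assumes the crate can expose a wasm-bindgen interface) is wasm-pack, which builds the wasm artifact and generates an npm-ready package:

```shell
# Install the packaging tool.
cargo install wasm-pack

# Build for Node.js; emits a pkg/ directory containing the .wasm binary,
# the JS glue code, and a generated package.json.
wasm-pack build --target nodejs

# Publish the generated package (requires an npm login).
cd pkg && npm publish
```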
| gharchive/issue | 2024-08-16T09:05:51 | 2025-04-01T06:38:08.601063 | {
"authors": [
"Aalivexy",
"carloskiki"
],
"repo": "carloskiki/pulldown-latex",
"url": "https://github.com/carloskiki/pulldown-latex/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2002779704 | Network usage is prohibited for themes
It appears that your theme uses network connections to load assets (e.g. fonts, icons, or images). This is prohibited by the official Obsidian developer policies because themes should function completely locally.
You can bundle an asset for local use by using data URLs. See this guide.
Please let us know if you have any questions. Any themes that use network connections will be removed from the official directory in the first week of January 2024.
The Obsidian team.
@LEFD what do you think is the best path forward for this?
Possibly the best thing is just not bundling the font anymore, so people can opt into using the fonts I like, but that's a breaking change. I'm not sure the best way to communicate to people that they would need to install fonts to keep the theme looking the same
@caro401 I think the best way forward would be to embed the fonts in the CSS file. That's a bit messy, but this way the theme would look as we intended out of the box.
I'm not sure the best way to communicate to people that they would need to install fonts to keep the theme looking the same
I don't think people read change logs for themes. (I don't). Installing fonts on mobile devices might be a problem as well.
Also, one always has the option to use a custom font if one desires to do so.
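For reference, embedding a font in the theme CSS as suggested looks roughly like this (a sketch; the font name and the truncated base64 payload are placeholders, not the theme's real values):

```css
/* Placeholder font name and truncated base64 payload, for illustration. */
@font-face {
  font-family: "ThemeFont";
  src: url("data:font/woff2;base64,d09GMgABA...") format("woff2");
  font-weight: 400;
  font-style: normal;
}
```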
| gharchive/issue | 2023-11-20T18:34:12 | 2025-04-01T06:38:08.606021 | {
"authors": [
"LEFD",
"caro401",
"joethei"
],
"repo": "caro401/royal-velvet",
"url": "https://github.com/caro401/royal-velvet/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1291739052 | try bulleted view for schedule info
Changes schedule details (what asterisks and abbreviations mean) to a bulleted view rather than paragraph view.
@acrall if you like it with the bullets, please merge.
If you'd prefer to keep it like it is, then you can just close this PR.
| gharchive/pull-request | 2022-07-01T18:58:36 | 2025-04-01T06:38:08.630753 | {
"authors": [
"maneesha"
],
"repo": "carpentrycon/carpentrycon2022",
"url": "https://github.com/carpentrycon/carpentrycon2022/pull/77",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
199439572 | Improvement: Total worklog of today
In addition to the total of non-submitted worklogs in the time tracker, there should be a display of the user's total worklog time booked for today. It should be updated after each submission of tracked time.
If you cannot get it via the API, an alternative would be to only show today's tracked time via JIRA StopWatch, so you can (export times to a local file and) add it up each time.
Thank you very much.
Hi Philip,
I cannot find anything in the Jira REST API, that enables me to fetch all the worklogs, that a single user has posted on a specific date. Only way is to poll through the issues, and that doesn't really work that well, if you remove an issue key from StopWatch during the day (which I do a lot, once I'm done with it).
You are welcome to fiddle with this yourself and if you come up with a solution, that you are happy about, I'll gladly accept a PR.
Could you make a local log instead?
Possibly yes. Then it would need to be reset automatically on change of current date. So you should store the current date in the user config.
Could you develop this?
@pjaeger16: Not sure if you know about it already but we use a Jira plugin (https://marketplace.atlassian.com/plugins/org.everit.jira.timetracker.plugin/server/overview) which provides a nice page within Jira which lists the time logged by the current use per day, including total time. it's not bad (although I would like more flexibility to customize the columns in the view)
Personally, I think it's more appropriate there - the stopwatch is to help log time, not to audit or manage existing time logs in my view
I also use Everit's time tracker and it's pretty useful. It would be nice
to see my previous work log entries in JIRA stopwatch somehow, for those
times when I want to "fill in the blanks", but the Everit plugin actually
works OK for that too.
Could you develop this?
I don't personally find the feature particularly useful, especially since it it would require quite a lot to make it accurate. But if you could implement it yourself or know someone else who could, feel free to make a pull request.
| gharchive/issue | 2017-01-08T19:12:24 | 2025-04-01T06:38:08.667463 | {
"authors": [
"asztal",
"carstengehling",
"pjaeger16",
"slarti-b"
],
"repo": "carstengehling/jirastopwatch",
"url": "https://github.com/carstengehling/jirastopwatch/issues/47",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
399289104 | Unable to match scans (i.e. submap list is empty amd no map is being built)
Hi Cartographer team,
I am trying to run cartographer on a turtlebot 2 with a 360° rplidar using only the laser scan (as of now) without odometry or an imu. I understand this might not be optimal, however I think my results shouldn't be that wrong even without odom/imu.
I have managed to run cartographer; however, it is not working, and there has to be some major configuration issue, because not a single submap can be matched (in rviz, if I disable "All" under Submaps, the list of all submaps contains only one element, with the float in front of it increasing steadily). This also shows in the visualization, because no map is built (i.e. I get only a "random" scattering of laser scan points in grey, while in other issues I have seen that a "white map with black edges" is supposed to be built).
My rosbag validate result:
Is there a documentation for this tool available? I can barely make sense of the output data. It doesn't seem wrong, however I don't understand what the histogram/distribution is supposed to tell me.
https://gist.github.com/ftbmynameis/a6a51eba839f5906ecc971a7a4c99711
My branch with my configuration files:
Note: I have installed cartographer_ros using the official manual (i.e. installed it in a catkin workspace) and made my configuration changes in /install_isolated/share/cartographer_ros. Since the official repository seems to be located in the "src" folder, I have copied my configuration files there (in a new branch "turtlebot_config") and pushed them to my GitHub fork of the repository. While doing so I have noticed that the configuration files in the install_isolated/.. and src/.. folders do not match exactly, which confuses me, but I have simply committed them hoping you guys have a better idea what is going on.
https://github.com/ftbmynameis/cartographer_ros/tree/turtlebot_config
My bag file is located here:
https://drive.google.com/open?id=1fs7C5IL_9VitraK0TFMOyb6UA18NkMbh
When running with the given configuration and bag file the point cloud in rviz starts sort of "spinning" which almost reminds me of a tumbling airplane going down and shows something major going wrong.
While I am here, I also have a question about the parameter TRAJECTORY_BUILDER_nD.num_accumulated_range_data: as I understand it, this is a sensor-dependent parameter (i.e. how many scans / how much data is provided by all my laser sensors); however, I couldn't figure out how to analyze my data to retrieve this value.
Thanks for your help and insights! If there is anything else required to be used I will try my best to provide it.
I am using the same lidar in a custom robot, initially only with lidar, I had issues finding which bits to tune and how to do it. A first tuning approach that produced good results, my config.lua was as follows:
include "map_builder.lua"
include "trajectory_builder.lua"
options = {
map_builder = MAP_BUILDER,
trajectory_builder = TRAJECTORY_BUILDER,
map_frame = "map",
tracking_frame = "base_link",
published_frame = "base_link",
use_odometry = false,
provide_odom_frame = true,
odom_frame = "odom",
publish_frame_projected_to_2d = false,
use_pose_extrapolator = true,
use_nav_sat = false,
use_landmarks = false,
num_laser_scans = 1,
num_multi_echo_laser_scans = 0,
num_subdivisions_per_laser_scan = 1,
num_point_clouds = 0,
lookup_transform_timeout_sec = 0.2,
submap_publish_period_sec = 0.3,
pose_publish_period_sec = 5e-3,
trajectory_publish_period_sec = 30e-3,
rangefinder_sampling_ratio = 1.,
odometry_sampling_ratio = 1.,
fixed_frame_pose_sampling_ratio = 1.,
imu_sampling_ratio = 1.,
landmarks_sampling_ratio = 1.,
}
MAP_BUILDER.use_trajectory_builder_2d = true
--this one tries to match two laser scans together to estimate the position,
--I think if not on it will rely more on wheel odometry
TRAJECTORY_BUILDER_2D.use_online_correlative_scan_matching = true
-- tune this value to the amount of samples (I think revolutions) to average over
--before estimating the position of the walls and features in the environment
TRAJECTORY_BUILDER_2D.num_accumulated_range_data = 1
--use or not use IMU, if used, the tracking_frame should be set to the one that the IMU is on
TRAJECTORY_BUILDER_2D.use_imu_data = false
--bandpass filter for lidar distance measurements
TRAJECTORY_BUILDER_2D.min_range = 0.3
TRAJECTORY_BUILDER_2D.max_range = 8.
--This is the scan matcher and the weights to different assumptions
--occupied_space gives more weight to the 'previous' features detected.
TRAJECTORY_BUILDER_2D.ceres_scan_matcher.occupied_space_weight = 10.
TRAJECTORY_BUILDER_2D.ceres_scan_matcher.translation_weight = 10.
TRAJECTORY_BUILDER_2D.ceres_scan_matcher.rotation_weight = 40.
return options
might not be optimal, but it is working for this robot and its angular and linear speeds.
Hi @joekeo, I used your same configuration file without "use_pose_extrapolator", but list_map is empty and there is no map. Do you know what the problem could be?
Closing for inactivity, we can't invest time to help you with your setup anymore unfortunately.
| gharchive/issue | 2019-01-15T10:50:58 | 2025-04-01T06:38:08.687962 | {
"authors": [
"MichaelGrupp",
"ftbmynameis",
"joekeo",
"jonra1993"
],
"repo": "cartographer-project/cartographer_ros",
"url": "https://github.com/cartographer-project/cartographer_ros/issues/1159",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1343765331 | can not login with admin/123 on docker casbin/casdoor:latest
I'm trying to access the admin dashboard using admin/123, but Casdoor throws this error
{
"status": "error",
"msg": "Unauthorized operation",
"sub": "",
"name": "",
"data": null,
"data2": null
}
the enforcer says built-in/admin isn't allowed
Check the database table 'permission_rule' to see whether there is any data
Check the DB table named casdoor.permission_rule to see whether there is any data; it may be a bug when starting with Docker.
@hsluoyz can we have at least some tests on the login methods to avoid this kind of issue in the future?
@fernandolguevara good idea, created here: https://github.com/casdoor/casdoor/issues/1036
@fernandolguevara I wonder how this bug was finally solved 😕 I hit the same issue when I upgraded casdoor from v1.105.0 to v1.270.0, and now I cannot even reset the admin password 😡
| gharchive/issue | 2022-08-18T23:57:46 | 2025-04-01T06:38:08.714756 | {
"authors": [
"Bruc3Stark",
"cofecatt",
"fernandolguevara",
"hsluoyz"
],
"repo": "casdoor/casdoor",
"url": "https://github.com/casdoor/casdoor/issues/1031",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1431719747 | Fix background flashing for login page
https://door.casdoor.com/login
@leo220yuyaodog
| gharchive/issue | 2022-11-01T16:26:06 | 2025-04-01T06:38:08.716171 | {
"authors": [
"hsluoyz"
],
"repo": "casdoor/casdoor",
"url": "https://github.com/casdoor/casdoor/issues/1254",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2422351926 | sms login
Does Casdoor support custom applications using SMS verification for login (without CAPTCHA)? Why can't I find the corresponding interface or demo?
Hello, I will reply as soon as possible.
@yangyulele it's code login: https://door.casdoor.com/login . It can be verification code from Email or phone
| gharchive/issue | 2024-07-22T09:08:18 | 2025-04-01T06:38:08.717840 | {
"authors": [
"hsluoyz",
"yangyulele"
],
"repo": "casdoor/casdoor",
"url": "https://github.com/casdoor/casdoor/issues/3071",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
212361883 | Print justfile path on line above syntax errors
Possibly above run errors too.
Ehhhhhhh, I think not doing this is probably fine. Every hoopy frood knows where their justfile is.
| gharchive/issue | 2017-03-07T08:26:14 | 2025-04-01T06:38:08.718683 | {
"authors": [
"casey"
],
"repo": "casey/just",
"url": "https://github.com/casey/just/issues/157",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
664688735 | Certificate.sha256Hash() method
As used in https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning
The HPKP policy specifies hashes of the subject public key info of one of the certificates in the website's authentic X.509 public key certificate chain (and at least one backup key) in pin-sha256 directives, and a period of time during which the user agent shall enforce public key pinning in max-age directive, optional includeSubDomains directive to include all subdomains (of the domain that sent the header) in pinning policy and optional report-uri directive with URL where to send pinning violation reports. At least one of the public keys of the certificates in the certificate chain needs to match a pinned public key in order for the chain to be considered valid by the user agent.
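For context, the pin-sha256 value described above is the base64 encoding of the SHA-256 digest of the certificate's DER-encoded SubjectPublicKeyInfo. A shell sketch of that computation with openssl (it generates a throwaway self-signed certificate just to have input data; this is not the library's API):

```shell
# Sketch: compute an HPKP-style pin-sha256 for a certificate with openssl.
# A throwaway self-signed certificate is generated here just to have input data.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" -days 1 \
  -keyout key.pem -out cert.pem 2>/dev/null

# pin-sha256 = base64( SHA-256( DER-encoded SubjectPublicKeyInfo ) )
pin=$(openssl x509 -in cert.pem -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | base64)

echo "pin-sha256=\"$pin\""
```

A `sha256Hash()` method on a certificate type would presumably return the same 32-byte digest that this pipeline base64-encodes.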
Failure makes no sense
<img width="810" alt="image" src="https://user-images.githubusercontent.com/231923/88431371-1701e000-cdf2-11ea-9e9a-f4fa4c38688b.png">
| gharchive/pull-request | 2020-07-23T18:41:31 | 2025-04-01T06:38:08.720948 | {
"authors": [
"yschimke"
],
"repo": "cashapp/certifikit",
"url": "https://github.com/cashapp/certifikit/pull/16",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1600903401 | Add because to allowUrl
Nice to have a comment in allowUrl too, for symmetry with allowDependency.
Seems like it should just be on everything
| gharchive/issue | 2023-02-27T10:30:12 | 2025-04-01T06:38:08.722078 | {
"authors": [
"JakeWharton",
"hfhbd"
],
"repo": "cashapp/licensee",
"url": "https://github.com/cashapp/licensee/issues/171",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2629103722 | Add nix flake for prettier checks
just for you @callebtc
why do we need this unsolicited nix config flash again?
visits spec repo, sees this:
why do we need this unsolicited nix config flash again?
This would be useful to me as I don't have prettier installed globally to run the linting check, and it would be easier for me (and others on nix) to have a dev flake defined in the repo to drop into, rather than creating a custom dev shell each time. However, I do realize this is likely an issue for only me, so feel free to close.
Changing the CI only has the benefit that, since there is a flake defined, it ensures what is run locally is also run in CI; though for such a simple CI there is likely no real benefit.
Closing then.
| gharchive/pull-request | 2024-11-01T14:02:01 | 2025-04-01T06:38:08.728120 | {
"authors": [
"callebtc",
"prusnak",
"thesimplekid"
],
"repo": "cashubtc/nuts",
"url": "https://github.com/cashubtc/nuts/pull/185",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
106884760 | COOK-72 Support later HDP 2.1 and HDP 2.2 updates on Ubuntu
Otherwise, we fail to find the correct repository path.
LGTM :+1:
:+1:
| gharchive/pull-request | 2015-09-16T23:27:44 | 2025-04-01T06:38:08.739236 | {
"authors": [
"dereklwood",
"gokulavasan",
"wolf31o2"
],
"repo": "caskdata/hadoop_cookbook",
"url": "https://github.com/caskdata/hadoop_cookbook/pull/225",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
98206396 | Unrelated commits included when auditing modified casks during Travis builds
The commit range provided by Travis in the environment variable TRAVIS_COMMIT_RANGE does not appear to include only those commits to be merged in a pull request, as evidenced by this spurious build failure (#12899).
We may want to hard-code the commit range in .travis.yml as refs/remotes/origin/HEAD..HEAD to ensure that only commits relevant to the pull request are considered when auditing modified Casks.
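A possible shape for that change (a hypothetical `.travis.yml` fragment; the audit script path is illustrative, not the repo's actual script):

```yaml
# Hypothetical .travis.yml fragment: consider only the commits unique to the PR
# by diffing against origin's default branch, instead of trusting
# $TRAVIS_COMMIT_RANGE.
script:
  - git log --oneline refs/remotes/origin/HEAD..HEAD
  - ./audit_modified_casks refs/remotes/origin/HEAD..HEAD  # script name is illustrative
```

This way the audited set is derived from the merge base rather than from whatever range Travis happens to report.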
Go for whatever solution you find best. You have been finding a lot of areas where our Travis checks can be improved, and have been fast and efficient in patching those. Feel free to proceed as you like, with Travis.
| gharchive/issue | 2015-07-30T16:07:08 | 2025-04-01T06:38:08.741035 | {
"authors": [
"jawshooah",
"vitorgalvao"
],
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/issues/12903",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
245056865 | brew cask add tistory-editor.app
Cask details
Please fill out as much as possible. Before you do, note we cannot support Mac App Store-only apps.
Name: Tistory-editor.app
Homepage: https://joostory.github.io/tistory-editor/
Download URL: https://github.com/joostory/tistory-editor/releases/download/0.3.8/TistoryEditor-0.3.8-mac.zip
Description: This is the most used blog service management tool in Korea
I already created tistory-editor.rb and I have posted it on the pull request.
I've already run it
https://github.com/caskroom/homebrew-cask/issues/36923
| gharchive/issue | 2017-07-24T11:57:40 | 2025-04-01T06:38:08.743751 | {
"authors": [
"commitay",
"pareut"
],
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/issues/36922",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
39577430 | Support "sudo -A" so that multiple large cask installs can be scripted
When using tools like https://github.com/pivotal-sprout/sprout-wrap or https://github.com/kitchenplan/kitchenplan a large number of casks are defined to be installed via a script. This installation can frequently take longer than the sudo timeout. This means that when homebrew-cask calls sudo (https://github.com/caskroom/homebrew-cask/blob/master/lib/cask/system_command.rb#L49) there is no prompt for the user to enter their password. The error given is "sudo: no tty present and no askpass program specified"
It would be nice if https://github.com/caskroom/homebrew-cask/blob/master/lib/cask/system_command.rb#L49 could also support using "sudo -A" so that the SUDO_ASKPASS environment variable can be used. An example script of this would be something like https://gist.github.com/mtougeron/8dc9cd42c1dd9bd566a3 This would allow the sudo authentication to be handled without user input.
Thanks, Mike
Closing for lack of interest/implementation. This is not urgent and can be revisited at a later date. It concerns homebrew-cask being called by other tools and not homebrew-cask itself, but it should be revisited.
Why not mark an issue with a non-critical tag instead of closing a valid concern?
I believe from the initial request all that needs to be done is adding -A to sudo so it can optionally use the $SUDO_ASKPASS env var.
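For illustration, a minimal sketch of how the askpass mechanism works (the helper path and password below are made-up examples):

```shell
# Sketch (made-up path and password): the helper that `sudo -A` runs via
# SUDO_ASKPASS must be an executable that prints the password on stdout.
cat > /tmp/askpass.sh <<'EOF'
#!/bin/sh
echo "example-password"
EOF
chmod +x /tmp/askpass.sh
export SUDO_ASKPASS=/tmp/askpass.sh

# homebrew-cask's system_command.rb would then call `sudo -A <command>` instead
# of `sudo <command>`, letting sudo query the helper when no TTY is present.
/tmp/askpass.sh   # demo: the helper simply emits the password
```

With this in place, long-running scripted installs would no longer fail with "sudo: no tty present and no askpass program specified".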
Why not mark an issue with a non-critical tag instead of closing a valid concern?
Because more tags are not a solution. An over-abundance of issues makes it difficult to go through them and focus. Having issues open indefinitely is a very poor solution.
It’s not like issues are deleted. You were still able to find it, and if you really think it’s simple to do, you’re very welcome to submit a PR with a new discussion.
But closing an unresolved issue is a solution?
Tags exist to make an over-abundance of issues less difficult to go through, focus can be spent on the important ones.
Yes, its not deleted but its less relevant and accessible. Someone may have the same experience and odds are they're not searching though closed issues.
Keep in mind I tend to be somewhat blunt when writing something so long, and that can make my tone seem confrontational. Nothing could be farther from the truth: this is meant as an explanation, not a defence.
The message is clear. The door is not closed on this issue, and it is very clear it can be revisited at a later date.
Someone may have the same experience and odds are they're not searching though closed issues.
Good. If an issue was closed due to lack of interest and someone opens a new one, it means there’s interest again, and we can form a new issue (backed up by the knowledge of the old one) to try again to tackle it.
Tags exist to make an over-abundance of issues less difficult to go through, focus can be spent on the important ones.
With all due respect1, you're not managing the project. I don't mean this in a "your opinion is irrelevant" way (because it is relevant, just like any other user's or maintainer's) but in a "you haven't experienced how bad it is". Your solution is all well and good in theory; in practice, that is not what happens. You know why close to all new issues are labeled? Because one day, while managing the open ones, I noticed how unbearable it had become and personally spent a stupid amount of hours going through all open issues, reading most in their entirety, and making decisions on labels and open status. We're now keeping issues well labeled and even document how, but it has not always been like that.
Furthermore, many people don't decide "let me work on this tag" and only pick that one; they look at every issue and decide what to work on based on the labels it has.
No, we will not keep decrepit issues no one cares about or wants to work on and have minimal dubious benefit, open indefinitely. Their overhead is not worth it, and once again they can be revisited at a later time. This project was close to stagnation, you just couldn’t tell from the outside because casks were still being merged. Among the many other changes, organising issues is helping us move forward again, and that includes closing unimportant ones.
To put it bluntly, you can theorise all you want about the effectiveness and use of labels, I’ve experienced first hand what this specific project needs at this specific time. Right now, this is it.
1 I’d been wanting to use that for a while. I just find the scene funny; I mean nothing more by it.
This affects ability to use it from tools like chef/puppet to manage workstations which is a big use case.
This affects ability to use it from tools like chef/puppet to manage workstations which is a big use case.
Irrelevant if no one works on it or shows interest in working on it. It is also absolutely irrelevant for the functioning of homebrew-cask, and we have big issues there that need addressing. If the core tool itself has issues, you can be damn sure those take precedence above making it play nicely with other tools. Why wouldn’t it?
Unless, of course, we get a PR. So please either submit one or let's end the conversation here. We're all volunteers, and you do not get to pick how volunteers spend their time. For the last time, this feature is non-critical, and hence of low importance, and if no one works on it, we can revisit it in the future.
Even though this is an old issue, FYI @mattbell87 posted a possible solution in https://github.com/caskroom/homebrew-cask/issues/19180#issuecomment-188522310
| gharchive/issue | 2014-08-05T23:55:38 | 2025-04-01T06:38:08.756432 | {
"authors": [
"adidalal",
"bcg62",
"chino",
"mtougeron",
"vitorgalvao"
],
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/issues/5667",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
119076266 | remove qtox
Ref https://github.com/caskroom/homebrew-cask/issues/15420
Merged as 43bcf9fa32a4fbb367dcd1b537953c99484567af.
| gharchive/pull-request | 2015-11-26T16:38:40 | 2025-04-01T06:38:08.758050 | {
"authors": [
"adityadalal924"
],
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/pull/15428",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
208854462 | OmegaT
If there’s a checkbox you can’t complete for any reason, that's okay, just explain in detail why you weren’t able to do so.
After making all changes to the cask:
[x] brew cask audit --download {{cask_file}} is error-free.
[x] brew cask style --fix {{cask_file}} reports no offenses.
[x] The commit message includes the cask’s name and version.
Additionally, if adding a new cask:
[x] Named the cask according to the token reference.
[x] brew cask install {{cask_file}} worked successfully.
[x] brew cask uninstall {{cask_file}} worked successfully.
[x] Checked there are no open pull requests for the same cask.
[x] Checked the cask was not already refused in closed issues.
[x] Checked the cask is submitted to the correct repo.
OmegaT Cask Installer
In the future, please submit separate pull requests for each cask, as more often than not this type of pull request brings problems with one or more of them, and it halts the inclusion of the other ones.
| gharchive/pull-request | 2017-02-20T11:34:08 | 2025-04-01T06:38:08.763268 | {
"authors": [
"jbeagley52",
"vitorgalvao"
],
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/pull/30251",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
224636375 | Update travis xcode to 8.3
I just noticed this and I'm not sure if it should be updated or not.
@vitorgalvao Should I do the other repos?
@commitay No need, thank you.
| gharchive/pull-request | 2017-04-27T00:29:05 | 2025-04-01T06:38:08.764787 | {
"authors": [
"commitay",
"vitorgalvao"
],
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/pull/33021",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
281042721 | Update camtasia to 3.1.2
After making all changes to the cask:
[x] brew cask audit --download {{cask_file}} is error-free.
[x] brew cask style --fix {{cask_file}} left no offenses.
[x] The commit message includes the cask’s name and version.
Thank you, but this is a regression and has conflicts.
| gharchive/pull-request | 2017-12-11T14:48:22 | 2025-04-01T06:38:08.766489 | {
"authors": [
"eenick",
"vitorgalvao"
],
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/pull/41831",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
298132034 | Add Basler Pylon Camera Suite 5.0.5
After making all changes to the cask:
[x] brew cask audit --download {{cask_file}} is error-free.
[x] brew cask style --fix {{cask_file}} reports no offenses.
[x] The commit message includes the cask’s name and version.
Additionally, if adding a new cask:
[x] Named the cask according to the token reference.
[x] brew cask install {{cask_file}} worked successfully.
[x] brew cask uninstall {{cask_file}} worked successfully.
[x] Checked there are no open pull requests for the same cask.
[x] Checked the cask was not already refused in closed issues.
[x] Checked the cask is submitted to the correct repo.
The Package "Pylon 5.0.5 Camera Software Suite OS X" includes
pylon IP Configurator.app (com.baslerweb.pylon.util.ipconf)
pylon Programmer's Guide and API Reference.app (com.baslerweb.pylon.doc.cpp)
pylon Viewer.app (com.baslerweb.pylon.viewer)
Headers and Libraries (com.baslerweb.pylon.framework)
So I named the cask "pylon". Possible alternatives could be "pylon-suite" or "basler-pylon".
Moved from homebrew-cask PR #44127
I could not find clear rules on quotation marks:
brew cask create uses single quotes
Some packages in homebrew-drivers use double quotes
Single quotes don't work with variable expansion like 'package-#{version}.pkg' (afaik)
So I just stuck with double quotes for everything because I wanted to use variable expansion, I hope that's ok.
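A quick illustration of the difference (the version value below is made up):

```ruby
# Quoting difference described above: single quotes do not interpolate,
# double quotes do (version value is made up for the example).
version = "5.0.5"

single = 'package-#{version}.pkg'  # single quotes: #{} stays literal
double = "package-#{version}.pkg"  # double quotes: #{} is interpolated

puts single  # => package-#{version}.pkg
puts double  # => package-5.0.5.pkg
```

So sticking with double quotes throughout is a reasonable choice whenever interpolation is needed anywhere in the cask.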
| gharchive/pull-request | 2018-02-19T00:04:47 | 2025-04-01T06:38:08.773971 | {
"authors": [
"ckrooss"
],
"repo": "caskroom/homebrew-drivers",
"url": "https://github.com/caskroom/homebrew-drivers/pull/337",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
1227403796 | 🛑 Baked Bean is down
In a41fd4b, Baked Bean (https://baked-bean.co.uk) was down:
HTTP code: 403
Response time: 235 ms
Resolved: Baked Bean is back up in 59f8429.
| gharchive/issue | 2022-05-06T04:23:00 | 2025-04-01T06:38:08.776492 | {
"authors": [
"casman300"
],
"repo": "casman300/my_website_upptimes",
"url": "https://github.com/casman300/my_website_upptimes/issues/980",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1068211991 | InMemoryGlobalState hangs during 2nd run of create_domains entrypoint
Follow-up for #2346. When the smart contract from #2346 was run through InMemoryGlobalState, it worked the first time (it wrote new entries in the trie), but on the 2nd run, when the entries already existed, it hung seemingly forever. I couldn't observe similar behavior with LMDB. Running it through strace, it seems to hang on brk (memory allocations out of hand?). I didn't try to debug it further than this. (edited)
fixed in https://github.com/casper-network/casper-node/pull/3146
| gharchive/issue | 2021-12-01T10:10:37 | 2025-04-01T06:38:08.777973 | {
"authors": [
"piotr-dziubecki"
],
"repo": "casper-network/casper-node",
"url": "https://github.com/casper-network/casper-node/issues/2425",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
993629709 | QR codes for transactions
Transaction QR codes
Prize Bounty
23,000 (approx. 3,000 USDT) for each of the top 10 intermediate submissions
Challenge Description
Create QR codes for delegation or other transactions. Bring your creativity, ideas, and design to the table.
Winning contributors may receive funding or grants to continue this work beyond the hackathon.
Submission Requirements
Create a public GitHub repository for your project, then submit it using the Gitcoin UI.
Create a design document or a README file.
Find a way to explain your design and implementation. You can use a detailed written document with screenshots, or a video.
You can upload your document or the video here or on a platform of your choice. Use the following naming convention for your file:
Teamname_Title_Serial_YearMonthDate.*
Example: dAppsRUs_DAO_001_20210914.*
Add technical documentation and unit tests as appropriate.
All bounty submissions must be received no later than 11:59 PM EDT on October 11th, 2021, or earlier.
Judging Criteria
This entry will be judged based on the following:
Novelty, design, creativity, and complexity
Code correctness, unit tests, and technical documentation
Deployment of the project to the Testnet
A detailed design description and a plan for future possibilities and enhancements
A video or technical documentation explaining the design and the implementation
The submission will be compared to other advanced projects, and the top 5 submissions will receive a prize.
Winner Announcement Date
Projects will be evaluated within 2 weeks of the hackathon ending or earlier when possible. Winners will be announced the week of October 25th, and the payout will occur after the winners are announced.
CSPR vs USDT
If CSPR cannot be accepted in certain jurisdictions, winners will receive the equivalent amount in USDT. Gitcoin establishes the conversion rate on the day the bounty is issued.
Questions?
Join the Casper Hackathon Discord Server if you have any questions.
We are also holding live ask-me-anything sessions every weekday at 4 pm CEST.
I need to re-open this issue in a new category.
| gharchive/issue | 2021-09-10T22:01:38 | 2025-04-01T06:38:08.788201 | {
"authors": [
"ipopescu"
],
"repo": "casper-network/gitcoin-hackathon",
"url": "https://github.com/casper-network/gitcoin-hackathon/issues/23",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2206284477 | Suggestion: Retouch colors and maybe add some more
TL;DR
Colors (sage in particular) seem off to me; I propose slight adjustments, maybe adding more colors too.
The following section Explanation is a bunch of writing, explaining my thoughts; I recommend skipping it if you're not interested.
Explanation
Motivation
Ferra has a rich earthy tone, partly because of the abundance of browns and oranges used in the scheme. However, this comes at the cost of seeming overly homogeneous. One way to combat it is to put slightly more attention towards the accentual colors present.
Another noticeable outcome of the lacking focus on accentual colors is that they do not fit in as well as they could.
Notes
One thing that is easy to notice is the general lack of consistent saturation or chroma in the accentual colors; while Mist is practically white, Ember and Honey stun with high chromaticity. It would also be beneficial to include more variants of each color, to better cover the need for brighter and darker variations; this goes hand in hand with another easily apparent problem: the colors do not have consistent luminosity either.
Solution
In the following, I have established 8 categories of accentual colors, most with 3 grades of luminosity. I tried matching the chroma within a category, so as to ensure cohesion.
The chroma between color categories has outliers, but also tries to stay within bounds.[^1]
The luminosity of the grades tries to stay consistent over categories. Furthermore, I propose shifting the base colors (Ash, Umber and Bark) to be slightly more chromatic, with a bigger lean to brown.
Additionally, I added a blue and purplish color to fit the common Base16 color schemes more; while I personally dislike said schemes, it could be beneficial to allow for more color variety.
I am still undecided on whether green should retain the chroma levels present in Sage, or to increase them.[^2]
[^1]: the orange, red and yellow categories having nearly the same chroma, with green and white being low-chroma and rose being somewhere in-between.
[^2]: approximately double the previous chroma
[^3]: Here the vivid and light orange are Coral and Blush respectively, with Blush chroma-adjusted to fit Coral. The vivid low chroma green is Sage without any changes. Vivid rose is Rose, dark red is Ember, light yellow is honey.
The Actual Adjustments
A picture of the adjusted colors in helix using the old Sage color; the change is minimal but perceptible:
The colors[^3] enumerated:
| Color | Dark | Vivid | Light |
| --- | --- | --- | --- |
| Orange | #d67751 | #ffa07a | #ffc49e |
| Green (low chroma) | #8b906f | #b1b695 | #d6dbba |
| Green (high chroma) | #8c9368 | #b2b98e | #d7deb3 |
| Red | #e06b75 | None | #ff919b |
| Rose | None | #f6b6c9 | #ffc8db |
| Yellow | #d8a442 | #e9b553 | #f5d76e |
| White | #90909d | #b8b8c6 | #d6d6e4 |
| Blue | #839ae3 | #9eb3ff | #bbd0ff |
| Purple | #ba87d8 | #d69eff | #f6b3ff |
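The luminosity consistency claimed above can be spot-checked numerically. The sketch below uses WCAG relative luminance, which is only a rough proxy for whatever perceptual lightness scale (e.g. OKLab L) the proposal was tuned with, but it is enough to see whether the "Vivid" grades of the categories sit in a similar band:

```python
def srgb_to_linear(c):
    # Undo the sRGB transfer curve for one 0-255 channel value.
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color):
    # WCAG relative luminance: 0.0 for black, 1.0 for white.
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return (0.2126 * srgb_to_linear(r)
            + 0.7152 * srgb_to_linear(g)
            + 0.0722 * srgb_to_linear(b))

# "Vivid" grade of each category from the table above.
vivid = {"orange": "#ffa07a", "green": "#b1b695", "yellow": "#e9b553",
         "rose": "#f6b6c9", "white": "#b8b8c6", "blue": "#9eb3ff",
         "purple": "#d69eff"}
for name, hx in vivid.items():
    print(f"{name:>6}: {luminance(hx):.3f}")
```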
The adjustments to the base colors are as follows:
| Color | Hex |
| --- | --- |
| Night | Unchanged |
| Ash | #3c3538 |
| Umber | #4c4245 |
| Bark | #6f5c5e |
Considering that many other themes feature light versions, it would probably not hurt to display colors for such a light variant as well. Thus, iterating on this proposal, I have narrowed the changes down to only two shades of each color (per variant), merging the rose and ember shades into one color. The following is the revised palette:
Base
Dark Mode Colors
Light Mode Colors
Hex Codes
| Color | Dark Mode (Main) | Dark Mode (Alternate) | Light Mode (Main) | Light Mode (Alternate) |
| --- | --- | --- | --- | --- |
| White | #b8b8c6 | #d6d6e4 | #a4a4bb | #7d7e93 |
| Green | #b2b98e | #d7deb3 | #9aa95d | #758438 |
| Yellow | #e9b553 | #f5d76e | #d7a135 | #a6781a |
| Orange | #ffa07a | #ffc49e | #ef8961 | #c6623a |
| Rose | #ff919b | #f6b6c9 | #e99fb6 | #d15262 |
| Purple | #d69eff | #f6b3ff | #c689f2 | #c689f2 |
| Blue | #9eb3ff | #bbd0ff | #86a0f4 | #6a88d8 |
With the 4 base colors being:
| Color | Dark Mode | Light Mode |
| --- | --- | --- |
| Night (& Day?) | #2b292d | #a89594 |
| Ash | #a89594 | #cbb6b4 |
| Umber | #4d424b | #dec8c6 |
| Bark | #6f5d63 | #edd8d6 |
Actually scratch that, my colors suck
| gharchive/issue | 2024-03-25T17:18:19 | 2025-04-01T06:38:08.817828 | {
"authors": [
"artemisYo"
],
"repo": "casperstorm/ferra",
"url": "https://github.com/casperstorm/ferra/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
408551203 | change Dependency.OnValue(object value) to (T Value)
currently, this overload of OnValue has this definition:
public static Property OnValue<TDependencyType>(object value);
wouldn't
public static Property OnValue<TDependencyType>(TDependencyType value);
make more sense? (and catch more bugs via the compiler)
Also ran into this issue recently when changing types in a constructor and forgetting these values were supplied via DI. Seems like it should be relatively straightforward; not sure what I might be missing.
It might have been done this way on purpose to force specifying the generic type parameter since linting tools like ReSharper (and now Roslyn) suggest to remove redundant generic parameters.
If you had the following, and removed the generic parameter because it is redundant, then changed the thing variable to var it would pass a different type to Windsor:
IThing thing = new Thing();
Container.Register(
Component.For<X>().DependsOn(Dependency.OnValue<IThing>(thing))
);
Looks like a tradeoff both ways.
That a good point, since always supplying it would mean the type it supplied for DI would always be obvious.
Would a runtime check in the OnValue method be a reasonable addition to make sure the value is always a subtype of the provided generic type?
Would a runtime check in the OnValue method be a reasonable addition to make sure the value is always a subtype of the provided generic type?
Sounds like a reasonable addition, did you want to submit a pull request?
I think we are conflating two separate things:
1
IThing thing = new Thing();
Container.Register(
Component.For<X>().DependsOn(Dependency.OnValue<IThing>(thing))
);
vs
2
var thing = new NotAThing();
Container.Register(
Component.For<X>().DependsOn(Dependency.OnValue<IThing>(thing))
);
the second scenario compiles now! I am proposing changing it to this:
public static Property OnValue<TDependencyType>(TDependencyType value);
which would prevent 2 from even compiling
The problem with making this change as-is is that it will more than likely break a lot of code... which is bad for package maintainers - instead the existing method needs to be marked obsolete and 2 new methods should be created:
[Obsolete("use OnObjectValue or OnTypedValue<> instead")]
public static Property OnValue<TDependencyType>(object value);
public static Property OnObjectValue (object value);
// This is the old method for those that really want to use 'object'
// note the lack of generics since it served no purpose in the first place
public static Property OnTypedValue<TDependencyType>(TDependencyType value);
// This is the new / better version with type safety
@abbotware I'm not sure you understood what I described above. I've repeated it with some context.
If you had the following (and OnValue accepted TDependencyType rather than object):
IThing thing = new Thing();
Container.Register(
Component.For<X>().DependsOn(Dependency.OnValue<IThing>(thing))
);
and removed the generic parameter because it is redundant:
then changed the thing variable to var it would pass a different type to Windsor
I'm not saying this is all done in the same step, but refactoring this code using linting tools like ReSharper and Roslyn will break your code.
I understand completely - I create read only/immutable interfaces for my objects and inject those into the container.
Refactoring tools are not a panacea - I believe it is even trying to warn you that something might be undesirable, since the warning "inferred type will change" appears - I use the var refactoring all the time and I have never seen that message in VS or ReSharper! I would wager that warning is not present in other scenarios for that refactoring feature when it doesn't change the type... so this looks like user error :-)
I haven't verified it, but I think the solution I proposed might actually prevent refactoring tools from making a mistake, since it is more strongly typed. Right now, anything you pass to OnValue is only an object.
If OnValue expected a parameter of TDependencyType instead, it would preserve type information as most fluent APIs do when properly implemented.
public static Property OnObjectValue (object value);
// This is the old method for those that really want to use 'object'
// note the lack of generics since it served no purpose in the first place
@abbotware what do you mean by the generic type served no purpose? It is the key:
public static Property OnValue<TDependencyType>(object value)
{
return Property.ForKey<TDependencyType>().Eq(value);
}
whoops. I forgot about that!
In either case, it's not used to enforce the type of 'object value'. It seems like such a trivial change - however, it might have many unintended consequences: code that compiles today might stop compiling once this is changed.
Hence the reason recommend to mark it deprecated, and introduce 2 new versions.
Or just provide the OnTypedValue variant which is strongly typed and hope the old one falls out of favor
This has gone stale so I'm going to close it. I still think the unintended consequences are more important here than the compile time check. We could go with a runtime check in the future, or maybe even a chained API where the generic type parameter can't inferred so linting tools can't remove it being specified.
How is a compile time check not better?
How is a compile time check not better?
I've explained the unintended consequences multiple times in this issue, please reread the issue if you don't recall. I even proposed an idea for a compile time check which would avoid those unintended consequences in my last comment.
So why close this issue then?
Your idea makes no sense as stated,
maybe even a chained API where the generic type parameter can't inferred so linting tools can't remove it being specified.
NEW PROPOSAL
public static Property OnTypedValue<TDependencyType, TValueType>(TValueType value)
where TValueType : TDependencyType
{
return Property.OnValue<TDependencyType>(value);
}
With 2 type parameters, that are related via inheritance, I doubt any linting tool would recommend removal
Your idea makes no sense as stated,
maybe even a chained API where the generic type parameter can't inferred so linting tools can't remove it being specified.
Dependency.OnValue is implemented as a single line chained method call. Property.ForKey<> returns a PropertyKey and Eq returns a Property. My suggestion was pushing the compiler safety into the Property class, i.e. Property.ForKey<TKey> returning a PropertyKey<TKey> so PropertyKey.Eq would become generic and only accept the specified type.
NEW PROPOSAL
...
With 2 type parameters, that are related via inheritance, I doubt any linting tool would recommend removal
Your new proposal could work, but doesn't that mean you always have to specify the generic type parameter twice?
your new proposal could work, but doesn't that mean you always have to specify the generic type parameter twice?
yes, but that is the entire point of using this sort of technique
Specifying the types explicitly just makes it extremely obvious that something special is happening when using the OnValue/Dependency notation. Without it, this can be error-prone due to the 'object' parameter (hence this issue being opened in the first place).
With this overload there would be zero chance this would be used incorrectly or by accident, or change existing behavior. Even in my PR I renamed the function so it wouldn't have broken anything, but with 2 type parameters (I think I prefer this now) it can keep the same name. I think with 2 types, the compiler won't guess or try to infer the type.
@abbotware any opinion about the first half of my comment...?
We've already got:
Property.ForKey<TKey>().Eq(value)
If we changed ForKey<TKey>() to return a new class PropertyKey<TKey> that only handled Type keys (and not string keys) you'd get the compiler type checking without double specifying the generic type parameter, and probably also get compiler type checking for the Is<T> method for service overrides at the same time.
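For what it's worth, the runtime-check idea floated earlier in the thread is easy to prototype. Windsor itself is C#, so the following is just an illustrative Python sketch of the proposed behavior — reject a value that is not an instance of the declared dependency type — not Windsor code:

```python
def on_value(dependency_type, value):
    # Runtime analogue of "OnValue<TDependencyType>(object value)" with a
    # type check added: value must be an instance of dependency_type.
    if not isinstance(value, dependency_type):
        raise TypeError(
            f"value of type {type(value).__name__} is not a "
            f"{dependency_type.__name__}")
    return {"key": dependency_type, "value": value}

class IThing: pass
class Thing(IThing): pass
class NotAThing: pass

on_value(IThing, Thing())          # fine: Thing implements IThing
try:
    on_value(IThing, NotAThing())  # scenario 2 from the thread: now rejected
except TypeError as e:
    print("rejected:", e)
```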
| gharchive/issue | 2019-02-10T16:20:22 | 2025-04-01T06:38:08.840703 | {
"authors": [
"abbotware",
"jonorossi",
"matgrioni"
],
"repo": "castleproject/Windsor",
"url": "https://github.com/castleproject/Windsor/issues/465",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
227801198 | Deploy on tag
Add NuGet publishing, similarly to Castle.Core
Leaving this pull request for now until we've finished with https://github.com/castleproject/Core/pull/259.
Closing this as agreed here: https://github.com/castleproject/Windsor/issues/220#issuecomment-302530186
| gharchive/pull-request | 2017-05-10T20:30:05 | 2025-04-01T06:38:08.843357 | {
"authors": [
"alinapopa",
"fir3pho3nixx",
"jonorossi"
],
"repo": "castleproject/Windsor",
"url": "https://github.com/castleproject/Windsor/pull/233",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1843339506 | Lost facts fix
This PR adds an initial fix for FERC missing facts initially identified in this issue. The notebook in examples/lost_fact_exploration.ipynb demonstrates exploration of the problem, and outlines which missing facts are dealt with in this PR. There are still more cases that are identified in the notebook, but need more manual exploration before applying a fix. I will break out a separate issue for tracking the remaining missing facts.
Closed in favor of #118 .
| gharchive/pull-request | 2023-08-09T14:12:58 | 2025-04-01T06:38:08.863122 | {
"authors": [
"jdangerx",
"zschira"
],
"repo": "catalyst-cooperative/ferc-xbrl-extractor",
"url": "https://github.com/catalyst-cooperative/ferc-xbrl-extractor/pull/113",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2404120708 | Superset deployment
Overview
This PR contains superset configuration and cloud deployment changes for our data exploration tool.
Testing
How did you make sure this worked? How can a reviewer verify this?
# To-do list
- [ ] If updating analyses or data processing functions: make sure to update or write data validation tests (e.g., `test_minmax_rows()`)
- [ ] Update the [release notes](../docs/release_notes.rst): reference the PR and related issues.
- [ ] Ensure docs build, unit & integration tests, and test coverage pass locally with `make pytest-coverage` (otherwise the merge queue may reject your PR)
- [ ] Review the PR yourself and call out any questions or issues you have
- [ ] For minor ETL changes or data additions, once `make pytest-coverage` passes, make sure you have a fresh full PUDL DB downloaded locally, materialize new/changed assets and all their downstream assets and [run relevant data validation tests](https://catalystcoop-pudl.readthedocs.io/en/latest/dev/testing.html#data-validation) using `pytest` and `--live-dbs`.
- [ ] For significant ETL, data coverage or analysis changes, once `make pytest-coverage` passes, ensure the full ETL runs locally and [run data validation tests](https://catalystcoop-pudl.readthedocs.io/en/latest/dev/testing.html#data-validation) using `make pytest-validate` (a ~10 hour run). If you can't run this locally, run the `build-deploy-pudl` GitHub Action (or ask someone with permissions to). Then, check the logs on the `#pudl-deployments` Slack channel or `gs://builds.catalyst.coop`.
Thanks for all the great feedback! I pinned the docker base image version, updated auth0 env var instructions and set docker compose env var defaults. I also created some draft issues that I'll flesh out tomorrow.
You're hitting a bunch of this error in CI:
ERROR test/integration/glue_test.py::test_unmapped_utils_eia - TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard'
Which is related to dagster / python version incompatibilities: https://github.com/dagster-io/dagster/issues/22985
| gharchive/pull-request | 2024-07-11T21:01:34 | 2025-04-01T06:38:08.866322 | {
"authors": [
"bendnorman",
"jdangerx"
],
"repo": "catalyst-cooperative/pudl",
"url": "https://github.com/catalyst-cooperative/pudl/pull/3715",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2477248779 | Improve/besu network update
PULL REQUEST
Before
[x] 遵守 Commit 規範 (follow commit convention)
[ ] 遵守 Contributing 規範 (follow contributing)
說明 (Description)
相關問題 (Linked Issues)
貢獻種類 (Type of change)
[ ] Bug fix (除錯 non-breaking change which fixes an issue)
[x] New feature (增加新功能 non-breaking change which adds functionality)
[ ] Breaking change (可能導致相容性問題 fix or feature that would cause existing functionality to not work as expected)
[ ] Doc change (需要更新文件 this change requires a documentation update)
測試環境 (Test Configuration):
OS:
NodeJS Version:
NPM Version:
Docker Version:
檢查清單 (Checklist):
[x] 我的程式碼遵從此專案的規範 (My code follows the style guidelines of this project)
[ ] 我有對於自己的程式碼進行測試檢查 (I have performed a self-review of my own code)
[ ] 我有在程式碼中提供必要的註解 (I have commented my code, particularly in hard-to-understand areas)
[ ] 我有在文件中進行必要的更動 (I have made corresponding changes to the documentation)
[ ] 我的程式碼更動沒有顯著增加錯誤數量 (My changes generate no new warnings)
[ ] 我有新增必要的單元測試 (I have added tests that prove my fix is effective or that my feature works)
[ ] 我有檢查並更正程式碼錯誤的拼字 (I have checked my code and corrected any misspellings)
我已完成以上清單,並且同意遵守 Code of Conduct
I have completed the checklist and agree to abide by the code of conduct.
[x] 同意 (I consent)
@johnny30678 you should review this PR.
Wrong lint, empty git commit user, and all failed CI.
I think you should separate the improvements:
move all files from quorum to eth, or I think evm would be even better
update the besu network config
update the kubernetes part
Right now it's hard to review and to find the issues
| gharchive/pull-request | 2024-08-21T07:09:32 | 2025-04-01T06:38:08.914349 | {
"authors": [
"a7351220",
"kidneyweakx"
],
"repo": "cathayddt/bdk",
"url": "https://github.com/cathayddt/bdk/pull/109",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2106952835 | catwalk cannot be installed via home-brew
Whenever i try to install it via brew, it gives me
Downloading https://ghcr.io/v2/catppuccin/tap/catwalk/manifests/1.2.0-1
curl: (22) The requested URL returned error: 401
and i have been advised by hammy to open an issue about this
What is your Homebrew version (brew --version)?
hey @GenShibe thanks for raising this! the taps were made private by mistake.
whiskers, catwalk, and mdbook-catppuccin are now public and i'm able to install all three successfully. let me know if you have any further trouble with them.
| gharchive/issue | 2024-01-30T04:55:35 | 2025-04-01T06:38:08.916596 | {
"authors": [
"GenShibe",
"backwardspy",
"uncenter"
],
"repo": "catppuccin/homebrew-tap",
"url": "https://github.com/catppuccin/homebrew-tap/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2701954819 | PHP Typed Icons
Type
[X] File icon
[ ] Folder icon
Context and associations
PhpStorm provides different icons for PHP classes/interfaces/traits; the icon theme does not have these different icons.
References
Abstract class, Classes, and an Interface:
Trait:
Readonly:
This is probably for downstream to use in JetBrains. Sorry, I still haven't quite gotten around to decoupling the icons repository from the vscode extension.
| gharchive/issue | 2024-11-28T12:04:00 | 2025-04-01T06:38:08.920079 | {
"authors": [
"Coffee2CodeNL",
"sgoudham"
],
"repo": "catppuccin/vscode-icons",
"url": "https://github.com/catppuccin/vscode-icons/issues/365",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
184282947 | Refresh button
Hi,
Could you add a refresh button similar to the one Muximux (https://github.com/Tenzinn3/Managethis) has integrated into their dashboard?
Currently if I want to reload a just one tab, I would have to hit the refresh button on the browser, which will take me to the default tab. It would be great if you could implement a refresh button that refreshes only the selected frame/tab.
Overall great dashboard, keep it up. Looking forward to what more you could add to it; if you need any suggestions, let me know.
Double click the tab name. :)
Well I feel really stupid now haha :D
Btw, are you planning on releasing any more features or are you done with this project? Would love something like multiple user support.
Thanks for getting back to me.
Yea, i'm planning more stuff, i just need to find more time for it. Hopefully soon, keep the suggestions coming :)
| gharchive/issue | 2016-10-20T16:49:14 | 2025-04-01T06:38:08.931523 | {
"authors": [
"Githubtordl",
"causefx"
],
"repo": "causefx/iDashboard-PHP",
"url": "https://github.com/causefx/iDashboard-PHP/issues/16",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1198624833 | 🛑 TR66 is down
In 633dd96, TR66 (https://www.transportesruta66.com.ar/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: TR66 is back up in ab3a24a.
| gharchive/issue | 2022-04-09T14:08:34 | 2025-04-01T06:38:08.933995 | {
"authors": [
"cavalicenti"
],
"repo": "cavalicenti/upptime",
"url": "https://github.com/cavalicenti/upptime/issues/129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1199080654 | 🛑 Mas Valores is down
In 6b3d029, Mas Valores (https://www.masvalores.com.ar/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Mas Valores is back up in 3a6954b.
| gharchive/issue | 2022-04-10T15:49:22 | 2025-04-01T06:38:08.936472 | {
"authors": [
"cavalicenti"
],
"repo": "cavalicenti/upptime",
"url": "https://github.com/cavalicenti/upptime/issues/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
586735493 | My First PR commit
PR comments
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: cjxd-bot-test
If they are not already assigned, you can assign the PR to them by writing /assign @cjxd-bot-test in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
OWNERS
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
:star: PR built and available in a preview environment cb-kubecd-bdd-nh-1585032040-pr-1 here
| gharchive/pull-request | 2020-03-24T06:51:15 | 2025-04-01T06:38:08.954279 | {
"authors": [
"cjxd-bot-test"
],
"repo": "cb-kubecd/bdd-nh-1585032040",
"url": "https://github.com/cb-kubecd/bdd-nh-1585032040/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
566739106 | chore: bdd-spring-1582015162 to 0.0.1
chore: Promote bdd-spring-1582015162 to version 0.0.1
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approvers:
If they are not already assigned, you can assign the PR to them by writing /assign in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
OWNERS
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
| gharchive/pull-request | 2020-02-18T08:54:49 | 2025-04-01T06:38:08.984235 | {
"authors": [
"cjxd-bot-test"
],
"repo": "cb-kubecd/environment-pr-162-14-boot-vault-gke-production",
"url": "https://github.com/cb-kubecd/environment-pr-162-14-boot-vault-gke-production/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
573406889 | chore: bdd-spring-1583007820 to 0.0.1
chore: Promote bdd-spring-1583007820 to version 0.0.1
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approvers:
If they are not already assigned, you can assign the PR to them by writing /assign in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
OWNERS
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
| gharchive/pull-request | 2020-02-29T20:35:48 | 2025-04-01T06:38:08.988214 | {
"authors": [
"cjxd-bot-test"
],
"repo": "cb-kubecd/environment-pr-170-37-boot-gke-production",
"url": "https://github.com/cb-kubecd/environment-pr-170-37-boot-gke-production/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
623567270 | chore: bdd-spring-1590202430 to 0.0.1
chore: Promote bdd-spring-1590202430 to version 0.0.1
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To complete the pull request process, please assign
You can assign the PR to them by writing /assign in a comment when ready.
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
OWNERS
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
| gharchive/pull-request | 2020-05-23T02:59:40 | 2025-04-01T06:38:09.010931 | {
"authors": [
"cjxd-bot-test"
],
"repo": "cb-kubecd/environment-pr-230-12-gke-upgrade-staging",
"url": "https://github.com/cb-kubecd/environment-pr-230-12-gke-upgrade-staging/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1198888963 | Audio Issue: Chromebook 3100, pre 2021.
So I've successfully got Ubuntu 20.04 integrated into my Dell Chromebook 3100 and followed the setup procedures, yet no audio. Oddly, during the first part I noticed several "Mount Point not found" errors, at least before sof-setup-audio. What do I do?
Can you write this in terminal bash -x /usr/local/bin/setup-audio-skl 2>&1 | tee output.txt
Then upload the output.txt
Closing because no answer from author.
| gharchive/issue | 2022-04-10T05:42:46 | 2025-04-01T06:38:09.023503 | {
"authors": [
"LameLad007-Sudo",
"runcros"
],
"repo": "cb-linux/breath",
"url": "https://github.com/cb-linux/breath/issues/154",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
899398267 | _posts/0000-01-02-cbenakis.md
layout: slide
title: "Welcome to our second slide!"
Your test
Use the left arrow to go back!
How do I merge pull request?
| gharchive/pull-request | 2021-05-24T07:52:20 | 2025-04-01T06:38:09.035459 | {
"authors": [
"cbenakis"
],
"repo": "cbenakis/github-slideshow",
"url": "https://github.com/cbenakis/github-slideshow/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
271255335 | Create an installer for PSMoveService
Creating an issue to track the working being done on the installer by @gb2111.
I just checked in the following change related to Greg's efforts:
https://github.com/cboulay/PSMoveService/commit/4600f20f2eda3b2d6db3f53b757923d396b7f8e5
Files are now copied to ${ROOT_DIR}/install/${ARCH_LABEL}/ instead of ${ROOT_DIR}/${PSM_PROJECT_NAME}/${ARCH_LABEL}/ when running the "INSTALL" project
This will make it easier for the installer script to find the output build files
Fixed issue with BuildOfficialDistribution.bat script failing to find OpenCV cmake files
hi,
it can be tested on my fork:
https://github.com/gb2111/PSMoveService
let me know if there is a better way to let you test.
You need to download Inno Setup to compile the setup.
https://github.com/gb2111/PSMoveService
the script is here
misc/installer/installer_win64.iss
The setup PSMoveService-Setup64.exe will be created in the 'installer' folder.
I created a few shortcuts; not sure if that is a good idea.
This is a very basic version, so if you think we need more, please let me know.
Closing this issue since we now have an Inno Setup based installer
| gharchive/issue | 2017-11-05T07:42:36 | 2025-04-01T06:38:09.053083 | {
"authors": [
"HipsterSloth",
"gb2111"
],
"repo": "cboulay/PSMoveService",
"url": "https://github.com/cboulay/PSMoveService/issues/477",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
236733581 | Unpooling and deconvolution layers
Hello!
Is it possible to create a segmentation network using your library? As far as I know, for that I need unpooling and deconvolutional layers.
Hi!
Currently there is no unpooling or deconvolution. It's something I have planned to do but it's low on my priority list.
PR welcome :)
Def interested in this too. I'm not sure if my own skill/time (the two tend to be related!) is enough to add this feature, but many of my use cases would need proper learning-enabled upsampling in the form of transposed convolution.
I think I will get cracking on it. It would be very useful for generative networks.
IIRC, the maths are very similar to convolutions (some indexes swapped) and cudnn already handles that.
Some useful links:
Convolution and Transposed Convolution algos: https://arxiv.org/pdf/1603.07285.pdf
Convolution, GPU double: https://github.com/cbovar/ConvNetSharp/blob/master/src/ConvNetSharp.Volume.GPU/Double/Volume.cs#L245
Convolution, GPU single: https://github.com/cbovar/ConvNetSharp/blob/master/src/ConvNetSharp.Volume.GPU/Single/Volume.cs#L246
Convolution, CPU double: https://github.com/cbovar/ConvNetSharp/blob/master/src/ConvNetSharp.Volume/Double/Volume.cs#L145
Convolution, CPU single: https://github.com/cbovar/ConvNetSharp/blob/master/src/ConvNetSharp.Volume/Single/Volume.cs#L142
Tests: https://github.com/cbovar/ConvNetSharp/blob/master/src/ConvNetSharp.Volume.Tests/VolumeTests.cs#L252
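To make the "indexes swapped" relationship concrete, here is a minimal 1-D transposed convolution sketched in Python via output scattering: each input element scatters `kernel * x[i]` into the output starting at position `i * stride`. This illustrates the operation itself (assuming a given stride and no padding), not ConvNetSharp's implementation:

```python
def conv1d_transpose(x, k, stride=1):
    # Output length for stride s, no padding: (len(x) - 1) * s + len(k).
    out = [0.0] * ((len(x) - 1) * stride + len(k))
    for i, xi in enumerate(x):
        for j, kj in enumerate(k):
            out[i * stride + j] += xi * kj  # scatter-add, the reverse of gather
    return out

print(conv1d_transpose([1, 2], [1, 1, 1], stride=2))
# -> [1.0, 1.0, 3.0, 2.0, 2.0]
```

A 2-D version is the same double loop over an extra pair of indices, and the backward pass of a transposed convolution is an ordinary convolution — which is why the existing convolution kernels linked above are a good starting point.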
| gharchive/issue | 2017-06-18T17:13:23 | 2025-04-01T06:38:09.059062 | {
"authors": [
"Daemon2017",
"cbovar",
"tzaeru"
],
"repo": "cbovar/ConvNetSharp",
"url": "https://github.com/cbovar/ConvNetSharp/issues/52",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2527404729 | Replace non-bonded interactions of specified atom pairs
Is it possible to cancel non-bonded interactions between specified atom pairs and introduce custom potential instead?
Unfortunately no. You could try smina (not developed by us) or the --modpair option in autodock-gpu, which allows limited customization, but maybe that is enough for your purposes. You can also use meeko to customize the atom typing with SMARTS.
https://github.com/ccsb-scripps/autoDock-gpu
https://github.com/forlilab/meeko
Thank you very much for your reply. Modifying parameters based on atom types might not be sufficient for my needs, as I intend to replace interactions between specific atoms identified by their indices, regardless of their atom types. To achieve this, would it be necessary to modify the source code?
Yes, it would be necessary to modify the source.
Thank you!
| gharchive/issue | 2024-09-16T03:02:17 | 2025-04-01T06:38:09.351937 | {
"authors": [
"YGuangye",
"diogomart"
],
"repo": "ccsb-scripps/AutoDock-Vina",
"url": "https://github.com/ccsb-scripps/AutoDock-Vina/issues/342",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1327764933 | Add Background Color option to Media and Text widget
Summary of changes
Add Background Color option to Media and Text widget so they can be used instead of custom-built YAML components
Frontend
add relevant background color fields to gatsby-node.js schema
add background and text color logic to mediaText.js
add data-title fields to widgets.js and sectionWidgets.js for reference purposes (unlike the html title attribute, data-title does not result in a tooltip on hover and is invisible to end users unless they view source code)
remove obsolete media links field
upgrade packages
Backend
add Background Color taxonomy with choices based on Bootstrap 5 classes
add Background field to Media and Text paragraph
remove obsolete links field from Media and Text paragraph
Test Plan
Go to https://tqtest.gatsbyjs.io/media-text-testing and ensure it's consistent with the original version on https://preview-ugconthub.gtsb.io/media-text-testing
Go to https://tqtest.gatsbyjs.io/media-text-test2 and ensure the colors display correctly (headings and text should be black for uog-blue-muted and light-gray backgrounds, white for dark-gray backgrounds, and default otherwise, i.e. dark text, red heading)
Do the same for https://tqtest.gatsbyjs.io/media-text-video
Review https://tqtest.gatsbyjs.io/bcomm/become-a-global-leader and ensure the two Future You media/text widgets have the uog-blue-muted background color
Review https://tqtest.gatsbyjs.io/study-in-canada and verify the Things You Should Know About the City of Guelph media and text widgets look and behave like the custom YAML widgets on https://www.uoguelph.ca/study-in-canada
Drupal multidev can be reviewed at https://medtxtbg-chug.pantheonsite.io/
The page https://tqtest.gatsbyjs.io/media-text2 doesn't seem to exist.
Sorry, typo - the link is https://tqtest.gatsbyjs.io/media-text-test2
I find the bg-light option so light that I can barely tell the difference between that and the white background. I think we should just limit the options to the light-blue (which will be lighter than it currently is for accessibility) and the dark grey.
Although I worry that we're opening a can of worms by even adding the dark grey option. I like the way it looks; however, there may be additional accessibility issues with it. The red headings, for example, fail colour contrast.
Agree with Miranda that the bg-light option could be removed.
I think you have to use single quotes in the classNames utility
I think you have to use single quotes in the classNames utility
Changed the syntax but it didn't make a difference - the problem was due to another issue which should now be resolved.
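For what it's worth, the colour logic described in the test plan (black text on uog-blue-muted and light-gray, white on dark-gray, defaults otherwise) can be sketched as a plain function. This is illustrative only; the function and class names are my guesses based on the test plan, not the actual code in mediaText.js:

```javascript
// Sketch only: derive text-colour classes from the chosen background class.
// Per the test plan: dark text on uog-blue-muted / light-gray backgrounds,
// white text on dark-gray, and framework defaults otherwise.
function mediaTextClasses(background) {
  const classes = ['media-text'];
  if (!background) return classes.join(' ');
  classes.push(background);
  if (background === 'bg-dark-gray') {
    classes.push('text-white');
  } else if (background === 'bg-uog-blue-muted' || background === 'bg-light-gray') {
    classes.push('text-dark');
  }
  return classes.join(' ');
}
```

In the component, the resulting string would be passed to the wrapper element's className.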
If we can fix the primary-outline and info-outline colour contrast, that would be great. Other than that, not seeing roadblocks.
Done - see latest changes on https://tqtest.gatsbyjs.io/media-text-test2
| gharchive/pull-request | 2022-08-03T20:39:09 | 2025-04-01T06:38:09.364892 | {
"authors": [
"abanuog",
"mmafe",
"tqureshi-uog"
],
"repo": "ccswbs/gus",
"url": "https://github.com/ccswbs/gus/pull/105",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1430027647 | question
I had to downgrade the docker-compose file to 3.3, like the file below. I'm getting connection refused on the ports, and /var/log/salt/master is logging the messages below; tell me if this looks like a bug and I can report it as such (using the template). I've also noticed that there's no configuration in /home/salt/data/config/. I've added a master.conf with default ports but no luck. Thanks for any help.
version: '3.3'
volumes:
roots:
keys:
logs:
services:
master:
container_name: salt_master_engage1
image: ghcr.io/cdalvaro/docker-salt-master:latest
restart: always
volumes:
- "roots/:/home/salt/data/srv"
- "keys/:/home/salt/data/keys"
- "logs/:/home/salt/data/logs"
ports:
- "4505:4505"
- "4506:4506"
### salt-api port
# - "8000:8000"
healthcheck:
test: ["CMD", "/usr/local/sbin/healthcheck"]
#start_period: 30s
environment:
DEBUG: 'false'
TZ: America/Chicago
PUID: 1000
PGID: 1000
SALT_LOG_LEVEL: info
### salt-api settings
# SALT_API_SERVICE_ENABLED: 'True'
# SALT_API_USER: salt_api
# SALT_API_USER_PASS: 4wesome-Pass0rd
2022-10-31 10:12:57,968 [salt.modules.network:2143][ERROR ][12952] Exception while creating a ThreadPoolExecutor for resolving FQDNs: can't start new thread
2022-10-31 10:12:57,986 [salt.modules.network:2143][ERROR ][12951] Exception while creating a ThreadPoolExecutor for resolving FQDNs: can't start new thread
2022-10-31 10:12:57,995 [salt.modules.network:2143][ERROR ][12950] Exception while creating a ThreadPoolExecutor for resolving FQDNs: can't start new thread
2022-10-31 10:12:57,996 [salt.modules.network:2143][ERROR ][12949] Exception while creating a ThreadPoolExecutor for resolving FQDNs: can't start new thread
2022-10-31 10:12:58,007 [salt.modules.network:2143][ERROR ][12948] Exception while creating a ThreadPoolExecutor for resolving FQDNs: can't start new thread
2022-10-31 10:12:58,066 [salt.utils.process:998 ][ERROR ][13192] An un-handled exception from the multiprocessing process 'FileserverUpdate' was caught:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/salt/utils/process.py", line 993, in wrapped_run_func
return run_func()
File "/usr/local/lib/python3.10/dist-packages/salt/master.py", line 508, in run
self.update_threads[interval].start()
File "/usr/lib/python3.10/threading.py", line 935, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
Hi!
Thank you very much for opening the issue!
The master.yml is automatically generated when the container starts.
If you need to add some specific configuration that you can't set using env variables, create a .conf file that fulfills your needs. (You can see the test directory for some help)
I'll try to reproduce your bug and see if I can figure out a solution.
Hi @ggmartins,
I've tried you compose file with the following tweaks and everything works fine:
version: '3.3'
volumes:
roots:
keys:
services:
master:
container_name: salt_master_engage1
image: ghcr.io/cdalvaro/docker-salt-master:latest
restart: always
volumes:
- "roots:/home/salt/data/srv/"
- "keys:/home/salt/data/keys/"
- "./logs/:/home/salt/data/logs/"
ports:
- "4505:4505"
- "4506:4506"
### salt-api port
# - "8000:8000"
healthcheck:
test: ["CMD", "/usr/local/sbin/healthcheck"]
#start_period: 30s
environment:
DEBUG: 'false'
TZ: America/Chicago
PUID: 1000
PGID: 1000
SALT_LOG_LEVEL: info
### salt-api settings
# SALT_API_SERVICE_ENABLED: 'True'
# SALT_API_USER: salt_api
# SALT_API_USER_PASS: 4wesome-Pass0rd
Please, could you test it??
If you are viewing salt logs inside the logs volume, you should set the [SALT_LEVEL_LOGFILE](https://docs.saltproject.io/en/latest/ref/configuration/master.html#log-level-logfile) to info as well, since SALT_LOG_LEVEL is for the standard output.
I'll make some changes to use SALT_LOG_LEVEL when SALT_LEVEL_LOGFILE is not defined in order to make it less confusing.
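For reference, a compose-style sketch of that split (both variable names appear earlier in this thread; the values are just examples):

```yaml
environment:
  SALT_LOG_LEVEL: info       # log level for the container's standard output
  SALT_LEVEL_LOGFILE: info   # log level for the files under /home/salt/data/logs
```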
ok, thanks, looks like your changes are working on one machine. The other server seems to be having problems connecting the minions, but we think the problem is with the server itself. Thank you so much for your help! Thanks for the tip on logging too.
You are welcome!
| gharchive/issue | 2022-10-31T15:21:03 | 2025-04-01T06:38:09.461141 | {
"authors": [
"cdalvaro",
"ggmartins"
],
"repo": "cdalvaro/docker-salt-master",
"url": "https://github.com/cdalvaro/docker-salt-master/issues/171",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
832174503 | Add support for configurating the reactor in master config
As per the documentation, there is a way for telling the master to sync custom types on minions' start. Please refer to: https://docs.saltproject.io/en/latest/topics/reactor/index.html#minion-start-reactor
It would be good to have a config option and directory mapping for configuring the reactor, much like the other options (mounting the keys, roots, etc.).
An alternative could be enhancing the config system so a master config template (or zero or more small config files like Apache's conf.d/* files, for example) can be used. This way, future or yet unsupported options could be covered easily.
Hi @dmlambea! Thank you for opening this issue!
Right now, you can set your own reactor settings by creating a reactor.conf file inside the config directory and mounting it:
# config/reactor.conf
reactor:
- 'salt/minion/*/start':
- /srv/reactor/sync_grains.sls
/srv directory is symlinked to /home/salt/data/srv, but you can also specify /home/salt/data/srv/reactor/sync_grains.sls instead of /srv/reactor/sync_grains.sls.
docker run --name salt_stack --detach \
--publish 4505:4505 --publish 4506:4506 \
--volume $(pwd)/roots/:/home/salt/data/srv/ \
--volume $(pwd)/config/:/home/salt/data/config/ \ # Mounts config/ dir
cdalvaro/docker-salt-master:latest
This way, you can add your sync_grains.sls file to roots/reactor:
# roots/reactor/sync_grains.sls
sync_grains:
local.saltutil.sync_grains:
- tgt: {{ data['id'] }}
And that should do the trick. I use this method for my start.sls reactor.
Please let me know if this method does not fit your requirements, or if you see any way to improve this support.
Anyway, I will add this case to the documentation for better support.
Hi @cdalvaro
It worked, thank you!
| gharchive/issue | 2021-03-15T20:39:33 | 2025-04-01T06:38:09.466367 | {
"authors": [
"cdalvaro",
"dmlambea"
],
"repo": "cdalvaro/docker-salt-master",
"url": "https://github.com/cdalvaro/docker-salt-master/issues/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2047985523 | PDF Adding Info
Add important information to the PDF that is relevant to the assignment
Added parts of the assignment that is necessary for grading
| gharchive/issue | 2023-12-19T06:05:09 | 2025-04-01T06:38:09.467519 | {
"authors": [
"cdasilvasantos"
],
"repo": "cdasilvasantos/is218-group-project",
"url": "https://github.com/cdasilvasantos/is218-group-project/issues/104",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Consider supporting Python 3?
I noticed that the Python environment GAE provides already includes Python 3.5. The version is a bit old, but could supporting it be considered?
Not possible in the short term. The biggest blocker is that Calibre doesn't support it. The code I write myself is basically all compatible with both 2.x and 3.x, since I normally use Python 3.
@cdhigh Thanks, let's keep this issue open for now then.
Since it can't be solved, why keep it open? Besides, going to great lengths to support 3.x isn't really worth it.
| gharchive/issue | 2018-09-21T14:55:56 | 2025-04-01T06:38:09.468744 | {
"authors": [
"cdhigh",
"specter119"
],
"repo": "cdhigh/KindleEar",
"url": "https://github.com/cdhigh/KindleEar/issues/522",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2266981774 | fix(templates): tell jest to prefer ts files over compiled js files
The template project can run completely without emitting .js files, - e.g. ts-node is used for synth and ts-jest.
However, if js files are emitted via npm run compile, they are imported by test files instead of .ts files that may have been updated.
i.e. given this import in main.test.ts, it will import and test main.ts unless main.js exists in which case main.js is imported.
import {MyChart} from './main';
This is rather confusing if e.g. some changes to main.ts have been pulled. Let's fix this!
Approach: per https://jestjs.io/docs/configuration#modulefileextensions-arraystring
We recommend placing the extensions most commonly used in your project on the left, so if you are using TypeScript, you may want to consider moving "ts" and/or "tsx" to the beginning of the array.
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 2194
Questions ?
Please refer to the Backport tool documentation
| gharchive/pull-request | 2024-04-27T11:54:28 | 2025-04-01T06:38:09.474964 | {
"authors": [
"cakemanny",
"cdk8s-automation"
],
"repo": "cdk8s-team/cdk8s-cli",
"url": "https://github.com/cdk8s-team/cdk8s-cli/pull/2194",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1322851054 | chore(deps): upgrade dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-2.x" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 426
Questions ?
Please refer to the Backport tool documentation
| gharchive/pull-request | 2022-07-30T02:50:02 | 2025-04-01T06:38:09.477810 | {
"authors": [
"cdk8s-automation"
],
"repo": "cdk8s-team/cdk8s-cli",
"url": "https://github.com/cdk8s-team/cdk8s-cli/pull/426",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2530491441 | chore(deps): upgrade dev dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-dev-dependencies-2.x" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 2880
Questions ?
Please refer to the Backport tool documentation
| gharchive/pull-request | 2024-09-17T09:02:47 | 2025-04-01T06:38:09.480702 | {
"authors": [
"cdk8s-automation"
],
"repo": "cdk8s-team/cdk8s-core",
"url": "https://github.com/cdk8s-team/cdk8s-core/pull/2880",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1345035214 | chore(deps): upgrade dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-k8s-24-main" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 1060
Questions ?
Please refer to the Backport tool documentation
| gharchive/pull-request | 2022-08-20T02:46:47 | 2025-04-01T06:38:09.483328 | {
"authors": [
"cdk8s-automation"
],
"repo": "cdk8s-team/cdk8s-plus",
"url": "https://github.com/cdk8s-team/cdk8s-plus/pull/1060",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1383234519 | chore(deps): upgrade dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-k8s-22-main" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 1186
Questions ?
Please refer to the Backport tool documentation
| gharchive/pull-request | 2022-09-23T02:59:46 | 2025-04-01T06:38:09.485883 | {
"authors": [
"cdk8s-automation"
],
"repo": "cdk8s-team/cdk8s-plus",
"url": "https://github.com/cdk8s-team/cdk8s-plus/pull/1186",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1927768240 | chore(deps): upgrade dev dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-dev-dependencies-k8s-27-main" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 3041
Questions ?
Please refer to the Backport tool documentation
| gharchive/pull-request | 2023-10-05T09:06:20 | 2025-04-01T06:38:09.488448 | {
"authors": [
"cdk8s-automation"
],
"repo": "cdk8s-team/cdk8s-plus",
"url": "https://github.com/cdk8s-team/cdk8s-plus/pull/3041",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1981192701 | chore(deps): upgrade compiler dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-compiler-dependencies-k8s-27-main" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 3255
Questions ?
Please refer to the Backport tool documentation
| gharchive/pull-request | 2023-11-07T12:02:19 | 2025-04-01T06:38:09.491138 | {
"authors": [
"cdk8s-automation"
],
"repo": "cdk8s-team/cdk8s-plus",
"url": "https://github.com/cdk8s-team/cdk8s-plus/pull/3255",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
133464376 | Images for README
img
| gharchive/issue | 2016-02-13T19:37:55 | 2025-04-01T06:38:09.537625 | {
"authors": [
"cdpr123"
],
"repo": "cdpr123/cdpr123.github.io",
"url": "https://github.com/cdpr123/cdpr123.github.io/issues/1",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
983202762 | Terminal font rendering bug in Firefox and Chrome.
OS/Web Information
Web Browser: Firefox, Google Chrome, Microsoft Edge
Local OS: Linux, Windows 10
Remote OS: Amazon Linux 2
Remote Architecture: x86_64
code-server --version: 3.11.1 c680aae973d83583e4a73dc0c422f44021f0140e
Steps to Reproduce
open code-server tab in Firefox
open terminal
type _______
Expected
The underscore characters are visible.
Actual
The underscore characters are not visible, leaving what appears to be blank spaces. In Chrome based browsers, adjusting the terminal's line spacing to 1.1 makes the underscores visible, but this does not work in Firefox.
Logs
Console stdout/stderr:
[2021-08-30T20:58:55.509Z] info - Not serving HTTPS
[2021-08-30T21:02:15.733Z] debug forking vs code...
[2021-08-30T21:02:16.096Z] debug setting up vs code...
[2021-08-30T21:02:16.099Z] debug vscode got message from code-server {"type":"init"}
[2021-08-30T21:02:18.976Z] debug vscode got message from code-server {"type":"socket"}
[2021-08-30T21:02:18.979Z] debug protocol Initiating handshake... {"token":"aae87521-0bbb-4f11-9aac-5b37061ad123"}
[2021-08-30T21:02:19.040Z] debug protocol Handshake completed {"token":"aae87521-0bbb-4f11-9aac-5b37061ad123"}
[2021-08-30T21:02:19.041Z] debug management Connecting... {"token":"aae87521-0bbb-4f11-9aac-5b37061ad123"}
[2021-08-30T21:02:19.042Z] debug vscode 1 active management connection(s)
[2021-08-30T21:02:20.647Z] debug vscode got message from code-server {"type":"socket"}
[2021-08-30T21:02:20.647Z] debug protocol Initiating handshake... {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:20.884Z] debug got latest version {"latest":"3.11.1"}
[2021-08-30T21:02:20.884Z] debug comparing versions {"current":"3.11.1","latest":"3.11.1"}
[2021-08-30T21:02:20.919Z] debug protocol Handshake completed {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:20.920Z] debug exthost Connecting... {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:20.921Z] debug exthost Getting NLS configuration... {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:20.921Z] debug vscode 1 active exthost connection(s)
[2021-08-30T21:02:20.921Z] debug exthost Spawning extension host... {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:20.925Z] debug exthost Waiting for handshake... {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:21.328Z] debug exthost Handshake completed {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:21.328Z] debug exthost Sending socket {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] failed to start after retrying for some time, giving up. Please report this as a bug report!
ERR [File Watcher (nsfw)] failed to start after retrying for some time, giving up. Please report this as a bug report!
^C[2021-08-30T21:02:59.332Z] debug child:72071 disposing {"code":"SIGINT"}
Screenshot
The "command not found" error was produced by typing several underscores and hitting enter.
Notes
This issue can be reproduced in VS Code: No
I can't reproduce this in Firefox + macOS. Are you using a custom font in your terminal by chance?
https://user-images.githubusercontent.com/3806031/131408401-dfb9fbd3-54b8-4cbe-b98f-5bd9ecaab881.mov
No, default options. Please test on a non-retina display.
Underscore is not visible in firefox on 133% zoom:
https://user-images.githubusercontent.com/76137/131497615-2398d771-046a-41f2-9b5b-490c2404c239.mp4
Underscore is not visible in firefox when it's on the last row:
https://user-images.githubusercontent.com/76137/131498038-4310f522-29db-4476-aeea-ab901e724c8f.mp4
Underscore is not visible in firefox on 133% zoom:
I can't reproduce this unfortunately. Are you using a custom font?
I can reproduce it on latest Linux Mint and Fedora. Changing the font to a custom one made the underscore visible all the time, and by visible I mean:
there's something wrong with how the underscore is rendered.
Got it!
"terminal.integrated.gpuAcceleration": "off"
fixes the issue:
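For anyone landing here later: the setting goes in the user settings.json. As I understand it, "off" makes the terminal fall back to the DOM renderer instead of the canvas/WebGL one, which avoids the glyph clipping:

```json
{
  "terminal.integrated.gpuAcceleration": "off"
}
```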
| gharchive/issue | 2021-08-30T21:05:10 | 2025-04-01T06:38:09.552885 | {
"authors": [
"jsjoeio",
"senyai",
"tidux"
],
"repo": "cdr/code-server",
"url": "https://github.com/cdr/code-server/issues/4073",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2331769434 | EKS Upgrade to 1.30 in Staging
Summary | Résumé
New version of EKS - 1.30
Verified that there are no deprecations we need to worry about.
Release notes: https://kubernetes.io/blog/2024/04/17/kubernetes-v1-30-release/
Upgraded and working in dev.
Related Issues | Cartes liées
Chore
Test instructions | Instructions pour tester la modification
Smoke test/perf test staging
Release Instructions | Instructions pour le déploiement
None.
Reviewer checklist | Liste de vérification du réviseur
[ ] This PR does not break existing functionality.
[x] This PR does not violate GCNotify's privacy policies.
[x] This PR does not raise new security concerns. Refer to our GC Notify Risk Register document on our Google drive.
[x] This PR does not significantly alter performance.
[x] Additional required documentation resulting from these changes is covered (such as the README, setup instructions, a related ADR or the technical documentation).
⚠ If boxes cannot be checked off before merging the PR, they should be moved to the "Release Instructions" section with appropriate steps required to verify before release. For example, changes to celery code may require tests on staging to verify that performance has not been affected.
tbh, I'd say that we can't really check off all these, at least "This PR does not break existing functionality." should be checked before release 👍
| gharchive/pull-request | 2024-06-03T18:17:41 | 2025-04-01T06:38:09.560847 | {
"authors": [
"ben851",
"sastels"
],
"repo": "cds-snc/notification-terraform",
"url": "https://github.com/cds-snc/notification-terraform/pull/1349",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1610006375 | 🛑 tanecni-divadlo.cz is down
In 5c9156e, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in d42fc34.
| gharchive/issue | 2023-03-05T00:56:47 | 2025-04-01T06:38:09.582483 | {
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/10210",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1610225034 | 🛑 tanecni-divadlo.cz is down
In f02f99e, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 9de881a.
| gharchive/issue | 2023-03-05T14:36:49 | 2025-04-01T06:38:09.585806 | {
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/10256",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1611815820 | 🛑 Bali 2017 is down
In d1eac63, Bali 2017 ($SITE_BALI) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Bali 2017 is back up in 39fc0b5.
| gharchive/issue | 2023-03-06T16:38:10 | 2025-04-01T06:38:09.587898 | {
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/10334",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1630447514 | 🛑 tanecni-divadlo.cz is down
In 3bf2e0c, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 3a4f32d.
| gharchive/issue | 2023-03-18T17:27:40 | 2025-04-01T06:38:09.590933 | {
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/11071",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1689907281 | 🛑 tanecni-divadlo.cz is down
In 3e532f5, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in d1c3833.
| gharchive/issue | 2023-04-30T12:53:38 | 2025-04-01T06:38:09.593976 | {
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/13279",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1510841114 | 🛑 tanecni-divadlo.cz is down
In 40a39b6, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 7775498.
| gharchive/issue | 2022-12-26T10:47:00 | 2025-04-01T06:38:09.597223 | {
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/5322",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1602447736 | 🛑 tanecni-divadlo.cz is down
In 564fc72, tanecni-divadlo.cz (https://www.tanecni-divadlo.cz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: tanecni-divadlo.cz is back up in 9fe0b08.
| gharchive/issue | 2023-02-28T06:26:47 | 2025-04-01T06:38:09.600258 | {
"authors": [
"cebreus"
],
"repo": "cebreus/upptime",
"url": "https://github.com/cebreus/upptime/issues/9914",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1465212253 | 🛑 CDN is down
In ffb1bd5, CDN (https://cdn.ceccun.com/cdynamic/captive) was down:
HTTP code: 522
Response time: 15582 ms
Resolved: CDN is back up in 9863df2.
| gharchive/issue | 2022-11-26T17:31:49 | 2025-04-01T06:38:09.602699 | {
"authors": [
"ejaz4"
],
"repo": "ceccun/status",
"url": "https://github.com/ceccun/status/issues/1090",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1508868165 | 🛑 CDN is down
In 206f5dc, CDN (https://cdn.ceccun.com/cdynamic/captive) was down:
HTTP code: 522
Response time: 15433 ms
Resolved: CDN is back up in 85afe92.
| gharchive/issue | 2022-12-23T04:23:05 | 2025-04-01T06:38:09.604997 | {
"authors": [
"ejaz4"
],
"repo": "ceccun/status",
"url": "https://github.com/ceccun/status/issues/1761",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1117090310 | 🛑 CDN is down
In e7ae9eb, CDN (https://cdn.ceccun.com) was down:
HTTP code: 403
Response time: 518 ms
Resolved: CDN is back up in 0280423.
| gharchive/issue | 2022-01-28T07:37:23 | 2025-04-01T06:38:09.607212 | {
"authors": [
"ejaz4"
],
"repo": "ceccun/status",
"url": "https://github.com/ceccun/status/issues/195",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1718115294 | 🛑 CDN is down
In 71f92ad, CDN (https://cdn.ceccun.com/cdynamic/captive) was down:
HTTP code: 522
Response time: 15367 ms
Resolved: CDN is back up in df74161.
| gharchive/issue | 2023-05-20T09:32:47 | 2025-04-01T06:38:09.609752 | {
"authors": [
"ejaz4"
],
"repo": "ceccun/status",
"url": "https://github.com/ceccun/status/issues/4957",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1824012062 | 🛑 CDN is down
In 149a982, CDN (https://cdn.ceccun.com/cdynamic/captive) was down:
HTTP code: 522
Response time: 15296 ms
Resolved: CDN is back up in b14fb76.
| gharchive/issue | 2023-07-27T09:46:32 | 2025-04-01T06:38:09.612270 | {
"authors": [
"ejaz4"
],
"repo": "ceccun/status",
"url": "https://github.com/ceccun/status/issues/6129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1838024738 | 🛑 CDN is down
In 6101b0d, CDN (https://cdn.ceccun.com/cdynamic/captive) was down:
HTTP code: 525
Response time: 275 ms
Resolved: CDN is back up in df6eca1.
| gharchive/issue | 2023-08-06T03:14:58 | 2025-04-01T06:38:09.614549 | {
"authors": [
"ejaz4"
],
"repo": "ceccun/status",
"url": "https://github.com/ceccun/status/issues/6308",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1867765266 | 🛑 CDN is down
In b110d0e, CDN (https://cdn.ceccun.com/cdynamic/captive) was down:
HTTP code: 525
Response time: 244 ms
Resolved: CDN is back up in 12b435c after 792 days, 15 hours, 19 minutes.
| gharchive/issue | 2023-08-25T22:57:25 | 2025-04-01T06:38:09.616914 | {
"authors": [
"ejaz4"
],
"repo": "ceccun/status",
"url": "https://github.com/ceccun/status/issues/6681",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2427883355 | chore(main): release 0.37.1
:robot: I have created a release beep boop
0.37.1 (2024-07-24)
Bug Fixes
bump guzzlehttp/guzzle from 7.9.1 to 7.9.2 (#639) (61de914)
bump laravel/framework from 11.16.0 to 11.17.0 (#640) (2c7a8c4)
bump league/commonmark from 2.5.0 to 2.5.1 (#638) (549978c)
This PR was generated with Release Please. See documentation.
:robot: Created releases:
0.37.1
:sunflower:
| gharchive/pull-request | 2024-07-24T15:28:48 | 2025-04-01T06:38:09.637088 | {
"authors": [
"cedricziel"
],
"repo": "cedricziel/faro-shop",
"url": "https://github.com/cedricziel/faro-shop/pull/641",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
104693360 | Startup time is high
numbat-collector is one of the larger chunks of the startup time of my module.
It clocks in at 800ms to require on my machine.
It's a dep of newww, and I was mechanically going through deps with require-time to see what's slow. It's got a 3+ second startup time.
Though looking deeper, it looks like newww depends on it but never uses it.
| gharchive/issue | 2015-09-03T13:27:35 | 2025-04-01T06:38:09.638549 | {
"authors": [
"aredridel"
],
"repo": "ceejbot/numbat-collector",
"url": "https://github.com/ceejbot/numbat-collector/issues/7",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
1092433355 | Decide on governance
Summary
Currently, the gov module is stripped entirely from celestia-app. We should decide on the gov mechanism, if any.
Problem Definition
There is no way to vote on (parameter) changes currently.
Proposal
@musalbas proposed to only have signalling text governance proposals on chain.
The upgrade path must be clearly defined for various scenarios and what role governance should play in this if any.
E.g. should coin holders vote on a block size increase? On upgrading to a new software release? On other params? Only on non-binding signalling text proposals?
Action Items
[ ] Decide on if we want any governance in the sense of "coin-voting" at all
[ ] Decide on what kind of proposals governance can vote
[ ] Summarize findings and decision in a brief ADR
[ ] then: implement changes in app
[ ] add a more fine-grained document about various upgrade-paths including what can be voted on but also beyond what is covered by governance
Related:
https://www.figment.io/resources/cosmos-parameter-change-documentation, https://github.com/gavinly/CosmosParametersWiki/blob/master/param_index.md
https://github.com/celestiaorg/celestia-specs/issues/128
https://github.com/celestiaorg/celestia-specs/issues/171
some SDK discussions and issues to keep an eye on:
https://github.com/cosmos/cosmos-sdk/discussions/9066
https://github.com/cosmos/cosmos-sdk/discussions/9913
https://linktr.ee/cosmos_gov
code that (currently) handles text proposals: https://github.com/cosmos/cosmos-sdk/blob/5725659684fc93790a63981c653feee33ecf3225/x/gov/types/proposal.go#L249-L253, code that (currently) handles param changes: https://github.com/cosmos/cosmos-sdk/blob/58a6c4c00771e766f37f0f8e50adbbfe0bc7362d/x/params/proposal_handler.go#L26
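The "signalling text proposals only" option above can be illustrated with a small filter placed in front of a proposal router. This is a toy sketch with invented types and names, not the cosmos-sdk gov API:

```go
package main

import (
	"errors"
	"fmt"
)

// Proposal is a hypothetical, simplified stand-in for an SDK governance
// proposal. The real cosmos-sdk types differ; this only illustrates the
// idea of restricting on-chain governance to non-binding text proposals.
type Proposal struct {
	Type  string // e.g. "Text", "ParameterChange"
	Title string
}

var errNotSignalling = errors.New("only signalling text proposals are allowed")

// filterProposal rejects every proposal type except signalling text.
func filterProposal(p Proposal) error {
	if p.Type != "Text" {
		return errNotSignalling
	}
	return nil
}

func main() {
	fmt.Println(filterProposal(Proposal{Type: "Text", Title: "Raise the block size?"}))     // <nil>
	fmt.Println(filterProposal(Proposal{Type: "ParameterChange", Title: "MaxBlockBytes"})) // only signalling text proposals are allowed
}
```

Under this model, coin-voting can still signal preferences (e.g. on a block size increase), but binding changes go through coordinated upgrades rather than an on-chain param-change handler.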
The decision for now:
we will add back the full governance module for the next testnet
if we want to limit anything we should figure this out between testnet and mainnet
IMHO, we should simply use the full governance module at launch. If we ever want to move away from signalling/coin-voting, there either needs to be a governance proposal to do that, or a coordinated hard-fork. cc @musalbas @adlerjohn
Sounds good
| gharchive/issue | 2022-01-03T11:29:28 | 2025-04-01T06:38:09.739211 | {
"authors": [
"adlerjohn",
"liamsi"
],
"repo": "celestiaorg/celestia-app",
"url": "https://github.com/celestiaorg/celestia-app/issues/168",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1183961588 | Orchestrator and relayer client
Description
This PR is the first of three PRs to add the MVP orchestrator and relayer. It contains the various rpc clients used by both the relayer and orchestrator to communicate with celestia-app, celestia-core, and ethereum.
part 1/3 of the orchestrator/relayer MVP
Codecov Report
:exclamation: No coverage uploaded for pull request base (qgb-integration@6d78b9b).
The diff coverage is n/a.
@@ Coverage Diff @@
## qgb-integration #255 +/- ##
==================================================
Coverage ? 14.77%
==================================================
Files ? 42
Lines ? 8576
Branches ? 0
==================================================
Hits ? 1267
Misses ? 7223
Partials ? 86
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6d78b9b...8a63b4e. Read the comment docs.
there's still a lot of unused code that will get used later, so the linter is failing
| gharchive/pull-request | 2022-03-28T20:35:54 | 2025-04-01T06:38:09.745155 | {
"authors": [
"codecov-commenter",
"evan-forbes"
],
"repo": "celestiaorg/celestia-app",
"url": "https://github.com/celestiaorg/celestia-app/pull/255",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1931386287 | fix: specs for MaxDepositPeriod and VotingPeriod
Closes https://github.com/celestiaorg/celestia-app/issues/2624
Codecov Report
Merging #2626 (caa5c07) into main (44be82a) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #2626 +/- ##
=======================================
Coverage 20.63% 20.63%
=======================================
Files 133 133
Lines 15346 15346
=======================================
Hits 3166 3166
Misses 11877 11877
Partials 303 303
| gharchive/pull-request | 2023-10-07T15:10:21 | 2025-04-01T06:38:09.748290 | {
"authors": [
"codecov-commenter",
"rootulp"
],
"repo": "celestiaorg/celestia-app",
"url": "https://github.com/celestiaorg/celestia-app/pull/2626",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1161795449 | Cut a v0.1.0 Release
Since we have reached a state where we can send transactions, deploy contracts, and call contracts using Optimint as a backend, we should create a v0.1.0 release.
Depends on
[x] https://github.com/celestiaorg/ethermint/issues/3
[x] https://github.com/celestiaorg/optimint/issues/310
[x] https://github.com/celestiaorg/optimint/issues/323
[x] https://github.com/celestiaorg/evmos/pull/32
All prerequisite tasks are done. A release can be cut.
| gharchive/issue | 2022-03-07T18:51:24 | 2025-04-01T06:38:09.751421 | {
"authors": [
"jbowen93"
],
"repo": "celestiaorg/evmos",
"url": "https://github.com/celestiaorg/evmos/issues/27",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1685238805 | WIP: Feat/bootstrap from previously seen peers
Overview
This PR contains the implementation of ADR-14
Checklist
[ ] New and updated code has appropriate documentation
[ ] New and updated code has new and/or updated testing
[ ] Required CI checks are passing
[ ] Visual proof for any user facing features like CLI or documentation updates
[ ] Linked issues closed with keywords
Codecov Report
Merging #35 (cc4d2b0) into main (4a93da2) will decrease coverage by 0.04%.
The diff coverage is 76.00%.
@@ Coverage Diff @@
## main #35 +/- ##
==========================================
- Coverage 66.22% 66.18% -0.04%
==========================================
Files 35 36 +1
Lines 2768 2827 +59
==========================================
+ Hits 1833 1871 +38
- Misses 785 801 +16
- Partials 150 155 +5
Impacted Files (Coverage Δ):
p2p/options.go: 44.44% <50.00%> (+0.78%) :arrow_up:
p2p/exchange.go: 78.86% <69.69%> (-2.39%) :arrow_down:
p2p/peer_tracker.go: 77.39% <86.36%> (+1.20%) :arrow_up:
p2p/peerstore/peerstore.go: 100.00% <100.00%> (ø)
sync/sync_head.go: 62.03% <100.00%> (ø)
... and 1 file with indirect coverage changes
Closed in favor of:
https://github.com/celestiaorg/go-header/pull/36
Incoming
| gharchive/pull-request | 2023-04-26T15:19:07 | 2025-04-01T06:38:09.762240 | {
"authors": [
"codecov-commenter",
"derrandz"
],
"repo": "celestiaorg/go-header",
"url": "https://github.com/celestiaorg/go-header/pull/35",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1681751418 | panic: op-batcher flag redefined: namespace-id
When running the latest commit (c0bd1b58912d183d0d708d56bd3d5ca271cb132e) of the celestia branch of this repo, I encounter this error:
op-batcher flag redefined: namespace-id
panic: op-batcher flag redefined: namespace-id
goroutine 1 [running]:
flag.(*FlagSet).Var(0xc0001aa300, {0x13077a0, 0xc000198db0}, {0xff7cd1, 0xc}, {0x100a033, 0x1c})
/usr/local/go/src/flag/flag.go:980 +0x2f9
flag.(*FlagSet).StringVar(...)
/usr/local/go/src/flag/flag.go:845
flag.(*FlagSet).String(0x10000005828b3?, {0xff7cd1, 0xc}, {0xc00003e1f8, 0x10}, {0x100a033, 0x1c})
/usr/local/go/src/flag/flag.go:858 +0xac
github.com/urfave/cli.StringFlag.ApplyWithError.func1({0xff7cd1?, 0xc?})
/go/pkg/mod/github.com/urfave/cli@v1.22.9/flag_string.go:67 +0x94
github.com/urfave/cli.eachName({0xff7cd1?, 0x0?}, 0xc0000ad750)
/go/pkg/mod/github.com/urfave/cli@v1.22.9/flag.go:130 +0x93
github.com/urfave/cli.StringFlag.ApplyWithError({{0xff7cd1, 0xc}, {0x100a033, 0x1c}, {0xc000473050, 0x17}, {0x0, 0x0}, 0x0, 0x0, ...}, ...)
/go/pkg/mod/github.com/urfave/cli@v1.22.9/flag_string.go:62 +0xe9
github.com/urfave/cli.flagSet({0xff5a50, 0xa}, {0xc000232b00, 0x2d, 0x7efef5dd5c80?})
/go/pkg/mod/github.com/urfave/cli@v1.22.9/flag.go:115 +0x170
github.com/urfave/cli.(*App).newFlagSet(...)
/go/pkg/mod/github.com/urfave/cli@v1.22.9/app.go:185
github.com/urfave/cli.(*App).Run(0xc00054d340, {0xc0001981e0, 0x1, 0x1})
/go/pkg/mod/github.com/urfave/cli@v1.22.9/app.go:205 +0xea
main.main()
/app/op-batcher/cmd/main.go:39 +0x334
Resolved in https://github.com/celestiaorg/optimism/commit/34352c423a6c95b3e04115e91c5c690259283a73
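The panic in the trace comes from Go's standard flag package, which urfave/cli builds its flag sets on: registering the same flag name twice on one FlagSet always panics. A minimal, self-contained sketch of the failure mode — the flag set name and usage strings are illustrative, not the actual op-batcher flag definitions:

```go
package main

import (
	"flag"
	"fmt"
	"io"
)

// registerTwice reproduces the crash mechanism: defining "namespace-id"
// twice on the same FlagSet makes the flag package panic with
// "op-batcher flag redefined: namespace-id", regardless of the set's
// error-handling mode. The recover captures the panic message.
func registerTwice() (panicMsg string) {
	defer func() {
		if r := recover(); r != nil {
			panicMsg = fmt.Sprint(r)
		}
	}()
	fs := flag.NewFlagSet("op-batcher", flag.ContinueOnError)
	fs.SetOutput(io.Discard) // silence the duplicate-definition message
	fs.String("namespace-id", "", "Celestia namespace ID")
	fs.String("namespace-id", "", "same name registered again") // panics here
	return ""
}

func main() {
	fmt.Println(registerTwice())
}
```

The general remedy, as in the linked fix, is to remove one of the duplicate flag registrations so each name reaches the flag set only once.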
| gharchive/issue | 2023-04-24T17:46:36 | 2025-04-01T06:38:09.764111 | {
"authors": [
"jcstein"
],
"repo": "celestiaorg/optimism",
"url": "https://github.com/celestiaorg/optimism/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
352649795 | Setting up dev environment
I'm trying to get this app setup properly for a project I'm working on.
I've cloned the repo, run npm i (though it appears Lerna is the only dependency), then npm run build. I get a bunch of errors, most related to missing modules. Fixing one only brings another one up, so I figured the fastest way to fix that would be to ask here what the proper setup steps are to get Celluloid working.
What packages am I missing / what steps should I take?
Thanks!
Why was this closed @3rwww1 ? I'd appreciate help on what dependencies are required for the project to run in development
Hi Eric,
Erwan, our IT developer, is about to answer you :-) We've talked about that
just this morning.
All the best,
Michaël
Hi @EricHanLiu !
Sorry for closing the issue, I was doing a bit of "housekeeping" on this repository, and for some reason I didn't see this new issue, nor was I notified by mail when you opened it, nor did I check I wasn't the author when closing it.
I'm in the process of writing an extensive README for the project, but in the meantime, here is an excerpt
Prerequisites
using an OSX or Linux operating system is highly recommended. With a bit of tweaking, Windows will work too, though you'll have to do a bit more searching on how to install and configure the following tools.
download Yarn and use it instead of NPM. The project is organized as a monorepo so it needs yarn to leverage Yarn workspace
install a local postgresql server, version 9.6 or later, for your environment, optionally using a docker image. Then, create a user for celluloid and then create a database owned by this user. You can follow this tutorial to get setup quickly.
finally, you'll need an SMTP server to send emails for account confirmation. For development purpose, you could use your email account SMTP credentials, for instance gmail, or a dedicated service, such as mailtrap
Configuration
create a .env file at the root of your repository, with the following contents:
NODE_ENV=development
CELLULOID_LISTEN_PORT=3001
CELLULOID_PG_HOST=celluloid-db-postgres.cticqmujyhft.eu-west-1.rds.amazonaws.com
CELLULOID_PG_PORT=5432
CELLULOID_PG_DATABASE=celluloid
CELLULOID_PG_USER=celluloid
CELLULOID_PG_PASSWORD=8H#Cjvp!ZmY#!h6p
CELLULOID_PG_MAX_POOL_SIZE=20
CELLULOID_PG_IDLE_TIMEOUT=30000
CELLULOID_JWT_SECRET="bhN63!4A^CAnn@xe53s7d8uD1jCXbMXPtU6H*a0*YZ%Z1#2!C95hgJv7i#53NU1d"
CELLULOID_SMTP_USER=AKIAILSQA6FHHYZ24IMA
CELLULOID_SMTP_PASSWORD='AjCjLRwqAMudLq62oKksUin6kscuHnP5bE+ieE/ssf3b'
CELLULOID_SMTP_HOST=email-smtp.eu-west-1.amazonaws.com
COOL_INFRA_PATH=../cool/infra/celluloid
## Installation
- setup a local database with postgresql and restore
| gharchive/issue | 2018-08-21T18:04:04 | 2025-04-01T06:38:09.778850 | {
"authors": [
"3rwww1",
"EricHanLiu",
"michaelbourgatt"
],
"repo": "celluloid-edu/celluloid",
"url": "https://github.com/celluloid-edu/celluloid/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
96035022 | LoadError: cannot load such file -- win32ole on OSX - celluloid-essentials-0.20.1
@digitalextremist Thanks for the hard work!
However, I get a new error now.
You asked me which operating system and RVM I am using:
rvm 1.26.11
OSX Yosemite
Here is the error:
namespace should be set in your ruby initializer, is ignored in config file
config.redis = { :url => ..., :namespace => 'namespace_dev' }
D, [2015-07-20T12:17:14.819053 #84626] DEBUG -- : Celluloid 0.17.0 is running in BACKPORTED mode. [ http://git.io/vJf3J ]
2015-07-20T10:17:15.564Z 84626 TID-ovftx0h60 INFO: [Sidetiq] Sidetiq v0.6.3 - Copyright (c) 2012-2013, Tobias Svensson <tob@tobiassvensson.co.uk>
2015-07-20T10:17:15.564Z 84626 TID-ovftx0h60 INFO: [Sidetiq] Sidetiq is covered by the 3-clause BSD license.
2015-07-20T10:17:15.564Z 84626 TID-ovftx0h60 INFO: [Sidetiq] See LICENSE and http://opensource.org/licenses/BSD-3-Clause for licensing details.
2015-07-20T10:17:15.564Z 84626 TID-ovftx0h60 INFO: [Sidetiq] Sidetiq::Supervisor start
2015-07-20T10:17:15.565Z 84626 TID-ovfu2yi9c INFO: [Sidetiq] Sidetiq::Actor::Clock id: 70170397578020 initialize
2015-07-20T10:17:15.565Z 84626 TID-ovfu2yi9c DEBUG: [Sidetiq] Sidetiq::Clock looping ...
2015-07-20T10:17:15.568Z 84626 TID-ovfu2vc6o ERROR: Actor crashed!
LoadError: cannot load such file -- win32ole
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:43:in `require'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:43:in `from_win32ole'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:12:in `count_cores'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:6:in `cores'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid.rb:96:in `cores'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-pool-0.20.0/lib/celluloid/supervision/container/pool.rb:22:in `initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `public_send'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/call/sync.rb:16:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/cell.rb:50:in `block in dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/cell.rb:76:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:363:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task.rb:57:in `block in initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task/fibered.rb:14:in `block in create'
2015-07-20T10:17:15.568Z 84626 TID-ovfu309eo ERROR: Actor crashed!
LoadError: cannot load such file -- win32ole
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:43:in `require'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:43:in `from_win32ole'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:12:in `count_cores'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:6:in `cores'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid.rb:96:in `cores'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-pool-0.20.0/lib/celluloid/supervision/container/pool.rb:22:in `initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `public_send'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/call/sync.rb:16:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/cell.rb:50:in `block in dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/cell.rb:76:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:363:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task.rb:57:in `block in initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task/fibered.rb:14:in `block in create'
(celluloid):0:in `remote procedure call'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/call/sync.rb:45:in `value'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/proxy/sync.rb:40:in `method_missing'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/proxy/cell.rb:20:in `_send_'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid.rb:204:in `new_link'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/container/instance.rb:34:in `start'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/container/instance.rb:29:in `initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/container.rb:73:in `new'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/container.rb:73:in `add'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/deprecate/supervise.rb:96:in `supervise'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-pool-0.20.0/lib/celluloid/supervision/container/behavior/pool.rb:29:in `pool'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `public_send'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/call/sync.rb:16:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/cell.rb:50:in `block in dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/cell.rb:76:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:363:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task.rb:57:in `block in initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task/fibered.rb:14:in `block in create'
2015-07-20T10:17:15.569Z 84626 TID-ovfu2vc6o ERROR: thread crashed
LoadError: cannot load such file -- win32ole
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:43:in `require'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:43:in `from_win32ole'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:12:in `count_cores'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:6:in `cores'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid.rb:96:in `cores'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-pool-0.20.0/lib/celluloid/supervision/container/pool.rb:22:in `initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `public_send'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/call/sync.rb:16:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/cell.rb:50:in `block in dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/cell.rb:76:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:363:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task.rb:57:in `block in initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task/fibered.rb:14:in `block in create'
(celluloid):0:in `remote procedure call'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/call/sync.rb:45:in `value'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/proxy/sync.rb:40:in `method_missing'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/proxy/cell.rb:20:in `_send_'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid.rb:204:in `new_link'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/container/instance.rb:34:in `start'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/container/instance.rb:29:in `initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/container.rb:73:in `new'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/container.rb:73:in `add'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/deprecate/supervise.rb:96:in `supervise'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-pool-0.20.0/lib/celluloid/supervision/container/behavior/pool.rb:29:in `pool'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `public_send'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/call/sync.rb:16:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/cell.rb:50:in `block in dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/cell.rb:76:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:363:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task.rb:57:in `block in initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task/fibered.rb:14:in `block in create'
2015-07-20T10:17:15.570Z 84626 TID-ovfu2yi9c INFO: [Sidetiq] Sidetiq::Actor::Clock id: 70170397578020 shutting down ...
2015-07-20T10:17:15.571Z 84626 TID-ovfu309eo WARN: Terminating task: type=:call, meta={:dangerous_suspend=>true, :method_name=>:initialize}, status=:callwait
Celluloid::Task::Fibered backtrace unavailable. Please try `Celluloid.task_class = Celluloid::Task::Threaded` if you need backtraces here.
2015-07-20T10:17:15.571Z 84626 TID-ovfu309eo ERROR: thread crashed
LoadError: cannot load such file -- win32ole
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:43:in `require'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:43:in `from_win32ole'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celltask was terminateduloid/internals/cpu_counter.rb:12:in `count_cores'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/cpu_counter.rb:6:in `cores'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid.rb:96:in `cores'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-pool-0.20.0/lib/celluloid/supervision/container/pool.rb:22:in `initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task/fibered.rb:35:in `terminate'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:347:in `block in cleanup'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:345:in `each'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:345:in `cleanup'
/Users/mickael/.rv.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `public_send'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/call/sync.rb:16:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/cell.rb:50:in `block in dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/cellum/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:331:in `shutdown'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:323:in `handle_crash'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:171:in `rescue in run'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:150:in `run'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/loid-0.17.0/lib/celluloid/cell.rb:76:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:363:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task.rb:57:in `block in initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task/fibered.rb:14:in `block in create'
(celluloid):0:in `remote procedure call'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/call/sync.rb:45:in `value'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/proxy/sync.rb:40:in `method_missing'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/proxy/cell.rb:20:in `_send_'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid.rb:204:in `new_link'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/container/instance.rb:34:in `start'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/container/instance.rb:29:in `initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/container.rb:73:in `new'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/container.rb:73:in `add'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:132:in `block in start'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-essentials-0.20.1/lib/celluloid/internals/thread_handle.rb:14:in `block in initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor_system.rb:76:in `block in get_thread'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/group/spawner.rb:54:in `call'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/deprecate/supervise.rb:96:in `supervise'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-pool-0.20.0/lib/celluloid/supervision/container/behavior/pool.rb:29:in `pool'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `public_send'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/calls.rb:28:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/call/sync.rb:16:in `dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/cell.rb:50:in `block in dispatch'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/cell.rb:76:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/actor.rb:363:in `block in task'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task.rb:57:in `block in initialize'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/task/fibered.rb:14:in `block in create'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/group/spawner.rb:54:in `block in instantiate'
(celluloid):0:in `remote procedure call'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/call/sync.rb:45:in `value'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/proxy/sync.rb:40:in `method_missing'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid/proxy/cell.rb:20:in `_send_'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-0.17.0/lib/celluloid.rb:193:in `new'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/celluloid-supervision-0.20.0/lib/celluloid/supervision/deprecate/supervise.rb:61:in `run!'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/sidetiq-0.6.3/lib/sidetiq/supervisor.rb:33:in `run!'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/sidetiq-0.6.3/lib/sidetiq.rb:65:in `<top (required)>'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/bundler-1.7.12/lib/bundler/runtime.rb:76:in `require'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/bundler-1.7.12/lib/bundler/runtime.rb:76:in `block (2 levels) in require'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/bundler-1.7.12/lib/bundler/runtime.rb:72:in `each'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/bundler-1.7.12/lib/bundler/runtime.rb:72:in `block in require'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/bundler-1.7.12/lib/bundler/runtime.rb:61:in `each'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/bundler-1.7.12/lib/bundler/runtime.rb:61:in `require'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/bundler-1.7.12/lib/bundler.rb:134:in `require'
/Users/mickael/Documents/folder/projectname/config/application.rb:7:in `<top (required)>'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/sidekiq-3.3.4/lib/sidekiq/cli.rb:236:in `require'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/sidekiq-3.3.4/lib/sidekiq/cli.rb:236:in `boot_system'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/sidekiq-3.3.4/lib/sidekiq/cli.rb:50:in `run'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/gems/sidekiq-3.3.4/bin/sidekiq:8:in `<top (required)>'
/Users/mickael/.rvm/gems/ruby-2.1.5/bin/sidekiq:23:in `load'
/Users/mickael/.rvm/gems/ruby-2.1.5/bin/sidekiq:23:in `<main>'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/bin/ruby_executable_hooks:15:in `eval'
/Users/mickael/.rvm/rubies/ruby-2.1.5/lib/ruby/gems/2.1.0/bin/ruby_executable_hooks:15:in `<main>'
2015-07-20T10:17:15.573Z 84626 TID-ovftx0h60 DEBUG: Terminating 5 actors...
referencing https://github.com/celluloid/celluloid/issues/650 for follow up.
@Micka33 we can keep this in the celluloid-essentials gem. I've posted a patch there.
Ignore previous commit message.
| gharchive/issue | 2015-07-20T10:26:30 | 2025-04-01T06:38:09.787067 | {
"authors": [
"Micka33",
"digitalextremist"
],
"repo": "celluloid/celluloid",
"url": "https://github.com/celluloid/celluloid/issues/651",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
810620209 | [cli] Prevent releasegold:withdraw if a ReleaseGold contract has a cEUR balance
Expected Behavior
We currently check to see if a RG contract would self-destruct and take a cUSD balance with it. We should add a similar check for cEUR.
Current Behavior
Someone can send cEUR to a RG contract, then withdraw all remaining CELO, and lose the cEUR.
@gastonponti is this still being worked on or complete?
If it's not started, I don't think we need to do this now. There's an open question around there being no way to get cEUR or any other token back from the RG contract at the moment, which means users would be blocked from withdrawing their CELO with this change.
Yes, I've asked that in the cap channel
I haven't created the PR yet because I'm waiting for a clearer answer.
It's just a few changes; I could add a branch to this issue with something like "to be merged when this is fixed"
| gharchive/issue | 2021-02-17T23:37:29 | 2025-04-01T06:38:09.793800 | {
"authors": [
"aslawson",
"gastonponti",
"timmoreton"
],
"repo": "celo-org/celo-monorepo",
"url": "https://github.com/celo-org/celo-monorepo/issues/7158",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
489032183 | wrong metric for ServerResponseCountByStatusCode view
https://github.com/census-instrumentation/opencensus-go/blob/6ddd4bcc9c808594ec82377ce4323c3f7913be6d/plugin/ochttp/stats.go#L263-L269
The ServerLatency measure seems wrong here, since there's no ServerResponseCount metric defined or used. Given that, I guess the view was meant to be ServerRequestCountByStatusCode.
Since server latency by status code doesn't feel like a very useful view, I was wondering whether ServerResponseCountByStatusCode should be replaced by ServerRequestCountByStatusCode:
ServerRequestCountByStatusCode = &view.View{
	Name:        "opencensus.io/http/server/request_count_by_status_code",
	Description: "Server request count by status code",
	TagKeys:     []tag.Key{StatusCode},
	Measure:     ServerRequestCount,
	Aggregation: view.Count(),
}
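To make the intent concrete, here is a dependency-free sketch of what a Count aggregation keyed by a StatusCode tag produces. This is not the opencensus API — just an illustration of why counting requests per status code is the useful roll-up, as opposed to aggregating a latency measure under a count view:

```go
package main

import "fmt"

// statusCodeCounter is a hand-rolled stand-in for a view.Count()
// aggregation broken down by a StatusCode tag: each recorded request
// increments the bucket for its status code.
type statusCodeCounter struct {
	counts map[string]int
}

func newStatusCodeCounter() *statusCodeCounter {
	return &statusCodeCounter{counts: make(map[string]int)}
}

// Record mirrors recording the ServerRequestCount measure with a
// StatusCode tag attached to the context.
func (c *statusCodeCounter) Record(statusCode string) {
	c.counts[statusCode]++
}

func main() {
	c := newStatusCodeCounter()
	for _, code := range []string{"200", "200", "500", "404", "200"} {
		c.Record(code)
	}
	// prints: 200=3 404=1 500=1
	fmt.Printf("200=%d 404=%d 500=%d\n", c.counts["200"], c.counts["404"], c.counts["500"])
}
```

In the real view, the Measure would be ServerRequestCount and the breakdown would come from tagging each request's context with its status code; the aggregation itself only ever increments a counter per tag value.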
Looks like the same issue as #995...
| gharchive/issue | 2019-09-04T09:00:57 | 2025-04-01T06:38:09.814653 | {
"authors": [
"bvwells",
"rjeczalik"
],
"repo": "census-instrumentation/opencensus-go",
"url": "https://github.com/census-instrumentation/opencensus-go/issues/1163",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
417542443 | Exporter/Metrics/OcAgent: Add integration test.
Metrics counterpart of https://github.com/census-instrumentation/opencensus-java/pull/1776.
Codecov Report
:exclamation: No coverage uploaded for pull request base (master@b552db4). Click here to learn what that means.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #1786 +/- ##
=========================================
Coverage ? 83.89%
Complexity ? 2020
=========================================
Files ? 291
Lines ? 9219
Branches ? 890
=========================================
Hits ? 7734
Misses ? 1171
Partials ? 314
Impacted Files                                          Coverage Δ        Complexity Δ
...porter/metrics/ocagent/OcAgentMetricsExporter.java   69.04% <ø> (ø)    4 <0> (?)
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b552db4...cd82544. Read the comment docs.
one nit. LGTM otherwise.
| gharchive/pull-request | 2019-03-05T23:02:26 | 2025-04-01T06:38:09.822129 | {
"authors": [
"codecov-io",
"rghetia",
"songy23"
],
"repo": "census-instrumentation/opencensus-java",
"url": "https://github.com/census-instrumentation/opencensus-java/pull/1786",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |