id (string, 4 to 10 chars) | text (string, 4 to 2.14M chars) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
318026823 | 3.1.3 Enhance controlled lists and valid values
Enable storing of constraints such as datatypes, pick lists, or default values
Ok-- I'm not understanding this, especially the part about "The contractor shall add an option for default values to be an array of allowed literal value for a given property".
Is the "default literal" field supposed to be a dropdown with options derived from a vocabulary from the URI defined in the "Values > URI" field?
In other words: if the Values URI is set to "http://id.loc.gov/vocabulary/issuance", then the "default literal" field should be a select box with the options of:
integrating resource
multipart monograph
serial
single unit
???
Thanks,
--Charles
We might have over-summarized that one. That's one use case - so if you have a list, you can pick a default from the list.
Another is more complicated - an ad hoc list of values for a literal property.
If we enabled "note type", there is no list in ID to use, but there is a list in MARC:
http://id.loc.gov/ontologies/bibframe.html#p_noteType
This would be added as a property in the Note ResourceTemplate see
http://bibframe.org/bibliomata/profile-edit/#/profile/b488ee5c-511a-4ea6-8cfd-81eee398b13f
Then you would populate a list (I think a text box control with a list delimited by \n is easy; see the sketch after this list)
Note Type
Issuance information
Type of computer data
Related material
Biographical data
Administrative history
Issuing body
Index
Finding aid
Binding
Related material
Action
Exhibition
Description source
Physical details
Accompanying material
Numbering
Data source
Data not found
Musical presentation:
Computer file characteristics:
Coverage
Location
Relief
Form of original Item
Metadata entry convention
Technique
Completeness
Film inspection date
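To illustrate the delimiting idea only, here is a minimal browser-side sketch; the element id and surrounding markup are assumptions, not part of the profile editor:
// Assumed markup: a <textarea id="note-type-values"> holding one allowed value per line.
const textarea = document.getElementById("note-type-values");
// Split on newlines, trim whitespace, and drop empty lines.
const options = textarea.value
  .split("\n")
  .map((value) => value.trim())
  .filter((value) => value.length > 0);
// The resulting array could then back the select box for the "default literal" field.
console.log(options);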
The valid values part is if you have a literal value but it is constrained by a datatype like a date or timestamp, which flows into 3.2.6
So if you look at Monograph->Instance->Projected publication date (YYMM) http://bibframe.org/bibliomata/profile-edit/#/profile/4af68062-8c00-41cb-b311-fb6c450054f6
We would add a value constraint so we get a 4-digit number (or YYMM) and then if you type something else in bfe, 3.2.6 flags it as not valid.
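As an illustration only, a tiny sketch of the kind of check 3.2.6 could apply to that field; the function name and the exact pattern are assumptions, not the editor's actual implementation:
// Assumed rule: accept exactly four digits (a YYMM value such as "1905").
function isValidYymm(value) {
  return /^\d{4}$/.test(value.trim());
}
console.log(isValidYymm("1905"));     // true
console.log(isValidYymm("May 2019")); // false - would be flagged as not valid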
There are more examples of this in the Identifier profile->
http://bibframe.org/bibliomata/profile-edit/#/profile/410f8076-d0db-4acb-9d1e-719007afa9f7
Barcodes, LCCNs and LC Shelfmark all could be validated to prevent garbage being typed in.
Resolved by #26
| gharchive/issue | 2018-04-26T13:20:40 | 2025-04-01T06:39:21.935188 | {
"authors": [
"cledvina",
"kirkhess"
],
"repo": "lcnetdev/profile-edit",
"url": "https://github.com/lcnetdev/profile-edit/issues/14",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
234867528 | add an optional callback parameter for server.close method
The net and tls server.close methods accept an optional callback parameter.
The ldapjs server.close method delegates to the net or tls server.close method but doesn't forward any callback parameter...
It would be cool if it did.
Seems to me the modification should be easy:
something like changing, in "lib/server.js",
Server.prototype.close = function () {
return this.server.close();
};
to
Server.prototype.close = function (callback) {
return this.server.close(callback);
};
⚠️ This issue has been locked due to age. If you have encountered a recent
problem that seems to be covered by this issue, please open a new issue.
Please include a minimal reproducible example
when opening a new issue.
| gharchive/issue | 2017-06-09T15:47:10 | 2025-04-01T06:39:21.946581 | {
"authors": [
"jsumners",
"stalb"
],
"repo": "ldapjs/node-ldapjs",
"url": "https://github.com/ldapjs/node-ldapjs/issues/438",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
437619860 | Array of changes is not accepted in client.Modify
Array of changes is not accepted in client.Modify
Documentation states "Note that you can pass in a single Change or an array of Change objects.", but the code asserts that the argument is a single Change instead of passing it on to the code further down that copes with an array of changes.
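For reference, a sketch of the documented usage that trips the assertion; the connection URL, DN, attributes, and values are placeholders, and the call shape follows the v1-era docs rather than today's API:
const ldap = require('ldapjs');
const client = ldap.createClient({ url: 'ldap://127.0.0.1:389' });
// Two changes passed as an array, exactly as the documentation describes.
const changes = [
  new ldap.Change({
    operation: 'replace',
    modification: { mail: ['user@example.com'] }
  }),
  new ldap.Change({
    operation: 'add',
    modification: { description: ['added alongside the mail change'] }
  })
];
// The assert on a single Change throws here instead of handling the array.
client.modify('cn=user, o=example', changes, function (err) {
  if (err) console.error(err);
});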
👋
On February 22, 2023, we released version 3 of this library. As a result, we are closing this issue/pull request.
Please see issue #839 for more information, including how to proceed if you feel this closure is in error.
| gharchive/issue | 2019-04-26T11:02:21 | 2025-04-01T06:39:21.948719 | {
"authors": [
"jsumners",
"willmcenaney"
],
"repo": "ldapjs/node-ldapjs",
"url": "https://github.com/ldapjs/node-ldapjs/issues/514",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1608097284 | CDLP algorithm on some large graphs fails validation
Validation for CDLP fails for the datagen-sf3k-fb, graph500-27 and graph500-28 graphs (and potentially the larger graph500 data sets) using the code on the v1-dev branch.
datagen-sf3k-fb
Expected results:
$ head datagen-sf3k-fb-CDLP
6 563
10 1073928
41 48
48 8796117679059
50 8796107572362
59 65
65 40574
73 76
76 85
85 256504
Actual results:
$ head r573407-CDLP-datagen-sf3k-fb
6 555
10 1073925
41 41
48 8796117679052
50 8796107572359
59 59
65 40573
73 73
76 76
85 256496
Log:
Parsing file/directory /mnt/gx/datagen-sf3k-fb-CDLP.
Parsed 33484375 lines from datagen-sf3k-fb-CDLP.
Parsing file/directory /mnt/gx/ldbc_graphalytics_platforms_graphblas/graphalytics-1.5.0-graphblas-0.1-SNAPSHOT/./output/r573407-CDLP-dat
agen-sf3k-fb.
Parsed 33484375 lines from r573407-CDLP-datagen-sf3k-fb.
- Vertex 2199043067904 has value '2199039431900', but valid value is '2199039431903'
- Vertex 4398051328829 has value '4398048256842', but valid value is '4398048256859'
- Vertex 2199024370932 has value '3895582', but valid value is '3895583'
- Vertex 2199052528055 has value '583496', but valid value is '583497'
- Vertex 2199030788455 has value '2199023419513', but valid value is '2199023419518'
...
- [33484275 errors have been omitted]
Validation failed.
- Correct vertices: 0 (0.00%)
- Incorrect vertices: 33484375 (100.00%)
- Missing vertices: 0 (0.00%)
- Unknown vertices: 0 (0.00%)
Memory (free/total/max) = 1163.54M / 4464.00M / 92064.00M
...
graph500-27
Expected:
0 3678
2 3678
5 3678
6 3678
7 3678
8 3678
9 3678
12 3678
13 3678
17 3678
Actual:
0 3676
2 3676
5 3676
6 3676
7 3676
8 3676
9 3676
12 3676
13 3676
17 3676
graph500-28
Expected:
$ head /data/gx/graphs/graph500-28-CDLP
0 3678
5 3678
6 3678
7 3678
12 3678
13 3678
15 3678
17 3678
18 3678
19 3678
Actual:
$ head output/r502951-CDLP-graph500-28/r502951-CDLP-graph500-28
0 3676
5 3676
6 3676
7 3676
12 3676
13 3676
15 3676
17 3676
18 3676
19 3676
Thoughts
The CDLP algorithm is due for a rework anyway – but it's important to keep in mind that the current version fails validation.
Now, the GraphBLAS implementation used to pass 100% of the tests, so why did this error not occur before? This is because the framework performed the validation incorrectly, using the equivalence match approach instead of the exact match approach; see the release notes of framework v1.4.0.
This may imply that some of the largest data sets (e.g. graph500-30) may have an incorrect CDLP reference output – as these were generated using the GraphBLAS implementation.
It turns out this was an issue with the reference data sets. These have been fixed now.
| gharchive/issue | 2023-03-03T07:33:03 | 2025-04-01T06:39:21.954662 | {
"authors": [
"szarnyasg"
],
"repo": "ldbc/ldbc_graphalytics_platforms_graphblas",
"url": "https://github.com/ldbc/ldbc_graphalytics_platforms_graphblas/issues/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1567695799 | hadoop generate data, how to set date format
I use Hadoop to generate a data set.
But the generated data's date format is not what I need.
I want yyyy-MM-dd hh:mm:ss.
How do I set the date format of the generated data?
@suzhaojun you need millisecond precision for valid benchmarks. If you only need the timestamps up to second precision:
ldbc.snb.datagen.util.formatter.StringDateFormatter.dateTimeFormat:yyyy-MM-dd'T'HH:mm:ss
| gharchive/issue | 2023-02-02T09:50:47 | 2025-04-01T06:39:21.956625 | {
"authors": [
"suzhaojun",
"szarnyasg"
],
"repo": "ldbc/ldbc_snb_datagen_hadoop",
"url": "https://github.com/ldbc/ldbc_snb_datagen_hadoop/issues/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1453038925 | Error when launching python code.
After the command "ros2 launch ldlidar_stl_ros2 ld19.launch.py" I received the following error:
[INFO] [launch]: All log files can be found below /home/ditmar/.ros/log/2022-11-17-10-38-42-823400-ditmar-VirtualBox-3613
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [ldlidar_stl_ros2_node-1]: process started with pid [3614]
[INFO] [static_transform_publisher-2]: process started with pid [3616]
[static_transform_publisher-2] [WARN] [1668677924.187515024] []: Old-style arguments are deprecated; see --help for new-style arguments
[ldlidar_stl_ros2_node-1] [INFO] [1668677924.433554313] [LD19]: LDLiDAR SDK Pack Version is: v3.0.3
[ldlidar_stl_ros2_node-1] [INFO] [1668677924.434860673] [LD19]: <product_name>: LDLiDAR_LD19
[ldlidar_stl_ros2_node-1] [INFO] [1668677924.435963707] [LD19]: <topic_name>: scan
[ldlidar_stl_ros2_node-1] [INFO] [1668677924.436106027] [LD19]: <frame_id>: base_laser
[ldlidar_stl_ros2_node-1] [INFO] [1668677924.436180008] [LD19]: <port_name>: /dev/ttyUSB0
[ldlidar_stl_ros2_node-1] [INFO] [1668677924.436199364] [LD19]: <port_baudrate>: 230400
[ldlidar_stl_ros2_node-1] [INFO] [1668677924.436209824] [LD19]: <laser_scan_dir>: Counterclockwise
[ldlidar_stl_ros2_node-1] [INFO] [1668677924.436219813] [LD19]: <enable_angle_crop_func>: false
[ldlidar_stl_ros2_node-1] [INFO] [1668677924.436229702] [LD19]: <angle_crop_min>: 135.000000
[ldlidar_stl_ros2_node-1] [INFO] [1668677924.436245783] [LD19]: <angle_crop_max>: 225.000000
[ldlidar_stl_ros2_node-1] [ERROR] [1668677924.436360430] [LD19]: ldlidar node start is fail
[static_transform_publisher-2] [INFO] [1668677924.503143437] [base_link_to_base_laser_ld19]: Spinning until stopped - publishing transform
[static_transform_publisher-2] translation: ('0.000000', '0.000000', '0.180000')
[static_transform_publisher-2] rotation: ('0.000000', '0.000000', '0.000000', '1.000000')
[static_transform_publisher-2] from 'base_link' to 'base_laser'
[ldlidar_stl_ros2_node-1] [LDS][ERROR][Thu Nov 17 10:38:44 2022][/home/ditmar/ldlidar_ros2_ws/src/ldlidar_stl_ros2/ldlidar_driver/src/serialcom/serial_interface_linux.cpp][Open][41][Open open error,Permission denied]
[ldlidar_stl_ros2_node-1] [LDS][ERROR][Thu Nov 17 10:38:44 2022][/home/ditmar/ldlidar_ros2_ws/src/ldlidar_stl_ros2/ldlidar_driver/src/core/ldlidar_driver.cpp][Start][90][serial is not open:/dev/ttyUSB0]
[ERROR] [ldlidar_stl_ros2_node-1]: process has died [pid 3614, exit code 1, cmd '/home/ditmar/ldlidar_ros2_ws/install/ldlidar_stl_ros2/lib/ldlidar_stl_ros2/ldlidar_stl_ros2_node --ros-args -r __node:=LD19 --params-file /tmp/launch_params_02h4cjfi --params-file /tmp/launch_params_4p_xhzqx --params-file /tmp/launch_params_crh_pjiy --params-file /tmp/launch_params_d3w9fnwj --params-file /tmp/launch_params_ed2i03x1 --params-file /tmp/launch_params_xthq6i0l --params-file /tmp/launch_params_kcwvvbzd --params-file /tmp/launch_params_3_0jvih8 --params-file /tmp/launch_params_taoamxlv'].
Currently using ubuntu version: Ubuntu 22.04.1 LTS
I'm also using Oracle VirtualBox to run Ubuntu.
I hope someone can help me with this problem.
Set permissions for the serial port device mounted by the lidar in the system:
sudo chmod 777 /dev/ttyUSB*
Thanks for the quick response!
The sudo chmod 777 /dev/ttyUSB* seems to have fixed the permission problem.
But after the following text it stops printing.
ditmar@ditmar-VirtualBox:~/ldlidar_ros2_ws$ ros2 launch ldlidar_stl_ros2 ld19.launch.py
[INFO] [launch]: All log files can be found below /home/ditmar/.ros/log/2022-11-17-12-21-39-380464-ditmar-VirtualBox-5986
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [ldlidar_stl_ros2_node-1]: process started with pid [5987]
[INFO] [static_transform_publisher-2]: process started with pid [5989]
[static_transform_publisher-2] [WARN] [1668684099.539221234] []: Old-style arguments are deprecated; see --help for new-style arguments
[static_transform_publisher-2] [INFO] [1668684099.581556904] [base_link_to_base_laser_ld19]: Spinning until stopped - publishing transform
[static_transform_publisher-2] translation: ('0.000000', '0.000000', '0.180000')
[static_transform_publisher-2] rotation: ('0.000000', '0.000000', '0.000000', '1.000000')
[static_transform_publisher-2] from 'base_link' to 'base_laser'
[ldlidar_stl_ros2_node-1] [INFO] [1668684099.611595903] [LD19]: LDLiDAR SDK Pack Version is: v3.0.3
[ldlidar_stl_ros2_node-1] [INFO] [1668684099.612532324] [LD19]: <product_name>: LDLiDAR_LD19
[ldlidar_stl_ros2_node-1] [INFO] [1668684099.613199108] [LD19]: <topic_name>: scan
[ldlidar_stl_ros2_node-1] [INFO] [1668684099.613551183] [LD19]: <frame_id>: base_laser
[ldlidar_stl_ros2_node-1] [INFO] [1668684099.613906238] [LD19]: <port_name>: /dev/ttyUSB0
[ldlidar_stl_ros2_node-1] [INFO] [1668684099.614215534] [LD19]: <port_baudrate>: 230400
[ldlidar_stl_ros2_node-1] [INFO] [1668684099.614502757] [LD19]: <laser_scan_dir>: Counterclockwise
[ldlidar_stl_ros2_node-1] [INFO] [1668684099.614756829] [LD19]: <enable_angle_crop_func>: false
[ldlidar_stl_ros2_node-1] [INFO] [1668684099.615068479] [LD19]: <angle_crop_min>: 135.000000
[ldlidar_stl_ros2_node-1] [INFO] [1668684099.615259426] [LD19]: <angle_crop_max>: 225.000000
[ldlidar_stl_ros2_node-1] [INFO] [1668684100.083317173] [LD19]: ldlidar node start is success
[ldlidar_stl_ros2_node-1] [INFO] [1668684100.110435328] [LD19]: ldlidar communication is normal.
[ldlidar_stl_ros2_node-1] [INFO] [1668684100.116253038] [LD19]: Publish topic message:ldlidar scan data.
Also the /scan doesn't show up in the ros2 topic list.
ditmar@ditmar-VirtualBox:~$ ros2 topic list
/parameter_events
/rosout
It works now; after a second try the /scan topic did show up.
I am now able to visualize the data from the lidar in rviz2.
I want to use the lidar to detect obstacles on a model car.
Is there a way to print/obtain the "raw data" like Distance, Start angle and End angle?
So I can use the data in code.
| gharchive/issue | 2022-11-17T09:52:03 | 2025-04-01T06:39:21.984170 | {
"authors": [
"Kristian181",
"ldrobotsensor"
],
"repo": "ldrobotSensorTeam/ldlidar_stl_ros2",
"url": "https://github.com/ldrobotSensorTeam/ldlidar_stl_ros2/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2066757589 | 🛑 Plex is down
In 6ec8b00, Plex (thonk.xyz:32400) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Plex is back up in 0a11a37 after 6 minutes.
| gharchive/issue | 2024-01-05T05:35:42 | 2025-04-01T06:39:22.027394 | {
"authors": [
"le-server"
],
"repo": "le-server/thonk-upptime",
"url": "https://github.com/le-server/thonk-upptime/issues/1338",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
965732799 | More advanced linkrot checking
X failed pings over X period of time (a reasonable value should be determined)
without a successful ping
successful ping should reset all counters
if it exceeds threshold it should then become definitely rotted
we don't want it to go rotted because cloudflare went down, basically
Wayback Machine API docs: https://archive.org/help/wayback_api.php
This is a better idea than the original two-strikes.
Could always make the failure count and the holdover period configurable via envvars.
Would a "definitely rotted" link then be checked against the web archive api and updated to point to it instead? What if it cannot be found on the web archive (an increasingly rare happening but it still happens)?
That's a good edge case. I think if no archive is available, the link should remain the same but either the title or description should be tagged.
And that's definitely a case where that alert email or whatever should trigger.
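A minimal sketch of that lookup against the Wayback Machine availability API linked above; the fallback behaviour (returning null so the caller can tag the link) is just the idea discussed here, not existing code:
async function findArchivedCopy(url) {
  const res = await fetch('https://archive.org/wayback/available?url=' + encodeURIComponent(url));
  const data = await res.json();
  const snapshot = data.archived_snapshots && data.archived_snapshots.closest;
  // Return the archived URL if one exists, otherwise null so the caller can tag the title/description.
  return snapshot && snapshot.available ? snapshot.url : null;
}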
I set the variables to X because we should probably tweak them - and different rings may want different settings. I'm also really not sure what our defaults should be - maybe the time period should be a week?
A week sounds like a sane baseline. We don't want to trip over things like provider outages or routing hiccups, and we're also not doing uptime monitoring for the people who are part of the ring.
Should things in the "maybe" rotstate be tested more frequently though? Daily, maybe? And should we have an update endpoint/method to set something back to "not rotted" if the admin checks and find the thing is fine?
It could work with the existing update method; I'll make sure that the admin wrapper I'm writing makes that a simple command.
Manual-run-test as well as manual-set for admin is a good idea - even just for testing, we'll want to be able to force an up-test. Stuff in maybe should definitely be tested daily (or maybe once every couple of hours - might need configuration capability?)
Since the update command is a PATCH request, you can literally just pass {"rotted": "yes" | "no" | "maybe"} and it'll change the flag. Logic to reset any counters based on the input can be added as needed.
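A quick sketch of such a request; the host and route here are placeholders, since the real endpoint path isn't shown in this thread:
async function clearRotFlag(linkId) {
  // Assumed host and route - adjust to the ring's actual update endpoint.
  const response = await fetch('https://ring.example.com/update/' + linkId, {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ rotted: 'no' })
  });
  return response.status;
}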
Endpoints to check all links and a single link for rottenness already exist.
How would we want to record the check results? I'm assuming in the db, but I'm not thinking too clearly right now and I can't generate a schema for it.
Do we need the precise results of the check itself to be stored /anywhere/, or is it enough to just store the suspect state rotted?
I'm referring to the number of times a link has passed the ping check so the scheduled check can decide how to set the rotted flag.
I think I just figured it out anyway. It'll be a new table (rotted_links) that's just
uuid: uuid
count: int
A record is only added when the check first fails and updated on repeated checks. If it passes, the record is deleted.
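To illustrate the described behaviour only (not this project's actual code), a tiny in-memory sketch of that counter logic; the threshold value is an assumption:
const FAILURE_THRESHOLD = 7; // assumption: roughly a week of daily checks
const rottedLinks = new Map(); // uuid -> failure count
function recordCheck(uuid, pingSucceeded) {
  if (pingSucceeded) {
    rottedLinks.delete(uuid); // a successful ping resets everything
    return 'no';
  }
  const count = (rottedLinks.get(uuid) || 0) + 1;
  rottedLinks.set(uuid, count);
  return count >= FAILURE_THRESHOLD ? 'yes' : 'maybe';
}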
Implemented in #3. Feel free to take a look.
| gharchive/issue | 2021-08-11T02:23:29 | 2025-04-01T06:39:22.034242 | {
"authors": [
"ZAdamMac",
"le717",
"noghiri"
],
"repo": "le717/webring",
"url": "https://github.com/le717/webring/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1109783590 | Add Koajs
Adding a language
[x] The code displays "Hello World"
[x] I have updated the readme to include the new language
[x] I have incremented the language count in the readme
[x] I have no association with the language
Link to programming language: https://koajs.com
It's Koa.js not Koajs
I've fixed some conflicts
I've made a new pull request
| gharchive/pull-request | 2022-01-20T21:28:45 | 2025-04-01T06:39:22.036796 | {
"authors": [
"ThePeeps191",
"calgary34"
],
"repo": "leachim6/hello-world",
"url": "https://github.com/leachim6/hello-world/pull/1250",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
506723092 | various improvements
Description
Thank you for the work on this role. I've started to use it and I've found some improvements that I'm sure everyone will appreciate.
I didn't know how to run the automated tests. I tried to run molecule, but it wasn't finding the ANSIBLE_LIBRARY even though Ansible is installed and working; please let me know how I can check it.
added tags support: setup, upd_conf, unsecure_logs
added possibility to show registration logs via unsecure_logs tag. See PR #11
moved global to global_values in gitlab_runner_config
added global_strings to gitlab_runner_config
changed the task Set advanced configuration to loop through values
changed extra_options to array and loop through it
updated README
I'm open to discussion about how to improve what I've implemented. For now it's working on my system but there is always a way to optimize.
Cheers
Type of change
[x] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
Reviews
@vutkin
@tgadiev
@kharkevich
Checklist:
[x] I have performed a self-review of my own code
[x] I have made corresponding changes to the documentation
[x] My changes generate no new warnings
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] New and existing tests pass with my changes
Merged. Thanks @mprenditore !
Awesome, welcome!
If I add new features in the future I'll make other MRs.
Cheers!
| gharchive/pull-request | 2019-10-14T15:24:35 | 2025-04-01T06:39:22.046762 | {
"authors": [
"mprenditore",
"tgadiev"
],
"repo": "lean-delivery/ansible-role-gitlab-runner",
"url": "https://github.com/lean-delivery/ansible-role-gitlab-runner/pull/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1438624860 | Unable to start test after latest update
Hi @bartekpacia,
I updated to the latest version today. I'm also experimenting with running API calls from my test using the dio package.
When I try to run my test I'm getting the following error:
patrol drive --flavor development --verbose --target integration_test/create_intervention_test.dart
Verbose mode enabled. More logs will be printed.
No device specified, using the first one (emulator-5554)
✓ Forwarded ports (83ms)
✓ Installed server (0.5s)
✓ Installed instrumentation (0.6s)
Started native Android instrumentation
> Building apk for create_intervention_test.dart...
ProcessException: The system cannot find the file specified.
Command: flutter --no-version-check build apk --debug --target C:\Users\lmlikota\haven-holding-mobile\integration_test/create_intervention_test.dart --flavor development --dart-define PATROL_HOST=localhost --dart-define PATROL_PORT=8081 --dart-define PATROL_WAIT=0
See the logs above to learn what happened. If the logs above aren't useful then it's a bug – please report it.
> Running create_intervention_test.dart on emulator-5554...
asset W 11-07 17:07:01 31508 30276] Asset path build/app/outputs/flutter-apk\app-development-debug.apk is neither a directory nor file (type=1).
ERROR: dump failed because assets could not be loaded
Failed to extract manifest from APK: ProcessException: The command failed
Command: flutter --no-version-check build apk --debug --target C:\Users\lmlikota\haven-holding-mobile\integration_test/app_test.dart --flavor development --dart-define PATROL_HOST=localhost --dart-define PATROL_PORT=8081 --dart-define PATROL_WAIT=0
See the logs above to learn what happened. If the logs above aren't useful then it's a bug – please report it.
> Running app_test.dart on emulator-5554...
asset W 11-07 17:07:36 8624 22908] Asset path build/app/outputs/flutter-apk\app-development-debug.apk is neither a directory nor file (type=1).
ERROR: dump failed because assets could not be loaded
Failed to extract manifest from APK: ProcessException: The command failed
Command: C:\Users\lmlikota\AppData\Local\Android\sdk\build-tools\32.1.0-rc1\aapt dump xmltree build/app/outputs/flutter-apk\app-development-debug.apk AndroidManifest.xml.
Problem building Android application: see above error(s).
pl.leancode.automatorserver.ServerLoop:
✗ app_test.dart failed
flutter_driver exited with code 1
See the logs above to learn what happened. If the logs above aren't useful then it's a bug – please report it.
Killed native Android instrumentation
Uninstalled instrumentation package pl.leancode.automatorserver.test
Uninstalled server package pl.leancode.automatorserver
Stopped port forwarding
Do you have any idea what is going on? I can normally build my app using flutter build apk --flavor development -t lib/core/environments/main_qa.dart
Hi @lmlikota, sorry for the bug, and thanks for reporting it. I'm pretty sure I introduced it in #552.
Looking into it.
Should be fixed in patrol_cli v0.7.6+1
I had a similar issue after the update. I am currently using v0.7.6+1 and this is the error that is being returned:
Building apk for app_test.dart...
ProcessException: The system cannot find the file specified.
the\location.\integration_test\app_test.dart --dart-define PATROL_HOST=localhost --dart-define PATROL_PORT=8081 --dart-define PATROL_WAIT=0
See the logs above to learn what happened. If the logs above aren't useful then it's a bug – please report it.
Running app_test.dart on emulator-5554...
VMServiceFlutterDriver: Connecting to Flutter application at http://127.0.0.1:62725/pEMEPNmS7aI=/
VMServiceFlutterDriver: Isolate found with number: 2722090989120719
VMServiceFlutterDriver: Isolate is paused at start.
VMServiceFlutterDriver: Attempting to resume isolate
VMServiceFlutterDriver: Flutter Driver extension is taking a long time to become available. Ensure your test app (often "lib/main.dart") imports "package:flutter_driver/driver_extension.dart" and calls enableFlutterDriverExtension() as the first call in main().
✗ app_test.dart failed
flutter_driver exited with code -1
See the logs above to learn what happened. If the logs above aren't useful then
@bartekpacia Unfortunately I still can't run the test. I have a different error now, as mentioned by @iEnergyy
patrol drive --flavor development --verbose --target integration_test\create_intervention_test.dart
Verbose mode enabled. More logs will be printed.
No device specified, using the first one (emulator-5554)
✓ Forwarded ports (0.1s)
✓ Installed server (0.5s)
✓ Installed instrumentation (1.0s)
Started native Android instrumentation
> Building apk for create_intervention_test.dart...
ProcessException: The system cannot find the file specified.
Command: flutter --no-version-check build apk --debug --target C:\Users\lmlikota\haven-holding-mobile\integration_test\create_intervention_test.dart --flavor development --dart-define PATROL_HOST=localhost --dart-define PATROL_PORT=8081 --dart-define PATROL_WAIT=0
See the logs above to learn what happened. If the logs above aren't useful then it's a bug – please report it.
> Running create_intervention_test.dart on emulator-5554...
asset W 11-08 15:47:37 15912 31840] Asset path build\app\outputs\flutter-apk\app-development-debug.apk is neither a directory nor file (type=1).
ERROR: dump failed because assets could not be loaded
Failed to extract manifest from APK: ProcessException: The command failed
Command: C:\Users\lmlikota\AppData\Local\Android\sdk\build-tools\32.1.0-rc1\aapt dump xmltree build\app\outputs\flutter-apk\app-development-debug.apk AndroidManifest.xml.
Problem building Android application: see above error(s).
✗ create_intervention_test.dart failed
flutter_driver exited with code 1
See the logs above to learn what happened. If the logs above aren't useful then it's a bug – please report it.
Killed native Android instrumentation
Uninstalled instrumentation package pl.leancode.automatorserver.test
Uninstalled server package pl.leancode.automatorserver
Stopped port forwarding
@lmlikota Does the file exist? build\app\outputs\flutter-apk\app-development-debug.apk?
I checked, and the file does exist, but in my case the path is: C:\Users\lmlikota\haven-bridge-mobile\build\app\outputs\flutter-apk\app-development-debug.apk Looks like it's missing the path to the project root?
Looks like Windows can't handle the relative path in this case.
Paths, and relative ones, work very similar to what you have in OS X/macOS.
Windows uses "\", not "/".
Basically ".." is one level higher
"." is a sub-folder of the current working directory
This is from https://superuser.com/questions/1270591/how-to-use-relative-paths-on-windows-cmd
Hi @bartekpacia,
we have some progress regarding this issue. It actually looks like we have fixed the problem with the relative path on Windows with this change in flutter_tool.dart on line 238.
final prefix = absolute(join('build', 'app', 'outputs', 'flutter-apk'));
But unfortunately we have stumbled upon another issue with PatrolBinding while running our test
patrol drive --flavor development --verbose --target integration_test\create_intervention_test.dart
Verbose mode enabled. More logs will be printed.
No device specified, using the first one (emulator-5554)
✓ Forwarded ports (0.1s)
✓ Installed server (0.6s)
✓ Installed instrumentation (0.8s)
Started native Android instrumentation
> Building apk for create_intervention_test.dart...
pl.leancode.automatorserver.ServerLoop:
Building with sound null safety
Running Gradle task 'assembleDevelopmentDebug'...
14,8s
√ Built build\app\outputs\flutter-apk\app-development-debug.apk.
✓ Building apk for create_intervention_test.dart succeeded!
> Running create_intervention_test.dart on emulator-5554...
Installing build\app\outputs\flutter-apk\app-development-debug.apk...
1.706ms
W/ven_holding.de( 8093): Accessing hidden method Landroid/os/WorkSource;->add(I)Z (unsupported,test-api, reflection, allowed)
W/ven_holding.de( 8093): Accessing hidden method Landroid/os/WorkSource;->add(ILjava/lang/String;)Z (unsupported,test-api, reflection, allowed)
W/ven_holding.de( 8093): Accessing hidden method Landroid/os/WorkSource;->get(I)I (unsupported, reflection, allowed)
W/ven_holding.de( 8093): Accessing hidden method Landroid/os/WorkSource;->getName(I)Ljava/lang/String; (unsupported, reflection, allowed)
VMServiceFlutterDriver: Connecting to Flutter application at http://127.0.0.1:52363/Z5J6G0RC8vo=/
VMServiceFlutterDriver: Isolate found with number: 851346242011535
VMServiceFlutterDriver: Isolate is paused at start.
VMServiceFlutterDriver: Attempting to resume isolate
Patrol: creating NativeAutomator
host: localhost
port: 8081
packageName: hr.biss.haven_holding.dev
bundleId: hr.biss.havenHolding.dev
Patrol: Initializing PatrolBinding...
E/flutter ( 8093): [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: 'package:flutter/src/foundation/binding.dart': Failed assertion: line 146 pos 12: '_debugInitializedType == null': is not true.
E/flutter ( 8093): #0 _AssertionError._doThrowNew (dart:core-patch/errors_patch.dart:51:61)
E/flutter ( 8093): #1 _AssertionError._throwNew (dart:core-patch/errors_patch.dart:40:5)
E/flutter ( 8093): #2 new BindingBase (package:flutter/src/foundation/binding.dart:146:12)
E/flutter ( 8093): #3 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #4 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #5 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding&GestureBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #6 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding&GestureBinding&SemanticsBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #7 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding&GestureBinding&SemanticsBinding&RendererBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #8 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding&GestureBinding&SemanticsBinding&RendererBinding&PaintingBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #9 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding&GestureBinding&SemanticsBinding&RendererBinding&PaintingBinding&WidgetsBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #10 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding&GestureBinding&SemanticsBinding&RendererBinding&PaintingBinding&WidgetsBinding&TestDefaultBinaryMessengerBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #16 new NativeAutomator (package:patrol/src/native/native_automator.dart:96:23)
E/flutter ( 8093): #17 patrolTest (package:patrol/src/custom_finders/common.dart:40:9)
E/flutter ( 8093): #18 main (file:///C:/Users/lmlikota/haven-holding-mobile/integration_test/create_intervention_test.dart:11:3)
E/flutter ( 8093): #19 _runMain.<anonymous closure> (dart:ui/hooks.dart:134:23)
E/flutter ( 8093): #20 _delayEntrypointInvocation.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:297:19)
E/flutter ( 8093): #21 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:192:12)
E/flutter ( 8093):
00:00 +0: (tearDownAll)
VMServiceFlutterDriver: Connected to Flutter application.
00:00 +1: All tests passed!
All tests passed.
✓ create_intervention_test.dart passed!
Killed native Android instrumentation
Uninstalled instrumentation package pl.leancode.automatorserver.test
Uninstalled server package pl.leancode.automatorserver
Stopped port forwarding
I hope this helps a bit.
@lmlikota Thanks a lot for the fix, I created #586 to introduce your fix.
Regarding the crash from this comment:
The '_debugInitializedType == null': is not true occurs when you initialize bindings before Patrol initializes its own binding. Maybe this is a bug but first I'd like to ask you for the Dart test target file you're running (create_intervention_test.dart).
You are right, I have commented it out:
Future<void> main() async {
  // IntegrationTestWidgetsFlutterBinding.ensureInitialized(); => I'm not even sure I need this in the first place, looks like not really
  patrolTest('sign in', config: patrolConfig, nativeAutomation: true,
      ($) async {
    app.main();
    // ...rest of the test...
  });
}
I mean it's not a bug, it's probably just me; after commenting it out my test passed 😃
patrol drive --flavor development --verbose --target integration_test\create_intervention_test.dart
Verbose mode enabled. More logs will be printed.
No device specified, using the first one (emulator-5554)
✓ Forwarded ports (0.1s)
✓ Installed server (0.9s)
✓ Installed instrumentation (1.1s)
Started native Android instrumentation
> Building apk for create_intervention_test.dart...
Building with sound null safety
Running Gradle task 'assembleDevelopmentDebug'...
pl.leancode.automatorserver.ServerLoop:
20,8s
√ Built build\app\outputs\flutter-apk\app-development-debug.apk.
✓ Building apk for create_intervention_test.dart succeeded!
> Running create_intervention_test.dart on emulator-5554...
Installing build\app\outputs\flutter-apk\app-development-debug.apk...
2.510ms
VMServiceFlutterDriver: Connecting to Flutter application at http://127.0.0.1:60101/HcSS9PBgank=/
VMServiceFlutterDriver: Isolate found with number: 1069006792052387
VMServiceFlutterDriver: Isolate is paused at start.
VMServiceFlutterDriver: Attempting to resume isolate
Patrol: creating NativeAutomator
host: localhost
port: 8081
packageName: hr.biss.haven_holding.dev
bundleId: hr.biss.havenHolding.dev
Patrol: Initializing PatrolBinding...
00:00 +0: sign in
D/InputMethodManager( 9322): showSoftInput() view=io.flutter.embedding.android.FlutterView{2497743 VFE...... .F....ID 0,0-1080,2148 #1 aid=1073741824} flags=0 reason=SHOW_SOFT_INPUT
D/InputMethodManager( 9322): showSoftInput() view=io.flutter.embedding.android.FlutterView{2497743 VFE...... .F...... 0,0-1080,2148 #1 aid=1073741824} flags=0 reason=SHOW_SOFT_INPUT
D/EGL_emulation( 9322): app_time_stats: avg=239.46ms min=7.79ms max=409.23ms count=5
D/InputMethodManager( 9322): showSoftInput() view=io.flutter.embedding.android.FlutterView{2497743 VFE...... .F...... 0,0-1080,2148 #1 aid=1073741824} flags=0 reason=SHOW_SOFT_INPUT
D/InputMethodManager( 9322): showSoftInput() view=io.flutter.embedding.android.FlutterView{2497743 VFE...... .F...... 0,0-1080,2148 #1 aid=1073741824} flags=0 reason=SHOW_SOFT_INPUT
D/InputMethodManager( 9322): showSoftInput() view=io.flutter.embedding.android.FlutterView{2497743 VFE...... .F...... 0,0-1080,2148 #1 aid=1073741824} flags=0 reason=SHOW_SOFT_INPUT
D/InputMethodManager( 9322): showSoftInput() view=io.flutter.embedding.android.FlutterView{2497743 VFE...... .F...... 0,0-1080,2148 #1 aid=1073741824} flags=0 reason=SHOW_SOFT_INPUT
D/EGL_emulation( 9322): app_time_stats: avg=253.07ms min=183.14ms max=345.97ms count=5
D/InsetsController( 9322): show(ime(), fromIme=true)
D/EGL_emulation( 9322): app_time_stats: avg=2902.03ms min=2213.30ms max=3590.77ms count=2
D/EGL_emulation( 9322): app_time_stats: avg=155.24ms min=21.73ms max=246.49ms count=7
D/EGL_emulation( 9322): app_time_stats: avg=217.75ms min=23.16ms max=487.33ms count=5
00:07 +1: (tearDownAll)
00:07 +2: All tests passed!
All tests passed.
✓ create_intervention_test.dart passed!
Killed native Android instrumentation
Uninstalled instrumentation package pl.leancode.automatorserver.test
Uninstalled server package pl.leancode.automatorserver
Stopped port forwarding
| gharchive/issue | 2022-11-07T16:17:31 | 2025-04-01T06:39:22.067065 | {
"authors": [
"bartekpacia",
"iEnergyy",
"lmlikota"
],
"repo": "leancodepl/patrol",
"url": "https://github.com/leancodepl/patrol/issues/573",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2164453751 | Fix magit-section usage so the expand/contract functionality works
Use named sections so magit-section can track which sections are expanded.
Don't use sections for individual diagnostics, since they can't be tracked.
Capture data in lexicals for deferred rendering by magit-insert-section-body.
Use with-current-buffer (delete lean4-with-info-output-to-buffer).
Make the diagnostic line:col headers into text buttons.
force-push: rebase onto #51
rebase onto leanprover-community:master
Can someone merge this please?
I see no reason not to. It's just a minor bugfix. @sebeaumont, please ping Yuri (@urkud) on Zulip.
| gharchive/pull-request | 2024-03-02T01:16:08 | 2025-04-01T06:39:22.080545 | {
"authors": [
"bustercopley",
"sebeaumont"
],
"repo": "leanprover-community/lean4-mode",
"url": "https://github.com/leanprover-community/lean4-mode/pull/59",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2117353350 | chore(NormedSpace/Exponential): Fintype → Finite
bors merge
| gharchive/pull-request | 2024-02-04T22:51:29 | 2025-04-01T06:39:22.096215 | {
"authors": [
"semorrison",
"urkud"
],
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/10260",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2243155849 | chore: remove remaining cdots that were not ·
A simple replacement . --> ·.
See #12143 for the source of these replacements.
LGTM
maintainer merge
bors r+
| gharchive/pull-request | 2024-04-15T09:21:49 | 2025-04-01T06:39:22.098717 | {
"authors": [
"adomani",
"alexjbest",
"sgouezel"
],
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/12146",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1529427366 | feat: port Data.List.Rotate
I think this should wait for Data.Fin.Basic and then the nthLe lemmas can be restated with get where the statement is a bit nicer anyway.
bors r+
| gharchive/pull-request | 2023-01-11T17:33:58 | 2025-04-01T06:39:22.100639 | {
"authors": [
"ChrisHughes24",
"qawbecrdtey"
],
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/1490",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1698630909 | feat: port MeasureTheory.Lattice
[x] depends on: #3819
This PR/issue depends on:
leanprover-community/mathlib4#3819
By Dependent Issues (🤖). Happy coding!
Merging master to get a clean diff.
bors merge
| gharchive/pull-request | 2023-05-06T13:24:06 | 2025-04-01T06:39:22.103653 | {
"authors": [
"Komyyy",
"semorrison"
],
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/3824",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1833164534 | refactor(LinearAlgebra/QuadraticForm): rename Isometry to IsometryEquiv
This is consistent with LinearIsometryEquiv vs LinearIsometry. The motivation is to make room for QuadraticForm.Isometry as the homomorphism.
bors merge
| gharchive/pull-request | 2023-08-02T13:12:34 | 2025-04-01T06:39:22.105814 | {
"authors": [
"eric-wieser"
],
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/6305",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2054606093 | chore(Subfield): use rintro
Also explain why it can't be used in another field
bors merge
| gharchive/pull-request | 2023-12-23T01:08:05 | 2025-04-01T06:39:22.107450 | {
"authors": [
"robertylewis",
"urkud"
],
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/9230",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
372329953 | Refactor(data/real/cau_seq_filter): completeness iff Cauchy sequences converge
Currently, the equivalence of completeness (i.e., convergence of all Cauchy filters) and the convergence of Cauchy sequences is only proved in normed fields, for distances coming from a multiplicative absolute value. However, the proof already works in a general metric space. We refactor to formulate the general result in metric spaces, and then apply it in the specific case of normed fields with an absolute value.
TO CONTRIBUTORS:
Make sure you have:
[x] reviewed and applied the coding style: coding, naming
[x] make sure definitions and lemmas are put in the right files
[x] make sure definitions and lemmas are not redundant
For reviewers: code review check list
Is there anything wrong with this PR?
Maybe it needs rebasing. I have rebased it in #PR464, and moreover #PR464 illustrates how it is used, so maybe I should simply close this #PR435. If someone wants to review just this #PR435 on Cauchy sequences (which is only refactoring, no new material), let me know and I will rebase it. Otherwise, if you want to go directly for #PR464, then this one can be closed.
Okay, the rebase wasn't too hard. I merged it in 4a013fb04d6e504be8582ad610016d8dcce3e5f3
But I think I need to rewrite this part anyway. I added now that metric spaces are first countable and I hope to later use this fact to simplify the relation proofs between Cauchy filters and Cauchy sequences.
| gharchive/pull-request | 2018-10-21T16:04:59 | 2025-04-01T06:39:22.111208 | {
"authors": [
"johoelzl",
"kckennylau",
"sgouezel"
],
"repo": "leanprover/mathlib",
"url": "https://github.com/leanprover/mathlib/pull/435",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1586958599 | Submit a Cloud Definition
Do NOT copy/paste a definition from somewhere else. Read about the word you want to define and come up with your own definition. Copy/Paste submissions will be closed and not added.
Fill out the JSON with your submission:
{
"word": "Media Access Control Address",
"content": "A 12-digit unique identifier embedded on every computer or network interface card that can connect to the internet. The MAC address is used to identify devices on the network and ensures data is sent to the correct device.",
"learn_more_URL":"https://www.howtogeek.com/764868/what-is-a-mac-address-and-how-does-it-work/",
"tag":"networking",
"abbreviation": "MAC Address",
"author_name":"Chris Rivas",
"author_link": "https://www.linkedin.com/in/chris-rivas4/"
}
Fill out the JSON below with the following.
Word (REQUIRED)
The word you are defining. Check this URL for all words we currently have.
Content (REQUIRED)
The definition. No more than 3 sentences.
learn more URL (REQUIRED)
Website where people can visit to learn more about the word.
tag (REQUIRED and select one)
Tech category the word fits in. Options:
compute
security
service
general
analytics
developer tool
web
networking
database
storage
devops
ai/ml
identity
iot
monitoring
cost management
disaster recovery
abbreviation (OPTIONAL)
If the word is commonly abbreviated, please provide it. For example, command line interface is often abbreviated as CLI.
author name (REQUIRED)
Your name.
author link (REQUIRED)
The URL you want your name to link to.
I've added this, thank you.
| gharchive/issue | 2023-02-16T03:51:09 | 2025-04-01T06:39:22.215014 | {
"authors": [
"crivas01",
"madebygps"
],
"repo": "learntocloud/cloud-dictionary",
"url": "https://github.com/learntocloud/cloud-dictionary/issues/89",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2182609281 | fix: gaia profile test
Try out this version of Leather — Extension build, Test report
Attempting to fix the profile test. They passed locally by separating the tests and forcing the open pages closed. The last test was getting stuck not even logging into the account (I think).
EDIT: Also, changing the Gaia test to sign into Account 2, bc it appears to get stuck trying to sign into Account 1.
Thanks @fbwoolf . I'm going to merge this to have the tests fixed elsewhere.
| gharchive/pull-request | 2024-03-12T20:25:05 | 2025-04-01T06:39:22.219296 | {
"authors": [
"fbwoolf",
"pete-watters"
],
"repo": "leather-wallet/extension",
"url": "https://github.com/leather-wallet/extension/pull/5064",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1292032015 | Unearthed Mod Compatibility
Unearthed generate grass with non dirt versio across the world, would you consider grass from other mod?
I will check this whenever i've time ! Thanks for suggestion !
| gharchive/issue | 2022-07-02T11:13:46 | 2025-04-01T06:39:22.222499 | {
"authors": [
"ardissaps",
"lebonq"
],
"repo": "lebonq/Automatic-Path",
"url": "https://github.com/lebonq/Automatic-Path/issues/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
863417352 | Publish to market place
See: https://github.com/ledwindra/continuous-integration-stata/issues/5
Checks failed due to no action in the marketplace yet. No worries
| gharchive/pull-request | 2021-04-21T03:38:29 | 2025-04-01T06:39:22.232557 | {
"authors": [
"ledwindra"
],
"repo": "ledwindra/continuous-integration-stata",
"url": "https://github.com/ledwindra/continuous-integration-stata/pull/6",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1564040994 | Is there an example of stripping diacritics?
In French, accents and diacritics may or may not be typed by users. This should be considered. Is there an option for this?
there's a static utility function uFuzzy.latinize(stringsArr) that you can use to pre-process your haystack once before doing any searches, and preprocess your needle on each search.
i should probably have it accept a single string as well so there's no additional ceremony of wrapping the needle in an array and unwrapping the result.
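For anyone landing here, a minimal usage sketch; the package name, haystack contents, and option defaults are assumptions, and the needle is wrapped/unwrapped because latinize currently takes an array:
const uFuzzy = require('@leeoniya/ufuzzy');
const haystack = ['Crème brûlée', 'Pâté de fruits', 'Tarte aux pommes'];
// Pre-process the haystack once, before doing any searches.
const latinizedHaystack = uFuzzy.latinize(haystack);
// Pre-process the needle on each search (wrap/unwrap since latinize takes an array).
const needle = uFuzzy.latinize(['creme brulee'])[0];
const uf = new uFuzzy();
const idxs = uf.filter(latinizedHaystack, needle);
console.log(idxs); // indices into the original (accented) haystack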
| gharchive/issue | 2023-01-31T10:26:48 | 2025-04-01T06:39:22.235440 | {
"authors": [
"leeoniya",
"rap2hpoutre"
],
"repo": "leeoniya/uFuzzy",
"url": "https://github.com/leeoniya/uFuzzy/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
104231861 | Created gymkirchenfeld.ch
Domain-Name is: gymkirchenfeld.ch
Website at: www.gymkirchenfeld.ch
Teacher-E-Mail-Adresses: firstname.lastname@gymkirchenfeld.ch
Is this a post-secondary school?
No, it's not. I first thought so, but after researching and comparing the educational systems of the US and Switzerland, we're a higher secondary school.
| gharchive/pull-request | 2015-09-01T10:02:21 | 2025-04-01T06:39:22.244422 | {
"authors": [
"garee76",
"lyzidiamond"
],
"repo": "leereilly/swot",
"url": "https://github.com/leereilly/swot/pull/939",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
160104689 | Set initial values for variables
To give an initial solution, the user might want to give values for (a subset of) the variables.
These should be checked at the beginning of the solve.
I prefer to not change the signature of CSIPaddVar and instead provide a new function CSIPsetInitialValues, analogous to the bounds.
Also, I'm not sure whether SCIP already supports "partial" solutions, but I believe it's still WIP. In that case, I don't want to implement anything on the CSIP side now, but just try to pass the (0-filled?) values as a full solution candidate.
See also the discussion about heuristic callbacks in #3.
Agreed it should be a separate function. The input could be a single flat
vector with NaN entries for unspecified values. This is the format we use
in MathProgBase.
So there will be a heuristic plugin in the next SCIP release (or now, on the master branch) that supports "partial solutions". We would then be able to call SCIPcreatePartialSol and the subtree below the fixations will be searched with some limits.
In the current release, only full solutions are supported.
Can you call SCIPcreatePartialSol during the solve also?
No, according to the docs, it's only possible in the PROBLEM stage.
| gharchive/issue | 2016-06-14T06:12:41 | 2025-04-01T06:39:22.256297 | {
"authors": [
"leethargo",
"mlubin"
],
"repo": "leethargo/CSIP",
"url": "https://github.com/leethargo/CSIP/issues/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1618665892 | 🛑 st3_web_addr is down
In 5603f2c, st3_web_addr ($st3_web_addr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: st3_web_addr is back up in b3121d1.
| gharchive/issue | 2023-03-10T09:47:26 | 2025-04-01T06:39:22.268995 | {
"authors": [
"legion7298"
],
"repo": "legion7298/upptimer",
"url": "https://github.com/legion7298/upptimer/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2688121705 | 🛑 VePhim API hf is down
In 37a0dfa, VePhim API hf (https://api.vephim.online/api/ping) was down:
HTTP code: 401
Response time: 111 ms
Resolved: VePhim API hf is back up in f79d8a8 after 5 minutes.
| gharchive/issue | 2024-11-24T17:55:43 | 2025-04-01T06:39:22.285354 | {
"authors": [
"lehuygiang28"
],
"repo": "lehuygiang28/uptime-tracking",
"url": "https://github.com/lehuygiang28/uptime-tracking/issues/241",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1205642499 | Saved Session
First of all Leifer, you're a genius, a true master...!!! Congratulations on your content; I think it helps a lot of us who are learning programming and practical solutions.
I have a question: I'm using Multidivice=True, but when I stop the service and run it again, it asks me to scan the code again and the saved session is not kept.
Help with the above, please!
Greetings Leifer, is there any solution for this?... The same thing happens to me.
Hi, not yet for the moment! I hope an update comes out soon.
With the whatsapp-web.js 1.16.6 update the issue persists.
Let's hope for the next update.
| gharchive/issue | 2022-04-15T14:58:39 | 2025-04-01T06:39:22.294383 | {
"authors": [
"Gonzalito87",
"HerlyOlivares",
"izelaya051290",
"leifermendez"
],
"repo": "leifermendez/bot-whatsapp",
"url": "https://github.com/leifermendez/bot-whatsapp/issues/48",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
995830284 | urequests module unavailable
Hi,
I need to make an https request from my uPy code and I see that the urequests module is missing. Is there something I am missing, or has anyone successfully used the module to make https requests using this firmware?
I tried installing the module separately but unfortunately it doesn't support https requests. I tried flashing the official MicroPython firmware v1.17 and it works on that version.
Therefore, I am trying to follow the DIY approach to flash the firmware with camera support in order to use the latest firmware that has urequests working. Unfortunately the instructions are not easy to follow, especially the steps here:
Should I use the https://github.com/lemariva/esp32-camera repo or the git clone https://github.com/espressif/esp32-camera in the components folder?
In the following steps, from the instructions here, what needs to be done after adding the PATH variables to install the idf?
Since I am not very familiar with the usage of these tools and compiling the firmware, excuse me if the queries are trivial!
Thank you in advance,
Ani
Sorry for that, it was the https://github.com/lemariva/esp32-camera repo, not the official one, that had a bug. Anyway, today I've updated the building guide (readme and link) and the firmware. Check them out!
MicroPython: Updated support for cameras: M5CAMERA, ESP32-CAM etc.
| gharchive/issue | 2021-09-14T10:02:40 | 2025-04-01T06:39:22.309889 | {
"authors": [
"ani-rudh",
"lemariva"
],
"repo": "lemariva/micropython-camera-driver",
"url": "https://github.com/lemariva/micropython-camera-driver/issues/31",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
413713263 | benchmark against rust de facto standard lib: serde
https://github.com/serde-rs/json
Probably not a fair benchmark, serde is [relatively] slow when not using structs. Serde benchmark for canada.json is about 1/4 the speed.
Feel free, we would be interested to see the results.
This was closed? Anyhow: I am encouraging people to do more benchmarking.
| gharchive/issue | 2019-02-23T16:45:31 | 2025-04-01T06:39:22.311581 | {
"authors": [
"LifeIsStrange",
"TkTech",
"geofflangdale",
"lemire"
],
"repo": "lemire/simdjson",
"url": "https://github.com/lemire/simdjson/issues/62",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
439896750 | [ADDED] min/max and operator options to the exports.mathExpr, extended d tests and (english) readme
If you require tougher math expressions, these additional options now allow this: mathOperator, mathMin, and mathMax were added with default values to ensure this is a non-breaking change to the existing code base.
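For illustration, a usage sketch with the new options; the values chosen are arbitrary examples, and createMathExpr is svg-captcha's existing entry point for math expressions:
const svgCaptcha = require('svg-captcha');
// mathMin/mathMax bound the operands, mathOperator picks '+', '-' or '+-' (either).
const captcha = svgCaptcha.createMathExpr({
  mathMin: 10,
  mathMax: 99,
  mathOperator: '+-'
});
console.log(captcha.text); // the expected answer to check against user input
// captcha.data holds the SVG markup to send to the client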
There are many design problems in v1.x. They are all my mistakes. I would rather stop maintaining for v1.x. Do you really need me to release a new version? Or we should refactor v3.x? I have a lot of new ideas about that.
But if you really need v1.x to be released, I will accept it. The version will be v1.4.0.
Currently I have pulled this into my project directly from github but this is not a nice solution for when my current project goes live in a month or so. Therefore if it is not too much trouble to release this to v1 that would be great.
As you wish 😋
:D you absolute star! Thank you!
Hello, could you help me with setting up a site?
| gharchive/pull-request | 2019-05-03T05:50:28 | 2025-04-01T06:39:22.314087 | {
"authors": [
"johndcarmichael",
"kerem3322",
"lichaozhy"
],
"repo": "lemonce/svg-captcha",
"url": "https://github.com/lemonce/svg-captcha/pull/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
561578962 | Your diagrams look really nice - what did you use to draw them?
I'm a beginner and can't draw diagrams... May I ask what software you used to draw them?
A piece of Mac software called OmniGraffle.
| gharchive/issue | 2020-02-07T11:22:38 | 2025-04-01T06:39:22.314936 | {
"authors": [
"ShaoaAllen",
"lemonhu"
],
"repo": "lemonhu/stock-knowledge-graph",
"url": "https://github.com/lemonhu/stock-knowledge-graph/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1160490216 | add test for complicated tag and fix issue for it
Connected with issue https://github.com/lenforiee/OsuPyParser/issues/1
I formatted the unit tests to work with Python's default unittest framework. I added a file with a beatmap whose tag caused the bug and added a unit test for it.
In the second commit I solved the issue by replacing the usage of the "in" keyword with line.startswith(key).
It passes the unit test and it worked for me on 1000+ files.
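To illustrate the difference (shown in JavaScript purely for illustration; the parser itself is Python, where the calls are "key in line" vs "line.startswith(key)"):
const line = 'Tags:some map whose tags happen to contain the word Title';
// Substring containment matches anywhere in the line - this is what caused the bug.
console.log(line.includes('Title'));   // true, even though this is the Tags line
// Prefix matching only accepts the key at the start of the line.
console.log(line.startsWith('Title')); // false
console.log(line.startsWith('Tags'));  // true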
Hey! I do like your changes, but dropping my two cents in: I personally think it would be better not to bundle the .osu test map with the repo (it's not made by the repo's creator, etc.) but rather download it along with the tests.
This is a good and bad idea at the same time:
pros:
Working with real data.
Solves problem with license.
cons:
Dependency on another service -> accessing other services and downloading = more code that needs to be maintained.
Added complexity for testing -> testing should be as straightforward as possible.
It could be hard to find an example for every unit test -> it's easier to manually edit a beatmap file.
An easy solution is to create your own beatmap file (distributed under the project license) that mimics some existing beatmap, modified to include the required features. The drawback of this is that it loses the strict connection to reality.
I'm now looking for some official information about osu! beatmap file licensing. According to the terms of service, a user that uploads a file to the osu! server gives osu! rights to it (source: https://osu.ppy.sh/legal/en/Terms#user-submissions-and-content-removal). Later, osu! distributes it, but I couldn't find under what license. The beatmap file contains information about who its author is; that is probably enough to use it in this project.
I found and fixed a regression connected with the change to startswith(key). Opening some beatmaps had been failing because they were encoded as UTF-8 with a BOM; they are now handled correctly.
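For illustration, a minimal sketch of how such files can be read; this assumes the utf-8-sig codec is acceptable here (it strips a BOM when present and behaves like plain UTF-8 otherwise), and the helper name is made up for this example, not the parser's actual API.

def read_beatmap_lines(path):
    # "utf-8-sig" transparently drops a leading BOM, so files saved as
    # UTF-8-with-BOM and as plain UTF-8 are both handled the same way.
    with open(path, encoding="utf-8-sig") as f:
        return [line.rstrip("\n") for line in f]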
Hey, sorry that responding to this took me 2 months, but I am planning to rewrite it very soon, as the code quality is far worse than what I currently write.
Anyways thanks for contribution!
| gharchive/pull-request | 2022-03-06T01:26:40 | 2025-04-01T06:39:22.360255 | {
"authors": [
"RealistikDash",
"Vergenter",
"lenforiee"
],
"repo": "lenforiee/OsuPyParser",
"url": "https://github.com/lenforiee/OsuPyParser/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
886595278 | Exception Handling for Protocol Handler
What would you like to be added:
When a Lens URL is not correct have a way to hand the exceptions
Why is this needed:
Currently Lens comes to the foreground but nothing happens. The user could be confused
Agreed this is non-trivial. So until there is a more robust framework for handling this I propose a simple generic catch-all notification like:
Sorry there was a problem with the Lens URL. Please check to see if there is an error in the URL or try upgrading to the latest version of Lens and/or extensions.
@leenamba I disagree that a notification is a "simple catch-all". But I assume that your answers to my questions are:
renderer
no
no
Currently targeting this for 5.0.0. Will create a backport PR once it is merged.
| gharchive/issue | 2021-05-11T09:12:20 | 2025-04-01T06:39:22.368311 | {
"authors": [
"Nokel81",
"leenamba"
],
"repo": "lensapp/lens",
"url": "https://github.com/lensapp/lens/issues/2747",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
940902089 | Lens cannot detect my kube-state-metrics installed in a non-default namespace
Describe the bug
Lens is not able to detect a kube-state-metrics instance that I have running outside of the default namespace.
To Reproduce
Steps to reproduce the behavior:
Install kube-state-metrics in a cluster outside of the default namespace
Click on the "Cluster" item on the left bar
See no metrics
Expected behavior
Lens should be able to discover my KSM instance, no matter in which namespace it is, and use it to provide me cluster wide metrics.
Screenshots
Not really needed.
Environment (please complete the following information):
Lens Version: 5.0.2-latest.20210705.2
OS: [e.g. OSX] MacOS
Installation method (e.g. snap or AppImage in Linux): DMG file
We only use kube-state-metrics for the pod-metrics in the node details panel. The rest of the metrics views are powered by prometheus.
I have installed kube-state-metrics within the kube-system namespace and they are picked up in that space. So this is working as intended.
| gharchive/issue | 2021-07-09T16:19:37 | 2025-04-01T06:39:22.372739 | {
"authors": [
"Nokel81",
"douglascamata"
],
"repo": "lensapp/lens",
"url": "https://github.com/lensapp/lens/issues/3329",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
830207637 | Fix: cluster-settings page back-button navigation is broken
Broadcasting the IPC event renderer:navigate from cluster-view (iframe -> main-layout-header) was also triggered within the iframe, which is not the desired behaviour.
Before
https://user-images.githubusercontent.com/6377066/110957068-f15f2200-8353-11eb-91f2-4ce19d225eb4.mov
After:
https://user-images.githubusercontent.com/6377066/110957050-ed330480-8353-11eb-875d-fb70fb653b5d.mov
Looks like this fixes https://github.com/lensapp/lens/issues/2315 too
| gharchive/pull-request | 2021-03-12T15:07:11 | 2025-04-01T06:39:22.375367 | {
"authors": [
"ixrock",
"jim-docker"
],
"repo": "lensapp/lens",
"url": "https://github.com/lensapp/lens/pull/2330",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
682853009 | Windows
Specs
Leon version: Latest
OS (or browser) version: Windows
Node.js version: v14
Complete "npm run check" output: normal
(if using Docker) Complete "npm run docker:check" output: usual
(optional) Leon package version: latest
Expected Behavior
It should work on Windows as usual.
Actual Behavior
It does not load on Windows; this could be due to the many outdated node modules.
How Do We Reproduce?
I think maybe update all of the modules; I can try to do it later.
Extra (like a sample repo to reproduce the issue, etc.)
It will probably help to update the modules. Just try to run it on the latest version of Windows with Node v14 and the latest versions of npm and the modules.
Thanks for your report!
Could you please try with the latest version of the develop branch ?
Also there is now Gitpod, so you can easily start Leon directly in your browser, it is worth taking a look: https://gitpod.io/#https://github.com/leon-ai/leon
| gharchive/issue | 2020-08-20T16:05:54 | 2025-04-01T06:39:22.431520 | {
"authors": [
"Divlo",
"pupperr"
],
"repo": "leon-ai/leon",
"url": "https://github.com/leon-ai/leon/issues/194",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1451330007 | tflite conversion - GPU/XNNPACK fails
Hi!
Thanks for great repo!
I have converted the EfficientFormer model to tflite. However, applying both XNNPACK and GPU delegates fail.
GPU delegate created.
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for GPU.
Failed to apply GPU delegate.
Benchmarking failed.
XNNPACK delegate created.
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Failed to apply XNNPACK delegate.
Benchmarking failed.
Do you know what could be the issue? Im using latest tensorflow version for conversion.
Thanks. However, LayerNorm works; I believe the problem is with the FullyConnected layers.
You mean the Dense layers? If you can help confirm a model like mm = efficientformer.EfficientFormerL1(num_classes=0) without output Dense layers works, I think we can have a function like convert_dense_to_conv2d.
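As a rough sketch of what such a helper might look like: the name convert_dense_to_conv2d comes from the comment above, but the implementation below is only my own assumption, not the library's actual code, and it assumes the Dense layer uses a bias. A Dense layer applied along the channel dimension is mathematically identical to a 1x1 convolution, which is the kind of op the delegates are generally better at handling.

import tensorflow as tf

def convert_dense_to_conv2d(dense):
    # Build a 1x1 Conv2D that computes the same mapping as the given Dense layer.
    kernel, bias = dense.get_weights()            # kernel shape: (in_dim, out_dim)
    conv = tf.keras.layers.Conv2D(
        filters=kernel.shape[1],
        kernel_size=1,
        activation=dense.activation,
    )
    conv.build((None, 1, 1, kernel.shape[0]))     # NHWC input with 1x1 spatial dims
    conv.set_weights([kernel.reshape(1, 1, *kernel.shape), bias])
    return conv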
Similar issue solved in Converting EfficientFormer into tflite doesn't work #137.
| gharchive/issue | 2022-11-16T10:36:08 | 2025-04-01T06:39:22.436753 | {
"authors": [
"leondgarse",
"macsmy"
],
"repo": "leondgarse/keras_cv_attention_models",
"url": "https://github.com/leondgarse/keras_cv_attention_models/issues/89",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
151459325 | SpeechToSpeech sample doesn't work
Having trouble with the SpeechToSpeech sample app: only the SpeechToText part works; the translation and text-to-speech parts don't.
There is no speech_to_speech service, so I use these services:
cf create-service speech_to_text standard speech-to-text-service-standard
cf create-service language_translation standard language-translation-service
cf create-service text_to_speech standard text-to-speech-service
Please use my url http://pd-speech-to-speech-app.eu-gb.mybluemix.net/
What's wrong here?
Regards
Roland
Same issue for me
http://181.135.63.86:3006/
It sounds like you are not using the correct credentials for the TTS (text-to-speech) and LT (language translation).
How did you clone the SpeechToSpeech application ?
Did you use the "Deploy to Bluemix" button to clone?
If not, please make sure that your credentials for the TTS (text-to-speech) and LT (language translation) are correct and you are
able to run their demos.
I have it locally and I'm sure I put in the correct credentials.
Please help.
I sent you the instructions how to preserve Spanish models only and hide
others.
It is not clear from your message if you followed the instructions. Did you?
The first steps (prior to anything else) are
clone the project
npm install
npm run build
Were you able to pass these 3 steps ?
No, I cloned this repo https://github.com/watson-developer-cloud/speech-to-text-nodejs.git, and step 3 is not available.
Did you use the "Deploy to Bluemix" magic button?
No, I need to deploy locally. Could you help me with this and SpeechToSpeech? Please give me a hand. Is it possible to hang out via Gmail or video chat on Skype?
Use the magic button "Deploy to Bluemix" - it will create the clone for
you running on Bluemix. Based on you previous emails I assume you have
Bluemix account already, so cloning the S2S using the magic button should
not be a problem.
Follow the "Running locally" instructions in
https://github.com/leonrch/SpeechToSpeech step-by-step. The goal is to get
the correct credentials for all 3 services.
cf env
Clone the GIT to get your local copy, modify app.js by specifying the
user names and passwords for the three following services: STT, TTS, LT.
Navigate to the folder where the application is cloned. You will be able to
npm install
npm run build
once the credentials (step 2) are retrieved.
Good luck!
PS: Unfortunately, I do not have time for Skype, hangout, etc
These are all my steps:
sudo apt-get update
sudo apt-get install curl git
curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install nodejs
node --version
npm --version
uname -a
git clone https://github.com/leonrch/SpeechToSpeech.git
cd SpeechToSpeech/
sudo nano /home/felipe/SpeechToSpeech/app.js
##
var config = {
version: 'v1',
url: 'https://stream.watsonplatform.net/speech-to-text/api',
username: 'ee68d465-a36f-4ff1-8709-be4461328550',
password: 'correct password'
};
var mt_credentials = extend({
url: 'https://gateway.watsonplatform.net/language-translation/api',
username: '46a608ad-4ee8-48f7-92f6-85d4289cd82f',
password: 'correct password',
version: 'v2'
}, bluemix.getServiceCreds('language-translation')); // VCAP_SERVICES
var tts_credentials = extend({
url: 'https://stream.watsonplatform.net/text-to-speech/api',
version: 'v1',
username: 'd0898e8f-7eab-47f0-8b4d-f2270a61e262',
password: 'correct password',
}, bluemix.getServiceCreds('text_to_speech'));
##
npm install
npm run build
sudo npm install pm2 -g
pm2 start app.js
sudo su -c "env PATH=$PATH:/usr/bin pm2 startup linux -u felipe --hp /home/felipe"
pm2 save
http://181.135.63.86:3006/
You HAVE to make sure the credentials you are using are correct! Try to clone each service (STT, TTS, LT) and ensure you can run their demos locally.
I see you made several modifications in your version. Did you try the cloned version WITHOUT your modifications before claiming it does not work? Please try it AS IS without adding even one space.
If you do not follow steps 1 and 2, I am not sure I can help you.
Anyway, I do not feel I deserved the honour to debug your application WITH YOUR CHANGES.
http://181.135.63.86:3005/, LT
http://181.135.63.86:3003/. TTS
http://181.135.63.86:3002/, STT
As you can see, all of them are working individually.
Good!
Now try to change app.js of your local copy (clone) with the credentials
for TTS, STT, LT and try npm build
I changed the credentials to mine.
felipe@felipeurrego:~/SpeechToSpeech$ npm install
npm WARN package.json SpeechToSpeechBrowserDemoApp@0.2.1 Normalized value of bugs field is an empty object. Deleted.
npm WARN optional dep failed, continuing fsevents@1.0.12
npm WARN deprecated jade@0.26.3: Jade has been renamed to pug, please install the latest version of pug instead of jade
http-proxy@1.11.3 node_modules/http-proxy
├── eventemitter3@1.2.0
└── requires-port@0.0.1
connect@3.3.5 node_modules/connect
├── utils-merge@1.0.0
├── parseurl@1.3.1
├── debug@2.1.3 (ms@0.7.0)
└── finalhandler@0.3.4 (escape-html@1.0.1, on-finished@2.2.1)
errorhandler@1.2.4 node_modules/errorhandler
├── escape-html@1.0.1
└── accepts@1.1.4 (negotiator@0.4.9, mime-types@2.0.14)
body-parser@1.15.1 node_modules/body-parser
├── content-type@1.0.2
├── bytes@2.3.0
├── depd@1.1.0
├── qs@6.1.0
├── on-finished@2.3.0 (ee-first@1.1.1)
├── raw-body@2.1.6 (unpipe@1.0.0)
├── http-errors@1.4.0 (inherits@2.0.1, statuses@1.3.0)
├── debug@2.2.0 (ms@0.7.1)
├── iconv-lite@0.4.13
└── type-is@1.6.13 (media-typer@0.3.0, mime-types@2.1.11)
express@4.10.8 node_modules/express
├── merge-descriptors@0.0.2
├── cookie@0.1.2
├── utils-merge@1.0.0
├── media-typer@0.3.0
├── vary@1.0.1
├── fresh@0.2.4
├── methods@1.1.1
├── parseurl@1.3.1
├── range-parser@1.0.3
├── finalhandler@0.3.3
├── content-disposition@0.5.0
├── serve-static@1.7.2
├── escape-html@1.0.1
├── cookie-signature@1.0.5
├── path-to-regexp@0.1.3
├── depd@1.0.1
├── qs@2.3.3
├── on-finished@2.2.1 (ee-first@1.1.0)
├── debug@2.1.3 (ms@0.7.0)
├── proxy-addr@1.0.10 (forwarded@0.1.0, ipaddr.js@1.0.5)
├── etag@1.5.1 (crc@3.2.1)
├── type-is@1.5.7 (mime-types@2.0.14)
├── accepts@1.1.4 (negotiator@0.4.9, mime-types@2.0.14)
└── send@0.10.1 (ms@0.6.2, destroy@1.0.3, mime@1.2.11, on-finished@2.1.1)
transformer-proxy@0.3.3 node_modules/transformer-proxy
├── util@0.10.3 (inherits@2.0.1)
├── promise@7.1.1 (asap@2.0.4)
└── stream@0.0.1 (expect.js@0.3.1, mocha@2.5.3)
browserify-shim@3.8.12 node_modules/browserify-shim
├── through@2.3.8
├── mothership@0.2.0 (find-parent-dir@0.3.0)
├── resolve@0.6.3
├── rename-function-calls@0.1.1 (detective@3.1.0)
└── exposify@0.4.3 (map-obj@1.0.1, has-require@1.1.0, globo@1.0.2, through2@0.4.2, transformify@0.1.2, replace-requires@1.0.3)
harmon@1.3.1 node_modules/harmon
└── trumpet@1.7.0 (inherits@2.0.1, duplexer2@0.0.2, through2@1.1.1, readable-stream@1.1.14, html-tokenize@1.2.5, html-select@2.3.24)
watson-developer-cloud@0.9.30 node_modules/watson-developer-cloud
├── async@1.4.2
├── cookie@0.2.4
├── isstream@0.1.2
├── csv-stringify@0.0.8
├── extend@3.0.0
├── string@3.3.1
├── object.omit@2.0.0 (is-extendable@0.1.1, for-own@0.1.4)
├── object.pick@1.1.2 (isobject@2.1.0)
├── request@2.61.0 (tunnel-agent@0.4.3, aws-sign2@0.5.0, forever-agent@0.6.1, oauth-sign@0.8.2, caseless@0.11.0, stringstream@0.0.5, json-stringify-safe@5.0.1, tough-cookie@2.2.2, qs@4.0.0, node-uuid@1.4.7, combined-stream@1.0.5, mime-types@2.1.11, form-data@1.0.0-rc4, http-signature@0.11.0, bl@1.0.3, hawk@3.1.3, har-validator@1.8.0)
├── solr-client@0.5.0 (duplexer@0.1.1, httperror@0.2.3, JSONStream@0.9.0, request@2.49.0, json-bigint@0.1.4)
└── jshint@2.9.2 (strip-json-comments@1.0.4, exit@0.1.2, console-browserify@1.1.0, shelljs@0.3.0, minimatch@2.0.10, cli@0.6.6, htmlparser2@3.8.3, lodash@3.7.0)
browserify@10.2.6 node_modules/browserify
├── browser-resolve@1.11.2
├── https-browserify@0.0.1
├── tty-browserify@0.0.0
├── path-browserify@0.0.0
├── constants-browserify@0.0.1
├── punycode@1.4.1
├── builtins@0.0.7
├── string_decoder@0.10.31
├── through2@1.1.1
├── isarray@0.0.1
├── inherits@2.0.1
├── os-browserify@0.1.2
├── process@0.11.3
├── htmlescape@1.1.1
├── commondir@0.0.1
├── stream-browserify@1.0.0
├── defined@1.0.0
├── duplexer2@0.0.2
├── assert@1.3.0
├── shell-quote@0.0.1
├── domain-browser@1.1.7
├── xtend@4.0.1
├── querystring-es3@0.2.1
├── timers-browserify@1.4.2
├── deps-sort@1.3.9
├── util@0.10.3
├── events@1.0.2
├── concat-stream@1.4.10 (typedarray@0.0.6)
├── parents@1.0.1 (path-platform@0.11.15)
├── vm-browserify@0.0.4 (indexof@0.0.1)
├── has@1.0.1 (function-bind@1.1.0)
├── read-only-stream@1.1.1 (readable-wrap@1.0.0)
├── console-browserify@1.1.0 (date-now@0.1.4)
├── readable-stream@1.1.14 (core-util-is@1.0.2)
├── url@0.10.3 (punycode@1.3.2, querystring@0.2.0)
├── subarg@1.0.0 (minimist@1.2.0)
├── http-browserify@1.7.0 (Base64@0.2.1)
├── shasum@1.0.2 (sha.js@2.4.5, json-stable-stringify@0.0.1)
├── buffer@3.6.0 (ieee754@1.1.6, isarray@1.0.0, base64-js@0.0.8)
├── glob@4.5.3 (inflight@1.0.5, once@1.3.3, minimatch@2.0.10)
├── JSONStream@1.1.1 (through@2.3.8, jsonparse@1.2.0)
├── labeled-stream-splicer@1.0.2 (stream-splicer@1.3.2)
├── browserify-zlib@0.1.4 (pako@0.2.8)
├── resolve@1.1.7
├── syntax-error@1.1.6 (acorn@2.7.0)
├── browser-pack@5.0.1 (umd@3.0.1, combine-source-map@0.6.1)
├── insert-module-globals@6.6.3 (is-buffer@1.1.3, combine-source-map@0.6.1, lexical-scope@1.2.0)
├── crypto-browserify@3.11.0 (create-hmac@1.1.4, randombytes@2.0.3, pbkdf2@3.0.4, create-hash@1.1.2, diffie-hellman@5.0.2, create-ecdh@4.0.0, browserify-cipher@1.0.0, browserify-sign@4.0.0, public-encrypt@4.0.0)
└── module-deps@3.9.1 (stream-combiner2@1.0.2, detective@4.3.1)
watchify@3.7.0 node_modules/watchify
├── defined@1.0.0
├── xtend@4.0.1
├── through2@2.0.1 (readable-stream@2.0.6)
├── outpipe@1.1.1 (shell-quote@1.6.0)
├── chokidar@1.5.1 (path-is-absolute@1.0.0, inherits@2.0.1, glob-parent@2.0.0, async-each@1.0.0, is-glob@2.0.1, is-binary-path@1.0.1, readdirp@2.0.0)
├── anymatch@1.3.0 (arrify@1.0.1, micromatch@2.3.8)
└── browserify@13.0.1 (browser-resolve@1.11.2, https-browserify@0.0.1, tty-browserify@0.0.0, path-browserify@0.0.0, punycode@1.4.1, duplexer2@0.1.4, string_decoder@0.10.31, constants-browserify@1.0.0, inherits@2.0.1, os-browserify@0.1.2, htmlescape@1.1.1, process@0.11.3, stream-browserify@2.0.1, assert@1.3.0, domain-browser@1.1.7, read-only-stream@2.0.0, querystring-es3@0.2.1, timers-browserify@1.4.2, util@0.10.3, deps-sort@2.0.0, events@1.1.0, parents@1.0.1, vm-browserify@0.0.4, has@1.0.1, console-browserify@1.1.0, shell-quote@1.6.0, url@0.11.0, readable-stream@2.1.4, subarg@1.0.0, labeled-stream-splicer@2.0.0, shasum@1.0.2, stream-http@2.3.0, concat-stream@1.5.1, glob@5.0.15, JSONStream@1.1.1, buffer@4.6.0, syntax-error@1.1.6, browserify-zlib@0.1.4, resolve@1.1.7, browser-pack@6.0.1, insert-module-globals@7.0.1, crypto-browserify@3.11.0, module-deps@4.0.7)
felipe@felipeurrego:~/SpeechToSpeech$ npm run build
> SpeechToSpeechBrowserDemoApp@0.2.1 build /home/felipe/SpeechToSpeech
> browserify -o public/js/main.js src/index.js
felipe@felipeurrego:~/SpeechToSpeech$
Not working
http://181.135.63.86:3006/
DID YOU USE "DEPLOY ON BLUEMIX" BUTTON?
IF NOT -- PLEASE DO, PLEASE!
(I know you need a local version), but cloning using "DEPLOY ON BLUEMIX"
BUTTON is the first step! Let me know if it works.
http://speechtospeech-ingfelipeurrego-1353.mybluemix.net/
it works now?
Yes, the application referenced link you sent, seems to be working.
The way, the "DEPLOY ON BLUEMIX" button works is
it takes the latest code from the repo and compiles it
binds to the services listed in manifest.yml
The fact the cloned app works tells me that the source code is OK.
The problem you are experiencing is likely related to wrong credentials.
Please change app.js ONLY.
How can I bind the services listed in manifest.yml?
Hi again, how can I bind the services listed in manifest.yml? I did all the steps but it is not working locally...
It only works with the magic button.
sudo nano /home/felipe/SpeechToSpeech/.env
##
{
"speech_to_text": [
{
"name": "speech-to-text-service-standard",
"label": "speech_to_text",
"plan": "standard",
"credentials": {
"url": "https://stream.watsonplatform.net/speech-to-text/api",
"password": "mypass",
"username": "265bdf48-47de-4bd0-8a9c-05194f2f29dd"
}
}
],
"language_translation": [
{
"name": "language-translation-service",
"label": "language_translation",
"plan": "standard",
"credentials": {
"url": "https://gateway.watsonplatform.net/language-translation/api",
"password": "mypass",
"username": "bf0d3db6-ceae-4d9a-8c99-f581d5a22dab"
}
}
],
"text_to_speech": [
{
"name": "text-to-speech-service",
"label": "text_to_speech",
"plan": "standard",
"credentials": {
"url": "https://stream.watsonplatform.net/text-to-speech/api",
"password": "mypass",
"username": "0ddf28b9-677b-4bb2-b6d3-51f8dfc547ff"
}
}
]
}
##
http://181.135.63.86:3006/
Not working
Hi again, is sudo nano /home/felipe/SpeechToSpeech/.env correct?
Are you there? Please give me a little help.
Any suggestions?
Are you busy?
Please share a little help.
@rhaenggi please download manually with git clone; something happens.
| gharchive/issue | 2016-04-27T18:36:51 | 2025-04-01T06:39:22.510982 | {
"authors": [
"johnfelipe",
"leonrch",
"rhaenggi"
],
"repo": "leonrch/SpeechToSpeech",
"url": "https://github.com/leonrch/SpeechToSpeech/issues/5",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1177342358 | Piped content avalability
How long piped content to cli3cloud.com would be stored ?
I can see it works in real time as well content still available even output is stopped.
Hi, for now, while the traffic is still small, the content will not get deleted at all and I am certain it will stay this way until my Postgres database will come to a limit, which is very unlikely to be honest.
If the database comes to a limit, which we are still very very far away from, you can expect that the output will be stored for at least a month.
| gharchive/issue | 2022-03-22T21:47:02 | 2025-04-01T06:39:22.514102 | {
"authors": [
"CompuRoot",
"leonwind"
],
"repo": "leonwind/cli2cloud",
"url": "https://github.com/leonwind/cli2cloud/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
406080340 | Proposal for kubectl to have a installer script
About the tool
The CLI tool for Kubernetes!
How the "cURL & bash" command be
curl https://installer.to/kubectl | bash
Checklist
[x] I checked the Issues list and I'm sure I'm not duplicating an existing request
[x] I checked the Pull Request list and I'm sure I'm not proposing for a tool that is about to get added
I request the community to consider my proposal and cast your votes by commenting,
+1 if you like to see an installer script for this tool in this repo
-1 if you do not like to see an installer script for this tool in this repo
Partially done in #14
| gharchive/issue | 2019-02-03T11:54:02 | 2025-04-01T06:39:22.524799 | {
"authors": [
"VibhorCodecianGupta",
"agentmilindu"
],
"repo": "leopardslab/installer.to",
"url": "https://github.com/leopardslab/installer.to/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
688598516 | Docker image not building on Windows
Build failing on Appveyor
Fixed Nlohmann JSON version
| gharchive/issue | 2020-08-29T20:21:53 | 2025-04-01T06:39:22.525647 | {
"authors": [
"leozz37"
],
"repo": "leozz37/texugo",
"url": "https://github.com/leozz37/texugo/issues/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1083314955 | Port to avalonia
I really like the look of this theme, but I use Avalonia for most of my projects. Would you be happy for me to port this to Avalonia? I would adhere to the rules of the license and credit you and this repo.
Hi @Sabuto, Thank you for your appreciation and interest in the project. If building a dll for Avalonia only requires Nuget packages, you can add a new project like WPFUI.Avalonia in the main repository. Just send PR
It would require a complete rewrite, as the Avalonia styling system is different and it has different controls. For example, in WPF the ToggleButton is the equivalent of Avalonia's ToggleSwitch, and Avalonia's ToggleButton is something different. Creating custom controls is also completely different. I'm happy to create it and then maybe add a PR so the project can be included in this repo?
All projects can be in one solution and published as a separate NuGet packages and DLL's for selected framework.
For anyone interested in helping out before I commit to this repo, please feel free: https://github.com/Sabuto/WpfUi.Avalonia/
This already exists: https://github.com/amwx/FluentAvalonia
This already exists: https://github.com/amwx/FluentAvalonia
Yeah I figured that after I started doing it
| gharchive/issue | 2021-12-17T14:29:10 | 2025-04-01T06:39:22.533280 | {
"authors": [
"Pomianowski",
"Sabuto",
"nlogozzo",
"sabuto"
],
"repo": "lepoco/wpfui",
"url": "https://github.com/lepoco/wpfui/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1291454741 | Left navigation bar smooth animation
Hello Lepo.
I greatly appreciate the new version of the project.
It would be great if the icon animation were fluid when you click on a navigation icon in the bar on the left of the screen. It currently feels like there are slowdowns when opening a page, and it's not very pretty.
It would really be an improvement that would make the project stand out. However, the animation of the page when it opens is really great.
Do you think it is a nice enhancement?
| gharchive/issue | 2022-07-01T14:04:32 | 2025-04-01T06:39:22.534738 | {
"authors": [
"PierreLeGit"
],
"repo": "lepoco/wpfui",
"url": "https://github.com/lepoco/wpfui/issues/258",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1571367306 | Cannot run doctests or build docs for projects containing 'server' methods.
While looking into https://github.com/leptos-rs/cargo-leptos/issues/66 I've discovered it seems we are unable to run doctests or build docs for any project containing server methods.
Here is some example output using the todo_app_sqlite_axum example.
At the end of this error output you will see that this is coming from a failing rustdoc command.
Checking leptos_axum v0.1.3 (/home/phillipb/Repositories/leptos-experiments/leptos/integrations/axum)
Documenting todo_app_sqlite_axum v0.1.0 (/home/phillipb/Repositories/leptos-experiments/leptos/examples/todo_app_sqlite_axum)
error[E0407]: method `call_fn_client` is not a member of trait `leptos::ServerFn`
--> src/todo.rs:39:1
|
39 | #[server(GetTodos, "/api")]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ not a member of trait `leptos::ServerFn`
|
= note: this error originates in the attribute macro `server` (in Nightly builds, run with -Z macro-backtrace for more info)
error[E0407]: method `call_fn_client` is not a member of trait `leptos::ServerFn`
--> src/todo.rs:80:1
|
80 | #[server(AddTodo, "/api")]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ not a member of trait `leptos::ServerFn`
|
= note: this error originates in the attribute macro `server` (in Nightly builds, run with -Z macro-backtrace for more info)
error[E0407]: method `call_fn_client` is not a member of trait `leptos::ServerFn`
--> src/todo.rs:97:1
|
97 | #[server(DeleteTodo, "/api")]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ not a member of trait `leptos::ServerFn`
|
= note: this error originates in the attribute macro `server` (in Nightly builds, run with -Z macro-backtrace for more info)
error: Compilation failed, aborting rustdoc
For more information about this error, try `rustc --explain E0407`.
error: could not document `todo_app_sqlite_axum`
Caused by:
process didn't exit successfully: `rustdoc --edition=2021 --crate-type cdylib --crate-type rlib --crate-name todo_app_sqlite_axum src/lib.rs ... (edited for brevity)
Here is the command I'm using to run the doctests.
cd examples/todo_app_sqlite_axum
LEPTOS_OUTPUT_NAME=todo_app_sqlite_axum LEPTOS_SITE_ROOT=target/site LEPTOS_SITE_PKG_DIR=pkg LEPTOS_SITE_ADDR=127.0.0.1:3000 LEPTOS_RELOAD_PORT=3001 LEPTOS_LIB_DIR=. LEPTOS_BIN_DIR=. cargo test --package=todo_app_sqlite_axum --doc --target-dir=target/server --no-default-features --features=ssr
Likewise running cargo doc against the example project also fails. Again it is the rustdoc command being run by cargo that fails complaining about the server function (as shown above).
cd examples/todo_app_sqlite_axum
LEPTOS_OUTPUT_NAME=todo_app_sqlite_axum LEPTOS_SITE_ROOT=target/site LEPTOS_SITE_PKG_DIR=pkg LEPTOS_SITE_ADDR=127.0.0.1:3000 LEPTOS_RELOAD_PORT=3001 LEPTOS_LIB_DIR=. LEPTOS_BIN_DIR=. cargo doc --no-deps --target-dir=target/server --no-default-features --features=ssr
The nature of the errors suggests rustdoc is invoking rustc in a way that leads to a failed compilation of the server macro.
Just wondering if anyone has any suggestions on how to troubleshoot this further?
Thanks.
Nice catch! I think I've got a fairly simple fix for this and it allows cargo doc to run in the todo_app_sqlite example on my branch, so I'm hopeful. I can't see any reason it should break any of the examples themselves but I'll let the CI run and see. Thanks for reporting this.
| gharchive/issue | 2023-02-05T09:53:49 | 2025-04-01T06:39:22.543751 | {
"authors": [
"gbj",
"phillipbaird"
],
"repo": "leptos-rs/leptos",
"url": "https://github.com/leptos-rs/leptos/issues/474",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2045483588 | draft for sso_auth_session example
Draft for an Example for SSO auth, based on conversation with Benwis & diversable. Please give all nits, thank you for your feedback.
halp
Installed package `cargo-all-features v1.10.0` (executables `cargo-build-all-features`, `cargo-check-all-features`, `cargo-test-all-features`)
[cargo-make] INFO - Execute Command: "rustup" "run" "nightly" "cargo" "+nightly" "build-all-features"
error: no such command: `+nightly`
Cargo does not handle `+toolchain` directives.
Did you mean to invoke `cargo` through `rustup` instead?
[cargo-make] ERROR - Error while executing command, exit code: 101
[cargo-make] WARN - Build Failed.
Error: Process completed with exit code 1.
I have not looked into the code too much (short on time today), but I saw you refer to google SSO there.
You could, if you like, integrate Rauthy to have a fully self-contained example.
I have also created a minimal client from which you could just grab some code, if you like.
I have not looked into the code too much (short on time today), but I saw you refer to google SSO there. You could, if you like, integrate Rauthy to have a fully self-contained example. I do have created a minimal client as well from which you could just grab some code, if you like.
You could actually add Rauthy directly and push a pre-configured SQLite database to the example, which would even mean way less setup.
That looks like a cool project! I'll check that out. Thanks :)
Prior art by @kerkmann might also be useful here: https://crates.io/crates/leptos_oidc
It’s already been tested with Rauthy as well.
Thanks @erlend-sh , thanks for mentioning it! :heart:
@sjud Yes, if you need some help or some knowledge sharing, just contact or ping me. :3
@sjud Do you think this is ready for merging?
Ya let’s merge. We all have bigger fish to fry and the code works.
| gharchive/pull-request | 2023-12-18T01:03:24 | 2025-04-01T06:39:22.550759 | {
"authors": [
"benwis",
"erlend-sh",
"kerkmann",
"sebadob",
"sjud"
],
"repo": "leptos-rs/leptos",
"url": "https://github.com/leptos-rs/leptos/pull/2117",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
273183430 | Tests failing with 0.8.1
Currently one test is failing across multiple systems:
======================================================== test session starts ========================================================
platform linux2 -- Python 2.7.14, pytest-3.2.3, py-1.4.34, pluggy-0.4.0
rootdir: /var/tmp/paludis/build/dev-python-mistune-0.8.1/work/PYTHON_ABIS/2.7/mistune-0.8.1, inifile:
plugins: virtualenv-1.2.11, shutil-1.2.11, expect-1.1.0, cov-2.5.1, hypothesis-3.36.0
collected 66 items
tests/test_cases.py ......................F..........................
tests/test_extra.py ............
tests/test_subclassing.py .....
============================================================= FAILURES ==============================================================
__________________________________________________________ test_extra[22] ___________________________________________________________
folder = '/var/tmp/paludis/build/dev-python-mistune-0.8.1/work/PYTHON_ABIS/2.7/mistune-0.8.1/tests/fixtures/extra'
name = 'case_insensitive_refs'
def render(folder, name):
filepath = os.path.join(folder, name + '.text')
with open(filepath) as f:
content = f.read()
html = m.parse(content)
filepath = os.path.join(folder, name + '.html')
with open(filepath) as f:
result = f.read()
html = re.sub(r'\s', '', html)
result = re.sub(r'\s', '', result)
for i, s in enumerate(html):
if s != result[i]:
begin = max(i - 30, 0)
msg = '\n\n%s\n------Not Equal(%d)------\n%s' % (
html[begin:i+30], i, result[begin:i+30]
)
> raise ValueError(msg)
E ValueError:
E
E <p>[hi]</p>
E ------Not Equal(3)------
E <p><ahref="/url">hi</a></p>
tests/test_cases.py:30: ValueError
========================================================= warnings summary ==========================================================
tests/test_cases.py::test_extra
yield tests are deprecated, and scheduled to be removed in pytest 4.0
tests/test_cases.py::test_normal
yield tests are deprecated, and scheduled to be removed in pytest 4.0
-- Docs: http://doc.pytest.org/en/latest/warnings.html
========================================== 1 failed, 65 passed, 2 warnings in 1.21 seconds ==========================================
I'm seeing this as well. The reason is that the mistune-0.8.1.tar.gz tarball on PyPI doesn't match the v0.8.1 tag in this repository - in particular, it has a version of _keyify that doesn't lowercase the key.
--- mistune-0.8.1/mistune.py 2017-11-07 06:00:37.000000000 +0000
+++ mistune-git-0.8.1/mistune.py 2017-12-02 16:28:45.117989907 +0000
@@ -48,7 +48,8 @@
def _keyify(key):
- return _key_pattern.sub(' ', escape(key, quote=True))
+ key = escape(key.lower(), quote=True)
+ return _key_pattern.sub(' ', key)
def escape(text, quote=False, smart_amp=True):
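To make the behavioural difference visible, here is a small self-contained sketch. It is simplified: it leaves out the escape() call from mistune and uses a stand-in whitespace pattern, so it only illustrates the lowercasing.

import re

_key_pattern = re.compile(r'\s+')

def keyify_pypi(key):                      # behaviour of the tarball that was on PyPI
    return _key_pattern.sub(' ', key)

def keyify_tag(key):                       # behaviour of the v0.8.1 git tag
    return _key_pattern.sub(' ', key.lower())

print(keyify_pypi('Hi') == keyify_pypi('hi'))  # False -> case-insensitive refs fail
print(keyify_tag('Hi') == keyify_tag('hi'))    # True  -> [hi] resolves to the [HI] definition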
@lepture, please could you put out a 0.8.2 release with the correct contents? (Don't just reroll the existing tarball - the incorrect version is cached in various places now.)
@atsampson done.
| gharchive/issue | 2017-11-11T23:09:00 | 2025-04-01T06:39:22.554158 | {
"authors": [
"Cogitri",
"atsampson",
"lepture"
],
"repo": "lepture/mistune",
"url": "https://github.com/lepture/mistune/issues/141",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2594256527 | Set default value
I tried to pass viewTime: 09:00 in options but it has no effect.
$('#inputfield').timepicker({
step: 300,
format: 'h:mm A',
viewTime: '09:00 AM',
});
I need to pass a default value in case the input is empty, so I was forced to update updatePicker in order to set my default value:
//comment old code
// let viewTime = $input.data('viewtime');
//new code to set time to 09:00 AM if the input value is empty
let inputValue = $input.val();
let viewTime =parseTime("09:00 AM");
if (inputValue) {
viewTime = parseTime(inputValue);
}
I would suggest adding a default value option so users can set it.
I'm not sure if having a default value as an option is really needed. You can probably set a default value like this:
let $input = jQuery('#inputfield');
if (!$input.val()) {
$input.val('09:00 AM');
}
Or by setting a value directly on the input tag itself:
<input type="text" id="inputfield" name="meet_time" value="09:00 AM" />
Makes sense. I've added a new defaultTime option to the newest version that should let you do that.
| gharchive/issue | 2024-10-17T10:08:18 | 2025-04-01T06:39:22.619224 | {
"authors": [
"isayedahmad",
"lesilent"
],
"repo": "lesilent/timepicker-bs4",
"url": "https://github.com/lesilent/timepicker-bs4/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1147412863 | Add VOF auxiliary physics calculations
Description of the problem
Adds two VOF auxiliary physics (phase fraction gradient and curvature) to the VOF solver. These variables can be outputted.
Future changes
These auxiliary variables will be used in the calculation of the surface tension force.
Looks very good. There is some minor cleaning to do, and some cleanup in removing variables which are never used. Additionally, can you add some comments in your assembly routines? They are not very detailed, and some comments would help readability. Otherwise it's looking all good.
Your comments have been addressed. :)
Seems good to go to me :).
Merging
| gharchive/pull-request | 2022-02-22T22:01:21 | 2025-04-01T06:39:22.646444 | {
"authors": [
"blaisb",
"shahabgol"
],
"repo": "lethe-cfd/lethe",
"url": "https://github.com/lethe-cfd/lethe/pull/403",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1082118449 | Bump: 0.8.0-next -> 0.9.0
Proposed changes
This PR bumps FIWARE Big Bang version from 0.8.0-next -> 0.9.0.
Types of changes
What types of changes does your code introduce to the project: Put an x in the boxes that apply
[ ] Bugfix (non-breaking change which fixes an issue)
[X] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] Update only documentation, not any source code.
Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of
them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before
merging your code.
[X] I have read the CONTRIBUTING doc
[ ] I have signed the CLA
[ ] I have updated the change log (CHANGELOG.md)
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] I have added necessary documentation (if appropriate)
[ ] Any dependent changes have been merged and published in downstream modules
Further comments
N/A
Codecov Report
Merging #124 (597a90b) into main (a188ec8) will not change coverage.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## main #124 +/- ##
=========================================
Coverage 100.00% 100.00%
=========================================
Files 2 2
Lines 1474 1474
=========================================
Hits 1474 1474
Impacted Files | Coverage Δ
config.sh | 100.00% <100.00%> (ø)
lets-fiware.sh | 100.00% <100.00%> (ø)
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a188ec8...597a90b. Read the comment docs.
| gharchive/pull-request | 2021-12-16T12:06:14 | 2025-04-01T06:39:22.661830 | {
"authors": [
"codecov-commenter",
"fisuda"
],
"repo": "lets-fiware/FIWARE-Big-Bang",
"url": "https://github.com/lets-fiware/FIWARE-Big-Bang/pull/124",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1000170591 | Fix certbot option
Proposed changes
This PR fixes certbot option.
Types of changes
What types of changes does your code introduce to the project: Put an x in the boxes that apply
[X] Bugfix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] Update only documentation, not any source code.
Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of
them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before
merging your code.
[X] I have read the CONTRIBUTING doc
[ ] I have signed the CLA
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] I have added necessary documentation (if appropriate)
[ ] Any dependent changes have been merged and published in downstream modules
Further comments
N/A
Codecov Report
Merging #38 (7f6a9c2) into main (a3f0acf) will not change coverage.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## main #38 +/- ##
=======================================
Coverage 72.39% 72.39%
=======================================
Files 2 2
Lines 547 547
=======================================
Hits 396 396
Misses 151 151
Impacted Files | Coverage Δ
lets-fiware.sh | 71.56% <0.00%> (ø)
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a3f0acf...7f6a9c2. Read the comment docs.
| gharchive/pull-request | 2021-09-19T00:47:45 | 2025-04-01T06:39:22.674330 | {
"authors": [
"codecov-commenter",
"fisuda"
],
"repo": "lets-fiware/FIWARE-Big-Bang",
"url": "https://github.com/lets-fiware/FIWARE-Big-Bang/pull/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
352822119 | Is there a way to filter slaves in redis cluster whose replication link is down
Some slaves' replication link to the master is down at runtime. If we hit such a slave we still get responses even though its link with the master is down. Is there a way we could point to the master intermittently?
There is currently no way to filter these nodes. How do you discover that a particular node replication link is down? INFO replication or is there a flag in CLUSTER NODES?
@mp911de Thanks for the quick response. We discover it from INFO replication. Currently, the ReadFrom cannot be plugged in to be used for this.
You could query INFO from the individual nodes yourself and keep a table of that state somewhere around. I think overriding RedisClusterClient.determinePartitions(…) is the appropriate hook to fetch the data you need. Within a custom ReadFrom you can then filter nodes that do not have a master link.
On a side note: Wouldn't it be easier to set slave-serve-stale-data to yes in your Redis config?
@mp911de We are not using slave-serve-stale-data because we don't do a bgsave on the slaves. So when a slave resyncs the latest data on restart, we cannot tolerate showing stale data.
Closing this one as the question is answered.
| gharchive/issue | 2018-08-22T06:42:46 | 2025-04-01T06:39:22.683956 | {
"authors": [
"mp911de",
"s-aravind-flipkart"
],
"repo": "lettuce-io/lettuce-core",
"url": "https://github.com/lettuce-io/lettuce-core/issues/833",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1486792581 | Add release documentation
Had Tomek follow the instructions. takes off white lab coat
Thinking about it. Should this be inside the README? As it will then also be published to NPM. Would it make sense to have a CONTRIBUTORS.md instead?
Closed in favour of #9.
| gharchive/pull-request | 2022-12-09T13:47:57 | 2025-04-01T06:39:22.688340 | {
"authors": [
"twesterhuis"
],
"repo": "leukeleu/prettier-config",
"url": "https://github.com/leukeleu/prettier-config/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2215880458 | chore(setup): configure tailwind integration
follow the guide to configure tailwindcss with astro
(https://docs.astro.build/guides/integrations-guide/tailwind)
#3 👈
main
This stack of pull requests is managed by Graphite. Learn more about stacking.
| gharchive/pull-request | 2024-03-29T19:06:26 | 2025-04-01T06:39:22.728955 | {
"authors": [
"lewxdev"
],
"repo": "lewxdev/lewx.dev",
"url": "https://github.com/lewxdev/lewx.dev/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1966681835 | 🛑 Movie Banners is down
In 3d93501, Movie Banners (https://fh-api.filmhouseng.com/movie/movie-banners?platform=WEB) was down:
HTTP code: 502
Response time: 69 ms
Resolved: Movie Banners is back up in 22f02a7 after 1 hour, 29 minutes.
| gharchive/issue | 2023-10-28T17:42:29 | 2025-04-01T06:39:22.731559 | {
"authors": [
"lexNwimue"
],
"repo": "lexNwimue/filmhouse-monitor",
"url": "https://github.com/lexNwimue/filmhouse-monitor/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
266306258 | RankNTypes issue
Ran into an issue with the following (contrived) example:
(data Nop
(nop (∀ [a] {a -> a})))
(defn ->nop : (∀ [a] {a -> Nop -> a})
[[x (nop f)] (f x)])
The equivalent Haskell is:
data Nop = Nop (forall a. a -> a)
toNot :: a -> Nop -> a
toNot x (Nop f) = f x
Hackett complains with:
; a38218: skolem escaped its scope
; in: a38218
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:113:4 simplify/elaborate
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:98:2 τs⇔/λ!
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:214:2 τ⇔/λ!
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:229:2 τ⇔!
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:238:2 τ⇐!
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:243:2
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:368:0
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:200:8 for-loop
Removing the outer quantification also throws the same error
(defn int-nop : {Nop -> Integer}
[[(nop f)] (f 3)])
For some reason, Hackett really doesn't like using quantified types stored within data. The following does work:
(defn int-nop/fn : {(∀ [a] {a -> a}) -> Integer}
[[f] (f 3)])
I'm going to guess this has something to do with unordered contexts, or maybe is just a small bug in the implementation. I'll take a look when I have some time
I’ve known for a while that way skolems are handled is currently very broken, though I assumed it generally erred on the side of being more permissive. I think the way skolems are added and removed from contexts it probably totally wrong, and I just didn’t put a lot of effort into making them right and coming up with good test cases. That needs to be improved, and the skolem escape error message should also be made more user-friendly when that happens (currently it’s so unreliable that I didn’t even bother).
Digging into this slightly, the issue seems to clearly be in the typechecking for pattern-matching. Currently, this is done with pat⇒! and pat⇐!. When pat⇒! infers the types for a match against a data constructor, it calls pat⇐! to try and ensure subpatterns have the proper types. The trouble is that pat⇐! eventually calls τ<:!, which is wrong—subsumption always instantiates quantifiers.
Currently, it requires that subpatterns’ types by subtypes of data constructors’ types. This is probably the worst possible choice, and flipping that relation makes your example typecheck. However, even flipping the subsumption relation makes pattern-matching instantiate quantifiers too early. When pattern-matching against a nop constructor, you should end up with a polymorphic binding, not a monomorphic one. Subsumption will instantiate the quantifier to a fresh unification variable, which means your binding can be used with any type, but only one (unless you explicitly force generalization by creating a local binding with a polymorphic type signature).
I think the solution is to perform some sort of simpler unification algorithm that doesn’t over-instantiate. But I haven’t taken the time to figure out exactly how that should work.
An immediate solution would be to modify the pat⇐! function to recognize pattern variables rather than just calling into pat⇒!. The previous way is problematic because all pattern variables end up with τ:var^ types, which can't be instantiated to polymorphic types.
However I'm not exactly sure why it would complain about skolems "escaping" either way.
| gharchive/issue | 2017-10-17T22:57:28 | 2025-04-01T06:39:22.737904 | {
"authors": [
"iitalics",
"lexi-lambda"
],
"repo": "lexi-lambda/hackett",
"url": "https://github.com/lexi-lambda/hackett/issues/46",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
617165215 | Check ambiguous cognate sets
Depending on how many cases there are, it may even be possible to manually assign them. But in principle, this dataset has partial cognates, as indicated by A/B in the cognates.tsv file, while corresponding cognates in the words themselves are not marked. If there are just a few cases, one could catch them in the code.
I don't understand what you mean exactly by "manually assign" them and "catch them in the code".
Check lexibank_pharaocoracholaztecan.py. There I wrote code that essentially parses the word document, catches newlines inside the table (converted the table to plain text, but I had to deal with multiple newlines inside the same table), and also identified concepts, etc.
This code allows us to check certain things explicitly (which I call "manually"). This has the advantage of allowing us to do things without touching the original data, as it has been published as is, and it makes more sense to not touch it anymore (only if you write a new paper and do more codings).
@maunus, I have now checked the cognates again. There are some cases not clear to me (I refer to cognates.tsv extracted from your excel sheet).
do you make a distinction between a and A, as I find in row 3
what is the difference between ? and -, both occurring in row 10, for example?
I understand the A/B structure, but in one case you have a/(B) (50), in another case you have A/(B) (54), and in one case you have C D (is the latter C/D?)
what about the cases of ab in line 68?
I have a concrete proposal how to cope with this.
If you check the following examples, there are not many ambiguous cases:
{
"A(B)": ["A"],
"A/(B)": ["A"],
"A/B": ["A", "B"],
"A/B/C": ["A", "B", "C"],
"A/B/D": ["A", "B", "D"],
"A/B?": ["A"],
"A/C": ["A", "C"],
"B/(A)": ["A"],
"B/(a)": ["B"],
"B/C": ["B", "C"],
"C D": ["C", "D"],
"C/(B)": ["C"],
"C/B": ["C", "B"],
"C/E": ["C", "E"],
"D/B": ["D", "B"],
"a/(B)": ["a"],
"a/A": ["a", "A"],
"a/B": ["a", "B"],
"ab": ["ab"],
}
The data is provided in a Python dictionary (or JSON datastructure) here. You can see how I treat the source from the target, so C/B is two elements, but a/(B) is one element only, assuming you also could not count that in your nexus file.
If you have two cognates, we provide the word form twice. This is not best practice, but we tolerate it for now, as this is also an example dataset to show you how to do cognate annotation in a more consistent and transparent way with additional tools and long table formats.
If you want to modify parts of the decisions I made here, just point me to them here, or change them directly in the code.
Most of those differences are really just information to myself, about the more detailed structure of the cognates, so A and a are different versions of the same cognate root, whereas B and b would be two versions of another root. I haven't actually used this in the analysis but just treated a/A as the same, but I would like to, since it would give a more fine-grained structure of shared roots and innovations (but it adds information about phonological changes, grammatical innovations etc., so I don't know if it really belongs).
Okay. If we now have as an initial goal just to make it possible to derive the nexus or the distances file as it was underlying your paper, we'd then say: lower case and upper case are the same, right? For all purposes going beyond this, for additional analyses, I recommend starting from the wordlist file that is submitted in examples and loading it into edictor. It has several advantages: first, it shows the long table format we use, which allows you to annotate cognates and words in the same table; second, you just git-clone this repository and then open the file in edictor. You can annotate cognates, etc., and use this for future studies (and I can always help if there are problems).
The distinction between a and A is that they are form variants of the same cognate root, so the varieties that have a have a shared innovation to the root.
I think in the first rows I was trying to keep apart ? and - as two different kinds of missing data, one being when there is no data in the sources, and the other being when the extant sources do not allow us to reconstruct a form for PCN. But it seems that in the lower rows I abandoned this distinction (as I probably realized it makes no difference to the analysis). I think we should probably just have "?" for "unknown" across the board.
In 50 Cora and Huichol have a compound root combining A+B; Nahuatl has A, but also root B, though in another meaning, so it shouldn't figure under "navel". Other UA has only root A. So the meaning of the parentheses is that the root is there, but that it shouldn't count in the analysis (so basically extra information for us, but irrelevant to the computation). C D is supposed to be C/D.
In line 68 it seems they all ought to be capitals, AB and A and B, since there isn't any distinction between a and A, or b and B.
Yes, lower case and upper case should be treated the same in the nexus file, and anything in () should be ignored.
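For illustration, that post-processing rule could be expressed roughly like this (a Python sketch; the repository actually uses the explicit mapping shown above, which also settles ambiguous cases such as "A/B?" or "ab" by hand):
import re

def cognate_sets(raw):
    """Split an annotation like 'a/(B)' or 'C D' into cognate set labels:
    drop anything in parentheses, split on '/' or whitespace, upper-case."""
    cleaned = re.sub(r"\([^)]*\)", "", raw)
    parts = re.split(r"[/\s]+", cleaned)
    return sorted({p.upper() for p in parts if p and p not in {"?", "-"}})

print(cognate_sets("a/(B)"))  # ['A']
print(cognate_sets("C D"))    # ['C', 'D']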
And yes, I want to start learning edictor once we are don with this part. I want to use it for my Nahuatl dialect database.
I think the change I would make to your proposal is:
"ab": ["A", "B"]
Should I be cleaning the cognates.tsv file now? Or will that screw up the stuff you have already been extracting from it?
If we just delete the stuff in () and change all the lower case into capitals we could dispense with the extra code. The information they represent really is only useful for qualitative purposes.
Rather not clean; we better cover it from the code, since this is "officially published", so we would rather post-edit it than touch the original source.
All done already. There is no extra code but a mapping, so it is better to leave it as this, and keep the original data intact.
Ok, we keep it as is then. Though, I feel the version here is in a way a more "official publication" than the pdf at my website, and I would like it to be better.
Ok, in the distances.dst file there are more decimals than I operated with - where do they come from?
It is hard to compare with the languages in a different order.
I didn't include the proto-languages in my distance matrix, and for the distance number I simply counted the number of cognates out of 100, so I got 0.65 for Cora/Huichol.
Here is the matrix I used:
And here is the one at distances.dst compared with the one I used in Splitstree
I can't really figure out how to compare the two tables. The numbers are inverted, right, so that Cora/Huichol gives a distance of 0.3579, but 65/100 shared forms. In the distance matrix I used, when I put it into Splitstree, I put 0.35 there (just taking the inverse of 65/100).
Cognate counting is a tricky business.
There are several ways to count, and often, it is not clear which version one uses.
E.g., you have missing data: how do you count?
how do you count shared cognates?
Our standard calculation in lingpy only compares existing items in both languages. Furthermore, in case of multiple matches, it averages, so you have A/B, it'll give 0.5 to shared A and 0.5 to shared B, etc.
Excluding languages is trivial, just have to adjust the script.
Ok, so that does change the outcome a bit, and accounts for the decimal differences. Now I want to see what the network looks like with those figures.
Here's the count of shared cognates (ignoring meanings):
| Language 1 | Language 2 | Count |
|:------------|:------------|---:|
| Cahita | Cora | 33 |
| Cahita | Huichol | 36 |
| Cahita | Tarahumaran | 66 |
| Cahita | Tepiman | 57 |
| Cora | Huichol | 63 |
| Cora | Tarahumaran | 26 |
| Cora | Tepiman | 34 |
| Huichol | Tarahumaran | 32 |
| Huichol | Tepiman | 35 |
| Tarahumaran | Tepiman | 51 |
So there are differences, but it's hard to tell why.
Oh, I didn't exclude proto-Nahua by the way. That is important.
2 cognates lower for Cora/Huichol
Some of the differences are really large.
Wait, I found the bug. We forgot to account for upper-casing the "a" etc.
| Language 1 | Language 2 | Count |
|:------------|:------------|---:|
| Cahita | Cora | 45 |
| Cahita | Huichol | 51 |
| Cahita | Tarahumaran | 68 |
| Cahita | Tepiman | 60 |
| Cora | Huichol | 67 |
| Cora | Tarahumaran | 40 |
| Cora | Tepiman | 43 |
| Huichol | Tarahumaran | 46 |
| Huichol | Tepiman | 44 |
| Tarahumaran | Tepiman | 55 |
Excellent. Can you include proto-Nahuan in the list of shared cognates?
| Language 1 | Language 2 | Count |
|:------------|:------------|---:|
| Cahita | Cora | 45 |
| Cahita | Huichol | 51 |
| Cahita | Tarahumaran | 68 |
| Cahita | Tepiman | 60 |
| Cahita | ProtoNahua | 54 |
| Cora | Huichol | 67 |
| Cora | Tarahumaran | 40 |
| Cora | Tepiman | 43 |
| Cora | ProtoNahua | 58 |
| Huichol | Tarahumaran | 46 |
| Huichol | Tepiman | 44 |
| Huichol | ProtoNahua | 57 |
| Tarahumaran | Tepiman | 55 |
| Tarahumaran | ProtoNahua | 44 |
| Tepiman | ProtoNahua | 49 |
BTW: the numbers differ still, since you counted only shared cognates PER cognate set, so AB in one and AB in another would only count one time. This is a bit inconsistent, since you counted AB vs. A also as one match, so the count here (also easier to code on the fly) just counts all shared cognate sets, and I checked with cora vs. huichol, where you find two ABs, so this makes up for 65+2 = 67.
We have a NOTE.md file on github. There, one can add custom comments. So you could do so, and explain a bit more, if you want. E.g., your matrix would be useful there. And we can also add your nexus file here directly.
Great, thanks! There are some odd shifts; for example, now Cora/Nahuatl has 58 where I originally counted 53, and Nahuatl/Huichol has 57 where I counted 56.
It seems most numbers are higher. Does it count A/B A/B as a single match or as a double match?
I found the suggestion in this article to make a lot of sense: it suggests counting percentages of shared vocabulary not out of the 100 but only out of the potential cognates. So when there is missing data the number of potential cognates falls, and when there are double cognates it rises above 100. Is this something we could/should do?
Haugen, Jason D., Michael Everdell, and Benjamin A. Kuperman. "Uto-Aztecan Lexicostatistics 2.0." International Journal of American Linguistics 86, no. 1 (2020): 1-30.
See my note above on AB counting.
Yes, I read it after typing.
Well, you know, with cognate counting I would say: there are so many ways, it won't make much difference. The most important thing is: make it standardized, make it transparent how you count, or use a code that always does the same.
I think the point in that article is that since some of the UA languages have very little documentation, the missing data can skew the numbers quite a bit.
The debate is very long, more advanced is the technique by Starostin (whom not many read), and they have a standardized procedure.
He's the first who also said that borrowings should count as missing data.
Ah, that is interesting. This hasn't come up in this word list, but that is how I would do it if I identify a borrowing from Nahuatl into Cahitan, for example.
And one should try to avoid missing data. In this case, it is better to not use a language, if one has low mutual coverage. We discussed this in our Sino-Tibetan study.
Reference here. There's a PDF online (easy to find, otherwise send an email and I share it).
But sometimes they are the languages one is interested in... But I did exclude Tubar and Opata from this list for that reason.
So how we count in lingpy is:
determine slots where both have a word
take this sublist as 100%
count how many cognate sets are shared, if you have synonyms, take proportions (!)
divide this number by the length of the sublist
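A rough sketch of that procedure (plain Python for illustration, not lingpy's actual implementation; synonym handling is simplified to proportional set overlap):
def cognate_distance(lang_a, lang_b):
    """lang_a and lang_b map concept -> set of cognate set IDs for one language."""
    shared = [c for c in lang_a if c in lang_b and lang_a[c] and lang_b[c]]
    if not shared:
        return 1.0
    matches = 0.0
    for concept in shared:
        a, b = lang_a[concept], lang_b[concept]
        # proportional credit when a slot has several cognate sets (synonyms)
        matches += len(a & b) / max(len(a), len(b))
    return 1.0 - matches / len(shared)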
But I think one can prove that it doesn't make that big of a difference.
It is more important to keep one's data in such a clean state that one doesn't need to do lexicostatistics with UPGMA, but that one can do more complex phylogenetic studies. Neighbornets are nice for comparison, but even here, the preferred way is to go for a binarized representation of presence or absence of cognate sets.
That makes a lot of sense.
| gharchive/issue | 2020-05-13T05:57:50 | 2025-04-01T06:39:22.774967 | {
"authors": [
"LinguList",
"Maunus"
],
"repo": "lexibank/pharaocoracholaztecan",
"url": "https://github.com/lexibank/pharaocoracholaztecan/issues/6",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1672164356 | Please review annotation of 'll' as 'ʒ' in Imbabura Quechua.
My Quechua dictionary (not necessarily correct for Imbabura Quechua) indicates that 'll' is a voiced lateral palatal fricative, which doesn't have a symbol on my IPA chart, but sits next to 'j'. This would make it similar to the Spanish pronunciation of 'll'.
We have lateral release and friction as modifying features in clts.
>>> from pyclts import CLTS
>>> bipa = CLTS().bipa
>>> bipa["with-friction voiced palatal lateral approximant consonant"].s
'ʎ͓'
>>> bipa["with-lateral-release voiced palatal fricative consonant"].s
'ʝˡ'
Either variant would be fine with me. If you want to modify this in Quechua, this would be fine with me!
On the other hand: 'ʒ' is very typical for the Spanish of Mendoza and the region. So it is not unusual for variants of Spanish, or other languages of the region, to acquire this sound for ll from Spanish, which is an intermediate stage towards the extreme variant, the unvoiced 'ʒ' of Buenos Aires.
Ok. Best action seems to be to leave it as is. It just surprised me. And matching 'l' in the Aymara was more costly, but such is life! Thanks.
J.E.M.
| gharchive/issue | 2023-04-18T00:44:33 | 2025-04-01T06:39:22.780571 | {
"authors": [
"LinguList",
"fractaldragonflies"
],
"repo": "lexibank/wold",
"url": "https://github.com/lexibank/wold/issues/23",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
70481160 | get<string_type> works for SQL_LONGVARCHAR
As per your recommendation in the issue I created, void result::result_impl::get_ref_impl<string_type>(short column, string_type& result) const now handles LONGVARCHAR for SQL_C_CHAR.
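A hypothetical usage sketch of what this enables: pulling a LONGVARCHAR / text column straight into a string (the DSN and table names are placeholders, and the header path and string typedef may differ between nanodbc versions):
#include <nanodbc.h>
#include <vector>

std::vector<nanodbc::string_type> read_long_text()
{
    nanodbc::connection conn(NANODBC_TEXT("DSN=mydb")); // placeholder DSN
    nanodbc::result rows = nanodbc::execute(conn, NANODBC_TEXT("select big_text from docs"));
    std::vector<nanodbc::string_type> out;
    while (rows.next())
        out.push_back(rows.get<nanodbc::string_type>(0)); // now also works for LONGVARCHAR columns
    return out;
}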
Dude, awesome! Have you tested this with your PostgreSQL setup and it works correctly?
Yup, tested it against a few different variants of multi-character strings and it seems to work fine. I am also testing against SQL Server, though I don't have any LONGVARCHAR fields to test against right now. FYI, I am using nanodbc to pull SQL query results directly into Excel spreadsheets with a C++ XLL. Works like a charm.
Very cool! Glad my library is working out for you. It's been a while since I've worked with C++ and ODBC, but I still remember how bad it was writing straight ODBC code shudders. Much thanks for improving the library; I've wanted to tackle CLOB/BLOB support for a long time but just never got around to it.
| gharchive/pull-request | 2015-04-23T18:31:50 | 2025-04-01T06:39:22.783177 | {
"authors": [
"lexicalunit",
"manwithahammer"
],
"repo": "lexicalunit/nanodbc",
"url": "https://github.com/lexicalunit/nanodbc/pull/39",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1881570388 | [10.4 stable] Partially revert "vtpm : clean up and bump up vtpm-tools to v5.5"
This patch partially reverts the following commit:
06c19647e254 ("vtpm : clean up and bump up vtpm-tools to v5.5")
namely bumping up vtpm-tools and tss to a newer version, because of some compatibility issues with old eve-tools discovered by customers. Commit needs to be verified once again by CS.
CC: @siddharthzed
Closing this due to https://github.com/lf-edge/eve/pull/3438#issuecomment-1725787326
| gharchive/pull-request | 2023-09-05T09:22:48 | 2025-04-01T06:39:22.795880 | {
"authors": [
"rouming"
],
"repo": "lf-edge/eve",
"url": "https://github.com/lf-edge/eve/pull/3439",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
106419333 | mozilla-modern.badssl.com (+ intermediate, +old)
The Mozilla TLS configurations are one of the most commonly used configurations on the internet:
https://wiki.mozilla.org/Security/Server_Side_TLS
As such, it would probably be nice to have a site for each level of recommendations. Note that the Mozilla TLS configuration generator should make this pretty easy to do:
https://mozilla.github.io/server-side-tls/ssl-config-generator/
Dupe of #22. It seems this is as good an idea as ever. ;-)
Hah! u r 2 smt 4 me!
| gharchive/issue | 2015-09-14T20:17:24 | 2025-04-01T06:39:22.804927 | {
"authors": [
"lgarron",
"marumari"
],
"repo": "lgarron/badssl.com",
"url": "https://github.com/lgarron/badssl.com/issues/69",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
274149154 | Project overview / Filters
Condense the list of available filters a bit, more like a cloud instead of a list.
Save the most recently used filters (per project).
Allow choosing the columns in the overview dynamically; Name and Description are actually not that interesting for now.
In general, less spacing in the lists.
The list of filters has been made smaller.
The filters no longer expand once I have added some, so I can no longer select the filter values.
It works for me on the current version. Update again and run "npm install" beforehand. Since I added new packages, problems can occur if they are not up to date.
Ah okay, I'm seeing the problem now too. It only occurs when navigating to the page. After a reload it works. Working on a fix.
Fixed it. Strange bug.
Great, now I've found something else right away; we should maybe talk about what makes more sense here. Since the flags are just integers, it of course gets ugly when you enter e.g. 5 for max_epochs; there you naturally don't want the "flag" to be translated, because at that point it isn't a flag but simply the number 5. One option would be to allow specifying a type (flag, int) for each parameter and translate accordingly, or to switch all the flags to hexadecimal. No idea which is smarter. In any case, it would be nice if, when you click on Filter, the fields could be toggled with a colour; then you can immediately see which ones you have already clicked and can also remove them again if you misclicked ;). When should we meet again?
All points implemented so far.
Hm, I can toggle the filters now, but they don't appear in the list on the left :o
Yeah, that was my attempt at sorting the filters. It didn't work. They were shown when you navigated to another page and then back. I have taken it out again for now. I'll have to come up with something else.
| gharchive/issue | 2017-11-15T13:04:19 | 2025-04-01T06:39:22.819938 | {
"authors": [
"goforthanddie",
"lgueldenhaupt"
],
"repo": "lgueldenhaupt/BA",
"url": "https://github.com/lgueldenhaupt/BA/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
267275342 | OS_FrontendMaster-dl
The tool cannot download workshops on Frontend Masters.
This issue is not helpful unless you provide more details about what problem you are currently facing. Please reopen the issue later when you can add a more detailed description to the issue.
| gharchive/issue | 2017-10-20T19:11:46 | 2025-04-01T06:39:22.841655 | {
"authors": [
"li-xinyang",
"ssanusi"
],
"repo": "li-xinyang/OS_FrontendMaster-dl",
"url": "https://github.com/li-xinyang/OS_FrontendMaster-dl/issues/32",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2099505891 | Why don't bodies sometimes collide?
How can this be configured?
Video:
https://github.com/liabru/matter-js/assets/25684443/bd0a8523-d791-48f8-bcb6-d15992e43774
When I encounter this situation, the console will report an error
| gharchive/issue | 2024-01-25T03:42:17 | 2025-04-01T06:39:22.844371 | {
"authors": [
"MLH-AIDS",
"MaxMinimus"
],
"repo": "liabru/matter-js",
"url": "https://github.com/liabru/matter-js/issues/1273",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2209799618 | MouseConstraint preventing clicks on buttons on mobile.
<html>
<body>
<div id="mouse">
<button id="button">Test</button>
</div>
</body>
</html>
<script>
import Matter from "matter-js";
document.querySelector("#button")!.addEventListener("click", console.log);
const engine = Matter.Engine.create();
Matter.Composite.add(
engine.world,
Matter.MouseConstraint.create(engine, {
mouse: Matter.Mouse.create(document.querySelector("#mouse")!),
}),
);
</script>
I can't seem to click on the button on mobile when there is a MouseConstraint. Shouldn't nothing happen, at least when there is no body at the location you touch, especially in this example where there are no bodies at all?
I don’t see how/where the matterJS is being placed with this dummy code. So I can’t verify if the problem is associated with an overlaying canvas element.
Nor do I understand why there is an exclamation mark at the end of mouse: Matter.Mouse.create(
@JeffreyArts Presumably, the !s are TypeScript, but you're right it's unclear because TS doesn't work in <script> tags. A complete, runnable example is missing here.
| gharchive/issue | 2024-03-27T04:02:16 | 2025-04-01T06:39:22.846635 | {
"authors": [
"JeffreyArts",
"MichaelPriebe",
"ggorlen"
],
"repo": "liabru/matter-js",
"url": "https://github.com/liabru/matter-js/issues/1283",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
238721296 | layout_behavior属性缺失
Error:(25) No resource identifier found for attribute 'layout_behavior' in package 'com.liaoinstan.demospring'
(⊙_⊙;) ...what?
| gharchive/issue | 2017-06-27T02:32:55 | 2025-04-01T06:39:22.850451 | {
"authors": [
"fishsoft",
"liaoinstan"
],
"repo": "liaoinstan/SpringView",
"url": "https://github.com/liaoinstan/SpringView/issues/56",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
58229441 | FileHandleResolver for model asset loaders
The G3dModelLoaders use JsonReader/UBJsonReader. I wrote my own FileHandleResolver and now the model loaders search in the wrong place. I guess the json readers need the resolvers as well? At least they get files passed to them in the loader and look in the wrong place.
No, the json readers don't need to resolve any files. Please include enough information to reproduce the issue you're reporting. https://github.com/libgdx/libgdx/wiki/Getting-Help
Closing this. If you still think that this is an issue then please provide the requested information and we'll reopen.
will do. for now I've found another solution. :)
| gharchive/issue | 2015-02-19T16:15:26 | 2025-04-01T06:39:22.901847 | {
"authors": [
"mk1x86",
"xoppa"
],
"repo": "libgdx/libgdx",
"url": "https://github.com/libgdx/libgdx/issues/2861",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
129956075 | com.badlogic.gdx.backends.lwjgl.LwjglGL20.toIntBuffer problem
private IntBuffer toIntBuffer (int v[], int offset, int count) {
ensureBufferCapacity(count << 2);
floatBuffer.clear(); // !!!! we use intBuffer, but clear floatBuffer
com.badlogic.gdx.utils.BufferUtils.copy(v, count, offset, intBuffer);
return intBuffer;
}
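Presumably the fix is just to clear the buffer that is actually being filled; a sketch of the corrected method (mirroring the snippet above, not the actual libGDX patch):
private IntBuffer toIntBuffer (int v[], int offset, int count) {
    ensureBufferCapacity(count << 2);
    intBuffer.clear(); // clear the buffer we are about to fill, not floatBuffer
    com.badlogic.gdx.utils.BufferUtils.copy(v, count, offset, intBuffer);
    return intBuffer;
}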
https://github.com/libgdx/libgdx/wiki/Issue-Tracker
| gharchive/issue | 2016-01-30T08:13:29 | 2025-04-01T06:39:22.902988 | {
"authors": [
"fogone",
"xoppa"
],
"repo": "libgdx/libgdx",
"url": "https://github.com/libgdx/libgdx/issues/3801",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
651806624 | [Bug] one validator and one fullnode nodes rebooted on testnet
🐛 Bug
Unexpected node reboots were observed on testnet:
one validator node (val-5) rebooted at 2020-07-02T16:01:30 PST
one fullnode node (fn-0) rebooted at 2020-07-02T03:02:30 PST
To reproduce
There is no known reliable repro step at the moment.
This is the first recorded incident of validator reboot, and second for fullnode. Fullnode was first seen rebooting on its own in a8cd371, deployed during the week of 5/20.
Expected Behavior
These two nodes are expected to remain up and in operation until next scheduled update.
System information
Please complete the following information:
Libra Version 21768f2
Rust Version 1.44.0
Additional context
All validators were connected to their network peers
All fullnodes were connected
val-5 's disk filled up before it rebooted
No log was found in the ECS instances to drill down further
ECS console recorded the event but no further info was available
can we ensure we retain the logs of crashed container in the future?
We already did. There is log rotate. It was just unfortunately in this case the logs weren't there as they should have been.
cc @bmwill - flagging this issue to see if it's something the observability design can help with.
| gharchive/issue | 2020-07-06T20:43:10 | 2025-04-01T06:39:22.996910 | {
"authors": [
"sausagee",
"zekun000"
],
"repo": "libra/libra",
"url": "https://github.com/libra/libra/issues/4921",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
532988298 | [cluster-test] [ci] Decouple --run-ci-suite and --changelog
This allows combining the --changelog and --run-ci-suite flags.
This is needed because when cluster test runs from CI, it can't reliably get from-to commits for changelog
For CI run we will need to specify them manually:
ct --run-ci-suite --changelog commit_from commit_to
@bors-libra delegate+
:v: @andll can now approve this pull request
@bors-libra r=sausagee
:pushpin: Commit 201a690 has been approved by sausagee
:hourglass: Testing commit 201a6907289409732d79416a0372ac9720de73fc with merge 45c946130e2fe7502f22fc73f5ffdd82744edc77...
@bors-libra r=sausagee
:pushpin: Commit 840675f has been approved by sausagee
:hourglass: Testing commit 840675fc0db6efbb3973b46cad247b2d91a49ea3 with merge 15a41a809e010b875c3156c66e8b1ba229bc99d6...
:sunny: Test successful - checks-circle_commit_workflow
Approved by: sausagee
Pushing 15a41a809e010b875c3156c66e8b1ba229bc99d6 to master...
| gharchive/pull-request | 2019-12-04T22:58:39 | 2025-04-01T06:39:23.001522 | {
"authors": [
"andll",
"bors-libra",
"sausagee"
],
"repo": "libra/libra",
"url": "https://github.com/libra/libra/pull/1908",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
470762595 | Consistent naming of 0x0
Why is one 0x00 and the other 0x0? Looks weird.
Thanks for the contribution! This looks like a good change, and we can definitely merge it.
However, due to some CI changes put in place this past week, to get CI to run the best thing for you to do is to close this PR and make a new one. Going forward this won't be an issue, but unfortunately, we do not have a way to force CI to run for older PRs.
| gharchive/pull-request | 2019-07-21T09:41:24 | 2025-04-01T06:39:23.003176 | {
"authors": [
"revmischa",
"tnowacki"
],
"repo": "libra/libra",
"url": "https://github.com/libra/libra/pull/275",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
722452882 | Cherry-pick PR #6506 into release-0.23: [writeset-generator] Implement a primitive tooling for generating genesis writeset
This cherry-pick was triggered by a request on #6506
Please review the diff to ensure there are not any unexpected changes.
Motivation
This implements an offchain binary tool that would be used by node operators to generate a genesis writeset in case of some catastrophic scenarios. In case a majority of validators is lost, this tool, together with the db bootstrapper, will be used to remove bad validators and kick them out of the network.
The flow should be the following: when we lose 1/3 of the nodes in the network, each node operator would:
Pause their own network
Sync their node to the latest committed state.
Use this tool to generate the waypoint transaction, e.g.:
run cargo run --bin libra-writeset-generator -- --output <path-to-genesis-transaction> remove-validators <addresses to be removed>
Spawn up the db bootstrapper with the genesis transaction generated in step 4 provided.
Have you read the Contributing Guidelines on pull requests?
Yes
Test Plan
Added e2e test for the generated transaction.
cc @sherry-x
/land
@sherry-x :exclamation: This PR is still missing approvals, unable to queue for landing
/land
| gharchive/pull-request | 2020-10-15T15:43:39 | 2025-04-01T06:39:23.007823 | {
"authors": [
"bors-libra",
"sherry-x"
],
"repo": "libra/libra",
"url": "https://github.com/libra/libra/pull/6529",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1917956068 | glorious model O showing wrong dpi list again?
Information
ratbagd version (ratbagd --version): 0.17
Distribution: Arch
Kernel version (ex. uname -srmo): Linux 6.5.5-273-tkg-pds x86_64 GNU/Linux
just like in issue #1476
DPI shows from 0 to 2000, while it should show from 0 to 12000 for the Model O; not sure how long this has been broken again.
Hi! It's probably not like that issue again, it is the same issue. We haven't had a new release yet, I've only recently pushed through the last blocker, so I hope to release soon.
Since you are on Arch, you can use {libratbag,piper}-git from AUR for now, it's what I do myself. :smile:
By the way, if you by any chance have an account on AUR, could you ask piper-git maintainer to drop the libibus dependency? Piper never actually depended on it.
oh my bad, thought there was maybe a minor version release since april
also doesn't explain how it fixed itself after a ratbag update a few days after u responded to me in that issue
anyways, cheers and sorry for my dumass
| gharchive/issue | 2023-09-28T17:13:13 | 2025-04-01T06:39:23.016356 | {
"authors": [
"Alib234",
"staticssleever668"
],
"repo": "libratbag/libratbag",
"url": "https://github.com/libratbag/libratbag/issues/1535",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1520205710 | cmake: Debug builds have a 'd' appended to the library name
Similar to https://github.com/libsdl-org/SDL/issues/6703
I suppose the following needs to be updated as well:
create libSDL3_image.so.0 instead of libSDL3_image-3.0.so.0
copy so/dylib versioning behavior of SDL3 to SDL3_image
install sdl3-image.pc instead of SDL3_image.pc
Thanks, can you make similar changes for SDL_ttf and then SDL_mixer?
| gharchive/issue | 2023-01-05T07:04:39 | 2025-04-01T06:39:23.139711 | {
"authors": [
"madebr",
"slouken"
],
"repo": "libsdl-org/SDL_image",
"url": "https://github.com/libsdl-org/SDL_image/issues/322",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
} |
367701770 | aix: don't EISDIR on read from directory fd
Remove the artificial EISDIR that was generated when trying to
uv_fs_read() from a file descriptor that refers to a directory.
We don't do that on the BSDs either (where reading from a directory
is allowed) and it introduces an extra stat() call for every read.
Refs: https://github.com/libuv/libuv/pull/2023#issuecomment-427759265
I couldn't find tests that check for the presence/absence of EISDIR. If the CI run turns up green, I'll see about adding some.
I tried to implement a test for the same in https://github.com/libuv/libuv/pull/2023. Perhaps we could use that?
Landed in https://github.com/libuv/libuv/commit/25a3894c8d59fada12253d3cb1befd14e18ecd75. Thanks Ben!
Thanks!
| gharchive/pull-request | 2018-10-08T09:20:13 | 2025-04-01T06:39:23.162452 | {
"authors": [
"bnoordhuis",
"cjihrig",
"thefourtheye",
"vtjnash"
],
"repo": "libuv/libuv",
"url": "https://github.com/libuv/libuv/pull/2025",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
415593545 | questions about batch size in GHMC_loss
Hi, thanks for your nice work. In your paper, you mentioned the best bin size is 30, which is a balanced value. What is the batch size in your experiments when you use bin size 30?
@longchuan1985 We use the batch size of 16 (8 GPUs with two images per GPU), which can be seen in the example script. And I want to clarify that the relationship between the bin size and batch size is not so strong because the effect of bin size mainly depends on the distribution of the gradient norm of examples (but I admit larger batch size will make the distribution more steady).
| gharchive/issue | 2019-02-28T12:11:53 | 2025-04-01T06:39:23.163788 | {
"authors": [
"libuyu",
"longchuan1985"
],
"repo": "libuyu/GHM_Detection",
"url": "https://github.com/libuyu/GHM_Detection/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1125023833 | addalpha() support
The VIPS docs mention an addalpha() function, but attempting to call image:addalpha() results in a VipsOperation: class "addalpha" not found error. I'm not sure why exactly this is. It's mentioned here that addalpha is just a convenience function for bandjoin_const, but I don't see what makes it different from any other operation that the binding supports.
I'd open a PR, but thing is, I'm not sure how to implement it. Typically, I'd just make a call to vips_lib.vips_addalpha(), but pyvips defines the function manually, so there might be something I'm missing.
Hi @RiskoZoSlovenska,
Most libvips operations are defined as subclasses of VipsOperation, and they all just appear in lua-vips automatically. A few very simple things (eg. addalpha, which is just two lines of code) are tiny convenience functions and are implemented in the bindings themselves.
I'd implement addalpha in lua in Image_methods.lua, which I guess is what you've done in your PR. I'll have a look.
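For reference, a hypothetical sketch of what such a convenience method might look like (the exact method-table name and call convention should follow however Image_methods.lua attaches its other methods; 255 assumes an 8-bit image, 16-bit images would want 65535, and the real PR may well differ):
function Image.addalpha(image)
    -- append a constant alpha band via bandjoin_const, as the issue describes
    return image:bandjoin_const({255})
end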
| gharchive/issue | 2022-02-05T20:28:15 | 2025-04-01T06:39:23.168075 | {
"authors": [
"RiskoZoSlovenska",
"jcupitt"
],
"repo": "libvips/lua-vips",
"url": "https://github.com/libvips/lua-vips/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1648114522 | parserInternal.h not found?
When forcing npm to rebuild from source the compilation fails since its unable to find parserInternal.h:
allon@computer:~/src/git/samplenode$ npm install --build-from-source libxmljs
> libxmljs@1.0.1 install /home/allon/src/git/samplenode/node_modules/libxmljs
> node-pre-gyp install --fallback-to-build --loglevel http
make: Entering directory '/home/allon/src/git/samplenode/node_modules/libxmljs/build'
CXX(target) Release/obj.target/xmljs/src/xml_sax_parser.o
../src/xml_sax_parser.cc:6:10: fatal error: parserInternals.h: No such file or directory
#include "parserInternals.h"
^~~~~~~~~~~~~~~~~~~
compilation terminated.
xmljs.target.mk:176: recipe for target 'Release/obj.target/xmljs/src/xml_sax_parser.o' failed
make: *** [Release/obj.target/xmljs/src/xml_sax_parser.o] Error 1
make: Leaving directory '/home/allon/src/git/samplenode/node_modules/libxmljs/build'
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/home/allon/.nvm/versions/node/v14.16.0/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack at ChildProcess.emit (events.js:315:20)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)
gyp ERR! System Linux 4.15.0-202-generic
gyp ERR! command "/home/allon/.nvm/versions/node/v14.16.0/bin/node" "/home/allon/.nvm/versions/node/v14.16.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "build" "--fallback-to-build" "--loglevel=http" "--module=/home/allon/src/git/samplenode/node_modules/libxmljs/build/Release/xmljs.node" "--module_name=xmljs" "--module_path=/home/allon/src/git/samplenode/node_modules/libxmljs/build/Release" "--napi_version=7" "--node_abi_napi=napi" "--napi_build_version=0" "--node_napi_label=node-v83"
gyp ERR! cwd /home/allon/src/git/samplenode/node_modules/libxmljs
gyp ERR! node -v v14.16.0
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok
node-pre-gyp ERR! build error
node-pre-gyp ERR! stack Error: Failed to execute '/home/allon/.nvm/versions/node/v14.16.0/bin/node /home/allon/.nvm/versions/node/v14.16.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --loglevel=http --module=/home/allon/src/git/samplenode/node_modules/libxmljs/build/Release/xmljs.node --module_name=xmljs --module_path=/home/allon/src/git/samplenode/node_modules/libxmljs/build/Release --napi_version=7 --node_abi_napi=napi --napi_build_version=0 --node_napi_label=node-v83' (1)
node-pre-gyp ERR! stack at ChildProcess.<anonymous> (/home/allon/src/git/samplenode/node_modules/@mapbox/node-pre-gyp/lib/util/compile.js:89:23)
node-pre-gyp ERR! stack at ChildProcess.emit (events.js:315:20)
node-pre-gyp ERR! stack at maybeClose (internal/child_process.js:1048:16)
node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:288:5)
node-pre-gyp ERR! System Linux 4.15.0-202-generic
node-pre-gyp ERR! command "/home/allon/.nvm/versions/node/v14.16.0/bin/node" "/home/allon/src/git/samplenode/node_modules/.bin/node-pre-gyp" "install" "--fallback-to-build" "--loglevel" "http"
node-pre-gyp ERR! cwd /home/allon/src/git/samplenode/node_modules/libxmljs
node-pre-gyp ERR! node -v v14.16.0
node-pre-gyp ERR! node-pre-gyp -v v1.0.10
node-pre-gyp ERR! not ok
Failed to execute '/home/allon/.nvm/versions/node/v14.16.0/bin/node /home/allon/.nvm/versions/node/v14.16.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --loglevel=http --module=/home/allon/src/git/samplenode/node_modules/libxmljs/build/Release/xmljs.node --module_name=xmljs --module_path=/home/allon/src/git/samplenode/node_modules/libxmljs/build/Release --napi_version=7 --node_abi_napi=napi --napi_build_version=0 --node_napi_label=node-v83' (1)
npm WARN node-fetch@2.6.9 requires a peer of encoding@^0.1.0 but none is installed. You must install peer dependencies yourself.
npm WARN samplenode@1.0.0 No description
npm WARN samplenode@1.0.0 No repository field.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! libxmljs@1.0.1 install: `node-pre-gyp install --fallback-to-build --loglevel http`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the libxmljs@1.0.1 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/allon/.npm/_logs/2023-03-30T18_20_14_564Z-debug.log
(Reproduced on several different Node.js versions and several different platforms, pasting here one example)
Looks like we need to find a way to init submodules before building from source after installing
Try after npm run init-submodules or git submodule update --init --recursive
Maybe I'm being daft, but there's no init-submodules script there - I'm not cloning it and trying to build, I'm trying to install it as a module in another project, but compile the C part locally instead of depending on a prebuilt binary
@rchipka no improvement, unfortunately. In fact, I think we're worse off:
With --build-from-source I still get an error about parserInternal.h, and another new error about failing to git run init-submodules (presumably because init-submodules assumes it would be run on libxmljs' root folder, but in fact its being executed on the root of the project that is trying to install it, which in this case isn't even a git project
allon@allon-z440:~/src/untracked/samplenode$ npm install --build-from-source libxmljs
> libxmljs@1.0.2 install /home/allon/src/untracked/samplenode/node_modules/libxmljs
> node-pre-gyp install --loglevel http || (npm run init && npm run build)
make: Entering directory '/home/allon/src/untracked/samplenode/node_modules/libxmljs/build'
CXX(target) Release/obj.target/xmljs/src/xml_sax_parser.o
../src/xml_sax_parser.cc:6:10: fatal error: parserInternals.h: No such file or directory
#include "parserInternals.h"
^~~~~~~~~~~~~~~~~~~
compilation terminated.
xmljs.target.mk:166: recipe for target 'Release/obj.target/xmljs/src/xml_sax_parser.o' failed
make: *** [Release/obj.target/xmljs/src/xml_sax_parser.o] Error 1
make: Leaving directory '/home/allon/src/untracked/samplenode/node_modules/libxmljs/build'
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/home/allon/.nvm/versions/node/v10.13.0/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:262:23)
gyp ERR! stack at ChildProcess.emit (events.js:182:13)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:240:12)
gyp ERR! System Linux 4.15.0-202-generic
gyp ERR! command "/home/allon/.nvm/versions/node/v10.13.0/bin/node" "/home/allon/.nvm/versions/node/v10.13.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "build" "--loglevel=http" "--module=/home/allon/src/untracked/samplenode/node_modules/libxmljs/build/Release/xmljs.node" "--module_name=xmljs" "--module_path=/home/allon/src/untracked/samplenode/node_modules/libxmljs/build/Release" "--napi_version=3" "--node_abi_napi=napi" "--napi_build_version=0" "--node_napi_label=node-v64"
gyp ERR! cwd /home/allon/src/untracked/samplenode/node_modules/libxmljs
gyp ERR! node -v v10.13.0
gyp ERR! node-gyp -v v3.8.0
gyp ERR! not ok
node-pre-gyp ERR! build error
node-pre-gyp ERR! stack Error: Failed to execute '/home/allon/.nvm/versions/node/v10.13.0/bin/node /home/allon/.nvm/versions/node/v10.13.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --loglevel=http --module=/home/allon/src/untracked/samplenode/node_modules/libxmljs/build/Release/xmljs.node --module_name=xmljs --module_path=/home/allon/src/untracked/samplenode/node_modules/libxmljs/build/Release --napi_version=3 --node_abi_napi=napi --napi_build_version=0 --node_napi_label=node-v64' (1)
node-pre-gyp ERR! stack at ChildProcess.cmd.on (/home/allon/src/untracked/samplenode/node_modules/@mapbox/node-pre-gyp/lib/util/compile.js:89:23)
node-pre-gyp ERR! stack at ChildProcess.emit (events.js:182:13)
node-pre-gyp ERR! stack at maybeClose (internal/child_process.js:962:16)
node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:251:5)
node-pre-gyp ERR! System Linux 4.15.0-202-generic
node-pre-gyp ERR! command "/home/allon/.nvm/versions/node/v10.13.0/bin/node" "/home/allon/src/untracked/samplenode/node_modules/.bin/node-pre-gyp" "install" "--loglevel" "http"
node-pre-gyp ERR! cwd /home/allon/src/untracked/samplenode/node_modules/libxmljs
node-pre-gyp ERR! node -v v10.13.0
node-pre-gyp ERR! node-pre-gyp -v v1.0.10
node-pre-gyp ERR! not ok
Failed to execute '/home/allon/.nvm/versions/node/v10.13.0/bin/node /home/allon/.nvm/versions/node/v10.13.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --loglevel=http --module=/home/allon/src/untracked/samplenode/node_modules/libxmljs/build/Release/xmljs.node --module_name=xmljs --module_path=/home/allon/src/untracked/samplenode/node_modules/libxmljs/build/Release --napi_version=3 --node_abi_napi=napi --napi_build_version=0 --node_napi_label=node-v64' (1)
> libxmljs@1.0.2 init /home/allon/src/untracked/samplenode/node_modules/libxmljs
> node scripts/init.js
Initializing submodules
> libxmljs@1.0.2 init-submodules /home/allon/src/untracked/samplenode/node_modules/libxmljs
> git submodule update --init --recursive
fatal: not a git repository (or any of the parent directories): .git
npm ERR! code ELIFECYCLE
npm ERR! errno 128
npm ERR! libxmljs@1.0.2 init-submodules: `git submodule update --init --recursive`
npm ERR! Exit status 128
npm ERR!
npm ERR! Failed at the libxmljs@1.0.2 init-submodules script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
npm ERR! A complete log of this run can be found in:
npm ERR! /home/allon/.npm/_logs/2023-03-31T13_45_18_262Z-debug.log
child_process.js:651
throw err;
^
Error: Command failed: npm run init-submodules
at checkExecSyncError (child_process.js:611:11)
at execSync (child_process.js:648:13)
at Object.<anonymous> (/home/allon/src/untracked/samplenode/node_modules/libxmljs/scripts/init.js:9:5)
at Module._compile (internal/modules/cjs/loader.js:688:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:699:10)
at Module.load (internal/modules/cjs/loader.js:598:32)
at tryModuleLoad (internal/modules/cjs/loader.js:537:12)
at Function.Module._load (internal/modules/cjs/loader.js:529:3)
at Function.Module.runMain (internal/modules/cjs/loader.js:741:12)
at startup (internal/bootstrap/node.js:285:19)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! libxmljs@1.0.2 init: `node scripts/init.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the libxmljs@1.0.2 init script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
npm ERR! A complete log of this run can be found in:
npm ERR! /home/allon/.npm/_logs/2023-03-31T13_45_18_282Z-debug.log
npm WARN node-fetch@2.6.9 requires a peer of encoding@^0.1.0 but none is installed. You must install peer dependencies yourself.
npm WARN samplenode@1.0.0 No description
npm WARN samplenode@1.0.0 No repository field.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! libxmljs@1.0.2 install: `node-pre-gyp install --loglevel http || (npm run init && npm run build)`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the libxmljs@1.0.2 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/allon/.npm/_logs/2023-03-31T13_45_18_382Z-debug.log
I'd like to avoid bundling the entire libxml2 source code with the library if we don't have to.
The node_modules directory already has a significant bloat problem and if one less library contributes to this, then I'd consider that a win.
If this was a thing then we'd have an easy way to accomplish this. I've tried the solution they mentioned (gyp actions), but that doesn't work for windows builds.
What we need is a way to hook into (or otherwise preempt) node-pre-gyp/node-gyp so that we can do the proper initialization before build.
Having this infrastructure setup would be beneficial anyways since libxmljs now has a fairly complex build infrastructure if you truly want to start from scratch (submodules, cmake configure, SWIG, then build).
Actually, looking at the log, it's not a cwd issue.
We won't be able to init submodules from the npm packaged version unless we include the .git directory. Hmm.
Alright, for simplicity's sake I'm thinking we're just gonna have to go back to packaging submodules.
I think I've got an answer that at least keeps git-sourced installations (eg. CI builds) simple by automating submodule init in those instances.
@mureinik hoping we're good now
@rchipka unfortunately still not :-(
Definitely making progress, but now it's failing because it can't require nan:
allon@allon-z440:~/src/untracked/samplenode$ npm install --build-from-source libxmljs
> libxmljs@1.0.3 preinstall /home/allon/src/untracked/samplenode/node_modules/libxmljs
> node scripts/preinstall.js
Initializing git repo
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
Initialized empty Git repository in /home/allon/src/untracked/samplenode/node_modules/libxmljs/.git/
> libxmljs@1.0.3 install /home/allon/src/untracked/samplenode/node_modules/libxmljs
> node-pre-gyp install --fallback-to-build --loglevel http
internal/modules/cjs/loader.js:582
throw err;
^
Error: Cannot find module 'nan'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:580:15)
at Function.Module._load (internal/modules/cjs/loader.js:506:25)
at Module.require (internal/modules/cjs/loader.js:636:17)
at require (internal/modules/cjs/helpers.js:20:18)
at [eval]:1:1
at Script.runInThisContext (vm.js:96:20)
at Object.runInThisContext (vm.js:303:38)
at Object.<anonymous> ([eval]-wrapper:6:22)
at Module._compile (internal/modules/cjs/loader.js:688:30)
at evalScript (internal/bootstrap/node.js:582:27)
gyp: Call to 'node -e "require('nan')"' returned exit status 1 while in binding.gyp. while trying to load binding.gyp
gyp ERR! configure error
gyp ERR! stack Error: `gyp` failed with exit code: 1
gyp ERR! stack at ChildProcess.onCpExit (/home/allon/.nvm/versions/node/v10.13.0/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:345:16)
gyp ERR! stack at ChildProcess.emit (events.js:182:13)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:240:12)
gyp ERR! System Linux 4.15.0-202-generic
gyp ERR! command "/home/allon/.nvm/versions/node/v10.13.0/bin/node" "/home/allon/.nvm/versions/node/v10.13.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "configure" "--fallback-to-build" "--loglevel=http" "--module=/home/allon/src/untracked/samplenode/node_modules/libxmljs/build/Release/xmljs.node" "--module_name=xmljs" "--module_path=/home/allon/src/untracked/samplenode/node_modules/libxmljs/build/Release" "--napi_version=3" "--node_abi_napi=napi" "--napi_build_version=0" "--node_napi_label=node-v64"
gyp ERR! cwd /home/allon/src/untracked/samplenode/node_modules/libxmljs
gyp ERR! node -v v10.13.0
gyp ERR! node-gyp -v v3.8.0
gyp ERR! not ok
node-pre-gyp ERR! build error
node-pre-gyp ERR! stack Error: Failed to execute '/home/allon/.nvm/versions/node/v10.13.0/bin/node /home/allon/.nvm/versions/node/v10.13.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js configure --fallback-to-build --loglevel=http --module=/home/allon/src/untracked/samplenode/node_modules/libxmljs/build/Release/xmljs.node --module_name=xmljs --module_path=/home/allon/src/untracked/samplenode/node_modules/libxmljs/build/Release --napi_version=3 --node_abi_napi=napi --napi_build_version=0 --node_napi_label=node-v64' (1)
node-pre-gyp ERR! stack at ChildProcess.cmd.on (/home/allon/src/untracked/samplenode/node_modules/@mapbox/node-pre-gyp/lib/util/compile.js:89:23)
node-pre-gyp ERR! stack at ChildProcess.emit (events.js:182:13)
node-pre-gyp ERR! stack at maybeClose (internal/child_process.js:962:16)
node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:251:5)
node-pre-gyp ERR! System Linux 4.15.0-202-generic
node-pre-gyp ERR! command "/home/allon/.nvm/versions/node/v10.13.0/bin/node" "/home/allon/src/untracked/samplenode/node_modules/.bin/node-pre-gyp" "install" "--fallback-to-build" "--loglevel" "http"
node-pre-gyp ERR! cwd /home/allon/src/untracked/samplenode/node_modules/libxmljs
node-pre-gyp ERR! node -v v10.13.0
node-pre-gyp ERR! node-pre-gyp -v v1.0.10
node-pre-gyp ERR! not ok
Failed to execute '/home/allon/.nvm/versions/node/v10.13.0/bin/node /home/allon/.nvm/versions/node/v10.13.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js configure --fallback-to-build --loglevel=http --module=/home/allon/src/untracked/samplenode/node_modules/libxmljs/build/Release/xmljs.node --module_name=xmljs --module_path=/home/allon/src/untracked/samplenode/node_modules/libxmljs/build/Release --napi_version=3 --node_abi_napi=napi --napi_build_version=0 --node_napi_label=node-v64' (1)
npm WARN node-fetch@2.6.9 requires a peer of encoding@^0.1.0 but none is installed. You must install peer dependencies yourself.
npm WARN samplenode@1.0.0 No description
npm WARN samplenode@1.0.0 No repository field.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! libxmljs@1.0.3 install: `node-pre-gyp install --fallback-to-build --loglevel http`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the libxmljs@1.0.3 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/allon/.npm/_logs/2023-03-31T16_42_10_111Z-debug.log
With my local copy of libxmljs, moving nan from the devDependencies to dependencies solves the issue, but I'm not sure this is the right move.
Alternatively, if the consumer wants to use --build-from-source they can install nan directly.
I'm guessing that using --build-from-source isn't a common practice (e.g., in my use case, I'm using it on a very old CentOS machine to force libxml to be compiled against an older glibc than the one the packaged binary was built with). Perhaps instead of "dirtying" the dependencies, defining nan in the peerDependencies and documenting it in the README is a better move?
My bad, sorry for the whiplash here. I was fixing the duplicate NaN dependency and removed it from the wrong section.
I have no issues with keeping it in "dependencies" since that should contain everything necessary to build the C++ code imo.
@rchipka Thanks!
I can confirm that with 1.0.6 it installs cleanly on both my Windows and Linux machines.
Closing this bug.
@mureinik Awesome, thanks for confirming and for the help on this one!
| gharchive/issue | 2023-03-30T18:30:20 | 2025-04-01T06:39:23.196039 | {
"authors": [
"mureinik",
"rchipka",
"riley-ciq"
],
"repo": "libxmljs/libxmljs",
"url": "https://github.com/libxmljs/libxmljs/issues/614",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
502251117 | Add support to the REST API to search with FHIR search reference parameters
Greetings, Thanks for taking the time to consider this issue. Our team is ready and happy to tackle this issue asap. Recommendations and feedback welcome!
Problem
The Blaze REST API doesn't support searching with FHIR search reference parameters, which allow searching resources by a field on that resource that is of a reference type.
Currently, it only supports _summary and identifier, which are documented in the CapabilityStatement that is accessible at [base]/fhir/metadata (where base would be the host and port of your Blaze server, e.g. blaze:8080).
An example of an HTTP request URL query that currently isn't supported; this URL also points to a public FHIR server, so we can see the results:
http://hapi.fhir.org/baseR4/MedicationRequest?subject=1200
Here the MedicationRequest subject is of type Reference. This is important because it directly relates to the FHIR search reference parameters specification linked above. Meaning, the search params specification documents how to send a GET request to query that relationship.
Proposed Solution
Add a feature to allow Blaze to generate REST API endpoints dynamically based on FHIR data.
Implementation details
This can be broken into essentially two parts:
get the search parameters schema (from the FHIR definitions)
extend the Blaze search to use the search params
The FHIR schema containing the reference parameters specification is included in the FHIR definitions download, a set of files including search-parameters.json. This can be used to verify the reference parameters and build the lookup.
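For the first part, a rough sketch of building that lookup (Python used purely for illustration; the field names follow the standard FHIR Bundle/SearchParameter layout, and the file name comes from the definitions download mentioned above):
import json

def reference_params(bundle_path, resource_type):
    """Return the reference-type search parameters that apply to resource_type."""
    with open(bundle_path) as f:
        bundle = json.load(f)
    params = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if res.get("type") == "reference" and resource_type in res.get("base", []):
            params.append({"code": res.get("code"), "expression": res.get("expression")})
    return params

# e.g. reference_params("search-parameters.json", "MedicationRequest")
# should include the "subject" parameter used in the example URL above.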
Context
Blaze version: 0.6.2
related issues
#33: mentions enabling FHIR's "_revinclude" functionality, which gets us the desired ability to query resources by Patient, but I believe it will return more data than we want, because we are loading data on demand.
E.g. when the user loads the medications page, that's when we hope to request both the page and the data for their medications. This saves us having to load that data when they first visit the site and keeps response times down.
I've already read your proposal some days ago. Please give me more time to think about it in more detail.
The current approach does a scan of all resources of that type. In order to make use of indexes, everything, including values of type bytes, would need to be defined in a way that their sort order is correct. We are implementing (locally in a temporary fork) something that extends the current approach for now, but I would like to see something more index-based here. I would be happy to open other issues along those lines.
One more note: a Vase-like approach is great for bringing more observability to these generated endpoints. I am not saying use Vase, I am just saying the approach is nice.
The answer to this question is at the end of #50 .
Greetings again,
Here is how we're currently solving the search problem on our fork of Blaze.
This might give you some insight into how you want to handle it.
In blaze.edn we define a :blaze.interaction/search-type per resource
E.g., here is how we handle Condition; we pass it a list of codes matched with their expressions:
[:blaze.interaction/search-type :blaze.interaction.search/Condition]
{:database/conn #blaze/ref :blaze.datomic/conn
 :blaze.fhir.SearchParameter/config [{:blaze.fhir.SearchParameter/code "patient"
                                      :blaze.fhir.SearchParameter/expression [:Condition/subject :Patient/id]}
                                     {:blaze.fhir.SearchParameter/code "category"
                                      :blaze.fhir.SearchParameter/expression [:Condition/category :CodeableConcept/coding :Coding/code :code/code]}]}
This works with our modified version of the search_type namespace, where we have changed
resource-pred to take a config map that contains the code and expression from above.
These inform the search how to filter via the expression per code, which mostly happens in the match? function.
(defn- match?
  [tree path search]
  (let [k (first path)
        subtree (get tree k)]
    (cond
      (nil? k) (= search tree)
      (nil? subtree) false
      (set? subtree) (some (fn [st] (match? st (rest path) search))
                           subtree)
      :else (match? subtree (rest path) search))))

(defn- resource-pred [query-params config]
  (let [valid-query-params (select-keys query-params (map :blaze.fhir.SearchParameter/code config))
        select-path-by-code (fn [config code]
                              (->> config
                                   (filter #(= (:blaze.fhir.SearchParameter/code %) code))
                                   first
                                   :blaze.fhir.SearchParameter/expression))]
    (when (seq valid-query-params)
      (fn [resource] (every? (fn [[path search]] (match? resource path search))
                             (mapv (fn [[k v]] [(select-path-by-code config k) v]) valid-query-params))))))
The rest of the search-type handler is then opened up to pass these arguments. Note this code is mostly the same; we're just passing
the SearchParameter config along with the connection.
(defn- handler-intern [{:keys [database/conn blaze.fhir.SearchParameter/config]}]
  (fn [{{{:fhir.resource/keys [type]} :data} ::reitit/match
        :keys [params]
        ::reitit/keys [router]}]
    (-> (search router (d/db conn) type params config)
        (ring/response))))

(defn handler
  ""
  [config]
  (-> (handler-intern config)
      (wrap-params)
      (wrap-observe-request-duration "search-type")))

(defmethod ig/init-key :blaze.interaction/search-type
  [_ config]
  (log/info "Init FHIR search-type interaction handler")
  (handler config))
Hopefully this helps!
Ok, that looks good. You then put the :blaze.interaction.search/Condition key in :blaze/rest-api under the Condition type, right? Integrant then instantiates a :blaze.interaction/search-type with the corresponding :blaze.fhir.SearchParameter/config.
How do you plan to handle the different types of search parameters like token, string or reference?
The search is still not indexed. Is that ok for you in a first iteration?
You then put the :blaze.interaction.search/Condition key in :blaze/rest-api under the Condition type - right?
Correct, e.g.:
[:blaze.interaction/search-type :blaze.interaction.search/Condition]
{:database/conn #blaze/ref :blaze.datomic/conn
 :blaze.fhir.SearchParameter/config [{:blaze.fhir.SearchParameter/code "patient"
                                      :blaze.fhir.SearchParameter/expression [:Condition/subject :Patient/id]}
                                     {:blaze.fhir.SearchParameter/code "category"
                                      :blaze.fhir.SearchParameter/expression [:Condition/category :CodeableConcept/coding :Coding/code :code/code]}]}
How do you plan to handle the different types of search parameters like token, string or reference?
I don't have a strategy for the other types currently.
The search is still not indexed. Is that ok for you in a first iteration?
I'm not sure; we will probably proceed assuming it's OK and then benchmark if we notice things are too slow.
Would it be an idea to parse SearchParameter definitions directly instead of having to write them down by hand?
Yes, it would. We plan on generating chunks of our blaze.edn from that data, but some of the expressions are non-trivial to generate optimally. We would like to give ourselves hooks at least until we know we can read the SearchParameters we want. Also, this comes back to the possibility of different schema choices: if someone decided they wanted to use tuples to make a better index, we would like that flexibility somewhere.
| gharchive/issue | 2019-10-03T19:23:55 | 2025-04-01T06:39:23.254410 | {
"authors": [
"alexanderkiel",
"drewverlee",
"dspiteself"
],
"repo": "life-research/blaze",
"url": "https://github.com/life-research/blaze/issues/49",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
378874516 | Update spray-json to 1.3.5
Updates io.spray:spray-json from 1.3.4 to 1.3.5.
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention @scala-steward in the comments below.
Have a nice day!
Yes, that's expected; we didn't guarantee anything before, and now it should be alphabetic. You can use JsValue.sortedPrint, which should have consistent ordering across releases.
@jrudolph I take it from https://github.com/spray/spray-json/issues/155 that there's no way to preserve insertion order? nbd I guess, though I'd prefer it, the ordering I had before was more human-readable
| gharchive/pull-request | 2018-11-08T19:06:52 | 2025-04-01T06:39:25.259502 | {
"authors": [
"SethTisue",
"jrudolph",
"scala-steward"
],
"repo": "lightbend/scala-sculpt",
"url": "https://github.com/lightbend/scala-sculpt/pull/69",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
190506267 | Add Required(Bool) to Textfield
Adding the required option for addTextField(). This will keep the preferredAction button disabled until there is text input.
Use case: you have an Alert + text field to name an object before saving; the name cannot be blank and you don't want to dismiss/re-init an alert to prompt the user for input again.
.addTextField(&nameField, required: true )
Note: only one required field is supported.
OP: Please check variable declarations, I'm new to iOS, feedback appreciated!
@eriksargent
| gharchive/pull-request | 2016-11-19T16:47:20 | 2025-04-01T06:39:25.289938 | {
"authors": [
"nitrag"
],
"repo": "lightningkite/LKAlertController",
"url": "https://github.com/lightningkite/LKAlertController/pull/41",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1566801276 | multi: remove existing meta field in asset TLV to replace w/ meta hash optionally revealed in genesis minting proof
Related to https://github.com/lightninglabs/taro/issues/62.
Problem Statement
In this PR, we fix an issue with the meta field as defined today:
The field in practice may be very large (hundreds of KBs), and is serialized along with each asset TLV in a proof chain.
This means that the proof size is a function of the size of the meta field itself, meaning more data to lug around for clients.
Solution
Meta -> Meta Hash in TLV
Instead, we make the meta field itself a hash commitment. Only the meta hash field exists in every asset TLV. This makes the asset genesis (which right now is serialized along with addresses) nearly constant sized, other than the tag (which we can also make into a commitment instead). The meta hash is then the hash of the TLV serialization of the meta reveal (see below).
To start, the meta has a single type: opaque blob. This type is itself a TLV, so we can add more types in the future, and also other types contingent on the type itself. E.g., we can add a JSON blob type, or a MIME type that then relies on the existence of some other string to fully bind the structure of the meta bytes.
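To make the commitment idea concrete, here is a toy sketch of hashing a TLV-serialized meta reveal. The record layout below is invented purely for illustration and is not Taro's actual encoding.

import hashlib
import struct

def tlv_record(rec_type: int, value: bytes) -> bytes:
    # Toy TLV: 1-byte type, 8-byte big-endian length, then the value bytes.
    return struct.pack(">BQ", rec_type, len(value)) + value

def meta_hash(meta_type: int, meta_bytes: bytes) -> bytes:
    # Only this 32-byte digest would live in each asset TLV; the full reveal
    # (type + opaque blob) would be published once in the genesis proof.
    reveal = tlv_record(0, bytes([meta_type])) + tlv_record(1, meta_bytes)
    return hashlib.sha256(reveal).digest()

print(meta_hash(0, b"example asset metadata").hex())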
MetaReveal in Proof File Blob
We then add a new optional field to the proof format that allows the minter (or anyone that knows the pre-image) to reveal it within the first proof state transition. As this only exists in the first state, which is to be bootstrapped by users from a Base Universe, the proof sizes are no longer dependent on the asset meta itself (constant value at the start). We then add a rule that this can only exist for assets that have a genesis witness (minted assets).
New assets_meta table in the database
Along the way we modify the DB to add a new assets_meta table and reference that directly. This allows for larger meta blobs, as we no longer need to read the entire thing if we want to look at the genesis details for assets. Two query mechanisms based on the asset ID and the asset hash have also been added.
RPC + CLI Changes
On the RPC layer, we now expose the meta field to callers, and display the hash in most other locations. We also add an API that lets callers fetch the meta based on the hash or asset ID.
On the CLI, we propagate all the above updates, then also add a new option to read the meta from a file on disk, as it may be too large to pass as a command line string.
Follow up Work
Update the spec accordingly.
Spec PR here: https://github.com/Roasbeef/bips/pull/34
@jharveyb: review reminder
@roasbeef, remember to re-request review from reviewers when ready
| gharchive/pull-request | 2023-02-01T20:45:57 | 2025-04-01T06:39:25.301107 | {
"authors": [
"Roasbeef",
"lightninglabs-deploy"
],
"repo": "lightninglabs/taro",
"url": "https://github.com/lightninglabs/taro/pull/249",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
613218396 | Whats the meaning of IncorrectOrUnknownPaymentDetails and how is it different from UnknownPaymentHash?
I get this error occasionally when sending to a route:
{ payment_error:
'IncorrectOrUnknownPaymentDetails(amt=867594000 mSAT, height=629188)@1',
payment_preimage: <Buffer >,
payment_route: null,
payment_hash:
<Buffer 08 d9 5b 32 c1 ac 07 54 91 17 cb cb ba 6d 02 ad e0 28 3c 7c fa e1 98 6f a9 f5 5e a2 8a 24 0c c2> }
Feels like this is the same as UnknownPaymentHash but not quite sure
Failure due to incorrect details covers a few cases (from BOLT#4):
The node you are sending the payment to does not have an invoice with that hash
The amount being paid is wrong, or the htlc has the wrong expiry, or expires too soon so is rejected
Payment secret for MPP is wrong
These errors are all combined into a single error to prevent probing attacks.
What version of lnd are you running on?
0.10
Is it safe to assume that funds sent in this attempt are not spent?
Yes, it's a permanent failure so the payment status will say failed.
thanks!
| gharchive/issue | 2020-05-06T10:36:57 | 2025-04-01T06:39:25.304955 | {
"authors": [
"Overtorment",
"carlaKC"
],
"repo": "lightningnetwork/lnd",
"url": "https://github.com/lightningnetwork/lnd/issues/4247",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1550803578 | Update README.md
What is changed, added or deleted? (Required)
What is the reference link(s)?
@BirdboyBolu Thanks for your contribution!
| gharchive/pull-request | 2023-01-20T12:55:05 | 2025-04-01T06:39:25.401219 | {
"authors": [
"BirdboyBolu",
"lilin90"
],
"repo": "lilin90/awesome-technical-communication",
"url": "https://github.com/lilin90/awesome-technical-communication/pull/2",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
73415331 | Command: auto_complete
Reopening old Issue in relevant Repo
https://github.com/limetext/lime/issues/17
Ugh, old commit references limetext/lime/#62 not this one.
| gharchive/issue | 2015-05-05T19:54:31 | 2025-04-01T06:39:25.516491 | {
"authors": [
"haddel",
"quarnster"
],
"repo": "limetext/lime-backend",
"url": "https://github.com/limetext/lime-backend/issues/62",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1587829132 | Inject dask client into ensemble
Allows for a single dask client to be created for all unit tests. This speeds up execution and reduces warnings.
Locally, unit test execution goes from 35-45 seconds to 8-9 seconds
In GitHub CI, execution goes from 67-77 seconds to 16-19 seconds
Addresses nearby black formatting and pylint warnings
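A rough sketch of the pattern this PR describes, as it might look in a conftest.py; the Ensemble(client=...) signature is an assumption about the lsstseries API rather than a verified call.

import pytest
from dask.distributed import Client

@pytest.fixture(scope="session")
def dask_client():
    # One lightweight in-process client shared by the whole test session,
    # instead of each test spinning up (and tearing down) its own scheduler.
    client = Client(processes=False)
    yield client
    client.close()

@pytest.fixture
def ensemble(dask_client):
    from lsstseries import Ensemble
    # Hypothetical injection point; the real constructor argument may differ.
    return Ensemble(client=dask_client)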
Codecov Report
Merging #36 (9b1d62d) into main (f91b74b) will increase coverage by 0.54%.
The diff coverage is 97.22%.
:mega: This organization is not using Codecov’s GitHub App Integration. We recommend you install it so Codecov can continue to function properly for your repositories. Learn more
@@ Coverage Diff @@
## main #36 +/- ##
==========================================
+ Coverage 79.08% 79.63% +0.54%
==========================================
Files 6 6
Lines 373 383 +10
==========================================
+ Hits 295 305 +10
Misses 78 78
Impacted Files | Coverage Δ
src/lsstseries/analysis/__init__.py | 100.00% <ø> (ø)
src/lsstseries/analysis/stetsonj.py | 90.24% <ø> (ø)
src/lsstseries/timeseries.py | 87.67% <ø> (ø)
src/lsstseries/ensemble.py | 63.15% <97.05%> (ø)
src/lsstseries/__init__.py | 100.00% <100.00%> (ø)
src/lsstseries/analysis/structurefunction2.py | 97.84% <100.00%> (ø)
| gharchive/pull-request | 2023-02-16T14:56:13 | 2025-04-01T06:39:25.597287 | {
"authors": [
"codecov-commenter",
"delucchi-cmu"
],
"repo": "lincc-frameworks/lsstseries",
"url": "https://github.com/lincc-frameworks/lsstseries/pull/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1948231390 | Ask user for GitHub repo link
Currently, copier doesn't ask for and doesn't know the GitHub repo link. However, it would be useful to have and use in at least two places:
[ ] asv.conf.json needs it
[ ] pyproject.toml should use it for project.urls to populate "Source Code" URL for PyPi
For pre-existing projects we can get it from git with something like git remote get-url origin and with a bit of parsing for ssh-based URLs.
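A rough sketch of that parsing step, handling both https and ssh-style GitHub remotes; non-GitHub hosts and missing remotes are left out of this illustration.

import re
import subprocess

url = subprocess.check_output(
    ["git", "remote", "get-url", "origin"], text=True
).strip()

# Matches git@github.com:org/repo.git and https://github.com/org/repo(.git)
m = re.search(r"github\.com[:/](?P<org>[^/]+)/(?P<repo>[^/]+?)(?:\.git)?$", url)
if m:
    print(m.group("org"), m.group("repo"))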
We could also include some badges on the README if we have the github URL at the template hydration stage.
I'm thinking of instead just asking for the organization name, since that will help us generate those badge URLs that aren't just appending to the base URL but perturb it in weird ways (which I'd prefer to have in the template rather than try to remember every time)
e.g.
https://github.com/{{project_organization}}/{{project_name}}
https://{{project_organization}}.github.io/{{project_name}}/
https://{{project_name}}.readthedocs.io/
https://codecov.io/gh/{{project_organization}}/{{project_name}}
| gharchive/issue | 2023-10-17T20:48:25 | 2025-04-01T06:39:25.600276 | {
"authors": [
"delucchi-cmu",
"hombit"
],
"repo": "lincc-frameworks/python-project-template",
"url": "https://github.com/lincc-frameworks/python-project-template/issues/307",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
821232747 | [Feature request] Support thrift v0.14.0
Dear devs,
Are you planning to support Thrift v0.14.0?
ref: https://github.com/apache/thrift/releases/tag/v0.14.0
ps: With thrift v0.13.0 there exists a CVE https://nvd.nist.gov/vuln/detail/CVE-2020-13949
Sure, why not! (Maybe.) We will include support for Thrift 0.14.0 in the next release (1.6.0).
| gharchive/issue | 2021-03-03T15:46:07 | 2025-04-01T06:39:25.616354 | {
"authors": [
"ikhoon",
"selectAll"
],
"repo": "line/armeria",
"url": "https://github.com/line/armeria/issues/3370",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
319775535 | Add 'apiplural' directive for referring to a type in plural form
Motivation:
While documenting, it is often necessary to refer to a type in plural
form:
This class retrieves the list of :api:`Endpoint`\s from ZooKeeper.
Although \ does its job here, the rendered output isn't very pretty,
mainly because:
There's spacing between Endpoint and s. The spacing between them
should be moved after the plural suffix s.
There's gray background at Endpoint but not at s. Both Endpoint
and s should have the same background color.
Modifications:
Introduce a new directive apiplural which pluralizes a type
reference automatically
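For context, a minimal sketch of how such an inline pluralizing reference could be registered in a generic Sphinx extension. This is not the actual Armeria implementation, and it omits the cross-referencing to the API documentation that the real :api:/:apiplural: roles perform.

from docutils import nodes

def apiplural_role(name, rawtext, text, lineno, inliner, options=None, content=None):
    # Render the type name plus the trailing "s" inside one literal node, so
    # both share the same background and there is no stray spacing.
    node = nodes.literal(rawtext, text + "s")
    return [node], []

def setup(app):
    app.add_role("apiplural", apiplural_role)
    return {"parallel_read_safe": True}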
Result:
Aesthetics and convenience
Before:
After:
Thanks!
| gharchive/pull-request | 2018-05-03T03:10:54 | 2025-04-01T06:39:25.620788 | {
"authors": [
"trustin"
],
"repo": "line/armeria",
"url": "https://github.com/line/armeria/pull/1179",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
964540588 | Prohibit mirroring to internal repositories
Motivation:
We should prohibit mirroring to internal repositories which can cause a security incident.
Modification:
Raise an exception if the localRepo of a mirroring setting is one of meta or dogma, which are internal repositories.
Result:
You cannot set up mirroring to internal repositories anymore.
Thanks for reviewing. 😉
| gharchive/pull-request | 2021-08-10T02:40:33 | 2025-04-01T06:39:25.622786 | {
"authors": [
"minwoox"
],
"repo": "line/centraldogma",
"url": "https://github.com/line/centraldogma/pull/621",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1573969145 | Support regex for deprecated apis
Describe the bug
Please add/fix support for regex in deprecated APIs.
I attached an example in which the first regex works, but the second does not.
To Reproduce
automations:
  {% for item in deprecated %}
  catch_deprecated_components_{{ loop.index }}:
    if:
      - {{ source.diff.files | matchDiffLines(regex=item.regex) | some }}
    run:
      - action: add-label@v1
        args:
          label: 'deprecated-component'
      - action: request-changes@v1
        args:
          comment: |
            `{{ item.regex }}` is deprecated, use `EventType` from `constants.py/constants.ts[js]`
  {% endfor %}
deprecated:
  - regex: r/^[+].*Types.EVENT_REQUESTED/
  - regex: r/^[+].*eventRequested\/v1/
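For what it's worth, both patterns behave as intended under plain Python re; gitStream's matchDiffLines engine may differ, so this is only an approximation of the expected matching.

import re

patterns = [r"^[+].*Types.EVENT_REQUESTED", r"^[+].*eventRequested\/v1"]
sample_added_lines = [
    "+ event = Types.EVENT_REQUESTED",
    "+ url = '/eventRequested/v1'",
]
for pattern in patterns:
    print(pattern, [bool(re.search(pattern, line)) for line in sample_added_lines])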
Expected behavior
The current snippet should run as valid.
Screenshots
@MishaKav we can't seem to be able to reproduce this issue:
@MishaKav this was fixed; the issue occurred when using the string action in your rules, which triggered an unjustified CM syntax check error.
| gharchive/issue | 2023-02-07T09:20:25 | 2025-04-01T06:39:25.633664 | {
"authors": [
"MishaKav",
"vim-zz"
],
"repo": "linear-b/gitstream",
"url": "https://github.com/linear-b/gitstream/issues/17",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2029605301 | Add wildcard/regex for ignore_repositories
Is your feature request related to a problem? Please describe.
I would like a feature that allows wildcard and/or regex values for the ignore_repositories option in config: so that we don't need to continually update rules when repos are created.
Describe the solution you'd like
When managing hundreds of repos for an org, separating their names by a prefix/suffix for team/tool/department etc. makes organizing them easier. It would be great if we could dynamically assign rules based on a wildcard or regex syntax.
i.e., Team1-Repo1 and Repo1-Team2: we want to build a rule that would apply only to everything under Team1-, but not Team2.
It looks like wildcards are available for filenames, but not repositories: https://docs.gitstream.cm/cm-file/#configignore_repositories
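For illustration only (standard glob matching, not gitStream's own configuration syntax): how a Team1-* style pattern would select repositories by prefix.

from fnmatch import fnmatch

repos = ["Team1-Repo1", "Repo1-Team2", "Team1-Service"]
pattern = "Team1-*"
print([name for name in repos if fnmatch(name, pattern)])
# -> ['Team1-Repo1', 'Team1-Service']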
Describe alternatives you've considered
Presently, it looks like our only option is to add each named repo as a per-line string in a rule's config:ignore_repositories sub-section. While precise, this isn't elegant and is a bit of a chore.
Thank you,
Thanks for the recommendation @PFarrell90, this is a great idea! We'll provide updates here on the status of this improvement.
Hi
We have recently added this capability to gitStream; documentation is available here.
I am closing this issue; please re-open it in case you find it does not work as expected
| gharchive/issue | 2023-12-07T00:12:28 | 2025-04-01T06:39:25.638339 | {
"authors": [
"BenLloydPearson",
"PFarrell90",
"PavelLinearB"
],
"repo": "linear-b/gitstream",
"url": "https://github.com/linear-b/gitstream/issues/377",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
769126539 | Share ClickCounter type between web & gtk
This is a pretty marginal win, but does remove some duplicate logic.
Oooh, had completely forgotten about that. This implementation doesn't fully follow the design detailed there, but it could definitely be the basis of a unified design at some point in the future.
| gharchive/pull-request | 2020-12-16T17:25:51 | 2025-04-01T06:39:25.639630 | {
"authors": [
"cmyr"
],
"repo": "linebender/druid",
"url": "https://github.com/linebender/druid/pull/1468",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1751868649 | I encountered a problem in the second step
I use a single GPU for inference, but I got an error like this:
Traceback (most recent call last):
File "imaginaire/inference.py", line 95, in
main()
File "imaginaire/inference.py", line 86, in main
trainer.load_checkpoint(cfg, args.checkpoint)
File "/home/mio/work/project/Neural_Actor_Main_Code-master/imaginaire/imaginaire/trainers/base.py", line 259, in load_checkpoint
self.net_G.load_state_dict(checkpoint['net_G'])
File "/home/mio/anaconda3/envs/neuralactor/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1667, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for WrappedModel:
Missing key(s) in state_dict: "module.averaged_model.label_embedding.conv_first.layers.conv.weight_orig", "module.averaged_model.label_embedding.conv_first.layers.conv.weight_u", "module.averaged_model.label_embedding.down_0.layers.conv.weight_orig", "module.averaged_model.label_embedding.down_0.layers.conv.weight_u", "module.averaged_model.label_embedding.down_1.layers.conv.weight_orig", "module.averaged_model.label_embedding.down_1.layers.conv.weight_u", "module.averaged_model.label_embedding.down_2.layers.conv.weight_orig", "module.averaged_model.label_embedding.down_2.layers.conv.weight_u", "module.averaged_model.label_embedding.down_3.layers.conv.weight_orig", "module.averaged_model.label_embedding.down_3.layers.conv.weight_u", "module.averaged_model.label_embedding.down_4.layers.conv.weight_orig", "module.averaged_model.label_embedding.down_4.layers.conv.weight_u", "module.averaged_model.label_embedding.up_4.layers.conv.weight_orig", "module.averaged_model.label_embedding.up_4.layers.conv.weight_u", "module.averaged_model.label_embedding.up_3.layers.conv.weight_orig", "module.averaged_model.label_embedding.up_3.layers.conv.weight_u", "module.averaged_model.label_embedding.up_2.layers.conv.weight_orig", "module.averaged_model.label_embedding.up_2.layers.conv.weight_u", "module.averaged_model.label_embedding.up_1.layers.conv.weight_orig", "module.averaged_model.label_embedding.up_1.layers.conv.weight_u", "module.averaged_model.label_embedding.up_0.layers.conv.weight_orig", "module.averaged_model.label_embedding.up_0.layers.conv.weight_u", "module.averaged_model.up_7.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_7.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_7.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_7.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_6.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_6.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_6.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_6.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_5.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_5.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_5.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_5.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_4.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_4.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_4.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_4.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_4.conv_block_s.layers.conv.weight_orig", "module.averaged_model.up_4.conv_block_s.layers.conv.weight_u", "module.averaged_model.up_3.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_3.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_3.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_3.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_3.conv_block_s.layers.conv.weight_orig", "module.averaged_model.up_3.conv_block_s.layers.conv.weight_u", "module.averaged_model.up_2.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_2.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_2.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_2.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_2.conv_block_s.layers.conv.weight_orig", "module.averaged_model.up_2.conv_block_s.layers.conv.weight_u", 
"module.averaged_model.up_1.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_1.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_1.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_1.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_1.conv_block_s.layers.conv.weight_orig", "module.averaged_model.up_1.conv_block_s.layers.conv.weight_u", "module.averaged_model.up_0.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_0.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_0.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_0.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_0.conv_block_s.layers.conv.weight_orig", "module.averaged_model.up_0.conv_block_s.layers.conv.weight_u", "module.averaged_model.down_0.conv_block_0.layers.conv.weight_orig", "module.averaged_model.down_0.conv_block_0.layers.conv.weight_u", "module.averaged_model.down_0.conv_block_1.layers.conv.weight_orig", "module.averaged_model.down_0.conv_block_1.layers.conv.weight_u", "module.averaged_model.down_0.conv_block_s.layers.conv.weight_orig", "module.averaged_model.down_0.conv_block_s.layers.conv.weight_u", "module.averaged_model.down_1.conv_block_0.layers.conv.weight_orig", "module.averaged_model.down_1.conv_block_0.layers.conv.weight_u", "module.averaged_model.down_1.conv_block_1.layers.conv.weight_orig", "module.averaged_model.down_1.conv_block_1.layers.conv.weight_u", "module.averaged_model.down_1.conv_block_s.layers.conv.weight_orig", "module.averaged_model.down_1.conv_block_s.layers.conv.weight_u", "module.averaged_model.down_2.conv_block_0.layers.conv.weight_orig", "module.averaged_model.down_2.conv_block_0.layers.conv.weight_u", "module.averaged_model.down_2.conv_block_1.layers.conv.weight_orig", "module.averaged_model.down_2.conv_block_1.layers.conv.weight_u", "module.averaged_model.down_2.conv_block_s.layers.conv.weight_orig", "module.averaged_model.down_2.conv_block_s.layers.conv.weight_u", "module.averaged_model.down_3.conv_block_0.layers.conv.weight_orig", "module.averaged_model.down_3.conv_block_0.layers.conv.weight_u", "module.averaged_model.down_3.conv_block_1.layers.conv.weight_orig", "module.averaged_model.down_3.conv_block_1.layers.conv.weight_u", "module.averaged_model.down_3.conv_block_s.layers.conv.weight_orig", "module.averaged_model.down_3.conv_block_s.layers.conv.weight_u", "module.averaged_model.down_4.conv_block_0.layers.conv.weight_orig", "module.averaged_model.down_4.conv_block_0.layers.conv.weight_u", "module.averaged_model.down_4.conv_block_1.layers.conv.weight_orig", "module.averaged_model.down_4.conv_block_1.layers.conv.weight_u", "module.averaged_model.down_4.conv_block_s.layers.conv.weight_orig", "module.averaged_model.down_4.conv_block_s.layers.conv.weight_u", "module.averaged_model.res_0.conv_block_0.layers.conv.weight_orig", "module.averaged_model.res_0.conv_block_0.layers.conv.weight_u", "module.averaged_model.res_0.conv_block_1.layers.conv.weight_orig", "module.averaged_model.res_0.conv_block_1.layers.conv.weight_u", "module.averaged_model.res_1.conv_block_0.layers.conv.weight_orig", "module.averaged_model.res_1.conv_block_0.layers.conv.weight_u", "module.averaged_model.res_1.conv_block_1.layers.conv.weight_orig", "module.averaged_model.res_1.conv_block_1.layers.conv.weight_u", "module.averaged_model.res_2.conv_block_0.layers.conv.weight_orig", "module.averaged_model.res_2.conv_block_0.layers.conv.weight_u", "module.averaged_model.res_2.conv_block_1.layers.conv.weight_orig", 
"module.averaged_model.res_2.conv_block_1.layers.conv.weight_u", "module.averaged_model.res_3.conv_block_0.layers.conv.weight_orig", "module.averaged_model.res_3.conv_block_0.layers.conv.weight_u", "module.averaged_model.res_3.conv_block_1.layers.conv.weight_orig", "module.averaged_model.res_3.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_lbl.0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_lbl.0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_lbl.1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_lbl.1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_lbl.2.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_lbl.2.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_lbl.3.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_lbl.3.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_img.0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_img.0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_img.1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_img.1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_img.2.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_img.2.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_img.3.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_img.3.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.0.conv_block_0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.0.conv_block_0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.0.conv_block_1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.0.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.1.conv_block_0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.1.conv_block_0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.1.conv_block_1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.1.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.2.conv_block_0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.2.conv_block_0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.2.conv_block_1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.2.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.3.conv_block_0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.3.conv_block_0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.3.conv_block_1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.3.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.4.conv_block_0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.4.conv_block_0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.4.conv_block_1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.4.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.5.conv_block_0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.5.conv_block_0.layers.conv.weight_u", 
"module.averaged_model.flow_network_temp.res_flow.5.conv_block_1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.5.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.up_flow.1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.up_flow.1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.up_flow.3.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.up_flow.3.layers.conv.weight_u", "module.averaged_model.flow_network_temp.up_flow.5.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.up_flow.5.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.conv_first.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.conv_first.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.down_0.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.down_0.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.down_1.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.down_1.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.down_2.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.down_2.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.down_3.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.down_3.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.down_4.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.down_4.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.up_4.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.up_4.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.up_3.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.up_3.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.up_2.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.up_2.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.up_1.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.up_1.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.up_0.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.up_0.layers.conv.weight_u".
Is there a problem with the model? How can I solve it?
I have the same problem, how to solve it?
| gharchive/issue | 2023-06-12T04:30:43 | 2025-04-01T06:39:25.692764 | {
"authors": [
"mioyeah",
"zidonghua2018"
],
"repo": "lingjie0206/Neural_Actor_Main_Code",
"url": "https://github.com/lingjie0206/Neural_Actor_Main_Code/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
390524915 | Downloads fail on a 4G network and the onError callback only fires after about a minute; downloads work over Wi-Fi
Downloads fail on a 4G network and the onError callback only fires after about a minute; downloads work over Wi-Fi.
FileDownloader.setupOnApplicationOnCreate(context)
        .connectionCreator(new FileDownloadUrlConnection
                .Creator(new FileDownloadUrlConnection.Configuration()
                        .connectTimeout(5_000) // set connection timeout.
                        .readTimeout(8_000)    // set read timeout.
                        .proxy(Proxy.NO_PROXY) // set proxy
                ))
        .commit();
This looks like a plain network problem; the error that occurred is a SocketTimeoutException. The logs don't show anything abnormal either. Are you sure the 4G network was actually OK at the time?
If convenient, could you provide your download URL so I can try to reproduce the problem on my side? My email: vesperdone@gmail.com
| gharchive/issue | 2018-12-13T05:34:48 | 2025-04-01T06:39:25.695490 | {
"authors": [
"rantianhua",
"striveenFei"
],
"repo": "lingochamp/FileDownloader",
"url": "https://github.com/lingochamp/FileDownloader/issues/1155",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
260207055 | javax.net.ssl.SSLException: Read error: ssl=0x743c10e800: I/O error during system call, Connection reset by peer
Hi, when downloading a single video file on a Samsung phone connected to a certain Wi-Fi network, the download fails every time with the error callback: notify error -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 javax.net.ssl.SSLException: Read error: ssl=0x743c10e800: I/O error during system call, Connection reset by peer
25 17:15:12.101 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify pending -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93
09-25 17:15:12.103 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[1] new[6] 1
09-25 17:15:12.103 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify started -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93
09-25 17:15:12.103 4620-11895/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.ConnectTask: -2108620344 request header {Range=[bytes=0-]}
09-25 17:15:13.632 4620-11895/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FileDownloadUtils: etag find "69e304385b148360137e9c2e286aa027-2" for task(-2108620344)
09-25 17:15:13.644 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[6] new[2] 1
09-25 17:15:13.645 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify connected -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93
09-25 17:15:13.676 4620-11895/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.DownloadLaunchRunnable: fetch data with multiple connection(count: [3]) for task[-2108620344] totalLength[7391596]
09-25 17:15:13.677 4620-11895/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.DownloadLaunchRunnable: enable multiple connection: id[-2108620344] index[0] range[0, 2463864) current offset(0)
09-25 17:15:13.678 4620-11895/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.DownloadLaunchRunnable: enable multiple connection: id[-2108620344] index[1] range[2463865, 4927729) current offset(2463865)
09-25 17:15:13.679 4620-11895/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.DownloadLaunchRunnable: enable multiple connection: id[-2108620344] index[2] range[4927730, 0) current offset(4927730)
09-25 17:15:13.685 4620-11923/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.ConnectTask: -2108620344 request header {Range=[bytes=0-2463864]}
09-25 17:15:13.686 4620-11924/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.ConnectTask: -2108620344 request header {Range=[bytes=2463865-4927729]}
09-25 17:15:13.694 4620-11925/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.ConnectTask: -2108620344 request header {Range=[bytes=4927730-]}
09-25 17:15:14.960 4620-11924/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.DownloadRunnable: the connection[1] for -2108620344, is connected range[2463865, 4927729) current offset[2463865] with code[206]
09-25 17:15:14.962 4620-11924/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: start fetch(1): range [2463865, 4927729), seek to[2463865]
09-25 17:15:14.963 4620-11925/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.DownloadRunnable: the connection[2] for -2108620344, is connected range[4927730, 0) current offset[4927730] with code[206]
09-25 17:15:14.966 4620-11923/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.DownloadRunnable: the connection[0] for -2108620344, is connected range[0, 2463864) current offset[0] with code[206]
09-25 17:15:14.967 4620-11925/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: start fetch(2): range [4927730, 0), seek to[4927730]
09-25 17:15:14.969 4620-11923/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: start fetch(0): range [0, 2463864), seek to[0]
09-25 17:15:14.994 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[2] new[3] 1
09-25 17:15:14.995 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 4096 7391596
09-25 17:15:15.000 4620-11924/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[1] offset[2467961], consume[9]
09-25 17:15:15.036 4620-11925/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[2] offset[4931826], consume[17]
09-25 17:15:15.213 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:15.214 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 84735 7391596
09-25 17:15:15.218 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:15.221 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 85783 7391596
09-25 17:15:16.699 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:16.700 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 163607 7391596
09-25 17:15:16.701 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:16.703 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 171775 7391596
09-25 17:15:16.706 4620-11923/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[0] offset[68169], consume[9]
09-25 17:15:18.070 4620-11924/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[1] offset[2537172], consume[8]
09-25 17:15:18.100 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:18.100 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 245503 7391596
09-25 17:15:18.244 4620-11925/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[2] offset[5001037], consume[17]
09-25 17:15:18.469 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:18.472 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 333591 7391596
09-25 17:15:18.919 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:18.921 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 414439 7391596
09-25 17:15:18.924 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:18.926 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 414439 7391596
09-25 17:15:19.248 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:19.249 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 493311 7391596
09-25 17:15:19.251 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:19.252 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 501479 7391596
09-25 17:15:19.788 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:19.788 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 576255 7391596
09-25 17:15:20.018 4620-11923/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[0] offset[133729], consume[21]
09-25 17:15:20.096 4620-11924/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[1] offset[2724564], consume[20]
09-25 17:15:20.112 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:20.116 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 659199 7391596
09-25 17:15:20.120 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:20.124 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 660247 7391596
09-25 17:15:21.489 4620-11925/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[2] offset[5192525], consume[16]
09-25 17:15:21.520 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:21.521 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 737023 7391596
09-25 17:15:22.101 4620-11923/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[0] offset[203361], consume[8]
09-25 17:15:22.374 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:22.376 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 815871 7391596
09-25 17:15:23.165 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:23.168 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 894743 7391596
09-25 17:15:23.171 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:23.174 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 902911 7391596
09-25 17:15:23.290 4620-11924/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[1] offset[2793148], consume[11]
09-25 17:15:23.423 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:23.424 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 977687 7391596
09-25 17:15:23.642 4620-11925/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[2] offset[5296973], consume[18]
09-25 17:15:23.650 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:23.653 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1058535 7391596
09-25 17:15:24.221 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:24.222 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1138455 7391596
09-25 17:15:24.469 4620-11923/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[0] offset[334433], consume[7]
09-25 17:15:24.473 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:24.474 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1216255 7391596
09-25 17:15:24.785 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:24.790 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1291031 7391596
09-25 17:15:25.060 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:25.061 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1372927 7391596
09-25 17:15:25.371 4620-11924/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[1] offset[2954964], consume[29]
09-25 17:15:25.424 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:25.427 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1447703 7391596
09-25 17:15:25.684 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:25.685 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1529599 7391596
09-25 17:15:25.686 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:25.687 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1529599 7391596
09-25 17:15:25.840 4620-11925/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[2] offset[5471053], consume[17]
09-25 17:15:26.040 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:26.043 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1604375 7391596
09-25 17:15:26.396 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:26.397 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1678103 7391596
09-25 17:15:27.796 4620-11923/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[0] offset[525921], consume[20]
09-25 17:15:27.818 4620-11924/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[1] offset[3059412], consume[19]
09-25 17:15:27.834 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:27.837 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1765143 7391596
09-25 17:15:27.840 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:27.843 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1773311 7391596
09-25 17:15:27.893 4620-11925/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[2] offset[5540685], consume[21]
09-25 17:15:30.237 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:30.240 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1848087 7391596
09-25 17:15:30.453 4620-11923/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[0] offset[595553], consume[26]
09-25 17:15:30.462 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:30.466 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 1925911 7391596
09-25 17:15:30.672 4620-11925/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[2] offset[5610317], consume[22]
09-25 17:15:30.827 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:30.829 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 2003711 7391596
09-25 17:15:30.835 4620-11924/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[1] offset[3129044], consume[23]
09-25 17:15:31.196 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:31.199 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 2081535 7391596
09-25 17:15:31.202 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:31.205 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 2085607 7391596
09-25 17:15:31.935 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:31.937 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 2164479 7391596
09-25 17:15:31.940 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:31.942 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 2168575 7391596
09-25 17:15:32.396 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:32.397 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 2252567 7391596
09-25 17:15:32.399 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:32.400 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 2260735 7391596
09-25 17:15:32.480 4620-11923/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[0] offset[748129], consume[19]
09-25 17:15:32.835 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[3] 1
09-25 17:15:32.838 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify progress -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 2335511 7391596
09-25 17:15:32.851 4620-11925/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[2] offset[5749581], consume[23]
09-25 17:15:32.940 4620-11923/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[0] offset[765537], consume[7]
09-25 17:15:32.961 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.MessageSnapshotGate: ~~~callback -2108620344 old[3] new[-1] 1
09-25 17:15:32.963 3088-11896/com.ubtechinc.alpha2_alexa V/FileDownloader.FileDownloadList: remove -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 left -1 0
09-25 17:15:32.964 3088-11896/com.ubtechinc.alpha2_alexa D/FileDownloader.FileDownloadMessenger: notify error -2108620344@com.liulishuo.filedownloader.DownloadTask@cc33e93 javax.net.ssl.SSLException: Read error: ssl=0x743c10e800: I/O error during system call, Connection reset by peer
09-25 17:15:32.965 3088-11896/com.ubtechinc.alpha2_alexa V/FileDownloader.DownloadTaskHunter: filedownloader:lifecycle:over com.liulishuo.filedownloader.DownloadTaskHunter@29b2ec9 by -1
09-25 17:15:32.970 4620-11924/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[1] offset[3216084], consume[9]
09-25 17:15:32.989 4620-11925/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[2] offset[5762893], consume[7]
09-25 17:15:32.990 4620-11925/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.DownloadStatusCallback: require callback -1 but the host thread of the flow has already dead, what is occurred because of there are several reason can final this flow on different thread.
09-25 17:15:33.018 4620-11924/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.FetchDataTask: require sync id[-2108620344] index[1] offset[3216084], consume[11]
09-25 17:15:33.020 4620-11895/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.DownloadLaunchRunnable: finish sub-task for [-2108620344] TRUE FALSE
09-25 17:15:33.021 4620-11895/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.DownloadLaunchRunnable: finish sub-task for [-2108620344] TRUE FALSE
09-25 17:15:33.022 4620-11895/com.ubtechinc.alpha2_alexa:filedownloader D/FileDownloader.DownloadLaunchRunnable: finish sub-task for [-2108620344] TRUE FALSE
I just ran into this problem as well. The video URL I am downloading is:
https://scontent-sin6-1.cdninstagram.com/t50.2886-16/20639948_1877553475898837_2968671662599307264_n.mp4
This should not be a FileDownloader problem; it is a network connection issue. See here: https://github.com/lingochamp/FileDownloader/issues/473#issuecomment-278847797
| gharchive/issue | 2017-09-25T09:21:23 | 2025-04-01T06:39:25.813441 | {
"authors": [
"Jacksgong",
"didikeeLunaon",
"leoncoding"
],
"repo": "lingochamp/FileDownloader",
"url": "https://github.com/lingochamp/FileDownloader/issues/773",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
62165757 | restructure composer requirements
Not all requirements are needed at runtime; some are only needed for testing.
Thanks! :+1:
| gharchive/pull-request | 2015-03-16T18:07:05 | 2025-04-01T06:39:25.827134 | {
"authors": [
"dennisdegreef",
"jildertmiedema"
],
"repo": "link0/profiler",
"url": "https://github.com/link0/profiler/pull/70",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
406491163 | Unable to install
Hi everyone, I can't get the front end to run. Can anyone help?
@jvoltz, could you tell us which version of Node/npm you are using?
Hello. I was testing on Ubuntu with Node version 10.15.
@jvoltz, could you try running it again using Node version 6.11.5?
I'm having a similar problem. I installed Node 6.11.5 and everything worked... The problem is that with version 6.11.5 the issue shows up in 'getokr-api' instead: it doesn't recognize the 'async function' token (which, from what I've seen, is only available from Node 7.6 upwards).
I installed 7.6 and the problem comes back in getokr-web.
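For reference, 'async function' is the ES2017 async/await syntax, which Node.js only parses natively from 7.6 onwards; on 6.x it fails with "SyntaxError: Unexpected token function". Below is a minimal TypeScript sketch of the difference; the function names and data are hypothetical, not taken from the getokr code.

// Requires Node >= 7.6: the runtime parses async/await natively.
// On Node 6.x this declaration itself is a SyntaxError.
async function loadOkrs(): Promise<string[]> {
  const okrs = await Promise.resolve(['sample-okr']); // placeholder data
  return okrs;
}

// Works on Node 6.x as well: plain Promise chaining, no async/await syntax.
function loadOkrsLegacy(): Promise<string[]> {
  return Promise.resolve(['sample-okr']);
}

So the code either needs a runtime new enough to parse the syntax, or it has to be written (or transpiled) without async/await, which matches the version mismatch described above.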
@mtkkk Thanks for reporting the problem. I pushed a change to both the API and the WEB to fix it.
After fixing the problem, I ran the tests using Node 8.11.1 and npm 3.10.10.
Could you try again? I look forward to your feedback.
Problem solved! Thanks for your attention :)
| gharchive/issue | 2019-02-04T19:51:08 | 2025-04-01T06:39:25.831008 | {
"authors": [
"icarocleto",
"jvoltz",
"mtkkk"
],
"repo": "linkapi/getokr-web",
"url": "https://github.com/linkapi/getokr-web/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2225732670 | C++ module example using VS 2022/CMake
Here is an example of how to provide a C++ module alongside the header file.
Only tested with Visual Studio 2022 alongside CMake. I currently don't have access to clang 17.
I also had to explicitly add the required standard library includes to the test files.
If nothing else, it is a way to learn how to do this once modules are more mature.
I see.
I'd rather not have CMake though for this very small library. I'll look up what the CMake directives are actually doing and check if I can do this with gcc/mingw64.
Thanks, I'll keep it open for reference, and probably add a milestone in the future. I consider closed PRs as either rejected (aka: wontfix) or merged 🙂
| gharchive/pull-request | 2024-04-04T14:50:50 | 2025-04-01T06:39:25.833731 | {
"authors": [
"linkdd",
"pjmlp"
],
"repo": "linkdd/logfmtxx",
"url": "https://github.com/linkdd/logfmtxx/pull/4",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
} |